Technology and Disability communicates knowledge about the field of assistive technology devices and services, within the context of the lives of end users - persons with disabilities and their family members. While the topics are technical in nature, the articles are written for broad comprehension regardless of the reader's education or training.
Technology and Disability's contents cover research and development efforts, education and training programs, service and policy activities and consumer experiences.
- The term Technology refers to assistive devices and services.
- The term Disability refers to both permanent and temporary functional limitations experienced by people of any age within any circumstance.
- The term and underscores the editorial commitment to seek articles that treat technology linked to disability as a means to support or compensate the person in daily functioning.
The Editor also attempts to link the themes of technology and disability through the selection of appropriate basic and applied research papers, review articles, case studies, programme descriptions, letters to the Editor and commentaries. Suggestions for thematic issues and proposed manuscripts are welcomed.
Abstract: In the first part of this paper the principles and the state of the art of speech processing, and especially speech synthesis and recognition, are explained. Then, a speech-based human-computer dialogue system is discussed. The next section gives a brief overview of the available recommendations, guidelines and standards that are directly related to the application of speech technologies. The last part of the paper is dedicated to applications of speech technology for the disabled. The main focus is on blind and partially sighted people and those with hearing loss. Concerning the blind, many multilingual text-to-speech synthesis systems exist, including some polyglot ones, that can convert printed and electronic documents to audio, but further research is needed before structured text, tables and above all graphics can be efficiently transformed into speech. For deaf persons, there are still big challenges in the development of adequate communication aids. Although a high-speed transformation of speech into text is possible with state-of-the-art speech recognizers (and thus a quasi real-time information transfer from a hearing to a deaf person), the automatic gesture recognition needed for the reverse transfer is still at the research stage. Other applications discussed in this paper include speech-based cursor control for those with physical disabilities, transformation of dysarthric speech into intelligible speech, voice output communication aids for the language impaired and those without speech, and accessibility options for public terminals and Automated Teller Machines through the incorporation of speech technologies. The paper concludes with an outlook and recommendations for research areas that need further study.
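The research gap the abstract names - turning structured text and tables into efficient speech - can be illustrated with a minimal sketch. The function below is a hypothetical, trivially small example of the linearization step a TTS front end must perform before synthesis; it is not taken from any system described in the paper.

```python
# Toy sketch (hypothetical, not from the paper): linearizing a small
# table into speakable text, the kind of preprocessing a screen reader
# or TTS front end must do before synthesis. Announcing row and column
# labels is one common strategy; real systems handle far richer structure.

def table_to_speech_text(headers, rows):
    """Turn a small table into a linear, speakable description."""
    parts = [f"Table with {len(rows)} rows and {len(headers)} columns."]
    for i, row in enumerate(rows, start=1):
        cells = ", ".join(f"{h}: {c}" for h, c in zip(headers, row))
        parts.append(f"Row {i}. {cells}.")
    return " ".join(parts)

text = table_to_speech_text(
    ["Product", "Price"],
    [["Braille display", "2400 EUR"], ["Screen reader", "900 EUR"]],
)
print(text)
```

The resulting string can be handed to any speech synthesizer; the hard open problems the abstract points to (nested structure, graphics) begin exactly where such simple linearization breaks down.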
Abstract: This paper explores how multimodal interfaces make it easier for people with sensory impairments to interact with mobile terminals such as PDAs and 3rd generation mobile phones (3G/UMTS). We have developed a flexible speech-centric composite multimodal interface to a map-based information service on a mobile terminal. This user interface has proven useful for different types of disabilities, from persons with muscular atrophy combined with some minor speaking problems to a person with severe dyslexia and a person with aphasia. Some of the test persons did not manage to use the ordinary public information service, neither on the web (text only) nor by calling a human operator (speech only). But they fairly easily used our multimodal interface, pointing at the map on the touch screen while uttering short commands or phrases. Although this is a limited qualitative evaluation, it indicates that the development of speech-centric multimodal interfaces to information services is a step in the right direction towards the goal of design for all.
Keywords: Speech centric multimodality, mobile interface design, design for all, disabled users
Abstract: Speech and sounds are important sources of information in our everyday lives for communication with our environment, be it interacting with fellow humans or directing our attention to technical devices with sound signals. For hearing impaired persons this acoustic information must be supplemented or even replaced by cues using other senses. We believe that the most natural modality to use is the visual, since speech is fundamentally audiovisual and these two modalities are complementary. We are hence exploring how different visualization methods for speech and audio signals may support hearing impaired persons. The goal in this line of research is to allow the growing number of hearing impaired persons, children as well as the middle-aged and elderly, equal participation in communication. A number of visualization techniques are proposed and exemplified with applications for hearing impaired persons.
Keywords: Sound classification, speech processing, hearing impairment, communication support, talking heads, speech reading, speech training
Abstract: In this paper, we describe some of the current issues in computational sign language processing. Despite the seeming similarities between computational spoken language and sign language processing, signed languages have intrinsic properties that pose some very difficult problems. These include a high level of simultaneous actions, the intersection between signs and gestures, and the complexity of modeling grammatical processes. Additional problems are posed by the difficulties that computers face in extracting reliable information on the hands and the face from video images. So far, no single research group or company has managed to tackle all the hard problems and produced a real working system for analysis and recognition. We present a summary of our research into sign language recognition and how it interacts with sign language linguistics. We propose solutions to some of the aforementioned problems, and also discuss what problems are still unsolved. In addition, we summarize the current state of the art in our sign language recognition and facial expression analysis frameworks.
Abstract: We present an approach to automatically recognize sign language and translate it into a spoken language. A system to address these tasks is created based on state-of-the-art techniques from statistical machine translation, speech recognition, and image processing research. Such a system is necessary for communication between deaf and hearing people. Communication is otherwise nearly impossible due to missing sign language skills on the hearing side and low reading and writing skills on the deaf side. As opposed to most current approaches, which focus on the recognition of isolated signs only, we present a system that recognizes complete sentences in sign language. Similar to speech recognition, we have to deal with temporal sequences. Instead of the acoustic signal in speech recognition, we process a video signal as input. Therefore, we use a speech recognition system to obtain a textual representation of the signed sentences. This intermediate representation is then fed into a statistical machine translation system to create a translation into a spoken language. To achieve good results, some particularities of sign languages are considered in both systems. We use a publicly available corpus to show the performance of the proposed system and report very promising results.
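The two-stage architecture in this abstract - a recognizer that emits an intermediate textual representation (sign glosses), followed by a translation step into a spoken language - can be sketched as follows. Everything here is a hypothetical stand-in: the recognizer is stubbed, and the tiny gloss dictionary replaces the trained statistical translation model the paper actually uses.

```python
# Toy sketch of the recognize-then-translate pipeline described above.
# The recognizer and the "phrase table" are hypothetical stand-ins;
# the real system decodes video features statistically, much like an
# acoustic model in speech recognition.

def recognize_signs(video_frames):
    """Stand-in for the video-based sign recognizer: returns a
    sequence of sign glosses (the intermediate textual form)."""
    # A real recognizer extracts hand/face features per frame and
    # decodes them into a gloss sequence.
    return ["TOMORROW", "WEATHER", "RAIN"]

GLOSS_TO_ENGLISH = {  # hypothetical, trivially small translation table
    ("TOMORROW",): "tomorrow",
    ("WEATHER", "RAIN"): "it will rain",
}

def translate(glosses):
    """Greedy longest-match translation of glosses into words."""
    out, i = [], 0
    while i < len(glosses):
        for span in range(len(glosses) - i, 0, -1):
            key = tuple(glosses[i:i + span])
            if key in GLOSS_TO_ENGLISH:
                out.append(GLOSS_TO_ENGLISH[key])
                i += span
                break
        else:
            out.append(glosses[i].lower())  # pass through unknown gloss
            i += 1
    return " ".join(out)

sentence = translate(recognize_signs(video_frames=[]))
print(sentence)  # tomorrow it will rain
```

The point of the intermediate gloss representation is exactly what the abstract describes: it lets a mature text-to-text translation component do the linguistic work, decoupled from the video recognition problem.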
Abstract: This work reviews speech technologies and the applications that provide or augment access to printed or electronic information, daily or social activities, and private or public facilities for blind or low-vision persons. Speech technologies are currently considered essential not only for providing accessibility for people who are visually impaired but also for general-purpose interfacing. Speech-enabled devices, reading machines, accessible computers, software applications, World Wide Web (WWW) content and structured environments constitute the main areas addressed throughout this paper, with reference to the background technologies, architectures, formats, and on-going research activities and projects. In the current state of the art of the accessibility field, the speech communication channel is considered by the authors to be one of the most important modalities for benefiting blind and low-vision persons.
Abstract: This paper describes the principle of Ambient Intelligence (AmI) and its impact on information and communication technology. AmI provides a vision of the Information Society where the emphasis is on greater user friendliness, more efficient services support, user empowerment, and support for human interactions. In an AmI environment people are surrounded by invisible, intelligent, and intuitive interfaces which act and react in the manner of an attentive butler. AmI primarily aims at the private citizen and includes elderly persons as well as persons with disabilities. Three dimensions characterise AmI and are briefly described: the technological, the social-ethical and the political dimension. In order to illustrate the impact of AmI, the European ISTAG (Information Society Technologies Advisory Group) and other research groups have created scenarios which describe the everyday situations of people in the near future, surrounded by AmI. Some of these scenarios are presented and discussed. Then communication aspects of AmI are described which clearly show the dominant role of speech and the benefits of a multimodal presentation. Finally, examples of local and mobile applications of AmI are given and commented on.
Keywords: Ambient Intelligence, speech processing, communication