Universal Design for Interactive Systems
Universal design is the process of designing products so that they can be used by as many people as possible, in as many situations as possible. This can be achieved by building redundancy into systems or by making them compatible with assistive devices.
Universal design principles
A group from North Carolina State University in the United States suggested seven universal design principles in the late 1990s. These were created to cover all aspects of design and may be used to create interactive systems as well. These guidelines provide us with a framework for creating universal designs.
The first principle is equitable use: the design is useful and appealing to people with a range of abilities, and no user is stigmatized or excluded. Wherever possible, everyone should have identical access; where that is not possible, equivalent use should be supported. Provisions for security, privacy, and safety should be available to all users.
The second principle is flexibility in use: the design accommodates a wide range of abilities and preferences by offering a choice of methods of use and by adapting to the user's pace, precision, and preferences.
The third principle is simple and intuitive use: the system should be easy to use regardless of the user's knowledge, experience, language, or level of concentration. The design must not be unnecessarily complex, should match the user's expectations, and should accommodate different language and literacy skills. The most important elements should be the most accessible, and the system should prompt and give feedback as far as possible.
The fourth principle is perceptible information: the design should convey information effectively, regardless of the user's abilities or the surrounding conditions. Redundancy of presentation is important: information should be delivered in different forms (e.g. graphic, verbal, text, touch). Essential information should be emphasized and differentiated from its surroundings, and people with a range of sensory abilities should be able to access it through a variety of devices and techniques.
The fifth principle is tolerance for error: minimizing the impact of and damage caused by mistakes or unintended actions. Potentially hazardous situations should be removed or made hard to reach, and warnings should guard against potential dangers. From the user's perspective, systems should fail safe, and users should be supported in tasks that require concentration.
Low physical effort is the sixth principle: systems should be designed to be comfortable to use, minimizing physical effort and fatigue. The physical design should allow the user to maintain a natural posture with reasonable operating effort, and repetitive or sustained actions should be avoided.
The seventh principle is size and space for approach and use: the system should be positioned so that it can be reached and used by any user, regardless of body size, posture, or mobility. Important elements should be in the line of sight of both seated and standing users, and all physical components should be comfortably reachable by either. Variations in hand size should be accommodated, and enough room should be provided for the use of assistive devices.
These seven concepts provide an excellent foundation for thinking about universal design.
Multi-modal interaction refers to interaction in which the user is offered several modes of engaging with the system. Multi-modal systems are those that process two or more user input modes, such as speech, touch, and gesture, in a coordinated manner with multimedia system output.
Sound in the interface
The importance of sound in usability cannot be overstated. The inclusion of aural confirmation of modes, in the form of variations in key clicks, has been shown to minimize mistakes in experiments. Sound can transmit ephemeral information while taking up no screen real estate, making it a viable option for mobile applications.
Speech in the interface
Language is both rich and complex. As children, we learn language naturally 'by example', by listening to and imitating the speech of those around us. Because this process seems so effortless, we tend to overlook its complexity; it is only when we try to learn a new language later in life, or to make explicit the rules of the one we already speak, that the difficulty of language understanding becomes apparent. This complexity makes speech recognition and synthesis by computers extremely challenging.
Speech recognition: Many attempts have been made to build speech recognition systems, and although commercial systems are now widely and cheaply available, their success remains limited to single-user systems that require substantial training.
Speech recognition offers a new channel of communication that can supplement or replace existing ones. Speech may prove to be the best input medium when the user's hands are already occupied, as on a factory floor. Because speech input does not require a bulky keyboard, such systems may also have a place in lightweight mobile settings. As we will see later, it also provides an alternative mode of input for people with visual, physical, or cognitive impairments. Single-user, limited-vocabulary systems can perform well, but recognition rates for general users and unconstrained language remain poor.
Speech synthesis is the natural complement of speech recognition. Many users, especially those who do not consider themselves computer literate, find the prospect of talking naturally with a computer attractive, since speech is their normal, everyday medium of communication. However, speech synthesis faces as many problems as speech recognition. The most difficult is that people are highly sensitive to variations and intonation in speech, and are therefore intolerant of imperfections in synthesized speech. We are so used to hearing natural speech that we find it hard to adjust to the monotonic, non-prosodic tones that synthesized speech can produce.
Uninterpreted speech: Speech that has not been interpreted by a computer can still be valuable at the interface. Fixed pre-recorded messages can be used to supplement or replace visual information. Although recording quality is sometimes poor, such messages have natural human prosody and pronunciation. Segments of speech can also be concatenated to form messages, as in the announcements at many airports and railway stations.
Non-speech sounds offer several advantages. Because speech is serial, we must hear most of a sentence before we understand what is being said, whereas non-speech sounds can often be assimilated much more quickly. Speech is also language dependent: a speech-based system must be translated for use by a different language group, while the meanings of non-speech sounds can be learned regardless of language. Speech demands the user's attention; non-speech sound can exploit the phenomenon of auditory adaptation, in which background sounds are ignored unless they change or stop. On the other hand, non-speech sounds must be learned, whereas the meaning of a spoken message is evident (at least to a user who knows the language used).
Synthetic sounds are an alternative to natural sounds. Earcons use structured combinations of notes, called motifs, to represent actions and objects. Motifs vary in rhythm, pitch, timbre, scale, and volume. Earcons can be combined in two ways. Compound earcons combine different motifs to represent a single action, for example combining the motifs for 'create' and 'file'. Family earcons group compound earcons of similar types: operating system errors and syntax errors, for instance, would both belong to the 'error' family. In this way earcons can be constructed hierarchically, for example to represent menus.
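The structure of earcons can be sketched as a simple data model: motifs are short note sequences, compound earcons concatenate motifs, and family members share rhythm while varying pitch. This is a minimal illustrative sketch; the motif names and note values below are invented, not taken from any real earcon set.

```python
# A motif is a short, structured note sequence: a list of
# (pitch_in_hz, duration_in_s) pairs. All values are illustrative.
MOTIFS = {
    "create": [(440.0, 0.1), (554.4, 0.1)],    # rising pair of notes
    "file":   [(660.0, 0.2)],                  # single longer note
    "error":  [(220.0, 0.05), (220.0, 0.05)],  # low double beep
}

def compound_earcon(*actions):
    """Concatenate motifs to represent a composite action, e.g. 'create file'."""
    notes = []
    for action in actions:
        notes.extend(MOTIFS[action])
    return notes

def family_earcon(family, pitch_shift):
    """Derive a family member: shift pitch but keep the rhythm that
    identifies the family (e.g. all 'error' sounds share a rhythm)."""
    return [(pitch * pitch_shift, duration) for pitch, duration in MOTIFS[family]]

create_file = compound_earcon("create", "file")  # three notes: rise, then tone
syntax_error = family_earcon("error", 1.5)       # same rhythm, higher pitch
```

The key design point is that meaning comes from structure (which motifs, in what order, with what rhythm) rather than from any resemblance to natural sounds, which is why earcons must be learned.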
Touch in the interface
Touch is the only sense that can be used both to send and to receive information. Although it is not yet widely used in computer interaction, there is a great deal of research in this area, and commercial applications are becoming available.
Haptic interaction refers to the use of touch in an interface. Haptics is a broad term covering touch, but it can be divided into two areas: cutaneous perception, which concerns tactile sensation through the skin, and kinesthetics, which concerns the perception of movement and position. Both are useful in interaction, but they require different technologies.
Handwriting appears to offer both textual and graphical input using the same tools, which makes the prospect of interpreting handwritten input very attractive. As an input medium, however, handwriting has several drawbacks.
Individual handwriting varies enormously; even a single person's handwriting changes from day to day and evolves over the years.
Recognizing cursive script consistently is so difficult that no general-purpose systems for cursive script recognition are currently in use.
Performance improves when letters are written separately with a small gap between them, but systems must still be trained to the characteristics of individual users, and success with untrained users remains limited.
In multi-modal systems, gesture is a component of human–computer interaction that has received a great deal of attention. The ability to control the computer with particular hand gestures would be useful in many situations where typing is impossible or where the other senses are fully occupied. If signing could be 'translated' into speech, or vice versa, it could help people with hearing loss to communicate. However, like speech, gestures are user dependent and subject to variation and co-articulation, and the technology for capturing gestures is expensive.
Designing for diversity
Designing for users with disabilities
Employers and manufacturers of computing equipment have not only a moral obligation but often also a legal obligation to provide accessible products. In many countries, legislation now requires that workplaces be designed to be accessible, or at least adaptable, to all.
Visual impairment is the sensory impairment that has received the most attention from researchers, perhaps because it is potentially one of the most debilitating for interaction. The increasing use of graphical interfaces reduces the options available to visually impaired users. In text-based interaction, screen readers using synthesized speech or braille output devices provided complete access to computers: input was based on touch typing, and these mechanisms provided the output. Today's standard interface, however, is graphical.
There are two key approaches to extending access: the use of sound and the use of touch. A number of systems use sound to provide access to graphical interfaces for people with visual impairment.
In contrast to a visual impairment, which has an immediate effect on interacting with a graphical interface, a hearing impairment may appear to have little influence on the use of an interface: after all, the visual channel, not the auditory one, is the one most used. To some extent this is true, and computer technology can actually help people with hearing loss to communicate more effectively. Email and instant messaging are great levellers, since they can be used equally by hearing and deaf people.
Gesture recognition has also been proposed as a way to translate signing to voice or writing, which would let non-signers communicate more effectively.
Users with physical disabilities vary in the amount of control and movement that they have over their hands, but many find the precision required in mouse control difficult.
Speech input and output is an option for those without speech difficulties.
An alternative is an eye-gaze system, which tracks eye movements to control the cursor, or a keyboard driver operated by a device attached to the user's head. If the user is unable to control head movement, gesture and movement tracking can be used instead to give the user control.
Multimedia systems offer a range of communication options for people with speech and hearing impairments, including synthesized speech, text-based communication, and conferencing systems. Textual communication is slow, which can weaken the impact of a message. Predictive algorithms can be used to anticipate the words being typed and complete them, reducing the amount of typing required. Conventions such as the 'smiley' :-), indicating a joke, can help to restore context that is lost from face-to-face communication, and facilities that support turn-taking protocols further aid natural communication. Speech synthesis must be rapid to match a natural conversational pace, so replies may be pre-programmed and selected with a single switch.
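The predictive completion mentioned above can be as simple as ranking vocabulary words by how often the user has typed them, then matching on the current prefix. This is a minimal sketch under that assumption; the sample text and function names are invented for illustration, and real predictive systems use far richer language models.

```python
# Minimal prefix-based word prediction to reduce typing effort.
from collections import Counter

def build_model(text):
    """Count word frequencies in a sample of the user's past messages."""
    return Counter(text.lower().split())

def predict(model, prefix, n=3):
    """Return the n most frequent words that start with the typed prefix,
    breaking frequency ties alphabetically."""
    candidates = [(word, count) for word, count in model.items()
                  if word.startswith(prefix.lower())]
    candidates.sort(key=lambda wc: (-wc[1], wc[0]))
    return [word for word, _ in candidates[:n]]

model = build_model("open the file save the file find the folder find the form")
print(predict(model, "f"))  # → ['file', 'find', 'folder']
```

As the user types each letter, the candidate list narrows; selecting a completion with a single switch replaces the remaining keystrokes, which is exactly where the typing saving comes from.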
Designing for different age groups
The needs of the elderly may differ dramatically from those of other parts of the population, and they also differ among themselves. The proportion of people with disabilities increases with age: more than half of those over 65 have some form of disability. As with younger people with disabilities, technology can help with failing vision, hearing, speech, and mobility. New communication tools such as email and instant messaging can support social interaction where mobility or speech impairments make face-to-face contact difficult, and mobile technology can provide memory aids where age-related memory loss occurs.
While not opposed to using technology, some older users may be unfamiliar with it and apprehensive about learning. The language of manuals and training materials may be difficult to understand and alien to them, and their interests and concerns may differ from those of younger users.
Despite the potential benefits of interactive technology for older people, it received little attention until recently. Researchers are beginning to examine how technology can best support older people, what the key design issues are, and how older people can be effectively involved in the design process; this area is likely to grow in importance in the future.
Children have different needs from adults when it comes to technology, and they are diverse as a group: the needs of a three-year-old will differ significantly from those of a twelve-year-old, as will the means available for discovering them. Children also differ from adults in that they have their own goals, likes, and dislikes. It is important to involve them in the design of interactive systems intended for their use, yet this can be difficult because they may not share the designer's vocabulary or be able to articulate what they think. Design methodologies have therefore been developed specifically to include children as active members of the design team.
Children are included in an intergenerational design team that focuses on understanding and evaluating context. Team members, children included, record their observations using a range of drawing and note-taking tools. Paper prototyping, using art materials familiar to children, allows adults and children to participate equally in building and refining prototype designs. The approach has been used successfully in the development of a range of new technologies for children.
Younger children may find it challenging to use a keyboard and may lack good hand–eye coordination. Pen-based interfaces can be a good alternative to traditional keyboards.
Children may find interfaces that allow multiple modes of input, such as touch or handwriting, easier to use than a keyboard and mouse. Redundant displays, which present information in text, graphics, and sound, will also enhance their experience.
Designing for cultural difference
Cultural difference is sometimes equated with national difference, but this is an oversimplification. Many other characteristics, such as age, gender, race, sexuality, class, religion, and political persuasion, may influence an individual's response to a system. This is especially true of websites, where the explicit goal is often to design for a particular culture or subculture.
While all of these factors contribute to a person's cultural identity, not all of them are significant in the design of a system. However, we can identify a few key characteristics that must be considered carefully if universal design is to be achieved. Among them are language, cultural symbols, gestures, and the use of color.
Symbols have different meanings in different cultures. The rainbow, for example, represents a covenant with God in Judeo-Christian religion, diversity in the LGBT community, and hope and peace in the cooperative movement. We cannot assume that everyone will interpret symbols in the same way, so we must ensure that different interpretations of symbols will not cause problems or confusion.
Colors are often used in interfaces to reflect 'universal' conventions, such as red for danger and green for go. But how universal are these conventions? In fact, red and green have different meanings in different countries. As well as danger (France), red signifies life (India), happiness (China), and royalty. Green signifies fertility (Egypt) and youth (China), as well as safety (Anglo-American). A universal interpretation of color cannot be assumed, but redundancy, providing the same information in another form, can help to support and clarify the intended meaning.
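This redundancy principle can be made concrete: encode each status in color, a text label, and a symbol, so that no single channel carries the meaning alone and a user who reads the color differently (or cannot perceive it) still gets the message. The mapping and function names below are an illustrative sketch, not a standard.

```python
# Redundant encoding of status: colour + symbol + text label.
# Each channel independently conveys the meaning, so cultural or
# perceptual differences in reading colour do not lose the message.
STATUS_STYLES = {
    "danger":  {"color": "red",   "symbol": "✖", "label": "STOP"},
    "caution": {"color": "amber", "symbol": "!", "label": "CAUTION"},
    "ok":      {"color": "green", "symbol": "✔", "label": "GO"},
}

def render_status(status):
    """Render a status message that stays meaningful if colour is ignored."""
    style = STATUS_STYLES[status]
    return f"[{style['color']}] {style['symbol']} {style['label']}"

print(render_status("danger"))  # → [red] ✖ STOP
```

The same pattern applies to the perceptible-information principle earlier: delivering information through several channels at once is what makes the design robust across users and contexts.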
Software engineering Undergraduate
University of Kelaniya