Guideline E: Interactivity

Guideline E

Provide access to interactive activities for all users with disabilities.
The basic principles of creating accessible interactive activities come from general software design guidelines. However, some aspects of an interactive activity deserve particular attention in educational software. Techniques for meeting these checkpoints are found in the design guidelines for specific development environments, as well as in The Trace R&D Center's Software Accessibility Guidelines.

Checkpoint E1

Ensure that all actions can be completed from the keyboard.
Providing a complete keyboard interface for an activity means that users who cannot use a mouse will be able to complete the activity. Users who are blind or have low vision rely on the keyboard for input and interaction. Some users with physical disabilities also use the keyboard or an alternative input device that passes keystrokes to the operating system. Pay particular attention to activities where users are expected to "drag and drop" one item onto another or must use the mouse to select an item from among alternatives. The Windows OS provides a common and well-known model of a keyboard interface to a graphical user interface. Using the conventions provided by this model allows users to avoid learning a new keyboard interface.
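As an illustration only, the TypeScript sketch below shows one way a browser-based activity might offer a keyboard alternative to drag and drop: the user picks up a focused item with Enter or Space and drops it onto a focused target the same way. The `.item` and `.target` class names and the single-selection model are assumptions for the example, not part of the guideline.

```typescript
// Minimal sketch: a keyboard-operable alternative to mouse drag-and-drop.
// Assumes draggable items carry class "item" and drop targets carry class
// "target"; these names are illustrative.

let pickedUp: HTMLElement | null = null;

function makeKeyboardOperable(root: Document | HTMLElement = document): void {
  const items = root.querySelectorAll<HTMLElement>(".item");
  const targets = root.querySelectorAll<HTMLElement>(".target");

  items.forEach((item) => {
    item.tabIndex = 0;                       // reachable with the Tab key
    item.setAttribute("role", "button");
    item.addEventListener("keydown", (e: KeyboardEvent) => {
      if (e.key === "Enter" || e.key === " ") {
        e.preventDefault();
        pickedUp = item;                     // "pick up" the focused item
        item.setAttribute("aria-pressed", "true");
      }
    });
  });

  targets.forEach((target) => {
    target.tabIndex = 0;
    target.addEventListener("keydown", (e: KeyboardEvent) => {
      if ((e.key === "Enter" || e.key === " ") && pickedUp) {
        e.preventDefault();
        target.appendChild(pickedUp);        // "drop" onto the focused target
        pickedUp.removeAttribute("aria-pressed");
        pickedUp = null;
      }
    });
  });
}

makeKeyboardOperable();
```

The same pattern follows the standard Windows conventions mentioned above: Tab moves focus among operable elements, and Enter or Space activates them, so users do not need to learn a new keyboard interface.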

Note that it is possible to provide access to all users on platforms that do not include a keyboard, such as kiosks. See the Trace R&D Center for information about kiosk accessibility.

Checkpoint E2

Present information in ways that are accessible to both blind and deaf users.
Users who are deaf need visual access to any information presented in audio. Users who are blind will benefit from audio access to visual information (and at a minimum must be able to convert visual information into audio or braille through use of a screen reader).

Multimedia presentations are one clear example of the need for multimodal information, but other aspects of a program's educational content and interface should be considered as well. If important warnings or instructions are provided in more than one mode, they will be immediately useful to more users. And when educational content is provided multimodally, many students without disabilities can benefit as well. For example, Tindall-Ford and colleagues (1997) showed in several experiments that presenting information in both audio and visual form improved performance on complex tasks. And J.R. Williams (1998) reviewed roughly 100 studies on the use of multimedia in instruction and found that combining visual and verbal information can enhance comprehension. The groups of students who may better understand text that is both seen and heard include those with learning disabilities or reading difficulties, students learning English as a second language, and students who learn better auditorily than visually.
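As a hedged illustration of presenting the same instruction in more than one mode, the TypeScript sketch below writes a message into an on-screen text region and also speaks it when the browser's speech synthesis is available. The `status-region` element ID is an assumption for the example.

```typescript
// Minimal sketch: present an important instruction both as on-screen text and
// as speech. Assumes an element with id "status-region" that is marked
// aria-live="polite" in the page markup; the ID is illustrative.

function announce(message: string): void {
  // Text channel: visible to deaf users and announced by screen readers.
  const region = document.getElementById("status-region");
  if (region) {
    region.textContent = message;
  }

  // Audio channel: speak the same message when speech synthesis is available.
  if ("speechSynthesis" in window) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(message));
  }
}

announce("Time is almost up. Press Enter to continue.");
```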

Checkpoint E3

Allow users to customize the timing of any events.
Users of assistive technology may not be able to respond to on-screen events as quickly as other users. For example, it may take longer to hear a message using text-to-speech technology than it would to read it visually. A user with a magnified screen may need extra time to locate a message on screen before reading it. And users with physical disabilities may have slower response time to messages. For all of these reasons, users should have the option to change or eliminate any requirement for timed responses. This includes the ability to freeze and repeat any audio or visual presentations. Allowing changes in the timing of required actions also helps to solve some of the problems described in Checkpoint E4 for users who cannot attend to two sources of information at once.
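One way to honor such an option is sketched below in TypeScript: a response timer whose length can be multiplied or disabled entirely according to a user preference, and which the caller can cancel to freeze a presentation. The `TimingPrefs` shape and the specific values are illustrative assumptions, not part of the guideline.

```typescript
// Minimal sketch: a response timer that honors a user preference for extra
// time or for no time limit at all. The TimingPrefs shape is illustrative.

interface TimingPrefs {
  multiplier: number;      // e.g. 1 = default, 3 = three times as long
  noTimeLimit: boolean;    // true disables timed responses entirely
}

function startResponseTimer(
  baseMs: number,
  prefs: TimingPrefs,
  onTimeout: () => void
): (() => void) | null {
  if (prefs.noTimeLimit) {
    return null;                                   // no timer: wait indefinitely
  }
  const id = setTimeout(onTimeout, baseMs * prefs.multiplier);
  return () => clearTimeout(id);                   // caller can cancel or freeze
}

// Example: a 10-second prompt stretched to 30 seconds for this user.
const cancel = startResponseTimer(10_000, { multiplier: 3, noTimeLimit: false }, () => {
  console.log("No response received in time; offering to repeat the prompt.");
});
```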

Checkpoint E4

Provide features that allow users to access multiple sources of information separately when they are delivered simultaneously.
Users with disabilities may not be able to monitor two sources of information at once. For example, blind users with screen readers can monitor only one portion of the screen at a time. If important information appears simultaneously in two places, they may need to check each location separately. A user with a magnified screen cannot see two widely separated parts of the screen at the same time; such a user may need extra time to locate new information, or may miss one of the two pieces entirely. Users who are hard of hearing may be able to use some auditory features but will have difficulty attending to two sounds at the same time. And users with cognitive disabilities may find it difficult to focus on more than one new piece of information at a time.

Separate access to multiple components must be considered when planning multimedia. For example, if instructions for an activity are given in audio but the user must watch something happen at the same time to understand the instructions, a deaf user reading captioned audio may not be able to follow along visually. Users must be able to pause any action to give them time to read the instructions and see the context for those instructions. Similarly, if there is an audio description for blind users simultaneous with audio in the program, it is best to freeze the audiovisual presentation and allow the audio description to play completely, then resume the audiovisual presentation.
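The TypeScript sketch below illustrates that pattern for a browser-based presentation, assuming standard HTMLMediaElement behavior: the main video is paused, the description plays to completion, and playback then resumes. The element IDs are assumptions for the example.

```typescript
// Minimal sketch: freeze the audiovisual presentation while an extended audio
// description plays, then resume. Element IDs are illustrative.

const presentation = document.getElementById("presentation") as HTMLVideoElement;
const description = document.getElementById("description") as HTMLAudioElement;

async function playExtendedDescription(): Promise<void> {
  presentation.pause();                      // freeze the main presentation
  await description.play();                  // start the description track
  await new Promise<void>((resolve) =>
    description.addEventListener("ended", () => resolve(), { once: true })
  );
  await presentation.play();                 // resume only after the description ends
}
```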

Checkpoint E5

Provide a simpler version of any screen with a complex background.
Users with low vision may have difficulty distinguishing important screen elements or text from background images. Avoid placing background images near important elements or under text. If this cannot be avoided, provide a feature that simplifies the screen when needed and ensure that this feature is clearly documented.
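A minimal TypeScript sketch of such a feature appears below: a documented toggle adds a class that CSS can use to suppress background images, and the choice is remembered between sessions. The class name, storage key, and CSS rule are assumptions for the example.

```typescript
// Minimal sketch: a documented "simplify screen" option that removes complex
// background images behind text and interactive elements. The class name and
// storage key are illustrative.

function setSimplifiedScreen(enabled: boolean): void {
  // A CSS rule such as `.simplified .scene { background-image: none; }`
  // is assumed to hide decorative backgrounds when the class is present.
  document.body.classList.toggle("simplified", enabled);
  localStorage.setItem("simplifiedScreen", String(enabled));   // remember the choice
}

// Restore the user's saved preference on startup.
setSimplifiedScreen(localStorage.getItem("simplifiedScreen") === "true");
```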

[Figure: Screen shot of the Wild West game, original screen]
[Figure: Screen shot of the Wild West game, simplified screen]