Develop for all degrees of visual impairment and be careful with multimodal interaction

In user interfaces, multimodal interaction is the option for users of a software product or application to receive the same message through different sensory channels, thus ensuring redundancy. Effective multimodal interaction requires that the information presented to the different sensory channels is coordinated and congruent informationally as well as spatially and temporally. Specific guidelines exist for designing auditory and haptic icons, ensuring they are correctly built and interact safely with the other parts of the user interface.

A blind or low-vision person already has to attend to complex information on multiple sensory channels: auditory, tactile and haptic (through their feet, canes and hands), and olfactory (some recognize the smell of different vehicles and can even gauge objects’ distance). To successfully integrate environmental cues and guidance about current status (e.g. of stop lights) into your digital application, multimodal interaction should not be too invasive or cause information overload, which can create anxiety and pose safety risks.

  • Implement auditory labels, and give users the ability to turn them on and off. Spoken messages tend to be assimilated with less effort than the same messages presented visually, and all users can benefit from them.
  • Add a voice-assisted menu (for people with reduced vision).
  • Test your auditory and haptic icons with the people who will benefit most from them: blind and visually impaired users. To go the extra mile, blindfold challenges can give service providers and developers important insights, and they require little effort or cost.
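The first recommendation above — auditory labels that users can switch on and off — can be sketched as a small manager class. This is a hypothetical illustration, not an API from the text: the injected `speak` callback stands in for a real text-to-speech backend (for example, the browser's `speechSynthesis` API), and the class name and methods are assumptions made for the sketch.

```typescript
// A callback that hands a label to some text-to-speech backend.
type SpeakFn = (message: string) => void;

// Hypothetical manager for toggleable auditory labels.
class AuditoryLabelManager {
  private enabled = true;

  constructor(private speak: SpeakFn) {}

  // Users must be able to turn auditory labels on and off.
  setEnabled(on: boolean): void {
    this.enabled = on;
  }

  // Announce a label only while the feature is on, so the audio
  // channel adds no noise for users who have opted out.
  // Returns whether the label was actually spoken.
  announce(label: string): boolean {
    if (!this.enabled) return false;
    this.speak(label);
    return true;
  }
}

// Usage: collect spoken messages in an array instead of real audio,
// which also makes the toggle behaviour easy to test.
const messages: string[] = [];
const manager = new AuditoryLabelManager((m) => messages.push(m));

manager.announce("Stop light: red");   // spoken
manager.setEnabled(false);
manager.announce("Stop light: green"); // suppressed
```

Keeping the toggle inside one object, rather than scattering `if` checks through the UI code, makes it easy to honour the user's preference consistently and to persist it across sessions.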