Building AI for the Blind: How We Made Dreami Accessible in One Day

Dreami improved accessibility for blind users by collaborating live with one user to fix screen reader issues and rapidly implementing continuous voice input.

Summary

The developers of Dreami prioritized genuine usability over mere compliance with screen readers like JAWS, which is notoriously difficult to optimize for. They achieved this by conducting a one-hour live development session with a blind user, fixing screen reader annoyances in real time as they arose. During this session, the user requested voice-to-text, which the team implemented within hours.

Realizing that requiring a button press for every sentence broke the conversational flow, they quickly added a continuous conversation mode in which the microphone stays on until the dialogue concludes. This single feature benefits blind users through natural conversational flow, drivers through a hands-free assistant, and all users through visual confirmation of the transcribed text, unlike some competing voice modes. The intentional 15-30 second response time reflects the AI synthesizing the entire conversation context rather than just reacting to the last prompt.

The team emphasizes that true accessibility requires actively engaging with and addressing the frustrations of the target users.
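The continuous conversation mode described above can be pictured as a simple loop: the microphone stays open, each utterance is transcribed and shown on screen for visual confirmation, and listening only stops once the user signals the dialogue is over. The sketch below is a hypothetical Python illustration of that loop; the function name, the end-of-conversation phrases, and the use of a plain list in place of a live speech-to-text stream are all assumptions for illustration, not Dreami's actual implementation.

```python
def continuous_conversation(utterances, end_phrases=("goodbye", "that's all")):
    """Hypothetical sketch of a continuous conversation mode: keep
    'listening' (iterating over transcribed utterances) until the user
    ends the dialogue -- no button press required per sentence."""
    transcript = []  # full context the assistant would synthesize a reply from
    for text in utterances:  # stands in for a live speech-to-text stream
        transcript.append(text)  # displayed on screen for visual confirmation
        if text.strip().lower() in end_phrases:
            break  # dialogue concluded; the microphone would turn off here
    return transcript
```

Because the loop accumulates every utterance rather than only the last one, the assistant can respond to the whole conversation context, which is consistent with the deliberate 15-30 second response time the team describes.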

(Source: Dreami)