“We don’t want to leave people behind”: AI is aiding disabled individuals in remarkable new ways.

This screenshot, from a video shared on YouTube by OpenAI and Be My Eyes, shows someone using an AI-enhanced version of Be My Eyes to hail a taxi.

When Matthew Sherwood shops for clothes, he needs assistance to ensure he’s choosing the correct color or style. Blind for over 15 years, Sherwood manages a family, a successful career in investing, and a guide dog named Chris, but daily tasks like shopping still challenge his independence.

Artificial intelligence (AI) might soon change that. Sherwood currently uses an app called Be My Eyes, which connects visually impaired users with sighted volunteers via live video to help with tasks such as verifying outfit colors or checking milk carton expiration dates. However, advancements in AI technology are beginning to reduce the need for volunteer assistance.

Last year, Be My Eyes partnered with OpenAI to let OpenAI's model "see" and describe objects for users. In a recent demo, OpenAI showed the AI-powered Be My Eyes app helping a user hail a taxi by telling them when to raise their arm. Similarly, in May, Google announced a new feature for Lookout, its app designed to assist visually impaired users. These applications are part of the growing field of "assistive technology": tools that help disabled or elderly people.
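For illustration only, a scene-description request like the one in the Be My Eyes demo could, in principle, be built on a vision-capable language model. The sketch below uses OpenAI's public Python SDK with a placeholder prompt and image URL; it is not Be My Eyes' actual integration, whose details are not described in this article.

```python
# Minimal sketch: asking a vision-capable model to describe a street scene
# for a blind user. The model name, prompt, and image URL are illustrative
# placeholders, not Be My Eyes' production setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this street scene for a blind pedestrian. "
                     "Is a taxi approaching, and when should I raise my arm?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/street-frame.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)
```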

Tech companies like Apple and Google are increasingly developing AI-powered tools to assist people with various disabilities. These range from eye-tracking technology that enables physically disabled users to control their iPhones with their eyes to detailed voice guidance in Google Maps for blind users.

Since the launch of ChatGPT, it has become evident that AI will transform many aspects of life, including work, communication, and perceptions of reality. For people with disabilities, however, AI holds the potential to profoundly enhance their lives in unique ways.

“It used to be that if you were in business and you were blind, you had to have an administrative assistant reading to you,” Sherwood said. “But now, you have this new power… For some, this is great technology. For blind people, this is an opportunity to gain employment and an opportunity to compete in business, an opportunity to succeed.”

The advantages of AI for accessibility

Technology companies have long used earlier forms of AI to make their products more accessible, through features such as automated closed captioning and screen readers. But recent advances in AI, fueled by vast datasets and far more powerful computing systems, are dramatically expanding what assistive technology can do. To reliably help a blind user hail a taxi, for example, an AI model must be very good at recognizing what taxis look like, which requires training on a large and varied set of examples.

Google has also upgraded its tool for blind and low-vision users, adding a "question and answer" feature, powered by its generative AI technology, that describes on-screen content.

Eve Andersson, Google's senior director of product inclusion, equity, and accessibility, told CNN that AI's potential has been apparent for years, but that it needed to reach a high level of quality before it could practically be built into products.

Generative AI tools are particularly promising for accessibility because they can understand and generate information across various formats: text, audio, photos, and videos. This capability allows AI to serve as an intermediary, converting, for example, audio into written text for users with hearing impairments.
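As a simple illustration of that kind of conversion, the sketch below transcribes an audio file into text using the open-source `speech_recognition` package for Python. The file name and recognizer choice are placeholders, not a description of any particular company's product.

```python
# Minimal sketch: converting spoken audio into written text, one example of
# translating between modalities. "meeting.wav" is a placeholder file name.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("meeting.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    text = recognizer.recognize_google(audio)  # free web speech API
    print(text)
except sr.UnknownValueError:
    print("Audio could not be understood.")
except sr.RequestError as err:
    print(f"Speech service unavailable: {err}")
```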

Andersson emphasized that people's accessibility needs vary widely and often come down to how they take in and interact with information. AI excels at translating between different modes of information, which can address needs stemming from hearing, vision, motor, speech, and cognitive disabilities, among others.

Designing inclusive AI systems

Ensuring that AI systems continue to serve all users requires ongoing investment. Because AI models are trained on human-generated data, there’s a risk that they might replicate existing biases. Examples have already surfaced, such as AI image generators struggling with racial representation or algorithms perpetuating gender stereotypes in job advertisements.

To mitigate these risks, major tech companies including Apple, Google, and Microsoft are collaborating with researchers at the University of Illinois Urbana-Champaign on the Speech Accessibility Project. The initiative aims to create a diverse dataset for training AI speech recognition tools, including speech from people with conditions such as Parkinson's disease, Down syndrome, and ALS. Drawing on more than 200,000 recordings, the project has reduced speech recognition error rates from 20% to 12%, improving accessibility for users who rely on tools like voice assistants and translators.
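The article does not say exactly how the project measures errors, but speech recognition accuracy is commonly reported as word error rate: the number of word substitutions, insertions, and deletions divided by the length of the reference transcript. A minimal, generic sketch:

```python
# Minimal sketch: word error rate (WER), the standard speech recognition
# metric. A drop from 20% to 12% means roughly 12 errors per 100 reference
# words instead of 20. This is a generic illustration, not the project's
# actual evaluation code.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("turn the volume up please", "turn volume up please"))  # 0.2
```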

Clarion Mendes, a clinical assistant professor involved in the project, emphasized the importance of diverse speech data, noting its role in improving AI's ability to understand people with non-standard speech patterns. She highlighted how assistive technology can significantly improve life participation for people facing communication barriers, increasing their independence and opportunities.

Eve Andersson, discussing the broader impact, underscored that investing in accessible AI is not only the right thing to do but also a sound business decision. By building inclusively, companies can expand their market reach to customers such as government agencies and educational institutions, reinforcing the idea that technology has the power to level the playing field and make society more inclusive.
