Tech

Voice assistants are finally being trained to account for speech impediments

Inclusivity and accessibility tend to take longer to arrive, despite arguably being more valuable to those who need them than to those who don't.


Voice-recognition technology has come a long way. Today we can turn to Alexa or Siri and ask whatever is on our minds and hold conversations, albeit awkward ones, with these smart assistants. But there is major room for improvement as scores of people around the world with speech impairments and disabilities still haven't been included in the conversation.

Now tech companies are trying to make up for that by training voice assistants to understand atypical speech, The Wall Street Journal reports. According to the National Institute on Deafness and Other Communication Disorders, there are 7.5 million individuals in America who have some form of speech impairment and struggle to have their speech understood. Getting smart speakers and related technologies to understand them would be a major leap forward, and a genuine display of "smarts."

New skills — To make sure that people with speech disabilities are understood by their smart devices, Google's artificial intelligence team is working to expand the company's voice technology's capabilities. Product managers and interns are collecting atypical speech data from a range of people and training Google Assistant to recognize its patterns. In some cases, voice-recognition tools are being trained on data from seniors who may have degenerative diseases, individuals with amyotrophic lateral sclerosis (ALS) — a motor neuron disease that results in slurred or slowed speech — people who stutter, and those with other speech impediments. If the tech industry gets this right, voice technology could become a lot more hospitable, inclusive, and useful.
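Google hasn't published the details of how Assistant is adapted to these speech patterns, but one classic building block for matching speech that unfolds at different speeds is dynamic time warping (DTW), which aligns two sequences regardless of tempo. A minimal sketch, using made-up numbers as stand-ins for real acoustic features:

```python
def dtw_distance(a, b):
    """Return the dynamic-time-warping cost of aligning two 1-D feature
    sequences: 0.0 means identical shape, regardless of tempo."""
    INF = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = cheapest alignment of a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch b
                                 cost[i][j - 1],      # stretch a
                                 cost[i - 1][j - 1])  # step together
    return cost[n][m]

# A template utterance, the same utterance spoken twice as slowly,
# and a genuinely different one (all values hypothetical):
template  = [1.0, 3.0, 2.0, 5.0]
slowed    = [1.0, 1.0, 3.0, 3.0, 2.0, 2.0, 5.0, 5.0]
different = [9.0, 9.0, 9.0, 9.0]

print(dtw_distance(template, slowed))     # 0.0: same shape, slower tempo
print(dtw_distance(template, different))  # 25.0: a real mismatch
```

The point of the sketch is only the principle: a recognizer scored this way treats slowed speech as the same utterance, not a new one. Production systems use learned neural models rather than raw DTW, but they must solve the same alignment problem.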

Welcoming everyone — According to the WSJ, Amazon is working on integrating software called Voiceitt with Alexa. The program trains algorithms on different vocal patterns, including non-standard ones. Apple, meanwhile, is trying to make sure Siri can listen to users without interrupting them, which could be highly beneficial for users who stammer or stutter and need the assistant to wait through pauses in their questions and replies.
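Apple hasn't said how Siri's behavior would change, but the underlying mechanism in most assistants is an end-of-utterance detector: the device stops listening once silence lasts longer than some threshold. A toy sketch of that idea, with hypothetical frames (1 = speech detected, 0 = silence), shows why lengthening the threshold helps someone who pauses mid-sentence:

```python
def utterance_end(frames, max_silent_frames):
    """Return the frame index at which the detector cuts the user off,
    or None if it keeps listening through the whole utterance."""
    silent_run = 0
    for i, frame in enumerate(frames):
        silent_run = silent_run + 1 if frame == 0 else 0
        if silent_run > max_silent_frames:
            return i  # assumed end of the user's turn
    return None

# Speech with a four-frame mid-sentence pause, e.g. a block while stuttering:
frames = [1, 1, 0, 0, 0, 0, 1, 1, 1]

print(utterance_end(frames, max_silent_frames=3))  # 5: cuts the user off mid-thought
print(utterance_end(frames, max_silent_frames=5))  # None: waits out the pause
```

The trade-off is latency: a longer silence threshold means every user waits a bit longer for a response, which is why tuning it for atypical speech is a deliberate design choice rather than a free win.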

Voice assistants improve in proportion to the quantity and quality of the data they are trained on, and to the rules their designers build into them. When it comes to people with atypical speech, making smart assistants work for them means making deliberate changes to the way those systems work. Atypical speakers have always existed, after all.

Of course, it will take time before Alexa, Siri, Google Assistant, Bixby (remember Samsung's smart assistant?) and the smart assistants found elsewhere (like in high-end cars) are able to fully address the issue of accessibility, but the fact that the biggest players are now paying attention to the problem is encouraging.