How Artificial Intelligence Is Opening Up the World for Those with Disabilities

AI, or artificial intelligence, is far more present in society than most people realise. Most of us use applications powered by artificial intelligence every day without even knowing it, whether that is replying to an email with one of Gmail’s suggested responses or watching a Facebook video with automatic captions. Increasingly, the technological world is using AI to open up technology, and the instant access to information it gives us, to those who would otherwise struggle owing to a disability or sensory impairment. So, what are the three main ways that AI and machine learning are improving accessibility for the disabled community?

Security

For those who are either physically incapable of typing, or unable to remember a traditional password due to cognitive difficulties, AI presents alternative options for protecting computers, phones, tablets and sensitive files. These days we can use fingerprints, iris scanning and facial recognition to access our technology, rather than having to remember an alphanumeric password and type it in.

There are also many systems utilising AI that are designed to help the elderly, such as devices that can sense when a person falls and then send an alert to preset contacts, such as a family member or carer.
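A common approach in such devices is to watch the accelerometer for the signature of a fall: a brief moment of free fall (acceleration magnitude near zero) followed by a sharp impact spike. Here is a minimal sketch of that idea; the thresholds and sample readings are purely illustrative and not taken from any real product:

```python
import math

FREE_FALL_G = 0.4   # below ~0.4 g suggests free fall (illustrative threshold)
IMPACT_G = 2.5      # above ~2.5 g suggests a hard impact (illustrative threshold)

def detect_fall(samples):
    """Return True if a free-fall reading is later followed by an impact spike.

    samples: list of (x, y, z) accelerometer readings in units of g.
    """
    in_free_fall = False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude < FREE_FALL_G:
            in_free_fall = True
        elif in_free_fall and magnitude > IMPACT_G:
            return True  # fall detected: this is where an alert would be sent
    return False

# Normal movement: magnitudes stay near 1 g, no alert
print(detect_fall([(0, 0, 1.0), (0.1, 0, 0.95), (0, 0.1, 1.05)]))  # False
# Free fall followed by a hard impact triggers detection
print(detect_fall([(0, 0, 1.0), (0.1, 0.1, 0.2), (1.5, 1.5, 2.0)]))  # True
```

Real products combine this kind of signal with machine-learned models to cut down false alarms, but the underlying sensor logic is similar.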

Inclusion

AI is increasingly being used to make the world more inclusive for those who have sensory disabilities, such as those with hearing or sight impairments. Facebook, for example, has developed a screen-reading captioning tool that describes photos to visually impaired people, and Google’s Cloud Vision API can understand the context of objects in photos so that the user can get a complete picture of the photo.
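Vision services like this typically return a list of labels with confidence scores, which an accessibility tool can then assemble into a sentence a screen reader can speak. A hypothetical sketch of that final step, using made-up labels rather than a real API response:

```python
def describe_image(labels, min_score=0.7):
    """Turn (label, confidence) pairs into a short description for a screen reader."""
    confident = [label for label, score in labels if score >= min_score]
    if not confident:
        return "Image may contain: unrecognised content."
    return "Image may contain: " + ", ".join(confident) + "."

# Hypothetical label scores, shaped like what a vision API might return
labels = [("dog", 0.98), ("beach", 0.91), ("frisbee", 0.74), ("cloud", 0.42)]
print(describe_image(labels))  # Image may contain: dog, beach, frisbee.
```

Filtering by confidence matters here: reading out low-confidence guesses would make the description less trustworthy for a user who cannot verify it visually.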

AI has also shown promising results in Microsoft’s partnership with Rochester Institute of Technology’s National Technical Institute for the Deaf, which is piloting the use of Microsoft’s AI-powered speech and language technology to provide real-time captioning for students during lectures. By using an advanced form of speech recognition, these systems convert raw speech, with every ‘um’ and stutter stripped out, into fluent, punctuated text, so that deaf students receive information at the same time as their hearing peers.
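One part of such a pipeline is removing disfluencies before the caption reaches the student. A toy sketch of that single step, assuming a fixed list of filler words and simple word-repetition stutters (production systems use statistical language models rather than word lists):

```python
FILLERS = {"um", "uh", "er", "erm"}

def clean_transcript(raw):
    """Remove filler words and immediate word repetitions from raw speech text."""
    cleaned = []
    for word in raw.lower().split():
        token = word.strip(",.?!")
        if token in FILLERS:
            continue  # drop a filler word
        if cleaned and token == cleaned[-1]:
            continue  # drop a stuttered repeat, e.g. "the the"
        cleaned.append(token)
    sentence = " ".join(cleaned)
    return sentence.capitalize() + "." if sentence else ""

print(clean_transcript("um so the the lecture, uh, starts at nine"))
# So the lecture starts at nine.
```

The real captioning technology also has to restore punctuation and casing from prosody and context, which is a much harder modelling problem than this word-level filtering.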

Independence

Voice-interactive products have given those with limited sight or mobility, especially in their hands, far greater access to technology. Devices such as the Amazon Echo and Google Home, along with virtual assistants such as Siri and Cortana, make it easy to access applications and the internet, even for those who can’t see or use their hands, and provide a more natural way of interacting with online services.

We are also starting to see products that almost seem futuristic hitting the market, such as smart canes and self-driving cars. The first of these is a modification of the traditional white cane used by blind and visually impaired people to help them navigate the world. Whilst it can still be used as a traditional white cane, it syncs with the user’s mobile phone and feeds directions into their ear, so that they can navigate unfamiliar areas, and it warns them not only of objects on the ground but also of those above them. Self-driving vehicles are slowly starting to hit the roads in pilot trials, giving those who are otherwise unable to drive the ability to travel independently rather than having to rely on other people.