
February 18, 2019

Cornell Tech Aims to Assist the Visually and Hearing Impaired with Augmented Reality Software


In the United States alone, over 3.4 million people are blind or visually impaired, and over 30 million Americans live with some degree of hearing loss. To make daily tasks more accessible to the visually and hearing impaired, Prof. Shiri Azenkot of Cornell Tech is researching augmented reality solutions.

AR overlays computer-generated imagery on a person's field of vision, allowing people to see an enhanced version of their environment through a hand-held screen. One of Azenkot's projects, CueSee, is an AR application that aims to help people with impaired vision find items while shopping. It does so by augmenting visual search, the process of scanning a scene for a particular target.

“Visual search is very common… [it is used] when you are looking for a word in a document, when you are looking for a friend in a cafeteria or… looking for the bathroom or exit sign. There are lots of examples,” Azenkot said.

According to Azenkot, there are currently no tools to help the visually impaired with broad visual scanning tasks like visual search, which makes everyday activities like shopping difficult. As a solution, Azenkot and her team designed five different visual cues that draw a user's attention to where products are located in grocery stores.

“We designed visual cues based on principles from cognitive psychology and what is accessible to people with different visual conditions,” Azenkot said.


CueSee highlighting a product at a grocery store using augmented reality

Azenkot is hopeful about applications of this research beyond grocery shopping. She is working on other accessibility projects, such as interactive 3D-printed learning tools for those who are visually impaired.

“We developed this system where, as you touch a 3D print, it recognizes where you are touching and it can speak descriptions of what you are touching,” Azenkot said.

Christopher Caulfield ’19 and Devon Bain ’19, advisees of Azenkot at Cornell Tech, are also working on AR projects to improve learning and accessibility for individuals with disabilities. They are developing methods to make conversing easier for the deaf community.

Together, Caulfield and Bain are creating a captioning system so that users who are deaf or hard of hearing can read what someone says to them. To facilitate this, the system will use computer vision to place captions just below the chin of the person speaking.
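As a rough illustration of that idea, the sketch below anchors a caption just below a detected face using an off-the-shelf OpenCV face detector. The library choice, offsets and styling are assumptions made for the sake of example, not a description of the team's actual system.

```python
import cv2

# Illustrative sketch only: anchor a caption just below a detected face,
# approximating the "below the chin" placement described in the article.
# The detector, margin, and styling are assumptions, not the researchers' system.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def draw_caption_below_chin(frame, caption):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Place the caption a small margin below the face bounding box,
        # so the viewer can read it while keeping eye contact.
        anchor = (x, y + h + 30)
        cv2.putText(frame, caption, anchor,
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    return frame
```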

“We realized that a lot of people want to be able to maintain eye contact and also be able to read their lips,” Caulfield said.

After interviewing people who are deaf or hard of hearing to identify needs, Caulfield and Bain created image and video prototypes to simulate the user experience of having a conversation with someone while reading captions of what they say.

Another feature Caulfield and Bain developed is the use of color to show tone or affect, with negative speech highlighted in red and positive or enthusiastic speech highlighted in green. According to Caulfield, tone is often difficult for people with hearing loss to interpret, so indicating it in captions will facilitate easier communication.
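As a rough sketch of that color scheme, the snippet below maps a hypothetical sentiment score to a caption color. The numeric score and its thresholds are assumptions; the article only specifies red for negative speech and green for positive speech.

```python
# Illustrative sketch only: choose a caption color from a sentiment score.
# The score range and thresholds are assumptions, not the team's design.
def caption_color(sentiment_score):
    """Map a sentiment score in [-1, 1] to a BGR caption color."""
    if sentiment_score <= -0.25:
        return (0, 0, 255)    # red: negative tone
    if sentiment_score >= 0.25:
        return (0, 255, 0)    # green: positive or enthusiastic tone
    return (255, 255, 255)    # white: neutral tone
```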

Both Azenkot and Caulfield expressed concern about the weight and bulkiness of currently available AR headsets. However, Azenkot remains optimistic.

“I hope that someday all of this will be available to use, that all our research will be translated into products,” Azenkot said.