(Image: © Ryan McVay/Getty)
Read my lips. A new invention can recognise "silent speech" by keeping tabs on your tongue and ears.

Trained to recognise useful phrases, it could allow people who are disabled or who work in loud environments to quietly control wearable devices.

The new device relies in part on a magnetic tongue control system, previously designed to help people with paralysis drive a power wheelchair via tongue movements.

But the researchers were concerned that the technology - which relies on a magnetic tongue piercing or a sensor affixed to the tongue - might be too invasive for some users.

Thad Starner, a professor at the Georgia Institute of Technology and technical lead on the wearable computer Google Glass, was inspired to try tracking ear movements after a dentist's appointment. The dentist stuck a finger in Starner's ear and asked him to bite down, a quick test for jaw function. As his jaw moved, so too did the space in his ears.

"I said, well, that's cool. I wonder if we can do silent speech recognition with that?" says Starner.

Infrared ear sensor

The resulting device combines tongue control with earpieces that look somewhat like headphones. Each is embedded with a proximity sensor that uses infrared light to map the changing shape of the ear canal. Different words require different jaw movements, deforming the canal in slightly different ways.

As a test, the team listed 12 phrases a wearer might need, such as "I need to use the bathroom" or "Give me my medicine, please". People were then recorded repeating these phrases while wearing the device.

With both the tongue and ear trackers in, the software recognised what the wearer was saying 90 per cent of the time. Using ear trackers alone, the accuracy was slightly lower (IEEE Computer, DOI: 10.1109/MC.2015.310).
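The article doesn't describe how the software maps sensor traces to phrases. As a rough illustration of how a small-vocabulary recogniser of this kind can work, the sketch below matches a new ear-canal trace against one pre-recorded template per phrase using dynamic time warping; the function names and synthetic signals are hypothetical, not taken from the paper.

```python
# Illustrative sketch: small-vocabulary phrase recognition from an
# ear-canal deformation trace, using 1-nearest-neighbour matching with
# dynamic time warping (DTW). The real pipeline is not described here.

import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two 1-D sensor traces, so phrases mouthed
    at different speeds can still be compared."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def recognise(trace: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Return the phrase whose training template is closest to the trace."""
    return min(templates, key=lambda p: dtw_distance(trace, templates[p]))

# Hypothetical usage: one template per phrase, recorded while the wearer
# repeated each phrase during training. Synthetic signals stand in for
# real infrared proximity readings.
templates = {
    "I need to use the bathroom": np.sin(np.linspace(0, 4, 80)),
    "Give me my medicine, please": np.cos(np.linspace(0, 6, 100)),
}
new_trace = np.sin(np.linspace(0, 4, 90)) + 0.05 * np.random.randn(90)
print(recognise(new_trace, templates))
```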

'Jaw-emes'

The researchers hope to build up a phrasebook of useful words and sentences recognisable just from the ear data. "We're trying to figure out the fundamental parts of speech we can recognise. We call them 'jaw-emes'," says Abdelkareem Bedri, a graduate student at Georgia Tech.

In addition, they've started looking into other potential uses for the ear data. One experiment with a modified version of the ear trackers reached 96 per cent accuracy in recognising simple jaw gestures, such as moving the jaw from left to right. Such gestures could let the wearer discreetly control a wearable device. Heartbeat monitoring also seems feasible, and could help the system verify that it is placed correctly in the wearer's ear.
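Again purely as an illustration: one simple way to spot a left-to-right jaw gesture from two ear sensors would be to check which canal deforms first. Everything in this sketch, from the threshold to the synthetic traces, is an assumption rather than the team's method.

```python
# Illustrative sketch: detecting a left-to-right jaw gesture from the two
# ear sensors by comparing when each canal's deformation peaks.

import numpy as np

def detect_left_right(left: np.ndarray, right: np.ndarray,
                      threshold: float = 0.5) -> bool:
    """A left-to-right sweep should deform the left canal before the
    right one: require both excursions to clear a noise threshold and
    the left signal to peak first."""
    if left.max() < threshold or right.max() < threshold:
        return False
    return int(np.argmax(left)) < int(np.argmax(right))

# Synthetic traces: the left ear deforms early, the right ear late.
t = np.linspace(0, 1, 100)
left = np.exp(-((t - 0.3) ** 2) / 0.01)
right = np.exp(-((t - 0.7) ** 2) / 0.01)
print(detect_left_right(left, right))  # True
```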

Bruce Denby works on silent speech in his lab at the Pierre and Marie Curie University in Paris. He says that demonstrating the technology is "industry ready" will be key to bringing it to the marketplace.

"The true holy grail of silent speech is continuous speech recognition," says Denby, but the ability to recognise even a limited set of phrases is already a tremendous boon for some disabled individuals, he adds.