Researchers at MIT have created a piece of wearable tech that can read your thoughts like it’s reading a children’s book. The device, AlterEgo, is a headset that attaches to the face and jaw and picks up the subtle neuromuscular signals that fire when words are internally verbalized.
Sorry, Alexa, but verbal communication is so 2017. Here’s a look at how the AlterEgo works:
Put together by the MIT Media Lab, the product video depicts a wearer going about his daily routine with the AlterEgo strapped to his head. Despite the fact that it looks more like a dental prosthesis than a mind-reading device, the headset manages some nifty feats. Just think “time,” and the device tells you the time of day via the earpiece. Think “down” when it’s hooked up to your smart TV and it’ll scroll through your shows for you. Or, to add up the price of your grocery shop, just think the prices and the headset does the math — you know, just in case you’re not an MIT genius.
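The interaction model described above, a small vocabulary of internally verbalized words, each mapped to an action, can be sketched as a simple dispatcher. This is purely an illustrative sketch: the function names, word list, and handlers are assumptions for the example, not MIT’s actual software.

```python
# Hypothetical sketch of AlterEgo-style word-to-action dispatch.
# The recognizer output is assumed to be a plain word string; everything
# below (handlers, word list) is illustrative, not the real system.
from datetime import datetime

def tell_time():
    # "time" -> speak the current time through the earpiece
    return datetime.now().strftime("%I:%M %p")

def scroll_down():
    # "down" -> scroll the connected smart TV's guide
    return "scrolling TV guide down"

class RunningTotal:
    """Accumulates internally verbalized prices, e.g. while grocery shopping."""
    def __init__(self):
        self.total = 0.0

    def add(self, price):
        self.total = round(self.total + price, 2)
        return self.total

def dispatch(word, handlers):
    """Map a recognized word to its action; unknown words are ignored."""
    handler = handlers.get(word)
    return handler() if handler else None

handlers = {"time": tell_time, "down": scroll_down}
print(dispatch("down", handlers))  # -> scrolling TV guide down

cart = RunningTotal()
for price in (3.49, 2.99, 5.25):
    total = cart.add(price)
print(total)  # -> 11.73
```

The point of the sketch is the small, fixed vocabulary: with only 20 recognizable words, each one can be bound directly to a single action, which is part of why such high accuracy is achievable.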
“The motivation for this was to build an IA device — an intelligence-augmentation device,” Arnav Kapur, a graduate student at the MIT Media Lab and lead developer on the device, said in a release. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”
It’s a similar concept to the way we currently use our mobile phones to access information, but with less of a barrier to entry. With a device like AlterEgo, users no longer have to pull out their phone, enter a passcode and open a browser, probably getting distracted by a Facebook notification along the way, only to come back to their initial task 15 minutes later and execute it by typing with two thumbs or speaking into a microphone. This technology aims to keep us in the moment by becoming like a “second self.”
“My students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present,” adds Pattie Maes, Kapur’s thesis advisor.
Currently, the non-verbal communication device’s vocab is pretty limited — it understands just 20 words. But with those 20, it has a 92 percent accuracy rate, which has got to be better than Siri, who’s constantly offering weather reports for Torno, Italy, when all you want to know is if it’s really going to snow again in Toronto this week. (Not to mention Alexa’s recent habit of eerily laughing without any prompt whatsoever.)
If this is any sort of glimpse into the future, it won’t be long before we’re communicating with our devices — and perhaps also with each other — in non-verbal ways. We might just stick to texting, though.