No longer science fiction: Mind reading through EEG could soon become reality

By Jerrin Thomas Panachakel and Dr Angarai Ganesan Ramakrishnan, Department of Electrical Engineering, Indian Institute of Science

Credit: YAKOBCHUK VIACHESLAV / Shutterstock.com

Jerrin Thomas Panachakel is a PhD candidate at the Department of Electrical Engineering at the Indian Institute of Science (IISc), Bangalore. He studies the use of artificial intelligence (so-called "deep learning") to extract "imagined speech" -- that is, thinking in the form of sound that is not expressed in spoken words or gestures -- from EEG readings. His thesis advisor, Dr Angarai Ganesan Ramakrishnan, is a Senior Professor of Electrical Engineering at IISc and Head of the Medical Intelligence and Language Engineering laboratory. Together they recently published a review article in Frontiers in Neuroscience, in which they give a comprehensive synthesis of research over the last decade on decoding imagined speech from EEG, from electronic design to extracting and categorizing words. Here, they give an introduction to this rapidly growing field, written specifically for laypersons.

Mind reading (or telepathy) is the ability to transfer thoughts from one person to another without using the usual sensory channels of communication such as speech. Although mind reading has so far been a theme of science fiction, scientists have now shown that it could soon become reality.

The first reported attempt at creating a synthetic mind-reading system that makes use of the electrical activity of the brain was by Dr Hans Berger (1873-1941), a German psychiatrist. Although he failed to make mind reading a success, his efforts led to the invention of electroencephalography (EEG). EEG is now widely used in medicine for capturing the electrical activity of the brain, and it is also one of the most popular tools used in attempts to decode speech imagery and thereby enable mind reading.

What is speech imagery?

As human beings, we talk to ourselves most of the time. We rehearse over and over again how to manage a particularly difficult situation, what to say to a prospective customer, how to answer certain critical questions in an interview, and so on. This is called speech imagery or covert speech. Unlike the overt speech in a conversation with another person, there is no movement of the articulators in speech imagery. Even when a person's muscles are paralyzed and they cannot move their articulators, they can still imagine speaking or actively think.

What makes EEG the most popular choice is the fact that it has good temporal resolution, is non-invasive, and is more affordable

Jerrin Thomas Panachakel & Angarai Ganesan Ramakrishnan

Why EEG instead of fMRI, fNIRS or other methods?

As mentioned in our new review article in Frontiers in Neuroscience, EEG is not the only method for decoding imagined speech, but it is the most popular. EEG has several limitations compared with other modalities such as fMRI (functional magnetic resonance imaging) and fNIRS (functional near-infrared spectroscopy). What makes EEG the most popular choice is the fact that it has good temporal resolution, is non-invasive, and is more affordable. On the other hand, EEG is almost always corrupted by noise and lacks any “structure”, making decoding imagined speech from EEG more challenging.

As an illustration, the figure below shows the EEG signals captured from a subject while he imagined speaking the words "in" and "cooperate", respectively. Although the two words have very different lengths, this is not evident from the EEG. The signals lack any “structure”, making it challenging to decode which word the subject imagined.
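For readers curious about what "decoding" means in practice, the sketch below shows, in broad strokes, the kind of basic pipeline researchers often start from: band-pass filter the EEG epochs, compute simple features such as the power in each channel, and train an off-the-shelf classifier to tell two imagined words apart. This is only a minimal illustration and not the deep-learning approach reviewed in our paper; the sampling rate, epoch length, frequency band and the data themselves (random numbers) are assumptions chosen purely to show the shape of the computation.

```python
# Minimal, illustrative imagined-speech decoding sketch.
# NOTE: the "EEG" here is simulated noise; a real study would use
# recorded epochs and far more careful preprocessing and modelling.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 128                                    # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 100, 8, 2 * fs   # 2-second epochs

# Simulated EEG epochs for two imagined words (labels 0 and 1).
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)

# Band-pass filter each epoch (here 8-30 Hz), a common first step.
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
X_filt = filtfilt(b, a, X, axis=-1)

# Simple features: log-variance (a proxy for band power) per channel.
features = np.log(np.var(X_filt, axis=-1))

# Off-the-shelf classifier evaluated with cross-validation.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")  # about 0.5 for pure noise
```

With purely random data, the accuracy hovers around chance level (0.5 for two words), which is exactly the point of the illustration: because imagined-speech EEG has so little visible structure, extracting features and models that push the accuracy meaningfully above chance is where the real research effort lies.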


► Read original article

► Download original article


What are the applications?

A system to decode imagined speech from EEG has a plethora of applications. For instance, someone whose muscles are completely paralyzed (e.g., a patient with locked-in syndrome, LIS) can still imagine speaking, and these thoughts can be converted into voice signals so that they can communicate with the outside world. The system can also be used to assist someone with severe speech impairments. One military application is in combat environments, where background noise prevents regular vocal communication. In this scenario, the soldiers' thoughts can be transmitted instead of their voices, which would otherwise be buried in the background noise.

So, is synthetic mind reading a reality now?

No, we do not think that we have reached that stage yet. The systems we currently have for decoding imagined speech from EEG have some major limitations. Firstly, the systems do not work in real time; that is, there is a significant delay between acquiring the EEG and predicting what the person imagined. Secondly, accuracy is not assured across subjects, meaning that a system that works well for one person may not work well for another. Thirdly, the current systems have a limited vocabulary, meaning that the set of words that can be decoded is small. We have discussed in detail in our paper the considerations in designing a practical system for decoding imagined speech from EEG.

Since the current systems have limited vocabulary, what words would be an ideal choice?

What we currently know is that words of different lengths are a good choice. Words from different languages are also good if the participant knows multiple languages. One counter-intuitive result reported in our paper comes from a study in which five bilinguals, with Hindi as their primary language and English as their secondary language, participated. The system for decoding imagined speech performed better when these participants imagined the words in Hindi rather than in English. We need to conduct further experiments to ascertain the reason for this observation. In our paper, we have discussed the considerations involved in choosing the prompts.

When will the system for decoding imagined speech from EEG become a reality?

To answer this, let us look at the history of similar tools. Audrey, the first speech recognition system, took up an entire room and could recognize only digits, and only when spoken by familiar voices. Today we have Google Voice, Siri and Alexa, which we can carry in our pockets. The technology for decoding imagined speech is still in its infancy, and we do not know what it will transform into in a decade’s time, or how it will transform our lives. One day, we will have a fail-proof mind-reading system.

Jerrin Panachakel (left) and an anonymous subject during one of the EEG data acquisition sessions in Prof Ramakrishnan's laboratory. The subject imagines speaking the word displayed on the monitor. Credit: the authors.

If you have recently published a research paper with Frontiers and would like to write an editorial about your research, get in touch with the Science Communications team at press@frontiersin.org with ‘guest editorial’ in your subject line.

REPUBLISHING GUIDELINES: Open access and sharing research is part of Frontiers’ mission. Unless otherwise noted, you can republish articles posted in the Frontiers news site — as long as you include a link back to the original research. Selling the articles is not allowed.