Converting brain signals into words is a challenging research goal for people who, because of disease, can neither speak nor type on a keyboard. Amid this work, a research team at the University of California, San Francisco, developed a system that predicts prepared questions and answers from human brain signals and communicates at a natural conversational speed.
The research team is developing a system that allows people with severe disabilities to communicate more smoothly. So far, no assistive system lets people with disabilities exchange meaning at the pace of natural conversation. The study, funded by Facebook, was conducted in cooperation with three patients with epilepsy who were being evaluated for neurosurgery. To localize the site of their epileptic seizures before surgery, each patient wore an electrode array that monitored brain activity for at least a week, and during this period their brain activity was recorded.
During hospitalization, the patients listened to 9 questions and answered by selecting from a list of 24 answers. The research team built and trained computer models to match brain activity patterns to these question-and-answer patterns. As a result, the trained model could identify, almost instantly and from brain signal patterns alone without audio, which question had been asked with 76% accuracy and which answer was given with 61% accuracy. The answer decoded from the brain signals can also be shown as text on a display so the patient can confirm it.
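The article does not describe the team's actual model, but the basic idea of matching brain activity patterns to a fixed set of question labels can be sketched with a toy nearest-centroid classifier on synthetic feature vectors. Everything here is an illustrative assumption: the feature dimension, the noise level, and the classifier itself (real systems use far richer neural features and trained statistical models).

```python
import numpy as np

# Toy sketch, NOT the UCSF team's method: classify a synthetic
# "brain activity" feature vector into one of 9 question labels
# (the study used 9 questions) with a nearest-centroid rule.
rng = np.random.default_rng(0)

N_QUESTIONS = 9   # number of question classes, as in the study
N_FEATURES = 16   # hypothetical per-trial feature dimension

# Simulate data: each question evokes a characteristic pattern.
centroids_true = rng.normal(size=(N_QUESTIONS, N_FEATURES))

def make_trials(n_per_class, noise=0.3):
    """Generate noisy trials around each class's true pattern."""
    X, y = [], []
    for label in range(N_QUESTIONS):
        X.append(centroids_true[label]
                 + noise * rng.normal(size=(n_per_class, N_FEATURES)))
        y.extend([label] * n_per_class)
    return np.vstack(X), np.array(y)

X_train, y_train = make_trials(20)
X_test, y_test = make_trials(5)

# "Training" = averaging each class's trials into a centroid.
centroids = np.vstack(
    [X_train[y_train == k].mean(axis=0) for k in range(N_QUESTIONS)]
)

def decode(X):
    """Decoding = picking the nearest centroid for each new trial."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

accuracy = (decode(X_test) == y_test).mean()
print(f"decoding accuracy: {accuracy:.0%}")
```

On this clean synthetic data the toy decoder scores far above the paper's reported figures; the point is only the train-then-decode structure, not the difficulty of the real neural signals.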
In this experiment, brain signals were decoded to answer questions such as the patient's favorite musical genre, whether the room temperature was cold, and whether the lights were bright or dark. This was achieved with only a fairly limited vocabulary, but the team expects to expand both the accuracy and the breadth of the translation in the future.
This study is a big step toward a system that enables people with disabilities to communicate smoothly, but many tasks remain. The software needs to be improved, and it should convert brain signals into speech as well as text.
Another challenge is reading sentences spoken only in the mind. The current experiment reads the words a subject intends to say from the brain signals that drive the lips, tongue, and jaw, but certain injuries or neurological disorders may not produce enough of these signals to detect. To reach a wider range of people with disabilities, a method is needed to detect sentences formed in the brain itself.
However, searching for and reading words that merely come to mind could reveal inner thoughts a person does not want others to know, which raises ethical issues. The research team says that, although it may be technically possible, they are not interested in developing technology to read such unspoken thoughts. On the other hand, doctors and scientists have argued that when a patient wants to communicate but is prevented by a disability, they have a responsibility to restore the basic human ability to communicate with others.