{"id":270140,"date":"2023-11-14T01:00:00","date_gmt":"2023-11-14T06:00:00","guid":{"rendered":"https:\/\/platohealth.ai\/amazing-tiny-brain-implant-translates-brain-signals-into-speech\/"},"modified":"2023-11-14T06:02:25","modified_gmt":"2023-11-14T11:02:25","slug":"amazing-tiny-brain-implant-translates-brain-signals-into-speech","status":"publish","type":"post","link":"https:\/\/platohealth.ai\/amazing-tiny-brain-implant-translates-brain-signals-into-speech\/","title":{"rendered":"Amazing Tiny Brain Implant Translates Brain Signals Into Speech","gt_translate_keys":[{"key":"rendered","format":"text"}]},"content":{"rendered":"

Amazing Tiny Brain Implant Translates Brain Signals Into Speech

Posted on November 14, 2023. Updated on November 13, 2023.


As we speak, our brains choreograph an intricate dance of muscles in our mouths and throats to form the sounds that make up words. This complex performance is reflected in the electrical signals sent to speech muscles.

In a new breakthrough, scientists have crammed an array of tiny sensors into a space no larger than a postage stamp to read this complex mix of electrical signals and predict the sounds a person is trying to make.

The 'speech prosthetic' opens the door to a future where people unable to speak due to neurological conditions can communicate through thought.

Your initial reaction might be to assume it reads minds. More accurately, the sensors detect which muscles we want to move in our lips, tongue, jaw, and larynx.

"There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak," says co-senior author, neuroscientist Gregory Cogan from Duke University.

"But the current tools available to allow them to communicate are generally very slow and cumbersome."

[Image: Depiction of the sensors on the device (the dotted portion on the white band). (Dan Vahaba/Duke University)]

Similar recent technology decodes speech at about half the average speaking rate. The team believes their device should reduce this lag, since it fits more electrodes onto a tiny array to record more precise signals, though more work is needed before the speech prosthetic can be made available to the public.

"We're at the point where it's still much slower than natural speech, but you can see the trajectory where you might be able to get there," co-senior author and Duke University biomedical engineer Jonathan Viventi said in September.

The researchers constructed their electrode array on medical-grade, ultrathin, flexible plastic, with electrodes spaced less than two millimeters apart that can detect distinct signals even from neurons packed closely together.

[Image: Current speech prosthetics with 128 electrodes (left), and the new device, which accommodates twice as many sensors in a significantly smaller array. (Dan Vahaba/Duke University)]

To test how useful these micro-scale brain recordings are for speech decoding, they temporarily implanted their device in four patients without speech impairment.

The team seized the opportunity while the patients were undergoing surgery for other reasons (three for movement disorders and one for tumor removal), which meant they had to work quickly.

"I like to compare it to a NASCAR pit crew," Cogan says. "We don't want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes.

"As soon as the surgeon and the medical team said 'Go!' we rushed into action and the patient performed the task."

While the tiny array was implanted, the team was able to record activity in the brain's speech motor cortex as it signaled to speech muscles while patients repeated 52 meaningless words. These 'non-words' included nine different phonemes, the smallest units of sound that make up spoken words.


The recordings showed that phonemes elicited different patterns of signal firing, and the researchers noted these firing patterns occasionally overlapped one another, much like musicians in an orchestra blending their notes. This suggests our brains dynamically adjust our speech in real time as the sounds are being made.

Duke University biomedical engineer Suseendrakumar Duraivel used a machine learning algorithm to evaluate the recorded information and determine how well brain activity could predict speech.

Some sounds were predicted with 84 percent accuracy, particularly when the sound came at the start of a non-word, like the 'g' in 'gak'. Accuracy varied and dropped in more complicated situations, such as for phonemes in the middle or at the end of non-words; overall, the decoder had an average accuracy of 40 percent.
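To make the accuracy figures concrete: decoding accuracy of this kind is typically computed by comparing the decoder's predicted phoneme labels against the phonemes the speaker actually produced, both overall and per phoneme. The sketch below is purely illustrative (it is not the study's code, and the labels are invented), assuming a simple paired-label comparison:

```python
from collections import defaultdict

def phoneme_accuracy(true_phonemes, predicted_phonemes):
    """Return (overall accuracy, per-phoneme accuracy) for paired label lists."""
    assert len(true_phonemes) == len(predicted_phonemes)
    correct = defaultdict(int)  # correct predictions per true phoneme
    total = defaultdict(int)    # occurrences per true phoneme
    for true, pred in zip(true_phonemes, predicted_phonemes):
        total[true] += 1
        if true == pred:
            correct[true] += 1
    per_phoneme = {ph: correct[ph] / total[ph] for ph in total}
    overall = sum(correct.values()) / len(true_phonemes)
    return overall, per_phoneme

# Toy, invented label sequences -- not data from the study.
true_seq = ["g", "a", "k", "g", "a", "k", "p", "a", "b", "p"]
pred_seq = ["g", "a", "p", "g", "o", "k", "p", "a", "k", "b"]
overall, per_ph = phoneme_accuracy(true_seq, pred_seq)
print(overall)      # fraction of all phonemes decoded correctly
print(per_ph["g"])  # accuracy for the phoneme 'g' alone
```

A per-phoneme breakdown like this is what lets researchers report that some sounds (such as word-initial 'g') decode far better than the overall average.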

That result is based on just 90 seconds of data from each participant, which is impressive considering that existing technology needs hours of data to decode speech.

As a result of this promising start, a substantial grant from the National Institutes of Health has been awarded to support further research and fine-tuning of the technology.

"We're now developing the same kind of recording devices, but without any wires," says Cogan. "You'd be able to move around, and you wouldn't have to be tied to an electrical outlet, which is really exciting."

The study has been published in Nature Communications.
