A 69-year-old man with paralysis and a brain implant was able to fly a virtual quadcopter through difficult obstacle courses, just by thinking about moving his fingers, thanks to an experimental device developed by researchers from Stanford University.

The participant, who has quadriplegia from a C4 spinal cord injury, navigated obstacle courses and random flight patterns using neural signals from two tiny electrode arrays implanted in his brain. His ability to combine multiple movements simultaneously represents significant progress in brain-computer interface technology.

For the study, researchers developed a program that can decode four different control dimensions from brain signals. This level of control matches what able-bodied players achieve with physical controllers, the scientists said.

“Just as able-bodied users of digital devices use their hands to manipulate keyboards and game controllers, this system provides an intuitive framework for a brain-controlled digital interface, offering options for entertainment and socialization as well as eliciting feelings of enablement.”

“He expressed on multiple occasions (even before enrollment in the clinical trial) that one of his most important personal priorities was to use a BCI to control a quadcopter,” the researchers wrote in their paper. “He felt controlling a quadcopter would enable him, for the first time since his injury, to figuratively ‘rise up’ from his bed/chair.”

This desire drove impressive results: after several attempts, the unnamed participant completed 12 laps around an obstacle course, averaging 222 seconds per lap, and navigated through 28 randomly placed rings in just 10 minutes.

How the Technology Works

When you think about moving your fingers, neurons (brain cells) in the motor cortex (the brain’s movement control center) fire electrical signals. Even if the body is paralyzed, these signals still exist. Brain-computer interface research has long tried to decode these signals to drive external devices that carry out what the brain intends.

The system that helped this man fly a virtual drone relies on two 96-channel silicon microelectrode arrays placed in the “hand knob” area of the participant’s motor cortex. The electrodes pick up spike-band power, a measure of how active neurons are. For example, when the participant imagines flexing their thumb, specific neurons fire rapidly and these electrodes detect the neural activity patterns.
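
The article describes spike-band power only at a high level. As a rough illustration, a sketch of how such a feature might be computed from one raw electrode trace could look like the following; the filter band, sampling rate, and 20 ms bin width are illustrative assumptions, not the study’s published parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 30_000   # assumed sampling rate in Hz, typical for intracortical recordings
BIN_MS = 20   # assumed bin width for the decoder features

def spike_band_power(raw_voltage: np.ndarray) -> np.ndarray:
    """Band-pass the raw signal into the spike band, then average its
    power within short, non-overlapping time bins."""
    # Isolate the spike band (roughly 250-5000 Hz), where action potentials
    # from neurons near the electrode dominate the signal.
    sos = butter(4, [250, 5000], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, raw_voltage)

    # Square the filtered signal (instantaneous power) and average per bin.
    samples_per_bin = FS * BIN_MS // 1000
    n_bins = len(filtered) // samples_per_bin
    power = filtered[: n_bins * samples_per_bin] ** 2
    return power.reshape(n_bins, samples_per_bin).mean(axis=1)

# One second of synthetic noise standing in for a raw electrode trace:
features = spike_band_power(np.random.randn(FS))
print(features.shape)  # (50,) -> fifty 20 ms feature bins
```

With 192 channels (two 96-channel arrays), each 20 ms bin would yield a 192-dimensional feature vector for the decoder described next.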

Then, a computer uses a machine learning algorithm (like a smart translator) to convert these signals into finger movements in real time. The algorithm was trained by having the participant watch a virtual hand move and try to mimic it mentally. Over time, the system learned to associate specific patterns of electrical activity with specific finger movements.
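
The article doesn’t specify the decoder architecture, so here is a minimal sketch of the general idea using plain ridge regression: each bin of neural features is paired with the finger velocities the participant was cued to imagine, and a linear map is fit to the pairs. The array sizes and the linear model itself are illustrative assumptions; the study’s actual decoder is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data collected "open loop": the participant watches a virtual
# hand move and imagines matching it, so every bin of neural features is
# paired with the cued finger velocities.
n_bins, n_channels, n_fingers = 5_000, 192, 4   # 2 arrays x 96 channels
X = rng.standard_normal((n_bins, n_channels))    # spike-band power features
Y = rng.standard_normal((n_bins, n_fingers))     # cued finger velocities

# Ridge regression in closed form: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

def decode(features: np.ndarray) -> np.ndarray:
    """Convert one bin of neural features into finger velocities."""
    return features @ W

print(decode(rng.standard_normal(n_channels)))  # 4 decoded finger dimensions
```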

The control scheme then mapped the decoded finger movements to specific drone actions, as sketched in the code after this list:

  • Thumb movements control forward/backward and left/right motion
  • Index-middle finger movements control altitude
  • Ring-little finger movements handle rotation
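
Putting it together, a hypothetical version of that fixed mapping could look like this; the axis names, signs, and function names are assumptions for illustration rather than the study’s published convention.

```python
from dataclasses import dataclass

@dataclass
class DroneCommand:
    forward: float   # +: forward, -: backward
    right: float     # +: right,   -: left
    up: float        # +: climb,   -: descend
    yaw: float       # +: rotate clockwise, -: counterclockwise

def fingers_to_command(thumb_y: float, thumb_x: float,
                       index_middle: float, ring_little: float) -> DroneCommand:
    """Map the four continuously decoded finger dimensions onto the four
    drone control dimensions. Because each dimension is decoded
    independently, commands blend naturally, e.g. turning while flying
    forward."""
    return DroneCommand(
        forward=thumb_y,    # thumb up/down        -> forward/backward
        right=thumb_x,      # thumb left/right     -> strafe left/right
        up=index_middle,    # index-middle flexion -> altitude
        yaw=ring_little,    # ring-little flexion  -> rotation
    )

# A banked turn: forward thumb motion combined with ring-little rotation.
print(fingers_to_command(0.8, 0.0, 0.0, 0.4))
```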

“Flying it is tiny little finesses off a middle line, a little bit up, a little bit down,” the patient explained. The system allows for smooth, simultaneous control across all dimensions, enabling complex maneuvers like combining forward movement with turns.

Mind-reading technology is not exactly new, but AI has given the discipline a massive boost. Recent advances across multiple labs and companies show how quickly the field is evolving.

For instance, Neuralink, Elon Musk’s brain-computer interface company, has made headlines with its first two human patients. Its second participant, known as “Alex,” broke records for brain-computer interface cursor control and managed to play the video game Counter-Strike 2 and use 3D design software just one month after receiving the implant.

Elon Musk expects brain-computer interface devices to reach massive scale. “If all goes well, there will be hundreds of people with Neuralinks within a few years, maybe tens of thousands within five years, millions within ten years,” Musk tweeted shortly after sharing the results of Alex’s performance.

However, some experts believe Neuralink’s approach is too invasive. This led one of its researchers to leave the company and found another brain-computer interface startup, Precision Neuroscience, which is working on a device that registers activity by “wrapping” the brain rather than sticking needles into it.

Synchron, a New York-based company, has developed a less invasive brain implant called the Stentrode that avoids traditional brain surgery by being inserted through blood vessels. Their patient, a 64-year-old identified as “Mark,” successfully controlled Amazon Alexa devices and interacted with an Apple Vision Pro headset using just his thoughts. The device is implanted via the jugular vein and positioned near the motor cortex.

There are many other examples, from practical to more experimental.

Unbabel has been able to convert thoughts directly into text, UC San Francisco researchers have developed a thought-to-speech system, and even Meta has been working on non-invasive brain-machine interfaces for augmented reality applications, developing a system that converts thoughts into images almost in real time.

In 2023, UC Berkeley researchers were able to reconstruct music directly from brain activity. Their system successfully recreated Pink Floyd’s “Another Brick in the Wall, Part 1” by analyzing neural signals from epilepsy patients. The breakthrough suggests potential applications for helping speech-impaired patients communicate through thought.
