Reuters — In a laboratory in the Computer Engineering Department at the University of Washington, Justin Abernethy slides into a seat in front of a video monitor. A colleague adjusts the working end of a Transcranial Magnetic Stimulation (TMS) device that must be positioned correctly to send a signal to the brain.
Information on a computer screen helps guide the TMS coil to the right spot so that the stimulation can be delivered non-invasively, explained Rajesh Rao, Professor of Computer Science & Engineering at the University of Washington in Seattle.
“The fundamental question that we were asking here was: Is it possible to send information from artificial sensors or computer-created worlds directly into the brain, so that the brain can start to understand that information and make use of it to solve a task?” said Rao, who is also director of Seattle’s Center for Sensorimotor Neural Engineering.
This is the first step in figuring out how direct brain stimulation can help people interact with virtual realities. In the experiment, each participant must navigate 21 different mazes with only input from direct brain stimulation. There are no visual, auditory, or other sensory cues.
When the TMS machine sends a signal to the test subject’s brain, it is perceived as a very brief flash of light, known as a phosphene. The person being tested is then asked to navigate through a simple maze, moving forward or descending, depending on whether or not they perceived a phosphene. A researcher controls the experiment, choosing from a variety of maze layouts.
“So what’s going to happen is the TMS is going to present the phosphene to me, which is a brief flash of light represented in my brain and whenever I see that phosphene I’ll know there’s a wall in this maze and my job is to go down the ladder,” explained Justin Abernethy, research assistant at the University of Washington’s Institute for Learning & Brain Science (I-LABS). “Whenever I don’t see a phosphene, so I don’t see a flash of light, my job is just to go forward in the hall and get out of the maze.”
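The rule Abernethy describes reduces to a binary choice at each step of the maze. As a minimal sketch, assuming hypothetical interfaces (the function and event names below are illustrative, not from the study), the logic looks like this:

```python
# Sketch of the navigation rule described above (hypothetical interfaces):
# a phosphene signals a wall ahead, so the subject descends the ladder;
# no phosphene means the hallway is clear, so the subject moves forward.

def choose_move(phosphene_seen: bool) -> str:
    """Map a single stimulation event to the subject's next move."""
    return "descend ladder" if phosphene_seen else "move forward"

# Toy example: stimulation events for a five-step maze with walls
# at the second and fourth steps.
stimulation_events = [False, True, False, True, False]
moves = [choose_move(seen) for seen in stimulation_events]
print(moves)
```

The point of the sketch is how little information crosses the interface: one bit per step, delivered entirely through the stimulation channel rather than the ordinary senses.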
A paper on the experiment was published online in November in ‘Frontiers in Robotics and AI’. According to the study, when direct brain stimulation was used to send data, five test subjects made the correct moves 92 percent of the time. That’s compared to 15 percent of the time without TMS.
“We’re essentially trying to give humans a sixth sense, so to speak. They’re navigating this maze with information delivered directly to their brain non-invasively through magnetic stimulation; they’re navigating this maze without their traditional five senses,” said I-LABS researcher Darby Losey.
Researchers believe this type of non-invasive sensory stimulation technology could eventually lead to a wide variety of real-world uses, including, potentially, help for the visually impaired.
“If you look at virtual reality today, we use goggles, headsets, and displays, but ultimately it’s the brain that creates reality for us. So you can look at, in the future, the possibility that the experience of virtual reality could be much richer if we can send much richer information into the brain directly, for example, the sense of touch, maybe even the sense of smell or taste, which we cannot do today,” said Rao.
“The kind of approach we are taking and that we’re suggesting here could potentially lead to much richer virtual reality experiences, as well as lead to new technologies to assist people with sensory deficits, for example, the blind, using non-invasive brain stimulation technologies.”
The research team is also investigating how varying the location and intensity of direct brain stimulation could create richer visual and other sensory experiences, currently lacking in augmented and virtual reality, and is exploring a smaller headset for the TMS device.
The University of Washington team says it could be fifteen to twenty years before their work can be put to commercial use.