How you and your friends can play a video game together using only your minds

Telepathic communication might be one step closer to reality thanks to new research from the University of Washington. A team created a method that allows three people to work together to solve a problem using only their minds.

In BrainNet, three people play a Tetris-like game using a brain-to-brain interface. This is the first demonstration of two things: a brain-to-brain network of more than two people, and a person being able to both receive and send information to others using only their brain. The team published its results April 16 in the Nature journal Scientific Reports, though this research previously attracted media attention after the researchers posted it in September to the preprint site arXiv.

“Humans are social beings who communicate with each other to cooperate and solve problems that none of us can solve on our own,” said corresponding author Rajesh Rao, the CJ and Elizabeth Hwang professor in the UW’s Paul G. Allen School of Computer Science & Engineering and a co-director of the Center for Neurotechnology. “We wanted to know if a group of people could collaborate using only their brains. That’s how we came up with the idea of BrainNet: where two people help a third person solve a task.”

As in Tetris, the game shows a block at the top of the screen and a line that needs to be completed at the bottom. Two people, the Senders, can see both the block and the line but can’t control the game. The third person, the Receiver, can see only the block but can tell the game whether to rotate the block to successfully complete the line. Each Sender decides whether the block needs to be rotated and then passes that information from their brain, through the internet and to the brain of the Receiver. Then the Receiver processes that information and sends a command — to rotate or not rotate the block — to the game directly from their brain, hopefully completing and clearing the line.
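In outline, each round follows a simple pipeline: two Senders decide, their decisions travel over the network, and the Receiver integrates them and acts. Below is a minimal, self-contained Python sketch of that flow. The function names and the stand-in decision logic are illustrative assumptions only; the real system routes these messages between EEG rigs and a TMS stimulator, not between Python functions.

```python
# Illustrative sketch of one BrainNet round (not the study's software).
import random

def sender_suggestion(block_needs_rotation: bool) -> str:
    """A Sender sees the full board and suggests 'yes' (rotate) or 'no'."""
    return "yes" if block_needs_rotation else "no"

def receiver_decision(suggestions: list[str]) -> str:
    """The Receiver, who cannot see the line, follows the majority vote."""
    yes_votes = suggestions.count("yes")
    return "yes" if yes_votes > len(suggestions) / 2 else "no"

# One simulated trial: both Senders observe whether the block must rotate.
needs_rotation = random.choice([True, False])
suggestions = [sender_suggestion(needs_rotation) for _ in range(2)]
decision = receiver_decision(suggestions)
print(f"block needs rotation: {needs_rotation}, receiver chose: {decision}")
```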

The team asked five groups of participants to play 16 rounds of the game. For each group, all three participants were in different rooms and couldn’t see, hear or speak to one another.

The Senders each could see the game displayed on a computer screen. The screen also showed the word “Yes” on one side and the word “No” on the other side. Beneath the “Yes” option, an LED flashed 17 times per second. Beneath the “No” option, an LED flashed 15 times per second.

“Once the Sender makes a decision about whether to rotate the block, they send ‘Yes’ or ‘No’ to the Receiver’s brain by concentrating on the corresponding light,” said first author Linxing Preston Jiang, a student in the Allen School’s combined bachelor’s/master’s degree program.

The Senders wore electroencephalography caps that picked up electrical activity in their brains. The lights’ different flashing rates trigger distinct patterns of activity in the brain, which the caps can pick up. So, as the Senders stared at the light for their corresponding selection, the cap picked up those signals, and the computer provided real-time feedback by displaying a cursor on the screen that moved toward their desired choice. The selections were then translated into a “Yes” or “No” answer that could be sent over the internet to the Receiver.
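This is the classic recipe behind steady-state visually evoked potentials: staring at a light flickering at a given rate drives measurable brain activity at that same frequency. A minimal sketch of that decoding idea is below; the sampling rate, window length, and function names are illustrative assumptions, not the study’s actual pipeline.

```python
# Sketch of frequency-based choice decoding: compare spectral power at
# the two flicker frequencies (17 Hz for "Yes", 15 Hz for "No").
import numpy as np

FS = 250                 # assumed EEG sampling rate in Hz
YES_HZ, NO_HZ = 17.0, 15.0

def band_power(signal, target_hz, fs=FS, half_width=0.5):
    """Mean power in a narrow band around target_hz, via an FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = np.abs(freqs - target_hz) <= half_width
    return power[mask].mean()

def classify_choice(eeg_window):
    """Return 'yes' if 17 Hz activity dominates the window, else 'no'."""
    yes_p = band_power(eeg_window, YES_HZ)
    no_p = band_power(eeg_window, NO_HZ)
    return "yes" if yes_p > no_p else "no"

# Demo on synthetic data: a 2-second "recording" dominated by 17 Hz.
t = np.arange(2 * FS) / FS
fake_eeg = np.sin(2 * np.pi * YES_HZ * t) + 0.5 * np.random.randn(t.size)
print(classify_choice(fake_eeg))  # typically prints "yes"
```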

“To deliver the message to the Receiver, we used a cable that ends with a wand that looks like a tiny racket behind the Receiver’s head. This coil stimulates the part of the brain that translates signals from the eyes,” said co-author Andrea Stocco, a UW assistant professor in the Department of Psychology and the Institute for Learning & Brain Sciences, or I-LABS. “We essentially ‘trick’ the neurons in the back of the brain to spread around the message that they have received signals from the eyes. Then participants have the sensation that bright arcs or objects suddenly appear in front of their eyes.”

If the answer was “Yes, rotate the block,” then the Receiver would see a bright flash. If the answer was “No,” then the Receiver wouldn’t see anything. The Receiver received input from both Senders before deciding whether to rotate the block. Because the Receiver also wore an electroencephalography cap, they used the same method as the Senders to select yes or no.
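The delivery step thus amounts to a one-bit channel: a pulse strong enough to evoke a phosphene for “yes,” no percept for “no.” The sketch below illustrates that mapping; the TmsCoil class, its pulse method, and the intensity values are hypothetical stand-ins, since real stimulators use vendor-specific APIs and per-participant phosphene thresholds.

```python
# Illustrative mapping from a transmitted answer to a TMS pulse.

class TmsCoil:
    """Stand-in for a stimulator coil positioned over the visual cortex."""
    def pulse(self, intensity: float) -> None:
        print(f"TMS pulse at {intensity:.0%} of max output")

def deliver_answer(coil: TmsCoil, answer: str,
                   phosphene_threshold: float = 0.65) -> None:
    if answer == "yes":
        # Above threshold: the Receiver perceives a bright flash.
        coil.pulse(intensity=phosphene_threshold + 0.10)
    else:
        # "No" is conveyed by the absence of a phosphene.
        pass

deliver_answer(TmsCoil(), "yes")  # Receiver sees a flash -> rotate
deliver_answer(TmsCoil(), "no")   # no flash -> don't rotate
```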

The Senders got a chance to review the Receiver’s decision and send corrections if they disagreed. Then, once the Receiver sent a second decision, everyone in the group found out if they cleared the line. On average, each group successfully cleared the line 81% of the time, or for 13 out of 16 trials.

The researchers wanted to know if the Receiver would learn over time to trust one Sender over the other based on their reliability. The team purposely picked one of the Senders to be a “bad Sender” and flipped their responses in 10 out of the 16 trials — so that a “Yes, rotate the block” suggestion would be given to the Receiver as “No, don’t rotate the block,” and vice versa. Over time, the Receiver switched from being relatively neutral about both Senders to strongly preferring the information from the “good Sender.”
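One way to picture that adaptation is reliability-weighted voting, where each Sender’s vote counts in proportion to how often they have been right. The sketch below is purely illustrative of the idea; the study measured this shift in human behavior and did not implement any such algorithm.

```python
# Illustrative reliability-weighted voting (a hypothetical model, not
# the study's method): track each Sender's accuracy and weight votes.
from collections import defaultdict

class TrustTracker:
    def __init__(self):
        self.correct = defaultdict(int)
        self.total = defaultdict(int)

    def update(self, sender_id: str, suggestion: str, truth: str) -> None:
        self.correct[sender_id] += int(suggestion == truth)
        self.total[sender_id] += 1

    def reliability(self, sender_id: str) -> float:
        # Start neutral (0.5) before any evidence accumulates.
        if self.total[sender_id] == 0:
            return 0.5
        return self.correct[sender_id] / self.total[sender_id]

    def decide(self, suggestions: dict[str, str]) -> str:
        # Weight each Sender's vote by their observed reliability.
        score = sum(self.reliability(s) * (1 if vote == "yes" else -1)
                    for s, vote in suggestions.items())
        return "yes" if score > 0 else "no"

tracker = TrustTracker()
tracker.update("good", "yes", "yes")  # good Sender was right
tracker.update("bad", "no", "yes")    # bad Sender's answer was flipped
print(tracker.decide({"good": "yes", "bad": "no"}))  # -> "yes"
```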

The team hopes that these results pave the way for future brain-to-brain interfaces that allow people to collaborate to solve tough problems that one brain alone couldn’t solve. The researchers also believe this is an appropriate time to start a larger conversation about the ethics of this kind of brain augmentation research and to develop protocols that ensure people’s privacy is respected as the technology improves. The group is working with the Neuroethics team at the Center for Neurotechnology to address these types of issues.

“But for now, this is just a baby step. Our equipment is still expensive and very bulky and the task is a game,” Rao said. “We’re in the ‘Kitty Hawk’ days of brain interface technologies: We’re just getting off the ground.”

Co-authors include Darby Losey, a graduate student at Carnegie Mellon University who completed this research as a UW undergraduate in computer science and neurobiology; Justin Abernethy, a research assistant at I-LABS; and Chantel Prat, an associate professor in the Department of Psychology and I-LABS. This research was funded by the National Science Foundation, a W.M. Keck Foundation Award and a Levinson Emerging Scholars Award.

For more information, contact Jiang at [email protected], Stocco at [email protected] or Rao at [email protected].

Krilpa on July 2nd, 2019 at 02:03 UTC »

I'm married to one of the researchers on this project and friends with the leads. Trying to convince them to do an AMA. Any interest?

brbwiki on July 1st, 2019 at 21:42 UTC »

The technique of communicating via TMS-induced phosphenes is really clever. I wonder how refined it could become in delineating particular shapes/colors. Would probably be used against us in the coming boring dystopia tho. Marketing skipping your eyes and going straight to your visual cortex.

Lickmehardi on July 1st, 2019 at 21:34 UTC »

“We essentially ‘trick’ the neurons in the back of the brain to spread around the message that they have received signals from the eyes. Then participants have the sensation that bright arcs or objects suddenly appear in front of their eyes.”

Sounds like a very weird and crude interface.