With this new system, robots can ‘read’ your mind
Directing bots with brain waves and muscle twitches could make for a speedier response time
BY MARIA TEMMING 12:00AM, JUNE 20, 2018
Getting robots to do what we want would be a lot easier if they could read our minds.
That sci-fi dream might not be so far off. With a new robot control system, a human can stop a bot from making a mistake and get the machine back on track using brain waves and simple hand gestures. People who oversee robots in factories, homes or hospitals could use this setup, to be presented at the Robotics: Science and Systems conference on June 28, to ensure bots operate safely and efficiently.
Electrodes worn on the head and forearm allow a person to control the robot. The head-worn electrodes detect electrical signals called error-related potentials — which people’s brains unconsciously generate when they see someone goof up — and send an alert to the robot. When the robot receives an error signal, it stops what it is doing. The person can then make hand gestures — detected by arm-worn electrodes that monitor electrical muscle signals — to show the bot what it should do instead.
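In software terms, that amounts to a simple stop-and-redirect loop. The sketch below is a toy Python rendering of the idea, not the researchers' code: every class and function name is a hypothetical placeholder, and the real system classifies noisy EEG and EMG signals rather than comparing targets directly.

```python
from dataclasses import dataclass

# A minimal sketch of the stop-and-redirect loop described above. All
# names here are hypothetical placeholders; the actual system classifies
# EEG error-related potentials and EMG wrist-flick muscle signals.

@dataclass
class Robot:
    target: str    # the target the robot is heading for
    correct: str   # the target it should be heading for
    done: bool = False

    def step(self):
        # One motion step toward the current target; finishing the
        # move ends the trial in this toy model.
        self.done = True

class EEG:
    def error_potential_detected(self, robot):
        # The supervisor's brain flags a mismatch within a few
        # hundred milliseconds of noticing the mistake.
        return robot.target != robot.correct

class EMG:
    def read_wrist_gesture(self, robot):
        # A left or right wrist flick indicating the intended target.
        return robot.correct

def supervisory_loop(robot, eeg, emg):
    while not robot.done:
        if eeg.error_potential_detected(robot):
            # Halt on the error signal, then redirect from the gesture.
            robot.target = emg.read_wrist_gesture(robot)
        robot.step()
    return robot.target

print(supervisory_loop(Robot(target="left", correct="right"), EEG(), EMG()))
# -> "right": the error signal and wrist flick redirected the robot
```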
MIT roboticist Daniela Rus and colleagues tested the system with seven volunteers. Each user supervised a robot that moved a drill toward one of three possible targets, each marked by an LED bulb, on a mock airplane fuselage. Whenever the robot zeroed in on the wrong target, the user’s mental error-alert halted the bot. And when the user flicked his or her wrist left or right to redirect the robot, the machine moved toward the proper target. In more than 1,000 trials, the robot initially aimed for the correct target about 70 percent of the time, and with human intervention chose the right target more than 97 percent of the time.
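Those figures imply that the halt-and-flick intervention rescued roughly nine out of ten of the robot's misses, since 0.70 + 0.30 × 0.90 ≈ 0.97. The toy Python simulation below makes that arithmetic easy to check; the 90 percent intervention rate is an assumed value derived from the reported totals, not a number from the study.

```python
import random

ROBOT_ACCURACY = 0.70        # robot's initial hit rate, as reported
INTERVENTION_SUCCESS = 0.90  # assumed rate at which the error signal plus
                             # wrist flick fixes a miss (derived, not reported)

def trial():
    if random.random() < ROBOT_ACCURACY:
        return True  # robot chose the right target on its own
    # Otherwise the supervisor's error-related potential halts the bot
    # and a wrist flick redirects it, succeeding at the assumed rate.
    return random.random() < INTERVENTION_SUCCESS

trials = 100_000
hits = sum(trial() for _ in range(trials))
print(f"correct target on {hits / trials:.1%} of trials")  # ≈ 97%
```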
The team plans to build a version of the system that recognizes a wider variety of user movements. That way, “you can gesture how the robot should move, and your motion can be more fluidly interpreted,” says study coauthor Joseph DelPreto, also a roboticist at MIT.
Issuing commands via brain and muscle activity could work especially well in noisy or poorly lit places like factories or outdoors. In such areas, other hands-off means of directing robots, such as visual cues or verbal instructions, may not work as well, says Alexandre Barachant, a brain-computer interface researcher at CTRL-Labs in New York City. This technique could also be used to direct robots that assist people who can’t speak or can hardly move, such as patients with amyotrophic lateral sclerosis, or ALS (SN: 11/16/13, p. 22).
What’s more, the system can correct robot errors almost instantly. Error-related potentials in the brain are discernible a few hundred milliseconds after a person notices a mistake, and electrical muscle signals can be detected before actual movement, Barachant says. This feature could be useful in situations where quick reaction time is key for the safety of the bot and others — as with self-driving cars or manufacturing machines.
For this system to enter widespread use, though, the equipment that tracks users’ brain activity would need to be more broadly accessible than it is now, Barachant says. Such mind-monitoring equipment can cost thousands of dollars, and electrode caps are hardly the most comfortable headwear. But if researchers could measure brain waves with cheaper, more comfortable headsets, the system could provide a relatively quick, easy way for average users to make a robot do their bidding.
Citations
J. DelPreto et al. Plug-and-play supervisory control using muscle and brain signals for real-time gesture and error detection. Robotics: Science and Systems. Pittsburgh, Pa., June 28, 2018.