Wild new ‘brainsourcing’ technique trains A.I. directly with human brainwaves

Picture a room full of desks, numbering more than two dozen in total. At each identical desk sits a person in front of a computer, playing a simple identification game. The game asks the user to complete an assortment of basic recognition tasks, such as choosing which photo in a series shows someone smiling, or depicts a person with dark hair or glasses. The player must make their decision before moving on to the next picture.

Only they don’t do it by clicking with their mouse or tapping a touchscreen. Instead, they select the right answer simply by thinking it.

Each person in the room is equipped with an electroencephalogram (EEG) skull cap; a trail of wires leads from each person to a nearby recording device that monitors the electrical activity on their scalp. The scene looks like an open-plan office in which everyone is jacked into The Matrix.


“The participants [in our study] had the simple task of just recognizing [what they were asked to look for],” Tuukka Ruotsalo, a research fellow at the University of Helsinki, which led the recently published research, told Digital Trends. “They were not asked to do anything else. They just looked at the images they were shown. We then built a classifier to see if we could identify the correct face with the target features, solely based on the brain signal. Nothing else was used, apart from the EEG signal at the moment when the participants saw the picture.”

In the experiment, a total of 30 volunteers were shown images of synthesized human faces (to avoid the chance that a participant might recognize a person they were shown, and thereby skew the results). Participants were asked to mentally label the faces based on what they saw and what they had been asked to look for. Using only that brain activity data, an A.I. algorithm learned to recognize the matching images, such as when a blonde person appeared on-screen.
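To make that concrete, here’s a minimal sketch of the kind of target/non-target EEG classification the researchers describe. This is not the Helsinki team’s actual pipeline: the epoch sizes, the flatten-everything feature extraction, and the shrinkage LDA model are all illustrative assumptions, and with the random placeholder data below, accuracy will hover near chance until real recordings are substituted.

```python
# Hypothetical sketch: classify each image-viewing EEG epoch as
# "target feature seen" (1) or not (0). Not the study's actual pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 240 trials of 32-channel, 128-sample epochs.
n_trials, n_channels, n_samples = 240, 32, 128
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)  # 1 = face had the target feature

# Flatten each epoch into a feature vector. A real pipeline would first
# band-pass filter, baseline-correct, and downsample the signal.
X = epochs.reshape(n_trials, -1)

# Shrinkage LDA copes with having far more features than trials.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```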

A fresh spin on an old idea

This is impressive stuff, but it’s not especially new. For at least the past decade, researchers have used brain activity data, gathered via EEG or fMRI, to carry out an assortment of increasingly impressive thought-reading demonstrations. In some cases, that means identifying a particular image or video, as in a recent study in which researchers at the Neurorobotics Lab in Moscow showed that it’s possible to figure out which video clips people are watching by monitoring their brain activity.

In other cases, these insights can be used to trigger certain responses. For example, in 2011, researchers at Washington University in St. Louis placed temporary electrodes over the speech center of a person’s brain and demonstrated that the person could move a computer cursor on a screen simply by thinking about where they wanted it to go. Still other studies have shown that brain data can be used to move robotic limbs or pilot drones.

What makes the University of Helsinki’s recent study novel and interesting is that it focuses on how the brain activity of a group of people, rather than a single person, can be used to draw conclusions, such as classifying images. Not only have the researchers shown that it works, but also that, at least up to a point, the more people you add to the group, the more accurate the results become.


“When we add more people into the brain-sourcing pool, so that brain data is recorded from a group of people, we achieve performance of well over 90% accuracy,” Ruotsalo said. “[That is] almost at the level of [asking a group to manually tag answers.]”

This might initially sound counterintuitive. If brain data is noisy, wouldn’t adding more people make it even noisier? After all, if you want to listen for a particularly hard-to-hear sound in a room, it’s easier if you’ve only got one person talking over the top of it rather than 10. Or 30. But as the big data revolution, and many of the most notable demonstrations of machine learning in action, have made clear, the more data you have to throw at a problem, the more accurate the resulting systems become.

“The signal is noisy in general from EEG or any other brain imaging, and participants or humans are not always attending 100%,” Ruotsalo explained. “Think about looking at pictures yourself. Sometimes, after looking [at] many, your mind could be wandering. Even with single participants, researchers often use tricks, such as repeating the same stimulus all over again to be able to average the noise out. Here, we use signals from many participants.”
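The averaging trick Ruotsalo mentions is easy to demonstrate. In the toy example below, a sine wave stands in for an evoked brain response and the noise is purely synthetic (both assumptions for illustration); averaging repeated presentations of the same stimulus shrinks independent noise by roughly the square root of the number of repetitions.

```python
# Toy demo of stimulus-repetition averaging. The sine "response" and
# Gaussian noise are stand-ins, not real EEG.
import numpy as np

rng = np.random.default_rng(1)

n_samples = 128
signal = np.sin(np.linspace(0, 2 * np.pi, n_samples))  # pretend evoked response

for n_reps in (1, 4, 16, 64):
    # Each repetition is the same underlying response plus fresh noise.
    epochs = signal + rng.standard_normal((n_reps, n_samples))
    averaged = epochs.mean(axis=0)
    residual = np.std(averaged - signal)
    print(f"{n_reps:>2} repetitions -> residual noise {residual:.2f}")
```

Quadrupling the repetitions roughly halves the leftover noise, which is why repeating a stimulus is such a common trick in single-participant EEG work.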

The chance that at least some individuals are paying attention at any given moment is far greater than with a single individual. Add in the notion of the wisdom of crowds (more on that later) and you’ve got one heck of a powerful combination.
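Pooling across people works in a similar way. The simulation below assumes each participant’s classifier is right 65% of the time on a binary question (an invented figure, not one from the study) and combines the pool by simple majority vote; group accuracy climbs steadily with group size, echoing the over-90% result Ruotsalo describes.

```python
# Hypothetical majority-vote pooling across participants. The 65%
# per-person accuracy is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)

def group_vote(true_label: int, n_people: int, p_correct: float) -> int:
    """Majority vote of n noisy voters; ties count as a miss."""
    correct = rng.random(n_people) < p_correct
    votes = np.where(correct, true_label, 1 - true_label)
    return int(votes.mean() > 0.5)

trials = 10_000
for n_people in (1, 5, 15, 30):
    hits = sum(group_vote(1, n_people, 0.65) for _ in range(trials))
    print(f"{n_people:>2} participants -> group accuracy {hits / trials:.2f}")
```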

Enter the world of brainsourcing

Tuukka Ruotsalo and his team call this group-based brain-reading “brainsourcing.” It’s a play on the term crowdsourcing, which refers to breaking one big task into smaller tasks that can be distributed among large groups of people to solve. Here in 2020, crowdsourcing might be most synonymous with money-raising platforms such as Kickstarter, where the “big task” is the startup capital needed to launch a product, and the distributed, crowd-based element involves asking people to chip in smaller sums of money.

However, crowdsourcing lends itself to other applications as well. Amazon’s Mechanical Turk platform and Apple’s ResearchKit are crowdsourcing tools that harness the power of the crowd for tasks ranging from answering surveys to carrying out important academic research. Meanwhile, companies like TaskRabbit and 99designs leverage the crowd to match customers with the right person for anything from yard work and grocery shopping to designing the perfect logo or masthead for a website.

A.I. can also benefit from crowdsourcing. Consider, for instance, Google’s reCAPTCHA technology. Most of us think of reCAPTCHA as a way for websites to check whether or not we’re a bot before letting us perform a particular task. Completing a reCAPTCHA might involve reading a wiggly line of text or clicking every image in a selection that includes a cat. But reCAPTCHAs aren’t just about testing whether we’re human; they’re also a very clever way of gathering data that can be used to make Google’s image-recognition A.I. smarter. Each time you read a fragment of text from a roadside sign in a reCAPTCHA image, you could be helping to make, say, Google’s self-driving cars slightly better at recognizing the real world. Once Google has collected enough answers for an image, it can be reasonably certain that it has the correct one.

It’s too early to say how brainsourcing could practically build on these ideas. “We’ve been trying to think about this ourselves,” Ruotsalo said. “I don’t think we even have the ideas yet. It’s just a proof-of-concept that we can do this. Now it’s open for other people to explore how well, and what kinds of tasks, and what types of groups of people we could use this for.”

The future is coming

But the potential is certainly there. Wearable EEG monitors are now starting to become commercially available, in forms that range from brain-reading headphones to smart tattoos. At present, EEG demonstrations like the one in this study measure only a tiny percentage of a person’s total brain activity. But over time that could increase, meaning richer, less binary information could be gathered. Rather than just getting a “yes” or “no” answer to a question, this technology could observe people’s responses to more complex questions, or monitor reactions to media like a TV show or movie and feed aggregate crowd data back to its makers.

“Instead of using conventional ratings or like buttons, you could simply listen to a song or watch a show, and your brain activity alone would be enough to determine your response to it,” Keith Davis, a student and research assistant on the project, said in a press release accompanying the work.

Imagine if millions of people wore EEG-tracking wearables, and you offered a percentage of them a micropayment 10 times a day in exchange for taking a few seconds to help solve a particular task. Fanciful? Maybe right now, but many of today’s crowdsourcing technologies seemed just as fanciful only a few years ago.

On the game show Who Wants To Be A Millionaire, one of the “lifelines” available to contestants is the option to ask the audience. When this one-off lifeline is triggered, audience members use voting pads attached to their seats to pick the answer to a multiple-choice question they believe is correct. The computer then tallies the results and shows them to the contestant as percentages. According to James Surowiecki’s book The Wisdom of Crowds, asking the audience yields the correct answer more than 90% of the time. That’s significantly better than the show’s 50/50 option, which eliminates two incorrect answers, and the option to phone a friend, which produces the right answer around two-thirds of the time.
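The same majority-vote arithmetic sketched earlier explains the audience’s edge. Treating the question as binary for simplicity, and assuming each audience member independently answers correctly with some better-than-chance probability (both idealizations, not claims about real studio audiences), the probability that the majority is right can be computed exactly:

```python
# Condorcet-style calculation: chance that a majority of n independent
# voters is right when each is correct with probability p. The binary
# framing and independence are simplifying assumptions.
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """P(more than half of n voters are correct); use odd n to avoid ties."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(f"{n:>3} voters at 60% each -> majority right {majority_accuracy(n, 0.6):.3f}")
```

Even with voters who are individually right only 60% of the time, a 101-person majority clears 97%, the same dynamic behind Surowiecki’s 90%-plus figure.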
