By: CIFAR
29 Aug, 2018
At the 2018 Game Developers Conference (GDC), CIFAR presented a panel session on “The Future of VR: Neuroscience and Biosensor Driven Development”.
The goal of the panel was to bring together visionary perspectives from science and game development to explore how brain research and biosensors such as EEG and eye/motion tracking can advance both compelling VR and brain science.
The panel included CIFAR Azrieli Global Scholars Dr. Craig Chapman (University of Alberta) and Dr. Alona Fyshe (University of Victoria/Alberta), who shared their cutting-edge research in human cognition, movement, and neuroscience. They were joined by industry representative Jake Stauch (CEO, NeuroPlus), who shared his experience developing a lightweight EEG headset that integrates with gameplay in real time. Kent Bye (host, Voices of VR), an expert in the virtual reality field, moderated the session before an audience of almost 200 participants from across the gaming industry. Key insights from the session are described below.
Chapman is a cognitive neuroscientist who is fascinated by the way we move. His work is motivated by a simple idea: to watch someone move is to watch them think. Chapman is leveraging motion tracking biosensors to interpret what body language communicates about internal states.
Chapman shared insights from some of his research in which he recorded video of people pushing one of two buttons, with the choice based either on off-screen instruction or on their own decision. These recordings were then played for other study participants, who were unaware that there was any difference between the instructed and self-determined videos. Participants were tasked with pointing to the button they thought would be pressed before the recorded participant reached it. Chapman’s findings show that viewers are quicker to point to the button when they are watching someone who chose the button themselves: the location of the chosen button is visible in that person’s body language early in the motion.
This study indicates that we read and interpret the body language of others unconsciously. More broadly, this research reveals how information-rich human movement is and how powerful movement can be as a game input. Chapman suggested that identifying and understanding these hidden subtleties is important for developing realistic in-game characters and interactions, and for building truly immersive and interactive virtual reality.
In a new line of research, Chapman is collaborating with Fyshe to incorporate brain data to better understand the links between mind and body. This brain data has strong potential to improve the world of gaming. For example, a game developer might use such data to identify the optimal moment to surprise a player, strengthening immersion. He noted, however, that harnessing this research will require collaboration between the game industry and neuroscience experts.
Fyshe literally reads minds. She records the brain activity of people reading, then builds statistical models that predict which words a person is reading from their brain activity. When such a model proves effective, it gives researchers a window into what the brain is doing in particular situations and how it works.
Fyshe noted that she came to neuroscience by way of computer science and currently specializes in machine learning, a branch of science in which the computer programs itself from collected data. She shared an example of machine learning in action: given a dataset of brain images and a corresponding collection of mental-state labels indicating whether each person is drowsy or rested, we can expose the computer to pairs of images and labels, and the computer can learn how to predict the mental state of a new participant. Machine learning techniques can pick up not only on broad data patterns that are visible to the human eye, but also on more subtle details that humans might not perceive.
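To illustrate how such a model is built, here is a minimal supervised-learning sketch in Python. It is not Fyshe’s actual pipeline: the “brain images” are random placeholder features, the drowsy/rested labels are synthetic, and scikit-learn’s logistic regression stands in for whatever model a lab would actually use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 200 "brain images" flattened to 64 features each,
# labelled 0 = rested, 1 = drowsy. A real study would use EEG/fMRI features.
n_samples, n_features = 200, 64
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)
X[y == 1, :8] += 0.75  # inject a weak signal so the toy labels are learnable

# Expose the model to (image, label) pairs, then test it on held-out examples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The key point is the train/test split: the model is judged on data it never saw during training, which is what lets researchers claim it has learned a genuine pattern rather than memorizing the examples.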
With current tools, we can tell from brain images whether a person has learned a task, whether events unfolded as they expected, or whether they are excited. This can be highly useful in game development: if a developer knows whether a player is excited or frustrated, they can tune the game to maximize excitement and minimize frustration, producing a much more engaged player. Fyshe noted that this technology is available today, with companies producing biosensor add-ons such as EEG for VR headsets, and that game developers can start using brain data to produce better games now.
Stauch is a game developer who is already using brain data as an input to his games. He is the founder and CEO of NeuroPlus, a start-up company that has developed an EEG headset and specializes in neuro-based games for kids with ADHD.
The games build on a core theme: the better you concentrate, the more good things happen in the game; if you are tense or move too much, you lose points and health. The neurological components of the games aim to make the experience more engaging for kids, but also to have a positive impact on cognitive health. A study of the effectiveness of a NeuroPlus game for treating ADHD in children showed that playing the game was nearly four times more effective than traditional ADHD treatment (i.e., medication).
Stauch suggested that while technological advancements often overload the user with more information and interruptions, the advancement of neurological interfaces could help users become more focused and less distracted. However, he cautioned that this new type of interface presents new challenges for game design, as the input collected from a headset differs significantly from that of a traditional controller button. Headset input is not instantaneous, and results are probabilistic: brainwaves need to be observed over a period of seconds to get a reliable estimate of a player’s brain state. This contrasts with the traditional controller interface, which is immediate and accurate because it is designed around a button being either pressed or not pressed. Gameplay would quickly become frustrating if button input arrived with a few seconds’ delay and only signalled the player’s intention accurately part of the time. Stauch emphasised that game developers need to keep this in mind, not substituting brain data for a button but instead building new gameplay paradigms that leverage this novel input.
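One way to picture the design problem is to contrast a button press, which is binary and immediate, with an EEG-derived signal that arrives as a noisy probability and only becomes trustworthy when averaged over a few seconds. The Python sketch below is hypothetical: read_focus_probability() is a stand-in for whatever a headset SDK actually exposes, and the window length and thresholds are invented for illustration.

```python
import random
from collections import deque

def read_focus_probability():
    # Placeholder: pretend the headset reports P(player is focused) each frame.
    return min(1.0, max(0.0, random.gauss(0.7, 0.2)))

WINDOW_FRAMES = 180      # roughly 3 seconds of samples at 60 fps
FOCUS_THRESHOLD = 0.65

window = deque(maxlen=WINDOW_FRAMES)

def player_is_focused():
    """True only when the rolling average over the window is high enough."""
    window.append(read_focus_probability())
    if len(window) < WINDOW_FRAMES:
        return False     # not enough evidence accumulated yet
    return sum(window) / len(window) > FOCUS_THRESHOLD

# Game-loop fragment: the brain signal drives a gradual effect (charging a
# power meter) rather than standing in for an instantaneous button press.
power = 0.0
for frame in range(600):                 # ~10 seconds of simulated frames
    if player_is_focused():
        power = min(100.0, power + 0.5)  # reward sustained concentration
    else:
        power = max(0.0, power - 0.1)    # slow decay while unfocused
print("power after ~10 s:", round(power, 1))
```

Designing around a slowly accumulating meter rather than a single trigger is one way to keep a delayed, probabilistic input from feeling unresponsive.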
Kent Bye, host of the podcast Voices of VR, chaired an interactive discussion with the panel and audience. Key areas of discussion included how the game industry can help further brain research and where current research can already inform game development.
For researchers, the value was clear: game makers, experts in crafting authentic, meaningful, and engaging experiences, could help design elements of scientific experiments to more effectively study presence, focus, and learning, while insights from this research could in turn drive new advances in immersive game development. This sort of cross-sectoral collaboration has the potential to open new frontiers in both virtual reality development and brain research.