Brain-Computer Interface Project (MBSI)


So I had the opportunity to co-found a club called the Melbourne Bioinnovation Student Initiative (MBSI).

We are a group of undergraduate, postgraduate and medical students with backgrounds across science, engineering and medicine. We share an interest in technology and its applications in medicine and the biological sciences.

Our aim is to help students and other young people explore this space by providing opportunities for multidisciplinary collaboration and learning.



After the hard work I put in on a BCI project with Dr Sam John, I thought: why not start a team and build bigger things?

Starting the project would also target some key things I'm passionate about:

  1. Build cool things: apply theory to real-life projects
  2. Make BCI more accessible to the general cohort: it's research-heavy and hard to get into
  3. Get medical students and science students working together
  4. Create opportunities for students to get their hands dirty
  5. Raise awareness of BCI and of the users who need this technology

Wheelchair team

Stephen Hawking, we gotchu fam

Problem statement

Traditional BCI wheelchairs have a few problems:

  1. They rely on gaze to steer: despite very high decoding accuracy and working well in practice, this means the user can't freely look around while driving
  2. They use expensive equipment in order to perform heavy-duty decoding
  3. They give too much control to the BCI, which makes the mental effort of controlling the wheelchair demanding
  4. They often cannot move very far easily and require constant mental effort to get anywhere
  5. Additionally, many have not considered a safety mechanism in case the BCI system gets it wrong!

Proposed system

We will be adopting the waypointing system that ships use: the user selects a destination point and the route to it is generated autonomously. This is the opposite of driving a car, where you decide each individual turn on the way to a point.

  1. We will use gaze tracking to pick a destination in 3D space rather than a predetermined direction. The user can actually see where they are going, and their eyes are free again once the location is locked in
  2. We will use the g.tec Unicorn Hybrid Black, one of the best portable EEG headsets, at around $1400 AUD, far cheaper than most research-grade systems
  3. The destination is chosen with the eyes; the intent to start moving is decoded by the BCI. This takes directional control away from the BCI and uses it in a more natural start-stop way
  4. By using the eyes to pick a destination in 3D space, we can travel much further with less mental effort
  5. By using Error-Related Potentials (ErrP), we can detect when the user has noticed a mistake in our system and stop the wheelchair from moving (see the control-loop sketch below)!

To summarise: gaze picks the destination, the BCI issues the start command, and ErrPs act as an emergency brake.
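Here is a minimal sketch of how these pieces could fit together in the wheelchair's control loop. It is illustrative only: the `gaze`, `intent`, `errp` and `drive` interfaces, their method names, and the two-state machine are hypothetical placeholders, not our actual implementation.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()    # waiting for the user to pick a destination
    MOVING = auto()  # driving autonomously towards the waypoint

def control_loop(gaze, intent, errp, drive):
    """One wheelchair control loop. All four interfaces are hypothetical:

    - gaze.fixated_target() -> waypoint or None (3D gaze -> point in space)
    - intent.go_signal()    -> bool (motor-imagery "start" detected)
    - errp.error_detected() -> bool (user noticed the system made a mistake)
    - drive.step_towards(w) -> bool (True once the waypoint is reached)
    - drive.stop()          -> halt immediately
    """
    state, target = State.IDLE, None
    while True:
        if state is State.IDLE:
            candidate = gaze.fixated_target()
            # Arm only when gaze has settled on a point AND the BCI says "go";
            # after this, the eyes are free again (waypointing, not steering)
            if candidate is not None and intent.go_signal():
                target, state = candidate, State.MOVING
        else:  # State.MOVING
            # Safety brake: an ErrP means the user saw something go wrong
            if errp.error_detected():
                drive.stop()
                state, target = State.IDLE, None
            elif drive.step_towards(target):  # autonomous waypointing
                state, target = State.IDLE, None
```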

For more information about motor imagery, check out my personal BCI project here



BCI decoding pipeline

All specifications were determined from the literature.
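As a rough illustration of the shape such a pipeline takes, here is a generic motor-imagery decoding sketch (band-pass filter, CSP spatial filtering, then an LDA classifier) run on synthetic data. The band edges, channel count and classifier choice are common defaults from the literature, not our exact specification.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

fs = 250  # Hz (the Unicorn Hybrid Black also samples at 250 Hz)

# Synthetic stand-in for real epochs: 100 trials x 8 channels x 1 s windows
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8, fs))
y = rng.integers(0, 2, 100)  # two classes, e.g. left- vs right-hand imagery

# Band-pass 8-30 Hz: the mu/beta band where motor imagery shows up
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
X = filtfilt(b, a, X, axis=-1)

# CSP learns spatial filters that maximise variance differences between
# classes; LDA then classifies the resulting log-variance features
clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),
    ("lda", LinearDiscriminantAnalysis()),
])
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on random data
```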



Gaze tracking

We are using the MediaPipe Iris solution, a model developed by Google, and it works really well.
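For a sense of what this looks like in code, here is a minimal sketch using MediaPipe's Python Face Mesh solution with `refine_landmarks=True`, which enables the iris model. The landmark indices used for the iris centres (468 and 473) are the commonly cited ones, so treat them as an assumption rather than gospel.

```python
import cv2
import mediapipe as mp

# Face Mesh with refine_landmarks=True appends the iris landmarks (468-477)
face_mesh = mp.solutions.face_mesh.FaceMesh(
    max_num_faces=1,
    refine_landmarks=True,  # enables the iris model
)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        # 468/473 are commonly used as right/left iris centres (assumption)
        h, w = frame.shape[:2]
        for iris in (lm[468], lm[473]):
            cv2.circle(frame, (int(iris.x * w), int(iris.y * h)),
                       3, (0, 255, 0), -1)
    cv2.imshow("iris", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```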

P300 speller team


Problem statement

There is no problem statement as such. The focus here was to create a communication device for the BCI wheelchair that reuses the EEG system, and to keep it modular so the same speller could, for example, type text or turn on the lights in the house.

The P300 is a signal that arises from surprise: something you did not expect occurred. If you are staring at a letter and that letter happens to light up, it causes a P300. Now imagine we flash the rows and columns of a grid of letters; at some point your letter will have flashed twice, because it occupies both a row and a column.

When we detect those two P300 signals, we know which letter you were looking at, because we know when each row and column flashed.
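Here is a toy sketch of that intersection logic: given the flash schedule and a P300-likeness score for each flash, accumulate scores per row and per column and take the letter at the intersection of the winners. The 6x6 grid layout and names are illustrative.

```python
import numpy as np

# Classic 6x6 speller grid (illustrative layout)
GRID = np.array([list(row) for row in [
    "ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ1234", "56789_",
]])

def decode_letter(flash_log, p300_scores):
    """Pick the letter at the intersection of the best row and column.

    flash_log:   list of ("row"|"col", index) in the order they flashed
    p300_scores: classifier score per flash (higher = more P300-like)
    """
    row_scores, col_scores = np.zeros(6), np.zeros(6)
    for (kind, idx), score in zip(flash_log, p300_scores):
        (row_scores if kind == "row" else col_scores)[idx] += score
    return GRID[row_scores.argmax(), col_scores.argmax()]

# Example: one repetition (each row and column flashes once), with a strong
# response whenever row 2 or column 3 flashes -> the letter "P"
flashes = [("row", i) for i in range(6)] + [("col", j) for j in range(6)]
scores = [1.0 if f in (("row", 2), ("col", 3)) else 0.1 for f in flashes]
print(decode_letter(flashes, scores))  # -> P
```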

Here is a video to illustrate the process





Offline decoding pipeline and results

Based on the existing literature, we have designed this pipeline:

We are using this dataset from Kaggle. We have also published our pipeline here!
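In broad strokes, P300 pipelines in the literature follow the same recipe: epoch around each flash, band-pass and downsample, flatten, then classify target vs non-target with a regularised LDA. The sketch below uses synthetic epochs and mirrors the shape of such a pipeline rather than reproducing ours exactly.

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250  # Hz

# Synthetic stand-in: 2000 flash epochs x 8 channels x 0.8 s post-flash window
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 8, int(0.8 * fs)))
y = rng.integers(0, 2, 2000)  # 1 = target flash (P300 expected), 0 = non-target

# Band-pass 0.5-15 Hz: the P300 is a slow positive deflection around 300 ms
b, a = butter(4, [0.5, 15], btype="bandpass", fs=fs)
X = filtfilt(b, a, X, axis=-1)

# Downsample 8x in time, then flatten each epoch into a feature vector
X = decimate(X, 8, axis=-1)
X = X.reshape(len(X), -1)

# Shrinkage LDA is the standard workhorse for single-trial P300 classification
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on random data
```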

Below are some of the results for decoding the individual P300 flashes; the main takeaway is that performance clearly differs between subjects.

So the maximum single-flash accuracy here is about 78%. This is not so bad, because we can repeat the flashes many times for the same letter; all the subject has to do is stare a bit longer. This means that with 10 repeats we can actually reach 100% letter accuracy!
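The intuition is that summing scores over repetitions washes out single-trial noise, so the true row and column pull ahead. The Monte-Carlo simulation below is illustrative only: Gaussian scores and an arbitrary separation value, not parameters fitted to our data.

```python
import numpy as np

rng = np.random.default_rng(0)

def letter_accuracy(n_repeats, n_trials=5000, separation=0.8):
    """Estimate letter accuracy vs number of repetitions by simulation.

    On each repetition, every row/column gets a noisy score; the target row
    and column get a small positive offset (`separation` is an arbitrary
    illustrative value). Scores are summed over repetitions, then the
    argmax row and column are taken.
    """
    correct = 0
    for _ in range(n_trials):
        rows = rng.standard_normal((n_repeats, 6)).sum(axis=0)
        cols = rng.standard_normal((n_repeats, 6)).sum(axis=0)
        rows[2] += separation * n_repeats  # true row
        cols[3] += separation * n_repeats  # true column
        correct += (rows.argmax() == 2) and (cols.argmax() == 3)
    return correct / n_trials

for n in (1, 2, 5, 10):
    print(n, letter_accuracy(n))  # accuracy climbs towards 1.0 with repeats
```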

Below is a graph showing the number of letter repeats versus the accuracy of the letter decoding.

So to summarise: we can definitely show that P300 spelling is possible, that accuracy is subject-dependent, and that letter decoding can be sped up by reducing the number of repetitions.