Going Further: Amplifiers, BCI Types, Decoder Config

You can run the BCI in different modes by specifying different arguments on the command line, or by modifying the basic configuration file online_bci.json.

Alternative Amplifiers

Brainflow supported

This online_bci uses brainflow by default for interfacing with the EEG amplifier. Specifically, the file examples/acquisition/utopia_brainflow.py is used to set up the brainflow connection. You can check this file to see what options are available for configuring different amplifiers. In particular, you should set the board_id and any additional parameters as discussed in the brainflow documentation.

You can specify the configuration for your amplifier in the acq_args section of the configuration file online_bci.json. For example, to use the brainflow simulated board:

"acq_args":{ "board_id":-1}

Or to use the OpenBCI Cyton on COM port 4:

"acq_args":{
    "board_id":0,
    "serial_port":"COM4"
 }
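Conceptually, the acq_args dictionary is split into the brainflow board_id and the remaining connection parameters. A minimal pure-Python sketch of that mapping (the split_acq_args helper is illustrative only, not part of the package):

```python
def split_acq_args(acq_args):
    """Illustrative helper: separate the brainflow board_id from the
    remaining connection parameters (serial_port, ip_address, ...)."""
    params = dict(acq_args)                # copy so the caller's dict is untouched
    board_id = params.pop("board_id", -1)  # -1 = brainflow's simulated board
    return board_id, params

# The Cyton example above would map to:
board_id, params = split_acq_args({"board_id": 0, "serial_port": "COM4"})
print(board_id, params)  # → 0 {'serial_port': 'COM4'}
```

A real driver would then pass the remaining parameters on to the amplifier connection; see utopia_brainflow.py for how the package actually does this.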

Non-Brainflow

Alternatively, thanks to valuable support from their developers, we support some non-brainflow amplifiers ‘out-of-the-box’, specifically:

We are also happy to add support for additional amplifiers if EEG makers request it and are willing to provide open-source SDKs and test hardware.

Add your own AMP support

If you have an amp which is not currently supported, and you have a way of getting raw samples out of it, then you can easily (7 lines of Python!) add support for your device as described in the Add a new Amplifier tutorial.
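In essence, all the tutorial needs from you is a way to produce timestamped blocks of raw samples. A hypothetical generator of that shape (fake_amp_stream and its block layout are assumptions for illustration, not the actual mindaffectBCI API):

```python
import random
import time

def fake_amp_stream(n_channels=4, block_size=5):
    """Hypothetical sample source: yields (timestamp_ms, block) pairs, where
    block is block_size x n_channels of raw samples. A real driver would
    replace the random data with its amplifier SDK's read call."""
    while True:
        block = [[random.gauss(0.0, 1.0) for _ in range(n_channels)]
                 for _ in range(block_size)]
        yield int(time.time() * 1000), block

# Pull one block of fake samples:
ts, block = next(fake_amp_stream())
```

See the Add a new Amplifier tutorial for how to wire such a sample source into the hub.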

Alternative BCI types / Stimulus

By default we use the mindaffect NoiseTagging style stimulus with a 25-symbol letter matrix for presentation. You can easily try different types of stimulus and selection matrices by modifying the symbols and stimfile entries in the presentation_args section of the configuration file online_bci.json, where:
  • _symbols_ : can either be a list-of-lists of the actual text to show, for example for a 2x2 grid of sentences:
"presentation_args":{
    "symbols":[ ["I'm happy","I'm sad"], ["I want to play","I want to sleep"] ],
    "stimfile":"mgold_65_6532_psk_60hz.png",
    "framesperbit":1
}

or a file from which to load the set of symbols, given as one comma-separated list of strings per line, like the file symbols.txt.

  • _stimfile_ : is a file which contains the stimulus code to display. This can be either a text file with one white-space-separated line per output, or a PNG image with outputs along the ‘x’ axis and time along the ‘y’ axis, as shown below.
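The comma-separated symbols file mentioned above can be parsed in a few lines. A sketch (load_symbols is illustrative, not the package’s actual loader):

```python
def load_symbols(path):
    """Parse a symbols file: each non-empty line is one row of the
    selection matrix, with the symbols in that row separated by commas."""
    with open(path, encoding="utf-8") as f:
        return [[s.strip() for s in line.split(",")]
                for line in f if line.strip()]
```

For example, a file containing the two lines `yes,no` and `help,stop` would load as the 2x2 grid [["yes", "no"], ["help", "stop"]].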

You can clearly see the difference between the two main types of BCI stimulus file when they are viewed as images. First, this is the stimulus file for the noise codes:

_images/mgold_61_6521_psk_60hz.png

which clearly shows the noise-like character of this code.

By contrast, the classical P300 row-column speller stimulus sequence looks like this:

_images/rc5x5.png

which shows the more regular row-column structure, and that only a few outputs are ‘on’ at any one time.
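The row-column structure can be sketched in a few lines: at each frame exactly one whole row or one whole column of the grid is ‘on’. This is a toy illustration of the pattern visible in the image above; the real stimulus files are pre-generated:

```python
import random

def rc_flash_sequence(n_rows, n_cols, n_frames, seed=0):
    """Toy P300-style row/column sequence: each frame is an n_rows x n_cols
    0/1 matrix with exactly one whole row or one whole column lit."""
    rng = random.Random(seed)
    frames = []
    for t in range(n_frames):
        frame = [[0] * n_cols for _ in range(n_rows)]
        if t % 2 == 0:                  # even frames flash a full row
            r = rng.randrange(n_rows)
            frame[r] = [1] * n_cols
        else:                           # odd frames flash a full column
            c = rng.randrange(n_cols)
            for row in frame:
                row[c] = 1
        frames.append(frame)
    return frames
```

In a 5x5 grid each frame then has exactly 5 outputs ‘on’, matching the sparse structure seen in the row-column stimulus image.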

Change Decoder parameters

The decoder is the core of the BCI, as it takes in the raw EEG and stimulus information and generates predictions about which stimulus the user is attending to. Generating these predictions relies on signal processing and machine learning techniques to learn the best decoding parameters for each user. However, ensuring best performance means the settings for the decoder should be appropriate for the particular BCI being used. The default decoder parameters are found in the decoder_args section of the configuration file online_bci.json, and are set up for a noisetagging BCI.

The default settings for noisetagging are:

"decoder_args":{
    "stopband" : [3,25,"bandpass"],
    "out_fs" : 80,
    "evtlabs" : ["re","fe"],
    "tau_ms" : 450,
    "calplots" : true,
    "predplots" : false
}

The key parameters here are:

  • stopband: this is a temporal filter applied as a pre-processing step to the incoming data. It is important for removing external noise so the decoder can focus on the target brain signals. The filter is specified as a list of bandpass or band-stop filters, which determine which signal frequencies are kept or suppressed (where, in classic Python fashion, -1 indicates the maximum possible frequency). Thus, in this example only frequencies between 3 and 25 Hz remain after filtering.
  • out_fs: this specifies the post-filtering sampling rate of the data, which reduces the amount of data processed by the rest of the decoder. Thus, in this example, after filtering the data is re-sampled to 80 Hz. (Note: to avoid aliasing, out_fs should be greater than 2x the maximum frequency passed by the stop-band.)
  • evtlabs: this specifies the stimulus properties (or event labels) the decoder will try to predict from the brain responses. The input to the decoder (and the brain) is the raw stimulus intensity (i.e. its brightness, or loudness). However, depending on the task the user is performing, the brain may not respond directly to the brightness, but to some other property of the stimulus. For example, in the classic P300 ‘odd-ball’ BCI, the brain responds not to the raw intensity, but to the start of surprising stimuli. The design of the P300 matrix-speller BCI means this response happens when the user’s chosen output ‘flashes’, or gets bright. Thus, in the P300 BCI the brain responds to the rising edge of the stimulus intensity. Knowing exactly what stimulus property the brain is responding to is a well-studied neuroscientific research question, with examples including stimulus-onset (a.k.a. rising edge, or ‘re’), stimulus-offset (a.k.a. falling edge, or ‘fe’), stimulus intensity (‘flash’), stimulus duration, etc. Getting the right stimulus coding is critical for BCI performance; see stim2event.py for more information on supported event types.
  • tau_ms: this specifies the maximum duration, in milliseconds, of the expected brain response to a triggering event. As with the trigger type, the length of the brain response depends on the type of response expected. For example, for the P300 the response occurs between 300 and 600 ms after the trigger, whereas for a VEP the response occurs between 100 and 400 ms. Ideally, the response window should be as small as possible, so the learning system sees only the brain response and not a lot of non-response noise, which could lead the machine learning component to overfit.
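The interplay between these settings can be checked with simple arithmetic. A sketch using the default noisetagging values above (the helper names are illustrative, not part of the package):

```python
def response_window_samples(tau_ms, out_fs):
    """Length of the post-event analysis window in samples at rate out_fs."""
    return int(tau_ms * out_fs / 1000)

def nyquist_ok(out_fs, passband_hi):
    """True if out_fs exceeds twice the highest frequency passed by the filter,
    as required to avoid aliasing after re-sampling."""
    return out_fs > 2 * passband_hi

# Defaults: tau_ms=450, out_fs=80, stopband=[3, 25, "bandpass"]
print(response_window_samples(450, 80))  # → 36 samples per event window
print(nyquist_ok(80, 25))                # → True (80 > 50)
```

So with the defaults the decoder learns a 36-sample response shape per event type, and the 80 Hz re-sampling rate comfortably satisfies the 2x rule for the 25 Hz passband edge.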