Creating a simple Presentation Module

This tutorial walks you through how to make your own Presentation module for the BCI. By the end of this tutorial you will be able to:

  • design your own user-interface screen with virtual ‘buttons’ selectable by brain signals
  • connect to the decoder and run through the calibration to train the BCI
  • use your designed user-interface screen to select the on-screen buttons

Before running this tutorial you should have read how an evoked bci works to get an overview of how this BCI works and its main components, and have run through the quickstart tutorial to quickly test your installation and try the BCI.

The presentation module is responsible for displaying the user-interface to the user, with flickering options they can select from with brain signals (see how an evoked bci works for more information). In order for the presentation flickering to generate the strongest possible brain response, and hence maximise the BCI performance, it must display the correct stimuli to the user with precise timing and communicate this timing information to the MindAffect decoder. Further, in order to train the decoding model, presentation happens in two different modes:

  • calibration mode where we cue the user where to attend to obtain correctly labelled brain data to train the machine learning algorithms in the decoder and
  • prediction mode where the user actually uses the BCI to make selections.

The noisetag module provides a number of tools to hide this complexity (different modes, timestamp recording) from the application developer. Using the most automated of these, all the application developer has to do is provide a function to draw the display as instructed by the noisetag module. To use it, we import the module and create the Noisetag object:

[ ]:
from mindaffectBCI.noisetag import Noisetag, sumstats
nt = Noisetag()

The Draw function

We will now write a function to draw the screen. Here we use the Python gaming library pyglet to draw two squares on the screen with the given colors.

[ ]:
import pyglet
# define a simple 2-squares drawing function
def draw_squares(col1,col2):
    # draw square 1: @100,190, width=100, height=100
    x=100; y=190; w=100; h=100
    pyglet.graphics.draw(4, pyglet.gl.GL_QUADS,
                         ('v2f', (x, y, x+w, y, x+w, y+h, x, y+h)),
                         ('c3f', (col1)*4))
    # draw square 2: @440,190, same width and height
    x = 640-100-100
    pyglet.graphics.draw(4, pyglet.gl.GL_QUADS,
                         ('v2f', (x, y, x+w, y, x+w, y+h, x, y+h)),
                         ('c3f', (col2)*4))
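
If you want to check the drawing function before wiring up the rest of the BCI, the sketch below (purely optional, and assuming the legacy pyglet graphics API used above, i.e. pyglet < 2.0) opens a window, shows the two squares for a couple of seconds, and closes it again:

[ ]:
# optional standalone check of draw_squares (illustrative sketch only)
test_window = pyglet.window.Window(width=640, height=480)

@test_window.event
def on_draw():
    test_window.clear()
    draw_squares((.2,.2,.2), (1,1,1))   # one grey and one white square

# stop the event loop again after 2 seconds
pyglet.clock.schedule_once(lambda dt: pyglet.app.exit(), 2.0)
pyglet.app.run()
test_window.close()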

Updating the Display

Now we write a function which 1) informs the noisetag framework of how (and when) the display was last updated, and 2) asks the noisetag framework how the selectable squares should look in the next frame.

[ ]:
# dictionary mapping from stimulus-state to colors
state2color={0:(.2,.2,.2), # off=grey
             1:(1,1,1),    # on=white
             2:(0,1,0),    # cue=green
             3:(0,0,1)}    # feedback=blue
def draw(dt):
    '''draw the display with colors from noisetag'''
    # send info on the *previous* stimulus state, with the recorded vsync time (if available)
    fliptime = window.lastfliptime if window.lastfliptime else nt.getTimeStamp()
    nt.sendStimulusState(timestamp=fliptime)
    # update and get the new stimulus state to display
    try :
        nt.updateStimulusState()
        stimulus_state,target_state,objIDs,sendEvents=nt.getStimulusState()
    except StopIteration :
        pyglet.app.exit() # terminate app when noisetag is done
        return

    # draw the display with the instructed colors
    if stimulus_state :
        draw_squares(state2color[stimulus_state[0]],
                     state2color[stimulus_state[1]])

    # some textual logging of what's happening
    if target_state is not None and target_state>=0:
        print("*" if target_state>0 else '.',end='',flush=True)
    else:
        print('.',end='',flush=True)

Integrating Output

Next we integrate the behaviour of the output module created in the Simple Output Tutorial by attaching a selection callback, which will be called whenever the BCI makes a selection. For now we use the “Hello World” example from the Output Tutorial and add a function that prints the object ID of our second square when it is selected.

[ ]:
def helloworld(objID):
    print("hello world")

def printID(objID):
    print("Selected: %d"%(objID))

# define a trivial selection handler, mapping object IDs to callback functions
def selectionHandler(objID):
    selection_mapping = {
        1: helloworld,
        2: printID
    }
    func = selection_mapping.get(objID)
    if func is not None:  # ignore selections of objects we don't handle
        func(objID)
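
As a quick sanity check you can call the handler by hand (purely illustrative; in the real experiment the noisetag framework calls it with the objID of the selected square):

[ ]:
# manual check of the selection handler (illustrative only)
selectionHandler(1)   # -> prints "hello world"
selectionHandler(2)   # -> prints "Selected: 2"
selectionHandler(99)  # -> ignored, no handler registered for this objID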

Timing Accuracy

Now we need a bit of Python hacking. Because our BCI depends on accurately time-locking the brain data (EEG) to the visual display, we need accurate time-stamps for when the display changes. Fortunately, pyglet allows us to get this accuracy: it provides a flip method on windows which blocks until the display is actually updated, so we can use it to generate accurate time-stamps. We do this by wrapping the window's normal flip method with a time-stamp recording function, using the following magic:

[ ]:
import types

def timedflip(self):
    '''pseudo method type which records the timestamp for window flips'''
    type(self).flip(self) # call the 'real' flip method...
    self.lastfliptime=nt.getTimeStamp()

Next, we initialize the window to display the stimulus, and set up the flip-time recording for it. Be sure that you have vsync turned on. Many graphics cards turn it off by default, as it (in theory) gives higher frame rates for gaming. However, for our system exact timing matters more than frame rate, so always turn vsync on for visual Brain-Computer Interfaces!

Note: always set fullscreen=True when using the presentation module to improve screen timing accuracy. We set it to False here so the tutorial stays visible when the stimulus is running.

Note: When running in a notebook the pyglet window always starts minimized – so if you can’t see it check your task bar.

[ ]:
# Initialize the drawing window
# make a default window, with fixed size for simplicity, and vsync for timing
config = pyglet.gl.Config(double_buffer=True)
window = pyglet.window.Window(fullscreen=False, width=640,height=480, vsync=True, config=config)

# Setup the flip-time recording for this window
window.flip = types.MethodType(timedflip,window)
window.lastfliptime=None
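
To convince yourself that vsync is actually working you can time a few raw window flips; this optional sketch bypasses the timedflip wrapper so it does not depend on the BCI connection. With vsync enabled on a 60Hz display the intervals should be close to 16.7ms:

[ ]:
import time
# optional: time a few raw flips to check that vsync is on (illustrative sketch)
prev = None
for _ in range(10):
    window.dispatch_events()        # keep the window responsive
    type(window).flip(window)       # raw flip, bypassing our timedflip wrapper
    t = time.perf_counter()
    if prev is not None:
        print("flip interval: %.1f ms" % ((t - prev) * 1000))
    prev = t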

Start the BCI decoder in the background

To successfully test your presentation module it is important to have the other components of the BCI running. As explained in the quickstart tutorial, in addition to the presentation module we build here, we need the Hub, Decoder, and Acquisition components for a functioning BCI.
For a quick test (with fake data) of this presentation module you can start all these components with a given configuration file as follows.

N.B. if you run directly in this notebook, don't forget to shut down the decoder at the end.

[ ]:
import mindaffectBCI.online_bci
config = mindaffectBCI.online_bci.load_config('fake_recogniser.json')
mindaffectBCI.online_bci.run(**config)

Alternatively you can run this config from the command line with:

python3 -m mindaffectBCI.online_bci --config_file fake_recogniser.json

Or from your Anaconda environment:

python -m mindaffectBCI.online_bci --config_file fake_recogniser.json

See our tutorial Running Custom Presentation to set up a BCI using your own Presentation module.

Run the Experiment!

To run the experiment we connect to the Hub, add our selection handler, tell the noisetag module to run a complete BCI ‘experiment’ (a calibration phase followed by a prediction phase), and start the pyglet main loop.

[ ]:
# Initialize the noise-tagging connection
nt.connect(timeout_ms=5000)
nt.addSelectionHandler(selectionHandler)

# tell the noisetag framework to run a full calibrate->prediction sequence
nt.setnumActiveObjIDs(2)
nt.startExpt(nCal=4,nPred=10,duration=4)

# run the pyglet main loop
pyglet.clock.schedule(draw)
pyglet.app.run()

Shut down the decoder

[ ]:
import mindaffectBCI.online_bci
mindaffectBCI.online_bci.shutdown()