
Department of Psychology


Research

FuncSim

A functionally sequenced scanpath similarity method for comparing and evaluating scanpath similarity based on a task's inherent sequence of functional (action) units

FuncSim Toolbox for Matlab

On this page you will find the Matlab code of our functionally sequenced scanpath similarity method, which compares and evaluates scanpath similarity according to a task's inherent sequence of functional units. FuncSim reveals whether gaze characteristics are more similar within the same functional units of a task than when participants are engaged in different functional units of the task. In addition, it calculates a random baseline, i.e., the similarity of one of the observed scanpaths to its scrambled derivative.

The toolbox should work with any Matlab version from 2010b onward and runs without any additional toolboxes installed. However, it has only been tested under Windows.

Tutorial

Download the FuncSim toolbox. Unzip the files and add them to your Matlab path.

Open Matlab. Open example data by typing in the Command Window:

>>load FuncSimExampleData.mat

This file contains four variables, each containing one example scanpath. You will see the four variables (path1, path2, path3, path4) in the Workspace. Double-click on one of these paths to open it in the Variable Editor. Each path you want to compare with FuncSim has to have the same format as the example paths. Each row contains one fixation. The first column indicates to which functional unit the fixation belongs. The second and third columns contain the fixation's x- and y-coordinates, respectively. The fourth column contains the fixation duration.
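A quick way to get a feeling for this format is to build a small path directly in the Command Window. The matrix below is a made-up illustration (the coordinate and duration values are arbitrary and not part of the toolbox):

>>myPath = [1 512 384 230; ...   % unit 1, x, y, duration
            1 540 390 180; ...
            2 120 600 310; ...   % first fixation of unit 2
            2 130 610 250];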

Define the alignment method FuncSim shall use by typing in the Command Window:

>>method='average';

or

>>method='reldur';

The method 'average' will average each scanpath dimension (location, duration, length, direction) within each functional unit before calculating similarity across paths. This method is adequate if each functional unit contains a small number of fixations. The method 'reldur' will align fixations across paths within functional units according to their proportion of the functional unit's overall dwell time.
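As a rough illustration of the 'average' idea (this is not the toolbox's internal code), the location and duration dimensions of path1 could be collapsed to one value per functional unit as follows; the length and direction dimensions, which are computed from successive fixations, would be averaged analogously:

% Illustration only: one averaged row per functional unit of path1
units = unique(path1(:,1));
avgPath = zeros(numel(units),4);
for k = 1:numel(units)
    rows = path1(:,1)==units(k);                 % fixations of this unit
    avgPath(k,:) = [units(k), mean(path1(rows,2)), ...
                    mean(path1(rows,3)), mean(path1(rows,4))];
end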

Perform the comparison by typing in the Command Window:

>>[dataDiff,diff]=FuncSimCore(path1,path2,method);

Between-path differences (BPD) between path1 and path2 and random baseline differences (RBD) between path1 and its scrambled derivative for each dimension (location, duration, length, direction) will be displayed in the Command Window, e.g.:

Method: 'reldur'
BPDloc: 3.7289
RBDloc: 7.1413
BPDdur: 138.3157
RBDdur: 117.9000
BPDlen: 4.4453
RBDlen: 5.6136
BPDdir: 40.5908
RBDdir: 61.9408

The results will also be stored in the variable diff in the Workspace. In addition, the alignment matrix is stored in the variable dataDiff. dataDiff contains the aligned values of path1, path2, and the scrambled path1 as well as all calculated differences (BPDs and RBDs).
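Assuming that diff is returned as a structure with the fields printed above, the observed and baseline differences can be compared directly, for example:

% Smaller differences mean higher similarity
>>if diff.BPDloc < diff.RBDloc, disp('Locations are more similar than the random baseline.'); end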

Instructions are summarized in FuncSim_Instructions.m!

Artificial examples

The toolbox also contains eight artificial scanpath pairs (FuncSimArtificialExamples.mat) with different characteristics that might cause problems for different scanpath similarity methods (see the usage sketch after the list).

  • Pair1 'Equal': Path1 & Path2_1. The two scanpaths are identical.
  • Pair2 'Duration': Path1 & Path2_2. Fixations of Path2_2 have half the duration of Path1_2.
  • Pair3 'Scaled': Path1 & Path2_3. Path2_3 was generated by scaling the x- and y-coordinates of Path1_3 by a factor of 0.5.
  • Pair4 'Unequal number of fixations': Path1 & Path2_4. Path2_4 was generated by shifting Path1_4 30 pixels up and 50 pixels to the left. Fixations within the same sub-units are located within the same grid regions.
  • Pair5 'Spatial offset': Path1 & Path2_5. Path2_5 was generated by shifting the first path 30 pixels up and 50 pixels to the left. Fixations within the same sub-actions are located in different grid regions. All fixations are 20 pixels lower and 20 pixels further to the right than in Pair4.
  • Pair6 'Grid problem': Path1_6 & Path2_6. Path2_6 contains fewer fixations than Path1_6. Path2_6 is also shifted 30 pixels up and 50 pixels to the left.
  • Pair7 'Unit assignment': Path1_7 & Path2_7. Fixations of Path2_7 are assigned to different functional units than the fixations of Path1_7.
  • Pair8 'Random unit assignment': Path1_8 & Path2_8. Fixation groups of both paths are randomly assigned to functional units.
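The artificial pairs can be compared with FuncSimCore in the same way as the example scanpaths. The following sketch uses Pair7; if the variable names in the .mat file differ from the labels above, check them with whos first:

>>load FuncSimArtificialExamples.mat
>>whos
>>[dataDiff7,diff7]=FuncSimCore(Path1_7,Path2_7,'average');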

Citation and Contact

When publishing data analyzed with FuncSim or using the artificial scanpath pairs, cite:

  • Foerster, R. M., & Schneider, W. X. (2013). Functionally sequenced scanpath similarity method (FuncSim): Comparing and evaluating scanpath similarity based on a task's inherent sequence of functional (action) units. Journal of Eye Movement Research, 6(5):4, 1-22.

as well as

  • Foerster, R. M., & Schneider, W. X. (2013). FuncSim Toolbox for Matlab. CITEC, Bielefeld University. doi:10.4119/unibi/citec.2013.7

In case of problems or questions, contact Rebecca M. Foerster.

Acknowledgement

This research was supported by the Cluster of Excellence Cognitive Interaction Technology CITEC (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG).


 

Virtual Reality tests of Visual Performance (VR_ViP)

Toolboxes for using virtual reality devices for neuropsychological assessments of visual performance

2018: Virtual reality test of visual performance: Selective attention, processing speed, and working memory

as described in Foerster, Poth, Behler, Botsch, & Schneider (2019)

Devices: HTC Vive, HTC Vive Pro, & Oculus Rift

Tests: combiTVA, TVA whole report, and TVA partial report

Here we provide the C++ code of our TVA-based (theory of visual attention, Bundesen, 1990) assessment implemented on the virtual reality devices HTC Vive and Oculus Rift (download). The assessment can measure up to five components of visual processing capabilities: the threshold of conscious perception, the capacity of visual working memory, visual processing speed, top-down controlled selectivity, and the lateral attentional bias (for more information, see Foerster, Poth, Behler, Botsch, & Schneider, 2019).

DISCLAIMER: Please note that this software is a modified version of the software used in Foerster et al. (2019). We do not guarantee that the code is completely free of bugs, nor do we take responsibility for data produced with this software.

Prerequisites

  • You need a Windows PC. The program was tested on Windows 7 and 8.1, but Windows 10 should probably work as well.
  • You need either an HTC Vive, an HTC Vive Pro, or an Oculus Rift. The program can also be run on a CRT (see "Configuring the application").
  • You need to have the Steam software installed (http://store.steampowered.com/).
  • You need to register a Steam account.
  • You need a Visual C++ 2013 Runtime (https://www.microsoft.com/en-us/download/details.aspx?id=40784). This is most likely already installed.
  • For the Oculus Rift, you need to have the Oculus Runtime Environment installed (https://www.oculus.com/setup/).
  • Please note that VR Applications in general require a relatively powerful computer. See the requirements on the manufacturers' websites.

Running the application

  • Connect your head-mounted display (either the HTC Vive, the HTC Vive Pro or the Oculus Rift).
  • Start the Steam program.
  • The program can be run in offline mode. To do so, log in to your Steam account, then go to the Steam menu and select Go Offline.
  • Start Steam VR and wait until your head-mounted display is connected.
  • Open the folder "TVA_VR" of our program. Make sure you start the application from the folder where the data, shaders folders and the tva.cfg file are located.
  • Execute the application file "tva.exe"
  • Results are stored as dat files to the logs folder (main results file, history file, and practice block file).
  • The program can be stopped at any time by pressing escape.

Configuring the application

  • Open the folder "TVA_VR" of our program.
  • Open the configuration file named "tva.cfg" in a text editor.
  • Change the values in the desired way.

Citation and Contact

If you publish data obtained with TVA_VR or if you otherwise use the software, please cite:

  • Foerster, R. M., Poth, C. H., Behler, C., Botsch, M., & Schneider, W. X. (2019). Neuropsychological assessment of visual selective attention and processing capacity with head-mounted display. Neuropsychology, 33, 309-318. doi: 10.1037/neu0000517 (PDF)

as well as

  • Behler, C., Poth, C. H., Foerster, R. M., Schneider, W. X., & Botsch, M. (2018). Virtual reality test of visual performance: Selective attention, processing speed, and working memory. Bielefeld University. doi:10.4119/unibi/2920105

In case of problems or questions, contact Rebecca M. Foerster.

Acknowledgement

This research was supported by the Cluster of Excellence Cognitive Interaction Technology CITEC (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG).


 

2016: Virtual reality test of visual performance: Processing speed and working memory

as described in Foerster*, Poth*, Behler, Botsch, & Schneider (2016)

*shared first authorship

Device: Oculus Rift Development Kit 2

Test: TVA whole report

Here we provide the C++ code of our TVA-based (theory of visual attention, Bundesen, 1990) whole report assessment implemented on the virtual reality device Oculus Rift (download). The assessment measures three components of visual processing capabilities: visual processing speed, the threshold of conscious perception, and the capacity of visual working memory (for more information, see Foerster, Poth, Behler, Botsch, & Schneider, 2016).

Contents

OculusTVA - main directory:

OculusTVA program: The folder contains the compiled version of the OculusTVA application. Use this if you want to try it out; see README.txt in that folder for details.

OculusTVA source: The folder contains the source code for the OculusTVA application.

lgl3: The folder contains OpenGL helper classes used for building the application.

libs: The folder contains libraries used for building the application.

lstd: The folder contains some general helper classes used for building the application.

Running the Application

If you have the necessary prerequisites (see below) installed, you can run the OculusTVAwholeReport application with the OculusTVA.exe in the OculusTVA program directory; see the README.txt in that folder for details.

Building the Application

If you want to experiment with the code yourself, there is a Visual Studio 2013 solution in the OculusTVA source directory. It should compile right away if the file structure is kept the same. However, you should have basic knowledge of C++ and know what you are doing. The program will compile into the OculusTVA source/Release/ directory, so you do not have to worry about breaking the precompiled program.

Prerequisites

You need a Windows PC. The program was tested on Windows 7 and 8.1, but Windows 10 should probably work as well.

Obviously, you need an Oculus Rift. We used the Development Kit 2 because the Consumer Version had not been released when we conducted the experiment. It might work with the Consumer Version as well, but we have not had the chance to try this yet.

You need to have an Oculus Runtime Environment installed (https://www3.oculus.com/en-us/setup/). You may need an older version (the version we used was SDK 0.4.3).

You need a Visual C++ 2013 Runtime (https://www.microsoft.com/en-us/download/details.aspx?id=40784). This is most likely already installed.

Disable Vsync for your monitor (if you press F2 in the program, the frame rate should be higher than 60; if you are sure Vsync is turned off and your frame rate is nevertheless too low, your computer is too slow to run this version of the software - we are planning to release a new version that solves this problem).

Please note that VR Applications in general require a relatively powerful computer.

Citation and Contact

If you publish data obtained with oculusTVAwholeReport or if you use the software otherwise, please cite:

  • Foerster, R. M., Poth, C. H., Behler, C., Botsch, M., & Schneider, W. X. (2016). Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities. Scientific Reports, 6, 37016. doi:10.1038/srep37016 (LINK)

as well as

  • Behler, C., Poth, C. H., Foerster, R. M., Schneider, W. X., & Botsch, M. (2016). Virtual reality test of visual performance: Processing speed and working memory. Bielefeld University. doi:10.4119/unibi/2906585

In case of problems or questions, contact Rebecca M. Foerster or Christian H. Poth.

Acknowledgement

This research was supported by the Cluster of Excellence Cognitive Interaction Technology CITEC (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG).


 

MovVis

A movement visualization toolbox

MovVis Toolbox for Matlab

Here you will find the Matlab/Octave code of our movement visualization toolbox MovVis.

  • MovVis creates videos and animated gifs of input movement data.
  • MovVis can display movements at any speed.
  • MovVis has three options for aligning multiple movements.
  • MovVis can display movements on a chosen background image.
  • MovVis can display movements with desired visual attributes.

The toolbox was developed with Matlab 2013b. The toolbox requires Matlab's statistics and image processing toolboxes (commercially available from MathWorks), but should work without any further Matlab toolboxes installed. The toolbox was adapted to Octave 4.4.1 (open source) and requires Octave's statistics, image processing, and video packages. MovVis has only been tested under Windows.
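If you work with Octave, the required packages have to be loaded before calling MovVis. Assuming the standard Octave Forge package names statistics, image, and video, this amounts to:

>>pkg load statistics image video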

Tutorial

Download the MovVis toolbox. Unzip the files and add them to your Matlab path.

Open Matlab or Octave. Open MovVis_Script.m by double-clicking it or by typing

>>open MovVis_Script.m;

This file explains the MovVis toolbox and contains code for example visualizations of the example movement data file mymovedata.txt. All movement data you want to visualize with MovVis must have the same format as the example data. Each row contains one movement sample. The first column contains the timestamps. A variable number of following columns tag each sample with the categories to which it belongs (e.g., subject in column 2, trial in column 3, and condition in column 4). The next column (2 + number of category columns) marks to which sub-movement unit the sample belongs. The next column (3 + number of category columns) marks to which event the sample belongs. The last two columns contain the x- and y-coordinates, respectively.

Data columns: timestamp, category 1-n, unit, event, x, y.

Specify the data by constructing a variable called 'data' that contains either a string with the full path to your movement data file or a matrix containing the data.

>>data='C://User/data/mymovedata.txt';
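Alternatively, data can be a numeric matrix in the column layout described above. The following made-up snippet shows the layout with two category columns (subject and trial), as in the example data; the timestamp and coordinate values are arbitrary:

% columns: [timestamp, subject, trial, unit, event, x, y]
>>data = [ 0 1 65 1 23 512 384; ...   % eye sample (event 23)
          10 1 65 1 23 520 390; ...
          33 1 65 1  9 500 380];      % cursor sample (event 9)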

Visualize the data with default options by typing in the Command Window:

>>MovVis(data);

Alternatively, you can specify your desired visualization options. Maximal specification is achieved by

>>[xy,xyA]=MovVis(data,picpath,speed,vissize,alignment,prefix,drawtrace,missub,marker,faceCol,edgeCol,traceCol,traceLine,markerSize);

You can choose as many specifications as you like. Note, however, that they need to have the right name and format, as explained in MovVis_Instructions.txt and MovVis_Script.m in the MovVis folder.
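As a minimal sketch of passing only the leading options, assuming that trailing arguments can be omitted, that speed is given in Hz, and that vissize is given as [height width] in pixels (the exact accepted formats are documented in MovVis_Instructions.txt):

>>picpath = 'ScreenChange.jpg';   % background image from the MovVis folder
>>speed   = 150;                  % assumed replay/upsampling rate in Hz
>>vissize = [384 512];            % assumed [height width] in pixels
>>[xy,xyA] = MovVis(data,picpath,speed,vissize);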

Citation and Contact

When publishing data visualized with MovVis or results interpreted with the help of MovVis, cite:

  • Foerster, R. M. (2019). MovVis: A movement visualization toolbox for Matlab and Octave. CITEC, Bielefeld University. doi:10.4119/unibi/2934028

In case of problems or questions, contact Rebecca M. Foerster.

Example data visualizations

MovVis_Script also contains ten example visualizations of the example movement data 'mymovedata.txt' in the MovVis folder. The example data are described in detail in

  • Foerster, R. M. (2016). Task-irrelevant expectation violations in sequential manual actions: Evidence for a 'check-after-surprise' mode of visual attention and eye-hand decoupling. Frontiers in Psychology, 7, 1-12. https://doi.org/10.3389/fpsyg.2016.01845

The example data contain cursor (event 9) and eye (event 23) movements defined in column 5, recorded from 2 trials (65 & 66) defined in column 3, belonging to 20 subjects (1-20) defined in column 2, while subjects performed a sequential number clicking task (computerized TMT-A). In this task, subjects have to click as fast as possible and in ascending sequence on numbered circles displayed on a computer screen (here 1-9). The movements until a click on a specific number are tagged with that number's identity (unit 1 for all movements until the click on number 1). The file contains 100 eye and 30 cursor movement samples per second. A constant visuospatial number configuration, shown in the ScreenPrechange.jpg picture in the MovVis folder, was used for 65 trials (prechange trials). Unannounced to the subjects, in trial 66 the font of number 4 changed from Arial (ScreenPrechange.jpg) to MVBoli, as can be seen in the ScreenChange.jpg picture in the MovVis folder.
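If you want to visualize only a subset of the example data yourself, you can load and filter the file before passing it to MovVis. This is a sketch that assumes mymovedata.txt is a plain numeric table readable with dlmread, using the column layout described above:

>>raw = dlmread('mymovedata.txt');
>>sub3change = raw(raw(:,2)==3 & raw(:,3)==66,:);   % subject 3 (column 2), change trial 66 (column 3)
>>MovVis(sub3change);                               % default starts-aligned visualization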

Run the examples in MovVis_Script.m to get acquainted with MovVis.

The following ten example visualizations will be stored as animated gifs and avi videos.

Example 1: Default starts-aligned visualization

By default, all movements are visualized as black 10-point circles moving continuously on a grey background.


 

Example 2: Fully specified units-aligned visualization

Eye movements are visualized as 6-point diamonds that leave a broken trace, while cursor movements are visualized as 10-point circles that leave a solid trace. All moving markers have a white marker edge color. Movements belonging to the prechange trial 65 have a high-contrast blue marker face color (0 0 1) and leave a lower-contrast blue trace (0.4 0.4 0.8). Movements belonging to the change trial 66 have a high-contrast red marker face color (0.6 0.2 0.2) and leave a lower-contrast red trace (0.4 0.2 0.2). All movements are superimposed on the display with the deviant font of number 4 (ScreenChange.jpg). The temporal resolution of all movements is upscaled to 150 Hz, and the missing interim data points of the higher resolution are substituted by the next point, thereby creating a smooth slow-motion visualization when replayed at 25 Hz (typical display speed). As not only starts but also ends are aligned per sub-unit, movements are stretched to match a unit's slowest 150-Hz upscaled movement. The spatial resolution of the visualization is reduced to half the original picture size [height 384, width 512]. The visualization shows nicely that during the last prechange trial (blue), participants scan the display sequentially with their eyes (diamonds and broken lines), followed by the cursor (circles and solid lines) with a slight delay. During the first change trial (red), the eyes (diamonds and broken lines) initially scan the display sequentially. However, after having completed the changed number 4, the eyes look back at it occasionally without much of an effect on the cursor movements (circles and solid lines). This indicates that participants noticed the change and checked for it, but without letting it deteriorate their cursor performance too much. This was only possible by occasionally decoupling eye and cursor (hand) movements.


Example 3: Starts-aligned visualization with selected specifications

By defining only the background picture (picpath), the marker styles (marker), their face color (faceCol), and their size (markerSize) as in Example 2, defaults are used for speed (real-time), alignment (starts-aligned), missub (substitution active), edge color (edgeCol=faceCol), and the trace specifications (no trace). The spatial resolution of the visualization is reduced to half the original picture size [384,512]. Thus, eye movements are visualized as small diamonds and cursor movements as big circles, both without leaving a trace. Markers belonging to the prechange trial 65 are blue and markers belonging to the change trial 66 are red. The temporal resolution of all movements is reduced to 25 Hz and only their starts are aligned, so that they are displayed in real-time.


 

Example 4: Ends-aligned visualization with individual colors per subject

The movements are visualized with randomly chosen individual colors per subject. Note that all four movements (eye and cursor in the prechange and change trial) belonging to a specific subject have the same face color. Example 4 visualizes eye movements as diamonds and cursor movements as circles. Movement markers belonging to the prechange trial have black edge colors, while movements belonging to the change trial have white edge colors. Movements are again superimposed on the half-sized change display. Starts and ends of all trials are aligned, ignoring unit boundaries.


 

Example 5: Selected movements - prechange trial

Only the movements of the prechange trial are visualized. Eye movements are visualized as green diamonds leaving green broken traces and cursor movements as black circles leaving black solid traces on the half-sized prechange display, all with a white marker edge color. Starts and ends are aligned per unit after the cursor movements were upscaled to 100 Hz. The example shows nicely how movement data with different temporal resolutions can be handled by MovVis. Each position shift of the cursor markers is accompanied by three position shifts of the eye markers.


 

Example 6: Selected movements - change trial

Only the movements of the change trial are visualized. Eye movements are visualized as green diamonds leaving green broken traces and cursor movements as black circles leaving black solid traces on the half-sized change display, all with a white marker edge color. Starts and ends are aligned per unit after the cursor movements were upscaled to 100 Hz. The example shows nicely how movement data with different temporal resolutions can be handled by MovVis. Each position shift of the cursor markers is accompanied by three position shifts of the eye markers.


 

Example 7: Eye and cursor movement of subject 3 in the change trial

Only two movements are visualized, namely the eye and cursor movement of the third subject in the change trial. Eye movements are visualized as green diamonds leaving a green broken trace and cursor movements as black circles leaving a black solid trace on the half-sized change display. Movements are down-scaled to the default 25 Hz and only starts are aligned. The visualization shows nicely that subject 3 fixates on the number with the deviant font four times. The first fixation is followed by a cursor movement, while the three refixations are not.


 

Example 8: Unit-averaged eye movements of subject 3

Only two movements are visualized, namely the unit-averaged eye movements of subject 3 during the prechange and the change trial. Eye movements of the prechange trial are visualized as blue solid traces and eye movements of the change trial as red solid traces on the half-sized change display. Each data point is displayed for 200 ms. The visualization shows nicely that the averaged scanpaths of subject 3 are highly similar in the prechange and the change trial despite the font change.


 

Example 9: Units-aligned averaged movements (average observer)

Based on the units-aligned data (extracted by Examples 5 and 6), mean x- and y-coordinates of all subjects' movements are calculated per event and trial. The resulting data matrix contains four movement paths: an average path of the eye movements during the prechange trial, an average path of the cursor movements during the prechange trial, an average path of the eye movements during the change trial, and an average path of the cursor movements during the change trial. Eye movements are visualized as diamonds leaving broken traces and cursor movements as circles leaving solid traces on the half-sized change display. Movement markers belonging to the prechange trial are blue and leave blue traces. Movement markers belonging to the change trial are red and leave red traces. Movements are starts-aligned and visualized at 100 Hz. The visualization shows that the average cursor paths have relatively straight sequential connections between the ascending numbers. The eye movement traces are slightly curved towards number 4 for connections between numbers higher than 3, due to the individual refixations of number 4. However, the average eye movement path touches number 4 only once, namely when it is followed by a cursor movement. This reveals that the different subjects refixate on number 4 at different times within the sequence, so that averaging brings the paths more in line with the required clicking sequence. The example shows nicely how averaging can obscure individual patterns (all participants refixate on number 4) and suggest smooth, uniform patterns across participants (all participants scan sequentially with some curvature towards number 4), which can lead to wrong conclusions (no inspection of the surprising number 4).


Example 10: Successive movements of all subjects

The movements of all subjects are visualized successively. Eye movements are visualized as diamonds and cursor movements as circles. Movements belonging to the prechange trial are visualized in blue and movements belonging to the change trial in red. All movements are visualized on the half-sized change display. A pause of 1,000 ms separates the successive movements. The visualization shows how the eye and cursor movements of each individual subject differ between the prechange and the change trial.
