
What is the Object Location Memory Task?

The object location memory (OLM) task was designed to investigate grid-like activity using fMRI and to assess spatial memory in humans, inspired by Doeller et al. (2010). The task was primarily created to support our own research but is made available to the community at large so it can be useful to as many researchers as possible. The design of the task is kept general & configurable so it can be used without the need to adapt the Unity3D code itself. However, the Unity3D code is shared alongside the build versions so further development is possible. While the task is shared under the GPL-3.0 license, several free assets and third-party resources are used, which are excluded from this license. A list of these resources can be found here.

The task is programmed using the wonderful Unity Experiment Framework (UXF) package. More documentation can be found on their website, their wiki and in their corresponding paper Brookes et al. (2020).

Documentation

In this task, participants learn where a number of objects are "hidden" within environments during the encoding trials. Learning & performance are assessed on retrieval trials. As an additional condition, I have added control trials, in which all spatial cues apart from the ground and the walls are removed. This provides a condition that can be used as a control in fMRI analyses because it features translation without the spatial cues necessary for navigation and orientation (i.e. landmarks).

In standard mode, each trial starts with a cue period during which the target object is presented as an image on the screen on top of a semi-transparent grey panel. This is followed by a delay period during which the image of the target is replaced with a fixation marker. After this, the participant can move and complete the task. The trial ends with a stationary ITI period. The durations of the three periods (cue, delay & ITI) can be configured separately for each trial. All trials end with the participant "collecting" the target object by running over/into it. Unless the experiment is set to continuous mode, which only makes sense for encoding trials because no cue is presented apart from on the first trial, the participant is teleported to a pre-specified location at the beginning of each trial.

If specified, a message (e.g. with further instructions) can be displayed after a trial. However, before the actual message is displayed, a standard pause display is presented that is closed by pressing the space bar. The actual message is closed as soon as the letter S is pressed or an MRI scanner sends an S to the task computer (more on this below).

After filling out the UXF start-up menu, the task waits for "S" to be pressed on the keyboard before it begins.

Special keys

  • Escape (ESC) will close the task.
  • Backspace will count out the number of remaining trials in the block by beeping repeatedly. If a progress bar is used, this feature is largely unnecessary.
  • R will rotate the camera 180 degrees, which can be used to "unstuck" the participant.
  • S needs to be pressed to start each block; this was added to make the task fMRI compatible.
  • Spacebar closes pause screens.

Trial types

Encoding

The idea behind encoding trials is that the participant has to learn the object locations. The object is therefore visible from the beginning and just has to be collected.

Control

Control trials are identical to encoding trials apart from the fact that all background objects (e.g. landmarks) and other spatial cues are removed, so that they serve as a control condition that features translation but does not rely on "map-based navigation". Note that even the sun's orientation is changed so that it shines orthogonally to the ground plane, removing the spatial cues provided by shadows.

Retrieval

In contrast to the two other trial types, the object is not visible from the beginning on retrieval trials. Instead, participants are supposed to navigate to the location where they think the target object is hidden and then press the confirmation key, after which the object appears, providing feedback to the participants. After that, the object only needs to be collected as on the other trials.

The environments

The OLM task comprises two main environments that are meant for experiments plus an additional simple environment that is meant to teach participants how the task works. The main environments have been optimised to run even on slightly weaker hardware, including laptops with integrated GPUs.

|            | Grassy Arena    | Practice Arena    | Desert Arena       |
|------------|-----------------|-------------------|--------------------|
| Shape      | Circle          | Hexagon           | Square             |
| Dimensions | 180 vm diameter | 50 vm side length | 190 vm side length |

The objects available

The "classical" objects were chosen based on being relatively symmetric along their vertical axes to avoid view-dependent confounds and sourced from the internet from various websites. Because at far distance, objects are still difficult to see, I added a large glowing-red arrow that rotates and jumps up and down above the objects whenever the object is visible. In total there are 48 objects that can be used.

| Object name | Object number | Meant for environment | To be used for | Category | Version |
|---|---|---|---|---|---|
| Barrel | 1 | Grassy Arena | Classical | | v2.1.1 |
| Basketball | 2 | Desert arena | Classical | | v2.1.1 |
| Cake | 3 | Desert arena | Classical | | v2.1.1 |
| Traffic cone | 4 | Grassy Arena | Classical | | v2.1.1 |
| Dice | 5 | Desert arena | Classical | | v2.1.1 |
| Donut | 6 | Grassy Arena | Classical | | v2.1.1 |
| Drum | 7 | Desert arena | Classical | Musical instruments | v2.1.1 |
| Football | 8 | Grassy Arena | Classical | | v2.1.1 |
| Gift | 9 | Control trials | Classical | | v2.1.1 |
| Lamp | 10 | Desert arena | Classical | | v2.1.1 |
| Pawn | 11 | Grassy Arena | Classical | | v2.1.1 |
| Pineapple | 12 | Desert arena | Classical | Fruits | v2.1.1 |
| Vase | 13 | Grassy Arena | Classical | | v2.1.1 |
| Teddy bear | 14 | Practice, but also available in other environments (> v3.0.0) | Practice | | v2.1.1 |
| Apple | 15 | | Categorical | Fruits | v3.0.0 |
| Banana | 16 | | Categorical | Fruits | v3.0.0 |
| Chimpanzee | 17 | | Categorical | Animals | v3.0.0 |
| Clarinet | 18 | | Categorical | Musical instruments | v3.0.0 |
| Cutter knife | 19 | | Categorical | Tools | v3.0.0 |
| Deer | 20 | | Categorical | Animals | v3.0.0 |
| Dog | 21 | | Categorical | Animals | v3.0.0 |
| Drill | 22 | | Categorical | Tools | v3.0.0 |
| Elephant | 23 | | Categorical | Animals | v3.0.0 |
| Guitar | 24 | | Categorical | Musical instruments | v3.0.0 |
| Hammer | 25 | | Categorical | Tools | v3.0.0 |
| Harmonica | 26 | | Categorical | Musical instruments | v3.0.0 |
| Hippo | 27 | | Categorical | Animals | v3.0.0 |
| Insulating tape | 28 | | Categorical | Tools | v3.0.0 |
| Kangaroo | 29 | | Categorical | Animals | v3.0.0 |
| Keyboard for music | 30 | | Categorical | Musical instruments | v3.0.0 |
| Lemon | 31 | | Categorical | Fruits | v3.0.0 |
| Measurement tape | 32 | | Categorical | Tools | v3.0.0 |
| Orange | 33 | | Categorical | Fruits | v3.0.0 |
| Pear | 34 | | Categorical | Fruits | v3.0.0 |
| Piano | 35 | | Categorical | Musical instruments | v3.0.0 |
| Pliers | 36 | | Categorical | Tools | v3.0.0 |
| Polar bear | 37 | | Categorical | Animals | v3.0.0 |
| Pomegranate | 38 | | Categorical | Fruits | v3.0.0 |
| Saw | 39 | | Categorical | Tools | v3.0.0 |
| Saxophone | 40 | | Categorical | Musical instruments | v3.0.0 |
| Strawberry | 41 | | Categorical | Fruits | v3.0.0 |
| Tiger | 42 | | Categorical | Animals | v3.0.0 |
| Trumpet | 43 | | Categorical | Musical instruments | v3.0.0 |
| Turnscrew | 44 | | Categorical | Tools | v3.0.0 |
| Violin | 45 | | Categorical | Musical instruments | v3.0.0 |
| Watermelon | 46 | | Categorical | Fruits | v3.0.0 |
| Wrench | 47 | | Categorical | Tools | v3.0.0 |
| Zebra | 48 | | Categorical | Animals | v3.0.0 |

All objects are available in all environments with the exception that the teddy bear is only available in the practice arena.

Again: note that these objects have not been created by me and do not fall under my licensing terms. Please check the credits to see where they can be found. A list of these resources can be found here.

Movement & keys

Because this task was primarily designed to be used inside an MRI scanner, the standard movement mode ("actionNeedToBeEnded": true) might be relatively unintuitive. That is, in order to move forward, participants have to press the forward key; the forward movement then continues until the forward key is pressed again. Many video game applications instead require a continued button press for continued forward movement. Furthermore, before a new action can be started (e.g. turning to the left or to the right), the ongoing action has to be stopped. The rationale for this is that a) many MRI button boxes cannot register prolonged button presses and b) I wanted movement only to be possible in straight lines. The speed of forward translation and the rotation speed are set to constant values that can be changed trial-by-trial.

The default keys are: "W" for forward, "A" for left, "D" for right, "L" for confirm, "S" for start and "space" for skipping the experimenter messages.

If the participant hits a wall and the collision shakes the player character so that it might fall over, a script rights the character immediately and logs this with a timestamp.

How to configure the task?

The task was created with the explicit aim of making it as general and useful as possible to other researchers, even if they have no prior experience with Unity3D. For this, the corresponding input files in the StreamingAssets folder of the build need to be edited/pasted. Here is an overview of the files and what can/has to be configured in them.

Here you can find an example configuration of the task.

The welcome.json file

The three different environments are accessed via the welcome UI, which is configured with the welcome.json file. In this file you can specify the following (a minimal example is sketched after the list):

  • button1Label, button2Label, button3Label = Strings for the labels of the buttons that load the grassy, practice and desert arena respectively. Even if you do not show one of the buttons, the numbering is still fixed (i.e. you need to use button3Label for the desert arena).
  • button1Show, button2Show, button3Show = Booleans that control whether or not the buttons are available to the participants. This can be used to prevent participants from opening the wrong environment.
  • title = String with which you can change the title. The default is "Object Location Memory Task".
  • billboardText = String that controls which instructions are displayed on the billboard.
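
As a minimal sketch, a welcome.json using the fields above could look like this (the labels and flag values are illustrative, not defaults):

{
    "button1Label": "Grassy arena",
    "button2Label": "Practice arena",
    "button3Label": "Desert arena",
    "button1Show": true,
    "button2Show": true,
    "button3Show": false,
    "title": "Object Location Memory Task",
    "billboardText": "Please read the instructions and wait for the experimenter."
}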

In principle, three files are needed for each environment: a .csv file that controls trial-by-trial behaviour, a .json file whose name starts with OLM_ plus the name of the arena (i.e. grassy, practice or desert), and a .json file that controls the UXF start-up menu UI. Further suffixes can be added, making it possible to choose from several .json files that, for instance, follow the pattern OLM_grassy*.json. Caution should be exercised, however, if more than one file following the same pattern is placed in the StreamingAssets folder. A hypothetical folder layout is sketched below.
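
For orientation, the StreamingAssets folder of a build that only uses the grassy arena could, for example, contain the following files (the OLM_ and .csv file names are made up for illustration; only the welcome.json and startupText_grassy.json names are fixed):

StreamingAssets/
    welcome.json
    startupText_grassy.json
    OLM_grassy_session1.json
    trials_session1.csv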

The trial .csv file

A valid .csv file needs to contain the following columns, but extra columns can be added to make analysis easier (an illustrative file excerpt is shown after the list):

  • trial_num = Number of the trial (a special UXF name, check their documentation). This determines the order in which the trials are presented to the participant.
  • block_num = Number of the block (a special UXF name, check their documentation).
  • targets = The target given as the object number (see above how these numbers correspond to the object names). Object 14 is only available in the practice arena.
  • start_x = Start location of the player where they are teleported to. In continuous mode, only relevant on trial 1.
  • start_z = Start location of the player where they are teleported to. In continuous mode, only relevant on trial 1.
  • start_yRotation = Start rotation/heading angle of the player in degrees (0 - 360). In continuous mode, only relevant on trial 1.
  • object_x = Object location of the target.
  • object_z = Object location of the target.
  • cue = Duration of the cue period in seconds.
  • delay = Duration of the delay period after the cue in seconds.
  • ITI = Inter-trial interval (ITI) in seconds.
  • trialType = Trial type, i.e. "encoding", "retrieval" or "control". Spelling and capitalisation are important.
  • speedForward = Forward speed in vm/s.
  • rotationSpeed = Rotation speed in degrees/s.
  • messageToDisplay = Integer indicating whether a message should be displayed after the trial (yes if >= 0, no if -1). Numbers above -1 are used as the index of the message in the list specified in the corresponding .json file (see below).

Optional columns:

  • feedback_critVal1 & feedback_critVal2 = Values that control the three-tiered feedback system (green, yellow & red) for retrieval trial performance (i.e. Euclidean placement error in virtual metres).
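
As an illustration, the first few rows of a trial .csv file could look like this (all numbers are made-up placeholder values, not recommendations):

trial_num,block_num,targets,start_x,start_z,start_yRotation,object_x,object_z,cue,delay,ITI,trialType,speedForward,rotationSpeed,messageToDisplay
1,1,1,0,0,90,20,-15,2,1,2,encoding,10,40,-1
2,1,1,0,-20,180,20,-15,2,1,2,retrieval,10,40,-1
3,1,2,10,0,270,-25,5,2,1,2,control,10,40,0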

The main .json file

In the main .json file, several things can and have to be configured in order for the task to run. Here is a complete list (a minimal example file is sketched after the list):

  • targetFrameRate = An integer that can cap the frames per second (FPS) at which the task is presented. If no cap is desired, simply choose a very high value (e.g. 9999). Note that this can cause issues or might not work in WebGL builds.
  • trial_specification_name = The file name of the .csv file (see above). This is a UXF field that is needed.
  • continuousMode = Specifies whether or not to use continuous mode (see below). Possible values: false/true.
  • soundMode = This controls the sound in the experiment. Possible values: 1 = all sounds (collection & message sound), 2 = only plays a sound when messages are displayed and 3 = no sound. soundMode = 2 can, for instance, be useful if the MRI operator wishes to be notified when a run/block is over.
  • warningCriterium = The minimum distance a participant should move before pressing the confirmation button.
  • warningMessage = The warning message that is displayed if the participant moves less than the criterium before pressing the confirmation button. This distance is measured relative to the starting position, not as absolute moved distance.
  • waitForExperimenter = The pause message that is presented before the actual messages. This is mainly meant for fMRI data collection where the MRI operator wants to reset the recording. This message is skipped by pressing space bar.
  • blockMessage = A list of strings for each message that the experimenter wants to be displayed. The correct message is chosen by the messageToDisplay variable from the .csv file, which serves as an index starting with 0.
  • objectNames = A list of strings to rename the objects. This needs to be specified but only has consequences for the results. Default: ["barrel", "basketball", "cake", "cone", "dice", "donut", "drum", "football", "gift", "lamp", "pawn", "pineapple", "vase"]. Also note that for the practice arena, another entry has to be added (e.g. "teddy").
  • useHTTPPost = Specifies whether or not data should be sent to a server using HTTP POST (see here). Possible values: false/true. If false, no countdown message is shown because it is not needed.
  • endCountDown = Integer specifying how many seconds the task should wait at the end of the experiment to allow data to be sent to the server. This was added because currently there is no way to know when the web request is completed. We typically set this to 60 seconds just to be safe.
  • endMessage = String for message to be displayed at the end of the experiment as part of the countdown from endCountDown to zero. After that, the task closes automatically.
  • url = String for the url of the HTTPPost server.
  • username = String for the username for the HTTPPost server.
  • password = String for the password for the HTTPPost server. BE CAREFUL not to use a sensitive password because it can be accessed by everyone (including participants).
  • changeKeys = Specification whether you want to change the default keys from W, A, D plus L to something else. Possible values: false/true.
  • keys = A list of strings for the four keys necessary for the task. Please check here to get the correct names. Please do not use "S", "R" or "space bar" as they are reserved keys.
  • useResponsePixx = Specification whether you want to use a responsePixx button box. This needs to be configured separately (see below).
  • objectRotationSpeed = Floating point number controlling the rotation speed of the objects.
  • showConstantCue = Boolean whether or not to show a constant cue of the target object at the bottom of the screen.
  • shuffleBlocks = List of indices of blocks within which the trials should be randomly shuffled.
  • actionNeedToBeEnded = Boolean that controls whether an ongoing action (e.g. forward movement) has to be ended before a new action can be started (see Movement & keys above). Possible values: false/true.
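
As a minimal sketch, a main OLM_*.json file with the fields above could look like this (all values are illustrative placeholders rather than recommended defaults, and the referenced .csv file name is made up):

{
    "targetFrameRate": 60,
    "trial_specification_name": "trials_session1.csv",
    "continuousMode": false,
    "soundMode": 1,
    "warningCriterium": 5.0,
    "warningMessage": "You did not move. Please navigate to where you think the object is hidden before confirming.",
    "waitForExperimenter": "Please wait for the experimenter.",
    "blockMessage": ["End of the first run.", "End of the experiment. Thank you!"],
    "objectNames": ["barrel", "basketball", "cake", "cone", "dice", "donut", "drum", "football", "gift", "lamp", "pawn", "pineapple", "vase", "teddy"],
    "useHTTPPost": false,
    "endCountDown": 60,
    "endMessage": "Please wait while the data is saved.",
    "url": "",
    "username": "",
    "password": "",
    "changeKeys": false,
    "keys": ["w", "a", "d", "l"],
    "useResponsePixx": false,
    "objectRotationSpeed": 30.0,
    "showConstantCue": false,
    "shuffleBlocks": [],
    "actionNeedToBeEnded": true
}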

The start up menu .json file

The .json files needed to control the start menus are called: startupText_grassy.json, startupText_practice.json and startupText_desert.json.

Here is the corresponding example file, as there is not much else to explain:

{
    "chromeBar": "Startup",
    "instructionsPanelContent1": "Welcome to OLM task! ",
    "instructionsPanelContent2": "You could use this space to display some instructions to the researcher or the participant.",
    "expSettingsprofile": "Experiment settings profile",
    "localPathElement": "Local data save directory",
    "localPathElement_placeHolder": "Press browse button to select...",
    "participantID": "Participant ID",
    "participantID_placeholder": "Enter text...",
    "sessionNumber": "Session number",
    "termsAndConditions": "Please tick if you understand the instructions and agree for your data to be collected and used for research purposes.<color=red>*</color>",
    "beginButton": "Begin session."
}

Extra: How to configure the responsePIXX button box interface?

The responsePixx button box is configured via responsePixx.json. You have to configure the following values (an example file is sketched after the list):

  • yellowCode = Button code.
  • redCode = Button code.
  • blueCode = Button code.
  • greenCode = Button code.
  • deviceType = Possible values: 1 = DATAPixx, 2 = DATAPixx2, 3 = DATAPixx3, 4 = VIEWPixx, 5 = PROPixx Controller, 6 = PROPixx, 7 = TRACKPixx.
  • dinBuffAddr = Address of the buffer. Default is 12000000.
  • dinBuffSize = Size of the buffer. Default is 1000.
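
As a sketch, a responsePixx.json could look like this (the four button codes are placeholders that need to be replaced with the codes of your own device; the buffer values are the defaults mentioned above):

{
    "yellowCode": 1,
    "redCode": 2,
    "blueCode": 3,
    "greenCode": 4,
    "deviceType": 4,
    "dinBuffAddr": 12000000,
    "dinBuffSize": 1000
}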

For more documentation, please check with the company itself (VPixx Technologies). The responsePixx integration is very basic, but it works for us.

Continuous vs. standard mode

Mainly to create videos of this task, a continuous mode is available. In this mode, participants are not teleported at the end of a trial; instead the next trial starts immediately without any interruption (hence the name). This also means that there are no cue, delay and ITI periods apart from the first trial. As a consequence, starting locations and cue/delay values only need to be specified for the first trial. For subsequent trials, they can be left unspecified.

Since there is no cue period showing which object is the target, this mode is not suited for retrieval trials.

WebGL

Versions higher than 4.0.0 support WebGL. To use the task on platforms like AWS or your own server, you have to add a link to a study dictionary .json hosted online to the study_dict_url.txt file in the StreamingAssets folder of the build (see the sketch below). An example study dictionary can be found here.
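
As a hypothetical example, the content of study_dict_url.txt would simply be the link to your own hosted study dictionary:

https://example.com/my-olm-study/study_dict.json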

What platforms/hardware/language does the OLM task support?

The OLM task was mainly developed for Windows, but it can also be built for macOS. However, there are some issues, and it is difficult to build a version for macOS that works across all versions of this operating system. The macOS build(s) should therefore be regarded as highly experimental! Later releases were not built for macOS for this reason.

The standard input device is the keyboard. So any button box that translates button presses into key presses should work. As noted above, responsePixx from vpixx.com is also integrated.

The task has been optimised to run even on slightly weaker hardware, but this should be tested before running the experiment. When run on laptops, it is probably sensible to keep them plugged in because most systems slow down drastically otherwise. Furthermore, changes to the graphics settings of Windows can help as well; here, high-performance mode is recommended.

The task was created to work with Chinese and English text. Other languages with different character sets should also work but this was not tested.

Special comments on using the task for fMRI research

This task was primarily created to investigate grid-like activation patterns during virtual navigation using fMRI. In order to create fMRI runs in the cleanest way, it is important to use UXF's "block_num" and to display a message at the end of the last trial of the preceding run. This is because, before the actual message is displayed, a standard pause message is shown, which gives the MRI operator enough time to handle the MRI data collection (e.g. manually stopping the run). An experimenter then has to press "space" to set the task into a state where it waits for the first S to be received by the data collection computer. This will reset the timer for the run itself. Note that if you want to discard the first volumes, you should configure the scanner so that no S is sent at that time. However, future versions of this task could make this configurable as well. If that is something that might be of interest to you, submit a Feature Request to this repository. For the first block, the task also only starts as soon as the first S is sent by the scanner. In our experience, it is better to restart the task for each run, but multiple runs can also be implemented without this.

Apart from this, what is saved by this task is also optimised for running fMRI-based spatial memory/navigation studies.

What is saved in this task?

The data is saved by UXF, so more information on this can be found in their documentation. Only the basics and the unique aspects of this task are covered here.

Trial results

The main results, including the behavioural performance, can be found in the trial_results.csv file for each participant. Also note that any column that is included in the .csv input file is also copied to the trial results. The following columns are saved:

  • ppid = The participant ID.
  • end_x = End position at the end of the trial or when pressing the confirmation button.
  • end_z = End position at the end of the trial or when pressing the confirmation button.
  • euclideanDistance = This is the main performance measure: The distance between the end position and the target/object position in vm.
  • objectName = Name of the object as specified in the .json file.
  • runStartTime = Time point in seconds when the block/run started. This is when the first S of the block/run is pressed/arrives.
  • objectNumber = Number of the object.
  • navStartTime = Time point in seconds at which the participant is able to move (after the cue and delay periods) and starts navigating.
  • navEndTime = End of the trial in seconds. This is a duplicate of end_time.
  • navTime = The total time between the start of navigation and the end of the trial in seconds.
  • timesObjectPresented = The number of times an object was the target. All trial types count.
  • confirmButtonTime = Time point when the confirm button is pressed.
  • movedDistance = Distance between start and end position in vm.
  • warningShown = Was a warning shown that the participant did not move enough?
  • player_movement_location_0 = Path to the tracking file of this trial.

Tracking of position & rotation

For each trial, the position and rotation of the participant are tracked for each frame. For this, UXF's standard position/rotation tracker has been adapted to add a boolean indicating whether there is forward translation. The files all follow the same naming convention (e.g. player_movement_T001.csv) and are saved for each trial separately. The columns of the tracking files are listed below, followed by an illustrative excerpt:

  • time = Time of the frame in seconds.
  • pos_x, pos_y & pos_z = Position at that frame.
  • rot_x, rot_y & rot_z = Rotation at that frame. x & z should mostly be unchanged unless the participant runs into a wall.
  • moving = A boolean whether the participant is moving forward.
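
For illustration only (the numbers are made up), a few consecutive rows of such a tracking file could look like this:

time,pos_x,pos_y,pos_z,rot_x,rot_y,rot_z,moving
10.016,0.00,1.00,0.00,0,90,0,False
10.033,0.00,1.00,0.00,0,90,0,False
10.049,0.16,1.00,0.00,0,90,0,True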

This file can be used to calculate the FPS for each trial because the position is recorded on each frame; for example, the instantaneous frame rate can be estimated as 1 divided by the time difference between consecutive rows.

Log

Everything else is saved in the log.csv file. Some of the things saved in the log are redundant, but this makes it easier to search for something. Here is an incomplete list of things saved in the log:

  • Session start time as date: E.g. "Session start time 9/2/2022 10:23:31 AM"
  • Screen resolution: E.g. "1920 x 1080 @ 60Hz"
  • Platform used: E.g. "Platform used is UNITY_STANDALONE_WIN"
  • Trigger time: E.g. "A trigger was send 9/2/2022 10:24:15 AM Run time: 52.66564"
  • Cue start: E.g. "Cue start of trial 1"
  • Cue end: E.g. "Cue end of trial 1"
  • Delay start: E.g. "Start of delay period of trial 1"
  • Delay end: E.g. "End of delay period of trial 1"
  • ITI start: E.g. "Start of ITI period of trial 1"
  • ITI end: E.g. "End of ITI period of trial 1"
  • Wait for experimenter: E.g. "Waiting for experimenter to press space."
  • Experimenter pressed space: E.g. "Experimenter pressed space bar."
  • The participant hit the wall and had to be rightened: E.g. "Attempting to righten the player."
  • Movement: "forwardKey was pressed.", "leftTurn was pressed." and "rightTurn was pressed."

How to cite this work?

TBA

License

The OLM task is licensed under the GPL-3.0 license.

Important notice: The repository contains various free assets that are not part of this license. Please contact the copyright holders to make sure you have permission if you want to re-use them. A list of these resources can be found here.
