(1) Overview

Introduction

Working memory is a theoretical concept originating in the field of cognitive psychology [, ]. Baddeley [] describes working memory as “a limited capacity system, which temporarily maintains and stores information” (p. 829). According to Baddeley (p. 829), working memory “supports human thought processes by providing an interface between perception, long-term memory and action.” Based on this description, one would expect working memory to play a crucial role in the performance of many cognitive tasks. Indeed, this appears to be the case: the working memory literature contains a multitude of findings showing that measures of working memory correlate with many different cognitive abilities, ranging from general intelligence to reading ability (see [] for a review).

Baddeley’s [] model of working memory is split into components corresponding to different sensory modalities. For example, Baddeley conceptualizes auditory and visuospatial working memory as discrete, yet related, components or abilities. This distinction is central to the current article, since visual monitoring tasks, such as the PyVDT described here, are thought to measure visual working memory [].

Visual monitoring tasks have been used in studies of hearing impairment (see [] for a review). Knutson et al. [] appear to have been the first to use this type of monitoring task to study cognitive factors assumed to be predictive of speech perception following cochlear implantation (i.e., the surgical insertion of an electrode into the cochlea, which allows profoundly deaf individuals to regain the sense of hearing). Knutson et al. used the Visual Monitoring Task (VMT), a test that requires participants to “watch the computer monitor as single-digit numbers are presented one at a time” (p. 819) and to “respond when those digits […] produce the specified pattern” (p. 819), an even-odd-even pattern of digits (e.g., “2–5–4”).

Knutson et al. [] found that VMT scores predicted post-implant audiological outcomes (i.e., measures of speech perception). Others (e.g., []) have since reported similar associations between speech reception in noise and visual monitoring scores. Lunner and Sundewall-Thorén used a translated version of the Visual Letter Monitoring test (VLM) originally developed by the MRC Institute of Hearing Research [] – a test which is similar in principle to the VMT used by Knutson et al. Instead of digits, however, the VLM uses three-letter words (e.g., P–E–N) as target sequences.

Akeroyd [] and Gfeller et al. [] view the VMT as a measure of attention, reaction time, and working memory. In line with this, Knutson et al. [] state that the VMT requires the participant to “maintain the last two digits in working memory” (p. 819) and to “update the two digits in working memory” (p. 819). This notion of updating is echoed by Akeroyd [], who describes the VMT as requiring “continuous performance” (p. S58). More recently, Knutson [] highlighted the “probable importance of working memory in the VMT” (p. 156) and stated that the test includes “a memory component” (p. 154). Moreover, since these monitoring tasks display stimuli that can be verbalized (i.e., digits and letters), they likely also reflect participants’ ability to overtly or covertly verbalize stimuli.

The association between working memory and speech perception first reported by Knutson et al. [] is central to the Ease of Language Understanding (ELU) model []. Rönnberg et al. [] theorize that working memory is beneficial to speech perception in difficult listening conditions (i.e., background noise), since – according to the ELU model – working memory helps maintain relevant information while inhibiting irrelevant information. The software described here allows researchers to test theories such as the ELU model and to attempt to replicate findings such as those reviewed in the preceding paragraphs.

Traditionally, researchers wanting to investigate the concept of working memory have had to purchase standardized tests or create their own. The former often ensures greater reliability and validity, but can be expensive; the latter is often cheaper, but less reliable and valid.

The PyVDT represents a first step toward providing a reliable and valid visual monitoring task as free software. Since the PyVDT very closely mimics the original Visual Monitoring Task [], the PyVDT can be assumed to be largely as reliable and valid as the VMT. In any case, since the PyVDT code is freely available, anyone can scrutinize the code, verify the reliability and validity of the test, and carry out independent replication studies. Additionally, bug reports, patches, forks and suggestions for improvement are encouraged.

Recently, Stone and Towse [] made a Java-based working memory test battery available as free software. Their battery consists of seven span tests: digit, matrix, arrow, reading, operation, rotation, and symmetry span. The PyVDT adds to this work by providing an additional, complementary measure of working memory.

Within hearing aid and cochlear implant research, the majority of studies investigating the relationship between monitoring scores and speech perception have used either the Visual Monitoring Task [] or the MRC Visual Letter Monitoring task []. Neither of these tests is available as free software, however. It is not straightforward, therefore, for researchers to inspect the source code of these tests or to attempt to replicate published findings. The PyVDT, again, aims to resolve these issues.

Like the VMT, the PyVDT displays single digits one at a time. Participants are instructed to press the space bar whenever the three most recently displayed digits match the pattern “even digit – odd digit – even digit” (i.e., the target sequence; see Fig. 1).
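
For illustration, the even-odd-even rule can be expressed in a few lines of Python. The sketch below is purely illustrative; the function name and structure are assumptions for expository purposes, not code taken from the PyVDT source.

```python
def is_target(last_three):
    """Return True if three digits form an even-odd-even pattern (sketch)."""
    a, b, c = last_three
    return a % 2 == 0 and b % 2 == 1 and c % 2 == 0

assert is_target((2, 5, 4))       # target: pressing space counts as a hit
assert not is_target((2, 5, 3))   # non-target: pressing space is a false alarm
```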

Figure 1 

Schematic representation of the presentation of digits during the PyVDT. The software shows single digits, one at a time, on screen. Digits change according to the presentation rate. The three rightmost (i.e., most recently presented) digits make up a target sequence (i.e., any sequence beginning with an even digit, followed by an odd digit, and ending with an even digit). Pressing the space bar while the final digit in a target sequence (here, the number two) is displayed would count as a “hit”, whereas failing to press the space bar in this case would count as a “miss”. Pressing the space bar at the end of a non-target sequence (e.g., the three leftmost digits) would count as a “false alarm”, whereas failing to press the space bar would count as a “correct rejection”. See Macmillan and Creelman [] for a detailed discussion of these four response types.

The PyVDT consists of two subtests. The default configuration displays one digit every two seconds during the first subtest and one digit every second during the second subtest; this configuration is identical to the one used by Knutson et al. []. The digit presentation rates can be adjusted using the configuration dialog (Fig. 2). Each subtest lasts 2 minutes and 32 seconds. Participant performance is measured using d’ (“dee-prime”), a measure of sensitivity originating in signal detection theory (or simply “detection theory”) [].
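
For readers unfamiliar with detection theory, d’ is computed as d’ = z(H) − z(F), where H is the hit rate, F is the false-alarm rate, and z is the inverse of the standard normal cumulative distribution function. The sketch below illustrates this standard formula in Python; it is not PyVDT’s own code, and the 1/(2N) adjustment for extreme rates used here is one common convention whose handling in PyVDT may differ.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' = z(hit rate) - z(false-alarm rate), cf. Macmillan and Creelman."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    n_targets = hits + misses
    n_nontargets = false_alarms + correct_rejections
    hit_rate = hits / n_targets
    fa_rate = false_alarms / n_nontargets
    # Clamp rates away from 0 and 1 to avoid infinite z-scores
    # (the 1/(2N) correction; other conventions exist).
    hit_rate = min(max(hit_rate, 1 / (2 * n_targets)), 1 - 1 / (2 * n_targets))
    fa_rate = min(max(fa_rate, 1 / (2 * n_nontargets)), 1 - 1 / (2 * n_nontargets))
    return z(hit_rate) - z(fa_rate)

# Example: 18 hits out of 20 targets, 3 false alarms out of 60 non-targets.
print(round(d_prime(18, 2, 3, 57), 2))
```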

Figure 2 

PyVDT configuration dialog.

Implementation and architecture

PyVDT is based on the PsychoPy software suite [, ], which facilitates the development of computer-based experiments by providing an application programming interface (API) for presenting auditory and visual stimuli, among other things. PyVDT leverages the PsychoPy API to display stimuli and to log detailed diagnostic data about each test run (e.g., timestamped information about keypresses and dropped frames). Most importantly, PsychoPy’s TextStim class is used to display digits, and its logging module is used to save diagnostics to log files. These log files (named <prefix>-<subject number><subject name>.log) can be used for general troubleshooting or to evaluate the performance of the graphics processing unit (GPU) installed in the computer running PyVDT, in order to verify that stimuli are presented at the correct rate. The frame log files generated by PyVDT (named *-frames.log) contain a comma-separated list of the durations, in milliseconds, of all presented frames. These frame log files can be used to identify “dropped” frames (see the “Quality control” section below).
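
To illustrate how the PsychoPy API is typically used for this kind of presentation, the following sketch displays digits with TextStim and writes a diagnostic log file. It is a simplified illustration under assumed parameters (window settings, log file name, timing via core.wait), not an excerpt from the PyVDT source.

```python
# Minimal, illustrative digit presentation with PsychoPy; PyVDT's actual
# code is more elaborate (precise timing, response collection, etc.).
from psychopy import visual, core, logging

logging.LogFile('example.log', level=logging.DEBUG)  # diagnostic log file

win = visual.Window(fullscr=True, color='white', units='norm')
digit = visual.TextStim(win, text='', color='black', height=0.3)

for d in [2, 5, 4]:
    digit.text = str(d)
    digit.draw()
    win.flip()          # show the digit on the next screen refresh
    core.wait(2.0)      # presentation rate: one digit every two seconds

win.close()
core.quit()
```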

PyVDT comes with a number of pre-defined (pseudo-random) digit sequences, each containing the same number of targets. The targets make up roughly one quarter of the total number of stimuli presented. The digit sequences are defined in the two .csv files provided with the software (pyvdtSequences-rate1.csv and pyvdtSequences-rate2.csv).
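
As an illustration of how such a sequence file might be checked, the sketch below counts even-odd-even triplets in a sequence read from pyvdtSequences-rate1.csv. The column layout of the file is an assumption here; the counting logic simply follows the target definition given above.

```python
import csv

def count_targets(digits):
    """Count even-odd-even triplets in a digit sequence (sketch)."""
    return sum(
        digits[i] % 2 == 0 and digits[i + 1] % 2 == 1 and digits[i + 2] % 2 == 0
        for i in range(len(digits) - 2)
    )

# Assumed layout: digits stored cell by cell; collect every digit in the file.
with open('pyvdtSequences-rate1.csv', newline='') as f:
    digits = [int(cell) for row in csv.reader(f)
              for cell in row if cell.strip().isdigit()]

print(count_targets(digits), 'targets among', len(digits), 'digits')
```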

Instructions for use

In order to run the PyVDT, two software packages have to be installed: PsychoPy and PyVDT itself. After installation, the PyVDT can be launched by opening the file pyvdt.py in PsychoPy Coder and clicking the green “run” button. This brings up the configuration dialog (Fig. 2), which allows the user to input participant names and numbers, digit presentation rates, prefixes for the output files, monitor refresh rates, etc. After clicking “OK”, the program switches to full-screen mode. That is, the operating system’s normal graphical interface (e.g., icons, taskbars, etc.) is replaced with a white background taking up the entire screen, and a welcome message is displayed. After a key is pressed, a fixation dot is briefly shown in the center of the screen. Immediately afterwards, the first digit is displayed and the test begins. Following the first subtest, a message asks the participant to press any key to continue with the second subtest. After the second subtest ends, a “thank you” message is shown. The messages shown in full-screen mode can be customized by editing the file pyvdt.ini.
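
For readers curious how such a dialog can be built, PsychoPy provides gui.DlgFromDict, which generates a dialog from a dictionary of fields. The sketch below is merely similar in spirit to PyVDT’s configuration dialog; the field names are assumptions, not PyVDT’s actual settings.

```python
from psychopy import gui

# Hypothetical configuration fields, illustrating the DlgFromDict pattern.
config = {
    'participant name': '',
    'participant number': 1,
    'presentation rate 1 (s/digit)': 2.0,
    'presentation rate 2 (s/digit)': 1.0,
    'output file prefix': 'main',
}
dlg = gui.DlgFromDict(dictionary=config, title='PyVDT configuration')
if not dlg.OK:
    raise SystemExit('Configuration cancelled by user.')
print(config)  # the dictionary now holds the values entered by the user
```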

PyVDT saves output data to .csv files that can be imported into a statistical software package for further analysis. The .csv files named <prefix>-data-{1,2}.csv (one file per subtest) contain the main output variables of interest (e.g., d’). These output files contain data from multiple PyVDT runs, provided that these runs use the same output file prefix (defined in the configuration dialog; see Figure 2). This allows experimenters to save the results of different experiments in separate .csv files (e.g., a pilot study might use the prefix pilot, the main study might use main, etc.).
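
As a minimal example of further analysis, the main output file can be imported with, e.g., pandas. The file name below assumes the prefix main and subtest 1; since the exact column names are documented elsewhere (see csvfunc.py, below), the sketch simply inspects the file.

```python
import pandas as pd

# Import the subtest-1 results saved under the (assumed) prefix "main".
data = pd.read_csv('main-data-1.csv')
print(data.head())      # inspect the output variables (e.g., d')
print(data.describe())  # summary statistics across runs
```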

Aside from the main output .csv files, two additional .csv files are generated per run. These files (named <prefix>-<timestamp>-{1,2}.csv) contain raw, mostly unprocessed data relating to the presentation of stimuli and the calculation of the sensitivity measures. These .csv files are primarily provided for diagnostic purposes. In most cases, experimenters can safely ignore these particular files.

Sample output .csv files are included with PyVDT. The different variables saved in these .csv files are described in the comments found in the file csvfunc.py.

Quality control

The PyVDT has been successfully used in a study of 25 cochlear implant users []. Additionally, PyVDT uses the Python doctest module [] to verify that the measures reported by the PyVDT (e.g., the d’ measure of sensitivity; []) are calculated correctly. To see the results of these doctests, run Python with verbose output enabled (e.g., python pyvdt.py -v). The formulae used to calculate the sensitivity measures are those reported by Macmillan and Creelman [] (p. 21).
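
The sketch below illustrates the doctest idiom in general terms; the function shown is hypothetical and not taken from the PyVDT source. Examples embedded in docstrings are executed and checked by doctest.testmod(), which reports each verified example when the script is run with -v.

```python
def hit_rate(hits, targets):
    """Return the proportion of targets detected.

    >>> hit_rate(18, 20)
    0.9
    """
    return hits / targets

if __name__ == '__main__':
    import doctest
    doctest.testmod()  # verbose output when the script is run with -v
```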

The PyVDT also includes two self-tests that are accessible via the configuration dialog (see the text box labeled “self-test mode” in Figure 2). The first self-test (enabled by entering y in the self-test text box) simulates participant responses by generating sets of random responses and the corresponding output data; running it is also a convenient way to generate sample output data. The second self-test (enabled by entering p in the self-test text box) plots frame durations from previous PyVDT runs.

Running these self-tests before using the PyVDT in an experimental study is highly recommended in order to verify that testing conditions are similar for all participants and that results are reliable. Ideally, digit presentation rates should be constant across experiments; it is therefore important to ensure that the computer used to run the PyVDT performs adequately. Computer monitors update many times per second, as indicated by the monitor’s refresh rate in Hz; each of these rapid screen updates is known as a frame. If the computer hardware is unable to present the stimuli at the desired rate, frames may be discarded, or “dropped”. Thus, dropped frames often signify performance problems. The second self-test (see above) allows users to identify such problems: dropped frames show up as spikes on the plot.
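
For users who prefer to inspect frame logs directly, the sketch below plots the comma-separated frame durations from a *-frames.log file using matplotlib. The file name and the 60 Hz nominal refresh rate are assumptions to be adapted to the actual setup.

```python
import matplotlib.pyplot as plt

# Read the comma-separated frame durations (ms) from an assumed log file.
with open('main-frames.log') as f:
    durations_ms = [float(x) for x in f.read().split(',') if x.strip()]

plt.plot(durations_ms)
plt.axhline(1.5 * 1000 / 60, linestyle='--')  # rough dropped-frame threshold
plt.xlabel('frame number')
plt.ylabel('frame duration (ms)')
plt.title('Frame durations (spikes indicate dropped frames)')
plt.show()
```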

The software has been tested and found to work on Windows (8.1), Mac (Snow Leopard), and Linux (Debian Jessie) platforms. Bugs can be reported to the author via the “Issues” function on GitHub (https://github.com/criticalmads/pyvdt/issues).

(2) Availability

Operating system

Linux, Windows, Mac.

Programming language

Python.

Additional system requirements

PyVDT was designed to run on laptops; its hardware requirements are modest, and no unusual equipment is needed. The software has been extensively tested on a 2009 13-inch MacBook Pro (model identifier “MacBookPro5,5”; 2.26 GHz Intel Core 2 Duo processor, 2 GB DDR3 RAM) and should run smoothly on desktop PCs and laptops with dedicated GPUs supporting OpenGL 2.0. When using slower hardware, however, it is advisable to verify (by examining the log files or by plotting frame durations via the built-in self-test) that no frames are dropped or significantly delayed during testing. PyVDT itself takes up less than 1 megabyte of disk space, while PsychoPy requires a few hundred megabytes, depending on the platform.

Dependencies

PsychoPy version 1.73 or higher (http://www.psychopy.org).

List of contributors

Mads Hansen, developer, Department of Psychology and Behavioural Sciences, Aarhus University, Denmark.

Software location:

Archive

Name: Figshare

Persistent identifier: https://dx.doi.org/10.6084/m9.figshare.1583394.v3

Licence: GPLv3

Publisher: Mads Hansen

Date published: 23/10/15 (v1); 27/01/16 (v2); 05/02/16 (v3)

Code repository

Name: GitHub

Identifier: https://github.com/criticalmads/pyvdt

Licence: GPLv3

Date published: 23/10/15

Language

English. PyVDT comes with user-selectable English and Danish participant instructions to be shown on screen. Additional languages can be defined in the configuration file (pyvdt.ini).

(3) Reuse potential

The PyVDT is well-suited for experimental studies of working memory within psychology, neuroscience, cognitive science, and related fields. The software could easily be modified or extended to use letters as stimuli (like the MRC letter monitoring task []). Also, the software could be modified to use adaptive presentation rates rather than the fixed rates currently used.
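
As a sketch of such a modification, the target check could be rephrased in terms of three-letter target words rather than digit parity. The word list and function below are purely illustrative, not part of the PyVDT source.

```python
TARGET_WORDS = {'PEN', 'CAT', 'SUN'}  # hypothetical three-letter targets

def is_letter_target(last_three_letters):
    """Return True if the three most recent letters spell a target word."""
    return ''.join(last_three_letters).upper() in TARGET_WORDS

assert is_letter_target(['P', 'E', 'N'])
assert not is_letter_target(['P', 'E', 'T'])
```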

Technical support is provided by the author via GitHub.