Pre-packaged Test Paradigms

Our library of pre-packaged test paradigms offers a broad spectrum of cognitive task types that can be used out of the box and configured with respect to block design and task characteristics.

General Task Types

  • Reaction Time Tasks presented in blocks of randomized trials
  • Time-boxed Continuous Performance
  • Staircase with different threshold probabilities
  • Ascending Complexity (e.g. memory span)
  • Word List (free recall, recognition)
  • Visual Analogue Scales
  • Likert Scales
  • Tower of Hanoi
  • Trail Making Task
  • Maze Vigilance Task
  • Hidden Path Learning Test
  • Unstable Tracking (Joystick in Chrome only)
  • Road Tracking (Joystick in Chrome only)


Configuration Options

  • Length: Number of trials or test duration
  • Block Design: Number and order of blocks
  • Test-specific task characteristics (see the configuration sketch after this list)
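
As a rough illustration of how these options fit together, a block-design configuration can be thought of as a small settings object. The TypeScript sketch below is purely hypothetical: the field names, the paradigm identifier, and the example values are assumptions for illustration, not the platform's actual configuration format.

    // Hypothetical shape of a test configuration; all names are illustrative only.
    interface BlockConfig {
      label: string;   // e.g. "practice" or "main"
      trials: number;  // number of randomized trials in this block
    }

    interface TestConfig {
      paradigm: string;                     // pre-packaged paradigm identifier
      language: "en" | "de";                // English (default) or German
      durationSec?: number;                 // alternative to a fixed trial count
      blocks: BlockConfig[];                // number and order of blocks
      taskParams: Record<string, unknown>;  // test-specific task characteristics
    }

    // Example: a two-block reaction time task with test-specific settings.
    const simpleRT: TestConfig = {
      paradigm: "reaction-time",
      language: "en",
      blocks: [
        { label: "practice", trials: 10 },
        { label: "main", trials: 60 },
      ],
      taskParams: { stimulusDurationMs: 200, interTrialIntervalMs: 1500 },
    };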


Language

  • English (default)
  • German
  • Other languages available per request


Test Authoring

In addition to the pre-packaged test paradigms, Cognition Lab provides the ERTS scripting language for authoring custom experiments with custom stimulus material, trial structures, and session structures.

Stimulus Material

  • Text
  • Pixel Graphics
  • Images
  • Videos
  • Sound Samples
  • Sine-wave tones (Chrome only)
  • Scales (VAS, Likert)

Screen Rendering

  • mm coordinate system for accurate screen positioning (see the sketch after this list)
  • Default: Direct HTML5-canvas rendering mode
  • Optional: Synchronization with the display refresh cycle
  • User-defined: Pre-loaded images rendered from a back buffer
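
To illustrate how a mm coordinate system maps onto canvas pixels, the TypeScript sketch below converts millimetre positions to pixel coordinates using a manually calibrated physical canvas width. The function names and calibration structure are assumptions for illustration, not the platform's internal rendering API.

    // Minimal sketch: map mm coordinates to canvas pixels using a calibrated
    // physical width of the drawing area (e.g. from a manual calibration step).
    interface Calibration {
      canvasWidthMm: number;  // physically measured width of the canvas
    }

    function mmToPx(mm: number, canvas: HTMLCanvasElement, cal: Calibration): number {
      const pxPerMm = canvas.width / cal.canvasWidthMm;
      return mm * pxPerMm;
    }

    // Example: draw a 40 x 40 mm square centred 20 mm to the right of the canvas centre.
    function drawSquare(canvas: HTMLCanvasElement, cal: Calibration): void {
      const ctx = canvas.getContext("2d");
      if (!ctx) return;
      const size = mmToPx(40, canvas, cal);
      const x = canvas.width / 2 + mmToPx(20, canvas, cal) - size / 2;
      const y = canvas.height / 2 - size / 2;
      ctx.fillRect(x, y, size, size);
    }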


Input Devices

  • Keyboard
  • Mouse
  • Touch
  • Voice-key (Chrome only; see the sketch after this list)
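
As an illustration of how a browser-based voice key can work in principle, the TypeScript sketch below uses the Web Audio API to report a timestamp when the microphone signal first crosses an amplitude threshold. The threshold, polling strategy, and callback are assumptions for illustration; this is not the platform's actual voice-key implementation.

    // Minimal voice-key sketch: call back with a timestamp when the microphone
    // RMS level first exceeds a threshold. The threshold value is illustrative only.
    async function startVoiceKey(onVocalOnset: (timeMs: number) => void,
                                 threshold = 0.05): Promise<void> {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const audioCtx = new AudioContext();
      const source = audioCtx.createMediaStreamSource(stream);
      const analyser = audioCtx.createAnalyser();
      analyser.fftSize = 1024;
      source.connect(analyser);

      const samples = new Float32Array(analyser.fftSize);
      const poll = () => {
        analyser.getFloatTimeDomainData(samples);
        // Root-mean-square amplitude of the current audio frame.
        const rms = Math.sqrt(samples.reduce((sum, s) => sum + s * s, 0) / samples.length);
        if (rms > threshold) {
          onVocalOnset(performance.now());  // vocal response onset detected
          return;                           // stop polling after the first onset
        }
        requestAnimationFrame(poll);
      };
      requestAnimationFrame(poll);
    }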

External Web Applications

Tests implemented as external web applications can be integrated via a URL and a simple API for posting results back to Cognition Lab. Once integrated, these tests can be combined and scheduled via the Cognition Lab Web Console.
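
For example, an external test (such as a jsPsych experiment) could post its results back at the end of a session with a single HTTP request. The callback URL and payload shape in the TypeScript sketch below are placeholders, not the documented Cognition Lab API.

    // Hypothetical sketch of an external web test posting results back.
    interface TrialResult {
      trial: number;
      response: string;
      rtMs: number;
    }

    async function postResults(callbackUrl: string,
                               sessionId: string,
                               results: TrialResult[]): Promise<void> {
      const response = await fetch(callbackUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ sessionId, results }),
      });
      if (!response.ok) {
        throw new Error(`Result upload failed: ${response.status}`);
      }
    }

An external experiment would call postResults once at the end of its run, e.g. from a jsPsych on_finish handler, using the URL and session identifier it was launched with.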

Tested Integrations

  • jsPsych


Study Management

On top of the pre-packaged or custom test paradigms, Cognition Lab provides a powerful environment for setting up an experiment or study design.

Participant Enrollment

  • Individual: Assign individual participants to an experiment
  • Group: Assign a group of participants to an experiment
  • Import: Import a list (CSV) of participants
  • Ad hoc: Assign new participants as they walk in
  • Self-service: Participants sign up via a published URL

Session Schedule

  • Session schedules: Multiple session sequences per study
  • Measurement Points: Multiple measurement points (sessions) per schedule
  • Test Battery: Single or multiple tests per measurement point
  • Test Configurations: Multiple configurations of the same test paradigm (see the data-model sketch after this list)
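
The scheduling hierarchy above can be pictured as nested records. The TypeScript type names below are illustrative only and do not reflect the platform's internal data model.

    // Illustrative data model: a study has one or more session schedules, each
    // schedule has measurement points, and each measurement point runs a battery
    // of (possibly differently configured) tests.
    interface ConfiguredTest {
      paradigm: string;       // pre-packaged or custom test paradigm
      configuration: string;  // named configuration of that paradigm
    }

    interface MeasurementPoint {
      name: string;               // e.g. "baseline", "follow-up"
      battery: ConfiguredTest[];  // single or multiple tests per measurement point
    }

    interface SessionSchedule {
      name: string;
      measurementPoints: MeasurementPoint[];
    }

    interface Study {
      title: string;
      schedules: SessionSchedule[];  // multiple session sequences per study
    }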

Session Start

  • Supervised: Lab staff starts scheduled session via Web Console
  • Email Invitation: Send email invitation with URL to participant
  • Self-registration: Publish URL as call for participation
  • Personal Test Center: Participant logs into Personal Test Center and starts test

Data Handling

All aggregated data and charts are generated with the R statistics package. Use of personal data is optional and depends on customer policies. Data visibility can be adjusted at the experiment level and at the measurement point level.

Result Visibility

  • Double Blind: Study owner and participants have no access to results
  • Single Blind: Study owner has access to results
  • Self: Participants have access to their own data (e.g. students process their own data)

Participant Coding

  • Personal Data: Usage of personal data is optional and depends on customer policies
  • Identity: First and Last Name, User Name, Email, Global SubjectID
  • Demographics: Age, Handedness
  • Experiment specific coding: Subject ID within context of one experiment

Reporting

  • Online Performance Charts: PDF format, viewable online or for download
  • Online Trend Charts per Subject: PDF format, viewable online or for download
  • Online Norm Charts per Test Paradigm: PDF format, viewable online or for download
  • Raw Data: Download of trial-wise responses in CSV format
  • Test Scores: Download of aggregated key performance scores or dependent variables in CSV format
  • Exceptions: Download of all commented sessions in CSV format
  • Audit Trail: Download of activity feeds for every user

Deployment

Since all tests are implemented as HTML5/JavaScript web applications, no code needs to be deployed. Tests are started via a URL in the browser and executed as self-contained applications in the web browser on the client.

Supported Devices

  • Operating System: Windows, Mac OS, iOS, Android, and more
  • Web Browser: Chrome (required for joystick), Internet Explorer, Firefox, Safari
  • Form Factor: Auto-scaling with optional manual calibration of physical dimensions of display


Quality and Security

The platform has been designed to meet FDA GCP Part 11 compliance by providing password-protected, role-based access and an audit trail that logs relevant user activities.

System Access

  • Password Policies: Standard password policies are in place
  • Role-based access: Depending on their role, users only have access to the functions they need
  • Secure data transfer: Via secure HTTPS connections


Client Validation

  • Functional Check: Timing, input devices, screen dimensions (see the timing sketch after this list)
  • Dimensions of Monitor: Optional setting of physical display dimensions via cookies
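
As an example of what a functional timing check can look like, the TypeScript sketch below estimates the mean display refresh interval from requestAnimationFrame timestamps. The number of sampled frames and the pass criterion are assumptions for illustration.

    // Minimal timing check: estimate the mean frame interval over a number of
    // animation frames (the sample size is an illustrative choice).
    function measureFrameIntervalMs(frames = 60): Promise<number> {
      return new Promise((resolve) => {
        const timestamps: number[] = [];
        const step = (t: number) => {
          timestamps.push(t);
          if (timestamps.length < frames) {
            requestAnimationFrame(step);
          } else {
            const intervals = timestamps.slice(1).map((t2, i) => t2 - timestamps[i]);
            resolve(intervals.reduce((a, b) => a + b, 0) / intervals.length);
          }
        };
        requestAnimationFrame(step);
      });
    }

    // Usage: flag clients whose refresh interval deviates strongly from 60 Hz (~16.7 ms).
    measureFrameIntervalMs().then((ms) => {
      console.log(`Mean frame interval: ${ms.toFixed(2)} ms`);
    });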


Auditability

  • Session Coding: All data rows are coded with experiment, session schedule, measurement point, and participant (see the sketch after this list)
  • Session Timestamps: Launch test, begin test, end of test
  • Exceptions: Exceptional sessions are annotated with the exception condition and reason
  • Event log: Access to system and data objects is logged
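
To make the session coding concrete, a single exported data row could carry fields along the lines of the TypeScript sketch below; the field names are illustrative and do not correspond to the actual export schema.

    // Illustrative shape of a session-coded data row.
    interface SessionCodedRow {
      experiment: string;
      sessionSchedule: string;
      measurementPoint: string;
      participant: string;        // subject ID within the experiment
      launchedAt: string;         // ISO timestamp: test launched
      startedAt: string;          // ISO timestamp: test begun
      endedAt: string;            // ISO timestamp: test ended
      exceptionReason?: string;   // present only for annotated exceptional sessions
    }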