Immersive stereoscopic footage of a Coordinate Response Measure (CRM) corpus recorded from two actors. The recorded audio-visual corpus consists of 8 CALLs and 32 COMMANDs per actor. CALLs and COMMANDs are combined at rendering time into full sentences that always follow the same structure: “ready CALL go to COMMAND now”. Each COMMAND consists of one of four colors (blue, green, red or white) followed by one of eight numbers (1 to 8); combined with one of the 8 CALLs (arrow, baron, charlie, eagle, hopper, laker, ringo, tiger), this yields a full combinatorial set of 256 individual sentences (a short enumeration sketch follows this entry). The dataset also includes the UV positions needed to texturize the semi-spheres at rendering time; these were calculated from the intrinsic and extrinsic calibration parameters of the cameras to facilitate correct rendering of the video footage.

The recording system is a custom wide-angle stereo camera rig made of two Grasshopper 3 cameras with fisheye Fujinon lenses (2.7 mm focal length) reaching a 185-degree field of view (FoV). The cameras were mounted parallel to each other and separated by 65 mm (the average human interpupillary distance) to provide stereoscopic capture. The video is encoded in H.264 at 28-30 frames per second and 1600x1080 resolution per camera/eye. The audio was recorded through a near-range microphone at a 44 kHz sampling rate and 99 kbps; audio and video are synchronized to within 10 ms and saved in MP4 format.

The recording room was equipped for professional recording with monobloc LED lighting and a chromakey screen. Each actor sat at a 1 m distance from the camera rig and read the corpus sentences as they were presented on a screen behind the cameras. The actors were recorded separately in two sessions, each seated at 30 degrees from the bisector, so that their videos can be synthetically joined at rendering time. In post-processing the audio was equalized across all words and the video was stitched to combine the actors and generate the full corpus. Sentences were band-pass filtered from 80 Hz to 16 kHz. The corpus sentences are temporally aligned to within 64 ms, below the reported 200 ms threshold at which asynchrony becomes perceptible, so two or more CRM sentences can be played synchronously to generate overlapping speech.
Data Types:
  • Video
  • Text
  • Audio
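For illustration, a minimal Python sketch that enumerates the full 256-sentence corpus. The CALL/COMMAND names and the sentence template are taken from the description above; the actual rendering (concatenating the recorded CALL and COMMAND clips) is not shown and is not part of the dataset's tooling.

# Sketch: enumerate the full combinatorial CRM corpus described above.
from itertools import product

CALLS = ["arrow", "baron", "charlie", "eagle", "hopper", "laker", "ringo", "tiger"]
COLORS = ["blue", "green", "red", "white"]
NUMBERS = range(1, 9)

sentences = [
    f"ready {call} go to {color} {number} now"
    for call, color, number in product(CALLS, COLORS, NUMBERS)
]

assert len(sentences) == 256  # 8 CALLs x (4 colors x 8 numbers) COMMANDs
print(sentences[0])           # "ready arrow go to blue 1 now"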
Sound stimuli from Hausfeld, L., Gutschalk, A., Formisano, E., & Riecke, L. (2017), "Effects of cross-modal asynchrony on informational masking in human cortex," Journal of Cognitive Neuroscience. The informational masking paradigm contains a pulsating tone (target) embedded in a multi-tone cloud (masker); a stimulus sketch follows this entry.
Data Types:
  • Text
  • Audio
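For readers unfamiliar with the paradigm, here is a minimal Python sketch of an informational-masking stimulus of this kind. All parameter values (target frequency, pulse rate, masker band, protected region) are illustrative assumptions, not the values used by Hausfeld et al. (2017).

# Sketch: a regularly pulsating target tone embedded in a random multi-tone cloud.
import numpy as np

fs, dur = 44100, 2.0                       # sample rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

# Target: 1 kHz tone gated on/off at 4 Hz (the "pulsating" target; assumed values).
gate = (np.sin(2 * np.pi * 4 * t) > 0).astype(float)
target = gate * np.sin(2 * np.pi * 1000 * t)

# Masker: cloud of short tones at random frequencies and onsets. Frequencies are
# drawn outside a protected band around the target so the masking is largely
# informational rather than energetic (a common design in such paradigms).
masker = np.zeros_like(t)
seg_len = int(0.1 * fs)
for _ in range(60):
    f = rng.uniform(300.0, 4000.0)
    while 800.0 < f < 1250.0:              # re-draw inside the protected band
        f = rng.uniform(300.0, 4000.0)
    onset = int(rng.integers(0, len(t) - seg_len))
    masker[onset:onset + seg_len] += np.sin(2 * np.pi * f * t[:seg_len])

stimulus = target + 0.5 * masker
stimulus /= np.abs(stimulus).max()         # normalize to avoid clipping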
Sound files generated from nine radio transients detected by the Very Large Array toward the cosmological radio transient FRB 121102. The sound of FRB 121102 has a "chirp" that is caused by dispersion (a delay sketch follows this entry). Original publication: Chatterjee et al. (2017), "A direct localization of a fast radio burst and its host". Original data are available at https://doi.org/10.7910/DVN/TLDKXG.
Data Types:
  • Audio
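A short Python sketch of why dispersion produces the audible chirp: the cold-plasma dispersion delay scales as frequency**-2, so when frequency is mapped to pitch and arrival delay to playback time, high tones arrive first and low tones last. The dispersion measure and observing band below are approximate values for FRB 121102 and the VLA observations, used here only for illustration.

# Sketch: dispersive arrival delay across an observing band.
K_DM_MS = 4.149   # dispersion constant, ms GHz^2 / (pc cm^-3)
DM = 557.0        # dispersion measure of FRB 121102, pc cm^-3 (approximate)

def dispersion_delay_ms(freq_ghz: float, ref_ghz: float = 3.5) -> float:
    """Arrival delay at freq_ghz relative to a higher reference frequency."""
    return K_DM_MS * DM * (freq_ghz**-2 - ref_ghz**-2)

# Across an assumed 2.5-3.5 GHz band, the burst sweeps top to bottom over:
print(f"{dispersion_delay_ms(2.5):.1f} ms")  # ~181 ms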
Forthcoming in Open Mind: Discoveries in Cognitive Science.
Data Types:
  • Other
  • Video
  • Document
Forthcoming in Open Mind: Discoveries in Cognitive Science.
Data Types:
  • Video
  • Document
Data and analysis script for a manuscript submitted for publication. Abstract: Embodied cognition frameworks suggest a direct link between sensorimotor experience and cognitive representations of concepts (Shapiro, 2011). We examined whether this also holds true for abstract concepts that cannot be directly perceived with the sensorimotor system (i.e., temporal concepts). To test this, participants learned object-space (Exp. 1) or object-time (Exp. 2) associations. Afterwards, participants were asked to assign the objects to their location in space/time while they walked backward, walked forward, or stood on a treadmill. We hypothesized that walking backward should facilitate the online processing of “behind”- and “past”-related stimuli but hinder the processing of “ahead”- and “future”-related stimuli, with a reversed effect for forward walking. Indeed, “ahead”- and “future”-related stimuli were processed more slowly during backward walking. During forward walking and in the control condition, all stimuli were processed equally fast. The results provide partial evidence for the activation of specific spatial and temporal concepts by means of whole-body movements and are discussed in the context of movement familiarity.
Data Types:
  • Software/Code
  • Tabular Data
  • Audio
Thorius narisovalis - head only
Data Types:
  • Software/Code
  • Image
  • Video
  • Document
Thorius narisovalis
Data Types:
  • Software/Code
  • Video
  • Document
  • File Set
Thorius minutissimus - head only
Data Types:
  • Software/Code
  • Video
  • Document
  • File Set
Thorius pinicola
Data Types:
  • Software/Code
  • Video
  • Document
  • File Set