Immersive stereoscopic footage of a Coordinate Response Measure (CRM) corpus recorded from two actors. The audio-visual corpus consists of 8 CALLs and 32 COMMANDs per actor. The CALLs and COMMANDs are combined at rendering time into full sentences that always follow the same structure: "ready CALL go to COMMAND now". Each COMMAND consists of one of four colors (blue, green, red, or white) followed by one of eight numbers (1 to 8), which, combined with one of the 8 CALLs (arrow, baron, charlie, eagle, hopper, laker, ringo, tiger), yields a full combinatorial set of 256 individual sentences. The dataset also includes the UV positions used to texture the semi-spheres at rendering time; these were calculated from the intrinsic and extrinsic calibration parameters of the cameras to enable correct rendering of the video footage.

The recording system is a custom wide-angle stereo camera rig made of two Grasshopper 3 cameras with fisheye Fujinon lenses (2.7 mm focal length) reaching a 185-degree field of view (FoV). The cameras were mounted parallel to each other and separated by 65 mm, the average human interpupillary distance, to provide stereoscopic capture. The video is encoded in H.264 at 28-30 frames per second and 1600x1080 resolution per camera/eye. The audio was recorded through a near-range microphone at a 44 kHz sampling rate and 99 kbps; audio and video are synchronized to within 10 ms and saved in MP4 format. The recording room was equipped for professional recording with monobloc LED lighting and a chroma-key screen. Each actor sat 1 meter from the camera rig and read the corpus sentences as they were presented on a screen behind the cameras. The actors were recorded separately in two sessions, each seated at 30 degrees from the bisector, so their videos can be synthetically attached at rendering time.

In post-processing, the audio was equalized across all words and the video was stitched to combine the actors and generate the full corpus. Sentences were band-passed from 80 Hz to 16 kHz. The corpus sentences are temporally aligned to within 64 ms, below the reported 200 ms threshold at which asynchrony becomes perceptible, so two or more CRM sentences can be played synchronously to generate overlap.
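As a minimal sketch of the combinatorics described above, the snippet below enumerates all 256 sentences from the call signs, colors, and numbers listed in the description; the rendering pipeline itself is out of scope here.

```python
from itertools import product

# Call signs, colors, and numbers taken from the corpus description.
CALLS = ["arrow", "baron", "charlie", "eagle", "hopper", "laker", "ringo", "tiger"]
COLORS = ["blue", "green", "red", "white"]
NUMBERS = range(1, 9)

# Every sentence follows the fixed CRM template:
# "ready CALL go to COMMAND now", where COMMAND = color + number.
sentences = [
    f"ready {call} go to {color} {number} now"
    for call, color, number in product(CALLS, COLORS, NUMBERS)
]

assert len(sentences) == 8 * 4 * 8  # the full combinatorial: 256 sentences
print(sentences[0])  # "ready arrow go to blue 1 now"
```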
Data Types:
  • Video
  • Text
  • Audio
Sounds from "Hausfeld, L., Gutschalk, A., Formisano, E., Riecke, L. (2017). Effects of cross-modal asynchrony on informational masking in human cortex. Journal of Cognitive Neuroscience". Informational masking paradigm containing a pulsating tone (target) embedded in a multi-tone cloud (masker).
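The stimulus structure of this paradigm can be sketched roughly as below; the tone frequency, pulse rate, masker density, burst length, and durations are illustrative assumptions, not the parameters used by Hausfeld et al. (2017).

```python
import numpy as np

# Illustrative parameters only (not those of Hausfeld et al., 2017).
FS = 44100          # sampling rate (Hz)
DUR = 2.0           # stimulus duration (s)
t = np.arange(int(FS * DUR)) / FS

# Target: a 1 kHz tone pulsating at 4 Hz (sinusoidal amplitude modulation).
target = np.sin(2 * np.pi * 1000 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))

# Masker: a "cloud" of short tone bursts at random frequencies and onsets.
rng = np.random.default_rng(0)
masker = np.zeros_like(t)
for _ in range(40):
    f = rng.uniform(300, 4000)          # random burst frequency (Hz)
    onset = rng.uniform(0, DUR - 0.1)   # random onset time (s)
    idx = slice(int(onset * FS), int((onset + 0.1) * FS))
    burst_t = t[idx] - onset
    # 100 ms tone burst with a Hann envelope to avoid onset/offset clicks.
    masker[idx] += np.sin(2 * np.pi * f * burst_t) * np.hanning(burst_t.size)

stimulus = target + 0.5 * masker  # mix target into the multi-tone cloud
```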
Data Types:
  • Text
  • Audio
Sound files generated from nine radio transients detected by the Very Large Array toward the cosmological radio transient FRB 121102. The sound of FRB 121102 has a "chirp" caused by dispersion. The original publication is Chatterjee et al. (2017), "The direct localization of a fast radio burst and its host". The original data are available at https://doi.org/10.7910/DVN/TLDKXG.
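The chirp arises because interstellar plasma delays lower radio frequencies more than higher ones, with the delay scaling as the inverse square of frequency. Below is a minimal sketch of the standard cold-plasma dispersion delay; the DM value for FRB 121102 is approximate and used only for illustration.

```python
# Cold-plasma dispersion delay: lower frequencies arrive later, which is
# what produces the audible "chirp" when the burst is rendered as sound.
# t_delay [s] ~= 4.149e-3 * DM / nu_GHz**2, with DM in pc cm^-3.
K_DM = 4.149e-3  # dispersion constant in s * GHz^2 / (pc cm^-3)

def dispersion_delay(nu_ghz: float, dm: float) -> float:
    """Arrival delay (s) at frequency nu_ghz relative to infinite frequency."""
    return K_DM * dm / nu_ghz**2

# FRB 121102 has DM ~ 557 pc cm^-3 (approximate, for illustration only).
for nu in (3.0, 2.0, 1.4):
    print(f"{nu:.1f} GHz: {dispersion_delay(nu, 557.0):.3f} s")
```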
Data Types:
  • Audio
This is the data and the script for the analysis of a manuscript submitted for publication. Abstract: Embodied cognition frameworks suggest a direct link between sensorimotor experience and cognitive representations of concepts (Shapiro, 2011). We examined whether this also holds true for abstract concepts that cannot be directly perceived with the sensorimotor system (i.e., temporal concepts). To test this, participants learned object – space (Exp. 1) or object – time (Exp. 2) associations. Afterwards, participants were asked to assign the objects to their location in space/time while they walked backward, walked forward, or stood on a treadmill. We hypothesized that walking backward should facilitate the online processing of "behind"- and "past"-related stimuli but hinder the processing of "ahead"- and "future"-related stimuli, with a reversed effect for forward walking. Indeed, "ahead"- and "future"-related stimuli were processed more slowly during backward walking. During forward walking and in the control condition, all stimuli were processed equally fast. The results provide partial evidence for the activation of specific spatial and temporal concepts by means of whole-body movements and are discussed in the context of movement familiarity.
Data Types:
  • Software/Code
  • Tabular Data
  • Audio
These are the data files of the manuscript "Does Movement Influence Representations of Time and Space?". All files whose names contain "t" belong to the temporal Experiment 1, and all files whose names contain "s" belong to the spatial Experiment 3. Further, all files whose names contain "f" belong to the forward condition, all files whose names contain "b" to the backward condition, and all files whose names contain "s" to the standing condition. The target stimuli of the temporal and spatial experiments are also uploaded ("Wednesday_Meeting_Question.wav" and "Which one of these widgets is aehead.JPG").
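A small sketch of how files could be sorted by this naming convention follows. The description does not specify letter positions, and "s" appears in both mappings, so the sketch assumes, purely hypothetically, that the first letter encodes the experiment and the second the walking condition.

```python
# Hypothetical parser for the naming scheme described above. Letter
# positions are an ASSUMPTION: "s" is otherwise ambiguous between the
# spatial experiment and the standing condition.
EXPERIMENT = {"t": "temporal (Exp. 1)", "s": "spatial (Exp. 3)"}
CONDITION = {"f": "forward", "b": "backward", "s": "standing"}

def classify(filename: str) -> tuple[str, str]:
    exp, cond = filename[0], filename[1]
    return EXPERIMENT[exp], CONDITION[cond]

print(classify("tf_data.csv"))  # ('temporal (Exp. 1)', 'forward')
```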
Data Types:
  • Image
  • Audio
These audio files guide the listener in the performance of various Qigong practices. They were made available to participants online as part of the MM training intervention to supplement the in-person instruction. They were produced by Peter Payne and Mardi Crane during 2014. SPECIFIC USE RESTRICTIONS APPLY--SEE TERMS
Data Types:
  • Audio
Meeting summaries, audio recordings of meetings, and guides to the summaries and recordings.
Data Types:
  • Document
  • Audio
This study explores the relationship between gender identity and the use of creaky voice (a non-modal phonation commonly referred to as "vocal fry") in American English. While early research suggested that men were more likely to use creaky voice, its use has more recently been associated with the language of young, urban, American women. The study also investigates the social stratification of creaky voice across additional social factors such as sexual orientation, age, and socioeconomic status. Production and perception data were gathered in 2013 from 69 participants with a range of gender identities (including men, women, and non-binary individuals, as well as individuals who identify as both cis and trans). The dataset contains audio files and tabular data.
Data Types:
  • Tabular Data
  • Document
  • Text
  • Audio
Zip file with stimuli for Deng & Sloutsky (2012).
Data Types:
  • Software/Code
  • Image
  • Video
  • Document
  • Text
  • Audio
Raw data from 8 recorded legs. The left metathoracic leg from 8 cockroaches was removed via a transverse cut through the coxa. One stainless steel "map pin" electrode was inserted into the coxa, the other map-pin electrode into the tibia/tarsus joint, and the electrodes were plugged into a "SpikerBox" neural amplifier. Ten seconds of spontaneous activity were recorded, followed by 10 seconds of lightly tapping the prominent barbs on the tibia of the cockroach leg with a plastic probe. After this measurement, the electrode that was in the coxa was inserted into the center of the femur, and the same recording procedure was repeated (10 seconds of spontaneous activity followed by 10 seconds of light touch). Finally, the coxa was removed with forceps, the electrodes were left in place, and the recording procedure was repeated once more. The three electrode conditions are separated by periods of silence in the .wav files.
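Because the three conditions are separated by silences within each .wav file, a recording could be segmented roughly as in the sketch below; the filename, amplitude threshold, and minimum gap length are assumptions, not documented values.

```python
import numpy as np
from scipy.io import wavfile

# Illustrative segmentation of one recording into its three electrode
# conditions using the silent gaps described above.
rate, data = wavfile.read("leg_01.wav")  # hypothetical filename
signal = data.astype(float)
signal /= np.max(np.abs(signal))

# Smooth the rectified signal into an amplitude envelope (50 ms window).
win = int(0.05 * rate)
envelope = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
active = envelope > 0.02  # assumed silence threshold

# Collect (start, end) sample indices of contiguous active regions.
edges = np.flatnonzero(np.diff(active.astype(int))) + 1
bounds = np.concatenate(([0], edges, [active.size]))
regions = [(a, b) for a, b in zip(bounds[:-1], bounds[1:]) if active[a]]

# Merge regions separated by less than 1 s (assumed minimum gap); what
# remains should be one segment per electrode condition.
merged = [list(regions[0])]
for a, b in regions[1:]:
    if a - merged[-1][1] < rate:
        merged[-1][1] = b
    else:
        merged.append([a, b])
print(len(merged), "segments found")
```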
Data Types:
  • Audio