Contributors: Gonzalez-Franco, Mar
... Immersive stereoscopic footage of a Coordinate Response Measure (CRM) corpus recorded from two actors. The audio-visual corpus consists of 8 CALLs and 32 COMMANDs per actor. CALLs and COMMANDs are combined at rendering time into full sentences that always follow the same structure: “ready CALL go to COMMAND now”. Each COMMAND consists of one of four colors (blue, green, red or white) followed by one of eight numbers (1 to 8), which yields the full combinatorial set of 256 individual sentences when combined with one of the 8 CALLs (arrow, baron, charlie, eagle, hopper, laker, ringo, tiger). The dataset also includes the UV positions used to texturize the semi-spheres at rendering time; these were calculated from the intrinsic and extrinsic calibration parameters of the cameras to facilitate correct rendering of the video footage.

The recording system is a custom wide-angle stereo camera rig made of two Grasshopper 3 cameras with fisheye Fujinon lenses (2.7 mm focal length) reaching a 185-degree field of view (FoV). The cameras were mounted parallel to each other and separated by 65 mm (the average human interpupillary distance [39]) to provide stereoscopic capture. The video is encoded in H.264 at 28-30 frames per second and 1600x1080 resolution per camera/eye. The audio was recorded through a near-range microphone at a 44 kHz sampling rate and 99 kbps; audio and video are synchronized to within 10 ms and saved in MP4 format.

The recording room was equipped for professional recording with monobloc LED lighting and a chroma-key screen. Each actor sat 1 meter from the camera rig and read the corpus sentences as they were presented on a screen behind the cameras. The actors were recorded separately in two sessions, each seated at 30 degrees from the bisection, so that their videos can be synthetically attached at rendering time.
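The sentence combinatorics described above can be sketched in Python (a minimal illustration only; the list orderings and the helper name are assumptions, not part of the dataset):

```python
from itertools import product

# Vocabulary as described in the dataset entry.
CALLS = ["arrow", "baron", "charlie", "eagle", "hopper", "laker", "ringo", "tiger"]
COLORS = ["blue", "green", "red", "white"]
NUMBERS = [str(n) for n in range(1, 9)]  # 1 to 8

def build_sentence(call, color, number):
    """Assemble one CRM sentence following the fixed template.

    A COMMAND is a color followed by a number, so the template
    "ready CALL go to COMMAND now" expands as below.
    """
    return f"ready {call} go to {color} {number} now"

# 8 CALLs x (4 colors x 8 numbers) = 256 individual sentences.
sentences = [build_sentence(c, col, n) for c, col, n in product(CALLS, COLORS, NUMBERS)]
assert len(sentences) == 256
```

Because the CALL/COMMAND pieces are recorded per word group, a renderer would concatenate the corresponding clips in this order rather than pre-render all 256 sentences.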
In post-processing, the audio was equalized across all words and the video was stitched to combine the actors and generate the full corpus. Sentences were band-passed from 80 Hz to 16 kHz. The corpus sentences are temporally aligned to within 64 ms, below the described 200 ms threshold at which asynchrony becomes perceptible, so two or more CRM sentences can be played synchronously to generate an overlap.
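The 80 Hz to 16 kHz band-pass applied to the sentences can be sketched with SciPy (a sketch under assumptions: the dataset does not document its filter design, so the 4th-order Butterworth choice and the 44.1 kHz sampling rate are illustrative only):

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # assumed sampling rate, matching the recording description

# Band-pass spanning the 80 Hz - 16 kHz range quoted above.
# The 4th-order Butterworth design is an assumption, not the
# dataset's documented filter.
sos = butter(4, [80, 16000], btype="bandpass", fs=FS, output="sos")

def band_pass(audio):
    """Apply the band-pass filter to a 1-D audio signal."""
    return sosfilt(sos, audio)
```

For example, a 50 Hz hum component would be strongly attenuated while speech-band content passes largely unchanged.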
Contributors: Hausfeld, Lars, Gutschalk, Alexander, Formisano, Elia, Riecke, Lars
... Sounds from "Hausfeld, L., Gutschalk, A., Formisano, E., Riecke, L. (2017). Effects of cross-modal asynchrony on informational masking in human cortex. Journal of Cognitive Neuroscience". Informational masking paradigm containing a pulsating tone (target) embedded in a multi-tone cloud (masker).
Contributors: Becker, Kara, Khan, Sameer ud Dowla, Zimman, Lal
... This study explores the relationship between gender identity and the use of creaky voice (a non-modal phonation commonly referred to as "vocal fry"). While early research suggested that men were more likely to use creaky voice, its use has more recently been associated with the language of young, urban, American women. This study explores the relationship between creaky voice and gender identity in American English, and investigates the social stratification of creaky voice across additional social factors such as sexual orientation, age, and socioeconomic status. Production and perception data were gathered in 2013 from 69 participants with a range of gender identities, including men, women, and non-binary individuals, both cisgender and transgender. The dataset contains audio files and tabular data.
Contributors: Sloutsky, Vladimir
... Zip file with Stimuli for Deng & Sloutsky (2012)
Contributors: Köhne, Judith, Weinbach, Silas, Demberg, Vera
... Visual world stimuli for replicating Demberg and Sayeed 2015; these will be connected to other forthcoming work. The materials were originally used in work by Demberg and Köhne; see the related publications.
Contributors: Ferhat, Allain-Thibeault, Le Sourd, Anne-Marie, de Chaumont, Fabrice, Olivo-Marin, Jean-Christophe, Bourgeron, Thomas, Ey, Elodie
... Video and audio recordings of social interactions in adult male C57BL/6J mice under different conditions (habituation time to the test cage and cage shape and size)
Contributors: Mateo Pedro, Pedro
... This research was supported by the Documenting Endangered Languages (DEL) program at NEH/NSF (http://www.neh.gov/grants/preservation/documenting-endangered-languages) and the DRCLAS center at Harvard (http://www.drclas.harvard.edu/).
Replication data for: Habit Persistence, Nonseparability between Consumption and Leisure, or Rule-of-Thumb Consumers: Which Accounts for the Predictability of Consumption Growth?
Contributors: Michael Kiley