Moment maps of M33 from the HI mosaic observed with the VLA in C-configuration. The data have a resolution of ~19 arcsec, corresponding to a physical resolution of ~80 pc at an assumed distance of 840 kpc (a sketch of this conversion follows this entry).
Data Types:
  • Image
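A quick way to check the quoted numbers is the small-angle relation, physical size ≈ distance × angle (in radians). A minimal sketch in Python, using only the values given in the description above:

    import math

    # Values quoted in the description above.
    beam_arcsec = 19.0        # angular resolution
    distance_kpc = 840.0      # assumed distance to M33

    beam_rad = beam_arcsec * math.pi / (180.0 * 3600.0)  # arcsec -> radians
    physical_pc = distance_kpc * 1e3 * beam_rad          # small-angle approximation, in pc

    print(f"{physical_pc:.0f} pc")  # ~77 pc, consistent with the quoted ~80 pc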
Immersive stereoscopic footage of a Coordinate Response Measure (CRM) corpus recorded from two actors. The recorded audio-visual corpus consists of 8 CALLs and 32 COMMANDs per actor. The CALLs and COMMANDs are combined at rendering time into full sentences that always follow the same structure: "ready CALL go to COMMAND now". Each COMMAND consists of one of four colors (blue, green, red, or white) followed by one of eight numbers (1 to 8). Combined with one of the 8 CALLs (arrow, baron, charlie, eagle, hopper, laker, ringo, tiger), this yields a full combinatorial set of 256 individual sentences (a sketch of this enumeration follows this entry). The dataset additionally includes the UV positions used to texture the semi-spheres at rendering time; these were calculated from the intrinsic and extrinsic calibration parameters of the cameras to facilitate correct rendering of the video footage.

The recording system is a custom wide-angle stereo camera rig made of two Grasshopper 3 cameras with fisheye Fujinon lenses (2.7 mm focal length) reaching a 185-degree field of view (FoV). The cameras were mounted parallel to each other and separated by 65 mm (the average human interpupillary distance) to provide stereoscopic capture. The video is encoded in H.264 at 28-30 frames per second and 1600x1080 resolution per camera/eye. The audio was recorded with a near-range microphone at a 44 kHz sampling rate and 99 kbps; audio and video are synchronized to within 10 ms and saved in MP4 format. The recording room was equipped for professional recording with monobloc LED lighting and a chroma-key screen. Each actor sat 1 meter from the camera rig and read the corpus sentences as they were presented on a screen behind the cameras. The actors were recorded separately in two sessions, each seated at 30 degrees from the bisection line, and their videos can be synthetically joined at rendering time.

In post-processing the audio was equalized across all words, and the video was stitched to combine the actors and generate the full corpus. Sentences were band-pass filtered between 80 Hz and 16 kHz. The corpus sentences are temporally aligned to within 64 ms, below the reported 200 ms threshold at which misalignment becomes perceptible, so two or more CRM sentences can be played synchronously to generate overlap.
Data Types:
  • Video
  • Text
  • Audio
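As a sketch of the combinatorial construction described above (the CALL, color, and number lists are taken from the description; the rendering itself is not reproduced here), the 256 sentences can be enumerated as follows:

    from itertools import product

    # Sentence template from the description: "ready CALL go to COMMAND now",
    # where a COMMAND is one of four colors followed by one of eight numbers.
    CALLS = ["arrow", "baron", "charlie", "eagle", "hopper", "laker", "ringo", "tiger"]
    COLORS = ["blue", "green", "red", "white"]
    NUMBERS = [str(n) for n in range(1, 9)]

    sentences = [
        f"ready {call} go to {color} {number} now"
        for call, color, number in product(CALLS, COLORS, NUMBERS)
    ]

    print(len(sentences))  # 8 calls x 4 colors x 8 numbers = 256
    print(sentences[0])    # "ready arrow go to blue 1 now"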
Images of the CMZ taken with JCMT's SCUBA instrument. In the future, may also include SCUBA-2 and/or HARP data.
Data Types:
  • Image
SHARC images of the Galactic Center from Bally et al. (2010): http://adsabs.harvard.edu/abs/2010ApJ...721..137B
Data Types:
  • Image
Sounds from Hausfeld, L., Gutschalk, A., Formisano, E., & Riecke, L. (2017), "Effects of cross-modal asynchrony on informational masking in human cortex", Journal of Cognitive Neuroscience. The informational masking paradigm contains a pulsating tone (the target) embedded in a multi-tone cloud (the masker).
Data Types:
  • Text
  • Audio
This data relates to our paper "Stereotype and Most-Popular Recommendations in the Digital Library Sowiport". The data includes a list of the 28 million delivered and clicked recommendations as a CSV file, the R script used to analyze the data, and the figures and tables presented in the paper as PNG and CSV files. This open access to the data makes it possible to replicate our analyses, check the results for correctness, and conduct additional analyses.
Data Types:
  • Software/Code
  • Image
  • Tabular Data
  • Text
The updated Prefecture Points now have mostly complete spatial coverage for the Dynastic administrative units from 1350 to 1911 CE. Prefectures from earlier periods, 221 BCE to 1350 CE, still have gaps in spatial coverage.
Data Types:
  • Image
  • File Set
See README.pdf
Data Types:
  • Other
  • Software/Code
  • Image
  • Document
  • Text
Sound files generated from nine radio bursts detected with the Very Large Array toward the cosmological radio transient FRB 121102. The sound of FRB 121102 has a "chirp" caused by dispersion (a sketch of the dispersion delay follows this entry). The original publication is Chatterjee et al. (2017), "A direct localization of a fast radio burst and its host". The original data are available at https://doi.org/10.7910/DVN/TLDKXG.
Data Types:
  • Audio
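The "chirp" arises because interstellar dispersion delays lower frequencies more than higher ones. A minimal sketch of the cold-plasma delay formula, using an indicative dispersion measure for FRB 121102 and illustrative observing frequencies (neither is taken from this data set):

    # Cold-plasma dispersion delay relative to infinite frequency:
    #   t(nu) ~ 4.149 ms * DM * (nu / 1 GHz)**-2, with DM in pc cm^-3.
    K_MS = 4.149      # dispersion constant, in milliseconds
    DM = 560.0        # indicative DM for FRB 121102 (pc cm^-3); illustrative value

    # Lower frequencies arrive later, which produces the descending "chirp".
    for nu_ghz in (3.5, 3.0, 2.5):  # illustrative frequencies near the VLA observing band
        delay_ms = K_MS * DM * nu_ghz ** -2
        print(f"{nu_ghz:.1f} GHz: {delay_ms:6.1f} ms")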
Forthcoming in Open Mind: Discoveries in Cognitive Science.
Data Types:
  • Other
  • Video
  • Document