Contributors: Po-Chih Kuo, Lily Tseng, Karl Zilles, Summit Suen, Simon Eickhoff, Juin-Der Lee, Philip Cheng, Michelle Liou
... This dataset contains auditory fMRI scans and T1-weighted anatomical scans acquired under eyes-closed and eyes-open conditions. In each condition, the subject was presented with twelve different sound clips, including human voices and animal vocalizations. The dataset can be used to assess reproducible brain activity and connectivity networks under natural sound stimulation. It also allows for investigation of changes in fMRI responses between the eyes-closed and eyes-open conditions, between human voices and animal vocalizations, and among the different sounds during auditory stimulation.
Contributors: Md. Asifuzzaman Jishan, Khan Raqib Mahmud, Abul Kalam Al Azad
... We present a new Bangla dataset, together with a hybrid recurrent neural network model that generates natural-language descriptions of images in Bangla. We conducted experiments on our self-made Bangla Natural Language Image to Text (BNLIT) dataset, which contains 8,743 images taken from a Bangladesh perspective, with one annotation per image. The repository includes two pre-processed versions of the images, at 224 × 224 and 500 × 375 resolution, alongside the annotations for the full dataset. It also includes a file of CNN features for the whole dataset, features.pkl.
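A features file like features.pkl is typically loaded with Python's pickle module. The internal layout of the BNLIT features file is not documented above, so the sketch below assumes it is a dict mapping image filenames to feature vectors and demonstrates the round trip on a stand-in file; the filename and key are illustrative only.

```python
import pickle

# Stand-in for features.pkl: assumed to map image filenames to
# CNN feature vectors (this layout is an assumption, not documented).
features = {"img_0001.jpg": [0.12, 0.93, 0.44]}

with open("features_demo.pkl", "wb") as f:
    pickle.dump(features, f)

# Loading works the same way for the real features.pkl.
with open("features_demo.pkl", "rb") as f:
    loaded = pickle.load(f)

print(sorted(loaded))  # image filenames serving as keys
```

Only unpickle files from sources you trust; pickle can execute arbitrary code on load.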
Research article manuscripts, data pieces, literature review, presentations, etc., on sharecropping and power relations in villages in Sindh, Pakistan
Contributors: Ghulam Hussain
... Research article manuscripts based on research done in 2013-2014 on sharecropping, peasants and village culture in Sindh, Pakistan
Contributors: Abdourrahmane ATTO
... A collection of non-violent and violent scenes (short video clips in mp4 or avi format) from the MediaEval challenge and Technicolor data, along with selected UCF-101 combat-sports clips. The dataset can be used for binary or ternary affective computing and classification.
Contributors: Willem Roux, Imtiaz Gandikota, Guilian Yi
... These are the computer models used for structural topology optimization from the paper "A spatial kernel approach for topology optimization" by Roux, Yi, and Gandikota.
Contributors: Anna Letizia Allegra Mascaro, Emilia Conti, Alessandro Scaglione
... RehabDS Dataset. Subjects, with groups defined according to the paper:
CTRL: GCaMP-ChR2-7, GCaMP-ChR2-17, GCaMP-ChR2-23, GCaMP-ChR2-24
STROKE: GCaMP-ChR2-8, GCaMP-ChR2-9, GCaMP-ChR2-19, GCaMP-ChR2-22, GCaMP-ChR2-25, GCaMP-ChR2-26
REHAB: GCaMP-ChR2-11, GCaMP-ChR2-12, GCaMP-ChR2-14, GCaMP-ChR2-15, GCaMP16, GCaMP18
Data are arranged in date/subject format: the top folder is the day the data was acquired and the child folders contain the data for each subject. Every .mat file is a 3D array whose first dimension is time; the second and third dimensions form a 200x200 image (use imshow to visualize it). Every .csv file contains data parcellated according to the Allen Brain Atlas; mask files, along with information about mask names, areas and centroids, can be found in the masks folders. The resolution of the images in the .mat files is 60 um/pixel. Bregma is located at image coordinates (100, 75). Stroke coordinates: 0.5 mm AP, 1.75 mm ML from bregma, i.e. image coordinates 129 (100 + 1750/60) and 67 (75 - 500/60). The masks have been updated to match the resolution of the .mat file images. The .mat files are essentially stacks of images: the first axis is time and the other axes form the image coordinates (https://www.mathworks.com/help/images/image-coordinate-systems.html). To show the first image of a sequence:
MATLAB:
>> imshow(squeeze(data(1,:,:)))
Python:
>>> import scipy.io as io
>>> import matplotlib.pyplot as plt
>>> data = io.loadmat('171105/GCaMP22/gcamp22_171105_trialx15.aln.mat')['gcamp22_171105_trialx15']
>>> plt.imshow(data[0])
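As a worked check of the coordinate conventions above (bregma at image coordinates (100, 75), 60 um/pixel, ML offsets adding to the first coordinate and AP offsets subtracting from the second), the conversion from bregma-relative stereotaxic coordinates to pixel coordinates can be sketched as follows; the helper name and signature are ours, not part of the dataset.

```python
def mm_from_bregma_to_pixels(ap_mm, ml_mm, bregma_xy=(100, 75), um_per_px=60):
    """Convert bregma-relative coordinates (mm) to image pixel coordinates,
    following the conventions stated in the dataset description:
    ML adds to the x coordinate, AP subtracts from the y coordinate.
    Illustrative helper, not provided by the dataset itself."""
    x = round(bregma_xy[0] + ml_mm * 1000 / um_per_px)
    y = round(bregma_xy[1] - ap_mm * 1000 / um_per_px)
    return x, y

# Stroke site from the description: 0.5 mm AP, 1.75 mm ML
print(mm_from_bregma_to_pixels(0.5, 1.75))  # → (129, 67)
```

This reproduces the stroke-site pixel coordinates (129, 67) quoted in the description.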
Contributors: Ridhwan Al-Debsi, ashraf elnagar, Omar Einea
... The NADiA dataset is, to the best of our knowledge, the largest source of Arabic textual data that can be used in any NLP-related task, such as text classification. We chose the abbreviation NADiA as it is a common Arabic name. The data was collected by scraping the 'SkyNewsArabia' and 'Masrawy' news websites using Python scripts fine-tuned for each website. The SkyNewsArabia data is referred to as NADiA1 and the Masrawy data as NADiA2. NADiA1 contains 37,445 files, while NADiA2 contains 678,563 files; after filtering and cleaning, these numbers were reduced to 35,416 and 451,230 for NADiA1 and NADiA2, respectively. NADiA1 consists of the following 24 categories (displayed in English for easy referencing): News, North Africa, Levant, Middle East, The Americas, Research, Finance & Economy, War & Terrorism, Gulf, Europe, Political Figures, Iran, Technology, Russia, Sports, Tennis, Football, English League, Arabian Sports, Spanish League, Health, East Asia, Environment, Other Countries. NADiA2 consists of the following 28 categories (displayed in English for easy referencing): Politics, Middle East, Asia, Africa, United States, Europe, Other Countries, Leaders, Sports, Arabian Sports, Football Clubs, Spanish League, Egyptian League, Finance, Arts, Cinema & TV, Fashion, Health, Pregnancy & Delivery, Cancer, Obesity, Social Media, Technology, Religion, Islamic, Fatawa, Worship, Prophet Biography.
Data for: Correlative FLIM-Confocal-Raman mapping applied to plant lignin composition and autofluorescence.
Contributors: Raymond Wightman, Marta Busse-Wicher, Paul Dupree
... These are the raw data in their original filetypes, as follows:
Lignin_LASX_Leica.lif = The confocal images showing lignin autofluorescence in the WT and lignin mutant. The data were created in Leica LAS X software and additionally contain all the metadata describing the imaging parameters. The .lif file can also be read in Fiji/ImageJ.
Lignin_FLIM.sptw.zip = ZIP archive of the SymPhoTime 64 workspace containing all the FLIM data of the same regions as the above confocal images taken in LAS X. Can be read by SymPhoTime software (PicoQuant).
WT.wdf and cad2_6_mutant.wdf = Original Raman mapping data (.wdf files) taken on a Renishaw InVia Raman microscope using WiRE 4 software; the data have been baseline subtracted. One file is a map of the WT, the other of the lignin mutant.
FLIM_Raman_replicates = ZIP archive of the SymPhoTime 64 workspace containing FLIM biological replicates labelled WT1, WT2 and WT3 for the WT and cad_1, cad_2 for the mutant. Also contains the associated original Raman data (.wdf files) taken on a Renishaw InVia Raman microscope using WiRE 4 software; the data have been baseline subtracted. Note that WT1 is a FLIM dataset taken without associated Raman mapping, as a control experiment for FLIM with no prior Raman illumination.
Data belonging to Predicting the 1p/19q co-deletion status of presumed low grade glioma with an externally validated machine learning algorithm
Contributors: Sebastian van der Voort, Fatih Incekara, Maarten Wijnenga, Georgios Kapas, Mayke Gardeniers, Joost Schouten, Martijn Starmans, Rishie Nandoe Tewarie, Geert Lycklama, Pim French
... Data belonging to the paper 'Predicting the 1p/19q co-deletion status of presumed low grade glioma with an externally validated machine learning algorithm', as published in Clinical Cancer Research. When using this data, please cite: (Citation follows later). The data include trained SVM models, image features derived for all patients, labels for all patients, PCE models used to derive feature importance, and segmentations made for the LGG-1p19qDeletion dataset from The Cancer Imaging Archive (https://wiki.cancerimagingarchive.net/display/Public/LGG-1p19qDeletion#a888d85b04c640eeaf802e12db2dc8ad)
Data for: Revealing paraphyly and placement of extinct species within Epischura (Copepoda: Calanoida) using molecular data and quantitative morphometrics
Contributors: Larry Bowman, Marianne Moore, Dan MacGuigan, Madeline Cahillane, Madeline Gorchels
... All data files used to create Calanoida time-calibrated tree, and Epischura/Epischurella/Heterocope gene+morphological trees.