308260 results
Smoke Test on 17Jul2019 natscilivecustomer (Dataset-1)
Smoke Test on 17Jul2019 natscilivecustomer (Dataset-2)
Data Types:
  • Other
  • Software/Code
  • Image
  • Video
  • Tabular Data
  • Dataset
  • Document
  • Text
  • Audio
Associated research: Gordon, B. L., Paige, G. B., Miller, S. N., Claes, N., & Parsekian, A. D. (2020). Field scale quantification indicates potential for variability in return flows from flood irrigation in the high altitude western US. Agricultural Water Management, 232, 106062.

Readme: The included files are Calculated_Flow, Calculated_Losses, Calculated_Return_Flows, ET_Not_Interpolated, Precipitation, and a GIS database. All data (except GIS) are in tab-delimited ASCII files. GIS data are in standard formats; most site-specific information, including soils, meadow delineation, and instrumentation, can be found in the site_information file.

Flow data (Calculated_Flow, Calculated_Losses, Calculated_Return_Flows) were obtained using rating curves developed at each site: each stilling well was instrumented with a pressure transducer (Level TROLL 500 Data Logger, In-Situ, USA), and manual flow measurements consisting of 25+ individual points per measurement were made using an electromagnetic current meter (MF Pro, OTT Hydromet, USA). ET data include both measurements from a Large Aperture Scintillometer (LAS MKII, Kipp & Zonen, NLD) and Penman-Monteith calculations performed on raw meteorological data collected on site. For Penman-Monteith, we include both raw values and values modified using a crop coefficient from Pochop et al. (1992). Precipitation data were collected using a tipping-bucket rain gauge (Rain Collector II, Davis Instruments, USA). All data are from May 2015 to October 2015, except the scintillometer ET data, which are from June 2015 to October 2015. If you have any questions, or would like raw flow data or unprocessed meteorological data, please contact me via email at: beatrice.gordon1@gmail.com
Data Types:
  • Dataset
  • Text
  • File Set
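The flow series above were derived by applying a site-specific rating curve to stage recorded by the pressure transducer. The dataset does not include the fitted curves themselves, so the power-law form and every coefficient below are hypothetical stand-ins, shown only to illustrate the stage-to-discharge conversion:

```python
def rating_curve_discharge(stage_m, a, b, h0):
    """Power-law rating curve Q = a * (stage - h0)**b.

    stage_m: water level from the pressure transducer (m).
    a, b, h0: fitted coefficients. The values used below are made up;
    the real site-specific fits are not part of this record.
    """
    if stage_m <= h0:
        return 0.0  # at or below the stage of zero flow
    return a * (stage_m - h0) ** b

# Example with hypothetical coefficients:
q = rating_curve_discharge(0.45, a=3.2, b=1.6, h0=0.10)
```

In practice `a`, `b`, and `h0` would be fitted per site against the 25+ point manual current-meter measurements described above.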
Data and code for: Time-Varying Causality between Bond and Oil Markets of the United States: Evidence from Over One and Half Centuries of Data
Data Types:
  • Dataset
  • File Set
Batch mesophilic (37 °C) reactors fed with acetic acid (0.5 mL AC/L every 5-6 days) were amended with different amounts of incineration bottom ash (IBA) and/or ammonium chloride for 120 days. At the start of the experiments, different masses of IBA were added to the IBA-amended reactors. The group of reactors amended with NH4Cl received 4 g/L NH4Cl in each 5-6 day run. Parallel batch reactors without IBA or NH4Cl were also run. Reactor performance (methane production) and stability (pH drop and VFA accumulation) were investigated. At the end of the experiments (day 120), a representative digestate sample was collected from each reactor and sequenced for the 16S rRNA gene. The sequence files in this dataset are FASTQ files from the Illumina sequencing analysis.
Data Types:
  • Software/Code
  • Dataset
  • Text
  • File Set
Quality control of the RNA extraction was performed using the Agilent bioanalyzer; samples were processed using the Illumina™ TotalPrep™-96 RNA Amplification Kit (ThermoFisher 4393543), hybridized to Illumina HT12v4 microarrays (catalog number: 4393543), and scanned on an Illumina HiScan scanner [22] [23]. For each of the 22 individuals, three biological replicates were profiled, with each sample being assessed at both standard glucose conditions (11 mM glucose) and high glucose conditions (30 mM glucose). Biological replicates were split from the same mother flask; cells were grown in separate flasks and run on different microarray plates on different days. Each biological replicate was generated from a separate frozen aliquot of that cell line. The gene expression data comprised a total of 144 samples from 22 individuals (3 replicates per individual and treatment, except for 3 individuals with 5 replicates). Gene expression was assessed in two conditions, standard glucose and high glucose, and generated from four different groups of gene expression arrays run at a given time that were carefully designed to minimize potential batch effects.

BeadChip data were extracted using GenomeStudio (version GSGX 1.9.0), and the raw expression and control probe data from the four batches were preprocessed using the lumiExpresso function in the lumi R package [8, 9] in three steps: (i) background correction (lumiB function with the bgAdjust method); (ii) variance-stabilizing transformation (lumiT function with the log2 option); (iii) normalization (lumiN function with the robust spline normalization (rsn) algorithm, a mixture of quantile and loess normalization). To remove unexpressed probes, we applied a detection filter to retain probes with strong true signal, keeping probes with Illumina BeadArray detection p-values < 0.01 and then removing probes without annotated genes, resulting in a total of 15,591 probes.
Data Types:
  • Dataset
  • Text
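The preprocessing above ran in R with the lumi package. Purely to illustrate the order of operations, here is a minimal Python sketch that substitutes simple stand-ins for each step (a per-sample background floor for lumiB, plain log2 for the variance-stabilizing transform, and quantile normalization for rsn); it is not the authors' pipeline:

```python
import math

def preprocess(samples, detection_p, p_cut=0.01):
    """Toy sketch: background correction, log2 transform, quantile
    normalization, then a detection filter.

    samples: dict sample_name -> list of raw probe intensities
    detection_p: one detection p-value per probe
    """
    # (i) background correction: subtract a simple per-sample floor
    floor = {s: min(v) for s, v in samples.items()}
    corrected = {s: [x - floor[s] + 1.0 for x in v] for s, v in samples.items()}
    # (ii) variance-stabilizing transform approximated by log2
    logged = {s: [math.log2(x) for x in v] for s, v in corrected.items()}
    # (iii) quantile normalization: replace each value by the mean of
    # the same-ranked values across samples
    n = len(next(iter(logged.values())))
    order = {s: sorted(range(n), key=lambda i: v[i]) for s, v in logged.items()}
    rank_means = [sum(sorted(v)[r] for v in logged.values()) / len(logged)
                  for r in range(n)]
    normalized = {}
    for s, idx_by_rank in order.items():
        out = [0.0] * n
        for rank, idx in enumerate(idx_by_rank):
            out[idx] = rank_means[rank]
        normalized[s] = out
    # detection filter: keep probes detected at p < p_cut
    keep = [i for i in range(n) if detection_p[i] < p_cut]
    return {s: [v[i] for i in keep] for s, v in normalized.items()}
```

After quantile normalization every sample shares the same value distribution, which is the property the real rsn step also enforces (combined with loess smoothing).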
This dataset includes the calculation of dialect distance for 278 Chinese prefecture cities used in the paper "Promoting or preventing labor migration? Revisiting the role of language" (https://doi.org/10.1016/j.chieco.2020.101407)
Data Types:
  • Dataset
  • File Set
BarkVN-50 consists of 50 categories of bark texture images: 5,578 images in total, each 303 × 404 pixels. Image classification tasks can be performed on this dataset.
Data Types:
  • Dataset
  • File Set
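For classification experiments on a per-category image collection like BarkVN-50, a stratified train/test split keeps all 50 classes represented in both sets. The helper below is a generic sketch that assumes one folder per category (the record does not document the exact directory layout):

```python
import random

def stratified_split(items, test_frac=0.2, seed=0):
    """Split (filename, category) pairs into train/test lists per class.

    items: iterable of (path, label) pairs, e.g. gathered by walking a
    one-folder-per-category directory tree (layout assumed, not
    documented in this record).
    """
    rng = random.Random(seed)
    by_label = {}
    for path, label in items:
        by_label.setdefault(label, []).append(path)
    train, test = [], []
    for label, paths in by_label.items():
        paths = sorted(paths)      # deterministic base order
        rng.shuffle(paths)         # seeded shuffle for reproducibility
        cut = max(1, int(len(paths) * test_frac))
        test += [(p, label) for p in paths[:cut]]
        train += [(p, label) for p in paths[cut:]]
    return train, test
```

Stratifying per class matters here because bark categories may have unequal image counts.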
Data File
Data Types:
  • Dataset
  • File Set
HealthAidKB is a knowledge base produced by an automatic extraction and clustering pipeline for common procedural knowledge in the health domain. Our goal is to construct a domain-targeted, high-precision procedural knowledge base containing task frames. We developed a pipeline of methods leveraging Open IE to extract procedural knowledge by tapping into online communities. In addition, we devised a mechanism to canonicalize the task frames into clusters based on the similarity of the problems they intend to solve. The resulting knowledge base shows high precision based on an evaluation by human experts in the domain. We extracted the procedural knowledge from the Health category of wikiHow (https://www.wikihow.com/Category:Health) and from How to Cure (https://howtocure.com/).
Data Types:
  • Software/Code
  • Tabular Data
  • Dataset
  • Text
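The canonicalization step groups task frames whose problem statements are similar. The record does not specify the similarity measure or clustering method used, so the sketch below illustrates the idea with a simple stand-in: greedy clustering on word-level Jaccard similarity.

```python
def jaccard(a, b):
    """Word-overlap similarity between two problem statements."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def canonicalize(problems, threshold=0.5):
    """Greedy single-pass clustering of task-frame problem statements.

    Each new problem joins the first cluster whose representative is
    similar enough, else it starts a new cluster. Both the measure and
    the threshold are illustrative assumptions.
    """
    clusters = []  # list of (representative, members)
    for p in problems:
        for rep, members in clusters:
            if jaccard(rep, p) >= threshold:
                members.append(p)
                break
        else:
            clusters.append((p, [p]))
    return clusters
```

A real pipeline would likely use embedding-based similarity rather than raw token overlap, but the cluster-by-problem structure is the same.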
We present a new Bangla dataset together with a Hybrid Recurrent Neural Network model that generates Bangla natural-language descriptions of images. The dataset comprises a large number of images with classification labels and accompanying natural-language descriptions. We conducted experiments on our self-made Bangla Natural Language Image to Text (BNLIT) dataset, which contains 8,743 images collected from a Bangladesh perspective, with one annotation per image. The repository includes two preprocessed versions of the data, at 224 × 224 and 500 × 375 pixels respectively, alongside annotations for the full dataset. We also include a file of CNN features for the whole dataset, features.pkl.
Data Types:
  • Other
  • Software/Code
  • Dataset
  • Text
  • File Set
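The precomputed CNN features ship as a Python pickle (features.pkl). A minimal loader is sketched below; the assumed layout (a mapping from image identifier to feature vector) is a guess, so inspect the loaded object to confirm its actual structure before use.

```python
import pickle

def load_cnn_features(path):
    """Load the features.pkl file distributed with the BNLIT dataset.

    Returns whatever object the pickle contains; the image-id ->
    feature-vector mapping assumed here is not documented in the record.
    """
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical usage once the repository is downloaded:
# features = load_cnn_features("features.pkl")
```

Only unpickle files from sources you trust; pickle can execute arbitrary code on load.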