Share your research data

Mendeley Data is a free and secure cloud-based communal repository where you can store your data, ensuring it is easy to share, access and cite, wherever you are.

Create a Dataset

Find out more about our institutional offering, Digital Commons Data

Search the repository

Recently published

140,902 results
  • Time-resolved difference absorption spectra ΔA(λ,t) of D1/D2/cyt b559 reaction center complexes and Photosystem II core complexes from spinach
    File format: space-delimited text files. Legend to file names:
    - D1D2: D1/D2/cyt b559 reaction center complexes
    - PSII: Photosystem II core complexes
    - 650 nm, 655 nm, 690 nm, 700 nm: center wavelength of the excitation pulse
    - 5 nJ, 10 nJ, 12 nJ, 15 nJ, 20 nJ, 35 nJ: energy of the excitation pulse
    - magic, orthog, parall: magic-angle, orthogonal, or parallel polarisation of the probe pulse with respect to the pump pulse
    Each file lists wavelength [nm] in the first row (400–850 nm) and time delay [fs] in the first column (from -200 to 500000 fs).
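As a hedged sketch of reading one of these space-delimited files (assuming the header row contains only the wavelengths, as the legend suggests; the inline sample replaces a real file, whose name would follow the legend above):

```python
# Minimal sketch: parse a space-delimited dA(lambda, t) file laid out as
# described above -- wavelengths [nm] in the first row, time delay [fs] in
# the first column of each subsequent row, dA values in the body.
# The sample below is synthetic stand-in data, not values from the dataset.
import io

sample = (
    "400 401 402\n"           # header row: wavelengths [nm]
    "-200 0.1 0.2 0.3\n"      # data rows: delay [fs], then dA per wavelength
    "0 0.4 0.5 0.6\n"
    "500000 0.7 0.8 0.9\n"
)

def read_delta_a(handle):
    """Return (wavelengths, delays, spectra) from an open text stream."""
    wavelengths = [float(x) for x in handle.readline().split()]
    delays, spectra = [], []
    for line in handle:
        cols = [float(x) for x in line.split()]
        delays.append(cols[0])      # time delay [fs]
        spectra.append(cols[1:])    # dA at each wavelength for this delay
    return wavelengths, delays, spectra

wl, t, dA = read_delta_a(io.StringIO(sample))
print(len(wl), len(t), len(dA[0]))   # → 3 3 3
```

If the real files carry a placeholder in the first cell of the header row, the header parse would need to skip that leading token.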
  • Dataset: Risk-Informed Siting and O&M Routing for Floating Offshore Wind
    This dataset provides a comprehensive benchmark for floating offshore wind (FOW) deployment, integrating multi-layer GIS spatial constraints, metocean-driven performance indicators, and structural fatigue metrics for the Spanish coastal area. It includes leakage-safe machine learning surrogates for predictive analytics and a hybrid metaheuristic optimization framework for operation and maintenance (O&M) routing under variable environmental conditions.
  • Legislative Data
    Public Legislative Data from Brazilian Câmara dos Deputados
  • Critical thinking skills in nursing competency regulation: A documentary mapping study with curricular implications
    This dataset provides the analytical materials supporting the study that maps, explicitly and traceably, the seven critical thinking (CT) skills defined by Scheffer and Rubenfeld (2000) onto the General Care Nurse Competency Profile Regulation in Portugal (Ordem dos Enfermeiros [OE], 2011). The purpose of this dataset is to (i) make available the aggregated results of a structured expert consultation and (ii) enable auditability and replicability of the documentary mapping.
    Files included:
    - Supplementary_Material.pdf — five supplementary components: (S1) version history and harmonization decisions for the translation of the seven CT skills into European Portuguese; (S2) structured expert consultation instrument (Round 1) and full item-level aggregated results; (S3) complete codebook with CT skills definitions, operational criteria, direct and indirect anchors, and cross-cutting coding rules; (S4) [see Excel file below]; (S5) joint display presenting the macro–micro integration of expert consultation ratings and documentary mapping results by competency × CT skill.
    - Supplementary_S4_CodingMatrix.xlsx — complete criterion-level coding matrix (IDs 1–96), including CT1/CT2, anchor type (direct/indirect), uncertainty level (low/medium/high), brief rationale, rules applied, uncertainty justification, audit trail note, and recoding/rectification notes.
    Structured expert consultation: the study included a structured expert consultation (modified Delphi; Round 1, n = 9). This repository provides aggregated results only. Confidentiality note: participant-level data are not shared due to the risk of indirect identification given the small panel size and the participants' professional characteristics.
    Source and version of the analyzed document (corpus): documentary analysis was conducted on a publicly accessible normative document: Ordem dos Enfermeiros (2011). Regulamento do Perfil de Competências do Enfermeiro de Cuidados Gerais. https://www.ordemenfermeiros.pt/media/8910/divulgar-regulamento-do-perfil_vf.pdf
    Unit of analysis and ID mapping (1–96): the unit of analysis is each competency criterion in the Regulation (OE, 2011). A unique identifier (ID 1–96) is assigned sequentially to all criteria included in the corpus. Each row in the coding matrix corresponds to one criterion (ID) and also records the higher-level competency (A1–C3) to support contextual interpretation.
    Methodological framework: analysis was conducted as deductive content analysis guided by a theoretical framework, using predefined categories corresponding to the seven CT skills (Scheffer & Rubenfeld, 2000). The purpose of coding is to describe the normative textualization of cognitive operations in the regulatory document — not to evaluate CT demonstrated in clinical practice or to infer CT learning gains.
  • fuzz4db
    Replication package for the paper: "Fuzz4DB: A Practice of LLM-Agent-Guided Fuzzing for Database Feature-Level Delta Testing", Chunling Qin, Yong Hu, Xiao Zhang, Jinchuan Chen, Fangtao Gu, Baoxun Wang, Yuxing Chen, Anqun Pan, Lixiong Zheng, ASE '26, October 12–16, 2026, Munich, Germany.
    Contents:
    - sql/: SQL test corpora (a subset of the generated test cases) for the 9 open-source commits
    - patches-generated/: commit diff patches used for diff-cover analysis
    - final_results/: pre-computed coverage artifacts (LCOV traces, LLVM profdata, diff-cover reports) per commit, with a summary manifest (final_results.json / final_results.tsv)
    - bug/: 63 open-source bug reports (DBMS, bug ID, upstream link, root cause, trigger condition, status)
    - commits.json: commit metadata (hash, DBMS, feature, delta size)
    - README.md: step-by-step instructions to reproduce Tables 1–4 in the paper
    - *.sh: reproduction scripts (unpack, reproduce, verify, show results)
    Docker images are hosted on GitHub Container Registry (ghcr.io) and pulled automatically by the scripts; no local image files are included in this package. Note: the Fuzz4DB source code is not included due to commercial licensing restrictions.
    Environment:
    - OS: Linux (Docker host); Docker >= 20.10; python3 >= 3.8
    - Internet access to pull Docker images from ghcr.io
    - For full execution-based reproduction (Mode 3): mysql client (for c3, c4, c10–c12), psql client (for c1, c2), mariadb client (for c5, c6)
    - ≥16 GB RAM, ≥4 cores (≥8 cores recommended); ~500 MB disk space for this package; Docker images are pulled on demand (~22 GB total)
    Reproduced results:
    - Table 1: tested commits (commits.json)
    - Table 2: per-commit incremental line coverage (show_final_results.sh)
    - Table 3: bug counts by DBMS (bug/Fuzz4db_bug.csv)
    - Table 4: open-source bug summary (bug/Fuzz4db_bug.csv)
    Estimated reproduction time:
    - ~5 minutes to view packaged results (Mode 1: show_final_results.sh)
    - ~15 minutes to re-validate from bundled artifacts (Mode 2: verify_final_results_from_artifacts.sh)
    - ~2–3 hours for a full execution-based rerun (Mode 3: reproduce_coverage.sh, all 9 commits) on a machine with ≥16 GB RAM and ≥8 cores
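The package's manifest schema is not documented in this listing, so as an illustration only (the column names below are hypothetical, not the actual layout of final_results.tsv; consult the package's README.md for the real schema), a TSV summary manifest of this shape can be tabulated with the standard library:

```python
# Illustrative only: read a per-commit coverage manifest shaped as a
# tab-separated file. Column names and values here are invented stand-ins
# for whatever final_results.tsv actually contains.
import csv
import io

sample_tsv = (
    "commit\tdbms\tincremental_line_coverage\n"
    "c1\tPostgreSQL\t0.81\n"
    "c3\tMySQL\t0.74\n"
)

rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
for row in rows:
    cov = float(row["incremental_line_coverage"])
    print(f"{row['commit']:>3}  {row['dbms']:<12}{cov:.0%}")
```

In practice the packaged show_final_results.sh script is the intended way to view these results; a reader-side parse like this would only matter for custom post-processing.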
  • Environmental Digital Governance Index for Chinese Cities, 2015–2023
    This dataset provides a comprehensive panel for investigating the impact of environmental digital governance (EDG) on urban carbon emission intensity (CEI) across Chinese prefecture-level cities from 2015 to 2023.
    Core variables and measurement: the primary explanatory variable is the Environmental Digital Governance Index (EDG). It is constructed from 77,514 government procurement contracts (total value ~RMB 106.2 billion) identified via a three-stage text-analysis pipeline: (i) dual-domain keyword expansion covering digital technology and environmental governance; (ii) large language model (KIMI) semantic annotation of 20,000 randomly sampled contracts; and (iii) LSTM-RoBERTa deep-learning classification (accuracy: 92.62%; F1: 0.9270) applied to the full sample. Contract values are aggregated at the city-year level and normalized by regional GDP to measure governance intensity, with cumulative investment used to account for the long service life of digital infrastructure. The dependent variable, carbon emission intensity (CEI), is derived from the EDGAR v2025 global emission inventory (European Commission JRC / PBL): annual gridded CO₂ emissions at ~10×10 km resolution are spatially aggregated to prefecture-level boundaries via GIS zonal statistics and normalized by city-level real GDP.
    Sample and scope: an unbalanced panel of 2,619 city-year observations spanning 291 prefecture-level cities, excluding municipalities directly under the central government and cities with boundary adjustments or missing data.
    Control variables: extensive city-characteristic controls (per capita GDP, urbanization rate, land area, industrial structure upgrading (tertiary/secondary ratio), financial development, trade openness, and green innovation measured by green patent applications) and government-behavior controls (fiscal expenditure/GDP, R&D expenditure ratio, environmental regulation intensity text-mined from government work reports, and policy dummies for carbon emission trading pilots and energy conservation fiscal policy pilots).
    Files included: the full panel dataset in Stata format (.dta) alongside replication code for baseline regressions, endogeneity tests (IV, PSM, DML, DML-IV), mechanism analyses, and heterogeneity analyses reported in the associated manuscript.
    Potential applications: research on environmental economics, digital governance, climate policy evaluation, and urban sustainability, for example analyzing the carbon-reduction effects of digital regulatory tools, examining heterogeneous policy impacts across technological hierarchies and institutional contexts, or benchmarking alternative measures of government digital transformation.
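The EDG aggregation step described above can be sketched with toy numbers (the city names, values, and GDP figures below are made up; the actual index uses the classified contract corpus and real GDP series):

```python
# Toy sketch of the index construction described above: sum classified
# contract values per city-year, accumulate over years (to reflect the long
# service life of digital infrastructure), and normalize by regional GDP.
# All figures below are invented for illustration.
from collections import defaultdict

contracts = [  # (city, year, classified contract value)
    ("CityA", 2015, 120.0), ("CityA", 2015, 80.0),
    ("CityA", 2016, 100.0), ("CityB", 2015, 50.0),
]
gdp = {("CityA", 2015): 4000.0, ("CityA", 2016): 4400.0,
       ("CityB", 2015): 2000.0}

yearly = defaultdict(float)
for city, year, value in contracts:
    yearly[(city, year)] += value          # aggregate to city-year

edg = {}
cumulative = defaultdict(float)
for city, year in sorted(yearly):          # accumulate in year order
    cumulative[city] += yearly[(city, year)]
    edg[(city, year)] = cumulative[city] / gdp[(city, year)]

print(edg[("CityA", 2016)])   # cumulative (200 + 100) over GDP 4400
```

This only mirrors the arithmetic of the aggregation and normalization; the classification pipeline that selects which contracts count is the substantive part of the measure.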
  • The Synergistic Effect of Physical Disorder and Social Context on Fear of Crime in an Urban Park: A Pilot Cross-modal Virtual Reality Study
    This dataset provides the integrated experimental results of a study investigating the interplay between physical signs of disorder (e.g., unmanaged vegetation) and socio-contextual factors on the fear of crime in urban parks. Data were collected from 56 participants using immersive virtual reality (VR) stimuli of a low-maintenance park in Tokyo, Japan. The dataset is provided as a consolidated file containing the following metrics for each participant:
    ・Psychological measures: scores from the State-Trait Anxiety Inventory (STAI) and the Positive and Negative Affect Schedule (PANAS).
    ・Physiological data: heart rate (HR) and heart rate variability (HRV) recordings.
    ・Eye-tracking data: visual attention and gaze metrics toward specific environmental features (e.g., unmanaged vegetation).
    This research quantifies how visual and non-visual cues interact to shape psychological safety, providing an empirical foundation for Crime Prevention Through Environmental Design (CPTED).
  • CFD Simulation Data for Supersonic Flow over an Aircraft Wing
    Dataset containing CFD simulation inputs and outputs for supersonic flow analysis over an aircraft wing, including mesh data, boundary conditions, pressure and Mach distributions, shock-wave results, and aerodynamic performance outputs.
  • ARMD_CEFID_Publication_Data
    Research data to the publication "Pushing the Limits of Energy Resolution by Combining Monochromated EELS with the CEOS Energy-Filtering and Imaging Device in the JEOL Atomic-Resolution Multi-Dimensional TEM"
  • MC859-Co-compra-e-Recomenda-o-Indutiva-na-Rede-Amazon
    Project graph instances
GREI

The Generalist Repository Ecosystem Initiative

Elsevier's Mendeley Data repository is a participating member of the National Institutes of Health (NIH) Office of Data Science Strategy (ODSS) GREI project. The GREI includes seven established generalist repositories funded by the NIH to work together to establish consistent metadata, develop use cases for data sharing, train and educate researchers on FAIR data and the importance of data sharing, and more.

Find out more

Why use Mendeley Data?

Make your research data citable
Unique DOIs and built-in citation tools make it simple to reference your research data.
Share data privately or publicly
Securely share your data with colleagues and co-authors before publication.
Ensure long-term data storage
Your data is archived by Data Archiving & Networked Services for as long as you need it.
Keep access to all versions
Mendeley Data supports versioning, making longitudinal studies easier.

The Mendeley Data communal data repository is powered by Digital Commons Data.

Digital Commons Data provides everything that your institution will need to launch and maintain a successful Research Data Management program at scale.

Find out more