Recently published
- Research data: The datasets used in this study were obtained from publicly available online sources, as summarized in the submitted data-source table. The boundary of the study area was defined using the 2021 version of the Qinghai-Tibet Plateau boundary shapefile. All input datasets were preprocessed, clipped to the study-area boundary, and harmonized to a consistent spatial framework prior to analysis. These data were used to derive the RUSLE factors, landscape metrics, and explanatory variables for the investigation of water-erosion dynamics and threshold-driven mechanisms on the Qinghai-Tibet Plateau.
- Dataset for: Discrete Choice Experiment on Technology-Institution Misfit and Frontline Bureaucratic Burden in China: This dataset contains the survey responses from a discrete choice experiment (DCE) examining how technology-institution misfit affects frontline bureaucratic burden and coping strategies in the context of digital governance in China. Data were collected from 200 frontline bureaucrats (township officials, sub-district office staff, and village cadres) recruited through the Credamo platform. Each respondent evaluated 8 choice sets, each presenting two alternative digital governance scenarios characterized by different combinations of three misfit attributes (goal misfit, process misfit, and data standard misfit) at three levels (full alignment, partial misfit, and severe misfit). The experimental design was generated using a D-efficient algorithm via the dcreate command in Stata. The dataset includes binary choice outcomes, Likert-scale ratings of perceived task complexity, perceived work redundancy, and perceived psychological anxiety for each scenario, coping strategy selections, and respondent demographic variables (gender, age, educational attainment, and work location). All data have been anonymized to protect respondent identities. This dataset supports the analyses reported in: Yang, D., Deng, S., Gao, Z., & Zhou, Y. The Empowerment Paradox: How Technology-Institution Misfit Shapes Frontline Bureaucratic Burden in Digital Governance. Government Information Quarterly.
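Choice data of this kind are typically analyzed with a conditional (McFadden) logit, where each scenario's attribute levels map to an additive utility and choice probabilities follow a softmax over the two alternatives in a set. The sketch below illustrates that mapping in plain numpy; the part-worth values are invented for illustration and are not estimates from this dataset:

```python
import numpy as np

# Hypothetical part-worth utilities for the three misfit attributes
# (full alignment = 0 baseline; the numbers are illustrative only).
PART_WORTHS = {
    "goal":    {"full": 0.0, "partial": -0.6, "severe": -1.4},
    "process": {"full": 0.0, "partial": -0.4, "severe": -1.0},
    "data":    {"full": 0.0, "partial": -0.3, "severe": -0.8},
}

def utility(scenario):
    """Sum attribute part-worths for one scenario, e.g. {'goal': 'partial', ...}."""
    return sum(PART_WORTHS[attr][level] for attr, level in scenario.items())

def choice_probabilities(scenario_a, scenario_b):
    """Conditional-logit probability of choosing each of two alternatives."""
    u = np.array([utility(scenario_a), utility(scenario_b)])
    expu = np.exp(u - u.max())  # subtract the max for numerical stability
    return expu / expu.sum()

a = {"goal": "full", "process": "partial", "data": "full"}
b = {"goal": "severe", "process": "severe", "data": "partial"}
p = choice_probabilities(a, b)  # the less-misfit scenario gets higher probability
```

In an actual estimation the part-worths would be fit by maximum likelihood from the binary choice outcomes (e.g. with clogit in Stata after dcreate, or statsmodels' ConditionalLogit in Python).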
- Induction of magnetodielectric coupling in lanthanum ferrite multiferroics by cobalt doping: Experimental results and treatments from LaFeO3 cobalt-doping research.
- Dataset on Adolescent Stress Coping and Social Media Use in the Context of Social Isolation: This dataset contains de-identified, item-level online survey responses from 466 adolescents and emerging adults (ages 12–26) collected to examine whether social isolation is associated with stress coping, whether this relationship is mediated by negative emotions, and whether social media use intensity buffers the negative link between negative emotions and coping (i.e., a moderated mediation framework), with an additional exploratory focus on a potential inverted U-shaped association between daily social media time and coping. The file includes coarsened demographics (e.g., gender, age group, education group, region group, urban/rural), social media behavior indicators (e.g., multi-platform use, preferred platform indicators, friends category, daily time category), and item-level scores for four scales: Social Isolation (6 items), Negative Emotions/DASS-21 (21 items), Social Media Use Intensity (6 items), and Coping Style (20 items; positive and negative coping), along with computed construct scores (e.g., SI, NE_mean, SMU, SCP, SCN, and an overall coping index SC) documented in the codebook for reproducibility. To protect participant privacy (including minors), fine-grained geographic identifiers and open-text fields are excluded and key demographic variables are aggregated; the dataset is suitable for reproducing the reported models (e.g., moderated mediation via regression/PROCESS-equivalent methods), conducting psychometric checks, and performing secondary analyses of digital media use, emotion, and coping processes.
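The exploratory inverted-U question such a dataset raises is usually probed with a quadratic regression: an inverted U-shape implies a negative coefficient on the squared term, with the turning point at -b1/(2*b2). A minimal numpy sketch on synthetic data (not the dataset itself; the peak at 3 h/day is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily social-media hours and a coping score with a built-in
# inverted U (true peak at 3 h/day); purely illustrative data.
hours = rng.uniform(0, 8, 500)
coping = 2.0 + 1.2 * hours - 0.2 * hours**2 + rng.normal(0, 0.3, 500)

# Fit coping = b0 + b1*hours + b2*hours^2.
# np.polyfit returns coefficients from the highest degree down.
b2, b1, b0 = np.polyfit(hours, coping, deg=2)

# An inverted U requires b2 < 0; the turning point is where the slope is zero.
turning_point = -b1 / (2 * b2)
```

With real survey data one would also test whether the turning point falls inside the observed range of daily time and report confidence intervals for b2.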
- How can high-tech manufacturing achieve high innovation productivity? A configurational path analysis under the TOE framework.
  1. What is this dataset? This repository contains the comprehensive dataset and original execution scripts (in R and Python) supporting the dynamic Qualitative Comparative Analysis (QCA) of high-tech manufacturing innovation productivity in China. It provides all the materials needed to fully reproduce the configurational path analysis, temporal trend visualizations, industry heterogeneity evaluations, and out-of-sample predictive validity tests presented in the manuscript, based on the Technology-Organization-Environment (TOE) framework.
  2. How was this dataset collected? The raw panel data were collected from Chinese A-share listed high-tech manufacturing firms covering the period from 2015 to 2024. Financial and patent data were sourced from authoritative databases including CSMAR and WIND.
  3. What files are included? The repository is structured into 6 core files to ensure complete transparency and reproducibility:
     - PANELDATA.csv: The primary panel dataset containing the foundational data for the analytical sample, used as the main input for the dynamic QCA process.
     - DYNAMIC.R: The core R script, using the QCA and admisc packages. It performs the fuzzy-set calibration, necessity and sufficiency analyses (truth-table minimization), and computes both between-group and within-group consistencies across industry configurations.
     - Calibrated_Data.csv: The fully calibrated fuzzy-set dataset exported from the main QCA procedure, serving as the direct input for out-of-sample testing.
     - Out-of-Sample Predictive Validity Test.py: A Python script using pandas and seaborn to perform predictive validity testing on a holdout sample (2020-2024). It calculates the consistency and coverage of the specific configurations and automatically generates scatter plots for validation.
     - plot_data.csv: A structured dataset extracted and formatted from the QCA clustering results, dedicated to generating temporal trend lines.
     - photo.R: An R script using the ggplot2 package to read plot_data.csv and visualize the intertemporal evolution of configurational consistency over the decade.
  4. How can this dataset be used? Researchers and reviewers can download the complete package into a single local directory for "plug-and-play" reproducibility. Running the R and Python scripts in sequence replicates the exact configurational pathways, robustness checks, and figures discussed in the study. The package also serves as a methodological template for scholars integrating dynamic QCA with machine-learning-inspired out-of-sample prediction in management research.
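The fuzzy-set calibration step in a QCA workflow is commonly done with Ragin's direct method: a raw value is mapped to a membership score in [0, 1] using three qualitative anchors (full non-membership, crossover, full membership). The sketch below is a generic illustration of that method in Python, not the authors' R code, and the anchor values in the example are arbitrary:

```python
import math

def calibrate(x, full_non, crossover, full_mem):
    """Direct-method fuzzy-set calibration (Ragin): map a raw value to a
    membership score in [0, 1] via three qualitative anchors."""
    # Log-odds assigned to the full-membership anchor (~0.95 membership);
    # symmetrically, the full non-membership anchor gets ~0.05.
    scale = math.log(0.95 / 0.05)
    if x >= crossover:
        log_odds = scale * (x - crossover) / (full_mem - crossover)
    else:
        log_odds = scale * (x - crossover) / (crossover - full_non)
    # Logistic transform of the log-odds gives the membership score.
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example with arbitrary anchors 10 / 50 / 90:
m_cross = calibrate(50, 10, 50, 90)  # crossover -> 0.5 membership
m_high = calibrate(90, 10, 50, 90)   # full-membership anchor -> ~0.95
m_low = calibrate(10, 10, 50, 90)    # full non-membership anchor -> ~0.05
```

In the repository itself this transformation is performed by DYNAMIC.R via the QCA package's calibrate() function; anchor choices there are substantive decisions documented in the manuscript.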
- The impact of global regional trade agreement network centrality on exports
  1. Research hypothesis and findings. Since the 1990s, regional trade agreements (RTAs) have increasingly evolved into complex global networks. We hypothesize that a country's embeddedness in these RTA networks, measured by network centrality, enhances its export performance by fostering technical progress and strengthening comparative advantages. Using a gravity model with global RTA data from 2010 to 2021, we find that higher RTA network centrality significantly promotes exports. The positive effects are particularly strong for countries with stable hub positions, a greater number of agreements with partner countries, and closer geographical proximity.
  2. Description of the data. The dataset combines publicly available sources on global RTAs and trade flows. It contains both raw and cleaned versions of the data, as well as constructed variables that capture RTA depth, breadth, and network centrality measures. All variable definitions, construction methods, and data sources are described in detail in the accompanying Replication Package README file.
  3. How to interpret and use the data. The uploaded zip archive (The impact of global regional trade agreement network centrality on exports_Readme_Replication.zip) includes four folders: (1) data, containing the raw and cleaned datasets used in the analysis; (2) do file, including the Stata script (Code-EMR-Replication.do), which reproduces all empirical results; (3) README Replication, providing detailed documentation of variables, data construction, sources, and replication steps; (4) results, including all empirical outputs (Tables 1-10, Figures 1-3, Appendix Tables 1-4). These materials allow other researchers to understand the structure of the dataset, replicate the analysis line by line, and extend the study to new contexts.
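A gravity model of this kind is typically estimated as a log-linear regression: bilateral exports rise with the partners' GDPs, fall with distance, and the network-centrality measure enters as an additional regressor. The sketch below estimates that specification by OLS on synthetic data; variable names, elasticities, and the centrality coefficient are illustrative assumptions, not values from the replication package:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Synthetic country-pair data with known elasticities (1, 1, -1)
# and a positive centrality effect (0.5) built in by construction.
log_gdp_o = rng.normal(10, 1, n)    # log GDP, origin country
log_gdp_d = rng.normal(10, 1, n)    # log GDP, destination country
log_dist = rng.normal(7, 0.5, n)    # log bilateral distance
centrality = rng.uniform(0, 1, n)   # hypothetical RTA network centrality

log_exports = (1.0 * log_gdp_o + 1.0 * log_gdp_d - 1.0 * log_dist
               + 0.5 * centrality + rng.normal(0, 0.2, n))

# Log-linear gravity equation estimated by least squares:
# ln X_ij = b0 + b1 ln GDP_i + b2 ln GDP_j + b3 ln dist_ij + b4 centrality_i
X = np.column_stack([np.ones(n), log_gdp_o, log_gdp_d, log_dist, centrality])
beta, *_ = np.linalg.lstsq(X, log_exports, rcond=None)
```

The replication package's Stata do-file presumably adds the usual gravity controls (fixed effects, RTA depth/breadth measures); this block only shows the bare functional form.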
- Chemical and isotope data of the Sijiaotian granite
- Original Data: Patch features of alpine grassland
- Road erosion on the Loess Plateau in 2022: Road surface erosion on the Loess Plateau in 2022
- BanglaFace: A Curated In-the-Wild Bangladeshi Facial Image Dataset for Face Inpainting and Computer Vision
  DATASET OVERVIEW
  This dataset consists of 1,625 unique high-quality facial images of Bangladeshi demographics extracted from 45 YouTube videos across 9 content categories. All images were collected from publicly available videos, which are listed in video_metadata.csv. No additional personal or confidential information is included. The images are curated for face inpainting, facial recognition, and generative modeling. The dataset features both raw in-the-wild and geometrically aligned face representations with comprehensive metadata.
  SUMMARY STATISTICS OF DATASET
  - Total Images: 1,625 unique individuals (deduplicated from 80,137 raw detections)
  - Source Videos: 45 across 9 categories
  - Total Duration: 22 hours 22 minutes 24 seconds
  - Demographics: Bangladeshi population
  RAW VS. ALIGNED IMAGES
  - Raw Faces: Original crops from video frames via MTCNN detection, retaining natural pose and scale variations for authentic in-the-wild facial data.
  - Aligned Faces: Standardized 112x112 pixel images with facial landmarks normalized to consistent positions, facilitating model training on facial identity features.
  PROCESSING PARAMETERS
  - Face Detector: MTCNN with 0.85 confidence threshold
  - Minimum Face Resolution: 100x100 pixels
  - Deduplication: ResNet-34 embeddings (128-dim) with 0.54 similarity threshold
  - Blur Filtering: Laplacian variance metric, threshold 80.0
  - Frame Sampling: 1 frame per 10 frames
  - Video Resolution: Minimum 720p
  CATEGORY DISTRIBUTION
  - Festival (9 videos): 247 images, 15.20%
  - University (9): 319 images, 19.63%
  - Travel (15): 642 images, 39.51%
  - Market (4): 207 images, 12.74%
  - Public Events (3): 162 images, 9.97%
  - Lifestyle (1): 14 images, 0.86%
  - Education (1): 23 images, 1.42%
  - Historical Place (1): 8 images, 0.49%
  - Documentaries (2): 3 images, 0.18%
  ORGANIZATION
  Sequential IDs (000001-001625) are organized by category, video, and then by frame order. The image_metadata.csv and video_metadata.csv files provide complete traceability, linking each image to its source video, category, and frame ID. The directory structure is:
  BanglaFace_Dataset/
  ├── images/             # 1,625 raw face images (000001.jpg - 001625.jpg)
  ├── aligned_images/     # 1,625 aligned face images (000001.jpg - 001625.jpg)
  └── metadata/
      ├── image_metadata.csv  # Complete image metadata
      └── video_metadata.csv  # Complete video metadata
  USE CASES
  This dataset is suitable for:
  - Face inpainting and completion tasks
  - Facial recognition and verification systems
  - Face alignment and landmark detection
  - Generative model training (GANs, VAEs)
  - Face attribute analysis
  - Demographic studies on facial characteristics
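The blur-filtering step in the BanglaFace pipeline (Laplacian variance, threshold 80.0) is a standard sharpness test: convolve the grayscale image with a Laplacian kernel and reject images whose response variance is low. The sketch below re-implements it in plain numpy with a 4-neighbour Laplacian; the dataset's own pipeline may use a different kernel (e.g. OpenCV's cv2.Laplacian), so treat this as an approximation of the stated filter:

```python
import numpy as np

BLUR_THRESHOLD = 80.0  # the cutoff stated in the dataset's processing parameters

def laplacian_variance(gray):
    """Variance of the 3x3 Laplacian response of a grayscale float array;
    low values indicate little high-frequency content, i.e. a blurry image."""
    g = gray.astype(np.float64)
    # 4-neighbour Laplacian: up + down + left + right - 4 * centre.
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def is_sharp(gray, threshold=BLUR_THRESHOLD):
    """Keep the image only if its Laplacian variance exceeds the threshold."""
    return laplacian_variance(gray) > threshold

rng = np.random.default_rng(0)
noisy = rng.uniform(0, 255, (64, 64))  # high-frequency content -> passes
flat = np.full((64, 64), 128.0)        # uniform image -> rejected as blurry
```

The absolute variance depends on image scale and kernel choice, so the 80.0 threshold only makes sense for the specific kernel and 0-255 intensity range the original pipeline used.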

The Generalist Repository Ecosystem Initiative
Elsevier's Mendeley Data repository is a participating member of the National Institutes of Health (NIH) Office of Data Science Strategy (ODSS) GREI project. The GREI includes seven established generalist repositories funded by the NIH to work together to establish consistent metadata, develop use cases for data sharing, train and educate researchers on FAIR data and the importance of data sharing, and more.
Why use Mendeley Data?
Make your research data citable
Unique DOIs and simple citation tools make it easy to refer to your research data.
Share data privately or publicly
Securely share your data with colleagues and co-authors before publication.
Ensure long-term data storage
Your data is archived by DANS (Data Archiving & Networked Services) for as long as you need it.
Keep access to all versions
Mendeley Data supports versioning, making longitudinal studies easier.
The Mendeley Data communal data repository is powered by Digital Commons Data.
Digital Commons Data provides everything that your institution will need to launch and maintain a successful Research Data Management program at scale.