Recently published
140483 results
- Use of Artificial Intelligence in the University Context. The incorporation of artificial intelligence (AI) into the university environment has brought about a significant change in institutional dynamics, the academic role, and educational practice. The purpose of this research is to examine the impact AI is having on the university environment. Its capacity to optimize academic management and learning is examined, without overlooking the risk of fostering cognitive dependence. The study used a qualitative approach at a descriptive level, with a multiple-case-study design focused on several university institutions; the cases examined are the Universidad Politécnica del estado de Mérida Kleber Ramírez and the Universidad Nacional Abierta. Among the findings: AI personalizes learning, adjusting it to each student's needs; it optimizes management, making processes more efficient; and it strengthens research by easing access to and processing of information.
- Customer Experience and Tour Guiding Specialization in Gaziantep and Naples. Data for "Customer Experience and Tour Guiding Specialization in Gaziantep and Naples".
- PICTographPlus: Data Sets and Resources. All the code is freely available at https://github.com/KarchinLab/pictographPlus and https://github.com/KarchinLab/PICTographPlus_manuscript. Benchmarking dataset: all benchmarking scripts and data information can be found in the benchmarking folder; a README.md is provided with detailed instructions. Case analysis dataset inputs: CRUK0004_input, input data for the TRACERx patient (CRUK0004_SNV.csv: somatic mutation file; CRUK0004_facets.csv: somatic copy number file; CRUK0004_RNA.csv: RNA read counts); MK74_input, input data for the Semaan et al. IPMN patient (MK74_snv.csv: somatic mutation file; MK74_cn.csv: somatic copy number file; MK74_rna.csv: RNA read counts); PCSI0380_input, input data for the PanCuRx PDAC patient (PCSI0380_snv.csv: somatic mutation/copy number file; PCSI0380_.csv: RNA read counts). Outputs: CRUK0004_output, MK74_output, and PCSI0380_output for the three patients above. Each output folder contains: tree.csv (PICTographPlus tree), subclone_proportion.csv (subclone proportion for each sample), clusterAssign.csv (assignment of mutations to clusters), purity.csv (estimated purity per sample), mcf.csv (cluster mutation cellular fraction), clonal_expression.csv (clone-level gene expression), and GSEA (gene set enrichment analysis results).
- Ribo-Tweezer: rapid removal of ribosomal proteins reveals new layers of post-transcriptional gene-regulation. This dataset consists of all unprocessed images, blots, and gels for the related publication (Chen and Cheng et al., 2026).
- Population-level encoding of somatosensation. Files to run both anesthetized and awake analyses from "Population-level encoding of somatosensation in mouse sensorimotor cortex" by Lipton, M.H., Park, S., & Dadarlat, M.C. (2026). Please refer to the data availability statement in Alonso, I., Scheer, I., et al. (2023) for links to the data used in the "awake" analyses.
- Keywords in Context and Headlines. There are two descriptions of the corpus: the first consists of tags made up of keywords, and the second of headlines from climate-change reportage. The data were extracted from the Mail and Guardian, with the main focus on the Green Guardian columns.
- Curriculum Competency Mapping for Indonesian IT Programs – Course Learning Outcomes and Skill Statements (2025). This dataset contains course learning outcomes and competency statements collected from Indonesian Information Technology (IT) bachelor's degree programs. Each entry represents a specific learning objective, skill outcome, or competency expectation associated with the IT curriculum. The data were compiled to support research on bilingual skill mapping and alignment between university curricula and job market demands. Variables include the text of each competency statement or learning outcome. These statements cover a wide range of IT domains such as web development, database systems, networking, programming languages, digital media production, business and entrepreneurship topics relevant to IT, system analysis and design, network and security, and emerging computing topics. This dataset is intended to be used for curriculum analysis, text mining, skill mapping studies, and educational alignment research.
- Job Skill Requirements from IT Job Advertisements – Indonesian IT Sector (2025). This dataset contains job skill descriptions and requirements extracted from IT job advertisements in Indonesia. Each entry represents a skill, responsibility, or competency expected in IT-related job roles, covering areas such as software development (frontend and backend), database management, API design, DevOps practices, application maintenance, UI/UX collaboration, infrastructure and cloud deployment, quality assurance, troubleshooting, technical support, networking, system administration, and various technology stacks (e.g., Laravel, React, Docker, AWS, Golang, Python, SQL/NoSQL). The dataset is useful for analyzing industry skill demands, mapping academic curricula to job market requirements, identifying trends in IT competencies, and supporting research in education-to-employment alignment.
- Enhanced GoCJ: Google Cloud Jobs Dataset. The GoCJ dataset comprises multiple files, where each file contains the sizes of a specified number of jobs expressed in Million Instructions (MI), derived from workload behaviors observed in Google cluster traces. The name of each file indicates the number of jobs it contains; for example, GoCJ_Dataset_1000 includes 1000 jobs along with their associated SLA classes and arrival times. In this study, a modified version of the GoCJ dataset is employed. Each dataset file consists of three columns: (i) job length in Million Instructions (MI), (ii) Service Level Agreement class (SLA: {1, 2, 3}), representing different priority levels, and (iii) job arrival time, which captures realistic workload submission behavior. The experimental evaluation is conducted using the following dataset files: GoCJ_Dataset_1000.csv, GoCJ_Dataset_2000.csv, GoCJ_Dataset_3000.csv, GoCJ_Dataset_4000.csv, GoCJ_Dataset_5000.csv, and GoCJ_Dataset_6000.csv, enabling performance analysis under increasing workload scales. The file Original_Enhanced_Dataset.txt contains the 50 seed job sizes required as input for both the Java-based generator (EnhancedGoCJGenerator.java) and the Excel-based generator (GoCJ_Enhanced_Generator.xlsx) to reproduce datasets of any desired size while preserving the original workload distribution properties. The Java-based generator source file is also available online at https://github.com/Mohsin-Nawaz/Enhanced_GoCJ-Java-Generator
- The Grad-CAM Visualizations: Oral Cancer Classification. The Grad-CAM visualizations for all images from the oral cancer classification case study test set, together with the corresponding original images and class labels (encoded in the filenames). This is part of the supporting information for the manuscript "Methodological Failures and Barriers to Reproducibility in AI-Based Dental Research", submitted to the Discover Applied Sciences journal.
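Each PICTographPlus output folder described above is a set of flat CSV files (tree.csv, subclone_proportion.csv, clusterAssign.csv, and so on). A minimal Python sketch for pulling them all into memory, assuming standard comma-separated files with header rows (the exact column names should be checked against the real outputs, not taken from this example):

```python
import csv
from pathlib import Path

def load_output_folder(output_dir: str) -> dict[str, list[dict]]:
    """Read every CSV in a PICTographPlus output folder into a list of
    row dicts, keyed by file stem (e.g. "tree", "subclone_proportion").

    Assumes comma-separated files whose first row is a header; verify
    against the actual output files before relying on this layout.
    """
    tables = {}
    for path in sorted(Path(output_dir).glob("*.csv")):
        with open(path, newline="") as f:
            tables[path.stem] = list(csv.DictReader(f))
    return tables
```

With this, `load_output_folder("CRUK0004_output")["tree"]` would give the tree rows for the TRACERx patient, provided the folder follows the layout above.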
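The two Indonesian IT datasets above are meant to be paired: curriculum learning outcomes on one side, job-advertisement skill requirements on the other. Neither dataset prescribes a matching method; as one illustrative baseline only, a token-overlap match could look like the sketch below (the strings in the usage example are placeholders, not actual dataset entries):

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the lowercased token sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def best_match(outcome: str, job_skills: list[str]) -> tuple[str, float]:
    """Return the job-ad skill statement most similar to a course outcome,
    together with its similarity score."""
    return max(((s, jaccard(outcome, s)) for s in job_skills),
               key=lambda pair: pair[1])
```

For example, `best_match("design relational database systems", ads)` would surface the advertisement skill sharing the most tokens with the outcome; real skill-mapping studies would likely use embeddings or bilingual normalization instead of raw token overlap.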
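The GoCJ file layout described above (three columns: job length in MI, SLA class, arrival time) can be parsed in a few lines of Python. This sketch assumes plain CSV files with no header row, which should be verified against the actual GoCJ_Dataset_*.csv files:

```python
import csv

def load_gocj(path: str) -> list[tuple[int, int, float]]:
    """Parse a GoCJ dataset file into (job_mi, sla_class, arrival_time) tuples.

    Assumes three columns in the documented order -- job length in Million
    Instructions, SLA class (1-3), and arrival time -- with no header row.
    """
    jobs = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:  # skip blank lines
                continue
            jobs.append((int(row[0]), int(row[1]), float(row[2])))
    return jobs
```

Loading GoCJ_Dataset_1000.csv through GoCJ_Dataset_6000.csv with this function would reproduce the increasing-workload scales used in the evaluation, assuming the files match the three-column layout.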

The Generalist Repository Ecosystem Initiative
Elsevier's Mendeley Data repository is a participating member of the National Institutes of Health (NIH) Office of Data Science Strategy (ODSS) GREI project. The GREI includes seven established generalist repositories funded by the NIH to work together to establish consistent metadata, develop use cases for data sharing, train and educate researchers on FAIR data and the importance of data sharing, and more.
Why use Mendeley Data?
Make your research data citable
Unique DOIs and easy-to-use citation tools make it easy to refer to your research data.
Share data privately or publicly
Securely share your data with colleagues and co-authors before publication.
Ensure long-term data storage
Your data are archived for as long as you need by Data Archiving & Networked Services.
Keep access to all versions
Mendeley Data supports versioning, making longitudinal studies easier.
The Mendeley Data communal data repository is powered by Digital Commons Data.
Digital Commons Data provides everything that your institution will need to launch and maintain a successful Research Data Management program at scale.