Version 3.01 of ELMAG, a Monte Carlo program for the simulation of electromagnetic cascades initiated by high-energy photons and electrons interacting with the extragalactic background light (EBL), is presented. Pair production and inverse Compton scattering on EBL photons, as well as synchrotron losses, are implemented using weighted sampling of the cascade development. New features include, among others, the implementation of turbulent extragalactic magnetic fields and the calculation of three-dimensional electron and positron trajectories by solving the Lorentz force equation. As the final result of the three-dimensional simulations, the program provides two-dimensional source images as a function of the energy and the time delay of secondary cascade particles.
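The trajectory calculation mentioned above can be illustrated with a minimal sketch: integrating the Lorentz force equation for a relativistic electron in a magnetic field with the standard Boris rotation, which conserves the particle's speed. This is an illustration of the technique only, not ELMAG's actual integrator; all function names are ours.

```python
import numpy as np

# Minimal sketch (not ELMAG's actual code): propagate an electron through a
# magnetic field by integrating d(gamma*m*v)/dt = -e * v x B with the Boris
# rotation, an exact rotation of v that conserves the speed.
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg
C = 299792458.0              # speed of light, m/s

def boris_push(v, B, dt, gamma):
    """One Boris rotation of velocity v (m/s) in magnetic field B (T)."""
    t = (-E_CHARGE * dt / (2.0 * gamma * M_E)) * B
    v_prime = v + np.cross(v, t)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    return v + np.cross(v_prime, s)

def trajectory(v0, B, dt, steps):
    """Return the electron positions, advancing x with the rotated v."""
    v = np.array(v0, dtype=float)
    # In a pure magnetic field the energy, and hence gamma, is constant.
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / C**2)
    x = np.zeros(3)
    xs = [x.copy()]
    for _ in range(steps):
        v = boris_push(v, B, dt, gamma)
        x = x + v * dt
        xs.append(x.copy())
    return np.array(xs)
```

For a field along z the motion stays in the xy-plane and the gyration is resolved as long as `dt` is small compared with the gyroperiod 2*pi*gamma*m/(e*B).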
Data Types:
  • Dataset
  • File Set
MiTMoJCo (Microscopic Tunneling Model for Josephson Contacts) is a C code that assists the modeling of superconducting Josephson contacts based on the microscopic tunneling theory. The code implements a computationally demanding part of this calculation, namely the evaluation of superconducting pair and quasiparticle tunnel currents from given tunnel current amplitudes (TCAs), which characterize the junction material. MiTMoJCo comes with a library of pre-calculated TCAs for the frequently used Nb-AlOx-Nb and Nb-AlN-NbN junctions, a Python module for developing custom TCAs, a supplementary optimum filtration module for extracting the constant component of a sinusoidal signal, and examples of modeling a few common cases of superconducting Josephson contacts.
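The idea behind extracting the constant component of a sinusoidal signal can be sketched very simply (this is an illustration of the principle, not MiTMoJCo's actual filtration module or API): averaging a record over a whole number of oscillation periods cancels the sinusoidal part exactly and leaves the DC offset.

```python
import numpy as np

# Sketch of the principle only (not MiTMoJCo's optimum filtration module):
# for V(t) = V0 + A*sin(w*t + phi), the mean over an integer number of
# uniformly sampled periods is exactly V0, since the sine terms cancel.
def dc_component(signal, samples_per_period):
    """Average over the largest whole number of periods in the record."""
    n_periods = len(signal) // samples_per_period
    n = n_periods * samples_per_period
    return float(np.mean(signal[:n]))
```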
Data Types:
  • Dataset
  • File Set
We present JeLLyFysh-Version1.0, an open-source Python application for event-chain Monte Carlo (ECMC), an event-driven irreversible Markov-chain Monte Carlo algorithm for classical N-body simulations in statistical mechanics, biophysics and electrochemistry. The application’s architecture mirrors the mathematical formulation of ECMC. Local potentials, long-ranged Coulomb interactions and multi-body bending potentials are covered, as well as bounding potentials and cell systems including the cell-veto algorithm. Configuration files illustrate a number of specific implementations for interacting atoms, dipoles, and water molecules.
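The event-driven move at the heart of ECMC can be illustrated with a minimal one-dimensional hard-rod sketch (an illustration of the idea only, far simpler than JeLLyFysh's architecture; all names here are ours): one "lifted" rod moves forward until it would overlap its neighbor, which then takes over the motion, until a fixed total chain displacement is used up.

```python
import random

# Toy 1D hard-rod ECMC sketch (not JeLLyFysh code): rods of diameter sigma
# in a periodic box; the active rod advances until contact, then the hit
# rod becomes active, until the chain displacement is exhausted.
def ecmc_chain(positions, sigma, box, chain_length, start=None):
    pos = sorted(positions)
    n = len(pos)
    active = random.randrange(n) if start is None else start
    remaining = chain_length
    while remaining > 0.0:
        nxt = (active + 1) % n
        # Free gap to the next rod, wrapped periodically.
        gap = (pos[nxt] - pos[active] - sigma) % box
        step = min(gap, remaining)
        pos[active] = (pos[active] + step) % box
        remaining -= step
        active = nxt                  # lifting variable moves on collision
    return pos
```

Every move is infinitesimally reversible in the lifted space even though each individual displacement only ever goes forward, which is what makes the Markov chain irreversible yet convergent to the correct distribution.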
Data Types:
  • Dataset
  • File Set
Variant calls for the in vivo and in vitro studies of virus diversity and genetic stability reported in the manuscript "A new live-attenuated polio vaccine constructed by rational design improves safety by preventing reversion to virulence" by Yeh et al.
Data Types:
  • Sequencing Data
  • Tabular Data
  • Dataset
Genarris is an open source Python package for generating random molecular crystal structures with physical constraints for seeding crystal structure prediction algorithms and training machine learning models. Here we present a new version of the code, containing several major improvements. An MPI-based parallelization scheme has been implemented, which facilitates the seamless sequential execution of user-defined workflows. A new method for estimating the unit cell volume based on the single molecule structure has been developed using a machine-learned model trained on experimental structures. A new algorithm has been implemented for generating crystal structures with molecules occupying special Wyckoff positions. A new hierarchical structure check procedure has been developed to detect unphysical close contacts efficiently and accurately. New intermolecular distance settings have been implemented for strong hydrogen bonds. To demonstrate these new features, we study two specific cases: benzene and glycine. Genarris finds the experimental structures of the two polymorphs of benzene and the three polymorphs of glycine.
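The spirit of a hierarchical close-contact check can be sketched as follows (an illustration only, not Genarris's actual implementation; periodic boundaries are omitted for brevity): atoms are first binned into coarse cells no smaller than the cutoff, so exact distances are computed only for pairs in the same or neighboring cells.

```python
import itertools
import numpy as np

# Sketch of a cell-list close-contact screen (not Genarris code): the coarse
# binning step makes the exact pair-distance check scale roughly linearly
# with the number of atoms instead of quadratically.
def has_close_contact(coords, cutoff):
    coords = np.asarray(coords, dtype=float)
    cells = {}
    for i, p in enumerate(coords):
        cells.setdefault(tuple((p // cutoff).astype(int)), []).append(i)
    for key, members in cells.items():
        # Only the 27 same-or-adjacent cells can contain a pair within cutoff.
        for d in itertools.product((-1, 0, 1), repeat=3):
            neigh = tuple(k + o for k, o in zip(key, d))
            for i in members:
                for j in cells.get(neigh, []):
                    if j > i and np.linalg.norm(coords[i] - coords[j]) < cutoff:
                        return True
    return False
```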
Data Types:
  • Document
  • Dataset
  • File Set
This file set contains all code and data, as well as descriptions of their use, accompanied by reproducible examples.
Data Types:
  • Dataset
  • File Set
We present an open-source library for coupling particle codes, such as molecular dynamics (MD) or the discrete element method (DEM), with grid-based computational fluid dynamics (CFD). The application is focused on domain decomposition coupling, where particle and continuum software model different parts of a single simulation domain and exchange information. This focus allows a simple library to be developed, with core mapping and communication handled by just four functions. Emphasis is on scaling on supercomputers, a tested cross-language library, deployment with containers, and well-documented simple examples. Building on this core, a template is provided to facilitate user development of common features for coupling, such as averaging routines and functions to apply constraint forces. Interface code for LAMMPS and OpenFOAM is provided both to include molecular detail in a continuum solver and to model fluids flowing through a granular system. Two novel development features are highlighted which will be useful in the development of the next generation of multi-scale software: (i) the division of coupled code into smaller blocks with testing over a range of processor topologies; (ii) the use of coupled mocking to facilitate coverage of various parts of the code and allow rapid prototyping. These two features aim to help users develop coupled models in a test-driven manner and focus on the physics of the problem instead of just software development. All presented code is open-source, with detailed documentation on the dedicated website (cpl-library.org) permitting useful aspects to be evaluated and adopted in other projects.
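The kind of averaging routine built on top of a coupler's core send/receive functions can be sketched like this (names and layout are illustrative, not the cpl-library API): particle velocities are averaged per continuum grid cell before being handed to the CFD side.

```python
import numpy as np

# Illustrative cell-averaging step for a particle-to-continuum exchange
# (not cpl-library code): bin 1D particle velocities into grid cells and
# average, leaving empty cells at zero.
def average_to_cells(positions, velocities, n_cells, domain_length):
    """Cell-average a 1D particle velocity field for a continuum exchange."""
    cell_of = np.minimum(
        (np.asarray(positions) / domain_length * n_cells).astype(int),
        n_cells - 1)
    avg = np.zeros(n_cells)
    counts = np.bincount(cell_of, minlength=n_cells)
    sums = np.bincount(cell_of, weights=velocities, minlength=n_cells)
    np.divide(sums, counts, out=avg, where=counts > 0)
    return avg
```

In a real coupled run this averaged field would be passed through the library's communication layer to the grid solver; here the routine is shown standalone so it can be unit-tested in the mocked, test-driven style the abstract advocates.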
Data Types:
  • Dataset
  • File Set
A program named MQCT is developed for calculations of rotationally and vibrationally inelastic scattering of molecules using the mixed quantum/classical theory approach. Calculations of collisions between two general asymmetric-top rotors are now possible, a feature unavailable in other existing codes. Vibrational states of diatomic molecules can also be included in the basis set expansion to carry out calculations of ro-vibrational excitation and quenching. The minimal input for the code assumes several defaults and is very simple, easy to set up and run by non-experts. Multiple options, available for expert calculations, are listed in the Supplemental Information. The code is parallel and takes advantage of the intrinsic massive parallelism of the mixed quantum/classical approach. A Monte-Carlo sampling procedure, implemented as an option in the code, enables calculations for complicated systems with many internal states and a large number of partial scattering waves. The coupled-states approximation is also implemented as an option. Integral and differential cross sections can be computed for the elastic channel. Rotational symmetry of each molecule, as well as permutation symmetry of the two collision partners, is implemented. Potential energy surfaces for H_2O + He, H_2O + H_2, and H_2O + H_2O are included in the code. Example input files are also provided for these systems.
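The benefit of Monte-Carlo sampling over collision parameters can be shown with a toy example (an illustration of the general idea only, not MQCT's actual sampling procedure): an integral cross section sigma = 2*pi * Int_0^bmax P(b) b db is estimated by sampling the impact parameter b uniformly in b^2, so that sigma is approximately pi*bmax^2 times the mean sampled opacity.

```python
import math
import random

# Toy Monte-Carlo cross-section estimator (not MQCT code): sampling b
# uniformly in b**2 weights each sample by the correct 2*pi*b measure,
# so no explicit quadrature over b (or partial waves) is needed.
def mc_cross_section(opacity, b_max, n_samples, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        b = b_max * math.sqrt(rng.random())  # uniform in b**2
        total += opacity(b)
    return math.pi * b_max**2 * total / n_samples
```

For many internal states the same trick generalizes: sampling a subset of channels or partial waves gives an unbiased estimate at a fraction of the cost of the full sum.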
Data Types:
  • Dataset
  • File Set
We provide preprocessed Sentinel-1 SAR images with corresponding CORINE labels that can be used for training and evaluating Deep Learning (DL) semantic segmentation models for land cover mapping. The data comes from 14 raw Sentinel-1 scenes with two polarisation channels (single-pol, such as HH or VV, and cross-pol, such as HV or VH) that were multilooked, calibrated, terrain-flattened, and terrain-corrected. The Sentinel-1 scenes were split into ~7K 512x512-pixel imagelets. To create RGB images suitable for training DL models from the imagelets, each of the two SAR polarisations is used as one channel in the resulting RGB format, and a free Digital Elevation Model (DEM) layer is added as a third channel. To create labels, the CORINE land cover map is simply split into pieces corresponding to the imagelet areas. We provide imagelets in the .geotiff format so that the georeference information is preserved. The folder structure is suitable for training and evaluating deep learning models:
  • test
  • test-labels
  • train
  • train-labels
  • val
  • val-labels
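The channel stacking described above can be sketched as follows (a plausible recipe for the general idea, not necessarily the exact preprocessing used for this dataset): the two SAR polarisation bands and the DEM tile are each rescaled to 8 bits and stacked into one RGB image per imagelet.

```python
import numpy as np

# Sketch of SAR + DEM channel stacking (illustrative, not the dataset's
# exact pipeline): min-max rescale each band to [0, 255] and stack the
# co-pol, cross-pol and DEM bands as the R, G and B channels.
def to_rgb(co_pol, cross_pol, dem):
    def rescale(band):
        band = np.asarray(band, dtype=float)
        lo, hi = band.min(), band.max()
        scaled = (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)
        return (scaled * 255).astype(np.uint8)
    return np.dstack([rescale(co_pol), rescale(cross_pol), rescale(dem)])
```

In practice SAR backscatter is usually converted to decibels and clipped to fixed percentiles before rescaling, so that a few bright scatterers do not compress the dynamic range of the whole imagelet.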
Data Types:
  • Dataset
  • File Set
The data presented here is part of the whole slide imaging (WSI) datasets generated in the European project AIDPATH. This data is also related to the research paper entitled "Glomerulosclerosis Identification in Whole Slide Images using Semantic Segmentation", published in Computer Methods and Programs in Biomedicine (DOI: 10.1016/j.cmpb.2019.105273). In that article, different methods based on deep learning for glomeruli segmentation and their classification into normal and sclerotic glomeruli are presented and discussed. These data will encourage research on artificial intelligence (AI) methods, the creation and comparison of new algorithms, and the measurement of their usability in quantitative nephropathology. Parameters for data collection: Tissue samples were collected with a biopsy needle having an outer diameter between 100 μm and 300 μm. Afterwards, paraffin blocks were prepared using tissue sections of 4 μm and stained using periodic acid–Schiff (PAS). Then, images at 20x magnification were selected. Description of data collection: The tissue samples were scanned at 20x with a Leica Aperio ScanScope CS scanner. Data format: DATASET_A_DIB: raw data, original images in SVS format. DATASET_B_DIB: classified, detected glomeruli to be used for classification, in PNG format. The data is composed of two datasets: 1.) DATASET_A: Raw data with 31 whole slide images (WSI) in SVS format. The size of the WSI ranges between 21651x10498 and 49799x32359 pixels, acquired at 20x. The images contain different types of glomeruli that were detected using the algorithms explained in the following article [https://doi.org/10.1016/j.cmpb.2019.105273]. The detected glomeruli are provided in DATASET_B. 2.) DATASET_B: 2,340 images with a single glomerulus each: 1,170 normal glomeruli and 1,170 sclerosed glomeruli. All of them are in PNG format. Value of the Data:
· These data can be used for benchmarking to encourage further research on AI methods applied to digital pathology in nephrology.
· The additional value of these data is that they have been acquired and evaluated by expert pathologists from different European countries.
· All researchers in digital pathology can benefit from these data to test classification algorithms, particularly for glomeruli identification in nephrology studies.
· These data can be used for further development and new experiments in glomeruli classification with more classes, such as focal glomeruli besides normal and sclerotic glomeruli.
Data Types:
  • Dataset
  • File Set