
Computer Physics Communications

ISSN: 0010-4655


Datasets associated with articles published in Computer Physics Communications

5923 results
  • Neutrinos from muon-rich ultra high energy electromagnetic cascades: The MUNHECA code
    An ultra high energy electromagnetic cascade, a purely leptonic process initiated by either photons or e^±, can be a source of high energy neutrinos. We present a public python3 code, MUNHECA, to compute the neutrino spectrum, taking into account various QED processes, with the cascade developing either along the propagation in the cosmic microwave background in the high-redshift universe or in a predefined photon background surrounding the astrophysical source. The user can adjust various settings of MUNHECA, including the spectrum of injected high energy photons, the background photon field and the QED processes governing the cascade evolution. We improve the modeling of several processes, provide examples of the execution of MUNHECA and compare it with earlier, more simplified estimates of the neutrino spectrum from electromagnetic cascades.
    • Dataset
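The redshift dependence of the background photon field referred to in the abstract can be illustrated with the blackbody number density spectrum of the CMB, n(ε) ∝ ε²/(exp(ε/T)−1) with T = T₀(1+z). A minimal sketch in Python of this generic physics (not MUNHECA's own API; T₀ ≈ 2.35×10⁻⁴ eV is the standard present-day CMB temperature):

```python
import numpy as np

# CMB photon number density spectrum in natural units,
# n(eps) = eps^2 / (pi^2 * (exp(eps/T) - 1)), with T = T0 * (1 + z).
# Generic textbook physics, not the MUNHECA code itself.
T0 = 2.35e-4  # CMB temperature today, in eV

def cmb_spectrum(eps_eV, z):
    """Photon number density per unit energy at redshift z."""
    T = T0 * (1.0 + z)
    return eps_eV**2 / (np.pi**2 * np.expm1(eps_eV / T))

eps = np.logspace(-6, -1, 200)   # photon energies in eV
n_z0 = cmb_spectrum(eps, 0.0)    # today
n_z10 = cmb_spectrum(eps, 10.0)  # high-redshift universe
```

At higher redshift the background is hotter, so the spectral peak shifts to higher photon energies, which is what makes cascades in the early universe develop differently.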
  • BIMBAMBUM: A potential flow solver for single cavitation bubble dynamics
    In the absence of analytical solutions for the dynamics of non-spherical cavitation bubbles, we have implemented a numerical simulation solver based on the boundary integral method (BIM) that models the behavior of a single bubble near an interface between two fluids. The density ratio between the two media can be adjusted to represent different types of boundaries, such as a rigid boundary or a free surface. The solver allows not only the computation of the dynamics of the bubble and the fluid-fluid interface, but also, in a secondary processing phase, the computation of the surrounding flow field quantities. We present here the detailed implementation of this solver and validate its capabilities using theoretical solutions, experimental observations, and results from other simulation software. The solver is called BIMBAMBUM, which stands for Boundary Integral Method for Bubble Analysis and Modeling in Bounded and Unbounded Media.
    • Dataset
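The theoretical solutions such a solver is typically validated against include the Rayleigh-Plesset equation for a spherical bubble in an unbounded liquid. A minimal sketch of that classical baseline (textbook physics, not BIMBAMBUM's API; all parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rayleigh-Plesset equation (no viscosity or surface tension):
#   R*R'' + 1.5*R'^2 = (p_gas(R) - p_inf) / rho
# with an adiabatic gas law inside the bubble. Illustrative parameters.
rho = 1000.0      # liquid density [kg/m^3]
p_inf = 101325.0  # ambient pressure [Pa]
p_g0 = 1.0e7      # initial gas pressure [Pa] (assumed overpressure)
R0 = 1.0e-4       # initial bubble radius [m]
gamma = 1.4       # polytropic exponent

def rhs(t, y):
    R, Rdot = y
    p_gas = p_g0 * (R0 / R) ** (3 * gamma)  # adiabatic compression law
    Rddot = ((p_gas - p_inf) / rho - 1.5 * Rdot**2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rhs, (0.0, 2.0e-5), [R0, 0.0], rtol=1e-9, atol=1e-12)
R_max = sol.y[0].max()  # the overpressurized bubble expands
```

Since the initial gas pressure exceeds the ambient pressure, the bubble grows; a BIM solver run with a spherically symmetric setup should reproduce this radius history.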
  • The polarimeter vector for τ → 3πν_τ decays
    The polarimeter vector of the τ represents an optimal observable for the measurement of the τ spin. In this paper we present an algorithm for the computation of the τ polarimeter vector for the decay channels τ^- → π^- π^+ π^- ν_τ and τ^- → π^- π^0 π^0 ν_τ. The algorithm is based on a model for the hadronic current in these decay channels, which was fitted to data recorded by the CLEO experiment [1].
    • Dataset
  • Evaluation of classical correlation functions from 2/3D images on CPU and GPU architectures: Introducing CorrelationFunctions.jl
    Correlation functions are becoming one of the major tools for quantifying structural information that is usually represented as 2D or 3D images. In this paper we introduce CorrelationFunctions.jl, an open-source package developed in Julia that computes all classical correlation functions from imaging input data. Input images may be binary or multi-phase. Our code can evaluate two-point probability S_2, phase cross-correlation ρ_ij, cluster C_2, lineal-path L_2, surface-surface F_ss, surface-void F_sv, pore-size P and chord-length p distribution functions on both CPU and GPU architectures. Where possible, we provide two types of computation: the full correlation map (correlations of each point with all other points in the image, which also yields ensemble-averaged CFs) and directional correlation functions (currently along the major orthogonal and diagonal directions). This implementation makes it possible, for the first time, to assemble a completely free solution for evaluating correlation functions under any operating system, with a well-documented application programming interface (API). Our package includes automatic tests against analytical solutions that are described in the paper. We measured execution times for all CPU and GPU implementations; as a rule of thumb, full correlation maps on GPU are faster than the other methods. However, full maps require more RAM and are thus limited by available memory. Directional CFs, on the other hand, are memory efficient and can be evaluated for huge datasets, making them the first candidates for structural data compression or feature extraction. The package is available through the Julia package ecosystem and on GitHub; the latter also contains documentation and additional helpful resources such as tutorials.
We believe that a single powerful computational tool such as CorrelationFunctions.jl will significantly facilitate the use of correlation functions in numerous areas of structural description and research of porous materials, as well as in machine learning applications. We also present examples for ceramic, soil composite and oil-bearing rock samples based on their 3D X-ray tomography and 2D scanning electron microscope images. Finally, we conclude with a discussion of possible ways to further improve the presented computational framework.
    • Dataset
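The full-correlation-map idea for the two-point probability S_2 can be sketched in a few lines using FFT-based autocorrelation with periodic boundaries. This is a generic Python illustration of the technique, not the Julia package's API:

```python
import numpy as np

# Two-point probability function S_2 for a binary (0/1) image via FFT
# autocorrelation under periodic boundary conditions. S_2 at shift r is
# the probability that two points separated by r both lie in the phase.
def s2_map(img):
    """Return the full ensemble-averaged correlation map S_2."""
    f = np.fft.fftn(img.astype(float))
    corr = np.fft.ifftn(f * np.conj(f)).real  # periodic autocorrelation
    return corr / img.size                    # normalize by pixel count

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.3).astype(int)  # ~30% phase fraction
s2 = s2_map(img)

phi = img.mean()  # S_2 at zero shift equals the phase volume fraction
```

For an uncorrelated random image, S_2 at nonzero shifts tends toward phi², while the zero-shift value is exactly phi; directional CFs correspond to reading this map along single axes, which is the memory-efficient variant mentioned above.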
  • Massively parallel implementation of iterative eigensolvers in large-scale plane-wave density functional theory
    The Kohn–Sham density functional theory (DFT) is a powerful method to describe the electronic structure of molecules and solids in condensed matter physics, computational chemistry and materials science. However, large and accurate plane-wave DFT calculations exhibit cubic-scaling computational complexity and are usually limited by expensive computation and communication costs. The rapid development of high performance computing (HPC) on leadership supercomputers brings new opportunities for plane-wave DFT calculations on large-scale systems. Here, we implement parallel iterative eigensolvers for large-scale plane-wave DFT calculations, including the Davidson, locally optimal block preconditioned conjugate gradient (LOBPCG), projected preconditioned conjugate gradient (PPCG) and Chebyshev subspace iteration (CheFSI) algorithms, and analyze their performance in massively parallel plane-wave computing tasks. We adopt a two-level parallelization strategy that combines the message passing interface (MPI) with open multi-processing (OpenMP) to handle data exchange and matrix operations in the construction and diagonalization of the large-scale plane-wave Hamiltonian matrix. Numerical results illustrate that these iterative eigensolvers scale up to 42,592 processing cores with a peak performance of 30% on leadership supercomputers when studying the electronic structure of bulk silicon systems containing 10,648 atoms.
    • Dataset
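On a single node, block iterative eigensolvers of the kind listed above are available in standard libraries. The following sketch runs SciPy's LOBPCG on a 1D Laplacian as a toy stand-in for a sparse Hamiltonian; the paper's MPI/OpenMP distribution and plane-wave specifics are not reproduced here:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# LOBPCG finds a block of extremal eigenpairs of a large sparse matrix
# using only matrix-vector products. Toy model: 1D discrete Laplacian.
n = 64
H = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))  # block of 4 random starting vectors
vals, vecs = lobpcg(H, X, largest=False, tol=1e-8, maxiter=1000)

# Analytic eigenvalues of the 1D Laplacian: 2 - 2*cos(k*pi/(n+1))
exact = 2.0 - 2.0 * np.cos(np.arange(1, 5) * np.pi / (n + 1))
```

In the DFT setting the matrix-vector product H·X is the expensive FFT-based application of the plane-wave Hamiltonian, which is exactly the operation the two-level MPI/OpenMP strategy distributes.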
  • micrOMEGAs 6.0: N-component dark matter
    micrOMEGAs is a numerical code to compute dark matter (DM) observables in generic extensions of the Standard Model (SM) of particle physics. We present a new version of micrOMEGAs that includes a generalization of the Boltzmann equations governing the DM cosmic abundance evolution, which can be solved to compute the relic density of N-component DM. The direct and indirect detection rates in such scenarios take into account the relative contribution of each component, so that constraints on the combined signal of all DM components can be imposed. The co-scattering mechanism for DM production is also included, and the routines used to compute the relic density of feebly interacting particles have been improved to take into account the effect of thermal masses of t-channel particles. Finally, the tables for the DM self-annihilation-induced photon spectra have been extended down to DM masses of 110 MeV, and they now include annihilation channels into light mesons.
    • Dataset
  • A SPIRED code for the reconstruction of spin distribution
    In Nuclear Magnetic Resonance (NMR), it is of crucial importance to have accurate knowledge of the spin probability distribution corresponding to inhomogeneities of the magnetic fields. An accurate identification of the sample distribution requires a set of experimental data that is sufficiently rich to extract all fundamental information. These data depend strongly on the control fields (and their number) used experimentally to perturb the spin system. In this work, we present and analyze a greedy reconstruction algorithm, and provide the corresponding SPIRED code, for the computation of a set of control functions allowing the generation of data that are appropriate for the accurate reconstruction of a sample probability distribution. In particular, the focus is on NMR and spin dynamics governed by the Bloch system with inhomogeneities in both the static and radio-frequency magnetic fields applied to the sample. We show numerically that the algorithm is able to reconstruct nontrivial joint probability distributions of the two inhomogeneous Hamiltonian parameters. A rigorous convergence analysis of the algorithm is also provided.
    • Dataset
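The underlying model, Bloch dynamics with inhomogeneities in the static (B0) and radio-frequency (B1) fields, reduces without relaxation to dM/dt = M × ω. A generic sketch with assumed scaled parameters, not the SPIRED code itself:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Relaxation-free Bloch equation dM/dt = M x omega for one spin, with
# the two inhomogeneity parameters the paper reconstructs: a static
# field offset `dw` (along z) and an RF scaling factor `b1` (along x).
def bloch_rhs(t, M, dw, b1, wx):
    omega = np.array([b1 * wx, 0.0, dw])  # control along x, offset on z
    return np.cross(M, omega)

dw, b1 = 0.0, 1.0     # a perfectly homogeneous, on-resonance spin
wx = np.pi / 2        # constant RF amplitude (scaled units)
M0 = [0.0, 0.0, 1.0]  # thermal equilibrium, magnetization along +z
sol = solve_ivp(bloch_rhs, (0.0, 1.0), M0, args=(dw, b1, wx),
                rtol=1e-10, atol=1e-12)
Mz_final = sol.y[2, -1]  # after a pi/2 rotation about x, M_z -> 0
```

Sweeping (dw, b1) over a grid and averaging the resulting magnetizations against a candidate distribution is the forward model the greedy algorithm inverts.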
  • PolyHoop: Soft particle and tissue dynamics with topological transitions
    We present PolyHoop, a lightweight standalone C++ implementation of a mechanical model to simulate the dynamics of soft particles and cellular tissues in two dimensions. With only a few geometrical and physical parameters, PolyHoop is capable of simulating a wide range of particulate soft matter systems: from biological cells and tissues to vesicles, bubbles, foams, emulsions, and other amorphous materials. The soft particles or cells are represented by continuously remodeling, non-convex, high-resolution polygons that can undergo growth, division, fusion, aggregation, and separation. With PolyHoop, a tissue or foam consisting of a million cells with high spatial resolution can be simulated on conventional laptop computers.
    • Dataset
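The polygon representation described above rests on elementary computational geometry: each cell's area and perimeter (shoelace formula and edge-length sum) are the quantities a polygon-based soft-particle model derives its pressure and tension forces from. A generic sketch in Python, not PolyHoop's C++ code:

```python
import numpy as np

# Shoelace area and perimeter of a polygonal "hoop" (cell boundary),
# the basic geometric inputs of a polygon-based soft-particle model.
def polygon_area(v):
    """Signed-area magnitude of a closed polygon with vertices v[i]."""
    x, y = v[:, 0], v[:, 1]
    return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) -
                        np.dot(y, np.roll(x, -1)))

def polygon_perimeter(v):
    """Sum of edge lengths, wrapping from the last vertex to the first."""
    return np.sum(np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1))

# High-resolution regular n-gon approximating a unit circle
n = 256
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
hoop = np.column_stack([np.cos(theta), np.sin(theta)])
area = polygon_area(hoop)        # close to pi
perim = polygon_perimeter(hoop)  # close to 2*pi
```

Growth and division then amount to rescaling a target area and splitting a vertex list in two, which is why a polygonal representation handles topological transitions naturally.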
  • Quadrature of functions with endpoint singular and generalised polynomial behaviour in computational physics
    Fast and accurate numerical integration has always been a bottleneck in high-performance computational physics, especially in large and multiscale industrial simulations involving Finite (FEM) and Boundary Element Methods (BEM). The computational demand escalates significantly in problems modelled by irregular or endpoint-singular behaviours, which can be approximated with generalised polynomials of real degree. This is due both to the practical limitations of finite-precision arithmetic and to the inefficient sample distribution of traditional Gaussian quadrature rules. We developed non-iterative mathematical software implementing an innovative numerical quadrature that largely enhances the precision of Gauss-Legendre (G-L) formulae for integrands modelled as generalised polynomials, using the optimal number of nodes and weights needed to guarantee the required numerical precision. This methodology avoids resorting to more computationally expensive techniques such as adaptive or composite quadrature rules. From a theoretical point of view, the numerical method underlying this work was preliminarily presented in [1], where the monomial transformation was constructed and all the conditions necessary to ensure the numerical stability and exactness of the quadrature up to machine precision were provided. The novel contribution of this work concerns the optimal implementation of said method, the extension of its applicability at run-time to different types of inputs, and additional insights into its functionality and its straightforward integration, in particular into FEM applications and other mathematical software, either as an external tool or as an embedded suite. The open-source, cross-platform C++ library Monomial Transformation Quadrature Rule (MTQR) has been designed to be highly portable, fast and easy to integrate into larger codebases.
Numerical examples in multiple physical applications showcase the improved efficiency and accuracy compared to traditional schemes.
    • Dataset
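The core idea, a monomial map x = t^q that turns an endpoint-singular generalised polynomial into an ordinary polynomial that Gauss-Legendre integrates exactly, can be sketched in a few lines of Python (a generic illustration of the transformation, not MTQR's C++ interface):

```python
import numpy as np

# Gauss-Legendre quadrature on [0,1] combined with the monomial map
# x = t^q, dx = q t^(q-1) dt. For f(x) = x^(-1/2) the choice q = 2
# makes the transformed integrand constant, so G-L is exact.
def gl_monomial(f, q, n):
    """Integrate f over [0,1] with n G-L nodes after the map x = t^q."""
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)   # map nodes from [-1,1] to [0,1]
    w = 0.5 * w           # rescale weights accordingly
    x = t**q
    return np.sum(w * f(x) * q * t**(q - 1))  # Jacobian of the map

f = lambda x: 1.0 / np.sqrt(x)  # exact integral over [0,1] is 2
plain = gl_monomial(f, 1, 10)   # plain G-L: poor near the singularity
mapped = gl_monomial(f, 2, 10)  # x = t^2 removes the singularity
```

With only 10 nodes the mapped rule reaches machine precision while plain G-L retains a large error, which is the precision gain the library delivers without adaptive or composite rules.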
  • ERSN-OpenMC-Py: A python-based open-source software for OpenMC Monte Carlo code
    A graphical user interface is a key element in facilitating the use of complex simulation software. This project describes the development of a graphical user interface called “ERSN-OpenMC-Py” for the existing neutron simulation code OpenMC. The main goal is to make simulation accessible to a wider audience by providing a user-friendly and intuitive interface. The development process is described in detail, including interface design, implementation, and integration with the OpenMC simulation code. The development tools used, such as Python3 and PyQt5, are also explained. The interface allows the user to control simulation parameters and interact with simulation results. Key features include visualization of simulation results, modification of simulation parameters, saving and loading of simulation configurations, and management of output files. The end result is a functional interface that lets users easily visualize simulation results and control simulation parameters in an intuitive manner. It also provides a better experience for non-programming experts who wish to use the simulation code for their own projects.
    • Dataset