
Computer Physics Communications

ISSN: 0010-4655


Datasets associated with articles published in Computer Physics Communications

4334 results
  • Corrections to binary enthalpies and elemental data in HEAPS software for reliable high-entropy alloy design
    We report critical corrections to the foundational data_base.m file used in the HEAPS software (P. Martin et al., Comput. Phys. Commun. 278 (2022) 108398) for high-entropy alloy design. Our updates rectify erroneous transcriptions of binary mixing enthalpies (ΔH^m_ij) and correct systematic misalignments in elemental property values, where data were inadvertently shifted, affecting approximately ten elements. These essential corrections preserve HEAPS’ original functionality while significantly enhancing the accuracy and reliability of its alloy-design predictions. The validated datasets are openly provided for direct implementation by users.
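A transcription error of the kind corrected here can often be caught mechanically: binary mixing enthalpies satisfy ΔH^m_ij = ΔH^m_ji, so a symmetry check over the table flags mistyped entries. A minimal sketch, assuming a hypothetical nested-list layout for the enthalpy table (not the actual data_base.m format):

```python
# Hypothetical layout: DH[i][j] holds the binary mixing enthalpy of the
# element pair (i, j). The table must be symmetric, so any asymmetric
# entry is a likely transcription error.

def find_asymmetries(dh, tol=1e-9):
    """Return index pairs (i, j) where dh[i][j] != dh[j][i]."""
    n = len(dh)
    bad = []
    for i in range(n):
        for j in range(i + 1, n):
            if abs(dh[i][j] - dh[j][i]) > tol:
                bad.append((i, j))
    return bad

# Toy table with one deliberately mistyped entry at (1, 2):
dh = [
    [ 0.0, -4.0,  2.0],
    [-4.0,  0.0, -7.0],
    [ 2.0, -9.0,  0.0],   # -9.0 should be -7.0
]
print(find_asymmetries(dh))  # [(1, 2)]
```

The same pass can be extended to range checks against the original literature values, which is how shifted columns of elemental data would surface.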
  • MFC 5.0: An exascale many-physics flow solver
    Many problems of interest in engineering, medicine, and the fundamental sciences rely on high-fidelity flow simulation, making performant computational fluid dynamics solvers a mainstay of the open-source software community. The previous version, MFC 3.0, was published as a documented, open-source solver (Bryngelson et al., Comput. Phys. Commun. (2021)) with numerous physical features, numerical methods, and scalable infrastructure. MFC 5.0 is a significant update, featuring a broad set of well-established and novel physical models and numerical methods, as well as the introduction of GPU and APU (or superchip) acceleration. We exhibit state-of-the-art performance and ideal scaling on the first two exascale supercomputers, OLCF Frontier and LLNL El Capitan. Combined with MFC’s single-accelerator performance, MFC achieves exascale computation in practice, and it performed the largest public CFD simulation to date, at 200 trillion grid points, as a 2025 ACM Gordon Bell Prize finalist. New physical features include the immersed boundary method, N-fluid phase change, Euler–Euler and Euler–Lagrange sub-grid bubble models, fluid–structure interaction, hypo- and hyper-elastic materials, chemically reacting flow, two-material surface tension, magnetohydrodynamics (MHD), and more. Numerical techniques now represent the state of the art, including general relaxation characteristic boundary conditions, WENO variants, Strang splitting for stiff sub-grid flow features, and low-Mach-number treatments. Weak scaling to tens of thousands of GPUs on OLCF Summit and Frontier and LLNL El Capitan achieves efficiencies within 5% of ideal up to over 90% of the respective system sizes. Strong scaling results for a 16-fold increase in device count show parallel efficiencies over 90% on OLCF Frontier.
MFC’s software stack has undergone further improvements, including continuous integration, which ensures code resilience and correctness through over 300 regression tests; metaprogramming, which reduces code length while maintaining performance portability; and code generation for computing chemical reactions.
  • DTLreactingFoam: An efficient CFD tool for laminar reacting flow simulations using detailed chemistry and transport with time-correlated thermophysical properties
    The official OpenFOAM distributions are currently not well-suited for accurate simulations of laminar reacting flows, primarily due to the restrictive Sutherland transport model and the oversimplified unity Lewis number assumption. These limitations can be addressed by employing a detailed transport model (DTM) grounded in kinetic gas theory. However, this approach significantly increases computational cost. To resolve this trade-off, we present a newly developed framework, DTLreactingFoam, designed for simulating laminar flames with integrated detailed transport and chemical kinetics while maintaining computational efficiency. The first level of cost reduction is achieved by incorporating a polynomial-fit transport model (FTM). Further acceleration is provided by a time-correlated thermophysical property evaluation (coTHERM) method, which dynamically updates properties at each time step or iteration by exploiting their temporal correlations. The framework is validated through a series of canonical laminar flame simulations. The results show excellent agreement with experimental measurements and benchmark software, confirming the accurate implementation of both the DTM and FTM. Moreover, validation results demonstrate that coupling the coTHERM method with either the DTM or FTM enables high-fidelity laminar flame simulations with substantially reduced computational cost. Notably, using the coTHERM method in conjunction with the FTM achieves up to a 77% reduction in computational time compared to the direct use of the DTM, without compromising accuracy.
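The idea behind time-correlated property evaluation can be sketched in a few lines: if the local temperature has changed little since the last full evaluation, the cached property value is reused, and the expensive fit is only re-evaluated on a significant change. The class, fit, and threshold below are illustrative assumptions, not the DTLreactingFoam implementation:

```python
# Hedged sketch of a time-correlated property cache: re-evaluate the
# (expensive) transport fit only when temperature moves by more than a
# relative tolerance since the last full evaluation.

def viscosity_fit(T):
    # Stand-in for a detailed/polynomial-fit transport evaluation
    # (Sutherland-like form used purely as a placeholder).
    return 1.716e-5 * (T / 273.15) ** 1.5 * (273.15 + 110.4) / (T + 110.4)

class CorrelatedProperty:
    def __init__(self, fit, rel_tol=1e-3):
        self.fit, self.rel_tol = fit, rel_tol
        self.T_last, self.value, self.evals = None, None, 0

    def __call__(self, T):
        changed = (self.T_last is None
                   or abs(T - self.T_last) / self.T_last > self.rel_tol)
        if changed:
            self.value = self.fit(T)   # full (expensive) evaluation
            self.T_last = T
            self.evals += 1
        return self.value              # cached value otherwise

mu = CorrelatedProperty(viscosity_fit)
for T in [300.0, 300.1, 300.2, 350.0]:
    mu(T)
print(mu.evals)  # 2: only the first call and the large jump re-evaluate
```

The reported 77% cost reduction corresponds to the regime where most cells evolve slowly between time steps, so the cached branch dominates.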
  • plasmonX: An open-source code for nanoplasmonics
    We present the first public release of plasmonX, a novel open-source code for simulating the plasmonic response of complex nanostructures. The code supports both fully atomistic and implicit descriptions of nanomaterials. In particular, it employs the frequency-dependent fluctuating charges (ωFQ) and dipoles (ωFQFμ) models to describe the response properties of atomistic structures, including simple and d-metals, graphene-based structures, and multi-metal nanostructures. For implicit representations, the Boundary Element Method is implemented in both the dielectric polarizable continuum model (DPCM) and integral equation formalism (IEF-PCM) variants. The distribution also includes a post-processing module that enables analysis of electric field-induced properties such as charge density and electric field patterns.
  • Kira 3: Integral reduction with efficient seeding and optimized equation selection
    We present version 3 of Kira, a Feynman integral reduction program for high-precision calculations in quantum field theory and gravitational-wave physics. Building on previous versions, Kira 3 introduces optimized seeding and equation selection algorithms, significantly improving performance for multi-loop and multi-scale problems. New features include convenient numerical sampling, symbolic integration-by-parts reductions, and support for user-defined additional relations. We demonstrate its capabilities through benchmarks on two- and three-loop topologies, showcasing up to two orders of magnitude improvement over Kira 2.3. Kira 3 is publicly available and poised to support ambitious projects in particle physics and beyond.
  • HepLib: a C++ library for high energy physics (version 1.2)
    Version 1.2 of HepLib (a C++ library for computations in high-energy physics) is presented. HepLib builds on top of other well-established libraries and programs, including GiNaC, FLINT, FORM, and FIRE; its first version was released in Comput. Phys. Commun. 265, 107982 (2021). This minor upgrade updates the internally used libraries and programs to their latest versions, fixes several bugs, improves performance across many functions, and introduces many new features. We also carry out experimental tests on the program FIRE, employing FLINT to enhance its performance with multivariate polynomials in integration-by-parts (IBP) reduction.
  • Chromo: A high-performance Python interface to hadronic event generators for collider and cosmic-ray simulations
    Simulations of hadronic and nuclear interactions are essential in both collider and astroparticle physics. The Chromo package provides a unified Python interface to multiple widely used hadronic event generators, including EPOS, DPMJet, Sibyll, QGSJet, and Pythia. Built on top of their original Fortran and C++ implementations, Chromo offers a zero-overhead abstraction layer suitable for use in Python scripts, Jupyter notebooks, or from the command line, while preserving the performance of direct calls to the generators. It is easy to install via precompiled binary wheels distributed through PyPI, and it integrates well with the Scientific Python ecosystem. Chromo supports event export in HepMC, ROOT, and SVG formats and provides a consistent interface for inspecting, filtering, and modifying particle collision events. This paper describes the architecture, typical use cases, and performance characteristics of Chromo and its role in contemporary astroparticle simulations, such as in the MCEq cascade solver.
  • ACFlow 2.0: An open source toolkit for analytic continuation of quantum Monte Carlo data
    Analytic continuation is an essential step in quantum Monte Carlo calculations. We present version 2.0 of the ACFlow package, a full-fledged open source toolkit for analytic continuation of quantum Monte Carlo simulation data. The new version adds support for three recently developed analytic continuation methods, namely the barycentric rational function approximation method, the stochastic pole expansion method, and the Nevanlinna analytical continuation method. The well-established maximum entropy method is also enhanced with the Bayesian reconstruction entropy algorithm. Furthermore, a web-based graphical user interface and a testing toolkit for analytic continuation methods are introduced. In this paper, we first summarize the basic principles of the newly implemented analytic continuation solvers and the most important improvements in ACFlow 2.0. A representative example is then provided to demonstrate the new usage and features.
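As a hedged illustration of the barycentric rational form underlying one of the newly added solvers: given support points z_k with values f_k and weights w_k, the interpolant r(z) = Σ_k w_k f_k/(z − z_k) / Σ_k w_k/(z − z_k) matches f at every z_k and can then be evaluated away from the data (e.g. continuing from Matsubara points toward the real axis). The function name and the weight choice below are ours, not ACFlow's API; its solver computes the weights by its own fitting procedure.

```python
def bary_eval(z, nodes, values, weights):
    """Evaluate the barycentric rational interpolant at z (z not a node)."""
    num = sum(w * f / (z - zk) for zk, f, w in zip(nodes, values, weights))
    den = sum(w / (z - zk) for zk, w in zip(nodes, weights))
    return num / den

# With nodes (0, 1), values (0, 1), and weights (1, -1) the interpolant
# collapses exactly to r(z) = z, so evaluation anywhere reproduces the line:
print(bary_eval(0.5, [0.0, 1.0], [0.0, 1.0], [1.0, -1.0]))  # 0.5
```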
  • Object-oriented programming as a tool for constructing high-order quantum-kinetic BBGKY equations
    Theoretical methods based on the density matrix provide powerful tools for describing open quantum systems. However, such methods are too complicated and intricate to apply analytically. Here we present an object-oriented framework for constructing the equation of motion of the correlation matrix at a given order within the quantum BBGKY hierarchy, which is widely used to describe interacting many-particle systems. The machine-derivation algorithm implements the principles of quantum mechanics and operator algebra, and is based on classes defined in the Python programming environment. Class objects correspond to the elements of the derived equations: the density matrix, the correlation matrix, energy operators, the commutator, and several operator-indexing systems. The program contains a special class that allows one to define a statistical ensemble with an infinite number of subsystems. For all classes, methods implementing the actions of the operator algebra are specified. The number of subsystems of the statistical ensemble and the types of subsystems between which pairwise interactions are possible are specified as input parameters. It is shown that this framework can derive the equations of motion of the fourth-order correlation matrix in less than one minute.
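The object-oriented idea can be sketched compactly (these are illustrative classes, not the authors' framework): operators are represented symbolically, and a commutator expands [A, B] = AB − BA into signed product terms, the basic step in deriving equations of motion.

```python
# Minimal sketch: symbolic operators and commutator expansion.

class Op:
    """A named symbolic operator, e.g. a Hamiltonian or a density matrix."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

class Term:
    """A signed, ordered product of operators, e.g. -rho H."""
    def __init__(self, sign, factors):
        self.sign, self.factors = sign, list(factors)
    def __repr__(self):
        return ("+" if self.sign > 0 else "-") + "".join(f.name for f in self.factors)

def commutator(a, b):
    """Expand [a, b] = ab - ba into a list of signed Terms."""
    return [Term(+1, [a, b]), Term(-1, [b, a])]

H, rho = Op("H"), Op("rho")
print(commutator(H, rho))  # [+Hrho, -rhoH]
```

In a full framework, recursive application of such expansions to the hierarchy, together with index bookkeeping for the subsystems, is what makes the fourth-order derivation mechanical.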
  • SUperman: Efficient permanent computation on GPUs
    The permanent is a function, defined for a square matrix, with applications in various domains including quantum computing, statistical physics, complexity theory, combinatorics, and graph theory. Its formula is similar to that of the determinant; however, unlike the determinant, its exact computation is #P-complete, i.e., there is no polynomial-time algorithm to compute the permanent unless P = NP. For an n × n matrix, the fastest algorithm has a time complexity of O(2^(n-1) n). Although supercomputers have been employed for permanent computation before, there is no prior work and, more importantly, no publicly available software that leverages cutting-edge high-performance computing accelerators such as GPUs. In this work, we design, develop, and investigate the performance of SUperman, a complete software suite that can compute matrix permanents on multiple nodes/GPUs of a cluster while handling various matrix types, e.g., real/complex/binary and sparse/dense, with a dedicated treatment for each type. Running on a single NVIDIA A100 GPU, SUperman is up to 86× faster than a state-of-the-art parallel algorithm on 44 Intel Xeon cores at 2.10 GHz. Leveraging 192 GPUs, SUperman computes the permanent of a 62 × 62 matrix in 1.63 days, the largest permanent computation reported to date.
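The O(2^(n-1) n) bound quoted above is achieved by Ryser's inclusion-exclusion formula evaluated with Gray-code updates. A plain-Python sketch of the formula itself (without the Gray-code optimization, so this version costs an extra factor):

```python
from itertools import combinations

def permanent_ryser(a):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column
    subsets S of (-1)^|S| * prod_i sum_{j in S} a[i][j]."""
    n = len(a)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# The permanent of the 3x3 all-ones matrix counts the permutations of
# three items, i.e. 3! = 6.
print(permanent_ryser([[1, 1, 1]] * 3))  # 6
```

The exponential subset enumeration is what makes large instances, such as the 62 × 62 case above, a genuine supercomputing workload.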