- MQCT 2026: a program for calculations of inelastic scattering of two molecules (new version announcement)
  This is a revised and updated version of the package MQCT [Comput. Phys. Commun. 252 (2020) 107155; 294 (2024) 108938]. The package includes routines for the calculation of ro-vibrational state-to-state transition cross sections for molecule + molecule collisions, using the mixed quantum/classical theory (MQCT) approach. One major improvement in computational efficiency is implemented for the calculation of potential coupling matrix elements, which accelerates this part of the calculation by several orders of magnitude. Another important addition to the package is an efficient method that combines the coupled-states (CS) approximation with the adiabatic-trajectory (AT) approach, named AT-CS-MQCT, which enables calculations for larger and more complex molecules at higher collision energies. Corrections to the treatment of indistinguishable collision partners, such as CO + CO and H2O + H2O, include proper incorporation of exchange parity and statistical weight factors. A revised version of the user manual is provided.
- A computational toolkit for generating WSe2 grain boundaries: Tilted, mirror, and polycrystals
  Grain boundaries (GBs) are among the most common defects in 2D transition-metal dichalcogenides (TMDs), critically influencing their electronic, optical, mechanical, and catalytic properties. Understanding and controlling GBs is therefore essential for optimizing TMD performance in applications such as flexible electronics, optoelectronics, catalysis, and lubrication. Yet their structural diversity and atomic complexity make both experimental characterization and atomistic modeling highly challenging. Here, we introduce an experimentally verified computational toolkit for generating and analyzing GBs in 2D TMDs. We demonstrate its capabilities using tungsten diselenide (WSe2) as a model material and validate the GBs produced by the toolkit against experimentally observed atomic-resolution scanning transmission electron microscopy images of WSe2, hBN, and graphene. The toolkit enables the study of TMD materials to predict and tailor material properties through controlled defect generation. It not only advances our understanding of GB dynamics in TMDs but also supports the broader application of these materials in various technological fields.
- Symbolic evaluation of transfer matrices for the XXX model
  We aim to exploit the important fact that a spin chain nowadays has the status of a quantum integrable system: essentially, its transfer matrix encodes full information on the constants of motion, which provides an exact determination of all eigenstates of the system. Here we consider the specific case of the XXX 1/2 model, whose spherical symmetry, and thus relative numerical simplicity, admits a straightforward presentation of results. This case is also specific as a quantum integrable system since the eigenproblem of its transfer matrix exhibits multiplets of the total spin, where each highest-weight “ancestor” state is degenerate with all its “descendants”; the z-component of the total spin must therefore be added to the integrable constants of motion to form a complete system of commuting observables. We provide an efficient algorithm for evaluating all elements, A(λ), B(λ), C(λ), and D(λ), of the monodromy matrix M(λ) of the XXX 1/2 model, defined within the famous algebraic Bethe Ansatz (ABA) formalism. Accordingly, the transfer matrix T(λ) is presented explicitly as an N-th degree polynomial in the rapidity λ, whose coefficients are Hermitian operators (constants of motion) whose spectra determine each eigenstate of the system. In this way, all eigenstates of the model can be determined by an immediate diagonalization of the transfer matrix, that is, by N linear eigenproblems, instead of solving a cumbersome system of r nonlinear Bethe Ansatz equations, with N being the size of the chain and r the number of inverted spins in an eigenstate. We believe that exact results reached in this way for moderate sizes N will be helpful in quantum information processing and nanotechnology. The algorithm is based on the observation that the monodromy matrix is a sum of elementary operators of essentially single-particle nature, which we refer to as “gear racks”. We provide a complete combinatoric description of these objects, together with routines for determining the transfer matrix T(λ), some of its important submatrices, and the creation B(λ) and annihilation C(λ) operators of Bethe pseudoparticles.
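  The ABA objects this abstract names can be reproduced in a few lines for a tiny chain. The sketch below is not the paper's code: it builds the monodromy matrix M(λ) of an N = 3 XXX 1/2 chain as a 2×2 block matrix [[A, B], [C, D]] of dense operators from the standard Lax operators, forms T(λ) = A(λ) + D(λ), and checks the integrability statement that transfer matrices at different rapidities commute. The chain length and rapidity values are arbitrary choices for illustration.

```python
import numpy as np

N = 3                      # chain length (kept tiny so dense matrices suffice)
dim = 2 ** N
I2 = np.eye(2)
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # S^-

def embed(op, n):
    """Embed a single-site operator at site n of the N-site chain."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, op if k == n else I2)
    return out

def lax(lam, n):
    """Lax operator L_n(lam) as a 2x2 block matrix over the auxiliary space."""
    Sz, Sp, Sm = embed(sz, n), embed(sp, n), embed(sm, n)
    Id = np.eye(dim)
    return [[lam * Id + 1j * Sz, 1j * Sm],
            [1j * Sp,            lam * Id - 1j * Sz]]

def bmul(X, Y):
    """Multiply two 2x2 block matrices with operator-valued entries."""
    return [[X[i][0] @ Y[0][j] + X[i][1] @ Y[1][j] for j in range(2)]
            for i in range(2)]

def monodromy(lam):
    """M(lam) = L_N(lam) ... L_1(lam) = [[A, B], [C, D]]."""
    M = lax(lam, N - 1)
    for n in range(N - 2, -1, -1):
        M = bmul(M, lax(lam, n))
    return M

def transfer(lam):
    M = monodromy(lam)
    return M[0][0] + M[1][1]   # T(lam) = A(lam) + D(lam)

# Integrability check: transfer matrices at different rapidities commute.
T1, T2 = transfer(0.3), transfer(1.7)
comm_norm = np.linalg.norm(T1 @ T2 - T2 @ T1)
```

  Diagonalizing any one such T(λ), or the N polynomial coefficients jointly, then yields the eigenstates directly, which is the shortcut over the nonlinear Bethe equations that the abstract describes.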
- T3FF: Toroidal 3-dimensional full-f full-k code for isothermal gyrofluid drift-Alfvén turbulence
  T3FF is a gyrofluid code for the computation of isothermal three-dimensional electromagnetic edge turbulence in magnetized plasmas with consistent finite-Larmor-radius (FLR) treatment. The model allows for arbitrary fluctuation amplitudes (“full-f”) and includes second-order accurate FLR effects in the polarization (“full-k”). The presented code version employs a field-aligned flux-tube geometry for simplified circular toroidal geometry in the closed-flux-surface edge region of a tokamak plasma. The T3FF code is intended for basic physics studies with the most elementary implementation of a full-f three-dimensional electromagnetic drift-wave turbulence model with consistent FLR effects, and as a basis for later extension up to a thermal six-moment gyrofluid model.
- On the development of OpenFOAM solvers for simulating MHD micropolar fluid flows with or without the effect of micromagnetorotation
  Any micropolar fluid containing magnetic particles, such as blood and ferrofluids, under the influence of an applied magnetic field experiences a magnetic torque resulting from the misalignment between the magnetization of these particles and the magnetic field, a phenomenon called micromagnetorotation (MMR). Although critical in such fluids, MMR remains underexplored in blood flows, where erythrocyte magnetization is often neglected. To address this, two transient OpenFOAM solvers were developed: epotMicropolarFoam, for incompressible, laminar MHD micropolar flows, and epotMMRFoam, which extends the former by incorporating MMR. In epotMicropolarFoam, the PISO algorithm is used for pressure-velocity coupling, while the low-magnetic-Reynolds-number approximation is adopted for simulating the MHD phenomena. Micropolar effects are included by incorporating the microrotation–vorticity difference in the momentum equation and solving the internal angular momentum equation. The epotMMRFoam solver likewise uses the PISO algorithm and the low-magnetic-Reynolds-number approximation, with the MMR term included in the internal angular momentum equation; a constitutive magnetization equation is also solved. Validation against the analytical MHD micropolar Poiseuille flow showed excellent accuracy (error < 2%). Including MMR caused notable reductions in velocity (up to 40%) and microrotation (up to 99.9%), especially under strong magnetic fields and high hematocrit values. Without MMR, magnetic effects were minimal due to blood’s low electrical conductivity. Simulations of 3D MHD artery and 2D MHD aneurysm flows supported these results. In the aneurysm in particular, MMR suppressed the recirculation cores, highlighting its stabilizing and shear-dampening effects. The solvers show strong promise for biomedical applications such as magnetic hyperthermia and targeted drug delivery.
- QEDtool: A Python package for numerical quantum information in quantum electrodynamics
  This is the manual of the first version of QEDtool, an object-oriented Python package that performs numerical quantum electrodynamics calculations, with a focus on full state reconstruction in the internal degrees of freedom and on quantifying correlations and entanglement. The package rests on the evaluation of Feynman amplitudes in the momentum-helicity basis within a relativistic framework. Users can specify both pure and mixed initial scattering states in polarization space. From the specified initial state and Feynman amplitudes, QEDtool reconstructs the correlations that fully characterize the quantum polarization and entanglement of the final state. These quantities can be expressed in any inertial frame via arbitrary, built-in Lorentz transformations.
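  The reconstruction step described here has a simple generic skeleton, sketched below. This is not QEDtool's API: the amplitude matrix M in a two-particle helicity basis is invented purely for illustration (it scatters |++⟩ into a Bell state), and the entanglement measure shown is the standard Wootters concurrence rather than whatever quantifiers the package exposes. The final-state polarization density matrix is obtained as ρ_out ∝ M ρ_in M†.

```python
import numpy as np

def final_state(M, rho_in):
    """Final-state polarization density matrix from amplitude matrix M."""
    rho = M @ rho_in @ M.conj().T
    return rho / np.trace(rho).real

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    ev = np.sort(np.sqrt(np.clip(np.linalg.eigvals(R).real, 0, None)))[::-1]
    return max(0.0, ev[0] - ev[1] - ev[2] - ev[3])

# Invented amplitude: |++> scatters into (|++> + |-->)/sqrt(2).
M = np.zeros((4, 4), dtype=complex)
M[0, 0] = 1.0
M[3, 0] = 1.0

rho_in = np.zeros((4, 4), dtype=complex)
rho_in[0, 0] = 1.0                      # pure |++> initial state

rho_out = final_state(M, rho_in)
C = concurrence(rho_out)                # maximally entangled final state
```

  Mixed initial states are handled by the same formula, since ρ_in enters linearly.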
- COREFL: An open-source GPU-accelerated high-fidelity solver for compressible reactive flows on generalized curvilinear coordinates
  COmpressible REactive Flow soLver (COREFL) is an open-source computational fluid dynamics solver for high-fidelity simulations of compressible reactive flows. COREFL is written in C++/CUDA and parallelized with the Message Passing Interface (MPI) to achieve large-scale computations on modern high-performance computing architectures, represented by multi-CPU/GPU clusters. The solver solves the compressible Navier-Stokes equations coupled with species conservation equations in a finite difference framework on structured curvilinear coordinates. For scale-resolving simulations of compressible reactive flows, a hybrid of the seventh-order Weighted Essentially Non-Oscillatory (WENO) scheme and the linear upwind scheme is used to discretize the convective terms. Transport properties are evaluated with the mixture-averaged model to treat the viscous terms accurately, while time integration of detailed chemical kinetics is handled via a balanced splitting method to ensure both efficiency and robustness. Static polymorphism based on C++ template classes/functions provides a flexible way of extending the code with additional physical models without losing runtime performance or changing the code structure. Data structures and computational logic are optimized to fully exploit the power of modern GPUs. COREFL is validated on benchmark cases covering multiple scenarios; the solver proves capable of simulating compressible reactive flows efficiently and produces reliable results. Finally, a speedup of over 800 is achieved on an Nvidia A100 GPU compared to a CPU code developed by the authors in reactive cases with a kinetic mechanism of 9 species and 19 reactions.
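  The splitting of stiff chemistry from transport mentioned above rests on operator splitting. The sketch below is not COREFL's balanced splitting (which targets stiff detailed kinetics); it only illustrates the underlying idea on a linear toy problem u' = (A + B)u with a non-commuting "transport" part A and "reaction" part B, for which plain Strang splitting is second-order accurate: halving the step size should shrink the error by roughly a factor of four. All matrices and sizes are invented for illustration.

```python
import numpy as np

def mexp(M):
    """Matrix exponential via eigendecomposition (M real, diagonalizable)."""
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w)) @ np.linalg.inv(V)).real

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # skew part: "transport"
B = np.array([[-0.5, 0.0], [0.0, -1.0]])   # damping part: "chemistry"
u0 = np.array([1.0, 0.0])
T_end = 1.0

def strang(n):
    """Integrate u' = (A + B) u to T_end with n Strang steps."""
    dt = T_end / n
    step = mexp(A * dt / 2) @ mexp(B * dt) @ mexp(A * dt / 2)
    u = u0.copy()
    for _ in range(n):
        u = step @ u
    return u

exact = mexp((A + B) * T_end) @ u0
err_coarse = np.linalg.norm(strang(20) - exact)
err_fine = np.linalg.norm(strang(40) - exact)
ratio = err_coarse / err_fine              # ~4 for a second-order scheme
```

  In a reactive-flow solver each exponential sub-step is replaced by a PDE transport update and a stiff ODE integration of the chemistry, but the step structure is the same.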
- Zandpack: A general tool for time-dependent transport simulation of nanoelectronics
  The auxiliary-mode approach to time-dependent open quantum system calculations is implemented and refined to yield a feasible computational approach for simulating nanostructures far from equilibrium. This is achieved by a careful diagonalization of the electrode level-width function, and it provides an efficient approach that can simulate large, open systems at the level of time-dependent density functional theory. The approach, as given in this work, is implemented in the new open-source code Zandpack. The framework is applied to three systems perturbed by the same THz electromagnetic field pulse form: 1) a Hubbard model for hydrogen on graphene, used to calculate spin currents, mutual information, spin transitions, and a pump-probe setup; 2) an armchair graphene nanoribbon (AGNR) probed by a metal tip, showing electrons excited from the valence band of the AGNR into the tip via electron-electron interactions; 3) a gold break junction modeled with various gap distances, which displays behavior that deviates increasingly from the adiabatic case as the gap widens. In examples 2 and 3, we develop and use a general linearization scheme for time-dependent open system calculations, which utilizes the DFTB+ or SIESTA codes.
- DL_POLY 5: Calculation of system properties on the fly for very large systems via massive parallelism
  Modelling has become a third distinct line of scientific enquiry, alongside experiment and theory. Molecular dynamics (MD) simulations serve to interpret, predict and guide experiments and to test and develop theories. A major limiting factor of MD simulations is system size and, in particular, the difficulty of handling, storing and processing trajectories of very large systems. This limitation has become significant as the need to simulate systems on the order of billions of atoms and beyond has been steadily growing. Examples include interface phenomena, composite materials, biomaterials, melting, nucleation, atomic transport, adhesion, radiation damage and fracture. More generally, accessing new length and energy scales often brings qualitatively new science, but this has currently reached a bottleneck in MD simulations due to the traditional methods of storing and post-processing trajectory files. To address this challenge, we propose a new paradigm for running MD simulations: instead of storing and post-processing trajectory files, we calculate key system properties on the fly. Here, we discuss the implementation of this idea in the general-purpose MD code DL_POLY. We discuss code development, new capabilities and the calculation of these properties, including correlation functions, viscosity, thermal conductivity and elastic constants. We give examples of these on-the-fly calculations in very large systems. Our developments offer a new way to run MD simulations of large systems efficiently in the future.
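  The on-the-fly idea for correlation functions can be made concrete with a small sketch (not DL_POLY's implementation): instead of writing every frame to disk and post-processing, a short ring buffer of recent frames is kept and the correlation sums are accumulated as each new frame arrives, so memory stays proportional to the correlation window rather than the trajectory length. The window size and synthetic data below are arbitrary.

```python
import numpy as np
from collections import deque

class OnTheFlyVACF:
    """Velocity autocorrelation C(tau) accumulated as frames arrive."""

    def __init__(self, window):
        self.buf = deque(maxlen=window)       # only the last `window` frames
        self.sums = np.zeros(window)
        self.counts = np.zeros(window, dtype=int)

    def add_frame(self, v):
        self.buf.append(np.asarray(v, dtype=float))
        # Correlate the newest frame with every buffered frame (lag 0..W-1).
        for lag, past in enumerate(reversed(self.buf)):
            self.sums[lag] += float(np.dot(self.buf[-1], past))
            self.counts[lag] += 1

    def result(self):
        return self.sums / np.maximum(self.counts, 1)

# Feed a synthetic trajectory and compare with ordinary post-processing.
rng = np.random.default_rng(0)
vel = rng.normal(size=(60, 3))                # 60 frames, 3 components
acc = OnTheFlyVACF(window=8)
for v in vel:
    acc.add_frame(v)

direct = np.array([np.mean([vel[t] @ vel[t + lag] for t in range(60 - lag)])
                   for lag in range(8)])
```

  Both routes visit exactly the same frame pairs, so the accumulated result matches the post-processed one while never holding more than eight frames in memory.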
- PaScaL_TDMA 2.1: A register-resident multi-GPU tridiagonal matrix solver with optimized communication for large-scale CFD simulations
  We present PaScaL_TDMA 2.1, a GPU-oriented release of the PaScaL_TDMA library [3] for efficiently solving large batches of distributed tridiagonal systems on modern multi-GPU platforms. Building on the original CPU-based PaScaL_TDMA formulation and the shared-memory buffering strategy introduced in PaScaL_TDMA 2.0 [2], version 2.1 reformulates the core kernels and communication path to better match the GPU execution model. CUDA threads are mapped to contiguous tridiagonal lines to achieve coalesced global-memory access, and the elimination kernels are optimized to a fully register-resident implementation to reduce memory traffic and synchronization. To lower inter-GPU overhead, the reduced-system assembly is performed via a single consolidated MPI_Alltoall exchange, and the kernel interface is restructured to eliminate descriptor transfers at launch. Benchmarks on the NURION system show that PaScaL_TDMA 2.1 reduces wall time from 0.127 s on dual-socket Intel Skylake CPUs to 9.2 ms on an NVIDIA A100 and 6.1 ms on an H100, corresponding to speedups of 14.0× and 20.7×, respectively. Strong- and weak-scaling studies quantify the performance gains from the optimization stages and demonstrate sustained scalability on multi-GPU systems. Finally, PaScaL_TDMA 2.1 is integrated into an immersed-boundary LES solver and validated through large-scale CFD simulations, including an industrial-scale cleanroom configuration with up to 128 A100 GPUs and O(10^10) degrees of freedom.
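  The per-line work that such a solver maps onto CUDA threads is the classic Thomas algorithm applied to many independent tridiagonal lines in lockstep. The NumPy sketch below illustrates only that batched elimination (not PaScaL_TDMA's distributed reduced-system exchange); array sizes are arbitrary, and diagonal dominance is assumed since no pivoting is performed.

```python
import numpy as np

def batched_thomas(a, b, c, d):
    """Solve `batch` tridiagonal systems: a sub-, b main-, c super-diagonal.

    All inputs have shape (batch, n); a[:, 0] and c[:, -1] are unused.
    Assumes diagonal dominance (no pivoting is performed).
    """
    b, d = b.copy(), d.copy()
    batch, n = b.shape
    for i in range(1, n):                    # forward elimination
        w = a[:, i] / b[:, i - 1]
        b[:, i] -= w * c[:, i - 1]
        d[:, i] -= w * d[:, i - 1]
    x = np.empty_like(d)
    x[:, -1] = d[:, -1] / b[:, -1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[:, i] = (d[:, i] - c[:, i] * x[:, i + 1]) / b[:, i]
    return x

# Check against a dense solve on random diagonally dominant systems.
rng = np.random.default_rng(1)
batch, n = 5, 32
a = rng.normal(size=(batch, n))
c = rng.normal(size=(batch, n))
b = 4.0 + rng.random((batch, n))             # diagonally dominant
d = rng.normal(size=(batch, n))
x = batched_thomas(a, b, c, d)

dense = np.zeros((batch, n, n))
for k in range(batch):
    dense[k] = np.diag(b[k]) + np.diag(a[k, 1:], -1) + np.diag(c[k, :n-1], 1)
x_ref = np.linalg.solve(dense, d[..., None])[..., 0]
```

  The vectorized inner loops are the analogue of one CUDA thread per tridiagonal line: all systems advance through the same elimination step simultaneously, which is what makes coalesced memory access possible.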
