Implementing a neural network interatomic model with performance portability for emerging exascale architectures

Published: 22 September 2021 | Version 1 | DOI: 10.17632/x948kyy7jh.1
Contributors:

Description

The two main thrusts of computational science are increasingly accurate predictions and faster calculations; to this end, the zeitgeist in molecular dynamics (MD) simulations is the pursuit of machine-learned, data-driven interatomic models, e.g. neural network potentials, and novel hardware architectures, e.g. GPUs. Current implementations of neural network potentials are orders of magnitude slower than traditional interatomic models, and while looming exascale computing offers the ability to run large, accurate simulations with these models, achieving portable performance for MD on new and varied exascale hardware requires rethinking traditional algorithms, adopting novel data structures, and leveraging library solutions. We re-implement a neural network interatomic model in CabanaMD, an MD proxy application built on libraries developed for performance portability. Our implementation shows significantly improved thread scaling in this complex kernel compared to a current LAMMPS implementation, in both strong- and weak-scaling regimes. Our single-source solution enables simulations of up to 20 million atoms on a single CPU node and 4 million atoms, with improved performance, on a single GPU. We also explore choices of parallelism and data layout (using flexible data structures called AoSoAs) and their effect on performance, seeing improvements of up to ∼50% and ∼5% on a GPU from choosing the right level of parallelism and the right data layout, respectively.
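For context, CabanaMD is built on the Cabana particle toolkit and the Kokkos programming model. The sketch below is illustrative only, not code from this dataset: the per-atom fields, the vector length of 8, and the atom count are all assumptions. It shows how an AoSoA ties together the two tuning axes named above: the compile-time vector length fixes the tile width of the data layout, and a two-dimensional SIMD loop exposes both the over-tiles and within-tile levels of parallelism.

#include <Cabana_Core.hpp>
#include <Kokkos_Core.hpp>

int main( int argc, char* argv[] )
{
    Kokkos::ScopeGuard scope_guard( argc, argv );

    // Assumed per-atom fields: position, force, species id. The inner
    // vector length (8 here, an illustrative choice) sets the AoSoA tile
    // width and is the data-layout knob referred to in the abstract.
    using MemberTypes = Cabana::MemberTypes<double[3], double[3], int>;
    const int vector_length = 8;
    using ExecSpace = Kokkos::DefaultExecutionSpace;
    using DeviceType = Kokkos::Device<ExecSpace, ExecSpace::memory_space>;

    const int num_atoms = 1000; // illustrative problem size
    Cabana::AoSoA<MemberTypes, DeviceType, vector_length> atoms( "atoms",
                                                                 num_atoms );

    // A slice gives an array-like view of one member across all tiles.
    auto f = Cabana::slice<1>( atoms );

    // The 2D SIMD policy exposes both levels of parallelism: the outer
    // index s (struct/tile) and the inner index a (lane within the tile).
    Cabana::SimdPolicy<vector_length, ExecSpace> policy( 0, num_atoms );
    Cabana::simd_parallel_for(
        policy,
        KOKKOS_LAMBDA( const int s, const int a ) {
            for ( int d = 0; d < 3; ++d )
                f.access( s, a, d ) = 0.0; // e.g. zero forces before a step
        },
        "zero_forces" );
    Kokkos::fence();

    return 0;
}

Because the vector length is a template parameter, varying it sweeps the layout from AoS-like (a tile of one atom) to SoA-like (one large tile), which is the data-layout axis whose ∼5% GPU effect the abstract reports.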

Categories

Condensed Matter Physics, Computational Physics, Molecular Dynamics, Neural Network
