Codes interfaced with PLUMED
PLUMED can be incorporated into an MD code and used to analyze or bias a molecular dynamics run on the fly. Some MD codes already include calls to the PLUMED library and are PLUMED-ready in their original distribution. As far as we know, the following MD codes can be used with PLUMED out of the box (a minimal example of running one of them with a PLUMED input file is shown after the list):
- Amber, pmemd module, since version 20.
- AmberTools, sander module, since version 15.
- CP2K, since Feb 2015.
- ESPResSo, in a PLUMED-patched version that can be found here.
- PINY-MD, in its plumed branch.
- IPHIGENIE.
- AceMD, see this link.
- OpenMM, using the openmm-plumed plugin.
- DL_POLY4.
- VNL-ATK, see this link.
- ABIN.
- i-pi.
- LAMMPS since Nov 2018.
- Yaff, since Jul 2019.
- DFTB+, since release 20.1.
- Metalwalls.
- ASE.
- GPUMD.
- GROMACS as of version 2025, with limited support (no replica exchange, no ENERGY collective variable, no lambda dynamics).
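As a minimal sketch of what "out of the box" use looks like (assuming a PLUMED-enabled GROMACS build in which mdrun accepts the -plumed option; the atom indices and restraint values are only illustrative), you write a short PLUMED input and pass it to the MD code at runtime:
cat > plumed.dat << 'EOF'
# monitor the distance between atoms 1 and 10 and restrain it at 0.5 nm
d: DISTANCE ATOMS=1,10
RESTRAINT ARG=d AT=0.5 KAPPA=200.0
PRINT ARG=d FILE=COLVAR STRIDE=100
EOF
gmx_mpi mdrun -plumed plumed.dat
Other PLUMED-ready codes expose the PLUMED input file through their own input options; refer to the documentation of each code for the exact syntax.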
We also provide patches for NAMD and Quantum ESPRESSO, as well as a patch for GROMACS that you should use in place of the native implementation if you want to do replica exchange, use the ENERGY collective variable, or do lambda dynamics.
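As a sketch of how the patching procedure works (the engine name and version below are only an example and depend on your PLUMED release), the patch is applied from the root of the MD code source tree with the plumed patch utility before compiling the code:
plumed patch -l                      # list the engines supported by this PLUMED version
plumed patch -p -e gromacs-2022.5    # apply the patch, then configure and compile the MD code as usual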
If you maintain another MD code that is PLUMED-ready let us know and we will add it to this list.
The status of the interface between some of these MD codes and PLUMED is tested here. You can find examples of how we build the interfaces between PLUMED and these codes on that site. However, you will in general need to refer to the documentation of the MD code to learn how to use it with the latest PLUMED release.
PLUMED can also be used as a tool for post-processing the results from molecular dynamics or enhanced sampling calculations (see the driver example after the list below). Notice that PLUMED can be used as an analysis tool from the following packages:
- PLUMED-GUI is a VMD plugin that computes PLUMED collective variables.
- HTMD can use PLUMED collective variables for analysis.
- OpenPathSampling, using the PLUMED Wrapper for OpenPathSampling.
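Independently of these packages, the plumed driver command-line tool can post-process an existing trajectory with the same input syntax used at runtime. A minimal sketch, assuming a GROMACS .xtc trajectory and illustrative atom indices:
cat > plumed.dat << 'EOF'
# recompute a distance along the trajectory and write it to the COLVAR file
d: DISTANCE ATOMS=1,10
PRINT ARG=d FILE=COLVAR STRIDE=1
EOF
plumed driver --plumed plumed.dat --mf_xtc traj.xtc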
GROMACS and PLUMED with GPU
Since version 4.6.x, GROMACS can run in a hybrid mode making use of both your CPU and your GPU (using either CUDA or OpenCL for newer versions of GROMACS). The calculation of the short-range non-bonded interactions is performed on the GPU, while long-range and bonded interactions are calculated at the same time on the CPU. By varying the cut-off for short-range interactions, GROMACS can optimize the balance between GPU and CPU load and achieve impressive performance.
GROMACS patched with PLUMED takes PLUMED into account in its load balancing, adding the PLUMED timings to those resulting from bonded and long-range interactions. This means that the CPU/GPU balance will be optimized automatically to take PLUMED into account!
It is important to notice that the optimal setup for GROMACS alone on the GPU and for GROMACS + PLUMED can be different: try changing the number of MPI/OpenMP processes (\ref Openmp) used by GROMACS and PLUMED to find the optimal performance. Remember that in GROMACS multiple MPI ranks can use the same GPU.
For example, if you have 4 cores and 2 GPUs you can:
- use 2 MPI/2GPU/2OPENMP:
export PLUMED_NUM_THREADS=2
mpiexec -np 2 gmx_mpi mdrun -nb gpu -ntomp 2 -pin on -gpu_id 01
- use 4 MPI/2GPU:
export PLUMED_NUM_THREADS=1
mpiexec -np 4 gmx_mpi mdrun -nb gpu -ntomp 1 -pin on -gpu_id 0011
Note that since PLUMED 2.5 and GROMACS 2018.3 the number of OpenMP threads can be set automatically by GROMACS (so PLUMED_NUM_THREADS is not needed, and the number of OpenMP threads used by PLUMED is set by -ntomp):
mpiexec -np 2 gmx_mpi mdrun -nb gpu -ntomp 2 -pin on -gpu_id 01
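In all of the commands above, a patched gmx_mpi also accepts the PLUMED input file through the -plumed option (plumed.dat is just a placeholder name here), e.g.:
mpiexec -np 2 gmx_mpi mdrun -plumed plumed.dat -nb gpu -ntomp 2 -pin on -gpu_id 01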