HISTOGRAM
This is part of the analysis module

Accumulate the average probability density along a few CVs from a trajectory.

When using this method it is assumed that you have some collective variable, \(\zeta\), that gives a reasonable description of some physical or chemical phenomenon. As an example of what we mean by this, suppose you wish to examine the following SN2 reaction:

\[ \textrm{OH}^- + \textrm{CH}_3\textrm{Cl} \rightarrow \textrm{CH}_3\textrm{OH} + \textrm{Cl}^- \]

The distance between the chlorine atom and the carbon atom is an excellent collective variable, \(\zeta\), in this case: it is short for the reactant, \(\textrm{CH}_3\textrm{Cl}\), where the carbon and chlorine are chemically bonded, and long for the product state, where these two atoms are no longer bonded. We might therefore want to accumulate the probability density, \(P(\zeta)\), as a function of this distance, as this will provide us with information about the overall likelihood of the reaction. Furthermore, the free energy, \(F(\zeta)\), is related to this probability density via:

\[ F(\zeta) = - k_B T \ln P(\zeta) \]

Accumulating these probability densities is precisely what this Action can be used to do. The accumulated histogram can then be converted to a free energy using the CONVERT_TO_FES action.
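As a minimal sketch of how these pieces fit together for the SN2 example above, the input below accumulates a histogram of a carbon-chlorine distance and converts it to a free energy. The atom indices, grid range, bandwidth and temperature used here are placeholders that would need to be adapted to your own system.

# carbon-chlorine distance (placeholder atom indices)
DISTANCE ATOMS=1,2 LABEL=d1
# accumulate the probability density P(d1) on a grid
HISTOGRAM ...
  ARG=d1
  GRID_MIN=0.1
  GRID_MAX=0.8
  GRID_BIN=200
  BANDWIDTH=0.01
  LABEL=hh
... HISTOGRAM

# convert the histogram to a free energy and write it to a file
CONVERT_TO_FES GRID=hh TEMP=300 LABEL=fes
DUMPGRID GRID=fes FILE=fes.dat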

We calculate histograms within PLUMED using a method known as kernel density estimation, which you can read more about here:

https://en.wikipedia.org/wiki/Kernel_density_estimation

In PLUMED the value of \(\zeta\) at each discrete instant in time, \(t'\), in the trajectory is accumulated. A kernel, \(K(\zeta-\zeta(t'),\sigma)\), centered at the value, \(\zeta(t')\), of this quantity at that time is generated with a bandwidth \(\sigma\), which is set by the user. These kernels are then used to accumulate the ensemble average for the probability density:

\[ \langle P(\zeta) \rangle = \frac{ \sum_{t'=0}^t w(t') K(\zeta-\zeta(t'),\sigma) }{ \sum_{t'=0}^t w(t') } \]

Here the sums run over a portion of the trajectory specified by the user. The final quantity evaluated is a weighted average, as the weights, \(w(t')\), allow us to remove the effect that any bias might have had on the region of phase space sampled by the system. This is discussed in the section of the manual on Analysis.
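As an illustration of where these weights come from in practice, the following input is a minimal sketch of a reweighted histogram for a simulation biased by a harmonic restraint. The torsion atoms, restraint parameters, grid settings and temperature are placeholders. The REWEIGHT_BIAS action calculates log weights equal to the instantaneous bias divided by \(k_B T\), and these are passed to HISTOGRAM through the LOGWEIGHTS keyword.

TORSION ATOMS=1,2,3,4 LABEL=t1
# harmonic restraint that biases the sampling of t1
RESTRAINT ARG=t1 AT=0.0 KAPPA=100.0 LABEL=rr
# log weights that undo the effect of the bias when averaging
REWEIGHT_BIAS TEMP=300 LABEL=bb
HISTOGRAM ...
  ARG=t1 LOGWEIGHTS=bb
  GRID_MIN=-3.14
  GRID_MAX=3.14
  GRID_BIN=100
  BANDWIDTH=0.1
  LABEL=hh
... HISTOGRAM

DUMPGRID GRID=hh FILE=histo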

A discrete analogue of kernel density estimation can also be used. In this analogue the kernels in the above formula are replaced by Dirac delta functions. When this method is used the final function calculated is no longer a probability density; it is instead a probability mass function, as each element of the function tells you the value of an integral between two points on your grid rather than the value of a (continuous) function on a grid.

Additional material and examples can also be found in the tutorials Lugano tutorial: Brief guide to PLUMED syntax and analyzing trajectories.

A note on block averaging and errors

Some particularly important issues related to the convergence of histograms and the estimation of error bars around the ensemble averages you calculate are covered in Trieste tutorial: Averaging, histograms and block analysis. The technique for estimating error bars known as block averaging is introduced in that tutorial. The essence of this technique is that the trajectory is split into a set of blocks and a separate ensemble average is calculated from each block of data. If \(\{A_i\}\) is the set of \(N\) block averages obtained in this way then the final error bar is calculated as:

\[ \textrm{error} = \sqrt{ \frac{1}{N} \frac{1}{N-1} \sum_{i=1}^N (A_i - \langle A \rangle )^2 } \qquad \textrm{where} \qquad \langle A \rangle = \frac{1}{N} \sum_{i=1}^N A_i \]

If the simulation is biased and reweighting is performed then life is a little more complex, as each of the block averages should be calculated as a weighted average. Furthermore, the weights should be taken into account when the final ensemble average and error bars are calculated. In this case the error is given by:

\[ \textrm{error} = \sqrt{ \frac{1}{N} \frac{\sum_{i=1}^N W_i }{\sum_{i=1}^N W_i - \sum_{i=1}^N W_i^2 / \sum_{i=1}^N W_i} \sum_{i=1}^N W_i (A_i - \langle A \rangle )^2 } \]

where \(W_i\) is the sum of all the weights for the \(i\)th block of data and \(\langle A \rangle\) is now the corresponding weighted average of the block averages.
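In practice the block averages can be generated directly from a PLUMED input by clearing the histogram at a fixed interval and writing each block to its own file, as in the sketch below. The atom indices, grid settings and block length of 500 steps are placeholders; the error formulas above would then be applied to the resulting per-block grids in a separate post-processing step.

TORSION ATOMS=1,2,3,4 LABEL=phi
# accumulate the histogram, clearing the accumulated data every 500 steps
HISTOGRAM ...
  ARG=phi STRIDE=10 CLEAR=500
  GRID_MIN=-3.14
  GRID_MAX=3.14
  GRID_BIN=100
  BANDWIDTH=0.1
  LABEL=hh
... HISTOGRAM

# write the histogram for each 500-step block to a separate file
DUMPGRID GRID=hh FILE=block_histo.dat STRIDE=500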

If we wish to calculate a normalized histogram we must calculate ensemble averages from our biased simulation using:

\[ \langle H(x) \rangle = \frac{\sum_{t=1}^M w_t K( x - x_t,\sigma) }{\sum_{t=1}^M w_t} \]

where the sum runs over the trajectory, \(w_t\) is the weight of the \(t\)th trajectory frame, \(x_t\) is the value of the CV for the \(t\)th trajectory frame and \(K\) is a kernel function centered on \(x_t\) with bandwidth \(\sigma\). The quantity that is evaluated here is the value of the normalized histogram at point \(x\). This ensemble average is what is calculated if you use the NORMALIZATION=true option in HISTOGRAM. If the ensemble average is calculated in this way we must calculate the associated error bars from our block averages using the second of the two expressions above.

A number of works have shown that, when biased simulations are performed, it is often better to calculate an estimate of the histogram that is not normalized, using:

\[ \langle H(x) \rangle = \frac{1}{M} \sum_{t=1}^M w_t K( x - x_t,\sigma) \]

instead of the expression above. This is therefore what is done by default in HISTOGRAM, and it is also what is done if the NORMALIZATION=ndata option is used. When the histogram is calculated in this second way the first of the two formulas above can be used when calculating error bars from block averages.
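The difference between the two estimates therefore comes down to a single keyword. The sketch below, with placeholder atoms, restraint parameters, grid settings and temperature, accumulates the normalized estimate of the reweighted histogram; removing NORMALIZATION=true (or setting NORMALIZATION=ndata) gives the unnormalized estimate that is accumulated by default.

TORSION ATOMS=1,2,3,4 LABEL=t1
RESTRAINT ARG=t1 AT=0.0 KAPPA=100.0 LABEL=rr
REWEIGHT_BIAS TEMP=300 LABEL=bb
# NORMALIZATION=true accumulates the normalized estimate of the histogram;
# error bars then require the weighted block-average formula given above
HISTOGRAM ...
  ARG=t1 LOGWEIGHTS=bb NORMALIZATION=true
  GRID_MIN=-3.14
  GRID_MAX=3.14
  GRID_BIN=100
  BANDWIDTH=0.1
  LABEL=hh
... HISTOGRAM

DUMPGRID GRID=hh FILE=histo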

Compulsory keywords
STRIDE ( default=1 ) the frequency with which the data should be collected and added to the quantity being averaged
CLEAR ( default=0 ) the frequency with which to clear all the accumulated data. The default value of 0 implies that all the data will be used and that the grid will never be cleared
BANDWIDTH the bandwidths for kernel density estimation
KERNEL ( default=gaussian ) the kernel function you are using. More details on the kernels available in PLUMED can be found in kernelfunctions.
NORMALIZATION ( default=ndata ) This controls how the data is normalized; it can be set equal to true, false or ndata. See above for an explanation
GRID_MIN the lower bounds for the grid
GRID_MAX the upper bounds for the grid
Options
SERIAL ( default=off ) do the calculation in serial. Do not use MPI
LOWMEM ( default=off ) lower the memory requirements
TIMINGS ( default=off ) output information on the timings of the various parts of the calculation

LOGWEIGHTS list of actions that calculates log weights that should be used to weight configurations when calculating averages
CONCENTRATION the concentration parameter for Von Mises-Fisher distributions
ARG the input for this action is the scalar output from one or more other actions. The particular scalars that you will use are referenced using the label of the action. If the label appears on its own then it is assumed that the Action calculates a single scalar value. The value of this scalar is thus used as the input to this new action. If * or *.* appears the scalars calculated by all the preceding actions in the input file are taken. Some actions have multi-component outputs and each component of the output has a specific label. For example a DISTANCE action labelled dist may have three components x, y and z. To take just the x component you should use dist.x; if you wish to take all three components then use dist.*. More information on the referencing of Actions can be found in the Getting Started section of the manual. Scalar values can also be referenced using POSIX regular expressions as detailed in the section on Regular Expressions. To use this feature you must compile PLUMED with the appropriate flag. You can use multiple instances of this keyword i.e. ARG1, ARG2, ARG3...
DATA input data from action with vessel and compute histogram
VECTORS input three dimensional vectors for computing histogram
GRID_BIN the number of bins for the grid
GRID_SPACING the approximate grid spacing (to be used as an alternative or together with GRID_BIN)
UPDATE_FROM Only update this action from this time (a usage sketch is given in the final example below)
UPDATE_UNTIL Only update this action until this time
Examples

The following input monitors two torsional angles during a simulation and outputs a continuous histogram as a function of them at the end of the simulation.

TORSION ATOMS=1,2,3,4 LABEL=r1
TORSION ATOMS=2,3,4,5 LABEL=r2
HISTOGRAM ...
  ARG=r1,r2
  GRID_MIN=-3.14,-3.14
  GRID_MAX=3.14,3.14
  GRID_BIN=200,200
  BANDWIDTH=0.05,0.05
  LABEL=hh
... HISTOGRAM

DUMPGRID GRID=hh FILE=histo

The following input monitors two torsional angles during a simulation and outputs a discrete histogram as a function of them at the end of the simulation.

TORSION ATOMS=1,2,3,4 LABEL=r1
TORSION ATOMS=2,3,4,5 LABEL=r2
HISTOGRAM ...
  ARG=r1,r2
  KERNEL=DISCRETE
  GRID_MIN=-3.14,-3.14
  GRID_MAX=3.14,3.14
  GRID_BIN=200,200
  LABEL=hh
... HISTOGRAM

DUMPGRID GRID=hh FILE=histo

The following input monitors two torsional angles during a simulation and outputs the histogram accumulated thus far every 100000 steps.

TORSION ATOMS=1,2,3,4 LABEL=r1
TORSION ATOMS=2,3,4,5 LABEL=r2
HISTOGRAM ...
  ARG=r1,r2
  GRID_MIN=-3.14,-3.14
  GRID_MAX=3.14,3.14
  GRID_BIN=200,200
  BANDWIDTH=0.05,0.05
  LABEL=hh
... HISTOGRAM

DUMPGRID GRID=hh FILE=histo STRIDE=100000

The following input monitors two torsional angles during a simulation and outputs a separate histogram for each 100000-step block of the trajectory. Notice how the CLEAR keyword is used here, whereas it was not used in the previous example.

TORSION ATOMS=1,2,3,4 LABEL=r1
TORSION ATOMS=2,3,4,5 LABEL=r2
HISTOGRAM ...
  ARG=r1,r2 CLEAR=100000
  GRID_MIN=-3.14,-3.14
  GRID_MAX=3.14,3.14
  GRID_BIN=200,200
  BANDWIDTH=0.05,0.05
  LABEL=hh
... HISTOGRAM

DUMPGRID GRID=hh FILE=histo STRIDE=100000
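
Finally, as a sketch of the UPDATE_FROM and UPDATE_UNTIL keywords listed above, the following input accumulates the histogram only for the part of the trajectory between two times; the values 100 and 500 (in the time units of the simulation) are placeholders.

TORSION ATOMS=1,2,3,4 LABEL=r1
TORSION ATOMS=2,3,4,5 LABEL=r2
HISTOGRAM ...
  ARG=r1,r2
  UPDATE_FROM=100 UPDATE_UNTIL=500
  GRID_MIN=-3.14,-3.14
  GRID_MAX=3.14,3.14
  GRID_BIN=200,200
  BANDWIDTH=0.05,0.05
  LABEL=hh
... HISTOGRAM

DUMPGRID GRID=hh FILE=histo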