LCOV - code coverage report
Current view: top level - bias - ReweightWham.cpp (source / functions)
Test: plumed test coverage
Date: 2026-03-30 13:16:06
                 Hit   Total  Coverage
Lines:            65      70    92.9 %
Functions:         9      11    81.8 %

          Line data    Source code
       1             : /* +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
       2             :    Copyright (c) 2017-2023 The plumed team
       3             :    (see the PEOPLE file at the root of the distribution for a list of names)
       4             : 
       5             :    See http://www.plumed.org for more information.
       6             : 
       7             :    This file is part of plumed, version 2.
       8             : 
       9             :    plumed is free software: you can redistribute it and/or modify
      10             :    it under the terms of the GNU Lesser General Public License as published by
      11             :    the Free Software Foundation, either version 3 of the License, or
      12             :    (at your option) any later version.
      13             : 
      14             :    plumed is distributed in the hope that it will be useful,
      15             :    but WITHOUT ANY WARRANTY; without even the implied warranty of
      16             :    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
      17             :    GNU Lesser General Public License for more details.
      18             : 
      19             :    You should have received a copy of the GNU Lesser General Public License
      20             :    along with plumed.  If not, see <http://www.gnu.org/licenses/>.
      21             : +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ */
      22             : #include "ReweightBase.h"
      23             : #include "core/ActionRegister.h"
      24             : #include "tools/Communicator.h"
      25             : 
      26             : //+PLUMEDOC REWEIGHTING REWEIGHT_WHAM
      27             : /*
      28             : Calculate the weights for configurations using the weighted histogram analysis method.
      29             : 
      30             : Suppose that you have run \f$N\f$ trajectories, each of which was generated by integrating a different biased Hamiltonian. We can calculate the probability of observing
      31             : the set of configurations sampled during these \f$N\f$ trajectories using the product of multinomial distributions shown below:
      32             : \f[
      33             : P( \vec{T} ) \propto \prod_{j=1}^M \prod_{k=1}^N (c_k w_{kj} p_j)^{t_{kj}}
      34             : \label{eqn:wham1}
      35             : \f]
      36             : In this expression the second product runs over the biases that were used when calculating the \f$N\f$ trajectories.  The first product then runs over the
      37             : \f$M\f$ bins in our histogram.  The \f$p_j\f$ variable that is inside this product is the quantity we wish to extract; namely, the unbiased probability of
      38             : having a set of CV values that lie within the range for the \f$j\f$th bin.
      39             : 
      40             : The quantity that we can easily extract from our simulations, \f$t_{kj}\f$, however, measures the number of frames from trajectory \f$k\f$ that are inside the \f$j\f$th bin.
      41             : To interpret this quantity we must consider the bias that acts on each of the replicas so the \f$w_{kj}\f$ term is introduced.  This quantity is calculated as:
      42             : \f[
      43             : w_{kj} = e^{-\beta V_k(s_j)}
      44             : \f]
      45             : where \f$V_k(s_j)\f$ is the bias potential of the \f$k\f$th simulation evaluated at the CV values, \f$s_j\f$, of the \f$j\f$th bin and \f$\beta\f$ is the inverse temperature.  This is
      46             : the factor by which we must multiply the unbiased probability of being in the bin in order to obtain the probability of being inside this same bin in
      47             : the \f$k\f$th of our biased simulations.  The \f$w_{kj}\f$ values thus depend both on the values that the CVs take and on the particular trajectory under investigation, since each trajectory was generated with a different simulation bias.  Finally, \f$c_k\f$ is a free parameter that ensures that, for each \f$k\f$, the biased probability distribution is normalized.
      48             : 
      49             : We can use the equation for the probability that was given above to find the set of \f$p_j\f$ values that maximizes the likelihood of observing the trajectories.
      50             : This constrained optimization must be performed using a set of Lagrange multipliers, \f$\lambda_k\f$, that ensure that each of the biased probability distributions
      51             : is normalized, that is \f$\sum_j c_k w_{kj} p_j = 1\f$.  Furthermore, as the problem is made easier by working with the logarithm of the probability,
      52             : we actually choose to maximize
      53             : \f[
      54             : \mathcal{L}= \sum_{j=1}^M \sum_{k=1}^N t_{kj} \ln\left( c_k w_{kj} p_j \right) + \sum_{k=1}^N \lambda_k \left( \sum_{j=1}^M c_k w_{kj} p_j - 1 \right)
      55             : \f]
      56             : Setting \f$\partial \mathcal{L}/\partial p_j = 0\f$ and using the normalization conditions to eliminate the \f$\lambda_k\f$, the following (WHAM) equations emerge:
      57             : \f[
      58             : \begin{aligned}
      59             : p_j & \propto \frac{\sum_{k=1}^N t_{kj}}{\sum_{k=1}^N c_k w_{kj}} \\
      60             : c_k & =\frac{1}{\sum_{j=1}^M w_{kj} p_j}
      61             : \end{aligned}
      62             : \f]
      63             : which can be solved iteratively: compute the \f$p_j\f$ values from the first equation using an initial guess for the \f$c_k\f$ values, insert the resulting \f$p_j\f$
      64             : values into the second equation to obtain updated \f$c_k\f$ values, and repeat until the two sets of values are self-consistent.
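The iterative scheme just described can be sketched in a few lines.  The following illustrative NumPy implementation is not part of PLUMED; the histogram counts \f$t_{kj}\f$ and bias factors \f$w_{kj}\f$ are invented toy numbers rather than output from any simulation:

```python
# Illustrative sketch of the binned WHAM self-consistency loop.
# t[k, j] and w[k, j] below are invented toy numbers.
import numpy as np

def wham(t, w, tol=1e-10, maxiter=5000):
    """t[k, j]: frames from trajectory k in bin j; w[k, j]: bias factors."""
    c = np.ones(t.shape[0])                       # initial guess for the c_k
    for _ in range(maxiter):
        # p_j is proportional to sum_k t_kj / sum_k c_k w_kj
        p = t.sum(axis=0) / (c[:, None] * w).sum(axis=0)
        p /= p.sum()
        # c_k = 1 / sum_j w_kj p_j
        c_new = 1.0 / (w * p[None, :]).sum(axis=1)
        if np.max(np.abs(np.log(c_new / c))) < tol:
            return p
        c = c_new
    raise RuntimeError("WHAM iteration did not converge")

t = np.array([[30., 50., 20.], [10., 40., 50.]])  # toy histogram counts
w = np.array([[1.0, 0.5, 0.1], [0.2, 0.6, 1.0]])  # toy e^{-beta V} factors
p = wham(t, w)                                    # unbiased bin probabilities
```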
      65             : 
      66             : Notice that only \f$\sum_k t_{kj}\f$, the total number of configurations from all the replicas that fall in the \f$j\f$th bin, enters the WHAM equations above.
      67             : There is consequently no need to record which replica generated each of the frames: one can simply pool the trajectories from all the replicas at the outset.
      68             : This observation is important as it is the basis of the binless formulation of WHAM that is implemented within PLUMED.
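In the binless formulation every stored frame becomes its own "bin", so the iteration works directly on the matrix of bias values.  A minimal NumPy sketch of this scheme, written to mirror the loop structure of ReweightWham::calculateWeights in the listing below but run here on randomly generated toy biases, is:

```python
# Sketch of a binless WHAM iteration: each stored frame gets its own weight,
# so no histogram bins are required.  bias[k, j] plays the role of the bias
# of replica k evaluated on frame j; the values here are random toy data.
import numpy as np

def binless_wham(bias, kT, tol=1e-10, maxiter=1000):
    # e^{-beta V}, shifted by the minimum bias for numerical safety
    expv = np.exp(-(bias - bias.min()) / kT)
    Z = np.ones(bias.shape[0])
    for _ in range(maxiter):
        oldZ = Z
        weights = 1.0 / (expv / Z[:, None]).sum(axis=0)  # per-frame weights
        weights /= weights.sum()
        Z = (weights[None, :] * expv).sum(axis=1)        # new normalizers
        Z /= Z.sum()
        if np.sum(np.log(Z / oldZ) ** 2) < tol:
            return weights
    raise RuntimeError("too many iterations in WHAM")

rng = np.random.default_rng(0)
bias = rng.uniform(0.0, 5.0, size=(4, 50))  # 4 replicas, 50 stored frames
w = binless_wham(bias, kT=2.5)
```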
      69             : 
      70             : \par Examples
      71             : 
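As a hypothetical illustration only (the atom indices, restraint centres and grid settings are invented, and this input has not been tested), a calculation in which each replica restrains a torsion at a different position and the WHAM weights are used to build an unbiased histogram might look something like:

\plumedfile
phi: TORSION ATOMS=5,7,9,15
rp: RESTRAINT ARG=phi KAPPA=50.0 AT=@replicas:{-3.0,-2.0,-1.0,0.0}
rw: REWEIGHT_WHAM TEMP=300 ARG=rp.bias
hh: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50 BANDWIDTH=0.1 LOGWEIGHTS=rw
DUMPGRID GRID=hh FILE=histo
\endplumedfile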
      72             : */
      73             : //+ENDPLUMEDOC
      74             : 
      75             : namespace PLMD {
      76             : namespace bias {
      77             : 
      78             : class ReweightWham : public ReweightBase {
      79             : private:
      80             :   double thresh;
      81             :   unsigned nreplicas;
      82             :   unsigned maxiter;
      83             :   bool weightsCalculated;
      84             :   std::vector<double> stored_biases;
      85             :   std::vector<double> final_weights;
      86             : public:
      87             :   static void registerKeywords(Keywords&);
      88             :   explicit ReweightWham(const ActionOptions&ao);
      89          12 :   bool buildsWeightStore() const override {
      90          12 :     return true;
      91             :   }
      92             :   void calculateWeights( const unsigned& nframes ) override;
      93             :   void clearData() override;
      94             :   double getLogWeight() override;
      95             :   double getWeight( const unsigned& iweight ) const override;
      96             : };
      97             : 
      98       13809 : PLUMED_REGISTER_ACTION(ReweightWham,"REWEIGHT_WHAM")
      99             : 
     100          16 : void ReweightWham::registerKeywords(Keywords& keys ) {
     101          16 :   ReweightBase::registerKeywords( keys );
     102          16 :   keys.remove("ARG");
     103          32 :   keys.add("compulsory","ARG","*.bias","the biases that must be taken into account when reweighting");
     104          32 :   keys.add("compulsory","MAXITER","1000","maximum number of iterations for WHAM algorithm");
     105          32 :   keys.add("compulsory","WHAMTOL","1e-10","threshold for convergence of WHAM algorithm");
     106          16 : }
     107             : 
     108          12 : ReweightWham::ReweightWham(const ActionOptions&ao):
     109             :   Action(ao),
     110             :   ReweightBase(ao),
     111          12 :   weightsCalculated(false) {
     112          12 :   parse("MAXITER",maxiter);
     113          12 :   parse("WHAMTOL",thresh);
     114          12 :   if(comm.Get_rank()==0) {
     115          12 :     nreplicas=multi_sim_comm.Get_size();
     116             :   }
     117          12 :   comm.Bcast(nreplicas,0);
     118          12 : }
     119             : 
     120        4224 : double ReweightWham::getLogWeight() {
     121        4224 :   if( getStep()==0 ) {
     122             :     return 1.0;  // This is here as first step is ignored in all analyses
     123             :   }
     124        4212 :   weightsCalculated=false;
     125        4212 :   double bias=0.0;
     126        8424 :   for(unsigned i=0; i<getNumberOfArguments(); ++i) {
     127        4212 :     bias+=getArgument(i);
     128             :   }
     129             : 
     130        4212 :   std::vector<double> biases(nreplicas,0.0);
     131        4212 :   if(comm.Get_rank()==0) {
     132        4212 :     multi_sim_comm.Allgather(bias,biases);
     133             :   }
     134        4212 :   comm.Bcast(biases,0);
     135       29484 :   for(unsigned i=0; i<biases.size(); i++) {
     136       25272 :     stored_biases.push_back( biases[i] );
     137             :   }
     138             :   return 1.0;
     139             : }
     140             : 
     141           0 : void ReweightWham::clearData() {
     142           0 :   stored_biases.resize(0);
     143           0 : }
     144             : 
     145        2457 : double ReweightWham::getWeight( const unsigned& iweight ) const {
     146             :   plumed_dbg_assert( weightsCalculated && iweight<final_weights.size() );
     147        2457 :   return final_weights[iweight];
     148             : }
     149             : 
     150           7 : void ReweightWham::calculateWeights( const unsigned& nframes ) {
     151           7 :   if( stored_biases.size()!=nreplicas*nframes ) {
     152           0 :     error("wrong number of weights stored");
     153             :   }
     154             :   // Get the minimum value of the bias
     155           7 :   double minv = *min_element(std::begin(stored_biases), std::end(stored_biases));
     156             :   // Resize final weights array
     157           7 :   plumed_assert( stored_biases.size()%nreplicas==0 );
     158           7 :   final_weights.resize( stored_biases.size() / nreplicas, 1.0 );
     159             :   // Offset and exponential of the bias
     160           7 :   std::vector<double> expv( stored_biases.size() );
     161       14749 :   for(unsigned i=0; i<expv.size(); ++i) {
     162       14742 :     expv[i] = std::exp( (-stored_biases[i]+minv) / simtemp );
     163             :   }
     164             :   // Initialize Z
     165           7 :   std::vector<double> Z( nreplicas, 1.0 ), oldZ( nreplicas );
     166             :   // Now the iterative loop to calculate the WHAM weights
     167        2674 :   for(unsigned iter=0; iter<maxiter; ++iter) {
     168             :     // Store Z
     169       18718 :     for(unsigned j=0; j<Z.size(); ++j) {
     170       16044 :       oldZ[j]=Z[j];
     171             :     }
     172             :     // Recompute weights
     173             :     double norm=0;
     174      941248 :     for(unsigned j=0; j<final_weights.size(); ++j) {
     175             :       double ew=0;
     176     6570018 :       for(unsigned k=0; k<Z.size(); ++k) {
     177     5631444 :         ew += expv[j*Z.size()+k]  / Z[k];
     178             :       }
     179      938574 :       final_weights[j] = 1.0 / ew;
     180      938574 :       norm += final_weights[j];
     181             :     }
     182             :     // Normalize weights
     183      941248 :     for(unsigned j=0; j<final_weights.size(); ++j) {
     184      938574 :       final_weights[j] /= norm;
     185             :     }
     186             :     // Recompute Z
     187       18718 :     for(unsigned j=0; j<Z.size(); ++j) {
     188       16044 :       Z[j] = 0.0;
     189             :     }
     190      941248 :     for(unsigned j=0; j<final_weights.size(); ++j) {
     191     6570018 :       for(unsigned k=0; k<Z.size(); ++k) {
     192     5631444 :         Z[k] += final_weights[j]*expv[j*Z.size()+k];
     193             :       }
     194             :     }
     195             :     // Normalize Z and compute change in Z
     196             :     double change=0;
     197             :     norm=0;
     198       18718 :     for(unsigned k=0; k<Z.size(); ++k) {
     199       16044 :       norm+=Z[k];
     200             :     }
     201       18718 :     for(unsigned k=0; k<Z.size(); ++k) {
     202       16044 :       Z[k] /= norm;
     203       16044 :       double d = std::log( Z[k] / oldZ[k] );
     204       16044 :       change += d*d;
     205             :     }
     206        2674 :     if( change<thresh ) {
     207           7 :       weightsCalculated=true;
     208           7 :       return;
     209             :     }
     210             :   }
     211           0 :   error("Too many iterations in WHAM" );
     212             : }
     213             : 
     214             : }
     215             : }

Generated by: LCOV version 1.16