Constants | Function to minimise, $f$ |
---|---|
$p,\,S$ | $H$ |
$p,\,T$ | $G$ |
$p,\,H$ | $-S$ |
$V,\,S$ | $U$ |
$V,\,T$ | $A$ |
$V,\,U$ | $-S$ |

# Minimisation details

This section deals with the computational problem of implementing a general thermodynamic potential minimiser. Unless you are doing this, you probably want to skip this chapter and go straight to the models.

The previous section established that finding equilibrium means finding the minimum of a thermodynamic potential, $f$. Which potential depends on which variables are held constant (see the table). This section digs into the minimisation itself to understand the approach, what thermodynamic properties are needed to actually perform it, and any gotchas.
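As a concrete illustration of the table, the choice of potential can be sketched as a simple lookup from the set of held-constant variables (a hypothetical helper for illustration, not part of SimCem's API):

```python
# Map the set of held-constant variables to the potential minimised at
# equilibrium (see the table above). Names here are illustrative only.
POTENTIAL_FOR_CONSTANTS = {
    frozenset({"p", "S"}): "H",    # enthalpy
    frozenset({"p", "T"}): "G",    # Gibbs energy
    frozenset({"p", "H"}): "-S",   # negative entropy
    frozenset({"V", "S"}): "U",    # internal energy
    frozenset({"V", "T"}): "A",    # Helmholtz energy
    frozenset({"V", "U"}): "-S",   # negative entropy
}

def potential_to_minimise(*constants):
    """Return the symbol of the potential to minimise for these constants."""
    return POTENTIAL_FOR_CONSTANTS[frozenset(constants)]
```

Using a `frozenset` makes the lookup order-independent, so `potential_to_minimise("T", "p")` and `potential_to_minimise("p", "T")` both return `G`.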

## Thermodynamic potential minimisation

To be as general as possible, we're going to consider a closed system (our universe) containing $N_P$ thermodynamic sub-systems. Each subsystem $\alpha$ is typically a different phase with $N_{C,\,\alpha}$ components within it and is described by a set of state variables $\vec{X}_\alpha=\left\{\left\{N_{i,\alpha}\right\}^{N_{C,\,\alpha}},\,T_\alpha,\,p_\alpha\text{ or }V_\alpha\right\}$, where either the pressure $p_\alpha$ or the volume $V_\alpha$ is used depending on the thermodynamic model of the subsystem/phase. The state vector of our "universe" is then generated by collecting all of these variables together, i.e., $\vec{X}=\left\{\vec{X}_\alpha\right\}^{N_P}$.

Our aim is to determine the equilibrium state, $\vec{X}_{equil.}$,
of the universe under some conditions. Mathematically the
minimisation can be expressed as follows,
\begin{align*}
f\left(\vec{X}_{equil.}\right)&=\min_{\vec{X}}\left(f\left(\vec{X}\right)\right)
& &\\ \vec{X} &\ge \vec{0} & &\\ g_k\left(\vec{X}\right) &= 0 &
k&\in[1,N_{constraints}], \end{align*}
Let's discuss each line in turn. The first line simply states
that the equilibrium value of the potential is found at the
minimum of the potential. The second line is a non-negativity
constraint which keeps the minimisation in a physical region,
i.e.,
\begin{align}\label{eq:nonNegConstraints}
\left\{\left\{N_{i,\alpha}\right\}^{N_{C,\,\alpha}},\,T_\alpha,p_\alpha\text{
or }\,V_\alpha\right\}^{N_P} &\ge 0, \end{align}
This is needed as, without it, the minimiser will blindly react
away unstable compounds until their amounts turn negative, and
keep going as this continues to lower the thermodynamic
potential, $f$.
Some other implementations must allow electron amounts to go
negative (i.e., to allow positive ions to form), but SimCem
tracks the total electron count in the system, allowing the
non-negativity constraint to be applied even in this case.
Finally, there are a number of additional *equality*
constraints which we write in general as follows,
\begin{align}
g_k\left(\vec{X}\right) &= 0,
\end{align}
where $k$ is the index of an equality constraint which holds some
function of the state variables, $g_k\left(\vec{X}\right)$, to a
value of zero. One example of these are the *material*
constraints arising from mass/mole balances (e.g. conservation of
elements) whereas the other constraints arise from constraints on
thermodynamic variables (e.g., constant enthalpy and
pressure). Before these are discussed, we review how constrained
minimisation is carried out in general to better understand how
these constraints are combined with the minimisation problem.

### Lagrange multipliers

Computerised minimisation routines simply repeatedly change the
values of the state variables and track the state with the lowest
value of the thermodynamic potential. The only difference between
various algorithms is how intelligently the system selects the new
values of the state variables to try.
In this simple approach, it is easy to implement the
non-negativity constraints on the state variables
(Eq.\eqref{eq:nonNegConstraints}): whenever the minimiser
attempts to move a state variable below zero it is prevented
from doing so, a technique called bounds checking. The equality
constraints embodied by $g_k$ may be highly non-linear, so it is
not immediately obvious how to prevent their violation; however,
constrained minimisation problems can be transformed into
*unconstrained* searches for stationary points using the method
of Lagrange multipliers. To derive this approach, a new function
called the Lagrangian, $F$, is constructed like so,
\begin{align*} F\left(\vec{X},\left\{\lambda_k\right\}\right) =
f\left(\vec{X}\right) + \sum_k \lambda_k\,g_k\left(\vec{X}\right),
\end{align*}
where $\lambda_k$ is an as-yet unknown constant called the
Lagrange multiplier for the $k$th equality constraint. The
Lagrangian has the key property that the constrained minima of
$f$ now occur at (unconstrained) *extrema* of $F$. For example, it is easy to
see that the derivatives of $F$ with respect to each Lagrange
multiplier are zero if the constraints are satisfied,
\begin{align}\label{eq:constraintResult}
\frac{\partial F}{\partial \lambda_k} = g_k &= 0 & \text{if constraints are satisfied}.
\end{align}
Thus we are searching for a point where $\partial F/\partial
\lambda_k=0$ to ensure the constraints are satisfied. Taking a
derivative of the Lagrangian with respect to the state variables,
$\vec{X}$, yields the following,
\begin{align*}
\frac{\partial F}{\partial \vec{X}} = \frac{\partial f}{\partial \vec{X}} + \sum_k \lambda_k \frac{\partial g_k}{\partial \vec{X}},
\end{align*}
or in vector notation,
\begin{align*}
\nabla_\vec{X}\,F &= \nabla_\vec{X}\,f + \sum_k \lambda_k \nabla_\vec{X}\,g_k.
\end{align*}
Let's consider the point where $\partial F/\partial
\vec{X}=\nabla_\vec{X}\,F=\vec{0}$; there, the following must be
true,
\begin{align}\label{eq:lagrangianResult}
\nabla_\vec{X}\,f = - \sum_k \lambda_k \nabla_\vec{X}\,g_k.
\end{align}
This makes it clear that at the point where $\partial F/\partial
\vec{X}=\vec{0}$, the downhill direction of $f$ can be decomposed
into directions in which the constraint functions also change
($\nabla_\vec{X}\,f$ is expressed in the basis of the constraint
gradients $\nabla_\vec{X}\,g_k$, with $-\lambda_k$ as the
coordinates in this basis). Thus, attempting to lower $f$ any
further will drive the constraint values away from zero, where
they are already satisfied at this point (as guaranteed by
$\partial F/\partial \lambda_k=0$).
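The decomposition in Eq.\eqref{eq:lagrangianResult} can be checked numerically on a toy problem (illustrative only, not SimCem code): minimise $f\left(\vec{X}\right)=\vec{X}\cdot\vec{X}$ subject to $g\left(\vec{X}\right)=X_1+X_2-1=0$, whose constrained minimum sits at $(1/2,\,1/2)$ with $\lambda=-1$.

```python
import numpy as np

# Constrained minimum of f(X) = X·X subject to g(X) = X[0] + X[1] - 1 = 0
# lies at X = (0.5, 0.5) with Lagrange multiplier λ = -1.
X = np.array([0.5, 0.5])
lam = -1.0

grad_f = 2.0 * X                    # ∇f = 2X
grad_g = np.array([1.0, 1.0])       # ∇g is constant for a linear constraint

# At the stationary point of the Lagrangian, ∇f = -λ ∇g holds exactly:
assert np.allclose(grad_f, -lam * grad_g)
```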

As a result, the new strategy for finding equilibrium is to find a stationary point of the Lagrangian, \begin{align}\label{eq:stationaryPoint} \frac{\partial F}{\partial \vec{X}} &= 0 & \frac{\partial F}{\partial \left\{\lambda_k\right\}} &=g_k(\vec{X})=0. \end{align} This is a stationary point, not a maximum or minimum, as no statements about the second derivatives of the Lagrangian have been made. In fact, most stationary points of the Lagrangian turn out to be saddle points, so direct minimisation of the Lagrangian is not a suitable approach; instead, a root search on the first derivatives of the Lagrangian should be performed. Even this approach may converge to a stationary point of $F$ which is not a minimum but a maximum of $f$. Implementing a suitable routine is therefore an art, but fortunately general algorithms are available which perform this analysis, such as NLopt, Opt++, and SLSQP. Most of these packages have difficulties with non-linear constrained problems like this one, but IPOPT, when correctly scaled, appears to be the most robust (and free) approach and is what is used by default in SimCem. Global minimisation is not yet considered.
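As a minimal sketch of such a constrained solver in action (using SciPy's SLSQP implementation rather than IPOPT, with rough illustrative molar Gibbs energies), consider splitting one mole of water between a steam and a liquid phase at fixed $T$ and $p$:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only: split one mole of H2O between steam and water
# by minimising f(n) = n_steam*g_steam + n_water*g_water, subject to the
# species balance n_steam + n_water = 1 and non-negativity bounds.
# The molar Gibbs energies are rough standard-state values (kJ/mol),
# chosen purely for demonstration.
g_molar = np.array([-228.6, -237.1])   # [steam, water]

f = lambda n: n @ g_molar
balance = {"type": "eq", "fun": lambda n: n[0] + n[1] - 1.0}

res = minimize(f, x0=[0.5, 0.5], method="SLSQP",
               bounds=[(0.0, None), (0.0, None)], constraints=[balance])

n_steam, n_water = res.x   # liquid water has the lower g, so it takes all
```

The bounds implement the non-negativity constraints of Eq.\eqref{eq:nonNegConstraints}, while the equality constraint is handled internally by the solver via exactly the Lagrange-multiplier machinery described above.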

The purpose of introducing the Lagrangian is to demonstrate that derivatives of $f$ and $g_k$ are needed, but also to introduce the Lagrange multipliers $\lambda_k$, which will later be shown to correspond to physical properties of the system. The values that must be computed are summarised next, before discussing how to calculate them.

### Required properties for minimisation

To determine the extrema of the Lagrangian we need to numerically calculate the terms above. As a root search on the derivatives of the Lagrangian is to be attempted, numerical expressions for these derivatives are required, \begin{align} \frac{\partial F}{\partial \left\{\lambda_k\right\}} &=\left\{g_k(\vec{X})\right\} & \frac{\partial F}{\partial \vec{X}} &= \frac{\partial f}{\partial \vec{X}} + \sum_k \lambda_k\frac{\partial g_k}{\partial \vec{X}}. \end{align} As illustrated above, the derivatives with respect to the Lagrange multipliers are given by the constraint functions themselves, thus no additional calculation is required there once these are defined. The thermodynamic potential, $f$, of the overall system can be broken down into the contributions from each subsystem/phase (implicitly ignoring any contribution from the interfaces between the subsystems/phases), \begin{align}\label{eq:potentialSum} f = \sum_{\alpha}^{N_P} f_\alpha, \end{align} where $f_\alpha$ is the contribution arising from a single phase, $\alpha$. This allows us to rewrite the derivative of the Lagrangian with respect to the state variables as follows, \begin{align}\label{eq:Fderiv} \frac{\partial F}{\partial \vec{X}} &= \sum_\alpha^{N_P} \frac{\partial f_\alpha}{\partial \vec{X}} + \sum_k \lambda_k\frac{\partial g_k}{\partial \vec{X}}. \end{align} It should be noted that each subsystem/phase only depends on its own state variables, \begin{align*} \frac{\partial f_\alpha}{\partial \vec{X}_\beta} &= 0 & \text{for $\alpha \neq\beta$}, \end{align*} thus each individual phase's derivatives can be considered separately, as only the $\partial f_\alpha/\partial \vec{X}_\alpha$ terms are non-zero. To make more progress we need to understand what the variables $\vec{X}_\alpha$ are, and this is discussed in the models section. The derivatives needed from the equality constraints are explored now.
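The block structure of Eq.\eqref{eq:Fderiv} suggests a simple assembly loop: each phase contributes only to its own slice of the gradient, while the constraint terms may couple all variables. A sketch with made-up slices and functions (not SimCem's internal layout):

```python
import numpy as np

# Two hypothetical phases owning slices [0:2] and [2:5] of the state vector X.
slices = [slice(0, 2), slice(2, 5)]

def grad_F(X, phase_grads, lams, constraint_jacs):
    """Assemble ∂F/∂X = Σ_α ∂f_α/∂X_α + Σ_k λ_k ∂g_k/∂X."""
    dF = np.zeros_like(X)
    for sl, grad_f in zip(slices, phase_grads):
        dF[sl] += grad_f(X[sl])       # each phase only touches its own slice
    for lam, jac in zip(lams, constraint_jacs):
        dF += lam * jac(X)            # constraints may couple all variables
    return dF

# Example: f_1 = X·X on its slice, f_2 = sum(X) on its slice, and one
# linear constraint with a Jacobian of ones and λ = 0.5.
X = np.arange(5.0)
dF = grad_F(X,
            [lambda x: 2.0 * x, lambda x: np.ones_like(x)],
            [0.5],
            [lambda X: np.ones_like(X)])
```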

## Minimisation equality constraints

The Gibbs phase rule states that the number of independent intensive variables (AKA degrees of freedom), $F$ (not to be confused with the Lagrangian above), required to completely specify the equilibrium state of a thermodynamic system is, \begin{align} F=C+2-N_P, \end{align} where $N_P$ is the number of phases and $C$ is the number of independent components in the system. It should be noted that in general $C\neq N_C$, as components may be linked by the constraints of elemental or molecular balances.
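A one-line worked example of the phase rule, using pure water ($C=1$):

```python
def degrees_of_freedom(C, N_P):
    """Gibbs phase rule: F = C + 2 - N_P."""
    return C + 2 - N_P

# Liquid water + steam: one degree of freedom, so fixing T also fixes p
# (the saturation curve). At the triple point all three phases coexist
# and no degrees of freedom remain.
f_boiling = degrees_of_freedom(1, 2)
f_triple = degrees_of_freedom(1, 3)
```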

In SimCem, $\sum_\alpha^{N_P}\left(N_{C,\alpha}+2\right)$ state variables are always used to describe a system (there are $N_{C,\alpha}$ molar amounts $\left\{N_{i,\alpha}\right\}^{N_{C,\alpha}}$, the subsystem temperature $T_\alpha$, and either the subsystem pressure $p_\alpha$ or volume $V_\alpha$ for each subsystem). In general, $\sum_\alpha^{N_P}\left(N_{C,\alpha}+2\right)\ge C+2-N_P$, thus the state of a multi-phase and/or reactive system, $\vec{X}$, is typically over-specified, and constraints must be applied between the variables of the minimisation to eliminate the redundant degrees of freedom.

### Constraints on $S,\,H,\,U,\text{ or }T$ and $p\text{ or } V$

Two systems in equilibrium must have equal temperature, pressure, and chemical potential. These conditions arise naturally from the minimisation; however, it is efficient to remove variables from the minimisation wherever we can (this also helps with numerical accuracy/stability).

As the temperatures of all phases are equal at equilibrium and all models used in SimCem have a temperature variable, $\left\{T_\alpha\right\}^{N_{p}}$, these individual values are eliminated from $\vec{X}$ and set equal to a single system temperature, $T$, which is inserted into $\vec{X}$.

If a constant temperature is being considered, then the system temperature, $T$, is simply set to the constrained value and eliminated from $\vec{X}$ entirely. If temperature is free, then a constraint on $S$, $H$, or $U$ is added, i.e., \begin{align} g_{S}\left(\vec{X}\right) = S- S_{target} = 0. \end{align}

Not all models used in SimCem have a pressure variable, thus it is a little more challenging to reduce it to a single value in $\vec{X}$, so this is not done (yet); however, if constant pressure is required, then only the pressure of the first phase is constrained, and the other phases then equilibrate to this pressure via the minimisation, \begin{align} g_{p}\left(\vec{X}\right) = p_1- p_{target} = 0. \end{align} If the volume is held constant then a different constraint function is used, \begin{align} g_{V}\left(\vec{X}\right) = \sum_\alpha^{N_p} V_\alpha- V_{target} = 0. \end{align} Any phase volumes appearing as independent variables must remain in $\vec{X}$, as it is the overall volume of the system which is constrained to $V_{target}$; thus the individual phase volumes remain unconstrained.
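A direct transcription of these two constraint functions (hypothetical helper names for illustration, not SimCem's API):

```python
def g_p(phase_pressures, p_target):
    # Only the first phase's pressure is pinned; the remaining phases
    # equilibrate to it through the minimisation itself.
    return phase_pressures[0] - p_target

def g_V(phase_volumes, V_target):
    # The *total* system volume is constrained, leaving the individual
    # phase volumes free to redistribute between phases.
    return sum(phase_volumes) - V_target
```

Both return zero when the constraint is satisfied, matching the $g_k\left(\vec{X}\right)=0$ convention used throughout.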

### Constraints on elements/species

SimCem has a reactive and non-reactive mode which selects between
the conservation of elements or species respectively. For example,
consider the water/steam equilibrium system,
\begin{align*}
H_2O_{steam} &\rightleftharpoons H_2O_{water}
\end{align*}
The variables for this system in SimCem are typically as follows,
\begin{align*}
\vec{X} = \left\{T,\,p,\,N_{H_2O,steam},\,N_{H_2O,water}\right\}.
\end{align*}
$H_2O$ may be present in both the steam and water phases, but
the total amount is constrained to the initial amount,
$N_{H_2O}^0$. Thus we'd like to add a **non-reactive** species
constraint,
\begin{align*}
g_{H_2O}\left(\vec{X}\right) = N_{H_2O,steam}+N_{H_2O,water} - N_{H_2O}^0 = 0.
\end{align*}
In **reactive** systems, the types of molecules are no longer
conserved but the elements are. For example, consider the combustion
of graphite,
\begin{align*}
C+O_2&\leftrightharpoons CO_2\\
C+\frac{1}{2}O_2&\leftrightharpoons CO.
\end{align*}
The state variables are,
\begin{align*}
\vec{X} = \left\{T,\,p,\,N_{C,graphite},\,N_{O_2,gas},\,N_{CO,gas},\,N_{CO_2,gas}\right\}.
\end{align*}
Selecting a **reactive** system, SimCem will generate elemental
constraints,
\begin{align*}
g_{C}\left(\vec{X}\right) &= N_{C,graphite}+N_{CO,gas}+N_{CO_2,gas}-N_{C}^0 = 0\\
g_{O}\left(\vec{X}\right) &= 2\,N_{O_2,gas}+N_{CO,gas}+2\,N_{CO_2,gas}-N_{O}^0 = 0.
\end{align*}
Elemental constraints allow any and all possible
reactions/rearrangements of elements to minimise the free
energy. For example, the following reaction is also allowed by these
constraints,
\begin{align*}
CO+\frac{1}{2}O_2&\leftrightharpoons CO_2
\end{align*}
Sometimes this is not desired, and only specific reaction paths are
fast enough to be considered (e.g. in catalysed reactions). In this
case, custom constraints will need to be implemented. At the moment,
Simcem will only automatically generate elemental and
molecular constraints.
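For the graphite combustion example above, the elemental constraints can be collected into a matrix with one row per element and one column per molar amount in $\vec{X}$. A sketch of how such a matrix might be assembled from per-species element counts (the species naming is illustrative):

```python
# Species ordering follows the molar amounts in the state vector X.
species = ["C(graphite)", "O2(gas)", "CO(gas)", "CO2(gas)"]
composition = {               # element counts per formula unit
    "C(graphite)": {"C": 1},
    "O2(gas)":     {"O": 2},
    "CO(gas)":     {"C": 1, "O": 1},
    "CO2(gas)":    {"C": 1, "O": 2},
}
elements = ["C", "O"]

# One row per element, one column per species.
C_matrix = [[composition[s].get(e, 0) for s in species] for e in elements]
```

The two rows reproduce the coefficients of $g_C$ and $g_O$ above: `[[1, 0, 1, 1], [0, 2, 1, 2]]`.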

### Eliminating redundant constraints

Consider the water-steam system again. Imagine that a user selects
this as a **reactive** system. SimCem will attempt to use a
hydrogen and oxygen balance constraint:
\begin{align*}
g_{H}\left(\vec{X}\right) = 2\,N_{H_2O,steam}+2\,N_{H_2O,water} - 2\,N_{H_2O}^0 = 0
\\
g_{O}\left(\vec{X}\right) = N_{H_2O,steam}+N_{H_2O,water} - N_{H_2O}^0 = 0
\end{align*}
These two constraints are identical aside from a multiplicative
factor. This leads to ambiguity in the values of the Lagrange
multipliers, as any combination of the two constraints can
apply, and this indeterminacy can cause issues with solvers, so
redundant constraints need to be eliminated.

Both elemental and species constraints are linear functions of the molar amounts, $N_i$, thus they can be expressed in matrix form, \begin{align}\label{eq:genMolConstraint} \vec{g} = \vec{C}\cdot\left(\vec{N} - \vec{N}^0\right) =0 \end{align} where $\vec{C}$ is a matrix with one row per conserved element (or species), whose entries count the amount of that element (or species) contributed by each molar amount in $\vec{N}$, and $\vec{N}^0$ is the vector of initial molar amounts.

To determine which rows of the matrix $\vec{C}$ are redundant, we perform a singular-value decomposition, $\vec{C}=\vec{U}\,\vec{S}\,\vec{V}^T$. The benefit of this is that the rows of $\vec{V}^T$ corresponding to the non-zero singular values form a set of orthogonal constraints equivalent to the original constraint matrix $\vec{C}$, so we use the so-called "thin" $\vec{V}^{T}$ (containing only the rows with non-zero singular values) as our new constraint matrix. As we would like to extract the original Lagrange multipliers for later calculations, we need to be able to map back and forth between the original and reduced multipliers. This mapping can be found by considering the original constraint term and its multipliers, \begin{align} \vec{\lambda}\cdot\vec{g} &= \vec{\lambda} \cdot \vec{C} \left(\vec{N} - \vec{N}^0\right) \\ &= \vec{\lambda} \cdot \vec{U}\,\vec{S}\,\vec{V}^T \left(\vec{N} - \vec{N}^0\right) \\ &= \vec{\lambda}_r \cdot \vec{V}^T \left(\vec{N} - \vec{N}^0\right), \end{align} where the final line implicitly defines the reduced set of Lagrange multipliers, $\vec{\lambda}_r = \vec{S}\,\vec{U}^T\,\vec{\lambda}$. Performing the minimisation using $\vec{V}^T$, we can then recover the original Lagrange multipliers like so, \begin{align} \vec{\lambda} &= \vec{U}\,\vec{S}^{-1}\,\vec{\lambda}_r. \end{align} As a side note, the matrices $\vec{U}$ and $\vec{V}$ are orthonormal/rotation matrices, thus their transpose is their inverse. Also, as the diagonal matrix $\vec{S}$ may be singular, $\vec{S}^{-1}$ is the generalised inverse (i.e., only the non-zero diagonal values are inverted).
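The reduction can be sketched with NumPy for the redundant water/steam example above (the multiplier value is an arbitrary illustration):

```python
import numpy as np

# Redundant reactive constraints for water/steam: the hydrogen and oxygen
# balances are the same row up to a factor of two.
C = np.array([[2.0, 2.0],    # g_H
              [1.0, 1.0]])   # g_O

U, s, Vt = np.linalg.svd(C)
keep = s > 1e-10 * s.max()          # non-zero singular values
Vt_thin = Vt[keep]                  # reduced, orthonormal constraint rows
assert Vt_thin.shape[0] == 1        # only one independent balance survives

# Map a reduced multiplier back to the originals: λ = U S⁻¹ λ_r
lam_r = np.array([3.0])             # some multiplier found by the solver
lam = U[:, keep] @ (lam_r / s[keep])

# Consistency: both multiplier sets weight the constraints identically,
# i.e. λ·C == λ_r·V_thinᵀ.
assert np.allclose(lam @ C, lam_r @ Vt_thin)
```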

With the constraints outlined, the actual definition of thermodynamic models and the calculations may begin.

### Derivatives required for constraints

For the constraint functions, the only non-zero derivatives of the element/molecular constraint function given in Eq.\eqref{eq:genMolConstraint} are with respect to the molar amounts, \begin{align} \frac{\partial g_{N_k}}{\partial N_i} &= C_{ki} & \frac{\partial g_{N_k}}{\partial \left\{p,T,V_\alpha\right\}} &= 0. \end{align} The pressure and volume constraints of the previous section require the following derivatives, \begin{align}\label{eq:GradEqsStart} \frac{\partial g_{p_\alpha}}{\partial N_i} &= \begin{cases} 0 & \text{if $N_i\notin \vec{X}_\alpha$ or $V_\alpha\notin\vec{X}_\alpha$} \\ \left(\frac{\partial p_\alpha}{\partial N_i}\right)_{V_\alpha,T,N_{j\neq i}} & \text{if $N_i\in \vec{X}_\alpha$ and $V_\alpha\in\vec{X}_\alpha$} \end{cases} \\ \frac{\partial g_{p_\alpha}}{\partial \left\{p,T,V_\alpha\right\}} &= \left\{-1, \left(\frac{\partial p_\alpha}{\partial T}\right)_{N_i,V_\alpha},\left(\frac{\partial p_\alpha}{\partial V_\alpha}\right)_{N_i,T}\right\} \\ \frac{\partial g_{V}}{\partial N_i} &= \left(\frac{\partial V}{\partial N_i}\right)_{\vec{X}\setminus \left\{N_i\right\}} \\ \label{eq:GradEqsEnd} \frac{\partial g_{V}}{\partial \left\{p,T,V_\alpha\right\}} &= \left\{\left(\frac{\partial V}{\partial p}\right)_{\vec{X}\setminus \left\{p\right\}}, \left(\frac{\partial V}{\partial T}\right)_{\vec{X}\setminus \left\{T\right\}}, 1\right\} \end{align}
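When implementing these derivatives, it is worth verifying each analytic expression against a finite difference. A sketch for a single term, using an ideal-gas phase ($V = nRT/p$) as an assumed stand-in model:

```python
# Finite-difference check of ∂g_V/∂p for a hypothetical ideal-gas phase,
# where V(p) = nRT/p and hence the analytic derivative is -nRT/p².
R, n, T, p = 8.314, 1.0, 300.0, 1.0e5   # SI units

V = lambda p: n * R * T / p
analytic = -n * R * T / p**2

h = 1.0e-3 * p                           # central-difference step
numeric = (V(p + h) - V(p - h)) / (2.0 * h)

# Central differences agree to roughly (h/p)² ≈ 1e-6 relative error here.
assert abs(numeric - analytic) < 1e-5 * abs(analytic)
```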