Performance and stability

By far the fastest way to solve large linear problems is to use a commercial solver with the Barrier algorithm. However, Barrier is very sensitive to the numerical properties of the problem, and the algorithm is often slowed down or aborted due to numerical issues. For this reason, AnyMOD provides a range of tools that adjust the problem in order to improve solver performance. These measures are based on recommendations from the Gurobi documentation.

Scaling


The coefficients within the matrix of an optimization problem should be kept between $10^{-3}$ and $10^6$, which implies a maximum range of coefficients of $10^9$ within each row. To achieve this, AnyMOD automatically applies a two-step scaling process to each model that is created. This process is outlined based on a simplified optimization problem (excluding an objective).

In the example, the first and second rows do not currently comply with the targeted range. Moreover, the maximum range of coefficients in the second row is $10^{11}$ (= $\frac{10^{2}}{10^{-9}}$), which exceeds $10^9$.
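Since the example matrices themselves are not reproduced here, the following sketch uses hypothetical coefficients chosen to be consistent with the ranges stated above (the second row spans $10^{2} / 10^{-9} = 10^{11}$). The function and matrix names are illustrative, not part of AnyMOD:

```python
def coefficient_range(row):
    """Ratio of the largest to the smallest non-zero absolute coefficient."""
    nonzero = [abs(c) for c in row if c != 0.0]
    return max(nonzero) / min(nonzero)

# Hypothetical constraint matrix matching the ranges described in the text
A = [
    [1e-5, 1e2],  # contains 1e-5, outside the target range [1e-3, 1e6]
    [1e-9, 1e2],  # row range 1e11, which exceeds the 1e9 maximum
]

for i, row in enumerate(A, start=1):
    print(f"row {i}: range = {coefficient_range(row):.0e}")
```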

1. Column substitution

The first step substitutes columns (= variables) of the optimization problem. In the example, the variable $x_1$ is substituted with $10^3 \, x'_1$.

After substitution, the range of coefficients still does not lie entirely within $10^{-3}$ to $10^6$, but the range of coefficients within each row no longer exceeds $10^9$. This is a prerequisite for moving all coefficients into the desired range in the next step.
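The substitution step can be sketched as follows, again with hypothetical coefficients: replacing $x_1$ with $10^3 \, x'_1$ amounts to multiplying the first column of the matrix by $10^3$, which shrinks the per-row range without yet placing every coefficient in the target interval:

```python
def coefficient_range(row):
    """Ratio of the largest to the smallest non-zero absolute coefficient."""
    nonzero = [abs(c) for c in row if c != 0.0]
    return max(nonzero) / min(nonzero)

# Hypothetical matrix before substitution (second row spans 1e11)
A = [
    [1e-5, 1e2],
    [1e-9, 1e2],
]
# Substitution factors per column: x_1 is replaced by 1e3 * x'_1
col_factors = [1e3, 1e0]

A_sub = [[c * f for c, f in zip(row, col_factors)] for row in A]

# Every row now spans at most 1e9, although individual coefficients
# (here 1e-6 in the second row) may still lie outside [1e-3, 1e6].
assert all(coefficient_range(row) <= 1e9 for row in A_sub)
```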

In AnyMOD, substitution is performed directly within the var column of a variable's dataframe. As a result, only the value of the variable within the optimization problem is affected; the value accessed by the user is already corrected and always complies with the units documented in the Variables section. The optional argument scaFac of the model constructor overwrites the default factors used for column scaling. As long as no numerical problems occur, changing the defaults is not advised. The table below lists the fields of the optional argument, the variables they apply to, and their default values.

| field | scaled variables | default factor |
|-------|------------------|----------------|
| obj   | objective variable | $10^{0}$ |

The optional argument checkRng can be used to specify a maximum range of coefficients for each row. Rows (= constraints) that violate this range after column scaling will be printed to the REPL. This helps to test and adjust the factors used for substitution.
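A check of this kind boils down to reporting every row whose coefficient range still exceeds a given maximum after column scaling. The sketch below is a hypothetical stand-in for that logic, not the actual checkRng implementation:

```python
def rows_violating_range(matrix, max_range):
    """Return the indices of rows whose coefficient range exceeds max_range."""
    violations = []
    for i, row in enumerate(matrix):
        nonzero = [abs(c) for c in row if c != 0.0]
        if nonzero and max(nonzero) / min(nonzero) > max_range:
            violations.append(i)
    return violations

# Hypothetical matrix: the second row still spans 1e11 and would be reported
A = [
    [1e-2, 1e2],
    [1e-9, 1e2],
]
print(rows_violating_range(A, 1e9))  # → [1]
```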

2. Row scaling

The second step scales the rows (= constraints) of the optimization problem by multiplying them with a constant factor. If the scaling of columns successfully decreased the range of coefficients within each row to at most $10^9$, this step can move all coefficients into the range from $10^{-3}$ to $10^6$. In the example, the factors used are $10^{2}$ and $10^4$ for the first and second row, respectively.
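Completing the sketch from above with the row factors named in the example ($10^2$ and $10^4$): after column substitution, each row is multiplied by its constant factor, which places every hypothetical coefficient inside the target interval:

```python
# Hypothetical matrix after the column substitution step
A_sub = [
    [1e-2, 1e2],
    [1e-6, 1e2],
]
# Constant scaling factor per row, as in the example
row_factors = [1e2, 1e4]

A_scaled = [[c * f for c in row] for row, f in zip(A_sub, row_factors)]

# All coefficients now lie between 1e-3 and 1e6
assert all(1e-3 <= abs(c) <= 1e6 for row in A_scaled for c in row)
```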

By default, the ranges in AnyMOD are narrower than in the example: matrix coefficients range from $10^{-2}$ to $10^{5}$ and values on the right-hand side from $10^{-2}$ to $10^{2}$. Again, these defaults can be overwritten by the coefRng argument of the model constructor.

Variable limits

Numerical stability of the Barrier algorithm can be increased by imposing additional upper limits on model variables. For this purpose, general variable limits can be added to a model by using the bound argument of the model constructor. Since the sole purpose of these limits is to increase solver performance, they are not intended to have any real-world equivalents. Consequently, they should be set to high values that prevent them from becoming binding constraints.

Limits are provided as a NamedTuple with fields for dispatch variables, capacity variables, and the objective itself: (disp = NaN, capa = NaN, obj = NaN). For capacity and dispatch, values are provided in GW; limits on dispatch are scaled to energy units to comply with the temporal resolution of the respective carrier. The limit on the objective function is provided in million Euros.
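The conversion of a dispatch limit from power to energy units can be illustrated as follows. This is a hedged sketch, not AnyMOD code, and the assumed 4-hour resolution is purely hypothetical:

```python
def dispatch_bound_energy(bound_gw, timestep_hours):
    """Convert a power bound in GW into an energy bound in GWh per timestep."""
    return bound_gw * timestep_hours

# A 1000 GW bound at an assumed 4-hour resolution corresponds to
# 4000 GWh per timestep
print(dispatch_bound_energy(1000.0, 4.0))  # → 4000.0
```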

In general, it is strongly advised to provide a limit for the objective function. Doing so achieves a noticeable increase in performance without risking distorted model results: if the set limit is below the actual objective value, the model simply turns infeasible.

General limits on dispatch and capacity variables should only be set with great caution and used as a measure of last resort against numerical instability. Improper limits could create binding constraints that impact final results but remain undetected by the user. In addition, their positive impact on performance is less clear, because they also cause a substantial increase in model size.

Range of factors

Since numerical stability is closely linked to the range of factors in the optimization problem, two more options are available to limit that range. Again, both these options are set as optional arguments of the model constructor.

avaMin sets a lower limit for the availability parameter, meaning all availabilities below this limit are replaced by zero. Since the parameter is inverted within the capacity restrictions, small availabilities can lead to large coefficients and cause numerical instability. The default value is 0.01 (= 1%).
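The cut-off can be sketched as follows; the function name and values are hypothetical, not the actual AnyMOD implementation. Because the capacity restriction divides by availability, a tiny value such as 0.0001 would otherwise introduce a coefficient of $1/0.0001 = 10^4$:

```python
def apply_ava_min(availabilities, ava_min=0.01):
    """Replace availabilities below the threshold with zero."""
    return [a if a >= ava_min else 0.0 for a in availabilities]

# 0.0001 falls below the 1% default and is set to zero
print(apply_ava_min([0.95, 0.0001, 0.02]))  # → [0.95, 0.0, 0.02]
```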

The argument emissionLoss controls whether losses incurred by exchange and the self-discharge of storage are taken into account when emissions are computed. Due to their high range of coefficients, emission constraints are already critical points for numerical stability; adding even smaller factors to account for these losses aggravates the problem. The default is true, meaning self-discharge and exchange losses are accounted for.