`EarlyStopping.Disjunction`

— Type`Disjunction(criteria...)`

An early stopping criterion for loss-reporting iterative algorithms.

Combines the specified stopping `criteria` disjunctively: if any one of the criteria applies, then stop.

**Syntactic sugar.** `c1 + c2 + ...` is equivalent to `Disjunction(c1, c2, ...)`.
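
For instance, the following sketch (loss values illustrative) combines two of the criteria documented on this page, stopping as soon as either applies:

```
using EarlyStopping

# stop on 3 consecutive loss increases OR on any NaN/Inf loss:
criterion = Patience(n=3) + InvalidValue()  # same as Disjunction(Patience(n=3), InvalidValue())

stopping_time(criterion, [9.5, 9.7, 9.8, 9.9])  # 4: three consecutive increases
```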

`EarlyStopping.EarlyStopper`

— Type`EarlyStopper(c...; verbosity=0)`

Instantiate an object for tracking whether one or more stopping criteria `c...` apply, given a sequence of losses.

For a list of possible criteria, do `subtypes(EarlyStopping.StoppingCriterion)`.

**Sample usage**

```
stopper = EarlyStopper(Patience(1), NotANumber())
done!(stopper, 0.123) # false
done!(stopper, 0.234) # true
message(stopper)      # "Early stop triggered by Patience(1) stopping criterion. "
```

**Training losses**

For criteria tracking both an "out-of-sample" loss and a "training" loss (e.g., stopping criterion of type `PQ`), specify `training=true` if the update is for training, as in `done!(stopper, 0.123; training=true)`.

Zero or more training updates may precede each out-of-sample update.

The state of the stopper can be reset or restored to a prior state using `reset!`.
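
For example, here is a minimal sketch interleaving training and out-of-sample updates for a `PQ` criterion (the loss values are illustrative only):

```
using EarlyStopping

stopper = EarlyStopper(PQ(alpha=0.75, k=2))

# two training updates, then one out-of-sample update, per "epoch":
done!(stopper, 10.0; training=true)
done!(stopper, 9.0; training=true)
done!(stopper, 9.5)  # out-of-sample; returns true if a stop is triggered
```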

`EarlyStopping.GL`

— Type`GL(; alpha=2.0)`

An early stopping criterion for loss-reporting iterative algorithms.

A stop is triggered when the (rescaled) generalization loss exceeds the threshold `alpha`.

**Terminology.** Suppose $E_1, E_2, \ldots, E_t$ are a sequence of losses, for example, out-of-sample estimates of the loss associated with some iterative machine learning algorithm. Then the *generalization loss* at time $t$ is given by

$GL_t = 100 \frac{E_t - E_{opt}}{|E_{opt}|}$

where $E_{opt}$ is the minimum value of the sequence.
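
As a small check of the formula, take the illustrative losses `[10.0, 8.0, 9.0]`: the minimum is $E_{opt} = 8.0$, so $GL_3 = 100(9 - 8)/8 = 12.5$, which exceeds a threshold of `alpha=10.0`:

```
using EarlyStopping

stopping_time(GL(alpha=10.0), [10.0, 8.0, 9.0])  # 3: GL exceeds 10.0 at the third loss
```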

`EarlyStopping.InvalidValue`

— Type`InvalidValue()`

An early stopping criterion for loss-reporting iterative algorithms.

Stop if a loss (or training loss) is `NaN`, `Inf` or `-Inf` (or, more precisely, if `isnan(loss)` or `isinf(loss)` is `true`).

For a customizable loss-based stopping criterion, use `WithLossDo` or `WithTrainingLossesDo` with the `stop_if_true=true` option.
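
For example, using `stopping_time` (documented below) with illustrative losses:

```
using EarlyStopping

stopping_time(InvalidValue(), [10.0, Inf, 4.0])  # 2: stop at the infinite loss
stopping_time(InvalidValue(), [10.0, 3.0, 4.0])  # 0: no stop
```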

`EarlyStopping.Never`

— Type`Never()`

An early stopping criterion for loss-reporting iterative algorithms.

Indicates early stopping is to be disabled.

See also `NotANumber`, for stopping on encountering `NaN`.
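
For example, a `Never` criterion reports no stopping time for any sequence of losses:

```
using EarlyStopping

stopping_time(Never(), [10.0, NaN, 4.0])  # 0: never stops, even on NaN
```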

`EarlyStopping.NotANumber`

— Type`NotANumber()`

An early stopping criterion for loss-reporting iterative algorithms.

Stop if a loss of `NaN` is encountered.

**Now deprecated** in favour of `InvalidValue`.

`EarlyStopping.NumberLimit`

— Type`NumberLimit(; n=100)`

An early stopping criterion for loss-reporting iterative algorithms.

A stop is triggered by `n` consecutive loss updates, excluding "training" loss updates.

If wrapped in a `stopper::EarlyStopper`, this is the number of calls to `done!(stopper)`.
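
For example, with illustrative losses:

```
using EarlyStopping

stopping_time(NumberLimit(n=3), [9.0, 8.0, 7.0, 6.0])  # 3: stops on the third update
```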

`EarlyStopping.NumberSinceBest`

— Type`NumberSinceBest(; n=6)`

An early stopping criterion for loss-reporting iterative algorithms.

A stop is triggered when the number of calls to the control, since the lowest value of the loss so far, is `n`.

For a customizable loss-based stopping criterion, use `WithLossDo` or `WithTrainingLossesDo` with the `stop_if_true=true` option.
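
A small sketch, assuming the counting described above (the best loss occurs at the second update):

```
using EarlyStopping

# best loss (9.0) at update 2; two further updates without improvement:
stopping_time(NumberSinceBest(n=2), [10.0, 9.0, 11.0, 12.0])  # 4
```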

`EarlyStopping.PQ`

— Type`PQ(; alpha=0.75, k=5, tol=eps(Float64))`

A stopping criterion for training iterative supervised learners.

A stop is triggered when Prechelt's progress-modified generalization loss exceeds the threshold $PQ_t > \alpha$, or if the training progress drops below $P_j \le \mathrm{tol}$. Here `k` is the number of training (in-sample) losses used to estimate the training progress.

**Context and explanation of terminology**

The *training progress* at time $j$ is defined by

$P_j = 1000 \frac{|M - m|}{|m|}$

where $M$ is the mean of the last `k` training losses $F_1, F_2, \ldots, F_k$ and $m$ is the minimum value of those losses.

The *progress-modified generalization loss* at time $t$ is then given by

$PQ_t = GL_t / P_t$

where $GL_t$ is the generalization loss at time $t$; see `GL`.

PQ will stop when all of the following are true:

- At least `k` training samples have been collected, via `done!(c::PQ, loss; training = true)` or `update_training(c::PQ, loss, state)`.
- The last update was an out-of-sample update (`done!(::PQ, loss; training=true)` always returns `false`).
- The progress-modified generalization loss exceeds the threshold, $PQ_t > \alpha$, **or** the training progress stalls, $P_j \le \mathrm{tol}$.
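
A sketch using the three-argument form of `stopping_time` (documented below); the loss values are illustrative, and the exact stopping time depends on the interleaved training losses:

```
using EarlyStopping

losses      = [10.0, 9.0, 9.5, 8.0, 8.5, 9.0]
is_training = [true, true, false, true, true, false]

# returns when the criterion would trigger a stop (0 if never):
stopping_time(PQ(alpha=0.5, k=2), losses, is_training)
```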

`EarlyStopping.Patience`

— Type`Patience(; n=5)`

An early stopping criterion for loss-reporting iterative algorithms.

A stop is triggered by `n` consecutive increases in the loss.

Denoted "*UP*s" in Prechelt, Lutz (1998): "Early Stopping - But When?", in *Neural Networks: Tricks of the Trade*, ed. G. Orr, Springer.

For a customizable loss-based stopping criterion, use `WithLossDo` or `WithTrainingLossesDo` with the `stop_if_true=true` option.
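
For example, with illustrative losses:

```
using EarlyStopping

stopping_time(Patience(n=2), [10.0, 11.0, 12.0])       # 3: two consecutive increases
stopping_time(Patience(n=2), [10.0, 11.0, 9.0, 12.0])  # 0: increases never accumulate
```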

`EarlyStopping.Threshold`

— Type`Threshold(; value=0.0)`

An early stopping criterion for loss-reporting iterative algorithms.

A stop is triggered as soon as the loss drops below `value`.

For a customizable loss-based stopping criterion, use `WithLossDo` or `WithTrainingLossesDo` with the `stop_if_true=true` option.
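
For example, with illustrative losses:

```
using EarlyStopping

stopping_time(Threshold(value=0.1), [0.5, 0.2, 0.05])  # 3: loss first drops below 0.1
```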

`EarlyStopping.TimeLimit`

— Type`TimeLimit(; t=0.5)`

An early stopping criterion for loss-reporting iterative algorithms.

Stopping is triggered after `t` hours have elapsed since the stopping criterion was initiated.

Any Julia built-in `Real` type can be used for `t`. Subtypes of `Period` may also be used, as in `TimeLimit(t=Minute(30))`.

Internally, `t` is rounded to the nearest millisecond.
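
A sketch of a training loop guarded by a time limit; the loss computation is a stand-in for real training code, and a short limit is used so the sketch terminates quickly:

```
using EarlyStopping, Dates

function train_until_timeout()
    stopper = EarlyStopper(TimeLimit(t=Second(1)))  # stop about one second in
    loss = 1.0
    while !done!(stopper, loss)
        loss *= 0.9  # stand-in for one epoch of training and loss evaluation
    end
    return loss
end

train_until_timeout()
```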

`EarlyStopping.Warmup`

— Type`Warmup(c::StoppingCriterion, n)`

Wait for `n` updates before checking stopping criterion `c`.
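
For example, to ignore the first two (often noisy) loss updates before applying a `Patience` criterion (values illustrative):

```
using EarlyStopping

criterion = Warmup(Patience(n=2), 2)
stopping_time(criterion, [3.0, 4.0, 5.0, 6.0, 7.0])  # later than for Patience(n=2) alone
```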

`EarlyStopping.done!`

— Method`done!(stopper::EarlyStopper, loss; training = false)`

Update `stopper` with the specified `loss` and return `true` if a stop has been triggered, `false` otherwise. Specify `training=true` if `loss` is a training loss.

`EarlyStopping.reset!`

— Method```
reset!(stopper::EarlyStopper)
reset!(stopper::EarlyStopper, state)
```

Reset a stopper to its uninitialized state, or restore it to a particular `state`.
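
For example, a sketch reusing a stopper across runs:

```
using EarlyStopping

stopper = EarlyStopper(Patience(n=1))
done!(stopper, 0.1)  # false
done!(stopper, 0.2)  # true: one increase triggers a stop

reset!(stopper)      # start afresh; the loss history is cleared
done!(stopper, 0.3)  # false: no increase seen since the reset
```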

`EarlyStopping.stopping_time`

— Method```
stopping_time(criterion, losses; verbosity=0)
stopping_time(criterion, losses, is_training; verbosity=0)
```

Determine the stopping time for the iterator `losses`, given `criterion::StoppingCriterion`. Include the `Bool` vector `is_training` of matching length when there is a distinction between "out-of-sample" losses and "training" losses.

If `losses` completes before a stop, then `0` is returned.

```
julia> stopping_time(NotANumber(), [10.0, 3.0, NaN, 4.0])
3

julia> stopping_time(NotANumber(), [10.0, 3.0, 5.0, 4.0])
0
```
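
A sketch of the three-argument form, with an illustrative training/out-of-sample pattern (the `Bool` vector marks which losses are training losses):

```
using EarlyStopping

losses      = [10.0, 11.0, 9.5, 12.0, 13.0]
is_training = [true, false, true, false, false]

# only the out-of-sample losses (11.0, 12.0, 13.0) are compared:
stopping_time(Patience(n=2), losses, is_training)
```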