MCMCDiagnosticTools
Effective sample size and potential scale reduction
The effective sample size (ESS) and the potential scale reduction can be estimated with ess_rhat.
MCMCDiagnosticTools.ess_rhat — Function
ess_rhat(
    samples::AbstractArray{<:Union{Missing,Real},3}; method=ESSMethod(), maxlag=250
)
Estimate the effective sample size and the potential scale reduction of the samples of shape (draws, parameters, chains) with the method and a maximum lag of maxlag.
See also: ESSMethod, FFTESSMethod, BDAESSMethod
The following methods are supported:
MCMCDiagnosticTools.ESSMethod — Type
ESSMethod <: AbstractESSMethod
The ESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains. It is based on the discussion by Vehtari et al. and uses the biased estimator of the autocovariance, as discussed by Geyer. In contrast to Geyer, the divisor n - 1 is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
References
Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
MCMCDiagnosticTools.FFTESSMethod — Type
FFTESSMethod <: AbstractESSMethod
The FFTESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains. The algorithm is the same as that of ESSMethod, but it uses fast Fourier transforms (FFTs) to estimate the autocorrelation.
To be able to use this method, you have to load a package that implements the AbstractFFTs.jl interface such as FFTW.jl or FastTransforms.jl.
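As a sketch, assuming FFTW.jl is installed, the backend is loaded before selecting the method:
julia> using FFTW, MCMCDiagnosticTools
julia> samples = randn(1000, 2, 4);  # shape (draws, parameters, chains)
julia> ess, rhat = ess_rhat(samples; method=FFTESSMethod());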
MCMCDiagnosticTools.BDAESSMethod — Type
BDAESSMethod <: AbstractESSMethod
The BDAESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains. It is based on the discussion by Vehtari et al. and uses the variogram estimator of the autocorrelation function discussed by Gelman et al.
References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC press.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat {R}$ for assessing convergence of MCMC. Bayesian Analysis.
Monte Carlo standard error
MCMCDiagnosticTools.mcse — Function
mcse(x::AbstractVector{<:Real}; method::Symbol=:imse, kwargs...)
Compute the Monte Carlo standard error (MCSE) of samples x. The optional argument method describes how the errors are estimated. Possible options are:
- :bm for batch means [Glynn1991]
- :imse for the initial monotone sequence estimator [Geyer1992]
- :ipse for the initial positive sequence estimator [Geyer1992]
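For example, on synthetic draws of a single parameter (a minimal sketch; mcse returns a scalar standard error estimate):
julia> using MCMCDiagnosticTools
julia> x = randn(1000);
julia> se_imse = mcse(x);            # default: initial monotone sequence estimator
julia> se_bm = mcse(x; method=:bm);  # batch means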
R⋆ diagnostic
MCMCDiagnosticTools.rstar — Function
rstar(
    rng=Random.GLOBAL_RNG,
    classifier,
    samples::AbstractMatrix,
    chain_indices::AbstractVector{Int};
    subset::Real=0.8,
    verbosity::Int=0,
)
Compute the $R^*$ convergence statistic of the samples with shape (draws, parameters) and corresponding chains chain_indices with the classifier.
This implementation is an adaptation of algorithms 1 and 2 described by Lambert and Vehtari.
The classifier has to be a supervised classifier of the MLJ framework (see the MLJ documentation for a list of supported models). It is trained with a subset of the samples. The training of the classifier can be inspected by adjusting the verbosity level.
If the classifier is deterministic, i.e., if it predicts a class, the value of the $R^*$ statistic is returned (algorithm 1). If the classifier is probabilistic, i.e., if it outputs probabilities of classes, the scaled Poisson-binomial distribution of the $R^*$ statistic is returned (algorithm 2).
The correctness of the statistic depends on the convergence of the classifier used internally in the statistic.
Examples
julia> using MLJBase, MLJXGBoostInterface, Statistics
julia> samples = fill(4.0, 300, 2);
julia> chain_indices = repeat(1:3; outer=100);
One can compute the distribution of the $R^*$ statistic (algorithm 2) with the probabilistic classifier.
julia> distribution = rstar(XGBoostClassifier(), samples, chain_indices);
julia> isapprox(mean(distribution), 1; atol=0.1)
true
For deterministic classifiers, a single $R^*$ statistic (algorithm 1) is returned. Deterministic classifiers can also be derived from probabilistic classifiers by e.g. predicting the mode. In MLJ this corresponds to a pipeline of models.
julia> @pipeline XGBoostClassifier name = XGBoostDeterministic operation = predict_mode;
julia> value = rstar(XGBoostDeterministic(), samples, chain_indices);
julia> isapprox(value, 1; atol=0.1)
true
References
Lambert, B., & Vehtari, A. (2020). $R^*$: A robust MCMC convergence diagnostic with uncertainty using decision tree classifiers.
Other diagnostics
MCMCDiagnosticTools.discretediag — Function
discretediag(chains::AbstractArray{<:Real,3}; frac=0.3, method=:weiss, nsim=1_000)
Compute the discrete diagnostic where method can be one of :weiss, :hangartner, :DARBOOT, :MCBOOT, :billingsley, and :billingsleyBOOT.
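A minimal sketch on synthetic categorical draws (the structure of the returned summary is not shown here and may vary between versions):
julia> using MCMCDiagnosticTools
julia> chains = rand(1:3, 1000, 2, 4);  # integer-coded categories, shape (draws, parameters, chains)
julia> result = discretediag(chains; frac=0.3, method=:weiss, nsim=1_000);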
References
Benjamin E. Deonovic, & Brian J. Smith. (2017). Convergence diagnostics for MCMC draws of a categorical variable.
MCMCDiagnosticTools.gelmandiag — Function
gelmandiag(chains::AbstractArray{<:Real,3}; alpha::Real=0.95)
Compute the Gelman, Rubin and Brooks diagnostics [Gelman1992] [Brooks1998]. Values of the diagnostic’s potential scale reduction factor (PSRF) that are close to one suggest convergence. As a rule-of-thumb, convergence is rejected if the 97.5 percentile of a PSRF is greater than 1.2.
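For example, independent synthetic chains should yield PSRF values close to one (a minimal sketch; the structure of the returned summary is not shown here):
julia> using MCMCDiagnosticTools
julia> chains = randn(1000, 2, 4);  # well-mixed draws, shape (draws, parameters, chains)
julia> result = gelmandiag(chains);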
MCMCDiagnosticTools.gelmandiag_multivariate — Function
gelmandiag_multivariate(chains::AbstractArray{<:Real,3}; alpha::Real=0.05)
Compute the multivariate Gelman, Rubin and Brooks diagnostics.
MCMCDiagnosticTools.gewekediag — Function
gewekediag(x::AbstractVector{<:Real}; first::Real=0.1, last::Real=0.5, kwargs...)
Compute the Geweke diagnostic [Geweke1991] from the first and last proportion of samples x.
The diagnostic is designed to assess convergence of posterior means estimated with autocorrelated samples. It computes a normal-based test statistic comparing the sample means in two windows containing proportions of the first and last iterations. Users should ensure that there is sufficient separation between the two windows to assume that their samples are independent. A non-significant test p-value indicates convergence. Significant p-values indicate non-convergence and the possible need to discard initial samples as a burn-in sequence or to simulate additional samples.
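A minimal sketch on synthetic draws (assuming the result bundles the test statistic and its p-value):
julia> using MCMCDiagnosticTools
julia> x = randn(1000);  # draws of a single parameter
julia> result = gewekediag(x; first=0.1, last=0.5);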
MCMCDiagnosticTools.heideldiag — Function
heideldiag(
    x::AbstractVector{<:Real}; alpha::Real=0.05, eps::Real=0.1, start::Int=1, kwargs...
)
Compute the Heidelberger and Welch diagnostic [Heidelberger1983]. This diagnostic tests for non-convergence (non-stationarity) and whether ratios of estimation interval halfwidths to means are within a target ratio. Stationarity is rejected (0) for significant test p-values. Halfwidth tests are rejected (0) if observed ratios are greater than the target, as is the case for s2 and beta[1].
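A minimal sketch on stationary synthetic draws, which should pass both the stationarity and halfwidth tests:
julia> using MCMCDiagnosticTools
julia> x = randn(1000);  # draws of a single parameter
julia> result = heideldiag(x; alpha=0.05, eps=0.1);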
MCMCDiagnosticTools.rafterydiag — Function
rafterydiag(
    x::AbstractVector{<:Real}; q=0.025, r=0.005, s=0.95, eps=0.001, range=1:length(x)
)
Compute the Raftery and Lewis diagnostic [Raftery1992]. This diagnostic determines the number of autocorrelated samples required to estimate a specified quantile $\theta_q$, such that $\Pr(\theta \le \theta_q) = q$, within a desired degree of accuracy. In particular, if $\hat{\theta}_q$ is the estimand and $\Pr(\theta \le \hat{\theta}_q) = \hat{P}_q$ the estimated cumulative probability, then accuracy is specified in terms of r and s, where $\Pr(q - r < \hat{P}_q < q + r) = s$. Thinning may be employed in the calculation of the diagnostic to satisfy its underlying assumptions. However, users may not want to apply the same (or any) thinning when estimating posterior summary statistics because doing so results in a loss of information. Accordingly, sample sizes estimated by the diagnostic tend to be conservative (too large).
Furthermore, the argument r specifies the margin of error for estimated cumulative probabilities and s the probability for the margin of error. eps specifies the tolerance within which the probabilities of transitioning from initial to retained iterations are within the equilibrium probabilities for the chain. This argument determines the number of samples to discard as a burn-in sequence and is typically left at its default value.
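A minimal sketch estimating how many draws are needed to pin down the 2.5% quantile (q=0.025) to within r=0.005 with probability s=0.95:
julia> using MCMCDiagnosticTools
julia> x = randn(10_000);  # draws of a single parameter
julia> result = rafterydiag(x; q=0.025, r=0.005, s=0.95);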
- [Glynn1991] Glynn, P. W., & Whitt, W. (1991). Estimating the asymptotic variance with batch means. Operations Research Letters, 10(8), 431-435.
- [Geyer1992] Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483.
- [Gelman1992] Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457-472.
- [Brooks1998] Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7(4), 434-455.
- [Geweke1991] Geweke, J. F. (1991). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments (No. 148). Federal Reserve Bank of Minneapolis.
- [Heidelberger1983] Heidelberger, P., & Welch, P. D. (1983). Simulation run length control in the presence of an initial transient. Operations Research, 31(6), 1109-1144.
- [Raftery1992] Raftery, A. E., & Lewis, S. M. (1992). How many iterations in the Gibbs sampler? In Bayesian Statistics 4. Oxford University Press, New York.