CalibrationErrorsDistributions.jl

Estimation of calibration errors for models that output probability distributions from Distributions.jl.


There are also Python and R interfaces for this package.

This package extends calibration error estimation for classification models in the package CalibrationErrors.jl to more general probabilistic predictive models that output arbitrary probability distributions.
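As a rough illustration of the intended workflow, the sketch below estimates the squared kernel calibration error (SKCE) of Gaussian predictive distributions against real-valued targets. The estimator name UnbiasedSKCE, the Wasserstein metric for the kernel on predictions, and the callable interface skce(predictions, targets) are assumptions based on the CalibrationErrors.jl interface and may differ between package versions; please consult the documentation for the exact API.

```julia
using CalibrationErrorsDistributions
using Distributions, KernelFunctions

# Hypothetical data: Gaussian predictive distributions and observed real-valued targets.
predictions = [Normal(randn(), 1.0) for _ in 1:100]
targets = randn(100)

# Assumed estimator: an unbiased estimator of the SKCE with a tensor-product kernel,
# combining an exponential kernel with a Wasserstein metric on the predicted
# distributions and a squared exponential kernel on the targets.
# Constructor and kernel names are assumptions and may differ between versions.
skce = UnbiasedSKCE(ExponentialKernel(; metric=Wasserstein()) ⊗ SqExponentialKernel())

# Estimate the SKCE of the predictions with respect to the observed targets.
skce(predictions, targets)
```

For calibrated predictions the estimate should be close to zero; statistical hypothesis tests of this property are provided by CalibrationTests.jl.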

CalibrationTests.jl implements statistical hypothesis tests of calibration.

pycalibration is a Python interface for CalibrationErrors.jl, CalibrationErrorsDistributions.jl, and CalibrationTests.jl.

rcalibration is an R interface for CalibrationErrors.jl, CalibrationErrorsDistributions.jl, and CalibrationTests.jl.

References

If you use CalibrationErrorsDistributions.jl as part of your research, teaching, or other activities, please consider citing the following publications:

Widmann, D., Lindsten, F., & Zachariah, D. (2019). Calibration tests in multi-class classification: A unifying framework. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (pp. 12257–12267).

Widmann, D., Lindsten, F., & Zachariah, D. (2021). Calibration tests beyond classification. In International Conference on Learning Representations (ICLR 2021).