CalibrationErrors.jl

Estimation of calibration errors.

A package for estimating calibration errors from data sets of predictions and targets.
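As a minimal sketch of the intended workflow, the expected calibration error (ECE) of a set of probabilistic classifications could be estimated as follows. Note that the estimator-as-function calling convention shown here is an assumption based on recent releases; the exact API may differ between package versions.

```julia
using CalibrationErrors
using LinearAlgebra

# Toy data set: 100 predicted probability vectors over 3 classes
# and the corresponding observed class labels (1, 2, or 3).
predictions = [normalize!(rand(3), 1) for _ in 1:100]
targets = rand(1:3, 100)

# Expected calibration error (ECE) with a uniform binning of the
# probability simplex into 10 bins per dimension.
ece = ECE(UniformBinning(10))
ece(predictions, targets)
```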

CalibrationErrorsDistributions.jl extends calibration error estimation in this package to more general probabilistic predictive models that output arbitrary probability distributions.

CalibrationTests.jl implements statistical hypothesis tests of calibration.

pycalibration is a Python interface for CalibrationErrors.jl, CalibrationErrorsDistributions.jl, and CalibrationTests.jl.

rcalibration is an R interface for CalibrationErrors.jl, CalibrationErrorsDistributions.jl, and CalibrationTests.jl.

Talk at JuliaCon 2021

The slides of the talk are available as a Pluto notebook.

Citing

If you use CalibrationErrors.jl as part of your research, teaching, or other activities, please consider citing the following publications:

Widmann, D., Lindsten, F., & Zachariah, D. (2019). Calibration tests in multi-class classification: A unifying framework. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (pp. 12257–12267).

Widmann, D., Lindsten, F., & Zachariah, D. (2021). Calibration tests beyond classification. In International Conference on Learning Representations (ICLR 2021).