Overview

CausalELM enables estimation of causal quantities of interest in research designs where a counterfactual must be predicted and compared to the observed outcomes. More specifically, CausalELM provides a simple API for interrupted time series analysis, G-computation, and double machine learning, as well as estimation of the CATE via S-learning, T-learning, X-learning, and R-learning. Once a causal model has been estimated, CausalELM's summarize method reports basic information about the model along with a p-value and standard error estimated with approximate randomization inference. One can then validate the causal modeling assumptions of any model with a single call to the validate method. In all of these implementations, CausalELM predicts the counterfactuals using an extreme learning machine (ELM) that includes an L2 penalty by default. In this context, ELMs strike a good balance between prediction accuracy, generalization, ease of implementation, speed, and interpretability.
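
For example, a typical workflow looks roughly like the minimal sketch below. The random data are placeholders, and the constructor and method names shown here (GComputation, estimate_causal_effect!, summarize, validate) should be checked against the current API documentation.

using CausalELM

# Placeholder data: 5 covariates, a binary treatment, and a continuous outcome
X, T, Y = rand(1000, 5), rand(0:1, 1000), rand(1000)

m = GComputation(X, T, Y)   # or DoubleMachineLearning, SLearner, etc.
estimate_causal_effect!(m)  # predict the counterfactual with an ELM and estimate the effect

summarize(m)  # point estimate, standard error, and randomization inference p-value
validate(m)   # check the plausibility of the causal modeling assumptions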

Features

  • Simple interface enables estimating causal effects in only a few lines of code
  • Analytically derived L2 penalty reduces cross validation time and multicollinearity
  • Fast automatic cross validation works with longitudinal, panel, and time series data
  • Includes 13 activation functions and allows user-defined activation functions (see the sketch after this list)
  • Single interface for continuous, binary, and categorical outcome variables
  • Estimation of p-values and standard errors via asymptotic randomization inference
  • No dependencies outside of the Julia standard library
  • Validate causal modeling assumptions with one line of code
  • Non-parametric randomization (permutation) inference-based p-values for all models
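
As a sketch of the custom activation feature, a user-defined activation function is just an ordinary one-argument Julia function. Passing it through an activation keyword argument, as shown below, is an assumption; verify the exact argument name against the estimator's documented signature.

using CausalELM

# A user-defined, leaky-ReLU-style activation function
my_activation(x) = x > 0 ? x : 0.01x

X, T, Y = rand(1000, 5), rand(0:1, 1000), rand(1000)  # placeholder data

# Assumed keyword argument; check the constructor's docstring
m = DoubleMachineLearning(X, T, Y; activation=my_activation)
estimate_causal_effect!(m)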

What's New?

  • Corrected calculation of L2 penalty
  • Corrected estimation of ATE using double machine learning
  • Added support for categorical treatments for double machine learning
  • Implemented R-learner
  • Fixed E-value calculation
  • Simplified module structure and refactored code

Comparison with Other Packages

Other packages, mainly EconML, DoWhy, and CausalML, have similar functionality. Besides being written in Julia rather than Python, the main differences between CausalELM and these libraries are:

  • CausalELM uses extreme learning machines rather than tree-based or deep learners
  • CausalELM performs cross validation during training
  • CausalELM performs inference via asymptotic randomization inference rather than bootstrapping
  • CausalELM does not require you to instantiate a model and pass it into a separate class or struct for training
  • CausalELM creates train/test splits automatically
  • CausalELM does not have external dependencies: all the functions it uses are in the Julia standard library

Installation

CausalELM requires Julia version 1.7 or greater and can be installed from the REPL as shown below.

using Pkg 
Pkg.add("CausalELM")