GLFixedEffectModels.bias_correction — Method

```julia
bias_correction(model::GLFixedEffectModel, df::DataFrame;
                i_symb::Union{Symbol,Nothing} = nothing,
                j_symb::Union{Symbol,Nothing} = nothing,
                t_symb::Union{Symbol,Nothing} = nothing,
                L::Int64 = 0,
                panel_structure::Symbol = :classic)
```
Asymptotic bias correction after fitting binary choice models with a two-/three-way error component structure.
Arguments

Required Arguments

- `model::GLFixedEffectModel`: a `GLFixedEffectModel` object, which can be obtained from `nlreg()`.
- `df::DataFrame`: the data frame on which `nlreg()` was just run.

Optional Arguments

- `L::Int64`: choice of binwidth; see Hahn and Kuersteiner (2011). The default value is 0.
- `panel_structure::Symbol`: choose from `:classic` or `:network`. The default value is `:classic`.
- `i_symb`: the variable name for the i index in the data frame `df`.
- `j_symb`: the variable name for the j index in the data frame `df`.
- `t_symb`: the variable name for the t index in the data frame `df`.
Available Models

Only the following models are supported:
- Binomial regression, Logit link, Two-way, Classic
- Binomial regression, Probit link, Two-way, Classic
- Binomial regression, Logit link, Two-way, Network
- Binomial regression, Probit link, Two-way, Network
- Binomial regression, Logit link, Three-way, Network
- Binomial regression, Probit link, Three-way, Network
- Poisson regression, Log link, Three-way, Network
- Poisson regression, Log link, Two-way, Network
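A minimal sketch of how the correction might be invoked (not taken from the package docs): the simulated data frame, the column names `id1`, `id2`, `x`, `y`, and the choice to pass `save = [:fe]` to `nlreg` are illustrative assumptions; the `bias_correction` call itself follows the signature above for a classic two-way logit panel.

```julia
# Sketch: simulate a balanced two-way panel, fit a logit with nlreg,
# then apply the classic two-way bias correction.
using GLFixedEffectModels, DataFrames, Distributions, GLM, Random

rng = MersenneTwister(42)
n_i, n_t = 50, 20
df = DataFrame(
    id1 = repeat(1:n_i, inner = n_t),   # i index (e.g. individual)
    id2 = repeat(1:n_t, outer = n_i),   # t index (e.g. time)
    x   = randn(rng, n_i * n_t),
)
df.y = Float64.(df.x .+ randn(rng, n_i * n_t) .> 0)   # binary outcome

m = @formula y ~ x + fe(id1) + fe(id2)
# save = [:fe] is a precaution so the estimated fixed effects travel with the model object
fitted = nlreg(df, m, Binomial(), GLM.LogitLink(); start = [0.1], save = [:fe])

# Classic two-way panel: tell bias_correction which columns index i and t.
corrected = bias_correction(fitted, df; i_symb = :id1, t_symb = :id2,
                            panel_structure = :classic)
```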
GLFixedEffectModels.nlreg — Method

Estimate a generalized linear model with high dimensional categorical variables.
Arguments

- `df`: a Table.
- `FormulaTerm`: a formula created using `@formula`.
- `distribution`: a `Distribution`. See the documentation of GLM.jl for valid distributions.
- `link`: a `Link` function. See the documentation of GLM.jl for valid link functions.
- `CovarianceEstimator`: a method to compute the variance-covariance matrix.
- `save::Vector{Symbol} = Symbol[]`: should residuals/predictions/etas/estimated fixed effects be saved in the data frame `augmentdf`? Can contain any subset of `[:residuals, :eta, :mu, :fe]`.
- `method::Symbol`: a symbol for the method. Default is `:cpu`. Alternatively, `:gpu` requires `CuArrays`; in that case, use the option `double_precision = false` to use `Float32`.
- `contrasts::Dict = Dict()`: an optional `Dict` of contrast codings for each categorical variable in the `formula`. Any unspecified variables will have `DummyCoding`.
- `maxiter::Integer = 1000`: maximum number of iterations.
- `maxiter_center::Integer = 10000`: maximum number of iterations for the centering procedure.
- `double_precision::Bool`: should the demeaning operation use `Float64` rather than `Float32`? Defaults to `true`.
- `dev_tol::Real`: tolerance level for the first stopping condition of the maximization routine.
- `rho_tol::Real`: tolerance level for the step-halving in the maximization routine.
- `step_tol::Real`: tolerance level that accounts for rounding errors inside the step-halving routine.
- `center_tol::Real`: tolerance level for the stopping condition of the centering algorithm. Defaults to 1e-8 if `double_precision = true`, 1e-6 otherwise.
- `separation::Symbol = :none`: method to detect/deal with separation. Currently supported values are `:none`, `:ignore` and `:mu`. See the README for details.
- `separation_mu_lbound::Real = -Inf`: lower bound for the Clarkson-Jennrich separation detection heuristic.
- `separation_mu_ubound::Real = Inf`: upper bound for the Clarkson-Jennrich separation detection heuristic.
- `separation_ReLU_tol::Real = 1e-4`: tolerance level for the ReLU algorithm.
- `separation_ReLU_maxiter::Integer = 1000`: maximal number of iterations for the ReLU algorithm.
Examples

```julia
using GLM, RDatasets, Distributions, Random, GLFixedEffectModels
using DataFrames, CategoricalArrays

rng = MersenneTwister(1234)
df = dataset("datasets", "iris")
df.binary = zeros(Float64, nrow(df))          # binary outcome column
df[df.SepalLength .> 5.0, :binary] .= 1.0     # 1 if SepalLength > 5.0
df.SpeciesDummy = categorical(df.Species)     # fixed-effect grouping variable

m = @formula binary ~ SepalWidth + fe(SpeciesDummy)
x = nlreg(df, m, Binomial(), GLM.LogitLink(), start = [0.2])
```
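The keyword arguments documented above can be combined with a non-default covariance estimator. The following is a hedged sketch, assuming the estimator is passed positionally after the link (as the argument order listed above suggests) and that Vcov.jl supplies `Vcov.cluster`:

```julia
# Sketch building on the example above: clustered standard errors plus
# saving residuals and fixed-effect estimates in the augmented data frame.
using Vcov   # provides Vcov.cluster / Vcov.robust

x_cl = nlreg(df, m, Binomial(), GLM.LogitLink(), Vcov.cluster(:SpeciesDummy);
             start = [0.2], save = [:residuals, :fe], maxiter = 200)
```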