Statistics

Knowledge snippets about statistics

Introduction: My Knowledge Cards

Bayesian Information Criterion

Summary: BIC penalizes model complexity using both the number of parameters $k$ and the total number of data records $n$: $\mathrm{BIC} = k\ln n - 2\ln p(y|\hat\theta)$.
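A minimal sketch of how the two ingredients combine, assuming the standard definition $\mathrm{BIC} = k\ln n - 2\ln\hat L$; the data and the Gaussian model below are illustrative choices, not part of the card:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k * ln(n) - 2 * ln(L-hat)."""
    return k * math.log(n) - 2 * log_likelihood

# Illustrative example: Gaussian model with mean and variance both
# estimated by maximum likelihood, so k = 2.
data = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7]
n = len(data)
mu = sum(data) / n                            # MLE of the mean
var = sum((x - mu) ** 2 for x in data) / n    # MLE of the variance
log_lik = sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
              for x in data)
print(bic(log_lik, k=2, n=n))
```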

Bayes Factors

Summary: $\frac{p(M_1|y)}{p(M_2|y)} = \frac{p(M_1)}{p(M_2)} \cdot \frac{p(y|M_1)}{p(y|M_2)}$. Bayes factor: $BF_{12} = \frac{m(y|M_1)}{m(y|M_2)}$. $BF_{12}$ says how many times more likely model $M_1$ is than $M_2$.
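To make the marginal likelihoods $m(y|M_i)$ concrete, here is a sketch using two Beta-Binomial models, where the marginal likelihood has a closed form; the priors and the data are illustrative assumptions, not from the card:

```python
import math

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_marginal(y, n, a, b):
    """Beta-Binomial marginal likelihood m(y | M) with a Beta(a, b) prior
    on the success probability: C(n, y) * B(y + a, n - y + b) / B(a, b)."""
    return (math.log(math.comb(n, y))
            + log_beta(y + a, n - y + b) - log_beta(a, b))

y, n = 9, 10  # 9 successes out of 10 trials (illustrative data)
# M1: uniform Beta(1, 1) prior; M2: Beta(10, 10) prior concentrated at 0.5
bf12 = math.exp(log_marginal(y, n, 1, 1) - log_marginal(y, n, 10, 10))
print(bf12)  # how many times more likely the data are under M1 than M2
```

With these lopsided data, the diffuse prior of $M_1$ is favored, so $BF_{12} > 1$.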

Akaike Information Criterion

References:
- Akaike Information Criterion @ Wikipedia
- Vandekerckhove, J., & Matzke, D. (2015). Model comparison and the principle of parsimony. Oxford Library of Psychology.
Summary: Suppose we have a model that describes the data-generation process behind a dataset. The distribution given by the model is denoted $\hat f$, while the actual data-generation process follows a distribution $f$. We ask: how good is the approximation $\hat f$? More precisely, how much information is lost if we substitute the model distribution $\hat f$ for the actual distribution $f$? AIC quantifies this information loss as $\mathrm{AIC} = -2\ln p(y|\hat\theta) + 2k$, where $y$ is the data set, $\hat\theta$ is the model parameter estimated by maximum likelihood, $\ln p(y|\hat\theta)$ is the maximized log-likelihood (the goodness of fit), and $k$ is the number of adjustable model parameters; $+2k$ is then a complexity penalty.
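A short sketch of the fit-versus-penalty trade-off, comparing two Gaussian models on the same data; the data and models are illustrative assumptions, not from the card:

```python
import math

def aic(log_likelihood, k):
    """AIC = -2 * ln p(y | theta-hat) + 2k."""
    return -2 * log_likelihood + 2 * k

# Illustrative comparison with the variance fixed to 1:
# M0 fixes the mean at 0 (k = 0); M1 estimates it by ML (k = 1).
data = [0.9, 1.1, 1.3, 0.7, 1.0]

def gauss_log_lik(data, mu):
    return sum(-0.5 * math.log(2 * math.pi) - (x - mu) ** 2 / 2 for x in data)

mu_hat = sum(data) / len(data)
aic0 = aic(gauss_log_lik(data, 0.0), k=0)
aic1 = aic(gauss_log_lik(data, mu_hat), k=1)
print(aic0, aic1)  # lower is better; here M1 wins despite its extra parameter
```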

Reparametrization in Expectation Sampling

Category: { statistics }
Summary: Reparametrize the sampling distribution to simplify drawing samples when computing expectations.
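One common instance of this idea is the location-scale reparametrization of a Gaussian: instead of sampling $X \sim N(\mu, \sigma^2)$ directly, sample $\varepsilon \sim N(0, 1)$ and set $x = \mu + \sigma\varepsilon$, so only a fixed standard-normal sampler is needed regardless of $(\mu, \sigma)$. A sketch (the target expectation is an illustrative choice):

```python
import random

def expect(f, mu, sigma, n=100_000, seed=0):
    """Monte Carlo estimate of E[f(X)] for X ~ N(mu, sigma^2),
    sampling eps ~ N(0, 1) and reparametrizing x = mu + sigma * eps."""
    rng = random.Random(seed)
    return sum(f(mu + sigma * rng.gauss(0.0, 1.0)) for _ in range(n)) / n

# E[X^2] = mu^2 + sigma^2 = 5 for X ~ N(2, 1)
print(expect(lambda x: x * x, mu=2.0, sigma=1.0))
```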

Explained Variation

Category: { statistics }
References: - Explained variation
Summary: The [[Fraser information]] is $I_F(\theta) = \int g(X)\ln f(X;\theta)\,dX$. When comparing two models, $\theta_0$ and $\theta_1$, the information gain is $F(\theta_1) - F(\theta_0)$. The Fraser information is closely related to [[Fisher information]], Shannon information, and [[Kullback information]] (KL divergence). Using it, we can define the relative information gain of a model as $\rho_C^2 = 1 - \frac{\exp(-2F(\theta_1))}{\exp(-2F(\theta_0))}$.
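A sketch of the relative information gain, assuming $F(\theta)$ has been estimated for each model (e.g., as a mean log-likelihood per observation); the numeric values are illustrative, not from the card:

```python
import math

def explained_variation(F0, F1):
    """Relative information gain of model theta_1 over theta_0:
    rho_C^2 = 1 - exp(-2 * (F1 - F0)), with F the Fraser information."""
    return 1.0 - math.exp(-2.0 * (F1 - F0))

# Illustrative values: the richer model theta_1 has higher information
print(explained_variation(F0=-1.45, F1=-1.05))
```

When $F(\theta_1) > F(\theta_0)$ the gain lies in $(0, 1)$, mirroring an $R^2$-style proportion.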

Copula

Category: { statistics }
Summary: Given two uniform marginals, we can apply the inverse CDF of a continuous distribution to form a new joint distribution. Some examples in this notebook: [[Gaussian]] (multivariate normal) copula with Normal, Normal marginals; [[Normal]] and [[Beta]]: Normal, Beta; Gumbel and [[Beta]]: Gumbel, Beta; [[t distribution]]: t, t.
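A sketch of the Gaussian copula step: draw correlated standard normals, push them through the normal CDF to get dependent uniform marginals, then apply the inverse CDFs of the target distributions. Gumbel marginals are used here because their inverse CDF is closed-form (the Beta inverse CDF is not); the correlation and seed are illustrative choices:

```python
import math
import random

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gumbel_inv_cdf(u, mu=0.0, beta=1.0):
    """Inverse CDF of the Gumbel distribution: mu - beta * ln(-ln u)."""
    return mu - beta * math.log(-math.log(u))

def gaussian_copula_sample(rho, inv_cdf_x, inv_cdf_y, rng):
    # Correlated standard normals via a Cholesky factor of [[1, rho], [rho, 1]]
    g1, g2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    z1, z2 = g1, rho * g1 + math.sqrt(1.0 - rho ** 2) * g2
    # Map to uniform marginals with the normal CDF, then to the targets
    u1, u2 = std_normal_cdf(z1), std_normal_cdf(z2)
    return inv_cdf_x(u1), inv_cdf_y(u2)

rng = random.Random(42)
x, y = gaussian_copula_sample(0.8, gumbel_inv_cdf, gumbel_inv_cdf, rng)
print(x, y)
```

Swapping `inv_cdf_x` / `inv_cdf_y` changes the marginals while the copula keeps the dependence structure.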

t Distribution

Category: { Distributions }
Summary: t distribution

Normal Distribution

Category: { Distributions }
Summary: Gaussian distribution