# Poisson Process

Published:
Category: { Statistics }
References:
Summary:
Pages: 24

# Bayes' Theorem

Published:
Category: { Math }
Tags:
References:
Summary: Bayes’ Theorem is stated as $$P(A\mid B) = \frac{P(B \mid A) P(A)}{P(B)},$$ where $P(A\mid B)$ is the posterior probability of A given B, $P(B\mid A)$ is the likelihood, and $P(A)$ is the marginal (prior) probability of A. There is a nice tree diagram of Bayes’ theorem on Wikipedia.
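The theorem is easiest to see with numbers. A minimal sketch, using hypothetical figures (a test with 99% sensitivity, a 5% false-positive rate, and 1% prevalence), with $P(B)$ expanded by the law of total probability:

```python
def bayes_posterior(p_b_given_a, p_a, p_b_given_not_a):
    """P(A|B) via Bayes' theorem; P(B) is expanded with the law of total probability."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Hypothetical numbers: sensitivity 0.99, false-positive rate 0.05, prevalence 0.01
posterior = bayes_posterior(p_b_given_a=0.99, p_a=0.01, p_b_given_not_a=0.05)
print(round(posterior, 4))  # 0.1667: a positive test still leaves P(disease) near 1/6
```

The counterintuitive result (a positive test gives only ~17% posterior probability) is exactly what the tree diagram visualizes: the false positives from the large healthy population dominate.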

# Kendall Tau Correlation

Published:
Category: { Statistics }
Tags:
Summary: Definition. Given two series of data $X$ and $Y$, consider pairs of observations $(x_i, y_i)$ and $(x_j, y_j)$ with $i < j$. A pair is concordant if $x_i < x_j$ and $y_i < y_j$, or $x_i > x_j$ and $y_i > y_j$; the number of such pairs is denoted $C$. A pair is discordant if $x_i < x_j$ and $y_i > y_j$, or $x_i > x_j$ and $y_i < y_j$; the count is denoted $D$. A pair is neither concordant nor discordant whenever a tie occurs. Kendall’s tau is defined as $$\begin{equation} \tau = \frac{C - D}{\text{all possible pairs of comparison}} = \frac{C - D}{n^2/2 - n/2} \end{equation}$$
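The definition translates directly into code. A minimal sketch that counts concordant and discordant pairs over all $n(n-1)/2$ comparisons (ties counted in neither):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau from the definition: (C - D) over all n(n-1)/2 pairs."""
    n = len(x)
    c = d = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            c += 1      # concordant: both series order the pair the same way
        elif s < 0:
            d += 1      # discordant: the two series disagree on the order
        # s == 0: a tie, counted in neither C nor D
    return (c - d) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))   # 1.0  (perfect agreement)
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))   # -1.0 (perfect reversal)
```

This is the $O(n^2)$ definitional form; production implementations use an $O(n \log n)$ merge-sort-based count instead.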

# Jackknife Resampling

Published:
Category: { Statistics }
References:
Summary: The jackknife resampling method estimates the bias and standard error of a statistic by recomputing it on the $n$ leave-one-out subsamples of the data.
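A minimal sketch of the leave-one-out jackknife estimate of a statistic's standard error, using the standard $\frac{n-1}{n}$ inflation factor:

```python
import math

def jackknife_se(data, estimator):
    """Jackknife standard error: recompute the estimator on each
    leave-one-out subsample, then use the spread of the replicates."""
    n = len(data)
    replicates = [estimator(data[:i] + data[i+1:]) for i in range(n)]
    mean_rep = sum(replicates) / n
    # (n-1)/n factor compensates for the strong overlap between subsamples
    return math.sqrt((n - 1) / n * sum((r - mean_rep) ** 2 for r in replicates))

data = [2.0, 4.0, 6.0, 8.0]
se = jackknife_se(data, lambda xs: sum(xs) / len(xs))  # ≈ 1.291
```

For the sample mean the jackknife reproduces the usual $s/\sqrt{n}$ exactly; its value is for estimators with no closed-form standard error.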

# Covariance Matrix

Published:
Category: { Math }
Tags:
References:
Summary: Also known as the second central moment, the covariance matrix is a measure of the spread of a multivariate distribution.
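A minimal sketch of the sample covariance matrix, computed entry by entry from the second central moments (unbiased $n-1$ divisor assumed):

```python
def covariance_matrix(rows):
    """Sample covariance matrix of data given as a list of observation rows.
    Entry (i, j) is the mean of (x_i - mu_i)(x_j - mu_j) with an n-1 divisor."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[k] for r in rows) / n for k in range(d)]
    return [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in rows) / (n - 1)
             for j in range(d)] for i in range(d)]

data = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]  # second column is exactly 2x the first
cov = covariance_matrix(data)                # [[1.0, 2.0], [2.0, 4.0]]
```

The diagonal entries are the per-variable variances; the off-diagonal entry is large here because the two columns are perfectly linearly related.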

# Gamma Distribution

Published:
Category: { Statistics }
References:
Summary: Gamma distribution PDF (shape $\alpha$, rate $\beta$): $$\frac{\beta^\alpha x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)}$$
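The PDF can be evaluated directly with the standard library's gamma function. A minimal sketch in the shape/rate parametrization:

```python
import math

def gamma_pdf(x, alpha, beta):
    """Gamma PDF with shape alpha and rate beta."""
    return beta**alpha * x**(alpha - 1) * math.exp(-beta * x) / math.gamma(alpha)

# With alpha = 1 the Gamma distribution reduces to Exponential(beta):
print(gamma_pdf(2.0, alpha=1.0, beta=0.5))  # 0.5 * exp(-1) ≈ 0.1839
```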

# Cauchy-Lorentz Distribution

Published:
Category: { Statistics }
References:
Summary: The Cauchy-Lorentz distribution arises as the ratio of two independent normally distributed random variables with mean zero (source: https://en.wikipedia.org/wiki/Cauchy_distribution). The Lorentz distribution is frequently used in physics. PDF: $$\frac{1}{\pi\gamma} \left( \frac{\gamma^2}{ (x-x_0)^2 + \gamma^2} \right)$$ The median and mode of the Cauchy-Lorentz distribution are always $x_0$; $\gamma$ is the half-width at half maximum (HWHM), so the FWHM is $2\gamma$.
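A minimal sketch of the PDF, checking the half-width property numerically: the density at $x_0 \pm \gamma$ is half of the peak density $1/(\pi\gamma)$.

```python
import math

def cauchy_pdf(x, x0, gamma):
    """Cauchy-Lorentz PDF with location x0 and scale gamma (the HWHM)."""
    return (1 / (math.pi * gamma)) * gamma**2 / ((x - x0)**2 + gamma**2)

peak = cauchy_pdf(0.0, x0=0.0, gamma=1.0)  # 1/pi at the mode x0
half = cauchy_pdf(1.0, x0=0.0, gamma=1.0)  # at x0 + gamma: exactly half the peak
```

Note that neither the mean nor the variance of this distribution exists, which is why the location is described by the median and mode instead.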

# Categorical Distribution

Published:
Category: { Statistics }
References:
Summary: By generalizing the Bernoulli distribution to $k$ states, we get a categorical distribution. The sample space is $\{s_1, s_2, \cdots, s_k\}$. The corresponding probabilities for each state are $\{p_1, p_2, \cdots, p_k\}$ with the constraint $\sum_{i=1}^k p_i = 1$.
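Sampling from a categorical distribution reduces to inverting the cumulative sum of the probabilities. A minimal sketch with hypothetical states and probabilities:

```python
import random

def sample_categorical(states, probs):
    """Draw one state from a categorical distribution by inverting the CDF."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    u = random.random()
    cumulative = 0.0
    for state, p in zip(states, probs):
        cumulative += p
        if u < cumulative:
            return state
    return states[-1]  # guard against floating-point round-off

# Hypothetical 3-state example; empirical frequencies approach p_i as draws grow
draws = [sample_categorical(["a", "b", "c"], [0.2, 0.5, 0.3]) for _ in range(10_000)]
```

With $k = 2$ this recovers a Bernoulli draw, matching the generalization described above.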

# Binomial Distribution

Published:
Category: { Statistics }
References:
Summary: The number of successes in $n$ independent trials, each with success probability $p$. PMF: $$C_n^k p^k (1-p)^{n-k}$$
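A minimal sketch of the PMF using the standard library's binomial coefficient:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(exactly k successes in n independent trials with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_pmf(2, n=4, p=0.5))  # C(4,2) * 0.5^4 = 6/16 = 0.375
```

Summing the PMF over $k = 0, \dots, n$ gives 1, which is a quick sanity check on any implementation.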

# Beta Distribution

Published:
Category: { Statistics }
References:
Summary: Beta distribution PDF on $x \in [0, 1]$: $$\frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha, \beta)}$$
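The original note embedded an interactive graph that did not survive export. As a substitute, a minimal sketch of the standard Beta PDF, with $B(\alpha, \beta)$ computed via the gamma function:

```python
import math

def beta_pdf(x, a, b):
    """Beta PDF on [0, 1]; B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)."""
    beta_fn = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x**(a - 1) * (1 - x)**(b - 1) / beta_fn

print(beta_pdf(0.5, 1, 1))  # Beta(1, 1) is Uniform(0, 1), so the density is 1.0
```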

# Bernoulli Distribution

Published:
Category: { Statistics }
References:

# Akaike Information Criterion

Published:
References: - Vandekerckhove, J., & Matzke, D. (2015). Model comparison and the principle of parsimony. Oxford Library of Psychology.
Summary: Suppose we have a model that describes the data generation process behind a dataset. The distribution given by the model is denoted $\hat f$, and the actual data generation process is described by a distribution $f$. We ask: how good is the approximation $\hat f$? More precisely, how much information is lost if we substitute the model distribution $\hat f$ for the actual data generation distribution $f$?
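The criterion that estimates this information loss is the standard formula $\mathrm{AIC} = 2k - 2\ln \hat L$ (not stated in the summary above), where $k$ is the number of model parameters and $\hat L$ the maximized likelihood. A minimal sketch with hypothetical log-likelihoods:

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln(L-hat).
    Lower AIC means less estimated information loss; k penalizes parameters."""
    return 2 * k - 2 * log_likelihood

# Hypothetical comparison: a 2-parameter model vs a 5-parameter model
# whose fit is only marginally better.
simple_model  = aic(log_likelihood=-120.0, k=2)  # 244.0
complex_model = aic(log_likelihood=-119.0, k=5)  # 248.0
# The simpler model wins: the small likelihood gain does not justify
# three extra parameters -- the principle of parsimony in action.
```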