Explained Variation

Using [[Fraser information]], we can define a relative information gain by a model. The Fraser information is

$$ I_F(\theta) = \int g(X) \ln f(X;\theta) \, \mathrm d X. $$

When comparing two models, $\theta_0$ and $\theta_1$, the information gain is proportional to $F(\theta_1) - F(\theta_0)$. The Fraser information is closely related to [[Fisher information]], Shannon information, and [[Kullback information]].
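To make the definition concrete, here is a minimal sketch (not part of the original card) that evaluates $I_F(\theta)$ by numerical quadrature for a Gaussian data distribution $g$ and a Gaussian model $f(x;\theta)$. The variable names and the helper `fraser_information` are illustrative assumptions.

```python
import numpy as np
from scipy import stats, integrate

# Toy check of the definition I_F(theta) = \int g(x) ln f(x; theta) dx.
# Both the data distribution g and the model f are Gaussian here; the
# names mu, sigma, m, s and fraser_information are illustrative only.
mu, sigma = 0.0, 1.0   # data distribution g(x) = N(mu, sigma^2)
m, s = 0.5, 1.2        # model parameters theta = (m, s)

g = stats.norm(mu, sigma)
f = stats.norm(m, s)

def fraser_information(g, f):
    """Numerically integrate g(x) * ln f(x) over the real line."""
    integrand = lambda x: g.pdf(x) * f.logpdf(x)
    value, _ = integrate.quad(integrand, -np.inf, np.inf)
    return value

# For two Gaussians the integral has the closed form
#   -1/2 ln(2 pi s^2) - (sigma^2 + (mu - m)^2) / (2 s^2),
# which the quadrature result should reproduce.
closed_form = -0.5 * np.log(2 * np.pi * s**2) - (sigma**2 + (mu - m)**2) / (2 * s**2)
print(fraser_information(g, f), closed_form)
```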

$$ \rho_C ^2 = 1 - \frac{ \exp( - 2 F(\theta_1) ) }{ \exp( - 2 F(\theta_0) ) }, $$

where $F(\theta_0)$ is the Fraser information of a model in which the features and the target variable are all independent, while $F(\theta_1)$ is the Fraser information of a model that predicts the target variable using the features.

$\rho_C^2$ is also called the explained variation, as it indicates how much of the dispersion in the data is explained by the features.
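As a sanity check on this interpretation, here is a hedged sketch assuming Gaussian models fitted by maximum likelihood: $F(\theta_0)$ uses only the mean of the target, $F(\theta_1)$ uses a least-squares fit on a single feature, and $\rho_C^2$ then reduces to the familiar $R^2$. The simulated data, variable names, and the helper function are illustrative assumptions, not from the original card.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a linear relation with Gaussian noise.
n = 10_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=1.5, size=n)

def gaussian_log_likelihood_per_point(residuals):
    """Average Gaussian log-likelihood at the MLE variance,
    used as an empirical estimate of the Fraser information."""
    var = residuals.var()
    return -0.5 * np.log(2 * np.pi * var) - 0.5

# theta_0: features and target treated as independent -> predict y by its mean.
F0 = gaussian_log_likelihood_per_point(y - y.mean())

# theta_1: predict y from x with a least-squares fit.
slope, intercept = np.polyfit(x, y, deg=1)
F1 = gaussian_log_likelihood_per_point(y - (slope * x + intercept))

rho2 = 1 - np.exp(-2 * (F1 - F0))
r2 = 1 - ((y - (slope * x + intercept))**2).sum() / ((y - y.mean())**2).sum()
print(rho2, r2)  # for a Gaussian linear model the two agree
```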
