Bayesian Linear Regression
Nonnegative Matrix Factorization has a bright future
A single-layer neural network creates an embedding space
The bias-variance tradeoff is a key concept in statistical learning
Principal component analysis is a method that removes redundancy among features by examining their variances.
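A minimal sketch of this idea via the SVD of centered data (the `pca` helper and the toy data below are illustrative, not from the note):

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)               # center each feature
    # right singular vectors of the centered data are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                   # top-k directions of maximal variance
    explained = (S ** 2) / (len(X) - 1)   # variance captured by each component
    return Xc @ components.T, components, explained[:k]

rng = np.random.default_rng(0)
# toy data: one high-variance feature direction, one low-variance one
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
Z, comps, var = pca(X, 1)   # keep the single direction that explains most variance
```

Dropping the low-variance directions is exactly the "remove redundancy by looking at variances" step.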
It was discovered that the success of [[mutual information based contrastive learning]] Contrastive …
Normalizing flow is a method to convert a complicated distribution $p(x)$ to a simpler distribution …
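A minimal change-of-variables sketch with a single affine flow (the scale `a` and shift `b` are assumed parameters, not from the note): mapping data back to a standard-normal base and adding the log-Jacobian recovers the data density.

```python
import numpy as np

a, b = 2.0, 1.0   # assumed scale and shift of the affine flow

def log_prob(x):
    """log p(x) under an affine flow with standard-normal base distribution."""
    z = (x - b) / a                                # inverse transform to the base
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))  # standard-normal log-density
    log_det = -np.log(abs(a))                      # log |dz/dx| = -log a
    return log_base + log_det

# The flow pushes N(0, 1) to N(b, a^2), so log_prob(b) should equal the
# log-density of N(1, 4) at its mean.
val = log_prob(1.0)
```

Stacking many such invertible transforms, each with a tractable Jacobian, is what makes the overall map from complicated to simple distribution trainable.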
In GAN, the latent space input is usually random noise, e.g., Gaussian noise. The objective of …
Logistic regression is a simple model for classification
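A minimal sketch from scratch, assuming plain gradient descent on the cross-entropy loss (the toy data and hyperparameters are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=500):
    """Gradient descent on the mean cross-entropy loss; a minimal sketch."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)            # predicted P(y = 1 | x)
        grad_w = X.T @ (p - y) / len(y)   # gradient of the mean cross-entropy
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy separable data: negative inputs labeled 0, positive labeled 1
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])
w, b = fit_logistic(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
```

Thresholding the sigmoid at 0.5 turns the regression output into a class label.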
random forest in machine learning
A generalization of matrix factorization
supervised learning: support vector machine
Contrastive Predictive Coding, aka CPC, is an autoregressive model combined with the InfoNCE loss. …
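The InfoNCE part can be sketched for a single anchor: the loss is the negative log-softmax score of the one positive among the negatives (this is only the loss, not the CPC encoder or autoregressive model; the score vectors below are made up):

```python
import numpy as np

def info_nce(scores, pos_index=0):
    """InfoNCE for one anchor: -log softmax probability of the positive
    sample among one positive and N-1 negatives."""
    scores = scores - scores.max()                    # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum())
    return -log_softmax[pos_index]

# positive scored well above the negatives -> small loss
loss_easy = info_nce(np.array([5.0, 0.0, 0.0, 0.0]))
# indistinguishable scores -> loss saturates at log N
loss_hard = info_nce(np.array([1.0, 1.0, 1.0, 1.0]))
```

The log N ceiling is why the InfoNCE loss lower-bounds mutual information only up to log of the number of samples.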
Max Global Mutual Information: Why not just use the global mutual information of the input and …
Autoencoders (AE) are models that encode inputs into a compact latent space. The simplest …
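A minimal sketch of the encode/decode round trip using a linear autoencoder: with squared reconstruction error, the optimal linear encoder spans the top principal subspace, so the example below builds it from the SVD rather than training it (an illustration under that assumption, not a learned network):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
B = rng.normal(size=(3, 8))
X = A @ B                          # rank-3 data embedded in an 8-dim space

U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:3].T                       # encoder weights: 8 features -> 3 latent dims

Z = X @ W                          # encode into the compact latent space
X_hat = Z @ W.T                    # decode with tied weights
recon_err = np.linalg.norm(X - X_hat)
```

Because the data is exactly rank 3, a 3-dimensional latent space reconstructs it with negligible error; a nonlinear AE replaces the two matrix products with neural networks.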
In an inference problem, the posterior $p(z\vert x)$ is used to infer $z$ from $x$. $$ p(z\vert x) = …
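For reference, a posterior of this form expands via Bayes' rule (a standard identity, stated here as a sketch of where the note is likely headed):

$$ p(z\vert x) = \frac{p(x\vert z)\, p(z)}{p(x)} = \frac{p(x\vert z)\, p(z)}{\int p(x\vert z)\, p(z)\, \mathrm{d}z}, $$

where the normalizing integral $p(x)$ is what typically makes exact inference intractable.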
Layer norm
Batch norm
Centered Kernel Alignment (CKA) is a similarity metric designed to measure the similarity between representations of features in neural networks.
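A minimal sketch of the linear-kernel case (the function and toy data below are illustrative): center each representation, then normalize the cross-covariance against the self-covariances.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n x d1) and Y (n x d2)."""
    X = X - X.mean(axis=0)     # center features
    Y = Y - Y.mean(axis=0)
    # HSIC-style numerator and normalizers for the linear kernel
    num = np.linalg.norm(Y.T @ X, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
# CKA is invariant to orthogonal transforms and isotropic scaling,
# so a rotated, rescaled copy of X should score 1
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))
same = linear_cka(X, 3.0 * X @ Q)
diff = linear_cka(X, rng.normal(size=(50, 10)))
```

The invariances are the point: two networks can represent the same information in rotated coordinates, and CKA still reports them as similar.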