#Machine Learning #Embedding #Word2vec

Word2vec is a word embedding model that learns the probability $p_{neighbours}(w_i, w_o)$ that two words $w_i$ and $w_o$ occur as neighbours in a sentence.

  1. Build a dataset of neighbouring word pairs from a corpus. Variants differ in how the pairs are formed and trained: CBOW, skip-gram, negative sampling.
  2. Encode the words as vectors.
  3. Build a model $f(\{\theta_i\})$ that predicts the probability of the words being neighbours, and fit the parameters $\{\theta_i\}$ to the dataset.
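The three steps above can be sketched end to end. The following is a minimal NumPy illustration of the skip-gram variant with negative sampling; the toy corpus, embedding dimension, window size, learning rate, and epoch count are all illustrative assumptions, not part of the original reference implementation.

```python
import numpy as np

# Hypothetical toy corpus; a real run would use a large text corpus.
corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window = len(vocab), 8, 2

# Step 1: dataset of (centre, context) skip-gram pairs within the window.
pairs = [(idx[corpus[i]], idx[corpus[j]])
         for i in range(len(corpus))
         for j in range(max(0, i - window), min(len(corpus), i + window + 1))
         if i != j]

# Step 2: encode words as vectors (two embedding matrices, the parameters theta).
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # centre-word vectors
W_out = rng.normal(scale=0.1, size=(V, D))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Step 3: train with negative sampling: raise the predicted probability for
# true neighbour pairs, lower it for randomly drawn negative pairs.
lr, k = 0.05, 3
for epoch in range(200):
    for c, o in pairs:
        negs = rng.integers(0, V, size=k)
        for target, label in [(o, 1.0)] + [(n, 0.0) for n in negs]:
            p = sigmoid(W_in[c] @ W_out[target])
            grad = p - label                      # gradient of the logistic loss
            W_out[target] -= lr * grad * W_in[c]
            W_in[c] -= lr * grad * W_out[target]

# p_neighbours for a pair is the sigmoid of the dot product of its vectors.
p = sigmoid(W_in[idx["quick"]] @ W_out[idx["brown"]])
```

In practice the negative samples are drawn from a smoothed unigram distribution rather than uniformly, and `W_in` is kept as the final word embeddings.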


