Negative Sampling

Knowledge of [[CBOW]] (Continuous Bag of Words: use the context to predict the center word) or [[skipgram]] (continuous skip-gram: use the center word to predict the context) is required.

A naive way to train a model of words is to

  1. encode input words and output words using vectors,
  2. use the input word vector to predict the output word vector,
  3. calculate the error between the predicted output word vector and the real output word vector,
  4. minimize the error.
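A minimal sketch of this recipe (the sizes, variable names, and random initialization below are purely illustrative) makes the cost visible: the prediction has to be scored and normalized against every word in the vocabulary at every step.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embed_dim = 10_000, 100                            # illustrative sizes only
W_in = rng.normal(scale=0.1, size=(vocab_size, embed_dim))     # input (center) word vectors
W_out = rng.normal(scale=0.1, size=(vocab_size, embed_dim))    # output (context) word vectors

def naive_step(center_id, context_id):
    """One step of the naive model: predict the output word from the
    center word and measure the error with a softmax over the full vocabulary."""
    v = W_in[center_id]                     # steps 1-2: encode the input word and predict
    scores = W_out @ v                      # one dot product per word in the vocabulary
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                    # full softmax: O(vocab_size) work every step
    return -np.log(probs[context_id])       # step 3: error; step 4 minimizes this by gradient descent
```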

However, projecting onto every output word and calculating the error at each step is very expensive. A trick to avoid this is negative sampling.

Negative sampling adds a new column to the data, which becomes the prediction target.

| Input (Center Word) | Output (Context) | Target (is Neighbour) |
|---------------------|------------------|-----------------------|
| intended            | extravagant      | 1                     |
| intended            | display          | 1                     |
| intended            | to               | 1                     |
| intended            | attract          | 1                     |

Now we have a problem: the target is always 1. This dataset might lead to a network that outputs 1 all the time. We need some negative samples to make it noisy, so we randomly sample words from the dictionary and label them 0.

| Input (Center Word) | Output (Context) | Target (is Neighbour) |
|---------------------|------------------|-----------------------|
| intended            | extravagant      | 1                     |
| intended            | display          | 1                     |
| intended            | to               | 1                     |
| intended            | attract          | 1                     |
| intended            | I                | 0                     |
| intended            | a                | 0                     |
| intended            | intellect        | 0                     |
| intended            | mating           | 0                     |
| intended            | course           | 0                     |
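A small sketch of how such rows could be built and scored (the function names and the uniform sampling are my own illustration, not from the text): each true context word keeps target 1, a few randomly drawn words get target 0, and each row only costs a single dot product plus a sigmoid.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_rows(center_id, context_ids, vocab_size, k=2):
    """Build (center, output, target) rows like the table above:
    true neighbours get target 1, k random words per neighbour get target 0."""
    rows = []
    for ctx in context_ids:
        rows.append((center_id, ctx, 1))                   # positive sample
        for neg in rng.integers(0, vocab_size, size=k):    # negatives drawn from the dictionary
            rows.append((center_id, int(neg), 0))
    return rows

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def row_loss(W_in, W_out, center_id, output_id, target):
    """Binary logistic loss for a single row: one dot product,
    no softmax over the whole vocabulary."""
    p = sigmoid(W_in[center_id] @ W_out[output_id])
    return -np.log(p) if target == 1 else -np.log(1.0 - p)
```

In the actual word2vec implementation the negatives are drawn from a smoothed unigram distribution (counts raised to the 3/4 power) rather than uniformly; uniform sampling is used here only to keep the sketch short.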

For more rigorous derivations, please follow Goldberg2014.
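For orientation, the per-pair objective analyzed there can be written (in slightly simplified notation, with $\vec{w}$ the center word vector, $\vec{c}$ a true context vector, $\vec{c}_i$ the $k$ sampled negatives, and $\sigma$ the sigmoid) as

$$
\log \sigma(\vec{c}\cdot\vec{w}) + \sum_{i=1}^{k} \mathbb{E}_{c_i \sim P_n(c)}\left[ \log \sigma(-\vec{c}_i \cdot \vec{w}) \right],
$$

which is exactly the binary "is neighbour" classification above: push the scores of true pairs up and the scores of sampled pairs down.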
