Skip-gram: Continuous Skip-gram
Published:
Category: { Machine Learning::Embedding }
Tags:
References:
- Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781
- Alammar, J. The Illustrated Word2vec. https://jalammar.github.io/illustrated-word2vec/
Summary: Use the center word to predict each of its surrounding context words (see the sketch below).
Pages: 4
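A minimal NumPy sketch of the idea, with a hypothetical toy corpus and assumed hyperparameters (window = 2, dim = 8): each (center, context) pair gets one SGD step on the full-softmax objective, maximizing log p(context | center).
```python
# Minimal skip-gram sketch (hypothetical toy corpus and sizes).
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}

window = 2  # context radius (assumed hyperparameter)
pairs = [(w2i[corpus[i]], w2i[corpus[j]])
         for i in range(len(corpus))
         for j in range(max(0, i - window), min(len(corpus), i + window + 1))
         if j != i]

dim, lr = 8, 0.05
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # center-word vectors
W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # context-word vectors

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One SGD pass: maximize log p(context | center) under a full softmax.
for center, context in pairs:
    h = W_in[center]            # "hidden layer" is just the center embedding
    p = softmax(W_out @ h)      # predicted distribution over the vocabulary
    err = p.copy()
    err[context] -= 1.0         # cross-entropy gradient w.r.t. the scores
    grad_in = W_out.T @ err
    W_out -= lr * np.outer(err, h)
    W_in[center] -= lr * grad_in
```
The softmax over the whole vocabulary is what makes this step expensive, which is what negative sampling (next note) avoids.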
Negative Sampling
Published:
Category: { Machine Learning::Embedding }
Tags:
References:
- Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781
- Goldberg, Y., & Levy, O. (2014). word2vec Explained: Deriving Mikolov et al.'s negative-sampling word-embedding method. arXiv:1402.3722
- Alammar, J. The Illustrated Word2vec. https://jalammar.github.io/illustrated-word2vec/
Summary: Negative sampling replaces the full softmax over the vocabulary with a few binary classifications against sampled noise words, which makes training much faster (see the sketch below).
Pages: 4
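A minimal NumPy sketch of one negative-sampling update, with hypothetical toy sizes (V = 20, dim = 8, k = 5) and toy unigram counts; the 3/4-power noise distribution follows Mikolov et al. (2013). Instead of a softmax over all V words, the positive pair and k sampled negatives each get a binary logistic update.
```python
# Minimal negative-sampling update (hypothetical toy sizes and counts).
import numpy as np

rng = np.random.default_rng(0)
V, dim, k, lr = 20, 8, 5, 0.05
W_in = rng.normal(scale=0.1, size=(V, dim))
W_out = rng.normal(scale=0.1, size=(V, dim))

counts = rng.integers(1, 100, size=V)  # toy unigram counts (assumption)
noise = counts ** 0.75
noise /= noise.sum()                   # P_n(w) proportional to unigram(w)^(3/4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_pair(center, context):
    """One SGD step: one positive pair plus k sampled negatives."""
    negs = rng.choice(V, size=k, p=noise)
    v = W_in[center]
    # Positive term: push sigmoid(u_context . v) toward 1.
    g = sigmoid(W_out[context] @ v) - 1.0
    grad_v = g * W_out[context]
    W_out[context] -= lr * g * v
    # Negative terms: push sigmoid(u_neg . v) toward 0.
    for n in negs:
        g = sigmoid(W_out[n] @ v)
        grad_v += g * W_out[n]
        W_out[n] -= lr * g * v
    W_in[center] -= lr * grad_v

train_pair(3, 7)  # e.g. one (center, context) index pair
```
Each update now touches k + 1 output rows instead of all V, which is where the speedup comes from.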
CBOW: Continuous Bag of Words
Published:
Category: { Machine Learning::Embedding }
Tags:
References:
- Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781
- Alammar, J. The Illustrated Word2vec. https://jalammar.github.io/illustrated-word2vec/
Summary: Average the context words to predict the center word (see the sketch below).
Pages: 4
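A minimal NumPy sketch with hypothetical toy sizes (V = 20, dim = 8): the context embeddings are averaged into one hidden vector, which predicts the center word through a full softmax. Word order inside the window is discarded, hence "bag of words".
```python
# Minimal CBOW sketch (hypothetical toy sizes).
import numpy as np

rng = np.random.default_rng(0)
V, dim, lr = 20, 8, 0.05
W_in = rng.normal(scale=0.1, size=(V, dim))   # context-word vectors
W_out = rng.normal(scale=0.1, size=(V, dim))  # output (center) vectors

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_example(context_ids, center):
    """One SGD step: averaged context embeddings predict the center word."""
    h = W_in[context_ids].mean(axis=0)  # bag of words: order is discarded
    p = softmax(W_out @ h)
    err = p.copy()
    err[center] -= 1.0                  # cross-entropy gradient on the scores
    grad_h = W_out.T @ err
    W_out -= lr * np.outer(err, h)
    W_in[context_ids] -= lr * grad_h / len(context_ids)

train_example([1, 4, 6, 9], center=5)   # a window of four context words
```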
Alignment and Uniformity
Published:
Category: { Machine Learning::Embedding }
Tags:
Summary: A good representation should cluster similar instances (alignment) and spread different instances apart (uniformity); see the sketch below.
Pages: 4
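One common formalization of this summary is the alignment and uniformity metrics of Wang & Isola (2020): alignment is the mean distance between positive pairs, and uniformity is the log mean Gaussian potential over all pairs on the unit hypersphere. A minimal NumPy sketch on hypothetical toy features, assuming rows of x and y are positive pairs:
```python
# Alignment/uniformity metrics in the style of Wang & Isola (2020),
# on hypothetical toy features; rows of x and y are assumed positive pairs.
import numpy as np

def normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def alignment(x, y, alpha=2):
    """Mean distance between positive pairs: lower = better clustering."""
    return (np.linalg.norm(x - y, axis=1) ** alpha).mean()

def uniformity(x, t=2):
    """Log mean Gaussian potential over all pairs: lower = more spread out."""
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    off = sq[~np.eye(len(x), dtype=bool)]  # drop self-pairs
    return np.log(np.exp(-t * off).mean())

rng = np.random.default_rng(0)
x = normalize(rng.normal(size=(32, 8)))
y = normalize(x + 0.1 * rng.normal(size=(32, 8)))  # noisy positive views
print(alignment(x, y), uniformity(x))
```
Lower values are better on both metrics: alignment rewards clustering similar instances, uniformity rewards separating different ones.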