Uni-Polar Sigmoid
Published:
Category: { Neural Networks }
Tags:
Summary: Uni-polar sigmoid function and its properties
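Since the post body isn't reproduced here, a minimal NumPy sketch of the uni-polar (logistic) sigmoid and its derivative; the function names and example values are my own, not the post's.

```python
import numpy as np

def unipolar_sigmoid(x):
    """Uni-polar (logistic) sigmoid: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def unipolar_sigmoid_derivative(x):
    """Derivative expressed through the function itself: s * (1 - s)."""
    s = unipolar_sigmoid(x)
    return s * (1.0 - s)

print(unipolar_sigmoid(np.array([-2.0, 0.0, 2.0])))  # ~[0.119 0.5 0.881]
```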
ReLU
Published:
Category: { Neural Networks }
Tags:
Summary: Rectified Linear Unit (ReLU) and its properties
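A minimal sketch of ReLU and its (sub)gradient, with names of my own choosing.

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def relu_derivative(x):
    """Sub-gradient: 1 for x > 0, 0 otherwise (0 is a common choice at x = 0)."""
    return (x > 0).astype(float)

print(relu(np.array([-1.5, 0.0, 3.0])))  # [0. 0. 3.]
```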
Radial Basis Function
Published:
Category: { Neural Networks }
Tags:
Summary: Radial basis function (RBF) and its properties
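A sketch assuming the common Gaussian kernel; the post may cover other radial basis functions, and the center and width values here are purely illustrative.

```python
import numpy as np

def gaussian_rbf(x, center, sigma=1.0):
    """Gaussian radial basis function: responds to the distance from a center."""
    r = np.linalg.norm(x - center)
    return np.exp(-r ** 2 / (2.0 * sigma ** 2))

# Peaks at 1.0 when x equals the center and decays with distance.
print(gaussian_rbf(np.array([0.0, 0.0]), np.array([0.0, 0.0])))  # 1.0
print(gaussian_rbf(np.array([1.0, 1.0]), np.array([0.0, 0.0])))  # ~0.368
```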
Leaky ReLU
Published:
Category: { Neural Networks }
Tags:
Summary: Leaky ReLU and its properties
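A minimal sketch of Leaky ReLU; the slope alpha = 0.01 is a common default, not necessarily the one used in the post.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for x > 0, a small slope alpha for x <= 0."""
    return np.where(x > 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, 0.0, 2.0])))  # [-0.02  0.    2.  ]
```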
Hyperbolic Tangent (Tanh)
Published:
Category: { Neural Networks }
Tags:
Summary: Tanh function and its properties
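A minimal sketch of tanh and its derivative; the function names and example values are mine.

```python
import numpy as np

def tanh(x):
    """Hyperbolic tangent: zero-centered output in (-1, 1)."""
    return np.tanh(x)

def tanh_derivative(x):
    """Derivative: 1 - tanh(x)^2."""
    return 1.0 - np.tanh(x) ** 2

print(tanh(np.array([-1.0, 0.0, 1.0])))  # ~[-0.762  0.     0.762]
```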
Conic Section Function
Published:
Category: { Neural Networks }
Tags:
References:
- Dorffner G. Unified framework for MLPs and RBFNs: Introducing conic section function networks. Cybern Syst. 1994;25: 511–554. doi:10.1080/01969729408902340
- Shenouda EAMA. A Quantitative Comparison of Different MLP Activation Functions in Classification. Advances in Neural Networks - ISNN 2006. Springer Berlin Heidelberg; 2006. pp. 849–857. doi:10.1007/11759966_125
Summary: Conic Section Function and its properties
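A rough sketch of the conic section propagation rule as I read Dorffner (1994): a dot-product (MLP-like) term minus cos(omega) times a Euclidean-distance (RBF-like) term; the exact parameterization used in the post may differ.

```python
import numpy as np

def conic_section_unit(x, w, c, omega):
    """One conic-section unit (sketch after Dorffner, 1994).

    Blends the dot-product term of an MLP unit with the Euclidean-distance
    term of an RBF unit; the opening angle omega controls the mix.
    """
    diff = x - c
    dot_part = np.dot(diff, w)        # hyperplane-like term
    dist_part = np.linalg.norm(diff)  # hypersphere-like term
    return dot_part - np.cos(omega) * dist_part

# With cos(omega) = 0 the unit reduces to a (shifted) dot product, i.e. MLP-like behavior.
x = np.array([0.5, -1.0])
print(conic_section_unit(x, w=np.array([1.0, 2.0]), c=np.zeros(2), omega=np.pi / 2))  # ~-1.5
```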
Bipolar Sigmoid
Published:
Category: { Neural Networks }
Tags:
Summary: Bipolar sigmoid function and its properties
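A minimal sketch of the bipolar sigmoid, which maps into (-1, 1) and is equivalent to tanh(x / 2); the names and example values are mine.

```python
import numpy as np

def bipolar_sigmoid(x):
    """Bipolar sigmoid: 2 / (1 + e^(-x)) - 1, output in (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

print(bipolar_sigmoid(np.array([-2.0, 0.0, 2.0])))  # ~[-0.762  0.     0.762]
```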
Rosenblatt's Perceptron
Published:
Category: { Machine Learning }
Tags:
References:
- Vapnik VN. The Nature of Statistical Learning Theory. Springer; 2000.
- Novikoff AB. On Convergence Proofs for Perceptrons. Stanford Research Institute, Menlo Park, CA; 1963.
Summary: Connected perceptrons and the perceptron learning rule
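An illustrative sketch of the perceptron learning rule (weights are nudged toward misclassified points); the toy AND-like dataset, learning rate, and epoch count are my own choices.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Rosenblatt's perceptron rule for labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified point
                w += lr * yi * xi              # move the hyperplane toward it
                b += lr * yi
    return w, b

# Converges on this linearly separable toy problem, as Novikoff's proof guarantees.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
print(train_perceptron(X, y))
```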
McCulloch-Pitts Model
Published:
Category: { Machine Learning }
Tags:
Summary: An artificial threshold neuron that separates the input space into two regions
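A tiny sketch of a McCulloch-Pitts threshold unit; the AND example and threshold value are mine.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fires (1) when the weighted sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With unit weights and threshold 2 the neuron computes logical AND.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # 0
```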
The log-sum-exp Trick
Published:
Category: { Machine Learning }
Tags:
References:
- Eisele R. The log-sum-exp trick in Machine Learning. In: Robert Eisele [Internet]. 22 Jun 2016 [cited 28 Jul 2021]. Available: https://www.xarg.org/2016/06/the-log-sum-exp-trick-in-machine-learning/
- Wang X. Numerical stability of binary cross entropy loss and the log-sum-exp trick. In: Integrative Biology and Predictive Analytics [Internet]. 26 Sep 2018 [cited 28 Jul 2021]. Available: http://tagkopouloslab.ucdavis.edu/?p=2197
Summary: For numerical stability, the log-sum-exp trick can be used when computing losses such as cross entropy
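A short sketch of the trick as described in the cited posts: subtract the maximum before exponentiating so the largest exponent is 0; the example values are mine.

```python
import numpy as np

def log_sum_exp(x):
    """Numerically stable log(sum(exp(x))): shift by the maximum first."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

x = np.array([1000.0, 1001.0, 1002.0])
print(np.log(np.sum(np.exp(x))))  # naive version overflows to inf
print(log_sum_exp(x))             # ~1002.408
```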
Initializing Artificial Neural Networks
Published:
Category: { Neural Networks }
Tags:
References:
- Lippe P. Tutorial 3: Activation Functions — UvA DL Notebooks v1.1 documentation. In: UvA Deep Learning Tutorials [Internet]. [cited 23 Sep 2021]. Available: https://uvadlc-notebooks.readthedocs.io
- Ouannes P. How to initialize deep neural networks? Xavier and Kaiming initialization [Internet]. 22 Mar 2019 [cited 24 Sep 2021]. Available: https://pouannes.github.io/blog/initialization
- Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In: Teh YW, Titterington M, editors. Proceedings of Machine Learning Research. 2010;9: 249–256. Available: http://proceedings.mlr.press/v9/glorot10a.html
- Katanforoosh K, Kunin D. Initializing neural networks. In: deeplearning.ai [Internet]. 2019.
Summary: Initializing a neural network well is important for training and final performance. Some initializations simply don't work, and others degrade the model's performance, so we should choose wisely.
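A hedged sketch of two schemes the references discuss, Glorot/Xavier and He/Kaiming initialization; the uniform/normal variants, layer sizes, and function names are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_uniform(fan_in, fan_out):
    """Glorot/Xavier uniform init: variance scaled by fan_in + fan_out (suited to tanh/sigmoid)."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def kaiming_normal(fan_in, fan_out):
    """He/Kaiming normal init: variance 2 / fan_in (suited to ReLU-family activations)."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W = xavier_uniform(256, 128)
print(W.std())  # roughly sqrt(2 / (256 + 128)) ≈ 0.072
```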