1. Glorot, X. et al.: Deep sparse rectifier neural networks. Proc. 14th Int. Conf. Artif. Intell. Stat. (AISTATS ’11). 15, 315–323 (2011).
  3. Hirose, A.: Complex-valued neural networks. Stud. Comput. Intell. 400, 1–214 (2012).
  4. Kruse, R. et al.: Computational Intelligence: Eine methodische Einführung in Künstliche Neuronale Netze, Evolutionäre Algorithmen, Fuzzy-Systeme und Bayes-Netze. (2015).
  5. Maas, A.L. et al.: Rectifier Nonlinearities Improve Neural Network Acoustic Models. Proc. 30th Int. Conf. Mach. Learn. 28, 6 (2013).
  6. Zeiler, M.D. et al.: On rectified linear units for speech processing. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. pp. 3517–3521 (2013).
  7. Neural Network that Changes Everything - Computerphile, https://www.youtube.com/watch?v=py5byOOHZM8.
  8. Stanford Convolutional Neural Network, http://ufldl.stanford.edu/tutorial/supervised/ConvolutionalNeuralNetwork/.
  9. TensorFlow and deep learning, without a PhD, https://codelabs.developers.google.com/codelabs/cloud-tensorflow-mnist/#0.
  10. Wikipedia: Convolutional neural network, https://en.wikipedia.org/wiki/Convolutional_neural_network.
  11. CS231n Convolutional Neural Networks for Visual Recognition, http://cs231n.github.io/convolutional-networks/#pool.
  12. Srivastava, N. et al.: Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014).