The lecture will provide an introduction to the practical implementation of Deep Learning architectures and will discuss the Statistical Mechanics approach to understanding the general principles underlying their success. Topics include: the structure of deep neural networks; the back-propagation algorithm; training neural networks, using the MNIST dataset as an example; the analysis of Gibbs and online learning of a perceptron in the teacher-student configuration; the calculation of quenched averages using the replica method; the analysis of two-layer networks, using the Committee Machine as an example; the bias-variance trade-off; random matrix theory and the analysis of weight matrices; and the application of neural networks to solving physical problems. An introduction to the use of TensorFlow and Keras will also be given.
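To give a flavor of the teacher-student setting mentioned above, the following minimal sketch (the dimensions, learning rule, and variable names are illustrative choices, not course material) trains a student perceptron online on labels produced by a fixed random teacher and measures the resulting teacher-student overlap:

```python
import math
import random

random.seed(0)
N = 100   # input dimension (illustrative choice)
P = 2000  # number of online examples, so alpha = P / N = 20

# Teacher: a fixed random weight vector defining the "true" rule.
teacher = [random.gauss(0.0, 1.0) for _ in range(N)]
# Student: starts from zero and learns online, one example at a time.
student = [0.0] * N

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def overlap(u, v):
    # Normalized overlap R = u.v / (|u||v|), the order parameter of the theory.
    return dot(u, v) / math.sqrt(dot(u, u) * dot(v, v) + 1e-12)

for _ in range(P):
    # Draw a random Gaussian input and label it with the teacher's rule.
    x = [random.gauss(0.0, 1.0) for _ in range(N)]
    label = 1 if dot(teacher, x) >= 0 else -1
    # Hebbian online update: move the student toward the labeled example.
    student = [w + label * xi / math.sqrt(N) for w, xi in zip(student, x)]

R = overlap(student, teacher)
# Generalization error of a perceptron: eps = arccos(R) / pi.
eps = math.acos(max(-1.0, min(1.0, R))) / math.pi
print(round(R, 3), round(eps, 3))
```

With a sample size well above the input dimension, the overlap R approaches 1 and the generalization error shrinks; the Statistical Mechanics analysis covered in the lecture predicts exactly how these quantities depend on the ratio alpha = P/N.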

The introductory part of the lecture will follow the book

Other parts of the lecture will be based on the book "Statistical Mechanics of Learning" by A. Engel and C. Van den Broeck, which is available in the library.