The role of a layer in deep learning
Deep artificial neural networks (DNNs) have been driving many of the
recent advances in machine learning. An important question in the
theory of DNNs concerns the role played by each layer in the network.
Recently, two bold conjectures were made: the first is that DNNs learn
to perform a series of Renormalization-Group (RG) transformations on
the data they are given; the second is that each subsequent layer in a
DNN increases a certain conditional entropy. In this talk, I’ll discuss
some tests and refinements of these two conjectures. In particular,
I’ll present an information-theory-based formulation of real-space RG
and compare it with more conventional training algorithms for DNNs.
Time permitting, I’ll also discuss the training of DNNs using the above
conditional-entropy-based objective.
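To make the information-theoretic formulation a bit more concrete, here is a minimal sketch of the real-space mutual-information objective of [1], stated under an assumed notation (not spelled out in this abstract): a small block of degrees of freedom is coarse-grained by a stochastic rule Λ(h|v) into a coarse variable H, and the rule is chosen to maximize the mutual information between H and the block's environment E (the rest of the system).

% Sketch of the real-space mutual-information (RSMI) objective, following [1].
% Notation is assumed for illustration: V = block of degrees of freedom,
% E = its environment, H = coarse variable produced by the rule \Lambda(h|v).
\[
  \Lambda^{\star} \;=\; \arg\max_{\Lambda}\; I(\mathcal{H} : \mathcal{E}),
  \qquad
  I(\mathcal{H} : \mathcal{E}) \;=\;
  \sum_{h,\,e} p(h,e)\,\log\frac{p(h,e)}{p(h)\,p(e)} .
\]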
Relevant papers
[1] M. Koch-Janusz and Z.R. (2018)
https://www.nature.com/articles/s41567-018-0081-4
[2] Z.R. and R. A. de Bem (2017) https://openreview.net/forum?id=BJGWO9k0Z
[3] P. M. Lenggenhager, Z.R., S. D. Huber, M. Koch-Janusz (2018)
https://arxiv.org/pdf/1809.09632.pdf