Universality of information processing by shallow neural networks
In this talk, I will first establish a formal equivalence between archetypal models of biologically inspired neural networks (e.g., the Hopfield model for Hebbian learning) and artificial neural networks commonly used in machine learning (e.g., the restricted Boltzmann machine): the thresholds for learning, storing, and retrieving information will be shown to coincide for both classes of models (analytically in the random setting and numerically on structured datasets). Ultimately, this happens because these models share the same underlying statistical-mechanical picture. Once this duality is understood, it will be extended toward more complex and better-performing neural architectures, with a particular focus on dreaming neural networks.
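The core of the Hopfield–RBM duality mentioned above can be illustrated with a minimal numerical sketch: marginalizing out the Gaussian hidden units of an RBM whose weights are the stored patterns reproduces the Hebbian (Hopfield) Hamiltonian, up to a spin-independent constant. The sizes, the pattern matrix `xi`, and the specific Gaussian-hidden-unit parametrization below are illustrative assumptions for this standard textbook version of the duality, not details taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 8, 3  # illustrative numbers of neurons and stored patterns
xi = rng.choice([-1, 1], size=(P, N))  # random binary patterns

def hopfield_energy(sigma):
    # Hebbian Hamiltonian: E(sigma) = -1/(2N) * sum_mu (xi_mu . sigma)^2
    overlaps = xi @ sigma / np.sqrt(N)
    return -0.5 * np.sum(overlaps ** 2)

def rbm_free_energy(sigma):
    # RBM with binary visibles sigma and Gaussian hiddens z_mu:
    #   E(sigma, z) = -sum_mu z_mu (xi_mu . sigma)/sqrt(N) + sum_mu z_mu^2 / 2
    # Integrating out each z_mu on a fine grid gives -log Z_hidden(sigma);
    # the Gaussian integral evaluates to sqrt(2*pi) * exp(overlap^2 / 2).
    z = np.linspace(-10.0, 10.0, 4001)
    dz = z[1] - z[0]
    overlaps = xi @ sigma / np.sqrt(N)
    log_Z = sum(np.log(np.sum(np.exp(m * z - z ** 2 / 2)) * dz) for m in overlaps)
    # Drop the sigma-independent constant (P/2) * log(2*pi)
    return -(log_Z - 0.5 * P * np.log(2 * np.pi))

# The two energies agree up to quadrature error for any spin configuration.
sigma = rng.choice([-1, 1], size=N)
print(hopfield_energy(sigma), rbm_free_energy(sigma))
```

The design choice worth noting is that the hidden-unit integral is Gaussian, so it can be done exactly; the numerical quadrature here just makes the equality checkable for arbitrary spin configurations rather than relying on the closed form.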
Last updated: 08/11/2022