Research

Our group seeks to understand how computation emerges from the complex dynamics and connectivity of artificial and biological systems. We employ a variety of approaches rooted in statistical physics and applied math.

Statistical mechanics of deep learning

We focus on understanding the typical learning and generalization performance of neural networks. To this end, we use tools from the statistical physics of disordered systems and from random matrix theory.
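
As a flavor of the formalism (a standard disordered-systems sketch, not tied to any specific result of ours): the typical, dataset-averaged behavior of a Bayesian network follows from the quenched average of the log-partition function of its posterior, which the replica trick turns into a computable limit.

```latex
% Posterior partition function of a network f_\theta trained on P examples (x^\mu, y^\mu),
% and the replica identity used to average \ln Z over the dataset \mathcal{D}.
Z(\mathcal{D}) = \int d\theta \, P(\theta)\,
    e^{-\beta \sum_{\mu=1}^{P} \ell\left(f_\theta(x^\mu),\, y^\mu\right)},
\qquad
\mathbb{E}_{\mathcal{D}}\!\left[\ln Z\right]
    = \lim_{n \to 0} \frac{\mathbb{E}_{\mathcal{D}}\!\left[Z^{n}\right] - 1}{n}.
```

Random matrix theory typically enters through the spectra of the data and weight covariance matrices that appear in these averages.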

In collaboration with the group of Federica Gerace and Pietro Rotondo, we are currently developing a theory for transfer and multi-task learning in Bayesian deep networks.

We are also particularly interested in the impact of biological constraints on computation, deriving fundamental bounds on the function of biologically plausible neural networks.

Suggested reading:

Efficiency of neural computation

We study the efficiency of neural computation from both energetic and information-theoretic perspectives.

Using methods from stochastic thermodynamics, we recently formulated a computation-dissipation bottleneck for stochastic recurrent neural networks operating at the mesoscale.
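
To give the flavor of the quantities involved (a textbook sketch in our own notation, not the exact functional from the paper): the dissipation side is the steady-state entropy production of the network's Markovian dynamics, which a bottleneck-style objective then weighs against task-relevant information.

```latex
% Steady-state entropy production rate of a Markov jump process with transition
% rates w(x -> x') and stationary distribution p(x); non-negative by construction.
\dot{\Sigma} = \frac{1}{2} \sum_{x \neq x'}
    \left[ p(x)\, w(x \to x') - p(x')\, w(x' \to x) \right]
    \ln \frac{p(x)\, w(x \to x')}{p(x')\, w(x' \to x)} \;\ge\; 0 .
% A bottleneck-style trade-off between information and dissipation, schematically:
\max \; I(\mathrm{output};\, \mathrm{input}) \;-\; \lambda\, \dot{\Sigma} .
```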

Our aim is to develop a mathematical formalism that characterizes fundamental trade-offs between computation and energy consumption in both rate and spiking networks performing a given function.

Suggested reading:

Dynamics and learning in recurrent neural networks

We use concepts and methods from statistical physics and control theory to analyze learning in recurrent neural networks. We have recently focused on quantitatively understanding the role of heterogeneity in RNNs, using both dynamical mean-field theory and numerical approaches.
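
On the numerical side, the natural starting point is the classic random-coupling rate model; the sketch below (our illustration, with hypothetical parameter choices) shows how per-neuron gains can introduce the kind of heterogeneity we study.

```python
# Minimal sketch of a heterogeneous random rate network, dx/dt = -x + J phi(x),
# with couplings J_ij ~ N(0, g_i^2 / N); the gains g_i are a hypothetical choice
# made only for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 1000, 200.0, 0.05
g = rng.uniform(0.5, 2.0, size=N)                 # heterogeneous per-neuron gains
J = g[:, None] * rng.normal(size=(N, N)) / np.sqrt(N)

x = rng.normal(size=N)
for _ in range(int(T / dt)):
    x += dt * (-x + J @ np.tanh(x))               # Euler step of the rate dynamics

print("population variance of phi(x):", np.tanh(x).var())
```

In dynamical mean-field theory the same model reduces to a single-unit equation driven by a Gaussian field whose autocorrelation is determined self-consistently.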

In parallel, we build data-driven models of neural population dynamics, using large-scale recordings from behaving animals performing tasks. This work is mainly done in collaboration with Timo Kerkoele’s lab and Wouter Kroot.

Suggested reading:

Multiscale methods for neural networks

We employ advanced Monte Carlo methods from soft matter physics to study the geometry of loss landscapes in neural-network learning problems. We are currently developing novel optimal coarse-graining methods to analyze how internal representations evolve over the course of learning, with the aim of identifying relevant subsets of neurons in deep networks.
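
In spirit, the sampling side of this program treats the training loss as an energy landscape explored at finite temperature; the snippet below is a minimal Metropolis Monte Carlo sketch on a toy model (all data and parameters are hypothetical, chosen only to illustrate the idea).

```python
# Minimal sketch: Metropolis Monte Carlo over the weights of a toy model, with the
# training loss playing the role of an energy; a soft-matter-style local-move sampler.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 5))
y = np.sin(X @ rng.normal(size=5))                # toy regression targets (hypothetical)

def loss(w):
    return np.mean((np.tanh(X @ w) - y) ** 2)

w = rng.normal(size=5)
beta, step = 50.0, 0.05                           # inverse temperature, proposal width
E = loss(w)
energies = []
for _ in range(20_000):
    w_prop = w + step * rng.normal(size=5)        # local proposal move
    E_prop = loss(w_prop)
    if rng.random() < np.exp(-beta * (E_prop - E)):   # Metropolis acceptance rule
        w, E = w_prop, E_prop
    energies.append(E)

print("mean loss along the chain:", np.mean(energies[5_000:]))
```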

This work is mainly carried out in collaboration with the group of Raffaello Potestio at UniTn.

Suggested reading: