Riccardo De Santi

Decision Making Under Uncertainty: Foundations and Applications in Scientific Discovery.


I am a PhD student in Machine Learning at the ETH AI Center. I am lucky to be advised by Andreas Krause (Learning and Adaptive Systems group), Niao He (Optimization and Decision Intelligence group), and Kjell Jorner (Digital Chemistry Laboratory). I am also part of the Institute of Machine Learning at ETH. During my Bachelor's, I started my research journey in Reinforcement Learning with Prof. Marcello Restelli at Politecnico di Milano. Afterwards, during my MS in Machine Learning and Theoretical CS at ETH, I did research visits at the University of Oxford and Imperial College London, both times under the supervision of Profs. Michael Bronstein and Marcello Restelli. My research was recently awarded an Outstanding Paper Award at ICML 2022.

research interests

I am broadly interested in the foundations of algorithmic decision-making and its applications in automatic scientific discovery. This spans a wide spectrum of areas including:

  • Decision Making Under Uncertainty (Reinforcement/Active Learning, Bayesian Optimization, Bandits)
  • Optimal Experimental Design
  • Submodular and Non-Convex Optimization
  • Causality and Geometric Machine Learning

I strive to design reliable algorithms with guarantees that lead to a theoretical understanding of the underlying problems while remaining relevant to real-world applications, mostly in digital chemistry, including molecular design, drug discovery, and experimental design over chemical spaces.

If you are an MS student wishing to work with me, feel free to contact me here.

Contacts: Google Scholar   |   Twitter   |   LinkedIn   |   Github   |   rdesanti [at] ethz [dot] ch


Jan 17, 2024 Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning accepted at ICLR 2024!
Nov 23, 2023 My TEDx talk Beyond the Limits of the Mind: Scientific Discovery Reimagined is now available online!
Nov 22, 2023 On December 1st I will officially start my PhD at the ETH AI Center, advised by Andreas Krause, Niao He, and Kjell Jorner.
Oct 27, 2023 Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning accepted at the Causal Representation Learning Workshop at NeurIPS 2023!

selected publications

  1. ICLR
    Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning
    Mirco Mutti, Riccardo De Santi, Marcello Restelli, and 2 more authors
    International Conference on Learning Representations (ICLR), 2024
    Causal Representation Learning Workshop at NeurIPS 2023
  2. JMLR
    Convex Reinforcement Learning in Finite Trials
    Mirco Mutti, Riccardo De Santi, Piersilvio De Bartolomeis, and 1 more author
    Journal of Machine Learning Research (JMLR), 2023
  3. AAAI
    Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization
    Mirco Mutti*, Riccardo De Santi*, Emanuele Rossi, and 3 more authors
    Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2023
    Workshop on Spurious Correlations, Invariance, and Stability at ICML 2022 and A Causal View on Dynamical Systems Workshop at NeurIPS 2022
  4. Non-Markovian Policies for Unsupervised Reinforcement Learning in Multiple Environments
    Pietro Maldini*, Mirco Mutti*, Riccardo De Santi, and 1 more author
    Decision Awareness in Reinforcement Learning Workshop and First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward at ICML 2022
  5. NeurIPS
    Challenging Common Assumptions in Convex Reinforcement Learning
    Mirco Mutti*, Riccardo De Santi*, Piersilvio De Bartolomeis, and 1 more author
    Advances in Neural Information Processing Systems (NeurIPS), 2022
    Complex Feedback in Online Learning (CFOL) Workshop at ICML 2022
  6. ICML
    The Importance of Non-Markovianity in Maximum State Entropy Exploration
    Mirco Mutti*, Riccardo De Santi*, and Marcello Restelli
    International Conference on Machine Learning (ICML), 2022
    Outstanding Paper Award at ICML 2022
    ICML Workshop on Reinforcement Learning Theory, 2021