Riccardo De Santi

ETH AI Center PhD student. Generative Optimization and Exploration for Large-Scale Scientific Discovery.


I am a PhD student in Machine Learning at the ETH AI Center, advised by Andreas Krause, Niao He, and Kjell Jorner, and affiliated with the Institute of Machine Learning and NCCR Catalysis. My research focuses on optimization and exploration via generative models, bridging decision-making under uncertainty, optimization, and generative modeling to tackle fundamental challenges in large-scale scientific discovery. I work on mathematical foundations, scalable learning methods, and real-world applications, including enzyme design for sustainable chemistry.

Before this, I worked on unsupervised exploration in RL with Marcello Restelli, earning an Outstanding Paper Award at ICML 2022, and visited Michael Bronstein at the University of Oxford and Imperial College London.

Feel free to reach out if you would like to collaborate, exchange ideas, or are looking for thesis supervision.

Contacts: rdesanti@ethz.ch   |   Google Scholar   |   Twitter   |   LinkedIn   |   GitHub

news

Jun 1, 2024 Global Reinforcement Learning: Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods and Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction both accepted at ICML 2024!
Jan 17, 2024 Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning accepted at ICLR 2024!
Nov 23, 2023 My TEDx talk "Beyond the Limits of the Mind: Scientific Discovery Reimagined" is now available online!
Nov 22, 2023 On December 1st I will officially start my PhD at the ETH AI Center, advised by Andreas Krause, Niao He, and Kjell Jorner.
Oct 27, 2023 Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning accepted at the Causal Representation Learning Workshop at NeurIPS 2023!

selected publications

  1. ICML
    Global Reinforcement Learning: Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods
    Riccardo De Santi*, Manish Prajapat*, and Andreas Krause
    International Conference on Machine Learning (ICML), 2024
  2. ICML
    Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction
    Riccardo De Santi, Federico Arangath Joseph, Noah Liniger, and 2 more authors
    International Conference on Machine Learning (ICML), 2024
  3. ICLR
    Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning
    Mirco Mutti, Riccardo De Santi, Marcello Restelli, and 2 more authors
    International Conference on Learning Representations (ICLR), 2024
    Causal Representation Learning Workshop at NeurIPS 2023
  4. JMLR
    Convex Reinforcement Learning in Finite Trials
    Mirco Mutti, Riccardo De Santi, Piersilvio De Bartolomeis, and 1 more author
    Journal of Machine Learning Research (JMLR), 2023
  5. AAAI
    Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization
    Mirco Mutti*, Riccardo De Santi*, Emanuele Rossi, and 3 more authors
    Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2023
    Workshop on Spurious Correlations, Invariance, and Stability at ICML 2022; A Causal View on Dynamical Systems Workshop at NeurIPS 2022
  6. Non-Markovian Policies for Unsupervised Reinforcement Learning in Multiple Environments
    Pietro Maldini*, Mirco Mutti*, Riccardo De Santi, and 1 more author
    2022
    Decision Awareness in Reinforcement Learning Workshop at ICML 2022; First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward at ICML 2022
  7. NeurIPS
    Challenging Common Assumptions in Convex Reinforcement Learning
    Mirco Mutti*, Riccardo De Santi*, Piersilvio De Bartolomeis, and 1 more author
    Advances in Neural Information Processing Systems (NeurIPS), 2022
    Complex Feedback in Online Learning (CFOL) Workshop at ICML 2022
  8. ICML
    The Importance of Non-Markovianity in Maximum State Entropy Exploration
    Mirco Mutti*, Riccardo De Santi*, and Marcello Restelli
    International Conference on Machine Learning (ICML), 2022
    Outstanding Paper Award at ICML 2022
    ICML Workshop on Reinforcement Learning Theory, 2021