I am a PhD student in Machine Learning at the ETH AI Center. I am lucky to be advised by Andreas Krause (Learning and Adaptive Systems group), Niao He (Optimization and Decision Intelligence group), and Kjell Jorner (Digital Chemistry Laboratory). I am also part of the Institute of Machine Learning at ETH. During my Bachelor's, I started my research journey in Reinforcement Learning with Prof. Marcello Restelli at Politecnico di Milano. Afterwards, during my MSc in Machine Learning and Theoretical CS at ETH, I did research visits at the University of Oxford and Imperial College London, both times under the supervision of Profs. Michael Bronstein and Marcello Restelli. My research was recently recognized with an Outstanding Paper Award at ICML 2022.
I am broadly interested in the foundations of algorithmic decision-making and its applications in automatic scientific discovery. This spans a wide spectrum of areas including:
- Decision Making Under Uncertainty (Reinforcement/Active Learning, Bayesian Optimization, Bandits)
- Optimal Experimental Design
- Submodular and Non-Convex Optimization
- Causality and Geometric Machine Learning
I strive to design reliable algorithms with guarantees, leading both to a theoretical understanding of the underlying problems and to real-world impact, mostly in digital chemistry: molecular design, drug discovery, and experimental design in chemical spaces.
If you are an MSc student wishing to work with me, feel free to contact me here.
- Jan 17, 2024: Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning accepted at ICLR 2024!
- Nov 23, 2023: My TEDx talk Beyond the Limits of the Mind: Scientific Discovery Reimagined is now available online!
- Nov 22, 2023: On December 1st I will officially start my PhD at the ETH AI Center, advised by Andreas Krause, Niao He, and Kjell Jorner.
- Oct 27, 2023: Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning accepted at the Causal Representation Learning Workshop at NeurIPS 2023!
- Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning. International Conference on Learning Representations (ICLR), 2024. Also presented at the Causal Representation Learning Workshop at NeurIPS 2023.
- Convex Reinforcement Learning in Finite Trials. Journal of Machine Learning Research (JMLR), 2023.
- Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2023. Also presented at the Workshop on Spurious Correlations, Invariance, and Stability at ICML 2022 and A Causal View on Dynamical Systems Workshop at NeurIPS 2022.
- Non-Markovian Policies for Unsupervised Reinforcement Learning in Multiple Environments, 2022. Presented at the Decision Awareness in Reinforcement Learning Workshop and the First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward at ICML 2022.
- Challenging Common Assumptions in Convex Reinforcement Learning. Advances in Neural Information Processing Systems (NeurIPS), 2022. Also presented at the Complex Feedback in Online Learning (CFOL) Workshop at ICML 2022.
- The Importance of Non-Markovianity in Maximum State Entropy Exploration. International Conference on Machine Learning (ICML), 2022. Outstanding Paper Award at ICML 2022. Also presented at the ICML Workshop on Reinforcement Learning Theory, 2021.