About me
I am a founding member of Recursive, excited about open-endedness and automating scientific discovery.
In parallel, I am wrapping up my ELLIS PhD, supervised by Eric Schulz (LMU & Helmholtz AI) and Jane X. Wang (Google DeepMind), working at the intersection of Cognitive Science and LLMs. During my PhD I interned at Google DeepMind (Mountain View, CA) and Meta (Menlo Park, CA).
When not doing research, I enjoy football, bouldering, skiing, surfing, calisthenics and dancing.
News
- February 2026: Moved to London to start working at Recursive.
- February 2026: Our paper “The Illusion of Latent Generalization: Bi-Directionality and the Reversal Curse” was accepted at the Re-Align ICLR workshop.
- December 2025: Our paper “Exploring System 1 and 2 communication for latent reasoning in LLMs” was accepted at the FoRLM NeurIPS workshop.
- September 2025: Started an internship at Google DeepMind in Mountain View, CA.
- July 2025: “A foundation model to predict and capture human cognition” is out in Nature.
- May 2025: “Playing repeated games with large language models” is now out in Nature Human Behaviour.
- March 2025: Moved to California (Menlo Park) to start an internship at Meta.
- May 2024: Two papers accepted at ICML 2024: “CogBench: a large language model walks into a psychology lab” and “Ecologically rational meta-learned inference explains human category learning”.
- April 2024: I gave a talk on meta-learning in deep neural networks for the Harvard Efficient-ML seminar series as the “rising star speaker”.
- September 2023: Our paper “Meta-in-context learning in large language models” was accepted at NeurIPS 2023.
- August 2023: I attended the MIT Brains, Minds & Machines Summer School in Woods Hole, MA.
