Publications

Peer-Reviewed Conferences

CogBench: a large language model walks into a psychology lab.
Julian Coda-Forno, Marcel Binz, Jane X. Wang, Eric Schulz.
International Conference on Machine Learning (ICML) (2024).

Ecologically rational meta-learned inference explains human category learning.
Akshay K. Jagadish, Julian Coda-Forno, Mirko Thalmann, Eric Schulz, Marcel Binz.
International Conference on Machine Learning (ICML) (2024).

Meta-in-context learning in large language models.
Julian Coda-Forno, Marcel Binz, Zeynep Akata, Matthew Botvinick, Jane X. Wang, Eric Schulz.
Advances in Neural Information Processing Systems, 36 (2023).

Conference Workshops

Leveraging Episodic Memory to Improve World Models for Reinforcement Learning.
Julian Coda-Forno, Changmin Yu, Qinghai Guo, Zafeirios Fountas, Neil Burgess.
Memory in Artificial and Real Intelligence (MemARI) workshop at NeurIPS (2022).

Preprints

Inducing anxiety in large language models increases exploration and bias.
Julian Coda-Forno, Kristin Witte, Akshay K. Jagadish, Marcel Binz, Zeynep Akata, Eric Schulz (2023).

Playing repeated games with Large Language Models.
Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz (2023).