Notebook
Three papers at ICLR 2026
Three papers accepted at ICLR 2026 workshops.
Directly Optimizing Calibrated Test-Time Uncertainty (TTU workshop) — the core uncertainty paper. The argument is that standard training objectives do not optimise directly for calibration at test time, and that you can do better by targeting the calibration objective explicitly. Ramon has been driving this line for a while; it connects cleanly to the broader ML theory work on what networks actually learn and when.
Direct Learning of Calibration-Aware Uncertainty for Neural PDE Surrogates (AI&PDE workshop) — applies the same ideas to uncertainty quantification for PDE surrogate models, where reliable confidence estimates matter a great deal for downstream scientific use. A focused application with enough structure to make the calibration problem concrete.
Thinking About Thinking With Machines That Think (Post-AGI Science and Society workshop) — the Champalimaud collaboration, with Zachary Mainen, Mariana Amendoeira Duarte, Adrian Razvan Sandru, Kumar Neelabh, Margarida Lopes Gingeira, Mengjiao Zuo, Philippine Decaix, Dunya Yasser Assaf, and Ariel Ziqian Xu. The paper comes out of the INDP doctoral intensive and develops the decomposition principle for measuring structured process in human–AI intellectual work.
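To make the calibration objective behind the first two papers concrete: a standard way to measure miscalibration is expected calibration error (ECE), which bins predictions by confidence and compares each bin's confidence to its accuracy. This is an illustrative sketch of that common metric only; the papers' actual training objective is not reproduced here.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: average |accuracy - confidence| per bin, weighted by bin mass.

    probs: (N, K) array of predicted class probabilities.
    labels: (N,) array of true class indices.
    """
    conf = probs.max(axis=1)                      # confidence of the top prediction
    pred = probs.argmax(axis=1)                   # predicted class
    correct = (pred == labels).astype(float)      # 1.0 where the top prediction is right
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)         # predictions falling in this confidence bin
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap              # weight the gap by the bin's share of samples
    return ece
```

A perfectly calibrated model scores 0; a model that reports 0.9 confidence but is right only 60% of the time accumulates a 0.3 gap in that bin. The argument in the papers, as I read it, is that a training loss like cross-entropy does not directly minimise a quantity of this kind at test time.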
Three different corners of the lab’s research programme are converging on the same venue. Papa Legba will handle the public communications when the workshop dates land.