Notebook
NeurIPS in Mexico City
The motor control paper was presented at NeurIPS 2025 in Mexico City.
World Models as Reference Trajectories for Rapid Motor Adaptation makes the case for treating reinforcement learning and control as genuinely separate problems. The RL module chooses what to do over long horizons; a reflexive controller keeps execution stable under noise and drift. The world model serves both: it supports the RL module's planning, and it generates the reference trajectory that the controller tracks in latent space.
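A minimal sketch of that division of labor, with toy linear latent dynamics standing in for a learned world model. All names, gains, and dimensions here are illustrative assumptions, not the paper's implementation: the slow loop commits to an action plan and unrolls it into a latent reference, and a fast proportional controller corrects deviations from that reference under a drift the model does not know about.

```python
import numpy as np

LATENT_DIM = 4
A = 0.9 * np.eye(LATENT_DIM)   # toy latent dynamics (stand-in for the world model)
B = 0.1 * np.eye(LATENT_DIM)   # effect of actions on the latent state

def world_model_step(z, a):
    """One step of the (assumed linear) latent dynamics."""
    return A @ z + B @ a

def rollout_reference(z0, plan):
    """World model in its planning role: unroll the RL module's action
    plan into a latent reference trajectory, starting from z0."""
    refs, z = [z0], z0
    for a in plan:
        z = world_model_step(z, a)
        refs.append(z)
    return refs

def reflexive_controller(z, z_ref, gain=2.0):
    """Fast tracking controller: proportional correction toward the
    latent reference (the cerebellum-like role)."""
    return gain * (z_ref - z)

# Slow loop: the RL module commits to a plan (random here, for illustration).
rng = np.random.default_rng(0)
z0 = np.zeros(LATENT_DIM)
plan = [rng.normal(size=LATENT_DIM) for _ in range(20)]
reference = rollout_reference(z0, plan)

# Fast loop: execute under unmodeled drift; the controller keeps the real
# trajectory close to the reference despite the perturbation.
z = z0.copy()
drift = 0.05 * np.ones(LATENT_DIM)   # disturbance the world model does not capture
for t, a_plan in enumerate(plan):
    a = a_plan + reflexive_controller(z, reference[t])
    z = world_model_step(z, a) + drift   # real dynamics = model + drift

err = np.linalg.norm(reference[-1] - z)
print(f"final latent tracking error: {err:.3f}")
# prints: final latent tracking error: 0.333
```

Setting the gain to zero lets the same drift accumulate to an error of about 0.878, so even this crude proportional correction cuts the tracking error by more than half without touching the plan.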
The parallel to the cerebellum is not metaphorical. Cortico–basal ganglia loops handle slow, value-based choice; the cerebellum handles rapid predictive corrections in the space those loops define. The paper makes that division precise enough to be testable.
Presenting at NeurIPS gave the work its first real public airing outside the lab. The robotics and neuroscience audiences asked different questions, which is the useful kind of friction. The cerebellar angle drew more interest than expected — most attendees had not seen a framework that tries to be simultaneously a robotics architecture and a mechanistic account of a specific brain structure.
The same paper was also presented at the ICLR 2025 Robot Learning Workshop earlier in the year; the Mexico City talk was the full conference version.