Research
We study how intelligence emerges through learning under constraint. Real systems learn from limited data and noisy observations, under uncertainty, in changing environments, and through local mechanisms of adaptation. These conditions shape how structure is extracted from experience, how representations stabilize, and how knowledge becomes reliable enough to guide action.
Our work develops theories and models of robust adaptive learning across machine learning, neuroscience, and cognition. We focus on the variables that govern this process—signal, noise, uncertainty, dimensionality, curvature, effective complexity, learning time, and control—and use them to understand how systems form internal models of the world. This includes research on uncertainty within deep models, local learning rules in neural systems, adaptation in embodied agents, and the dynamics of learning in high-dimensional regimes.
A central aim of the lab is to identify principles that travel across domains. Biological learning sharpens questions about mechanism, machine learning provides a formal language for testing them, and cognitive theory helps situate them within perception, thought, and behavior. Together, these lines of work form a unified research program centered on adaptive structure formation: how systems detect regularities, represent ambiguity, update under pressure, and remain stable while the world changes.
We extend the same perspective to scientific discovery itself. Discovery is a learning process shaped by hypothesis generation, evidence evaluation, uncertainty management, and iterative model revision. Our work on agentic science grows from this view. We build systems for inquiry by drawing on the same principles that govern minds and learning machines.
NightCity Labs is building a theory-driven NeuroAI program focused on the principles of learning, adaptation, and discovery.
Research themes
Uncertainty in Deep Learning
Calibrated deep learning through adaptive regularisation, online resampling, and geometry-aware Bayesian posteriors.
Colour, Consciousness & Qualia Drift
Empiricist theories of qualia paired with six-fundamental colour experiments, agent diaries, and installations that let people feel new spectra.
Machine Learning Theory
A theory of learning time, finite-data limits, geometry, and nonlinear dynamics across machine learning and cortical development.
Motor Control & Embodied RL
Hierarchical world models, cerebellar-inspired controllers, and perturbation studies that push adaptive behaviour in robots and virtual agents.
Neural Plasticity & Representation
Theory and experiments on how synapses and adaptation rules learn structure across cortex and motor circuits.
Futures of Science · Post-AGI Scientific Practice
How scientific discovery changes when agents become capable collaborators: research on method, meta-science, and the design of new institutional and epistemic forms.
Publications
Featured
Cross-regularization: Adaptive Model Complexity through Validation Gradients
International Conference on Machine Learning (ICML)
2025
Featured
The Blue is Sky: Color Qualia as Learned Associative Structures
Association for the Scientific Study of Consciousness (ASSC)
2025
Featured
World Models as Reference Trajectories for Rapid Motor Adaptation
Conference on Neural Information Processing Systems (NeurIPS)
2025
Scaling of learning time for high dimensional inputs
arXiv
Thinking About Thinking With Machines That Think
ICLR 2026 — Post-AGI Science and Society Workshop
Precise Bayesian Neural Networks
arXiv
Twin-Boot: Uncertainty-Aware Optimization via Online Two-Sample Bootstrapping
arXiv
An Empiricist Connectionist Theory of Consciousness and Qualia
Working paper
The Relational Brain: Toward a New Neuroaesthetics
Essay
World Models as Reference Trajectories for Rapid Motor Adaptation
International Conference on Learning Representations (ICLR) – Robot Learning Workshop
Learning what matters: Synaptic plasticity with invariance to second-order input correlations
PLOS Computational Biology
Correlation-invariant synaptic plasticity
arXiv
Nonlinear Hebbian learning as universal principle in receptive field development
PLOS Computational Biology