NightCity Labs

Research theme

Neural Plasticity & Representation

Theory and experiments on how synaptic plasticity and adaptation rules build structured representations in cortical and motor circuits.

Why plasticity matters

How does cortex build useful internal pictures of the world using only local synaptic changes and no external teacher? We look for a handful of learning rules that make sense as algorithms and match what we know about biological plasticity. The aim is a general account of representation learning in cortex: how raw sensory streams become features and concepts, and how that mirrors what artificial networks do.

What is being learned?

Representation learning is about discovering structure in sensory inputs—edges, textures, phonemes, rhythms, recurring patterns that aid prediction and behaviour. Deep networks solve this with backpropagation and large datasets. Brains do not have a global error or central coordinator; synapses only see spikes and local chemistry, yet development yields precise selectivities. We ask: what local rules could give rise to this kind of feature learning across areas and modalities?

A single idea behind many models

Decades of models—sparse coding, ICA, BCM, STDP networks—appear different but share a core mechanism. The 2016 nonlinear Hebbian work showed that combining Hebbian plasticity with neuronal nonlinearities yields an effective learning rule: neurons tune themselves to the input patterns that drive them above threshold. Applied to natural statistics, this produces oriented, localised V1-like receptive fields and analogous features in other modalities without elaborate machinery. Many classic theories fall out as special cases of this generic nonlinear-Hebbian behaviour.
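
A minimal sketch of the idea, assuming a generic form of the rule (roughly Δw ∝ f(w·x)·x with a rectifying nonlinearity f and a norm constraint on w; the data, nonlinearity, and constants below are illustrative choices, not the published implementation):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy inputs: each sample is one of four sparse patterns plus noise.
    # (Illustrative statistics; the published work uses natural images.)
    n_inputs, n_samples = 20, 30_000
    patterns = 3.0 * np.eye(n_inputs)[:4]
    X = patterns[rng.integers(4, size=n_samples)]
    X = X + 0.3 * rng.standard_normal((n_samples, n_inputs))

    def f(u):
        # Rectified, expansive nonlinearity: strong drive is amplified,
        # weak drive is effectively ignored.
        return np.maximum(u, 0.0) ** 3

    w = rng.random(n_inputs)
    w /= np.linalg.norm(w)
    eta = 1e-3

    for x in X:
        y = f(w @ x)                # nonlinear postsynaptic response
        w += eta * y * x            # Hebbian: presynaptic x times postsynaptic y
        w /= np.linalg.norm(w)      # norm constraint stands in for homeostasis

    print(np.round(w, 2))           # w has concentrated on one sparse pattern

Which pattern the neuron settles on depends on its initial weights; across a population, lateral interactions would spread different neurons over different patterns.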

Learning what matters in messy inputs

Cortical neurons do not receive decorrelated, preprocessed signals; they listen to other neurons whose activities vary wildly in amplitude, correlation, and noise. In that regime, simple Hebbian rules latch onto whatever fluctuates most—overall brightness or shared drift—rather than the sparse structure we care about.
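
A toy experiment makes the failure mode concrete. Plain Hebbian learning under a norm constraint behaves like power iteration on the input covariance, so it converges to the direction of largest variance; feed it a strong shared fluctuation riding on weak sparse features (illustrative statistics of our choosing) and it aligns with the shared mode:

    import numpy as np

    rng = np.random.default_rng(1)

    # A large shared fluctuation (think overall brightness) on top of
    # weak, rare sparse features. Illustrative statistics only.
    n_inputs, n_samples = 20, 20_000
    shared = rng.standard_normal((n_samples, 1))               # common mode
    sparse = (rng.random((n_samples, n_inputs)) < 0.05) * 1.0  # rare features
    X = shared + sparse + 0.1 * rng.standard_normal((n_samples, n_inputs))

    # Plain Hebbian rule with a norm constraint.
    w = rng.standard_normal(n_inputs)
    w /= np.linalg.norm(w)
    eta = 1e-3
    for x in X:
        y = w @ x                   # linear response, no nonlinearity
        w += eta * y * x            # Hebbian update
        w /= np.linalg.norm(w)

    # w ends up nearly uniform: it tracks the shared mode (the top
    # principal component) and ignores the sparse features entirely.
    print(np.round(w, 2))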

Correlation-invariant synaptic plasticity tackles this directly by keeping the rule local and Hebbian while filtering out broad second-order correlations. In the full formulation:

  1. Potentiation amplifies synapses whenever the neuron fires strongly in response to a particular combination of presynaptic inputs.
  2. Depression tracks average correlations across those inputs and subtracts them off so that common co-fluctuations do not dominate.
  3. Slow homeostasis keeps the neuron in a dynamical regime where both effects can operate without saturating.

Together these act like a local searchlight for rare, informative patterns while ignoring large but uninformative co-variations. In analysis and simulation the rule recovers meaningful features even with heavy correlations and noise, distributes specialisations across the population with little redundancy, and produces V1-like receptive fields from natural images without pre-whitening or hand-tuned inhibition. It also lines up with phenomenological rules such as BCM and triplet STDP, suggesting those experiments are glimpses of the same underlying learning objective.
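
One toy reading of these three ingredients, as a hedged sketch (the published rule's exact form, thresholds, and time constants differ; every constant here is an illustrative choice):

    import numpy as np

    rng = np.random.default_rng(2)

    # Broadly correlated, noisy inputs hiding one rare informative pattern.
    # Statistics and constants are illustrative, not the published setup.
    n_inputs, n_samples = 20, 60_000
    mix = np.eye(n_inputs) + 0.15 * rng.standard_normal((n_inputs, n_inputs))
    feature = np.zeros(n_inputs)
    feature[:5] = 1.0
    X = rng.standard_normal((n_samples, n_inputs)) @ mix.T
    X += 4.0 * (rng.random((n_samples, 1)) < 0.05) * feature

    w = rng.random(n_inputs)
    w /= np.linalg.norm(w)
    C = np.eye(n_inputs)          # running estimate of input correlations
    theta = 1.0                   # adaptive threshold
    eta, tau_c, tau_h = 2e-3, 1e-3, 1e-4

    for x in X:
        y = np.maximum(w @ x - theta, 0.0) ** 2   # 1. thresholded potentiation
        C += tau_c * (np.outer(x, x) - C)         # 2. track average correlations...
        w += eta * y * (x - C @ w)                # ...and subtract them from the drive
        w /= np.linalg.norm(w)
        theta += tau_h * (y - 0.05)               # 3. slow homeostasis on activity

    print(np.round(w, 2))   # w should now load mainly on the five feature inputs

The depression term is what buys correlation invariance here: the running estimate C absorbs the broad second-order structure, so only drive that C cannot explain ends up potentiating.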

Matching theory, algorithm, and biology

This programme ties together normative theory (sparse, informative features), learning algorithms (local updates that pursue those objectives), and biology (plasticity mechanisms observed in cortex). The nonlinear-Hebbian and correlation-invariant frameworks make a concrete claim: if cortex learns structure from raw streams, synaptic changes should look like these rules, and they should be implementable with known excitatory plasticity mechanisms. We are not retrofitting biological plausibility onto deep learning; we are proposing candidate learning rules for unsupervised representation learning in cortex itself.

Toward a general theory

The current focus is single-layer sensory circuits, where learned receptive fields can be compared directly to experimental data. Next steps:

  • Cortical hierarchies. Stack these rules across layers with inhibition and feedback to see how multi-stage representations emerge.
  • Dendrites and compartments. Understand how dendritic nonlinearities reshape the effective learning rule and widen the feature class.
  • Time and prediction. Combine unsupervised rules with temporal structure, prediction errors, and neuromodulators so they operate in dynamic, reward-driven settings.

In parallel, the same ideas inform artificial representation learning; many self-supervised objectives look like large-scale descendants of these local principles.