Research theme
Futures of Science · Post-AGI Scientific Practice
How scientific discovery changes when agents become capable collaborators: research on method, meta-science, and the design of new institutional and epistemic forms.
The question
Science is a learning process. It depends on hypothesis generation, evidence evaluation, uncertainty management, and iterative model revision. These are the same operations we study in adaptive systems, and they are now being reshaped by agents capable of performing them.
The central question is not whether AI makes science faster. It is how scientific practice changes shape when agents become genuine collaborators: how ideas are generated and filtered, how experiments are designed, how results are interpreted, how institutions learn to think with machine collaborators, and what survives of individual judgment and authorship in that transition.
This theme is about that design problem. It sits at the intersection of meta-science, philosophy of inquiry, and the practical work of building an AI-native institute.
What this research covers
The scope spans three connected lines of work:
Discovery as a designed system. The NightCity Labs project is built on the premise that research can be treated as a software process, one that can be decomposed, instrumented, and improved. This means studying not just what science produces but how it runs: the structures, rhythms, interfaces, and coordination mechanisms through which a lab generates reliable knowledge. Research here includes the design of agent workflows for literature synthesis, hypothesis generation, experiment planning, and consolidation, and the question of how human and machine contributions are allocated and combined.
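Treating research as a software process implies that a lab's workflow can be expressed as explicit, instrumented stages. The following sketch is purely illustrative (the stage names and `Pipeline` structure are hypothetical, not an actual NightCity Labs implementation): it shows how a research loop might be decomposed into named stages whose execution is logged, so the process itself becomes inspectable and improvable.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Stage:
    """One named step of the research loop: takes and returns a shared context."""
    name: str
    run: Callable[[Dict], Dict]

@dataclass
class Pipeline:
    """Executes stages in order, recording which stage ran (instrumentation)."""
    stages: List[Stage]
    log: List[str] = field(default_factory=list)

    def execute(self, context: Dict) -> Dict:
        for stage in self.stages:
            context = stage.run(context)
            self.log.append(stage.name)  # record the step for later inspection
        return context

# Toy stand-ins for literature synthesis, hypothesis generation, and
# experiment planning; the bodies are placeholders, not real agents.
pipeline = Pipeline(stages=[
    Stage("literature_synthesis", lambda c: {**c, "claims": ["A", "B"]}),
    Stage("hypothesis_generation",
          lambda c: {**c, "hypotheses": [f"test {x}" for x in c["claims"]]}),
    Stage("experiment_planning", lambda c: {**c, "plan": len(c["hypotheses"])}),
])
result = pipeline.execute({})
```

Because each stage is a named, replaceable unit, questions about how human and machine contributions are allocated become concrete: any stage can be assigned to a person, an agent, or a combination, and the log shows how the process actually ran.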
Preservation of plurality. As AI systems become capable of producing competent outputs across intellectual tasks, the question of what humans supply becomes structurally important. Intellectual ecosystems depend on attributable positions, perspectives that are situated and answerable, because this is where diversity and disagreement arise. Research here asks how to design collaboration so that individual judgment remains structurally present rather than smoothed out by the statistical tendency of language models toward consensus-shaped prose. This includes formal frameworks for decomposition, commitment, and epistemic ownership in human–AI work.
Institutions and forms. New scientific practice requires new institutional forms. The lab itself is an experiment in this direction: an AI-native research institute that operates with a small human core and an extended population of agents. Open questions include how credit, authorship, and accountability work when agents participate in the research loop, and how the transition from traditional academic structures to post-AGI ones can be managed without losing what is valuable in existing forms.
Example: Thinking About Thinking With Machines That Think
Thinking About Thinking With Machines That Think (Mainen, Brito et al., ICLR 2026 Post-AGI Workshop) is an early exploration in this direction. The paper develops a theoretical account of process flattening, the failure mode in which intellectual work loses visible decision structure and drifts toward generic, high-probability discourse as human–AI interaction defaults to the model’s assistant mode.
The work was conducted at Champalimaud Research with Zachary Mainen’s group, in collaboration with the 2025 cohort of the INDP (International Neuroscience Doctoral Programme) at the Champalimaud Foundation. The doctoral intensive, a one-week course in which neuroscience PhD students wrote essays on thinking with AI at every stage, served as both testbed and subject matter. The core contribution is the decomposition principle: structured stages create explicit decision points at which participants must select among alternatives and commit to positions, rather than allowing the interaction to resolve by default. The paper also introduces the blind prompt method, a capability-agnostic instrument for measuring the contribution of structured process beyond default model resolution.
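One way to read "capability-agnostic" is that the same prompt is run with and without an explicit staged process, and both outputs are scored by the same judge, so raw model capability cancels out of the difference. The sketch below is a hypothetical illustration of that comparison logic, not the paper's actual protocol; every function name and the toy scoring rule are assumptions.

```python
from typing import Callable

def process_contribution(
    prompt: str,
    model: Callable[[str], str],       # any text-in, text-out model
    structure: Callable[[str], str],   # wraps the prompt in explicit decision points
    score: Callable[[str], float],     # judge applied identically to both outputs
) -> float:
    """Score(structured run) minus score(default run): the part of the
    result attributable to the structured process rather than the model."""
    default_output = model(prompt)               # model's default resolution
    structured_output = model(structure(prompt)) # same model, staged process
    return score(structured_output) - score(default_output)

# Toy stand-ins: a "model" that echoes its input, a structure that adds
# explicit commitment markers, and a score that just counts those markers.
delta = process_contribution(
    "essay question",
    model=lambda p: p,
    structure=lambda p: p + " [choose position] [commit]",
    score=lambda text: float(text.count("[")),
)
```

The design choice worth noting is that `model` is opaque: the measurement depends only on the difference between the two runs, so it can be applied across models of different capability levels.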
The collaboration with Champalimaud reflects the lab’s view that post-AGI science questions are most productively studied inside working research environments, not as external observations of them.
Connection to the residency
The Futures of Science path inside the NightCity Residency is the direct extension of this theme. Residents working in this area engage with the open questions through technical essays, frameworks, and prototypes that connect capabilities to measurable changes in how research is practiced.