The Topos Oxford Seminar is a weekly, informal seminar hosting talks that are broadly aligned with the research themes of the Topos Institute. Links to recordings of the talks and accompanying slides (when available) are given below. You can also view a YouTube playlist of all recordings.
Who Cares About Values?
B. Rousse
(Topos)
Jun 19, 2025
Today it is common to hear about “human values” and the importance of designing technologies that “align” with our values. But where does this notion of “human values” come from? In this talk I trace the history of the concept of “human values.” I connect this notion with an evolution in our understanding of human autonomy, and argue that both are inadequate abstractions for the challenges of being human in our technological age. Finally, I introduce the notion of care as an alternative to “values,” showing how it furnishes a subtler map for imagining and shaping our relationship to technology today.
Double functorial representation of indexed monoidal structures
José Siqueira
(Topos)
Jun 12, 2025
The data of a logical doctrine acts in two directions: that of the substitution operation, and that of quantification. Double Category Theory provides the tools to capture these two actions and their relationship concomitantly: in a double category of spans, re-indexing and acting on predicates can be packaged as tight and loose arrows respectively, and there tight arrows always have conjoints. By mapping these conjoints into a double category of quintets, we obtain adjunctions internal to a 2-category. Moreover, by using the notion of an adequate triple, the indexing category need only have certain pullbacks. Adding a monoidal structure to the fibers (and to the double pseudofunctor) allows us to capture both the Beck-Chevalley and Frobenius conditions. We will concern ourselves with the monoidal analogues of regular hyperdoctrines and similar structures (where the objects of predicates are not necessarily posets, but rather live in some 2-category), and show how they are equivalent to (lax symmetric monoidal) double pseudofunctors between span and quintet double categories.
AI for precision oncology
Elizabeth Amelia
(Imperial College London)
May 29, 2025
This talk centers around the design and implementation of robust, end-to-end machine-learning pipelines for predicting patient response from high-dimensional molecular data. We’ll walk through each critical stage—data ingestion and normalization, leakage-proof nested cross-validation, stability-based feature selection to derive reproducible biomarker panels, hyperparameter optimization, calibration of probability outputs, and external cohort validation—highlighting why every step matters for building reliable, generalisable models.
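To make the leakage point concrete, here is a minimal sketch of nested cross-validation in which normalization and feature selection are re-fit inside each training fold only. It uses generic scikit-learn components and synthetic stand-in data; it is not the pipeline from the talk, and the particular estimators and grid values are illustrative assumptions.

```python
# Minimal nested-CV sketch: preprocessing and feature selection live inside a
# Pipeline, so they are re-fit on each training fold and never see test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a high-dimensional molecular dataset.
X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                 # normalization
    ("select", SelectKBest(f_classif)),          # feature selection, fold-local
    ("clf", LogisticRegression(max_iter=5000)),
])

param_grid = {"select__k": [10, 50, 100], "clf__C": [0.01, 0.1, 1.0]}

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)  # tuning
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # estimation

search = GridSearchCV(pipe, param_grid, cv=inner, scoring="roc_auc")
scores = cross_val_score(search, X, y, cv=outer, scoring="roc_auc")
print(f"nested CV AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```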
Advancing human-machine interface systems through artificial intelligence
Samuel George-White
(Imperial College London)
May 22, 2025
This study explores the use of artificial intelligence to improve human–machine interaction by decoding neural intent from non-invasive muscle signals. Using a GO/NO-GO task, we recorded high-density electromyography (HD-EMG) and EEG data from participants during movement preparation and cancellation. After preprocessing the data — including filtering, dimensionality reduction (PCA), and normalization — we trained a convolutional neural network to classify different movement-related brain states. The model achieved 72.4% accuracy in distinguishing two functional states and 49.6% in three-class classification. Visualization techniques such as GradCAM and temporal sliding windows were used to interpret model decisions and understand how neural signals evolve over time. These findings highlight the potential of using surface EMG to decode brain activity, paving the way for more intuitive control in applications like prosthetics, rehabilitation, and real-time assistive technologies.
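As a rough illustration of this kind of classification setup (an assumed architecture and random stand-in data, not the model or recordings from the study), fixed-length windows of multichannel signals can be classified with a small 1-D convolutional network along these lines:

```python
# Illustrative sketch only: a small 1-D CNN classifying windows of
# multichannel (HD-EMG-like) signals into movement-related states.
import torch
import torch.nn as nn

n_channels, window_len, n_classes = 64, 200, 3   # assumed shapes

class WindowCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):          # x: (batch, channels, time)
        return self.net(x)

model = WindowCNN()
x = torch.randn(8, n_channels, window_len)   # a batch of signal windows
y = torch.randint(0, n_classes, (8,))        # stand-in state labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                               # one illustrative training step
print(float(loss))
```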
This talk will be a repeat of a previous talk (sorry!) on expressing equations — differential, linear recurrence, hidden Markov model, … — using the language of diagrams in categories. We will see some examples arising from multi-physics and understand how this formalism is useful for modellers.
Developing programming language support for end-to-end verification of neural AI agents
Over the past decade, there has been impressive progress in developing algorithms capable of formally verifying first-order, linear specifications about small-to-medium-sized neural networks. This has opened the door to formally verifying the correctness of neural AI agents operating in complex environments. However, due to the complexity of integrating these verification algorithms with other reasoning tools and the many moving parts involved, deploying them in practice has proven more challenging than anticipated. In this talk, I will introduce Vehicle, a tool that my collaborators and I have been developing. Vehicle allows users to: (i) write specifications for AI agents in a high-level, domain-specific language; (ii) use these specifications to guide training; (iii) formally verify whether the resulting neural network satisfies the specification; and (iv) export verified specifications to interactive theorem provers, where the agents can be reasoned about in the context of their environment. In the talk I will highlight how we leverage type theory in unusual ways to improve the user experience, our use of real-valued logic semantics to interpret Boolean specifications, and finally some of the open theoretical challenges in the field.
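For orientation, a typical specification of the kind such tools target is adversarial robustness, written here informally in first-order logic (not in Vehicle's own syntax; the network $f$, reference input $x_0$, and radius $\epsilon$ are illustrative placeholders):

\[
\forall x.\; \lVert x - x_0 \rVert_\infty \le \epsilon \;\Longrightarrow\; \operatorname{argmax}_i f(x)_i = \operatorname{argmax}_i f(x_0)_i ,
\]

i.e. every input within an $\epsilon$-ball of the reference input receives the same classification from the network.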
The Path-Complete Formalism for Switched Systems: Stability and Beyond
Switched systems play a crucial role in modern engineering thanks to their ability to capture complex behaviours involving transitions between different operational modes. However, analyzing their stability remains a challenging task due to the intricate interplay between discrete switching and dynamics. This complexity calls for sophisticated mathematical tools. While Lyapunov theory remains a cornerstone of stability analysis, traditional methods often fall short when applied to switched systems, prompting ongoing efforts to extend the theory to better accommodate their unique characteristics.
This talk presents the framework of path-complete Lyapunov functions, which offers a fresh perspective by integrating combinatorial structures to represent switching behaviour. Specifically, a path-complete Lyapunov function comprises two components: a combinatorial element, represented by an automaton (a directed graph) that encodes admissible switching sequences, and an algebraic element, consisting of a collection of Lyapunov functions—one for each node in the graph. The graph edges govern how these Lyapunov pieces interact and decrease across transitions. This framework is particularly appealing for the analysis of switched systems because it allows for the construction of tailored, nonstandard, and less conservative stability criteria, all while mitigating the combinatorial complexity that often burdens classical optimization techniques.
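Concretely, in a standard discrete-time formulation (given here for orientation, and not necessarily in the exact form used in the talk), consider the switched system $x_{k+1} = A_{\sigma(k)} x_k$ with modes $\sigma(k) \in \{1,\dots,m\}$. A path-complete Lyapunov function on a graph with mode-labelled edges is a family of positive definite functions $\{V_s\}$, one per node, satisfying

\[
V_t(A_\sigma x) \;\le\; V_s(x) \qquad \text{for all } x \text{ and every edge } s \xrightarrow{\;\sigma\;} t ,
\]

where the graph is path-complete, meaning every finite sequence of modes is read along some path. Such a family certifies stability of the switched system under arbitrary switching, with asymptotic stability under the corresponding strict decrease condition.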
While originally introduced for stability analysis, the path-complete Lyapunov framework has been recently extended to encompass constrained switching systems, stabilization via switching sequence and control Lyapunov function design, and safety through path-complete barrier functions. This talk will highlight these recent developments and their unifying role within the framework.
Since it neither relies on a collapse postulate nor posits a hidden variable, Everettian quantum mechanics (EQM) is an attractive option among realist interpretations of quantum mechanics. In some sense, EQM is a simpler theory than its competitors. However, it also appears to require more structure; unlike its competitor theories, EQM has traditionally been taken to entail a multitude of worlds, as every possible outcome of every quantum process is actualized in at least one world. Since no other realist competitor theory shares this feature, the most prominent objection raised against EQM is that of ontological extravagance. Everettians have given this objection little serious attention, at least in part because it has not yet been carefully articulated in the literature.

To clarify this objection, I distinguish between two types of simplicity criteria: ones concerned with ontological abundance and others concerned with postulate abundance. Where ontological abundance criteria concern the number of concrete objects that a theory posits, postulate abundance norms instead concern the number and overall complexity of the set of postulates of the theory. Unfortunately, it is unclear how we ought to weigh these simplicity-related metaphysical considerations, and worse, as I argue, there is no independent motivation for either sort.

I argue that we ought instead to consider the proximity between the way the theory says the world is and the way the world appears to be. With the deadlock between Everettians and non-Everettians over which sort of simplicity criterion reigns supreme, this severely overlooked criterion offers a promising route forward in the dialectic. It roughly corresponds to what Emery (2023) calls the minimal divergence norm, under which we ought to prefer theories that deviate least from the manifest image, or the way the world generally appears to be. However, such a norm requires further specification; there is no image of the world that uniquely picks out the manifest image. Observations themselves depend on our theoretical commitments, which vary from agent to agent, and so there are many manifest images.

To avoid this multiplicity, I argue that we ought to look to the physical sciences for a widely accepted theory that describes the world as we experience it. Classical mechanics (CM), the predecessor theory to quantum mechanics, does exactly that; though it is certainly not a correct description of reality, it is sufficiently accurate for numerous forms of engineering, as it describes macroscopic reality well enough. I argue that we ought to treat the way that classical mechanics says the world is as the manifest image. A third criterion, which I call classical divergence, then concerns the degree to which the way a theory says the world is deviates from the way that CM says the world is. I defend this norm on pragmatic grounds, arguing that adhering to it offers a distinct epistemic advantage. While EQM does not necessarily tell us that our world is vastly different from the way that CM says it is, it is generally taken to entail many more worlds than its competitor theories. With this new set of simplicity criteria in place, Everettians can better understand the available dimensions along which they may endeavor to improve their ontology. The norm I advocate offers hope: Everettians can seek a less extravagant ontology by strategically minimizing classical divergence.
Translation, in full generality, is a nuanced and complex art form that requires serious expertise and a holistic approach. So how can we as mathematicians hope to “solve the problem of translation” in our domain? Furthermore, can we do so without making access to academia even harder for non-native English speakers or hastening a domain collapse of non-English languages? I believe that the answer to both of these questions can only possibly be “yes” if we approach translation as a community-driven activity. In this talk, I will speak about my experiences in working on large translation projects with an open-source approach — the technology and methodology that was helpful for doing so, as well as some of the difficulties — and describe the sorts of resources that I believe would have been helpful. Hopefully this can form a starting point for community thought on the types of projects that we could focus on in the future. (This talk is a repeat of a talk given recently at the Isaac Newton Institute.)
Neural certificates
Mirco Giacobbe
(University of Birmingham)
Mar 13, 2025
Model checking aims to derive rigorous proofs for the correctness of systems and has traditionally relied on symbolic reasoning methods. In this talk, I will argue that model checking can also be effectively addressed using machine learning. I will present a family of approaches to formal verification that leverage neural networks to represent correctness certificates of systems, known as “neural certificates.” This approach trains certificates from synthetic executions of the system and then validates them using symbolic reasoning techniques. Building upon the observation that checking a correctness certificate is much simpler than finding one, and that neural networks are an appropriate representation for such certificates, this results in a machine learning approach to model checking that is entirely unsupervised, formally sound, and practically effective. I will demonstrate the principles and experimental results of this approach in the safety assurance of software, probabilistic systems, and control.
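As a rough sketch of the learn-then-check recipe (a generic Lyapunov-style example with assumed toy dynamics, not the systems or tools from the talk), a candidate certificate can be trained by penalizing violations of its defining conditions on sampled states, and the trained candidate then handed to a symbolic checker:

```python
# Illustrative sketch: learn a Lyapunov-style neural certificate for a known
# discrete-time system x' = f(x) by penalizing violations of positivity and
# decrease on sampled states. Dynamics, architecture, and margins are assumed.
import torch
import torch.nn as nn

def f(x):
    # toy, globally contracting dynamics (an assumption for illustration)
    return 0.5 * x + 0.1 * torch.sin(x)

V = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

def W(x):
    # candidate certificate, pinned to zero at the equilibrium
    return V(x) - V(torch.zeros(1, 2))

for step in range(2000):
    x = 4 * torch.rand(256, 2) - 2                                   # states in [-2, 2]^2
    pos = torch.relu(0.01 * (x ** 2).sum(1, keepdim=True) - W(x))    # positivity margin
    dec = torch.relu(W(f(x)) - W(x) + 0.01)                          # decrease along one step
    loss = (pos + dec).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained candidate would then be passed to a symbolic checker (e.g. an
# SMT solver or a neural-network verifier), which either proves the conditions
# over the whole region or returns a counterexample used to refine training.
```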
Generalized algebraic theories have long been part of our work at Topos, going back to the very beginnings of Catlab.jl. In this talk, I present a new implementation of generalized algebraic theories in Rust, which takes a different, more type-theoretic approach. This approach tightly integrates e-graphs into the type-checking process to enable an approximation to extensional equality, and in addition enables greater compositionality of theories, achieving a principled approach to “theory pushout” that has long been desired. This new implementation is intended as a prototype for the type-checking algorithms that will go into CatColab and enable “open notebooks” for compositional modeling, and I give an overview of how I envision that working. In addition to this application in scientific modeling, I believe that the lessons about two-level type theory that I have learned from this implementation could be applicable in a wide variety of domains where one wants abstraction that does not complicate the underlying semantics, such as finite state machines, probabilistic programming, SAT solving, systems programming, constraint programming, databases, and serialization formats.
A next-generation modular and compositional framework for System Dynamics modeling and beyond
System Dynamics (SD) is a powerful methodology for understanding and managing complex systems over time, with applications in various fields such as public health, business, and environmental management. Despite its strengths, traditional System Dynamics modeling methods face several critical limitations, including a lack of modularity, inadequate representation of complex relationships, stratification that obscures model transparency, rigid coupling of model syntax and semantics, and limited support for model composition and reuse. These challenges restrict the scalability, adaptability, and accuracy of models, particularly when applied to large-scale or interdisciplinary systems. To address these limitations, this presentation introduces a novel framework grounded in Applied Category Theory (ACT), which provides a rigorous mathematical foundation for representing, relating, composing, and stratifying complex systems. By leveraging ACT, this research establishes a next-generation SD modeling approach that integrates mathematical rigor with practical utility, significantly enhancing the flexibility, expressiveness, and applicability of System Dynamics for researchers and practitioners across various domains.

To contextualize System Dynamics modeling in the realm of infectious disease simulation, the presentation will begin with an overview of a COVID-19 theory-based machine learning model, developed by incorporating particle filtering, a Bayesian methodology, into a COVID-19 System Dynamics model. This model played a pivotal role in supporting decision-making across 17 Canadian jurisdictions during the COVID-19 pandemic by providing daily transmission monitoring, short-term projections, and counterfactual intervention analysis over a year.
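For readers unfamiliar with System Dynamics, the basic ingredient is a stock-and-flow model. The following is a minimal, generic SIR example integrated with Euler steps, purely illustrative and unrelated to the COVID-19 model described above; all parameter values are assumptions.

```python
# Minimal stock-and-flow sketch: three stocks (S, I, R) connected by two
# flows (infection, recovery), integrated with fixed Euler steps.
def simulate_sir(beta=0.3, gamma=0.1, days=160, dt=0.25, N=1_000_000, I0=10):
    S, I, R = N - I0, float(I0), 0.0
    history = []
    for _ in range(int(days / dt)):
        infection = beta * S * I / N      # flow: S -> I
        recovery = gamma * I              # flow: I -> R
        S -= infection * dt
        I += (infection - recovery) * dt
        R += recovery * dt
        history.append((S, I, R))
    return history

peak_I = max(i for _, i, _ in simulate_sir())
print(f"peak infectious stock: {peak_I:,.0f}")
```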
The American philosopher Charles Sanders Peirce (1839-1914) had much to say about the nature of intellectual enquiry. In the realm of deductive logic, category theorists have made important use of his string-diagrammatic logical calculus. But Peirce’s interests in inference extended beyond deduction to induction and abduction. In this talk I shall be exploring the thesis that we can understand this triple in terms of the category-theoretic notions of composition, extension and lift. We will also touch on his broader semiotics and his account of concept formation.
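In arrow-theoretic terms, the correspondence being explored can be glossed roughly as follows (a gloss for orientation, not the speaker's precise formulation): deduction composes arrows (from $f \colon A \to B$ and $g \colon B \to C$, conclude $g \circ f \colon A \to C$); induction extends (given $f \colon A \to B$ and a composite $h \colon A \to C$, conjecture $g \colon B \to C$ with $g \circ f \approx h$); and abduction lifts (given $g \colon B \to C$ and $h \colon A \to C$, conjecture $f \colon A \to B$ with $g \circ f \approx h$).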
This talk covers the basics of Markov semigroups from a slightly categorical point of view, with the aim of conveying a basic intuition about the setup for and the meaning of the Hille-Yosida theorem. At the end, a synthetic viewpoint is conjectured.
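For orientation, the standard statements (given here in the contraction-semigroup form, which may differ from the talk's categorical packaging) are as follows. A Markov semigroup is a family $(P_t)_{t \ge 0}$ of linear operators on a suitable function space satisfying

\[
P_0 = \mathrm{id}, \qquad P_{s+t} = P_s P_t, \qquad P_t \mathbf{1} = \mathbf{1}, \qquad f \ge 0 \implies P_t f \ge 0,
\]

together with strong continuity in $t$. The Hille-Yosida theorem characterizes the generators of strongly continuous contraction semigroups: a closed, densely defined operator $A$ generates such a semigroup $(e^{tA})_{t \ge 0}$ if and only if every $\lambda > 0$ lies in the resolvent set of $A$ and

\[
\lVert (\lambda I - A)^{-1} \rVert \le \tfrac{1}{\lambda} \quad \text{for all } \lambda > 0.
\]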
What 2-algebraic structure do cartesian pseudo-functors preserve?
If universal algebra is the study of sets equipped with the structure of some operations satisfying universally quantified equations, then 2-algebra is the study of categories equipped with the structure of some operations, some natural transformations between these operations, and some equations between these. Examples of 2-algebras include symmetric monoidal categories and categories with finite products.
Just as the functorial semantics (due to Lawvere) for algebra lets us interpret algebraic theories in categories other than sets (so long as they have finite products), the functorial semantics for 2-algebra (due to Power, Lack, and others) lets us interpret 2-algebraic structure in 2-categories other than Cat. For example, instead of symmetric monoidal categories, we can consider symmetric monoidal double categories.
Any strict 2-functor that preserves finite products will push forward 2-algebraic structure. But in practice such assignments are often functorial only up to coherent isomorphism, i.e. they are pseudo-functors. What 2-algebraic structure do such cartesian pseudo-functors preserve? In this talk, we’ll ask the question and put forward an answer: so long as the 2-algebraic theory is flexible, in the sense that it only asks for equations between transformations and not between operations, it will be preserved by cartesian pseudo-functors.
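For instance (a standard illustration, not drawn from the talk itself): the theory of symmetric monoidal categories is flexible, since the pentagon and hexagon axioms are equations between composites of the structural transformations $\alpha$, $\lambda$, $\rho$, $\sigma$; by contrast, a theory demanding a strictly associative tensor imposes the equation $(x \otimes y) \otimes z = x \otimes (y \otimes z)$ between operations themselves, and so falls outside the flexible case.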