Formal Category Theory for Scaled Institutions
Theorists have a moral imperative to help guide humanity through the AI transition. Capital interests now have the means to massively scale cognitive systems, such as LLMs, both for good and for harm. Regulation becomes possible only with tools to interpret these models, and I argue that such tools will come from robust formalizations of a Theory of Generalized Systems. This theory will not be elegant, but the lesson of scaled cognitive systems is that representations emerge at scale. Therefore, to develop a generalized systems theory, we should scale a research institution to arbitrary size. To cohere arbitrary ideas at the scale of all of science, such an institution would need powerful, general, and implementable mathematical structures together with modern AI systems. The institution should use the structures and systems it develops to add expressive capabilities to itself, allowing it to grow while remaining coherent. I attempt to model this philosophy rigorously using higher functorial semantics. I proffer a toy model of omnimodal systems space, and discuss the dynamics of systems, such as institutions, within that space.
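For orientation only, and not as this paper's own construction: in Lawvere's classical functorial semantics, a theory is presented as a category $\mathcal{T}$ with finite products, and a model of the theory is a product-preserving functor into $\mathbf{Set}$, with model homomorphisms given by natural transformations:
% Lawvere-style functorial semantics; a standard reference point,
% not the construction developed in this paper.
\[
  \mathrm{Mod}(\mathcal{T}) \;\simeq\; \mathrm{Fun}^{\times}(\mathcal{T}, \mathbf{Set}).
\]
The higher functorial semantics invoked above generalizes this picture from ordinary categories to higher categories.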