Author: Giulio Ruffini
I’m excited to share our work featured in the upcoming Special Issue of Entropy, “The Mathematics of Structured Experience: Exploring Dynamics, Topology, and Complexity in the Brain.” In this issue, we explore how deep mathematical ideas—from dynamical systems theory to group-theoretic symmetry—can shed light on the very foundations of structured experience, a central concept that bridges the first- and third-person views of cognition. Structured experience is defined as the spatial, temporal, and conceptual organization of our first-person experience of the world and of ourselves as agents in it. Inspired by the Kolmogorov theory of consciousness (see 1,2,3,4,5), our approach posits that the brain’s remarkable ability to compress information lies at the heart of both cognition and subjective experience.
The special issue focuses on several interconnected themes:
- **Compressive World Models:** How do agents—whether natural or artificial—build internal models that encapsulate the regularities of the external world? The issue examines the link between a model’s mathematical structure and the dynamics it produces.
- **Mapping to Dynamical Systems:** By interpreting neural networks as dynamical systems, we can study the geometry and topology of their invariant manifolds. This approach helps clarify how structured experience emerges from underlying neural dynamics (see the sketch after this list).
- **Empirical Paradigms and AI Implications:** The contributions explore how experimental paradigms and computational models can validate these theories, potentially guiding the design of next-generation AI systems.
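To make the dynamical-systems theme concrete, here is a minimal Python sketch (my own illustration, not code from the Special Issue): a two-dimensional latent oscillator is read out into a 200-dimensional “neural” space, and PCA confirms that the observed high-dimensional trajectories live on a low-dimensional invariant manifold.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 5000, 0.01

# Latent dynamics: a Hopf-like oscillator whose invariant manifold is the
# unit circle (r = 1). In Cartesian coordinates:
#   dx/dt = x - y - (x^2 + y^2) x,   dy/dt = x + y - (x^2 + y^2) y
z = np.array([1.0, 0.0])
Z = np.empty((T, 2))
for t in range(T):
    r2 = z @ z
    dz = np.array([z[0] - z[1] - r2 * z[0],
                   z[0] + z[1] - r2 * z[1]])
    z = z + dt * dz
    Z[t] = z

# Observe the latent state through a random linear "neural" readout plus noise.
C = rng.standard_normal((2, N))
X = Z @ C + 0.05 * rng.standard_normal((T, N))

# PCA via SVD: nearly all variance should sit in the first two components,
# i.e. the 200-dimensional signals are organized on a 2D manifold.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
print("variance in first 2 PCs:", (s[:2] ** 2).sum() / (s ** 2).sum())
```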
For more details, see the Special Issue page on Entropy.
As the first contribution to the Special Issue, in Structured Dynamics in the Algorithmic Agent, we take a close look at how the brain’s need to track and compress incoming data actually shapes the architecture of its own models. It turns out that this isn’t just about tweaking dynamics; the very structure of a neural network is forced to align with the invariances of the world. Using the language of group theory—specifically, Lie pseudogroups—we show that the continuous transformations inherent in natural data set strict rules for how neural masses must be organized and connected. This means that the way these models are built, their connectivity, and their hierarchical layout are all optimized to capture and compress the essential structure of the environment.
In our framework (KT), we picture the agent (e.g., us) as an intelligent, model-building system also capable of planning actions with goals such as surviving (homeostasis) or, more importantly, reproducing (telehomeostasis). Rather than storing every single detail (infeasible, and unnecessary for attaining its goals), the agent builds a compressed, generative model of the world—a sort of low-dimensional map that captures the core features, the invariances, of what it perceives. This means that the agent learns to extract essential properties (like what makes a cat, a cat) and uses these to predict and interpret incoming sensory data. In essence, its architecture—its neural masses and the way they’re connected—is shaped by the very patterns and symmetries present in the environment, allowing it to adapt and fine-tune its internal dynamics.
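As a toy illustration of compression-for-prediction (my own sketch, with an invented sinusoidal “world,” not an example from the paper): the agent replaces 500 raw samples with three generative parameters, and the compressed model then predicts data it has never seen.

```python
import numpy as np
from scipy.optimize import curve_fit

def world(t, A, w, p):
    """The hidden generative model: a 3-parameter family of signals."""
    return A * np.sin(w * t + p)

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
y = world(t, 1.3, 2.0, 0.4) + 0.05 * rng.standard_normal(t.size)

# Compression: the agent keeps 3 numbers instead of 500 samples.
(A, w, p), _ = curve_fit(world, t, y, p0=[1.0, 2.0, 0.0])

# Prediction: the compressed model generalizes beyond the observed window.
t_future = np.linspace(10, 12, 100)
y_pred = world(t_future, A, w, p)
print(f"recovered parameters: A={A:.2f}, w={w:.2f}, p={p:.2f}")
```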
![Flowchart of an agent's modeling and planning engine interacting with the world. It includes components like Simulator, Updater, Objective Function.](https://static.wixstatic.com/media/777348_48bad1cb3910499b955aa29ce6b5cdbd~mv2.png/v1/fill/w_901,h_855,al_c,q_90,enc_avif,quality_auto/777348_48bad1cb3910499b955aa29ce6b5cdbd~mv2.png)
The starting point of our new paper is thinking of world data as being the product of a generative model. We employ the language of group theory—specifically Lie pseudogroups—to describe the continuous transformations that generate data. This formalism provides a rigorous framework for understanding generative models.
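In the simplest setting (standard Lie-theoretic notation, a drastic simplification of the paper’s pseudogroup machinery), a one-parameter group generates a whole family of data points from a single template:

```latex
% One-parameter Lie group action on a template datum x_0:
% g(\theta) = e^{\theta X}, with X the infinitesimal generator.
x(\theta) = e^{\theta X} x_0,
\qquad
\mathcal{O}_{x_0} = \{\, g \cdot x_0 \mid g \in G \,\}.
% The "concept" is the orbit O_{x_0}; an invariant I of the data
% satisfies I(g \cdot x) = I(x) for all g in G, equivalently X I = 0.
```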
For example, the concept of “cat” emerges as the invariant model that is capable of generating all possible images of cats. The idea extends beyond images or cats, of course.
![Cats arranged in a circle pattern on left; intricate geometric designs on right. Black background, monochrome theme.](https://static.wixstatic.com/media/777348_c90d470a58ef42c6a8d386078a93e0a6~mv2.png/v1/fill/w_980,h_500,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/777348_c90d470a58ef42c6a8d386078a93e0a6~mv2.png)
The core idea is that the world, in all its complexity, is actually generated by simple, underlying models. Take the example of cat images: instead of needing to memorize every cat picture, a compact, low-dimensional model can capture what it means to be a cat—its essential features encoded in a few parameters. Every cat image, no matter how varied, is just a different point in this “cat image latent space.”
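A minimal sketch of this picture (a hypothetical toy, not the paper’s model): a decoder maps a handful of latent parameters, here rotation, scale, and translation, to a concrete data point, so the whole family of shapes is indexed by a low-dimensional latent space.

```python
import numpy as np

# A template shape standing in for the "cat" (a triangle of 2D points).
TEMPLATE = np.array([[0.0, 1.0], [0.5, -0.5], [-0.5, -0.5]])

def decode(theta, scale, tx, ty, shape=TEMPLATE):
    """Map a 4D latent code (angle, scale, shift) to a concrete data point."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scale * shape @ R.T + np.array([tx, ty])

# Two members of the family: different latent points, same underlying model.
sample_a = decode(0.3, 1.2, 0.0, 0.5)
sample_b = decode(-1.1, 0.8, 2.0, -1.0)
```

The memorization cost collapses from “all possible images” to four numbers per instance plus one shared template, which is exactly the kind of compression the argument turns on.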
The broader claim is that all world data is, at bottom, generated by mathematical models in a similarly compressible way, through symmetry. In other words, the world is built on simple generative models that produce complex phenomena.
Nature is a mathematician, and abstract algebra and algorithmic information theory are her favorite subjects.
This idea has big implications for our brains. Since the world is generated by simple models, brains—evolved to efficiently process vast amounts of sensory data—can discover and exploit these simple models if they search for them. In effect, our brains are optimized to look for and learn these compressed, invariant representations, allowing us to make sense of a seemingly overwhelming amount of information by reducing it to its essential components.
This connection not only explains how we perceive and recognize patterns like “catness” but also suggests that by focusing on simple generative models, our brains can efficiently encode, predict, and interact with the world.
There’s a neat parallel here with physics. Just as Noether’s theorem tells us that symmetry leads to conservation laws, our analysis suggests that tracking data forces neural networks to develop dynamical invariants. These invariants, which govern the time evolution of neural states, mirror the symmetry properties of the external world. In practice, this creates a hierarchical organization within the network—a feature that aligns perfectly with the manifold hypothesis, where high-dimensional neural signals find themselves neatly organized on lower-dimensional manifolds.
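For readers who want the physics side of the analogy spelled out, the textbook case is a planar particle in a rotationally symmetric potential (standard material, included here for orientation, not a result of the paper):

```latex
% A planar particle in a rotationally symmetric potential:
L = \tfrac{1}{2}\, m \left( \dot{x}^2 + \dot{y}^2 \right) - V\!\left( x^2 + y^2 \right)
% L is invariant under rotations (x, y) \mapsto R(\theta)(x, y);
% Noether's theorem yields the conserved angular momentum:
\ell = m \left( x \dot{y} - y \dot{x} \right), \qquad \frac{d\ell}{dt} = 0.
```

The paper’s claim is the analogous statement for networks: symmetries of the tracked data induce invariants of the neural dynamics, which confine trajectories to lower-dimensional manifolds.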
Beyond the theoretical elegance, these insights have exciting implications for personalized therapy: computational neurotherapeutics. Structural principles derived from symmetry and its associated dynamics constrain brain architecture, simplifying the construction of models from patient-specific neuroimaging data, like fMRI or EEG. Computational models built on these principles can simulate how targeted interventions, such as neuromodulation or pharmacotherapy, might restore balanced, healthy brain dynamics. Essentially, by better understanding and modeling the boundary conditions of the structure and dynamics of the brain, we’re laying the groundwork for precision therapies that are tailor-made for individual patients.
These ideas suggest that the simple mathematical structures underpinning the world—those generative models that compress vast sensory information into a few key parameters—are reflected in the brain’s own architecture and dynamics. As the brain discovers and exploits these regularities, its internal, algorithmic models naturally acquire a structured organization that shapes neural activity over time. This structured neural dynamics, in turn, gives rise to what we call “structured experience”: a rich, organized subjective world that mirrors the mathematical invariances of nature. In essence, the way our brains learn and represent the world not only drives efficient computation and behavior but also forms the very fabric of our structured conscious experience.
![Connected circles diagram with "Structure" at the center, linked to "Algorithm," "Experience," and "Dynamics," showing related concepts.](https://static.wixstatic.com/media/777348_d8b086082e37469e991ec8947cf4f621~mv2.png/v1/fill/w_656,h_572,al_c,q_90,enc_avif,quality_auto/777348_d8b086082e37469e991ec8947cf4f621~mv2.png)
Ultimately, by uniting these perspectives, our contribution aims to foster a broader dialogue across disciplines. I believe that grounding our theories in solid mathematical frameworks not only deepens our understanding of natural cognition but also paves the way for innovative approaches in artificial intelligence. I invite researchers to dive into these ideas and further explore the interplay between symmetry, dynamics, and consciousness.
References:
- Special Issue: The Mathematics of Structured Experience: Exploring Dynamics, Topology, and Complexity in the Brain (Entropy)
- Paper: Structured Dynamics in the Algorithmic Agent (Entropy, 2025)
- Related background (see also 1,2,3,4,5):
  - An algorithmic information theory of consciousness (Neuroscience of Consciousness, 2017)
  - The Algorithmic Agent Perspective and Computational Neuropsychiatry: From Etiology to Advanced Therapy in Major Depressive Disorder (Entropy, 2024)