**Manifold:** in mathematics, a topological space that locally resembles a Euclidean space.

**Perceptual manifold:** the population structure of sensory neuron responses that emerges from identity-preserving variations in the input stimulus space.

**Neural manifold:** a low-dimensional subspace underlying population activity, embedded in the high-dimensional neural state space.

**Point-cloud manifold:** a set of data points with an underlying manifold structure.

**Neural population geometry:** the geometry of neural manifolds embedded in neural state space.

Scientists Sue Yeon Chung and L.F. Abbott of Columbia University developed an approach to understanding neural networks that analyzes the geometric properties of neural populations: how information is embedded in, and processed through, high-dimensional representations to solve complex tasks. When a group of neurons exhibits variability, whether driven by stimuli or by internal recurrent dynamics, manifold-like representations emerge.

In their work, the team highlights important examples of how geometrical techniques, and the insights they provide, have aided the understanding of biological and artificial neural networks.

They investigate the geometry of these high-dimensional representations, i.e., neural population geometry, using mathematical and computational tools. Their review covers a variety of geometrical approaches that provide insight into how these networks function.

An important insight from their research is that sets of neural population activities are harder to separate when their representation patterns lie on a curved surface rather than a flat one. This concept plays a central role in the analysis of neural population geometry and dates back to the beginnings of artificial neural networks.
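A minimal sketch of this point: two classes lying on concentric rings (a curved, "tangled" geometry) cannot be split by any linear readout in the raw space, but a simple nonlinear feature untangles them. The data here are illustrative, not neural recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes on concentric rings: a curved, "tangled" geometry.
theta = rng.uniform(0, 2 * np.pi, 200)
inner = np.c_[np.cos(theta[:100]), np.sin(theta[:100])]        # radius 1
outer = 3.0 * np.c_[np.cos(theta[100:]), np.sin(theta[100:])]  # radius 3
X = np.vstack([inner, outer])
y = np.r_[np.zeros(100), np.ones(100)]

# In the raw 2D space no single line separates the rings, because each
# class surrounds the origin. A simple nonlinear feature (squared
# radius) untangles them, so a linear threshold now suffices:
r2 = (X ** 2).sum(axis=1)
pred = (r2 > 2.0 ** 2).astype(float)   # linear readout in feature space
accuracy = (pred == y).mean()
print(accuracy)  # 1.0
```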

The idea of transforming representations into linearly separable form comes from the visual system, which transforms visual objects into representations that become "untangled". Untangling also makes future neural activity easier to predict when the resulting trajectories are straight, which leads to the "temporal straightening hypothesis": the visual system transforms its input into a representation that follows a "straighter" trajectory through time, from which predictions can be made by simple linear extrapolation.
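The straightness of a trajectory can be quantified by the average angle between successive displacement vectors; a minimal sketch with illustrative (non-neural) trajectories:

```python
import numpy as np

def mean_curvature(traj):
    """Mean angle (radians) between successive displacement vectors of a
    trajectory given as a (T, d) array of points over time.
    Zero means perfectly straight; larger means more curved."""
    d = np.diff(traj, axis=0)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    cos = np.clip((d[:-1] * d[1:]).sum(axis=1), -1.0, 1.0)
    return np.arccos(cos).mean()

t = np.linspace(0, 1, 50)
straight = np.c_[t, 2 * t, -t]                                   # a line in 3D
curved = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), t]  # a helix

print(round(mean_curvature(straight), 4))                 # 0.0
print(mean_curvature(curved) > mean_curvature(straight))  # True
```

Under the straightening hypothesis, later stages of visual processing should produce trajectories whose measured curvature is lower than that of the raw pixel-level input.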

This geometrical framework for perception extends to a geometry of abstraction, with linear separability providing insight into more complex tasks beyond categorization. "Untangled" representations, as assessed by a geometric measure called the parallelism score, are seen in recordings from the prefrontal cortex and hippocampus and in task-trained neural networks. While abstracting over different sets of tasks (which involve an uncued "context"), the neural circuits also retain information about other variables.
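The idea behind the parallelism score can be sketched in simplified form (an assumption: the published measure aggregates over conditions and dichotomies, while this version compares a single pair of coding directions). A variable is represented "abstractly" when its coding direction is parallel across contexts:

```python
import numpy as np

def parallelism_score(means_a, means_b):
    """Simplified parallelism score: cosine similarity between the coding
    vectors of one variable measured in two contexts. Each input is a
    (2, n_neurons) array of mean responses for the variable's two values."""
    v_a = means_a[1] - means_a[0]   # coding direction in context A
    v_b = means_b[1] - means_b[0]   # coding direction in context B
    return float(v_a @ v_b / (np.linalg.norm(v_a) * np.linalg.norm(v_b)))

# Toy mean responses (3 neurons) for the two values of a variable.
ctx_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
abstract_b = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])  # same coding direction
twisted_b = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 1.0]])   # rotated coding direction

print(parallelism_score(ctx_a, abstract_b))  # 1.0 (abstract geometry)
print(parallelism_score(ctx_a, twisted_b))   # 0.0 (no abstraction)
```

A score near 1 means a readout trained on one context generalizes to the other, which is the operational signature of abstraction.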

Sources of variability cause points in neural state space (or, for time-dependent stimuli, one-dimensional trajectories) to jitter. This introduces the need to cluster responses into point-cloud manifolds.

For example, if we want to distinguish dogs from cats, we may want to group the responses to images for different viewing angles, sizes, and animal breeds into one dog manifold and one cat manifold. In this perspective, the problem of invariant object discrimination becomes that of separating neural manifolds.
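This dog-vs-cat picture can be sketched with a toy model (all dimensions, tuning vectors, and the orthogonality assumption below are illustrative, not from the paper): responses vary with a nuisance variable (viewing angle) within each identity manifold, yet a single linear readout separates the manifolds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy model: each stimulus evokes a 50-neuron response that
# depends on identity (dog = +1, cat = -1) and a nuisance viewing angle.
w_id = rng.normal(size=50)              # identity coding axis
W_ang = rng.normal(size=(2, 50))        # nuisance (angle) plane
# Idealized "untangled" case: nuisance variation orthogonal to identity.
W_ang -= np.outer(W_ang @ w_id, w_id) / (w_id @ w_id)

def manifold(identity, n=40):
    ang = rng.uniform(0, 2 * np.pi, n)
    return identity * w_id + np.c_[np.cos(ang), np.sin(ang)] @ W_ang

dogs, cats = manifold(+1.0), manifold(-1.0)   # two point-cloud manifolds

# One linear readout along w_id separates the manifolds even though each
# point cloud still spreads out with viewing angle.
scores = np.r_[dogs, cats] @ w_id
labels = np.r_[np.ones(40), -np.ones(40)]
accuracy = (np.sign(scores) == labels).mean()
print(accuracy)  # 1.0
```

When the nuisance directions are not orthogonal to the identity axis, the manifolds "tangle" and separating them becomes harder, which is exactly the regime the geometric analysis addresses.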


By considering multiple frameworks for building this type of manifold, the team concludes that the untangling hypothesis can be extended: visual processing aims to develop well-separated manifolds that convey object identity while retaining information about other image variables such as pose, position, and scale. This motivates further theoretical advances, such as manifold capacity theory, which allows a more refined geometric analysis of representations in biological and artificial neural networks.
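Manifold capacity theory itself is beyond a short snippet, but the classical point-capacity result it generalizes can be: random dichotomies of P points in N dimensions are almost always linearly separable when P/N is well below the critical capacity of 2, and rarely separable far above it. A sketch of that baseline calculation, using a linear-programming feasibility check (parameters and trial counts are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def separable(X, y):
    """Exact linear-separability check: does a w exist with
    y_i (w . x_i) >= 1 for all i?  Posed as an LP feasibility problem."""
    A = -y[:, None] * X                  # y_i x_i . w >= 1  <=>  A w <= -1
    res = linprog(c=np.zeros(X.shape[1]), A_ub=A, b_ub=-np.ones(len(y)),
                  bounds=[(None, None)] * X.shape[1], method="highs")
    return res.status == 0               # 0 = feasible (separable)

rng = np.random.default_rng(0)
N = 20                                   # ambient dimension ("neurons")

def frac_separable(P, trials=20):
    hits = 0
    for _ in range(trials):
        X = rng.normal(size=(P, N))          # P random points
        y = rng.choice([-1.0, 1.0], size=P)  # random dichotomy
        hits += separable(X, y)
    return hits / trials

frac_sub = frac_separable(P=10)    # P/N = 0.5, below capacity
frac_super = frac_separable(P=60)  # P/N = 3.0, above capacity
print(frac_sub)    # 1.0: always separable below capacity
print(frac_super)  # near 0: rarely separable above capacity
```

Manifold capacity theory replaces the points in this calculation with whole manifolds, yielding a capacity that depends on manifold radius and dimension rather than on point count alone.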

The team also studied how neural activity is organized on lower-dimensional subspaces, i.e., neural manifolds, with intrinsic properties and functions of their own. These define a geometry of representation that arises from subtle but fundamental symmetries among the encoded variables. To understand the structure of these neural manifolds, nonlinear dimensionality reduction techniques are employed to reveal how the intrinsic dimensions of a manifold encode topological and geometric information.
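As a minimal illustration of recovering a manifold's intrinsic dimensionality, the sketch below embeds a ring-shaped latent variable (e.g., a circular variable like head direction) into a 30-neuron state space and applies PCA, a linear stand-in for the nonlinear methods the review discusses; all sizes and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# A ring-shaped latent manifold embedded in a 30-neuron state space by a
# random linear map plus a small amount of noise.
theta = rng.uniform(0, 2 * np.pi, 500)
latent = np.c_[np.cos(theta), np.sin(theta)]   # ring: 1-D manifold in 2 coords
embed = rng.normal(size=(2, 30))
X = latent @ embed + 0.01 * rng.normal(size=(500, 30))

# PCA via SVD: how many dimensions carry the variance?
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
var = s**2 / (s**2).sum()
print(var[:2].sum() > 0.99)   # True: the 30-D activity is ~2-dimensional
```

A linear method like PCA finds the plane containing the ring, but only nonlinear or topological methods can report that the data within that plane form a circle rather than a disk, which is why the review emphasizes them.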

These findings illustrate an exciting trend at the intersection of machine learning, neuroscience, and geometry, in which neural population geometry provides a useful population-level mechanistic descriptor underlying task implementation.


*Neural population geometry: An approach for understanding biological and artificial neural networks*, Sue Yeon Chung, L.F. Abbott

Published: October 2021

DOI: 10.1016/j.conb.2021.10.010