Many forms of cognitive, emotional, and behavioral flexibility place diverse demands on brain representations, ranging from generalization in unfamiliar settings to the capacity to generate a wide variety of responses to complex input patterns. According to research from a team at Columbia University, the geometry of neural representations may be key to understanding how the brain supports these diverse forms of flexible behavior.
Cognitive flexibility is exemplified by the capacity to decide what to do when faced with a novel situation. This skill depends on how the environment is structured: novel situations often share features with those already encountered. These shared features offer a compressed description of the environment that requires only a few variables. Constructing this representation through dimensionality reduction removes the need to observe every possible combination of feature values, thereby overcoming the “curse of dimensionality.” The variables describing features shared across many instances can be encoded in the brain in an “abstract” format, which facilitates generalization in unfamiliar circumstances.
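To make this concrete, here is an illustrative sketch (not the team's actual analysis, and with made-up numbers): data describing many situations through dozens of observed features, but generated by only a handful of latent variables, can be compressed by dimensionality reduction, with the singular-value spectrum revealing how few variables are really needed.

```python
import numpy as np

# Hypothetical example: 500 "situations" described by 50 observed features,
# but generated from only 3 underlying latent variables plus a little noise.
rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 3))           # the few variables that matter
mixing = rng.normal(size=(3, 50))             # how latents appear in the features
observed = latents @ mixing + 0.01 * rng.normal(size=(500, 50))

# PCA via SVD: the spectrum shows the data is effectively 3-dimensional.
centered = observed - observed.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = singular_values**2 / np.sum(singular_values**2)
print(f"variance captured by first 3 components: {explained[:3].sum():.3f}")
```

A decision system working with the three recovered variables never needs to enumerate all combinations of the 50 raw features.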
Motivated by the fact that abstraction enables generalization, the team developed analytic methods for determining when the geometry of neural representations encodes variables in an abstract format.
After studying the relationship between neural representations and behavior, and the role of abstraction in reinforcement learning, the team concludes that abstraction offers a solution to the notorious “curse of dimensionality”: the exponential growth of the state space required to encode every configuration of the environment.
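The exponential growth is easy to quantify with a toy calculation (the feature counts below are arbitrary, chosen only for illustration): with n binary features, a tabular encoding needs 2**n states, whereas an abstraction keeping only the task-relevant features shrinks this dramatically.

```python
# Hypothetical numbers: an environment described by 30 binary features has
# 2**30 (~1 billion) distinct states. If only 4 features are relevant to the
# task, an abstract representation needs just 2**4 = 16 states.
n_features, n_relevant = 30, 4
full_states = 2 ** n_features        # 1073741824
abstract_states = 2 ** n_relevant    # 16
print(full_states, abstract_states)
```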
The team's use of neural networks as function approximators to represent a decision policy effectively constitutes a state-abstraction method.
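A minimal sketch of this idea (all names and dimensions hypothetical, not the team's model): a policy network that maps a high-dimensional state through a narrow hidden layer to action preferences. The bottleneck forces the network to form a compressed, abstract state representation rather than memorizing every raw state.

```python
import numpy as np

rng = np.random.default_rng(1)
state_dim, hidden_dim, n_actions = 100, 8, 4   # illustrative sizes

# Random (untrained) weights, just to show the architecture.
W1 = rng.normal(scale=0.1, size=(state_dim, hidden_dim))
W2 = rng.normal(scale=0.1, size=(hidden_dim, n_actions))

def policy(state):
    """Return action probabilities; `hidden` is the abstract state."""
    hidden = np.tanh(state @ W1)       # 100-d state compressed to 8 dimensions
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()             # softmax over actions

probs = policy(rng.normal(size=state_dim))
print(probs.shape)                     # (4,) — a distribution over 4 actions
```

Every state is funneled through the same 8-dimensional layer, so states sharing relevant features are mapped to nearby hidden representations and receive similar action preferences.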
Neural networks’ inductive bias, which promotes generalization across inputs that share a feature, lessens the curse of dimensionality. The team’s modeling shows that trained networks produce representational geometry comparable to that observed in the neural data, suggesting that their analytic methods may help elucidate the geometric properties underlying the success of modern neural networks.
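A sketch in the spirit of this cross-condition generalization idea (the simulation below is entirely illustrative, not the paper's analysis): if two binary task variables are each encoded along their own axis of a population response, a linear decoder for one variable trained under one value of the other variable transfers to conditions it never saw.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 20  # hypothetical population of 20 "neurons"

# Abstract geometry: each variable gets its own coding axis.
axis_a = rng.normal(size=dim); axis_a /= np.linalg.norm(axis_a)
axis_b = rng.normal(size=dim); axis_b /= np.linalg.norm(axis_b)

def responses(a, b, n=200):
    """Noisy population responses for condition (a, b), a and b in {0, 1}."""
    mean = (2 * a - 1) * axis_a + (2 * b - 1) * axis_b
    return mean + 0.3 * rng.normal(size=(n, dim))

# Train a linear decoder for variable a using only the b = 0 conditions...
X_train = np.vstack([responses(0, 0), responses(1, 0)])
y_train = np.array([-1.0] * 200 + [1.0] * 200)
w, *_ = np.linalg.lstsq(np.c_[X_train, np.ones(len(X_train))], y_train, rcond=None)

# ...then test it on the held-out b = 1 conditions it never saw.
X_test = np.vstack([responses(0, 1), responses(1, 1)])
y_test = np.array([False] * 200 + [True] * 200)
pred = (np.c_[X_test, np.ones(len(X_test))] @ w) > 0
print("cross-condition accuracy:", (pred == y_test).mean())
```

High cross-condition accuracy is the signature of an abstract format: the decoder generalizes because the variable is encoded along a consistent direction regardless of the other variable's value.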
The Geometry of Abstraction in the Hippocampus and Prefrontal Cortex, Silvia Bernardi, Marcus K. Benna, Mattia Rigotti, Jérôme Munuera, Stefano Fusi, Daniel Salzman
Published: October 2020