Geometric framework predicts structure from neural functions

In both biological and artificial networks, neural computation rests on the nonlinear summation of large numbers of inputs. Although the structural connectivity matrix of synaptic weights between neurons is a key determinant of overall network function, the quantitative relationship between structure and function is complicated and nuanced: many different networks can give rise to similar functional responses, and the same network can function differently depending on context. It therefore remains largely unknown whether particular patterns of synaptic connectivity are necessary to produce particular network-level computations.

Tirthabir Biswas and James E. Fitzgerald introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold-linear neurons. 
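Concretely, a steady state of such a network must satisfy the threshold-linear fixed-point condition below (a standard formulation written in our own notation, which may differ from the paper's), where r is the vector of steady-state firing rates, W the recurrent weight matrix, F the feedforward weight matrix, and x the input:

```latex
\mathbf{r} = \big[\, W\mathbf{r} + F\mathbf{x} \,\big]_{+}, \qquad [u]_{+} \equiv \max(u, 0)
```

For neurons that are active at the fixed point the rectification drops away, so the condition becomes linear in the weights; this linearity is what makes the full solution space analytically tractable.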

As the authors put it: "We ultimately use this geometric characterization to derive certainty conditions guaranteeing a nonzero synapse between neurons. Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the model architecture."

A generalization that takes noise into account further shows that the geometry of the solution space can undergo topological changes as the allowable error grows, which may offer insight into both neuroscience and machine learning.

Figure 1. Cartoon of theoretical framework. (a) We first specify some steady-state responses of a recurrent threshold-linear neural network receiving feedforward input. (b) We then find all synaptic weight matrices that have fixed points at the specified responses. Red (blue) matrix elements are positive (negative) synaptic weights. (c) When a weight is consistently positive (or consistently negative) across all possibilities, then the model needs a nonzero synaptic connection to generate the responses. We therefore make the experimental prediction that this synapse must exist. We also predict whether the synapse is excitatory or inhibitory. © Biswas
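To make the cartoon concrete, the following minimal sketch in NumPy walks through the same three steps. It is our own illustration, not the authors' code: it assumes for simplicity that every neuron is active in every specified pattern, and it samples the solution space numerically instead of using the paper's analytical characterization and exact certainty conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) Specify steady-state responses: P response patterns for N neurons, each
#     produced by a known M-dimensional input pattern. All rates are positive,
#     so every neuron is active and the threshold-linear condition is linear.
N, M, P = 4, 6, 3
X = rng.uniform(0.5, 1.5, size=(M, P))   # input patterns (one column per pattern)
R = rng.uniform(0.5, 1.5, size=(N, P))   # specified steady-state rates

# (b) For active neurons, r = [W r + F x]_+ reduces to R = W R + F X, so each
#     neuron's stacked weight row w_i (recurrent weights, then feedforward ones)
#     must satisfy A.T @ w_i = r_i with A = [R; X]. The full solution set is a
#     particular solution plus anything in the null space of A.T.
A = np.vstack([R, X])                                 # shape (N+M, P)
W_part = np.linalg.lstsq(A.T, R.T, rcond=None)[0].T   # (N, N+M) particular solution
rank = np.linalg.matrix_rank(A.T)
null_basis = np.linalg.svd(A.T)[2][rank:]             # rows span the null space of A.T

# (c) Sample the solution space and flag weights whose sign never changes --
#     a sampled stand-in for the paper's exact certainty conditions.
samples = np.stack([
    W_part + rng.normal(scale=5.0, size=(N, null_basis.shape[0])) @ null_basis
    for _ in range(2000)
])
always_pos = (samples > 0).all(axis=0)   # candidate obligatory excitatory synapses
always_neg = (samples < 0).all(axis=0)   # candidate obligatory inhibitory synapses
print("sign-certain weights:", int(always_pos.sum() + always_neg.sum()), "of", N * (N + M))
```

Because the sampling only probes finitely many solutions, a weight flagged here as sign-certain is merely a candidate prediction; the paper's analytical conditions are what make such predictions rigorous.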

They analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can produce the specified responses from the network inputs, under the assumption that the number of specified response patterns does not exceed the number of input synapses onto each neuron.

In summary, they found all threshold-linear recurrent neural networks that produce a given set of fixed points, under the assumption that each neuron has at least as many potential input synapses as there are fixed points. The results reveal an intuitively simple geometry of the solution space and describe a coordinate transformation that makes it straightforward to classify weight-space dimensions as constrained, semi-constrained, or unconstrained.
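As a rough sketch of that classification (our own notation, and a simplification of the paper's coordinate transformation): stack the specified rates and inputs of pattern \mu into a vector a_\mu and let w_i be neuron i's combined recurrent and feedforward weight vector. Directions of weight space along patterns in which the neuron is active are pinned by equalities, directions along patterns in which it is silent are bounded on only one side by the rectification, and directions orthogonal to all patterns are free:

```latex
\mathbf{w}_i \cdot \mathbf{a}_\mu = r_{i\mu} \quad (\text{neuron } i \text{ active in pattern } \mu:\ \text{constrained}), \qquad
\mathbf{w}_i \cdot \mathbf{a}_\mu \le 0 \quad (\text{neuron } i \text{ silent in pattern } \mu:\ \text{semi-constrained})
```

with every remaining orthogonal direction unconstrained.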

They then extended this geometric approach to calculate error surfaces for models that only approximately produce the specified fixed points. Using this geometric representation of the error surface to examine structure-function relationships in neural networks, they found, in particular, that it is often easy to pinpoint the synapses a network needs in order to function. The theory was tested with simulations of feedforward and recurrent neural networks.
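A hedged sketch of such an error surface, reusing the notation of the earlier code snippet (this particular squared-error measure is our illustrative choice and not necessarily the one analyzed in the paper):

```python
def fixed_point_error(W_all, R, X):
    """Squared mismatch between the specified rates R and the rates implied by
    the threshold-linear condition r = [W r + F x]_+, for a candidate matrix
    W_all whose rows stack each neuron's recurrent and feedforward weights."""
    A = np.vstack([R, X])            # one column per specified pattern
    net = W_all @ A                  # net input to each neuron in each pattern
    return np.sum((np.maximum(net, 0.0) - R) ** 2)

# The noise-tolerant solution space at allowed error eps is the sublevel set
# {W_all : fixed_point_error(W_all, R, X) <= eps}; the geometry and topology of
# these sets can change as eps grows.
```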

Geometric framework to predict structure from function in neural networks, Tirthabir Biswas and James E. Fitzgerald
Published: June 2022
DOI: 10.1103/PhysRevResearch.4.023255
