Identifying Connectivity Distributions from Neural Dynamics Using Flows

Explainable & Ethical AI
Published: arXiv:2603.26506v1
Authors

Timothy Doyeon Kim, Ulises Pereira-Obilinovic, Yiliu Wang, Eric Shea-Brown, Uygar Sümbül

Abstract

Connectivity structure shapes neural computation, but inferring this structure from population recordings is degenerate: multiple connectivity structures can generate identical dynamics. Recent work uses low-rank recurrent neural networks (lrRNNs) to infer low-dimensional latent dynamics and connectivity structure from observed activity, enabling a mechanistic interpretation of the dynamics. However, standard approaches for training lrRNNs can recover spurious structures irrelevant to the underlying dynamics. We first characterize the identifiability of connectivity structures in lrRNNs and determine conditions under which a unique solution exists. Then, to find such solutions, we develop an inference framework based on maximum entropy and continuous normalizing flows (CNFs), trained via flow matching. Instead of estimating a single connectivity matrix, our method learns the maximally unbiased distribution over connection weights consistent with observed dynamics. This approach captures complex yet necessary distributions such as heavy-tailed connectivity found in empirical data. We validate our method on synthetic datasets with connectivity structures that generate multistable attractors, limit cycles, and ring attractors, and demonstrate its applicability in recordings from rat frontal cortex during decision-making. Our framework shifts circuit inference from recovering connectivity to identifying which connectivity structures are computationally required, and which are artifacts of underconstrained inference.

Paper Summary

Problem
The main problem this paper addresses is that current methods for inferring neural connectivity from population recordings are underconstrained and degenerate: multiple connectivity structures can generate identical dynamics, making it difficult to pin down the underlying circuit mechanism. Moreover, existing approaches typically return a single point estimate of the recurrent weights, which can be misleading given the diversity of synaptic connectivity observed in biology.
Key Innovation
The key innovation of this work is the development of an inference framework called Connector, which learns distributions over synaptic connectivity consistent with observed population dynamics. Instead of estimating a single connectivity matrix, Connector learns the maximally unbiased distribution over connection weights. This approach captures complex yet necessary distributions, such as heavy-tailed connectivity found in empirical data.
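The flow-matching training signal behind such a framework can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the Student-t "data" distribution, the linear velocity field fit by least squares, and all variable names are my own assumptions, standing in for the CNF that Connector would train.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2          # dimension of flattened connectivity parameters (toy)
B = 256        # batch size

# Toy heavy-tailed "data" distribution over connection weights,
# standing in for connectivity samples consistent with the dynamics.
w1 = rng.standard_t(df=3, size=(B, d))
# Base distribution: standard Gaussian noise.
w0 = rng.normal(size=(B, d))

# Flow matching: interpolate linearly between noise and data, and
# regress a velocity field onto the straight-path target velocity.
t = rng.uniform(size=(B, 1))
wt = (1 - t) * w0 + t * w1
u = w1 - w0                        # conditional target velocity

# Toy velocity model v(w, t) = [w, t, 1] @ theta, fit in closed form.
X = np.hstack([wt, t, np.ones((B, 1))])
theta, *_ = np.linalg.lstsq(X, u, rcond=None)
mse = np.mean((X @ theta - u) ** 2)

# Sampling: Euler-integrate dw/dt = v(w, t) from fresh noise.
w = rng.normal(size=(B, d))
n_steps = 50
for k in range(n_steps):
    tk = np.full((B, 1), k / n_steps)
    w = w + (1 / n_steps) * np.hstack([w, tk, np.ones((B, 1))]) @ theta
```

A real CNF would replace the linear field with a neural network and could represent heavy-tailed weight distributions that a single point estimate, or a Gaussian posterior, cannot.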
Practical Impact
This research has practical implications for understanding neural circuit mechanisms and building more accurate models of brain function. By learning distributions over synaptic connectivity, Connector can identify which connectivity structures are computationally required and which are artifacts of underconstrained inference. This helps researchers better understand how neural circuits generate computation and make predictions about neural dynamics. The method also applies to real-world data, such as recordings from rat frontal cortex during decision-making.
Analogy / Intuitive Explanation
Think of neural connectivity as a complex web of relationships between neurons. Current methods are like taking a single snapshot of this web, which can be misleading because many different webs produce the same observed behavior. Connector instead captures the entire family of webs consistent with that behavior, revealing which strands appear in every plausible web, and are therefore required for the computation, and which vary freely from one web to the next, and are therefore artifacts of the inference. This gives researchers a more honest picture of what the data can and cannot determine about the underlying circuit.
Paper Information
Categories: q-bio.NC, cs.LG
Published Date:
arXiv ID: 2603.26506v1