EventQueues: Autodifferentiable spike event queues for brain simulation on AI accelerators

arXiv: 2512.05906v1
Authors

Lennart P. L. Landsmeer, Amirreza Movahedin, Said Hamdioui, Christos Strydis

Abstract

Spiking neural networks (SNNs), central to computational neuroscience and neuromorphic machine learning (ML), require efficient simulation and gradient-based training. While AI accelerators offer promising speedups, gradient-based SNNs typically implement sparse spike events using dense, memory-heavy data-structures. Existing exact gradient methods lack generality, and current simulators often omit or inefficiently handle delayed spikes. We address this by deriving gradient computation through spike event queues, including delays, and implementing memory-efficient, gradient-enabled event queue structures. These are benchmarked across CPU, GPU, TPU, and LPU platforms. We find that queue design strongly shapes performance. CPUs, as expected, perform well with traditional tree-based or FIFO implementations, while GPUs excel with ring buffers for smaller simulations, yet under higher memory pressure prefer more sparse data-structures. TPUs seem to favor an implementation based on sorting intrinsics. Selective spike dropping provides a simple performance-accuracy trade-off, which could be enhanced by future autograd frameworks adapting diverging primal/tangent data-structures.
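To make the simplest structure mentioned above concrete, the sketch below implements a dense ring-buffer delay line in JAX. The buffer depth, neuron count, delays, and the push/pop helpers are illustrative assumptions for this summary, not the data structures benchmarked in the paper.

```python
# Minimal sketch of a dense ring-buffer spike delay line in JAX.
# All names, sizes, and delays are illustrative assumptions,
# not the implementations evaluated in the paper.
import jax.numpy as jnp

MAX_DELAY = 8    # ring-buffer depth in timesteps (assumed)
N_NEURONS = 4    # number of source neurons (assumed)


def push(buffer, head, spikes, delays):
    """Scatter this step's spike values into the slots where they mature."""
    slots = (head + delays) % MAX_DELAY
    return buffer.at[slots, jnp.arange(N_NEURONS)].add(spikes)


def pop(buffer, head):
    """Deliver and clear the spikes that mature at the current timestep."""
    delivered = buffer[head]
    buffer = buffer.at[head].set(0.0)
    return buffer, (head + 1) % MAX_DELAY, delivered


buffer = jnp.zeros((MAX_DELAY, N_NEURONS))
head = 0
delays = jnp.array([1, 3, 2, 5])            # per-neuron delays in steps
spikes = jnp.array([1.0, 0.0, 1.0, 1.0])    # spike values emitted at t = 0

buffer = push(buffer, head, spikes, delays)
for t in range(MAX_DELAY):
    buffer, head, delivered = pop(buffer, head)
    print(t, delivered)
```

The dense buffer is what makes this variant memory-heavy: it stores a full slot for every neuron and every possible delay, which is exactly the cost the sparser queue structures in the paper aim to reduce.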

Paper Summary

Problem
Spiking neural networks (SNNs) are central to both computational neuroscience and neuromorphic machine learning, but training and simulating them efficiently is difficult because their activity consists of sparse, delayed spike events. Gradient-based SNN frameworks typically represent these sparse events with dense, memory-heavy data structures, and existing simulators often omit transmission delays or handle them inefficiently, leading to slow, memory-hungry simulations on AI accelerators.
Key Innovation
The authors introduce EventQueues: they derive how gradients can be computed through spike event queues, including transmission delays, and implement memory-efficient, gradient-enabled queue structures. These implementations (tree-based, FIFO, ring-buffer, and sort-based variants) are benchmarked across CPU, GPU, TPU, and LPU platforms, showing that the best queue design depends strongly on the target hardware.
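To give a flavor of how gradients can flow through such a queue, the hedged sketch below routes continuous spike values through a fixed-delay buffer and differentiates a weighted readout with jax.grad. The readout weights, decay factor, and delays are placeholders, and the paper's actual derivation covers several queue structures beyond this dense example.

```python
# Hypothetical sketch: autodiff tracks the spike *values* routed through a
# delay buffer, while the integer delays stay fixed. Not the paper's code;
# weights, delays, and the readout are placeholder assumptions.
import jax
import jax.numpy as jnp

MAX_DELAY, N_NEURONS = 8, 4
DELAYS = jnp.array([1, 3, 2, 5])              # fixed transmission delays (steps)
WEIGHTS = jnp.array([0.5, -1.0, 2.0, 0.25])   # placeholder readout weights


def delayed_readout(spike_values):
    """Scatter spikes into their delay slots, then accumulate a weighted sum."""
    buffer = jnp.zeros((MAX_DELAY, N_NEURONS))
    buffer = buffer.at[DELAYS, jnp.arange(N_NEURONS)].add(spike_values)
    total = 0.0
    for t in range(MAX_DELAY):
        # Each delivered slot contributes with an exponentially decaying factor.
        total = total + jnp.exp(-0.1 * t) * jnp.dot(WEIGHTS, buffer[t])
    return total


grads = jax.grad(delayed_readout)(jnp.array([1.0, 0.0, 1.0, 1.0]))
print(grads)  # per-neuron sensitivity, attenuated according to its delay
```

Because the scatter into the buffer is linear in the spike values, reverse-mode autodiff routes the gradient back through the same delay slots, which is the basic mechanism the queue-level gradient derivation builds on.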
Practical Impact
By making spike event queues both memory-efficient and differentiable, EventQueues enables faster gradient-based training and simulation of SNNs on commodity AI accelerators, and its benchmarks give practitioners concrete guidance on which queue structure suits which platform (e.g., ring buffers on GPUs for smaller simulations, sort-based queues on TPUs). Selective spike dropping adds a simple performance-accuracy trade-off. This benefits large-scale brain simulation in computational neuroscience and neuromorphic machine learning, and, more broadly, spiking approaches to tasks such as image and speech recognition or robotics.
Analogy / Intuitive Explanation
Imagine a busy coffee shop where customers (neurons) order coffee (spike events) at different times. The barista (event queue) needs to manage the orders efficiently to ensure that each customer receives their coffee at the right time. The EventQueues system is like a high-tech barista that can handle a large number of orders (spike events) and deliver them to the customers (neurons) in a timely and efficient manner, even when there are delays and complex interactions between the customers.
Paper Information
Categories: cs.NE
Published Date:
arXiv ID: 2512.05906v1
