Leveraging Classical Algorithms for Graph Neural Networks

Published on arXiv: 2510.21574v1
Authors

Jason Wu, Petar Veličković

Abstract

Neural networks excel at processing unstructured data but often fail to generalise out-of-distribution, whereas classical algorithms guarantee correctness but lack flexibility. We explore whether pretraining Graph Neural Networks (GNNs) on classical algorithms can improve their performance on molecular property prediction tasks from the Open Graph Benchmark: ogbg-molhiv (HIV inhibition) and ogbg-molclintox (clinical toxicity). GNNs trained on 24 classical algorithms from the CLRS Algorithmic Reasoning Benchmark are used to initialise and freeze selected layers of a second GNN for molecular prediction. Compared to a randomly initialised baseline, the pretrained models achieve consistent wins or ties, with the Segments Intersect algorithm pretraining yielding a 6% absolute gain on ogbg-molhiv and Dijkstra pretraining achieving a 3% gain on ogbg-molclintox. These results demonstrate that embedding classical algorithmic priors into GNNs provides useful inductive biases, boosting performance on complex, real-world graph data.
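
To make the pipeline concrete, here is a minimal, hypothetical sketch of the first stage: pretraining a small message-passing GNN to imitate the output of one classical algorithm (Dijkstra-style shortest-path distances on random graphs). This stands in for the paper's CLRS pretraining stage; the architecture, training loop, and file name below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: pretrain a small message-passing "processor" GNN to
# imitate one classical algorithm (shortest-path distances from a source node,
# i.e. Dijkstra's output on unit-weight graphs). Illustrative only.
import torch
import torch.nn as nn
import networkx as nx


class MPNNProcessor(nn.Module):
    """A minimal message-passing processor whose weights are reused later."""

    def __init__(self, hidden_dim=64, num_steps=8):
        super().__init__()
        self.num_steps = num_steps
        self.encoder = nn.Linear(1, hidden_dim)              # encode per-node input
        self.message = nn.Linear(2 * hidden_dim, hidden_dim)
        self.update = nn.GRUCell(hidden_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, 1)              # predict a scalar per node

    def forward(self, x, edge_index):
        h = torch.relu(self.encoder(x))
        src, dst = edge_index
        for _ in range(self.num_steps):
            m = torch.relu(self.message(torch.cat([h[src], h[dst]], dim=-1)))
            agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming messages
            h = self.update(agg, h)
        return self.decoder(h).squeeze(-1), h


def dijkstra_example(n=16, p=0.3):
    """Random graph whose node labels are Dijkstra distances from node 0."""
    g = nx.fast_gnp_random_graph(n, p)
    dist = nx.single_source_dijkstra_path_length(g, 0)       # unit edge weights
    x = torch.zeros(n, 1)
    x[0, 0] = 1.0                                            # mark the source node
    edges = list(g.edges()) + [(v, u) for u, v in g.edges()]  # symmetrise
    if not edges:                                            # avoid empty-graph corner case
        edges = [(0, 0)]
    edge_index = torch.tensor(edges, dtype=torch.long).t()
    y = torch.tensor([float(dist.get(i, n)) for i in range(n)])  # unreachable -> n
    return x, edge_index, y


model = MPNNProcessor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                                      # tiny supervised loop
    x, edge_index, y = dijkstra_example()
    pred, _ = model(x, edge_index)
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

torch.save(model.state_dict(), "algo_pretrained.pt")         # reused in the next sketch
```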

Paper Summary

Problem
Graph Neural Networks (GNNs) excel at processing graph data but often struggle to generalize beyond the training data distribution. Classical algorithms, on the other hand, guarantee correctness but lack flexibility. This dichotomy raises the question of whether classical algorithms can be leveraged to improve the performance of GNNs on real-world tasks.
Key Innovation
The researchers explore whether pretraining GNNs on classical algorithms can improve their performance on molecular property prediction tasks. They train GNNs on 24 classical algorithms from the CLRS Algorithmic Reasoning Benchmark and use the learned weights to initialize and freeze selected layers of a second GNN for molecular prediction.
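
The sketch below, continuing the hypothetical example above, illustrates the transfer step: the algorithmically pretrained message-passing weights are copied into a new GNN for molecular property prediction and frozen, so only the task-specific encoder and readout layers are trained. Which layers the authors actually freeze, the layer names, and the fine-tuning details here are assumptions made for illustration.

```python
# Hypothetical sketch of the transfer step: reuse and freeze the pretrained
# processor weights inside a graph-level molecular classifier.
import torch
import torch.nn as nn


class MolGNN(nn.Module):
    """Graph-level classifier reusing the pretrained message/update layers."""

    def __init__(self, node_feat_dim, hidden_dim=64, num_steps=8):
        super().__init__()
        self.num_steps = num_steps
        self.encoder = nn.Linear(node_feat_dim, hidden_dim)   # task-specific, trained
        self.message = nn.Linear(2 * hidden_dim, hidden_dim)  # loaded from pretraining, frozen
        self.update = nn.GRUCell(hidden_dim, hidden_dim)      # loaded from pretraining, frozen
        self.readout = nn.Linear(hidden_dim, 1)               # task-specific, trained

    def forward(self, x, edge_index):
        h = torch.relu(self.encoder(x))
        src, dst = edge_index
        for _ in range(self.num_steps):
            m = torch.relu(self.message(torch.cat([h[src], h[dst]], dim=-1)))
            agg = torch.zeros_like(h).index_add_(0, dst, m)
            h = self.update(agg, h)
        return self.readout(h.mean(dim=0))                    # mean-pool nodes, then classify


model = MolGNN(node_feat_dim=9)     # ogbg-mol* graphs carry 9 raw atom features per node
pretrained = torch.load("algo_pretrained.pt")                 # weights from the sketch above

# Initialise the shared layers from the algorithmically pretrained processor...
model.message.load_state_dict(
    {k[len("message."):]: v for k, v in pretrained.items() if k.startswith("message.")})
model.update.load_state_dict(
    {k[len("update."):]: v for k, v in pretrained.items() if k.startswith("update.")})

# ...and freeze them so that only the encoder and readout are fine-tuned.
for module in (model.message, model.update):
    for p in module.parameters():
        p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
# Fine-tuning would then loop over molecular graphs (e.g. loaded with
# ogb.graphproppred.PygGraphPropPredDataset(name="ogbg-molhiv"), with the
# integer atom features cast to float), computing loss_fn(model(x, edge_index), y)
# for each graph and stepping the optimiser on the unfrozen parameters.
```

In the paper's setup the downstream tasks are ogbg-molhiv and ogbg-molclintox from the Open Graph Benchmark; the trailing comment indicates where that data loading would plug into this sketch.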
Practical Impact
By leveraging classical algorithms, GNNs can be endowed with inductive biases that improve molecular property prediction, which could aid the discovery of new active compounds and more effective treatments for diseases such as HIV. More broadly, the results show that classical algorithmic priors boost GNN performance on complex, real-world graph data.
Analogy / Intuitive Explanation
Imagine a graph as a map of a city, where nodes represent buildings and edges represent roads. A GNN is like a GPS system that can navigate the city, but it may get lost if the map is too complex. Pretraining the GNN on classical algorithms is like giving the GPS system a set of reliable rules to follow, such as "always take the shortest path" or "avoid traffic congestion." With these rules built in, the GNN navigates complex graphs more reliably, which translates into more accurate predictions of molecular properties.
Paper Information
Categories: cs.LG
Published Date:
arXiv ID: 2510.21574v1
