Bureaucratic Silences: What the Canadian AI Register Reveals, Omits, and Obscures

Explainable & Ethical AI
arXiv: 2604.15514v1
Authors

Dipto Das, Christelle Tessono, Syed Ishtiaque Ahmed, Shion Guha

Abstract

In November 2025, the Government of Canada operationalized its commitment to transparency by releasing its first Federal AI Register. In this paper, we argue that such registers are not neutral mirrors of government activity, but active instruments of ontological design that configure the boundaries of accountability. We analyzed the Register's complete dataset of 409 systems using the Algorithmic Decision-Making Adapted for the Public Sector (ADMAPS) framework, combining quantitative mapping with deductive qualitative coding. Our findings reveal a sharp divergence between the rhetoric of "sovereign AI" and the reality of bureaucratic practice: while 86% of systems are deployed internally for efficiency, the Register systematically obscures the human discretion, training, and uncertainty management required to operate them. By privileging technical descriptions over sociotechnical context, the Register constructs an ontology of AI as "reliable tooling" rather than "contestable decision-making." We conclude that without a shift in design, such transparency artifacts risk automating accountability into a performative compliance exercise, offering visibility without contestability.

Paper Summary

Problem
The paper addresses the limits of transparency and accountability in government use of artificial intelligence (AI) systems. It focuses on the Canadian Federal AI Register, released in November 2025, and argues that the Register presents an incomplete picture of how AI is used in the public sector: it systematically obscures the human discretion, training, and uncertainty management required to operate AI systems.
Key Innovation
The key innovation of this work is the concept of "bureaucratic silences," which refers to the way in which AI registers structure what is disclosed about public sector AI systems and what remains illegible or off the record. The researchers use the Algorithmic Decision-Making Adapted for the Public Sector (ADMAPS) framework to analyze the Canadian AI Register and identify these silences. This framework combines quantitative mapping with deductive qualitative coding to examine the register's data and reveal its limitations.
Practical Impact
This research has significant practical implications for the design of AI registers and the governance of AI systems in the public sector. The paper argues that AI registers should be understood as instruments of ontological design that shape how accountability is defined and enacted. To support meaningful democratic oversight and public trust, future registers should disclose more about human discretion, training, and uncertainty management. This means designing registers as sociomaterial practices that actively shape which beings, relationships, and ways of knowing become visible, rather than treating them as neutral records of government activity.
Analogy / Intuitive Explanation
Imagine a transparency report that shows only the tip of an iceberg while hiding the underlying processes and decisions that shape the system. According to the researchers, this is what the Canadian AI Register does: it lists the AI systems used in the public sector but does not reveal the human discretion, training, and uncertainty management required to operate them. The result is a "bureaucratic silence" that obscures how AI is actually used in government.
Paper Information
Categories: cs.AI cs.CY cs.HC
arXiv ID: 2604.15514v1