Futurity as Infrastructure: A Techno-Philosophical Interpretation of the AI Lifecycle

Explainable & Ethical AI
arXiv: 2508.15680v1
Authors

Mark Coté, Susana Aires

Abstract

This paper argues that a techno-philosophical reading of the EU AI Act provides insight into the long-term dynamics of data in AI systems, specifically, how the lifecycle from ingestion to deployment generates recursive value chains that challenge existing frameworks for Responsible AI. We introduce a conceptual tool to frame the AI pipeline, spanning data, training regimes, architectures, feature stores, and transfer learning. Using cross-disciplinary methods, we develop a technically grounded and philosophically coherent analysis of regulatory blind spots. Our central claim is that what remains absent from policymaking is an account of the dynamic of becoming that underpins both the technical operation and economic logic of AI. To address this, we advance a formal reading of AI inspired by Simondonian philosophy of technology, reworking his concept of individuation to model the AI lifecycle, including the pre-individual milieu, individuation, and individuated AI. To translate these ideas, we introduce futurity: the self-reinforcing lifecycle of AI, where more data enhances performance, deepens personalisation, and expands application domains. Futurity highlights the recursively generative, non-rivalrous nature of data, underpinned by infrastructures like feature stores that enable feedback, adaptation, and temporal recursion. Our intervention foregrounds escalating power asymmetries, particularly the tech oligarchy whose infrastructures of capture, training, and deployment concentrate value and decision-making. We argue that effective regulation must address these infrastructural and temporal dynamics, and propose measures including lifecycle audits, temporal traceability, feedback accountability, recursion transparency, and a right to contest recursive reuse.

Paper Summary

Problem
The main problem addressed is the need for a regulatory framework for Artificial Intelligence (AI) that accounts for the long-term dynamics of data within AI systems. The authors argue that existing frameworks are insufficient because they ignore the recursive value chains generated by the AI lifecycle, chains that deepen power asymmetries and concentrate value and decision-making power in a tech oligarchy.
Key Innovation
The paper introduces a new conceptual tool for critically framing the AI pipeline, spanning data, training regimes, deep learning architectures, feature stores, and transfer learning processes. The authors also propose a formal reading of AI inspired by Gilbert Simondon's philosophy of technology, reworking his concept of individuation to model AI's developmental lifecycle. This approach highlights the recursively generative, non-rivalrous nature of data in deep learning systems and the temporal dynamics of AI's becoming.
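To make the feature-store mechanics concrete, here is a minimal Python sketch. It is not drawn from the paper itself: the FeatureStore class, the feature name, and the consumer names are hypothetical. It shows how a stored feature can be read by any number of downstream models without being depleted, the non-rivalrous reuse on which the authors' notion of futurity turns.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeatureStore:
    # Toy feature store: a feature written once can be read by any
    # number of later consumers, illustrating non-rivalrous reuse.
    _features: dict = field(default_factory=dict)
    read_log: list = field(default_factory=list)

    def write(self, name, values):
        self._features[name] = values

    def read(self, name, consumer):
        # Reading does not consume the feature; it only logs the reuse.
        # The log is what makes temporal recursion visible at all.
        self.read_log.append(
            (name, consumer, datetime.now(timezone.utc).isoformat())
        )
        return self._features[name]

store = FeatureStore()
store.write("user_clicks_7d", [0.2, 0.8, 0.5])

# The same stored feature feeds two distinct downstream models:
# nothing is used up, and each read compounds the feature's value.
recommender_input = store.read("user_clicks_7d", consumer="recommender_v1")
ad_ranker_input = store.read("user_clicks_7d", consumer="ad_ranker_v3")
print(store.read_log)

The read log, rather than the feature values, is the point of the sketch: it is the minimal record that would let an outside observer see recursive reuse happening at all.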
Practical Impact
The research has several practical implications, chief among them the need for regulatory frameworks that account for the infrastructural and temporal dynamics of AI's becoming. The authors advance several regulatory proposals, such as lifecycle-based audit regimes, temporal traceability, feedback accountability, and an AI windfall tax to support a public Futurity Value Redistribution Fund. These proposals aim to reorient the flow of AI futurity towards public value and to ensure the benefits of AI are shared more equitably.
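As a hedged illustration of what lifecycle-based audits and temporal traceability might look like in code (the paper proposes the policy, not an implementation; every identifier below is hypothetical), consider an append-only trail recording which datasets and prior models each training or reuse step consumed:

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class LifecycleEvent:
    # One auditable step in a model's lifecycle.
    model_id: str
    stage: str        # e.g. "ingestion", "training", "deployment", "reuse"
    inputs: tuple     # identifiers of datasets, features, or prior models consumed
    timestamp: str

class AuditTrail:
    # Append-only trail so that recursive reuse of data stays traceable.
    def __init__(self):
        self._events = []

    def record(self, model_id, stage, inputs):
        self._events.append(LifecycleEvent(
            model_id, stage, tuple(inputs),
            datetime.now(timezone.utc).isoformat(),
        ))

    def provenance(self, artefact):
        # Every event that consumed the artefact: the query a regulator,
        # or a data subject contesting recursive reuse, would run.
        return [e for e in self._events if artefact in e.inputs]

trail = AuditTrail()
trail.record("model_v1", "training", ["corpus_2023", "user_logs_q1"])
trail.record("model_v2", "reuse", ["model_v1", "user_logs_q2"])
print(trail.provenance("user_logs_q1"))

A provenance query of this kind is the technical hook behind the proposed right to contest recursive reuse: without a temporal record of which artefacts fed which models, there is nothing concrete to contest.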
Analogy / Intuitive Explanation
The concept of futurity can be thought of as a self-reinforcing cycle: greater data availability enhances model performance, deepens personalisation, and opens new domains of application, each of which generates yet more data. The cycle resembles a snowball effect, in which initial momentum builds on itself and value grows exponentially. But just as a snowball can become uncontrollable and destructive, the self-reinforcing cycle of AI futurity can entrench power asymmetries and concentrate value and decision-making power in a few individuals or organisations. The regulatory measures proposed above are intended to mitigate these effects and keep the benefits of AI broadly shared.
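A toy numerical model, our illustration rather than the authors', makes the snowball intuition concrete: let accumulated data drive performance with diminishing returns, let performance drive adoption, and let adoption generate new data.

# Toy futurity loop: data -> performance -> adoption -> data.
# The exponents and coefficients are arbitrary; the feedback
# structure, not the numbers, is the point.
data = 1.0  # accumulated training data, arbitrary units
for year in range(1, 6):
    performance = data ** 0.5           # diminishing returns on data alone
    adoption = 1.0 + 0.5 * performance  # better models attract more users
    data *= adoption                    # more users generate more data
    print(f"year {year}: data={data:.2f}, performance={performance:.2f}")

Even though each individual step shows diminishing returns, the multiplicative feedback makes data accumulation accelerate year on year, which is why the authors locate the regulatory problem in the loop itself rather than in any single model.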
Paper Information

Categories: cs.AI, cs.HC (ACM classes: I.2.6; I.2.11; K.4.1; K.6.0)

arXiv ID: 2508.15680v1