Using Large Language Models and Knowledge Graphs to Improve the Interpretability of Machine Learning Models in Manufacturing

Generative AI & LLMs
Published: arXiv:2604.16280v1
Authors

Thomas Bayer, Alexander Lohr, Sarah Weiß, Bernd Michelberger, Wolfram Höpken

Abstract

Explaining Machine Learning (ML) results in a transparent and user-friendly manner remains a challenging task in Explainable Artificial Intelligence (XAI). In this paper, we present a method to enhance the interpretability of ML models by using a Knowledge Graph (KG). We store domain-specific data along with ML results and their corresponding explanations, establishing a structured connection between domain knowledge and ML insights. To make these insights accessible to users, we designed a selective retrieval method in which relevant triplets are extracted from the KG and processed by a Large Language Model (LLM) to generate user-friendly explanations of ML results. We evaluated our method in a manufacturing environment using the XAI Question Bank. Beyond standard questions, we introduced more complex, tailored questions that highlight the strengths of our approach. In total, we evaluated 33 questions, analyzing responses with quantitative metrics such as accuracy and consistency, as well as qualitative ones such as clarity and usefulness. Our contribution is both theoretical and practical: from a theoretical perspective, we present a novel approach that enables LLMs to dynamically access a KG in order to improve the explainability of ML results. From a practical perspective, we provide empirical evidence that such explanations can be successfully applied in real-world manufacturing environments, supporting better decision-making in manufacturing processes.

Paper Summary

Problem
The main problem addressed by this paper is the lack of interpretability of Machine Learning (ML) models in manufacturing. Although ML has brought significant improvements to manufacturing and many other domains, users often cannot understand how specific ML results are produced, which makes it difficult for them to trust and act on those results. Closing this gap is essential for supporting better decision-making in manufacturing processes.
Key Innovation
The key innovation of this work is the use of a Knowledge Graph (KG) and a Large Language Model (LLM) to enhance the interpretability of ML models. The KG is used to store domain-specific knowledge along with ML results, creating a structured link between data, models, and insights. The LLM is then used to selectively retrieve relevant information from the KG and generate user-friendly explanations of ML results.
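To make this pipeline concrete, below is a minimal sketch of the KG-plus-LLM loop in Python. All identifiers here (the example namespace, Run_17, SpindleTemperature, llm_complete, and so on) are illustrative assumptions rather than names from the paper, and the paper's selective retrieval is approximated by a simple traversal outward from the node the user is asking about.

```python
# Minimal sketch of the KG + LLM explanation pipeline described above.
# All identifiers (the example namespace, Run_17, SpindleTemperature,
# llm_complete, ...) are illustrative assumptions, not names from the paper.
# Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/manufacturing#")

# 1) Store domain knowledge, an ML result, and its explanation as triplets.
kg = Graph()
kg.bind("ex", EX)
kg.add((EX.Run_17, RDF.type, EX.QualityPrediction))
kg.add((EX.Run_17, EX.predictedClass, Literal("defective")))
kg.add((EX.Run_17, EX.topFeature, EX.SpindleTemperature))
kg.add((EX.SpindleTemperature, EX.featureImportance, Literal(0.62)))
kg.add((EX.SpindleTemperature, EX.measuredBy, EX.Sensor_42))

# 2) Selective retrieval: collect only the triplets reachable from the
#    node the user is asking about, instead of dumping the whole graph.
def retrieve_context(graph: Graph, node, seen=None) -> list[str]:
    seen = set() if seen is None else seen
    if node in seen:                       # guard against cycles in the KG
        return []
    seen.add(node)
    facts = []
    for s, p, o in graph.triples((node, None, None)):
        facts.append(" ".join(t.n3(graph.namespace_manager) for t in (s, p, o)))
        if (o, None, None) in graph:       # recurse into linked entities
            facts.extend(retrieve_context(graph, o, seen))
    return facts

# 3) Verbalization: hand the retrieved triplets to an LLM. llm_complete is a
#    placeholder for whatever model API is actually used.
def llm_complete(prompt: str) -> str:
    return f"[LLM answer based on a {len(prompt)}-character prompt]"

facts = "\n".join(retrieve_context(kg, EX.Run_17))
prompt = ("Using only the facts below, explain this ML prediction to a "
          f"machine operator in plain language.\n\nFacts:\n{facts}\n")
print(llm_complete(prompt))
```

Restricting the prompt to the retrieved subgraph, rather than the full KG, is what keeps the LLM's explanation grounded in the stored domain knowledge and ML results.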
Practical Impact
This research can improve decision-making in manufacturing by providing users with clear, understandable explanations of ML results. By integrating domain-specific knowledge with ML insights, the proposed approach helps users trust and act on ML outcomes. Because it can be deployed in real-world manufacturing environments, its practical impact on day-to-day production decisions is substantial.
Analogy / Intuitive Explanation
Imagine trying to understand a complex recipe without knowing the ingredients, the cooking techniques, or the kitchen equipment involved. You might get a general idea of what to do, but you could not fully understand the process. Similarly, users trying to interpret ML results often struggle to understand how specific results were generated without access to the underlying knowledge and data. The proposed approach is like a detailed recipe book that lays out the ingredients, techniques, and equipment behind each dish, making it easier for users to understand and trust the outcome.
Paper Information
Categories: cs.AI
Published Date:
arXiv ID: 2604.16280v1
