Generating Part-Based Global Explanations Via Correspondence

Published: arXiv: 2509.15393v1
Authors

Kunal Rathore, Prasad Tadepalli

Abstract

Deep learning models are notoriously opaque. Existing explanation methods often focus on localized visual explanations for individual images. Concept-based explanations, while offering global insights, require extensive annotations, incurring significant labeling cost. We propose an approach that leverages user-defined part labels from a limited set of images and efficiently transfers them to a larger dataset. This enables the generation of global symbolic explanations by aggregating part-based local explanations, ultimately providing human-understandable explanations for model decisions on a large scale.
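The core mechanism is label transfer by correspondence: part annotations provided on a handful of images are propagated to unlabeled images by matching regions in a shared feature space. The sketch below is a minimal illustration of that idea, not the paper's exact procedure; the feature extractor, the region representation, and the use of cosine similarity are all assumptions made for the example.

```python
import numpy as np

def transfer_part_labels(annotated_feats, annotated_labels, unlabeled_feats):
    """Propagate user-defined part labels to new regions by nearest-neighbor
    correspondence in feature space (illustrative sketch only).

    annotated_feats : (n, d) features of regions the user labeled
    annotated_labels: length-n list of part names ("wing", "beak", ...)
    unlabeled_feats : (m, d) features of candidate regions in new images
    """
    # L2-normalize so the dot product equals cosine similarity
    a = annotated_feats / np.linalg.norm(annotated_feats, axis=1, keepdims=True)
    u = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
    sims = u @ a.T                    # (m, n) similarity between every pair
    nearest = sims.argmax(axis=1)     # best-matching annotated region per row
    return [annotated_labels[i] for i in nearest]
```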

Paper Summary

Problem
Deep learning models, such as those used in image classification, are often "black-box" systems that are difficult to understand and trust. Despite impressive results across many fields, their complexity makes it hard to see why they reach a given decision, which is especially concerning in human-interactive and safety-critical settings.
Key Innovation
The authors propose GEPC (Global Explanations via Part Correspondence), which combines local explanation search, part correspondence, and greedy set cover to produce global symbolic explanations of model decisions. User-defined part labels from a small set of images are transferred efficiently to a larger dataset, so human-understandable explanations can be generated at scale.
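To make the greedy set-cover step concrete, here is a minimal sketch under simple assumptions: each image contributes the set of part labels its local explanation flagged as responsible, and parts are picked greedily until every image's explanation is covered. The data layout and function name are illustrative, not the paper's implementation.

```python
def greedy_global_explanation(local_explanations):
    """Pick a small set of parts that covers every image's local explanation.

    local_explanations: dict mapping image_id -> set of part names that the
    local explanation identified as responsible for the model's prediction.
    """
    uncovered = set(local_explanations)                  # images not yet explained
    all_parts = set().union(*local_explanations.values())
    chosen = []
    while uncovered:
        # greedily take the part that explains the most remaining images
        best = max(all_parts,
                   key=lambda p: sum(p in local_explanations[i] for i in uncovered))
        covered = {i for i in uncovered if best in local_explanations[i]}
        if not covered:                                  # no part helps further
            break
        chosen.append(best)
        uncovered -= covered
    return chosen

# prints a small set of parts that together cover all three local explanations
print(greedy_global_explanation({
    "img1": {"wing", "tail"},
    "img2": {"beak"},
    "img3": {"wing", "beak"},
}))
```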
Practical Impact
By surfacing which parts of an image are responsible for a model's classification, GEPC lets users audit complex models they could not otherwise interpret. This matters most in safety-critical settings such as self-driving cars, where trust in the model's decisions directly affects public safety. The same part-based explanation strategy can also be applied to other tasks, including gene expression analysis, activity recognition in videos, and question answering from text, making it a versatile tool for explaining complex models.
Analogy / Intuitive Explanation
Think of GEPC like a detective solving a mystery. The detective starts with a limited set of clues (user-defined part labels) and uses them to find further clues (local explanations) that bear on the case (the model's decision). By combining these clues with a greedy set cover, the detective pieces together a global explanation that reveals which parts of the evidence led to the verdict. In the same way, GEPC aggregates many part-based local explanations into a single global, human-understandable account of the model's behavior.
Paper Information
Categories: cs.CV, cs.AI
arXiv ID: 2509.15393v1