Mitigating Memorization in Text-to-Image Diffusion via Region-Aware Prompt Augmentation and Multimodal Copy Detection

Published: arXiv: 2603.13070v1
Authors

Yunzhuo Chen, Jordan Vice, Naveed Akhtar, Nur Al Hasan Haldar, Ajmal Mian

Abstract

State-of-the-art text-to-image diffusion models can produce impressive visuals but may memorize and reproduce training images, creating copyright and privacy risks. Existing inference-time prompt perturbations, such as random token insertion or embedding noise, can reduce copying but often harm image-prompt alignment and overall fidelity. To address this, we introduce two complementary methods. First, Region-Aware Prompt Augmentation (RAPTA) uses an object detector to find salient regions and turn them into semantically grounded prompt variants, which are randomly sampled during training to increase diversity while maintaining semantic alignment. Second, Attention-Driven Multimodal Copy Detection (ADMCD) aggregates local-patch, global-semantic, and texture cues with a lightweight transformer to produce a fused representation, and applies simple thresholded decision rules to detect copying without training on large annotated datasets. Experiments show that RAPTA reduces overfitting while maintaining high synthesis quality, and that ADMCD reliably detects copying, outperforming single-modal metrics.

Paper Summary

Problem
Text-to-image diffusion models have become remarkably good at generating high-quality images, but they have a major flaw: they can memorize and reproduce training images. This creates copyright and privacy risks, as the models can produce near-exact copies of images without permission. The risk is especially acute because these models are trained on web-scale, weakly curated caption-image pairs.
Key Innovation
To address this problem, the researchers introduce two complementary methods: Region-Aware Prompt Augmentation (RAPTA) and Attention-Driven Multimodal Copy Detection (ADMCD). RAPTA is a training-time augmentation scheme that uses an object detector to find salient regions in the image and turns them into semantically grounded prompt variants, one of which is randomly sampled at each training step. This increases prompt diversity while maintaining semantic alignment. ADMCD is a lightweight detector that aggregates local-patch, global-semantic, and texture cues with a small transformer to produce a fused representation, then applies simple thresholded decision rules to flag copying.
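The RAPTA idea of turning detected regions into prompt variants and sampling one per training step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `region_labels` argument stands in for class names returned by whatever object detector is used, and the variant template is an assumption.

```python
import random

def rapta_variants(caption, region_labels):
    """Build semantically grounded prompt variants from detected regions.

    Hypothetical sketch of RAPTA: each detector-provided region label
    (e.g. "cat", "sofa") yields one variant that emphasizes that region,
    keeping the variant semantically aligned with the image.
    """
    variants = [caption]  # always keep the original caption as one option
    for label in region_labels:
        # emphasize a single detected region per variant (illustrative template)
        variants.append(f"{caption}, featuring a {label}")
    return variants

def sample_prompt(caption, region_labels, rng=random):
    """Randomly sample one variant per training step to increase diversity."""
    return rng.choice(rapta_variants(caption, region_labels))
```

During fine-tuning, `sample_prompt` would replace the fixed caption in each training batch, so the model never sees the exact same image-prompt pair every epoch.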
Practical Impact
Both methods apply directly to real-world deployments of text-to-image diffusion models. RAPTA can be used during training to produce models that are more robust and less prone to overfitting, while ADMCD can screen generated images for copied content at inference time. Together they help ensure that generated images do not infringe copyright or violate privacy.
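The thresholded decision rule at the end of the ADMCD pipeline can be illustrated as below. Note the hedge: the paper fuses the three cues with a lightweight transformer; this sketch substitutes a simple weighted average for that fusion step, and the weights and threshold are illustrative assumptions, not values from the paper.

```python
def admcd_decision(patch_sim, semantic_sim, texture_sim,
                   weights=(0.4, 0.4, 0.2), threshold=0.8):
    """Thresholded copy decision over fused multimodal similarity cues.

    Hypothetical sketch: `patch_sim`, `semantic_sim`, and `texture_sim`
    are similarity scores in [0, 1] between a generated image and a
    training image, computed from local-patch, global-semantic, and
    texture features respectively. A weighted average stands in for the
    paper's transformer-based fusion; weights and threshold are
    illustrative, not the paper's values.
    """
    fused = (weights[0] * patch_sim
             + weights[1] * semantic_sim
             + weights[2] * texture_sim)
    return fused, fused >= threshold  # (fused score, is-copy flag)
```

A screening pipeline would compute the three cue scores against nearest training neighbors and raise the flag whenever the fused score crosses the threshold, without needing a large annotated dataset of copy/non-copy pairs.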
Analogy / Intuitive Explanation
Think of RAPTA as a language teacher helping a student learn a new language. Rather than teaching a single fixed phrase, the teacher breaks the phrase into smaller parts and shows how to use each part in different contexts. Similarly, RAPTA breaks the prompt into region-grounded parts and exposes the model to each part in varied contexts, deepening its understanding of the prompt and reducing overfitting. ADMCD is like a quality-control inspector who checks a generated image for signs of copying: the inspector examines the image from several angles and applies multiple criteria before deciding whether it is an original or a copy.
Paper Information
Categories:
cs.CV