ImAgent: A Unified Multimodal Agent Framework for Test-Time Scalable Image Generation

Generative AI & LLMs
Published: arXiv:2511.11483v1
Authors

Kaishen Wang, Ruibo Chen, Tong Zheng, Heng Huang

Abstract

Recent text-to-image (T2I) models have made remarkable progress in generating visually realistic and semantically coherent images. However, they still suffer from randomness and inconsistency with the given prompts, particularly when textual descriptions are vague or underspecified. Existing approaches, such as prompt rewriting, best-of-N sampling, and self-refinement, can mitigate these issues but usually require additional modules and operate independently, hindering test-time scaling efficiency and increasing computational overhead. In this paper, we introduce ImAgent, a training-free unified multimodal agent that integrates reasoning, generation, and self-evaluation within a single framework for efficient test-time scaling. Guided by a policy controller, multiple generation actions dynamically interact and self-organize to enhance image fidelity and semantic alignment without relying on external models. Extensive experiments on image generation and editing tasks demonstrate that ImAgent consistently improves over the backbone and even surpasses other strong baselines where the backbone model fails, highlighting the potential of unified multimodal agents for adaptive and efficient image generation under test-time scaling.

Paper Summary

Problem
The paper addresses the randomness and prompt inconsistency of current text-to-image (T2I) models: although they can generate visually realistic and semantically coherent images, their outputs often deviate from the intended meaning and fail to capture the user's intent, particularly when the textual descriptions are vague or underspecified.
Key Innovation
The key innovation is ImAgent, a training-free unified multimodal agent that integrates reasoning, generation, and self-evaluation within a single model for efficient test-time scaling. A policy controller dynamically selects and executes the most appropriate generation action for each case, without relying on external models; a sketch of this loop follows.
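
Below is a minimal Python sketch of how such a policy-controlled loop could be organized. All interface names (`select_action`, `rewrite_prompt`, `generate`, `refine`, `evaluate`) are illustrative assumptions, not the paper's actual API; the point is that one unified multimodal model backs every action, so no external rewriter or reward model is needed.

```python
def image_agent(model, prompt, max_steps=4, threshold=0.9):
    """Hypothetical loop: reasoning, generation, and self-evaluation
    all handled by one unified multimodal model."""
    image, score = None, 0.0
    for _ in range(max_steps):
        # The policy controller picks the next action from the current state.
        action = model.select_action(prompt=prompt, image=image, score=score)
        if action == "rewrite":
            # Reasoning: clarify a vague or underspecified prompt.
            prompt = model.rewrite_prompt(prompt)
        elif action == "generate":
            # Generation: produce a candidate image from the current prompt.
            image = model.generate(prompt)
        elif action == "refine":
            # Self-refinement: edit the current image toward the prompt.
            image = model.refine(prompt, image)
        # Self-evaluation: the same model scores prompt-image alignment.
        if image is not None:
            score = model.evaluate(prompt, image)
            if score >= threshold:
                break  # Stop early once alignment is good enough.
    return image
```
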
Practical Impact
ImAgent can improve both image generation quality and efficiency without additional training. By integrating multiple generation actions within a single framework, it adaptively selects the action best suited to a given case, allocates test-time compute accordingly, and executes the selected action within the agent itself. This avoids the overhead of pipelines that chain separate prompt-rewriting, sampling, and evaluation modules, yielding faster and more prompt-faithful image generation.
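
As one concrete way the controller could spend a test-time compute budget, here is a hypothetical best-of-N action in the same sketch style; `n` is the scaling knob, and `generate`/`evaluate` are the same assumed interfaces as above. Because the unified model acts as its own judge, no external scoring model is required.

```python
def best_of_n(model, prompt, n=4):
    """Hypothetical best-of-N action: sample n candidates and keep
    the one the model itself scores highest for prompt alignment."""
    candidates = [model.generate(prompt) for _ in range(n)]
    scores = [model.evaluate(prompt, img) for img in candidates]
    best = max(range(n), key=lambda i: scores[i])
    return candidates[best]
```
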
Analogy / Intuitive Explanation
Think of ImAgent as a personal assistant that helps you generate images based on your descriptions. When you give ImAgent a vague or underspecified description, it acts like a language expert, refining your query to get more accurate results. It then uses its knowledge of image generation to produce a high-quality image that meets your expectations. ImAgent is like a combination of a language model, an image generator, and a self-evaluator, all working together in harmony to produce the best possible image.
Paper Information
Categories: cs.CV, cs.AI
arXiv ID: 2511.11483v1