Published:
KOPS - The Institutional Repository of the University of Konstanz, 2023
Published in: ACM Transactions on Graphics. Association for Computing Machinery (ACM). 2023, 42(6), 244. ISSN 0730-0301. eISSN 1557-7368. Available under: doi:10.1145/3618342
Language:
English
DOI:
https://doi.org/10.1145/3618342
ISBN:
1873394780
Origination:
Footnote:
This data source also contains holdings records that do not lead to a full text.
Description:
Personalizing generative models offers a way to guide image generation with user-provided references. Current personalization methods can invert an object or concept into the textual conditioning space and compose new natural sentences for text-to-image diffusion models. However, representing and editing specific visual attributes such as material, style, and layout remains a challenge, leading to a lack of disentanglement and editability. To address this problem, we propose a novel approach that leverages the step-by-step generation process of diffusion models, which generate images from low- to high-frequency information, providing a new perspective on representing, generating, and editing images. We develop the Prompt Spectrum Space P*, an expanded textual conditioning space, and a new image representation method called ProSpect. ProSpect represents an image as a collection of inverted textual token embeddings encoded from per-stage prompts, where each prompt corresponds to a specific generation stage (i.e., a group of consecutive steps) of the diffusion model. Experimental results demonstrate that P* and ProSpect offer better disentanglement and controllability compared to existing methods. We apply ProSpect in various personalized attribute-aware image generation applications, such as image-guided or text-driven manipulation of material, style, and layout, achieving previously unattainable results from a single image input without fine-tuning the diffusion models. Our source code is available at https://github.com/zyxElsa/ProSpect.
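
As a rough illustration of the per-stage prompting described in the abstract, the minimal Python sketch below maps denoising timesteps to generation stages and selects a per-stage prompt embedding for each step. All names here (stage_for_timestep, sample_with_prospect, stage_embeddings, the commented-out denoise_step) are hypothetical placeholders, not the ProSpect authors' API; the actual implementation is in the linked repository.

# Minimal sketch of the per-stage prompt idea: each denoising step is mapped
# to one of N generation stages, and the text embedding for that stage is fed
# to the denoiser instead of a single global prompt embedding.
# All names below are illustrative assumptions, not the authors' API.

from typing import List, Sequence


def stage_for_timestep(t: int, num_timesteps: int, num_stages: int) -> int:
    """Map a diffusion timestep (num_timesteps - 1 = most noise) to a stage index.

    Early (noisy) steps fall into stage 0, associated with low-frequency
    content such as layout; later steps move toward higher-frequency
    attributes such as material and fine detail.
    """
    # Fraction of the reverse process completed so far.
    progress = 1.0 - t / (num_timesteps - 1)
    return min(int(progress * num_stages), num_stages - 1)


def sample_with_prospect(stage_embeddings: Sequence[List[float]],
                         num_timesteps: int = 1000) -> None:
    """Illustrative reverse-diffusion loop that swaps prompt embeddings per stage."""
    num_stages = len(stage_embeddings)
    for t in reversed(range(num_timesteps)):
        stage = stage_for_timestep(t, num_timesteps, num_stages)
        cond = stage_embeddings[stage]   # per-stage inverted token embedding
        # x = denoise_step(x, t, cond)   # hypothetical U-Net denoising call
        if t % 250 == 0:
            print(f"t={t:4d} -> stage {stage}, embedding dim {len(cond)}")


if __name__ == "__main__":
    # Three dummy "token embeddings", one per stage (e.g., layout / style / material).
    sample_with_prospect([[0.0] * 768, [0.1] * 768, [0.2] * 768])

Under these assumptions, editing a single attribute amounts to replacing the embedding of one stage while leaving the others untouched, which is the disentanglement the abstract refers to.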