Nested Attention: Semantic-aware Attention Values for Concept Personalization
Personalizing text-to-image models to generate images of specific subjects across diverse scenes and styles is a rapidly advancing field. Current approaches often struggle to balance identity preservation with alignment to the input text prompt. Some methods rely on a single textual token to represent a subject, which limits expressiveness, while others employ richer representations but disrupt the model’s prior, diminishing prompt alignment. In this work, we introduce Nested Attention, a novel mechanism that injects a rich and expressive image representation into the model’s existing cross-attention layers. Our key idea is to generate query-dependent subject values, derived from nested attention layers that learn to select relevant subject features for each region in the generated image. We integrate these nested layers into an encoder-based personalization method, and show that they enable high identity preservation while adhering to input text prompts.
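The core idea above can be illustrated with a minimal sketch: instead of a single fixed value vector for the subject token, each image query attends over a bank of subject feature tokens to obtain its own, region-specific value. The function and weight names (`nested_attention_values`, `Wk`, `Wv`) are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nested_attention_values(queries, subj_feats, Wk, Wv):
    """For each image query, attend over the subject's feature tokens
    to produce a query-dependent value vector for the subject token.

    queries:    (n_q, d)       queries of the host cross-attention layer
    subj_feats: (n_tokens, f)  per-subject features from an image encoder
    Wk, Wv:     (f, d)         learned projections of the nested layer
    """
    k = subj_feats @ Wk                                   # (n_tokens, d)
    v = subj_feats @ Wv                                   # (n_tokens, d)
    d = queries.shape[-1]
    attn = softmax(queries @ k.T / np.sqrt(d), axis=-1)   # (n_q, n_tokens)
    # Each row of attn selects the subject features relevant to that query,
    # yielding one value vector per spatial query rather than a shared one.
    return attn @ v                                       # (n_q, d)
```

In the host cross-attention layer, the output of this nested step would replace the single value vector associated with the subject's placeholder token, leaving the rest of the layer's text keys and values untouched.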
Encoder-based methods embed the subject into a latent representation, which is then used in conjunction with diverse text prompts to generate images of the subject in multiple contexts.
A key challenge in personalizing text-to-image models is balancing identity preservation and prompt alignment [5, 17, 19, 55]. Most encoder-based works [17, 19, 52, 53, 55] tackle personalization by encoding the subject into a large number of visual tokens which are injected into the diffusion model using new cross-attention layers. Such approaches are highly expressive and can achieve high fidelity to the subject, but they tend to overwhelm the model’s prior, harming text-to-image alignment (see Section 2).