
Why do decoder-only models underperform as text encoders?

Decoder-only LLMs use causal attention, which limits each token to seeing only itself and prior context. This note explores whether removing that constraint can make them competitive universal encoders without architectural redesign.

Note · 2026-02-22 · sourced from LLM Architecture

LLM2Vec (arXiv:2404.05961) identifies a specific architectural reason for the slow adoption of decoder-only LLMs as text encoders: causal attention limits each token's representation to information from the token itself and preceding tokens. At any layer, the representation of the token at position i is influenced solely by positions 0 through i. While necessary for generative capability, this is suboptimal for text embeddings, which need to capture information from the entire input sequence.
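To make the constraint concrete, here is a minimal PyTorch sketch (toy shapes, not from the paper) contrasting the two attention regimes; the only difference is the causal mask:

```python
import torch
import torch.nn.functional as F

# Toy shapes, purely illustrative: (batch, heads, seq_len, head_dim).
q = k = v = torch.randn(1, 1, 6, 16)

# Decoder-only regime: token i attends only to positions 0..i.
causal_out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Bidirectional regime (LLM2Vec step 1): same weights, no causal mask;
# token i now also sees tokens to its right.
bidir_out = F.scaled_dot_product_attention(q, k, v, is_causal=False)
```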

The fix is a surprisingly simple three-step unsupervised transformation (step 2 is sketched after the list):

  1. Enable bidirectional attention (remove the causal mask)
  2. Masked next token prediction (adapt to the bidirectional regime)
  3. Unsupervised contrastive learning (align representations for similarity)
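
A hedged sketch of step 2, masked next token prediction (MNTP), in PyTorch. Following the paper's description, a masked token at position i is scored from the hidden state at position i-1, which keeps the pretrained (shifted) language-modeling head compatible. The `model` call, `mask_token_id`, and the 20% masking ratio are illustrative assumptions, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def mntp_loss(model, input_ids, mask_token_id, mask_prob=0.2):
    # Assumes `model` is an HF-style causal LM whose attention has
    # already been switched to bidirectional (step 1).
    labels = input_ids.clone()
    mask = torch.rand_like(input_ids, dtype=torch.float) < mask_prob
    masked_ids = input_ids.masked_fill(mask, mask_token_id)
    logits = model(masked_ids).logits       # (batch, seq, vocab)
    shifted = logits[:, :-1, :]             # hidden state at i-1 ...
    targets = labels[:, 1:]                 # ... predicts the token at i
    keep = mask[:, 1:]                      # loss only on masked positions
    return F.cross_entropy(shifted[keep], targets[keep])
```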

Applied to models from 1.3B to 8B parameters, this achieves SOTA on MTEB among models trained only on publicly available data. Word-level tasks see the largest margin over encoder-only models, and sequence-level tasks reach competitive performance without any supervised training or synthetic GPT-4 data.

The finding has implications for the embedding retrieval architecture debate. Given that embedding dimensions fundamentally limit retrievable document combinations (see "Do embedding dimensions fundamentally limit retrievable document combinations?"), embedding quality matters within those geometric constraints. LLM2Vec shows that the representation-quality bottleneck in decoder-only models is the causal mask, not model size or training data. Removing the constraint unlocks the full representational capacity of the pretrained model.

LLM2Vec's contrastive learning step also bears on whether vector embeddings actually measure task relevance (see "Do vector embeddings actually measure task relevance?"): it aligns representations for similarity rather than mere association, potentially addressing the semantic-vs-relevance gap at the encoder level.
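
A minimal sketch of that contrastive step, assuming the SimCSE-style unsupervised setup the paper adopts: the same batch is encoded twice so that dropout produces two views of each sentence, and each sentence's two views are pulled together while other sentences in the batch are pushed apart. Names and the temperature value are illustrative:

```python
import torch
import torch.nn.functional as F

def simcse_loss(emb_a, emb_b, temperature=0.05):
    # emb_a, emb_b: (batch, dim) pooled embeddings of the same sentences
    # under two different dropout masks.
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    logits = (a @ b.T) / temperature         # pairwise cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)  # positives on the diagonal
```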


Source: LLM Architecture

causal attention inherently limits decoder-only models as text encoders — enabling bidirectional attention transforms them into competitive universal encoders