How Multimodal LLMs Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding

Paper · arXiv 2508.20279 · Published August 27, 2025
MultimodalMechInterp

Multimodal Large Language Models (MLLMs) have demonstrated strong performance across a wide range of vision-language tasks, yet their internal processing dynamics remain underexplored. In this work, we introduce a probing framework to systematically analyze how MLLMs process visual and textual inputs across layers. We train linear classifiers to predict fine-grained visual categories (e.g., dog breeds) from token embeddings extracted at each layer, using a standardized anchor question. To uncover the functional roles of different layers, we evaluate these probes under three types of controlled prompt variations: (1) lexical variants that test sensitivity to surface-level changes, (2) semantic negation variants that flip the expected answer by modifying the visual concept in the prompt, and (3) output format variants that preserve reasoning but alter the answer format. Applying our framework to LLaVA-1.5, LLaVA-Next-LLaMA-3, and Qwen2-VL, we identify a consistent stage-wise structure in which early layers perform visual grounding, middle layers support lexical integration and semantic reasoning, and final layers prepare task-specific outputs. We further show that while the overall stage-wise structure remains stable across variations in visual tokenization, instruction tuning data, and pretraining corpus, the specific layer allocation to each stage shifts notably with changes in the base LLM architecture. Our findings provide a unified perspective on the layer-wise organization of MLLMs and offer a lightweight, model-agnostic approach for analyzing multimodal representation dynamics.
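To make the probing setup concrete, the sketch below trains one linear classifier per layer on token embeddings extracted with the anchor question. This is a minimal reconstruction rather than the paper's code: the use of scikit-learn logistic regression, the pre-extracted `(examples, layers, hidden_dim)` embedding array, and the choice of probed token position are all assumptions for illustration.

```python
# Minimal sketch of layer-wise linear probing, assuming hidden states have already
# been extracted for each training image paired with the fixed anchor question
# (e.g., the embedding at the final prompt token of every layer).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_layer_probes(hidden_states: np.ndarray, labels: np.ndarray):
    """hidden_states: (num_examples, num_layers, hidden_dim) token embeddings.
    labels: fine-grained class indices (e.g., dog breeds), shape (num_examples,).
    Returns one linear probe per layer."""
    probes = []
    for layer in range(hidden_states.shape[1]):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(hidden_states[:, layer, :], labels)
        probes.append(clf)
    return probes
```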

At test time, we evaluate the same probes on a held-out set of images from the same N classes, each paired with systematically perturbed versions of the anchor prompt. Each prompt variant is designed to target a specific aspect of the input—such as lexical form, semantic content, or output format—while preserving the correct visual label. The core assumption is that different layers of the transformer are specialized for different types of computation. Therefore, the extent to which probe accuracy is affected by each variant reflects the sensitivity of a given layer to that type of perturbation, revealing its functional role. By aggregating these effects across prompt types and layers, we derive a structured view of how MLLMs process and integrate multimodal information.
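The test-time protocol can then be sketched as scoring the same frozen probes on held-out images under each prompt variant, reading per-layer accuracy drops relative to the anchor prompt as sensitivity to that perturbation. The helper `extract_hidden_states` is a hypothetical placeholder for whatever feature-extraction pipeline produces the layer-wise embeddings; it is not an interface from the paper.

```python
# Sketch of the test-time evaluation under perturbed prompts. extract_hidden_states
# is a hypothetical helper returning a (num_images, num_layers, hidden_dim) array
# for a given prompt; probes come from train_layer_probes above.
def evaluate_probes(probes, extract_hidden_states, test_images, test_labels, prompt_variants):
    """Returns accuracy[variant_name][layer]; comparing each variant's curve against
    the anchor prompt reveals which layers are sensitive to that perturbation."""
    accuracy = {}
    for name, prompt in prompt_variants.items():
        feats = extract_hidden_states(test_images, prompt)
        accuracy[name] = [
            probes[layer].score(feats[:, layer, :], test_labels)
            for layer in range(len(probes))
        ]
    return accuracy
```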

We design each variant to target a distinct stage of the model’s internal computation, enabling us to localize where each stage occurs across the model’s layers. Lexical variants test where the model begins aligning visual information with specific prompt phrasing—layers involved in grounding should be sensitive even to small wording changes. Semantic negation helps identify where the model begins to commit to an answer, as representations at these layers should reflect changes in the predicted outcome. However, because negation also changes the underlying visual concept being reasoned about, we introduce output format variants to decouple reasoning from decoding. These variants keep the visual input and its interpretation fixed while altering how the answer is expected to be expressed (e.g., “yes or no” vs. “1 or 0”), allowing us to test whether the model’s internal representations encode the decision itself or just the tokens used to communicate it. Together, these controlled variations enable a fine-grained, layer-wise map of how multimodal information flows and transforms within the model.
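For concreteness, the snippet below illustrates one possible anchor question together with the three variant types described above. The wording is invented for illustration and need not match the prompts actually used in the paper.

```python
# Illustrative prompt variants for a single anchor question (hypothetical wording).
anchor = "Is this a Labrador Retriever? Answer yes or no."
prompt_variants = {
    "anchor": anchor,
    # Lexical variant: surface rewording, same visual concept and answer format.
    "lexical": "Does this photo show a Labrador Retriever? Answer yes or no.",
    # Semantic negation: swaps the visual concept so the expected answer flips.
    "semantic_negation": "Is this a Siamese cat? Answer yes or no.",
    # Output format variant: same decision, different tokens used to express it.
    "output_format": "Is this a Labrador Retriever? Answer 1 for yes or 0 for no.",
}
```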