Can unlabeled UI video teach models what users intend?
Can temporal masking on screen recordings learn task-aware representations without paired text labels? This matters because labeled UI video is scarce and expensive, so self-supervised learning could unlock scaling.
UI-JEPA argues that prior UI-understanding approaches misframe the problem in three ways. Pretrained UI transformers operate at the component level and miss the concept of a task. Image-encoder-plus-LLM systems handle static screenshots and miss temporal structure — they can list widgets but cannot understand what a sequence of UI actions accomplishes. Crawler-based systems handle specific tasks but generalize poorly to unseen ones.
The hypothesis is that user intent is a temporal property of UI activity, not a spatial property of any frame. UI-JEPA therefore processes video sequences of UI actions during task execution, training a JEPA-based encoder with temporal masking on unlabeled UI video — predicting fully masked frames from unmasked frames. Because predicting masked frames forces the encoder to capture temporal relationships and task structure, the resulting representations encode what the user is trying to do, not just what is on the screen.
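A minimal sketch of that objective, under stated assumptions: a per-frame encoder, a temporal predictor that fills in masked frames in latent space, and regression targets from a separate copy of the encoder (EMA-updated in practice, as in other JEPA variants). The module names, dimensions, and masking rate are illustrative, not UI-JEPA's exact recipe.

```python
# Hedged sketch of JEPA-style temporal masking on UI video: predict the latent
# representations of masked frames from visible frames. All names and sizes are
# illustrative assumptions, not the paper's implementation.
import copy
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Stand-in per-frame encoder (a real system would use a ViT over screen frames)."""
    def __init__(self, in_dim=3 * 64 * 64, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(start_dim=2),        # (B, T, C, H, W) -> (B, T, C*H*W)
            nn.Linear(in_dim, dim), nn.GELU(), nn.Linear(dim, dim),
        )
    def forward(self, frames):              # frames: (B, T, C, H, W)
        return self.net(frames)             # (B, T, dim)

class TemporalPredictor(nn.Module):
    """Predicts latents of masked frames from the visible-frame context."""
    def __init__(self, dim=256, n_heads=4, n_layers=2, max_len=64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, n_heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.pos = nn.Parameter(torch.zeros(1, max_len, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
    def forward(self, latents, mask):        # mask: (B, T) bool, True = masked frame
        # replace masked frames' latents with a learned mask token so they never leak
        x = torch.where(mask[..., None], self.mask_token.expand_as(latents), latents)
        x = x + self.pos[:, : x.size(1)]
        return self.encoder(x)               # (B, T, dim)

def jepa_step(frames, mask, encoder, target_encoder, predictor):
    """One step: regress predicted latents onto target-encoder latents at masked positions."""
    with torch.no_grad():
        targets = target_encoder(frames)                      # no gradient to targets
    preds = predictor(encoder(frames), mask)
    return ((preds - targets) ** 2)[mask].mean()              # L2 in latent space, masked frames only

# toy usage on random "screen recording" clips
encoder = FrameEncoder()
target_encoder = copy.deepcopy(encoder)                       # kept as an EMA of the encoder in practice
predictor = TemporalPredictor()
frames = torch.randn(2, 16, 3, 64, 64)                        # (batch, time, C, H, W)
mask = torch.rand(2, 16) < 0.5                                # mask roughly half the frames
loss = jepa_step(frames, mask, encoder, target_encoder, predictor)
loss.backward()
```

Because the loss lives in latent space rather than pixel space, the encoder is free to drop rendering detail and keep only what helps predict where the interaction is going, which is the task-aware signal the note is after.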
The decoder side is an LLM conditioned on these representations to produce textual user-intent descriptions. The empirical claim that earns its keep is data efficiency: fine-tuning the decoder requires a fraction of the paired video-text data and compute that SOTA MLLMs need. The architecture trades the bottleneck of paired labels for the abundance of unlabeled screen recordings.
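One way to picture the decoder-side fine-tuning, as a hedged sketch: project a pooled video latent into the LM's embedding space as prefix tokens and supervise only the intent text. GPT-2 as the stand-in decoder, the prefix-token design, and the dimensions are assumptions for illustration, not the paper's recipe (requires the transformers library).

```python
# Hedged sketch: condition a small causal LM on a frozen UI-JEPA-style video latent
# via learned prefix embeddings, and fine-tune on a little paired video-text data.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

lm_name = "gpt2"                                   # stand-in decoder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(lm_name)
lm = AutoModelForCausalLM.from_pretrained(lm_name)

video_dim, n_prefix = 256, 8
proj = nn.Linear(video_dim, lm.config.hidden_size * n_prefix)  # video latent -> prefix embeddings

def intent_loss(video_latent, intent_text):
    """Next-token loss on the intent description, conditioned on video prefix tokens."""
    prefix = proj(video_latent).view(1, n_prefix, lm.config.hidden_size)
    ids = tokenizer(intent_text, return_tensors="pt").input_ids
    tok_emb = lm.get_input_embeddings()(ids)
    inputs = torch.cat([prefix, tok_emb], dim=1)
    # -100 masks the prefix positions out of the loss; only text tokens are supervised
    labels = torch.cat([torch.full((1, n_prefix), -100), ids], dim=1)
    return lm(inputs_embeds=inputs, labels=labels).loss

loss = intent_loss(torch.randn(1, video_dim), "user is booking a flight in the airline app")
loss.backward()
```

The supervised surface here is deliberately thin: only the projection (and optionally the LM) sees paired labels, while everything temporal was already learned upstream without them.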
The broader implication is a separation of concerns: temporal/structural understanding learned self-supervised on unlabeled streams, semantic intent inference layered via a small LLM decoder on top. When labeled data is scarce, the right move is to push the learning into self-supervision and keep the supervised layer thin. This is the same architectural move as in "Why do vision-only GUI agents struggle with screen interpretation?": factor the perception sub-problem out of the foundation model and hand it the structured signal it can actually use.
Source: Tool Computer Use
Related concepts in this collection
- Why do vision-only GUI agents struggle with screen interpretation?
  Exploring whether GPT-4V's performance bottleneck in GUI automation stems from the simultaneous cognitive load of parsing icon semantics and predicting actions, and whether factoring these tasks improves reliability.
  complements: same factoring move (specialized perception layer + foundation model on top) applied to temporal video rather than spatial screenshots.
- Do text-based GUI agents actually work in the real world?
  Can language-only agents that rely on HTML or accessibility trees handle actual user interfaces without structured metadata? This matters because deployed systems face visual screenshots, not oracle data.
  extends: ShowUI argues UI perception needs UI-specialized VLA models; UI-JEPA is the upstream pretraining recipe — UI-shaped self-supervision before the supervised layer.
- How can GUI agents adapt when software constantly changes?
  Can desktop automation agents stay current by combining real-time web documentation with learned task patterns and concrete execution memories? This explores how to avoid training obsolescence in open-world software environments.
  complements: Agent S relies on episodic memory of UI traces — UI-JEPA-style representations could provide a richer encoding for that episodic store than raw screenshots.
- Can models reason without generating visible thinking steps?
  Do machine reasoning systems actually require verbalized chains of thought, or can they solve complex problems through hidden computation? This challenges how we measure and understand reasoning.
  extends: same family principle — useful representations don't require verbalization. UI-JEPA shows the predictive-feature principle works for UI temporal understanding.
- Can 1000 carefully chosen examples align models effectively?
  Does alignment require massive datasets, or can strategic curation of small, high-quality examples achieve comparable performance? LIMA tests whether quality beats quantity in post-training.
  complements: data efficiency in the LLM decoder layer is enabled by self-supervised pretraining on the encoder side; the two notes cover paired sides of the data-efficiency story.
Original note title: predictive video masking on UI activity learns user intent without paired text — JEPA-style self-supervision turns unlabeled screen recordings into a usable signal