Can API calls outperform UI navigation for agent task completion?
Can agents work faster and more accurately by calling APIs directly instead of clicking through user interfaces? This note explores whether changing how agents interact with applications resolves the latency and error problems that plague current LLM-based systems.
Current LLM-based UI agents suffer from two compounding problems: latency scales with the number of sequential interactions (each UI step requires an LLM call over a large visual context), and hallucination risk compounds per step (each reasoning step is another chance to select the wrong UI control). Inserting a 2x2 table in Word requires "Insert → Table → 2x2 Table": three sequential UI interactions, each requiring full UI state processing.
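A minimal sketch of the contrast. All names here (capture_ui_state, llm_select_control, the word.add_table call) are invented stand-ins for illustration, not AXIS code:

```python
# Contrast: per-step UI navigation vs. a single direct API call.
# Every function below is an illustrative stand-in, not real AXIS code.

def capture_ui_state() -> str:
    """Stand-in for serializing a screenshot / accessibility tree."""
    return "<large UI snapshot>"

def llm_select_control(ui_state: str, goal: str) -> str:
    """Stand-in for an LLM call over the full UI context."""
    return f"control:{goal}"

def insert_table_via_ui() -> None:
    # Three sequential interactions (Insert -> Table -> 2x2 grid),
    # each paying one LLM call and one chance to pick the wrong control.
    for goal in ("open Insert tab", "open Table menu", "select 2x2 grid"):
        state = capture_ui_state()
        control = llm_select_control(state, goal)
        print(f"click {control}")

def insert_table_via_api() -> None:
    # One call expresses the whole intent; no per-step UI reasoning.
    print("word.add_table(rows=2, cols=2)")  # hypothetical API surface

insert_table_via_ui()   # three LLM round-trips
insert_table_via_api()  # zero UI steps
```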
AXIS (Agent eXploring API for Skill integration) demonstrates that prioritizing API calls over UI interactions resolves both problems simultaneously:
- 65-70% task completion time reduction — API calls execute directly without sequential UI navigation
- 97-98% accuracy maintained — comparable to human performance
- 38-53% cognitive workload reduction — users specify intent, not procedures
The HACI (Human-Agent-Computer Interaction) paradigm shift: API-first agents replace UI agents, falling back to UI interaction only when relevant APIs are unavailable. API calls require fewer tokens and produce more reliable code-formatted responses compared to UI state descriptions.
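A sketch of that fallback policy, assuming a hypothetical API registry and a ui_navigate fallback (neither is AXIS's actual dispatch code):

```python
from typing import Callable, Optional

# Hypothetical registry mapping task names to direct application APIs.
API_REGISTRY: dict[str, Callable[..., str]] = {
    "insert_table": lambda rows, cols: f"API: inserted {rows}x{cols} table",
}

def ui_navigate(task: str, **kwargs: object) -> str:
    """Fallback path: slow, step-by-step UI interaction."""
    return f"UI: navigated menus to accomplish {task!r}"

def execute(task: str, **kwargs: object) -> str:
    # API-first: use a registered API when one exists;
    # fall back to UI navigation only when none is available.
    api: Optional[Callable[..., str]] = API_REGISTRY.get(task)
    return api(**kwargs) if api else ui_navigate(task, **kwargs)

print(execute("insert_table", rows=2, cols=2))     # hits the API path
print(execute("apply_custom_theme", name="dark"))  # falls back to UI
```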
The self-exploration mechanism is key to practicality: AXIS automatically explores existing applications, learns from support documents and action trajectories, and constructs new APIs from existing ones. This addresses the bootstrapping problem — APIs don't need to be manually created for every application.
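One way to picture "constructing new APIs from existing ones": compose already-discovered primitives into a higher-level skill. The primitives below are invented for illustration; AXIS's actual skill representation is not shown here.

```python
# Invented primitives standing in for APIs an agent has already discovered.
def add_table(rows: int, cols: int) -> dict:
    return {"rows": rows, "cols": cols, "cells": {}}

def set_cell(table: dict, r: int, c: int, text: str) -> None:
    table["cells"][(r, c)] = text

# A composed skill: a new API built from the two primitives above,
# e.g. distilled from a support document or an action trajectory.
def make_header_table(headers: list[str], body_rows: int) -> dict:
    table = add_table(body_rows + 1, len(headers))
    for c, h in enumerate(headers):
        set_cell(table, 0, c, h)
    return table

print(make_header_table(["Name", "Score"], body_rows=3))
```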
As *Are reasoning model failures really about reasoning ability?* argues, execution rather than reasoning is often the bottleneck, and the UI-to-API shift removes execution failure as that bottleneck. UI interaction is execution; API interaction is closer to specification. The agentic hierarchy becomes: user intent → agent reasoning → API execution, removing the fragile UI navigation layer.
This connects to *Can reasoning and tool execution run in parallel?*: API-first interaction is a structural form of the same decoupling, separating what the agent wants to do from the mechanics of how the application implements it.
Source: Agents
Related concepts in this collection
- *Are reasoning model failures really about reasoning ability?* Explores whether the performance collapse in language reasoning models reflects actual reasoning limitations or merely execution constraints. Tests whether tool access changes the picture. Connection: API-first removes the execution failure layer.
- *Can reasoning and tool execution run in parallel?* Standard LLM tool use halts for each response, creating redundant prompts and sequential delays. Do alternative architectures that separate reasoning from tool observation actually eliminate these costs? Connection: the same decoupling principle applied to agent-application interaction.
- *Why can't advanced AI models take initiative in conversation?* Despite extraordinary capability in answering and reasoning, LLMs fundamentally cannot initiate, redirect, or guide exchanges. Understanding this gap, and whether it is fixable, matters for building AI that truly collaborates rather than merely responds. Connection: AXIS addresses execution-layer passivity. UI-based agents passively follow sequential interaction steps determined by the application, while API-first agents directly specify intent. The HACI paradigm shift parallels the conversational passivity diagnosis: both reveal that current agent architectures are reactive to their environment (UI state, user query) rather than proactive.
- *Why do AI agents fail at workplace social interaction?* Explores why current AI agents struggle most with communicating and coordinating with colleagues in realistic workplace settings, despite strong reasoning capabilities in other domains. Connection: AXIS directly addresses one of the two hardest failure modes (complex UI navigation), potentially raising the 30% ceiling for workplace tasks that require professional tool interaction.
- *Do generated interfaces outperform text-based chat for most tasks?* Explores whether LLMs should create interactive UIs instead of text responses, and under what conditions users prefer dynamic interfaces to traditional conversational chat. Connection: converging evidence from opposite directions. AXIS moves agents from UI-based to API-based interaction (65-70% faster); generative interfaces move users from chat to dynamically generated UIs (70% preference). Both challenge the chat paradigm as default and point toward intent-specification over procedure-following as the interaction model.
Original note title: API-first agent interaction reduces task completion time by 65 to 70 percent compared to UI-based agent interaction — reframing human-agent-computer interaction