Can a single transformer become universally programmable through prompts?
Explores whether prompts can function as genuine programs that unlock universal computation in fixed-size models, and whether this theoretical possibility translates to practical training outcomes.
"Ask, and it shall be given: Turing completeness of prompting" (2024) proves that there exists a finite-size Transformer such that for any computable function, there exists a corresponding prompt following which the Transformer computes the function. Furthermore, this single finite-size Transformer achieves nearly the same complexity bounds as the class of all unbounded-size Transformers.
The result establishes a theoretical underpinning for prompt engineering: prompts are not merely heuristic nudges that help a model do what it already can — they are, in principle, the mechanism that makes a fixed model universally programmable. The prompt IS the program.
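To make the prompt-as-program framing concrete, here is a minimal, hypothetical Python analogy, not the paper's actual construction: a fixed interpreter stands in for the fixed-size Transformer, and the prompt encodes a Turing-machine transition table that the interpreter applies one rule per step, loosely mirroring how the proof drives computation through successive chain-of-thought steps. The names `fixed_interpreter` and `flip_bits` are illustrative only.

```python
# Analogy only: the fixed interpreter plays the role of the fixed-size
# Transformer; the "prompt" is a transition table it merely applies.
def fixed_interpreter(prompt_rules, tape, state="q0", max_steps=1000):
    """prompt_rules: {(state, symbol): (new_state, write_symbol, move)}"""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank "_"
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, "_")
        if (state, symbol) not in prompt_rules:  # halt when no rule applies
            break
        state, write, move = prompt_rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example "prompt": flip every bit until the first blank.
flip_bits = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
}
print(fixed_interpreter(flip_bits, "1011"))  # -> "0100"
```

Swapping in a different rule table changes which function the same fixed interpreter computes; in that loose sense the prompt, not the model, carries the program.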
However, the gap between expressiveness and learnability is critical. The proof establishes that such a Transformer exists; it does not imply that standard training produces models that learn to implement arbitrary programs through chain-of-thought (CoT) steps. This mirrors the broader pattern from "Can prompt optimization teach models knowledge they lack?": the practical limitation is not what prompts CAN express but what models HAVE learned to respond to.
The result also reframes the "prompts as programs" analogy used by several papers in this space. Promptbreeder treats prompts as self-modifiable programs. APE treats prompt search as program synthesis. The Turing completeness result validates these analogies — prompts genuinely are programs in the formal sense, not just metaphorically. But the practical implication is bounded by the model's training: the space of prompts that a trained model responds to meaningfully is a tiny subset of the theoretically expressible space.
Source: Prompts, Prompting
Related concepts in this collection
- Can prompt optimization teach models knowledge they lack?
  Explores whether sophisticated prompting techniques can inject new domain knowledge into language models, or if they're limited to activating existing training knowledge.
  Relevance: the practical ceiling on the Turing completeness result is that the model must have learned the relevant patterns.
- Can we automatically optimize both prompts and agent coordination?
  Explores whether language agents can be represented as computational graphs whose structure and content adapt automatically. Why it matters: current agent systems require hand-engineered orchestration; automatic optimization could unlock more capable multi-agent systems.
  Relevance: the agents-as-computational-graphs framing is the practical instantiation of prompts-as-programs.
- Can algorithms plus limited LLM calls solve complex tasks better?
  Explores whether decomposing tasks into step-specific prompts within algorithmic control flow—rather than asking the LLM to manage full state—overcomes context window and reasoning limits while improving task performance.
  Relevance: algorithmic control flow over prompts as practical programming.
Original note title
prompting is Turing complete — a single finite-size transformer can compute any computable function given the right prompt