LLM Reasoning and Architecture · Reinforcement Learning for LLMs · Agentic and Multi-Agent Systems

Can a single transformer become universally programmable through prompts?

Explores whether prompts can function as genuine programs that unlock universal computation in fixed-size models, and whether this theoretical possibility translates to practical training outcomes.

Note · 2026-03-28 · sourced from Prompts Prompting
What kind of thing is an LLM really?

"Ask, and it shall be given: Turing completeness of prompting" (2024) proves that there exists a finite-size Transformer such that for any computable function, there exists a corresponding prompt following which the Transformer computes the function. Furthermore, this single finite-size Transformer achieves nearly the same complexity bounds as the class of all unbounded-size Transformers.

The result establishes a theoretical underpinning for prompt engineering: prompts are not merely heuristic nudges that help a model do what it already can — they are, in principle, the mechanism that makes a fixed model universally programmable. The prompt IS the program.
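
To make "the prompt is the program" concrete, here is a minimal sketch of the construction's shape, not the paper's actual encoding: a fixed step function stands in for the fixed Transformer, the prompt carries a Turing machine's transition table, and chain-of-thought decoding is the loop that appends one configuration per step. The RULE/CONF encoding and the unary-increment machine are illustrative assumptions.

```python
def fixed_step(context: str) -> str:
    """One decoding step of the fixed 'machine': parse the rules and the
    latest configuration out of the context, emit the next line."""
    rules, conf = {}, None
    for line in context.strip().splitlines():
        parts = line.split()
        if parts[0] == "RULE":
            # RULE <state> <symbol> <new_state> <write> <move>
            rules[(parts[1], parts[2])] = (parts[3], parts[4], parts[5])
        elif parts[0] == "CONF":
            conf = (parts[1], int(parts[2]), parts[3])  # latest CONF wins
    state, head, tape = conf
    key = (state, tape[head])
    if key not in rules:
        return f"HALT {tape}"
    new_state, write, move = rules[key]
    tape = tape[:head] + write + tape[head + 1:]
    head += 1 if move == "R" else -1
    if head == len(tape):
        tape += "_"  # grow the blank tape on demand
    return f"CONF {new_state} {head} {tape}"


# The prompt encodes the program: a toy unary-increment machine.
prompt = "\n".join([
    "RULE q0 1 q0 1 R",    # scan right across the 1s
    "RULE q0 _ halt 1 R",  # write a trailing 1, enter the halt state
    "CONF q0 0 111_",      # initial configuration: tape '111', head at 0
])

context = prompt
while not context.splitlines()[-1].startswith("HALT"):
    context += "\n" + fixed_step(context)  # CoT: append each step to the context
print(context.splitlines()[-1])  # -> HALT 1111_
```

The only thing that changes between tasks is the prompt; fixed_step is identical for every program it runs, which is the sense in which the model is universally programmable.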

However, the gap between expressiveness and learnability is critical. The proof shows that such a Transformer exists; it does not imply that standard training produces models that learn to implement arbitrary programs through CoT steps. This mirrors the broader pattern from "Can prompt optimization teach models knowledge they lack?": the practical limitation is not what prompts CAN express but what models HAVE learned to respond to.

The result also reframes the "prompts as programs" analogy used by several papers in this space. Promptbreeder treats prompts as self-modifying programs; APE treats prompt search as program synthesis. The Turing completeness result validates these analogies — prompts genuinely are programs in the formal sense, not just metaphorically. But the practical implication is bounded by training: the space of prompts a trained model responds to meaningfully is a tiny subset of the theoretically expressible space.
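
As a toy illustration of that framing, here is a hedged sketch of prompt search as program synthesis: propose candidate prompts, score each on a small dev set, keep the best. toy_llm, propose, and the phrase list are invented stand-ins; a real APE or Promptbreeder loop would call an actual LLM for both proposing and answering.

```python
import random

random.seed(0)  # deterministic toy run

PHRASES = ["Answer directly.", "Think step by step.", "Be concise."]

def toy_llm(prompt: str, x: str) -> str:
    # Toy 'model': only solves arithmetic when the prompt elicits reasoning.
    return str(eval(x)) if "step by step" in prompt.lower() else "?"

def propose(seed: str, n: int) -> list[str]:
    # Toy mutation operator; Promptbreeder mutates prompts with an LLM,
    # APE samples fresh instructions from one.
    return [seed + " " + random.choice(PHRASES) for _ in range(n)]

def score(prompt: str, dev: list[tuple[str, str]]) -> float:
    return sum(toy_llm(prompt, x) == y for x, y in dev) / len(dev)

dev = [("2+3", "5"), ("7*6", "42"), ("10-4", "6")]
best = "Solve the problem."
best_score = score(best, dev)
for _ in range(10):  # hill-climb over the (tiny) prompt space
    for cand in propose(best, 4):
        if (s := score(cand, dev)) > best_score:
            best, best_score = cand, s
print(best, best_score)  # the search discovers the 'step by step' prompt
```

The search never teaches the toy model anything new; it only finds a prompt the model already responds to, which is exactly the expressiveness-vs-learnability boundary the note describes.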


Source: Prompts Prompting


Original note: prompting is Turing complete — a single finite-size transformer can compute any computable function given the right prompt