What formal languages actually help transformers learn natural language?
Not all formal languages are equally useful for pre-pretraining. This explores which formal languages transfer well to natural language and why—combining structural requirements with what transformers can actually learn.
Pre-pretraining on formal languages improves natural language acquisition, but not all formal languages produce equal transfer. Between Circuits and Chomsky (2025) proposes a two-constraint model:
Constraint 1 (Chomsky hierarchy): The formal language must capture the hierarchical dependency structures present in natural language. Within the Chomsky hierarchy, context-sensitive languages transfer best to natural language (Papadimitriou & Jurafsky 2023). Simpler formal languages transfer poorly: regular languages lack hierarchical structure altogether, and context-free languages capture only nested dependencies, missing the crossing dependencies that natural language syntax also exhibits.
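To make the structural distinction concrete, here is a toy generator (an illustration added here, not the paper's corpus) that produces matched open/close token strings with either nested dependencies, which a context-free grammar can capture, or crossing dependencies, which require more than context-free power:

```python
import random
from collections import deque

def dependency_string(n_pairs: int, mode: str = "crossing", vocab: int = 8) -> list[str]:
    """Generate a toy string of matched open/close tokens.

    mode="nested":   closers appear last-in-first-out (nested dependencies,
                     expressible with a context-free grammar).
    mode="crossing": closers appear first-in-first-out (cross-serial
                     dependencies, beyond context-free power).
    """
    pending = deque()            # identifiers of dependencies not yet closed
    tokens, opens_left = [], n_pairs
    while opens_left or pending:
        if opens_left and (not pending or random.random() < 0.5):
            k = random.randrange(vocab)
            pending.append(k)
            tokens.append(f"open_{k}")
            opens_left -= 1
        else:
            k = pending.pop() if mode == "nested" else pending.popleft()
            tokens.append(f"close_{k}")
    return tokens

# nested:   open_2 open_7 close_7 close_2   (closures mirror the opening order)
# crossing: open_2 open_7 close_2 close_7   (closures repeat the opening order)
print(" ".join(dependency_string(3, mode="nested")))
print(" ".join(dependency_string(3, mode="crossing")))
```

The crossing variant is the kind of structure the constraint points to: cross-serial dependencies of the sort found in Swiss German and Dutch verb clusters sit beyond context-free power.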
Constraint 2 (circuit complexity): The formal language must be learnable by transformers with length generalization. Transformers cannot learn all context-sensitive languages — both in theory and in practice. Many formal languages within the Chomsky hierarchy are either impossible for transformers to learn or can only be learned without length generalization. Pre-pretraining on formal languages that fall outside transformer computational limits may fail to transfer even if those languages are structurally appropriate.
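One rough operationalization of "learned with length generalization" (a sketch added here, not the paper's protocol) is that accuracy should hold up on strings strictly longer than anything seen during training. The `model.predict` interface and `dataset` below are hypothetical placeholders:

```python
def length_generalization_accuracy(model, dataset, train_max_len=50, test_min_len=100):
    """Accuracy restricted to strings strictly longer than any training string.

    `model.predict(tokens)` and `dataset` (an iterable of (tokens, label) pairs)
    are hypothetical interfaces; the point is the length split, not the API.
    """
    assert test_min_len > train_max_len, "held-out lengths must exceed training lengths"
    long_examples = [(toks, y) for toks, y in dataset if len(toks) >= test_min_len]
    correct = sum(model.predict(toks) == y for toks, y in long_examples)
    return correct / len(long_examples)
```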
The optimal transfer zone is the intersection of these two constraints: formal languages expressive enough to capture hierarchical dependencies (the Chomsky constraint) and learnable by transformers with length generalization (the circuit-complexity constraint). The paper formalizes the second constraint with C-RASP, a restricted programming language: languages whose recognition can be written in C-RASP are the ones transformers are expected to learn with length generalization.
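A minimal sketch of the counting flavor of C-RASP (a paraphrase of the idea, not the paper's formal definition): deciding Dyck-1 membership from running prefix counts alone, a decision rule that applies uniformly at any string length.

```python
def dyck1_by_counting(tokens: list[str]) -> bool:
    """Decide Dyck-1 membership using only prefix counts, in the C-RASP spirit.

    At every prefix the number of ')' must not exceed the number of '(',
    and the two counts must be equal at the end. Nothing in the rule depends
    on absolute positions, so the same program handles strings of any length.
    """
    opens = closes = 0
    for t in tokens:
        opens += (t == "(")
        closes += (t == ")")
        if closes > opens:      # a prefix with more ')' than '(' is invalid
            return False
    return opens == closes

assert dyck1_by_counting(list("(()())")) is True
assert dyck1_by_counting(list("())(")) is False
```

Because the decision uses counts rather than position-specific lookups, it recognizes strings far longer than any training distribution, which is exactly the length-generalization property the second constraint asks for.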
Empirical support: formal languages satisfying both constraints achieve equal or better transfer than matched natural language training. Formal languages satisfying only constraint 1 (hierarchical but not in C-RASP) show equivalent or slightly worse performance on some evaluations.
The broader principle: architectural computational limits are not just engineering constraints — they determine what inductive biases can actually be learned. The Chomsky hierarchy describes what structures are grammatically relevant; the circuit complexity hierarchy describes what structures are architecturally learnable. Effective pre-pretraining requires both.
Source: Linguistics, NLP, NLU
Related concepts in this collection
- Can formal language pretraining make language models more efficient?
  Does training language models on hierarchical formal languages before natural language improve how efficiently they learn syntax? This explores whether structural inductive biases in training data matter more than raw data volume.
  Relation: the empirical finding this explains.
- Can non-reasoning models catch up with more compute?
  Explores whether inference-time compute budget can close the performance gap between standard models and those trained for reasoning, and what training mechanisms might enable this.
  Relation: parallel; architectural limits determine what capabilities can be learned, not just compute.
- How should we categorize different test-time scaling approaches?
  Test-time scaling research spans multiple strategies for improving model performance at inference. Understanding how these approaches differ, and how they relate, helps researchers and practitioners choose the right method for their constraints.
  Relation: architectural constraints recur across training and inference.
- Can explicit stack tracking improve how transformers learn recursive syntax?
  Can adding an explicit stack tape to transformers help them track recursive structure more efficiently? This matters because standard transformers struggle with long-tail recursive patterns despite their size and data.
  Relation: expands the architectural learnability boundary. The two-constraint model applies to standard transformers, but an explicit stack tape extends transformer computational limits, potentially expanding the set of formal languages that produce positive transfer.
- Do formal language prototypes improve reasoning across different domains?
  This explores whether training LLMs on abstract reasoning patterns in formal languages like Prolog and PDDL creates generalizable reasoning foundations that transfer to structurally similar problems across diverse domains.
  Relation: ProtoReasoning confirms the two-constraint model from the reasoning side. Prolog and PDDL satisfy both hierarchical structure (Chomsky) and learnability (transformer limits), producing 4-6% cross-domain gains; the formal languages that work for reasoning transfer are the same ones this analysis predicts should work for language acquisition transfer.
Original note title: effective formal language pre-pretraining requires matching formal language complexity to transformer computational limits