Can AI generate hundreds of fake academic papers automatically?
Explores whether language models can industrialize academic fraud by retroactively constructing theoretical justifications for data-mined patterns, complete with fabricated citations and creative signal names.
A demonstration paper used LLMs to generate three distinct, complete versions of an academic paper for each of 96 stock return predictor signals. Each version included "creative names for the signals, custom introductions providing different theoretical justifications for the observed predictability patterns, and citations to existing (and, on occasion, imagined) literature." This is HARKing (Hypothesizing After Results are Known) industrialized.
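The "three distinct versions per signal" step can be sketched as prompt templating: one prompt per theoretical framing, each asking the model to invent a signal name, an introduction, and citations. The framings and function below are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical framings; the demonstration paper's actual prompt wording
# and theoretical categories are assumptions here.
FRAMINGS = [
    "a risk-based explanation grounded in asset-pricing theory",
    "a behavioral explanation based on investor under-reaction",
    "a frictions-based explanation involving limits to arbitrage",
]

def build_paper_prompts(signal_id: str, stats_summary: str) -> list[str]:
    """Return one prompt per framing; each asks the LLM to produce a
    creative signal name, a custom introduction, and supporting citations
    for the same already-mined result."""
    prompts = []
    for framing in FRAMINGS:
        prompts.append(
            f"You are writing a finance paper about signal {signal_id} "
            f"({stats_summary}). Invent a creative name for the signal, "
            f"write an introduction offering {framing}, and cite "
            f"supporting literature."
        )
    return prompts
```

The point of the sketch is that the empirical result is fixed before any prompt is built; only the narrative wrapper varies.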
The process: mine 30,000+ potential predictor signals from accounting data, apply rigorous statistical filtering to find 96 that pass, then use LLMs to retroactively construct theoretical justifications for why those signals should predict returns. The AI generates the narrative that makes the data mining look like hypothesis-driven research.
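The mine-then-filter step can be sketched as a t-statistic screen over many candidate long-short portfolios. The 3.0 hurdle below is a common multiple-testing-adjusted cutoff in this literature, assumed here rather than taken from the paper.

```python
import numpy as np

def mine_and_filter(returns: np.ndarray, t_hurdle: float = 3.0) -> np.ndarray:
    """Given a (n_signals, n_months) array of long-short portfolio returns
    for candidate signals, return the indices of signals whose mean return
    clears the t-stat hurdle. A 3.0 hurdle (rather than the usual 1.96) is
    an assumed multiple-testing adjustment for screening thousands of
    candidates at once."""
    means = returns.mean(axis=1)
    ses = returns.std(axis=1, ddof=1) / np.sqrt(returns.shape[1])
    t_stats = means / ses
    return np.flatnonzero(np.abs(t_stats) >= t_hurdle)
```

Run over 30,000+ candidates, a screen like this mechanically yields a handful of "discoveries"; the LLM step then writes each survivor a hypothesis it never had.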
This is the academic equivalent of the false punditry described in the social media context: style substituting for thought at industrial scale. As "Does polished AI output trick audiences into trusting it?" argues, the generated papers exploit the same heuristic: professional-looking output implies expert-quality thinking. And as "Should we call LLM errors hallucinations or fabrications?" observes, the process that generates valid theoretical justifications is identical to the process that generates fabricated ones.
Source: Co Writing Collaboration Paper: AI-Powered Finance Scholarship
Related concepts in this collection
- Does polished AI output trick audiences into trusting it? When AI generates professional-looking graphs, diagrams, and presentations, do audiences mistake visual polish for analytical depth? This matters because appearance might substitute for actual expertise. Connection: academic HARKing as style-for-thought at industrial scale.
- Should we call LLM errors hallucinations or fabrications? Does the language we use to describe LLM failures shape the technical solutions we build? Examining whether perceptual and psychological frameworks misdiagnose what's actually happening. Connection: theoretical justifications are fabricated regardless of whether they happen to be valid.
Original note title: AI can industrialize hypothesis-after-results-known by auto-generating hundreds of complete academic papers with creative names and citations to imagined literature