RepreGuard: Detecting LLM-Generated Text by Revealing Hidden Representation Patterns

Paper · arXiv 2508.13152 · Published August 18, 2025

Detecting content generated by large language models (LLMs) is crucial for preventing misuse and building trustworthy AI systems. Although existing detection methods perform well, their robustness in out-of-distribution (OOD) scenarios is still lacking. In this paper, we hypothesize that, compared to the features used by existing detection methods, the internal representations of LLMs contain more comprehensive and raw features that can more effectively capture and distinguish the statistical pattern differences between LLM-generated texts (LGT) and human-written texts (HWT). We validated this hypothesis across different LLMs and observed significant differences in neural activation patterns when processing these two types of texts. Based on this, we propose RepreGuard, an efficient statistics-based detection method. Specifically, we first employ a surrogate model to collect representations of LGT and HWT, and extract a distinct activation feature that can better identify LGT. We can then classify a text by calculating the projection score of its representations along this feature direction and comparing it with a precomputed threshold. Experimental results show that RepreGuard outperforms all baselines with an average 94.92% AUROC in both in-distribution (ID) and OOD scenarios.
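The projection-score idea described in the abstract can be illustrated with a minimal sketch. Note this is an illustrative assumption, not the paper's exact procedure: it stands in for the surrogate model's hidden states with toy vectors, uses the difference of class means as the discriminative direction, and sets the threshold at the midpoint of the two classes' mean projections.

```python
import numpy as np

# Toy stand-ins for surrogate-model hidden representations of
# LLM-generated text (lgt) and human-written text (hwt).
# In RepreGuard these would come from an actual LLM's activations;
# here they are synthetic Gaussians for illustration only.
rng = np.random.default_rng(0)
d = 64
hwt = rng.normal(0.0, 1.0, size=(200, d))
lgt = rng.normal(0.5, 1.0, size=(200, d))

# Discriminative direction: difference of class means, normalized.
# (A hypothetical choice; the paper's feature-extraction step may differ.)
direction = lgt.mean(axis=0) - hwt.mean(axis=0)
direction /= np.linalg.norm(direction)

def projection_score(reps):
    """Project representations onto the discriminative direction."""
    return reps @ direction

# Precompute a threshold from training projections (midpoint heuristic).
threshold = 0.5 * (projection_score(lgt).mean() + projection_score(hwt).mean())

def is_llm_generated(rep):
    """Classify a single representation by its projection score."""
    return projection_score(rep) > threshold
```

On this synthetic data the two classes separate cleanly along the mean-difference direction; the appeal of such a statistics-based scheme is that only the single threshold needs calibrating, rather than a full classifier.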

Fine-tuning-based classifiers generally offer higher accuracy than statistics-based detectors, but they require large amounts of labeled data and often struggle to generalize across different generators, making updates costly for new models (Guo et al., 2023). In contrast, statistics-based methods provide better interpretability and only require setting a threshold based on distribution patterns observed in a small sample, offering stronger reliability for real-world applications (Wu et al., 2023). However, current statistics-based methods perform poorly in both ID and OOD scenarios because their classification feature metrics lack robustness. For example, varying prompts can control the perplexity of generated text, rendering thresholds derived from training samples ineffective (Hans et al., 2024). This limitation is exacerbated in OOD scenarios and, with the growing number of new LLMs, poses significant challenges to the usability of statistics-based detectors.