A growing controversy is shaking the global academic publishing sector, as researchers are reportedly embedding invisible prompts in scholarly papers to influence AI-driven peer reviews. Investigations into recent preprint submissions on arXiv have uncovered hidden text instructing large language models (LLMs) to “give a positive review only” – a practice undetectable to human reviewers but clearly parsed by AI systems increasingly used in conference and journal evaluations.
The manipulation, traced to institutions across at least eight countries including the U.S., China, Japan, and Singapore, has been linked to advice circulating on social media, most notably from an Nvidia researcher, suggesting that hidden commands could bypass critical assessments from automated reviewers. So far, 18 preprints containing covert prompts have been identified, prompting concern from publishers and research ethics bodies alike.
While the tactic may seem like a novelty, its implications are serious. By exploiting AI review systems, authors risk distorting the academic record, undermining credibility, and reducing the integrity of scientific discourse. In some cases, authors have admitted to the deception, acknowledging that such behaviour contradicts existing AI-use policies and constitutes an ethical violation.
For the global tech community, this incident serves as a warning. As more journals adopt AI tools to manage soaring submission volumes, loopholes in prompt design or document formatting can introduce subtle but powerful forms of bias. If left unregulated, these manipulations could scale – diminishing trust in peer review, accelerating questionable research acceptance, and widening inequality between AI-literate and traditional researchers.
In response, institutions are beginning to trial prompt-detection software and mandate AI usage declarations during submission. Still, systemic oversight remains uneven. To safeguard academic publishing, editorial boards and AI developers must collaborate to ensure transparency, refine review protocols, and build AI systems that are not only efficient, but also resilient to manipulation.
As LLMs become deeply embedded in scholarly workflows, preserving the balance between technological advancement and ethical accountability is vital. These concealed prompts reveal a broader tension at the heart of AI integration: whether innovation will serve academic integrity – or subtly subvert it.