Researchers have developed a tool that accurately identified over 99% of academic scientific texts written with artificial intelligence. The study was published in the journal Cell Reports Physical Science.

Although many AI text scanners are available online and perform quite well, no tools had been built specifically for academic writing. To fill that void, the research team set out to build a better-performing tool for just that purpose. The study focused on a type of article called Perspectives, which provide an overview of a specific research topic and are written by scientists. The team selected 64 such articles and generated 128 articles with ChatGPT on the same topics. When they compared the two sets, the researchers found one telling indicator of AI-written articles: predictability.

Unlike AI, human writers produce more complex paragraph structures, varying the number of sentences and total words per paragraph, as well as sentence length. Preferences in punctuation and vocabulary are also indicative. The team compiled 20 such features for the tool to look out for.
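Features of this kind are straightforward to compute. The sketch below is illustrative only, not the authors' actual 20-feature list: it derives a few predictability-related measures (sentences and words per paragraph, sentence-length variation, a punctuation preference) from raw text using the Python standard library.

```python
# Illustrative sketch of stylometric features like those described in the
# study; the feature names and choices here are assumptions, not the
# authors' published feature set.
import re
from statistics import mean, pstdev

def stylometric_features(text: str) -> dict:
    """Compute simple variability features for a text.

    Human-written text tends to show higher variance in these measures
    than AI-generated text, which is more uniform ("predictable").
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sent_counts = []   # sentences per paragraph
    word_counts = []   # words per paragraph
    sent_lengths = []  # words per sentence, across the whole text
    for p in paragraphs:
        sentences = [s for s in re.split(r"[.!?]+", p) if s.strip()]
        sent_counts.append(len(sentences))
        word_counts.append(len(p.split()))
        sent_lengths.extend(len(s.split()) for s in sentences)
    total_words = max(1, len(text.split()))
    return {
        "mean_sentences_per_paragraph": mean(sent_counts),
        "std_sentences_per_paragraph": pstdev(sent_counts),
        "mean_words_per_paragraph": mean(word_counts),
        "std_words_per_paragraph": pstdev(word_counts),
        "std_sentence_length": pstdev(sent_lengths),
        # punctuation preference: semicolons per 100 words
        "semicolons_per_100_words": 100 * text.count(";") / total_words,
    }
```

A classifier would then be trained on vectors of such features from known human- and AI-written samples; higher standard deviations generally point toward a human author.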

When tested, the tool achieved a 100% accuracy rate in distinguishing AI-generated articles from those written by humans. For identifying individual paragraphs within the article, the model had an accuracy rate of 92%.

In the next phase, the research team wants to test the tool on more extensive datasets and on other types of academic scientific writing. The tool was not designed to catch AI-generated student reports, but as lead author and University of Kansas professor Heather Desaire notes, people can easily replicate these methods to create models for their own purposes.