In December 2023, an article entitled “World Business Strategies in the Digital Age” was published in a scientific journal, with its author listed as Diomedes Spinellis, professor at the Athens University of Economics and Business. The article, with a plethora of bibliographic references, met all the formal criteria of a publication. But with one significant difference: it had not been written by Mr. Spinellis.

When the professor of the Department of Management Science & Technology at the Athens University of Economics and Business (AUEB) became aware of this article, he began a thorough investigation, which in late May 2025 brought to light the systematic use of generative artificial intelligence (GenAI) to create and publish a number of misleading articles.

As it turned out, of the 53 articles studied, 48 appear to have been produced by artificial intelligence (scoring 88% to 100% in the Turnitin detection tool). At the same time, many articles had been falsely attributed to researchers from prestigious universities, such as the University of California, Berkeley, and the University of Shanghai. Indeed, in two cases, the people falsely listed as authors of the articles were not alive when they were published.

The issue is not new; however, as time goes on, it can take on ever larger dimensions. As Mr. Spinellis told RES-EIA, “there is evidence that the number of publications produced by generative artificial intelligence

Quantity (of articles) versus (academic) quality

As Mr. Spinellis explained to RES-EIA, there is a category of academic journals known as “predatory journals”, in which some authors pay “in order to publish articles and ‘inflate’ their CVs without necessarily having done the work.” At the same time, well-known scientists from credible universities are listed as authors of articles to make the journal appear prestigious, but in fact the publications are false, as in Mr. Spinellis’s case.

“It is a deception of the organizations that evaluate researchers, which measure the quantity of publications without assessing what these publications concern or through what process they were published,” the professor explained. This deception leads to the incorrect evaluation of those researchers and of the universities to which they belong.

In addition, however, this practice poses a great danger: the erosion of the scientific research ecosystem, as poor-quality articles from unreliable journals penetrate it. “As the internet is ‘flooded’ with such articles, one can accidentally take them into account. Students, but also academics, may not be able to tell the difference. At least one of the articles investigated, produced by AI, has been found to have been cited by another, prestigious journal,” explained Mr. Spinellis.

Consequently, the knowledge that is diffused is eroded and its level gradually declines. “The term ‘model collapse’ has been coined to describe exactly this: as models and agents are fed with AI-generated creations, the level of knowledge that is diffused gradually falls to something much lower than the original,” Mr. Spinellis emphasized. “The same is true if AI is fed with its own texts: little by little the subtle concepts are lost and knowledge becomes a mush.”
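For readers with a programming background, the mechanism behind model collapse can be illustrated with a toy simulation. This is only a minimal sketch, not Mr. Spinellis’s analysis: it assumes the “model” is a simple Gaussian distribution that is repeatedly refitted to a handful of its own outputs, and it shows how the diversity (standard deviation) of what the model produces shrinks across generations.

```python
import random
import statistics

def collapse_demo(generations=200, n_samples=8, seed=42):
    """Toy illustration of 'model collapse': a model repeatedly
    fitted to its own synthetic outputs loses diversity.
    The 'model' here is just a Gaussian N(mu, sigma); each
    generation draws a few samples from the current model and
    refits mu and sigma to those samples alone."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0            # generation 0: the "original" distribution
    history = [sigma]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.mean(samples)        # refit on synthetic data only
        sigma = statistics.pstdev(samples)   # diversity of the new model
        history.append(sigma)
    return history

hist = collapse_demo()
print(f"sigma: initial {hist[0]:.3f} -> after {len(hist) - 1} generations {hist[-1]:.3f}")
```

Because each refit estimates the spread from only a few synthetic samples, the standard deviation drifts downward generation after generation, mirroring the quote above: a model fed its own texts gradually loses the subtle variation present in the original data.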

Ethical use and countermeasures

This entropy, however, seems to be slowly running into restrictions and obstacles. The question of the ethical use of AI, the protection of intellectual property, as well as public concern over misinformation, is gradually gaining ground.

In December 2023, the New York Times took legal action against OpenAI (which owns the popular ChatGPT tool) and Microsoft (which owns the corresponding Copilot tool), as, according to the NYT, the companies had “trained their models” on the newspaper group’s articles.

It is worth noting that OpenAI has agreements on the use of content from media outlets such as the Wall Street Journal, the New York Post, Politico and Bild, while Google has a similar agreement with the Associated Press.

At the same time, it is recalled that the massive use last March of the filter with which users could reproduce photos in the distinctive, iconic style of the Japanese animation studio Studio Ghibli (“Spirited Away”, “My Neighbour Totoro”) sparked much discussion about the protection of creative work.

At the regulatory level, the European Union’s Artificial Intelligence Act (AI Act) is in force, the first comprehensive legislative effort to regulate AI. In addition, the Danish government recently announced that it will amend its copyright law to ensure that citizens have rights over their body, facial features and voice, with the aim of limiting the creation and dissemination of deepfakes.

At the same time, COPE (the Committee on Publication Ethics) has emerged, a non-profit organization that aims to define best practices in ethics relating to scientific publications, while supporting those involved in them.

As the organization states: “Publication ethics is a key part of maintaining honesty and credibility in research; it helps to ensure its reliability and to promote knowledge. This benefits the entire research community and, in turn, helps society progress.”

“It is something we will continue to see: the effort to protect intellectual property or to limit misinformation, while artificial intelligence, in turn, becomes ever more difficult to detect,” Mr. Spinellis said. “It will be a constant struggle,” he added.

Artificial intelligence in the classroom

Despite the dangers, Mr. Spinellis insists that generative AI is “extremely useful”. “There are simply risks that we should keep in mind and try to limit. In this case, with better identification of researchers and journals, such problems can be reduced,” he said, insisting that “the main gain is to decide how to use artificial intelligence.”

As Mr. Spinellis explained, there are good and bad ways of using it. He urges his students to use it, so that they become familiar with it and can use it better. “They should avoid having artificial intelligence solve their exercises, because they will not learn the fundamentals they need and will not be able to do more complex things,” he said.

‘The genie doesn’t go back into the lamp’

As for whether he is optimistic or pessimistic about the outcome of the “struggle” between “good and evil” in the field of AI, Mr. Spinellis replies that he is a realist. “The genie cannot go back into the lamp,” he said. “Human discoveries have good and bad uses, but we always move forward by aiming for the better,” he added.

“At the moment, scientists have specialized assistants available 24 hours a day, which they can use to gain access to all knowledge. At the same time, we must focus on educating the new generation of scientists. Let us not end up with pupils and students lacking basic knowledge because artificial intelligence has done their work for them and they have no foundations.”