Analysis: ChatGPT, what is exciting and what is frightening about the new artificial intelligence

In recent weeks, the emergence of ChatGPT, a new chatbot developed by OpenAI, has drawn international attention. This new artificial intelligence (AI) tool could revolutionize an extremely wide range of human activities, but it has also raised considerable concerns among technology specialists.

It is highly likely that regulators will be the next to take an interest in this technology.

ChatGPT is one of the latest applications of the Generative Pre-trained Transformer 3 (GPT-3) algorithmic model. This state-of-the-art natural language processing model was first introduced in 2020 and has raised several ethical questions regarding its use ever since.

ChatGPT is able to understand natural human language and, upon receiving a request, generate a response that closely resembles the product of human intellect. The tool gained prominence for three main reasons.
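
ChatGPT itself had no public API at launch, but the underlying GPT-3 model family could already be queried programmatically through OpenAI's Python library. A minimal sketch of such a request-and-response exchange, with an illustrative model name and prompt (placeholders, not taken from the article), might look like this:

```python
# Minimal sketch: sending a natural-language request to a GPT-3-era
# completion model via OpenAI's Python library (pip install openai).
# The API key and prompt below are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family completion model
    prompt="Explain, in one paragraph, what a chatbot is.",
    max_tokens=150,
    temperature=0.7,
)

# The generated text comes back in the first completion choice.
print(response.choices[0].text.strip())
```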

Firstly, for the extremely refined level of its answers. Its almost faithful simulation of human-written text and its precise, coherent answers to the vast majority of questions, even the most complex ones, are impressive.

Such an evolution is extremely useful for facilitating a myriad of tasks and, consequently, extremely profitable for those who manage to reach this level of sophistication. In this sense, the second reason (though perhaps it should be considered the first) why ChatGPT caught the attention of experts was the incessant rumors about a huge US$ 10 billion investment that OpenAI, co-founded by Elon Musk and Sam Altman, was said to be negotiating with Microsoft, after having already received a US$ 1 billion investment from Microsoft in 2019.

According to Sam Altman, CEO of OpenAI, ChatGPT has already passed the mark of 1 million users, even though it was launched less than a month ago. For now, the AI system is being made available “free of charge”, enlisting users in a massive advertising and testing operation for the technology.

Thirdly, and from a more general perspective, ChatGPT highlights the arrival of a new phase of automation in which AI does not merely perform a limited task, but can interact with the user in a much more refined way and assist with far more elaborate work, from writing an article to fixing software code, in the space of a few seconds.

For example, “code debugging” is the first example offered on OpenAI’s official website of how ChatGPT can help examine and improve code through a “dialogue” with the programmer.
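
OpenAI's actual example is not reproduced here; as a purely hypothetical illustration of the kind of exchange involved, consider a short Python function with an off-by-one bug and the fix such a debugging dialogue might converge on:

```python
# Hypothetical illustration of a "code debug" exchange: the original
# function silently drops the last element, because range() excludes
# its stop value.
def sum_list(values):
    total = 0
    for i in range(len(values) - 1):  # bug: skips the final element
        total += values[i]
    return total

# The kind of correction a debugging dialogue might converge on:
def sum_list_fixed(values):
    total = 0
    for i in range(len(values)):  # iterate over every index
        total += values[i]
    return total

assert sum_list([1, 2, 3]) == 3        # buggy: last element missing
assert sum_list_fixed([1, 2, 3]) == 6  # fixed
```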

This ability to generate answers to any kind of question is extremely promising, but also frightening. When a user asks for malware (malicious software typically used in hacking attacks), ChatGPT quickly delivers a solid draft of code that would enable, or at least greatly facilitate, particularly pernicious attacks, even if the perpetrator were a novice.

So far, nobody seems to have asked ChatGPT to craft a series of fake news stories to harm public figures or institutions, but the system performed very well in composing phishing emails to be used in online scams. One can imagine that the same level of performance could be achieved in crafting fake news.

Perhaps one of the most successful applications of the chatbot could be in academic research, which forces us to radically rethink how individuals’ intellectual competences are assessed. What sense does it make to evaluate a student with a final course paper or an essay if such works can be produced in a few seconds by ChatGPT?

The ICML (International Conference on Machine Learning), one of the most prestigious AI conferences in the world, has already banned the use of tools like ChatGPT to write scientific papers. This decision gave rise to an ethical debate in academia and among machine learning professionals.

Among the ethical issues raised by the conference were the difficulty of distinguishing text written by the chatbot from text of human authorship, and the fact that such tools allow texts available on the internet to be reformulated without crediting the original authors.

Plagiarism is considered a serious problem in educational institutions, but systems like ChatGPT make the practice extremely easy and, at the same time, difficult to detect. While the prohibition established by the ICML is fully understandable, its implementation seems extremely difficult. It is still not clear how the ICML organizing committee, or anyone else, could determine whether a text was prepared with the aid of these tools.
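
One early family of detection heuristics, used by tools such as GPTZero, scores a passage by its statistical predictability (perplexity) under an open language model, on the assumption that machine-generated text tends to be more predictable than human writing. Below is a minimal sketch of that general idea using the open GPT-2 model and the Hugging Face transformers library; it illustrates the approach, and is not a tool used by the ICML:

```python
# Minimal perplexity scorer: machine-generated text tends to score
# lower (more predictable) under a language model than human text.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Encode the text and let the model predict each token from the
    # tokens before it; the returned loss is the average negative
    # log-likelihood, and exp(loss) is the perplexity.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Lower scores suggest more "model-like" (predictable) text; any real
# detector would need calibrated thresholds, which this sketch omits.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```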

ChatGPT’s applications can be extremely valuable and could save millions of hours of work writing emails, notes, plans, projects, software, and more. However, alongside the friendly and bona fide uses come many other, much less noble, potential (ab)uses.

The purpose for which any technology is used depends on its user. But the developer cannot be entirely exempt from liability, and owes a duty of care when the technology can easily be turned to harmful purposes.

ChatGPT raises very relevant issues of cybersecurity, data protection, pedagogy and ethics. It forces us to rethink our relationship with AI and to start considering the need to evolve with it, becoming smarter and better prepared thanks to it. The other option, lazier and far more pernicious, is to become dependent on AI rather than its protagonists.

The arrival of a more complex and challenging type of AI forces us to analyze all the effects of such advances, whether positive or negative, to prepare seriously, and to understand how to regulate it efficiently.
