by Jyoti Narayan, Krystal Hu and Martin Coulter

(Reuters) – A group of artificial intelligence (AI) experts and industry executives is calling in an open letter for a six-month pause in the development of generative AI systems more powerful than GPT-4, the model recently launched by OpenAI, citing potential risks to society and humanity.

The letter, published by the nonprofit Future of Life Institute and signed by more than 1,000 people including Elon Musk, calls for a pause until shared safety protocols for such systems are developed, implemented and audited by independent experts.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter says.

The letter warns that such systems could cause economic and political disruption, and calls on developers to work with policymakers and regulators.

Co-signatories include Stability AI chief executive Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio and Stuart Russell.

The concerns come as Europol, the European Union's police agency, warned in a report on Monday about the potential misuse of systems like ChatGPT in phishing attempts, disinformation and cybercrime.

Sam Altman, chief executive of OpenAI, did not sign the letter, a Future of Life Institute spokesperson told Reuters. OpenAI did not immediately respond to a request for comment.

“The letter isn’t perfect, but the spirit is right: We need to slow down until we better understand the ramifications,” said Gary Marcus, a New York University professor who signed the letter.

“Big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”

(Reporting by Jyoti Narayan in Bangalore, Krystal Hu in New York and Martin Coulter in London, and Nathan Vifflin; editing by Kate Entringer)

Copyright © 2023 Thomson Reuters