Potentially disastrous scenarios in the face of the advancement of Artificial Intelligence

Artificial Intelligence Progress

In 2023, the world witnessed great innovations in artificial intelligence (AI). Depending on what you read, these advances will either improve people’s lives or destroy them entirely in a kind of machine uprising. One of the most impactful news stories of the year was the launch of ChatGPT, which generated both enthusiasm and fear among the public.

ChatGPT is part of a new generation of AI systems that can converse, generate readable text, and even produce novel images and videos based on what they have “learned” from a vast database of digital books, online writings, and other media.

Derek Thompson, editor and journalist at The Atlantic, posed a series of questions to determine whether the new AI advances are really to be feared as a path to the end of the human race, or whether they are inspiring tools that will improve people’s lives.

Consulted by the American outlet, computer scientist Stephen Wolfram explains that large language models (LLMs) such as ChatGPT work in a conceptually simple way: they train a neural network to generate text, feeding it a large sample of text from the web, such as books, digital libraries, and other sources.

If someone asks an LLM to imitate Shakespeare, it will produce text with an iambic pentameter structure. If asked to write in the style of a particular science fiction writer, it will imitate the more general characteristics of that author.
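Although real LLMs use enormous neural networks trained on billions of words, the core idea Wolfram describes — predicting the next word from the words that came before — can be sketched with a toy bigram model. The following Python snippet is an illustrative simplification (the corpus and function names are invented for this example), not how ChatGPT actually works:

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a training sample,
# then generate text by repeatedly sampling a likely next word.
corpus = ("to be or not to be that is the question "
          "to die to sleep no more").split()

# Bigram counts: next_words[w] holds a Counter of words seen right after w.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=8, seed=0):
    """Generate up to `length` words after `start` by weighted sampling."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        counts = next_words.get(word)
        if not counts:  # dead end: no word ever followed this one
            break
        words, weights = zip(*counts.items())
        word = rng.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("to"))
```

Scaling this idea up — longer contexts, learned weights instead of raw counts — is, loosely, what separates this toy from a trained neural network, and it explains why an LLM fed Shakespeare reproduces Shakespearean patterns.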

“Experts have known for years that LLMs are impressive; they make things up, they can be useful, but they are really dumb systems, and they are not scary,” said Yann LeCun, chief AI scientist at Meta, consulted by The Atlantic.

The US outlet points out that AI development is concentrated in large companies and in startups backed by capital from technology investment firms.

The fact that these developments are concentrated in companies, rather than in universities and governments, may improve the efficiency and quality of these AI systems.

“I have no doubt that AI will develop faster within Microsoft, Meta, and Google than it would within, for example, the United States military,” says Derek Thompson.

However, the American outlet warns that companies can make mistakes when rushing to market a product that is not in optimal condition. For example, Microsoft’s Bing chatbot was aggressive toward the people who used it when it was first released. There are other errors of this type, such as Google’s chatbot, whose hurried launch was widely seen as a failure.

Philosopher Toby Ord warns that these advances in AI technology are not keeping pace with ethical developments around the use of AI. Asked by The Atlantic, Ord compared the current state of AI to “a prototype jet engine that can reach never-before-seen speeds, but without corresponding improvements in steering and control.” For the philosopher, it is as if humanity were aboard a powerful Mach 5 jet without a manual for steering the aircraft in the desired direction.

Regarding the fear that AIs are the beginning of the end of the human race, the outlet points out that systems like Bing and ChatGPT are not themselves good examples of dangerous artificial intelligence, but they do demonstrate our growing ability to develop a super-intelligent machine.

Others fear that AI systems may not be aligned with the intentions of their designers, a potential problem that many machine ethicists have warned about.

“How do we ensure that the AI that is built, which could well be significantly more intelligent than anyone who has ever lived, is aligned with the interests of its creators and of the human race?” asks The Atlantic.

And therein lies the great fear behind that question: a super-intelligent AI could pose a serious problem for humanity.

Another question that worries experts and is formulated by the American media is: “Do we have more to fear from non-aligned AI or from AI aligned with the interests of bad actors?”

One possible solution is to develop a set of laws and regulations ensuring that the AIs being developed are aligned with the interests of their creators, and that those interests do not harm humanity. Developing an AI outside these laws would be illegal.

However, there will always be rogue actors or regimes that could develop AIs with dangerous behaviors.

Another open question: How much should education change in response to the development of these AI systems?

The development of these AI systems is also proving useful in other industries, such as finance and programming. In some companies, AI systems already outperform analysts in picking the best stocks.

“ChatGPT has demonstrated good writing skills for demand letters, summary pleadings and judgments, and has even drafted questions for cross-examination,” said Michael Cembalest, Chairman of Market and Investment Strategy for JP Morgan Asset Management.

“LLMs are not replacements for lawyers, but they can increase their productivity, particularly when legal databases like Westlaw and Lexis are used to train them,” Cembalest added.

For decades, it has been said that AIs will replace workers in certain trades, such as radiology. So far, however, the use of AI in radiology remains an adjunct for clinicians rather than a replacement. As in radiology, these technologies are expected to serve as a complement that improves people’s lives.