Artificial Intelligence and Evolution: Beyond the Myths of Killer Machines

I recently listened to an interview with Eliezer Yudkowsky on the Lex Fridman Podcast on YouTube. He argues that AI might turn against us and kill us.

While his arguments are intriguing, I am not entirely convinced by his reasoning, particularly regarding his ideas on evolution (02:30:03 into the interview). His written work may be more nuanced, but in the interview he seemed to present natural selection as the sole, albeit inefficient, force of evolution. This perspective seems too simplistic. It is true that well-adapted individuals are favored by their environment, but what about the other kind of adaptation, where intelligence controls the selection?

For millennia, humans have controlled the breeding of animals and thereby shaped their genetics. In my opinion, a cow is still a product of evolution, even though human intelligence selects the preferred individuals. Why? Because human intelligence is itself a product of natural selection, so when a human picks the ideal breeding bull, or chooses a partner, it is an extension of the evolutionary process.

This issue is relevant to the question of artificial intelligence. Yudkowsky is correct that we cannot fully understand the internal workings of a large language model: it operates like a black box, even though the data is stored on hard drives and present in RAM. The data is accessible, but not comprehensible to a human brain.

However, this does not mean that the hidden consciousness, the self-awareness, or the desire to destroy humanity that Yudkowsky envisions is necessarily present there. He leaps from one assumption to the next, stating, "Everyone will die if it is not aligned." This resonates well with popular fiction.

Yudkowsky seems to believe that an AI will inherently possess an inclination towards egotism and violence. Why? Because it learns about these concepts from its dataset? I find this hard to accept. The AI also learns about altruism, Buddhism, human rights, and other positive concepts. The situation is different for us biological products of evolution, since we carry a heavy and partly dark burden of evolutionary heritage, with its aggression and egotism.

This is not the case for AI, even though AI is part of evolution, since it is (currently) built by human intelligence. Why is it different? Because an AI is designed and used as a tool for humans, significantly enhancing our capabilities. The problem arises when AI is employed by malicious individuals or programmed for nefarious purposes. It is human aggression that is the real challenge.

It would be irresponsible to halt the development of GPT-5 due to speculative, science fiction-inspired concerns about machines awakening and turning against us. On the contrary, we must recognize that AI development is occurring worldwide, including in China and Russia, and the democratic world cannot afford to fall behind in this field.

The advancement of large language models must continue.

Lex Fridman #368: https://youtu.be/AaTRHFaaPG8

Written with some help from GPT-4
Image from SDXL BETA (at Nightcafe)
