
Max Tegmark on AI: “A Cancer Which Can Kill All of Humanity”.

Respected cosmologist and AI safety advocate Max Tegmark recently expressed his concerns about the potential dangers of artificial intelligence, echoing the sentiment of AI expert Eliezer Yudkowsky. In a conversation that delved into the philosophical disagreements between the two thinkers, Tegmark acknowledged a substantial chance that humanity might not survive the unchecked development of AI.

Yudkowsky is known for his belief that there is an almost certain chance that AI will ultimately lead to humanity’s demise. While Tegmark shares some of these concerns, he also considers alternative trajectories where the outcome may not be as bleak. However, the possibility of a future without humans on the planet remains a significant and distressing prospect for Tegmark, who recently became a father.

Drawing an analogy between the AI threat and a cancer diagnosis, Tegmark described the potential consequences of AI as a “cancer which can kill all of humanity.” The gravity of this situation has led him to ponder the future of his newborn child and the generations to come. Even so, Tegmark’s position illustrates the value of weighing multiple perspectives on AI safety and its implications for humanity’s future.

As the debate on AI safety continues, researchers like Tegmark and Yudkowsky urge caution and responsible development of AI technologies. The conversation surrounding AI’s potential impact on humanity underscores the need for ethical guidelines, robust safety measures, and a collective effort to ensure that AI benefits all of humanity rather than leading to our extinction.

The actual quotes from the interview

Lex:

“Can you just linger on this maybe high level of philosophical disagreement with Eliezer Yudkowsky, in the hope you’re stating. So he is very sure, he puts a very high probability, very close to one, depending on the day he puts it at one, that AI is going to kill humans. That there’s just, he does not see a trajectory, which it doesn’t end up with that conclusion.

What trajectory do you see that doesn’t end up there? And maybe can you see the point he’s making, and can you also see a way out?”

Max:

“First of all, I tremendously respect Eliezer Yudkowsky and his thinking.

Second, I do share his view that there’s a pretty large chance that we’re not gonna make it as humans.

There won’t be any humans on the planet, in a not-too-distant future, and that makes me very sad.

You know, we just had a little baby and I keep asking myself, you know, is, (long pause) how old is he even gonna get, you know? And I ask myself, it feels, I said to my wife recently, it feels a little bit like I was just diagnosed with some sort of cancer, which has some, you know, risk of dying from and some risk of surviving, you know.

Except this is a kind of cancer which can kill all of humanity. So I completely take seriously his concerns,”

The full interview
