
Superintelligence Rising – Are We Prepared for Artificially Created Minds?

In 1993, acclaimed sci-fi author and computer scientist Vernor Vinge made a bold prediction – within 30 years, advances in technology would enable the creation of artificial intelligence surpassing human intelligence, leading to “the end of the human era.”

Vinge theorized that once AI became capable of recursively improving itself, it would set off a feedback loop of rapid, compounding gains in capability. This hypothetical point at which AI exceeds human intelligence has become known as “the Singularity.”
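One loose way to see why such a feedback loop would imply explosive growth: if each generation of AI can design a successor even slightly more capable than itself, capability compounds geometrically. The sketch below is a toy illustration of that intuition only; the improvement factor k and the notion of a single “capability” number are assumptions for the example, not anything Vinge or current research specifies.

```python
# Toy model of recursive self-improvement. Purely illustrative:
# "capability" is an abstract number, and the per-generation
# improvement factor k is an assumed constant, not a measured value.
def capability_after(generations: int, c0: float = 1.0, k: float = 1.1) -> float:
    """Capability if each AI generation builds a successor k times as capable."""
    c = c0
    for _ in range(generations):
        c *= k  # each generation compounds on the previous one
    return c

# Even a modest 10% gain per generation compounds quickly:
print(round(capability_after(10), 2))  # ~2.59x after 10 generations
print(round(capability_after(50), 2))  # ~117.39x after 50 generations
```

In this toy framing, any constant k > 1 yields exponential growth (c_n = c0 · k^n); the contested question is whether real AI systems could sustain such a loop at all.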

While predictions of superhuman AI may have sounded far-fetched in 1993, today they are taken seriously by many AI experts and tech investors seeking to develop “artificial general intelligence,” or AGI – AI capable of matching human performance on any intellectual task.

Leading AI researcher Roman Yampolskiy explains, “The whole point is that, once machines take over the process of doing science and engineering, the progress is so quick, you can’t keep up.”

Yampolskiy sees a microcosm of this in his own field of AI research, where innovations are published at a pace too rapid for experts to stay current. He and others believe AGI could trigger a runaway cycle of improvements, allowing machines to accelerate scientific understanding and technological innovation beyond human comprehension.

Once developed, an AGI system could be tasked with designing even more capable AI systems, progressing at a pace no human researcher could match. This scenario alarms researchers like Yampolskiy, who argues that because humans cannot reliably predict or understand the capabilities of AGI systems, we will be unable to control or contain them. To Yampolskiy, the only way to avoid catastrophic consequences is to avoid ever building AGI in the first place.

However, expert opinion remains mixed on the feasibility and risks of AGI. In a 2022 survey of over 700 AI researchers by the think tank AI Impacts, only 33% considered an uncontrollable AGI scenario “likely” or “quite likely,” while 47% considered it “unlikely” or “quite unlikely.”

Critics like Sameer Singh, an AI researcher at UC Irvine, argue that speculation about AGI and the Singularity distracts from pressing issues posed by today’s AI systems, including bias, job displacement, and legal questions around AI-generated content. In Singh’s view, dwelling on speculative futures diverts attention from concrete problems that need addressing now.

“It’s much more exciting to talk about reaching this sci-fi goal than actual realities of things,” he says. Singh supports calls for a moratorium on developing AI more powerful than models like GPT-3 to give researchers time to study risks and ethics.

The AGI debate highlights a growing rift in the AI community. Pioneers like Geoffrey Hinton and Yoshua Bengio have expressed doubts about the field’s trajectory, calling for caution around developing increasingly capable AI systems.

Yampolskiy backs a moratorium, arguing that “the only way to win is not to do it.” But many leading AI labs are heavily invested in the race to build ever-more-powerful models, betting that society will benefit from pushing ahead. With billions in funding available, the pressure to advance AI capabilities remains intense.

Starkly contrasting visions are emerging – some fear AI could end the human era in our lifetimes, while others feel these worries are overblown and distract from practical concerns. But both sides agree that as AI systems grow more advanced, researchers have a profound responsibility to pursue progress safely and ethically.

How should researchers steer a rapidly accelerating field clouded by uncertainty? For now, the debate between dramatic speculation and pragmatic calls for oversight remains unsettled. But the choices researchers make today could resonate for generations to come.
