Can Machines Achieve Superintelligence Without Human Intervention?

Author comments from Scientific American:

In 1965, statistician Irving John Good introduced the concept of an “ultraintelligent machine”: a sufficiently sophisticated computer that would rapidly improve itself, triggering an “intelligence explosion.” The idea was revisited recently with the development of AI systems like AlphaGo Zero, which, using no human game data, played millions of games against itself and achieved unprecedented improvement in a remarkably short period.

Today’s leading AI models have made significant strides in self-improvement, particularly in writing and refining their own software. OpenAI’s Codex and Anthropic’s Claude Code can operate independently for extended periods, generating new code or updating existing code. Codex, for instance, can create a working website in a matter of minutes: in one recent demonstration, a prompt entered from a phone yielded a functional website before the user arrived home.
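What such agentic tools automate can be pictured as a generate-test-keep loop. The sketch below is a deliberately toy illustration of that loop under stated assumptions, not the real Codex or Claude Code APIs: `propose_patch`, `score`, and `improve` are hypothetical names, and the model call is stubbed out with a random numeric perturbation.

```python
import random


def propose_patch(source: str) -> str:
    """Stand-in for a model proposing a revised version of the code.
    Here it just nudges a numeric constant so the sketch stays runnable."""
    prefix, value = source.rsplit("= ", 1)
    new_value = round(float(value) + random.uniform(-0.1, 0.1), 2)
    return f"{prefix}= {new_value}"


def score(source: str) -> float:
    """Stand-in for an evaluation step (unit tests, benchmarks, a reviewer).
    This toy metric rewards constants close to 0.9."""
    constant = float(source.rsplit("= ", 1)[1])
    return 1.0 - abs(0.9 - constant)


def improve(source: str, iterations: int = 100) -> str:
    """Keep a proposed revision only when it scores better than the current
    best -- the core of any automated generate-test-keep loop."""
    best, best_score = source, score(source)
    for _ in range(iterations):
        candidate = propose_patch(best)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best


print(improve("THRESHOLD = 0.5"))  # drifts toward "THRESHOLD = 0.9"
```

In a real agent the random perturbation would be a model proposing an edit and the scorer would be a test suite or benchmark; the loop itself is the part that runs without a human in it.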

However, despite these advancements, current AI systems still rely heavily on human oversight. They require skilled coders to set goals, design experiments, and evaluate progress. This limitation raises questions about the likelihood of AI systems evolving into superintelligence without human intervention. While some predictions of imminent superintelligence may seem exaggerated, it is essential to consider whether current AI systems could ever take that step entirely on their own.

[Image caption: Today’s leading AI models can already write and refine their own software. The question is whether that self-improvement can ever snowball into true superintelligence.]
