Ban on superintelligent AI: A necessity?

Lisa Ernst · 22.10.2025 · Technology · 5 minutes

The appeal circulating worldwide today, October 22, 2025, raises important questions and ties into earlier debates about the regulation of artificial intelligence.

Introduction

The debate about the future of artificial intelligence (AI) is reaching a new high. A broad coalition, including Prince Harry and Meghan, is calling for a moratorium on the development of so-called superintelligence. The appeal, published today, October 22, 2025, demands that development be halted until safety and democratic consent are guaranteed. The signatories, among them media figures, politicians, researchers, and tech pioneers, see significant risks in the uncontrolled development of superintelligent AI systems.

Superintelligence refers to hypothetical AI systems that exceed humans in virtually all cognitive tasks. This distinguishes the concept from Artificial General Intelligence (AGI), which denotes human-level capabilities; Artificial Superintelligence (ASI) is the term for systems that go beyond that. The current move recalls the pause appeal by the Future of Life Institute (FLI) from 2023, which called for a pause in the development of very large AI models.

Background and analysis

The organizers today published a concise 30-word statement: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” The statement has been reported consistently by AP, Reuters, and The Guardian. Among the prominent signatories are, in addition to Prince Harry and Meghan, the AI pioneers Geoffrey Hinton and Yoshua Bengio, Apple co-founder Steve Wozniak, Richard Branson, and political voices such as Susan Rice. Notably, the political diversity of the supporters extends to Steve Bannon and Glenn Beck.

The appeal explicitly targets the development of superintelligence, not useful, limited AI applications. The strategic idea behind a stop-for-now is to take the pace out of the race toward superhuman systems, a race that could dilute incentives for safety. A public ban, tied to conditions such as scientific consensus and broad acceptance, is meant to enforce that slowdown and hold policymakers accountable. The Future of Life Institute has argued for years that existential risks must be regulated systematically, in a manner similar to other high-risk technologies.

Polls in the United States show growing public concern: in 2025, Pew and Gallup reported a clear appetite for stronger rules and for safety testing before AI systems are released. This indicates broad support for regulatory measures.

Source: YouTube

The interview with Geoffrey Hinton (BBC Newsnight) provides background on why leading researchers view superintelligence as a particular risk category.

The vision of Artificial Superintelligence (ASI) – a concept that evokes both fascination and concern.

Source: youtube.com

Controversies and reactions

The core of the matter is well documented: there is a widely supported call to prohibit the development of superintelligence until strict conditions are met. The 30-word formulation is publicly documented and confirmed by major news outlets such as AP, Reuters, and The Guardian.

However, it remains unclear how technically close superintelligence actually is. Experts disagree; many see great uncertainty in development paths and timelines. It would also be misleading to interpret the call as a blanket AI ban. It addresses a specific target, namely systems that surpass humans in nearly all cognitive tasks, and ties the lifting of restrictions explicitly to safety and consent conditions.

The prohibition sign symbolizes calls for a moratorium or a complete ban on the development of superintelligent AI.

Source: alamy.de

The technology sector and governments warn that radical stops could jeopardize innovation and competitiveness; Reuters cites resistance to moratoria in politics and industry. Other voices emphasize that ASI may still be far off and that ordinary regulation, such as the EU AI Regulation, already provides a sufficient framework.

Practically, the discussion is shifting from “How fast can we go?” to “Under what conditions may we proceed?”. For Europe, it is relevant that the EU AI Regulation already defines prohibited practices (e.g., social scoring) and strictly regulates high-risk applications: a ready-made governance mechanism, but no explicit superintelligence ban. For proper classification, always check the original text of the appeal and pay attention to precise terms (AGI vs. ASI). Reliable sources on regulatory status are official EU pages and major news organizations.

Source: YouTube

The talk by Max Tegmark offers a good introduction to the opportunities, risks, and governance ideas around superintelligence.

The rapid development of AI leads to a future whose complexity and implications are not yet fully foreseeable.

Source: nexperts.ai

Open questions and conclusion

Open questions remain. Who decides, operationally, when a development counts as “superintelligence”? Without robust metrics and independent audits, a ban would be hard to enforce. Which institutions would have the mandate and resources for enforcement: national, European, or global ones? Existing regimes such as the EU AI Regulation offer points of connection, but no ready-made ASI-specific rule. And finally: how close are such systems in reality? Assessments of the timeline remain disputed.

The global call to stop superintelligent AI is a political signal with a clear condition: safety and consent first, further development second. For you, this means: pay attention to precise terms, check primary sources, and compare appeals against existing legal frameworks such as the EU AI Regulation. That way you stay capable of acting in the debate, beyond hype and alarmism, as reported by Reuters and The Guardian.
