Vitalik Buterin calls for pausing AI hardware to ensure humanity’s safety.

Introduction

In a Jan. 5 blog post, Ethereum co-founder Vitalik Buterin sparked a heated debate about the potential dangers of artificial intelligence (AI) superintelligence and the ethical implications of accelerating its development. Buterin advocates a concept he calls "defensive accelerationism," or d/acc, and proposes slowing risky AI development through a temporary, worldwide pause on industrial-scale computing hardware.

Buterin warns that AI superintelligence, defined as an AI system that surpasses human intelligence in all domains, is not just a theoretical possibility: it could emerge within as little as five years. He emphasizes that the outcome of such an advance is unpredictable, leaving humanity at the mercy of a technology that could bring enormous harm if left unregulated.

The Threat of AI Superintelligence

Defining AI Superintelligence

AI superintelligence is often conceptualized as a system that surpasses human intelligence across all domains of expertise. Researchers and thought leaders have explored the concept for decades, with some warning that it may become reality sooner than expected. In his blog post, Buterin cites a March 2023 open letter, signed by more than 2,600 tech executives and researchers, that called for a pause on the development of powerful AI systems because of the "profound risks to society and humanity."

AI superintelligence is not only about computational power but also about the ethical implications of creating systems that can operate with a level of autonomy and intelligence surpassing human capabilities. This raises concerns about control, privacy, and the potential for misuse.

The Case for Defensive Accelerationism

Buterin’s idea of defensive accelerationism (d/acc) is rooted in the belief that accelerating AI development without safeguards is inherently risky. He argues that while some forms of AI, such as narrow AI designed for specific tasks, are relatively safe, superintelligent AI could have devastating consequences, particularly if it falls under the control of a small number of powerful entities.

The Soft Pause Proposal

To mitigate this risk, Buterin proposes a "soft pause" on industrial-scale computational resources. This measure would reduce global computing power by up to 99% over a period of one to two years, effectively halting AI development at a critical juncture. He believes that such a pause could buy humanity the time needed to prepare for the potential consequences of an AI superintelligence.

The Need for Safeguards

Buterin acknowledges that his earlier formulation of this idea offered only vague calls for safeguards. His starting point is liability rules, under which individuals or organizations that use, deploy, or develop AI can be held accountable for the harm those systems cause; if liability proves insufficient, he views the hardware pause as the more "muscular" option.

One potential mechanism Buterin floats involves building AI chips that continue operating only if they receive a trio of signatures from major international bodies each week. These signatures could come from organizations such as the World Health Organization, the United Nations, or other global governing bodies. The signatures would be device-independent, and the scheme could even require proof that the authorization was published on a blockchain.
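To make the idea concrete, here is a minimal sketch of what such a weekly three-signature check could look like in software. This is purely illustrative: Buterin describes the mechanism at the level of chip design, and the message format, key handling, and library choice (Python with the pyca/cryptography package) below are our assumptions, not part of his proposal.

```python
# Hypothetical sketch of a weekly "3-of-N signatures" authorization check.
# Assumes the pyca/cryptography library; the message format and key setup
# are illustrative assumptions, not taken from Buterin's post.
from datetime import date
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

REQUIRED_SIGNATURES = 3  # the "trio" of weekly approvals

def weekly_message() -> bytes:
    """Device-independent message: the same bytes for every chip this week."""
    year, week, _ = date.today().isocalendar()
    return f"AI-HW-AUTHORIZATION:{year}-W{week:02d}".encode()

def chip_may_run(signatures: dict[bytes, bytes],
                 trusted_keys: list[Ed25519PublicKey]) -> bool:
    """Return True only if at least three distinct trusted bodies signed."""
    msg = weekly_message()
    valid = 0
    for key in trusted_keys:
        sig = signatures.get(key.public_bytes_raw())
        if sig is None:
            continue
        try:
            key.verify(sig, msg)  # raises InvalidSignature on failure
            valid += 1
        except InvalidSignature:
            continue
    return valid >= REQUIRED_SIGNATURES

# Demo: three hypothetical international bodies sign this week's message.
bodies = [Ed25519PrivateKey.generate() for _ in range(3)]
pubs = [b.public_key() for b in bodies]
sigs = {p.public_bytes_raw(): b.sign(weekly_message())
        for b, p in zip(bodies, pubs)}
assert chip_may_run(sigs, pubs)
```

In an actual deployment, the check would run in the chip's hardware or firmware rather than in Python, and the public keys would belong to the designated international bodies rather than being generated on the spot.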

The Zero-Knowledge Proof Concept

Buterin suggests that, if desired, a zero-knowledge proof could be used to confirm that these signatures are valid without revealing sensitive information about the devices themselves, adding a layer of privacy and security to the system.
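Buterin does not specify a proof system, but the core idea, proving that a secret-dependent statement holds without revealing the secret, can be illustrated with a classic Schnorr-style proof of knowledge. The toy group parameters and the Fiat-Shamir transform below are illustrative assumptions only; a real deployment would use a standardized group and an audited proof system.

```python
# Minimal non-interactive Schnorr proof-of-knowledge sketch (Fiat-Shamir).
# Toy parameters for illustration only; not a production proof system.
import hashlib
import secrets

# Toy group: g = 2 generates a subgroup of prime order q = 11 in Z_23*.
p, q, g = 23, 11, 2

def _challenge(y: int, t: int) -> int:
    """Fiat-Shamir: a hash of the transcript stands in for the verifier."""
    h = hashlib.sha256(f"{p}|{g}|{y}|{t}".encode()).hexdigest()
    return int(h, 16) % q

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)   # one-time random nonce
    t = pow(g, r, p)           # commitment
    c = _challenge(y, t)       # challenge
    s = (r + c * x) % q        # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Accept iff g^s == t * y^c mod p; the check never sees x itself."""
    c = _challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = 7  # e.g., a hypothetical private credential
assert verify(*prove(secret))
```

The verifier learns that the prover holds a valid secret, but not the secret itself; applied to the chip scheme, a device could prove it holds this week's authorization without exposing anything about the device.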

Alternative Approaches: Effective Accelerationism

Buterin distinguishes his defensive accelerationism from effective accelerationism (e/acc), which champions the unconstrained advancement of AI technology. He argues that while e/acc could deliver significant benefits, it carries far greater risks because it forgoes safeguards.

The Cultural Impact: A Cult Grows Around AI Memecoin ‘Religions’

In a related development, Buterin has noted the growing cultural impact of AI-related technologies: a small but vocal group of people is treating AI memecoins, cryptocurrencies built around artificial intelligence themes, with almost religious devotion, giving rise to so-called memecoin "religions."

This phenomenon raises questions about how consumers respond to rapidly evolving technology and about the potential for manipulation or exploitation by tech companies.

Conclusion

In his blog post, Vitalik Buterin calls on the AI community to prioritize caution and responsibility as we approach the threshold of superintelligent AI. His proposed defensive accelerationism offers a potential solution to the ethical dilemma posed by this groundbreaking technology. However, even with safeguards in place, the development of AI superintelligence could still pose significant risks to humanity.

Buterin’s call for action highlights the urgent need for global cooperation and ethical leadership in the pursuit of AI advancement. As we continue to explore the boundaries of what is possible with artificial intelligence, it is essential that we remain vigilant about the potential consequences of our actions and work together to ensure that this technology serves humanity, not the other way around.
