Two of the most prominent figures in technology offered contrasting views this week on the development of artificial intelligence, highlighting the growing tension between innovation and safety.
In a blog post about his company's trajectory published Sunday evening, OpenAI CEO Sam Altman revealed that the firm has tripled its user base to over 300 million weekly active users as it works toward artificial general intelligence (AGI).
"We are now confident we know how to build AGI as we have traditionally understood it," Altman said, predicting that in 2025, AI agents could "join the workforce" and "materially change the output of companies."
According to Altman, OpenAI is looking beyond AI agents and AGI and is beginning to work toward "superintelligence in the true sense of the word."
No timeline was given for the arrival of AGI or superintelligence. OpenAI did not immediately respond to a request for comment.
Meanwhile, Ethereum co-creator Vitalik Buterin proposed using blockchain technology to create global failsafe mechanisms for advanced AI systems, including a "soft pause" capability that would temporarily halt industrial-scale AI operations if warning signs appear.
Crypto-based safeguards for AI safety
Buterin has long spoken about "d/acc," or decentralized/defensive acceleration. In the simplest sense, d/acc is a variant of e/acc, or effective accelerationism, a philosophical movement espoused by high-profile Silicon Valley figures such as a16z's Marc Andreessen.
Buterin's d/acc also supports technological progress, but it prioritizes developments that enhance safety and human agency. Unlike effective accelerationism (e/acc), which takes a "growth at any cost" approach, d/acc focuses on building defensive capabilities first.
"D/acc is an extension of the underlying values of crypto (decentralization, censorship resistance, open global economy, and society) to other areas of technology," Buterin wrote.
Looking back at how d/acc has evolved over the past year, Buterin outlined how a more cautious approach to superintelligent AI systems could be implemented using existing crypto mechanisms such as zero-knowledge proofs.
Under Buterin's proposal, major AI systems would require regular sign-off from three international groups to keep running.
"The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices," Buterin explained.
The system would operate like a master switch: either all approved computers run, or none do, preventing anyone from enforcing a pause selectively.
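The all-or-nothing mechanism described above can be sketched in code. This is an illustrative simplification, not Buterin's actual design: the body names and keys are invented, and keyed HMACs stand in for the public-key or zero-knowledge signatures a real deployment would use. The point it demonstrates is the structure of the check, where approval is over a device-independent message (the current week), so withholding one signature pauses every machine at once.

```python
import hmac
import hashlib

# Hypothetical signing bodies with stand-in secret keys. In a real system
# these would be independent international organizations using public-key
# signatures, not shared-secret HMACs.
BODIES = {
    "body_a": b"secret-key-a",
    "body_b": b"secret-key-b",
    "body_c": b"secret-key-c",
}

def sign_epoch(body: str, week: int) -> bytes:
    """A body's approval over the current week number. The message names no
    specific machine, so the signature is device-independent: one approval
    covers all hardware (or none)."""
    msg = f"allow-week-{week}".encode()
    return hmac.new(BODIES[body], msg, hashlib.sha256).digest()

def may_run(week: int, signatures: dict) -> bool:
    """All-or-nothing check performed by each AI device: it keeps running
    only if every body signed this week's message. A single withheld
    signature triggers a global soft pause."""
    msg = f"allow-week-{week}".encode()
    return all(
        hmac.compare_digest(
            signatures.get(body, b""),
            hmac.new(key, msg, hashlib.sha256).digest(),
        )
        for body, key in BODIES.items()
    )

# All three bodies approve week 1, so devices run.
sigs_week1 = {b: sign_epoch(b, 1) for b in BODIES}
assert may_run(1, sigs_week1)

# One body withholds approval for week 2: every device pauses.
sigs_week2 = {b: sign_epoch(b, 2) for b in BODIES if b != "body_c"}
assert not may_run(2, sigs_week2)
```

Because the devices verify signatures rather than obey a central server, no single party can selectively exempt its own hardware, which is the property Buterin's "master switch" framing depends on.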
"Until such a critical moment happens, merely having the capability to soft-pause would cause little harm to developers," Buterin noted, describing the system as a form of insurance against catastrophic scenarios.
In any case, OpenAI's explosive growth since 2023, from 100 million to 300 million weekly users in roughly two years, shows how quickly AI adoption is progressing.
As OpenAI has evolved from an independent research lab into a major tech company, Altman acknowledged the challenges of building "an entire company, almost from scratch, around this new technology."
The proposals reflect broader industry debates about governing AI development. Critics have argued that implementing a global control system would require unprecedented cooperation between major AI developers, governments, and the crypto industry.
"A year of 'wartime mode' can easily be worth a hundred years of work under conditions of complacency," Buterin wrote. If development must be limited, he argued, it is preferable to put everyone on an equal footing and do the hard work of actually organizing that, rather than having one party attempt to rule over everyone else.