Sam Altman launched 2025 with a bold declaration about AGI, a term commonly used to describe AI systems that can understand, learn, and carry out any intellectual task a human can.
He also stated that this year could bring the first wave of AI agents, which he describes as a potential turning point in modern history.
Altman depicted OpenAI's transition from a quiet research lab to a company that claims to be on the verge of creating AGI.
While ChatGPT celebrated its second birthday barely a month ago, Altman suggests the next generation of AI models with intricate reasoning is already here. The timeline seems ambitious, perhaps too ambitious.
From there, it's all about integrating near-human AI into society until AI surpasses us at everything.
Wen AGI, Wen ASI?
Altman's explanation of what AGI actually means remained ambiguous, and his timeline predictions drew the attention of AI experts and industry veterans.
"We are now confident we know how to build AGI as we have traditionally understood it," Altman wrote. "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies."
Because there is no universal definition of AGI, Altman's description remains vague. As AI models become more powerful but not necessarily more capable, the bar for what counts as AGI keeps rising.
"When considering what Altman said about AGI-level AI agents, it's important to focus on how the concept of AGI has been evolving," Humayun Sheikh, CEO of Fetch.ai and Chairman of the ASI Alliance, told Decrypt.
Although these systems can now pass many of the traditional AGI benchmarks, such as the Turing test, Sheikh said, this doesn't mean they are sentient. "AGI has not yet reached a stage of genuine intelligence, and I don't think it will for quite some time."
What Altman means by "AGI" is the crux of the disconnect between his enthusiasm and expert consensus. His prediction of AI agents "joining the workforce" in 2025 sounds more like advanced automation than true artificial general intelligence.
"Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own," he wrote in his blog post, adding that this would in turn massively increase abundance and prosperity.
But is Altman right that AGI, or at least agent integration, will arrive in 2025? Not everyone is so optimistic.
"There are just too many bugs and inconsistencies with existing AI models that must be fixed first," Charles Wayn, co-founder of decentralized web3 platform Galxe, told Decrypt. "That said, it's likely a matter of years rather than decades before we see AGI-level AI agents."
Some researchers suspect Altman’s bold projections may serve another purpose.
After all, OpenAI has been burning through cash at a staggering rate, requiring massive investments to keep its AI development on track.
Promising imminent breakthroughs could, in the opinion of some, keep investors interested despite the company's high operating costs.
We now aspire to aim beyond that, to hype in the purest sense of the word, because we are confident we can spin bullshit at unprecedented levels and get away with it. We love our products, but we are here for the glorious future rounds of funding. With infinite funding, we … https://t.co/cH9xN5oJxK
— Gary Marcus (@GaryMarcus) January 6, 2025
That's quite a jab at someone claiming to be on the cusp of one of humankind's most important technological breakthroughs.
However, some are backing Altman’s statements.
"If Sam Altman is saying that AGI is coming soon, then he likely has some data or industry insight to back up this claim," said Harrison Seletsky, head of business development at digital identity platform SPACE ID.
If Altman's claims are accurate and technology keeps evolving at the same pace, Seletsky said, "generally intelligent" AI agents may be a year or two away.
The OpenAI CEO also suggested that AGI would not be enough for him, and that his company aims to build ASI (artificial superintelligence), a stage of AI development at which models surpass human capabilities across all tasks.
"We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else," Altman wrote in the blog post.
Some forecasts claim machines won't be able to replace humans at all tasks until 2116, though Altman offered no details about an ASI timeline.
According to Altman, ASI is only a matter of "a few thousand days" away, while expert forecasters estimate there is a 50% chance ASI will be achieved, at the earliest, in 2060.
Achieving AGI requires both knowing how to do it and being able to do it.
Humanity is still far from reaching that milestone, according to Yann LeCun, chief AI scientist at Meta, because of limitations in current training methods and the hardware needed to process such vast amounts of data.
I said that reaching Human-Level AI "will take several years if not a decade."
Sam Altman says "several thousand days," which is at least 2000 days (6 years) or perhaps 3000 days (9 years).
So we're not in disagreement. But I think the distribution has a long tail: it could take… https://t.co/EZmuuWyeWz
— Yann LeCun (@ylecun) October 16, 2024
Eliezer Yudkowsky, a well-known AI safety researcher and writer, has suggested that this may be a marketing ploy that mainly benefits OpenAI in the near term.
OpenAI benefits both from the short-term hype, and even from people later saying, "Ha ha, look at this hype-based field that didn't deliver; clearly not risky, no need to shut down OpenAI." https://t.co/ybkh9DGUm5
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky) January 5, 2025
Human Workers vs AI Agents
Unlike AGI or ASI, agentic behavior is already a reality, and the number of AI agents is growing faster than many expected.
Frameworks like CrewAI, AutoGen, and LangChain have made it possible to develop AI agents with diverse functions, including the ability to collaborate with humans.
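These frameworks differ in their APIs, but the loop they all automate is the same: a model picks a tool, the tool runs, and the result feeds back into the next decision. Here is a minimal, library-free sketch of that pattern; the planner stub and tool names are purely illustrative and do not represent any framework's actual interface.

```python
# Minimal sketch of the agent loop underlying frameworks like
# CrewAI, AutoGen, and LangChain. The "planner" is a hard-coded
# stand-in for an LLM call that decides the next action.

def planner(task, history):
    # Stand-in for an LLM: pick a tool on the first step,
    # then finish with the most recent result.
    if not history:
        if "add" in task:
            return ("calculator", task)
        return ("search", task)
    return ("finish", history[-1])

TOOLS = {
    # Toy tools: a digit-summing calculator and a fake search engine.
    "calculator": lambda q: sum(int(t) for t in q.split() if t.isdigit()),
    "search": lambda q: f"top result for: {q}",
}

def run_agent(task, max_steps=5):
    history = []
    for _ in range(max_steps):
        action, arg = planner(task, history)
        if action == "finish":
            return arg
        history.append(TOOLS[action](arg))  # tool output feeds back in
    return history[-1]
```

In a real framework, the planner is an LLM prompted with tool descriptions and the running history, but the control flow is essentially this loop.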
What does this mean for the average Joe, and will it help or hurt regular workers?
Researchers aren’t very concerned.
"I don't think we'll see drastic organizational changes immediately," Fetch.ai's Sheikh said. "While there may be a reduction in human capital, especially for repetitive tasks, these advancements may also enable the handling of more complex repetitive tasks that robotic process automation (RPA) systems cannot handle right now."
Seletsky also believes AI agents will most likely handle repetitive tasks rather than those that require some degree of decision-making.
In other words, as long as people can use their creativity and expertise to their advantage, and bear the consequences of their decisions, they are safe.
"I don't think decision-making will necessarily be led by AI agents in the near future, because they can reason and analyze, but they don't have that human ingenuity yet," he told Decrypt.
And there seems to be some degree of consensus, at least in the short term.
"The key distinction lies in the lack of 'humanity' in AI's approach. It's an objective, data-driven approach to financial research and investing. This can help rather than hinder financial decisions, since it eliminates some of the emotional biases that frequently cause rash decisions," Galxe's Wayn said.
Experts are already aware of the potential social repercussions of deploying AI agents.
According to research from the City University of Hong Kong, generative AI and humans must coexist and collaborate in order for society to grow in a healthy, sustainable way.
"AI has created both challenges and opportunities in various fields, including technology, business, education, healthcare, as well as arts and humanities," the research paper reads. "The key to addressing challenges and maximizing the opportunities created by generative AI is AI-human collaboration."
Despite this push for human-AI collaboration, companies have started substituting human workers for AI agents with mixed results.
Generally speaking, they still need a human to handle tasks agents cannot manage due to hallucinations, training limitations, or simple lack of contextual understanding.
As of 2024, nearly 25% of CEOs were excited by the idea of having their farm of digitally enslaved agents do the same work humans do, without the labor costs.
And no one is truly safe: other experts claim an AI agent could probably do almost 80% of what a CEO does, and possibly do it better.