Artificial general intelligence (AGI) is already being hyped, but realizing it will take time. How much time is highly debatable. For example, Sam Altman once said he thought AGI would be achieved by 2025, earlier than most other estimates. Later, Altman revised the forecast to “during Trump’s term.” Most recently, he has called AGI a pointless term, and some IT leaders agree, arguing that AI is a continuum and that AGI will be realized incrementally rather than suddenly.
“[W]e think about AGI in terms of stepwise progress toward machines that can go beyond visual perception and question answering to goal-based decision-making,” says Brian Weiss, chief technology officer at hyperautomation and enterprise AI infrastructure provider Hyperscience, in an email interview. “The real shift comes when systems don’t just read, classify and summarize human-generated document content, but when we entrust them with the ultimate business decisions.”
On the 2025 Gartner Hype Cycle for AI graph, AGI appears behind but relatively close to other forms of artificial intelligence, including AI agents, multimodal AI and AI TRiSM (ethical and secure AI), which Gartner recommends IT leaders focus on in 2025.
OpenAI’s newly released GPT-5 isn’t AGI, though it can purportedly deliver more useful responses across different domains. Tal Lev-Ami, CTO and co-founder of media optimization and visual experience platform provider Cloudinary, says “reliable” is the operative word when it comes to AGI.
“I predict we will see functionally broad AI systems that appear AGI-like in limited contexts within the next five to seven years, especially in areas like creative content, code generation and customer interaction,” says Lev-Ami in an email interview. “However, true AGI [that is] adaptable, explainable and ethical across domains is still likely more than 10 years out.”

Tal Lev-Ami, Cloudinary
Other estimates are even longer. For example, Josh Bosquez, chief technology officer at public benefit software provider Second Front Systems, thinks AGI probably won’t be a reality for one or two decades, and that reliable, production-ready AGI will likely take even longer.
“We may see impressive demonstrations sooner, but building systems that people can depend on for critical decisions requires extensive testing, safety measures, and regulatory frameworks that don’t exist yet,” says Bosquez in an email interview.
Jim Rowan, principal, Deloitte Consulting and US Head of AI, says that while the timeline for and definition of achieving AGI remain uncertain, organizations are already preparing for its arrival.
“By implementing standards, addressing regulatory challenges and optimizing their data ecosystems, companies are strengthening current AI capabilities and laying the foundation for AGI. These proactive measures make the path toward AGI feel increasingly within reach,” says Rowan in an email interview.
Any estimates of AGI’s arrival are subject to change, given the accelerating rate of AI innovation and emerging regulation.
Challenges With AGI
Artificial narrow intelligence, or ANI (what we’ve been using to date), still isn’t perfect. Data is often to blame, which is why there’s a huge push toward AI-ready data. Yet despite the plethora of tools available to manage data and data quality, some enterprises are still struggling. Without AI-ready data, enterprises invite reliability issues with any form of AI.
“Today’s systems can hallucinate or take rogue actions, and we’ve all seen the examples. But AGI will run longer, touch more systems, and make higher-stakes decisions. The risk isn’t just a bad response. It’s cascading failure across infrastructure,” says Kit Colbert, platform CTO at Invisible Technology, a software services provider supporting the AI value chain, in an email interview. “We will need a sophisticated set of safeguards in place to ensure this doesn’t happen. Today these exist as basic access controls to sensitive systems, but with AGI we’ll need much more advanced mechanisms.”
Deloitte’s Rowan says his company’s concerns are less about the technology and more about organizational preparedness and potential mismanagement.
“Without the right frameworks and governance, AGI implementation could amplify existing challenges, such as strategic misalignment. Robust preparedness will be crucial to maximize AGI’s benefits and minimize its risks,” says Rowan. “As with previous AI advancements, CIOs should approach AGI with a strategic and business-focused approach that looks for opportunities to drive long-term value. [S]tart with low-risk, high-value pilots that improve internal productivity or automate repetitive tasks before expanding AGI to solve cross-departmental challenges. This phased approach helps teams adapt gradually, builds trust in AGI systems and surfaces operational challenges early.”

Jim Rowan, Deloitte
Cloudinary’s Lev-Ami is concerned about hallucinations and opacity.
“My top concern is [the] ‘illusion of understanding.’ Systems that sound competent but have no grounded comprehension can cause real harm, especially when used in high-stakes decisions, accessibility or misinformation-heavy contexts,” says Lev-Ami. “I’m also concerned about opaque dependency chains. If core business logic starts relying on evolving black-box models, how do we ensure continuity, accountability and auditability? Even if we carefully test the AI, once we give it full autonomy, how can we trust what it will do when it encounters a situation it’s never seen before? The risk is that [AGI’s] mistakes could be unpredictable and potentially unlimited.”
David Guarrera, EY Americas generative AI leader, believes today’s challenges will remain challenges for AGI. “Power and resources are becoming increasingly concentrated in a small number of technology companies, creating a new form of digital hegemony that could have broad societal implications,” says Guarrera in an email interview. “At the same time, we’re witnessing the spread of misinformation and a flood of low-quality AI-generated content [that] threatens to degrade the information ecosystem people rely on to make decisions. These trends risk fueling greater polarization, as algorithms reinforce divides and push communities further apart.”
There are also economic concerns.
“[A]utomation is already displacing certain categories of jobs, and AGI would likely accelerate that trend dramatically. Beyond job loss, we face the possibility that agentic workflows could make catastrophic mistakes or hallucinate in ways that cause real-world harm if given too much autonomy,” says EY’s Guarrera. “Looking further ahead, AGI raises the profound question of alignment. Will the goals of these systems truly align with humanity’s best interests? As we grant them more trust and responsibility, we need to be certain they won’t act against us.”
Hyperscience’s Weiss underscores the need for accountability and safety.
“AGI isn’t just about capability, it’s about trust. In mission-critical systems [such as] underwriting, government forms processing or financial approvals, we’re dealing with decisions that have major consequences. If a system makes a wrong call, or worse, an unexplainable one, the liability can be severe,” says Weiss. “We’re also watching the industry lean too hard into generalized models, which often lack the rigor, domain expertise or data specificity needed to be safe in enterprise settings.”
How IT Leaders Should Approach AGI
Aaron Harris, CTO at Sage Group, an accounting, financial, HR and payroll technology provider for small and medium businesses (SMBs), says IT leaders need to recognize that they’ll eventually have to embrace AGI. If they don’t, their organizations will be left behind.
“Companies must continue to clean their data, understand their data, make their data accessible [and] create the governance and assurance programs around their data. All these things are no less important now than they were,” says Harris. “I think the companies that really succeed will be the ones who take that seriously. Yes, it’s about understanding AI capabilities, picking the right tools [and] solving the right problems, but I think the winners are the ones who create the right foundation for AI to operate on.”
Ashish Khushu, CTO of engineering and technology services provider L&T Technology Services, says IT leaders should approach AGI with strategic caution and proactive experimentation. Key steps include cultivating AGI literacy across teams, prioritizing use case driven research, leading with agility and vision, strengthening the foundational infrastructures and investing in core AGI capabilities. He also recommends piloting agentic systems in controlled environments and engaging with policy and ethics communities.
“Treat AGI not as a product, but as a paradigm shift. It’s not just about tech, it’s about governance, culture and responsibility,” says Khushu in an email interview.
Ashish Khushu, L&T Technology Services
Roman Rylko, CTO at Python development company Pynest, says IT leaders should start building a habit of visibility now. “Even if AGI is years away, the groundwork is cultural: how you document assumptions, evaluate system output [and] build guardrails around fast-moving tools. Treat [AGI] like any complex system: scoped, monitored and continuously stress-tested,” says Rylko in an email interview. “And make sure you’re not the only one thinking about it. The best ideas — and the best constraints — usually come from people closer to the edge cases than the strategy deck.”
Other Points to Consider
Cloudinary is already seeing ANI radically reshape how developers and marketers collaborate. AGI could further blur the lines.
“[I]magine product managers directly generating UI prototypes, or designers orchestrating content pipelines with simple intent-driven prompts,” says Cloudinary’s Lev-Ami. “This would create the need for new roles: AI experience designers, model governance leads [and], synthetic data auditors. Our architecture would shift toward modular, model-driven infrastructure where orchestration, not just execution, becomes the core competency.”
Hyperscience’s Weiss says today’s systems excel at retrieval-based tasks and act as research assistants, but independent decision-making at the level of complex, regulated enterprise processes is another frontier entirely.
“We’re in the early innings of cognition for interactivity, models that can retrieve information or chat and generate content, but cognition that supports independent analytics, makes autonomous decisions inside workflows and justifies those decisions? That’s a different level,” says Weiss.
EY Americas’ Guarrera reasons that if machines outperform humans in most economically valuable work, the entire workforce structure would be upended. Roles in all organizations would shrink dramatically, and ownership and control of technology would become even more concentrated.
“While some envision a utopia of abundance driven by unmatched productivity gains, the reality is the transition would be disruptive,” says Guarrera.
“Managing that balance between opportunity and disruption would be one of the greatest challenges companies will ever face.”

David Guarrera, EY
Second Front Systems’ Bosquez says AGI would fundamentally reshape how his company thinks about technology strategy, staffing and organizational structure.
“In the near term, we’re already seeing AI augment our development teams, which is improving code quality, accelerating prototyping and enhancing decision-making processes,” says Bosquez. “If true AGI emerges, we will likely see flatter organizational structures and technology stacks that have AGI as a core platform component. Hopefully, this transition will happen gradually, so we can adapt our workforce to this new paradigm.”
Case in Point
Ryan Achterberg, CTO at tech and data consulting firm Resultant, believes consulting firms may soon find their traditional value proposition under intense pressure. Weeks of market research, benchmarking, and scenario planning will be achievable in hours or even minutes. AGI could monitor clients’ businesses and markets in real time, surfacing risks and opportunities as they arise.
“The traditional consulting pyramid, with many junior analysts feeding a small number of senior partners, will shrink as automation handles routine data-heavy work. In its place will be leaner teams of AI-native consultants and professionals adept at guiding and validating AGI outputs while bringing deep industry insight and human nuance. Soft skills such as influence, facilitation and executive coaching will rise in value,” says Achterberg in an email interview.
Firms that shift from “we deliver answers” to “we help you act on the right answers” will thrive, he says. Those clinging to traditional slide-deck delivery models won’t.
“At Resultant, we face a fundamental choice that defines our approach to artificial general intelligence: Should we enhance our current operations with AI tools, or should we completely reimagine our business with AI as the foundation?” says Achterberg. “We’ve chosen both paths. Our dual-track approach delivers immediate value while preparing for a radically transformed future.”
At present, the Resultant team is reconstructing its essential workflows from client acquisition through project completion, assuming AI is an integral collaborator rather than an add-on tool.
“This approach ensures we’re not simply accelerating outdated methods with new technology but genuinely transforming how work gets done,” says Achterberg.