VentureBeat

AI agents are hitting a liability wall. Mixus has a plan to overcome it using human overseers on high-risk workflows

While enterprises face the challenges of deploying AI agents in critical applications, a new, more pragmatic model is emerging that puts humans back in control as a strategic safeguard against AI failure. One such example is Mixus, a platform that uses a “colleague-in-the-loop” approach to make AI agents reliable for mission-critical work. This approach is a response to the growing evidence that fully autonomous agents are a high-stakes gamble.

The high cost of unchecked AI

The problem of AI hallucinations has become a tangible risk as companies explore AI applications. In a recent incident, the AI-powered code editor Cursor saw its own support bot invent a fake policy restricting subscriptions, sparking a wave of public customer cancellations. Similarly, the fintech company Klarna famously reversed course on replacing customer service agents with AI after admitting the move resulted in lower quality. In a more alarming case, New York City’s AI-powered business chatbot advised entrepreneurs to engage in illegal practices, highlighting the catastrophic compliance risks of unmonitored agents.

These incidents are symptoms of a larger capability gap. According to a May 2025 Salesforce research paper, today’s leading agents succeed only 58% of the time on single-step tasks and just 35% of the time on multi-step ones, highlighting “a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios.”

The colleague-in-the-loop model

To bridge this gap, a new approach focuses on structured human oversight. “An AI agent should act at your direction and on your behalf,” Mixus co-founder Elliot Katz told VentureBeat. “But without built-in organizational oversight, fully autonomous agents often create more problems than they solve.”

This philosophy underpins Mixus’s colleague-in-the-loop model, which embeds human verification directly into automated workflows. For example, a large retailer might receive weekly reports from thousands of stores that contain critical operational data (e.g., sales volumes, labor hours, productivity ratios, compensation requests from headquarters). Human analysts must spend hours manually reviewing the data and making decisions based on heuristics. With Mixus, the AI agent automates the heavy lifting, analyzing complex patterns and flagging anomalies like unusually high salary requests or productivity outliers.

For high-stakes decisions like payment authorizations or policy violations — workflows defined by a human user as “high-risk” — the agent pauses and requires human approval before proceeding. The division of labor between AI and humans has been integrated into the agent creation process. “This approach means humans only get involved when their expertise actually adds value — typically the critical 5-10% of decisions that could have significant impact — while the remaining 90-95% of routine tasks flow through automatically,” Katz said. “You get the speed of full automation for standard operations, but human oversight kicks in precisely when context, judgment, and accountability matter most.”

In a demo that the Mixus team showed to VentureBeat, creating an agent is an intuitive process that can be done with plain-text instructions.
To build a fact-checking agent for reporters, for example, co-founder Shai Magzimof simply described the multi-step process in natural language and instructed the platform to embed human verification steps with specific thresholds, such as when a claim is high-risk and can result in reputational damage or legal consequences.

One of the platform’s core strengths is its integrations with tools like Google Drive, email, and Slack, allowing enterprise users to bring their own data sources into workflows and interact with agents directly from their communication platform of choice, without having to switch contexts or learn a new interface (for example, the fact-checking agent was instructed to send approval requests to the editor’s email).

The platform’s integration capabilities extend further to meet specific enterprise needs. Mixus supports the Model Context Protocol (MCP), which enables businesses to connect agents to their bespoke tools and APIs, avoiding the need to reinvent the wheel for existing internal systems. Combined with integrations for other enterprise software like Jira and Salesforce, this allows agents to perform complex, cross-platform tasks, such as checking on open engineering tickets and reporting the status back to a manager on Slack.

Human oversight as a strategic multiplier

The enterprise AI space is currently undergoing a reality check as companies move from experimentation to production. The consensus among many industry leaders is that humans in the loop are a practical necessity for agents to perform reliably.

AI Agents will likely follow a self driving trajectory, where you need a human in the loop for a long tail of tasks for a while. The big difference is we’ll get a growing number of autonomous agents along the way, where full self driving is an all or nothing proposition. https://t.co/5dR7cGS7jn — Aaron Levie (@levie) June 20, 2025

Mixus’s collaborative model changes the economics of scaling AI. Mixus predicts that by 2030, agent deployment may grow 1000x and each human overseer will become 50x more efficient as AI agents become more reliable. But the total need for human oversight will still grow.

“Each human overseer manages exponentially more AI work over time, but you still need more total oversight as AI deployment explodes across your organization,” Katz said.

For enterprise leaders, this means human skills will evolve rather than disappear. Instead of being replaced by AI, experts will be promoted to roles where they orchestrate fleets of AI agents and handle the high-stakes decisions flagged for their review. In this framework, building a strong human oversight function becomes a competitive advantage, allowing companies to deploy AI more aggressively and safely than their rivals. “Companies that master this multiplication will dominate their industries, while those chasing full automation will struggle with reliability, compliance, and trust,” Katz said.
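Mixus does not expose its internal APIs publicly, so the sketch below only illustrates the colleague-in-the-loop pattern described above: routine steps flow through automatically, while steps a builder has marked high-risk pause for a human decision. The field names, thresholds and approval callback are assumptions for the sketch, not the platform's actual interface.

```python
from dataclasses import dataclass
from typing import Callable

# Sketch of a "colleague-in-the-loop" gate: routine work flows through
# automatically, while steps the builder marked high-risk pause for a human.
# Field names and thresholds are illustrative, not Mixus's actual API.

@dataclass
class StepResult:
    action: str
    amount: float          # e.g. a payment authorization amount
    anomaly_score: float   # produced by the agent's upstream analysis

def needs_human(step: StepResult, amount_limit: float = 10_000.0,
                anomaly_limit: float = 0.8) -> bool:
    """Route to a human when the step matches a user-defined high-risk rule."""
    return step.amount > amount_limit or step.anomaly_score > anomaly_limit

def run_step(step: StepResult, approve: Callable[[StepResult], bool]) -> str:
    if needs_human(step):
        # The agent pauses here; `approve` stands in for the email or Slack
        # approval request the human overseer receives.
        return "executed" if approve(step) else "rejected"
    return "executed"  # the routine ~90-95% of work flows through automatically

if __name__ == "__main__":
    flagged = StepResult(action="authorize_payment", amount=42_000.0, anomaly_score=0.35)
    decision = run_step(flagged, approve=lambda s: input(
        f"Approve {s.action} for ${s.amount:,.0f}? [y/N] ").strip().lower() == "y")
    print(decision)
```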


Lessons learned from agentic AI leaders reveal critical deployment strategies for enterprises

Companies are rushing AI agents into production — and many of them will fail. But the reason has nothing to do with their AI models.

On day two of VB Transform 2025, industry leaders shared hard-won lessons from deploying AI agents at scale. A panel moderated by Joanne Chen, general partner at Foundation Capital, included Shawn Malhotra, CTO at Rocket Companies, which uses agents across the home ownership journey from mortgage underwriting to customer chat; Shailesh Nalawadi, head of product at Sendbird, which builds agentic customer service experiences for companies across multiple verticals; and Thys Waanders, SVP of AI transformation at Cognigy, whose platform automates customer experiences for large enterprise contact centers.

Their shared discovery: Companies that build evaluation and orchestration infrastructure first are successful, while those rushing to production with powerful models fail at scale.

The ROI reality: Beyond simple cost cutting

A key part of engineering AI agents for success is understanding the return on investment (ROI). Early AI agent deployments focused on cost reduction. While that remains a key component, enterprise leaders now report more complex ROI patterns that demand different technical architectures.

Cost reduction wins

Malhotra shared the most dramatic cost example from Rocket Companies. “We had an engineer [who] in about two days of work was able to build a simple agent to handle a very niche problem called ‘transfer tax calculations’ in the mortgage underwriting part of the process. And that two days of effort saved us a million dollars a year in expense,” he said.

For Cognigy, Waanders noted that cost per call is a key metric. He said that if AI agents are used to automate parts of those calls, it’s possible to reduce the average handling time per call.

Revenue generation methods

Saving is one thing; making more revenue is another. Malhotra reported that his team has seen conversion improvements: As clients get the answers to their questions faster and have a good experience, they are converting at higher rates.

Proactive revenue opportunities

Nalawadi highlighted entirely new revenue capabilities through proactive outreach. His team enables proactive customer service, reaching out before customers even realize they have a problem. A food delivery example illustrates this perfectly. “They already know when an order is going to be late, and rather than waiting for the customer to get upset and call them, they realize that there was an opportunity to get ahead of it,” he said.

Why AI agents break in production

While there are solid ROI opportunities for enterprises that deploy agentic AI, there are also some challenges in production deployments. Nalawadi identified the core technical failure: Companies build AI agents without evaluation infrastructure. “Before you even start building it, you should have an eval infrastructure in place,” Nalawadi said. “All of us used to be software engineers. No one deploys to production without running unit tests. And I think a very simplistic way of thinking about eval is that it’s the unit test for your AI agent system.”

Traditional software testing approaches don’t work for AI agents.
He noted that it’s just not possible to predict every possible input or write comprehensive test cases for natural language interactions. Nalawadi’s team learned this through customer service deployments across retail, food delivery and financial services. Standard quality assurance approaches missed edge cases that emerged in production.

AI testing AI: The new quality assurance paradigm

Given the complexity of AI testing, what should organizations do? Waanders solved the testing problem through simulation. “We have a feature that we’re releasing soon that is about simulating potential conversations,” Waanders explained. “So it’s essentially AI agents testing AI agents.”

The testing isn’t just conversation quality testing; it’s behavioral analysis at scale. Can it help to understand how an agent responds to angry customers? How does it handle multiple languages? What happens when customers use slang?

“The biggest challenge is you don’t know what you don’t know,” Waanders said. “How does it react to anything that anyone could come up with? You only find it out by simulating conversations, by really pushing it under thousands of different scenarios.” The approach tests demographic variations, emotional states and edge cases that human QA teams can’t cover comprehensively.

The coming complexity explosion

Current AI agents handle single tasks independently. Enterprise leaders need to prepare for a different reality: Hundreds of agents per organization learning from each other. The infrastructure implications are massive. When agents share data and collaborate, failure modes multiply exponentially. Traditional monitoring systems can’t track these interactions. Companies must architect for this complexity now. Retrofitting infrastructure for multi-agent systems costs significantly more than building it correctly from the start.

“If you fast forward in what’s theoretically possible, there could be hundreds of them in an organization, and perhaps they are learning from each other,” Chen said. “The number of things that could happen just explodes. The complexity explodes.”
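None of the panelists shared code, but the “eval as the unit test for your agent” idea can be sketched simply: a fixed scenario suite (angry customer, slang, policy questions) with behavioral checks that gates deployment. The scenarios, checks and call_agent placeholder below are illustrative assumptions, not Sendbird's or Cognigy's actual tooling.

```python
# Sketch of "eval as the unit test for your AI agent system": a fixed scenario
# suite with behavioral checks that gates deployment. Scenarios, checks and the
# call_agent placeholder are illustrative, not any vendor's actual tooling.

from typing import Callable, Dict, List

SCENARIOS: List[Dict] = [
    {"name": "angry_customer",
     "input": "This is the third time my order is late. Fix it NOW.",
     "checks": [lambda out: "sorry" in out.lower() or "apolog" in out.lower()]},
    {"name": "slang",
     "input": "yo where's my stuff at lol",
     "checks": [lambda out: len(out.strip()) > 0]},
    {"name": "refund_policy",
     "input": "Can I get a refund after 90 days?",
     "checks": [lambda out: "refund" in out.lower()]},
]

def call_agent(prompt: str) -> str:
    # Placeholder: swap in the real agent or model endpoint under test.
    return f"Sorry about that. Let me check your order and refund options: {prompt}"

def run_evals(agent: Callable[[str], str]) -> bool:
    failures = [sc["name"] for sc in SCENARIOS
                if not all(check(agent(sc["input"])) for check in sc["checks"])]
    print(f"{len(SCENARIOS) - len(failures)}/{len(SCENARIOS)} scenarios passed; failed: {failures}")
    return not failures  # like unit tests, a failing eval blocks the release

if __name__ == "__main__":
    run_evals(call_agent)
```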


Walmart cracks enterprise AI at scale: Thousands of use cases, one framework

Walmart continues to make strides in cracking the code on deploying agentic AI at enterprise scale. Their secret? Treating trust as an engineering requirement, not some compliance checkbox you tick at the end.

During the “Trust in the Algorithm: How Walmart’s Agentic AI Is Redefining Consumer Confidence and Retail Leadership” session at VB Transform 2025, Walmart’s VP of emerging technology, Desirée Gosby, explained how the retail giant operationalizes thousands of AI use cases. One of the retailer’s primary objectives is to consistently maintain and strengthen customer confidence among its 255 million weekly shoppers.

“We see this as a pretty big inflection point, very similar to the internet,” Gosby told industry analyst Susan Etlinger during Tuesday’s morning session. “It’s as profound in terms of how we’re actually going to operate, how we actually do work.”

The session delivered valuable lessons learned from Walmart’s AI deployment experiences. Implicit throughout the discussion is the retail giant’s continual search for new ways to apply distributed systems architecture principles, thereby avoiding the creation of technical debt.

Four-stakeholder framework structures AI deployment

Walmart’s AI architecture rejects horizontal platforms for targeted stakeholder solutions. Each group receives purpose-built tools that address specific operational frictions. Customers engage Sparky for natural language shopping. Field associates get inventory and workflow optimization tools. Merchants access decision-support systems for category management. Sellers receive business integration capabilities. “And then, of course, we’ve got developers, and really, you know, giving them the superpowers and charging them up with, you know, the new agent of tools,” Gosby explained.

“We have hundreds, if not thousands, of different use cases across the company that we’re bringing to life,” Gosby revealed. The scale demands architectural discipline that most enterprises lack. The segmentation acknowledges the fundamental need of each team in Walmart to have purpose-built tools for their specific jobs. Store associates managing inventory need different tools from merchants analyzing regional trends. Generic platforms fail because they ignore operational reality. Walmart’s specificity drives adoption through relevance, not mandate.

Trust economics are driving AI adoption at Walmart

Walmart discovered that trust is built through value delivery, not just mandatory training programs that associates, at times, question the value of. Gosby’s example resonated as she explained her mother’s shopping evolution from weekly store visits to COVID-era deliveries, illustrating exactly how natural adoption works. Each step provided an immediate, tangible benefit. No friction, no forced change management, yet the progression happened faster than anyone could have predicted.

“She’s been interacting with AI through that whole time,” Gosby explained. “The fact that she was able to go to the store and get what she wanted, it was on the shelf. AI was used to do that.”

The benefits customers are getting from Walmart’s predictive commerce vision are further reflected in Gosby’s mother’s experiences.
“Instead of having to go weekly, figure out what groceries you need to have delivered, what if it just showed up for you automatically?” That’s the essence of predictive commerce and how it delivers value at scale to every Walmart customer.

“If you’re adding value to their lives, helping them remove friction, helping them save money and live better, which is part of our mission, then the trust comes,” Gosby stated. Associates follow the same pattern. When AI actually improves their work, saves them time and helps them excel, adoption happens naturally and trust is earned.

Fashion cycles compress from months to weeks

Walmart’s Trend to Product system quantifies the operational value of AI. The platform synthesizes social media signals, customer behavior and regional patterns to slash product development from months to weeks. “Trend to Product has gotten us down from months to weeks to getting the right products to our customers,” Gosby revealed. The system creates products in response to real-time demand rather than historical data.

The months-to-weeks compression transforms Walmart’s retail economics. Inventory turns accelerate. Markdown exposure shrinks. Capital efficiency multiplies. The company maintains price leadership while matching any competitor’s speed-to-market capabilities. Every high-velocity category can benefit from using AI to shrink time-to-market and deliver quantifiable gains.

How Walmart uses MCP to create a scalable agent architecture

Walmart’s approach to agent orchestration draws directly from its hard-won experience with distributed systems. The company uses the Model Context Protocol (MCP) to standardize how agents interact with existing services. “We break down our domains and really looking at how do we wrap those things as MCP protocol, and then exposing those things that we can then start to orchestrate different agents,” Gosby explained. The strategy transforms existing infrastructure rather than replacing it.

The architectural philosophy runs deeper than protocols. “The change that we’re seeing today is very similar to what we’ve seen when we went from monoliths to distributed systems. We don’t want to repeat those mistakes,” Gosby stated. Gosby outlined the execution requirements: “How do you decompose your domains? What MCP servers should you have? What sort of agent orchestration should you have?” At Walmart, these represent daily operational decisions, not theoretical exercises.

“We’re looking to take our existing infrastructure, break it down, and then recompose it into the agents that we want to be able to build,” Gosby explained. This standardization-first approach enables flexibility. Services built years ago now power agentic experiences through proper abstraction layers.

Merchant expertise becomes enterprise intelligence

Walmart leverages decades of employee knowledge, making it a core component of its growing AI capabilities. The company systematically captures category expertise from thousands of merchants, creating a competitive advantage no digital-first retailer can match. “We have thousands of merchants who are excellent at what they do. They are experts in the categories that they support,” Gosby explained. “We have a cheese merchant who knows exactly what wine goes or what cheese pairing, but that data isn’t necessarily captured in a structured way.” AI operationalizes this knowledge. “With the tools that we have, we can capture that expertise that
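Walmart's internal code isn't public, but the "wrap an existing domain as an MCP server" idea Gosby describes can be sketched with the open-source MCP Python SDK (pip install mcp). The inventory tool below is a hypothetical stand-in for a legacy domain service, not Walmart's implementation.

```python
# Sketch of wrapping an existing domain service as an MCP server so agents can
# be orchestrated on top of it, using the open-source MCP Python SDK
# (`pip install mcp`). The inventory tool is a hypothetical stand-in.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-domain")

@mcp.tool()
def get_store_inventory(store_id: str, sku: str) -> dict:
    """Expose an existing inventory lookup to agent orchestrators."""
    # In practice this would call the legacy domain service behind a stable contract.
    return {"store_id": store_id, "sku": sku, "on_hand": 42, "on_order": 12}

if __name__ == "__main__":
    mcp.run()  # agents discover and call the tool over the MCP protocol
```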


Retail Resurrection: David’s Bridal bets its future on AI after double bankruptcy

Inside a new David’s Bridal store in Delray Beach, Florida, a bride-to-be carefully taps images on a 65-inch touchscreen, curating a vision board for her wedding. Behind the scenes, an AI system automatically analyzes her selections, building a knowledge graph that will match her with vendors, recommend products and generate a personalized wedding plan. For the overwhelmed bride facing 300-plus wedding planning tasks, this AI assistant promises to automate the process: suggesting what to do next, reorganizing timelines when plans change and eliminating the need to manually update spreadsheets that inevitably break when wedding plans evolve.

That’s the vision David’s Bridal is racing to fully implement with Pearl Planner, its new beta AI-powered wedding planning platform. For the twice-bankrupt retailer, this technology-driven transformation represents a high-stakes bet that AI can accomplish what traditional retail strategies couldn’t: Survival in an industry where 15,000 stores are projected to close this year alone.

David’s Bridal is hardly alone in the dramatic and ongoing wave of store closures, bankruptcies and disruptions sweeping through the U.S. retail industry since the mid-2010s. Dubbed the “retail apocalypse,” this wave included at least 133 major retail bankruptcies and 57,000 store closures between 2018 and 2024. The company narrowly survived liquidation in its second bankruptcy in 2023 when business development company CION Investment Corporation — which has more than $6.1 billion in assets and a portfolio of 100 companies — acquired substantially all of its assets and invested $20 million in new funding.

David’s AI-led transformation is driven from the top down by new CEO Kelly Cook, who originally joined the company as CMO in 2019. Her vision of taking the company from “aisle to algorithm” led her to make an unconventional choice for her leadership team. Rather than recruiting from within the bridal or retail industries, Cook tapped Elina Vilk, a Silicon Valley tech veteran with 25 years of experience in payments and digital technology, to lead the execution as president.

“I’m probably not the first choice, but that’s by design,” Vilk told VentureBeat in an exclusive interview. Vilk’s background couldn’t be more different from traditional retail leadership: A decade at eBay and PayPal where she served as CMO, experience running small business marketing at Meta with “200 million businesses” and being among “the first digital marketers, ever.” This fresh outsider perspective was precisely what Cook needed to reimagine how a 75-year-old bridal retailer could use AI to create an entirely new business model.

What’s driving David’s Bridal’s transformation

AI was not part of the DNA of David’s Bridal, so Vilk first faced the challenge of building a team from scratch. Her first call was to Mike Bal, a seasoned product leader and technologist, who she worked with as CMO of WooCommerce. Bal, who had spent his career in technology companies like Automattic (parent company of WordPress.com) and various agencies specializing in AI development, was initially reluctant. “I’ve been married for almost 15 years, and my wife’s a marriage and family therapist… she doesn’t like weddings.” Despite his reservations about the wedding industry, though, Vilk’s comprehensive vision convinced him.
“Elina had this end-to-end plan,” he explains, highlighting the media network, the acquisition of Love Stories TV and the opportunity to use AI for wedding planning.

With a technical leader in place, Vilk faced a key decision. “I could have a whole team and have everybody report to me. That was an option. Or I could have a couple of people report to me to start, and then everyone else dotted-line to me, but put them in other organizations, which is exactly what I did.” By distributing expertise throughout the company rather than creating a siloed AI team, Vilk says the strategy paid immediate dividends because technological transformation became everyone’s responsibility rather than an isolated initiative.

Their accomplishments:

Resource multiplication: Without increasing headcount, Vilk effectively doubled her available talent by accessing developers and resources from multiple departments.

Cross-company influence: With team members embedded in every leader’s organization, the AI initiative gained strategic representation at all levels.

Accelerated development: The team functioned like a startup within the established company, moving quickly by working across traditional departmental boundaries.

Collaborative engagement: Department heads became natural stakeholders through their team members’ involvement, creating organic buy-in across the organization.

This distributed approach accelerated the company-wide identity shift, transforming David’s from a traditional retailer to a technology-enabled wedding platform in less than a year.

Building the technical foundation

When Mike Bal arrived at David’s Bridal last December, he faced a daunting technical challenge. The company needed to build a sophisticated AI system with limited resources, a tight timeline and no AI experts. Looking at the wedding industry’s reliance on spreadsheets and the communication barriers between brides and vendors, Bal saw an opportunity for a fundamentally different approach. “The biggest problem brides have throughout the entire planning process is getting people to understand their vision,” Bal explained. Brides could communicate visually through platforms like Pinterest, but struggled to translate those images into words that vendors, family members and even wedding planners could understand.

Bal’s first breakthrough came in his architectural approach. While many companies were implementing AI through traditional retrieval-augmented generation (RAG) on vector databases — which essentially functions as a search that finds information matching a query — Bal recognized that this wouldn’t capture the nuanced relationships in wedding planning. Instead, he designed a knowledge graph architecture using Neo4j that still leverages RAG, but in a fundamentally different way. Rather than limited search-for-a-match logic, the knowledge graph allows the AI to follow a map to the details that make up the most relevant answer, trace connections between elements, and understand that a preference for lace might indicate a bohemian style or that tropical flowers suggest a beach theme.

Working with a single “but sharp” engineer, Bal introduced Replit for rapid prototyping to start building experiences immediately. “We can’t really wait,” he recalls thinking as
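Bal has not published the Pearl Planner schema, but the graph-traversal idea he describes (lace suggests bohemian, tropical flowers suggest beach) can be sketched with the official neo4j Python driver. The node labels, relationships and Cypher query below are hypothetical illustrations of the approach, not David's Bridal's actual model.

```python
from neo4j import GraphDatabase

# Hypothetical schema: (Bride)-[:SELECTED]->(Element)-[:SUGGESTS]->(Style)<-[:SPECIALIZES_IN]-(Vendor)
QUERY = """
MATCH (b:Bride {id: $bride_id})-[:SELECTED]->(e:Element)-[:SUGGESTS]->(s:Style)
MATCH (s)<-[:SPECIALIZES_IN]-(v:Vendor)
RETURN s.name AS style, v.name AS vendor, count(e) AS supporting_selections
ORDER BY supporting_selections DESC
LIMIT 10
"""

def match_vendors(uri: str, user: str, password: str, bride_id: str) -> list:
    # Follow relationships from selections to implied styles to matching vendors,
    # rather than running a flat similarity search over a vector index.
    driver = GraphDatabase.driver(uri, auth=(user, password))
    try:
        with driver.session() as session:
            return [record.data() for record in session.run(QUERY, bride_id=bride_id)]
    finally:
        driver.close()

if __name__ == "__main__":
    for row in match_vendors("bolt://localhost:7687", "neo4j", "password", "bride-123"):
        print(row)
```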


CTGT wins Best Presentation Style award at VB Transform 2025

San Francisco-based CTGT, a startup focused on making AI more trustworthy through feature-level model customization, won the Best Presentation Style award at VB Transform 2025 in San Francisco. Founded by 23-year-old Cyril Gorlla, the company showcased how its technology helps enterprises overcome AI trust barriers by directly modifying model features instead of using traditional fine-tuning or prompt engineering methods.

During his presentation, Gorlla highlighted the “AI Doom Loop” faced by many enterprises: 54% of businesses cite AI as their highest tech risk according to Deloitte, while McKinsey reports 44% of organizations have experienced negative consequences from AI implementation. “A large part of this conference has been about the AI doom loop,” Gorlla explained during his presentation. “Unfortunately, a lot of these [AI investments] don’t pan out. J&J just canceled hundreds of AI pilots because they didn’t really deliver ROI due to no fundamental trust in these systems.”

Breaking the AI compute wall

CTGT’s approach represents a significant departure from conventional AI customization techniques. The company was founded on research Gorlla conducted while holding an endowed chair at the University of California San Diego. In 2023, Gorlla published a paper at the International Conference on Learning Representations (ICLR) describing a method for evaluating and training AI models that was up to 500 times faster than existing approaches while achieving “three nines” (99.9%) of accuracy.

Rather than relying on brute-force scaling or traditional deep learning methods, CTGT has developed what it calls an “entirely new AI stack” that fundamentally reimagines how neural networks learn. The company’s innovation focuses on understanding and intervening at the feature level of AI models. The company’s approach differs fundamentally from standard interpretability solutions that rely on secondary AI systems for monitoring. Instead, CTGT offers mathematically verifiable interpretability capabilities that eliminate the need for supplemental models, significantly lowering computational requirements in the process.

The technology works by identifying specific latent variables (neurons or directions in the feature space) that drive behaviors like censorship or hallucinations, then dynamically modifying these variables at inference time without altering the model’s weights. This approach allows companies to customize model behavior on the fly without taking systems offline for retraining.

Real-world applications

During his Transform presentation, Gorlla demonstrated two enterprise applications already deployed at a Fortune 20 financial institution:

An email compliance workflow that trains models to understand company-specific acceptable content, allowing analysts to check their emails against compliance standards in real time. The system highlights potentially problematic content and provides specific explanations.

A brand alignment tool that helps marketers develop copy consistent with brand values. The system can suggest personalized advice on why certain phrases work well for a specific brand and how to improve content that doesn’t align.

“If a company has 900 use cases, they no longer have to fine-tune 900 models,” Gorlla explained.
“We’re model-agnostic, so they can just plug us in.”

A real-world example of CTGT’s technology in action was its work with DeepSeek models, where it successfully identified and modified the features responsible for censorship behaviors. By isolating and adjusting these specific activation patterns, CTGT was able to achieve a 100% response rate on sensitive queries without degrading the model’s performance on neutral tasks like reasoning, mathematics and coding.

Images: CTGT presentation at VB Transform 2025

Demonstrated ROI

CTGT’s technology appears to be delivering measurable results. During the Q&A session, Gorlla noted that in the first week of deployment with “one of the leading AI-powered insurers, we saved $5 million of liability from them.” Another early customer, Ebrada Financial, has used CTGT to improve the factual accuracy of customer service chatbots. “Previously, hallucinations and other errors in chatbot responses drove a high volume of requests for live support agents as customers sought to clarify responses,” said Ley Ebrada, Founder and Tax Strategist. “CTGT has helped improve chatbot accuracy tremendously, eliminating most of those agent requests.”

In another case study, CTGT worked with an unnamed Fortune 10 company to enhance on-device AI capabilities in computationally constrained environments. The company also helped a leading computer vision firm achieve 10x faster model performance while maintaining comparable accuracy. The company claims its technology can reduce hallucinations by 80-90% and enable AI deployments with 99.9% reliability, a critical factor for enterprises in regulated industries like healthcare and finance.

From Hyderabad to Silicon Valley

Gorlla’s journey is itself remarkable. Born in Hyderabad, India, he mastered coding at age 11 and was disassembling laptops in high school to squeeze out more performance for training AI models. He came to the United States to study at the University of California, San Diego, where he received the Endowed Chair’s Fellowship. His research there focused on understanding the fundamental mechanisms of how neural networks learn, which led to his ICLR paper and eventually CTGT. In late 2024, Gorlla and co-founder Trevor Tuttle, an expert in hyperscalable ML systems, were selected for Y Combinator’s Fall 2024 batch. The startup has attracted notable investors beyond its institutional backers, including Mark Cuban and other prominent technology leaders drawn to its vision of making AI more efficient and trustworthy.

Funding and future

Founded in mid-2024 by Gorlla and Tuttle, CTGT raised $7.2 million in February 2025 in an oversubscribed seed round led by Gradient, Google’s early-stage AI fund. Other investors include General Catalyst, Y Combinator, Liquid 2, Deepwater, and notable angels such as François Chollet (creator of Keras), Michael Seibel (Y Combinator, co-founder of Twitch), and Paul Graham (Y Combinator).

“CTGT’s launch is timely as the industry struggles with how to scale AI within the current confines of computing limits,” said Darian Shirazi, Managing Partner at Gradient. “CTGT removes those limits, enabling companies to rapidly scale their AI deployments and run advanced AI models on devices like smartphones. This technology is critical to the success of high-stakes AI deployments at large enterprises.”

With AI model size outpacing Moore’s Law and advances in AI training chips, CTGT aims to
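CTGT's exact method is proprietary, but the publicly documented idea it resembles, adjusting a specific latent direction at inference time while leaving the weights untouched, is often called activation steering. A minimal PyTorch sketch with a stand-in layer and a made-up direction vector:

```python
import torch

hidden_size = 768
# A unit vector assumed to have been identified as driving an unwanted behavior
# (here it is random, purely for illustration).
direction = torch.randn(hidden_size)
direction = direction / direction.norm()

def damp_direction(module, inputs, output, strength: float = 1.0):
    """Forward hook: remove the activation's component along `direction`."""
    projection = (output @ direction).unsqueeze(-1) * direction
    return output - strength * projection  # model weights are never modified

layer = torch.nn.Linear(hidden_size, hidden_size)  # stand-in for a transformer block
handle = layer.register_forward_hook(damp_direction)

x = torch.randn(2, hidden_size)
steered = layer(x)   # activations adjusted on the fly at inference time
handle.remove()      # removing the hook restores the original behavior
print(steered.shape)
```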


3 ways to get the most from your data: Scalable AI, intelligent apps, and an open ecosystem

Presented by SAP

Any technology expert worth their salt will say a successful AI strategy depends on reliable data. In fact, a recent survey of technology leaders found that almost 94% are now more focused on data, driven by the increased interest in AI. While this should not be a big surprise, things get trickier when organizations attempt to navigate the landscape of vendors out there. With a quickly changing market, it’s difficult to determine which one can best help harness data for strategic transformation, including AI initiatives. The good news is there are paths to data modernization that companies can take no matter where they are on their journey. Here are three tips for success.

1. Add value to your most important business applications

Enterprises still struggle to unify data across business applications while preserving context and relationships — only 34% of business leaders reported high trust in their data. Without a trusted data foundation, analytics and AI initiatives often stall, as teams spend more time integrating and managing data than leveraging it for business value.

This is where a business data fabric comes in. A business data fabric reduces latency by providing an integrated, semantically rich data layer over fragmented data landscapes. This architecture simplifies data management and makes it easier to access trusted data. By creating a single source of truth across multiple sources and applications, organizations can more easily set up data governance and self-service data access.

SAP provides this with its SAP Business Data Cloud, a fully managed SaaS solution that unifies and governs SAP data and third-party data. SAP Business Data Cloud simplifies customers’ complex data landscapes with its zero-copy share approach, which takes the heavy lifting out of harmonizing, federating, and replicating data. This frees up time for organizations to focus on strategic and transformational data projects, like building intelligent applications or providing high-quality data for AI initiatives. Moreover, SAP Business Data Cloud sits within and powers the SAP Business Suite, an integrated set of business applications, data, and AI. This helps customers further establish a harmonized data foundation to connect insights across business applications and processes, and fuel AI initiatives.

2. Move from transactional to intelligent applications

With a harmonized data layer in place, companies can start using or building their own intelligent applications. According to Gartner, “while applications can behave intelligently, intelligent applications are intelligent.” Moving beyond rule-based, prescriptive approaches, intelligent applications are self-learning and self-adapting applications that can ingest and process data from any source. These applications democratize data, taking it out of the realm of data scientists and giving everyone access, from a human resources professional to the chief financial officer.

But with various vendors offering these applications, how to choose? Intelligence alone won’t drive business results. It’s important to select a company that provides these applications directly within the context of core business processes like supply chain or procurement management, so people use that intelligence in their daily work. SAP’s unique approach to intelligent applications, which are part of its business data cloud, is grounded in 50-plus years honing its business process, application, and industry expertise.
The applications are built from data products, easily consumable information taken from its vast array of solutions and curated to solve specific business problems. Enriched with AI technologies such as knowledge graph (which pinpoints the relationships amongst data points), SAP Intelligent Applications come out of the box with modeling, reporting, predictive, and other capabilities. More significantly, they are embedded across SAP’s application landscape, including ERP, human resources, procurement, supply chain, finance, and other solutions. One example is the newly announced People Intelligence package within SAP Business Data Cloud, which connects and transforms people and skills data from the SAP SuccessFactors Human Capital Management suite into readily available workforce insights and AI-driven recommendations.

3. Don’t go it alone: an open ecosystem is the key to success

No one vendor can provide all industry and business data and expertise. That’s why it’s important to select one with an open ecosystem approach. It’s critical for several reasons. First, to scale the solution and make it available on multiple cloud platforms. Second, to provide access to a wide variety of domain expertise. And third, to build an ecosystem of intelligent applications enriched by industry leaders that specialize in specific data sets, like risk assessment.

Those three reasons encapsulate SAP’s partner strategy. For example, SAP recently announced a partnership with Adobe to build an intelligent application on SAP Business Data Cloud that combines supply chain, financial, and Adobe digital experience data to generate deep insights for joint customers. Additionally, Moody’s will work with SAP to help customers and partners build intelligent applications using Moody’s risk datasets, integrated with SAP’s accounts receivable data, to boost cash flow and default predictions.

Moving forward, organizations will need to treat data and AI as inseparable. Successful AI initiatives rely on good-quality, relevant data taken from business applications and beyond. A recent GigaOm study found that companies that treat data as a strategic asset see a 28% higher rate of AI adoption. With SAP Business Data Cloud, organizations gain the missing puzzle piece to help harmonize and streamline data from any source, enabling AI projects that more effectively drive business outcomes.

Jan Bungert is Global Chief Revenue Officer, SAP Business Data Cloud and AI at SAP.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact [email protected].


The hidden scaling cliff that’s about to break your agent rollouts

Enterprises that want to build and scale agents also need to embrace another reality: agents aren’t built like other software.

Agents are “categorically different” in how they’re built, how they operate, and how they’re improved, according to Writer CEO and co-founder May Habib. This means ditching the traditional software development life cycle when dealing with adaptive systems. “Agents don’t reliably follow rules,” Habib said on Wednesday while on stage at VB Transform. “They are outcome-driven. They interpret. They adapt. And the behavior really only emerges in real-world environments.”

Knowing what works — and what doesn’t work — comes from Habib’s experience helping hundreds of enterprise clients build and scale enterprise-grade agents. According to Habib, more than 350 of the Fortune 1000 are Writer customers, and more than half of the Fortune 500 will be scaling agents with Writer by the end of 2025.

Using non-deterministic tech to produce powerful outputs can even be “really nightmarish,” Habib said — especially when trying to scale agents systemically. Even if enterprise teams can spin up agents without product managers and designers, Habib thinks a “PM mindset” is still needed for collaborating, building, iterating and maintaining agents. “Unfortunately or fortunately, depending on your perspective, IT is going to be left holding the bag if they don’t lead their business counterparts into that new way of building.”

Why goal-based agents are the right approach

One of the shifts in thinking includes understanding the outcome-based nature of agents. For example, she said that many customers request agents to assist their legal teams in reviewing or redlining contracts. But that’s too open-ended. Instead, a goal-oriented approach means designing an agent to reduce the time spent reviewing and redlining contracts.

“In the traditional software development life cycle, you are designing for a deterministic set of very predictable steps,” Habib said. “It’s input in, input out in a more deterministic way. But with agents, you’re seeking to shape agentic behavior. So you are seeking less of a controlled flow and much more to give context and guide decision-making by the agent.”

Another difference is building a blueprint for agents that instructs them with business logic, rather than providing them with workflows to follow. This includes designing reasoning loops and collaborating with subject experts to map processes that promote desired behaviors.

While there’s a lot of talk about scaling agents, Writer is still helping most clients with building them one at a time. That’s because it’s important first to answer questions about who owns and audits the agent, who makes sure it stays relevant, and who checks that it is still producing the desired outcomes. “There is a scaling cliff that folks get to very, very quickly without a new approach to building and scaling agents,” Habib said. “There is a cliff that folks are going to get to when their organization’s ability to manage agents responsibly really outstrips the pace of development happening department by department.”

QA for agents vs software

Quality assurance is also different for agents. Instead of an objective checklist, agentic evaluation includes accounting for non-binary behavior and assessing how agents act in real-world situations.
That’s because failure isn’t always obvious — and not as black and white as checking if something broke. Instead, Habib said it’s better to check whether an agent behaved well, asking if fail-safes worked and evaluating outcomes and intent: “The goal here isn’t perfection. It is behavioral confidence, because there is a lot of subjectivity in this here.”

Businesses that don’t understand the importance of iteration end up playing “a constant game of tennis that just wears down each side until they don’t want to play anymore,” Habib said. It’s also important for teams to be okay with agents being less than perfect, focusing instead on “launching them safely and running fast and iterating over and over and over.”

Despite the challenges, there are examples of AI agents already helping bring in new revenue for enterprise businesses. For example, Habib mentioned a major bank that collaborated with Writer to develop an agent-based system, resulting in a new upsell pipeline worth $600 million by onboarding new customers into multiple product lines.

New version controls for AI agents

Agentic maintenance is also different. Traditional software maintenance involves checking the code when something breaks, but Habib said AI agents require a new kind of version control for everything that can shape behavior. It also requires proper governance and ensuring that agents remain useful over time, rather than incurring unnecessary costs.

Because models don’t map cleanly to AI agents, Habib said maintenance includes checking prompts, model settings, tool schemas and memory configuration. It also means fully tracing executions across inputs, outputs, reasoning steps, tool calls and human interactions.

“You can update a [large language model] LLM prompt and watch the agent behave completely differently even though nothing in the git history actually changed,” Habib said. “The model links shift, retrieval indexes get updated, tool APIs evolve and suddenly the same prompt does not behave as expected…It can feel like we are debugging ghosts.”
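Writer has not shared how it implements this, but the underlying idea, snapshotting everything that can shape agent behavior into one versioned artifact so a behavior change is always traceable to a config change, can be sketched in plain Python. The field names below are illustrative, not Writer's schema.

```python
# Minimal sketch of a "new kind of version control" for agents: hash the prompt,
# model settings, tool schemas and memory config together, so behavior changes
# are traceable even when nothing in the application's git history changed.
# Field names are illustrative, not Writer's schema.

import hashlib
import json
from datetime import datetime, timezone

def snapshot_agent_config(prompt: str, model_settings: dict,
                          tool_schemas: list, memory_config: dict) -> dict:
    manifest = {
        "prompt": prompt,
        "model_settings": model_settings,
        "tool_schemas": tool_schemas,
        "memory_config": memory_config,
    }
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return {
        "version_hash": hashlib.sha256(canonical).hexdigest()[:12],
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "manifest": manifest,
    }

if __name__ == "__main__":
    snapshot = snapshot_agent_config(
        prompt="Reduce the time spent reviewing and redlining contracts.",
        model_settings={"model": "example-llm", "temperature": 0.2},
        tool_schemas=[{"name": "fetch_contract", "args": ["contract_id"]}],
        memory_config={"window": 20},
    )
    # Any edit to the prompt, settings, tools or memory changes the hash.
    print(snapshot["version_hash"])
```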


Model minimalism: The new AI strategy saving companies millions

This article is part of VentureBeat’s special issue, “The Real Cost of AI: Performance, Efficiency and ROI at Scale.” Read more from this special issue.

The advent of large language models (LLMs) has made it easier for enterprises to envision the kinds of projects they can undertake, leading to a surge in pilot programs now transitioning to deployment. However, as these projects gained momentum, enterprises realized that the earlier LLMs they had used were unwieldy and, worse, expensive.

Enter small language models and distillation. Models like Google’s Gemma family, Microsoft’s Phi and Mistral’s Small 3.1 allowed businesses to choose fast, accurate models that work for specific tasks. Enterprises can opt for a smaller model for particular use cases, allowing them to lower the cost of running their AI applications and potentially achieve a better return on investment.

LinkedIn distinguished engineer Karthik Ramgopal told VentureBeat that companies opt for smaller models for a few reasons. “Smaller models require less compute, memory and faster inference times, which translates directly into lower infrastructure OPEX (operational expenditures) and CAPEX (capital expenditures) given GPU costs, availability and power requirements,” Ramgopal said. “Task-specific models have a narrower scope, making their behavior more aligned and maintainable over time without complex prompt engineering.”

Model developers price their small models accordingly. OpenAI’s o4-mini costs $1.10 per million tokens for inputs and $4.40 per million tokens for outputs, compared to the full o3 version at $10 for inputs and $40 for outputs.

Enterprises today have a larger pool of small models, task-specific models and distilled models to choose from. These days, most flagship models offer a range of sizes. For example, the Claude family of models from Anthropic comprises Claude Opus, the largest model; Claude Sonnet, the all-purpose model; and Claude Haiku, the smallest version. These models are compact enough to operate on portable devices, such as laptops or mobile phones.

The savings question

When discussing return on investment, though, the question is always: What does ROI look like? Should it be a return on the costs incurred or the time savings that ultimately means dollars saved down the line? Experts VentureBeat spoke to said ROI can be difficult to judge because some companies believe they’ve already reached ROI by cutting time spent on a task, while others are waiting for actual dollars saved or more business brought in to say if AI investments have actually worked.

Normally, enterprises calculate ROI by a simple formula, as described by Cognizant chief technologist Ravi Naarla in a post: ROI = (Benefits - Costs) / Costs. But with AI programs, the benefits are not immediately apparent. He suggests enterprises identify the benefits they expect to achieve, estimate these based on historical data, be realistic about the overall cost of AI, including hiring, implementation and maintenance, and understand you have to be in it for the long haul.

With small models, experts argue that these reduce implementation and maintenance costs, especially when fine-tuning models to provide them with more context for your enterprise. Arijit Sengupta, founder and CEO of Aible, said that how people bring context to the models dictates how much cost savings they can get. For individuals who require additional context for prompts, such as lengthy and complex instructions, this can result in higher token costs.
“You have to give models context one way or the other; there is no free lunch. But with large models, that is usually done by putting it in the prompt,” he said. “Think of fine-tuning and post-training as an alternative way of giving models context. I might incur $100 of post-training costs, but it’s not astronomical.”

Sengupta said they’ve seen about 100X cost reductions just from post-training alone, often dropping model use cost “from single-digit millions to something like $30,000.” He did point out that this number includes software operating expenses and the ongoing cost of the model and vector databases. “In terms of maintenance cost, if you do it manually with human experts, it can be expensive to maintain because small models need to be post-trained to produce results comparable to large models,” he said.

Experiments Aible conducted showed that a task-specific, fine-tuned model performs well for some use cases, just like LLMs, making the case that deploying several use-case-specific models rather than large ones to do everything is more cost-effective. The company compared a post-trained version of Llama-3.3-70B-Instruct to a smaller 8B parameter option of the same model. The 70B model, post-trained for $11.30, was 84% accurate in automated evaluations and 92% in manual evaluations. Once fine-tuned at a cost of $4.58, the 8B model achieved 82% accuracy in manual assessment, which would be suitable for smaller, more targeted use cases.

Cost factors fit for purpose

Right-sizing models does not have to come at the cost of performance. These days, organizations understand that model choice doesn’t just mean choosing between GPT-4o or Llama-3.1; it’s knowing that some use cases, like summarization or code generation, are better served by a small model. Daniel Hoske, chief technology officer at contact center AI products provider Cresta, said starting development with LLMs informs potential cost savings better. “You should start with the biggest model to see if what you’re envisioning even works at all, because if it doesn’t work with the biggest model, it doesn’t mean it would with smaller models,” he said.

Ramgopal said LinkedIn follows a similar pattern because prototyping is the only way these issues can start to emerge. “Our typical approach for agentic use cases begins with general-purpose LLMs, as their broad generalization ability allows us to rapidly prototype, validate hypotheses and assess product-market fit,” LinkedIn’s Ramgopal said. “As the product matures and we encounter constraints around quality, cost or latency, we transition to more customized solutions.”

In the experimentation phase, organizations can determine what they value most from their AI applications. Figuring this out enables developers to plan better what they want to save on and select the model size that best suits their purpose and budget. The experts cautioned that while it is important to build with
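To make the per-token prices quoted earlier concrete ($1.10 and $4.40 per million input and output tokens for o4-mini, versus $10 and $40 for o3), here is a rough cost sketch for a hypothetical monthly workload; the token volumes are assumptions for illustration only.

```python
# Rough arithmetic on the per-token prices quoted above: o4-mini at $1.10 per
# million input tokens and $4.40 per million output tokens, versus o3 at $10
# and $40. The monthly token volumes are assumptions for illustration only.

PRICES = {               # $ per million tokens (input, output), as cited
    "o4-mini": (1.10, 4.40),
    "o3": (10.00, 40.00),
}

def monthly_cost(model: str, input_tokens: float, output_tokens: float) -> float:
    price_in, price_out = PRICES[model]
    return (input_tokens / 1e6) * price_in + (output_tokens / 1e6) * price_out

if __name__ == "__main__":
    # Hypothetical workload: 2B input tokens and 500M output tokens per month
    for model in PRICES:
        print(f"{model}: ${monthly_cost(model, 2e9, 5e8):,.0f}/month")
    # Prints roughly $4,400/month for o4-mini versus $40,000/month for o3.
```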


Get paid faster: How Intuit’s new AI agents help businesses get funds up to 5 days faster and save 12 hours a month with autonomous workflows

Intuit has been on a journey over the last several years with generative AI, incorporating the technology as part of its services at QuickBooks, Credit Karma, TurboTax and Mailchimp. Today the company is taking the next step with a series of AI agents that go beyond that to transform how small and mid-market businesses operate. These new agents work as a virtual team that automates workflows and provides real-time business insights. They include capabilities for payments, accounts and finance that will directly impact business operations. According to Intuit, customers save up to 12 hours per month and, on average, will get paid up to five days faster thanks to the new agents.

“If you look at the trajectory of our AI experiences at Intuit in the early years, AI was built into the background, and with Intuit Assist, you saw a shift to provide information back to the customer,” Ashok Srivastava, chief AI and data officer at Intuit, told VentureBeat. “Now what you’re seeing is a complete redesign. The agents are actually doing work on behalf of the customer, with their permission.”

Technical architecture: From starter kit to production agents

Intuit has been working on the path from assistants to agentic AI for some time. In September 2024, the company detailed its plans to use AI to automate complex tasks. It’s an approach built firmly on the company’s generative AI operating system (GenOS) platform, the foundation of its AI efforts. Earlier this month, Intuit announced a series of efforts that further extend its capabilities. The company has developed its own prompt optimization service that will optimize queries for any large language model (LLM). It has also developed what it calls an intelligent data cognition layer for enterprise data that can understand different data sources required for enterprise workflows. Going a step further, Intuit developed an agent starter kit that builds on the company’s technical foundation to enable agentic AI development.

The agent portfolio: From cash flow to customer management

With the technical foundation in place, including agent starter kits, Intuit has built out a series of new agents that help business owners get things done. Intuit’s agent suite demonstrates the technical sophistication required to move from predictive AI to autonomous workflow execution. Each agent coordinates prediction, natural language processing (NLP) and autonomous decision-making within complete business processes. They include:

Payments agent: Autonomously optimizes cash flow by predicting late payments, generating invoices and executing follow-up sequences.

Accounting agent: Represents Intuit’s evolution from rules-based systems to autonomous bookkeeping. The agent now autonomously handles transaction categorization, reconciliation and workflow completion, delivering cleaner and more accurate books.

Finance agent: Automates strategic analysis traditionally requiring dedicated business intelligence (BI) tools and human analysts. Provides key performance indicator (KPI) analysis, scenario planning and forecasting based on how the company is doing against peer benchmarks, while autonomously generating growth recommendations.

Intuit is also building out customer hub agents that will help with customer acquisition tasks. Payroll processing as well as project management efforts are also part of the future release plans.
Beyond conversational UI: Task-oriented agent design

The new agents mark an evolution in how AI is presented to users. Intuit’s interface redesign reveals important user experience principles for enterprise agent deployment. Rather than bolting AI capabilities onto existing software, the company fundamentally restructured the QuickBooks user experience for AI. “The user interface now is really oriented around the business tasks that need to be done,” Srivastava explained. “It allows for real time insights and recommendations to come to the user directly.”

This task-centric approach contrasts with the chat-based interfaces dominating current enterprise AI tools. Instead of requiring users to learn prompting strategies or navigate conversational flows, the agents operate within existing business workflows. The system includes what Intuit calls a “business feed” that contextually surfaces agent actions and recommendations.

Trust and verification: The closed-loop challenge

One of the most technically significant aspects of Intuit’s implementation addresses a critical challenge in autonomous agent deployment: Verification and trust. Enterprise AI teams often struggle with the black box problem — how do you ensure AI agents are performing correctly when they operate autonomously?

“In order to build trust with artificial intelligence systems, we need to provide proof points back to the customer that what they think is happening is actually happening,” Srivastava emphasized. “That closed loop is very, very important.”

Intuit’s solution involves building verification capabilities directly into GenOS, allowing the system to provide evidence of agent actions and outcomes. For the payments agent, this means showing users that invoices were sent, tracking delivery and demonstrating the improvement in payment cycles that results from the agent’s actions. This verification approach offers a template for enterprise teams deploying autonomous agents in high-stakes business processes. Rather than asking users to trust AI outputs, the system provides auditable trails and measurable outcomes.

What this means for enterprises looking to get into agentic AI

Intuit’s evolution offers a concrete roadmap for enterprise teams planning autonomous AI implementations:

Focus on workflow completion, not conversation: Target specific business processes for end-to-end automation rather than building general-purpose chat interfaces.

Build agent orchestration infrastructure: Invest in platforms that coordinate prediction, language processing and autonomous execution within unified workflows, not isolated AI tools.

Design verification systems upfront: Include comprehensive audit trails, outcome tracking and user notifications as core capabilities rather than afterthoughts.

Map workflows before building technology: Use customer advisory programs to define agent capabilities based on actual operational challenges.

Plan for interface redesign: Optimize UX for agent-driven workflows rather than traditional software navigation patterns.

“As large language models become commoditized, the experiences that are built upon them become much more important,” Srivastava said.
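Intuit builds this verification into GenOS, which is not public; the sketch below only illustrates the closed-loop idea of pairing every autonomous action with an auditable record and a later observed outcome. The field names are assumptions, not Intuit's schema.

```python
# Sketch of a closed-loop audit trail: every autonomous agent action gets an
# auditable record, and the observed outcome is attached later so users see
# proof that what they think happened actually happened. Field names are
# illustrative assumptions, not Intuit's GenOS schema.

import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = []

def record_action(agent: str, action: str, details: dict) -> str:
    entry_id = str(uuid.uuid4())
    AUDIT_LOG.append({
        "id": entry_id,
        "agent": agent,
        "action": action,
        "details": details,
        "at": datetime.now(timezone.utc).isoformat(),
        "outcome": None,  # filled in once the result is observed
    })
    return entry_id

def record_outcome(entry_id: str, outcome: dict) -> None:
    for entry in AUDIT_LOG:
        if entry["id"] == entry_id:
            entry["outcome"] = outcome  # closes the loop for the user-facing feed

if __name__ == "__main__":
    eid = record_action("payments-agent", "send_invoice",
                        {"invoice_id": "INV-1042", "customer": "Acme Co"})
    record_outcome(eid, {"delivered": True, "paid_after_days": 9})
    print(json.dumps(AUDIT_LOG, indent=2))
```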


Scaling smarter: How enterprise IT teams can right-size their compute for AI

This article is part of VentureBeat’s special issue, “The Real Cost of AI: Performance, Efficiency and ROI at Scale.”

AI pilots rarely start with a deep discussion of infrastructure and hardware. But seasoned scalers warn that deploying high-value production workloads will not end happily without strategic, ongoing focus on this key enterprise-grade foundation. The good news: There is growing recognition among enterprises of the pivotal role infrastructure plays in enabling and expanding generative, agentic and other intelligent applications that drive revenue, cost reduction and efficiency gains.

According to IDC, organizations in 2025 have boosted spending on compute and storage hardware infrastructure for AI deployments by 97% compared to the same period a year before. Researchers predict global investment in the space will surge from $150 billion today to $200 billion by 2028. But the competitive edge “doesn’t go to those who spend the most,” John Thompson, best-selling AI author and head of the gen AI Advisory practice at The Hackett Group, said in an interview with VentureBeat, “but to those who scale most intelligently.”

Ignore infrastructure and hardware at your own peril

Other experts agree, saying the chances are slim to none that enterprises can expand and industrialize AI workloads without careful planning and right-sizing of the finely orchestrated mesh of processors and accelerators, as well as upgraded power and cooling systems. These purpose-built hardware components provide the speed, availability, flexibility and scalability required to handle unprecedented data volume, movement and velocity from edge to on-prem to cloud. Study after study identifies infrastructure-related issues, such as performance bottlenecks, mismatched hardware and poor legacy integration, alongside data problems, as major pilot killers. Exploding interest and investment in agentic AI further raise the technological, competitive and financial stakes.

Among tech companies, a bellwether for the entire industry, nearly 50% have agentic AI projects underway; the rest expect to within 24 months. They are allocating half or more of their current AI budgets to agentic AI, and many plan further increases this year. (Good thing, because these complex autonomous systems require costly, scarce GPUs and TPUs to operate independently and in real time across multiple platforms.) From their experience with pilots, technology and business leaders now understand that the demanding requirements of AI workloads — high-speed processing, networking, storage, orchestration and immense electrical power — are unlike anything they’ve ever built at scale. For many enterprises, the pressing question is: “Are we ready to do this?” The honest answer will be: Not without careful ongoing analysis, planning and, likely, non-trivial IT upgrades.

They’ve scaled the AI mountain — listen

Like snowflakes and children, AI projects are similar yet unique. Demands differ wildly between AI functions and types (training versus inference, machine learning versus reinforcement learning). So, too, do wide variances exist in business goals, budgets, technology debt, vendor lock-in and available skills and capabilities. Predictably, then, there is no single “best” approach. Depending on circumstances, you’ll scale AI infrastructure up (vertically, upgrading existing nodes with more powerful hardware), out (horizontally, adding nodes to absorb increased load) or in a hybrid of both.
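To make the scale-up versus scale-out trade-off above concrete, here is a deliberately simplified heuristic in Python. The thresholds and inputs are invented for illustration only; they are not a sizing methodology from the article or any vendor.

```python
# Toy heuristic for the scale-up vs. scale-out decision: memory-bound workloads
# push toward bigger nodes (vertical), throughput-bound and fast-growing
# workloads push toward more nodes (horizontal). Thresholds are illustrative.

def scaling_recommendation(peak_gpu_util: float,
                           workload_growth_rate: float,
                           largest_model_fits_one_node: bool) -> str:
    """Return a rough scaling direction.

    peak_gpu_util: observed peak GPU utilization (0.0 - 1.0)
    workload_growth_rate: expected yearly growth in demand (e.g. 0.5 = 50%)
    largest_model_fits_one_node: whether the biggest model fits on a single node
    """
    if not largest_model_fits_one_node:
        # Memory-bound: more accelerator memory per node comes first.
        return "scale up (vertical): upgrade per-node accelerators/memory"
    if peak_gpu_util > 0.8 and workload_growth_rate > 0.5:
        # Throughput-bound and growing fast: add nodes behind an orchestrator.
        return "scale out (horizontal): add nodes and load-balance"
    if peak_gpu_util > 0.8:
        return "hybrid: modest node upgrades plus selective scale-out"
    return "hold: current capacity is not the bottleneck yet"

print(scaling_recommendation(0.9, 0.6, True))
```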
Nonetheless, these early-chapter mindsets, principles, recommendations, practices, real-life examples and cost-saving hacks can help keep your efforts aimed and moving in the right direction. It’s a sprawling challenge, with many layers: data, software, networking, security and storage. We’ll keep the focus high-level and include links to helpful, related drill-downs, such as those above.

Modernize your vision of AI infrastructure

The biggest mindset shift is adopting a new conception of AI: not as a standalone or siloed app, but as a foundational capability or platform embedded across business processes, workflows and tools. To make this happen, infrastructure must balance two important roles: providing a stable, secure and compliant enterprise foundation, while making it easy to quickly and reliably field purpose-built AI workloads and applications, often with tailored hardware optimized for specific domains like natural language processing (NLP) and reinforcement learning.

In essence, it’s a major role reversal, said Deb Golden, Deloitte’s chief innovation officer. “AI must be treated like an operating system, with infrastructure that adapts to it, not the other way around.” She continued: “The future isn’t just about sophisticated models and algorithms. Hardware is no longer passive. [So from now on], infrastructure is fundamentally about orchestrating intelligent hardware as the operating system for AI.” Operating this way at scale and without waste requires a “fluid fabric,” Golden’s term for dynamic allocation that adapts in real time across every platform, from individual silicon chips up to complete workloads. The benefits can be huge: Her team found that this approach can cut costs by 30 to 40% and latency by 15 to 20%. “If your AI isn’t breathing with the workload, it’s suffocating,” she said.

It’s a demanding challenge. Such AI infrastructure must be multi-tier, cloud-native, open, real-time, dynamic, flexible and modular. It needs to be highly and intelligently orchestrated across edge and mobile devices, on-premises data centers, AI PCs and workstations, and hybrid and public cloud environments. What sounds like buzzword bingo represents a new epoch in the ongoing evolution of enterprise IT infrastructure, redefined and optimized for AI. The main elements are familiar: hybrid environments and a fast-growing universe of increasingly specialized cloud-based services, frameworks and platforms.

In this new chapter, embracing architectural modularity is key to long-term success, said Ken Englund, EY Americas technology growth leader. “Your ability to integrate different tools, agents, solutions and platforms will be critical. Modularity creates flexibility in your frameworks and architectures.” Decoupling system components helps future-proof in several ways, including vendor and technology agnosticism, plug-and-play model enhancement, and continuous innovation and scalability.

Infrastructure investment for scaling AI must balance prudence and power

Enterprise technology teams looking to expand their use of enterprise AI face an updated Goldilocks challenge: finding the “just right” level of investment in new, modern infrastructure and hardware that can handle the fast-growing, shifting demands of distributed, everywhere AI. Under-invest or stick with current processing capabilities? You’re looking at show-stopping performance bottlenecks and subpar business outcomes that can tank entire projects (and careers). Over-invest in shiny new
