Announcing The Forrester Wave™: Modern Application Development Services, Q1 2025

MAD Services Deliver Cool New Products While Transforming Your Development Capabilities

Modern application development (MAD) services represent the next wave in custom application development services. Emerging from the convergence of current and past services such as application development management services (ADMS), digital transformation services (DTS), digital product engineering services (DPES), and broader application modernization services (AMS), MAD services are now used by leading organizations, and a growing number of CIOs are showing strong interest in these offerings (see figure below).

What sets MAD services apart? It’s their unique ability not only to support clients in delivering modern apps using the latest technologies and development practices but also to transform and modernize clients’ custom development capabilities. The Venn diagram illustrates the context for MAD services and their foundational services, though it doesn’t capture the market’s multibillion-dollar scale, which is expected to keep growing.

We just published The Forrester Wave™: Modern Application Development Services, Q1 2025, which analyzes and compares 13 medium and large market players out of more than 50 providers that offer MAD services: Accenture, Capgemini, CI&T, Cognizant, EPAM, Globant, HCLTech, Infosys, LTIMindtree, NTT DATA, Softtek, Tata Consultancy Services, and Thoughtworks.

Why These Players And Not Others?

The MAD services market is highly competitive, and Forrester clients can learn more about the broader landscape and discover a wider group of vendors in The Modern Application Development Services Landscape, Q3 2024. This most recent MAD services Wave focuses on medium and large vendors, in contrast to our previous Wave evaluation of the same market, which emphasized smaller ones. But not every company from the landscape report met the stringent criteria for inclusion in the Wave, which were:

- Significant peer recognition. These were the providers most frequently cited in client bids.
- Forrester mindshare. These were the service providers referenced most often during briefings, inquiries, or research projects over the last year.
- MAD capabilities. The Wave’s vendors offer comprehensive and differentiating sets of MAD capabilities or, in Forrester’s view, unique capabilities that warrant inclusion.
- Global MAD services revenue of at least US$450 million. The included vendors generate at least US$450 million in global MAD services revenue across at least two of the North America, LATAM, EMEA, and APAC regions combined.

What Distinguishes The Leaders, Strong Performers, And Contenders?

Our Wave methodology categorizes vendors into three groups (Leaders, Strong Performers, and Contenders) based on a range of services that we evaluated: agile, DevOps, microservices architecture, cloud services, and more advanced services such as site reliability engineering, project-to-product capabilities, AI and generative AI architecture services, and the testing and development of AI-infused applications. Showing differentiation in all these services was key to our evaluation. Reference clients, case studies, and other evidence also played a critical role in our analysis. After all, it’s the provider’s ability to enhance your team’s skills in new technologies and practices that truly differentiates MAD services from traditional ADMS or AMS services.

We encourage readers not to dismiss any provider without first examining the detailed descriptions of strategy, capabilities, and client feedback in our Wave report. Download the accompanying Excel file for a breakdown of the questions, scoring, and criteria grading. For more information, feedback, or questions, email me at [email protected], or if you’re a Forrester client, schedule a guidance session or inquiry. I’m here to assist! source


What is Grok AI? Is It Worth the Hype?

Amid a sea of generative AI products, Grok AI sets itself apart with a bold and irreverent personality. Developed by Elon Musk’s xAI, Grok’s unconventional tone may make it less suitable for business use compared with its competitors. However, Grok still holds its own among the leading foundation models of today, boasting strong test performance and competitive speed.

What is Grok AI?

Grok AI is a large language model designed for generating, changing, or analyzing text. It also offers advanced generative AI capabilities, including internet search functionality and image creation, making it a versatile tool for various tasks. Unlike standalone AI tools, Grok resides within X (formerly Twitter). To access it, users must log into X and purchase a subscription to Grok. This integration aligns with Musk’s vision of transforming the social media platform into an “everything app,” where tools like Grok complement the platform’s ecosystem of services. Additionally, Grok’s development is part of xAI’s larger mission to build AI systems with a distinct personality and edge, reflecting Musk’s intent to differentiate Grok from its more conventional competitors.

What are the key features of Grok AI?

“Grok is designed to answer questions with a bit of wit and has a rebellious streak,” the Grok team wrote in a blog post in November 2023. “A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform.
It will also answer spicy questions that are rejected by most other AI systems.”

Web search and citations

Grok leverages X to deliver real-time answers about current events. Answers to questions related to the news or current events will show links to the source post or website next to the chat window.

Images

Grok generates images using xAI’s Aurora, a separate image generation model. Aurora is autoregressive: it uses a statistical technique to predict what content is most likely to come next in a sequence. Unlike other AI models, Grok will create photorealistic images — a controversial capability, since it can be used to create deepfakes. Grok accepts prompts including copyrighted characters or politically inflammatory material. X users might see the “draw me” feature, in which Grok will generate images based on information in that user’s profile. Facebook similarly introduced AI-generated images into users’ feeds recently, including images putting the user’s likeness in fantastical situations.

API

The API for Grok supports function calling, a 128K context length, and system prompts. It interoperates with the OpenAI and Anthropic software development kits.

Who developed Grok AI?

xAI developed Grok. Musk founded and leads xAI; Grok was publicly announced in November 2023.

How does Grok AI compare to other AI chatbots like ChatGPT?

A major difference between Grok and other generative AI products, like ChatGPT or Llama, is that Grok operates entirely within the X social media platform. Grok will answer questions related to productivity, analyze text, and solve math and coding problems. It can also perform many of the other tasks generative AI can do for business. However, its data remains within the X platform. xAI said the latest version of Grok, Grok 2, scored 87.5% on the MMLU benchmark.
MMLU measures the ability to correctly answer natural language questions in academic disciplines including philosophy and mathematics. OpenAI said its o1 scores 92.3%. Anthropic said its Claude 3 Opus scored 86.8%.

Is Grok AI free to use?

Grok AI is not free to use. It requires a subscription to X Premium or Premium+. Premium costs $8/month or $84/year on the web. Premium+ costs $22/month or $229/year on the web. The Grok enterprise API costs $2 per 1 million input tokens and $10 per 1 million output tokens.

What are the privacy concerns associated with Grok AI?

Grok’s close association with X has raised concerns about the privacy of personal data on the platform, which may be fed into the AI. X posts are used to train Grok by default.

What is the controversy around Grok AI?

Musk’s control of Grok and X’s trend toward unrestricted — including potentially offensive — content has led some to be wary of using Grok. xAI describes Grok as providing “unfiltered answers.” During the November 2023 announcement of the model, xAI said: “Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!” In September 2024, the National Association of Secretaries of State alleged Grok contributed to election misinformation regarding the US presidential race. In response, X changed Grok’s responses so that questions about voting were redirected to a nonpartisan site, CanIVote.org.

Is Grok worth the hype?

We find it difficult to recommend Grok for business use cases. Its irreverent tone may make the content it produces inappropriate for general audiences, while heavy reliance on social media for information may make its answers potentially unreliable. Additionally, Grok is not accessible to people without an X account.
However, Grok’s irreverent tone may work for some content and audiences, and its placement on X may meet users where they already are. As noted above, Grok scores higher than Anthropic’s Claude and some versions of OpenAI’s GPT-4 on certain benchmarks. In particular, it holds its own in general knowledge and mathematics answers. source
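The SDK interoperability described above works because the API follows the familiar OpenAI wire format. The sketch below assembles a chat-completion request for an OpenAI-style endpoint; the base URL, model name, and placeholder key are assumptions for illustration, not an official xAI reference.

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Assemble an OpenAI-style chat-completion request (URL, headers, body)."""
    url = f"{base_url}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Hypothetical values: substitute the real endpoint, key, and model name.
url, headers, body = build_chat_request(
    "https://api.x.ai/v1", "YOUR_API_KEY", "grok-2-latest", "Hello, Grok"
)
# Sending it is then one HTTP POST, e.g. with urllib.request or requests.
```

Because the endpoint mirrors the OpenAI wire format, the official OpenAI or Anthropic SDKs can also be pointed at it by overriding their base URL, which is what the SDK interoperability claim amounts to in practice.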


Veeam CIO Nate Kurtz: When data resilience meets AI strategy

00:00 Hello. Good afternoon, and welcome to CIO Leadership Live. I’m your host, Maryfran Johnson, CEO of Maryfran Johnson Media and the former editor in chief of CIO magazine. Since November 2017, this video show and audio podcast has been produced by the editors of CIO.com and the digital media division of Foundry, which is an IDG company. Our sponsor for this episode is Veeam Software, a global market leader in data resilience. Veeam delivers a wide range of solutions for data backup and recovery, data portability and security, and data intelligence. Veeam leaders believe that every business should be able to bounce forward after a data disruption with confidence and control, wherever and whenever they need it. Headquartered outside Seattle, Washington, with offices in more than 30 countries, Veeam protects more than 550,000 customers worldwide, including 74% of the Global 2000. To learn more, visit the veeam.com website.

Now onward to today’s guest. I’m joined today by Veeam Software CIO Nathan Kurtz. Nate joined Veeam in June of 2022. He leads the corporate technology team and is responsible for the company’s global business systems and all of its internal technologies. Prior to Veeam, he led the Technology Services team at F5 Networks, where he spent more than 10 years leading it through periods of significant growth and helping to transform the overall customer and employee experience at this rapidly growing tech provider. Before F5 Networks, Nate spent 11-plus years at Arthur Andersen and KPMG as part of the global technology and telecommunication practices, where he led both sales and delivery teams for many of their high-tech customers, including Microsoft and Amazon. Nate, welcome. It’s wonderful to have you here today.

Thanks, Maryfran, really nice to be here as well. Thanks for inviting me.

Okay, well, I’m going to take full advantage of your expertise with my first question here about data resiliency.
Now, what does that term mean to your CIO customers today, versus the data backup and data storage needs, the kind of data housekeeping that we’ve all heard about for many, many years in the industry? When you talk about data resiliency, what exactly are we talking about?

Yeah, so I think, you know, in my career in tech, if you were to go back 10 or 15 years, people tended to talk about, like you said, data backup and recovery. It was pretty basic, kind of cut and dry. I think, though, as this space has evolved and we start talking about data resiliency, it’s become more and more important as time has gone on. As a CIO, we can recover applications and systems pretty quickly. What we can never recover or recreate is our data. And so this is the one area, this is truly the keys to what we are responsible for: recovering that data, in a very complex environment, too. The world is becoming more complex. The amount of data that we have as a world is growing; it’s doubling every year. I think the last number I heard was 150 zettabytes, and it’s continuing to expand. And so not only do we have this rapid growth of data, but we also have growth in complexity in terms of the number of systems, cloud providers, and where it’s all residing. With all of that macro-level environment in terms of our data, it’s just become more and more critical to be resilient when it comes to it. And for Veeam, we use this bounce-forward slogan because we want our products, but also the industry as a whole, to be able to very quickly be resilient and recover if there is an incident with their data.

Well, you mentioned that in years past, once you could recover applications, but data would be lost. Talk a little bit about the kind of protections that are in place today. I take it that as technologies have evolved, the problem of losing that data forever. I mean, we’re not talking about ransomware here.
We’re talking about data that is harmed in a disruption. What is it about technologies, and the advances in them, that are protecting that data better today?

Yeah, so we tend to think of it in a framework-type approach. And I think we’ve kind of talked about this a little bit already, but we tend to think of it in five pillars. First, you have just your basic data backup: are you backing up at the frequency that you want? Are you storing your data within your systems? The next pillar would be recovery: are you able to bring it back? Then we get into the areas that I think you’re really asking about, which is data portability. Can you recover data from anywhere to anywhere? Because, like I mentioned, we’re in a very complex environment where we have information being stored in systems that could be local, in your own data center; they could be at a colocation facility where you have a hosted data center; they could be in the cloud, the major cloud providers being AWS, Google Cloud, Azure, and so forth. Can you recover between those? Then we get into data security, and that’s the fourth pillar that we tend to think about, which is: when you have your information stored somewhere, does it have integrity? Is it free from, I guess the common terms would be, viruses or ransomware? Do you know that you have assurances and trust in that data? And that’s what these products today are allowing CIOs to have: we know that where our data sits, there’s a high degree


Del. Justices Seal Oracle's Win In $9.3B NetSuite Merger Suit

By Katryna Perera (January 21, 2025, 10:02 PM EST) — The Delaware Supreme Court on Tuesday affirmed the Chancery Court’s dismissal last year of a challenge to Oracle Corp.’s $9.3 billion acquisition of NetSuite Corp. in 2016, saying the Chancery did not err in finding that the transaction was untainted by influence from Oracle’s management or its founder and top shareholder…. source


Europe accelerates AI drug discovery as DeepMind spinoff targets trials this year

Google DeepMind spinoff Isomorphic Labs expects testing on its first AI-designed drugs to begin this year, as tech startups race to turn algorithmic magic into actual treatments. “We’ll hopefully have some AI-designed drugs in clinical trials by the end of the year,” the firm’s Nobel Prize-winning CEO Demis Hassabis told a panel at the World Economic Forum in Davos this week. “That’s the plan.”

The potential of AI-powered drug discovery is huge. Instead of spending years or even decades testing chemicals by hand, machine learning algorithms can sift through mountains of data to spot patterns and predict which molecules could make the next miracle drug. This could lead to faster drug development, lower costs, and new cures. By one estimate, there are over 460 AI startups currently working on drug discovery, of which over a quarter come from Europe. Globally, more than $60bn has been invested in the space so far, and the funding flood isn’t showing any signs of letting up.

Yet discovering the drugs is merely one step in the process. It’s only when big pharma decides they’re worth manufacturing, marketing, and distributing that they’ll make a real difference to the likes of you and me. That’s what makes some of the recent hookups between pharma behemoths and AI startups particularly exciting. Last year, Isomorphic Labs inked a $45mn deal with Eli Lilly to collaborate on AI-based research into small molecule therapeutics. Under the agreement, Isomorphic is also eligible to receive up to $1.7bn in “performance-based milestones.” The company also signed a similar collaboration with Swiss biotech Novartis. “We’re already working on real drug programs,” Hassabis told Bloomberg Television in an interview shortly after the announcements.
“I would expect in the next couple of years the first AI-designed drugs in the clinic.” Exscientia, which spun out from Dundee University in 2012, was among the first to apply AI to drug discovery. In 2024, the company advanced its first AI-designed drug candidate into human clinical trials, achieving this milestone in just 12 months — a process that typically takes around five years. US rival Recursion acquired the Oxford-based company for $688mn in November.

These are two big examples of an AI-driven drug discovery market that’s booming and, increasingly, consolidating. However, there are also plenty of early-stage companies working on more niche applications of the technology. These include Cambridge, UK-based CardiaTec, which is using AI to find new drugs to treat heart conditions, and London-headquartered Multiomic Health, which is working on formulas to treat metabolic diseases.

Despite all the potential, though, AI isn’t a silver bullet for drug discovery. While it can drastically speed up finding the right compounds needed to make new drugs, the most time-consuming steps — like wet lab tests with physical samples, clinical trials, and FDA approvals — aren’t going anywhere. Still, AI’s real power lies in that critical first phase: zeroing in on targets that might’ve otherwise slipped through the cracks, saving researchers time and possibly even unlocking new treatments. source


Delays in TSMC’s Arizona plant spark supply chain worries

Delays at TSMC’s Arizona plant could compel its customers to rely on Taiwan-based facilities, leaving them vulnerable to geopolitical risks tied to Taiwan’s dominance in semiconductor production. “This situation could also delay the rollout of next-generation products in the US market, affecting timelines for AI, gaming, and high-performance computing innovations,” Rawat said. “Moreover, without access to local, advanced chips, US tech companies will incur higher transportation and import costs, diminishing their profit margins. In competitive sectors like AI and autonomous vehicles, slower time-to-market could weaken global competitiveness.”

For TSMC, the delays and challenges could have significant implications for fab operations, particularly in maintaining profitability and efficiency. “Cost of maintaining the fab, the fab utilization rate, and the yield rate are key metrics to keep the fab profitable,” said Neil Shah, partner and co-founder at Counterpoint Research. “So, TSMC would look to move as much business as possible from its customers to the Arizona fab to match current and future capacity, maintain the utilization rate, and then build on the yield rate to maximize efficiency.” source
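To see how the three metrics Shah names interact, here is a back-of-the-envelope sketch; every figure in it is a hypothetical illustration, not TSMC data.

```python
def cost_per_good_die(monthly_fab_cost: float, wafer_capacity: int,
                      utilization: float, dies_per_wafer: int,
                      yield_rate: float) -> float:
    """Spread the fixed cost of running a fab over the good dies it ships.

    The cost is incurred regardless of output, so low utilization or low
    yield directly inflates the cost of every sellable die.
    """
    wafers_started = wafer_capacity * utilization
    good_dies = wafers_started * dies_per_wafer * yield_rate
    return monthly_fab_cost / good_dies

# Hypothetical fab: $500M/month running cost, 30,000-wafer capacity,
# 600 candidate dies per wafer.
healthy = cost_per_good_die(500e6, 30_000, 0.95, 600, 0.90)
ramping = cost_per_good_die(500e6, 30_000, 0.60, 600, 0.75)
print(f"healthy fab: ${healthy:.2f}/die, ramping fab: ${ramping:.2f}/die")
```

In this sketch, dropping utilization from 95% to 60% and yield from 90% to 75% roughly doubles the fixed cost carried by each good die (about $32 versus $62), which is why shifting customer volume to Arizona to keep utilization high, and then improving yield, is the lever Shah describes.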


Off The Bench: Arrest In NBA Betting Probe, 76ers' Arena Deal

By David Steele (January 17, 2025, 2:08 PM EST) — In this week’s Off The Bench, the betting fraud investigation centered on a former National Basketball Association player produces another arrest, the Philadelphia 76ers pull out of one new arena agreement and sign up for another, and a champion fighter is accused of assaulting a woman at a basketball game…. source


Microsoft AutoGen v0.4: A turning point toward more intelligent AI agents for enterprise developers

The world of AI agents is undergoing a revolution, and Microsoft’s release of AutoGen v0.4 this week marked a significant leap forward in this journey. Positioned as a robust, scalable and extensible framework, AutoGen represents Microsoft’s latest attempt to address the challenges of building multi-agent systems for enterprise applications. But what does this release tell us about the state of agentic AI today, and how does it compare to other major frameworks like LangChain and CrewAI? This article unpacks the implications of AutoGen’s update, explores its standout features, and situates it within the broader landscape of AI agent frameworks, helping developers understand what’s possible and where the industry is headed.

The promise of “asynchronous event-driven architecture”

A defining feature of AutoGen v0.4 is its adoption of an asynchronous, event-driven architecture (see Microsoft’s full blog post). This is a step forward from older, sequential designs, enabling agents to perform tasks concurrently rather than waiting for one process to complete before starting another. For developers, this translates into faster task execution and more efficient resource utilization — especially critical for multi-agent systems. For example, consider a scenario where multiple agents collaborate on a complex task: One agent collects data via APIs, another parses the data, and a third generates a report. With asynchronous processing, these agents can work in parallel, dynamically interacting with a central reasoner agent that orchestrates their tasks. This architecture aligns with the needs of modern enterprises seeking scalability without compromising performance. Asynchronous capabilities are increasingly becoming table stakes.
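The collaboration scenario above can be sketched with plain Python asyncio. This is a generic illustration of concurrent agents under an orchestrator, not the actual AutoGen v0.4 API; the agent names and sleep delays are invented stand-ins for real work.

```python
import asyncio

# Each "agent" is just an async function here; in a real framework these
# would be event-driven actors exchanging messages through a runtime.

async def collect(source: str) -> str:
    await asyncio.sleep(0.05)  # stands in for an API call
    return f"data from {source}"

async def parse(raw: str) -> str:
    await asyncio.sleep(0.02)  # stands in for parsing work
    return raw.upper()

async def orchestrate(sources: list[str]) -> str:
    # The "reasoner" fans work out: all collectors run concurrently,
    # then all parsers, instead of one source at a time.
    raw = await asyncio.gather(*(collect(s) for s in sources))
    parsed = await asyncio.gather(*(parse(r) for r in raw))
    # A reporter step assembles the final result.
    return " | ".join(parsed)

report = asyncio.run(orchestrate(["crm", "erp"]))
print(report)  # DATA FROM CRM | DATA FROM ERP
```

With sequential execution the total latency would be the sum of every call; with `asyncio.gather` it is roughly the latency of the slowest call at each stage, which is the efficiency gain the event-driven design is after.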
AutoGen’s main competitors, LangChain and CrewAI, already offered this, so Microsoft’s emphasis on this design principle underscores its commitment to keeping AutoGen competitive.

AutoGen’s role in Microsoft’s enterprise ecosystem

Microsoft’s strategy for AutoGen reveals a dual approach: Empower enterprise developers with a flexible framework like AutoGen, while also offering prebuilt agent applications and other enterprise capabilities through Copilot Studio (see my coverage of Microsoft’s extensive agentic buildout for its existing customers, crowned by its 10 pre-built applications, announced in November at Microsoft Ignite). By thoroughly updating the AutoGen framework’s capabilities, Microsoft provides developers the tools to create bespoke solutions while offering low-code options for faster deployment.

[Image: the AutoGen v0.4 stack, spanning the framework, developer tools, and applications, and supporting both first-party and third-party applications and extensions.]

This dual strategy positions Microsoft uniquely. Developers prototyping with AutoGen can seamlessly integrate their applications into Azure’s ecosystem, encouraging continued use during deployment. Additionally, Microsoft’s Magentic-One app introduces a reference implementation of what cutting-edge AI agents can look like when they sit on top of AutoGen, showing the way for developers to use AutoGen for the most autonomous and complex agent interactions. Magentic-One is Microsoft’s generalist multi-agent system, announced in November, for solving open-ended web and file-based tasks across a variety of domains.

To be clear, it’s uncertain precisely how Microsoft’s prebuilt agent applications leverage this latest AutoGen framework. After all, Microsoft has just finished overhauling AutoGen to make it more flexible and scalable, and its pre-built agents were released in November.
But by gradually integrating AutoGen into its offerings going forward, Microsoft clearly aims to balance accessibility for developers with the demands of enterprise-scale deployments.

How AutoGen stacks up against LangChain and CrewAI

In the realm of agentic AI, frameworks like LangChain and CrewAI have carved out their niches. CrewAI, a relative newcomer, gained traction for its simplicity and emphasis on drag-and-drop interfaces, making it accessible to less technical users. However, even CrewAI has grown more complex to use as it has added features, as Sam Witteveen mentions in the podcast we published this morning, where we discuss these updates. At this point, none of these frameworks is strongly differentiated in terms of technical capabilities. However, AutoGen is now distinguishing itself through its tight integration with Azure and its enterprise-focused design. While LangChain has recently introduced “ambient agents” for background task automation (see our story on this, which includes an interview with founder Harrison Chase), AutoGen’s strength lies in its extensibility, allowing developers to build custom tools and extensions tailored to specific use cases.

For enterprises, the choice among these frameworks often boils down to specific needs. LangChain’s developer-centric tools make it a strong choice for startups and agile teams. CrewAI’s user-friendly interfaces appeal to low-code enthusiasts. AutoGen, on the other hand, will now be the go-to for organizations already embedded in Microsoft’s ecosystem. However, a big point made by Witteveen is that these frameworks are still mainly used as great places to build prototypes and experiment, and that many developers port their work over to their own custom environments and code (including, for example, the Pydantic library for Python) when it comes to actual deployment. It’s true, though, that this could change as these frameworks build out extensibility and integration capabilities.
Enterprise readiness: the data and adoption challenge

Despite the excitement around agentic AI, many enterprises are not ready to fully embrace these technologies. Organizations I’ve talked with over the past month, including Mayo Clinic, Cleveland Clinic, and GSK in healthcare, Chevron in energy, and Wayfair and AB InBev in retail, are focusing on building robust data infrastructures before deploying AI agents at scale. Without clean, well-organized data, the promise of agentic AI remains out of reach. Even with advanced frameworks like AutoGen, LangChain and CrewAI, enterprises face significant hurdles in ensuring alignment, safety and scalability. Controlled flow engineering, the practice of tightly managing how agents execute tasks, remains critical, particularly for industries with stringent compliance requirements like healthcare and finance.

What’s next for AI agents?

As the competition among agentic AI frameworks heats up, the industry is shifting from a race to build better models to a focus on real-world usability. Features like asynchronous architectures, tool extensibility, and ambient agents are no longer optional but essential. AutoGen v0.4 marks a significant step for Microsoft, signaling its intent to lead in the enterprise AI


The Domo.AI vision: Agentic workflows that create value

AI can do much more than make individual employees more productive—with a thoughtful strategy, it can transform your industry. To turn this vision into reality, businesses need AI solutions that are not only ambitious but also practical and adaptable. That’s where Domo steps in. Domo.AI is built to meet the challenges of today’s AI landscape. While many companies focus on narrow applications or single-model solutions, our platform offers something more robust:

- Trustworthy AI results—without having to overhaul your entire data infrastructure.
- Intelligent agents that can analyze information, make choices, and take actions—with or without a human in the loop.
- Flexibility to choose which AI models to use—whether that’s DomoGPT or models you host with our ecosystem partners, including AWS, IBM, Databricks, and Snowflake.

The result? A platform that’s both ready to support your AI strategy today and designed to scale with you as your business and your data grow.

Making AI practical, personal, and powerful

Let us paint a picture of secure, personalized, scalable AI. Imagine these five ways Domo.AI can make a difference in your organization:

1. A unified and secure platform

Instead of onboarding AI piecemeal across your business (and taking on more risk with every tool), you get one trustworthy platform. As AI adoption grows, teams across your company are likely already using AI, creating a governance and security challenge for IT teams. Domo.AI offers a safer method: Our framework lets you use public models, large language models, and generative AI (e.g., ChatGPT)—all within the security of our platform. Your company’s data never leaves your hands and stays firmly in your control.

2. Scalable AI embedded in your business

AI is responsibly embedded into your business—and the capabilities grow alongside you.
Many companies today remain in reaction mode, adding new AI tools as soon as new use cases emerge—one for ad imagery production, another for data cleaning, and yet another for inventory management. But these surface-level integrations can't scale as your organization grows.

An integrated approach, however, will. The Domo.AI experience unites teams around the common stories emerging from your data. Our flexible framework allows you to add new, suitable tools and models as needed while keeping your data centralized and secure in the Domo platform. With Domo.AI, your AI strategy doesn't just keep up with your growth—it drives it.

3. Tools for every skill level

Both data experts and novices can use the same AI tools—with capabilities that match their skill sets. Domo.AI is advanced enough for data architects and analysts yet approachable enough for those without the same technical fluency. The Domo platform is designed to engage people (especially those without data science backgrounds), helping them learn and progress in making data-driven decisions.

4. Conversational AI for everyone

Your people can talk to AI as they would a colleague—in everyday language, not code. With Domo.AI, code and complex prompts are no longer needed to get value from AI. In Domo, you can ask a simple question of your data, and AI Chat delivers insights and answers in real time. Personalized support is right at your fingertips, 24/7. All you have to do is ask.

5. AI as your workday companion

With Domo, AI becomes your intelligent workday companion—not an expensive investment that doesn't drive value. As Chris Willis, chief design officer at Domo, put it, "The time has come to put AI to work—not as a novelty, but as an invaluable assistant augmenting capabilities for everyone in your enterprise." With Domo.AI, our vision is to give you the tools you need to create AI-powered operations that make work life easier.
By handling repetitive tasks, we free up your people to focus on the strategic work that requires their skills.

What's next for Domo.AI

Our journey with AI and with you doesn't stop here. As we look ahead to 2025, our team is doubling down on the mission to deliver personalized, secure, and scalable AI experiences to our customers. With agentic workflows that simplify operations and drive value, we're creating a platform that empowers businesses to thrive—and we can't wait to see what you build. To learn more, view our webinar, "Transforming Industries Through Automation and Intelligence."

The Domo.AI vision: Agentic workflows that create value Read More »

Purpose-built AI hardware: Smart strategies for scaling infrastructure

This article is part of VentureBeat's special issue, "AI at Scale: From Vision to Viability." Read more from the issue here. Enterprises can look forward to new capabilities — and strategic decisions — around the crucial task of creating a solid foundation for AI expansion in 2025. New chips, accelerators, co-processors, servers and other networking and storage hardware specially designed for AI promise to ease current shortages and deliver higher performance, expand service variety and availability, and speed time to value.

The evolving landscape of new purpose-built hardware is expected to fuel continued double-digit growth in AI infrastructure that IDC says has lasted 18 straight months. The research firm reports that organizational buying of compute hardware (primarily servers with accelerators) and storage hardware infrastructure for AI grew 37% year over year in the first half of 2024. Sales are forecast to triple to $100 billion a year by 2028.

"Combined spending on dedicated and public cloud infrastructure for AI is expected to represent 42% of new AI spending worldwide through 2025," writes Mary Johnston Turner, research VP for digital infrastructure strategies at IDC.

The main highway for AI expansion

Many analysts and experts say these staggering numbers illustrate that infrastructure is the main highway for AI growth and enterprise digital transformation. Accordingly, they advise, technology and business leaders in mainstream companies should make AI infrastructure a crucial strategic, tactical and budget priority in 2025.

"Success with generative AI hinges on smart investment and robust infrastructure," said Anay Nawathe, director of cloud and infrastructure delivery at ISG, a global research and advisory firm.
"Organizations that benefit from generative AI redistribute their budgets to focus on these initiatives."

As evidence, Nawathe cited a recent ISG global survey that found that, proportionally, organizations had 10 projects in the pilot phase and 16 in limited deployment, but only six deployed at scale. A major culprit, says Nawathe, was the current infrastructure's inability to affordably, securely, and performantly scale. His advice? "Develop comprehensive purchasing practices and maximize GPU availability and utilization, including investigating specialized GPU and AI cloud services."

Others agree that when expanding AI pilots, proofs of concept or initial projects, it's essential to choose deployment strategies that offer the right mix of scalability, performance, price, security and manageability.

Experienced advice on AI infrastructure strategy

To help enterprises build their infrastructure strategy for AI expansion, VentureBeat consulted more than a dozen CTOs, integrators, consultants and other experienced industry experts, as well as an equal number of recent surveys and reports.

The insights and advice, along with hand-picked resources for deeper exploration, can help guide organizations along the smartest path for leveraging new AI hardware and help drive operational and competitive advantages.

Smart strategy 1: Start with cloud services and hybrid

For most enterprises, including those scaling large language models (LLMs), experts say the best way to benefit from new AI-specific chips and hardware is indirectly — that is, through cloud providers and services.

That's because much of the new AI-ready hardware is costly and aimed at giant data centers. Most new products will be snapped up by hyperscalers Microsoft, AWS, Meta and Google; cloud providers like Oracle and IBM; AI giants such as xAI and OpenAI and other dedicated AI firms; and major colocation companies like Equinix.
All are racing to expand their data centers and services to gain competitive advantage and keep up with surging demand.

As with cloud in general, consuming AI infrastructure as a service brings several advantages, notably faster jump-starts and scalability, freedom from staffing worries and the convenience of pay-as-you-go and operational expense (OpEx) budgeting. But plans are still emerging, and analysts say 2025 will bring a parade of new cloud services based on powerful AI-optimized hardware, including new end-to-end and industry-specific options.

Smart strategy 2: DIY for the deep-pocketed and mature

New optimized hardware won't change the current reality: Do-it-yourself (DIY) infrastructure for AI is best suited for deep-pocketed enterprises in financial services, pharmaceuticals, healthcare, automotive and other highly competitive and regulated industries.

As with general-purpose IT infrastructure, success requires the ability to handle high capital expenses (CapEx), run sophisticated AI operations, staff up or partner for specialty skills, absorb hits to productivity and seize market opportunities while building. Most firms tackling their own infrastructure do so for proprietary applications with high return on investment (ROI).

Duncan Grazier, CTO of BuildOps, a cloud-based platform for building contractors, offered a simple guideline. "If your enterprise operates within a stable problem space with well-known mechanics driving results, the decision remains straightforward: Does the capital outlay outweigh the cost and timeline for a hyperscaler to build a solution tailored to your problem? If deploying new hardware can reduce your overall operational expenses by 20-30%, the math often supports the upfront investment over a three-year period."

Despite its demanding requirements, DIY is expected to grow in popularity.
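Grazier's rule of thumb reduces to simple break-even arithmetic: does the OpEx saved over roughly three years exceed the upfront hardware spend? A minimal sketch, with all dollar figures and the function name hypothetical rather than anything Grazier or BuildOps published:

```python
# Break-even sketch for DIY AI hardware vs. continued cloud spend.
# All figures are hypothetical placeholders, not vendor pricing.

def three_year_savings(upfront_capex: float,
                       annual_cloud_opex: float,
                       opex_reduction: float,
                       years: int = 3) -> float:
    """Net savings from hardware that cuts annual cloud OpEx by
    `opex_reduction` (e.g. 0.25 for 25%) over `years` years."""
    annual_savings = annual_cloud_opex * opex_reduction
    return annual_savings * years - upfront_capex

# Example: $2M/year cloud OpEx, 25% reduction, $1.2M upfront hardware.
net = three_year_savings(upfront_capex=1_200_000,
                         annual_cloud_opex=2_000_000,
                         opex_reduction=0.25)
print(f"Net 3-year savings: ${net:,.0f}")  # positive => math favors DIY
```

A positive result supports the upfront investment under Grazier's framing; the sketch deliberately ignores financing costs, depreciation and staffing, which a real CapEx analysis would include.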
Hardware vendors will release new, customizable AI-specific products, prompting more and more mature organizations to deploy purpose-built, finely tuned, proprietary AI in private clouds or on premises. Many will be motivated by faster performance of specific workloads, reduced risk of model drift, greater data protection and control, and better cost management.

Ultimately, the smartest near-term strategy for most enterprises navigating the new infrastructure paradigm will mirror current cloud approaches: an open, "fit-for-purpose" hybrid that combines private and public clouds with on-premises and edge.

Smart strategy 3: Investigate new enterprise-friendly AI devices

Not every organization can get its hands on $70,000 high-end GPUs or afford $2 million AI servers. Take heart: New AI hardware with more realistic pricing for everyday organizations is starting to emerge.

The Dell AI Factory, for example, includes AI Accelerators, high-performance servers, storage, networking and open-source software in a single integrated package. The company has also announced new PowerEdge servers and an Integrated Rack 5000 series offering air- and liquid-cooled, energy-efficient AI infrastructure. Major PC makers continue to introduce

Purpose-built AI hardware: Smart strategies for scaling infrastructure Read More »