Should CIOs Lead User Education Initiatives?

In November 2024, McKinsey’s Alex Panas (global leader of industries) and Axel Karlsson (global leader of practices and growth platforms) wrote: “The tech opportunities for today’s organizations are alluring. Businesses are racing to capitalize on the proliferation of technologies like generative AI, and with more data at their fingertips than ever, the potential to transform the business through tech seems vast. But companies looking to make digital hay need to play their cards right, otherwise they risk falling into the same traps that befuddled business leaders of yore faced with earlier digital disruptions.”

Panas and Karlsson cited digital missteps like not having a clear vision for a digital project or overestimating a project’s ultimate economic return to the company. But there are two other “ground floor” caveats that are also requisite for digital project success: The new technology must be seamlessly integrated into company business processes, and users must be trained to use it successfully.

The goal is total digital assimilation into the business. That assimilation is hard to attain if the business processes that use the technology don’t work right, or if employees are confused by the new technology. At that point, the project sputters and the blame game starts, often with the burden placed on IT.

Why is this? Isn’t it the job of HR or user departments to train employees and to redesign business processes so the business flows can work with new digital technology? And isn’t it IT’s job to stick to technical tasks, like developing, integrating, testing, and deploying new digital technologies so that users can use them?

That’s the general idea in theory, but all you have to do is walk up to a bank teller or a clerk at a hardware store counter who’s struggling to put your transaction through. As they struggle, they will tell you, “It’s the system.”

How to Deal with the ‘It’s the System’ Problem

I still find CIOs today who will consider a digital project complete and successful if it is delivered within budget and on timeline. They wash their hands of it and don’t consider it their responsibility if users later struggle with the system, or if the new system renders an internal business process painful or unwieldy. Unfortunately, taking a position like this can cost a career!

Digital transformation expert Eric Kimberling talks about why CIOs get fired and says that CIOs can “become captivated by the technology itself, focusing on its bells and whistles and cool features,” while ignoring “the organizational and human dynamics of a transformation.”

He goes on to say, “CIOs sometimes assume that if technology works well from a technical perspective, it will automatically work for the business … However, this assumption may or may not hold true. The best CIOs I have worked with are actually those who possess limited technological knowledge but possess a deep understanding of operations and the business they work for. They recognize the value and importance of the human and organizational aspects of change.”

CEOs and boards see this, too. That’s why they expect their CIOs to be as strategically and operationally on top of the business as they are on the technology.
It’s also incumbent on CIOs to assume more active roles in the human and business sides of digital project deployments if they want to avoid the “it’s the system” blame syndrome.

The CIO Role in User Education

User education and business process design aren’t the forte of most CIOs, nor of IT staff for that matter. How can CIOs and IT engage more substantially in digital projects to ensure that systems work well in business workflows and that knowledge transfer to employees has occurred?

Digital assimilation should be the goal of the CIO and the project team. If a digital system is to be assimilated into the business fabric of the company, it must meld well with business processes and be intuitively simple for workers to use and understand. Seamless business workflows and optimal ease of use should be ground-level goals of the user-IT project team, and it is the CIO who should push this idea. It is not enough to proclaim a project complete and successful just because it meets the timeline and comes in under budget.

Project tasks should reflect business process and ease-of-use goals. If a business process needs to be redesigned to accommodate new digital technology, tasks should be assigned for developing the workflow, doing the business workflow walkthrough, documenting it, testing it for all routine operations and foreseeable exceptions, and debugging it until it runs cleanly. If this sounds a bit like the design, develop, test-and-deploy sequence of traditional IT application development, that’s because it is. Developing, testing, and revising business process flows and usability should have equal billing with getting the software done.

New business processes using digital technology should be pilot tested. Before new software is deployed, it’s tested in a system environment that emulates the one it will run in when in production. The same should be done with new business processes that incorporate digital technology: The new tech and business process should be run in a pilot environment that emulates the “live” business environment where users will be operating. This is the only way you can really see the business issues and fix them for a smooth project cutover.

The CIO should collaborate with other C-levels. Launching new business processes and tech, and ensuring that employees have the skills to use them, is everybody’s business. However, it’s especially the business of the user-area executive and the CIO, who should be co-sponsoring the project and energizing their teams. When both parties and their staffs are aligned with the on-the-ground strategy of making sure the tech works, and that users know how to use that tech, they’ll not only


3 Tech Deep Dives that CIOs Must Absolutely Make

When I was a junior programmer/analyst on my first IT job, I was working with a programmer-mentor named Bob who was teaching me to code subroutines. The day’s conversation got around to the CIO, and Bob unexpectedly said, “That guy’s nothing more than a pencil pusher. He doesn’t have a clue about what we’re doing!”

Bob’s words stuck with me, especially after I became a CIO. I kept thinking about the side conversations that happen in cubicles. I determined that although it wasn’t my business as a CIO to code, I would make it my business to stay atop technology details so I could actively interact with my technical staff members in a value-added way. I also decided to learn how to communicate about technology at a plain-English “top” level with other executives and board members.

Staying on top of technology at a detailed level isn’t easy for CIOs who have a broad range of responsibilities to fulfill. Meanwhile, it’s crucial to be able to articulate complicated tech in plain English to superiors who lack a tech background, even when your own strength is in science and engineering rather than public speaking. Nevertheless, it’s absolutely essential for CIOs to do both, or they risk losing the respect of their superiors and their staff.

Here are three tech deep dives that CIOs must make in 2025 so they can meet the technology expectations of their superiors and staffs:

Security

Security worries corporate boards. It’s a key IT responsibility, and as cyberattacks grow more sophisticated, preventing them is becoming more than just monitoring the periphery of the network and conducting security audits. Relying on traditional security analysts with generalized knowledge also might not suffice.

Enter technologies like network and system observability, which can probe beyond monitoring, drilling down to security threat root causes and interpreting events based upon the relationships between data points and access points. You’ll have to break down the concept of observability, and possibly the evolution of new tech roles in security, for the board and executives who will be asked to fund them.

On the IT staff side, implementing observability will be a topic of technical discussion. There may also be a need to discuss new security roles and positions. For instance, in sensitive industries like finance, law enforcement, healthcare, or aerospace, you may need a cyberthreat hunter who seeks out malware that may be dormant and embedded in systems, waiting to be activated. Or it may be time for a security forensics specialist who can get to the bottom of a breach to identify the perpetrator. These positions are more specialized than security analyst. You may have to develop the skillsets for cyberhunting or forensics internally or seek them outside. Adding these roles could force a realignment of duties on the IT security staff, and it will be important for you to work closely with your staff.

Generative and Agentive AI

Companies are flocking to invest in AI, boards and CEOs want to know about it, and the data science and IT departments want direction on it. Generative AI is the most commonly used form, but how many boards know what gen AI is and how it works? Meanwhile, agentive AI, in which AI not only makes decisions but acts upon them, is coming into view. Both forms of AI can dramatically impact business strategies, customer relationships, business processes, and employee headcount.
CEOs and boards need to know what these forms of AI are capable of doing, where the risks are, and what the impact could be. They will come to the CIO for information. They don’t need to know about every nut and bolt, but they do need enough working knowledge to understand the technology at a conceptual business level.

On the IT and data science staff side, generative AI engines must operate on quality data from a variety of external and internal feeds that must be vetted. In some cases, ETL (extract-transform-load) software must be used to clean and normalize the data (a minimal sketch of such a cleaning step appears at the end of this article). The technical approach to doing this needs to be discussed and implemented, and it is a plus for everyone if the CIO partakes in some of these meetings.

With agentive AI, there should be discussions about technology readiness and ethical guardrails governing just how much autonomous work AI should be allowed to perform on its own. For all AI, security and refresh cycles for data need to be defined and executed, and the algorithms operating on the data must be trialed and tuned. Collectively, these activities require project approval and budget allotments, so it is in the staff’s and CIO’s best interests that they get discussed technically, so that the nature of the work, its challenges, and its opportunities are clearly understood by all.

NaaS

We’ve heard of IaaS (infrastructure as a service), SaaS (software as a service), and PaaS (platform as a service), and now there is NaaS (network as a service). What they have in common is that they are all cloud services. The intent is to shift IT functions to the cloud so you have less direct responsibility for managing them in-house.

Boards and C-level executives are attracted to cloud services because they perceive the cloud as less expensive, easier to manage, and a way to avoid investing in technology that will be obsolete three years later. But now there is NaaS, which most of them haven’t heard about. Just what is NaaS (network outsourcing), and what does it do for the company? They will ask the CIO to explain it.

On the IT side, if you’re discussing NaaS, there are decisions to be made about how much (if any) of the network you’re willing to outsource. And if you did outsource, what would be the impact on cost, management, security, bandwidth, application integration, and service levels? The discussion can get into the weeds of the technology, and the CIO should be prepared to go there.

The Quandary for
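Returning to the data-preparation point above: as a rough illustration of the kind of cleaning and normalization an ETL step performs before data reaches a generative AI pipeline, here is a minimal sketch using pandas. The feed, column names, and rules are hypothetical, and real pipelines typically use dedicated ETL tooling rather than a one-off script.

```python
import pandas as pd

# Hypothetical raw feed: customer records merged from internal and external sources.
raw = pd.DataFrame({
    "customer_id": [101, 102, 102, 103],
    "region": ["NA", "na", "na", None],
    "revenue_usd": ["1,200", "950", "950", "n/a"],
})

# Transform: deduplicate, normalize casing, and coerce types.
clean = (
    raw.drop_duplicates(subset="customer_id", keep="first")
       .assign(
           region=lambda df: df["region"].str.upper().fillna("UNKNOWN"),
           revenue_usd=lambda df: pd.to_numeric(
               df["revenue_usd"].str.replace(",", ""), errors="coerce"
           ),
       )
       .dropna(subset=["revenue_usd"])  # drop rows the model should never ingest
)

# Load: persist the vetted data for the model's ingestion pipeline.
clean.to_csv("vetted_customers.csv", index=False)
```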


ServiceNow’s Yokohama platform release focuses on agentic AI

ServiceNow today launched its newest Now Platform release, Yokohama, which doubles down on the company’s commitment to agentic AI. Building on its 2024 AI-driven enhancements, including the AI agents that made their debut in November 2024 in the Xanadu release, Yokohama includes teams of preconfigured AI agents that, ServiceNow said, “deliver productivity and predictable outcomes from day one, on a single platform.” These include:

Security Operations (SecOps) expert AI agents that transform security operations by streamlining the entire incident lifecycle, eliminating repetitive tasks and empowering SecOps teams to focus on quickly stopping real threats.

Autonomous change management AI agents that, ServiceNow said, “act like a seasoned change manager, instantly generating custom implementation, test, and backout plans by analyzing impact, historical data, and similar changes.”

Proactive network test & repair AI agents that operate as AI-powered troubleshooters, automatically detecting, diagnosing, and resolving network issues before they impact performance.

In addition, the new ServiceNow AI Agent Studio allows no-code, low-code, and pro-code developers to build, manage, and monitor their own AI agents, and to chain agents together to create automation workflows. Both it and the previously announced AI Agent Orchestrator are now generally available. source


Recover Your Deleted Data Even If You’ve Emptied Your Trash Bin

TL;DR: Recover lost, deleted, or corrupted files with Stellar Data Recovery Professional for just $89.99 (50% off).

Losing important files can be a costly, time-consuming nightmare, especially if they contain critical business data, sensitive client information, or essential work documents. Instead of scrambling for a solution after the fact, having a reliable recovery tool on hand can save you time and stress. Stellar Data Recovery Professional can help restore a single lost file or a whole hard drive, and right now, you can get a 10-year license for just $89.99 (reg. $199).

A Trusted Tool for Business & IT Use

Stellar Data Recovery has earned its reputation as one of the most powerful and user-friendly data recovery solutions available. With support for both Windows and macOS, it recovers lost data from hard drives, external storage, memory cards, and even optical media like CDs and DVDs. It can also retrieve data from non-bootable and encrypted drives, making it an essential tool for IT professionals handling critical system failures.

TechRadar called Stellar Data Recovery a “great file retrieval tool with powerful advanced options for business,” noting its ability to recover 80% of missing files from a corrupted drive and its efficient scanning options that save time when searching for lost data.

Unlike basic file recovery tools, Stellar offers tailored scan options so you don’t waste time scanning an entire system when you only need to restore a specific file type. It also includes SMART drive monitoring to detect potential hardware failures before they happen, RAID and virtual drive recovery, and even email restoration for Outlook and Exchange files. Users will appreciate its disk imaging and cloning features, which allow you to create a backup before a drive completely fails, potentially saving crucial business data.

Don’t wait until data loss happens: Stellar Data Recovery Professional’s 10-year license is just $89.99 (50% off) while this deal lasts. StackSocial prices subject to change. source


Meet the Analyst: Covering ERP, FP&A, SaaS, and the Enterprise Software Market

I am thrilled to begin my journey as principal analyst on the enterprise software, IT services, and digital transformation-focused team at Forrester. For nearly two decades, I have led major business-technology initiatives. During the last eight years with Forrester Consulting, I helped clients drive transformative results. Now, I am eager to deliver clear, actionable insights to technology and line-of-business leaders, software providers, and implementation partners. Together, we can tackle enterprise resource planning (ERP) modernization, software-as-a-service (SaaS) governance, and enterprise solution roadmaps with confidence, clarity, and measurable impact.

My Research Focus: Driving Results Where It Matters Most

Enterprises face growing pressure to modernize legacy systems, optimize technology investments, and unlock new avenues for growth. To address these demands, my research focuses on four key areas: We have been working hard at Forrester (view my full bio here) to be a valuable, trusted advisor that is on your side and by your side. I will be delivering evaluative research and market intelligence across ERP, financial planning and analysis (FP&A), and SaaS marketplaces, equipping you with the insights needed to fuel innovation, sharpen your strategies, and stay ahead in an increasingly competitive market.

A View Into My Consulting Journey

Recently, I moved to Forrester’s research team after eight years as a principal consultant in Forrester’s strategy consulting practice. During that time, I worked closely with clients to solve pressing challenges. Notable highlights include:

ERP transformations: Led modernization projects spanning multiple industries, managed budgets of over $80 million, and boosted efficiency.
Go-to-market strategies: Teamed with global tech companies to unlock revenue streams and accelerate growth.
Strategic sourcing: Created frameworks that improved vendor selection and reduced costs.
Business-case and ROI analysis: Authored Forrester Total Economic Impact™ (TEI) studies to support smarter tech investments.

Before Forrester Consulting, I served as associate director in KPMG’s CIO advisory practice. Earlier, I spent nearly a decade at Ernst & Young leading enterprisewide IT transformations.

Where Do I Find Inspiration?

Outside of Forrester, I can be found riding my motorcycle along the Pacific Coast and chasing my seven-year-old twins.

Ready To Start A Conversation?

Let’s shape your enterprise software and business application strategies with fresh insights. Click here to schedule an inquiry or guidance session. #Forrester #ERPModernization #DigitalTransformation #TechStrategy #ThoughtLeadership source


Achieving Cost Efficiency in Cloud Storage: The Role of Western Digital's Hard Drive Portfolio

Cloud storage is a key component of modern data infrastructure. The rapid growth of artificial intelligence (AI), the Internet of Things (IoT), and big data analytics drives the need for scalable, high-performance storage solutions. To meet this demand, cloud service providers must efficiently expand their infrastructure while remaining cost-effective. However, balancing affordability, performance, and dependability is a significant challenge.

One of cloud providers’ most important financial concerns is managing capital expenditures (CapEx) and operational expenditures (OpEx). Increasing drive capacity is one of the most effective ways to reduce total cost of ownership (TCO). Western Digital helps cloud providers achieve cost-effective scalability and operational efficiency by leveraging high-capacity hard disk drives (HDDs) and advanced storage technologies like Ultra Shingled Magnetic Recording (UltraSMR), Energy-Assisted Perpendicular Magnetic Recording (ePMR), and HelioSeal® technology.

This article explores cloud storage providers’ financial challenges and how Western Digital’s hard drive portfolio addresses them. It discusses data center CapEx and OpEx, the cost-saving effects of high-capacity HDDs, and the real-world benefits of Western Digital’s storage solutions. The article also discusses cloud storage’s future, including Heat-Assisted Magnetic Recording (HAMR), which will further increase storage density and cost efficiency.

The Financial Challenges of Cloud Storage

The expense of cloud storage goes beyond hardware acquisition. Providers must effectively manage capital expenses, operational costs, and scalability issues to support cost-effective growth.

Capital Expenditures (CapEx): The Cost of Growth

Building a cloud storage infrastructure requires significant upfront investment. The need for high-capacity storage drives, data center racks, and cooling systems drives up CapEx, making it important to optimize storage density to maximize cost efficiency.

Increasing drive capacity helps reduce costs by lowering the cost per terabyte. By integrating more disks into each drive and increasing data density, storage providers can store more data per device while reducing the number of drives needed. This reduces hardware expenses and improves power efficiency, lowering the total infrastructure investment.

Despite advancements in storage technology, cloud providers must carefully balance performance and cost. High-performance storage solutions ensure smooth operations, but overprovisioning can lead to wasted resources. Investing in scalable, high-capacity HDD-based solutions enables providers to expand efficiently without excessive spending.

Operational Expenditures (OpEx): Managing Long-Term Costs

Once infrastructure is in place, cloud providers must manage ongoing operational costs, including power consumption, cooling, and regular maintenance. The cost of keeping storage devices running efficiently can add up quickly, making energy-efficient solutions critical for cost management.

Higher-capacity drives reduce energy consumption per terabyte, optimizing power usage across data centers. By increasing storage density, cloud providers use less energy to power and cool their storage systems, lowering their overall OpEx. Additionally, reducing the number of physical drives minimizes maintenance requirements and reduces operational costs. Effective data management also plays a key role in OpEx reduction.
Implementing automated firmware updates, proactive drive monitoring, and advanced storage technologies helps cloud providers optimize resource allocation and improve long-term system performance.

Scalability: Expanding Without Overspending

Scalability is an important factor for cloud providers, who must continuously expand their storage infrastructure to meet increasing data demands. However, scaling storage inefficiently can result in unnecessary costs. Moving to the highest-capacity HDDs available reduces the number of physical drives required, decreasing the need for additional racks, cooling systems, and energy resources. Cloud providers can enhance their infrastructure by utilizing high-capacity drives and advanced storage technologies like UltraSMR while maintaining cost efficiency. Implementing smart scaling strategies ensures that storage solutions grow alongside demand, preventing both overprovisioning and underutilization.

Western Digital’s Cost-Effective Storage Solutions

Western Digital offers cloud providers a suite of high-capacity storage solutions designed to reduce costs while improving efficiency. By integrating advanced storage technologies, cloud providers can scale effectively while minimizing CapEx and OpEx.

High-Capacity Hard Drives: Powering the Future of Cloud Storage

Western Digital’s Ultrastar® DC HC600 Series drives offer up to 32TB of storage, providing an effective solution for cloud providers seeking to maximize efficiency without incurring excessive costs. These high-density drives allow providers to store more data on each drive, reducing the number of drives needed and enhancing data center efficiency. The features and associated benefits of Ultrastar DC HC600 Series drives:

Higher Storage Density: Maximizes space utilization while reducing the number of physical drives.
Lower Power Consumption: Fewer drives lead to reduced energy costs and cooling requirements.
Improved Cost Efficiency: Helps cloud providers optimize CapEx and OpEx by consolidating storage capacity.

Advanced Storage Technologies: Driving Efficiency Through Innovation

Western Digital uses several advanced storage technologies to drive efficiency.

Shingled Magnetic Recording (SMR): Maximizing Data Density

Western Digital’s Shingled Magnetic Recording (SMR) technology enhances storage density by overlapping data tracks, enabling cloud providers to store more information in the same physical space. UltraSMR improves upon this by offering even higher capacity, which improves cost per terabyte and operational efficiency. With greater storage capacity per drive, providers can reduce costs while maintaining system performance.

Energy-Assisted Perpendicular Magnetic Recording (ePMR): Enhancing Performance

Western Digital’s Energy-Assisted Perpendicular Magnetic Recording (ePMR) technology improves write performance and efficiency, helping cloud providers optimize their data storage solutions. By increasing data density, ePMR enables greater storage capacity without needing additional physical space. This makes it an effective tool for cloud providers aiming to scale efficiently while managing costs.

HelioSeal Technology: Reducing Power Consumption and Increasing Capacity

In 2013, Western Digital introduced HelioSeal technology, which replaced air-filled drives with helium-sealed environments.
This innovation reduces internal turbulence and drag and allows up to 11 disks per drive, increasing storage capacity while maintaining energy efficiency. The benefits of HelioSeal for cloud providers:

30% Energy Savings: Reduces data center power consumption, lowering OpEx.
Lower Mechanical Wear: Enhances drive longevity and reliability.
Optimized Performance: Ensures stability in high-demand environments.

The Quantifiable Financial Benefits

Cloud providers can reduce costs and enhance operational efficiency by utilizing advanced storage technologies. The financial benefits of these innovations result in substantial savings, making them essential for
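To make the cost-per-terabyte and drive-count arguments above concrete, here is a back-of-envelope sketch. The fleet size, drive capacities, per-drive wattage, and electricity price are illustrative assumptions, not Western Digital figures.

```python
# Back-of-envelope TCO comparison: fewer, larger drives for the same usable capacity.
# All inputs are illustrative assumptions, not vendor specifications.
TARGET_TB = 100_000          # capacity target: 100 PB expressed in TB
WATTS_PER_DRIVE = 6.5        # assumed average operating power per HDD
USD_PER_KWH = 0.12           # assumed electricity price
HOURS_PER_YEAR = 24 * 365

def fleet_profile(drive_capacity_tb: int) -> tuple[int, float]:
    drives = -(-TARGET_TB // drive_capacity_tb)  # ceiling division
    annual_power_usd = drives * WATTS_PER_DRIVE / 1000 * HOURS_PER_YEAR * USD_PER_KWH
    return drives, annual_power_usd

for capacity in (20, 32):
    drives, power_cost = fleet_profile(capacity)
    print(f"{capacity} TB drives: {drives} units, ~${power_cost:,.0f}/year in power")

# 20 TB drives: 5000 units, ~$34,164/year in power
# 32 TB drives: 3125 units, ~$21,352/year in power (37.5% fewer drives)
```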


AI Hallucinations Can Prove Costly

Large language models (LLMs) and generative AI are fundamentally changing the way businesses operate — and how they manage and use information. They’re ushering in efficiency gains and qualitative improvements that would have been unimaginable only a few years ago.

But all this progress comes with a caveat. Generative AI models sometimes hallucinate: They fabricate facts, deliver inaccurate assertions, and misrepresent reality. The resulting errors can lead to flawed assessments, poor decision-making, automation errors, and ill will among partners, customers, and employees.

“Large language models are fundamentally pattern recognition and pattern generation engines,” points out Van L. Baker, research vice president at Gartner. “They have zero understanding of the content they produce.”

Adds Mark Blankenship, director of risk at Willis A&E: “Nobody is going to establish guardrails for you. It’s critical that humans verify content from an AI system. A lack of oversight can lead to breakdowns with real-world repercussions.”

False Promises

Already, 92% of Fortune 500 companies use ChatGPT. As GenAI tools become embedded across business operations — from chatbots and research tools to content generation engines — the risks associated with the technology multiply.

“There are several reasons why hallucinations occur, including mathematical errors, outdated knowledge or training data, and an inability of models to reason symbolically,” explains Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania. For instance, a model might treat satirical content as factual or misinterpret a word that has different meanings in different contexts.

Regardless of the root cause, AI hallucinations can lead to financial harm, legal problems, regulatory sanctions, and damage to trust and reputation that ripples out to partners and customers.

In 2023, a New York City lawyer using ChatGPT filed a lawsuit that contained egregious errors, including fabricated legal citations and cases. The judge later sanctioned the attorney and imposed a $5,000 fine. In 2024, Air Canada lost a lawsuit when it failed to honor the price its chatbot quoted to a customer. The case resulted in minor damages and bad publicity.

At the center of the problem is the fact that LLMs and GenAI models are autoregressive, meaning they arrange words and pixels logically with no inherent understanding of what they are creating (a toy illustration appears below). “AI hallucinations, most associated with GenAI, differ from traditional software bugs and human errors because they generate false yet plausible information rather than failing in predictable ways,” says Jenn Kosar, US AI assurance leader at PwC.

The problem can be especially glaring in widely used public models like ChatGPT, Gemini, and Copilot. “The largest models have been trained on publicly available text from the Internet,” Baker says. As a result, some of the information ingested into the model is incorrect or biased. “The errors become numeric arrays that represent words in the vector database, and the model pulls words that seem to make sense in the specific context.”

Internal LLM models are at risk of hallucinations as well. “AI-generated errors in trading models or risk assessments can lead to misinterpretation of market trends, inaccurate predictions, inefficient resource allocation or failing to account for rare but impactful events,” Kosar explains.
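Baker’s description of LLMs as pattern-generation engines, and the autoregressive behavior noted above, can be made concrete with a toy next-word loop. This is a schematic sketch, not how production LLMs are implemented; the vocabulary and probabilities are invented.

```python
import random

# Toy autoregressive generator: each next word is chosen purely from
# pattern statistics, with no notion of whether the output is true.
NEXT_WORD_PROBS = {
    "revenue": [("grew", 0.6), ("fell", 0.3), ("tripled", 0.1)],
    "grew":    [("10%", 1.0)],
    "fell":    [("sharply", 1.0)],
    "tripled": [("overnight", 1.0)],  # fluent and plausible, possibly false
}

def generate(prompt: str, max_steps: int = 2) -> str:
    words = prompt.split()
    for _ in range(max_steps):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break
        tokens, weights = zip(*candidates)
        words.append(random.choices(tokens, weights=weights)[0])
    return " ".join(words)

print(generate("revenue"))  # may print "revenue tripled overnight": a statistically
                            # likely continuation, not a verified fact
```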
Kosar notes that these errors can also disrupt inventory forecasting and demand planning by producing unrealistic predictions, misinterpreting trends, or generating false supply constraints.

Smarter AI

Although there’s no simple fix for AI hallucinations, experts say that business and IT leaders can take steps to keep the risks in check. “The way to avoid problems is to implement safeguards surrounding things like model validation, real-time monitoring, human oversight and stress testing for anomalies,” Kosar says.

Training models with only relevant and accurate data is crucial. In some cases, it’s wise to plug in only domain-specific data and construct a more specialized GenAI system, Kosar says. Sometimes a small language model (SLM) can pay dividends. For example, “AI that’s fine-tuned with tax policies and company data will handle a wide range of tax-related questions on your organization more accurately,” she explains.

Identifying vulnerable situations is also paramount. This includes areas where AI is more likely to trigger problems or fail outright. Kosar suggests reviewing and analyzing processes and workflows that intersect with AI. For instance, “A customer service chatbot might deliver incorrect answers if someone asks about technical details of a product that was not part of its training data. Recognizing these weak spots helps prevent hallucinations,” she says.

Specific guardrails are also essential, Baker says. This includes establishing rules and limitations for AI systems and conducting audits using AI-augmented testing tools. It also centers on fact-checking and fail-safe mechanisms such as retrieval augmented generation (RAG), which combs the Internet or trusted databases for additional information (a minimal sketch appears at the end of this article). Including humans in the loop and providing citations that verify the accuracy of a statement or claim can also help.

Finally, users must understand the limits of AI, and an organization must set expectations accordingly. “Teaching people how to refine their prompts can help them get better results, and avoid some hallucination risks,” Kosar explains. In addition, she suggests that organizations include feedback tools so that users can flag mistakes and unusual AI responses. This information can help teams improve an AI model as well as the delivery mechanism, such as a chatbot.

Truth and Consequences

Equally important is tracking the rapidly evolving LLM and GenAI spaces and understanding performance results across different models. At present, nearly two dozen major LLMs exist, including ChatGPT, Gemini, Copilot, LLaMA, Claude, Mistral, Grok, and DeepSeek. Hundreds of smaller niche programs have also flooded the app marketplace. Regardless of the approach an organization takes, “In early stages of adoption, greater human oversight may make sense while teams are upskilling and understanding risks,” Kosar says.

Fortunately, organizations are becoming savvier about how and where they use AI, and many are constructing more robust frameworks that reduce the frequency and severity of hallucinations. At the same time, vendor software and open-source projects are maturing. Concludes
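Retrieval augmented generation, mentioned above as a fail-safe, can be sketched roughly as below. The document store, keyword retriever, and call_model stub are hypothetical stand-ins; production RAG systems use vector similarity search over embeddings rather than keyword overlap.

```python
# Minimal RAG sketch: ground answers in retrieved text and attach citations.
TRUSTED_DOCS = {
    "refund-policy": "Refunds are issued within 30 days of purchase.",
    "shipping":      "Standard shipping takes 5 to 7 business days.",
}

def retrieve(question: str) -> list[tuple[str, str]]:
    """Naive keyword retriever; real systems use vector similarity."""
    terms = set(question.lower().split())
    return [(doc_id, text) for doc_id, text in TRUSTED_DOCS.items()
            if terms & set(text.lower().rstrip(".").split())]

def call_model(question: str, context: str) -> str:
    # Placeholder for an LLM call constrained to the retrieved context.
    return f"Per our records: {context}"

def answer(question: str) -> str:
    hits = retrieve(question)
    if not hits:
        return "I don't have a trusted source for that."  # refuse rather than guess
    context = " ".join(text for _, text in hits)
    citations = ", ".join(doc_id for doc_id, _ in hits)
    return f"{call_model(question, context)} [sources: {citations}]"

print(answer("How fast is standard shipping?"))
```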


What is Eutelsat, Europe’s rising rival to Starlink?

French satellite operator Eutelsat was thrust into the spotlight last week as a potential replacement for Elon Musk’s Starlink in Ukraine — and potentially, broader Europe.

Eva Berneke, Eutelsat’s CEO, said the company was in advanced discussions with the EU about expanding its internet service in Ukraine. She also said Eutelsat was in “very positive talks” with Italy to provide an encrypted communications service for government officials. In the same week, investors rallied behind Eutelsat, sending its shares soaring over 500%. But what exactly is Eutelsat? And could it realistically replace Starlink in Ukraine and beyond?

An independence mission

In 1977, 17 European countries came together to form the European Telecommunications Satellite Organisation — “Eutelsat” for short. The idea was to develop a satellite-based telecommunications infrastructure independent from the US or the Soviet Union. In 1983, Eutelsat became the first European provider of satellite TV. In 2001, the company was privatised, and in 2023 it merged with the UK’s OneWeb to become the world’s third-largest satellite operator. With the merger, Eutelsat inherited OneWeb’s constellation of low-Earth orbit satellites for internet communications — a similar setup to its bigger rival Starlink.

How do OneWeb’s satellites work?

Eutelsat currently has 653 OneWeb satellites orbiting the Earth, each circling about 1,200km above the surface. This relative proximity results in lower latency and faster internet speeds compared to traditional geostationary satellites, which are around 30 times further out in space (a rough latency comparison appears below). Ground stations on Earth are connected to the internet and beam data to satellites orbiting above. The satellites then transmit the data to user terminals, small devices with antennas that enable internet access in places where traditional connections aren’t available. These user terminals are especially useful in remote areas, airplanes, ships, vehicles, or — as we’ve seen in Ukraine — conflict zones.

Can Eutelsat replace Starlink in Ukraine?

Eutelsat told TNW that it offers the same coverage and latency capabilities as Starlink. The firm’s low-Earth orbit (LEO) services are already deployed in Ukraine, where they support government and institutional communications. Additionally, Eutelsat said its geostationary orbit (GEO) systems could provide extra capacity over Ukraine, as well as “stronger resilience” for critical infrastructure connectivity.

Currently, Eutelsat has around 2,000 user terminals on the ground in Ukraine. That’s dwarfed by Starlink’s 40,000, yet Berneke said her company could reach that number “in a couple of months.” Ramping up capacity that quickly, though, would present some serious logistical challenges, especially as OneWeb terminals are supplied by third-party companies, unlike Starlink, which builds its terminals in-house.

Poland and the US, among others, have helped to fund Ukraine’s use of Starlink. Similar support would likely be needed for a rapid rollout of OneWeb terminals, particularly given Eutelsat’s not-so-healthy finances.

Then there’s the tech itself. OneWeb’s satellites are older and less advanced than Starlink’s. They lack inter-satellite laser link technology, which improves coverage. They also have far fewer satellites in orbit than Starlink, which has around 7,000.
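The latency gap between low-Earth and geostationary orbits mentioned above is straightforward physics. The sketch below compares best-case round-trip signal times for OneWeb’s roughly 1,200 km orbit and a roughly 35,786 km geostationary orbit; it ignores processing and routing delays, so real-world figures are higher.

```python
SPEED_OF_LIGHT_KM_PER_MS = 299_792.458 / 1000  # ~300 km per millisecond

def min_round_trip_ms(altitude_km: float) -> float:
    # Best case: user terminal directly beneath the satellite. Data travels
    # user -> satellite -> ground station -> satellite -> user: four legs.
    return 4 * altitude_km / SPEED_OF_LIGHT_KM_PER_MS

print(f"LEO  (1,200 km): {min_round_trip_ms(1_200):.0f} ms")   # ~16 ms
print(f"GEO (35,786 km): {min_round_trip_ms(35_786):.0f} ms")  # ~477 ms
```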
However, if the EU is serious about replacing Starlink in Ukraine, it’ll probably have to settle for second-best. The bloc will also have to make some serious financial commitments. Word from Poland this weekend provided positive news on that front. In a post on X on Sunday, Poland’s foreign minister said the country would be forced to “look for other suppliers” if SpaceX “proves to be an unreliable provider.” Warsaw currently funds half of the 42,000 Starlink terminals operating in the country at a cost of about $50mn a year.

In the longer term, Europe has its bets placed on IRIS², a multi-orbit satellite internet constellation expected to switch on in 2030. There are also reports that a new Airbus-Leonardo-Thales Alenia Space joint venture called “Project Bromo” plans to challenge Starlink’s global dominance. source


Dual-OMS TEI: Companies Actually Get Their Money’s Worth

We evaluated the Total Economic Impact™ (TEI) of companies that simultaneously use two order management systems (OMSes). Our research uncovered surprising findings. Our TEI focused on businesses that had a preexisting, primary OMS and then added modules of a newer, secondary OMS to fill functionality gaps. Our central question: Why — and is it worth it?

For our study, we applied the Forrester Total Economic Impact methodology, which allows us to calculate the ROI of a business decision. The TEI process includes interviewing representatives from companies that have made the business change in question. Then, we aggregate the experiences of interviewees into a composite organization. Finally, we create a financial framework from the material gathered in the interviews to prove the ROI. It considers everything from hard, direct costs to labor. (A simplified illustration of this payback arithmetic appears at the end of this post.)

What did we learn? Spoiler: Pursuing a dual-OMS strategy is worth it! The positive ROI has a very short break-even point (less than six months). But there are major caveats, and these results aren’t guaranteed.

What we expected: Based on conversations with Forrester clients, we expected to find that organizations plan to perpetually maintain both solutions. We believed we would prove such significant benefits that the costs of maintaining both would be worthwhile. We were wrong — at least partially. Two of the most unexpected takeaways:

The dual-OMS approach has a big impact on topline revenue in the pre-purchase stages. In fact, businesses that gained revenue-increasing benefits from the secondary OMS saw the most significant results. Modules that add functionality such as enterprise inventory management had the biggest impact; these modern module additions gave organizations tools to lock in sales that they previously lost due to stock inaccuracies. The dual-OMS strategy allows brands to manage complicated inventory calculations and logic, such as managing “safety stock” more tightly. Organizations also served near-real-time inventory data into the shopping experience, which reduced order cancellations from overselling.

Organizations have unintentionally begun a slow-motion “strangler” process. Most firms that used the dual-OMS strategy initially intended to maintain both OMSes indefinitely, but they saw that slowly adding new modules from the second solution was as effective as a replatforming initiative, at a nondisruptive pace. That is why three of the four interviewees said they ultimately intend to incrementally replace their primary OMS with the secondary one. In addition to the considerable benefits the secondary OMS brought, interviewees realized the add-on process had inadvertently jump-started their replacement. They won’t move quickly, but with such a major step toward replacement complete, they now feel that the rest of the migration is possible.

The OMS market is currently in flux as longer-standing systems work to modernize their architecture. Meanwhile, the vendors with open, modular architecture are developing functionality and enhanced experiences that push the market forward.

In the full report, we dive into the details of how the organizations realized the ROI of their approach and how we calculated the economic benefits. We also noted the risks of attempting similar strategies, given the varying needs of digital businesses. To learn more, read the full report here. Have questions or need support on how to embark on a dual-system strategy in OMS or commerce? Please book a guidance session with me! source
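As a simplified illustration of the payback arithmetic behind a TEI-style finding, consider the sketch below. All figures are invented for illustration; Forrester’s actual composite is built from interview data, and the full report contains the real framework.

```python
# Hypothetical payback math in the spirit of a TEI composite organization.
initial_cost = 1_500_000         # assumed licenses plus integration of the secondary OMS
annual_operating_cost = 400_000  # assumed ongoing fees and maintenance
annual_benefit = 5_200_000       # assumed recovered sales, fewer cancellations, labor savings
years = 3

total_cost = initial_cost + annual_operating_cost * years
total_benefit = annual_benefit * years
roi = (total_benefit - total_cost) / total_cost
payback_months = 12 * initial_cost / (annual_benefit - annual_operating_cost)

print(f"Three-year ROI: {roi:.0%}")                # ~478%
print(f"Break-even: {payback_months:.1f} months")  # 3.8 months, under six
```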
