Information Week

Should AI-Generated Content Include a Warning Label?

Like a tag that warns sweater owners not to wash their new purchase in hot water, a virtual label attached to AI content could alert viewers that what they’re looking at or listening to has been created or altered by AI. While appending a virtual identification label to AI-generated content may seem like a simple, logical solution to a serious problem, many experts believe the task is far more complex and challenging than it appears.

The answer isn’t clear-cut, says Marina Cozac, an assistant professor of marketing and business law at Villanova University’s School of Business. “Although labeling AI-generated content … seems like a logical approach, and experts often advocate for it, findings in the emerging literature on information-related labels are mixed,” she states in an email interview. Cozac adds that there’s a long history of using warning labels on products, such as cigarettes, to inform consumers about risks. “Labels can be effective in some cases, but they’re not always successful, and many unanswered questions remain about their impact.”

For generic AI-generated text, a warning label isn’t necessary, since it usually serves functional purposes and doesn’t pose a novel risk of deception, says Iavor Bojinov, a professor at Harvard Business School, in an online interview. “However, hyper-realistic images and videos should include a message stating they were generated or edited by AI.” He believes transparency is crucial to avoid confusion or potential misuse, especially when the content closely resembles reality.

Real or Fake?

The purpose of a warning label on AI-generated content is to alert users that the information may not be authentic or reliable, Cozac says.
“This can encourage users to critically evaluate the content and increase skepticism before accepting it as true, thereby reducing the likelihood of spreading potential misinformation.” The goal, she adds, should be to help mitigate the risks associated with AI-generated content and misinformation by disrupting automatic believability and the sharing of potentially false information.

The rise of deepfakes and other AI-generated media has made it increasingly difficult to distinguish between what’s real and what’s synthetic, which can erode trust, spread misinformation, and have harmful consequences for individuals and society, says Philip Moyer, CEO of video hosting firm Vimeo. “By labeling AI-generated content and disclosing the provenance of that content, we can help combat the spread of misinformation and work to maintain trust and transparency,” he observes via email.

Moyer adds that labeling will also support content creators. It will help them maintain not only their creative abilities and individual rights as creators, but also their audience’s trust, by distinguishing original work from content made with AI.

Bojinov believes that besides providing transparency and trust, labels will offer a unique seal of approval. “On the flip side, I think the ‘human-made’ label will help drive a premium in writing and art in the same way that craft furniture or watches will say ‘hand-made’.”

Advisory or Mandatory?

“A label should be mandatory if the content portrays a real person saying or doing something they did not say or do originally, alters footage of a real event or location, or creates a lifelike scene that did not take place,” Moyer says.
“However, the label wouldn’t be required for content that’s clearly unrealistic, animated, includes obvious special effects, or uses AI for only minor production assistance.”

Consumers need access to tools that help them identify what’s real versus artificially generated, and that don’t depend on scammers doing the right thing, says Abhishek Karnik, director of threat research and response at security technology firm McAfee, via email. “Scammers may never abide by policy, but if most big players help implement and enforce such mechanisms it will help to build consumer awareness.”

The format of labels indicating AI-generated content should be noticeable without being disruptive, and it may differ based on the content or the platform on which it appears, Karnik says. “Beyond disclaimers, watermarks and metadata can provide alternatives for verifying AI-generated content,” he notes. “Additionally, building tamper-proof solutions and long-term policies for enabling authentication, integrity, and nonrepudiation will be key.”

Final Thoughts

There are significant opportunities for future research on AI-generated content labels, Cozac says. She points out that while some progress has been made, more work remains to understand how different label designs, contexts, and other characteristics affect their effectiveness. “This makes it an exciting and timely topic, with plenty of room for future research and new insights to help refine strategies for combating AI-generated content and misinformation.”

source
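The authentication, integrity, and nonrepudiation goals Karnik mentions can be illustrated with a minimal, hypothetical sketch: a platform attaches a provenance label to a piece of content and signs both together, so any later edit to the content or the label is detectable. The key scheme and field names here are illustrative assumptions; real provenance systems such as the C2PA standard use public-key signatures and standardized manifests rather than a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical platform-held secret; real systems would use public-key
# signatures so anyone can verify without holding the signing key.
SECRET_KEY = b"platform-provenance-key"

def attach_label(content: bytes, generator: str) -> dict:
    """Attach a tamper-evident AI-provenance label to a piece of content."""
    label = {"ai_generated": True, "generator": generator}
    payload = content + json.dumps(label, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"label": label, "signature": signature}

def verify_label(content: bytes, record: dict) -> bool:
    """Return True only if neither the content nor its label was altered."""
    payload = content + json.dumps(record["label"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

image_bytes = b"...rendered pixels..."
record = attach_label(image_bytes, generator="example-model-v1")

print(verify_label(image_bytes, record))       # intact content: True
print(verify_label(b"edited pixels", record))  # tampered content: False
```

The point of the sketch is the binding: the signature covers content and label together, which is what makes the label useful against the scammers Karnik describes, who would otherwise simply strip or forge it.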


Breaking Down Barriers to AI Accessibility

Artificial intelligence is no longer a futuristic concept — it’s here, promising to revolutionize industries by unlocking unparalleled efficiency and innovation. Yet, despite this immense potential, AI adoption remains elusive for many organizations. Businesses are grappling with challenges like skill shortages, unpredictable cloud pricing, and high computing demands. These barriers have left AI out of reach for many companies, especially those with limited resources.

The good news is that new technologies are changing this landscape, making AI more accessible and affordable than ever before. From edge computing to no-code platforms and AutoML, businesses are increasingly finding ways to democratize AI, allowing them to leverage its power without breaking the bank. Emerging technologies are paving the way for AI adoption, offering businesses new opportunities for greater efficiency and innovation.

Overcoming the Barriers to AI Adoption

The barriers to AI adoption are well-documented. For many organizations, the cost of high-performance computing hardware, such as GPUs, and the unpredictability of cloud pricing have made AI investment seem risky. Additionally, a growing skill gap is preventing companies from finding the talent to manage and implement these technologies effectively.

What’s more, as AI systems become more complex, so does the need for highly specialized knowledge and tools to manage them. Organizations need solutions that simplify AI development and make it more cost-effective to deploy — without the need for extensive technical expertise.

Technologies Making AI More Accessible

Several key technologies are stepping up to tackle these barriers, providing businesses with the tools to integrate AI effectively.

1. Edge computing

Edge computing brings AI capabilities closer to data sources, allowing businesses to process and analyze data in real time. This proximity reduces latency and improves decision-making speed — crucial for industries like manufacturing, healthcare, and retail that rely on real-time insights. By decentralizing data processing, edge computing lowers the demand for centralized cloud resources and reduces overall costs.

2. No-code/Low-code platforms

No-code and low-code platforms are a game-changer for businesses that lack deep technical expertise. These platforms empower non-technical users to create and deploy AI models without writing complex code, making AI development more accessible and enabling a wider range of businesses to participate in AI-driven innovation, even with limited resources.

3. AutoML

Automated machine learning (AutoML) simplifies the process of building AI models. AutoML tools automatically handle model selection, training, and optimization, allowing users to create high-performing AI systems without requiring data science expertise. By streamlining these tasks, the technology significantly lowers the barrier for businesses looking to integrate AI into their operations, making deployment easier and faster.

4. AI on CPUs

AI’s computational demands, especially for tasks like training large language models, have traditionally required expensive GPU hardware. However, recent innovations are making it possible to run some AI models on more affordable CPUs. Techniques like quantization and frameworks like MLX are enabling smaller AI models to run efficiently on CPUs, broadening AI’s accessibility and reducing the need for costly hardware investments.

Collaboration: The Key to AI Democratization

Organizations cannot travel alone on the journey to making AI accessible. Collaboration between businesses will be essential to overcoming the barriers to AI adoption.
By pooling resources, sharing expertise, and developing tailored solutions, companies can reduce costs and streamline the integration of AI into their operations.

Moreover, collaboration is critical for ensuring AI is implemented ethically and safely. As AI’s role in society grows, organizations must work together to establish guidelines and best practices that foster trust and prevent misuse. Transparency in AI development and deployment will be key to its long-term success.

Upskilling the Workforce to Build Trust in AI

Another challenge organizations face is the need to upskill their workforce. As AI systems become more prevalent, employees must have the skills to manage, work alongside, and trust these technologies. Upskilling workers will alleviate concerns about data privacy, security, and job displacement, allowing for smoother AI adoption.

Investing in training programs will not only help employees adapt to AI systems but also ensure that organizations maximize the benefits of these technologies. A skilled workforce can collaborate effectively with AI, leading to improved productivity and innovation. The broader IT skills shortage is expected to impact nine out of 10 organizations by 2026, leading to $5.5 trillion in delays, quality issues, and revenue loss, according to IDC.

Unlocking AI’s Potential Across Industries

The future of AI is bright, but its potential can only be fully realized when it becomes accessible to all. By leveraging technologies like edge computing, no-code platforms, and AutoML, businesses can overcome the barriers to AI adoption and unlock new opportunities for growth and innovation.

Business leaders who invest in these technologies and prioritize upskilling their workforce will be well-positioned to thrive in an AI-powered future.
With collaboration and a commitment to ethical implementation, AI can become a transformative force across industries, reshaping how we work, communicate, and innovate. It’s time to embrace AI’s possibilities and take the next step toward a more accessible, inclusive future.

source
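The quantization technique mentioned in the “AI on CPUs” section can be sketched in a few lines of pure Python: weights stored as 32-bit floats are mapped to 8-bit integers sharing one scale factor, cutting memory roughly 4x at the cost of a small rounding error. This is an illustrative toy, not how production frameworks do it; real tools quantize per-tensor or per-channel with optimized integer kernels.

```python
def quantize(weights, num_bits=8):
    """Map floats to signed integers using a single shared scale factor."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the integers."""
    return [q * scale for q in q_weights]

weights = [0.82, -1.27, 0.03, 0.5]
q, scale = quantize(weights)
recovered = dequantize(q, scale)

# Each recovered weight is within one quantization step of the original,
# which is why smaller models tolerate int8 inference well on CPUs.
assert all(abs(w - r) <= scale for w, r in zip(weights, recovered))
print(q)  # [82, -127, 3, 50]
```

The trade-off is visible in the scale factor: a larger dynamic range in the weights means a coarser step and more rounding error, which is one reason per-channel scales are preferred in practice.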


Untangling Enterprise Reliance on Legacy Systems

While the push for digital transformation has been underway for years, many enterprises still have legacy technology deeply ingrained in their tech stacks. In many cases, these systems are years or even decades old but remain integral to keeping a business operational. Simply ripping them out and replacing them is often not a plausible quick fix.

“It’s actually quite hard to fully demise previous versions of technology as we adopt new versions, and so you end up with the sort of layering of various ages of all the technologies,” says Nick Godfrey, senior director and global head, office of the CISO at Google Cloud.

Given that continued use of legacy systems comes with risk, why are legacy systems still so common today? How can enterprise leaders manage that risk and move forward?

A Universal Challenge

In 2019, the Government Accountability Office (GAO) identified 10 critical federal IT legacy systems. These systems were 8 to 51 years old and cost roughly $337 million to operate and maintain each year.

Government is hardly the only sector that relies on outdated systems. The banking sector relies heavily on COBOL, a decades-old programming language. The health care industry is rife with examples of outdated electronic health record (EHR) systems and legacy hardware. One survey found that 74% of manufacturing and engineering companies use legacy systems and spreadsheets to operate.

“If we talk about banking, manufacturing, and health care, you would find a big chunk of legacy systems are actually elements of the operational technology that it takes to operate that business,” says Joel Burleson-Davis, senior vice president of worldwide engineering, cyber at Imprivata, a digital identity security company.

The cost of replacing these systems isn’t simply the price tag that comes with the new technology. It’s also the downtime that comes with making the change.
“The hardest way to drive the car is when you’re trying to change the tire at the same time,” says Austin Allen, director of solutions architecture at Airlock Digital, an application control company. “You think about one hour of downtime … you can be talking about millions of dollars depending on the company.”

A survey conducted by commercial software company SnapLogic found that organizations spent an average of $2.7 million to overhaul legacy tech in 2023.

As expensive as it is to replace legacy technology, keeping it in place could prove to be more costly. Legacy systems are vulnerable to cyberattacks and data breaches. In 2024, the average cost of a data breach was $4.88 million, according to IBM’s Cost of a Data Breach Report 2024.

Evaluating the Tech Stack

The first step to assessing the risk that legacy systems pose to an enterprise is understanding how they are being used. It sounds simple enough on the surface, but enterprise infrastructure is incredibly complicated.

“Everybody wishes that they had all of their processes and all of their systems integrations documented, but they don’t,” says Jen Curry Hendrickson, senior vice president of managed services at DataBank, a data center solutions company.

Once security and technology leaders conduct a thorough inventory of systems and understand how enterprise data is moving through those systems, they can assess the risks.

“This technology was designed and installed many, many years ago when the threat profile was significantly different,” says Godfrey. “It is creating an ever more complex surface area.”

What systems can be updated or patched? What systems are no longer supported by vendors? How could threat actors leverage access to a legacy system for lateral movement?

Managing Legacy System Risk

Once enterprise leaders have a clear picture of their organizations’ legacy systems and the risk they pose, they have a choice to make.
Do they replace those systems, or do they keep them in place and manage those risks?

“Businesses are fully entitled — maybe they shouldn’t [be] — but they’re fully entitled to say no, ‘I understand the risk and that’s not something we’re going to address right now,’” says Burleson-Davis. “Industries that tend to have lower margins and be a little more resource-strapped are the likeliest to make some of those tradeoffs.”

If an enterprise cannot replace a legacy system, its security and technology leaders can still take steps to reduce the risk of it becoming a doorway for threat actors.

Security teams can implement compensating controls to look for signs of compromise. They can implement zero-trust access and isolate legacy systems from the rest of the enterprise’s network as much as possible.

“Legacy systems really should be hardened from the operating system side. You should be turning off operating system features that do not have any business purpose in your environment by default,” Allen emphasizes.

Security leaders may even find relatively simple ways to reduce risk exposure related to legacy systems.

“People will often find, ‘Oh, I’m running 18 different versions of the same virtualization package. Why don’t I go to one?’” Burleson-Davis shares. “We find people running into scenarios like that where, after doing a proper inventory, [they] find that there was some low-hanging fruit that really solved some of that risk.”

Transitioning Away from Legacy Systems

Enterprise leaders have to clear a number of hurdles in order to replace legacy systems successfully. The cost and the time are obvious challenges. Given the age of these systems, talent constraints come to the fore. Does the enterprise have people who understand how the legacy system works and how it can be replaced?
“You end up with a very complex skills requirement inside of your organization to be able to manage very old types of technologies through to cutting-edge technologies,” Godfrey points out.

A change advisory board (CAB) can lead the charge on strategic planning. That group of people can help answer vital questions about the timeline for the transition, the potential downtime, and the people necessary to execute the change.

“How does that affect anything downstream or upstream? Where is my
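The “low-hanging fruit” Burleson-Davis describes, such as discovering 18 versions of the same virtualization package during an inventory, can be sketched as a simple grouping pass over asset records. The record shape and product names below are hypothetical, standing in for an export from whatever asset-management tool an organization actually uses.

```python
from collections import defaultdict

# Hypothetical inventory records, e.g. exported from an asset-management tool.
inventory = [
    {"host": "app-01", "product": "HyperVisorX", "version": "6.2"},
    {"host": "app-02", "product": "HyperVisorX", "version": "5.9"},
    {"host": "db-01",  "product": "HyperVisorX", "version": "7.1"},
    {"host": "db-02",  "product": "OldERP",      "version": "2009"},
]

def consolidation_candidates(records):
    """Return products running more than one version across the estate."""
    versions = defaultdict(set)
    for r in records:
        versions[r["product"]].add(r["version"])
    # Lexicographic sort is fine for display; real tools compare versions
    # semantically.
    return {p: sorted(v) for p, v in versions.items() if len(v) > 1}

print(consolidation_candidates(inventory))
# {'HyperVisorX': ['5.9', '6.2', '7.1']}
```

Flagging multi-version sprawl like this is only the first step of the inventory the article describes, but it is often enough to surface the easy consolidation wins.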


Why Enterprises Struggle to Drive Value with AI

Artificial intelligence is virtually everywhere, whether enterprises have an AI strategy or not. As AI capabilities grow more sophisticated, businesses are trying to capitalize on them, but many haven’t done enough foundational work to succeed. While it’s true that companies have been increasing their AI budgets over the last several years, it’s become clear that the ROI of such efforts varies significantly based on many dynamics, such as available talent, budget, and a sound strategy. Now, organizations are questioning the value of such investments to the point of pulling back in 2025.

According to Anand Rao, distinguished service professor of applied data science and artificial intelligence at Carnegie Mellon University, the top three challenges are ROI measurement, realization, and maintenance.

“If the work I’m doing takes three hours and now it takes a half an hour, that’s easily quantifiable, [but] human performance is variable,” says Rao. “The second way is having a baseline. We don’t [understand] human performance, but we are saying AI is 95% better than a human, but which human? The top-most performer, an average performer, or the new employee?”

When it comes to realizing ROI, there are different ways to look at it. For example, if AI saves 20% of five people’s time, perhaps one position could be eliminated. However, if those five people are now spending more time on higher-value tasks, then it would be unwise to let any of them go, because they are providing more value to the business.

The other challenge is maintenance, because AI models need to be monitored and maintained to remain trustworthy. Also, as humans use AI more frequently, they get more adept at doing so while the AI is learning from the human, which may increase performance. Enterprises are not measuring that either, Rao says.

“[T]here’s a whole learning curve happening between the human and the AI, and independently the two.
That might mean that you may not be able to maintain your ROI, because it may increase or decrease from the base point,” says Rao.

There’s also a time element. For example, GPT-4 was introduced in March 2023, before enterprises were ready for it, but within six months or less, businesses had started investing systematically to develop their AI strategies. Nevertheless, there’s still more to do.

“[T]he crucial fact is that we are still in the very early days of this technology, and things are moving very quickly,” says Beatriz Sanz Saiz, global consulting data and AI leader at business management consulting firm EY. “Enterprises should become adept at measuring value realization, risk and safety. CIOs need to rethink a whole set of metrics because they will need to deliver results. Many organizations have a need for a value realization office, so that for everything they do, they can establish metrics upfront to be measured against, whether that is cost savings, productivity, new revenue growth, market share, employee satisfaction [or] customer satisfaction.”

The GenAI Impact

While many enterprises have had plenty of success with traditional AI, Kjell Carlsson, head of AI strategy at enterprise MLOps platform Domino Data Lab, estimates that 90% of GenAI initiatives are not delivering results that move the needle on a sustained basis, nor are they on track to do so.

“[M]ost of these organizations are not going after use cases that can deliver transformative impact, nor do they have the prerequisite AI engineering capabilities to deliver production-grade AI solutions,” says Carlsson. “Many organizations are under the misconception that merely making private instances of LLMs and business apps with embedded GenAI capabilities available to business users and developers is an effective AI strategy. It is not.
While there have been productivity gains from these efforts, in most cases, these have been far more modest than expected and have plateaued quickly.”

Though driving business value with GenAI has many similarities to doing so with traditional AI and machine learning, it requires expert teams that can design, develop, operationalize, and govern AI applications that rely on complex AI pipelines. These pipelines combine data engineering, prompt engineering, vector stores, guardrails, upstream and downstream ML and GenAI models, and integrations with operational systems.

“Successful teams have evolved their existing data science and ML engineering capabilities into AI product and AI engineering capabilities that allow them to build, orchestrate and govern extremely successful AI solutions,” says Carlsson.

Sound tech strategies identify a business problem and then select the technologies to solve it, but with GenAI, users have been experimenting before they define a problem to solve or an expected payoff.

“[W]e believe there is promise of transformation with AI, but the practical path is unclear. This shift has led to a lack of focus and measurable outcomes, and the derailment of plenty of AI efforts in the first wave of AI initiatives,” says Brian Weiss, chief technology officer at hyperautomation and enterprise AI infrastructure company Hyperscience. “In 2025, we anticipate a more pragmatic or strategic approach where generative AI tools will be used to deliver value by attaching to existing solutions with clearly measurable outcomes, rather than simply generating content. [T]he success of AI initiatives hinges on a strategic approach, high-quality data, cross-functional collaboration and strong leadership.
By addressing these areas, enterprises can significantly improve their chances of achieving meaningful ROI from their AI efforts.”

Andreas Welsch, founder and chief AI strategist at boutique AI strategy firm Intelligence Briefing, says that early in the GenAI hype cycle, organizations were quick to experiment with the technology. Funding was made available, and budgets were consolidated to explore what the technology could offer, but they didn’t need to deliver ROI. Times have changed.

“Organizations who have been stuck in the exploration phase without assessing the business value first, are now caught off guard when the use case does not deliver a measurable return,” says


Mobile App Integration’s Day Has Come

The mobile application market is projected to grow at a compound annual growth rate (CAGR) of 14.3% between now and 2030, and businesses are capitalizing by developing mobile applications for customers, business partners, and internal use. In large part, the mobile app market is being driven by the explosive growth of mobile devices, which over 60% of the world’s population use. Not all of this use is confined to social media, emails, phone calls, and texts. Accordingly, businesses have launched retail websites for mobile devices, as well as transactional engines for mobile payment processing, e-commerce, banking, and booking systems for use on a variety of smart mobile devices.

In the process, the key for IT has been the integration of these new mobile applications with enterprise systems. How do you ensure that a mobile app is tightly integrated into your existing business processes and your IT base, and how do you ensure that it will perform consistently well every time it is used? Is your security policy across mobile devices as robust as it is across other enterprise assets, such as mainframes, networks, and servers? Does the user interface navigate equally well, and with a certain degree of consistency, no matter which device is used?

In most cases, IT departments (and users and customers) will say that total mobile device integration is still a work in progress.

The Role of Mobile App Integration

In the past, the integration of mobile applications with other IT infrastructure was more or less confined to the IT assets that the mobile app minimally needed to perform its functions. If the app was there for placing an online order, access to the enterprise order entry, inventory, and fulfillment systems was needed, but maybe nothing else for the first installation.
If the app was designed for a warehouse worker to operate a series of robots to pick and place items in a warehouse, it was specifically developed just for that, and on first installation, it might not have been integrated into inventory and warehouse management systems. However, now that tech companies are placing their R&D emphasis on smart phones and devices, IT needs to formulate a more inclusive integration strategy for mobile applications, one that makes these apps more “complete” when they launch.

The Elements of Mobile App Integration

To achieve total integration with the rest of the enterprise IT portfolio, and possibly with third-party services, a mobile app must do the following:

- Attain seamless data exchange across all systems, along with having the ability to invoke and use system-level infrastructure components, such as storage or system-level routines, to do its work.
- Use application programming interfaces (APIs) so it can access other IT and/or vendor systems.
- Conform to the same security and governance standards that other IT assets are subject to.
- Provide users and customers with a simple and (as much as possible) uniform graphical user interface (GUI).
- Be right-fitted into existing business and system workflows.

This isn’t just good IT. It also makes major contributions to user productivity and customer satisfaction.

Workflow Integration

In late 2024, a health insurance company unveiled an automated online process for new customer registration. Unfortunately, the new app didn’t include all the data elements needed for registration, and it actually froze in process. Users ended up calling the company and enduring long wait times until they could complete their registrations with a human agent. This was a case of workflow integration failure, because critical ingredients required for registration had been left out of the online mobile app. How did this happen?
The project might have been rushed through to meet a deadline, or signed off as a first (albeit incomplete) version of an app that would later be enhanced. Or, possibly, QA might have been skipped. But to an experienced IT “eye,” the app was clearly missing data, which suggested that integration with other enterprise systems, or data transfers via API with supporting vendor systems, had been missed.

The app’s process flow was also a “miss,” because if the project team had tested the mobile app’s process flow against the business workflow, they would have seen (like customers did) that key data elements were missing and that the workflow didn’t work. The project team should also have verified that security and governance standards had been met, and that the mobile app user experience was consistent whether the customer was using an iPhone or an Android.

Summary

Statista says that the mobile application market will reach $756 billion by 2027. In the US, 47% of mobile apps are being used for retail transactions, and another 19% are serving as portals, whether for customers, business partners, or employees. There is virtually no business that isn’t developing mobile apps today for its customers, business partners, and/or employees, but what has lagged is the same level of discipline over mobile app development that IT expects for traditional enterprise app development.

Central to this is mobile application integration. It’s no longer acceptable to let an app “fly” with just the basics, with many functions and data elements still missing. It’s time for top-to-bottom mobile app integration, whether that integration requires complete data, a uniform user experience across all devices, or something else.

source
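The workflow failure described above, a registration app shipping without all the data elements the business process requires, can be guarded against with a simple pre-submission check. The field names below are purely illustrative, not from any real insurer’s system; the point is that the app validates completeness against the downstream workflow before handing off, instead of freezing mid-process.

```python
# Hypothetical set of data elements the downstream registration
# workflow requires before it can complete.
REQUIRED_FIELDS = {"name", "date_of_birth", "policy_number", "plan_tier"}

def validate_registration(payload: dict) -> list:
    """Return the required data elements missing from a submission."""
    return sorted(REQUIRED_FIELDS - payload.keys())

submission = {"name": "Pat Doe", "date_of_birth": "1990-04-12"}
missing = validate_registration(submission)

if missing:
    # Fail fast with an actionable message instead of freezing in process
    # and pushing the customer to a phone queue.
    print(f"Cannot submit registration; missing: {missing}")
```

A check like this only works if the required-field list is derived from the actual business workflow, which is exactly the workflow-integration testing the article says the project team skipped.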


What Happens if AI No Longer Has Access to Good Data to Train On?

In a world dominated increasingly by AI, access to relevant data becomes paramount — but what if such streams of information dry up? Regulators at state, national, and international levels continue to watch how businesses capture and use data that could be used to train AI. If restrictions emerge that cut off access to data that AI needs, would the technology stall out despite its promises of innovation? Alternatives such as synthetic data exist, but are they sufficient to properly train AI and deliver results that actually matter to operations?

This episode features Shobha Phansalkar, vice president of client solutions and innovation for Wolters Kluwer; Olga Megorskaya, founder and CEO of Toloka; Pete DeJoy, co-founder and senior vice president of product for Astronomer; Melissa Bischoping, senior director of security and product design research at Tanium; and Omar Khawaja, field CISO at Databricks. They discussed the types of data that are necessary and relevant for training AI, how organizations might determine whether data is useful or simply junk, what happens if policy stonewalls data access, and whether or not AI simply dies without data.

Listen to the full podcast here.

source


Demand and Supply Issues May Impact AI in 2025

This may well be a sobering year when it comes to AI adoption, use, and scaling. On the demand side, organizations will be pulling investments back prematurely because they’re not seeing the value they expected. On the supply side, supply shortages, unmet expectations, and investor pressure have caused one big tech company to reduce AI infrastructure investments, and others will follow, according to Forrester.

To date, organizations have been investing heavily in AI and GenAI, not necessarily with a view toward ROI, though ROI can be difficult to quantify from a hard-dollar perspective, which senior executives and boards now want. The anticipated shortage of infrastructure will also likely have an impact.

What’s Happening on the Demand Side

Organizations will not continue to increase investments in AI if they’re not seeing the value they expect.

“[C]ompanies are scaling back on their AI investments or too impatient in terms of ROI. They will [likely] scale back on their AI investment prematurely, which is not a good strategy,” says Jayesh Chaurasia, analyst at Forrester. “The other factor that might be fueling this is the current economic climate. In the last three months, almost everyone is trying to cut back on any type of investment that is not generating a clear ROI, and not only the AI-related stuff.”

Executives are asking for ROI numbers on analytics, data governance, and data quality programs, and they are demanding dollar values, as opposed to “improving customer experience” or “increasing operational efficiency.”

“In 2023 and this year too, we are seeing more focus on ROI related to generative AI,” says Chaurasia.
“Almost every executive was talking about how generative AI is going to just change the world, but it’s not as easy as just deploying a model or a [generative] AI function and then [saying] your job is done, because there is a foundational data analytics requirement that will eventually enable it, which means you need to have proper privacy and security protocols, [such as] access management and data governance. You also must supply better data quality [because] these models are trained on the entire data set from the internet.”

The fact that people know the models are trained on internet data has inspired internet postings that are intentionally inaccurate or misleading, so the models won’t work right.

“The better answer is, of course, to use your own industry enterprise data, which gives the AI model more information about your company,” says Chaurasia. “You can very easily set up a connection with your data warehouse and get all the data into the model, but it’s not that easy because privacy, security, and governance are not in place. So, you’re not 100% sure whether you’re sharing your data with the model or with the entire world.”

Organizations have expected quick returns but haven’t realized them because the initial expectations were unrealistic. Later comes the realization that the proper foundation has not been put in place.

“Folks are saying they expect ROI in at least three years, and more than 30% or so are saying that it would take three to five years, when we’ve [had] two years of generative AI. [H]ow can you expect it to perform so quickly when you think it will take at least three years to realize the ROI? Some companies, some leadership, might be freaking out at this moment,” says Chaurasia. “I think the majority of them have spent half a million on generative AI in the last two years and haven’t gotten anything in return. 
That’s where the panic is setting in.”

Explaining ROI in terms of dollars is difficult because it’s not as easy as multiplying time savings by individual salaries. Some companies are working to develop frameworks, however.

“Some managers are reaching out to every business unit to ask [about] the benefits that they have received, with proper understanding of ownership, where the data exists, [and the] lineage of [a] particular data set. They are using some custom surveys to reach out to all the employees in the organization [to ask] for their suggestions as well as their metrics,” says Chaurasia. “Unfortunately, there is no single framework that I would suggest works for every company.”

Chaurasia is working on KPIs for the various domains (quality, governance, MDM, data management, data storage, and everything else companies can track over time to see improvement), but they’re not connected to a dollar value.

“What I’m recommending is [to] find, at the tactical, managerial, and executive levels, what matters to them [and have] KPIs for each of those different levels to maintain and calculate that ROI regularly, so that they can use those KPI metrics to show whether they have improved over time or not.”

View From the Supply Side

If enterprises are reducing AI investments because the anticipated benefits aren’t being realized, vendors will pull back. Meanwhile, China has banned the export of critical materials required for semiconductors and other technologies in response to President-elect Donald Trump’s planned tariffs, not to mention the downstream impacts of tariffs: higher production costs, and therefore higher tech prices that IT departments will have to bear when budgets are already tight and may become tighter.
Bottom Line

Infrastructure shortages due to reduced AI investments on the demand side, combined with higher prices and a potential US chip shortage due to a lack of materials on the supply side, would in turn affect the calculus of AI ROI. There are also broader impacts of the incoming administration’s policies, such as mass deportation, which could affect tech workers, including AI talent, and their employers.


How Do Companies Know if They Overspend on AI and Then Recover?

The race among enterprises to exploit AI’s competitive advantages can lead to expenditures that spiral out of control. Hiring third-party AI developers, training internal IT teams, sourcing data, and other costs can add up fast. In this early stage of the AI era, it can be easy to think a blank check to develop the technology will deliver success. At what point should ROI be measured for AI? What happens if a company overspends on AI? Can the project be recovered, or should the company cut its losses and move on? This episode of DOS Won’t Hunt featured Manish Goyal, vice president and senior partner for AI and analytics for IBM Consulting; Richard Buractaon, head of AI for Andesite; Carter Busse, CIO for Workato; and Ashok Reddy, CEO of KX. They discussed such matters as whether a barometer exists that companies can compare their AI expenses against and what might be a “reasonable” percentage of budget to spend on AI. Listen to the full episode here.
