How Trump Policies Are Affecting The Right To Repair

By Jennifer Frank, Matthew Dunn and John Griem (March 28, 2025, 4:29 PM EDT) — Multiple recent policy developments under the second Trump administration appear likely to affect the rights of independent repair shops and individual consumers to repair electronic devices….


Be THE Human In The Loop: Data And AI Literacy Is Your Edge

AI is transforming the way we live, work, and play. It's altering how we make decisions and interact with technology. But for all its power, it still needs humans (for now) — not just any humans, but those who understand how AI works, the dependencies between good data and useful AI outputs, and where human judgment is irreplaceable. In a world rushing toward automation, data and AI literacy isn't just a skill — it is how you become THE human in the loop.

What Does It Mean To Be "The Human In The Loop"?

The phrase "human in the loop" (HITL) comes from AI and machine learning, referring to the humans who step in to guide, correct, or make sense of AI-driven processes. Sometimes, it means reviewing AI-generated decisions to catch mistakes (think fraud detection or medical diagnoses). Other times, it's about injecting human expertise where AI lacks context, nuance, or ethical reasoning. If you've attended a conference in the past year, the HITL is what vendors point to when assuring people with AI concerns that humans will still be part of key governance structures and decision-making. What is often overlooked is how many humans will be in the loop, what the loops might look like, or how many AI/software loops one human can be responsible for.

Here is our reality: Not all humans in the loop will be equal. Some will be passive overseers, clicking "approve" or "reject" on AI recommendations (the hospital scene from the 2006 film "Idiocracy" comes to mind here). Others will be active decision-makers driven within a culture of inquiry who shape how AI is used, train models with better data, and ask questions before being prompted by an algorithm. The key difference between passive human drones and those actively involved in guiding AI decisions is data and AI literacy within a culture of inquiry.
Why AI And Data Make You Indispensable

Two short anecdotes illustrate this point well: Over the past year, I've been showing a friend who works at a bank how the simple use of AI tools outside of her company can help her improve engagement and impact at work. She was just highlighted at work for being "forward-thinking and proactive" for getting creative without sacrificing security. KPMG recently gave me a demo of its "Curiosity Workbench," an AI tool that helps its employees locate and leverage decades of knowledge, data, and expertise to help with clients and get them moving quickly.

Both of these examples depend on humans interpreting information and learning more by being curious and inquisitive. After all, AI is only as good as the data it learns from — and data is only as useful as the humans interpreting it. If you want to be the human in the loop, you need three things:

Data literacy: the foundation. AI depends on clean, consistent, relevant, and representative data. Without data literacy, you're just a spectator to the AI revolution. With it, you're the one shaping impact. Ask yourself:

- Can you spot bad data before it leads to bad outcomes?
- Do you understand how bias can slide into datasets like a creepy social media stalker can slide into your DMs?
- Can you interpret AI-driven insights to make business decisions, rather than just accepting whatever a model spits out?

AI literacy: the next level. AI literacy isn't about coding your own model from scratch. It's about understanding how AI influences decisions, where it's useful, and where it needs a human course correction. In 2025, I ask our clients to imagine that AI is like the world's best intern: It can do 80% of most common jobs very well, but that remaining 20% is still pretty suspect and needs the guidance of a wiser mentor who can work with it to get you 100% there. Ask yourself:

- Do you know how AI models make predictions and where they can go wrong?
- Can you question AI outputs instead of blindly trusting them?
- Are you aware of ethical risks, compliance issues, and real-world AI failures?

Enterprise culture of (data) inquiry. AI is just software, but without a body of users who are enabled to find it, ask questions of it, grow using it, communicate with it, and trust it, it is as worthless as the grains of sand its chips are built from. A culture of inquiry is one where all are empowered, in a psychologically safe environment, to ask questions and share commentary. A culture of data inquiry ensures that, within that safe environment, users can locate, leverage, trust, and communicate the insights found within data without fear. Ask yourself:

- Do I work within an environment where all can locate data?
- Do I work in an environment where all can leverage data?
- Do I work in an environment where all can trust data?
- Do I work in an environment where all can communicate data?

Be The One Behind The AI

Automation is here for many routine tasks. But to truly make the most of it, organizations will need humans who:

- Understand when AI is making good vs. bad recommendations.
- Know how to validate AI insights before acting on them.
- Can explain AI-driven decisions in clear, human terms — to coworkers, executives, regulators, and customers.
- Can translate business challenges to more technical and data-focused AI engineers while also listening to and learning from them in turn.

Being the human in the loop isn't about resisting AI. It's about being the person who knows how to use it responsibly, effectively, and strategically.

Now What?

Reach out for an inquiry ([email protected]) with me today to uncover your natural strengths and purpose via your own roles, goals, and values VIP evaluation, to improve your own data communications and data storytelling skills, and then to discover how to build your enterprise culture of data inquiry via curiosity velocity and data and AI literacy programming. I look forward to working with you!


Apple Rolls Out iOS 18.4 With New Languages, Emojis & Apple Intelligence in the EU

(Photo: Apple News+ Food feed. Image: Apple)

Apple has deployed iOS 18.4 to all compatible iPhones. The software update adds support for eight new languages in Apple Intelligence, recipes to Apple News+, and seven new emojis. Users in the European Union can also set a default navigation app other than Apple Maps. You should be prompted about the update automatically, but if not, you can initiate the download manually by going to Settings, General, and then Software Update. Apple Intelligence features are only available on iPhone 16 models, iPhone 15 Pro, and iPhone 15 Pro Max. TechRepublic breaks down all the biggest new features coming to your iPhone with iOS 18.4.

SEE: Apple iOS 19: Here's What to Expect & When

Apple Intelligence: New languages, EU access, Vision Pro integration

Apple Intelligence now supports these additional languages: French, German, Italian, Portuguese (Brazil), Spanish, Japanese, Korean, Chinese (simplified), and localised English for Singapore and India. It is also now finally available to iPhone and iPad users in the EU after "regulatory uncertainties brought about by the Digital Markets Act" held up its release in the region. Apple Intelligence also now reads and prioritises your iPhone notifications, putting the most urgent alerts at the top, and a "Sketch" style option has been added to Image Playground. It also provides summaries of user reviews for apps listed in the App Store.

New emojis

iOS 18.4 adds seven new emojis to the iPhone keyboard to help you express yourself better in messages:

- Face with bags under eyes
- Fingerprint
- Leafless tree
- Root vegetable
- Harp
- Shovel
- Splatter

New system languages support

Ten new system languages are now available on iPhones with iOS 18.4: Bangla, Gujarati, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, Telugu, and Urdu.
Default navigation app choice for EU users

iPhone users in the EU will be able to change their default navigation app with this update from Apple Maps to alternatives like Google Maps or Waze; this will apply to both the handset and CarPlay. The option has been added in response to the EU's Digital Markets Act, which requires Apple to allow more competition and give consumers greater control over app preferences. Apple first announced this and a number of other changes necessary for DMA compliance in August.

SEE: EU Cracks Down on Apple for Anti-Competitive Behavior

Vision Pro app

For iPhone users with a Vision Pro headset, upgrading to iOS 18.4 will add the new Vision Pro app to your device. This helps users discover and download Vision Pro content, manage device settings, and set up Guest Mode.

Apple News+ recipes

For budding chefs, subscribers of Apple News+ will find a whole host of recipes in the app that they can search through and save for later. When you're ready to cook, you can load the recipe in Cooking Mode, which displays each step clearly and individually. The new Food section also shows cooking tips and restaurant reviews.

SEE: Apple's Next Big Thing is AI on Smart Watches

Photos: New filters and collection features

The Photos app has been updated with new filters that let users show or hide images based on criteria such as whether they've been shared with others, synced from a Mac or PC, or included in albums. Albums can be sorted by Date Modified, and items in the Media Types and Utilities collections can be reordered to prioritise videos, selfies, or screenshots. Filters like Oldest First will be available across all collections, and the Recently Viewed and Recently Shared collections can be disabled. In addition, Hidden photos won't be imported to a Mac or PC if Face ID is required to unlock them.

CarPlay: Big screen display and sports scores

CarPlay has been updated with iOS 18.4.
Now, if the screen in your car is large enough, the CarPlay Home screen will show three rows of apps rather than two. Sports scores can also appear on a new Now Playing interface, thanks to an updated API made available to sports apps.

Parental controls updated

Apple has simplified the process of creating a Child Account by automatically applying child-appropriate settings before the setup is fully complete, allowing parents to step away and finish later. It has also made it so that Screen Time App Limits remain enforced even if a child uninstalls and reinstalls an app.


Hugging Face submits open-source blueprint, challenging Big Tech in White House AI policy fight

In a Washington policy landscape increasingly dominated by calls for minimal AI regulation, Hugging Face is making a distinctly different case to the Trump administration: open-source and collaborative AI development may be America's strongest competitive advantage. The AI platform company, which hosts more than 1.5 million public models across diverse domains, has submitted its recommendations for the White House AI Action Plan, arguing that recent breakthroughs in open-source models demonstrate they can match or exceed the capabilities of closed commercial systems at a fraction of the cost. In its official submission, Hugging Face highlights recent achievements like OlympicCoder, which outperforms Claude 3.7 on complex coding tasks using just 7 billion parameters, and AI2's fully open OLMo 2 models that match OpenAI's o1-mini performance levels. The submission comes as part of a broader effort by the Trump administration to gather input for its upcoming AI Action Plan, mandated by Executive Order 14179, officially titled "Removing Barriers to American Leadership in Artificial Intelligence," which was issued in January. The Order, which replaced the Biden administration's more regulation-focused approach, emphasizes U.S. competitiveness and reducing regulatory barriers to development. Hugging Face's submission stands in stark contrast to those from commercial AI leaders like OpenAI, which has lobbied heavily for light-touch regulation and "the freedom to innovate in the national interest," while warning about China's narrowing lead in AI capabilities.
OpenAI's proposal emphasizes a "voluntary partnership between the federal government and the private sector" rather than what it calls "overly burdensome state laws."

How open source could power America's AI advantage: Hugging Face's triple-threat strategy

Hugging Face's recommendations center on three interconnected pillars that emphasize democratizing AI technology. The company argues that open approaches enhance rather than hinder America's competitive position. "The most advanced AI systems to date all stand on a strong foundation of open research and open source software — which shows the critical value of continued support for openness in sustaining further progress," the company wrote in its submission.

Its first pillar calls for strengthening open and open-source AI ecosystems through investments in research infrastructure like the National AI Research Resource (NAIRR) and ensuring broad access to trusted datasets. This approach contrasts with OpenAI's emphasis on copyright exemptions that would allow proprietary models to train on copyrighted material without explicit permission. "Investment in systems that can freely be re-used and adapted has also been shown to have a strong economic impact multiplying effect, driving a significant percentage of countries' GDP," Hugging Face noted, arguing that open approaches boost rather than hinder economic growth.

Smaller, faster, better: Why efficient AI models could democratize the technology revolution

The company's second pillar focuses on addressing resource constraints faced by AI adopters, particularly smaller organizations that can't afford the computational demands of large-scale models. By supporting more efficient, specialized models that can run on limited resources, Hugging Face argues the U.S. can enable broader participation in the AI ecosystem.
"Smaller models that may even be used on edge devices, techniques to reduce computational requirements at inference, and efforts to facilitate mid-scale training for organizations with modest to moderate computational resources all support the development of models that meet the specific needs of their use context," the submission explains.

On security — a major focus of the administration's policy discussions — Hugging Face makes the counterintuitive case that open and transparent AI systems may be more secure in critical applications. The company suggests that "fully transparent models providing access to their training data and procedures can support the most extensive safety certifications," while "open-weight models that can be run in air-gapped environments can be a critical component in managing information risks."

Big tech vs. little tech: The growing policy battle that could shape AI's future

Hugging Face's approach highlights growing policy divisions in the AI industry. While companies like OpenAI and Google emphasize speeding up regulatory processes and reducing government oversight, venture capital firm Andreessen Horowitz (a16z) has advocated for a middle ground, arguing for federal leadership to prevent a patchwork of state regulations while focusing regulation on specific harms rather than model development itself. "Little Tech has an important role to play in strengthening America's ability to compete in AI in the future, just as it has been a driving force of American technological innovation historically," a16z wrote in its submission, using language that aligns somewhat with Hugging Face's democratization arguments.

Google's submission, meanwhile, focused on infrastructure investments, particularly addressing "surging energy needs" for AI deployment — a practical concern shared across industry positions.
Between innovation and access: The race to influence America's AI future

As the administration weighs competing visions for American AI leadership, the fundamental tension between commercial advancement and democratic access remains unresolved. OpenAI's vision of AI development prioritizes speed and competitive advantage through a centralized approach, while Hugging Face presents evidence that distributed, open development can deliver comparable results while spreading benefits more broadly.

The economic and security arguments will likely prove decisive. If administration officials accept Hugging Face's assertion that "a robust AI strategy must leverage open and collaborative development to best drive performance, adoption, and security," open source could find a meaningful place in national strategy. But if concerns about China's AI capabilities dominate, OpenAI's calls for minimal oversight might prevail.

What's clear is that the AI Action Plan will set the tone for years of American technological development. As Hugging Face's submission concludes, both open and proprietary systems have complementary roles to play — suggesting that the wisest policy might be one that harnesses the unique strengths of each approach rather than choosing between them. The question isn't whether America will lead in AI, but whether that leadership will bring prosperity to the few or innovation for the many.


ATM Company Sanctioned For 'Objectively Frivolous' Claim

By Chart Riggall (April 3, 2025, 4:51 PM EDT) — A Georgia federal judge on Wednesday tossed an attempt to relitigate a patent infringement suit brought by an ATM technology company against a competitor, and sanctioned its attorneys for bringing the "objectively frivolous" claim that the competitor defrauded the court in a previous suit….


Beyond generic benchmarks: How Yourbench lets enterprises evaluate AI models against actual data

Every AI model release inevitably includes charts touting how it outperformed its competitors in this benchmark test or that evaluation matrix. However, these benchmarks often test for general capabilities. For organizations that want to use models and large language model-based agents, it's harder to evaluate how well the agent or the model actually understands their specific needs.

Model repository Hugging Face launched Yourbench, an open-source tool where developers and enterprises can create their own benchmarks to test model performance against their internal data.

Sumuk Shashidhar, part of the evaluations research team at Hugging Face, announced Yourbench on X. The feature offers "custom benchmarking and synthetic data generation from ANY of your documents. It's a big step towards improving how model evaluations work." He added that Hugging Face knows "that for many use cases what really matters is how well a model performs your specific task. Yourbench lets you evaluate models on what matters to you."

Creating custom evaluations

Hugging Face said in a paper that Yourbench works by replicating subsets of the Massive Multitask Language Understanding (MMLU) benchmark "using minimal source text, achieving this for under $15 in total inference cost while perfectly preserving the relative model performance rankings."

Organizations need to pre-process their documents before Yourbench can work. This involves three stages:

- Document Ingestion, to "normalize" file formats
- Semantic Chunking, to break down the documents to meet context window limits and focus the model's attention
- Document Summarization

Next comes the question-and-answer generation process, which creates questions from information in the documents. This is where the user brings in their chosen LLM to see which one best answers the questions.
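The semantic chunking stage described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not Yourbench's actual implementation: the function name and the character-based budget are assumptions made for illustration, whereas real pipelines typically chunk by token count and semantic similarity.

```python
# Hypothetical sketch of a "semantic chunking" pre-processing stage:
# pack a normalized document's paragraphs into chunks small enough to
# fit a model's context window. Character counts stand in for tokens.

def chunk_document(text: str, max_chars: int = 2000) -> list[str]:
    """Greedily pack paragraphs into chunks of at most max_chars each."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        # Start a new chunk when adding this paragraph would overflow.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# Ten ~92-character paragraphs packed under a 200-character budget
# yields five chunks of two paragraphs each.
doc = "\n\n".join(f"Paragraph {i} " + "x" * 80 for i in range(10))
pieces = chunk_document(doc, max_chars=200)
print(len(pieces))  # 5
```

Each chunk can then be summarized and fed to the question-generation step independently, which is what keeps per-call context (and therefore inference cost) bounded.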
Hugging Face tested Yourbench with DeepSeek V3 and R1 models; Alibaba's Qwen models, including the reasoning model Qwen QwQ; Mistral Large 2411 and Mistral 3.1 Small; Llama 3.1 and Llama 3.3; Gemini 2.0 Flash, Gemini 2.0 Flash Lite and Gemma 3; GPT-4o, GPT-4o mini and o3-mini; and Claude 3.7 Sonnet and Claude 3.5 Haiku. Shashidhar said Hugging Face also offers cost analysis on the models and found that Qwen and Gemini 2.0 Flash "produce tremendous value for very very low costs."

Compute limitations

However, creating custom LLM benchmarks based on an organization's documents comes at a cost. Yourbench requires a lot of compute power to work. Shashidhar said on X that the company is "adding capacity" as fast as it could. Hugging Face runs several GPUs and partners with companies like Google to use their cloud services for inference tasks. VentureBeat reached out to Hugging Face about Yourbench's compute usage.

Benchmarking is not perfect

Benchmarks and other evaluation methods give users an idea of how well models perform, but these do not perfectly capture how the models will work day to day. Some have even voiced skepticism that benchmark tests show models' limitations and can lead to false conclusions about their safety and performance. A study also warned that benchmarking agents could be "misleading."

However, enterprises cannot avoid evaluating models now that there are many choices in the market, and technology leaders must justify the rising cost of using AI models. This has led to different methods of testing model performance and reliability.

Google DeepMind introduced FACTS Grounding, which tests a model's ability to generate factually accurate responses based on information from documents. Some Yale and Tsinghua University researchers developed self-invoking code benchmarks to guide enterprises in choosing which coding LLMs work for them.
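The evaluation step itself (running each candidate model over the generated question set and comparing its answers to the references) can be sketched as a toy illustration. The exact-match metric and the dictionary-backed stand-in "models" below are my own assumptions for demonstration; real Yourbench scoring of free-form LLM answers is more sophisticated.

```python
# Toy sketch of scoring candidate models against Q&A pairs generated
# from an organization's documents. Exact-match scoring and the
# dictionary-backed stand-in "models" are illustrative assumptions.

def score_model(answer_fn, qa_pairs) -> float:
    """Fraction of generated questions the model answers correctly."""
    correct = sum(
        1 for question, reference in qa_pairs
        if answer_fn(question).strip().lower() == reference.strip().lower()
    )
    return correct / len(qa_pairs)

# Q&A pairs of the kind generated from internal documents (invented here).
qa = [
    ("What year was the policy issued?", "2024"),
    ("Which team owns the data pipeline?", "Data Engineering"),
    ("What is the support SLA in hours?", "24"),
]

# Stand-ins for two candidate LLMs (real use would call model APIs).
def model_a(q):
    return dict(qa)[q]            # answers every question correctly

def model_b(q):
    return "2024" if "year" in q else "n/a"   # only knows one fact

print(score_model(model_a, qa))                # 1.0
print(round(score_model(model_b, qa), 2))      # 0.33
```

Ranking candidates by such a score over an organization's own documents, rather than a generic benchmark, is the core idea the article describes.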


2. Views of risks, opportunities and regulation of AI

As the role of artificial intelligence in daily life grows, its challenges and opportunities are front and center for experts and the public alike. This chapter covers where experts and the American public differ in their excitement and worries, as well as where they think AI might surpass humans. It also walks through the areas of agreement, such as on government regulation, corporate responsibility, and concerns about AI bias and misinformation.

Concern and excitement over AI

AI experts are far more enthusiastic than the American public about the increased use of AI in daily life. The public, on the other hand, expresses far more concern. Roughly half of the experts surveyed say they are more excited than concerned (47%) about the increased use of AI in daily life. By contrast, only 11% of U.S. adults say this. About half of U.S. adults (51%) say they are more concerned than excited. This drops dramatically to 15% among the experts surveyed.

What's more, the U.S. public has become more concerned over recent years. The share who say they are more concerned than excited increased from about four-in-ten in 2021 and 2022 to roughly half in 2023. Today, identical shares of both groups say that they are equally concerned and excited (38% each).

(Chart: By gender, among AI experts surveyed and U.S. adults)

Among both the public and AI experts, men are more excited than women about the increased use of AI in daily life. The gender difference on excitement is wider among AI experts, though.

- AI experts: A far greater share of men than women say they are more excited than concerned (53% vs. 30%). While just 11% of men say they're more concerned than excited, that ticks up to 24% of women.
- U.S. public: Men are again more likely than women to say they are more excited than concerned (15% vs. 7%). While 46% of men are more concerned, that rises slightly to 55% among women.

In in-depth interviews, we asked AI experts about what uses of AI excite them, and why.
Some themes include making life easier or more efficient and improving outcomes for certain industries. (Quotes have been lightly edited for grammar and clarity.)

Quotes from AI experts: Reasons for excitement about AI

"I think broadly some of the things that excite me are things like applications that can save people a lot of time from repetitive and mundane tasks. So I think automating some of those workflows."

"I've seen that the AI can improve a lot the accuracy of the diagnosis of different diseases. Also, it can boost the development of different medicines for different treatments. Like for instance, for breast cancer classification, it can improve a lot. It can decrease the false positive rates and false negative rates. Most excited about the positive impact that it could have in the health industry."

And we also asked experts what uses of AI concern them, and why. Some themes include data privacy and misinformation.

Quotes from AI experts: Reasons for concern about AI

"I do think about how that [airport biometrics] technology is used, especially from a privacy and security standpoint. … Where's that data going? How is it being housed? Where is it being used for? Where is my consent? Can I really, truly say no, I don't want my picture taken, but what is the consequence of me saying that and still trying to make it to my flight at home?"

"Misinformation has always been an issue with technology. … But I think the main issue with AI and misinformation is that you can now do misinformation at scale, at a way larger scale."

A 2021 Center survey found that some related themes also arose among U.S. adults when asked why they were either more concerned or more excited.

Specific concerns about AI

Our new survey also gives us the chance to compare expert and public concern in several key areas, including those related to "deepfakes," misinformation, job displacement and AI bias. The public is more worried about losing jobs – and human connection – than AI experts are.
Continuing a theme from our broader body of research, we find the public is anxious about AI's impact on work. More than half of U.S. adults are extremely or very concerned about AI eliminating jobs, versus a smaller share of experts surveyed (56% vs. 25%). The public also fears the loss of human connection more than experts do (57% vs. 37%).

There's wide concern about inaccurate information. Seven-in-ten of the experts we surveyed and 66% of U.S. adults are highly worried about people getting inaccurate information from AI.

Impersonation and data misuse are also among the top concerns. The public is more worried about each of these things than experts – most U.S. adults are highly worried. Still, six-in-ten experts say they are extremely or very concerned about data misuse, and roughly two-thirds say this about AI being used to impersonate people.

Experts and the public align in their concerns about bias. Identical shares of each group (55%) are highly worried about this. About half or more of experts and the public also express notable concern about people not understanding what AI can do.

(Chart: By gender, among AI experts surveyed and U.S. adults)

There are gender differences on specific concerns about AI as well. Some of the biggest are on data misuse, bias and inaccurate information. On the other hand, women feel similarly to men about impersonation and job loss. Among the general public, most gender differences on this topic are minimal. Loss of human connection is one place we see women being somewhat more concerned in both groups, though. Women are slightly more likely than men to be highly worried about AI leading to this, both among experts (45% vs. 35%) and the public (63% vs. 52%).

AI's personal impact

For some, concerns about AI extend to how they see their own futures. The public is more likely to foresee personal harm from AI than benefit, though


Medallia And Qualtrics Conference Highlights: Rivals Offer Different Plans For AI Enhancements

Over the past two weeks, we attended back-to-back CX events: first, Qualtrics' X4 in Salt Lake City, then Medallia Experience in Las Vegas. Both Leaders in The Forrester Wave™: Customer Feedback Management Solutions, Q4 2024, these vendors court enterprisewide CX programs as well as digital, contact center, and location-based operations leaders. Despite the similarities in their products and target audiences, the two rivals laid out different plans. Both providers announced a host of new or enhanced features, but of course the focus was on all things AI.

Qualtrics made a bold jump into the (very busy) AI agent space with its announcement of Experience Agents. Experience Agents will be customer-facing, able to deliver chatlike experiences that go beyond menu-driven chatbots, such as helping customers find products tailored to their needs, performing real-time service recovery, and conducting conversational surveys. In contrast, Medallia's AI-related announcements mainly focused on employee-facing enhancements, including AI-supported Root Cause Assist and text analytics theme improvements. These features will help employees get more out of unstructured text and accelerate the insights-to-action process.

What both approaches have in common is that providers in this space continue to offer more than most clients can — or want to — handle. For many clients, especially those in healthcare or financial services, using generative AI in customer-facing applications is still too risky. For others, their organization's unwillingness to pull more data into these platforms will limit the value of the AI-enhanced features. Attendees at both events echoed what we see in our work with clients: Many are still struggling to mature beyond surveys to look at other sources of data, earn stakeholder buy-in, and show how CX connects to business goals. Attendees are excited for AI-powered tools, but they are realistic in understanding that these are in fact just tools.
Organizational culture and strategy remain just as important and no less challenging. Whether you're using Qualtrics, Medallia, or another CFM solution, these events should have you thinking about:

- AI. No kidding. But as ServiceNow CEO Bill McDermott said from the Qualtrics stage, "the worst advice I can give you is to wait for second-mover advantage." While your organization might not be ready to use the new AI-powered features, it's time to start figuring out a path toward using AI to help understand your customers and create better interactions.
- Data. As Carolynn Smith, vice president and head of USB Service at Prudential Financial, said during a Medallia breakout session, "you can't just layer genAI on top of bad data." Prudential has been on a 10-year journey to modernize its data, and that labor is paying off now as it is able to experiment with lots of different AI innovations across the business. CX pros need to get closer to their data and IT counterparts to ensure that customer feedback is part of the organization's data and AI strategy.
- Employees. There was a lot of talk about EX and CX connections during these events, but CX pros also need to think about how they can help employees boost their artificial intelligence quotient (AIQ) to leverage AI-enabled tech. Don't underestimate the amount of internal effort needed to bring employees along as AI becomes more and more a part of the everyday.

There's a lot more to unpack from these events. If you're a client and want to learn more, please give us a call!

Medallia And Qualtrics Conference Highlights: Rivals Offer Different Plans For AI Enhancements

Soaring AI energy use sparks call to ‘fundamentally redesign’ computing

One of Europe's leading climate tech VC firms has called for a "fundamental redesign" of traditional computing methods amid surging energy consumption from AI applications. The Berlin-based World Fund warns that simply transitioning data centres to renewable power will not be enough to fully decarbonise AI compute.

"We need to rethink the way we go about computing, from the materials and chips we use to the software we run," Daria Saharova, founding partner at World Fund, said at the Future of Green Computing event in Munich today.

At the event, World Fund joined Dealroom and Intel's deeptech accelerator Ignite to unveil a new report that proposes a set of emerging technologies — from chips made in space to processors that mimic the brain — to curb AI's enormous appetite for energy and usher in a new era of greener computing.

Using data from Dealroom, the report maps out the green computing ecosystem. It identifies 65 startups in this space, 54 of them European, which have collectively raised $900mn. Over half of these companies were founded within the past five years, and 12 emerged in just the last 12 months.

A greener vision of AI

The report highlights the three technologies with the most potential to decarbonise AI.

The first is advanced semiconductor materials such as gallium nitride (GaN), silicon carbide (SiC), and graphene. These could significantly reduce AI's energy consumption by improving efficiency and thermal performance in computing hardware. One of the leading innovators in this space is Welsh startup Space Forge, which is leveraging the microgravity, vacuum, and extreme temperatures of space to produce semiconductors that it claims are three to five times purer than those made on Earth.
"We've pushed the efficiency of silicon chips to their limit," said Joshua Western, CEO and cofounder of Space Forge.

Another promising avenue lies in new computing paradigms such as quantum, neuromorphic, and optical computing. Quantum computers, for instance, promise to solve complex calculations much faster than classical machines, potentially reducing computational time and overall energy consumption.

"Classical computers are getting too big, too expensive, and use too much energy and water," said Inés De Vega, VP of innovation at IQM, Europe's best-funded quantum computing startup. "Quantum computing can both find new solutions to climate change and drastically reduce the overall energy consumption of computing itself."

Optical computing, which leverages photons — particles of light — instead of electrons, is also gaining traction. It could dramatically increase processing speed, as demonstrated by Germany's Black Semiconductor, whose photonics processors could transmit signals 100 to 1,000 times faster than traditional electronic chips. Anastasiia Nosova, a former chip engineer at German semiconductor giant Infineon and host of the Anastasi In Tech podcast, argued that photonic chips could be 100 times more energy efficient than regular silicon semiconductors. "They are one of the most important developments in computing right now," she said at the Munich event.

While hardware fixes will be critical, there is also work to be done on advanced software that makes AI's energy use more efficient. One of the startups working on this is London-based Deep Render, which uses deep learning to compress files while retaining quality beyond what was previously possible. This reduces the volume of data that needs to be transmitted or stored, and thus the amount of computing power required.

While these technologies hold potential, they are still in the nascent stages of development.
Meanwhile, the energy needed to train AI models is doubling every three to four months, according to OpenAI.

"For these computing solutions to scale in Europe, we need a lot of venture capital but also government backing," said Saharova. She believes that Europe needs to allocate about €1 trillion to bring climate tech, including green computing, to the "level it needs to be."
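To put that doubling rate in perspective, here is a minimal sketch of the compounding it implies. The three-to-four-month figure comes from the report above; the function and example numbers are purely illustrative.

```python
def growth_factor(months: float, doubling_period_months: float) -> float:
    """Multiplicative growth over `months`, given a doubling period in months."""
    return 2 ** (months / doubling_period_months)

# A doubling period of 3-4 months compounds to roughly 8x-16x more
# training energy per year, and roughly 64x-256x over two years.
per_year_low = growth_factor(12, 4)   # 8.0
per_year_high = growth_factor(12, 3)  # 16.0
```

This is why the report argues that efficiency gains in hardware and software, not just renewable supply, are needed: exponential demand quickly outruns any fixed increment of clean capacity.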


How to Prioritize Multiple Innovation Projects

Innovations arrive at a rapid pace. To stay on top of the latest promising breakthroughs — and weed out the flops — IT leaders must create and staff innovation projects. Yet when working with limited resources (and which IT leader isn't?), it's important to find a way to prioritize initiatives.

Start by mapping each project to a specific business goal or customer need — this ensures real impact, advises Rohan Sharma, a former innovation team leader at scientific instrumentation firm Thermo Fisher Scientific and now an independent author and lecturer. "Next, weigh key factors such as ROI, resource availability, and risk tolerance," he recommends in an email interview. "Finally, create a transparent scoring or ranking system so everyone understands why certain projects come first."

Sharma says this approach forces discipline. "Instead of running with the coolest idea, you're aligning with strategy and measurable outcomes," he explains. "It also demystifies decision-making for your team, reinforcing trust and focus."

Risks and Rewards

A reliable way to prioritize innovation projects is to weigh each initiative's risks and rewards, suggests Nick Esposito, founder of NYCServers, which specializes in hosting services for fintech and trading platforms. "It's about looking at the potential impact, how doable the project is, and whether it fits with the company's long-term goals," he says in an online interview.

Esposito notes that projects with a potentially high financial or competitive reward are generally worth prioritizing, just as long as the risks remain manageable. Don't forget to consider the project's time-sensitivity and whether it can be completed on schedule, he adds.
"By focusing on projects that offer the biggest benefits with reasonable risks, organizations can get the most out of their innovation efforts."

Innovative Approaches

Prateek Shrivastava, advanced analytics manager at engine and power-generation manufacturer Cummins, says his team relies on what he calls "The WIZGIF Method," an abbreviation of "What Is the Goal in Focus?" "This approach ensures that every project is evaluated based on its alignment with the overarching business goal," he explains in an online interview. "By breaking down priorities into clear, actionable criteria — such as business impact, strategic alignment, feasibility, and required resources — it creates a structured framework for decision-making."

Shrivastava believes the WIZGIF method is effective because it forces clarity and alignment from the outset. "By keeping the business goal in sharp focus, it minimizes distractions and ensures that all efforts are contributing to the organization's strategic objectives," he states. "This approach fosters collaboration and transparency while keeping teams agile in responding to evolving needs."

Benjamin Atkinson, innovation director at CNA Insurance, takes an alternate position: he feels that project prioritization should generally be avoided. "When we talk of innovation, we're usually talking about problem-solving in a complex adaptive system," he says via email. "We simply can't know in advance which ideas will succeed — picking winning ideas is a loser's game."

If leaders want successful ideas, they must provide their teams with a clear direction, a clearly defined problem space, and known constraints, Atkinson says. "If leaders take the time to do this, they will have created a magnet for good ideas."

Seeking Support

Sharma says cross-functional peers in areas such as finance, operations, and product teams are the best innovation allies.
"They offer diverse viewpoints on feasibility, budget, and timing," he explains. "Tapping into an executive sponsor can also help keep priorities aligned with the bigger organizational picture."

Working closely with cross-functional teams, including business analysts, finance departments, and product managers, can provide a clear understanding of a project's feasibility and potential value, Esposito says. External consultants and other industry experts can also offer valuable insights, especially when exploring new or unfamiliar technologies. "Collaborating with these resources ensures a comprehensive view of market trends, technological advancements, and business needs to inform decisions."

Sharma says the biggest mistake project leaders make is spreading resources too thinly or chasing "shiny objects" without clear business alignment. Trying to focus on everything at once guarantees mediocre results across the board, he adds.

Parting Thoughts

Don't consider any new project without first establishing a solid prioritization framework. "A strong prioritization framework is a living process, not a one-off exercise," Sharma says. "Keep refining it based on feedback and results," he advises. "Additionally, by embracing ongoing learning, you'll cultivate a culture that values both innovative thinking and practical execution."

Prioritization is not a one-time activity — it's a continuous process that requires alignment, evaluation, and adaptability, Shrivastava says. "Methods like WIZGIF are valuable because they provide a consistent framework to revisit priorities, make dynamic adjustments, and ensure that resources are always directed toward maximum value creation."
