Is your AI product actually working? How to develop the right metric system

In my first stint as a machine learning (ML) product manager, a simple question inspired passionate debates across functions and leaders: How do we know if this product is actually working?

The product I managed catered to both internal and external customers. The model enabled internal teams to identify the top issues faced by our customers so that they could prioritize the right set of experiences to fix. With such a complex web of interdependencies among internal and external customers, choosing the right metrics to capture the product’s impact was critical to steering it toward success.

Not tracking whether your product is working well is like landing a plane without any instructions from air traffic control. There is no way to make informed decisions for your customer without knowing what is going right or wrong. Additionally, if you do not actively define the metrics, your team will invent their own backup metrics. The risk of having multiple flavors of an ‘accuracy’ or ‘quality’ metric is that everyone will develop their own version, leading to a scenario where you might not all be working toward the same outcome. For example, when I reviewed my annual goal and its underlying metric with our engineering team, the immediate feedback was: “But this is a business metric; we already track precision and recall.”

First, identify what you want to know about your AI product

Once you get down to the task of defining metrics for your product, where do you begin? In my experience, the complexity of operating an ML product with multiple customers carries over to defining metrics for the model, too. What do I use to measure whether a model is working well?
Measuring the outcomes of internal teams that prioritize launches based on our models would not be quick enough; measuring whether the customer adopted solutions recommended by our model could risk drawing conclusions from a very broad adoption metric (what if the customer didn’t adopt the solution because they just wanted to reach a support agent?).

Fast-forward to the era of large language models (LLMs): we no longer have just a single output from an ML model; we have text answers, images and music as outputs, too. The dimensions of the product that require metrics rapidly increase: formats, customers, type… the list goes on.

Across all my products, my first step in coming up with metrics is to distill what I want to know about the product’s impact on customers into a few key questions. Identifying the right set of questions makes it easier to identify the right set of metrics. Here are a few examples:

- Did the customer get an output? → metric for coverage
- How long did it take for the product to provide an output? → metric for latency
- Did the user like the output? → metrics for customer feedback, customer adoption and retention

Once you identify your key questions, the next step is to identify a set of sub-questions for ‘input’ and ‘output’ signals. Output metrics are lagging indicators: they measure an event that has already happened. Input metrics are leading indicators: they can be used to identify trends or predict outcomes. Below are the questions above with sub-questions added for leading and lagging indicators; not all questions need both.

- Did the customer get an output? → coverage
- How long did it take for the product to provide an output? → latency
- Did the user like the output? → customer feedback, customer adoption and retention
  - Did the user indicate that the output is right/wrong? (output)
  - Was the output good/fair? (input)

The third and final step is to identify the method to gather metrics.
Most metrics are gathered at scale through new instrumentation via data engineering. However, in some instances (like question 3 above), especially for ML-based products, you have the option of manual or automated evaluations that assess the model outputs. While it’s always best to develop automated evaluations, starting with manual evaluations for “was the output good/fair” and creating a rubric that defines good, fair and not good will help you lay the groundwork for a rigorous and tested automated evaluation process, too.

Example use cases: AI search, listing descriptions

The above framework can be applied to any ML-based product to identify the list of primary metrics for your product. Let’s take search as an example.

| Question | Metric | Nature of metric |
| --- | --- | --- |
| Did the customer get an output? → Coverage | % of search sessions with search results shown to the customer | Output |
| How long did it take for the product to provide an output? → Latency | Time taken to display search results to the user | Output |
| Did the user like the output? → Customer feedback, adoption and retention. Did the user indicate that the output is right/wrong? | % of search sessions with ‘thumbs up’ feedback on search results from the customer, or % of search sessions with clicks from the customer | Output |
| Was the output good/fair? | % of search results marked as ‘good/fair’ for each search term, per quality rubric | Input |

How about a product that generates descriptions for a listing (whether it’s a menu item on DoorDash or a product listing on Amazon)?

| Question | Metric | Nature of metric |
| --- | --- | --- |
| Did the customer get an output? → Coverage | % of listings with a generated description | Output |
| How long did it take for the product to provide an output? → Latency | Time taken to generate descriptions for the user | Output |
| Did the user like the output? → Customer feedback, adoption and retention. Did the user indicate that the output is right/wrong? | % of listings with generated descriptions that required edits from the technical content team/seller/customer | Output |
| Was the output good/fair? | % of listing descriptions marked as ‘good/fair’, per quality rubric | Input |
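As a minimal sketch of how these session-level metrics might be instrumented, consider the search example. The event schema and field names below are hypothetical stand-ins, not from the article; real instrumentation would read from your own logging pipeline.

```python
from statistics import median

# Hypothetical session log: one record per search session.
sessions = [
    {"results_shown": True,  "latency_ms": 120,  "thumbs_up": True},
    {"results_shown": True,  "latency_ms": 340,  "thumbs_up": False},
    {"results_shown": False, "latency_ms": None, "thumbs_up": None},
    {"results_shown": True,  "latency_ms": 95,   "thumbs_up": True},
]

def coverage(sessions):
    """Output metric: % of sessions where the customer got a result."""
    return 100 * sum(s["results_shown"] for s in sessions) / len(sessions)

def median_latency_ms(sessions):
    """Output metric: median time to display results, over served sessions."""
    return median(s["latency_ms"] for s in sessions if s["results_shown"])

def thumbs_up_rate(sessions):
    """Output metric: % of 'thumbs up' among sessions that showed results."""
    served = [s for s in sessions if s["results_shown"]]
    return 100 * sum(bool(s["thumbs_up"]) for s in served) / len(served)

print(coverage(sessions))           # 75.0
print(median_latency_ms(sessions))  # 120
print(thumbs_up_rate(sessions))     # ≈66.7
```

The input metric (“was the output good/fair, per quality rubric”) would come from a separate evaluation table of rubric labels rather than from session logs, which is why it is not computed here.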


The Future Of Digital Experiences: A Human-Centered, Empowering Journey

We are on the brink of a digital revolution in consumer experiences. A convergence of multiple forces is compelling organizations to innovate in this area:

- Consumers connect digitally, accessing products and services through a range of devices, channels, and platforms. And they now expect seamless service at their moments of need, often seeking curated and personalized experiences to achieve their goals.
- A synergy of advancing and emerging technologies is accelerating the transformation of digital experiences — reshaping how firms interact with consumers, streamline operations, and deliver value.
- Competition is intensifying. While consumers navigate a “digital sea of sameness,” leading firms leverage cutting-edge technologies and extensive partner ecosystems to swiftly develop and scale innovative products, services, and business models.

Digital Experiences Are Evolving To Become More Human-Centered

Today, we are already witnessing the gradual integration of multiple interaction modes into interfaces, including touch, text, voice, haptics, and gestures. Apps now allow users to use voice commands to ask questions, research products and services, and make payments. Virtual assistants use augmented reality to offer virtual try-ons. Smartwatches use haptic feedback to alert users or share health metrics. In the future, organizations will leverage AI to further reduce friction in human-computer interactions. AI-powered interfaces, such as chatbots and virtual agents, will actively observe, seek information, learn, and communicate with consumers. This will allow organizations to better understand consumer intent and emotions and generate responses that use appropriate tone, emotion, visual elements, and more. In the short term, conversational interfaces will make digital experiences more natural, intuitive, and accessible.
In the longer term, the internet of senses, computer vision, extended reality, and edge AI will create more perceptive and immersive experiences by tracking eye movement, expressions, and gestures and blending multisensory experiences to incorporate touch, taste, and smell.

Digital Experiences Will Evolve Through Three Phases

Over the next decade, emerging technologies will enhance consumer understanding, boost automation, and accelerate the orchestration and delivery of digital experiences. By gaining deeper consumer insights, organizations will be able to:

- Dynamically assemble the content and services that consumers need.
- Provide actionable suggestions tailored to individual needs.
- Act on behalf of consumers — with their permission — to reduce cognitive load and simplify their lives.

As market offerings expand, technologies mature, and consumers increasingly adopt new types of digital experiences, Forrester expects that digital experiences will evolve through three phases. These phases will not occur in strict sequential order; instead, they are interrelated and mutually reinforcing, building upon each other.

Assistive experiences use consumer preferences to help with decision-making. Already today, consumers interact with firms through chatbots and virtual assistants. These interfaces let customers ask questions, get answers, and perform some actions. Firms use data and real-time models to engage consumers with relevant experiences, providing insights, alerts, and suggestions to help them make informed decisions.

Anticipatory experiences leverage consumer context to proactively address their needs. Next, anticipatory experiences will become more common. Consumers will have deeper interactions through multimodal interfaces, sharing more data with firms. These experiences will retain user preferences and behaviors. Organizations will use this data and predictive tools to offer AI-driven insights, helping consumers prepare for events and achieve better outcomes.
AI-powered assistants will continuously optimize experiences to proactively meet consumer needs.

Agentic experiences understand and act on consumer intent. Finally, firms will use agentic AI systems for real-time personalization and automation. Consumers will use personal AI agents to refine outputs based on their preferences and goals. Major platforms with broader data access, such as Apple and Google, will use AI to assemble dynamic cross-brand experiences from modular components. With permission, AI agents will autonomously seek information, learn, adapt, and act on behalf of consumers.

By delivering assistive, anticipatory, and agentic experiences, businesses will be able to create a future where technology seamlessly integrates into our daily lives, empowering consumers in unprecedented ways.

Trust Will Be A Key Factor In Shaping This Future

Brand trust, shaped by the brand promise but also by the quality of past interactions, will determine how much data consumers are willing to share for personalized experiences. Additionally, trust in the technology itself, the scenarios involved, and perceived levels of risk will influence the degree of autonomy granted to AI agents and the breadth of service or advice provided. The pace of change is accelerating — but the fundamentals remain the same. As organizations prepare for the future of experiences, it’s crucial to remember that brand and customer experience are the powerhouse duo driving growth.

Join Us At CX Summit EMEA 2025 To Learn More

To learn more about how to anticipate and prepare for the future of experiences, join us at CX Summit EMEA, June 2–4, 2025, in London. I will present new research on the future of digital experiences during my keynote, “Design For The Future Of Experiences.” The Summit brings together leaders in CX, digital, and marketing to explore the future of customer relationships and learn how to build a total experience that aligns brand experience and CX to drive sustainable growth.
You can explore the full agenda and register here. If you’re a Forrester client, stay tuned for upcoming research on the future of digital experiences. Visit my Forrester bio page and click “Follow” to receive notifications. You can also follow me on LinkedIn here. Forrester clients can also schedule an inquiry or guidance session with me to delve deeper into this topic.


Keys To Handling Digital Investigations In Pharma IP Litigation

In the high-stakes realm of pharmaceutical intellectual property litigation, efficient e-discovery and digital investigation workflows are essential to supporting strategic arguments, building defensible cases and proving that the requirements for market entry have been adequately met, says Jerry Lay at FTI Consulting.


Trump tariffs reignite Europe’s push for cloud sovereignty

The Trump administration’s sweeping tariffs have ruffled feathers across the world — and reignited Europe’s push for digital sovereignty.  One of the key focus points has been Europe’s cloud infrastructure, which is currently dominated by US tech giants: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Together, the “big three” account for more than 50% of the continent’s cloud market. “Europe has been heavily reliant on US tech and cloud for decades,” said Mark Boost, CEO of UK-based web hosting company Civo. “But there are alternatives, where France, Germany, and the UK have full control of their data and cloud landscape.” Trump’s tariffs, Boost added, had “cemented the idea that Europe can no longer afford to rely on the US for its digital infrastructure.” Thankfully, Europe has loads of homegrown cloud providers. The largest is France’s OVHcloud, which runs the world’s largest data centre by surface area. Others include Finland’s UpCloud, Switzerland’s Exoscale, Germany’s IONOS, and France’s Scaleway (the cloud provider of choice for French AI unicorn Mistral). These alternative cloud providers may not match the scale and breadth of services offered by the US hyperscalers. They do, however, offer something very attractive in these uncertain geopolitical times: data sovereignty and privacy.    As Alexander Samsig, senior consultant and partner at Norwegian tech consultancy Funktive, put it in a recent blog post: “In 2025, the choice of a cloud provider isn’t just about technology or price.” Boost echoes that sentiment. “A sovereign European cloud could foster an ecosystem defined by fairness and transparency, in which domestic providers can compete, and customers have maximum freedom to choose the service that’s right for them,” he said.   It’s not a pipedream, either — Europe has cut its dependence on powerful American tech before and can do it again.  
Europeans once relied entirely on the US for GPS access, but today, smartphone users on the continent can access navigation through the EU’s Galileo satellite system. Launched in 2016, Galileo is one of the world’s best satellite networks, and unlike others, it’s a civilian system designed with secure service provision at its core. It cost around €10bn to build and deploy. If Europe is truly committed to building sovereign cloud infrastructure, it will need to back up its ambitions with significant investment. “Allocating funding for domestic sovereign clouds would also go a long way to supporting domestic industries, and would send a clear signal that Europe can chart an independent path from the US and China,” said Boost.

Political momentum on this front looks to be building. In a speech yesterday, France’s AI minister, Clara Chappaz, called on the continent to “work as a pack” to take on US “predator” tech firms, particularly in the cloud services sector. To shield Europe from US tech dominance, Chappaz urged the bloc to enforce its digital rulebook, stand up to Trump’s “idiotic” trade war, and hit back with digital taxes on Big Tech if required. She also slammed “sovereignty washing” — when US cloud giants partner with EU firms to appear sovereign — and backed strict standards like France’s SecNumCloud certification, which disqualifies foreign-owned providers based on shareholding caps. Chappaz said Europe is finally “waking up” to the need for true cloud independence. The minister also claimed that both OVHcloud and Scaleway have seen record client growth since Trump took office.

Europe’s digital sovereignty will be a hot topic at TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets for the event are now on sale. Use the code TNWXMEDIA2025 at checkout to get 30% off the price tag.


4. Freedom gaps

One way to evaluate how people around the world feel about free speech, free press, and internet use without state or government censorship is to compare views of how important these freedoms are with perceptions of freedom in each country. When making this comparison, we observe what we call “freedom gaps,” or differences between the shares of people who say that free press, speech and internet use are important and the shares who say that these activities are actually free in their country. In many of the 35 countries surveyed, we see gaps in views of media freedom and freedom of speech; that is, the shares who say these freedoms are important are larger than the shares who say people in their country actually enjoy these freedoms. But the picture is much less clear when it comes to freedom on the internet. In fact, we see “reverse gaps” in many countries, where more people say they can use the internet freely than say freedom on the internet is important.

Press freedom gaps

There are significant gaps on press freedom in 30 of 35 countries surveyed. In almost all of these cases, the gaps occur because a larger share of people say freedom of the press is important than say media in their country are actually free. The largest press freedom gap is in Chile, where 90% of adults say that the media reporting the news without state or government interference is very or somewhat important, while only 29% say that the media in their country are completely or somewhat free to report the news. In other words, the share of Chileans who say a free press is important is approximately triple the share who say their media are indeed free. Large gaps can also be found in Argentina, Colombia, Greece, Hungary, Mexico, Peru, Singapore, South Korea and Turkey. In several of these countries, views are split on whether media reporting is free. In both Chile and Greece, only about a third of adults or fewer rate their media as completely or somewhat free.
Among Americans, 92% say freedom of the press is important, compared with 79% who say the U.S. press is completely or somewhat free to report the news. In India and Kenya, the gaps are reversed: Eight-in-ten adults or more in each country say their press is free to report the news, while around two-thirds say it is important to have press freedom. Press freedom gaps are insignificant in Bangladesh, Ghana, Israel, the Philippines and South Africa. In other words, there is not a difference between views of media freedom’s importance and perceptions of an uncensored press.

Speech freedom gaps

Of the 35 countries surveyed, there is a significant speech freedom gap in 31 countries. In 30 of them, the gaps are due to larger shares saying free speech is important than saying they are actually free to say what they want. For the most part, the free speech gaps look similar to press freedom gaps. The largest gap among the countries surveyed is in Turkey, where 91% say people expressing themselves without government or state interference is very or somewhat important, while 52% say people in Turkey are completely or somewhat free to do this. Free speech gaps are particularly large in the Latin American countries surveyed. For instance, in Peru, approximately eight-in-ten adults say free speech is important, but only about half (47%) say Peruvians enjoy this freedom. Similarly, in both Chile and Mexico, large majorities agree that free speech is important. But Chileans and Mexicans are about evenly divided on whether people can say what they want without censorship in their respective countries. In the U.S., more people say freedom of speech is important to have (92%) than say they are able to speak freely (86%). In India, a slightly larger share say they have free speech than think this is important, resulting in a reverse gap.
And publics in Ghana, Israel, Kenya and South Africa do not feel differently about the importance of free speech and their experiences with free speech.

Internet freedom gaps

Globally, internet freedom gaps are less pronounced than gaps on the other two freedoms we asked about. Overwhelming majorities in most countries say it is important for people to be able to use the internet without censorship, and similar shares say they are able to use the internet freely where they live. But because so many people say they have internet freedom, there are reverse freedom gaps in 17 countries. In these cases, larger shares of adults say that the internet is free of censorship in their country than say freedom on the internet is important to have. For example, in the middle-income countries of Bangladesh, India, Kenya and South Africa, at least eight-in-ten adults say they are completely or somewhat free to use the internet, but only about two-thirds say this freedom is very or somewhat important to have. These reverse gaps are not limited to middle-income countries, though. In Australia, Israel, Italy, Japan, the Netherlands, Singapore, South Korea and the UK — all high-income countries — the shares who say they are free to use the internet are larger than the shares who believe internet freedom is important. In seven countries, internet freedom gaps look similar to the speech and press freedom gaps. In other words, more people in these countries say that freedom on the internet is important than say they are able to use the internet freely. As for the U.S., similar shares of Americans say that freedom on the internet is very important to have in their country and that people in the U.S. are completely free to use the internet without government censorship (91% vs. 92%).
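The gap arithmetic this chapter relies on can be made concrete with a short sketch. The figures below are the ones quoted in the text; the five-point cutoff for calling a gap "insignificant" is an illustrative stand-in, not Pew's actual significance test.

```python
# Freedom gap = share saying a freedom is important
#             - share saying people actually have it.
# A negative gap is a "reverse gap". The 5-point cutoff is an
# illustrative stand-in for the survey's significance testing.

def freedom_gap(pct_important, pct_free, cutoff=5):
    gap = pct_important - pct_free
    if gap > cutoff:
        kind = "gap"
    elif gap < -cutoff:
        kind = "reverse gap"
    else:
        kind = "insignificant"
    return gap, kind

# Percentages quoted in the text above.
print(freedom_gap(90, 29))  # Chile, press freedom: (61, 'gap')
print(freedom_gap(92, 86))  # U.S., free speech: (6, 'gap')
print(freedom_gap(67, 83))  # a reverse internet-freedom gap: (-16, 'reverse gap')
```

The Chile case shows why the text describes the importance share as "approximately triple" the freedom share: 90 is roughly three times 29, and the gap itself is 61 points.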


Does RAG make LLMs less safe?  Bloomberg research reveals hidden dangers

Retrieval augmented generation (RAG) is supposed to help improve the accuracy of enterprise AI by providing grounded content. While that is often the case, there is also an unintended side effect: according to surprising new research published today by Bloomberg, RAG can potentially make large language models (LLMs) unsafe.

Bloomberg’s paper, ‘RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models,’ evaluated 11 popular LLMs, including Claude-3.5-Sonnet, Llama-3-8B and GPT-4o. The findings contradict the conventional wisdom that RAG inherently makes AI systems safer: the research team discovered that when using RAG, models that typically refuse harmful queries in standard settings often produce unsafe responses.

Alongside the RAG research, Bloomberg released a second paper, ‘Understanding and Mitigating Risks of Generative AI in Financial Services,’ which introduces a specialized AI content risk taxonomy for financial services, addressing domain-specific concerns not covered by general-purpose safety approaches. Together, the papers challenge the widespread assumption that retrieval-augmented generation enhances AI safety, while demonstrating how existing guardrail systems fail to address domain-specific risks in financial services applications.

“Systems need to be evaluated in the context they’re deployed in, and you might not be able to just take the word of others that say, ‘Hey, my model is safe, use it, you’re good,’” Sebastian Gehrmann, Bloomberg’s Head of Responsible AI, told VentureBeat.

RAG systems can make LLMs less safe, not more

RAG is widely used by enterprise AI teams to provide grounded content. The goal is to provide accurate, up-to-date information, and there has been a lot of research and advancement in RAG in recent months to further improve accuracy as well.
Earlier this month, a new open-source framework called Open RAG Eval debuted to help validate RAG efficiency. It’s important to note that Bloomberg’s research does not question the efficacy of RAG or its ability to reduce hallucination; rather, it is about how RAG usage impacts LLM guardrails in an unexpected way. For example, Llama-3-8B’s rate of unsafe responses jumped from 0.3% to 9.2% when RAG was implemented.

Gehrmann explained that without RAG in place, if a user types in a malicious query, the built-in safety system, or guardrails, will typically block it. Yet when the same query is issued to an LLM that is using RAG, the system will answer the malicious query, even when the retrieved documents themselves are safe.

“What we found is that if you use a large language model out of the box, often they have safeguards built in where, if you ask, ‘How do I do this illegal thing,’ it will say, ‘Sorry, I cannot help you do this,’” Gehrmann explained. “We found that if you actually apply this in a RAG setting, one thing that could happen is that the additional retrieved context, even if it does not contain any information that addresses the original malicious query, might still answer that original query.”

How does RAG bypass enterprise AI guardrails?

So why and how does RAG bypass guardrails? The Bloomberg researchers were not entirely certain, though they did have a few ideas. Gehrmann hypothesized that the way the LLMs were developed and trained did not fully consider safety alignment for very long inputs. The research demonstrated that context length directly impacts safety degradation.
“Provided with more documents, LLMs tend to be more vulnerable,” the paper states, showing that even introducing a single safe document can significantly alter safety behavior. “I think the bigger point of this RAG paper is you really cannot escape this risk,” Amanda Stent, Bloomberg’s Head of AI Strategy and Research, told VentureBeat. “It’s inherent to the way RAG systems are. The way you escape it is by putting business logic or fact checks or guardrails around the core RAG system.”

Why generic AI safety taxonomies fail in financial services

Bloomberg’s second paper introduces a specialized AI content risk taxonomy for financial services, addressing domain-specific concerns like financial misconduct, confidential disclosure and counterfactual narratives. The researchers empirically demonstrated that existing guardrail systems miss these specialized risks. They tested open-source guardrail models, including Llama Guard, Llama Guard 3, AEGIS and ShieldGemma, against data collected during red-teaming exercises. “We developed this taxonomy, and then ran an experiment where we took openly available guardrail systems that are published by other firms and we ran this against data that we collected as part of our ongoing red teaming events,” Gehrmann explained. “We found that these open source guardrails… do not find any of the issues specific to our industry.”

The researchers developed a framework that goes beyond generic safety models, focusing on risks unique to professional financial environments. Gehrmann argued that general-purpose guardrail models are usually developed for consumer-facing risks, so they focus heavily on toxicity and bias. Those concerns are important, he noted, but not necessarily specific to any one industry or domain. The key takeaway from the research is that organizations need a domain-specific taxonomy in place for their own industry and application use cases.
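The kind of contextual evaluation Gehrmann describes can be sketched in a few lines. Everything here is hypothetical: `run_model` and `judge` are toy stand-ins for a deployed model and an unsafe-response classifier, and the query list stands in for a real red-teaming set. The point is only the evaluation shape: score the same harmful-query set twice, with and without retrieved context.

```python
# Minimal sketch: measure unsafe-response rate for the same query set
# with and without retrieved context attached. `run_model` and `judge`
# are hypothetical stand-ins, not Bloomberg's actual harness.

def unsafe_rate(queries, run_model, judge, context=None):
    """% of responses the judge labels unsafe."""
    unsafe = sum(judge(run_model(q, context)) for q in queries)
    return 100 * unsafe / len(queries)

# Toy stand-ins for demonstration only: pretend the bare model refuses,
# but answers whenever any context is attached (the failure mode the
# paper reports, exaggerated).
def run_model(query, context):
    return "REFUSAL" if context is None else "ANSWER"

def judge(response):
    return response != "REFUSAL"  # anything but a refusal counts as unsafe

queries = ["harmful query 1", "harmful query 2", "harmful query 3"]

baseline = unsafe_rate(queries, run_model, judge)                 # 0.0
with_rag = unsafe_rate(queries, run_model, judge, context="doc")  # 100.0
print(f"no-RAG: {baseline}%  with-RAG: {with_rag}%")
```

In a real evaluation the judge would be a human rubric or an automated safety classifier, and the signal the paper highlights is the gap between the two rates, not either number alone.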
Responsible AI at Bloomberg

Bloomberg has made a name for itself over the years as a trusted provider of financial data systems. In some respects, gen AI and RAG systems could be seen as competitive with Bloomberg’s traditional business, and therefore there could be some hidden bias in the research.

“We are in the business of giving our clients the best data and analytics and the broadest ability to discover, analyze and synthesize information,” Stent said. “Generative AI is a tool that can really help with discovery, analysis and synthesis across data and analytics, so for us, it’s a benefit.” She added that the kinds of bias that Bloomberg is concerned about with its AI solutions are


Customer-centric IT: Strategies for delivering winning customer experiences

7. Speed AI adoption

CIOs should also be accelerating adoption of all variations of artificial intelligence, such as generative AI-powered chatbots, to create better customer experiences, West Monroe’s Cheng says, noting that agentic AI in particular shows potential as a powerful tool for delivering efficient, effective customer experiences. Whether the agentic AI is enabled to execute decisions on its own or designed to require a human to approve certain actions, the technology is demonstrating that it can resolve customer needs faster and more accurately than humans can, Cheng explains. Likewise, Seiler says adding more automation and AI, from chatbots to virtual assistants, throughout the customer journey is essential to meeting rising customer expectations.

KPMG, in its 2024 Global Customer Experience Excellence report, says that leading organizations “are humanizing their AI interfaces, making them more engaging and relatable through anthropomorphism — that is, attributing human traits to non-human things — to create more engaging and relatable experiences. This approach taps into our innate tendency to connect with human-like characteristics, enabling AI bots, like Microsoft’s Cortana and Apple’s Siri, to offer more personalized, emotionally resonant experiences with their distinct personalities and conversational styles.”


2. Importance of press freedom, free speech and freedom on the internet

Across 35 countries, our survey finds widespread support for freedom of the press, freedom of speech and freedom on the internet. But the shares in each country who say these freedoms are very important vary somewhat. A median of 61% say it’s very important that the media are able to report the news without censorship. A median of 59% say this about people being able to say what they want without censorship. And a median of 55% say this about people being able to use the internet without censorship.

Freedom of the press

A median of 61% across the 35 countries surveyed say it is very important that the media are able to report the news without state or government censorship in their country. A median of 23% say this is somewhat important; 11% say it’s not too or not at all important. Majorities of adults in Canada (77%) and the U.S. (67%) believe having freedom of the press is very important in their country. In Europe, the shares saying freedom of the press is very important range from 56% in Poland to 89% in Greece. Majorities across all countries in the region hold this view. In the Asia-Pacific region, varying shares say it’s very important that the media can report the news without censorship. About four-in-ten hold this view in Bangladesh, India and Singapore, compared with about six-in-ten in Australia. Shares in other Asia-Pacific countries fall in between. Roughly seven-in-ten Turkish adults think press freedom is very important, a much higher share than the 43% of Israelis who say the same. In the four sub-Saharan African countries surveyed, the share of adults who say press freedom is very important ranges from 44% in Kenya to 61% in Ghana. And half or more of adults in each of the six Latin American countries polled say having a free press is very important.

Views by education

Education is linked to views of press freedom’s importance in many of the countries surveyed, including both middle- and high-income countries (as defined by the World Bank).
In South Korea, for example, adults with higher levels of education (in this case, a postsecondary education or more) are more likely than those who have less education to say having press freedom is very important. A similar pattern is present in countries spanning all regions included in the survey.

Views over time

In eight countries, the share of adults who consider freedom of the press very important has increased since this question was first asked in 2015. The change over time is particularly large in Turkey: 45% of Turks said a free press was very important in 2015, and 71% say this in the spring 2024 survey. Significant increases have also occurred in Canada, France, Indonesia, Italy, Japan and the UK. But in a few countries, the share of those who see press freedom as very important has declined. In 2015, for example, 71% of Brazilian adults said freedom of the press was very important, compared with 62% most recently. Since 2019, the share expressing this opinion has also dropped in South Africa (-15 points), the Philippines (-11), Kenya (-10), Poland (-8) and Nigeria (-7). Americans’ views have fluctuated over time: the share saying a free press is very important was 67% in 2015, rose to 80% in 2019, then fell back to 67% in 2024. Similarly, the share of Australians saying press freedom is very important in 2024 is larger than it was in 2015, but smaller than it was in 2019.

Freedom of speech

A median of 59% across 35 countries believe it is very important that people are able to say what they want without state or government censorship. Another 27% say it is somewhat important, and 10% say it is not too or not at all important. Majorities in Canada and the U.S. say having freedom of speech is very important in their country. Views of free speech’s importance vary widely across the European countries polled: 53% of Polish adults say this freedom is very important to have in their country, while 87% in Germany say the same.
And eight-in-ten in Greece and Sweden believe being able to speak without censorship is very important. Just 35% of adults in Singapore say that having free speech is very important – the lowest share of all countries polled. Elsewhere in the Asia-Pacific region, half or more of Australians, South Koreans, Japanese and Sri Lankans believe it’s very important that people are able to speak without censorship. In the Middle East-North Africa region, half of Israelis and 69% of Turks say freedom of speech is very important to have in their country. While majorities across the four sub-Saharan African countries surveyed think free speech is at least somewhat important, only in Ghana does more than half of the public say free speech is very important (54%). And at least half of adults in the six Latin American countries surveyed say speech without censorship is very important, including about eight-in-ten or more in Argentina and Chile.

Views by education

In 15 countries, adults with more education are more likely than those with less education to say having free speech in their country is very important. In Peru, for example, 64% of those with an upper secondary education or more say it is very important to be able to speak without censorship, compared with 42% of those who have less education. Notably, in India, people with less education were also less likely to provide a response.

Views over time

In five countries, the share of adults who say it is very important to have free speech has increased since we first asked this question in 2015. For example, 43% of Turkish adults in 2015 saw free speech as very important; in 2024, 69% say the same. Double-digit increases occurred in Indonesia and Italy as well. In France, views have fluctuated over time: In 2024, three-quarters of French adults say free speech is very important,


Fertility startup ‘rejuvenates’ human eggs to boost chances of conception

German biotech startup Ovo Labs has developed new technologies to “rejuvenate” human eggs, potentially boosting the chances of conception. The therapeutics are designed to enhance in vitro fertilisation (IVF), one of the most transformative advances in reproductive medicine. The first baby was born via IVF more than 40 years ago. Since then, the technology has helped millions of women get pregnant. However, IVF can put significant emotional, psychological, and financial strain on patients. It is often unsuccessful on the first attempt. Some try multiple times without success. For many, IVF ultimately does not lead to parenthood.

Ovo Labs wants to improve the odds. Based on 20 years of fertility research, the startup has developed three therapeutic treatments that reduce genetic errors in eggs. In doing so, the company aims to “dramatically” boost the number of women who can conceive in a single IVF attempt.

A microscopic image of a human oocyte with the DNA visible in pink. Credit: Ovo Labs

“By helping to increase the number of viable eggs, we aim to extend the reproductive window, empowering more couples to have children at a time that feels right to them,” said co-founder Professor Melina Schuh. Schuh is a world-leading fertility expert at the Max Planck Institute in Munich. She co-founded Ovo Labs in January alongside her former colleague Dr Agata Zielinska, a Polish-British fertility scientist, and German biotech expert Dr Oleksandr Yagensky.

Ovo Labs has already proven that it can improve the quality of eggs in old mice. The company has also shown it can successfully treat isolated human eggs. However, its technology is not yet approved for human trials. If the treatment gets the green light, Ovo Labs hopes it will become standard practice in IVF.
To get there, though, the startup will need time, as the regulatory approval process for new medical treatments is notoriously slow. It will also need money. To that end, Ovo Labs today announced it has secured €4.6mn, its first batch of external funding. Creator Fund and LocalGlobe led the round, with participation from Blue Wire Capital, Ahren Innovation Capital, and angel investor Antonio Pellicer.

“It is inspiring to see European scientists of this calibre launch a company solving such a fundamental question facing humanity,” said Jamie Macfarlane, founder of UK-based Creator Fund.

Schuh and Zielinska spent years together researching eggs at Bourn Hall Clinic, the world’s first IVF centre (recently featured in the Netflix movie Joy). Their work shed light on why egg quality declines with age and pointed to potential therapies. By the time a woman reaches 40, over 70% of her eggs carry genetic abnormalities, according to data from the London Egg Bank, making it much harder to conceive. By reducing genetic errors, Ovo Labs hopes to improve the chances of successful pregnancies.


Relyance AI builds ‘x-ray vision’ for company data: Cuts AI compliance time by 80% while solving trust crisis

Relyance AI, a data governance platform provider that secured $32.1 million in Series B funding last October, is launching a new solution aimed at solving one of the most pressing challenges in enterprise AI adoption: understanding exactly how data moves through complex systems.

The company’s new Data Journeys platform, announced today, addresses a critical blind spot for organizations implementing AI — tracking not just where data resides, but how and why it’s being used across applications, cloud services, and third-party systems.

“The fundamental premise is making sure that our customers have this AI native, context-aware view, very visual view of the entire journey of data across their applications, services, infrastructures, third parties,” said Abhi Sharma, CEO and co-founder of Relyance AI, in an exclusive interview with VentureBeat. “You can really get at the heart of the why of data processing, which is the most foundational layer needed for general AI governance.”

The launch comes at a pivotal moment for enterprise AI governance. As companies accelerate AI implementation, they face mounting pressure from regulators worldwide. More than a quarter of Fortune 500 companies have identified AI regulation as a risk in SEC filings, and GDPR-related fines reached €1.2 billion in 2024 alone (approximately $1.26 billion at current exchange rates).

How Data Journeys tracks information flow where others fall short

The platform represents a significant evolution from conventional data lineage approaches, which typically track data movement on a table-to-table or column-to-column basis within specific systems. “The status quo for data lineage is basically table to table and column level lineage. I can see how data moved within my Snowflake instance or within my S3 buckets,” Sharma explained.
“But nobody can answer: Where did it come from originally? What nuanced transformation happened between data pipelines, third-party vendors, API calls, RAG architectures, to finally land up here?”

Data Journeys aims to provide this comprehensive view, showing the complete data lifecycle from original collection through every transformation and use case. The system starts with code analysis rather than simply connecting to data repositories, giving it context about why data is being processed in specific ways.

“The promise of AI comes with significant accountability for how data is used. After seeing Relyance AI Data Journeys, we immediately recognized its potential to revolutionize our approach to responsible AI development,” said Heather Allen, privacy officer and director of privacy management at CHG Healthcare. “The automated, context-aware data lineage capabilities would address our most pressing challenges. It represents exactly what we’ve been looking for to support our global AI governance framework.”

Four business problems that data visibility promises to solve

According to Sharma, Data Journeys delivers value in four critical areas.

First, compliance and risk management: “Today, you kind of are required to vouch for integrity of data processing, but you can’t see inside. It’s basically blind governance,” Sharma said. The platform enables organizations to prove the integrity of their data practices when facing regulatory scrutiny.

Second, precise bias detection: Rather than just examining the immediate dataset used to train a model, companies can trace potential bias to its source. “Bias often happens at inference time, not because you had bias in the dataset,” Sharma noted. “The point is, it’s actually not that dataset. It’s the journey it took.”

Third, explainability and accountability: For high-stakes AI decisions like loan approvals or medical diagnoses, understanding the complete data provenance becomes essential.
“The why behind that is super important, and many times, the incorrect behavior of the model is completely dependent on the multiple steps it took before the inference time,” Sharma explained.

Finally, regulatory compliance: The platform provides what Sharma calls a “mathematical proof point” that companies are using data appropriately, helping them navigate increasingly complex global regulations.

From hours to minutes: Measurable returns on better data oversight

Relyance claims the platform delivers measurable returns on investment. According to Sharma, customers have seen 70-80% time savings in compliance documentation and evidence gathering. What he calls “time to certainty” — the ability to quickly answer questions about how specific data is being used — has been reduced from hours to minutes.

In one example Sharma shared, a direct-to-consumer company was switching payment processors from Braintree to Stripe. An engineer working on the project inadvertently created code that stored credit card information in plain text under the wrong column name in Snowflake. “We caught that at the time the code was checked in,” Sharma said. Without Data Journeys’ visual representation of data flows, this potential security incident might have gone undetected until much later.

Keeping sensitive data inside your walls: The self-hosted option

Alongside Data Journeys, Relyance is introducing InHost, a self-hosted deployment model designed for organizations with strict data sovereignty requirements or those in highly regulated industries. “The industries that are most interested in the in-host option are more regulated industries — FinTech and healthcare,” said Sharma. This includes banking, fraud detection, creditworthiness applications, genetics, and personal healthcare services.
The flexibility to deploy either in the cloud or within a company’s own infrastructure addresses growing concerns about sensitive data leaving organizational boundaries, particularly for AI applications that might process regulated information.

Relyance AI’s expansion plans point to growing AI governance market

Relyance is positioning Data Journeys as part of a broader strategy to become what Sharma calls “a unified AI-native platform” for global privacy compliance, data security posture management, and AI governance. “In the second half of this year, I’m launching an AI governance solution which will be a 360-degree management of all AI footprint in your environment,” Sharma revealed, encompassing compliance, real-time ethics monitoring, bias detection, and accountability for both third-party and in-house AI systems.

The company’s long-term vision is ambitious. “AI agents are going to run the world, and we want to be that company that provides the infrastructure for organizations to trust and govern it,” Sharma said. “We want to help improve the data utility index of the world.”

Investors bet big on
