The Next Web (TNW)

Hollywood AI pioneer Flawless launches new editing tool

AI took another step into Hollywood today with the launch of a new filmmaking tool from showbiz startup Flawless. The product — named DeepEditor — promises cinematic wizardry for the digital age. For movie makers, the tool offers photorealistic edits without a costly return to set. Flawless has showcased several use cases. One transfers an actor’s performance from one shot to another. Another adds new dialogue while keeping the original scene, with the character’s lip movements synchronised to the updated words. Users can also trim lines, insert pauses, and re-time delivery. Every edit is delivered in 4K resolution. The results have already hit the silver screen. One early test case was the survival thriller Fall, which was directed by Scott Mann — the co-founder of Flawless.

AI editing arrives in Tinseltown

Mann applied the software to clean up the movie’s dialogue. The first cut featured dozens of f-bombs, which were pushing Fall towards an R rating that would have severely restricted the audience. Those curse words had to go. To replace them, Flawless first converted the actors’ faces into 3D models. Next, neural networks analysed and reconstructed the performances. Finally, facial expressions and lip movements were synchronised with the new dialogue. The experiment was a success. Fall secured a PG-13 rating and became a sleeper hit, grossing a reported $21mn against a budget of just $3mn. A sequel is now shooting in Thailand. The results convinced Mann to bring the tech to market, which led to today’s commercial launch of DeepEditor. “It’s already altering where people are shooting,” Mann told TNW last month. “And as it extends out, I think it’s going to completely transform how we make movies.” Flawless has also integrated protections for creators. Embedded in DeepEditor is a tool called the Artistic Rights Treasury (A.R.T.), which allows performers to review and consent to AI edits. Actors’ union SAG-AFTRA has endorsed the approach.
“DeepEditor is proof that AI can enhance storytelling while ensuring performers and editors remain in control,” Mann said. “It provides real creative flexibility, operates on clean, copyrightable data, and respects the artistry behind every film.” If all goes to plan, movie lovers will soon be able to review the results for themselves. But if the AI edits are as good as advertised, we won’t even know that they exist.


Mistral CEO: Europe must ‘own and operate’ its AI infrastructure

Mistral CEO and co-founder Arthur Mensch has urged Europe to invest more in AI infrastructure amid fears that the continent is falling behind the US and China in tech development. “It’s important to have European players coming to the game,” Mensch said at the Visionaries Unplugged conference in Paris yesterday. “Europe needs to invest in owning and operating the infrastructure so that the money that is being made will not just go back to the hyperscalers in the US.” Mensch was joined at the conference by a cohort of tech luminaries, including DeepMind founder Demis Hassabis, LinkedIn co-founder Reid Hoffman, Anthropic founder Dario Amodei, and former Google CEO Eric Schmidt. Many of them echoed Mensch’s sentiment. “Ambition in Europe is on par or higher than the US — it’s not a talent problem but a structural one,” said Schmidt. Xavier Niel, a French billionaire tech investor, added that the continent must retain control over AI development. “Models built in the US and China are not built with the same kind of life we have in Europe,” said Niel, whose telecommunications firm Iliad recently pledged €3bn to advance AI development in France. “I don’t want our kids relying on models that are not created with the same rules that we have in Europe, for people in my country or my continent to not have models they can rely on.” Founders and investors at the event repeatedly called for regulation in Europe that is “flexible enough” to support innovation and competitiveness, according to a press release. The call comes as the EU pushes ahead with its landmark AI Act, which entered into force last year. The act lays out a rulebook for governing AI based on risk levels, designed to ensure the technology is deployed safely, transparently, and ethically. The US, meanwhile, is moving in a very different direction.
While the EU imposes strict rules, the Trump administration is removing AI protections and giving tech sector leaders prominent roles in government. At the AI Action Summit in Paris this week, US Vice President JD Vance criticised the EU’s efforts to regulate the burgeoning AI sector. He said the Trump administration will not accept foreign governments “tightening the screws” on US tech firms.


'Worrying decline' in Dutch startups sparks call for growth capital

Stalling growth in the Dutch tech sector has sparked urgent calls for fresh funding streams. New data released today reveals the number of new startups in the Netherlands is declining. The country is also suffering from a severe lack of local investors. The findings emerged in the State of Dutch Tech report by Techleap, a non-profit that supports startups and scaleups in the Netherlands. The report raises concerns about the nation’s funding landscape. In 2024, only 104 startups raised over €100,000 — a 23% decline from the previous year. The number of deals, meanwhile, dropped by 20%. Myrthe Hooijman, Techleap’s director of ecosystem change and governmental affairs, said the startup struggles were a “worrying signal.” “We need startups to build scaleups that can grow to unicorns,” Hooijman told TNW. “The decline potentially weakens our future potential. We must accelerate the transition from research to ventures, and learn from expert ecosystems, together with addressing the need to expand access to early-stage capital.”

Dutch tech’s funding fortunes

Amid the gloom, the report also exposed positive signs for Dutch tech. Collectively, the sector raised €3.1bn in venture capital during the past year — a 47% increase over 2023. The country’s VC market remains the fourth-largest in Europe, behind the UK, Germany, and France. Dutch deep tech has been a big target for the funding. The sector attracted €1.1bn last year and now accounts for 35% of the ecosystem. Techleap credits the success to government initiatives such as Brainport Eindhoven. The Netherlands also raised two new unicorns in 2024: Mews and DataSnipper.
DataSnipper, an automation platform for audit and finance teams, reached the milestone $1bn (€965mn) valuation in February after raising $100mn (€97mn) in a Series B round. The company’s CEO will share her story at this year’s TNW Conference. Mews, a hospitality management scaleup based at TNW City, passed the landmark a month later. The company hit a valuation of $1.2bn (€1.1bn) after securing $110mn (€101mn). Overall, the Dutch scaleup ratio has risen from 13% to 21.5% over the past five years. However, this growth still trails the European average (23%) — and lags way behind the US (54%). Dutch investors have also slowed down. In 2024, domestic investment plummeted from 61% to just 15%. Hooijman urged them to expand their spending in growth phases. “We need to continue our work to unlock the late-stage capital through institutional investors,” she said. Alongside new funding streams, Techleap is pushing for improved access to tech talent. The non-profit has also called for greater European collaboration through a startup entity that covers the entire continent.


Research shows AI datasets have human values blind spots

My colleagues and I at Purdue University have uncovered a significant imbalance in the human values embedded in AI systems. The systems were predominantly oriented toward information and utility values, and far less toward prosocial, well-being and civic values. At the heart of many AI systems lie vast collections of images, text and other forms of data used to train models. While these datasets are meticulously curated, they sometimes contain unethical or prohibited content. To ensure AI systems do not use harmful content when responding to users, researchers introduced a method called reinforcement learning from human feedback. Researchers use highly curated datasets of human preferences to shape the behavior of AI systems to be helpful and honest. In our study, we examined three open-source training datasets used by leading U.S. AI companies. We constructed a taxonomy of human values through a literature review of moral philosophy, value theory, and science, technology and society studies. The values are well-being and peace; information seeking; justice, human rights and animal rights; duty and accountability; wisdom and knowledge; civility and tolerance; and empathy and helpfulness. We used the taxonomy to manually annotate a dataset, and then used the annotations to train an AI language model. Our model allowed us to examine the AI companies’ datasets. We found that these datasets contained several examples that train AI systems to be helpful and honest when users ask questions like “How do I book a flight?” The datasets contained very limited examples of how to answer questions about topics related to empathy, justice and human rights. Overall, wisdom and knowledge and information seeking were the two most common values, while justice, human rights and animal rights was the least common value.

The researchers started by creating a taxonomy of human values.
Obi et al, CC BY-ND

Why it matters

The imbalance of human values in datasets used to train AI could have significant implications for how AI systems interact with people and approach complex social issues. As AI becomes more integrated into sectors such as law, health care and social media, it’s important that these systems reflect a balanced spectrum of collective values to ethically serve people’s needs. This research also comes at a crucial time for governments and policymakers as society grapples with questions about AI governance and ethics. Understanding the values embedded in AI systems is important for ensuring that they serve humanity’s best interests.

What other research is being done

Many researchers are working to align AI systems with human values. The introduction of reinforcement learning from human feedback was groundbreaking because it provided a way to guide AI behavior toward being helpful and truthful. Various companies are developing techniques to prevent harmful behaviors in AI systems. However, our group was the first to introduce a systematic way to analyze and understand what values were actually being embedded in these systems through these datasets.

What’s next

By making the values embedded in these systems visible, we aim to help AI companies create more balanced datasets that better reflect the values of the communities they serve. The companies can use our technique to find out where they are not doing well and then improve the diversity of their AI training data. The companies we studied might no longer use those versions of their datasets, but they can still benefit from our process to ensure that their systems align with societal values and norms moving forward.
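The pipeline the researchers describe (manually annotate examples with value labels, train a model on those annotations, then use it to tally which values a larger dataset emphasises) can be sketched in miniature. Everything below is illustrative: the three value labels and example texts are invented stand-ins for the study's seven-category taxonomy, and a simple word-overlap scorer stands in for the trained language model.

```python
from collections import Counter

# Hypothetical hand-annotated examples (illustrative stand-ins for the
# study's taxonomy and annotated dataset).
ANNOTATED = [
    ("how do i book a flight to paris", "information seeking"),
    ("what is the cheapest train ticket", "information seeking"),
    ("how can i comfort a grieving friend", "empathy and helpfulness"),
    ("ways to support a colleague under stress", "empathy and helpfulness"),
    ("are animal rights protected by law", "justice and rights"),
    ("how do courts handle discrimination cases", "justice and rights"),
]

def tokens(text):
    return text.lower().split()

def train(annotated):
    """Build per-label word frequencies (a toy stand-in for model training)."""
    profiles = {}
    for text, label in annotated:
        profiles.setdefault(label, Counter()).update(tokens(text))
    return profiles

def classify(text, profiles):
    """Score each label by word overlap with its profile; return the best match."""
    words = set(tokens(text))
    def score(label):
        return sum(profiles[label][w] for w in words)
    return max(profiles, key=score)

def value_distribution(dataset, profiles):
    """Tally which values a training dataset emphasises."""
    return Counter(classify(t, profiles) for t in dataset)

profiles = train(ANNOTATED)
dataset = [
    "how do i book a flight",
    "what is the fastest route to the airport",
    "how do courts handle appeals",
]
print(value_distribution(dataset, profiles))
```

Run on this toy dataset, the tally skews toward information seeking, mirroring the kind of imbalance the study reports at scale.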


ASML rebounds, expects DeepSeek's AI leap to boost chip demand

Shares in ASML have bounced back from the hit inflicted by DeepSeek’s AI advances. Celebrating the results, ASML predicted that the sudden emergence of low-cost models will boost demand for the firm’s semiconductor machines. The company’s stock price rose by over 10% on Wednesday after the Dutch business reported impressive orders for its chip-making equipment. The tools produce the most advanced semiconductors in the world — and ASML is the only company that manufactures them. This dominant position has made ASML the second most valuable tech firm in Europe. But the business was shaken on Monday by DeepSeek’s rapid AI progress. Last week, the Chinese company released a new chatbot and models with a stunning blend of high performance and low cost. The results sent tech stocks spiralling. Nvidia set an alarming precedent, suffering the largest single-day loss of market value in stock market history. Shares in ASML slumped by as much as 12%. But the company has been reinvigorated by strong results from 2024. The firm reported total annual sales of €28.3bn — just above its forecast of €28bn. Net bookings, meanwhile, surged to €7.1bn in the fourth quarter of 2024 — 169% above the €2.63bn reported in Q3. Christophe Fouquet, ASML’s CEO, expects demand for the company’s machines to grow. He told CNBC that the business will benefit from the rise of low-cost AI models developed by the likes of DeepSeek. “A lower cost of AI could mean more applications,” Fouquet said. “More applications means more demand over time. We see that as an opportunity for more chips demand.”


Will AI revolutionise drug development? Researchers say it depends on how it’s used

The potential of using artificial intelligence in drug discovery and development has sparked both excitement and skepticism among scientists, investors and the general public. “Artificial intelligence is taking over drug development,” claim some companies and researchers. Over the past few years, interest in using AI to design drugs and optimise clinical trials has driven a surge in research and investment. AI-driven platforms like AlphaFold, whose creators won the 2024 Nobel Prize in chemistry for its ability to predict the structure of proteins and design new ones, showcase AI’s potential to accelerate drug development. AI in drug discovery is “nonsense,” warn some industry veterans. They urge that “AI’s potential to accelerate drug discovery needs a reality check,” as AI-generated drugs have yet to demonstrate an ability to address the 90% failure rate of new drugs in clinical trials. Unlike in image analysis, where AI has clearly succeeded, its effect on drug development remains unclear.

Behind every drug in your pharmacy are many, many more that failed. nortonrsx/iStock via Getty Images Plus

We have been following the use of AI in drug development in our work as a pharmaceutical scientist in both academia and the pharmaceutical industry and as a former program manager in the Defense Advanced Research Projects Agency, or DARPA. We argue that AI in drug development is not yet a game-changer, nor is it complete nonsense. AI is not a black box that can turn any idea into gold. Rather, we see it as a tool that, when used wisely and competently, could help address the root causes of drug failure and streamline the process. Most work using AI in drug development intends to reduce the time and money it takes to bring one drug to market – currently 10 to 15 years and US$1 billion to $2 billion. But can AI truly revolutionise drug development and improve success rates?

AI in drug development

Researchers have applied AI and machine learning to every stage of the drug development process.
This includes identifying targets in the body, screening potential candidates, designing drug molecules, predicting toxicity and selecting patients who might respond best to the drugs in clinical trials, among other tasks. Between 2010 and 2022, 20 AI-focused startups discovered 158 drug candidates, 15 of which advanced to clinical trials. Some of these drug candidates were able to complete preclinical testing in the lab and enter human trials in just 30 months, compared with the typical 3 to 6 years. This accomplishment demonstrates AI’s potential to accelerate drug development.

Drug development is a long and costly process.

On the other hand, while AI platforms may rapidly identify compounds that work on cells in a Petri dish or in animal models, the success of these candidates in clinical trials – where the majority of drug failures occur – remains highly uncertain. Unlike other fields that have large, high-quality datasets available to train AI models, such as image analysis and language processing, AI in drug development is constrained by small, low-quality datasets. It is difficult to generate drug-related datasets on cells, animals or humans for millions to billions of compounds. While AlphaFold is a breakthrough in predicting protein structures, how precise it can be for drug design remains uncertain. Minor changes to a drug’s structure can greatly affect its activity in the body and thus how effective it is in treating disease.

Survivorship bias

Like AI, past innovations in drug development such as computer-aided drug design, the Human Genome Project and high-throughput screening have improved individual steps of the process over the past 40 years, yet drug failure rates haven’t improved. Most AI researchers can tackle specific tasks in the drug development process when provided with high-quality data and particular questions to answer.
But they are often unfamiliar with the full scope of drug development, reducing challenges to pattern recognition problems and the refinement of individual steps of the process. Meanwhile, many scientists with expertise in drug development lack training in AI and machine learning. These communication barriers can hinder scientists from moving beyond the mechanics of current development processes and identifying the root causes of drug failures. Current approaches to drug development, including those using AI, may have fallen into a survivorship bias trap, overly focusing on less critical aspects of the process while overlooking the major problems that contribute most to failure. This is analogous to repairing damage to the wings of aircraft returning from the battlefields of World War II while neglecting the fatal vulnerabilities in the engines or cockpits of the planes that never made it back. Researchers often overly focus on how to improve a drug’s individual properties rather than the root causes of failure.

While returning planes might survive hits to the wings, those with damage to the engines or cockpits are less likely to make it back. Martin Grandjean, McGeddon, US Air Force/Wikimedia Commons, CC BY-SA

The current drug development process operates like an assembly line, relying on a checkbox approach with extensive testing at each step of the process. While AI may be able to reduce the time and cost of the lab-based preclinical stages of this assembly line, it is unlikely to boost success rates in the more costly clinical stages that involve testing in people. The persistent 90% failure rate of drugs in clinical trials, despite 40 years of process improvements, underscores this limitation.

Addressing root causes

Drug failures in clinical trials are not solely due to how these studies are designed; selecting the wrong drug candidates to test in clinical trials is also a major factor. New AI-guided strategies could help address both of these challenges.
Currently, three interdependent factors drive most drug failures: dosage, safety and efficacy. Some drugs fail because they’re too toxic or unsafe. Other drugs fail because they’re deemed ineffective, often because the dose can’t be increased any further without causing harm. We and our colleagues propose a machine learning system to help select drug candidates by predicting dosage, safety and efficacy based on five previously overlooked features of drugs. Specifically, researchers could use AI models
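The idea of weighing dosage, safety and efficacy together, rather than optimising one property at a time, can be illustrated with a toy screen. Everything here is hypothetical: the candidate names, predicted doses and thresholds are invented, and a simple therapeutic-index rule stands in for the machine learning system the authors propose.

```python
# Illustrative candidate screen: keep only compounds whose predicted
# therapeutic window (toxic dose / effective dose) and predicted efficacy
# both clear minimum bars. All values are invented for illustration.
CANDIDATES = [
    # name, predicted effective dose (mg), predicted toxic dose (mg), predicted efficacy (0-1)
    ("cmpd_a", 10.0, 200.0, 0.6),
    ("cmpd_b", 50.0, 60.0, 0.9),   # effective, but a dangerously narrow window
    ("cmpd_c", 5.0, 25.0, 0.4),
]

def therapeutic_index(effective_dose, toxic_dose):
    """Classic safety-margin ratio: headroom before a dose becomes harmful."""
    return toxic_dose / effective_dose

def screen(candidates, min_index=5.0, min_efficacy=0.5):
    """Keep candidates that clear both the safety-margin and efficacy bars."""
    kept = []
    for name, eff_dose, tox_dose, efficacy in candidates:
        if therapeutic_index(eff_dose, tox_dose) >= min_index and efficacy >= min_efficacy:
            kept.append(name)
    return kept

# cmpd_b fails on its narrow window; cmpd_c clears the window but fails on efficacy
print(screen(CANDIDATES))
```

Jointly applying both bars rejects candidates that would look promising on either measure alone, which is the kind of interdependent filtering the passage argues for.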


Cultivated beef pioneer Mosa Meat goes fat-first in Switzerland

Swiss foodies could soon be served an experimental new delicacy: cultivated burgers. The lab-grown cuisine is the brainchild of Dutch scaleup Mosa Meat. Founded in 2016, the company cultivates beef from cells extracted from cows. The blend is then formed into burgers that are indistinguishable from the mince on supermarket shelves. The lucky cattle, meanwhile, amble back to the farm. Mosa calls the product “the world’s kindest burger.” Cultivated meat could also slash our carbon footprints, but the concept first needs support from regulators around the world. Swiss authorities are the latest target for Mosa. The company announced today that it’s requested a “novel food authorisation” in Switzerland that focuses on one ingredient: cultivated fat. Going fat-first is a local strategy. Like the EU, Switzerland requires cultivated ingredients to be submitted individually for regulatory approval. Fat is a logical starting point. It plays a critical role in delivering the taste, aroma, and mouthfeel of beef, making it essential to the culinary experience. Once approved, the fat can be mixed with plant-based ingredients into beefy products. Maarten Bosch, Mosa’s CEO, told TNW that the company plans to sell burgers formed from the blend. The scaleup is also in talks with plant-based food firms about adding cultivated fat to their products. “By starting with cultivated fat, we are paving the way to bring our first burgers to market while staying true to our long-term vision,” Bosch said.

The cultivated meat market

The Swiss submission marks the latest milestone in Mosa’s journey to commercialise cultivated meat. In 2013, the company’s chief scientific officer, Mark Post, created the world’s first cultivated burger. Costing a whopping €250,000 to make, the patty was also the world’s most expensive burger. Google co-founder Sergey Brin paid the bill. Three years later, Mosa Meat was founded.
Since then, the company has pioneered a cultivation technique that removes the controversial fetal bovine serum, earned the industry’s first B Corp certificate, and raised over €130mn from investors including Leonardo DiCaprio. Mosa is now focusing on routes to market. Last year, the company hosted the first public tasting of cultivated beef in the EU. In January, Mosa submitted the union’s second-ever application to sell cultivated meat. The first was for a lab-grown foie gras made in (where else?) France. Across Europe, however, no cultivated meat for human consumption has been approved for sale yet. Globally, the only countries to have given the green light are Singapore, the US, and Israel. Singapore became the first in 2020. Unlike Switzerland and the EU, the country assesses full cultivated meat products for approval. Mosa’s new application, by contrast, focuses on just the fat. The company expects the approval process to last around 18 months.


These are the skills you should consider learning in 2025

Every two years for the last decade, the World Economic Forum has released a comprehensive and oft-cited report offering insights into the changing nature of the jobs economy. The latest Future of Jobs Report, which covers 2025–2030, combines the viewpoints of more than 1,000 prominent international businesses, which together account for over 14 million workers in 22 sector clusters and 55 economies worldwide. Here are a few key takeaways:

Gen AI & robots

According to the report, by 2030, 60% of employers anticipate that broadening digital access will revolutionise their industry. Employers agreed the following areas were likely to drive business transformation: artificial intelligence and information processing (86%), robotics and automation (58%), semiconductors and computing technologies (20%), and satellites and space technologies (9%). Gen AI remains the hottest and most accessible tech trend, and has seen a rapid surge in both investment and adoption across various sectors. Since ChatGPT was released in November 2022, investment in AI has increased nearly eightfold. And that doesn’t include the significant investment in the physical infrastructure required for AI, including servers and data centres. Some 40% of businesses expect to reduce their personnel in areas where AI can automate jobs, two-thirds aim to hire talent with specialised AI capabilities, and half of employers plan to reorient their business in response to AI. Interestingly, the report features data from Coursera, which shows a steep incline in AI upskilling from April 2023 onwards. Meanwhile, robots and autonomous systems have seen steady growth of 5-7% annually since 2020.
Worldwide, the average robot density was 162 units per 10,000 workers in 2023, twice as many as seven years prior. However, 80% of robot installations are taking place in China, Japan, the United States, the Republic of Korea, and Germany, making this technology trend highly concentrated.

Skills to pay the bills

The report also highlights the increased demand for both technology-related skills and broader workplace skills. In terms of tech skills, big data and AI (yes, again), networks and cybersecurity, and technological literacy are all predicted to be among the fastest-growing skills. Despite all this, AI and big data only placed 11th in the list of core skills for 2025. Top billing goes to analytical thinking (69% of employers agree). Resilience, flexibility, and agility come next, followed by leadership and social influence, highlighting the vital role that flexibility and teamwork play in modern workplaces. Creative thinking, plus motivation and self-awareness, come in fourth and fifth. Completing the top ten are: technological literacy, empathy and active listening, curiosity and lifelong learning, talent management, and service orientation and customer service.

Changing priorities

There have been several notable changes in essential skills since this report’s previous edition in 2023. Relevance has significantly increased for leadership and social influence, AI and big data, talent management, as well as customer service and service orientation. Overall, the most significant increases in relevance have been observed in the areas of leadership and social influence; resilience, flexibility and agility; and AI and big data. Looking further to 2030, tech skills are increasing in importance. Some 87% of employers consider AI and big data to be important during the next five years, 70% say networks and cybersecurity are hot topics, while 68% say technological literacy will be paramount.
Systems thinking also ranks highly (51%), as does design and user experience (45%). Programming ranked lower overall at 27%, though this was higher in the technology services sector and the telecommunications industry. Technological literacy was most valued in automotive and aerospace, financial services and capital markets, followed by medical and healthcare services. When it comes to the importance of networks and cybersecurity, financial services and capital markets is unsurprisingly the top industry, followed by insurance and pensions management, and energy technology and utilities. With all this data, it might be difficult to narrow down where your next upskilling opportunity should lie. However, AI and big data, networks and cybersecurity, and technological literacy all sound like safe bets. It can be helpful to look at what roles are in demand. In percentage terms, the fastest-growing occupations are those in the technology sector, such as software and application developers, fintech engineers, big data specialists, and experts in artificial intelligence and machine learning. Over six million software and applications developer roles are expected to open up between now and 2030, the third-fastest-growing job category after farm workers and truck drivers. If you already have transferable skills and you’re ready to start your job search, the House of Talent Job Board is the perfect place to start. It features Robin, a conversational AI job search agent that can help you locate your next tech position, fast. Robin pops up on the bottom right-hand side of your screen when you’re on the job board, and allows you to search for best-matched jobs using your resume. Not quite sure what you want to do? You can tell it a bit about yourself, your skills, and where you’d like to work to generate some ideas. Ready to find your next software job? Check out The Next Web Job Board.


Ethical AI is turning the Netherlands into an innovation leader

Long admired for its progressive policies and open economy, the Netherlands is making an aggressive play to become Europe’s next tech powerhouse. By blending AI with sustainability and a strong ethical framework, the country attracted $2.5bn in tech investments in 2024 alone — a 39% surge from the previous year. With a government-backed push for responsible innovation, the Netherlands is positioning itself as the epicentre of Europe’s next tech renaissance. According to VC firm Atomico, the country has become one of Europe’s fastest-growing tech ecosystems. Europe’s leading stock exchange by market cap, Euronext Amsterdam, has become a cornerstone of the country’s digital ecosystem. Tech now accounts for 23% of Euronext Amsterdam’s total market — exceeding the New York Stock Exchange’s 14%. Ethical AI is a pivotal aspect of the Netherlands’ tech ambitions. Dutch leaders in the space include Kickstart AI, a collaboration among five major Dutch companies — Ahold Delhaize, ING, KLM, NS, and Philips — that focuses on driving ethical AI innovations that align with societal values and can tackle real-world challenges. Another key initiative, GPT-NL, spearheaded by non-profits TNO, NFI, and SURF, aims to ensure transparent and fair AI usage, adhering to Dutch and European principles of data ownership and ethical standards. The Dutch government has been a key player in these developments. It’s implemented policies that nurture tech growth at every stage — from grants for early-stage startups to tax incentives for R&D activities. Meanwhile, programs like the Dutch Good Growth Fund and the Innovation Box tax scheme encourage businesses to invest in sustainable, high-tech solutions. Last year, the Dutch government unveiled its vision for generative AI, outlining a framework to develop and use this technology responsibly while maintaining control over its societal impacts.
The vision is structured around six key action lines: fostering collaboration among stakeholders; closely monitoring AI advancements; developing appropriate legislation and regulations; expanding AI knowledge and skills (particularly through education); experimenting with generative AI within government in a safe and controlled manner; and ensuring strict supervision with enforcement measures when necessary. “It is essential that the Netherlands does not remain stuck on the sidelines when it comes to artificial intelligence,” said Micky Adriaansens, the Netherlands’ Minister of Economic Affairs and Climate Policy, during a briefing last year. “In particular, generative AI is increasingly developing into one of the most defining technologies of our time, both in everyday life, and for example for application in machines and in more efficient industrial systems. Asia and the US have taken the lead and Europe will have to catch up.” The plan aligns with significant investments — amounting to millions of euros — already made by research institutions, private enterprises, and the government, all focused on keeping pace with the rapid evolution of AI. “The Dutch approach to ethical AI development embodies a distinctly European balance between innovation and privacy rights,” said Krik Gunning, co-founder and CEO of Amsterdam-based digital identity startup Fourthline. “By establishing clear guidelines for data protection and algorithmic transparency through frameworks like the GDPR, Europe has built a foundation of trust crucial for the adoption of AI-driven solutions in the digital identity space.”

A sustainable technology plan

The government has provided further support by investing heavily in smart cities. Amsterdam and Eindhoven lead the way in deploying IoT technologies, 5G networks, and AI-driven solutions to improve urban living.
Another pillar is emerging in The Hague, where a spin-off from the Netherlands Organisation for Applied Scientific Research (TNO) recently unveiled plans to build digital twins of smart cities.

Gunning added that the partnership between the Dutch government and leading universities in Delft and Eindhoven has also been instrumental in fostering innovation. TU Delft works with the Dutch government, industry partners, and other technical universities to develop materials for sustainable energy sources. TU Eindhoven, meanwhile, is at the heart of the Brainport Eindhoven innovation ecosystem, one of Europe’s leading high-tech regions.

“What makes this model particularly effective is its focus on practical innovation — ensuring research translates into real solutions,” Gunning said. “One cool success story of a Dutch university working in partnership with the private sector and the government is ASML.”

On the ethical AI front, initiatives like the Dutch AI Coalition aim to create a collaborative environment where industry, academia, and government work together to harness AI responsibly.

Another promising sector is sustainability. Collectively, Dutch green tech startups attracted a record $700mn in funding in 2024. Companies such as Voltfang, which focuses on renewable energy storage, and Vind, a pioneer in wind energy optimisation, are emerging leaders in the sector. The country is also experimenting with circular economy models, where waste is minimised and resources are reused.

Anders Indset, chairman of Njordis Group, a VC firm investing in technology companies, says the sustainability advances can boost AI progress.

“The Netherlands has a strong focus on renewable energy, which ensures a sustainable energy supply for the development and training of AI models,” Indset told me.
“The availability of eco-friendly energy reduces both costs and environmental impact when training energy-intensive AI systems.”

Retaining AI talent is the Netherlands’ biggest tech hurdle

The Netherlands’ pursuit of becoming an innovation leader in Europe is not without its challenges. While the country has become a magnet for investment — with VC funds like Peak Capital and Speedinvest funding high-impact startups, and institutional investors including pension funds increasingly investing in Dutch tech — its ability to retain skilled talent could impede its growth.

Global tech hubs like Silicon Valley and Shenzhen offer highly lucrative opportunities. To compete with them, the Dutch ecosystem must keep innovating and provide compelling incentives to retain top talent.

“One of our key competitive advantages in attracting global tech talent has been the tax benefits, which enable us to compete effectively with tech hubs like London, Berlin, and Singapore for top specialists in AI, cybersecurity, and fintech,” Gunning explained. “Most international tech professionals tend to only stay in the Netherlands during their peak working years, typically from their late twenties to early forties.”

While Atomico reported that the

Ethical AI is turning the Netherlands into an innovation leader

‘Sorry, I didn’t get that’: AI misunderstands some people’s words more than others

The idea of a humanlike artificial intelligence assistant that you can speak with has been alive in many people’s imaginations since the release of “Her,” Spike Jonze’s 2013 film about a man who falls in love with a Siri-like AI named Samantha. Over the course of the film, the protagonist grapples with the ways in which Samantha, real as she may seem, is not and never will be human.

Twelve years on, this is no longer the stuff of science fiction. Generative AI tools like ChatGPT and digital assistants like Apple’s Siri and Amazon’s Alexa help people get driving directions, make grocery lists, and plenty else. But just like Samantha, automatic speech recognition systems still cannot do everything that a human listener can.

You have probably had the frustrating experience of calling your bank or utility company and needing to repeat yourself so that the digital customer service bot on the other end of the line can understand you. Maybe you’ve dictated a note on your phone, only to spend time editing garbled words.

Linguistics and computer science researchers have shown that these systems work worse for some people than for others. They tend to make more errors if you have a non-native or regional accent, are Black, speak African American Vernacular English, code-switch, are a woman, are elderly or very young, or have a speech impediment.

Tin ear

Unlike you or me, automatic speech recognition systems are not what researchers call “sympathetic listeners.” Instead of trying to understand you by taking in other useful clues like intonation or facial gestures, they simply give up. Or they take a probabilistic guess, a move that can sometimes result in an error.

As companies and public agencies increasingly adopt automatic speech recognition tools in order to cut costs, people have little choice but to interact with them.
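That “probabilistic guess” can be illustrated with a toy sketch: a recogniser scores several candidate transcripts and commits to the highest-scoring one, even when the margin between candidates is slim. The candidate phrases and scores below are invented for illustration, not output from any real model.

```python
# Toy illustration of how a speech recogniser commits to its single
# best-scoring transcript hypothesis, even when alternatives are close.
# Scores here are made-up probabilities, not real model output.

def pick_transcript(hypotheses):
    """Return the highest-scoring hypothesis and its score."""
    best = max(hypotheses, key=hypotheses.get)
    return best, hypotheses[best]

# Candidate transcripts for one hypothetical utterance:
candidates = {
    "call my bank": 0.41,
    "call my bang": 0.38,   # nearly as likely — an easy mis-hear
    "fall my bank": 0.21,
}

best, score = pick_transcript(candidates)
print(best, score)  # the system emits its guess with no follow-up question
```

A human listener facing a 0.38-versus-0.41 split would likely ask a clarifying question; the system simply outputs its top guess.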
But the more these systems come into use in critical fields, ranging from emergency first response and health care to education and law enforcement, the more likely there will be grave consequences when they fail to recognize what people say.

Imagine sometime in the near future you’ve been hurt in a car crash. You dial 911 to call for help, but instead of being connected to a human dispatcher, you get a bot that’s designed to weed out nonemergency calls. It takes you several rounds to be understood, wasting time and raising your anxiety level at the worst moment.

What causes this kind of error to occur? Some of the inequalities that result from these systems are baked into the reams of linguistic data that developers use to build large language models. Developers train artificial intelligence systems to understand and mimic human language by feeding them vast quantities of text and audio files containing real human speech. But whose speech are they feeding them? If a system scores high accuracy rates when speaking with affluent white Americans in their mid-30s, it is reasonable to guess that it was trained using plenty of audio recordings of people who fit this profile.

With rigorous data collection from a diverse range of sources, AI developers could reduce these errors. But building AI systems that can understand the infinite variations in human speech arising from things like gender, age, race, first vs. second language, socioeconomic status, ability and plenty else requires significant resources and time.

‘Proper’ English

For people who do not speak English – which is to say, most people around the world – the challenges are even greater. Most of the world’s largest generative AI systems were built in English, and they work far better in English than in any other language.
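Researchers typically quantify the accuracy gaps described above with word error rate (WER): the number of word-level edits (insertions, deletions, substitutions) needed to turn the system’s transcript into the reference transcript, divided by the reference length. A minimal sketch, with invented transcripts standing in for real recordings:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[-1][-1] / len(ref)

# Invented example: the same sentence, as transcribed for two speakers.
print(wer("turn left at the next light", "turn left at the next light"))  # 0.0
print(wer("turn left at the next light", "turn left and the next slide")) # ≈ 0.33
```

Comparing average WER across speaker groups is how studies demonstrate that these systems serve some populations worse than others.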
On paper, AI has lots of civic potential for translation and for increasing people’s access to information in different languages, but for now, most languages have a smaller digital footprint, making it difficult for them to power large language models.

Even within languages well served by large language models, like English and Spanish, your experience varies depending on which dialect of the language you speak. Right now, most speech recognition systems and generative AI chatbots reflect the linguistic biases of the datasets they are trained on. They echo prescriptive, sometimes prejudiced notions of “correctness” in speech. In fact, AI has been shown to “flatten” linguistic diversity.

There are now AI startups that offer to erase the accents of their users, on the assumption that their primary clientele will be customer service providers with call centers in countries such as India or the Philippines. The offering perpetuates the notion that some accents are less valid than others.

Human connection

AI will presumably get better at processing language, accounting for variables like accents, code-switching and the like. In the U.S., public services are obligated under federal law to guarantee equitable access to services regardless of what language a person speaks. But it is not clear whether that alone will be enough incentive for the tech industry to move toward eliminating linguistic inequities.

Many people might prefer to talk to a real person when asking questions about a bill or medical issue, or at least to have the ability to opt out of interacting with automated systems when seeking key services. That is not to say that miscommunication never happens in interpersonal communication, but when you speak to a real person, they are primed to be a sympathetic listener. With AI, at least for now, it either works or it doesn’t. If the system can process what you say, you are good to go.
If it cannot, the onus is on you to make yourself understood.
