The Next Web TNW

Skin phantoms help researchers improve wearable devices without people wearing them

Wearable devices have become a big part of modern health care, helping track a patient’s heart rate, stress levels and brain activity. These devices rely on electrodes, sensors that touch the skin to pick up electrical signals from the body. Creating these electrodes isn’t as easy as it might seem. Human skin is complex. Its properties, such as how well it conducts electricity, can change depending on how hydrated it is, how old you are or even the weather. These changes can make it hard to test how well a wearable device works.

Additionally, testing electrodes often involves human volunteers, which can be tricky and unpredictable. Everyone’s skin is different, meaning results aren’t always consistent. Testing also takes time and money. Plus, there are ethical concerns about asking people to participate in these experiments, including making sure they are informed about the risks and benefits and can participate voluntarily. Scientists have tried to create artificial skin models to avoid some of these problems, but existing ones haven’t been able to fully mimic the way skin behaves when interacting with wearable sensors. To address these limitations, my colleagues and I have developed a tool called a biomimetic skin phantom – a model that mimics the electrical behavior of human skin, making testing wearable sensors easier, cheaper and more reliable.

What is a skin phantom?

Our biomimetic skin phantom is made of two layers that capture the nuances of both the skin’s surface and deeper tissues. “Biomimetic” means it imitates something from nature – in this case, human skin. “Phantom” refers to a physical model or device made to mimic the properties of something real, such as human tissues, so it can be used for research instead of relying on actual people. The bottom layer mimics the deeper tissues under the skin.
It is made from a gel-like substance called polyvinyl alcohol cryogel, which can be adjusted to have softness and electrical conductivity similar to real biological tissues. We chose this material because these qualities, along with its durability and wide use in biomedical research, make it a good stand-in for the deeper layers of skin.

The top layer mimics the outermost part of the skin, known as the stratum corneum. It is made from a silicone-like material called PDMS, which is mixed with special additives to match the skin’s electrical properties. Also widely used in biomedical research, PDMS is flexible and easy to shape to closely replicate the skin’s outer layer.

One unique feature of our skin phantom is its ability to mimic different levels of skin hydration. Hydration affects how well skin conducts electricity. Dry skin has higher resistance, meaning it opposes the flow of electricity. This makes it harder for wearable devices to pick up signals. Hydrated skin conducts electricity more easily because water improves the movement of charged particles, leading to better signal quality. Improving how dry skin is modeled and tested can lead to better electrode designs. To replicate the effects of skin hydration, we introduced adjustable pores into the top PDMS layer of the skin phantom. By precisely changing the size and density of the pores, the model can mimic dry or hydrated skin conditions.

Testing the skin phantom

My team and I tested our skin phantom in several ways to see whether it could truly replace human skin in experiments. First, we used a method called impedance spectroscopy to study the phantom’s electrical properties. This technique applies alternating electrical signals at different frequencies and measures the material’s resistance to electrical flow, providing a detailed profile of its electrical behavior.
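The frequency-dependent behavior that impedance spectroscopy measures can be pictured with a textbook circuit model of skin: a small series resistance standing in for deeper tissue, plus a resistor and capacitor in parallel standing in for the outer layer. A minimal sketch of that idea (the component values and function name here are illustrative assumptions, not measurements or code from the study):

```python
import numpy as np

def skin_impedance(freq_hz, r_outer=100e3, c_outer=30e-9, r_series=1e3):
    """Impedance magnitude of a simple skin model: a series resistance
    (deeper tissue) plus a parallel resistor-capacitor pair (outer layer).
    At low frequencies the capacitor blocks current and the resistance
    dominates; at high frequencies the capacitor shorts it out."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_parallel = r_outer / (1 + 1j * w * r_outer * c_outer)
    return float(np.abs(r_series + z_parallel))
```

In this toy model, hydration can be mimicked the way the article describes pores working: a lower outer-layer resistance (e.g. `skin_impedance(100, r_outer=50e3)`) yields a smaller impedance than a “dry” setting such as `r_outer=500e3`, so signals pass more easily.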
Results from the experiments we conducted on five volunteers showed that the phantom’s impedance response closely mirrored that of human skin across both dry and hydrated conditions, with a difference of less than 20% between real skin and the phantom.

We also tested whether wearable devices could pick up signals from the skin phantom and how signal quality changed with different skin conditions. To do this, we recorded electrocardiogram signals on phantoms designed to mimic dry and hydrated skin. The results showed clear differences in signal quality: The phantom simulating dry skin had a lower signal-to-noise ratio, while the hydrated skin phantom showed better signal clarity. These findings are consistent with previous studies from other researchers.

Together, these results show that our skin phantom closely replicates the way human skin responds to wearable sensors across a range of conditions, including dry and hydrated states. This accuracy makes it an optimal stand-in for real skin in the lab.

Wearable technology

The skin phantom is more than just a testing tool – it’s a step forward for wearable health technology. By removing the unpredictability of human testing, scientists can design and improve wearable devices more quickly and effectively. They can also use it to study how skin interacts with medical devices, such as patches that deliver medicine or advanced diagnostic tools.

Our skin phantom is also simple and inexpensive. Each phantom costs less than US$3 and can be made with standard lab materials and tools. It can be reused multiple times within the same day without significant changes in its electrical properties, though extended use over several days may require adjustments, such as rehydration, to maintain stable performance. This affordability and reusability make the phantom more accessible for labs with limited budgets or resources.
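The signal-to-noise comparison at the heart of the ECG test can be illustrated in a few lines. This sketch uses a toy sine wave and synthetic noise, not the study’s recordings; the noise levels for the “dry” and “hydrated” conditions are made-up assumptions chosen only to show how the metric behaves:

```python
import numpy as np

def snr_db(signal, recorded):
    """Signal-to-noise ratio in decibels: power of the clean signal divided
    by the power of the residual noise. Higher means a clearer trace."""
    signal = np.asarray(signal, dtype=float)
    noise = np.asarray(recorded, dtype=float) - signal
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

t = np.linspace(0, 1, 500)
ecg = np.sin(2 * np.pi * 1.2 * t)                  # toy stand-in for an ECG trace
rng = np.random.default_rng(0)
hydrated = ecg + 0.05 * rng.standard_normal(500)   # low-noise "hydrated skin" recording
dry = ecg + 0.40 * rng.standard_normal(500)        # high-noise "dry skin" recording
```

With these assumed noise levels, `snr_db(ecg, hydrated)` comes out well above `snr_db(ecg, dry)`, matching the direction of the result reported in the article.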
As wearable technology becomes more common in health care, tools such as the skin phantom can help make devices more reliable, accessible and personalized for everyone.


Deepfake detection improves when using algorithms that are more aware of demographic diversity

Deepfakes – essentially putting words in someone else’s mouth in a very believable way – are becoming more sophisticated by the day and increasingly hard to spot. Recent examples of deepfakes include Taylor Swift nude images, an audio recording of President Joe Biden telling New Hampshire residents not to vote, and a video of Ukrainian President Volodymyr Zelenskyy calling on his troops to lay down their arms. Although companies have created detectors to help spot deepfakes, studies have found that biases in the data used to train these tools can lead to certain demographic groups being unfairly targeted.

My team and I discovered new methods that improve both the fairness and the accuracy of the algorithms used to detect deepfakes. To do so, we used a large dataset of facial forgeries that lets researchers like us train our deep-learning approaches. We built our work around the state-of-the-art Xception detection algorithm, which is a widely used foundation for deepfake detection systems and can detect deepfakes with an accuracy of 91.5%.

We created two separate deepfake detection methods intended to encourage fairness. One was focused on making the algorithm more aware of demographic diversity by labeling datasets by gender and race to minimize errors among underrepresented groups. The other aimed to improve fairness without relying on demographic labels by focusing instead on features not visible to the human eye.

It turns out the first method worked best. It increased accuracy rates from the 91.5% baseline to 94.17%, a bigger increase than our second method and several others we tested achieved. Moreover, it increased accuracy while enhancing fairness, which was our main focus. We believe fairness and accuracy are crucial if the public is to accept artificial intelligence technology.
When large language models like ChatGPT “hallucinate,” they can perpetuate erroneous information. This affects public trust and safety. Likewise, deepfake images and videos can undermine the adoption of AI if they cannot be quickly and accurately detected. Improving the fairness of these detection algorithms so that certain demographic groups aren’t disproportionately harmed by them is a key aspect of this.

Our research addresses deepfake detection algorithms’ fairness, rather than just attempting to balance the data. It offers a new approach to algorithm design that considers demographic fairness as a core aspect.
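The article describes the demographic-aware method only at a high level. One common way such awareness is encoded in training is to weight the loss by group frequency, so errors on underrepresented groups count for more. A minimal sketch of that general idea (the function names and the inverse-frequency scheme are illustrative assumptions, not the authors’ actual loss):

```python
import numpy as np

def group_weights(group_labels):
    """Inverse-frequency weights: rarer demographic groups get larger
    weights, so mistakes on underrepresented groups are penalised more.
    Weights are scaled so they average to 1 across groups."""
    groups, counts = np.unique(group_labels, return_counts=True)
    inv = 1.0 / counts
    w = inv / inv.sum() * len(groups)
    return dict(zip(groups.tolist(), w.tolist()))

def weighted_bce(y_true, y_prob, group_labels, weights):
    """Binary cross-entropy where each sample is scaled by its group weight."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), 1e-7, 1 - 1e-7)
    w = np.array([weights[g] for g in group_labels])
    bce = -(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
    return float(np.mean(w * bce))
```

For example, in a batch with nine samples from group “a” and one from group “b”, the lone “b” sample receives nine times the weight of each “a” sample, pushing the detector to reduce its error rate on the smaller group.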


AI battery brain promises to jumpstart European EVs

A German startup plans to jumpstart European EVs with an AI-powered brain. Sphere Energy built the system to simulate battery behaviour. The company then predicts a power source’s lifetime in numerous scenarios, from driving styles to temperatures on the road.

According to Sphere, the insights shrink the battery testing cycle by at least a year. Developing a car, meanwhile, could be completed “at least” twice as quickly. Sphere envisions endless benefits: manufacturers will save millions, car prices will plummet, and innovations will increase at exponential rates.

The startup’s co-founder, Lukas Lutz, said the plans are unprecedented. “Nobody right now — not even Tesla — can accurately estimate the lifetime of their battery,” Lutz told TNW. “This is something that will be really groundbreaking.”

A lifeline for European EVs?

Sphere unveiled the project last month at the IBM Research Lab in Switzerland. In a futuristic facility overlooking Lake Zurich, the startup introduced an AI brain called Batty. Batty was initially trained on years of testing data from over 1,000 batteries. Car manufacturers also mix in their own information. The system then simulates a specific battery’s life under various conditions. Customers can test the effects of speeding down motorways and crawling around mountains, applying fast and slow chargers, driving in searing summers and freezing winters. Every aspect will impact the battery’s degradation.

The system’s power derives from the transformer architecture — the cornerstone of today’s large language models (LLMs). But Sphere’s approach doesn’t rely solely on text. The startup extends the model’s scope by integrating time-series data. As a result, the system can simulate a battery’s behaviour over years. The approach adds a new twist to the LLM paradigm. While a chatbot predicts the next best word, Batty will predict the next best data point. Car companies have been impressed by the results.
According to Sphere, the majority of European manufacturers have already used the tech. Batty could provide a vital boost to the continent’s EV makers, which are rapidly losing market share to their Chinese rivals. “Battery development is a huge pain for them — and it shouldn’t be,” Lutz said. “We really want to take away the burden.”

But batteries are just the start of Sphere’s ambitions. The company envisions simulating endless energy applications, from electric boats to grid storage. Alongside IBM, the startup is also exploring new levels of simulating batteries. “With these foundation AI models, we understand atomic level behaviour intrinsically,” Lutz said. “But we want to go sub-atomic — with quantum.”
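The “next best data point” idea can be made concrete: battery telemetry is framed exactly like next-token prediction, with each window of past measurements serving as the prompt and the following measurement as the target. The sketch below shows that data flow using a linear least-squares fit as a deliberately simple stand-in for the transformer (Sphere’s actual model is not public, and all names here are illustrative):

```python
import numpy as np

def make_next_step_pairs(series, window=8):
    """Frame a telemetry series as next-token prediction: each input is a
    window of past measurements, each target is the value that follows."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array([series[i + window] for i in range(len(series) - window)])
    return X, y

def fit_and_forecast(series, window=8, steps=5):
    """Fit a stand-in predictor on (window -> next value) pairs, then roll
    it forward autoregressively, feeding each prediction back in -- the
    same loop an LLM uses to generate one word after another."""
    X, y = make_next_step_pairs(series, window)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    history = list(series)
    for _ in range(steps):
        history.append(float(np.dot(coef, history[-window:])))
    return history[len(series):]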

AI battery brain promises to jumpstart European EVs Read More »

The NBA is testing a new smart basketball made in Europe

The NBA is experimenting with a digital brain for basketballs. The system is the brainchild of SportIQ, a Finnish startup that develops smart basketballs. Inside each ball’s valve, SportIQ embeds a sensor that tracks a player’s shots. Data is first extracted on their form, position, angle, power, and technique. Next, the information is fed to a mobile app for AI analysis. Players then receive direct feedback and advice. According to SportIQ, over 20 million shots have already been tracked. The company estimates that regular users improve their shooting accuracy by 12%. The results impressed bigwigs at the NBA. They announced this week that SportIQ has been selected for Launchpad, the league’s tech incubator. During the six-month program, SportIQ will gets hands-on support and resources from the NBA to develop its tech. It all culminates with a presentation to the league’s executives, partners, and investors during the prestigious NBA Summer League. Erik Anderson, CEO of SportIQ, said the process will integrate his company’s system at basketball’s highest level. “This partnership opens doors to opportunities that are rare for startups,” Anderson told TNW. “It positions us to enhance officiating, provide deeper analytics for teams, and elevate the fan experience — all while staying true to our vision of making basketball smarter and more connected.” Building smarter basketballs Basketball is at the root of SportIQ. The startup’s founder, Harri Hohteri, is a former professional player and computer scientist. The sport is also ripe for data-driven disruption. “Basketball has a gap in analytics solutions, particularly at the consumer level, compared to other sports,” Anderson said. “This provides a clear opportunity to bring innovative and accessible tools to players, coaches, and fans alike, revolutionising the way the game is understood and played.” Credit: SportIQAnderson (left) and Hohteri believe basketball is underserved by analytics. 
Credit: SportIQ SportIQ is also a rare example of a European consumer tech firm breaking into the US. According to the startup, thousands of Americans buy its smart basketballs a year. Launchpad provides a chance to increase those numbers. SportIQ is the only European company in the program this year. Joining the Finish startup are OneCourt, which translates gameplay into haptics and generative audio for vision-impaired fans; VReps, an education platform that improves basketball IQ; Somnee, which has developed a clinical-grade sleep diagnostic and therapeutic headband; and Trashie, a clothes recycling and rewards platform. The squad earned their spots after pitching innovations that address Launchpad’s key objectives. For the sport itself, the program is prioritising the future of officiating, youth basketball, and player health. For the business of basketball, meanwhile, the focus shifts to the future of media, fan connection, and impact. SportIQ will also benefit from targeting these objectives. By joining Launchpad, the company hopes to expand its product lines, usage cases, revenue streams, and technological capabilities. But the NBA is just a starting point for SportIQ. The company is already planning to expand into new markets, and its sensor system can adapt to numerous sports. As Anderson puts it: “Every ball can be smart.” source

The NBA is testing a new smart basketball made in Europe Read More »

DeepSeek: China’s gamechanging AI system has big implications for UK tech development

DeepSeek sent ripples through the global tech landscape this week as it soared above ChatGPT in Apple’s app store. The meteoric rise has shifted the dynamics of US-China tech competition, shocked global tech stock valuations, and reshaped the future direction of artificial intelligence (AI) development. Among the industry buzz created by DeepSeek’s rise to prominence, one question looms large: what does this mean for the strategy of the third leading global nation for AI development – the United Kingdom? The generative AI era was kickstarted by the release of ChatGPT on November 30 2022, when large language models (LLMs) entered mainstream consciousness and began reshaping industries and workflows, while everyday users explored new ways to write, brainstorm, search and code. We are now witnessing the “DeepSeek moment” – a pivotal shift that demonstrates the viability of a more efficient and cost-effective approach for AI development. DeepSeek isn’t just another AI tool. Unlike ChatGPT and other major LLMs developed by tech giants and AI startups in the USA and Europe, DeepSeek represents a significant evolution in the way AI models are developed and trained. Most existing approaches rely on large-scale computing power and datasets (used to “train” or improve the AI systems), limiting development to very few extremely wealthy market players. DeepSeek not only demonstrates a significantly cheaper and more efficient way of training AI models, its open-source “MIT” licence (after the Massachusetts Institute of Technology where it was developed) allows users to deploy and develop the tool. This helps democratise AI, taking up the mantle from US company OpenAI – whose initial mission was “to build artificial general intelligence (AGI) that is safe and benefits all of humanity” – enabling smaller players to enter the space and innovate. 
By making cutting-edge AI development accessible and affordable to all, DeepSeek has reshaped the competitive landscape, allowing innovation to flourish beyond the confines of large, resource-rich organisations and countries. It has also set a new benchmark for efficiency in its approach, by training its model at a fraction of the cost, and matching – even surpassing – the performance of most existing LLMs. By employing innovative algorithms and architectures, it is delivering superior results with significantly lower computational demands and environmental impact. Why DeepSeek matters DeepSeek was conceived by a group of quantitative trading experts in China. This unconventional origin holds lessons for the UK and US. While the UK – particularly London – has long attracted scientific and technological excellence, many of the highest achieving young graduates have tended to disproportionately opt for careers in finance, something that has come the expense of innovation in other critical sectors such as AI. Diversifying the pathways for Stem (science, technology, engineering and maths) professionals could yield transformative outcomes. The UK government’s recent and much-publicised 50-point action plan on AI offers glimpses of progressive intent, but also displayed a lack of boldness to drive real change. Incremental steps are not sufficient in such a fast-moving environment. The UK needs a new plan – one that leverages its unique strengths while addressing systemic weaknesses. Firstly, it’s important to recognise that the UK’s comparative advantage lies in its leading interdisciplinary expertise. World-class universities, thriving fintech and dynamic professional services and creative sectors offer fertile ground for AI applications that extend beyond traditional tech silos. The intersection of AI with finance, law, creative industries and medicine presents opportunities to lead in some niche but high-impact areas. 
The UK’s funding and regulatory frameworks are due an overhaul. DeepSeek’s development underscores the importance of agile, well-funded ecosystems that can support big, ambitious “moonshot” projects. Current UK funding mechanisms are bureaucratic and fragmented, favouring incremental innovations over radical breakthroughs, at times stifling innovation rather than nurturing it. Simplifying grant applications and offering targeted tax incentives for AI startups would represent a healthy start. Finally, it will be critical for the UK to keep its talent in the country. The UK’s AI sector faces a brain drain as top talent gravitates toward better-funded opportunities in the US and China. Initiatives such as public-private partnerships for AI research development can help anchor talent at home. DeepSeek’s rise is an excellent example of strategic foresight and execution. It doesn’t merely aim to improve existing models, but redefines the very boundaries of how AI could be developed and deployed – while demonstrating efficient, cost-effective approaches that can yield astounding results. The UK should adopt a similarly ambitious mindset, focusing on areas where it can set global standards rather than playing catch-up. AI’s geopolitics cannot be ignored either. As the US and China compete with one another, the UK has a critical role to play as the trusted intermediary and ethical leader in AI governance. By championing transparent AI standards and fostering international collaboration, the UK can punch above its weight on the global stage. DeepSeek’s success should serve as a wake-up call. Britain has the talent, institutions and entrepreneurial spirit to be a significant leading player in AI – but it must act decisively, and now. It is time to remove token gestures and embrace bold strategies that move the needle and position the UK as a leader in an AI-driven future. This moment calls for action, not just more conversation. DeepSeek has raised the bar. 
It is now up to the UK to meet it. source

DeepSeek: China’s gamechanging AI system has big implications for UK tech development Read More »

European AI allies unveils LLM alternative to Big Tech, DeepSeek

As China’s DeepSeek threatens to dismantle Silicon Valley’s AI monopoly, a European alliance has emerged with an alternative to tech’s global order. They call their project OpenEuroLLM. Like DeepSeek, they aim to develop next-generation open-source language models — but their agenda is very different. Their mission: forging European AI that will foster digital leaders and impactful public services across the continent. To support these objectives, OpenEuroLLM is building a family of high-performing, multilingual large language foundation models. The models will be available for commercial, industrial, and public services. Over 20 leading European research institutions, companies, and high-performance computing (HPC) centres have enlisted in the the project. Leading their alliance is Jan Hajič, a renowned computational linguist at Charles University, Czechia, and Peter Sarlin, the co-founder of Silo AI, Europe’s largest private AI lab, which was acquired last year by US chipmaker AMD for $665mn. Webinar: Nurturing Scaleup Success Join us on 18 February for a discussion on the vital role of ecosystems in nurturing startups and scaleups and fostering a dynamic entrepreneurial landscape. They’re joined by an array of European tech luminaries. Among them are Aleph Alpha, the leading light of Germany’s AI sector, Finland’s CSC, which hosts one of the world’s most powerful supercomputers., and France’s Lights On, which recently became Europe’s first publicly-traded GenAI company. Their alliance has been backed by the European Commission. According to Sarlin, the initiative could be the Commission’s largest-ever AI project.  “What’s unique about this initiative is that we’re bringing together many Europe’s leading AI organisations in one focused effort, rather than having many small, fragmented projects,” he told TNW via email. 
“This concentrated approach is what Europe needs to build open European AI models that eventually enable innovation at scale.” The project has a budget of €52mn, as well as compute commitment that may have a larger monetary value, Sarlin said. Alongside funding from the Commission, OpenEuroLLM has received support from STEP, an EU scheme to boost investment in strategic technologies. The project also aligns with the EU’s plans to fortify Europe’s digital sovereignty, which is becoming vulnerable. Europe’s AI future With China and the US developing new AI capabilities at breakneck speeds, Europe faces an uncertain future in the digital landscape. OpenEuroLLM hopes to strengthen the continent’s position with new digital infrastructure. The project has also pledged to embed AI with European values of democracy, transparency, openness, and community involvement. According to OpenEuroLLM, the models, software, data, and evaluation will be fully open. They will also be capable of fine-tuning and instruction-tuning for specific industry and public sector needs. Additionally, the alliance promises to preserve both linguistic and cultural diversity. The plans arrive in testing times for European tech. With US and Chinese firms racing to deliver new AI breakthroughs, fears are growing that European companies, economies, and even culture are under threat.  Sarlin wants OpenEuroLLM to bring new hope to the continent. ”This isn’t about creating a general purpose chatbot — it’s about building the digital and AI infrastructure that enables European companies to innovate with AI,” he said.  “Whether it’s a healthcare company developing specialised assistants to medical doctors or a bank creating personalised financial services, they need AI models adapted to the context in which they operate, and that they can control and own. “This project is about giving European businesses tools to build models and solutions in their languages that they own and control.” source

European AI allies unveils LLM alternative to Big Tech, DeepSeek Read More »

AI-driven battery brain promises to jumpstart European EVs

A German startup plans to jumpstart European EVs with an AI-powered brain. Sphere Energy built the system to simulate battery behaviour. The company then predicts a power source’s lifetime in numerous scenarios, from driving styles to temperatures on the road.  According to Sphere, the insights shrink the battery testing cycle by at least a year. Developing a car, meanwhile, could be completed “at least” twice as quickly. Sphere envisions endless benefits: manufacturers will save millions, car prices will plummet, and innovations will increase at exponential rates. The 💜 of EU tech The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now! The startup’s co-founder, Lukas Lutz, said the plans are unprecedented. “Nobody right now — not even Tesla — can accurately estimate the lifetime of their battery,” Lutz told TNW. “This is something that will be really groundbreaking.” A lifeline for European EVs? Sphere unveiled the project last month at the IBM Research Lab in Switzerland. In a futuristic facility overlooking Lake Zurich, the startup introduced an AI brain called Batty. Batty was initially trained on years of testing data from over 1,000 batteries. Car manufacturers also mix in their own information. The system then simulates a specific battery’s life under various conditions. Customers can test the effects of speeding down motorways and crawling around mountains, applying fast and slow chargers, driving in searing summers and freezing winters. Every aspect will impact the battery’s degradation. The system’s power derives from the transformer architecture — the founding stone of today’s large language models (LLMs). But Sphere’s approach doesn’t rely solely on text. The startup extends the model’s scope by integrating time-series data. As a result, the system can simulate a battery’s behaviour over years. The approach adds a new twist to the LLM paradigm. 
While a chatbot predicts the next best word, Batty will predict the next best data point. Car companies have been impressed by the results. According to Sphere, the majority of European manufacturers have already used the tech. Batty could provide a vital boost to the continent’s EV makers, which are rapidly losing market share to their Chinese rivals. “Battery development is a huge pain for them — and it shouldn’t be,” Lutz said. “We really want to take away the burden.” But batteries are just the start of Sphere’s ambitions. The company envisions simulating endless energy applications, from electric boats to grid storage. Alongside IBM, the startup is also exploring new levels of simulating batteries. “With these foundation AI models, we understand atomic level behaviour intrinsically,” Lutz said. “But we want to go sub-atomic — with quantum.” source

AI-driven battery brain promises to jumpstart European EVs Read More »

We’re getting closer to having practical quantum computers – here’s what they will be used for

In 1981, American physicist and Nobel Laureate, Richard Feynman, gave a lecture at the Massachusetts Institute of Technology (MIT) near Boston, in which he outlined a revolutionary idea. Feynman suggested that the strange physics of quantum mechanics could be used to perform calculations. The field of quantum computing was born. In the 40-plus years since, it has become an intensive area of research in computer science. Despite years of frantic development, physicists have not yet built practical quantum computers that are well suited for everyday use and normal conditions (for example, many quantum computers operate at very low temperatures). Questions and uncertainties still remain about the best ways to reach this milestone. What exactly is quantum computing, and how close are we to seeing them enter wide use? Let’s first look at classical computing, the type of computing we rely on today, like the laptop I am using to write this piece. Classical computers process information using combinations of “bits”, their smallest units of data. These bits have values of either 0 or 1. Everything you do on your computer, from writing emails to browsing the web, is made possible by processing combinations of these bits in strings of zeroes and ones. Quantum computers, on the other hand, use quantum bits, or qubits. Unlike classical bits, qubits don’t just represent 0 or 1. Thanks to a property called quantum superposition, qubits can be in multiple states simultaneously. This means a qubit can be 0, 1, or both at the same time. This is what gives quantum computers the ability to process massive amounts of data and information simultaneously. Imagine being able to explore every possible solution to a problem all at once, instead of once at a time. It would allow you to navigate your way through a maze by simultaneously trying all possible paths at the same time to find the right one. 
Quantum computers are therefore incredibly fast at finding optimal solutions, such as identifying the shortest path, the quickest way. Different qubits can be linked via the quantum phenomenon of entanglement. Jurik Peter / Shutterstock Think about the extremely complex problem of rescheduling airline flights after a delay or an unexpected incident. This happens with regularity in the real world, but the solutions applied may not be the best or optimal ones. In order to work out the optimal responses, standard computers would need to consider, one by one, all possible combinations of moving, rerouting, delaying, cancelling or grouping, flights. Every day there are more than 45,000 flights, organised by over 500 airlines, connecting more than 4,000 airports. This problem would take years to solve for a classical computer. On the other hand, a quantum computer would be able to try all these possibilities at once and let the best configuration organically emerge. Qubits also have a physical property known as entanglement. When qubits are entangled, the state of one qubit can depend on the state of another, no matter how far apart they are. This is something that, again, has no counterpart in classical computing. Entanglement allows quantum computers to solve certain problems exponentially faster than traditional computers can. Read more: Brain implants, agentic AI and answers on dark matter: what to expect from science in 2025 – podcast A common question is whether quantum computers will completely replace classical computers or not. The short answer is no, at least not in the foreseeable future. Quantum computers are incredibly powerful for solving specific problems – such as simulating the interactions between different molecules, finding the best solution from many options or dealing with encryption and decryption. However, they are not suited to every type of task. 
Classical computers process one calculation at a time in a linear sequence, and they follow algorithms (sets of mathematical rules for carrying out particular computing tasks) designed for use with classical bits that are either 0 or 1. This makes them extremely predictable, robust and less prone to errors than quantum machines. For everyday computing needs such as word processing or browsing the internet, classical computers will continue to play a dominant role.

There are at least two reasons for that. The first is practical. Building a quantum computer that can run reliable calculations is extremely difficult. The quantum world is incredibly volatile, and qubits are easily disturbed by things in their environment, such as interference from electromagnetic radiation, which makes them prone to errors.

The second reason lies in the inherent uncertainty of dealing with qubits. Because qubits are in superposition (neither a 0 nor a 1), they are not as predictable as the bits used in classical computing. Physicists therefore describe qubits and their calculations in terms of probabilities. This means that the same problem, using the same quantum algorithm, run multiple times on the same quantum computer, might return a different solution each time.

To address this uncertainty, quantum algorithms are typically run multiple times. The results are then analysed statistically to determine the most likely solution. This approach allows researchers to extract meaningful information from inherently probabilistic quantum computations.

From a commercial point of view, the development of quantum computing is still in its early stages, but the landscape is very diverse, with lots of new companies appearing every year. It is fascinating to see that, in addition to big, established companies like IBM and Google, new ones are joining, such as IQM and Pasqal, and startups such as Alice and Bob. They are all working on making quantum computers more reliable, scalable and accessible.
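The run-many-times-and-analyse-statistically approach described above can be sketched in a few lines of Python. The 60% success rate and the bitstrings here are invented numbers for illustration, not measurements from any real device.

```python
import random
from collections import Counter

# Quantum results are probabilistic: the same circuit run twice can return
# different bitstrings. Hypothetically, suppose the "right" answer is "101",
# but noise and superposition mean each run returns it only ~60% of the time.

def run_noisy_circuit() -> str:
    """Stand-in for one execution (a "shot") of a quantum circuit."""
    if random.random() < 0.6:
        return "101"                               # the most likely outcome
    return random.choice(["000", "110", "011"])    # spurious outcomes

# Standard practice: run many shots, then read off the most frequent result.
shots = Counter(run_noisy_circuit() for _ in range(2000))
best, count = shots.most_common(1)[0]
print(best)  # "101" emerges clearly from the statistics
```

Even though any single shot is unreliable, the correct answer dominates once enough shots are aggregated; this is why quantum results are usually reported as distributions over many runs.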
A range of companies are working towards building practical quantum computers. ANNA SZILAGYI / EPA IMAGES

In the past, manufacturers have drawn attention to the number of qubits in their quantum computers as a measure of how powerful the machine is. Manufacturers are now increasingly prioritising ways to correct the errors that quantum computers are prone to. This shift is crucial for developing large-scale, fault-tolerant quantum computers, as error-correction techniques are essential for improving their usability. Google's latest quantum chip,

We’re getting closer to having practical quantum computers – here’s what they will be used for

Knowing less about AI makes people more open to having it in their lives

The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it. Surprisingly, our new research, published in the Journal of Marketing, finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.

This link shows up across different groups, settings and even countries. For instance, our analysis of data from the market research company Ipsos, spanning 27 countries, reveals that people in nations with lower average AI literacy are more receptive towards AI adoption than those in nations with higher literacy. Similarly, our survey of US undergraduate students finds that those with less understanding of AI are more likely to report using it for tasks like academic assignments.

The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response or plays a musical instrument, it can feel almost magical – like it’s crossing into human territory. Of course, AI doesn’t actually possess human qualities. A chatbot might generate an empathetic response, but it doesn’t feel empathy.

People with more technical knowledge about AI understand this. They know how algorithms (sets of mathematical rules used by computers to carry out particular tasks), training data (used to improve how an AI system works) and computational models operate. This makes the technology less mysterious. On the other hand, those with less understanding may see AI as magical and awe-inspiring. We suggest this sense of magic makes them more open to using AI tools.
Our studies show this lower literacy-higher receptivity link is strongest for uses of AI in areas people associate with human traits, like providing emotional support or counselling. When it comes to tasks that don’t evoke the same sense of human-like qualities – such as analysing test results – the pattern flips. People with higher AI literacy are more receptive to these uses because they focus on AI’s efficiency rather than any “magical” qualities.

The researchers carried out surveys with a number of different groups, including undergraduates. Owlie Productions / Shutterstock

It’s not about capability, fear or ethics

Interestingly, the link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical and even a bit scary. Their openness to AI seems to stem from their sense of wonder about what it can do, despite these perceived drawbacks.

This finding offers new insights into why people respond so differently to emerging technologies. Some studies suggest consumers favour new tech, a phenomenon called “algorithm appreciation”, while others show scepticism, or “algorithm aversion”. Our research points to perceptions of AI’s “magicalness” as a key factor shaping these reactions.

These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption. To make the most of AI’s potential, businesses, educators and policymakers need to strike this balance. By understanding how perceptions of “magicalness” shape people’s openness to AI, we can develop and deploy new AI-based products and services that take the way people view AI into account, and help them understand the benefits and risks of AI.
And ideally, this will happen without losing the sense of awe that inspires many people to embrace this new technology.


Air traffic control for drones in sight for Norwegian startup AirDodge

Remember when spotting a drone in the sky was a novelty? Now it’s like playing whack-a-mole with flying machines. Delivery drones, military drones, AI drones, hobby drones — our skies are busier than the queue at airport security. Without air traffic control, we’re one step away from midair collisions and drones arguing over parking spots.

Enter AirDodge, a Norwegian startup that’s stepping in to tame the chaos. The Oslo-based company just secured a $500,000 pre-seed funding round, led by VC firms Nordic Makers and Antler. The investment will help AirDodge develop its U-Space software platform, designed to manage large-scale drone operations across Europe.

“At AirDodge, we envision a future where drones seamlessly integrate into the airspace, contributing positively to various industries while ensuring safety and compliance,” said Umar Chughtai, who founded AirDodge in 2022. “This funding will allow us to accelerate the development of our U-Space platform, bringing us closer to realising that vision.”

The AirDodge platform provides a real-time map of drone activity and aims to simplify the process of obtaining flight permissions. The tech aligns with the EU’s U-Space standards, which are “designed to provide safe, efficient and secure access to airspace for large numbers of unmanned aircraft, operating automatically and beyond visual line of sight.”

In 2018, London’s Gatwick airport was forced to shut down after drones were spotted flying near the runway. The incident affected around 1,000 flights and 140,000 passengers. Many similar incidents have occurred over the years, from Stockholm to Frankfurt. If AirDodge’s tech had been around during the Gatwick fiasco, it could’ve spotted the rogue drones in real time, flagged them faster than airport security can confiscate a water bottle, and perhaps kept flights running smoothly.
By enforcing no-fly zones and syncing drones with air traffic control, the platform might have saved 140,000 passengers a lot of headaches (and missed connections).

“Drone technology has the potential to have a positive impact on society, business and public services, but there is not yet a way to guarantee safety,” said Kristian Jul Røsjø, partner at Antler. “High-profile disruptions are hindering the development of this technology and AirDodge will provide a much-needed solution.”

AirDodge will use the pre-seed funding to accelerate the development of its platform. The company aims to launch the alpha version in mid-2025.

Across the EU, the market for drone services is soaring, with one projection valuing it at €14.5bn by 2030 and bringing in 145,000 new jobs. But as drones proliferate, so do the challenges.

“We have the drones, but we lack the infrastructure,” said Nima Tisdall, partner at Nordic Makers. “In this case, the infrastructure is not roads, plumbing, or electrical wires; but rather ethereal communication systems.” Tisdall added that AirDodge had unusual strengths for the region. “The founding team is forceful and ambitious — qualities that can be surprisingly rare to find in Nordic entrepreneurs, but integral in building a category-winning business,” she said. “We’re excited to be supporting a local player who can help unlock the large-scale adoption of drones across Europe.”
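As a purely hypothetical illustration of one building block of such a platform, here is how a basic no-fly-zone check might look. The function names, coordinates and the 5 km radius are invented for this sketch; they come from neither AirDodge’s software nor the U-Space specification.

```python
import math

# Hypothetical geofence check: is a drone inside a circular restricted zone?

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

# An illustrative restricted zone: 5 km around Gatwick (approx. coordinates).
GATWICK = (51.1537, -0.1821)
RADIUS_KM = 5.0

def in_no_fly_zone(lat: float, lon: float) -> bool:
    return haversine_km(lat, lon, *GATWICK) <= RADIUS_KM

print(in_no_fly_zone(51.16, -0.19))  # a drone near the runway -> True
print(in_no_fly_zone(51.50, -0.12))  # central London -> False
```

A production system would of course use official airspace geometry, real-time telemetry and regulatory data rather than a single hard-coded circle, but the core question — “is this position inside restricted airspace?” — reduces to checks like this one.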
