The Next Web (TNW)

Battery recycling startup Tozero bags €11M to boost Europe’s lithium supply

In 1991, Sony brought the first rechargeable lithium-ion battery to market. The unique chemistry proved a game-changer in energy storage. Today everything from EVs to smartphones depends on it, and demand is skyrocketing.

But lithium is rare, most of it comes from unstable markets outside Europe, and its extraction can cause extensive pollution. Europe needs more lithium to enable the green transition, yet its current use is unsustainable, both environmentally and economically. We're stuck in a paradox.

Munich-based startup Tozero believes that battery recycling offers a way out. Recycling batteries is far from a new concept, but the German venture claims its technology gets the job done more efficiently than existing methods, and without the use of harmful acids.

Tozero was founded in 2022 by serial entrepreneur Sarah Fleischer and metallurgy expert Dr Ksenija Milicevic Neumann. When the pair first met, they were working in the space industry. Three years later, they teamed up to fix a pressing issue here on Earth. Before founding Tozero, Neumann spent years at RWTH Aachen developing a breakthrough water-based carbonation process for extracting lithium and other elements, such as graphite, from black mass: the powdery substance produced after shredding and processing spent batteries.

Neumann's research gave Tozero a significant head start. In just two years, the company has managed to break out of the lab and deliver its first batches of recycled lithium to customers. And today, the company announced it has raised €11mn in Series A funding as it looks to scale up at pace.

"Despite our limited resources as a two-year-old startup we've already made human history by being the first to ever deliver recycled lithium for end products in Europe," said Fleischer, the company's CEO.

NordicNinja, a Japan-backed European VC fund, led the funding round, bringing Tozero's total raised to a cosy €17mn. Other investors include automotive giant Honda, US venture firm In-Q-Tel, and engineering group JGC. Tozero will use the fresh capital to build its first industrial-scale plant. From 2026 onwards, the company plans to process 30,000 tonnes of battery waste annually.

Tozero can keep on growing as long as it receives a continuous supply of old batteries, and that shouldn't be too much of an issue. Lithium-ion battery production is set to almost quadruple by 2030. Meanwhile, regulations like the EU's Batteries Regulation, which calls for at least 80% of lithium to be recovered from waste batteries by 2031, add much-needed incentives.

All of this is good news for Tozero and other recycling upstarts, including Cylib, which is currently building Europe's largest recycling plant for EV batteries. However, if Europe is to secure a sustainable supply of the lithium it so desperately needs, it must also expand local mining and explore new battery technologies like sodium-ion, zinc-ion, and the holy grail: solid-state batteries.


Dutch startup Sympower secures €21M to balance out the energy grid

Amsterdam-based startup Sympower has secured €21mn as it looks to scale its grid-balancing technology.

Sympower partners with businesses that use large amounts of electricity. It gains access to some of their energy assets and can turn them on and off when the grid requires balancing, a process called demand response. Sympower's software platform uses AI to analyse data and optimise when and how much power businesses can sell at any given time, making energy-use adjustments more effective and profitable for all parties.

Grid operators pay Sympower to stabilise the energy supply. The company passes most of that payment on to participating businesses, keeping a small service fee. The system gives energy-intensive companies an incentive to stop using electricity during times of peak demand. The more energy they save, the more money they make. The idea is to help prevent blackouts and reduce strain on the grid.

Electricity infrastructure around the world is struggling to adapt to the influx of renewables like wind and solar because, unlike the gas and coal plants that came before, they produce power intermittently. The EU has identified demand response, alongside energy storage technologies, as key to restoring the balance. As more renewables come online and our societies electrify, it's no wonder that demand for grid flexibility is surging.

Founded in 2015 by Simon Bushnell and Georg Rute, who studied together at Imperial College London, Sympower is a first mover in this relatively niche industry. It has 2GW of energy assets under management and employs 200 people in over 10 countries. Today's funding round brings its total capital raised to a healthy €60mn.

"Sympower has grown tremendously in recent years, which aligns with the unprecedented demand across Europe for diversified and mature energy flexibility solutions," said Bushnell, the company's CEO and co-founder.

A&G Energy Transition Tech Fund (A&G ETTF) led the funding round, with direct investment from the European Investment Fund (EIF) and participation from existing investors Activate Capital, Rubio Impact Ventures, PDENH, and Expon Capital. Armed with the fresh funding, the company aims to continue expanding throughout Europe, with the intent to acquire competitors in the near future. It is also expanding into battery storage to complement its demand response software.
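To make the demand response revenue flow described above concrete, here is a minimal sketch of how a single curtailment event might be settled. The prices, the 10% fee, and the scenario are illustrative assumptions of mine, not Sympower's actual terms.

```python
# Hypothetical settlement of one demand-response event.
# All figures are illustrative assumptions, not Sympower's actual terms.

def settle_event(curtailed_kw: float, duration_h: float,
                 price_eur_per_mwh: float, service_fee: float = 0.10):
    """Split a grid operator's balancing payment between the
    aggregator and the participating business."""
    energy_mwh = curtailed_kw * duration_h / 1000   # kW over h -> MWh
    gross = energy_mwh * price_eur_per_mwh          # paid by the grid operator
    aggregator_cut = gross * service_fee            # the small service fee
    business_payout = gross - aggregator_cut        # passed on to the business
    return gross, aggregator_cut, business_payout

# A factory sheds 500 kW for 2 hours at a balancing price of 200 EUR/MWh.
gross, fee, payout = settle_event(500, 2, 200)
print(f"Grid pays EUR {gross:.0f}; aggregator keeps EUR {fee:.0f}; "
      f"business earns EUR {payout:.0f}")
# Grid pays EUR 200; aggregator keeps EUR 20; business earns EUR 180
```

The point of the split is the incentive structure: the more load a business can shed at peak moments, the larger its payout, while the aggregator earns only a fee for orchestrating the response.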


Do we need a European DARPA to cope with technological challenges in Europe?

The US Defense Advanced Research Projects Agency (DARPA) is often held up as a model for driving technological advances. For decades, it has contributed to US military and economic dominance by bridging the gap between military and civilian applications. European policymakers frequently reference DARPA in discussions, as in the 2024 Draghi Report, but an EU equivalent has yet to materialise. To create such an agency, the governance and management of European innovation programmes would need drastic changes.

DARPA supports disruptive innovation

Founded in 1958, DARPA operates under the US Department of Defense (DoD) with a straightforward mission: to fund high-risk technological programmes that could lead to radical innovation. DARPA provides support throughout the innovation process, focusing on environments where new uses for technology must be invented or adapted. Although part of the DoD, DARPA funds projects that promise technological and economic superiority, whether or not they align with current military priorities. DARPA has backed projects like ARPANET, the precursor to the internet, and GPS. Today, it is pursuing interests such as autonomous vehicles for urban areas and new missile technologies. As part of its core mission, DARPA accepts high financial risk on exploratory projects and makes long-term commitments to them.

Many emblematic successes explain why DARPA is a reference agency. However, the list of failed projects is even longer. Both failures and successes feed the exploration process in emerging industrial sectors. They represent opportunities to learn together and build collective strategies in innovation ecosystems.

Five key principles of DARPA

DARPA's success stems not just from its stability but from adhering to five organisational principles that allow it to explore deep tech in an open innovation context:

Independence: DARPA operates independently from other military services, research and development centres, and federal agencies, allowing it to explore options outside dominant research paradigms. While cooperation is possible, its decisions and directions are not influenced by other parts of the federal administration.

Agility: The agency's flat organisational structure minimises bureaucracy. Its independent decision-making processes and streamlined contracting allow it to pivot quickly, test new concepts, and collaborate with academic or private-sector partners. Agility also enables DARPA to test new exploration and experimentation methods, often based on user-centric approaches: potential military or civilian end users are involved very early in innovation projects to discuss potential uses and applications. This approach recently led DARPA to absorb the Strategic Capabilities Office (SCO), where officers from the different military services (Army, Air Force, Navy and Marines) and all military ranks test new technological solutions at various maturity levels, fostering co-creation with military innovators and expanding the agency's impact.

Sponsorship: High-ranking executives within the DoD and other federal agencies (NASA, the Department of Energy) endorse, but do not commission, DARPA's projects. This sponsorship model increases a project's potential impact and allows for swift adaptation if a project fails.

Community building: DARPA creates innovation communities with a mix of diverse expertise. By bringing different perspectives together, it fosters the collective strategies essential for disruptive innovation.
Diverse leadership: Project managers come from a range of backgrounds, including civilian experts, military officers, and private-sector professionals. All have demonstrated scientific and technological expertise, a solid ability to bridge dreams and foresight with reality, and a strong command of risk and complexity management. Managers serve three- to four-year terms focused on driving technological disruption and building new innovation ecosystems. This diversity of expertise sets DARPA apart from other federal agencies.

The challenge of a European DARPA

The Draghi Report on European competitiveness suggests that a European DARPA could help bridge technological gaps, reduce dependencies, and accelerate the green transition. However, implementing this model would require a seismic shift in how European agencies operate. Creating a new agency would be ineffective without ensuring that all the principles underlying DARPA's success are implemented in Europe. Even if Europe actively promotes deep tech and devotes significant budgets to it, European public policies and the ways of working prevailing in national and European agencies are hardly consistent with the DARPA model. European agencies do not have much autonomy in their decisions about exploring new ventures or managing human resources. They demonstrate an outcome-focused orientation that is inconsistent with DARPA's approach to risk.

Two main challenges

European agencies often lack the stable missions, scope, and ambition seen at DARPA. The European Space Agency (ESA), the European Defence Agency (EDA), and Eurocontrol highlight the difficulties of developing cohesive, cross-border innovation ecosystems. A European DARPA would require a unified ambition among EU member states, a challenging feat given the institutional and geopolitical divides within Europe. The debates around the European Defence Fund illustrate how complex it is to reach consensus on shared objectives and funding.

Adopting DARPA's five organisational principles would represent a cultural revolution for European agencies, given EU bureaucratic norms and the budgetary controls of individual member states. Implementing these changes would also disrupt the existing balance of power between countries. The DARPA model is inconsistent with the European "fair returns" model, which applies proportionality rules between each member state's funding, its share of research operations, and its share of industrial work during the production phase of each project. The DARPA model would instead focus solely on existing competencies, excellence, risk-taking, and entrepreneurial mindsets.

Establishing a European DARPA would require a fundamental rethinking of public policy management in Europe. Its success would depend on whether European stakeholders are willing to adopt DARPA's core principles, including its independence, agility, and willingness to accept failure. Creating an agency is one thing; ensuring it adheres to the structures that make DARPA effective is another. The question remains: is Europe ready for this transformation?

The European Academy of Management (EURAM) is a learned society founded in 2001. With over 2,000 members from 60 countries in Europe and beyond, EURAM aims to advance the academic discipline of management in Europe.

David W. Versailles, Professor of strategic management and innovation management, co-director of PSB's newPIC chair, PSB Paris School of Business, and Valérie Mérindol, research professor in innovation and creativity management, PSB


Why learning 10 programming languages doesn’t make you a more interesting job candidate

New data from LinkedIn on the most in-demand jobs on the platform in the third quarter of this year reveals that software engineering is in second place. Just pipped to the post by sales roles, software engineering and development pros are clearly in high demand. Full stack engineers and application developers also feature in the top ten in-demand roles, in eighth and tenth place respectively.

Software roles are so prominent because software powers pretty much everything. According to McKinsey, these days, "Every company is a software company." Traditional bricks-and-mortar businesses are now increasingly digital-first. Think of your bank or your supermarket, for example. The way we use these businesses has radically changed, with services increasingly offered online.

Media companies are software companies now too. Hundreds of workers in The New York Times Tech Guild went on strike the day before the US election. They include data analysts, project managers, and software developers, and make up around 600 of the publication's tech employees. These workers create and maintain the back-end systems that power the New York Times, yes, including Wordle. The fact that they not only represent about 10% of the paper's total workforce, but are also so essential to its operations, is yet another sign of our reliance on software solutions and the people who provide them.

McKinsey has identified three main reasons why this is the case. Firstly, there is the accelerated adoption of digital products, observed particularly during the pandemic, when we did more online than ever before. Secondly, these days, more of the value in products and services is derived from software. Thirdly, the growth of cloud computing, PaaS, low- and no-code tools, and AI-based programming platforms is expanding the sector exponentially.

Languages to learn

In such a dynamic sector, it's no surprise that new programming languages are emerging all the time. Consider Mojo, a language designed to combine the simplicity of Python with the efficiency of C++ or Rust. Or Finch, a new language from MIT designed to support both flexible control flow and diverse data structures. Older languages are also having a resurgence, such as Go, because it's well suited to security and AI, both hot-button topics right now.

Stack Overflow's 2024 Developer Survey highlighted JavaScript, HTML/CSS, and Python as the top three languages respondents had used for extensive development work over the past year. Additionally, the US White House Office of the National Cyber Director (ONCD) recently issued a report advising that programmers should move to memory-safe languages.

Given all that, it is understandable if, as a developer, you're not sure which languages you should be using, which you should learn, and which you can think about dropping.

Broad v specific

Does this mean you should be aiming to become proficient in up to ten languages?
A recent Reddit thread discussed just that, with one user arguing, "There is absolutely no point of learning 10 languages; just pick two, pick a specific field, and become the best at it." Others agreed, with one contributor saying, "people are fixated on finding the hottest new language, the hottest new tech stack, or the latest trends, but this is not gonna help you." Another user pointed out that "Specialisation is good but you should have a general understanding of the type of languages and how they work, then you can learn new languages and tech stack easily."

For many developers, good foundational knowledge is more important (and more valuable to their long-term career) than having a laundry list of programming languages on their CV that they may only be semi-proficient in. "Learning a stack on YouTube and building toy projects is easy," pointed out another thread contributor. "Building specialisation takes a lot more effort and many years of real life experience."

If you do decide to specialise in a couple of languages, that choice should be, at least in part, influenced by what you enjoy doing most. "Do what you think is good for you," says a thread contributor. "Once you become really good, you'll automatically stand out from the crowd by being better than 90% of the mediocre developers." Wise advice.


How close are we to an accurate AI fake news detector?

In the ambitious pursuit to tackle the harms of false content on social media and news websites, data scientists are getting creative. While still on their training wheels, the large language models (LLMs) used to create chatbots like ChatGPT are being recruited to spot fake news. With better detection, AI fake news checking systems may be able to warn of, and ultimately counteract, serious harms from deepfakes, propaganda, conspiracy theories and misinformation.

The next generation of AI tools will personalise the detection of false content as well as protect us against it. For this leap into user-centred AI, data science needs to look to behavioural science and neuroscience.

Recent work suggests we might not always consciously know when we are encountering fake news. Neuroscience is helping to discover what is going on unconsciously. Biomarkers such as heart rate, eye movements and brain activity appear to subtly change in response to fake and real content. In other words, these biomarkers may be "tells" that indicate whether we have been taken in or not. For instance, when humans look at faces, eye-tracking data shows that we scan for rates of blinking and changes in skin colour caused by blood flow. If such elements seem unnatural, it can help us decide that we're looking at a deepfake. This knowledge can give AI an edge: we can train it to mimic what humans look for, among other things.

Personalising an AI fake news checker means using findings from human eye movement data and electrical brain activity that show which types of false content have the greatest impact, neurally, psychologically and emotionally, and on whom. Knowing our specific interests, personality and emotional reactions, an AI fact-checking system could detect and anticipate which content would trigger the most severe reaction in us. This could help establish when people are taken in and what sort of material fools people most easily.

Counteracting harms

What comes next is customising the safeguards. Protecting us from the harms of fake news also requires building systems that can intervene: some sort of digital countermeasure to fake news. There are several ways to do this, such as warning labels, links to expert-validated credible content, and even prompts asking people to consider different perspectives when they read something. Our own personalised AI fake news checker could be designed to give each of us one of these countermeasures to cancel out the harms of false content online.

Such technology is already being trialled. Researchers in the US have studied how people interact with a personalised AI fake news checker of social media posts. It learned to reduce the number of posts in a news feed to those it deemed true. As a proof of concept, another study using social media posts tailored additional news content to each post to encourage users to view alternative perspectives.

Accurate detection of fake news

But whether this all sounds impressive or dystopian, before we get carried away it might be worth asking some basic questions. Much, if not all, of the work on fake news, deepfakes, disinformation and misinformation highlights the same problem that any lie detector would face. There are many types of lie detectors, not just the polygraph test. Some depend exclusively on linguistic analysis. Others are systems designed to read people's faces and detect whether they are leaking micro-emotions that give away that they are lying.
By the same token, there are AI systems designed to detect whether a face is genuine or a deepfake.

Before detection begins, we all need to agree on what a lie looks like if we are to spot it. In deception research this is actually easier, because you can instruct people when to lie and when to tell the truth. That way, you know the ground truth before you train a human or a machine to tell the difference, because they are provided with labelled examples on which to base their judgements.

How good an expert lie detector is depends on how often they call out a lie when there was one (a hit), but also on how rarely they judge someone to be telling the truth when that person was in fact lying (a miss). Equally, they need to recognise the truth when they see it (a correct rejection) and avoid accusing someone of lying when they were telling the truth (a false alarm). This framework is known as signal detection, and the same logic applies to fake news detection: an AI checker can be right in two ways (hits and correct rejections) and wrong in two ways (misses and false alarms).

For an AI system detecting fake news to be highly accurate, the hit rate needs to be very high (say 90%), so that misses are very low (say 10%), and the false alarm rate needs to stay low (say 10%), meaning real news isn't called fake. If an AI fact-checking system, or a human one, is recommended to us, signal detection gives us a way to gauge how good it is.

There are likely to be cases, as a recent survey has reported, where news content is neither completely false nor completely true, but partially accurate. We know this because the speed of news cycles means that what is considered accurate at one time may later be found to be inaccurate, or vice versa. So a fake news checking system has its work cut out.

If we knew in advance what was fake news and what was real, how accurate are biomarkers at indicating unconsciously which is which? The answer is: not very. Neural activity is most often the same when we come across real and fake news articles. When it comes to eye-tracking studies, it is worth knowing that there are different types
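To make the signal detection framing above concrete, here is a minimal sketch of how the four outcomes translate into the standard rates, using the article's illustrative figures (a 90% hit rate and a 10% false alarm rate). The 20% base rate of fake items is an assumption added purely for the example.

```python
# Minimal signal-detection sketch for a fake news classifier.
# Outcome counts follow the article's four categories:
# hit, miss, false alarm, correct rejection.

def detection_rates(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)                             # fake items caught
    fa_rate = false_alarms / (false_alarms + correct_rejections)  # real items wrongly flagged
    return hit_rate, fa_rate

# Illustrative scenario: 1,000 articles, 20% of them fake (assumed base rate),
# with the article's example rates: 90% hits, 10% false alarms.
hits, misses = 180, 20                       # 90% of the 200 fakes flagged
false_alarms, correct_rejections = 80, 720   # 10% of the 800 real items flagged

hit_rate, fa_rate = detection_rates(hits, misses, false_alarms, correct_rejections)
print(f"Hit rate: {hit_rate:.0%}, false alarm rate: {fa_rate:.0%}")

# Of the 260 articles flagged as fake, only 180 really are:
precision = hits / (hits + false_alarms)
print(f"Share of flagged articles that are actually fake: {precision:.0%}")  # ~69%
```

The last line shows why the false alarm rate matters so much: when fake items are a minority of what we read, even a 10% false alarm rate means a sizeable share of flagged stories are actually genuine.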


At 30 years old, is Ruby in a mid-life crisis or a renaissance?

Ruby's creator, Yukihiro Matsumoto (Matz), released the first public version of the programming language in December 1995, making Ruby just shy of its 30th birthday. It spread across Japanese-language Usenet newsgroups, a popular way of exchanging conversation and media before the World Wide Web, and then reached broader communities throughout the late 1990s. This was thanks to Ruby's friendly community and, in no small part, thanks to Matz. (The community has a motto: "Matz is nice, and so we are nice.")

At this year's annual European Ruby Konferenz, EuRuKo, in Sarajevo, Matz said he created Ruby because he was "lazy and full of hubris." That doesn't sound like a justification for creating and maintaining a programming language for 30 years, but it's a sign of the self-deprecating humour that feeds Ruby and has kept its community generally welcoming over the decades.

30 years of Ruby history

The programming language emerged as the rapid growth of web-related technologies embraced lightweight, easier-to-learn, and easier-to-run languages such as PHP and Python. While all three languages have myriad other uses, timing and external factors often propelled them into broader popularity. For Ruby, this was the Rails framework in 2004 and two books: Dave Thomas's The Pragmatic Programmer in 1999 and Why's (Poignant) Guide to Ruby in 2005. While Thomas's book didn't cover Ruby in detail, it mentioned it, and the author continued to promote the language for many years after publication. The book has been a long-term success in general and contributed to growing interest in Ruby early in its life.

Sometimes, Ruby on Rails feels like a blessing and a curse for Ruby itself. To many developers, they are one and the same thing. Rails events typically have more attendees. Many of the recent features and changes in Ruby came upstream from Rails. Finally, Rails' creator, David Heinemeier Hansson (DHH) of 37signals, is a far more recognised name in the wider programming and tech community and a vocal presence online. One attendee at EuRuKo I spoke to said that, as much as the community doesn't want to admit it, the two projects are highly connected.

Many Web 2.0 sites that emerged in the early 2000s ran on Rails, and many still do (at least in part), including Airbnb, GitHub, Twitter (now X), Netflix, and Shopify (another major Ruby contributor and sponsor). Rails introduced many features that older developers (like myself) will remember as groundbreaking and that many younger developers now take for granted. As a Drupal PHP developer at the time, I remember looking at features such as database table creation, management, and migration with envy.

While interest in Rails, as measured by search-term tracking, is at about a quarter of its peak, actual usage remains reasonably close to that peak. This suggests that many developers using it are at a senior level and largely know what they're doing. The 2024 Planet Argon survey confirms this: nearly 70% of respondents had more than seven years' experience and have run their applications for about the same length of time. I don't know how many new projects choose Rails, but there are enough pre-existing ones to maintain healthy interest in a 20-year-old project.
If you remove the several-year peak of interest after Rails' release, Ruby is about as popular as it has ever been outside that spike. According to Tiobe statistics, it's slightly more popular. The 2023 Stack Overflow survey puts Ruby's popularity at 16 out of 50 languages, and an IEEE survey from 2024 and PYPL report about the same. It's easy to draw negative or inaccurate statistical comparisons against an unprecedented blip, but abstract them out over time and you see a different story.

Every language with a few decades under its belt carries a degree of technical or community baggage, and I felt this with Ruby in general and at the EuRuKo event. The community was friendly and welcoming, but full of references and name drops that meant nothing to me. Granted, all communities do this to a degree, and maybe other events support newcomers better, but there were few beginner-level talks.

The next 30 years

But enough of the distant past. What has Ruby recently added, changed, or planned to keep current developers interested and maybe attract new ones?

Ruby is an interpreted language, meaning it's converted from human-readable to machine-runnable code when run, often in a virtual machine that runs on a physical machine. One modern criticism of Ruby, and of all interpreted languages, is that they are too slow for the scale of modern applications. Ruby has a default interpreter, CRuby (formerly "Matz's Ruby Interpreter"), that translates the code into instructions run by the Ruby virtual machine. But Ruby and its community have added alternative, more performant interpreters, especially in the past few years, including multiple "just in time" (JIT) compiler options; JIT compilation is a popular technique for bringing compiled-code speed to interpreted languages.

Other programming languages, such as C, C++, and Rust, are compiled languages, turning human-readable code into machine-readable code before it's run. While Ruby wasn't primarily designed to run as a compiled language, other options make that possible too. Compiling languages is nothing new, however, and as I mentioned in my KubeCon EU wrap-up, WebAssembly (WASM) is the present for some and the future for many. Principally, WASM lets you run supported languages in the browser (it now offers much more, but maybe that's a future post), bringing complex and powerful applications to the browser. Since 2022, Ruby has been able to compile to WASM, and Mastodon, a Ruby on Rails application, has even been shown running as WASM in the browser.

When Ruby began life, monoliths were the common


Endless possibilities of a digital stethoscope

Welcome to a new episode of the TNW Podcast, the show where we discuss the latest developments in the European technology ecosystem and feature interviews with some of the most interesting people in the industry.

In today's special episode, we're happy to present an interview with Diana van Stijn, co-founder and chief medical officer at Lapsi Health, a Dutch startup that builds smart medical hardware, starting with a digital stethoscope. Also featured in the interview is the sound of Andrii's heart, as captured by Lapsi's first device, Keikku.

Music and sound engineering for this podcast are by Sound Pulse. Feel free to email us with any questions, suggestions, and opinions at [email protected].


Apophis: a European space mission gets up close with an asteroid set to brush by Earth

The European Space Agency has given the go-ahead for initial work on a mission to visit an asteroid called (99942) Apophis. If approved at a key meeting next year, the robotic spacecraft, known as the Rapid Apophis Mission for Space Safety (Ramses), will rendezvous with the asteroid in February 2029.

Apophis is 340 metres wide, about the same as the height of the Empire State Building. If it were to hit Earth, it would cause wholesale destruction for hundreds of miles around its impact site. The energy released would equal that of tens or hundreds of nuclear weapons, depending on their yield.

Luckily, Apophis won't hit Earth in 2029. Instead, it will pass by safely at a distance of 19,794 miles (31,860 kilometres), about one-twelfth the distance from the Earth to the Moon. Nevertheless, this is a very close pass for such a big object, and Apophis will be visible to the naked eye. Nasa and the European Space Agency have seized this rare opportunity to send separate robotic spacecraft to rendezvous with Apophis and learn more about it. Their missions could help inform efforts to deflect an asteroid that threatens Earth, should we need to in future.

The threat from asteroids

Some 66 million years ago, an asteroid the size of a small city hit Earth. The impact brought about a global extinction event that wiped out the dinosaurs. Earth is in constant danger of being hit by asteroids, leftover debris from the formation of the Solar System 4.5 billion years ago. Most asteroids are located in the asteroid belt between Mars and Jupiter, and they come in many shapes and sizes. Most are small, only about 10 metres across, but the largest are hundreds of kilometres wide, bigger than the asteroid that killed the dinosaurs.

[Image: Artist's impression of Apophis. Nasa]

The asteroid belt contains 1-2 million asteroids larger than a kilometre across and millions of smaller bodies. These space rocks feel each other's gravitational pull, as well as the gravitational tug of Jupiter on one side and the inner planets on the other. Because of this gravitational tug-of-war, every once in a while an asteroid is thrown out of its orbit and hurtles towards the inner Solar System. There are 35,000 known "near-Earth objects" (Neos). Of these, 2,300 are "potentially hazardous objects" (PHOs): their orbits intersect Earth's and they are large enough to pose a real threat to our survival.

Do not go gentle into that good night

In recent decades, astronomers have set up several surveys, such as Atlas, to detect and study hazardous asteroids. But detection is not enough; we have to find a way to defend Earth against an incoming asteroid. Blowing up an asteroid, as depicted in the movie Armageddon, is no use. The asteroid would be broken into smaller fragments, which would keep moving in much the same direction. Instead of being hit by one large asteroid, Earth would be hit by a swarm of smaller objects.

The preferred solution is to deflect the incoming asteroid away from Earth so that it passes by harmlessly. To do so, we would need to apply an external force to nudge the asteroid off course. A popular idea is to fire a projectile at it. Nasa did this in 2022, when a spacecraft called Dart collided with an asteroid. Before we have to do this out of necessity, we need to understand how different types of asteroids would react to such an impact.

Apophis, Ramses and Osiris-Apex

Apophis was discovered in 2004. The asteroid passed by Earth on December 21 2004 at a distance of 14 million kilometres.
It returned in 2021, and will swing by Earth again in 2029, 2036 and 2068. Until recently, there was a small chance that Apophis could collide with Earth in 2068. However, during Apophis's approach in 2021, astronomers used radar observations to refine their knowledge of the asteroid's orbit. These showed that Apophis will not hit our planet for at least the next 100 years.

The Ramses mission will rendezvous with Apophis in February 2029, two months before the asteroid's closest approach to Earth on Friday, April 13. It will then accompany the asteroid as it approaches Earth. The goal is to learn how Apophis's orbit, rotation and shape will change as it passes so close to Earth.

In 2016, Nasa launched the "Origins, Spectral Interpretation, Resource Identification, and Security – Regolith Explorer" (Osiris-Rex) mission to study the near-Earth asteroid Bennu. It collected samples of rock and soil from Bennu's surface in 2020 and dispatched them in a capsule, which arrived on Earth in 2023. The spacecraft is still out there, so Nasa renamed it the "Origins, Spectral Interpretation, Resource Identification and Security – Apophis Explorer" (Osiris-Apex) and assigned it to study Apophis. Osiris-Apex will reach the asteroid just after its 2029 close encounter. It will then fly low over Apophis's surface and fire its engines, disturbing the rocks and dust that cover the asteroid to reveal the layer underneath.

A close flyby of an asteroid as large as Apophis happens only once every 5,000 to 10,000 years. Apophis's arrival in 2029 therefore presents a rare opportunity to study such an asteroid up close and to see how it is affected by Earth's gravitational pull. The information gleaned will shape the way we choose to protect Earth from a real killer asteroid in the future.

Ancient Egyptian mythology

When Ramses and Osiris-Apex meet up with Apophis in 2029, they will inadvertently re-enact a core component of ancient Egyptian cosmology. To the ancient Egyptians, the Sun was personified by several powerful gods, chief among them Re. The Sun's setting in the evening was interpreted as Re dying and entering the netherworld. During his night-time journey through the netherworld, Re was menaced by the great snake Apophis, who embodied the powers of darkness and dissolution. Only after Apophis had been defeated could Re be revitalised by Osiris, the king of the netherworld. Re could then once


How wasted heat from our bodies could generate green energy

If you've ever seen yourself through a thermal imaging camera, you'll know that your body produces lots of heat. This is in fact a waste product of our metabolism. Every square foot of the human body gives off heat equivalent to about 19 matches per hour. Unfortunately, much of this heat simply escapes into the atmosphere. Wouldn't it be great if we could harness it to produce energy?

My research has shown this would indeed be possible. My colleagues and I are discovering ways of capturing and storing body heat for energy generation, using eco-friendly materials. The goal is to create a device that can both generate and store energy, acting like a built-in power bank for wearable tech. This could allow devices such as smartwatches, fitness trackers, or GPS trackers to run much longer, or even indefinitely, by harnessing our body heat.

It isn't just our bodies that produce waste heat. In our technologically advanced world, substantial waste heat is generated daily, from the engines of our vehicles to the machines that manufacture goods. Typically, this heat is also released into the atmosphere, representing a significant missed opportunity for energy recovery. The emerging concept of "waste heat recovery" seeks to address this inefficiency. By harnessing this otherwise wasted energy, industries can improve their operational efficiency and contribute to a more sustainable environment.

The thermoelectric effect is a phenomenon that can help turn heat into electricity. It works by having a temperature difference produce an electric potential: as electrons flow from the hot side to the cool side, they generate usable electrical energy. Conventional thermoelectric materials, however, are often made from cadmium, lead or mercury. These come with environmental and health risks that limit their practical applications.

The power of wood

But we've discovered you can also create thermoelectric materials from wood, offering a safer, sustainable alternative. Wood has been integral to human civilisation for centuries, serving as a source of building materials and fuel. We are uncovering the potential of wood-derived materials to convert waste heat, often lost in industrial processes, into valuable electricity. This approach not only enhances energy efficiency, but also redefines how we view everyday materials as essential components of sustainable energy solutions.

Our team at the University of Limerick, in collaboration with the University of Valencia, has developed a sustainable method to convert waste heat into electricity using Irish wood products, particularly lignin, a byproduct of the paper industry. Our study shows that lignin-based membranes, when soaked in a salt solution, can efficiently convert low-temperature waste heat (below 200°C) into electricity. The temperature difference across the lignin membrane causes ions (charged atoms) in the salt solution to move. Positive ions drift toward the cooler side, while negative ions move toward the warmer side. This separation of charges creates an electric potential difference across the membrane, which can be harnessed as electrical energy. Since around 66% of industrial waste heat falls within this temperature range, this innovation presents a significant opportunity for eco-friendly energy solutions. This new technology has the potential to make a big difference in many areas.
Industries such as manufacturing, which produce large amounts of leftover heat, could see major benefits from turning that waste heat into electricity. This would help them save energy and lessen their impact on the environment. The technology could find use in various settings, from providing power in remote areas to running sensors and devices in everyday applications. Its eco-friendly nature also makes it a promising solution for sustainable energy generation in buildings and infrastructure.

The trouble with storage

Capturing energy from waste heat is just the first step; storing it effectively is equally critical. Supercapacitors are energy storage devices that rapidly charge and discharge electricity, which makes them essential for applications requiring quick power delivery. However, their reliance on fossil fuel-derived carbon materials raises sustainability concerns, highlighting the need for renewable alternatives in their production.

Our research group has discovered that lignin-based porous carbon can serve as an electrode in supercapacitors, storing the energy harvested from waste heat by a lignin membrane. The membrane captures and converts waste heat into electrical energy, while the porous carbon structure facilitates the rapid movement and storage of ions. By providing a green alternative that avoids harmful chemicals and reliance on fossil fuels, this approach offers a sustainable solution for storing energy recovered from waste heat. This innovation in energy storage technology could power everything from consumer electronics and wearable technology to electric vehicles.
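As a rough sanity check on the figures above, here is a short back-of-the-envelope sketch. The heat content of a match, the body surface area, and the Seebeck coefficient used below are illustrative assumptions of mine, not numbers from the study.

```python
# Back-of-the-envelope checks for the figures above.
# Assumptions (not from the article): one match releases ~1 BTU (~1,055 J)
# of heat, and an adult body has ~1.8 m^2 (~19.4 sq ft) of skin.

MATCH_J = 1055           # joules of heat per match (assumption)
SQFT_PER_M2 = 10.764

# "19 matches per square foot per hour" expressed as a power density:
watts_per_sqft = 19 * MATCH_J / 3600          # J per hour -> W
body_sqft = 1.8 * SQFT_PER_M2
body_heat_w = watts_per_sqft * body_sqft
print(f"{watts_per_sqft:.1f} W per sq ft, ~{body_heat_w:.0f} W for a whole body")
# ~5.6 W per sq ft, ~108 W in total: close to the ~100 W often quoted
# for resting metabolic heat, so the matches figure is plausible.

# Thermoelectric output scales with the temperature difference: V = S * dT,
# where S is the Seebeck coefficient of the material.
S_V_PER_K = 5e-3         # 5 mV/K, an illustrative value for ionic systems (assumption)
dT = 40                  # kelvin, e.g. a warm surface vs ambient air (assumption)
print(f"Open-circuit voltage: {S_V_PER_K * dT * 1000:.0f} mV")  # 200 mV
```

The voltage estimate illustrates why a large temperature difference, or many membrane cells stacked in series, is needed before this kind of harvester can charge a practical device.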


Apple Intelligence will help AI become as commonplace as word processing

When Apple's version of AI, branded as Apple Intelligence, rolls out in October to people with the company's latest hardware, the response is likely to be a mix of delight and disappointment.

The AI capabilities coming to Apple's walled garden will bring helpful new features, such as textual summaries in email, Messages and Safari; image creation; and a more context-aware version of Siri. But as Apple Intelligence's beta testing has already made clear, the power of these features falls well below what is on offer from major players like OpenAI, Google and Meta. Apple's AI won't come close to the quality of document summary, image or audio generation easily accessed from any of the frontier models.

But Apple Intelligence will do something none of the flagship offerings can: change perceptions of AI and its role in ordinary life for a large portion of users around the world. The real impact of Apple's AI won't be practical but moral. It will normalize AI and make it seem less foreign or complex. It will de-associate AI from the idea of cheating or cutting corners. It will help a critical mass of users cross a threshold of doubt or mystification about AI to forge a level of comfort with and acceptance of it, even a degree of reliance on it.

Overcoming early doubts

Generative AI has faced two problems since ChatGPT was unveiled in 2022. Many have wondered what it's really for, or whether it's truly useful, given hallucinations and other issues rooted in training data. Others have doubted the ethics of using AI, seeing it as a form of cheating or copyright infringement. But as we have learned in recent months, language models are most effective when they work on our own documents and data, as with platforms like NotebookLM or GPT-4o, which can now handle upwards of 50 to 100 books' worth of material we upload.

[Image: Customers at the Apple Store on 5th Ave. in New York on Sept. 20, 2024. AP Photo/Ted Shaffrey]

The output of the prompts we run, in the form of article or lecture summaries, reports, slide decks and even podcasts, is much more accurate and useful than what came out of earlier chatbots. Apple Intelligence capitalizes on this insight by pointing most of its AI functionality at user data rather than data on the web.

Domesticating AI

With Apple Intelligence working mainly on our own data, much of its output will likely mirror the higher quality we're seeing with tools like NotebookLM, compared to AI that works mainly on large bodies of anonymous training data, like ChatGPT in its early days. Having AI work mostly on user data, and doing it frequently, will forge a new association in people's minds between generative AI and personal information, rather than miscellaneous training data. It will likely cause us to see AI as something integral to our personal routines, like reading email or the morning news.

This, in turn, will make using more powerful tools like GPT-4o or Claude more socially and ethically acceptable. Once we're in the habit of using AI to summarize or edit our email, condense articles on the web into pithy summaries or edit images in Photos, we'll think less about the propriety of using NotebookLM to prepare a first draft of a memo or report, or using DALL-E to create images.

'AI for the rest of us'

Apple has a long history of making complex technologies more accessible to everyday users, and that is its goal for AI.
When word processors first appeared in the late 1970s and early 1980s, there was similar uncertainty about the propriety of using them to help us write things: a belief that something authentic or human about writing by hand would be lost. For many, computers themselves were too daunting to embrace. But Apple's Macintosh personal computer helped domesticate and normalize writing on computers, with its graphical user interface and WYSIWYG ("what you see is what you get") display. Eventually, writing became so closely associated with word processing that we find it hard to imagine the one without the other.

[Image: Former Apple CEO Steven P. Jobs, left, and President John Sculley presenting the new Macintosh desktop computer in January 1984 at a shareholder meeting in Cupertino, Calif. AP Photo]

Apple Intelligence could do for generative AI what the Mac and its graphical user interface did for personal computers: help tame it, and make it seem ordinary and acceptable. Apple's marketing team hints at this in its tagline for Apple Intelligence, "AI for the rest of us." If history is any guide, Apple will play a key role in changing how we think about AI. Doing many of our basic tasks without it may soon seem unthinkable.
