FCC Reworks Database Of Reassigned Phone Numbers

By Christopher Cole (April 8, 2025, 3:22 PM EDT) — It will be easier and cost less for companies to make sure they’re reaching the right consumer’s phone number with recent changes to the Reassigned Numbers Database, the Federal Communications Commission said.

FCC Reworks Database Of Reassigned Phone Numbers Read More »

Google’s new Ironwood chip is 24x more powerful than the world’s fastest supercomputer

Google Cloud unveiled its seventh-generation Tensor Processing Unit (TPU), Ironwood, on Wednesday. This custom AI accelerator, the company claims, delivers more than 24 times the computing power of the world’s fastest supercomputer when deployed at scale.

The new chip, announced at Google Cloud Next ’25, represents a significant pivot in Google’s decade-long AI chip development strategy. While previous generations of TPUs were designed to handle both training and inference workloads, Ironwood is the first purpose-built for inference — the process of deploying trained AI models to make predictions or generate responses.

“Ironwood is built to support this next phase of generative AI and its tremendous computational and communication requirements,” said Amin Vahdat, Google’s Vice President and General Manager of ML, Systems, and Cloud AI, in a virtual press conference ahead of the event. “This is what we call the ‘age of inference’ where AI agents will proactively retrieve and generate data to collaboratively deliver insights and answers, not just data.”

Shattering computational barriers: Inside Ironwood’s 42.5 exaflops of AI muscle

The technical specifications of Ironwood are striking. When scaled to 9,216 chips per pod, Ironwood delivers 42.5 exaflops of computing power — dwarfing the 1.7 exaflops of El Capitan, currently the world’s fastest supercomputer. Each individual Ironwood chip delivers peak compute of 4,614 teraflops.

Ironwood also features significant memory and bandwidth improvements. Each chip comes with 192GB of High Bandwidth Memory (HBM), six times more than Trillium, Google’s previous-generation TPU announced last year. Memory bandwidth reaches 7.2 terabits per second per chip, a 4.5x improvement over Trillium.
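The headline figures are internally consistent and can be checked with a few lines of arithmetic, using only the numbers quoted in the article:

```python
# Sanity-check the Ironwood figures quoted above.

CHIPS_PER_POD = 9_216
PER_CHIP_TFLOPS = 4_614            # peak compute per Ironwood chip

# Pod-scale compute: teraflops -> exaflops (1 exaflop = 1e6 teraflops)
pod_exaflops = CHIPS_PER_POD * PER_CHIP_TFLOPS / 1e6
print(f"Pod compute: {pod_exaflops:.1f} exaflops")          # ~42.5

# Comparison against El Capitan's quoted 1.7 exaflops
EL_CAPITAN_EXAFLOPS = 1.7
print(f"Ratio: {pod_exaflops / EL_CAPITAN_EXAFLOPS:.0f}x")  # ~25x

# Memory: 192 GB of HBM per chip, "six times more than Trillium"
trillium_hbm_gb = 192 / 6
print(f"Implied Trillium HBM: {trillium_hbm_gb:.0f} GB per chip")
```

Note that the headline’s “more than 24 times” claim corresponds to roughly 25x the raw exaflops (42.5 / 1.7), i.e. 24x *more* powerful.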
Perhaps most importantly, in an era of power-constrained data centers, Ironwood delivers twice the performance per watt of Trillium and is nearly 30 times more power efficient than Google’s first Cloud TPU from 2018. “At a time when available power is one of the constraints for delivering AI capabilities, we deliver significantly more capacity per watt for customer workloads,” Vahdat explained.

From model building to ‘thinking machines’: Why Google’s inference focus matters now

The emphasis on inference rather than training represents a significant inflection point in the AI timeline. For years, the industry has been fixated on building increasingly massive foundation models, with companies competing primarily on parameter size and training capabilities. Google’s pivot to inference optimization suggests we’re entering a new phase where deployment efficiency and reasoning capabilities take center stage.

This transition makes sense. Training happens once, but inference operations occur billions of times daily as users interact with AI systems. The economics of AI are increasingly tied to inference costs, especially as models grow more complex and computationally intensive.

During the press conference, Vahdat revealed that Google has observed a 10x year-over-year increase in demand for AI compute over the past eight years — a staggering factor of 100 million overall. No amount of Moore’s Law progression could satisfy this growth curve without specialized architectures like Ironwood.

What’s particularly notable is the focus on “thinking models” that perform complex reasoning tasks rather than simple pattern recognition. This suggests that Google sees the future of AI not just in larger models, but in models that can break down problems, reason through multiple steps, and simulate human-like thought processes.
Gemini’s thinking engine: How Google’s next-gen models leverage advanced hardware

Google is positioning Ironwood as the foundation for its most advanced AI models, including Gemini 2.5, which the company describes as having “thinking capabilities natively built in.” At the conference, Google also announced Gemini 2.5 Flash, a more cost-effective version of its flagship model that “adjusts the depth of reasoning based on a prompt’s complexity.” While Gemini 2.5 Pro is designed for complex use cases like drug discovery and financial modeling, Gemini 2.5 Flash is positioned for everyday applications where responsiveness is critical.

The company also demonstrated its full suite of generative media models, including text-to-image, text-to-video, and a newly announced text-to-music capability called Lyria. A demonstration showed how these tools could be used together to create a complete promotional video for a concert.

Beyond silicon: Google’s comprehensive infrastructure strategy includes network and software

Ironwood is just one part of Google’s broader AI infrastructure strategy. The company also announced Cloud WAN, a managed wide-area network service that gives businesses access to Google’s planet-scale private network infrastructure. “Cloud WAN is a fully managed, viable and secure enterprise networking backbone that provides up to 40% improved network performance, while also reducing total cost of ownership by that same 40%,” Vahdat said.

Google is also expanding its software offerings for AI workloads, including Pathways, its machine learning runtime developed by Google DeepMind. Pathways on Google Cloud allows customers to scale out model serving across hundreds of TPUs.

AI economics: How Google’s $12 billion cloud business plans to win the efficiency war

These hardware and software announcements come at a crucial time for Google Cloud, which reported $12 billion in Q4 2024 revenue, up 30% year over year, in its latest earnings report.
The economics of AI deployment are increasingly becoming a differentiating factor in the cloud wars. Google faces intense competition from Microsoft Azure, which has leveraged its OpenAI partnership into a formidable market position, and Amazon Web Services, which continues to expand its Trainium and Inferentia chip offerings.

What separates Google’s approach is its vertical integration. While rivals have partnered with chip manufacturers or acquired startups, Google has been developing TPUs in-house for over a decade. This gives the company unparalleled control over its AI stack, from silicon to software to services.

By bringing this technology to enterprise customers, Google is betting that its hard-won experience building chips for Search, Gmail, and YouTube will translate into competitive advantages in the enterprise market. The strategy is clear: offer the same infrastructure that powers Google’s own AI, at scale, to anyone willing to pay for it.

The multi-agent ecosystem: Google’s audacious plan for AI systems that work together

Beyond hardware, Google outlined a vision for AI centered around multi-agent

Google’s new Ironwood chip is 24x more powerful than the world’s fastest supercomputer Read More »

Wells Fargo’s AI assistant just crossed 245 million interactions – no human handoffs, no sensitive data exposed

Wells Fargo has quietly accomplished what most enterprises are still dreaming about: building a large-scale, production-ready generative AI system that actually works. In 2024 alone, the bank’s AI-powered assistant, Fargo, handled 245.4 million interactions – more than doubling its original projections – and it did so without ever exposing sensitive customer data to a language model.

Fargo helps customers with everyday banking needs via voice or text, handling requests such as paying bills, transferring funds, providing transaction details, and answering questions about account activity. The assistant has proven to be a sticky tool for users, averaging multiple interactions per session.

The system works through a privacy-first pipeline. A customer interacts via the app, where speech is transcribed locally with a speech-to-text model. That text is then scrubbed and tokenized by Wells Fargo’s internal systems, including a small language model (SLM) for personally identifiable information (PII) detection. Only then is a call made to Google’s Flash 2.0 model to extract the user’s intent and relevant entities. No sensitive data ever reaches the model.

“The orchestration layer talks to the model,” Wells Fargo CIO Chintan Mehta said in an interview with VentureBeat. “We’re the filters in front and behind.” The only thing the model does, he explained, is determine the intent and entity based on the phrase a user submits, such as identifying that a request involves a savings account. “All the computations and detokenization, everything is on our end,” Mehta said. “Our APIs… none of them pass through the LLM. All of them are just sitting orthogonal to it.”

Wells Fargo’s internal stats show a dramatic ramp: from 21.3 million interactions in 2023 to more than 245 million in 2024, with over 336 million cumulative interactions since launch.
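The privacy-first pipeline Mehta describes can be sketched roughly as follows. This is an illustrative reconstruction, not Wells Fargo’s actual code: the function names, the simple regex standing in for their SLM-based PII detector, and the token vault are all hypothetical.

```python
import re

# Hypothetical stand-in for the bank's SLM-based PII detector:
# a regex that flags account-number-like digit strings.
ACCOUNT_RE = re.compile(r"\b\d{8,12}\b")

token_vault: dict[str, str] = {}  # token -> original value, kept internal

def scrub_and_tokenize(text: str) -> str:
    """Replace PII with opaque tokens before any external model call."""
    def _tokenize(match: re.Match) -> str:
        token = f"<ACCT_{len(token_vault)}>"
        token_vault[token] = match.group(0)
        return token
    return ACCOUNT_RE.sub(_tokenize, text)

def extract_intent(scrubbed: str) -> dict:
    """Placeholder for the external LLM call (e.g., Gemini Flash 2.0).
    It sees only scrubbed text and returns intent plus entities."""
    if "transfer" in scrubbed.lower():
        return {"intent": "transfer_funds", "entities": ["savings"]}
    return {"intent": "unknown", "entities": []}

def handle(utterance: str) -> dict:
    scrubbed = scrub_and_tokenize(utterance)
    assert not ACCOUNT_RE.search(scrubbed)  # no raw PII leaves the bank
    result = extract_intent(scrubbed)
    # Detokenization and all account computations happen internally,
    # "orthogonal" to the LLM, as Mehta puts it.
    return result

print(handle("Transfer $50 from account 123456789 to savings"))
```

The key design property is that the external model call sits between two internal filters: the scrubber in front, detokenization and computation behind.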
Spanish-language adoption has also surged, accounting for more than 80% of usage since its September 2023 rollout.

This architecture reflects a broader strategic shift. Mehta said the bank’s approach is grounded in building “compound systems,” where orchestration layers determine which model to use based on the task. Gemini Flash 2.0 powers Fargo, but smaller models like Llama are used elsewhere internally, and OpenAI models can be tapped as needed. “We’re poly-model and poly-cloud,” he said, noting that while the bank leans heavily on Google’s cloud today, it also uses Microsoft’s Azure.

Mehta said model-agnosticism is essential now that the performance delta between the top models is tiny. He added that some models still excel in specific areas – Claude Sonnet 3.7 and OpenAI’s o3 mini high for coding, OpenAI’s o3 for deep research, and so on – but in his view, the more important question is how they’re orchestrated into pipelines.

Context window size remains one area where he sees meaningful separation. Mehta praised Gemini 2.5 Pro’s 1M-token capacity as a clear edge for tasks like retrieval-augmented generation (RAG), where pre-processing unstructured data can add delay. “Gemini has absolutely killed it when it comes to that,” he said. For many use cases, he said, the overhead of preprocessing data before deploying a model often outweighs the benefit.

Fargo’s design shows how large-context models can enable fast, compliant, high-volume automation – even without human intervention. And that’s a sharp contrast to competitors. At Citi, for example, analytics chief Promiti Dutta said last year that the risks of external-facing large language models (LLMs) were still too high. In a talk hosted by VentureBeat, she described a system where assist agents don’t speak directly to customers, due to concerns about hallucinations and data sensitivity. Wells Fargo addresses these concerns through its orchestration design.
Rather than relying on a human in the loop, it uses layered safeguards and internal logic to keep LLMs out of any data-sensitive path.

Agentic moves and multi-agent design

Wells Fargo is also moving toward more autonomous systems. Mehta described a recent project to re-underwrite 15 years of archived loan documents. The bank used a network of interacting agents, some of which are built on open-source frameworks like LangGraph. Each agent had a specific role in the process: retrieving documents from the archive, extracting their contents, matching the data to systems of record, and then continuing down the pipeline to perform calculations – all tasks that traditionally require human analysts. A human reviews the final output, but most of the work ran autonomously.

The bank is also evaluating reasoning models for internal use, where Mehta said differentiation still exists. While most models now handle everyday tasks well, reasoning remains an edge case where some models clearly do it better than others, and they do it in different ways.

Why latency (and pricing) matter

At Wayfair, CTO Fiona Tan said Gemini 2.5 Pro has shown strong promise, especially in the area of speed. “In some cases, Gemini 2.5 came back faster than Claude or OpenAI,” she said, referencing recent experiments by her team.

Tan said that lower latency opens the door to real-time customer applications. Currently, Wayfair uses LLMs mostly for internal-facing apps – including in merchandising and capital planning – but faster inference might let the company extend LLMs to customer-facing products like the Q&A tool on product detail pages.

Tan also noted improvements in Gemini’s coding performance. “It seems pretty comparable now to Claude 3.7,” she said. The team has begun evaluating the model through products like Cursor and Code Assist, where developers have the flexibility to choose.
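The loan re-underwriting project described above follows a retrieve, extract, match, calculate chain with a human reviewing the output. A framework-free sketch of that pattern might look like this; the document format, field names, and stub logic are hypothetical, not the bank’s actual LangGraph implementation.

```python
# Each "agent" is a single-responsibility step; in practice these would be
# LLM-backed nodes in a graph framework such as LangGraph.

def retrieve_agent(doc_id: str) -> str:
    """Pull a loan document from the archive (stubbed)."""
    return f"Loan {doc_id}: principal=250000 rate=0.045 term_years=30"

def extract_agent(raw: str) -> dict:
    """Extract structured fields from the document text."""
    fields = dict(kv.split("=") for kv in raw.split(": ")[1].split())
    return {k: float(v) for k, v in fields.items()}

def match_agent(fields: dict) -> dict:
    """Match extracted data against the system of record (stubbed)."""
    fields["verified"] = True
    return fields

def calculate_agent(fields: dict) -> float:
    """Re-underwrite: standard amortized monthly payment formula."""
    p = fields["principal"]
    r = fields["rate"] / 12            # monthly rate
    n = fields["term_years"] * 12      # number of payments
    return p * r / (1 - (1 + r) ** -n)

def pipeline(doc_id: str) -> float:
    # The chain runs autonomously; human review happens on the final output.
    return calculate_agent(match_agent(extract_agent(retrieve_agent(doc_id))))

print(f"Monthly payment: ${pipeline('A-1001'):,.2f}")
```

Splitting the work this way keeps each agent testable in isolation, which is part of why the pattern scales to archives spanning years of documents.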
Google has since released aggressive pricing for Gemini 2.5 Pro: $1.24 per million input tokens and $10 per million output tokens. Tan said that pricing, plus SKU flexibility for reasoning tasks, makes Gemini a strong option going forward.

The broader signal for Google Cloud Next

Wells Fargo’s and Wayfair’s stories land at an opportune moment for Google, which is hosting its annual Google Cloud Next conference this week in Las Vegas. While OpenAI and Anthropic have dominated the AI discourse in recent months, enterprise deployments may quietly swing back toward Google’s favor. At

Wells Fargo’s AI assistant just crossed 245 million interactions – no human handoffs, no sensitive data exposed Read More »

Spotify CEO’s Neko Health opens its biggest body-scan clinic yet

Body-scanning startup Neko Health has opened its largest clinic yet, continuing its expansion in London — just six months after launching its first site in the city.

The futuristic new facility expands access to Neko’s high-tech health vision. Blending body scans, lidar sensors, and AI with blood tests, eye pressure checks, and strength tests, the startup maps millions of data points in minutes. The findings can reveal warning signs about the skin, heart, blood vessels, and inflammation. A human doctor then immediately takes the user through the findings. Within an hour of arriving, they’re on their way out of the clinic.

The system is the brainchild of Spotify CEO Daniel Ek and his business partner Hjalmar Nilsonne. The duo want to shift healthcare systems from reactive to proactive. After launching Neko in their native Sweden in 2023, they began expanding the service to London last year, starting with a new health centre in the chic neighbourhood of Marylebone.

The second London facility dramatically expands the company’s capacity. Located in the buzzy Spitalfields Market, the centre covers a roomy 7,466 square feet, with capacity for up to 30,000 scans annually. “This health centre is built for scale, and we’re doubling down in London,” said Nilsonne.

London calling Neko

The new centre has a suitably space-age design, which echoes the aesthetic of the inaugural London site. Neko already has plans to open two more clinics in the UK’s capital, with launches in other cities in the country also targeted for this year. A spokesperson for the company told TNW that London was “a strategic choice” for Neko’s international expansion beyond Stockholm. “It represents the global market in a way, offering a lot of health tourism and a very competitive space when it comes to private healthcare,” they said.
“If we can break into this market we have a chance to do it elsewhere. Additionally, London is a global healthcare hub, with world-class medical institutions and research centres.”

The city also evidently has enough people prepared to pay the £299 that each body scan costs. The Marylebone centre attracted lengthy waitlists. Across the London and Stockholm sites, 80% of members book and prepay a scan for the following year at the end of their appointment, Neko said.

Between the two centres, Neko has now completed over 15,000 scans. Earlier this year, TNW added one more to the total. During the doctor’s consultation, we were advised to seek a further review. Once we finally have our appointment with the NHS, we’ll reveal all the details about our experience — and Neko’s findings.

Spotify CEO’s Neko Health opens its biggest body-scan clinic yet Read More »

Decision-making 101: How to get consensus right

Next up: Figure out which alternatives are both best and most likely to be accepted by most of the group. Schedule a second round of one-on-one conversations, whose purpose is to nudge everyone toward the most likely alternative — the one most likely to be sufficiently agreeable to everyone involved. Yes, this is a lot of work. Consensus decision-making is, as noted, expensive and time-consuming, which is one reason it should be saved for when maximal buy-in is more important than any other aspect of choosing a direction.

Consensus playbook, part 2: The meeting

Now is the time for a meeting — a consensus check. Everyone involved is now close enough to the same preferences that the meeting’s energy is best expended getting everyone to commit, in public, that this is what they agree to. Again, that’s agree to, not agree with. And a significant part of the meeting is documenting, for the record, why each member of the group who doesn’t agree with the chosen alternative but does agree to it is okay with it, even if it isn’t, from their perspective, perfect.

Decision-making 101: How to get consensus right Read More »

IoT, IIoT, IoMT, And OT — Welcome To Acronym Mania. What Does It All Mean?

Across IT, acronyms come with the territory. Whether they’re classic ones (ENIAC, Electronic Numerical Integrator and Computer), just a tad more modern (VAX, Virtual Address eXtension), network-based (TCP/IP, Transmission Control Protocol/Internet Protocol; XNS, Xerox Network Systems), or cybersecurity-related (NGAV, next-gen antivirus; DLP, data loss prevention; IDS, intrusion detection system), the acronyms and the process of keeping up with them are endless. It doesn’t help that many IT vendors create new acronyms in an effort to stand out and make themselves feel special. In the world of autonomous endpoints, we are dealing with five primary acronyms. To clarify their meaning, here is some guidance and perspective.

IoT: internet of things

This is the broadest category, as a myriad of devices and technologies fall within it, whether at home or as part of a business. Device types range from smart assistants, doorbell cameras, and fitness trackers to printers, security door locks, and warehouse label scanners. What ties these devices together is that they are designed to communicate and exchange data over the internet, with ‘I’ being the key letter in the acronym. IoT devices, such as sensors and actuators, are integrated into or attached to machines or assets and connected to the internet via Wi-Fi or cellular networks. The devices use cloud platforms to send and receive data to make informed decisions about the connected assets.

IIoT: industrial internet of things

A subset of the IoT category, these devices, as the name implies, are made for heavy work and are often larger than simple sensors or scanners. IIoT devices are usually focused on improving industrial processes, including predictive maintenance, asset tracking, quality monitoring, process optimization, supply chain visibility, and building management.
The industrial aspect isn’t restricted to monitoring; it can also incorporate devices such as electric vehicle chargers or building management systems. The first ‘I’ is the differentiator in the acronym.

OT: operational technology

As the name implies, OT encompasses the hardware and software that controls the physical operation of industrial devices. Here is where we find manufacturing, energy production and transmission, water treatment devices, and factory equipment. Connectivity is regularly restricted to private networks, but in recent years, OT has started to have external/internet connections. The focus is on the ‘O.’ To make matters worse, under OT you also have industrial control systems (ICS), supervisory control and data acquisition (SCADA), distributed control systems (DCS), and programmable logic controllers (PLCs). There seems to be no end to OT-based acronyms.

IoMT: internet of medical things

As the ‘M’ implies, this subset of IoT revolves around devices used within the healthcare industry. These could be devices in a hospital, such as infusion pumps or smart medication dispensers, or outside devices like blood pressure monitors, CPAP machines, and pacemakers. As with IIoT, some devices, such as MRI or X-ray machines, could also be considered operational technology, but it is generally accepted that IoMT — the ‘M’ for medical being the distinction — incorporates both IoT and OT.

M2M: machine to machine

This entails technology that enables machines to interact via wireless or wired communication channels without human intervention. Devices connect and interact with each other to exchange information and perform actions without requiring an internet connection. M2M technology is often integrated into security, track-and-trace, automation, manufacturing, and facility management processes. IoT differs from M2M communication in that IoT extends interactions to include cloud-based networks.
Please note: We recognize that there are many other relevant IoT-related acronyms, which we will explore in an upcoming IoT report.

A simplified version that reduces these distinctions to just IoT and OT would be: IoT devices are those that you run inside your business. If these devices go offline, you may have some challenges, but your business can still function. OT devices are those that run your business. If these devices go offline, you’re not doing business.

Like all simplifications, this one has exceptions. For instance, if your medical business relies on performing MRI scans and the MRI machine is offline, you can’t do business. A hospital can treat patients without IoT infusion pumps or Bluetooth pulse oximeter sensors, but it won’t be easy. And would you really want to run your industrial manufacturing tools without IoT noxious gas sensors?

Device protection is important with both IoT and OT, but the purpose is different. For IoT devices, the goal is to protect the data. For OT, the goal is maintaining operational safety. Because of this, the approaches to security for these technologies have historically been different. Until recently, many enterprises completely walled off their OT devices into their own air-gapped network, developing extensive human-action security policies to control the flow of data in and out of the network to ensure that these devices stayed operational and weren’t exposed to internal or external threats. Conversely, IoT devices were often interspersed throughout the enterprise with other endpoints. In more secure environments, network traffic to and from these devices is logically segmented and controlled to protect them against internet-based threats.

Security in IoT and OT environments is currently changing. The walls between the OT devices and the rest of the network are becoming porous.
Business leaders are still highly concerned about OT security, but the need for connectivity to IT and internet resources is growing. For IoT, simple segmentation is no longer sufficient because of the mounting threats. This is leading business and security leaders to deploy solutions to improve device security. New acronyms will continue to emerge (such as the confusing CPS, cyber-physical systems) as IoT and OT security solutions expand. I’m still dreading hearing about the first IoTDR solution. Vendors in this space need to stop throwing out word salad in an attempt to seem relevant and should stick with established acronyms. If you’d like assistance in understanding the complexities of managing and securing IoT and OT devices, please schedule an inquiry or guidance session.
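The article’s simplified rule of thumb (IoT runs inside your business; OT runs your business) and its two security goals can be captured in a small classification sketch. The device inventory and category assignments below are illustrative examples, not any vendor’s taxonomy:

```python
# Map each category to the security goal the article assigns it:
# IoT-family devices -> protect the data; OT -> maintain operational safety.
CATEGORY_GOAL = {
    "IoT":  "protect the data",
    "IIoT": "protect the data",                       # subset of IoT
    "IoMT": "protect the data + operational safety",  # spans IoT and OT
    "OT":   "maintain operational safety",
}

# Hypothetical device inventory for illustration.
DEVICES = {
    "doorbell camera": "IoT",
    "warehouse label scanner": "IoT",
    "EV charger": "IIoT",
    "infusion pump": "IoMT",
    "SCADA controller": "OT",
    "factory-line PLC": "OT",
}

def offline_impact(category: str) -> str:
    """The article's heuristic: OT offline means you're not doing business."""
    if category == "OT":
        return "business stops"
    return "business degraded but running"

for device, cat in DEVICES.items():
    print(f"{device:24s} {cat:5s} goal: {CATEGORY_GOAL[cat]:42s} "
          f"if offline: {offline_impact(cat)}")
```

As the article notes, the heuristic has exceptions (an offline MRI machine stops a radiology business cold), so treat the mapping as a starting point, not a rule.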

IoT, IIoT, IoMT, And OT — Welcome To Acronym Mania. What Does It All Mean? Read More »

Hannover Messe 2025: Mind The Reality Gap

Last week, I joined 127,000 of our closest friends in Germany for the Hannover Messe trade fair, which once again showcased all that’s new and interesting in the smart manufacturing world. Events like this always exist in a bit of a bubble, but the reality gap between lovely spring sunshine, beautiful cherry blossoms, and breathless AI boosterism inside the showground and tariffs, uncertainty, and lengthening sales cycles outside felt particularly wide this year.

So What Did We See?

Robots everywhere. Big and small, fixed and mobile, wheeled and legged: In some halls, utilitarian autonomous mobile robots were the point, and prospective buyers dug deeply into questions of carrying capacity, connectivity, range, and fleet management. Elsewhere, the flash of a robot leg (or four) drew crowds to the booths of the Bundeswehr (Germany’s army), Siemens, and other giants of the industrial world. In line with Forrester’s prediction, humanoid robots were a rarer beast. In a week of searching, I saw two (from Unitree and Sanctuary AI), and only one (Unitree’s G1) had legs. Forrester’s advice, to focus on the use case rather than the form factor, remains as relevant as ever.

AI everywhere, too. Last year, I commented that “everyone had an AI story, even if few made much sense.” There was still plenty of that in 2025, but I also saw some evidence that AI was being put to practical use. Almost everyone had a chatbot to show, and some of them were quite clever. PTC showed a nice enrichment of the CodeBeamer asset lifecycle management application, using Microsoft’s AI tools to reduce ambiguity and contradiction in formal statements of requirements during product design and manufacture. Siemens enriched its existing AI offerings with a new industrial foundation model, trained on domain-specific concepts and able to process engineering diagrams as well as the text and images that more general-purpose tools manipulate.

Embodied or physical AI.
Interesting things happen when robots and AI get together, and some early indications of that were also on show. Sanctuary AI’s humanoid robot on the Microsoft booth might have been legless, but it had very clever hands and an impressive ability to respond to its environment. A small robotic arm on the TCS booth looked much like all the other robotic arms at the show, except for the scrawled signature of NVIDIA CEO Jensen Huang. Behind the scenes, his company’s Cosmos model helped the TCS team train the arm to cope with a wide set of situations. I’ll be diving more deeply into both in some embodied or physical AI research this year.

Virtual PLCs. Audi, Intel, and Siemens have all been talking about different aspects of a project to virtualize line-side programmable logic controllers (PLCs) for several years, but industry conservatism, network latency, the control loop, and an engineer’s understandable desire to see — and touch — the little box of tricks responsible for keeping their multimillion-Euro industrial process moving smoothly conspire to slow the virtualization of operational technology workflows. Audi and Siemens have taken the solution into production, with Audi’s car body assembly line in Neckarsulm now controlled by virtual PLCs (TÜV-certified as fail-safe) installed on standard IT infrastructure in a data center 10 kilometers from the plant. According to Siemens, a further 40 customers are evaluating the solution.

Unified namespace. The unified namespace (or UNS) was mentioned at a lot of booths this year, but it’s a term that risks becoming too diluted to be useful. Some (like Automation, HiveMQ, Litmus, and Sight Machine) mostly used the term in the pure sense originally intended by Walker Reynolds. Others were less precise and really just talked about pouring data from different systems into a single data lake: There’s not much unification happening there!
Both can be useful, but the extra work to add context, semantics, and structure provides the real differentiation that makes a true UNS special.

More data hubs and fabrics. We talk about the digital industrial platform at Forrester (new report on the topic coming very soon), and one important aspect of this is providing a way to more easily share data across application, organization, or workflow silos. There’s some overlap with the UNS, but we also see vendors offering their own software solutions. Autodesk Forge, AVEVA Connect, Hexagon Nexus, and others are addressing this challenge, and new options like GE Vernova’s Proficy Data Hub and HiveMQ’s Pulse were being promoted at the show and should be generally available later this year.

Merck combines physical and digital to improve quality and traceability. It’s a pretty specific use case, but it popped up on at least two stands. Merck launched the M-Trust “cyber-physical trust platform” at CES in January, which ties digital product information to specific attributes of a unique physical product, such as specific pigments embedded in the ink used to print its label. There’s a lot to explore here in terms of ensuring trust and authenticity up and down the supply chain and making the solution cost-effective for cheaper products. But integrations like those on show in Hannover help: On the Zebra stand, the special reader required to spot inclusions in ink and paint was embedded into a regular Zebra handheld scanner, and Siemens and Merck showcased the SmartFacturing Studio that supports modular production of pharmaceuticals with a lot of help from Siemens hardware, software, and the Xcelerator platform. This touches on some similar ideas to the digital product passport, which I also saw good examples of and will be exploring in more depth in a report later this year.

Oh, Canada! There’s usually a partner country at Hannover Messe.
Partner countries are selected months ahead of the show and normally don’t make much of an impression beyond their senior politicians, diplomats, or executives saying worthy things at the launch press conference. This year’s choice, Canada, was fortuitous and well placed to connect with broader concerns around tariffs and geopolitics. Canadian startups, companies, and high school robotics teams made the most of the opportunity to show their capabilities and

Hannover Messe 2025: Mind The Reality Gap Read More »

Trump’s Tariffs Hammer Big Tech as Apple, Meta, Amazon Shares Plunge

U.S. President Donald Trump announced on Tuesday a host of new tariffs that sent the stock prices of numerous tech giants plummeting. He applied individual "reciprocal" tariffs to several nations, each equivalent to half of that nation's trade deficit with the U.S., plus a baseline 10% levy on all imports. Goods from Vietnam are now subject to a 46% reciprocal tariff, imports from Taiwan face 32%, and imports from India face 26%. Additionally, China faces a 34% reciprocal tariff, on top of the 20% tariff that has been in effect since March.

By Thursday's close, NVIDIA's stock had fallen by nearly 8% as a result of the tariffs announcement, while Amazon and Meta each dropped by 9%, according to CNBC. Apple led the declines, tumbling 9%, its steepest drop since the COVID-induced market sell-off in March 2020. Shares of Microsoft and Alphabet fell about 2% and 4%, respectively. The Nasdaq Composite Index, a benchmark heavily weighted toward tech stocks, dropped by almost 6%. The sell-off reflects fears that operational costs will rise and that supply chains, which rely heavily on overseas manufacturing and imports, will be disrupted.

Much of the so-called "Magnificent Seven" (Apple, Microsoft, Alphabet, Tesla, NVIDIA, Meta, and Amazon) took another hit during Friday's trading, after China retaliated with a 34% tariff on imports from the U.S. Both NVIDIA and Apple dropped 7%, according to Yahoo Finance, while shares in Meta sank by more than 5%. CNBC reported on Friday that the "Magnificent Seven" lost a combined $1.8 trillion in market value over the past two days. Cryptocurrency prices are also seeing a significant negative impact.

Company-level impact: Apple and NVIDIA

Apple products, primarily manufactured in China, India, and Vietnam, are likely to become more expensive as the company passes increased import costs on to U.S. consumers.
Morgan Stanley analysts estimate that Apple's profits could take a 7% hit in 2026, with its annual costs rising by $8.5 billion. U.S. chip giant NVIDIA should be somewhat shielded from the impact thanks to Trump's exemption for semiconductors, which spares it from the 32% tariff on chips manufactured in Taiwan by TSMC. However, it remains unclear whether the semiconductor exemption will also cover the 10% baseline tariff on all imports, and rumour has it that new tariffs on chips are coming soon.

Tariffs could 'disrupt AI innovation'

The U.S. relies on China and Taiwan for approximately 80% of its foundry capacity for 20 to 45nm chips and about 70% for 50 to 180nm chips. Tech firms may attempt to shift sourcing to countries free of reciprocal tariffs, but many will pass the additional costs on to consumers instead. Separately, Trump has revoked tariff exemptions on Chinese imports valued at $800 or less. This is particularly bad news for Amazon, as many of the low-price goods listed on its marketplace come from Chinese sellers.

Analysts are concerned about potential retaliatory action from China, as the country's Ministry of Commerce said it would "resolutely take countermeasures" if the U.S. does not "immediately cancel" its tariffs. Dan Ives of Wedbush Securities said in a note that tariffs from China could "constrict the supply chain for next-generation Nvidia chips/hardware," disrupting AI innovation. source
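The "reciprocal" rates in this article were reportedly derived as half of each country's bilateral trade deficit relative to its exports to the U.S., floored at the 10% baseline. A minimal sketch of that arithmetic; the function name and the example figures are illustrative, not official trade data:

```python
def reciprocal_rate(trade_deficit: float, imports: float, baseline: float = 0.10) -> float:
    """Half of the bilateral trade deficit as a share of that country's
    exports to the U.S., floored at the 10% baseline levy."""
    return max(0.5 * (trade_deficit / imports), baseline)

# Illustrative: a country exporting $100B to the U.S. while running a
# $92B bilateral surplus would land near Vietnam's reported 46% rate.
print(f"{reciprocal_rate(92, 100):.0%}")  # 46%

# A country with a small or no surplus falls back to the 10% baseline.
print(f"{reciprocal_rate(1, 100):.0%}")  # 10%
```

This is a reconstruction of the commonly reported formula, not an official methodology statement.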

Trump’s Tariffs Hammer Big Tech as Apple, Meta, Amazon Shares Plunge Read More »

An answer to AI’s energy addiction? More AI, says the IEA

The International Energy Agency (IEA) has published its first major report on the AI gold rush's impact on global energy consumption, and its findings paint a worrying, and perhaps contradictory, picture.

Energy use from data centres, including for artificial intelligence applications, is predicted to double over the next five years to 3% of global energy use. AI-specific power consumption could drive over half of this growth globally, the report found. Some data centres today consume as much electricity as 100,000 households; the hyperscalers of the future could gobble up 20 times that amount, according to the IEA. By 2030, data centres are predicted to run on 50% renewable energy, with the rest comprising a mix of coal, nuclear power, and new natural gas-fired plants.

The findings paint a bleak picture for the climate, but there's a silver lining, the IEA said. While AI is set to consume more energy, its ability to unlock efficiencies in power systems and discover new materials could provide a counterweight.

"With the rise of AI, the energy sector is at the forefront of one of the most important technological revolutions of our time," said Fatih Birol, the IEA's executive director. "AI is a tool, potentially an incredibly powerful one, but it is up to us – our societies, governments, and companies – how we use it."

AI can help optimise power grids, increase the energy output of solar and wind farms through better weather forecasting, and detect leaks in vital infrastructure. The technology could also be used to plan transport routes more effectively or to design cities. AI also has the potential to discover new green materials for tech like batteries.
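To put the IEA's household comparison in perspective, a quick back-of-the-envelope conversion helps. The per-household figure of roughly 10,000 kWh per year is an assumption supplied here for illustration, not a number from the report:

```python
# Assumed average household consumption (illustrative round number).
HOUSEHOLD_KWH_PER_YEAR = 10_000

def annual_twh(households_equivalent: int) -> float:
    """Annual electricity use, in terawatt-hours, of a data centre that
    consumes as much as the given number of households."""
    return households_equivalent * HOUSEHOLD_KWH_PER_YEAR / 1e9  # kWh -> TWh

print(annual_twh(100_000))       # a large data centre today: about 1 TWh/year
print(annual_twh(20 * 100_000))  # a future hyperscaler at 20x: about 20 TWh/year
```

Even at these rough assumptions, a single hyperscale campus on the order of 20 TWh per year would rival the annual electricity consumption of a small country.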
However, the IEA warned that the combined impact of these AI-powered solutions would be "marginal" unless governments create the necessary "enabling conditions." "The net impact of AI on emissions – and therefore climate change – will depend on how AI applications are rolled out, what incentives and business cases arise, and how regulatory frameworks respond to the evolving AI landscape," the report said.

Divisions in the AI energy debate

While AI could, theoretically, curb energy use, major questions remain, and the technology's negative climate impact is already locked in. The IEA predicts data centres will contribute 1.4% of global "combustion emissions" by 2030, almost triple today's figure and nearly as much as air travel. While that doesn't sound like much, the IEA's figure doesn't account for the embodied emissions from constructing all those new data centres and producing the materials that go into them.

Alex de Vries, a researcher at VU Amsterdam and the founder of Digiconomist, told Nature that he thinks the IEA has underestimated the growth in AI's energy consumption. "Regardless of the exact number, we're talking several percentage of our global electricity consumption," said de Vries. This uptick in data centre electricity use "could be a serious risk for our ability to achieve our climate goals," he added.

Claude Turmes, Luxembourg's energy minister, accused the IEA of presenting an overly optimistic view and failing to address the tough realities that policymakers need to hear. "Instead of making practical recommendations to governments on how to regulate and thus minimise the huge negative impact of AI and new mega data centres on the energy system, the IEA and its [executive director] Fatih Birol are making a welcome gift to the new Trump administration and the tech companies which sponsored this new US government," he told the Guardian.

Aside from AI, there are more proven ways to curb energy use from data centres.
These include immersion cooling, pioneered by startups like Netherlands-based Asperitas, Spain's Submer, and UK-based Iceotope. Another is repurposing data centre heat for other applications, which is the value proposition of UK venture DeepGreen. All of these weird and wonderful solutions will need to scale up fast if they are to make a dent in data centres' thirst for electricity. Ultimately, we also need to start using computing power more wisely.

The debate on sustainable AI will continue at TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets for the event are now on sale. Use the code TNWXMEDIA2025 at checkout to get 30% off the price tag. source

An answer to AI’s energy addiction? More AI, says the IEA Read More »

Digi Yatra aims to be the ‘Travel stack of India’: CEO, Suresh Khadakbhavi

Q. What is the next wave of AI and Gen AI in Indian companies, and how will CIOs and tech leaders stay prepared for it? Do you see faster adoption and implementation of AI and Gen AI in dynamic B2C industries like airlines and travel?

Suresh: The ongoing tech transformation needs a proactive approach. I believe developing proprietary language models tailored to specific business needs can reduce reliance on external providers, mitigating geopolitical risks. Robust data governance frameworks are also crucial as the foundation for effective AI implementation. Moreover, investing in upskilling initiatives will equip teams with the skills needed to manage and innovate with AI technologies.

AI and Generative AI are rapidly enhancing efficiency and customer experience in B2C industries like airlines and travel. Digi Yatra is leading this evolution, deploying AI-powered facial biometric technology to streamline passenger verification and reduce wait times at airport touchpoints. AI-driven multilingual chatbots are being integrated to assist travelers with onboarding and real-time support. As Digi Yatra expands, AI will play a key role in document verification and fraud prevention, simplifying international travel processes. These advancements will create a seamless, contactless, and privacy-first journey for passengers, reinforcing Digi Yatra's commitment to innovation and operational excellence in air travel.

Q. According to the Digi Yatra Foundation, the facial-recognition-based check-in service at airports could also be implemented at hotels and public places like historical monuments. Has a prototype for this use case been developed, and are discussions ongoing with government agencies such as the Ministry of Tourism?

Suresh: Digi Yatra's contactless biometric solutions have the potential to extend beyond airports, supporting a more integrated and secure travel ecosystem across India.
While the platform is currently being enhanced at airports, with additional e-gates at various touchpoints to ensure smoother passenger flow, its application can be explored in other sectors where identity validation is essential. By integrating Digi Yatra across multiple travel and public spaces, India can establish a tech-driven, globally benchmarked travel ecosystem that is efficient, secure, and traveler-friendly. For travelers, it would mean a hassle-free experience across the board. For the travel industry, integrating Digi Yatra can drive greater operational efficiency by automating ID checks and reducing manual verification processes, making the whole end-to-end experience much more satisfying for customers. The government and tourism ministry can leverage this to ensure security and a seamless travel experience. With data-driven insights, they can plan infrastructure, allocate resources efficiently, and improve overall travel management. source

Digi Yatra aims to be the ‘Travel stack of India’: CEO, Suresh Khadakbhavi Read More »