1. Artificial intelligence in daily life: Views and experiences

Artificial intelligence is quickly becoming a larger part of everyday life. This chapter explores how the public and experts compare in their experiences and views around the use of AI (such as chatbots) and their control over AI's role in their lives.

Interacting with AI

Americans encounter AI in various ways, from social media to health care to financial services. But AI experts believe the public engages with AI more than it reports. When asked how often they think people in the United States interact with AI, a vast majority of experts (79%) say people in the U.S. interact with AI almost constantly or several times a day. A much smaller share of U.S. adults (27%) think they interact with AI at this rate. Three-in-ten say they do so about once a day or several times a week, and 43% report doing so less often.

Use and views of chatbots

It's been over two years since ChatGPT was released, and other chatbots followed soon after. Since then, Americans have increasingly used them for work or entertainment. To that end, we asked AI experts and the general public about their use of these tools.

Using chatbots is nearly universal among experts, but that's not the case for the general public. One-third of U.S. adults say they have ever used an AI chatbot, compared with nearly all AI experts surveyed (98%). That said, most Americans (72%) have at least heard of chatbots, including 28% who've heard a lot.

The public's experiences with chatbots have not been as positive as those of experts. About six-in-ten AI experts who have used a chatbot (61%) say it was extremely or very helpful to them; a smaller share of users in the general public (33%) say this. Fewer in both groups report that chatbots have been not too or not at all helpful, though U.S. adults who've used chatbots are more likely than the experts surveyed to say so (21% vs. 9%).

Do people think they have control over AI in their lives?

Debates have continued around how difficult, or even impossible, it is to opt out of AI. On balance, both the American public and the AI experts we surveyed want more control over this technology.

When asked about control over AI use in their lives, almost half or more in both groups say they have little or no control, a sentiment somewhat more prevalent among U.S. adults (59%) than among the AI experts surveyed (46%). Smaller shares of both groups think they have control over whether AI is used in their lives: 14% of the general public and 23% of AI experts say they have a great deal or quite a bit of control.

What's more, both U.S. adults and AI experts most commonly say they want more control over how AI is used in their lives. More than half of both AI experts and U.S. adults (57% and 55%, respectively) say they would like more control over how AI is used in their own lives. Fewer in both groups are comfortable with the amount of control they have, though experts are more likely to say this (38% vs. 19%). Uncertainty is more common among the general public: U.S. adults are far more likely than AI experts to say they are unsure how much control they want over AI (26% vs. 4%).

Among the experts surveyed, women are more likely than men to say they would like more control over AI (67% vs. 54%). Experts who work at colleges or universities are also more likely than those who work in private companies to say they want more control over AI (61% vs. 50%).
Roughly equal shares in both sectors, however, say they have not too much or no control over how AI is used in their lives (47% and 46%, respectively).


VMware/Siemens: A Cautionary Tale About The Risks Of Software And Services Licensing

Litigation has become the default method for companies to resolve disagreements, force accountability, and establish recourse for everything from breach-related failures to contractual disputes. A recent lawsuit filed by VMware (now owned by Broadcom) against its customer, Siemens' US operations, for alleged use of unlicensed software is not unique, and it should serve as a stark reminder that poorly governed software licenses and assets carry risk for both sides and will impact the technologies we depend on.

The Siemens-Broadcom Saga: He Said/She Said

Broadcom is accusing Siemens of using multiple VMware products without proper licenses. This "aha!" discovery that thousands of software licenses had been downloaded illegally was only brought to VMware's attention, however, after Siemens provided a list of installed software that it insisted was "eligible for the one-year extension of Support Services," even though some of those installs could not be associated with an active software license. Siemens had threatened legal action if it did not receive those extensions, and VMware countered by pointing out the license violations. Both sides hold responsibility for guarding legitimate license use, so it's an oopsie on both sides. The result is a legal battle certain to cost both companies millions in attorney fees and litigation costs, along with a legal discovery process that could unearth more licensing violations — not to mention potentially compromise Siemens' ability to get support services for the duration of the lawsuit.

Pay Attention To The Details, As Mistakes Have Consequences

"True-ups" are often negotiating tools for vendors. They can start with a request for a software audit but often lead to the discovery of unlicensed software that the business either needs to pay for or stop using. The intersection of infrastructure software, virtualization, and massive operational scale can mean large areas of unaccounted expense from true-ups, where a business has no choice but to pay or disrupt its operations. For example:

IBM raked in millions from WebSphere licensing when businesses started virtualizing their WebSphere servers, because licensing was based on the software's access to all the physical CPUs in the virtualized cluster. Until customers set up subcapacity licensing and the software agents to track it, they were on the hook for the additional licensing costs.

Oracle customers have run into similar issues when running Oracle Database on HCI clusters due to Oracle's licensing parameters. Efforts to get better utilization through virtualization while avoiding these licensing issues have driven many organizations to adopt disaggregated HCI or even to create targeted smaller clusters for Oracle use.

VMware's licensing changes are affecting many customers, as the piecemeal licensing that businesses were used to is converted to a bundled platform license. Customers then incur charges for platform components they haven't used in the past, often duplicating the functionality of existing infrastructure investments.

These are just a few examples. Pick a large software vendor and you can find similar stories. Finding license violations is a common tactic for vendors to identify what they see as unrealized income, and it can mean hundreds of thousands to millions of dollars in license costs for an enterprise customer.
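To see how quickly full-capacity licensing adds up, consider a hypothetical worked example. The cluster size and per-core price below are illustrative assumptions, not real IBM or Oracle rates; the point is the gap between licensing every physical core the hypervisor could schedule the software onto and licensing only the cores actually allocated:

```python
# Hypothetical full-capacity vs. subcapacity licensing comparison.
# All figures (cluster size, per-core price) are illustrative assumptions,
# not real vendor pricing.

hosts_in_cluster = 4      # physical hosts in the virtualized cluster
cores_per_host = 16       # physical cores per host
vm_cores = 8              # cores actually allocated to the licensed workload
price_per_core = 2_500    # assumed annual license cost per core, in USD

# Full-capacity rule: the software "can access" every physical core in the
# cluster, so every core must be licensed.
full_capacity_cost = hosts_in_cluster * cores_per_host * price_per_core

# Subcapacity rule: only allocated cores are licensed, provided tracking
# agents can prove the allocation to the vendor's auditors.
subcapacity_cost = vm_cores * price_per_core

print(f"Full capacity: ${full_capacity_cost:,}")   # $160,000
print(f"Subcapacity:   ${subcapacity_cost:,}")     # $20,000
print(f"Exposure without subcapacity tracking: "
      f"${full_capacity_cost - subcapacity_cost:,}")  # $140,000
```

Multiply that gap across dozens of clusters and it is easy to see how a single true-up reaches the "hundreds of thousands to millions of dollars" range described above.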
License changes, product bundling changes, and major infrastructure paradigm shifts can introduce a mismatch between what someone has paid for and what they should have paid for. Additionally, automated deployment, especially when the software is a key component of your tech stack, can lead to overuse at scale and create a significant licensing risk for your company. Accurate tracking is a must to manage that risk, but be careful with vendor-supplied license management tools: those tools can let a vendor see license overuse before you do. Assume that your license use is part of a negotiation, treat it that way, and manage that negotiating resource appropriately.

Lessons For Software Vendors And Their Consumers

As your ecosystem of software and services becomes larger and more complex, it's time to revisit the basics of how you can prevent disruption to business operations and avoid the negative optics of a similar situation at your company. Focus on effective vendor management and licensing best practices. To do this, consumers of software must:

Conduct regular license audits. Regularly review and audit software licenses to ensure compliance and avoid unlicensed usage. Audits should not be your crutch, however. For automated deployments, use valid license checks before deploying rather than just auditing the environment after the fact. Even better, create deployed-license thresholds so that when you are close to reaching the limits of what you have already purchased, an alert is sent to procurement or a tech leader to address the situation before it slows down your operations (see the sketch below).

Use tech to manage software licenses. It's your responsibility to know how many software licenses are deployed in your environment. Implement tooling to track and manage your software licenses efficiently, check that the numbers match what you have contracted and paid for, and educate employees about the importance of software licensing and compliance to prevent inadvertent violations. In addition to adding license checks to deployment automation, you can also automate new license provisioning, and ideally retirement, if your vendor provides a mechanism for it.

Rethink procurement and contracting processes. Software is constantly changing, and your procurement practices need to keep up with new trends in bundling and packaging. Develop and enforce clear policies for software procurement, encourage procurement to ask hard questions about inadvertent violations, and ensure that contract language protects your company's position if noncompliance is unintentional.

Software vendors must:

Set thresholds for noncompliance. Not all software licensing violations are by an egregious amount or a result of flagrant disregard for the contractual agreement. Understand what leeway you're willing to provide, and make it clear in the contract that overage can't exceed a certain percentage or number of licenses. Provide a time frame for violations to be resolved, such as a 30- or 60-day period after notice is given.

Don't ignore contract governance. Most companies spend their time and
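As a concrete illustration of the deployment-time license checks and thresholds recommended above, here is a minimal sketch. The product names, entitlement counts, 90% threshold, and notify_procurement() hook are all hypothetical placeholders; a real implementation would read deployment counts from your asset management tooling and wire alerts into your ticketing system.

```python
# Minimal sketch of a deployment-time license gate with an alert threshold.
# ENTITLEMENTS, the 90% threshold, and notify_procurement() are hypothetical
# placeholders for your own ITAM data and alerting workflow.

ENTITLEMENTS = {"vmware_vsphere": 500, "oracle_db": 64}  # licenses purchased
ALERT_THRESHOLD = 0.90  # warn procurement at 90% utilization


def notify_procurement(product: str, used: int, owned: int) -> None:
    # Placeholder: wire this to email, chat, or a ticketing system.
    print(f"ALERT: {product} at {used}/{owned} licenses; review before renewal")


def can_deploy(product: str, deployed: int, requested: int = 1) -> bool:
    """Allow a deployment only if it stays within purchased entitlements."""
    owned = ENTITLEMENTS.get(product, 0)
    after = deployed + requested
    if after > owned:
        return False  # block the deployment rather than create a violation
    if after >= owned * ALERT_THRESHOLD:
        notify_procurement(product, after, owned)  # nearing the limit
    return True


# Deployment automation calls the gate before provisioning new instances.
if can_deploy("vmware_vsphere", deployed=460, requested=5):
    print("Proceed with deployment")
else:
    print("Blocked: purchase additional licenses first")
```

The design choice worth noting is that the gate blocks outright overage but merely alerts on near-threshold use, so operations continue while procurement negotiates ahead of the limit rather than after a violation.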


The Impact of April 2 Tariffs on IT Spending

The wave of new tariffs introduced by the US administration will drive up technology prices, disrupt supply chains, and weaken global IT spending in 2025. Not only will these tariffs have a direct inflationary effect on technology prices in the US, but growing concerns about a broader economic slowdown will lead to weaker investment by businesses and consumers around the world, even before any slowdown appears in earnings or economic data. This impact will unfold quickly in 2025, despite the strong countervailing force of growing demand for AI and related technologies.

On March 31, IDC published a downside scenario in which global IT spending would grow by 5%, rather than the 10% growth we currently project in our baseline forecast. This scenario was modelled before the latest tariff announcements in April but already reflected the potential impact of a broadening economic slowdown. While the details of the final tariffs don't align exactly with that downside scenario, we expect our baseline forecast to move toward the lower end of that 5-10% range over the next few weeks.

As a result, we are developing a new downside scenario that reflects the possibility of a broadening global trade war, which will likely include additional tariffs and retaliatory measures by many countries, possibly including protective actions against countries other than the US. Our new baseline forecast in April will reflect what we now know: these new tariffs will have a significant negative impact on the ICT industry in 2025.

This situation remains highly fluid and dynamic. Tariffs set to be implemented on April 9 may yet be adjusted or postponed, and the response in other countries could include stimulus measures to protect short-term economic stability in China and elsewhere. This is a moving target, but the risk of a global recession is higher than it was one week ago, with some economists now pegging it at 40%, and this uncertainty will have an immediate effect on business and consumer confidence.

New tariffs will have an inflationary impact on technology prices in the US, as well as causing significant disruption to supply chains. This impact will be most immediate in devices, followed by other compute, storage, and network hardware and datacenter construction, but even sectors such as software and services will be affected if the tariffs are long-lived. There is also an indirect negative impact on software and services: providers will incur increased costs for the infrastructure they use to develop and deliver their products, meaning that many software and services vendors will need to build those increased costs into their own pricing assumptions.

Some device and hardware vendors may seek to mitigate the impact, but US customers will swiftly feel the effect of higher prices. Lean inventories and rapid manufacturing cycles mean that price hikes will materialize quickly, and the broad, unfocused nature of these new tariffs leaves manufacturers little room to adjust.

It's important to note that sentiment in our surveys of IT buyers had remained relatively resilient through March. While there is significant concern over the uncertainty caused by tariff policies, a majority of firms in March were trying to protect their key investment priorities around AI, analytics, security, and IT optimization. IT is more important to the business than ever before. We will be checking in with IT leaders on these same issues in mid-April.
Price sensitivity is rising, however, which history shows is a major cause of competitive disruption. The IT market will continue to be more resilient than during previous economic cycles, and more resilient than many other sectors of the economy. Service providers will try to maintain their aggressive investment in AI infrastructure deployments, and they have the ability to optimize asset use to a much greater extent than even the largest of their enterprise customers. For businesses, IT has largely transitioned from a capex to an opex model in which a larger share of technology spending is essential to business operations and is increasingly tied to business conditions.

Despite all of this, the reality of a slowing economy and rising unemployment will have a direct impact on IT spending. Consumer spending is likely to be hit hard. Businesses will first look to cut spending on devices and on-premises infrastructure, seeking rapid cost benefits to protect the bottom line. Any job cuts will have a direct impact on some types of IT spending.

IT services spending is vulnerable to a slowdown in new contract signings, driven by a broader economic slowdown over the next 6-12 months. Combined with other economic headwinds, including government spending cuts in the US, this adds up to a much weaker outlook for short-term investment in new technology projects.

Conclusion

Our March 31 forecast of 10% growth for global IT spending will be reduced significantly in April, based on the tariff announcements of April 2. The situation remains extremely fluid and subject to new announcements or changes, but a weakening economy will lead to IT spending cuts and delays in the next six months. We will move closer to the previous downside of 5% growth, which reflects a rapid, negative impact on hardware and IT services spending.

Agility is key to navigating this period of major disruption and uncertainty. It may take several months for the full picture to become clear, but the uncertainty is already causing delays in some types of investment. Underlying demand for IT is still high, and the likelihood of a decline in overall IT spending remains very low, but adjusting to a new baseline of slower growth in the near term is our new reality.

The tariffs announced this week have introduced significant instability into the IT market. If the measures announced on April 2 stay in place and trigger an escalation of retaliatory measures leading to a global recession, the impact on IT spending will be swift and downward, potentially leading to the worst market performance since the global financial crisis of 2008-2009.

IDC will continue to monitor developments closely. We'll


Airbus to build lander for Europe’s first Mars rover after Russia dropped

The European Space Agency's (ESA) Rosalind Franklin rover is back on course for a landmark trip to Mars, where it will probe the red planet for signs of extraterrestrial life.

ESA initially designed the Mars rover alongside Roscosmos, Russia's space agency, as part of the ExoMars programme. The vehicle was set to launch in 2022, but when Russia invaded Ukraine, ESA severed ties with Moscow, putting the mission in jeopardy. Rosalind Franklin — named after the British chemist whose work was crucial to understanding the structure of DNA — was left without several key components, including a landing platform to safely touch down on the Martian surface.

But now, ESA and Thales Alenia Space, the prime contractor for the ExoMars mission, have awarded Airbus a £150mn contract to build a new lander at the company's facility in Stevenage, UK. The British government will fund the lander via the UK Space Agency.

"Getting the Rosalind Franklin rover onto the surface of Mars is a huge international challenge and the culmination of more than 20 years' work," said Kata Escott, managing director at Airbus Defence and Space UK, which also designed and built the rover.

The ExoMars spacecraft is set to launch from the US in 2028. Arrival on Mars is expected by 2030. If successful, it will be Europe's first rover on Mars. The US space agency NASA currently has two rovers in operation — Perseverance and Curiosity — while China has one, called Zhurong.

The trip to Mars

[Image: UK Technology Secretary Peter Kyle next to a mockup of the ExoMars Rosalind Franklin rover at Airbus's facility in Stevenage, UK. Credit: DSIT]

As the spacecraft approaches Mars, the lander — carrying the rover — will separate and begin its rapid descent into the atmosphere. A combination of a heat shield, parachutes, and braking rockets will slow the lander just before touchdown. Once on the surface, the lander will deploy ramps, allowing the rover to drive off and begin its exploration.

Rosalind Franklin's instruments will look for evidence of past and present Martian life. The rover includes a drill designed to probe as deep as two metres into the surface, acquiring samples shielded from the radiation at the surface. It's designed to operate for at least seven months.

Since the fallout with Russia, ESA has secured new agreements for various components of the ExoMars spacecraft, including a contract with NASA to supply adjustable braking engines for the landing platform and radioisotope heating units (RHUs). These RHUs use radioactive decay to generate heat, preventing the rover from freezing in the frigid Martian environment.


AGs Sue To Halt Disruptions To NIH Grant Funding

By Julie Manganis (April 4, 2025, 11:58 AM EDT) — A coalition of 16 states on Friday sued the National Institutes of Health over delays and cancellations of grant programs linked to vaccines, transgender issues and other areas they say are currently "disfavored" by the Trump administration….


Confidence In Marketing Measurement Is Increasing, But The Job Is Getting Bigger

One of the most interesting aspects of my role as a Forrester analyst is hearing marketers ask how others in their position or industry are approaching measurement. A common fear I hear is that "everyone else" in a client's competitive set has figured things out and the client's brand is being left behind. To help assuage these fears, we recently analyzed data from Forrester's Marketing Survey, 2024, to uncover the state of B2C marketing measurement. While marketing measurement is still a work in progress for most companies, marketers' confidence in their ability to measure marketing's business value accurately and consistently is high: fewer than 5% of marketers say they have not been able to prove the long-term impact of marketing. But between data deprecation, fragmentation of channels, and increasing consumer complexity, marketing analytics and measurement isn't getting any easier. Here are three takeaways from our analysis:

Marketers manage a broad set of metrics. Revenue growth remains the top metric used by marketers to gauge both the business impact of marketing and the performance of individual marketing initiatives, but they are also being asked to track customer outcomes (i.e., satisfaction, loyalty, retention, profitability) and increase brand value. Twenty-nine percent of marketers say they routinely use brand value to measure and attribute the incremental business value of marketing, up from 19% in 2023.

Tools and resources are major drivers of measurement confidence. The marketers who are most confident in their ability to measure marketing's incremental business value are also the most confident in the ability of their tools, teams, and data to meet their needs for timely insights. This portends a potential split between the haves and have-nots, where the ability to measure accurately depends on investing today in measurement technology, data, and teams. Brands that are not yet investing in a measurement-informed culture will only find it more difficult to catch up going forward.

Data issues remain the top marketing measurement challenge. Data challenges continue to make marketers' measurement jobs tougher. Too many unconnected data sources and inconsistent data quality across sources hold marketers back from making full use of measurement and analytics, and B2C marketers continue to lose trust in third-party data, which limits their ability to measure granular audience segments. Sixty-eight percent of marketers are reevaluating their third-party data partnerships.

For more detailed insights into how B2C marketers are thinking about measurement, read our recent report, The State Of B2C Marketing Measurement, 2024. In the coming months, I'll also be publishing reports on data requirements for marketing measurement, how to build strong measurement teams, best practices for extracting value from your marketing mix model, and how generative AI is affecting the marketing measurement landscape. If you would like to discuss your own approach to marketing measurement and how to prepare for the future, schedule a guidance session.


Meta defends Llama 4 release against ‘reports of mixed quality,’ blames bugs

Meta's new flagship AI language model Llama 4 arrived suddenly over the weekend, with the parent company of Facebook, Instagram, WhatsApp and Quest VR (among other services and products) revealing not one, not two, but three versions — all upgraded to be more powerful and performant using the popular "Mixture-of-Experts" architecture and a new training method involving fixed hyperparameters, known as MetaP. All three are also equipped with massive context windows — the amount of information that an AI language model can handle in one input/output exchange with a user or tool.

But following the surprise announcement and public release of two of those models for download and use — the lower-parameter Llama 4 Scout and mid-tier Llama 4 Maverick — on Saturday, the response from the AI community on social media has been less than adoring.

Llama 4 sparks confusion and criticism among AI users

An unverified post on the North American Chinese-language community forum 1point3acres, which made its way to the r/LocalLlama subreddit, purported to be from a researcher at Meta's GenAI organization who claimed that the model performed poorly on third-party benchmarks internally and that company leadership "suggested blending test sets from various benchmarks during the post-training process, aiming to meet the targets across various metrics and produce a 'presentable' result."

The community met the post with skepticism about its authenticity, and a VentureBeat email to a Meta spokesperson has not yet received a reply. But other users found reasons to doubt the benchmarks regardless.

"At this point, I highly suspect Meta bungled up something in the released weights … if not, they should lay off everyone who worked on this and then use money to acquire Nous," commented @cto_junior on X, in reference to an independent user test showing Llama 4 Maverick's poor performance (16%) on a benchmark known as aider polyglot, which runs a model through 225 coding tasks. That's well below the performance of comparably sized, older models such as DeepSeek V3 and Claude 3.7 Sonnet.

Referencing the 10 million-token context window Meta boasted for Llama 4 Scout, AI PhD and author Andriy Burkov wrote on X in part: "The declared 10M context is virtual because no model was trained on prompts longer than 256k tokens. This means that if you send more than 256k tokens to it, you will get low-quality output most of the time."

Also on the r/LocalLlama subreddit, user Dr_Karminski wrote, "I'm incredibly disappointed with Llama-4," and demonstrated its poor performance compared with DeepSeek's non-reasoning V3 model on coding tasks such as simulating balls bouncing around a heptagon.

Former Meta researcher and current AI2 (Allen Institute for Artificial Intelligence) senior research scientist Nathan Lambert took to his Interconnects Substack blog on Monday to point out that a benchmark comparison Meta posted to its own Llama download site, pitting Llama 4 Maverick against other models on cost-to-performance using the third-party head-to-head comparison tool LMArena ELO (aka Chatbot Arena), actually used a different version of Llama 4 Maverick than the one the company had made publicly available: one "optimized for conversationality." As Lambert wrote: "Sneaky.
The results below are fake, and it is a major slight to Meta's community to not release the model they used to create their major marketing push. We've seen many open models that come around to maximize on ChatBotArena while destroying the model's performance on important skills like math or code."

Lambert went on to note that while this particular model on the arena was "tanking the technical reputation of the release because its character is juvenile," including lots of emojis and frivolous emotive dialog, "The actual model on other hosting providers is quite smart and has a reasonable tone!"

In response to the torrent of criticism and accusations of benchmark cooking, Meta's VP and head of GenAI Ahmad Al-Dahle took to X to state: "We're glad to start getting Llama 4 in all your hands. We're already hearing lots of great results people are getting with these models. That said, we're also hearing some reports of mixed quality across different services. Since we dropped the models as soon as they were ready, we expect it'll take several days for all the public implementations to get dialed in. We'll keep working through our bug fixes and onboarding partners. We've also heard claims that we trained on test sets — that's simply not true and we would never do that. Our best understanding is that the variable quality people are seeing is due to needing to stabilize implementations. We believe the Llama 4 models are a significant advancement and we're looking forward to working with the community to unlock their value."

Yet even that response was met with complaints of poor performance and calls for further information, such as more technical documentation on the Llama 4 models and their training processes, along with questions about why this release, compared with all prior Llama releases, was particularly riddled with issues.

It also comes on the heels of Meta's VP of research Joelle Pineau, who worked in the adjacent Meta Fundamental AI Research (FAIR) organization, announcing her departure from the company on LinkedIn last week with "nothing but admiration and deep gratitude for each of my managers." Pineau, it should be noted, also promoted the release of the Llama 4 model family this weekend.

Llama 4 continues to spread to other inference providers with mixed results, but it's safe to say the initial release of the model family has not been a slam dunk with the AI community. The upcoming Meta LlamaCon on April 29, the first celebration and gathering for third-party developers of the model family, will likely have much fodder for discussion. We'll be tracking it all; stay tuned.


In Light Of New Tariffs, Focus On Digitizing And Diversifying Your Supply Chain

In the ever-evolving landscape of global trade, the recent imposition of new tariffs and sanctions is leaving many business leaders concerned about the future of their supply chain strategies. Navigating the complexities of today's global trade environment presents a multifaceted challenge for businesses. While, in the short term, consumers will bear the brunt as importers pass tariffs through to their prices, this moment also presents an opportunity for supply chain leaders to diversify and digitize their supply chains for greater resilience. In earlier research, we outlined three steps that business leaders should take to digitally transform their supply chains.

Tariffs are just one element shaping global trade flows, especially in a world of increasing regulation and compliance mediated by shared data about materials, methods, and treatment of labor. Meanwhile, the COVID-19 pandemic and the ongoing war in Ukraine demonstrate that the location of manufacturing still matters and that companies need to diversify their supply chains to maintain an optimal balance between cost and flexibility. In another recent blog, we discussed how prospective tariffs might impact supply chain processes and supporting applications.

Industries That Are Being Challenged To Scale Up Domestic Production Face The Greatest Risk

Manufacturing: Manufacturers in automotive, pharmaceutical, and consumer electronics are the most heavily impacted by tariffs. COVID-19 demonstrated the risks of exclusive reliance on global supply chains, which are equally susceptible to disruption from war, pestilence, or tariffs. Manufacturers like Ford and General Motors consider all risks, including exposure to tariffs, in their sourcing strategies and use local suppliers to mitigate risk. Other manufacturers, such as HP, build optionality into their supply chain strategies. This buys options on production capacity in case a particular product offering takes off, while avoiding outright commitment to subcontractor capacity in case of weaker demand. You can measure the business value of your supply chain optionality using Forrester's Total Economic Impact™ (TEI) methodology.

Agriculture: The agricultural sector has suffered from tariffs imposed on US exports such as soybeans, dairy, and pork. China, one of the largest markets for US agricultural products, retaliated against earlier US tariffs by imposing its own duties on these goods, significantly reducing demand. Some farmers sought new markets, while others cut production or shifted to alternative crops. The ripple effect of tariffs on agriculture extends beyond farmers, affecting global supply chains and consumer prices. The situation is exacerbated by the current export challenges faced by Ukraine, historically known as the breadbasket of the world.

Semiconductor-dependent industries: The US's efforts to curb China's strength in embedded electronic components, together with the EU's sovereign cloud initiatives, force global manufacturers to manage a technology stack for each imperial bloc. Manufacturers must carefully choose their markets of operation. For example, the Dutch toolmaker ASML obtained an exemption from US sanctions only after negotiations between the Dutch and US governments. Meanwhile, Chinese firms placed $16 billion in orders with NVIDIA ahead of tighter export regulations.

Life sciences: The pharmaceutical and life sciences sector faces its own set of challenges as the US pushes toward domestic production of critical drug ingredients.
The adoption of advanced supply chain tools, such as TraceLink, reflects the industry's move toward greater transparency and resilience.

The Role Of Logistics And Freight Suppliers Will Further Increase

In a climate fraught with trade uncertainties and slowdowns, logistics and freight suppliers emerge as crucial navigators. Their expertise in customs clearance and compliance becomes invaluable, guiding businesses through challenging terrain. Continual maintenance of enterprise master data (for example, ship-to and ship-from addresses) helps manage attributes such as sustainability and country of origin. The adoption of global trade management solutions like those provided by SAP and Oracle exemplifies the strategic measures that companies can take to ensure smooth operations amid the complexities of global trade.

I look forward to hearing your viewpoint on how to best deal with the current uncertainty and flourish over the next four years. In the meantime, please book a guidance session to discuss how you can leverage our research and tools to create better supply chain resilience. I also want to thank Forrester Research Associate Lorenzo Annicchiarico, who contributed to this blog.


$115 million just poured into this startup that makes engineering 1,000x faster — and Bezos, Altman, and Nvidia are all betting on its success

Rescale, a digital engineering platform that helps companies run complex simulations and calculations in the cloud, announced today that it has raised $115 million in Series D funding to accelerate the development of AI-powered engineering tools that can dramatically speed up product design and testing.

The funding round, which brings Rescale's total capital raised to more than $260 million, included investments from Applied Ventures, Atika Capital, Foxconn, Hanwha Asset Management Deeptech Venture Fund, Hitachi Ventures, NEC Orchestrating Future Fund, Nvidia, Prosperity7, SineWave Ventures, TransLink Capital, the University of Michigan, and Y Combinator. The San Francisco-based company has drawn support from an impressive roster of early backers, including Sam Altman, Jeff Bezos, Paul Graham, and Peter Thiel.

This latest round aims to propel Rescale's vision of transforming how products are designed across industries by combining high-performance computing, intelligent data management, and a new field the company calls "AI physics."

"Rescale was founded with the mission to empower engineers and scientists to accelerate innovation by running computations and simulations more efficiently," Joris Poort, Rescale's founder and CEO, said in an interview with VentureBeat. "That's exactly what we're focused on today."

From Boeing's carbon fiber challenge to a $260 million startup

The company's origins trace back to Poort's experience working on the Boeing 787 Dreamliner more than 20 years ago. He and his co-founder, Adam McKenzie, were tasked with designing the aircraft's wing using complex physics-based simulations.

"My co-founder, Adam, and I were working at Boeing, running large-scale physics simulations for the 787 Dreamliner," Poort told VentureBeat. "It was the first fully carbon fiber commercial airplane, which posed significant engineering challenges. Most airplanes before had always been built out of aluminum, but carbon fiber has many different layers and variables that needed to be optimized."

The challenge they faced was a lack of sufficient computing resources to run the millions of calculations needed to optimize the innovative carbon fiber design. "We couldn't get enough compute resources. This was 20 years ago, before cloud computing existed," he recalled. "We had to bootstrap together and cobble together resources from different organizations just to run these large-scale simulations over the weekend."

This experience led directly to Rescale's founding mission: build the platform they wished they had during those Boeing years. "Rescale was founded to build the platform we wish we had, because it took us many years to develop all these capabilities," Poort explained. "We were really just engineers trying to design the best possible plane, but we had to become applied mathematicians and computer scientists, doing all this infrastructure work just to solve engineering problems."

How AI models are turning days of calculations into seconds

Central to Rescale's ambitions is the concept of "AI physics" — using artificial intelligence models trained on simulation data to dramatically accelerate computational engineering. While traditional physics simulations might take days to complete, AI models trained on those simulations can deliver approximate results in seconds.
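The article doesn't detail Rescale's implementation, but the "AI physics" idea maps closely onto classic surrogate modeling: fit a fast regression model to pairs of design parameters and solver outputs, then query the model instead of re-running the solver. Here is a minimal sketch; the analytic "simulation" function, sample sizes, and model choice are illustrative assumptions standing in for a real CFD or FEA pipeline.

```python
# Minimal surrogate-modeling sketch of the "AI physics" idea: fit a fast
# regression model to simulation input/output pairs, then query the model
# instead of re-running the expensive solver. The "simulation" here is a
# toy analytic function standing in for a real solver run; Rescale's
# actual models and training pipeline are not public.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for a solver run that would normally take hours per case."""
    # e.g., a drag-like response to two design parameters, plus solver noise
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.05 * rng.normal(size=len(x))

# 1. Build a training set from previously completed simulation runs.
X_train = rng.uniform(0, 1, size=(500, 2))   # design parameters
y_train = expensive_simulation(X_train)      # solver outputs

# 2. Train the surrogate once, offline.
surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# 3. Query new designs in milliseconds instead of hours.
X_new = rng.uniform(0, 1, size=(5, 2))
approx = surrogate.predict(X_new)            # fast, approximate estimate
exact = expensive_simulation(X_new)          # deterministic spot-check
for a, e in zip(approx, exact):
    print(f"surrogate={a:+.3f}  solver={e:+.3f}")
```

As Poort notes below, the surrogate's answers are probabilistic estimates, but each one can be spot-checked against a deterministic solver run, which is what makes the approach verifiable.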
"With AI physics, you train AI models on simulation data sets, allowing you to run these simulations over 1,000 times faster," Poort said. "The AI model provides probabilistic answers — essentially estimates — whereas traditional physics calculations are deterministic, giving you exact results."

He offered a concrete example from one of Rescale's customers: "General Motors motorsports, they're designing the external aerodynamics of a Formula One vehicle. They may run thousands of these sort of fluid dynamics, aerodynamic calculations. Normally, these may take, like, about three days on, say, 1,000 compute cores. Now, with an AI model, they're able to do this in like less than a second."

This thousand-fold acceleration allows engineers to explore design spaces much more rapidly, testing many more iterations and possibilities than previously feasible. "The really unique advantage of AI physics is that you can verify the answers. It's just math," Poort emphasized. "This is different from LLMs, where you might encounter hallucinations that are difficult to validate. Many questions don't have definitive answers, but in physics, you have concrete, verifiable solutions."

The funding comes amid increasing enterprise investment in technologies that speed up product development. The high-performance computing market has grown to approximately $50 billion, with simulation software reaching $20 billion and product lifecycle data management about $30 billion, according to figures shared by Rescale.

What differentiates Rescale is its "compute recommendation engine," which optimizes workloads across different cloud architectures in real time. "Our unique differentiation is our technology called the compute recommendation engine. This allows us to optimize workloads in real time across different architectures available across all public clouds," Poort said. "We support 1,150 different applications with many versions, operating systems, and hardware architectures. When combined together, this creates more than 50 million different possible configurations."

The company's enterprise customers, which include Arm, General Motors, Samsung, SLB (formerly Schlumberger), and the U.S. Department of Defense, collectively spend over $1 billion annually to power their virtual product development and scientific discovery environments.

Beyond simulation: Data management and AI integration for modern engineering

Rescale is accelerating its roadmap in three key areas: first, expanding its library of over 1,250 applications and its network of more than 500 cloud datacenters; second, establishing unified data management and digital-thread capabilities for all computing workflows; and third, enabling faster engineering through AI.

"We also have a product called Rescale Data, which focuses on creating an intelligent data layer," Poort explained. "This is sometimes called the digital thread. Throughout the product lifecycle — whether you're developing an aircraft, a car, or in life sciences, a medical device or drug — you need to track all that data. If an issue arises, you can look back to see when that data was created, what the input files were, and related information."

Applied Materials, one of the investors in this round, has been working with Rescale to enhance its simulation capabilities. Rather than simply accelerating existing processes, the partnership suggests a more profound shift in how engineering knowledge is captured and applied. The most intriguing aspect of
