Cloud Startup Figma Confidentially Files IPO Amid Volatility

By Tom Zanki (April 15, 2025, 8:56 PM EDT) — Cloud-based design platform Figma Inc. said Tuesday it confidentially filed for an initial public offering, marking a first step toward going public during tense times for equity markets and coming more than one year after a failed merger with Adobe…

Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas and jurisdictions. A Law360 subscription includes features such as daily newsletters, expert analysis, a mobile app, advanced search, judge information, real-time alerts, 450K+ searchable archived articles, and more. Experience Law360 today with a free 7-day trial. source


Ready Your Commerce Strategy For Growth Through Volatility

The cost of doing business is now a moving target, adding complexity to an already challenging business environment. US businesses continue to grapple with costly consumer touchpoints and myriad challenges to running and growing effectively and profitably. With volatility permeating global markets, including the threat of retaliatory tariffs, now is the time to deliberately review and expand your commerce strategy to fight growth stagnation in 2025.

Our new research on the future of commerce is based on both data and many executive interviews across retail, hospitality, automotive, travel, technology vendors, and service providers. From this research, we see three distinct and emerging commerce strategies to guide leaders amid volatility: distributed, dynamic, and intelligent. We see that:

- Converting a consumer is hard, and in 2025, even more factors impact a purchase. Consumers making a purchase still consider its total cost, but core consideration factors also include convenience, product sentiment, brand resonance, and even culture and community. And that's true across consumer cohorts from Baby Boomers to Gen Alphas.
- Social commerce, marketplaces, and genAI are growing, with content and data as the foundation. Among US online adults, 33% say that social media influencers are the primary way they discover new products. Simultaneously, shoppable media is winning the attention race with younger consumers, fueling "shopping roulette." Distributed and dynamic commerce strategies have already begun generating the content and data that will power future-state intelligent commerce.
- Business and consumer agents are beginning to power next-gen commerce. If Amazon's recent "Buy for Me" shopping agent is any indication, consumer and business agents are already beginning to complete discrete tasks and to tangentially impact product discovery and selection by removing friction in today's digital experiences.
Intelligence, however, is a lot more than completing individual tasks; rather, it is the synthesis of information from multiple sources to make decisions and take action correctly. There’s much talk today about agentic AI — and there will be more AI innovations tomorrow. As these innovations proliferate in the market, commerce leaders must develop an intelligent commerce strategy with today’s distributed content and dynamic datasets that can evolve and adapt over time. This strategy will be your bedrock for competitive advantage as digital commerce scales across touchpoints in the coming years. What do distributed, dynamic, and/or intelligent commerce strategies look like for your business? Let’s talk! If you are assessing new commerce strategies for your business, please get in touch with us — Forrester’s commerce team — for an inquiry or guidance session to explore the right strategy and tactics for your business. source


UK AI Copyright Rules May Backfire, Causing Biased Models & Low Creator Returns

Barring companies like OpenAI, Google, and Meta from training AI on copyrighted material in the UK may undermine model quality and economic impact, policy experts warn. They say it will lead to bias in model outputs, undermining their effectiveness, while rightsholders are unlikely to receive the level of compensation they anticipate.

The UK government opened a consultation in December 2024 to explore ways to protect the rights of artists, writers, and composers when creative content is used to train AI models. It outlined a system that permits AI developers to use online content for training unless the rightsholder explicitly opts out. Bodies representing the creative industries largely rejected this proposal, as it put the onus on creators to exclude their content rather than requiring AI developers to seek consent. Tech companies didn't like it either, arguing that the system would make it difficult to determine which content they could legally use, restrict commercial applications, and demand excessive transparency.

During a recent webinar hosted by the Centre for Data Innovation think tank, three policy experts explained why they believe any solution short of a full text and data mining exemption in UK copyright law risks producing ineffective AI systems and stalling innovation.

Opt-out regimes may result in poorly trained AI and minimal income for rightsholders

Benjamin White, the founder of copyright reform advocacy group Knowledge Rights 21, argued that regulations on AI training will affect more than just the creative industries, and since copyright serves to stimulate investment by protecting intellectual property, he said the broader economic impact of any restrictions should also be taken into account. "The rules that affect singers affect scientists, and the rules that affect clinicians affect composers as well. Copyrights are sort of a horizontal one-size-fits-all," he said.
He added that the scientific community is "very concerned at the framing of the consultation," noting that it overlooks the potential benefits of knowledge sharing in advancing academic research, which, in turn, offers widespread advantages for society and the economy. White said: "The existing exception doesn't allow universities to share training data or analysis data with other universities within proportionate partnerships, doesn't allow NHS trusts to share training data derived from copyright materials like journal articles or materials scraped off the web."

Bertin Martens, senior fellow at economic think tank Bruegel, added: "I think media industries want to have their cake and eat it at the same time. They're all using these models to increase their own productivity already at this moment, and they benefit from good quality models, and by withholding their data for training, they reduce the quality… so it cuts into their own flesh."

If AI developers signed licensing agreements with only the consenting publishers or rightsholders, the data their models are trained on would be skewed, according to Martens. "Clearly, even big AI companies are not going to sign licenses along that long tail of small publishers," he said. "It's far too costly in terms of transaction costs, it's not feasible, and so we get biased models with partial information."

Julia Willemyns, the co-founder of tech policy research project UK Day One, stated that the opt-out regime is unlikely to be effective in practice, as jurisdictions with less restrictive laws will still allow access to the same content for training. Blocking access to outputs from those jurisdictions would ultimately deprive the UK of the best available models, she warned. She said this "slows down technology diffusion" and has "negative productivity effects."

Furthermore, artists are unlikely to earn meaningful income from AI licensing deals.
"The problem is that every piece of data isn't worth very much to the models; these models operate at scale," said Willemyns. Even if licensing regimes were enforced globally and rightsholders' content could only be used with explicit legal consent, the economic benefit for creators would still be "likely very, very minimal." "So, we're trading off countrywide economic effects for a positive that seems very negligible," she said.

Willemyns added that overcomplicating the UK's copyright approach by, say, requiring separate regimes for AI training on scientific and creative materials could create legal uncertainty. This would overburden courts, deter business adoption, and risk losing out on AI's productivity gains. A text and data mining exemption would ensure simplicity.

ChatGPT's Ghibli controversy underscores blurred lines in AI creativity

The debate over artistic protection versus innovation also surfaced last month during a controversy involving AI-generated art in the style of Studio Ghibli, the Japanese animation house behind "Spirited Away" and "My Neighbor Totoro." Critics argued the trend risked appropriating a distinctive artistic style without permission, and OpenAI eventually introduced a refusal mechanism that activates when users attempt to generate images in the style of a living artist.

The panel disagreed with this approach. Willemyns said that the stock of Studio Ghibli's parent company "clearly upticked" as increased attention drove more people to watch its films. "I feel like the arguments that AI slop is not going to actually take over content were kind of reaffirmed by the instance," she said. Martens agreed, arguing that "if there are many Ghibli lookalikes that are being produced it increases competition around a popular product, and that's something that we should welcome." White added that cartoons with Ghibli's art style are produced by many different Japanese studios.
“They’re all people with big eyes, Western-looking, that’s the style,” he said. “That’s not protected by copyright, what copyright law protects is substantial similarity.” Martens noted that how close a particular AI-generated work can come to an original is “up to the courts,” but this can only be determined on a case-by-case basis. Ultimately, the panel agreed that models should not be able to directly reproduce training content, but that training on publicly available material should remain permissible. “Having flexibility on how the systems are built and how technology learns


CIO Sharon Mandell transforms Juniper Networks for the AI era

Traditionally, Juniper’s business processes were targeted toward very large, complex, and long sales cycles. Mist required the exact opposite: a more bundled package sale to the enterprise, Mandell says. It would not be easy, but the CIO — recognizing business alignment was the key — enlisted both the business and technology sides of the house and got to work. “I locked on to the need for a business transformation, which required a change to many, if not all of our systems,” Mandell says. “Business transformation is a team sport and not always easy. IT can’t make this transformation alone. Leaders and subject matter experts in other impacted functions have to come along with this change in approach as well.” Much input and planning were required to evolve Juniper’s business model and prepare for a services future, she says. The company started by selecting and implementing new products to better integrate teams. For example, she and her IT team modified Juniper’s Salesforce Opportunity Management system, implemented new sales forecasting approaches using Clari, re-engineered the company’s use of Oracle CPQ, and updated its SAP Order Management systems. source


Meta Accused Of Turning Smart Devices Into Useless 'Bricks'

By Dorothy Atkins (April 15, 2025, 5:28 PM EDT) — Consumers hit Meta Platforms Inc. with a proposed class action in California federal court Monday, accusing the social media giant of a deceptive "bait-and-switch" scheme by advertising Meta's Portal video-calling smart devices with wide-ranging features only to later discontinue key software functionality, rendering its hardware "largely obsolete," useless "bricks."… source


What is technical debt? A business risk IT must manage

- A rush to meet deadlines: Time constraints often force teams to take shortcuts, leading to substandard code that must be dealt with later. Your tech leadership team must prioritize tasks effectively and track postponed work to ensure it ultimately gets addressed.
- Unclear project requirements: When goals are vaguely written or not well thought out, teams may produce code that doesn't really align with the underlying needs. Work done early on to define clear requirements can pay off later with cleaner code.
- Poorly written code: One of the main sources of tech debt, sloppy code makes future development and refactoring slow and inefficient; well-structured code, on the other hand, is easier to maintain and integrate with new features.
- Inadequate documentation: If poorly documented, even well-written code will cost your team and their successors wasted time and effort down the line. Establishing strong documentation from the start may take time, but it will ultimately reduce effort going forward.
- Inevitable system evolution: Even well-designed codebases require ongoing maintenance due to evolving business needs, security threats, and outdated technologies. Code can "drift" due to dependencies on other packages or minor tweaks that have unintended consequences. In some ways this is the most insidious cause of tech debt, and it should be guarded against.

How to measure and manage technical debt

One important difference between financial and technical debt: it's much easier to quantify how much money you owe than it is to figure out your exact level of technical debt. There are techniques to help, however; for instance, in a whitepaper, CodeScene suggests a strategy in which you measure your team's unplanned work, which is a good stand-in for time spent cleaning up tech debt they've inherited. Even if you can't hang a number on your debt, you still need to get a handle on it.
Andrew Sharp, research director at Info-Tech Research Group, is a strong advocate for tracking technical debt. He advises IT leaders to document their most critical technical debt, understand its business impact, and establish a clear process for resolving it. Understanding what technical debt you have is the first step to managing it. CIO’s Mary K. Pratt has a deep dive on how tech leaders should approach managing technical debt: You need to prioritize it on your road maps, think about it as a business risk, and be sure that when you do take on new debt, it’s in the planned/prudent quadrant.  source
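The unplanned-work proxy described above can be computed with very little tooling. The sketch below is a minimal, hypothetical illustration (the `Ticket` class and sprint data are invented for the example, not CodeScene's actual methodology or API): classify each work item as planned or unplanned, then track the share of hours going to unplanned work over time.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    title: str
    planned: bool   # roadmap work, vs. unplanned work (hotfixes, rework, bug triage)
    hours: float

def unplanned_work_ratio(tickets: list[Ticket]) -> float:
    """Share of engineering hours spent on unplanned work.

    A persistently high ratio across sprints is a rough proxy for the
    "interest payments" a team is making on its technical debt.
    """
    total = sum(t.hours for t in tickets)
    if total == 0:
        return 0.0
    unplanned = sum(t.hours for t in tickets if not t.planned)
    return unplanned / total

# Hypothetical sprint: 20 of 60 hours went to unplanned work.
sprint = [
    Ticket("New checkout flow", planned=True, hours=40),
    Ticket("Hotfix: payment retries loop forever", planned=False, hours=12),
    Ticket("Rework legacy pricing module", planned=False, hours=8),
]
print(f"Unplanned work this sprint: {unplanned_work_ratio(sprint):.0%}")
```

The absolute number matters less than the trend: if the ratio climbs sprint over sprint, the team is paying increasing interest on inherited debt, which supports the case for scheduling deliberate paydown work.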


Trump Cites U.S. Security To Investigate Critical Minerals Tax

By Hailey Konnath (April 15, 2025, 11:13 PM EDT) — President Donald Trump on Tuesday issued an executive order launching a so-called Section 232 national security tariff investigation into the United States' reliance on imported processed critical minerals, citing his belief that "an overreliance … could jeopardize U.S. defense capabilities."… source


How Latin American Finance Markets May Shift Under Trump

By David Contreiras Tyler (April 10, 2025, 1:34 PM EDT) — Latin American economies are uniquely positioned due to their geographical proximity to the U.S., extensive economic integration, significant immigration patterns and potential for growth… source


The New US Federal AI Policy Demands That Government And Private-Sector Tech Leaders Embrace Responsible And Explainable AI

Today, beneath the headline-grabbing reports of geopolitical and geoeconomic volatility, a significant and consequential transformation is quietly unfolding in the public sector: a shift underscored by the change in US federal AI policy marked by Executive Order 14179 and subsequent Office of Management and Budget memoranda (M-25-21 and M-25-22). This policy decisively pivots from internal, government-driven AI innovation to significant reliance on commercially developed AI, accelerating the subtle yet critical phenomenon of the "algorithmic privatization" of government.

Historically, privatization meant transferring tasks and personnel from public to private hands. Now, as government services and functions are increasingly delegated to non-human agents (commercially maintained and operated algorithms, large language models, and, soon, AI agents and agentic systems), government leaders will have to adapt. The best practices that come from a decade's worth of research on governing privatization, where public services are largely delivered through private-sector contractors, rest on one fundamental assumption: all the actors involved are human. Today, this assumption no longer holds.

The new direction of the US federal government opens a myriad of questions and implications for which we don't currently have answers. For example:

- Who does a commercially provided AI agent optimize for in a principal-agent relationship? The contracting agency or the commercial AI supplier? Or does it optimize for its own evolving model?
- Can you have a network of AI agents from different AI suppliers in the same service area?
- Who is responsible for the governance of the AI: the AI supplier or the contracting government agency?
- What happens when we need to rebid the AI agent supply relationship? Can an AI agent transfer its context and memory to the new incoming supplier?
- Or do we risk losing knowledge, or creating new monopolies and rent extraction that drive up costs we saved through AI-enabled reductions in force?

The Stakes Are High For AI-Driven Government Services

Technology leaders, both within government agencies and at commercial suppliers, must grasp these stakes. Commercial AI-based offerings built on technologies that are less than two years old promise efficiency and innovation but also carry substantial risks of unintended consequences, including maladministration. Consider the many examples of predictive AI solutions gone wrong in the last five years alone: these incidents highlight foreseeable outcomes when oversight lags technological deployment. Rapid AI adoption heightens the risk of errors, misuse, and exploitation.

Government Tech Leaders Must Closely Manage Third-Party AI Risk

For government technology leaders, the imperative is clear: manage these acquisitions for what they are, third-party outsourcing arrangements that must be risk-managed, regularly rebid, and replaced. As you deliver on these new policy expectations, you must:

- Prioritize transparency and accountability in AI procurement. Insist on visibility into algorithmic processes, rejecting opaque "black box" solutions in favor of those with explainability.
- Maintain robust internal expertise to oversee and regulate these commercial algorithms effectively.
- Require all data captured by any AI solution to remain the property of the government.
- Ensure that a mechanism exists for training or transfer of data to any subsequent solution providers contracted to replace an incumbent AI solution.
- Adopt an "align by design" approach to ensure that your AI systems meet their intended objectives while adhering to your values and policies.

Private-Sector Tech Leaders Must Embrace Responsible AI

For suppliers, success demands ethical responsibility beyond technical capability.
Begin by accepting that your AI-enabled privatization isn't a permanent grant of fief or title over public service delivery, so you must:

- Embrace accountability, aligning AI solutions with public values and governance standards.
- Proactively address transparency concerns with open, auditable designs.
- Collaborate closely with agencies to build trust, ensuring meaningful oversight.
- Help the industry drive toward interoperability standards to maintain competition and innovation.

Only responsible leadership on both sides — not merely responsible AI — can mitigate these risks, ensuring that AI genuinely enhances public governance rather than hollowing it out. The cost of failure at this juncture won't be borne by the technology titans, such as AWS, Google, Meta, Microsoft, or xAI, but inevitably by individual taxpayers: the very people the government is intended to serve.

I would like to thank Brandon Purcell and Fred Giron for their help in challenging my thinking and hardening my arguments in what is a difficult time and space in which to address these critical partisan issues. source


MIT Bros. Cite DOJ Memo In Bid To Get $25M Crypto Case Axed

By Elliot Weld (April 15, 2025, 2:42 PM EDT) — Two Massachusetts Institute of Technology-educated brothers accused of stealing $25 million worth of cryptocurrency cited a U.S. Department of Justice memo instructing prosecutors to pull back from novel cases involving digital assets as they urged a New York federal judge to dismiss the charges… source
