Trump Threatens an Additional 50% Tariff on China

(AFP, Washington, April 7) After the US announced reciprocal tariffs on Chinese goods, Beijing announced retaliatory tariffs on US goods, and US President Donald Trump threatened today that if Beijing does not withdraw its retaliation plan, the US will impose an additional 50% tariff on products imported from China. Trump earlier announced his reciprocal tariff rates, which add a further 34% tariff on China, taking effect on the 9th. China subsequently announced that it would likewise impose a 34% tariff on US imports, effective 12:00 on the 10th. Trump posted today on his own social media platform Truth Social: "If China does not, by tomorrow, April 8, 2025, withdraw the 34% tariff it imposed on top of its long-term trade abuses, the United States will impose an 'additional' 50% tariff on China, effective April 9." Trump added that Washington will open negotiations with other countries that wish to talk. Earlier, Trump had posted an angry denunciation of China as the "biggest abuser" of tariffs, saying Beijing had ignored his warning against retaliating. source

Trump Threatens an Additional 50% Tariff on China Read More »

CIOs brace for tariff impacts on tech industry and their businesses

US President Donald Trump’s escalating tariff war is expected to upend the IT investment plans of global CIOs and greatly impact all sectors of the IT industry in the coming months, slowing adoption of AI and damaging long-established supply chains, perhaps for good. Based on the current — but fluid — tariff schedule, IDC halved its forecast for projected IT spending growth from 10% to 5% in 2025. The research firm, which now pegs the risk of global recession at 40%, said that growth could be even lower, with China issuing a retaliatory tariff against the US in excess of 30% and European leaders meeting to agree on their responses to the United States. President Trump today threatened an additional 50% tariff on China beginning Wednesday if China did not rescind its retaliatory tariffs. “The wave of new tariffs introduced by the US administration will drive up technology prices, disrupt supply chains, and weaken global IT spending in 2025. Not only will these tariffs have a direct inflationary effect on technology prices in the US, but growing concerns about a broader economic slowdown will lead to weaker investment by businesses and consumers around the world, even prior to any slowdowns appearing in earnings or economic data,” IDC wrote in its report. “This impact will unfold quickly in 2025, despite the strong countervailing force of growing demand for AI and related technologies.” source

CIOs brace for tariff impacts on tech industry and their businesses Read More »

Meta Wins Bid To Transfer Del. MDL Coverage Fight To Calif.

By Dorothy Atkins (April 4, 2025, 6:26 PM EDT) — The Judicial Panel on Multidistrict Litigation sent a Delaware insurance-coverage dispute between Hartford, Chubb Group entities and Meta to California where underlying personal-injury litigation is centralized, finding that although the parties accuse each other of forum shopping, “we are not inclined to finely parse which is the guiltier party.”… source

Meta Wins Bid To Transfer Del. MDL Coverage Fight To Calif. Read More »

Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue

Weaponized large language models (LLMs) fine-tuned with offensive tradecraft are reshaping cyberattacks, forcing CISOs to rewrite their playbooks. They’ve proven capable of automating reconnaissance, impersonating identities and evading real-time detection, accelerating large-scale social engineering attacks. Models, including FraudGPT, GhostGPT and DarkGPT, retail for as little as $75 a month and are purpose-built for attack strategies such as phishing, exploit generation, code obfuscation, vulnerability scanning and credit card validation. Cybercrime gangs, syndicates and nation-states see revenue opportunities in providing platforms, kits and leasing access to weaponized LLMs today. These LLMs are being packaged much like legitimate businesses package and sell SaaS apps. Leasing a weaponized LLM often includes access to dashboards, APIs, regular updates and, for some, customer support. VentureBeat continues to track the progression of weaponized LLMs closely. It’s becoming evident that the lines are blurring between developer platforms and cybercrime kits as weaponized LLMs’ sophistication continues to accelerate. With lease or rental prices plummeting, more attackers are experimenting with platforms and kits, leading to a new era of AI-driven threats. Legitimate LLMs in the cross-hairs The spread of weaponized LLMs has progressed so quickly that legitimate LLMs are at risk of being compromised and integrated into cybercriminal tool chains. The bottom line is that legitimate LLMs and models are now in the blast radius of any attack. The more fine-tuned a given LLM is, the greater the probability it can be directed to produce harmful outputs. Cisco’s State of AI Security report finds that fine-tuned LLMs are 22 times more likely to produce harmful outputs than base models.
Fine-tuning models is essential for ensuring their contextual relevance. The trouble is that fine-tuning also weakens guardrails and opens the door to jailbreaks, prompt injections and model inversion. Cisco’s study shows that the more production-ready a model becomes, the more exposed it is to vulnerabilities that must be considered in an attack’s blast radius. The core tasks teams rely on to fine-tune LLMs, including continuous fine-tuning, third-party integration, coding and testing, and agentic orchestration, create new opportunities for attackers to compromise LLMs. Once inside an LLM, attackers work fast to poison data, attempt to hijack infrastructure, modify and misdirect agent behavior and extract training data at scale. Cisco’s study infers that without independent security layers, the models teams work so diligently to fine-tune aren’t just at risk; they’re quickly becoming liabilities. From an attacker’s perspective, they’re assets ready to be infiltrated and turned. Fine-tuning LLMs dismantles safety controls at scale A key part of Cisco’s security team’s research centered on testing multiple fine-tuned models, including Llama-2-7B and domain-specialized Microsoft Adapt LLMs. These models were tested across a wide variety of domains, including healthcare, finance and law. One of the most valuable takeaways from Cisco’s study of AI security is that fine-tuning destabilizes alignment, even when models are trained on clean datasets. Alignment breakdown was most severe in the biomedical and legal domains, two industries known for being among the most stringent regarding compliance, legal transparency and patient safety. While the intent behind fine-tuning is improved task performance, the side effect is systemic degradation of built-in safety controls. Jailbreak attempts that routinely failed against foundation models succeeded at dramatically higher rates against fine-tuned variants, especially in sensitive domains governed by strict compliance frameworks.
The results are sobering. Jailbreak success rates tripled and malicious output generation soared by 2,200% compared to foundation models. Figure 1 shows just how stark that shift is. Fine-tuning boosts a model’s utility, but at the cost of a substantially broader attack surface. TAP achieves up to 98% jailbreak success, outperforming other methods across open- and closed-source LLMs. Source: Cisco State of AI Security 2025, p. 16. Malicious LLMs are a $75 commodity Cisco Talos is actively tracking the rise of black-market LLMs and provides insights into its research in the report. Talos found that GhostGPT, DarkGPT and FraudGPT are sold on Telegram and the dark web for as little as $75/month. These tools are plug-and-play for phishing, exploit development, credit card validation and obfuscation. The DarkGPT underground dashboard offers “uncensored intelligence” and subscription-based access for as little as 0.0098 BTC, framing malicious LLMs as consumer-grade SaaS. Source: Cisco State of AI Security 2025, p. 9. Unlike mainstream models with built-in safety features, these LLMs are pre-configured for offensive operations and offer APIs, updates and dashboards that are indistinguishable from commercial SaaS products. $60 dataset poisoning threatens AI supply chains “For just $60, attackers can poison the foundation of AI models—no zero-day required,” write Cisco researchers. That’s the takeaway from Cisco’s joint research with Google, ETH Zurich and Nvidia, which shows how easily adversaries can inject malicious data into the world’s most widely used open-source training sets. By exploiting expired domains or timing Wikipedia edits during dataset archiving, attackers can poison as little as 0.01% of datasets like LAION-400M or COYO-700M and still influence downstream LLMs in meaningful ways. The two methods mentioned in the study, split-view poisoning and frontrunning attacks, are designed to leverage the fragile trust model of web-crawled data.
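The standard mitigation for split-view poisoning is to pin each crawled sample to a cryptographic hash recorded when the dataset index was published: a repurchased expired domain can change what a URL serves, but it cannot forge the hash. The sketch below is illustrative only (the index entry and URL are hypothetical, not drawn from any real dataset), but the verification step is the core of the defense:

```python
import hashlib

def verify_sample(content: bytes, expected_sha256: str) -> bool:
    """Reject any downloaded sample whose bytes no longer match the
    hash pinned in the dataset index at publication time."""
    return hashlib.sha256(content).hexdigest() == expected_sha256

# Hypothetical index entry, for illustration: URL -> pinned hash.
index = {
    "http://example.com/img1.jpg": hashlib.sha256(b"original bytes").hexdigest()
}

# A sample served by a repurchased domain fails verification.
ok = verify_sample(b"original bytes", index["http://example.com/img1.jpg"])
tampered = verify_sample(b"attacker bytes", index["http://example.com/img1.jpg"])
```

Datasets that ship without per-sample hashes have no equivalent check, which is why web-crawled indexes like the ones named above are exposed.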
With most enterprise LLMs built on open data, these attacks scale quietly and persist deep into inference pipelines. Decomposition attacks quietly extract copyrighted and regulated content One of the most startling discoveries Cisco researchers demonstrated is that LLMs can be manipulated to leak sensitive training data without ever triggering guardrails. Cisco researchers used a method called decomposition prompting to reconstruct over 20% of select New York Times and Wall Street Journal articles. Their attack strategy broke down prompts into sub-queries that guardrails classified as safe, then reassembled the outputs to recreate paywalled or copyrighted content. Evading guardrails to access proprietary datasets or licensed content is an attack vector every enterprise is scrambling to defend against today. For those that have LLMs trained on proprietary datasets or licensed content, decomposition attacks can be particularly devastating. Cisco explains that the breach isn’t happening at the input level; it’s emerging from the models’ outputs.
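The structural weakness decomposition prompting exploits can be shown with a toy sketch (the filter string and query wording here are hypothetical, not Cisco's actual test prompts): a guardrail that screens each prompt in isolation passes every sub-query, even though the reassembled answers would reconstruct the blocked text.

```python
def guardrail_ok(prompt: str) -> bool:
    """Toy per-prompt filter: blocks only requests that ask for the
    restricted text outright. Real guardrails are far richer, but they
    still tend to classify one prompt at a time."""
    return "reproduce the full article" not in prompt.lower()

def decompose(n_sentences: int) -> list[str]:
    """Split one blocked request into innocuous-looking sub-queries.
    Each passes the filter individually; an attacker reassembles the
    answers client-side to recover the whole text."""
    return [f"Quote only sentence {i} of the article we discussed."
            for i in range(1, n_sentences + 1)]

blocked = "Please reproduce the full article."
sub_queries = decompose(5)
per_query_pass = all(guardrail_ok(q) for q in sub_queries)
```

This is why Cisco locates the breach at the output level: a defense that never inspects the accumulated outputs of a session cannot see the reconstruction happening.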

Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue Read More »

OpenAI and Google Reject UK Government’s AI Copyright Proposal

Image: pichetw/Envato Elements Google and OpenAI have rejected the U.K. government’s proposal aimed at balancing the use of online content for AI training with protecting artists’ rights to consent and compensation. The companies suggest that a broad exception for text and data mining (TDM) would be more beneficial for all stakeholders. The government’s proposal, published in December, outlined a system that permits AI developers to use creators’ online content to train their models unless rights holders explicitly opt out. It also mandates transparency from AI developers on which creative materials they use and how these are sourced. Tech giants favor broad TDM exception over artist protections In its response to the subsequent consultation, OpenAI said opt-out models face “significant implementation challenges.” OpenAI pointed to the unclear standards in the EU, which mean “AI developers struggle to identify which works can be accessed and which are off-limits.” The ChatGPT maker said any transparency obligations must not require the disclosure of more sensitive information than is required in other jurisdictions, or AI companies may be less inclined to operate in the U.K. OpenAI also supports the proposal of a TDM exception that would allow copyrighted material to be used to train commercial models without the rights holder’s permission. The company claims it will “drive AI innovation and investment in the UK, and could be designed to balance the needs of AI development with the mitigation of concrete harms to copyright owners.” SEE: Google, Meta Criticise U.K. and E.U. AI Regulations Google wants the TDM exception too, as it lays out in its response; however, it wants it for both commercial and non-commercial uses. The company has expressed this desire multiple times before, but plans to allow it for commercial purposes were abandoned in February 2023 after being widely criticised by creative industries.
The Gemini creator clarified that it supports the opt-out model for creators but that it does not “translate to remuneration rights” if their content is somehow used in training data. The government’s proposal would allow rights holders to negotiate their own licensing agreements with AI companies if they chose to do so. Google also described the transparency requirements as “excessive,” saying they could “hinder AI development and impact the U.K.’s competitiveness in this space.” Artists push back Artists have expressed outrage over the U.K.’s decision to revise copyright laws in favour of AI, placing the onus on them to opt out of AI training rather than requiring the AI company to seek consent by default. The likes of the Independent Society of Musicians and the Publishers Association argued this would further erode their ability to control and profit from their creations. Last month, more than 400 artists, including Paul McCartney, Ben Stiller, and Cate Blanchett, sent a letter urging action against AI companies for allegedly exploiting copyrighted works without permission. source

OpenAI and Google Reject UK Government’s AI Copyright Proposal Read More »

Edge AI for robots, smart devices not far off

For companies like Rockwell, this evolution represents an opportunity to integrate edge AI capabilities throughout its product portfolios. The business outcomes from properly managed edge computing are substantial, including affordable access to data, faster software deployments, future-ready analytic platforms, improved security posture, better scaling of digital transformation initiatives, and reduced TCO. The Edge AI Foundation says CIOs and enterprises want automation and smart devices at the edge. “Edge AI is all about running AI workloads where the data is created, and the gravitational pull toward the edge means lower cost, lower power, more impact, typically, and that can also mean enhanced privacy, latency, flexibility, and clearing,” says Pete Bernard, the nonprofit’s CEO, noting that CIOs are in charge of figuring out the information strategy. “You want to move your compute as close as possible to where the data is created, avoid ingress and egress fees to clouds as well as OpEx costs, and have more control over your processing in general.” As platforms and technologies continue to mature, we can expect AI to become increasingly embedded in physical systems across industrial environments. source

Edge AI for robots, smart devices not far off Read More »

This UNA smartwatch can be taken apart like LEGO and repaired at home

Consumer tech devices, including smartwatches, have deplorably short lives. Most are tossed aside when the screen cracks, the battery dies, or the software falls behind — adding to the world’s whopping great pile of e-waste. Scottish startup Una aims to upend this take-make-waste cycle. The company’s sports smartwatch is built to be repaired. Users can easily swap, replace, and upgrade individual components like the screen, battery, and health sensors, extending the device’s lifespan. “Customers are tired of replacing expensive tech every few years,” said Lewis Allison, Una’s founder. “We’re showing the industry there’s a better way.” The Una Watch can be disassembled and reassembled like LEGO. Credit: UNA Una had a blockbuster launch on Kickstarter last week, signalling early demand for its repairable, upgradable smartwatch. The startup raised over £200,000 in just 48 hours after its launch on the crowdfunding platform. That’s more than 20 times its initial fundraising goal of £10,000. Over 3,000 people have pre-ordered Una’s smartwatch. The first deliveries are due to begin in August 2025 to customers in the EU, UK, Canada, and the US. Early backers can secure one of the watches for £210 ($275) — £60 ($75) off the retail price of £270 ($350). High-tech, open-source While sustainability is at its core, Una’s watch doesn’t compromise on high-tech features. The smartwatch uses dual-frequency GPS, improving the accuracy, reliability, and robustness of location data. The device also packs a bunch of sensors. These include a barometric altimeter for elevation changes, an accelerometer to track movement, and a magnetometer for orientation. It also measures heart rate and blood oxygen levels.
Powered by an ultra-efficient Cortex-M33 chip, the smartwatch offers up to 10 days of battery life. It charges via a regular USB-C cable. Una is targeting outdoor and sports users for activities like running, hiking, and cycling. Credit: UNA Una runs on FreeRTOS, an open-source operating system for microelectronics. The company also offers add-on hardware and software “kits” that allow users to build custom apps, create new hardware modules, and even write their own firmware. Una departs from proprietary, closed-source devices like the Apple Watch and Garmin, which dominate the global smartwatch market, worth $33bn last year. The Edinburgh-based startup is one of a growing number of tech companies developing products that customers can fix and upgrade themselves. Other examples include Fairphone, which makes smartphones that can be repaired at home using just a screwdriver and a video manual, and Framework, which builds modular laptops. Una’s Kickstarter success follows a £300,000 investment from SFC Capital in March. The company also won £100,000 in the Scottish EDGE startup competition last year. source

This UNA smartwatch can be taken apart like LEGO and repaired at home Read More »

USAA Wants Full Fed. Circ. To Hear PNC's Patent Board Wins

By Andrew Karpan (April 7, 2025, 9:20 PM EDT) — A San Antonio-based bank that lost two of its patents covering technology used to deposit checks through smartphones — including one tied to a $218 million jury verdict against PNC Bank — is arguing that a Federal Circuit panel has allowed the patent board “to escape its obligation to explain itself.”… source

USAA Wants Full Fed. Circ. To Hear PNC's Patent Board Wins Read More »

Meta Unveils Llama 4 AI Series Featuring New Expert-Based Architecture

Image: Meta Meta unveiled on April 5 its new AI model series: Llama 4, which includes Llama 4 Maverick and Llama 4 Scout, tailored for conversation and processing large files, respectively, along with an unreleased “teacher” model called Llama 4 Behemoth. Llama 4 is Meta’s first series to adopt a “mixture of experts (MoE) architecture.” This approach activates only select parts of the neural network, referred to as the “experts,” to handle specific subtasks. Each task is broken down into subtasks, which are routed to the most appropriate experts, improving resource efficiency. What are the specifics about Llama 4 Maverick and Scout? Llama 4 Maverick features 128 experts and 17 billion active parameters, which represent the portion of a model’s knowledge used to process a given input. Meta describes it as the “product workhorse model for general assistant and chat use cases,” specialising in image interpretation and creative writing. Interestingly, Mark Zuckerberg’s company boasts that Maverick offers “a best-in-class performance to cost ratio” when it comes to conversations. Cost has been playing on the minds of AI giants since the surprise release of DeepSeek in January, which took only $5.6 million to train. SEE: Meta’s $800M Offer To Chip Startup Was Rejected — Here’s Why However, AI experts have noticed that the version of Llama 4 Maverick published on LMArena, which ranks major large language models across various tasks, is “optimized for conversationality” and performs differently from the publicly available version. This suggests that Meta submitted an altered version to LMArena that would rank higher on its leaderboard.
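As a hedged sketch of the routing idea behind MoE (the shapes, gating matrix, and toy experts below are illustrative, not Meta's implementation), a mixture-of-experts layer scores the input against a gating function, runs only the top-k experts, and mixes their outputs:

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Route input x to the top_k highest-scoring experts.

    gate_w: (d, n_experts) gating weights; experts: list of callables.
    Real MoE layers (as in Llama 4) do this per token inside
    transformer blocks; this is a single-vector toy version.
    """
    scores = x @ gate_w                    # one gating logit per expert
    top = np.argsort(scores)[-top_k:]      # indices of the top_k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over selected experts only
    # Only the chosen experts run; the rest stay idle, which is how a
    # model can hold many more total parameters than it activates per input.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate_w = rng.normal(size=(d, n_experts))
# Toy experts: independent random linear maps.
experts = [lambda x, m=rng.normal(size=(d, d)): x @ m for _ in range(n_experts)]
y = moe_forward(rng.normal(size=d), gate_w, experts)
```

The "active parameters" figures quoted for Maverick and Scout count only the parameters in the experts actually selected for a given input, which is why they are far smaller than the models' total sizes.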
Llama 4 Scout also has 17 billion active parameters and just 16 experts, but Meta says it is the “best multimodal model in the world in its class.” It has an unusually large context window of 10 million tokens, which represents the amount of information it can process in a prompt, so it performs well when summarising large documents and in sequential reasoning. Meta says that both Scout and Maverick are its “best yet” due to being distilled from Llama 4 Behemoth, which has a whopping 288 billion active parameters and 16 experts. While it already ranks highly on LMArena, it is still being trained and has not been released. According to The Information, the Llama 4 announcement was delayed at least twice due to the models underperforming in technical benchmarks and conversationality. How can you access Llama 4 Maverick and Scout? Scout and Maverick can be downloaded on Llama.com and Hugging Face, or used through the Meta AI chatbots in WhatsApp, Messenger, and Instagram in 40 countries. Multimodal features can only be used in the U.S. and in English, currently. Some partners have already announced integrations; developers can build and deploy AI applications with the Llama 4 models in Microsoft’s Azure AI Foundry and Azure Databricks. Llama 4 is apolitical Meta stated it has worked specifically to “remove bias” from the Llama 4 models. The refusal rate for questions on “debated political and social topics” is over 5% lower than that of Llama 3.3 and, among the questions it does decline, its responses are described as “dramatically more balanced.” U.S. President Donald Trump’s team has voiced skepticism about the neutrality of AI models, with his AI and crypto czar David Sacks suggesting that OpenAI’s ChatGPT is “programmed to be woke” on a podcast. AI experts say that bias ultimately stems from training data and can lead to political leanings in any direction, not just the left.
Nevertheless, Zuckerberg’s firm has made a number of recent moves that suggest it wants to stay on the good side of the U.S. administration. Republican strategist Joel Kaplan was hired as Meta’s policy lead shortly after Trump assumed office; he sees social media regulation as a direct challenge to free speech. In January, Meta revealed the company was discontinuing its third-party fact-checking program and relocating its content moderation teams from California to Texas to “help remove the concern that biased employees are overly censoring content.” Meta has also eliminated its diversity, equity, and inclusion initiatives after Trump criticised such programs. Furthermore, Meta said the Llama 4 models respond with a “strong political lean” on “contentious” topics at a similar rate to Grok, the chatbot produced by xAI, a company owned by current White House adviser Elon Musk. Llama 4 cannot be used in the E.U. According to the Llama 4 acceptable use policy, individuals “domiciled” or companies with a “principal place of business” in the European Union cannot use or distribute the models. Those individuals or companies can, however, use the Llama 4 models if they are incorporated into a product or service they have access to in the region. This is likely the result of Meta’s issues with E.U. legislation, particularly when it comes to AI. In June 2024, Meta delayed the training of its large language models on public content shared on Facebook and Instagram after E.U. regulators suggested it might need explicit consent from content owners. Meta AI has still not been released within the bloc. SEE: Meta Offers Less Personalised Ads for EU Users Meta signed an open letter urging European regulators to address “inconsistent regulatory decision-making” and unpredictable compliance demands last September. Then, in February, Meta declared it was prepared to escalate its concerns over what the company sees as unfair E.U. regulations directly to Trump.
There are other restrictions when it comes to Llama 4 usage, as commercial entities with more than 700 million monthly active users must request permission from Meta before using its models. The Open Source Initiative has said that such a restriction takes the AI out of the category of “open source,” despite Meta claiming otherwise. source

Meta Unveils Llama 4 AI Series Featuring New Expert-Based Architecture Read More »

CEOs believe AI can develop better business plans than board members

A lack of insight AI’s ability to augment employees and executives can extend to business plans, but current models can’t think creatively and generate new insights, adds Ahsan Shah, SVP of AI and analytics at Billtrust, a billing software provider. “AI is great at analyzing data and spotting patterns, but real strategic planning needs an understanding of company culture, relationships, market behavior, and competition that AI doesn’t have yet,” he says. “AI doesn’t know your exact business problem.” Human leadership is still essential because of continuously changing market conditions and because AI output often needs to be fine-tuned, Shah adds. Smart companies “will blend AI’s analytical capabilities with human judgment, creativity, and emotional intelligence — rethinking how work gets done with humans and machines each playing to their strengths,” he says. source

CEOs believe AI can develop better business plans than board members Read More »