OpenAI and Google Reject UK Government’s AI Copyright Proposal

Google and OpenAI have rejected the U.K. government’s proposal aimed at balancing the use of online content for AI training with protecting artists’ rights to consent and compensation. The companies argue that a broad exception for text and data mining (TDM) would better serve all stakeholders.

The government’s proposal, published in December, outlined a system that permits AI developers to use creators’ online content to train their models unless rights holders explicitly opt out. It also mandates transparency from AI developers about which creative materials they use and how they are sourced.

Tech giants favor broad TDM exception over artist protections

In its response to the subsequent consultation, OpenAI said opt-out models face “significant implementation challenges,” pointing to the unclear standards in the EU, which mean “AI developers struggle to identify which works can be accessed and which are off-limits.” The ChatGPT maker said any transparency obligations must not require the disclosure of more sensitive information than is required in other jurisdictions, or AI companies may be less inclined to operate in the U.K.

OpenAI also supports the proposal of a TDM exception that would allow copyrighted material to be used to train commercial models without the rights holder’s permission. The company claims such an exception would “drive AI innovation and investment in the UK, and could be designed to balance the needs of AI development with the mitigation of concrete harms to copyright owners.”

SEE: Google, Meta Criticise U.K. and E.U. AI Regulations

Google wants the TDM exception too, as it lays out in its response; however, it wants it to cover both commercial and non-commercial uses. The company has expressed this desire multiple times before, but plans to allow commercial TDM were abandoned in February 2023 after being widely criticised by creative industries.

The Gemini creator clarified that it supports the opt-out model for creators, but that opting out does not “translate to remuneration rights” if their content is somehow used in training data. The government’s proposal would allow rights holders to negotiate their own licensing agreements with AI companies if they choose to do so. Google also described the transparency requirements as “excessive,” saying they could “hinder AI development and impact the U.K.’s competitiveness in this space.”

Artists push back

Artists have expressed outrage over the U.K.’s plan to revise copyright laws in favour of AI, placing the onus on them to opt out of AI training rather than requiring AI companies to seek consent by default. The likes of the Independent Society of Musicians and the Publishers Association argued this would further erode their ability to control and profit from their creations. Last month, more than 400 artists, including Paul McCartney, Ben Stiller, and Cate Blanchett, sent a letter urging action against AI companies for allegedly exploiting copyrighted works without permission. source

OpenAI and Google Reject UK Government’s AI Copyright Proposal Read More »

Edge AI for robots, smart devices not far off

For companies like Rockwell, this evolution represents an opportunity to integrate edge AI capabilities throughout their product portfolios. The business outcomes from properly managed edge computing are substantial: affordable access to data, faster software deployments, future-ready analytics platforms, improved security posture, better scaling of digital transformation initiatives, and reduced TCO.

The Edge AI Foundation says CIOs and enterprises want automation and smart devices at the edge. “Edge AI is all about running AI workloads where the data is created, and the gravitational pull toward the edge means lower cost, lower power, more impact, typically, and that can also mean enhanced privacy, latency, flexibility, and clearing,” says Pete Bernard, the nonprofit’s CEO, noting that CIOs are in charge of figuring out the information strategy. “You want to move your compute as close as possible to where the data is created, avoid ingress and egress fees to clouds as well as OpEx costs, and have more control over your processing in general.”

As platforms and technologies continue to mature, we can expect AI to become increasingly embedded in physical systems across industrial environments. source

Edge AI for robots, smart devices not far off Read More »

This UNA smartwatch can be taken apart like LEGO and repaired at home

Consumer tech devices, including smartwatches, have deplorably short lives. Most are tossed aside when the screen cracks, the battery dies, or the software falls behind — adding to the world’s whopping great pile of e-waste.

Scottish startup Una aims to upend this take-make-waste cycle. The company’s sports smartwatch is built to be repaired: users can easily swap, replace, and upgrade individual components like the screen, battery, and health sensors, extending the device’s lifespan. “Customers are tired of replacing expensive tech every few years,” said Lewis Allison, Una’s founder. “We’re showing the industry there’s a better way.”

[Image: The Una Watch can be disassembled and reassembled like LEGO. Credit: UNA]

Una had a blockbuster launch on Kickstarter last week, signalling early demand for its repairable, upgradable smartwatch. The startup raised over £200,000 in just 48 hours after its launch on the crowdfunding platform. That’s more than 20 times its initial fundraising goal of £10,000.

Over 3,000 people have pre-ordered Una’s smartwatch. The first deliveries are due to begin in August 2025 for customers in the EU, UK, Canada, and the US. Early backers can secure one of the watches for £210 ($275), which is £60 ($75) off the retail price of £270 ($350).

High-tech, open-source

While sustainability is at its core, Una’s watch doesn’t compromise on high-tech features. The smartwatch uses dual-frequency GPS, improving the accuracy, reliability, and robustness of location data. The device also packs a bunch of sensors, including a barometric altimeter for elevation changes, an accelerometer to track movement, and a magnetometer for orientation. It also measures heart rate and blood oxygen levels. Powered by an ultra-efficient Cortex-M33 chip, the smartwatch offers up to 10 days of battery life and charges via a regular USB-C cable.

[Image: Una is targeting outdoor and sports users for activities like running, hiking, and cycling. Credit: UNA]

Una runs on FreeRTOS, an open-source operating system for microcontrollers. The company also offers add-on hardware and software “kits” that allow users to build custom apps, create new hardware modules, and even write their own firmware. Una departs from proprietary, closed-source devices like the Apple Watch and Garmin, which dominate the global smartwatch market, worth $33bn last year.

The Edinburgh-based startup is one of a growing number of tech companies developing products that customers can fix and upgrade themselves. Other examples include Fairphone, which makes smartphones that can be repaired at home using just a screwdriver and a video manual, and Framework, which builds modular laptops. Una’s Kickstarter success follows a £300,000 investment from SFC Capital in March. The company also won £100,000 in the Scottish EDGE startup competition last year. source

This UNA smartwatch can be taken apart like LEGO and repaired at home Read More »

USAA Wants Full Fed. Circ. To Hear PNC's Patent Board Wins

By Andrew Karpan (April 7, 2025, 9:20 PM EDT) — A San Antonio-based bank that lost two of its patents covering technology used to deposit checks through smartphones — including one tied to a $218 million jury verdict against PNC Bank — is arguing that a Federal Circuit panel has allowed the patent board “to escape its obligation to explain itself.”… source

USAA Wants Full Fed. Circ. To Hear PNC's Patent Board Wins Read More »

Meta Unveils Llama 4 AI Series Featuring New Expert-Based Architecture

Meta unveiled its new AI model series, Llama 4, on April 5. The series includes Llama 4 Maverick and Llama 4 Scout, tailored for conversation and processing large files, respectively, along with an unreleased “teacher” model called Llama 4 Behemoth.

Llama 4 is Meta’s first series to adopt a mixture-of-experts (MoE) architecture. This approach activates only select parts of the neural network, referred to as the “experts,” to handle specific subtasks: each task is broken down into subtasks, and each subtask is routed to the most appropriate experts, improving resource efficiency.
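To make the routing idea concrete, here is a toy sketch of top-k expert routing, the general mechanism behind MoE layers. The dimensions, gating network, and expert count below are arbitrary illustrations, not Meta’s actual Llama 4 configuration.

```python
# Toy mixture-of-experts layer: a gating network scores all experts for a
# token, and only the top-k experts actually run. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 16, 8, 2

experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate = rng.standard_normal((d_model, n_experts))  # router weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = x @ gate                        # one routing score per expert
    top = np.argsort(logits)[-top_k:]        # pick the top-k experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                             # softmax over the chosen experts
    # Only the selected experts multiply the token; the rest stay idle.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (8,)
```

Because only top_k of the 16 expert matrices are applied per token, most of the layer’s parameters stay inactive on any given input, which is where MoE models get their efficiency.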
What are the specifics of Llama 4 Maverick and Scout?

Llama 4 Maverick features 128 experts and 17 billion active parameters, which represent the portion of the model’s knowledge used to process a given input. Meta describes it as the “product workhorse model for general assistant and chat use cases,” specialising in image interpretation and creative writing. Notably, Mark Zuckerberg’s company boasts that Maverick offers “a best-in-class performance to cost ratio” for conversations. Cost has been on the minds of AI giants since the surprise release of DeepSeek in January, which reportedly took only $5.6 million to train.

SEE: Meta’s $800M Offer To Chip Startup Was Rejected — Here’s Why

However, AI experts have noticed that the version of Llama 4 Maverick published on LMArena, which ranks major large language models across various tasks, is “optimized for conversationality” and performs differently from the publicly available version. This suggests that Meta submitted an altered version to LMArena that would rank higher on its leaderboard.

Llama 4 Scout also has 17 billion active parameters but just 16 experts; Meta nonetheless calls it the “best multimodal model in the world in its class.” It has an unusually large context window of 10 million tokens, which is the amount of information it can process in a single prompt, so it performs well at summarising large documents and at sequential reasoning.

Meta says that both Scout and Maverick are its “best yet” because they were distilled from Llama 4 Behemoth, which has a whopping 288 billion active parameters and 16 experts. While Behemoth already ranks highly on LMArena, it is still being trained and has not been released. According to The Information, the Llama 4 announcement was delayed at least twice because the models underperformed in technical benchmarks and conversationality.

How can you access Llama 4 Maverick and Scout?

Scout and Maverick can be downloaded from Llama.com and Hugging Face, or used through the Meta AI chatbot in WhatsApp, Messenger, and Instagram in 40 countries. Currently, multimodal features can only be used in the U.S. and in English. Some partners have already announced integrations; developers can build and deploy AI applications with the Llama 4 models in Microsoft’s Azure AI Foundry and Azure Databricks. (A hedged loading sketch appears at the end of this article.)

Llama 4 is apolitical

Meta stated it has worked specifically to “remove bias” from the Llama 4 models. The refusal rate for questions on “debated political and social topics” is over 5% lower than that of Llama 3.3, and among the questions it does decline, its responses are described as “dramatically more balanced.” U.S. President Donald Trump’s team has voiced skepticism about the neutrality of AI models, with his AI and crypto czar David Sacks suggesting on a podcast that OpenAI’s ChatGPT is “programmed to be woke.”

AI experts say that bias ultimately stems from training data and can lead to political leanings in any direction, not just to the left. Nevertheless, Zuckerberg’s firm has made a number of recent moves suggesting it wants to stay on side with the U.S. administration. Republican strategist Joel Kaplan was hired as Meta’s policy lead shortly after Trump assumed office; he sees social media regulation as a direct challenge to free speech. In January, Meta revealed it was discontinuing its third-party fact-checking program and relocating its content moderation teams from California to Texas to “help remove the concern that biased employees are overly censoring content.” Meta has also eliminated its diversity, equity, and inclusion initiatives after Trump criticised such programs.

Furthermore, Meta said the Llama 4 models respond with a “strong political lean” on “contentious” topics at a similar rate to Grok, the chatbot produced by xAI, a company owned by current White House adviser Elon Musk.

Llama 4 cannot be used in the E.U.

According to the Llama 4 acceptable use policy, individuals “domiciled” in the European Union, or companies with a “principal place of business” there, cannot use or distribute the models. They can, however, use the Llama 4 models if these are incorporated into a product or service they have access to in the region.

This is likely the result of Meta’s issues with E.U. legislation, particularly around AI. In June 2024, Meta delayed the training of its large language models on public content shared on Facebook and Instagram after E.U. regulators suggested it might need explicit consent from content owners. Meta AI has still not been released within the bloc.

SEE: Meta Offers Less Personalised Ads for EU Users

Last September, Meta signed an open letter urging European regulators to address “inconsistent regulatory decision-making” and unpredictable compliance demands. Then, in February, Meta declared it was prepared to escalate its concerns over what the company sees as unfair E.U. regulations directly to Trump.

There are other restrictions on Llama 4 usage: commercial entities with more than 700 million monthly active users must request permission from Meta before using the models. The Open Source Initiative has said that such a restriction takes the AI “out of the category of ‘open source,’” despite Meta claiming otherwise. source
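For readers who want to try the models from Hugging Face, as referenced in the access section above, here is a minimal loading sketch using the transformers library. The model id is an assumption based on Meta’s published naming, the checkpoint is gated behind a licence acceptance, and a 17-billion-active-parameter model has substantial hardware requirements.

```python
# Hypothetical sketch: loading a Llama 4 checkpoint via Hugging Face transformers.
# The model id below is assumed -- check the meta-llama organization page for the
# exact name, and accept the model licence there first.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed model id
    device_map="auto",  # shard the weights across available accelerators
)

result = generator("Summarise the key points of this article:", max_new_tokens=128)
print(result[0]["generated_text"])
```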

Meta Unveils Llama 4 AI Series Featuring New Expert-Based Architecture Read More »

CEOs believe AI can develop better business plans than board members

A lack of insight

AI’s ability to augment employees and executives can extend to business plans, but current models can’t think creatively or generate new insights, says Ahsan Shah, SVP of AI and analytics at Billtrust, a billing software provider. “AI is great at analyzing data and spotting patterns, but real strategic planning needs an understanding of company culture, relationships, market behavior, and competition that AI doesn’t have yet,” he says. “AI doesn’t know your exact business problem.”

Human leadership is still essential because market conditions change continuously and because AI output often needs to be fine-tuned, Shah adds. Smart companies “will blend AI’s analytical capabilities with human judgment, creativity, and emotional intelligence — rethinking how work gets done with humans and machines each playing to their strengths,” he says. source

CEOs believe AI can develop better business plans than board members Read More »

Zencoder’s ‘Coffee Mode’ is the future of coding: Hit a button and let AI write your unit tests

Zencoder unveils its next-generation AI coding and unit-testing agents today, positioning the San Francisco-based company as a formidable challenger to established players like GitHub Copilot and newcomers like Cursor. The company, founded by former Wrike CEO Andrew Filev, integrates its AI agents directly into popular development environments, including Visual Studio Code and JetBrains IDEs, alongside deep integrations with JIRA, GitHub, GitLab, Sentry, and more than 20 other development tools.

“We started with the thesis that transformers are powerful computing building blocks, but if you put them in a more agentic environment, you can get much more out of them,” said Filev in an exclusive interview with VentureBeat. “By agentic, I mean two key things: first, giving the AI feedback so it can improve its work, and second, equipping it with tools. Just like human intelligence, AI becomes significantly more capable when it has the right tools at its disposal.”

Why developers won’t need to abandon their favorite IDEs for AI assistance

Several AI coding assistants have emerged in the past year, but Zencoder’s approach distinguishes itself by operating within existing workflows rather than requiring developers to switch platforms. “Our main competitor is Cursor. Cursor is its own development environment versus we deliver the same very powerful agentic capabilities, but within existing development environments,” Filev told VentureBeat. “For some developers, it doesn’t really matter. But for some developers, they either want or have to stick to their existing environments.” This distinction matters particularly for enterprise developers working in Java and C#, languages for which specialized IDEs like JetBrains’ IntelliJ and Rider offer more robust support than generalized environments.

How Zencoder’s AI agents are beating state-of-the-art benchmarks by double-digit margins

The company claims significant performance advantages over competitors, backed by results on standard industry benchmarks. According to Filev, Zencoder’s agents can solve 63% of issues on the SWE-Bench Verified benchmark, placing it among the top three performers despite using a more practical single-trajectory approach rather than running multiple parallel attempts like some research-focused systems. “Our agent is distinctive because we’re focused on building the best pipeline for real-world developer use,” Filev said. “What makes our approach special is that our agent operates on what we call a single track, single trajectory basis. For a single trajectory agent to successfully resolve 63% of these complex issues is remarkably impressive.”

Even more notably, the company reports approximately 30% success on the newer SWE-Bench Multimodal benchmark, which Filev claims is double the previous best result of less than 15%. On OpenAI’s recently introduced SWE-Lancer IC Diamond benchmark, Zencoder reports more than 30% success — over 20% better than OpenAI’s own best result.

The secret sauce: ‘Repo Grokking’ technology that understands your entire codebase

Zencoder’s performance stems from its proprietary “Repo Grokking” technology, which analyzes and interprets large codebases to provide critical context to the AI agents. “All of these agents have distinct capabilities shaped by the language models embedded within them,” Filev explained. “Whether it’s a frontier model or an open source model, the LLM by itself knows nothing about your specific project in the vast majority of scenarios. It can only work with the context that’s provided to it.”

Zencoder’s approach combines multiple techniques beyond simple AI embeddings for semantic search. “It uses traditional full text search, it uses custom re-ranker, it uses LLM, it uses synthetic information. So it does a lot of things to build the best understanding of the customer repositories,” Filev said. This contextual understanding helps the system avoid a common criticism of AI coding assistants — that they introduce more problems than they solve by misunderstanding project structures or dependencies.
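Zencoder has not published Repo Grokking’s internals, but the description above maps onto a well-known hybrid retrieval pattern: score candidate code chunks with both a full-text signal and a semantic signal, merge and re-rank, then hand the top chunks to the LLM as context. The sketch below is an illustrative stand-in under that assumption; every function in it is hypothetical, not Zencoder’s code.

```python
# Hypothetical hybrid code-retrieval sketch in the spirit of "Repo Grokking":
# combine a keyword (full-text) score with a semantic score, then keep the
# top-k chunks as LLM context. Both scorers are toy stand-ins.
from dataclasses import dataclass

@dataclass
class Chunk:
    path: str
    text: str

def keyword_score(query: str, chunk: Chunk) -> float:
    # Traditional full-text signal: fraction of query terms present in the chunk.
    terms = query.lower().split()
    return sum(t in chunk.text.lower() for t in terms) / len(terms)

def semantic_score(query: str, chunk: Chunk) -> float:
    # Stand-in for embedding similarity; a real system would embed query and
    # chunk with a model and compute cosine similarity.
    q_terms = set(query.lower().split())
    return len(q_terms & set(chunk.text.lower().split())) / (len(q_terms) or 1)

def retrieve(query: str, repo: list[Chunk], k: int = 3) -> list[Chunk]:
    # Merge both signals with equal weight, then re-rank and keep the top-k.
    scored = [(0.5 * keyword_score(query, c) + 0.5 * semantic_score(query, c), c)
              for c in repo]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]

repo = [Chunk("auth.py", "def login(user, password): ..."),
        Chunk("billing.py", "def charge(card, amount): ...")]
print([c.path for c in retrieve("fix the login bug", repo)])
```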
‘Coffee Mode’: How developers can finally take breaks while AI writes their unit tests

Perhaps the most attention-grabbing feature is what Zencoder calls “Coffee Mode,” which allows developers to step away while the AI agents work autonomously. “You can literally hit that button and go grab a coffee, and the agent will do that work by itself,” Filev told VentureBeat. “As we like to say in the company, you can watch forever the waterfall, the fire burning, and the agent working in coffee mode.”

The feature can be applied to both writing code and generating unit tests — with the latter proving particularly valuable, since many developers prefer creating new features over writing test coverage. “I’ve not seen a developer who’s like, ‘Oh my God, I want to write a bunch of tests for my code,’” Filev said. “They typically like creating stuff, and test is kind of supporting the creation, rather than the process of creation.”

Zencoder’s launch comes at a critical moment, as developers and companies navigate how to effectively integrate AI coding tools into existing workflows. The industry landscape includes skeptics who point to AI’s limitations in producing production-ready code and enthusiasts who overestimate its capabilities. “There’s a lot of right now, a lot of emotion, pent-up emotion on the AI side of things,” Filev observed. “You see people in both camps, like one of them saying, ‘hey, it’s the best thing since sliced bread, I’m gonna vibe code my next Salesforce.’ And then you have the naysayers that are trying to prove that they’re still the smartest kids on the block… trying to find the scenarios where it breaks.”

Filev advocates a more measured approach, viewing AI coding tools as sophisticated instruments that require proper skill to use effectively. “It is a tool. It is a sophisticated tool, very powerful tool. And so engineers need to build skills around using that. It’s not yet to the point where it’s a replacement for an engineer in at least large, complex enterprise projects.”

The roadmap: Production-ready AI code generation with built-in security checks

Looking ahead, Zencoder plans to continue improving its agents’ performance on benchmarks while expanding support across more programming languages, and to focus on production-ready code generation with built-in testing and security checks. “What you will…

Zencoder’s ‘Coffee Mode’ is the future of coding: Hit a button and let AI write your unit tests Read More »

[MY BrandingHK Market Quick Shot] 2025.04.06: The big U.S. stock drop was predicted at the start of the year; when will it bottom? / The pound has only just begun its decline / Gold, silver, and oil price trends

The post [MY BrandingHK Market Quick Shot] 2025.04.06: The big U.S. stock drop was predicted at the start of the year; when will it bottom? / The pound has only just begun its decline / Gold, silver, and oil price trends appeared first on VeriMedia. source

[MY BrandingHK Market Quick Shot] 2025.04.06: The big U.S. stock drop was predicted at the start of the year; when will it bottom? / The pound has only just begun its decline / Gold, silver, and oil price trends Read More »

How CISOs Can Thrive Amid Economic Volatility

In today’s unpredictable economic climate, chief information security officers (CISOs) face familiar — but intensified — challenges. From US government funding cuts to global geopolitical instability, organizations must operate in an environment that is more volatile than ever. Our latest Forrester report, Security Leaders: How To Thrive Through Volatility, provides actionable insights for CISOs navigating these turbulent times. Here are a few of the key lessons and what they mean for you.

Optimize Costs Without Compromising Security

Prioritize customer-facing security initiatives: Economic volatility doesn’t mean you should slash your security budget indiscriminately. Instead, focus on initiatives that directly affect your customers. According to Forrester’s Q4 Tech Pulse Survey, 2024, two-thirds of IT decision-makers say that customer requirements dictate their security plans. Cutting security spending now could hurt customer retention and harm your revenue, damage that could take years to recover from. Defend your security budget by emphasizing the importance of customer-facing security measures such as DDoS protection and customer identity and access management (CIAM).

Leverage flexible pricing options: Renegotiating contracts with existing vendors can be challenging, but many security vendors now offer “flex” pricing options. These allow you to swap various products and services in and out while maintaining a minimum annual spending commitment. This flexibility can help you adapt your spending to current needs without sacrificing security. CrowdStrike Falcon Flex and Trend Micro’s credit-based licensing are two examples of these flex pricing options.

Embrace Change Management

Be a visible change leader: In rapidly changing times, CISOs must provide stability and clarity. Effective change leaders follow a cadence of activities, including clarifying vision, resolving uncertainty, and celebrating successes. Build a strong foundation by addressing capabilities, culture, career paths, communication, and, where possible, transparency to reduce anxiety.

Foster continuous learning: Cultivate a culture of continuous learning to adapt quickly to new technologies and challenges. Provide time and resources for upskilling, and promote practices that encourage process improvement. In addition, participate in peer discussions and forums with other CISOs and practitioners to gain insight into how other leaders and their teams are handling these issues.

Double Down On Enterprise Risk Management

Address insider threats: Focus on insider risk management, especially if your organization is undergoing reorganizations or layoffs. Protect sensitive intellectual property with strong identity and access management, data security, and insider risk controls.

Manage ecosystem risks: Your organization’s security depends on your partners’ practices. Document requirements, apply appropriate oversight, and link them to resilience. Navigate heightened protectionism and regulatory frameworks to ensure continued operations.

Conclusion

Navigating economic volatility requires flexibility, strategic spending, and a strong focus on risk management. By following the actionable advice in our report, CISOs can help their organizations thrive even in uncertain times. For a deeper dive into these strategies, read our full report, Security Leaders: How To Thrive Through Volatility. source

How CISOs Can Thrive Amid Economic Volatility Read More »

Benchmarks Find ‘DeepSeek-V3-0324 Is More Vulnerable Than Qwen2.5-Max’

With its latest stable release dated January 28, 2025, Qwen2.5-Max is a Mixture-of-Experts (MoE) language model developed by Alibaba. Like other language models, Qwen2.5-Max is capable of generating text, understanding different languages, and performing advanced reasoning. According to recent benchmarks, it is also more secure than DeepSeek-V3-0324.

Using Recon to scan for vulnerabilities

A team of analysts at Protect AI, the company behind Recon, a red-teaming and security vulnerability scanning tool, recently used the platform to compare the security of Qwen2.5-Max against that of DeepSeek-V3. The team’s assessment reads, in part: “We observed that DeepSeek-V3-0324 is more vulnerable than Qwen2.5-Max, with Recon achieving an almost 25% higher attack success rate (ASR).”

While it may be more secure than its competition, Qwen2.5-Max isn’t exactly perfect. According to the tests, the model is most susceptible to prompt injection attacks, which represented almost 48% of all successful attacks against Qwen2.5-Max. Evasion and jailbreak attacks proved less successful, with an approximate ASR of 40% for each.

Exposing vulnerabilities in DeepSeek-V3

Recon uses a comprehensive Attack Library to scan current-generation AI models and identify vulnerabilities across six specific categories:

- Evasion techniques
- System prompt leaks
- Prompt injection attacks
- AI jailbreak attempts
- General safety controls
- Adversarial suffix resistance

In addition to simulated cyberattacks, Recon also assesses an AI model’s resistance to generating potentially harmful or illegal content. For example, during adversarial suffix resistance tests, Recon attempts to manipulate the model into producing such content.

The Protect AI team ran Recon against both Qwen2.5-Max and DeepSeek-V3, with the former posting a lower attack success rate (ASR) across a variety of attacks, including jailbreaks, prompt injection, and evasion techniques. Qwen2.5-Max had a 47% ASR against prompt injection attacks, compared to DeepSeek-V3’s notably higher 77%. Against evasion techniques, Qwen2.5-Max scored 39.4%, while DeepSeek-V3 scored 69.2%. Both models displayed similar results across the other simulated attacks. (A toy example of how an ASR is tallied appears at the end of this article.)

Analyzing DeepSeek-V3’s strengths

Despite its security weaknesses, DeepSeek-V3-0324 still outperforms Qwen2.5-Max on several benchmarks. Unlike ASR, a higher score in these tests indicates better performance.

Benchmark        DeepSeek-V3-0324   Qwen2.5-Max
MMLU-Pro         81.2               75.9
GPQA Diamond     68.4               59.1
MATH-500         94.0               90.2
AIME 2024        59.4               39.6
LiveCodeBench    49.2               39.2

According to these benchmarks, DeepSeek-V3-0324’s strengths include general language understanding (MMLU-Pro), advanced science topics such as biology, physics, and chemistry (GPQA Diamond), mathematics (MATH-500), competition-level mathematics (AIME 2024), and coding (LiveCodeBench). source
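As referenced above, here is a toy example of how an attack success rate is tallied from red-team results. The record format and numbers are invented for illustration; they do not reflect Recon’s actual output.

```python
# Toy ASR tally: ASR per category = successful attacks / attempted attacks.
# The result records below are invented for illustration.
from collections import defaultdict

results = [  # (attack_category, attack_succeeded)
    ("prompt_injection", True), ("prompt_injection", False),
    ("evasion", True), ("evasion", False),
    ("jailbreak", False),
]

totals, hits = defaultdict(int), defaultdict(int)
for category, succeeded in results:
    totals[category] += 1
    hits[category] += succeeded  # True counts as 1

for category in totals:
    asr = 100 * hits[category] / totals[category]
    print(f"{category}: ASR = {asr:.1f}%")
```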

Benchmarks Find ‘DeepSeek-V3-0324 Is More Vulnerable Than Qwen2.5-Max’ Read More »