McAfee’s Total Protection Has Your Back on 5 Devices for Just $19.99

TL;DR: Get two years of McAfee Total Protection for five devices for just $19.99 (reg. $149.99).

In today’s work-anywhere world, protecting your data and devices is no longer optional. That’s why McAfee Total Protection is more than just antivirus software. It’s a full-featured cybersecurity suite designed for professionals, remote workers, and business owners who rely on seamless, secure digital operations across multiple platforms. For just $19.99, you’ll get two years of award-winning protection for up to five devices — whether that’s your work laptop, smartphone, tablet, or home desktop.

This plan includes advanced features like a secure VPN for safe browsing on public Wi-Fi, real-time identity monitoring with alerts if your personal data shows up on the dark web, and a Protection Score that helps you stay ahead of potential vulnerabilities with proactive advice. The AI-powered antivirus engine is constantly learning and adapting to detect new and evolving threats, offering real-time protection against viruses, ransomware, and phishing attacks. This makes McAfee Total Protection particularly valuable for professionals who handle sensitive client data or work in high-risk industries like finance, healthcare, and law.

McAfee’s built-in password manager simplifies your digital life by securely storing your credentials and helping you create strong, unique passwords for each login. Whether you’re logging into client portals, internal systems, or cloud services, your credentials stay protected and accessible. Cross-platform compatibility means you get the same level of robust security whether you’re on a Mac at home, an Android phone on the go, or a Windows laptop at work. And with one centralized dashboard, managing your protection has never been easier.

If you need reliable, multi-device protection with advanced identity monitoring and user-friendly tools, McAfee Total Protection offers one of the best values on the market today — especially while it’s just $19.99 for two years (reg. $149.99). StackSocial prices subject to change. source

McAfee’s Total Protection Has Your Back on 5 Devices for Just $19.99 Read More »

Researchers warn of ‘catastrophic overtraining’ in LLMs

A new academic study challenges a core assumption in developing large language models (LLMs), warning that more pre-training data may not always lead to better models. Researchers from some of the leading computer science institutions in the West—including Carnegie Mellon University, Stanford University, Harvard University and Princeton University—have introduced the concept of “Catastrophic Overtraining.” They show that extended pre-training can actually make language models harder to fine-tune, ultimately degrading their performance. The study, “Overtrained Language Models Are Harder to Fine-Tune,” is available on arXiv and led by Jacob Mitchell Springer. Its co-authors are Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig and Aditi Raghunathan.

The law of diminishing returns

The research focuses on a surprising trend observed in modern LLM development: while models are pre-trained on ever-expanding pools of data—licensed or scraped from the web and represented to the LLM as a series of tokens, or numerical representations of concepts and ideas—increasing the number of pre-training tokens may reduce the model’s effectiveness when it is later fine-tuned for specific tasks. The team conducted a series of empirical evaluations and theoretical analyses to examine the effect of extended pre-training on model adaptability. One of the key findings centers on AI2’s open source OLMo-1B model. The researchers compared two versions of this model: one pre-trained on 2.3 trillion tokens and another on 3 trillion tokens. Despite being trained on roughly 30% more data, the 3-trillion-token model performed worse after instruction tuning. Specifically, it showed over 2% worse performance on several standard language model benchmarks compared to its 2.3-trillion-token counterpart, and in some evaluations the degradation reached 3%. The researchers argue that this decline is not an anomaly but rather a consistent phenomenon they term “Catastrophic Overtraining.”

Understanding sensitivity and forgetting

The paper attributes this degradation to a systematic increase in what the authors call “progressive sensitivity.” As models undergo extended pre-training, their parameters become more sensitive to changes. This increased fragility makes them more vulnerable to degradation during post-training modifications such as instruction tuning, fine-tuning for multimodal tasks, or even simple weight perturbations. The researchers provide evidence that, beyond a certain point in pre-training, any modification—whether structured, like fine-tuning, or unstructured, like adding Gaussian noise—leads to a greater loss of previously learned capabilities. This sensitivity results in “forgetting,” where the model’s original strengths deteriorate as new training data is introduced. The study identifies an “inflection point” in pre-training, after which additional training leads to diminishing and even negative returns for fine-tuning outcomes. For the OLMo-1B model, this threshold emerged around 2.5 trillion tokens.

A wealth of evidence

The team’s analysis spans real-world and controlled experimental settings. They tested the phenomenon across different tasks, including instruction tuning using datasets like Anthropic-HH and TULU, and multimodal fine-tuning using the LLaVA framework. The results consistently showed that models pre-trained beyond certain token budgets underperformed after fine-tuning.
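The weight-perturbation experiments described above are easy to picture in code. Below is a minimal, hypothetical sketch, not the authors’ code, of how one could probe a checkpoint’s sensitivity by adding Gaussian noise to its weights and measuring how much the evaluation loss degrades; the toy model, data, and noise scale are illustrative stand-ins. Under the paper’s framing, a checkpoint pre-trained past the inflection point would show a larger gap at the same noise level.

```python
# Illustrative sketch (not the paper's code): measure how sensitive a model's
# behavior is to small Gaussian perturbations of its weights. "Progressive
# sensitivity" is the observation that this gap grows with longer pre-training;
# here we only show what such a probe could look like on a toy model.
import copy
import torch
import torch.nn as nn

def perturbation_gap(model, loss_fn, data, noise_std=0.01, trials=5):
    """Average loss increase after adding N(0, noise_std^2) noise to the weights."""
    base_loss = loss_fn(model, data).item()
    gaps = []
    for _ in range(trials):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(torch.randn_like(p) * noise_std)
        gaps.append(loss_fn(noisy, data).item() - base_loss)
    return sum(gaps) / len(gaps)

# Toy stand-in for a language-model checkpoint and its evaluation loss.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(256, 32), torch.randint(0, 10, (256,))
loss_fn = lambda m, batch: nn.functional.cross_entropy(m(batch[0]), batch[1])

print(f"perturbation gap: {perturbation_gap(model, loss_fn, (x, y)):.4f}")
```

Comparing this gap across checkpoints taken at different pre-training token counts is, in spirit, the kind of comparison the study reports.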
Furthermore, the researchers constructed a theoretical model using linear networks to better understand why overtraining leads to increased sensitivity. Their analysis confirmed that progressive sensitivity and catastrophic overtraining are mathematically inevitable when pre-training continues indefinitely without proper constraints.

The ultimate takeaway? Model providers and trainers must make trade-offs

The findings challenge the widespread assumption that more pre-training data is always better. Instead, the paper suggests a nuanced trade-off: while longer pre-training improves the base model’s capabilities, it also increases the risk that fine-tuning will degrade those capabilities. In practice, attempts to mitigate this effect—such as adjusting fine-tuning learning rates or adding regularization—may delay the onset of catastrophic overtraining but cannot fully eliminate it without sacrificing downstream performance. For enterprises that plan to fine-tune an open-source model to improve business workflows and outcomes, the lesson is that a smaller model pre-trained on less data may ultimately yield a more reliable production model after fine-tuning. The authors acknowledge that further research is needed to understand the factors influencing when and how catastrophic overtraining occurs. Open questions include whether the pre-training optimizer, training objective, or data distribution can affect the severity of the phenomenon.

Implications for future LLM and AI model development

The study has significant implications for how organizations and researchers design and train large language models. As the field continues to pursue larger and more capable models, this research highlights the importance of balancing pre-training duration with post-training adaptability. The findings may also influence how model developers think about resource allocation: rather than focusing exclusively on increasing pre-training budgets, developers may need to reassess strategies to optimize downstream performance without incurring the negative effects of catastrophic overtraining. source

Researchers warn of ‘catastrophic overtraining’ in LLMs Read More »

How Can AI Be Used Safely? Expert Researchers Weigh In

An important focus of AI research is improving an AI system’s factualness and trustworthiness. Even though significant progress has been made in these areas, some AI experts are pessimistic that these issues will be solved in the near future. That is one of the main findings of a new report by the Association for the Advancement of Artificial Intelligence (AAAI), which includes insights from experts at various academic institutions (e.g., MIT, Harvard, and the University of Oxford) and tech giants (e.g., Microsoft and IBM). The goal of the study was to define the current trends and research challenges in making AI more capable and reliable so the technology can be used safely, wrote AAAI President Francesca Rossi. The report covers 17 topics related to AI research, compiled by a group of 24 “very diverse” and experienced AI researchers, along with 475 respondents from the AAAI community, she noted. Here are highlights from the report.

Improving an AI system’s trustworthiness and factuality

An AI system is considered factual if it doesn’t output false statements, and its trustworthiness can be improved by including criteria “such as human understandability, robustness, and the incorporation of human values,” the report’s authors stated. Other approaches to consider are fine-tuning and verifying machine outputs, and replacing complex models with simple, understandable ones.

Making AI more ethical and safer

AI is becoming more popular, and this requires greater responsibility for AI systems, according to the report. For example, emerging threats such as AI-driven cybercrime and autonomous weapons require immediate attention, along with the ethical implications of new AI techniques. Respondents ranked the most pressing ethical challenges as:

Misinformation (75%)
Privacy (58.75%)
Responsibility (49.38%)

This indicates a need for more transparency, accountability, and explainability in AI systems, and for ethical and safety concerns to be addressed through interdisciplinary collaboration, continuous oversight, and clearer responsibility. Respondents also cited political and structural barriers, “with concerns that meaningful progress may be hindered by governance and ideological divides.”

Evaluating AI using various factors

The researchers make the case that AI systems introduce “unique evaluation challenges.” Current evaluation approaches focus on benchmark testing, but they said more attention needs to be paid to usability, transparency, and adherence to ethical guidelines.

Implementing AI agents introduces challenges

AI agents have evolved from autonomous problem-solvers to AI frameworks that enhance adaptability, scalability, and cooperation. Yet the researchers found that agentic AI, while providing flexible decision making, has introduced challenges around efficiency and complexity. The report’s authors state that integrating AI with generative models “requires balancing adaptability, transparency, and computational feasibility in multi-agent environments.”

More aspects of AI research

Other AI research topics covered in the AAAI report include sustainability, artificial general intelligence, social good, hardware, and geopolitical aspects. source

How Can AI Be Used Safely? Expert Researchers Weigh In Read More »

House GOP Infighting Delays Push To Repeal 2 CFPB Rules

By Jon Hill (April 1, 2025, 5:04 PM EDT) — Plans for the U.S. House to vote on overturning two Biden-era Consumer Financial Protection Bureau rules were scuttled Tuesday by an unrelated fight among Republicans about whether to allow proxy voting for lawmakers with infant children…. source

House GOP Infighting Delays Push To Repeal 2 CFPB Rules Read More »

Adobe Beats Class Action Over Alleged Competitive Threats

By Katryna Perera (March 28, 2025, 8:48 PM EDT) — A New York federal judge has tossed a securities class action against Adobe Inc. alleging that the software company and its top brass misled shareholders about the competitive threat Adobe’s products faced from a user experience design tool developed by another company, saying the investors have failed to plead any actionable misstatements or knowledge of wrongdoing…. source

Adobe Beats Class Action Over Alleged Competitive Threats Read More »

OpenAI Secures $40B in Historic Funding Round — But There’s a $10B Catch

OpenAI, the company behind ChatGPT, has just completed the largest private tech funding round in history — raising $40 billion at a $300 billion valuation. The deal, led by Japan’s SoftBank with backing from Microsoft and other investors, solidifies OpenAI as one of the world’s most valuable private companies, trailing only SpaceX and rivaling TikTok’s parent company, ByteDance.

A historic deal in tech funding

The scale of this funding round is unprecedented. Before OpenAI, the largest private tech deal was Ant Group’s $14 billion raise in 2018. This new funding more than doubles that record and highlights the surging investor enthusiasm for artificial intelligence. SoftBank is leading the charge with a $30 billion commitment, with the rest coming from Microsoft, Coatue, Altimeter, and Thrive. OpenAI has positioned itself as the leader in generative AI, with its flagship product, ChatGPT, now boasting 500 million weekly users, up from 400 million just last month. That rapid adoption has made the company a prime target for investors looking to stake a claim in the AI boom.

Where will the money go?

OpenAI says the fresh capital will help push the boundaries of AI research, expand its computing infrastructure, and accelerate the development of artificial general intelligence (AGI). A significant portion — around $18 billion — is reportedly earmarked for OpenAI’s Stargate project, a $500 billion initiative with SoftBank and Oracle to build next-generation AI data centers. However, there’s a catch: the deal includes a clause requiring OpenAI to transition into a fully for-profit company by the end of 2025. If it fails to do so, the funding could be slashed by as much as $10 billion. This restructuring plan has drawn scrutiny, with concerns about the company’s unusual nonprofit-to-capped-profit hybrid model and potential regulatory challenges.

Why are investors betting so big?

This massive funding round comes amid a surge in AI adoption across industries. Since ChatGPT launched in late 2022, OpenAI has become a household name, influencing how businesses and individuals use AI. CEO Sam Altman reflected on its rapid rise in an X post: “The [ChatGPT] launch 26 months ago was one of the craziest viral moments I’d ever seen, and we added one million users in five days. We added one million users in the last hour.” The company’s revenue is also skyrocketing. OpenAI expects to generate $12.7 billion in 2025, a massive leap from $3.7 billion last year. Despite its rapid growth, OpenAI is still a cash-hungry operation. Insiders say profitability is still a long way off — estimates suggest OpenAI may not be cash-flow positive until 2029, when it expects to generate $125 billion in revenue. Altman also hinted at the company’s next big move — an open-weight language model with advanced reasoning capabilities set to launch in the coming months.

The race to AI supremacy

Despite its dominance in the AI space, OpenAI faces stiff competition from rivals like Google DeepMind, Amazon, Perplexity, and Anthropic. The AI industry is expected to generate more than $1 trillion in revenue within the next decade, and every major tech player is racing to lead the charge. With this record-breaking investment, OpenAI is well-positioned to shape the future of AI. But whether it can navigate regulatory hurdles, restructuring challenges, and increasing competition remains to be seen. source

OpenAI Secures $40B in Historic Funding Round — But There’s a $10B Catch Read More »

Davis Polk, Skadden Lead Stablecoin Issuer Circle's IPO Filing

By Tom Zanki (April 2, 2025, 3:42 PM EDT) — Venture-backed stablecoin issuer Circle Internet Group Inc. is moving forward with its long-awaited initial public offering amid expectations of favorable regulatory policies for crypto firms, represented by Davis Polk & Wardwell LLP and underwriters’ counsel Skadden Arps Slate Meagher & Flom LLP…. source

Davis Polk, Skadden Lead Stablecoin Issuer Circle's IPO Filing Read More »

The tool integration problem that’s holding back enterprise AI (and how CoTools solves it)

Researchers from Soochow University in China have introduced Chain-of-Tools (CoTools), a novel framework designed to enhance how large language models (LLMs) use external tools. CoTools aims to provide a more efficient and flexible approach than existing methods, allowing LLMs to leverage vast toolsets directly within their reasoning process, including tools they haven’t explicitly been trained on. For enterprises looking to build sophisticated AI agents, this capability could unlock more powerful and adaptable applications without the typical drawbacks of current tool integration techniques.

While modern LLMs excel at text generation, understanding and even complex reasoning, for many tasks they need to interact with external resources and tools such as databases or applications. Equipping LLMs with external tools—essentially APIs or functions they can call—is crucial for extending their capabilities into practical, real-world applications. However, current methods for enabling tool use face significant trade-offs.

One common approach involves fine-tuning the LLM on examples of tool usage. While this can make the model proficient at calling the specific tools seen during training, it often restricts the model to only those tools. Furthermore, the fine-tuning process itself can sometimes negatively impact the LLM’s general reasoning abilities, such as Chain-of-Thought (CoT) reasoning, potentially diminishing the core strengths of the foundation model.

The alternative approach relies on in-context learning (ICL), where the LLM is provided with descriptions of available tools and examples of how to use them directly within the prompt. This method offers flexibility, allowing the model to potentially use tools it hasn’t seen before. However, constructing these complex prompts can be cumbersome, and the model’s efficiency decreases significantly as the number of available tools grows, making it less practical for scenarios with large, dynamic toolsets. As the researchers note in the paper introducing Chain-of-Tools, an LLM agent “should be capable of efficiently managing a large amount of tools and fully utilizing unseen ones during the CoT reasoning, as many new tools may emerge daily in real-world application scenarios.”

CoTools offers a compelling alternative by combining aspects of fine-tuning and semantic understanding while crucially keeping the core LLM “frozen”—meaning its original weights and powerful reasoning capabilities remain untouched. Instead of fine-tuning the entire model, CoTools trains lightweight, specialized modules that work alongside the LLM during its generation process. “The core idea of CoTools is to leverage the semantic representation capabilities of frozen foundation models for determining where to call tools and which tools to call,” the researchers write. In essence, CoTools taps into the rich understanding embedded within the LLM’s internal representations, often called “hidden states,” which are computed as the model processes text and generates response tokens.
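To give a rough sense of what a lightweight module trained on top of a frozen model’s hidden states could look like, here is a minimal, hypothetical sketch of a “tool judge” implemented as a small classifier head. The class name, dimensions, and decision threshold are illustrative assumptions, not the authors’ implementation.

```python
# Hypothetical sketch of a CoTools-style "tool judge": a tiny trainable head
# that reads the frozen LLM's hidden state for the would-be next token and
# scores whether a tool call is appropriate at that point. Shapes and names
# are illustrative only; the base LLM itself would stay frozen.
import torch
import torch.nn as nn

class ToolJudge(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        # Small MLP head trained separately from the frozen base model.
        self.scorer = nn.Sequential(
            nn.Linear(hidden_size, hidden_size // 4),
            nn.GELU(),
            nn.Linear(hidden_size // 4, 1),
        )

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        # hidden_state: (batch, hidden_size) taken from the frozen LLM.
        return torch.sigmoid(self.scorer(hidden_state)).squeeze(-1)

# Toy usage with a random stand-in for a frozen model's hidden state.
torch.manual_seed(0)
judge = ToolJudge(hidden_size=2048)
hidden = torch.randn(1, 2048)           # would come from the frozen LLM
call_tool = judge(hidden).item() > 0.5  # illustrative decision threshold
print(f"call a tool here? {call_tool}")
```

Because only this small head is trained, the approach avoids touching the base model’s weights, which is the property the researchers emphasize.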
CoTools architecture (image credit: arXiv)

The CoTools framework comprises three main components that operate sequentially during the LLM’s reasoning process:

Tool Judge: As the LLM generates its response token by token, the Tool Judge analyzes the hidden state associated with the potential next token and decides whether calling a tool is appropriate at that specific point in the reasoning chain.

Tool Retriever: If the Judge determines a tool is needed, the Retriever chooses the most suitable tool for the task. The Tool Retriever is trained to create an embedding of the query and compare it to the embeddings of the available tools. This allows it to efficiently select the most semantically relevant tool from the pool, including “unseen” tools (i.e., tools that were not part of the training data for the CoTools modules).

Tool Calling: Once the best tool is selected, CoTools uses an ICL prompt that demonstrates filling in the tool’s parameters based on the context. This targeted use of ICL avoids the inefficiency of adding thousands of demonstrations to the prompt for the initial tool selection. Once the selected tool is executed, its result is inserted back into the LLM’s response generation.

By separating decision-making (Judge) and selection (Retriever), both based on semantic understanding, from parameter filling (Calling via focused ICL), CoTools achieves efficiency even with massive toolsets while preserving the LLM’s core abilities and allowing flexible use of new tools. However, since CoTools requires access to the model’s hidden states, it can only be applied to open-weight models such as Llama and Mistral, not to closed models such as GPT-4o and Claude.

Example of CoTools in action (image credit: arXiv)

The researchers evaluated CoTools across two distinct application scenarios: numerical reasoning using arithmetic tools, and knowledge-based question answering (KBQA), which requires retrieval from knowledge bases. On arithmetic benchmarks like GSM8K-XL (using basic operations) and FuncQA (using more complex functions), CoTools applied to LLaMA2-7B achieved performance comparable to ChatGPT on GSM8K-XL and slightly outperformed or matched another tool-learning method, ToolkenGPT, on FuncQA variants. The results highlighted that CoTools effectively enhances the capabilities of the underlying foundation model. For the KBQA tasks, tested on the KAMEL dataset and a newly constructed SimpleToolQuestions (STQuestions) dataset featuring a very large tool pool (1,836 tools, 837 of them unseen in the test set), CoTools demonstrated superior tool selection accuracy. It particularly excelled in scenarios with massive tool pools and unseen tools, leveraging tool descriptions for effective retrieval where methods relying solely on trained tool representations faltered. The experiments also indicated that CoTools maintained strong performance despite lower-quality training data.

Implications for the enterprise

Chain-of-Tools presents a promising direction for building more practical and powerful LLM-powered agents in the enterprise. This is especially relevant as new standards such as the Model Context Protocol (MCP) make it easier for developers to integrate external tools and resources into their applications. Enterprises can potentially deploy agents that adapt to new internal or external APIs and functions with minimal retraining overhead.
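As a rough illustration of the retrieval step described above, the following hypothetical sketch scores candidate tools by cosine similarity between an embedding of the query and embeddings of the tools’ descriptions, which is how a tool unseen during training can still be selected from its description alone. The toy bag-of-words encoder and the tool pool are illustrative assumptions; CoTools itself derives its representations from the frozen model’s hidden states.

```python
# Hypothetical sketch of a CoTools-style "tool retriever": embed the query and
# each tool's description, then pick the most semantically similar tool.
# The bag-of-words encoder below is a toy stand-in for learned embeddings.
import math
from collections import Counter

TOOLS = {  # illustrative tool pool, including tools added after training
    "calculator": "performs arithmetic such as addition and multiplication",
    "weather_api": "returns the current weather forecast for a city",
    "hr_lookup": "retrieves an employee's vacation balance from the HR system",
}

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Return the name of the tool whose description best matches the query."""
    q = embed(query)
    return max(TOOLS, key=lambda name: cosine(q, embed(TOOLS[name])))

print(retrieve("How many vacation days does Alice have left?"))  # -> hr_lookup
```

Because selection depends only on a description embedding, adding a new internal API to the pool requires writing a description rather than retraining, which is the flexibility the enterprise discussion above points to.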
The framework’s reliance on semantic understanding via hidden states allows for nuanced and accurate tool selection, which could lead to more reliable AI assistants in tasks that require interaction with diverse information sources and systems. “CoTools explores the way to equip LLMs with massive new tools in a simple…”

The tool integration problem that’s holding back enterprise AI (and how CoTools solves it) Read More »

Apple Patches Critical Vulnerabilities in iOS 15 and 16

On Monday, Apple issued critical security updates that retroactively address three actively exploited zero-day vulnerabilities affecting legacy versions of its operating systems.

CVE-2025-24200

The first vulnerability, designated CVE-2025-24200, was patched in iOS 16.7.11, iPadOS 16.7.11, iOS 15.8.4, and iPadOS 15.8.4. CVE-2025-24200 allows a physical attacker to disable USB Restricted Mode on an Apple device. This is a security feature designed to block unauthorised data access through the USB port when the iPhone or iPad has been locked for over an hour. Apple said CVE-2025-24200 “may have been exploited in an extremely sophisticated attack against specific targeted individuals,” hinting at potential involvement from state-sponsored actors aiming to surveil high-value targets such as government officials, journalists, or senior business executives. Although initially patched on February 10 in iOS 18.3.1, iPadOS 18.3.1, and iPadOS 17.7.5, the vulnerability remained unresolved in older operating systems until now.

CVE-2025-24201

The second flaw, CVE-2025-24201, was also patched in iOS 16.7.11, iPadOS 16.7.11, iOS 15.8.4, and iPadOS 15.8.4. This flaw is in WebKit, the browser engine Safari uses to render web pages. It allows malicious code running inside the Web Content sandbox — an isolated environment intended to contain browser-based threats — to escape and compromise broader system components. CVE-2025-24201 was first mitigated in iOS 17.2 in late 2023, followed by a supplemental patch in iOS 18.3.2, macOS Sequoia 15.3.2, visionOS 2.3.2, and Safari 18.3.1. The flaw has now been retrospectively addressed in iOS and iPadOS 15 and 16.

CVE-2025-24085

CVE-2025-24085, the third vulnerability, was patched in iPadOS 17.7.6, macOS Sonoma 14.7.5, and macOS Ventura 13.7.5. The use-after-free vulnerability is in Apple’s Core Media, the framework responsible for handling media processing tasks such as audio and video playback in apps. It allows attackers to seize control of deallocated memory and repurpose it to execute privileged malicious code. Originally patched in January with iOS 18.3, iPadOS 18.3, macOS Sequoia 15.3, watchOS 11.3, visionOS 2.3, and tvOS 18.3, the fix has now been backported to older systems.

Other vulnerabilities were patched in iOS 18.4

Alongside new Apple Intelligence features and emojis, iOS 18.4 — released on Tuesday — delivers fixes for additional vulnerabilities, including:

CVE-2025-30456: A flaw in the DiskArbitration framework that allowed apps to escalate their privileges to root.
CVE-2025-24097: A flaw in AirDrop that allowed unauthorised apps to access file metadata, such as creation dates or user details.
CVE-2025-31182: A flaw in the libxpc framework that allowed apps to delete arbitrary files on the device.
CVE-2025-30429, CVE-2025-24178, CVE-2025-24173: Flaws that allowed apps to break out of the sandbox in Calendar, libxpc, and Power Services, respectively.
CVE-2025-30467: A flaw in Safari that could allow malicious websites to spoof the address bar.

Apple users are strongly urged to update their devices immediately to guard against exploitation of these now-publicised vulnerabilities. While most users will receive automatic update prompts, manual updates can be performed via Settings, General, and then Software Update. source

Apple Patches Critical Vulnerabilities in iOS 15 and 16 Read More »