NVIDIA GTC 2025: Reasoning And Robotics Converge

San Jose was abuzz with excitement as AI enthusiasts gathered for NVIDIA's GTC 2025 AI conference. NVIDIA showcased its expanding data center offerings, along with a commitment to joint development with server and network vendors. Expectations were high for this world-renowned AI infrastructure event, and this year it did not disappoint. Sovereign AI led off the agenda: UK Secretary of State for Science, Innovation, and Technology Peter Kyle highlighted the UK's ambitious AI strategy, and representatives from Denmark, India, Italy, South Korea, and Brazil shared their own sovereign AI initiatives. Italy's Colosseum and Brazil's WideLabs stood out as prime examples of innovative international AI applications. Other highlights included the collaboration with DeepMind and Disney Research, which demonstrated AI's potential to revolutionize fields such as robotics, drug discovery, and energy grids, and the introduction of Dynamo, both an open-source project and a framework for NVIDIA's hardware, which promises to accelerate industrywide advancements in AI infrastructure. GTC also brought news of the disaggregation of NVLink, partnerships with Cisco on future telecommunications, and the expansion of NVIDIA's hardware certification program.

Here's a roundup of some of the most notable announcements:

Vera Rubin and Rubin Ultra. Jensen Huang introduced the Vera Rubin architecture, named after astronomer Vera Rubin. This next-generation GPU, launching in 2026, is designed to significantly enhance system performance. Rubin Ultra, expected in 2027, will further boost these capabilities.

Disaggregated NVLink. NVIDIA's NVLink72 is an advanced interconnect architecture that facilitates ultra-high-speed communication between GPUs and CPUs in large-scale computing setups. It connects 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs within a single rack, enabling them to function as a unified, massive computational resource.
Partnerships with Cisco. NVIDIA and Cisco are collaborating to develop an AI-native wireless network stack, focusing on radio access networks for 6G technology. The partnership targets performance, efficiency, and scalability in telecommunications.

Expanded certification program. NVIDIA's certification program validates servers equipped with NVIDIA GPUs to handle diverse AI workloads, including deep learning training and inference tasks. The rigorous testing ensures optimal performance, manageability, and scalability. Systems from Dell Technologies, HPE, and storage providers such as NetApp and VAST Data have achieved NVIDIA-certified status.

AI Data Center Blueprint. Recognizing the unique requirements of AI data centers, NVIDIA is partnering with vendors such as Cadence, Vertiv, and Schneider to develop AI Factory Blueprints. These blueprints streamline the design, testing, and optimization of AI data centers, creating visual models to simulate and refine aspects such as power, cooling, and networking before construction, ensuring efficiency and reliability.

Dynamo. NVIDIA released Dynamo, an open-source framework for scalable model inference. Although not every organization will run inference on its own hardware, NVIDIA aspires to become to AI what Kubernetes is to the cloud. Cohere is an early explorer of the project.

Some more tactical updates:

CUDA-X libraries. Powered by GH200 and GB200 superchips, these libraries accelerate computational engineering tools by up to 11x and enable 5x larger calculations. Among the more than 400 libraries, key microservices include NVIDIA Riva for speech AI, Earth-2 for climate simulations, cuOpt for routing optimization, and NeMo Retriever for retrieval-augmented generation.

NVIDIA Llama Nemotron reasoning. This family enhances multistep math, coding, reasoning, and complex decision-making with Llama models. It boosts accuracy by 20% and optimizes inference speed by 5x, reducing operational costs.
NVIDIA Cosmos World Foundation Models (WFMs). WFMs introduce customizable reasoning models for physical AI. Cosmos Transfer WFMs generate controllable, photorealistic video outputs from structured video inputs, streamlining perception AI training.

NVIDIA Isaac GR00T N1. The new GR00T N1 model and the Newton physics engine accelerate reliable robot deployment across various industries using real and synthetic training data, and both are enhanced by the latest Cosmos WFMs. As firms build agentic AI, the need for optimized hardware to run inference on reasoning models becomes ever more critical. Targeted inference frameworks such as NVIDIA's Dynamo that are released as open-source projects are valuable for early movers in the agentic world, allowing for broader community co-innovation.

What It Means

NVIDIA is driving a vertical integration story based on its prowess in AI hardware and is now extending this to libraries, open-source AI models (generic and industry-specific), edge, and robotics. This is good news for organizations (the idea of a one-stop shop), but business and tech leaders must still address challenges extraneous to their NVIDIA relationship, such as export controls and trade sanctions that limit infrastructure availability, power requirements, business cases for AI, skills, cost increases, and risks including security, privacy, and compliance. Power requirements for AI ambitions, specifically, remain an ongoing challenge. Jensen Huang noted that AWS, Azure, GCP, and Oracle Cloud will procure nearly 3.6 million Blackwell GPUs in 2025. In another session, Schneider execs discussed an additional 150 gigawatts of capacity required between now and 2030. For reference, one rack full of NVIDIA Blackwell servers with NVLink72 requires approximately 150 kilowatts of power, compared with 10–30 kilowatts for traditional systems. Making these massive global deployments sustainable will require thinking outside the box.
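A rough back-of-envelope check, using only the figures quoted above, shows why power dominates the conversation. The assumption here (every Blackwell GPU deployed in an NVLink72 rack at roughly 150 kW) is a simplification; real deployments will mix configurations.

```python
# Back-of-envelope estimate from the article's figures. Assumption:
# all GPUs land in NVLink72 racks (72 GPUs/rack) drawing ~150 kW each.
gpus_total = 3_600_000   # hyperscaler Blackwell procurement cited for 2025
gpus_per_rack = 72       # GPUs per NVLink72 rack
kw_per_rack = 150        # approximate per-rack power draw cited above

racks = gpus_total / gpus_per_rack          # number of racks
total_gw = racks * kw_per_rack / 1_000_000  # kW -> GW
print(f"{racks:,.0f} racks, ~{total_gw:.1f} GW")
```

That works out to roughly 50,000 racks and about 7.5 GW for the 2025 hyperscaler procurement alone, which puts the 150-gigawatt build-out figure through 2030 in context.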
We look forward to publishing several research reports on this market very soon. If you're exploring AI's potential and want to discuss it further, please submit an inquiry request.
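The NeMo Retriever microservice mentioned in the roundup handles the retrieval step of retrieval-augmented generation. As a generic illustration of that pattern only (this is not NVIDIA's API, and the `embed` function below is a toy character-frequency stand-in for a real embedding model), a retriever ranks documents by vector similarity to the query and passes the top matches to a generator:

```python
# Generic RAG retrieval step: embed query and documents, rank by
# cosine similarity, return the top-k documents as grounding context.
# Illustrative sketch only; embed() is a toy stand-in for a real model.
import math

def embed(text: str) -> list[float]:
    # Toy "embedding": letter-frequency vector over a-z.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Dynamo is an open-source framework for scalable model inference.",
    "cuOpt accelerates routing optimization problems.",
    "Earth-2 runs climate simulations.",
]
top = retrieve("open source inference framework", docs)
```

A production retriever swaps the toy embedding for a learned model and a vector index, but the rank-then-ground loop is the same.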


NYT Demands OpenAI President Testify As Long As Staff

By Ivan Moreno (April 1, 2025, 5:24 PM EDT) — The New York Times has asked a federal judge to order that OpenAI president Greg Brockman sit for a standard deposition this month in copyright lawsuits over material used to train large language models, saying he should not be considered an “apex” witness who can testify for less time than his employees….

Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends, and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas, and jurisdictions. A Law360 subscription includes daily newsletters, expert analysis, a mobile app, advanced search, judge information, real-time alerts, 450K+ searchable archived articles, and more. Experience Law360 today with a free 7-day trial.


The Sudden Silobreaker: GenAI Converges Search Experiences And Disciplines

GenAI Mirrors Search Experiences

Because of generative AI (genAI), all search experiences are increasingly conversational, assistive, and agentic. Consequently, distinctions between search experiences are disappearing. Perplexity and Rufus, Amazon's shopping assistant, both leverage genAI-integrated search, blurring the line between search engine and site search experiences. Like Rufus, Perplexity's shopping assistant rapidly summarizes reviews, compares features, and requires only one click to buy. Similarly, Adobe's Acrobat AI Assistant, an example of cognitive search, facilitates conversations with PDFs and summarizes documents. So does Leo, an AI assistant developed by private search engine Brave, which analyzes PDFs and Google Docs. Suddenly, search engine and cognitive search experiences look and feel alike. Examples of genAI-induced search convergence abound: ChatGPT Tasks, Quora's Poe, Reddit Answers, Salesforce's Agentforce, ThredUp's Style Chat, Workday Assistant, and more have much in common. Together, they form and reflect powerfully evolving search behavior. Users now expect back-and-forth interactions with agents that act like personal assistants and, increasingly, act on users' behalf.

GenAI Minimizes Searchers' Time To Value

The convenience of genAI-integrated search experiences motivates mass adoption. Already, 37% of consumers use conversational search features whenever they can, according to a recent survey of Forrester's Market Research Online Community. Such features replace the friction of clicks with the intuition of conversations and demand less effort. For example, when planning a trip, Google's Gemini can let you know the best time to book flights, advise how to save money on hotels, create a trip-planning document, draft a packing list, and check Gmail for confirmation codes. Microsoft's Copilot can create a meal plan customized to your age in seconds by retrieving information from various sites.
GenAI Demands Holistic Search

As search experiences across engines, sites, and databases converge, silos between search marketing, commerce search, and cognitive search dissolve. Search-related tasks that once occurred in isolation — such as bid management for pay-per-click, log file analysis for search engine optimization, enhancing product metadata for commerce search, and synthesizing customer service answers for cognitive search — can now cross-pollinate in a holistic search strategy. Holistic search entails incrementality testing to mitigate keyword cannibalization, creating cross-functional testbeds for new search strategies and tactics, and listening more actively to customers' voices. It means measuring search engine results page saturation, addressing websites' existential crises, adopting commerce search, and investing in vector search. Our latest report — GenAI Forever Changes All Forms Of Search — details how to do all that and more. It's a first-of-its-kind collaboration across Forrester's B2C marketing, B2B marketing, commerce search, and cognitive search subject-matter experts. We look forward to your feedback and to helping marketing, digital, and technology leaders adapt their processes to genAI-integrated search. As always, feel free to contact us to learn more.
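Incrementality testing, one of the holistic-search practices named above, can be sketched simply: hold paid search out of a control group of matched markets and compare conversions against markets where it keeps running. All numbers below are hypothetical, and real tests add matching and significance checks.

```python
# Minimal incrementality-lift sketch: if control markets (paid keywords
# paused) convert nearly as well via organic results alone, the paid
# keyword is likely cannibalizing organic traffic rather than adding value.
def incremental_lift(test_conversions: int, control_conversions: int) -> float:
    """Relative lift of the test group over the control baseline."""
    return (test_conversions - control_conversions) / control_conversions

# Hypothetical geo-holdout result: 1,050 conversions with paid search
# running vs. 1,000 without it -> only 5% incremental lift.
lift = incremental_lift(test_conversions=1050, control_conversions=1000)
```

A 5% lift against the cost of the paid campaign is the kind of signal that tells a holistic search team whether a keyword earns its budget or merely cannibalizes organic clicks.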


Bitcoin Rival Appeals Grayscale's Win In $2M CUTPA Suit

By Ryan Harroff (April 1, 2025, 6:02 PM EDT) — Cryptocurrency company Osprey Funds LLC is appealing a Connecticut state judge's ruling against it in its unfair trade practice suit accusing digital asset management firm Grayscale Investments LLC of misleading bitcoin investors about the security of their investments after the state court declined to reconsider its decision….


How To Boost Your Third-Party Risk Program With A Spring Cleaning

Prioritize Foundational Elements Over Decorative Accessories

Our springtime urge to clean, redecorate, and renovate has a biological explanation. It turns out that spring's increased hours of daylight lower the body's production of melatonin (the hormone that makes you sleepy), which leads to regained energy and inspiration to clean our living environments. For security and risk pros, what better way to use that energy than to give your third-party risk management (TPRM) program a good spring cleaning? Whether your TPRM program needs some sprucing up or a complete renovation, my new report, How To Build The Foundation For An Effective Third-Party Risk Management Program, takes you through the steps to get there.

Follow These Steps To Spruce Up Your TPRM Program Like A Pro

These days, there's no shortage of foolproof, celebrity-endorsed checklists to make your home deep-clean a breeze, but none (that I could find) for tidying up your TPRM house. Putting my Home Network show obsession to good use, I created a TPRM spring cleaning checklist. To refresh third-party risk without getting overwhelmed:

Focus on the foundational elements. Before you clean indoors, experts recommend focusing on structural elements such as gutters, air ducts, and roofing. These areas are far less costly to fix when maintenance is routine. Similarly, the third-party ecosystem is foundational to your company's business strategy and requires the same preventive maintenance. Breaches, attacks, and disruptions are no different than leaks from clogged gutters, fires from blocked air ducts, and structural damage from a failing roof. If third-party risk is not a risk management priority, or sits low on the list, prepare for disaster, not inconvenience. Foundational to your TPRM program are things such as organizationwide nomenclature and which third parties are in versus out of scope.

Prioritize visibility. A thorough window washing is synonymous with spring cleaning.
Beyond the curb appeal, the process lets you check that hinges are operational, look for air and water leaks, and remove dirt to improve air quality and energy efficiency. Data is the window into your third parties: The better its quality and completeness, the better your visibility into the risk. The good news is that you likely have more TPRM data than you realize, often enough to get your program started, if you know where to look. To build a holistic view of third-party risk, partner with colleagues in sourcing, procurement, contract management, and business users.

Tackle overlooked surfaces. Spring cleaning is often when we move the furniture instead of cleaning around it and finally address those "forgotten" spots such as baseboards, light fixtures, and curtains. These surfaces are either out of the way or take too much effort to address regularly. In TPRM, tiering, segmentation, and risk scoring are those overlooked surfaces. We're so focused on keeping up with the volume of third parties that there's no time to reevaluate whether our tiering and segmentation align with business strategy and our scoring model matches our risk management maturity.

Third-Party Risk Doesn't Have To Be A Business Blind Spot

Third-party risk is a rapidly maturing discipline where yesterday's requirements can quickly become insufficient. As technology, business dynamics, and the threat landscape change, make sure your TPRM program keeps pace. Read the full report for a step-by-step guide to building the foundation for an effective TPRM program, and schedule an inquiry or guidance session with me for further insights.
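The tiering and risk scoring described as "overlooked surfaces" can be sketched as a weighted scoring model. This is a hypothetical illustration, not Forrester's methodology: the factors, weights, and tier thresholds below are assumptions that a real program would align with its own business strategy and risk appetite.

```python
# Hypothetical third-party tiering sketch: score each vendor on a few
# risk factors rated 0-1, combine with weights, and map to a tier that
# drives the depth of due diligence. Factors/weights/thresholds are
# illustrative assumptions, not a standard.
RISK_WEIGHTS = {"data_access": 0.4, "business_criticality": 0.4, "geo_exposure": 0.2}

def risk_score(vendor: dict) -> float:
    """Weighted score in [0, 1] from factor ratings in [0, 1]."""
    return sum(vendor[factor] * weight for factor, weight in RISK_WEIGHTS.items())

def tier(score: float) -> str:
    if score >= 0.7:
        return "Tier 1 (high touch: annual assessments, continuous monitoring)"
    if score >= 0.4:
        return "Tier 2 (standard due diligence)"
    return "Tier 3 (baseline contractual controls)"

# Hypothetical vendor: a payroll SaaS with broad data access.
payroll_saas = {"data_access": 0.9, "business_criticality": 0.8, "geo_exposure": 0.5}
score = risk_score(payroll_saas)  # 0.9*0.4 + 0.8*0.4 + 0.5*0.2 = 0.78
```

Revisiting the weights and thresholds periodically is exactly the "move the furniture" exercise the checklist recommends: the model should track business strategy, not be set once and forgotten.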


McAfee’s Total Protection Has Your Back on 5 Devices for Just $19.99

TL;DR: Get two years of McAfee Total Protection for five devices for just $19.99 (reg. $149.99).

In today's work-anywhere world, protecting your data and devices is no longer optional. That's why McAfee Total Protection is more than just antivirus software. It's a full-featured cybersecurity suite designed for professionals, remote workers, and business owners who rely on seamless, secure digital operations across multiple platforms. For just $19.99, you'll get two years of award-winning protection for up to five devices, whether that's your work laptop, smartphone, tablet, or home desktop. This plan includes advanced features like a secure VPN for safe browsing on public Wi-Fi, real-time identity monitoring with alerts if your personal data shows up on the dark web, and a Protection Score that helps you stay ahead of potential vulnerabilities with proactive advice.

The AI-powered antivirus engine is constantly learning and adapting to detect new and evolving threats, offering real-time protection against viruses, ransomware, and phishing attacks. This makes McAfee Total Protection particularly valuable for professionals who handle sensitive client data or work in high-risk industries like finance, healthcare, and law. McAfee's built-in password manager simplifies your digital life by securely storing your credentials and helping you create strong, unique passwords for each login. Whether you're logging into client portals, internal systems, or cloud services, your credentials stay protected and accessible. Cross-platform compatibility means you get the same level of robust security whether you're on a Mac at home, an Android phone on the go, or a Windows laptop at work. And with one centralized dashboard, managing your protection has never been easier.
If you need reliable, multi-device protection with advanced identity monitoring and user-friendly tools, McAfee Total Protection offers one of the best values on the market today — especially while it's just $19.99 for two years (reg. $149.99). StackSocial prices subject to change.


Researchers warn of ‘catastrophic overtraining’ in LLMs

A new academic study challenges a core assumption in developing large language models (LLMs), warning that more pre-training data may not always lead to better models. Researchers from some of the leading computer science institutions in the West — including Carnegie Mellon University, Stanford University, Harvard University, and Princeton University — have introduced the concept of "catastrophic overtraining." They show that extended pre-training can actually make language models harder to fine-tune, ultimately degrading their performance. The study, "Overtrained Language Models Are Harder to Fine-Tune," is available on arXiv. It was led by Jacob Mitchell Springer, with co-authors Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig, and Aditi Raghunathan.

The law of diminishing returns

The research focuses on a surprising trend in modern LLM development: While models are pre-trained on ever-expanding pools of data (licensed or scraped from the web and represented to the model as tokens, numerical representations of words and word fragments), increasing the token count during pre-training may reduce effectiveness when those models are later fine-tuned for specific tasks. The team conducted a series of empirical evaluations and theoretical analyses to examine the effect of extended pre-training on model adaptability. One of the key findings centers on AI2's open-source OLMo-1B model. The researchers compared two versions: one pre-trained on 2.3 trillion tokens and another on 3 trillion tokens. Despite being trained on roughly 30% more data, the 3-trillion-token model performed worse after instruction tuning, showing over 2% worse performance on several standard language model benchmarks than its 2.3-trillion-token counterpart.
In some evaluations, the degradation reached 3%. The researchers argue that this decline is not an anomaly but a consistent phenomenon they term "catastrophic overtraining."

Understanding sensitivity and forgetting

The paper attributes this degradation to a systematic increase in what the authors call "progressive sensitivity." As models undergo extended pre-training, their parameters become more sensitive to changes. This increased fragility makes them more vulnerable to degradation during post-training modifications such as instruction tuning, fine-tuning for multimodal tasks, or even simple weight perturbations. The researchers provide evidence that, beyond a certain point in pre-training, any modification — whether structured, like fine-tuning, or unstructured, like adding Gaussian noise — leads to a greater loss of previously learned capabilities. This sensitivity results in "forgetting," where the model's original strengths deteriorate as new training data is introduced. The study identifies an inflection point in pre-training after which additional training yields diminishing and even negative returns on fine-tuning outcomes. For the OLMo-1B model, this threshold emerged around 2.5 trillion tokens.

A wealth of evidence

The team's analysis spans real-world and controlled experimental settings. They tested the phenomenon across different tasks, including instruction tuning using datasets such as Anthropic-HH and TULU, and multimodal fine-tuning using the LLaVA framework. The results consistently showed that models pre-trained beyond certain token budgets underperformed after fine-tuning. The researchers also constructed a theoretical model using linear networks to better understand why overtraining leads to increased sensitivity. Their analysis confirmed that progressive sensitivity and catastrophic overtraining become mathematically inevitable when pre-training continues indefinitely without proper constraints.
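The weight-perturbation idea behind "progressive sensitivity" can be illustrated with a toy probe: add Gaussian noise to a model's parameters and measure the average loss increase. This minimal sketch uses a one-parameter linear model, not the paper's setup, so it only shows the mechanics of such a probe, not the pre-training-duration trend itself.

```python
# Toy sensitivity probe: average loss increase when Gaussian noise is
# added to the weights. Minimal illustration, not a reproduction of the
# paper's experiments.
import random

def loss(w: float, data: list[tuple[float, float]]) -> float:
    # Mean squared error of a one-parameter linear model y = w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def perturbation_sensitivity(w, data, sigma=0.1, trials=200, seed=0):
    """Average loss increase under Gaussian weight perturbations."""
    rng = random.Random(seed)
    base = loss(w, data)
    bumps = [loss(w + rng.gauss(0.0, sigma), data) - base
             for _ in range(trials)]
    return sum(bumps) / trials

data = [(x, 2.0 * x) for x in range(1, 6)]  # ground truth: w = 2
sens = perturbation_sensitivity(2.0, data)  # positive: noise always hurts here
```

In the paper's framing, the key quantity is how this sensitivity grows with pre-training tokens: past the inflection point, the same-sized perturbation (or fine-tuning update) costs progressively more previously learned capability.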
The ultimate takeaway: Model providers and trainers must make trade-offs

The findings challenge the widespread assumption that more pre-training data is always better. Instead, the paper suggests a nuanced trade-off: While longer pre-training improves the base model's capabilities, it also increases the risk that fine-tuning will degrade those capabilities. In practice, attempts to mitigate this effect — such as adjusting fine-tuning learning rates or adding regularization — may delay the onset of catastrophic overtraining but cannot fully eliminate it without sacrificing downstream performance. For enterprises that plan to fine-tune an open-source model to improve business workflows and outcomes, the lesson from this research is that fine-tuning a lower-parameter model trained on less material is likely to yield a more reliable production model. The authors acknowledge that further research is needed to understand the factors influencing when and how catastrophic overtraining occurs. Open questions include whether the pre-training optimizer, training objective, or data distribution can affect the severity of the phenomenon.

Implications for future LLM and AI model development

The study has significant implications for how organizations and researchers design and train large language models. As the field pursues ever-larger and more capable models, this research highlights the importance of balancing pre-training duration with post-training adaptability. The findings may also influence how model developers think about resource allocation: Rather than focusing exclusively on increasing pre-training budgets, developers may need to reassess strategies to optimize downstream performance without incurring the negative effects of catastrophic overtraining.


How Can AI Be Used Safely? Expert Researchers Weigh In

An important focus of AI research is improving AI systems' factualness and trustworthiness. Even though significant progress has been made in these areas, some AI experts are pessimistic that the issues will be solved in the near future. That is one of the main findings of a new report by the Association for the Advancement of Artificial Intelligence (AAAI), which includes insights from experts at various academic institutions (e.g., MIT, Harvard, and the University of Oxford) and tech giants (e.g., Microsoft and IBM). The goal of the study was to define current trends and the research challenges involved in making AI more capable and reliable so the technology can be used safely, wrote AAAI President Francesca Rossi. The report covers 17 topics related to AI research, culled by a group of 24 "very diverse" and experienced AI researchers, along with 475 respondents from the AAAI community, she noted. Here are highlights from the report.

Improving an AI system's trustworthiness and factuality

An AI system is considered factual if it doesn't output false statements, and its trustworthiness can be improved by including criteria "such as human understandability, robustness, and the incorporation of human values," the report's authors stated. Other approaches to consider are fine-tuning and verifying machine outputs and replacing complex models with simpler, understandable ones.

Making AI more ethical and safer

AI is becoming more popular, and this requires greater responsibility for AI systems, according to the report. For example, emerging threats such as AI-driven cybercrime and autonomous weapons require immediate attention, along with the ethical implications of new AI techniques.
Among the most pressing ethical challenges, respondents' top concerns were:

- Misinformation (75%)
- Privacy (58.75%)
- Responsibility (49.38%)

This indicates that more transparency, accountability, and explainability are needed in AI systems, and that ethical and safety concerns should be addressed through interdisciplinary collaboration, continuous oversight, and clearer responsibility. Respondents also cited political and structural barriers, "with concerns that meaningful progress may be hindered by governance and ideological divides."

Evaluating AI using various factors

Researchers make the case that AI systems introduce "unique evaluation challenges." Current evaluation approaches focus on benchmark testing, but the authors argue that more attention needs to be paid to usability, transparency, and adherence to ethical guidelines.

Implementing AI agents introduces challenges

AI agents have evolved from autonomous problem-solvers to AI frameworks that enhance adaptability, scalability, and cooperation. Yet the researchers found that agentic AI, while providing flexible decision-making, has introduced challenges around efficiency and complexity. The report's authors state that integrating AI with generative models "requires balancing adaptability, transparency, and computational feasibility in multi-agent environments."

More aspects of AI research

Other AI research topics covered in the AAAI report include sustainability, artificial general intelligence, social good, hardware, and geopolitical aspects.


House GOP Infighting Delays Push To Repeal 2 CFPB Rules

By Jon Hill (April 1, 2025, 5:04 PM EDT) — Plans for the U.S. House to vote on overturning two Biden-era Consumer Financial Protection Bureau rules were scuttled Tuesday by an unrelated fight among Republicans about whether to allow proxy voting for lawmakers with infant children….
