House Dems Seek FCC Answers On Media Probes

By Christopher Cole ( April 2, 2025, 5:49 PM EDT) — A trio of leading House Democrats on the Energy and Commerce Committee are calling on the Federal Communications Commission’s Republican chief to explain his pursuit of “political goals” through a bevy of news network investigations since taking office in January…. Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas and jurisdictions. A Law360 subscription includes features such as Daily newsletters Expert analysis Mobile app Advanced search Judge information Real-time alerts 450K+ searchable archived articles And more! Experience Law360 today with a free 7-day trial. source


From concept to reality: A practical guide to agentic AI deployment

Deployment: Automating the LLM operations lifecycle

Keep in mind that everything surrounding artificial intelligence and agentic AI is still evolving. Models are being released faster, which introduces model management activities we didn't have to handle before. Tooling is evolving, and new frameworks are being released that streamline processes and can reduce technical debt. You need to ensure your AI solution evolves as well. You will need to iterate on your solutions more frequently than you would on traditional non-AI solutions, and you need a versioning strategy to keep up with modifications and new features. If you aren't planning updates with a versioning strategy, and updating your iterative tests along the way, your AI system will become obsolete. This causes unreliability, and the system becomes a technical debt you will struggle to maintain. The benefits of fully automating the LLM operations lifecycle (enhanced efficiency, consistency and reliability, along with continuous improvement, cost-effectiveness and compliance) far outweigh the cost.

Agentic AI solutions have immense potential for businesses seeking to automate tasks and enhance efficiency. But if you aren't deploying, testing, monitoring and automating the process, it doesn't matter how good your solution is or what its potential could have been.

This article has covered the processes around agentic AI DevOps, but there are five takeaways that should form the foundational baseline for every successful implementation:

Automate, automate, automate: Automate tasks, create automation pipelines, automate testing, automate evaluations and automate the deployment of monitoring.

Deploy to containers and virtual environments: Run solutions in Docker containers to isolate the agents and constrain their access.

Restrict access: Limit the agents' access to resources, the internet and data repositories to prevent unauthorized access or data oversharing.

Monitor: Monitor output logs, performance logs and custom metrics during and after execution to identify issues that require human review. Create a baseline and compare against it to easily spot unintended behavior.

Human oversight: Run tests with humans in the loop to supervise the agents and ensure that you have included all scenarios that will require human intervention.

Fully automating the LLM operations lifecycle will enhance efficiency, consistency and reliability, while also supporting continuous improvement, cost-effectiveness and compliance.

Stephen Kaufman serves as a chief architect in the Microsoft Customer Success Unit Office of the CTO, focusing on AI and cloud computing. He brings more than 30 years of experience across some of the largest enterprise customers, helping them understand and utilize AI, from initial concepts to specific application architectures, design, development and delivery.

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF's purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time, as well as to grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of IASA, the leading nonprofit professional association for business technology architects. source
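To make the monitoring advice concrete, here is a minimal Python sketch of baseline comparison. The metric names, values and tolerance are illustrative assumptions, not prescriptions from the article: the idea is simply to store expected metrics for your agent, compare each run against them, and route deviations to a human reviewer.

```python
# Sketch: compare an agent run's metrics against a stored baseline and flag
# deviations for human review. Metric names and thresholds are illustrative.

BASELINE = {
    "avg_latency_s": 2.0,      # typical response time per task
    "tool_calls_per_task": 4,  # typical number of tool invocations
    "error_rate": 0.02,        # typical fraction of failed steps
}

def flag_anomalies(run_metrics: dict, baseline: dict = BASELINE,
                   tolerance: float = 0.5) -> list[str]:
    """Return metrics deviating from baseline by more than `tolerance`
    (a relative fraction); these should go to a human reviewer."""
    flagged = []
    for name, expected in baseline.items():
        observed = run_metrics.get(name)
        if observed is None:
            flagged.append(f"{name}: missing from run metrics")
            continue
        if expected and abs(observed - expected) / expected > tolerance:
            flagged.append(f"{name}: {observed} vs baseline {expected}")
    return flagged

issues = flag_anomalies({"avg_latency_s": 6.5,
                         "tool_calls_per_task": 4,
                         "error_rate": 0.01})
print(issues)  # -> ['avg_latency_s: 6.5 vs baseline 2.0']
```

In practice the baseline itself would be built from logged historical runs and versioned alongside the agent, so that each new release is compared against the behavior you last reviewed and approved.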


3 Firms Guide MDA Space's $269M SatixFy Deal

By Al Barbarino ( April 1, 2025, 4:22 PM EDT) — MDA Space said Tuesday it will acquire SatixFy Communications Ltd. at an equity value of approximately $193 million in a push by the Brampton, Ontario-based firm to bolster its end-to-end satellite systems offerings, with at least three law firms steering the deal…. source


Emergence AI’s new system automatically creates AI agents rapidly in real time based on the work at hand

Another day, another announcement about AI agents. Hailed by various market research reports as the big tech trend of 2025, especially in the enterprise, it seems we can’t go more than 12 hours or so without the debut of another way to make, orchestrate (link together) or otherwise optimize purpose-built AI tools and workflows designed to handle routine white-collar work. Yet Emergence AI, a startup founded by former IBM Research veterans that late last year debuted its own cross-platform AI agent orchestration framework, is out with something that sets it apart from the rest: a new AI agent creation platform that lets the human user specify, via text prompts, what work they are trying to accomplish, then turns it over to AI models to create the agents they believe are necessary to accomplish that work. The new system is a no-code, natural language, AI-powered multi-agent builder, and it works in real time. Emergence AI describes it as a milestone in recursive intelligence that aims to simplify and accelerate complex data workflows for enterprise users.

“Recursive intelligence paves the path for agents to create agents,” said Satya Nitta, co-founder and CEO of Emergence AI. “Our systems allow creativity and intelligence to scale fluidly, without human bottlenecks, but always within human-defined boundaries.”

[Image: Dr. Satya Nitta, co-founder and CEO of Emergence AI, during his keynote at the AI Engineer World’s Fair 2024, where he unveiled Emergence’s Orchestrator meta-agent and introduced the open-source web agent, Agent-E. Photo courtesy AI Engineer World’s Fair.]

The platform is designed to evaluate incoming tasks, check its existing agent registry and, if necessary, autonomously generate new agents tailored to fulfill specific enterprise needs. It can also proactively create agent variants to anticipate related tasks, broadening its problem-solving capabilities over time.

According to Nitta, the orchestrator’s architecture enables entirely new levels of autonomy in enterprise automation. “Our orchestrator stitches multiple agents together autonomously to create multi-agent systems without human coding. If it doesn’t have an agent for a task, it will auto-generate one and even self-play to learn related tasks by creating new agents itself,” he explained.

A brief demo shown to VentureBeat over a video call last week appeared suitably impressive: Nitta showed how a simple text instruction to have the AI categorize email sparked a wave of new agents being created, displayed on a visual timeline with each agent represented as a colored dot in a column designating the category of work it was designed to carry out.

[Animated GIF: Emergence AI’s user interface for automatically creating multiple enterprise AI agents.]

Nitta also said the user could stop and intervene in this process, conveying additional text instructions, at any time.

Bringing agentic coding to enterprise workflows

Emergence AI’s technology focuses on automating data-centric enterprise workflows such as ETL pipeline creation, data migration, transformation and analysis. The platform’s agents are equipped with agentic loops, long-term memory and self-improvement abilities through planning, verification and self-play. This enables the system not only to complete individual tasks but also to understand and navigate surrounding task spaces for adjacent use cases.

“We’re in a weird time in the development of technology and our society. We now have AI joining meetings,” Nitta said. “But beyond that, one of the most exciting things that’s happened in AI over the last two, three years is that large language models are producing code. They’re getting better, but they’re probabilistic systems. The code might not always be perfect, and they don’t execute, verify, or correct it.”

Emergence AI’s platform seeks to fill that gap by integrating large language models’ code-generation abilities with autonomous agent technology. “We’re marrying LLMs’ code generation capabilities with autonomous agent technology,” Nitta added. “Agentic coding has enormous implications and will be the story of the next year and the next several years. The disruption is profound.”

Emergence AI highlights the platform’s ability to integrate with leading AI models such as OpenAI’s GPT-4o and GPT-4.5, Anthropic’s Claude 3.7 Sonnet, and Meta’s Llama 3.3, as well as frameworks like LangChain, Crew AI and Microsoft Autogen. The emphasis is on interoperability, allowing enterprises to bring their own models and third-party agents into the platform.

Expanding multi-agent capabilities

With the current release, the platform expands to include connector agents and data and text intelligence agents, allowing enterprises to build more complex systems without writing manual code. The orchestrator’s ability to evaluate its own limitations and take action is central to Emergence’s approach. “A very non-trivial thing that’s happening is when a new task comes in, the orchestrator figures out if it can solve the task by checking the registry of existing agents,” Nitta said. “If it can’t, it creates a new agent and registers it.”

He added that this process is not simply reactive, but generative. “The orchestrator is not just creating agents; it’s creating goals for itself. It says, ‘I can’t solve this task, so I will create a goal to make a new agent.’ That’s what’s truly exciting.”

But lest you worry that the orchestrator will spiral out of control and create too many needless custom agents for each new task, Emergence’s research on its platform shows that it has been designed to winnow down the number of agents it creates as it comes closer to completing a task, adding agents with more general applicability to its internal registry for your enterprise and checking back with that registry before creating any new ones, and that it successfully carries out this requirement in practice.

[Graph: the number of tasks increases while the number of Emergence AI “core agents” and “multi agents” levels off over time. Credit: Emergence AI]

Prioritizing safety, verification and human oversight

To maintain oversight and ensure responsible use, Emergence AI incorporates several safety and compliance features. These include guardrails and access controls, verification rubrics to evaluate agent performance, and human-in-the-loop oversight to validate key decisions. Nitta
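The registry-then-create loop Nitta describes can be sketched in a few lines of Python. This is a hedged illustration only: the class names, the capability-matching rule and the agent-generation step are assumptions for the sketch, not Emergence AI's actual API.

```python
# Sketch of the orchestrator behavior described above: check a registry of
# existing agents for one that can handle the task; if none exists, create
# a new agent and register it. Names are illustrative, not Emergence AI's API.

class Agent:
    def __init__(self, capability: str):
        self.capability = capability

    def handles(self, task: str) -> bool:
        # Crude capability match for illustration purposes only.
        return self.capability in task

class Orchestrator:
    def __init__(self):
        self.registry: list[Agent] = []

    def dispatch(self, task: str) -> Agent:
        # Reuse an existing agent when one covers the task...
        for agent in self.registry:
            if agent.handles(task):
                return agent
        # ...otherwise auto-generate one and register it for future tasks.
        new_agent = Agent(capability=task.split()[0])
        self.registry.append(new_agent)
        return new_agent

orch = Orchestrator()
orch.dispatch("categorize email by topic")       # no match: creates an agent
orch.dispatch("categorize incoming tickets")     # match: reuses it
print(len(orch.registry))  # -> 1
```

The second dispatch reuses the first agent rather than creating a new one, which mirrors the winnowing behavior the article describes: the registry grows with genuinely new kinds of work, not with every task.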


FCC Pulls Texas Station's License For Unpaid Fees

By Nadia Dreid ( April 1, 2025, 9:53 PM EDT) — A Texas radio station nestled right on the border with New Mexico just had its license yanked by the Federal Communications Commission after it failed to pay its regulatory fees for more than a decade, the agency has revealed…. source


NVIDIA GTC 2025: Reasoning And Robotics Converge

San Jose was abuzz with excitement as AI enthusiasts gathered for the 2025 NVIDIA GTC AI conference. NVIDIA showcased its expanding data center offerings, along with a commitment to joint developments with server and network vendors. Everyone had high expectations, as this is a world-renowned AI infrastructure event, and this year it did not disappoint. Sovereign AI led off the agenda, with UK Secretary of State for Science, Innovation, and Technology Peter Kyle highlighting the UK’s ambitious AI strategy, and representatives from Denmark, India, Italy, South Korea and Brazil also sharing their sovereign AI initiatives. Italy’s Colosseum and Brazil’s WideLabs stood out as prime examples of innovative international AI applications. Another highlight was the collaboration between DeepMind and Disney Research that demonstrated AI’s potential to revolutionize fields such as robotics, drug discovery and energy grids, along with the introduction of Dynamo, both as an open-source project and as a framework for NVIDIA’s hardware, which promises to accelerate industrywide advancements in AI infrastructure. GTC also brought NVIDIA’s news of the disaggregation of NVLink, partnerships with Cisco for future telecommunications, and the expansion of its hardware certification program.

Here’s a roundup of some of the most notable announcements:

Vera Rubin and Rubin Ultra. Jensen Huang introduced the Vera Rubin architecture, named after astronomer Vera Rubin. This next-generation GPU, launching in 2026, is designed to significantly enhance system performance. Rubin Ultra, expected in 2027, will further boost these capabilities.

Disaggregated NVLink. NVIDIA’s NVLink72 is an advanced interconnect architecture that facilitates ultrahigh-speed communication between GPUs and CPUs in large-scale computing setups. It connects 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs within a single rack, enabling them to function as a unified, massive computational resource.

Partnerships with Cisco. NVIDIA and Cisco are collaborating to develop an AI-native wireless network stack for 6G, focusing on radio access networks and on performance, efficiency and scalability in telecommunications.

Expanded certification program. NVIDIA’s certification program validates servers equipped with NVIDIA GPUs to handle diverse AI workloads, including deep learning training and inference tasks. The rigorous testing ensures optimal performance, manageability and scalability. Systems from Dell Technologies, HPE, and storage providers such as NetApp and VAST Data have achieved NVIDIA-certified status.

AI Data Center Blueprint. Recognizing the unique requirements of AI data centers, NVIDIA is partnering with vendors such as Cadence, Vertiv and Schneider to develop AI Factory Blueprints. These blueprints streamline the design, testing and optimization of AI data centers, creating visual models to simulate and refine aspects such as power, cooling and networking before construction, ensuring efficiency and reliability.

Dynamo. NVIDIA released Dynamo, an open-source framework for scalable model inferencing. Although not every organization will run inference directly on its own hardware, NVIDIA aspires to become to AI what Kubernetes is to cloud. Cohere is an early explorer of this project.

Some more tactical updates:

CUDA-X libraries. Powered by GH200 and GB200 superchips, these libraries accelerate computational engineering tools by up to 11x and enable 5x larger calculations. With over 400 libraries, key microservices include NVIDIA Riva for speech AI, Earth-2 for climate simulations, cuOpt for routing optimization and NeMo Retriever for retrieval-augmented generation capabilities.

NVIDIA Llama Nemotron reasoning. This offering enhances multistep math, coding, reasoning and complex decision-making with Llama models. It boosts accuracy by 20% and optimizes inference speed by 5x, reducing operational costs.

NVIDIA Cosmos World Foundation Models (WFMs). WFMs introduce customizable reasoning models for physical AI. Cosmos Transfer WFMs generate controllable photorealistic video outputs from structured video inputs, streamlining perception AI training.

NVIDIA Isaac GR00T N1. The new GR00T and Newton models accelerate reliable robot deployment across various industries, using real and synthetic training data. They are enhanced by the latest Cosmos WFMs.

As firms build agentic AI, the need for optimized hardware to run inference on reasoning models becomes ever more critical. Targeted inferencing frameworks such as NVIDIA’s Dynamo that are released as open-source projects are very valuable for early movers in the agentic world, allowing for broader community co-innovation.

What It Means

NVIDIA is driving a vertical integration story based on its prowess in AI hardware and is now extending this to libraries, open-source AI models (generic and industry-specific), edge and robotics. This is certainly good news for organizations (the idea of a one-stop shop), but business and tech leaders must address challenges extraneous to their NVIDIA relationship, such as export controls, trade sanctions that limit infrastructure availability, power requirements, business cases for AI, skills, cost increases, and risks including security, privacy and compliance.

Specifically, meeting the power requirements for AI ambitions remains an ongoing challenge. Jensen Huang talked about how AWS, Azure, GCP and Oracle Cloud will procure nearly 3.6 million Blackwell GPUs in 2025. In another session, Schneider executives talked about an additional 150 gigawatts of capacity required between now and 2030. For reference, one rack full of NVIDIA Blackwell servers with NVLink72 requires more than 150 kilowatts of power, compared with 10-30 kilowatts for traditional systems. These massive deployments across the globe require thinking outside the box to make it all sustainable.

We are looking forward to publishing a few research reports on this market very soon. If you’re exploring AI’s potential and want to discuss it further, please submit an inquiry request. source


NYT Demands OpenAI President Testify As Long As Staff

By Ivan Moreno ( April 1, 2025, 5:24 PM EDT) — The New York Times has asked a federal judge to order that OpenAI president Greg Brockman sit for a standard deposition this month in copyright lawsuits over material used to train large language models, saying he should not be considered an “apex” witness who can testify for less time than his employees…. source


The Sudden Silobreaker: GenAI Converges Search Experiences And Disciplines

GenAI Mirrors Search Experiences

Because of generative AI (genAI), all search experiences are increasingly conversational, assistive, and agentic. Consequently, distinctions between search experiences disappear. Perplexity and Rufus, Amazon’s shopping assistant, both leverage genAI-integrated search, blurring the line between search engine and site search experiences. Like Rufus, Perplexity’s shopping assistant rapidly summarizes reviews, compares features, and requires only one click to buy. Similarly, Adobe’s Acrobat AI Assistant, an example of cognitive search, facilitates conversations with PDFs and summarizes documents. This is similar to Leo, an AI assistant developed by private search engine Brave, which analyzes PDFs and Google Docs. Suddenly, search engine and cognitive search experiences look and feel alike.

Examples abound of genAI-induced search convergence. Experiences like ChatGPT Tasks, Quora’s Poe, Reddit Answers, Salesforce’s Agentforce, ThredUp’s Style Chat, Workday Assistant, and more have much in common. Together, they form and reflect powerfully evolving search behavior. Now, users expect back-and-forth interactions with agents that act like personal assistants and, increasingly, act on users’ behalf.

GenAI Minimizes Searchers’ Time To Value

The convenience of genAI-integrated search experiences motivates mass adoption. Already, 37% of consumers use conversational search features whenever they can, according to a recent survey of Forrester’s Market Research Online Community. Such features replace the friction of clicks with the intuition of conversations and demand less effort. For example, when planning a trip, Google’s Gemini can let you know the best time to book flights, advise how to save money on hotels, create a trip planning document, draft a packing list, and check Gmail for confirmation codes. Microsoft’s Copilot can create a meal plan in seconds customized to your age by retrieving information from various sites.

GenAI Demands Holistic Search

As search experiences across engines, sites, and databases converge, silos between search marketing, commerce search, and cognitive search dissolve. Search-related tasks that once occurred in isolation (such as bid management for pay-per-click, log file analysis for search engine optimization, enhancing product metadata for commerce search, and synthesizing customer service answers for cognitive search) can now cross-pollinate in a holistic search strategy. Holistic search entails incrementality testing to mitigate keyword cannibalization, creating cross-functional testbeds for new search strategies and tactics, and listening more actively to customers’ voices. It means measuring search engine results page saturation, addressing websites’ existential crises, adopting commerce search, and investing in vector search.

Our latest report, GenAI Forever Changes All Forms Of Search, details how to do all that and more. It’s a first-of-its-kind collaboration across Forrester’s B2C marketing, B2B marketing, commerce search, and cognitive search subject-matter experts. We look forward to your feedback and to helping marketing, digital, and technology leaders and processes adapt to genAI-integrated search. As always, feel free to contact us to learn more. source


Bitcoin Rival Appeals Grayscale's Win In $2M CUTPA Suit

By Ryan Harroff ( April 1, 2025, 6:02 PM EDT) — Cryptocurrency company Osprey Funds LLC is appealing a Connecticut state judge’s ruling against it in its unfair trade practice suit accusing digital asset management firm Grayscale Investments LLC of misleading bitcoin investors about the security of their investments after the state court declined to reconsider its decision…. source


How To Boost Your Third-Party Risk Program With A Spring Cleaning

Prioritize Foundational Elements Over Decorative Accessories

Our springtime urge to clean, redecorate, and renovate has a biological explanation. It turns out that spring’s increased hours of daylight lower our body’s production of melatonin (the hormone that makes you sleepy), which leads to regained energy and inspiration to clean our living environments. For security and risk pros, what better way to use that energy than to give your third-party risk management (TPRM) program a good spring cleaning? Whether your TPRM program needs some sprucing up or a complete renovation, my new report, How To Build The Foundation For An Effective Third-Party Risk Management Program, takes you through the steps to get there.

Follow These Steps To Spruce Up Your TPRM Program Like A Pro

These days, there’s no shortage of foolproof, celebrity-endorsed checklists to make your home deep-clean a breeze, but none (that I could find) for tidying up your TPRM house. Putting my Home Network show obsession to good use, I created a TPRM spring cleaning checklist. To refresh third-party risk without getting overwhelmed:

Focus on the foundational elements. Before you clean indoors, experts recommend focusing on structural elements such as gutters, air ducts, and roofing. These areas are far less costly when maintenance is routine. Similarly, the third-party ecosystem is foundational to your company’s business strategy and requires the same preventive maintenance. Breaches, attacks, and disruptions are no different than leaks from clogged gutters, fires from blocked air ducts, and structural damage from a failing roof. If third-party risk is not a risk management priority, or is low on the list, prepare for disaster, not inconvenience. Foundational to your TPRM program are things such as organizationwide nomenclature and which third parties are in versus out of scope.

Prioritize visibility. A thorough window washing is synonymous with spring cleaning. Beyond the curb appeal, the process allows you to check that hinges are operational, check for air and water leaks, and remove dirt to improve air quality and energy efficiency. Data is the window into your third parties: The better its quality and the more complete it is, the better your visibility into the risk. The good news is that you likely have more TPRM data than you know, often enough to get your program started, if you know where to look. To build a holistic view of third-party risk, partner with colleagues in sourcing, procurement, contract management, and business users.

Tackle overlooked surfaces. Spring cleaning is often when we move the furniture instead of cleaning around it and finally address those “forgotten” spots such as baseboards, light fixtures, and curtains. These surfaces are either out of the way or take too much effort to address regularly. In TPRM, tiering, segmentation, and risk scoring are those overlooked surfaces. We’re so focused on keeping up with the volume of third parties that there’s no time to reevaluate whether our tiering and segmentation align to business strategy and our scoring model matches our risk management maturity.

Third-Party Risk Doesn’t Have To Be A Business Blind Spot

Third-party risk is a rapidly maturing discipline where yesterday’s requirements can quickly become insufficient. As technology, business dynamics, and the threat landscape all change, make sure your TPRM program keeps pace. Read the full report for a step-by-step guide to building the foundation for an effective TPRM program, and schedule an inquiry or guidance session with me for further insights. source
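To make the tiering and risk-scoring idea concrete, here is a minimal Python sketch. The criteria, weights, and tier cutoffs are illustrative assumptions for this example, not Forrester's model; a real program would align them to business strategy and risk management maturity, as the post advises.

```python
# Sketch: a simple third-party tiering score. Criteria, weights, and tier
# cutoffs are illustrative; real programs calibrate these to business
# strategy and risk management maturity.

def tier_third_party(handles_sensitive_data: bool,
                     business_critical: bool,
                     internet_facing: bool) -> str:
    """Score a vendor on a few risk criteria and map the score to a tier."""
    score = 0
    score += 3 if handles_sensitive_data else 0  # data access weighs most
    score += 2 if business_critical else 0       # operational dependency
    score += 1 if internet_facing else 0         # exposure surface
    if score >= 4:
        return "Tier 1 (high risk: assess annually)"
    if score >= 2:
        return "Tier 2 (moderate risk: assess biennially)"
    return "Tier 3 (low risk: assess on contract renewal)"

print(tier_third_party(handles_sensitive_data=True,
                       business_critical=True,
                       internet_facing=False))
# -> Tier 1 (high risk: assess annually)
```

Revisiting the weights and cutoffs periodically is exactly the "overlooked surface" work the post describes: as the business strategy changes, a vendor's tier should change with it.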
