Veeam Data Platform protects, secures and recovers data during disruptions

00:00 Hi, everybody! Welcome to DEMO, the show where companies come in and showcase their latest products and platforms. Today, I’m joined by Emilee Tellez, a Field CTO at Veeam Software. Welcome to the show, Emilee.
00:10 Thank you, Keith. I’m happy to be here.
00:12 Congratulations! You’re our first virtual demo.
00:15 That’s awesome.
00:17 So, what are you showing us today?
00:20 Today, we’re going to be discussing everything that’s part of the Veeam Data Platform Premium.
00:26 Who is this designed for? Usually, within an enterprise, it’s fair to say everybody, but more specifically, is there a role within a company that benefits the most from this product?
00:37 Sure! I’ll say it is for everybody, but when we start thinking about specific cyber threats that companies face, CISOs and security analysts are going to love Veeam Data Platform Premium because it helps them with defense and response capabilities. For CIOs and IT operations teams, it highlights gaps in their recovery process.
01:02 The idea is that you’re merging the worlds of security teams and IT operations, right? It still feels like, in many companies, those are separate groups that don’t communicate much. There are things that just get “thrown over the wall,” so to speak.
01:16 Yep. We want to bring those two teams together so they can coordinate a better response to any type of incident or threat.
01:27 What are the main problems you’re solving with this? Why should companies adopt this platform? Does it improve their current security and recovery processes? What would they be doing without it?
01:39 Sure. Generally speaking, when you think about Veeam or your recovery process after a cyber event, recovery becomes your number one priority. Everyone should be backing up their data—that’s table stakes. But having a breadth and depth of recovery features and functionality, so you can restore your data where and when you need it, is critical.
That’s where our platform helps organizations increase their cyber resilience, ensuring they have full flexibility and access to their data.
02:17 Let’s jump into the demo. Show me some of the key features of the Veeam Data Platform.
02:22 Perfect. First, let’s talk about the Veeam difference. Take, for example, an attack from a ransomware group like Akira. If we look at a company without Veeam, a threat actor might gain access to the environment unnoticed. There’s no notification, and no way to determine how they got in. With Veeam, companies can detect brute force login attempts within vSphere and monitor unusual behavior on specific VMDKs. Our Veeam ONE analytics and monitoring platform provides robust, built-in alarms. These alarms monitor vSphere, Hyper-V, backup and replication environments, and even protect items within Microsoft 365. One key feature we showcase is possible ransomware activity detection. This runs on the actual virtual machine in production—not just before backup occurs, but as the virtual machines are active. We have predefined rules and logic that can trigger alerts when suspicious or anomalous behavior is detected. We can assign these rules to specific workloads and notify security administrators for further investigation. Additionally, we can take automated actions, such as isolating a compromised virtual machine from the network, either automatically or with approval settings.
04:21 Are these settings designed to prevent security teams from experiencing alarm fatigue or dealing with too many false positives?
04:34 Yes, exactly. The system is configured to reduce unnecessary alerts while ensuring critical threats are flagged appropriately. Let’s take a look at another feature. When a threat actor gains access and moves laterally within an environment, how does Veeam notify organizations of this potential intrusion?
With Veeam, not only can we detect anomalies on production machines, but we also recognize that cybercriminals often target backup servers first. They aim to corrupt or delete backups to ensure they get paid in a ransomware attack. Veeam can scan endpoints and backup servers to detect unusual activity. For example, we can identify brute force login attempts on backup servers or flag malware detection events within our software. We can also detect encryption attempts, unauthorized tools for data exfiltration, and suspicious file types.
06:03 Wow! I didn’t even realize attackers often target backup servers first. Their strategy makes sense—if companies have backups, the attackers try to compromise them before stealing data. Is that what you’re seeing?
06:22 Absolutely. In our annual Ransomware Trends Report, 1,200 organizations affected by ransomware reported that 96% of the time, threat actors targeted their backups first—trying to corrupt, delete, or encrypt them.
06:43 So, what happens next?
06:48 Let’s say a threat actor gains access and begins encrypting or exfiltrating data. Over the past two years, Veeam has enhanced its security scanning capabilities to help customers identify clean recovery points. For example, our malware detection system can identify:
- Encryption activity: Detecting mass changes in production data.
- Suspicious files: Monitoring for onion links or unauthorized tools like Mimikatz.
- Signature detection: Identifying malware with polymorphic capabilities or leveraging existing antivirus solutions for scanning backups.
We also allow customers to integrate Veeam with third-party security tools. If a security tool detects an encryption attempt, it can trigger an out-of-band backup automatically.
09:10 At this point, if a company doesn’t have Veeam, is their data already compromised? If they do have Veeam, does this stop the attack, or does it just prevent things from getting worse?
09:29 With Veeam, they can begin stopping the attack.
Our platform enables:
- Out-of-band backups: Capturing as much clean data as possible before it’s encrypted.
- Security alerts: Notifying security teams of active threats.
- Incident triage: Understanding what’s been compromised and preparing for recovery.
By proactively detecting attacks, we help organizations recover without reinfecting themselves.
13:29 I like that this isn’t just about preventing attacks but also helping companies respond effectively. Security isn’t 100% foolproof, so it’s crucial to have a plan for stopping and recovering from incidents quickly.
14:12 Exactly. Security professionals talk about defense in depth—having multiple layers to prevent intrusions. I like to think of it as defense in response—giving organizations multiple ways to respond and recover.
15:35 Before the show, we discussed your security integrations. That’s a big advantage since companies don’t want to replace


Midjourney’s surprise: new research on making LLMs write more creatively

Midjourney is best known as one of the leading AI image generators — with nearly 20 million users on its Discord channel, according to third-party trackers, and presumably more atop that on its website — but its ambitions are beginning to expand. Following the news in late summer 2024 that it was building its own computing and AI hardware, the company this week released a new research paper alongside machine learning experts at New York University (NYU) on training text-based large language models (LLMs) such as Meta’s open source Llama and Mistral’s eponymous models to write more creatively. The collaboration, documented in a new research paper published on AI code community Hugging Face, introduces two new techniques — Diversified Direct Preference Optimization (DDPO) and Diversified Odds Ratio Preference Optimization (DORPO) — designed to expand the range of possible outputs while maintaining coherence and readability. For a company that is best known for its diffusion AI image generating models, Midjourney’s new approach to rethinking creativity in text-based LLMs shows that it is not limiting its ambitions to visuals, and that a picture may not actually be worth a thousand words. Could a Midjourney-native LLM or a fine-tuned version of an existing LLM be in the cards from the small, bootstrapped startup? I reached out to Midjourney founder David Holz but have yet to hear back. Regardless of a first-party Midjourney LLM offering, the implications of its new research go beyond academic exercises and could help fuel a new wave of LLM training among enterprise AI teams, product developers, and content creators looking to improve AI-generated text.
It also shows that despite recent interest and investment among AI model providers in new multimodal and reasoning language models, there’s still a lot of juice left to be squeezed, cognitively and performance-wise, from classic Transformer-based, text-focused LLMs.

The problem: AI-generated writing collapses around homogenous outputs

In domains like fact-based Q&A or coding assistance, LLMs are expected to generate a single best response. However, creative writing is inherently open-ended, meaning there are many valid responses to a single prompt. For an example provided by the Midjourney researchers, given a prompt like “Write a story about a dog on the moon”, the LLM could explore multiple diverse paths like:
- An astronaut’s pet dog accidentally left behind after a lunar mission.
- A dog who finds itself in a futuristic canine space colony.
- A stranded dog that befriends an alien species.
Despite this range of possibilities, instruction-tuned LLMs often converge on similar storylines and themes. This happens because:
- Post-training techniques prioritize user preference over originality, reinforcing popular but repetitive responses.
- Instruction tuning often smooths out variation, making models favor “safe” responses over unique ones.
- Existing diversity-promoting techniques (like temperature tuning) operate only at inference time, rather than being baked into the model’s learning process.
This leads to homogenized storytelling, where AI-generated creative writing feels repetitive and lacks surprise or depth.

The solution: modifying post-training methods to prioritize diversity

To overcome these limitations, the researchers introduced DDPO and DORPO, two extensions of existing preference optimization methods. The core innovation in these approaches is the use of deviation—a measure of how much a response differs from others—to guide training. Here’s how it works: During training, the model is given a writing prompt and multiple possible responses.
Each response is compared to others for the same prompt, and a deviation score is calculated. Rare but high-quality responses are weighted more heavily in training, encouraging the model to learn from diverse examples. By incorporating deviation into Direct Preference Optimization (DPO) and Odds Ratio Preference Optimization (ORPO), the model learns to produce high-quality but more varied responses. This method ensures that AI-generated stories do not converge on a single predictable structure, but instead explore a wider range of characters, settings, and themes—just as a human writer might.

What Midjourney’s researchers did to achieve this

The study involved training LLMs on creative writing tasks using a dataset from the subreddit r/writingPrompts, a Reddit community where users post prompts and respond with short stories. The researchers used two base models for their training:
- Meta’s Llama-3.1-8B (an 8-billion-parameter model from the Llama 3 series).
- Mistral-7B-v0.3 (a 7-billion-parameter model from Mistral AI).
Then, they took these models through the following processes:
- Supervised Fine-Tuning (SFT): The models were first fine-tuned using LoRA (Low-Rank Adaptation) to adjust parameters efficiently.
- Preference Optimization: DPO and ORPO were used as baselines—these standard methods focus on improving response quality based on user preference signals. DDPO and DORPO were then applied, introducing deviation-based weighting to encourage more unique responses.
- Evaluation: Automatic evaluation measured semantic and stylistic diversity using embedding-based techniques; human judges assessed whether outputs were diverse and engaging compared to GPT-4o and Claude 3.5.
Key training findings:
- DDPO significantly outperformed standard DPO in terms of output diversity while maintaining quality.
- Llama-3.1-8B with DDPO achieved the best balance of quality and diversity, producing responses that were more varied than GPT-4o while maintaining coherence.
- When dataset size was reduced, DDPO models still maintained diversity, though they required a certain number of diverse training samples to be fully effective.

Enterprise implications: what does it mean for those using AI to produce creative responses — such as in marketing copywriting, corporate storytelling, and film/TV/video game scripting?

For AI teams managing LLM deployment, enhancing output diversity while maintaining quality is a critical challenge. These findings have significant implications for organizations that rely on AI-generated content in applications such as:
- Conversational AI and chatbots (ensuring varied and engaging responses).
- Content marketing and storytelling tools (preventing repetitive AI-generated copy).
- Game development and narrative design (creating diverse dialogue and branching storylines).
For professionals responsible for fine-tuning and deploying models in an enterprise setting, this research provides:
- A new approach to LLM post-training that enhances creativity without sacrificing quality.
- A practical alternative to inference-time diversity tuning (such as temperature adjustments) by integrating diversity into the learning process itself.
The
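The deviation idea behind DDPO and DORPO can be illustrated in a few lines. This is a rough sketch, not the paper's actual formulation: the cosine-distance measure, the softmax weighting, and all function names here are illustrative assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def deviation_scores(embeddings):
    """Deviation of each response = mean cosine distance to the
    other responses for the same prompt. Higher = more unusual."""
    scores = []
    for i, e in enumerate(embeddings):
        sims = [cosine(e, o) for j, o in enumerate(embeddings) if j != i]
        scores.append(1.0 - sum(sims) / len(sims))
    return scores

def deviation_weights(scores, alpha=1.0):
    """Turn deviation scores into per-example training weights via a
    softmax, so rarer responses count more in the preference loss."""
    exps = [math.exp(alpha * s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

In a real training loop, the resulting weights would scale each example's contribution to the DPO or ORPO loss, which is what "rare but high-quality responses are weighted more heavily" amounts to in practice.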


Visualize, Control and Optimize Your Spend With Software Asset Management Tools

In a climate of economic and political uncertainty, tech budgets are under pressure. According to Forrester’s Industry- And Customer-Supporting Software Survey, 2025, 23% of organizations cite budget as their number one software challenge. Currency depreciation, ranging from 6% to 12% in EMEA/APAC, adds further strain to US-dollar-denominated renewals. Making things worse, 27% of organizations report that over 50% of non-IT tech spending occurs without IT oversight (source: Forrester’s Security Survey, 2024). This fragmentation undermines cost management and increases risk. In this environment, software asset management (SAM) tools emerge as a critical lever to gain visibility, regain control, and optimize license utilization — tracking usage in real time and driving cost efficiency across the technology landscape.

Reclaiming Control In A Complex Tech Stack

SAM tools bring discipline to the entire software lifecycle by automating discovery, deployment, and retirement. They centralize software-as-a-service (SaaS) management, giving IT clear visibility into subscriptions, usage patterns, and costs. This enables smarter license utilization and prevents waste. In addition to cost tracking, high-performance IT organizations use SAM for budget forecasting and for identifying underutilized assets. Leading platforms reduce the risk of surprise true-ups by automating license reconciliation and real-time usage monitoring to maintain continuous compliance. SAM also enforces governance by aligning software usage with policy, reducing audit risks and unexpected spend. In The Forrester Wave™: Software Asset Management Solutions, Q1 2025, we highlight the following features as crucial when selecting a SAM tool. Ensure that the tool offers:
- AI/ML in contract and license management. Vendors should integrate AI/ML to automate contract term extraction, ensure compliance, and provide predictive insights into software usage trends.
- SaaS management with extended FinOps capabilities.
Providers should offer comprehensive SaaS management with real-time visibility into subscriptions, license utilization, and spending optimization.
- Support for the entire software lifecycle management process. Vendors should enable end-to-end lifecycle management, streamlining software acquisition, requests, approvals, and compliance.

Choosing The Right Vendor Is Half The Battle

No two IT environments are the same. Each operates with its own blend of tech stacks, philosophies around build vs. buy, asset management practices, and definitions of success. Accordingly, selecting the right SAM vendor that meets the IT team’s needs is crucial. IT teams should start by identifying their most critical criteria, such as avoiding true-ups, managing security vulnerabilities, or optimizing costs. They should then examine these criteria in detail to identify the essential functionalities of a SAM tool that can best meet their needs. Refer to our latest Forrester Wave evaluation of the SAM solutions space to gain insight into each type of functionality, helping you choose the right vendor that aligns with broader organizational goals and objectives.
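The license reconciliation that guards against surprise true-ups reduces to comparing entitlements against discovered installs. As a toy sketch only (no real SAM tool's API; the data shapes are invented for illustration):

```python
def reconcile_licenses(entitlements, installs):
    """Compare purchased entitlements against discovered installs.

    entitlements: {product: seats owned}
    installs:     {product: seats in use, from discovery scans}

    A shortfall flags true-up/audit exposure; a surplus flags
    shelfware whose spend could be reclaimed at renewal.
    """
    report = {}
    for product, owned in entitlements.items():
        used = installs.get(product, 0)
        report[product] = {
            "owned": owned,
            "used": used,
            "shortfall": max(0, used - owned),  # audit / true-up risk
            "surplus": max(0, owned - used),    # reclaimable licenses
        }
    return report
```

Real SAM platforms layer contract terms, usage telemetry, and ML-based forecasting on top of this basic comparison, but the shortfall/surplus split is the core signal they act on.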


T-Mobile, UScellular Deal Could Cut Service, FCC Warned

By Christopher Cole (March 24, 2025, 7:24 PM EDT) — The planned multibillion-dollar tie-up between T-Mobile and UScellular wireless operations could harm consumers by shutting down cell towers in areas that can’t be served without government deployment aid, the deal’s opponents told the Federal Communications Commission….


How Health Cos. Can Navigate Data Security Regulation Limbo

By William Li (March 20, 2025, 5:35 PM EDT) — Healthcare organizations face a critical dilemma: As data breaches reach unprecedented levels, the regulatory framework designed to protect patient information hangs in limbo….


Microsoft infuses enterprise agents with deep reasoning, unveils data Analyst agent that outsmarts competitors

Microsoft has built the largest enterprise AI agent ecosystem, and is now extending its lead with powerful new capabilities that position the company ahead in one of enterprise tech’s most exciting segments. The company announced Tuesday evening two significant additions to its Copilot Studio platform: deep reasoning capabilities that enable agents to tackle complex problems through careful, methodical thinking, and agent flows that combine AI flexibility with deterministic business process automation. Microsoft also unveiled two specialized deep reasoning agents for Microsoft 365 Copilot: Researcher and Analyst. “We have customers with thousands of agents already,” Microsoft’s Corporate Vice President for Business and Industry Copilot Charles Lamanna told VentureBeat in an exclusive interview on Monday. “You start to have this kind of agentic workforce where no matter what the job is, you probably have an agent that can help you get it done faster.”

Microsoft’s distinctive Analyst agent

While the Researcher agent mirrors capabilities from competitors like OpenAI’s Deep Research and Google’s Deep Research, Microsoft’s Analyst agent represents a more differentiated offering. Designed to function like a personal data scientist, the Analyst agent can process diverse data sources, including Excel files, CSVs, and embedded tables in documents, generating insights through code execution and visualization. “This is not a base model off the shelf,” Lamanna emphasized. “This is quite a bit of extensions and tuning and training on top of the core models.” Microsoft has leveraged its deep understanding of Excel workflows and data analysis patterns to create an agent that aligns with how enterprise users actually work with data.
The Analyst can automatically generate Python code to process uploaded data files, produce visualizations, and deliver business insights without requiring technical expertise from users. This makes it particularly valuable for financial analysis, budget forecasting, and operational reporting use cases that typically require extensive data preparation.

Deep reasoning: Bringing critical thinking to enterprise agents

Microsoft’s deep reasoning capability extends agents’ abilities beyond simple task completion to complex judgment and analytical work. By integrating advanced reasoning models like OpenAI’s o1 and connecting them to enterprise data, these agents can tackle ambiguous business problems more methodically. The system dynamically determines when to invoke deeper reasoning, either implicitly based on task complexity or explicitly when users include prompts like “reason over this” or “think really hard about this.” Behind the scenes, the platform analyzes instructions, evaluates context, and selects appropriate tools based on the task requirements. This enables scenarios that were previously difficult to automate. For example, one large telecommunications company uses deep reasoning agents to generate complex RFP responses by assembling information from across multiple internal documents and knowledge sources, Lamanna told VentureBeat. Similarly, Thomson Reuters employs these capabilities for due diligence in merger and acquisition reviews, processing unstructured documents to identify insights, he said.

Agent flows: Reimagining process automation

Microsoft has also introduced agent flows, which effectively evolve robotic process automation (RPA) by combining rule-based workflows with AI reasoning. This addresses customer demands for integrating deterministic business logic with flexible AI capabilities. “Sometimes they don’t want the model to freestyle.
They don’t want the AI to make its own decisions. They want to have hard-coded business rules,” Lamanna explained. “Other times they do want the agent to freestyle and make judgment calls.” This hybrid approach enables scenarios like intelligent fraud prevention, where an agent flow might use conditional logic to route higher-value refund requests to an AI agent for deep analysis against policy documents. Pets at Home, a U.K.-based pet supplies retailer, has already deployed this technology for fraud prevention. Lamanna revealed the company has saved “over a million pounds” through the implementation. Similarly, Dow Chemical has realized “millions of dollars saved for transportation and freight management” through agent-based optimization.

The Microsoft Graph advantage

Central to Microsoft’s agent strategy is its enterprise data integration through the Microsoft Graph, a comprehensive mapping of workplace relationships between people, documents, emails, calendar events, and business data. This provides agents with contextual awareness that generic models lack. “The lesser-known secret capability of the Microsoft Graph is that we’re able to improve relevance on the graph based on engagement and how tightly connected some files are,” Lamanna revealed. The system identifies which documents are most referenced, shared, or commented on, ensuring agents reference authoritative sources rather than outdated copies. This approach gives Microsoft a significant competitive advantage over standalone AI providers. While competitors may offer advanced models, Microsoft combines these with workplace context and fine-tuning optimized explicitly for enterprise use cases and Microsoft tools.
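The hybrid pattern Lamanna describes — deterministic rules first, AI judgment only where it is wanted — is easy to sketch. This is purely an illustration of the routing idea, not Copilot Studio's actual flow format; the threshold, names, and stubbed agent call are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical cutoff: requests below it follow hard-coded rules,
# requests above it are escalated to an AI agent for review.
REVIEW_THRESHOLD = 500.0

@dataclass
class RefundRequest:
    order_id: str
    amount: float

def auto_approve(req: RefundRequest) -> dict:
    """Deterministic branch: rule-based auto-approval."""
    return {"order_id": req.order_id, "decision": "approved", "route": "rules"}

def ai_review(req: RefundRequest) -> dict:
    """Stub for the non-deterministic branch: in a real flow this would
    call a reasoning agent that checks the request against policy docs."""
    return {"order_id": req.order_id, "decision": "needs_review", "route": "agent"}

def refund_flow(req: RefundRequest) -> dict:
    """Agent flow: conditional logic decides which branch runs."""
    if req.amount < REVIEW_THRESHOLD:
        return auto_approve(req)
    return ai_review(req)
```

The design point is that the escalation decision itself stays deterministic and auditable; only the escalated cases ever reach the model.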
Microsoft can leverage the same web data and model technology that competitors can, Lamanna noted, “but we then also have all the content inside the enterprise.” This creates a flywheel effect where each new agent interaction further enriches the graph’s understanding of workplace patterns.

Enterprise adoption and accessibility

Microsoft has prioritized making these powerful capabilities accessible to organizations with varying technical resources, Lamanna said. The agents are exposed directly within Copilot, allowing users to interact through natural language without prompt engineering expertise. Meanwhile, Copilot Studio provides a low-code environment for custom agent development. “It’s in our DNA to have a tool for everybody, not just people who can boot up a Python SDK and make calls, but anybody can start to build these agents,” Lamanna emphasized. This accessibility approach has fueled rapid adoption. Microsoft previously revealed that over 100,000 organizations have used Copilot Studio and that more than 400,000 agents were created in the last quarter.

The competitive landscape

While Microsoft appears to lead enterprise agent deployment today, competition is intensifying. Google has expanded its Gemini capabilities for agents and agentic coding, while OpenAI’s o1 model and Agents SDK provide powerful reasoning and agentic tools for developers. Big enterprise application companies like Salesforce, Oracle, ServiceNow, SAP and others have all launched agentic platforms


1+1+AI=5: How Generative AI is Empowering Teams

We often talk about AI through the lens of individual productivity: automating tasks, accelerating workflows, and reducing costs. But there’s something far more powerful — and far less discussed — emerging in front of us: the impact of generative AI on teams — not just replacing tasks but reshaping how people work together. That’s the story behind one of the most fascinating studies I’ve seen this year, a large-scale experiment conducted with 776 professionals, in commercial and R&D roles, at Procter & Gamble (P&G) in collaboration with researchers from Harvard University, the Wharton School of the University of Pennsylvania, and ESSEC Business School in France. The question the study asked was bold: Can AI act as a teammate? Not just a tool, but a genuine contributor to team dynamics, performance, and emotional experience? The study’s findings should provide inspiration for any tech and business leader who is rethinking the operating model of collaboration in an AI-enabled enterprise.

AI Equals And Augments Human Collaboration

In traditional settings, teams outperform individuals by integrating diverse perspectives. But what happens when an individual is paired with genAI? The study found that individuals using generative AI matched the performance of human teams working without it. More strikingly, teams with genAI outperformed everyone else, producing higher-quality, faster solutions with more comprehensive detail. We’re talking real business challenges that P&G employees tackle in their day-to-day work.

AI Breaks Down Silos

If you’ve ever led a cross-functional team, you’ve seen how people’s ideas tend to reflect their domain: R&D folks skew technical, while commercial folks lean toward marketability. Collaboration helps blend those ideas, but it takes time, trust, and iteration. Here’s what changed with AI: Those silos started to dissolve.
With genAI, both R&D and commercial professionals produced solutions that were balanced — integrating both technical and commercial dimensions. The genAI interface nudged them there. It helped them think beyond their professional default. In other words, genAI is enabling cross-functional thinking at the point of creation. That’s not task automation — that’s meaningful, intellectual contribution. For CIOs, this opens the door to rethinking how we configure teams, how we design roles, and how we structure collaboration.

AI Makes Work More Human

There’s one more dimension, and it might be the most surprising. We tend to associate technology adoption with friction, stress, and overload — not here. Participants using AI reported significantly higher positive emotions: more excitement, more energy, less frustration. In fact, individuals working with AI felt as positive about the experience as people collaborating in human-only teams. And when teams used AI together? Emotional engagement surged even higher. When was the last time we saw a piece of technology improve morale?

1+1+AI=5

This isn’t just a productivity story. It’s a human story. AI, used well, doesn’t just get the job done; it makes people feel more confident, more creative, and more connected to the work. That’s true empowerment. That’s why 1+1+AI doesn’t equal 3; it equals 5. This study was conducted with prior-generation models not optimized for team interaction. So imagine what’s possible when tech and business leaders start building AI into workflows designed for collaboration. We can move from using genAI as a tool that automates tasks to integrating it as a core part of how teams think, solve, and create together, because the future of work isn’t just faster; it’s more human, more inclusive, and, if we design it right, more extraordinary than we imagined.


Breaches And Lawsuits And Fines, Oh My! What We Learned The Hard Way From 2024

With the average cost of a data breach at $2.7 million and 33% of enterprises reporting being breached three or more times over the past 12 months, understanding and learning from past incidents is not just beneficial — it’s essential. Our detailed examination of the top 35 breaches and privacy fines of 2024 has unearthed critical insights into the evolving cyberthreat landscape. Among the key findings: Attacks cause more than just monetary damage; inadequate data protection severely impacts customer trust; and healthcare in particular is at a critical juncture, because it’s not just brand reputation at stake but delivery of critical medical services.

2024 also saw hefty fines levied on organizations. GDPR is once again the most enforced privacy regulation in the world, but it isn’t the only regulation with sharp penalties. In the US, more states are putting privacy laws in place and holding organizations accountable. Not only does Meta hold the record for the highest-ever GDPR fine, €1.2 billion in 2023 from an Irish regulator, but in 2024, Meta also took home the largest-ever US state fine at $1.4 billion. While some companies can pay off their fines like parking tickets, most organizations do not have the capital or lawyers to copy this behavior.

From our analysis of the top breaches and fines, we found the following:

Massive breaches and outages drive regulatory proposals and changes. In early 2024, US Executive Order 14117 focused its attention on bulk sensitive personal data, with emphasis on telecommunications and the healthcare market. The US Federal Communications Commission has proposed telecom cybersecurity and supply chain risk management rules. The proposed HIPAA Security Rule that is currently open for comment is the first major update to the rule in over a decade. New York State, acting independently, implemented strict cybersecurity mandates for hospitals. And not to be outdone, the EU has focused on operational resilience: the Digital Operational Resilience Act (DORA), which has been years in the making and makes sweeping demands on security practices, went into effect January 17, 2025.

Organizations need to worry about more than regulatory fines. Firms operating within the US should be aware that, although the regulatory penalties they face can be substantial, another financial risk on the horizon can’t be overlooked. Recent data indicates that the proportion of companies confronted with class-action lawsuits has reached its highest point in 13 years, and the expenses of defending against these class-action lawsuits are projected to exceed the costs of regulatory fines this year.

Not all breaches are for financial gain. This past year, US ISPs and telecoms found their systems infiltrated by Chinese state-affiliated actors. Investigation of these breaches suggests the focus was on a small number of individuals of political interest. In a separate incident, state-sponsored Chinese attackers breached the US Department of the Treasury through third-party vendor BeyondTrust’s support software. The objective was to gain sensitive information and conduct reconnaissance.

To see the rest of our analysis and, more importantly, get the recommended actions you can take to protect your organization, read our report, Lessons Learned From The World’s Biggest Data Breaches And Privacy Abuses, 2024, or schedule a guidance session with us to talk more. (written with Danielle Chittem, research associate) source

Latham-Led Online Ticket Giant StubHub Files IPO

By Tom Zanki (March 21, 2025, 8:47 PM EDT) — Private equity- and venture-backed online ticket reseller StubHub Holdings Inc. on Friday filed its long-awaited initial public offering plans, represented by Latham & Watkins LLP and underwriters counsel Cooley LLP…. source

AMD is powering AI success with smarter, right-sized compute

Presented by AMD

As AI adoption accelerates, businesses are encountering compute bottlenecks that extend beyond raw processing power. The challenge is not only about having more compute; it’s about having smarter, more efficient compute, customized to an organization’s needs, with the ability to scale alongside AI innovation. AI models are growing in size and complexity, requiring architectures that can process massive datasets, support continuous learning and provide the efficiency needed for real-time decision-making. From AI training and inference in hyperscale data centers to AI-driven automation in enterprises, the ability to deploy and scale compute infrastructure seamlessly is now a competitive differentiator.

“It’s a tall order. Organizations are struggling to stay up-to-date with AI compute demands, scale AI workloads efficiently and optimize their infrastructure,” says Mahesh Balasubramanian, director, datacenter GPU product marketing at AMD. “Every company we talk to wants to be at the forefront of AI adoption and business transformation. The challenge is, they’ve never before faced such a massive, era-defining technology.”

Launching a nimble AI strategy

Where to start? Modernizing existing data centers is an essential first step to removing bottlenecks to AI innovation. This frees up space and power, improves efficiency and greens the data center, all of which helps the organization stay nimble enough to adapt to the changing AI environment. “You can upgrade your existing data center from a three-generation-old Intel Xeon 8280 CPU to the latest generation of AMD EPYC CPU and save up to 68% on energy while using 87% fewer servers³,” Balasubramanian says.
“It’s not just a smart and efficient way of upgrading an existing data center, it opens up options for the next steps in upgrading a company’s compute power.” And as an organization evolves its AI strategy, it’s critical to have a plan for fast-growing hardware and computational requirements. It’s a complex undertaking, whether you’re working with a single model underlying organizational processes, customized models for each department or agentic AI. “If you understand your foundational situation – where AI will be deployed, and what infrastructure is already available from a space, power, efficiency and cost perspective – you have a huge number of robust technology solutions to solve these problems,” Balasubramanian says.

Beyond one-size-fits-all compute

A common perception in the enterprise is that AI solutions require a massive investment right out of the gate, across the board, on hardware, software and services. That has proven to be one of the most common barriers to adoption — and an easy one to overcome, Balasubramanian says. The AI journey kicks off with a look at existing tech and upgrades to the data center; from there, an organization can start scaling for the future by choosing technology that can be right-sized for today’s problems and tomorrow’s goals. “Rather than spending everything on one specific type of product or solution, you can now right-size the fit and solution for the organization you have,” Balasubramanian says. “AMD is unique in that we have a broad set of solutions to meet bespoke requirements. We have solutions from cloud to data center, edge solutions, client and network solutions and more. This broad portfolio lets us provide the best performance across all solutions, and lets us offer in-depth guidance to enterprises looking for the solution that fits their needs.” That AI portfolio is designed to tackle the most demanding AI workloads — from foundation model training to edge inference.
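The vendor-cited consolidation figures above (up to 68% less energy and 87% fewer servers when moving from three-generation-old Xeon CPUs to current EPYC CPUs) can be turned into a rough planning estimate. The sketch below is hypothetical and not an AMD tool; it simply applies those “up to” ratios, so real-world results will vary by workload and configuration.

```python
# Hypothetical back-of-the-envelope consolidation estimate using the
# "up to 87% fewer servers / up to 68% less energy" figures cited above.
# These are vendor best-case ratios, not guarantees.
def consolidation_estimate(old_servers, old_kw_per_server,
                           server_reduction=0.87, energy_reduction=0.68):
    """Return (estimated new server count, estimated new total power in kW)."""
    new_servers = max(1, round(old_servers * (1 - server_reduction)))
    new_power_kw = old_servers * old_kw_per_server * (1 - energy_reduction)
    return new_servers, round(new_power_kw, 1)

# Example: a 100-server fleet drawing 1 kW per server.
servers, power_kw = consolidation_estimate(100, 1.0)
print(servers, power_kw)  # 13 32.0  (down from 100 servers / 100 kW)
```

Treat the output as an upper bound on savings when sizing a data center refresh, not as a substitute for workload-specific benchmarking.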
The latest AMD Instinct™ MI325X GPUs, powered by HBM3e memory and CDNA architecture, deliver superior performance for generative AI workloads, providing up to 1.3X better inference performance compared to competing solutions¹,². AMD EPYC CPUs continue to set industry standards, delivering unmatched core density, energy efficiency and the high memory bandwidth critical for AI compute scalability. Collaboration with a wide range of industry leaders — including OEMs like Dell, Supermicro, Lenovo and HPE, network vendors like Broadcom and Marvell, and switching vendors like Arista and Cisco — maximizes the modularity of these data center solutions. It scales seamlessly from two or four servers to thousands, all built with next-gen Ethernet-based AI networking and backed by industry-leading technology and expertise.

Why open-source software is critical for AI advancement

While both hardware and software are crucial for tackling today’s AI challenges, open-source software will drive true innovation. “We believe there’s no one company in this world that has the answers for every problem,” Balasubramanian says. “The best way to solve the world’s problems with AI is to have a united front, and to have a united front means having an open software stack that everyone can collaborate on. That’s a key part of our vision.” AMD’s open-source software stack, ROCm™, is widely adopted by industry leaders like OpenAI, Microsoft, Meta, Oracle and more. Meta runs its largest and most complicated model on AMD Instinct GPUs. ROCm comes with standard support for PyTorch, the largest AI framework, and supports more than a million models from the Hugging Face model repository, enabling customers to begin their journey with a seamless out-of-the-box experience on ROCm software and Instinct GPUs.
AMD works with projects like PyTorch, TensorFlow, JAX, OpenAI’s Triton and others to ensure that no matter the size of the model, small or large, applications and use cases can scale anywhere from a single GPU all the way to tens of thousands of GPUs — just as its AI hardware can scale to match any size workload. ROCm’s deep ecosystem engagement, with continuous integration and continuous development, ensures that new AI functions and features can be securely integrated into the stack. These features go through an automated testing and development process to ensure they fit in, are robust, don’t break anything and can support the software developers and data scientists using them right away. And as AI evolves, ROCm is pivoting to offer new capabilities, rather than locking an organization into one particular vendor that might not offer the flexibility necessary to grow. “We want to give organizations an open-source software stack that is completely open