CISA Adds Four Vulnerabilities to Catalog for Federal Enterprise

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added four vulnerabilities to its Known Exploited Vulnerabilities (KEV) catalog, warning federal agencies to take immediate action. While the mandate applies primarily to Federal Civilian Executive Branch agencies, the alert serves as a wake-up call for all organizations to assess their security posture and defend against emerging cyber threats.

What are the four vulnerabilities?

The four vulnerabilities are:

CVE-2024-45195: A direct request (‘Forced Browsing’) vulnerability in the Apache OFBiz ERP system. A threat actor could use URLs, scripts, or files to run arbitrary code on the server. It was patched in September 2024.

CVE-2024-29059: An information disclosure vulnerability in Microsoft .NET Framework versions 3.5 and 4.8. An error message could expose sensitive information such as passwords or the full pathname of the installed application. The error could surface in multiple ways: generated automatically by the source code, or by a language interpreter or other external component. It was patched in March 2024.

CVE-2018-9276: An OS command injection vulnerability in PRTG Network Monitor that could be exploited by a threat actor with administrative access to the PRTG System Administrator. It was patched in 2018.

CVE-2018-19410: Another issue in PRTG Network Monitor. By exploiting it, an attacker can use HTTP requests to perform a Local File Inclusion attack and create users with read-write privileges, including administrator. It was patched in 2018.

“These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise,” CISA said in its alert.
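CISA publishes the KEV catalog as a machine-readable JSON feed, which makes it straightforward to cross-reference new entries against deployed software. A minimal sketch of that check, using an illustrative excerpt shaped like the feed (the entries and due dates here are sample data, not the live catalog; the deployed-software list is hypothetical):

```python
import json

# Illustrative excerpt shaped like CISA's KEV JSON feed (the real feed is at
# cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json).
kev_feed = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2024-45195", "vendorProject": "Apache",    "product": "OFBiz",                "dueDate": "2025-02-25"},
    {"cveID": "CVE-2024-29059", "vendorProject": "Microsoft", "product": ".NET Framework",       "dueDate": "2025-02-25"},
    {"cveID": "CVE-2018-9276",  "vendorProject": "Paessler",  "product": "PRTG Network Monitor", "dueDate": "2025-02-25"},
    {"cveID": "CVE-2018-19410", "vendorProject": "Paessler",  "product": "PRTG Network Monitor", "dueDate": "2025-02-25"}
  ]
}
""")

# Products running in our (hypothetical) environment.
deployed = {"PRTG Network Monitor", ".NET Framework"}

# Flag KEV entries matching deployed software so patching can be prioritized.
matches = [v for v in kev_feed["vulnerabilities"] if v["product"] in deployed]
for v in matches:
    print(f'{v["cveID"]}: {v["vendorProject"]} {v["product"]} (remediate by {v["dueDate"]})')
```

Running a check like this against the live feed on a schedule is one simple way for non-federal organizations to follow CISA's lead without being bound by the BOD 22-01 deadlines.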
Monitoring known exploited vulnerabilities can strengthen an organization’s overall security posture. In this case, the software companies patched the vulnerabilities, in some instances years ago, so organizations that have applied the updates need take no further action. The vulnerabilities also highlight the importance of compliance and security reporting in critical sectors.


SPAC Market Hums Again Following Multiyear Downturn

By Tom Zanki (February 7, 2025, 8:44 PM EST) — Special purpose acquisition companies are once again asserting their presence in the capital markets and M&A landscape, forming new vehicles at the highest pace in three years — albeit in leaner form than in the last cycle, when many deals ended in busts…


Why The DOJ Is Worried About Networking Innovation And What Paths Lay Ahead For HPE & Juniper

On January 30, 2025, the US Department of Justice (DOJ) moved to block the acquisition of Juniper by HPE. The DOJ is concerned that two vendors (HPE and Cisco) would occupy roughly 60% of the wireless LAN (WLAN) market, stifling innovation from other vendors that hold single-digit market share. HPE and Juniper have begun to contest the suit with various counterpoints regarding WLAN competition, a classic “better together to fuel innovation” story, and a cautionary tale regarding network security, especially for the next generation of emerging technology.

The DOJ is not wrong that sleepy, mature markets like IT networking struggle to innovate without a push from startups. Today, innovation in networking is hard to come by. It’s been a while (the early 2000s) since the major networking players drove innovation in the market, such as creating new protocols and architectures to help networking organizations overcome technology challenges. Since the 2010s, there has been a chasm. What innovation there is has come mainly from two areas: 1) startups like Meraki, Mist, Nicira, and Viptela; and 2) large customers like Facebook and Google. The large cloud providers have been lighthouses for networking professionals in need of guidance on data center best practices, network automation, and alternative network architectures.

The enterprise Wi-Fi market hasn’t moved the needle much, either. Despite being around for 25 years, Wi-Fi solutions hardly do more than connect laptops and mobile phones in office settings, and they struggle to adapt to the new ways users want to leverage other wireless protocols. Few attempts have been made to improve the management experience of supporting the various device types, including IoT devices. Few solutions support basic connections to Bluetooth, EnOcean, NearLink, Thread, and Zigbee technologies.
To HPE Aruba’s credit, it has one of the strongest IoT-ready wireless solutions available, offering a strong smart office and retail portfolio featuring AI security insights, edge computing solutions, and IoT data processing capabilities. However, much of this capability came from innovation under past Aruba leaders and visionaries Dominic Orr and Keerti Melkote.

And that’s just WLAN. The lack of innovation is systemic across the entire IT networking world. The biggest gap is at the edge (manufacturing plants, stadiums, stores, aircraft, etc.), where nontraditional devices need connectivity. With over 100 billion IoT devices projected, the market is vast, driven by new applications, devices, security needs, and hardware requirements. Networking at the edge faces significant challenges, such as wireless interference and troubleshooting in distribution centers. The industry needs business-optimized networks (BON) with verticalized solutions.

Where does that leave us? The IT networking world needs a hero: someone with the resources and the will to revolutionize the industry. Realistically, that isn’t going to come from a tiny startup. But the big players have been focused on market share, marketing, and margins. Change will require true leadership.

Will the DOJ’s action to block the Juniper acquisition save innovation in networking? No. Would the acquisition unlock new innovation? Unlikely. But we’d love to be wrong (see below). Why is it hard to imagine this rosy future? Every major public technology company struggles to fund revolutionary innovation when there are shareholders to satisfy. Realistically, efficiencies gained from a combined portfolio and organization largely go to shareholders, not to fund new organic innovation. And it’s far easier to justify innovation via one-time acquisition costs than via an ongoing stream of unrealized innovation. Ultimately, either future could hold good news for customers if they are put front and center.
Let’s look ahead at the possible outcomes and what each path could entail.

Scenario 1: HPE and Juniper remain separate

HPE’s broader portfolio, paired with its Aruba footprint, gives it a real opportunity to be an edge networking innovator that creates verticalized, easy-to-use solutions. At a time when almost every vendor claims to have cloud-native, AI-driven platform solutions, HPE is the only networking vendor with the resources and portfolio today — such as compute, a multilayer stack, storage, and software — to put together an edge, IoT, and networking solution that delivers a BON solution for retail and adjacent verticals, such as hospitality. No one else is making big bets, turning their back on being a generic technology provider, and choosing a few verticals to go after. This could be HPE’s moment to reclaim its innovator culture of yesteryear and truly own edge and network verticalization.

Juniper’s technology and executive team are ahead of the game with Mist, an AI-driven networking platform that unifies management and monitoring across various networking and security components. Mist innovation is how Juniper grew its WLAN market share so quickly and posed a problem for the competition, including HPE. With more investment, Juniper can continue to disrupt the data center and campus networking management market with its Mist leadership — under Bob Friday and Sujai Hajela — and its Marvis AI solution. Juniper could thereby create a businesswide networking fabric solution, which is essential for the future of networking. Looking forward, the company can expand its networking platform by seamlessly integrating security services, with Marvis as the foundation for automating the secure networking platform. At this stage, Juniper would just be running up against Extreme’s networking platform, its version of a secure businesswide networking fabric.
Cisco, Huawei, and others would be hard-pressed to match Extreme’s and Juniper’s platforms.

Scenario 2: HPE acquires Juniper

A combined organization could take the superpowers of both groups to create a dominant force in IT networking; in practice, though, it’s too tempting to focus on fast cash for shareholders, with much of the acquired party’s leadership choosing to move on as soon as their contract terms allow. To position a combined HPE/Juniper for success, Juniper leadership should take the helm of innovative progress across the portfolio to ensure that products like Mist don’t get stifled as the engineers integrate into the Aruba portfolio (Axis, ClearPass, Cape Networks, Silver Peak, etc.). There will likely be some portfolio


Securing terminal emulation and green screen access from evolving threats

The breadth and complexity of modern cyber-attacks have made an attack on IT infrastructure, including mainframes, a matter of ‘when,’ not ‘if.’ Often, these attacks come down to system access: a bad actor who shouldn’t be there slips into critical systems, resulting in disaster. It’s a reality that is growing increasingly common. In fact, incidents involving the use of stolen or compromised credentials increased by 71% year over year in 2024.

Regulators have long taken steps to protect sensitive information and to guide businesses on the protections and policies they must have in place, through regulations like the GDPR and the Digital Operational Resilience Act (DORA). And now, with the rise in compromised credentials, many of these regulations are evolving to go deeper into identity and access management (IAM), with controls like encryption and multi-factor authentication for remote access.

Terminal emulation is critical for organizations to enable their employees to access host systems through a terminal-like interface. And with green screen capabilities, organizations can maintain access to mainframe systems through a desktop interface. But as more users gain access to these critical systems, organizations open themselves up to greater risk. Let’s take a closer look at how these regulations are shifting and what organizations that depend on terminal emulation and green screens should consider to keep their systems secure.

Adapting to a shifting regulatory reality

As IT environments evolve, so do the threats from bad actors looking to sneak in and wreak havoc. A security breach can be devastating for businesses, with the average cost in the U.S. rising by 10% in 2024 to its highest total ever. In turn, there has been a steady rise in regulations and compliance guidelines aimed at keeping sensitive systems and data secure.
For businesses that rely on terminal emulation and green screens, these regulations are increasingly bringing their systems into focus. For instance, recent changes to New York State’s 23 NYCRR Part 500 regulation tackle challenges with remote access around governance, encryption, and incident response. In particular, these rules require multi-factor authentication (MFA) for remote access to information systems, for third-party applications where nonpublic information (NPI) is accessible, and for privileged accounts. That’s just one example of how cybersecurity regulations are trending. With that in mind, it’s clear how important it is for organizations to extend their modernization efforts to their green screen and terminal emulation tools.

Tapping into secure host access

When it comes to identifying the right solution for secure host access, easy integration is crucial. Risks change, and so do regulations. With a solution that allows simple integration of green screen access with existing IAM capabilities, organizations can gain a deeper level of defense while remaining compliant. A solution like Rocket® Secure Host Access brings existing IAM solutions to users accessing host applications, secures the terminal emulation authentication process, and offers centrally managed, high-availability host application access that is deployable across infrastructures. The benefits of a solution like this also extend to green screens and the mainframe. For organizations that need to manage access on those green screens, these host access capabilities make it easy to handle mainframe terminal emulation sessions and monitor encryption status. Solutions like Rocket Software’s also allow organizations to fully use MFA tools for mainframe applications. By folding in these capabilities, organizations can avoid non-compliance—a threat that, on top of an attack, can lead to even more costs in the form of fines or penalties.
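Vendor tooling aside, the underlying transport requirement — encrypted, authenticated terminal sessions — can be expressed in a few lines at the connection layer. A minimal sketch of the TLS posture regulators increasingly expect for remote host access, using Python’s standard library (the hostname and TN3270-over-TLS port mentioned in the comments are illustrative, not a reference to any specific product):

```python
import ssl

def make_terminal_tls_context() -> ssl.SSLContext:
    """TLS settings suitable for an encrypted terminal-emulation session."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSL and TLS 1.0/1.1
    ctx.check_hostname = True                     # bind the certificate to the host name
    ctx.verify_mode = ssl.CERT_REQUIRED           # refuse unauthenticated servers
    return ctx

ctx = make_terminal_tls_context()
# ctx.wrap_socket(sock, server_hostname="mainframe.example.com") would then
# protect a TN3270 session (conventionally port 992; both values illustrative).
```

Pairing a context like this with MFA at the IAM layer covers the two controls highlighted above: encryption in transit and strong authentication for remote access.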
Extending enterprise authentication and authorization practices to host applications helps create an end-to-end IT security solution that encourages compliance and limits the risk of potential attacks. As security threats evolve and grow more complex, regulators remain determined to keep up, implementing new policies or changing existing compliance requirements to protect NPI and ensure businesses are prepared to stop a breach before it can do serious damage. Implementing the right solution, like Rocket Secure Host Access, is a critical step in the right direction, helping future-proof security capabilities while keeping up with the latest regulatory standards. Learn more about how Rocket Software can help your organization defend against security threats and modernize critical IT systems.


Jury Awards Players $25M In High 5 Mobile Gambling Case

By Greg Lamm (February 7, 2025, 8:50 PM EST) — A Washington federal jury on Friday awarded nearly $25 million to a class of players who said they were injured by game developer High 5 Games’ social casino-style mobile apps that targeted gambling addicts as “whales.”…


Google launches Gemini 2.0 Pro, Flash-Lite and connects reasoning model Flash Thinking to YouTube, Maps and Search

Google’s Gemini series of AI large language models (LLMs) started off rough nearly a year ago, with some embarrassing incidents of image generation gone awry, but it has steadily improved since then, and the company appears intent on making its second-generation effort — Gemini 2.0 — the biggest and best yet for consumers and enterprises. Today, the company announced the general release of Gemini 2.0 Flash, introduced Gemini 2.0 Flash-Lite, and rolled out an experimental version of Gemini 2.0 Pro. These models, designed to support developers and businesses, are now accessible through Google AI Studio and Vertex AI, with Flash-Lite in public preview and Pro available for early testing.

“All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months,” Koray Kavukcuoglu, CTO of Google DeepMind, wrote in the company’s announcement blog post — showcasing an advantage Google is bringing to the table even as competitors such as DeepSeek and OpenAI continue to launch powerful rivals.

Google plays to its multimodal strengths

Neither DeepSeek-R1 nor OpenAI’s new o3-mini model can accept multimodal inputs — that is, images and file uploads or attachments. While R1 can accept them on its website and in its mobile app chat, the model performs optical character recognition (OCR), a more than 60-year-old technology, to extract only the text from these uploads — it does not actually understand or analyze any of the other features contained therein. However, both are a new class of “reasoning” models that deliberately take more time to think through answers, reflecting on “chains of thought” and the correctness of their responses.
That’s in contrast to typical LLMs like the Gemini 2.0 Pro series, so the comparison between Gemini 2.0, DeepSeek-R1, and OpenAI o3 is a bit apples-to-oranges. But there was some news on the reasoning front today from Google, too: CEO Sundar Pichai took to the social network X to declare that the Google Gemini mobile app for iOS and Android has been updated with Google’s own rival reasoning model, Gemini 2.0 Flash Thinking. The model can be connected to Google Maps, YouTube, and Google Search, allowing for a whole new range of AI-powered research and interactions that simply can’t be matched by upstarts like DeepSeek and OpenAI, which lack such services. I tried it briefly on the Google Gemini iOS app on my iPhone while writing this piece, and it was impressive based on my initial queries: it thought through the commonalities of the top 10 most popular YouTube videos of the last month and provided me a table of nearby doctors’ offices and their opening and closing hours, all within seconds.

Gemini 2.0 Flash enters general release

The Gemini 2.0 Flash model, originally launched as an experimental version in December, is now production-ready. Designed for high-efficiency AI applications, it provides low-latency responses and supports large-scale multimodal reasoning. One major benefit over the competition is its context window: the number of tokens a user can send as a prompt and receive back in one exchange with an LLM-powered chatbot or application programming interface (API). While many leading models, such as OpenAI’s new o3-mini that debuted last week, support only 200,000 or fewer tokens — about the equivalent of a 400-to-500-page novel — Gemini 2.0 Flash supports 1 million, meaning it is capable of handling vast amounts of information, making it particularly useful for high-frequency and large-scale tasks.
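The novel-length comparison can be sanity-checked with the common rule of thumb that one token is roughly 0.75 English words (an approximation that varies by tokenizer and text, used here purely for illustration):

```python
# Rough conversion constants: ~0.75 words per token, ~300 words per printed
# page. Both are approximations, not tokenizer-exact figures.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

def tokens_to_pages(tokens: int) -> int:
    """Approximate printed-page equivalent of a token count."""
    return round(tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)

print(tokens_to_pages(200_000))    # o3-mini-class context window
print(tokens_to_pages(1_000_000))  # Gemini 2.0 Flash
print(tokens_to_pages(2_000_000))  # Gemini 2.0 Pro (experimental)
```

Under these assumptions a 200,000-token window is about 500 pages, consistent with the 400-to-500-page estimate above, while 1 million tokens works out to several novels’ worth of text in a single exchange.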
Gemini 2.0 Flash-Lite arrives to bend the cost curve

Gemini 2.0 Flash-Lite, meanwhile, is an all-new LLM aimed at providing a cost-effective AI solution without compromising on quality. Google DeepMind states that Flash-Lite outperforms its full-size (larger parameter-count) predecessor, Gemini 1.5 Flash, on third-party benchmarks such as MMLU Pro (77.6% vs. 67.3%) and Bird SQL programming (57.4% vs. 45.6%), while maintaining the same pricing and speed. It also supports multimodal input and features a context window of 1 million tokens, like the full Flash model. Currently, Flash-Lite is available in public preview through Google AI Studio and Vertex AI, with general availability expected in the coming weeks.

Gemini 2.0 Flash-Lite is priced at $0.075 per million input tokens and $0.30 per million output tokens. Flash-Lite is positioned as a highly affordable option for developers, outperforming Gemini 1.5 Flash across most benchmarks while maintaining the same cost structure. Logan Kilpatrick highlighted the affordability and value of the models, stating on X: “Gemini 2.0 Flash is the best value prop of any LLM, it’s time to build!” Indeed, compared to other leading traditional LLMs available via provider API — such as OpenAI’s GPT-4o mini ($0.15/$0.60 per 1 million tokens in/out), Anthropic’s Claude ($0.80/$4.00 per 1M in/out), and even DeepSeek’s traditional LLM V3 ($0.14/$0.28) — Gemini 2.0 Flash appears to be the best bang for the buck.

Gemini 2.0 Pro arrives in experimental availability with a 2-million-token context window

For users requiring more advanced AI capabilities, the Gemini 2.0 Pro (experimental) model is now available for testing. Google DeepMind describes it as its strongest model for coding performance and handling complex prompts.
It features a 2 million-token context window and improved reasoning capabilities, with the ability to integrate external tools like Google Search and code execution. Sam Witteveen, co-founder and CEO of Red Dragon AI and an external Google developer expert for machine learning who often partners with VentureBeat, discussed the Pro model in a YouTube review: “The new Gemini 2.0 Pro model has a two-million-token context window, supports tools, code execution, function calling and grounding with Google Search — everything we had in Pro 1.5, but improved.” He also noted Google’s iterative approach to AI development: “One of the key differences in Google’s strategy is that they release experimental versions of models before they go GA (generally available), allowing for rapid iteration based
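Stepping back to pricing: the per-million-token rates quoted in this article can be compared directly for a concrete workload. A quick sketch (prices as quoted above, which may change; the 100M-in/20M-out monthly workload is a hypothetical):

```python
# Per-1M-token prices (input, output) in USD, as quoted in this article.
prices = {
    "Gemini 2.0 Flash-Lite": (0.075, 0.30),
    "OpenAI GPT-4o mini":    (0.15,  0.60),
    "Anthropic Claude":      (0.80,  4.00),
    "DeepSeek V3":           (0.14,  0.28),
}

def monthly_cost(model: str, input_millions: float, output_millions: float) -> float:
    """USD cost for a workload measured in millions of tokens."""
    p_in, p_out = prices[model]
    return input_millions * p_in + output_millions * p_out

# Hypothetical workload: 100M input tokens and 20M output tokens per month.
costs = {m: round(monthly_cost(m, 100, 20), 2) for m in prices}
print(costs)
```

On these quoted rates and this workload, Flash-Lite comes out cheapest, which is consistent with the “best bang for the buck” framing above; actual rankings depend on a workload’s input/output mix.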


Off The Bench: Trump Bans Trans Athletes, NCAA Falls In Line

By David Steele (February 7, 2025, 5:59 PM EST) — In this week’s Off The Bench, the NCAA changes course to accommodate a presidential ban on transgender women athletes, Shohei Ohtani’s former interpreter is sentenced for his gambling-driven embezzlement, and women’s soccer players get restitution for abuse at the hands of their coaches and teams…


EXL’s Insurance LLM transforms claims and underwriting

As insurance companies embrace generative AI (genAI) to address longstanding operational inefficiencies, they’re discovering that general-purpose large language models (LLMs) often fall short in solving their unique challenges. Claims adjudication, for example, is an intensive manual process that bogs down insurers. Medical professionals can spend long hours reading upwards of 1,000 pages of medical records and other documents for a single claim. Then they have to synthesize and interpret all this complex information to facilitate a determination. Understandably, lapses in concentration are common, and they can compromise the quality of the settlement. In addition, the quality of this overall work can vary significantly based on an employee’s experience. The sheer volume of data, and the amount of time it takes to absorb it all, makes for an inconsistent, error-prone process. And while generic LLMs are powerful, they lack the precision, domain expertise, and privacy assurances needed to tackle the problem completely.

Recognizing this gap, EXL launched the EXL Insurance LLM, whose industry-specific AI capabilities empower insurers to streamline claims adjudication, enhance underwriting processes, and more. Leveraging NVIDIA’s AI platform, the EXL Insurance LLM is a purpose-built solution to the industry’s unique problems around claims adjudication and underwriting. Because it’s trained on proprietary industry data, the model provides specific, accurate, and concise responses that enhance insurers’ efficiency and improve the customer experience.

How the EXL Insurance LLM works

The EXL Insurance LLM eliminates much of the heavy lifting for practitioners by ingesting all of the claim documentation and providing a summary of the specific information needed to adjudicate. This is possible because the model is powered by NVIDIA’s AI technology and fine-tuned with proprietary insurance data.
NVIDIA’s technology reduces training time from months to days, filters out junk data to improve accuracy, and enhances security by preventing the unauthorized transmission of sensitive information. This allows the model to accurately and efficiently handle industry-specific language patterns, terminologies, and processes in a way that general-purpose models simply can’t, because they don’t have access to that proprietary data. And it does so without the errors and variances in quality that can occur in manual reviews.

Real-world examples and benefits

The EXL Insurance LLM is transforming the industry in other ways as well. The model aggregates and reconciles hundreds of thousands of de-identified medical records, claims histories, call logs, and more to help underwriters make more informed decisions. It also improves regulatory adherence by performing compliance checks; identifies errors, inconsistencies, and insights buried in lengthy documents; and provides accurate, repeatable results without the variation that is common among human reviewers.

The return on investment is real: The EXL Insurance LLM lowers claim indemnity costs, reduces claims leakage, and leads to faster settlements. Practitioners, clinical professionals, and legal staff have used it to increase their efficiency by 30% in the near term and up to 75% in the medium term. In internal studies, the model achieved a 30% improvement in accuracy on insurance tasks over top general-purpose models, and it offers 30% lower costs. Register for the upcoming virtual event, AI in Action: Driving the Shift to Scalable AI, to learn how the EXL Insurance LLM can transform your business.
