Banks Need Modern Identity Verification Solutions To Stay Competitive and Resilient In The AI Era

When discussing digital transformation and innovation with clients in the banking and financial services (FS) sector, identity verification (IDV) often tops their agenda. A seamless digital onboarding experience is crucial: it can determine whether a customer proceeds with or abandons a new banking relationship. The rise of sophisticated fraud tactics like deepfakes, along with increasingly complex regulatory requirements, is also driving the need for more seamless, effective, and secure IDV across the FS industry. My latest report, The State Of Identity Verification In The Financial Services Industry, highlights four key challenges that FS firms face:

- Deepfakes are challenging established IDV capabilities. The increase in AI-generated content means that deepfake-related cyberattacks are on the rise. Generative AI (genAI) face attacks, face swapping, and synthetic faces will become increasingly common; to combat these threats, we expect IDV vendors to increase their investments in genAI-based deepfake detection technologies such as generative adversarial networks and multimodal sensing algorithms.
- Outdated onboarding experiences are stalling business growth. Customers demand fast and reliable onboarding, but traditional IDV methods create friction and false positives, deterring legitimate customers and reducing conversion rates. FS firms need to offer integrated identity experiences during onboarding and beyond to address ongoing authentication throughout customer journeys.
- Regulatory compliance is increasingly complex and costly. Evolving regulations, such as regional mandates, privacy laws, and know-your-customer (KYC) and anti-money-laundering (AML) rules, are adding compliance pressure. Firms need flexible and scalable IDV solutions to navigate these complexities.
- Managing IDV costs is becoming crucial. As competition for new customers intensifies, FS firms strive to enhance UX while lowering customer acquisition costs. Pay-per-IDV-check pricing, for example, lets FS firms reduce costs at scale, and multilayered verification systems help them avoid unnecessarily costly and time-consuming checks, improving UX in the process.

Involve Multiple Roles And Stakeholders When Adopting IDV Solutions

Beyond their role in fraud prevention, IDV solutions build customer trust and support business growth. A multistakeholder approach is crucial when adopting these solutions because it ensures comprehensive risk management, enhances customer experience, and meets regulatory standards. Fraud and risk management teams focus on protecting against sophisticated threats like identity theft and account takeover fraud; customer experience teams ensure that IDV processes do not hinder smooth onboarding, balancing security with usability; and compliance officers stay on top of regulatory changes and ensure adherence to requirements, preventing financial crimes such as money laundering. By involving these key stakeholders, FS firms can adopt robust, user-friendly, and compliant IDV solutions, leading to better business outcomes.

Improve The Effectiveness Of Each IDV Lifecycle Stage With Key IDV Technologies

IDV solutions are evolving rapidly, and FS firms should implement dynamic IDV strategies to optimize verification paths, reduce friction, and enhance UX. My report features a graphic that maps the key IDV technologies firms should leverage at each IDV lifecycle stage to improve both the experience and its effectiveness. A multilayered flow, which runs inexpensive, low-friction checks first and escalates to costlier ones only when risk warrants, is sketched below.
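As a concrete illustration of that multilayered idea, here is a minimal sketch of a tiered verification flow. It is hypothetical: the rules, thresholds, and function names are mine, not any vendor's product or the report's graphic.

```python
# Illustrative sketch of a multilayered IDV flow: run cheap, low-friction
# checks first and escalate to expensive ones (document capture, liveness/
# deepfake detection) only when risk remains high. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Applicant:
    email: str
    phone: str
    device_fingerprint: str

def passive_risk_score(a: Applicant) -> float:
    """Layer 1: passive signals (device, email/phone reputation). Cheap."""
    score = 0.0
    if a.email.endswith(".example-disposable.com"):  # toy rule
        score += 0.5
    if not a.device_fingerprint:
        score += 0.3
    return score

def document_check(a: Applicant) -> bool:
    """Layer 2: ID document capture and data extraction. Moderate cost."""
    return True  # placeholder: call a document-verification service here

def liveness_and_deepfake_check(a: Applicant) -> bool:
    """Layer 3: biometric liveness with deepfake detection. Most expensive."""
    return True  # placeholder: call a biometric/liveness service here

def verify(a: Applicant) -> str:
    risk = passive_risk_score(a)
    if risk < 0.2:             # low risk: no extra friction for the customer
        return "approved"
    if not document_check(a):  # medium risk: verify the identity document
        return "rejected"
    if risk >= 0.5 and not liveness_and_deepfake_check(a):
        return "rejected"      # high risk: require liveness as well
    return "approved"

print(verify(Applicant("a@bank.example", "+15550100", "fp-123")))
```

The design point is the one the report makes about cost: most legitimate customers exit at the cheap first layer, so the expensive checks are reserved for the risky minority.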
FS firms are challenged on many fronts, and they must move away from the manual, legacy IDV processes that hinder growth, create operational inefficiencies, and increase fraud risk. Adopting modern IDV solutions is how they stay competitive, ensure regulatory compliance, and deliver a superior customer experience. Read the full report, The State Of Identity Verification In The Financial Services Industry, to explore these insights and improve your IDV processes, customer experiences, and business outcomes. Forrester clients can schedule a guidance session or inquiry with me for further discussion.


Palo Alto Networks Beats Suit Over Competition 'Headwinds'

By Katryna Perera (April 11, 2025, 9:24 PM EDT) — Cybersecurity company Palo Alto Networks has beaten, for now, a shareholder class action over allegedly concealed “headwinds,” with a California federal judge saying Friday that the investors have failed to plead any actionable misstatements or knowledge of wrongdoing by Palo Alto’s top brass….


Increase flexibility and enable a cyber-resilient IT infrastructure

Broadcom and Google Cloud's continued commitment to solving our customers' most pressing challenges stems from our joint goal: enabling every organization to digitally transform through data-powered innovation, backed by highly secure, cyber-resilient infrastructure, platforms, industry solutions, and expertise. Building on our longstanding technology and go-to-market partnership, we are once again innovating to deliver value in cyber and disaster recovery.

Cyber resilience has become a top-of-mind priority for our customers, and the data shows it is a challenge most are ill-equipped to address: 59% of organizations were hit by ransomware in 2023, and 70% of those suffered data encryption.[1] Enabling true cyber resilience requires a layered approach that spans infrastructure hardening, strong distributed lateral security, and confident cyber recovery, all of which are uniquely enabled by the VMware Cloud Foundation platform.

An area where most organizations struggle today is restoring operations after a cyber event. It presents unique challenges, including the need to select recovery point candidates, validate them in a secure environment before restore, set up that environment, and prevent reinfection. The vast majority of organizations report using up to five solutions across the following categories to enable cyber recovery: backup, cloud infrastructure, networking, disaster recovery as a service (DRaaS), and extended detection and response.[2] The inevitable outcomes are that they remain vulnerable to attacks because of this scattered approach to cyber resilience, and their confidence in recovery stays low even after they have been hit and have implemented remediation measures.

VMware Live Recovery was engineered to solve these challenges. It delivers cyber and disaster recovery for VMware Cloud Foundation infrastructure under a unified management experience, enabling confident, secure cyber recovery through a dedicated, step-by-step workflow that integrates guided restore point selection, safe validation of recovery points in an isolated recovery environment (IRE) with live behavioral analysis, network isolation to prevent reinfection, and recovery orchestration at enterprise scale. The general shape of that workflow is sketched below.
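A minimal orchestration sketch, under stated assumptions: the helper functions below are hypothetical stand-ins for product capabilities (snapshot catalogs, isolated recovery environments, behavioral scanning), not VMware Live Recovery's actual API.

```python
# Hypothetical sketch of the cyber-recovery loop described above: walk
# restore points newest-first, validate each in an isolated environment,
# and only promote a candidate once it scans clean. Helpers are stand-ins.

from typing import Iterable, Optional

def candidate_restore_points(workload: str) -> Iterable[str]:
    """Newest-first snapshot IDs for the workload (guided selection)."""
    return ["snap-2025-04-10", "snap-2025-04-09", "snap-2025-04-08"]

def boot_in_isolated_environment(snapshot: str) -> str:
    """Restore the snapshot into a network-isolated recovery environment."""
    return f"ire-vm-for-{snapshot}"

def behavioral_scan_clean(vm: str) -> bool:
    """Run live behavioral analysis; True if no malware activity observed."""
    return not vm.endswith("snap-2025-04-10")  # toy rule: newest is infected

def promote_to_production(vm: str) -> None:
    print(f"Promoting {vm} to production with reconnected networking")

def recover(workload: str) -> Optional[str]:
    # Keep each candidate isolated until validated, so a reinfected
    # restore point never touches the production network.
    for snap in candidate_restore_points(workload):
        vm = boot_in_isolated_environment(snap)
        if behavioral_scan_clean(vm):
            promote_to_production(vm)
            return snap
        print(f"{snap} failed validation; trying an older restore point")
    return None  # no clean restore point found

recover("finance-db")
```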
Broadcom's mission is to enable customers to recover from anything, anywhere, and we are delivering on this promise. At Explore Las Vegas last year, we announced that customers will be able to leverage an on-premises IRE for cyber recovery, which brings significant benefits to organizations that need to preserve data sovereignty or abide by strict privacy and locality requirements. We have also announced plans to enhance VMware Live Recovery support for Google Cloud VMware Engine (GCVE) as a cyber and disaster recovery (DR) site, for both on-premises and GCVE environments. This builds on VMware Live Recovery's existing protection of GCVE sites as a source and enables a consistent deployment topology across on-premises and cloud environments for cyber resilience and DR.

"VMware Live Recovery support for Google Cloud empowers customers with more choices for their cyber and disaster recovery strategies," said Manoj Sharma, Director of Product Management, Google Cloud. "This solution will enable just-in-time recovery by leveraging the elasticity of Google Cloud and delivers protection against increasingly sophisticated cyberattacks. This is a testament to our deep engineering commitment to solving complex customer challenges and lays a foundation for more innovation to come."

VMware Live Recovery on GCVE enables customers to benefit from a consistent VMware experience with the elasticity and scalability of the cloud. GCVE supports the full VMware Cloud Foundation platform on enterprise-grade infrastructure, with unique capabilities such as:

- Four nines (99.99%) uptime service-level agreement in a single zone
- Flexible node families with eight node shapes for better capacity shaping
- 100 Gbps of east-west networking
- Native virtual private cloud integration

Combined with the power of VMware Cloud Foundation, the service enables customers to deploy flexible technology infrastructure that helps them innovate faster and work better together. These innovations offer important choices for customers to solve their modern cyber and disaster recovery needs in the face of an ever-changing threat landscape. Together, VMware by Broadcom and Google continue to design, develop, and deliver cutting-edge technology to solve our customers' most pressing problems.

[1] Sophos, The State of Ransomware 2024
[2] Forrester Opportunity Snapshot: Organizations Are Missing Critical Ransomware Recovery Capabilities, July 2024

About the author: Belu de Arbelaiz is the Sr. Product Line Marketing Manager for VMware's Data Protection as a Service portfolio, in charge of VMware Cloud Disaster Recovery.


Step Right Up: To Manage Volatility, You’re All Risk Leaders Now!

My one and only roller-coaster experience was in the 1990s, when I succumbed to peer pressure and rode the iconic Coney Island Cyclone, which at the time boasted the third-steepest drop of any wooden coaster in the world. My friends found the ride on "Big Momma" (as it's commonly called) exhilarating and adrenaline-inducing, while I stepped off shaking, nauseous, and determined never to repeat the experience. Here's why: When the ride went from thrilling to terrifying, I couldn't slow it down; I couldn't make it stop; I couldn't get off. All I could do was wait it out and hope I survived. Basically, I was completely unprepared for what was happening and had zero ability to control it.

Exhilarating Or Nauseating: Your Choice

Today's new era of business volatility is that wooden roller-coaster ride that few business leaders expected and none can stop. Wooden roller coasters are distinct in that they are bumpier and more uneven, with that distinctive "clickety-clack" sound designed to induce more psychological fear. Similarly, volatility, with its massive global outages, cyberthreats, new tariffs, trade wars, divided and impatient customers, and economic concerns, is taking all of us on a wild ride. It feels like a ride we're strapped into, unable to get off, not knowing what's coming next. But it doesn't have to be that way. While you can't control the volatility, your approach to enterprise risk management will determine whether this ride is an exhilarating experience or a nausea-inducing one.

Smooth Out Volatility With Enterprise Risk Management

While business volatility tests the boundaries of resilience, it also creates opportunities for companies to make risk management efforts more targeted and effective. To take advantage of these opportunities, and to avoid getting caught off guard, all business leaders must understand risk to chart the best course of action. My new report, Regain Control Over Business Risk With The Three E's Framework, provides a foundation for identifying what is controllable and how to be smart when dealing with volatility. To identify the three sources of risk, model scenarios, and create mitigation plans, recognize that (a toy sketch of this triage follows the list):

- Enterprise risks are where you have full control. Companies have the greatest level of control within the walls of their own enterprise. Risks that arise from your company's strategy, investments, business model, products, policies, internal controls, and even the maturity of your enterprise risk management program are fully within your control to address. Luna Park, the amusement park that operates the Coney Island Cyclone, is directly across from the beach but requires all guests to wear shoes and shirts on all rides for health and safety reasons. Ensuring that rides are maintained, that the park is safe and hazard-free, and that the right policies keep guests safe and happy are risks within the park's control.
- Ecosystem risks are where you have partial control. When it comes to your ecosystem, your company is fully responsible for risks, disruptions, and failures that arise from third-party relationships; however, you have only partial control over how those parties manage their risk or adhere to the regulations and practices that will ultimately impact you. Amusement parks, even the theme park giants, don't build their own rides. Instead, they rely on third-party firms with engineering expertise and knowledge of safety best practices to bring their vision to life. Unfortunately, when an accident or injury occurs, it's the park that's held responsible, as rising litigation against park operators can attest.
- External risks are where you have no control but can prepare a response. External forces of systemic risk build slowly, materialize quickly, and cause a cascade of adjacent failures for companies and their ecosystems. You can't prevent tariffs, technology bans, pandemics, and wars, but you can identify, assess, and mitigate them. Amusement parks are highly sensitive to external forces such as weather: wet and slippery surfaces, gusty winds, and lightning strikes increase the risk of accidents and threaten the safety of guests and employees. Although park operators have no control over the weather, they must have response strategies, such as policies for how quickly to close, how quickly to reopen, and under what circumstances they'll stop a ride or close the park.
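To make the triage concrete, here is a toy sketch that tags each risk with its source and routes it to a treatment. The three categories come from the report; the data structures and routing rules are my own illustration.

```python
# Toy illustration of routing risks by source, following the Three E's idea:
# enterprise (full control), ecosystem (partial control), external (no
# control, but you can prepare a response). Code and examples are mine.

from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    ENTERPRISE = "full control"
    ECOSYSTEM = "partial control"
    EXTERNAL = "no control"

@dataclass
class Risk:
    name: str
    source: Source

def treatment(risk: Risk) -> str:
    if risk.source is Source.ENTERPRISE:
        return f"{risk.name}: fix directly (policies, controls, investments)"
    if risk.source is Source.ECOSYSTEM:
        return f"{risk.name}: contract, monitor, and audit third parties"
    return f"{risk.name}: model scenarios and pre-plan the response"

register = [
    Risk("immature internal controls", Source.ENTERPRISE),
    Risk("critical vendor outage", Source.ECOSYSTEM),
    Risk("new tariffs", Source.EXTERNAL),
]

for r in register:
    print(treatment(r))
```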
With no end in sight to the stream of critical events, profound changes, and global disruptions, yesterday's approach to risk management can quickly become insufficient. Leverage the Forrester Three E's Framework to target risk management efforts at the risks that are most consequential to your business and that will provide the greatest reward. Read the full report for more detail on the Three E's Framework, and schedule an inquiry or guidance session with me for further insights.


Return of the dire wolf? More like a Colossal case of conservation-washing

US biotech startup Colossal Biosciences has resurrected the dire wolf, or at least that's what the company would like you to believe. Social media is abuzz with viral videos, memes, and images of fluffy white wolf puppies. The Game of Thrones references are, predictably, omnipresent. The news even made it to Time magazine's latest cover. But behind the hype lies a dangerous de-extinction delusion that could distract from proven solutions to the biodiversity crisis. The Trump administration is already using Colossal's claims as an excuse to slash endangered species protections.

First, let's set something straight: Colossal didn't bring back the dire wolf. It took DNA from ancient dire wolves' remains and edited a handful of those genes into the genomes of modern grey wolves to give them larger bodies, broader skulls, and specific coat colours. It's an impressive feat of tech-wizardry, but these fluffy white cubs are, at best, mutant grey wolves.

"George R.R. Martin holds the first new dire wolf born in 10,000 years," reads one viral post (pic.twitter.com/5JPepJK8k1) from Game of Thrones fan account Winter is Coming (@WiCnet), April 8, 2025.

Colossal claims it has "successfully restored a once-eradicated species through the science of de-extinction" for the "first time in human history". That's factually incorrect, as many scientists have already pointed out. As University of Maine paleoecologist Jacquelyn Gill wrote on Bluesky on Monday, "To see this work being done with such a casual disregard not only for the truth but for life itself is genuinely abhorrent to me."

But Colossal shows no signs of slowing down. Last month, the $10bn company used a similar technique to create a woolly mouse, a rodent genetically engineered to have mammoth-like fur. In the future, Colossal plans to "resurrect" other extinct creatures, including the dodo, the Tasmanian tiger, and the woolly mammoth. The company says these projects serve as proofs of concept for de-extinction technologies, which could aid in bringing back lost species and restoring ecological balance. It's the flagbearer of a growing de-extinction movement in the US, joined by organisations like Revive & Restore and Re:Wild. Europe, in contrast, has focused its rewilding efforts more on bringing back existing species, like bison, wolves, and beavers, to regions where they were hunted to extinction.

[Image: While genetically engineered "dire wolves" are going viral, the extant Iberian wolf is endangered. Credit: Animal Record/Creative Commons]

Meanwhile, an emerging cohort of biodiversity-focused startups is tapping tech to restore nature in saner ways. For instance, Stream Ocean from Switzerland has developed face recognition technology for fish that helps scientists monitor species numbers. Germany's Soilytix tracks soil health using environmental DNA, while UK startup Pivotal Earth connects corporate funding to credited conservation projects.

This is where technology can find its use in biodiversity restoration, not in Frankensteinian conservation attempts, which aren't just over-hyped but present a dangerous distraction from proven measures. Colossal has broadcast the message that extinction is reversible, but it is not. While the public fawns over adorable mutated "dire wolves" on Instagram, biodiversity loss is snowballing.
One million known species are threatened with extinction, and extinction rates are now up to 1,000 times higher than pre-human levels. We need to mobilise resources to protect the species we still have, like the Iberian wolf. Once widespread, the canid is now confined to mountainous regions of Portugal and Spain; only around 2,200 individuals remain.

Humanity's top priority should be to safeguard existing biodiversity and restore what's been damaged. Instead of playing God with long-extinct creatures, we must fight for the endangered species we still have left.


Crypto Firm To Pay SEC Fine Over False Client Claims

By Jessica Corso (April 11, 2025, 6:11 PM EDT) — Cryptocurrency firm Nova Labs Inc. has agreed to pay $200,000 to settle a U.S. Securities and Exchange Commission lawsuit claiming it falsely touted client relationships with Nestle and other large businesses in an effort to sell crypto mining devices tied to the so-called Helium network….


From MIPS to exaflops in mere decades: Compute power is exploding, and it will transform AI

At the recent Nvidia GTC conference, the company unveiled what it described as the first single-rack system of servers capable of one exaflop: one billion billion, or a quintillion, floating-point operations (FLOPS) per second. This breakthrough is based on the latest GB200 NVL72 system, which incorporates Nvidia's latest Blackwell graphics processing units (GPUs). A standard computer rack is about 6 feet tall, a little more than 3 feet deep and less than 2 feet wide.

Shrinking an exaflop: From Frontier to Blackwell

A couple of things about the announcement struck me. First, the world's first exaflop-capable computer was installed only a few years ago, in 2022, at Oak Ridge National Laboratory. That machine, the "Frontier" supercomputer built by HPE and powered by AMD GPUs and CPUs, originally consisted of 74 racks of servers. The new Nvidia system has achieved roughly 73X greater performance density in just three years, equivalent to performance density more than tripling every year. This advancement reflects remarkable progress in computing density, energy efficiency and architectural design.

Secondly, while both systems hit the exascale milestone, they are built for different challenges: one is optimized for speed, the other for precision. Nvidia's exaflop specification is based on lower-precision math, specifically 4-bit and 8-bit floating-point operations, considered optimal for AI workloads such as training and running large language models (LLMs). These calculations prioritize speed over precision. By contrast, the exaflop rating for Frontier was achieved using 64-bit double-precision math, the gold standard for scientific simulations where accuracy is critical.

We've come a long way (very quickly)

This level of progress seems almost unbelievable, especially as I recall the state of the art when I began my career in the computing industry. My first professional job was as a programmer on the DEC KL 1090. This machine, part of DEC's PDP-10 series of timesharing mainframes, offered 1.8 million instructions per second (MIPS). Aside from its CPU performance, the machine connected to cathode ray tube (CRT) displays via hardwired cables. There were no graphics capabilities, just light text on a dark background, and, of course, no internet. Remote users connected over phone lines using modems running at speeds up to 1,200 bits per second.

[Image: DEC System 10. Source: Joe Mabel, CC BY-SA 3.0]

500 billion times more compute

While comparing MIPS to FLOPS gives a general sense of progress, it is important to remember that these metrics measure different computing workloads. MIPS reflects integer processing speed, which is useful for general-purpose computing, particularly business applications. FLOPS measures floating-point performance, which is crucial for scientific workloads and the heavy number-crunching behind modern AI, such as the matrix math and linear algebra used to train and run machine learning (ML) models. While not a direct comparison, the sheer scale of the difference between MIPS then and FLOPS now provides a powerful illustration of the rapid growth in computing performance. Using these as a rough heuristic to measure work performed, the new Nvidia system is approximately 500 billion times more powerful than the DEC machine, as the back-of-the-envelope arithmetic below shows.
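A quick sanity check of those figures, using only numbers quoted in this article and treating one DEC instruction and one floating-point operation as comparable units of work (a rough heuristic, as noted above):

```python
# Back-of-the-envelope checks on the article's figures; not a real benchmark.

dec_kl_1090 = 1.8e6   # DEC KL 1090: 1.8 million instructions per second
nvidia_rack = 1e18    # GB200 NVL72 rack: one exaflop = 10^18 FLOPS

# ~5.56e+11, i.e. roughly 500 billion times more operations per second
print(f"ratio: {nvidia_rack / dec_kl_1090:.2e}")

# Density claim: Frontier's original 74 racks collapsed into 1 rack over
# about 3 years. A 73X gain in 3 years implies an annual improvement factor
# of 73 ** (1/3), i.e. performance density more than tripling each year.
print(f"annual factor: {73 ** (1 / 3):.2f}")  # ~4.18
```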
That kind of leap exemplifies the exponential growth of computing power over a single professional career, and it raises the question: If this much progress is possible in 40 years, what might the next five bring?

Nvidia, for its part, has offered some clues. At GTC, the company shared a roadmap predicting that its next-generation full-rack system, based on the "Vera Rubin" Ultra architecture, will deliver 14X the performance of the Blackwell Ultra rack shipping this year, reaching somewhere between 14 and 15 exaflops of AI-optimized work in the next year or two. Just as notable is the efficiency: achieving this level of performance in a single rack means less physical space per unit of work, fewer materials, and potentially lower energy use per operation, although the absolute power demands of these systems remain immense.

Does AI really need all that compute power?

While such performance gains are impressive, the AI industry is now grappling with a fundamental question: How much computing power is truly necessary, and at what cost? The race to build massive new AI data centers is being driven by the growing demands of exascale computing and ever-more-capable AI models. The most ambitious effort is the $500 billion Project Stargate, which envisions 20 data centers across the U.S., each spanning half a million square feet. A wave of other hyperscale projects is either underway or in planning stages around the world, as companies and countries scramble to ensure they have the infrastructure to support the AI workloads of tomorrow.

Some analysts now worry that we may be overbuilding AI data center capacity. Concern intensified after the release of R1, a reasoning model from China's DeepSeek that requires significantly less compute than many of its peers. Microsoft later canceled leases with multiple data center providers, sparking speculation that it might be recalibrating its expectations for future AI infrastructure demand. However, The Register suggested that this pullback may have more to do with some of the planned AI data centers lacking sufficiently robust power and cooling for next-gen AI systems. Already, AI models are pushing the limits of what present infrastructure can support. MIT Technology Review reported that this may be why many data centers in China are struggling and failing, having been built to specifications that are not optimal even for present needs, let alone those of the next few years.

AI inference demands more FLOPS

Reasoning models perform most of their work at runtime, through a process known as inference. These models power some of the most advanced and resource-intensive applications today, including deep research assistants and the emerging wave of agentic AI systems. While DeepSeek-R1 initially spooked the industry into thinking that future AI might require less computing power, Nvidia CEO Jensen Huang pushed back hard. Speaking to CNBC, he


How Cohesity’s new generative AI Assistant, Gaia, unlocks enterprise data for instant insights

Now, back to my boss's request: he's heading into a board meeting and wants a summary of unauthorized data breaches from 2018 and 2021. I just joined the company and don't have the historical context; it's all locked in people's heads or in archives. So I ingested the archives Cohesity had already backed up, indexed them, and asked: "Can you summarize the differences between the unauthorized data breaches in 2018 and 2021?" For this demo, I'm anonymizing company names because it's real data. The system takes my text question, semantically compares it against 10,000 PDFs, extracts relevant snippets, packages them into a prompt, and sends them to a large language model. The LLM uses that context to generate an answer, with good detail on both events and general observations. Because it's using internal data, I also get resource links and citations, so I can download the source and add it to the board meeting materials.

Keith: That's great for explainability, so the AI isn't just guessing.

Greg: Exactly. You get a much more robust and trustworthy result because you're using your internal data as the source of truth. This isn't something you'd want to do with a public ChatGPT-type tool; this is proprietary data.

Keith: Right.

Greg: What we're doing here sits between full model fine-tuning and simple querying. Instead of training a model on all your data, we pull relevant content at the time of inference, like handing an artificial researcher a stack of topic-specific papers and asking a question.
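What Greg describes, retrieving relevant snippets at question time and grounding the model's answer in them, is the standard retrieval-augmented generation (RAG) pattern. Here is a minimal, self-contained sketch of that pattern; the embed() and llm() functions are placeholders of my own, not Cohesity's Gaia API.

```python
# Minimal RAG sketch of the flow described above: embed the question, rank
# document snippets by similarity, pack the best ones into a prompt, and let
# the LLM answer from that context. embed() and llm() are toy placeholders.

import math

def embed(text: str) -> list[float]:
    """Placeholder embedding; real systems call an embedding model."""
    vec = [0.0] * 64
    for i, ch in enumerate(text.lower()):
        vec[i % 64] += ord(ch) / 1000.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_snippets(question: str, corpus: dict[str, str], k: int = 3):
    q = embed(question)
    ranked = sorted(corpus.items(), key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return ranked[:k]  # (doc_id, snippet) pairs double as citations

def llm(prompt: str) -> str:
    """Placeholder for the model call."""
    return "(model response grounded in the cited snippets)"

def answer(question: str, corpus: dict[str, str]) -> str:
    snippets = top_snippets(question, corpus)
    context = "\n".join(f"[{doc}] {text}" for doc, text in snippets)
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return llm(prompt)

corpus = {"ir-2018.pdf": "2018 breach report ...",
          "ir-2021.pdf": "2021 breach report ..."}
print(answer("Compare the 2018 and 2021 unauthorized data breaches", corpus))
```

Because the answer is assembled from identifiable snippets, the doc_id of each retrieved passage can be returned alongside the response, which is what makes the citations and resource links Greg mentions possible.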


Netchoice Wants New Calif. Online Marketplace Law Blocked

By Hailey Konnath (April 10, 2025, 9:41 PM EDT) — Big Tech trade group Netchoice LLC has asked a California federal court to block a new Golden State law requiring online marketplaces to collect information from third-party sellers and report those selling stolen goods, claiming the “onerous” measure will “impose unprecedented and unconstitutional burdens on widely used online services.”…


New IBM z17 Mainframe Will ‘Redefine AI at Scale’

[Image: Screenshot of the IBM z17 mainframe from an IBM product video. Image: IBM]

IBM on Tuesday announced the newest version of its famous mainframe: the IBM z17. Powered by the latest IBM Telum II processor, the z17 is the culmination of five years of research and development, and it features AI capabilities across hardware, software, and systems operations. "IBM Z is built to redefine AI at scale," IBM said in the press release.

While mainframes are often seen as a throwback to older eras of computing, they are still used by large companies to process massive amounts of data. Many industries worldwide, including banking, insurance, retail, and telecommunications, still run on IBM mainframes today.

New IBM mainframe puts AI first

The newest IBM mainframe was explicitly designed to better support AI features. According to IBM, the z17 can process 50% more AI inference operations per day than the z16, and the tech giant says the z17 has over 250 use cases, including managing chatbots and mitigating loan risk. The main AI capabilities of the z17 are:

- More inferencing capability: The z17 has increased frequency, greater compute capacity, and 40% more cache, enabling more than 450 billion inferencing operations a day with one-millisecond response times (a quick scale check on these figures follows at the end of this piece).
- Accelerated computing: When it becomes available in the last quarter of 2025, the IBM Spyre™ Accelerator will augment the Telum II processor's compute capabilities, allowing the mainframe to run generative features such as assistants.
- Better user experience: The z17 incorporates AI assistants and AI agents, such as IBM watsonx Code Assistant for Z and IBM watsonx Assistant for Z, to improve the user experience of IT teams and developers. Watsonx Assistant for Z will also be integrated with Z Operations Unite, providing live systems data for AI chat-based incident detection and resolution.

"The industry is quickly learning that AI will only be as valuable as the infrastructure it runs on," said Ross Mauri, general manager of IBM Z and LinuxONE. "With z17, we're bringing AI to the core of the enterprise with the software, processing power, and storage to make AI operational quickly. Additionally, organizations can put their vast, untapped stores of enterprise data to work with AI in a secured, cost-effective way."
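For a sense of scale, the headline figures above work out as follows (my arithmetic, derived only from the numbers IBM quotes):

```python
# Scale check on IBM's stated z17 figures (my arithmetic, not IBM's).

z17_ops_per_day = 450e9            # more than 450 billion inference ops/day
seconds_per_day = 24 * 60 * 60
print(f"z17: {z17_ops_per_day / seconds_per_day:,.0f} ops/second")  # ~5.2 million

# The z17 does 50% more per day than the z16, implying a z16 baseline of
# roughly 300 billion inference operations per day.
z16_ops_per_day = z17_ops_per_day / 1.5
print(f"implied z16 baseline: {z16_ops_per_day:.0e} ops/day")  # ~3e+11
```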
