Ai2 releases Tülu 3, a fully open source model that bests DeepSeek v3, GPT-4o with novel post-training approach

The open-source model race keeps getting more interesting. Today, the Allen Institute for AI (Ai2) debuted its latest entry in the race with the launch of its open-source Tülu 3 405-billion-parameter large language model (LLM). The new model not only matches the capabilities of OpenAI’s GPT-4o, it surpasses DeepSeek’s v3 model across critical benchmarks.

This isn’t the first time Ai2 has made bold claims about a new model. In November 2024, the company released the first version of Tülu 3, in 8- and 70-billion-parameter versions. At the time, Ai2 claimed the model was on par with the latest GPT-4 model from OpenAI, Anthropic’s Claude and Google’s Gemini, with the big difference that Tülu 3 is open source. Ai2 also claimed back in September 2024 that its Molmo models beat GPT-4o and Claude on some benchmarks. While benchmark performance data is interesting, what’s perhaps more useful are the training innovations that enable the new Ai2 model.

Pushing post-training to the limit

The big breakthrough for Tülu 3 405B is rooted in an innovation that first appeared with the initial Tülu 3 release in 2024, which used a combination of advanced post-training techniques to get better performance. With the Tülu 3 405B model, those post-training techniques have been pushed even further, using an advanced post-training methodology that combines supervised fine-tuning, preference learning, and a novel reinforcement learning approach that has proven exceptional at larger scales.
“Applying Tülu 3’s post-training recipes to Tülu 3-405B, our largest-scale, fully open-source post-trained model to date, levels the playing field by providing open fine-tuning recipes, data and code, empowering developers and researchers to achieve performance comparable to top-tier closed models,” Hannaneh Hajishirzi, senior director of NLP research at Ai2, told VentureBeat.

Advancing the state of open-source AI post-training with RLVR

Post-training is something that other models, including DeepSeek v3, do as well. The key innovation that differentiates Tülu 3 is Ai2’s “reinforcement learning from verifiable rewards” (RLVR) system. Unlike traditional training approaches, RLVR uses verifiable outcomes — such as solving mathematical problems correctly — to fine-tune the model’s performance. This technique, combined with direct preference optimization (DPO) and carefully curated training data, has enabled the model to achieve better accuracy in complex reasoning tasks while maintaining strong safety characteristics.

Key technical innovations in the RLVR implementation include:

Efficient parallel processing across 256 GPUs
Optimized weight synchronization
Balanced compute distribution across 32 nodes
Integrated vLLM deployment with 16-way tensor parallelism

The RLVR system showed improved results at the 405B-parameter scale compared to smaller models. It also demonstrated particularly strong results in safety evaluations, outperforming DeepSeek v3, Llama 3.1 and Nous Hermes 3. Notably, the RLVR framework’s effectiveness increased with model size, suggesting potential benefits from even larger-scale implementations.

How Tülu 3 405B compares to GPT-4o and DeepSeek v3

The model’s competitive positioning is particularly noteworthy in the current AI landscape. Tülu 3 405B not only matches the capabilities of GPT-4o but also outperforms DeepSeek v3 in some areas, particularly on safety benchmarks.
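The article describes RLVR's central idea but not its mechanics. The core of a verifiable reward, as the name suggests, is a programmatic check of the model's answer against ground truth rather than a learned reward model. A minimal sketch of that idea follows; the function names and the last-line answer-extraction convention are illustrative assumptions, not Ai2's actual implementation.

```python
# Minimal sketch of a verifiable reward in the spirit of RLVR:
# the reward is computed by checking the model's final answer against
# a known ground truth, not by querying a learned reward model.
# Names and answer-extraction convention are illustrative only.

def extract_final_answer(completion: str) -> str:
    """Treat the last non-empty line of the completion as the answer."""
    return completion.strip().splitlines()[-1].strip()

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Return 1.0 if the extracted answer matches the ground truth, else 0.0."""
    return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

# A math problem whose answer can be checked exactly:
print(verifiable_reward("2 + 2 equals four.\n4", "4"))  # 1.0
print(verifiable_reward("I think the answer is 5", "4"))  # 0.0
```

In a full training loop, this binary signal would replace the scalar from a reward model inside a standard RL objective, which is what makes the approach attractive for domains like math where correctness is cheap to verify.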
Across a suite of 10 AI benchmarks, including safety benchmarks, Ai2 reported that the Tülu 3 405B RLVR model had an average score of 80.7, surpassing DeepSeek v3’s 75.9. Tülu, however, does not quite match GPT-4o, which scored 81.6. Overall, the metrics suggest that Tülu 3 405B is at the very least extremely competitive with GPT-4o and DeepSeek v3 across the benchmarks.

Why open-source AI matters and how Ai2 is doing it differently

What makes Tülu 3 405B different for users, though, is how Ai2 has made the model available. There is a lot of noise in the AI market about open source. DeepSeek says its model is open source, as does Meta for Llama 3.1, which Tülu 3 405B also outperforms. Both DeepSeek’s and Llama’s models are freely available for use, and some, but not all, of their code is available. For example, DeepSeek has released R1’s model code and pre-trained weights but not the training data.

Ai2 is taking a different approach in an attempt to be more open. “We don’t leverage any closed datasets,” Hajishirzi said. “As with our first Tülu 3 release in November 2024, we are releasing all of the infrastructure code.” She added that Ai2’s fully open approach, which includes data, training code and models, ensures users can easily customize their pipeline for everything from data selection through evaluation. Users can access the full suite of Tülu 3 models, including Tülu 3-405B, on Ai2’s Tülu 3 page, or test Tülu 3-405B through Ai2’s Playground demo space.


New Research — Workload/Batch Automation Is Undergoing A Transformation

It’s been some time since Forrester has written about this market, and a lot has changed. Automation is the cornerstone of speed and operational efficiency. With the increasing complexity of IT ecosystems, business applications, and data, the demand for smarter automation is greater than ever. Batch automation and workload automation are certainly not new concepts (they date back to the early days of the mainframe), but they are undergoing a renaissance as organizations optimize their processes. In our upcoming research, we will delve into why it’s time to revisit these technologies, explore the macro trends that impact this market, and examine how those trends could reshape organizations’ automation plans. We’ll help our clients understand the current state of the market, the impact of the latest technological advancements, and new emerging use cases.

Why Are We Revisiting This Research?

Increased client demand. Enterprises are increasingly demanding insights into the direction of this market and vendors’ ability to solve their operational and organizational requirements.

Hybrid and multicloud environments. Firms today live and operate in a hybrid setup — a mix of on-premises and public cloud services. Applications, infrastructure, and data are spread across this setup, and workload/batch automation must likewise seamlessly integrate across it.

Native capabilities in business applications. Some business applications have native capabilities to perform workload automation. We will explore how these impact standalone tools in the market.

AI and AI agent enhancements. While AI is no secret in automation, we want to make clear how AI will help advance solutions. When should agents take over (if at all)?

Demand for operational and cyber resiliency. With the growing threat of system failures and cybersecurity issues, all automation solutions must be designed with capabilities to address these challenges.
Workload/batch automation can no longer be just a tool for the IT organization: like all other types of automation, it must be a strategic enabler for modern businesses. By revisiting research in this space, we will explore new possibilities for scalability, efficiency, and resilience.

Get Involved

Over the next two months, we will be conducting interviews and taking briefings with vendors. If you would like to participate in our research, please contact Meg Bellavance ([email protected]).


How to Use Keeper Password Manager: A Comprehensive Guide

Keeper is an all-around password manager that offers a variety of authentication options and an intuitive user interface. In this article, we walk you through how to set up Keeper, how to use it, and how you can maximize its capabilities for your organization.

Keeper step-by-step instructions

1. Choosing a Keeper subscription

Keeper has two subscription types: Personal & Family and Organizations. The Personal & Family options are the more consumer-facing subscriptions, while the Organizations tier is designed for small to large businesses. In our hands-on review, Keeper received a rating of 4.4 stars out of 5. Check out the full Keeper review here.

Keeper Personal & Family. Image: Keeper

Keeper’s Personal (or Unlimited) plan is priced at $2.92 per month and comes with one user vault. The Family plan is $6.25 per month for five user vaults.

Keeper Organizations plans. Image: Keeper

The Organizations plans are divided into Business Starter, Business, and Enterprise. Keeper Business Starter covers small teams of up to ten people, while Keeper Business is meant for small to medium-sized businesses — with the two plans starting at $2 per user, per month, and $3.75 per user, per month, respectively. Keeper Enterprise is tailored toward larger companies and includes more business-focused features. You can contact Keeper for a quote on Enterprise pricing.
Keeper offers a generous 30-day free trial for Keeper Personal and a 14-day trial for Keeper Business; neither requires a credit card or payment information. I highly recommend picking the plan best suited for you or your business and trying out the free trial. This lets you experience Keeper’s password management without paying for a premium subscription. Keeper has a free version, but it’s very limited and is only available on the mobile application.

2. Setting up the web app and browser extension

To get access to one of Keeper’s free trials, click the “Try It Free” button at the top of Keeper’s pricing page and select your plan of choice. For this guide’s sake, we will try out Keeper Personal.

Starting a Keeper free trial. Image: Keeper

Keeper will ask you to provide an email address. Once you’ve provided one, you’ll be redirected to Keeper’s web application. From there, it’ll ask you to input your email address and create a master password. Your master password is technically the only password you’ll have to create on your own. It unlocks your Keeper vaults, where all your data and credentials are stored.

Creating a master password. Image: Luis Millares

Because it’s your main gateway to all your passwords, it’s crucial that you remember your master password. Keeper will send a verification code to your email and ask you to input it in the app. Once you input the code, you’ll be able to access Keeper’s full web vault.

Initial Keeper web vault page. Image: Luis Millares

3. Using Keeper

When you first encounter Keeper’s web vault, it’ll offer you a few tutorials on how to import passwords, install the browser extension, and set up account recovery.

Keeper tutorials. Image: Keeper

Of the three tutorials, go through the browser extension guide first. This will keep Keeper’s extension ready in your browser at all times and make your password management experience more seamless.
If you’re using Chrome, you can download Keeper Password Manager from the Chrome Web Store. Now that you have both Keeper’s web app and browser extension installed, you can start saving and managing your passwords.

Keeper Chrome extension. Image: Keeper

To show you how to save your first login, I’ll demonstrate the process by creating a new Goodreads account. Upon navigating to Goodreads’ account creation page, you will see a Keeper logo appear in the password field. Clicking on it opens Keeper’s password generator.

Keeper password generator. Image: Luis Millares

Keeper’s password generator automatically creates a random password for every new login. Through the generator, you can configure how many characters you want a password to have and whether you want it to include numbers, letters, symbols, or a combination of the three. By default, Keeper generates a 20-character password; the length can be set as high as 100 characters. After you’ve input your new account details, Keeper’s browser extension will ask you to save the new login to your Keeper vault.

New saved login. Image: Luis Millares

Once you click “OK,” you’ve officially saved your very first login in Keeper!

How to ensure you’re maximizing Keeper’s capabilities

Out of the box, Keeper offers heightened security in protecting your passwords. However, there are a few steps you can take to fully maximize its features.

Download Keeper’s dedicated desktop app

I highly recommend downloading one of Keeper’s desktop applications alongside its browser extension. This provides a more organized view of your encrypted vault and prevents any slowdown in your browser, especially if you anticipate accessing a ton of login credentials at any given time. Currently, Keeper has dedicated desktop applications for Windows, macOS, and Linux.
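The length and character-class toggles that Keeper's generator exposes are a standard pattern in password tooling. Keeper's own implementation is closed source, but the idea can be sketched generically with Python's cryptographically secure `secrets` module (this is an illustration, not Keeper's code):

```python
# Generic sketch of a configurable password generator, mirroring the
# length / letters / digits / symbols toggles a password manager
# typically offers. Not Keeper's implementation.
import secrets
import string

def generate_password(length: int = 20, letters: bool = True,
                      digits: bool = True, symbols: bool = True) -> str:
    """Draw `length` characters uniformly from the enabled classes,
    using the OS's cryptographically secure randomness source."""
    pool = ""
    if letters:
        pool += string.ascii_letters
    if digits:
        pool += string.digits
    if symbols:
        pool += "!@#$%^&*"
    if not pool:
        raise ValueError("At least one character class must be enabled")
    return "".join(secrets.choice(pool) for _ in range(length))

print(generate_password())           # e.g. a random 20-character string
print(len(generate_password(100)))   # 100
```

Note the use of `secrets.choice` rather than `random.choice`: the `random` module is not suitable for security-sensitive values.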
Consider Keeper’s paid add-ons

You should also check out Keeper’s secure add-ons, which are purchases separate from your Keeper membership that add extra functionality. Chief among these are Keeper’s BreachWatch and KeeperChat add-ons. BreachWatch is Keeper’s take on dark web scanning, while KeeperChat is an encrypted messaging service that works with Keeper Password Manager.

Maximize Keeper discounts for selected groups

Keeper provides generous discounts to people in the military, first responders, medical personnel, and students. In particular, Keeper offers a 50% discount for students and a 30% discount


Observo’s AI-native data pipelines cut noisy telemetry by 70%, strengthening enterprise security

The AI boom has set off an explosion of data. AI models need massive datasets to train on, and the workloads they power — whether internal tools or customer-facing apps — are generating a flood of telemetry data: logs, metrics, traces and more. Even with observability tools that have been around for some time, organizations are often struggling to keep up, making it harder to detect and respond to incidents in time.

That’s where a new player, Observo AI, comes in. The California-based startup, which has just been backed by Felicis and Lightspeed Venture Partners, has developed a platform that creates AI-native data pipelines to automatically manage surging telemetry flows. This ultimately helps companies like Informatica and Bill.com cut incident response times by over 40% and slash observability costs by more than half.

The problem: rule-based telemetry control

Modern enterprise systems generate petabyte-scale operational data on an ongoing basis. While this noisy, unstructured information has some value, not every data point is a critical signal for identifying incidents. This leaves teams with a lot of data to filter for their response systems. If they feed everything into the system, costs and false positives increase. If they pick and choose instead, scalability and accuracy suffer — again leading to missed threat detection and response. In a recent survey by KPMG, nearly 50% of enterprises said they had suffered security breaches, with poor data quality and false alerts being major contributors. Some security information and event management (SIEM) systems and observability tools do have rule-based filters to cut down the noise, but that rigid approach doesn’t evolve in response to surging data volumes.
To address this gap, Gurjeet Arora, who previously led engineering at Rubrik, developed Observo, a platform that optimizes these operational data pipelines with the help of AI. The offering sits between telemetry sources and destinations and uses ML models to analyze the stream of incoming data. It understands this information and then cuts down the noise to decide where each piece should go — to a high-value incident alert and response system or to a more affordable data lake covering different data categories. In essence, it finds the high-importance signals on its own and routes them to the right place.

“Observo AI…dynamically learns, adapts and automates decisions across complex data pipelines,” Arora told VentureBeat. “By leveraging ML and LLMs, it filters through noisy, unstructured telemetry data, extracting only the most critical signals for incident detection and response. Plus, Observo’s Orion data engineer automates a variety of data pipeline functions, including the ability to derive insights using a natural language query capability.”

What’s even more interesting is that the platform continues to evolve its understanding on an ongoing basis, proactively adjusting its filtering rules and optimizing the pipeline between sources and destinations in real time. This ensures that it keeps up even as new threats and anomalies emerge, without requiring new rules to be set up.

Observo AI stack

The value to enterprises

Observo AI has been around for nine months and has already signed over a dozen enterprise customers, including Informatica, Bill.com, Alteryx, Rubrik, Humber River Health and Harbor Freight. Arora noted that the company has seen 600% quarter-over-quarter revenue growth and has already drawn some of its competitors’ customers. “Our biggest competitor today is another start-up called Cribl. We have clear product and value differentiation against Cribl, and have also displaced them at a few enterprises.
At the highest level, our use of AI is the key differentiating factor, which leads to higher data optimization and enrichment, leading to better ROI and analytics, leading to faster incident resolution,” he added, noting that the company typically reduces pipeline “noise” by 60-70%, compared with competitors’ 20-30%.

The CEO did not share how the above-mentioned customers derived benefits from Observo, although he did point out what the platform has done for companies operating in highly regulated industries (without sharing names). In one case, a large North American hospital was struggling with the growing volume of security telemetry from different sources, leading to thousands of insignificant alerts and massive expenses for Azure Sentinel SIEM, data retention and compute. The organization’s security operations analysts tried creating makeshift pipelines to manually sample and reduce the amount of data ingested, but they feared they could be missing signals with a big impact. With Observo’s data-source-specific algorithms, the organization was initially able to cut more than 78% of the total log volume ingested into Sentinel while fully onboarding all the data that mattered. As the tool continues to improve, the company expects to achieve more than 85% reductions within the first three months. On the cost front, it reduced the total cost of Sentinel, including storage and compute, by over 50%. This allowed the team to prioritize the most important alerts, leading to a 35% reduction in mean time to resolve critical incidents. Similarly, in another case, a global data and AI company was able to reduce its log volumes by more than 70% and its total Elasticsearch observability and SIEM costs by more than 40%.

Plan ahead

As the next step in this work, the company plans to accelerate its go-to-market efforts and take on other players in the category — Cribl, Splunk, Datadog, etc.
It also plans to enhance the product with more AI capabilities, anomaly detection, a data policy engine, analytics, and additional source and destination connectors. According to insights from MarketsandMarkets, the global market for observability tools and platforms is expected to grow from $2.4 billion in 2023 to $4.1 billion by 2028, a compound annual growth rate of nearly 12%.
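The routing pattern the article describes — score each telemetry event and send high-signal events to the SIEM while diverting the rest to cheaper storage — can be sketched in a few lines. The scoring rules below are toy placeholders; Observo's platform uses learned ML models rather than fixed rules, and none of these names come from its API.

```python
# Illustrative sketch (not Observo's code) of severity-based telemetry
# routing: score each event, send high-signal events to the SIEM and
# the rest to a cheaper data lake.

def score_event(event: dict) -> float:
    """Toy scoring: error-level logs, privileged users, and failed logins
    rank higher. A real pipeline would use learned models, not fixed rules."""
    score = 0.0
    if event.get("level") in ("error", "critical"):
        score += 0.6
    if event.get("user") == "root":
        score += 0.3
    if "failed login" in event.get("message", "").lower():
        score += 0.4
    return min(score, 1.0)

def route(events: list, threshold: float = 0.5) -> tuple:
    """Split events into (siem, data_lake) buckets by score."""
    siem, lake = [], []
    for e in events:
        (siem if score_event(e) >= threshold else lake).append(e)
    return siem, lake

events = [
    {"level": "info", "user": "alice", "message": "heartbeat"},
    {"level": "error", "user": "root", "message": "Failed login attempt"},
]
siem, lake = route(events)
print(len(siem), len(lake))  # 1 1
```

The economics follow directly from the split: only the SIEM bucket incurs premium ingestion and alerting costs, which is why cutting 60-70% of "noise" translates into the cost reductions the article cites.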


Venture-Backed IPO Recovery Could Be Muted, Report Says

By Tom Zanki (January 24, 2025, 7:00 AM EST) — The expected recovery for venture-backed initial public offerings in 2025 will likely be muted, a capital markets research firm said Friday, given investors’ persistent concerns about valuation and delayed interest rate cuts that may not happen until midyear…


Five Ingredients To Win The Recurring Revenue B2B Bake Off And Avoid Getting Chopped

As a keen amateur chef, I have been known to occasionally seek inspiration from a TV cooking competition. Those bite-sized episodes of culinary drama sometimes provide just enough to satisfy my hunger for light evening entertainment. Of course, all these shows follow a proven recipe: enthusiastic contestants, challenging ingredients, and a panel of picky jurors deciding everyone’s fate – all set against the backdrop of the ever-present ticking clock.

At the end of each episode, there is always a winner – a Chopped Champion, a Top Chef, a Star Baker. The victor is the one who, through the various phases of the competition, wins over the expectant jurors with their transformation of the raw ingredients, while simultaneously wrangling the technology, dealing with the heat of the kitchen, and letting the viewers know just enough about their unique and scintillating backstory. Are they chefs, or are they marketers?

For every winner, there are of course multiple losers – the eliminated ones. These unfortunate contestants tend to falter for a few simple reasons. While talented and accomplished competitors, they often fail to adapt their usual cooking approaches to the specific demands of the competition arena. They make an error in what to serve, how to prepare it, or how to present it. And no one likes medium-rare chicken, even on a bed of seasonal yuzu-drizzled kale chips.

Recurring Revenue Marketing Demands A Different Recipe

Seasoned B2B marketers often face similar challenges when stepping into the competitive recurring revenue arena. Equipped with their trusted tools, know-how and scars from years of competition, they often play it too safe. They apply tried and tested [legacy] approaches to their new environment, only to be greeted with an underwhelmed reaction from a new jury of buyers. It is not that they have suddenly become bad marketers, however.
Recurring revenue marketing in B2B is still marketing, with the raw ingredients of brand, demand, engagement, and enablement. It is just that these ingredients need to be prepared and seasoned in different ways to reflect the nature and demands of the recurring revenue environment.

Optimizing Five Stakeholder Relationships Will Help Your Recurring Revenue Rise

In our recent research report, “Recurring Revenue Marketing Demands Customer Obsession And A Seamless Operating Model,” Dawn Ferrara and I explain how the secret to recurring revenue marketing success is to reimagine marketing’s work through the lens of its interactions with five key stakeholder groups: buyers, product, sellers, operations, and employees. We examine what makes these relationships different in a recurring revenue model and introduce a new framework, the “Recurring Revenue Marketing Propeller.” This framework illustrates how marketers should adjust their approach to stakeholder relationships and what steps they should take to win the trust and long-term patronage of recurring revenue customers. We hope clients enjoy reading the full report. If we have left you hungry for more, please do not hesitate to contact us to schedule a deeper discussion.


Chancery Nixes TRO in Jenzabar Stock Buyback Dispute

By Jeff Montgomery (January 28, 2025, 8:55 PM EST) — Investors in an educational software venture mired in Delaware Court of Chancery litigation dating to 2009 lost an 11th-hour effort to broaden the latest case on Tuesday, with a vice chancellor noting that the state Supreme Court is set to take up an appeal in the already decided action on Wednesday…


Samsung Gets PTAB To Review 2 Smart Ring Patents

By Adam Lidgett (January 30, 2025, 5:29 PM EST) — The Patent Trial and Appeal Board has agreed to hear Samsung’s challenge to a pair of patents owned by a company that makes smart rings, finding there was a reasonable chance the electronics giant could prevail in the fight…


DeepSeek proves AI innovation isn’t ‘dictated’ by Silicon Valley

DeepSeek proves that Silicon Valley can’t monopolise AI innovation, according to a European AI entrepreneur. Muj Choudhury, the CEO and co-founder of British voice processing startup RocketPhone, welcomed DeepSeek’s rapid rise. He hopes the Chinese company signals a shift in the balance of AI power.

“AI development has long been dominated by Silicon Valley’s powerful VC firms, which wield immense influence by pouring vast sums into the technology and shaping its trajectory,” he said. “In this landscape, an outsider like DeepSeek breaking through is not just impressive. It’s necessary. The industry needs challengers to drive real innovation and prevent AI’s future from being monopolised by a handful of players.”

That handful has certainly been shaken by DeepSeek’s emergence. The Chinese startup’s AI assistant has overtaken ChatGPT to reach the top spot in the Apple App Store’s free app rankings. The company’s open-source models have also stunned the market. In tests, they’ve outperformed rival models from OpenAI and Meta — at a fraction of the operating costs. The advances have shaken the stock market. According to Choudhury, they’ve proven that AI innovation isn’t “dictated” by access to supercomputers or Silicon Valley funding.

Fresh from raising $10.5mn for his own startup, Choudhury has growing optimism about Europe’s AI scene. He wants DeepSeek to inspire the continent’s tech sector, which is struggling to commercialise AI innovations. “For European startups, who have historically excelled at building focused, efficient solutions rather than chasing scale at all costs, DeepSeek’s rise suggests there’s room for strategic players who can execute well without massive capital outlays,” he said.
“Perhaps this shift will finally allow us to focus on what truly matters: building practical AI systems that solve real enterprise problems and deliver tangible business value, rather than chasing the next viral consumer app.”


What's New (And Worrisome) in Quantum Security?

A growing number of security experts are warning that quantum computing might soon break existing cryptographic systems, leading to a security crisis that could devastate businesses and governments worldwide. Researchers and other security experts are drawing a direct parallel between the pending quantum security danger and the notorious Y2K threat, only on a much larger scale. In essence, the current basic and widely used encryption mechanisms, such as factorization-based cryptography, are highly vulnerable to quantum computing’s processing power.

Avalanche

The major security issue is the ability of quantum computers to rapidly break cryptographic algorithms, which are used in multiple security architectures and products, says Doug Saylors, partner and cybersecurity lead at global technology research and advisory firm ISG. He notes in an email interview that modern cryptography is easily broken by quantum computing; as a result, file encryption becomes completely worthless. “Imagine every private conversation, every strategic plan, every forecast or product under development, all out in the open for public consumption, from competitors to suppliers to partners,” Saylors states. The reputation damage alone, he believes, “could be bankruptcy-inducing.”

Cryptographers have demonstrated that quantum computers can break asymmetric encryption algorithms, such as RSA and ECC, which are widely used for secure communication and digital signatures, says Archana Ramamoorthy, senior director, regulated and trusted cloud, at Google Cloud. This vulnerability can enable attacks such as “store now, decrypt later.” “As a result, the longevity of hardware firmware signatures generated by similar asymmetric encryption algorithms is also threatened,” she warns in an online interview.
“In contrast, symmetric cryptography appears less vulnerable to quantum attacks.”

Everybody Knows

Quantum security’s biggest challenge is identifying the exact date by which a solution will be needed, says Tom Patterson, quantum security global lead at business advisory firm Accenture, in an email interview. “Unlike Y2K, when we knew exactly when it would happen but we didn’t know what would happen, with Q-Day we know exactly what will happen and what to do about it, but we’re not sure if it’s needed in a day or a decade.” The challenge for IT and security leaders today, he adds, “is where to slot quantum security into their five-year plan, and how best to get started today.”

Responding to the threat quantum computing poses to current asymmetric algorithms, leading organizations, including the U.S. National Institute of Standards and Technology (NIST), are now working with researchers worldwide to create and test cryptographic algorithms that are resistant to the power of quantum computers. “The aim is to standardize these quantum-resistant algorithms and complete a thorough cryptanalysis,” Ramamoorthy says.

According to Ramamoorthy, NIST has already endorsed three quantum-safe algorithms, based on extensive research and analysis by the global cryptographic community: FIPS 203, FIPS 204 and FIPS 205. “These algorithms address key exchange for secure communications and digital signatures used in various cryptographic operations,” she says, adding that NIST is also considering additional algorithms to further bolster the security of digital certificates. “This ongoing work is crucial to safeguarding the privacy and security of our digital lives and ensuring that our communications remain confidential and protected.”

The Future

The solution side of quantum security is advancing even faster than quantum computers themselves, Patterson observes.
“We now have the first of many new NIST encryption standards that aren’t susceptible to a quantum computing decryption attack, which is great progress and great news.” He adds that “crypto agility,” a data encryption practice used to ensure a rapid response to a cryptographic threat, is gaining traction, helping enterprises actively manage new NIST standards as they appear.

There are also advances being made in using quantum information science itself to defend against quantum computing attacks. With new research, development, and early deployments of quantum key distribution (QKD), a secure communication method built on a cryptographic protocol incorporating components of quantum mechanics that, when perfected, will provide a way to exchange keys anywhere without fear of compromise, the future looks far from hopeless.

Closing Time

Quantum security is a good-news story in that there are already solutions to mitigate the critical new risk, Patterson says. He believes that upgrading old and vulnerable encryption methods early will help enterprises save time and money while lowering current and future risks. “While there’s a cost to do the upgrade, running on the latest secure encryption is no more expensive than running old vulnerable encryption, so it’s good from a budgeting perspective as well.”

The light at the end of the tunnel is the fact that quantum computing can be used to defend against quantum attacks, and researchers are already beginning to catalog presumed attack vectors and design countermeasures, Saylors says. “We’re still three to five years out from the potential for an attack, but quantum-based countermeasures could prevent the attack from spreading to other organizations.”
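The "crypto agility" practice Patterson describes amounts to a design discipline: code should select its cryptographic algorithm through a single point of configuration rather than hard-coding it, so a broken or deprecated algorithm can be swapped in one place. A minimal sketch of that pattern follows, using stdlib hash functions as stand-ins; a real migration would plug post-quantum algorithms from a dedicated library into the same registry, and none of these names are from any standard API.

```python
# Illustrative sketch of "crypto agility": callers request a digest
# through a registry keyed by algorithm name, so swapping algorithms
# (e.g. during a post-quantum migration) is a one-line change.
# Stdlib hashes are used as stand-ins for real cryptographic choices.
import hashlib

HASH_REGISTRY = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
}

ACTIVE_ALGORITHM = "sha256"  # the single point of change during migration

def digest(data: bytes, algorithm=None) -> str:
    """Hash `data` with the requested algorithm, defaulting to the
    currently active one from the registry."""
    algo = HASH_REGISTRY[algorithm or ACTIVE_ALGORITHM]
    return algo(data).hexdigest()

print(digest(b"hello")[:8])  # digest under the currently active algorithm
```

The same registry shape applies to signatures and key exchange: as long as callers never name an algorithm directly, rolling the fleet onto a new NIST standard is a configuration change rather than a code audit.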
