Google, Kove Settle Cloud Storage Patent Case

By Theresa Schliep ( January 16, 2025, 8:36 PM EST) — Google and Kove IO Inc. have settled claims that the technology behemoth infringed three of the Chicago software company’s patents covering cloud storage technologies, the parties told an Illinois federal court, concluding a dispute similar to another involving Amazon where Kove won a $673 million jury award, plus interest…. Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas and jurisdictions. A Law360 subscription includes features such as Daily newsletters Expert analysis Mobile app Advanced search Judge information Real-time alerts 450K+ searchable archived articles And more! Experience Law360 today with a free 7-day trial. source

Google, Kove Settle Cloud Storage Patent Case Read More »

With AGI looming, CIOs stay the course on AI partnerships

“A lot of companies have plenty of ability to implement AI capabilities in-house if they’re smart about the way they build those capabilities and they’re very careful and conscious about the cost profile of the technologies they put in place,” he says. “There is absolutely a sweet spot of relatively easy-to-access capability at a modest price that many technology organizations are perfectly capable of reaching.” Liberty Mutual has been experimenting using its own nonpublic, internal version of ChatGPT, called LibertyGPT, for the past two years and has nine use cases in production, including document summarization at scale; 18 use cases in R&D; and a long list of potential uses. Tony Marron, managing director of Liberty IT in Belfast, Northern Ireland, says the nine production use cases save the company about 200,000 hours of human labor and generates about $100 million in savings.  Reaping those benefits required a very high level of change management and integrating business and technology team members, Marron says, including data scientists, engineers, and operational employees. There is no plan for an AGI use case, he maintains. source

With AGI looming, CIOs stay the course on AI partnerships Read More »

What Security Leaders Get Wrong About Zero-Trust Architecture

Zero-trust architecture has emerged as the leading security method for organizations of all types and sizes. Zero-trust shifts cyber defenses away from static, network-based perimeters to focus directly on protecting users, assets, and resources.  Network segmentation and strong authentication methods give zero-trust adopters strong Layer 7 threat prevention. That’s why a growing number of enterprises of all types and sizes are embracing the approach. Unfortunately, many security leaders continue to deploy zero-trust incorrectly, weakening its power and opening the door to all types of bad actors.  To prevent the mistakes that many organizations make when planning a transition to zero-trust security, here’s a look at six common misconceptions you need to avoid.  Mistake One: A single security vendor can supply everything  One vendor can’t provide everything your organization needs to implement a zero-trust architecture strategy, warns Tim Morrow, situational awareness technical manager in the CERT division of Carnegie Mellon University’s Software Engineering Institute.  “It’s dangerous to accept zero-trust architecture vendors’ marketing material and product information without considering whether it will meet your organization’s security priority needs and its capability to implement and maintain the architecture,” Morrow says in an email interview.  Related:What Does Biden’s New Executive Order Mean for Cybersecurity? Mistake Two: Zero-trust is too costly to implement  Aside from the costs saved by reducing the risk of a breach, zero-trust can help save long term expenses by improving asset utilization, operational effectiveness, and reduced compliance costs, says Dimple Ahluwalia, vice president and managing partner, security consulting and systems integration at IBM via email.  Mistake Three: Underestimating the technical challenges  IT and security leaders often overlook the need to implement and manage foundational security practices before establishing a zero-trust architecture, says Craig Zeigler, an incident response senior manager at accounting and business advisory firm Crowe, in an online interview. They may also fail to identify potential gaps, such as vendor-related issues, and ensure that the chosen solution is not only compatible with their specific needs but also equipped with the appropriate controls to provide equal or greater security. “In essence, without security leaders having a thorough understanding of their team and endpoints, implementing zero trust becomes a daunting task.”  Mistake Four: Failing to align zero-trust architecture strategy with overall enterprise assets and needs  Related:3 Strategies For a Seamless EU NIS2 Implementation Cyberattacks are growing in number and severity. “A continuous vigil concerning the organization’s security operations … must be maintained,” Morrow says. The zero-trust architecture must fully mesh with business operations and goals.  Understand your organization’s current assets — data, applications, infrastructure, and workflows — and set up a procedure to update this information periodically, Morrow advises. “Yearly updates of your organization’s assets will definitely no longer be enough.”  Organizations also need to remember that their business and reputation are on the line each and every day, Morrow says. “Not doing your best to reduce your organization’s risks to cyber threats can be very costly.”  Mistake Five: Viewing zero-trust as a solution rather than an ongoing strategy  It’s essential for security leaders to understand that zero-trust is not a static goal, but a dynamic, evolving strategy, says Ricky Simpson, solutions director at Quorum Cyber, a Microsoft cybersecurity partner. “Building a culture that prioritizes security at every level, from executive leadership to individual employees, is critical to the success of zero-trust initiatives,” he notes via email.  Related:Microsoft Rings in 2025 With Record Security Update Simpson feels that continuous education, regular assessments, and a willingness to adapt to new threats and technologies are key components within a sustainable zero-trust framework. “By fostering collaboration and maintaining a vigilant stance, security leaders can better protect their organizations in an increasingly complex and hostile digital environment.”  Mistake Six: Believing that implementing zero-trust is simply a one-and-done project  Zero-trust is actually a holistic and strategic approach to security that requires ongoing evaluations of trust and threats. “It’s not a quick fix but a long-term shift in strategy,” says Shane O’Donnell, vice president of Centric Consulting’s cybersecurity practice.  Underestimating zero-trust implementation poses two major risks, notes O’Donnell in an email interview. First, unrealistic timelines and expectations can derail project planning, exhaust budgets, and drain resources. Second, hasty or flawed execution can actually create new security vulnerabilities, defeating the very purpose of a zero-trust architecture.  O’Donnell says this misconception can be addressed through continuous education and understanding. “It’s vital for security leaders to realize that transitioning to a zero-trust architecture means substantial technological and organizational changes,” he says. “This strategy should be treated as an ongoing commitment that lasts way beyond the initial set-up stage.”  source

What Security Leaders Get Wrong About Zero-Trust Architecture Read More »

Quick ROI vs. innovation: CIOs face competing AI goals

The survey shows a significant split in approaches to AI investment, with some companies focused on quick ROI by deploying off-the-shelf, easy-to-implement AI tools, and others investing in innovative AI projects that they hope will give them major competitive advantages down the line, observers suggest. When asked about their motivations for deploying AI, the survey respondents were split along three lines: 28% said ROI was their primary focus, 31% said innovation was most important, and 41% said ROI and innovation were equal drivers of their AI spending. Manish Goyal, vice president, senior partner, and global AI and analytics leader at IBM Consulting, notes that, while short-term gains are attractive, the power of AI is in using it to create competitive advantages, such as deploying new products and services, creating new revenue streams, or “delighting” customers. source

Quick ROI vs. innovation: CIOs face competing AI goals Read More »

The path forward for gen AI-powered code development in 2025

This article is part of VentureBeat’s special issue, “AI at Scale: From Vision to Viability.” Read more from this special issue here. This article is part of VentureBeat’s special issue, “AI at Scale: From Vision to Viability.” Read more from the issue here. Three years ago AI-powered code development was mostly just GitHub Copilot.  GitHub’s AI-powered developer tool amazed developers with its ability to help with code completion and even generate new code. Now, at the start of 2025, a dozen or more generative AI coding tools and services are available from vendors big and small. AI-powered coding tools now provide sophisticated code generation and completion features, and support an array of programming languages and deployment patterns.  The new class of software development tools has the potential to completely revolutionize how applications are built and delivered — or so many vendors claim. Some observers have worried that these new tools will spell the end for professional coders as we know it. What’s the reality? How are tools actually making an impact today? Where do they fall short and where is the market headed in 2025? “This past year, AI tools have become increasingly essential for developer productivity,” Mario Rodriguez, chief product officer at GitHub, told VentureBeat.  The enterprise efficiency promise of gen AI-powered code development So what can gen AI-powered code development tools do now? Rodriguez said that tools like GitHub Copilot can already generate 30-50% of code in certain workflows. The tools can also help automate repetitive tasks and assist with debugging and learning. They can even serve as a thought partner to help developers go from idea to application in minutes. “We’re also seeing that AI tools not only help developers write code faster, but also write better quality code,” Rodriguez said. “In our latest controlled developer study we found that code written with Copilot is not only easier to read but also more functional — it’s 56% more likely to pass unit tests.” While GitHub Copilot is an early pioneer in the space, other more recent entrants are seeing similar gains. One of the hottest vendors in the space is Replit, which has developed an AI-agent approach to accelerate software development. According to Amjad Masad, CEO of Replit, gen AI-powered coding tools can make coding anywhere between 10-40% faster for professional engineers. “The biggest beneficiaries are front-end engineers, where there is so much boilerplate and repetition in the work,” Masad told VentureBeat. “On the other hand, I think it’s having less impact on low-level software engineers where you have to be careful with memory management and security.” What’s more exciting for Masad isn’t the impact of gen AI coding on existing developers, but rather the impact it can have on others. “The most exciting thing, at least from the perspective of Replit, is that it can make non-engineers into junior engineers,” Masad said. “Suddenly, anyone can create software with code. This can change the world.” Certainly gen AI-powered coding tools have the potential to democratize development and improve professional developers’ efficiency. That said, it isn’t a panacea and it does have some limitations, at least for now. “For simple, isolated projects, AI has made remarkable progress,” Itamar Friedman, cofounder and CEO of Qodo, told VentureBeat. Qodo (formerly Codium AI) is building out a series of AI agent-driven enterprise application development tools. Friedman said that using automated AI tools, anyone can now create basic websites faster and with more personalization than traditional website builders can.  “However, for complex enterprise software that powers Fortune 5000 companies, AI isn’t yet capable of full end-to-end automation,” Friedman noted. “It excels at specific tasks, like question-answering on complex code, line completion, test generation and code reviews.” Friedman argued that the core challenge is in the complexity of enterprise software. In his view, pure large language model (LLM) capabilities on their own can’t handle this complexity.  “Simply using AI to generate more lines of code could actually worsen code quality — which is already a significant problem in enterprise settings,” Friedman said. “So the reason that we don’t see huge adoption yet is because there are still more advances in technology, engineering and machine learning that need to be achieved in order for AI solutions to fully understand complicated enterprise software.” Friedman said that Qodo is addressing that issue by focusing on understanding complex code, indexing it, categorizing it and understanding organizational best practices to generate meaningful tests and code reviews. Another barrier to broader adoption and deployment is legacy code. Brandon Jung, VP of ecosystem at gen AI development vendor Tabnine, told VentureBeat that he sees a lack of quality data preventing wider adoption of AI coding tools.  “For enterprises, many have large, old code bases and that code is not well understood,” Jung said. “Data has always been critical to machine learning and that is no different with gen AI for code.” Towards fully agentic AI-driven code development in 2025 No single LLM can handle everything required for modern enterprise software development. That’s why leading vendors have embraced an agentic AI approach. Qodo’s Friedman expects that in 2025 the features that seemed revolutionary in 2022 — like autocomplete and simple code chat functions — will become commoditized.  “The real evolution will be towards specialized agentic workflows — not one universal agent, but many specialized ones each excelling at specific tasks,” Friedman said. “In 2025 we’re going to see many of these specialized agents developed and deployed until eventually, when there are enough of these, we’re going to see the next inflection point, where agents can collaborate to create complex software.” It’s a direction that GitHub’s Rodriguez sees as well. He expects that throughout 2025, AI tools will continue to evolve to assist developers throughout the entire software lifecycle. That’s more than just writing code; it’s also building, deploying, testing, maintaining and even fixing software. Humans will not be replaced in this process, they will be augmented with AI that will make things faster and more efficient. “This is going to be accomplished with the

The path forward for gen AI-powered code development in 2025 Read More »

Upgrade Your Sales Game: Three Key Takeaways From Forrester’s Review Of 13 North American Investing Sales Websites

The desktop website is a crucial part of the sales journey for a new investing-account customer. More than half of US and Canadian online adults who opened an investing account use a computer to research the product. To better understand and identify investing sales best practices, we evaluated the sales digital experiences of 13 North American investment firms for opening a new self-directed account across the first four phases of the customer lifecycle (discover, evaluate, commit, and initiate). Our research reveals that: Investment firms are not making it easy to get help when needed. A prospective customer may need assistance with the nuances of a self-directed account such as fees, trading options, and its rules and regulations. Live chat is readily available from most firms for existing customers. It should also be available to prospects throughout the process of researching, evaluating, and buying a product; most firms in our evaluation failed to offer this, and they are missing an opportunity to address a prospect’s concerns early in the customer lifecycle and increase the chance of converting interest into a sale. Investment firms need to make it easier to navigate the website. Investing websites have a ton of resources: product information, tutorials, articles, and more. It can be difficult for a prospect to find what they need quickly. Many firms in our evaluation struggled with having a consistent and easy-to-understand navigation menu. Their search features were likewise not easily accessible, failing to yield relevant results for a query. A prospect who cannot navigate the website effectively or find what they are looking for will quickly lose confidence in the brand and look for another firm that can meet their needs. Investment firms in the US lag behind Canadian firms. Firms in the US scored higher than the Canadian firms in just nine of the 26 digital experience criteria we used. US firms performed below-average and struggled across most criteria in the buying and onboarding phases of the customer lifecycle. Specifically, the majority of US firms lacked adequate access to human help from within the product application, relevant cross-selling capabilities in the application, and informative post-application communication. For a deeper dive into our Digital Experience Review™ research, further insights from our reviews, and specific best-practice examples, Forrester clients can check out the full report here: The Forrester Digital Experience Review™: North American Investing Sales Sites, Q1 2025. If you are interested in evaluating your own firm’s digital sales experience and want to use the same criteria we did for our reviews, be sure to check out our interactive self-assessment tool: The Forrester Investing Sales Website Digital Experience Assessment. If you want to discuss any of our findings, or the results of your digital experience self-assessment, please reach out to your Forrester account team. source

Upgrade Your Sales Game: Three Key Takeaways From Forrester’s Review Of 13 North American Investing Sales Websites Read More »

Nvidia intros new guardrail microservices for agentic AI

Nvidia today added new Nvidia inference microservices (NIMs) for AI guardrails to its Nvidia NeMo Guardrails software tools. The new microservices aim to help enterprises improve accuracy, security, and control of agentic AI applications, addressing a key reservation IT leaders have about adopting the technology. “One-in-ten organizations are already using AI agents today, and more than 80% plan to adopt AI agents within the next three years,” Kari Briski, vice president of enterprise AI models, software, and services at Nvidia, said in a press conference Wednesday. “This means that you don’t just build agents for accuracy of the task, but you must also evaluate AI agents to meet security, data privacy, and governance requirements, and that can be a major barrier to deployment.” Briski explained that beyond trust, safety, security, and compliance, successfully deploying AI agents in production requires they be performant. They must stay on track while remaining fast and responsive in their interactions with end users and other AI agents. To that end, Nvidia today introduced three new NIMs for NeMo Guardrails aimed at content safety, topic control, and jailbreak detection. source

Nvidia intros new guardrail microservices for agentic AI Read More »

Transforming trade operations with work orchestration, automation, and genAI

Trade operations teams face increasing pressure to tighten processes, reduce costs, and ensure compliance—all while managing complex infrastructures and siloed systems. But there’s good news: Automated orchestration solutions and generative AI (genAI) are helping teams address these challenges and reshape the trade operations landscape. One significant challenge companies face is the shift to T+1 settlement cycles, which reduces the time to complete a trade from two business days to one. This tighter timeframe forces operations teams to adapt quickly; past strategies of assigning more employees to handle increasing volumes no longer suffice. “Our clients have been through a transformation of offshoring, nearshoring, and trying to remove costs,” said Mark Wilson, Managing Director, Capital Markets at Accenture, in a recent panel discussion. “But the need to continue to do more with less is greater than it has ever been.”  This means firms must adapt work orchestration solutions that integrate processes across legacy systems, improving efficiency and minimizing risk. But organizations don’t need to overhaul their entire infrastructure to take advantage of the advances that such orchestration offers, said Ryan Clare, Head of Corporate Transformation and Automation at Jefferies, during the panel discussion. Instead, he suggests layering orchestration tools on top of legacy platforms. “The legacy platforms do their job very well,” he said. “You build a layer on top that just connects into them.” This “fabric layer,” as he calls it, enables greater automation while maintaining essential core operations. This helps avoid costly overhauls. Generative AI in Action: Adding real value GenAI also plays important role in transforming operations and is already delivering tangible benefits. “We’ve used genAI for email automation—reading emails, doing analysis, inserting the results into workflows, and generating responses,” said Wilson. “For example, if a client asks about the status of a trade, genAI can pull the information from the order management system, generate the response, and place it in the user’s outbox for review.” These capabilities reduce manual effort, ensure accuracy, and streamline communication. “It just allows you to start and finish the task much quicker and get to the answer faster,” said Clare. While genAI is often hyped as revolutionary and with the potential to replace staff, John Almeida, Global Head of Wealth and Asset Management at ServiceNow, said he thinks genAI will instead be a technology used to enhance productivity. “I don’t believe genAI is going to replace people,” he said. “It complements humans by making them more efficient—handling low-value tasks like summarizing documents so employees can focus on higher-value, customer-facing work.” Transforming Trade Operations One Process at a Time: A Panel Discussion Accenture + ServiceNow: A work orchestration game-changer Accenture has developed a new Intelligent Work Orchestration solution for the capital markets industry. Developed on the ServiceNow platform, Intelligent Work Orchestration bridges operational siloes with a single pane of glass—a centralized hub where teams can access everything they need to track progress and identify pain points. Accenture’s solution is built around three core pillars: End-to-end process management to automate core trade processes Gen AI-powered email automation to streamline communication Centralized command centers to offer real-time insights for faster decision making To learn more about how Accenture and ServiceNow are driving operational efficiency across capital markets and financial services or get in touch visit our resource page. source

Transforming trade operations with work orchestration, automation, and genAI Read More »

9th Circ. Revives H-1B Fraud Charges Against CEO, HR Head

By Rae Ann Varona ( January 15, 2025, 10:42 PM EST) — The Ninth Circuit on Tuesday revived criminal visa fraud charges against a semiconductor company’s CEO and human resources manager, saying in a published opinion that the government could protect itself against fraud, even through questions it had no right asking…. Law360 is on it, so you are, too. A Law360 subscription puts you at the center of fast-moving legal issues, trends and developments so you can act with speed and confidence. Over 200 articles are published daily across more than 60 topics, industries, practice areas and jurisdictions. A Law360 subscription includes features such as Daily newsletters Expert analysis Mobile app Advanced search Judge information Real-time alerts 450K+ searchable archived articles And more! Experience Law360 today with a free 7-day trial. source

9th Circ. Revives H-1B Fraud Charges Against CEO, HR Head Read More »

4 bold AI predictions for 2025

This article is part of VentureBeat’s special issue, “AI at Scale: From Vision to Viability.” Read more from this special issue here. This article is part of VentureBeat’s special issue, “AI at Scale: From Vision to Viability.” Read more from the issue here. As we wrap up 2024, we can look back and acknowledge that artificial intelligence has made impressive and groundbreaking advances. At the current pace, predicting what kind of surprises 2025 has in store for AI is virtually impossible. But several trends paint a compelling picture of what enterprises can expect in the coming year and how they can prepare themselves to take full advantage. The plummeting costs of inference In the past year, the costs of frontier models have steadily decreased. The price per million tokens of OpenAI’s top-performing large language model (LLM) has dropped by more than 200 times in the past two years.  One key factor driving down the price of inference is growing competition. For many enterprise applications, most frontier models will be suitable, which makes it easy to switch from one to another, shifting the competition to pricing. Improvements in accelerator chips and specialized inference hardware are also making it possible for AI labs to provide their models at lower costs.  To take advantage of this trend, enterprises should start experimenting with the most advanced LLMs and build application prototypes around them even if the costs are currently high. The continued reduction in model prices means that many of these applications will soon be scalable. At the same time, the models’ capabilities continue to improve, which means you can do a lot more with the same budget than you could in the past year.  The rise of large reasoning models The release of OpenAI o1 has triggered a new wave of innovation in the LLM space. The trend of letting models “think” for longer and review their answers is making it possible for them to solve reasoning problems that were impossible with single-inference calls. Even though OpenAI has not released o1’s details, its impressive capabilities have triggered a new race in the AI space. There are now many open-source models that replicate o1’s reasoning abilities and are extending the paradigm to new fields, such as answering open-ended questions. Advances in o1-like models, which are sometimes referred to as large reasoning models (LRMs), can have two important implications for the future. First, given the immense number of tokens that LRMs must generate for their answers, we can expect hardware companies to be more incentivized to create specialized AI accelerators with higher token throughput.  Second, LRMs can help address one of the important bottlenecks of the next generation of language models: high-quality training data. There are already reports that OpenAI is using o1 to generate training examples for its next generation of models. We can also expect LRMs to help spawn a new generation of small specialized models that have been trained on synthetic data for very specific tasks. To take advantage of these developments, enterprises should allocate time and budget to experimenting with the possible applications of frontier LRMs. They should always test the limits of frontier models, and think about what kinds of applications would be possible if the next generation of models overcome those limitations. Combined with the ongoing reduction in inference costs, LRMs can unlock many new applications in the coming year. Transformer alternatives are picking up steam The memory and compute bottleneck of transformers, the main deep learning architecture used in LLMs, has given rise to a field of alternative models with linear complexity. The most popular of these architectures, the state-space model (SSM), has seen many advances in the past year. Other promising models include liquid neural networks (LNNs), which use new mathematical equations to do a lot more with many fewer artificial neurons and compute cycles.  In the past year, researchers and AI labs have released pure SSM models as well as hybrid models that combine the strengths of transformers and linear models. Although these models have yet to perform at the level of the cutting-edge transformer-based models, they are catching up fast and are already orders of magnitude faster and more efficient. If progress in the field continues, many simpler LLM applications can be offloaded to these models and run on edge devices or local servers, where enterprises can use bespoke data without sending it to third parties. Changes to scaling laws The scaling laws of LLMs are constantly evolving. The release of GPT-3 in 2020 proved that scaling model size would continue to deliver impressive results and enable models to perform tasks for which they were not explicitly trained. In 2022, DeepMind released the Chinchilla paper, which set a new direction in data scaling laws. Chinchilla proved that by training a model on an immense dataset that is several times larger than the number of its parameters, you can continue to gain improvements. This development enabled smaller models to compete with frontier models with hundreds of billions of parameters. Today, there is fear that both of those scaling laws are nearing their limits. Reports indicate that frontier labs are experiencing diminishing returns on training larger models. At the same time, training datasets have already grown to tens of trillions of tokens, and obtaining quality data is becoming increasingly difficult and costly.  Meanwhile, LRMs are promising a new vector: inference-time scaling. Where model and dataset size fail, we might be able to break new ground by letting the models run more inference cycles and fix their own mistakes. As we enter 2025, the AI landscape continues to evolve in unexpected ways, with new architectures, reasoning capabilities, and economic models reshaping what’s possible. For enterprises willing to experiment and adapt, these trends represent not just technological advancement, but a fundamental shift in how we can harness AI to solve real-world problems. source

4 bold AI predictions for 2025 Read More »