VentureBeat

Intel unveils new Core Ultra processors with 2X to 3X performance on AI apps

Intel unveiled new Intel Core Ultra 9 processors today at CES 2025 with up to two to three times the edge AI performance of its prior chips. Intel said it is pushing the boundaries of AI performance and power efficiency for businesses and consumers, ushering in the next era of AI computing.

In other performance metrics, Intel said the Core Ultra 9 processors are up to 5.8 times faster in media performance, 3.4 times faster in end-to-end video analytics workloads combining media and AI, and 8.2 times better in performance per watt than prior chips. The chips under the Intel Core Ultra 9 and Core i9 labels were previously codenamed Arrow Lake H, Meteor Lake H, Arrow Lake S and Raptor Lake S Refresh.

A fresh start for Intel

Intel hopes to kick off the year better than it did 2024. CEO Pat Gelsinger resigned last month without a permanent successor after a variety of struggles, including mass layoffs, manufacturing delays and poor execution on chips, including gaming bugs in chips launched during the summer.

Intel Core Ultra Series 2

Michael Masci, vice president of product management in Intel's Edge Computing Group, said in a briefing that AI, once the domain of research labs, is being integrated into every aspect of our lives, including AI PCs, where the AI processing is done on the computer itself rather than in the cloud. AI is also being processed in data centers in big enterprises, from retail stores to hospital rooms.

“As CES kicks off, it’s clear we are witnessing a transformative moment,” he said. “Artificial intelligence is moving at an unprecedented pace.”

The new processors include the Intel Core Ultra 200H/U/S models, with up to 99 TOPS (a measure of AI performance) for the H versions. Other models being launched carry the Intel Core 200S, 200H and 100U names, as well as the Intel Core 3 processor and Intel Processor brands. The chips have improvements for data security, and they come with a built-in Intel Arc GPU with Intel XMX engines or Intel graphics. Intel Core Ultra 200V series processors are focused on the enterprise.

The flagship Intel Core Ultra 9 processor 285H, formerly codenamed Arrow Lake H, has 2.2 times higher performance in Procyon AI Computer Vision, 3.3 times higher performance in Llama 3 8B and 2.3 times higher performance in Stable Diffusion 1.5 than the prior chip, the Intel Core Ultra 9 processor 185H (codenamed Meteor Lake H).

Intel is now under the temporary leadership of David Zinsner and Michelle Johnston Holthaus as co-CEOs. Zinsner is the company’s CFO, while Holthaus is the general manager of Intel’s client computing group.

“Intel Core Ultra processors are setting new benchmarks for mobile AI and graphics, once again demonstrating the superior performance and efficiency of the x86 architecture as we shape the future of personal computing,” said Holthaus in a statement. “The strength of our AI PC product innovation, combined with the breadth and scale of our hardware and software ecosystem across all segments of the market, is empowering users with a better experience in the traditional ways we use PCs for productivity, creation and communication, while opening up completely new capabilities with over 400 AI features. And Intel is only going to continue bolstering its AI PC product portfolio in 2025 and beyond as we sample our lead Intel 18A product to customers now ahead of volume production in the second half of 2025.”

The Intel Core Ultra processor (V-SKU) platform has NPU performance of up to 48 TOPS, plus up to 67 TOPS from the GPU. The V-SKUs have eight processor cores and run at a P-core max turbo frequency of up to 5.1 GHz. Intel said its AI PCs use GPUs for high-throughput AI workloads, NPUs for low-power AI workloads and CPUs for fast-response, low-latency AI workloads. There are other variations on the Intel Core Ultra as well.

Smart vehicles and more

In other CES 2025 news, Intel is also unveiling its solutions for smart vehicles. Jack Weast, Intel Fellow and vice president of Intel Automotive, will unveil Intel’s next-generation architecture with AI inside for vehicles on Tuesday, January 7, at 3:30 p.m. Intel says its whole-vehicle approach is built to empower the next generation of intelligent software-defined vehicles. Weast’s announcement will showcase how Intel’s combination of AI-enhanced high-performance compute, intelligent power management and software-defined zonal controllers built on an open ecosystem enables a more sustainable, scalable and profitable automotive future.

Intel also showed off its Intel Core Ultra 200V series processors (announced in September) for business users, and it updated its Intel vPro technology for IT departments, including a Core Ultra 9 vPro refresh. For businesses striving to stay ahead in the AI era, Intel introduced Intel Core Ultra 200V series processors with Intel vPro. The company says these new processors offer dramatic performance gains, enhanced efficiency, and robust security and manageability features to help modernize IT environments.

The new Intel Core Ultra 200V series mobile processors with Intel vPro are aimed at empowering businesses with AI-driven productivity and enhanced IT management. The combination of performance, efficiency and industry-leading business computing with advanced security and manageability, all while enabling a seamless Microsoft Copilot+ experience, helps deliver a robust platform for modern workplaces, Intel said. It noted that the latest HP EliteBook X laptop with an Intel Core Ultra 7 268V processor gets up to 10.5 hours of battery life using Microsoft Teams, compared with similar rival machines that post shorter battery life. On Microsoft 365 apps, it gets up to 20.3 hours of battery life.

Intel has partnered with Microsoft to continue advancing AI-driven innovation, enhanced security and superior performance into 2025. Copilot+ PCs powered by Intel Core Ultra 200V series processors unlock next-generation AI productivity while delivering long battery life, Intel said. “Copilot+ PCs offer exceptional performance, battery life, [and] enhanced AI experiences, and are all Secured-core PCs with the Microsoft


Collaboration amplified: Driving business value with cross-company process intelligence

Process intelligence that improves key supply chain, distribution, product, finance and customer operations has brought enormous, lasting business value to organizations across the globe. BMW, Allianz, GE Healthcare, the State of Oklahoma and others of all sizes across all industries have optimized business performance, freed up billions in cash savings and reduced their carbon footprint by using state-of-the-art technology to mine, model, orchestrate and optimize just a single process or system.

Now, leading-edge organizations are starting to recognize that improved efficiency, resilience, agility, digital transformation and other benefits can be significantly amplified by collaborating with business partners and extending process intelligence beyond their company walls.

A peek at the process intelligence frontier

Want a glimpse of the frontier of high-value enterprise process intelligence?

- Three of Europe's top electronics suppliers are streamlining operations by building transparency into their shared order management and procurement processes.
- Process intelligence is letting an NGO and researchers analyze previously siloed data from the juvenile justice and mental health systems in a large U.S. state, providing systemic insights that can improve outcomes for at-risk youth and families.
- A pre-built AI Collaboration Agent, powered by Rollio, uses a natural language interface to help humans resolve process exceptions through better cross-organizational decision-making.

In different ways, each example reaches beyond organizational and company boundaries, and beyond a handful of in-house experts, into profitable collaboration with business partners and a larger population of everyday users.

Cross-company collaboration amplifies value for all

Integrating and democratizing value across networks is a logical evolution, says Eugenio Cassiano, SVP of strategy and innovation at Celonis. Headquartered in Munich and New York, the private firm (valued at $13 billion) is a pioneer and global leader in process intelligence that counts more than 30% of the Fortune Global 500 as customers.

Collaborating on processes can produce many shared benefits and boost the “top, bottom and green lines” of companies, business partners and industries, Cassiano says. But there's a big catch. Processes have become multi-layered and extremely complex, he explains. Today, no single company — including Celonis — can possess all the technological and industry knowledge needed to profitably extend process improvements to diverse partners. Success requires working with and across partner ecosystems to create deep and lasting value. “Scale is not just scaling with your own people,” he explains. “It doesn’t just take a village – it takes an ecosystem.”

“Co-innovators” are key to process expansion

In this ever-changing environment, Celonis believes the smartest way to meet the challenge is by empowering a worldwide community of partners to “co-innovate” process improvements. Involving customers, consultants, developers, integrators and others has long been foundational to the company's ambitious mission to “make processes work for the people, companies and the planet.”

In the last year, this community (which Celonis calls “change makers”) has significantly expanded, with 150 new partners starting 1,300 new projects. During the same period, 65,000 students have trained at more than 700 Celonis partner alliance universities. They'll become the next generation of process leaders, with an inside track to the new career paths and opportunities in process intelligence fueled by the ongoing AI boom.

Advanced tech expands process development and use

In addition, Celonis considers it important to equip community partners with state-of-the-art technologies and tools. From its start, Cassiano notes, Celonis has championed and developed open, vendor-agnostic platforms and products that work with the likes of Salesforce, SAP, Oracle and hundreds of other enterprise vendors of ERP, CRM and core business systems.

At Celosphere, its annual user event held this fall, the company introduced a host of new offerings and enhancements designed to make it easier for partners to develop, co-develop and use process optimizations within and across company boundaries, including:

- Celonis Data Core: Called Celocore for short, it helps customers get data into Celonis more easily and, once it's there, lets them perform transforms, loads and queries more quickly. The company said Celocore delivers up to 20x the performance of the competition.
- New GenAI-powered user experience: This simplifies data ingestion and dashboard building through GenAI-powered assistants.
- New use-case-specific apps: These applications combine partners' industry and domain expertise with Celonis' app-building best practices and the latest platform capabilities.
- Celonis AgentC: This suite of tools, integrations and partnerships enables the company's community to develop AI agents in the leading AI agent platforms, such as Microsoft Copilot Studio, IBM watsonx Orchestrate and Amazon Bedrock Agents. It also allows them to use AI agents pre-built by partners. One early user, Campari Group, the legendary spirits maker, will use the Celonis Process Collaboration Agent, powered by Rollio, to speed up the removal of credit blocks for sales orders. Cosentino, a leading manufacturer of design and architectural surfaces, is also using a Celonis AI assistant to analyze blocked sales orders, enabling credit managers to process up to 5x more orders per day.
- Celonis Networks: A new offering, announced in beta, designed to extend process intelligence beyond company walls by connecting organizations. Speaking about Networks, Cassiano said: “Celonis provides a platform for companies and industries to come together, share information, coordinate activities and co-create solutions to tackle complex problems that no single entity could solve on its own, and turn intelligence into business value.”

Network capabilities help partners solve complex problems in several key ways:

- A common taxonomy and language reduce the time, effort and friction of communicating and collaborating across companies and industries.
- Federated data sharing gives contributors a holistic view of problems and insights unavailable in a single organization.
- Process orchestration enables coordinated workflows and activities across organizations.

While most cross-organizational projects and partner-developed apps are in early stages, initial results are noteworthy.

Electronics partners streamline procurement

Despite using EDI technology, Conrad Electronic, a 100-year-old family business, and its European suppliers, Schukat Electronic and TD SYNNEX, struggled with outdated pricing, delivery discrepancies and various data mismatches. Implementing a cross-company process network has yielded big payoffs, the partners said. Shared, standardized and unified data structures mean purchase orders are communicated and translated seamlessly across the supply chain. Prices and orders


Nvidia unveils Isaac GR00T blueprint to accelerate humanoid robotics

Nvidia announced the Isaac GR00T blueprint to accelerate humanoid robotics development. During the CES 2025 keynote of Nvidia CEO Jensen Huang, the company said that Isaac GR00T workflows for synthetic data and Nvidia Cosmos world foundation models will supercharge the development of general humanoid robots.

Over the next two decades, the market for humanoid robots is expected to reach $38 billion. To address this significant demand, particularly in industrial and manufacturing sectors, Nvidia is releasing a collection of robot foundation models, data pipelines and simulation frameworks to accelerate next-generation humanoid robot development.

The Nvidia Isaac GR00T blueprint for synthetic motion generation helps developers generate exponentially large synthetic motion datasets to train their humanoids using imitation learning. Imitation learning, a subset of robot learning, enables humanoids to acquire new skills by observing and mimicking expert human demonstrations. Collecting such extensive, high-quality datasets in the real world is tedious, time-consuming and often prohibitively expensive. Implementing the Isaac GR00T blueprint for synthetic motion generation allows developers to easily generate exponentially large synthetic datasets from just a small number of human demonstrations.

Starting with the GR00T-Teleop workflow, users can tap the Apple Vision Pro to capture human actions in a digital twin. These human actions are mimicked by a robot in simulation and recorded as ground truth. The GR00T-Mimic workflow then multiplies the captured human demonstrations into a larger synthetic motion dataset. Finally, the GR00T-Gen workflow, built on the Nvidia Omniverse and Nvidia Cosmos platforms, exponentially expands this dataset through domain randomization and 3D upscaling. The resulting dataset can be used as an input to the robot policy, which teaches robots how to move and interact with their environment effectively and safely in Nvidia Isaac Lab, an open-source, modular framework for robot learning.

World Foundation Models Narrow the Sim-to-Real Gap

Nvidia also announced Cosmos at CES, a platform featuring a family of open, pretrained world foundation models purpose-built for generating physics-aware videos and world states for physical AI development. It includes autoregressive and diffusion models in a variety of sizes and input data formats. The models were trained on 18 quadrillion tokens, including 2 million hours of autonomous driving, robotics, drone footage and synthetic data.

In addition to helping generate large datasets, Cosmos can reduce the simulation-to-real gap by upscaling simulated 3D imagery toward photorealism. Combining Omniverse, a developer platform of application programming interfaces and microservices for building 3D applications and services, with Cosmos is critical, because it helps minimize the hallucinations commonly associated with world models by providing crucial safeguards through highly controllable, physically accurate simulations.

An Expanding Ecosystem

Collectively, Nvidia Isaac GR00T, Omniverse and Cosmos are helping physical AI and humanoid innovation take a giant leap forward. Major robotics companies, including Boston Dynamics and Figure, have started adopting Isaac GR00T and demonstrating results with it.
Humanoid software, hardware and robot manufacturers can apply for early access to Nvidia’s humanoid robot developer program.
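To make the data-multiplication idea behind the GR00T-Mimic step concrete, here is a minimal sketch in plain Python. This is not the Isaac API; the function names, the jitter-based perturbation and the trajectory format are all invented for illustration.

```python
import random

def perturb(trajectory, noise=0.02, rng=random):
    """Jitter each joint position slightly to create a new, plausible variant."""
    return [[q + rng.uniform(-noise, noise) for q in frame] for frame in trajectory]

def multiply_demos(demos, variants_per_demo=1000, seed=0):
    """Expand a few recorded demonstrations into a large synthetic dataset."""
    rng = random.Random(seed)
    return [perturb(demo, rng=rng) for demo in demos for _ in range(variants_per_demo)]

# Three tiny demonstrations (2 frames x 3 joints each) become 3,000 trajectories.
demos = [[[0.0, 0.5, 1.0], [0.1, 0.6, 1.1]] for _ in range(3)]
dataset = multiply_demos(demos)
print(len(dataset))  # 3000
```

Real pipelines perturb much richer state (object poses, lighting and scene layout via domain randomization), but the shape of the trick is the same: a few expensive ground-truth demonstrations seed an arbitrarily large training set.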


Acer launches Acer Predator Helios AI gaming laptop with Nvidia’s RTX 5090 GPU

During Nvidia CEO Jensen Huang’s CES 2025 keynote, Acer unveiled the Acer Predator Helios AI gaming laptops and a new Predator monitor. Acer timed the announcement at the big tech trade show in Las Vegas to coincide with Nvidia’s unveiling of its GeForce RTX 5000 series graphics processing units (GPUs).

Acer Predator Helios 16 AI

The Acer Predator Helios 16 AI (PH16-73) is designed for gamers, early adopters and tech enthusiasts. The line offers up to an Intel Core Ultra 9 processor 275HX with an NPU for AI-accelerated performance and up to an Nvidia GeForce RTX 5070 Ti GPU. It can be configured with up to 64 GB of memory and supports up to 4 TB of PCIe Gen 5 storage, a massive amount of space for games, photos and movie libraries. Its 16-inch display with a fast 240 Hz refresh rate provides a stunning, bright and crystal-clear canvas for games and movies. It also supports G-Sync and Nvidia Advanced Optimus. Its per-key RGB keyboard features Acer’s MagKey 4.0 swappable mechanical switches for the WASD and arrow keys, improving trigger signal accuracy. The Acer Predator Helios 16 AI (PH16-73) will be available in North America in June, starting at $2,300.

Acer Predator Helios 18 AI

The Acer Predator Helios 18 AI (PH18-73) brings desktop-level performance, a large screen and top-quality immersive gaming to hardcore gamers and tech-savvy consumers who need maximum power for applications beyond gaming. The line offers up to an Intel Core Ultra 9 processor 275HX with an NPU for AI-accelerated performance and up to an Nvidia GeForce RTX 5090 Laptop GPU. It supports up to 192 GB of memory and up to 6 TB of PCIe Gen 5 storage. Providing a bright and crystal-clear canvas for games and movies, its 18-inch 4K Mini LED WQXGA display includes a 120 Hz refresh rate, up to 1,000 nits of brightness and a new dual-mode display feature that lets users seamlessly switch to FHD resolution at 240 Hz. It also supports G-Sync and Nvidia Advanced Optimus. Its per-key RGB keyboard features Acer’s MagKey 4.0 swappable mechanical switches for the WASD and arrow keys, improving trigger signal accuracy. The Acer Predator Helios 18 AI (PH18-73) will be available in North America in May, starting at $3,000.

Acer Predator Helios Neo 16S AI

The brand-new Acer Predator Helios Neo 16S AI offers maximum power, combining cutting-edge silicon and a brilliant OLED display with a 240 Hz refresh rate in a sleek new design. Striking a balance among performance, precision and style, it’s less than 19.9 millimeters thick at its thinnest point and is specifically designed for versatility, supporting demanding gaming and creative applications at a more accessible price point. It supports up to an Intel Core Ultra 9 processor 275HX with an integrated NPU for AI gaming and up to the new Nvidia GeForce RTX 5070 Ti Laptop GPU, with up to 32 GB of memory and 2 TB of storage. Its OLED WQXGA display features a fast 240 Hz refresh rate and a wide color gamut supporting 100% DCI-P3 for best-in-class visuals, along with Nvidia G-Sync technology and Nvidia Advanced Optimus. The Acer Predator Helios Neo 16S AI (PHN16S-71) will be available in North America in April, starting at $1,700.

Acer Predator XB323QX gaming monitor

The Acer Predator XB323QX gaming monitor features an expansive 31.5-inch 5K IPS display with a 144 Hz refresh rate and a 0.5 ms (GTG) response time. Nvidia G-Sync Pulsar supports stunningly clear images and buttery-smooth action. With true 10-bit color depth, the monitor displays cinematic visuals, further enhanced by 95% DCI-P3 and 99% sRGB color gamut support. It’s also equipped with DisplayPort 1.4 and two HDMI 2.1 ports. The Acer Predator XB323QX will be available in North America in Q3. Pricing will be disclosed closer to launch.


Google maps the future of AI agents: Five lessons for businesses

A new Google white paper, titled “Agents,” imagines a future where AI takes on a more active and independent role in business. Published without much fanfare in September, the 42-page document is now gaining attention on X.com (formerly Twitter) and LinkedIn. It introduces the concept of AI agents: software systems designed to go beyond today’s AI models by reasoning, planning and taking actions to achieve specific goals.

Unlike traditional AI systems, which generate responses based solely on pre-existing training data, AI agents can interact with external systems, make decisions and complete complex tasks on their own. “Agents are autonomous and can act independently of human intervention,” the white paper explains, describing them as systems that combine reasoning, logic and real-time data access.

The idea behind these agents is ambitious: They could help businesses automate tasks, solve problems and make decisions that were once handled exclusively by humans. The paper’s authors, Julia Wiesinger, Patrick Marlow and Vladimir Vuskovic, offer a detailed breakdown of how AI agents work and what they require to function. But the broader implications are just as important. AI agents aren’t merely an upgrade to existing technology; they represent a shift in how organizations operate, compete and innovate. Businesses that adopt these systems could see dramatic gains in efficiency and productivity, while those that hesitate may find themselves struggling to keep up.

Here are the five most important insights from Google’s white paper and what they could mean for the future of AI in business.

1. AI agents are more than just smarter models

Google argues that AI agents represent a fundamental departure from traditional language models. While models like GPT-4o or Google’s Gemini excel at generating single-turn responses, they are limited to what they’ve learned from their training data. AI agents, by contrast, are designed to interact with external systems, learn from real-time data and execute multi-step tasks. “Knowledge [in traditional models] is limited to what is available in their training data,” the paper notes. “Agents extend this knowledge through the connection with external systems via tools.”

This difference is not just theoretical. Imagine a traditional language model tasked with recommending a travel itinerary. It may suggest ideas based on general knowledge, but it lacks the ability to book flights, check hotel availability or adapt its recommendations based on user feedback. An AI agent can do all of these things, combining real-time information with autonomous decision-making.

This shift positions agents as a new type of digital worker capable of handling complex workflows. For businesses, this could mean automating tasks that previously required multiple human roles. By integrating reasoning and execution, agents could become indispensable for industries ranging from logistics to customer service.

A breakdown of how AI agents use extensions to access external APIs, such as the Google Flights API, for task execution. (Image Credit: Google)

2. A cognitive architecture powers their decision-making

At the heart of an AI agent’s capabilities is its cognitive architecture, which Google describes as a framework for reasoning, planning and decision-making. This architecture, known as the orchestration layer, allows agents to process information in cycles, incorporating new data to refine their actions and decisions.

Google compares this process to a chef preparing a meal in a busy kitchen. The chef gathers ingredients, considers the customer’s preferences and adapts the recipe as needed based on feedback or ingredient availability. Similarly, an AI agent gathers data, reasons about its next steps and adjusts its actions to achieve a specific goal.

The orchestration layer relies on advanced reasoning techniques to guide decision-making. Frameworks such as reasoning and acting (ReAct), chain-of-thought (CoT) and tree-of-thoughts (ToT) provide structured methods for breaking down complex tasks. For instance, ReAct enables an agent to combine reasoning and actions in real time, while ToT allows it to explore multiple possible solutions simultaneously.

These techniques give agents the ability to make decisions that are not only reactive but also proactive. According to the paper, this makes them highly adaptable and capable of managing uncertainty and complexity in ways that traditional models cannot. For enterprises, this means agents could take on tasks such as troubleshooting a supply chain issue or analyzing financial data with a level of autonomy that reduces the need for constant human oversight.

The flow of an AI agent’s decision-making process, from user input to tool execution and final responses. (Image Credit: Google)

3. Tools connect agents to the outside world

Traditional AI models are often described as “static libraries of knowledge,” limited to what they were trained on. AI agents, on the other hand, can access real-time information and interact with external systems through tools. This capability is what makes them practical for real-world applications. “Tools bridge the gap between the agent’s internal capabilities and the external world,” the paper explains. These tools include APIs, extensions and data stores, which allow agents to fetch information, execute actions and retrieve knowledge that evolves over time.

For example, an agent tasked with planning a business trip could use an API extension to check flight schedules, a data store to retrieve travel policies and a mapping tool to find nearby hotels. This ability to interact dynamically with external systems transforms agents from static responders into active participants in business processes.

Google also highlights the flexibility of these tools. Functions, for instance, allow developers to offload certain tasks to client-side systems, giving businesses more control over how agents access sensitive data or perform specific operations. This flexibility could be essential for industries like finance and healthcare, where compliance and security are critical.

A comparison of agent-side and client-side control, illustrating how AI agents interact with external tools like the Google Flights API. (Image Credit: Google)

4. Retrieval-augmented generation makes agents smarter

One of the most promising advancements in AI agent design is the integration of retrieval-augmented generation (RAG). This technique allows agents to query external data sources, such as vector databases or structured documents, when their training data falls short.
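As a rough illustration of the retrieval step behind RAG, here is a minimal, self-contained sketch. The documents and embedding vectors below are invented toy stand-ins, not anything from the white paper; a real system would use a trained embedding model and a vector database.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy document store: name -> embedding.
documents = {
    "travel-policy": [0.9, 0.1, 0.0],
    "expense-rules": [0.2, 0.8, 0.1],
    "security-faq":  [0.1, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(documents, key=lambda d: cosine(documents[d], query_embedding), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['travel-policy']
```

In a full agent, the top-scoring documents would be injected into the model's context before it generates an answer, grounding the response in knowledge the model was never trained on.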


Why 2025 will be the year of AI orchestration

In the tech world, we like to label periods as the year of (insert milestone here). This past year, 2024, was a year of broader experimentation in AI and, of course, agentic use cases.

As 2025 opens, VentureBeat spoke to industry analysts and IT decision-makers to see what the year might bring. For many, 2025 will be the year of agents, when all the pilot programs, experiments and new AI use cases converge into something resembling a return on investment. In addition, the experts VentureBeat spoke to see 2025 as the year AI orchestration will play a bigger role in the enterprise, as organizations plan to make managing AI applications and agents much more straightforward.

Here are some themes we expect to see more of in 2025.

More deployment

Swami Sivasubramanian, VP of AI and data at AWS, said 2025 will be the year of productivity, because executives will begin to care more about the costs of using AI. Proving productivity becomes essential, and this begins with understanding how multiple agents, both those inside internal workflows and those that touch other services, can be made better. “In an agentic world, workflows are going to be reimagined, and you start asking about accuracy and how do you achieve five times productivity,” he said.

Palantir chief architect Akshay Krishnaswamy agreed that decision-makers, especially those outside of the technology cluster, are beginning to get antsy about seeing the impact these AI investments will have on their businesses. “People are rightfully fatigued about more sandboxing, because it’s off the back of the whole data and analytics journey of the past 10 years, where people also did a ton of experimentation,” said Krishnaswamy. “If you’re an executive, you’re like, ‘this has to be the year I actually start to see some ROI, right?’”

An explosion of orchestration frameworks

Going into 2025, there is a greater need for infrastructure to manage multiple AI agents and applications. Chris Jangareddy, a managing director at Deloitte, told VentureBeat that next year will be very exciting, with LangChain facing competition from other AI companies looking to offer their own orchestration platforms. “A lot of tools are catching up to LangChain, and we’re going to see more new players come up,” Jangareddy said. “Even before organizations can think about multiagents, they’re already thinking about orchestration, so everyone is building that layer.”

Many AI developers turned to LangChain to start building out a traffic system for AI applications. But LangChain isn’t always the best solution for some companies, which is where newer options, including Microsoft’s Magentic or comparable players like LlamaIndex, come in. For 2025, expect an explosion of even more options for enterprises. “Orchestration frameworks are still very experimental, with LangChain and Magentic, so you can’t be heads down for just one,” said PwC global commercial technology and innovation officer Matt Wood. “Tooling in this space is still early, and it’s only going to grow.”

Better agents and more integrations

AI agents became the biggest trend for enterprises in 2024. As organizations gear up to deploy multiple agents into their workflows, the possibility of agents crossing from one system to another becomes more apparent. This is particularly true when enterprises are looking to demonstrate their agents’ full value to executives and employees.

Platforms like AWS’s Bedrock, and even Slack, offer connections to agents from Salesforce’s Agentforce or ServiceNow, making it easier to transfer context from one platform to another. However, understanding how to support these integrations, and teaching orchestrator agents to identify internal and external agents, will become an important task. As agentic workflows become more complex, the recent crop of more powerful reasoning models, like OpenAI’s recently announced o3 or Google’s Gemini 2.0, could make orchestrator agents more powerful.

However, all of this will be in vain if enterprises do not get their employees to actually use new AI tools in 2025. Don Vu, chief data and analytics officer at New York Life, told VentureBeat that the last-mile problem of employees choosing more manual methods over AI will continue into the next year. “The last mile problem is something that we’ve all stubbed our toe on in 2024, and understanding that change management, business process reengineering stuff that’s not maybe as sexy as building an agent that can do all these incredible things,” said Vu. “It’s harder to change human behavior than deploy an app.”
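As a loose sketch of the pattern these orchestration layers implement, consider a registry of agents and a router that dispatches each task to whichever agent claims its skill. The agent names and skills below are invented for illustration; no real framework's API is being shown.

```python
from typing import Callable

# Skill name -> agent callable. Agents may be internal or wrap external services.
AGENT_REGISTRY: dict[str, Callable[[str], str]] = {}

def register(skill: str):
    """Decorator that adds an agent to the registry under a skill name."""
    def wrap(fn):
        AGENT_REGISTRY[skill] = fn
        return fn
    return wrap

@register("summarize")
def internal_summarizer(task: str) -> str:
    return f"[internal agent] summary of: {task}"

@register("crm-update")
def external_crm_agent(task: str) -> str:
    return f"[external agent] handled: {task}"

def orchestrate(skill: str, task: str) -> str:
    """Route a task to the agent registered for the requested skill."""
    agent = AGENT_REGISTRY.get(skill)
    if agent is None:
        raise ValueError(f"no agent registered for skill {skill!r}")
    return agent(task)

print(orchestrate("summarize", "Q4 sales notes"))
print(orchestrate("crm-update", "log the renewal call"))
```

Production frameworks layer model-driven routing, context handoff and monitoring on top, but the core job is the same: know which agents exist, internal or external, and hand each task to the right one.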


Samsung spreads Vision AI across its 2025 TV portfolio

For this year’s lineup, Samsung said that AI will come to life in more ways than just great picture quality. The company is introducing AI-backed experiences to make your day simpler, more dynamic and just plain better. Announced at CES 2025, these experiences will help usher in a new era for Samsung TVs known as Vision AI. Vision AI will deliver better picture quality, optimized sound and new experiences that will change how you watch TV. And before I forget, here’s an interesting fact: 60% of Samsung TV owners play games each month.

In 2025, Samsung is upgrading features like AI Upscaling, Auto HDR Remastering and Adaptive Sound Pro. It is also introducing the new Color Booster Pro, which leverages AI to offer richer, more vibrant colors than ever before. The 2025 TVs will also get a whole suite of AI features designed to help you discover new content and learn more about what you’re watching. Click to Search can identify people, places or products on your screen and provide information tailored to you, in real time. With just one click of the new AI button on your SolarCell remote, you can learn who the actors are in a given scene, where that scene is taking place or even what clothing the characters are wearing.

The TVs can also take the dishes from movies or TV shows you’re watching and show you how to make them via recipes with Samsung Food. Leveraging the AI processor, it recognizes the food on your screen and provides recipes for bringing the dishes to life. Samsung Food can also analyze what’s in your fridge and build a shopping list of missing ingredients. Plus, you can purchase groceries or takeout using provider apps and monitor delivery right from your TV.

AI will also power security and accessibility features. Samsung AI Home Security transforms your TV into a smart security hub. It analyzes video feeds from your connected cameras and audio from your TV’s microphone to provide comprehensive home monitoring. It can detect unusual sounds and movements, such as falls or break-ins, to give you more peace of mind whether you’re at home or away. You’ll receive alerts and notifications on your phone or directly on your TV screen, helping you stay connected to your home while ensuring the safety and well-being of your loved ones. Plus, Samsung is the only manufacturer to offer Knox Matrix on its TV lineup, providing end-to-end encryption for all of your personal data.

On the accessibility front, Samsung is using AI to power new features like Live Translate, which can instantly translate closed captions on live broadcasts in up to seven languages. Samsung is also improving AI-based Voice Removal with Audio Subtitles, a feature for the visually impaired. The 2025 TVs will analyze subtitles, isolate voices and adjust reading speed for a seamless experience. Together, the AI-backed accessibility features are eliminating barriers and making Samsung TVs inclusive and accessible for everyone.

Finally, there’ll be more ways to control the TV lineup in 2025. Samsung has trained Bixby to better understand context and assist with multiple actions, like changing the channel and raising the volume at the same time. And then there’s Universal Gestures. While not an AI feature, it introduces a super cool new way to control your Samsung TV using prompts and hand gestures on your Galaxy Watch.

Samsung Odyssey G7 gaming monitor

Samsung released a bunch of its monitor announcements last week, but today it’s also revealing the Odyssey G7, a new addition to the Odyssey series. It is the industry’s first 40-inch 21:9 WUHD (5120×2160) gaming monitor. Its unique combination of a large, wide screen with a 1000R curvature and WUHD resolution provides extra dimension and a more detailed experience. The G7 is HDR10+ gaming certified, the latest premium HDR (high dynamic range) gaming technology, which automatically guarantees beautiful HDR graphics optimized for HDR displays. The G7 also supports VESA DisplayHDR 600 for rich, vibrant color expression, so users can enjoy all the details in their favorite games.

The back of the Odyssey G7 is pretty too, with a black finish and a three-sided bezel-less design that eliminates the need for a clunky dual-monitor arrangement in favor of a seamless, modern setup. Gamers will also be able to stay competitive in quick-action gameplay with its 1 ms GtG response time and 180 Hz refresh rate.

Samsung Neo QLED 8K TVs

The latest 83-inch Neo QLED 8K TV.

Samsung said its Neo QLED 8K TVs carry its flagship technologies for 2025, and it’s introducing two models, the QN990F and QN900F. Both ultra-premium TVs are packed with several firsts to deliver the pinnacle of immersive 8K viewing. After creating the industry’s first OLED with Glare-Free technology, Samsung is bringing it to the 8K lineup, helping you enjoy the highest-resolution picture in any room, bright or dark.

The QN990F will also feature a brand-new technology that will make cable clutter a thing of the past: the Wireless One Connect Box. It can transmit wirelessly from up to 10 meters away, even with obstacles in its path. Leveraging Wi-Fi 7 and omnidirectional technology, it doesn’t even need to face your TV to transmit 8K resolution at up to 120 Hz.

The company is also providing access to the Samsung Art Store on the QN990F, QN900F and several other models across the 2025 lineup. Now, more buyers than ever can create a gallery-like experience in their homes with access to more than 3,000 pieces from renowned museums and institutions across the globe. The Frame is aimed at displaying art. Owners of 2025 Samsung


Nvidia unveils Mega Omniverse blueprint for building industrial robot fleet digital twins

Nvidia unveiled its Mega Omniverse blueprint for building industrial robot fleet digital twins as part of a CES 2025 keynote speech by Nvidia CEO Jensen Huang. The new framework enables the next era of industrial AI and robot simulation, bringing software-defined testing and optimization to factories and warehouses, the company said.

According to Gartner, worldwide end-user spending on all IT products in 2024 was $5 trillion. This industry is built on a computing fabric of electrons; it is fully software-defined, accelerated and now generative AI-enabled. While huge, it’s a fraction of the larger physical industrial market that relies on the movement of atoms. “In the future, every factory will have a digital twin,” Huang said.

Today’s 10 million factories, nearly 200,000 warehouses and 40 million miles of highways form the “computing” fabric of our physical world. But that vast network of production facilities and distribution centers is still laboriously and manually designed, operated and optimized. In warehousing and distribution, operators face highly complex decision optimization problems: matrices of variables and interdependencies across human workers, robotic and agentic systems, and equipment. Unlike the IT industry, the physical industrial market is still waiting for its own software-defined moment. That moment is coming, Nvidia said.

Mega

The company today at CES announced Mega, an Omniverse blueprint for developing, testing and optimizing physical AI and robot fleets at scale in a digital twin before deployment into real-world facilities. Advanced warehouses and factories use fleets of hundreds of autonomous mobile robots, robotic arm manipulators and humanoids working alongside people. As systems of sensors and robot autonomy grow more complex, coordinated training in simulation is required to optimize operations, help ensure safety and avoid disruptions.

Mega offers enterprises a reference architecture of Nvidia accelerated computing, AI, Nvidia Isaac and Nvidia Omniverse technologies for developing and testing digital twins, where the AI-powered robot brains that drive robots, video analytics AI agents, equipment and more can be tested at enormous complexity and scale. The new framework brings software-defined capabilities to physical facilities, enabling continuous development, testing, optimization and deployment.

Developing AI Brains With a World Simulator for Autonomous Orchestration

With Mega-driven digital twins, including a world simulator that coordinates all robot activities and sensor data, enterprises can continuously update facility robot brains with intelligent routes and tasks for operational efficiency. The blueprint uses Omniverse Cloud Sensor RTX APIs, which let robotics developers render sensor data from any type of intelligent machine in the factory, simultaneously, for high-fidelity, large-scale sensor simulation. This allows robots to be tested in an effectively unlimited number of scenarios within the digital twin, using synthetic data in a software-in-the-loop pipeline with Nvidia Isaac ROS.

Supply chain solutions company Kion Group is collaborating with Accenture and Nvidia as the first to adopt Mega for optimizing operations in retail, consumer packaged goods, parcel services and more. Huang offered a glimpse into the future of this collaboration on stage at CES, demonstrating how enterprises can navigate a complex web of decisions using the Mega Omniverse blueprint.

“At Kion, we leverage AI-driven solutions as an integral part of our strategy to optimize our customers’ supply chains and increase their productivity,” said Rob Smith, CEO of Kion Group, in a statement. “With Nvidia’s AI leadership and Accenture’s expertise in digital technologies, we are reinventing warehouse automation. Bringing these strong partners together, we are creating a vision for future warehouses that are part of a smart agile system, evolve with the world around them and can handle nearly any supply chain challenge.”

Creating Operational Efficiencies With the Mega Omniverse Blueprint

Kion and Accenture are embracing the Mega Omniverse blueprint to build next-generation supply chains for Kion and its customers. Kion can capture and digitalize a warehouse as a digital twin in Omniverse using computer-aided design files, video, lidar, image and AI-generated data. Kion uses the Omniverse digital twin as a virtual training and testing environment for its industrial AI robot brains, powered by Nvidia Isaac, tapping into smart cameras, forklifts, robotic equipment and digital humans.

Integrating with the Omniverse digital twin, Kion’s warehouse management software can create and assign missions for the robot brains, such as moving a load from one place to another. These simulated robots carry out tasks by perceiving and reasoning about their environments; they plan their next motions and then take actions that are simulated in the digital twin. The robot brains perceive the results and decide the next action, and the cycle continues, with Mega precisely tracking the state and position of all the assets in the digital twin.

Delivering Services With Mega for Facilities Everywhere

Accenture, a global leader in professional services, is adopting Mega as part of its AI Refinery for Simulation and Robotics, built on Nvidia AI and Omniverse, to help organizations use AI simulation to reinvent factory and warehouse design and ongoing operations. With the blueprint, Accenture is delivering new services, including Custom Robotics and Manufacturing Foundation Model Training and Finetuning; Intelligent Humanoid Robotics; and AI-Powered Industrial Manufacturing and Logistics Simulation and Optimization, to expand the power of physical AI and simulation to the world’s factory and warehouse operators. Now, for example, an organization can explore numerous options for a warehouse before choosing and implementing the best one.

“As organizations enter the age of industrial AI, we are helping them use AI-powered simulation and autonomous robots to reinvent the process of designing new facilities and optimizing existing operations,” said Julie Sweet, chair and CEO of Accenture, in a statement. “Our collaboration with Nvidia and KION will help our clients plan their operations in digital twins, where they can run hundreds of options and quickly select the best for current or changing market conditions, such as seasonal market demand or workforce availability. This represents a new frontier of value for our clients to achieve using technology, data and AI.”
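The perceive-decide-simulate-track cycle described above can be sketched in a few lines of plain Python. Everything here (the twin's state dictionary, the toy policy) is invented for illustration, not drawn from Mega itself.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    state: dict = field(default_factory=lambda: {"load_at": "dock", "robot_at": "dock"})
    log: list = field(default_factory=list)

    def sense(self):
        """Return a sensor snapshot of the twin's current state."""
        return dict(self.state)

    def simulate(self, action):
        """Apply an action's effects and track it, Mega-style."""
        self.state.update(action)
        self.log.append(action)

def robot_brain(observation, goal):
    """Toy policy: move the robot to the goal location, then move the load."""
    if observation["robot_at"] != goal:
        return {"robot_at": goal}
    return {"load_at": goal}

twin = DigitalTwin()
goal = "shelf-12"
while twin.state["load_at"] != goal:   # the continuous perceive-act cycle
    obs = twin.sense()
    action = robot_brain(obs, goal)
    twin.simulate(action)
print(twin.log)  # [{'robot_at': 'shelf-12'}, {'load_at': 'shelf-12'}]
```

The real system runs this loop at fleet scale with physically accurate simulation and rendered sensor data, but the software-in-the-loop shape is the same: the brain only ever sees the twin, so routes and tasks can be refined endlessly before anything moves on a real warehouse floor.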


How Meta’s latest research proves you can use generative AI to understand user intent

Meta — parent company of Facebook, Instagram, WhatsApp, Threads and more — runs one of the biggest recommendation systems in the world. In two recently released papers, its researchers have revealed how generative models can be used to better understand and respond to user intent. Treating recommendation as a generative problem opens up new approaches that are richer in content and more efficient than classic ones, with important uses for any application that requires retrieving documents, products or other kinds of objects.

Dense vs. generative retrieval

The standard approach to building recommendation systems is to compute, store and retrieve dense vector representations of documents. For example, to recommend items to users, an application must train a model that can compute embeddings for users’ requests as well as embeddings for a large store of items. At inference time, the recommendation system tries to understand the user’s intent by finding one or more items whose embeddings are similar to the user’s. This approach requires ever more storage and compute as the number of items grows, because every item embedding must be stored and every recommendation operation requires comparing the user embedding against the entire item store.

Dense retrieval (source: arXiv)

Generative retrieval is a more recent approach that tries to understand user intent and make recommendations not by searching a database, but by predicting the next item in a sequence of a user’s interactions. Here’s how it works: The key to making generative retrieval work is computing “semantic IDs” (SIDs), which contain the contextual information about each item. Generative retrieval systems like TIGER work in two phases. First, an encoder model is trained to create a unique embedding value for each item based on its description and properties. These embedding values become the SIDs and are stored along with the items.

Generative retrieval (source: arXiv)

In the second phase, a transformer model is trained to predict the next SID in an input sequence. The list of input SIDs represents the user’s interactions with past items, and the model’s prediction is the SID of the item to recommend. Generative retrieval reduces the need to store and search across individual item embeddings, so its inference and storage costs remain constant as the list of items grows. It also enhances the ability to capture deeper semantic relationships within the data, and it provides other benefits of generative models, such as modifying the temperature to adjust the diversity of recommendations.

Advanced generative retrieval

Despite its lower storage and inference costs, generative retrieval suffers from some limitations. For example, it tends to overfit to the items it has seen during training, which means it has trouble with items added to the catalog after the model was trained. In recommendation systems, this is often referred to as “the cold-start problem,” which pertains to users and items that are new and have no interaction history.

To address these shortcomings, Meta has developed a hybrid recommendation system called LIGER, which combines the computational and storage efficiencies of generative retrieval with the robust embedding quality and ranking capabilities of dense retrieval. During training, LIGER uses both similarity scores and next-token objectives to improve the model’s recommendations. During inference, LIGER selects several candidates based on the generative mechanism and supplements them with a few cold-start items, which are then ranked based on the embeddings of the generated candidates.

LIGER combines generative and dense retrieval (source: arXiv)

The researchers note that “the fusion of dense and generative retrieval methods holds tremendous potential for advancing recommendation systems,” and that as the models evolve, “they will become increasingly practical for real-world applications, enabling more personalized and responsive user experiences.”

In a separate paper, the researchers introduce a novel multimodal generative retrieval method named multimodal preference discerner (Mender), a technique that enables generative models to pick up implicit preferences from users’ interactions with different items. Mender builds on the SID-based generative retrieval methods and adds components that can enrich recommendations with user preferences.

Mender uses a large language model (LLM) to translate user interactions into specific preferences. For example, if the user has praised or complained about a specific item in a review, the model will summarize it into a preference about that product category. The main recommender model is trained to be conditioned on both the sequence of user interactions and the user preferences when predicting the next semantic ID in the input sequence. This gives the recommender model the ability to generalize, perform in-context learning and adapt to user preferences without being explicitly trained on them. “Our contributions pave the way for a new class of generative retrieval models that unlock the ability to utilize organic data for steering recommendation via textual user preferences,” the researchers write.

Mender recommendation framework (source: arXiv)

Implications for enterprise applications

The efficiency provided by generative retrieval systems can have important implications for enterprise applications. These advancements translate into immediate practical benefits, including reduced infrastructure costs and faster inference. The technology’s ability to keep storage and inference costs constant regardless of catalog size makes it particularly valuable for growing businesses. The benefits extend across industries, from ecommerce to enterprise search. Generative retrieval is still in its early stages, and we can expect applications and frameworks to emerge as it matures.
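As a toy illustration of the next-SID idea, here is a minimal sketch in which a bigram count model stands in for the transformer used by systems like TIGER. The SIDs and interaction histories are invented for illustration.

```python
from collections import Counter, defaultdict

# Training sequences of semantic IDs (each SID encodes an item's semantics).
histories = [
    ["sid_shoe", "sid_sock", "sid_insole"],
    ["sid_shoe", "sid_sock", "sid_laces"],
    ["sid_phone", "sid_case", "sid_charger"],
]

# Count which SID tends to follow which.
transitions = defaultdict(Counter)
for seq in histories:
    for prev, nxt in zip(seq, seq[1:]):
        transitions[prev][nxt] += 1

def recommend_next(user_history):
    """Predict the most likely next SID given the last interaction."""
    candidates = transitions.get(user_history[-1])
    if not candidates:
        return None  # the cold-start case LIGER supplements with dense retrieval
    return candidates.most_common(1)[0][0]

print(recommend_next(["sid_shoe", "sid_sock"]))  # sid_insole (tie broken by insertion order)
```

Note what is absent: there is no scan over a store of item embeddings at inference time, which is exactly why generative retrieval's costs stay flat as the catalog grows, and why brand-new items (with no transitions yet) need the dense-retrieval fallback that LIGER provides.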


Timekettle unveils Babel OS for AI simultaneous interpretation in language translation earbuds

Timekettle has unveiled Babel OS, its first operating system designed to redefine AI-driven simultaneous interpretation; it will be used in the company’s language translation earbuds. The breakthrough not only sets a new benchmark for translation software but also significantly enhances the performance of Timekettle’s hardware, delivering lightning-quick transitions with solutions that can anticipate what is being said, adapt to customizable lexicons and translate across more than 40 languages with real human emotion and tonality, the company said.

Babel OS is available immediately and transforms Timekettle’s product lineup, including advanced devices such as the W4 Pro Earbuds, WT2 Edge/W3 Earbuds, X1 Interpreter Hub, and the T1 and T1 Mini Handheld Interpreters. These devices are now faster, more accurate and more human than ever before, Timekettle said. By enabling seamless and natural conversations, Babel OS brings Timekettle closer than ever to replicating the experience of having a live interpreter right by your side.

“Over the years, Timekettle has become synonymous with groundbreaking hardware innovation,” said Leal Tian, CEO of Timekettle, in a statement. “But it’s our relentless dedication to advancing software technology that truly keeps us ahead. With Babel OS, we are introducing the next evolution of translation – one that merges unmatched speed, accuracy, and personalization, making communication across languages more natural than ever.”

Babel OS includes AI semantic segmentation. Powered by Timekettle’s HybridComm, this technology is at the core of Babel OS, enabling lightning-fast translation by optimizing speech segmentation for AI processing. Using a vast database and sophisticated algorithms, the system intelligently segments sentences and predicts their completion. This proactive approach not only speeds up the translation process but also maintains a high level of accuracy. With this innovation, Timekettle devices deliver translations in real time with near-zero latency, the company said. Such real-time translation lets users keep up with information during a conference or speech, erasing any sense of a language barrier.

Custom lexicon for personalized translation

Babel OS redefines personalization by enabling users to create custom vocabularies tailored to specific industries, contexts or even slang. The custom lexicon feature is ideal for avoiding translation errors with names, locations and specialized terms. Users can define specific terms and link them to precise translations, ensuring consistency and accuracy. As the system learns and expands with each added term, it evolves into a highly intuitive, personalized translator, one that feels as if it truly understands your unique language needs.

Babel OS also incorporates advanced voice cloning technology, bringing authenticity to conversations. This intelligent system replicates users’ unique voice tones, styles and speech patterns, ensuring that translations sound natural, conversational and emotionally resonant. By adding depth and realism to interactions, Babel OS enhances communication, making every exchange more engaging and lifelike. Whether for professional discussions or casual conversations, this feature adds warmth, emotion and depth, making each interaction vivid and elevating the overall communication experience.

Machine learning powering enhancements

Leveraging the latest in AI training, Babel OS adapts dynamically to different languages and accents, continually learning and improving to ensure superior accuracy in even the most complex linguistic scenarios. Through its own AI Lab, Timekettle continuously optimizes its AI technology based on user feedback, making the translation experience even better. In addition, its latest large language model (LLM) engine establishes a strong foundation for delivering precise, contextually rich translations, setting a new standard in language communication.

Timekettle is also working on AI Edge solutions that run on-device, offline and without network service. Business professionals and outdoor adventurers can rely on seamless communication even in remote locations such as mountainous regions, airplanes or underground parking lots, without requiring an internet connection. This eliminates concerns about translation being disrupted by network issues, enabling truly barrier-free communication anytime and anywhere.

Backing its commitment to ongoing innovation and improvement, all Timekettle products receive over-the-air (OTA) updates at no additional cost. This ensures that users get not just a translation device, but a continuously evolving and increasingly powerful language companion, offering reliability, convenience and peace of mind.

Safety-driven technology

Timekettle said it places user privacy and security at the forefront with Babel OS, integrating advanced encryption and robust security measures across all devices and applications. Fully compliant with GDPR standards, Babel OS meets the highest data protection requirements, offering users a transparent and secure data experience. This approach not only optimizes system-wide data protection mechanisms but also reinforces overall security and reliability, ensuring peace of mind for both individuals and businesses.

“This operating system doesn’t just complement our hardware – it elevates it,” continued Tian. “Babel OS enables Timekettle devices to operate at speeds and levels of sophistication that were previously unimaginable. It’s a testament to how software and hardware innovation can combine to create something truly transformative.”

Availability and pricing

Babel OS is available immediately and can be seen in action at CES at the Timekettle booth (LVCC, North Hall, booth 9163). Its enhanced features are demonstrated in products such as the W4 Pro Earbuds ($449), WT2 Edge/W3 Earbuds ($349.99), X1 Interpreter Hub ($699.99), and the T1 and T1 Mini Handheld Interpreters ($299.99 and $149.99, respectively). All Timekettle devices are available for purchase on the company’s website or on Amazon.

Established in 2016, Timekettle is dedicated to advancing cross-language communication through innovative products and solutions. The company has more than 400,000 users.
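As a rough sketch of how a custom lexicon can be enforced around a translation engine, consider the pattern below. The lexicon entries and the identity translate function are invented stand-ins, not Timekettle's implementation: pinned terms are shielded behind placeholders so the engine cannot mangle them, then restored afterward.

```python
import re

# User-defined lexicon: term -> the exact output the user wants.
LEXICON = {
    "Timekettle": "Timekettle",   # protect the brand name
    "Babel OS": "Babel OS",       # and the product name
    "Shenzhen": "Shenzhen",       # and locations prone to mistranslation
}

def fake_translate(text):
    """Identity stand-in for whatever translation engine runs underneath."""
    return text

def translate_with_lexicon(text, lexicon):
    """Shield lexicon terms behind placeholders, translate, then restore them."""
    placeholders = {}
    for i, (term, pinned) in enumerate(lexicon.items()):
        token = f"__TERM{i}__"
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(text):
            text = pattern.sub(token, text)
            placeholders[token] = pinned
    out = fake_translate(text)
    for token, pinned in placeholders.items():
        out = out.replace(token, pinned)
    return out

print(translate_with_lexicon("Timekettle unveiled Babel OS in Shenzhen.", LEXICON))
# -> Timekettle unveiled Babel OS in Shenzhen.
```

The placeholder trick is a common way to guarantee consistency for names and jargon regardless of which engine sits in the middle, which is the behavior the custom lexicon feature promises.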
