Need Help Making Business Choices? This AI App Lets You Test Strategies before Deciding

TL;DR: Making smart business decisions can be tricky, so why not get assistance from SkillWee AI, a decision-making app that offers informed comparisons and decision simulations for only $49.99 (reg. $299)?

Running a business is challenging, and knowing which decisions will grow your brand versus destabilize it is hard if you can't test strategies or compare options fully. Do you struggle with indecision? Let SkillWee AI assist. This decision-making app helps entrepreneurs make smarter business choices by letting you test business strategies in a risk-free environment for less than $50.

Did you launch a startup and need to decide on funding? Are you a small business owner trying to determine which growth strategy is best suited to your brand? Whatever the scenario, SkillWee is designed to help you weigh your options more thoroughly and make wiser decisions. Think of it as a business strategy crystal ball.

Here's how SkillWee may become the business decision partner you never knew you needed:
- Test strategies before taking action: Simulate real-world scenarios to explore various decision paths and see potential outcomes.
- Analyze risks and rewards: Think through the pros, cons, and long-term effects of each business decision you may make.
- Get AI recommendations: SkillWee's AI can offer guidance on decisions involving funding, hiring, leadership, and growth.
- Improve your decision-making skills: Ask the app for insights on your decisions; it will help you refine them and improve your choices, whether you're a business leader or manager.

By using this app, you could gain skills similar to those of world business leaders, as well as of people involved in large-scale crisis management and strategic planning. Get lifetime access to the SkillWee AI-powered decision-making app, now just $49.99 while supplies last.
SkillWee AI-Powered Decision-Making App: Lifetime Subscription. StackSocial prices are subject to change. source

Need Help Making Business Choices? This AI App Lets You Test Strategies before Deciding Read More »

Euclid space telescope captures super rare double gravitational lenses

The European Space Agency has released the first major batch of data from its “dark universe” telescope Euclid. What's inside could change our understanding of dark matter and the expansion of the universe.

The data comprises just one week's worth of deep field images from three points in space. They make up just 0.4% of the vast area Euclid will capture, which scientists say will become the largest 3D map of the sky ever created. With one scan of each region so far, Euclid has already spotted 26 million galaxies, each potentially containing millions of stars and billions of planets. The furthest of these galaxies are 10.5 billion light years from Earth, meaning the light in these images is almost as old as the universe itself.

The Euclid map of the stars

[Image: The Cat's Eye Nebula, one of the most complex planetary nebulae ever seen in space, as captured by Euclid. Credit: ESA]

Hiding amongst all those millions of galaxies are rare phenomena called gravitational lenses or “Einstein rings,” named after Albert Einstein, whose theory predicted that gravity warps spacetime, causing light to bend as it travels through. Gravitational lensing occurs when a massive object, like a galaxy or black hole, bends the light from a galaxy behind it, forming visible distortions or arcs around the foreground galaxy's nucleus.

In this new batch of data, Euclid has more than doubled the number of gravitational lenses that have been captured from space. ESA estimates that Euclid will capture 100,000 strong gravitational lenses by the end of its six-year mission, around 100 times more than currently known. Today's data has also revealed an even rarer phenomenon: double gravitational lensing, also called double source plane lensing.
This happens when light from two distant galaxies passes through the same foreground galaxy, causing a double lensing effect.

Finding double gravitational lenses

[Image: A collage of gravitational lenses from Euclid's first major data drop, released today. Credit: ESA]

Look at the image above and go to the fourth column, third from the bottom. The image is faint, but you can make out two outer arcs and then two inner arcs close to the centre of the galaxy's nucleus. That's a double gravitational lens. Double gravitational lensing could help scientists better understand dark energy and the expansion of the universe because, in theory, the universe's expansion determines the angle of the arcs.

“Double-source plane lenses are extremely rare — only a few have ever been found,” said Euclid Consortium scientist Mike Walmsley at a press briefing. “But we think we've found four good candidates already from just a week's worth of data covering a fraction of the night sky. We're confident that Euclid will quickly capture enough of them to allow scientists to start measuring their effects.”

To find such rare phenomena hiding amidst Euclid's images, the European Space Agency (ESA) enlisted the help of thousands of volunteers, as well as AI algorithms.

Euclid's AI-powered galaxy finder

Launched in 2023, Euclid has observed about 14% of its total survey area so far. By the time its mission is complete, the telescope is expected to capture images of more than 1.5 billion galaxies, sending back around 100GB of data every day. These images provide scientists with unprecedented opportunities, and huge problems when it comes to finding, categorising, and analysing all the objects within them.

To speed up the process, the Euclid consortium has developed an AI-powered galaxy spotter called “Zoobot.” The algorithm was trained on decades' worth of citizen science work from volunteers who scan through images and identify each object.

[Image: A collage of galaxies identified by AI and citizen scientists. Credit: ESA]

From today's data drop, Zoobot put together a detailed catalogue of 360,000 galaxies. Thousands of volunteers from the Space Warps citizen science project then sorted through the most promising candidates. That's how the gravitational lenses were identified.

“We're at a pivotal moment in terms of how we tackle large-scale surveys in astronomy. AI is a fundamental and necessary part of our process in order to fully exploit Euclid's vast dataset,” said Walmsley, who has worked on astronomical deep learning algorithms for the last decade.

[Image: Euclid Deep Field South, a portion of the night sky never previously captured in such detail. Credit: ESA]

The dark universe explorer

Euclid launched on a SpaceX Falcon 9 rocket from Cape Canaveral in Florida on 1 July 2023. It returned its first images in August of that year, and in May last year it released its first scientific data. Euclid's mission is to shed light on two of the universe's most perplexing mysteries: dark energy and dark matter, together thought to make up 95% of the cosmos. Scientists theorise that dark energy is responsible for accelerating the universe's expansion and that dark matter acts as a cosmic glue holding galaxies together. Yet the nature of these components is still unknown.

To build its 3D map of the night sky, the telescope deploys two high-tech cameras: VIS, which captures the cosmos in visible light, and NISP, which measures the distances to galaxies and the expansion speed of the universe. Euclid is set to provide an unprecedented chronology of the history of the cosmos and help us unravel the mysteries of the universe, and of our own existence.

The three deep field previews can now be explored in the ESASky app: Euclid Deep Field South, Euclid Deep Field Fornax, and Euclid Deep Field North. source
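As background on the physics above: the size of the arcs and rings Euclid spots is usually characterized by the Einstein radius of the lens. The sketch below uses the standard point-mass formula with illustrative masses and distances (not Euclid measurements), and approximates the lens-to-source distance as a simple difference, which ignores cosmological corrections:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # one megaparsec, m

def einstein_radius(mass_kg, d_lens, d_source):
    """Angular Einstein radius (radians) for a point-mass lens.

    theta_E = sqrt(4 G M / c^2 * D_ls / (D_l * D_s)).
    D_ls is approximated as D_s - D_l, which is only a rough
    stand-in for the proper cosmological distance.
    """
    d_ls = d_source - d_lens
    return math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens * d_source))

# Illustrative: a 1e12 solar-mass galaxy halfway to a distant source
theta = einstein_radius(1e12 * M_SUN, 1000 * MPC, 2000 * MPC)
arcsec = math.degrees(theta) * 3600
print(f"Einstein radius ~ {arcsec:.2f} arcseconds")  # roughly 2 arcseconds
```

Arc separations of an arcsecond or two are exactly the scale at which galaxy-galaxy lenses appear in survey images, which is why finding them requires Euclid's resolution.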

Euclid space telescope captures super rare double gravitational lenses Read More »

Deere & Co. Attacks FTC's Right-To-Repair Suit As 'Vague'

By Bryan Koenig (March 18, 2025, 8:17 PM EDT) — Farm machinery manufacturer Deere & Co. is asking an Illinois federal court to nix the Federal Trade Commission's right-to-repair suit, arguing that the company doesn't operate in or exclude others from the equipment repair market, and that the FTC lacks the constitutional authority to sue, among other failings…. source

Deere & Co. Attacks FTC's Right-To-Repair Suit As 'Vague' Read More »

Nvidia’s GTC 2025 keynote: 40x AI performance leap, open-source ‘Dynamo’, and a walking Star Wars-inspired ‘Blue’ robot

Nvidia CEO Jensen Huang took to the stage at the SAP Center on Tuesday morning, leather jacket intact and without a teleprompter, to deliver what has become one of the most anticipated keynotes in the technology industry. The GPU Technology Conference (GTC) 2025, which Huang described as the “Super Bowl of AI,” arrives at a critical juncture for Nvidia and the broader artificial intelligence sector.

“What an amazing year it was, and we have a lot of incredible things to talk about,” Huang told the packed arena, addressing an audience that has grown exponentially as AI has transformed from a niche technology into a fundamental force reshaping entire industries.

The stakes were particularly high this year following market turbulence triggered by Chinese startup DeepSeek's release of its highly efficient R1 reasoning model, which sent Nvidia's stock tumbling earlier this year amid concerns about potentially reduced demand for its expensive GPUs.

Against this backdrop, Huang delivered a comprehensive vision of Nvidia's future, emphasizing a clear roadmap for data center computing, advancements in AI reasoning capabilities, and bold moves into robotics and autonomous vehicles. The presentation painted a picture of a company working to maintain its dominant position in AI infrastructure while expanding into new territories where its technology can create value.

Nvidia's stock traded down throughout the presentation, closing more than 3% lower for the day, suggesting investors may have hoped for even more dramatic announcements. But if Huang's message was clear, it was this: AI isn't slowing down, and neither is Nvidia. From groundbreaking chips to a push into physical AI, here are the five most important takeaways from GTC 2025.
Blackwell platform ramps up production with 40x performance gain over Hopper

The centerpiece of Nvidia's AI computing strategy, the Blackwell platform, is now in “full production,” according to Huang, who emphasized that “customer demand is incredible.” This is a significant milestone after what Huang had previously described as a “hiccup” in early production.

Huang made a striking comparison between Blackwell and its predecessor, Hopper: “Blackwell NVLink 72 with Dynamo is 40 times the AI factory performance of Hopper.” This performance leap is particularly crucial for inference workloads, which Huang positioned as “one of the most important workloads in the next decade as we scale out AI.”

The performance gains come at a critical time for the industry, as reasoning AI models like DeepSeek's R1 require substantially more computation than traditional large language models. Huang illustrated this with a demonstration comparing a traditional LLM's approach to a wedding seating arrangement (439 tokens, but wrong) versus a reasoning model's approach (nearly 9,000 tokens, but correct).

“The amount of computation we have to do in AI is so much greater as a result of reasoning AI and the training of reasoning AI systems and agentic systems,” Huang explained, directly addressing the challenge posed by more efficient models like DeepSeek's. Rather than positioning efficient models as a threat to Nvidia's business model, Huang framed them as driving increased demand for computation, effectively turning a potential weakness into a strength.

Next-generation Rubin architecture unveiled with clear multi-year roadmap

In a move clearly designed to give enterprise customers and cloud providers confidence in Nvidia's long-term trajectory, Huang laid out a detailed roadmap for AI computing infrastructure through 2027. This is an unusual level of transparency about future products for a hardware company, but it reflects the long planning cycles required for AI infrastructure.
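Stepping back to the token economics Huang illustrated with the wedding-seating demo above: using the token counts quoted in the keynote (and a purely hypothetical per-token price for illustration), the compute gap between the two approaches works out to roughly 20x:

```python
# Token counts quoted in the keynote demo
traditional_llm_tokens = 439     # fast, but the answer was wrong
reasoning_model_tokens = 9_000   # "nearly 9,000 tokens", but correct

ratio = reasoning_model_tokens / traditional_llm_tokens
print(f"Reasoning model generated ~{ratio:.1f}x more tokens")

# Hypothetical cost illustration: at $2.00 per million output tokens,
# the same query costs ~20x more to answer correctly.
cost_per_million = 2.00
trad_cost = traditional_llm_tokens / 1e6 * cost_per_million
reason_cost = reasoning_model_tokens / 1e6 * cost_per_million
print(f"${trad_cost:.6f} vs ${reason_cost:.6f} per query")
```

This back-of-envelope math is Huang's core argument in miniature: if correct answers take ~20x the tokens, efficiency gains per token translate into more total compute demand, not less.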
“We have an annual rhythm of roadmaps that has been laid out for you so that you could plan your AI infrastructure,” Huang stated, emphasizing the importance of predictability for customers making massive capital investments.

The roadmap includes Blackwell Ultra, coming in the second half of 2025 and offering 1.5 times more AI performance than the current Blackwell chips. It will be followed by Vera Rubin, named after the astronomer whose observations provided key evidence for dark matter, in the second half of 2026. Rubin will feature a new CPU that's twice as fast as the current Grace CPU, along with new networking architecture and memory systems. “Basically everything is brand new, except for the chassis,” Huang explained about the Vera Rubin platform.

The roadmap extends even further to Rubin Ultra in the second half of 2027, which Huang described as an “extreme scale up” offering 14 times more computational power than current systems. “You can see that Rubin is going to drive the cost down tremendously,” he noted, addressing concerns about the economics of AI infrastructure.

This detailed roadmap serves as Nvidia's answer to market concerns about competition and the sustainability of AI investments, effectively telling customers and investors that the company has a clear path forward regardless of how AI model efficiency evolves.

Nvidia Dynamo emerges as the ‘operating system’ for AI factories

One of the most significant announcements was Nvidia Dynamo, an open-source software system designed to optimize AI inference. Huang described it as “essentially the operating system of an AI factory,” drawing a parallel to how traditional data centers rely on software like VMware to orchestrate enterprise applications.

Dynamo addresses the complex challenge of managing AI workloads across distributed GPU systems, handling tasks like pipeline parallelism, tensor parallelism, expert parallelism, in-flight batching, disaggregated inferencing, and workload management.
These technical challenges have become increasingly important as AI models grow more complex and reasoning-based approaches require more computation.

The system gets its name from the dynamo, which Huang noted was “the first instrument that started the last Industrial Revolution, the industrial revolution of energy.” The comparison positions Dynamo as a foundational technology for the AI revolution.

By making Dynamo open source, Nvidia is attempting to strengthen its ecosystem and ensure its hardware remains the preferred platform for AI workloads, even as software optimization becomes increasingly important for performance and efficiency. Partners including Perplexity are already working with Nvidia on Dynamo implementation. “We're so happy that so many of our partners are working with us on

Nvidia’s GTC 2025 keynote: 40x AI performance leap, open-source ‘Dynamo’, and a walking Star Wars-inspired ‘Blue’ robot Read More »

What is Pay by Bank? Secure Payment Method Explained

Key takeaways:
- Pay by bank is an electronic exchange of funds that can be used for personal or commercial transactions.
- For individuals, the pay-by-bank method is popular for large, one-time money transfers.
- For businesses, pay-by-bank transactions are a strong alternative to credit cards, which charge merchants much higher processing fees.
- The latest advancements in pay by bank include digital banking, which is slowly enabling real-time and cross-border exchange.

What is pay by bank?

Pay by bank is a secure payment method that allows direct bank transfers between individuals and/or businesses. It is also referred to as electronic funds transfer (EFT) because the exchange of funds happens electronically between the sender's and recipient's banks.

In the early days, pay by bank was commonly known as bank-to-bank, account-to-account (A2A), or direct bank transfer, as this payment method was used primarily for money transfers between two individuals. Eventually, pay by bank became a staple for B2B companies because it leaves a clear paper trail. Consumers have also begun using electronic checks instead of the paper version to pay their bills. Today, the pay-by-bank method includes a modern C2B approach where customers pay merchants directly through online banking and mobile banking apps.

Types of pay-by-bank methods

Pay by bank includes everything from traditional ACH transactions to digital banking apps. Each option differs in processing speed and fees.
- Wire transfers: Simple bank-to-bank transfers used for large-value payments; best for one-time transactions.
- ATM payments: Bank transfers initiated from an ATM; best for one-time transactions.
- IVR payments: Bank transfers conducted via a computerized pay-by-phone or Interactive Voice Response (IVR) system; best for large, one-time transactions.
- Debit card payments: Transactions completed by paying with a debit card that accesses the source of funds; best for small, frequent payments.
- Digital wallet payments: Payments made by choosing a bank account linked to the digital wallet; best for small, frequent payments.
- Local bank-to-bank or Global ACH: Bank transfers between accounts located in the same country or region. Global ACH is possible if you have an account with a foreign bank that has a presence in your region or country; best for small, occasional payments.
- ACH payments: Transfers that go through the ACH payment network; exclusive to US banks. Variants include:
  - Direct deposit: The sender completes the transaction; used for employee paychecks, taxes, and echecks.
  - Direct payment: Both sender and receiver initiate and complete transactions.
  - ACH debit: Used for subscriptions and recurring payments.
  - ACH credit: Push payments such as those behind Zelle and Venmo.

In the US, the Automated Clearing House (ACH) network is composed of financial organization representatives that process, clear, and settle all ACH and echeck payments. See: Best ACH Payment Processing for Businesses

How do bank payments work?

The personal pay-by-bank process (individual bank transfers) differs from commercial pay by bank (C2B, B2B) in how transactions are initiated, but the processing and clearing stages are mostly the same. In most cases, the receiver in individual bank-to-bank transfers does not request (or initiate) the payment.
Meanwhile, commercial pay-by-bank transactions are often characterized by payment requests, such as an invoice.

Step 1: Payment is initiated.
- Personal pay by bank: The customer chooses a pay-by-bank method and prepares a fund transfer request.
- Commercial pay by bank: The customer receives an invoice from the merchant, chooses a pay-by-bank method, and prepares the fund transfer request.

Step 2: Sender transmits the payment request to their bank, using one of the pay-by-bank types.
- IVR and ATM payments: The sender follows the IVR system's or ATM's prompts to submit the payment request.
- Debit card payments: The sender verifies the specified amount on the payment terminal and enters their PIN on the PIN pad.
- Digital wallet payments: The sender logs into the app and follows the prompts for sending payments.
- All other pay-by-bank types: The sender fills out a form that specifies the transaction details and provides official authorization to complete the transaction.

Step 3: Sender's bank receives and processes the request.
- All pay-by-bank types: The sender's bank receives the payment and authorization request. The bank first verifies the identity of the account holder and then validates that the sender's account has sufficient funds.
- ACH and echecks: Once the bank verifies and validates the financial information, the funds and transaction details are routed electronically to Nacha's ACH network for clearing and forwarding to the recipient's account.

Step 4: If the request is approved, the bank initiates the fund transfer. The sender's bank debits the transaction amount for approved (and, for ACH and echecks, cleared) payment requests, adjusts the sender's fund balance, and notifies the sender of the successful request in the form of a receipt.
If the request is rejected, the sender is also notified and will have to choose a different payment method. For digital wallet, IVR, debit card, and ATM payments, the approval or rejection notice is also displayed on the terminal screen in addition to a printed or emailed receipt.

Step 5: Funds are credited to the recipient's bank and the recipient is notified of the successful transaction. The recipient will get an email notification from their bank once the transfer is successful. For debit card payments at the point of sale, transaction records are kept and updated within the POS software.

Note that fund transfer speed varies depending on the pay-by-bank type. Digital wallet, IVR, debit
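The five steps above can be sketched as a toy state machine. Everything here is illustrative (hypothetical account and transfer objects, not any bank's real API), and identity verification is reduced to a comment:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    owner: str
    balance: float

@dataclass
class Transfer:
    sender: Account
    recipient: Account
    amount: float
    status: str = "initiated"          # Step 1: payment is initiated
    log: list = field(default_factory=list)

def process_transfer(t: Transfer) -> Transfer:
    t.log.append("request transmitted to sender's bank")   # Step 2
    # Step 3: bank verifies identity (skipped here) and validates funds
    if t.sender.balance < t.amount:
        t.status = "rejected"
        t.log.append("insufficient funds; sender notified")
        return t
    # Step 4: debit the sender and issue a receipt
    t.sender.balance -= t.amount
    t.log.append("sender debited; receipt issued")
    # Step 5: credit the recipient and notify them
    t.recipient.balance += t.amount
    t.status = "settled"
    t.log.append("recipient credited and notified")
    return t

alice = Account("Alice", 500.0)
merchant = Account("Shop", 0.0)
result = process_transfer(Transfer(alice, merchant, 120.0))
print(result.status, alice.balance, merchant.balance)  # settled 380.0 120.0
```

Real systems insert clearing networks (such as ACH) and settlement delays between steps 3 and 5, but the debit-then-credit ordering is the invariant every variant shares.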

What is Pay by Bank? Secure Payment Method Explained Read More »

Adobe’s new AI agents can make personal websites for your customers

Adobe first made its mark in generative AI with its Firefly image generation model in 2023 and its generative fill feature in Photoshop. With enterprise customers turning their attention from AI-powered creation tools to agents, Adobe is throwing its hat into the agentic ring and adding more personalization features to everyday customer experience tasks.

Adobe announced the launch of 10 agents and an orchestration tool on its Adobe Experience Platform. These tools target specific needs such as customer channel engagement, content production, data management, and site optimization. The company also debuted Brand Concierge, a way for organizations to personalize their websites for customers based on their previous interactions with the brand.

Loni Stark, vice president of strategy and product for Adobe, told VentureBeat in an interview that agents will change the customer experience for both enterprises and their clients. “We see that agents can scale up the capacity of experience makers. It's not just because of the hype out there, but because when we have delivered our tools to the customers we work with, we see that as their trust in the AI capabilities we deliver increases, they start to think, oh, can I make them autonomous,” Stark said.

She added that the idea is to let these agents work ambiently, meaning the agents and the orchestrator continue to work in the background to provide information or solve issues for enterprises proactively.
Orchestration and agents for customer experience

The new agents launching on AEP are:
- Account qualification agent, which evaluates new sales pipelines
- Audience agent, which analyzes cross-channel engagement data
- Content production agent, which helps marketers and creatives scale by generating and assembling content
- Data insights agent, which simplifies and expands the process of deriving insights from signals
- Data engineering agent
- Experimentation agent, which helps simulate new ideas and conduct impact analysis
- Journey agent, which can orchestrate cross-channel experiences
- Product advisor agent, which recommends experience and product engagement experiments
- Site optimization agent, which manages and detects traffic and engagement on a website
- Workflow optimization agent, for cross-team collaboration and monitoring ongoing projects

Stark highlighted the Site Optimization agent during a demo with VentureBeat. The agent checks for broken links and proactively examines a brand's website for traffic and bounce rates, then suggests fixes.

“Most companies don't have people that spend all of their days looking at broken links, for example, especially if they have tens of thousands of pages, or can't check on these daily,” Stark said. “What's happening is that there's lost opportunity both if you think about the bounce rate. This agent is pre-trained, so out of the box, it already comes with skills like looking for broken backlinks.”

Stark said enterprises using the Experience Platform can fine-tune how much agents access their data through the orchestrator. Adobe joins companies like Salesforce and ServiceNow in providing users with pre-built agents for specific tasks and teams.

A customized brand website

Another new feature for the Adobe Experience Platform is Brand Concierge, which will help enterprises build websites that offer customized customer visits.
Organizations can create a website for their company or product that greets customers by name and provides a query box asking them what information they want. Say a company has a website for a hotel chain. A customer can ask the chat function, or click on premade prompts, about amenities specific to one location; Brand Concierge then helps the company push the appropriate information to the front page of the site and customize all other assets and experiences for that location.

Stark said customers can still browse the site as usual, but Brand Concierge pushes customer engagement further by remembering how particular customers have interacted with the enterprise before. Brand Concierge is a separate offering that sits on top of the AEP, but Stark said, “It'll leverage agents such as the Product Advisor Agent, which is already built into the Concierge app.” The tool also draws on customers' past interactions and preferences.

Stark said Adobe customers increasingly find their clients more comfortable using AI chatbots, making it easier to transition them to more personalized, prompt-based website experiences. “I think what we're seeing is that consumers are increasingly comfortable with an AI-powered conversational experience. New Adobe Analytics data shows a 1,200% surge in U.S. retail sites and a 1,700% surge in U.S. travel sites (July 2024 to Feb 2025) from generative AI sources. Companies can surface this on high-traffic properties (like their website) with an increasingly familiar form factor that is gaining traction,” Stark said.

The company launched the Adobe Experience Platform in 2019, but the real-time customer experience management solution saw a massive update last year, including an AI assistant for users. source
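To make the broken-link scenario Stark described concrete, here is a minimal, generic sketch of what such a checker does (this is not Adobe's actual agent; the status-fetching function is injected so real HTTP calls can be stubbed out, and all URLs are made up):

```python
from typing import Callable, Dict, Iterable, List

def find_broken_links(
    links: Iterable[str],
    fetch_status: Callable[[str], int],
) -> List[str]:
    """Return the links whose HTTP status code indicates a problem.

    fetch_status is any callable mapping a URL to a status code,
    e.g. a thin wrapper around urllib.request sending HEAD requests.
    Codes >= 400 (client and server errors) count as broken.
    """
    return [url for url in links if fetch_status(url) >= 400]

# Stubbed statuses standing in for real HTTP responses
fake_site: Dict[str, int] = {
    "https://example.com/": 200,
    "https://example.com/pricing": 200,
    "https://example.com/old-promo": 404,   # broken
    "https://example.com/api/v1": 500,      # broken
}

broken = find_broken_links(fake_site, lambda url: fake_site[url])
print(broken)
```

The value an agent adds over this loop is scale and follow-through: running it continuously across tens of thousands of pages and proposing fixes, rather than just listing failures.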

Adobe’s new AI agents can make personal websites for your customers Read More »

Unleashing the power of AI elevates a telecom leader’s service delivery

A global telecom provider recognized that its traditional approach to delivering service and support to its employees was becoming a bottleneck. With a large workforce generating a high volume of IT, HR, and finance-related support requests and inquiries, the company faced increasing operational strain.

To improve response times and reduce manual support efforts, the company adopted BMC HelixGPT. The agentic artificial intelligence (AI) platform multiplies productivity, elevates service team efficiency, and improves the employee experience. For the telecom provider, the results have been dramatic: employees are resolving issues faster on their own, support teams are focusing on higher-value tasks, and the company has significantly reduced costs by shifting to more effective self-service support channels.

Overcoming the challenges of high-volume support requests

Before implementing BMC HelixGPT, the company relied heavily on manual support processes, leading to long wait times and inefficient workflows. With tens of thousands of employees and consultants requiring assistance, service teams were handling an overwhelming number of repetitive, routine inquiries, leaving little time for resolving more complex issues.

The telecom provider needed a new approach that would enable employees to resolve common and routine issues through a self-service portal while maintaining access to live support when necessary. The objective was not only to improve operational efficiency but also to create a more responsive, employee-friendly support system better aligned with the organization's long-term automation strategy.

Shifting to AI-powered support

To address these challenges, the company deployed the BMC Helix agentic AI solution, which integrates self-service tools, intelligent chat capabilities, and knowledge management.
Boosting its BMC Helix Service Management solution with BMC HelixGPT Employee Navigator delivered several key benefits:
- Improved self-service capabilities: Employees gained access to AI-generated knowledge summaries and articles, allowing them to find concise answers quickly without waiting for a support agent.
- Intelligent chatbot interactions: Generative AI-driven chat services now answer more than half of user inquiries for IT and other departments in more human-like, natural-language engagements.
- Elimination of manual support: The company transitioned entirely to digital support channels, improving efficiency and reducing operational costs.

Delivering real-world impact

Since adopting BMC HelixGPT Employee Navigator, the company has achieved significant results across its support operations:

More effective support interactions: AI-driven support has achieved over a 60% success rate, with the majority of employee inquiries resolved without human intervention. When live support is needed, chatbots transfer employees to the appropriate service agents with relevant context, so employees don't have to repeat information or face delays from agent mis-assignments.

Higher employee satisfaction and productivity: Faster resolutions mean employees spend less time waiting for support and more time focusing on their work. Support teams are no longer overwhelmed by routine questions, allowing them to refocus on solving complex, high-priority issues. Higher staff productivity from reduced operational overhead is driving more transformation initiatives and business innovation.

Significant time and cost savings: Employees can now resolve issues independently, reducing the burden on IT, HR, and finance support teams. AI-generated knowledge summaries and articles alone are estimated to have saved hundreds of support staff hours in the first year by making information more concise, accessible, and usable.
Overall, the company estimates several thousand hours have been saved annually from current use cases, improving efficiency across multiple departments.

Expanding AI across the enterprise

With the success of agentic AI and BMC HelixGPT in its IT, HR, and finance service operations, the telecom provider is now exploring additional opportunities to expand AI-driven support. Future plans include:
- Rolling out AI-powered support across more business units, extending benefits to a wider range of employees.
- Testing new AI-driven use cases that integrate with additional enterprise workflows to further reduce manual effort.
- Exploring BMC Helix AIOps and observability solutions to proactively monitor and prevent service disruptions before they affect users, increasing critical system uptime while reducing costs and risks.

Delivering better outcomes with AI

By embracing AI-powered service management, this telecom provider has redefined how enterprise support should work. Employees now experience faster, more efficient and effective resolutions, while the organization benefits from reduced costs and optimized resource allocation. As agentic AI-powered automation continues to improve, BMC HelixGPT will remain a key component of the company's long-term strategy, helping it adapt, scale, and deliver better support outcomes across all areas of the business.

Ready to transform your service delivery experience? Explore the BMC Helix agentic AI solution today or contact BMC Helix to see how AI can elevate team performance and the user experience. source
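The deflection math behind savings like those described above is simple to sketch. All inputs below are hypothetical placeholders except the 60% resolution rate, which is the figure cited in the case study:

```python
def hours_saved(tickets_per_month: int,
                deflection_rate: float,
                minutes_per_ticket: float) -> float:
    """Monthly agent-hours avoided when self-service resolves tickets."""
    deflected = tickets_per_month * deflection_rate
    return deflected * minutes_per_ticket / 60

# Hypothetical inputs: 10,000 tickets/month, 60% resolved without a
# human (the cited success rate), 8 minutes of agent time per ticket.
monthly = hours_saved(10_000, 0.60, 8)
print(f"~{monthly:,.0f} agent-hours saved per month")  # ~800
```

Scaled over a year, even these modest assumptions land in the thousands of hours, which is consistent with the order of magnitude the company reports.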

Unleashing the power of AI elevates a telecom leader’s service delivery Read More »

Inside Zoom’s AI evolution: From basic meeting tools to agentic productivity platform powered by LLMs and SLMs

Zoom became a household name during the pandemic as remote work became the norm nearly overnight. While the company was once synonymous only with video conferencing, it has been quietly building a sophisticated AI infrastructure over the last several years with an aim to redefine workplace productivity. Video conferencing remains the cornerstone of Zoom's business, but thanks to AI there is now a lot more to it.

Moving from meeting to milestone

Everyone knows that Zoom is a technology for meetings. But what is the meeting for? In a business context there can certainly be meetings with no purpose, but those should be outliers. Meetings should lead to something, whether that's an action item or some other milestone.

"In the agentic AI era, finally technology is reaching the point that we can transform from meeting to milestone," Zoom CTO Xuedong (X.D.) Huang told VentureBeat in an exclusive interview.

Today, Zoom is announcing an aggressive agentic AI strategy that includes a series of new services. The update introduces agentic capabilities that promise to transform meetings from communication events into action-oriented workflows, alongside a new AI Studio that lets enterprises create customized AI agents.

The hidden technical evolution behind Zoom's agentic AI

Prior to joining Zoom, Huang spent 30 years at Microsoft, working on speech technologies as well as Microsoft's Azure OpenAI service. He carried many lessons from that experience into Zoom when he joined in 2023. Under Huang's direction, Zoom began quietly building an AI architecture designed to facilitate tasks rather than just summarize conversations. Zoom publicly announced a partnership with Anthropic in May 2023, but that is not the only large language model (LLM) provider used at Zoom.
While Microsoft Teams generally relies on OpenAI via the Microsoft Azure OpenAI service, and Google Meet is supported by Google Gemini, Zoom has taken an agnostic approach to LLMs. Huang explained that when Zoom launched the first iteration of its AI Companion in 2023, it wasn't based on any single LLM. Instead, the company started with a federated approach, using multiple LLMs including its own custom-built small language model (SLM).

"We've partnered with the best models out there, including OpenAI and Anthropic, but we've also built our own highly customized 2-billion-parameter language model," said Huang.

Zoom's AI Companion uses a federated approach in which the smaller Zoom model works in conjunction with larger, industry-leading language models. The smaller model initially evaluates and processes the input, and the partial results are then passed to larger models to produce the final output. This lets Zoom take advantage of the strengths of both the smaller, customized model and the larger, more powerful models, while reducing costs and improving performance.

How the small language model is at the center of Zoom's agentic AI journey

Perhaps the most technically intriguing aspect of Zoom's AI strategy is its focus on SLMs. Rather than following the industry trend of distilling smaller models from larger ones, Zoom built its 2-billion-parameter model entirely from scratch. The technical advantage of this approach becomes apparent when customizing for specific domains. "When you customize, it takes more effort, it's just hard to steer a bigger ship," Huang explained. As it turns out, the ability to customize the small model is a critical component in developing specific agentic AI workflows. Looking ahead, Zoom envisions its SLMs eventually running directly on user devices, enabling both better privacy and more personalized experiences.
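The two-stage federated pattern described above can be sketched in a few lines. This is a minimal illustration under assumptions, not Zoom's actual implementation or any real vendor API: the `small_model` and `large_model` functions are hypothetical stubs standing in for an on-device SLM and a frontier LLM call, and the point is only the control flow (cheap local pre-processing, then a condensed hand-off to the larger model).

```python
# Sketch of a federated two-stage model pipeline: a small local model
# evaluates and compresses the input, and only its partial result is
# passed to a larger model for the final output. All model functions
# here are illustrative stubs, not real Zoom or vendor APIs.

from dataclasses import dataclass


@dataclass
class PartialResult:
    intent: str       # coarse classification produced by the small model
    condensed: str    # compressed version of the raw input


def small_model(text: str) -> PartialResult:
    """Stub for a ~2B-parameter SLM: cheap intent detection + condensing."""
    intent = "action_item" if "todo" in text.lower() else "summary"
    condensed = " ".join(text.split()[:20])  # crude truncation as a stand-in
    return PartialResult(intent=intent, condensed=condensed)


def large_model(partial: PartialResult) -> str:
    """Stub for the larger LLM call that produces the final output."""
    return f"[{partial.intent}] {partial.condensed}"


def federated_answer(text: str) -> str:
    # Stage 1: the small model processes the input locally.
    partial = small_model(text)
    # Stage 2: only the condensed partial result reaches the larger model,
    # which reduces tokens sent (cost) while keeping the big model's quality.
    return large_model(partial)


print(federated_answer("TODO: schedule the follow-up meeting with the design team"))
```

The design choice this illustrates is why customizing the small model matters: because it sits at the front of every request, its routing and compression behavior determines both what the expensive model sees and how much it costs to call.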
AI Companion 2.0: Agentic AI transforms meetings into milestones

At the heart of Zoom's updates is AI Companion 2.0, which takes Zoom's AI capabilities from meeting support to fully agentic functions. With 2.0, Zoom is evolving from an assistant into an agentic AI capable of reasoning, memory, and task execution. The evolved AI Companion can now execute multi-step actions on behalf of users, orchestrating tasks like scheduling meetings, generating video clips, and creating documents. Key updates include:

Agentic skills: Calendar management, clip generation, advanced writing assistance.
Task management: Automatic detection of action items from meetings and chats.
Meeting enhancements: AI-powered agendas, live notes, and voice recording.
Document creation: Advanced references and automatic data table generation in Zoom Docs.
Virtual agents: Self-service capabilities for customer service with both chat and voice support.
Industry solutions: Specialized tools for frontline workers, healthcare professionals, and educators.
Zoom Drive: A new central repository for meeting assets and productivity documents.
Custom avatars: AI-generated video avatars for creating presentation clips.

Most features will roll out between March and July 2025. While the standard AI Companion is included at no additional cost for paid users, specialized agents and custom configurations will require additional fees.

"The most important aspect for us of agentic AI is really enabling the action-oriented information flow," said Huang. "What that means is that when you have a meeting, the action task will flow into Docs or chat or into other actions you have to take."

AI Studio: Building custom agents for enterprises

While Zoom provides many agentic AI capabilities out of the box, Huang recognized that enterprises often need more customized options. That's where AI Studio comes in, allowing companies to create customized AI agents tailored to specific business needs.
These agents can be deeply integrated with company-specific knowledge and workflow processes. As an example, Huang detailed a practical application for human resources policy: enterprises can use AI Studio to upload all of their internal HR policy documents. The AI Companion is then trained on this company-specific HR policy information, allowing it to accurately answer employee questions about HR guidelines and procedures. IT administrators can also use AI Studio to connect the Companion to other internal knowledge bases, such as IT support documentation. The goal is to enable companies to create AI agents that are deeply integrated with their own processes, data, and workflows, transforming the AI Companion into a customized and

Inside Zoom’s AI evolution: From basic meeting tools to agentic productivity platform powered by LLMs and SLMs Read More »

Taiwanese sculptor 李光裕 (Lee Kuang-yu) exhibits in Hong Kong for the first time: "Sculpting the Void" embodies Buddhist imagery and the poetry of time

Taiwanese sculptor 李光裕 (Lee Kuang-yu) is often hailed as a "poet." His work is known for its "hollowing-out" (鏤空) technique and is deeply influenced by the Buddhist concepts of "emptiness and existence" (空、有). Stillness and motion arise from each other in his sculptures, presenting an aesthetic realm in which self and object dissolve, as if space itself has been freed from its constraints, accepting imperfection as it naturally emerges and thereby stepping into a freer state of mind.

The clamor of the city and the trivialities of everyday life often leave people at a loss. Yet standing before Lee's sculptures, one seems to find a space to slow down and simply pause and appreciate. Suspended between the real and the void, these works embody a process of inward reflection, as well as the flow of time and an attitude toward life.

Lee was born in 1954 in Neiwei, Kaohsiung. He completed academic sculpture training in Taiwan in the mid-1970s and later studied in Spain, where he earned a master's degree. After returning, he taught as a professor of sculpture at Taipei National University of the Arts and National Taiwan University of Arts. Shaped on one hand by Taiwanese culture and on the other by his love of Western aesthetics, his work weaves the two into a distinctive vision. Since retiring in 2006, Lee has continued to create. He likes to take nature as his teacher, and his works combine Eastern philosophy with the beauty of the natural world. For him, sculpture is a medium: "I believe each of us expresses our inner ideals and thoughts through some medium. Everyone uses a different one, and I use sculpture to express the world as I see it, and the awakening of my inner mind after seeing that world."

Under the title "Sculpting the Void," Lee will exhibit in Hong Kong for the first time from March to December this year. Sculptures created between 2008 and recent years are scattered around different corners of the Asia Society Hong Kong Center in Admiralty, blending into this unique space that combines art, history, and nature. 郭東杰, founder of the arts consultancy 藝文策略, who brought the exhibition together, explains: "It is fascinating: once Lee's works are combined with nature, the whole flavor changes; they become otherworldly. The Asia Society Hong Kong Center is distinctive in being both a historic site (originally an explosives magazine) with contemporary architectural additions, like an oasis in the middle of the city. When we place his works within it, viewing them from different positions brings out different flavors."

The works on display at the Asia Society Hong Kong Center span 2008 to 2021, showing the evolution of his views on emptiness and the things of nature. Among the sculptures exhibited are images of the phoenix, the divine tortoise, and the golden bird, while other elements spring from small moments of daily life. As the artist himself says: "So many things in life entangle you; sometimes you have too many problems, too many things. But when you see my work, you will slowly feel your whole self calm down."

Between the real and the void, between stillness and motion, between emptiness and existence, these works evoke in the viewer a deep resonance with life and art. Behind the cool texture and fluid movement of the bronzes lies Lee's lived philosophy; free and spirited, they invite viewers to linger among the art and feel the poetry of time.

"李光裕 — Sculpting the Void | Asia Society Hong Kong Center X 李光裕"
Dates: March 20 to December 14, 2025
Venue: Asia Society Hong Kong Center (9 Justice Drive, Admiralty, Hong Kong)

The post "Taiwanese sculptor 李光裕 exhibits in Hong Kong for the first time: 'Sculpting the Void' embodies Buddhist imagery and the poetry of time" appeared first on VeriMedia. source

Taiwanese sculptor 李光裕 (Lee Kuang-yu) exhibits in Hong Kong for the first time: "Sculpting the Void" embodies Buddhist imagery and the poetry of time Read More »