Move over, Alexa: Amazon launches new realtime voice model Nova Sonic for third-party enterprise development

Amazon is best known as an e-commerce giant, and somewhere further down its list of notable offerings is the Alexa AI voice assistant, which got a big intelligence upgrade last month thanks in part to Amazon Nova and Amazon’s investment in Anthropic.

Now Alexa will have to make space for a new Amazon voice AI sibling: today the company is introducing Amazon Nova Sonic, a new foundation model designed to let third-party app developers build real-time, naturalistic, conversational voice interactivity into their products using Amazon’s Bedrock platform. It’s available now via a bidirectional streaming application programming interface (API).

Amazon has already incorporated some portions of it — a speech encoder and a speech synthesizer — into the new Alexa model, Alexa+. “This approach allows us to bring the benefits of our speech technologies to different use cases simultaneously while continuing to evolve both systems based on customer feedback and technological advancements,” a spokesperson told us. Obvious use cases include customer support and service, guidance, information retrieval, and entertainment.

A unified approach

Nova Sonic addresses a key challenge in voice AI: the fragmentation of technologies. Traditionally, building voice interfaces required combining separate models for speech recognition, language processing, and speech synthesis, according to Rohit Prasad, SVP and Head Scientist for Artificial General Intelligence (AGI) at Amazon, in a video call interview with VentureBeat yesterday using Amazon’s Chime video service. This complexity often results in robotic, unnatural interactions and increased development overhead. Nova Sonic seeks to improve on this state of affairs by combining all three distinct model types into one.
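The fragmentation Prasad describes can be made concrete with a toy sketch: a traditional stack chains three separate stages and passes only text between them, so acoustic cues are dropped at the first hand-off, while a unified speech-to-speech model keeps that context end to end. Every function below is an illustrative stub, not a real Amazon or Nova Sonic API.

```python
# Conceptual sketch of the pipeline fragmentation described above.
# All stage functions are illustrative stubs, not real APIs.

def speech_to_text(audio: str) -> str:
    return f"transcript({audio})"

def understand(text: str) -> str:
    return f"reply-to({text})"

def text_to_speech(text: str) -> str:
    return f"audio({text})"

def traditional_pipeline(audio: str) -> str:
    """Three hand-offs; tone, cadence, and style are lost at the first
    boundary because only text crosses each stage."""
    return text_to_speech(understand(speech_to_text(audio)))

def unified_model(audio: str) -> str:
    """A single speech-to-speech model keeps the acoustic context end
    to end -- the design Nova Sonic claims."""
    return f"audio(reply-to(transcript({audio})), style=preserved)"
```

The point of the contrast is the information flow, not the stubs themselves: whatever the first stage fails to encode in text, no later stage can recover.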
Prasad explained the model’s core innovation: “Nova Sonic brings together three traditionally separate models—speech-to-text, text understanding, and text-to-speech—into one unified system that can model not just the ‘what’ but also the ‘how’ of communication.” By retaining the acoustic context—such as tone, cadence, and style—Nova Sonic helps maintain the nuances of human conversation.

Recognizing the intricacies and quirks of live, two-way audio conversations

One of Nova Sonic’s defining capabilities is its ability to handle live, two-way conversations. It recognizes when users pause, hesitate, or interrupt—common behaviors in human speech—and responds fluidly while maintaining context. “The real breakthrough here is real-time, interactive, low-latency voice interaction, which means you can interrupt the AI mid-sentence, and it will still maintain context and respond coherently,” said Prasad. This feature is especially relevant in scenarios like customer service, where responsiveness and adaptability are critical.

Nova Sonic is also designed to integrate seamlessly with other systems. It automatically generates transcripts of spoken input, which can be used to trigger APIs or interact with proprietary tools. This allows companies to build AI agents that can perform tasks such as booking appointments, retrieving live information, or answering complex customer inquiries. “You can use Nova Sonic through Amazon Bedrock and connect it with any tools or proprietary data sources, even visual ones, as long as they’re wrapped as callable APIs,” said Prasad. This flexibility makes the model suitable for a wide range of industries, from education and travel to enterprise operations and entertainment.

Benchmark performance and industry comparisons

Nova Sonic has been benchmarked against other real-time voice models, including OpenAI’s GPT-4o and Google’s Gemini Flash 2.0.
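The “wrapped as callable APIs” integration pattern Prasad describes can be sketched as a minimal dispatcher: the model’s transcript is parsed into an intent, and the intent is routed to a wrapped proprietary tool. All names here (`route_transcript`, `book_appointment`, `get_order_status`) are hypothetical illustrations, not part of the Bedrock or Nova Sonic APIs.

```python
# Hypothetical sketch of transcript-triggered tool calling; none of
# these names are real Nova Sonic or Bedrock identifiers.

def book_appointment(date: str) -> str:
    """A proprietary system wrapped as a callable function."""
    return f"Appointment booked for {date}"

def get_order_status(order_id: str) -> str:
    return f"Order {order_id} is in transit"

# Registry of tools the voice agent may invoke, keyed by intent name.
TOOLS = {
    "book_appointment": book_appointment,
    "get_order_status": get_order_status,
}

def route_transcript(intent: str, **kwargs) -> str:
    """Dispatch a model-detected intent (parsed from the spoken
    transcript) to the matching wrapped API, with a safe fallback."""
    if intent not in TOOLS:
        return "Sorry, I can't help with that."
    return TOOLS[intent](**kwargs)
```

In a real deployment, intent detection and argument extraction would come from the model over the streaming API; the dispatcher pattern is the part the enterprise controls.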
On the Common Eval dataset, it achieved a 69.7% win rate over Gemini Flash 2.0 and a 51.0% win rate over GPT-4o for American English single-turn conversations using a masculine voice. Similar gains were seen with feminine and British English voices. Prasad emphasized Nova Sonic’s strong performance in its primary language markets: “Nova Sonic is currently best-in-class in U.S. and British English, outperforming even GPT-4o real-time in both conversational naturalness and accuracy.” He added, “To the best of our knowledge, only two other models—GPT-4o real-time and a variant of GPT-4o mini—come close to what Nova Sonic does in combining speech understanding and generation in real time. This space is still very early and very hard.”

Multilingual capabilities and noisy environment handling

Nova Sonic also excels at speech recognition in multilingual and real-world conditions. It recorded a word error rate (WER) of 4.2% on the Multilingual LibriSpeech benchmark, outperforming GPT-4o Transcribe by over 36% across English, French, German, Italian, and Spanish. In noisy, multi-speaker environments (measured using the AMI benchmark), Nova Sonic showed a 46.7% improvement in WER over GPT-4o Transcribe.

Expressive voices and language expansion

Currently, the model supports multiple expressive voices, both masculine and feminine, in American and British English. Amazon noted that additional accents and languages are in development and will be released in future updates.

Low latency and enterprise-friendly cost

Speed and cost are also part of the appeal. Third-party benchmarking shows Nova Sonic delivers a customer-perceived latency of 1.09 seconds, compared to 1.18 seconds for OpenAI’s GPT-4o and 1.41 seconds for Google’s Gemini Flash 2.0. From a pricing standpoint, Amazon positions Nova Sonic as an enterprise-ready solution.
“We’re nearly 80% cheaper than GPT-4o real-time, and that superior price-performance is resonating with enterprises moving from experimentation to deployment,” said Prasad.

Early adoption across sectors

According to Amazon, companies across different sectors have already begun using or testing Nova Sonic. ASAPP is applying the technology to optimize contact center workflows, praising its accuracy and natural dialog handling. Education First (EF) uses the model to support language learners with real-time pronunciation feedback, especially for non-native speakers with varied accents. Sports data provider Stats Perform is leveraging Nova Sonic’s low latency and simple setup to power rapid, data-rich interactions in its Opta AI Chat platform.

Responsible AI and safety commitment

Alongside performance and cost, Amazon is highlighting its commitment to responsible AI development. The Nova family of models includes built-in safeguards and is supported by AWS AI Service Cards that outline intended use cases, potential limitations, and ethical guidelines. Prasad underscored Amazon’s focus on trust and safety: “Trust is paramount for us—developers can customize personality within limits, but we’ve put in strong guardrails to prevent…


10 most used gen AI tools in the enterprise

DALL-E 3

Gen AI isn’t just about chatbots and virtual assistants. DALL-E 3, also from OpenAI, focuses on generating visuals from text descriptions; 30% of respondents in the Wharton survey said they currently use DALL-E 3, and another 35% said they’re evaluating or testing it. OpenAI launched the original DALL-E model in 2021, and the DALL-E 3 deep learning model leverages computer vision and natural language processing to create visuals. Potential business uses include product ideation, app mockups, logo design, creating images and videos for social media posts, and educational materials. Among AI image generators, DALL-E 3’s strength lies in its integration with ChatGPT, yet many users say it struggles with photorealism, with a distinctive style that makes it easy to spot that the model generated an image.

RunwayML Gen-1 and Gen-2

Runway uses text, images, and video inputs (including content generated by other gen AI tools) to generate video; 25% of respondents to Wharton’s survey said they currently use Gen-1 and Gen-2, while 31% said they were evaluating or testing the models.


From ‘catch up’ to ‘catch us’: How Google quietly took the lead in enterprise AI

Just a year ago, the narrative around Google and enterprise AI felt stuck. Despite inventing core technologies like the Transformer, the tech giant seemed perpetually on the back foot, overshadowed by OpenAI‘s viral success, Anthropic‘s coding prowess and Microsoft‘s aggressive enterprise push.

But witness the scene at Google Cloud Next 2025 in Las Vegas last week: a confident Google, armed with benchmark-topping models, formidable infrastructure and a cohesive enterprise strategy, declaring a stunning turnaround. In a closed-door analyst meeting with senior Google executives, one analyst summed it up. This feels like the moment, he said, when Google went from “catch up” to “catch us.”

This sentiment, that Google has not only caught up with but even surged ahead of OpenAI and Microsoft in the enterprise AI race, prevailed throughout the event. And it’s more than just Google’s marketing spin. Evidence suggests Google has leveraged the past year for intense, focused execution, translating its technological assets into a performant, integrated platform that’s rapidly winning over enterprise decision-makers. From boasting the world’s most powerful AI models running on hyper-efficient custom silicon, to a burgeoning ecosystem of AI agents designed for real-world business problems, Google is making a compelling case that it was never actually lost – but that its stumbles masked a period of deep, foundational development.

Now, with its integrated stack firing on all cylinders, Google appears positioned to lead the next phase of the enterprise AI revolution. In my interviews with several Google executives at Next, they said Google wields advantages in infrastructure and model integration that competitors like OpenAI, Microsoft or AWS will struggle to replicate.
The shadow of doubt: acknowledging the recent past

It’s impossible to appreciate the current momentum without acknowledging the recent past. Google was the birthplace of the Transformer architecture, which sparked the modern revolution in large language models (LLMs). Google also started investing in specialized AI hardware (TPUs) a decade ago; those chips now drive industry-leading efficiency. And yet, two and a half years ago, it inexplicably found itself playing defense.

OpenAI’s ChatGPT captured the public imagination and enterprise interest at breathtaking speed and became the fastest-growing app in history. Competitors like Anthropic carved out niches in areas like coding. Google’s own public steps sometimes seemed tentative or flawed. The infamous Bard demo fumbles in 2023 and the later controversy over its image generator producing historically inaccurate depictions fed a narrative of a company potentially hampered by internal bureaucracy or overcorrection on alignment. It felt like Google was lost: the AI stumbles seemed to fit a pattern, first shown by Google’s initial slowness in the cloud competition, where it remained a distant third in market share behind Amazon and Microsoft. Google Cloud CTO Will Grannis acknowledged the early questions about whether Google Cloud would last in the long run. “Is it even a real thing?” he recalled people asking him. The question lingered: could Google translate its undeniable research brilliance and infrastructure scale into enterprise AI dominance?

The pivot: a conscious decision to lead

Behind the scenes, however, a shift was underway, catalyzed by a conscious decision at the highest levels to reclaim leadership. Mat Velloso, VP of product for Google DeepMind’s AI Developer Platform, described sensing a pivotal moment upon joining Google in Feb. 2024, after leaving Microsoft.
“When I came to Google, I spoke with Sundar [Pichai], I spoke with several leaders here, and I felt like that was the moment where they were deciding, okay, this [generative AI] is a thing the industry clearly cares about. Let’s make it happen,” Velloso shared in an interview with VentureBeat during Next last week.

This renewed push wasn’t hampered by the “brain drain” that some outsiders feared was depleting Google. In fact, the company quietly doubled down on execution in early 2024, a year marked by aggressive hiring, internal unification and customer traction. While competitors made splashy hires, Google retained its core AI leadership, including DeepMind CEO Demis Hassabis and Google Cloud CEO Thomas Kurian, providing stability and deep expertise.

Moreover, talent began flowing toward Google’s focused mission. Logan Kilpatrick, for instance, returned to Google from OpenAI, drawn by the opportunity to build foundational AI within the company that created it. He joined Velloso in what he described as a “zero to one experience,” tasked with building developer traction for Gemini from the ground up. “It was like the team was me on day one… we actually have no users on this platform, we have no revenue. No one is interested in Gemini at this moment,” Kilpatrick recalled of the starting point.

People familiar with the internal dynamics also credit leaders like Josh Woodward, who helped start AI Studio and now leads the Gemini App and Labs. More recently, Noam Shazeer, a key co-author of the original “Attention Is All You Need” Transformer paper during his first tenure at Google, returned to the company in late 2024 as a technical co-lead for the crucial Gemini project. This concerted effort, combining these hires, research breakthroughs, refinements to its database technology and a sharpened enterprise focus overall, began yielding results.
These cumulative advances, combined with what CTO Will Grannis termed “hundreds of fine-grain” platform elements, set the stage for the announcements at Next ’25, and cemented Google’s comeback narrative.

Pillar 1: Gemini 2.5 and the era of thinking models

It’s true that a leading enterprise mantra has become “it’s not just about the model.” After all, the performance gap between leading models has narrowed dramatically, and tech insiders acknowledge that true intelligence increasingly comes from the technology packaged around the model, not just the model itself – for example, agentic technologies that allow a model to use tools and explore the web. Despite this, possessing the demonstrably best-performing LLM is an important feat – and a powerful validator, a sign that the model-owning company has things like superior research and the…


[MY BrandingHK Market Quick Shot] 2025.04.19: The US dollar loses its safe-haven function, worst case sees XX / Sell into the Dow and Nasdaq rebound / Shanghai Composite the strongest / Oil may hit US$XX a barrel

https://www.youtube.com/watch?v=MTRe2iKIuIU


FCC Rejects Changes To 'Silkwave-2' Satellite Plan

By Nadia Dreid (April 18, 2025, 9:42 PM EDT) — The Federal Communications Commission has said no to a satellite operator’s request to launch a new satellite after the operator promised the satellite would be space-bound before it retired a previous one, a promise that went unfulfilled….


The FTC Wants To Break Up Meta

The US Federal Trade Commission sees Meta in court, starting tomorrow. It’s the culmination of a nearly six-year investigation into whether Meta has a monopoly on the personal social networking market. The FTC argues that by acquiring Instagram (2012) and WhatsApp (2014), Meta “enabled Facebook to sustain its dominance — to the detriment of competition and users — not by competing on the merits, but by avoiding competition.” Meta argues that the FTC’s case is “weak” and “ignores reality.” The company asserts that “the evidence at trial will show what every 17-year-old in the world knows: Instagram competes with TikTok (and YouTube and X and many other apps).”

54% Of Poll Respondents Think Meta Has A Monopoly On Social Media

We ran an overnight “quick pulse check” poll in Forrester’s ConsumerVoices Market Research Online Community.* We asked members to react to the potential forced split-up of Meta. About 500 online adults across the US, the UK, and Canada responded. The results show ambivalence and agreement:

54% agree that Meta has a monopoly on the personal social networking market (27% are neutral and 19% disagree).

43% agree that Instagram spinning off into a separate company (separate from Meta) would be good (50% are neutral and just 7% disagree).

45% agree that WhatsApp spinning off into a separate company (separate from Meta) would be good (50% are neutral and just 5% disagree).

*Note: This poll was administered to a random sample of 497 online consumers in the US, the UK, and Canada in Forrester’s qualitative ConsumerVoices online community. This data is not weighted to be representative of total country populations.
Our analysis of respondents’ open-ended statements found four common themes (below), each with just one illustrative verbatim (of many):

Meta has too much power: “No company should have all that power and user data.”

Users are concerned about data privacy: “It’s not so easy to track behavior if on different platforms.”

Some see an opportunity for innovation: “I think it allows it to grow and advance without the parent company choosing what it does.”

People want marketplace competition: “Meta having a monopoly on three popular social media apps prevents competition and better oversight of these apps.”

Meta Has A Trust Problem — It’s Not New

When Meta was just Facebook, the company suffered from trust issues. The 2018 Cambridge Analytica scandal created mainstream awareness of consumer data privacy issues, tarnishing Facebook’s already shaky reputation. And when the company rebranded as “Meta” in late 2021, Forrester found (back then) that 75% of poll respondents disagreed that a new company name would increase their trust in Facebook. It hasn’t. And that brings us back to present day. According to Forrester’s February 2025 Consumer Pulse Survey, just about a third of online adults (35% US, 30% UK) trust Meta (as a company) the same or more today than they did in 2024, and less than that have confidence in Mark Zuckerberg as the CEO of Meta (32% US, 28% UK). But whether a Meta breakup would ultimately be good for social media users or not, according to one of our poll respondents (referring to Instagram), “depends on who takes it over.”

The Real Case To Keep Meta Intact? Interoperability

Some respondents in Forrester’s ConsumerVoices Market Research Online Community pointed to the connectivity and governance across Meta’s family of apps as a good thing. “It’s really easy right now when I post to Instagram; it posts automatically to my Facebook page without me having to do that, so it’s really convenient,” someone replied.
Another said, “There’s uniformity of policies at the moment — for example, rules for teenagers — which makes it a little simpler.” Yet when we surveyed online adults in February, just 31% of them agreed that they benefit from Facebook, Instagram, and WhatsApp all being interoperable under one company (Meta), and 43% disagreed with that statement (37% were neutral).

A Meta Breakup Is A Seismic Social Media Market Reset

The ramifications of this trial, coupled with TikTok’s future in limbo, potentially puts the very core of the social media market at play. No longer would Meta be its center of gravity. We haven’t seen anything like this since around 2006–2011 — social media’s earliest days. Yes, there was a time when all of these apps were separate and then some. We’d likely see a renaissance of social media startups looking to grab a piece of the new social-media world order. So what would happen to Meta? Sure, Meta is trying to make Facebook cool again. But the company’s social media “insurance” is (and has been for a while) … Instagram. Without Instagram and WhatsApp, what really is Meta? Could Facebook seriously compete with a standalone Instagram? Can Threads monetize at scale? Doubtful. And the company absolutely should not hang its hat on its fledgling metaverse ambitions. Its AI Glasses are a bright spot, as is its broader AI work (i.e., Llama). That means, in a broken-up Meta, the company’s AI initiatives would usurp its social media roots. Would this be good for advertisers? Yes and no. It would certainly spawn a renewed wave of creativity in the marketplace. This could mean new and interesting ad types, targeting capabilities, and partnership opportunities. On the other hand, Meta’s sheer scale and reach is the one thing that makes the company’s family of apps a marketing mainstay.
A more fragmented marketplace would reduce social media’s advertising efficiencies — making brands work harder to plan, buy, and create custom ads across a newly expanded portfolio of platforms. Here’s the big (unanswered) question: If Meta is just Facebook (once again), would today’s advertisers even bother with it? For now, marketing executives should keep doing what they’re doing. Meta’s not getting broken up anytime in the short term. But hang tight and let the trial begin. Forrester clients: Let’s chat more about this via a Forrester guidance session.


CCaaS Vendors Thrive In A Wild Market

This is my second Forrester Wave™ evaluation covering contact-center-as-a-service (CCaaS) platforms, and to the uninitiated, the familiar set of vendors in this Wave could make it appear almost as if there has not been much change in the market. But looks can be deceiving. Two years has made a big difference for CCaaS vendors, as Forrester’s newly published report, The Forrester Wave™: Contact-Center-As-A-Service Platforms, Q2 2025, reveals. The CCaaS vendors have marched forward through a time of incredible change: generative AI (genAI) is reshaping customer service, adjacent vendors are working to commoditize CCaaS, and the CCaaS vendors continue to expand their value proposition. Much as these proverbial ducks appear to be bobbing along serenely, under the surface, those feet are paddling away through the choppy waters of this market. Following are some of the most striking changes I saw while researching this Wave.

In CCaaS, AI Changes The Game (Again!)

ChatGPT (then built on GPT-3.5) was announced within weeks of the launch of the 2023 CCaaS Wave, promising great potential but arriving too late to impact any of the offerings in early 2023. Of course, AI was already reshaping the offerings in this space with new approaches to self-service, agent assist, analytics, quality management, and more. Now that genAI has had time to permeate CCaaS offerings, we are seeing new levels of capabilities that are changing what it means to run a contact center:

Call summarization. This capability became a commodity in a matter of months. Generative AI-written notes are high-quality and save agents time performing an important but rote task, thereby freeing them to spend more time with customers.

Analytics. Every call is now transcribed, and genAI enables the business to query this data to unearth business insights without requiring a data scientist. New insights are helping brands run their contact centers and hold the promise of spreading customer insights across the organization.
Quality management. No longer do we need to sample 1% of calls and hope to find a good example of an interaction to judge an agent on (an old process with ineffective results). AI can score all calls, noting customer and agent sentiment to provide overall feedback. This capability frees supervisors to focus more on coaching and improvement instead of basic scoring.

Agent assist. Two years ago, CCaaS offerings could demo the system offering the agent advice such as “The customer has negative sentiment; be more empathetic.” Cool, yes, but the advice wasn’t particularly relevant or useful. GenAI provides real insights and next-best-action recommendations that reduce training needs for agents and improve outcomes for customers.

Customer self-service. This is one place where the CCaaS vendors are lagging the conversational AI point-solution vendors, which have aggressively embraced genAI for self-service applications, since the alternative would be quick extinction. For the CCaaS crew, genAI provides value in many places without unleashing genAI directly on customers. As a result, it’s not surprising to see that the CCaaS vendors have invested in other areas. Look for this to change before you see the next CCaaS Wave.

CCaaS Vendors Deliver More Than Incremental Improvements

CCaaS vendors are thinking beyond the confines of improving the traditional capabilities of a CCaaS platform. For example, they might be providing a new level of value, extending beyond the contact center, or preparing for a new, AI-centric world. Areas we saw in this Wave include:

Next best action. This capability offers useful suggestions for what agents can do, which the system often surfaces proactively based on the conversation between the agent and the customer. So far, this capability is more practical for conversations that happen in the digital realm, as there is still too much lag time in most solutions to keep up with the chaotic nature of spoken human conversation.
Analytics reaches beyond the contact center. The more the contact center can understand what happened to the customer before they hit the contact center, the better an agent can anticipate that customer’s needs. Understanding the customer journey beyond the confines of the customer service interaction allows for a much better service experience, and CCaaS systems can provide insights that have value beyond the contact center.

New pricing models. There is general agreement that as AI increases automation across the contact center, the number of agents will start to decline, putting pressure on prevalent agent-based pricing models. Vendors in this Wave evaluation showcased a variety of approaches that typically focus on monetizing AI capabilities to offset any losses from traditional seat-based revenue.

The CCaaS market continues to evolve — watch for the pace of innovation to increase further. To understand what this evolution means for your organization, please schedule a guidance session or inquiry with me!


Google Cloud Next 2025's Developer Keynote: Agents Take Center Stage

Google Cloud Next 2025’s developer keynote offered a detailed look at the company’s latest AI innovations, with a particular focus on agent technology and developer tools. Cohosts Richard Seroter and Stephanie Wong brought both technical insight and their signature energy to the stage, keeping the audience engaged with well-timed humor as they guided attendees through a series of practical demonstrations that built upon each other to showcase the potential of these technologies.

Agent Framework Takes Shape

The keynote opened with Brad Calder framing Google’s strategy around three key areas: agentic applications, developer productivity tools, and Gemini models. What followed was a series of interconnected demonstrations centered around a home renovation scenario, showcasing how multiple specialized agents could collaborate on complex tasks.

The newly released Agent Development Kit (ADK) appears designed to lower the barrier to entry for creating AI agents. Dr. Fran Hinkelmann demonstrated its three core components: instructions defining an agent’s goal, tools enabling actions, and a model handling large language model (LLM) tasks. The demonstration showed an agent generating a professional renovation proposal from floor plans and customer requirements.

Building on this foundation, Dr. Abirami Sukumaran presented a multiagent system in which specialized agents for proposals, permits, and material ordering work together. When one agent encountered an error, she demonstrated cloud investigations, which provided automated debugging assistance.

Developer Choice Emphasized

Google stressed flexibility throughout the keynote, with Debi Cabrera showcasing Gemini integration across popular IDEs including Windsurf, Cursor, and IntelliJ. She also highlighted Vertex AI’s Model Garden, which supports models from other providers including Meta, Anthropic, and Mistral.
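The three-component agent anatomy Hinkelmann demonstrated (instructions, tools, and a model) can be sketched generically in plain Python. This is a conceptual illustration only: the class below is not the actual Agent Development Kit API, and the "model" is stubbed as a keyword router rather than a real LLM call.

```python
from dataclasses import dataclass, field
from typing import Callable

# Generic sketch of the agent anatomy described above: an instruction
# (the goal), tools (actions), and a model handling the reasoning.
# Names are illustrative stand-ins, not the real ADK API.

@dataclass
class Agent:
    instruction: str  # what the agent is trying to accomplish
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def model(self, request: str) -> str:
        """Stubbed 'model': routes a request to a tool by keyword.
        A real agent would delegate this decision to an LLM."""
        for name, tool in self.tools.items():
            if name in request:
                return tool(request)
        return f"[{self.instruction}] no tool matched: {request}"

def lookup_permit(request: str) -> str:
    """A tool wrapping some external action, stubbed here."""
    return "permit: approved"

# Mirrors the keynote's home renovation scenario (hypothetical setup).
renovation_agent = Agent(
    instruction="Prepare a renovation proposal",
    tools={"permit": lookup_permit},
)
```

The multiagent demo in the keynote composes several such agents (proposals, permits, material ordering), each with its own instruction and tool set, which is what the routing layer coordinates.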
Real-World Applications

In one of the more interesting demonstrations (I’m a baseball fan), MLB hackathon winner Jake DiBattista presented an application that used Gemini to analyze baseball pitching mechanics. His demo analyzed both professional pitcher Clayton Kershaw’s pitching and, to humorous effect, Richard Seroter’s more amateur (but better than what I could muster!) efforts. The application demonstrated how computer vision capabilities previously requiring specialized hardware are now accessible to developers with affordable tools.

The Kanban Board: Bridging AI Hype And Real Product Team Workflows

Perhaps the most significant announcement was Scott Densmore’s preview of a Kanban board interface for Gemini Code Assist. Unlike the chat interfaces that have dominated AI coding assistants to date, this approach aligns with how development teams actually work. The board enables developers to assign tasks to Code Assist, including bug fixes, code reviews, and prototype development. This potentially offers a more intuitive workflow for developers than conversation-based interactions.

Data Science Access Expands

Jeff Nelson demonstrated a Data Science Agent that transformed complex data analysis into an approachable process. With simple prompts, the agent generated forecasting models using BigQuery, Serverless Spark, and new foundation models such as TimesFM. This culminated in a deployed data app — suggesting that specialized AI agents may someday enable less technical users to build advanced capabilities.

As the industry continues to evaluate the practical impact of these tools, the keynote made a compelling case that agent-based approaches might meaningfully change how software development and data analysis teams operate together. The demonstrations suggested that Google is working to integrate AI assistance into existing development workflows rather than requiring teams to adapt to entirely new paradigms.


US Officials Claim DeepSeek AI App Is 'Designed To Spy on Americans'

A bipartisan report, recently issued by the US House Select Committee on the Chinese Communist Party (CCP), accuses DeepSeek of a series of subversive, illegal, and immoral practices. The tech giant NVIDIA is also catching the ire of US government officials for supplying DeepSeek with the chips needed to create its AI models.

Investigating DeepSeek

The report, titled “DeepSeek Unmasked: Exposing the CCP’s Latest Tool for Spying, Stealing, and Subverting U.S. Export Control Restrictions,” was published in April 2025. It levies numerous accusations against DeepSeek, including:

Actively suppressing more than 85% of responses related to human rights, democracy, Taiwan, or Hong Kong.

Being owned and operated by a company with a direct link to the CCP.

Currently funneling data on American users to the CCP.

Maintaining infrastructure linked to Chinese companies engaged in mass surveillance, data harvesting, and censorship.

In addition to the accusations above, the report links dozens of DeepSeek employees and researchers to the People’s Liberation Army (PLA), the military wing of the CCP. “This report makes it clear: DeepSeek isn’t just another AI app — it’s a weapon in the Chinese Communist Party’s arsenal, designed to spy on Americans, steal our technology, and subvert U.S. law,” said Chairman John Moolenaar (R-MI).

DeepSeek has already been banned in some countries, including Australia, India, Italy, South Korea, and Taiwan, due to potential security concerns. Its usage has also been banned by the US Congress, NASA, and government entities within the state of Texas.

Exploring NVIDIA’s role

DeepSeek wasn’t the only company mentioned in the report; NVIDIA also faces accusations, including:

Supplying DeepSeek with more than 60,000 chips for the development of its platform, possibly circumventing export regulations.

Developing a modified chip as a workaround to a loophole in regulations.
NVIDIA spokesman John Rizzo previously issued a statement saying, in part: “We insist that our partners comply with all applicable laws, and if we receive any information to the contrary, act accordingly,” as reported by The New York Times. But NVIDIA’s statement isn’t enough for the congressional committee, which is accusing buyers in countries like Singapore and Malaysia of purchasing chips and illegally exporting them to China. As such, the committee is asking NVIDIA to provide specific details concerning every customer account from no fewer than 11 Asian countries. The committee expects NVIDIA’s response within two weeks.

Waiting for a resolution

The US Select Committee on the CCP first began investigating DeepSeek and its rapid pace of technological advancement in February 2025. While its recent report includes some very serious allegations against both DeepSeek and NVIDIA, it’s important to remember that all parties remain innocent until proven guilty.
