VentureBeat

Apple’s ELEGNT framework could make home robots feel less like machines and more like companions

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Apple researchers have developed a new framework for making non-humanoid robots move more naturally and expressively during interactions with people, potentially paving the way for more engaging robotic assistants in homes and workplaces. The research, published this month on arXiv, introduces ELEGNT, a framework for expressive and functional movement design that allows robots to convey intentions, emotions and attitudes through their movements — rather than just completing functional tasks.

“For robots to interact more naturally with humans, robot movement design should integrate expressive qualities — such as intention, attention and emotions — alongside traditional functional considerations like task fulfillment, spatial constraints and time efficiency,” the researchers from Apple’s robotics team write in their paper.

(Credit: Apple)

How a desk lamp became the perfect test subject for robot emotions

The study focused on a lamp-like robot, reminiscent of Pixar’s animated Luxo Jr. character, equipped with a 6-axis robotic arm and a head containing a light and projector. The researchers programmed the robot with two types of movements: purely functional ones focused on completing tasks, and more expressive movements designed to communicate the robot’s internal state.

In user testing with 21 participants, the expressive movements significantly improved people’s engagement with and perception of the robot. The effect was especially pronounced during social tasks like playing music or engaging in conversation, and less so for purely functional tasks like adjusting lighting. “Without the playfulness, I might find this type of interaction with a robot annoying rather than welcome and engaging,” noted one study participant, highlighting how expressive movements made even potentially intrusive robot behaviors more acceptable.
A visual guide showing the expressive movement vocabulary developed for the lamp-like robot, including basic gestures and spatial behaviors. (Credit: Apple)

User testing reveals age gap in robot movement preferences

The research comes as major tech companies increasingly explore home robotics. While most current home robots, like robot vacuums, focus purely on function, this work suggests that adding more natural, expressive movements could make future robots more appealing companions. However, the researchers note that balance is crucial. “There needs to be a balance between engagement through motion and speed completion of the task being given, otherwise the human might grow impatient,” one participant observed. The study also found that older participants were significantly less receptive to expressive robot movements, suggesting that robot behavior may need to be customized based on user preferences.

The robot’s capabilities span from functional tasks like providing reading light to social interactions such as creative suggestions and playful companionship. (Credit: Apple)

The future of social robotics: Finding the sweet spot between function and expression

While Apple rarely discusses its robotics research publicly, this work offers intriguing hints about how the tech giant might approach future home robots. The study suggests a fundamental shift in robotics design: Instead of focusing solely on what robots can do, companies must consider how robots make people feel. The challenge ahead lies not just in programming robots to complete tasks, but in making their presence welcome in our most intimate spaces. As robots transition from factory floors to living rooms, their success may depend less on raw efficiency and more on their ability to read the room — both literally and metaphorically. Apple’s paper will be presented at the 2025 Designing Interactive Systems conference in Madeira this July.
The results point to a future where robot design requires as much input from animators and behavioral psychologists as it does from engineers. As robots become more common in homes and workplaces, making them move in ways that feel natural rather than mechanical could be the difference between another forgotten gadget and a truly indispensable companion. The real test will be whether companies like Apple can translate these research insights into products that people not only use, but genuinely want to interact with.


From 220M data points to revenue: How AI is transforming sports entertainment ROI

The Super Bowl is one of the largest sports entertainment events on the planet, bringing in more than a hundred million viewers and more than a billion dollars in revenue. But for NFL teams and sports entertainment in general, there is a long road to a championship as franchises aim to build brand, grow fandom and maximize revenue. One way to make that happen is AI.

The technology is no stranger to the world of sports entertainment. Predating the modern era of generative AI — as far back as 2017 — big vendors like IBM were already discussing how AI would disrupt sports entertainment networks. The NFL itself is using AI to help improve player safety with a Digital Athlete system developed in partnership with AWS. The NFL is also using AWS to build gen AI-powered apps on the Amazon MemoryDB database.

For individual teams, both in the NFL and across the sports entertainment landscape, there are other options for implementing gen AI. One such option, launching today, comes from Elevate, a technology vendor led by Al Guido, who is also the president of the NFL’s San Francisco 49ers. The company’s new Elevate performance and insights cloud (EPIC) data and AI platform combines consumer insights, ticketing management and property analytics to help sports and entertainment organizations engage better with fans. The platform helps organizations with targeted engagement efforts to better understand potential customer personas. That information helps determine stadium seating options, ticket pricing and fan retention. The platform has already been used by more than 25 organizations, including the Tennessee Titans. Elevate has been in operation since 2018, but with the advent of gen AI, the company is able to do much more with data.
“Building EPIC has reinforced a fundamental truth that we’ve seen and validated with our clients since we’ve been in operation — data is only as powerful as the decisions it enables,” Guido, Elevate’s chairman and CEO, told VentureBeat. “In sports, the challenge isn’t just capturing that data but harnessing it to drive real, actionable intelligence that improves fan engagement, revenue strategies and operational efficiency.”

The data challenges of building an AI-first engagement system

Elevate already has data for approximately 220 million people in its system. The company collects first-party data through its client work and relationships. This includes data on fan behavior, ticket sales, sponsorships and other property-related information. Elevate also licenses and purchases third-party data sets to further enrich user profiles. Guido noted that many organizations collect what seem like infinite amounts of data but struggle to unify and leverage it. EPIC was designed to bridge that gap.

To fully benefit from modern gen AI, data should be in a vector database format, Elevate contends. CIO Jim Caruso explained to VentureBeat that his company has undergone an intensive process not only to vectorize data, but to make sure it’s the right data to inform business decisions. There is no shortage of database vendors and technologies that claim to make vectorizing data simple. In reality, Caruso stressed, the vectorization process isn’t as simple as flipping a switch. As part of building EPIC, the team reevaluated all of its data and how it could work together to provide the best insights. The actual vectorization process involved testing different approaches and processing pipelines to find the right balance of accuracy and performance. Currently, Elevate uses Amazon SageMaker for its vectorization work.
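Caruso’s point that vectorization is more than flipping a switch can be made concrete with a toy example. The sketch below uses a stdlib-only hashing embedding and cosine similarity to stand in for the real thing; Elevate’s actual SageMaker pipeline is not public, so the `embed` and `cosine` helpers and the profile text here are illustrative assumptions, not its implementation.

```python
import hashlib
import math

def embed(text: str, dims: int = 64) -> list[float]:
    """Toy bag-of-words hashing embedding: each token increments
    one of `dims` buckets chosen by a stable hash, then the vector
    is L2-normalized so dot product equals cosine similarity."""
    vec = [0.0] * dims
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Both inputs are already unit-length, so the dot product suffices.
    return sum(x * y for x, y in zip(a, b))

# Index a few hypothetical fan-profile snippets, then retrieve the
# closest match to a free-text query -- the core of vector search.
profiles = {
    "p1": "season ticket holder premium seating suite",
    "p2": "single game buyer family section concessions",
}
index = {pid: embed(text) for pid, text in profiles.items()}
query = embed("interested in premium suite seating")
best = max(index, key=lambda pid: cosine(query, index[pid]))
```

A production system would swap the hashing trick for a learned embedding model and a real vector database, but the retrieval loop keeps the same shape.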
How Anthropic Claude, XGBoost and Amazon Bedrock help power AI insights for EPIC

Caruso explained that the EPIC system provides a wide range of AI-powered applications, from pricing tickets to developing consumer insight personas. Elevate is using a combination of technologies to build those tools. At the core is Anthropic’s Claude 3.5 Haiku large language model (LLM), which has been fine-tuned on Elevate’s data. Claude provides the interface to ask questions and get insights based on different personas.

For example, one persona could be a venue operator who wants to determine the best way to configure premium seating in a venue. That operator will need to understand who would be interested in those seats and how they should be marketed to different groups. Elevate went beyond identifying broad demographic segments, like suburban millennials. Instead, it created a series of distinct personas with a range of attributes including finances, buying preferences, entertainment choices and social networking engagement. The key goal is to provide concrete, detailed personas that enable organizations to make specific business decisions.

The system also uses the XGBoost (eXtreme Gradient Boosting) open-source machine learning (ML) library via Amazon SageMaker, specifically to handle numerical data for ticket pricing. XGBoost is a supervised ML algorithm that uses decision trees to make predictions. Caruso explained that his team converted historical and real-time data into 55 different features, including event details, inventory details and recent sales information, which were then fed into the XGBoost algorithm.

The competitive landscape for AI across sports entertainment

Guido said that across the NFL and beyond, the initial response to EPIC has been positive. Many properties face similar challenges: fragmented data sources, evolving fan expectations and the need for smarter, more efficient revenue generation.
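Stepping back to the ticket-pricing model: before XGBoost can make a prediction, records like the ones Caruso describes must be flattened into a fixed-order numeric vector. The sketch below shows the general shape of that feature-engineering step; Elevate’s actual 55 features are not public, so every field name here is an illustrative assumption.

```python
from datetime import date

def make_features(event: dict, inventory: dict,
                  recent_sales: list[float], today: date) -> list[float]:
    """Flatten event, inventory and recent-sales signals into the
    fixed-order numeric row a gradient-boosted model trains on."""
    days_out = (event["date"] - today).days            # lead time before the event
    sold = inventory["capacity"] - inventory["remaining"]
    sell_through = sold / inventory["capacity"]        # fraction of seats sold so far
    avg_recent = sum(recent_sales) / len(recent_sales) if recent_sales else 0.0
    return [
        float(days_out),
        float(event["is_weekend"]),                    # calendar signal
        sell_through,
        float(inventory["remaining"]),                 # seats left
        avg_recent,                                    # average recent sale price
        max(recent_sales, default=0.0),                # recent price ceiling
    ]
```

In a real deployment, each such row, labeled with the realized ticket price, would be fed to a regressor such as `xgboost.XGBRegressor`.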
Guido also clearly recognizes that the competitive landscape for this kind of technology is expanding. There are traditional customer relationship management (CRM) and analytics providers, like Salesforce, but in his view they often lack the industry-specific intelligence that EPIC brings to sports and live entertainment. “What sets EPIC apart is its deep integration with the realities of sports,” said Guido.

How AI-powered insights are driving real-world impact for the Tennessee Titans

Among the early users of EPIC is the NFL’s Tennessee Titans. The team is working with Elevate as it develops a new $2.1 billion stadium set to open in 2027. As part of the engagement, Elevate has helped lead sponsorship sales for the new stadium. The company developed


Anthropic CEO Dario Amodei warns: AI will match ‘country of geniuses’ by 2026

AI will match the collective intelligence of “a country of geniuses” within two years, Anthropic CEO Dario Amodei has warned in a sharp critique of this week’s AI Action Summit in Paris. His timeline — targeting 2026 or 2027 — marks one of the most specific predictions yet from a major AI leader about the technology’s advancement toward superintelligence.

Amodei labeled the Paris summit a “missed opportunity,” challenging the international community’s leisurely pace toward AI governance. His warning arrives at a pivotal moment, as democratic and authoritarian nations compete for dominance in AI development. “We must ensure democratic societies lead in AI, and that authoritarian countries do not use it to establish global military dominance,” Amodei wrote in Anthropic’s official statement. His concerns extend beyond geopolitical competition to encompass supply chain vulnerabilities in chips, semiconductor manufacturing and cybersecurity.

The summit exposed deepening fractures in the international approach to AI regulation. U.S. Vice President JD Vance rejected European regulatory proposals, dismissing them as “massive” and stifling. The U.S. and U.K. notably refused to sign the summit’s commitments, highlighting the growing challenge of achieving consensus on AI governance.

Anthropic has positioned itself as an advocate for transparency in AI development. The company launched its Economic Index this week to track AI’s impact on labor markets — a move that contrasts with its more secretive competitors. This initiative addresses mounting concerns about AI’s potential to reshape global employment patterns. Three critical issues dominated Amodei’s message: maintaining democratic leadership in AI development, managing security risks and preparing for economic disruption.
His emphasis on security focuses particularly on preventing AI misuse by non-state actors and managing the autonomous risks of advanced systems.

Race against time: The two-year window to control superintelligent AI

The urgency of Amodei’s timeline challenges current regulatory frameworks. His prediction that AI will achieve genius-level capabilities by 2027 — with 2030 as the latest estimate — suggests current governance structures may prove inadequate for managing next-generation AI systems. For technology leaders and policymakers, Amodei’s warning frames AI governance as a race against time. The international community faces mounting pressure to establish effective controls before AI capabilities surpass our ability to govern them. The question now becomes whether governments can match the accelerating pace of AI development with equally swift regulatory responses.

The Paris summit’s aftermath leaves the tech industry and governments wrestling with a fundamental challenge: how to balance AI’s unprecedented economic and scientific opportunities against its equally unprecedented risks. As Amodei suggests, the window for establishing effective international governance is rapidly closing.


ConverzAI bags $16M for virtual recruiters that deliver 30% efficiency boost for enterprises

Redmond, Washington-based startup ConverzAI, a provider of AI-driven recruitment automation, has raised $16 million in a Series A funding round to drive product innovation and expand its market reach. The round was led by Menlo Ventures, with participation from Left Lane Capital, Foundation Capital and Afore Capital. The funding will support ConverzAI’s plans for international expansion, workforce growth and enhancements to its AI-powered virtual recruiters.

ConverzAI claims its virtual recruiters significantly boost hiring efficiency and revenue

Founded in 2019 by CEO Ashwarya Poddar, ConverzAI has developed AI-driven virtual recruiters that handle key elements of the hiring process — from sourcing and engaging candidates to screening and final placement decisions. The platform engages candidates through voice, text and email, analyzing responses in real time to streamline recruitment workflows, reduce hiring bias and improve efficiency. ConverzAI claims its virtual recruiters have processed more than 100,000 jobs and engaged millions of candidates, resulting in tens of thousands of successful placements. The company also says some customers have seen recruitment efficiency improve by 30% and revenue increase by up to 40% after implementation, attributed mainly to an increased number of successful placements.

One of the platform’s key advantages is its ability to significantly accelerate hiring timelines. According to ConverzAI, it can reduce time-to-placement by as much as 90% while maintaining a high-quality candidate experience. The AI recruiter automates routine administrative tasks such as initial outreach, follow-ups and screening conversations, allowing human recruiters to focus on relationship-building and strategic decision-making.
Investor support for ConverzAI’s AI recruiting

Venky Ganesan, a partner at Menlo Ventures, expressed confidence in ConverzAI’s ability to reshape the recruitment industry. He noted that ConverzAI operates at the intersection of agentic AI, voice AI and recruiting, offering a fully automated recruitment solution that still prioritizes the candidate experience. ConverzAI founder and CEO Ashwarya Poddar emphasized that the company’s vision is to create transformational value for staffing firms. “Conducting entire business conversations through AI technology is game-changing,” Poddar said, highlighting the company’s commitment to advancing agentic AI and voice AI and positioning ConverzAI as an essential recruitment partner for staffing firms.

Customer impact and adoption

Several industry leaders have endorsed ConverzAI’s technology, citing tangible improvements in hiring speed, candidate engagement and overall recruitment outcomes. Rob Lowry, chief talent strategy officer at Apex Systems, noted that ConverzAI’s virtual recruiters have had more than 250,000 candidate conversations and achieved a candidate satisfaction score exceeding 80%. He emphasized that the platform has significantly accelerated recruitment efforts while improving the quality of candidate data. Richard Wahlquist, CEO of the American Staffing Association, echoed this sentiment, saying that ConverzAI’s AI-powered virtual recruiters are setting a new standard in staffing innovation, helping businesses optimize hiring outcomes at scale.

Future plans for expansion

With this new funding, ConverzAI aims to increase its workforce and expand internationally by the end of the year. The company is also focused on continuous innovation, particularly in enhancing the scalability, personalization and decision-making capabilities of its AI-powered recruiters.
The platform’s rapid deployment capabilities — allowing staffing firms to implement the solution in less than five days — continue to be a major selling point, particularly as the industry shifts toward AI-powered efficiency solutions.


AI’s biggest obstacle? Data reliability. Astronomer’s new platform tackles the challenge

Astronomer, the company behind the Apache Airflow orchestration software, has launched Astro Observe, marking its expansion from a single-product company into the competitive data operations platform market. The move comes as enterprises struggle to operationalize their AI initiatives and maintain reliable data pipelines at scale.

The new platform aims to help organizations monitor and troubleshoot their data workflows more effectively by combining orchestration and observability capabilities in a single solution. This consolidation could significantly reduce the complexity that many companies face when managing their data infrastructure. “Previously, our customers would have to come to us for orchestration data pipelines, and they’d have to go figure out a different data observability and Airflow observability vendor,” Julian LaNeve, CTO of Astronomer, said in an interview with VentureBeat. “We’re trying to make that a lot easier for our customers and give them everything in one platform.”

AI-powered predictive analytics aims to prevent pipeline failures

A key differentiator of Astro Observe is its ability to predict potential pipeline failures before they impact business operations. The platform includes an AI-powered “insights engine” that analyzes patterns across hundreds of customer deployments to provide proactive recommendations for optimization. “We will actually tell people two hours before the SLA is going to happen that they’re likely to miss it because there was some delay far upstream,” LaNeve explained. “That moves people from this very reactive world to a lot more proactive [approach], where you can start to address issues before downstream stakeholders find out.”

The timing is particularly significant as organizations grapple with operationalizing AI models.
While much attention has focused on model development, the challenge of maintaining reliable data pipelines to feed these models has become increasingly critical. “Ultimately, to take these AI use cases from prototype to production, it becomes a data engineering problem at the end of the day,” LaNeve noted. “How do you effectively feed these LLMs the right data on time every time? That’s what data engineers have been doing for many years now.”

Astronomer moves from open source success to enterprise data management

The platform builds on Astronomer’s deep expertise with Apache Airflow, an open-source workflow management platform downloaded more than 30 million times monthly. This represents a significant increase from just four years ago, when Airflow 2.0 saw less than a million downloads. One notable feature is the “global supply chain graph,” which provides visibility into both data lineage and operational dependencies. This helps teams understand complex relationships between different data assets and workflows — crucial for maintaining reliability in large-scale deployments.

The platform also introduces a “data product” concept, allowing teams to group related data assets and assign service level agreements (SLAs). This approach helps bridge the gap between technical teams and business stakeholders by providing clear metrics around data reliability and delivery. Early adopter GumGum, a contextual intelligence company, has already seen benefits from the platform. “Adding data observability alongside orchestration allows us to get ahead of issues before they impact users and downstream systems,” said Brendan Frick, senior engineering manager at GumGum.

Astronomer’s expansion comes at a time when enterprises are increasingly looking to consolidate their data tooling.
With organizations typically juggling eight or more tools from different vendors, the move toward unified platforms could signal a broader shift in the enterprise data management landscape. The challenge for Astronomer will be competing with established observability players while maintaining its leadership in the orchestration space. However, its deep integration with Airflow and focus on proactive management could give it an edge in the rapidly evolving market for AI infrastructure tools.
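The proactive SLA warning LaNeve describes reduces, at its simplest, to projecting a pipeline’s finish time from the delay already observed upstream plus the historical runtime of the tasks that remain, and alerting when the projection overshoots the deadline. A minimal sketch follows; the function names and the simple additive model are assumptions for illustration, not Astronomer’s implementation.

```python
from datetime import datetime, timedelta

def project_finish(start: datetime, upstream_delay: timedelta,
                   typical_runtimes: list[timedelta]) -> datetime:
    """Project when a pipeline will finish: the delay already observed
    upstream plus the historical runtimes of the remaining tasks."""
    remaining = sum(typical_runtimes, timedelta())
    return start + upstream_delay + remaining

def sla_alert(start: datetime, sla_deadline: datetime,
              upstream_delay: timedelta,
              typical_runtimes: list[timedelta]) -> bool:
    """Return True if the projected finish would miss the SLA, so an
    alert can fire hours before the deadline rather than after it."""
    return project_finish(start, upstream_delay, typical_runtimes) > sla_deadline
```

A real insights engine would learn runtime distributions across deployments instead of averaging them, but the reactive-to-proactive shift is exactly this comparison made ahead of time.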


OpenAI CEO Sam Altman shares plans to bring o3 Deep Research agent to free and ChatGPT Plus users

Earlier this month, OpenAI debuted Deep Research, a new AI agent powered by a version of its upcoming o3 reasoning model. As with Google’s Gemini-powered Deep Research agent released late last year, the idea behind OpenAI’s Deep Research is to provide a largely autonomous assistant that can scour the web and other digital scholarly sources for information about a topic or problem. The agent then compiles it all into a neat report while the user goes about their business in other tabs, or leaves their computer behind entirely, delivering the final report with a notification several minutes or even hours later. Yet unlike Google’s Deep Research, the value of OpenAI’s o3-powered Deep Research was immediately apparent to many outside the AI community, including economist Tyler Cowen, who called it “amazing.”

OpenAI provides more democratic access

While initially unveiled as a product limited to ChatGPT Pro subscribers ($200 per month), OpenAI said at the time it would bring Deep Research to the lower-priced ChatGPT Plus ($20 per month) and Team ($30 per month) tiers, as well as Edu and Enterprise (variable pricing) plans. OpenAI co-founder and CEO Sam Altman clarified more of the company’s current thinking around making o3 Deep Research more widely available, quote-posting another user on X, @seconds_0, who wrote: “ok, OAI Deep Research is worth probably $1,000 a month to me. This is utterly transformative to how my brain engages with the world. I’m beyond in love and a little in awe.” Altman responded: “I think we are going to initially offer 10 uses per month for ChatGPT plus and 2 per month in the free tier, with the intent to scale these up over time.
It probably is worth $1,000 a month to some users but I’m excited to see what everyone does with it!” While 10 uses per month for the ChatGPT Plus tier seems workable, two uses per month seems almost trivial to me. I guess if you’re a free user, the hope is to hook you with how well it works and encourage you to upgrade to a higher-cost plan, pulling you up the funnel — or whatever salespeople like to say. Still, it is helpful to learn what OpenAI is thinking when it comes to the availability of its powerful new products and agents. If you’re a free ChatGPT user, you’d best make sure your two uses per month of Deep Research go to queries you really want or need answered. And compared to Google’s Deep Research — which is free (although powered by last generation’s Gemini 1.5 Pro model) — OpenAI had better hope that its o3 Deep Research is worth the price.


Who’s using AI the most? The Anthropic Economic Index breaks down the data

AI is reshaping the modern workplace, but until now, its impact on individual tasks and occupations has been difficult to quantify. A new report from Anthropic, the AI startup behind Claude, offers a data-driven view of how businesses and professionals are integrating AI into their work. The Anthropic Economic Index, released today, provides a detailed analysis of AI usage across industries, drawing from millions of anonymized conversations with Claude, Anthropic’s AI assistant. The report finds that while AI is not yet broadly automating entire jobs, it is being widely used to augment specific tasks — especially in software development, technical writing and business analysis. “AI usage primarily concentrates in software development and writing tasks, which together account for nearly half of all total usage,” the report states. “However, usage of AI extends more broadly across the economy, with ~36% of occupations using AI for at least a quarter of their associated tasks.”

Computer-related jobs dominate AI usage, while physical labor shows minimal adoption, according to Anthropic’s analysis. (Credit: Anthropic)

Not just hype: Anthropic provides a ground-level view of AI adoption

Unlike previous studies that have relied on expert predictions or self-reported surveys, Anthropic’s research is based on direct analysis of how workers are actually using AI. The company leveraged its privacy-preserving analysis tool Clio to examine more than four million user conversations with Claude. These interactions were then mapped to occupational categories from the U.S. Department of Labor’s O*NET database. The data suggests that AI is playing a significant role as a collaborative tool rather than simply serving as an automation engine.
In fact, 57% of AI usage in the dataset involved “augmentation,” meaning AI was assisting workers rather than replacing them. This includes tasks such as brainstorming, refining ideas and checking work for accuracy. The remaining 43% of usage fell into the category of direct automation, where AI performed tasks with minimal human involvement. This balance between augmentation and automation is a crucial indicator of how businesses are deploying AI today. “We find that 57% of interactions show augmentative patterns (back-and-forth iteration on a task) while 43% suggest automation (fulfilling a request with minimal human involvement),” the report states.

Workers are using AI more as a collaborator (57%) than as a replacement (43%), the study finds. (Credit: Anthropic)

More partner than replacement: AI is boosting, not eliminating, jobs

One of the report’s most striking conclusions is that AI is not rendering entire job roles obsolete. Instead, it is being adopted selectively, assisting with specific tasks rather than fully automating occupations. “Only ~4% of occupations exhibit AI usage for at least 75% of their tasks, suggesting the potential for deep task-level use in some roles,” the report notes. “More broadly, ~36% of occupations show usage in at least 25% of their tasks, indicating that AI has already begun to diffuse into task portfolios across a substantial portion of the workforce.” This selective adoption suggests that while AI is transforming work, it is not yet leading to widespread job displacement. Instead, professionals are using AI to enhance productivity, offload repetitive work and improve decision-making.

The report identifies software engineering as the field with the highest AI adoption, accounting for 37.2% of the analyzed conversations. These interactions typically involved tasks like debugging code, modifying software and troubleshooting networks.
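The report’s headline diffusion numbers (~36% of occupations showing AI usage on at least a quarter of their tasks, ~4% on at least three-quarters) boil down to a simple per-occupation tally over task-level usage flags. The sketch below recomputes that metric on invented data; the occupations and task flags are illustrative stand-ins, not Anthropic’s dataset.

```python
def diffusion_shares(task_usage: dict[str, dict[str, bool]]) -> tuple[float, float]:
    """For each occupation, compute the fraction of its tasks with observed
    AI usage, then report the share of occupations at >=25% and >=75%."""
    fractions = [sum(tasks.values()) / len(tasks) for tasks in task_usage.values()]
    n = len(fractions)
    at_25 = sum(f >= 0.25 for f in fractions) / n
    at_75 = sum(f >= 0.75 for f in fractions) / n
    return at_25, at_75

# Hypothetical occupations, each mapping task -> whether AI usage was seen.
usage = {
    "software developer": {"debug": True, "write code": True,
                           "review": True, "deploy": False},
    "technical writer":   {"draft": True, "edit": True,
                           "research": False, "publish": False},
    "farm worker":        {"plant": False, "harvest": False,
                           "irrigate": False, "inspect": False},
}
share_25, share_75 = diffusion_shares(usage)
```

On this toy data, two of three occupations clear the 25% bar and one clears 75%, mirroring the shape (not the values) of the report’s finding.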
The second-highest category of use was creative and editorial work, including roles in media, marketing and content production (10.3% of queries). AI is widely used to draft and refine text, assist with research and generate ideas. However, AI usage was significantly lower in fields that require physical labor, such as healthcare, transportation and agriculture. For example, only 0.1% of analyzed conversations were related to farming, fishing and forestry tasks. This disparity highlights the current limitations of AI, which excels at text-based and analytical tasks but struggles with jobs that require hands-on work, manual dexterity or complex interpersonal interactions.

AI’s wage divide: The surprising sweet spot for adoption

One of the most intriguing findings of the report is that AI usage does not follow a simple pattern when correlated with wages. Rather than being concentrated in either low- or high-wage jobs, AI adoption peaks in the mid-to-high salary range. “AI use peaks in the upper quartile of wages but drops off at both extremes of the wage spectrum,” the report notes. “Most high-usage occupations clustered in the upper quartile correspond predominantly to software industry positions, while both very high-wage occupations (physicians) and low-wage positions (restaurant workers) demonstrate relatively low usage.” This means that AI is being adopted most aggressively in roles that require analytical and technical skills but not necessarily the highest levels of specialized expertise. It also raises important questions about whether AI will exacerbate or mitigate existing economic inequalities — particularly if lower-wage workers have less access to AI’s productivity-boosting benefits.

AI adoption peaks among mid-salary jobs like computer programmers, with less usage among both low-wage and very high-wage positions.
(Credit: Anthropic)

What business leaders need to know as AI reshapes the workforce

For technical decision-makers, the report provides a roadmap for where AI is likely to have the greatest near-term impact. The data suggests that businesses should focus on AI adoption in knowledge-based professions where augmentation, rather than outright replacement, is the dominant pattern. The report also provides an early warning for policymakers: While AI is not yet replacing entire jobs at scale, its increasing presence in high-value tasks could have a profound impact on workforce dynamics. “AI has already begun to diffuse into task portfolios across a substantial portion of the workforce,” the report states. “While our data reveals where AI is being used today, inferring long-term consequences from these early usage trends poses significant empirical challenges.” Anthropic has open-sourced the dataset behind its analysis, inviting researchers to further explore how AI is shaping the economy.

A detailed look at how different professions are using AI, with software development leading adoption. (Credit: Anthropic)

Who’s using AI the most? The Anthropic Economic Index breaks down the data Read More »

Cerebras-Perplexity deal targets $100B search market with ultra-fast AI

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Cerebras Systems and Perplexity AI are joining forces to challenge the dominance of conventional search engines, announcing a partnership that promises to deliver near-instantaneous AI-powered search results at speeds previously thought impossible. The collaboration, announced in an exclusive VentureBeat report, centers on Perplexity’s new Sonar model, which runs on Cerebras’s specialized AI chips at 1,200 tokens per second, making it one of the fastest AI search systems available. Built on Meta’s Llama 3.3 70B foundation, Sonar represents a significant bet that users will embrace AI-first search experiences if they’re fast enough. “Our partnership with Cerebras has been instrumental in bringing Sonar to life,” Denis Yarats, Perplexity’s CTO, said in a statement. “Cerebras’s cutting-edge AI inference infrastructure has enabled us to achieve unprecedented speeds and efficiency.”

AI search just got faster — and big tech should pay attention

The timing is notable, coming just days after Cerebras made headlines with its DeepSeek implementation, which demonstrated speeds 57 times faster than traditional GPU-based solutions. The company appears to be leveraging this momentum to establish itself as the go-to provider for high-speed AI inference. According to Perplexity’s internal testing, Sonar outperforms both GPT-4o mini and Claude 3.5 Haiku “by a substantial margin” in user satisfaction metrics, while matching or exceeding more expensive models like Claude 3.5 Sonnet. The company’s evaluations show Sonar achieving a factuality score of 85.1 out of 100, compared to 83.9 for GPT-4o and 75.8 for Claude 3.5 Sonnet.

Specialized hardware: The new battleground for AI companies

The partnership reflects a growing trend of AI companies seeking competitive advantages through specialized hardware.
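To put the 1,200 tokens-per-second figure in perspective, here is a quick back-of-the-envelope calculation. The 500-token answer length is an assumption chosen for illustration, not a number from the announcement:

```python
tokens_per_second = 1200  # Sonar on Cerebras hardware, per the announcement
answer_tokens = 500       # assumed length of a typical search answer

# Time to generate the complete answer once decoding begins
generation_seconds = answer_tokens / tokens_per_second
print(f"~{generation_seconds:.2f}s per answer")  # ~0.42s per answer
```

At that rate, even a fairly long answer finishes in well under a second, which is what makes an AI-first search experience feel instantaneous rather than incremental.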
Cerebras CEO Andrew Feldman recently argued that such technological advances expand rather than contract the market. “Every time compute has been made less expensive, they [public market investors] have systematically assumed that made the market smaller,” Feldman told ZDNET in a recent interview. “And in every single instance, over 50 years, it’s made the market bigger.” Industry analysts suggest this alliance could pressure traditional search providers and other AI companies to reconsider their hardware strategies. The ability to deliver near-instant results could prove particularly compelling for enterprise customers, where speed and accuracy directly impact productivity.

Market impact: Can specialized chips reshape enterprise search?

However, questions remain about the scalability and cost-effectiveness of specialized AI chips compared to traditional GPU-based solutions. While Cerebras has demonstrated impressive speed advantages, the company faces the challenge of convincing customers that the performance benefits justify potential premium pricing. The partnership also highlights the increasingly competitive landscape in AI search, where companies are racing to differentiate themselves through speed and accuracy rather than just raw model size. For Perplexity, which has been gaining attention as an AI-native alternative to traditional search engines, the Cerebras partnership could help establish it as a serious contender in the enterprise search market. Perplexity plans to make Sonar available to Pro users initially, with broader availability coming soon. The companies did not disclose the financial terms of their partnership.

Cerebras-Perplexity deal targets $100B search market with ultra-fast AI Read More »

Don’t sleep on Google Gemini’s Deep Research mode: 8 examples of informative reports

Many of us in the AI and business worlds are focused — anecdotally and in terms of the number of articles and messages being written and posted — on OpenAI and DeepSeek, especially OpenAI’s o3-powered Deep Research mode, a new reasoning AI agent that performs extensive web research on behalf of the user and compiles it into neat, well-cited reports. This is natural, since it’s a relatively new product (announced earlier this month) and OpenAI remains among the most highly regarded and widely used AI model providers. Plus, CEO Sam Altman recently shared plans to make the product available outside the current $200-per-month ChatGPT Pro subscription, at least on a limited trial basis. Yet for those seeking to use AI to perform deep research and write reports, there’s another model worth checking out, without waiting for OpenAI’s Deep Research to reach more affordable subscription tiers or shelling out $200 per month for the ChatGPT Pro plan. Search giant Google’s own Deep Research mode, powered by its prior-generation Gemini 1.5 Pro model, is available now in Google’s Gemini chatbot through the Google One AI Premium plan (about $20 per month), and offers much of the same functionality as OpenAI’s Deep Research at one-tenth the monthly cost. Google currently offers the first month free. It also lets you export the resulting reports directly to Google Docs with one click. For those who use Google Workspace apps like Docs, this is an incredibly helpful and natural integration.
How to use Google Deep Research to generate reports in minutes

To access it, subscribe to the Google One AI Premium plan using the link above, then navigate to gemini.google.com, click the drop-down menu labeled “Gemini Advanced” in the upper left corner, and select “1.5 Pro with Deep Research.” Every query you type into the entry bar at the bottom will now engage Deep Research mode. After the user enters a prompt, the Deep Research agent drafts a research plan for the user’s approval that looks something like this: The user can edit parts of the plan by prompting with new adjustments, or go ahead and click the “Start Research” button to begin the process. The Deep Research agent will compile a list of websites to research and, finally, deliver a report in the form of a response that the user can quickly export to Google Docs with the “Open in Docs” button at the top right of the response box. Whether it’s researching scholarly topics such as conflict throughout history, the science of new materials like graphene, or market fluctuations, or coming up with concrete business plans for mass-producing a new small consumer product, my own extensive hands-on usage of Google’s Deep Research over the last few days has produced informative reports on a wide range of subjects, complete with citations and well-constructed explanations of the topics discussed. Even controversial subjects that other AI models often refuse to engage with whatsoever — such as the recent Israeli military campaign in Gaza and whether or not it qualifies as a genocide, or the treatment of transgender people throughout history and in recent times — Google’s Deep Research will attempt to address using evidence from a variety of reputable sources, albeit with a bit of prompt engineering to get around initial resistance.
I would strongly encourage any and all business leaders, especially those in “knowledge work” or manufacturing fields, to try Google Deep Research: have it produce reports on subjects related to your industry, and ask it to identify new opportunities or helpful insights to grow your business and gain efficiencies that you might have missed. Basically, treat it like a helpful new researcher on your team: give it some instructions in the form of a paragraph (or a few), and let it compile the report for you — mine took anywhere from a few seconds to just under 10 minutes. I strongly believe you will be impressed with the results, and you may find it changes your workflow and approach for the better. Take a look at the 8 example reports I generated with Google Deep Research below, complete with their initial prompts, and try it for yourself. These are all unedited, raw reports produced directly by Google’s Deep Research powered by Gemini 1.5 Pro. I should hasten to add that I’m not being paid by Google for this post or any other work; I’m simply a tech journalist/geek by constitution who enjoys testing out new products and services and seeing how, if at all, they can be useful to me and my own personal knowledge repository.

1. Sleep research

Prompt: “Compile me a report cross-referencing various and any recent applicable studies and other scientific information about sleep length per night, and why some people may need less or more sleep than others, any genetic basis for this, and health effects of low sleep, as well as whether low sleepers tend to suffer these or have genetics that protect them from the effects of low sleep.”

Result: “Sleep Duration, Individual Variability, and Health Consequences: A Comprehensive Review”

2. Economic boom and bust research

Prompt: “Markets globally and for individual countries and commodities are known for having peaks and valleys, with sudden ‘black swan’ events that often drive economic activity down.
Research these throughout history, from the Tulip Craze of Amsterdam to the Great Depression and the Global Financial Crisis and subsequent Global Recession, plus any other notable examples you can find, and discuss any overlapping commonalities in causes for market decline and resurgence, and also which markets continued to grow and thrive in depression and recessionary environments.”

Result: “Peaks, Valleys, and Black Swans: An Analysis of Market Crashes and Thriving Sectors in Economic Downturns”

3. Making a new mass-produced consumer product

Prompt: “I have an idea

Don’t sleep on Google Gemini’s Deep Research mode: 8 examples of informative reports Read More »

Hugging Face brings ‘Pi-Zero’ to LeRobot, making AI-powered robots easier to build and deploy

Hugging Face and Physical Intelligence have quietly launched Pi0 (Pi-Zero) this week, the first foundation model for robots that translates natural language commands directly into physical actions. “Pi0 is the most advanced vision language action model,” Remi Cadene, a principal research scientist at Hugging Face, announced in an X post that quickly gained attention across the AI community. “It takes natural language commands as input and directly outputs autonomous behavior.” This release marks a pivotal moment in robotics: the first time a foundation model for robots has been made widely available through an open-source platform. Much like ChatGPT revolutionized text generation, Pi0 aims to transform how robots learn and execute tasks.

“The future of robotics is open! Excited to see Pi0 by @physical_int being the first foundational robotics model to be open-sourced on @huggingface @LeRobotHF. You can now fine-tune it on your own dataset.” — clem (@ClementDelangue), February 4, 2025

How Pi0 brings ChatGPT-style learning to robotics, unlocking complex tasks

The model, originally developed by Physical Intelligence and now ported to Hugging Face’s LeRobot platform, can perform complex tasks like folding laundry, bussing tables and packing groceries — activities that have traditionally been extremely challenging for robots to master. “Today’s robots are narrow specialists, programmed for repetitive motions in choreographed settings,” the Physical Intelligence research team wrote in their announcement post. “Pi0 changes that, allowing robots to learn and follow user instructions, making programming as simple as telling the robot what you want done.” The technology behind Pi0 represents a significant technical achievement.
The model was trained on data from seven different robotic platforms and 68 unique tasks, enabling it to handle everything from delicate manipulation tasks to complex multi-step procedures. It employs a novel technique called flow matching to produce smooth, real-time action trajectories at 50Hz, making it highly precise and adaptable for real-world deployment.

Credit: Physical Intelligence

New FAST technology accelerates robot training by 5X, expanding AI’s potential

Building on this foundation, the team also introduced “Pi0-FAST,” an enhanced version of the model that incorporates a new tokenization scheme called frequency-space action sequence tokenization (FAST). This version trains five times faster than its predecessor and shows improved generalization across different environments and robot types. The implications for industry are substantial. Manufacturing facilities could potentially reprogram robots for new tasks through simple verbal instructions rather than complex coding. Warehouses could deploy more flexible automation systems that adapt to changing needs. Even small businesses might find robotics more accessible, as the barrier to programming and deployment significantly decreases. However, challenges remain. While Pi0 represents a significant advance, it still has limitations: the model occasionally struggles with very complex tasks and requires substantial computational resources, and there are open questions about reliability and safety in industrial settings. The release comes at a crucial time in the AI industry’s evolution. As companies race to develop and deploy artificial general intelligence (AGI), Pi0 represents one of the first successful attempts to bridge the gap between language models and physical-world interaction.
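Flow matching, in broad strokes, works by learning a velocity field that transports a noise sample toward a clean action trajectory; at inference time the model integrates that field over a few steps to produce the action. The following is a rough, self-contained sketch of that integration loop. The linear velocity field here is invented purely for illustration and is not Pi0's actual network:

```python
def integrate_flow(velocity_field, noisy_action, steps=10):
    """Toy flow-matching inference: Euler-integrate a velocity field
    from t=0 to t=1, starting from a noise sample.
    `velocity_field(a, t)` stands in for the trained network."""
    action, dt = list(noisy_action), 1.0 / steps
    for i in range(steps):
        t = i * dt
        # One Euler step along the predicted velocity
        action = [a + velocity_field(a, t) * dt for a in action]
    return action

# Stand-in velocity field that pulls each joint value toward 0.5
# (a real model conditions this on camera images and the language command).
toy_field = lambda a, t: 0.5 - a

start = [0.0, 1.0, -0.3]       # pretend "noise" over three joints
refined = integrate_flow(toy_field, start)
```

Each integration step moves the action closer to the target; in the real system, running this refinement quickly enough is what lets the model emit smooth trajectories at 50Hz.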
The technology is now available through Hugging Face’s platform, where developers can download and use the pretrained policy with just a few lines of code:

policy = Pi0Policy.from_pretrained("lerobot/pi0")

For enterprise users, this accessibility could accelerate the adoption of advanced robotics across industries. Companies can now fine-tune the model for specific use cases, potentially reducing the time and cost associated with deploying robotic solutions.

Credit: Physical Intelligence

Why enterprise leaders should pay attention to open-source robotics

The development team has also released comprehensive documentation and training materials, making the technology accessible to a broader range of users. This democratization of robotics technology could lead to innovative applications across various sectors, from healthcare to retail. As the technology matures, it could reshape how we think about automation and human-robot interaction. The ability to control robots through natural language could make robotic assistance more accessible in homes, hospitals and small businesses — areas where traditional robotics has struggled to gain traction due to programming complexity. With this release, the future of robotics looks increasingly conversational, adaptive and accessible. While there’s still work to be done, Pi0 represents a significant step toward making versatile, intelligent robots a practical reality rather than a science fiction fantasy.

Hugging Face brings ‘Pi-Zero’ to LeRobot, making AI-powered robots easier to build and deploy Read More »