AI Startup CoreWeave's Tepid Debut Chills IPO Enthusiasm

By Tom Zanki (March 28, 2025, 8:41 PM EDT) — Artificial intelligence startup CoreWeave Inc.’s skittish debut following a scaled-down initial public offering chills recovery hopes for an IPO market that was already wobbly, though experts say viable candidates are waiting to strike if conditions stabilize.


Observe launches VoiceAI agents to automate customer call centers with realistic, humanlike voices that don’t interrupt

Observe.AI has officially launched VoiceAI agents, a solution designed to automate routine customer interactions in contact centers. The latest addition to the company’s AI-driven conversational intelligence platform, VoiceAI agents aim to improve customer experience while reducing operational costs. With this release, Observe.AI is positioning itself as the only complete AI-powered platform that supports enterprises across the entire customer journey. The company’s suite of solutions now includes enterprise-grade VoiceAI agents, real-time agent assist tools, AutoQA for quality monitoring, agent coaching, and business insights.

Automating the routine

Observe.AI’s VoiceAI agents are built to handle a wide range of customer service inquiries, from frequently asked questions to more complex, multi-step conversations. They are built atop a combination of in-house AI models and partnerships with major AI providers like OpenAI and Anthropic for large language models (LLMs). “It’s an ensemble of multiple smaller models,” explained Swapnil Jain, CEO and co-founder of Observe.AI, in a recent video call interview with VentureBeat. “For example, we have a specific model for number detection, a specific model for entity detection, a model for turn detection, and so on.” The goal is to alleviate the burden on human agents, allowing them to focus on higher-value interactions. “Enterprises are saying, ‘Do we really need human agents for these kinds of use cases?’” Jain said. Companies often receive calls for basic tasks like checking an account balance or resetting a password, interactions that AI can now handle efficiently. For customers, this means eliminating long hold times and avoiding frustrating IVR menus that require pressing multiple buttons or repeatedly requesting a human agent.
The voice AI space is becoming increasingly crowded, with options ranging from proprietary models like OpenAI’s newly released GPT-4o-transcribe family and ElevenLabs to open-source solutions. So why would someone pick Observe.AI’s agents over these? In a nutshell: specialization and ease of use. Instead of having to use raw voice AI models through providers’ APIs and build custom integrations or custom voice apps, Observe.AI’s platform is already built to essentially “plug and play” with existing workflows and operations. So while GPT-4o and other LLMs provide raw AI capabilities, Jain and Observe.AI contend that they don’t offer a fully integrated solution for customer service workflows. In addition, unlike traditional voice AI assistants, Observe.AI’s VoiceAI agents are specifically designed for contact centers. The system combines various AI technologies, including:

Automatic Speech Recognition (ASR): Converts spoken language into text in real time.
Text-to-Speech (TTS): Delivers responses in a human-like voice.
Proprietary AI models: Specialized for handling numbers, turn-taking, and interruptions, which are critical in customer service settings.

Jain noted that one of the key challenges AI agents face is knowing when a customer has actually finished speaking. “When do you know that the AI agent can start processing and the customer has stopped speaking?” he asked. “Sometimes I’m taking pauses because my sentence is over and I’m starting a new one. Sometimes I just stop speaking. How do you know the difference?” Observe.AI has developed custom in-house models that handle these nuances, ensuring smoother conversations between AI and customers.

Deploys fast while integrating deeply with enterprise product support and tracking systems

One of Observe.AI’s key advantages is its ability to integrate seamlessly with existing enterprise systems.
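Observe.AI's actual turn-detection models are proprietary, but the problem Jain describes can be illustrated with a minimal sketch: a baseline end-of-turn heuristic that waits longer through a pause when the partial transcript looks unfinished. Everything here (class name, thresholds) is invented for illustration, not Observe.AI's API.

```python
from dataclasses import dataclass

# Hypothetical sketch of end-of-turn detection; the specialized models Jain
# describes would replace this punctuation-and-silence heuristic.
@dataclass
class TurnDetector:
    base_silence_s: float = 1.2  # pause required when the sentence looks unfinished
    eos_silence_s: float = 0.4   # shorter pause suffices after terminal punctuation

    def end_of_turn(self, partial_transcript: str, silence_s: float) -> bool:
        """Return True if the caller has likely finished their turn."""
        looks_finished = partial_transcript.rstrip().endswith((".", "?", "!"))
        threshold = self.eos_silence_s if looks_finished else self.base_silence_s
        return silence_s >= threshold

detector = TurnDetector()
# Mid-sentence pause: the agent should not barge in yet.
assert detector.end_of_turn("My account number is", 0.8) is False
# Finished sentence plus a short pause: hand the turn to the agent.
assert detector.end_of_turn("I'd like to reset my password.", 0.5) is True
```

A production system would feed this decision from a voice-activity detector and the ASR stream rather than from a finished transcript string.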
Over time, the company has developed pre-built integrations with more than 250 platforms, including leading telephony, CRM, and workforce management tools such as Salesforce, Zendesk, and ServiceNow. This approach allows businesses to implement VoiceAI agents quickly. While AI deployments can sometimes take months, Observe.AI claims that its VoiceAI agents can go live in as little as one week, with minimal setup costs. “It’s not a professional services model where we take six months to customize something for you,” Jain said. “We come in, take two weeks to configure the product, and it works.”

Security and compliance at the forefront

Given the sensitivity of customer interactions, Observe.AI has built its solution with enterprise-grade security. The company holds certifications including GDPR, HIPAA, HITRUST, SOC 2, and ISO 27001. While voice biometrics have been used in the past for authentication, Jain stated that Observe.AI does not rely on them due to security concerns. Instead, the system follows traditional authentication methods, such as verifying Social Security numbers or account details. Additionally, Observe.AI offers data redaction capabilities to remove personally identifiable information (PII) before storage, and customers can opt for private instances to ensure data remains isolated. “In today’s world, you cannot rely on individual speech patterns for authentication,” Jain said. “We work with businesses to configure the same security rules they use for their human agents into our AI agents.”

Saving $$$ through automation

Observe.AI’s pricing model is based on completed tasks rather than per-minute usage. The cost depends on the complexity of the interaction, with simpler tasks (such as routing a call) priced lower than more involved tasks (such as processing an insurance claim). According to Jain, businesses can expect to save between 70-80% on customer service costs compared to using human agents.
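The claimed savings range is easy to make concrete with a small calculation. Only the 70-80% figure comes from the article; the per-interaction dollar cost below is an invented number used purely for the arithmetic.

```python
# Hypothetical arithmetic for the claimed 70-80% savings. The $5.00
# human-handled cost per interaction is an assumed figure, not from the article.
human_cost_per_interaction = 5.00
savings_range = (0.70, 0.80)

ai_cost_range = [round(human_cost_per_interaction * (1 - s), 2) for s in savings_range]
# At 70-80% savings, a $5.00 human-handled interaction costs $1.00-$1.50 via AI.
assert ai_cost_range == [1.50, 1.00]
```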
Early enterprise success stories

Companies using VoiceAI agents are already seeing significant improvements. Emmanual Noyola, Director of Patient Services at Affordable Care, highlighted the impact on his team: “Beth, our VoiceAI agent, handles multiple intents with a 95% containment rate so our customer care team can focus on more complex cases.” By analyzing every conversation, Observe.AI’s platform continuously refines AI agent performance, ensuring accuracy and compliance. Businesses can also use AutoQA to evaluate both AI and human agents, identifying areas for improvement. One of the key challenges in AI-driven customer service is maintaining accuracy while preventing unintended responses. Jain acknowledged these concerns, referencing past AI missteps in customer service automation. “The core thesis behind making these enterprise-grade is having a very high bar on the confidence of the response,” he said. “If our response confidence is less than a certain threshold, it’s better for the AI agent to not even engage.” Blending AI automation with
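The confidence gating Jain describes can be sketched in a few lines. The threshold value, function name, and escalation token below are all hypothetical; the article only states that responses below a confidence bar are withheld.

```python
# Hypothetical sketch of confidence gating: answer only when confident,
# otherwise escalate to a human agent. Threshold and names are invented.
CONFIDENCE_THRESHOLD = 0.9

def respond_or_escalate(answer: str, confidence: float) -> str:
    """Return the AI's answer if confidence clears the bar; else hand off."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return "ESCALATE_TO_HUMAN"

assert respond_or_escalate("Your balance is $42.", 0.97) == "Your balance is $42."
assert respond_or_escalate("Maybe try rebooting?", 0.55) == "ESCALATE_TO_HUMAN"
```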


Breaches And Lawsuits And Fines, Oh My! What We Learned, The Hard Way, From 2024

With the average cost of a data breach at $2.7 million and 33% of enterprises reporting being breached three or more times over the past 12 months, understanding and learning from past incidents is not just beneficial — it’s essential. Our detailed examination of the top 35 breaches and privacy fines of 2024 has unearthed critical insights into the evolving cyberthreat landscape. Among the key findings: attacks cause more than just monetary damage; inadequate data protection severely impacts customer trust; and healthcare in particular is at a critical juncture, because it’s not just brand reputation at stake but delivery of critical medical services.

2024 also saw hefty fines levied on organizations. GDPR is once again the most enforced privacy regulation in the world, but it isn’t the only regulation with sharp penalties. In the US, more states are putting privacy laws in place and holding organizations accountable. Not only does Meta hold the record for the highest-ever GDPR fine, €1.2 billion in 2023 from an Irish regulator, but in 2024, Meta took home the largest US state fine ever at $1.4 billion. While some companies can pay off their fines like parking tickets, most organizations do not have the capital or lawyers to copy this behavior.

From our analysis of the top breaches and fines, we found the following:

Massive breaches and outages drive regulatory proposals and changes. In early 2024, US Executive Order 14117 focused its attention on bulk sensitive personal data, with emphasis on telecommunications and the healthcare market. The US Federal Communications Commission has proposed telecom cybersecurity and supply chain risk management rules. The proposed update to the HIPAA Security Rule that is currently open for comment is the first major update to the rule in over a decade. New York State, acting independently, implemented strict cybersecurity mandates for hospitals.
And not to be outdone, the EU has focused on operational resilience: the Digital Operational Resilience Act (DORA), which has been years in the making and makes sweeping demands on security practices, went into effect January 17, 2025.

Organizations need to worry about more than regulatory fines. It is important for firms operating within the US to be aware that, although the regulatory penalties they face can be substantial, there is another financial risk on the horizon that can’t be overlooked. Recent data indicates that the proportion of companies confronted with class-action lawsuits has reached its highest point in 13 years, and it is projected that this year the expenses associated with defending against these class-action lawsuits could exceed the costs of regulatory fines.

Not all breaches are for financial gain. This past year, US ISPs and telecoms found their systems infiltrated by Chinese state-affiliated actors. After the investigation of these breaches, it appears that the focus was on a small number of individuals of political interest. In a separate incident, state-sponsored Chinese attackers breached the US Department of the Treasury through third-party vendor BeyondTrust’s support software. The objective was to gain sensitive information and conduct reconnaissance.

To see the rest of our analysis and, more importantly, get the recommended actions you can take to protect your organization, read our report, Lessons Learned From The World’s Biggest Data Breaches And Privacy Abuses, 2024, or schedule a guidance session with us to talk more. (written with Danielle Chittem, research associate)


Microsoft's Recent Quantum Claims: Breakthrough or Overreach?

In February, Microsoft claimed it had created a new form of matter and used it to develop a quantum computer architecture that could potentially be put to work solving complex industrial problems within years. Since the announcement, some researchers and scientists have disputed these claims, saying Microsoft hasn’t actually achieved what it’s suggesting.

The promise of topological qubits

Microsoft stated its in-house experts had created “the world’s first topoconductor, a breakthrough type of material which can observe and control Majorana particles to produce more reliable and scalable qubits, which are the building blocks for quantum computers.” Majorana particles are fermions, subatomic particles theorized to be their own antiparticles. What makes these topological qubits potentially promising is their supposed natural ability to reduce errors, which is one of the biggest challenges facing all current quantum computers. The topoconductor is one component of a new chip called Majorana 1, which Microsoft said could unlock industrial-scale uses for quantum computing within years. The company claimed the chip is an important step on the roadmap to fitting one million qubits on a single chip. Moreover, Microsoft said the topoconductor enables an entirely new state of matter in which Majorana particles can be arranged in a neat grid of H-shaped units. “It’s complex in that we had to show a new state of matter to get there, but after that, it’s fairly simple. It tiles out. You have this much simpler architecture that promises a much faster path to scale,” said Krysta Svore, Microsoft technical fellow. The research is part of DARPA’s Underexplored Systems for Utility-Scale Quantum Computing (US2QC) competition to create a quantum computer whose computational value outweighs its costs.

Skepticism from the scientific community

There are many reasons Microsoft’s announcement was a shock to the community, but the biggest is the elusiveness of these Majorana particles.
The particles were first proposed in 1937, but actually finding them has been challenging. Yet Microsoft declared it had not only detected these elusive particles but had managed to harness them in a working machine containing eight topological qubits. Objections to Microsoft’s methodology have arisen since then, including the editor of Nature pointing out that the paper Microsoft published does not prove there are Majorana particles in any specific devices. What’s more, experiments of the type Microsoft performed tend to create false signals that can look like the presence of Majoranas, according to interviews conducted by NewScientist. In addition, researchers argue that Microsoft simply hasn’t shared enough proof to back up its claims. Henry Legg, a lecturer in theoretical physics at the University of St Andrews in the U.K., recently published a pre-print critique that states Microsoft’s work “is not reliable and must be revisited.” Legg says the company’s work does not have a “consistent definition,” and that the findings “vary significantly, even for measurements of the same device.” Microsoft’s quantum VP, Zulfi Alam, fired back, calling Legg a “pontificator” who didn’t “bother to read the papers or even try to understand the data.” “The announcement from Microsoft on their topological qubit – a qubit harnessing matter which can be reformed to perform the low-error operations crucial to quantum computing’s success – has been a core strategy for Microsoft for over a decade,” said Gerald Mullally, interim CEO of Oxford Quantum Circuits.
“Their announcement indicates that qubits could be formed from a single ‘physical qubit’ using smart (but incredibly difficult) material and fabrication techniques just microns in size.” Mullally goes on to say, “While this is a significant moment for the maturity and fast march of the industry, further research on measured coherence and gate fidelity characterisation – key metrics to understand the platform’s viability – is required to really understand its impact. Research such as this from a major technology company underpins the importance and prospects of commercial quantum computing.” Time will tell whether Microsoft’s announcement represents a genuine quantum revolution. TechnologyAdvice staff writer Megan Crouse contributed to this article.


Military vehicles to get mixed reality windshields controlled by eyes

Finnish startup Distance Technologies emerged from stealth last year with a technology it claims can turn any transparent surface into a mixed reality (MR) display. Now, it has teamed up with Patria to trial the tech on the defence firm’s armoured vehicles. The partners will jointly develop a heads-up display for Patria’s six-wheel drive armoured personnel carrier. The system will display 3D tactical data, terrain mapping, and AI-driven military insights directly onto the windshield, allowing military personnel to see in low-visibility environments like darkness and smoke. The MR technology promises to eliminate the need for additional screens or clunky headsets. The display also remains covert, preventing light leakage that could reveal vehicle positions, Distance said. Urho Konttori, the startup’s CEO and co-founder, claims it will give drivers “super sensing abilities.” “Creating XR heads-up displays that visualise mission-critical information on the windshield offers unprecedented speed, confidence, and decision-making ability on the battlefields of the future,” said Konttori, the former CTO of Varjo, another Helsinki-based XR startup. Unlike standard heads-up displays in cars, which project static or pre-determined information onto the windshield, Distance’s technology tracks the user’s eye movements and then displays the correct light field to match where they are looking. The “brain” behind the system is so-called contextual AI, a type of artificial intelligence that understands and reacts based on its situation. Distance can add its light field optics on top of most LCDs. When users look through the screen, they see a computer-generated 3D light field mixed in with the real world.
This means virtually any transparent surface can transform into an MR window, whether that’s the windshield of a car, an F-18 fighter jet, or a 6×6 armoured vehicle.

An early prototype of the MR windshield. Distance said the final version delivered to Patria will be of significantly higher quality. Credit: Distance Technologies Oy and Patria

Distance claims the system is capable of “infinite” pixel depth, which means it should be indistinguishable from natural sight. Konttori no doubt drew inspiration from his work at Varjo, which in 2023 unveiled the world’s first retina-resolution XR headset.

The MR model for Distance

Konttori left Varjo early last year. “I’ve started increasingly feeling that it’s time for me to move on,” he said in a LinkedIn post at the time. “I’m leaving Varjo and starting something new. Not anything Varjo does… but pretty mind-blowing.” Distance is that new venture. The company indeed takes a very different approach from his former employer. While Varjo develops headsets with lenses, Distance offers glasses-free XR tech to the automotive, aerospace, and defence markets. In July last year, the startup emerged from stealth with a $2.7mn pre-seed investment. Three months later, it raised $11.1mn in a round led by Google Ventures. Now the company is looking to test its tech in the real world. Distance’s collaboration with Patria is part of the defence firm’s government-funded eALLIANCE program, which seeks to support Finnish civil and defence tech companies. The partnership comes as Europe rushes to increase defence spending amid cooling relations with the US. Just yesterday, Germany voted to create a massive €500bn ($545bn) fund for defence and infrastructure.


The open source Model Context Protocol was just updated — here’s why it’s a big deal

The Model Context Protocol (MCP), a rising open standard designed to help AI agents interact seamlessly with tools, data and interfaces, just hit a significant milestone. Today, developers behind the initiative finalized an updated version of the MCP spec, introducing key upgrades to make AI agents more secure, capable and interoperable. In a very significant move, OpenAI, the industry leader in generative AI, followed the MCP announcement today by saying it is also adding support for MCP across its products. CEO Sam Altman said the support is available today in OpenAI’s Agents SDK and that support for ChatGPT’s desktop app and the Responses API would be coming soon. Microsoft announced support for MCP alongside this release, including launching a new Playwright-MCP server that allows AI agents like Claude to browse the web and interact with sites using the Chrome accessibility tree. “This new version is a major leap forward for agent-tool communication,” Alex Albert, a key contributor to the MCP project, said in a post on Twitter. “And having Microsoft building real-world infrastructure on top of it shows how quickly this ecosystem is evolving.”

What’s new in the updated MCP version?

The March 26 update brings several important protocol-level changes:

OAuth 2.1-based authorization framework: Adds a robust standard for securing agent-server communication, especially in HTTP-based transports.
Streamable HTTP transport: Replaces the older HTTP+SSE setup, enabling real-time, bidirectional data flow with better compatibility.
JSON-RPC batching: Allows clients to send multiple requests in one go, improving efficiency and reducing latency in agent-tool interactions.
Tool annotations: Adds rich metadata for describing tool behavior, enabling more informed discovery and reasoning by AI agents.
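The batching change is the easiest of these to picture: instead of one HTTP round trip per request, a client sends a single JSON array of JSON-RPC 2.0 requests. The sketch below builds such a batch; `tools/list` and `tools/call` are standard MCP method names, but the exact payloads are illustrative, not a complete MCP client.

```python
import json

# Illustrative JSON-RPC 2.0 batch: two MCP requests serialized as one array,
# so both share a single round trip to the server.
batch = [
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "browser_navigate",
                "arguments": {"url": "https://example.com"}}},
]

payload = json.dumps(batch)          # what goes over the wire
decoded = json.loads(payload)
assert isinstance(decoded, list) and len(decoded) == 2
assert all(req["jsonrpc"] == "2.0" for req in decoded)
```

Per JSON-RPC 2.0 semantics, the server may return the batched responses in any order; the `id` fields are what let the client match responses back to requests.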
Figure 1: Claude Desktop using Playwright-MCP to navigate and describe datasette.io, demonstrating web automation powered by the Model Context Protocol.

The protocol uses a modular JSON-RPC 2.0 base, with a layered architecture separating core transport, lifecycle management, server features (like resources and prompts) and client features (like sampling or logging). Developers can pick and choose which components to implement, depending on their use case.

Microsoft’s contribution: Browser automation via MCP

Two days ago, Microsoft released Playwright-MCP, a server that wraps its powerful browser automation tool in the MCP standard. This means AI agents like Claude can now do more than talk: they can click, type, browse, and interact with the web like real users. Built on the Chrome accessibility tree, the integration allows Claude to access and describe page contents in a human-readable form. The available toolset includes:

Navigation: browser_navigate, go_back, go_forward
Input: browser_type, browser_click, browser_press_key
Snapshots: browser_snapshot, browser_screenshot
Element-based interactions using accessibility descriptors

This turns any compliant AI agent into a test automation bot, QA assistant or data navigator.

“people love MCP and we are excited to add support across our products. available today in the agents SDK and support for chatgpt desktop app + responses api coming soon!” — Sam Altman (@sama) March 26, 2025

Setup is easy: users simply add Playwright as a command in claude_desktop_config.json, and the Claude Desktop app will recognize the tools at runtime.

The bigger picture: Interoperability at scale

Figure 2: The modular design of MCP enables developers to implement only the layers they need, while maintaining compatibility.
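That claude_desktop_config.json entry might look like the sketch below. The `command`/`args` values follow Microsoft's published Playwright-MCP instructions at the time of writing, but verify against the project's current README before copying, since package names can change.

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

After restarting Claude Desktop, the tools listed above (browser_navigate, browser_click, and so on) should appear as callable tools at runtime.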
Anthropic first introduced MCP in late 2024 to solve a growing pain point: AI agents need to interact with real-world tools, but every app speaks a different “language.” MCP aims to fix that by providing a standard protocol for describing and using tools across ecosystems. With backing from Anthropic, LangChain and now Microsoft, MCP is emerging as a serious contender for becoming the standard layer of agent interconnectivity. Since MCP was launched first by Anthropic, questions lingered over whether Anthropic’s largest competitor, OpenAI, would support the protocol. And of course, Microsoft, a big ally of OpenAI, was another question mark. The fact that both players have supported the protocol shows momentum is building among enterprise and open-source communities. OpenAI itself has been opening up its ecosystem around agents, including with its latest Agents SDK announced a week ago, and the move has solidified support around OpenAI’s API formats becoming a standard, given that others like Anthropic and Google have fallen in line. So with OpenAI’s API formats and MCP both seeing support, standardization has seen a big win over the past few weeks. “We’re entering the protocol era of AI,” tweeted Alexander Doria, the co-founder of AI startup Pleias. “This is how agents will actually do things.”

What’s next?

With the release of the updated MCP spec and Microsoft’s tangible support, the groundwork is being laid for a new generation of agents that can think and act securely and flexibly across the stack.

Figure 3: OAuth 2.1 Authorization Flow in Model Context Protocol (MCP)

The big question now is: Will others follow? If Meta, Amazon, or Apple sign on, MCP could soon become the universal “language” of AI actions. For now, it’s a big day for the agent ecosystem, one that brings the promise of AI interoperability closer to reality.


From automation to transformation: How AI is reshaping business

Are you using artificial intelligence (AI) to do the same things you’ve always done, just more efficiently? If so, you’re only scratching the surface. EXL executives and AI practitioners discussed the technology’s full potential during the company’s recent virtual event, “AI in Action: Driving the Shift to Scalable AI.” “AI isn’t about automation or efficiency,” said Vishal Chhibbar, chief growth officer at EXL. “It’s about driving smarter decisions, improving experiences and creating lasting value. And when AI is built on industry-specific knowledge, it transforms customer experience, operations, and IT in ways that weren’t possible before.”

Accelerating business outcomes

To illustrate these capabilities, EXL demonstrated EXLerate.AI, its AI orchestration platform. By using industry-specific AI agents and large language models (LLMs) to manage and automate complex business workflows, it enables enterprises to achieve a greater return on their AI investments through higher efficiency, enhanced customer experiences, improved accuracy, and increased scalability. Rohit Kapoor, chairman and CEO of EXL, highlighted the platform’s three core principles:

The ability to integrate AI seamlessly into enterprise workflows
A strong foundation based on data and domain expertise
An open architecture that allows flexibility for rapid innovation

“It’s designed to help enterprises unlock AI’s full potential and accelerate business outcomes,” Kapoor said. The use of agentic AI, which relies on domain-specific logic and real-time data to validate and correct its outputs, makes EXLerate.AI more autonomous than traditional AI platforms. It goes beyond automating existing processes to instead reimagine new processes and manage them to ensure greater efficiency and compliance from the get-go.
Wyatt Bennett, AI platform product lead at EXL, walked event attendees through the platform’s dashboard, demonstrating how users can easily select the LLMs and data sources they want to use, deploy connectors to third-party data sources, and even configure integrations with internal knowledge bases to create retrieval-augmented generation solutions. “The EXLerate.AI platform enables teams to quickly spin up new agentic solutions while simplifying the configuration of common foundational components,” Bennett said. Attendees also saw demos of Code Harbor, EXL’s generative AI-powered code migration tool, and EXL’s Insurance LLM, a purpose-built solution to the industry’s challenges around claims adjudication and underwriting.

AI’s ‘enormous opportunities’

In one of the event’s panel discussions, “Staying Ahead in the Age of AI: Practical Lessons from Visionary Leaders,” AI practitioners shared how they’re using the technology and the benefits they’ve seen. Alexandra Hordern, general manager, regulatory and consumer policy at the Insurance Council of Australia, said AI expedites claims processing and frees up teams to perform more valuable work, leading to better outcomes. She also highlighted its potential to make fraud detection and prevention more efficient, reducing costs for customers. “AI has enormous opportunities to transform the way that general insurers and other businesses are operating in the economy,” she said. Jeffery Eberwein, chief solutions and data officer at Avant Insurance, agreed, adding that his organization is working with healthcare providers to improve services through AI-powered advancements such as enhanced radiology scanning and automated scribes. “Not only are we focused on how we can improve internally our processes as an insurer, but also supporting our doctors through advocacy and education as to how they’re using it in their practices,” he said.
Enterprises that are best able to scale their AI initiatives will have the biggest competitive advantages, said Troy Williams, Asia Pacific digital leader at ISG. Working with third parties that specialize in data and process automation and focus on outcome-based operating models will also help them gain an edge, because most businesses don’t operate in that way. “What we’re seeing is a movement towards one-to-one, personalized experiences leveraging generative AI as a way of engaging with those customers at a personal level,” Williams said. “At the same time … it’s really a matter of scale, and moving those generative AI capabilities to the market at scale.” Working with partners — and making intentional choices about whether to build or buy certain solutions — can help enterprises achieve this scale quickly, said John Kim, chief data officer at Zurich Australia. “We certainly don’t believe we can build every single bit of our AI ecosystem, so we’re quite conscious about what we decide to build internally, what we’ll adopt, and what we’re going to buy,” he said.

AI as a competitive differentiator

Advancements in generative and agentic AI are empowering organizations to rethink processes, modernize systems and streamline workflows. The businesses that combine these technologies with high-quality data and domain expertise will be best equipped to turn AI from a tool into a true competitive advantage. To learn more about what AI can do for your business, visit exlservice.com.


1. Views on deportations and arrests of immigrants in the U.S. illegally

This chapter explores Americans’ views on which groups of immigrants who are in the country illegally should be deported, where arrests should be allowed, and whether police should be able to check a person’s immigration status.

Views on whether immigrants living in the country illegally should be deported

About half of U.S. adults (51%) say some immigrants living in the country illegally should be deported, compared with 32% who say all should be deported. Some 16% say none should be deported.

By political party: Nearly all Republicans and Republican-leaning independents (96%) say at least some immigrants living in the country illegally should be deported, compared with 71% of Democrats and Democratic leaners. A far larger share of Republicans (54%) than Democrats (10%) say all immigrants in the country illegally should be deported.

By race and ethnicity: Similar shares of White (87%) and Asian (86%) adults say at least some immigrants living in the country illegally should be deported. Lower shares of Black (75%) and Hispanic (72%) adults say so. However, White adults (39%) are more likely than Asian (22%), Black (19%) or Hispanic (16%) adults to say all immigrants in the country illegally should be deported.

Views on which groups of immigrants living in the country illegally should be deported

Among U.S. adults who say some immigrants living in the country illegally should be deported, nearly everyone supports deporting those who have committed violent crimes. However, views vary among these Americans on whether immigrants living in the country illegally should be deported if they have committed nonviolent crimes or if they arrived in the U.S. during the last four years. Here are views by different demographic groups among U.S. adults who say some immigrants living in the country illegally should be deported:

By political party: A greater share of Republicans than Democrats who favor some deportations say immigrants living in the country illegally should be deported if they have committed nonviolent crimes (67% vs. 42%) or have arrived in the last four years (63% vs. 32%). When it comes to those who have committed violent crimes, nearly all Republicans and Democrats (97% each) say this group should be deported.

By race and ethnicity: Most White (59%) and Asian (60%) adults who support some deportations say immigrants living in the country illegally should be deported if they have committed nonviolent crimes. By contrast, lower shares of Hispanic (43%) and Black (34%) adults say this. Roughly half or fewer of White (48%), Asian (43%), Hispanic (41%) and Black (34%) adults say immigrants living in the country illegally should be deported if they arrived in the U.S. during the last four years.

The survey also asked whether other groups of immigrants in the country illegally should be deported. Relatively few Americans support deporting these immigrants if they have a job (15%), are parents of children born in the U.S. (14%), came to the U.S. as children (9%) or are married to a U.S. citizen (5%).

Views on where arrests of immigrants living in the country illegally should be allowed

A majority of U.S. adults say law enforcement should be allowed to arrest immigrants living in the country illegally at protests or rallies, in their homes or in their workplaces.

By political party: 89% of Republicans say arrests of immigrants living in the country illegally should be allowed at protests or rallies, compared with 44% of Democrats. Republicans and Democrats also hold starkly different views on whether arrests of these immigrants should be allowed in their homes (84% vs. 44%).

By race and ethnicity: Hispanics are the only racial or ethnic group in which fewer than half say arrests of immigrants in the country illegally should be allowed in their homes (38%). Roughly a third of Black (35%) and Hispanic (32%) adults say arrests at workplaces should be allowed, a lower share than for White and Asian adults. About half or more of all racial or ethnic groups say law enforcement should be allowed to make arrests at protests or rallies.

By nativity: Majorities of both U.S.-born and immigrant adults (69% and 55%, respectively) say arrests of immigrants living in the country illegally should be allowed at protests or rallies. U.S.-born adults are more likely than immigrants to say arrests should be allowed in homes (67% vs. 46%) and in workplaces (57% vs. 36%).

The survey also asked whether immigration arrests should be allowed in other places. Fewer than half of Americans say arrests should be allowed in hospitals (37%), schools (35%) or places of worship (33%).

Views on whether police should be able to check for immigration status

A slim majority of U.S. adults say law enforcement should be able to check a person’s immigration status during daily activities like traffic stops. Overall, 56% say this should be allowed, while 43% say it should not.

By political party: Republicans (81%) are far more likely than Democrats (33%) to say law enforcement should be allowed to check a person’s immigration status during daily activities like traffic stops.

By nativity: Those born in the U.S. are more likely than immigrants (60% vs. 36%) to say law enforcement should be allowed to check for immigration status.

By race and ethnicity: 66% of White adults say police should be allowed to check for immigration status. By contrast, roughly half or fewer of Asian (45%), Black (42%) and Hispanic (35%) adults say so.

By age: U.S. adults under age 50 are less likely than those 50 and older to say law enforcement should be able to check a person’s immigration status during daily activities. source

1. Views on deportations and arrests of immigrants in the U.S. illegally Read More »

'AI Biology' Research: Anthropic Explores How Claude 'Thinks'

It can be difficult to determine how generative AI arrives at its output. On March 27, Anthropic published a blog post introducing a tool for looking inside a large language model to follow its behavior, seeking to answer questions such as what language its model Claude “thinks” in, whether the model plans ahead or predicts one word at a time, and whether the AI’s own explanations of its reasoning actually reflect what’s happening under the hood. In many cases, the explanation does not match the actual processing: because Claude generates its own explanations for its reasoning, those explanations can feature hallucinations, too.

A ‘microscope’ for ‘AI biology’

Anthropic published a paper on “mapping” Claude’s internal structures in May 2024, and its new paper, which describes the “features” a model uses to link concepts together, builds on that work. Anthropic calls the research part of the development of a “microscope” into “AI biology.” In the first paper, Anthropic researchers identified “features” connected by “circuits,” which are paths from Claude’s input to its output. The second paper focuses on Claude 3.5 Haiku, examining 10 behaviors to diagram how the AI arrives at its result. Anthropic found:

- Claude plans ahead, particularly on tasks such as writing rhyming poetry.
- Within the model, there is “a conceptual space that is shared between languages.”
- Claude can “make up fake reasoning” when presenting its thought process to the user.

The researchers discovered how Claude translates concepts between languages by examining the overlap in how the AI processes questions in multiple languages. For example, the prompt “the opposite of small is” in different languages gets routed through the same features for “the concepts of smallness and oppositeness.” The point about fake reasoning dovetails with Apollo Research’s studies into Claude Sonnet 3.7’s ability to detect an ethics test.
When asked to explain its reasoning, Claude “will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps,” Anthropic found.

SEE: Microsoft’s AI cybersecurity offering will debut two personas, Researcher and Analyst, in early access in April.

Generative AI isn’t magic; it’s sophisticated computing that follows rules. However, its black-box nature means it can be difficult to determine what those rules are and under what conditions they arise. For example, Claude showed a general hesitation to provide speculative answers but might process its end goal faster than it provides output: “In a response to an example jailbreak, we found that the model recognized it had been asked for dangerous information well before it was able to gracefully bring the conversation back around,” the researchers found.

How does an AI trained on words solve math problems?

I mostly use ChatGPT for math problems, and the model tends to come up with the right answer despite some hallucinations in the middle of the reasoning. So, I’ve wondered about one of Anthropic’s points: Does the model think of numbers as a sort of letter? Anthropic might have pinpointed exactly why models behave like this: Claude follows multiple computational paths at the same time to solve math problems. “One path computes a rough approximation of the answer and the other focuses on precisely determining the last digit of the sum,” Anthropic wrote. So, it makes sense if the output is right but the step-by-step explanation isn’t.

Claude’s first step is to “parse out the structure of the numbers,” finding patterns much as it would find patterns in letters and words. Claude can’t externally explain this process, just as a human can’t tell which of their neurons are firing; instead, Claude will produce an explanation of the way a human would solve the problem.
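Anthropic’s description of parallel addition paths — a rough magnitude estimate alongside an exact last-digit computation — can be illustrated with a toy sketch in ordinary code. This is only an analogy, not Anthropic’s actual learned circuitry; the function name and the exact decomposition are invented for illustration.

```python
# Toy analogy of the two-path addition Anthropic describes (not the
# model's actual circuits): one path estimates the answer's magnitude
# from the leading digits, the other computes the exact last digit,
# and the two results are combined.

def add_via_parallel_paths(a: int, b: int) -> int:
    # Path 1: coarse magnitude estimate, ignoring the last digits
    approx = (a // 10) * 10 + (b // 10) * 10
    # Path 2: exact last digit of the sum, plus the carry it produces
    digit_sum = a % 10 + b % 10
    last_digit = digit_sum % 10
    carry = 10 if digit_sum >= 10 else 0
    # Combining both paths reproduces the exact sum
    return approx + carry + last_digit

print(add_via_parallel_paths(36, 59))  # → 95
```

The point of the analogy is that the combined answer is exact even though neither path alone “knows” the full sum — much as Claude’s output can be correct while its narrated step-by-step explanation describes a different procedure.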
The Anthropic researchers speculated this is because the AI is trained on explanations of math written by humans.

What’s next for Anthropic’s LLM research?

Interpreting the “circuits” can be very difficult because of the density of the generative AI’s processing. It took a human a few hours to interpret circuits produced by prompts of just “tens of words,” Anthropic said. The researchers speculate it might take AI assistance to interpret how generative AI works. Anthropic said its LLM research is intended to ensure AI aligns with human ethics; to that end, the company is looking into real-time monitoring, model character improvements, and model alignment. source

'AI Biology' Research: Anthropic Explores How Claude 'Thinks' Read More »

Apple’s Next Big Thing is AI on Smart Watches

Apple Watch Series 10. Credit: Apple

Apple’s future smartwatches may include cameras to enable AI features such as translating signs between languages. Bloomberg’s Mark Gurman reported on the possibility on March 23, saying a camera would be added to the Apple Watch to enable features comparable to those that debuted on the iPhone 16. Meanwhile, consumers filed a class-action lawsuit in mid-March alleging that many Apple Intelligence features supposed to be enabled by AI in the digital assistant Siri were never delivered.

A camera on the Apple Watch would add AI-enabled ways to interact with the real world

According to Bloomberg, Apple may add the camera and AI features to its line of smartwatches by 2027. The cameras would sit inside the display area on the standard watch, and next to the digital crown and button on the side of the Apple Watch Ultra. If the AI features on the watch are intended to be similar to the Apple Intelligence features enabled by visual intelligence on the iPhone 16, they could:

- Summarize and copy text from images captured by the camera, including translating between languages.
- Automatically open a prompt to add email addresses or phone numbers to contacts if you see them in the real world.
- Search Google for where to buy an item directly from a photo of that item.
- Ask ChatGPT to explain unfamiliar diagrams or notes.

Gurman also reported that Apple is exploring the idea of adding a camera to AirPods.

SEE: AI literacy, conflict mitigation, and adaptability are skills on the rise in today’s workplace, according to LinkedIn.

Apple’s AI division shaken up

Apple has historically taken a cautious approach to AI adoption. We had predicted this measured approach would allow the company to introduce generative AI in a way that differentiates its ecosystem. However, the rollout has been gradual, largely consisting of incorporating widely established generative AI tools into its devices. An upgraded version of Siri — expected to understand natural language more intuitively — has reportedly been delayed until 2026. Behind the scenes, Apple removed John Giannandrea as head of the AI division and appointed Vision Pro executive Mike Rockwell to lead the team. source

Apple’s Next Big Thing is AI on Smart Watches Read More »