OpenAI’s new voice AI model gpt-4o-transcribe lets you add speech to your existing text apps in seconds

OpenAI’s voice AI models have gotten the company into trouble before with actor Scarlett Johansson, but that isn’t stopping it from continuing to advance its offerings in this category. Today, the ChatGPT maker unveiled three new proprietary voice models: gpt-4o-transcribe, gpt-4o-mini-transcribe and gpt-4o-mini-tts. These models will initially be available through OpenAI’s application programming interface (API) for third-party software developers to build their own apps. They will also be available on a custom demo site, OpenAI.fm, that individual users can access for limited testing and fun.

Moreover, the gpt-4o-mini-tts model voices can be customized from several presets via text prompt to change their accents, pitch, tone and other vocal qualities, including conveying whatever emotions the user asks for. That should go a long way toward addressing concerns that OpenAI is deliberately imitating any particular person’s voice (the company previously denied that was the case with Johansson, but pulled down the ostensibly imitative voice option anyway). Now it’s up to the user to decide how they want their AI voice to sound when speaking back.

In a demo with VentureBeat delivered over a video call, OpenAI technical staff member Jeff Harris showed how, using text alone on the demo site, a user could get the same voice to sound like a cackling mad scientist or a zen, calm yoga teacher.

Discovering and refining new capabilities within GPT-4o base

The models are variants of the existing GPT-4o model OpenAI launched back in May 2024, which currently powers the ChatGPT text and voice experience for many users. The company took that base model and post-trained it with additional data to make it excel at transcription and speech. The company didn’t specify when the models might come to ChatGPT.
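The text-prompt voice steering described above might look something like the sketch below. This is hypothetical: the `instructions` parameter and the preset voice name `"alloy"` are assumptions based on the announcement, so check OpenAI's API reference before relying on them.

```python
# Hypothetical sketch: steering gpt-4o-mini-tts with a plain-text style prompt.
# The helper only assembles the keyword arguments you would pass to
# client.audio.speech.create() in the OpenAI Python SDK.

def tts_request(text: str, style: str, voice: str = "alloy") -> dict:
    """Assemble kwargs for a text-to-speech request with a style instruction."""
    return {
        "model": "gpt-4o-mini-tts",
        "voice": voice,                 # preset voice (assumed name)
        "input": text,                  # what the voice should say
        "instructions": style,          # how it should sound, in plain text
    }

req = tts_request("Welcome back!", "cackling mad scientist, fast and manic")
print(req["instructions"])
```

Swapping the `style` string for "calm, soothing yoga teacher" is, per the demo described above, all it should take to change the delivery.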
“ChatGPT has slightly different requirements in terms of cost and performance trade-offs, so while I expect they will move to these models in time, for now, this launch is focused on API users,” Harris said.

The new gpt-4o-transcribe family is meant to supersede OpenAI’s two-year-old open-source speech-to-text model, Whisper, offering lower word error rates across industry benchmarks and improved performance in noisy environments, with diverse accents, and at varying speech speeds across 100+ languages. The company posted a chart on its website showing just how much lower the gpt-4o-transcribe models’ error rates are at identifying words across 33 languages compared to Whisper, with an impressively low 2.46% in English.

“These models include noise cancellation and a semantic voice activity detector, which helps determine when a speaker has finished a thought, improving transcription accuracy,” said Harris.

Harris told VentureBeat that the new gpt-4o-transcribe model family is not designed to offer “diarization,” the capability to label and differentiate between different speakers. Instead, it is designed primarily to receive one (or possibly multiple) voices as a single input channel and respond to all inputs with a single output voice in that interaction, however long it takes.

The company is also hosting a competition for the general public to find the most creative examples of using its demo voice site OpenAI.fm and share them online by tagging the @openAI account on X. The winner will receive a custom Teenage Engineering radio with the OpenAI logo, which OpenAI head of product, platform Olivier Godement said is one of only three in the world.

An audio applications gold mine

The enhancements make the new models particularly well-suited for applications such as customer call centers, meeting note transcription, and AI-powered assistants.
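For the transcription side, a minimal call through the OpenAI Python SDK might look like the sketch below. The helper only assembles the request; the commented-out lines show how it would be used (they require the `openai` package, an API key, and a real audio file, and the file name `meeting.wav` is a placeholder).

```python
# Hypothetical sketch: transcribing an audio file with gpt-4o-transcribe.

def transcription_kwargs(model: str = "gpt-4o-transcribe") -> dict:
    """Keyword arguments for client.audio.transcriptions.create()."""
    return {
        "model": model,
        "response_format": "text",   # plain text instead of JSON
    }

# Live usage (not run here):
# from openai import OpenAI
# client = OpenAI()
# with open("meeting.wav", "rb") as audio:
#     text = client.audio.transcriptions.create(file=audio, **transcription_kwargs())
# print(text)
```

Swapping the model name to `gpt-4o-mini-transcribe` would trade some accuracy for the lower price listed below.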
Impressively, the company’s newly launched Agents SDK from last week also allows developers who have already built apps atop its text-based large language models, like the regular GPT-4o, to add fluid voice interactions with only about “nine lines of code,” according to a presenter during an OpenAI YouTube livestream announcing the new models (embedded above). For example, an e-commerce app built atop GPT-4o could now respond in speech to turn-based user questions like “Tell me about my last orders” with just seconds of code tweaking to add these new models.

“For the first time, we’re introducing streaming speech-to-text, allowing developers to continuously input audio and receive a real-time text stream, making conversations feel more natural,” Harris said. Still, for developers looking for low-latency, real-time AI voice experiences, OpenAI recommends using its speech-to-speech models in the Realtime API.

Pricing and availability

The new models are available immediately via OpenAI’s API, with pricing as follows:

• gpt-4o-transcribe: $6.00 per 1M audio input tokens (~$0.006 per minute)
• gpt-4o-mini-transcribe: $3.00 per 1M audio input tokens (~$0.003 per minute)
• gpt-4o-mini-tts: $0.60 per 1M text input tokens, $12.00 per 1M audio output tokens (~$0.015 per minute)

However, the models arrive at a time of fiercer-than-ever competition in the AI transcription and speech space. Dedicated speech AI firms such as ElevenLabs offer their new Scribe model, which supports diarization and boasts a similarly low (though not as low) error rate of 3.3% in English; it is priced at $0.40 per hour of input audio (about $0.0067 per minute, roughly comparable to gpt-4o-transcribe). Another startup, Hume AI, offers a new model, Octave TTS, with sentence-level and even word-level customization of pronunciation and emotional inflection, based entirely on the user’s instructions rather than any preset voices.
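As a sanity check on the price list above, a minimal cost estimator (the per-million-token prices are hardcoded from this article; the per-minute figures are OpenAI's own approximations, so treat minute-level math as rough):

```python
# Back-of-the-envelope cost estimator for the audio-token prices listed above.

PRICE_PER_M_TOKENS = {                  # USD per 1M tokens
    "gpt-4o-transcribe": 6.00,          # audio input
    "gpt-4o-mini-transcribe": 3.00,     # audio input
    "gpt-4o-mini-tts": 12.00,           # audio output
}

def audio_cost(model: str, tokens: int) -> float:
    """Cost in USD for a given number of audio tokens on the given model."""
    return PRICE_PER_M_TOKENS[model] * tokens / 1_000_000

# 1M audio input tokens on gpt-4o-transcribe -> $6.00
print(audio_cost("gpt-4o-transcribe", 1_000_000))
```

At the quoted ~$0.006 per minute, an hour of gpt-4o-transcribe audio works out to roughly $0.36, which is the comparison point for ElevenLabs' $0.40-per-hour Scribe pricing mentioned below.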
The pricing of Octave TTS isn’t directly comparable: there is a free tier offering 10 minutes of audio, and costs increase from there.

Meanwhile, more advanced audio and speech models are also coming to the open source community, including one called Orpheus 3B, which is available under a permissive Apache 2.0 license, meaning developers don’t have to pay licensing costs to run it, provided they have the right hardware or cloud servers.

Industry adoption and early results

According to testimonials shared by OpenAI with VentureBeat, several companies have already integrated OpenAI’s new audio models into their platforms, reporting significant improvements in voice AI performance. EliseAI, a company focused on property management automation, found that OpenAI’s text-to-speech model enabled more natural and


Agentic AI is Coming — Are We Ready?

As I was writing this article, it was perhaps not so coincidental that I took a break and made a phone call to a home appliance customer support line for a microwave that we owned. I soon found myself trapped in an automated agentic AI phone system with no way out and no way to reach a human agent. I finally gave up and called a local appliance company, where a human salesman gave me the answer that I needed.

The experience is common. Millions of consumers experience frustration with automated phone systems and chat services that have no way of routing them to the person (or function) that can help them resolve their issues. Companies know this, but it’s not stopping them from adopting agentic AI at breakneck speed, as evidenced by a projected market growth for agentic AI of 43.8% CAGR (compound annual growth rate) between now and 2034. It’s all the more reason for CIOs to get involved early with agentic AI to make sure that it works for people as well as for systems.

Just What Is Agentic AI and How Does It Work?

In a 2024 interview with the Harvard Business Review, Enver Cetin, an AI expert at global experience engineering firm Ciklum, said, “[Agentic AI] refers to AI systems and models that can act autonomously to achieve goals without the need for constant human guidance. The agentic AI system understands what the goal or vision of the user is and the context to the problem they are trying to solve.”

Agentic AI uses a combination of machine learning (ML), natural language processing (NLP) and automation to do this. Its mission is to make decisions and act on them. Companies can design agentic AI systems that require a final human authorization for some decisions, or they can make the agentic AI completely autonomous, so it makes decisions on its own.
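The "final human authorization" design mentioned above is essentially an approval gate between the agent's proposal and its execution. A minimal sketch of the pattern, with all names and risk categories purely illustrative:

```python
# Illustrative human-in-the-loop approval gate for an agentic system:
# low-risk actions auto-execute; everything else is queued for a person.
from dataclasses import dataclass, field


@dataclass
class Action:
    description: str
    risk: str  # "low" or "high" (illustrative categories)


@dataclass
class ApprovalGate:
    autonomous_risks: set = field(default_factory=lambda: {"low"})
    pending: list = field(default_factory=list)  # awaiting human review

    def submit(self, action: Action) -> str:
        if action.risk in self.autonomous_risks:
            return f"executed: {action.description}"
        self.pending.append(action)
        return f"awaiting human approval: {action.description}"


gate = ApprovalGate()
print(gate.submit(Action("reorder office supplies", "low")))
print(gate.submit(Action("refund $10,000 to customer", "high")))
```

Making the system "completely autonomous" in this sketch is just widening `autonomous_risks`, which is why the governance questions raised below matter so much.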
Most agentic AI adoptions are being sponsored and funded by end-user departments, which suggests that IT may or may not be in on the initial evaluations and buy decisions. Gartner cites an early example of how agentic AI can be deployed in retail: “AI-enabled machine customers — or nonhuman economic actors that obtain goods and services in exchange for payment — are examples of increasingly common intelligent agents. In the near future, they will make optimized decisions on behalf of human customers based on preset rules and will quickly evolve toward greater autonomy and inferring of needs.”

So, knowing that agentic AI is coming, and that IT might be the last to know about an agentic AI buy decision, what should CIOs be doing?

Key CIO Points for Agentic AI

Work up IT’s agentic AI strategy now. Agentic AI has enormous potential. It can automate rote business operations and decision-making, and IT needs to strategize for it. In a sense, what agentic AI can do has already been known in previous incarnations, such as the automated loan-decisioning software that has existed and functioned capably in bank lending departments for decades. However, the needle is now moving toward more autonomy. Business users will decide where they want to use agentic AI, but it will be IT’s responsibility to ask the questions about system and process integration, governance and security that will enable agentic AI to be used safely and to best advantage.

In this environment, an immediate CIO goal should be to participate with users in agentic AI strategy discussions so that “best use” business cases can be identified. Then there should be a collaborative strategy between users and IT that takes into account not only business process streamlining and automation, but also process exception handling, process and system integration, user and IT training, security and governance.
Although agentic AI will be driven by users, this is no time for IT to take a back seat.

Discuss security and failover. Your sales department might fund and adopt agentic AI to autonomously execute the mechanics of product ordering, but what happens if a bad actor penetrates the agentic AI and locks it down for ransom? What if that bad actor injects malware or faulty algorithms that compromise and endanger the function? The sales team will quickly pivot to IT to fix these issues, so CIOs should be proactively querying sales and agentic AI vendors about the types of security that come with the agentic AI. There should be a company review of the AI to ensure that it complies with corporate security and governance standards. Questions should also be asked as to whether there is a failover to a human agent if the agentic AI fails or sputters. There should be a defined failover procedure in the company disaster recovery plan that provides for a human’s ability to override or take over from agentic AI if that becomes necessary.

Be prepared for project inclusion, whether you want it or not. IT may not be involved in initial agentic AI purchase decisions, but it will surely be pulled into agentic AI projects, because the AI won’t get very far if it isn’t integrated with other corporate systems. Accordingly, IT should ensure that user-IT agentic AI project discussions focus on system integration and on the clear definition of a project test bed for agentic AI integration into business processes themselves. A successful business process integration addresses user training and readiness for a new technology, and what will happen if the agentic AI fails or begins to make poor decisions.
CIOs shouldn’t shy away from insisting that these process-oriented elements are tasked in agentic AI projects, because if anything goes wrong after the technology is placed into production, it will likely be blamed on “the system.”


DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI

Chinese AI startup DeepSeek has quietly released a new large language model that’s already sending ripples through the artificial intelligence industry, not just for its capabilities but for how it’s being deployed. The 641-gigabyte model, dubbed DeepSeek-V3-0324, appeared on AI repository Hugging Face today with virtually no announcement, continuing the company’s pattern of low-key but impactful releases. What makes this launch particularly notable is the model’s MIT license, making it freely available for commercial use, and early reports that it can run directly on consumer-grade hardware, specifically Apple’s Mac Studio with the M3 Ultra chip.

“The new DeepSeek-V3-0324 in 4-bit runs at > 20 tokens/second on a 512GB M3 Ultra with mlx-lm!” wrote AI researcher Awni Hannun on social media. While the $9,499 Mac Studio might stretch the definition of “consumer hardware,” the ability to run such a massive model locally is a major departure from the data center requirements typically associated with state-of-the-art AI.

DeepSeek’s stealth launch strategy disrupts AI market expectations

The 685-billion-parameter model arrived with no accompanying whitepaper, blog post, or marketing push: just an empty README file and the model weights themselves. This approach contrasts sharply with the carefully orchestrated product launches typical of Western AI companies, where months of hype often precede actual releases.

Early testers report significant improvements over the previous version. AI researcher Xeophon proclaimed in a post on X.com: “Tested the new DeepSeek V3 on my internal bench and it has a huge jump in all metrics on all tests.
It is now the best non-reasoning model, dethroning Sonnet 3.5.”

This claim, if validated by broader testing, would position DeepSeek’s new model above Claude Sonnet 3.5 from Anthropic, one of the most respected commercial AI systems. And unlike Sonnet, which requires a subscription, DeepSeek-V3-0324’s weights are freely available for anyone to download and use.

How DeepSeek V3-0324’s breakthrough architecture achieves unmatched efficiency

DeepSeek-V3-0324 employs a mixture-of-experts (MoE) architecture that fundamentally reimagines how large language models operate. Traditional models activate their entire parameter count for every task, but DeepSeek’s approach activates only about 37 billion of its 685 billion parameters for any given task. This selective activation represents a major shift in model efficiency: by activating only the most relevant “expert” parameters for each specific task, DeepSeek achieves performance comparable to much larger fully activated models while drastically reducing computational demands.

The model incorporates two additional technologies: multi-head latent attention (MLA) and multi-token prediction (MTP). MLA enhances the model’s ability to maintain context across long passages of text, while MTP generates multiple tokens per step instead of the usual one-at-a-time approach. Together, these innovations boost output speed by nearly 80%.

Simon Willison, a developer tools creator, noted in a blog post that a 4-bit quantized version reduces the storage footprint to 352GB, making it feasible to run on high-end consumer hardware like the Mac Studio with the M3 Ultra chip. This represents a potentially significant shift in AI deployment.
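The selective-activation idea behind MoE can be illustrated with a toy router: a gate scores every expert, but only the top-k actually run, analogous to DeepSeek activating roughly 37B of its 685B parameters per token. Everything here (scores, expert count, k) is made up for illustration and is not DeepSeek's actual routing code.

```python
# Toy mixture-of-experts routing: score all experts, run only the top-k.
import math


def softmax(xs):
    """Normalize gate scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def route(gate_scores, k=2):
    """Indices of the k experts with the highest gate scores."""
    return sorted(range(len(gate_scores)), key=lambda i: -gate_scores[i])[:k]


scores = [0.1, 2.3, -0.5, 1.7, 0.0]   # one gate score per expert
weights = softmax(scores)             # how much each active expert contributes
active = route(scores, k=2)           # only these experts compute anything
print("active experts:", active)      # -> [1, 3]
```

With 2 of 5 experts active, roughly 40% of the "parameters" do work per input; at DeepSeek's scale the ratio is closer to 37/685, which is where the efficiency gains come from.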
While traditional AI infrastructure typically relies on multiple Nvidia GPUs consuming several kilowatts of power, the Mac Studio draws less than 200 watts during inference. This efficiency gap suggests the AI industry may need to rethink assumptions about infrastructure requirements for top-tier model performance.

China’s open source AI revolution challenges Silicon Valley’s closed garden model

DeepSeek’s release strategy exemplifies a fundamental divergence in AI business philosophy between Chinese and Western companies. While U.S. leaders like OpenAI and Anthropic keep their models behind paywalls, Chinese AI companies increasingly embrace permissive open-source licensing.

This approach is rapidly transforming China’s AI ecosystem. The open availability of cutting-edge models creates a multiplier effect, enabling startups, researchers, and developers to build upon sophisticated AI technology without massive capital expenditure. This has accelerated China’s AI capabilities at a pace that has shocked Western observers.

The business logic behind this strategy reflects market realities in China. With multiple well-funded competitors, maintaining a proprietary approach becomes increasingly difficult when rivals offer similar capabilities for free. Open-sourcing creates alternative value pathways through ecosystem leadership, API services, and enterprise solutions built atop freely available foundation models.

Even established Chinese tech giants have recognized this shift. Baidu announced plans to make its Ernie 4.5 model series open source by June, while Alibaba and Tencent have released open-source AI models with specialized capabilities. This movement stands in stark contrast to the API-centric strategy employed by Western leaders.

The open-source approach also addresses unique challenges faced by Chinese AI companies.
With restrictions on access to cutting-edge Nvidia chips, Chinese firms have emphasized efficiency and optimization to achieve competitive performance with more limited computational resources. This necessity-driven innovation has now become a potential competitive advantage.

DeepSeek V3-0324: The foundation for an AI reasoning revolution

The timing and characteristics of DeepSeek-V3-0324 strongly suggest it will serve as the foundation for DeepSeek-R2, an improved reasoning-focused model expected within the next two months. This follows DeepSeek’s established pattern, in which its base models precede specialized reasoning models by several weeks. “This lines up with how they released V3 around Christmas followed by R1 a few weeks later. R2 is rumored for April so this could be it,” noted Reddit user mxforest.

The implications of an advanced open-source reasoning model cannot be overstated. Current reasoning models like OpenAI’s o1 and DeepSeek’s R1 represent the cutting edge of AI capabilities, demonstrating unprecedented problem-solving abilities in domains from mathematics to coding. Making this technology freely available would democratize access to AI systems currently limited to those with substantial budgets.

The potential R2 model arrives amid significant revelations about


Salesforce Launches Agentforce: What Technology Leaders Need To Know

It’s safe to run your AI agents (they’re mostly chatbots, case summarizers, or simple text generators today) on the Agentforce chassis, as long as you run them inside your Salesforce application domain.

We spent two days in San Francisco at Salesforce’s TDX developer conference. Together with 5,000 Salesforce developers and administrators (Trailblazers, rebranding as Agentblazers), we touched software, attended classes, and spoke with executives, including President and CMO Ariel Kelman; EVP of AI Engineering Jayesh Govindarajan; and SVP of Strategic Partnerships and Business Development Nick Johnston. We came away impressed with the CRM giant’s commitment to agent-powered workflows and cautiously optimistic that the “no software” company can host many, if not all, of the AI agents running in the Salesforce ecosystem.

We liked the empowerment angle. We don’t love the ham-handed labeling of Agentforce as “the digital labor platform,” because what we saw were agents doing mundane work that empowers people, not automation that eliminates jobs. In subsequent sessions with chief AI officers in retail, banking, and hospitality, we learned that they, too, believe generative AI (genAI) is a power tool, not a digital worker. Here’s what CIOs and other technology executives need to know:

Salesforce is massively interested in hosting your AI agents. If you’re a Salesforce shop, we think you should try it out and see how the platform works for you. Salesforce is using a monthly release cycle to rapidly improve the product. Features like choose-your-own language model, PII data masking, prompt templates, zero-copy data, vector embeddings, agent benchmarking tools to build a business case, and an agent lifecycle toolkit are available today in the Agentforce platform.

Most agents are repaved task paths, not automated workflows. Bostonians believe their crazy roads were paved over cow paths. It turns out that paved-over cow paths are better than paths knee-deep in mud.
It turns out agents today repave existing manual processes. That’s OK. There are dozens of scenarios in the Salesforce ecosystem where an AI agent can empower an employee, do the grunt work, and maybe give some advice in a text response. Of course, if that advice comes through a customer chatbot, then fewer calls may flow into the contact center, reducing the number of reps needed. But does that make the agent digital labor? Nah. It’s just an application to help customers serve themselves. Don’t let the $2/call pricing model confuse you; that’s just a value-oriented negotiating stance on the cost of the “equipment.”

If you build on Agentforce, you’re committing to Data Cloud. This is the biggest strategic play we see Salesforce making, and your biggest risk of agent and knowledge-asset lock-in. Salesforce, along with ServiceNow, Microsoft, Oracle, SAP, Workday, Deloitte (shockingly), and others, wants your proprietary knowledge assets as well as your AI agents. Salesforce could already have your front-office data, or it could get it through zero-copy retrieval from an Amazon, Databricks, Google, Microsoft, or Snowflake database. But genAI and knowledge graphs have a symbiotic relationship, which means Agentforce also needs your proprietary sales manuals, product literature, marketing materials, process methods, and more in order to generate useful output. That makes Salesforce Data Cloud a vital component of AI agents, and hence a strategic and expensive commitment for you to make.

What Technology Executives Should Do

If you use Salesforce, ask a small team to investigate the boundaries of common sense for building and operating AI agents on Agentforce. For ideas, check out Salesforce’s AI library, or try one of the agent templates, for sales coaching or case summaries, for example. Test these out:

Build a prompt template for common retrieval patterns.
For example, if your team is constantly asking for the same data, give them a chat interface prepopulated with context and prompt suggestions. That’s a prompt template.

Build a simple agent to do something not yet in the product and make it available with a button. More and better summary tools, personalized emails, or simulated coaching for the next sales call are good candidates.

Load documents into Data Cloud so they’re available to an agent through vector embeddings. If you load all your sales training material, for example, this could power your sales coach agent. One executive at a healthcare provider network we spoke with is using an agent like this so that a clinician facing a tough patient conversation can get some coaching in the context of the diagnosis.

If you want to dig deeper into the CIO’s role in AI agent success, please reach out to me by scheduling a guidance session or an inquiry via email: [email protected].
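The prompt-template idea described above is, at its core, a reusable prompt with the stable context baked in and slots for the variable bits. A generic sketch of the pattern (this is plain Python, not Salesforce/Agentforce code, and every name in it is illustrative):

```python
# Illustrative prompt template: fixed context plus user-supplied slots.
from string import Template

SALES_SUMMARY = Template(
    "You are a sales assistant.\n"
    "Account: $account\n"
    "Quarter: $quarter\n"
    "Task: summarize open opportunities and flag any stalled deals."
)


def render(account: str, quarter: str) -> str:
    """Fill the template so users only type the account and quarter."""
    return SALES_SUMMARY.substitute(account=account, quarter=quarter)


print(render("Acme Corp", "Q2 FY25"))
```

The point of the pattern is that the team stops retyping (and subtly varying) the same retrieval prompt; the template enforces the context and phrasing that are known to work.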


Who Makes the Best Citizen Developers?

Low-code/no-code platforms have given rise to the “citizen developer,” often a power user of tools such as Microsoft Excel. In other cases, this person is someone who needs an immediate solution, has an idea in mind, and isn’t afraid to try something new to turn that idea into reality.

Citizen developers aren’t a threat to professional developers, because they don’t understand software architecture and the hand-written code it would take to customize an app. They’re simply less expert members of the workforce who happen to understand the context of a task, workflow or technology, and are motivated to make improvements on their own. In many cases, citizen developers aren’t left to their own devices: they’re using wizards and visual tools instead of writing lines of code. In some organizations, citizen development has been enabled by IT and developers in a way that benefits both professional and citizen developers.

For example, a citizen developer might build a solution that eventually needs a professional developer’s expertise to take it to the next level. The beauty of the center-of-excellence approach is that professional developers can spend more time on difficult problems while citizen developers solve the simple ones. If the organization has standardized on a platform, then handoffs between citizen developers and professional developers are seamless. It is common, however, for enterprises to use more than one low-code/no-code solution.

The “best” citizen developers have some traits in common, though their roles may differ. A proactive mindset and a love of learning help.

Traits of an Effective Citizen Developer

Brett Smith, distinguished software developer at data and AI provider SAS, believes effective citizen developers are usually subject matter experts on the business problem and possess a basic understanding of programming concepts.
They are also problem solvers who are self-motivated and have a growth mindset. Notably, they can learn new skills quickly and are not afraid to experiment with new technologies.

“Citizen developers have a deep understanding of the business problem and the domain. They [can] communicate effectively with IT teams, which helps to ensure the solutions they develop are aligned with the needs of the business,” says Smith. “It’s critical that enterprises provide citizen developers with the tools and resources they need to be successful. This includes access to training and support, as well as creating a culture that encourages innovation and experimentation.”

Nick Vlku, VP of product growth at end-to-end AI search and discovery platform provider Algolia, says citizen developers hold different roles, such as product managers, project managers, designers and analysts, to name a few. One common trait is that they’re intensely solution-oriented, with an intrinsic drive to tackle business challenges, he adds.

“I’ve witnessed this firsthand, like watching a non-technical product manager who taught themselves SQL simply because they needed better answers to their data questions,” says Vlku. “These individuals are natural problem solvers who take initiative. Rather than waiting for help, they actively search for no-code solutions or teach themselves low-code approaches they find online.”

Their ability to focus on solving the problem at hand will become even more valuable with the rise of AI-assisted development tools and coding applications, Vlku says. “Citizen developers will naturally incorporate these advances as additional tools to help them achieve solutions more efficiently,” he says.

However, the enterprise also has a role to play.
Vlku says enterprises should actively support and cultivate citizen developers, as they represent highly valuable employees who prioritize efficient problem-solving.

“There’s a notable challenge: these individuals often undervalue their technical capabilities, placing software engineering on a pedestal that makes them doubt their own abilities or feel uncomfortable embracing their problem-solving approaches,” says Vlku. “Organizations need to take specific actions to nurture this talent.”

First, enterprises should explicitly recognize and reward this initiative during performance reviews, acknowledging the solutions delivered and the innovative approaches used to achieve them. Second, they should streamline access to necessary tools and platforms.

“While determined citizen developers might find ways around organizational barriers, removing these obstacles upfront will encourage more employees to step into this role,” says Vlku. “This support is particularly important because citizen developers tend to doubt their technical legitimacy despite their demonstrated ability to deliver solutions. By creating an environment that actively validates and enables their efforts, organizations can help overcome this self-doubt and expand their pool of effective citizen developers.”

Karl Threadgold, managing director at Oracle NetSuite provider Threadgold Consulting, says the most effective citizen developers tend to have four defining traits: a problem-solving mindset, a strong understanding of business operations, a willingness to collaborate with IT, and a hunger for learning.

“The most successful citizen developers deeply understand their organization’s workflows, pain points and inefficiencies. They don’t just automate processes for the sake of it; they focus on solving real business challenges,” says Threadgold.
“Rather than working in isolation, they engage with IT teams to ensure their solutions are scalable, secure and aligned with governance policies. Given how quickly no-code and low-code tools are evolving, top citizen developers continuously upskill to stay ahead.”

The reason successful citizen developers outperform their peers is that they create solutions that are both technically sound and strategically relevant. “They don’t just build the bare minimum,” says Threadgold. “They go above and beyond and build what the organization needs to thrive. Their ability to communicate with IT teams also helps prevent shadow IT issues, ensuring their applications integrate seamlessly into the broader tech landscape.”

The enterprise also has a role to play here: enabling this broader base of problem-solvers. “Many enterprises still take a passive approach to citizen development. [They assume] that providing access to low-code tools is enough — it’s not,” says Threadgold. “They need to provide clear training structures, chances for people to work alongside experienced developers, and clear collaboration frameworks in place.”


Consumers React To Tariffs With Concern And Caution

“Liberation” Or Turbulence?

April 2 will be, according to the US administration, Liberation Day! While that does have all the makings of a Hollywood potboiler, in reality it’s a little more sedate. Well, only just a little more. On April 2, the US is supposed to introduce a slew of new tariffs, the details of which are shrouded in a fog of uncertainty and confusion that is now par for the course. In a brief preview of what liberation might look like, the US government has already declared a 25% tariff on imported cars. Markets are in turmoil; the Fed has cut growth rate forecasts and declared this a time of remarkably high economic uncertainty; and blue-chip companies are downgrading their financial outlooks. It’s not quite Hollywood, but there’s drama nevertheless!

Tariffs Are Not A Crowd-Pleaser

To understand how consumers are reacting to the constant chaos of tariffs, we polled Forrester’s CommunityVoices Market Research Online Community, composed of online adults from the US and Canada. As you might expect, opinions are polarized along party and national lines, but fault lines are beginning to emerge. Ninety-one percent of Democrats oppose the tariffs, while 43% of Republicans support them; 4% of Democrats support the tariffs, and 26% of Republicans oppose them. This is a significant finding: Democrats are far more vocal in their opposition than Republicans are in their approval, suggesting that, for many, pocketbook concerns (of which there are plenty) are beginning to outweigh partisan convictions. Those without a political affiliation generally view these tariffs as a bad idea; a majority of independents (55%) oppose the tariffs, while only 24% support them. Canada has borne much of the US government’s ire and is, not surprisingly, vehemently opposed to the US tariffs: 91% of Canadian consumers oppose them and 9% are neutral.
Consumers Brace For Impact

In the face of uncertainty and imminent price increases after four years of sustained inflation, people are bracing for what is yet to come. As a result, consumers are (in their own words):

Reducing spending: “Cut back on spending on unnecessary items”; “Buying less often and more affordable products”

Saving more: “Saving, saving, saving; following a strict budget, only purchasing the necessities, and living below my means”; “I am trying to save more money, as the stock market keeps losing money due to the whiplash and uncertainty of the current administration”

Stocking up: “I have purchased items in bulk to stock up”; “Stocking up now on things that I need before the prices increase”

Avoiding major purchases: “I am currently avoiding major purchases, such as a car, due to already high prices”; “Holding off on large purchases”

Growing their food: “Growing my own food — getting chickens”; “Purchasing from farmers’ markets; filling my freezer now and preparing to plant a large garden”

Monitoring the situation closely: “At this point, I am just paying close attention to the news and what is going on”; “I am waiting to see what happens so I can react appropriately”

Canadian Consumers Push Back

An overwhelming majority of Canadians are miffed about the tariffs (among other things, such as the threat of annexation) and view these developments negatively (“The US will alienate its closest ally and trading partner”).
There remains a small minority with hope of an amicable resolution: “We are neighbors with the US and always will be; it makes more sense to work together” and “There needs to be a compromise.” In the meantime, they, like their southern neighbors, are changing their buying behavior by:

Avoiding US products: “Boycott products made in USA”; “I will no longer buy anything from American companies”; “I’m dumping the services I use that are American; that includes giants like Amazon”

Buying Canadian products: “Buy all-Canadian products”; “I will purchase more locally made products and use services from my country”

Being financially cautious: “I plan on cutting back on spending everywhere with everything”; “I will be buying fewer items that are not a necessity”; “Buying things in bulk and on sale”; “When something storable is on special, I will buy more”; “I will be trying to grow vegetables in my house and buying only food items”

To better manage your brand and business through this period of uncertainty and shifting consumer behaviors, please read our report: Consumer Marketing, CX, And Digital Leaders: How To Thrive Through Volatility (US). (Tyler Castro contributed to the analyses and research for this post.)

Follow my work: Go to my Forrester bio and click “Follow.” Chat with me: If you are a Forrester client interested in discussing these topics, please schedule time with me for an inquiry or guidance session. Plan a session: If you are a Forrester client looking to host a strategy session on a related topic, please contact your account team or email me at [email protected].

Consumers React To Tariffs With Concern And Caution Read More »

Chamber Tells Justices To Review Duke Energy Monopoly Suit

By Matthew Perlman (March 27, 2025, 8:54 PM EDT) — The U.S. Chamber of Commerce urged the U.S. Supreme Court on Thursday to review a decision that revived a case accusing Duke Energy of squeezing a rival out of the market in North Carolina, saying the appeals court was wrong to recognize a “Frankenstein’s monster” theory of harm….

Chamber Tells Justices To Review Duke Energy Monopoly Suit Read More »

6. Religious switching into and out of Judaism

Terminology

Throughout this report, religious switching refers to a change between the religious group in which a person says they were raised (during their childhood) and their religious identity now (in adulthood). The rates of religious switching are based on responses to two survey questions we asked of adults ages 18 and older:

“What is your current religion, if any?”

“Thinking about when you were a child, in what religion were you raised, if any?”

The responses to these two questions allow us to calculate what percentage of the public has left a religious group (or “switched out”) and what percentage has entered (or “switched in”). This kind of switching can take place without any formal rite or ceremony. We have analyzed switching into and out of five widely recognized, worldwide religions to allow for consistent comparisons around the globe. Specifically, this report analyzes change between the following groups: Christianity, Islam, Judaism, Buddhism, Hinduism, other religions, religiously unaffiliated adults, and those who did not answer the question. For example, someone who was raised Buddhist but now identifies as Christian would be considered to have switched religions – as would someone who was raised Christian but is now unaffiliated. However, switching within a religious tradition, such as between Catholicism and Protestantism, is not captured in this report. (Refer to Pew Research Center’s 2023-24 Religious Landscape Study for an analysis of switching in the United States that does count some switching within Christianity. Read “4 facts about religious switching within Judaism in Israel” for an analysis of switching within Judaism.)

Religiously unaffiliated refers to people who answer a question about their current religion (or their upbringing) by saying they are (or were raised as) atheist, agnostic or “nothing in particular.” This category is sometimes called “no religion” or “nones.”

Other religions is an umbrella category. It contains a wide variety of religions that are not in the other categories and that have survey sample sizes too small to analyze separately in most countries. This includes Sikhism, Jainism, the Baha’i faith, African traditional religions, Native American religious traditions, and others.

Disaffiliation rates refer to the percentage of adults who say they were raised in a religion but are now religiously unaffiliated (or have no religion).

Net gains/losses are the differences between the percentage of survey respondents who say they were raised in a particular religious category (as children) and the percentage who identify with that same category at the time of the survey (as adults). The “net” gain or loss takes into account both sides of the equation – those who have left and those who have entered the group.

Retention rates show, among all the people who say they were raised in a particular religious group, the percentage who still describe themselves as belonging to that group today.

Accession rates (also called entrance rates) show, among all the people who describe themselves as belonging to a particular religious group today, the percentage who were raised in some other group.

This section describes religious switching into and out of Judaism, reviewing the net gains and losses for Judaism in Israel and the United States, what percentage of adults who were raised Jewish are still Jewish (i.e., retention rates), which religious groups those who have left Judaism have switched into, and where Judaism has the largest shares of new entrants (i.e., the highest accession rates). Around 80% of the world’s Jews live in just two countries: Israel and the United States. Both countries were included in our 2024 survey, allowing us to examine religious switching among a majority of the world’s Jewish population. However, people may identify as Jewish in a multitude of ways, including ethnically, culturally, religiously or by family background.
In this report, we use the term “Jewish” to mean only religious identity, because the survey questions used in the analyses ask about a person’s current religion and what religious group they were raised in (their childhood religion).

Net gains and losses for Judaism

Viewed as a percentage of all U.S. adults, few people have left or joined Judaism. But Jewish adults make up only a small fraction of the U.S. population to begin with (about 2%).

Remaining Jewish

Most people who were raised Jewish in Israel and the U.S. still identify this way today, resulting in high Jewish retention rates in both countries – though the rate is higher in Israel than in the U.S.

Leaving Judaism

In the U.S., about a quarter of adults who were raised Jewish no longer identify as Jewish. In Israel, fewer than 1% of adults who were raised Jewish no longer identify as such. Most adults who have left Judaism in both countries are now unaffiliated (i.e., they identify religiously as atheist, agnostic or “nothing in particular”).

Entering Judaism

Most Jewish adults in Israel and the U.S. were raised Jewish, meaning the “accession” (or entrance) rates into Judaism are fairly low in both places. But of the two countries, the U.S. has the higher accession rate, with 14% of Jewish Americans saying they were raised outside of Judaism, compared with just 1% of Israeli Jewish adults. Refer to Pew Research Center’s “4 facts about religious switching within Judaism in Israel” and “Denominational switching among U.S. Jews: Reform Judaism has gained, Conservative Judaism has lost” for analyses of switching within Judaism.

Has Judaism experienced net gains or losses from religious switching?

In Israel and the U.S., the proportion of the overall populations that have either switched into or switched out of Judaism is very small (1% or less). This is true in both places, even though Jewish adults make up a sizable majority of all adults in Israel and a small sliver of all U.S. adults.
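The retention, accession, and net gain/loss definitions above reduce to simple ratios over two tallies per respondent: the group they were raised in and the group they identify with now. A minimal sketch in Python, using hypothetical counts (not Pew data; only the 76% retention value is chosen to echo the reported U.S. figure, and the other numbers are illustrative):

```python
# Hypothetical survey tallies illustrating how switching rates are derived
# from two questions: religion raised in vs. religion identified with now.
raised_jewish = 200      # respondents who say they were raised Jewish
still_jewish = 152       # of those, how many still identify as Jewish
entered_judaism = 8      # respondents raised in another group, now Jewish
current_jewish = still_jewish + entered_judaism  # everyone Jewish today

# Retention: share of the raised group who stayed.
retention_rate = still_jewish / raised_jewish

# Accession (entrance): share of the current group who switched in.
accession_rate = entered_judaism / current_jewish

# Net change: entrants minus leavers, expressed here in respondents.
net_change = current_jewish - raised_jewish

print(f"Retention: {retention_rate:.0%}")   # 76%
print(f"Accession: {accession_rate:.0%}")   # 5%
print(f"Net change: {net_change}")          # -40
```

Note that a group can have a high retention rate and still shrink: here 76% of the raised group stayed, but only 8 entrants offset 48 leavers, for a net loss.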
What percentage of people raised Jewish are still Jewish?

The Jewish retention rate is high in both Israel and the U.S. In Israel, virtually all adults who were raised Jewish still identify as Jewish today. In the U.S., 76% of adults who were raised Jewish still identify this way. Which religious

6. Religious switching into and out of Judaism Read More »

Be THE Human-In-The-Loop: Data & AI Literacy is Your Edge

AI is transforming the way we live, work, and play. It’s altering how we make decisions and interact with technology. But for all its power, it still needs humans (for now) — not just any humans but those who understand how AI works, the dependencies between good data and useful AI outputs, and where human judgment is irreplaceable. Amidst a world rushing toward automation, data and AI literacy isn’t just a skill — it is how you become THE human in the loop.

What Does It Mean To Be “The Human In The Loop”?

The phrase “human in the loop” (HITL) comes from AI and machine learning, referring to the humans who step in to guide, correct, or make sense of AI-driven processes. Sometimes, it means reviewing AI-generated decisions to catch mistakes (think fraud detection or medical diagnoses). Other times, it’s about injecting human expertise where AI lacks context, nuance, or ethical reasoning. If you’ve attended a conference in the past year, the HITL is what vendors point to when assuring people with AI concerns that humans will still be part of key governance structures and decision-making. What is often overlooked is how many humans will be in the loop, what the loops might look like, or how many AI/software loops one human can be responsible for. Here is our reality: Not all humans in the loop will be equal. Some will be passive overseers, clicking “approve” or “reject” on AI recommendations (the hospital scene from the 2006 film “Idiocracy” comes to mind here). Others will be active decision-makers driven within a culture of inquiry who shape how AI is used, train models with better data, and ask questions before being prompted by an algorithm. The key difference between passive human drones and those actively involved in guiding AI decisions is data and AI literacy within a culture of inquiry.
Why AI And Data Make You Indispensable

Two short anecdotes illustrate the point: Over the past year, I’ve been showing a friend who works at a bank how the simple use of AI tools outside of her company can help her improve engagement and impact at work. She was just highlighted at work for being “forward-thinking and proactive” for getting creative without sacrificing security. KPMG recently gave me a demo of its “Curiosity Workbench,” an AI tool that helps its employees locate and leverage decades of knowledge, data, and expertise to help with clients and get moving quickly. Both of these examples depend on humans interpreting information and learning more by being curious and inquisitive. After all, AI is only as good as the data it learns from — and data is only as useful as the humans interpreting it. If you want to be the human in the loop, you need:

Data literacy: the foundation | AI depends on clean, consistent, relevant, and representative data. Without data literacy, you’re just a spectator to the AI revolution. With it, you’re the one shaping impact. Ask yourself: Can you spot bad data before it leads to bad outcomes? Do you understand how bias can slide into datasets like a creepy social media stalker can slide into your DMs? Can you interpret AI-driven insights to make business decisions, rather than just accepting whatever a model spits out?

AI literacy: the next level | AI literacy isn’t about coding your own model from scratch. It’s about understanding how AI influences decisions, where it’s useful, and where it needs a human course correction. In 2025, I ask our clients to imagine that AI is like the world’s best intern: It can do 80% of most common jobs very well, but that remaining 20% is still pretty suspect and needs the guidance of a wiser mentor who can work with it to get you 100% there. Ask yourself: Do you know how AI models make predictions and where they can go wrong? Can you question AI outputs instead of blindly trusting them? Are you aware of ethical risks, compliance issues, and real-world AI failures?

Enterprise culture of (data) inquiry | AI is just software, but without a body of users who are enabled to find it, ask questions of it, grow using it, communicate with it, and trust it, it is as worthless as the grains of sand that its chips are built from. A culture of inquiry is one where all are empowered in a psychologically safe environment to ask questions and share commentary. A culture of data inquiry ensures that, within that safe environment, users can locate, leverage, trust, and communicate those insights found within data without fear. Ask yourself: Do I work within an environment where all can locate data? Do I work in an environment where all can leverage data? Do I work in an environment where all can trust data? Do I work in an environment where all can communicate data?

Be The One Behind The AI

Automation is here for many routine tasks. But organizations will need humans who: Understand when AI is making good vs. bad recommendations. Know how to validate AI insights before acting on them. Can explain AI-driven decisions in clear, human terms — to coworkers, executives, regulators, and customers. Can translate business challenges to more technical and data-focused AI engineers while also listening and learning from them in turn. Being the human in the loop isn’t about resisting AI. It’s about being the person who knows how to use it responsibly, effectively, and strategically.

Now What?

Reach out for an inquiry ([email protected]) with me today to uncover your natural strengths and purpose, via your own roles, goals, and values VIP evaluation, to improve your own data communications and data storytelling skills, and then to discover how to build your enterprise culture of data inquiry via curiosity velocity and data and AI literacy programming. I look forward to working with you!
If you are a vendor looking to share insights on your AI literacy offerings or have a use case of how you’ve helped others with the above,

Be THE Human-In-The-Loop: Data & AI Literacy is Your Edge Read More »

Where Tech Meets the Soul: Bowdoin’s AI Plan Gets $50M Kickstart from Netflix Legend Hastings

Netflix co-founder Reed Hastings donates $50 million to Bowdoin College. Source: Bowdoin College

Netflix co-founder Reed Hastings has donated $50 million to his alma mater, Maine’s Bowdoin College, to launch the Hastings Initiative for AI and Humanity, a bold step toward preparing students to critically shape the future of artificial intelligence. Rather than focusing purely on coding and algorithms, the initiative takes a broader, more human-centered approach. One of its missions will be to study the effects of AI across society, the economy, creativity, and even the possibility of humans losing touch with essential skills in an AI-driven world. “We aim to develop leaders who can be ‘at home’ in both the present and future technological landscape,” Hastings said in a press release.

Studying AI’s Impact, Not Just Building It

At a time when AI is embedded in everything from healthcare to hiring, Bowdoin is positioning itself to ask the tough questions: How does AI shift how we think, learn, and create? What happens when machines replace tasks we once considered uniquely human? Could we lose essential skills in the process? To answer these questions, the college plans to hire 10 new faculty members across multiple disciplines and support current faculty in weaving AI into their courses, research, and creative work. Whether it’s examining algorithmic bias in political science or exploring how generative AI affects the future of storytelling, the initiative is about building fluency and critical thinking – not blind adoption. Workshops, symposia, and funding for student and faculty research will create space for meaningful conversations about AI’s growing role in our lives and the challenges it brings with it.

Empowering the Next Generation of Ethical Leaders

For Hastings, this isn’t just a donation — it’s an investment in the ethical and intellectual backbone of future leaders.
By giving students the tools to understand and challenge AI, the initiative fosters a mindset that balances curiosity with caution. Bowdoin College President Safa Zaki noted that the initiative fits squarely within their liberal arts tradition of empowering students to question, reflect, and lead with purpose in an age of rapid change. In a world increasingly shaped by machine intelligence, the Hastings Initiative offers a timely reminder: the future of AI doesn’t belong to engineers alone. It belongs to the thoughtful and ethical individuals who dare to ask what kind of world we’re building. Bowdoin plans to ensure those voices are ready to lead and be heard.

Where Tech Meets the Soul: Bowdoin’s AI Plan Gets $50M Kickstart from Netflix Legend Hastings Read More »