Payment Fraud Detection and Prevention: Here's All To Know

Key takeaways: With AI technology, both payment fraud and fraud prevention have become more complex and sophisticated. Payment fraud directly affects both the owners of financial data and the businesses that unknowingly accept fraudulent payments. The best payment fraud prevention strategy requires careful risk assessment, planning, implementation, incident response preparation, and documentation.

What is payment fraud?

Payment fraud is the unauthorized use of an individual’s financial information to conduct illegal transactions. The overall strategy is to deceive individuals into sharing their financial and other sensitive information, often using hardware and software hijacking technology. Payment fraud happens when devices such as scanners, keystroke loggers, and malware capture manually entered data and divert it back to the perpetrators. Businesses invest significantly in payment fraud prevention tools to counter these attacks.

Types of payment fraud

With today’s technology, every payment method is unfortunately at risk of fraud. We discuss them briefly below.

Credit card fraud

According to a 2025 Nilson Report, global payment card fraud losses reached $33.83 billion in 2023, with the US bearing approximately 42% of those losses. The most prevalent type of credit card fraud happens remotely: card-not-present (CNP) fraud, which occurs when stolen card information is used to make purchases online or over the phone. While EMV chip technology has reduced card-present fraud, criminals still find ways to exploit merchant vulnerabilities, often through cloning, where card details are copied onto a blank magnetic stripe card, or through stolen cards used before the victim notices and reports them.

Example: In November 2024, a UK resident’s replacement credit card was intercepted and used fraudulently before she received it, underscoring the vulnerabilities in card issuance and delivery processes.
See: Detecting Credit Card Fraud by Decision Trees and Support Vector Machines

Debit card fraud

Debit card fraud involves the unauthorized withdrawal of funds directly from the victim’s bank account. This happens via physical theft of the card, skimming devices capturing card details, or data breaches exposing card information. Unlike credit card fraud, victims of debit card fraud may experience immediate financial loss, as funds are withdrawn directly from their accounts.

Example: In October 2024, a UK resident discovered unauthorized transactions exceeding £100 on their Uber and Uber Eats accounts linked to their debit card. Uber refunded the fraudulent charges, but the origin of the unauthorized transactions remained unclear.

Mobile payment fraud

Mobile payment fraud occurs when fraudsters exploit mobile payment systems, apps, or devices to make unauthorized transactions or steal financial information. Common methods include SIM swapping, where an attacker gains control of a victim’s phone number to access their accounts, and malware that infects a device to intercept sensitive information such as payment credentials.

Example: In November 2024, three Indiana residents were charged in connection with a nationwide SIM-swapping conspiracy. The defendants stole funds and personal data through the mobile numbers connected to the victims’ email, social media, and cryptocurrency accounts.

See: Mobile Device Security Policy

Wire fraud

Wire fraud involves schemes conducted via phone calls, emails, or online messaging platforms, often using false representations or promises to defraud individuals or organizations of money or property. Fraudsters trick victims into transferring funds to accounts they control, leading to substantial financial losses.

Example: In July 2024, individuals based in Michigan, Illinois, and Texas pleaded guilty to conspiracy to commit international mail and wire fraud, defrauding victims of at least $2 million from 2017 to 2022.
Check fraud

Despite declining check usage due to digital payment methods, check fraud remains common. It involves illegal activities such as forging signatures, altering check details, or depositing counterfeit checks.

Example: In late 2024, JPMorgan Chase filed lawsuits against customers who exploited a viral “money glitch” by depositing large, fake checks via ATMs and withdrawing funds before the checks cleared. This scheme resulted in over $660,000 in losses for the bank.

Bank fraud

Bank fraud involves schemes to steal cash and other bank assets, such as loan fraud, account takeover, fraudulent wire transfers, and embezzlement. Criminals may carry out these types of fraud using stolen identities, forged documents, or insider access.

Example: In December 2024, reports emerged of low-level bank employees selling client data to online scammers, facilitating sophisticated financial fraud schemes. Staffers at various banks made copies of customer financial information, which they then sold to buyers on Telegram.

Payment fraud strategies

The different types of payment fraud involve various deceptive practices aimed at stealing financial data for unauthorized use. Here are seven of the most common ways payment fraud happens:

Phishing

Phishing is when scammers impersonate legitimate entities to trick individuals into revealing sensitive information. This deception is often carried out using fake emails, text messages, or websites that appear legitimate.

How to detect phishing

Watch out for unsolicited communications requesting personal information, generic greetings, grammatical errors, and URLs that deviate slightly from authentic addresses.

How to prevent phishing

Implementing email filtering solutions can help identify and isolate potential phishing attempts. Multi-factor authentication (MFA) adds an extra layer of security, and employees should be trained to recognize phishing emails.
Skimming

Skimming is when criminals install devices on ATMs or point-of-sale terminals to illicitly capture card information during legitimate transactions. These devices read the magnetic stripe data, enabling the creation of counterfeit cards for fraudulent use.

How to detect skimming

Signs of skimming devices include loose or misaligned card slots, unfamiliar attachments on payment terminals, or visible adhesive residue.

How to prevent skimming

Upgrade to payment terminals that support EMV chip technology, which is more secure than magnetic stripe systems. Additionally, install tamper-evident seals and conduct routine checks on all payment devices.

Identity theft

Identity theft involves the unauthorized access and use of someone’s personal information — such as Social Security numbers, bank account details, or credit card numbers — to commit fraud or theft.

How to detect identity theft

Consider installing monitoring services that can identify unusual account activities, such as unrecognized transactions, changes in account details, or unexpected credit inquiries.

How to prevent identity theft

Implement layered identity verification processes, such as biometric data and MFA. Update and patch systems regularly to protect against data breaches. Train


Adapting To Private Practice: From DOJ Leadership To BigLaw

By Richard Donoghue (April 2, 2025, 2:14 PM EDT) — Attorneys frequently transition from government work to private practice during changes in administration, encountering challenges and surprises as they do so. In this Expert Analysis series, attorneys who made that move in the last few years reflect on how they adapted to law firm life, and discuss tips for others. If you are interested in writing about your experience, please email [email protected]….


Pixel 10 Pro Fold Leak: Can Google Finally Crack the Foldable Phone Market?

Leaks show Google’s Pixel 10 Pro Fold resembles its predecessor, shown in this Google ad. Image: Google

Google continues to struggle to maintain its foothold in the mobile market. With just 4.5% of Americans using a Google smartphone as their primary device, the tech giant is banking on its upcoming Pixel 10 Pro Fold to make a stronger impression than previous iterations — despite not depending on smartphone sales to drive annual revenue. While official details remain limited, a recent leak offered an early glimpse into what users can expect from Google’s next foldable flagship, the successor to the Pixel 9 Pro Fold.

What the latest leak reveals

The leak includes digital renders of the Google Pixel 10 Pro Fold that experts were quick to point out are similar in design to its predecessor. This may suggest that Google is prioritizing internal upgrades and performance enhancements over cosmetic redesigns. The leaked images show a similar, if not identical, form factor to the Pixel 9 Pro Fold, including a triple rear camera array. However, as with other smartphones in the Pixel 10 series, the SIM card slot has been relocated to the upper edge of the device.

Multiple sources also suggest that Google’s new Pixel 10 Pro Fold could launch at a lower price point than the Pixel 9 Pro Fold, which debuted at $1,799 for the 256GB model and $1,919 for the 512GB model. Historically, Google has been aggressive with pricing, often offering steep post-launch discounts on Pixel devices. The Pixel 9 Pro Fold is already available at significantly reduced prices, indicating the Pixel 10 Pro Fold could see early markdowns soon after release.

What we already knew

Additional information about the Pixel 10 Pro Fold surfaced earlier — a September 2024 leak revealed the device’s internal codename of “Rango” and hinted at a possible launch in fall 2025.
Though the device has yet to be formally announced, its likely competitor will be Samsung’s highly anticipated Galaxy Z Fold 7. According to industry sources, Samsung is expected to unveil the Z Fold 7 in July 2025, with in-store availability to follow weeks later. While an official price hasn’t been disclosed, analysts project a retail cost between $1,899 and $2,199.


Augment Code debuts AI agent with 70% win rate over GitHub Copilot and record-breaking SWE-bench score

Augment Code, an AI coding assistant startup, unveiled its new “Augment Agent” technology today, designed to tackle the complexity of large software engineering projects rather than simple code generation. The company claims its approach represents a significant departure from other AI coding tools by focusing on helping developers navigate and modify large, established codebases that span millions of lines of code across multiple repositories. The company also announced it has achieved the highest score to date on SWE-bench Verified, an industry benchmark for AI coding capabilities, by combining Anthropic’s Claude 3.7 Sonnet with OpenAI’s o1 reasoning model.

“Most work in the coding AI space, which is clearly a hot sector, has focused on what people call ‘zero to one’ or ‘vibe coding’ – starting with nothing and producing a piece of software by the end of the session,” said Scott Dietzen, CEO of Augment Code, in an exclusive interview with VentureBeat. “What we targeted instead is the software engineering discipline of maintaining big, complex systems — databases, networking stacks, storage — codebases that have evolved over many years with hundreds of developers working on them collaboratively.”

Founded in 2022, Augment Code has raised $270 million in total funding, including a $227 million Series B round announced in April 2024 at a post-money valuation of $977 million. The company’s investors include Sutter Hill Ventures, Index Ventures, Innovation Endeavors (led by former Google CEO Eric Schmidt), Lightspeed Venture Partners, and Meritech Capital.

How Augment’s context engine tackles multi-million-line codebases

What sets Augment Agent apart, according to the company, is its ability to understand context across massive codebases. The agent boasts a 200,000-token context window, significantly larger than most competitors’.
“The challenge for any AI system, including Augment, is that when you’re working with large systems containing tens of millions of lines of code – which is typical for meaningful software applications – you simply can’t pass all that as context to today’s large language models,” explained Dietzen. “We’ve trained our AI models to perform sophisticated real-time sampling, identifying precisely the right subset of the codebase that allows the agent to do its job effectively.”

This approach contrasts with competitors that either don’t handle large codebases or require developers to manually assemble the relevant context themselves. Another differentiator is Augment’s real-time synchronization of code changes across teams. “Most of our competitors work with stale versions of the codebase,” said Dietzen. “If you and I are collaborating in the same code branch and I make a change, you’d naturally want your AI to be aware of that change, just as you would be. That’s why we’ve implemented real-time synchronization of everyone’s view of the code.”

The company reports its approach has led to a 70% win rate against GitHub Copilot when competing for enterprise business.

Why the ‘Memories’ feature helps AI match your personal coding style

Augment Agent includes a “Memories” feature that learns from developer interactions to better align with individual coding styles and preferences over time. “Part of what we wanted to be able to deliver with our agents is autonomy in the sense that you can give them tasks, but you can also intervene,” Dietzen said. “Memories are a tool for the model to generalize your intent, to capture that when I’m in this situation, I want you to take this path rather than the path that you took.”

Contrary to the notion that coding is purely mathematical logic without stylistic elements, Dietzen emphasized that many developers care deeply about the aesthetic and structural aspects of their code.
“There is definitely a mathematical aspect to code, but there’s also an art to coding as well,” he noted. “Many of our developers want to stay in the code. Some use our agents to write all of the code, but there’s a whole group of engineers that care about what the ultimate code looks like and have strong opinions about that.”

Enterprise adoption of AI coding tools has been slowed by concerns about intellectual property protection and security. Augment has focused on addressing these issues with a robust security architecture and enterprise-grade integrations. “Agents need to be trusted. If you’re going to give them this autonomy, you want to make sure that they’re not going to do any harm,” said Dietzen. “We were the first to offer the various levels of SOC compliance and all of the associated penetration testing to harden our solution.”

The company has also established integrations with developer tools like GitHub, Linear, Jira, Notion, Google Search, and Slack. Unlike some competitors that implement these integrations on the client side, Augment handles these connections in the cloud, making them “easily shareable and consistent across a larger team,” according to Dietzen.

Augment Agent is generally available for VS Code users starting today, with early preview access for JetBrains users. The company maintains full compatibility with Microsoft’s ecosystem, unlike competitor Cursor, which forked VS Code. “At some level, customers that choose Cursor are opting out of the Microsoft ecosystem. They’re not allowed to use all of the standard VS Code plug-ins that Microsoft provides for access to their environment, whereas we’ve preserved 100% compatibility with VS Code and the Microsoft ecosystem,” Dietzen explained.

The evolving partnership between human engineers and AI assistants

Despite the advances in AI coding assistance, Dietzen believes human software engineers will remain essential for the foreseeable future.
“The arguments around whether software engineering is a good discipline for people going forward are very much off the mark today,” he said. “The discipline of software engineering is very, very different in terms of crafting and evolving these large code bases, and human insight is going to be needed for years to come.” However, he envisions a future where AI can take on more proactive roles in software development: “The real excitement around where we can ultimately get to with AI is AI just going in and assessing quality


Appendix B: Selected tables by expert and public demographics

ABOUT PEW RESEARCH CENTER Pew Research Center is a nonpartisan, nonadvocacy fact tank that informs the public about the issues, attitudes and trends shaping the world. It does not take policy positions. The Center conducts public opinion polling, demographic research, computational social science research and other data-driven research. Pew Research Center is a subsidiary of The Pew Charitable Trusts, its primary funder.


Don’t Leave Consumers Behind In Your Agentic AI Journey

Generative AI was so two years ago. Now, businesses are all about agentic AI (not to be confused with AI agents), channeling the brouhaha of ChatGPT in 2023. But where is the consumer in all this? While businesses are moving to experiment and learn more about agentic AI, consumers are not. Business-oriented, not consumer-oriented, use cases continue to drive most of the buzz around agentic AI.

Today, AI Assists And Informs Consumer Decisions

For consumers right now, AI primarily helps them access information faster. But while it informs their research and purchase decisions, AI does not pick the right item or complete the transaction; that action still falls to the consumer. For example:

Walmart’s GenAI search surfaces collections of products. Consumers can now enter a query such as “Plan my daughter’s unicorn-themed fifth birthday” and the results will show all the various products related to that ask. Previously, consumers had to search for specific products or product types.

Amazon Fashion takes the guesswork out of unfamiliar brands’ sizing. Based on a customer’s past purchases and an analysis of product reviews, Amazon will show the likely size of a clothing item that the customer is viewing.

AI @ Morgan Stanley helps advisors better focus on their clients. Financial services companies such as Morgan Stanley are using AI to assist with note-taking and email summarization for financial advisors so that they can better focus on their conversations with their wealth management clients.

Soon, AI Will Be Cognizant Of Consumer Context

As businesses implement agentic AI, it will begin to trickle into consumer experiences, but widespread adoption will be limited by consumers’ comfort with agentic AI. An intermediary phase of not-fully-agentic AI will emerge in which AI apps will connect with third-party tools and datasets to understand consumer context and need but won’t yet have the full executional capabilities required to act on a consumer’s behalf.
Big Tech — think Microsoft, Apple, and Google — will lead the way, not brands. For example:

Project Astra engages multiple sources to better personalize responses. Google’s AI assistant taps into not just Gemini but Google Search, Maps, and Lens to create responses.

Apple’s Siri observes users’ interactions to predict next steps. Siri can register the apps that a user has open and derive what the user is hoping to accomplish when Siri is prompted.

Microsoft Copilot interacts with different browser tabs to suggest next-step actions. Vision is a feature within Microsoft’s Edge browser that analyzes a user’s browser tabs to answer questions and suggest next steps.

What will agentic AI use cases for consumers look like?

According to Forrester’s Market Research Online Community, only 12% of consumers have heard of the term “agentic AI,” while 38% have heard of “AI agents.” But even among those who have heard of either, most assume that they’re terms for AI assistants or customer service help. Check out our new report, Consumer Use Cases For AI, 2025, for a deeper dive into all of the phases of consumer-facing AI. Clients, schedule a guidance session with us to learn more about how to roll out consumer-facing AI experiences in ways that engender trust.


Bybit’s Head of Derivatives and Institutional Sales Shunyet Jan Predicts Gold Prices Will Continue to Rise

The platform’s single-day trading volume hit a record $10 billion. On April 3, gold broke its historical record, surpassing $3,100 per ounce and further cementing its status as a vital safe-haven asset in an increasingly volatile global environment. As the world’s second-largest cryptocurrency exchange, Bybit remains committed to providing the crypto community with diverse investment opportunities. As the first crypto exchange to launch gold trading settled in USDT, Bybit helps its users fully capture market movements.

Bybit’s Head of Derivatives and Institutional Sales, Shunyet Jan, predicts that gold prices will continue to rise, citing several key factors:

Strategic central bank accumulation: As many countries seek to reduce their dependence on the US dollar and hold more diversified reserves, some Asian central banks are actively diversifying, significantly increasing their gold holdings while cutting dollar exposure. This strategic shift creates sustained demand and exerts upward pressure on gold prices.

Persistent geopolitical instability: Uncertainty in global trade has driven market volatility and, in turn, demand for safe-haven assets. New tariffs expected from the Trump administration have created significant geopolitical uncertainty. These tensions, combined with existing global conflicts, are expected to persist, pushing investors toward the stability of gold.

Gold as a reliable inflation hedge: Gold’s role as an inflation hedge remains critical, especially as inflation concerns deepen. Compared with cryptocurrencies such as Bitcoin, gold maintains an inverse relationship with inflationary pressure, while Bitcoin tracks broader market trends more closely. With tariffs and other global economic factors likely to fuel inflation, investors are turning to gold to protect their assets.

Responding proactively to these trends, Bybit launched gold and foreign-exchange trading in August 2024, followed by Copy Trading for gold and FX in January 2025, giving users of any experience level free access to traditional financial markets. The platform’s services have been well received: yesterday alone, it recorded $10 billion in gold trading volume.

In addition, Bybit will launch an XAUTUSDT perpetual contract today, giving users more opportunities to participate in the gold market in a crypto-native environment. Shunyet added: “Through Bybit’s strategic integration of gold and FX trading, together with the platform’s innovative Copy Trading feature, Bybit can deliver diverse investment opportunities to its users.”


Microsoft at 50: Bill Gates is Gifting Everyone With the Company’s Original Source Code

Image: Bill Gates/YouTube

Fifty years ago, Bill Gates and his childhood friend Paul Allen founded a company called “Micro-Soft” in a strip mall in Albuquerque, New Mexico. Half a century later, the company has cemented its place among tech giants and ranks as the world’s second-largest company. Currently, the only company with a higher market cap is Apple, maker of the ubiquitous iPhone. While Microsoft is reflecting on its past successes in honor of its 50-year anniversary this April, including Gates sharing the company’s original source code, the tech giant is also hustling to secure its place among the leaders of the artificial intelligence revolution. After largely missing the boat on smartphones and the shift to mobile devices, Microsoft is hoping to avoid a repeat of this mistake and instead dominate the fields of cloud computing and generative AI.

Microsoft still dominates office software and operating systems

Microsoft’s Windows operating system, which traces its roots to the company’s original MS-DOS, runs the majority of the world’s computers. The company also made a name for itself by dominating the office software market, and it still reigns supreme today. Once available on floppy disks and then CDs, the software can now be downloaded on any device thanks to the power of cloud computing. Even though its main competitor Google Docs is free for many to use, Microsoft Office products remain the standard for most offices around the world.

Attempts to diversify weren’t always home runs

The company has made moves to diversify beyond office software and operating systems throughout the years. For example, Microsoft launched Xbox consoles in 2001, introduced the Bing search engine in 2009, and acquired the LinkedIn social media website in 2016. Despite these moves, Microsoft’s products and services often lag behind its competitors.
PlayStations outnumber Xboxes almost two to one, Google Search continues to dominate Bing, and other social media websites like Facebook, Instagram, and YouTube outperform LinkedIn.

Microsoft now bids on AI and cloud computing

Microsoft is now working hard to future-proof its tech legacy through cloud computing and artificial intelligence. Its cloud platform Microsoft Azure is currently the second-largest by market share, though Amazon Web Services (AWS) still leads by a wide margin, and Google Cloud is also gaining in popularity. The company has also made significant moves to strengthen its AI offerings. It invested its first $1 billion in OpenAI back in 2019 and has since invested a total of $14 billion. Microsoft has also developed its own in-house AI tools, such as Microsoft 365 Copilot. However, the tech giant lacks its own proprietary silicon chips and relies on other companies to produce these essential AI model components. It remains to be seen whether Microsoft’s AI investments will be enough to uphold its legacy for another 50 years — or whether it will fall behind other AI companies.


Building resilient and innovative security teams in the age of AI

The promised land of AI transformation poses a dilemma for security teams, as the new technology brings both opportunities and yet more threats. Threat actors are already using AI to write malware, find vulnerabilities, and breach defences faster than ever. At the same time, machine learning is playing an ever-more important role in helping enterprises combat hackers and other attackers. According to Palo Alto Networks, its systems are detecting 11.3bn alerts every day, including 2.3m new and unique attacks.[1] It is beyond human capabilities to monitor and respond to these attacks, and the volume is putting immense stress on security teams. How, then, can CISOs and CSOs build resilient security teams that can defend their organisations and continue to innovate?

Arms race

Cybersecurity teams are in an “arms race” with attackers, as threat groups use AI to increase both the volume and speed of attacks. “AI has created a powerful toolkit for threat actors, and it has changed the way that we’re seeing attacks,” warns Nick Calver, VP for Financial Services at Palo Alto Networks. “Two or three years ago a ransomware attack would typically take 44 days before they could extract data or cause your systems a problem. Now we’re seeing that exact same attack happening in a number of hours,” he says.

This acceleration is happening even as businesses struggle to gain visibility into how AI is being used in their own organisations, and as regulators struggle to keep up with a fast-changing landscape. “Everybody needs to be aware of AI,” says Calver. “Threat-based assessment is incredibly powerful, and I’ve seen it put to good use. It’s immediately helped improve organisations’ protection.”

Threat assessment is just one area where AI can play a positive role in security; AI has been in use in cyber defence for over 10 years. “When you consider those attack volumes, it is not possible for humans to actually keep up and respond effectively,” says Calver.
“Security technicians need to harness the power of AI.”

Resilience, and human factors

However, there is also a different side to an increasingly hostile security environment. Growing threats are challenging organisations’ ability to recover from attacks, and this is changing how security leaders think. The focus remains on preventing a breach, but increasing attention is being given to how to respond to and recover from attacks. Regulations such as DORA are helping to ensure consistency in this area.

“Historically, we’d try to build a moat around the technology, and just stop anybody crossing in. But people do come in,” says Calver. “How do we actually segment and protect systems and provide a level of resilience?” Architectures such as zero trust will also play a role in building resilience, he says.

But it is people who will ultimately secure an organisation. Even with automation and AI tools, businesses will only survive cyber attacks if their security teams can function under pressure. This means bringing together technical tools, training, testing and, above all, support for those on the front line. “Without people, we are nothing,” warns Calver. “Ultimately, the team, the people, that’s what actually makes an organisation successful, and that’s what protects the organisation too.”

For more information, please visit Palo Alto Networks’ Precision AI page.

[1] Foundry interview with PAN’s Nick Calver


OpenAI to release open-source model as AI economics force strategic shift

OpenAI announced plans to release its first “open-weight” language model since 2019, marking a dramatic strategic shift for the company that built its business on proprietary AI systems. Sam Altman, OpenAI’s chief executive, revealed the news in a post on X on Monday. “We are excited to release a powerful new open-weight language model with reasoning in the coming months,” Altman wrote. The model would allow developers to run it on their own hardware, departing from OpenAI’s cloud-based subscription approach that has driven its revenue. “We’ve been thinking about this for a long time but other priorities took precedence. Now it feels important to do,” Altman added.

The announcement coincided with OpenAI securing $40 billion in new funding at a $300 billion valuation — the largest fundraise in the company’s history. These major developments follow Altman’s admission during a February Reddit Q&A that OpenAI had been “on the wrong side of history” regarding open-source AI — a statement prompted by January’s release of DeepSeek R1, an open-source model from China that reportedly matches OpenAI’s performance at just 5-10% of the operating cost.

OpenAI faces mounting economic pressure in a marketplace increasingly dominated by efficient open-source alternatives. The company reportedly spends $7-8 billion annually on operations, according to AI scholar Kai-Fu Lee, who recently questioned OpenAI’s sustainability against competitors with fundamentally different cost structures.
“You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free,” Lee said in a Bloomberg Television interview last week, comparing OpenAI’s finances with DeepSeek AI.

Meta’s Llama models have established a formidable market presence since their 2023 debut, surpassing one billion downloads as of this March. This widespread adoption demonstrates how quickly the field has shifted toward open models that can be deployed without the recurring costs of API-based services. Clement Delangue, CEO of Hugging Face, celebrated the announcement, writing: “Amazing news for the field and the world. Everyone benefits from open-source AI!”

The billion-dollar gamble: Why OpenAI is risking its primary revenue stream

OpenAI’s move represents a high-stakes bet that could either secure its future relevance or accelerate its financial challenges. By releasing an open model, the company implicitly acknowledges that foundation models are becoming commoditized — an extraordinary concession from a company that has raised billions on the premise that its proprietary technology would remain superior and exclusive.

The economics of AI have shifted dramatically since OpenAI’s founding. Training costs have fallen precipitously as hardware efficiency improves, and algorithmic innovations like DeepSeek’s approach demonstrate that state-of-the-art performance no longer requires Google-scale infrastructure investments. For OpenAI, this creates an existential dilemma: maintain course with increasingly expensive proprietary models or adapt to a market that increasingly views base models as utilities rather than premium products.
Their choice to release an open model suggests they’ve concluded that relevance and ecosystem influence may ultimately prove more valuable than short-term subscription revenue. This decision also reflects the company’s growing realization that competitive moats in AI may not lie in the base models themselves, but in the specialized fine-tuning, domain expertise, and application development that build upon them.

Balancing openness with responsibility: How OpenAI plans to control what it can’t contain

OpenAI emphasizes that safety remains central to its approach despite embracing greater openness. “Before release, we will evaluate this model according to our preparedness framework, like we would for any other model. And we will do extra work given that we know this model will be modified post-release,” Altman wrote.

This represents the fundamental tension in open-weight releases: once published, these models can be modified, fine-tuned, and deployed in ways the original creators never intended. OpenAI’s challenge lies in creating guardrails that maintain reasonable safety without undermining the very openness they’ve promised.

The company plans to host developer events to gather feedback and showcase early prototypes, beginning in San Francisco in the coming weeks before expanding to Europe and Asia-Pacific regions. These sessions may provide insight into how OpenAI plans to balance openness with responsibility.

Enterprise impact: What CIOs and technical decision makers need to know about OpenAI’s strategic shift

For enterprise customers, OpenAI’s move could significantly reshape AI implementation strategies. Organizations that have hesitated to build critical infrastructure atop subscription-based models now have reason to reconsider their approach. The ability to run models locally addresses persistent concerns around data sovereignty, vendor lock-in, and long-term cost management.
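The cost-management argument can be made concrete with a little arithmetic. A minimal sketch, where every price is an illustrative assumption rather than any vendor’s actual rate, comparing recurring per-token API spend against the fixed monthly cost of self-hosting an open-weight model:

```python
# Illustrative break-even comparison: API-based vs. self-hosted inference.
# All figures below are hypothetical assumptions for the sketch.

API_COST_PER_1M_TOKENS = 10.00      # assumed blended $/1M tokens via a hosted API
SELF_HOST_MONTHLY_FIXED = 4_000.00  # assumed GPU server + ops cost per month


def monthly_api_cost(tokens_per_month: float) -> float:
    """Recurring cost that scales linearly with usage."""
    return tokens_per_month / 1_000_000 * API_COST_PER_1M_TOKENS


def break_even_tokens() -> float:
    """Monthly token volume at which self-hosting becomes cheaper."""
    return SELF_HOST_MONTHLY_FIXED / API_COST_PER_1M_TOKENS * 1_000_000


if __name__ == "__main__":
    volume = 500_000_000  # 500M tokens per month
    print(f"API cost at {volume:,} tokens: ${monthly_api_cost(volume):,.0f}")
    print(f"Break-even volume: {break_even_tokens():,.0f} tokens/month")
```

Under these assumed prices, usage below roughly 400 million tokens per month favors the API, while heavier usage amortizes the fixed self-hosting cost better; a real evaluation would also need to price in engineering, compliance, and hardware-refresh overhead.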
This shift particularly matters for regulated industries like healthcare, finance, and government, where data privacy requirements have limited cloud-based AI adoption. Self-hosted models could enable these sectors to implement AI in previously restricted contexts, though questions around compute requirements and operational complexity remain unanswered.

For existing OpenAI enterprise customers, the announcement creates uncertainty about long-term investment strategies. Those who have built systems atop GPT-4 or o1 APIs must now evaluate whether to maintain that approach or begin planning migrations to self-hosted alternatives — a decision complicated by the lack of specific details about the forthcoming model’s capabilities.

Beyond base models: How the AI industry’s competitive landscape is fundamentally changing

OpenAI’s pivot highlights a broader industry trend: the commoditization of foundation models and the shifting focus toward specialized applications. As base models become increasingly accessible, differentiation increasingly happens at the application layer — creating opportunities for startups and established players alike to build domain-specific solutions. This doesn’t mean the race to build better base models has ended.
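One common way teams hedge against this kind of migration uncertainty is to put a thin abstraction between application code and the model backend, so a hosted API can later be swapped for a self-hosted open-weight model without rewriting the application. A minimal sketch, where the class names and stub backends are hypothetical rather than from any specific vendor SDK:

```python
from typing import Protocol


class ChatBackend(Protocol):
    """Anything that can turn a prompt into a completion."""
    def complete(self, prompt: str) -> str: ...


class HostedAPIBackend:
    """Stand-in for a subscription API client (network calls omitted)."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"


class SelfHostedBackend:
    """Stand-in for a locally served open-weight model."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


class Assistant:
    """Application code depends only on the ChatBackend interface, so
    moving from an API to a self-hosted model is a one-line change."""
    def __init__(self, backend: ChatBackend):
        self.backend = backend

    def ask(self, question: str) -> str:
        return self.backend.complete(question)


if __name__ == "__main__":
    assistant = Assistant(HostedAPIBackend())
    print(assistant.ask("Summarize this contract."))
    assistant.backend = SelfHostedBackend()  # swap backends, app code untouched
    print(assistant.ask("Summarize this contract."))
```

The design choice here is structural typing: because `Assistant` depends only on the `ChatBackend` protocol, neither backend class needs to inherit from anything, and a future open-weight deployment only has to expose the same `complete` method.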
