OpenAI has released a new proprietary AI model in time to counter the rapid rise of open-source rival DeepSeek R1, but will it be enough to blunt the latter's success?

Today, after several days of rumors and mounting anticipation among AI users on social media, OpenAI is debuting o3-mini, the second model in its new family of "reasoners": AI models that take slightly more time to "think," analyzing their own processes and reflecting on their own "chains of thought" before responding to user queries and inputs with new outputs. The result is a model that can answer hard questions in math, science, engineering and many other fields at the level of a PhD student or even degree holder.

The o3-mini model is now available in ChatGPT, including the free tier, and through OpenAI's application programming interface (API), and it is less expensive, faster, and more performant than OpenAI's previous high-end model, o1, and its faster, lower-parameter-count sibling, o1-mini.

While o3-mini will inevitably be compared to DeepSeek R1, and its release date read as a reaction, it is important to remember that o3 and o3-mini were announced in December 2024, well before the January release of DeepSeek R1, and that OpenAI CEO Sam Altman had previously stated on X that, due to feedback from developers and researchers, the model would come to ChatGPT and the OpenAI API at the same time.

Unlike DeepSeek R1, o3-mini will not be made available as an open-source model, meaning its code cannot be downloaded for offline usage, nor customized to the same extent, which may limit its appeal compared to DeepSeek R1 for some applications.

OpenAI did not provide any further details about the (presumed) larger o3 model announced back in December alongside o3-mini.
At that time, OpenAI's opt-in dropdown form for testing o3 stated that it would undergo a "delay of multiple weeks" before third parties could test it.

Performance and Features

Like o1, OpenAI o3-mini is optimized for reasoning in math, coding, and science. Its performance is comparable to OpenAI o1 at medium reasoning effort, but it offers the following advantages:

- 24% faster response times than o1-mini. (OpenAI didn't provide a specific number here, but in third-party evaluation group Artificial Analysis's tests, o1-mini takes 12.8 seconds to receive and output 100 tokens; a 24% speed-up, i.e. 1.24 times the throughput, would bring that down to roughly 10.3 seconds.)
- Improved accuracy, with external testers preferring o3-mini's responses 56% of the time.
- 39% fewer major errors on complex real-world questions.
- Better performance on coding and STEM tasks, particularly at high reasoning effort.
- Three reasoning effort levels (low, medium, and high), letting users and developers balance accuracy against speed.

It also boasts impressive benchmarks, even outpacing o1 in some cases, according to the o3-mini system card OpenAI released online (which was published ahead of the official model availability announcement).

o3-mini's context window, the number of combined tokens it can input and output in a single interaction, is 200,000, with a maximum of 100,000 tokens in each output. That matches the full o1 model and exceeds DeepSeek R1's context window of around 128,000 to 130,000 tokens, but it falls far short of Google Gemini 2.0 Flash Thinking's new context window of up to 1 million tokens.

While o3-mini focuses on reasoning capabilities, it doesn't have vision capabilities yet. Developers and users who want to upload images and files should keep using o1 in the meantime.

The competition heats up

The arrival of o3-mini marks the first time OpenAI is making a reasoning model available to free ChatGPT users.
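For developers, the three effort levels are selected per request. The sketch below is a minimal illustration, assuming the OpenAI Python SDK's chat-completions interface and a `reasoning_effort` parameter taking the values "low", "medium", or "high" as described above; the helper name `build_o3_mini_request` is hypothetical, and the payload is built separately from the API call so it can be inspected without a key.

```python
# Sketch: choosing a reasoning effort level for an o3-mini request.
# Assumes the OpenAI chat-completions API accepts a "reasoning_effort"
# field for o-series models; the helper here is illustrative only.

REASONING_EFFORTS = ("low", "medium", "high")

def build_o3_mini_request(prompt: str, effort: str = "medium") -> dict:
    """Return a chat-completions payload for o3-mini.

    Lower effort responds faster; higher effort spends more "thinking"
    on hard math, science, and coding questions.
    """
    if effort not in REASONING_EFFORTS:
        raise ValueError(f"effort must be one of {REASONING_EFFORTS}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

# To actually send the request (requires an API key):
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(
#       **build_o3_mini_request("Prove that sqrt(2) is irrational.", "high"))
payload = build_o3_mini_request("Prove that sqrt(2) is irrational.", effort="high")
```

The trade-off is the one the bullet list describes: "low" trades some accuracy for speed, while "high" improves coding and STEM performance at the cost of latency.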
The prior o1 model family was available only to paying subscribers of the ChatGPT Plus, Pro and other plans, as well as via OpenAI's paid application programming interface.

Just as it created the category of large language model (LLM)-powered chatbots with the launch of ChatGPT in November 2022, OpenAI essentially created the entire category of reasoning models in September 2024 when it first unveiled o1, a new class of models built on a new training regime and architecture.

But OpenAI, in keeping with its recent history, did not make o1 open source, contrary to its name and original founding mission; it kept the model's code proprietary.

And over the last two weeks, o1 has been overshadowed by Chinese AI startup DeepSeek, which launched R1, a rival, highly efficient, largely open-source reasoning model that anyone around the world can freely take, retrain, and customize, or use for free on DeepSeek's website and mobile app. The model was reportedly trained at a fraction of the cost of o1 and other LLMs from top labs.

DeepSeek R1's permissive MIT license, free consumer app and website, and freely modifiable codebase have led to a veritable explosion of usage in both the consumer and enterprise markets, with even OpenAI investor Microsoft and Anthropic backer Amazon rushing to add variants of it to their cloud marketplaces. Perplexity, the AI search company, also quickly added a variant for its users.

DeepSeek also dethroned the ChatGPT iOS app from the number-one spot in the U.S. Apple App Store, and it notably outpaced OpenAI by connecting its R1 model to web search in its app and on the web, something OpenAI has not yet done for o1. That has fueled further techno-anxiety among tech workers and others online that China is catching up to, or has outpaced, the U.S. in AI innovation, and even in technology more generally.
Many AI researchers, scientists and top VCs such as Marc Andreessen, however, have welcomed the rise of DeepSeek, and its open sourcing in particular, as a tide that lifts all boats in the AI field, increasing the intelligence available to everyone while reducing costs.

Availability in ChatGPT

o3-mini is now rolling out globally to ChatGPT