OpenAI on Monday unveiled GPT-4.1, a trio of new AI models with context windows of up to one million tokens, enough to process entire codebases or small novels at once. The lineup includes the developer-focused GPT-4.1, plus Mini and Nano variants.

The company's latest release comes just weeks after GPT-4.5 shipped, giving us a release order that makes perfect sense. "The 4.1 was a deliberate choice. We're still trying to figure out what those objectives were, but it's not just that we're bad at naming," OpenAI product lead Kevin Weil said during the announcement.

GPT-4.1 posts some impressive numbers. According to OpenAI, it scored 55% accuracy (versus GPT-4o's 33%) while costing 26% less. The new Nano variant is the company's "smallest, fastest, and cheapest model ever," at just 12 cents per million tokens.

OpenAI also isn't charging extra for processing large volumes of data and actually using the full one-million-token context window. "There is no pricing bump for long context," Weil noted.

The new models show significant performance improvements. In a live demo, GPT-4.1 generated a fully working web application that could analyze a 450,000-token NASA server log file from 1995. OpenAI claims the model passes this kind of test with nearly 100% accuracy even at a million tokens of context.

Michelle, OpenAI's post-training research lead, also highlighted the models' improved instruction following. "The model follows all your instructions to the T," she said, noting that GPT-4.1 diligently adhered to strict formatting requirements without the customary AI tendency to "creatively interpret" instructions.

OpenAI’s Guide to Naming Models: How Not to Count

GPT-4.1 arriving as the replacement for GPT-4.5 feels like watching someone count "5, 6, 4, 7" with a straight face. It's the latest installment in OpenAI's bizarre versioning saga.

First came GPT-4, which was later upgraded with multimodal capabilities. The company decided to call that new model GPT-4o ("o" for "omni"), a name that could also be read as "four zero" depending on the font.

Then OpenAI released a reasoning-focused model called simply o1. Don't confuse OpenAI's GPT-4o with OpenAI's o1, though, because they are different things. Nobody quite knows why they chose the name, but broadly speaking, GPT-4o was a "normal" LLM, whereas OpenAI o1 was a reasoning model.

A few months after the release of OpenAI o1 came OpenAI o3.

But what about o2? That model never existed.

You would assume the model after o1 should have been called o2, but Sam Altman explained that out of respect for their friends at Telefónica (which owns the O2 brand), and in the grand tradition of OpenAI being really bad at names, it would be called o3.

The lineup gets messier from there, with the regular o3 and a smaller, more efficient o3-mini. Thanks to the power of AI, they also released a model called "OpenAI o3-mini-high," which puts two near-antonyms side by side. In essence, OpenAI o3-mini-high is more powerful than o3-mini, but not as powerful as OpenAI o3, which one OpenAI chart labels "o3 (Medium)," as it should be. For now, ChatGPT users can choose either OpenAI o3-mini or OpenAI o3-mini-high; the plain version isn't available.

Image: OpenAI

And lest you think OpenAI is done, the company has already announced the upcoming release of o4. Don't confuse o4 with 4o, of course; they are two entirely different things.

Back to the newly released GPT-4.1. The model is good enough that it will soon kill off GPT-4.5, making the latter one of the shortest-lived LLMs in ChatGPT history. "We're announcing that we're going to be deprecating GPT-4.5 in the API," Weil said, giving developers a three-month deadline to switch. "We really need those GPUs back," he added, confirming that even OpenAI cannot solve the industry's silicon shortage.

At this rate, we're inevitably going to see a GPT-4.2 before the year is over, but hey, whatever the names, the models keep getting better.

The models are already available via the API and OpenAI's playground, but not yet, at least, through the consumer-friendly ChatGPT UI.
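For developers who want to kick the tires, here's a minimal sketch of what calling the new models through OpenAI's Python SDK might look like; the model identifiers ("gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano") are assumptions inferred from the names above rather than confirmed API strings.

```python
# Minimal sketch: calling GPT-4.1 via OpenAI's Python SDK (openai >= 1.0).
# The model names used here are assumptions based on the article's naming.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # presumably also "gpt-4.1-mini" and "gpt-4.1-nano"
    messages=[
        {"role": "user", "content": "Summarize the errors in this server log: ..."},
    ],
)
print(response.choices[0].message.content)
```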

Edited by James Rubin

Generally Intelligent Newsletter

A weekly AI journey narrated by Gen, a generative AI model.
