On Thursday, OpenAI announced updates to the AI models that power its ChatGPT assistant. Among less noteworthy updates, OpenAI tucked in a mention of a potential fix for a widely reported "laziness" problem seen in GPT-4 Turbo since its launch in November. The company also announced a new GPT-3.5 Turbo model (with lower pricing), a new embedding model, an updated moderation model, and a new way to manage API usage.

"Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview. This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of 'laziness' where the model doesn't complete a task," writes OpenAI in its blog post.

Since the launch of GPT-4 Turbo, a large number of ChatGPT users have reported that the GPT-4 version of its AI assistant has been declining to do tasks (especially coding tasks) with the same exhaustive depth as it did in earlier versions of GPT-4. We've seen this behavior ourselves while experimenting with ChatGPT over time.

OpenAI has never offered an official explanation for this change in behavior, but OpenAI employees have previously acknowledged on social media that the problem is real, and the ChatGPT X account wrote in December, "We've heard all your feedback about GPT4 getting lazier! we haven't updated the model since Nov 11th, and this certainly isn't intentional. model behavior can be unpredictable, and we're looking into fixing it."

We reached out to OpenAI asking if it could provide an official explanation for the laziness issue but did not receive a response by press time.
New GPT-3.5 Turbo, other updates
Elsewhere in OpenAI's blog update, the company announced a new version of GPT-3.5 Turbo (gpt-3.5-turbo-0125), which it says will offer "various improvements including higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls."
And the price of GPT-3.5 Turbo through OpenAI's API will decrease for the third time this year "to help our customers scale." New input token prices are 50 percent lower, at $0.0005 per 1,000 input tokens, and output prices are 25 percent lower, at $0.0015 per 1,000 output tokens.
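To put those numbers in context, here is a minimal sketch (not from OpenAI's announcement) of how a developer might call the new gpt-3.5-turbo-0125 model with OpenAI's official Python library and estimate the cost of a single request at the newly announced rates. The prompt and the cost-estimation helper are illustrative assumptions; the prices are the ones quoted above.

```python
# Hypothetical example: call the new model and estimate the request cost
# at the newly announced per-token prices.
from openai import OpenAI

INPUT_PRICE_PER_1K = 0.0005   # USD per 1,000 input tokens (new rate)
OUTPUT_PRICE_PER_1K = 0.0015  # USD per 1,000 output tokens (new rate)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[{"role": "user", "content": "Summarize today's model updates in one sentence."}],
)

usage = response.usage
cost = (usage.prompt_tokens / 1000) * INPUT_PRICE_PER_1K \
     + (usage.completion_tokens / 1000) * OUTPUT_PRICE_PER_1K

print(response.choices[0].message.content)
print(f"Approximate cost of this call: ${cost:.6f}")
```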
Lower token prices for GPT-3.5 Turbo will make operating third-party bots significantly less expensive, but the GPT-3.5 model is generally more likely to confabulate than GPT-4 Turbo. So we might see more scenarios like Quora's bot telling people that eggs can melt (although that instance used a now-deprecated GPT-3 model called text-davinci-003). If GPT-4 Turbo API prices drop over time, some of those hallucination issues with third parties might eventually go away.
OpenAI also announced new embedding models, text-embedding-3-small and text-embedding-3-large, which convert content into numerical sequences, aiding in machine learning tasks like clustering and retrieval. And an updated moderation model, text-moderation-007, is part of the company's API that "allows developers to identify potentially harmful text," according to OpenAI.
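For readers unfamiliar with those endpoints, the sketch below (our illustration, not OpenAI's) shows the typical pattern for both: turning text into an embedding vector with the new small embedding model, and screening text with the moderation endpoint. The sample strings are made up; the announcement says text-moderation-007 is what the "latest" alias now points to, which is how it is requested here.

```python
# Hypothetical example: use the new embedding model and the moderation endpoint
# via OpenAI's official Python library.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Convert a piece of text into a numerical vector for clustering/retrieval.
embedding_response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Eggs cannot be melted.",
)
vector = embedding_response.data[0].embedding
print(f"Embedding length: {len(vector)}")  # 1536 dimensions for the small model

# Check user-submitted text against the updated moderation model
# (requested via the "latest" alias, which the announcement says now
# resolves to text-moderation-007).
moderation_response = client.moderations.create(
    model="text-moderation-latest",
    input="Some user-submitted text to screen.",
)
print(f"Flagged: {moderation_response.results[0].flagged}")
```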
Finally, OpenAI is rolling out improvements to its developer platform, introducing new tools for managing API keys and a new dashboard for tracking API usage. Developers can now assign permissions to API keys from the API keys page, helping to clamp down on misuse of API keys (if they fall into the wrong hands) that can potentially cost developers a lot of money. The API dashboard allows devs to "view usage on a per feature, team, product, or project level, simply by having separate API keys for each."
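In practice, that per-key breakdown relies on the developer issuing a separate, permission-scoped key for each feature or team and routing requests through the matching key, roughly as in this sketch (the environment variable names and features are hypothetical, not from OpenAI's post).

```python
# Hypothetical example: one client per feature/team, each with its own
# restricted API key, so usage is attributed separately on the dashboard.
import os
from openai import OpenAI

search_client = OpenAI(api_key=os.environ["OPENAI_KEY_SEARCH_FEATURE"])
support_client = OpenAI(api_key=os.environ["OPENAI_KEY_SUPPORT_BOT"])

# Requests made with each client show up under that client's key on the usage dashboard.
reply = support_client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[{"role": "user", "content": "Where is my order?"}],
)
print(reply.choices[0].message.content)
```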
As the media world seemingly swirls around the company with controversies and think pieces about the implications of its tech, releases like these show that the dev teams at OpenAI are still rolling along as usual, shipping updates at a fairly regular pace. Despite the company almost completely falling apart late last year, it seems that, under the hood, it's business as usual for OpenAI.