OpenAI's first developer conference, DevDay, took place today, November 6, in San Francisco. The undisputed highlight of the event was the keynote, where OpenAI announced a wealth of upcoming additions and improvements to its model and product lineup. Among all the exciting new features and upgrades, the GPT-4 Turbo preview, the introduction of the Assistants API, and the launch of GPTs stole the spotlight.
GPT-4 Turbo now supports a 128K-token context window
After making GPT-4 generally available in July, OpenAI announced today that it will launch a preview of the model's next generation, GPT-4 Turbo. The model features a 128K-token context window, roughly equivalent to over 300 pages of text per prompt. GPT-4 Turbo has also been updated with knowledge of world events up to April 2023 and will cost less than its predecessor: input tokens will be 3x cheaper and output tokens 2x cheaper compared to GPT-4. GPT-3.5 Turbo also received an update: it now features a default context size of 16K tokens and other features coming to GPT-4 Turbo, such as improved instruction following, JSON mode, and parallel function calling.
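To put the pricing change in concrete terms, here is a minimal cost sketch. The per-1K-token prices ($0.03/$0.06 for GPT-4, $0.01/$0.03 for GPT-4 Turbo) are the launch-day figures consistent with the "3x cheaper input, 2x cheaper output" claim, but treat them as assumptions and check OpenAI's pricing page for current numbers.

```python
# Rough cost comparison between GPT-4 and GPT-4 Turbo.
# Prices are the per-1K-token figures listed at launch (assumed here);
# consult OpenAI's pricing page for up-to-date values.

PRICES_PER_1K = {
    "gpt-4":       {"input": 0.03, "output": 0.06},
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
}

def prompt_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single request."""
    p = PRICES_PER_1K[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# A request near the new 128K context limit, with a 1K-token reply:
print(prompt_cost("gpt-4-turbo", 128_000, 1_000))  # ~1.31 USD
# For comparison, GPT-4 at its prices would cost ~3.90 USD for the same
# token counts (though GPT-4 itself does not accept a 128K prompt).
print(prompt_cost("gpt-4", 128_000, 1_000))
```

Filling a full 128K-token context on every request is still a nontrivial expense, so the price cut matters most for exactly the long-context use cases the larger window enables.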
More generally, OpenAI announced higher rate limits, a commitment to defend and cover the legal costs of Enterprise customers and developer platform users facing copyright infringement claims, and the open-source release of the latest version of its Whisper speech recognition model and the Consistency Decoder.
Assistants API release
The Assistants API is OpenAI's solution for developers looking to build agent-like experiences into their applications. Assistants can follow specific instructions, leverage additional knowledge, and call models and tools to perform tasks. The Assistants API incorporates the Code Interpreter and Retrieval capabilities, as well as function calling. These new capabilities should save developers time and help them build high-quality apps by drastically reducing the work required to incorporate these features into AI-powered applications.
The API's Code Interpreter can write and run Python code in a sandbox, generate graphs and charts, and handle several data and file formats. The Code Interpreter also lets assistants run code iteratively, enabling them to solve complex math problems, among other applications. The Retrieval capability allows developers to augment a model with external data, such as proprietary domain data, product information, or user-provided documents, without needing to compute embeddings or implement chunking and search algorithms. With function calling, assistants can invoke user-defined functions and incorporate the function response into their messages. OpenAI has stated that, as is its standard practice, data and files passed through the API are excluded from model training, and developers can delete their data at any time.
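The three capabilities above are enabled per assistant via a tools list. The sketch below shows the shape of the JSON payload an application might send when creating an assistant; the field names follow OpenAI's launch documentation, but treat them (and the `get_order_status` function, which is a hypothetical app-side example) as assumptions to verify against the current API reference.

```python
import json

# Sketch of an assistant-creation payload combining the three tool types
# announced at DevDay. Field names are assumed from OpenAI's launch docs;
# check the current Assistants API reference before relying on them.

assistant_payload = {
    "model": "gpt-4-1106-preview",  # the GPT-4 Turbo preview model name
    "name": "Data Helper",
    "instructions": (
        "Answer questions about the attached product docs; "
        "use Python for any calculations."
    ),
    "tools": [
        {"type": "code_interpreter"},  # sandboxed Python execution
        {"type": "retrieval"},         # search over uploaded files
        {                              # a user-defined function the model may call
            "type": "function",
            "function": {
                "name": "get_order_status",  # hypothetical app-side function
                "parameters": {
                    "type": "object",
                    "properties": {"order_id": {"type": "string"}},
                    "required": ["order_id"],
                },
            },
        },
    ],
}

# An SDK or plain HTTP client would POST this JSON to the assistants endpoint:
print(json.dumps(assistant_payload, indent=2))
```

The key design point is that the developer only declares which tools the assistant may use; the hosted API handles sandboxing, file indexing, and deciding when to call each tool during a run.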
GPTs: customize ChatGPT for any purpose
On the consumer end of its offerings, OpenAI also announced the launch of GPTs, custom versions of ChatGPT that users can build for a specific purpose, such as learning the rules of any board game, helping parents teach their children math, or designing stickers. GPTs require no coding knowledge and can be built for personal use, internal use within a company, or even public availability. To create a GPT, users need only start a conversation, give ChatGPT some additional instructions and knowledge, and specify the actions it can perform. Eligible users can try creating GPTs at chat.openai.com/create. This feature, along with the ability to try out example GPTs such as Canva and Zapier AI Actions, is currently only available to ChatGPT Plus and Enterprise users.
Publicly shared GPTs will be available at the GPT Store launching later this month. Once a GPT hits the store, it will become searchable and can climb the leaderboards as more users try it out. OpenAI has stated they will "spotlight the most useful and delightful GPTs [they] come across in categories like productivity, education, and 'just for fun'." In a few months, GPT creators will be able to earn money based on the popularity of their GPTs.
OpenAI has committed to building GPTs with privacy and security as a priority. Chats held with GPTs will not be disclosed to creators, and when a GPT uses a third-party API, users can decide whether or not to share their information with that API. Builders can determine whether user chats can be used in model training. These new privacy options are best seen as an enhancement to previously existing ones, such as opting whole accounts out of training. The company has also pledged to set up new systems to prevent the publication of harmful GPTs, such as those featuring hateful, fraudulent, or adult content. Builders will be given the option to verify their identity, and users will always be able to raise concerns through a reporting feature.
OpenAI has publicly stated more than once that its team shares the belief that community involvement is critical to its "mission of building safe AGI that benefits humanity." That the company is working actively to incorporate this belief into its products is evident from the variety of remarkable announcements made at DevDay's keynote. Additionally, this focus on accessibility, and on APIs that respect developers' time and effort, will certainly turn heads and may even convince some holdouts to try OpenAI's products. The introduction of GPTs appears exceptionally well-timed, arriving just as more and more non-specialists are looking to tap into generative AI's still largely unexplored potential.