ForkLog

Turbo GPT Versions and a Developer Assistant — OpenAI Presentation

  • OpenAI unveiled turbo versions of its neural networks with additional options.
  • The Assistants API helps developers write code and carry out other tasks.
  • GPTs — micro-extensions for ChatGPT.

OpenAI CEO Sam Altman delivered a presentation at DevDay, where he outlined upcoming updates to the ChatGPT chatbot and new tools for developers.

Turbo mode

First, the team introduced an enhanced GPT-4 Turbo with a context window expanded to 128,000 tokens, the equivalent of about 300 pages of text per query.

The model is more capable and has knowledge of world events up to April 2023.

The chatbot gained an updated function-calling system that lets a single request invoke two actions at once, for example “open the car window and turn off the air conditioning.” The enhanced API also remembers and “with higher likelihood” returns the correct function parameters.
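As a rough illustration (not shown at the presentation), parallel function calling in the Chat Completions API is driven by a list of tool schemas passed alongside the messages. The function names below (`open_window`, `set_ac`) are invented for the example; only the payload is built, so no API key is needed:

```python
# Hypothetical tool schemas for the "window + air conditioning" example.
tools = [
    {
        "type": "function",
        "function": {
            "name": "open_window",
            "description": "Open a window of the car",
            "parameters": {
                "type": "object",
                "properties": {"side": {"type": "string", "enum": ["left", "right"]}},
                "required": ["side"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "set_ac",
            "description": "Turn the air conditioning on or off",
            "parameters": {
                "type": "object",
                "properties": {"on": {"type": "boolean"}},
                "required": ["on"],
            },
        },
    },
]

# The request payload; with a live client this would be passed to
# client.chat.completions.create(**request).
request = {
    "model": "gpt-4-1106-preview",
    "messages": [{"role": "user", "content": "Open the car window and turn off the AC"}],
    "tools": tools,
}
```

With both tools declared, the model can return two tool calls in a single response rather than one per round trip.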

Additionally, GPT-4 Turbo has been trained to adhere closely to a requested format when a special parameter is specified (for example, “always respond in XML”). The model also supports a JSON mode for structuring its output.

The new seed parameter enables reproducible outputs, making the model return consistent results for identical requests. This beta feature, which provides greater control over the model’s behaviour, is useful for replaying prompts during debugging and for writing more reliable unit tests.
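The format control and the seed map onto two request fields in the Chat Completions API. A minimal sketch (payload only, no API call, so no key is required; the prompt text is invented):

```python
request = {
    "model": "gpt-4-1106-preview",
    "messages": [
        {"role": "system", "content": "Reply only with valid JSON."},
        {"role": "user", "content": "List three fruits."},
    ],
    # JSON mode: constrains the model to emit syntactically valid JSON.
    "response_format": {"type": "json_object"},
    # Fixed seed: identical requests should return (mostly) identical outputs.
    "seed": 42,
}
```

Pinning the seed while varying one prompt at a time is what makes the debugging workflow described above practical.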

In addition, OpenAI released a turbo version of GPT-3.5 with a 16,000-token context window. The model supports similar functionality to GPT-4 Turbo, but in a slower mode.

The Useful Assistant

The Assistants API provides purpose-built AIs that follow explicit instructions, draw on additional knowledge, and can call models and tools to carry out tasks.

The assistant interface provides a Python code interpreter and knowledge-retrieval capabilities. The tool can also execute some functions that previously had to be coded manually, and enables the creation of “high-quality AI applications.”
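As a sketch, creating an assistant amounts to a name, instructions, a model, and a list of built-in tools; with a live client these parameters would go to `client.beta.assistants.create(**assistant_cfg)`. The name and instructions below are invented for the example:

```python
# Hypothetical assistant configuration; only the payload is built here.
assistant_cfg = {
    "name": "Data Helper",
    "instructions": "You analyse CSV files the user uploads.",
    "model": "gpt-4-1106-preview",
    # Built-in tools announced at DevDay: a Python code interpreter
    # and knowledge retrieval over uploaded files.
    "tools": [{"type": "code_interpreter"}, {"type": "retrieval"}],
}
```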

“The API is designed with flexibility: use cases range from a natural-language data analysis app, a programming assistant, an AI-powered vacation planner, a voice-controlled DJ, an intelligent visual canvas — the list goes on,” OpenAI emphasised.

Additional capabilities

The expanded functionality of the GPT turbo versions enables additional solutions. For example, the Chat Completions API can accept images as input, allowing the model to caption drawings, analyse photographs in detail, or read and then summarise documents.
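Image input is expressed as a mixed-content message: the user turn carries both a text part and an image part. A minimal payload sketch (the URL is a placeholder, and nothing is sent):

```python
# Chat Completions request with an image as input; payload only.
request = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this photo."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
}
```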

Thanks to ChatGPT’s “vision,” the Be My Eyes app uses the capability to assist blind and visually impaired people with daily tasks such as indoor navigation.

Developers can now integrate the DALL-E 3 image generator into their products directly via the API. The tool features built-in content moderation to help combat copyright infringement.
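An image-generation request is a short payload; with a live client it would be passed to `client.images.generate(**request)`. The prompt is invented for the example:

```python
# DALL-E 3 request payload; nothing is sent, so no API key is needed.
request = {
    "model": "dall-e-3",
    "prompt": "A watercolour of a lighthouse at dusk",
    "size": "1024x1024",
    "n": 1,
}
```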

Moreover, OpenAI products gained full support for text-to-speech conversion, with six predefined voices. The feature offers several modes, such as low-latency output for real-time conversation or higher-quality audio rendering.
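The two modes correspond to two model names, and the six voices are chosen by name. A payload sketch (with a live client this would go to `client.audio.speech.create(**request)`; the input text is invented):

```python
# The six predefined voices shipped with the speech API.
VOICES = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]

# "tts-1" targets low-latency, real-time use; "tts-1-hd" targets
# higher audio quality. Payload only; nothing is sent.
request = {
    "model": "tts-1",
    "voice": VOICES[0],
    "input": "Hello from the text-to-speech API.",
}
```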

For those who need more than standard ChatGPT functionality, an experimental “fine-tuning” capability and a Custom Models programme have been added. These allow a language model to be tailored at any stage, starting from training.

Finally, OpenAI unveiled custom, narrowly targeted versions of the neural network — GPTs. They resemble browser extensions.

Some can perform specific internet information-seeking tasks or serve as virtual assistants in work processes.

According to the company, creating GPTs requires no coding. The tool can be made for personal use, corporate use, or released publicly.

A dedicated extensions store will appear later in November. Third-party users will be able to add their own developments to the platform, with the OpenAI team selecting the best ones.

Earlier in September, developers released a major update for ChatGPT. The chatbot learned to “see, hear and speak” for the first time.
