GPT-4 Turbo Development:
- A new model, GPT-4 Turbo, has been introduced, with a 128K context window and a knowledge cutoff of April 2023.
- Costs have been significantly reduced: input tokens now cost $0.01/1K and output tokens $0.03/1K – 3× cheaper for input and 2× cheaper for output than GPT-4.
- Enhancements in function calling now allow for multiple functions to be called simultaneously and ensure more accurate function parameter returns.
- A new beta seed parameter enables reproducible (best-effort deterministic) model outputs.
- GPT-4 Turbo is accessible via ‘gpt-4-1106-preview’, with a stable version expected later.
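The points above can be sketched as a Chat Completions request body, shown as a plain dict rather than a live API call. The seed field is the beta reproducible-outputs feature, and the tools list shows how two functions can be exposed so the model may invoke both in one response (parallel function calling); the function names here are hypothetical, not part of the announcement.

```python
import json

def build_request(prompt: str, seed: int = 42) -> dict:
    """Sketch of a GPT-4 Turbo Chat Completions request body."""
    return {
        "model": "gpt-4-1106-preview",   # preview alias; stable name expected later
        "messages": [{"role": "user", "content": prompt}],
        "seed": seed,                    # best-effort determinism, not a hard guarantee
        "tools": [                       # two tools -> model may call both in parallel
            {"type": "function", "function": {
                "name": "get_weather",   # hypothetical function
                "parameters": {"type": "object",
                               "properties": {"city": {"type": "string"}}},
            }},
            {"type": "function", "function": {
                "name": "get_local_time",  # hypothetical function
                "parameters": {"type": "object",
                               "properties": {"tz": {"type": "string"}}},
            }},
        ],
    }

body = build_request("What's the weather and local time in Lisbon?")
print(json.dumps(body, indent=2))
```

Sending this body with different prompts but the same seed and parameters is what the beta feature targets: repeated identical requests should return (mostly) identical completions.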
GPT-3.5 Turbo Update:
- The updated GPT-3.5 Turbo supports a 16K context window by default, at lower prices, and can be fine-tuned.
- Cost reductions have made the fine-tuned model more affordable: input tokens are now $0.003/1K, and output tokens $0.006/1K.
- It supports the same enhanced function calling and reproducible-output (seed) features as GPT-4 Turbo.
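The fine-tuned pricing above translates directly into a per-request cost estimate; a minimal sketch, using only the rates quoted ($0.003/1K input, $0.006/1K output):

```python
def fine_tuned_35_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost for one fine-tuned GPT-3.5 Turbo call,
    at the prices quoted above."""
    return input_tokens / 1000 * 0.003 + output_tokens / 1000 * 0.006

# e.g. a call consuming 10K input tokens and producing 2K output tokens
print(round(fine_tuned_35_cost(10_000, 2_000), 4))  # → 0.042
```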
Assistants API:
- The new Assistants API beta facilitates the creation of agent-like applications across diverse use cases.
- It lets developers build purpose-built AI assistants that follow specific instructions and call models and tools to perform tasks.
- The API manages persistent conversation threads, relieving developers of conversation-state handling, and offers built-in tools such as the Code Interpreter and Function Calling.
- Playground offers a code-free testing environment for the API.
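The flow described above can be sketched as the request bodies exchanged with the beta API, shown as plain dicts rather than live calls. Field names follow the beta Assistants API; the assistant ID is a hypothetical placeholder.

```python
import json

# 1) Create an assistant with instructions and a built-in tool.
assistant_body = {
    "model": "gpt-4-1106-preview",
    "instructions": "You are a data analyst. Run code when calculations are needed.",
    "tools": [{"type": "code_interpreter"}],  # tool hosted and executed by the API
}

# 2) Add a user message to a thread (the conversation state the API
#    persists for you, so the app never re-sends history).
message_body = {"role": "user", "content": "Plot y = x**2 for x from 0 to 10."}

# 3) Start a run that binds the thread to the assistant.
#    "asst_example123" is a hypothetical ID standing in for a real one.
run_body = {"assistant_id": "asst_example123"}

for name, body in [("assistant", assistant_body),
                   ("message", message_body),
                   ("run", run_body)]:
    print(name, json.dumps(body))
```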
Multimodal Capabilities:
- GPT-4 Turbo now accepts image inputs, enabling applications such as image captioning and analysis.
- DALL·E 3 image generation and a new text-to-speech model are also available through the API.
The keynote can be watched here: https://www.youtube.com/watch?v=U9mJuUkhUzk