Changes to GPTs

How OpenAI's Updates Are Changing the AI Landscape

News From OpenAI DevDay

OpenAI's DevDay conference last week brought a number of changes to how AI tools of all kinds are built and used.

The Changes

  • Introduction of GPTs, customizable versions of ChatGPT, along with a new GPT Store for user-created AI bots.

  • Launch of the Assistants API, which lets developers build their own "agent-like experiences": agents that can retrieve outside knowledge or call functions to take specific actions.

  • New multimodal capabilities in the platform, including vision, image creation (DALL-E 3), and text-to-speech (TTS).

  • Updates to function calling, which lets developers describe their app's functions or external APIs to a model, so the model can decide when to call them.
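To make the function calling idea concrete, here is a minimal sketch in Python. The `get_weather` function, its toy return value, and the model name are illustrative assumptions, not part of the announcement; the general shape (a JSON Schema tool description plus a local dispatcher) follows OpenAI's function calling pattern.

```python
import json

def get_weather(city: str) -> str:
    """Toy local function the model can ask us to call (illustrative only)."""
    return f"Sunny in {city}"

# JSON Schema description of the function, in the format the
# Chat Completions `tools` parameter expects.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch_tool_call(name: str, arguments: str) -> str:
    """Run the local function the model selected, with its JSON arguments."""
    registry = {"get_weather": get_weather}
    return registry[name](**json.loads(arguments))

# Against the live API (requires the `openai` v1 SDK and an API key),
# the flow would look roughly like this:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   resp = client.chat.completions.create(
#       model="gpt-4-1106-preview",
#       messages=[{"role": "user", "content": "Weather in Paris?"}],
#       tools=TOOLS,
#   )
#   call = resp.choices[0].message.tool_calls[0]
#   result = dispatch_tool_call(call.function.name, call.function.arguments)

print(dispatch_tool_call("get_weather", '{"city": "Paris"}'))  # → Sunny in Paris
```

The key point is that the model never executes your code; it only returns the name and arguments of the function it wants called, and your app does the rest.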

What these changes mean is that almost anyone can now build their own GPT, whether for personal use, a specialized business use case, or a side project.

The store, set to launch next month, in December 2023, will be a marketplace where you can sell access to GPTs you have created. This democratizes AI bot creation: almost anyone can now build their own chatbot or other AI tool. How this will shake out in the long run, nobody knows yet.

Why is OpenAI doing this?

A lot of really bad AI tools have been created over the last year, where the builders simply put a UI over a call to the OpenAI API and left it at that. Most of these tools are not useful, and some are nearly unusable. Part of the reason I created BrainScriblr was to weed through the poor tools and bring you the ones that are actually useful.

This will have to change.

You now need more than an API call to build something usable. And you can build it yourself.

So I am shifting my focus from finding the best tools to teaching you how to build your own tools for personal and business uses.

Regards,

Chester “Buzz” Beard