There are many "levels of engagement" with the various models that OpenAI and others have released, and plenty of confusion around how to actively experiment with the new tech, which tools to use, what experience is required, and what the heck all the different model variants are about. This is an initial look at the landscape as we've discovered it ourselves here at Bee Partners, along with some of the tips and advice we are actively sharing with our portfolio companies.

As you read through the following, consider what stage of implementation and experimentation you (as company ambassador) and/or your company are currently at.

Stage 1: Let's ChatGPT!

The simplest entry into the world of generative AI is via the numerous chatbots that provide a simple conversational interface to large language models.

Beyond the increased parameter count and improved accuracy, GPT-4 brings other significant changes. To access the latest model, OpenAI offers ChatGPT Plus, a paid version of the chatbot.

The ChatGPT (GPT-3/4) web interface


Due to server capacity constraints caused by high user demand, a monthly fee of $20 grants users access to a separate, less congested server with more capacity, resulting in faster response times. The subscription also includes access to the new GPT-4 model and other features provided by OpenAI.

<aside> 💡 For more information about the differences between GPT-3 and GPT-4, you might want to read this article.

</aside>

We will explore how to get up and running with ChatGPT early in this program.

Stage 2: Prompt Engineering, Application Integration, and APIs

A step up from “command-line”-style interaction with a chatbot is learning to integrate large language model API calls into your own applications.

OpenAI has recently released an API that provides programmatic access to GPT-4. The API allows developers to build generative AI apps and integrate ChatGPT-style functionality into their existing business systems, leveraging the latest language models. Its availability is expected to significantly increase the number of apps built on GPT-4. Embedding language model API calls into common application development environments (e.g., Python) is a natural next stage of experimentation beyond chatbots.

Developers are encouraged to explore signing up for the API to ensure their products can quickly make an impact. An early recommended course in this program will cover setting up a simple dev environment and experimenting with the OpenAI API within existing code bases.
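As a taste of what that experimentation looks like, here is a minimal sketch of calling the chat endpoint from Python. The helper names, system prompt, and placeholder question are our own illustrations, not OpenAI's; the sketch assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set.

```python
import os

def build_chat_request(user_prompt, model="gpt-4"):
    """Assemble the message payload the chat endpoint expects."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }

def ask(user_prompt):
    """Send the prompt to OpenAI and return the reply text."""
    # Deferred import so the payload helper runs without the package installed.
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(**build_chat_request(user_prompt))
    return response.choices[0].message.content

# Build (but don't send) a request, e.g. from inside an existing code base.
request = build_chat_request("Summarize the key risks in this contract.")
```

The point is less the handful of lines than the pattern: the model call becomes one more function inside your application, composable with everything else your code already does.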

Stage 3: Vector Databases & Embeddings

Referencing large bodies of data can quickly (at least for now) overwhelm the specifications for calls to LLMs (in terms of the number of tokens). A common way around the challenge of having a model consume large amounts of data is to use vector databases.
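To make the token-limit problem concrete, here is a minimal sketch of splitting a large document into chunks that fit under a model's token budget before embedding or retrieval. The function name and the rough 4-characters-per-token heuristic are our own assumptions; production systems should count tokens with the model's actual tokenizer.

```python
def chunk_text(text, max_tokens=500, chars_per_token=4):
    """Split text into chunks that stay under an approximate token budget.

    Uses the rough heuristic that one English token is ~4 characters;
    real pipelines should measure with the model's own tokenizer.
    """
    max_chars = max_tokens * chars_per_token
    words = text.split()
    chunks, current, current_len = [], [], 0
    for word in words:
        # +1 accounts for the joining space.
        if current and current_len + len(word) + 1 > max_chars:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(word)
        current_len += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each chunk can then be embedded and stored separately, so a query retrieves only the handful of chunks it needs rather than the whole corpus.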

The AI revolution holds great promise, but unlocking its full potential also leaves us with great challenges. Applications built on LLMs, generative AI, and semantic search depend on efficient data processing for their operational success.

New applications rely on vector embeddings, a type of data representation that carries within it semantic information that’s critical for the AI to gain understanding and maintain long-term memory.

Embeddings are generated by AI models and have a large number of features (dimensions), which makes them challenging to store and search efficiently. Those dimensions are crucial, as they capture the patterns, relationships, and underlying structure of the data.