Using embeddings allows a company to create what is, in effect, a custom AI without having to train an LLM from scratch.

Imagine that words are uniquely colored marbles, and we want a machine to understand and relate these colors to each other. To do this, we give each marble (word) a "color code" that represents its shade.

  1. Giving Color Codes (Embedding Layer Initialization): We start by giving each marble a random color code. At the beginning, our codes aren't accurate; a blue marble might have a red color code.
  2. Feeding Marbles to the Machine: When we show the machine a group of marbles (a sentence), it looks at the color codes instead of the actual marbles. The codes are what the machine can actually compute with and compare.
  3. Teaching the Machine (Training): We ask the machine to guess the next marble in a sequence, and when it's wrong, we correct it. Each time it makes a mistake, it adjusts the color codes a bit. Over time, blue marbles will have blue color codes, and red marbles will have red color codes.
  4. Fine-tuning the Color Codes: The more marbles we show, the better the machine gets at giving accurate color codes. Marbles that are similar in color will end up having similar color codes. So, a navy and sky-blue marble might both get codes that are shades of blue.
  5. End Result: After showing the machine many marbles, it will have a refined system of color codes. Now, it can easily understand, relate, and even predict marbles based on their codes.
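The five steps above can be sketched in a few lines of Python. This is only a toy illustration, not how real LLMs learn embeddings: real models adjust them through backpropagation on next-word prediction over enormous corpora, while here we simply nudge the codes of co-occurring words toward each other. The corpus, vector size, and step sizes are all invented for the demo:

```python
import math
import random

random.seed(0)

# A tiny toy corpus: each pair of words appears together, the way
# related words share sentences in real training data.
corpus = [
    ["navy", "blue"], ["sky", "blue"],
    ["crimson", "red"], ["scarlet", "red"],
]

vocab = sorted({w for pair in corpus for w in pair})
dim = 8

# Step 1: random initialisation -- every "color code" starts out meaningless.
emb = {w: [random.uniform(-0.5, 0.5) for _ in range(dim)] for w in vocab}

def cosine(u, v):
    """Similarity of two codes: 1.0 means same direction, 0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Steps 2-4: repeatedly show the machine word pairs and adjust the codes a bit.
for epoch in range(200):
    for w, c in corpus:
        # Pull the codes of co-occurring words toward each other ...
        for k in range(dim):
            delta = 0.05 * (emb[c][k] - emb[w][k])
            emb[w][k] += delta
            emb[c][k] -= delta
        # ... and push each word slightly away from a random other word,
        # so that unrelated codes do not all collapse to the same point.
        neg = random.choice(vocab)
        if neg not in (w, c):
            for k in range(dim):
                emb[w][k] -= 0.005 * emb[neg][k]

# Step 5: words that kept similar company now have similar codes.
print(cosine(emb["navy"], emb["sky"]))      # high: both co-occur with "blue"
print(cosine(emb["navy"], emb["crimson"]))  # lower: different contexts
```

After training, "navy" and "sky" end up with similar codes because they appeared in the same context, even though the machine was never told they are both blues; that is exactly the effect the marble analogy describes.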

So, just as the machine uses color codes to understand and work with marbles, AI models use embeddings (lists of numbers, called vectors) to understand and work with words or even multi-modal content (see Model Types above).
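To make "number codes" concrete, here is a minimal sketch of how similarity between embeddings is measured with cosine similarity. The vectors below are invented toy values; real embeddings come from a trained model and have hundreds or thousands of dimensions:

```python
import math

# Toy, hand-made "number codes": in practice these would come from a
# trained model's embedding layer or an embeddings API.
embeddings = {
    "dog":   [0.9, 0.1, 0.0],
    "puppy": [0.8, 0.2, 0.1],
    "car":   [0.0, 0.9, 0.4],
    "truck": [0.1, 0.8, 0.5],
}

def cosine(u, v):
    """Similarity of two vectors: close to 1.0 means very similar."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def nearest(word):
    """Return the other word whose code is most similar to `word`'s code."""
    return max((w for w in embeddings if w != word),
               key=lambda w: cosine(embeddings[word], embeddings[w]))

print(nearest("dog"))  # prints "puppy"
print(nearest("car"))  # prints "truck"
```

This nearest-neighbor lookup over embeddings is the core of semantic search, which is one way a company can build the kind of "custom AI" mentioned at the top of this section without training an LLM from scratch.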


<aside> 👉 Next: What the Foundational Models Know

</aside>

<aside> ☝ Back to The Models

</aside>