A Primer on Conversational Artificial Intelligence Agents & Large Language Models

Overview

Conversational agents and the large language models (LLMs) at their core have become increasingly proficient at mimicking human language and behavior, allowing them to respond to a wide variety of instructions. They have proven effective at many different tasks, such as question answering, dialogue generation, text summarization, machine translation, and even programming. But the questions remain: do LLMs actually know anything? And how do they actually work?

Evolution of Large Language Models

The first generation of large language models included the original transformer-based models such as GPT, BERT, BLOOM, and PaLM. These models were all trained on large unlabeled text corpora.1

The second generation of LLMs included InstructGPT, ChatGPT, Bard, and Alpaca. These were fine-tuned on labeled target data using one of two methods: supervised learning objectives or reinforcement learning from human feedback (RLHF).

Future generations of large language models will likely focus on three capabilities:

  • Multimodal and multitask learning
  • Pre-training on domain-specific data
  • Parameter-efficient fine-tuning

How Large Language Models Work

LLMs can often produce accurate results because they are trained on large amounts of data from different sources and domains. LLMs are neural networks comprised of (hundreds of) billions of parameters, and they are trained on text that has been split into tokens; tokens may include words, parts of words, and punctuation. These mathematical models are, in essence, statistical distributions of the probability that a given token follows the tokens that precede it.
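As a minimal sketch of what this means in practice, the snippet below uses the Hugging Face transformers library and the publicly available GPT-2 model (both are assumptions for illustration; this primer does not name a specific model or toolkit) to show how a prompt is broken into tokens and how the model assigns probabilities to the token that might come next:

```python
# Sketch: tokenization and next-token probabilities with GPT-2
# (library and model choice are illustrative assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Large language models predict the next"
inputs = tokenizer(prompt, return_tensors="pt")

# Tokens are subword pieces, not necessarily whole words.
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The scores at the last position describe the token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id):>12} {prob.item():.3f}")
```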

Said differently, these generative models predict a statistically likely continuation of the tokens that follow a given prompt. However, that does not always mean the single most likely continuation. Frequently, it is preferable to sample a lower-ranked token to produce a more “creative” output from the LLM; how often this happens is controlled by the temperature parameter.2,3
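A toy illustration of that idea (not taken from this primer) is shown below: the candidate scores are divided by the temperature before being turned into probabilities, so lower temperatures concentrate probability on the top-ranked token while higher temperatures give lower-ranked tokens a better chance of being sampled.

```python
# Toy sketch of temperature-scaled sampling over hypothetical token scores.
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from logits scaled by temperature."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

# Hypothetical scores for four candidate tokens, ranked best to worst.
logits = [4.0, 3.0, 2.0, 1.0]

for t in (0.5, 1.0, 1.5):
    _, probs = sample_next_token(logits, temperature=t)
    print(f"temperature={t}: {np.round(probs, 3)}")
# Lower temperatures sharpen the distribution toward the top-ranked token;
# higher temperatures flatten it, so lower-ranked tokens appear more often.
```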

Issues

Despite sounding human and very confident in their answers, conversational agents produce output that can be prone to inaccuracy, toxicity, and bias.



Updated on May 2nd, 2023