What is an LLM (Large Language Model)?

Definition

An LLM (Large Language Model) is an artificial intelligence model trained on vast amounts of text to understand, generate, and synthesize natural language. ChatGPT, Claude, Gemini, and Mistral are all built on LLMs. They form the foundation of AI answer engines.

LLMs (Large Language Models) are neural networks trained at scale on text corpora from the web, books, and scientific articles. They learn to predict the next token in a sequence, enabling them to generate coherent, context-aware responses.
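The next-token objective can be illustrated with a toy bigram model. This is a deliberate simplification: real LLMs learn with billions of neural-network parameters rather than count tables, and the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Tiny corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows each token (a bigram table).
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent next token after `token`."""
    return bigrams[token].most_common(1)[0][0]

def generate(start: str, length: int) -> list[str]:
    """Greedily generate text by repeatedly predicting the next token."""
    tokens = [start]
    for _ in range(length):
        tokens.append(predict_next(tokens[-1]))
    return tokens
```

The same loop, with a far richer probability model, is what lets an LLM produce coherent, context-aware responses.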

LLMs and search

Several search engines now integrate LLMs into their interfaces: Google with AI Overviews (powered by Gemini), Bing with Copilot (built on GPT-4), and Perplexity with its own system. These engines generate a synthesized answer rather than simply displaying a list of links.

What this means for brands

LLMs build their responses from their training corpus and, for the most recent models, from real-time web searches. Being mentioned in quality sources is therefore one of the main levers for influencing the content they generate.

How does an LLM differ from a search engine?

A search engine indexes the web and returns a list of relevant links. An LLM generates a synthesized answer in natural language. The two are converging: Google integrates LLMs into its results pages, and answer engines like Perplexity combine an LLM with real-time web search.
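This convergence is commonly implemented as retrieval-augmented generation (RAG): fetch relevant documents first, then have the model answer from them. A minimal sketch follows, with a toy word-overlap retriever standing in for a real search backend; all names and documents are invented for illustration.

```python
# Toy document store; a real system would query a search index or the live web.
DOCS = [
    "Perplexity combines web search with an LLM.",
    "A classic search engine returns a ranked list of links.",
    "LLMs generate text by predicting the next token.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the prompt a RAG system would send to the LLM."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using these sources:\n{context}\n\nQuestion: {query}"
```

For a brand, this design means the retrieved sources, not just the training corpus, shape the final answer.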

Do all LLMs use the same training data?

No. Each provider builds its own corpus with different sources, cutoff dates, and filtering methods. This is why a brand may be well represented in ChatGPT but absent from Claude, or vice versa. Multi-LLM monitoring is therefore essential.
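Multi-LLM monitoring can be sketched as asking each model the same question and checking whether the brand appears in its answer. Everything below (model names, canned responses, the `query_model` helper) is a hypothetical stand-in for real API calls.

```python
# Canned responses simulate what different models might answer.
CANNED_RESPONSES = {
    "model-a": "Top CRM tools include Acme CRM and others.",
    "model-b": "Popular options are Foo and Bar.",
}

def query_model(model: str, prompt: str) -> str:
    """Stand-in for a real API call to the given model."""
    return CANNED_RESPONSES[model]

def brand_visibility(brand: str, prompt: str, models: list[str]) -> dict[str, bool]:
    """Return, per model, whether the brand is mentioned in its answer."""
    return {m: brand.lower() in query_model(m, prompt).lower() for m in models}
```

Running the same prompt across providers surfaces exactly the asymmetry described above: visible in one model, absent from another.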