Vector Database
In one line
A vector database stores embeddings and performs fast similarity search across them — the core infrastructure behind RAG and semantic search.
Going deeper
A vector database stores embeddings and runs fast similarity search across them. Pinecone, Weaviate, Qdrant and Postgres' pgvector are typical choices. Where a relational database asks 'WHERE column = value', a vector DB asks 'top-N most semantically similar items to this query'.
Marketers rarely run one directly, but it is essential context for how AI retrieves your content into an answer. If your site is chunked and embedded cleanly, a RAG system can pull the right paragraph; if not, it pulls something close but wrong.
Recently, general-purpose databases and search engines have added vector capabilities of their own, so whether you need a dedicated vector DB is now a genuine case-by-case decision.
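Under the hood, the 'top-N most semantically similar' query is nearest-neighbour search over vectors. A minimal brute-force sketch in Python (the three-dimensional vectors, document names, and values are toy placeholders standing in for real embeddings; production systems use approximate indexes such as HNSW instead of a full scan):

```python
import numpy as np

# Toy "index": each row is the embedding of one document chunk.
# Real embeddings have hundreds to thousands of dimensions.
docs = {
    "pricing page":  np.array([0.9, 0.1, 0.0]),
    "blog: RAG 101": np.array([0.1, 0.9, 0.2]),
    "about us":      np.array([0.0, 0.2, 0.9]),
}

def top_n(query_vec, index, n=2):
    """Brute-force nearest-neighbour search by cosine similarity."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(index.items(), key=lambda kv: cos(query_vec, kv[1]), reverse=True)
    return [(name, round(cos(query_vec, vec), 3)) for name, vec in ranked[:n]]

# A query embedding that "means" roughly the same as the RAG post.
query = np.array([0.2, 0.8, 0.1])
print(top_n(query, docs))  # the RAG post ranks first
```

A dedicated vector DB adds persistence, metadata filtering, and approximate indexes on top of exactly this ranking operation, which is why it scales where a full scan does not.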
Related terms
Embedding
An embedding is a numeric vector representation of text or other data that preserves semantic meaning — the foundation of semantic search, vector databases and RAG.
RAG
RAG (Retrieval-Augmented Generation) lets an LLM fetch external documents at answer time and ground its response in them — the technique behind ChatGPT Search, Perplexity and most AI search products.
LLM
A large language model (LLM) is a neural network trained on massive text corpora to understand and generate human language — the engine behind ChatGPT, Claude, Gemini and similar products.
Structured Output
Structured output forces an LLM to reply in a predefined JSON or schema shape instead of free text — essential when you need to plug AI reliably into other systems.
LLMO
LLMO (Large Language Model Optimization) is the work of shaping content, data and context signals so that LLMs understand and cite your brand correctly.