Crawlability
In one line
Crawlability is how easily a search engine bot can reach and follow your site's pages — the precondition for indexing.
Going deeper
Crawlability is the most basic layer of SEO. If robots.txt blocks you, internal links are broken, or content renders only via client-side JavaScript, bots stop at the crawl step and indexing never happens.
Check sitemap submission, robots.txt rules, internal link structure and rendering strategy (SSR/CSR/hybrid) together. Google Search Console's 'Crawl stats' report shows which pages bots actually visit and how often.
The same logic applies to GEO. If GPTBot, PerplexityBot or ClaudeBot cannot read your site, you drop out of the AI citation pool. That is why basic SEO crawl hygiene is also where GEO starts.
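To make the crawl-hygiene checks concrete, here is a minimal robots.txt sketch; the sitemap URL and the /admin/ path are placeholders, not recommendations for any particular site:

  # Allow all crawlers, but keep them out of a private area
  User-agent: *
  Disallow: /admin/

  # Point crawlers at the sitemap
  Sitemap: https://example.com/sitemap.xml

A single stray Disallow: / under User-agent: * is enough to stop crawling site-wide, which is why robots.txt belongs at the top of any crawlability audit.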
Related terms
Indexing
Indexing is the step where a search engine stores a crawled page in its database. If a page is not indexed, it cannot appear in search results at all.
noindex / nofollow
noindex tells search engines not to add a page to the index, while nofollow tells them not to follow links. Both can be set page-wide in a robots meta tag; nofollow can also be applied to a single link through its rel attribute.
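For illustration, here is how each directive looks in HTML (the URL is hypothetical):

  <!-- Page-level: keep this page out of the index -->
  <meta name="robots" content="noindex">

  <!-- Link-level: do not follow this particular link -->
  <a href="https://example.com/untrusted" rel="nofollow">Untrusted source</a>

For non-HTML files such as PDFs, the same noindex directive can be sent as an X-Robots-Tag HTTP response header.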
Google Search Console
Google Search Console (GSC) is Google's free tool for monitoring how a site performs in Search — impressions, clicks, indexing status and technical issues.
GPTBot
GPTBot is OpenAI's official web crawler, used to gather content for model training (OpenAI's search features use the separate OAI-SearchBot); it can be allowed or blocked via robots.txt, as sketched below.
PerplexityBot
PerplexityBot is the web crawler Perplexity uses to gather sources for its answer engine — controllable separately via robots.txt.
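Because robots.txt rules are per user agent, you can admit one AI crawler and refuse another. A sketch (ClaudeBot follows the same pattern):

  # Let OpenAI's crawler in
  User-agent: GPTBot
  Allow: /

  # Keep Perplexity's crawler out
  User-agent: PerplexityBot
  Disallow: /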
Pagination
Pagination is the practice of splitting a long list across multiple pages — easy to ship, but a frequent source of duplicate, thin and orphan-page issues if handled carelessly.
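One widely used guardrail, sketched below with hypothetical URLs, is to give every paginated page a self-referencing canonical (rather than pointing them all at page 1) and to link pages with plain anchor tags so crawlers can reach the whole series:

  <!-- On /blog/page/3: the canonical points to itself -->
  <link rel="canonical" href="https://example.com/blog/page/3">

  <!-- Plain links keep deep pages crawlable and out of orphan status -->
  <a href="https://example.com/blog/page/2">Previous</a>
  <a href="https://example.com/blog/page/4">Next</a>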