The noindex directive is one of the most direct technical SEO controls available. It tells Googlebot: "crawl this page if you wish, but do not add it to the index or show it in search results." Unlike robots.txt, which blocks crawling, noindex allows the robot to access the page while preventing it from appearing in SERPs.
The two implementation methods
The most common method is the meta robots tag in the HTML head: <meta name="robots" content="noindex">. The alternative is an X-Robots-Tag HTTP header, which works for any URL including non-HTML resources. Both are equally valid; the meta tag is simpler for most CMS implementations.
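For reference, the header-based method looks like this on the wire (a minimal sketch; the exact way you set the header depends on your server or framework, and the PDF content type is only an illustration):

```
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex
```

This is the form to reach for with resources such as PDFs or images, where there is no HTML head to carry a meta tag.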
When to use noindex
Noindex is the right tool for pages with no organic search value: login pages, admin panels, internal search results, thank-you pages, duplicate content variants, and low-value paginated pages. Keeping these pages out of the index improves overall site quality signals; note that Googlebot must still crawl a page at least once to see the directive, though pages that stay noindexed long-term tend to be crawled less often.
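When rolling noindex out across page types like these, it helps to verify that each page actually carries the directive in one of the two forms. The Python sketch below is illustrative (the `has_noindex` helper and its inputs are hypothetical, and real crawlers also honor bot-specific variants such as `X-Robots-Tag: googlebot: noindex`):

```python
import re

def has_noindex(headers: dict, html: str) -> bool:
    """Return True if either the X-Robots-Tag header or a
    <meta name="robots"> tag carries a noindex directive.
    Illustrative sketch, not a full directive parser."""
    # Check the HTTP header form first.
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # Then scan meta tags for name="robots" with noindex in the content.
    for meta in re.findall(r"<meta[^>]*>", html, re.IGNORECASE):
        if re.search(r'name\s*=\s*["\']robots["\']', meta, re.IGNORECASE) \
                and "noindex" in meta.lower():
            return True
    return False

print(has_noindex({}, '<meta name="robots" content="noindex">'))  # True
```

Running a check like this against a crawl of the site catches pages where a template change silently dropped the tag.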
Common mistakes with noindex
The most damaging error is applying noindex to a page that is also blocked by robots.txt. If Googlebot cannot crawl the page, it cannot read the noindex directive, so the page may remain indexed indefinitely, often as a URL-only result. Another frequent mistake is leaving noindex on pages after a site launch: a development-phase directive forgotten in production.
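The robots.txt conflict can be demonstrated with Python's standard urllib.robotparser; the example.com URLs and the /private/ rule below are purely illustrative:

```python
import urllib.robotparser

# Hypothetical robots.txt that blocks crawling of /private/
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# A compliant crawler may not fetch this page, so a noindex
# directive inside it will never be read.
blocked = not rp.can_fetch("*", "https://example.com/private/thank-you")
print(blocked)  # True: the page's noindex is invisible to the crawler
```

The fix is to choose one mechanism per goal: robots.txt to control crawling, noindex (with the page crawlable) to control indexing.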


