Methods Used to Prevent Google Indexing

Have you ever wanted to prevent Google from indexing a particular URL on your web site and displaying it in their search engine results pages (SERPs)? If you manage web sites long enough, the day will likely come when you need to know how to do this.

The three methods most commonly used to prevent the indexing of a URL by Google are as follows:

Using the rel="nofollow" attribute on all anchor elements that link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
Although the differences between the three approaches seem subtle at first glance, their effectiveness can vary dramatically depending on which method you choose.

Using rel="nofollow" to prevent Google indexing

Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site that links to that URL.

Adding a rel="nofollow" attribute to a link prevents Google's crawler from following the link, which, in turn, prevents them from discovering, crawling, and indexing the target page. While this method may work as a short-term stopgap, it is not a viable long-term solution.
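For example, a nofollowed link looks like the following HTML snippet (the URL and anchor text here are placeholders):

    <!-- rel="nofollow" asks crawlers not to follow this link to the target page -->
    <a href="https://example.com/private-page" rel="nofollow">Private page</a>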

The flaw in this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to prevent other web sites from linking to the URL with a followed link. The likelihood that the URL will eventually be crawled and indexed using this method is therefore quite high.

Using robots.txt to prevent Google indexing

Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
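For illustration, here is a minimal robots.txt, placed at the root of the site, that blocks crawling of one hypothetical page (the path is a placeholder):

    # Applies to all crawlers, including Googlebot
    User-agent: *
    # Prevent this page from being crawled
    Disallow: /private-page.html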

Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough web sites link to the URL, Google can often infer the topic of the page from the link text of those inbound links. As a result they will show the URL in the SERPs for related searches. While using a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.

Using the meta robots tag to prevent Google indexing

If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag, they need to be able to discover and crawl the page first, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it is never shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
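As a minimal sketch, assuming a hypothetical page, the tag sits inside the head element like so:

    <head>
      <!-- Tells crawlers not to index this page or show it in search results -->
      <meta name="robots" content="noindex">
      <title>Private page</title>
    </head>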