Google Cancels Support for Robots.txt Noindex



Google officially announced that Googlebot will no longer obey a robots.txt directive related to indexing. Publishers relying on the robots.txt noindex directive have until September 1, 2019 to remove it and begin using an alternative.

Robots.txt Noindex Unofficial

The reason the noindex robots.txt directive won't be supported is that it has never been an official directive.

Google has in the past supported this robots.txt directive, but that will no longer be the case. Take due notice thereof and govern yourself accordingly.

Google Mostly Obeyed the Noindex Directive

StoneTemple published an article noting that Google mostly obeyed the robots.txt noindex directive.

Their conclusion at the time was:

"Ultimately, the NoIndex directive in Robots.txt is fairly effective. It worked in 11 out of 12 cases we tested. It may work for your site, and due to the way it's implemented it gives you a path to prevent crawling of a page AND also have it removed from the index.

That's quite useful in concept. However, our tests didn't show 100 percent success, so it doesn't always work."

That's no longer the case. The noindex robots.txt directive is no longer supported.
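As a quick illustration (using Python's standard-library robots.txt parser rather than Googlebot itself), a spec-compliant parser honors Disallow but simply ignores a Noindex line, since noindex was never part of the protocol:

```python
# Sketch: a standard robots.txt parser has no concept of "Noindex".
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Noindex: /old-page/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Disallow is a documented rule: fetching /private/ is blocked.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False

# Noindex is not part of the standard, so the parser ignores the line
# entirely: /old-page/ is still considered fetchable.
print(parser.can_fetch("*", "https://example.com/old-page/"))  # True
```

The `example.com` URLs are placeholders; the point is only that unrecognized rules are silently dropped by a standards-based parser, which is the behavior Google is moving to.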

This is Google's official tweet:

"Today we're saying goodbye to undocumented and unsupported rules in robots.txt

If you have been relying on these rules, learn about your options in our blog post."

This is the relevant part of the announcement:

"In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we're retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019."

How to Control Crawling?

Google's official blog post listed five ways to control indexing:

  1. Noindex in robots meta tags
  2. 404 and 410 HTTP status codes
  3. Password protection
  4. Disallow in robots.txt
  5. Search Console Remove URL tool
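A minimal sketch of the first two options using only Python's standard library: the handler below (the paths are hypothetical) serves a noindex directive via the X-Robots-Tag response header, which is the HTTP-header equivalent of the robots meta tag, and answers 410 Gone for a permanently removed page:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoIndexHandler(BaseHTTPRequestHandler):
    """Hypothetical handler illustrating options 1 and 2 above."""

    def do_GET(self):
        if self.path == "/removed-page":
            # Option 2: 410 Gone signals the page is permanently removed.
            self.send_response(410)
            self.end_headers()
            return
        # Option 1: X-Robots-Tag is the header equivalent of
        # <meta name="robots" content="noindex"> in the page's <head>.
        self.send_response(200)
        self.send_header("X-Robots-Tag", "noindex")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Kept out of the index</body></html>")

    def log_message(self, *args):
        pass  # keep the example quiet

# To serve locally:
# HTTPServer(("localhost", 8000), NoIndexHandler).serve_forever()
```

Note that, unlike the robots.txt noindex, both of these require the crawler to actually fetch the page to see the directive, so the page must not also be blocked by Disallow.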

Read the official Google announcement here:
https://webmasters.googleblog.com/2019/07/a-note-on-unsupported-rules-in-robotstxt.html

Read the official Google tweet here:
https://twitter.com/googlewmc/status/1145950977067016192


