Google's John Mueller answered a question about why Google indexes pages that are disallowed from crawling by robots.txt, and why it's safe to ignore the related Search Console reports about those crawls.

Bot Traffic To Query Parameter URLs

The person asking the question documented that bots were creating links to non-existent query parameter URLs (?q=xyz) pointing to pages that have noindex meta tags and are also blocked in robots.txt. What prompted the question is that Google crawls the links to those pages, gets blocked by robots.txt (without ever seeing the noindex robots meta tag), and then reports them in Google Search Console as "Indexed, though blocked by robots.txt." A sketch of this configuration appears at the end of this article.

The person asked the following question:

"But here's the big question: why would Google index pages when they can't even see the content? What's the advantage in that?"

Google's John Mueller confirmed that if Google can't crawl a page, it can't see the noindex meta tag. He also made an interesting point about the site: search operator, advising to ignore its results because "average" users won't see them.

He wrote:

"Yes, you're right: if we can't crawl the page, we can't see the noindex. That said, if we can't crawl the pages, then there's not a lot for us to index. So while you might see some of those pages with a targeted site:-query, the average user won't see them, so I wouldn't worry about it. Noindex is also fine (without robots.txt disallow), it just means the URLs will end up being crawled (and end up in the Search Console report for crawled/not indexed; neither of these statuses causes issues to the rest of the site). The important part is that you don't make them crawlable + indexable."

Takeaways:

1. Mueller's answer confirms the limitations of using the site: advanced search operator for diagnostic purposes. One of those limitations is that it's not connected to the regular search index; it's a separate thing altogether.

Google's John Mueller commented on the site: search operator in 2021:

"The short answer is that a site: query is not meant to be complete, nor used for diagnostics purposes.

A site query is a specific kind of search that limits the results to a certain website. It's basically just the word site, a colon, and then the website's domain.

This query limits the results to a specific website. It's not meant to be a comprehensive collection of all the pages from that website."

2. A noindex tag without a robots.txt disallow is fine for situations like this one, where a bot is linking to non-existent pages that are getting discovered by Googlebot. A sketch of this recommended setup also appears at the end of this article.

3. URLs with the noindex tag will generate a "crawled/not indexed" entry in Search Console, and those entries won't have a negative impact on the rest of the website.

Read the question and answer on LinkedIn:

Why would Google index pages when they can't even see the content?

Featured Image by Shutterstock/Krakenimages.com
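
To make the scenario concrete, here is a minimal reconstruction of the setup described in the question. The exact robots.txt rule and URL pattern are assumptions for illustration; the question only specifies that non-existent ?q= parameter URLs carry noindex meta tags and are also disallowed in robots.txt.

```
# robots.txt (hypothetical) — blocks crawling of any URL containing a q parameter.
# Because Googlebot can't fetch these pages, it never sees the noindex tag below.
User-agent: *
Disallow: /*?q=
```

```html
<!-- Meta tag on the query parameter pages (hypothetical markup).
     Invisible to Googlebot while the robots.txt disallow is in place. -->
<meta name="robots" content="noindex">
```

With this combination, Google discovers the URLs from the bot-generated links, is blocked from fetching them, and may report them as "Indexed, though blocked by robots.txt."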
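
And here is a sketch of the arrangement Mueller recommends for this case: remove the disallow so the pages can be crawled, and let the noindex directive do the work. Again, the specifics are illustrative assumptions rather than rules taken from his post.

```
# robots.txt (hypothetical) — no disallow rule for the ?q= URLs,
# so Googlebot can fetch them and read the noindex directive.
User-agent: *
Disallow:
```

```html
<!-- Served on the ?q= pages; seen by Googlebot once crawling is allowed. -->
<meta name="robots" content="noindex">
```

The same directive can also be sent as an HTTP response header (X-Robots-Tag: noindex), which is useful for non-HTML resources. Either way, the URLs end up as "crawled/not indexed" in Search Console, which, per Mueller, doesn't harm the rest of the site.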