Google Search Console Crawl Stats
Google fetches and crawls web pages remarkably quickly and efficiently, and the Crawl Stats report provides insight into three metrics:
Pages Crawled Per Day
Kilobytes downloaded per day
This metric shows how many kilobytes (thousands of bytes) Google downloaded while crawling your web pages each day
Time spent downloading a page (in milliseconds)
The time, in milliseconds, that Google’s crawlers spent downloading a page, reported per day
What is Important When Analyzing Google Search Console Crawl Stats?
The Crawl Stats page should be checked for any unusual activity in Google’s crawling process. The same applies to most of the reports you analyze in Google Search Console: Google Webmaster Tools is mainly used for identifying errors and issues.
Think of it this way: if you publish content on a regular basis, Google will allocate a certain crawl budget to your site. The more content you publish regularly, the more often Google will crawl your website; the less you publish, the less crawling Google will do. In terms of improving Google rankings, you should publish blog posts regularly.
Can You Increase Google Crawling Requests?
Although it’s best to let Google work out when and how often it should crawl your website, you can encourage more frequent crawling by making sure your website has an XML sitemap and gives cues such as Date Modified information.
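As an illustration, a minimal XML sitemap entry with a Date Modified cue uses the `<lastmod>` element (the URL and date below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- The page's canonical URL -->
    <loc>https://www.example.com/blog/sample-post/</loc>
    <!-- lastmod tells crawlers when the page last changed (W3C date format) -->
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```

When a page’s `<lastmod>` date changes, it signals to Googlebot that the page may be worth recrawling.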
You can also use Structured Data to give further cues about the Published and Updated dates of your blog posts.
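For example, an article’s published and updated dates can be declared in JSON-LD structured data placed in the page’s HTML (the headline and dates below are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Sample Blog Post",
  "datePublished": "2024-01-15T09:00:00+00:00",
  "dateModified": "2024-02-01T12:30:00+00:00"
}
</script>
```

The datePublished and dateModified properties here serve the same purpose as the microdata itemprops shown below.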
The code below is used in WordPress themes to output a blog post’s time information; it includes the datePublished and dateModified schema markup item properties:
<time itemprop="datePublished" class="entry-date published" datetime="%1$s">%2$s</time><time itemprop="dateModified" class="updated" datetime="%3$s">%4$s</time>
What is a Web Crawler?
A web crawler (also known as a user-agent, spider, web fetcher, or bot) is a generic term for any computer program that automatically discovers and scans websites by following links from one web page to another. Googlebot is Google’s main web crawler, and in robots.txt you can use that crawler name to set rules that apply to many of Google’s other crawlers.
However, keep in mind that, depending on your website’s requirements, you may want to use different directives for different Google crawlers.
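For instance, a robots.txt file can address different Google crawlers separately by naming their user-agent tokens (the disallowed paths below are placeholders):

```text
# Applies to Google's main web crawler
User-agent: Googlebot
Disallow: /private/

# Applies only to Google's image crawler
User-agent: Googlebot-Image
Disallow: /images/drafts/
```

A crawler follows the most specific group that matches its name, so Googlebot-Image would use its own rules here rather than the general Googlebot group.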