How to make sure your robots.txt file is working

Somewhere on your website there should be a file called robots.txt. If there isn't one, your website designer may well have decided not to include it. It is important to check this file occasionally, as we have recently discovered to our cost.

The robots.txt file acts as a filter for search engine crawlers – computer programmes that trawl your website for content and send this information back to Google and the other engines for ranking. The file determines which pages are crawled and which are ignored.

A robots.txt file that allows full access should read:
User-agent: *
Disallow:

The asterisk in the User-agent line means the rule applies to every search engine crawler. The Disallow instruction specifies which areas the robots are not allowed to access; left empty, as above, it blocks nothing, so the whole site can be crawled.
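
For example, to keep crawlers out of a private area while leaving the rest of the site open, list the path to block after Disallow (the /admin/ folder here is purely illustrative):

User-agent: *
Disallow: /admin/
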
Check your robots.txt file by typing www.yourwebsite.co.uk/robots.txt into your browser's address bar.
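
You can also fetch the file programmatically. The following is a minimal Python sketch using only the standard library; the www.yourwebsite.co.uk address is a placeholder for your own domain:

import urllib.request

# Download the live robots.txt and print it for inspection.
url = "https://www.yourwebsite.co.uk/robots.txt"
with urllib.request.urlopen(url) as response:
    print(response.read().decode("utf-8", errors="replace"))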

If your file contains the following:

User-agent: *
Disallow: /

Your robots.txt file is blocking the whole website from the search engines: the slash after Disallow matches every URL on the site. This happened to one of our clients, www.Ten-Percent.co.uk – we are still not sure how the file was altered, whether by a virus or by the client inadvertently changing it at some stage. The effect was to remove Ten-Percent Legal Recruitment from the first page of Google search results for two of their keywords. We are fairly confident of a swift return (we spotted the error after two months), but it is worth checking your own file to make sure something similar has not happened to your site.
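
To catch this kind of accidental block automatically, you can test the live file with Python's standard-library robots.txt parser. This is a minimal sketch, again using the placeholder domain www.yourwebsite.co.uk; it asks whether a crawler following the User-agent: * rules may fetch the homepage:

from urllib.robotparser import RobotFileParser

# Load and parse the live robots.txt file.
parser = RobotFileParser()
parser.set_url("https://www.yourwebsite.co.uk/robots.txt")
parser.read()

# can_fetch() returns False when a rule such as "Disallow: /"
# blocks the given user agent from the given URL.
if parser.can_fetch("*", "https://www.yourwebsite.co.uk/"):
    print("Homepage is crawlable")
else:
    print("Warning: robots.txt is blocking crawlers from the homepage")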

For web development services, SEO and online marketing, please visit https://chesterwebmarketing.co.uk/