Free Robots.txt Generator – Optimize Your Site Fast


Robots.txt Generator

The generator above lets you choose a default policy for all robots (allowed or refused), set an optional crawl-delay, enter the URL of your XML sitemap (leave it blank if you don't have one), set per-crawler rules for common robots (Google, Google Image, Google Mobile, MSN Search, Yahoo, Yahoo MM, Yahoo Blogs, Ask/Teoma, GigaBlast, DMOZ Checker, Nutch, Alexa/Wayback, Baidu, Naver, MSN PicSearch), and list restricted directories. Each restricted path is relative to the root and must end with a trailing slash "/".

When the output is generated, create a file named robots.txt in your site's root directory, then copy the generated text above and paste it into that file.


About the Robots.txt Generator

What is a robots.txt file?

A robots.txt file is a plain text file that website owners create to tell search engine robots how to crawl the pages on their site. It is part of the Robots Exclusion Protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content to users.
In practice, a robots.txt file specifies which crawlers may or may not access parts of a website. These instructions are expressed by "allowing" or "disallowing" the behavior of specific user agents.

Syntax:

    User-agent: [user-agent name]
    Disallow: [URL path that should not be crawled]
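
For example, a minimal sketch of a real file, assuming a hypothetical /private/ directory you want to keep out of crawlers' reach, looks like this:

    User-agent: *
    Disallow: /private/

The asterisk is a wildcard that matches every robot, so this file asks all crawlers to stay out of /private/ while leaving the rest of the site open.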

Important Features of robots.txt File

  • The robots.txt file is publicly available: add "/robots.txt" to the end of any root domain to see that site's directives. Because anyone can see which pages you do or don't want crawled, never use robots.txt to hide private user information.
  • Each subdomain on a root domain uses its own separate robots.txt file.
  • The filename is case-sensitive: the file must be named exactly "robots.txt", not "Robots.txt", "robots.TXT", or any other variation.
  • The file must be placed in the website's top-level (root) directory so that crawlers can find it.
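
To illustrate the subdomain rule, using the reserved example.com domain as a stand-in:

    https://example.com/robots.txt        applies to example.com
    https://blog.example.com/robots.txt   applies only to blog.example.com

Each host must serve its own file; directives in one do not carry over to the other.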

Why do you need robots.txt?

Robots.txt files control crawler access to certain areas of your site. Modifying a robots.txt file incorrectly can be dangerous, especially if you accidentally block Googlebot from accessing your entire site, but there are many situations in which a robots.txt file is very useful.
Some common use cases include (see the example after this list):

  • Preventing duplicate content from appearing in SERPs.
  • Keeping entire sections of a website private.
  • Keeping internal search results pages from showing up in public SERPs.
  • Specifying the location of your sitemap.
  • Preventing search engines from crawling certain files on your website.
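
A minimal sketch covering several of these cases, assuming hypothetical /search/ and /admin/ paths and a placeholder sitemap URL, might look like this:

    User-agent: *
    Disallow: /search/
    Disallow: /admin/

    Sitemap: https://example.com/sitemap.xml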

How does robots.txt work?

Search engines have two main jobs:

  1. To crawl the web to discover content.
  2. To index that content so that it can be served to users who are looking for information.

After arriving at a website, a search engine crawler looks for a robots.txt file. If it finds one, it reads that file before crawling anything else, because the directives it contains tell the crawler which parts of the site it may visit. If the site has no robots.txt file, the crawler simply proceeds to crawl the rest of the site without restrictions.
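
For example, in the hypothetical file below, Googlebot follows its own group of rules while every other crawler falls back to the wildcard group:

    User-agent: Googlebot
    Disallow: /drafts/

    User-agent: *
    Disallow: /drafts/
    Disallow: /staging/

A crawler obeys only the most specific group that matches its user-agent name, so Googlebot here may crawl /staging/ even though all other robots are asked to stay out of it.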

How can you create your first robots.txt file?

  1. First, choose whether to allow or disallow all web robots by default. This menu lets you decide whether you want your website to be crawled.
  2. Add your XML sitemap by entering its location in the sitemap field.
  3. In the last text box, list any pages or directories you want to block search engines from crawling.
  4. When you are done, download your robots.txt file.
  5. Finally, upload the generated robots.txt file to the root directory of your domain.
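
As a sketch of what the generated result might look like, assuming you allowed all robots by default, set a 10-second crawl-delay, blocked a hypothetical /cgi-bin/ directory, and entered a placeholder sitemap URL:

    User-agent: *
    Disallow: /cgi-bin/
    Crawl-delay: 10

    Sitemap: https://example.com/sitemap.xml

Note that Crawl-delay is a nonstandard directive: some crawlers such as Bing honor it, while Google ignores it.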