Web spiders, robots, and crawlers retrieve web pages and recursively traverse the hyperlinks those pages contain to discover further content. Well-behaved robots follow the Robots Exclusion Protocol, which is specified in the robots.txt file found in the web application's web root folder. The robots.txt file lists the folders and paths that compliant crawlers must ignore. However, a spider, robot, or crawler can intentionally ignore the Disallow directives in robots.txt; such non-compliant robots can be found on many social networks. For this reason, robots.txt is not a safe method for restricting how third parties use web content.
An attacker can retrieve the robots.txt file using wget.
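A minimal example of such a request, assuming the target application is hosted at the hypothetical domain example.com:

```
$ wget https://example.com/robots.txt
```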
The output of the above command is similar to the following:
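The exact output depends on the wget version and the server's response; an illustrative run against the hypothetical example.com might look like:

```
--2024-05-01 10:00:00--  https://example.com/robots.txt
Resolving example.com... 93.184.216.34
Connecting to example.com|93.184.216.34|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 57 [text/plain]
Saving to: 'robots.txt'

'robots.txt' saved [57/57]
```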
The following is an example of a robots.txt file:
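A hypothetical robots.txt illustrating the kind of information such a file can leak:

```
User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow: /config/
Disallow: /internal/reports/
```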
From these entries, an attacker can easily enumerate the hidden folders used by the application.
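This enumeration step can be sketched in shell. The robots.txt content below is hypothetical and is written inline so the snippet is self-contained; in practice an attacker would work from a copy fetched with wget:

```shell
# Write a hypothetical robots.txt locally, standing in for a fetched copy.
cat > robots.txt <<'EOF'
User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow: /config/
EOF

# List the paths the site asked crawlers to avoid, which are
# exactly the "hidden" folders an attacker would probe first.
grep -i '^Disallow:' robots.txt | awk '{print $2}'
```

Each printed path can then be requested directly to check whether the folder is reachable without authentication.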
Mitigation / Precaution
Beagle recommends the following fix:
Make sure the robots.txt file does not reveal sensitive details about the application's directory layout or internal folder structure.