Web spiders, robots, and crawlers retrieve a webpage and then recursively traverse the hyperlinks it contains to fetch further web content. Well-behaved robots follow the Robots Exclusion Protocol, which is specified in the robots.txt file located in the web application's web root folder. The robots.txt file lists the folders and paths that crawlers are asked to ignore. However, a spider, robot, or crawler can intentionally ignore the Disallow directives in robots.txt; crawlers that do so are common, for example on many social networks. For this reason, robots.txt is not a safe mechanism for enforcing restrictions on how third parties use your web content.
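As a minimal sketch of how a well-behaved crawler consults robots.txt (using Python's standard urllib.robotparser; the URL and user-agent name are illustrative), note that a malicious crawler can simply skip this check:

from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt (URL is illustrative)
parser = RobotFileParser("https://www.example.com/robots.txt")
parser.read()

# A polite crawler checks each URL before fetching it;
# nothing forces a hostile crawler to perform this check.
allowed = parser.can_fetch("MyCrawler", "https://www.example.com/admin/")
print(allowed)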
An attacker can retrieve the robots.txt file using wget:
wget http://www.google.com/robots.txt
The command saves the file locally. An illustrative excerpt of the kind of directives it returns is shown below; the live file is much longer and changes over time:
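User-agent: *
Disallow: /search
Allow: /search/about
Disallow: /sdch
Disallow: /groups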
Example
The following is a hypothetical robots.txt file (the folder names are illustrative) that discloses internal folders to anyone who requests it:
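# Folders the application does not want crawled
User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow: /config/
Disallow: /internal-api/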
Impact
An attacker can easily find the hidden folders used by the application, because each Disallow entry points directly to a path the developers did not want exposed. For instance, the disallowed paths can be extracted with a one-liner (assuming curl and grep are available; the URL is illustrative):
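curl -s https://www.example.com/robots.txt | grep -i "^disallow"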
Mitigation / Precaution
Beagle recommends the following fixes:
Make sure the robots.txt file does not reveal the application's internal directory and folder structure. Enforce access restrictions on sensitive paths at the server level instead of relying on robots.txt, as sketched below.
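As an illustrative sketch (assuming an nginx deployment and a hypothetical /admin/ path), a sensitive path can be protected with server-level access control rather than being advertised in robots.txt:

location /admin/ {
    # Hypothetical internal path: respond as if it does not exist
    # instead of listing it in robots.txt for attackers to read.
    return 404;
}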