Hello,
The issue you’re experiencing with robots.txt is most likely a server misconfiguration. When you request “https://somesite/robots.txt” directly, you get a 404 Not Found, which means Nginx found neither a physical robots.txt file nor a handler at that path. The “nginx/1.20.2” footer on the error page simply identifies the web server and its version.
However, “https://somesite/?robots=1” does return the expected content. This strongly suggests that your CMS or application generates robots.txt dynamically and exposes it through the “?robots=1” query parameter, but the rewrite rule that should map the standard path “/robots.txt” to that handler is missing or broken. (WordPress, for example, serves its virtual robots.txt through exactly this kind of rewrite.)
To resolve this, check the server configuration and make sure requests for “/robots.txt” are routed correctly, either to a static file on disk or, more likely in your case, rewritten internally to the “?robots=1” handler. If you don’t manage the server yourself, consult the Nginx documentation or ask your hosting provider to add the rewrite for you.
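If you do have access to the Nginx configuration, a directive along one of these lines usually restores the standard path. This is a minimal sketch, assuming a typical setup; the document root path is an illustration and must be adjusted to your server:

```nginx
# Inside the server { } block for your site — pick ONE of the two options.

# Option 1: serve a static robots.txt file from disk
# (the root path below is an example, not your actual path).
location = /robots.txt {
    root /var/www/somesite/public;
}

# Option 2: the CMS generates robots.txt dynamically, so rewrite
# the standard path to the query-parameter handler that already works.
location = /robots.txt {
    rewrite ^ /?robots=1 last;
}
```

After editing, validate the configuration with `nginx -t` and apply it with `nginx -s reload`.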
Additionally, if a site-audit tool reports that robots.txt is missing, double-check that the file is reachable at exactly “/robots.txt” (lowercase, at the site root), returns HTTP 200, and follows the standard syntax for User-agent, Disallow, and Allow directives, with no typos in the file name or path.
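Once the file is being served, you can sanity-check its syntax with Python’s standard-library parser. This sketch parses an inline example; the rules and URLs here are illustrative, not taken from your site:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt content — replace with your own file's text.
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Rules are matched in order, so /private/ is blocked and the rest is allowed.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "https://example.com/public/page.html"))   # True
```

If the parser allows or blocks URLs contrary to what you intended, the directives in the file are malformed or ordered incorrectly.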
Once “/robots.txt” resolves correctly and the file parses cleanly, search engine crawlers will be able to access and follow the directives you’ve set for your website.