Thank you for your feedback, it is much appreciated.
To understand why this occurs, you need to understand the basics of caching and how WordPress works.
Firstly, the current implementation of the plugin is designed to operate within the WordPress process, when no caching mechanism is in place. When a request is made to a WordPress site, we hook into the request parser object within the WordPress API to check the request details, and WordPress then dynamically builds the web page to serve.
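For illustration, here is a minimal sketch of that approach. This is not the plugin's actual source; the hook, the user-agent list and the callback are assumptions made up for the example. The point is simply that on an uncached site every request reaches PHP, so the plugin can inspect it before WordPress builds the page.

```
<?php
/*
 * Illustrative sketch only, not the plugin's real code: on an uncached site
 * every request reaches PHP, so a plugin can hook the WordPress
 * request-parsing stage and inspect the request before the page is built.
 */
add_action( 'parse_request', function ( $wp ) {
    // Hypothetical list of user-agent fragments to block.
    $blocked_agents = array( 'BadBot', 'EvilCrawler' );

    $user_agent = isset( $_SERVER['HTTP_USER_AGENT'] ) ? $_SERVER['HTTP_USER_AGENT'] : '';

    foreach ( $blocked_agents as $agent ) {
        if ( stripos( $user_agent, $agent ) !== false ) {
            // Stop WordPress before it dynamically builds the page.
            wp_die( 'Access denied.', 'Forbidden', array( 'response' => 403 ) );
        }
    }
} );
```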
However, when caching is involved, pre-rendered HTML files are physically stored on the server. When a request is made, it is handled entirely by Apache or Nginx, depending on the web server technology your hosting provider uses, so WordPress (and therefore the plugin) is never invoked for that request.
Therefore, we would need to build configuration options for Apache/Nginx into the plugin. Whilst not too difficult to do, it is rather time consuming and would require significant QA effort to ensure it works under the many different hosting configurations out there.
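By way of illustration, this is roughly the sort of thing the plugin would have to start doing on an Apache host with caching enabled: write its blocking rules into the web server configuration (e.g. .htaccess) so they apply before the cached file is ever served. The function name and rules below are assumptions for the example, not an existing feature of the plugin, and Nginx would need a different, manually applied configuration, which is part of why the QA effort is significant.

```
<?php
/*
 * Hypothetical sketch, not an existing feature of the plugin: because cached
 * pages are served by Apache before PHP runs, the blocking rules would have
 * to live in .htaccess rather than in a WordPress hook.
 */
require_once ABSPATH . 'wp-admin/includes/misc.php'; // provides insert_with_markers()

function swc_write_htaccess_rules( array $blocked_agents ) {
    $pattern = implode( '|', array_map( 'preg_quote', $blocked_agents ) );

    $rules = array(
        '<IfModule mod_rewrite.c>',
        'RewriteEngine On',
        // Return 403 when the User-Agent matches a blocked crawler, even
        // when Apache serves a pre-rendered cached file and PHP never runs.
        'RewriteCond %{HTTP_USER_AGENT} (' . $pattern . ') [NC]',
        'RewriteRule .* - [F,L]',
        '</IfModule>',
    );

    // Writes the rules between "# BEGIN StopWebCrawlers" / "# END StopWebCrawlers" markers.
    return insert_with_markers( ABSPATH . '.htaccess', 'StopWebCrawlers', $rules );
}
```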
We presently do not have the time or the resources, due to the significant amount of additional paid work/projects we are engaged in. However, as this is a Free Open Source Project and open to community contributions, anyone who would like to contribute to the project certainly can. We would be only too happy to accept pull requests for any enhancements/features.
If you would like to make a contribution, you can visit our GitHub repository at https://github.com/threenine/StopWebCrawlers, clone it, and get to work.
Alternatively, if you would like to make a financial contribution to enable the team to dedicate time and resources to the project outside of their free time, you can do so either by using the donate options within the plugin or right here on the plugin homepage.
Thank you again for your feedback; we will attempt to implement this functionality in the plugin as and when resource constraints allow.