Hi @oliverkardos,
I always forward requests to the development team for internal discussion, and I’ll do that in your case. However, I am already aware that bot/human detection can be tricky to always get right, despite the use of some helpful tools.
Wordfence classifies a visitor as a bot or human by checking a large list of IP blocklists and by looking for human-like activity on your site with JavaScript. These checks can misidentify a human as a bot if JavaScript is off in the visitor’s browser (or script errors, conflicts, or local settings keep it from loading properly), or if the visitor shares part of a blocked IP range. Similarly, if there wasn’t much on-screen activity from the user on your site, they could be mistaken for a bot.
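To picture the IP-range side of that check, here is a minimal sketch (not Wordfence’s actual code, and the blocklist entries are made-up documentation ranges) of how a visitor can match a block simply by sharing a range with a flagged address:

```python
import ipaddress

# Hypothetical blocklist; real products pull large, frequently updated
# feeds rather than hard-coding ranges like this.
BLOCKED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def shares_blocked_range(visitor_ip: str) -> bool:
    """Return True if the visitor's IP falls inside any blocked range."""
    ip = ipaddress.ip_address(visitor_ip)
    return any(ip in net for net in BLOCKED_RANGES)

print(shares_blocked_range("203.0.113.45"))  # True - inside the blocked /24
print(shares_blocked_range("198.51.100.7"))  # False - unrelated address
```

This is why a human on a shared host or VPN can end up classified as a bot: the match is on the range, not on anything that visitor did.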
The important part is that Wordfence is looking for bots or humans behaving in a malicious manner; simply detecting a visitor as a bot rather than a human isn’t necessarily significant in itself.
I generally set my Rate Limiting Rules to these values to start with:
[Screenshot: Rate Limiting settings]
- If anyone’s requests exceed – 240 per minute
- If a crawler’s page views exceed – 120 per minute
- If a crawler’s pages not found (404s) exceed – 60 per minute
- If a human’s page views exceed – 120 per minute
- If a human’s pages not found (404s) exceed – 60 per minute
- How long is an IP address blocked when it breaks a rule – 30 minutes
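The rules above boil down to counting each IP’s requests over a one-minute window and temporarily blocking any IP that exceeds its limit. A minimal sketch of that idea (illustrative only, not Wordfence’s internals; shown with just the 240-requests-per-minute rule):

```python
import time
from collections import defaultdict

REQUESTS_PER_MINUTE = 240   # "If anyone's requests exceed - 240 per minute"
BLOCK_SECONDS = 30 * 60     # "How long is an IP address blocked" - 30 minutes

hits = defaultdict(list)    # ip -> timestamps of recent requests
blocked_until = {}          # ip -> unix time when the block expires

def allow_request(ip, now=None):
    """Sliding one-minute window: once an IP exceeds the per-minute
    limit, refuse its requests for the next 30 minutes."""
    now = time.time() if now is None else now
    if blocked_until.get(ip, 0) > now:
        return False
    # Keep only hits from the last 60 seconds, then record this one.
    window = [t for t in hits[ip] if now - t < 60]
    window.append(now)
    hits[ip] = window
    if len(window) > REQUESTS_PER_MINUTE:
        blocked_until[ip] = now + BLOCK_SECONDS
        return False
    return True
```

Wordfence tracks crawlers and humans with separate counters (page views vs. 404s), but each rule follows this same count-then-block shape.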
I also always set each rule to Throttle instead of Block. Throttling is generally better than blocking because a good search engine crawler understands what happened if it is mistakenly throttled, and your site isn’t penalized because of it. Make sure to set your Rate Limiting Rules realistically, and set the value for how long an IP is blocked to 30 minutes or so.
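One way to picture the difference (a generic sketch of a common HTTP convention, not how Wordfence responds internally): a throttled client gets a temporary “slow down and retry” signal, while a blocked client gets a hard refusal it may treat as the page being inaccessible.

```python
def respond(verdict):
    """Illustrative responses only; the status codes are a common web
    convention, not Wordfence's documented behavior."""
    if verdict == "throttle":
        # 429 + Retry-After tells well-behaved crawlers to back off
        # and come back later, so the page isn't treated as gone.
        return 429, {"Retry-After": "1800"}
    if verdict == "block":
        # A hard refusal with no recovery hint.
        return 403, {}
    return 200, {}
```

A crawler that understands the first response simply retries later, which is why throttling is the safer default for search engine traffic.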
Remember there is no hard-and-fast, one-size-fits-all set of rules for every site; this is just a good place to start. During an attack you may want to make these rules stricter. If you see legitimate visitors, like search engine crawlers, getting blocked too often, you might want to loosen them a little.
Thanks,
Peter.