• Wordfence is scanning a huge number of files for sites hosted on one particular LAMP (Ubuntu 12.04) server. Typically, it’s scanning 48,000 files for sites that actually contain only 4000 to 8000 files. It looks like this behaviour started with the Wordfence 6.2 update.

    Example from one very small site:
    [Sep 30 04:35:30] Scan Complete. Scanned 48283 files, 17 plugins, 5 themes, 5 pages, 0 comments and 16213 records in 1 hour 19 minutes 46 seconds.
    [Sep 30 04:35:31] Wordfence used 119.96MB of memory for scan. Server peak memory usage was: 209.09MB

    All those numbers (17 plugins, 5 themes, 5 pages, 0 comments and 16213 records) are correct except the number of files.

    I checked all the Wordfence options, paying particular attention to ‘Scan files outside your WordPress installation’ (which was, and is, disabled). I enabled ‘Delete Wordfence tables and data on deactivation?’, deactivated, removed, installed and activated Wordfence on one of the sites, and nothing changed. I tried disabling all of the scan options, and while some of the checks were skipped, the scan still went through all those 48,000 files as usual.

    Also, during scans on affected sites, the server’s hard drive light is on constantly, and disk I/O is extreme. I should point out that Wordfence scans have always hammered my server’s hard drive like this, but I’ve always assumed it was normal. Now I’m not so sure.

    Since upgrading to Wordfence 6.2, scans are taking longer to complete, which makes sense if they’re scanning six times as many files. Previously they ran for about 20 minutes; now they’re running over an hour in some cases.

    On other servers, scans are actually faster with Wordfence 6.2, and each batch of 100 files takes longer to scan on the server in question than it does for sites on other servers.

    Clearly there’s something about my server or the way it (Linux/Apache/WordPress/Wordfence) is configured that’s causing the excessive disk I/O and the mysterious extra files. But what that is remains a mystery.

    I’ve enabled debugging for one of the sites and will post any new info here.

Viewing 11 replies - 1 through 11 (of 11 total)
  • Plugin Author WFMattR

    (@wfmattr)

    Hi,

    Sorry to hear about the trouble — there might be a couple of possibilities. Do the sites have any symbolic links in their directories? We’ve seen some servers in the past that include a symlink like ~/public_html/www that points back to ~/public_html, which can cause a loop (www/ and www/www/ and www/www/www/ could be scanned). Wordfence will follow the symlinks but version 6.2 keeps track of the real locations of files scanned, which might cause the extra counting (and attempted disk reads) — older versions behaved differently on different servers in this situation. If that is the issue, you can add the symlink’s name to the field “Exclude files from scan that match these wildcard patterns” on the Wordfence options page.
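    A quick way to check for that kind of loop from a shell is sketched below (the directory layout is hypothetical, built in a temp directory just to illustrate; on a real site you would run the find from the site root, e.g. ~/public_html):

```shell
# Build a throwaway tree containing the kind of self-referencing symlink
# described above (hypothetical layout, for illustration only).
demo=$(mktemp -d)
mkdir -p "$demo/public_html"
ln -s "$demo/public_html" "$demo/public_html/www"

# List every symlink in the tree:
find "$demo/public_html" -type l

# Resolve the link target; a target inside the same tree means a loop:
readlink -f "$demo/public_html/www"

rm -rf "$demo"
```

    On an affected site, any symlink that find reports whose resolved target sits inside the document root is a candidate for the wildcard-exclusion field mentioned above.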

    If that isn’t it, do you know what PHP’s max_execution_time on that server is? By default, Wordfence uses half of that value for each stage of the scan — if you can set “Maximum execution time for each scan stage” to 80% of the maximum instead, is there any improvement? We’re looking at an issue where some large directories may be processed multiple times if the scan stages are too short. (So far, we’ve only seen that on sites with large numbers of uploads and “Scan images, binary, and other files as if they were executable” enabled.)
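    The timing rule described above works out as follows (a sketch with shell arithmetic; the 30-second limit is just an example value, so check php.ini on your own server):

```shell
# Example only: read the real limit from php.ini or `php -i`.
max_execution_time=30

# Wordfence's default: half of PHP's max_execution_time per scan stage.
default_stage=$(( max_execution_time / 2 ))

# The suggested override: 80% of the limit instead.
suggested_stage=$(( max_execution_time * 80 / 100 ))

echo "default=${default_stage}s suggested=${suggested_stage}s"
# prints: default=15s suggested=24s
```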

    -Matt R

    Thread Starter jrivett

    (@jrivett)

    Thanks for the information.

    I checked for symlinks in the site files, and found none. So clearly that’s not the issue.

    PHP has max_execution_time set globally to 30, and the sites in question had “Maximum execution time for each scan stage” set to blank, which means Wordfence would have been using 15. I changed the setting to 24 (80% of 30) as requested, and YAY, it’s back to scanning the right number of files. Total scan times are now on the order of 15 minutes. Thanks!!

    Still, the scanning does seem to take longer than it ought (based on other people’s scan times), and it does still hammer my server’s poor hard drive, AND there’s no noticeable improvement in scan times or I/O activity in Wordfence 6.2. Should I perhaps report this separately?

    Plugin Author WFMattR

    (@wfmattr)

    Hi,

    Thanks for the additional information. I’m glad this helped decrease the scan time again — it’s ok to continue discussing the remaining issue here too. If you can send me the scan logs from one or two of the sites that had this issue, I can see more about the performance.

    To send a detailed log:

    1. Go to the Diagnostics page and turn on “Enable debugging mode” near the bottom of the page

    2. Run a scan manually

    3. When the scan has finished (even if it stops early), click the link “Email activity log” above the Scan Detailed Activity box, and then click the Send button

    4. Turn off the debugging mode option again on the Diagnostics page, for best performance

    -Matt R

    Thread Starter jrivett

    (@jrivett)

    I enabled debugging mode on one of the sites and ran a scan before the change to “Maximum execution time for each scan stage”. The detailed activity box stopped updating during the scan, and I eventually killed the scan as it was taking forever. Attempting to view the log in Wordfence showed a blank page. However, I emailed it to myself and that worked, or at least I now have a good sized chunk of that scan. Should I forward that email to [email protected], or run another scan (now that “Maximum execution time for each scan stage” has been adjusted) and send that?

    Thread Starter jrivett

    (@jrivett)

    I sent that earlier log to you, as well as the log from a new debug scan, which completed this time, taking about two hours. Interestingly, despite having already demonstrated that changing “Maximum execution time for each scan stage” to 24 fixed the problem of scanning too many files, the new debug scan once again scanned too many files:
    Scan Complete. Scanned 45268 files, 17 plugins, 5 themes, 5 pages, 0 comments and 44279 records in 1 hour 57 minutes 32 seconds.

    So it looks like debug mode changes the conditions enough to trigger that weirdness. I’m hoping you find that useful in your own investigations.

    One other thing I should mention. My tests show that Wordfence scans on sites hosted on a DreamHost VPS process about 500 files per second. On my own server, where I should theoretically be getting better performance, it’s about 10 files per second. That’s a huge difference, clearly.

    I’m starting to suspect a problem with jbd2/sda1-8 on my server: when there’s a lot of disk I/O going on (including during Wordfence scans), jbd2/sda1-8 is doing 99% of that I/O.
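    For anyone else chasing the same symptom: jbd2 is the ext4 journal-commit kernel thread (one per journalled partition, hence names like jbd2/sda1-8), and it can be spotted like this (a sketch; iotop needs root, so it appears only in a comment):

```shell
# List any running jbd2 kernel threads; the bracket trick keeps grep
# from matching its own command line.
ps -eo pid,comm | grep '[j]bd2' || echo "no jbd2 threads found"

# jbd2 only commits the journal; to see which user processes generate
# the underlying writes, a tool such as iotop helps (requires root):
#   sudo iotop -o -a
```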

    Plugin Author WFMattR

    (@wfmattr)

    Hi,

    Thanks for the additional details. I didn’t receive the scan logs; unfortunately, they can sometimes be blocked as spam because of their repetitive content. But we have identified a similar issue when the “files per second” rate is low, and we will be including some improvements for that situation in an upcoming version.

    Since you’re seeing a lot of activity from jbd2, if it’s mainly during the scans, it might be that the atime option is enabled when disks are mounted — it causes a timestamp to be updated every time a file is accessed, even if it’s not modified. Some details on that are available here:
    https://askubuntu.com/questions/2099/is-it-worth-to-tune-ext4-with-noatime

    (relatime or noatime could improve speed if that is the case, but just be sure nothing else you’re running relies on that before changing it, of course.)
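    Whether atime, relatime, or noatime is actually in effect can be read straight from /proc/mounts (a sketch, shown for the root filesystem; adjust the mountpoint to wherever the sites live):

```shell
# Print the effective mount options for the root filesystem; field 2 of
# /proc/mounts is the mountpoint, field 4 the option list. Look for
# atime-related flags (relatime, noatime) in the output.
awk '$2 == "/" { print $4 }' /proc/mounts
```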

    -Matt R

    Thread Starter jrivett

    (@jrivett)

    Regarding the missing log: should I try again to send a debug log, or perhaps send a portion of a log instead of the whole thing? It might contain information useful to your efforts.

    That’s good news on the upcoming improvements for slow ‘files per second’. There’s no way to know if it will have any effect on my issue, but my fingers are crossed.

    My own research into this problem led to similar discussions of the possible performance benefits of noatime and relatime. I haven’t yet tried changing anything, but I do see that my main filesystem mounts all either specify relatime or don’t specify any options, which means the default should be used, and for Ubuntu 12, that’s relatime.
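    The fstab check described above can be scripted; the awk below prints the mountpoint and options columns of each real entry. It is run against a made-up sample file here, since the point is the technique, not this particular fstab:

```shell
# A hypothetical fstab used only to illustrate; substitute /etc/fstab
# to inspect a real server.
cat > /tmp/fstab.sample <<'EOF'
# <device>  <mountpoint> <type> <options>                  <dump> <pass>
/dev/sda1   /            ext4   relatime,errors=remount-ro 0      1
/dev/sda2   /home        ext4   defaults                   0      2
EOF

# Print mountpoint and options for every non-comment line; "defaults"
# means the kernel default applies (relatime on Ubuntu 12.04).
awk '!/^[[:space:]]*#/ && NF >= 4 { printf "%-8s %s\n", $2, $4 }' /tmp/fstab.sample

rm -f /tmp/fstab.sample
```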

    Since I see the same high levels of jbd2 disk I/O when other disk-intensive processes are running (e.g. ClamAV database updates), I think it’s probably safe to assume that the problem is the server, not Wordfence.

    I’ll update this post if I learn anything new, or if subsequent Wordfence updates fix the problem, since it may be of some use to others.

    Thanks for your help!

    Plugin Author WFMattR

    (@wfmattr)

    Thanks for the update — I’ve heard that relatime should be fine for performance in most cases, so maybe that isn’t related. I haven’t done any comparisons myself and used the defaults on recent servers. If you can try sending the scan log again, I can look for it. If you tried sending it directly from the plugin, it’s possible that forwarding the copy you received would work. (Or if you forwarded it before, try sending it directly from the plugin — either way, please use [email protected]).

    -Matt R

    Plugin Author WFMattR

    (@wfmattr)

    Hi,

    Sorry for the delayed response. I did receive the forwarded scan log this time, and I can see that this does look like the same issue that should be fixed in the next version — we just released that new version today, 6.2.1, so you can install it and try a scan when ready.

    The disk activity may still be high during scans, but should be more efficient. As long as the scan is running properly, you can try turning on the new scan option “Use low resource scanning. Reduces server load by lengthening the scan duration” — this spreads out the scan over a longer time, which can help the disk keep up with demand from other processes or swapping if the machine is low on free memory.

    -Matt R

    Thread Starter jrivett

    (@jrivett)

    Thanks for the update. I installed the new version of Wordfence but sadly the scans show the same behaviour as before, with each set of 100 files taking 10+ seconds to complete.

    At this point I’m about 95% certain that the remaining problem is not Wordfence, and I’ll know for sure when I upgrade Linux on my server.

    I just found this thread, as I’m facing a problem of the same nature.

    The main difference is that I’m using GlusterFS for the storage backend. Does anyone have experience with GlusterFS in this role? I’m seeing a LOT of I/O on the files in the wp-content/wflogs directory.

  • The topic ‘Wordfence scanning way too many files, using tons of I/O’ is closed to new replies.