• Resolved websavers

    (@websavers)


    Heya,

    Not sure how you normally handle these kinds of issues, but wanted to at least document this here in case others experience it.

    We have a WooCommerce site with the WooCommerce Products Filter (WOOF) plugin active with a few taxonomy filters and its text search enabled.

    When Docket Cache’s Object Cache Drop-In is enabled and a text search occurs, CPU usage spikes considerably and the search takes perhaps 5-10x as long as it normally does. Because it takes so long, it reaches the PHP timeout in most cases, particularly if there are multiple searches running at once (which is pretty common for a busy WooCommerce site). The WooCommerce exception log says:

    2022-04-22T19:23:22+00:00 CRITICAL Maximum execution time of 30 seconds exceeded in /path-to-wp/wp-content/plugins/docket-cache/includes/src/Filesystem.php on line 826

    When I disable the Object Cache Drop-In, performance goes back to normal.

    Is this an acceptable solution? Or should there be some kind of exclusion for queries from this plugin, or should its specific issue be narrowed down so the problem can be resolved?

    TIA

    • This topic was modified 2 years, 7 months ago by websavers.
Viewing 15 replies - 1 through 15 (of 20 total)
  • Plugin Author Nawawi Jamili

    (@nawawijamili)

    Hi there,

    Thanks for reporting this issue.

    You may try disabling these options one by one on the Configuration Page:

    – Admin Object Cache Preloading
    – Advanced Post Caching
    – Object Cache Precaching

    2022-04-22T19:23:22+00:00 CRITICAL Maximum execution time of 30 seconds exceeded in /path-to-wp/wp-content/plugins/docket-cache/includes/src/Filesystem.php on line 826

    From the latest version, it seems cachedir_flush() takes too long to finish. You may download the dev version, which fixes the “maximum execution time” issue, here: https://docketcache.com/devtest/docket-cache-21.08.05-devfix1.zip

    Is this an acceptable solution? Or should there be some kind of exclusion for queries from this plugin, or should its specific issue be narrowed down so the problem can be resolved?

    Docket Cache is file-based caching, and file-based caching has a known issue with server I/O resources. I will try to replicate this issue and improve it if possible.

    Thanks.

    Plugin Author Nawawi Jamili

    (@nawawijamili)

    Hi,

    This is the latest dev file for the next release; it improves some file/directory iterations and CPU usage:
    https://docketcache.com/devtest/docket-cache-21.08.05-devfix2.zip

    You may try it first.

    Thanks.

    Thread Starter websavers

    (@websavers)

    Hey @nawawijamili

    Tried out the devfix version. Immediately after reactivating the Object Cache Drop In, load went from 0.5-0.6 up to 2.5 and numerous PHP processes were stuck at 100% CPU.

    Disabled it again and processing went back to normal (after a PHP-FPM restart to clear the long-running processes).

    You mentioned these options:

    – Admin Object Cache Preloading: was enabled
    – Advanced Post Caching: was enabled
    – Object Cache Precaching: was disabled

    I disabled the first two as well (so all of those are disabled), then enabled the object cache drop in again and the issue returned.

    I dunno what it is about the WooCommerce Products Filtering plugin, but it’s definitely got some kind of incompatibility with object caching. Is there a way to determine how to exclude its searches, or something like that? It’d be nice if they could play nice together without having to disable options globally.

    Plugin Author Nawawi Jamili

    (@nawawijamili)

    @websavers

    Ok, I think we need to do some debugging here:

    1. Enable WP Debug and WP Debug Log.
    2. Enable Cache Log: Configuration -> Cache Log
    3. Enable Cache Log -> Log All
    4. Disable WP-Cron by adding this line to wp-config.php:

    
    define('DISABLE_WP_CRON', true);
    

    5. Restart php-fpm + nginx
    6. Monitor Cache Log from the command line if possible:

    
    tail -f /wp-content/.object-cache.log
    

    to see which cache group is causing this issue.

    I dunno what it is about the WooCommerce Products Filtering plugin, but it’s definitely got some kind of incompatibility with object caching.

    I’ve already tested it and didn’t find any issue; maybe I need to test it again, or maybe my server load is not heavy enough.

    Is there a way to determine how to exclude its searches, or something like that?

    The object cache isn’t aware of pages; we can only exclude entries based on cache group/key. You may refer to the documentation here: https://docs.docketcache.com/constants#docket_cache_ignored_groups
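    For illustration, excluding a group with that constant might look like the snippet below in wp-config.php. The group names here are placeholders (assumptions); the real group names would come from the cache log.

    ```php
    // wp-config.php fragment — a sketch only. 'woof' and 'product_query'
    // are placeholder group names; find the actual groups in the cache log.
    define('DOCKET_CACHE_IGNORED_GROUPS',
      [
        'woof',
        'product_query',
      ]
    );
    ```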

    If possible, you may email me directly via https://docketcache.com/contactus/. It is much faster to debug when working directly on the server.

    Thanks.

    Plugin Author Nawawi Jamili

    (@nawawijamili)

    @websavers

    This is my latest attempt to solve this issue. Download and install this dev file: https://docketcache.com/devtest/docket-cache-21.08.05-devfix3.zip

    1. Go to Overview and click “Flush Object Cache”
    2. Click “Enable Object Cache”

    Open the file wp-content/object-cache.php; it should contain this code:

    
    // temp: woof issue
    if (!empty($_GET['swoof']) && (!empty($_GET['woof_text']) || !empty($_GET['woof_sku']) || !empty($_GET['woof_author']))) {
        return;
    }
    

    If it does not, disable and then re-enable the Object Cache. Please confirm whether this solution fixes the issue.

    Thanks.

    Thread Starter websavers

    (@websavers)

    Hey @nawawijamili

    Apologies for the lack of response to the last message – I’ve been meaning to get back to you!

    It seems improved with the latest devfix, but searches are still faster when object caching is disabled entirely.

    I do think you’re probably right about it being connected to cache flushes. Even with the object cache drop-in disabled, if I flush the object cache it takes a good 30 seconds and CPU usage of PHP processes spike. Perhaps even weirder, if I flush it again about 15 seconds later (with the object cache having never been enabled in between), it does the same thing – 30 seconds processing, and significant CPU usage.

    The server powering this has strictly SSDs and IO performance is pretty solid overall. When I flush the object cache, iotop shows practically no IO activity. Yet htop indicates it’s at least 50% kernel time.

    wp-content/cache/docket-cache contains 97,643 entries and running ls on it takes some time. I’m wondering if the issue here is the same issue we run into when you don’t tell WP to organize uploads by date: simply too many files in one folder causing access performance issues — that would explain the kernel processing time when I flush the cache. I’ve seen other file stores like this split the files alphabetically (alpha hash?) into folders like aa, ab, ac, ad, … ba, bb, bc, … fa, fb, fc.
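    As a rough sketch of the fan-out scheme described above (an illustration, not Docket Cache’s actual code; the function name and hashing choice are assumptions), hashing each group/key and using the leading hash characters as nested subdirectories caps how many entries any single directory can hold:

    ```php
    <?php
    // Sketch: map a cache group/key to a fanned-out path so that no single
    // directory accumulates tens of thousands of files. Illustration only.
    function fanout_path(string $base, string $group, string $key): string
    {
        $hash = md5($group . ':' . $key);
        // Two 2-character levels => at most 256 subdirectories per level.
        return rtrim($base, '/') . '/'
            . substr($hash, 0, 2) . '/'
            . substr($hash, 2, 2) . '/'
            . $hash . '.php';
    }

    echo fanout_path('wp-content/cache/docket-cache', 'posts', 'last_changed'), "\n";
    ```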

    Thread Starter websavers

    (@websavers)

    Also interesting: it says “Object cache was flushed.” after the 30 seconds, but the number of files in wp-content/cache/docket-cache is now higher… I’m used to cache flushes clearing the contents of the cache folder, is that not how this works with Docket?

    I also can’t erase that folder manually as there’s too many files in the argument list to run rm -rf on it.

    Perhaps the cache flush is simply reaching the PHP 30s timeout, but for some reason Docket says it was a successful cache flush, when it actually didn’t flush anything?

    Thread Starter websavers

    (@websavers)

    I deleted the contents of the folder manually:

    find . -type f -delete

    Flushed the cache from Docket’s end, re-enabled the object cache and as far as I can tell the issue is resolved. But I suspect if it continues to generate large numbers of files, the problem will return.

    Edit: After about 15 minutes it was up to 75k files and super slow searches again.

    I think this identifies a couple possible improvements:

    1. FLUSH CACHE RESPONSE / ERRORS:

    Ensure the response from flushing the cache is accurate – if it doesn’t actually flush the cache, it should indicate when it was unsuccessful. When unsuccessful it might also be beneficial to indicate to the end user that they need to clear it manually.

    2. ALPHABETIC HASH DIRECTORY SYSTEM?

    It sounds to me like Docket could minimize the problems created by this scenario by using an alphabetic hash directory system for storing its files. This way even if there’s a problem, flushing the cache should work more consistently. This *might* also improve cache file access times when there’s a large number of cache files. At bare minimum, even if the number of cache files is larger than it should be, due to some other conflict, this would be a great way to mitigate the effects.

    Cheers,

    Jordan

    • This reply was modified 2 years, 7 months ago by websavers.
    Plugin Author Nawawi Jamili

    (@nawawijamili)

    @websavers

    I think the cache folder needs to be fully cleaned before we do any further debugging.

    1. Disable Object Cache or manually remove the wp-content/object-cache.php file.
    2. cd into the wp-content/cache/ folder and run this command:

    
    for f in ./docket-cache/*-*.php; do rm -fv "$f"; done
    

    3. Disable Object Cache Precaching and Advanced Post Caching.
    4. Install the latest devfix3 file.

    wp-content/cache/docket-cache contains 97,643 entries and running ls on it takes some time. I’m wondering if the issue here is the same issue we run into when you don’t tell WP to organize uploads by date: simply too many files in one folder causing access performance issues — that would explain the kernel processing time when I flush the cache. I’ve seen other file stores like this split the files alphabetically (alpha hash?) into folders like aa, ab, ac, ad, … ba, bb, bc, … fa, fb, fc.

    Probably, I think. The reason Docket Cache files are structured the way they are now, with group-key naming, is faster loading; I never thought it would cause an issue with flushing. I may revamp it later if possible.

    Perhaps the cache flush is simply reaching the PHP 30s timeout, but for some reason Docket says it was a successful cache flush, when it actually didn’t flush anything?

    The devfix file stops the flushing process when it reaches the maximum execution time. That was a bad decision on my part; I should have left it to PHP to throw the error message.

    Thanks.

    Plugin Author Nawawi Jamili

    (@nawawijamili)

    @websavers

    Hi Jordan,

    1. FLUSH CACHE RESPONSE / ERRORS:

    Ensure the response from flushing the cache is accurate – if it doesn’t actually flush the cache, it should indicate when it was unsuccessful. When unsuccessful it might also be beneficial to indicate to the end user that they need to clear it manually.

    2. ALPHABETIC HASH DIRECTORY SYSTEM?

    It sounds to me like Docket could minimize the problems created by this scenario by using an alphabetic hash directory system for storing its files. This way even if there’s a problem, flushing the cache should work more consistently. This *might* also improve cache file access times when there’s a large number of cache files. At bare minimum, even if the number of cache files is larger than it should be, due to some other conflict, this would be a great way to mitigate the effects.

    Thanks for the suggestion, I really appreciate it. It may take a long while to revamp and test it again with OPcache and everything. If possible, I may need your team’s help to test it in your environment once it is complete.

    Thanks.

    Thread Starter websavers

    (@websavers)

    Hey @nawawijamili

    The devfix file stops the flushing process when it reaches the maximum execution time. That was a bad decision on my part; I should have left it to PHP to throw the error message.

    Ahh, so that explains the slight improvement with that version in that at least when it hits the timeout it stops processing.

    Thanks for the suggestion, really appreciate it.

    No prob! I hope it actually helps. It certainly seems like it could be a solution to the issue, but I don’t know that with certainty.

    It may take a long while to revamp and test it again with OPcache and everything. If possible, I may need your team’s help to test it in your environment once it is complete.

    Understandable! Happy to test whenever you have it ready.

    • This reply was modified 2 years, 7 months ago by websavers.
    Plugin Author Nawawi Jamili

    (@nawawijamili)

    Hi Jordan @websavers

    This is the latest devfix file, https://docketcache.com/devtest/docket-cache-21.08.05-devfix4.zip, which fixes what we discussed earlier:

    1. Cache files are now stored under a hashed path:
    docket-cache/49/de/c8/03/f2/3d/d2/02/b1/a8/d5/ee/d202b1a8d5ee-49dec803f23d.php

    2. When flushing the cache, a timeout notice is displayed if the maximum execution time is reached.

    Older cache files will be removed automatically by the garbage collector and the cache size checks. The previous cache file format can be re-enabled by defining this constant in the wp-config.php file.

    
    define('DOCKET_CACHE_CHUNKCACHEFILE', false);
    
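    For reference, the chunked layout shown above can be reproduced with a small sketch: concatenate two 12-character hashes, split the result into 2-character directory chunks, and append the hashes as the filename. (How the plugin actually derives the two hashes, and which corresponds to the key versus the group, is not shown here; only the path shape matches the example.)

    ```php
    <?php
    // Sketch reconstructing the chunked cache path format shown above from
    // two 12-hex-character hashes. Illustration only; the hash derivation
    // itself is an assumption.
    function chunked_path(string $base, string $hashA, string $hashB): string
    {
        // 12 directory levels of 2 characters each: aa/bb/cc/...
        $dirs = implode('/', str_split($hashA . $hashB, 2));
        return rtrim($base, '/') . '/' . $dirs . '/' . $hashB . '-' . $hashA . '.php';
    }

    echo chunked_path('docket-cache', '49dec803f23d', 'd202b1a8d5ee'), "\n";
    // → docket-cache/49/de/c8/03/f2/3d/d2/02/b1/a8/d5/ee/d202b1a8d5ee-49dec803f23d.php
    ```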

    I hope this will fix the issue with the WOOF search.

    Thanks.

    • This reply was modified 2 years, 7 months ago by Nawawi Jamili.
    Thread Starter websavers

    (@websavers)

    Hey @nawawijamili

    Tried it out. It somehow seems to be even slower at the WOOF product searches.

    I do see it generating the hashed path for cache files.

    Interestingly the default setting for “Cache Disk Limit” is set (500M), yet the docket-cache folder size according to du has grown to 2.5G in just 10 minutes.

    The correct error does indeed appear now if the timeout is hit when flushing:

    “Object cache could not be flushed. Maximum execution time of 30 seconds exceeded.”

    But I’m afraid that also means flushing the cache hasn’t been improved by this either, at least by the time the cache grows to be 2.5GB. I’m wondering if it’s duplicating files rather than replacing them when content needs updating.

    On the upside the hash path method means I was able to run the standard rm -rf docket-cache/* to clear the folder contents.

    • This reply was modified 2 years, 7 months ago by websavers.
    Plugin Author Nawawi Jamili

    (@nawawijamili)

    @websavers

    Interestingly the default setting for “Cache Disk Limit” is set (500M), yet the docket-cache folder size according to du has grown to 2.5G in just 10 minutes.

    Docket Cache can’t enforce the disk limit in real time; this is taken care of by the “garbage collector” cron event, which runs every 5 minutes.

    But I’m afraid that also means flushing the cache hasn’t been improved by this either, at least by the time the cache grows to be 2.5GB. I’m wondering if it’s duplicating files rather than replacing them when content needs updating.

    By default, when a cache entry expires, Docket Cache empties the cache file rather than deleting it. The garbage collector then removes the file if it is empty and older than 3 minutes. You can change this behaviour by defining this constant:
    define('DOCKET_CACHE_FLUSH_DELETE', true);

    You can run the garbage collector manually with the wp-cli command “wp cache run:gc”.
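    The empty-then-collect rule described above can be sketched in a few lines (a self-contained illustration of the behaviour, not the plugin’s actual garbage collector; the function name and grace period are assumptions based on the description):

    ```php
    <?php
    // Sketch of the described GC rule: delete cache files that are empty
    // and older than a grace period (3 minutes here). Illustration only.
    function gc_sweep(string $dir, int $graceSeconds = 180): int
    {
        $removed = 0;
        $files = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
        );
        foreach ($files as $file) {
            if ($file->isFile()
                && $file->getSize() === 0
                && (time() - $file->getMTime()) > $graceSeconds) {
                unlink($file->getPathname());
                $removed++;
            }
        }
        return $removed; // number of empty, stale files deleted
    }
    ```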

    Please try this devfix5 https://docketcache.com/devtest/docket-cache-21.08.05-devfix5.zip.

    – Disable Object Cache or remove wp-content/object-cache.php
    – Place these constants in the wp-config.php file:

    define('DOCKET_CACHE_IGNORE_REQUEST', ['swoof', 'woof_text', 'woof_sku', 'woof_author']);
    
    define('DOCKET_CACHE_IGNORED_GROUPS',
      [
        'counts',
        'plugins',
        'themes',
        'products',
      ]
    );

    – Re-enable the Object Cache.
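    Mechanically, the DOCKET_CACHE_IGNORE_REQUEST exclusion amounts to a request check along these lines (a sketch with a hypothetical helper name, not the plugin’s exact code):

    ```php
    <?php
    // Sketch: skip the object cache entirely when any of the listed query
    // vars is present in the request. Helper name is hypothetical.
    function should_bypass_cache(array $ignoredVars, array $get): bool
    {
        foreach ($ignoredVars as $var) {
            if (!empty($get[$var])) {
                return true; // a filter/search request: do not cache
            }
        }
        return false;
    }

    // A WOOF text search like ?swoof=1&woof_text=shirt would be bypassed:
    var_dump(should_bypass_cache(['swoof', 'woof_text', 'woof_sku'], ['swoof' => '1', 'woof_text' => 'shirt'])); // bool(true)
    ```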

    Thanks.

    Plugin Author Nawawi Jamili

    (@nawawijamili)

    @websavers

    Please ignore the previous solutions and remove any constants mentioned earlier from the wp-config.php file.

    After doing an intensive test, I found that “wc_session_id” and the WooCommerce cache group “wc_cache” contribute a lot to this issue. They use unpredictable names for keys and groups, including strings based on time(), which makes Docket Cache store many cache files containing the same data. For example:
    Key: wc_cache_0.95649000 1651244254__type_14
    Group: wc_cache_0.94325000 1651244254_20

    Each wc_cache entry has no expiry time, meaning it is intended only for a single page load; its objects are discarded from the cache at the end of the request.

    wc_session_id is regenerated on each page load; although it has an expiry time, it carries the same data for the current session.

    The solution is to ignore wc_session_id and any group/key that has the wc_cache prefix.
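    That exclusion rule could be sketched as follows (hypothetical helper name; the actual check in devfix6 may differ):

    ```php
    <?php
    // Sketch: treat a cache group/key as non-persistent when it is the
    // wc_session_id group or carries the wc_cache prefix. Illustration only.
    function is_volatile_wc_group(string $group): bool
    {
        return $group === 'wc_session_id'
            || strpos($group, 'wc_cache') === 0;
    }

    var_dump(is_volatile_wc_group('wc_cache_0.94325000 1651244254_20')); // bool(true)
    var_dump(is_volatile_wc_group('posts'));                             // bool(false)
    ```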

    Please download and try this devfix6 and confirm the outcome.

    https://docketcache.com/devtest/docket-cache-21.08.05-devfix6.zip

    1. Disable object cache.
    2. Install devfix6
    3. Remove cache files manually.
    4. Enable Object Cache
    5. Also, enable Advanced Post Cache and Precaching.

    We will redo the fix from here.

    Thanks.

  • The topic ‘WooCommerce Products Filter and Object Cache Drop-In Conflict’ is closed to new replies.