websavers
Forum Replies Created
Turns out this was Divi Builder. No idea why it forces a 256M limit like that. In case anyone needed more proof Divi is garbage…
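For anyone hitting a similar ceiling: a minimal sketch, assuming the 256M cap is the standard WordPress memory limit, of raising it in wp-config.php. Whether Divi respects these values is something I haven’t verified:

/* Raise WordPress's memory limits in wp-config.php, before
 * wp-settings.php is loaded. WP_MEMORY_LIMIT applies to the front
 * end; WP_MAX_MEMORY_LIMIT to admin contexts. 512M is an arbitrary
 * example value. */
define( 'WP_MEMORY_LIMIT', '512M' );
define( 'WP_MAX_MEMORY_LIMIT', '512M' );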
Thanks, @joneiseman!
In reply to: [Social Sharing Plugin - Social Warfare] Fatal Error with new version
Same here.
Scratch that, I can see this data-srcset is in the ad code itself. Not AdRotate’s fault.
Thanks! Works great.
Emailed logs just now! We can continue via email if you wish; whatever is easier for you.
That’s a good find!
I wish I could say it resolved the problem, but after completing the suggested steps using devfix6, the cache dir still grew to >1GB in under 5 minutes.
And the same issues are present: CPU usage spikes on the PHP processes and searches take a very long time.
Should I try the constant values in wp-config from your prior post? Or will those not work with devfix6?
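For reference, a rough sketch of the kind of wp-config.php overrides I mean. The constant names are my recollection of Docket Cache’s documented options, not verified against devfix6, so treat them as assumptions:

/* Assumed Docket Cache limits for wp-config.php; verify these names
 * against the current plugin documentation before relying on them. */
define( 'DOCKET_CACHE_MAXSIZE_DISK', 524288000 ); // cap disk cache at ~500MB
define( 'DOCKET_CACHE_MAXFILE', 50000 );          // cap number of cache files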
Hey @nawawijamili
Tried it out. It somehow seems to be even slower at the WOOF product searches.
I do see it generating the hashed path for cache files.
Interestingly, the default “Cache Disk Limit” setting (500M) is in place, yet according to du the docket-cache folder has grown to 2.5G in just 10 minutes.
The correct error does indeed appear now if the timeout is hit when flushing:
“Object cache could not be flushed. Maximum execution time of 30 seconds exceeded.”
But I’m afraid that also means flushing the cache hasn’t been improved by this either, at least by the time the cache grows to be 2.5GB. I’m wondering if it’s duplicating files rather than replacing them when content needs updating.
On the upside, the hash path method means I was able to run the standard rm -rf docket-cache/* to clear the folder contents.
Hey @nawawijamili
“The devfix file will stop the flushing process if reach maximum execution time. It is my bad decision, should leave it to PHP to throw the error message.”
Ahh, so that explains the slight improvement with that version: at least when it hits the timeout, it stops processing.
Thanks for the suggestion, really appreciate it.
No prob! I hope it actually helps. It certainly seems like it could be a solution to the issue, but I don’t know that with certainty.
“It may take a long hour to revamp and test it again with opcache and everything. If possible, I may require your/team help to test it on your environment once it complete.”
Understandable! Happy to test whenever you have it ready.
I deleted the contents of the folder manually (using find, since rm -rf fails with an overly long argument list at this file count):
find . -type f -delete
Flushed the cache from Docket’s end, re-enabled the object cache and as far as I can tell the issue is resolved. But I suspect if it continues to generate large numbers of files, the problem will return.
Edit: After about 15 minutes it was up to 75k files and super slow searches again.
I think this identifies a couple of possible improvements:
1. FLUSH CACHE RESPONSE / ERRORS:
Ensure the response from flushing the cache is accurate: if it doesn’t actually flush the cache, it should report the failure. When unsuccessful, it would also help to tell the end user that they need to clear the cache manually.
2. ALPHABETIC HASH DIRECTORY SYSTEM?
It sounds to me like Docket could minimize the problems created by this scenario by sharding its cache files into an alphabetic hash directory system (see the sketch below). That way, even if there’s a problem, flushing the cache should work more consistently, and this *might* also improve cache file access times when there’s a large number of cache files. At bare minimum, even if some other conflict inflates the number of cache files beyond what it should be, this would mitigate the effects.
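A minimal sketch of what I mean, in plain PHP. This is not how Docket Cache actually lays out its files; the function name and layout here are mine, purely for illustration:

/* Illustration only: shard cache files into two-character hash
 * subdirectories (00/ through ff/) so no single directory holds
 * an unbounded number of entries. All names are hypothetical. */
function sharded_cache_path( string $cache_dir, string $key ): string {
    $hash  = md5( $key );
    $shard = substr( $hash, 0, 2 ); // e.g. "3f"
    $dir   = $cache_dir . '/' . $shard;
    if ( ! is_dir( $dir ) ) {
        mkdir( $dir, 0755, true ); // create the shard dir on demand
    }
    return $dir . '/' . $hash . '.php';
}

With md5’s hex output this yields at most 256 shard directories, so even hundreds of thousands of entries stay at a few thousand files per directory.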
Cheers,
Jordan
Also interesting: it says “Object cache was flushed.” after the 30 seconds, but the number of files in wp-content/cache/docket-cache is now higher. I’m used to cache flushes clearing the contents of the cache folder; is that not how this works with Docket?
I also can’t erase that folder manually, as there are too many files for the shell’s argument list when running rm -rf on it.
Perhaps the cache flush is simply hitting the PHP 30s timeout, but for some reason Docket reports a successful flush when it actually didn’t flush anything?
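If it is the timeout, the success message could be made trustworthy with a post-flush check rather than assuming the loop finished. A rough sketch in plain PHP, not Docket’s actual code; the function name is hypothetical and the path is just this site’s cache location:

/* Illustration: verify a flush actually emptied the cache directory
 * instead of trusting that the flushing loop completed. */
function flush_really_succeeded( string $cache_dir ): bool {
    $leftovers = glob( $cache_dir . '/*' );
    // glob() returns false on error; treat that as a failed flush too.
    return is_array( $leftovers ) && count( $leftovers ) === 0;
}

if ( ! flush_really_succeeded( WP_CONTENT_DIR . '/cache/docket-cache' ) ) {
    error_log( 'Object cache flush incomplete; manual cleanup may be required.' );
}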
Hey @nawawijamili
Apologies for the lack of response to the last message – I’ve been meaning to get back to you!
It seems improved with the latest devfix, but searches are still faster when object caching is disabled entirely.
I do think you’re probably right about it being connected to cache flushes. Even with the object cache drop-in disabled, if I flush the object cache it takes a good 30 seconds and CPU usage of the PHP processes spikes. Perhaps even weirder: if I flush it again about 15 seconds later (with the object cache never having been enabled in between), it does the same thing: 30 seconds of processing and significant CPU usage.
The server powering this runs strictly on SSDs, and IO performance is pretty solid overall. When I flush the object cache, iotop shows practically no IO activity, yet htop indicates at least 50% kernel time.
wp-content/cache/docket-cache contains 97,643 entries and running ls on it takes some time. I’m wondering if the issue here is the same issue we run into when you don’t tell WP to organize uploads by date: simply too many files in one folder causing access performance issues — that would explain the kernel processing time when I flush the cache. I’ve seen other file stores like this split the files alphabetically (alpha hash?) into folders like aa, ab, ac, ad, … ba, bb, bc, … fa, fb, fc.
Hey @nawawijamili
Tried out the devfix version. Immediately after reactivating the Object Cache Drop In, load went from 0.5-0.6 up to 2.5 and numerous PHP processes were stuck at 100% CPU.
Disabled it again and processing went back to normal (after a PHP-FPM restart to clear the long-running processes).
You mentioned these options:
– Admin Object Cache Preloading: was enabled
– Advanced Post Caching: was enabled
– Object Cache Precaching: was disabled
I disabled the first two as well (so all three are now disabled), then enabled the object cache drop-in again, and the issue returned.
I dunno what it is about the WooCommerce Products Filter (WOOF) plugin, but it’s definitely got some kind of incompatibility with object caching. Is there a way to exclude its searches from the object cache, or something like that? It’d be nice if they could play nicely together without having to disable options globally.
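If group-level exclusion is possible, I’d imagine something along these lines. wp_cache_add_non_persistent_groups() is part of the standard WordPress object cache API, but whether Docket honors it, and which cache group WOOF actually uses, are both assumptions on my part:

/* Sketch: keep a plugin's cache group in memory for the request but
 * never persist it to the disk cache. The group name 'woof_search'
 * is a guess; the real group would need to be found in WOOF's code. */
add_action( 'init', function () {
    if ( function_exists( 'wp_cache_add_non_persistent_groups' ) ) {
        wp_cache_add_non_persistent_groups( array( 'woof_search' ) );
    }
} );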
Submit as a support ticket.
Thank you, @champsupertramp!
I am not a huge fan of the need to add this data to postmeta for every post when it applies to every single one of them. It just seems wasteful to me, though I get why this is a good option for some.
Instead of this method (and because the original devs of this site already had a custom page template for the post type), I modified the page template and put my role-based display conditionals in there.
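For anyone wanting to do the same, a minimal sketch of a role check inside a page template. is_user_logged_in() and wp_get_current_user() are standard WordPress APIs; the 'member' role and the markup are just placeholders for whatever the site actually uses:

/* Inside a custom page template, within The Loop: show the full
 * content only to logged-in users holding a given role. */
$user = wp_get_current_user();

if ( is_user_logged_in() && in_array( 'member', (array) $user->roles, true ) ) {
    the_content(); // full content for members
} else {
    echo '<p>This content is available to members only.</p>';
}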