• Resolved develobaby

    (@bornefysioterapi)


    I discovered this plugin a few weeks ago while researching the WP warning “you should use a persistent object cache”, and your plugin seems to work wonders, specifically for pages that are (rightfully) not cached by WP Rocket, such as almost all WooCommerce pages and most of the WP backend. THANK YOU!

    Alrighty, here’s my problem, though: the default cache size is set to 16 MB. After some time, however, the actual disk consumption can grow bigger than that. I discovered this when WP warned me about response times above 600 ms, which I of course dismissed as a measurement glitch. But only hours later my host sent an email saying that my resource consumption was through the roof … and that is when I woke up.

    Turns out, disk I/O was hitting more than 40 MB/s several times per minute, which apparently got on my hosting company’s nerves. As a result, my site was being throttled (hence the >600 ms response times). I discovered that .ht-object-cache.sqlite had grown to almost 44 MB, which seemed a reasonable enough culprit. So I ran “Clean up now”, which didn’t change the file size, though. Next I ran “Flush Now”, which brought the file size down to 0, and disk I/O immediately and drastically fell back to normal.

    Now, looking at the code, a full vacuum is only run alongside flushes, and that seems to happen on demand only. Meanwhile, my SQLite file has already grown back to over 35 MB within 8 hours. So I guess I will be hitting this problem again, quite soon actually.
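    For reference, here is a quick way to check how much of such a file is reclaimable dead space before vacuuming. This is only a sketch using PHP’s SQLite3 extension, and the file path is an assumption that needs adjusting per install:

        <?php
        // Sketch: report how much of the object-cache file VACUUM could
        // reclaim. The path is an assumption; point it at your wp-content.
        $db    = new SQLite3(
            '/var/www/html/wp-content/.ht-object-cache.sqlite',
            SQLITE3_OPEN_READONLY
        );
        $pages = $db->querySingle( 'PRAGMA page_count' );     // total pages
        $free  = $db->querySingle( 'PRAGMA freelist_count' ); // free pages
        $bytes = $db->querySingle( 'PRAGMA page_size' );      // bytes per page
        printf(
            "file: %d KiB, reclaimable: %d KiB (%.1f%%)\n",
            $pages * $bytes / 1024,
            $free * $bytes / 1024,
            $pages ? 100.0 * $free / $pages : 0
        );
        $db->close();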

    So, how would I approach this? I feel like reducing the cache size is not an option, because database fragmentation is created by deleting rows and inserting new ones, i.e. the “final” file size probably correlates with the turnover of outdated cache records rather than with the configured cache size. Alternatively, I could run a scheduled full vacuum (see the sketch below); maybe the plugin could even do this alongside the hourly background cleanup. Or maybe … I am getting ahead of myself and should rather ask somebody who knows this stuff.
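    For what it’s worth, such a scheduled vacuum could look roughly like the following must-use-plugin sketch. The hook name and the cache file path are my own assumptions, not anything the plugin documents, and note that VACUUM takes an exclusive lock, so it can briefly block cache writes on a busy site:

        <?php
        // mu-plugin sketch: periodically VACUUM the object-cache SQLite file.
        // ASSUMPTIONS: the file path and the 'my_vacuum_object_cache' hook
        // name are hypothetical; the plugin itself may work differently.
        add_action( 'my_vacuum_object_cache', function () {
            $file = WP_CONTENT_DIR . '/.ht-object-cache.sqlite';
            if ( ! file_exists( $file ) ) {
                return;
            }
            $db = new SQLite3( $file );
            $db->busyTimeout( 5000 ); // wait up to 5 s for other connections
            $db->exec( 'VACUUM' );    // rewrite the file, dropping free pages
            $db->close();
        } );

        if ( ! wp_next_scheduled( 'my_vacuum_object_cache' ) ) {
            wp_schedule_event( time(), 'hourly', 'my_vacuum_object_cache' );
        }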

    Kind regards,
    Uwe

  • Thread Starter develobaby

    (@bornefysioterapi)

    UPDATE:

    Weirdly enough, the database file size is now at 49 MB (that is, bigger than when I first ran “Flush Now”), but disk I/O remains normal. Did the “Flush Now” and its implicit vacuum (or possibly the subsequent catch block) perhaps repair a corrupt database file? That could indicate that the database file size was never the problem to begin with.

  • Plugin Author OllieJones

    (@olliejones)

    Thanks for the problem report!

    “Flush” puts your site into maintenance mode for a moment while it empties the cache and does a SQLite flush operation. If we did a flush while the site was not in maintenance mode, visitors would see errors.

    The size limit is advisory, not mandatory. There’s an occasional background operation that removes the least-recently updated cache entries to bring things back to the advisory size. (Not least-recently used, least-recently updated: keeping track of use times would require writing to the SQLite database on each cache hit, which would be very slow.)
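    To make that distinction concrete, a least-recently-updated trim might look roughly like this. This is an illustrative sketch only; the table and column names are invented, not the plugin’s actual schema:

        <?php
        // Sketch: delete the least-recently *updated* rows from a cache table.
        // A true LRU would have to bump a last-used timestamp on every cache
        // hit, turning every read into a write. Names here are made up.
        $db = new SQLite3( $cache_file ); // $cache_file: path to the cache db
        $db->exec( "
            DELETE FROM cache_entries
            WHERE rowid IN (
                SELECT rowid FROM cache_entries
                ORDER BY updated_at ASC  -- oldest writes go first
                LIMIT 1000               -- arbitrary batch size
            )
        " );
        $db->close();
        // Caveat: deleted pages go onto SQLite's freelist, so the file on
        // disk does not shrink until a VACUUM runs, matching what Uwe saw.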

    Suggestions for you:

    1. Increase your cache size to 48 MiB; that way the background cleanup process has less work to do when it does run.
    2. Try moving the cache file to your /tmp directory. It’s possible some hosts will not ding you for I/O to that directory like they do to your /wp-content/ directory. Please read this.
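    If the file location can be overridden from wp-config.php, that relocation might be a one-liner like the sketch below. The constant name is purely hypothetical; the page linked above describes the real mechanism:

        <?php
        // wp-config.php excerpt (sketch). The constant name is HYPOTHETICAL;
        // the plugin may use a different way to relocate its cache file.
        define( 'SQLITE_OBJECT_CACHE_DB_FILE', '/tmp/ht-object-cache.sqlite' );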

    Suggestion for me:

    1. Figure out how to schedule the background cleanup more frequently when the advisory cache size limit is too low.
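    As one building block for that, WordPress lets a plugin register a shorter cron interval via the cron_schedules filter. A sketch (the ‘every_15_minutes’ key is my own name, and the actual cleanup hook may differ):

        <?php
        // Sketch: register a 15-minute WP-Cron interval that a more frequent
        // cleanup task could then be scheduled on via wp_schedule_event().
        add_filter( 'cron_schedules', function ( $schedules ) {
            $schedules['every_15_minutes'] = array(
                'interval' => 15 * MINUTE_IN_SECONDS,
                'display'  => 'Every 15 Minutes',
            );
            return $schedules;
        } );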
  • Thread Starter develobaby

    (@bornefysioterapi)

    Thanks for your quick response. I appreciate it!

    1. Done.
    2. Yes, I did consider this, too. I already asked my hosting folks about the storage details and they confirmed that there is no difference between /tmp and /var/www/…/wp-content. Same physical, local disk. Same rules.

    I’ll monitor this, and should it happen again, I’ll take a snapshot of the DB file to see if it’s corrupt. I find that explanation more likely at this point.

    Again, thanks for pitching in, and have a great weekend!
    Uwe
