Crawler always misses
-
Dear support team,
Find my report: ZKMZIMCF
All my crawls are missed; I couldn't find any details in the documentation about this.
Thank you for your attention!
-
It looks like your server is running on Apache & Nginx?
In that case, the crawler won't work.
Nginx, that's correct.
So am I better off switching the crawler off, as it's doing no good? Without the crawler, can I still improve the speed of my site?
What I have noticed is that when I run PageSpeed Insights, the first 1-2 inspections show the root document reaching 2,500ms+, then it drops to 20ms. I assume the cache is not being delivered on the first requests. How can I improve that?
-
Yes, if you are not using the LiteSpeed webserver, the crawler doesn't really do you much good. You can crawl on the QC node, but that only warms the node closest to your origin server, which still isn't much good, since a visitor (or you) could be somewhere else, unless your origin server and most of your visitors are in the same city/region.
The crawler pre-caches pages; it's not necessary, but good to have.
You can't really improve that without a backend server that has cache capability.
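To illustrate what "pre-caching" means here, a minimal sketch (my own illustration, not the plugin's actual code): the crawler simply visits each URL before any real visitor does, so the slow render cost is paid up front and the first human request is already a hit.

```python
# Toy model of crawler pre-caching (illustrative only): the crawler
# requests every URL in advance so real visitors hit a warm cache.
page_cache = {}

def request(url):
    """Serve from cache if possible; otherwise render (slow) and store."""
    if url in page_cache:
        return "hit"
    page_cache[url] = f"<html>content for {url}</html>"  # slow path
    return "miss"

def crawl(urls):
    """The 'crawler': warm the cache by requesting every URL once."""
    for url in urls:
        request(url)

crawl(["/", "/services", "/contact"])
print(request("/services"))  # a real visitor now gets a hit
```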
So you say this phenomenon:
“What I have noticed is that when I run PageSpeed Insights, the first 1-2 inspections show the root document reaching 2,500ms+, then it drops to 20ms. I assume the cache is not being delivered on the first requests. How can I improve that?”
comes from the fact that when the root document takes too long to process, it is a cache miss?
-
Yes. Unless your server is running on a very slow connection, the only logical explanation is a cache miss and waiting for your PHP to generate the content.
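The slow-then-fast pattern can be reproduced with a toy timing model (assumptions: the `sleep` stands in for PHP rendering the page, and the durations are scaled down from the real ~2,500ms):

```python
import time

cache = {}

def render(url):
    """Stand-in for PHP generating the page (the slow part on a miss)."""
    time.sleep(0.05)  # scaled-down stand-in for ~2,500 ms of backend work
    return f"<html>{url}</html>"

def timed_get(url):
    """Return how long the request took, in milliseconds."""
    start = time.perf_counter()
    if url not in cache:
        cache[url] = render(url)  # cache miss: pay the full render cost
    _ = cache[url]                # cache hit: near-instant lookup
    return (time.perf_counter() - start) * 1000

first = timed_get("/root")   # slow: miss, backend renders
second = timed_get("/root")  # fast: served from cache
print(first > second)  # True
```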
Check this example:
I just ran a test the first time, and the root took 4,600ms:
https://pagespeed.web.dev/analysis/https-www-instawalk-eu-services-budapest-proposal-photographer/t5ft470zso?form_factor=mobile
3rd run:
https://pagespeed.web.dev/analysis/https-www-instawalk-eu-services-budapest-proposal-photographer/ceuy121qg7?form_factor=mobile
Sorry for insisting; I just want to make sure that I am using your service at its peak, as the results are pretty inconsistent.
I didn't delete the cache for weeks, yet after a few hours, when I check the page, the first check always shows me a slow root document.
-
Your origin server does NOT have a cache, as origin cache requires the LiteSpeed webserver; all the cache is stored on the QC CDN nodes.
The PSI test could be connecting to a node that doesn't have the cache yet, or the cache was purged at the CDN.
How to prevent this?
I did not delete any cache for days, and the pages I'm testing have been tested multiple times, so the cache should have been built already, yet the root document still reaches 4,500ms on the first tests.
Your suggestions to improve that are most welcome!
-
Sadly, I don't really have a good solution for this.
The only possible way is to move to a LiteSpeed webserver. That way, the cache will be stored at the origin as well, so even if the CDN cache is purged, the origin cache will serve as a backup. It won't be as fast as the CDN cache, but it should still be satisfactory.
For example, just imagine it like this: no cache or cache miss -> 2,500ms; CDN cache -> 25ms; origin cache -> 250ms.
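The three tiers above can be sketched as a lookup chain (hypothetical code; the dicts stand in for the CDN edge cache, the origin cache, and the full PHP render, with the relative costs from the example in comments):

```python
# Tiered cache lookup: fastest layer first, slowest as last resort.
cdn_cache = {}
origin_cache = {}

def fetch(url):
    """Return (body, source), checking each cache layer in order."""
    if url in cdn_cache:
        return cdn_cache[url], "cdn"       # ~25 ms
    if url in origin_cache:
        body = origin_cache[url]           # ~250 ms
        cdn_cache[url] = body              # repopulate the CDN edge
        return body, "origin"
    body = f"<html>{url}</html>"           # ~2,500 ms: full render
    origin_cache[url] = body
    cdn_cache[url] = body
    return body, "render"

print(fetch("/home")[1])  # render (cold start)
cdn_cache.clear()          # simulate a CDN purge
print(fetch("/home")[1])  # origin (the backup saves the slow render)
print(fetch("/home")[1])  # cdn
```

Without the origin layer (the Apache/Nginx case in this thread), a CDN purge falls all the way through to the slow render.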
Thank you for the rundown; it's just what I expected, that the cache is not served. That's why I started to mess around with the crawlers in the first place.
But from what I understand, most people don't use crawlers.
Do they also have so many cache misses and root documents loading in 2,500ms+?
Can't I achieve consistent cache distribution without crawlers? If yes, what settings am I missing?
-
No, no. The issue is not the cache itself or the crawler; the issue is the webserver you are running.
Our plugin requires the LiteSpeed webserver to cache, as the cache engine is built into the webserver rather than the plugin. The plugin just gives instructions to the webserver on how to cache, when to cache, how to purge, when to purge, etc., and these instructions are only accepted by the LiteSpeed webserver; on other webservers like Apache or Nginx, they are ignored.
For a normal user, with or without the crawler, if they are running on the LiteSpeed webserver, the cache will be there for the designated TTL until it is purged or expires. In your case, you are running Apache & Nginx; that's a different story, as you don't have a cache at the origin server.
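A rough sketch of that handshake (the header name follows LiteSpeed's `X-LiteSpeed-Cache-Control` convention, but the logic here is a simplified assumption for illustration, not real server code): the plugin attaches a cache instruction to the response, and only a LiteSpeed server acts on it, while Apache/Nginx pass the response through uncached.

```python
# Simplified model: the plugin can only *ask* for caching via a header;
# whether anything is stored depends entirely on the webserver.
def plugin_response():
    """The plugin's output: a page plus a cache instruction header."""
    return {
        "body": "<html>page</html>",
        "headers": {"X-LiteSpeed-Cache-Control": "public, max-age=3600"},
    }

def stores_in_cache(response, server):
    """Return whether this server honors the cache instruction."""
    instruction = response["headers"].get("X-LiteSpeed-Cache-Control")
    if server == "litespeed" and instruction:
        return True   # cache engine built into the server honors it
    return False      # Apache/Nginx silently ignore the header

print(stores_in_cache(plugin_response(), "litespeed"))  # True
print(stores_in_cache(plugin_response(), "nginx"))      # False
```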
If I get it right, you mean that:
The LiteSpeed plugin doesn't communicate correctly with my Nginx server; as a result, it doesn't store cache on the server side, which means the first requests are built without cache, and there is nothing I can do to make it work with Nginx. On the other hand, I still don't understand why I don't have CDN-level caching, which is supposed to deliver predefined content to visitors?
Also, if my server is not capable, then on the 3rd connection the root document drops to 20ms; doesn't that mean it is capable, just that the cache is not stored for long?
Yes, the plugin requires the LiteSpeed webserver to cache; Apache & Nginx won't cache it.
It could be that the tests are connecting to different nodes, or that a node invalidated the cache before its due time.