Viewing 15 replies - 31 through 45 (of 69 total)
  • Hi @shanafourde
    Thanks – I tried adding the lines to the robots.txt file. It does help a little, but unfortunately not much. We used to use WP Super Cache with it, but activating that pushed us way over the allowed resource usage. I deactivated it and installed Quick Cache, which helped, and our site is currently back up and running again, but only marginally. CPU usage is still at 75–100% most of the time, and any additional load would certainly kill the site.

    I just saw @nicola.peluchetti’s update above – thanks. In fact, before finding this thread, this was the first thing I tried and tested, but it hardly helped. My site now only has events on and after 1st May.

    We have another user who has problems with bots; his host suggested adding the following code to the .htaccess file:

    # Apache 2.2 syntax; the Order/Allow context is required for Deny to take effect
    SetEnvIfNoCase User-Agent "googlebot" bad_bot
    SetEnvIfNoCase User-Agent "msn.com" bad_bot
    SetEnvIfNoCase User-Agent "bingbot" bad_bot
    SetEnvIfNoCase User-Agent "baidu" bad_bot
    Order Allow,Deny
    Allow from all
    Deny from env=bad_bot

    As another user suggested, this amounts to shutting the blog off from the web search engines entirely. It could be useful for determining whether the bots really are the problem. The robots.txt change should fix the bot problem; if it doesn’t, we’d need to know which pages are being crawled.

    @nicola.peluchetti

    Completely de-listing your entire site from Google, MSN and Bing by blocking access to their spiders – because of bugs with the event calendar? No, thanks!

    Dear reader: please do not use the code above unless you want your site to simply disappear from all major search engines.

    -1

    As I said, “his host suggested”.
    I’ll add what you said as a clarification.
    Actually, the only crawling problem should be solved by the robots.txt fix (unless it’s something different, but then we’d need to know which pages are being crawled).
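
    For anyone landing on this page directly, rules of this shape should keep the bots off the calendar views. This is a sketch only – adjust the patterns to your permalink structure; the parameter names come from the $used_paramaters list in the patch further down, and the * wildcard is honored by Google and Bing but not by every crawler:

    User-agent: *
    Disallow: /*action:
    Disallow: /*exact_date:
    Disallow: /*time_limit:
    Disallow: /*page_offset: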

    Thanks for the updates, Nicola. After three days of having the robots.txt changes in place, the issue still occurs (though I’ve noticed it takes longer to appear). Our sites that use the events plugin do have a large number of events, and we do not delete old ones (but we’ll try that and see what it does).

    And I’m a little wary of blocking those crawling bots since it would have a negative effect on our search rankings for the respective sites. https://support.google.com/webmasters/bin/answer.py?hl=en&answer=2387297 explains what happens when you block Googlebot.

    @nicola.peluchetti

    Thanks, please do add the note about what the code does. Many people, desperate to find a solution (as I am) will blindly copy & paste code without understanding what it does.

    In this case, the disclaimer is necessary, as we have many non-technical, very creative people reading these forums.

    As you said, the problem is not with the bots; it’s with the plugin’s memory usage, caused in part by the loading of all posts.

    And … I think there is a core memory-management bug in the plugin, as this problem was introduced within the last few updates.

    I do appreciate the time you’ve spent tracking down the problems; your insights have been very useful in helping me fix my sites. Thank you.

    @mjhale Well said and much more diplomatically.

    The problem happens whenever pages are hit, whether that’s by a spider or a human.

    By blocking spiders, you are essentially decreasing traffic to your site, which is why it seems to make things better.

    However, whenever people hit the problem pages, the plugin still runs out of memory rendering those pages.

    Focusing on bots, then, is somewhat of a red herring – though there is a separate duplicate-content issue with allowing every event to be spidered.

    We need to focus on the causes of the memory issues. The fact that all events are loaded on some pages is a pointer in the right direction.

    Can anyone pinpoint which release of the plugin started causing the issues? Maybe looking at what changed could help, or even downgrading to the last known-good version?

    @ntemple
    we see the memory problem on admin pages – at least, that’s what we’ve discovered so far.
    The problem started in 1.9, I think, when we set our custom post type to be “hierarchical”. This means WordPress loads all the events, and their metadata, on some pages. This happens in the admin area only, which shouldn’t be crawled.
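
    To illustrate what “hierarchical” costs, here is a simplified sketch – not the plugin’s actual registration code:

    // Simplified illustration only – not the plugin's real registration code.
    // With 'hierarchical' => true (and 'page-attributes' support), the admin
    // edit screen renders a "Parent" dropdown via wp_dropdown_pages(), which
    // queries every event at once – fine for 50 events, fatal for 5,000.
    register_post_type( 'ai1ec_event', array(
    	'hierarchical' => true,
    	'supports'     => array( 'title', 'editor', 'page-attributes' ),
    ) );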
    The crawling issue is different. The fact is that

    https://www.xxx.edu/news-events/action:month/exact_date:1375340400/cat_ids:149/

    is a valid URL, but so are

    https://www.xxx.edu/news-events/action:month/exact_date:1375340401/cat_ids:149/

    https://www.xxx.edu/news-events/action:month/exact_date:1375340402/cat_ids:149/

    and so on (I just changed the exact_date parameter).

    I just realized that the problem might be that in some views our URLs are not consistent, e.g.

    https://www.xxx.edu/news-events/page_offset:-1/action:agenda/time_limit:1373594399/cat_ids:149/

    this link should be

    https://www.xxx.edu/news-events/action:agenda/page_offset:-1/time_limit:1373594399/cat_ids:149/

    I’m fixing this and will post a patch ASAP; this is probably why the bots are not stopped.

    Turning pretty permalinks off would stop the crawling, but it obviously hurts search engine rankings, so it’s not advised.

    OK, the patch for consistent URLs is the following one.

    In file class-ai1ec-href-helper.php (found in the lib/router folder), change lines 13 to 25 so that they look like this:

    private $used_paramaters = array(
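    		// the parameters are output in this order when URLs are built,
    		// so keeping this list fixed is what makes the URLs consistent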
    		'action',
    		'page_offset',
    		'month_offset',
    		'oneday_offset',
    		'week_offset',
    		'time_limit',
    		'exact_date',
    		'cat_ids',
    		'auth_ids',
    		'post_ids',
    		'tag_ids',
    	);

    Let me know if this stops the crawling.

    (@valprestonfamilyeguidecom)

    Does anyone know if Timely is following this thread, or working on the issue?

    Also, @bravenewniche, how is your site doing with Quick Cache? Did that seem to help the problem?

    @valpreston
    we are working on the issue.
    Above you’ll find the patch for the bot crawling, which you should combine with the robots.txt changes.
    The memory issue is more complex; we are working on that, but it shouldn’t affect the front end.
    I work for time.ly

    Unfortunately, we’re seeing memory issues on the front-end as well. It’s causing the site to die with OOM errors even when no one is using the back-end. Raising the memory limit in PHP fixes this, but then …

    On a VPS like HostGator’s, when the system as a whole uses too much memory, it simply kills Apache (as there is no swap).
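
    For anyone trying the same stop-gap, WordPress’s own ceiling can be raised in wp-config.php. The constants below are standard WordPress; the values are just examples, and on a small VPS you may still hit the OS limit:

    /* wp-config.php – a stop-gap, not a fix; values are examples only */
    define( 'WP_MEMORY_LIMIT', '256M' );     // front-end requests
    define( 'WP_MAX_MEMORY_LIMIT', '512M' ); // admin screens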

    (@valprestonfamilyeguidecom)

    @nicola.peluchetti
    Awesome! I’ll have my developer look at the patch, and will watch for updates. We’ve had to upgrade our service with HostGator three times in the last month, so I’m hoping this is resolved soon so we don’t have to keep paying for more resources :) I love the calendar, and prior to the last couple of updates I never had issues.

    (@valprestonfamilyeguidecom)

    @ntemple

    Yeah, we’ve had the same issues. HostGator has reset Apache for us countless times during the last month or two, and after each series of resets we’ve upgraded our VPS package. Obviously, that’s not a long-term solution :( Really hoping Timely can fix this soon.

    @ntemple
    how much memory is the plugin using? What’s your memory limit?
    If you have lots of events, the month view can be memory-heavy because there are a lot of events on screen.
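
    If you’re not sure, a throwaway snippet like this (hypothetical – not part of the plugin) in your theme’s functions.php will log the peak usage of every request:

    // Hypothetical debugging helper: log peak memory per request at shutdown.
    // Requires PHP 5.3+ for the anonymous function.
    add_action( 'shutdown', function () {
    	error_log( sprintf(
    		'Peak memory: %.1f MB (limit: %s) for %s',
    		memory_get_peak_usage( true ) / 1048576,
    		ini_get( 'memory_limit' ),
    		$_SERVER['REQUEST_URI']
    	) );
    } );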

  • The topic ‘MAX CPU Usage’ is closed to new replies.