• Hi, I work for a company that provides professional website-building services using WordPress as a base. In a few months we plan to move to colocation, since the dedicated hosting offerings here are poor relative to price, hardware, and bandwidth. What I found from profiling is that how many websites you can host at a time is determined by memory, but simultaneous visitor count is determined by CPU and network usage. After optimisation the uncacheable files total below 400KB on average, and the maximum interface bandwidth the hosting service can provide is 200Mb/s with unlimited usage. How much CPU would be needed before the network becomes the bottleneck?

    Currently the dedicated server is a low-power dual quad-core 2.2GHz Xeon with a 100Mb/s interface, and I have seen significant CPU usage before optimisation during big client launches involving hundreds of visitors in the span of minutes. The websites have 10-20 plugins on average, including Wordfence.

    The 400KB figure is after optimisation: Cloudflare, minification, and more on the front end, with PHP OPcache, Redis, and compression on the server side. For WordPress, other than the usual plugins, the uncacheable files are dynamic data and scripts that bloat when cached by Cloudflare, sometimes doubling in size.

    Any advice on hardware needed and any server optimisation tips would be helpful. The current server uses LiteSpeed, but I'm thinking of switching to Apache + nginx, and using eAccelerator with Apache if it would lower CPU usage. With Redis already in use, would there be any benefit to using SSDs over hard drives?
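    The arithmetic behind the bottleneck question can be sketched directly from the figures above. A back-of-the-envelope estimate in Python; the 400KB payload and 200Mb/s link come from the post, and the assumption that every byte of the 400KB crosses the link uncached is mine:

```python
# Back-of-the-envelope network ceiling for the figures in the post.
# Assumes all 400KB of uncacheable data per visitor crosses the
# 200Mb/s interface (i.e. none of it is served from Cloudflare's edge).

LINK_MBPS = 200          # interface speed, megabits per second
PAYLOAD_KB = 400         # uncacheable bytes per visitor, kilobytes

link_bytes_per_s = LINK_MBPS * 1_000_000 / 8   # 25,000,000 B/s
payload_bytes = PAYLOAD_KB * 1_000             # 400,000 B

visitors_per_second = link_bytes_per_s / payload_bytes
print(visitors_per_second)   # 62.5 page loads per second at line rate
```

    At roughly 62 uncached page loads per second the link saturates, so the CPU question becomes: how many cores does it take to render about 62 uncached pages per second?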

Viewing 7 replies - 1 through 7 (of 7 total)
  • What you’re asking is pretty much unknowable (I don’t know if that’s a real word, but it suits this situation).

    When you ask…

    How much CPU would be needed before the network becomes the bottleneck?

    What WordPress version(s) would be running? What theme(s) would be running? What plugins would be running? What pages/posts/archives/tags/etc. are on each site?

    What you’re asking has so many variables that it’s just not possible to give any sort of definitive answer. We manage sites where you could fit 1,000 of them onto your server easily. We’re also running sites where one would break that server in a heartbeat.

    Given that… the best answer that you’ll get is “somewhere between 1 and 20,000 – as a pure guess”.

    I’m no expert in this but colocation worries me… you’ll have a box that you will be responsible for physically maintaining that’s not in your possession and running some distance from you and your people. Downtime will be an issue somewhere along this adventure.

    Years ago I leased a pair of boxes and there were problems with one or the other constantly. I had to pay for an IP address with a reset switch for each just in case they crashed.

    I think that with today’s technology a cloud-based box that’s upgradeable on the fly is a much better solution. SSD and big memory will help, and a nice wide pipeline to the backbone will help greatly.

    What I use now is some very cheap cloud VPSs and run the database on one with the WordPress install on the other. Mine’s a small operation and the servers are not all that well maintained but I can log in to the VPS dashboard and reboot if needed and boost the horsepower when necessary. Upgrades are also fast.

    Give some thought to that colocation idea…

    Maybe some creativity might give you a safer, flexible system(s) with some real future stamina and horsepower.

    Like I said, I’m no expert in this but having a small budget means I have to put more work into what I have to give me what I need and a path for the future as things grow.

    Thread Starter System Error Message

    (@systemerrormessage)

    I always keep WordPress at the latest version and use PHP 7.3. No themes used, only page builders.

    Given a worst case of 400KB of data transfer per visitor after caching, 200Mb/s works out to roughly 60 uncached page loads a second (more concurrent visitors than that if their transfers spread over several seconds). During one launch on a website with heavy plugins, the peak CPU usage was 6 full cores, until the wifi provided on site started to have problems; there was 4G in the area, and many visitors used mobiles, for approximately 100-200 visitors a second at peak.

    From here I can estimate how much CPU is needed, but I'm also asking about others' experience of CPU usage based on visitor counts, not websites hosted, since no CPU is used when there are no visitors, only RAM and storage for files.

    Some plugins like Wordfence run scans once a week, so that will obviously use CPU. It would help to know how much CPU others see for visitor traffic as well.

    The other reason is that CPU cache size does make a difference, and AMD Ryzen tends to have more cache than Intel, so it would be helpful to know how well a heavy PHP WordPress site runs on different CPUs. There isn't an exact estimate, since things can change during the life of a website, but knowing approximately how much is needed is still helpful.
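    The launch figures above (6 full cores at a peak of roughly 100-200 visitors a second) already give a rough per-visitor CPU cost. A hedged sketch; the core and visitor counts come from this thread, and the assumption that CPU load scales linearly with request rate is mine:

```python
# Rough per-visitor CPU cost from the observed launch peak.
# Assumes CPU load scales roughly linearly with request rate,
# which ignores cache warm-up, cron spikes, and scan jobs.

peak_cores = 6.0                             # full cores observed at peak
visitors_low, visitors_high = 100.0, 200.0   # observed visitors/second range

cost_high = peak_cores / visitors_low    # 0.06 core-seconds per visitor
cost_low = peak_cores / visitors_high    # 0.03 core-seconds per visitor

# Cores needed at the ~62 pages/s that saturate 200Mb/s at 400KB/page
target_rate = 62.5
print(cost_low * target_rate, cost_high * target_rate)   # ~1.9 to ~3.8 cores
```

    Under those assumptions, a handful of modern cores would already be enough to saturate the 200Mb/s link, which supports the view that after optimisation the network, not the CPU, becomes the bottleneck.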

    @jnashhawkins
    I have given this thought. The problem is that in the region we need hosting for, the best on offer is pretty bleak despite good support. The datacenter isn't far, less than an hour's drive; the challenges are fitting the right cooling into 1U, deciding what extensions might be needed (PCIe-slot low-profile SSD cards versus integrated motherboard M.2), and using a proper server NIC. The problem with my region is that the bandwidth offerings are poor: a VPS here gets 1Mb/s dedicated, and looking through costs and hardware, it's cheaper to do colocation. Budget-wise there isn't enough to host redundant servers or rent cloud boxes, as there are no suitable or decent offerings hosted in my region rather than in Singapore.

    I can’t speak to the bigger numbers and multisite is my focus.

    I like the idea of using the Nginx box out front also… that’s actually part of my near future plans.

    I’m pro Cloudflare also… Cloudflare seems to boost the overall capabilities by around 16% to 20% but that depends on the server being able to keep up. Cloudflare starts throwing 500 errors when the origin server starts choking on the load. It appears the paid Cloudflare accounts are a little more patient.

    Other than that I can’t think of any reproducible numbers to give you.

    One more thing to look at would be a decent CDN to serve the static portion of your sites. I use W3 Total Cache and a CDN (no, I'm not talking about Cloudflare as a CDN). My CDN is underutilized but it helps.

    Thread Starter System Error Message

    (@systemerrormessage)

    @jnashhawkins my personal WordPress site, which is on a shared host, achieves a 0.6s load time with a total front-page size of 800KB after caching (including the cache bloat). That is just using W3 Total Cache for objects, database, and fragments, LiteSpeed to cache the files, and on Cloudflare a blanket rule to cache everything, with standard caching for wp-admin. I also use a plugin so the only external domain requested is Cloudflare's. I run quite a few plugins on it too, plus instant click from LiteSpeed, which does break small things on some pages, but I can live with that.

    Cloudflare is a CDN; there is a way to cache files on it for free, but I haven't figured out how to cache specific files with an effectively unlimited lifetime (like fonts), otherwise I could get around the bloat from caching the JS and CSS optimised by LiteSpeed.

    Sadly many good CDN services aren't even available in my region, the closest being Singapore, and many ISPs here are slow for international traffic. For example, I can load an MMO really fast on one ISP, but on other ISPs I have to go through many long failed attempts before finally getting in. Because most of the audience is local, the only solution is to host here rather than next door in Singapore; that removes many good options like inexpensive high-speed VPSes with decent interface bandwidth. Not many hosts provide Redis either.

    Dion

    (@diondesigns)

    It’s really difficult to provide any type of assistance without actually seeing some basic server stats, but I’ll try. You should definitely be using solid-state (NVMe) drives if throughput is important. MySQL is disk-intensive, and switching to solid-state drives will typically result in a noticeable drop in server load. A poorly configured MySQL or webserver can also have a significant impact on server load. If the datacenter supports GigE throughput, give some thought to upgrading your network card.

    I also suggest taking a look at the dedicated/colocation offers forums on webhostingtalk.com. There may still be some Black Friday and Cyber Monday deals available, and they are typically the lowest prices of the year.

    Otherwise, I suggest hiring a server administrator. They will be able to diagnose the bottlenecks in your server and suggest ways to minimize those bottlenecks.

    Thread Starter System Error Message

    (@systemerrormessage)

    @diondesigns thanks, I've been going over server stats. It seems a single visit uses a 1.5Mb/s burst from one website currently on the dedicated host; that sadly means a peak concurrency of around 100 users, a far cry from my planned 400 concurrent users per 100Mb/s. CPU usage is poor since it uses quite a bit, and the server also uses an HDD, resulting in a 2s load time. Part of the problem is that the provider's dedicated server offering is poor. I should write a little here so other admins can take note when configuring their servers.
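    The 1.5Mb/s-per-visit burst quoted above pins down the concurrency ceiling on a 100Mb/s interface. A quick sketch; the burst and link figures are from this post, and the worst-case assumption that all bursts fully overlap is mine:

```python
# Concurrency ceiling if every active visit bursts at 1.5Mb/s
# and all bursts overlap (worst case; real concurrency can be
# higher when bursts are staggered across the page-load time).
LINK_MBPS = 100.0
BURST_MBPS = 1.5

max_concurrent = LINK_MBPS / BURST_MBPS
print(round(max_concurrent))   # ~67 fully-overlapping visits saturate the link
```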

    In regards to server setup, KVM can be very helpful here. Here's a benchmark from when I ran Windows Server on KVM with openSUSE as the host, using a single WD Black HDD:

    (benchmark screenshot posted on imgur.com)

    So for hardware specs I've decided to go with the latest Ryzen 8-core, a motherboard with 2 NVMe slots, 32GB of fast DDR4 RAM, 2x512GB SSDs in RAID 0, and an Intel quad-port server NIC.

    For software config, the host will run openSUSE with the mail server, Redis, and an nginx proxy cache; KVM1 will run CentOS with Apache and ISPConfig; KVM2 will run Debian and MariaDB. I think this is the best setup I could go with: KVM's disk caching can use RAM and significantly speed things up, the host handles caching and the network interfaces, and the VMs get high-speed IO between them. In the future I can upgrade RAM and CPU and add a router, since the provider offers 100Mb/s interfaces only; that allows cheaply combining 10x100Mb/s interfaces into 1Gb/s via a single port, and a router with SFP+ is inexpensive, alongside second-hand SFP+ server cards.
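    The 10x100Mb/s aggregation plan above can be sanity-checked the same way. A sketch of how many bonded 100Mb/s ports a target page-load rate needs; the 400KB-per-page figure is from earlier in the thread, and treating all traffic as crossing the bond is an assumption:

```python
import math

# Ports needed to serve a target page-load rate over bonded 100Mb/s links.
# Assumes 400KB (3,200,000 bits) of uncacheable data per page load and
# that all of it crosses the bonded link.
PORT_BPS = 100 * 1_000_000      # one 100Mb/s port, in bits/second
PAYLOAD_BITS = 400 * 1000 * 8   # 400KB per page = 3,200,000 bits

def ports_needed(pages_per_second: float) -> int:
    return math.ceil(pages_per_second * PAYLOAD_BITS / PORT_BPS)

print(ports_needed(62.5))    # 2 ports cover ~62 pages/s
print(ports_needed(300))     # 10 ports cover 300 pages/s
```

    Two ports already match a single 200Mb/s interface, and the full 10-port bond covers roughly 300 uncached page loads a second.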

    On an optimised server, I think the bottleneck is the network. The new Ryzen CPUs offer much better performance than the Xeons offered at datacenters here, have a lot more cache (which is pretty helpful for hosting websites), and allow easy CPU upgrades. But at 1U, I'm going to need to find closed-loop liquid cooling that fits, or is at least PCIe-slot mountable. I'm not a fan of LiteSpeed despite the low resource usage, since it makes it difficult to add OPcache and configure things.

    Over here the offerings are really poor. Very few international hosting services actually have servers in my country, the local ones don't advertise themselves, and every offering I've seen is poor, so colocation is best: it gives me full control over the server config, which dedicated hosting here does not, and a VPS here only gets 1Mb/s. Just passing from my country to the hub next door incurs a big network performance penalty for locals, since most ISPs here throttle international traffic.

  • The topic ‘Hardware needed to host multiple wordpress sites’ is closed to new replies.