From a SysAdmin point of view, sites with some kind of FPC can handle much more traffic, with fewer server resources (read: cheaper for you, IT Manager) and are usually much better at handling sudden traffic spikes.
Varnish is as fast as it gets. But Varnish requires a lot of skill to implement well and work around any niggles (and there are always issues). Now, I love Varnish, but for many it’s just too complex, or time consuming, especially if you’re working to a tight deadline. Instead, there are plenty of code-based solutions which aim to implement Enterprise-like FPC but for a fraction of the cost.
I’ve seen a lot of Community Edition customers using this extension successfully, and I wanted to see what it was all about.
Special mention here to the folks at Mirasvit, who were kind enough to send us a copy for evaluation at Rackspace. Their turnaround was quick, so I’m confident you’ll get a responsive support experience too. For us, that’s really important.
Your Mileage May Vary
I was testing with stock Magento Community 18.104.22.168 and the sample data.
The settings I’ll discuss here should work fine for most, but your mileage may vary if your Magento store is heavily customised. Always test new modules in a staging environment before implementing them on your live website.
I pretty much followed the bundled instructions – no need for me to detail it here but it was very straightforward. See also the Mirasvit FPC user manual.
Let’s dive into the config, in your Magento Dashboard (System > Configuration > MIRASVIT EXTENSIONS > Full Page Cache).
- Enabled: Yes (obviously)
- Cache Lifetime (sec): I’ve gone for two days here, you could use more. If your site gets indexed by a search engine once a day, the first hit will warm up, and the page won’t have expired by the next day’s index. If your site traffic is quite low, and it could be a few days between page views of any one particular product, then you should keep this value high, like a week (604800 seconds).
- Flush Cache Expr: Leave it empty to disable auto-flushing. I tested that saving a product automatically expires the relevant pages, so you are not likely to see out-of-date content. My general rule is that you shouldn’t have to specifically flush caches (development aside); the more you flush them, the less effective they are.
- Max. Cache Size (Mb): 128 is probably OK for most, but you might need more if you have a lot of products/categories. You should understand where your cache is stored, though, before increasing this. For example, if you’re using a 512M Redis instance from ObjectRocket, then setting this higher than 500 would start to cause problems once the instance fills up. For a local Redis instance, the maxmemory directive in /etc/redis.conf is what matters here.
- Max. Number of Cache Files: 20000 seems ample; you might need to increase this if you have a lot of SKUs, categories, etc.
- Enabled (Crawler): Yes. If your site is pretty busy, and your expiry times are high, then you might find your customers do a great job of warming up the cache for you. For quieter sites though, or to ensure that most people hit cache most of the time, definitely enable it.
- Number of Threads: 1. First, find out how many CPU cores are available on your server — lscpu will tell you quickly. My test site runs on a small Cloud Server with only one vCPU core, and my load-testing experience suggests the default of ‘2’ could slow things down there. As a rule of thumb, half your core count should safely avoid impacting performance for real users.
- Thread Delay: I’ve put half a second in there to further reduce load impact.
- Limit of Crawled URLs per Run / Schedule: A higher limit here warms up the cache more quickly, in conjunction with the Schedule, but the idea is to stop the crawler running away with itself and endlessly hammering your server. The default setup crawls up to 10 URLs every 15 minutes, which is fairly conservative: only 40 pages per hour. Something like 20 URLs every 10 minutes should be fine. If you wanted to get even more granular, you could run every 10 minutes but avoid peak hours (let’s say they are 12-2pm and 6-10pm), with something like:
- */10 0-11,14-17,22-23 * * *
- Sort Crawler URLs by: Popularity. Sounds sensible; I didn’t bother setting up the custom order.
- Run crawler as apache user: No. I didn’t need this, because my PHP runs under FPM as its own user, and that same user also runs the Magento cron job.
- Max. Allowed Page Depth: 10. I’ve seen sites where heavily layered or even cyclical navigation leads to endless unique URLs, and it’s not practical to cache them all. This is there to prevent over-caching of those pages, and 10 seems like a decent default value.
- Cacheable Actions: The defaults here are the home page, product pages, and category pages. That’s probably fine for most; you might need to add bits if you have heavy CMS pages, or if your store is heavily customised.
- Allowed/Ignored Pages: What it says on the tin. Maybe you have a special CMS page which includes a live Twitter feed, and you don’t want to cache it.
- User Agent Segmentation: If you have a responsive theme, you won’t need this. If any part of your code relies on device detection, like a different tablet layout, or a popup about your iPhone app, then it’s likely you need to use this. The example I used should take care of most popular devices right now (2015); you might need to craft your own expressions depending on which devices/browsers your site cares about. One thing you should not do is separate GoogleBot or other engines/bots/crawlers — if you do that then they’re less likely to get the page from your main cache. Faster for them is good for your rankings, and hitting the cache is good for your server load.
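To make the Redis sizing advice above concrete, here’s a quick back-of-the-envelope check. The figures are examples only; plug in your own Max. Cache Size and the maxmemory value from your Redis config.

```shell
#!/bin/sh
# Example figures only -- substitute your own values.
FPC_MAX_MB=128            # Mirasvit "Max. Cache Size (Mb)" setting
REDIS_MAXMEMORY_MB=512    # maxmemory from /etc/redis.conf (or your hosted plan)

# Magento's config/block caches (and possibly sessions) may share this Redis
# instance, so keep the FPC comfortably under half of the total.
if [ "$FPC_MAX_MB" -le $(( REDIS_MAXMEMORY_MB / 2 )) ]; then
    echo "OK: FPC fits with headroom"
else
    echo "WARNING: FPC could crowd out other cached data"
fi
```

You can check the live figure with `redis-cli config get maxmemory` rather than trusting the config file.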
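The “half your cores” rule of thumb for the crawler’s Number of Threads can be scripted; this sketch assumes nproc is available (it is on most modern Linux distributions):

```shell
#!/bin/sh
# Suggest a crawler thread count: half the CPU cores, but never below 1.
CORES=$(nproc)        # lscpu gives you far more detail if you want it
THREADS=$(( CORES / 2 ))
[ "$THREADS" -lt 1 ] && THREADS=1
echo "Suggested Number of Threads: $THREADS"
```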
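To illustrate the User Agent segmentation idea, here’s how you might sanity-check a device regex against sample User-Agent strings before putting it in the config. The pattern below is my own rough example, not the extension’s shipped default — tune it for the devices your store actually cares about.

```shell
#!/bin/sh
# Rough mobile-device pattern, for illustration only.
MOBILE_RE='iPhone|iPod|Windows Phone|BlackBerry|Android.*Mobile'

classify() {
    if printf '%s' "$1" | grep -Eq "$MOBILE_RE"; then
        echo mobile
    else
        echo desktop
    fi
}

classify 'Mozilla/5.0 (iPhone; CPU iPhone OS 8_4 like Mac OS X) AppleWebKit'   # prints "mobile"
classify 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36'              # prints "desktop"
```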
The debug options are pretty self-explanatory, and should usually be disabled in production. The Time Stats are really handy to compare uncached vs. cached performance, and I like that you can show these only for your IP address(es). The code is using $_SERVER['REMOTE_ADDR'] though, so it won’t work behind reverse proxies or load balancers.
The obvious thing here is that you get another option for Full Page Cache under System > Cache Management. You’ll need to enable that, then flush all cache, for the FPC to start working.
Everyone loves a nice graph – ask NewRelic – and understanding how your cache is performing will help you drive a faster experience for your users. Here’s what Mirasvit FPC adds to your Cache Management page:
You can zoom the graph to a smaller time period, or get an overview for much longer.
Source: http://fpc.demo.mirasvit.com/admin/?demo=fpc (because my test store didn’t have enough data for an interesting graph).
More screenshots on the Mirasvit website or Magento Connect.
When testing with two browsers side by side, I did at first get some crossover where one browser would see the page with cart contents showing from the other session. I found that this was down to the way Magento includes the Session ID in the URL by default, combined with the default of not doing any session validation. After disabling that, everything worked as expected.
- System > Configuration > Web > Session Validation: “Use SID on Frontend” = “No”.
- Clear all cache to apply.
What I liked
- Easy setup. I just plonked the files in place, and pressed “go”. You may want to tweak the default settings as above, but it pretty much works out of the box.
- No extra local.xml config. It just uses whatever <cache><backend> you already have configured, which is great. I was already using Redis, and Mirasvit lapped it up.
- Good support: That’s the main theme in the comments on Magento Connect, and the team did respond to my email within a day. For an extra $50 USD, Mirasvit will even install the plugin for you – great if you don’t have the skills or don’t have a developer on hand.
- Cache Rules are really nice to configure, and extras like User Agent separation mean that it’s very flexible.
- Built-in Crawler seemed to work really well, and it won’t smash your server to pieces.
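On the “no extra local.xml config” point: the extension piggybacks on whatever cache backend Magento already has. For reference, a typical Redis backend block in app/etc/local.xml looks roughly like this — host, port and database values are examples, and the full option list is in the Cm_Cache_Backend_Redis documentation:

```xml
<config>
  <global>
    <cache>
      <backend>Cm_Cache_Backend_Redis</backend>
      <backend_options>
        <server>127.0.0.1</server> <!-- example host -->
        <port>6379</port>
        <database>0</database>
        <compress_data>1</compress_data>
      </backend_options>
    </cache>
  </global>
</config>
```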
I didn’t test:
- Dynamic blocks. Blocks and layouts are going to be unique for each store, so working on the default probably won’t help. Mirasvit provide full documentation and offer to help with this as part of the installation service. You may not need to configure this – in my experience, hole-punching for dynamic blocks generally creates more complexity and extra work in the long run for your frontend developer. Simply using the cache as-is will still cut out 90% of server load while keeping your deployment simple.
- Debug stats behind a reverse proxy. A lot of the customers I work with have their main web server(s) behind a load balancer, or maybe a CDN like CloudFlare. It’d be nice to see this implemented from the Magento client IP, which can be configured in local.xml to get the real IP from X-Forwarded-For or any other HTTP headers.
- UPDATE from Mirasvit: “We will use similar approach in our next releases.”
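For background, the local.xml tweak I’m referring to is Magento’s remote_addr_headers setting, which tells Magento which forwarded-IP header(s) to trust when working out the client IP. A sketch along these lines should work — the header names are examples, so match whatever your load balancer or CDN actually sends:

```xml
<config>
  <global>
    <remote_addr_headers>
      <header1>HTTP_X_FORWARDED_FOR</header1>
      <header2>HTTP_X_REAL_IP</header2>
    </remote_addr_headers>
  </global>
</config>
```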
Quick to get going, feature-rich, and not overly complicated, it’s a great alternative to a complex Varnish configuration. My cached page loads (coming from Redis) were around 37-70 ms, which is on par with the Enterprise FPC. With great support too, and all for a one-off $149, it’s probably the best money you could spend on your Community Edition Magento store.