In the vast, multi-layered subject area of web performance, server response time is an important metric. From a CMS standpoint, however, it is one of the most significant. Best practice recommends a time-to-first-byte (TTFB) of 200ms or lower. That is the window within which a CMS has to figure out what to do with the request, parse templates, query the database, render the HTML, capture it all, and send it back as the response.
Craft CMS — our preferred CMS for content-heavy websites here at Miranj — is quite fast out of the box, but as our pages and content models grow in complexity so does the server response time. Add traffic to that mix and it can quickly lead to poor TTFBs and slow overall experience.
In this talk we go through a multi-tiered caching strategy using Craft and Nginx that enables a single VPS to consistently deliver sub-200ms response times, even while handling loads of 10 to 100 concurrent requests per second. It covers our learnings from optimising a low-powered server to handle millions of visitors each month on the Guiding Tech website project. We achieve this by caching the website at two levels: at the web server, using Nginx’s FastCGI micro-caching, and at the CMS, using flag-based template caches in Craft CMS. We also factor in real-world edge cases, such as bypassing the cache and delivering variations to different visitors, that any robust, production-ready system must account for.
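For a flavour of what the web-server tier looks like, here is a minimal sketch of Nginx FastCGI micro-caching. The cache path, zone name, PHP-FPM socket, and TTL values are illustrative assumptions, not taken from the talk; a production setup would also need purge and bypass rules tuned to the site.

```nginx
# Define a small on-disk cache zone for FastCGI (PHP-FPM) responses.
# Path, zone name ("CRAFT"), and sizes are assumed values for illustration.
fastcgi_cache_path /var/run/nginx-cache levels=1:2
                   keys_zone=CRAFT:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;
    server_name example.com;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;

        fastcgi_cache CRAFT;
        # "Micro" caching: cache successful responses for a very short
        # time, so bursts of concurrent requests hit the cache instead
        # of the CMS.
        fastcgi_cache_valid 200 1s;
        # Serve a stale copy while one request refreshes the cache.
        fastcgi_cache_use_stale updating;
        # Skip the cache when a cookie is present (e.g. logged-in users).
        fastcgi_cache_bypass $http_cookie;
        # Expose HIT/MISS/BYPASS status for debugging.
        add_header X-Cache $upstream_cache_status;
    }
}
```

Even a 1-second TTL can collapse dozens of concurrent requests for the same page into a single upstream render, which is what makes micro-caching effective under the traffic levels described above.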
This talk was prepared for and delivered at Dot All 2019 in Montréal, Canada. This was my first time speaking at a conference outside India, but any nervousness I carried on stage was quickly dispelled by the warm engagement and wonderful conversations with the Craft community.
I would love to hear about your mileage from adopting any of the caching strategies mentioned in the talk, or about any alternate approaches you have implemented to optimise Craft CMS for heavy loads and high traffic.