As I get closer to launching my redesigned website (this side of Christmas would be nice), I wanted to ensure pages were served over HTTPS. This reflects a broader community effort to make the web more secure and trustworthy, a move encouraged by the W3C.
Enabling HTTPS wasn’t too difficult, largely thanks to Josh’s excellent how-to guide. He recommends getting a free certificate from StartSSL, but as I have sub-domains for different staging and image resizing servers, I bought a wildcard certificate from NameCheap instead. The launch of Let’s Encrypt later this year should hopefully make this exercise easier and better yet, free.
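For anyone following along, the nginx side of this amounts to a server block that points at the certificate files. This is only an illustrative sketch — the domain, paths and filenames are placeholders, not my actual configuration:

```nginx
# Serve the site over HTTPS using a wildcard certificate
# (domain and certificate paths are placeholders).
server {
    listen 443 ssl;
    server_name example.com *.example.com;

    ssl_certificate     /etc/ssl/certs/wildcard.example.com.crt;
    ssl_certificate_key /etc/ssl/private/wildcard.example.com.key;

    root /var/www/example;
}

# Redirect plain HTTP requests to HTTPS.
server {
    listen 80;
    server_name example.com *.example.com;
    return 301 https://$host$request_uri;
}
```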
An unexpected consequence of enabling HTTPS
With my certificates installed and HTTPS enabled, I visited my site and smiled broadly as I saw a little lock icon next to my domain. Ah, such simple pleasures.
One reason for enabling HTTPS was to examine its impact on performance, so I submitted my newly secured site to Google’s PageSpeed Insights tool. Expecting a 100% score for speed as I had before, I instead got a value in the low 90% range. Enabling compression would reduce the response size of my static assets, I was told. But wait, I had enabled compression. Why was this setting not being respected?
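For reference, the compression I thought I had enabled looks something like this in nginx — an illustrative fragment, with the MIME types and threshold chosen for this example rather than copied from my setup:

```nginx
# Compress text-based static assets before sending them over the wire.
gzip on;
gzip_types text/css application/javascript image/svg+xml;
# Skip very small responses, where compression gains little.
gzip_min_length 1024;
```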
I eventually realised this was a server-related issue¹, yet my original line of enquiry had led me to believe compression was disabled because using both SSL² and HTTP compression could make a site vulnerable to the BREACH attack. This was the first time I had heard about such an exploit, which I find surprising given that it can occur when authors follow two sets of best practice. More precisely, a site is vulnerable to this attack when pages:
- Are served from a server that uses HTTP-level compression
- Reflect user input in HTTP response bodies
- Reflect a secret (such as a CSRF token) in HTTP response bodies
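To see why that combination is dangerous, consider a toy sketch of the underlying side channel. The page below is entirely hypothetical — the secret, the HTML and the guesses are invented for illustration — but it shows how DEFLATE compresses repeated bytes, so a response that reflects a correct guess next to the secret comes out smaller than one reflecting a wrong guess:

```python
import zlib

# Hypothetical secret (a CSRF token) reflected in the response body.
SECRET = "csrf_token=7ab29f31"

def compressed_response_size(user_input: str) -> int:
    # A vulnerable page reflects both the secret and attacker-controlled
    # input in the same HTTP-compressed response.
    body = (
        f"<html><p>You searched for: {user_input}</p>"
        f"<form><input type='hidden' value='{SECRET}'></form></html>"
    )
    return len(zlib.compress(body.encode()))

# When the guess matches the secret, DEFLATE replaces the repetition with
# a short back-reference, so the compressed response is smaller. An
# attacker who can observe response sizes can use this to recover the
# secret incrementally.
matching_guess = compressed_response_size("csrf_token=7ab29f31")
wrong_guess = compressed_response_size("csrf_token=x1y2z3w4")
print(matching_guess < wrong_guess)
```

Note that HTTPS encrypts the response but does not hide its length, which is exactly what this side channel measures.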
Any discussion about encryption usually leaves me confused and befuddled (the sheer number of abbreviations doesn’t help), but the BREACH team suggest a number of ways to mitigate an attack. Their suggestions are ordered by effectiveness, and it’s worth noting that the first is to simply disable HTTP compression. Given the impact this can have on performance, that’s quite concerning.
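If disabling compression outright feels too costly, one compromise is to be selective about what gets compressed. Static assets like stylesheets contain no secrets, so only dynamically generated pages need protecting. A hedged nginx sketch (paths and file extensions are illustrative):

```nginx
# Keep compression for static assets, which contain no secrets...
location ~* \.(css|js|svg)$ {
    gzip on;
}

# ...but serve dynamic HTML uncompressed, since it may reflect
# user input alongside secrets such as CSRF tokens.
location / {
    gzip off;
}
```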
My new website serves only static resources (pages are generated using Jekyll on my local machine), does not submit any form data (my contact form uses Formspree, a third-party service), and does not transmit any sensitive data. Therefore I believe it’s safe for me to continue using compression. However, if you have a website that posts sensitive data (perhaps you have a secure admin area), you may wish to investigate this issue further.
In the meantime, perhaps there needs to be a discussion about which approaches will help us build websites that are both secure and fast.
I originally thought I wasn’t able to override this behaviour because requests to my custom-configured nginx instance are proxied via WebFaction’s own front-end nginx process. After considerable head scratching, I realised several conflicting nginx processes were running. Quitting these and starting a single new process fixed the issue. ↩︎
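If you suspect the same problem, listing the running nginx processes is a quick first check — a healthy setup normally shows one master process plus its workers. The restart commands below are commented out and purely illustrative; the binary and config paths will differ on your system:

```shell
# List every running nginx process (or say so if there are none).
pgrep -a nginx || echo "no nginx processes running"

# If duplicate master processes appear, stop them all and start a
# single fresh instance (paths are illustrative):
# pkill nginx
# /usr/sbin/nginx -c /etc/nginx/nginx.conf
```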