How we made OpenClassrooms load faster. Much faster.

Page load time is hard to keep low, especially on a large, ever-evolving site like OpenClassrooms. We need to add features, improve our design, and support both large resolutions (shiny MacBook Retina screens) and small ones (phones), while keeping the whole thing compact and fast.

The fact is: loading time is always going up, even with day-to-day optimizations.

We’ve worked a lot on this over the past few weeks, testing extensively with Chrome DevTools, which lets us throttle the CPU and/or the network connection on purpose.

We’ve found our bottleneck: mobile. Like any website, we’re seeing a growing share of traffic from mobile devices: more than 12% of page views today, up from 3% just 4 years ago. This increase explains our worsening metrics: the more mobile usage we have, the more slow devices we have, and the higher our average page load time climbs.

Smartphones are not great at handling JavaScript. Even the best phones are slow compared to an average computer. We’ve added a lot of JavaScript since the early days (maybe too much, and we need to clean that up, but this is a long-term plan that we’re executing step by step).

This was our page load time before the change:

Maximum is 10s, minimum is 7.5s. It’s an average, so some users load pages within 1 second (desktop users with fast connections in France), while others wait 30+ seconds (mobile devices on slow connections in faraway countries).

Our APDEX threshold is set to 7 seconds, and we had a score of ~0.7. Roughly speaking, 70% of our users loaded the page in under 7 seconds, and 30% took longer.

This is our page load time now:

Load time is now between 4.3 and 6 seconds. Our APDEX score jumped to 0.9+.
We are going to lower our APDEX threshold to 6 seconds and continue optimizing until we are confident enough to move it to 5 seconds. 🙂

Here are both load times on the same chart for easy comparison:

The good news is that this optimization helps mobile a lot, but it’s also good for desktop.

Here are some stats from a simulated Chrome desktop user in India:

The idea is to see what a user’s experience would be when they try to reach the website from far away with a slow connection.

Before/after:

We can see the loading time is cut in half. There are also fewer requests, because we are counting requests made DURING the initial page load/rendering. The majority of requests now happen after the initial load, helping the browser show the page quickly. (Note we also got better PageSpeed and YSlow grades.)

And here is the omg-we’ve-done-it chart:

(The black vertical bar is the moment when we pushed the change in production :p)

Let’s get technical!

Our <script> tags used to be included at the end of the HTML rendered by the server, just before the </body> tag. It’s a very common approach, and most of the time it’s the best way to load JavaScript.

We basically have 2 big scripts to load: one with all the vendors (jQuery, React, etc.) and one with our own code. Scripts are loaded asynchronously, then parsed one after the other, blocking the page’s load event (that’s why putting <script> in the <head> element is a bad idea; it blocks everything after it).

What usually happens? The browser loads the page, parses the HTML from start to end, and can start to display something, BUT then it has to handle the first <script> tag. It must download the JavaScript files and then parse them. During that time, visual rendering can be slowed down by JavaScript parsing (this was our case).

Our solution is simple: defer JavaScript parsing until the page is rendered. The implementation, however, is not so simple, because each browser handles loading slightly differently.

Thanks to this great article, we found a solution that works nicely everywhere.

Instead of outputting regular <script> tags pointing to JavaScript files, our server outputs the list of scripts needed in one JavaScript array, like this:
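A minimal sketch of what that inline output might look like (the variable name and file paths here are illustrative, not our actual bundle names):

```javascript
// Inlined by the server at the end of the HTML: no external request,
// just a plain array declaring which scripts the page needs, in order.
var scriptsToLoad = [
  '/js/vendors.min.js', // jQuery, React, etc.
  '/js/app.min.js'      // our own code
];
```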

Those script tags are non-blocking because they are inlined.

Then, at the end of the <body>, we added a small script that creates <script> elements on the fly and injects them into the body when the ‘load’ event is fired by the window object:
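Here is a sketch of such a loader (function and variable names are assumptions, not our exact code). The injection logic is split into a small function so the browser wiring stays at the bottom:

```javascript
// Create one <script> element per URL and append it to <body>.
// Setting async = false preserves execution order even though the
// files may finish downloading in any order.
function injectScripts(doc, urls) {
  var created = [];
  for (var i = 0; i < urls.length; i++) {
    var script = doc.createElement('script');
    script.src = urls[i];
    script.async = false; // keep order: vendors before app code
    doc.body.appendChild(script);
    created.push(script);
  }
  return created;
}

// In the browser: wait for the 'load' event, so all initial rendering
// is done before the scripts are fetched and parsed.
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
  window.addEventListener('load', function () {
    injectScripts(document, window.scriptsToLoad || []);
  });
}
```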

The tags are created when the page’s ‘load’ event fires, meaning all the initial rendering is done and the browser has free time to load and parse our JavaScript code.

It’s important to note that we have to set the <script> tags’ async property to false in order to keep the execution order (script 1 is big and loads more slowly than script 2, but the first must always execute before the second).

Caveats

Of course, it’s not black magic. Because the page “loads” faster, the user sees most of the page displayed sooner; that’s a fact.

However, all JavaScript-heavy tasks still execute only after everything is loaded and parsed. It means rich interactivity within the page (instant course search, MCQs, the course editor, etc.) is still delayed on a first visit over a bad mobile connection. On subsequent visits, the browser cache speeds up file loading and everything executes faster.
