Getting HTTP/2 up and running is such a pain that I decided to try to spare the rest of the world some of it by sharing the story. Let’s get the basics out of the way first: the biggest advantage of HTTP/2 is that files get interleaved over a single connection, which in plain language means you can send a thousand one-kilobyte files for virtually the same cost as one huge concatenated one-megabyte file. Practically this means you don’t need huge, complex build steps where you attempt to inline all your dependencies and concatenate the entire thing into one massive file. If you want to read up on the other advantages, this is the best resource I found.
So, in this article we’re going to assume that our intention is to deploy just the Polymer Starter Kit and a simple node backend API. Additionally, the goal of implementing HTTP/2 is not to gain marginal speed benefits, but to ease development by making production more like development and by simplifying the build process.
Hurdle number Zero: Should we use it yet?
Do browsers even support it yet? And how bad is the impact on browsers which don’t? Well, let me just share the numbers I found during my research: assuming Europe (the US has slightly more IE users, so is slightly worse off) we can estimate based on the linked data that around 75% of all browser users support HTTP/2 (assuming half of IE11 users are on Windows 8.1 or older). If we then discard the browsers which aren’t supported by Polymer anyway, we can drop 9.5% of users, so about 83% of Polymer-supported browsers support HTTP/2 (75 / 90.5 ≈ 83%). This leaves us mostly with a huge number of IE11 users on Windows 8.1 and older, and a fair number of users on old versions of Safari, on both desktop and mobile.
Now, that sounds pretty bad, but the ‘only’ consequence for that last 17% is that your web app will load significantly slower for them, whilst it loads slightly faster for everyone else. The question then becomes to what extent you have to support those browsers. In my case this trade-off was acceptable, especially as IE support is far from certain and older devices (like old iPhones with outdated Safari browsers) will probably have a hard time running my application either way, so I am not going to worry too much about them.
Hurdle number One: Secure connections
For reasons I am not going to discuss, most browsers have decided that HTTP/2 will require a secure connection to work. It boils down to the fact that a couple of browser makers have come up with the idea of forcefully deprecating insecure connections by requiring HTTPS for cool new features. Be that as it may, the standard argument in their defense is that those same people have invested a lot of time in building Letsencrypt, a service which should simplify getting free certificates. Now, it turns out Letsencrypt isn’t that simple at all, but that’s enough of a pain that I will write about it separately to keep this post short.
Hurdle number Two: Serving HTTP/2
For simplicity’s sake I originally wanted to just run everything on node. The idea was that I would have two node instances running on separate ports, and an extremely simple third instance on port 80 which would pass requests along to the first two based on the host name, using bouncy or node-http-proxy. I knew this probably wasn’t going to be as performant as an nginx solution, but at least it was supposed to be simple to set up and would allow me to easily run the same setup in both development and production. Well, the important takeaway here is that even though there is a node implementation of HTTP/2 with near-perfect compatibility with the standard HTTPS API, that still won’t let you easily make something like bouncy work with it. I actually got pretty far, but in the end it just wasn’t going to work without forking a second, bigger project as well (bouncy is just 125 lines), so I gave up on that.
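For reference, the hostname-based routing itself is simple enough; here is a minimal sketch of the idea with node-http-proxy over plain HTTP (the ports and host names are illustrative, and error handling is omitted):
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

http.createServer(function (req, res) {
    // Route by host name: the API subdomain goes to one node
    // instance, everything else to the other.
    var target = req.headers.host === 'api.example.com'
        ? 'http://127.0.0.1:8002'
        : 'http://127.0.0.1:8001';
    proxy.web(req, res, { target: target });
}).listen(80);
The trouble, as described above, starts once you try to swap the plain http server for the HTTP/2 implementation.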
Next I decided to use nginx after all, thus sacrificing some of the development simplicity I was looking for, in the hope that nginx would support HTTP/2 better. We still have to use the node instance for the API, but PSK can be served directly by nginx. Now, just in case you’re new to nginx like I was: to set it up you have one server block per ‘domain’, as it’s called. Each server has a listen directive where you set which port it listens on, and all you need to do is add http2 to this directive to serve it over HTTP/2… or so the stories went. It turns out that whichever version of Ubuntu you run, be it the latest LTS version (14.04) or the most recent version (15.10), they all come with a version of nginx which doesn’t support HTTP/2, and the error message isn’t that clear either. The version you need for HTTP/2 support is 1.9.5 or newer. As the company behind nginx is a commercial one it can be a bit hard to find the installation instructions for the open source version on their site, so here is a direct link to the relevant portion.
Either way, once we have that version up and running our config file will look something like
server {
    listen 80 http2;
    server_name example.com;
    root /var/www/example/;

    location / {
        try_files $uri $uri/ =404;
    }
}
Important: no browser out there would currently support the above server, as it’s serving HTTP/2 over an insecure connection. I just left out the SSL-related directives to keep it plain and simple.
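For completeness: with certificates in place the relevant directives would look something like the following (the paths are the Letsencrypt defaults and will depend on your setup):
listen 443 ssl http2;
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;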
And if we want to use URLs without #s in them, the official PSK release uses historyApiFallback for that, but in nginx we would either have to put index.html at the end of the try_files directive instead of the =404, which means that any 404 will load the index.html (a practice I personally don’t like much, because it means 404s return 200 status codes), or recognize those natural URLs beforehand. Luckily there is a way to do the latter: they don’t contain any .s, so we can use the following location directive I concocted
location ~ ^[^.]*?$ {
    default_type text/html;
    index index.html;
    alias /var/www/example/;
}
which can be placed before the generic location directive.
Next we still need to support the API proxy as well, so we add another configuration file with
server {
    listen 80 http2;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:8002;
    }
}
to simply proxy that. I am not sure how much use there is in serving an API over HTTP/2, but as far as I know there isn’t a reason not to, and HTTP/2 should be marginally faster for single resources either way, so go HTTP/2.
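For what it’s worth, the node instance behind that proxy can stay completely ordinary; a minimal hypothetical stand-in (listening on port 8002, matching the config above) would be:
// Toy API instance; nginx proxies api.example.com to this.
var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok' }));
}).listen(8002, '127.0.0.1');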
Hurdle number Three: Your new build process
So, we are finally able to serve files over HTTP/2, but that doesn’t necessarily mean we can get rid of a build process entirely. If you don’t want to make your application available offline, then you actually can drop gulp entirely. To do this you would have to move bower.json to your app directory and run bower inside app. And if you still want to use Browsersync, check out their website for information on how to use it directly (the static sites example).
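In practice that boils down to very little code; a sketch, assuming the PSK app directory and an arbitrary port:
// Serve the app directory directly, no gulp involved.
var browserSync = require('browser-sync').create();

browserSync.init({
    server: 'app',
    port: 5000
});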
But if we want to make our website available offline, things become a lot harder, because we will still need to create a cache-config.json file for the platinum-sw elements… which is much harder because without vulcanization we suddenly have to selectively list only the relevant files from bower_components. I will leave that part for a third separate post, about service workers.
Picking the fruits of your labors
Well, the simplified build process is already great, but there are more cool things to realize:
Loading elements dynamically suddenly becomes a relative walk in the park. Whilst in the past you had to set up different vulcanization paths and load the relevant element groups that way, you can now simply call
this.importHref('elements/somedirectory/element.html');
to load the relevant elements. The easiest way to implement this in PSK that I have figured out so far is to create a list of dependencies (well, it’s always just a single element in my setups, as I create <page-*> elements) in routing.html and then importHref those before changing app.route. So it will end up looking something like the following:
var loadedDependencies = [];

app.loadDependencies = function (els) {
    // Skip anything we imported before; filter returns a new
    // array, so the result has to be assigned back.
    els = els.filter(el => loadedDependencies.indexOf(el) === -1);
    els.forEach(el => {
        loadedDependencies.push(el);
        this.importHref('elements/' + el + '.html');
    });
};

page('/login', function (data) {
    app.loadDependencies(['login/page-login']);
    app.route = 'login';
});
Still, the cool thing is that there is no need to rush this. If your website needs it you can add it anytime, and if you don’t need it you can wait (as I am doing right now, though I think I am going to use it on mobile devices and load everything at once on desktops/laptops).
You can now drop the is: 'some-element' line from your element definitions, as the Polymer() function will always be in the same file as your dom-module, which, if my understanding is correct, means there will never be any issues with automatic discovery. The reason this was a problem in the past was that after the build process your HTML could end up in one file whilst your JS was in another (especially if you were using something like crisper), and thus explicit naming was a requirement.
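To illustrate, here is a made-up element; the point is the missing is: line inside the dom-module:
<dom-module id="page-example">
    <template>
        <h1>[[greeting]]</h1>
    </template>
    <script>
        // No is: 'page-example' needed; Polymer infers the name
        // from the id of the enclosing dom-module.
        Polymer({
            properties: {
                greeting: { type: String, value: 'Hello' }
            }
        });
    </script>
</dom-module>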
Reference files safely relative to your element file. For example: I have an element which loads various texts, and these texts are distributed with the element. Now, assuming an ES7-capable browser, I can safely write something like the following
// Inside an async element method; resolveUrl resolves the path
// relative to the element's own import.
var response = await fetch(this.resolveUrl('texts/' + this.textName + '.md'));
var text = await response.text();
console.log(text);
without having to make an incredibly complex copy step in your build process which copies those files to the exact same location as in development, minus the files that have been vulcanized.
And lastly, we should be able to wave iron-iconset, and CSS spriting in general, goodbye, which is super obvious but still something to look forward to.