UI-less custom elements?
So, when you start using Polymer you will at some point encounter `<iron-ajax>`, and if you're like me you will be somewhat confused by the fact that it's an element. After all, it has no UI and seems entirely misplaced in the DOM. Now, after that you can do one of two things: either accept it as ‘the way it works in Polymer land’ or just ignore it and not use elements like that. I - somewhat uncomfortably - did the former[1] and continued making a lot of elements like that. I made complex `<auth-api>` elements that would consume my API and toolbox elements like `<text-utils>` that provided a set of functions to work with text. And all seemed good in the land of Polymer.
Until one fateful day I was implementing a complex web worker and realized I couldn't use any of those libraries, as web workers (and service workers) do not implement the DOM. Which sucked big time, as I needed the functions on my `<text-utils>` element. Now, you might think this won't ever matter for you, as you aren't using a web worker, but the thing is: you never know whether you will end up using one of your libraries in a pure JS environment like node.js or a web worker. And most importantly, you're breaking with the spirit of the web platform:
- HTML and elements are for the content and semantics
- CSS is for the styling of those elements
- And JS is for adding logic to all that
Which directly links back to the web worker issue, because the decision to not include a DOM implementation in web workers exists exactly because of this spirit of the web platform. There should be no need to have a DOM library in web workers, and if you do have that need you're probably doing something wrong[2]. And sharing libraries between the front and back end is becoming more and more popular as well; after all, what's more beautiful than having the exact same validation logic on both sides, written out only once?
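To make that concrete: a library along the lines of the sketch below (the name and regex are purely illustrative, not from my actual code) runs unchanged in the page, in a web worker and in node.js, precisely because it never touches the DOM.

```js
// A minimal sketch: a plain JS function with no DOM dependencies,
// so it can be shared between the browser, workers and node.js.
function isValidEmail(value) {
  return typeof value === 'string' && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}
```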
Now, from my discussions in the Slack group I know that a lot of people disagree with this, and the biggest argument is how easy the data binding is to use with `<iron-ajax>`. There is no denying that it's simple when you're making very basic sites and can use the `auto` option, but would
```js
observers: [
  '_reloadContent(page)'
],
_reloadContent: async function(){
  this.content = await ajaxRequest('/api/my-content', {page: this.page});
}
```
be so much worse? If anything I - personally - think the above is a lot clearer[3] and more to the point.
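Note that `ajaxRequest` in the snippet above is not a Polymer API; it's a stand-in for whatever plain JS request helper you prefer. A minimal fetch-based sketch might look like this:

```js
// Hypothetical ajaxRequest helper built on fetch; the query string handling is
// deliberately simplistic and error handling is kept to the bare minimum.
function ajaxRequest(url, params) {
  var query = Object.keys(params || {}).map(function(key){
    return encodeURIComponent(key) + '=' + encodeURIComponent(params[key]);
  }).join('&');
  return fetch(query ? url + '?' + query : url).then(function(response){
    if (!response.ok) throw new Error('Request failed: ' + response.status);
    return response.json();
  });
}
```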
Alternatives
So considering the above I decided that I had to move away from this mess - by now I had custom element libraries that included old-fashioned libraries, just so I could use the old-fashioned libraries in my web workers - and refactor my application to use proper JS libraries. It turns out, however, that this isn't as easy as I expected it to be, so I will just run through the various options I considered:
The traditional way: Require.js
tl;dr: Good idea, but async nature causes issues and code looks a bit ugly.
My first thought was just playing it safe and including require.js. That turned out to be a bit of a pain with page.js (which is included in PSK for the routing), but after that was fixed[4] I realized that the code started getting incredibly ugly. Instead of
```js
goToBook: function(ev){
  page('/book/' + ev.model.book.reference);
}
```
I suddenly had to write
```js
goToBook: function(ev){
  require(['/bower_components/page/page.js'], function(page){
    page('/book/' + ev.model.book.reference);
  });
}
```
everywhere. My next thought was to turn it into a global promise
```js
window.page = new Promise(function(resolve, reject) {
  require(['/bower_components/page/page.js'], resolve, reject);
});
```
which allowed me to do
```js
goToBook: function(ev){
  page.then(function(page){
    page('/book/' + ev.model.book.reference);
  });
}
```
which was marginally better, but I ended up polluting the `window` scope after all, and I couldn't use my libraries in computed functions that way, as those have to return the new value.
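To illustrate that last point with a sketch (element and property names are made up): a computed function has to hand the new value back synchronously, so a promise-wrapped library has nowhere to deliver its result.

```js
Polymer({
  is: 'book-link',
  properties: {
    reference: String,
    url: { type: String, computed: '_computeUrl(reference)' }
  },
  _computeUrl: function(reference) {
    // The binding uses whatever is returned right here, right now.
    // A window.page.then(...) callback fires too late, and returning the
    // promise itself just binds a Promise object instead of the string.
    return '/book/' + reference;
  }
});
```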
Next up was the idea to wrap the entire `Polymer({...})` call, like
```html
<script>
  require([dependency1, dependency2], function(dep1, dep2){
    Polymer({
      is: 'some-element',
      [...]
    });
  });
</script>
```
which was somewhat better, although at first it didn't seem to work[5]. More importantly though, it breaks the `unresolved` behavior that allows you to style your page before it's loaded, and I think - not 100% sure - it will cause problems with the data binding system[6]. All considered not too bad - provided you can work around the data binding issues - but I'm not a huge fan either.
The transpiler way: `import` statements
tl;dr: An improved version of the above, provided you use TypeScript; still causes the same issues.
ES6 defines an `import` statement which essentially does everything we want perfectly. We can simply write
```js
import { getUsefulContents } from "file";
```
and be done. As far as I know that handles all those pesky things like resource deduplication and everything. So it might sound like a great idea to just use a transpiler and compile it down to current-gen JS. It turns out, however, that when you do this it will internally translate to either node.js-style synchronous `require` calls (Babel) or async `require` calls like the above (TypeScript). The first of these is what we would want, but doesn't work in the browser, and the second causes the issues already outlined above.
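To make that concrete, this is roughly what that single `import` line becomes after transpilation (simplified output, so the exact shape differs between versions and settings):

```js
// Babel targeting CommonJS modules (roughly): a synchronous require call,
// which the browser cannot resolve on its own.
var _file = require("file");
_file.getUsefulContents();

// TypeScript targeting AMD (roughly): an async define wrapper, with the same
// drawbacks as the hand-written require.js code shown earlier.
define(["require", "exports", "file"], function (require, exports, file_1) {
  "use strict";
  file_1.getUsefulContents();
});
```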
The fake retro way: simple JS objects in HTML imports
tl;dr: Can be messy and can optionally complicate the build flow.
Which got me thinking about why exactly I wasn't just using old-fashioned JS libraries in the first place. The main things that something like require.js provides are dependency resolution and the avoidance of name conflicts, but in the Polymer world dependency resolution is for the most part already provided by HTML imports, leaving only the pollution of the global scope. Is that really an issue? I honestly don't know. In my experience I have only seen it be a minor problem two or maybe three times, but then again, none of the applications I have worked on are Huge with a capital H.
So, how would this look? Every JS library would need to be a `.js` file (otherwise it can't be used in web workers or other pure JS contexts) included in a `.html` file (otherwise it won't benefit from the dependency resolution provided by HTML imports). And that, for me, is a huge pain point. I absolutely hate the idea of having two files like that for every JS library I make (see the sketch below). One ‘solution’ could be a `gulp` build task that wraps every file ending in `.lib.js` in a `.html` file. I think that might actually be one of the best ideas, however I have been moving away from a build system as much as possible recently, so I would feel kind of sad to complicate mine again[7].
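For clarity, this is what that two-files-per-library setup looks like (the file names are just an example):

```html
<!-- text-utils.html: exists only so HTML imports can deduplicate the library -->
<script src="text-utils.js"></script>
```

```js
// text-utils.js: the actual library, still a plain .js file, so it can also be
// pulled into a worker with importScripts() or loaded in other pure JS contexts.
var textUtils = {
  capitalize: function(str) {
    return str.charAt(0).toUpperCase() + str.slice(1);
  }
};
```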
Additionally, when you use such a library in node.js it suddenly becomes relatively hard to `require` it (which can be fixed by adding a couple of smart `if` statements). That isn't a deal breaker, but it's far from neat in my humble opinion either.
The custom Web Platform way: A synchronous `require()` function
tl;dr: Big improvement over the previous scheme, but still some downsides
At this point I was thinking about how scripts are executed synchronously in the previous scheme, so I could just have them do a `define()` and fix the global name issue. Then, after giving it even more thought, I realized there were a whole lot more small issues I could fix.
Performance
Now, before I go on, let me just touch upon how bad this synchronous loading and execution thing is. In require.js land, if you require two dependencies - `depA` and `depB` - their requests will be started at the same time and they will be executed in the order they come in. In Web Platform land, `import`s will also both be loaded asynchronously (and the parsing of the main page will continue), but they will be executed in the order they were defined[8]. So
<link rel="import" href="some-file.html">
<script>
alert('This will never alert before `some-file.html` is loaded');
</script>
In other words: there can definitely be some performance hit with these schemes. I think it's acceptable considering most of my files are loaded this way already, but it deserves to be documented (also note that this is still significantly faster than making your libraries custom elements).
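For comparison, the require.js side of that trade-off (module names are illustrative): both requests start immediately and in parallel, and only the callback arguments care about order.

```js
// require.js fetches depA and depB at the same time; the callback runs once
// both have loaded, no matter which response arrives first.
require(['depA', 'depB'], function(depA, depB) {
  // use depA and depB here
});
```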
The requirements
- JS libraries are located in `.js` files so that they can be used in any JS-only environment, and should be loadable by require.js
- It should be possible for libraries to `import` their own dependencies
- Library deduplication should be done by HTML imports
- Accessing a library should be easy and not require complex nested code
- The feel should be similar to node.js requires
- Switching to `import` statements once they become available should be relatively painless
The setup
Let me first introduce you to the world's simplest `require` and `define` functions:
```js
var define = function(val){
  document._currentScript.ownerDocument.exports = typeof val == 'function' ? val() : val;
};
var require = function(id){
  return document._currentScript.ownerDocument.getElementById(id).import.exports;
};
```
Which would be used with an import looking like

```html
<script src="my-lib.js"></script>
```

containing

```js
define('Hello World from an Import');
```

and can then be imported like

```html
<link id="myImport" rel="import" href="http://anotherbibleapp.com:8000/">
```

and required like

```js
var msg = require('myImport');
Polymer({
  ready: function(){
    this.message = msg;
  }
});
```
It’s super straightforward, even if I can imagine the define
and require
calls looking like magic. Read up on HTML imports for the explanation.
Limitations
- Defining your dependencies is different in a browser context than in a JS-only context. Yes, that sucks, big time (see the sketch after this list for one way to soften it).
- You can only do `require` calls from outside the `Polymer({...})` construct. This is quite like node.js, where you would place the `require` calls at the top of your file, however it might feel a bit odd.
- I sincerely do not get how Polymer succeeded at polyfilling the chronological sequencing of `<link rel="import">`s perfectly. I thought this would've been impossible, but this setup seems to work in all the browsers I need to support… though it leaves me a tiny bit nervous.
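Regarding that first limitation: one way to soften it - purely a sketch, I haven't battle-tested this - would be to give `define` a fallback for contexts where there is no importing document:

```js
// Untested sketch: keep the HTML import behaviour, but fall back to
// module.exports (node.js) or the worker global when _currentScript is absent.
var define = function(val) {
  var value = typeof val == 'function' ? val() : val;
  if (typeof document !== 'undefined' && document._currentScript) {
    document._currentScript.ownerDocument.exports = value;
  } else if (typeof module !== 'undefined' && module.exports) {
    module.exports = value;
  } else {
    self.libExports = value; // the property name here is just an example
  }
};
```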
Conclusion
The most ironic thing is that I have just spent a lot of time fighting with this when the JavaScript `import` statement is going to make all of it pointless in no time. Still, I guess that's just the way technology always works. All considered, however, I am quite happy, because the big advantage of this last scheme is that it's incredibly easy to switch to `import` statements once they become available. Alternatively, if you can live with the require.js issues, I think the TypeScript option is quite neat as well. The important takeaway however is that all of these are better than continuing to build JS libraries as custom elements.
1. Well, I accepted it as ‘the way it was done in Polymer land’, but at the same time I didn't use `iron-ajax` from my second project onwards, as it simply didn't seem to provide enough of a benefit.
2. Although for XML manipulation it would be very nice to have it, so this statement is a bit harsh.
3. I am using the `await` syntax because it's less distracting than the promise syntax. Of course, without a transpiler it will be a bit longer.
4. Page.js will automatically act as a proper module if it sees `window.define`.
5. After I moved to HTTP/2 I realized I didn't have to worry about concatenating files and/or extracting JS, so I stopped using `is: 'element-name'` to prevent the duplication it caused.
6. Based on my experience with lazy-loaded elements I know that the data binding system doesn't like them. This is improved in Polymer 1.3, to which I haven't been able to migrate yet, so this might or might not be an issue anymore.
7. With HTTP/2 I don't need stuff like vulcanization and all those things, so that's where the simplification came from. Right now my gulp file is half as long as the PSK one I started with (I still need a couple of prod-only related tasks).
8. There are no `async` or `defer`-like attributes on `<link rel="import">`s.