6 Reasons Isomorphic Web Apps Are Not the Silver Bullet You’re Looking For

During the last two years, I’ve heard the term Isomorphic Web Apps mentioned in a positive way more and more frequently. During that time I’ve also done some thinking of my own about the technique. My conclusion is that Isomorphic Web Apps are something that would not add value for me or the typical organisations I work for. Several things have led me to this conclusion.

But, before I list the reasons, let me first stress that I think isomorphic javascript libraries (e.g. lodash.js) are awesome. Also, I think that isolated isomorphic components can be valuable, for example a live-updated stock market indicator.

The reasons why I don’t believe that Isomorphic Web Apps are valuable for me or the organisations I work for are:

  • The web server can now only be written in javascript
  • Isomorphic Web Apps can never be Progressive Enhancement for non-trivial apps
  • Blocking vs non-blocking data flow
  • Time-to-interaction and the Uncanny Valley
  • Mobile devices freeze during parsing of javascript
  • Your best developers are now busy not producing value

I want to start with a definition of Isomorphic Web Apps. An Isomorphic Web App is a web app where the application code has no knowledge of where it’s run. Instead, this knowledge is kept in the infrastructure code and allows the application as a whole to run on both client-side and server-side.
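As a rough sketch of that definition (the file names and the greeting example are made up, not any particular framework’s API), the application code below has no environment checks, while separate infrastructure entry points decide where and how it runs:

```javascript
// app/greeting.js – application code, no knowledge of where it runs
export function renderGreeting(state) {
  return `<h1>Hello, ${state.name}!</h1>`;
}

// infrastructure/server.js – server entry point (Node.js)
import { createServer } from 'node:http';
import { renderGreeting } from '../app/greeting.js';

createServer((req, res) => {
  res.end(renderGreeting({ name: 'world' }));
}).listen(3000);

// infrastructure/client.js – browser entry point
import { renderGreeting } from '../app/greeting.js';

document.getElementById('root').innerHTML = renderGreeting({ name: 'world' });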

The web server can now only be written in javascript

If the client code and the server code are the same (and the client can, today, only interpret javascript), then we are limited to using only javascript on the server. I like node.js very much, but from my perspective it’s better to stay open to alternatives in the future.

Isomorphic Web Apps can never be Progressive Enhancement for non-trivial apps

Client-side javascript has access to an in-memory state machine that allows for a fine-grained level of interaction with very low latency (sub-millisecond). The same is not true for the HTTP state machine, which is driven by link clicks and form submissions.

For me, Progressive Enhancement is to start with a baseline that is accessible for all possible devices/browsers, from old to current to future. In practice, this means a baseline of server-side rendering. The enhancement step is a business decision on where to enhance the site/app to increase the value. In other words, the enhancement is done in the application code (specific) and *not* in the infrastructure code (general), except for optimisation techniques like pjax or turbolinks.

The way for Isomorphic Web Apps to get Progressive Enhancement is to take this fine-grained in-memory state machine and “translate” it to use only links and forms. The only cases where I can see this being possible are those where you didn’t need full client-side rendering in the first place, but instead could rely on the above-mentioned optimisation techniques and client-side components for enhancing the experience (e.g. a calendar component for a date input field).

A variant of this solution is to not support every client-side state/transition in the server-side state machine. That, however, should be a business decision that needs to be reflected in the application code, which makes the application code environment-sensitive and goes against the idea of Isomorphic Web Apps.
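As a minimal sketch of the kind of enhancement I mean, assuming a hypothetical server-rendered booking form and calendar component: the baseline works with plain links and forms, and the client-side code only upgrades the experience when it actually runs.

```javascript
// enhance.js – the enhancement layer, loaded after the server-rendered page.
// Baseline: the server renders a normal <form method="post" action="/book">
// with a plain <input type="date" name="date"> that works without javascript.
// (The selectors and the CalendarWidget component are hypothetical.)
document.querySelectorAll('form[action="/book"]').forEach((form) => {
  const input = form.querySelector('input[name="date"]');
  if (input && window.CalendarWidget) {
    // Upgrade the plain input to a richer calendar, but keep writing the
    // chosen value back into the input so normal form submission still works.
    new window.CalendarWidget(input);
  }
});
```

Note that this enhancement lives entirely in application code; the server-rendered baseline never depends on it.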


Blocking vs non-blocking data flow

The rendering sequence for server-side web and client-side web are different: the server-side blocks and the client-side doesn’t block.

Imagine that we need to do two requests to some services in order to render a view. We can do these requests in parallel.

On the server side, if the second request returns first, we can render that part of the response, but we have to hold those bytes back until the first request has returned and its part of the response has been rendered and sent to the browser, because the HTTP response is a single ordered stream.

On the client side, this constraint doesn’t exist: the two responses can be handled and rendered independently.
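A small sketch of the difference, assuming two hypothetical endpoints (/api/header and /api/feed) and trivial render helpers:

```javascript
// Hypothetical endpoints and render helpers, just for illustration.
const fetchHeader  = () => fetch('/api/header').then((r) => r.json());
const fetchFeed    = () => fetch('/api/feed').then((r) => r.json());
const renderHeader = (d) => `<header>${d.title}</header>`;
const renderFeed   = (d) => `<ul>${d.items.map((i) => `<li>${i}</li>`).join('')}</ul>`;

// Server side (Node.js): the response is one ordered byte stream, so even if
// the feed data arrives first, its HTML cannot be sent before the header's.
async function renderPage(res) {
  const headerPromise = fetchHeader(); // both requests start in parallel
  const feedPromise = fetchFeed();
  res.write(renderHeader(await headerPromise)); // must go out first
  res.write(renderFeed(await feedPromise));     // held back even if it resolved first
  res.end();
}

// Client side: each response is rendered into its own part of the DOM as soon
// as it arrives, in whatever order that happens.
fetchHeader().then((d) => { document.querySelector('#header').innerHTML = renderHeader(d); });
fetchFeed().then((d) => { document.querySelector('#feed').innerHTML = renderFeed(d); });
```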

Also, I think the above represents the simplest of examples. Imagine you have a tree of components, where the root component is smart (see Presentational and Container Components for an explanation of smart/dumb components), followed by a few levels of dumb components, and then at least one leaf component that is smart.

Smart and dumb components

The infrastructure then needs a way to make sure the smart leaf’s context doesn’t get lost, so that we don’t lose control of the blocking/non-blocking execution, depending on server/client mode. One way to solve this problem could be to make the whole program execute in a free monad, representing the execution as data to be interpreted later. Another solution could be to use some form of visitor pattern and let components declare what data they need (similar to Tutorial: Handcrafting an Isomorphic Redux Application (With Love)). The latter is probably the easier of the two. My point is that the problem of different blocking modes is probably much more complicated than one initially imagines.
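A rough sketch of the “declare what data you need” idea (the component shape, endpoint and helper below are made up; the linked tutorial does something similar but not identical):

```javascript
// A smart leaf component declares its data needs instead of fetching directly.
// (Component shape, endpoint and fetchJson helper are made up for this sketch.)
const fetchJson = (url) => fetch(url).then((r) => r.json());

const StockTicker = {
  needs: [{ key: 'quotes', url: '/api/quotes' }],
  render: (data) => `<span>${data.quotes.latest}</span>`,
};

// Server-side interpreter: walk the component list, resolve every declared
// need up front, then render synchronously with the collected results.
async function renderOnServer(components) {
  const needs = components.flatMap((c) => c.needs || []);
  const entries = await Promise.all(
    needs.map(async (n) => [n.key, await fetchJson(n.url)])
  );
  const data = Object.fromEntries(entries);
  return components.map((c) => c.render(data)).join('');
}

// On the client, the same declarations could instead be resolved lazily,
// after the component has mounted, without blocking anything else.
```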

An alternative design is to have a rule that says “only the root component can be a smart component”, similar to how you might use the IO Monad in Haskell, keeping the leaves pure and only having side effects at the top level. I think this design is good on the server side but not on the client side: a component on the client side should be able to load its own data in at least some scenarios, for example social media components. Having a rule that never allows for these scenarios seems very unproductive.
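The “only the root is smart” rule could look roughly like this; a sketch with made-up endpoints and data shapes, not a recommendation:

```javascript
// Pure, dumb leaf components: data in, markup out, no side effects.
// (Endpoints and data shapes are made up for this sketch.)
const Profile = ({ user }) => `<h2>${user.name}</h2>`;
const Feed = ({ posts }) =>
  `<ul>${posts.map((p) => `<li>${p.title}</li>`).join('')}</ul>`;

// The single smart root owns all side effects, like IO at the top of a
// Haskell program. Fine on the server, but it forbids e.g. a social media
// widget on the client from loading its own data.
async function renderRoot() {
  const [user, posts] = await Promise.all([
    fetch('/api/user').then((r) => r.json()),
    fetch('/api/posts').then((r) => r.json()),
  ]);
  return Profile({ user }) + Feed({ posts });
}
```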

Time-to-interaction and the Uncanny Valley

With the Isomorphic Web Apps approach, there will be a period of time when the user sees graphical elements on the screen that appear to be interactive but aren’t: the time between when the browser has rendered the server response and when the javascript has been downloaded, parsed and executed. This is called the “Uncanny Valley” or “Potemkin Village”.

 

This is why I prefer Progressive Rendering + Bootstrapping. I’d love to see more frameworks support this approach.

See this tweet and the responses for more details.
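My reading of Progressive Rendering + Bootstrapping, as a rough sketch with made-up helper names and payload shape: stream the HTML shell early and hand the client app the data it needs, instead of pretending the server-rendered markup is already interactive.

```javascript
// Server (Node.js): flush a static shell immediately, then append the
// bootstrap payload and the client script once the data is ready.
// (loadInitialState, renderApp and the payload shape are made up.)
async function respond(res) {
  res.write('<html><body><div id="app">Loading…</div>'); // renders right away
  const initialState = await loadInitialState();
  res.write(
    `<script>window.__BOOTSTRAP__ = ${JSON.stringify(initialState)}</script>` + // escaping omitted for brevity
      '<script src="/app.js"></script></body></html>'
  );
  res.end();
}

// Client (app.js): render from the bootstrapped state, no extra round trip,
// and nothing on screen pretends to be interactive before this runs.
document.getElementById('app').innerHTML = renderApp(window.__BOOTSTRAP__);
```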

Mobile devices freeze during parsing of javascript

On a “typical” mobile phone, you get 1ms of UI thread stall per 1KB of javascript, according to this tweet; a 200KB bundle, for example, would stall the UI thread for roughly 200ms. Malte Ubl is the tech lead of the AMP project, so I suspect that he knows what he’s talking about.

@tomdale @HenrikJoreteg on a phone 1KB of JS roughly equal to 1ms of UI thread stall. Size matters.

 

Your best developers are now busy not producing value

Isomorphic Web Apps is an approach that demands a high level of development skill. In many areas (in the western world, at least), it’s difficult to find and recruit highly skilled developers. Choosing to develop Isomorphic Web Apps ties those developers up in work whose business value I question. There are better ways to make use of their time.

Summary

Isomorphic Web Apps limit your web server language/platform choice to javascript only. They have the ambition to allow for Progressive Enhancement, but they cannot deliver. They introduce a high level of technical complexity due to the differences in blocking/non-blocking data flow. They introduce a time window where things appear to be interactive but aren’t (the Uncanny Valley). A large amount of javascript also freezes mobile browsers, with the rule of thumb that 1KB of javascript means 1ms of stalled UI thread. And, since the approach requires a complicated software design, it takes valuable time and effort from your best developers; there are better ways to make use of their time.

Finally, I want to stress that your experiences might be different from mine. Maybe there are scenarios where Isomorphic Web Apps are a good fit. But they are not a silver bullet that gives you the benefits of both server-side and client-side web while avoiding the drawbacks of both.

What do you think? Have you considered “going isomorphic”? Have you any thoughts about the challenges and costs? How do you deal with them?

Acknowledgements

Thank you to Oskar Wickström and Per Ökvist for valuable discussions around this topic. Oskar also reviewed this post – thank you Oskar.

4 Comments

  1. Several years back, while working on a mobile app for a major media company, we needed to pre-render pages for SEO purposes. There were several issues that needed to be dealt with:

    1) The uncanny valley was the most significant; the image-heavy site would take too long to become interactive because javascript execution was blocked until all the images had been loaded. We addressed the issue by only rendering “above the fold” images on the server and ensuring image containers were sized correctly for the image content that would fill them. What is above the fold varies from device to device, so that was problematic. Isomorphic apps would need some mechanism to support image loading optimization.

    2) What do you do when you are using responsive images and need to know which device the code is running on? A naive approach would result in a potentially large image being requested unnecessarily. We avoided the issue by generating inline css with media queries that referenced all possible images, so the appropriate background-image was rendered. That works with isomorphic apps, but it’s something you need to think about.

    3) Some mobile devices have underpowered GPUs. For complex pages with a lot of effects and animations that would be thrown onto the GPU, it’s more performant to progressively render the page from top to bottom.
    We discovered this while tracking down crashes. The pre-rendered page was run with javascript turned off to isolate the cause. In many cases the page crashed immediately, and when it did render, the perceived render times were slower because the entire page needed to be composed at once.

    Isomorphism is an elegant solution to the SEO problem and can improve perceived performance on a case-by-case basis. As the author states, it’s not a silver bullet and should be applied with care.

  2. I tend to agree with everything you say here. The issue that bugs me the most is the fact that the emerging javascript library and framework designs are making it much easier to structure and modularize javascript development. At the same time they are not very progressive enhancement friendly. So we are stuck with less elegant solutions such as jquery selector spaghetti/soup.

    I want to see a modern, modular framework that makes progressive enhancement a first-class citizen. Something that enables me to use Python (for example) as my rendering backend, while making it easy to layer on javascript functionality in a clean fashion.

  3. Great post — it’s really interesting to engage with these practicalities. Not everyone can or should chase “the new coolness” for its own sake.

    The points about progressive enhancement and pulling your best developers away from delivering business value are particularly insightful from a business perspective. From my seat, I’m deeply ambivalent about Javascript as a server language, as there appear to be greater costs/complexities and less reliability compared with other paradigms.

    Tom
