Progressive enhancement needs better advocacy

Update 2015-12-03: Added Postscript 1 in response to reader feedback. Thanks!

For well over a decade [1], progressive enhancement has been the gold standard for Web site development. It’s a simple philosophy of ensuring universal access to content by layering enhanced functionality over a core experience that’s available to older browsers, screen readers, user-agents without JavaScript, search engine spiders, feature phones, et cetera. To make basic progressive enhancement work, the minimal core experience doesn’t even need to be pretty; it just needs to exist, so that content on the Web remains accessible to everyone.

Who cares? It’s the future!

Unfortunately, over the past several years, the Web development community has acquired an enthusiastic and single-minded obsession with client-side scripting, and an increasingly vocal crowd has been insisting that everyone has modern browsers with modern JavaScript support, that there are no users or systems today that browse the Web without scripting, and that nobody needs to worry about making their sites work—even perfunctorily—without the latest and greatest in client-side tech.

Of course, this isn’t true, but the gospel has spread to some fairly well-known shops:

support.microsoft.com in 2015
Hopefully nobody’s going to Microsoft Support to get support because their browser’s JavaScript engine isn’t working.

These encounters are vexing, not only because of how trivial it is to develop these sorts of basic, information-driven Web sites without requiring client-side scripting, but also because these sites are intentionally broken for some users, with no added benefit to anyone else. [2]

Actually, they remind me of something else from the past:

Get Macromedia Flash Player
99.3% of Web users had Flash installed at one point. It didn’t make much difference to Linux users, who could not easily access any of that content.

Trouble with evangelists

While people still occasionally talk about progressive enhancement at Web design & development conferences, the core philosophy of universal access is often distorted to serve a regressive agenda that makes the lives of lazy developers easier at the expense of minority users’ access to content & services.

Because the term “progressive enhancement” was never defined in terms of a particular technology baseline, today’s evangelists exploit this ambiguity to float statements like “depending on JavaScript doesn’t necessarily mean you exclude users”—but for the vast majority of Web content, depending on JavaScript does exclude users, whether due to individual choice or individual necessity.

The arguments from JavaScript evangelists are usually well-crafted and delivered in a way that makes it sound like the principles of progressive enhancement are simply evolving to match today’s technology. However, what’s actually happening is that “universal access to content” is being subversively replaced with “universal access for a limited subset of users that I care about”.

“Taking time to design sites that work within the basic constraints of the Web’s primary accessibility model isn’t smart,” they say, “because all these minority users we’ve been told about for years don’t really exist.”

Just because you can’t see them doesn’t mean they don’t exist

Talking about progressive enhancement often means specifically telling developers to write sites in consideration of assistive technology and user-agents without scripting, because these are the groups that need a small amount of consideration in order for the Web to be at all usable. They aren’t large market segments—unless you count Web spiders—but they aren’t inventions, nor are they simply ghosts from an irrelevant past.

While WAI-ARIA hypothetically [3] provides some hope for users who need assistive technologies in order to interact with scripted Web pages, there is a growing segment of users who are disproportionately affected by today’s progressive enhancement failures: those who are concerned with their online security and privacy.

This group is often difficult to see in analytics because many of them are deliberately trying to avoid being tracked, but it’s not hard to find evidence of their existence. NoScript is the third most popular add-on for Firefox, with over 2.3 million active users:

Most popular Firefox add-ons as of November 2015
Ignore these millions of people browsing the Web without JavaScript—they don’t actually exist, according to many evangelists.

Similar extensions for Chrome, like ScriptSafe and ScriptBlock, account for another 279,000 or so users:

Users of NoScript-like extensions for Chrome, as of November 2015
More fake people over here.

In fact, a user is probably more likely to try browsing the Web without scripting today than at any other time since the 1990s: nearly 2% of Web sites now serve malware [4] (many of them reputable sites that have been hacked), antivirus software is not sufficient to protect against zero-day threats, and the ramifications of a malware infection are much more serious than they were in the past.

While plenty of experiences do legitimately require client-side scripting, the vast majority of sites that break in the absence of scripting do so due to a lack of care and effort, rather than due to any real requirement for executing arbitrary code on the client. In several cases, sites are broken because people use off-the-shelf libraries like Twitter Bootstrap, which don’t follow progressive enhancement principles and instead offer unsafe defaults that hide critical navigation behind #-hrefs.
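
To make that concrete, here is a rough sketch of the difference; the URLs, class names, and data attribute are illustrative, modelled on Bootstrap-style dropdown markup rather than copied from any real site:

    <!-- Fragile: without scripting, this "link" goes nowhere. -->
    <a href="#" data-toggle="dropdown">Products</a>

    <!-- PE-friendly: the link points at a real page listing the same
         items; scripting may intercept the click and show a dropdown
         instead, but nothing is lost when the script fails to run. -->
    <a href="/products/" class="js-dropdown-toggle">Products</a>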

This means that security- and privacy-conscious users are forced to choose between not getting the information they need, messing around in Dev Tools to figure out the magic URL for the actual Web service they need to hit, or increasing their exposure to potential malware infection. Nobody should ever end up with their data encrypted and held for ransom because a lazy Web developer did something stupid like this:

edX home page in November 2015
Most sites don’t even bother to throw in a <noscript> these days—you just get missing, broken content.

In this case—and in many others—the page’s content has already been delivered and is simply being withheld from the user by CSS:

edX’s home page HTML in November 2015
You could be viewing a Web page right now—if edX followed the principles of progressive enhancement.
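
As an illustration of the anti-pattern (the markup below is hypothetical, not edX’s actual code), the content arrives in the HTML but stays hidden unless scripting reveals it; the progressive enhancement fix is to leave content visible by default and let scripting opt in to enhancements:

    <!-- Anti-pattern: the content ships in the HTML, but a stylesheet
         hides it and only scripting ever makes it visible again. -->
    <style>.course-list { display: none; }</style>
    <ul class="course-list">
      <li><a href="/courses/cs101/">Introduction to Computer Science</a></li>
    </ul>
    <script>
      document.querySelector('.course-list').style.display = 'block';
    </script>

    <!-- PE-friendly: the content is visible by default; scripting adds a
         class to the root element so enhanced styles and behaviour apply
         only when scripting has actually run. -->
    <script>document.documentElement.classList.add('js');</script>
    <style>.js .course-list { /* enhanced presentation goes here */ }</style>
    <ul class="course-list">
      <li><a href="/courses/cs101/">Introduction to Computer Science</a></li>
    </ul>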

Supporting the needs of minority users usually also improves the experience of typical users: robustness against partial system failures, better content visibility in search engines, streaming content, and improved usability for power users. If your JavaScript CDN is down, or the user’s connection is slow, or they prefer keyboard navigation, sites built using progressive enhancement will work better every time.

From a user experience perspective, there’s little rationale for intentionally neglecting progressive enhancement and breaking the Web for anybody by requiring client-side scripting. Only bad libraries and immature tools make breaking the Web seem necessary or right.

Universal access with progressive enhancement isn’t new or difficult, you just have to care a little

You wouldn’t know it by looking at the Web today, but progressive enhancement for even highly complex Web apps has been around for a while. Mature technologies like Wt Toolkit provide fallbacks to HTML forms for rich single-page Web apps without compromising on “modern” user experience. Other server-side frameworks in many languages also support progressive enhancement to improve the core user experience without breaking users.

Only in the land of “modern” JavaScript-only Web development do programmers seem to think that delivering static content and basic interactivity to users is too difficult without the crutch of tens of kilobytes of client-side code.

Without regular instruction and reinforcement about how to develop using progressive enhancement techniques—and with many Web evangelists undermining progressive enhancement by trying to redefine what it means—it’s currently far too easy and accepted for developers to break the Web. More and more, this is exactly what is happening, and many users of the Web are being disenfranchised as a result. [5]

If you believe in the fundamental principle of the open Web, that content should be universally accessible, then following progressive enhancement principles is a critical part of the job you’re doing every time you crack open a code editor and start writing.

If you’re already ensuring your content is universally accessible, the next step is to talk back to evangelists who try to redefine universal access using metrics and leading questions about who really benefits. Additionally, talk more to other Web developers about how they can add dynamic content without scripting: collapsible sections with <details> instead of jQuery, menus with <menu> instead of Bootstrap, and single-tab interfaces with :target instead of AngularJS. Ask browser vendors and standards committees to add other extremely common UI functionality that is frequently implemented using custom scripts. Use more mature tools that don’t break the Web.
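
For instance, here is a minimal sketch of the first and last of those techniques (the element IDs, class names, and content are made up). Note that with :target alone no panel is shown until a tab is selected, so a little extra CSS is needed if one panel should be open by default:

    <!-- A collapsible section with no scripting at all. -->
    <details>
      <summary>Course syllabus</summary>
      <p>Week 1: HTML. Week 2: CSS. Week 3: HTTP.</p>
    </details>

    <!-- A simple "tab" interface driven by :target instead of scripting. -->
    <style>
      .tab-panel { display: none; }
      .tab-panel:target { display: block; }
    </style>
    <nav>
      <a href="#overview">Overview</a>
      <a href="#reviews">Reviews</a>
    </nav>
    <section id="overview" class="tab-panel">Overview content.</section>
    <section id="reviews" class="tab-panel">Review content.</section>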

If enough people start talking again about progressive enhancement, the way that they were in the mid-to-late 2000s, we can get back to a more balanced Web where universal access is the norm, not the exception. If not, the platform may be destined to become the next Adobe Flash Player: slow, insecure, inaccessible, and loathed by its users. And then native apps will take over once and for all.


Postscript 1

Šime Vidas sent a nice response to this article on Twitter (thanks Šime!):

@zetafleet Ironically, the web cannot win against native without JavaScript. (Service Worker is needed for offline functionality.) —Šime Vidas (@simevidas)

Yes, it is ironic that the Web may be overtaken by native apps both because people are trying too hard to use it like an application platform instead of the World Wide Web, and because it doesn’t have enough features versus native platforms to remain relevant for application development. (I honestly don’t think the second one is a big deal; native platforms have always been more feature-capable than the Web, but that didn’t stop Web apps from becoming a thing. Platforms are not one-size-fits-all!)

That said, the good news is that Service Workers are actually not necessary for basic offline functionality of Web pages. Offline support has existed in browsers since long before mobile data connections, because dial-up connections in the 90s were often time-metered and not terribly reliable. So, there are some very old (and some new!) technologies in your Web stack that allow for some offline browsing of content today.

The most important of these technologies is the Cache-Control HTTP header. It allows Web servers to tell browsers (and proxies) how to cache resources.

As long as must-revalidate is not included in a Cache-Control header, browsers and proxies are allowed to serve cached content when a network connection is unavailable, even if that content is older than the max-age specified in the header. When a connection is available and the max-age has been exceeded, the browser will ask the server whether the content has changed, and the server will either respond with “304 Not Modified” or with the newer content.
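
As a rough sketch of that exchange (the URL, ETag value, and one-day lifetime below are illustrative, not taken from any particular site):

    HTTP/1.1 200 OK
    Cache-Control: max-age=86400
    ETag: "abc123"
    Content-Type: text/html

    ...the page content, now cacheable for up to a day...

    (later, after max-age has passed and while a connection is available)

    GET /articles/progressive-enhancement HTTP/1.1
    Host: example.com
    If-None-Match: "abc123"

    HTTP/1.1 304 Not Modified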

For preloading content, HTML5 introduced a “prefetch” link type that allows authors to ask user-agents to download some additional resources in the background. This means that even if the user’s connection goes away, the browser’s cache can be primed with related content, so they can browse a bit more while waiting for a connection to return. HTTP/2 also includes Server Push, which allows an HTTP server to preemptively transmit content to a browser, which can again be cached and retained for offline use.
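
As a sketch, with hypothetical resource URLs, a prefetch hint is just a link element in the page’s head:

    <!-- Ask the browser to quietly fetch likely-next resources so they
         are already in the cache if the connection later disappears. -->
    <link rel="prefetch" href="/articles/next-article.html">
    <link rel="prefetch" href="/images/next-article-figure.png">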

HTML5 also originally introduced the HTML cache manifest, which adds more reliability to the offline cache by avoiding random eviction, but it is pretty broken: once filled, it will always load offline content instead of using it only when the network is unavailable, and it will only update resources after redownloading the entire manifest. So, I’m really only mentioning it here because it exists already, rather than endorsing it as a worthwhile feature.

Nominally, Service Workers represent only a small additional benefit for offline support by giving authors greater control over network resource fetching, transmission, and invalidation, and by making the local cache much less likely to be evicted by the browser. In practice, browser vendors haven’t worked very hard at making Cache-Control work effectively for managing offline content, so the benefits of Service Workers are larger.

That said, the non-starter problems with Cache-Control can be mitigated today with a local caching proxy like Squid, or by browser extensions that can hook into the networking internals of the browser to retain offline content more effectively. HTTP 1.1 also allows for extensions to Cache-Control, which could be used to add the cache hints it currently lacks for highly reliable offline use.

TL;DR: Service Workers aren’t the only possible solution for managing offline Web content; they’re just currently the most robust proposal. (And to be clear, I like the Service Workers proposal, but I also don’t want people to think a JavaScript-based solution is the only way one could ever manage robust offline support for content that doesn’t otherwise require scripting.)

In any case, the point of this article is not that JavaScript can’t enhance the user experience—of course it can, and I’ve spent a lot of my professional career doing just that. The point I am trying to make is that the vast majority of the time today, client-side scripting is being used improperly as a replacement for basic HTML and CSS, not as an enhancement, and it’s slowly transforming the Web into an inaccessible wasteland of slow, insecure, stratifying client-side code.


  1. According to Wikipedia, progressive enhancement as a distinct term was introduced by Steven Champeon and Nick Finck in Inclusive Web Design for the Future, March 2003. Graceful degradation as a concept has existed for much longer—since at least 1994 for the Web, and quite a bit longer than that in basic fault-tolerant industrial design principles. 

  2. In the case of Microsoft Support, adding the word “Googlebot” to the browser’s user-agent causes the server to send a static page containing all of the relevant content, so there isn’t even an argument to be made that server-side functionality doesn’t exist to support browsers without JavaScript! Whoever manages the site simply decided that using AngularJS on the client was more important than providing basic access to content. Decisions like these should be loudly ridiculed. 

  3. WAI-ARIA support is honestly really, really bad, even today. It’s a very complex system. Unlike new browser features that usually just require the browser to implement them, ARIA requires Web page authors, Web browser vendors, assistive technology vendors, and operating system vendors to all correctly implement each assistive technology role and attribute in order for proper information to make it to the end user—and none of them do a very good job at it, because just like nobody cares much about progressive enhancement, nobody cares very much about disabled people. 

  4. In the first half of 2015, according to the Microsoft Security Intelligence Report Volume 19, there were 16.7 malware hosting sites per 1000 Internet hosts. Over the last year, at least 17% of users running Microsoft antimalware products reported malware encounters. Out of the ten most common exploit families, five were JavaScript-based exploit kits: JS/Axpergle, JS/Fiexp, JS/Neclu, JS/NeutrinoEK, and HTML/Meadgive (a.k.a. Redkit). As I was writing this article, another news report surfaced that Reader’s Digest has been serving malware for days. Security on the Web is a nightmare. 

  5. Over the last week I’ve run into many sites that are completely broken or missing large sections of content when scripting is disabled: AngularJS (broken menus, links, content, & documentation), Bugmenot (broken form), Business Insider (blank content), CloudUp (blank content), Creative Commons (broken menus and forms), Droplr (missing drop content), edX (blank content), Forbes (blank content), Imogen Heap (block message), Madcatz (broken links), Mentimeter (blank content), Microsoft Support (block message), Microsoft Visual Studio Marketplace (spinner), MyFonts (blank content), Netgear (blank content, broken menus), Opera Philadelphia (blank content), Secunia (broken forms), and Sennheiser (block message), to give a sense of how broad this problem has become. It’s not even just marginalised groups that are routinely disenfranchised by lazy developers incorrectly implementing Web tech; hundreds of problems with sites using default Web browser settings are reported annually to webcompat.