As long as I’m posting puzzles, riddle me this: what happens to the browser’s lookahead pre-parser if we find the holy grail of responsive images—an image format that works for all resolutions?
In nearly every discussion about responsive images, the conversation eventually turns to the idea that what we really need is a new image format that would contain all of the resolutions of a given image.
For example, JPEG 2000 has the ability to “display images at different resolutions and sizes from the same image file”.
A solution like this would be ideal for responsive images. One image file that could be used regardless of the size at which the image is displayed in the page. The browser only downloads what it needs and nothing more.
As Ian Hickson recently wrote on the WHATWG mailing list regarding future srcset syntax complexities that might come with hypothetical 4x displays:
Or maybe by then we’ll have figured out an image format that magically does the right thing for any particular render width, and we can just do:
…and not have to worry about any of this.
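The markup itself was elided above, but with such a format an img element could stand alone, with no srcset or breakpoints at all. A hypothetical sketch (the file name and extension are invented here for illustration, not taken from the mailing list post):

```
<!-- Hypothetical: a single multi-resolution file; the browser decides
     how much of it to download. The .mri extension is made up. -->
<img src="photo.mri" alt="A photo that adapts to any render width">
```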
It seems that no matter where you’d like to see responsive images go—srcset, picture, whatever—everyone agrees we’d all be happier with a new, magical image format.
If this mythical image format existed and was adopted by browsers, it would mean the end of image breakpoints for most responsive images. You can see this in Ian Hickson’s hypothetical code above.
Unless the image required editing at various viewport widths (see art direction use case), the document author could simply point to a single file and know that the browser would download the data necessary to display the image for anything from a small mobile screen to a large, high-density display.
This is where the thought experiment gets interesting for me. As noted before, the big challenge with responsive images is that the browser’s lookahead pre-parser wants to start downloading images before the layout is finished. This is the central conflict for responsive images which I described as:
How do we reconcile a pre-parser that wants to know what size image to download ahead of time with an image technique that wants to respond to its environment once the page layout has been calculated?
Image breakpoints are designed to give the lookahead pre-parser the information it needs to pick the right image before the browser knows the size of the element in the layout. This is how we get around the problem.
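In markup, image breakpoints look something like the following srcset sketch (the file names and width values here are illustrative, not from any particular proposal):

```
<!-- Each breakpoint names a file the pre-parser can choose among
     before layout is known. Names and widths are made up. -->
<img src="thumb-small.jpg"
     srcset="thumb-small.jpg 320w,
             thumb-medium.jpg 640w,
             thumb-large.jpg 1280w"
     alt="Thumbnail with explicit image breakpoints">
```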
But we just saw that image breakpoints would go away if we had a new, magical image format. Without the image breakpoints and without knowing the size of the image in the page, how would the browser know when to stop downloading the image?
Unless I’m missing something, it wouldn’t. The browser would start downloading the image file and would only stop once the layout had been determined. In the meantime, it may download a lot more data for a given image than is necessary.
Let’s take a look at Flickr as an example. Flickr would be a huge beneficiary of a new image format. Right now, Flickr maintains a bunch of different sizes of images in addition to the original source image.
Instead of having to resize every image, Flickr could use a single image format no matter where the image was used throughout the site.
Let’s also assume that in this future world, Flickr used a responsive design with flexible images, so that the size of an image depends on the width of the viewport. What happens to a typical Flickr layout like the one below?
Because it would be a responsive design, the size of the thumbnails on the page would vary based on the screen size. But no matter how big the thumbnails got, they would never equal the full size of the original image, which means the browser would need to stop downloading the magic image file at some point or it would download extra data.
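Flexible images in this sense usually come down to a CSS rule along these lines (a minimal sketch, not Flickr’s actual stylesheet—the class name is invented):

```
/* Let each thumbnail scale with its container; the browser cannot
   know the final rendered width until layout is calculated. */
.photo-grid img {
  max-width: 100%;
  height: auto;
}
```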
What would the lookahead pre-parser know at the moment it started downloading the images? All it would know is the size of the viewport and the display density.
Based on those two data points, the pre-parser might try to determine when to stop downloading the image based on the maximum size the display could support. That works okay if the display is small, but on a large, high-density display, the maximum image could be gigantic.
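The arithmetic behind that worst case is simple. A sketch of the upper bound a pre-parser could compute from the only two things it knows before layout—viewport width and display density (the function and the example numbers are mine, purely illustrative):

```javascript
// Worst-case image width a pre-parser could justify downloading
// before layout: the image might span the full viewport, rendered
// at the display's device pixel ratio.
function maxImageWidth(viewportCssPx, devicePixelRatio) {
  return viewportCssPx * devicePixelRatio;
}

console.log(maxImageWidth(320, 1));  // small phone: 320 device pixels
console.log(maxImageWidth(2560, 2)); // large high-density display: 5120 device pixels
```

For a grid of thumbnails that will only ever render a few hundred pixels wide, downloading toward a 5120-pixel bound would waste a great deal of bandwidth.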
And because our magical file format would contain multiple resolutions, the browser would likely keep downloading image data until the layout was calculated, making it very likely that excess image data would be downloaded and that the download of other assets would be delayed.
Of course, this is all hypothetical. We don’t have a magical image format on the table. So for now we can avoid resolving what I see as the central conflict: the pre-parser needs to start speculatively downloading images before it knows the layout of the page, but the size of responsive images isn’t known until the page layout is determined.
But in the long run—if we find our holy grail—this conflict is likely to resurface which makes me wonder about our current efforts.
I whole-heartedly agree with Steve Souders that “speculative downloading is one of the most important performance improvements from browsers”, and until a new image format materializes, it seems we should do everything we can to accommodate the pre-parser.
And at the same time, I can’t help but wonder: if we all want this magical image format, and if in some ways it seems inevitable, are we jumping through hoops to preserve browser behavior that won’t work in the long run regardless?