
The real conflict behind <picture> and @srcset

By Jason Grigsby

Published on May 21st, 2012


Some people do a lot of research before they travel. They read guidebooks. They like to know what they are going to do before they get there.

Others want to experience the new place and let the serendipity happen. In their minds, planning ahead would take all of the fun out of the trip.

Either approach works fine. But anyone who has travelled in a group knows what happens when the person who plans ahead is forced to contend with the carefree attitude of someone who wants to wait until the last minute to decide what to do.

And that is essentially the conflict we have between the browser’s lookahead pre-parser and responsive images.

In recent years, browser makers have put an emphasis on making pages load as quickly as possible. The lookahead pre-parser is part of their efforts.

The Internet Explorer team describes the lookahead pre-parser as follows:

To reduce the delay inherent in downloading script, stylesheets, images, and other resources referenced in an HTML page, Internet Explorer needs to request the download of those resources as early as possible in the loading of the page.… Internet Explorer runs a second instance of a parser whose job is to hunt for resources to download while the main parser is paused. This mode is called the lookahead pre-parser because it looks ahead of the main parser for resources referenced in later markup.

The lookahead pre-parser is much simpler than the full parser used to determine how the page will be rendered. Because the scripts and CSS have yet to be processed, the lookahead pre-parser is taking some guesses about what assets will be downloaded:

The download requests triggered by the lookahead are called “speculative” because it is possible (not likely, but possible) that the script run by the main parser will change the meaning of the subsequent markup (for instance, it might adjust the BASE against which relative URLs are combined) and result in the speculative request being wasted.

How lookahead pre-parsers work isn’t that important. What does matter is that the browser wants to start downloading assets before full page layout has been determined. To be successful, the pre-parser needs to know what the page is likely to do ahead of time.

If the lookahead pre-parser is the tourist with a detailed itinerary of places to visit, responsive images are the go-with-the-flow tourist waiting to see what things look like before choosing what to do.

In a responsive web design, the layout and images are all fluid. The size of any given image cannot be determined until the page layout is calculated by the rendering engine.

No matter if you favor <picture>, @srcset, or some other solution, the fundamental conflict between the lookahead pre-parser and responsive images persists. Let me demonstrate with some of the more popular options.

One of the biggest points of confusion for the srcset proposal was understanding what the width and height attributes in the new syntax were supposed to represent.

Originally, I thought the 600w 200h in the example syntax represented the image’s size. Instead, those values list the minimum viewport dimensions at which a particular image should be used. Think of CSS min-width being used in a media query.
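For illustration, the proposed syntax looked something like this (the filenames are hypothetical; the point is that the dimension pairs describe the viewport, not the image):

```html
<!-- 600w 200h reads like CSS min-width/min-height: "use this
     source once the viewport is at least 600px by 200px" --
     NOT "this image file is 600 by 200 pixels". -->
<img src="face-small.jpg"
     srcset="face-medium.jpg 600w 200h,
             face-large.jpg 1024w 400h"
     alt="A portrait">
```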

Once I understood what the width was meant to represent, I began to see the shortcomings of this approach. Whatever values were listed in srcset would need to match the breakpoints specified in the design’s media queries.

Jeremy Keith pointed out that by only supporting “min-width”, srcset will have difficulty matching breakpoints defined using max-width in CSS. He writes:

One of the advantages of media queries is that, because they support both min- and max- width, they can be used in either use-case: “Mobile First” or “Desktop First”.

Because the srcset syntax will support either min- or max- width (but not both), it will therefore favour one case at the expense of the other.

Both use-cases are valid. Personally, I happen to use the “Mobile First” approach, but that doesn’t mean that other developers shouldn’t be able to take a “Desktop First” approach if they want. By the same logic, I don’t much like the idea of srcset forcing me to take a “Desktop First” approach.
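The two use-cases Keith describes look like this in CSS (selectors and breakpoint values are illustrative):

```css
/* "Mobile First": base styles serve small screens,
   min-width queries layer on the larger layout. */
.gallery img { width: 100%; }
@media (min-width: 600px) {
  .gallery img { width: 50%; }
}

/* "Desktop First": base styles serve large screens,
   max-width queries override for smaller ones. */
.sidebar { float: right; }
@media (max-width: 599px) {
  .sidebar { float: none; }
}
```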

The inability of srcset to match breakpoints defined in media queries is just the beginning of the challenges.

The @srcset attribute currently only supports px and not other units like ems, which we’ve advocated using in responsive designs. Why this limitation? Odin Hørthe Omdal explained:

You can use em and % freely in your stylesheets/CSS. The values from srcset are used to fetch the right resource during early prefetch, checked against the width and height of the viewport (and only that viewport).

Having ems or % would make no sense whatsoever there, because you don’t know what they mean…If you make a solution that will support em/% in a meaningful way, you would have to wait for layout in order to know what size that means. So you will have slower-loading images, and ignore the “we want pictures fast” requirement.

Sound familiar? It’s the conflict between the lookahead pre-parser and responsive images again.

Without support for ems, it will be very difficult to match srcset attributes to responsive design breakpoints that use ems.
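For example, here is the kind of em-based breakpoint many responsive designs define (values illustrative; 37.5em equals 600px only at the default 16px font size):

```css
/* The pixel equivalent of 37.5em depends on the user's font
   settings -- which the pre-parser cannot know before layout. */
@media (min-width: 37.5em) {
  .article img { max-width: 30em; }
}
```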

As the challenges of matching media queries to @srcset values became clearer to me, it also became apparent what a mess updating this code would be when a redesign occurred. This is a problem for both @srcset and <picture> as D. Pritchard pointed out on the WhatWG list:

I dread the day when I have to dig through, possibly hundreds of pages, with multiple images per, to update the breakpoints and resolutions. Surely there’s a better way to manage breakpoints on a global level rather than burying the specifics within the elements or attributes themselves.

Unfortunately, no one has suggested a better way yet, likely because most of the global rules for a site exist in CSS, which is parsed much later by the browser.

Surely all these problems point out that we shouldn’t be messing around with breakpoints in HTML anyway. What we really need is a new progressive image format (or perhaps an old format used in a new way).

Le Roux Bodenstein describes well the appeal of this solution:

Use a progressive image format and HTTP range requests. Ideally the image metadata at the start of the file would include some hints about how many bytes to download to get an exact image size. The browser can then download the smallest size equal to or greater than the dimensions it needs based on the layout’s width as specified in CSS.

Unfortunately, this description also demonstrates why this approach will battle the lookahead pre-parser. How will the browser know when to stop downloading the image? Only by knowing the size of the image in the layout.
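Sketched as raw HTTP, the range-request idea looks something like this (the URL, byte counts, and total size are all illustrative):

```http
GET /images/hero.jpg HTTP/1.1
Host: example.com
Range: bytes=0-20479

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-20479/184320
Content-Type: image/jpeg
```

The catch is choosing where the range ends: the right byte offset depends on the rendered size, which the pre-parser doesn’t yet know.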

I could list other proposed solutions. Each has its own merits and problems, but they all suffer from the same conflict: the proper size of a responsive image isn’t known until the layout is complete, which is too late for the lookahead pre-parser.

What if we’re complicating this too much? Perhaps we shouldn’t try to replicate the breakpoints in HTML. Instead, we should simply supply different sizes of the image and let the browser decide on the best version.

There are two problems with this idea.

  1. How does the browser know when to download each image? At the time of the lookahead pre-parser, it doesn’t know what size the image will be. So it will need you to tell it when to use each size image.
  2. If you’re not tying the selection of images to the breakpoints in your design, how do you decide when you should switch images? Should you create three versions of each image? Five? Ten? Should you switch them at 480 because that is the iPhone width in landscape and then lament your decision if rumors of a taller iPhone screen come to pass?

The problem is there is nothing intrinsic to the image that would guide you in deciding where you should switch from one size of the image to another. Whatever we select will be entirely arbitrary unless we base it on our design breakpoints.

Since coming to the realization that the real conflict is between the lookahead pre-parser and responsive images, I’ve been wondering which we should prioritize.

The lookahead pre-parser has been essential to providing a better experience for users given the way that web pages have been traditionally coded. Doing anything that prevents the pre-parser from working seems like a step backward.

At the same time, while it may seem that responsive images are an author issue, the biggest impact is felt by users. Downloading images that are too large for the size at which they are displayed makes pages load more slowly. It is possible that the performance gains from the pre-parser could be lost again due to unnecessarily large image downloads.

For existing web content, the lookahead pre-parser is undoubtedly the fastest way to render the page. But if web development moves towards responsive images as standard practice, then delaying the download of images until the proper size of the image in the layout can be determined may actually be faster than using the lookahead pre-parser. The difference in size between a retina image for iPad and an image used on a low resolution mobile phone is significant.

It seems like this tradeoff ought to be measurable in some way so we can quantify what the impact would be. I’m not skilled enough to construct that test, but hopefully others can help evaluate it so we can make an informed decision.

It has been two years since I first started looking at images in responsive designs. It seemed simple. The <img> tag had one src and we needed multiple sources.

Until the WhatWG got fully engaged in this question, I thought I understood the problem. Now I realize it was much bigger and more difficult than I originally thought.

We have an existential problem here. A chicken and egg conundrum.

How do we reconcile a pre-parser that wants to know what size image to download ahead of time with an image technique that wants to respond to its environment once the page layout has been calculated?

I don’t know what the answer is, but I’m very curious to see what we decide.

  1. It is worth noting that the original srcset proposal by Ted didn’t suffer from the conflict described in this post because it didn’t attempt to address responsive images. It focused solely on image density and thus didn’t contain any viewport width declarations.


Jonathan Stark said:

Another great post! Thanks for thinking this through so others don’t have to 🙂

Here’s my “monday morning quarterback” opinion:

Whatever the answer, multiple sources is not it.

Plz correct me if I’m deluded, but here’s my reasoning:

An image embedded in an HTML page as an IMG tag is content in exactly the same way, conceptually, as text in an HTML page. Therefore, web devs would provide a reference URL to the *canonical* version of an image and be done with it.

i.e., This. Is. The. Image.

Of course, the trouble with this is that a link to a canonical image would be a link to the highest quality image, and therefore, the biggest one. In most cases, it would be wasteful to download such a file just to have the browser downsize it. It’s a double hit on bandwidth and processing.

But conceptually – and semantically – multiple IMG sources hits me across the head sideways as a short-term bandaid that we’ll live to regret.

It seems to me the problem could be better attacked in a few other places:

* Better image format – You mentioned this already but I want to add that as a site owner, I’d rather implement something on my web server once for all image assets than repeatedly throughout every web page I author.

* Browser setting for “Load images (on/off)” – Web devs would provide links to hi-res images by default and the browser vendors would allow users to toggle a “load images or not” setting. Opera Mini, Android native browser, and Silk already have this.

* Browser setting for “Image quality (low/med/high)” – Web devs would provide links to hi-res images by default and the browser vendors would proxy image requests through their own servers to downsize to a max provided by the user. Caching and CDNs could make this a realistic option. Again, Opera Mini provides this and Silk has “Accelerate page loading” which perhaps is similar.

* Better bandwidth – Multiple image sources will seem silly once fast wireless is ubiquitous and cheap. Okay, this is a long way off but imagining this future highlights the wrongness of a markup based approach: i.e. that it makes no sense to attack what is at root a bandwidth issue with markup.

* Better protocol – K, now I’m really out of my depth but it seems to me that updates to HTTP (or alternates like SPDY) are another avenue that would make more sense than markup-based solutions.

# Final Thought

It scares me to imagine defining alternate content in markup (or CSS), regardless of whether that content happens to be images, text, or what have you. Imagine if we could use media queries to specify alternate text for an H1 at different screen widths?


# Final, FINAL Thought

You asked:

“What matters more: lookahead pre-parser or responsive images?”

The user is the only person in a position to make this decision. Settings for this have been implemented in browsers already. Totally doable.

Sorry for the long comment!

Replies to Jonathan Stark

Andy Davies replied:

This ain’t no easy problem…

Faster connections will help but not as much as we might like, what we really need is reduced latency.

Often a browser doesn’t use the maximum bandwidth available due to latency, the way TCP works, etc.

@souders explains it very well here:

A ‘better’ (single?) image format also has issues

The browser has to start downloading the image to get details of what images are within it before it could make a range request for the actual section it needs, but abandoning the original download comes with a cost at the TCP level – other packets would have been sent by the server, it kills keep-alive behaviour, etc.

I could imagine content negotiation solutions that might make it possible to simplify the markup but they’d require server-side support (I think, not sure it would even work ATM).


Replies to Andy Davies
Yoav Weiss replied:

@andydavies – the range request solution I suggested was to send the initial image request with a range of zero to 10-20K. That way no resets are required (which as you said have a BW cost and kill keep-alive connections), and the browser would have an initial low-res image to present to the user even before it makes further requests for the rest of the range it needs.

Replies to Yoav Weiss
Andy Davies replied:

@yoav – Was thinking about that as I wrote the original comment but couldn’t quite form the thought…

I think we could only range-request a header, as we wouldn’t know whether the first image in the file was a hi-res or low-res version (depending on whether someone’s using a mobile or desktop first approach, for example)

It’s a hard problem…

Replies to Andy Davies
Yoav Weiss replied:

@andydavies – Assuming we don’t want to do art direction, and the goal is simply to save BW, we can assume that if an image is less than 10K, we will fetch it anyway, and only the larger images will benefit from BW savings. Not ideal in a “many small images” scenario, but will probably work for most use cases, and will avoid the delay of fetching the headers.

OTOH, it is true that progressive imgs can’t do art direction, where the mobile/desktop first approach affects which image we want to fetch, not just how many bytes we want to fetch.
Art direction must be resolved in markup.

Gunnar Bittersmann replied:

web devs would provide a reference URL to the *canonical* version an image and be done with it.

Yes, that would be the ideal (for web page authors).

Of course, the trouble with this is that a link to a canonical image would be a link to the highest quality image, and therefore, the biggest one.

No, not necessarily.

Just as a generic URI leads you to one language version or another (depending on your browser settings for the ‘Accept-Language’ HTTP header field), a generic image URI might lead to one image version or another (content negotiation). If there was some kind of an ‘Accept-Image-Size’ HTTP header field.

A mobile client might even automatically change its value depending on current connection (GPRS / HSPA / Wi-Fi).

Wouldn’t an extension to HTTP be superior to any markup solution?
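Gunnar’s hypothetical negotiation might look like this on the wire (‘Accept-Image-Size’ is invented here for illustration; it is not a real HTTP header):

```http
GET /images/lighthouse.jpg HTTP/1.1
Host: example.com
Accept: image/*
Accept-Language: en
Accept-Image-Size: 400x400
```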

Yoav Weiss said:

Amen! 🙂
The problem of measuring the current trade-off between the lookahead parser and responsive images is that it varies greatly depending on the page.
Basically, if we turn off the lookahead parser, the browser would hold off downloading of images until it fetched and executed *all* the external stylesheets & scripts it has in its head tag, and rendered the HTML up until the img tag.
This adds a fixed time “cost” to the page download (which changes according to the page and available BW), which later may or may not be compensated by responsive images, depending on the size difference of the images and the BW available.
We should not go there. It is a bad place to be. And it’s certainly not a decision you want to inflict on your user via preferences as suggested in the comments.

The advantage of the range request based progressive format (be it progressive JPEG, another format or both), is that the lookahead parser can start by downloading the initial buffer for the images it encounters (which contains a dimensions=>bytes mapping), and continue to download the rest of the images once it is done with that, which is hopefully after the layout was rendered. This way you don’t eliminate the conflict, but you reduce its impact.

Another option is to continue to rely on viewport, but make the syntax DRYer, by using something like Matt Willcox’s suggestion.

Sophie Dennis said:

I find your argument persuasive: specifying multiple images + breakpoints in the HTML is storing up trouble for later. Similar to relying on inline CSS.

I like the new image format concept, but can see adoption would be slow. We need not just browsers but also graphics progs and other web devs to support it. Look at how slow SVG and even PNG have been to gain common currency.

What if srcset functioned like longdesc? Instead of a list of images, it linked to a separate file listing the alternate images and their display rules. This could be a simple text or XML file (for easy authoring on basic sites), or a script that generated such a file dynamically (for larger sites).

The preparser could fetch only the main src image. It would be up to the dev to weigh whether that should be a low-res mobile-first version, the full-fat desktop one, or something in between.

This would also be compatible with a new responsive format for the images themselves. Images in that format would simply not require srcset.

Matt Wilcox said:

I came to this realisation a while back too, and it’s infuriating. My conclusion is that there simply can’t be a reliable and future-friendly solution to responsive images whilst the pre-parser is still in use.

Even the meta variable suggestion only goes part-way to solving things. Potentially its presence could indicate that the pre-parser should be disabled on the page (because instead we’re using “the other optimisation technique”) But it’s bloody hard getting this all to work…
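A sketch of what such a “meta variables” approach might look like (the meta name, the token syntax, and the substitution behaviour are all hypothetical, not any shipped or specced feature):

```html
<head>
  <!-- Breakpoints declared once, early enough for the
       pre-parser to read them before any img markup. -->
  <meta name="breakpoints"
        content="small=320, medium=600, large=1024">
</head>
<body>
  <!-- {breakpoint} would be substituted by the browser with
       whichever label matches the current viewport. -->
  <img src="/images/hero-{breakpoint}.jpg" alt="Hero">
</body>
```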

Replies to Matt Wilcox

Ben replied:


I’m puzzled that your “variables in the head” idea isn’t referenced at all in Jason’s otherwise-excellent piece. Yours is the only idea I’ve seen so far that has a real whiff of “yeah, that could work.” I want to fan its flames a bit.

GIVEN: A) We don’t want to lose the pre-parser. and B) We don’t want double-downloads of multiple files, that necessarily means we need to guide the preparser somehow. That means something in the head telling both parser and preparser how to look at the file.

If, as you suggest, breakpoints are defined in meta tags in the head, couldn’t pre-parsers be rewritten easily enough to only read those meta tags first, then only prefetch the relevant image specified lower down in the markup? If future browsers want to be lame they could just turn off the preparser as you’re suggesting when they stumble across your meta tag. But why would they do that instead of look up the appropriate one based on the breakpoint? Am I missing something? I really feel this is where the meat of this whole discussion lies. The obvious pain point is that it’s totally incumbent on vendors to implement this. There’s no developer-only whiz-bang solution.

Lastly, the new image format idea is totally pie-in-the-sky. Bound to fail. Look at browser support timeline for SVG. It’s taken a decade to mature, and growth in bandwidth has made it mostly obsolete in that time. People will only use what can easily be understood. It has to be separate image files at different pixel dimensions. The visible file separation is what allows the bandwidth savings. Simple for everyone to grasp.

Phil Ricketts said:

I think that WebP should be extended to encompass many resolutions with intelligent differencing, and also that browsers should decide what to use based on the display and context.

Replies to Phil Ricketts

Ben replied:

Can you explain this?

Would a mobile and a desktop both then download the same WebP file? Would that file be the same size in both cases? If so, there would be no bandwidth savings. Both devices download the whole file.

Or would the mobile and the desktop download two different files with the exact same file name that were different sizes? (!) Wow, that would be a recipe for confusion! Maybe it could work, but it would force people to be very careful about checking filesizes in addition to file names. You’d get a lot of blowback from baffled users.

I actually think running a script that generates multiple resolutions for big “hero” images is simpler, and less error prone.

Oskar Eisemuth said:

I do think the general debate is too focused on image pixel widths and dpi.

So some aspects aren’t widely discussed at all, such as printing. We want even higher dpi when printing, too.

And there’s a common problem I usually run into myself:
Mixing dpi and min-width is a failure.
A viewport width in CSS pixels or % can’t represent the dpi.

Then we have device viewport (auto) scaling trying to fix problems because of fixed dpi to pixel resolution set in stone in our minds.

So there is a general lack of media queries with resolution in mind. And again, we assume a small screen has higher dpi than a big screen.
Actually we already see the change: desktops are getting higher-dpi screens… The end result: hardware-accelerated scaling of images or whole pages.

Will Desktop Browsers in the long run do auto scaling like small screen devices?
So most css media queries will create computed faked values so it looks right to the css style of a page?

It’s not possible to download something ahead of time if the layout is unknown. Prefetching will fail, always.
HTML currently gives only the content; we have merged layout somewhere between the head element (with meta viewport), media queries and CSS media queries.

If we want to only download the “right” files, the rendering engine need to know all layouts of the site, then the layout can be selected and then it can decide what content it will download.

And if we think about mobile first,
maybe some “extra” content may be referenced and downloaded as well for the desktop user.
Think about it as a content pool referenced by a URL.

The question here: does the use case really depend on the device screen size and dpi?

In the long run we need a complete language set: describing the layouts,
selecting a layout by defining each layout’s dpi/virtual pixel sizes, and the content that layout will show.

Layout selection can’t be done simply by using a keyword.
Selecting means thinking about: use case, dpi, size, how far the reader is away from the medium.

We use media queries in CSS to select a style, trying to re-layout the page with varying success.
(3 Column Layouts with content in any html order, mobile first, desktop first)

We try to merge the use cases of the content with varying success into a site.

We write evil browser-detection code trying to give the user the right layout, and browsers try to fix the mess on the other side to get the best user experience.

Or we use the opposite direction, design x sites under x urls for each use case and redirects.

We aren’t in a perfect world, what can we do?

Stop the browser sniffing, and browsers should stop trying to be over-intelligent; we won’t solve the content vs. layout vs. use case problem by adding duct tape to the image tag alone.

A way to define different files of the same image with same virtual css pixel size based on dpi, fine:

Image tag/picture tag need:
– a fallback
– no css changes for new content images.
– as dpi-style
— with name
— attachable to an image tag
— attachable to a css background url. like: background-dpirewrite: “name”;
— flexible enough to do url rewriting based on dpi requirements, use different folder, prefix / postfix.
— definable in html head so we can prefer page speed
– and directly on an image tag
– avoid downloading two files.

Duct tape layout needs:
– make it possible to avoid downloading parts if they are hidden (img, video, audio, object),
and never optimize this away with some intelligent black box for the sake of page download speed.

Layout needs:
– define layouts
– select layouts, make it possible for the user to change selection…
– content based on layout; don’t request files this layout won’t need.

Daniel Scully said:

Remind me again, why do we want to select the image based on its layout size?

Perhaps I’m missing a use case but whenever I’ve done responsive layouts, images usually go in with width:100%; height:auto; or something similar.

This gives me everything I need for every device in terms of layout. The reason this is unsatisfactory is because devices on slow connections have to download a relatively large file that they’d rather not wait/pay for.

So couldn’t we let the browser choose a file based on knowledge of its connection speed? Knowledge of the layout isn’t required, so the pre-parser can do its job. I know measuring connection speed is a complex science, but it doesn’t need to know an absolute speed, just an idea of whether it’s slow or fast.

And do we really care if a smartphone on wifi gets a file three times bigger than it needs? If I’m using my laptop through 3G will I mind if the image is a little pixelated, so long as I get it?

Matias Etchevarne said:

What about defining our desired breakpoints on the html tag:

Then we just tell the browser what images need to be treated as responsive

So when the browser tries to pre-fetch the images, it knows our desired breakpoints, and it already knows our viewport size.

So we just need to name our responsive images as our breakpoints

And as you see in my img tag, I took a mobile first approach, defining the 320 breakpoint as the default img.

what do you think?


Steve Souders said:

Speculative downloading is one of the most important performance improvements from browsers (parallel script downloading is a subset of the benefits of lookahead parsing). It’s important to preserve these benefits. As I’ve said before, I support a server-side solution (a la Src aka tinysrc). The markup is simpler and speculative downloading still works. CDNs and WPO services are starting to support such a technology. I expect it to be widely available in a year to people working on the popular web hosting, cloud, and CMS frameworks.

Replies to Steve Souders

Jason Grigsby (Article Author ) replied:

@Steve, the ability of servers to determine the correct size image is decreasing rapidly. The server doesn’t know anything about the pixel density of the device.

Better yet, let’s use some real numbers. Let’s assume there is an image that will be displayed in a web page at 400×400 dimensions regardless of the viewport size (to keep things simple).

In order to support retina displays, the high-resolution source image needs to be 800×800, so the img tag points at an 800×800 source file.

We route that through Src or something like it.

On a non-retina iPad with a screen resolution of 1024×768, Src is going to return the full 800×800 image because it doesn’t know how big the image is going to be in the page, so it simply grabs the largest image that fits the screen size.

In that scenario, we’ve downloaded an image twice as large as it needed to be.

When we’re dealing with mobile phones, where the maximum screen size is often 480px or so, grabbing a slightly larger image isn’t a big deal. But as we move into larger displays and higher density displays, making decisions based simply on the resolution of the display instead of the size of the image in the layout becomes much more problematic.

Chris Zacharias replied:

What about extending the HTTP Accept header to include image-specific information, such as device pixel ratio, media resolution, etc.? It seems like the appropriate place for it, it would be backwards compatible, and it wouldn’t negatively impact the pre-parser.

Current browsers don’t really send anything along in the Accept header for images, but seems like an interesting way to hint at which images to send, with corresponding fallbacks and all. Downside is CDNs have to be smarter.

We would happily support this in our responsive image service, Imgix, if anyone is interested.

Larry Garfield said:

Fun times. 🙂

IMO this decision must be made client-side. The server has simply no way of knowing if the image it’s serving will be shown on a retina iPad (meaning 300 dpi is great), or a low-end Android (meaning 300 dpi is insanely stupid), or a low-end Android that is simply acting as a pass-through and displaying the actual image on a 52″ TV (making 300 dpi AND 1000px wide exactly what you want). Even device detection won’t help you in the latter case, to say nothing of its other issues.

So one way or another the intelligence has to live client-side. Were it not for art direction, I’d say push the logic into the browser. Tell the browser “this variant is 72dpi, 300x500px”, “this variant is 300 dpi, 3000x5000px”, etc. Even if it cannot know the full rendered screen width necessarily, it can know “I’m displaying to a screen that’s 200px wide, so a 300dpi image is a waste, period.” That’s logic the look-ahead parser can know, or at least, knows better than the server or developer. Quite simply, I trust the browser to figure that out more than I trust me.

Art direction is the tricky part, because for that I DO need breakpoints that I set.

Is there any way we can have our cake and eat it too? By that I mean specify rich metadata for the browser AND media queries. If the media queries are missing, we let the browser do what it wants. If there is a media query, then the browser knows to exclude/include a particular variant, period, and then apply whatever its logic is to whatever’s left. “Under 200px use this image, over that use whichever of these 3 images you think is appropriate.”
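A rough sketch of what Larry describes, in a picture-like syntax (the data-* attributes and the free-choice behaviour are hypothetical, not part of any actual proposal):

```html
<picture>
  <!-- Explicit media query: the author pins the art-directed
       crop for narrow viewports. -->
  <source media="(max-width: 200px)" src="photo-cropped.jpg">
  <!-- No media query: a pool the browser may choose from
       using its own knowledge of screen density and size. -->
  <source src="photo-72dpi.jpg"  data-density="1">
  <source src="photo-300dpi.jpg" data-density="4">
  <img src="photo-72dpi.jpg" alt="Fallback">
</picture>
```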

Basically, bring along a travel itinerary, but allow ourselves to deviate from it and go where this train is going if we decide it looks cool. 🙂
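A hypothetical markup sketch of that “cake and eat it too” idea (this is not any actual proposed syntax; the width, height, and density attributes on source are invented here for illustration):

```html
<picture>
  <!-- Hard constraint set by the author: below 200px, always use this one -->
  <source media="(max-width: 200px)" src="small.jpg">
  <!-- No media query: the browser may pick whichever remaining variant it
       judges best, using the declared dimensions and density as hints -->
  <source src="medium.jpg" width="800" height="500" density="1">
  <source src="large.jpg" width="1600" height="1000" density="2">
  <img src="small.jpg" alt="Fallback for older browsers">
</picture>
```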

While an HTTP-based solution sounds good, it is only viable if there’s a script on the other end that can process the headers and return a different bytestream. That’s overhead that you want to avoid if possible.

Sean Hogan said:

Another factor is that, with history.pushState(), sites can load pages with XMLHttpRequest() and pre-process the HTML so that only the desired <img src="url"> is seen by the parser.

Of course this doesn’t happen for the landing page (unless you use a few tricks), so the landing page will incur that pre-parser-induced img hit.
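As a rough illustration of the technique Sean describes (the data-src-1x/data-src-2x attribute names are invented for this sketch, not part of any spec): because HTML fetched over XMLHttpRequest is just a string, the page can rewrite placeholder attributes into a single src before the parser, and its lookahead pre-parser, ever sees the markup.

```javascript
// Sketch only: rewrite hypothetical dual-source placeholders into a plain
// <img> pointing at whichever variant client-side logic selected, so the
// parser never sees (and never speculatively fetches) the other variant.
function pickResponsiveSrc(html, useHighRes) {
  return html.replace(
    /<img\s+data-src-1x="([^"]*)"\s+data-src-2x="([^"]*)"\s*>/g,
    (match, lowRes, highRes) => `<img src="${useHighRes ? highRes : lowRes}">`
  );
}

// In real use, roughly:
//   const html = xhr.responseText;
//   container.innerHTML = pickResponsiveSrc(html, window.devicePixelRatio > 1);
```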

Brian Gallagher said:

The things that bother me most about this whole responsive image debacle are not the arguments over @srcset or “picture”. It’s when we start deciding what images to serve up to the user based on the viewport. If I have 5 breakpoints and 30 images on a page, I need 150 separate images to be created and served up. Forget the coding: what a pain in the ass to make 5 different sized versions of the same image. And what about bandwidth? If I’m on a 15MB connection, I don’t want to be served some lo-res version of the image because the developer assumes that because I’m on a mobile device I must be using mobile data and want a low-res version to reduce my data and load time.

Who are we to make the distinction between what the user NEEDS to see and what the user WANTS to see?

Replies to Brian Gallagher

Jason Grigsby (Article Author ) replied:

Who are we to make the distinction between what the user NEEDS to see and what the user WANTS to see?

I don’t think that’s what we’re doing here. If anything, we’re talking about adding more flexibility so that the browser can give more control to users than they currently have.

First, I would argue that every image has an ideal resolution for a particular device. If an image is rendered in a page at 400×400 pixels and the device has standard pixel density, the ideal resolution to serve to that device is 400×400. If the pixel density is 2x, then the ideal size is 800×800 pixels.

If we could reasonably maintain it, it would be ideal to serve the exact size that is used in the page—nothing more, nothing less.

But that is unlikely to be an option unless we get a new image format AND browsers decide they won’t request images until the size of the image in the page is known.
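The arithmetic above is simply the layout size multiplied by the device pixel ratio; as a trivial sketch:

```javascript
// The "ideal" image dimensions described above: the size the image occupies
// in the page layout, multiplied by the display's device pixel ratio.
function idealImageSize(cssWidth, cssHeight, devicePixelRatio) {
  return {
    width: cssWidth * devicePixelRatio,
    height: cssHeight * devicePixelRatio,
  };
}

// A 400×400 slot on a standard display wants a 400×400 image;
// the same slot on a 2x display wants an 800×800 image.
```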

What solutions like srcset and picture allow for that wasn’t possible in the past is giving the user the option to do something different. Perhaps on a 1x pixel density display the image loads at 400×400, but the moment the user starts zooming, the browser starts fetching the higher-resolution images.

Or maybe when the user saves the image to their desktop the higher resolution version of the image is downloaded.

Or perhaps there is a user setting where the user can declare they always want high-resolution. Or low resolution.

This is an area where browsers can differentiate and innovate, so long as authors provide different versions of the image and enough data about those images that the browser can handle them smartly.

Replies to Jason Grigsby
Brian Gallagher replied:

I totally concur with the “resolution” setting in the browser. That is where the browser/user should be able to decide what image gets served. That was the point I was trying to make. It’s not as easy as slapping a new element into the spec to solve the issue. It’s a much bigger problem we are tackling. I’m not against the new element spec; I just think we are sidestepping the real issue.

Nate Hart said:

Maybe it’s a crazy idea, but what if we used meta tags to define breakpoints and relate them to special attributes (like data attributes) on the img element? Maybe something like meta name="imagebreakpoint" media="min-width:480px" src="data-480", and then if the viewport is wider than 480px it’ll fetch the image specified by a data-480 attribute on the img tag, if it has that attribute.

Older browsers would simply fall back on the img’s src, while more modern browsers would respect the meta tags. This way we can keep the media query syntax, but it doesn’t necessarily have to support ems and whatever else the pre-parsers can’t work with. As far as I can tell, it’d also be easier to maintain because the breakpoint definitions are in the document head, which is usually a single file when working with a CMS.

I’m not sure if it’s plausible with the pre-parsers, but I thought I’d give my two cents on the issue. It’s pretty late as I type this, so maybe I’m just wrong about the whole thing.
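Cleaned up, the markup Nate describes might look something like this (entirely hypothetical syntax; the imagebreakpoint meta name and data-* attribute convention are his proposal, not anything in a spec):

```html
<head>
  <!-- Hypothetical: each meta maps a media condition to a data attribute -->
  <meta name="imagebreakpoint" media="min-width:480px" src="data-480">
  <meta name="imagebreakpoint" media="min-width:960px" src="data-960">
</head>

<!-- Older browsers fetch src; supporting browsers consult the meta tags -->
<img src="small.jpg" data-480="medium.jpg" data-960="large.jpg" alt="">
```

Because the breakpoints live in the head, the pre-parser would have already seen them by the time it reaches any img tag in the body.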

Replies to Nate Hart

Larry Garfield replied:

Partial-page loads. That was suggested on another blog recently, but as I noted there, the problem is that pages that are built from html tag to html tag all at once are a dying breed. See this blog entry about where the web is moving in terms of partial page loads. (The article is Drupal-centric, but the problem is not.)

Putting all the breakpoints in the head works only if every partial page fragment we load (which includes anyone doing ESI and Varnish, which is most high-end sites), uses the exact same breakpoint sets, universally. It’s the same problem as assuming that every fragment will use only the CSS and JS that’s already loaded. That assumption is simply not true, which means we have to either front-load all CSS, JS, and breakpoints that could possibly get used (wasteful), or write a custom wrapper that ships forward JS, CSS, HTML, and breakpoints bundled up in JSON and then in Javascript decompile it and shove new script or style tags into the header before we can insert the extra markup. And that leaves out Varnish entirely.

That’s another problem we already have to deal with sooner or later. I don’t think we should layer responsive images into that mess until/unless we have a solution to that mess.

Chris said:

How about this solution: Remember progressively loading gifs? They loaded a crap version, then as more came down the resolution improved?

Maybe we need a new image format where the more you download, the more resolution you get. Suck down 1/4 of the file and you get 1/4 the quality. Then the browser can start loading images the way it does now, but stop when it has enough resolution, which it might not know until some time after it has started loading.

Replies to Chris

Scott replied:

I don’t think any progressive image format can work. With the solution you describe, a browser would, for example, first get every fourth pixel in the image to form an image 1/4 of the size. But the image that’s formed is of horrendous quality because there is no interpolation; you are doing a straight “pixel resize”.

Simon said:

Could we not add a CSS property called ‘image-base-src’?
Its default value would be the current directory, and in your media queries you could then set a different base directory for small screens and for big screens.

So, for an img tag pointing at img/my.jpg:

On unsupported browsers, it shows img/my.jpg
On small screens, it shows img/small/my.jpg
On big screens, it shows img/big/my.jpg

By only changing directories and storing the setting in CSS, it makes min/max breakpoints possible and makes editing settings for hundreds of pages/images a snap.
You could even generate the different sized images on the fly on your web server in a cron job with ImageMagick, meaning in theory you only need to upload one image file and you are ready to go on any device you want.

It doesn’t solve the problem of browser pre-parsers, but I personally think they need to parse media queries before proceeding.
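The CSS in Simon’s comment appears to have been stripped by the blog software; one hypothetical reconstruction of the idea might look like this (image-base-src is an invented property, and the breakpoint values are likewise assumptions):

```css
/* Hypothetical property: swap the directory that an image's relative URL
   resolves against. Not real CSS; a sketch of Simon's proposal. */
@media (max-width: 480px) {
  img { image-base-src: "img/small/"; }
}
@media (min-width: 1024px) {
  img { image-base-src: "img/big/"; }
}
```

Given <img src="img/my.jpg">, a non-supporting browser would fetch img/my.jpg as usual, while a supporting browser would rewrite the path to img/small/my.jpg or img/big/my.jpg depending on which media query matched.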

James Abley said:

A progressive download image format feels bad from a caching perspective.

Can someone help me understand why I’m wrong to think this?

Replies to James Abley

Yoav Weiss replied:

Since the download of the progressive format would be done using standard HTTP/1.1 range requests, cache servers should support reconstructing the entire image from these range requests, and later serve the image in ranges as well.

Unfortunately, “should” is not good enough. It seems like caching of range requests is not currently supported in Squid. I’m not sure about other cache servers.

We can only hope that if range requests become something that browsers use a lot, they’d eventually be supported by cache servers.
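For reference, the kind of exchange Yoav describes uses standard HTTP/1.1 range request syntax. Fetching the first quarter of a hypothetical 100,000-byte progressive image would look like:

```http
GET /photo.progressive HTTP/1.1
Host: example.com
Range: bytes=0-24999

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-24999/100000
Content-Length: 25000
```

The caching problem he raises is that an intermediary cache would need to stitch these 206 responses back together to serve later requests, which not all cache servers do.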

Replies to Yoav Weiss
Brian Gallagher replied:

IMO I think this is the direction we need to go. The images should be determined by the browser/client, not by the developer. This whole discussion is a much bigger beast than just needing a new image element. It involves new browser techniques and technologies, a new/better image format, and yes, a better and smarter caching system. With the advent of hundreds of screen resolutions and pixel densities, we are going to tie our hands catering to multiple versions of the same image, which is only going to make our development time longer and more frustrating.

I strongly believe we need a solution starting with the image itself and quit focusing on the elements. The responsive image element is only a band-aid on a bullet wound. A temporary fix for a bigger issue.

Evan Mullins said:

Echoing some other thoughts and comments I’ve read, but writing it out so I can grasp the details.

What about an updated image format that by default only loads the smallest embedded size? Then, once the browser finishes layout and knows what size the image is rendered at (and at what resolution), it can update the request for whichever image embedded in the file best fits the requirements. This would by default load the smaller part of the file, display something for the prefetching browsers, and “enhance” the image as it can.

I can see photo editing software like Photoshop (if that’s your poison of choice) having extra settings for creating this new image type: resolutions to include, breakpoints to be set, etc. Then we aren’t managing multiple images; it’d be one file that includes all the desired sizes, very much like progressive images and old gifs. Mobile browsers and old fallback browsers would display the smallest (or first) specified sub-image (although this could also be determined in the metadata of the file: which sub-image would be the default).

This may end up with users on high bandwidth/large screens downloading two sub-images rather than only the high-res one, but remember that the first one would be much smaller in size, and if they can handle the big one, the big and smallest together shouldn’t be too heavy. The page would also load very fast, since the first image to load would be much faster than loading the large image initially, whatever the bandwidth. Then browsers and image compression engineers can get together and figure out a communication method between browsers and images via metadata and requests, while developers/designers can just choose the proper settings as they save their files (or software can have standard best-practice defaults) and focus on creating websites rather than having to understand how all browsers and devices will load and present content.

Also, amen to @Brian Gallagher: “Who are we to make the distinction between what the user NEEDS to see and what the user WANTS to see?”

Enno said:

Well, either I just get the idea of responsive web design wrong or I am just not smart enough, so I’d be glad if some of you guys could explain what I’m not able to understand.

The lookahead pre-parser can’t gather layout information. I get that one 🙂 But… most media-queries depend on device-width. And a browser surely does know the width of the device it’s running on (or the size of the window), doesn’t it?

So let’s say the parser recognizes a @srcset or a which tells to load photo-400.jpg for a device-width <= 400px and photo-default.jpg for anything else. The browser knows "oh, I'm running on iPad 3, I'm 1536px wide, so I'm gonna load photo-default.jpg&quot;.
It's exactly the same with pixel density. The image says "load photo-2x.jpg if your pixel density is doubled, otherwise load photo-1x.jpg&quot;. And that's information the browser knows before having any layout information.

That's how I understand it, so I don't really get the point of this blogpost. Where's my mistake?
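For reference, the markup Enno is reasoning about would look roughly like this under the srcset syntax proposed at the time (a sketch, not the final standardized syntax: in the 2012 proposal the “w” descriptor referred to a maximum viewport width, and “x” to device pixel ratio):

```html
<img src="photo-default.jpg"
     srcset="photo-400.jpg 400w, photo-2x.jpg 2x"
     alt="">
```

His point holds for these two hints, since viewport size and pixel density are known before layout; the article’s concern is with conditions that depend on layout or on CSS the pre-parser has not processed yet.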

Chris said:

Upon reflection, I do think it’s wrong for a solution to rely on a new file format: not because it would be hard to get people to adopt it, but because that’s too much implementation detail at the HTTP/HTML level.

But by the same token, embedding a list of files or URLs into the HTML is also wrong, in that it tends to preclude various implementations on the server (e.g. the server has just one image and interpolates the right scale on the fly, which may not even be one of a fixed set of sizes). In the future we may need dozens of sizes, and it may be impractical to keep changing and updating them all the time.

It seems to me that the HTML should give one file name, possibly with information about the maximum resolution available. The client should ask the server for that image, plus preferences and data about the scale it would prefer. The server will give the client an image which best suits that request; how it implements that, whether on-the-fly compression, a list of files, or a subset of one big new file format, is the server’s decision.

Of course that doesn’t totally resolve the client’s problem of not knowing what size to ask for when layout is not yet resolved. However, if the above system were in place, the problem could be pushed back onto the browser implementers to play with their code and various strategies, whether making a best guess, asking for a low-res image and then a better one later, or waiting for the layout to be done. Who knows what solutions they will come up with, or what they will find is optimal, if they have a more general basis to build upon.
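A hypothetical exchange along the lines Chris describes might look like this (the header names are invented for illustration; nothing like them existed at the time):

```http
GET /images/hero.jpg HTTP/1.1
Host: example.com
Prefer-Image-Width: 640
Prefer-Image-Density: 2

HTTP/1.1 200 OK
Content-Type: image/jpeg
Vary: Prefer-Image-Width, Prefer-Image-Density
```

The Vary response header hints at the caching cost of this approach: intermediaries would need to store a separate copy of the image per distinct preference combination.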

Chris said:

If there is no server, you are left pointing to the highest resolution file.

Since the problem is so intractable, people may need to rethink where the solution lies, especially since the problem is one of optimization. Maybe we need to accept that the system works fine now; we just need smart servers to save bandwidth.

Ethan Resnick said:

Maybe this is an inadequate answer, but what if the img src attribute just contained the image at its smallest size, which the pre-parser could load immediately, while srcset/picture contained the bigger sizes? Those bigger sizes would be CSS-aware, so they wouldn’t be loaded until after the basic page rendering is complete, but until then the smaller image could be shown (and could be shown blown up between the end of rendering and the download of the bigger version). That way, you’d have some image right away, and it would be enhanced with a higher-resolution version as the page finishes loading.

Doesn’t solve the art direction problem (you’d need to expand the above to have multiple base images maybe?) but it’s an idea at least.
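A sketch of what Ethan describes (hypothetical behavior: it assumes browsers would defer the srcset variants until after layout, which no browser guaranteed at the time):

```html
<!-- The src is the smallest variant, so the lookahead pre-parser's
     speculative request is cheap; the larger variants would be fetched
     only once layout and CSS are known -->
<img src="photo-small.jpg"
     srcset="photo-medium.jpg 1x, photo-large.jpg 2x"
     alt="">
```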

Red Feet said:

My first thought was a new streaming, progressively enhancing image format: similar to interlaced gif in the 90s, but loading in a clockwise circular way (you won’t need the “wait while loading” spinning daisy, because the image itself acts as a progress indicator).

Then I read about the CDN/caching problem: the progressive stream of increasingly detailed information could be cut into multiple chunks, files that can be cached. The next time you view the image, it will show up immediately and continue its progressive enhancement.

The user’s preferences and behaviour can also influence the priority of which images should continue loading/gaining detail: browser settings plus download speed, only images within the viewport, or (parts of) images the mouse is over or which are in the center of the viewport could get more priority.

If all web images would load the same way Google Maps loads sharper images when I zoom in on my iPad, I would be very happy as a user/visitor (and as a developer, if I could provide this functionality by uploading just one big jpg/png/gif and including one js plugin).