Responsive IMGs Part 2 — In-depth Look at Techniques

By Jason Grigsby

Published on September 30th, 2011

In Responsive IMGs Part 1, I took a high-level look at what responsive IMGs are, the problem they are trying to solve, and the common issues they face. In this post, I’m going to take a deeper look at the specific techniques being used to provide responsive IMGs and try to evaluate what works and doesn’t. If you haven’t read part 1, you may want to do so before reading this post as it will help explain some of the terms I use.

When I started working on this project two months ago, I thought I would get to the end and be able to say, “Here are the three approaches that work best. Go download them and figure out how to integrate them into your systems.” Oh naivety!

What I’ve found is that there is no comprehensive solution. Instead, we have several months of experiments. Each experiment has its own advantages and disadvantages.

Because of this, the best thing we can do is understand the common elements and challenges so that we can start to pick the best parts of each for building our own solutions.

So um… this is a long post. Sorry. 🙂

Many of the early techniques used javascript to dynamically change the base tag. The new base tag would add directories into the path that would be used to indicate what size image should be retrieved. After the document loaded, the base tag would be removed.
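The flow looked roughly like this (a sketch of the now-abandoned approach; the directory names and breakpoint are made up for illustration, not taken from any published library):

```javascript
// Sketch of the dynamic base tag technique: insert a <base> element
// early so relative image URLs resolve into a size-specific directory,
// then remove it after the page has loaded.
function pickBaseHref(screenWidth) {
  // Map the screen width to an image directory (illustrative breakpoint).
  return screenWidth > 480 ? '/images/large/' : '/images/small/';
}

if (typeof document !== 'undefined') {
  var head = document.getElementsByTagName('head')[0];
  var base = document.createElement('base');
  base.href = pickBaseHref(screen.width);
  head.insertBefore(base, head.firstChild);
  // Remove the temporary base element once the document has loaded.
  window.addEventListener('load', function () {
    head.removeChild(base);
  }, false);
}
```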

Unfortunately, this approach ran into race conditions that I described in part 1. I found that Google Chrome was downloading both the mobile and desktop images. Scott Jehl found the problem to be a difference between how inline and external javascript is handled. He submitted a bug to webkit which has been marked as “won’t fix” because:

Inserting base element effectively changes all the subsequent URLs on the page. Any script may insert one so to avoid double loads we could never load anything else as long as there is a pending script load. This would mean disabling preloading, which is out of the question.

In theory, you could still use a dynamic base tag inline, but the Filament Group has been primarily using a cookies-based approach instead which seems safer.

Another early technique was to point the src of imgs at a temporary image and then have javascript replace the source with the correct file path. In most cases, the temporary image was a one-pixel transparent gif with caching headers that would hopefully prevent the browser from requesting it more than once no matter how many times it was referenced in the page.

The problem with this technique is that if javascript isn’t present, the browser will never download the images.

If the img points to ‘small.jpg’, where do you put the information that ‘large.jpg’ is what should be loaded on larger screens?

One solution is to put the path to alternate versions of the image in the src attribute as url parameters. In its simplest form:

<img src="small.jpg?full=large.jpg">

If you have multiple sizes of images, they simply get added as additional values on the url. The key to making this work is coupling it with an .htaccess file.

The big drawback to using URL parameters is that they may cause problems with content delivery networks and proxies that don’t pay attention to url parameters when caching content. Some caching algorithms ignore anything that has a URL parameter on it, which means that pages will slow down because images aren’t cached.

Others will simply cache the first version of the image they see. If the first person behind a proxy cache happened to view the page on a mobile phone, then every subsequent user sees the mobile size image until the cache expires.

How likely is this to be an issue? I had the same question so I asked Steve Souders. He says that it is enough of a problem that you can’t ignore it. This echoes comments by Bryan and Stephanie Rieger at Breaking Development about problems with caching and CDNs.

Therefore, I think we should be looking for techniques that don’t use url parameters.

Instead of putting the file path into the url parameters, the information is put in one or more data- attributes. For example:

<img src="small.r.jpg" data-fullsrc="large.jpg">

Which element has data attributes added to it and how many are added depends on the technique.

The only disadvantage to this technique that I’m aware of is that the javascript has to loop through every image, check for data attributes, and then modify the src attribute depending on screen size. This is probably not a big problem on desktop browsers, which is where the loop is most likely to be used.
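That loop might look something like this (a minimal sketch; the data-fullsrc name follows the example markup above, while the breakpoint and swap logic are illustrative assumptions):

```javascript
// Decide which source to use: swap in the full-size version only when
// the screen is large enough and an alternative was provided.
function chooseSrc(currentSrc, fullSrc, screenWidth) {
  return (screenWidth >= 768 && fullSrc) ? fullSrc : currentSrc;
}

if (typeof document !== 'undefined') {
  // Loop through every img, checking for the data attribute.
  var imgs = document.getElementsByTagName('img');
  for (var i = 0; i < imgs.length; i++) {
    var full = imgs[i].getAttribute('data-fullsrc');
    imgs[i].src = chooseSrc(imgs[i].getAttribute('src'), full, screen.width);
  }
}
```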

In this variation, the file path isn’t included in the HTML document. Instead, it is assumed that the images are put on the server in a regular fashion. For example, all small images might be in /images/sml/ whereas large images are in /images/lrg/.

If this is true, then the html doesn’t need to provide both paths. It just needs to provide the image filename (e.g., boat.jpg) and then let javascript modify the src to be appropriate for the size of the screen (/images/lrg/boat.jpg for desktop).
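The rewriting step reduces to simple string work (a sketch; the directory names follow the example above, the breakpoint is an assumption):

```javascript
// Build a size-appropriate path from a bare filename, assuming images
// are stored in predictable per-size directories.
function sizedPath(filename, screenWidth) {
  var dir = screenWidth >= 768 ? '/images/lrg/' : '/images/sml/';
  return dir + filename;
}
```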

One of the things that I suggested in part 1 was that we might need arbitrary image sizes. Some of the solutions are built around the assumption that you can pass the dimensions that you want in the url and get back an image at that size.

Because the images are resized on the fly, there is no need to store alternative file paths in the HTML document. Javascript will modify the filename from something like ‘boat.jpg’ to ‘boat-480×200.jpg’. There is no issue with caching or CDNs because each image is unique.
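A sketch of that filename rewrite (the dimension-suffix convention follows the ‘boat-480×200.jpg’ example in the text; the exact format any given resizing server expects will vary):

```javascript
// Insert the requested dimensions before the file extension,
// e.g. 'boat.jpg' becomes 'boat-480x200.jpg'.
function resizedName(filename, width, height) {
  return filename.replace(/\.(\w+)$/, '-' + width + 'x' + height + '.$1');
}
```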

This approach doesn’t provide a good solution for manually choosing images at different sizes. It assumes that resizing images will work in all cases which we know is not true.

Many of the solutions rely on server rewrite rules. The examples are usually written using Apache .htaccess files, but they could be any sort of rewrite rule.

Let’s look at a snippet of the .htaccess file from the Responsive Images JS cookie-based branch to see how rewrite rules are being used:

RewriteEngine On
#large cookie, large image
RewriteCond %{HTTP_COOKIE} rwd-screensize=large
RewriteCond %{QUERY_STRING} large=([^&]+)
RewriteRule .* %1 [L]

The first line turns rewrite rules on. Next comes a couple of conditions (RewriteCond). The first checks to see if there is a cookie called rwd-screensize that has the value of large. The second checks to see if the query string for the url contains a value for large. This .htaccess file is looking for something like:

<img src="small.jpg?large=large.jpg">

If both conditions are met—the cookie is set to large and there is a large value in the query string—then the rewrite rule will send the file that was specified in the query string (in the example above, that would be large.jpg).

The rwd-screensize cookie is set by javascript after it tests for the screen size.
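The cookie-setting script can be very small (a sketch; the rwd-screensize name and “large” value come from the .htaccess snippet above, while the breakpoint is an assumption):

```javascript
// Label the screen size so the server-side rewrite rules can act on it.
function screenSizeLabel(screenWidth) {
  return screenWidth >= 768 ? 'large' : 'small';
}

if (typeof document !== 'undefined') {
  // Set the cookie for the whole site so every image request carries it.
  document.cookie = 'rwd-screensize=' + screenSizeLabel(screen.width) + '; path=/';
}
```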

With the basics out of the way, we can now get to the tricky part. As mentioned in part 1, intercepting the browser before it starts downloading images so that you can evaluate and possibly change the source of those images is tricky and may result in race conditions.

Now that the dynamic base tag has been ruled out, there are two main techniques that remain.

This is the method that the Filament Group settled on for the Boston Globe. Javascript is inserted into the head of the document so that it evaluates as soon as possible.

After it determines the screen size, it sets a cookie. Every subsequent image request sent from the browser will include the cookie. The server can use the cookie to determine the best image to send back to the user.

If the browser doesn’t support cookies or the user blocks them, then the javascript will have no effect.

Also, Yoav Weiss has done some testing and shared results that indicate that duplicate files will be downloaded by IE9. Firefox will download duplicate files if the script is external, but not if it is internal. This suggests that cookies may also be subject to the race condition problem that caused us to abandon the dynamic base tag approach.

Within the last couple of months, new techniques have emerged that use the noscript tag as a way to prevent extra downloads. The first post I saw describing this technique was by Mairead Buchan. She described it as having “the elegance of a wading hippo”. Despite that description, I think this technique holds promise.

A cleaner implementation of the noscript approach was created independently by Antti Peisa. Here is the html:

<noscript data-large='Koala.jpg' data-small='Koala-small.jpg' data-alt='Koala'>
<img src='Koala.jpg' alt='Koala' />
</noscript>

The values for the various sizes of image tags are stored in the data attributes on the noscript tag itself. Antti then provides sample jQuery code used to process the image:

$('noscript[data-large]').each(function () {
    var src = screen.width >= 500 ? $(this).data('large') : $(this).data('small');
    $('<img src="' + src + '" alt="' + $(this).data('alt') + '" />').insertAfter($(this));
});

These lines go through the document to find noscript tags with the appropriate data attributes. It tests for the screen size and then inserts a new img tag with the appropriate image path and alt tag.

When using the noscript tag, there are no rendering race conditions. The image in the noscript tag never starts downloading. Mairead explained that “it works because children of the <noscript> tag are not added to the DOM”.

This makes sense. The browser knows if javascript is available before it starts rendering a page. If javascript is available, there is no reason to worry about doing anything with items inside the noscript tag. If they aren’t getting added to the DOM, they certainly aren’t going to get downloaded.

This technique also has fallbacks if javascript isn’t enabled and doesn’t rely on cookies or htaccess files.

The biggest gotcha will be devices that profess to support javascript but have poor implementations. For example, BlackBerry 4.5 has javascript, but javascript cannot manipulate the DOM. Ergo, the noscript tag will not get used because scripts are available, but the script won’t successfully add a new img tag, so no images will show.

Please note, this is speculation on my part. I know how Blackberry 4.5 behaves, but I haven’t tested this particular approach on a 4.5 device.

Even though this approach does not create a race condition, it is important that the javascript execute as quickly as possible. Inserting all of these images may require the browser to reflow the page. It also may cause the browser to load assets less efficiently because it cannot start prefetching assets.

Because of the need to execute as quickly as possible, it makes sense to remove the jQuery dependency from Antti’s javascript and put the code in the head of the document.
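A framework-free version might look like this (a sketch under the assumption that the markup uses the same data-large/data-small/data-alt attributes shown above; the 500px breakpoint follows Antti’s example):

```javascript
// Pick the image source based on screen width, mirroring the jQuery version.
function pickNoscriptSrc(large, small, screenWidth) {
  return screenWidth >= 500 ? large : small;
}

if (typeof document !== 'undefined') {
  var tags = document.getElementsByTagName('noscript');
  for (var i = 0; i < tags.length; i++) {
    var large = tags[i].getAttribute('data-large');
    if (!large) continue; // not one of our responsive-image noscript tags
    var img = document.createElement('img');
    img.src = pickNoscriptSrc(large, tags[i].getAttribute('data-small'), screen.width);
    img.alt = tags[i].getAttribute('data-alt') || '';
    // Insert the new img immediately after the noscript tag.
    tags[i].parentNode.insertBefore(img, tags[i].nextSibling);
  }
}
```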

Most of these techniques rely on the size of the screen to determine what the image size should be. Andy Hume points out that the size of the screen may be misleading. He writes:

The content driven approach to fixing this is to decide which image to load based on whether the image will be stretched beyond its true pixel width. If you stretch an image beyond its true width it begins to look pixelated or blurry. In this scenario, we want to load in a higher resolution version of the image.

Andy’s fork of the Responsive Images JS tackles this problem (and adds support for nginx).

I’ve been looking forward to the Boston Globe’s launch for quite some time. It is a tremendous feat of engineering and design. It has the volume of traffic necessary to test different approaches to responsive IMGs and see what works and what doesn’t.

The technique that they chose to use combines data attributes with cookies. Unfortunately, responsive IMGs are currently broken on the Boston Globe site. This is a known problem and they are working on fixing it.

The upshot is that we don’t yet have a large scale deployment of any of these techniques that we can interrogate and point to as validation that a particular combination is battle-hardened.

In my mind, cookies plus data-src and noscript are the two most promising techniques. Both have problems, but they have far fewer gotchas than other approaches.

Most of the javascript techniques require little, if any, support from the server. There are alternate approaches that leverage the server for a bunch of the heavy lifting.

A few people have demonstrated solutions that do light-weight user agent string parsing to identify various mobile phones. If the user agent can be identified as iPhone or Android, then declare the device mobile and set the image size appropriately.

Unlike a lot of developers, I don’t have a problem with device detection based on user agent string. But if you’re going to start doing it for mobile, you have to take on real device detection via WURFL, Device Atlas, etc. Simplistic regular expression matching and assumptions about screen sizes aren’t going to work.

There are a couple of different approaches that rely on device detection to determine the screen size and deliver an appropriate image back. Device detection databases are pretty good about having basic information like screen size.

James Pearce created a fantastic service called TinySRC. He later went to work for Sencha and TinySRC became Src. Src automatically resizes images for you. You reference Src in your img tag by prefixing your image’s url with the service’s url, like this:

<img src="http://src.sencha.io/http://example.com/photo.jpg">

When a browser requests the url above, Src will look up the user agent of the device making the request to determine what size image is appropriate. It will then grab the image from your server and resize it. It then caches the resized image so that subsequent requests can be served quickly.

In addition to the automatic mode, Src will also allow you to specify specific sizes that you would like the image resized to.

Andrea Trasatti forked Scott Jehl’s Responsive Images JS to combine responsive IMGs with TinySRC. The script finds the screen size using javascript and then uses htaccess to request the image at the correct size from Src.

Andrea’s version was written fairly early. It still uses dynamic base tags, url parameters, and results in “1 HTTP request for every image that we might avoid”. But all of these problems could be remedied by combining what Andrea started with some of the newer approaches.

First, if you have a religious aversion to device detection, then you probably don’t want to use Src, or you need to use it in a scenario where you can specify the image size that you want.

As an aside, I’ve found it funny to see people who speak ill of device detection and user agent strings suggest that people use TinySRC. I once saw a slide deck that dismissed device detection and then a couple of slides later talked about how great TinySRC is. If only they knew. 🙂

On a more practical level, you have to evaluate whether or not the service will remain up and what happens if all of your content points to sencha urls that suddenly go away. I don’t think Sencha is going to go anywhere anytime soon. I know James well enough to know he’ll want to keep this service running forever if he can. But even all that said, looking at the long term availability of a service is something that needs to be considered.

WURFL is the largest open source device database. After attending the Breaking Development conference earlier this month, Carson McDonald was inspired to develop a WURFL-based solution for images. It’s awesome to see something come together so quickly after the conference.

(BTW, Breaking Development is the best conference in North America for web on mobile. Registration for the next event opens today. You should attend!)

Carson notes that his approach will likely have the same problems with CDNs and caching because different size images come from the same url.

Google’s mod_pagespeed Apache module automates many performance tasks and includes an option to scale any images on the fly. There are many ways to scale images (GD, ImageMagick, etc.). I decided to call out mod_pagespeed because it was one I hadn’t considered until I saw it suggested in a forum. I don’t know of anyone who has explored how it might be used in a responsive IMGs solution.

As you can probably tell by now, there are few solutions that you can simply install and forget about. Most require, at minimum, changes to the way you mark up the page. The two solutions that come closest to being plug and play are Src and Adaptive Images.

Adaptive Images was developed by Matt Wilcox. It turns the premise of responsive IMGs on its head by assuming that the markup on the page will contain the large versions of images rather than starting with the mobile versions.

The solution consists of three pieces:

1. A small snippet of javascript placed in the head that sets a cookie with the screen width and height.

<script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';</script>

2. An .htaccess file that rewrites all requests for images to a php file. You declare directories that you want to exempt from this rewrite. For example, you don’t want your media-query-savvy CSS background images getting routed through the php file.

3. The php file which resizes the image based on breakpoints that you can configure.
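The rewrite piece (step 2) might look something like this. This is a loose sketch in the spirit of Matt’s project, not his actual .htaccess: the exempted “assets” directory is an illustrative placeholder, and you should take the adaptive-images.php filename and flag details from the project itself.

```apache
RewriteEngine On
# Leave images in exempted directories (e.g. CSS assets) untouched.
RewriteCond %{REQUEST_URI} !^/assets/
# Route all other image requests through the resizing script.
RewriteRule \.(?:jpe?g|gif|png)$ adaptive-images.php [L]
```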

The best part of Matt’s solution is that as long as you can separate out your image files so you can exclude ones that shouldn’t be resized, you can implement this technique without making any changes to your existing markup. Existing pages and posts will suddenly have different image sizes.

Come on, by now you weren’t expecting it to be that simple, were you?

Because the images start with the large size, if javascript is not available, the large size will be delivered. The most common devices to not have javascript support are older feature phones. The type of devices that will choke and even crash on large images are older feature phones.

This technique also suffers from the same race conditions that most of the javascript solutions do. The cookie has to be set early to avoid extra downloads.

Update: Matt commented below and points out that the default settings will result in a small image being delivered if javascript isn’t present. The markup will point to a large version, but the php file returns a small version. All of this is configurable.

Also, he is right that the result of the race condition would not be multiple downloads. I think the race condition still exists with different drawbacks, but I’m going to continue the conversation in the comments where Matt and I can converse.

I also missed the fact that the url will stay the same regardless of the size of image which can cause issues with CDNs and proxy caching as noted earlier.

Bryan and Stephanie Rieger presented the work they did on the Nokia browser project at the Breaking Development conference. For that project, they invented a new way to combine client-side information with device detection.

When a browser first requests something from the server, the server doesn’t know anything about the device. So they check a device detection database to see what they can find out about the size of the screen (and other details). They then check their own local database of tacit knowledge. This is a database of things they’ve learned about how specific browsers work and any overrides they want to use. They use the combination of this information to deliver the appropriate HTML, javascript and images.

Once the browser gets this information, a javascript runs that tests the various aspects of the browser including screen size. It then stores this information in a profile cookie.

On the second request, the server receives the profile cookie and compares it to the information it has in its tacit database. It may update the tacit database. It combines the information into a revised profile combining server side information with client feature detection data.

I’m likely doing a poor job of describing the solution. Your best bet is to look at their slides.

This combined technique mitigates the problem of first load without any of the race conditions or potential problems that the client-only solutions have. It also extends beyond images to other content and javascript.

It is a complex system and requires significant changes to infrastructure to support. Bryan and Stephanie have published the approach, but the code isn’t available for download. It may be coming, but they took a well-deserved vacation after spending most of the summer working on the Nokia Browser project.

Probably the biggest problem with this approach is that most of us are not the Riegers. They have been doing mobile web for years. Their tacit knowledge of devices is exceptional. Freaking geniuses. That’s hard to replicate.

The same is true of the Boston Globe project. The team working on that included significant portions of the jQuery Mobile team and the guy who coined the phrase responsive web design. Few of us are going to be so lucky on our next project. 🙂

As I’ve reviewed the various techniques, I keep thinking back to something Andy Hume said in response to part 1:

Our current solutions are hugely dependant on the current (and undefined) behaviour of browsers in regard to the page-load race conditions you mention. For example, most responsive image implementations would be compromised if a particular type of look-ahead pre-parser began to speculatively download images before actually parsing the HTML or executing any script. (I half expect us to get bitten by this any day.) One way or the other we need to consort with browser makers to get future-friendly.

That’s the truth of it. Most of these techniques are based on our hope that browsers continue to download assets in the order we have observed to date. If the order changes or if browsers start pre-parsing more aggressively, the whole house of cards may fall down.

In part 3 of this series, I’m going to look at the conversations going on about ways to change the img tag or replace it with something that will work better with multiple file sources.

I reviewed 18 different techniques for this post. My notes are captured in a Google spreadsheet that you are welcome to review for detailed comments on each library. Thanks to everyone for publishing their thoughts and experiments. I learned a lot from each one.

This series wouldn’t have been possible without the assistance of Scott Jehl and Bryan and Stephanie Rieger. Scott in particular helped me sort out the problems with the main Responsive Images JS library. Thanks to all three of you for putting up with my many naive questions and for taking the time to explain all of the work you’ve been doing!


Matt Wilcox said:

Hi there,

great work on these round-ups! I do want to point out however that your warning about Adaptive Images sending high res images without JS is incorrect! Adaptive Images has a flag to choose how this scenario is handled, and by default if a cookie is not set it sends the mobile resolution image. It's one of the big benefits of the approach. Additionally, there is a CSS and PHP method of setting the cookie, so JS doesn't need to be relied upon.


James Young said:

Excellent post and a round up that's been much needed as there are some great projects dealing with responsive/adaptive images.

We're working with Matt's adaptive-images right now, it's a clever bit of kit that solves many of the main issues we need but some of the others look good bets to follow progress of too.

Great post :)


Matt Wilcox said:

I can imagine it has - that's a big post with a lot of detail! One other side-note:

"This technique also suffers from the same race conditions that most of the javascript solutions do. The cookie has to be set early to avoid extra downloads."

The implication there is that there will be a double download when the cookie is set. That's not strictly true. If there isn't a cookie it will resort to the default behaviour as set in the PHP config - that can either be to send the smallest image (mobile) or the largest configured image (not the original resource). After that the cookie will be set and it will only take effect on images loaded after the cookie is set.

The absolute worst-case-scenario is that the first page load on a site with AI isn't *fully* optimised. But there are never any re-requests using AI, the JS never alters any mark-up.

You're right though, race conditions are a bitch, and it's why there's a bit of logic in AI that means browsers identifying as IE don't obey the mobile-first flag. Firefox is also temperamental on occasion, but Webkit seems to be pretty solid.

What I wouldn't give for browsers to send headers that outline device capabilities along with all requests. That's what we really need. There are two approaches to adapting websites for mobile, and they don't work on the same principles. Adding capabilities to HTML is only a solution that's of any use if you want to have complete control of the exact sizes you request. But to be honest that's rare and that puts HUGE overheads of maintenance into the content layer (HTML). It's really only HTMLs job if it's negotiating different content to send (as opposed to simple scaled versions of otherwise identical content).

The second approach takes the methodology that we're not really talking about the content layer at all, but about resource adaption. That means it's not HTML's job at all, but the server or browser's. "Hey, I got your content, thanks, do you have it in a mobile flavour?" The content semantics are not changed, so it shouldn't be in the HTML.

This is why I'm not confident that there is a realistic, viable, and useful HTML solution to the issue of responsive images. But, whatever, we need headers so the server is aware of the device capabilities. We can't rely on cookies anymore.

Yoav Weiss said:

Great summary of all current techniques.
One problem with all client-side techniques that do not double load is that they stall preloading of images. The preloader ignores the image, and the image starts downloading only after the script that loads it downloads and runs. I'm guessing that in most cases, that script runs on the "domContentLoaded" event or at the end of the page (it is the simplest to implement, since we have all the images), which may mean a significant delay in image download, resulting in a performance impact.
This may be better than double loading, but it is far from ideal.
Looking forward to Part 3!

John Polacek said:

What about removing the img from the html altogether and instead using CSS background-image? Then use media queries or javascript to load css based on screen width. (Or not even display images in some cases.)

Pete Duncanson said:

I'd also like to point out that the current state of play for most websites is that they don't have any responsive image goodness on them at all. So users of mobile devices are just getting the current de facto experience; it's the norm to view a site built for desktops and have to wait for the larger images to be downloaded.

The various libraries and techniques above try to lessen that pain/slowness, but as you've clearly stated there is no 100% foolproof answer for every device. By having a go you get a pragmatic increase for most devices; those that don't work/have quirks/are too old just have to keep using the internet as it is today, 99.9% of it built for desktops and broadband speeds. There is no loss of performance here, there is only no gain.

Great write up by the way. I really like Matt's solution, simplest and pragmatic. So much so I'm porting it to .net :)



Lars Lindbäck said:

Anyone experimented with background-image (using media queries to set the background image)?



/* default */
#img1 {
  background-image: url('small.jpg');
  width: 200px;
  height: 200px;
}

@media screen and (min-width: 500px) {
  #img1 {
    background-image: url('large.jpg');
    width: 400px;
    height: 400px;
  }
}

Jason Grigsby (Article Author ) said:

@matt The post has been updated to correct how Adaptive Images works.

I still think the race condition matters for Adaptive Images. Let’s imagine a scenario where the browser starts downloading images before the cookie is set. Say someone put the javascript in the head but after a bunch of other javascript, and a look-ahead pre-parser has already made requests for images before the cookie was set.

In the techniques where the url is modified in some way (src changes, dynamic base tag inline), the browser sees that the resource location has changed so it reissues the request. For Adaptive Images this wouldn’t happen because the resource is still in the same location (at least that is my understanding).

But you could end up with a mixture of images if the requests without the cookie and the ones with it returned images from different breakpoints.

How often will this happen? I’m not certain. That’s the problem with a lot of this stuff. We’re building on assumed behavior instead of documented behavior.

FWIW, I really like Adaptive Images. That’s one of the reasons I highlighted it explicitly. It takes care of so many of the pieces automatically. Great job!

Chris Jacob said:

Personally I like the clientside noscript solutions - they are arguably the easiest to implement - which is VERY important for widespread adoption.

noscript means your page starts with "nothing" which I feel is a solid foundation... Seriously...

For feature-phones you might completely disable images altogether until the user chooses to enable images ( saving many bytes and http requests when all the user wants is the text content ).

noscript also presents an interesting opportunity to lazy load images that are "below the fold". This again could have significant gains to the speed of page load & reducing bandwidth consumption (across mobile and desktop). JAIL is a jquery plugin for this... But if we're thinking mobile first a lighter weight solution is needed (anyone with JS skills please consider building this!).

Loading images as a secondary component of the page I think makes sense (particularly in text-dense sites). Progressive rendering is about getting "something on screen fast"... giving the user instant "gratification". It also gives them a moment to get acquainted with the page layout while you continue on with the "heavy lifting".

Feedback please ^_^

Jason Grigsby (Article Author ) said:

@John and @Lars There have been a few experiments with css background images. I did a small set of tests as part of my post on media queries problems, showing which techniques resulted in downloads and which didn’t.

However, moving from the img tag to css background images for all images has three significant problems in my mind:

1. The markup is no longer semantic. When the img is actually part of the content and not part of the look and feel (for example an illustration or graph), moving it to CSS removes it from the related content. It means there is no alt tag to provide descriptions of the image for people using screen readers.

2. Ideally you want to do everything you can to encourage browsers to cache your css file. Can you imagine a site like the New York Times updating their CSS every time an article was written that contained an image? Or if every page required separate inline CSS to handle images? I see problems with that approach scaling.

3. Ultimately we need a solution that will also work for images coming from embedded widgets, ad networks, etc. I’m not sure replacing img tags with CSS is practical in those scenarios.

For me #1 is the biggest reason why we continue to need something like the IMG tag. The other two may be things that can be worked around.

Matt Wilcox said:

@ Jason - thanks for the update :)

I agree, race conditions are a real pain in the back-side, and anything that relies on setting a cookie before parsing the rest of the HTML is going to suffer from race conditions. In the case of Adaptive Images specifically however, this is only a problem on the very first page load on any site employing AI. Once the cookie is set for the domain everything is fine and a race condition will not happen again.

I fully agree this isn't ideal, and that's also why AI provides a mobile-first toggle - you can choose whether you want it to send mobile sizes in the event of a cookie not being set, or whether to deliver desktop sizes. Wise use of this means it's hardly ever apparent that there was a first-domain-load race condition.

In the end, there is no completely foolproof solution to be had to the problem of adaptive images. But I think AI provides one of the most flexible and considered solutions for a self-hosted use case that needs to simply scale images. Other solutions are better if you want to deliver different assets or scale beyond one server.

Jason Grigsby (Article Author ) said:

@matt wrote:

“But, I think AI provides one of the most flexible and considered solutions for a self-hosted use case that needs to simply scale images.”

Absolutely agree!

Matt Wilcox said:

PS, thanks for the nice comments and constructive critical thinking :) It may be worth noting that it's a community project on GitHub with a broad Creative Commons licence, so if anyone has any ideas on mitigating the race condition (that aren't just fallback browser detects like those already built in) I'd love to hear about them! The GitHub project is over at

Scott Jehl said:

Another great post, Jason!

Thanks so much for the great research on this. I do hope this discussion will lead us towards lobbying browser vendors on whatever it is we really need, which probably won't resemble anything in the post above.

On the Boston Globe note: we've been told by the team that some tweaks are underway to get the technique up and running again - it was working at launch, so hopefully it won't take much. :)

It's unfortunate the comment streams of these image posts are separated. Good stuff going on in both :)

Matt Wilcox said:

I am also 100% behind Scott Jehl's comments RE lobbying browser vendors. To my mind, we really need to get them to send device capability headers with all requests. The browser string is no longer fit for purpose. The web has changed, and the server needs to know much more about the device it's talking to.

John Keith said:

As one of those "standards based developers" I'm all in favor of browsers sending device capability headers. A bit more data for each request, but completely worthwhile. To be useful, such info will ultimately need to be codified, either in widely adopted de facto standards or in mandated de jure standards. In either case it's a long haul to get to that point, and with the half life issue for installed browsers we still need some defensible strategy that casual developers can simply use. There are so many good ideas being discussed here!

Andy Davies said:

Perhaps what we need is a request header that tells us something about the resolution of the user agent, without having to enumerate all the possible values that the user-agent string can take...

Then not only could the server use it to determine which image to serve, but it could also use a Vary header in the response so that proxies could cache all versions of the image - e.g. the header could be Resolution and the server could respond with Vary: Resolution.

This is just like the mechanisms that are used for gzipped content now.
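A minimal sketch of Andy's idea in Python. The Resolution request header and Vary: Resolution response header are hypothetical (no browser sends them today), and the breakpoints and image paths are made up for illustration:

```python
def pick_image(headers):
    """Choose an image variant from a hypothetical Resolution request
    header, and tell caches to key on it via a Vary response header."""
    resolution = headers.get("Resolution", "")  # e.g. "320x480"
    try:
        width = int(resolution.split("x")[0])
    except (ValueError, IndexError):
        width = 1920  # no header: assume a desktop-class screen

    if width <= 480:
        body = "/images/small/photo.jpg"
    elif width <= 1024:
        body = "/images/medium/photo.jpg"
    else:
        body = "/images/large/photo.jpg"

    # Proxies can now cache each variant separately, just as they do
    # for gzipped content with "Vary: Accept-Encoding".
    response_headers = {"Vary": "Resolution"}
    return body, response_headers
```

The Vary mechanism itself is real and already how caches handle gzip negotiation; only the Resolution header is the speculative part.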

E.Casais said:

Regarding: "To my mind, we really need to get them to send device capability headers with all requests."

There actually is a standard that basically addresses this: uaprof. The x-wap-profile HTTP header field contains the URL of an RDF file describing the user agent. Capabilities can thus be efficiently retrieved over the wireline network instead of clogging the wireless network with bloated HTTP headers.

Yes, manufacturers have published incomplete or unreliable uaprof data -- or even, like Apple, staunchly refuse to publish any uaprof file whatsoever. But this is the basis. In fact, all device description repositories (WURFL, Device Atlas, etc) rely upon uaprof to populate their databases.

Interestingly, there is also a little-known, and still little-used, HTTP header field called x-wap-profile-diff. Its purpose is to send attribute information overriding or complementing the default profile.

From there, I see two possible paths to dispense with cookies and other hacks for responsive images:
1) Browsers send, by default, a set of variable attributes in the said diff field (e.g. the screen dimensions, especially if one can change the orientation of the display);
2) Browsers allow Javascript code to set the diff field in the HTTP request header after collecting relevant information. Obviously, race conditions will still be an issue, if one cannot guarantee that a script assigns a value to the field before any other sub-request takes place.
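A sketch of how a server might combine the two headers, under loud assumptions: the profile_db dict stands in for a cache of capability data fetched from the RDF documents that x-wap-profile points at, and the diff is simplified here to key=value pairs (real x-wap-profile-diff carries RDF fragments):

```python
def screen_width(headers, profile_db):
    """Resolve a device's screen width from uaprof-style headers.

    profile_db is a stand-in for pre-fetched uaprof capability data,
    keyed by profile URL; parsing real uaprof RDF is out of scope.
    """
    profile_url = headers.get("x-wap-profile", "").strip('"')
    caps = dict(profile_db.get(profile_url, {}))

    # x-wap-profile-diff overrides or complements the static profile,
    # e.g. with an orientation-dependent screen width (simplified
    # key=value syntax for this sketch).
    diff = headers.get("x-wap-profile-diff", "")
    for part in diff.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            caps[key.strip()] = value.strip()

    return int(caps.get("ScreenWidth", 240))  # conservative default
```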

Now, if there were not this instinctive reaction to reject everything that comes from the WAP strand of mobile web...

Keith Clark said:

Great job on summing up these techniques, Jason, and thanks for the spreadsheet; I'll be digesting that information.

I've spent time researching many variations of the techniques discussed above and I've always ended up having to compromise on something, but I'm sure a robust solution must exist. In the long term, this is a problem that browser vendors need to address.

Also, I noticed you picked up on the fact that my "responsive images using cookies" idea serves hi-res images by default. This could be changed to serve low-res images if needed, by adding a switch statement to the image-serving script.
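A sketch of that switch in Python rather than the original script's language. The "screenwidth" cookie name, the breakpoints, and the MOBILE_FIRST flag are all hypothetical; the interesting case is the first request, when no cookie has been set yet (the race condition Matt describes above):

```python
# Breakpoint widths the script knows how to generate, smallest first.
SIZES = [480, 768, 1024, 1600]
MOBILE_FIRST = True  # the "switch": default to low-res when no cookie yet

def variant_for(cookie_value):
    """Map the screen width stored in a cookie to an image variant.

    cookie_value is the raw value of a hypothetical "screenwidth"
    cookie set by inline JavaScript; None means the cookie has not
    been set yet (e.g. the very first request to the domain).
    """
    if cookie_value is None:
        return SIZES[0] if MOBILE_FIRST else SIZES[-1]
    try:
        width = int(cookie_value)
    except ValueError:
        return SIZES[0] if MOBILE_FIRST else SIZES[-1]
    # Serve the smallest variant at least as wide as the screen.
    for size in SIZES:
        if width <= size:
            return size
    return SIZES[-1]
```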

Brent said:

Wow - damn exhaustive article indeed. Thank you Jason! Again and as always, great work.

Yeah, this is a tricky beast indeed. Just when ya think something might be the answer, there's a gotcha. For small mobile-only sites, I am tending to lean towards the background-image + media queries approach, but that's for coding simplicity and non-tech end-user maintainability. The screen-reader and content-separation issues are definitely a bummer. No easy answers.

Joel_hughes said:

Superb Article!

This is *really* timely for me as I'm just playing with responsive web design and the first two sites will have image galleries.

Ideally I'm after a solution where images have a unique URL (with dimensions embedded, perhaps in the filename - not the querystring) to aid cacheability.

Thanks for your superb work researching this


Dug said:

Great post on a complex issue.

I was reading through the various options presented thinking that you were missing device detection like WURFL. I was pleased when I finally got there.

"Carson notes that his approach will likely have the same problems with CDNs and caching because different size images come from the same url."

Couldn't you combine user-agent-based device detection (WURFL) with PHP to dynamically modify the IMG src based on the device's capabilities, before the page is even sent to the user's device? This should circumvent any issues with image caching or CDNs.

I could be wrong - much of my knowledge is book learning rather than practical.
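A minimal sketch of what Dug suggests, in Python rather than PHP. The tiny DEVICE_DB dict stands in for a real device description repository such as WURFL, and the /images/{width}/ URL scheme is invented for illustration; each size gets its own URL, which is what keeps CDNs and caches happy:

```python
import re

# A toy stand-in for a device description repository such as WURFL.
DEVICE_DB = {
    "iPhone": {"max_image_width": 320},
    "Android": {"max_image_width": 480},
}

def rewrite_img_srcs(html, user_agent):
    """Rewrite img src attributes before the page is sent, based on a
    User-Agent lookup, so each size has its own cache-friendly URL."""
    width = 1024  # desktop default when no mobile token matches
    for token, caps in DEVICE_DB.items():
        if token in user_agent:
            width = caps["max_image_width"]
            break
    # /images/photo.jpg -> /images/320/photo.jpg (hypothetical scheme)
    return re.sub(r'src="/images/([^"]+)"',
                  f'src="/images/{width}/\\1"', html)
```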

Morten Skogly said:

Good article.

Something I've been pondering about server-side scaling: part of the solution must be context aware. Scaling an image is fine and good (I would perhaps simply do that using timthumb), but to what size? An image in the main content, say with a default of 640px, needs one set of scaling instructions (say, relative to a predefined grid), but an image in the sidebar needs another. A jQuery solution that reads the dimensions of the wrapping divs (or other elements), subtracts padding/margins, reads the current width/height attributes of each image, and then rewrites the src with size params and calls timthumb or another service would do the trick, combined with solid caching.

I also think it would be good to just use max-width for slight downsizing of images, and to save proper server side downscaling for devices with very small screens, like a mobile phone.

I also think that it would be smart to have a rather strict set of column widths = a maximum number of possible dimensions an image would need to take, both to save CPU and hard-drive space. I see some flex-width grids have a wide array of possible widths, often with no clear logic (clear logic would be, for instance, that two col-6 plus padding should equal one col-12), making it exponentially harder to keep track of all possible combinations of widths. To prevent the server filling up with thousands of scaled images on a large content-driven site, there should be a clear limit on the number of widths an image can be scaled to.
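That last idea fits in a few lines. A sketch (the list of widths is hypothetical): rather than scaling to whatever width a device reports, snap every request up to the nearest of a fixed set of grid widths, capping the number of cached variants per image:

```python
# The only widths the server will ever render, matching the grid.
ALLOWED_WIDTHS = [160, 320, 480, 640, 960]

def snap_width(requested):
    """Snap an arbitrary requested width up to the nearest allowed
    column width, so the cache holds at most len(ALLOWED_WIDTHS)
    variants per image instead of one per device."""
    for width in ALLOWED_WIDTHS:
        if requested <= width:
            return width
    return ALLOWED_WIDTHS[-1]
```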

Julián Landerreche said:

I've yet to investigate & experiment with mobile-first, responsive design approaches. This article will certainly make things easier when the time comes.

If I should blindly choose an approach, I'd go with a server-side solution based on device detection, expecting some edge cases and false positives being triggered from time to time.

One advantage brought by a server-side solution is that you could query the image for its dimensions (width, height) and include them in the tag attributes, improving the page rendering, and avoiding jumping text or reflows.
I've checked that this (querying an image's real dimensions) is also possible via JS, but it may be harder to implement, and may add some processing overhead.
Alternatively, you could force some dimension via CSS.
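A sketch of the server-side dimension query Julián describes, using only the Python standard library to read a PNG's IHDR chunk (a real system would likely use an imaging library and handle JPEG and GIF as well):

```python
import struct

def png_dimensions(data):
    """Read (width, height) from PNG bytes: an 8-byte signature, then
    the IHDR chunk whose first 8 data bytes are two big-endian uint32s."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG")
    # Bytes 8-15 are the IHDR length and type; 16-23 hold width, height.
    return struct.unpack(">II", data[16:24])

def img_tag(src, data):
    """Emit an img tag with explicit dimensions to avoid text jumping
    and reflows while the page renders."""
    w, h = png_dimensions(data)
    return f'<img src="{src}" width="{w}" height="{h}" alt="">'
```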

BTW, does anyone know of an article addressing "best practices" for responsive design and forcing (or not) image dimensions (regardless of which image, large or small, has been served)?

Brett Jankord said:

@Jason - Great articles. RWD is evolving and we need more in depth articles like this to evaluate where we are at, and ways to solve the issues that we've run into.

As far as RWD images go, Adaptive Images seems to be the solution I lean towards most. It's easy for me. If we could get more information in the header on the initial page load, that would be great, but I think you bring up a good point: you're limited to knowledge from the past when using a device database. I like the idea the Riegers brought up about pairing a device database with a tacit-knowledge database based on created user profiles. That seems like it would help handle new/unknown devices.

@Matt - Interesting update, I'll have to check it out, thanks for the heads up.

Willabee said:

I'm using CSS background images with media queries, defaulting to a mobile-first background image for devices that don't support media queries. A fallback gives older IE browsers a wider image when there's no JavaScript, and reverts to a media query polyfill with JavaScript.

For the img tag, I use a thumbnail with the src set to a mobile-optimised image. With JavaScript enabled, the source is reset based on device width and displayed in a device-friendly (lightbox-style) dialog.

Great article in the quest to find the holy grail for RWD images. The search goes on.