
Demystifying Google’s Recent Switch to INP

By Jason Grigsby

Published on April 16th, 2024

Fourcast Podcast featuring Tammy Everts. Hosted by Jason Grigsby.

In this episode of Fourcast, I sat down with Tammy Everts from SpeedCurve to chat about Google’s recent switch from First Input Delay (FID) to Interaction to Next Paint (INP) in Core Web Vitals and what it means for website owners.

I’ve always appreciated how Tammy can explain complex web performance topics in terms anyone can understand. It helps that Tammy is not only a recognized expert in web performance, but also in user experience, and she brings both of those perspectives to this discussion.

In this episode, we cover:

  • What are Core Web Vitals, and why should website owners care?
  • What was FID and why is INP replacing it?
  • What are the ramifications for websites? Are there many sites that passed FID that don’t pass INP?
  • What Tammy is hearing from SpeedCurve customers about this transition.
  • Some UX implications of INP that Tammy believes too many people are overlooking.

Tammy shares several recommendations for website owners, including how to tie performance metrics to key performance indicators for your business.

This episode is a must-listen for anyone looking to understand how INP impacts them and what they can do to improve their website performance.

Subscribe to Fourcast on Spotify, Apple Podcasts, YouTube, or wherever you get your podcasts.

Jason: [00:00:00] Welcome to Fourcast. I’m Jason Grigsby, one of the partners at Cloud Four, and we have an exciting episode for you today. As you may have heard, on March 12th, Google made a change to the way it ranks web pages in its search results. In particular, it replaced FID with INP in the Core Web Vitals that it uses to evaluate a web page’s performance.

Now, did that sound like gibberish to you? A bit of acronym soup? Well, don’t worry. That’s why we’re here today to clear that up. And we have a phenomenal guest to help us with it. Tammy Everts is a long-time user experience and web performance expert. She wrote a book called Time Is Money: The Business Value of Web Performance.

She helps curate WPO Stats, which stands for Web Performance Optimization Stats and keeps track of performance success stories. So if you’re looking for examples of how performance impacts business, you can go to that site and find really great [00:01:00] examples. It was also the inspiration for our own website, PWA Stats, which does something similar for progressive web apps.

Not only is Tammy a sought-after speaker, she’s also the co-chair of the annual Performance Now Conference, which takes place in Amsterdam this November. Tammy works as the Chief Experience Officer for SpeedCurve. And more than all of that, Tammy is an exceptional human being and a tremendous contributor to the web performance community.

Tammy, welcome to the show.

Tammy: Oh, what a nice intro. Thanks, Jason. I laugh every time I hear it. I think it was a couple of performance.now()s ago that somebody coined the term TLA, three-letter acronym, and talked about how much in our industry we really like our TLAs. So it was like TLA, FID, WPO. So yeah, a big part of my job is slowing down and explaining to people what the three-letter acronyms are and, you know, evaluating, are they even helpful?

[00:02:00] Are they measuring what you need to measure? Like what, what do we actually learn from all these TLAs?

Jason: Well, that’s excellent because that’s, that’s really what we want to get into today. And I think we should start with some of the basics, like just to catch people up to speed. And let’s start with Core Web Vitals.

Like what are they and why should website owners care about them?

Tammy: Yeah. So a little history of Core Web Vitals. It feels like they’ve been around kind of forever in tech years. They’re really only about four years old. It’s a Google initiative that started in 2020. And the focus was to kind of take this ever-increasing swath of metrics we use to measure, you know, various things like rendering times and how pages are built and other things to do with web performance.

And to kind of simplify it, because it’s pretty overwhelming, down to a set of currently three metrics that are intended to let you know how to measure performance from [00:03:00] the perspective of what actually matters to users. And so right now, those three metrics are Largest Contentful Paint, which is the kind of loading metric.

It lets you know that the page is loading, something meaningful is happening on the page. Interaction to Next Paint, which is the interactivity metric. So it just lets you know how interactive the page is. Are there any interaction delays or responsiveness issues? And the visual stability metric, which is cumulative layout shift.

So short form: LCP is Largest Contentful Paint, INP is Interaction to Next Paint, and CLS is Cumulative Layout Shift. So those are the three metrics. They are among the page experience signals that Google factors into its search ranking algorithm. Hence all the fuss, because when Google says something is part of its search algorithm, everybody sits up and takes notice.

[00:04:00] They’ve been really great for the performance community because they’ve gotten a lot of people other than performance engineers and developers to think about and care about web performance. So, like, kudos to everyone at Google on the team who develops and continues to maintain Core Web Vitals. But I think the thing that gets a little bit lost is that they’re just part of the ranking algorithm.

We don’t actually know how much weight they have. And there are other ranking factors, like mobile friendliness or security or accessibility, absence of interstitials; there’s all kinds of things that go into that. So focusing just on Core Web Vitals and kind of leaving those things behind is not recommended.

And also it’s really important to remember that, since Core Web Vitals were announced, I think a lot of good things have happened in terms of people caring about performance and trying to optimize for those metrics, [00:05:00] but we don’t actually have any meaningful case studies that show us the impact of Core Web Vitals on SEO. And I’m kind of just saying that up front because inevitably it’s the question that people ask me, and unless, you know, maybe one of your listeners has one they can share with me, I would really love to hear it, but to date there aren’t any.

Jason: Yeah, it’s interesting. We were working with a client a couple of years ago, maybe a year ago, I can’t remember, but they had an SEO firm that seemed to know their stuff, right? Like, there are multiple times where I’ve talked to SEO folks and I’m not so certain, but this group really seemed to be very knowledgeable.

And when I double-checked, things seemed to match up. They were incredibly focused on Core Web Vitals as a key thing that was going to help them in their rankings. And as we started implementing faster pages and [00:06:00] started seeing Core Web Vitals go up, they were actually seeing an increase in their search engine rankings and the amount of traffic they were getting.

Now I didn’t have access to any of that data. I was just hearing it secondhand. So I can’t. I can’t speak to a case study. I don’t know what difference it made. I don’t know if there were other changes but we did make a substantial increase in performance overall for them. That was part of what we were working on.

And they saw that reflected in, you know, in SEO and in traffic. So it makes sense to me that, to the degree the algorithm Google uses is a complete black box, you need every little edge you can get, and you want to care about Core Web Vitals if you’re a website owner. And it also makes sense for users.

Tammy: Absolutely. And kind of like to that point, there are really good case studies around Core Web Vitals and other metrics, other business and engagement metrics. So if you go to web.dev and look at the case studies [00:07:00] that Google has collected, and other places, you can actually see that, you know, improving INP, improving LCP has also improved revenue, conversions, time on site, like, you know, a swath of other metrics. So I don’t mean to say that, you know, that SEO can’t also be improved.

Jason: Right.

Tammy: Yeah, exactly. We just can’t demonstrate SEO exactly, but we can demonstrate a lot of other helpful things, which you should also care about.

My colleague, Andy Davies, who some of your audience might know, he’s a performance person from years and years back. And he’s probably forgotten more about performance than most people will ever know. He has a really good breakdown where he talks about SEO as being about user acquisition.

So you should care about performance and SEO from an acquisition perspective, but then you should keep caring about [00:08:00] vitals and other, you know, performance metrics from a retention aspect. So in the short run, unfortunately, we’re kind of hearing more and more about companies, agencies, consultancies.

And I believe most of them are doing things that are above board, but a few are kind of gaming some of the Core Web Vitals to get that SEO boost. And it’s really kind of a short-lived strategy, because at the end of the day, it’s not going to get you retention. So, you know, trying to game your metrics doesn’t really get you very many places.

And also, like, even Google will tell you that the metrics don’t matter as much as the content on the page itself. So, having great metrics is not a substitute for original content and, like, really meaningful original content. So, I would always recommend, like, you know, it’s important to care about SEO, obviously, but don’t make your pages faster or optimize for your metrics solely for SEO purposes.

You do it for your users, as you said. [00:09:00]

Jason: Right. You, you did a good job of describing , the current three Core Web Vitals. But this is new as of March 12th. And it used to be instead of INP Interaction to Next Paint, it used to be FID. And I wonder if you could talk just, you know, briefly about what FID was or is, I suppose it’s still around and why there were problems with it.

Why did Google decide to replace it?

Tammy: Yeah, so the one thing that I forgot to mention earlier is how these metrics are actually measured. Like, what are the tools that we use to measure these? The Core Web Vitals are measurable in any real user monitoring tool. So basically any tool that you’re using on your pages that measures real user experiences, the way that actual users interact with your pages.

And Google also, in terms of the thresholds that it’s created, because I didn’t mention those: Google [00:10:00] has recommended thresholds for the different metrics, the kind of good, needs improvement, and poor buckets. And the recommendation is to achieve those numbers, or ideally the good number, at the 75th percentile of your users.

So what that means is, for example, for Largest Contentful Paint, which is that loading metric that kind of tells you, okay, when is the most meaningful visual element above the fold rendered? I know we’re not supposed to say above the fold, but I say it anyways. And so you want to know that it is rendering in under 2.5 seconds, which is Google’s threshold, and you want to know that it’s doing that at the 75th percentile.

So basically 75 percent of your users are getting that experience of LCP happening at 2.5 seconds or sooner. I just kind of wanted to get that out of the way before getting into talking about FID. It’s an interactivity metric and a responsiveness metric that measures actual user interactions.
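The 75th-percentile assessment Tammy describes can be sketched in a few lines. This is an illustrative sketch, not SpeedCurve or Google code; the threshold values are Google’s published “good” and “poor” boundaries for each Core Web Vital:

```javascript
// Google's published "good" / "poor" boundaries for the three vitals.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // Largest Contentful Paint, ms
  inp: { good: 200, poor: 500 },   // Interaction to Next Paint, ms
  cls: { good: 0.1, poor: 0.25 },  // Cumulative Layout Shift, unitless
};

// Nearest-rank percentile of an array of real-user samples.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

// Rate one metric the way Tammy describes: at the 75th percentile.
function rateVitalAtP75(metric, samples) {
  const p75 = percentile(samples, 75);
  const { good, poor } = THRESHOLDS[metric];
  if (p75 <= good) return 'good';
  if (p75 <= poor) return 'needs improvement';
  return 'poor';
}
```

Note that a page can rate “good” even when a minority of users have a bad experience; the 75th percentile deliberately ignores the slowest quarter of samples.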

So, how [00:11:00] quickly the page responds to the first user interaction. Specifically, a click, or a tap, or a key press. So the thing about that was, it was a good first attempt just, like, understanding, like, it’s not just about how quickly the actual content renders, but how that content behaves when people interact with it.

So, it was a first step. But the gaps sort of started to appear pretty quickly, where we realized that it’s not measuring the overall responsiveness of the page, because there can be multiple user interactions on a page, and that overall responsiveness really matters. Like, 90 percent of the user’s time on the page is spent after it loads.

So you want to capture as many different interactions as are happening on the page. And another telling thing that kind of exposed maybe some of the weakness of of FID, first input delay, is my colleague Cliff Crocker did an analysis pretty early on with FID, [00:12:00] where he, among other things, looked at how FID correlated to business and user engagement metrics.

So in performance if you’re capturing real user data, you can actually create something that we call correlation charts where you correlate your performance metrics like FID or like start render or anything else with your business metrics like conversion rate or user engagement metrics like bounce rate.

Really any of the metrics that you can capture. So the idea is that if FID was meant to be a user experience signal and a user experience oriented metric, then any changes, good or bad, to FID should affect, you know, some kind of business or user engagement metric. FID gets better, conversions get better, that kind of thing. And what Cliff found was that changes in FID really didn’t correlate with any changes in those metrics. And so we realized clearly it’s not quite capturing [00:13:00] exactly what we need to capture from a usability, from a user experience perspective. So in the background, and I think it was pretty early days, a lot of people, including the Google folks who work on the vitals team, sort of realized that these cracks existed, and they have been exploring Interaction to Next Paint as a potential replacement for quite some time.
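The correlation charts Tammy describes pair a performance metric with a business metric across cohorts of sessions. As a rough sketch, a plain Pearson correlation coefficient captures the idea; real RUM tools bucket and chart the data rather than reduce it to one number:

```javascript
// Pearson correlation between paired samples, e.g. xs = FID (or INP)
// per cohort, ys = conversion rate per cohort. Returns a value in
// [-1, 1]; near zero means the metric doesn't track the business
// outcome, which is roughly what Cliff's analysis found for FID.
function pearson(xs, ys) {
  const mean = (v) => v.reduce((sum, x) => sum + x, 0) / v.length;
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0;
  let dx = 0;
  let dy = 0;
  for (let i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}
```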

Cause these things obviously aren’t trivial to, you know, implement and introduce… Do you have any questions about any of this?

Jason: Yeah, so, it seems like the way that I have been thinking about the difference between the two is that FID, which I didn’t realize we were pronouncing instead of sounding out.

But that FID was really just measuring like the first thing that somebody did on a site. So if, if somebody built a site you know, like, I, you know, we see this a lot where you’ve got a webpage and the webpage loads, and then like a bunch of [00:14:00] other stuff loads, like a bunch of other JavaScript loads later that the person could have, if they happen to click in that window between when that initial stuff loads and when the later things load, they could have a good experience for their first click, but their second click could be really slow because, you know, like the chatbots loading up or something of that nature. And the way that I understand INP is that it’s an attempt to sort of capture that entire experience better. Whether it does or not I guess remains to be seen, but that it actually is attempting to, you know, look at all of the clicks that somebody has on a webpage.

Tammy: Exactly, exactly. So INP, it still only focuses on clicks and taps and key presses.

So, you know, that’s kind of the extent of it. But it measures all of the user interactions on the page and then gives you a single value. So a good INP is under 200 milliseconds. Basically, it’s [00:15:00] saying that if a user is on your page and they’re clicking on various things, the slowest of those various interactions should not exceed 200 milliseconds, which sounds like a lot because it’s a three-digit number, but 200 milliseconds is

Jason: No…

Tammy: .2 seconds. So it’s really not very much time at all.
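Unlike FID, which only looked at the first input, INP considers every click, tap, and key press on the page. As a rough mental model (a simplification, not Chrome’s exact algorithm), it reports approximately the worst interaction latency, with one outlier discarded for every 50 interactions:

```javascript
// Simplified INP sketch. interactionDurations are per-interaction
// latencies in milliseconds collected over the page's lifetime.
function approximateINP(interactionDurations) {
  if (interactionDurations.length === 0) return null; // no interactions, no INP
  const worstFirst = [...interactionDurations].sort((a, b) => b - a);
  // Roughly one outlier is ignored per 50 interactions; on pages with
  // few interactions, the single worst one is the score.
  const outliersToSkip = Math.floor(interactionDurations.length / 50);
  return worstFirst[Math.min(outliersToSkip, worstFirst.length - 1)];
}
```

So a page where every click felt instant except one 350 ms chatbot delay still scores 350 ms, past the 200 ms “good” boundary, which is exactly the second-click scenario Jason describes.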

Jason: One of the things that I heard a lot about sort of last year as people were talking about this transition, but then I haven’t really circled back on, it looks like some of the folks at SpeedCurve may have done a little more analysis on this, was to try to understand how many sites were doing fine with FID but maybe, you know, failing INP. And it seems like there may be quite a few of them.

Tammy: So we haven’t analyzed our own customers, because there are sometimes issues with, you know, doing analysis of aggregated data and the agreements that we have with our customers about what we’re able to do. But [00:16:00] my colleague Cliff, again, did an analysis of the top million websites via the HTTP Archive, kind of looking at sites that had good FID versus good INP. And what he found was that it was really easy to have good FID: almost 100 percent of desktop sites had good FID, and about 93 percent of mobile had good FID.

So those are really good numbers. And so a lot of people were really complacent, but it almost kind of worked against FID, because people just stopped thinking about it or caring about it. It became a metric that was really easy to ignore, because it was always going to be in the green for you. Looking at the numbers for INP, however, paints a different story.

So numbers are still pretty good on desktop overall, like for the top million sites, it’s something like 96 percent of [00:17:00] desktop sites have good INP, but for mobile it goes way down and only two thirds of mobile sites have good INP. So still, I mean, roughly 65 percent is pretty, it’s pretty good, but it’s not great.

Like, I would still want to be sure that I’m not in that other one third.

Jason: Like those are the ones that presumably have larger budgets and people spending, you know, working on them more professionally than maybe the smaller, you know, like a mom and pop e-commerce site or something of that nature.

Tammy: Correct. And then Cliff did some other interesting research for example, like just kind of looking at the meaningfulness of INP. We did find that INP does correlate more closely to business metrics and

Jason: Oh, that’s great.

Tammy: Things like that. So that’s kind of our go-to whenever a new metric comes out: if it’s measurable in RUM, can we create a correlation chart to see if it actually, you know, moves the needle on any of your [00:18:00] other important business metrics.

And then, interestingly, Cliff also found that mobile INP matters even more than desktop INP. So there was an even stronger correlation between good or bad INP and good or bad, you know, conversions or bounce rate or anything like that. So the challenging piece there is that INP is harder to optimize for on mobile. But if you have a large swath of your users coming to you via mobile, you’re really going to want to make sure that you are optimizing for them, because there’s more potential there to move the needle on your business metrics if you do.

Jason: I mean, it’s great too, if you’ve managed to make your site fast on mobile, then it’ll fly in a desktop browser. Exactly. So as far as SpeedCurve, and to the degree to which you can talk about these things, what are you seeing at [00:19:00] SpeedCurve with companies trying to adapt to INP? Is it a big concern? Is it something that people are struggling with, or is it something that they’re already well suited for?

Tammy: So in SpeedCurve, and in a few other tools as well, we’ve had the ability to track INP for quite some time. So we were ready for the transition. We actually have a really good relationship with the Google team.

We meet with them once a month and kind of share what we see in the wild, and they share what they’re doing on their end. And it’s really helpful and super collaborative. So yeah, companies have had lots of leeway to adapt.

So there are the people who saw it coming, wanted to be ready for it, kind of ahead of the game. And they were. And then, you know, there’s definitely a fair share of companies for whom it maybe kind of got real, like maybe in January or February or even just now, and they’re kind of realizing that they’ve got some catching up to do.

And so, you know, [00:20:00] a lot of the conversations that I’m having, because I talk with a lot of our customers pretty regularly, are just around turning on tracking for INP, understanding what it’s measuring, and, as importantly, understanding what it’s not capturing for you. And I can kind of go into that a little bit, if that’s something that…

Yeah. So it’s funny. There was all this hype around INP. I kind of jokingly started calling it the Barbie movie of performance metrics. I’ve been doing performance stuff for like 14, 15 years now, and I’ve never seen as much hype around a single metric as around INP.

It was released on March 12th, and if you were on social media, like tech social media, on March 12th, you were literally just scrolling and it was like, INP this, INP that. It would be really easy to take away from that, like, oh my gosh, this is the only metric that matters.

I just need to focus on INP and forget everything else. And a little bit of that did kind of trickle over to me through SpeedCurve and talking to customers. Some of the conversations I had were like, it’s okay, it’s just one metric among many. If your INP is really poor and you’ve had pretty good SEO ranking, like you’ve been in that top 10 for a while, and you get crawled, yeah, you might take a little bit of a hit from that for sure, but there’s room to recover. The other caveat around INP that people might not be aware of is that it’s a very narrow set of parameters, in terms of the cohorts that are being tracked, but it’s still a very large group. What I mean by that is INP is only supported in Chromium-based browsers on non-iOS [00:22:00] devices.

So what that means is that it’s not captured in other browsers, and even if you’re using a Chrome-type browser on an iOS device, it’s not captured there either. So it’s really important to know that and to look at your RUM data and see where your actual users are coming from, so that you can prioritize, like, how much do I actually need to care about this?

And I’m not saying you shouldn’t care, I’m just saying how much. So for example, I was speaking with a customer last week and they were asking about INP. It comes up on every call. And when we looked at their RUM data, we realized, okay, well, half your traffic is coming from iPhones. A little chunk is coming from the iPad. Another chunk is coming from Safari. So as soon as you saw that, it was like, okay, well, actually only about maybe 25, 30 percent of their traffic was coming from a Chrome [00:23:00] browser on a non-iOS device. So it’s still a pretty significant chunk of traffic, but, you know, not everything.
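The back-of-the-envelope audit Tammy describes, working out how much of your traffic can report INP at all, might look like this. The breakdown shape here is invented for illustration, not a real SpeedCurve or RUM API:

```javascript
// Estimate what share of traffic can report INP, given that INP is
// only captured by Chromium browsers on non-iOS devices.
function inpMeasurableShare(breakdown) {
  let measurable = 0;
  let total = 0;
  for (const { engine, os, sessions } of breakdown) {
    total += sessions;
    // Anything on iOS is excluded, even Chrome-branded browsers, since
    // they can't report INP.
    if (engine === 'chromium' && os !== 'ios') measurable += sessions;
  }
  return total === 0 ? 0 : measurable / total;
}

// Roughly the traffic mix Tammy describes for that customer:
const sessionsByBrowser = [
  { engine: 'webkit', os: 'ios', sessions: 50 },       // iPhone Safari
  { engine: 'webkit', os: 'ios', sessions: 10 },       // iPad / Chrome-branded on iOS
  { engine: 'webkit', os: 'macos', sessions: 10 },     // desktop Safari
  { engine: 'chromium', os: 'android', sessions: 25 },
  { engine: 'chromium', os: 'windows', sessions: 5 },
];
```

With that mix, only about 30 percent of sessions can report INP, matching the 25 to 30 percent figure in the conversation.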

So I guess the thing that I’ve been coaching people on is: definitely optimize for INP, because Google search cares about it and you want to make sure that you’re showing up well in Google search results. It’s still, I think, something like 80 percent of market share in terms of search.

Jason: Right.

Tammy: So that’s kind of the SEO side of things. But then on the tracking side of things, don’t assume that what you’re measuring in RUM is capturing all your user experiences, because it’s really, really not. And so you could have a really huge black box around a big chunk of your users.

So that’s quite an enormous caveat that I try to share with people.

Jason: Yeah, that makes sense. It’s one of the things, I guess, from a general industry perspective, that I’m hopeful some focus [00:24:00] on INP will help with: reducing the amount of JavaScript that is in pages and the amount to which we burden users with that. And I think you see that particularly on underpowered mobile devices. You know, iPhones generally are more expensive, higher performance; you get the mid-tier and low-tier Android devices, and they don’t have the CPU capacity to handle the amount of JavaScript that some sites are using, particularly sites that are built around single-page applications and sort of expecting the web browser to build the application on the fly. And those are actually the sites that I worry about the most when it comes to trying to fix INP, particularly a single-page application, maybe built in React, with a ton of JavaScript in it.

It might be hard to [00:25:00] make the transition to having something faster that doesn’t have delays. I don’t mean to be pessimistic, but it does seem like a bigger undertaking. And, you know, like you said, even in that example, it’s still like a quarter of their users who might be impacted, I guess you said a quarter were Chromium users, not necessarily Android Chrome. But there are a lot of Android users. Yeah.

Tammy: And a way to think about it is really that mobile INP right now is Android INP. So if you are tracking INP from mobile, that’s kind of what you’re

Jason: Yeah,

Tammy: getting, what you’re learning about.

It’s funny. Just as you said, I have an almost eight-year-old iPhone 7. And to your point about JavaScript, I can tell. I almost want to play a game with myself when I’m using an app or visiting a site and my phone starts to heat up: [00:26:00] how many scripts are on the site?

It’s like, I’m so sorry, CPU, you just keep doing your thing. You know, it’s kind of crazy.

Jason: So you mentioned that you’ve been sort of looking at INP from a different angle than I’ve really heard anyone else talking about it thus far, which is sort of researching and thinking about UX ramifications of INP.

Can you tell me a little bit about this? Like how do you see INP impacting UX?

Tammy: I guess the core thing that I’m focused on when I talk about INP with people, and when I investigate INP, is reminding people that it’s not an SEO metric, it’s a UX metric. Mm-Hmm. The purpose of it is to measure interactivity.

So, kind of just to what I said before, if you’re not thinking about it that way, then you’re not really going to be able [00:27:00] to communicate the importance of it as well as you might be able to with other people in your organization. Because, you know, when we talk about these TLAs, it’s devs, engineers, other folks who are kind of deep in the weeds.

We throw around these terms and they don’t mean anything outside our little circle. So if you want to actually get other folks in your org to care about any metric, it’s finding that usability slash business angle. So again, it’s kind of going back to what I talked about earlier, like that first principle of like, Okay, we have a metric that claims to be a UX metric.

How can we correlate it to something in the business? So making sure that you do that, so you can talk about the metric in business terms. So we can say that, you know, this page is really janky because it has a poor CLS score, because CLS measures how much the visual elements on the page are moving around; and it has poor interactivity because you’re clicking on things and they don’t [00:28:00] happen; and all of those things have a real impact on real people and how they feel when they’re using your site.

And so I think it’s really easy to kind of lose sight of that whenever we’re kind of like, I need to make sure I’ve got an INP of, you know, 150 milliseconds, things like that.

Jason: Yeah, I mean, this has been a challenge for a while, right? The idea, going back to the YSlow performance rules and the YSlow extension, right, where is the goal to have a better experience, or is the goal to get the top grade? The nice thing about Core Web Vitals is there seems to be a real emphasis on trying to create metrics that measure real user experience, to the degree that we can, which I don’t know whether we ever truly can, but, you know, get as close as possible. But I think it’s kind of human nature to just be [00:29:00] like, I want to pass these three things.

And call it good, or get a hundred on Lighthouse, or whatever it is. Yeah.

Tammy: To that point, one of the things that has come out of some of the research we’ve done looking at correlation charts is, again, if you’re just going to unquestioningly look at a threshold that Google or someone else has defined.

And I’m not knocking that. I think somebody has to create thresholds, if only as a starting point. It’s like somebody has to write that first draft and put it out there. And it’s a good place to start from. But if you’re only focused on that and you’re not actually looking at your own users and your own user behavior, you could be thinking that you’re fine when you’re actually not.

So what I mean by that is, as an example, I was looking at some correlation charts that Cliff created as part of his INP investigation. And one of the things that jumped out at me was looking at the swath of, say, conversion rate at [00:30:00] various INP times.

Jason: Mm-Hmm.

Tammy: We saw that. Oh, okay, great. As INP improves conversion rate also improves. That’s great.

Jason: Right.

Tammy: But it wasn’t always consistent with Google’s thresholds. So for example, for one site it was actually at a hundred milliseconds, the hundred-millisecond point, not the 200-millisecond point.

Jason: Wow.

Tammy: That was where we started to see a difference. For some other sites it was later on. For some it was pretty much dead on, which is kind of a testament to whoever at Google did this meta research to come to that 200-millisecond threshold. But it’s important to remember that these thresholds that are recommended to us are recommendations.

They’re based on looking at metadata, like aggregated data across a lot of different sites. Not your own site. So you could think 200 milliseconds is great, but actually for your own site, it would be better for you to move more of your users over to that hundred-millisecond point and see conversions go up overall for your [00:31:00] business.

There’s a term that we use in looking at correlation charts called the performance plateau. And that’s basically when conversion just sort of flattens out. So you see a decrease in conversions when your site goes from, like, two seconds to three seconds, you know, it gets a bit slower.

And then it kind of stays at that lower conversion rate for four or five more seconds. You could think that making your site a second faster for the swath of users who are getting five seconds for INP (sorry, that’s a terrible number, five seconds for Largest Contentful Paint), moving them over to four seconds, is going to make a difference.

It won’t. It’s not going to make a difference until you get them off that plateau and back into the zone where making an improvement improves conversions. I don’t know if I explained that well, and I don’t know if the air-drawn charts really help.

Jason: So it sounds like one of the recommendations you would make for website owners is to [00:32:00] take these performance metrics and try to correlate them with the key performance indicators that matter to their business, like conversion rate, things of that nature. Are there other recommendations you would make to people who might be worried about INP and what it means for their business?

Tammy: Yeah. So, I mean, the first one, the one you just said, is huge: just validating that it’s a meaningful metric for you.

So as I said earlier, looking at your RUM data and just seeing, do I even have a significant portion of users for whom this is super relevant? If you do, yes, validate it, and look to see what the threshold for your own site should be. So maybe it’s not 200 milliseconds; as I said, maybe it’s 100 milliseconds.

Then optimize for INP as much as you realistically can. As I said at the top of this conversation, your content matters too. So if you can get your metrics to a pretty good point, you can decide what’s good enough for that particular page, so that you’re not over-optimizing and not really making a difference. I would also say: don’t just measure Core Web Vitals.

Tammy: So, for example, if you care about understanding actual user experience and you know that you need to measure all of your users, then there are some other metrics that I would recommend checking out as well. For example, Long Tasks time. It’s broadly supported across browsers.

It’s measurable in synthetic tools and in real user monitoring tools, and what it measures is the slow JavaScript on your page. A long task is any JavaScript task that takes more than 50 milliseconds to execute and do all the stuff that it needs to do. So it’s a major cause of delayed responsiveness.

So it’s a pretty good proxy for INP.
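To make the idea concrete, in the browser long tasks are exposed through the Long Tasks API (`PerformanceObserver` with the standard `longtask` entry type). Here’s a minimal sketch; the `totalLongTaskTime` helper is ours, just for illustration:

```javascript
// Sum the durations of long tasks. Each entry is already a long task
// (i.e., its duration exceeded 50 ms), so we simply total them up.
// In a browser you would collect entries via the Long Tasks API:
//   new PerformanceObserver(list => record(list.getEntries()))
//     .observe({ type: 'longtask', buffered: true });
function totalLongTaskTime(entries) {
  return entries.reduce((sum, entry) => sum + entry.duration, 0);
}

// Example with hypothetical task durations in milliseconds:
const entries = [{ duration: 120 }, { duration: 75 }, { duration: 260 }];
console.log(totalLongTaskTime(entries)); // 455
```

Tracking this sum per page view gives a single “how much slow JavaScript ran” number that, as Tammy notes, works in browsers that don’t report INP.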

I would really [00:34:00] recommend tracking Long Tasks time. And then, as a companion to that, Total Blocking Time, which measures blocking JavaScript. It’s similar to long tasks, except that it’s only measurable in synthetic tools, and it looks at just the long tasks that are blocking rendering on your page.

The nice thing about measuring Total Blocking Time, and I know in SpeedCurve we do this, and maybe other tools do it as well, is that we actually show you all the long tasks on your page, so you see which specific scripts are those long-task slash blocking scripts, and what their blocking time actually is.

So those are really helpful to look at as companions to INP. And if you see discrepancies or big differences between the numbers that you’re getting for long tasks and Total Blocking Time versus what you’re seeing in INP, that discrepancy is probably because [00:35:00] long tasks and Total Blocking Time are measuring all of your users, not just, you know, those Chromium-based ones.
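For reference, Total Blocking Time is defined as the sum of each long task’s “blocking portion”: the time beyond the 50 ms threshold. A small sketch of that arithmetic (the function name is ours):

```javascript
// Total Blocking Time: for each long task, only the time beyond the
// 50 ms long-task threshold counts as "blocking".
const LONG_TASK_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurations) {
  return taskDurations
    .filter(d => d > LONG_TASK_THRESHOLD_MS)
    .reduce((sum, d) => sum + (d - LONG_TASK_THRESHOLD_MS), 0);
}

// A 120 ms task blocks for 70 ms, a 200 ms task for 150 ms,
// and a 40 ms task isn't a long task at all.
console.log(totalBlockingTime([120, 40, 200])); // 220
```

This is why TBT and total long task time track each other but aren’t identical: a page with many tasks just over 50 ms has a large long-task total but a small blocking total.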

Jason: Oh, oh, interesting. Right. Right. Okay. Yeah, that totally makes sense.

Tammy: And then I guess the final thing that I would really recommend to people, if they’re not familiar with the concept of performance budgets: using performance budgets to fight regressions is an amazing tool. Again, I talk to a lot of companies, and the one thing that the fastest sites, the companies that are renowned for being fast, like Pinterest, Etsy, and other companies like that, have in common is that they use some variation on performance budgets.

And so, a performance budget is simply you tracking a metric, or a few different metrics, over time, and looking at [00:36:00] what, say, the worst day you had over the last two to four weeks for that particular metric was. So, for example, say you actually achieved an INP of 200 milliseconds and you don’t want to get worse.

To make sure you know if you get worse, you set a performance budget within whatever monitoring tool you’re using, and you tell it to alert you when things get worse. I work with Tim Kadlec, whose work some of your audience might know. He’s a great performance consultant, and he uses a really great analogy of guardrails and breadcrumbs.

So he talks about using performance budgets and testing on each deploy, so that when you run a test on a deploy, it fires an alert letting you know when you’ve violated a performance budget. You just know right away. Those are the guardrails. And then tracking and having access to [00:37:00] all of your test data is the breadcrumbs, so that you can quickly triangulate, triage, figure out what went wrong, and fix it.

So performance budgets are an amazing tool if you’re not already using them. Oh my goodness, I could do a whole other podcast talk.
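At its core, the budget check Tammy describes is just a comparison of measured metrics against agreed thresholds. A minimal sketch of the kind of check a deploy step might run (metric names and budget values here are purely illustrative, not any particular tool’s API):

```javascript
// A minimal performance-budget check: compare measured metrics against
// budgets and return any that regressed past their threshold.
function checkBudgets(metrics, budgets) {
  return Object.entries(budgets)
    .filter(([name, limit]) => metrics[name] > limit)
    .map(([name, limit]) => ({ metric: name, limit, actual: metrics[name] }));
}

// Hypothetical budgets and a hypothetical test run:
const budgets = { inp: 200, totalBlockingTime: 300, scriptCount: 25 };
const metrics = { inp: 240, totalBlockingTime: 180, scriptCount: 31 };

// In CI you'd fail (or alert on) the deploy when any violation comes back.
console.log(checkBudgets(metrics, budgets));
// → inp (240 > 200) and scriptCount (31 > 25) are flagged
```

The guardrail is the alert on a non-empty result; the breadcrumbs are the per-deploy history of `metrics` that lets you trace when a number crossed its line.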

Jason: I was actually going to mention that. When I was doing some research for this podcast, I saw that you have some recent talks on performance budgets, and videos of them are online.

So if people are interested, they should check those out. Go to YouTube and search for Tammy and find those performance budget talks.

Tammy: Yeah, I’ve done a few talks about them, and it’s kind of been my main focus over the past few years. It makes me feel like I’m not doing enough, knowing that there’s this great tool, not just in SpeedCurve, like other tools have it as well.

Conceptually, there’s this great tool that you can use just to know that [00:38:00] things aren’t working anymore and that you’re not as fast as you used to be. You could even set performance budgets on things like the number of scripts on your page, or total JavaScript time, or total long task time, or Total Blocking Time. All of these things, you can create these guardrails around.

Jason: We use them even though our site’s pretty performant and we don’t have a lot of changes, but it’s helped us. Our site’s on WordPress, and Jetpack has, like, randomly started inserting things into our web page.

And then all of a sudden we see the numbers and we’re like, oh, what happened? We didn’t change anything. Then we go look and figure it out. So yeah, I totally agree. Well, thank you so much, Tammy. This was wonderful and incredibly helpful. And where can people find you?

Tammy: Oh, you can find me in a lot of places.

So I’m on Mastodon, Tammy Everts, or it might just be Tammy on Mastodon on the webperf server. You can find me on Twitter, I’m still calling it [00:39:00] Twitter, yes, @tameverts. And I have a personal site, tammyeverts.com. You can find me there as well and contact me through that if you have any questions. I love talking about performance, as you can tell.

Jason: Yes. Awesome. Well, thank you so much, Tammy. And uh, we’ll see you all soon.

Tammy: Thank you, Jason.