Tags: speed



Thursday, January 8th, 2026

The Main Thread Is Not Yours — Den Odell

Every millisecond you spend executing JavaScript is a millisecond the browser can’t spend responding to a click, updating a scroll position, or acknowledging that the user did just try to type something. When your code runs long, you’re not causing “jank” in some abstract technical sense; you’re ignoring someone who’s trying to talk to you.

This is a great way to think about client-side JavaScript!

Also:

Before your application code runs a single line, your framework has already spent some of the user’s main thread budget on initialization, hydration, and virtual DOM reconciliation.

Wednesday, October 29th, 2025

I Built the Same App 10 Times: Evaluating Frameworks for Mobile Performance | Loren Stewart

A very, very deep dive into a like-for-like comparison of JavaScript frameworks. The takeaway:

Nuxt demonstrates that established “big three” frameworks can achieve next-gen performance when properly configured. Vue’s architecture allows competitive mobile web performance while maintaining a mature ecosystem. React and Angular show no path to similar results.

And the real takeaway:

Mobile is the web. These measurements matter because mobile web is the primary internet for billions of people. If your app is accessible via URL, people will use it on phones with cellular connections. Optimizing for desktop and hoping mobile is good enough is backwards. The web is mobile. Build for that reality.

Tuesday, October 21st, 2025

“AI is inevitable” is bullshit · Eric Eggert

LLMs are useful when you need a compromise between fast and good. You will never get a good outcome fast.

I’m afraid we are settling into a status of good enough when using “AI,” which is especially hurtful for accessibility.

Saturday, August 30th, 2025

The Invisibles

When I was talking about monitoring web performance yesterday, I linked to the CrUX data for The Session.

CrUX is a contraction of Chrome User Experience Report. CrUX just sounds better than CUER.

It’s data gathered from actual Chrome users worldwide. It can be handy as part of a balanced performance-monitoring diet, but it’s always worth remembering that it only shows a subset of your users: those on Chrome.

The actual CrUX data is imprisoned in some hellish Google interface, so some kindly people have put more humane interfaces on it. I like Calibre’s CrUX tool as well as Treo’s.

What’s nice is that you can look at the numbers for any reasonably popular website, not just your own. Lest I get too smug about the performance metrics for The Session, I can compare them to the numbers for Wikipedia or the BBC. Both of those sites are made by people who prioritise speed, and it shows.

If you scroll down to the numbers on navigation types, you’ll see something interesting. Across the board, whether it’s The Session, Wikipedia, or the BBC, the BFcache—back/forward cache—is used around 16% to 17% of the time. That’s when people hit the back button (or the forward button).

Unless you do something to stop them, browsers will make sure that those navigations are super speedy. You might inadvertently be sabotaging the BFcache if you’re sending a Cache-Control: no-store header or if you’re using an unload event handler in JavaScript.
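
If you do need to run code as the user leaves a page, the pagehide event is a BFcache-friendly alternative to unload. Here’s a minimal sketch (the /analytics endpoint is a made-up placeholder):

// An unload handler makes the page ineligible for the BFcache in most browsers:
// window.addEventListener('unload', () => { /* … */ });

// pagehide fires in the same situations without blocking the BFcache
window.addEventListener('pagehide', (event) => {
  // event.persisted is true if the page is about to enter the BFcache
  navigator.sendBeacon('/analytics', JSON.stringify({ persisted: event.persisted }));
});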

I guess it’s unsurprising the BFcache numbers are relatively consistent across three different websites. People are people, whatever website they’re browsing.

Where it gets interesting is in the differences. Take a look at pre-rendering. It’s 4% for the BBC and just 0.4% for Wikipedia. But on The Session it’s a whopping 35%!

That’s because I’m using speculation rules. They’re quite straightforward to implement and they pair beautifully with full-page view transitions for a slick, speedy user experience.
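
For what it’s worth, opting into cross-document view transitions is itself just a few declarative lines of CSS in both the outgoing and incoming pages:

@view-transition {
  navigation: auto;
}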

It doesn’t look like Wikipedia or the BBC are using speculation rules at all, which kind of surprises me.

Then again, because they’re a hidden technology, I can understand why they’d slip through the cracks.

On any web project, I think it’s worth having a checklist of The Invisibles—things that aren’t displayed directly in the browser, but that can make a big difference to the user experience.

Some examples:

Are you sending sensible caching headers?
Is anything (a Cache-Control: no-store header, say, or an unload event handler) sabotaging the BFcache?
Are you using speculation rules?
Are your files being served with the right MIME types?

If you’ve got a checklist like that in place, you can at least ask “Whose job is this?” All too often, these things are missing because there’s no clarity on who’s responsible for them. They’re sorta back-end and sorta front-end.

Friday, August 29th, 2025

Databasing

A few years back, Craig wrote a great piece called Fast Software, the Best Software:

Speed in software is probably the most valuable, least valued asset. To me, speedy software is the difference between an application smoothly integrating into your life, and one called upon with great reluctance.

Nelson Elhage said much the same thing in his reflections on software performance:

I’ve really come to appreciate that performance isn’t just some property of a tool independent from its functionality or its feature set. Performance — in particular, being notably fast — is a feature in and of its own right, which fundamentally alters how a tool is used and perceived.

Or, as Robin put it:

I don’t think a website can be good until it’s fast.

Those sentiments underpin The Session. Speed is as much a priority as usability, accessibility, privacy, and security.

I’m fortunate in that the site doesn’t have an underlying business model at odds with these priorities. I’m under no pressure to add third-party code that would track users and slow down the website.

When it comes to making fast websites, most of the obstacles are put in place by front-end development, mostly JavaScript. I’ve been pretty ruthless in my pursuit of speed on The Session, removing as much JavaScript as possible. On the bigger pages, the bottleneck now is DOM size rather than parsing and executing JavaScript. As bottlenecks go, it’s not the worst.

But even with all my core web vitals looking good, I still have an issue that can’t be solved with front-end optimisations. Time to first byte (or TTFB if you’d rather use an initialism that takes just as long to say as the words it’s replacing).

When it comes to reducing the time to first byte, there are plenty of factors that are out of my control. But in the case of The Session, something I do have control over is the server set-up, specifically the database.

Now I could probably solve a lot of my speed issues by throwing money at the problem. If I got a bigger, better server with more RAM and CPUs, I’m pretty sure it would improve the time to first byte. But my wallet wouldn’t thank me.

(It’s still worth acknowledging that this is a perfectly valid approach when it comes to back-end optimisation that isn’t available on the front end; you can’t buy all your users new devices.)

So I’ve been spending some time really getting to grips with the MySQL database that underpins The Session. It was already normalised and indexed to the hilt. But perhaps there were server settings that could be tweaked.

This is where I have to give a shout-out to Releem, a service that is exactly what I needed. It monitors your database and then over time suggests configuration tweaks, explaining each one along the way. It’s a seriously good service that feels as empowering as it is useful.

I wish I could afford to use Releem on an ongoing basis, but luckily there’s a free trial period that I could avail of.

Thanks to Releem, I was also able to see which specific queries were taking the longest. There was one in particular that had always bothered me…

If you’re a member of The Session, then you can see any activity related to something you submitted in the past. Say, for example, that you added a tune or an event to the site a while back. If someone else comments on that, or bookmarks it, then that shows up in your “notifications” feed.

That’s all well and good but under the hood it was relying on a fairly convoluted database query to a very large table (a table that’s effectively a log of all user actions). I tried all sorts of query optimisations but there always seemed to be some combination of circumstances where the request would take ages.

For a while I even removed the notifications functionality from the site, hoping it wouldn’t be missed. But a couple of people wrote to ask where it had gone so I figured I ought to reinstate it.

After exhausting all the technical improvements, I took a step back and thought about the purpose of this particular feature. That’s when I realised that I had been thinking about the database query too literally.

The results are ordered in reverse chronological order, which makes sense. They’re also chunked into groups of ten, which also makes sense. But I had allowed for the possibility that you could navigate through your notifications back to the very start of your time on the site.

But that’s not really how we think of notifications in other settings. What would happen if I were to limit your notifications only to activity in, say, the last month?

Boom! Instant performance improvement by orders of magnitude.
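
The shape of the change is tiny. Here’s a hypothetical sketch (the table and column names are invented for illustration):

-- Before: trawling through a user’s entire history
SELECT * FROM activity_log
WHERE owner_id = ?
ORDER BY created_at DESC
LIMIT 10;

-- After: only the last month of activity is in play
SELECT * FROM activity_log
WHERE owner_id = ?
AND created_at >= NOW() - INTERVAL 1 MONTH
ORDER BY created_at DESC
LIMIT 10;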

I guess there’s a lesson there about switching off the over-analytical side of my brain and focusing on actual user needs.

Anyway, thanks to the time I’ve spent honing the database settings and optimising the longest queries, I’ve reduced the latency by quite a bit. I’m hoping that will result in an improvement to the time to first byte.

Time—and monitoring tools—will tell.

Tuesday, August 5th, 2025

It’s time for modern CSS to kill the SPA - Jono Alderson

SPAs were a clever solution to a temporary limitation. But that limitation no longer exists.

Use modern server rendering. Use actual pages. Animate with CSS. Preload with intent. Ship less JavaScript.

Saturday, July 19th, 2025

I’m more proud of these 128 kilobytes than anything I’ve built since | by Mike Hall | Jul, 2025 | Medium

I don’t normally link to articles on Medium—I respect you too much—and I do wish this were written on Mike Hall’s own site, but this is just too good not to share.

And don’t dismiss this as a nostalgic case study from the past:

At no point did the constraints make the product feel compromised. Users on modern devices got a smooth experience and instant feedback, while those on older devices got fast, reliable functionality. Users on feature phones got the same core experience without the bells and whistles.

The constraints forced us to solve problems in ways we wouldn’t have considered otherwise. Without those constraints, we could have just thrown bytes at the problem, but with them every feature had to justify itself. Core functionality had to work everywhere, and without JavaScript crutches proper markup became essential.

This experience changed how I approach design problems. Constraints aren’t a straitjacket, keeping us from doing our best work; they are the foundation that makes innovation possible. When you have to work within severe limitations, you find elegant solutions that scale beyond those limitations.

Tuesday, July 15th, 2025

(optional.is) Latency and the Sea

Brian’s excellent comparison of network latency and the nervous system of animals:

If an earthquake occurs in California USA, halfway around the globe someone can find out faster than a blue whale detects something has touched its tail.

Friday, January 10th, 2025

Website Speed Test

Here’s a handy free tool from Calibre that’ll give your website a performance assessment.

Wednesday, October 16th, 2024

content-visibility in Safari

Earlier this year I wrote about some performance improvements to The Session using the content-visibility property in CSS.

If you say content-visibility: auto you’re telling the browser not to bother calculating the layout and paint for an element until it needs to. But you need to combine it with the contain-intrinsic-block-size property so that the browser knows how much space to leave for the element.
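
In practice, the pairing looks something like this (the selector and the height are illustrative):

.sheetmusic {
  content-visibility: auto;
  contain-intrinsic-block-size: 380px;
}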

I mentioned the browser support:

Right now content-visibility is only supported in Chrome and Edge. But that’s okay. This is a progressive enhancement. Adding this CSS has no detrimental effect on the browsers that don’t understand it (and when they do ship support for it, it’ll just start working).

Well, that’s happened! Safari 18 supports content-visibility. I didn’t have to do a thing and it just started working.

But …I think I’ve discovered a little bug in Safari’s implementation.

(I say I think it’s a bug with the browser because, like Jim, I’ve made the mistake in the past of thinking I had discovered a browser bug when in fact it was something caused by a browser extension. And when I say “in the past”, I mean yesterday.)

So here’s the issue: if you apply content-visibility: auto to an element that contains an SVG, and that SVG contains a text element, then Safari never paints that text to the screen.

To see an example, take a look at the fourth setting of Cooley’s reel on The Session archive. There’s a text element with the word “slide” (actually the text is inside a tspan element inside a text element). On Safari, that text never shows up.

I’m using a link to the archive of The Session I created recently rather than the live site because on the live site I’ve removed the content-visibility declaration for Safari until this bug gets resolved.

I’ve also created a reduced test case on Codepen. The only HTML is the element containing the SVGs. The only CSS—apart from the content-visibility stuff—is just a little declaration to push the content below the viewport so you have to scroll it into view (which is when the bug happens).
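
A sketch of that kind of reduced test case (the dimensions are arbitrary):

<div style="content-visibility: auto; contain-intrinsic-block-size: 120px; margin-top: 150vh;">
  <svg viewBox="0 0 300 100" width="300" height="100">
    <text x="10" y="50"><tspan>slide</tspan></text>
  </svg>
</div>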

I’ve filed a bug report. I know it’s a fairly niche situation, but there are some other issues with Safari’s implementation of content-visibility so it’s possible that they’re all related.

Wednesday, June 26th, 2024

Pivoting From React to Native DOM APIs: A Real World Example - The New Stack

One dev team made the shift from React’s “overwhelming VDOM” to modern DOM APIs. They immediately saw speed and interaction improvements.

Yay! But:

…finding developers who know vanilla JavaScript and not just the frameworks was an “unexpected difficulty.”

Boo!

Also, if you have a similar story to tell about going cold turkey on React, you should share it with Richard:

If you or your company has also transitioned away from React and into a more web-native, HTML-first approach, please tag me on Mastodon or Threads. We’d love to share further case studies of these modern, dare I say post-React, approaches.

Tuesday, May 21st, 2024

Speculation rules

There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.

Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).

The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned link rel="prerender".

Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a script element with a type value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.

That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.

I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like srcset than source: you give the browser some options, but ultimately you can’t force it to do anything.

I’ve implemented this over on The Session. There’s a JSON file called speculationrules.json with the simplest of suggestions:

{
  "prerender": [{
    "where": {
        "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}

The eagerness value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).

I still need to point to that JSON file from my HTML. Usually this would be done with something like a link element, but for this particular API, I can send a response header instead:

Speculation-Rules: "/speculationrules.json"

I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.

Here’s the PHP I added to send that header:

header('Speculation-Rules: "/speculationrules.json"');

There’s one extra thing I had to do. The JSON file needs to be served with a MIME type of “application/speculationrules+json”. Here’s how I set that up in the .conf file for The Session on Apache:

<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-type application/speculationrules+json
  </FilesMatch>
</IfModule>

A bit of a faff, that.

You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.

Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to gauge how much of a practical difference it makes in this case, but it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.

Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!

Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re idempotent. For everything else, we’ve got forms.
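
If you really can’t avoid those links for now, the where clause does support negation. A sketch, with a hypothetical /logout path excluded:

{
  "prerender": [{
    "where": {
      "and": [
        { "href_matches": "/*" },
        { "not": { "href_matches": "/logout" } }
      ]
    },
    "eagerness": "moderate"
  }]
}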

Wednesday, April 17th, 2024

Faster Connectivity !== Faster Websites - Jim Nielsen’s Blog

The bar to overriding browser defaults should be way higher than it is.

Amen!

Tuesday, April 16th, 2024

Standing still - a performance tinker | Trys Mudford

What Trys describes here mirrors my experience too—it really is worth occasionally taking a little time to catch the low-hanging fruit of your site’s web performance (and accessibility):

I’ve shaved nearly half a megabyte off the page size and improved the accessibility along the way. Not bad for an evening of tinkering.

Tuesday, March 26th, 2024

Fidinpamp

If you’re a fan of gratuitous initialisms, you’ll love Google’s core web vitals. Just get a load of the obfuscation in the important-sounding metrics like CLS, FCP, LCP, and more.

To be fair to Google, this is a problem in the web performance world in general. Practitioners prefer to talk about TTFB rather than “time to first byte” even though both contain exactly the same number of syllables.

The big news in the web performance community this month is the arrival of a new initialism. INP sounds like one of those pseudo-scientific psychological profiles, but it’s meant to stand for Interaction to Next Paint (even if they were to swear off pointless initialisms, you’d still have to pry Pointless Capitalisation from Google’s cold dead hands).

This new metric is a welcome one. It’s replacing first input delay. Sorry, First Input Delay, or FID, one of the few web vital initialisms that can be spoken as a word, making it a true acronym (fortunately fid’s successor, inp, also works as an acronym).

First Input Delay has long outstayed its welcome. It was always an outlier in the core web vitals. It didn’t seem to measure anything actually useful. I know it sounds like it’s measuring the delay until the user can interact with a web page, but when you dive into what it actually does, it’s a mess:

FID measures the time from when a user first interacts with a page (that is, when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to begin processing event handlers in response to that interaction.

See that word “begin” in there? It’s doing a lot of work. First Input Delay doesn’t measure the lag between the user interaction and the browser response; it only measures the lag between the user interaction and the browser beginning to respond. The actual response could take ages, but that lag doesn’t get measured. Unlike the other core web vitals, this metric is very far removed from what actually matters to the user’s experience.

What the fid where they thinking? How the fid did this measurement ever get included in core web vitals in the first place?

Well, feel free to take what I’m about to say as pure gossip, but I have my sources, I trust ’em, and no, I’m not going to reveal ’em…

It’s because of AMP.

Remember Google AMP? An acronym so pointless they eventually just forgot it ever stood for anything?

The AMP project ended up doing incredible damage to Google’s developer relations. By colluding with the search team to privilege the appearance of AMP pages in the top news carousel, Google effectively blackmailed the entire publishing industry into using their format.

In the end, it didn’t work. It was a shit format. All they did was foster resentment and animosity:

AMP seems to have faded away. Most publishers have started dropping support, and even Google doesn’t seem to care much anymore.

It turns out that Google search wasn’t the only team infected by AMP. The core web vitals team also had to play ball.

Originally they had a genuinely useful metric for measuring the lag between input and response. But guess which pages did terribly? That’s right: AMP pages.

Rather than ship an actually-useful measurement, the core web vitals team instead had to include the broken First Input Delay, brainchild of a certain someone on the AMP team.

Now it all makes sense.

So good riddance to FID. Welcome to INP. And here’s hoping it won’t be much longer till we’re finally burying AMP.

Tuesday, February 27th, 2024

JavaScript Bloat in 2024 @ tonsky.me

This really is a disgusting, exclusionary state of affairs.

I hate to be judgy, but I honestly wonder how the people behind some of these decisions can call themselves web developers.

Thursday, February 22nd, 2024

PageSpeed Insights bookmarklet

I’m a little obsessed with web performance. I like being able to check a page’s core web vitals quickly and easily.

Four years ago, I made a Lighthouse bookmarklet. Whatever web page you were on, when you clicked on the bookmarklet you’d get the Lighthouse results for that page. Handy!

It doesn’t work anymore. This is probably because Google are in the loop. Four years is a pretty good innings for anything involving that company.

I kid (mostly). Lighthouse itself is still going strong, despite being a Google product. But the bookmarklet needs updating.

Rather than just get Lighthouse results, I figured that the full PageSpeed Insights results would be even better. If your website is in the Chrome UX Report, you get to see those CrUX details too.

So here’s the updated bookmarklet:

PageSpeed Insights

Drag that up to your desktop browser’s bookmarks toolbar. Press it whenever you want to test the page you’re on.
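
For the curious, the bookmarklet is just a javascript: URL along these lines (assuming PageSpeed Insights accepts the page address as a url query parameter):

javascript:void(window.open('https://pagespeed.web.dev/analysis?url=' + encodeURIComponent(document.location.href)))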

Monday, February 19th, 2024

Speedier tunes

I wrote a little while back about improving performance on The Session by reducing runtime JavaScript in favour of caching on the server. This is on the pages for tunes, where the SVGs for the sheetmusic are now inlined instead of being generated on the fly.

It worked. But I also wrote:

A page like that with lots of sheetmusic and plenty of comments is going to have a hefty page weight and a large DOM size. I’ve still got a fair bit of main-thread work happening, but now the bulk of it is style and layout, whereas previously I had the JavaScript overhead on top of that.

Take a tune like Out On The Ocean. It has 27 settings. That’s a lot of SVG markup that needs to be parsed, styled and rendered, even if it’s inline.

Then I remembered a very handy CSS property called content-visibility:

It enables the user agent to skip an element’s rendering work (including layout and painting) until it is needed — which makes the initial page load much faster.

Sounds great! But there are two gotchas.

The first gotcha is that if a browser doesn’t paint the element, it doesn’t know how much space the element should take up. So you need to provide dimensions. At the very least you need to provide a height value. Otherwise when the element comes into view and gets rendered, it pushes down on the content below it. You’d see a sudden jump in the scrollbar position.

The solution is to provide a value for contain-intrinsic-size. If your content is dynamic—from, say, a CMS—then you’re out of luck. You don’t know how long the content is.

Luckily, in my case, I could take a stab at it. I know how many lines of sheetmusic there are for each tune setting. Each line takes up roughly the same amount of space. If I multiply that amount of space by the number of lines then I’ve got a pretty good approximation of the height of the sheetmusic. I apply this with the contain-intrinsic-block-size property.
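
In PHP, that calculation might look something like this (the per-line height is a hypothetical figure; $lines would come from the database):

// $lines holds the number of lines of sheetmusic for this setting
$line_height = 190; // hypothetical approximate height of one line, in pixels
$intrinsic_block_size = $lines * $line_height;
echo 'contain-intrinsic-block-size: ' . $intrinsic_block_size . 'px;';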

So each piece of sheetmusic has an inline style attribute with declarations like this:

content-visibility: auto;
contain-intrinsic-block-size: 380px;

It works a treat. I did a before-and-after check with PageSpeed Insights on the page for Out On The Ocean. The “style and layout” part of the main thread work went down considerably. Total blocking time went from more than 600 milliseconds to less than 400 milliseconds.

Not a bad result for a little bit of CSS!

I said there was a second gotcha. That’s browser support.

Right now content-visibility is only supported in Chrome and Edge. But that’s okay. This is a progressive enhancement. Adding this CSS has no detrimental effect on the browsers that don’t understand it (and when they do ship support for it, it’ll just start working). I’ve said it before and I’ll say it again: the forgiving error-parsing in HTML and CSS is a killer feature of the web. Browsers just ignore what they don’t understand. That’s what makes progressive enhancement like this possible.

And actually, there’s something you can do for all browsers. Even browsers that don’t support content-visibility still understand containment. So they’ll understand contain-intrinsic-size. Pair that with a contain declaration like this to tell the browser that this chunk of content isn’t going to reflow or get repainted:

contain: layout paint;

Here’s what MDN says about contain:

The contain CSS property indicates that an element and its contents are, as much as possible, independent from the rest of the document tree. Containment enables isolating a subsection of the DOM, providing performance benefits by limiting calculations of layout, style, paint, size, or any combination to a DOM subtree rather than the entire page.

So if you’ve got a chunk of static content, you might as well apply contain to it.

Again, not bad for a little bit of CSS!

Wednesday, January 31st, 2024

SpeedCurve | The psychology of site speed and human happiness

Tammy takes a deep dive into our brains to examine the psychology of web performance. It opens with this:

If you don’t consider time a crucial usability factor, you’re missing a fundamental aspect of the user experience.

I wish that more UX designers understood that!

Tuesday, October 24th, 2023

Ship Faster by Building Design Systems Slower | Big Medium

Josh mashes up design systems and pace layers, like Mark did a few years back. With this mindset, if your product interface and design system are in sync, that’s not good—either your product is moving too slow or your design system is moving too fast.

The job of the design system team is not to innovate, but to curate. The system should provide answers only for settled solutions: the components and patterns that don’t require innovation because they’ve been solved and now standardized. Answers go into the design system when the questions are no longer interesting—proven out in product. The most exciting design systems are boring.