Webpack version:
1.10.x and 2.x
Please tell us about your environment:
OSX 10.x / Linux / Windows 10 [all]
Expected/desired behavior:
Highlight at build time any JavaScript chunks or bundles that are over a size threshold and can negatively impact web performance: load times, parse/compile cost and interactivity. A default performance budget could indicate if total chunk sizes for a page are over a limit (e.g. 250KB).
I'd love to see if we could make this a default and offer a config option for opting out. My concern with an opt-in is that the folks with the worst perf issues may not know to enable it (or to use it via a plugin, if this suggestion were deferred to one). These folks may also not be testing on mobile devices.
Optionally, highlight where better performance patterns might be helpful:
Current behavior:
Many of the apps bundled with Webpack that we trace ship a large, single bundle that ends up pegging the main thread and taking longer than it should for web apps to become interactive.
This isn't Webpack's fault; it's the culture around shipping large monolithic bundles. The situation gets particularly bad on mobile when trying these apps out on real devices.
If we could fix this, it would also make the Webpack + React (or Angular) stacks much stronger candidates for building fast web apps and Progressive Web Apps.
What is the motivation / use case for changing the behavior?
I recently dove into profiling a large set (180+) React apps in the wild, based on 470 responses we got from a developer survey. This included a mix of small to large scale apps.
I noted a few common characteristics:
- 83+% of them use Webpack for module bundling (17% are on Webpack 2)
- Many ship large monolithic JS bundles down to their users (0.5-1MB+). In most cases, this makes apps take well over 12.4 seconds to become interactive on real mobile devices (see Table 1), compared to the 4 seconds we see for apps properly using code-splitting and keeping their chunks small. They're also far slower on desktop than they should be: more JS means more parse/execution time spent at the JS engine level.
- Developers either don't use code-splitting, or use it while still shipping down large chunks of JS in many cases. More on this soon.
Table 1: Summary of time-to-interactive scoring (individual TTI was computed by Lighthouse)
Condition | Network latency | Download Throughput | Upload Throughput | TTI average React+Webpack app | TTI smaller bundles + code-splitting |
---|---|---|---|---|---|
Regular 2G | 150ms | 450kbps | 150kbps | 14.7s | 5.1s |
Regular 3G | 150ms | 1.6Mbps | 750kbps | 12.4s | 4s |
Regular 4G | 20ms | 4Mbps | 3Mbps | 8.8s | 3.8s |
Wifi | 2ms | 30Mbps | 15Mbps | 6s | 3.4s |
We generally believe that splitting up your work into smaller chunks can get you closer to being interactive sooner, in particular when using HTTP/2. Only serving down the code a user needs for a route is just one pattern here (e.g. PRPL) that we've seen help a great deal.
Examples of this include the great work done by Housing.com and Flipkart.com. They use Webpack and are getting those real nice numbers in the last column thanks to diligence with perf budgets and code-splitting.
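For concreteness, below is a minimal sketch of the kind of route-level split point these patterns rely on, using the `System.import()` API available in Webpack 2 (the `./routes/profile` path is hypothetical):

```js
// Creates a split point: Webpack emits './routes/profile' (plus its
// dependencies) as a separate chunk, fetched only when the route is hit.
function loadProfileRoute() {
  return System.import('./routes/profile') // returns a Promise in Webpack 2
    .then(function (route) {
      return route.default;
    });
}
```

With Webpack 1, the equivalent split point uses `require.ensure()` instead.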
What impacts a user's ability to interact with an app?
A slow time to being interactive can be attributed to a few things:
- Client is slow, i.e. something is keeping the main thread busy. Large JS bundles will take longer to compile and run. There may be other issues at play, but large JS bundles will definitely peg the main thread. Staying fast by shipping the smallest amount of JS needed to get a route/page interactive is a good pattern, especially on mobile, where large bundles will take even longer to load/parse/execute.
- Server/backend may be slow to respond
- Suboptimal back and forth between the server and client (lots of waterfall requests) that form a sequence of network busy -> CPU idle -> CPU busy -> network idle, and so on.
If we look at tools like performancebudget.io, targeting loading within RAIL's <3s on 3G would place our total JS budget at a far more conservative ~106KB once you factor in the other resources a typical page might include (like stylesheets and images). The less conservative 250KB is an upper-bound estimate.
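To show the rough arithmetic behind that figure, here's a back-of-the-envelope sketch; the `otherResourcesKB` split is an illustrative assumption, not the exact performancebudget.io model:

```js
// Back-of-the-envelope budget math using the Regular 3G row from Table 1.
var downloadKBps = (1.6 * 1000 * 1000) / 8 / 1024; // 1.6Mbps ≈ 195 KB/s

// RAIL's load goal: get the page delivered in roughly 3 seconds.
var totalPageBudgetKB = Math.round(downloadKBps * 3); // ≈586 KB for everything

// Once HTML, CSS, images and fonts take their (typically larger) share,
// scripts are left with roughly the ~106KB figure quoted above. The exact
// split depends on the page, which is what performancebudget.io models.
var otherResourcesKB = 480; // illustrative figure for a typical page
console.log((totalPageBudgetKB - otherResourcesKB) + 'KB left for JS'); // ~106KB
```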
Code-splitting and confusion
A surprising 58+% of respondents said they were using code-splitting. We also profiled just this subset and found that their average time to being interactive was 12.3 seconds (remember that overall, the average TTI we saw was 12.4s with or without splitting). So, what happened?
Digging into this data further, we discovered two things.
- Folks that thought they were code-splitting actually weren't, and there was a terminology breakdown somewhere (e.g. maybe they thought using `CommonsChunkPlugin` to 'split' vendor code from main chunks was code-splitting? A sketch of that setup follows this list.)
- Folks that definitely were code-splitting had zero insight into chunk size impact on web performance. We saw lots of people with chunk sizes of 400, 500...all the way up to 1200KB of script, who were then also lazy-loading in even more script.
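For reference, a minimal sketch of the vendor-chunk setup being described (Webpack 2-style options; the entry path and vendor modules are hypothetical). It separates vendor code from app code, but nothing is lazy-loaded, so every page still downloads all of the script up front:

```js
// webpack.config.js
var path = require('path');
var webpack = require('webpack');

module.exports = {
  entry: {
    app: './src/index.js',          // hypothetical app entry
    vendor: ['react', 'react-dom']  // hypothetical vendor modules
  },
  output: {
    filename: '[name].bundle.js',
    path: path.join(__dirname, 'dist')
  },
  plugins: [
    // Pulls modules shared with the 'vendor' entry out of app.bundle.js
    // into vendor.bundle.js. Both chunks still load on every page, so this
    // improves caching but isn't code-splitting in the lazy-loading sense.
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor' })
  ]
};
```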
Keep in mind: it's entirely possible to ship JS-heavy apps with Webpack that become interactive quickly. If Flipkart can hit interactivity in under 5 seconds, we can definitely bring this number down for the average Webpack user too.
Note: if you absolutely need a large bundle of JS for a route/page to be useful at all, our advice is to just serve it in one bundle rather than code-split. At an engine level this is cheaper to parse. In most cases, devs aren't going to need all that JS for just one page/view/route so splitting helps.
What device was used in our lab profiling?
A Nexus 5X with a real network connection. We also ran tests on emulated setups with throttled CPU and network (2G, 3G, 4G, Wifi). One key thing to note: if this proposal were implemented, it could benefit load times for web apps on all hardware, regardless of whether it's Android or iOS. Fewer bytes shipped down the wire means a little more love for users' data plans.
The time-to-interactive definition we use in Lighthouse is the moment after `DOMContentLoaded` where the main thread is available enough to handle user input. We look for the first 500ms window where estimated input latency is <50ms at the 90th percentile.
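Paraphrased as code, the heuristic looks roughly like the sketch below. This is illustrative, not the actual Lighthouse implementation; `samples` is a hypothetical array of estimated-input-latency readings from a trace:

```js
// Each sample: { time: ms since navigation start, p90LatencyMs: estimated
// 90th-percentile input latency if input arrived around this point }.
function findTTI(samples, domContentLoadedTime) {
  for (var i = 0; i < samples.length; i++) {
    var start = samples[i].time;
    if (start < domContentLoadedTime) continue; // only consider post-DCL time

    // Gather every sample inside the 500ms window starting here.
    var windowed = samples.filter(function (s) {
      return s.time >= start && s.time < start + 500;
    });

    // The first window where all latency estimates stay under 50ms wins.
    var responsive = windowed.length > 0 && windowed.every(function (s) {
      return s.p90LatencyMs < 50;
    });
    if (responsive) return start;
  }
  return null; // the main thread never settled within the trace
}
```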
Suppressing the feature
Users could opt out of the feature through their Webpack configuration (we can bikeshed over that). If part of the problem is that most devs don't run their dev configs optimized for production, it may be worth considering enabling this feature when the `-p` production flag is used; however, I'm unsure how often that flag actually gets passed. Right now it's unclear if we just want to define a top-level `performanceHints` config vs a `performance` object:
```js
performance: {
  hints: true,
  maxBundleSize: 250,
  warnAtPercent: 80
}
```
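To make the two shapes concrete, here's how each might look in a full config. Both are proposals, neither is a shipped Webpack option, and a real config would pick one:

```js
// Option A: a single top-level flag
module.exports = {
  // ...entry/output as usual...
  performanceHints: true
};

// Option B: a performance object with room for tuning
module.exports = {
  // ...entry/output as usual...
  performance: {
    hints: true,        // turn the warnings on
    maxBundleSize: 250, // budget in KB, per the 250KB figure above
    warnAtPercent: 80   // start warning at 80% of the budget
  }
};
```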
Optional additions to proposal
Going further, we could also consider informing you if:
- You weren't taking advantage of patterns like code-splitting (e.g. not using `require.ensure()` or `System.import()`). This could be expanded to also provide suggestions on other perf plugins (like `CommonsChunkPlugin`).
- What if Webpack opted for code-splitting by default as long as you were using `System.import()` or `require.ensure()`? The minimum config would then be just the minimum requirements, a.k.a. the entry/output we have today. (A `require.ensure()` sketch follows this list.)
- What if it could guide you through setting up code-splitting and patterns like PRPL if it detected perf issues? i.e. at least install the right Webpack plugins and get your config set up, or point you to the docs to finish getting set up?
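For completeness, here's the `require.ensure()` counterpart to the `System.import()` sketch earlier, since it's the Webpack 1 split-point API such a check would look for (the `./reports/chart` path is hypothetical):

```js
// Creates a split point: Webpack bundles './reports/chart' into a separate
// chunk that's fetched on demand the first time this code runs.
require.ensure(['./reports/chart'], function (require) {
  var chart = require('./reports/chart');
  chart.render();
});
```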
Thanks to Sean Larkin, Shubhie Panicker, Gray Norton and Rob Wormald for reviewing this RFC before I submitted it.