Better HTTP/2 Prioritization for a Faster Web

by zspitzer on 5/14/2019, 1:03 PM with 50 comments

by pornel on 5/14/2019, 4:38 PM

The big take-away from this is that "HTTP/2" is not the same thing everywhere. Quality of implementation matters. We didn't see so much variation in HTTP/1, because servers had almost no control, and clients were opening multiple connections, so even bad prioritization was hidden by TCP-level parallelism.

In HTTP/2 we've reached a good level of interoperability, but bolting HTTP/2 onto a server architected for HTTP/1 is not enough. There is still room for optimization and maturity.

by skybrian on 5/14/2019, 3:32 PM

It seems like this is another example of every aspect of web development getting more complicated, favoring the larger players. On the other hand, it's good that in this case, any website could just buy the results of that expertise.

by 3xblah on 5/14/2019, 6:15 PM

"Web pages are made up of dozens (sometimes hundreds) of separate resources that are loaded and assembled by the browser into the final displayed content."

Could that be the reason that the web needs to be "faster" (despite tremendous advances in CPU, storage, bandwidth and network speeds)?

The dozens (sometimes hundreds) of separate resources are loaded by default.

What is their purpose? Where do they come from? Are all of them necessary?

What if an advanced user could tell the browser to only load certain resources from certain sources?

For example, maybe skip certain ads and tracking, as specified by the user.

Perhaps do not load the Facebook "Like" buttons (images), but load all other images.

Control exactly which JavaScript files to load.

In addition to the options that browsers now provide, provide more fine-grained controls.

Could that make the web faster?

A bonus would be if these user-defined, fine-grained browser settings could be saved in a portable, interoperable format, e.g. to external media, in addition to being able to save them to "the cloud" (which may be servers run by advertising-supported browser authors). Browser authors do not need to know which resources users may wish to block.
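
Purely as a sketch of what such a portable format could look like (the type names, fields, and example rules below are hypothetical, not any existing browser or blocker format): a small TypeScript definition plus a matcher that decides whether a resource should load.

```typescript
// Hypothetical portable policy describing which resources a browser should
// skip. All names and fields here are illustrative, not a real format.
interface ResourceRule {
  hostPattern: string;                                  // substring matched against the resource host
  types: Array<"script" | "image" | "font" | "other">;
  action: "block" | "allow";
}

interface PortablePolicy {
  version: number;
  rules: ResourceRule[];                                // first matching rule wins
}

const myPolicy: PortablePolicy = {
  version: 1,
  rules: [
    { hostPattern: "facebook.com", types: ["script", "image"], action: "block" }, // skip "Like" buttons
    { hostPattern: "doubleclick.net", types: ["script", "image", "other"], action: "block" },
    { hostPattern: "", types: ["image"], action: "allow" },                        // load all other images
  ],
};

function shouldLoad(host: string, type: "script" | "image" | "font" | "other",
                    policy: PortablePolicy): boolean {
  for (const rule of policy.rules) {
    if (host.includes(rule.hostPattern) && rule.types.includes(type)) {
      return rule.action === "allow";
    }
  }
  return true; // default: load the resource
}

console.log(shouldLoad("www.facebook.com", "image", myPolicy)); // false
console.log(shouldLoad("example.org", "image", myPolicy));      // true
```

A file like this could be exported to external media and imported into any cooperating browser, independent of who runs the sync servers.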

by tempguy9999 on 5/14/2019, 5:35 PM

I block all JS and have a comprehensive blocklist for just about every ad service there is. Stuff usually loads damn fast (OK, when not broken by missing JS, an acceptable price to me; I get speed and safety for free). I always recommend people try it.

It's fast enough except when webbish types do stupid things.

Just yesterday I was looking at a Scientific American article which was held up by an entirely pointless 2.6MB gif, this one <https://static.scientificamerican.com/blogs/cache/file/80C44....

Quanta articles are bloody nuts too, check this shit, a 4MB animation that tells you nothing. Fucking nothing. Absolfuckinglutely sweet FA. (Edit, forgot link; here: <https://www.quantamagazine.org/mathematicians-discover-the-p...)

We have a fast web, what we also have are idiots - technical solutions to idiocy aren't solutions.

(NB 'scuse swearing)

by forgotmypw14 on 5/14/2019, 5:51 PM

The web is quite fast already, if you don't weigh down your web pages with tens of megabytes of crap...

by voiper1 on 5/14/2019, 4:24 PM

So their main improvement over Chrome is that Chrome doesn't benefit from progressive images -- it loads images in sequence instead of in parallel. Sounds like a simple fix (if you actually know the images are progressive).
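
The parenthetical is the interesting part: the server has to know an image is progressive before it can schedule it differently. For JPEGs that check is cheap, since a progressive image declares itself with an SOF2 marker (0xFFC2) where a baseline image uses SOF0 (0xFFC0). A rough sketch of that detection (not Cloudflare's actual code):

```typescript
// Rough sketch: walk a JPEG's marker segments and report whether it is
// progressive (SOF2, 0xFFC2) or baseline (SOF0, 0xFFC0).
function isProgressiveJpeg(buf: Uint8Array): boolean {
  let i = 2; // skip the SOI marker (0xFF 0xD8)
  while (i + 4 <= buf.length && buf[i] === 0xff) {
    const marker = buf[i + 1];
    if (marker === 0xc2) return true;   // SOF2: progressive DCT
    if (marker === 0xc0) return false;  // SOF0: baseline DCT
    if (marker === 0xda) break;         // SOS: entropy-coded data follows, stop scanning
    if (marker === 0x01 || (marker >= 0xd0 && marker <= 0xd9)) {
      i += 2;                           // standalone markers carry no length field
      continue;
    }
    const segmentLength = (buf[i + 2] << 8) | buf[i + 3]; // big-endian, includes these 2 bytes
    i += 2 + segmentLength;
  }
  return false;
}
```

A server could then interleave the streams of images that report true here and deliver the rest sequentially.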

by Matthias247 on 5/15/2019, 6:05 AM

As someone who has also implemented HTTP/2 (for the apparently unpopular .NET ecosystem), I am guilty of totally missing out on the prioritization feature.

There are definitely some good ideas and use cases in the article that make it more worthwhile!

Some things I'm wondering about:

The proposed strategy seems to prefer sometimes sending single resources sequentially instead of lots of resources in parallel. Doesn't that essentially bring the communication back to an HTTP/1.1 style with less parallelism - only with the benefit of no extra connections?

And how well does the approach fit together with browsers' flow control windows? If a browser sets a small flow control window per stream, then sending only a single object at a time would still require lots of round trips for flow control window updates - which might make it worse than HTTP/1.1. However, I heard at some point that browsers configure huge flow control windows. If that's true, it seems more likely to work out (and the default strategy, where HTTP/2 prefers parallelism over throughput, seems worse).
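
For a rough sense of the flow-control question: RFC 7540's default SETTINGS_INITIAL_WINDOW_SIZE is 65,535 bytes, so a back-of-the-envelope sketch like this (sizes are illustrative, and it pessimistically assumes a WINDOW_UPDATE only arrives after the window fully drains) shows why a small per-stream window hurts sequential delivery while a multi-megabyte window does not:

```typescript
// How many stalls does per-stream flow control add when one resource is sent
// alone on its stream and the receiver grants only `windowSize` bytes at a time?
function flowControlStalls(resourceBytes: number, windowSize: number): number {
  // Pessimistic model: the sender pauses after every full window until a
  // WINDOW_UPDATE round trip completes.
  return Math.max(0, Math.ceil(resourceBytes / windowSize) - 1);
}

const resource = 1_000_000; // e.g. a 1 MB image or script

console.log(flowControlStalls(resource, 65_535));    // 15 stalls with the RFC 7540 default window
console.log(flowControlStalls(resource, 6_000_000)); // 0 stalls with a multi-megabyte window
```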

by TomGullen on 5/14/2019, 10:49 PM

Would love to enable HTTP/2 on Cloudflare for our site, but there seems to be a bug in Cloudflare where it randomly stops requests, and we're stuck in an endless support loop. So we have to keep it disabled on our PWA, which would otherwise see huge benefits from it.

by ken on 5/14/2019, 8:29 PM

Don't some of these browsers display a best-approximation fallback font while the actual font file is downloading, whereas others display no text until the font file is available? That seems like an awfully big distinction, which is omitted here.
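
For reference, that difference is exactly what CSS font-display describes: swap renders text in a fallback font immediately and swaps when the web font arrives, while block hides the text first. A minimal sketch of a page opting into swap explicitly rather than relying on each browser's default (the font name and URL are placeholders):

```typescript
// Minimal sketch: inject an @font-face rule whose font-display value controls
// whether text shows in a fallback font while the web font downloads.
// "BodyFont" and the URL are placeholders, not real resources.
const css = `
@font-face {
  font-family: "BodyFont";
  src: url("/fonts/body.woff2") format("woff2");
  font-display: swap; /* show fallback text immediately; "block" would hide it */
}
body { font-family: "BodyFont", sans-serif; }
`;

const style = document.createElement("style");
style.textContent = css;
document.head.appendChild(style);
```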

by MrStonedOne on 5/14/2019, 5:52 PM

I don't know that their assumption that bandwidth gets maximized, period, is correct.