Speed and Mobility: An Approach for HTTP 2.0 to Make Mobile Apps and the Web Faster


Face-to-face meetings begin at the IETF this week on how to approach HTTP 2.0 and improve the Internet. How the industry moves forward together on the next version of HTTP, the protocol through which every application and service on the Web communicates today, can have a positive impact on user experience, operational and environmental costs, and even the battery life of the devices you carry around.

As part of this discussion of HTTP 2.0, Microsoft will submit to the IETF a proposal for “HTTP Speed+Mobility.” The approach we propose focuses on improving Web performance while also addressing the important needs of mobile devices and applications.

You can read more here about this proposal and an approach to HTTP 2.0 that considers network speed, cost, and device battery life, and works for everyone from site owners and content providers to consumers.

— Rob Mauceri, Group Program Manager, Internet Explorer

Comments (13)

  1. Anonymous says:

    Is it true that HTTP 2.0 is to be based on Google's SPDY technology? That's the rumor. Is its security OK?

    In any case, thank you for working on the development and implementation. I would just ask that the result be well balanced.

  2. Anonymous says:

    A copy of the proposal would be nice.

  3. Anonymous says:

    This tells us almost nothing. A copy of the proposal please.

  4. Anonymous says:

    Why is IE10 so slow to ship? Microsoft is really too soft. Are you people lazy or silly?

  5. Anonymous says:

    Did any of you bother to follow the link? Idiots.

    Oh, and @Martin? The only "lazy or silly" person on this blog is the guy asking why IE10 hasn't shipped yet while showing no desire to help get it out the door. Maybe you want to whine about not having a flying car or jet backpack, too, while you're at it.

  6. Anonymous says:

    That link provides no info other than a bunch of hand waving… It isn't even slideware yet.

    Just how are you proposing to save battery life?

    How are you hoping to merge web sockets with SPDY?

    And most importantly… when IE supports HTTP 2.0, will you ensure that it properly supports all headers?! We don't want another "Vary" header that IE ignores, or an HTTPS cache rule that erases downloads before the end user can open them, or a save-page feature that requires a complete reload of the entire page.
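
    To spell out what properly supporting "Vary" means: the cache key has to include the request headers the response names. A sketch (hypothetical helper in Python, nobody's real cache code):

    ```python
    # Illustrative only: a Vary-aware cache key. A response with
    # "Vary: Accept-Encoding" must be cached per Accept-Encoding
    # value, not reused for every request to the same URL.
    # Assumes request_headers uses lower-cased header names.
    def cache_key(url, request_headers, vary_header):
        names = [h.strip().lower()
                 for h in vary_header.split(",") if h.strip()]
        varied = tuple(sorted(
            (name, request_headers.get(name, "")) for name in names))
        return (url, varied)
    ```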

    Make sure you are willing to commit fully to HTTP 2.0 before you toot your horn!

  7. Anonymous says:

    The eventual HTTP 2.0 specification will likely have levels of compliance to scale down to limited devices. IE and other browsers will support most of the standard eventually.

  8. Anonymous says:

    (Second post; first was eaten.) Reading between the lines of the other blog post: 1) I'm guessing the 'client in control' line means the new proposal does away with SPDY's server-pushed resources (seems OK; I think I picked up Google moving away from that and towards 'Server Hint' in some presentation). 2) The language about building on WebSockets and being friendly to existing routers, etc. suggests to me it's going to look more like HTTP 1.1/WebSockets than SPDY over the wire: probably no compressed headers or TLS wrapping or so forth.

    I'm actually a bit torn about #2 — not making problems for proxies — because hidden, badly-implemented proxies outside the user's control (imposed by some firewall or local software or the Wi-Fi router at the airport or some such) seem to be the source of some grief on the Web today. Proxies seem like something that should be rethought in a way that gives the user more control — maybe a SPDY-like crypto wrapper is the mechanism for that, along with some standard way for the user to affirmatively add a man in the middle if they want (and, perhaps, a way for a network to demand that they do so, which could matter in a corporate network that requires security filtering).

    Interested to see how it all shakes out.

  9. Anonymous says:

    IE 10 does not support MathML (already included in HTML5).

    IE 10 does not support WebGL.

    And Microsoft is proposing a new specification?

    Implement first what other browsers already support.

    Then, AFTER THIS, you can propose new specifications…

  10. Anonymous says:

    The intro to the proposal is online.

    I think it's totally sensible for Microsoft to make a new proposal before they've got all existing specs done. (I don't entirely like the order they chose for tackling some specs, but that's inevitable.) And the IETF _needs_ a concrete incrementalist proposal, at least to clarify another direction the design could go.

    I think there's a conversation to have about whether HTTP 2.0 should maintain the status quo where network equipment rewrites packets in flight to disable gzip, redirect webpages, change content, etc., without user consent or even awareness. I don't think it should. Users need control, including users of public Wi-Fi networks. And, exactly as the intro says, if we go the other direction and say HTTP 2.0 must look almost like HTTP 1.1 to support legacy request rewriters, we can't do a lot of big stuff.

    You're right that corporations will need proxies, public networks will need splash pages, etc. But I think we need a more orderly way to do all that than today's man-in-the-middle approach — you need an out-of-band way for the network to tell the client "you need to visit this page first" or "you need to add a proxy first", without posing as the server. (The HTTP 511 proposal from Adobe is a step in that direction; couple it with a way for the network to name a required proxy.)
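
    To make that concrete, here's a minimal sketch of a client treating 511 as a network-level signal rather than origin content (Python, assuming the third-party requests library; the exception name is made up):

    ```python
    # Minimal sketch: treat HTTP 511 ("Network Authentication
    # Required") as an out-of-band signal from the network, never
    # as content from the origin server.
    import requests

    class NetworkAuthRequired(Exception):
        """The network, not the origin, is demanding a login."""

    def fetch(url):
        resp = requests.get(url)
        if resp.status_code == 511:
            # Don't cache this or attribute it to `url`; hand the
            # intercepting network's login page to a separate UI flow.
            raise NetworkAuthRequired(resp.text)
        return resp
    ```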

    SPDY got end-to-end integrity using TLS; you can do something different, but do _something_. Turning the client's "phone call" to the server into a party line without the user's agreement was a useful hack for a while, but it's not such a great arrangement that we should design HTTP 2.0 around it.

    If pervasive content encryption is too upsetting to companies or governments, then protecting headers' integrity would be a huge step up. To minimize the burden on servers, each client could, one time only, send its public DH/Curve25519/whatever key, which the server can use to generate a short HMAC key. The server can send a cookie with that HMAC key encrypted under a domain-specific symmetric key; the client then sends that cookie back with future requests to skip the public-key computation. The server has nothing to store, there are no new round-trips, and there's only one public-key operation per server-client pair.
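
    Roughly, in code (my sketch only, assuming Python's third-party cryptography package; every name and format here is made up, not part of any proposal):

    ```python
    # Sketch of the no-server-state header-HMAC idea above.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes, hmac
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    DOMAIN_KEY = Fernet.generate_key()  # domain-specific symmetric key
    seal = Fernet(DOMAIN_KEY)

    def derive_hmac_key(server_priv, client_pub_bytes):
        # The single public-key operation per server-client pair:
        # Curve25519 shared secret -> short HMAC key.
        shared = server_priv.exchange(
            X25519PublicKey.from_public_bytes(client_pub_bytes))
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"header-integrity").derive(shared)

    def make_cookie(hmac_key):
        # Nothing to store server-side: the HMAC key rides inside
        # the cookie, sealed under the domain key.
        return seal.encrypt(hmac_key)

    def verify_headers(cookie, raw_headers, tag):
        hmac_key = seal.decrypt(cookie)  # no public-key work now
        mac = hmac.HMAC(hmac_key, hashes.SHA256())
        mac.update(raw_headers)
        mac.verify(tag)                  # raises if headers changed

    # One-time setup: the client sends its raw public key once.
    server_priv = X25519PrivateKey.generate()
    client_priv = X25519PrivateKey.generate()
    client_pub = client_priv.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw)
    cookie = make_cookie(derive_hmac_key(server_priv, client_pub))
    ```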

    An ssh-style check that the server's key hasn't changed would be enough, since we're not concerned with the server's real-world identity here. Call the scheme httpi:// (HTTP+integrity) and stick it on another port. If the check shows the key changed unexpectedly, give the user a big warning; if headers were tampered with or the port's unreachable, downgrade to http:// and give the user a less drastic warning saying httpi isn't supported (about as prominent as today's mixed-content warnings). Take header integrity, add a Content-Hash header that can carry modern hashes, and you have content integrity too.
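
    In sketch form (Python; the "algo=hexdigest" format for Content-Hash is just something I'm making up for illustration):

    ```python
    # Sketch of the two checks above. "Content-Hash" is the header
    # proposed in this comment; its "algo=hexdigest" wire format is
    # an assumption for illustration only.
    import hashlib
    import hmac

    KNOWN_KEYS = {}  # host -> first-seen server key (ssh-style pin)

    def server_key_ok(host, key_bytes):
        pinned = KNOWN_KEYS.setdefault(host, key_bytes)
        return pinned == key_bytes  # False -> show the big warning

    def content_hash_ok(headers, body):
        value = headers.get("Content-Hash")
        if value is None:
            return None  # absent: downgrade with the milder warning
        algo, _, expected = value.partition("=")
        actual = hashlib.new(algo, body).hexdigest()
        return hmac.compare_digest(actual, expected)
    ```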

    Then, once you have a way to get integrity, you can feel more free to mess with the protocol.

    Drastic stuff, but if it's HTTP 2.0 we're talking about, drastic stuff should be on the table.

    If Microsoft is concerned about compression burdening small devices, perhaps it's also worth proposing a lighter compression option than GZip (LZO, Snappy, or a new algorithm entirely). Small devices are probably bandwidth-constrained, too, after all.
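
    For instance (a rough sketch assuming the third-party python-snappy binding; the input file is a placeholder and the numbers will vary):

    ```python
    # Rough size/speed comparison: zlib (the codec behind gzip)
    # vs. Snappy, on one payload.
    import time
    import zlib
    import snappy  # pip install python-snappy

    with open("page.html", "rb") as f:  # placeholder payload
        payload = f.read()

    for name, compress in (("zlib-6", lambda d: zlib.compress(d, 6)),
                           ("snappy", snappy.compress)):
        start = time.perf_counter()
        out = compress(payload)
        elapsed = time.perf_counter() - start
        print(f"{name}: {len(payload)} -> {len(out)} bytes "
              f"in {elapsed * 1000:.2f} ms")
    ```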

    I haven't seen the technical details of the proposal, of course. I hope Microsoft proposes, or implements, something that's substantially as functional and efficient as SPDY, etc., or I'm sure Google and Amazon (and Apple and Moz?) will march on with their approaches, MS will do its own or stick to 1.1, and we developers will again lament our apps performing differently depending on the platform.

  11. Anonymous says:

    Wait what?!

    @Randall – just where is this proposal online? All we've seen is "talk" about what this "thing" would do – there have been ZERO cold hard facts and no specification for us to read.

    We're glad that Microsoft is interested in telling us about this before forcing an implementation on us – but the community will be much more willing to listen if you give us something to read… to let us decide whether to be happy about this new concept or fearful that IE is just going to open up and monopolize more connections to the server than IE9 already does with its overzealous pre-loading, and its re-loading when the context changes instead of re-parsing the files it already downloaded (a serious WT*?! in the IE9 design).

    Nick

  12. Anonymous says:

    @Wait what — I was only replying to the intro (the -00 version of the Internet-Draft), but -01 is up now. Haven't had time to read through. It has features I didn't expect, like default-on header compression.

    The 00 version seemed to hint Microsoft really wants to protect existing network infrastructure that tinkers with HTTP packets on the wire ("switches, routers, proxies, load balancers, security systems"). I was saying I think men-in-the-middle editing HTTP sessions without user/server consent is a bug, not a feature we want to carry forward, and I sketched out one way to encourage end-to-end integrity for next-gen HTTP connections short of a TLS wrapper.

    (And, revising what I laid out yesterday, a new scheme and port for HTTP+integrity wouldn't make sense. Just use integrity checks to determine 1) whether it's safe to use new features that might not work with packet-rewriting hardware, and 2) whether to show the user a "checks passed" cue analogous to the SSL lock (a checkmark in the address bar, say).)
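
    In code terms, something like this (purely illustrative names, not anyone's implementation):

    ```python
    # Gate new features and the UI cue on the integrity result,
    # instead of using a separate scheme and port.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Session:
        new_features: bool = False      # framing middleboxes may break
        ui_badge: Optional[str] = None  # the "checks passed" cue

    def apply_integrity_result(session, checks_passed):
        if checks_passed:
            # Nothing rewrote the session in flight: safe to enable
            # features that packet-rewriting hardware would mangle.
            session.new_features = True
            session.ui_badge = "checkmark"  # like the SSL lock
        else:
            # Fall back to fully HTTP 1.1-compatible behavior.
            session.new_features = False
            session.ui_badge = None
    ```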

  13. Anonymous says:

    Very good.