The speed of light is unlikely to improve: consequences


In the process of designing a way for a game to obtain the ping time to a game server, the question arose as to what units the ping time should be expressed in. The original idea was to express the ping time in milliseconds, since that's the unit in general use.

But someone asked, "What if networking technology improves to the point where sub-millisecond ping times become commonplace? Should we use a time reporting unit with higher resolution, so that if there are multiple servers with sub-millisecond times, the app can pick the best one?"

Well, first of all, for people playing online games, any ping time less than 20 milliseconds is probably good enough. If you had to choose between two servers that both have sub-millisecond ping times, you would be ecstatic.

Second, the speed of light is unlikely to improve. The absolute lowest ping time is therefore the time it takes light to travel from your computer to the other computer, and back. Make all the improvements in networking technology you want. A system more than 150km away will never be able to produce a sub-millisecond round-trip ping time.
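
Here's the back-of-the-envelope arithmetic as a quick Python sketch, assuming propagation at the speed of light in vacuum (which no real network achieves):

C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

# Farthest a server can be and still produce a 1 ms round-trip ping:
# the signal must cover twice the distance in one millisecond.
print(C_KM_PER_S * 0.001 / 2)  # ~150 km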

We stuck with milliseconds.

Comments (50)
  1. Brian_EE says:

    >A system more than 150km away will never be able to produce a sub-millisecond round-trip ping time.

    It's actually not even that good, unless the space between you and the server is a vacuum and the trip is light all the way. The speed of light in a fiber-optic cable is about 30% slower (a simple rule of thumb is that a signal travelling over optical fiber moves at around 200,000 kilometers per second). So make that ~100 km (~60 miles), and that's fiber distance, not physical distance. Subtract more for time spent as electrons in copper, plus latency through the integrated circuits and software that make up switches, routers, and media converters (copper to fiber and back).

    You might not see sub-millisecond times even if you're in the same building.
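
    Plugging that rule of thumb into the same arithmetic (a sketch; the 200,000 km/s figure is itself an approximation):

    FIBER_KM_PER_S = 200_000  # rough signal speed in optical fiber, km/s

    # Farthest a server can be, over fiber, and still produce a 1 ms round trip.
    print(FIBER_KM_PER_S * 0.001 / 2)  # ~100 km, and that's cable distance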

    1. We're assuming arbitrary improvements in network technology. Maybe direct line-of-sight vacuum replaces copper wire as a medium.

      1. Cesar says:

        > Maybe direct line-of-sight vacuum replaces copper wire as a medium.

        That exists. Not vacuum, but close enough: https://en.wikipedia.org/wiki/Free-space_optical_communication

        Also, some high-frequency traders buy line-of-sight microwave links, since they have lower latency than optical fiber.

      2. Joshua says:

        Believe it or not, I got bit in 2007 by not being able to display hundredths of a millisecond in a ping time measurement.

  2. Koro says:

    In the last paragraph, you have a typo: "netwokring".

    1. Muzer says:

      A group of online chefs specialising in using a particular cooking implement originating in South China?

      1. Jason says:

        Please, the NetWokRing was obviously a group of Geocities sites about Chinese food.

    2. Also "the the" in the first line.

      1. Ray Koopa says:

        Funny how I didn't notice that when I read over it.

  3. DWalker07 says:

    The question "what if networking technology improves to the point where sub-millisecond response times become commonplace?" is an example of someone either not using common sense, or not being "numerically literate". Even applying grease to the wires won't make the electrons travel faster, as you said.

    Sometimes it's hard to look at the bigger picture...

    A ping to the router that's 60 feet down the hall from me often says "<1 ms", and that's good enough info for me!

    1. Dmitry says:

      A pair of servers in a rack could be two inches apart, and blades are packed even tighter.

      1. Scarlet Manuka says:

        But what kind of person logs onto one blade to play an online game being hosted on an adjacent blade?

        1. xcomcmdr says:

          A Blade Runner?

  4. Adam says:

    Don't sleep on quantum entanglement!

    1. Euro Micelli says:

      Quantum entanglement does not enable information to travel faster than the speed of light. Other forms of magic, perhaps, but not that one.

      1. ErikF says:

        I'm waiting on Microsoft Research to get that time machine working so that packets arrive before they've even been sent!

        1. pc says:

          I once had some sort of bizarre misconfiguration or odd drivers or something, such that I saw negative ping times. I saved the output for posterity (actual host/IP redacted):

          Pinging ■■■■.■■■■.■■ [192.168.■.■■] with 32 bytes of data:

          Reply from 192.168.■.■■: bytes=32 time=-14ms TTL=127
          Reply from 192.168.■.■■: bytes=32 time=-14ms TTL=127
          Reply from 192.168.■.■■: bytes=32 time=-14ms TTL=127
          Reply from 192.168.■.■■: bytes=32 time=-18ms TTL=127

          Ping statistics for 192.168.■.■■:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
          Approximate round trip times in milli-seconds:
          Minimum = -18ms, Maximum = -14ms, Average = 1073741809ms

          1. Kristof says:

            Any reason you're hiding a class C network? It's not like that info is valuable to the outside world!

          2. Scarlet Manuka says:

            Minimum = -18ms, Maximum = -14ms, Average = 1073741809ms? That's not how mathematics worked when *I* was studying it.

          3. Yuri Khan says:

            @Scarlet Manuka: If you're mixing signed and unsigned 32-bit arithmetic, that's exactly how it works.
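
            Concretely, a quick sketch of that failure mode in Python (assuming the tool summed the signed readings and then divided as unsigned 32-bit):

            times = [-14, -14, -14, -18]      # the four signed readings above
            total = sum(times)                # -60 as a signed 32-bit value
            as_unsigned = total & 0xFFFFFFFF  # reinterpreted as unsigned: 4294967236
            print(as_unsigned // 4)           # 1073741809, the "average" reported above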

          4. Lawrence says:

            I can't tell if "I'm going to redact numbers on a local network that are meaningful only to me" is a brilliant satire on Raymond's theme of "person tries to think big-picture and misses the point that the situation they want to plan for will never happen" or if it's a live example of that happening!

          5. pc says:

            @Kristof:
            It was over ten years ago, and it was redacted in what I originally saved. I was probably just using an overabundance of caution. I wish I remembered what the actual cause was, but I do remember how bizarre it was.

          6. Doug says:

            @pc
            When I saw negative ping times, it was because of an outdated CPU driver on an AMD processor. Does that ring a bell?

          7. NickViz says:

            It happened even on early Intel quad-core CPUs. The CPU time for different cores was, eh, different. The solution at the time was to enable virtualization technology (Intel VT) in the BIOS. That fixed the problem. HP DC5700, IIRC.

        2. anai says:

          If that happens (and for larger items, like people), Raymond (and his colleagues) would have the opportunity to travel back and fix all those challenges that were not foreseen "back in the day". That is, no more "the first thing I would need to fix this is a time machine" retort. Then again, time machines may be rare, privileged, expensive, regulated, etc., so it may still be a viable response...

      2. smf says:

        The problem at the moment with quantum mechanics is that the game is run by snake oil salesmen, the same type of people who will keep saying there is a dollar missing no matter how much you try to correct their maths (https://en.wikipedia.org/wiki/Missing_dollar_riddle) or who tell you that the Mexicans will pay to build you a wall.

        Once you realise that nothing is particularly spooky about quantum mechanics and that you just need a different model for the particles, all the magic disappears. A Ford and a Ferrari will go round corners differently, and a model for one doesn't match the model for the other; that doesn't mean one of them can fly.

  5. N. says:

    Had a related problem at a past company: a customer was complaining about the reliability of our program's comms over a satellite link. The devs said, "oh, that shouldn't be a problem". I did a back-of-the-napkin lower-bound estimate on latency, assuming geosynchronous orbit and speed-of-light transmission, and stuck that number into a router that allowed simulating latency.

    The problem was easily reproduced. And then fixed :)
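
    For reference, the napkin math as a Python sketch (assuming a geostationary satellite with both endpoints directly beneath it; real paths are longer):

    C_KM_PER_S = 299_792      # speed of light in vacuum, km/s
    GEO_ALTITUDE_KM = 35_786  # geostationary orbit altitude

    # Round trip: client -> satellite -> server -> satellite -> client,
    # i.e. four traversals of the altitude at an absolute minimum.
    print(4 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000)  # ~477 ms before any other delay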

    1. smf says:

      I'm not sure why you bothered to estimate it. You either test it with multiple latencies to make sure it works no matter what the latency is, or you find out the actual latency that the link has.

      You could have ended up "fixing" an issue that the customers weren't experiencing and ignoring the issue that they were. You got lucky.

      1. N. says:

        I estimated it purely to provide a firm lower bound, because nobody else was bothering to do it, and it didn't seem to occur to them that the latency could be "that high". Nope, I wasn't doing anything sophisticated, just using a basic laws-of-physics cluebat!

  6. Ken Hagan says:

    It depends on what the pings are used for. With a sufficiently fat pipe (to make the wasteful use of bandwidth more acceptable), a server could pre-emptively reply to pings that hadn't been issued yet. Recipients would ignore the replies to things they hadn't asked for and enjoy sub-millisecond reply times to things that they had asked for. (The server would use the actual questions to guide the second-guessing algorithm.)

    Obviously you'd want to hide most of that complexity behind some sort of proxy layer, at which point the rest of the client code probably believes it has sub-millisecond ping times to the remote server.

    1. SimonRev says:

      Of course the next logical step would be for Netflix to start sending me every video stream in their library continuously so I never have to buffer when starting a video.

      1. zboot says:

        In order to not "buffer", you'd need to actually buffer all that video. So, unless you had a massively parallel connection and memory, you're still "buffering" the pre-emptively sent data sequentially and so, if what you wanted to watch wasn't one of the first things sent, you'd end up needing to wait for it to "buffer". Buffering just leads to more buffering.

        1. smf says:

          It would certainly be possible to buffer Netflix locally; it's not the size of the internet. They almost certainly have their own caches closer to customers already, and it's just an extrapolation of that. Like any predictive cache, it has downsides when what you want isn't in the cache. So zero-day programmes on Netflix would likely take longer to get out to everyone, but I doubt they care too much, as they don't broadcast live, and anything they want to make available at a certain time can be pushed out in advance. It would fail completely if they were adding more content quicker than it could be downloaded.

          Some broadcasters do pre-emptively send out content that is stored on your set-top box hard drive in case you want it. Kinda like the U2 album being sent out to iPhones, but because it doesn't fill up your entire device, it doesn't get such bad press.

      2. Anon says:

        ... and you just invented the TV signal.

    2. smf says:

      "It depends on what the pings are used for. With sufficiently fat pipe (to make the wasteful use of bandwidth more acceptable) a server could pre-emptively reply to pings that hadn’t been issued yet. Recipients would ignore the replies to things they hadn’t asked and enjoy sub-millisecond reply times to things that they had asked. (The server would use the actual questions to guide the second-guessing algorithm.)"

      If the client can determine whether the server data should be discarded then it could just calculate the result without using a network at all.

      All you've done is redefine what a ping time is so that you can lie about it.

  7. David Crowell says:

    I get less than 1ms ping to another machine. It's not *much* less than 1ms, and it's hardwired into my local network and in the same room... but still. :)

  8. Zan Lynx' says:

    Using metadata about the movie including the release date, the production company, the director and the actors, the local AI in your desktop should be able to create a convincing replica of the beginning of the movie while waiting for the stream to arrive.

    1. smf says:

      They'd never release that; someone would hack it to remove everyone's clothes.

      1. D-Coder says:

        And this is a problem because...?

  9. Kristof says:

    This story reminds me of the 500-mile mail server:

    http://www.ibiblio.org/harris/500milemail.html

  10. cheong00 says:

    It's much like people asking why we don't have 64-bit quality music: the current standard is already overkill. Pushing up the precision will not give any improvement on things you think could improve.

    1. smf says:

      "It’s much like people asking why we don’t have 64-bit quality music – because the current standard is already an overkill. Pushing up the precision will not give any improvement on things you think could improve."

      The current standard is 16-bit @ 44,100 Hz, and a lot of music is delivered below that. You can get 24-bit @ 48,000 Hz, but it's not the standard. Going higher than 48,000 Hz is probably worth it; I'm not sure that going higher than 24 bits is. Not because of some magical property of human hearing, but because the more bits you add, the harder (and therefore more expensive) it is to build a DAC, and you need that level of accuracy at every stage, including the speakers.

      1. For a final format, 44.1 kHz at 16 bits per sample is almost certainly beyond the limits of human hearing.

        Assuming a decent DAC, that's good enough for perfect reproduction up to 20 kHz, and for a dynamic range such that, if the format can just barely reproduce the quietest sounds you can hear, the loudest sounds in the media are about as loud as a jackhammer at 1 meter (3.2808 feet, in American measures).

        There's definitely no need for a higher sampling frequency. You *might* gain a little bit from an extra 4 bits (so 20 bits) per sample if you're mostly interested in things like the 1812 Overture, with its mix of full-volume cannons and quieter music.
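
        A quick sketch of those two numbers in Python (the standard Nyquist and dynamic-range calculations):

        import math

        sample_rate_hz = 44_100
        bits = 16

        print(sample_rate_hz / 2)           # Nyquist limit: 22050.0 Hz, above the ~20 kHz limit of hearing
        print(20 * math.log10(2 ** bits))   # dynamic range: ~96.3 dB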

      2. Antti Huovilainen says:

        smf, going above 48 kHz / 16-bit is largely pointless for music distribution. 96 dB of dynamic range is plenty for listening, especially given that you won't find a listening environment with background acoustic noise low enough to get even that at non-hearing-damage-inducing listening levels. For playback you can use a couple more bits to allow digital volume control, and for recording, 24 bits (closer to 19 bits of actual resolution unless you pay big money) gives more headroom for unexpected peaks.

  11. ErikF says:

    The overhead of the protocol stack has to be taken into consideration as well. For example, if I ping a VM on my server I get pretty good pings, but they're still in the 0.2-0.5 ms range (it's not a super-powerful server!); even pinging localhost gives me 0.06-0.07 ms times.

    I wonder if ping times will go down as IPv6 becomes adopted more widely. One of the promises that I remember hearing when IPv6 first came out was that the number of hops would decrease as ISPs could simplify their internal network routes, as well as not needing NAT techniques. I'm not certain about this yet: my ping times to IPv4 and v6 hosts are roughly the same.

    1. smf says:

      I don't think you'll see any benefits from IPv6 until IPv4 is dead and buried, and IPv4 is still relevant because a lot of devices don't support IPv6 (the majority of smartphones and a lot of ADSL routers, for example).

      It may then show some improvement, but maybe not, as the IETF has worked hard to keep IPv4 working effectively until it can be dropped. Unfortunately, some people have seen this as a reason not to switch to IPv6, but there will come a time when IPv4 can't be kept working effectively and there are enough IPv6-compatible devices to justify switching IPv4 off.

      Kinda like how the huge investment in Year 2000 changes made most of the problems disappear, so people wondered why so much money was spent on fixing Year 2000 problems.

  12. Chris Crowther says:

    I almost wish you had gone with Planck time, just for the consternation it would cause amongst other people.

  13. Engywuck says:

    If taken to the extreme, a ping of 1 nanosecond would mean at most a 30 cm round trip.

    That's one of the reasons (besides power dissipation, etc.) that we are currently limited to a few GHz of CPU frequency: at 3 GHz you can "only" have a chip diameter on the order of a few cm if you want the chip to be synchronous (which is the easiest to design and has the lowest latency). We can go up in speed when we build smaller synchronous areas, but then we add complexity for the communication between those areas.
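
    A quick check of those figures in Python:

    C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

    print(C_M_PER_S * 1e-9 * 100)  # cm of path light covers in 1 ns: ~30 cm
    print(C_M_PER_S / 3e9 * 100)   # cm light covers per clock cycle at 3 GHz: ~10 cm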

  14. cheong00 says:

    Kind of unexpected to see that after waiting for a week, no RFC 1925 reference has been added. :P
