Recently I read Tubes: A Journey to the Center of the Internet by Andrew Blum. If you’ve ever wondered how your computer connects to other computers around the world, this book is a must-read. I consider it essential reading for any engineer responsible for delivering online services or networks.
Once you’ve finished that book, go and read Neal Stephenson’s Wired essay Mother Earth Mother Board. It’s quite lengthy at 42,000 words and was written in 1996, but it makes a great companion to Tubes. Here’s a sample:
One day a barge appears off the cove, and there is a lot of fussing around with floats, lots of divers in the water. A backhoe digs a trench in the cobble beach. A long skinny black thing is wrestled ashore. Working almost naked in the tropical heat, the men bolt segmented pipes around it and then bury it. It is never again to be seen by human eyes. Suddenly, all of these men pay their bills and vanish. Not long afterward, the phone service gets a hell of a lot better.
Ever since I was a kid, I’ve had a fascination with submarine cables. I can trace these memories back to family holidays along the NSW South Coast. As you drive up and down the coast, there are lots of rivers to cross and on the shore beside every bridge was one of these signs:
Photo credits: brynau on Flickr
I remember asking my grandparents what the signs were for and being told something about stopping the submarines coming down the river, but that didn’t sit quite right with me. Why would they have a big sign advertising that there was protection there? And why is there a picture of a boat with an anchor on it?
Connecting Australia to the world
As an early user of the Internet in Australia, two things were clear: it was slow and it was expensive. At the time, Australia was connected to the world by a handful of 560Mbit/sec cables that went via New Zealand & Hawaii: PacRimWest, PacRimEast and Tasman2.
Then in the year 2000, things started to change. Internet access started becoming a lot faster and a lot more affordable. This was due to the commissioning of two significant cables:
- Southern Cross Cable: a triple-ring cable to the US, originally deployed at 120Gbit/sec per cable and since upgraded to 500Gbit/sec, with a total system capacity of 3.6Tbit/s
- SEA-ME-WE 3 (South-East Asia – Middle East – Western Europe 3): two fibre pairs with a current capacity of 480Gbit/sec per pair
Image credits: “Southern Cross Cable route” by J.P.Lon, Mysid Wikipedia Commons
Image credits: “SEA-ME-WE-3-Route” by J.P.Lon on Wikipedia
These two cables were built using two different business models which are talked about in the book. To summarize:
- Consortium: cables were financed by consortiums of (usually) government-owned telephone companies in the countries the cable would connect. Each provider paid for part of the cable in return for access to it. Prior to 1997, this is how all cables were built. Because the financiers came from the “old world” club of telephone companies, capacity on the cable was sold in “circuits”: the more bandwidth you wanted, the more circuits you had to buy. It also meant that as the fibre optic equipment along the cable was upgraded, the consortium could sell more circuits at the same price.
- Privately financed: “new world” private investors did the math and realized that they could build submarine cables and sell the rights to the actual fibre pairs within them. This allowed communications providers to put their own fibre optic equipment on the ends of the cable and send/receive as much data as they were capable of, without per-circuit fees.
As the rush of investment on these new world cables picked up pace, some of the old world consortiums felt so threatened that they ended up buying capacity on the cables themselves!
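The difference between the two models is easiest to see with numbers. Here’s a toy comparison in Python; all prices and capacities are entirely hypothetical, purely to illustrate how the costs scale:

```python
import math

# Toy comparison of the two submarine-cable pricing models described above.
# Every figure here is hypothetical; only the scaling behaviour matters.

CIRCUIT_GBPS = 10        # hypothetical capacity sold per "circuit"
CIRCUIT_PRICE = 1.0      # hypothetical recurring price per circuit
FIBRE_PAIR_PRICE = 20.0  # hypothetical flat price for rights to a fibre pair

def consortium_cost(gbps: float) -> float:
    """Old-world model: pay per circuit, so cost grows with bandwidth."""
    return math.ceil(gbps / CIRCUIT_GBPS) * CIRCUIT_PRICE

def private_cost(gbps: float) -> float:
    """New-world model: flat price for the pair; you supply the terminal gear."""
    return FIBRE_PAIR_PRICE

for gbps in (10, 100, 1000):
    print(f"{gbps:>5} Gbit/s  consortium={consortium_cost(gbps):6.1f}  "
          f"fibre pair={private_cost(gbps):6.1f}")
```

Past the crossover point, the fibre-pair buyer gets every future equipment upgrade effectively for free, which is exactly why the new-world model rattled the consortiums.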
Submarine Cables around the world
Much of the source material in the book originates from a company called TeleGeography, a telecommunications market-research firm that has been studying the physical Internet since 1998. Along with things like bandwidth and co-location pricing research, they also sell a 36″ x 50″ wall map of submarine cables for $250. They also have an interactive online version with additional context for each country’s Internet access.
Being the techie that I am, I ended up getting a framed copy of the map and have it on my wall as a reminder of how far away Australia is from the rest of the world. (Not like I need a reminder! :)
In the December 2009 edition (17.12) of Wired magazine, there was an article called Netscapes: Tracing the Journey of a Single Bit by Andrew Blum that included this picture:
Grover Beach, California
After traversing the continent, our packet will arrive in an LA building much like 60 Hudson Street. But if it wants to ford the Pacific, it can jog north to a sleepy town near San Luis Obispo. This sheltered section of coastline is not a busy commercial port, so it’s unlikely that a ship will drag an anchor through a transoceanic cable here. A major landing point for data traffic from Asia and South America, the station at Grover Beach sends and receives about 32 petabits of traffic per day. As our bit streams through the Pacific Crossing-1 cable (underneath the four posts, left), it’s on the same trail as some of the most important information in the world: stock reports from the Nikkei Index, weather updates from Singapore, emails from China — all moving at millions of miles an hour through the very physical, very real Internet.
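As a quick sanity check on that “32 petabits per day” figure, it’s worth converting it into a sustained data rate:

```python
# What does 32 petabits of traffic per day work out to as an average rate?
PETABIT = 10 ** 15
SECONDS_PER_DAY = 24 * 60 * 60

bits_per_day = 32 * PETABIT
avg_bits_per_s = bits_per_day / SECONDS_PER_DAY
print(f"average rate: {avg_bits_per_s / 1e9:.0f} Gbit/s")  # ~370 Gbit/s
```

Around 370 Gbit/s averaged over a whole day, through a handful of cables surfacing on one quiet beach.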
This is just one of hundreds of cable landing points around the world, and the book describes the process of “landing” a cable on a beach and connecting it to a nearby “landing station” like this one. These are usually nondescript buildings near the beach, though they don’t have to be right on it.
The next step in the journey of a bit is “How do all these cables criss-crossing the globe connect to each other?”
It turns out that there are some pretty significant Internet exchange points (IX or IXP) spread around the world for this purpose. An IXP allows networks to “cross-connect” (peer) directly with each other, often at no charge. This literally means patching a cable from each network into the same switch. Keith Mitchell’s presentation Interconnections on the Internet: Exchange Points talks about the different interconnection models and what determines the success of an IXP. Some of the largest include:
- DE-CIX in Frankfurt, Germany (stats)
- AMS-IX in Amsterdam, Netherlands (stats)
- LINX in London, UK (stats)
- Equinix in Ashburn, Virginia
Unsurprisingly, you will find that many cloud and content providers (e.g. Azure, Amazon, Google, Facebook, Akamai) have major datacenters located near these exchange points. This allows them to peer with lots of ISPs for cheap or free traffic exchange and reduces the latency between their services and their customers.
Aside: Net Neutrality, Interconnection and Netflix
I won’t go into the details here, but these articles make for interesting reading on the topic of “paid for” interconnects and how they can dramatically affect things like your video streaming experience.
Direct line from Chicago to New York
One of the other books that I came across recently is called Flash Boys by Michael Lewis. The first chapter (which is summarised in this Forbes article) describes how Dan Spivey of Spread Networks came up with the idea to build a fibre optic line directly between Chicago and New York for sending low-latency trades. Dan helped devise a low-latency arbitrage strategy, wherein the fund would search out tiny discrepancies between futures contracts in Chicago and their underlying equities in New York.
Light in optical fibre travels at roughly two-thirds of the speed of light in a vacuum (the glass’s refractive index slows it down), so for a given medium the only way to get signals to the other end faster is to reduce the distance. What Dan realised was that the existing fibre paths between the two cities were not as direct as they could be, as they tended to follow railroad rights-of-way.
By building a cable that runs nearly as straight as the crow flies, Spread Networks was able to shave around 100 miles off the route and about 3 milliseconds off the latency between the two trading data centers. This made the cable extremely valuable and they ended up selling the exclusive rights to a single broker firm (since if more than one person had access to the cable, that devalued it).
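You can sketch the distance-to-latency arithmetic yourself. Assuming a typical refractive index of about 1.47 for silica fibre, the propagation delay works out to roughly 5 microseconds per kilometre:

```python
# Propagation delay in optical fibre: signals travel at c / n, where n is the
# refractive index of the glass core (~1.47 is a typical assumed value).
C_KM_PER_S = 299_792.458   # speed of light in a vacuum, km/s
REFRACTIVE_INDEX = 1.47    # assumption: typical for standard silica fibre

def one_way_delay_ms(km: float) -> float:
    """Propagation-only delay over `km` of fibre, in milliseconds."""
    speed_km_per_s = C_KM_PER_S / REFRACTIVE_INDEX   # ~204,000 km/s
    return km / speed_km_per_s * 1000

saved_km = 100 * 1.609344   # the ~100 miles shaved off the route
print(f"one-way saving:    {one_way_delay_ms(saved_km):.2f} ms")
print(f"round-trip saving: {2 * one_way_delay_ms(saved_km):.2f} ms")
```

Propagation over the saved distance accounts for roughly 1.6 ms round trip; presumably the rest of the quoted improvement came from the older routes’ extra equipment and less direct paths.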
Dan was obsessed with the length of the cable, since every twist and turn adds latency. One extreme example: where the ducts ran down one side of the road and needed to continue on the opposite side past an intersection, instead of making two 90-degree turns the crew laid the cable diagonally across the road.
I hope you’ve enjoyed this quick excursion around the physical infrastructure of the Internet. If you find any more interesting articles or books on the topic, I’d love to hear about them.