Gigabit Challenges: Why Latency Matters

One of the key challenges to Gigabit service is latency. What is latency, and how does it affect Internet speeds?

Latency is the amount of time it takes for data to travel from its source to its destination across the Internet. It depends on several factors:

1) Transmission Latency: the speed of the underlying circuit.
A gigabit circuit (1,000 Mbps) has lower transmission latency than a T1 circuit (1.5 Mbps).

2) Distance Latency: The geographic distance.
The speed of light becomes significant when the distance is over 100 miles.

3) Switching Latency: the time it takes for data to be switched through a router.
Each router or switch takes a certain amount of time to transfer data in from one port, through the device, and out another port.

For Internet applications, the major components to latency are usually the Transmission Latency and the Distance Latency.
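To put rough numbers on the first component, here is a minimal sketch of serialization delay, the time it takes to clock a single packet onto the wire. The 1,500-byte packet size is an illustrative assumption (a full-size Ethernet frame), not a measurement of any particular circuit:

```python
# Serialization (transmission) delay: time to clock one packet onto the wire.
# A 1,500-byte full-size Ethernet frame is assumed for illustration.

PACKET_BITS = 1500 * 8

def serialization_ms(link_bps):
    """Milliseconds to transmit one packet at the given link speed."""
    return PACKET_BITS / link_bps * 1000

print(f"T1 (1.5 Mbps):    {serialization_ms(1.5e6):.3f} ms")   # 8.000 ms
print(f"Gigabit (1 Gbps): {serialization_ms(1e9):.5f} ms")     # 0.01200 ms
```

The gigabit link clocks the same packet out roughly 667 times faster, which is why transmission latency all but disappears on fast circuits.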

The time it takes light to travel 100 km in fiber optic cable is about 0.5 milliseconds. To go from New York to San Francisco, a distance of 4,673 kilometers, takes approximately 23 milliseconds. The RTT, or Round Trip Time, would be about 46 milliseconds! That’s the time it would take for a “ping” to go back and forth.
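The arithmetic above can be sketched directly, assuming the commonly cited figure of roughly 200,000 km/s for light in glass (about two-thirds of its speed in a vacuum):

```python
# Propagation delay in optical fiber, assuming ~200,000 km/s (200 km per ms).

FIBER_KM_PER_MS = 200.0

def one_way_ms(distance_km):
    """One-way propagation delay in milliseconds."""
    return distance_km / FIBER_KM_PER_MS

def rtt_ms(distance_km):
    """Round-trip time: there and back again."""
    return 2 * one_way_ms(distance_km)

print(one_way_ms(100))    # 0.5 ms per 100 km
print(one_way_ms(4673))   # 23.365 ms, New York to San Francisco
print(rtt_ms(4673))       # 46.73 ms round trip
```

No routing or hardware improvement can beat this floor; only shortening the physical path can.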

Latency affects the way the Internet’s TCP/IP protocol works. For a fixed TCP window size, throughput (the speed at which you can transfer data) is inversely proportional to latency: the higher the latency, the slower the connection.

Without altering the protocol’s defaults, TCP’s classic 64 KB window caps throughput between New York and San Francisco at roughly 11 Mbps, even if you have a gigabit link.
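Here is a sketch of where such a ceiling comes from, assuming TCP’s classic unscaled 65,535-byte window and the 46 ms round trip computed earlier. A sender can have at most one window of data in flight per round trip, so throughput is capped at window ÷ RTT (different window assumptions yield different ceilings):

```python
# Classic TCP throughput ceiling: at most one unscaled window per round trip.

WINDOW_BYTES = 65535   # maximum TCP window without window scaling
RTT_S = 0.046          # New York <-> San Francisco round trip, from the text

max_bps = WINDOW_BYTES * 8 / RTT_S
print(f"{max_bps / 1e6:.1f} Mbps")   # 11.4 Mbps, no matter how fast the link
```

Note that the link speed does not appear in the formula at all: once the window is the bottleneck, a gigabit circuit and a 100 Mbps circuit deliver the same throughput.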

In recent years, both Microsoft Windows and Linux have adopted TCP extensions such as window scaling (RFC 1323) to raise this ceiling, but they must be supported and properly tuned at both ends of the connection. And even with these adjustments, throughput over high-latency links is still disappointing.

This is why network designs that minimize latency are extremely important. We have taken a minimum-latency approach to designing our Internet connectivity services. Through extensive direct peering right here in Colorado, we offer the lowest-latency connections available from any ISP in the state.

How does this work?

A typical ISP connects to only one or two “backbones.” No backbone controls more than 10% of Internet users, so for a CenturyLink user to talk to a Cogent user, their packets have to travel to an exchange point on one of the coasts, or in Chicago or Texas. Even if your packet just needs to go across the street, it may be sent on a journey of thousands of miles.

Remember, mileage equals latency, and that’s going to slow you down.

By comparison, we have direct peering with dozens of networks. We peer with Cogent, Comcast, CenturyLink, Zayo, Level3, Telia, Hurricane Electric, Google, Apple, Yahoo, Microsoft, and many more. This extensive peering costs money, but it provides the absolute lowest latency. On our network, your packets have a better than 95% chance of going directly to their destination with minimum latency. No bouncing packets off of California to get from Denver to Boulder.

Lower latency means better speed, better reliability, and an enhanced user experience.

And that’s our commitment to subscribers. Our gigabit service is the best because we have the lowest-latency service.

Further reading:

1) Calculating Optical Fiber Latency
2) How to Calculate TCP Throughput for Long-Distance Links