Network Neutrality

There’s been a lot of talk in the news lately about the “Network Neutrality” principle.

As usual with anything in politics, the term means different things to different people. So I’ll define it here: the most common aspect of the principle is that networks (such as your friendly neighborhood internet provider) should provide unfettered, unfiltered access to the Internet.

I agree with this principle. Once providers get into the business of limiting content, we no longer have an Internet; we have multiple competing versions of the Internet, and free speech is at risk.

However, there are circumstances where a provider must manage traffic on its network.

All Internet access is shared. The Internet is inexpensive because all of us are sharing its infrastructure, and this works because not all of us are using the Internet at the same time. Larger providers may see 60-to-1 effective oversubscription; that is, at any given moment only 1 out of every 60 customers may be actively using the service.

Some folks think oversubscription is inherently bad, but without it the Internet as we know it wouldn’t exist. You have to share bandwidth in order to have the potential for $20 home broadband service.
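
To see how the economics work out, here is a rough sketch in Python. The transit price and advertised speed are made-up numbers chosen only to illustrate the arithmetic, not anyone’s actual costs:

```python
# Why oversubscription makes cheap broadband possible (illustrative numbers only).

transit_cost_per_mbps = 10.0   # assumed upstream transit cost, $/Mbps/month
advertised_mbps = 10           # assumed speed sold to each subscriber
oversubscription = 60          # 60:1, i.e. only about 1 in 60 users active at once

dedicated_cost = advertised_mbps * transit_cost_per_mbps
shared_cost = dedicated_cost / oversubscription

print(f"Bandwidth cost per subscriber, dedicated: ${dedicated_cost:.2f}/month")
print(f"Bandwidth cost per subscriber, shared 60:1: ${shared_cost:.2f}/month")
# Dedicated bandwidth alone would cost more than the entire $20 bill;
# shared 60:1, it drops to under $2, leaving room for everything else.
```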

In the early days of the Internet, some providers managed their oversubscription poorly, resulting in congestion and poor quality of service for their customers. But in this day of “fiber glut,” bandwidth to the rest of the world is cheap and readily available, at least in large urban areas such as Denver. That makes it easy for broadband service providers such as foreThought.net to keep plenty of excess bandwidth on hand, so every customer gets full-speed access to their connection whenever they want it.

But there are many ISPs in areas that do not have ready access to that kind of bandwidth. There are, in fact, thousands of ISPs in rural America for whom Internet backbone connections are very expensive and hard to get. An ISP in Alaska may get its Internet connection via a T1 over satellite, for example, and that may be the only option.

Such providers must manage bandwidth carefully in order to provide broadband service of any kind to their subscribers. It is here that some Network Neutrality advocates overreach, proposing regulations that would prevent such rural providers from restricting subscriber usage that causes problems for other subscribers. I am fortunate that I do not have to make such choices.

Now we get to Comcast. Comcast (and presumably other cable broadband providers) has declared a need to manage its bandwidth, and has filtered and/or rate-limited traffic from applications such as BitTorrent. Aside from Comcast’s historical behavior of not telling its subscribers they were subject to such filtering and rate-limiting, one wonders why a company that constantly boasts of its “amazing speeds” would need to enforce such limits.

The problem could stem from Comcast simply being cheap, but at their scale, Internet bandwidth is almost free. Rather, the problem likely stems from the inherent nature of Internet-over-cable.

Cable Internet services work according to the “DOCSIS” technical standard, which specifies how Internet service is transmitted over a cable TV plant by allocating “channels” of bandwidth. The provider takes, for instance, channel 2 and, instead of using it to send video, uses it to send Internet data. So far so good.

Cable is inherently a broadcast medium. It was designed to efficiently send the exact same TV signal to thousands of houses at the same time, and it’s very good at that. What this means for Internet service, however, is that the one “Internet” channel we referred to earlier is shared by all the houses and businesses on the same coaxial cable.

Nobody is really sure exactly how many houses might share a cable node; Comcast does not release such information. For TV service, though, the more houses per node, the cheaper it is for Comcast, so we may surmise that neighborhoods of hundreds, maybe even thousands, of users share a single cable.

Thus, hundreds or thousands of users will share the same Internet bandwidth.

Using the DOCSIS Wikipedia article as a reference, we can make a few educated guesses. One downstream channel in DOCSIS is about 42 Mbps. We don’t know how many users may share a single channel, but at the speeds Comcast is now selling, it would take only three such subscribers running flat out to degrade other users’ performance.
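
Here is that guess as a quick calculation. The 42 Mbps channel comes from the figure above; the advertised tier and the number of homes per node are hypothetical, since Comcast does not publish them:

```python
# How quickly does one shared DOCSIS downstream channel fill up?
# The tier and node size below are assumptions for illustration only.

channel_mbps = 42        # approximate capacity of one DOCSIS downstream channel
tier_mbps = 16           # hypothetical advertised speed per subscriber
homes_per_node = 500     # hypothetical number of homes sharing the channel

print(f"Subscribers at full rate to fill the channel: {channel_mbps / tier_mbps:.1f}")
# -> about 2.6, i.e. three heavy users can consume the entire channel

print(f"Fair share if every home pulled data at once: {channel_mbps / homes_per_node:.2f} Mbps")
```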

There are technological improvements possible within the cable architecture, but telco-style, central-office-based services are still better right now.

In a telco-style CO deployment such as foreThought.net uses, we run a gigabit or 10-gigabit link into our central node, and every customer has a dedicated DSL or Ethernet line from their house or business to the CO. So it would take hundreds of customers maxing out their service at the same time to cause a congestion issue for us, and in practice such overuse has never occurred.
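
For comparison, here is the same back-of-the-envelope arithmetic for a CO-based network. The 1 Gbps and 10 Gbps uplinks match the figures above; the 6 Mbps per-subscriber tier is a hypothetical number used only for illustration:

```python
# Sketch of the central-office math. Each subscriber has a dedicated
# line to the CO, so the only shared resource is the uplink itself.
# The 6 Mbps DSL tier is an assumed figure, not an actual product.

dsl_tier_mbps = 6                      # hypothetical per-subscriber speed

for uplink_mbps in (1_000, 10_000):    # 1 Gbps and 10 Gbps uplinks
    full_rate_users = uplink_mbps // dsl_tier_mbps
    print(f"{uplink_mbps} Mbps uplink: {full_rate_users} subscribers "
          f"maxing out simultaneously before the uplink congests")
```
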
So we see why Comcast is opposed to Network Neutrality on this technical basis. But I wonder if they are opposed to it for other reasons as well. They are on the verge of losing their monopoly on television programming. Video services are going to move to being delivered over the Internet, and even with DOCSIS 3.0 and the additional bandwidth it promises, true video on demand will break the cable network. DSL-based broadband networks were made for this, however, since each subscriber has a dedicated line.

Business users, too, whose bandwidth needs far exceed those of homes, will put great stress on cable networks.

It seems Comcast has a perverse incentive to discourage its customers from actually using the service. Fortunately, this is not a problem foreThought.net faces.