
When Will the Internet Reach Its Limit (and How Do We Stop That from Happening)?

The head of Bell Labs Research says the Internet should deal in information rather than simply bits and bytes


Smartphones, tablets and other network-connected gadgets will outnumber humans by the end of the year. Perhaps more significantly, the faster and more powerful mobile devices hitting the market annually are producing and consuming content at unprecedented levels. Global mobile data traffic grew 70 percent in 2012, according to a recent report from Cisco, which makes much of the gear that runs the Internet. Yet the capacity of the world’s networking infrastructure is finite, leaving many to wonder when we will hit the upper limit, and what to do when that happens.

There are ways to boost capacity, of course, such as adding cables, packing those cables with more data-carrying optical fibers and off-loading traffic onto smaller satellite networks, but these steps merely delay the inevitable. The real solution is to make the infrastructure smarter. Two main components would be needed: computers and other devices that filter their content before tossing it onto the network, and a network that better understands what to do with that content rather than blindly treating it as an endless, undifferentiated stream of bits and bytes.

To find out how these major advances could be accomplished, Scientific American recently spoke with Markus Hofmann, head of Bell Labs Research in New Jersey, the research and development arm of Alcatel–Lucent that, in its various guises, is credited with developing the transistor, the laser, the charge-coupled device and a litany of other groundbreaking 20th-century technologies. Hofmann and his team see “information networking” as the way forward, an approach that promises to extend the Internet’s capacity by raising its IQ.

[An edited transcript of the interview follows.]


How do we know we are approaching the limits of our current telecom infrastructure?
The signs are subtle, but they are there. A personal example: when I use Skype to send my parents in Germany live video of my kids playing hockey, the video sometimes freezes at the most exciting moments. All in all, this doesn’t happen too often, but it has been happening more frequently lately—a sign that networks are becoming stressed by the amount of data they’re asked to carry.

We know there are certain limits that Mother Nature gives us—there is only so much information you can transmit over a given communications channel. That phenomenon is called the nonlinear Shannon limit [named after former Bell Telephone Laboratories mathematician Claude Shannon], and it tells us how far we can push with today’s technologies. We are already very, very close to this limit, within roughly a factor of two. Put another way, based on our experiments in the lab, when we double the amount of network traffic we carry today—something that could happen within the next four or five years—we will exceed the Shannon limit. That tells us there’s a fundamental roadblock here. There is no way to stretch this limit, just as we cannot increase the speed of light. So we need to work within these limits and still find ways to sustain the needed growth.
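
For orientation, the classical Shannon formula below is the starting point for the limit Hofmann refers to; the nonlinear Shannon limit he mentions is a refinement for optical fiber, where raising signal power also increases distortion in the glass, so the usable signal-to-noise ratio cannot be pushed up indefinitely.

```latex
% Classical Shannon capacity: the maximum error-free bit rate C over a
% channel of bandwidth B (in hertz) with signal-to-noise ratio S/N.
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```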

How do you keep the Internet from reaching “the limit”?
The most obvious way is to increase bandwidth by laying more fiber. Instead of having just one transatlantic fiber-optic cable, for example, you have two or five or 10. That’s the brute-force approach, but it’s very expensive—you need to dig up the ground and lay the fiber, you need multiple optical amplifiers, integrated transmitters and receivers, and so on. An alternative is to explore another dimension: spatial division multiplexing, which is all about integration. Put simply, you transmit multiple channels within a single cable. Still, boosting the existing infrastructure alone won’t be sufficient to meet growing communications needs. What’s needed is a network that no longer looks at raw data as only bits and bytes but rather as pieces of information relevant to a person using a computer or smartphone. On a given day do you want to know the temperature, wind speed and air pressure or do you simply want to know how you should dress? This is referred to as information networking.
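
A rough, back-of-the-envelope sketch of why spatial division multiplexing helps: once a single channel is near its Shannon ceiling, the remaining lever is to run several spatial channels in parallel within the same cable. The bandwidth and signal-to-noise figures below are illustrative assumptions, not numbers from the interview.

```python
from math import log2

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Classical Shannon capacity of a single channel, in bits per second."""
    return bandwidth_hz * log2(1 + snr_linear)

# Illustrative assumptions only: ~5 THz of usable optical bandwidth per
# channel and a signal-to-noise ratio of 100 (20 dB).
per_channel = shannon_capacity_bps(5e12, 100.0)

# Once one channel is near its ceiling, spatial division multiplexing adds
# capacity by running parallel spatial channels in the same cable.
for channels in (1, 2, 4, 8):
    print(f"{channels} spatial channel(s): ~{channels * per_channel / 1e12:.0f} Tbit/s")
```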

What makes information networking different from today’s Internet?
A lot of people refer to the Internet as a “dumb” network, although I don’t like that term. What drove the Internet initially was non-real-time sharing of documents and data. The system’s biggest requirement was resiliency—it had to be able to continue operating even if one or more nodes [computers, servers and so on] stopped functioning. And the network was designed to see data simply as digital traffic, not to interpret the significance of that data.

Today we use the Internet in ways that require real-time performance, whether that is watching streaming video or making phone calls. At the same time, we’re generating much more data, so having networks that just look at bits and bytes is no longer sufficient. The network has to become more aware of the information it’s carrying so it can better prioritize delivery and operate more efficiently.

How do you make a network more aware of the information it’s carrying?
There are different approaches. Today, if you want to know more about the data crossing a network—for example, to intercept computer viruses—you use software to peek into the data packet, something called deep-packet inspection. Think of a physical letter you send through the normal postal service wrapped in an envelope with an address on it. The postal service doesn’t care what the letter says; it’s only interested in the address. This is how the Internet functions today with regard to data. With deep-packet inspection, software tells the network to open the data envelope and read at least part of what’s inside. [If the data contains a virus, the inspection tool may route that data to a quarantine area to keep it from infecting computers connected to that network.] However, you can get only a limited amount of information about the data this way, and it requires a lot of processing power. Plus, if the data inside the packet is encrypted, deep-packet inspection won’t work.
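
As a rough illustration of the idea (not of any particular product), a deep-packet inspection rule amounts to looking past the address header into the payload and matching it against known signatures; the signatures and the quarantine step below are made-up placeholders.

```python
from typing import Optional

# Minimal sketch of deep-packet inspection: look past the address header and
# scan the payload itself for known byte signatures. Real inspection engines
# use dedicated hardware and far richer rule sets; these patterns are made up.
SIGNATURES = {
    b"MALWARE-SIGNATURE-001": "hypothetical-worm",
    b"\xde\xad\xbe\xef\x90\x90": "hypothetical-shellcode-marker",
}

def inspect_payload(payload: bytes) -> Optional[str]:
    """Return the name of the first matching signature, or None."""
    for pattern, name in SIGNATURES.items():
        if pattern in payload:
            return name
    return None

def handle_packet(dst_address: str, payload: bytes) -> str:
    """Route by address only, unless inspection flags the payload."""
    match = inspect_payload(payload)
    if match is not None:
        return f"quarantine ({match})"       # divert suspicious traffic
    return f"forward to {dst_address}"       # normal, address-only routing

print(handle_packet("10.0.0.7", b"GET /index.html HTTP/1.1"))
print(handle_packet("10.0.0.7", b"...MALWARE-SIGNATURE-001..."))
```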

A better option would be to tag data and give the network instructions for handling different types of data. There might be a policy that states a video stream should get priority over an e-mail, although you don’t have to reveal exactly what’s in that video stream or e-mail. The network simply takes these data tags into account when making routing decisions.
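
One existing mechanism in this spirit is the Differentiated Services (DSCP) field in the IP header, which lets an application mark a traffic class without exposing the payload. A minimal sketch, assuming a Linux-style socket and the standard "Expedited Forwarding" class for latency-sensitive streams:

```python
import socket

# DiffServ "Expedited Forwarding" class (DSCP 46), shifted into the upper six
# bits of the IP TOS byte. Routers configured to honor these markings can
# prioritize the stream without ever looking at the payload.
DSCP_EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)

# The application only tags its traffic; prioritization happens inside the
# network, which never learns what the video actually shows.
sock.sendto(b"video-frame-bytes", ("198.51.100.10", 5004))
```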

Even if a smarter Net can move data around more intelligently, content is growing exponentially. How do you reduce the amount of traffic a network needs to handle?
Our smartphones, computers and other gadgets generate a lot of raw data that we then send to data centers for processing and storage. This will not scale in the future. Rather, we might move to a model where decisions about data are made before it is placed on the network. For example, if you have a security camera at an airport, you would program the camera, or a small computer server controlling multiple cameras, to perform facial recognition locally, based on a database stored in the camera or server. [Instead of bottlenecking the network with a stream of images, the camera would communicate with the network only when it finds a suspect. That way it sends an alert message, or maybe a single digital image, only when needed.]
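
To make the airport-camera example concrete, here is a hedged sketch of the "decide locally, send only events" pattern; the face_embedding function and the watch list are stand-ins for whatever recognition system the camera or on-site server would actually run.

```python
import time

# Hypothetical watch list keyed by a face-embedding identifier. In practice
# this database (or a recognition model) would live on the camera itself or
# on a small on-site server controlling several cameras.
WATCH_LIST = {417: "subject-A", 902: "subject-B"}

def face_embedding(frame: bytes) -> int:
    """Stand-in for real on-device facial recognition (assumed, not shown)."""
    return hash(frame) % 1000

def send_alert(subject_id: str, frame: bytes) -> None:
    """Send one small alert (and at most one image) over the network."""
    print(f"ALERT: {subject_id} detected, attaching {len(frame)} bytes")

def camera_loop(capture_frame, frames: int = 300) -> None:
    """Analyze frames locally; only matches ever generate network traffic."""
    for _ in range(frames):
        frame = capture_frame()
        subject = WATCH_LIST.get(face_embedding(frame))
        if subject is not None:
            send_alert(subject, frame)
        # Non-matching frames are discarded here; no raw video leaves the site.
        time.sleep(1 / 30)   # roughly 30 frames per second

# Quick dry run with synthetic frames in place of a real camera feed.
camera_loop(lambda: b"frame-bytes", frames=3)
```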

Would this decentralization mean the end of “the cloud”?
No, it’s a different way of organizing the cloud. Today the cloud is made up of big, centralized data centers. That’s fine for certain functions, such as when you need to aggregate data on a global scale. In the future our devices, whether it’s a smartphone or a television set-top box, will play a larger role in the cloud. [In the case of a set-top box, the box would gather data about a viewer’s preferences, analyze that data right there in the living room and then send specific content recommendations back to the cable provider, rather than a stream of raw data.]
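
In the same spirit, a set-top box could reduce an evening's viewing to a tiny summary before anything leaves the living room; the titles, genres and scoring below are made-up placeholders.

```python
from collections import Counter

# Raw viewing events never leave the set-top box; they are analyzed locally.
viewing_log = [
    {"title": "match highlights", "genre": "sports",      "minutes": 45},
    {"title": "nature special",   "genre": "documentary", "minutes": 90},
    {"title": "late-night game",  "genre": "sports",      "minutes": 120},
]

def preference_summary(log, top_n: int = 2) -> dict:
    """Collapse the raw log into a few preference scores (minutes per genre)."""
    minutes_per_genre = Counter()
    for event in log:
        minutes_per_genre[event["genre"]] += event["minutes"]
    return {"top_genres": [g for g, _ in minutes_per_genre.most_common(top_n)]}

# Only this small summary would be sent upstream; the cable provider replies
# with recommendations, never seeing the raw viewing history.
print(preference_summary(viewing_log))   # {'top_genres': ['sports', 'documentary']}
```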

How does information networking address privacy concerns?
At the moment privacy is binary—you either keep your privacy or give it up almost entirely to obtain certain personalized services, such as music recommendations or online coupons. There has to be something in between that puts the user in control of their information.

The biggest problem is that it has to be simple for the user. Look at how complicated it is to manage your privacy on social networks. You end up having your photos in the photo stream of people you don’t even know. There should be the digital equivalent of a knob that lets you trade off privacy with personalization. The more I reveal about myself, the more personalized the services I receive. But I can also dial it back—if I’m willing to provide less detailed information, I can still receive some personalized, albeit less-targeted, offers.
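
A literal version of that knob could be a single openness level that determines which profile fields ever leave the device. The fields and tiers in this sketch are illustrative assumptions, not a proposed standard.

```python
# Each profile field is tagged with the minimum "openness" level at which the
# user is willing to share it (0 = share nothing, 3 = share everything).
FIELD_LEVELS = {
    "country":           1,  # coarse data: generic regional offers
    "favorite_genre":    2,  # more detail: better-targeted recommendations
    "listening_history": 3,  # most detail: fully personalized service
}

PROFILE = {
    "country": "DE",
    "favorite_genre": "jazz",
    "listening_history": ["track-1", "track-2", "track-3"],
}

def shared_profile(knob: int) -> dict:
    """Return only the fields the current knob setting allows off the device."""
    return {k: v for k, v in PROFILE.items() if FIELD_LEVELS[k] <= knob}

print(shared_profile(1))   # {'country': 'DE'} -> less personalization
print(shared_profile(3))   # full profile      -> maximum personalization
```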