
Structure of the Internet and bottlenecks August 10, 2005

Posted by Coolguy in Networks.
  • The Internet is made up of thousands of different networks (also called Autonomous Systems or AS’s) that communicate using the IP protocol
  • These networks range from large backbone providers such as UUNet and PSINet to small local ISPs
  • Each of these networks is a complex entity in itself, made up of routers, switches, fiber, microwave links, and ATM technology.
  • All of these components work together to move data packets
    through the network toward their specified destinations.
  • In order for the Internet to function as a single global network interconnecting everyone, all of these individual networks must connect to each other and exchange traffic. This happens through a process called peering.
  • When two networks decide to connect and exchange traffic, a connection called a peering session is established between a pair of routers, each located at the border of one of the two networks. These two routers periodically exchange routing information, thereby informing each other of the destination users and servers that are reachable through their respective networks.
  • There exist thousands of peering points on the Internet, each falling into one of two categories: public or private.
  • Public peering occurs at major Internet interconnection points such as MAE-East, MAE-West, and the Ameritech NAP, while private peering arrangements bypass these points. Peering can either be free, or one network may purchase a connection to another, usually bigger, network.
  • Once the networks are interconnected at peering points, the software running on every Internet router moves packets in such a way as to transport each request and data packet to its correct destination.
  • For scalability purposes, there are two types of routing protocols directing traffic on the Internet today.
  • Interior gateway protocols such as OSPF and RIP create routing paths within individual networks or AS’s
  • Exterior gateway protocol BGP is used to send traffic between different networks.
  • Interior gateway protocols use detailed information on network topology, bandwidth and link delays to compute routes through a network for the packets that enter it.
  • Since this approach does not scale to handle a large-scale network composed of separate administrative domains, BGP is used to link individual networks together to form the Internet.
  • BGP creates routing paths by simply minimizing the number of individual networks (AS’s) a data packet must traverse. While this approach does not guarantee that the routes are even close to optimal, it supports a global Internet by scaling to handle thousands of AS’s and allowing each of them to implement their own independent routing policies.
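The AS-path minimization described above can be sketched as a breadth-first search over an AS-level graph. The AS numbers and peering links below are hypothetical, and real BGP also applies per-network routing policies before path length, so this shows only the core path-selection idea:

```python
from collections import deque

# Hypothetical AS-level topology: adjacency list of peering relationships.
# The AS numbers are illustrative, not real assignments.
PEERINGS = {
    1: [2, 3],
    2: [1, 4],
    3: [1, 4, 5],
    4: [2, 3, 5],
    5: [3, 4],
}

def shortest_as_path(src, dst):
    """BGP-style route selection reduced to its core idea:
    prefer the route traversing the fewest AS's (BFS = fewest hops)."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in PEERINGS[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # dst unreachable from src

print(shortest_as_path(1, 5))  # [1, 3, 5]
```

Note that this minimizes the *number of networks crossed*, not delay or congestion, which is exactly why BGP routes are scalable but not guaranteed to be anywhere near optimal.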


  • There are four types of bottlenecks that, left unaddressed, can slow down performance and impede the ability of the Internet to handle a quickly growing number of users, services, and traffic.
  • These bottlenecks occur at the following points in the Internet infrastructure:
    1. First Mile
    2. Peering Points
    3. Backbone
    4. Last Mile

Bogged Down at the Beginning

  • Each content provider sets up a Web site in a single physical location and disseminates data, services, and information to all Internet users around the world from this central location.
  • This means that the speed with which users can access the site is necessarily limited by its First Mile connectivity – the bandwidth capacity of the Web site’s connection to the Internet.
  • In order to accommodate a growing number of users in this model, not only must each content provider keep buying larger and larger connections to his or her ISP, but so must the ISP continually expand its internal network capacity, and the same goes for neighboring networks.
  • Since it is impossible in this type of approach to keep up with exponential growth in Internet traffic, the centralized model of content serving is inherently unscalable.
  • In such situations, the demand for the content exceeds the first-mile capacity of the Web site.
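The first-mile limit is simple arithmetic: a single uplink supports only so many simultaneous users at a given per-user rate. The link size and per-user rate below are illustrative figures, not measurements:

```python
def concurrent_users(uplink_mbps, per_user_kbps):
    """How many simultaneous visitors a single first-mile link can serve
    at a given sustained per-user rate."""
    return int(uplink_mbps * 1000 // per_user_kbps)

# A 45 Mbps (T3) uplink serving pages at 50 Kbps per user:
print(concurrent_users(45, 50))   # 900 users
# Ten times the audience requires ten times the uplink:
print(concurrent_users(450, 50))  # 9000 users
```

Every growth step forces the content provider, its ISP, and the neighboring networks to upgrade in lockstep, which is the unscalability the text describes.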

Peering: Points of Congestion

  • The second type of Internet bottleneck occurs at peering points – the interconnection points between independent networks.
  • The reasons why peering points are bottlenecks are mostly economic.
  • First of all, networks have little incentive to set up free peering arrangements, since there is no revenue generation opportunity in that type of arrangement, but there are considerable setup costs.
  • At the same time, none of the large networks is going to agree to pay another large network for peering, because from a traffic perspective, they would both benefit equally from such an arrangement. As a result, large networks end up not peering with each other very much and so the limited number of peering points between them end up as bottlenecks.
  • One of the most common types of peering occurs when a small network buys connectivity from a large network. The issue that comes up in this situation is the long time it takes to provision the necessary circuits.
  • Although there is plenty of dark, or unused, fiber already in the ground, the telephone companies who own it prefer to install new fiber for each
    requested circuit in order to increase their revenues. As a result, it takes 3-6 months to provision new circuits for peering. By the time a requested circuit is installed, traffic may have grown beyond expectations, resulting in a full link – a bottleneck.
  • It is also the case that it doesn’t make economic sense for a network to pay for unused capacity. Therefore, a network generally purchases just enough capacity to handle current traffic levels, so every peering link is full. Unfortunately, this practice of running links at full capacity creates severe traffic bottlenecks. Links run at very high utilization exhibit both high packet loss rates as well as very high latency (because of router queuing delays) even for the packets that can get through. As a result, web performance slows to a crawl.
  • Peering bottlenecks may dissipate due to telecom consolidation
  • Since there are thousands of networks in the Internet, there are at least thousands of peering points in the Internet.
  • Since access traffic is evenly spread out over the Internet’s thousands of networks, most traffic must travel through a number of different networks, and, consequently, a number of peering points, to reach its destination.
  • Therefore, the peering bottleneck problem is clearly a large-scale problem inherent to the structure of the Internet.
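The claim above that links run at high utilization suffer large queuing delays can be made concrete with the standard M/M/1 queuing approximation, under which time in the system grows as 1/(1 − ρ), where ρ is link utilization. This is a textbook model, not a measurement of any particular peering link:

```python
def relative_queuing_delay(utilization):
    """M/M/1 approximation: mean time in the system grows as 1/(1 - rho).
    Returns delay relative to a nearly idle link (rho -> 0)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

for rho in (0.5, 0.9, 0.99):
    print(f"utilization {rho:.0%}: {relative_queuing_delay(rho):.0f}x delay")
```

A link run at 99% utilization shows roughly 100x the delay of an idle one, which is why "just enough capacity" peering links make web performance crawl.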

Breaking the Backbone

  • The third type of Internet bottleneck is in the capacity of the large long-haul networks that make up the Internet backbone.
  • Because today’s centralized model of content serving requires that almost all Internet traffic traverse one or more backbone networks, the capacity of these networks must be able to grow as quickly as Internet traffic.
  • A network’s capacity is determined by the capacity of its cables and routers.
  • Since fiber is cheap, plentiful and able to support high-bandwidth demands, cable capacity is not an issue.
  • Instead, it is the routers at the ends of the fiber cables that limit backbone capacity.
  • At any point in time, the speed of the packet-forwarding hardware and software in routers is limited by current technology.
  • Many ISPs run IP over switched ATM networks, because IP routers have not been able to keep pace with their traffic demands. However, an ATM network is more expensive to deploy and maintain.
  • And while backbone providers too are spending a great deal of money upgrading their routers to handle more traffic, demand will still end up far exceeding capacity.
  • For example:
    Let’s compute demand for long-haul Internet capacity. Consider an example application of video-on-demand on the Internet, the personalized Cooking Channel. Instead of watching the broadcast version of the generic Cooking Channel program on TV, each user will be able to put together his own “menu” of recipes to learn, catering to his or her own specific tastes or entertainment plans. How much Internet capacity will the personalized Cooking Channel consume? Viewer monitoring performed by Nielsen Media Research shows that cooking shows rate about 1/10 of a Nielsen rating point, or 100,000 simultaneous viewers. At a conservatively low encoding rate of 300 Kbps and 100,000 unique simultaneous streams, the personalized Cooking Channel will consume 30 Gbps.
  • Now consider WorldCom’s UUNET backbone, which carries approximately half of all Internet transit traffic. Its network comprises multiple hubs of varying capacity, ranging from T1 (1.544 Mbps) to OC48 (2,488 Mbps, or roughly 2.5 Gbps). UUNET’s Canadian and U.S. networks are linked with more than 3,600 Mbps (or 3.6 Gbps) of aggregate bandwidth between multiple metropolitan areas. However, the equivalent of 32 cooking channels alone could overload the capacity of the world’s largest backbone to transfer content from the U.S. to Canada!
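The back-of-the-envelope arithmetic in the example above is worth writing out, using only the figures given in the text:

```python
# Reproducing the Cooking Channel figures from the text.
encoding_rate_kbps = 300
simultaneous_viewers = 100_000

channel_gbps = encoding_rate_kbps * simultaneous_viewers / 1_000_000
print(f"One personalized Cooking Channel: {channel_gbps:.0f} Gbps")  # 30 Gbps

# The text's figure for UUNET's aggregate U.S.-Canada capacity:
us_canada_link_gbps = 3.6
print(f"One channel is {channel_gbps / us_canada_link_gbps:.1f}x "
      f"the {us_canada_link_gbps} Gbps U.S.-Canada capacity")
```

A single channel's 30 Gbps is already more than eight times the 3.6 Gbps cross-border figure cited, which is the point of the example: router-limited backbone capacity cannot keep pace with video-scale demand.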

The Long Last Mile

  • A common misconception is that eliminating the last-mile bottleneck will solve all Internet performance problems by providing high-speed Internet access for everyone.
  • In fact, by rate-limiting Internet access, 56 Kbps modems are saving the Internet from a meltdown.
  • If all users were able to access the Internet via multi-megabit cable modem or DSL modems, the other three types of bottlenecks would make the Internet unbearably slow.

Delivering Content from the Network Edge

  • The current centralized model of Internet content serving requires that all user requests and data travel through several networks and, therefore, encounter all four types of core Internet bottlenecks, in order to reach their destinations.
  • Under this model, the entire Internet slows down during peak traffic times.
  • Fortunately, this unacceptable end result can be avoided by replacing centralized content serving with edge delivery, a much more scalable model of distributing information and services to end users.
  • In edge delivery, the content of each Web site is available from multiple servers located at the edge of the Internet. In other words, a user/browser would be able to find all requested content on a server within its home network.
  • The edge is the closest point in the network to the end user, who is the ultimate judge of service.
  • Edge delivery solves the first mile bottleneck by eliminating the central point from which all data must be retrieved.
  • By making each Web site’s content available at multiple servers, each Web site’s capacity is no longer limited by the capacity of a single network link.
  • Edge delivery solves the peering bottleneck problem by making it unnecessary for web requests and data to traverse multiple networks and thus encounter peering points. Of course, in order to accomplish this goal, the edge delivery servers must be deployed inside ALL networks that have access customers.
  • Since there are thousands of networks and access traffic is spread evenly among them, edge delivery schemes with only 50-100 locations or deployed in only the top 10 networks cannot in any way solve the peering bottleneck problem.
  • When content is retrieved from the network edge, the demand for backbone capacity decreases, thus alleviating the bottleneck there. Again, this can only be achieved with edge servers deployed at every one of the thousands of Internet access providers.
  • While edge delivery does not solve the last-mile bottleneck, it does help alleviate the growing severity of the other three bottlenecks, bringing content delivery closer to end users.
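The request-routing side of edge delivery can be sketched very simply: direct each user to the edge server with the best measured proximity, ideally one inside the user's own network. The server names and RTT values below are hypothetical, and real systems also weigh server load and liveness:

```python
# Hypothetical RTT measurements (in ms) from one user to candidate
# edge servers; "edge-isp-a" sits inside the user's own access network.
EDGE_SERVERS = {
    "edge-isp-a": 12,
    "edge-isp-b": 48,
    "edge-backbone": 95,
}

def pick_edge_server(rtts_ms):
    """Choose the candidate server with the lowest round-trip time,
    keeping the request off peering points and backbone links."""
    return min(rtts_ms, key=rtts_ms.get)

print(pick_edge_server(EDGE_SERVERS))  # edge-isp-a
```

When the chosen server is inside the user's home network, the request crosses no peering point at all, which is how edge delivery sidesteps the second and third bottlenecks.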

Edge Delivery – Challenges

  • Content must be deployed on edge servers
  • Requires massive deployment of edge servers
  • Geographic diversity of edge servers is critical
  • Content management across disparate networks
  • Content must be kept fresh and synchronized
  • Fault tolerance is crucial
  • Monitor Internet traffic conditions
  • Request routing
  • Managing load
  • Performance monitoring
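One of the challenges listed above, keeping replicated content fresh and synchronized, is commonly handled with per-object time-to-live (TTL) checks at each edge server. The class name and the 60-second TTL below are illustrative:

```python
import time

class EdgeCacheEntry:
    """A cached copy of an object on an edge server, valid for a TTL."""

    def __init__(self, body, ttl_seconds=60):
        self.body = body
        self.fetched_at = time.time()
        self.ttl = ttl_seconds

    def is_fresh(self, now=None):
        """An entry may be served only while its age is under the TTL;
        stale entries must be revalidated against the origin site."""
        now = time.time() if now is None else now
        return (now - self.fetched_at) < self.ttl

entry = EdgeCacheEntry("<html>...</html>", ttl_seconds=60)
print(entry.is_fresh())                        # True right after fetch
print(entry.is_fresh(entry.fetched_at + 120))  # False: older than TTL
```

Short TTLs keep thousands of edge copies close to the origin's state at the cost of more revalidation traffic, which is the freshness-versus-load trade-off an edge network must tune.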


