Mon Apr 10 10:39:25 PDT 2000 -
See bottom of page for new Cisco IOS commands for load balancing! - Tim
The Internet Archive is presently connected to the rest of the world via two T1 (also known as DS1) lines, each nominally capable of carrying 1.5 Mbit/sec in and out simultaneously, for a total of a little more than 3 Mbit/sec in each direction. Getting the two T1 lines to effectively add together into a 3 Mbit pipe was a study in cisco router technology.
Cisco routers use two primary methods for deciding which interface to send a packet out of: "process" switching or "fast" switching. When the router is configured for process switching, incoming packets are queued up and the processor consults the routing table to determine where each one needs to go.
If the router is configured for fast switching, a packet's destination IP address is examined as the packet comes in, and the packet is shoved out the interface that best matches that address in the route cache. The route cache "bolts in" the routing per destination. This reduces processor time and moves packets through the router more quickly.
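The fast-switching idea described above can be sketched as a tiny model. This is only an illustration of the caching behavior, not Cisco's implementation; the routing table, addresses, and interface names below are made up:

```python
import ipaddress

# Toy model of fast switching: the first packet to a destination takes a
# full routing-table lookup (process switching), and the result is then
# "bolted in" to a per-destination cache so later packets skip the lookup.

route_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "Serial0"),
    (ipaddress.ip_network("0.0.0.0/0"), "Serial1"),  # default route
]

route_cache = {}

def switch(dst):
    if dst in route_cache:                 # fast switching: cache hit
        return route_cache[dst]
    addr = ipaddress.ip_address(dst)
    # process switching: longest-prefix match over the whole table
    best = max((net for net, _ in route_table if addr in net),
               key=lambda net: net.prefixlen)
    iface = dict(route_table)[best]
    route_cache[dst] = iface               # bolt the result into the cache
    return iface

print(switch("10.1.2.3"))   # Serial0 (full lookup, then cached)
print(switch("10.1.2.3"))   # Serial0 (served from the route cache)
print(switch("8.8.8.8"))    # Serial1 (only the default route matches)
```

Once an entry is cached, every later packet to that destination follows the cached interface without consulting the routing table, which is exactly the behavior that becomes a problem over parallel links.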
Both of these methods work fine when there is only one direct path for a route (fast switching actually wins out with one path). When there are two or more parallel connections, we encounter some problems. Let's look at the differences and at some configuration files.
When configuring two or more parallel links, the router needs to see all of the interfaces in the same "net" to treat them equally. Here is a snippet from a cisco configuration file showing this:
interface Serial0
 description The first serial line to our provider...
 ip address 126.96.36.199 255.255.255.240
 encapsulation hdlc
!
interface Serial1
 description The second serial line to our provider...
 ip address 188.8.131.52 255.255.255.240
 encapsulation hdlc
!
! default to the outside world...
ip route 0.0.0.0 0.0.0.0 184.108.40.206
ip route 220.127.116.11 255.255.0.0 18.104.22.168
!

Note that both serial interfaces have a netmask of 255.255.255.240 and both IP addresses fall within this "net". The router will treat these serial lines as going to the same place. We can put up to 7 lines in this netmask, with each line taking two IP numbers. In this "net" we just need to avoid 22.214.171.124 and 126.96.36.199. The default route points to the IP address assigned to the far end of one of the links.
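The subnet arithmetic above can be double-checked quickly. The 192.0.2.0/28 prefix below is the standard documentation range standing in for the real block; only the mask matters here:

```python
import ipaddress

# A 255.255.255.240 mask is a /28. Excluding the network and broadcast
# addresses leaves 14 usable host addresses, and at two addresses per
# serial link (one for each end) that allows 7 parallel lines.

net = ipaddress.ip_network("192.0.2.0/28")
hosts = list(net.hosts())      # .hosts() excludes network & broadcast
print(len(hosts))              # 14
print(len(hosts) // 2)         # 7 point-to-point lines
```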
As it is configured above, cisco routers will default to fast switching. A route cache will be created. You can check it with the "show ip cache" command. You will see something like:
gw>show ip cache
IP routing cache 1475 entries, 213200 bytes
Minimum invalidation interval 2 seconds, maximum interval 5 seconds,
   quiet interval 3 seconds, threshold 0 requests
Invalidation rate 0 in last second, 0 in last 3 seconds
Last full cache invalidation occurred 0:00:00 ago
Prefix/Length        Age       Interface    Next Hop
188.8.131.52/8       13:36:07  Serial1      184.108.40.206
10.0.0.0/8           0:09:13   Serial0      220.127.116.11
18.104.22.168/8      0:29:43   Serial1      22.214.171.124
[...]
126.96.36.199/8      1:15:28   Serial0      188.8.131.52
184.108.40.206/16    1:03:39   Serial0      220.127.116.11
18.104.22.168/16     0:20:21   Serial1      22.214.171.124
126.96.36.199/16     0:18:24   Serial1      188.8.131.52
[...]
184.108.40.206/24    1:05:44   Serial1      220.127.116.11
18.104.22.168/32     0:01:48   Ethernet0    22.214.171.124
126.96.36.199/32     0:04:29   Ethernet0    188.8.131.52
184.108.40.206/32    0:12:18   Ethernet1    220.127.116.11
18.104.22.168/32     0:13:19   Ethernet1    22.214.171.124
126.96.36.199/24     0:13:38   Serial1      188.8.131.52
[...]
With two serial ports, the router sees the various networks split between the two ports. It also shows host addresses on the Ethernet.
Packets going from the Internet Archive out to the net are reasonably well distributed between the two serial lines, but what if our provider used route caching toward us? If it were turned on for us, we would see something like:
gw0-sf-tlg>show ip cache
IP routing cache 22174 entries, 3567320 bytes
   8958580 adds, 8936406 invalidates, 23907 refcounts
Minimum invalidation interval 2 seconds, maximum interval 5 seconds,
   quiet interval 3 seconds, threshold 0 requests
Invalidation rate 0 in last second, 0 in last 3 seconds
Last full cache invalidation occurred 01:23:20 ago
Prefix/Length        Age       Interface      Next Hop
184.108.40.206/8     01:13:53  Hssi3/0        220.127.116.11
[...]
18.104.22.168/32     00:59:31  Ethernet0/1    22.214.171.124
[...]
126.96.36.199/32     00:02:43  Serial10/0     188.8.131.52
184.108.40.206/32    00:03:13  Serial10/2     220.127.116.11
18.104.22.168/32     00:28:52  Serial10/2     22.214.171.124
126.96.36.199/32     00:11:00  Serial10/0     188.8.131.52

So each machine is associated with a single interface: all of the traffic for that machine goes out one interface until its cache entry times out, at which point it may or may not switch to the other T1. Traffic for any single machine would therefore be limited to the speed of one serial interface. In the case of the Internet Archive, this is not what we are looking for, as we need the bandwidth of multiple T1s. So with the "no ip route-cache" command applied to these interfaces, we can turn this "feature" off. Below is a sample configuration for these interfaces on our ISP's router.
!
interface Serial10/0
 description archive A2.2
 encapsulation hdlc
 ip address 184.108.40.206 255.255.255.240
 no ip route-cache
!
interface Serial10/2
 description archive A1.2
 ip address 220.127.116.11 255.255.255.240
 encapsulation hdlc
 no ip route-cache
!

Now when we type "show ip cache" on our ISP's router we get:
gw0-sf-tlg>show ip cache 18.104.22.168 255.255.255.0
IP routing cache 20132 entries, 3234340 bytes
   9081327 adds, 9061195 invalidates, 21355 refcounts
Minimum invalidation interval 2 seconds, maximum interval 5 seconds,
   quiet interval 3 seconds, threshold 0 requests
Invalidation rate 0 in last second, 0 in last 3 seconds
Last full cache invalidation occurred 03:49:09 ago
Prefix/Length        Age       Interface      Next Hop
gw0-sf-tlg>

Nothing... But if we run "show ip route" a couple of times we get:
gw0-sf-tlg>show ip route 22.214.171.124 255.255.255.0
Routing entry for 126.96.36.199/24
  Known via "static", distance 1, metric 0
  Redistributing via ospf 2, rip
  Advertised by rip route-map IGP-TO-EGP
  Routing Descriptor Blocks:
    188.8.131.52
      Route metric is 0, traffic share count is 1
  * 184.108.40.206
      Route metric is 0, traffic share count is 1

gw0-sf-tlg>show ip route 220.127.116.11 255.255.255.0
Routing entry for 18.104.22.168/24
  Known via "static", distance 1, metric 0
  Redistributing via ospf 2, rip
  Advertised by rip route-map IGP-TO-EGP
  Routing Descriptor Blocks:
  * 22.214.171.124
      Route metric is 0, traffic share count is 1
    126.96.36.199
      Route metric is 0, traffic share count is 1

The "*" indicates the preferred path at that moment; between our two checks it moved from one serial interface to the other. Configured this way, the router places each incoming packet on whichever of the interfaces toward the Internet Archive has the shortest output queue. The advantage is that we now get full use of both serial lines for every machine. As mentioned earlier, the downside is some additional processor load on the router; since this is only two interfaces on a 7513, we expect that load to be very small.
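The shortest-queue behavior with route caching disabled can be sketched as follows. This is a simplified model of the packet-by-packet choice, not router internals; the interface names mirror the configuration above but the traffic is invented:

```python
import collections

# Toy model of per-packet balancing with "no ip route-cache": each
# packet goes to whichever parallel interface currently has the
# shortest output queue, regardless of its destination address.

queues = {"Serial10/0": collections.deque(),
          "Serial10/2": collections.deque()}

def enqueue(packet):
    iface = min(queues, key=lambda i: len(queues[i]))  # shortest queue
    queues[iface].append(packet)
    return iface

# A burst of packets for one busy host is spread over both lines
# instead of being pinned to a single cached interface.
picks = [enqueue(("198.51.100.7", n)) for n in range(6)]
print(picks)
# ['Serial10/0', 'Serial10/2', 'Serial10/0', 'Serial10/2',
#  'Serial10/0', 'Serial10/2']
```

Compare this with the cached case above, where every packet for a given host would have returned the same interface until the cache entry expired.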
If you go back to the graph that shows the load on our serial lines, you can see the difference between route-cached and non-route-cached traffic. The incoming traffic from our ISP is not route cached, so each packet takes the first or second line depending on which queue is shorter. With this configuration the two incoming lines pretty much track each other.
Since we are using route caching on our outgoing lines and we are crawling many sites at the same time, the outgoing traffic is spread about equally, though you will see some differences when more traffic goes to one site. Since the outgoing traffic's bandwidth is so low and we are not hitting the limit of our lines, it has not been worth turning off route caching in that direction at this point.
Below is a graph of the traffic on the serial lines when we turned up the second line.
More details regarding routing over parallel paths can be found from cisco in:
"Routing Theory for IP Over Equal Paths" - http://cio.cisco.com/warp/public/105/27.html
"Load Balance on Parallel Lines" - http://cio.cisco.com/warp/public/105/33.html
If you are interested in getting more details about this page, send mail to email@example.com.
ip load-sharing per-packet
Path utilization with per-packet load balancing is good, but packets for a given source-destination host pair might take different paths. Per-packet load balancing could introduce reordering of packets. This type of load balancing would be inappropriate for certain types of data traffic (such as voice traffic over IP) that depend on packets arriving at the destination in sequence.
Use per-packet load balancing to help ensure that a path for a single source-destination pair does not get overloaded. If the bulk of the data passing through parallel links is for a single pair, per-destination load balancing will overload a single link while other links have very little traffic. Enabling per-packet load balancing allows you to use alternate paths to the same busy destination.
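The trade-off between the two policies can be sketched like this. Hash-based per-destination selection is a simplification of what the router actually does, and all names and addresses here are illustrative:

```python
# Per-destination balancing pins each destination to one fixed link:
# a flow's packets stay in order, but a single busy host is capped at
# one line's bandwidth. Per-packet round-robin spreads packets across
# the links for better utilization, but a flow's packets can arrive
# out of order if the links' delays differ.

links = ["Serial0", "Serial1"]

def per_destination(dst):
    """Same destination always maps to the same link."""
    return links[hash(dst) % len(links)]

_counter = 0
def per_packet(dst):
    """Round-robin over the links, ignoring the destination."""
    global _counter
    link = links[_counter % len(links)]
    _counter += 1
    return link

flow = ["198.51.100.7"] * 4          # four packets to one busy host
print([per_destination(d) for d in flow])  # one link, four times
print([per_packet(d) for d in flow])       # alternates between links
```

This is why per-packet balancing suits the Archive's bulk transfers but is a poor fit for order-sensitive traffic such as voice over IP.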
Keep in mind that you may need to enable Cisco Express Forwarding (CEF) before this command will work. Please see the documentation from cisco.