Wednesday, June 24, 2009

CCNA Test

Passed. 825 required, I scored an 874.

I was weak exactly where I thought I would be weak - WLAN and switching (VTP, to be exact).

Either way, I'm glad to have that over and get on with the summer!

Wednesday, June 17, 2009

Etherchannel

I nearly failed to add EtherChannel to the discussion of STP. The essence of Fast EtherChannel is that, when you have multiple links between switches, STP will set all but one into a blocking state. This wastes bandwidth, as the blocked ports simply wait for a failure. Fast EtherChannel lets you bundle the links between switches to load-balance across the available connections, combining their bandwidth into a virtual interface that you administer as a single physical link. If one link in an EtherChannel group fails, the rest of the group continues to work.
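A minimal sketch of what that looks like in IOS (the interface range and group number here are made up for illustration; "mode on" forces the bundle without PAgP/LACP negotiation):

```
! Bundle two physical links into one logical EtherChannel
interface range fastethernet 0/1 - 2
 channel-group 1 mode on
!
! The bundle is then administered as a single logical interface
interface port-channel 1
 switchport mode trunk
```

The matching commands would go on the switch at the other end of the links.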

That's cool, I think.

STP and RSTP

Because of the limitations of Spanning-tree Protocol, Cisco added a few options to make it work better - and more "Cisco-like."

When an access port is connected to a workstation, it can take up to 50 seconds to transition from blocking to listening to learning to forwarding. While it is transitioning, network services are unavailable. This can be a problem on a Windows network, where authentication and DHCP requests require a connection and may time out before the port comes up. To get around this, Cisco added PortFast to STP. PortFast essentially tells the switch port to skip the transition process and go straight to forwarding. This could be dangerous if the end device is swapped out for a switch, so Cisco also added BPDU guard: should a PortFast-enabled port receive a BPDU, the switch immediately puts the port into an error-disabled state until re-enabled by an administrator.
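A sketch of both features on an access port (the interface number is hypothetical):

```
interface fastethernet 0/5
 switchport mode access
 ! Skip listening/learning and go straight to forwarding
 spanning-tree portfast
 ! Error-disable the port if a BPDU (i.e., a switch) shows up
 spanning-tree bpduguard enable
```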

Two other optional features Cisco added to STP are UplinkFast and BackboneFast. UplinkFast keeps a port on an access-layer switch ready to become the root port should the existing root port fail. Because it already knows the alternate path, it bypasses the listening and learning states when the failure occurs. BackboneFast is similar, but it covers indirect failures where the switch has no direct knowledge of the failed link to the root. When a backbone link fails, the non-root bridge starts sending BPDU's claiming it is the new root. The access-layer switch disregards this inferior BPDU, replies to the non-root bridge that it still has a path to the actual root bridge, and skips the max-age wait it would otherwise sit through before reconverging.
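Both features are enabled globally rather than per interface - a sketch:

```
! On access-layer switches with redundant uplinks
spanning-tree uplinkfast
! On every switch in the domain, to detect indirect failures
spanning-tree backbonefast
```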

Because these features are Cisco-proprietary, the IEEE developed 802.1w, or Rapid Spanning Tree Protocol. RSTP incorporates them by changing some of the port states and shortening the timers. It renames "blocking" ports to "discarding" ports, and because "listening" ports in STP are essentially discarding frames anyway, that state is lumped into discarding as well. Discarding ports receive BPDU's but do not forward frames or learn MAC addresses. Learning and forwarding remain the same.

RSTP also adds two port roles, "alternate" and "backup," alongside the root and designated roles. Backup ports stand ready to assume the designated-port role should the designated port's link fail, where there are multiple links to that segment (not necessarily toward the root). Alternate ports stand ready to assume the root-port role should the root port's link fail.

RSTP also draws a distinction between connections to switches and connections to edge devices. A full-duplex connection to another switch is defined as a "point-to-point" link type, while half-duplex connections are link type "shared." A shared link implies a hub, and those are no longer common, so there is little need to worry about them. RSTP then defines access ports as "edge" ports. Edge ports connect to end devices, and you mark them as such by issuing the "spanning-tree portfast" command. If an edge port receives a BPDU, it immediately loses its edge status and is treated as a normal spanning-tree port.

RSTP uses BPDU's as keepalives to aid rapid convergence: if three consecutive BPDU's are missed, the neighbor is considered dead, and the detecting switch floods notice of the change out to all switches (compared to STP, which sends a TCN toward the root and lets the root alert the others). All switches then age out the MAC addresses associated with the failed link. That is quite a lot faster than STP's 50-second wait.

Lastly, there is a proposal/agreement process that neighboring switches go through. When two switches are connected, they send BPDU hellos to each other. As soon as one decides it holds the designated port for that segment, it sends a proposal to forward to the other switch. The receiving switch puts all of its other non-edge ports into a discarding state to avoid loops, responds with an agreement that it will accept frames from the sender, and makes that port its root port. If a switch receives a proposal on a port whose path to the root is not optimal, it never sends an agreement; the proposal times out, and the port remains in a discarding state, ending up as an alternate port.

My understanding is that RSTP configuration is beyond the scope of the CCNA, but it's good to have an understanding of how it works to speed up convergence and to know the terms. There you have it, folks...

Tuesday, June 16, 2009

Spanning-tree

Spanning-tree Protocol is defined by the IEEE 802.1D standard. (I've never, in a real-world situation, needed to know that, but it seems like the sort of thing Cisco would put on their test to see if you were paying attention.) It is a Layer 2 protocol that prevents switching loops by figuring out the fastest path to the root bridge (or root switch).

STP bases all its calculation on cost, which it figures by default by the speed of the link. STP costs are assigned as follows:

10 Gb - Cost 2
1 Gb - Cost 4
100 Mb - Cost 19
10 Mb - Cost 100

When a switch comes online, it sends out a Bridge Protocol Data Unit (BPDU) that carries its bridge ID, which is the switch's priority (32768 by default, but configurable in multiples of 4096) combined with the MAC address of the switch. The lowest bridge ID becomes the root bridge. The root then sends out BPDU's every 2 seconds, and as long as it doesn't hear from another switch on the network with a lower bridge ID, it will continue happily as the root bridge.
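To influence the election, you lower the priority on the switch you want to win - a sketch assuming VLAN 1 (the priority must be a multiple of 4096):

```
! Explicitly set a low priority for VLAN 1
spanning-tree vlan 1 priority 4096
! Or let IOS pick a priority lower than the current root's
spanning-tree vlan 1 root primary
```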

Once the root bridge is elected, root ports and designated ports are determined throughout the network. Root ports are simply the ports with the lowest cost back to the root, and designated ports are the forwarding ports on each network segment (every segment gets exactly one, and every port on the root bridge is designated). The cost is calculated by adding all the cost values for the entire path back to the root. So if you have 3 switches, with a 1 Gb link between switch A and switch B and a 100 Mb link between switch B and switch C, the cost for the path from C back to root A is 23 (4 + 19).

If there are multiple paths to the root, the switch will use the following means of deciding which port will forward and which will block. It looks first at cost, with the lowest cost going into a forwarding state and the higher cost blocking. If multiple paths have the same cost, the switch will look at the sender's bridge ID, with the lowest winning the forwarding state. If the bridge ID's are identical (as with multiple links to the same neighbor), it will look at port priority, an arbitrarily assigned number that defaults to 128 but can be configured to prefer one path over another. If there is still a tie, the switch will then look at port number, with the lowest interface ID going on to forward.

Spanning-tree ports go through transitions, where each port ends either in a forwarding or blocking state. There are timers assigned to each state, and each state has a different function. They are:

Disabled - the port is administratively shut down and, obviously, not forwarding
Blocking - the port receives BPDU's but does not send or receive user data
Listening - the port sends and receives BPDU's but does not yet forward user data or learn MAC addresses
Learning - the port is entering MAC addresses into its table but still not forwarding user data
Forwarding - the port forwards user data as well as BPDU's

To transition from blocking to listening takes 20 seconds, from listening to learning takes 15 seconds, and from learning to forwarding takes another 15 seconds. When there is a topology change, STP can take up to 50 seconds to transition and resume network connectivity. (RSTP can converge in a couple of seconds, but that is another topic...) These timers are dictated by the root bridge, so to modify them in your network you only need to change them on the root and they will propagate throughout the network via BPDU's.
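Those timers correspond to hello (2 seconds), max-age (20 seconds) and forward-delay (15 seconds). A sketch of setting them on the root for VLAN 1 (the values shown are the defaults):

```
spanning-tree vlan 1 hello-time 2
spanning-tree vlan 1 max-age 20
spanning-tree vlan 1 forward-delay 15
```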

Later, VLAN's.

Monday, June 15, 2009

The date is set

June 24th, 2009 at 8:00 AM.

I was glad to see that SCSU has a testing center now. Now I can test without having to drive a couple hours beforehand.

I'm about to do some cramming, more updates later. I feel pretty good right now that I'll be prepared. I'll be going over switching and ACL's, while continuing to blast through subnetting and facts and figures. I wish I had gone over the facts and figures more, but at this point I will just need to make the most of it.

Anyway, less blogging. More studying. Opening up the labs as we speak...

Friday, June 12, 2009

IPSec and VPN - overview

Secure network communication is of the utmost importance in today's business world. Having worked in network security for the past several years, I find that the only thing that seems to trump security is cost. If something costs too much, companies are usually willing to take the risk. That's why regulations like HIPAA and SOX come into play, and while they sometimes make life difficult, I'm glad they are there to keep our information safe.

At any rate, IPSec is a Layer 3 protocol framework that encrypts everything from Layer 4 up to Layer 7. I call it a framework because it is actually a protocol into which you configure other protocols to work together. This way, as encryption protocols become outdated and compromised, IPSec can simply plug in the latest and greatest protocol and off you go!

IPSec handles encryption, which is securing your data so no one else can read it. Protocols used for encryption are (ordered from weakest to strongest): DES, 3DES and AES. DES uses a 56-bit key, 3DES runs DES three times, and AES uses a 128-, 192- or 256-bit key. These are all examples of symmetric encryption, where the key used to encrypt data is also used to decrypt it. Asymmetric encryption, as used in PKI, uses a public/private key pair: anything encrypted with the public key can only be decrypted with the private key, and vice versa.

To solve the problem of having to configure symmetric encryption keys on all parties attempting to secure data between themselves, Diffie-Hellman Key Exchange (or DH) is used. DH works by having both clients exchange the results of a complex mathematical equation and use the results to create a secure shared key.

SSL is an encryption method that is asymmetric, using a public and private key to secure the transmission of a shared secret key. Essentially, once the SSL connection is initiated, the sending party uses the receiving party's public key to encrypt a shared secret key that it generated. Only the receiver can decrypt it, because only they hold the private key.

Authentication and data integrity are handled by the older MD5 and the newer, more secure SHA-1. These algorithms run a mathematical function over the data to be sent, creating a "hash" that identifies it. When the receiving party runs the same algorithm against the received data, it should come up with the same hash. If the hash is different, the data has been corrupted or modified in transit and the receiving party can discard it.

Lastly, the IPSec protocol itself consists of AH (Authentication Header) and ESP (Encapsulating Security Payload). AH was developed first and provides only authentication and data integrity; ESP was developed to add the encryption that AH lacked.
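On a Cisco router these pieces come together in a transform set - a sketch only, since full IPSec configuration is beyond what I'm studying here (the set name is made up):

```
! ESP with AES for encryption and SHA-1 HMAC for integrity
crypto ipsec transform-set MY-SET esp-aes esp-sha-hmac
```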

VPN technologies are available in most modern versions of Cisco IOS, making those routers what Cisco calls "ISRs," or "Integrated Services Routers." VPN technology is also included in the ASA (Adaptive Security Appliance). The 3 main benefits of using VPN technology are: 1) it costs less to secure communication between remote sites over an Internet connection than over point-to-point leased lines, 2) it provides higher-speed connectivity to remote users by means of broadband and DSL connections in remote sites, compared to the old dial-up lines companies used to use, and 3) scalability is enhanced, because new remote sites can be brought online without significant infrastructure changes. The downside to VPN technology is that the Internet is not typically as reliable as a point-to-point line, and there is increased overhead on routers and firewalls as they handle the encryption and decryption of data.

There are two types of VPNs: Site-to-site and remote-access.

Site-to-site VPN's create a secure "tunnel" through the Internet from one site to another. The users in one site have no idea what type of equipment their data passes through; as far as they are concerned, it is a direct connection. The Internet doesn't see any of the users' data, because it is encrypted before it traverses the line. These are configured to be either "permanent" or "semi-permanent," where a permanent VPN is always up, offering speed but greater overhead because the tunnel is up whether you are using it or not. Semi-permanent VPN's are brought up by the sending device when data needs to go to the remote site and torn down when they are no longer in use. This takes a load off the router or firewall, but there will be some latency in the initial connection.

Remote-access VPN's are always semi-permanent. They are brought online when a user establishes a connection to the firewall or router by use of a client. Historically, users would load up a VPN client application configured by the administrator, enter their username and password (or token code if two-factor authentication is being used) and have network connectivity as if on the LAN. The latest and greatest method of remote-access VPN for Cisco is the SSL VPN, also called WebVPN or clientless VPN (they are all names for the same thing).

The WebVPN has a user access a secure website (watch for the https:// in the URL). Once engaged in the secure web session, they authenticate by means of a username and password. At that point, as long as the web connection stays open, they are securely connected to the LAN. In a true clientless situation, they are presented with a directory listing in the web browser of resources to which they have access. In thin-client situations, an ActiveX or Java client is downloaded and run on the remote user's computer, giving them the ability to use any TCP-based application. The clientless connection does not allow a user to run programs on their computer.

There is much, much, much more involved in these technologies, but from what I gather all we need for the CCNA is the broad overview and familiarity with the terms and protocols. I'll leave the in-depth analysis for my future certification blog...

Tuesday, June 9, 2009

Checkpoint

I sat this morning and took a 30 question quiz on Frame-relay and WAN connectivity. I smoked the Frame-relay but discovered that I'm a little short on my understanding of VPN technologies. I'll be working on that this week.

I'm in the short haul here. I am planning to take the CCNA exam next week if my study goes well this week. I've shored up the routing protocols, worked through much of the L2 protocols and feel pretty good about the L1 information as far as what makes a router work, where data is stored and what kind of connectors are used in routing.

I'm also assembling a list of labs to complete before I take the exam, and I plan to focus on the troubleshooting aspects of all these technologies. I find that, in a lab situation it's easy to follow along and configure the equipment and then step back and say "Yep, it works. Now I'm an expert." Fact of the matter is, when I get into the office every day I rarely have the opportunity to set up a new enterprise routing topology. Normally I get the call that says "my network is slow" or "why can't I connect to eBay?" Knowing how to set these things up at that point is good because with no foundation it is impossible to troubleshoot, but more importantly to me is the ability to look at a configured system and figure out why it's broken or misbehaving.

So my plan for tomorrow is to take a full practice exam and figure out where I need additional work. I already know that IPSec is something I need a refresher on, and I want to lab the routing protocols and L2 protocols to death. Anyone with some experience who could tell me what else to focus on for the exam that would like to offer pointers or comments would be much appreciated.

Monday, June 8, 2009

Frame Relay - 2

This morning we continue our discussion of Frame Relay. Major benefits of FR are its fast transfer rates and the low costs associated with it. However, a major drawback comes into play when you are using a hub-and-spoke topology, where one router is the central connection point for all other connections. The problem arises when you're running a distance-vector routing protocol (or hybrid protocol, such as EIGRP): split horizon prevents routing updates from going out the same interface on which they arrived. That means your "spokes" won't be able to communicate with each other.

To get around this issue, you either disable split-horizon on your hub router (which is a little bit like skydiving without a backup parachute - you can do it and probably will be OK, but if you're not...yikes!) or by configuring your router to use subinterfaces.

Subinterfaces are logical interfaces on a physical interface that the router sees and treats as separate interfaces. You configure them by adding a "." and an arbitrarily chosen number to the end of the "interface serial 0/0" command, such as "interface serial 0/0.100". That's all it takes to create a subinterface, after which you assign it its own IP address and mask and assign the DLCI, and the router routes traffic accordingly. Before any of that, all you need to do is enable the appropriate encapsulation type on the physical interface and bring it up. When doing so, it is good to remember that Cisco didn't become the largest internetworking company in the world by giving away its secrets: you can choose either the proprietary "cisco" encapsulation type or the industry-standard "ietf" type. If all you use is Cisco, there is no reason to change it from the default cisco type. If you are using multiple vendors' equipment, go with ietf.
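A sketch of enabling the encapsulation on the physical interface (the interface number is arbitrary):

```
interface serial 0/0
 ! Default is the proprietary cisco type; ietf for mixed vendors
 encapsulation frame-relay ietf
 no shutdown
```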

Because serial interfaces have no MAC address, we need to somehow figure out which IP address belongs to which DLCI. If the service provider uses LMI to send a list of available DLCI's, the receiving router sends out Inverse ARP requests that pretty much say "hello, DLCI, send me your IP address." The remote router sends its IP address, and the receiving router maps the DLCI to that address. Perfect. Inverse ARP is the router's automated method for figuring out which IP address goes with which DLCI. However, this method falls short when you are running multiple PVC's on a single physical interface, because Inverse ARP maps everything under that one interface - which brings back the split-horizon problem, your routing protocol won't route, and at that point you need another answer...

Enter statically mapped DLCI's. In a multipoint interface, you can map each DLCI to a subinterface, getting around the split-horizon issues. It's a little more work for the administrator, but in 3 lines of code you have configured all that is necessary for a statically mapped DLCI and your subinterfaces are all working. Wonderful.

To configure Frame Relay, you need to make sure the LMI type is the same on both ends. It is important to remember that ietf is a frame relay encapsulation type and *not* an LMI type. LMI types are cisco, ansi and q933a. These are the language that your FR routers speak to each other. Again, remember that ietf is an encapsulation type, not an LMI type.
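Setting the LMI type is one line on the physical interface - a sketch (newer IOS versions can autosense the LMI type, so the command is often unnecessary):

```
interface serial 0/0
 encapsulation frame-relay
 frame-relay lmi-type ansi
```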

There are four states that a Frame Relay circuit can be in: ACTIVE, INACTIVE, DELETED and STATIC. You view the state of each FR circuit with the "show frame-relay pvc" command. This gives a table of circuits and their status, where ACTIVE means the circuit is good and in normal operation; INACTIVE means your end is OK but the remote site is having problems, most likely offline or misconfigured; DELETED means your side is incorrectly configured (most likely an incorrect DLCI setting); and STATIC means the circuit was manually entered by the administrator rather than automatically discovered.

FR using a multipoint interface is configured when you add the "multipoint" argument on the end of the interface command where you create the subinterface, such as "interface serial 0.10 multipoint" which marks that subinterface as an interface that will hold multiple DLCI's. That's all fine and well, but if you have a hub and spoke topology, your spoke sites will not be able to see each other because split-horizon keeps routing updates from going out the interface on which they came. So you need to turn off split-horizon, or statically map the DLCI's by issuing the "frame-relay map ip 192.168.1.1 100 broadcast" command, where 192.168.1.1 is the remote router's IP address and 100 is the DLCI. The broadcast argument simply tells your interface not to treat the FR circuit like the NBMA network it is and to send broadcast traffic across the line.
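A sketch of a multipoint subinterface with static maps (the addresses and DLCI numbers are made up):

```
interface serial 0/0.10 multipoint
 ip address 192.168.1.2 255.255.255.0
 ! Map each remote neighbor's IP address to its local DLCI
 frame-relay map ip 192.168.1.1 100 broadcast
 frame-relay map ip 192.168.1.3 110 broadcast
```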

When using point-to-point subinterfaces, you need to create a subinterface for each PVC coming into your router. So, after enabling Frame Relay encapsulation on the physical interface, you tell the router it is point-to-point with the obvious command "interface serial 0.100 point-to-point". After this, you assign an IP address and netmask, issue the "frame-relay interface-dlci 100" command and enable a routing protocol. It's as easy as that.
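That sequence might look like this (subinterface number, address and DLCI are hypothetical):

```
interface serial 0/0
 encapsulation frame-relay
!
interface serial 0/0.100 point-to-point
 ip address 10.1.1.1 255.255.255.252
 frame-relay interface-dlci 100
```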

Frame Relay is a complex topic with much more that can be configured. But if my sources are correct, that can all wait until my CCNP blog comes out...

Friday, June 5, 2009

Frame Relay - 1

And now, back to our program. Even though I'm running a bit short on time this morning, I wanted to drop a few quick thoughts about Frame Relay down before I lose them. Frame Relay is a popular WAN technology because it allows high data transfer rates without having to have the cost of dedicated point-to-point lines for each connection. You can have several FR links on a single interface, reducing cost yet providing a decent throughput. It is the IT manager's dream!

FR utilizes Virtual Circuits, which are logical links through a service provider's network cloud. While the path through "the cloud" may be varied and use several different devices, from the router's perspective it is a direct connection to the other end of the virtual circuit. While a packet may go through several devices between the two ends of the virtual circuit, it still sees it as one hop. Virtual circuits define which devices communicate.

Some of the topologies for FR are hub-and-spoke, partial mesh and full mesh. These are exactly like the "networking 101" definitions, so I won't bother to explain them. All that's required to know is that as you move from the hub-and-spoke topology to a full mesh, you increase both reliability and cost.

Virtual circuits are either permanent or switched. A Permanent Virtual Circuit (PVC) is up all the time and provides instantaneous data transfer. A Switched Virtual Circuit (SVC) provides data transfer on demand. These are less common today; an example would be an ISDN line that connects when there is data to be sent to the other end and is torn down when it is inactive.

LMI (Local Management Interface) is the language two devices speak that communicate the state of virtual circuits to a router. Newer versions of IOS are able to automagically determine what LMI is in use, but in older versions it is necessary to enter it manually.

DLCI's (Data Link Connection Identifiers) are the numbers that identify where traffic is destined. They are in essence used as the hardware address identifying the receiving end's circuit, because serial interfaces have no MAC address. They are locally significant only. In other words, if I know that to reach my location in Texas I send frames out the virtual circuit with DLCI 100, the Texas router does not need to know what DLCI I send to. It only knows what number returns to me, which could be completely different. A great analogy that I read is one of flight numbers. A flight from Minnesota to Texas needs only be known by the Minnesotan getting on the plane. When he arrives in Texas, he is there and the flight number is unimportant. When he wants to return to Minnesota, chances are he'll get on a plane with a completely different flight number. As long as the sending terminal knows which plane to put him on, he arrives at his destination. That's how DLCI's work. They answer the question "to get to location X, I send data to circuit Y."

Of importance to network managers is the difference between Local Access Rate and Committed Information Rate. Local access rate is another term for line speed, and is the total bandwidth of an interface. Committed Information Rate (CIR) is the service provider's guarantee for throughput, and as their lines become more heavily subscribed, they will provide at minimum this amount. An interface's local access rate will determine the realistic bandwidth available, and the CIR of all circuits cannot exceed that amount or your line is "over subscribed." When a line is over subscribed, it will allocate bandwidth to all circuits, but perhaps not meet the CIR for each. It's then time to check your SLA and monitor your throughput so you are getting all you pay for.

To keep traffic from backing up too much, service providers use BECNs (Backward Explicit Congestion Notifications), setting a bit in the headers of frames returning to a router to tell it to slow its transmission. These are sent back toward a transmitting router when congestion builds in the line, such as from a large mismatch between the local access rate and the CIR. The converse is the FECN (Forward Explicit Congestion Notification), a bit set in frames traveling toward the receiver; the receiver can then trigger a BECN back toward the sender, which is useful when there is no return traffic (such as in a UDP stream).

Lastly, if you are sending traffic above and beyond your CIR, a service provider may mark some of your traffic as "Discard Eligible." Marked as such, most packets still arrive safely. But if the line gets too congested, these will be the first to drop.

Tomorrow, more Frame Relay. It's a complex topic, but once you understand the terms and ideas behind it, the configuration is really quite simple. More on that later.

Thursday, June 4, 2009

A break from our regularly scheduled program

Last night I turned off routers and spent a couple hours just sitting with my wife. It was refreshing and amazingly good for my attitude. Time slips by too fast, and when I hyper-focus on temporal things, it seems that ball stays in motion once it is in motion.

So this morning I was planning to get back to my regular program of networking-related topics. I sat down, dug up some info on frame-relay, started reading about virtual circuits, DLCI, LMI, BECN, FECN...picked up my guitar and started to play a bit.

I find that playing is what I'm wanting to do with my time more than anything these days. Maybe it's a mid-life crisis, maybe it's just what I'm actually programmed to do. Either way, it's what I want to do and at this point in my life I'm too old to do anything I don't want to do any more. I figure if you don't get to do what you want when you turn 40, what good is turning 40?!

So I started a little discipline that I hope to continue through the rest of the summer, if not the rest of my life. I'm trying to develop my song-writing to be productive the moment I pick up a guitar. I play and sing whatever comes to mind, recording on my phone (until I can squeeze out some time to get my home recording system worked out and always available) and not discriminating or editing. I'm just pouring it out, and later I'll go back to edit. Granted, the editing needs to happen or all you have is a bunch of free-associated drivel, but for now I'm longing to get the creative juices flowing again.

So, I say a most heart-felt "Thank you" to my lover, my song and my best friend for supporting me through the nonsense, loving me in spite of myself and always reminding me gently and lovingly what is important and lasting in life. Sandra, I love you...and don't forget, I'm your density...

Wednesday, June 3, 2009

WAN Connectivity

Moving on from layer 3 protocols to layer 2, we look into (mainly) two different implementations: Cisco's proprietary HDLC and the industry standard PPP.

Of course, there is a Cisco-proprietary L2 protocol; Cisco didn't get to be the world's largest internetworking company by giving its intellectual property away. HDLC (High-level Data Link Control) started out as an open standard but lacked multiprotocol support. Cisco took the standard, added the features it thought necessary, and made the result the default encapsulation for serial interfaces on all Cisco equipment - so if your network is all Cisco, it just works.

The nice thing about HDLC is that it is extremely simple. If the physical layer is connected properly and both ends of the circuit are using HDLC, there is nothing to get in the way. It is very efficient because it lacks configurable options. If the encapsulation was changed from HDLC on a router, you only need to enter "encapsulation hdlc" on the interface and you are done.

What HDLC lacks in configurability, PPP (Point-to-Point Protocol) offers. PPP gives the option of configuring authentication, call-back, compression and multi-link features to enhance your WAN network computing needs. Topping it all off, it is an open standard and allows communication between any vendor's equipment, making it the de facto standard for WAN communication.

PPP authentication comes in two varieties: PAP (Password Authentication Protocol) and CHAP (Challenge Handshake Authentication Protocol). PAP is only used with very old equipment, because the password is sent unencrypted in clear text and because the client controls the sending of the credentials. Essentially, the client makes the connection and sends the password when it is darn good and ready, after which no further authentication is required. This makes it vulnerable to playback attacks, where an attacker captures the data stream and replays the credentials to take over the session.

CHAP is inherently more secure because it uses MD5 hashes to protect the password, and the server (in our case, a router) requests the credentials at connection and then again randomly throughout the remainder of the session. If a client device doesn't offer up the password hash when the router asks for it, the router terminates the session immediately. This prevents the playback attack, because predicting when that call for authentication will take place would be nearly impossible.
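A sketch of CHAP between two routers (hostnames and password are made up; the password must match on both ends, and each router's username entry is the other router's hostname):

```
! On R1
hostname R1
username R2 password cisco123
!
interface serial 0/0
 encapsulation ppp
 ppp authentication chap
```

R2 gets the mirror image: "username R1 password cisco123" plus the same interface commands.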

Call-back is just as it sounds. You set the router up to call back a remote user at a predefined number, so that when a client attempts to make a connection the router hangs up on them and dials a predefined phone number where the user with that username and password should be. If it doesn't answer, the connection is not established (obviously).

Compression is used to increase effective WAN bandwidth at the expense of router CPU and memory. There is the Stacker algorithm, which is a straightforward dictionary-type compression: it reads the data stream, replaces data with a code and moves on to the next bit of data. The Predictor algorithm tries to predict the next character based on cached data it has already compressed, which is good for connection types where the protocol does not change often. Then there is the Microsoft-proprietary MPPC, which is only good for connecting Microsoft devices (blah).

Finally we have Multilink capabilities. Multilink allows PPP to combine multiple WAN links into a single, logical link. This lets you manage and monitor a single interface for throughput, combine anything from a couple of 33.6 kbps links to several T1 links for increased bandwidth, and get near-exact load balancing by chopping packets into same-size fragments and sending them out across the multilink PPP (MLPPP) bundle. To gain this functionality you will sacrifice router CPU and memory - but what do you want for nothing? A rubber biscuit?
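A sketch of bundling serial links into a multilink interface (the interface names, IP address and group number are assumptions; older IOS versions use "multilink-group 1" on the member instead):

RouterA(config)#interface multilink 1
RouterA(config-if)#ip address 10.0.0.1 255.255.255.252
RouterA(config-if)#ppp multilink group 1
RouterA(config-if)#exit
RouterA(config)#interface serial0/0
RouterA(config-if)#encapsulation ppp
RouterA(config-if)#ppp multilink
RouterA(config-if)#ppp multilink group 1

Repeat the serial block for each physical member link; note that the IP address lives on the multilink interface, not on the members.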

These features (authentication, callback, compression and multilink) are provided by one of PPP's sub-protocols: the LCP, or Link Control Protocol. Beneath that in the Layer 2 implementation of PPP are the NCP (Network Control Protocol) and the ISO implementation of HDLC. The ISO implementation of HDLC, like I said already, lacks support for multiple protocols, but it acts as an interface to Layer 1. On top of that, the NCP allows multiple network protocols to "plug in" by providing a standard interface. So HDLC gives the ability to support multiple devices, NCP gives PPP the ability to use multiple network protocols (think of it as a standard "jack" into which you plug your network protocol), and LCP provides the added functionality.

Later we go on to Frame-relay and ATM. What fun!

Hi Geoff...

Tuesday, June 2, 2009

EIGRP thoughts

EIGRP (Enhanced Interior Gateway Routing Protocol) is a Cisco-proprietary routing protocol. It is a hybrid between distance-vector and link-state protocols, incorporating the best of both worlds.

This protocol is able to route multiple protocols, such as IP, IPX and AppleTalk (should you ever need to route AppleTalk...), but the biggest difference is in how EIGRP computes its metric. By default it uses a combination of bandwidth and delay, with the sum multiplied by 256 to give a 32-bit metric, to determine the best path to any given destination. You can also factor in load and reliability to further tune the routing decisions. It is also capable of unequal-cost load balancing, splitting up traffic according to available bandwidth over up to six paths, compared to OSPF's and RIP's equal-cost load balancing, and it has a maximum hop count of 224 compared to RIP's 15. Also, EIGRP is classful by default but can be configured as classless (like RIPv2) by issuing the "no auto-summary" command.
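Unequal-cost load balancing is enabled with the variance multiplier; a sketch (the AS number and multiplier are assumptions):

RouterA(config)#router eigrp 100
RouterA(config-router)#variance 2
RouterA(config-router)#maximum-paths 6

With variance 2, any feasible-successor route whose metric is less than twice the successor's metric is also used to carry traffic, in proportion to its metric.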

It discovers neighbors using multicast packets to 224.0.0.10, and like OSPF stores the data in a topology table. Once the topology tables are in sync, the routers keep sending hello packets to keep their hold timers from expiring. Like OSPF, the hello and hold timers differ based on the topology of the network: multi-access broadcast and point-to-point networks have a hello timer of 5 seconds and a hold timer of 15 seconds, while non-broadcast multi-access networks have a 60-second hello and a 180-second hold timer. The hold timer is 3 times the hello timer, which helps me remember for some reason. Worth noting as well is that it uses the Diffusing Update Algorithm (DUAL) to compute topology changes in a fraction of a second.
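The timers can be tuned per interface; a sketch (the interface, AS number and values are assumptions):

RouterA(config)#interface serial0/0
RouterA(config-if)#ip hello-interval eigrp 100 5
RouterA(config-if)#ip hold-time eigrp 100 15

Unlike OSPF, mismatched EIGRP timers do not prevent an adjacency, because each router advertises its hold time inside its own hellos.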

The reason EIGRP can converge so quickly when the topology changes is that it chooses a successor route and also a feasible successor. The feasible successor is a backup route in case the successor goes down. It does this by tracking the metric from each neighbor router to every route that neighbor advertises - that is the advertised distance. It then adds the metric of its own link to that neighbor to come up with the feasible distance. The route with the lowest feasible distance is added to the route table as the successor. To become the feasible successor, a route must have an advertised distance lower than the feasible distance of the successor route.

Along with calculating the feasible successor route to speed convergence when a link goes down, EIGRP has another efficiency over OSPF. When there are multiple routes to a given network but no advertised distance is lower than the successor's feasible distance (so EIGRP has no feasible successor), EIGRP handles the down link differently than OSPF would. OSPF floods the link state to all routers in the area, consuming considerable bandwidth and processing power. EIGRP simply queries its neighbors for a route to that network and then computes a new successor based on the information it receives in return. Of course, it must not create a loop, so it waits out the "Stuck in Active" timer of 180 seconds (which also exceeds the hold timer in an NBMA network) to make sure it gets every possible reply.

Like OSPF, EIGRP also supports stub networks to limit queries and allow route summarization where there is only one path in and out of a network. If the only link to a network goes down, it doesn't do any good to start querying neighbors for another route.
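Marking a remote router as a stub is a single command; a sketch (the AS number is an assumption, and "connected summary" is the default stub behavior):

RouterA(config)#router eigrp 100
RouterA(config-router)#eigrp stub connected summary

Neighbors stop sending queries to a stub router, which shrinks the query scope when a route is lost.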

Configuring EIGRP is quite simple in its most basic form. Like RIP, you simply initialize the protocol using the "router" command. The major difference here is that EIGRP requires an Autonomous System number that must be the same on all EIGRP routers, or they will not exchange updates with one another. Contrast this with OSPF's process ID, because they come in the same location of the router command:

EIGRP
RouterA(config)#router eigrp 100

100 is the AS number - "randomly" assigned in the sense that the planner of the network picks it, but every router in the EIGRP domain must use the same one.

OSPF
RouterA(config)#router ospf 100

In this case, 100 is the process ID, which does not need to match anything on other routers. OSPF routers exchange updates based on the areas assigned when advertising networks, not the process ID.

From here, it is imperative to configure the bandwidth of serial links correctly, since Cisco IOS assumes all serial links are T1s (Ethernet interfaces report their bandwidth accurately). It is also important to remember that, if configuring a network as a stub network, you need to turn off auto-summarization or manually configure a summary route. Also for contrast: to view OSPF's topology table you enter "show ip ospf database," where in EIGRP you enter "show ip eigrp topology," which just makes more sense to me. I believe in "calling it what it is"...
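Setting the bandwidth is a per-interface command; a sketch (the interface and speed are assumptions):

RouterA(config)#interface serial0/0
RouterA(config-if)#bandwidth 512

The value is in kilobits per second, and it only affects metric calculations (and things like QoS) - it does not change the actual line rate.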

With all that said, there are only two drawbacks to EIGRP. One is that you cannot use multiple vendors' equipment - EIGRP is Cisco-only. The other is that it consumes a lot of processing power and memory, because it maintains a topology, neighbor and routing table for each protocol being routed. While that isn't a big deal now that the world runs on IP, it could be a consideration.

EIGRP is a nice protocol - efficient and easy to configure. If you only have Cisco equipment, it only makes sense. Unless you are like me and really loathe basing your infrastructure on proprietary protocols. I just like having options.

Monday, June 1, 2009

OSPF thoughts

Open Shortest Path First (OSPF) is a widely-used link-state routing protocol. It is vendor neutral, which contributes to its popularity, since many large networks include equipment from various manufacturers.

OSPF is completely classless - it incorporates networks and subnet masks into its routing updates and topology tables, and this information is used in computing the shortest path to any given network. That is a nice thing about OSPF, since you don't have to worry too much about discontiguous networks. You simply specify the network you plan to advertise in OSPF (along with its wildcard mask) and the router does the rest of the work.

The downside is that computing the shortest path to all known networks is resource-intensive, and in the event of a link going down, the state of that link is flooded to all routers inside the area, and Dijkstra's Shortest Path First algorithm is run by them all. This takes a lot of memory and CPU power, which can cause a slowdown in your network if it is large. If you have a link "flapping," or going up and down rapidly, this can cause an unstable condition as each router computes and recomputes the paths each time it receives an update (or LSU - link-state update).

To mitigate this condition on broadcast multi-access and non-broadcast multi-access networks (such as Frame Relay), OSPF elects a Designated Router and a Backup Designated Router in case the DR fails. The election is won by the router with the highest OSPF priority, with the numerically highest router ID as the tiebreaker. The router ID is taken from the logical (loopback) interfaces first and then the physical interfaces, so oftentimes a router will be configured with a loopback address specifically to function as the router ID. On point-to-point networks, where there are only two devices, there is no need for an election. The beauty of this is that any link-state updates are sent only to the DR and BDR, and those routers then send the update to the remaining routers in the area. Updates are flooded using multicast addresses: routers send updates to the DR/BDR at 224.0.0.6, and the DR then sends updates out to 224.0.0.5, the multicast address for all OSPF routers.
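A sketch of pinning down the router ID with a loopback (the address here is an assumption):

RouterA(config)#interface loopback0
RouterA(config-if)#ip address 10.255.255.1 255.255.255.255

Since a loopback interface never goes down, the router ID stays stable even when physical interfaces flap.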

Other points of note in configuring OSPF: when you initiate OSPF by entering "router ospf" in global config mode on your router, you specify a "process ID," a number between 1 and 65,535. This number is only significant to the individual router - it is not advertised to other routers and does not need to be the same on each router within an area. OSPF uses areas to associate routers together, and the area gets configured when entering the network to be advertised. All areas must somehow connect to Area 0, the backbone area for the OSPF autonomous system.

When configuring OSPF, also of great importance is the use of wildcard masks, which are the opposite of subnet masks. In essence, a 1 in a wildcard mask means "ignore this bit" and a 0 means "check this bit." When you enter a wildcard mask of 0.0.0.255, the router will act on any address that matches the first three octets, with anything in the last octet being a match since it is ignored. So, to advertise the 192.168.1.0/24 network, we would enter the following command after the "router ospf 4" command (the 4 is arbitrary - it's the process ID for this instance of OSPF on this router):

network 192.168.1.0 0.0.0.255 area 0

This puts all 192.168.1.0 traffic in our backbone area 0.

There is much more to know about OSPF; these are really just my summaries of interesting points. A greater discussion of areas, LSAs/LSUs and elections could be had. The main points are that OSPF is a link-state routing protocol that is completely classless, advertises the netmask along with the network, and interoperates with multiple vendors' equipment because it is non-proprietary. From there you just need to draw it out and configure it a few times...