Monday, July 16, 2007

Network Switching (Part VI)

The Rule of the Network Road


Network administrators and designers have traditionally strived to design networks using the 80/20 rule. Under this rule, a network designer tries to design a network in which 80 percent of the traffic stays on local segments and 20 percent of the traffic crosses the network backbone. This was an effective design during the early days of networking, when the majority of LANs were departmental and most traffic was destined for data that resided on the local servers. However, it is not a good design in today's environment, where the majority of traffic is destined for enterprise servers or the Internet.

A switch's ability to create multiple data paths and provide swift, low-latency connections allows network administrators to permit up to 80 percent of the traffic on the backbone without causing a massive overload of the network. This ability allows for the introduction of many bandwidth-intensive uses, such as network video, video conferencing, and voice communications.

Multimedia and video applications can demand 1.5Mbps or more of continuous bandwidth. In a typical environment, users can rarely obtain this bandwidth if they share an average 10Mbps network with dozens of other people; the video will also look jerky if the data rate is not sustained. Supporting such applications requires a means of providing greater throughput, and the ability of switches to provide dedicated bandwidth at wire-speed meets this need.
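A quick back-of-the-envelope check makes the shared-bandwidth point concrete (a sketch in Python; the 30-user count is a hypothetical stand-in for "dozens of other people"):

    # Can a shared 10Mbps segment sustain a 1.5Mbps video stream?
    SHARED_MBPS = 10.0      # classic shared 10BaseT segment
    USERS = 30              # hypothetical number of people sharing the wire
    VIDEO_MBPS = 1.5        # continuous bandwidth demanded by video

    per_user = SHARED_MBPS / USERS
    print(f"average share per user: {per_user:.2f} Mbps")          # ~0.33 Mbps
    print("video ok on shared segment:", per_user >= VIDEO_MBPS)   # False
    print("video ok on dedicated port:", SHARED_MBPS >= VIDEO_MBPS)  # True

On a switched port, each station gets the full wire rate to itself, which is what makes the dedicated-bandwidth argument work.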


Switched Ethernet Innovations


Around 1990, many vendors offered popular devices known as intelligent multiport bridges; the first known usage of the term switch was the EtherSwitch, which Kalpana brought to the market in 1990. At the time, these devices were used mainly to connect multiple segments; they usually did very little to improve performance beyond the inherent benefits bridges provide, such as filtering and broadcast suppression. Kalpana changed that by positioning its devices as performance enhancers. A number of important features made the Kalpana switches popular, such as multiple transmission paths for network stations and cut-through switching.

Cut-through switching reduced the delay problems associated with standard bridges by providing multiple transmission paths to network devices. Each device could have its own data path to the switch and did not need to be in a shared environment. Kalpana accomplished this by dedicating one pair of the station wiring to transmitting data and one pair to receiving data. This improvement allowed the Kalpana designers to ignore the constraints of collision detection and carrier sense, because each cable was dedicated to one station. Kalpana continued its history of innovation with the introduction of full-duplex Ethernet in 1993.

Full−Duplex Ethernet

Prior to the introduction of full-duplex (FDX) Ethernet, Ethernet stations could either transmit or receive data; they could not do both at the same time, because there was no way to ensure a collision-free environment. This was known as half-duplex (HDX) operation. FDX had been a feature of WANs for years, but only with advances in LAN switching technology did it become practical to consider FDX on the LAN. In FDX operation, both the transmission and reception paths can be used simultaneously. Because FDX operation uses a dedicated link, there are no collisions, which greatly simplifies the MAC protocol. Some slight modifications in the way the packet header is formatted enable FDX to maintain compatibility with HDX Ethernet.

You don't need to replace the wiring in a 10BaseT network, because FDX operation runs on the same two-pair wiring used by 10BaseT. It simultaneously uses one pair for transmission and another pair for reception. A switched connection has only two stations: the station itself and the switch port. This setup makes simultaneous transmission possible and has the net effect of doubling the bandwidth of a 10Mbps LAN.

This last point is an important one. In theory, FDX operation can provide double the bandwidth of HDX operation, giving 10Mbps in each direction. However, achieving this would require that the two stations have a constant flow of data and that the applications themselves benefit from a two-way data flow. FDX links are extremely beneficial for connecting switches to each other; if there were servers on both sides of the link between switches, the traffic between the switches would tend to be more symmetrical.



Fast Ethernet


Another early innovation in the switching industry was the development of Fast Ethernet. Ethernet as a technology has been around since the early 1970s, but by the early 1990s its popularity began to wane. Competing technologies such as FDDI, running at 100Mbps, showed signs of overtaking Ethernet as a de facto standard, especially for high-speed backbones.

Grand Junction, a company founded by many of the early Ethernet pioneers, proposed a new Ethernet technology that would run at 10 times the 10Mbps speed of Ethernet. It was joined by most of the top networking companies, with the exception of Hewlett-Packard (HP), which had a competing product. HP's product, known as 100VG-AnyLAN, was in most respects far superior to the product proposed by Grand Junction. It had a fatal flaw, though: It was incompatible with existing Ethernet standards and was not backward compatible with most of the equipment in use at the time. While the standards bodies debated the merits of each camp, the marketplace decided for them. Fast Ethernet was the overwhelming winner, so much so that even HP now sells Fast Ethernet on almost all its products.

Note In 1995, Cisco purchased both Kalpana and Grand Junction and incorporated their innovations into its hardware. These devices became the Catalyst line of Cisco products.


Gigabit Ethernet


In order to implement Gigabit Ethernet (GE), the CSMA/CD method was changed slightly to maintain a 200-meter collision diameter at gigabit-per-second data rates. This slight modification prevents an Ethernet packet from completing transmission before the transmitting station senses a collision, which would violate the CSMA/CD rule. GE maintains a minimum packet length of 64 bytes, but provides additional modifications to the Ethernet specification: the minimum CSMA/CD carrier time and the Ethernet slot time have been extended from 64 bytes to 512 bytes, and packets smaller than 512 bytes have an extra carrier extension added to them. These changes, which can impact the performance of small packets, are offset by a feature called packet bursting, which allows servers, switches, and other devices to deliver bursts of small packets in order to utilize the available bandwidth.

Because it follows the same form, fit, and function as its 10Mbps and 100Mbps predecessors, GE can be integrated seamlessly into existing Ethernet and Fast Ethernet networks using LAN switches or routers to adapt between the different physical line speeds. Because GE is Ethernet, only faster, network managers will find the migration from Fast Ethernet to Gigabit Ethernet as smooth as the migration from Ethernet to Fast Ethernet.
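A simplified calculation shows why carrier extension penalizes small frames and why packet bursting matters (a sketch that ignores preamble and interframe gap; only the 64-byte and 512-byte figures come from the text above, and the 32-frame burst size is an invented example):

    # A 64-byte frame padded to the 512-byte GE slot time uses only a
    # fraction of the wire; bursting amortizes that cost because only
    # the first frame of a burst carries the carrier extension.
    SLOT_BYTES = 512    # extended GE slot time
    FRAME_BYTES = 64    # minimum Ethernet frame

    single = FRAME_BYTES / SLOT_BYTES
    print(f"single 64-byte frame efficiency: {single:.0%}")       # 12%

    BURST = 32          # hypothetical frames sent back to back
    burst = (BURST * FRAME_BYTES) / (SLOT_BYTES + (BURST - 1) * FRAME_BYTES)
    print(f"efficiency with a {BURST}-frame burst: {burst:.0%}")  # ~82%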



Avoiding Fork−Lift Upgrades


Although dedicated switch connections provide the maximum benefits for network users, you don't want to get stuck with fork-lift upgrades. In a fork-lift upgrade, you pay more to upgrade your computer or networking equipment than it would cost to buy the equipment already installed: the vendor knows that you are not going to buy all new equipment, so it sells you the upgrade to the bigger, better, faster equipment at an enormous price. It may sometimes be necessary to support legacy equipment.

Fortunately, Ethernet switches let you provide connectivity in a number of ways. You can attach shared hubs to any port on the switch in the same manner that you connect end stations. Doing so makes for a larger collision domain, but you avoid paying the high costs of upgrades. In this lower-cost setup, a backbone switch is created in which each port is attached to the now-larger collision domain or segment. This switch replaces existing connections to routers or bridges and provides communication between each of the shared segments.

Typically, your goal would be to migrate toward single-station segments as bandwidth demands increase. This migration will provide the increased bandwidth you need without wholesale replacement of existing equipment or cabling.

Network Switching (Part V)

Switched Forwarding

Switches forward data based on the destination MAC address contained in the frame's header, which allows them to replace devices such as hubs and bridges. After a frame is received and the MAC address is read, the switch forwards the data according to the switching mode it is using. This strategy produces very low latency and very high forwarding rates. Switches use three switching modes to forward information through the switching fabric:

  • Store-and-forward
  • Cut-through
  • FragmentFree

Tip Switching fabric is the route data takes to get from the input port on the switch to the output port on the switch. The data may pass through wires, processors, buffers, ASICs, and many other components.

Store−and−Forward Switching

Store-and-forward switching pulls the entire frame into the switch's onboard buffers, reads the whole frame, and calculates its cyclic redundancy check (CRC). If the CRC carried in the frame matches the CRC calculated by the switch, the destination address is read and the frame is forwarded out the correct port on the switch. If the CRCs do not match, the frame is discarded. Because this type of switching waits for the entire frame before forwarding, latency can become quite high, which can delay network traffic.
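A minimal sketch of this decision in Python (the frame layout, a payload followed by a 4-byte CRC-32 as in the Ethernet FCS, is simplified; zlib's CRC-32 uses the same polynomial as Ethernet's):

    import zlib

    def store_and_forward(frame: bytes):
        """Buffer the whole frame, verify its CRC, then forward or discard."""
        payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
        if zlib.crc32(payload) == received_crc:   # CRCs match: frame is good
            destination = payload[:6]             # read the destination MAC
            return ("forward", destination)
        return ("discard", None)                  # bad frames never leave the switch

    # Usage: a well-formed frame and a corrupted copy of it.
    payload = b"\x00\x11\x22\x33\x44\x55" + b"example payload"
    good = payload + zlib.crc32(payload).to_bytes(4, "big")
    bad = b"\xff" + good[1:]                      # corrupt a byte, keep the old CRC
    print(store_and_forward(good))                # ('forward', <6-byte destination MAC>)
    print(store_and_forward(bad))                 # ('discard', None)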

Cut−Through Switching

Sometimes referred to as realtime switching or FastForward switching, cut-through switching was developed to reduce the latency involved in processing frames as they arrive at the switch and are forwarded on to the destination port. The switch begins by pulling the frame header into its network interface card buffer. As soon as the destination MAC address is known (usually within the first 13 bytes), the switch forwards the frame out the correct port.

This type of switching reduces latency inside the switch; however, if the frame is corrupt because of a late collision or wire interference, the switch will still forward the bad frame. The destination receives the bad frame, checks its CRC, and discards it, forcing the source to resend the frame. This process wastes bandwidth, and if it occurs too often it can have a major impact on the network. Cut-through switching is also limited by its inability to bridge different media speeds.

Some network protocols (including NetWare 4.1 and some Internet Protocol [IP] networks) use windowing technology, in which multiple frames may be sent without waiting for a response. In this situation, the latency across a switch is much less noticeable, so the on-the-fly switch loses its main competitive edge. The lack of error checking also poses a problem for large networks. That said, there is still a place for the fast cut-through switch in smaller parts of large networks.

FragmentFree Switching

Also known as runtless switching, FragmentFree switching was developed to solve the late-collision problem. These switches perform a modified version of cut-through switching: because most corruption in a frame occurs within the first 64 bytes, which is also the minimum valid size for an Ethernet frame, the switch examines the entire first 64 bytes instead of just the first 13 bytes containing the destination MAC address. By verifying the first 64 bytes of the frame, the switch determines whether the frame is good or a collision occurred during transit.

Combining Switching Methods

To resolve the problems associated with the switching methods discussed so far, a combined method was developed. Some switches, such as the Cisco Catalyst 1900, 2820, and 3000 series, begin with either cut-through or FragmentFree switching. Then, as frames are received and forwarded, the switch also checks each frame's CRC. Because the frame is forwarded as soon as the destination MAC address is read, a frame whose CRC turns out to be bad has already left the switch; but by counting these bad frames, the switch can take a proactive role and change from cut-through mode to store-and-forward mode if too many bad frames are forwarded. This method, in addition to the development of high-speed processors, has reduced many of the problems associated with switching.

Only the Catalyst 1900, 2820, and 3000 series switches support cut-through and FragmentFree switching. You might ponder why the faster Catalyst series switches do not support this seemingly faster method of switching. The answer is that store-and-forward switching is not necessarily slower than cut-through switching. When switches were first introduced, the two modes were quite different; with better processors and integrated-circuit technology, store-and-forward switching can now perform at the limits of the physical wire, so the end user sees no difference between the switching methods.
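The error-watching behavior described above can be sketched as a simple counter (a sketch; the 5 percent threshold and 100-frame sample size are made-up illustrations, not Cisco defaults):

    class AdaptiveSwitchPort:
        """Forward in cut-through mode, but fall back to store-and-forward
        if too large a share of the forwarded frames turn out to be bad."""
        ERROR_THRESHOLD = 0.05      # hypothetical: 5% bad frames
        MIN_SAMPLE = 100            # hypothetical: wait for a fair sample

        def __init__(self):
            self.mode = "cut-through"
            self.frames = 0
            self.errors = 0

        def record_frame(self, crc_ok: bool):
            self.frames += 1
            self.errors += 0 if crc_ok else 1
            if (self.mode == "cut-through"
                    and self.frames >= self.MIN_SAMPLE
                    and self.errors / self.frames > self.ERROR_THRESHOLD):
                self.mode = "store-and-forward"   # proactive fallback

    port = AdaptiveSwitchPort()
    for i in range(200):
        port.record_frame(crc_ok=(i % 10 != 0))   # 10% of frames arrive corrupted
    print(port.mode)                              # store-and-forward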

Switched Network Bottlenecks


This section will take you step by step through how bottlenecks affect performance, some of the causes of bottlenecks, and things to watch out for when designing your network. A bottleneck is a point in the network at which data slows due to collisions and too much traffic directed to one resource node (such as a server). In these examples, I will use fairly small, simple networks so that you will get the basic strategies that you can apply to larger, more complex networks.

Let's start small and slowly increase the network size. We'll take a look at a simple way of understanding how switching technology increases the speed and efficiency of your network. Bear in mind, however, that increasing the speed of your physical network increases the throughput to your resource nodes and doesn't always increase the speed of your network. This increase in traffic to your resource nodes may create a bottleneck.


Figure 1.6 shows a network that has been upgraded to 100Mbps links to and from the switch for all the nodes. Because every device can send data at 100Mbps (wire-speed) to and from the switch, any link that receives data from multiple nodes must be upgraded to a faster link than the rest in order to process and fulfill the data requests without creating a bottleneck. Because all the nodes, including the file servers, are sending data at 100Mbps, the links to the file servers, which are the target of the data transfers from all the devices, become the bottleneck in the network.



Figure 1.6: A switched network with only two servers.


Notice that the sheer number of clients sending data to the servers can overwhelm the cable and slow the data traffic. This concept applies to many types of physical media topologies; in this demonstration, we will use Ethernet 100BaseT. Ethernet 10BaseT and 100BaseT are the media most commonly found in today's networks.

We'll now upgrade the network to alleviate the bottleneck on the physical link from the switch to each resource node or server. By upgrading this particular link to a Gigabit Ethernet link, as shown in Figure 1.7, you can successfully eliminate the bottleneck.




Figure 1.7: The addition of a Gigabit Ethernet link on the physical link between the switch and the server.

It would be nice if all network bottleneck problems were so easy to solve. Let's take a look at a more complex model. In this situation, the demand nodes are connected to one switch and the resource nodes are connected to another switch. As you add users to switch A, a new bottleneck appears. As you can see from Figure 1.8, the bottleneck is now on the trunk link between the two switches. Even if each port on the switches is assigned to a VLAN, a trunk link without VTP pruning enabled will carry all the VLANs to the next switch.



Figure 1.8: A new bottleneck on the trunk link between the two switches.

To resolve this issue, you could implement the same solution as in the previous example and upgrade the trunk between the two switches to Gigabit Ethernet. Doing so would eliminate the bottleneck. You want to put switches in place whose throughput is never blocked by the number of ports; such switches are referred to as non-blocking switches.

Non−Blocking Switch vs. Blocking Switch

We call a switch a blocking switch when the switch bus or components cannot handle the theoretical maximum throughput of all the input ports combined. There is a lot of debate over whether every switch should be designed as a non-blocking switch, but for now this remains only a dream, considering the current pricing of non-blocking switches.

Let's get more complicated and introduce another solution: implementing two physical links between the two switches and using full-duplex technology. Full duplex essentially means that data is sent on one physical path and received on another. This setup not only virtually guarantees a collision-free connection, but also can raise utilization to almost 100 percent on each link. You now have 200 percent throughput by utilizing both links: if you had 10Mbps on the wire at half duplex, implementing full duplex gives you 20Mbps flowing through the wires. The same goes for a 100BaseT network: instead of 100Mbps, you now have a 200Mbps link.

Tip If the interfaces on your resource nodes can implement full duplex, it can also be a secondary solution for your servers.

Almost every Cisco switch has an acceptable throughput level and will work well in its own layer of the Cisco hierarchical switching model or its designed specification. Implementing VLANs has also become a popular solution for breaking a segment into smaller broadcast domains.
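The full-duplex arithmetic above is trivial but worth making explicit (a sketch):

    def full_duplex_aggregate(wire_speed_mbps: float) -> float:
        # One path transmits while the other receives, so the two
        # directions add up: aggregate link throughput doubles.
        return 2 * wire_speed_mbps

    print(full_duplex_aggregate(10.0))    # 20.0  -- half-duplex 10Mbps becomes 20Mbps
    print(full_duplex_aggregate(100.0))   # 200.0 -- 100BaseT becomes a 200Mbps link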

Internal Route Processor vs. External Route Processor

Routing between VLANs has been a challenging problem to overcome. In order to route between VLANs, you must use a Layer 3 route processor or router. There are two types of route processors: an external route processor uses an external router to route data from one VLAN to another, whereas an internal route processor uses internal modules and cards located on the same device to implement the routing between VLANs.

Now that you have a pretty good idea how a network should be designed and how to monitor and control bottlenecks, let's take a look at the general traffic rule and how it has changed over time.

Network Switching (Part IV)

Why Upgrade to Switches?


As an administrator, you may not realize when it is time to convert your company to a switched network and implement VLANs. You may also not be aware of the benefits of replacing your Layer 2 hubs and bridges with switches, or of how adding modules to your switches to implement routing and filtering can improve your network's performance.

When your flat topology network starts to slow down due to traffic, collisions, and other bottlenecks, you will want to investigate. Your first step is to find out what types of data are flowing through your network. If you have a network sniffer or similar device at your command, you may begin to see over-utilization errors occurring when Ethernet network utilization rises above only 40 percent. Why would this happen at such a low utilization percentage? Peak efficiency on a flat topology Ethernet network is about 40 percent utilization; sustained utilization above this level is a strong indicator that you may want to upgrade the physical network to a switched environment.

When state-of-the-art Pentiums start performing poorly, many network administrators don't realize the situation may be due to the hundreds of other computers on their flat hub-and-bridge networks. To resolve the issue, the administrator may even upgrade a PC to a faster CPU or more RAM, which only allows the PC to generate more input/output (I/O) and increases the saturation on the network. In this type of environment, every data packet is sent to every machine, and each station has to process every frame on the network. The processors in the PCs handle this task, taking away processing power needed for other tasks.

Every day, I visit users and networks with this problem. When I upgrade them to a switched network, it is typically a weekend job. The users leave on Friday with their high-powered Pentiums, stacked with RAM, acting like 486s. When they come back Monday morning, we hear that their computers boot up quickly and run faster, and that Internet pages come up instantly. In many cases, slow Internet access times were blamed on the users' WAN connections. The whole time, the problem wasn't their WAN connections; it was their LAN, saturated to a grinding halt with frames from every interface on the network.

When network performance gets this bad, it's time to call in a Cisco consultant or learn how to implement switching. Either way, you are reading this book because you are very interested in switching or in becoming Cisco certified. Consider yourself a network hero of this generation in training.

To fix the immediate problems on a 10BaseT network with Category 3 or Category 4 cabling, you might need to upgrade to Category 5 cabling and implement a Fast Ethernet network. Then you need to ask yourself: Is this only a temporary solution for my network? What types of new technologies are we considering? Are we going to upgrade to Windows 2000? Will we be using Web services or implementing Voice over IP? Do we have any requirements for multicast, unicast, video conferencing, or CAD applications? The list of questions goes on. Primarily, you need to ask yourself whether this is a temporary solution or one that will stand the test of time.

Unshielded Twisted−Pair Cable

Category 3 unshielded twisted-pair (UTP) cable is certified for bandwidths of up to 10Mbps with signaling rates of up to 16MHz. Category 4 UTP cable is certified for bandwidths of up to 16Mbps with signaling rates of up to 20MHz; it is classified as voice and data grade cabling. Category 5 cable is certified for bandwidths of up to 100Mbps and signaling rates of up to 100MHz. Newer cabling standards for Category 5e and Category 6 cable support bandwidths of up to 1Gbps.

In many cases, network administrators don't realize that implementing a switched network will allow the network to run at almost wire speed. Upgrading the backbone (not the wiring), eliminating the data collisions, making the network segments smaller, and getting those users off hubs and bridges is the answer. In terms of per-port costs, this is usually a much cheaper solution, and it's one you can grow with. Of course, a 100Mbps network never hurts; but even a correctly implemented switched 10BaseT network can provide almost the same increase in performance.

Network performance is usually measured by throughput. Throughput is the overall amount of data traffic that can be carried by the physical lines through the network. It is measured by the maximum amount of data that can pass through any point in your network without suffering packet loss or collisions. Packet loss is the total number of packets transmitted at the speed of the physical wire minus the number that arrive correctly at their destination. When you have a large percentage of packet loss, your network is functioning less efficiently than it would if the multiple collisions of transmitted data were eliminated.

The forwarding rate is another consideration in network throughput. The forwarding rate is the number of packets per second that can be transmitted on the physical wire. For example, if you are sending 64-byte packets on a 10BaseT Ethernet network, you can transmit a maximum of about 14,880 packets per second, as the quick check below confirms.

Poorly designed and implemented switched networks can have awful effects. Let's take a look at the effects of a flat area topology and how we can design, modify, and upgrade Ethernet networks to perform as efficiently as possible.
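The 14,880 figure can be verified quickly: each 64-byte frame also costs an 8-byte preamble and a 12-byte interframe gap on the wire (standard Ethernet overhead figures; a sketch):

    LINE_RATE_BPS = 10_000_000          # 10BaseT line rate
    FRAME, PREAMBLE, GAP = 64, 8, 12    # bytes on the wire per packet

    bits_per_packet = (FRAME + PREAMBLE + GAP) * 8   # 672 bits
    max_pps = LINE_RATE_BPS // bits_per_packet
    print(max_pps)                                   # 14880 packets per second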


Properly Switched Networks


Properly switched networks use the Cisco hierarchical switching model to place switches in the proper location in the network and apply the most efficient functions to each. In the model you will find switches in three layers:

  • Access layer
  • Distribution layer
  • Core layer

The Access layer's primary function is to connect to the end user's interface. It switches traffic between ports and forwards broadcast traffic to the members of its broadcast domain. It is the access point into the network for the end users, and it can utilize lower-end switches such as the Catalyst 1900, 2800, 2900, 3500, 4000, and 5000 series switches.

The Access layer switch blocks meet at the Distribution layer, which uses medium-end switches with a little more processing power and stronger ASICs. The function of this layer is to apply filters, queuing, security, and, in some networks, routing. It is the main processor of frames and packets flowing through the network. Switches found at this layer belong to the 5500, 6000, and 6500 series.

The Core layer's only function is to route data between segments and switch blocks as quickly as possible. No filtering or queuing functions should be applied at this layer. The highest-end Cisco Catalyst switches are typically found here, such as the 6500, 8500, 8600 GSR, and 12000 GSR series switches.

How you configure your broadcast and collision domains, whether in a switched network or a flat network topology, can have quite an impact on the efficiency of your network. Let's take a look at how utilization is measured and the different effects bandwidth can have on different media types and networks.


Network Utilization


Network administrators differ on what utilization percentages represent normal usage of the network. Table 1.1 shows the average utilization that should be seen on the physical wire. Exceeding these averages is a sign that a problem exists in the network, that you need to make changes to the network configuration, or that you need to upgrade the network.


Table 1.1: The average limits in terms of physical wire utilization. Exceeding these values indicates a network problem.

You can use a network monitor such as a sniffer to monitor your utilization and the type of traffic flowing through your network. Devices such as WAN probes let you monitor the traffic on the WAN.

Network Switching (Part III)

Network Design

When designing or upgrading your network, you need to keep some basic rules of segmenting in mind. You segment your network primarily to relieve network congestion and route data as quickly and efficiently as possible. Segmentation is often necessary to satisfy the bandwidth requirements of a new application or type of information that the network needs to support; other times, it may be needed due to increased traffic in a segment or subnet. You should also plan for increased levels of network usage or unplanned increases in network population.

Some areas you need to consider are the types of nodes, user groups, security needs, population of the network, applications used, and the network needs for all the interfaces on the network. When designing your network, you should create it in a hierarchical manner, which gives you the ability to easily make additions later. Another important consideration is how your data flows through the network. For example, suppose your users are intermingled with your servers in the same geographical location. If you create a switched network in which the users' data must be switched through a number of links to another geographical area and then back again to connect the users and the file servers, you have not designed the most efficient path to the destination.

Single points of failure need to be analyzed as well. As stated earlier, every large-network user has suffered through his or her share of network outages and downtime. By analyzing all the possible points of failure, you can implement redundancy in the network and avoid many network outages. Redundancy is the addition of an alternate path through the network; in the event of a network failure, the alternate paths can be used to continue forwarding data throughout the network.

The last principle to consider when designing your network is the behavior of the different protocols. The actual switching point for data does not have to be at the physical wire level; your data can be switched at the Data Link and Network layers as well. Some protocols introduce more network traffic than others. Traffic operating at Layer 2 can be encapsulated or tagged to create a Layer-3-like environment. This environment allows switching to provide security, protocol priority, and Quality of Service (QoS) features through the use of Application-Specific Integrated Circuits (ASICs) instead of the CPU on the switch. ASICs are silicon chips dedicated to only one or two specific tasks; because they process data in silicon and are assigned to a certain task, less processing time is needed, and data is forwarded with less latency and more efficiency to the end destinations.

In order to understand how switches work, we need to understand how collision domains and broadcast domains differ.

Collision Domains

A switch can be considered a high-speed multiport bridge that allows almost maximum wire-speed transfers. Dividing the local geographical network into smaller segments reduces the number of interfaces in each segment and increases the amount of bandwidth available to all the interfaces. Each smaller segment is a collision domain. In the case of switching, each port on the switch is its own collision domain; the most optimal switching configuration places only one interface on each port, making the collision domain just two nodes: the switch port interface and the interface of the end machine.

Let's look at the small collision domain consisting of two PCs and a server shown in Figure 1.4. Notice that if both PCs in the network transmit data at the same time, the data will collide, because all three computers are in the same collision domain. If each PC and the server were on its own switch port, each would be in its own collision domain.





Figure 1.4: A small collision domain consisting of two PCs sending data simultaneously to a server.

Switch ports are assigned to virtual LANs (VLANs) to segment the network into smaller broadcast domains. If your node is attached to a switch port assigned to a VLAN, it will receive broadcasts only from members of that VLAN. When the switch is set up and each port is assigned to a VLAN, a broadcast sent in VLAN 1 is seen by all ports assigned to VLAN 1, even if they are on other switches attached by trunk links. A switch port can be a member of only one VLAN, and routing data from one VLAN to another requires a Layer 3 device such as an internal route processor or router.

Although the nodes on each port are in their own collision domain, the broadcast domain consists of all the ports assigned to a particular VLAN. Therefore, when a broadcast is sent from a node in VLAN 1, all the devices attached to ports assigned to VLAN 1 receive that broadcast. The switch segments the users connected to other ports, thereby preventing data collisions. For this reason, when traffic remains local to each segment or workgroup, each user has more bandwidth available than if all the nodes were in one segment. On a physical link between a switch port and a workstation in a VLAN with very few nodes, data can be sent at almost 100 percent of the physical wire speed, because there are virtually no data collisions. If the VLAN contains many nodes, the broadcast domain is larger, and more broadcasts must be processed by all ports belonging to that VLAN. The ports assigned to a VLAN make up the broadcast domain, which is discussed in the following section; the small model below illustrates the scoping behavior.
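Here is a minimal model of that behavior (a sketch; the port numbers and VLAN IDs are invented for illustration):

    class Switch:
        def __init__(self):
            self.port_vlan = {}                 # port -> its single VLAN

        def assign(self, port: int, vlan: int):
            self.port_vlan[port] = vlan         # a port belongs to one VLAN only

        def broadcast(self, src_port: int):
            """Deliver a broadcast to every other port in the sender's VLAN."""
            vlan = self.port_vlan[src_port]
            return [p for p, v in self.port_vlan.items()
                    if v == vlan and p != src_port]

    sw = Switch()
    for port, vlan in [(1, 10), (2, 10), (3, 20), (4, 20)]:
        sw.assign(port, vlan)
    print(sw.broadcast(1))   # [2] -- only VLAN 10 ports hear VLAN 10 broadcasts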

Broadcast Domains

In switched environments, broadcast domains consist of all the ports or collision domains belonging to a VLAN. In a flat network topology, your collision domain and your broadcast domain are all the interfaces in your segment or subnet. If no devices (such as a switch or a router) divide your network, you have only one broadcast domain. On some switches, the number of broadcast domains or VLANs that can be configured is almost limitless. VLANs allow a switch to divide the network segment into multiple broadcast domains. Each port becomes its own collision domain. Figure 1.5 shows an example of a properly switched network.


Figure 1.5: An example of a properly switched network.

Note Switching technology complements routing technology, and each has its place in the network. The value of routing technology is most noticeable when you get to larger networks that utilize WAN solutions in the network environment.

Network Switching (Part II)

The Pieces of Technology

In 1980, a group of vendors consisting of Digital Equipment Corporation (DEC), Intel, and Xerox created what was known as the DIX standard. Ultimately, after a few modifications, it became the IEEE 802.3 standard, which is what most people associate with the term Ethernet. The Ethernet networking technology was invented by Robert M. Metcalfe while he was working at the Xerox Palo Alto Research Center in the early 1970s. It was originally designed to help support research on the “office of the future.” At first, the network's speed was limited to 3Mbps.

Ethernet is a multiaccess, packet-switched system with very democratic principles. The stations themselves provide access to the network, and all devices on an Ethernet LAN can access the LAN at any time. Ethernet signals are transmitted serially, one bit at a time, over a shared channel available to every attached station. To reduce the likelihood of multiple stations transmitting at the same time, Ethernet LANs use a mechanism known as Carrier Sense Multiple Access Collision Detection (CSMA/CD) to listen to the network and determine whether it is in use. If a station has data to transmit and the network is not in use, the station sends the data. If two stations transmit at the same time, a collision occurs. The stations are notified of this event, and they instantly reschedule their transmissions using a specially designed back-off algorithm: each station involved chooses a random time interval at which to schedule the retransmission of the frame. In effect, this process keeps the stations from making their next transmission attempts at the same moment. (A sketch of this back-off algorithm appears at the end of this section.)

After each frame transmission, all stations on the network contend equally for the next frame transmission. This competition allows access to the network channel in a fair manner and ensures that no single station can lock the other stations out of the network. Access to the shared channel is determined by the Media Access Control (MAC) mechanism on each Network Interface Card (NIC) located in each network node. The MAC address is a physical address which, in terms of the OSI Reference Model, is the lowest-level address; this is the address used by a switch. A router at Layer 3 uses a protocol address, which is referred to as a logical address.

CSMA/CD is the tool that allows collisions to be detected. Each collision of frames on the network reduces the amount of network bandwidth that can be used to send information across the physical wire. CSMA/CD also forces every device on the network to analyze each individual frame and determine whether the device is the intended recipient. The process of decoding and analyzing each individual packet generates additional CPU usage on each machine, which degrades each machine's performance.

As networks grew in popularity, they also began to grow in size and complexity. For the most part, networks began as small, isolated islands of computers. In many of the early environments, the network was installed over a weekend: when you came in on Monday, a fat orange cable was threaded throughout the organization, connecting all the devices. A method of connecting these segments had to be derived. In the next few sections, we will look at a number of approaches by which networks can be connected (repeaters, hubs, bridges, and routers) and demonstrate the benefits and drawbacks of each.
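Here is the promised sketch of the back-off algorithm (truncated binary exponential back-off; the 51.2-microsecond slot time is the standard 10Mbps Ethernet figure):

    import random

    SLOT_TIME_US = 51.2     # 10Mbps Ethernet slot time, in microseconds

    def backoff_delay(collision_count: int) -> float:
        """Random delay before retransmission after the nth collision."""
        exponent = min(collision_count, 10)             # growth is capped at 2^10
        slots = random.randint(0, 2 ** exponent - 1)    # each station draws independently
        return slots * SLOT_TIME_US

    # After the first collision each station draws from only {0, 1} slots,
    # so a repeat collision is still possible; the range doubles with each
    # successive collision, making further repeats increasingly unlikely.
    print(backoff_delay(1), backoff_delay(3), backoff_delay(10))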

Repeaters

The first LANs were designed using thick coaxial cables, with each station physically tapping into the cable. In order to extend the distance and overcome other limitations of this type of installation, a device known as a repeater is used. Essentially, a repeater consists of a pair of back-to-back transceivers. The transmit wire on one transceiver is hooked to the receive wire on the other, so that bits received by one transceiver are immediately retransmitted by the other.

Repeaters work by regenerating the signals from one segment to another, allowing networks to overcome distance limitations and other factors. Repeaters amplify the signal before retransmitting it on the next segment because signal energy is lost over the length of the cabling: when data travels through the physical cable, it loses strength the further it travels. This loss of signal strength is referred to as attenuation.

These devices do not create separate networks; they simply extend an existing one. A standard rule of thumb is that no more than four repeaters may be located between any two stations. This is often referred to as the 5-4-3 rule: no more than 5 segments may be attached by no more than 4 repeaters, with no more than 3 of those segments populated with workstations. This limitation keeps propagation delay in check; propagation delay is the time it takes for a packet to go from the beginning of the link to the opposite end.

As you can imagine, in the early LANs this method resulted in a host of performance and fault-isolation problems. As LANs multiplied, a more structured approach called 10BaseT was introduced. This method consists of attaching all the devices to a hub in the wiring closet, with all stations connected in a point-to-point configuration between the interface and the hub.

Hubs

A hub, also known as a concentrator, is a device containing a grouping of repeaters. Like repeaters, hubs are found at the Physical layer of the OSI model; they simply collect and retransmit bits. Hubs are used to connect multiple cable runs in a star-wired network topology into a single network, a design similar to the spokes of a wheel converging on its center. This type of setup provides many benefits, such as allowing interdepartmental connections between hubs, extending the maximum distance between any pair of nodes on the network, and improving the ability to isolate problems from the rest of the network.

Six types of hubs are found in the network:

  • Active hubs—Act as repeaters and eliminate attenuation by amplifying the signals they replicate to all the attached ports.
  • Backbone hubs—Collect other hubs into a single collection point. This type of design is also known as a multitiered design. In a typical setup, servers and other critical devices are on high-speed Fast Ethernet or Gigabit uplinks. This setup creates a very fast connection to the servers that the lower-speed networks can use, preventing the server or the path to the server from becoming a bottleneck in the network.
  • Intelligent hubs—Contain logic circuits that shut down a port if the traffic indicates that malformed frames are the rule rather than the exception.
  • Managed hubs—Have Application layer software installed so that they can be remotely managed. Network management software is very popular in organizations that have staff responsible for a network spread over multiple buildings.
  • Passive hubs—Do not amplify the signals they replicate to the attached ports, so the signals remain subject to attenuation. These are the opposite of active hubs.
  • Stackable hubs—Have a cable to connect hubs that are in the same location without requiring the data to pass through multiple hubs. This setup is commonly referred to as daisy chaining.

In all of these hub configurations, one crucial problem exists: All stations share the bandwidth, and they all remain in the same collision domain. As a result, whenever two or more stations transmit simultaneously on any hub, there is a strong likelihood that a collision will occur, and these collisions lead to congestion during high-traffic loads. As the number of stations increases, each station gets a smaller portion of the LAN bandwidth. Hubs do not provide microsegmentation and leave only one collision domain.

Bridges

A bridge is a relatively simple device consisting of a pair of interfaces with some packet buffering and simple logic. The bridge receives a packet on one interface, stores it in a buffer, and immediately queues it for transmission by the other interface. The two cables each experience collisions, but collisions on one cable do not cause collisions on the other: the cables are in separate collision domains.

Note Some bridges are capable of connecting dissimilar topologies.

The term bridging refers to a technology in which a device known as a bridge connects two or more LAN segments. Bridges are OSI Data Link layer, or Layer 2, devices that were originally designed to connect two network segments. Multiport bridges were introduced later to connect more than two network segments, and they are still in use in many networks today. These devices analyze the frames as they come in and make forwarding decisions based on information in the frames themselves. To do its job effectively, a bridge provides three separate functions:

  • Filtering the frames that the bridge receives to determine if the frame should be forwarded
  • Forwarding the frames that need to be forwarded to the proper interface
  • Eliminating attenuation by amplifying received data signals

Bridges learn the location of the network stations without any intervention from a network administrator or any manual configuration of the bridge software. This process is commonly referred to as self-learning. When a bridge is turned on and begins to operate, it examines the MAC addresses located in the headers of frames passed through the network. As the traffic passes through the bridge, the bridge builds a table of known source addresses, assuming that the port on which the bridge received a frame is the port to which the sending device is attached.

In this table, an entry contains the MAC address of each node along with the bridge interface and port on which it resides. If the bridge knows that the destination is on the same segment as the source, it drops the packet, because there is no need to transmit it. If the bridge knows that the destination is on another segment, it transmits the packet on that segment only. If the bridge does not know the destination segment, it transmits a copy of the frame out all the interface ports except the one on which it arrived, using a technique known as flooding. For each packet an interface receives, the bridge stores in its table the following information (a minimal sketch of this behavior follows the list):

  • The frame’s source address
  • The interface the frame arrived on
  • The time at which the switch port received the source address and entered it into the switching table
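Here is the promised sketch of the self-learning and forwarding logic (port numbers and MAC strings are invented for illustration):

    import time

    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports
            self.table = {}            # MAC address -> (port, time learned)

        def receive(self, src_mac, dst_mac, in_port):
            # Learn: the sender must live off the port the frame arrived on.
            self.table[src_mac] = (in_port, time.time())
            entry = self.table.get(dst_mac)
            if entry is None:
                # Unknown destination: flood out every port but the arrival port.
                return [p for p in self.ports if p != in_port]
            out_port, _ = entry
            # Same segment: drop; known remote segment: forward there only.
            return [] if out_port == in_port else [out_port]

    bridge = LearningBridge(ports=[1, 2, 3])
    print(bridge.receive("AA", "BB", in_port=1))   # BB unknown: flood to [2, 3]
    print(bridge.receive("BB", "AA", in_port=2))   # AA learned on port 1: [1]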

Note Bridges and switches are logically equivalent.

There are four kinds of bridges:

  • Transparent bridge—Primarily used in Ethernet environments. They are called transparent bridges because their presence and operation are transparent to network hosts. Transparent bridges learn and forward packets in the manner described earlier.
  • Source-route bridge—Primarily used in Token Ring environments. They are called source-route bridges because they assume that the complete source-to-destination route is placed in frames sent by the source.
  • Translational bridge—Translates between different media types, such as Token Ring and Ethernet.
  • Source-route transparent bridge—A combination of transparent bridging and source-route bridging that enables communication in mixed Ethernet and Token Ring environments.

Broadcasts are the biggest problem with bridges. Some bridges help reduce network traffic by filtering packets and allowing them to be forwarded only if needed; however, bridges forward broadcasts to devices on all segments of the network. As networks grow, so does broadcast traffic. Instead of broadcasts reaching a limited number of devices, bridges can allow hundreds of devices on multiple segments to broadcast data to all the devices, so every device on every segment now processes data intended for one device. Excessive broadcasts reduce the amount of bandwidth available to end users.

This situation causes bandwidth problems called broadcast storms. Broadcast storms occur when broadcasts throughout the LAN use up all available bandwidth, grinding the network to a halt. Network performance is most often affected by three types of broadcast traffic: inquiries about the availability of a device, advertisements for a component's status on the network, and inquiries from one device trying to locate another device. The following are typical types of network broadcasts:

  • Address Resolution Protocol (ARP)
  • Internetwork Packet Exchange (IPX)
  • Get Nearest Server (GNS) requests
  • IPX Service Advertising Protocol (SAP)
  • Multicast traffic broadcasts
  • NetBIOS name requests

These broadcasts are built into the network protocols and are essential to the operation of the network devices using those protocols. Due to the overhead involved in forwarding packets, bridges also introduce a delay in forwarding traffic, known as latency. Latency is measured from the moment a packet enters the input port on the device until the time the device forwards the packet out the exit port. Bridges can introduce a 20 to 30 percent loss of throughput for some applications. Latency is a big problem with timing-dependent technologies, such as mainframe connectivity, video, or voice; high levels of latency can result in loss of connections and noticeable video and voice degradation.

The inherent problems of bridging over multiple segments, including those of different LAN types, with Layer 2 devices became a burden to network administrators. To overcome these issues, a device called a router, operating at OSI Layer 3, was introduced.

Routers

Routers are devices that operate at Layer 3 of the OSI Model. Routers can be used to connect more than one Ethernet segment with or without bridging. Routers perform the same basic functions as bridges and also forward information and filter broadcasts between multiple segments. Figure 1.2 shows routers segmenting multiple network segments. Using an OSI network Layer 3 solution, routers logically segment traffic into subnets.




Figure 1.2: Routers connecting multiple segments.



Routers were originally introduced to connect dissimilar network media types as well as to provide a means to route traffic, filter broadcasts across multiple segments, and improve overall performance. This approach eliminated broadcasts over multiple segments by filtering them. However, routers became a bottleneck in some networks and resulted in a loss of throughput for some types of traffic.

When you are connecting large networks, or when you are connecting networks to a WAN, routers are very important. Routers will perform media conversion, adjusting the data link protocol as necessary; with a router, as well as with some bridges, you can connect an Ethernet network and a Token Ring network.

Routers do have some disadvantages. The cost of routers is very high, so they are an expensive way to segment networks; if protocol routing is necessary, you must pay this cost. Routers are also difficult to configure and maintain, which makes keeping the network up and running harder, and knowledgeable workers who understand routing can be expensive. Routers are also somewhat limited in their performance, especially in the areas of latency and forwarding rates: they add about 40 percent additional latency from the time packets arrive at the router to the time they exit, primarily because routing requires more packet assembly and disassembly. These disadvantages forced network administrators to look elsewhere when designing many large network installations.


Switches


A new option had to be developed to overcome the problems associated with bridges and routers. These new devices were called switches. The term switching was originally applied to packet-switch technologies such as Link Access Procedure, Balanced (LAPB); Frame Relay; Switched Multimegabit Data Service (SMDS); and X.25. Today, switching is more commonly associated with LAN switching and refers to a technology that is similar to bridging in many ways.

Switches allow fast data transfers without introducing the latency typically associated with bridging. They create a one-to-one dedicated network segment for each device on the network and interconnect these segments by using an extremely fast, high-capacity infrastructure, commonly referred to as a backplane, that provides optimal transport of data on a LAN. This setup reduces competition for bandwidth on the network, allows maximum utilization of the network, and increases flexibility for network designers and implementers.

Ethernet switches provide a number of enhancements over shared networks. Among the most important is microsegmentation: the ability to divide networks into smaller and faster segments that can operate at the maximum possible speed of the wire (also known as wire-speed). To improve network performance, switches must address three issues:

  • They must stop unneeded traffic from crossing network segments.
  • They must allow multiple communication paths between segments.
  • They cannot introduce performance degradation.

Routers are also used to improve performance. Routers are typically attached to switches to connect multiple LAN segments. A switch forwards traffic only to the port to which the destination device is connected, which reduces the traffic seen by the other devices on the network. Information from the sending device is routed directly to the receiving device; no device other than the router, the switch, and the end nodes sees or processes the information. The network becomes less saturated, more secure, and more efficient at processing information, and precious processor time is freed on the local devices. Routers today are typically placed at the edge of the network and are used to connect WANs, filter traffic, and provide security. See Figure 1.3.



Figure 1.3: Routers and switches.

Like bridges, switches perform at OSI Layer 2 by examining the packets and building a forwarding table based on what they hear. Switches differ from bridges by helping to meet the following needs for network designers and administrators:

  • Provide deterministic paths
  • Relieve network bottlenecks
  • Provide deterministic failover for redundancy
  • Allow scalable network growth
  • Provide fast convergence
  • Act as a means to centralize applications and servers
  • Have the capacity to reduce latency

Network Switching (Part I)

Physical Media and Switching Types

The following are the most popular types of physical media in use today:
  • Ethernet—Based on the Institute of Electrical and Electronics Engineers (IEEE) 802.3 standard, it relies on the Carrier Sense Multiple Access Collision Detection (CSMA/CD) technology. It includes 10Mbps LANs, as well as Fast Ethernet and Gigabit Ethernet.


  • Token−Ring—Not as popular as Ethernet switching. Token−Ring switching can also be used to improve LAN performance.

  • FDDI—Rarely used, chiefly due to the high expense of Fiber Distributed Data Interface (FDDI) equipment and cabling.

The following are some of the protocol and physical interface switching types in use today:



  • Port switching—Takes place in the backplane of a shared hub. For instance, ports 1, 2, and 3 could be connected to backplane 1, whereas ports 4, 5, and 6 could be connected to backplane 2. This method is typically used to form a collapsed backbone and to provide some improvements in the network.



  • Cell switching—Uses Asynchronous Transfer Mode (ATM) as the underlying technology. Switch paths can be either permanent virtual circuits (PVCs) that never go away, or switched virtual circuits (SVCs) that are built up, used, and torn down when you’re finished.


Networking Architectures



Network designers from the beginnings of networking were faced with the limitations of the LAN topologies. In modern corporate networks, LAN topologies such as Ethernet, Token Ring, and FDDI are used to provide network connectivity. Network designers often try to deploy a design that uses the fastest functionality that can be applied to the physical cabling.

Many different types of physical cable media have been introduced over the years, such as Token Ring, FDDI, and Ethernet. At one time, Token Ring was seen as a technically superior product and a viable alternative to Ethernet. Many networks still contain Token Ring, but very few new Token Ring installations are being implemented. One reason is that Token Ring is an IBM product with very little support from other vendors; also, the prices of Token Ring networks are substantially higher than those of Ethernet networks.

FDDI networks share some of the limitations of Token Ring. Like Token Ring, FDDI offers excellent benefits in the areas of high-speed performance and redundancy; unfortunately, it has the same high equipment and installation costs. More vendors are beginning to recognize FDDI and are offering support, services, and installation for it, especially for network backbones. Network backbones are generally high-speed links running between segments of the network. Normally, backbone cable links run between two routers, but they can also be found between two switches or between a switch and a router.

Ethernet has by far overwhelmed the market and obtained the highest market share. Ethernet networks are open-standards based, are more cost-effective than other types of physical media, and have a large base of vendors that supply the different Ethernet products. The biggest benefit that makes Ethernet so popular is the large number of technical professionals who understand how to implement and support it.

Early networks were modeled on the peer-to-peer networking model. This worked well for small numbers of nodes, but as networks grew they evolved into the client/server network model of today. Let's take a look at these two models in more depth.

Peer−to−Peer Networking Model



A small, flat network or LAN often contains multiple segments connected with hubs, bridges, and repeaters. This is an Open Systems Interconnection (OSI) Reference Model Layer 2 network that can actually be connected to a router for access to a WAN connection. In this topology, every network node sees the conversations of every other network node.

In terms of scalability, the peer-to-peer networking model has some major limitations, especially given the technologies that companies must utilize to stay ahead in their particular fields. No quality of service, prioritizing of data, redundant links, or data security can be implemented here, other than encryption. Every node sees every packet on the network; the hub merely forwards the data it receives out of every port, as shown in Figure 1.1.









Figure 1.1: A flat network topology.





Early networks consisted of a single LAN with a number of workstations running peer−to−peer networks and sharing files, printers, and other resources. Peer−to−peer networks share data with one another in a non−centralized fashion and can span only a very limited area, such as a room or building.

Client/Server Network Model


Peer-to-peer networks evolved into the client/server model, in which the server shares applications and data storage with the clients in a somewhat more centralized network. This setup adds a little more security, provided by the operating system, and eases administration for the multiple users trying to access data.

A LAN in this environment consists of a physical wire connecting the devices. In this model, LANs enable multiple users in a relatively small geographical area to exchange files and messages, as well as to access shared resources such as file servers and printers. The isolation of these LANs makes communication between different offices or departments difficult, if not impossible. Duplication of resources means that the same hardware and software have to be supplied to each office or department, along with separate support staff for each individual LAN.

WANs soon developed to overcome the limitations of LANs. WANs can connect LANs across normal telephone lines or other digital media (including satellites), thereby overcoming the geographical limitations of dispersing resources to network clients.

In a traditional LAN, many limitations directly impact network users. Almost anyone who has ever used a shared network has had to contend with the other users of that network and experienced the impact: slow response times and poor network performance, all stemming from the nature of shared environments. When collision rates increase, the usefulness of the bandwidth decreases. As applications begin resending data due to excessive collisions, the amount of bandwidth used increases and the response time for users grows. As the number of users increases, the number of requests for network resources rises as well. This increase boosts the amount of traffic on the physical network media and raises the number of data collisions. This is when you begin to receive more complaints from the network's users regarding response times and timeouts. These are all telltale signs that you need a switched Ethernet network.

Later in this chapter, we will talk more about monitoring networks and solutions to these problems. But before we cover how to monitor, design, and upgrade your network, let's look at the devices you will find in the network.

Wednesday, July 11, 2007

Security and Access Lists (Cisco routers)

Access lists are similar to packet filtering on an NT server. They are lists of conditions, set by the administrator, that control access to a particular network segment by controlling access to a specific router interface. Access lists are used to protect sensitive networks and to optimize network traffic. An access list can control inbound or outbound traffic on an interface. It's important to understand that the direction (inbound or outbound) is relative to the router's interface. For example, if a server is connected to one of the router's interfaces, a packet addressed to that server is outbound traffic on that interface.
Once the access list is applied to the interface, all packets are analyzed and compared with the entries in the access list. If one of the conditions in the access list matches the packet's information (which could be an IP address, network address, port number, or protocol type), the router acts according to the instructions in that access list.
· A packet is compared with each line in the access list, starting with line 1, then line 2, and so on.
· Once the packet matches the condition on one of the lines in the access list, the router acts upon that condition and no further comparisons take place.
· If the packet does not match any of the conditions in the access list, the packet is discarded. This is the same as having a deny any entry in the access list, and it is important to remember when creating access lists (see the sketch below).
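To make the first-match rule concrete, here is a minimal Python sketch of the matching logic (the function and variable names are invented for illustration; this is not Cisco code):

# Hypothetical sketch of first-match access list evaluation (not Cisco code).
def evaluate(access_list, source_address):
    for action, source in access_list:          # lines are checked in order
        if source in ("any", source_address):
            return action                       # first match wins; stop comparing
    return "deny"                               # implicit deny at the end

acl_10 = [("deny", "222.122.122.100"), ("permit", "any")]
print(evaluate(acl_10, "222.122.122.100"))      # deny
print(evaluate(acl_10, "222.122.122.5"))        # permit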
Access lists can be used to control IP and IPX traffic.
There are two types of access lists: standard and extended. A standard access list can analyze a packet based only on its source IP address. The packet's source IP address can be used to either permit or deny access (either inbound or outbound) on the interface.
Extended access list can, in addition to source IP address, also include entries for:
· Destination IP address
· Port number
· Protocol type
A router can have many different access lists, but only one access list is allowed per protocol, per direction, on each interface.
There are two steps in configuring an access list (either standard or extended):
1. Create the access list in global configuration mode
2. Apply the access list to an interface in interface configuration mode
Each access list must have a unique number. This number must be within a specific range, depending on the type of access list. You must know the following access list numbers:


Access list number    Access list type
1-99                  IP standard access list
100-199               IP extended access list
200-299               Protocol type-code access list
800-899               IPX standard access list
900-999               IPX extended access list
1000-1099             IPX SAP access list

Commands to configure access lists:
From global configuration mode, type access-list [number] [permit or deny] [source address]
For example, access-list 10 deny 222.122.122.100 will create access list number 10 with a condition to deny packets with the source address 222.122.122.100.

It's important to remember that all access lists have an implicit deny as the last line. So, when we created our access list 10, it looks like this:

deny 222.122.122.100
deny any

This means that all traffic will be denied, which is not what we wanted to achieve. To correct this problem and deny only packets with the source address 222.122.122.100, we need to add another line to our access list. From configuration mode, type access-list 10 permit any

Now our access list looks like this:
deny 222.122.122.100
permit any
deny any

The last line, deny any, will always be there because it is inserted automatically by the router, but it will never be used here: once a condition is met (either deny 222.122.122.100 or permit any), the router does not read any further lines in the access list.
When creating an access list with a deny directive, it's important to add another line that permits all or some traffic; otherwise you will simply block all traffic through that interface.

When creating an access list that covers an entire network or subnet, you should use wildcard masking. A wildcard mask is somewhat similar to a subnet mask. Here is an example:

access-list 12 permit 222.122.122.0 0.0.0.255

In this example we created an access list that permits traffic from all hosts on the network 222.122.122.0. The wildcard mask of 0.0.0.255 tells the router that the first 3 octets must match exactly, and the last octet can be any number from 0 to 255.
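Here is a minimal Python sketch of how a wildcard mask works, assuming the usual convention that 0 bits must match and 1 bits are ignored (names invented; not Cisco code):

# Hypothetical sketch of wildcard-mask matching (not Cisco code).
import ipaddress

def wildcard_match(address, pattern, wildcard):
    a = int(ipaddress.IPv4Address(address))
    p = int(ipaddress.IPv4Address(pattern))
    care = 0xFFFFFFFF ^ int(ipaddress.IPv4Address(wildcard))  # 0 bits must match
    return (a & care) == (p & care)

print(wildcard_match("222.122.122.55", "222.122.122.0", "0.0.0.255"))   # True
print(wildcard_match("222.122.123.55", "222.122.122.0", "0.0.0.255"))   # False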

You can have many access lists on a router, but they don't do anything until you apply an access list to an interface.

To apply an access list to an interface, you must first enter interface configuration mode. For example, from config mode type int e0. The router prompt will change to Router(config-if)#, indicating that all changes made here will apply only to interface e0. To apply an access list, type
Router(config-if)#ip access-group 10 [in or out]

For example, to apply access list 10 to control outbound traffic, type ip access-group 10 out

To deactivate the access list, type no ip access-group 10 out

On a router with only two interfaces, one serial and one Ethernet, applying an access list to inbound traffic on the serial interface produces the same effect as applying the same access list to outbound traffic on the Ethernet interface. On a multiport router, you have to decide whether to apply an access list to inbound or outbound traffic based on the needs of the network.

Extended access lists

Standard access lists are very simple to configure, but they can only filter traffic based on the source address. If you want to filter traffic based on the source and destination addresses, as well as the port number, you need extended access lists. Extended access lists also provide one more important function: logging.

Configuring an extended access list is similar to configuring a standard one. You begin by creating the access list from global configuration mode. Use numbers 100-199 for extended access lists. If you enter access-list 101 ? you will see many more parameters available to configure. This is because the router knows, by looking at the access list number, that you are working with an extended access list.
Router(config)#access-list 101 permit tcp host 222.122.122.101 any eq 23

This access list will permit TCP traffic on port 23 (telnet) from the IP address 222.122.122.101 to any IP address. Instead of a port number, you can use the name of the well-known TCP service: telnet, for example, or dns, echo, ftp, and so on.
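Extending the earlier sketch, here is a hypothetical Python illustration of how one extended access list entry might compare several fields at once (simplified; invented names, not Cisco code):

# Hypothetical sketch of one extended access list entry (not Cisco code).
def matches(entry, packet):
    return (entry["proto"] == packet["proto"]
            and entry["src"] in ("any", packet["src"])
            and entry["dst"] in ("any", packet["dst"])
            and entry["port"] in ("any", packet["port"]))

entry = {"proto": "tcp", "src": "222.122.122.101", "dst": "any", "port": 23}
pkt = {"proto": "tcp", "src": "222.122.122.101", "dst": "10.1.1.5", "port": 23}
print(matches(entry, pkt))   # True: telnet from that host is permitted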

You can use wildcard masks with extended access lists just like with standard access lists.

To log events triggered by the access list, add the log parameter to the end of the line, like this:
access-list 101 permit tcp host 222.122.122.101 any eq 23 log

The logging feature can be useful if you want to log the traffic going in or out of a particular interface. To log all traffic, just create an access list that permits all traffic and add the log parameter to it.

IPX access lists

IPX access lists are similar to IP access lists. There are standard and extended IPX access lists.
Unlike a standard IP access list, a standard IPX access list can filter traffic based on both source and destination addresses.

The syntax for creating a standard IPX access list is:
access-list [number] [permit/deny] [source] [destination]
Just like an IP access list, an IPX access list is created from global configuration mode.

For example: access-list 801 permit 40 80
This access list will permit IPX traffic from network 40 to network 80. We use number 801 because an IPX standard access list number must be between 800 and 899.

To specify any network in an IPX access list, you use -1 (minus one).

For example, access-list 805 deny -1 -1 will deny IPX traffic from any network to any network.
Just like in an IP access list, there is an implicit deny at the end of an IPX access list.

To apply an IPX access list, first go to an interface configuration mode, then type
Router(config-if)#ipx access-group [number] [in or out]
For example:
Router(config-if)#ipx access-group 801 in

Extended IPX access lists

With extended IPX access lists you can filter traffic based on the source network/node address, destination network/node address, IPX protocol (SPX, SAP, NetBIOS, and so on), and IPX socket (similar to a TCP port number).

The syntax for creating an extended IPX access list is:
access-list [number] [permit/deny] [protocol] [source] [socket] [destination] [socket]
The [number] must be between 900 and 999 to tell the router that it's reading an extended IPX access list.

Also, just like with IP access lists, you can add the log parameter to the end of an extended IPX access list to log events generated by it.

Example:
access-list 901 deny spx any sap any sap log
Following the syntax above, this access list will deny SPX traffic from any network (SAP socket) to any network (SAP socket), and matching events will be logged.

Another example:
access-list 902 deny rip 300 rip 600 log
This access list will deny all IPX RIP traffic (not the same as IP RIP) from network 300 to network 600, and all matching events will be logged.

The procedure and syntax for applying the extended IPX access list to the interface is the same as with standard IPX access list.

Monitoring access lists

There are several commands you can use to view your access lists. From privileged mode,
show access-lists will display all access lists configured on the router, their numbers, and every line in them.

To view IP access lists, use show ip interface (or sh ip int). This will display the IP interface configurations, including the numbers of the outgoing and inbound access lists.

To view IPX access lists, use show ipx interface. This will show the interfaces that are configured with IPX, and the IPX access lists associated with them.

Another useful command for viewing access lists is show run, entered from privileged mode. This displays the running configuration, including the access groups applied to particular interfaces.

Friday, July 6, 2007

Frame Relay (II)

Frame relay is a high-speed, packet-switching WAN protocol that connects
geographically dispersed LANs. Frame relay is usually offered by a public
network provider; however, private organizations can acquire and manage their
own frame relay networks as well.
Frame relay is a connection-oriented protocol, which means that it relies on
end-to-end paths between devices connected across the network. It implements
these connections using permanent virtual circuits (PVCs) or switched virtual
circuits (SVCs).

Frame relay assumes that networks use transmission lines with low error rates,
such as digital transmission media. Therefore, frame relay provides only basic
error detection with no error recovery. This minimizes the processing required for
each packet, allowing frame relay networks to operate at high speeds with few
network delays.
Because frame relay performs only basic error checking, end stations running
upper-layer protocols such as the Internet Protocol (IP) are responsible for
resending packets that did not transmit correctly the first time.
Permanent Virtual Circuits
A permanent virtual circuit (PVC) is a dedicated logical path that connects two
devices over a network. When configured, a PVC is always available to the
connected devices; a PVC does not require setup before data can travel across the
network, nor does it need to be disconnected after data has passed. Because many
PVCs can coexist for one physical line, devices can share the bandwidth of the
transmission line.
Switched Virtual Circuits
A switched virtual circuit (SVC) is a logical path that is established on an
as-needed basis. That is, an SVC exists only when there is data to transfer. SVCs
can connect any two points on a network without the requirement that the provider
preconfigure virtual circuits (VCs).
SVCs can provide an alternative to a large network infrastructure, potentially
resulting in cost savings for networks with infrequent communications between
sites. SVCs can also provide an easy and relatively inexpensive solution for
disaster recovery. Costs associated with having a redundant PVC are eliminated.
In addition, you can prepare an SVC network for disaster recovery by performing
incremental backups to a mirror-image database on a remote server.
In addition to cost savings, SVCs provide other benefits. When frame relay
networks using global addressing approach a thousand sites, they run out of data
link connection identifiers (DLCIs). SVCs enable you to manage connectivity on
the basis of use rather than permanent connections. Using SVCs also simplifies
network administration because you do not have to preconfigure network
topologies and support moves, additions, and changes, as with PVCs. This can be
a significant benefit in large, highly meshed networks.


SVCs provide true bandwidth-on-demand service that you can customize based
on the application in use. For example, a short interactive session might use an
SVC with a low or zero committed information rate (CIR) or throughput rate,
while a large file transfer of time-critical data might require an SVC at a high CIR
value.



Frame Relay Packets
Figure 1-1 illustrates the structure of a frame relay packet. The packet’s header
field includes the following:
• Data link connection identifier (DLCI)
The DLCI is the virtual circuit identification number. The frame relay network
uses the DLCI to direct basic data flow. You configure the DLCI for PVCs.
For SVCs, the frame relay switch assigns the DLCI number on a per call
basis.
• Command/response bit (C/R)
ITU-T (formerly CCITT) standards do not use this bit.
• Forward explicit congestion notification (FECN) and backward explicit
congestion notification (BECN)
The FECN and BECN bits indicate congestion on the network.
• Discard eligibility (DE)
The DE bit allows the router to mark specific frames as low priority (discard
eligible) before transmitting them to the frame relay network.
• Extended address bit (EA)
The EA bit signals whether the next byte is part of the address; a value of 1
indicates the last byte of the DLCI.


Figure 1-1. Frame Relay Header: 2-Byte Format

Figure 1-1 shows the frame relay header as a 2-byte structure. Frame relay can
also format the header using 3 or 4 bytes, as shown in Figure 1-2. Note, however,
that you must configure the frame relay interface on the router to use the same
header length as the switched network to which it is connected.
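As a rough illustration, here is a hypothetical Python sketch that unpacks the fields of the 2-byte header described above (bit positions follow the standard 2-byte address format; this is not code from any frame relay product):

# Hypothetical sketch: unpacking the 2-byte frame relay header (not product code).
def parse_header(b1, b2):
    return {
        "dlci": ((b1 >> 2) << 4) | (b2 >> 4),   # 6 high bits + 4 low bits
        "cr":   (b1 >> 1) & 1,                  # command/response (unused by ITU-T)
        "fecn": (b2 >> 3) & 1,                  # forward congestion notification
        "becn": (b2 >> 2) & 1,                  # backward congestion notification
        "de":   (b2 >> 1) & 1,                  # discard eligibility
        "ea":   b2 & 1,                         # 1 = last byte of the address
    }

print(parse_header(0x18, 0x41))   # DLCI 100, all flag bits clear, EA set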



Figure 1-2. Frame Relay Header: 3- and 4-Byte Formats


Management Protocols
Frame relay is an access protocol that runs between a router or data terminal
equipment (DTE) and a switch or data communications equipment (DCE). The
router and the switch use the Data Link Control Management Interface (DLCMI)
to exchange information about the interface and the status of each virtual circuit.

DLCMI supports three standard data link management specifications: LMI, ANSI
T1.617 Annex D, and CCITT (now ITU-T) Q.933 Annex A.
• The networking industry first developed the Local Management Interface
(LMI) specification. The LMI approach is asymmetric; the router sends a
status-inquiry message to the network, signaling that the router’s connection
to the network is functioning. The network replies with a status response.
• ANSI modified the LMI specification and incorporated it as Annex D to
ANSI standard T1.617. The ANSI method is generally similar to the LMI
approach.
• The CCITT (now ITU-T) modified the ANSI standard and adopted it as
Annex A to Q.933. The CCITT Annex A specification is similar to Annex D,
but it uses an international numbering scheme.
Be sure to configure the frame relay interface on the router to use the same
management protocol as the switched network to which it is connected.

Frame Relay SVC Signaling and LAPF
Figure 1-3 shows the layers of protocol standards for frame relay signaling:
• The LAPF Core layer defines the basic frame relay protocol for both PVCs
and SVCs; the full LAPF layer supports the reliable transfer of multiple
numbered frames over SVCs.
• The DLCMI layer defines link management protocol for PVCs.
• The LAPF and Q.933 layers define link management protocol for SVCs.


Figure 1-3. Frame Relay Signaling and LAPF Standards


The link access procedure, frame mode (LAPF) layer defines five unnumbered
control frames and three numbered supervisory frames on the communications
link.

LAPF defines the following categories of management frames to support reliable
transfer of multiple numbered frames over SVCs:
• Unnumbered control—Provides connection and disconnection services and
includes set asynchronous balanced mode extended (SABME), disconnect
(DISC), frame reject (FRMR), disconnected mode (DM), and unnumbered
acknowledgment (UA) frames.
• Numbered supervisory—Provides flow control and retransmission
information and includes receiver not ready (RNR), receiver ready (RR), and
reject (REJ) frames.
• Numbered information (I)—A numbered command/response that passes data
across the link using a sliding window protocol. The frames are numbered
sequentially and carry an acknowledgment of the highest numbered frame
received by the sending peer. The maximum number of frames outstanding is
configurable. These frames can only be sent after setup of multiple frame
communications on the link. A flag within the frame differentiates a
command from its response.
• Exchange identification (XID)—An unnumbered command/response used to
allow peers to exchange identification information.

LAPF Operational States
LAPF has three main operational states:
• TEI-assigned—The terminal end-point identifier (TEI)-assigned state is the
base interface state. When the LAPF circuit is first established, this is its state.
No timers are running and only unnumbered frames are supported across the
link.
• Active—This state indicates that multiple frame support is up and running on
the interface. Numbered information frames can travel across the link.
• Timer recovery—This state indicates that a timer has expired and the peer is
attempting to recover either through retransmission (T200 timeout) or by
initiating an idle time handshake (T203 timeout).
When multiple frame support is operating on the channel, numbered information
frames are exchanged to transfer data and acknowledge earlier transfers.

LAPF Timeout and Retransmission Timers

Timer T200
Timer T200 detects transmission timeouts. When a timeout occurs, the peer enters
the timer recovery state and retransmits the frame, up to a maximum of N200
times. If this limit is reached, the system performs the following operations:
• Terminates multiple frame operation
• Discards all outstanding information frames
• Transitions the peer to the TEI-assigned state
• Initiates multiple frame setup
If the remote peer receives a frame that contains an error, it sends an REJ message
to specify which frame was in error. In response, the local peer retransmits
information frames, beginning with the bad frame.
If the remote peer encounters an error that retransmission cannot remedy, it sends
an FRMR response to the local peer and transitions to the TEI-assigned state. The
local peer discards all outstanding information frames, transitions to the
TEI-assigned state, and initiates multiple frame setup.
The supervisory frames, RNR and RR, support flow control on the channel. If the
receiver is not ready to receive data, it sends an RNR message to tell the sender to
wait. When it is ready to receive data, it sends an RR message. Flow control in one
direction is independent of the other direction.
Timer T203
Timer T203 is used to detect a lost connection. When either end of the link is not
waiting for any data, it starts timer T203. If it sends or receives no frames before
this timer expires, the peer transitions to the timer recovery state, sends either an
RR or RNR message to the remote peer, and starts timer T200. If timer T200
expires, the connection is assumed lost, and the peer transitions to the
TEI-assigned state.
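The timer behavior described above can be summarized as a small state table. A hypothetical Python sketch (simplified; the event names are invented):

# Hypothetical sketch of LAPF state transitions on timer events (simplified).
TRANSITIONS = {
    ("active", "t200_expired"):          "timer-recovery",  # retransmit the frame
    ("active", "t203_expired"):          "timer-recovery",  # idle handshake: send RR/RNR
    ("timer-recovery", "ack_received"):  "active",
    ("timer-recovery", "n200_exceeded"): "tei-assigned",    # link assumed lost
}

def next_state(state, event):
    return TRANSITIONS.get((state, event), state)

print(next_state("active", "t200_expired"))          # timer-recovery
print(next_state("timer-recovery", "ack_received"))  # active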
To terminate multiple frame support on the circuit, one peer sends a DISC
message to the other. The receiving peer responds with a UA message, and
disconnects the circuit.

SVC Signaling
The following sections describe the signaling between a DTE and the frame relay
network in which various types of messages are exchanged.

Call Setup
A frame relay SVC is established using the frame relay signaling protocol
between the subscriber (DTE) and the network. This protocol is described in the
sections that follow.

Wednesday, July 4, 2007

IP addressing and subnetting

There are over 100 different protocols in the TCP/IP protocol suite. IP (Internet Protocol) is one of them; its primary functions are addressing and routing. An IP address is 32 bits long.
Logically, we write an IP address like this: 202.221.100.121. Notice that when we look at an IP address, we see 4 parts separated by dots. This is because IP has a hierarchical structure. Looking at this IP address, we can identify it as a class C address because the first octet is 202 (between 192 and 223).
When the computer sees an IP address, it does not see any dots that separate the octets. The computer sees the same address like this: 11001010110111010110010001111001
The software responsible for interpreting the IP address (the software that's installed when you install the TCP/IP protocol on your computer) determines the class of an IP address by looking at the first 3 bits (the leading bits) of the address.
A class A address will always start with 0.
A class B address will always start with 10.
A class C address will always start with 110.
An IP address carries two pieces of information: the first part of the address is the network address, and the second part is the host address. Exactly how many bits represent the network address is determined by the class of the address and the subnet mask.
In our example, the computer reads the first 3 bits of the address and right away determines that it's a class C address. For the computer this means that the first 3 octets (the first 24 bits) are the network address, and the rest is the host address.
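A hypothetical Python sketch of that leading-bit test (the function name is invented):

# Hypothetical sketch: determine the address class from the leading bits.
def ip_class(first_octet):
    if first_octet >> 7 == 0b0:
        return "A"        # 0xxxxxxx
    if first_octet >> 6 == 0b10:
        return "B"        # 10xxxxxx
    if first_octet >> 5 == 0b110:
        return "C"        # 110xxxxx
    return "D or E"

print(ip_class(202))      # C (202 is between 192 and 223)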

Sometimes it is necessary to divide a network into smaller logical groups. This is called subnetting.
The advantages of subnetting are reduced network congestion and increased network performance.
To implement subnetting, we borrow some of the bits from the host address and use them for the subnet address.
In our earlier example, we have the IP address 202.221.100.121, or 11001010110111010110010001111001. The first 24 bits are the network address (because it's a class C network), and the last 8 bits are the host address. Network: 110010101101110101100100 Host: 01111001
Let's suppose we have a whole class C network assigned to us: 110010101101110101100100. That gives us 254 addresses available to use on our network. If we need to divide our network into 5 subnets, we would need to borrow 3 bits from the host portion to identify the subnet address, leaving only 5 bits for host addresses. We can have only 32 combinations with 5 bits, minus 2 because we cannot have addresses of all 1s or all 0s. So now we can have 5 subnets with 30 hosts each, totaling 150 hosts. Additionally, every subnet must have one host address reserved for the router interface, because we need routers to connect our subnets. Now we're down to 29 addresses available for computers on each of our 5 subnets: 29 x 5 = 145.
In the process of subnetting, we lost many of the available addresses (from 254 down to 145).
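You can check this arithmetic with a short Python sketch (hypothetical, using the classful rules described above):

# Hypothetical sketch: the cost of borrowing 3 host bits on a class C network.
borrowed = 3
host_bits = 8 - borrowed
hosts_per_subnet = 2 ** host_bits - 2      # 32 combinations minus all-0s and all-1s = 30
per_subnet_usable = hosts_per_subnet - 1   # one address per subnet reserved for the router
print(hosts_per_subnet)                    # 30
print(per_subnet_usable * 5)               # 145 usable addresses across 5 subnets (down from 254)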
In order to implement subnetting, we use subnet masks to let our computers know how many bits of the IP address are used for the network address, and how many for the host address.
The default subnet mask for a class C address is 255.255.255.0; in binary it looks like this: 11111111111111111111111100000000
In order to borrow 3 bits from the host portion, we must create a custom subnet mask like this: 11111111111111111111111111100000, or 255.255.255.224
When deciding on a subnet mask, remember that you need one network ID (a separate subnet) for each WAN connection, in addition to each LAN subnet. This means that if you have 2 offices separated by a WAN link, you need at least 3 subnets: one for each office and one just for the WAN link. The WAN link itself will probably only need 2 IP addresses (one for the WAN router interface on each end), so you waste a lot of IP addresses on the WAN link (an entire subnet's range of addresses for two interfaces).
Also remember that you need an IP address for every computer, network printer, and other networking device, plus a router interface on each subnet. So, if you are planning a subnet with 28 computers and 2 network printers, you need at least 31 IP addresses for it (at least one for the router interface).
Remember that the subnet mask must be the same for the entire network being subnetted. The subnet mask is not part of the IP address; it is not included in a packet's address header and is not transmitted over the network. It is something you manually assign to every computer on the network (or it can be assigned by a DHCP server along with the IP address and default gateway).
Computers look at the subnet mask to determine whether a destination IP address is on the local network or a remote network. If the destination is determined to be on the local subnet, the packet is sent directly to the destination computer using its MAC address (physical address). The MAC address is either found in the ARP cache, or the ARP protocol is used to discover it.
If, after comparing the destination's IP address against the subnet mask, the destination computer is determined to be on a remote subnet, the packets are sent to the default gateway (router). Each subnet must have at least one default gateway address, and this address must be local to the subnet.
An incorrectly configured subnet mask can cause serious problems. If the sending computer decides that the destination IP address is on the local network when in fact it's on a remote network, the packet will never be sent to the router, and it will be dropped after several attempts to find the remote IP on the local network.
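A hypothetical Python sketch of that local-or-remote decision (invented names):

# Hypothetical sketch: compare network portions to decide local vs. remote delivery.
import ipaddress

def same_subnet(src, dst, mask):
    m = int(ipaddress.IPv4Address(mask))
    return (int(ipaddress.IPv4Address(src)) & m) == (int(ipaddress.IPv4Address(dst)) & m)

# True  -> deliver directly using the destination's MAC address (ARP)
# False -> send the packet to the default gateway instead
print(same_subnet("202.221.100.121", "202.221.100.7", "255.255.255.0"))   # True
print(same_subnet("202.221.100.121", "202.221.101.7", "255.255.255.0"))   # False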

Just like an IP address, a subnet mask is 32 bits long. Writing the number of 1s in the subnet mask after a slash is called "prefix notation". An IP address of 219.34.29.114 with a subnet mask of 255.255.255.240 can be written as 219.34.29.114/28, because the subnet mask 255.255.255.240 has 28 1s and 4 0s (it looks like this: 11111111111111111111111111110000).
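Counting those 1s is exactly what prefix notation expresses. A short hypothetical Python sketch:

# Hypothetical sketch: derive prefix notation by counting the 1 bits of the mask.
import ipaddress
mask = int(ipaddress.IPv4Address("255.255.255.240"))
print(f"219.34.29.114/{bin(mask).count('1')}")   # 219.34.29.114/28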

When planning your subnets, always plan for growth. If you need 500 host IDs on each subnet of a class B network, choose a subnet mask of 255.255.252.0, not 255.255.254.0. If you later decide to add 11 more computers to any subnet, you would otherwise have to change the subnet mask and reconfigure every workstation and router on your entire network.
A subnet mask of 255.255.255.128 can be used on a class C network to maximize the number of available addresses when subnetting. For it to work, all routers on the network must be able to understand this type of mask.
A Cisco router can understand subnet zero, but it must be enabled by issuing the "ip subnet-zero" command. The problem with this type of mask is that it uses only one bit to identify the subnet, and that bit can only be 1 or 0. Older routers may not understand this subnet mask and will interpret it as an illegal one.

OSI model, DOD model, data encapsulation

The first step in becoming an expert in networking is understanding the OSI model.

OSI layers -

  1. Physical
  2. Datalink
  3. Network
  4. Transport
  5. Session
  6. Presentation
  7. Application

Physical

Functions
Data is sent across physical media like wires and hubs. This layer is responsible for the encoding scheme (such as Manchester encoding).

Devices
Hubs, Repeaters, Amplifiers, Transceivers

Protocols
None

Datalink

Functions
Packets are placed into frames at this layer. The CRC is added at this layer; if the CRC fails at the receiving computer, this layer requests retransmission. MAC addresses are resolved at this layer.

Devices
Bridges, Switches.


Protocols
CSMA/CD

Network

Functions
Logical addressing, routing of messages, and determining the best route.

Devices
Routers.

Protocols
IP, IPX, RIP, OSPF, ICMP, ARP, RARP, IGRP, BGP, EIGRP

Transport

Functions
Sequencing and error-free delivery. The sliding window operates at this layer.

Devices
Gateways

Protocols
TCP, UDP

Session

Functions
Responsible for opening, using, and closing a session. Also places checkpoints in the data flow, so that if the transmission fails, only the data after the last checkpoint needs to be retransmitted.

Devices
Gateways

Protocols
Network file system, SQL, RPC.

Presentation

Functions
Translating data into a format understandable for transmission. Data compression and encryption take place at this layer. The redirector works at this layer.

Devices
Gateways

Protocols
JPEG, MIDI, MPEG (all kinds of music, picture, and movie formats)

Application

Functions
The interface between the user and the computer. APIs are incorporated in this layer.

Devices
Gateways

Protocols
SNMP, FTP, TELNET, WWW, HTTP, MIME

To help remember the layers of the OSI model, I use this phrase: "People Develop Networks To Send Packets Accurately"
You may see questions about the OSI model that are very confusing. It's easier to find the correct answer if you associate each layer with specific "keywords".
To see what I mean, look at this table. Once you are familiar with all 7 layers of the OSI model, go over it a few times. Then, when you see a question on your exam, look for the "keywords"; they will give away the correct answer.

Physical: Bits, bit synchronization, transmissions, cable, repeater, hub, physical topology

Datalink: Data into frames, frames, framing, MAC address, hardware address, LLC (Logical Link Control) sublayer, bridge, polling, token passing, contention, switches, CRC (cyclic redundancy check), frame types

Network: Routing, routers, IP, IP address, IPX, RIP, OSPF, packet switching, Layer 3 addresses, Layer 3 protocol, network address, best route

Transport: Error-free delivery, segmenting data, reassembling data, sequencing, sliding window, windowing, determining availability of communication, flow control, acknowledgement, TCP, UDP

Session: Placing checkpoints, NFS, RPC, ASP (AppleTalk), X Window, SQL, coordinating communications

Presentation: ASCII, JPEG, MPEG, graphics, images, multimedia (you make a presentation of your graphics), compression, encryption, data transfer syntax

Application: Telnet, WWW, FTP, email gateway, gateway, applications, sending and receiving applications

Comments: The easiest layer to remember is the physical layer; it covers everything that does not involve any processing: wires, cables, hubs.
At the two highest layers of the OSI model, Presentation and Application, you have all the things you can see on your screen: browser, email, FTP prompt, graphics, multimedia. Just remember that all the graphics and multimedia are at the presentation layer. Imagine that you are an artist making a presentation of your art: pictures, movies, multimedia, graphics.

The advantages of using the OSI model are:

  1. Compatibility: different operating systems using the OSI model can communicate with each other.
  2. It defines general functions rather than specifying exactly how to do something.
  3. Developers can change the features of one layer without changing the code of the other layers, which simplifies troubleshooting.

The OSI model is just a model. It's up to the software manufacturer to decide which layers to implement. Most network operating systems don't use all 7 layers of the OSI model; depending on the software, only a 4- to 6-layer approach is used.
The DOD model is a simplified, four-layer counterpart of the OSI model. It's easier to remember four layers than seven. Unfortunately, now you have to learn both the OSI model and the DOD model.


DOD model             Protocols                                       Corresponding OSI layers
Process/Application   Telnet, FTP, TFTP, NFS, SMTP, SNMP, X Window    Application, Presentation, Session
Host-to-Host          TCP, UDP                                        Transport
Internet              IP, ARP, RARP, BootP, ICMP                      Network
Network Access        CSMA/CD                                         Data Link, Physical

Data encapsulation:

Encapsulation is the process of inserting the information of an upper layer into the data field of a lower layer. Let's say you want to send email from your PC to another PC on the Internet. First you type the message you want to send. This message is converted into 1s and 0s by the application layer. Then the presentation layer takes this message and adds its own header and footer bytes to it. Your message itself has not been changed; it is contained in the data field of the presentation layer.
Then the session layer takes the resulting message and adds its own header and footer. The process repeats until it reaches the physical layer. By the time the resulting packet gets to the physical layer, it is a lot longer than your original message, because it contains headers and footers from the other layers. The physical layer does not care what this data means; it just converts the data into bits and sends it onto the network media, such as UTP cable. The physical layer does not even care about the network address or the physical address of the packet, because the network address was added at the network layer and the physical address was added at the datalink layer.
When the packet arrives at its destination, the receiving computer performs the same process in reverse order (de-encapsulation). First, the physical layer of the receiving computer converts the signal into bits; then the datalink layer examines the physical address field of the packet. If the destination physical address of the packet matches the physical address of the machine, the datalink information is stripped off and the resulting packet is passed up to the network layer. The network layer then strips off the header that was placed by the network layer of the sending machine, and the rest of the packet is passed up to the transport layer. The process repeats until the packet finally arrives at the application layer. By that time, all the other headers and footers have been stripped off, and it looks like the email message you typed. The application layer then converts this message into a readable format (remember that when data arrives at the application layer it's still a bunch of 1s and 0s), and the email is displayed on the screen of the receiving computer.

For the exam you need to remember the order of data encapsulation.
1. User information converted to data
2. Data converted into segments
3. Segments converted into packets
4. Packets converted into frames
5. Frames converted into bits
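A hypothetical toy sketch in Python of the same wrapping idea (the headers are simplified to small dictionaries; real headers are binary fields):

# Hypothetical toy sketch of encapsulation: each layer wraps the one above it.
message = "HELLO"                                                # user data
segment = {"tcp": {"dst_port": 25}, "data": message}             # transport layer
packet  = {"ip": {"dst": "202.221.100.121"}, "data": segment}    # network layer
frame   = {"mac": {"dst": "00:11:22:33:44:55"}, "data": packet}  # data link layer
# De-encapsulation strips the wrappers in reverse order on the receiving host:
print(frame["data"]["data"]["data"])                             # HELLO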

Flow control:
Flow control is the process of adjusting the flow of data packets to ensure reliable data delivery and data integrity. Flow control is performed at the transport layer of the OSI model.
When packets are received by the destination computer, they are put into a buffer while being processed. If the buffer becomes full, any additional data packets will be discarded.
To prevent this from happening, the transport layer of the destination computer sends a "not ready" indicator to the sending computer, requesting it to temporarily stop transmitting.
Windowing is the mechanism used for flow control. The window is the number of data segments a source machine can send before it must receive an acknowledgement from the destination machine. The network administrator can increase or decrease the size of the window. On a reliable LAN, the size of the window should be increased; this decreases the number of acknowledgements, freeing up some bandwidth and increasing the speed of transmission. If the sending machine does not receive an acknowledgement, the packets are retransmitted. This method is called "positive acknowledgement with retransmission".
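A hypothetical Python sketch of a window of three segments (invented names; real transport windows are negotiated and adjusted dynamically):

# Hypothetical sketch of windowing: send up to 3 segments per acknowledgement.
window = 3
segments = ["seg1", "seg2", "seg3", "seg4", "seg5", "seg6"]
for start in range(0, len(segments), window):
    burst = segments[start:start + window]
    print("send:", burst)
    print("wait for acknowledgement of segment", start + len(burst))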