Friday, June 22, 2007

Routing Information Protocol (RIP)

Introduction

Routing Information Protocol (RIP) is an Interior Gateway Protocol used to exchange routing information within a domain or autonomous system.
RIP lets routers exchange information about destinations for the purpose of computing routes throughout the network. Destinations may be individual hosts, networks, or special destinations used to convey a default route.
RIP is based on the Bellman-Ford or the distance-vector algorithm. This means RIP makes routing decisions based on the hop count between a router and a destination. RIP does not alter IP packets; it routes them based on destination address only.

RIP Version 1 Support
Overview

RIP Version 1 contains the minimal amount of information required for routers to route data within a network. A RIP Version 1 packet contains the following information:
• Version - the version of RIP
• Command - Request, Response
• Address Family - used to identify the protocol associated with the address
• IP Address
• Metric or hop count - indicates the number of hops (routers) the packet must traverse before reaching the destination
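The packet layout above can be sketched in a few lines of Python; the field widths follow the RIP v1 format (a 4-byte header followed by 20-byte route entries), and the addresses and metrics shown are hypothetical:

```python
import struct

def build_ripv1_response(routes):
    """Build a RIP v1 Response packet from (ip_string, metric) pairs.
    A field-layout sketch, not a full RIP implementation."""
    # Header: command=2 (Response), version=1, two must-be-zero bytes
    packet = struct.pack("!BBH", 2, 1, 0)
    for ip, metric in routes:
        addr = bytes(int(o) for o in ip.split("."))
        # Entry: AFI=2 (IP), zero pad, address, 8 zero bytes, 32-bit metric
        packet += struct.pack("!HH4s8xI", 2, 0, addr, metric)
    return packet

pkt = build_ripv1_response([("10.0.0.0", 1), ("192.168.1.0", 3)])
print(len(pkt))  # 4-byte header + 20 bytes per route entry
```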

Maximum Hop Count
RIP permits a maximum hop count of 15. Any destination with a hop count exceeding 15 is identified as unreachable and, after a time, is removed from the routing table. The maximum hop count restricts the use of RIP in large networks; however, it prevents the problem of endless routing loops.

RIP Version 2 Support
Overview
RIP Version 2 is an extension of RIP Version 1. It expands the amount of useful information in RIP packets and adds security features. RIP Version 2 shares the same basic functionality as RIP Version 1; however, it resolves some of the shortcomings of the earlier version by providing the following enhancements:
• Support for Variable Length Subnet Masks
• Support for discontiguous subnets
• Password authentication
• IP address multicasting support
RIP Version 2 lets you design IP networks that need VLSM and authentication support without the complexity of OSPF. Moreover, RIP Version 2 may be a better solution in some less complex networks where the 15-hop limit and fixed metrics are not prohibitive.

Backward Compatibility
RIP Version 2 routers receive and send either RIP Version 2 or RIP Version 1 messages, depending on how you configure the interfaces on your routers. This means you can have routers running either RIP Version 1 or RIP Version 2 in your network. In addition, you can configure your routers to pass either or both versions’ packets.

Note
Using IP Multicasting in your network prevents RIP Version 1 routers from receiving RIP Version 2 messages.

Maximum Hop Count
Because of the requirement for compatibility with RIP Version 1, RIP Version 2 adheres to the same maximum hop count of 15.

RIP Limitations
RIP is primarily intended for use in homogeneous networks of moderate size. Because of this, RIP has some specific limitations including:
• The maximum number of hops is limited to 15; a hop count of 16 is considered infinite.
• The RIP metric (hop count) cannot adequately describe variations in a path’s characteristics and this could result in suboptimal routing. For example, hop count does not evaluate the link speed of a particular path.
• RIP is slow to find new routes when the network changes. This search consumes considerable bandwidth and, in extreme cases, exhibits a slow convergence behavior referred to as counting to infinity.
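The count-to-infinity behavior can be illustrated with a toy simulation. The two-router scenario and starting metrics below are hypothetical, and this is a sketch of the pathology rather than a RIP implementation:

```python
INFINITY = 16  # RIP treats 16 hops as unreachable

# Routers A and B both knew a route to network X: A via a (now failed)
# direct link, B via A with metric 2. After the failure, A believes B's
# stale advertisement, and each update round the metric climbs by one
# until both routers reach 16 and declare X unreachable.
a, b = INFINITY, 2   # A's link to X just failed; B still advertises 2
rounds = 0
while a < INFINITY or b < INFINITY:
    a = min(b + 1, INFINITY)   # A hears B's metric and adds one hop
    b = min(a + 1, INFINITY)   # B hears A's metric and adds one hop
    rounds += 1

print(rounds)  # 8 update rounds before both routers give up
```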

RIP Version 2 Packet Format

RIP Version 2 Subnet Masks
Overview
The Subnet Mask portion of a RIP Version 2 packet yields the non-host portion of the IP address. This means RIP Version 2 distinguishes between the host, subnet, or network route for a destination IP address to allow for subnet routing. RIP Version 1 dropped or incorrectly routed packets to disjointed or discontiguous subnets. This is not the case with RIP Version 2 because it sends the subnet mask along with the address.

Discontiguous Subnets
Because RIP Version 2 includes the subnet mask in the IP packet, it also supports discontiguous subnets. Using RIP Version 1, routers R1, R2, R3, and R4 can broadcast network level information only. Without configuring static routes between these routers, other packets cannot be routed over the disjointed subnets. Since RIP Version 2 packets include the subnet mask the packets pass successfully to the subnets.
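The effect of carrying the subnet mask can be shown with Python's standard ipaddress module; because RIP v2 advertises (network, mask) pairs, a router can tell exactly which destinations a route covers. The network and host addresses here are hypothetical:

```python
import ipaddress

# A RIP v2 route entry carries both the network and its mask
route = ipaddress.ip_network("172.16.10.0/255.255.255.0")

# With the mask, the router can decide precisely which hosts match
print(ipaddress.ip_address("172.16.10.42") in route)   # True
print(ipaddress.ip_address("172.16.20.42") in route)   # False
```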

RIP Version 2 Authentication
Overview

Authentication supports a simple 16-byte password key to provide security between routers. This means you can configure a password for each interface on your router. When you enter the password at the CTP, it is contained in the RIP Version 2 packet, and checked against the authentication key configured in the router. Only matching keys are allowed access to the router.
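The simple-password entry can be sketched as follows; the layout (AFI 0xFFFF, authentication type 2, password padded to 16 bytes) follows the RIP v2 wire format, and the password itself is hypothetical:

```python
import struct

def simple_auth_entry(password: str) -> bytes:
    """Build the RIP v2 simple-password authentication entry: AFI 0xFFFF,
    authentication type 2, and the password NUL-padded to 16 bytes.
    A sketch of the wire format only."""
    key = password.encode("ascii")
    if len(key) > 16:
        raise ValueError("RIP v2 simple passwords are at most 16 bytes")
    return struct.pack("!HH16s", 0xFFFF, 2, key)  # 16s pads with b"\x00"

entry = simple_auth_entry("s3cret")
print(len(entry))  # 20 bytes, the same size as a normal route entry
```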

RIP Version 2 Multicasting
Overview
RIP Version 2 supports broadcast or multicast updates. This means you can multicast RIP Request or Response datagrams instead of broadcasting them. This increases security and conserves resources on non-RIP hosts. Using an IP multicast address reduces the load on hosts unable to support routing protocols such as RIP. This feature also lets RIP Version 2 routers share information that RIP Version 1 routers cannot hear. This is important because RIP Version 1 routers may misinterpret route information, since they cannot apply the subnet mask supplied in RIP Version 2 packets.
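As a sketch, a listener that wanted to receive RIP v2 updates would join the well-known group 224.0.0.9; the packed membership request looks like this (the actual join call, shown commented out, would additionally need a UDP socket bound to port 520):

```python
import socket
import struct

RIP2_GROUP = "224.0.0.9"  # well-known RIP v2 multicast address

# The IP_ADD_MEMBERSHIP option takes a packed (group, interface) pair;
# 0.0.0.0 means "any local interface".
mreq = struct.pack("4s4s", socket.inet_aton(RIP2_GROUP),
                   socket.inet_aton("0.0.0.0"))
# sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
print(len(mreq))  # 8
```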

Note
IGMP is not needed on a RIP Version 2 router since inter-router messages are not forwarded.
RIP Version 2 and OSPF
Introduction
With the improvements of RIP Version 2, the differences between OSPF and RIP are less significant. Both OSPF and RIP version 2 now support:
• Variable Length Subnet Masks
• Discontiguous subnets
• Authentication
• Routing information sent by multicasting
If you must choose between using OSPF and RIP Version 2 for routing operations on your network, keep in mind that OSPF works best in large, hierarchical networks with redundant paths to destinations requiring best path routing decisions. RIP Version 2 works best in small networks with single links to remote destinations or simple backups.

Advantages and Disadvantages of OSPF and RIP
While a solution that combines using both protocols on some nodes may be used, there are some advantages and disadvantages to think about before you make your choice:
OSPF advantages include:
• Scalable for very large networks
- OSPF uses a path cost rather than a hop count to determine best path
- OSPF can be subdivided into defined areas
• Supports true best path routing
• Acknowledges routing information
• Easier to troubleshoot
• Fast convergence
OSPF disadvantages include:
• Complex network configuration
• Higher CPU and memory requirements
• No support for SVC rerouting
• Update frequency fixed at 30 minute intervals
• Increases routing table size
• Requires more routers for redundant all OSPF network design
• LSA flooding problems in unstable networks
RIP Version 2 advantages include:
• Simple configuration
• SVC rerouting
• RIP on Demand
• Low cost CPU and memory demand
RIP Version 2 disadvantages include:
• Hop count limit of 15 hops
• Does not always pick the best route because routing decisions are always made on hop count, not congestion or traffic limitations
• Slower convergence in large networks
• No acknowledgment of routing updates
• Difficult to troubleshoot.

Dynamic Host Configuration Protocol (DHCP)

Introduction

Dynamic Host Configuration Protocol (DHCP) is a communications protocol that enables network administrators to manage and automate the assignment of Internet Protocol (IP) addresses in a network. Every device connected to the Internet needs a unique IP address. When an organization sets up its computer users with a connection to the Internet, an IP address is assigned to each computer. Without DHCP, the IP address has to be entered manually at each computer, and if a computer moves to another location in another part of the network, a new IP address must be entered. DHCP lets a network administrator supervise and distribute IP addresses from a central point and automatically assign a new IP address when a computer is plugged into a different place in the network.

Limitations
1) It is recommended that only one IP interface per port be configured to use DHCP in order to prevent a situation where two interfaces on the router obtain addresses that are on the same subnet.
2) Enabling DHCP and On Net Proxy on the same Ethernet port is not recommended.
3) The router must have a Global Address before DHCP can be operational. If the router does not have a global address configured, it cannot process DHCP replies from the server, and as a result the DHCP configuration process fails.
4) A DHCP address cannot be used as a BGP ID.
5) The DHCP client is supported on Ethernet ports only.

Components
The Dynamic Host Configuration Protocol (DHCP) provides configuration parameters for Internet hosts. DHCP consists of two components:
1) A protocol for delivering host specific configuration parameters from a DHCP server to a host.
2) A mechanism for allocation of network addresses to hosts. DHCP is built on a client-server model, where designated DHCP server hosts allocate network addresses and deliver configuration parameters to dynamically configured hosts.
DHCP uses UDP as its transport protocol. DHCP messages from a client to a server are sent to the “DHCP server” port (67), and DHCP messages from a server to a client are sent to the “DHCP client” port (68). A server with multiple network addresses (such as a multi-homed host) may use any of its network addresses in outgoing DHCP messages.
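The fixed part of a client message can be sketched with Python's struct module. The transaction ID and MAC address below are hypothetical, and this is a field-layout sketch rather than a working DHCP client:

```python
import struct

DHCP_SERVER_PORT = 67  # client -> server messages
DHCP_CLIENT_PORT = 68  # server -> client messages

def bootp_header(xid: int, mac: bytes) -> bytes:
    """Pack the start of a DHCP client message (the fixed BOOTP part):
    op=1 (request), htype=1 (Ethernet), hlen=6, hops=0, transaction id,
    secs, flags, four zeroed address fields, then the client MAC padded
    to 16 bytes."""
    return (struct.pack("!BBBBIHH", 1, 1, 6, 0, xid, 0, 0)
            + b"\x00" * 16              # ciaddr, yiaddr, siaddr, giaddr
            + mac.ljust(16, b"\x00"))   # chaddr

hdr = bootp_header(0x12345678, b"\xaa\xbb\xcc\xdd\xee\xff")
print(len(hdr))  # 44 bytes so far (sname, file, and options follow)
```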

Although DHCP is not intended for use in configuring routers, routers can use DHCP to obtain some configuration parameters. Below are DHCP terms defined:

DHCP Terms Defined
BOOTP - Bootstrap Protocol.
DHCP Server - A host providing initialization parameters through DHCP.
DHCP Client - A host requesting initialization parameters from a DHCP server.
Lease - The period over which a network address is allocated to a client.


Benefits
Having DHCP client and server capability allows customers to reduce the amount of work necessary to administer any IP network. DHCP provides flexibility and allows for easy adds, moves and changes to networks that are divided into subnets on a geographical basis or on separate networks.

Address Resolution Protocol (ARP)

Introduction
The Address Resolution Protocol (ARP) is a low-level protocol that dynamically learns and maps network layer IP addresses to physical Medium Access Control (MAC) addresses, for example, Ethernet. Given only the network layer IP address of the destination system, ARP lets a router find the MAC address of the destination host on the same network segment. For example, a router receives an IP packet destined for a host connected to one of its LANs. The packet contains only a 32-bit IP destination address. To be able to forward the packet on the LAN, the router must construct the Data Link layer header using the physical MAC address of the destination host. The router must acquire this physical MAC address of the destination host and map that address to the 32-bit IP address.
To obtain the physical address of the host, the router broadcasts an ARP request to all hosts on the network. Only the host with that IP address responds with its physical MAC address. The router saves the IP-to-MAC address mapping in a table called the ARP cache and can use this mapping in the future when forwarding packets to the destination host.
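The cache itself is a simple IP-to-MAC mapping; a minimal sketch, with hypothetical addresses:

```python
# A router's ARP cache maps IP addresses to MAC addresses learned
# from ARP replies.
arp_cache = {}

def handle_arp_reply(ip, mac):
    arp_cache[ip] = mac   # remember the mapping for future packets

def resolve(ip):
    """Return the cached MAC, or None (the caller must then broadcast
    an ARP Request and queue the packet)."""
    return arp_cache.get(ip)

handle_arp_reply("192.168.1.20", "00:1a:2b:3c:4d:5e")
print(resolve("192.168.1.20"))  # 00:1a:2b:3c:4d:5e
print(resolve("192.168.1.21"))  # None -> ARP Request needed
```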

RFC
RFC 826 documents the ARP protocol.

ARP Physical Address Broadcast


Note
If the ARP cache does not contain an entry for a destination, the packet is queued pending an ARP Response. This means that the first packet sent between IP Hosts is queued until the expiration of the Time to Retry timer. If an ARP Response is not received within this time an ARP Request is retransmitted. All IP-based protocols perform this function.

Note
If a second IP packet, intended for the same Destination Address, arrives while the device is awaiting an ARP Response, the packet is queued but a second ARP Request is not sent. When another IP packet, intended for a different Destination Address, arrives while the device is awaiting an ARP Response for the first packet, an ARP Request for the second Destination Address is immediately broadcast to the network.
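The queuing rule in the notes above can be sketched as follows; function and variable names are hypothetical:

```python
from collections import defaultdict

pending = defaultdict(list)  # destination IP -> packets awaiting a reply

def send_or_queue(dst_ip, packet, arp_request):
    """Queue packets per destination while an ARP Response is pending;
    broadcast a Request only for the first packet to each destination."""
    first = not pending[dst_ip]       # nothing queued for this IP yet
    pending[dst_ip].append(packet)
    if first:
        arp_request(dst_ip)           # new destination: broadcast at once

requests = []
send_or_queue("10.0.0.5", "pkt1", requests.append)
send_or_queue("10.0.0.5", "pkt2", requests.append)  # queued, no new ARP
send_or_queue("10.0.0.6", "pkt3", requests.append)  # new dest: new ARP
print(requests)  # ['10.0.0.5', '10.0.0.6']
```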

Proxy ARP
Introduction

Modern IP hosts, such as workstations and PCs, transmit directly to either a destination host or router. If the destination is on the same IP network and subnetwork as the sender’s, the sender transmits an ARP request to determine the destination MAC address and then transmits directly to it over the LAN. If the destination’s net/subnet is not the same as the sender’s, the sender transmits the packet to a router. Hosts are usually configured manually with a default router, which is the IP address of a router on their LAN.
Older hosts may always attempt to ARP for a destination address, even if it is not on the local LAN. The older host expects the router to respond to the ARP request with the router’s MAC address. This is called Proxy ARP.

Hosts With No Subnet Support
If such a host attempts to send a packet to a destination on another subnet, it sends an ARP request to find the MAC address of the destination host. If the subnet is not on the local wire, a router configured for ARP subnet routing may respond to the ARP request with its own MAC address if the following conditions exist:
• The router has the location of the subnet in its routing table.
• The router sends packets to that subnet via a different interface than the interface that received the ARP request.
Because of the second condition, configure all routers on a local wire for ARP subnet routing when you use hosts without network subnet support.

Proxy ARP Request Example
The following list describes the sequence when a station requiring Proxy ARP wants to send an IP packet to a host on a remote network:
• The host issues an ARP request that contains the destination IP address.
• Any router enabled to respond looks at the IP address for a match in its routing table.
• If there is a match and the route does not pass back through the same LAN port where the ARP host resides, the router responds with an ARP response supplying its MAC address. If the matching route does pass back through the ARP host port, another router on that LAN is present, has a shorter path to the destination, and replies to the ARP itself.
• The host then sends the packet to the router using the newly learned MAC address.
• The host stores this information (that is, the mapping of the IP address to the MAC address) in a local cache so that if it sends another packet to the same destination, it can do so without sending an ARP Request.
• If the information is not used, it is aged out of the cache and may be relearned by resending an ARP Request.

Caution When Using Proxy ARP
The use of proxy ARP is discouraged in modern IP operation. Few hosts require it.

Proxy Subnet ARP
Introduction
Proxy Subnet ARP is the same as Proxy ARP except that the router responds to ARP requests for hosts it knows are on other subnets remote from the local subnetwork.
Sometimes hosts forward to a router for destinations with different class A, B, or C addresses, but ARP for any destination with the same class A, B, or C address as their own. They do not know about subnets of the class A, B, or C addresses. They expect the router to respond to the ARP for all subnets of the local class A, B, and C net and to forward to the proper subnet.

Proxy Subnet ARP Example
The following example shows a host that runs ARP but does not use subnetting (that is, subnetting is not configured or the software does not include subnetting). Unless the router is enabled to respond using Proxy Subnet ARP, it does not respond to this ARP and denies connectivity to other subnets of the same IP network.

Example Addressing Description
A single IP class B network number 128.12.0.0 is used to define two subnetworks connected by a router: 128.12.1.0 and 128.12.2.0 (mask 255.255.255.0). The host is on 128.12.1.0 and is attempting to send to 128.12.2.1.

If the host used subnetting, it would send the packet to its default router and rely on the router to deliver it to the destination on 128.12.2.0. Because the host does not use subnetting, it sees the IP network address as 128.12.0.0 (it knows only IP network addresses and therefore applies the class B mask 255.255.0.0 to obtain 128.12.0.0) and calculates that the destination is on the local LAN (because it has the same network number as itself). It therefore ARPs for the 128.12.2.1 address. The router must have Proxy Subnet ARP enabled in order to respond with its MAC address.
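The host's mistaken calculation can be reproduced with Python's ipaddress module, using the addresses from the example:

```python
import ipaddress

# With no subnet knowledge, the host applies the classful (class B)
# mask and concludes the destination is local, so it ARPs for it.
dst = ipaddress.ip_address("128.12.2.1")

classful = ipaddress.ip_network("128.12.0.0/255.255.0.0")    # host's view
subnet   = ipaddress.ip_network("128.12.1.0/255.255.255.0")  # real subnet

print(dst in classful)  # True  -> host ARPs directly for 128.12.2.1
print(dst in subnet)    # False -> the router must proxy-reply
```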

Inverse ARP
Description
Inverse ARP is a protocol which allows a device to automatically determine the IP Address of a remote device in a Frame Relay network.

Duplicate IP Address Detection
Duplicate IP Address Detection Defined
Duplicate IP Address Detection is used to detect whether the same IP address has been configured on multiple IP devices on the same LAN. If a user configures an interface with the same IP address as another device on the same LAN, the network will not work properly, because both devices could receive and respond to packets sent to that common IP address.

Note
Duplicate IP Address Detection cannot detect all address duplication problems. There is no central database holding all the IP address configurations of a full network, and only unicast addresses are checked.

Monday, June 18, 2007

OSPF

What is OSPF?

The Open Shortest Path First Protocol (OSPF) is an Interior Gateway Protocol (IGP) used to distribute information among routers belonging to an autonomous system (AS). Using a link-state protocol, each OSPF router maintains an identical database describing the AS topology. Routers use this database to calculate and create shortest path routing tables. Routers supporting OSPF share and update information using Link State Advertisements (LSAs). OSPF routers quickly learn and distribute routing information, which includes the set of all routers, the links between them, and the cost of each link.

RFC 1583 defines OSPF Version 2.

OSPF Features and Benefits


Benefits and features of OSPF routing include:
• Least cost routing — Lets you configure path costs based on any combination of network parameters, for example, bandwidth, delay, and dollar cost.
• Scalability — OSPF works with large networks and places no limitation on the routing metric or hop count.
• Area routing — Decreases the resources, such as memory and network bandwidth, consumed by the protocol and provides an additional level of routing protection.
• TOS routing — Packet routing based on Type of Service (TOS).
• Variable Length Subnet Masks — Lets you break an IP address into variable size subnets, conserving IP address space.
• Routing authentication — Provides additional routing security.
• CIDR — Classless Interdomain Routing.
• IP subnetting — OSPF supports IP subnetting and the tagging of externally derived routing information. It uses IP multicast when sending or receiving packets.


OSPF Routing Environment
Introduction

This section describes the AS (Autonomous System) or the OSPF domain, including concepts such as:
• Division of the AS into OSPF areas, OSPF backbone area, and stub areas.
• Variable-length subnetting.
• Functions of internal routers, neighboring routers, Area Border Routers
(ABRs), and AS Boundary Routers (ASBRs).
• Use of Hello protocol.
• Definition of Virtual Links when the OSPF backbone area is not contiguous.

Area
An area is a collection of contiguous networks and hosts together with one or more IP network address ranges. Each address range is an [address, mask] pair. OSPF lets you summarize all the networks in an area into one or a few IP address ranges. Then you assign the area a 32-bit area number.
The topology of any one area is hidden from that of the other areas, which reduces routing traffic and protects routing within an area from outside influence.

Backbone Area
The backbone area is the set of contiguous networks that interconnect all areas. All OSPF domains must have at least one backbone network. The backbone area distributes inter-area routing information. The backbone has an area ID of 0.0.0.0 and consists of any of the following:
• Networks belonging to Area 0.0.0.0.
• Routers attached to those networks.
• Routers belonging to multiple areas.
• Configured virtual links (see the “Virtual Links” section).
Many small- to medium-sized OSPF networks consist of all networks connected to the backbone.

Stub Area
A stub area is an area that does not allow advertisement of external routes. Frequently, branch offices are configured as stub areas and are connected in a star configuration to a hub router. We recommend that when there are more than 40 routers running OSPF, you define OSPF areas to limit the scope of the OSPF algorithm. Typically, you configure the set of branch routers connected to a central hub router as a separate OSPF area.

Internal Routers
Internal routers have all their directly connected networks belonging to the same area. Routers with only backbone interfaces belong to this category. These routers run a single copy of the basic routing algorithm.

Area Border Router (ABR)
An Area Border Router (ABR) connects an area to the backbone. This connection to the backbone can be direct or through a virtual link (see “Virtual Links”, below) and summarizes the area contents by advertising a single route for each address range. If an ABR is attached to multiple areas, it can run multiple copies of the basic algorithm, one copy for each attached area and an additional copy for the backbone. ABRs condense the topology information of attached areas for distribution to the backbone. The backbone distributes the information to other ABRs.

AS Boundary Router (ASBR)
An AS Boundary Router (ASBR) exchanges information with routers that belong to other autonomous systems. Such a router has AS external routes that are advertised throughout the AS. External routes are any routes learned by means of static routing and Routing Information Protocol (RIP).

Hello Protocol
The Hello protocol is the part of OSPF used to establish and maintain neighbors. This protocol is used to form neighbors, establish bidirectional communication, and elect a DR and a backup DR on multi-access networks.

Designated Router (DR)
Each multi-access network having at least two attached routers has a Designated Router (DR). The DR generates a link state advertisement (LSA) for the multi-access network and has other special responsibilities in the running of the protocol. The DR is elected by the Hello protocol. Having a DR reduces the number of adjacencies required on a multi-access network. This in turn reduces the amount of routing protocol traffic.
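The saving can be shown with a quick calculation; the formulas below assume a full mesh without a DR versus pairing every other router with the DR and backup DR (plus the DR-BDR adjacency itself):

```python
def adjacencies(n, with_dr=True):
    """Adjacency count on a multi-access network with n routers:
    a full mesh needs n*(n-1)/2 adjacencies; with a DR and backup DR,
    each of the other n-2 routers pairs only with those two, plus the
    DR-BDR pair. A back-of-the-envelope illustration."""
    if with_dr:
        return 2 * (n - 2) + 1 if n >= 2 else 0
    return n * (n - 1) // 2

print(adjacencies(10, with_dr=False))  # 45 without a DR
print(adjacencies(10))                 # 17 with a DR and backup DR
```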

Backup DR
The backup DR is elected on a multi-access network to ease the transition of DRs when the current DR becomes inactive.


Virtual Links
What are Virtual Links?
The backbone must be contiguous so that all areas are reachable. Virtual links are OSPF neighbor adjacencies established between two ABRs to maintain backbone connectivity. The virtual link forms a logical point-to-point serial connection between the endpoints and is a part of the backbone. When you need to configure a virtual link, configure the following in both ABRs:
• The virtual link.
• The virtual endpoint, which is the other ABR.
• The non-backbone area that the routers have in common, which is called the transit area.
Note
A virtual link cannot transit through a stub area.

Types of Routing

Introduction
The three types of OSPF routing are:
• Intra-area
• Inter-area
• External
Intra-Area Routing
Intra-area routing occurs when a packet’s source and destination addresses are in the same area within an AS. The router sends Hello packets to its neighbors and in turn receives their Hello packets.
The router attempts to form adjacencies with some of its newly acquired neighbors. The topological databases are synchronized between pairs of adjacent routers.
On multi-access networks, the DR determines which routers should become adjacent. Adjacencies control the distribution of routing protocol packets. Routing protocol packets are sent and received only on adjacencies. In particular, distribution of topological database updates proceeds along adjacencies.
Inter-Area Routing
Inter-area routing occurs when the packet’s source and destination addresses are in different areas within an AS. The ABRs form the adjacencies and synchronize the topological database.

External Routing
Routers that have information regarding other autonomous systems can flood this information throughout an AS. This external routing information is distributed to every router except for those connecting to stub areas.

OSPF Link State Advertisements
Description

A Link State Advertisement (LSA) describes a link from a router to its interface, including the metric. An LSA is flooded to all routers in an area to form the area database describing the topology of the area. It is originated on a delta basis, whenever information changes, or at least every 30 minutes otherwise. From this database, each router generates a Shortest Path First (SPF) tree with itself as the root. Then the router forms the routing table.
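The SPF calculation is Dijkstra's algorithm run over the link-state database; a minimal sketch, with a hypothetical four-router topology and link costs:

```python
import heapq

def spf(graph, root):
    """Dijkstra's shortest-path-first over a link-state database
    modelled as {router: {neighbor: cost}}. Each OSPF router runs
    this with itself as the root of the tree."""
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical four-router topology with symmetric link costs
graph = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2, "R4": 5},
    "R3": {"R1": 4, "R2": 2, "R4": 1},
    "R4": {"R2": 5, "R3": 1},
}
print(spf(graph, "R1"))  # {'R1': 0, 'R2': 1, 'R3': 3, 'R4': 4}
```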
LSA Format
An LSA is sent in LS update packets that are queued on LS Retransmit queues. Each LSA must be explicitly acknowledged in an LS Acknowledgment packet or it is retransmitted at the retransmission interval (5 seconds).
Each LSA has a sequence number established by its originator. Sequence numbers are signed 32-bit integers starting at -2^31 + 1 (0x80000001) and incrementing to 2^31 - 1 (0x7FFFFFFF).
LSA Components
An LSA has:
• Link State (LS) type (8 bits) — values 1 to 5.
• Link State ID (32 bits) — the destination of the link.
• Advertising Router (32 bits) — the source of the link.

LSA Types

Router Link - Originated by all routers and flooded throughout a single area only. It describes a router’s interfaces to a network or a router’s links to another router.
Network Link - Generated by the DR for multi-access networks. It contains the list of routers attached to the network.
Net Summary - Sent by the ABR summarizing net and mask of all networks in the area.
ASBR - Sent by the ABR and contains the routes to the ASBRs.
External - Sent by ASBRs and contains AS external routes. It is flooded into all but stub areas.

When Are LSAs Generated?
LSAs are generated in these cases:
• The LS refresh time expires.
• An interface’s state changes.
• An attached network’s DR changes.
• One of the neighboring routers changes to or from full state.

LSAs are generated by ABRs when:
• An intra-area route (see the “Types of Routing” section) has been added,
deleted, or modified in the routing table.
• An inter-area route has been added, deleted, or modified in the routing table.
• The router becomes newly attached to an area.

Variable-Length Subnetting
Introduction
OSPF attaches an IP address mask to each advertised route. The mask indicates the range of addresses being described by the particular route. Including the mask with each advertised destination enables variable-length subnetting. This means that you can break a single IP class A, B, or C number into many subnets of various sizes. A key advantage of defining OSPF areas is that you can summarize the networks at the area border with a single variable-length subnet advertisement.

Example of VLSM

An area that contains networks with IP net numbers 128.185.1.0 to 128.185.31.0 may be summarized in a single advertisement of the network 128.185.0.0 with a mask of 255.255.224.0. The Area Summary Statistics define how an Area Border Router (see the “Area Border Router (ABR)” section) summarizes the networks in its area when advertising on the OSPF backbone. In this example, the ABR sends one network summary advertisement rather than 31 individual network advertisements.
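The summarization in this example can be checked with Python's ipaddress module:

```python
import ipaddress

# The 31 subnets 128.185.1.0/24 .. 128.185.31.0/24 all fall inside one
# /19 summary, so the ABR can advertise a single range to the backbone.
summary = ipaddress.ip_network("128.185.0.0/255.255.224.0")

subnets = [ipaddress.ip_network(f"128.185.{i}.0/24") for i in range(1, 32)]
print(all(s.subnet_of(summary) for s in subnets))  # True
print(summary.prefixlen)                            # 19
```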

CIDR Support for OSPF
Classless Interdomain Routing (CIDR) is supported for OSPF.
CIDR Feature
CIDR support for OSPF includes the following:
• Enhancements of the area range configuration to allow aggregation of routes
on a classless boundary.
• Configuration of a mask on each interface to allow a classless boundary. Configuration of the classless mask on the IP interface allows aggregation of directly connected multiple networks on that interface. The router LSA will contain classless routes only if the interface is not a point-to-point interface. The router LSA will be built as specified by the RFC and would use the configured classless mask.
• Support for external route aggregation on a class-based or classless boundary.

Saturday, June 16, 2007

Understanding Frame Relay

In simple terms, Frame Relay is a packet-switched network. A "frame", in the context of Frame Relay, is a packet of data.
When a data stream enters a Frame Relay network it is broken into frames and each frame is sent across the network to the destination point. The frames contain header information which tells the intervening network nodes where to route them, and which output port to use.
At any one time an individual link in a Frame Relay network is carrying frames from many different sources en route to many different destinations. This frame mixing or multiplexing is one of the keys to Frame Relay’s advantages - it offers benefits over leased lines in terms of performance, cost-saving, manageability and resilience.

Comparisons
It’s worth comparing traditional point-to-point links with Frame Relay links in order to discuss the differences between the two methods.
A traditional leased line is a point-to-point link. You pay for the line irrespective of whether there is traffic on it. Consequently the line's cost efficiency, its cost per unit of data sent, can vary. When an organisation has multiple locations, connecting them together using leased lines can become very expensive.
With leased lines, each message has to complete before the next message can travel along the line. This can cause response time problems at sending/receiving terminals and PCs. For example, a 3270 terminal may be slow to respond to a transaction request sent via SNA over a leased line. With Frame Relay the frames from one data source can be intermingled with those from another.
Because a Frame Relay line can carry more data it is often more cost-effective than leased lines. Users have reported substantial cost savings, 20 to 30%, for example, as a result of substituting a Frame Relay network for a leased line network.

Reliability
With leased lines there is significant management overhead on the customer. Frame Relay networks, by contrast, can route frames around failed links or nodes. Leased lines cannot, and users of them often have to keep dial-up lines available as backups. Frame Relay is reliable enough that these backups can be dispensed with, saving both cost and management overhead. Frame Relay networks can also be reconfigured quite quickly so that, for example, a backup data centre can be used. This kind of reconfiguration would be impossible with leased lines.

Network Description
A user will typically have a private line to a node on a Frame Relay network. Line speed is fixed, and will be somewhere in the range of 56 Kbps to 2 Mbps depending on the service that has been purchased.
The network itself is composed of lines connecting nodes (also known as switches). The receiving location also has a private line to a Frame Relay node.
A permanent virtual circuit, or PVC, is defined to link the sending and receiving end points. The circuit is bi-directional. Frames are routed across the network from sender to receiver using header information which is added to the incoming data stream.
Note that it may be possible to have switched access to Frame Relay by, for example, dialling up an access point on the Frame Relay network over an ISDN interface. Data then flows from the user across an ISDN network and then into the Frame Relay network. Each logical connection from a site via ISDN uses a single ISDN channel; they cannot be multiplexed into one ISDN channel.
This may be a cost-effective way of connecting remote sites with low data traffic rates to a Frame Relay network.
All the nodes have entry and exit ports, and a particular route through the network involves each node knowing which exit ports to use for frames in a message. Each frame has a data link connection identifier - DLCI - which is used by the nodes to choose the right exit port. A DLCI is not constant across the network. It is of only local significance to a Frame Relay node. The routing tables in each node for a PVC take care of alternately reading and assigning DLCI values in frame headers before they send the frame on to the next node.
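The per-node DLCI swapping just described can be sketched as a lookup table. The following Python sketch is illustrative only; the port numbers and DLCI values are invented for the example, not taken from any real configuration.

```python
# Illustrative sketch of per-node DLCI swapping along a PVC.
# DLCIs are only locally significant: each node rewrites the DLCI
# before forwarding the frame on its exit port.

# One node's routing table: (entry_port, incoming_dlci) -> (exit_port, outgoing_dlci)
routing_table = {
    (1, 100): (3, 205),
    (2, 100): (3, 310),  # same DLCI value on a different port is a different PVC
}

def switch_frame(entry_port, dlci):
    """Look up the exit port and the rewritten DLCI for the next hop."""
    exit_port, next_dlci = routing_table[(entry_port, dlci)]
    return exit_port, next_dlci

print(switch_frame(1, 100))  # -> (3, 205)
```

Note how DLCI 100 on port 1 and DLCI 100 on port 2 map to different circuits; this is exactly why a mis-installed DLCI number breaks message delivery.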
When PVCs are first defined, mis-installation of DLCI numbers is a common error that prevents proper message transmission and receipt.

The FRAD
Data enters a Frame Relay network by passing through a FRAD, which is either a Frame Relay Access Device or a Frame Relay Assembler/Disassembler, depending on who you speak to. Either way, a FRAD is typically a router.
The FRAD breaks the data stream down into sections (frames), adds the header information and a check digit at the end, and sends the frames across the network. At the exit point of the network the frames are reassembled into a continuous data stream once more.
Each frame contains 5 bytes of header information. This header size is constant irrespective of how large the frame is. The larger the frame, the lower the overhead and the more efficient
Frame Relay is at turning theoretical bandwidth into available bandwidth.
A Frame Relay node takes no interest in user data within a frame at all. It looks for a flag that starts a frame, the header, and then for a check digit that marks the end of the user data.
Frames can be of variable size. Each one is transmitted as a stand-alone entity. If it gets corrupted en route through the network it is dropped. There is no frame error checking and recovery within the network; that is the responsibility of software that uses the network. Thus Frame Relay requires a virtually error-free transmission system for this approach to be viable.
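For illustration, here is how a node could pull the DLCI and the congestion/discard bits out of a frame header. This sketch assumes the standard two-byte Q.922 address field (the 5-byte overhead figure above presumably also counts the opening flag and the two-byte check sequence); it is a reading of the standard layout, not code from any particular product.

```python
def parse_address(b0, b1):
    """Decode a two-byte Q.922 address field (DLCI, FECN, BECN, DE).

    Assumed layout (standard two-byte form):
      b0: DLCI bits 9-4 | C/R bit | EA = 0
      b1: DLCI bits 3-0 | FECN | BECN | DE | EA = 1
    """
    dlci = ((b0 >> 2) << 4) | (b1 >> 4)  # ten-bit circuit identifier
    fecn = (b1 >> 3) & 1                 # forward congestion notification
    becn = (b1 >> 2) & 1                 # backward congestion notification
    de = (b1 >> 1) & 1                   # discard eligibility
    return {"dlci": dlci, "fecn": fecn, "becn": becn, "de": de}

# DLCI 100 with the BECN bit set:
print(parse_address(0x18, 0x45))  # -> {'dlci': 100, 'fecn': 0, 'becn': 1, 'de': 0}
```

The node never looks past this header and the trailing check sequence, which is the point made above: user data is opaque to the network.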

Renting Space
A telecommunications company (telco to those in the industry) will take a leased line, a T1 for example, and sell you Frame Relay bandwidth on it. So you are, in effect, using only part of the T1 line.
There may be 40 or more customers each with their own Frame Relay bandwidth on the line. Frames are statistically multiplexed on it. The telco is sharing lines and nodes between many customers. This is one reason why a 56 Kbps Frame Relay network is cheaper than a 56 Kbps
leased line.

CIR
A user commits to deliver data to a PVC at a Committed Information Rate (CIR), which is expressed as bits per second. There could be two 28 Kbps CIR PVCs defined over a single 56
Kbps interface to the network. Since data has to be submitted at the line speed, 56 Kbps for example, the CIR is an average over a period of a few seconds. If you under-submit it doesn’t
matter. If you over-submit (called "bursting over your CIR"), excess frames may be discarded if the network gets congested.
When you’re renting Frame Relay lines, you need to discuss CIR figures with the company to ensure that the line can cope with the amount of traffic you intend to throw at it. Congestion occurs when more data is attempting to cross the network than it can handle. When this is detected a congestion bit is added to a frame header to tell the sending FRAD it ought to slow down. It will then keep frames in its buffers until it stops receiving congestion bits.
If buffer space fills up, or there is none, then frames are discarded. The network knows the CIR for a sender and discards frames that have had a Discard Eligible (DE) bit set in their header, meaning that they represent a frame in excess of the sender's CIR.
Some FRADs can set the DE bit to signal that the frame is low priority. Thus senders can divide messages into normal and low priority groups.
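The CIR and DE-bit behaviour described above can be sketched with a simple per-interval bit counter. A real switch uses a more refined averaging mechanism; the CIR and interval values below are invented for illustration.

```python
# Sketch of DE-bit marking at the network entry point, assuming a plain
# per-interval counter. Bc = CIR * Tc is the committed burst per interval.

CIR = 28_000     # bits per second (committed rate for this PVC, illustrative)
TC = 0.125       # measurement interval in seconds (assumed)
BC = CIR * TC    # committed bits per interval -> 3500 bits

def mark_frames(frame_sizes_bits):
    """Return (frame_bits, de_bit) pairs; bits over Bc this interval get DE=1."""
    sent = 0
    marked = []
    for size in frame_sizes_bits:
        sent += size
        marked.append((size, 1 if sent > BC else 0))
    return marked

# The third frame bursts over the CIR, so it is marked discard-eligible.
print(mark_frames([1500, 1500, 1500]))  # -> [(1500, 0), (1500, 0), (1500, 1)]
```

Marked frames still travel normally; they are only the first to go if a congested node has to shed load.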

UNI
The UNI is the User Network Interface and defines what user devices need to do to initiate, operate and receive Frame Relay transmissions.

The PVC
When a location or device is given access to an existing Frame Relay network it means wiring in the access port and then configuring a permanent virtual circuit - PVC. A unique PVC connects each of the user’s sending nodes with each of their receiving nodes. Each PVC is defined by DLCIs with routing tables that designate the entry and exit ports on a node in the network.
The PVC is ready to use whenever data needs to be sent. This keeps call latency low.

Automatic Re-routing
If a node or line in a Frame Relay network goes down, the network is automatically reconfigured to route around the failed component. This is a matter of re-setting routing tables in each node on the network so that PVCs are redefined.

SVC
The Frame Relay specification defines Switched Virtual Circuits -SVCs - as well as PVCs. A calling device would request a connection to a destination device using internationally-recognized X.121 or E.164 numbering plans. SVC services may be cheaper than PVC services in circumstances where there is a low traffic rate or low connection rate. Also SVC services
mean that users don’t have to pre-configure and manage PVCs and can get additional bandwidth on demand.
This is useful where networks are in a state of flux. When an SVC is set up, any encapsulation procedures needed are agreed during the setup. The CIR is also agreed at that time. Note that Multicasting is only available over PVCs, not SVCs.

LMI
When there is no data passing across from the network to a user, the network is polled every 10 seconds or so and should return a "keep alive" message signifying that the link is operational. This polling is part of the Local Management Interface - LMI. The link will be assumed to be down if a certain number of keep-alive failures occurs; this allows noise on the line to be accommodated.

Frame Relay Forum
In 1991, 42 Frame Relay suppliers formed the Frame Relay Forum. There are now over 300 members and it has proved to be a very effective body in advancing market take-up of Frame
Relay technology.
The Forum has three subgroups which work to develop forum proposals. The Market Development and Education Committee aims to stimulate interest in the technology and its
benefits as well as serving as a user group. Multivendor issues are the concern of the Interoperability and Testing Committee.
The Technical Committee addresses technical issues to encourage interoperability and developments of Frame Relay technology. There are four main areas of activity: User Network
Interfaces (UNI), Network-to-Network Interface (NNI), Multicast Service and Multiprotocol Encapsulation Procedures. The committee develops UNI standards and conformance tests for
vendors to check their products and services. Items in the standard may be mandatory, highly desirable or not critical. Transferring data between network nodes belonging to different Frame
Relay vendors is where the NNI issues surface. A user may use Frame Relay services from two or more carriers with each carrier providing a segment of that user’s Frame Relay network.
Whole PVCs are broken down into PVC segments provided by each carrier. The sum of the segments makes up a complete PVC. The committee decides what each network has to do to
support such interoperability. A peer-to-peer interface operates between the carriers providing each network segment.
When a network detects that a User Network Interface or NNI is inoperative, each network notifies the adjacent network via the NNI that the PVC is inactive. The PVC status change is propagated through the adjacent networks to the remote users. The NNI also covers congestion management principles and CIR coordination between the network providers. Ideally a
user should see and receive the same service from a multi-carrier Frame Relay network as from a single carrier network. This is what the committee is working towards. Multicasting, a supplementary Frame Relay service, is the facility to accept a frame at a UNI and broadcast
the frame to multiple destinations. In One-Way multicasting, nodes in the Frame Relay network have an extra PVC. Frames transmitted on this PVC will be delivered to all the neighbouring
nodes. This is helpful for management traffic like routing table updates.
Two-Way multicasting allows a single point or "root" to multicast data units to a specified group of users. Frames transmitted by the root are seen by all group members. Frames transmitted
by group members are delivered only to the root. This is useful for remote learning applications.
With N-Way multicasting frames transmitted by any group member are seen by all group members. This is useful for conferencing situations. Multicasting destinations may be on one
network or multiple networks.

LAN Connections
When Frame Relay is used to interconnect LANs then the LAN traffic (Ethernet or Token Ring, for example) is encapsulated in the frames so that there is no logical break in the LAN structure; the interconnected LANs seem to be a single one. Multiprotocol Encapsulation Procedures describe the methods used to do this. They cover other protocols as well and also include
bridging and routing between LANs.

SNA And Frame Relay
SNA is IBM's Systems Network Architecture. SNA links may well have used leased 9.6 Kbps lines, either point-to-point or multidropped. Frame Relay will provide much more speed than this. It will also support multiple protocols, which the SNA links cannot. As IBM sites have added PCs on LANs and Unix systems with TCP/IP to their SNA 3270, LU6.2 etc. networks, Frame Relay is an attractive option for carrying these multiple protocols. Without it, multiple SNA lines have to be installed.
IBM has ensured that its SNA products support Frame Relay, and it supports all SNA topologies across Frame Relay networks. For example, in 1994 IBM released a new version of its Network Control Program, NCP 7.1, that allowed SNA networks to fully utilise Frame Relay. Also, PS/2s running OS/2 can have RouteXpander/2 software and Wide Area Concentrator - WAC - hardware, which enables them to access a Frame Relay network and function as a gateway for a small site. AIX RS/6000s can be connected to a mainframe host via a Token Ring LAN and Frame Relay rather than by SNA links between the mainframe and the Unix box. The same goes for terminals and mainframes.

ATM And Frame Relay
Both Frame Relay and ATM evolved from the broadband ISDN standards developed in the 1980s. The essential difference is ATM’s fixed length cell size versus Frame Relay’s variable length frames. ATM has a fixed 53-byte cell size. That means the header overhead is constant. If cells are not filled with data then the header overhead as a percentage of network traffic grows. Thus ATM may be less efficient than Frame Relay when the data load factor in cells is low.
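The overhead argument above can be made concrete with a small calculation. The 5-byte Frame Relay overhead and 53-byte ATM cell come from the text; the 1500-byte packet size is an arbitrary example.

```python
import math

def efficiency(payload_bytes, overhead_bytes):
    """Fraction of line bandwidth that carries user data."""
    return payload_bytes / (payload_bytes + overhead_bytes)

# Frame Relay: 5 bytes of overhead per frame, whatever the frame size.
fr_large = efficiency(1500, 5)  # large frame -> ~99.7% efficient
fr_small = efficiency(64, 5)    # small frame -> ~92.8% efficient

# ATM: 53-byte cells with a 5-byte header, so at most 48 payload bytes per cell.
# A 1500-byte packet needs ceil(1500/48) = 32 cells = 1696 bytes on the wire.
cells = math.ceil(1500 / 48)
atm = 1500 / (cells * 53)

print(round(fr_large, 3), round(atm, 3))  # -> 0.997 0.884
```

The gap widens further when cells are only partly filled, which is the "low data load factor" case mentioned above.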
Many analysts think that Frame Relay and ATM will co-exist, with an eventual trend for ATM to capture the backbone traffic and Frame Relay becoming a submitting route to an ATM backbone. ATM may spread to desktop devices, where its ability to cope well with delay-sensitive traffic like multimedia makes it suitable. In general, it is thought that Frame Relay transfers data more efficiently up to 56 Kbps, with ATM being better at higher rates.
The Frame Relay Forum, together with the ATM Forum, has defined a Frame Relay/ATM Network Interworking Implementation Agreement to cover this area. They have also been
working on Frame Relay-to-ATM protocol conversion.

Voice And Frame Relay
Voice traffic is very sensitive to delay. If a voice input stream is broken up into frames which are sent one after the other across a network and reassembled at the other end, the listener should hear normal telephone speech. However, if there are intervals between the frames then the speech becomes disjointed with pauses between sections. The sections may not be at word or sentence boundaries which lowers the perceived quality still further. Frame Relay’s inability to prioritise frames effectively doesn’t help and neither does the use of DE bit setting on frames above the CIR. You just cannot discard voice frames. Thus Frame Relay has been considered unsuitable for voice traffic.
However, there are moves to enable voice traffic to be sent over Frame Relay. The forum’s technical committee is working on a framework for a Voice Over Frame Relay implementation.
Some vendors, like Scitec, have voice solutions ready today. The committee is also working on data compression, which should help further and make Frame Relay more suitable for multimedia traffic. Voice-capable FRADs chop big frames up into small ones using ATM segmentation and re-assembly (SAR) techniques. This stops large frames delaying voice frames and hence lowering voice transmission quality. Integrating voice and data on a Frame Relay network can be effective for companies with international links. Voice calls between countries are expensive, and carrying voice over Frame Relay can save millions of dollars annually.

X.25 And Frame Relay
In Europe, X.25 lines have been popular for LAN-to-LAN and SNA connections. Frame Relay offers lower call latency, faster performance and cheaper line cost. It also has a lower data overhead per message. In X.25 the network handles error detection and retransmission. Frame
Relay networks let the user do this. For example, with the TCP/IP protocol, TCP establishes robust transport-level connections across a network and IP, a lower level protocol in the ISO scheme of things, carries data packets across the network. This combination handles retransmission of data if errors occur. It is a perfect fit for Frame Relay.

Frame Relay Traffic Shaping

Introduction

Traffic Shaping is a mechanism for controlling the rate of outgoing traffic in order to minimize network packet loss. It delays excess PVC traffic by queuing it when traffic throughput is higher than expected, and matches its transmission to the speed of the remote, target interface. Traffic shaping also avoids overloading a remote link by smoothing the traffic and regulating the average rate on an outgoing interface.

Feature Summary

Data Traffic Shaping features:
• Outgoing data traffic rate control in a range between the configured CIR and the line access rate
• Seamless transition from Traffic Shaping mode to configured Congestion Control Mode when the network is congested
• Support for existing Voice Bandwidth Allocation mechanism


Traffic Shaping -Rate Control

A configuration parameter, Maximum Information Rate (MIR), is available to control the station outgoing information rate and provide traffic shaping capabilities. When the Frame Relay station parameter Congestion Control Mode is configured as NORMAL, DISABLE or LIMIT, you have the option to limit the station outgoing information rate to some pre-determined value, the Maximum Information Rate (MIR). The value of MIR shall be equal to or greater than CIR and equal to or less than the local interface access rate. When this option is disabled the station transmits (without rate control) at maximum line speed.

Traffic Shaping - Measurement Interval

When the Traffic Shaping feature is enabled, the Measurement Interval Tc is forced to be in the range of 50 to 200 ms. When the calculated value Tc = Bc/CIR is greater than 200 ms, Tc is set to 200 ms and the burst size Bmir (Bmir = Tc * MIR) is calculated accordingly. The number of data bits transmitted per Tc is always kept equal to or less than Bmir.
Data throughput should not be affected by the value of Tc. When Bmir is small, it is likely that just a few packets can be transmitted per Tc and that some significant portion of the bandwidth may be unused. Since you are not allowed to send more than Bmir bits per Tc, the effective bandwidth is less than MIR. When voice is present, this effect is minimized due to the small size of voice packets as well as segmentation of data packets. When the variation in packet size is small, MIR shall be based on that size. For instance, when segmentation is enabled and End-to-End Segment Size When Voice Is Not Present is different from Disabled, the MIR shall be selected to allow an integer multiple of segments (including all headers) per Tc. In this way, when voice is not present, the data throughput will be closest to the selected MIR value. When the size of some packet is bigger than the allowed Bmir, the big packet is allowed to pass by crediting in advance, to avoid blocking the packet flow. When the size of the packet is bigger than Bmir, more than Bmir data bits will be transmitted in that Tc, but the multi-interval average rate is kept at no more than the MIR.
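The Tc clamping and Bmir calculation described above can be expressed directly. The CIR, Bc and MIR values below are invented for illustration.

```python
def shaping_parameters(cir_bps, bc_bits, mir_bps):
    """Measurement interval Tc (clamped to 50-200 ms) and the per-interval
    burst allowance Bmir = Tc * MIR, as described above."""
    tc = bc_bits / cir_bps            # raw interval in seconds: Tc = Bc/CIR
    tc = min(max(tc, 0.050), 0.200)   # forced into the 50-200 ms range
    bmir = tc * mir_bps               # bits allowed per interval
    return tc, bmir

# Illustrative numbers: CIR 32 kbps with Bc 16 kbits gives a raw Tc of 500 ms,
# clamped to 200 ms; MIR 48 kbps then allows Bmir = 9600 bits per interval.
print(shaping_parameters(32_000, 16_000, 48_000))  # -> (0.2, 9600.0)
```

A packet larger than 9600 bits would still be sent, by the advance-crediting rule above, with the average rate held at MIR across several intervals.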

Traffic Shaping in Presence of Voice

Traffic Shaping applies to data traffic only. Voice packets will not be delayed or dropped because of Traffic Shaping. When there is voice traffic through the PVC, a decision must be made whether data throughput or voice quality is the priority. To avoid voice degradation due to traffic shaping, the Voice Congestion Control Mode parameter shall be enabled. When both Traffic Shaping and Voice Congestion Control Mode parameters are enabled and voice is present, the Voice Congestion Control algorithm will apply (there is no traffic shaping for voice). Required bandwidth is reserved for voice on that PVC and the remaining bandwidth is used for data. At instances when voice is not present, data traffic is shaped. The data rate is then controlled by the shaping mechanism up to MIR, regardless of the Voice Congestion Control status. When Voice Congestion Control Mode is disabled, voice and data will share the bandwidth determined by MIR. In all cases, when there are more bits to be transmitted than allowed, data is queued and voice is transmitted.

Traffic Shaping Typical Example

Network congestion is formally defined as "traffic in excess of network capacity". Consequences of congestion are long delays, as a result of packet queuing in network switches, and packet loss, as a result of buffer overflow. A source and destination line speed mismatch in the presence of bursty traffic is a frequent cause of network congestion. Consider a central site (Node A) with a T1 line into the cloud, while the remote (branch or telecommuter) site has a lower speed line (56 Kbps). The central site sends packets at a much higher rate than the remote line can transmit to the destination node (the ingress rate is higher than the egress one). This results in a bottleneck (packet queuing and eventually dropping) in the egress switch. A possible solution is rate limiting at the central site, so that the remote line speed is not exceeded. Configuring the MIR for the corresponding stations in Node A to 56k limits their outgoing rates and prevents packet loss.

Frame Relay Transmission Fairness

Introduction
The transmission of frames is regulated on each DLCI so that one DLCI carrying intense traffic and/or large frames does not affect other DLCIs on the same link. Effectively, this shares the link transmission bandwidth amongst all DLCIs and ensures that, at the very least, each DLCI obtains its CIR. Transmission Fairness is beneficial to those stations that are constantly transmitting. Stations that transmit at infrequent intervals typically operate well below their CIR and, as such, transmit their data when necessary.
Transmission fairness does not imply a priority level. When the occasional frame is queued for transmission by a station, the frame waits its turn in the queue. Having a large CIR, with respect to other stations, does not mean that the frame is moved up in the queue. This would, however, apply to voice packets.
Since no priority is associated with transmission fairness, there is no overall performance change compared with pre-5.1 release software. This is especially true of applications that use a low window number for their interworking with a remote. A typical example is an application that sends a single message and waits for an acknowledgment before sending the next message.

Sharing Link Bandwidth
The sharing of link bandwidth is in proportion to the station's CIR. The amount of additional bandwidth given to a station is determined by its configured CIR and the sum of the CIRs of all transmitting stations.
When uncommitted bandwidth is available, it is divided between all transmitting DLCIs.

Zero CIR Configuration
When zero-CIR stations are configured, the Clock Speed parameter must be set to a value representing the actual link operating speed. This also applies to ports that are externally clocked.
Since all DLCIs with a non-zero CIR may periodically saturate the link, DLCIs with a zero CIR are not guaranteed bandwidth on the link.
These two formulas apply to non-congested conditions (when the station is not in a controlled sending state). For the purposes of the calculations, only actively transmitting stations are considered. Nt denotes the total number of such stations, while Nz denotes the number of such stations with CIR set to zero.
• The fraction of link bandwidth available to each zero CIR station (Fz) is calculated as:

Fz = (1 - (total CIR / link speed)) / Nt

• The fraction of link bandwidth available to each non-zero CIR station (Fn) is calculated as:

Fn = (Station CIR / total CIR) * (1 - Fz * Nz)


Example
For the purposes of this example, assume that a node has the following conditions:
• FRI port has a link speed of 64 kbps.
• All stations have a CIR of 16 kbps.
• Three Bypass stations carrying LAN traffic (stations 1, 2, and 3) and one
Annex G station carrying serial traffic (station 4).
Station 1 is idle.
Stations 2 through 4 are actively transmitting data.
Since each station is configured with the same CIR, the amount of bandwidth given
each active station is the same:
(16 x 64)/(16+16+16) = 21.3 kbps
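The example's arithmetic can be checked with a short Python sketch. It assumes the two fairness fractions read as Fz = (1 - total CIR / link speed) / Nt and Fn = (Station CIR / total CIR) * (1 - Fz * Nz), which is one reading of the formulas in the previous section.

```python
def bandwidth_share(station_cir, total_cir, link_speed, nt, nz):
    """Per-station link fraction under transmission fairness.

    nt = total actively transmitting stations, nz = those with CIR = 0.
    Returns (Fz, Fn): fractions for zero-CIR and non-zero-CIR stations.
    """
    fz = (1 - total_cir / link_speed) / nt
    fn = (station_cir / total_cir) * (1 - fz * nz)
    return fz, fn

# The example above: three active 16 kbps CIR stations on a 64 kbps link.
fz, fn = bandwidth_share(16_000, 48_000, 64_000, nt=3, nz=0)
print(round(fn * 64, 1))  # -> 21.3 (kbps per active station)
```

With no zero-CIR stations active, Fn reduces to Station CIR / total CIR, matching the (16 x 64)/(16+16+16) calculation above.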

Friday, June 15, 2007

Congestion Control with Frame Relay Interface Ports

Introduction

There are two types of congestion notification used to control station transmission rates; Explicit
and Implicit.
• Explicit Congestion notification is done by the attached network sending frames to the Frame Relay Interface station with the Backward Explicit Congestion notification (BECN) bit set in the frame header. This notifies the Frame Relay Interface station that the network is congested for the corresponding DLCI. Frame Relay Interface stations can be configured to respond to Explicit Congestion notification.
• Implicit Congestion notification is the process of an Annex G station detecting lost frames. Frame loss is detected when the LAP-B Annex G station is forced to retransmit a frame. Only Annex G stations can respond to Implicit Congestion notification.
Under normal conditions, neither the Annex G nor the Bypass stations set the
Discard Eligible (DE) bit.

Data Rate

An FRI station normally sends frames at the maximum rate available (line speed). It is possible for the station to exceed its committed rate. Usually, this is a temporary situation and statistically the station sends at or below its committed rate. However, if the network is experiencing congestion, then the implicit or explicit congestion mechanism causes the station to enter a controlled send state and lower its rate of transmission to cooperate with the network in congestion control.
An Annex G station that is in a controlled send state still sends frames that carry voice traffic. This occurs even if transmission causes the send rate to exceed the controlled rate. These frames are not buffered and are sent as quickly as possible. Excess rate voice frames are sent with the DE bit set.

Congestion Control for DTE

For a Frame Relay DTE Interface port, there are five configurable parameters
related to congestion control:
• Committed Information Rate (CIR)
• Committed Burst Size (BC)
• End-to-End Delay
• Congestion Control Mode
• Maximum Information Rate (MIR)

Committed Information Rate (CIR) and Committed Burst Size (BC)

The values to use for the CIR and BC are those to which the Frame Relay port and its DLCIs have subscribed. If this port connects to a Frame Relay carrier, these parameter values are provided by the carrier and should be set accordingly.
•Note
These parameters cannot be tuned; they are set by the provider of the Frame Relay network at subscription time.

End-to-End Delay

The End-to-End delay parameter determines the value of the internal step count parameter used to reduce the transmission rate when congestion is measured by the station. The End-to-End Delay value can be estimated and supplied by the provider of the Frame Relay service. It can also be measured, but this is difficult to do and the estimate is usually sufficient.
These parameters are configured on a per station basis. Excessive frame loss due to congestion indicates the step count used in reducing the transmission rate may be too large. This situation can be improved by adjusting the End-to-End Delay parameter.

Congestion Control Mode
You use the Congestion Control Mode parameter to define how the station handles both explicit and implicit congestion notification. The FRI station detects the Frame Relay network's congested state when it receives a frame from the network with the BECN bit set to one (1). As a sender, it constantly monitors this bit in frames received from the Frame Relay network. If the BECN bit is detected as being set, the transmitter reduces its rate of transmitting data bits. Note that the rates are the maximum rates of transmission. Obviously, such rates are achieved only if the transmitter has data constantly queued for transmission on the station.

Maximum Information Rate (MIR)

Introduction

In order to control the station outgoing information rate and provide traffic shaping capabilities, a station configuration parameter, Maximum Information Rate (MIR), has been created. The MIR parameter is accessible only when the Frame Relay station configuration parameter Congestion Control Mode is configured as NORMAL, DISABLE or LIMIT. Valid values for this parameter are between CIR and the local interface access rate. While the network is uncongested, the station's maximum average transmission rate is determined by this parameter. The Measurement Interval Tc is forced to be in the range 50 to 200 ms. This reduces burstiness and further reduces the chance of congestion. Large Tc values can cause large gaps between packets, because packets are sent at the beginning of the interval. Smaller Tc values smooth traffic by spreading one big burst over several time intervals. This reduces the chance of long delays of voice packets caused by previously accumulated data packets in network switches. When the MIR parameter is set to the default value 0, Traffic Shaping is disabled and the station rate and operation are unchanged. When MIR is enabled, the station state is Controlled.

•Note
Voice packets and packets having priority PRI_EXP_DROP are exempt from rate control. These packets will not be queued or discarded even when the rate is higher than MIR.

Explicit Congestion Control

Introduction

Both Annex G and Bypass stations permit the use of explicit congestion control. Explicit Congestion Control is the process of reducing a station’s transmission rate when the attached network sends frames to the FRI port, with the BECN bit set. The Congestion Control Mode parameter determines how a station reacts to the BECN.

Normal Congestion Control

This mode of congestion control is obtained by setting the parameter Congestion Control Mode to NORMAL. A station is initially in the uncontrolled state and can transmit data when data is available. This means that the maximum number of characters allowed is only limited by the link speed. Upon receiving the first BECN from the network, the allowed transmission rate is immediately reduced to ensure the CIR is not exceeded, and the station goes into a controlled state. In the controlled state, a step count algorithm calculates two parameters:
Step Count equals (CIR x End-to-End Delay) / max packet size
Delta-T equals Committed Burst Size / CIR

Here, max packet size equals a nominal value of 2088 bits; the other values are taken from the station's configured values, with the CIR value in bits per second and End-to-End Delay in seconds.

•Note
Step Count cannot be less than 4 or greater than 255.
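The Step Count and Delta-T formulas above, with the clamp from the note, can be computed as follows. The CIR, delay and Bc values are invented for illustration.

```python
MAX_PACKET_BITS = 2088  # nominal maximum packet size, in bits

def congestion_parameters(cir_bps, end_to_end_delay_s, bc_bits):
    """Step Count = (CIR * End-to-End Delay) / max packet size, clamped to 4..255;
    Delta-T = Committed Burst Size / CIR, in seconds."""
    step_count = (cir_bps * end_to_end_delay_s) / MAX_PACKET_BITS
    step_count = int(min(max(step_count, 4), 255))
    delta_t = bc_bits / cir_bps
    return step_count, delta_t

# Illustrative values: CIR 64 kbps, 150 ms end-to-end delay, Bc 32 kbits.
print(congestion_parameters(64_000, 0.150, 32_000))  # -> (4, 0.5)
```

A longer end-to-end delay raises the Step Count, so the station waits for more consecutive BECNs before stepping its rate down again.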

These parameters are used to measure and control congestion and to either reduce the
rate further or increase the rate (re-enter uncontrolled state). Delta-T is the average
time in which a specific number of characters are allowed to be transmitted. While in the controlled state:
• If the number of additional BECNs received (consecutive frames with the BECN bit set) is greater than, or equal to the Step Count, the maximum transmission rate allowed is reduced to 5/8 of CIR. This applies if the allowed rate is between 5/8 CIR and CIR.
• If the number of additional BECNs received (consecutive packets with the BECN bit set) is greater than, or equal to the Step Count, the maximum transmission rate allowed is reduced to 1/2 of CIR. This applies if the allowed rate is between 1/2 CIR and 5/8 CIR.
• If the number of BECNs received (consecutive packets with the BECN bit set) is greater than, or equal to the Step Count, the maximum transmission rate allowed is reduced to 1/4 of CIR. This applies if the allowed rate is between 1/4 CIR and 1/2 CIR.
• If further BECN bits are received, the transmission rate is not set below 1/4 CIR (that is, the lowest transmission rate that can be set).
The Frame Relay network stops sending frames with the BECN bit set when it recovers from its congested state. The FRI station counts the number of consecutive frames received without the BECN bit set. When the number of frames with BECN set to zero exceeds (Step Count)/2, it increases the allowed transmission rate in increments of 1/8 CIR and again counts the number of consecutive frames with BECN set to zero to repeat the increment process. Once the transmission rate reaches CIR, the station leaves the controlled state.
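The NORMAL-mode rate ladder (down through 5/8, 1/2 and 1/4 of CIR, recovering in 1/8 steps) can be sketched as follows. This is an illustrative model of the steps described above, not vendor code; rates are expressed as fractions of CIR.

```python
# Allowed transmission rate as a fraction of CIR. In NORMAL mode the first
# BECN pulls the station down to CIR (1.0 here); each further run of
# Step Count BECNs takes one reduction step.

def reduce_rate(rate):
    """One reduction step: the rate never drops below 1/4 CIR."""
    if rate > 5 / 8:
        return 5 / 8
    if rate > 1 / 2:
        return 1 / 2
    return 1 / 4

def recover_rate(rate):
    """After (Step Count)/2 BECN-free frames, step back up by 1/8 CIR."""
    return min(rate + 1 / 8, 1.0)

rate = 1.0                 # controlled state, at CIR
rate = reduce_rate(rate)   # 5/8 CIR
rate = reduce_rate(rate)   # 1/2 CIR
rate = reduce_rate(rate)   # 1/4 CIR (the floor)
print(rate)                # -> 0.25
print(recover_rate(rate))  # -> 0.375
```

Repeated recovery steps walk the rate back up in eighths until it reaches CIR and the station returns to the uncontrolled state.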

The NORMAL mode is used in most cases when the port is attached to a Frame Relay network provider. It gives a measure of protection from frame loss if the Frame Relay network becomes so congested that it loses frames even when the transmission rate was near CIR bounds. If lost frames require retransmission, then this is the best mode, since retransmission into a congested network causes further congestion.

Disable Congestion Control

This mode of congestion control is obtained by setting the Congestion Control Mode to DISABLE. This disables the FRI station's rate reduction congestion management mechanism; use this value when frame loss by the network is not an issue. It allows the transmitter to send at its highest rate without regard to possible congestion frame loss. If frame loss is an issue for the application using this DLCI, the application usually employs a retransmission scheme to detect and resend lost frames. If this is the case, be aware that disabling congestion control may actually reduce throughput: retransmissions into an already congested network only add to the congestion, and congestion can become so severe that overall throughput drops below the level that would be achieved if the transmitter reduced its rate using the congestion notification mechanisms.
Note
Annex G stations always operate with a LAP-B procedure and retransmit on detecting frame loss. This mode might not be desirable for such stations.

Congested Congestion Control

This mode of congestion control is obtained by setting the Congestion Control Mode to CONG. A station is always in the controlled mode, that is, the maximum transmission rate allowed never exceeds the CIR. In this controlled state, the same rate control algorithm applied in the NORMAL mode is used to further control the transmission rate if BECN bits are received.
This mode allows the transmission rate to be set to a maximum of CIR, even when there is constant data queued for transmission. This mode is useful in situations where the attached network discards frames which are received at a rate greater than CIR.
Note
An important example of the use of this mode would be a Frame Relay network configured to discard frames that are in excess of CIR.

Limit Congestion Control

This mode of congestion control is obtained by setting the Congestion Control Mode to LIMIT. A station is initially in the uncontrolled state, that is, the maximum transmission rate allowed is limited only by the link speed. Upon receiving the first BECN from the network, the maximum transmission rate allowed is reduced to CIR, and the station goes into the controlled state. The maximum allowed transmission rate is never reduced below CIR, regardless of the number of BECN bits received. Upon receiving [(Step Count)/2] consecutive frames without the BECN bit set, the station goes back into the uncontrolled state.
This mode can be selected when the Frame Relay network is not usually subjected to congestion conditions. Occasional light congestion experienced by the network causes the station to reduce the transmission rate to the CIR value and no lower.
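The LIMIT mode described above amounts to a two-state machine: uncontrolled (limited only by link speed) and clamped to CIR. A minimal sketch, with illustrative class and parameter names:

```python
# Sketch of LIMIT-mode congestion control: the rate is clamped to CIR on the
# first BECN and restored once (Step Count)/2 consecutive BECN-clear frames
# arrive. Names are illustrative, not from the product.

class LimitModeControl:
    def __init__(self, cir, step_count, link_speed):
        self.cir = cir
        self.half_step = step_count // 2
        self.link_speed = link_speed
        self.controlled = False
        self.clear_run = 0

    @property
    def max_rate(self):
        # never reduced below CIR, regardless of how many BECNs arrive
        return self.cir if self.controlled else self.link_speed

    def on_frame(self, becn):
        if becn:
            self.controlled = True
            self.clear_run = 0
        elif self.controlled:
            self.clear_run += 1
            if self.clear_run >= self.half_step:
                self.controlled = False
                self.clear_run = 0
```

With CIR = 64000 on a 128000 bit/s link and Step Count = 4, one BECN clamps the allowed rate to 64000, and two consecutive clear frames restore the full link rate.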

Implicit Congestion Control
Introduction
For implicit congestion control on Annex G stations, any time the station needs to retransmit (frame loss / REJ), it informs the FRI port congestion control mechanism. When congestion control receives this indication, it immediately reduces the maximum allowed transmission rate to 1/4 of CIR and goes into the controlled state. The same rate recovery algorithm used for NORMAL congestion control is used to get out of the controlled state. This consists of receiving [(Step Count)/2] consecutive frames from the network with the BECN bit clear to increase the allowed transmission rate by 1/8 of CIR, and repeating this process until the CIR rate is achieved. A final [(Step Count)/2] count of frames with BECN set to zero moves the station into the uncontrolled state.
Annex G stations using implicit congestion also use explicit congestion control. In effect, the detection of frame loss is simply an additional method of sending a station into the controlled state, at an allowed rate of 1/4 CIR. Once the station enters a controlled state, the recovery process is the same, that is, counting consecutive frames with BECN set to zero.
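The implicit mechanism differs from the explicit one only in its entry condition, which a small self-contained sketch makes concrete (names are illustrative assumptions):

```python
# Sketch of the implicit congestion entry/recovery cycle for Annex G
# stations: a detected retransmission drops the allowed rate straight to
# 1/4 CIR; recovery then follows the BECN-clear counting described above.

class ImplicitCongestion:
    def __init__(self, cir, step_count):
        self.cir = cir
        self.half_step = step_count // 2
        self.rate = None      # None => uncontrolled
        self.clear_run = 0

    def on_retransmission(self):
        # frame loss / REJ detected: go straight to 1/4 CIR
        self.rate = self.cir / 4
        self.clear_run = 0

    def on_frame_becn_clear(self):
        # every (Step Count)/2 clear frames adds 1/8 CIR; a final run
        # at CIR returns the station to the uncontrolled state
        if self.rate is None:
            return
        self.clear_run += 1
        if self.clear_run >= self.half_step:
            self.clear_run = 0
            if self.rate >= self.cir:
                self.rate = None          # uncontrolled
            else:
                self.rate = min(self.rate + self.cir / 8, self.cir)
```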

Wednesday, June 13, 2007

EIA Connection Types (X.25)

Introduction
A device connected to a port can establish and maintain a connection only after a proper handshake using control signals has occurred. This is called the EIA connection establishment and should not be confused with the physical connection to the port. A port’s physical level is in an idle state when there is no EIA connection and when it is disconnected.
Connection Types
Different types of EIA connections can be used depending on the setting of the Connection Type parameter (in the Port Record):
• SIMP: Simple connection with no control signal handshake.
• SIMPv: The modem switches from leased to dial-only mode when the leased line goes down.
• DTR: Connection with DTR control signal handshake.
• DTRD: Same as DTR but control signals drop.
• DTRP: When DTR needs to be passed end-to-end.
• DIMO: Dial modem attached to the port and does dial-in/out handshake.
• DIMOa: Same as DIMO except DSR not raised.
• DIMOb: Same as DIMO except DSR follows DTR.
• DIMOv: The port handshakes with attached V.25 bis dial modem.
• EMRI: Port emulates a modem and does dial-in/out handshake with RI.
• EMDC: Port emulates a modem and does dial-in/out handshake with DCD.
Disable/Enable Ports
When a port is disabled, its EIA connection type is changed to NULL and all input control signals are ignored. All output control signals are dropped. If the parameter Port Control is set to MB (Make Busy), RI (pin 22) is raised. When a disabled port is enabled, its EIA connection type changes back to the configured EIA connection type. If the parameter Port Control is set to MB (Make Busy), RI (pin 22) is lowered.
SIMP (Simple) Connections
Introduction
This connection type is used when terminals are connected to a port with a cable that has minimal conductors. Most control signals are absent because of the lack of conductors. This kind of cabling provides only ground, transmit and receive data, transmit and receive clock.
Note
For DCE ports, DCD, DSR, and CTS control signals remain high. For DTE ports, RTS, DTR, and DRO control signals remain high.
DCE EIA Status for SIMP
Connection - Outbound control signals DCD, DSR, and CTS (pins 8, 6, and 5) are held high at all times. On asynchronous PAD ports, if EIA data restraint is enabled, CTS and RTS (pins 5 and 4) may change according to the requirements of data restraint. Inbound control signals DTR and MB (pins 20 and 25) are ignored.
DTE EIA Status for SIMP
Connection - Outbound control signals RTS, DTR, and DRO (pins 4, 20, and 14) are held high at all times. On asynchronous PAD ports, if EIA data restraint is enabled, DCD and DRO (pins 8 and 14) may change according to the requirements of data restraint. Inbound control signals DCD, DSR, and CTS (pins 8, 6, and 5) are ignored.
SIMPv
This is a combination of SIMP and DIMOv Connection Types. It starts as SIMP and after the SIMP connection goes down (leased line), the Connection Type switches to DIMOv (dial line). This is used with dial restoral modems.
DTR Connections
Use this connection type when the device connected to the port provides basic control signals to maintain the EIA connection. The remote user calling the device through a PAD port will know if the device is disconnected or powered down because the call will not be completed. Users connecting to a PAD port will access the terminal handler. They can manually call or be automatically connected if the port is configured for autocalling.
DCE Port States:
Idle - DCD, DSR, and CTS (pins 8, 6, and 5) are held high at all times.
Connection - The port monitors DTR (pin 20). If it is detected high, the EIA connection is established. RTS is ignored. A device on the asynchronous PAD port connects to the terminal handler. A call from the network is accepted if DTR is active. On asynchronous PAD ports, if EIA data restraint is enabled, CTS (pin 5) may go low during the connection.
Disconnection - The port monitors DTR (pin 20). If it goes low for more than 1.5 seconds, disconnection occurs and a call clear is sent to the network.
DTRD Connections
Introduction
Use this connection type only on asynchronous PAD ports. Some devices require APAD ports to lower the control signals for a short period after the call is terminated.
DCE Port States:
(This table describes the conditions during various states for DTRD connections on DCE ports.)
Idle - DCD, DSR, and CTS (pins 8, 6, and 5) are held high at all times.
Connection - The port monitors DTR (pin 20). If it is detected high, the EIA connection is established. RTS is ignored. A device on the asynchronous PAD port connects to the terminal handler. A call from the network is accepted if DTR is active. On asynchronous PAD ports, if EIA data restraint is enabled, CTS (pin 5) may go low during the connection.
Disconnection - The port monitors DTR (pin 20). If it goes low for more than 1.5 seconds, the port drops DCD, DSR, and CTS (pins 8, 6, and 5) for one second. A call clear is sent to the network, and the port returns to the idle state. During the control signal drop, the port cannot receive calls from the network. If the user clears the call by entering [CLR], the signals do not drop. If the call is cleared by an X.29 invitation to clear, the signals remain high when the parameter Invitation to clear = CLRWO; the signals are dropped when the parameter Invitation to clear = CLRWD.
DTE Port States:
(This table describes the conditions during various states for DTR connections on DTE ports)
Idle - RTS, DTR, and DRO (pins 4, 20, and 14) are held high at all times.
Connection - The port monitors DSR (pin 6). If it is detected high, the EIA connection is established. DCD is ignored. A device on the asynchronous PAD port connects to the terminal handler. A call from the network is accepted if DSR is active. On asynchronous PAD ports, if EIA data restraint is enabled, DRO (pin 14) may go low during the connection.
Disconnection - The port monitors DSR (pin 6). If it goes low for more than 1.5 seconds, the port drops RTS, DTR, and DRO (pins 4, 20, and 14) for one second. A call clear is sent to the network, and the port returns to the idle state. During the control signal drop, the port cannot receive calls from the network. If the user clears the call by entering [CLR], the signals do not drop. If the call is cleared by an X.29 invitation to clear, the signals remain high when the parameter Invitation to clear = CLRWO; the signals are dropped when the parameter Invitation to clear = CLRWD.
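The 1.5-second disconnection threshold used by both the DCE and DTE states above is essentially a polled debounce. A minimal sketch; the class, the polling framing, and the timestamp source are illustrative assumptions:

```python
# Sketch of the DTR/DSR disconnection debounce: the monitored line must stay
# low for more than 1.5 seconds before the port treats it as a disconnect.

DISCONNECT_HOLD = 1.5  # seconds the line must stay low

class DtrMonitor:
    def __init__(self):
        self.low_since = None   # timestamp when the line was first seen low
        self.connected = True

    def sample(self, line_high, now):
        """Poll the monitored control line; `now` is a monotonic time in s."""
        if line_high:
            self.low_since = None       # any high sample resets the timer
        else:
            if self.low_since is None:
                self.low_since = now
            elif self.connected and now - self.low_since > DISCONNECT_HOLD:
                self.connected = False  # here the port would send a call clear
        return self.connected
```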
DTRP Connections
Port States for DTRP (Originate End: Autocall Configured)
Idle - DCD, DSR, and CTS (pins 8, 6, and 5) are maintained low.
Connection - The port monitors DTR (pin 20). If it is high or goes high, the port makes a network call according to the autocall mnemonic and waits for the call to be accepted by the remote PAD. If the call is accepted, the port raises DCD, DSR, and CTS (pins 8, 6, and 5) and the connection is established. If the call is not accepted, the port continues to autocall until it reaches the autocall limit. DCD, DSR, and CTS (pins 8, 6, and 5) will remain low.
Disconnection - The port monitors DTR (pin 20). If it goes low for at least 50 milliseconds, the port drops control signals DCD, DSR, and CTS (pins 8, 6, and 5), clears the call, and returns to the idle state. If the call is cleared from the network or by the user entering [CLR] at the port, the port immediately drops the control signals.
DTRP connections on DTE ports:
Idle - RTS, DTR, and DRO (pins 4, 20, and 14) are maintained low.
Connection - The port monitors DSR (pin 6). If it is high or goes high, the port makes a network call according to the autocall mnemonic and waits for the call to be accepted by the remote PAD. If the call is accepted, the port raises RTS and DTR (pins 4 and 20) and the connection is established. If the call is not accepted, the port continues to autocall until it reaches the autocall limit. RTS, DTR, and DRO (pins 4, 20, and 14) remain low.
Disconnection - The port monitors DSR (pin 6). If it goes low for at least 50 milliseconds, the port drops RTS, DTR, and DRO (pins 4, 20, and 14), clears the call, and returns to the idle state. If the call is cleared from the network or by the user entering [CLR] at the port, the port immediately drops the control signals.
Port States for DTRP (Answer End: No Autocalling)
DCE ports:
Idle - DCD, DSR, and CTS (pins 8, 6, and 5) are maintained low.
Connection - When a call arrives from the network, the port raises DCD, DSR, and CTS (pins 8, 6, and 5) and monitors DTR (pin 20). If DTR is high or goes high, the PAD accepts the call.
Disconnection - The port continues to monitor DTR (pin 20). If it goes low for at least 50 milliseconds, the port drops DCD, DSR, and CTS (pins 8, 6, and 5), clears the call, and returns to the idle state. If the call is cleared from the network or by the user entering [CLR] at the port, the port immediately drops the control signals. If DTR is not raised within three seconds after the call arrives from the network, the port drops the control signals and clears the call.
DTRP connections on DTE ports.
Idle - RTS, DTR, and DRO (pins 4, 20, and 14) are maintained low.
Connection - When a call arrives from the network, the port raises RTS, DTR, and DRO (pins 4, 20, and 14) and then monitors DSR (pin 6). If DSR is high or goes high, the PAD accepts the call.
Disconnection - The port continues to monitor DSR (pin 6). If it goes low for at least 50 milliseconds, the port drops RTS, DTR, and DRO (pins 4, 20, and 14), clears the call, and returns to the idle state. If the call is cleared from the network or by the user entering [CLR] at the port, the port immediately drops the control signals. If DSR is not raised within three seconds after the call arrives from the network, the port drops the control signals and clears the call.
DIMO Connections
Introduction
Use this connection type with a crossover cable to connect a dial modem to the DCE port. When calls are made, the port handshake uses the modem control signals. There are several types of operation that can occur with this connection type including:
• Dial In
• Dial Out
• Dial In/Dial Out Collision
Dial In
When a user dials into a PAD port through a telephone network, the connection depends on whether the port is configured for manual calling or autocalling. When the port is configured for manual calling, the user is connected to the terminal handler when the EIA connection is completed. When the port is configured for autocalling, the call request must be accepted before the EIA connection is completed. This prevents users from being charged for the telephone call if the call cannot be completed.
States for DIMO (Dial In, No Autoconnect).
DCE ports
Idle - DCD, DSR, and CTS (pins 8, 6, and 5) are maintained low.
Connection - The port monitors MB (pin 25) [modem RI]. If it goes high, the port raises DSR, DCD, and CTS (pins 6, 8, and 5) [modem DTR, RTS, and DRO (pins 20, 4, and 14)], then waits up to 240 seconds for DTR and RTS (pins 4 and 20) [modem DSR and DCD] to go high. If the timer expires, DCD, DSR, and CTS (pins 8, 6, and 5) are dropped, the network call is cleared, and the port returns to the idle state. The connection is established when DTR and RTS go high. After the port receives the MB signal, it cannot receive calls from the network, so the dial procedure can be completed.
Disconnection - The port monitors DTR and RTS (pins 20 and 4) [modem DSR, DCD]. If either goes low for at least 50 milliseconds, the port immediately drops DCD, DSR, and CTS (pins 8, 6, and 5) [modem RTS and DTR] and a call clear is sent to the network. A PAD port also drops the control signals and returns to the idle state if the user fails to establish a call within the time configured by the Port Record parameter Call Accept Timeout or makes three unsuccessful call attempts. If the call is cleared by an X.25 clear from the network, the port immediately drops DCD, DSR, and CTS [modem RTS, DTR, and DRO]. The port waits for DTR and RTS [modem DSR, DCD] to go low, at which time the port returns to the idle state, ready for another dial-in sequence. If the call is cleared by the user entering [CLR] at the port, control signals are not dropped until Call Accept Timeout expires. The port is unavailable to take network calls while waiting for the control signals from the modem to drop. If a call is cleared by an X.29 invitation to clear, the signals remain high when the parameter Invitation to clear = CLRWO; the signals are dropped when the parameter Invitation to clear = CLRWD.
DTE ports
Idle - RTS and DTR (pins 4 and 20) are maintained low.
Connection - The port monitors RI (pin 22). If it goes high, the port raises RTS, DTR, and DRO (pins 4, 20, and 14), then waits up to 240 seconds for DSR and DCD (pins 6 and 8) to go high. If the timer expires, RTS, DTR, and DRO (pins 4, 20, and 14) are dropped, the network call is cleared, and the port returns to the idle state. The connection is established when DSR and DCD go high. After the port receives the RI signal, it cannot receive calls from the network, so the dial procedure can be completed.
Disconnection - The port monitors DSR and DCD (pins 6 and 8). If either goes low for at least 50 milliseconds, the port immediately drops RTS, DTR, and DRO (pins 4, 20, and 14) and a call clear is sent to the network. A PAD port also drops the control signals and returns to the idle state if the user fails to establish a call within the time configured by the Port Record parameter Call Accept Timeout or makes three unsuccessful call attempts. If the call is cleared by an X.25 clear from the network, the port immediately drops DTR and RTS. The port waits for DSR and DCD to go low, at which time the port returns to the idle state, ready for another dial-in sequence. If the call is cleared by the user entering [CLR] at the port, control signals are not dropped until the Call Accept Timeout expires. The port is unavailable to take network calls while waiting for the control signals from the modem to drop. If a call is cleared by an X.29 invitation to clear, the signals remain high when the parameter Invitation to clear = CLRWO; the signals are dropped when the parameter Invitation to clear = CLRWD.
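The 240-second DIMO dial-in handshake above can be sketched as a small helper; the callback names and return values are illustrative, not product APIs:

```python
# Sketch of the DIMO dial-in handshake (no autoconnect): RI/MB triggers the
# port to raise its outputs, then it waits up to 240 s for the modem-side
# signals (DTR/RTS on DCE ports, DSR/DCD on DTE ports) to come up.

HANDSHAKE_TIMEOUT = 240.0  # seconds

def dial_in_handshake(wait_for_answer, raise_outputs, drop_outputs):
    """Run one dial-in attempt; the callbacks model the EIA signal actions.

    wait_for_answer(timeout) -> True if the answering signals went high
    before the timeout expired.
    """
    raise_outputs()                       # raise outputs when RI/MB goes high
    if wait_for_answer(HANDSHAKE_TIMEOUT):
        return "connected"
    drop_outputs()                        # timer expired: clear the call
    return "idle"
```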
DIMO (Dial In, With Autoconnect).
DCE ports
Idle - DCD, DSR, and CTS (pins 8, 6, and 5) are maintained low.
Connection - The port monitors MB (pin 25) [modem RI]. If it goes high, the port makes a network call according to the autocall mnemonic. When the call is accepted, the port raises DCD, DSR, and CTS (pins 8, 6, and 5) [modem RTS, DTR, and DRO (pins 4, 20, and 14)], then waits up to 240 seconds for DTR and RTS (pins 4 and 20) [modem DSR and DCD] to go high. If the timer expires, the DCD, DSR, and CTS (pins 8, 6, and 5) are dropped, the network call is cleared, and the port returns to the idle state. If DTR and RTS go high before the timer expires, the connection is established.
Disconnection - The port monitors DTR and RTS (pins 20 and 4) [modem DSR and DCD]. If either goes low for at least 50 milliseconds, the port immediately drops DCD, DSR, and CTS (pins 8, 6, and 5) [modem RTS, DTR, and DRO] and a call clear is sent to the network. A PAD port also drops the control signals and returns to the idle state if the user fails to establish a call within the time configured by the Port Record parameter Call Accept Timeout or makes three unsuccessful call attempts. If the call is cleared by an X.25 clear from the network, the port immediately drops DCD, DSR, and CTS [modem RTS, DTR, and DRO]. The port waits for DTR and RTS [modem DSR and DCD] to go low, at which time the port returns to the idle state, ready for another dial-in sequence. If the call is cleared from the port, control signals are not dropped until the Call Accept Timeout expires. The port is unavailable to take network calls while waiting for the control signals from the modem to drop. If the call is cleared by an X.29 invitation to clear, the signals remain high when the parameter Invitation to clear = CLRWO; the signals are dropped when the parameter Invitation to clear = CLRWD.
DTE Ports
Idle - RTS and DTR (pins 4 and 20) are maintained low.
Connection - The port monitors RI (pin 22). If it goes high, the port makes a network call according to the autocall mnemonic. When the call is accepted, the port raises RTS, DTR, and DRO (pins 4, 20, and 14), then waits up to 240 seconds for DSR and DCD (pins 6 and 8) to go high. If the timer expires, RTS, DTR, and DRO (pins 4, 20, and 14) are dropped, the network call is cleared, and the port returns to the idle state. If DSR and DCD go high before the timer expires, the connection is established.
Disconnection - The port monitors DSR and DCD (pins 6 and 8). If either goes low for at least 50 milliseconds, the port immediately drops RTS, DTR, and DRO (pins 4, 20, and 14) [modem DCD, DSR, and CTS] and a call clear is sent to the network. A PAD port also drops the control signals and returns to the idle state if the user fails to establish a call within the time configured by the Port Record parameter Call Accept Timeout or makes three unsuccessful call attempts. If the call is cleared by an X.25 clear from the network, the port immediately drops RTS, DTR, and DRO. The port waits for DSR and DCD to go low, at which time the port returns to the idle state, ready for another dial-in sequence. If the call is cleared from the port, control signals are not dropped until the Call Accept Timeout expires. The port is unavailable to take network calls while waiting for the control signals from the modem to drop. If the call is cleared by an X.29 invitation to clear, the signals remain high when the parameter Invitation to clear = CLRWO; the signals are dropped when the parameter Invitation to clear = CLRWD.

Dial Out
In this case a modem is connected to a PAD port. Calls from the network connect to the PAD port and use the modem to call through the telephone network.
States for the connection type DIMO (Dial Out).

DCE Ports
Idle - DCD, DSR, and CTS (pins 8, 6, and 5) are maintained low.
Connection - This is for modems with the autodial feature (the modem can dial the number when the DTR input goes from inactive to active). When a call arrives at a port that is idle and available, the call is accepted. The port raises DSR [modem DTR]. The modem autodials the destination and, when a connection is made, raises its DCD output. The port monitors RTS (pin 4). If it goes high and if DTR remains high, the port raises DCD, DSR, and CTS. If RTS and DTR are not raised within three minutes after the call is accepted (and DSR raised), the call is cleared.

Disconnection - The port monitors DTR and RTS (pins 20 and 4) [modem DSR, DCD]. If either goes low for at least 50 milliseconds, the port immediately drops DCD, DSR, and CTS (pins 8, 6, and 5) [modem RTS, DTR, and DRO] and a call clear is sent to the network. A PAD port will also drop the control signals and return to the idle state if the user fails to establish a call within the time configured by the Port Record parameter Call Accept Timeout or makes three unsuccessful call attempts. If the call is cleared by an X.25 clear from the network, the port immediately drops DCD, DSR, and CTS [modem RTS, DTR, and DRO]. The port waits for DTR and RTS [modem DSR and DCD] to go low, at which time the port returns to the idle state, ready for another dial sequence. If the call is cleared from the port, control signals are not dropped until the Call Accept Timeout expires. The port is unavailable to take network calls while waiting for the control signals from the modem to drop. If the call is cleared by an X.29 invitation to clear, the signals remain high when the parameter Invitation to clear = CLRWO; the signals are dropped when the parameter Invitation to clear = CLRWD.


DTE Ports
Idle - RTS, DTR, and DRO (pins 4, 20, and 14) are maintained low.
Connection - This is for modems with the autodial feature (the modem can dial the number when the DTR input goes from inactive to active). When a call arrives at a port that is idle and available, the call is accepted and the port raises DTR. The modem autodials the destination and, when a connection is made, raises its DCD output. The port monitors DCD (pin 8). If it goes high and if DSR remains high, the port raises RTS, DTR, and DRO (pins 4, 20, and 14). If DCD and DSR are not raised within three minutes after the call is accepted (and DTR raised), the call is cleared.
Disconnection - The port monitors DSR and DCD (pins 6 and 8). If either goes low for at least 50 milliseconds, the port immediately drops RTS, DTR, and DRO (pins 4, 20, and 14) [modem DCD, DSR, and CTS] and a call clear is sent to the network. A PAD port will also drop the control signals and return to the idle state if the user fails to establish a call within the time configured by the Port Record parameter Call Accept Timeout or makes three unsuccessful call attempts. If the call is cleared by an X.25 clear from the network, the port immediately drops RTS, DTR, and DRO. The port waits for DSR and DCD to go low, at which time the port returns to the idle state, ready for another dial sequence. If the call is cleared from the port, control signals are not dropped until the Call Accept Timeout expires. The port is unavailable to take network calls while waiting for the control signals from the modem to drop. If the call is cleared by an X.29 invitation to clear, the signals remain high when the parameter Invitation to clear = CLRWO; the signals are dropped when the parameter Invitation to clear = CLRWD.

If the attached modem does not store telephone numbers, or the caller uses standard AT commands, the modem must be configured so DCD output is always high so the port can send dial information to the modem. The modem’s DSR must be strapped to follow DTR inputs so that when the network disconnects by dropping all EIA control signals, the modem will drop DSR to complete the disconnection. (DTR Control on the modem must be configured as 108.2. This drops the connection when DTR goes from on to off.)
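For a modem strapped as described above, typical Hayes-style AT commands look roughly like the following. These are illustrative: command meanings vary by modem, the DSR-follows-DTR strap in particular is modem-specific, and the modem manual is authoritative.

```
AT&C0    force the DCD output always on, so the port can send dial information
AT&D2    drop the connection when DTR goes from on to off (circuit 108.2 behavior)
```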

Dial In/Dial Out Collision
This occurs when a telephone call causes the MB [modem RI] signal to arrive at the port at the same time a network call arrives, causing the port to raise DCD, DSR, and CTS [modem RTS, DTR, and DRO]. The port can detect this circumstance because the MB signal is not the expected response. The port resolves the collision by clearing the call to the network while DCD, DSR, and CTS stay raised at the modem. If DTR and RTS are not raised within one minute, the port drops DCD, DSR, and CTS [modem RTS, DTR, and DRO]. Call collision is resolved in favor of the telephone network caller, that is, the call is completed, not cleared. After the collision is resolved, the call is handled like any other incoming call from the telephone network.

Variations of DIMO Connections
DIMOa - This is the same as DIMO, except that the DSR signal is treated differently. Use DIMOa when modems do not have DSR raised on incoming calls.
DIMOb - This is the same as DIMO, except that the DSR signal is treated differently. Use DIMOb when modems have DSR following DTR on incoming calls.
DIMOv - This connection type provides the capability for interfacing to V.25 bis type modems and is the same as DIMO as far as EIA handshaking is concerned.

EMRI/EMDC Connections

Introduction
Use this connection type when a PAD port connects to a host computer in place of a modem.
Note
Do not use EMRI with hunt groups or autocalls, or when using EIA-232-D DIMs in the DTE position.

DCE Port States for EMRI
Conditions during various states for EMRI connections on DCE ports:

Idle - The front panel switch RI/TM is set to RI and DCD, DSR, and CTS (pins 8, 6, and 5) are maintained low.
Connection - When a call arrives from the network, RI (pin 22) is pulsed (two seconds on, four seconds off) for up to five cycles (30 seconds). During the ringing, DTR (pin 20) is monitored. If it is high or goes high, the PAD clears RI (pin 22), raises DSR and DCD (pins 6 and 8), and waits for RTS (pin 4) to go high. When RTS goes high, the PAD raises CTS (pin 5). The PAD accepts the incoming call from the network only after DTR and RTS are detected high.
Disconnection - After DTR is detected high, the PAD monitors DTR (pin 20) and if it is low for at least 50 milliseconds, the call is cleared. DSR and DCD (pins 6 and 8) are dropped and the PAD returns to the idle state. If the call is cleared by the network while waiting for RTS to be raised, DSR and DCD are dropped and the PAD waits for DTR to drop before completing the disconnect. The PAD will not accept another dial-out attempt until DTR is lowered. If RTS is not raised within 30 seconds of RI first being raised, DCD and DSR (pins 8 and 6) are dropped and the call is cleared. If the call is cleared by the network while waiting for DTR to be raised, RI is immediately dropped. Once the call is connected, if the call is cleared from the network, DCD, DSR, and CTS are dropped.
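The EMRI ring cadence above (two seconds on, four seconds off, at most five cycles) reduces to a pure timing function. A sketch with illustrative names:

```python
# Sketch of the EMRI ring cadence: RI is pulsed 2 s on / 4 s off for up to
# five cycles (30 s total) while the PAD waits for DTR.

RING_ON, RING_OFF, MAX_CYCLES = 2.0, 4.0, 5

def ri_state(t):
    """Return True if RI is asserted at time t (seconds since call arrival)."""
    if t >= MAX_CYCLES * (RING_ON + RING_OFF):
        return False           # 30 s elapsed: give up ringing
    return (t % (RING_ON + RING_OFF)) < RING_ON
```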

EMDC
This is similar to EMRI, but DCD is used to signal the host about arrival of the call.
Note
Do not use this setting with hunt groups or with autocalls.
Note
A change in an EIA control signal may not be detected for up to 50 milliseconds (average 25 ms). As a result, the Vanguard ignores data sent to the port before the connection is recognized as valid. To prevent this, wait at least 50 milliseconds after the EIA handshake, or until the Vanguard sends a connection prompt, before passing data.

DCE Port States for EMDC
Idle - The front panel switch RI/TM is set to TM and DCD, DSR, and CTS (pins 8, 6, and 5) are maintained low. DTR (pin 20) may be high.
Connection - When a call arrives from the network, the DCD (pin 8) is raised. DTR (pin 20) is monitored. If it is high or goes high, the PAD raises DSR (pin 6) and waits for RTS (pin 4) to go high. When RTS goes high, the PAD raises CTS (pin 5). The PAD accepts the call from the network only after DTR and RTS are detected high.
Disconnection - The PAD monitors DTR (pin 20) and if it is low for at least 50 milliseconds, the call is cleared. The control signals DCD, DSR, and CTS (pins 8, 6, and 5) are dropped, and the PAD returns to the idle state. If the call is cleared by the network while waiting for RTS to be raised, DSR and DCD (pins 6 and 8) are dropped and the PAD returns to the idle state after DTR is lowered. The PAD will not accept another dial-out attempt until DTR is lowered. If RTS is not raised within 30 seconds of DCD being raised, DCD and DSR (pins 8 and 6) are dropped and the call is cleared. If the call is cleared by the network while waiting for DTR to be raised, DCD is immediately dropped. Once the call is connected, if the call is cleared from the network, DCD, DSR, and CTS are dropped.

DTE Port States for EMDC
Idle - The front panel switch RI/TM is set to TM and RTS, DTR, and DRO (pins 4, 20, and 14) are maintained low. DSR (pin 6) may be high.
Connection - When a call arrives from the network, the RTS (pin 4) is raised. DSR (pin 6) is monitored. If it is high or goes high, the PAD raises DTR (pin 20) and waits for DCD (pin 8) to go high. The PAD accepts the call from the network after DSR and DCD are detected high.
Disconnection - The PAD monitors DSR (pin 6) and if it is low for at least 50 milliseconds, the call is cleared. The control signals RTS, DTR, and DRO (pins 4, 20, and 14) are dropped, and the PAD returns to the idle state. If the call is cleared by the network while waiting for DCD to be raised, RTS and DTR (pins 4 and 20) are dropped and the PAD returns to the idle state after DSR is lowered. The PAD will not accept another dial-out attempt until DSR is lowered. If DCD is not raised within 30 seconds of RTS being raised, RTS and DTR (pins 4 and 20) are dropped and the call is cleared. If the call is cleared by the network while waiting for DSR to be raised, RTS is immediately dropped. Once the call is connected, if the call is cleared from the network, RTS, DTR, and DRO are dropped.