3Com Router User Manual
Shape all the flows on Ethernet interface 1.

[Router] interface ethernet1
[Router-Ethernet1] qos gts any cir 45000000 cbs 5800000 ebs 5800000

Physical Interface Line Rate

The physical interface line rate (LR) limits the total rate at which packets (including urgent packets) are sent on a physical interface. LR also uses a token bucket for traffic control. If LR is configured on an interface of the router, all packets to be sent through that interface are first processed by the LR token bucket. If there are enough tokens in the bucket, the packet is sent; otherwise, the packet enters the QoS queues for congestion management. In this way, the packet traffic passing through the physical interface is controlled.

Figure 216 Schematic diagram of LR processing (incoming packets are classified, tokens enter the bucket at the given speed, and packets without tokens are buffered in a queue before being sent)

Because a token bucket is used to control the traffic, burst transmission is allowed as long as there are tokens in the bucket. When there are no tokens left, packets cannot be sent until new tokens are generated. Thus the packet traffic can never exceed the token generation rate, which limits the traffic while still allowing bursts to pass.

Compared with CAR, LR limits all packets passing through the physical interface. CAR is implemented at the IP layer and has no effect on packets that are not processed by the IP layer. When the user only needs to limit all packets, LR is simpler to use.

LR Configuration

To configure the physical interface line rate, perform the following configuration in interface view.

Table 717 Configure the Physical Interface Line Rate

Operation                                             Command
Configure the physical interface bandwidth            qos lr cir committed-rate [ cbs burst-size [ ebs excess-burst-size ] ]
Delete the configured physical interface bandwidth    undo qos lr

By default, line rate limiting is not performed on the physical interface.
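As a minimal sketch of the qos lr command from Table 717, the following lines limit all traffic sent on a serial interface (the interface name Serial0 and the rate and burst values are illustrative only, not taken from this manual):

[Router] interface serial0
[Router-Serial0] qos lr cir 500000 cbs 12500 ebs 12500

Issuing undo qos lr in the same interface view removes the limit again.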
Displaying and Debugging LR

Table 718 Display and Debug LR

Operation                                                        Command
Display the LR configuration and statistics of the interface     display qos lr [ interface type number ]
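For example, the display command from Table 718 can be issued for a single interface or, by omitting the interface argument, for all interfaces (the interface name serial 0 is again illustrative and should follow the naming convention of the router in use):

[Router] display qos lr interface serial 0
[Router] display qos lr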
49 CONGESTION MANAGEMENT

This chapter covers the following topics:

■ What is Congestion?
■ Congestion Management Policy Overview
■ Selecting Congestion Management Policies
■ Operating Principle of the Congestion Management Policies
■ Configuring Congestion Management
■ Congestion Management Configuration Examples

What is Congestion?

For a network unit, congestion occurs on an interface when data packets arrive faster than the interface can send them. If not enough memory space is available to store these data packets, some of them will be lost. Lost packets can cause the host or router that sent them to retransmit after a timeout, which can lead to a communication failure.

There are many causes of congestion. For example, congestion can occur when a packet flow enters the router through a high-speed link and is then forwarded over a low-speed link, when packet flows enter the router through multiple interfaces at the same time and are forwarded out of a single interface, or when the processor slows down.

As shown in Figure 217, two LANs of one company are connected with each other through a low-speed link. When a user on LAN 1 sends a large number of data packets to a user on LAN 2, congestion may occur on the interface through which Router A of LAN 1 is connected to the low-speed link. If an important application is running between the servers of the two LANs while an unimportant application is running between two PCs, the important application will be affected.
Figure 217 Schematic diagram of the congested network (Router A and Router B connect company LAN 1 and LAN 2, each with a server and PCs on 10 M/100 M Ethernet, across a low-speed DDN/FR/ISDN/PSTN link where the congestion occurs)

Congestion Management Policy Overview

When congestion occurs, if not enough memory space is provided to buffer the packets, some of the packets will be lost. Packet loss may cause the sending host or router to retransmit after a timeout, which congests the network again and triggers further retransmissions, creating a vicious circle. Therefore, policies are needed to manage network congestion. When congestion occurs, the router applies these policies to dispatch the data packets, deciding which packets are sent first and which ones are discarded. These policies are called congestion management policies.

Congestion management generally uses a queuing mechanism. When congestion occurs, packets are queued at the router egress according to a given policy. During dispatching, the order in which packets leave the queue is also decided by a given policy.

FIFO Queuing

The FIFO mode uses no concept of communication priority or classification. With FIFO, the order in which data packets are sent from the interface depends on the order in which they arrive at the interface; the enqueuing and dequeuing orders of the packets are the same. FIFO provides the basic store-and-forward capability.

Priority Queuing

In Priority Queuing (PQ) mode, you can flexibly specify the priority queue that a packet enters according to the packet length, the source address and destination address fields in the packet header, and the interface through which the packet arrives. Packets belonging to a higher priority queue are sent first. In this way, the most important data can be handled first.

Custom Queuing

In the Custom Queuing (CQ) mode, traffic can be classified according to the user's requirements in terms of TCP/UDP port number, ACL and interface type. Each type of traffic is allocated a certain percentage of the bandwidth. When network congestion occurs, traffic with strict delay requirements (such as voice) can obtain reliable service. If one type of traffic does not occupy all of its reserved bandwidth, other types of traffic automatically use the spare bandwidth, thus making full use of the resource.
For an interface with a lower rate, customizing queues for it can guarantee that the data flows passing through the interface still obtain network service to a certain extent.

Weighted Fair Queuing

Weighted Fair Queuing (WFQ) provides a dynamic and fair queuing mode, which distinguishes traffic flows based on their priority/weight and decides the bandwidth of each session according to the session situation. Thus it guarantees that all communications are treated fairly according to the weight allocated to them. WFQ classifies traffic based on the source address, destination address, source port number, destination port number, and protocol type.

Selecting Congestion Management Policies

3Com routers implement the four congestion management policies (FIFO, PQ, CQ and WFQ) discussed previously on Ethernet interfaces and serial interfaces (encapsulated with PPP, FR or HDLC), which can satisfy the requirements of various service qualities to a certain extent.

FIFO applies no priority policy to the data packets in user data communication and does not need to determine the priority or type of the communication. However, when the FIFO policy is used, low priority data from a misbehaving source may consume most of the available bandwidth and occupy the entire queue, which delays burst data sources and may cause important communication to be discarded.

PQ can assure transmission of some communication with higher priority. That is, a strict priority sequence is enforced at the cost of transmission failure of data packets with lower priority. For example, in the worst case, where the available bandwidth is very limited and urgent communication occurs frequently, the packets in the lower priority queues may never be transmitted.

CQ reserves a certain percentage of the available bandwidth for each type of specified traffic, so that an interface running at a low rate can still obtain network service when congestion occurs. The size of each queue, that is, the total number of data packets the queue is configured to hold, controls access to the bandwidth.

WFQ uses the fair queuing algorithm to dynamically divide the communications into flows, where a flow is part of a session. With WFQ, interactive communication with a small volume can obtain a fair allocation of the bandwidth, the same as communication with a large volume (such as file transfer).

Table 719 compares the four different policies:
Table 719 Comparison of Several Congestion Management Policies

FIFO
  Number of queues: 1
  Advantages:
    1. It does not need to be configured and is easy to use.
    2. The processing is simple, with small delay.
  Disadvantages:
    1. No matter how urgent they are, all packets, voice or data, enter the FIFO (First In, First Out) queue. The bandwidth used for sending packets, the delay and the drop rate are decided by the arrival order of the packets.
    2. It places no restriction on uncooperative data sources (such as UDP packet transmission), so misbehaving data sources damage the bandwidth of cooperative data sources (such as TCP packet transmission).
    3. The delay of real-time, time-sensitive applications (such as VoIP) cannot be guaranteed.

PQ
  Number of queues: 4
  Advantage: Absolute priority can be given to different classes of service data, and the delay of real-time, time-sensitive applications (such as VoIP) can be guaranteed. Packets receiving the priority service have absolute priority in occupying the bandwidth.
  Disadvantages:
    1. It needs to be configured, and the processing speed is slow.
    2. If the bandwidth of the high priority packets is not restricted, the low priority packets may not obtain any bandwidth.

CQ
  Number of queues: 16
  Advantages:
    1. Packets of various services can be allocated bandwidth according to the configured bandwidth proportions.
    2. When one type of packet is absent, the available bandwidth occupied by the existing types of packets is automatically increased.
  Disadvantage: It needs to be configured, and the processing speed is slow.

WFQ
  Number of queues: decided by the user (256 by default)
  Advantages:
    1. It is easy to configure.
    2. The bandwidth of cooperative (interactive) data sources (such as TCP packet transmission) can be protected.
    3. Delay jitter can be reduced.
    4. Small packets have priority.
    5. Flows with different priority levels can be allocated different bandwidths.
    6. When traffic decreases, the available bandwidth occupied by the existing flows is automatically increased.
  Disadvantage: The processing speed is slower than FIFO.

Operating Principle of the Congestion Management Policies

For congestion management, queuing technology is used. When congestion occurs, the data packets are queued at the router according to a policy. When dispatching, the order in which the data packets are sent is also decided by the policy.
Figure 218 Schematic diagram of the first-in first-out queue (incoming packets enter a single queue and are sent out of the interface in arrival order)

First-In, First-Out (FIFO) Queuing

As shown in Figure 218, data packets enter the first-in, first-out (FIFO) queue in the order of their arrival. Data packets that arrive first are transmitted first, and data packets that arrive later are transmitted later. All packets to be transmitted from the interface are placed at the tail of the interface's FIFO queue in the order of their arrival. When the interface transmits packets, they are sent in order starting from the head of the FIFO queue. No distinction is made between packets during transmission, and no guarantee is provided for the quality of packet transmission. Therefore, a single application can occupy all the network resources, seriously affecting the transmission of key service data.

Priority Queuing (PQ)

As shown in Figure 219, PQ queues are used to provide strict priority to important network data. The priority order can be flexibly specified according to the network protocol (such as IP or IPX), the interface through which the data arrives, the packet length, the source address, the destination address, and other features.

Figure 219 Schematic diagram of priority queuing (incoming packets are classified into top, middle, normal and bottom queues before being sent out of the interface)

When packets arrive at the interface, they are first classified (into up to 4 classes) and placed at the tails of the corresponding queues. When packets are transmitted, the packets in a lower priority queue are not transmitted until all the packets in the higher priority queues have been transmitted. Thus, at a network unit using PQ, the most important data is processed soonest, the packets in the higher priority queues experience very low delay, and both the packet loss rate and the throughput can be guaranteed to a certain extent in case of network congestion.
The key service (such as ERP) data packets can be put into the higher priority queues, while the non-key service (such as e-mail) data packets are put into the lower priority queues, so that the data packets of the non-key services are transmitted in the idle intervals left by the processing of the key service data. In this way, the priority of the key services is guaranteed and network resources are used efficiently. However, this brings the problem that data packets in a lower priority queue may be blocked in the packet queue of the transmission interface for a long time because of the presence of data packets in the higher priority queues.

Custom Queuing (CQ)

As shown in Figure 220, custom queuing (CQ) divides the data packets into 17 classes (corresponding to the 17 queues of CQ) according to a given policy, and each data packet enters the corresponding CQ queue according to its class under the FIFO policy. Of the 17 CQ queues, queue 0 is the system queue and queues 1 to 16 are the user queues. Users can configure the proportion of the interface bandwidth occupied by each user queue. When the queues are dispatched, the data packets in the system queue are transmitted first; only after the system queue is empty are data packets extracted from user queues 1 to 16 and sent, in polling fashion, according to the configured proportions.

Figure 220 Schematic diagram of custom queuing (incoming packets are classified into queues 1 to 16, each assigned a percentage of the interface bandwidth, before being sent out of the interface)

PQ assigns absolute priority to data packets with higher priority over data packets with lower priority. In this way, although priority transmission of key service data can be guaranteed, when a large number of higher priority data packets need to be transmitted, they may occupy all the bandwidth and completely block the lower priority data packets. CQ avoids this situation. CQ has a total of 17 queues. Queue 0 is the system queue, which is dispatched first, and queues 1 to 16 are the user queues, which are dispatched by polling based on the bandwidth settings. Users can configure the proportion of bandwidth occupied by each queue and the enqueuing policy of the packets. Thus, the data packets of various services can be provided with different bandwidths, so that the key services are guaranteed more bandwidth while the non-key services are still allocated some bandwidth.
In the network shown in Figure 217, assume that the server of LAN 1 transmits key service data to the server of LAN 2, and a PC on LAN 1 transmits non-key service data to a PC on LAN 2. The serial interface connected to the WAN is configured for congestion management with CQ, the data flows of the key services between the servers are placed in queue A, the data flows of the non-key services are placed in queue B, and the proportion of interface bandwidth occupied by queue A and queue B is configured as 3:1 (for example, during dispatching, queue A may continuously transmit 6000 bytes of data packets each time, while queue B may continuously transmit 2000 bytes each time). CQ then treats the data packets of the two services differently. Each time queue A is dispatched, its data packets are transmitted continuously, and the next user queue is not dispatched until at least 6000 bytes have been transmitted or queue A is empty. When queue B is dispatched, dispatching stops once at least 2000 bytes have been transmitted or queue B is empty. Therefore, when congestion occurs and both queue A and queue B have data packets ready to be transmitted, statistically the ratio between the bandwidth allocated to the key services and the bandwidth allocated to the non-key services is approximately 3:1.

Weighted Fair Queuing (WFQ)

Weighted fair queuing (WFQ) fairly allocates bandwidth and delay among flows while reflecting a weight that depends on the IP precedence carried in the IP packet header. As shown in Figure 221, weighted fair queuing classifies packets by flow (packets with the same source IP address, destination IP address, source port number, destination port number, protocol number and ToS belong to the same flow), and each flow is allocated one queue. When dequeuing, WFQ allocates the available bandwidth of the egress to each flow: the smaller the priority value of a flow, the less bandwidth it is allocated; the larger the priority value, the more bandwidth it is allocated.

Figure 221 Schematic diagram of weighted fair queuing (incoming packets are classified into queues 1 to N, each with its own weight, before being sent out of the interface)

The proportion of bandwidth occupied by each flow is:

(the flow's own priority level + 1) / (the sum over all flows of (priority level + 1))

For example, if there are 5 flows on an interface with priority levels 0, 1, 2, 3 and 4 respectively, the total bandwidth quota is the sum of each priority plus 1, that is, 1 + 2 + 3 + 4 + 5 = 15. The proportion of bandwidth occupied by each flow is (its priority + 1) divided by this sum, that is, 1/15, 2/15, 3/15, 4/15 and 5/15 respectively.
As another example, if there are currently a total of 4 flows, three of them with priority level 4 and one with priority level 5, then the total bandwidth quota is:

(4 + 1) x 3 + (5 + 1) = 21

Each of the three flows with priority level 4 then obtains 5/21 of the bandwidth, and the flow with priority level 5 obtains 6/21 of the bandwidth.

Configuring Congestion Management

This section describes the following types of congestion management configuration:

■ Configuring FIFO Queuing
■ Configuring Priority Queuing
■ Configuring Custom Queuing (CQ)
■ Configuring WFQ

Configuring FIFO Queuing

To configure FIFO queuing, perform the following configuration in interface view.

Table 720 Configure the First In First Out Queuing

Operation                                            Command
Configure the length of the FIFO queue               qos fifo queue-length queue-length
Restore the default value of the FIFO queue length   undo qos fifo queue-length

By default, the length of the FIFO queue is 75; the value ranges from 1 to 1024.

Configuring Priority Queuing

Priority queuing configuration includes:

■ Configuring priority queuing
■ Applying the priority-list queuing group to the interface
■ Specifying the queue length of the priority-list queuing

Configuring priority queuing

Priority queuing classifies packets according to a given policy, dividing all packets into 4 classes, each of which corresponds to one of the 4 PQ queues. A packet then enters the corresponding queue according to its class. The 4 PQ queues are the high priority queue (top), the medium priority queue (middle), the normal priority queue (normal) and the low priority queue (bottom), with priority levels decreasing in that order. When packets are transmitted, they are sent in order of their priorities: the packets in the top queue are transmitted first, then those in the middle queue, then those in the normal queue, and finally those in the bottom queue.

Priority queuing includes up to 16 groups (the value of pql-index ranges from 1 to 16), each of which specifies which types of data packets enter which queue, the