A study of QoS performance of ATM networks
Abstract: The approaches to QoS support in ATM networks are explained, and several kinds
of schemes are compared: the CBR and ABR service categories, weighted round-robin versus
round-robin queueing, and the ERICA and EFCI congestion control algorithms. Simulation
work is carried out and the related results are presented.
Application performance depends on factors such as hardware, protocols, network
design, other users, and the application's design.
Traditional networks are designed with no traffic differentiation; all traffic, time-critical
and non-time-critical, is treated equally. Hence, a user transferring a file and a user running
a real-time application such as videoconferencing are treated in the same way.
With unlimited bandwidth, this scenario poses no problems. However, as bandwidth
becomes increasingly limited, there is a higher degree of contention amongst these
applications. In this situation, it becomes important to ensure that time-critical
applications do not suffer. A network that can provide different levels of service is often
said to support quality of service.
ATM is well known for providing a rich set of QoS capabilities, and in many respects
these schemes are similar to those provided in an IP network; however, ATM
networks have some special features of their own.
1.1 How ATM Works
• An ATM network uses fixed-length cells to transmit information. A cell consists of 48
bytes of payload and 5 bytes of header. Transmitting the necessary number of cells
per unit time provides the flexibility needed to support variable transmission rates.
• An ATM network is connection-oriented. It sets up a virtual channel connection (VCC)
going through one or more virtual paths (VP) and virtual channels (VC) before
transmitting information. Cells are switched according to the VP or VC identifier
(VPI/VCI) value in the cell header, which is originally set at connection setup and is
translated into a new VPI/VCI value as the cell passes each switch.
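The per-hop label translation can be sketched as follows. This is an illustrative model, not OPNET code; the table entries, port numbers, and cell fields are hypothetical values standing in for state installed at connection setup.

```python
# Sketch of ATM label swapping: a switch forwards a cell by looking up its
# (input port, VPI, VCI) in a table installed at connection setup, then
# rewriting the label before sending the cell out the chosen port.

def make_switch_table():
    # (in_port, vpi, vci) -> (out_port, new_vpi, new_vci); one example entry
    return {(1, 10, 42): (3, 7, 99)}

def forward(table, in_port, cell):
    key = (in_port, cell["vpi"], cell["vci"])
    out_port, new_vpi, new_vci = table[key]
    cell["vpi"], cell["vci"] = new_vpi, new_vci   # label is rewritten per hop
    return out_port, cell

table = make_switch_table()
out_port, cell = forward(table, 1, {"vpi": 10, "vci": 42, "payload": b"\x00" * 48})
# out_port == 3; the cell now carries VPI 7 / VCI 99 toward the next switch
```

Because the label is only locally significant, each switch keeps a small table and lookup stays O(1) regardless of how many hops the VCC crosses.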
• ATM resources such as bandwidth and buffers are shared among users; they are
allocated to a user only when that user has something to transmit. Bandwidth is
allocated according to the application's traffic and QoS request at the signaling phase,
so the network uses statistical multiplexing to improve the effective throughput.
1.2 ATM QoS Parameters
The primary objective of ATM is to provide QoS guarantees while transferring cells across
the network. There are mainly three QoS parameters specified for ATM, and they are
indicators of the performance of the network:
• Cell Transfer Delay (CTD):
The delay experienced by a cell between the time the first bit of the cell is transmitted by the
source and the time the last bit of the cell is received by the destination. This includes
propagation delay, processing delay, and queuing delays at switches. Maximum Cell
Transfer Delay (Max CTD) and Mean Cell Transfer Delay (Mean CTD) are used.
• Peak-to-peak Cell Delay Variation (CDV):
The difference between the maximum and minimum CTD experienced during the
connection. Peak-to-peak CDV and Instantaneous CDV are used.
• Cell Loss Ratio (CLR):
The percentage of cells lost in the network, due to error or congestion, that are not
received by the destination. The CLR value is negotiated between user and network
during the call setup process and is usually in the range of 10^-1 to 10^-15.
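The three indicators above can be computed directly from per-cell send and receive timestamps. The sketch below uses made-up illustrative records (not data from the simulations in this study), marking lost cells with a receive time of `None`:

```python
# Computing the ATM QoS indicators from per-cell timestamps (seconds).
# records: list of (t_sent, t_received_or_None); None marks a lost cell.

def qos_metrics(records):
    delays = [r - s for s, r in records if r is not None]
    lost = sum(1 for _, r in records if r is None)
    max_ctd = max(delays)                     # Maximum Cell Transfer Delay
    mean_ctd = sum(delays) / len(delays)      # Mean Cell Transfer Delay
    pp_cdv = max(delays) - min(delays)        # peak-to-peak Cell Delay Variation
    clr = lost / len(records)                 # Cell Loss Ratio
    return max_ctd, mean_ctd, pp_cdv, clr

records = [(0.0, 0.010), (0.1, 0.112), (0.2, None), (0.3, 0.311)]
max_ctd, mean_ctd, pp_cdv, clr = qos_metrics(records)
# clr == 0.25 (1 of 4 cells lost); pp_cdv is about 0.002 s
```

Note that peak-to-peak CDV is derived from the same delay samples as CTD; a connection can have a large CTD but a small CDV if every cell is delayed by roughly the same amount.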
1.4 ATM Traffic Descriptors
The ability of a network to guarantee QoS depends on the way in which the source
generates cells (uniformly or in bursts) and also on the availability of network
resources, e.g., buffers and bandwidth. The connection contract between user and
network thus contains information about the way in which traffic will be generated by
the source. A set of traffic descriptors is specified for this purpose. Policing algorithms
check whether the source abides by the traffic contract; the network only provides the QoS
for cells that do not violate these specifications.
The following are traffic descriptors specified for an ATM network.
• Peak Cell Rate (PCR):
The maximum instantaneous rate at which the user will transmit.
• Sustained Cell Rate (SCR):
The average rate as measured over a long interval.
• Burst Tolerance (BT):
The maximum burst size that can be sent at the peak rate.
• Maximum Burst Size (MBS):
The maximum number of back-to-back cells that can be sent at the peak cell rate.
• Minimum Cell Rate (MCR):
The minimum cell rate desired by a user.
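The standard ATM policing mechanism that checks these descriptors is the Generic Cell Rate Algorithm (GCRA), defined in the ATM traffic management specifications. Below is a minimal sketch of its virtual-scheduling form; the arrival times and parameters in the example are illustrative assumptions, not values from this study:

```python
# GCRA, virtual-scheduling form: a cell conforms if it does not arrive more
# than tau earlier than its theoretical arrival time (TAT). With T = 1/PCR
# it polices the peak rate; with T = 1/SCR and tau = burst tolerance it
# polices the sustained rate.

def gcra(arrivals, T, tau):
    tat = 0.0                          # theoretical arrival time
    verdicts = []
    for t in arrivals:                 # arrival times in seconds, ascending
        if t < tat - tau:
            verdicts.append(False)     # non-conforming: arrived too early
        else:
            tat = max(t, tat) + T      # conforming: push TAT one slot forward
            verdicts.append(True)
    return verdicts

# Police a 1 cell/ms peak rate with zero tolerance: the second cell, sent
# only 0.5 ms after the first, violates the contract.
verdicts = gcra([0.0, 0.0005, 0.002, 0.003], T=0.001, tau=0.0)
print(verdicts)   # -> [True, False, True, True]
```

Non-conforming cells are typically tagged (CLP bit set) or dropped, which is why the network only guarantees QoS for cells within the contract.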
1.5 ATM Service Categories
Providing the desired QoS for different applications is very complex. For example, voice is
delay-sensitive but not loss-sensitive, data is loss-sensitive but not delay-sensitive, while
some other applications may be both delay-sensitive and loss-sensitive.
To make this easier to manage, traffic in ATM is divided into five service categories
according to various combinations of requested QoS:
• CBR: Constant Bit Rate
CBR is the service category for traffic with rigorous timing requirements like voice,
and certain types of video. CBR traffic needs a constant cell transmission rate
throughout the duration of the connection.
• rt-VBR: Real-Time Variable Bit Rate
This is intended for variable bit rate traffic, e.g., certain types of video with
stringent timing requirements.
• nrt-VBR: Non-Real-Time Variable Bit Rate
This is for bursty sources such as data transfer, which do not have strict time or delay
requirements.
• UBR: Unspecified Bit Rate
This is ATM's best-effort service, which does not provide any QoS guarantees. It is
suitable for non-critical applications that can tolerate or quickly adjust to cell loss.
• ABR: Available Bit Rate
ABR is commonly used for data transmissions that require a guaranteed QoS, such as
a low probability of loss and error. Small delay is also required for some applications,
but this requirement is not as strict as those on loss and error. Due to the burstiness,
unpredictability, and sheer volume of data traffic, sources implement a congestion
control algorithm to adjust their rate of cell generation. Connections that adjust their
rate in response to feedback may expect a lower CLR and a fair share of the available
bandwidth.
The available bandwidth at an ABR source at any point in time depends on how
much bandwidth remains after the CBR and VBR traffic have been allocated
their share. Figure 1 illustrates this concept.
Figure 1 -- ATM bandwidth allocation to different service categories
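The allocation idea in Figure 1 reduces to simple arithmetic; the numbers below are illustrative, not taken from the simulations:

```python
# ABR sources share whatever capacity is left after CBR and VBR reservations.

def abr_bandwidth(link_capacity, cbr_alloc, vbr_alloc):
    # Clamp at zero: reserved traffic can consume the whole link.
    return max(0.0, link_capacity - cbr_alloc - vbr_alloc)

# On a 150 Mbps link with 40 Mbps reserved for CBR and 60 Mbps for VBR,
# the ABR connections compete for the remaining bandwidth.
print(abr_bandwidth(150e6, 40e6, 60e6) / 1e6)   # -> 50.0 (Mbps)
```

Because the CBR/VBR reservations vary over time, the ABR share varies too, which is exactly why ABR sources need rate feedback from the network.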
1.6 ATM QoS Priority Scheme
Each service category in ATM has its own queue. There are mainly two schemes for
queue service. In the round-robin scheme, all queues have the same priority and therefore
have the same chance of being serviced; the link's bandwidth is equally divided amongst
the queues being serviced. The other scheme is weighted round-robin, which is
somewhat similar to WFQ in IP networks: queues are serviced depending on the weights
assigned to them. Weights are determined according to the Minimum Guaranteed
Bandwidth attribute of each queue in each ATM switch. This scheme ensures
that the guaranteed bandwidth is reserved for important applications such as the CBR service.
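A weighted round-robin scheduler can be sketched as below. This is a simplification in which each queue's weight is an integer number of cells served per round (assumed here to be proportional to its minimum guaranteed bandwidth); the queue names and weights are illustrative:

```python
# Weighted round-robin over per-category queues: in each round, queue q is
# allowed up to weights[q] cells. Plain round-robin is the special case where
# every weight is 1.
from collections import deque

def wrr(queues, weights, rounds):
    served = []
    for _ in range(rounds):
        for name, w in weights.items():
            for _ in range(w):
                if queues[name]:                    # skip empty queues
                    served.append(queues[name].popleft())
    return served

queues = {
    "CBR":    deque(["c1", "c2", "c3"]),
    "rt-VBR": deque(["v1", "v2", "v3"]),
    "ABR":    deque(["a1", "a2", "a3"]),
}
weights = {"CBR": 1, "rt-VBR": 2, "ABR": 1}   # e.g. 25% / 50% / 25% shares
served = wrr(queues, weights, rounds=2)
print(served)   # -> ['c1', 'v1', 'v2', 'a1', 'c2', 'v3', 'a2']
```

With these weights, rt-VBR gets twice the service rate of the other queues when all are backlogged, which mirrors the minimum-guaranteed-bandwidth settings used in the simulations later in this report.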
1.7 ATM Congestion Control
Due to the unpredictable traffic pattern, congestion is unavoidable: it happens when the
total input rate is greater than the output link capacity. Under congestion, the queue
length may grow very large in a short time, resulting in buffer overflow and cell loss.
Congestion control is therefore necessary to ensure that users get the negotiated QoS.
This study focuses on two major congestion control algorithms, both intended for ABR sources. The binary feedback scheme (EFCI) uses a single bit to indicate that congestion has occurred. A switch may detect congestion on a link when its queue length exceeds a certain level; accordingly, the switch sets the congestion bit in passing data cells to 1. When the destination receives data cells with the EFCI bit set to 1, it sets the CI bit of the backward RM (resource management) cell to 1, indicating congestion. When the source receives a backward RM cell with the CI bit set to 1, it must decrease its rate. Because EFCI only tells the source to increase or decrease its rate, the method converges slowly. The Explicit Rate Indication for Congestion Avoidance (ERICA) algorithm solves this problem by allowing each switch to write the desired rate explicitly into passing RM cells; the source then adjusts its rate according to the backward RM cells.
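The explicit rate that an ERICA switch writes into RM cells is usually described as a function of the measured load and the number of active connections. The sketch below follows that common description; the 0.95 target utilization and all the numeric inputs are assumptions for illustration, not parameters taken from this study's simulations:

```python
# Sketch of ERICA's per-measurement-interval explicit-rate computation.
# The switch aims for a target utilization below 100% so queues can drain.

def erica_er(link_capacity, abr_input_rate, active_vcs, ccr, utilization=0.95):
    target_rate = utilization * link_capacity
    z = abr_input_rate / target_rate      # load factor: z > 1 means overload
    fair_share = target_rate / active_vcs
    vc_share = ccr / z                    # scale this VC's rate toward z == 1
    er = max(fair_share, vc_share)        # never push a VC below fair share
    return min(er, target_rate)           # never advertise beyond the target

# Overloaded 150 Mbps link (180 Mbps of ABR input, 4 active VCs): a VC
# currently sending at 30 Mbps is told to slow to its fair share.
er = erica_er(link_capacity=150e6, abr_input_rate=180e6, active_vcs=4, ccr=30e6)
print(er / 1e6)   # about 35.6 Mbps
```

Because the source jumps directly to the advertised rate instead of stepping up or down one increment per RM cell, ERICA converges in far fewer round trips than the binary EFCI feedback.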
2.1 Simulation Tools
Optimized Network Engineering Tools (OPNET) is the simulation tool used in this study. OPNET has many attractive features and can simulate large communication networks with detailed protocol modeling and performance analysis.
Figure 2 shows the ATM network used in this project. The network consists of servers, workstations, and ATM switches connected by OC-3 links that can sustain 155.52 Mbps of traffic. The ATM switching speed is set to infinity and the VC lookup delay to 1E-10 seconds, hence the network capacity is about 150 Mbps. Three kinds of traffic are generated by three applications: voice, video conferencing, and FTP. Voice runs on the AAL2 layer, while video conferencing and FTP run on the AAL5 layer. Voice and video are sensitive to timeliness, so I assign voice to the CBR service, video to the rt-VBR service, and data to the ABR service. The voice traffic is originally around 4 Mbps, and the other two are around 3 Mbps each.
Figure 3 shows a larger ATM network that is also used in the project. The traffic in this
network is generated by the ATM_uni_src model, so the generated traffic is ideal, that is,
no bursts occur. There are also three kinds of traffic: voice, video, and data. Because the
traffic is stable, their ratio can be accurately set to 4:3:3. This network model is only
used to study the behavior of each service category when traffic scales in a large
network, and its simulation runs faster than with the traffic pattern generated by the
real applications in network 1.
The QoS of each service category is defined in Table 1.

Table 1: QoS of each service category

           CBR         RT_VBR      ABR
ppCDV      5 usec      10 usec     20 usec
maxCTD     15 usec     15 usec     3 msec
CLR        3.00E-07    3.00E-07    1.00E-05
[Topology: voice, FTP, and local workstation nodes at two sites, connected through ATM Switches 1-7, with an FTP server attached to Switch 5.]
Figure 2 -- Network 1
[Topology: local1 and local2 connected through a mesh of ATM Switches 1-12.]
Figure 3 -- Network 2
2.3 Simulation results and discussion
2.3.1 Load and throughput
Four scenarios are run on network 2; they differ in traffic size. The total traffic generated
in each scenario and the results collected are listed in Table 2.
Table 2: Results for Network 2

Traffic size                        20M                     100M                      120M                      150M
Statistic                        Avg        Max         Avg         Max          Avg          Max          Avg          Max
ATM ABR Cell Delay (sec)         1.8        3.56        0.078       0.154        0.078        0.155        0.078        0.154
ATM ABR Cell Delay Variation     0.37       1.08        0.00066     0.00194      0.00067      0.00198      0.00066      0.00195
ATM ABR Cell Loss Ratio          0          0           0           0            0            0            0            0
ATM Call Blocking Ratio (%)      0          0           0           0            0            0            0            0
ATM CBR Cell Delay (sec)         0.00164    0.00164     0.00164     0.00164      0.00164      0.00164      0.0016       0.0016
ATM CBR Cell Delay Variation     0          0           0           0            0            0            0            0
ATM CBR Cell Loss Ratio          0          0           0           0            0            0            0            0
ATM Cell Delay (sec)             0.52       1.03        0.024       0.0463       0.0301       0.0586       0.0247       0.0476
ATM Cell Delay Variation         0.25       1.02        0.00019     0.00178      0.00027      0.00198      0.00014      0.00145
ATM Global Throughput (bits/sec) 18,218,711 18,252,000  88,332,000  92,300,000   103,970,000  108,600,000  128,052,000  133,800,000
ATM Load (bits)                  1,000      1,000       5,000       5,000        5,000        5,000        5,000        5,000
ATM Load (bits/sec)              19,976,307 20,012,333  95,993,000  100,200,000  115,248,000  120,300,000  139,223,000  145,500,000
ATM RT_VBR Cell Delay (sec)      0.00125    0.00125     0.00106     0.00106      0.00125      0.00125      0.00106      0.00106
ATM RT_VBR Cell Delay Variation  0          0           0           0            0            0            0            0
ATM RT_VBR Cell Loss Ratio       0          0           0           0            0            0            0            0
The simulations run for only 10 seconds because my computer's memory is not large enough and the simulation speed is very slow. Since the traffic is generated in bunches, it is difficult to scale the traffic size: when the traffic increases beyond the capacity of the network, the connection request is rejected, so the load can hardly exceed 150 Mbps. Because no network overload is simulated, the throughput increases with the load; ideally, the throughput should increase with load and level off under overload. The latency and jitter are very small because, with the stable traffic, congestion does not occur and the QoS is guaranteed for each service. From the table, it can be seen that the cell delay for the ABR service is the largest among the three categories; this result is consistent with what we expected.
2.3.2 Comparison of ERICA vs EFCI algorithms
ERICA and EFCI are congestion control methods for the ABR service, which in this study carries the FTP application.
Two scenarios are run based on network 1: scenario 23 uses the ERICA algorithm, while scenario 24 uses the EFCI algorithm in the ATM switches.
Figure 4 -- Comparison of FTP throughput with ERICA and EFCI
Figure 4 shows that the throughput obtained with the ERICA implementation is clearly greater than that obtained with the EFCI implementation.
CDV is a measure of the variation in CTD and is particularly significant for applications where a higher variation implies a need for larger buffering. ABR traffic, which is used specifically for data, does not require a guaranteed CDV, but it is highly desirable to minimize the variation as much as possible. Figure 5 below compares the CDV of the FTP application; we can see that the CDV with the ERICA algorithm converges faster than with the EFCI algorithm. Theoretically, the EFCI algorithm suffers from its inability to drain its queues quickly and from an early onset of congestion.
Figure 5 -- Cell delay variation comparison
2.3.3 Weighted round-robin vs round-robin
The queueing policy is very important to QoS: traffic with high priority should have high weights. The weight in OPNET is determined by the minimum guaranteed bandwidth parameter in the ATM port buffer configuration. Two scenarios are run: scenario 22 uses round-robin, and scenario 25 uses weighted round-robin. The minimum guaranteed bandwidth is set as CBR 25%, RT_VBR 50%, ABR 25%, which means that RT_VBR has the highest priority. Figure 6 shows the CDV of the voice traffic.
Because voice receives a lower weight in scenario 25, its CDV is slightly higher than under round-robin. Figure 7 shows the traffic sent by voice under the two algorithms. There is not much difference between them, and the traffic generated by video is the same; this may be because the simulation time is too short. Another consideration is that the traffic load is light, so the effect is not fully demonstrated.
Figure 7 -- Traffic sent by voice
In this report I have dealt with the definition and deployment of QoS in ATM networks. The simulations show that ATM can guarantee QoS for various classes of applications. ERICA is more effective than the EFCI algorithm, and weighted round-robin can ensure that guaranteed bandwidth is reserved for important application classes.
• Larry L. Peterson, Bruce S. Davie, Computer Networks: A Systems Approach, second edition.
• http://www.opnet.com/, December 2002.
5. Appendix A