Development of a Data Chirp
Measurement Method
Information and Communication Technology (ICT)
In measuring data link quality for downloading and browsing services, the quality finally perceived by the user is determined by the session time. Measuring session times is time consuming because they have to be measured for each context separately, e.g. a small file download, a large file download or a browse session over a number of small Internet pages. Therefore data links are mostly characterized in terms of key performance indicators (kpi's) such as bandwidth, loss and delay, from which one tries to derive the session times. However, the relations between these kpi's and the session times are not clear, and measuring the kpi's themselves is also difficult.
This report presents a method that characterizes a data link by a so-called delay fingerprint from which the session times can be predicted. The fingerprint is derived from a lumped UDP packet transfer (concatenated packets sent immediately after each other), followed by a UDP stream of single packets whose sending speed is increased continuously.
This “chirping approach” causes self-induced congestion and allows fingerprinting with only minimal loading of the system under test. In this contribution, live networks as well as an Internet simulator are used to create data links over a wide variety of conditions, for which both the data chirp fingerprint and the download/session times are measured. From the fingerprint a number of link indicators are derived that characterize the link in terms of kpi's such as ping time, loaded (congested) bandwidth, unloaded (un-congested) bandwidth and random packet loss.
The fingerprint measurements allow prediction of the service download/session times for small downloads and fast browsing with a correlation of around 0.92 for the simulated links. For large file downloads and large browse sessions no acceptable prediction model could be constructed.
Multimedia applications have come to play an important role in everyday life. On the Internet, apart from the widely used Hypertext Transfer Protocol (HTTP), many real-time applications contribute significantly to the overall traffic load of the network. State-of-the-art Information and Communication Technology (ICT) concepts enable the deployment of various services and applications in areas like home entertainment, offices, operations and banking. The backbone of all these distributed services is the core network, which facilitates data communication. The quality of the available network connections often has a large impact on the performance of distributed applications. For example, the response time of a document requested over the World Wide Web depends crucially on network congestion.
In general, if we want to quantify the quality of a data link from the user's point of view, we can use two approaches:
1) A glass box approach in which we know all the system parameters for the network (maximum throughput, loss, delay, buffering) and the application (TCP stack parameters) and then use a model to predict download / session times and UDP throughput.
2) A black box approach where we characterize the system under test with a test signal and derive a set of black box indicators from the output. From these link indicators the download / session times, or other relevant kpi's are predicted.
The first approach is taken in draft recommendation [1]. This report investigates the second approach. Most black box approaches estimate the available bandwidth, which is an important indicator for predicting the download and session times. Several approaches exist that allow for bandwidth estimation, but bandwidth is not the only important link parameter. For small browsing sessions, end-to-end delay and round trip time are also key indicators that determine the session time and thus the perceived quality. A good black box measurement method should not merely quantify kpi's but should be able to predict download and session times for download and browse services that run over the link.
In telecommunication, a data link connects one location to another for the purpose of transmitting and receiving data. It can also be seen as an assembly, consisting of parts of two data terminal equipments (DTEs) and the interconnecting data circuit, that is controlled by a link protocol enabling data to be transferred from a data source to a data sink. Two systems can communicate with each other using an intermediate data link that connects both of them.
The data link can be made up of a huge number of elements that all contribute to the finally perceived quality. In real-world connections cross traffic will always have an impact on the download and session times, making such links difficult to use in the development of a data link quality model. Therefore most of the model development in this report is carried out using simulated links.
The setup was established at TNO-ICT, Delft, The Netherlands. A Linux system with a two-interface network card was used to emulate a particular network.
In this report, chapter 2 describes the problem definition and the tasks performed in the project. Chapter 3 explains the various key performance indicators that quantify data link performance. The measurement approach employed and the principle behind the chirp are described in chapter 4. The experimental setup at TNO-ICT is described in chapter 5. In chapter 6 the implementation of the kpi's is discussed. In chapter 7 the mapping between chirp and service characteristics is discussed. Chapter 8 presents the conclusions. Some of the measurement results are discussed in Appendix A. The management of the project can be found in Appendix B.
An operator has to know how to set the network parameters in order to deliver the most appropriate end-to-end quality, based on the network kpi's, the service characteristics and the characteristics of the end user equipment. A fast, efficient method for assessing the impact of a network setting on the finally perceived download and session times is thus of vital importance. Plain optimization of kpi's is not necessarily the best strategy, because the finally perceived quality is determined by the download and session times. Ideally, a method should be developed that creates instantaneous insight into the performance of “all services” carried over the link under consideration. Such a method can also be used to create applications that take decisions on resource selection or reservation.
For small downloads and fast browsing the ping time of the data link is the dominating factor; for large downloads it is the available bandwidth. The TCP stack parameters also have a significant impact on these times as they determine the slow-start behavior. For intermediate file sizes the available bandwidth and the un-congested bandwidth will be important, most probably in combination with other kpi's like packet loss, buffer size and possible bearer-switching mechanisms (UMTS).
This report presents a method that characterizes a data link by a so-called delay fingerprint from which a set of kpi's is derived. These kpi's are then used to predict the service quality in terms of download/session times. The basic idea, taken in a modified form from [3], is to characterize the data link by sending a limited set of UDP packets over the line in such a way that the delay behavior of these packets characterizes the link in as many aspects as possible. Two types of characterization are used. The first uses a lumped set of packets that are sent over the line immediately after each other, from which the smearing of a single packet can be estimated; this estimate is closely related to the un-congested bandwidth of the link. The second uses a train of UDP packets separated by an ever-decreasing time interval, resulting in a so-called “data chirp” from which the available bandwidth can be estimated.
In this project we focus on the estimation of a number of kpi's from which the quality of a data link can be determined. We will discuss the data link performance indicators that are dominant in their impact on the end-to-end session time.
Ping time or round-trip time (RTT) is the amount of time that it takes for a packet to go from one computer to another and for the acknowledgment to be returned. For links that span long distances, the RTT is relatively large, which directly affects the browse and download times.
Available bandwidth (AB) is the approximate transfer rate that an application can get from a connection in the presence of cross traffic load. Measuring the available bandwidth is of great importance for predicting the end-to-end performance of applications, for dynamic path selection and traffic engineering, and for selecting between a number of differentiated classes of service [4]. The end-to-end available bandwidth between client and server is determined by the link with the minimum unused capacity (referred to as the tight link). In Figure 3.1 the end-to-end available bandwidth is determined by the minimum unused capacity, indicated as ‘A'.
Figure 3.1 The available bandwidth determined by tight link unused capacity.
Several applications need to know the bandwidth characteristics of the underlying network paths. For example, some peer-to-peer applications need to consider the available bandwidth before allowing candidate peers to join the network. Overlay networks can configure their routing tables based on the available bandwidth of the overlay links. Network providers lease links to customers, and the charge is usually based on the available bandwidth that is provided.
Available bandwidth is also a key concept in congestion avoidance algorithms and intelligent routing systems.
Techniques for estimating available bandwidth fall into two broad categories: passive and active measurement.
In communication networks, high available bandwidth is useful because it supports high volume data transfers, short latencies and high rates of successfully established connections. Obtaining an accurate measurement of this metric can be crucial to effective deployment of QoS services in a network and can greatly enhance different network applications and technologies.
The term un-congested bandwidth (UB) refers to the maximum transfer rate available to a particular connection in the absence of other traffic (a clean link). It is hard for a connection to achieve a transfer rate equal to the UB because of factors such as random packet loss and the TCP slow-start mechanism. The UB is limited by the bottleneck link capacity.
The un-congested bandwidth of a link is determined by the link with the minimum capacity (termed the bottleneck link). In Figure 3.2, the un-congested bandwidth of the link between the client and server is C = C1, where C1, C2 and C3 are the capacities of the individual links and C1 < C3 < C2.
Figure 3.2 The un-congested bandwidth determined by bottleneck link capacity.
Packet loss can be caused by a number of factors, including signal degradation over the network medium, oversaturated network links, corrupted packets rejected in transit, faulty networking hardware and normal routing routines. The available bandwidth decreases with increasing packet loss. In this project we observe two types of packet loss: random packet loss and congestion packet loss. These two types of losses are discussed in the next chapter.
In the next chapter we go into the details of how to measure these kpi's.
In this chapter we discuss how the key performance indicators described in chapter 3 are measured.
Ping is a computer network tool used to test whether a particular host is reachable across an IP network. It works by sending ICMP “echo request” packets to the target host and listening for ICMP “echo reply” packets. Ping estimates the round-trip time, generally in milliseconds, records any packet loss and prints a statistical summary when finished. The standard ping tool in Windows XP was used to determine the ping time.
The data chirp method is a method to characterize the end-to-end quality of a data link in terms of a delay fingerprint. Using the data chirp method, a train of UDP packets is sent over the data link with an exponentially decreasing time interval between the subsequent packets. Such a train of packets is referred to as a data chirp.
From the delay pattern at the receiving side one can determine characteristic features of the data link such as bandwidth, packet loss, congestion behavior, etc. From the characteristic features one can then try to estimate the service quality of different services that run over the data link.
In the classical data chirp [3] the time interval ΔT_m between two consecutive packets m and m+1 is given by:
ΔT_m = ΔT_0 · γ^m,  0 < γ < 1,
where ΔT_0 is the time interval between the first two packets. The factor γ (< 1) determines how fast the interval between subsequent packets in the data chirp decreases. As a result of this decrease, the instantaneous data rate during the data chirp increases. The instantaneous data rate at packet m, R_m, is given by:
R_m = P / ΔT_m  [bytes/sec]
where P is the size of a UDP packet in the chirp. A data chirp, consisting of individual packets sent over the link with ever-decreasing intervals, is illustrated in [3] and shown in Figure 4.1.
Figure 4.1 Illustration of a data chirp.
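As a concrete illustration, the chirp schedule implied by these two formulas can be computed directly. The sketch below (Python, for illustration only; the actual tool was written in Delphi) uses the parameter values adopted later in this report:

```python
# Chirp schedule implied by the two formulas above, using the settings
# adopted later in this report (P = 1500 bytes, dT0 = 200 ms,
# gamma = 0.99, N = 400 packets).
P = 1500            # UDP packet size in bytes
dT0 = 0.200         # interval between the first two packets, seconds
gamma = 0.99        # decay factor, 0 < gamma < 1
N = 400             # number of packets in the chirp

for m in range(N):
    dTm = dT0 * gamma ** m      # interval after packet m
    Rm = P / dTm                # instantaneous rate, bytes/s
    if m % 100 == 0 or m == N - 1:
        print(f"packet {m:3d}: interval {dTm * 1e3:7.2f} ms, "
              f"rate {Rm * 8 / 1e3:7.1f} kbit/s")
```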
The delay of the UDP packets in the data chirp after traveling over the data link is determined relative to the delay of the first packet. The resulting delay pattern, in which the relative delay per UDP packet is shown as a function of the packet number, is referred to as the data chirp fingerprint. A typical data chirp fingerprint for a fixed-bandwidth 64 kbit/s bit pipe without cross traffic is shown in Figure 4.2.
Figure 4.2 Data chirp fingerprint for a fixed bandwidth bit pipe of 64 kbps.
From such a data chirp fingerprint a number of parameters can be determined, including the available bandwidth, random packet loss, congested packet loss and un-congested bandwidth [5].
In the chirp, packets are sent individually over the line with a continuously decreasing inter-sending time, quantified by a factor γ (< 1) and resulting in an inter-sending time T·γ^m for the m-th packet. The combination of T and the chirp size N determines the lower and upper bound of the throughput of the UDP stream. It is clear that at the start of the chirp, packets should be sent over the link with large enough time intervals to be able to characterize low-bandwidth systems.
Furthermore, the inter-sending time should decrease to a value so low that the required bandwidth is higher than the maximum expected available bandwidth. Finally, a small γ allows for a fast characterization of the link, while a γ near 1 allows more accurate, but more time-consuming, bandwidth estimations. After some initial experiments the chirp values were set to P = 1500 bytes, T = 200 ms, γ = 0.99 and N = 400, resulting in a lower and upper input into the system of 64 and 5000 kbit/s respectively. This chirp provides a well-balanced compromise between measurement time and accuracy. Figure 4.3 gives an overview of this approach.
Figure 4.3 Data chirp, sent to estimate AB, using the idea of self-induced congestion with ever smaller inter-sending times.
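A minimal sender sketch of this chirp is given below (Python for illustration; the actual measurement tool was implemented in Borland Delphi, and the destination address here is a placeholder). Each packet carries a sequence number and a send timestamp so that the receiver can later compute differential delays:

```python
import socket
import struct
import time

# Hypothetical server address; the real tool was configured per test.
DEST = ("192.0.2.10", 9000)
P, DT0, GAMMA, N = 1500, 0.200, 0.99, 400   # chirp settings from this section

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
interval = DT0
for m in range(N):
    # Payload: sequence number + send timestamp, padded to P bytes.
    header = struct.pack("!Id", m, time.time())
    sock.sendto(header + b"\x00" * (P - len(header)), DEST)
    time.sleep(interval)        # ever-decreasing inter-sending time
    interval *= GAMMA
```

Note that time.sleep on a commodity OS has limited timing resolution, which is one reason the method derives bandwidth from receive timestamps rather than from precise sending times.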
This approach was first tested over the virtual tunnel interface running on the Linux machine. When this chirp was sent over a clean link (i.e. over a tunnel with no cross traffic), the estimate of the available bandwidth was somewhat higher than the actual bandwidth set by the netem GUI. This is due to buffers in between, which pass the chirp over the link at a higher speed. Because of this, the estimate is about 20% higher than the actual link speed.
The second problem with this scheme was that when the chirp was sent over the link with cross traffic, the fingerprint was not good enough for a correct estimation. The reason is that the chirp tries to push in through the TCP cross traffic, and by the time it succeeds there is high packet loss, so no proper estimate can be made. Therefore the chirp is sent repeatedly (4 times) with an inter-sending time of 5 seconds.
We can estimate the available bandwidth using:
R_m = P / ΔT_m  [bytes/sec]
where P is the packet size and ΔT_m is the inter-sending interval at that point in the chirp.
In principle, the un-congested bandwidth can be estimated from the smearing of a single packet. Even when there is cross traffic on a data link and we would like to estimate the bandwidth for the clean situation, we can use the smearing time: in a congested link a single packet is still smeared according to the un-congested bandwidth.
However, obtaining the smearing time of a single packet is difficult with normal hardware. Therefore packets are sent in pairs, as close together as possible (‘back-to-back') after each other. This makes it possible to assess the smearing time with normal hardware, because we can measure the receive timestamps of each packet and deduce the smearing from these. This method only works when the chance that cross traffic is sent over the data link between the two packets is minimal. Figure 4.4 illustrates the packet pair smearing measurement method.
Figure 4.4 The use of packet pairs for the determination of the un-congested bandwidth. Ts is the time when the first bit of a packet is put on the link, Tr is the time when the first bit starts to arrive and Tr' is the time when the last bit of the packet is received.
As illustrated in Figure 4.4, the packets leaving the data link are smeared compared to the original packets, indicated as ΔT. This ΔT is determined from the time interval between the arrival of the first and second packet in the pair. From this smearing, the un-congested bandwidth UB can be estimated using:
UB = P2 / ΔT  [bytes/sec]
where P2 is the size of the second packet, in bytes.
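For example, if the second packet of a pair is 1500 bytes and the measured smearing ΔT is 12 ms, the estimate is UB = 1500 / 0.012 = 125,000 bytes/s, i.e. 1 Mbit/s.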
To estimate the un-congested bandwidth we implemented the method described above, i.e., sending packet pairs over the link. However, buffering can cause measurement problems: when data is stored and forwarded, the link speeds preceding the buffer are no longer taken into account in the un-congested bandwidth estimate. This can for a major part be solved by using lumped sets of between 1 and N concatenated packets, because packets in lumps can no longer be stored in the small buffers of the link. In the currently proposed measurement method we start with a single packet and then concatenate packets up to a lump of seven packets, from which point lumps of seven are repeated. The reason for not using more packets per lump is the underlying Windows mechanism, which does not allow sending more than seven packets back-to-back; with larger lumps, random packet loss occurs from the first lump onwards. The lumps are sent using a chirp-like approach as described in section 4.2. The length of this series depends on the experimental context; an optimal choice is somewhere between 20 and 50 lumps for links with a speed between 100 and 10,000 kbit/s. In the final chirp, P is set to 1500 bytes and the start interval to 200 ms, with γ = 0.97. With N = 50 this choice is a compromise between the range of speeds that can be assessed, measurement time and measurement accuracy.
In most cases these concatenated packets are handled immediately after each other by all routers, and from the so-called packet smearing times a data link characterization is made that correlates highly with the un-congested bandwidth of the link. This bandwidth estimate is always higher than the available bandwidth, since availability is influenced by possible cross traffic on the data link. Test results obtained for the un-congested bandwidth are presented in chapter 7. Figure 4.5 provides an overview of the extended chirp.
Figure 4.5 Extended data chirp using the idea of measuring the smearing times of concatenated packets. By measuring receive time stamps the smearing of a packet can be measured when two or more concatenated packets are sent over the link.
For this approach the un-congested bandwidth can be determined using:
UB = P / ΔT  [bytes/sec]
In this lumped chirp version, P is the size of the lump and ΔT is the time difference between the first and the last packet in the lump.
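A sketch of this computation is given below (illustrative Python; it assumes receive timestamps mark the completion of each packet, so in a lump of k packets only the last k−1 packets contribute to the measured smearing, which is why the first packet's bytes are excluded):

```python
def lump_bandwidth(recv_times, packet_size=1500):
    """Estimate the un-congested bandwidth (bytes/s) from one lump.

    recv_times: receive timestamps (s) of the packets in the lump, in
    arrival order. The smearing dT is the interval between the first
    and last arrival; the first packet is excluded from the byte count
    because its own smearing is not visible in that interval.
    """
    if len(recv_times) < 2:
        return None                 # a single packet shows no smearing
    dT = recv_times[-1] - recv_times[0]
    if dT <= 0:
        return None
    return (len(recv_times) - 1) * packet_size / dT

# Example: three 1500-byte packets arriving at 0, 12 and 24 ms give
# 2 * 1500 / 0.024 = 125000 bytes/s, i.e. 1 Mbit/s.
```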
The random packet loss is determined from the packets before the bending point. In theory no packet loss should occur before this point, so by checking whether packets have been lost during the transmission of these first packets the random packet loss can be determined.
At a specific point all buffers are filled to their maximum; from then on the delay per packet can no longer increase through buffering. This is the point where packets are lost due to congestion on the link. This packet loss can be determined from the chirp fingerprint.
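The two loss types can be separated with a sketch like the following (illustrative Python; it assumes the receiver logs the sequence numbers that arrived and that the bending point has already been located in the fingerprint):

```python
def loss_rates(received_seq, n_sent, bend_index):
    """Split chirp packet loss into random loss (before the bending
    point) and congestion loss (after it).

    received_seq: set of sequence numbers that arrived.
    n_sent:       total number of packets in the chirp.
    bend_index:   packet number at which the fingerprint starts rising
                  (assumed to satisfy 0 < bend_index < n_sent).
    """
    lost_before = sum(1 for s in range(bend_index) if s not in received_seq)
    lost_after = sum(1 for s in range(bend_index, n_sent) if s not in received_seq)
    random_loss = lost_before / bend_index
    congestion_loss = lost_after / (n_sent - bend_index)
    return random_loss, congestion_loss
```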
The setup was established at TNO-ICT, Delft, The Netherlands. A Linux system with a two-interface network card was used to emulate the network.
Figure 5.1 Experimental setup used for the simulations.
The Linux kernel contains a module called Netem, which provides functionality for testing protocols by emulating the properties of wide area networks. The current Netem version emulates variable delay, loss, duplication and packet re-ordering. End users have no direct access to the Netem module; it is accessed using the traffic control (tc) command, with which the user can direct Netem to change the interface settings. A GUI for the tc command, termed the Netem PHPGUI, was developed and can be accessed via a web server.
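For illustration, the kind of tc commands the GUI issues can look as follows (a sketch only, wrapped in Python for consistency with the other examples; the interface name and values are placeholders, root privileges are required, and exact option support depends on the installed kernel and iproute2 version):

```python
import subprocess

def run(cmd):
    """Run one tc command (requires root on the Linux emulator)."""
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# Emulate a 10 ms delay with 5% random loss on eth1 via netem, and
# limit the rate with a token bucket filter attached below it.
run("tc qdisc add dev eth1 root handle 1:0 netem delay 10ms loss 5%")
run("tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 1mbit buffer 1600 limit 3000")
```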
We have developed a client application in Borland Delphi that runs on a Windows XP machine and generates the chirp pattern. The standard Windows XP TCP/IP stack is used. Different parameters can be set through this application, such as the packet size and the interval between the chirps.
An application was also developed that runs on a machine acting as the server, again on the standard Windows XP TCP/IP stack. The server application dumps the chirp information into files, which are post-processed to extract the kpi's from the received information. A web server also runs on this machine to measure the service characteristics for FTP and browsing.
As discussed above, a lumped packet chirp and a single packet data chirp are sent from client to server. The lumped chirp pattern is sent first and is used to estimate the un-congested bandwidth of the link; here, instead of a single packet, a lump formed by concatenating several packets is sent over the link. After a certain time gap (5 seconds) the next chirp pattern, consisting of single packets (unlike the first chirp), is sent; it is used to obtain the delay signature and the associated data link parameters such as available bandwidth, random packet loss and congestion packet loss. The experiments are carried out on different links emulated using the network emulator described in chapter 5. The sending and receiving timestamps of each chirp packet are logged, and post-processing is done to extract the various parameters, as described below.
When a packet is sent over a data link it is smeared in time. The smear time is the difference between the time at which the complete packet has been received and the epoch at which the packet starts arriving. Figure 6.1 depicts the smear times of packets received at the server side.
Figure 6.1 Packet Smear Times.
The smear times of the packets are logged to a file. Due to software limitations it is very hard to distinguish between the time a packet starts arriving and the time it has completely arrived; therefore the smearing time is computed from the receive timestamps of consecutive packets. Packets may be dropped, which would lead to a wrong estimate of the smear time, so if a packet drop is observed the smearing time is not computed between two packets that have a packet between them at transmission but not at reception. Multiple packets may also be dropped: if there is packet loss inside a lump, the smearing time is estimated from the longest run of packets within which not a single packet was dropped. This logic is depicted in the flowchart shown in Figure 6.2.
Figure 6.2 The un-congested bandwidth calculation from smear times.
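The flowchart logic can be summarized in the following sketch (an interpretation of the description above in Python, not the actual tool code): within each lump the longest run of consecutively received packets is located, and the smearing time is computed over that run only, so that a lost packet never contributes a bogus interval:

```python
def lump_smear(seqs, times, packet_size=1500):
    """Estimate un-congested bandwidth (bytes/s) from one received lump.

    seqs:  sequence numbers of the received packets, ascending; a gap
           in the numbering means a packet was lost in transit.
    times: matching receive timestamps in seconds.
    Only the longest gap-free run of packets is used, so a lost packet
    never contributes a bogus smearing interval.
    """
    best = (0, 0)                   # (start, end) of longest gap-free run
    start = 0
    for i in range(1, len(seqs) + 1):
        if i == len(seqs) or seqs[i] != seqs[i - 1] + 1:
            if i - 1 - start > best[1] - best[0]:
                best = (start, i - 1)
            start = i
    s, e = best
    if e == s:
        return None                 # no two consecutive packets survived
    dT = times[e] - times[s]
    return (e - s) * packet_size / dT if dT > 0 else None
```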
The difference between the time at which a packet is received and the time at which it was sent is termed the delay. The sending timestamp is placed in the packet on the client side and the receive timestamp is taken at the server side. As client and server are not time synchronized, a correction is applied by subtracting the delay of the first packet from all delays. The resulting delay is referred to as the differential delay.
Figure 6.3 Chirp fingerprint.
The differential delay is used to generate the chirp signature by plotting the estimated differential delays against the received packets. As one can see from Figure 6.3, the differential delay suddenly increases (in this particular case around the 90th packet of the chirp train). This is the point where the link is completely occupied by the chirp packets and any cross traffic packets present. The rate at which chirp packets are sent at this point represents the available bandwidth of the data link under consideration.
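A sketch of this post-processing step is given below (illustrative Python; the rise threshold and run length used to declare the bending point are tuning choices of this sketch, not values from the report):

```python
def available_bandwidth(send_times, recv_times, packet_size=1500,
                        rise_threshold=0.005, run_length=5):
    """Locate the bending point in a chirp fingerprint and return an
    available-bandwidth estimate in bytes/s.

    send_times / recv_times: per-packet timestamps (s) of the packets
    that arrived, in order. The bending point is taken as the first
    packet after which the differential delay rises by more than
    rise_threshold seconds per packet for run_length packets in a row.
    """
    delays = [r - s for s, r in zip(send_times, recv_times)]
    diff = [d - delays[0] for d in delays]           # differential delay
    for m in range(len(diff) - run_length - 1):
        if all(diff[m + k + 1] - diff[m + k] > rise_threshold
               for k in range(run_length)):
            dTm = send_times[m + 1] - send_times[m]  # inter-sending time
            return packet_size / dTm                 # rate at the bend
    return None                                      # link never saturated
```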
The measurement setup used an Internet simulator to manipulate the packet loss, buffer size, bandwidth and delay of a single PC-to-PC network link. We collected the delay fingerprint data of the chirp and the session times that quantify service quality. The following parameters were used for these experiments:
From the large set of possible combinations a subset of conditions were chosen. In each condition six measurements were carried out:
A standard Windows XP TCP/IP stack was used, and for some conditions the system showed bifurcation behavior. This is to be expected, since an acknowledgement can be received just in time or just too late depending on infinitesimally small changes in the system. In all cases where this behavior was found, the minimum download/session time was used in the analysis. Experiments were performed under different data link scenarios by changing buffer sizes, packet loss and delay, with and without competing cross traffic. Several chirp characteristics were used in order to find the optimum settings. For each data link considered in the experiment, the service characteristic parameters mentioned above (session times) were measured; later, a data chirp was sent over the data link under the same conditions and the data link key performance indicators were computed from the chirp signature. The experimental observations are discussed in Appendix A.
The correlation between the link capacity and the un-congested bandwidth estimates was excellent: 0.99 (see Figure 7.1).
Figure 7.1 Un-congested bandwidth estimation.
The correlations between the service characteristics (browsing/download times) and the kpi's estimated from the chirp delay pattern were lower. The results show that small browsing session times are dominated by the ping time and the un-congested bandwidth. Figures 7.2 and 7.3 show the relationship between the measured small browsing session times and small FTP download times and the best two-dimensional predictor that could be constructed from the ping time and the un-congested bandwidth. This predictor is the best kpi that can be constructed and shows a correlation of 0.92 for the small browsing data and 0.98 for the small download data.
Figure 7.2 Small browsing session.
Figure 7.3 Small FTP download.
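The report does not spell out the exact regression form; the sketch below shows one plausible way to fit such a two-dimensional predictor, assuming session time scales linearly with ping time and with the reciprocal of the un-congested bandwidth:

```python
import numpy as np

def fit_session_time_model(ping_ms, ub_kbit, session_s):
    """Least-squares fit: session_time ~ a*ping + b/UB + c.

    ping_ms, ub_kbit, session_s: measurement arrays of equal length.
    Using 1/UB reflects that transfer time scales with the inverse of
    bandwidth; this functional form is an assumption of this sketch.
    Returns the coefficients and the correlation between the fitted
    and measured session times.
    """
    ping = np.asarray(ping_ms, dtype=float)
    inv_ub = 1.0 / np.asarray(ub_kbit, dtype=float)
    y = np.asarray(session_s, dtype=float)
    X = np.column_stack([ping, inv_ub, np.ones_like(ping)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ coef
    corr = np.corrcoef(pred, y)[0, 1]
    return coef, corr
```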
For medium and large browsing/downloading it was not possible to fit any combination of kpi's (up to three dimensions) with a satisfactory correlation (> 0.9). The available bandwidth gives the highest correlation for these measurements, around 0.7.
In the case of clean data links, the correlation between the available bandwidth and the link capacity was found to be 0.93. In the case of one TCP cross traffic stream, the available bandwidth estimate did not show an acceptable correlation with the data link capacity.
In this project a black box measurement approach for assessing the perceived quality of data links was implemented. This quality is defined as the measured browsing and download session times.
The measurement method uses the concept of a data chirp: in general, a data chirp puts data on a link with ever-increasing sending speed, and the delay behavior of the packets is then used to characterize the link. The chirp is implemented in two different ways: the first uses a set of lumped packets from which the un-congested bandwidth is estimated, the second a set of single packets from which the available bandwidth is estimated. Together with the ping time this allows a full characterization of the data link.
From the data link characterization a prediction model for the session times was constructed. The model shows a correlation of 0.98 for the small download data set and of 0.92 for the browsing data set over a number of small pages. The model uses a two-dimensional regression fit based on the ping time and the un-congested bandwidth.
For medium and large browsing/downloading it was not possible to fit any combination of kpi's (up to three dimensions) with a satisfactory correlation (> 0.9). The available bandwidth gives the highest correlation for these measurements, around 0.7.
Besides the session times, the model also allows estimation of the link capacity. The correlation between the real link capacity, as set in the network simulator, and the chirp-estimated un-congested bandwidth was 0.99.
ADSL | Asymmetric Digital Subscriber Line |
FTP | File Transfer Protocol |
GSM | Global System for Mobile communication |
GPRS | General Packet Radio Service |
ITU | International Telecommunications Union |
LAN | Local Area Network |
PSTN | Public Switched Telephone Network |
TCP | Transmission Control Protocol |
TNO | Nederlandse Organisatie voor Toegepast Natuurwetenschappelijk Onderzoek (Netherlands Organisation for Applied Scientific Research) |
WIFI | Wireless Fidelity |
UDP | User Datagram Protocol |
UMTS | Universal Mobile Telecommunications System |
[1] ITU-T Recommendation E.800 (08/94), "Quality of service and dependability vocabulary".
[2] M. Jain, C. Dovrolis, "Pathload: a Measurement Tool for Available Bandwidth Estimation", Proc. PAM'02, 2002.
[3] V. J. Ribeiro, R. H. Riedi, R. G. Baraniuk, J. Navratil, L. Cottrell, "pathChirp: Efficient Available Bandwidth Estimation for Network Paths", Passive and Active Measurement Workshop (PAM), April 2003, La Jolla, California, USA.
[4] R. Prasad, M. Murray, C. Dovrolis, K. Claffy, "Bandwidth Estimation: Metrics, Measurement Techniques, and Tools", IEEE Network, November-December 2003.
[5] TNO Report: Gap Analysis Circuit Switched Versus Packet Switched Voice and Video Telephony.
[6] ITU-T Recommendation G.1030, "Estimating end-to-end performance in IP networks for data applications", International Telecommunication Union, Geneva, Switzerland, November 2005.
[7] A. Ait Ali, F. Michaut, F. Lepage, "End-to-End Available Bandwidth Measurement Tools: A Comparative Evaluation of Performances", CRAN (Centre de Recherche en Automatique de Nancy), UMR-CNRS 7039.
Appendix A
Measurement Results
Lumped Chirp Tests
Lumped chirp test with cross traffic and 5% loss
In the first test a TCP cross traffic stream was generated by sending 50 files of 1 MB each in a loop, in such a way that slow start does not become active again. A loss of 5% was set for this situation. After analyzing the behavior of TCP with Ethereal, we sent the extended chirp over the link to estimate the un-congested bandwidth.
From the graph we can see that at the start the estimates are quite high; this behavior depends entirely on the state of the TCP mechanism, i.e., whether the stream is in the stable state or still in slow start.
The following parameters were set for this experiment:
Number of lumps: 50
Maximum lump size: 7 packets
Packet size: 1500 bytes
Interval: 100 ms
Alpha: 0.97
Link speed: 1000 kbit/s
Loss: 5%
Cross traffic: TCP cross traffic, 50 files of 1 MB
Figure 1. Estimated bandwidth versus packet lumps: results achieved by sending the extended chirp over a link of speed 1000 kbit/s.
The average bandwidth estimated over the time interval is 1030 kbit/s. Due to the loss set for this test, the number of observations is reduced to 34 (ideal link: 45).
The higher bandwidth estimate is caused by the higher bandwidths calculated for the small lump sizes. This is due to the buffers present in the link, which mask the smearing effect on the packets.
We also tested the same scheme with different scenarios. The average estimated bandwidth in the case of no cross traffic, or with loss over the link, is a bit higher than the actual fixed bandwidth. In the case of loss we get fewer estimates, as a lost packet causes a dropped reading.
Lumped chirp tests on real links
Test 1: TNO Internal Network
For this test the extended chirp was sent over the internal TNO network. In this case the only things that can be tweaked are the parameters related to the chirp; we have no knowledge of the policies implemented in the TNO network.
The following settings were used for this test:
Number of lumps: 50
Maximum lump size: 7 packets
Packet size: 1500 bytes
Interval: 100 ms
Alpha: 0.97
Link speed: unknown
Loss: unknown
Cross traffic: unknown
Figure 2. Estimated bandwidth versus packet lumps: results achieved by sending the extended chirp over the TNO network.
From Figure 2 we can see that there is packet loss in the network, as only 30 lump observations remain (45 under an ideal clean link), and the estimate drops due to self-induced congestion at higher rates. The average un-congested bandwidth estimated over time is 2.10 Mbit/s.
From the figure one can also infer that policies are imposed along the path: if a stream of data tries to take a larger part of the bandwidth, the router may not allow it to do so.
Test 2: Delft-Eindhoven over the Internet
For this test the extended chirp was sent over the Internet between Delft and Eindhoven. In this case the only things that can be tweaked are the parameters related to the chirp; we have no knowledge of the policies implemented along the path. The following parameters were set for this test:
Number of lumps: 50
Maximum lump size: 7 packets
Packet size: 1500 bytes
Interval: 100 ms
Alpha: 0.97
Link speed: unknown
Loss: unknown
Cross traffic: unknown
Figure 3. Estimated bandwidth versus packet lumps: results achieved by sending the extended chirp over a link between Delft and Eindhoven.
From Figure 3 we can see that there is packet loss in the network, as only 32 lump observations remain (45 under an ideal clean link), and the estimate drops due to self-induced congestion at higher rates. The average un-congested bandwidth estimated over the time interval is 2.30 Mbit/s.
This experiment shows the same behavior as in the case of the TNO network: after some time the router does not allow the data stream to take up a larger part of the bandwidth, restricting it to a limit.
Data Chirp Tests
Data chirp test with cross traffic and 5% loss
In this test TCP cross traffic was generated over the virtual tunnel link. We wait for a while so that TCP comes out of its slow start, then send the repeated chirp to estimate the available bandwidth of the link.
The following settings were used for the experiment:
Number of packets: 400
Packet size: 1500 bytes
Interval: 200 ms
Alpha: 0.99
Link speed: 1000 kbit/s
Loss: 5%
Cross traffic: TCP cross traffic, 50 files of 5 MB
Figure 4. Differential delays versus number of packets: data chirp over a 1000 kbit/s data link with TCP cross traffic.
The available bandwidth estimated through the chirp was 839 kbit/s. This can be explained by the fact that whenever there is packet loss, TCP readjusts itself and releases part of the bandwidth it was using; this is not the case for UDP, which tries to take up whatever bandwidth is available.
Data chirp tests on real links
Test 1: TNO Internal Network
In this experiment the repeated data chirp was sent over the internal TNO network, so we could only adjust the parameters related to the chirp; we have no knowledge of the policies implemented in the TNO network.
The following settings were used for the test:
Number of packets: 400
Packet size: 1500 bytes
Interval: 200 ms
Alpha: 0.99
Link speed: unknown
Loss: unknown
Cross traffic: unknown
Figure 5. Differential delays versus number of packets: data chirp over the TNO network.
From Figure 5 we can see that there is packet loss in the network, as the number of observed packets is less than 400 (400 under an ideal clean link without loss), and the fingerprint does not look smooth. There are quite a few excursions in the fingerprint, which can be due to the underlying buffers between the links. The available bandwidth estimated from this fingerprint is 491.92 kbit/s.
Test 2: Delft-Eindhoven over the Internet
In this experiment the repeated data chirp was sent over the Internet between Delft and Eindhoven. We can only tweak the parameters related to the chirp; we do not know what policies apply along the path. The following parameters were set for this test:
Number of packets: 400
Packet size: 1500 bytes
Interval: 200 ms
Alpha: 0.99
Link speed: unknown
Loss: unknown
Cross traffic: unknown
Figure 6. Differential delays versus number of packets: data chirp over the Internet between Delft and Eindhoven.
From Figure 6 we can see that there is packet loss over the Internet, as the number of observed packets is less than 400 (400 under an ideal clean link without loss); this could be due to congestion or other factors along the path. The available bandwidth obtained from this fingerprint is 706.60 kbit/s. The excursions present could be due to buffers in the link.
Performance parameters like random packet loss and congestion packet loss are calculated from the observations received at the receiver side.
Random packet loss is calculated before the bending point and congestion packet loss is calculated after the bending point.
In the following section analysis of different scenarios are discussed.
First we examined a clean link with the parameters: interval 200 ms, loss 5% and delay 10 ms; a delay of 10 ms represents a small simulated network. We ran the tests with varying bandwidth, loss and delay.
Figure 7. Data chirp test over different links.
Figure 7 shows the outcome of a data chirp sent over a clean simulated link with the bandwidth set to 64, 256, 1000 and 5000 kbit/s respectively. In this experiment we had no knowledge of the underlying buffers. After post-processing we were able to extract all the kpi's from the signatures. The only notable issue, visible in Figure 7(a), is that the bending point comes very early, right at the start of the signature; this is not good enough, as one cannot judge the link from the behavior of just one or two packets. This behavior is seen only for the 64 kbit/s link. An alternative is to increase the interval time, but that does not give a correct estimate of the available bandwidth. We use the same scheme as mentioned earlier to estimate the un-congested bandwidth, i.e. over lumps of seven packets.
In the following case we put a TCP cross stream, i.e. a large file download, over the links to test the developed method.
Figure. 8. Data chirp test over links with one TCP cross traffic.
Figure 8 shows the outcome of a data chirp sent over a simulated link with the bandwidth set to 64, 256, 1000 and 5000 kbit/s respectively and one TCP cross traffic stream. One can easily observe the oscillation-like behavior in the figure. This is due to the presence of the cross traffic and the underlying buffers in the link.
Under these conditions it is not easy to extract the kpi's. Even for the un-congested bandwidth the method did not work, as due to the cross traffic and packet loss we were not able to receive whole lumps intact. We therefore modified the approach for obtaining both the un-congested and the available bandwidth.
For the un-congested bandwidth the method was modified in the sense that instead of calculating the bandwidth over the whole lump, we first find the maximum lump received and calculate the bandwidth from it, and then from the rest of the readings. The parameters for the estimation of the un-congested bandwidth were not changed. This gives a good estimate of the un-congested bandwidth.
To get an estimate of the available bandwidth we tweaked some of the chirp parameters: the inter-sending time between the lumped chirp and the data chirp was set to five seconds, the interval to 500 ms and alpha to 0.98.
Figure 9. Data chirp with interval 500 ms and alpha 0.98 over clean links.
Figure 10. Data chirp with interval 500 ms and alpha 0.98 with one TCP cross traffic stream.
Figures 9 and 10 show the fingerprints of the chirp after tweaking the parameters. It is clear from the figures that the fingerprints are quite clean, with no oscillations present. The problem caused by this adjustment, however, is that the estimate obtained after post-processing is not accurate: it is considerably higher than the actual bandwidth of the link. Although we were able to get a correct estimate of the un-congested bandwidth, this approach was not adopted.
The second approach we took was smoothing of these fingerprints. A window size of 25 is used, where the differential delays of the next 25 packets are averaged to smooth the chirp signature. This approach was quite helpful in estimating the available bandwidth: the oscillation effects were removed effectively and we were able to get a correct estimate. This smoothing was done on the observations received with the following parameters:
Interval 200 ms, alpha 0.99 and an inter-sending time between data chirps of five seconds.
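The smoothing step itself can be implemented as a simple moving average over the differential delays (a minimal sketch with the window of 25 mentioned above):

```python
import numpy as np

def smooth_fingerprint(diff_delays, window=25):
    """Moving-average smoothing of a chirp fingerprint.

    Each point is replaced by the mean of the next `window` differential
    delays, as described above; the tail averages whatever samples remain.
    """
    d = np.asarray(diff_delays, dtype=float)
    return np.array([d[i:i + window].mean() for i in range(len(d))])
```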
Figure 11. Data chirp with interval 200 ms and alpha 0.99 with one TCP cross traffic stream, after smoothing.
Figure 11 shows the smoothed versions of the fingerprints shown in Figure 8. With this smoothing technique, estimation can easily be done even for links with quite small buffers; for all these observations the buffer size was set to thirty. Small buffers have a greater impact on the chirp fingerprint, so the window size has to be adjusted accordingly.
Appendix B
End-to-end data Link Quality measurement Tool
Revision History
Version | Date | Changes |
1.0 | 20.2.2007 | Initial version |
2.0 | 25.3.2007 | Results: delimitation added, results updated. Phasing plan: realization phase added. Control plan: time/capacity added. Information updated. Quality updated. Organisation added. |
3.0 | 05.06.2007 | Information updated. Organisation updated. Test plan added. |
Introduction
The developments in information technology of recent years have led to major advances in high-speed networking, multimedia capabilities for workstations and distributed multimedia applications. In particular, multimedia applications for computer-supported cooperative work have been developed that allow groups of people to exchange information and to collaborate on joint work. However, existing communication systems do not provide the end-to-end guarantees for multipoint communication services that these applications need. In this thesis, a communication architecture is described that offers end-to-end performance guarantees in conjunction with flexible multipoint communication services.
The architecture is implemented in the Multipoint Communication Framework (MCF), which extends the basic communication services of existing operating systems. It orchestrates endsystem and network resources in order to provide end-to-end performance guarantees. Furthermore, it provides multipoint communication services in which participants can dynamically join and leave. The communication services are implemented by protocol stacks that form a three-layer hierarchy. The topmost layer, called the multimedia support layer, accesses the endsystem's multimedia devices. The transport layer implements end-to-end protocol functions that are used to forward multimedia data.
The lowest layer, labelled the multicast adaptation layer, interfaces to various networks and provides a multipoint-to-multipoint communication service that is used by the transport layer. Each layer contains a set of modules that each implement a single protocol function. Protocol stacks are dynamically composed out of modules; each protocol uses a single module on each layer. Applications specify their service requirements as Quality of Service (QoS) parameters.
The shift from PSTN/GSM/GPRS to ADSL/Cable/WiFi/UMTS technology, and the corresponding shift from telephony to multimedia services, will have a big impact on how the end-to-end quality as perceived by the customer can be measured, monitored and optimized. For each service (speech, audio, text, picture, video, browsing, file download) a different perceived quality model is needed in order to be able to predict customer satisfaction. This project proposal focuses on an integrated approach towards the measurement of the perceived quality of interactive browsing and file downloading over a data link.
Results
Problem Definition:
To place the overall end-to-end QoS problem into perspective: the emergence and rapid acceptance of Internet and Intranet technologies is providing commercial and military systems with the opportunity to conduct business at reduced cost and greatly increased scale. To take advantage of this opportunity, organizations are becoming increasingly dependent on large-scale distributed systems that operate in unbounded network environments.
As the value of these transactions grows, companies are beginning to seek guarantees of dependability, performance and efficiency from their distributed application and network service providers. To provide adequate levels of service to customers, companies will eventually need levels of assured operation. These capabilities include policy-based prioritization of applications and users competing for system resources; guarantees of provided performance, security, availability, data integrity and disaster recovery; and adaptivity to changing load and network conditions. The problem faced by networks today is that they do not deliver the services they are built for; the problem here is thus to set the network parameters in such a way that they give the maximum output and resources are used well.
Project goal:
The ultimate goal of the project is to define a measurement method with which instantaneous insight can be created into the performance of “all services” that are carried via the link under consideration.
It is clear that the operator delivering the best portfolio with the best quality for the lowest price will survive in the market. This means that an operator has to know how to set the network parameters in order to deliver the best end-to-end quality, based on the network indicators, the service characteristics and the characteristics of the end user equipment.
This optimization will increase the number of new subscribers, leading to an increase in revenues. A fast, efficient method for combined data/streaming quality measurement is thus of vital importance.
Results:
At the end of the project the deliverables will be:
A tool that will be able to test the performance as well as the status of the network.
A report that shall accurately describe the processes and explain the choices made.
An analysis of the results and a list of recommendations to achieve the best possible results.
A presentation of the results to the TNO-ICT and ECR group at TU/e.
Delimitation
The project focuses on the estimation of the available as well as the un-congested bandwidth, so testing can only be done on the networks currently available in the company.
Since this is a research activity, the approach will be based on the trial and error method.
PHASING PLAN
Initial phase:
Nov 2006 - Jan 2007
During this period the focus will be on current work in the field and on the tools that are available and can be used to develop this new method for measuring link performance. All background and necessary knowledge will be gathered.
Design phase:
Feb 2007 - April 2007
The following activities are planned after the initial phase:
1) Investigation of hardware timing accuracy in order to improve measurement accuracy (find the best hardware available).
2) Creation of a test setup that allows investigating the effects of cross traffic.
3) Creation of a “bearer switching like” link simulator to investigate the effects of traffic channel capacity and channel capacity switching.
4) Effect of packet loss.
5) Buffer impact measurements.
The measurements will be based on simulations, results will be plotted and analysed.
Preparation phase:
May2007
These measurements are focused on the relation between:
1) Browsing/download times and chirp delay pattern.
2) Audio/video streaming throughput and chirp delay pattern.
3) Re-buffering, pause/play, fast forward behaviour with audio/video streaming in relation to chirp behaviour.
All above mentioned observations will be made on simulated as well as real networks using the developed tool.
Test Phase:
June 2007 - August 2007
The activities involved in the realization phase will be:
1) Decide upon the nature of the approach that is going to be used.
2) Generate results (Cross traffic maps).
3) Collect results and create a solid set for the evaluation.
4) Write down in the report the pros and cons of the method.
5) New set of experiments and reevaluation.
6) Discuss results with supervisors at TNO-ICT.
7) Update report with new findings.
Control Plan
Time/Capacity
Norm Data
Starting date: 01.11.2006
Completing date: 31.08.2007
Final report: Unknown yet
Final presentation: Unknown yet
Duration of the phases:
Initiation phase: 6 (+/- 1) weeks
Design phase: 14 (+/- 2) weeks
Preparation phase: 2(+/-1) weeks
Realization phase: 14 (+/- 2) weeks
Quality:
Information:
Phase | Output | Status |
Literature Study | | |
Experiment Setup | | |
Simulation and Measurement | Experiment using virtual tunnel network interfaces. Do simulations to observe the behaviour of the developed method. | Done |
Result Analysis | Analyse results achieved from experiments over the tunnel. Analyse results over real networks. | Done |
Documentation and Presentation | First draft version in the last week of July. Final version to be handed in in the second week of August. | |
Organization:
Progress Control:
Frequent meetings will be arranged with the project manager to show progress of the project.
Risk Analysis:
Risk | Countermeasure |
Slow implementation (lack of knowledge or unexpected errors) | Get help from co-workers in the company |
Requirement change requests from the project advisor can lead to delay in designing the measurement approach | |