Measurement and Characteristics of Aggregated Traffic in Broadband Access Networks

Gerhard Haßlinger1, Joachim Mende1, Rüdiger Geib1, Thomas Beckhaus2, and Franz Hartleb1

1 T-Systems, Deutsche-Telekom-Allee 7, D-64295 Darmstadt, Germany
2 T-Com, Deutsche-Telekom-Allee 1, D-64295 Darmstadt, Germany
{gerhard.hasslinger, joachim.mende, ruediger.geib, franz.hartleb}@t-systems.com, [email protected]

Abstract. We investigate statistical properties of traffic on ADSL broadband access platforms, which have been widely deployed in recent years. Measurement traces of aggregated traffic are evaluated on multiple time scales and show an unexpectedly smooth profile, with long-range correlation being less relevant than experienced for traffic from Ethernet LANs. One reason for the different characteristics lies in the shift to an increasing population of residential users generating most traffic on IP platforms via ADSL access. In addition, the data transfer protocols of peer-to-peer networks strengthen the smoothing effect observed in current IP traffic profiles.

Keywords: traffic measurement, ADSL access networks, variability on multiple time scales.

1 Traffic Variability on Different Time Scales

Standard measurement in IP networks includes 5- or 15-minute mean values of the traffic rate or the load on the links. In IP platforms with underlying multiprotocol label switching (MPLS), the traffic matrix of flow intensities between all edge routers of a (sub-)network is usually available, again at 5- or 15-minute intervals. This data forms a basis for network planning and for the process of network resource upgrades to cope with the steadily increasing Internet traffic volume. The measurements can be collected from IP and MPLS routers without stressing the performance of the routing equipment when taken at intervals of several minutes. The daily traffic profiles can be revealed in this way, showing the peak rates during busy hours, which are most relevant for network dimensioning.

The standard statistics do not include all time scales relevant for ensuring quality of service, which is affected by congestion even on small time scales of less than a second. Short-term overload is often invisible on longer time scales due to compensation by alternating phases of low load. Buffers can bridge temporary overload at the cost of delay for the buffered data, but only to a limited extent until buffer overflows occur. Real-time applications with strict delay bounds, e.g. less than 0.2s for conversational services, impose restrictions on waiting times and corresponding buffer sizes.

L. Mason, T. Drwiega, and J. Yan (Eds.): ITC 2007, LNCS 4516, pp. 998–1010, 2007. © Springer-Verlag Berlin Heidelberg 2007


For more than a decade, many evaluations of IP traffic measurements have revealed long-range dependence and self-similar patterns over the relevant time scales [1][5][11]. While most of these measurements were conducted on Ethernet LANs, ADSL broadband access presently connects a population of more than 170 million residential users worldwide to the Internet, with a still increasing tendency [6]. Different traffic profiles are experienced for Ethernet and ADSL in measurements at the digital subscriber line access multiplexers (DSLAMs) [3]. We investigate comparable traffic measurements at the interconnection of the ADSL access network and the IP backbone. In Section 2, we analyse the variability of samples on several time scales starting from 1ms. In addition to the analysis of the complete traffic on a link, we filter HTTP, UDP and a part of the peer-to-peer traffic in Section 3 to investigate their influence on variability. Section 4 studies the implications of traffic profiles for waiting times as the main QoS indicator, in order to estimate load thresholds on transmission links that indicate critical QoS conditions.

2 IP Traffic Measurement and Evaluation

For measurement purposes, we consider the amount of arriving data in a time-slotted system [12], where time is subdivided into subsequent intervals of arbitrary but constant length Δ. In order to capture the process of arriving traffic in detail, each arriving IP or MPLS packet can be registered with a time stamp as well as its packet size. The storage demand for measurement traces in this representation increases with the line speed, where millions of packets are counted per second on high speed links in the Gbit/s range. In a time-slotted view, the data volume V of all packets arriving during a slot Δ is computed and the traffic is represented as a series V1, V2, V3, … of data volumes per slot. The slot length Δ determines the accuracy of the representation. Measurement equipment is capable of capturing the amount of data arriving e.g. per millisecond, from which the corresponding buffer occupation and waiting times are derived as main QoS indicators at the same precision of milliseconds. An advantage of a time-slotted approach lies in the limited storage demand of M = S/Δ integers to represent a traffic trace over S seconds, which can be controlled by an appropriate choice of Δ independent of the transmission speed. The traffic rate in each time slot is given by R_j^(Δ) = V_j/Δ for j = 1, …, M.
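As an illustration of this time-slotted representation, the following sketch bins a packet trace of (timestamp, size) records into slots of length Δ and derives the per-slot rates. The data layout and function names are illustrative assumptions, not part of the measurement setup described in the paper.

```python
def slot_volumes(packets, duration_s, delta=0.001):
    """Return data volume V_j [Mbit] per slot of length delta [s].

    packets: iterable of (arrival_time_s, size_bytes) records.
    """
    m = int(duration_s / delta)          # M = S / delta slots
    vol = [0.0] * m
    for t, size_bytes in packets:
        j = min(int(t / delta), m - 1)   # slot index of the packet arrival
        vol[j] += size_bytes * 8 / 1e6   # bytes -> Mbit
    return vol

def slot_rates(vol, delta=0.001):
    """R_j = V_j / delta, the traffic rate [Mbit/s] in each slot."""
    return [v / delta for v in vol]

# Three packets within a 3ms window, binned into 1ms slots:
rates = slot_rates(slot_volumes([(0.0002, 1500), (0.0007, 1500), (0.0015, 500)],
                                duration_s=0.003))
```

The storage demand is M = S/Δ numbers regardless of line speed, as noted above.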

Figure 1 represents a corresponding traffic trace from a 2.5Gbit/s link at the border of the ADSL aggregation and the IP backbone network, with intervals starting at the time scale Δ = 1ms. Typical examples have been extracted from measurements running over about a month in December 2005 and January 2006. Traffic rates R_m^(KΔ) for longer time frames of multiples K⋅Δ (K = 2, 3, …) of a slot are simply computed as the mean over K subsequent Δ-intervals:

R_m^(KΔ) = (1/K) ∑_{j=(m−1)K+1}^{mK} R_j^(Δ).
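The block-mean aggregation to a coarser time scale KΔ can be sketched as follows (an illustrative helper, not code from the measurement platform):

```python
def aggregate(rates, k):
    """R_m^(K*delta): mean of R_j^(delta) for j = (m-1)K+1 .. mK."""
    m = len(rates) // k                  # drop an incomplete trailing block
    return [sum(rates[i * k:(i + 1) * k]) / k for i in range(m)]

# Per-1ms rates aggregated to the 2ms time scale:
r_1ms = [900.0, 700.0, 800.0, 600.0]
r_2ms = aggregate(r_1ms, k=2)            # block means over K = 2 slots
```

Note that the overall mean is preserved under this aggregation, which is used in the discussion of μ(KΔ) below.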

Figure 1 includes four time scales for Δ = 1ms, …, 1s with multiples K = 10, 100, 1000. It is apparent that traffic becomes smoother when observed on longer time scales.

[Fig. 1. Traffic variability on different time scales: traffic rate per slot in Mbit/s over the course of the trace, shown in four panels for Δ = 1ms, 0.01s, 0.1s and 1s]

The coefficient of variation σ(Δ)/μ(Δ) is a usual measure of variability, computed from the mean μ(Δ) and standard deviation σ(Δ):

μ(Δ) = (1/M) ∑_{j=1}^{M} R_j^(Δ)   and   σ(Δ) = √( (1/M) ∑_{j=1}^{M} ( R_j^(Δ) − μ(Δ) )² ).
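A minimal sketch of these estimators (note the normalization by M rather than M − 1, matching the formula above; the function name is an illustrative assumption):

```python
import math

def mean_std_cov(rates):
    """Mean, standard deviation (normalized by M) and coefficient of
    variation sigma/mu of a per-slot rate series."""
    m = len(rates)
    mu = sum(rates) / m
    sigma = math.sqrt(sum((r - mu) ** 2 for r in rates) / m)
    return mu, sigma, sigma / mu

mu, sigma, cov = mean_std_cov([800.0, 600.0])
```

Applying this to a rate series and to its block-mean aggregates demonstrates the non-increase of σ(KΔ)/μ(KΔ) discussed next.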

Considering longer time scales KΔ, the mean value is always preserved: μ(KΔ) = μ(Δ). The standard deviation and the coefficient of variation σ(Δ)/μ(Δ) are preserved if and only if

R_{nK+1}^(Δ) = R_{nK+2}^(Δ) = ⋯ = R_{(n+1)K}^(Δ) = R_{n+1}^(KΔ)   for n = 0, 1, …, (M/K) − 1,

i.e. the traffic rate is constant within each subsequence of K intervals that is comprised in one interval on time scale KΔ. Otherwise, the coefficient of variation is smaller: σ(KΔ)/μ(KΔ) ≤ σ(Δ)/μ(Δ).

Traffic measurement at different time scales has been studied in [3], starting from 1s intervals. In addition, that work compares modeling approaches, including M/G/∞ for the arrival process. The analysis is carried out under different assumptions on the distribution of flow sizes and derives their impact on the autocorrelation of R_m^(Δ). Wavelet-based approaches provide an alternative analysis method on multiple time scales [1]. Another comparative study in [3] reveals higher variability of Ethernet traffic, while measurements taken at the DSLAMs of an ADSL access platform show smooth patterns, characterized by the fact that about 99% of the mean rates R_m^(1s) over 1s intervals stay below μ(R) + √μ(R), where a 5-minute mean value is taken for μ(R):

Pr{ R_m^(1s) ≤ μ(R) + √μ(R) } ≈ 99%   for rates measured in Mbit/s.    (1)

Therefore C ≥ μ(R) + √μ(R) is proposed as a minimum threshold for bandwidth provisioning on a link. Note that this bound is derived from traffic variability on short to medium time scales, while further aspects have to be included in network planning: long-term traffic fluctuation, link upgrade processes for growing traffic, and failure resilience. Those factors may lead to provisioning of capacity far beyond criterion (1).

[Fig. 2. Statistical traffic parameters on multiple time scales: maximum, 99% quantile, mean + standard deviation and mean rate in Mbit/s over time scales Δ = 0.001s to 10s]

We consider comparable measurements taken at broadband access routers of Deutsche Telekom's IP platform, which connect regions for ADSL access to the IP backbone. Figure 2 summarizes measurement statistics over multiple time scales, including the maximum, the 99% quantiles γ99% and the standard deviation of a 30-minute trace with mean rate μ(1800s) ≈ 753 Mbit/s, which confirms an essential reduction of the variability on longer time scales. Internet traffic measurements, on the contrary, revealed long-range dependencies over many time scales, motivating the introduction of self-similar traffic models [1][5][11]; they show only a minor smoothing effect of the variability on longer time scales. There are at least two reasons for the different behaviour observed in our measurement:

• Most measurements showing self-similar patterns have been conducted on LANs and aggregation platforms with prevalent Ethernet access. Ethernet terminals are equipped with at least 10Mbit/s, ranging up to Gigabit Ethernet nowadays, whereas most ADSL lines are still limited to a few Mbit/s. On the other hand, the residential user population on ADSL platforms counts in the millions [6], leading to a high multiplexing level of small flows in the aggregation stages.

• Most of the traffic volume is generated by peer-to-peer file-sharing protocols, which subdivide the download of a large file into fixed-size data units transmitted over parallel TCP connections from different sources [1][9][15]. These multi-source downloads lead to many small flows per user and keep the throughput at an almost constant rate, even when some sources go offline and have to be replaced. Usually the uplinks, whose speed is often limited to a few hundred kbit/s, are the bottleneck of P2P network throughput [14].

There are further smoothing effects of peer-to-peer applications with regard to

• Traffic variability on longer time scales: The daily traffic profiles in broadband access platforms typically show peak activity in the evening or during the daytime [4][9]. For many applications involving human interaction (telephony, web browsing etc.) the shape of daily activity is close to a sine curve, with almost no activity through the night and a ratio of about 2 between the peak and the mean generated traffic rate over the daily profile. For peer-to-peer traffic, the peak-to-mean ratio is usually smaller than 1.5 due to background transfers, which often last throughout the night. These ongoing night-time transfers are initiated by long-lasting background downloads of large video files of gigabyte size, while the ADSL upstream speed often limits the throughput to below 1Mbit/s.

• Traffic variability over the network topology: The popularity of many Internet servers changes dynamically, causing traffic sources to arise or vanish at one or another location in the network. Nodes and data in large peer-to-peer networks, on the other hand, are distributed more uniformly over the access areas. During search phases, peer-to-peer protocols often involve supernodes, which are comparable to servers, but the P2P downloads, which transport most of the data volume, run distributed among the peer nodes. While a spontaneous increase in the popularity of a server can lead to access bottlenecks, frequently referenced data is soon replicated in a peer-to-peer system and then available from many nodes. Hence, peer-to-peer applications strengthen a uniform distribution of traffic sources over the network, independent of sudden changes in the popularity of content.

Regarding the QoS criterion (1), we determine the quantiles of the sequence of traffic rates R_m^(1s) of a typical aggregation stream of rate 753Mbit/s in Figure 1, which results in

Pr{ R_m^(1s) ≤ μ(R) + k √μ(R) } = 99%   for k ≈ 1.5    (2)

with mean rates again measured in Mbit/s. For a set of 11 included MPLS flows with mean rates from 10 to 40 Mbit/s, the same evaluation yields factors in the range 1.7 < k < 2.5. Thus our measurement traces confirm the form of equation (2) as proposed by [3] with k = 1, but the factor k is found to be larger.
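The factor k in criterion (2) can be estimated from a series of 1s mean rates as sketched below; the function name and the simple order-statistic quantile are illustrative assumptions.

```python
import math

def criterion_k(rates_1s, quantile=0.99):
    """Smallest k such that the given quantile of the 1s rates [Mbit/s]
    stays below mu(R) + k * sqrt(mu(R)), cf. eq. (2)."""
    n = len(rates_1s)
    mu = sum(rates_1s) / n
    # empirical quantile via order statistics (simple, no interpolation)
    q = sorted(rates_1s)[min(int(quantile * n), n - 1)]
    return (q - mu) / math.sqrt(mu)

# Synthetic example: 99 slots at 100 Mbit/s, one peak slot at 130 Mbit/s
k = criterion_k([100.0] * 99 + [130.0])
```

For the measured traces, the paper reports k ≈ 1.5 for the full aggregate and 1.7 < k < 2.5 for individual MPLS flows.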

3 Variability for Different Application Types

In addition to the characteristics of the total aggregated traffic, some applications can be filtered out to study their traffic profiles. Table 1 and Figure 3 show corresponding results for transport layer differentiation of HTTP, UDP and peer-to-peer traffic, where HTTP is identified by TCP port 80 and P2P by the set of ports 4661, 4662, 6881, 6882, 6346, 6348, 6883, 6884 and 52525, which are known to be used by the popular P2P protocols eDonkey, BitTorrent and Gnutella. The port lists are neither complete, nor can the application types be clearly distinguished based on ports. More P2P traffic is being disguised over HTTP or other ports.

Table 1. Parameters for variability of different traffic types

[Mbit/s]        Mean rate μ | Standard deviation σ(Δ) over multiple time scales Δ
                            | 0.001s   0.01s   0.1s    1s     10s
HTTP port 80    278.7       | 35.3     20.5    17.2    14.4   88.1
P2P ports       165.6       | 12.5     4.1     1.7     1.3    40.1
Other ports     387.2       | 30.0     12.4    9.3     7.6    83.0
UDP             50.8        | 6.2      2.7     2.2     2.0    16.3
Total traffic   753.9       | 47.9     24.3    19.5    16.3   127.3

About 22% of the total traffic is observed on the set of P2P ports, whereas application layer analysis reveals that P2P represents the major portion of the traffic [14]. On the other hand, we expect only a negligible portion of non-P2P applications on the considered P2P ports as false positives. Therefore the port filtering covers only a part of the P2P traffic, but it serves as a simple online method for extracting almost pure P2P traffic, whereas HTTP and UDP are composed of a mixture of several application types including P2P. When we compare the first three traffic types of Table 1 with the statistics of the total traffic, a perfect confirmation of the statistical multiplexing effect is observed, i.e. σ²_HTTP + σ²_P2P + σ²_Other = σ²_Total holds on all time scales with deviations of less than 1%. Figure 3 shows that the identified P2P traffic portion has a smaller ratio σ/μ than the other traffic filtered by HTTP and UDP, which becomes most apparent on the longer time scales of 1s and 10s. The statistics in Table 1 are given for the downstream direction. The upstream traffic has a similar profile with no essential deviations. Moreover, the total traffic volume is almost symmetrical in both directions, which again is a typical P2P characteristic.
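The stated multiplexing identity can be checked directly against the Table 1 values, here on the 0.001s time scale:

```python
# Standard deviations [Mbit/s] from Table 1 at the 0.001s time scale
sigma = {"HTTP": 35.3, "P2P": 12.5, "Other": 30.0, "Total": 47.9}

# Statistical multiplexing: variances of independent portions add up,
# sigma_HTTP^2 + sigma_P2P^2 + sigma_Other^2 = sigma_Total^2
lhs = sigma["HTTP"] ** 2 + sigma["P2P"] ** 2 + sigma["Other"] ** 2
rhs = sigma["Total"] ** 2
deviation = abs(lhs - rhs) / rhs   # relative deviation, below 1%
```

The same check holds on the other time scales of Table 1 within the stated 1% tolerance.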

[Fig. 3. Variability of traffic types distinguished by transport layer filtering: ratio of standard deviation to mean rate (0% to 25%) over time scales Δ = 0.001s to 10s for HTTP ports, P2P ports, other ports, UDP and the total traffic]

4 Load Dependent Waiting Times Computed from Traces

The traces of the amount of arriving traffic R_m^(Δ) can be used to determine the development of the waiting time at the accuracy of a slot time Δ. For this purpose we presume

• a constant available bandwidth C (Mbit/s) and
• a buffer size B (Mbit) at the router interface.

Then the waiting time after the 1st, 2nd, 3rd, … slot of a considered trace can be computed iteratively, where CΔ is the amount of data that can be served per slot. We assume that the capacity is available at the end of each slot for all data that arrives during the slot and for buffered data. The latter assumption is optimistic, since the data may arrive non-uniformly over a slot time, while forwarding can be assumed to be a continuous constant-rate process, since the considered time scales start from 1ms, whereas packets are interspaced on the order of microseconds on Gigabit links. Compared to the pessimistic assumption that forwarding of all data arriving in a slot is deferred until the next slot, this increases the waiting time by no more than one slot time Δ. Let

• A_k denote the amount of data arriving in the k-th slot and
• W_k denote the waiting time after the k-th time slot.

The waiting time W_{k+1} after the next slot can be calculated from the previous W_k:

W_{k+1} = max(min(W_k + A_k/C − Δ, B/C), 0).

This form of Lindley's equation accounts for a per-slot difference A_k/C − Δ in the workload and a limitation B/C of the waiting time according to a finite buffer size B. Considering a traffic trace over M intervals, a corresponding series of waiting times W_1, W_2, W_3, …, W_M after each time slot is determined starting from W_0 = 0. In addition, we obtain statistical parameters including the mean, the maximum and the quantiles of the waiting time. The analysis can be executed for arbitrary capacities C and the corresponding utilization μ(R)/C. We applied the evaluation to the complete traffic on the link and to the largest involved MPLS traffic flows with mean rates from 10 to 40 Mbit/s. The waiting time is computed at the 1ms time scale, W_{k+1} = max(W_k + A_k/C − 0.001, 0), for an infinite buffer (B → ∞), with QoS degradation corresponding to long waiting times. Evaluations for two MPLS traffic traces of about 15 minutes in length are shown in Figure 4. Each MPLS flow represents a source-to-destination traffic demand between edge routers of the backbone. Both examples have a mean rate of 18.0Mbit/s and have been filtered out of the total traffic with mean 753Mbit/s on a 2.5Gbit/s link. The measurement trace shows a moderate link utilization of about 30% and leads to overload neither for the total traffic nor for any included MPLS flow. We analyzed the same measurement trace for smaller capacities C corresponding to higher utilization. Then overload occurs at some level and the waiting time at each time slot in the trace increases with the utilization. In this way, we can determine a critical utilization level at which the mean or maximum waiting time exceeds a predefined threshold. The analysis does not include the TCP congestion control mechanism [7].
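A direct transcription of this slot-wise recursion can be sketched as follows (illustrative helper; units as in the text, Mbit for volumes and Mbit/s for the capacity):

```python
def waiting_times(arrivals, c, delta=0.001, b=float("inf")):
    """Lindley recursion W_{k+1} = max(min(W_k + A_k/C - delta, B/C), 0).

    arrivals: data volumes A_k [Mbit] per slot; c: capacity [Mbit/s];
    delta: slot length [s]; b: buffer size [Mbit] (default: infinite).
    Returns the waiting times [s] after each slot, starting from W_0 = 0.
    """
    w, out = 0.0, []
    for a in arrivals:
        w = max(min(w + a / c - delta, b / c), 0.0)
        out.append(w)
    return out

# Slight overload in the first two 1ms slots, then recovery:
ws = waiting_times([0.2, 0.2, 0.05], c=100.0)   # ≈ [1, 2, 1.5] ms
```

From the resulting series, the mean, maximum and quantiles of the waiting time follow directly.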
Nevertheless, the analysis of the original source traffic is relevant especially for the non-congested regime, where overload phases are seldom and mostly short, such that TCP has no essential influence. In addition, there is an increasing portion of inelastic and non-responsive applications including UDP traffic, which has caused the IETF standardization to consider extensions of rate control mechanisms. In the two MPLS flow examples, a maximum waiting time of 50ms is exceeded at different utilizations: at about 90% for C = 20 Mbit/s in the first case and at about 40% for C = 45 Mbit/s in the second case. The representation in the figures does not give a resolution at the 1ms level, but instead includes the maximum waiting time value during each second. The smooth traffic in the first example exhibits a similar behaviour as the total traffic on the link, where sufficient QoS properties can be met even up to 90% load. Beyond this threshold, QoS decreases sharply and long-lasting overload phases become visible, e.g. over the last minute of the first trace in Figure 4 at 95% utilization. In this case, TCP surely would have an influence on the long-lasting overload phase, but the example is by far the smoothest case in the set of considered MPLS examples.
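The described sweep over utilization levels can be sketched as follows; the 1% step size and the helper name are illustrative assumptions, and the Lindley recursion from the text is inlined to keep the example self-contained.

```python
def critical_utilization(arrivals, mu, delta=0.001, w_max=0.05):
    """Smallest utilization (in steps of 1%) at which the maximum waiting
    time in the trace exceeds w_max [s]; infinite buffer assumed.

    arrivals: data volumes A_k [Mbit] per slot; mu: mean rate [Mbit/s].
    """
    for load in range(1, 100):
        c = mu / (load / 100.0)          # capacity giving this utilization
        w, worst = 0.0, 0.0
        for a in arrivals:               # Lindley recursion, B -> infinity
            w = max(w + a / c - delta, 0.0)
            worst = max(worst, w)
        if worst > w_max:
            return load / 100.0
    return 1.0

# Synthetic on/off trace: 100 busy slots of 0.2 Mbit, 100 idle slots
arr = [0.2] * 100 + [0.0] * 100          # mean rate 100 Mbit/s over 0.2s
u_crit = critical_utilization(arr, mu=100.0, w_max=0.049)
```

For the measured MPLS flows, the analogous sweep yields the critical loads shown in Figures 4 and 5.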

[Fig. 4. Waiting time development during a traffic trace: maximum waiting time (ms) during each second over the course of two 800s MPLS traces; upper panel at 85%, 90% and 95% load, lower panel at 35%, 40%, 45% and 50% load]


[Fig. 5. Load and maximum waiting time in 11 examples of MPLS traffic traces: maximum waiting time (ms) versus utilization (%)]

The other MPLS traffic flows show higher variability σ(Δ)/μ(Δ) than the entire traffic on a link, due to a smaller rate and multiplexing level. Consequently, the QoS-critical thresholds of the utilization shift to lower loads for smaller traffic aggregates. While Figure 4 gives an impression of the distribution of load peaks during a trace for several levels of the long-term utilization, Figure 5 indicates the increase of the maximum waiting time with growing utilization. Each curve corresponds to an MPLS traffic flow, where the examples of Figure 4 represent the extreme cases on both sides.

Next, we compare the observed relationship between maximum waiting times and long-term utilization obtained from the traffic traces with Gaussian dimensioning approaches. The latter include only the mean and standard deviation, or the 99% quantile, as parameters. The statistical multiplexing effect suggests that the distribution of the traffic rate on relevant time scales approximately follows a Gaussian shape [8][10][13]. This is confirmed by histograms of the traffic rates per time slot, despite increasing deviations in the outer ranges x < μ − 2σ and x > μ + 2σ, which occur more frequently than expected for a Gaussian distribution. Based on Gaussian distributions, dimensioning rules are available with and without buffers [7][8]. The zero-buffer analysis determines the loss rate r_Loss of a switching system with constant forwarding capacity C in a continuous flow model. It is a simplified worst-case approach, since buffers may prevent losses and thus improve the QoS. It accounts for data lost during overload phases when the arrival rate exceeds the forwarding capacity. In general, the loss rate is determined with regard to the rate distribution function F_R(x). The loss probability p_Loss is given by the ratio of the mean loss rate to the mean traffic rate:


r_Loss = ∫_{x>C} (x − C) dF_R(x)   and   p_Loss = r_Loss / μ(R).    (3)

We can represent the required capacity as C = μ(R) + mσ(R), where a multiple m of the standard deviation σ(R) is provided in excess of the mean rate μ(R). Then we obtain the loss probability p_Loss:

p_Loss = (σ(R)/μ(R)) (φ(m) − m Φ(m));    (4)

where m = (C − μ(R))/σ(R);  φ(m) = exp(−m²/2)/√(2π);  Φ(m) = ∫_m^∞ φ(x) dx.
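Formula (4) can be evaluated numerically with the standard normal density and the Gaussian tail, e.g. via the complementary error function; the bisection for the required capacity below is an illustrative inverse, not a method taken from the paper.

```python
import math

def p_loss(c, mu, sigma):
    """Zero-buffer Gaussian loss probability, formula (4):
    p_Loss = sigma/mu * (phi(m) - m * Phi(m)), m = (C - mu) / sigma."""
    m = (c - mu) / sigma
    phi = math.exp(-m * m / 2) / math.sqrt(2 * math.pi)
    tail = 0.5 * math.erfc(m / math.sqrt(2))   # Phi(m) = int_m^inf phi(x) dx
    return sigma / mu * (phi - m * tail)

def required_capacity(mu, sigma, target):
    """Capacity C for a demanded loss probability, found by bisection
    (p_loss is monotonically decreasing in c)."""
    lo, hi = mu, mu + 10 * sigma
    for _ in range(60):
        mid = (lo + hi) / 2
        if p_loss(mid, mu, sigma) > target:
            lo = mid
        else:
            hi = mid
    return hi
```

This mirrors the remark below that (4) has no closed-form inverse but is easily handled numerically or with tables of the standard Gaussian distribution.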

The relationship (4) for Gaussian dimensioning does not give a closed formula to determine p_Loss from C, μ(R) and σ(R), but it can be evaluated numerically or from tables of the standard Gaussian distribution. Vice versa, we can determine the required capacity C from μ(R), σ(R) and a demanded loss probability p_Loss. Figure 6 shows the allowable load due to Gaussian dimensioning (3)-(4) for the set of considered MPLS traffic traces and compares it to the load level which leads to a maximum waiting time of 100ms observed in the course of the trace. Again, the capacity C = μ(R) + mσ(R) and the utilization μ(R)/C directly correspond to each other. While μ(R) remains constant over all time scales, σ(R) decreases on larger time scales. Thus the question arises which time scale is appropriate to determine σ(R). The relevant time scale should be of about the same order as the delay introduced by buffers, since fluctuations on smaller time scales can be compensated by the buffer, whereas overload phases on longer time scales lead to buffer overflows. The maximum delay in buffers of usual size is in the range of 0.1s to 1s; thus we focus on those time scales. As the main result of the comparison, Gaussian dimensioning with σ(R) taken from the 1s time scale establishes an optimistic upper bound, resulting in the uppermost curve in Figure 6. When σ(R) is taken from the 0.1s time scale instead, the allowable load in the Gaussian model is reduced by 3-10%. The corresponding curve alternates with the load levels for 100ms maximum waiting time computed from the traces, with 5 cases lying above and below, and thus establishes the most reasonable estimate due to Gaussian modeling. As an alternative, we determined σ̃(R) from the 99% quantile γ99% of the 0.1s time scale, using the fact that γ99% ≈ μ(R) + 2.33 σ̃(R) ⇒ σ̃(R) ≈ (γ99% − μ(R)) / 2.33 for a Gaussian distribution. This leads to a third curve in Figure 6, which is again below the Gaussian model with σ(R) taken directly from the 0.1s time scale. The second example of Figure 4 is also included as the tenth case in Figure 6; it has a 43% load limit computed from the trace, largely deviating from each of the Gaussian models. The maximum waiting time is encountered in two peaks visible in Figure 4, both of which are shorter than 2s and cannot be predicted from μ(R), σ(R) and γ99%. Although the maximum waiting time is exceptional for this case, the mean waiting time is smaller than for most other MPLS flows at the same load.
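The quantile-based estimate σ̃(R) can be sketched as follows (illustrative helper with a simple empirical order-statistic quantile):

```python
def sigma_from_quantile(rates):
    """Estimate a standard deviation from the empirical 99% quantile via
    the Gaussian relation gamma_99 ≈ mu + 2.33 * sigma."""
    n = len(rates)
    mu = sum(rates) / n
    g99 = sorted(rates)[min(int(0.99 * n), n - 1)]
    return (g99 - mu) / 2.33

# For roughly Gaussian data this recovers sigma up to sampling error:
s_tilde = sigma_from_quantile([100.0] * 99 + [123.3])
```

For an exactly Gaussian rate distribution, σ̃(R) equals σ(R); deviations between the two curves in Figure 6 reflect the non-Gaussian tails of the measured rates.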

[Fig. 6. Allowable load (%) for a Gaussian distribution of the traffic rate compared to the load level which exceeds 100ms as maximum waiting time in the trace, over the 11 examples of MPLS flows; curves: Gaussian dimensioning for σ(R) on the 1s time scale, threshold for 100ms waiting time in the measurement trace, Gaussian dimensioning for σ(R) on the 0.1s time scale, and Gaussian dimensioning for σ̃(R) via γ99% on the 0.1s time scale]

5 Summary and Conclusions

The variability of traffic generated on ADSL broadband access platforms essentially reduces on longer time scales and differs from classical IP traffic measurements suggesting a self-similar structure [11], although long-range dependence is still visible. The statistical multiplexing effect is strengthened by a large number of transport layer flows at small rates generated by residential users, whose ADSL access speed is still limited to a few Mbit/s, while the variability of traffic via Ethernet access networks remains higher [3]. Peer-to-peer protocols presently contribute most of the traffic, using multi-source connections for each download. They increase the number of flows and stabilize download rates at an almost constant level for each user. Therefore P2P traffic is observed to be much smoother than that of classical Internet applications. As indicators of QoS properties, the maximum and the quantiles of the waiting time are determined based on traffic traces and as a function of the utilization. Variability is found to be much higher for smaller aggregates of flows, which suggests lower allowable load thresholds with regard to QoS. A comparison to dimensioning for Gaussian traffic based only on the mean rate and standard deviation leads to reasonable estimates in most cases, although with exceptions. Further study is required to determine the most significant time scales with regard to QoS aspects and whether the evaluation can be based on only a few measurement parameters.


References

[1] Abry, P., Veitch, D.: Wavelet analysis of long range dependent traffic. IEEE Trans. on Information Theory 44, 2–15 (1998)
[2] Azzouna, N.B., Clérot, F., Fricker, C., Guillemin, F.: A flow-based approach to modeling ADSL traffic on an IP backbone link. Annals of Telecommunication 59, 1252–1255 (2004)
[3] van den Berg, H., Mandjes, M., van de Meent, R., Pras, A., Roijers, F., Venemans, P.: QoS-aware bandwidth provisioning for IP backbone links. Computer Networks 50, 631–647 (2006)
[4] Cho, K., Fukuda, K., Esaki, H., Kato, A.: The impact and implications of the growth in residential user-to-user traffic. ACM SIGCOMM Conf., Pisa (2006)
[5] Crovella, M.E., Bestavros, A.: Self-similarity in world wide web traffic: Evidence and possible causes. IEEE/ACM Trans. on Networking 5, 835–846 (1997)
[6] DSL Forum: Information on subscribers Q3'06 (2006)
[7] Hartleb, F., Haßlinger, G.: Comparison of link dimensioning methods including TCP behaviour. Proc. IEEE Globecom Conf., San Antonio, USA, pp. 2240–2247 (2001)
[8] Haßlinger, G.: QoS analysis for statistical multiplexing with Gaussian and autoregressive input. Telecommunication Systems 16, 315–334 (2001)
[9] Haßlinger, G.: ISP platforms under a heavy peer-to-peer workload. In: Steinmetz, R., Wehrle, K. (eds.) Peer-to-Peer Systems and Applications. LNCS, vol. 3485, pp. 369–382. Springer, Heidelberg (2005)
[10] Kilpi, J., Norros, I.: Testing the Gaussian approximation of aggregate traffic. Proc. Internet Measurement Workshop, Marseille, France (2002)
[11] Leland, W., Taqqu, M., Willinger, W., Wilson, D.: On the self-similar nature of Ethernet traffic. IEEE/ACM Trans. on Networking 2, 1–15 (1994)
[12] Li, S.-Q.: A general solution technique for discrete queueing analysis of multi-media traffic on ATM. IEEE Trans. on Communication, pp. 1115–1132 (1991)
[13] Norros, I., Pruthi, P.: On the applicability of Gaussian traffic models. In: Emstad, P.J., et al. (eds.) 13th Nordic Teletraffic Seminar, Trondheim, pp. 37–50 (1996)
[14] Siekkinen, M., Collange, D., Urvoy-Keller, G., Biersack, E.W.: Performance limitations of ADSL users: a case study. PAM 2007, 8th Passive and Active Measurement Conf., Louvain-la-Neuve, Belgium (2007)
[15] Tutschku, K., Tran-Gia, P.: Traffic characteristics and performance evaluation of peer-to-peer systems. In: Steinmetz, R., Wehrle, K. (eds.) Peer-to-Peer Systems and Applications. LNCS, vol. 3485, pp. 383–398. Springer, Heidelberg (2005)