
Novel Approaches to DoS Impact Measurement

J. Anto Sylverster Jeyaraj, C. Suriya, R. Sudha


Abstract

Over the past few years, denial of service (DoS) attacks have emerged as a serious vulnerability for almost every Internet service. Existing approaches to DoS impact measurement in DETER testbed experiments equate service denial with slow communication, low throughput, high resource utilization, and high loss rate. These approaches are neither versatile, nor quantitative, nor accurate: they fail to specify exact ranges of parameter values that correspond to good or poor service quality, and they have not been shown to correspond to human perception of service denial. We propose novel approaches to DoS impact measurement that assess the quality of service experienced by users during an attack. Our approaches are quantitative, versatile, and accurate: they map QoS requirements for several applications into measurable traffic parameters with acceptable, scientifically determined thresholds, and they apply to a wide range of attack scenarios, which we demonstrate via DETER testbed experiments.

 

Keywords

Communication/network, measurement techniques, performance of systems, network security


1. INTRODUCTION

Denial of service (DoS) is a major threat. DoS severely disrupts legitimate communication by exhausting some critical, limited resource via packet floods or by sending malformed packets that cause network elements to crash. The large number of devices, applications, and resources involved in communication offers a wide variety of mechanisms to deny service. Users experience the effects of DoS attacks as server slowdowns, service quality degradation, or complete denial of service.

DoS attacks have been studied through testbed experiments. Accurately measuring the impairment of service quality perceived by human clients during an attack is essential for evaluating and comparing potential DoS defenses, and for studying novel attacks. Researchers and developers need metrics that are accurate, quantitative, and versatile. Accurate metrics produce measures of service denial that closely agree with a human's perception of service impairment in a similar scenario. Quantitative metrics define ranges of parameter values that signify service denial, using scientific guidelines. Versatile metrics apply to many DoS scenarios regardless of the underlying mechanism for service denial, the attack dynamics, the legitimate traffic mix, or the network topology.

Existing approaches to DoS impact measurement fall short of these goals. They collect one or several traffic measurements and compare their first-order statistics (e.g., mean, standard deviation, minimum, or maximum) or their distributions in the baseline and the attack case. Frequently used traffic measurements include the legitimate traffic's request/response delay, legitimate transactions' durations, the legitimate traffic's goodput, throughput, or loss, and the division of a critical resource between the legitimate and the attack traffic. If a defense is being evaluated, these metrics are also used to measure its collateral damage. The lack of consensus on which measurements best reflect DoS impact causes researchers to choose the ones they feel are most relevant. Such metrics are not versatile, since each independent traffic measurement captures only one aspect of service denial. For example, a prolonged request/response time properly signals DoS for two-way applications such as Web, FTP, and DNS, but not for media traffic that is sensitive to one-way delay, packet loss, and jitter. The lack of common DoS impact metrics prevents comparison among published work. We further argue that the current measurement approaches are neither quantitative nor accurate. Ad hoc comparisons of measurement statistics or distributions only show how network traffic behaves differently under attack, but do not quantify which services have been denied and how severely. To our knowledge, no studies show that existing metrics agree with human perception of service denial. We survey existing DoS impact metrics in Section 2.

We propose a novel approach to DoS impact measurement. Our key insight is that DoS always causes degradation of service quality, and a metric that holistically captures a human user's QoS perception will be applicable to all test scenarios. For each popular application, we specify its QoS requirements, consisting of relevant traffic measurements and corresponding thresholds that define good service ranges. We observe traffic as a collection of high-level tasks called "transactions" (defined in Section 3). Each legitimate transaction is evaluated against its application's QoS requirements; transactions that do not meet all the requirements are considered "failed." We aggregate information about transaction failures into several intuitive qualitative and quantitative composite metrics to expose the precise interaction of the DoS attack with the legitimate traffic. We describe our proposed approaches in Section 3. In Section 4, we demonstrate through testbed experiments with multiple DoS scenarios and legitimate traffic mixes that our approaches meet the goals of being accurate, quantitative, and versatile. We conclude in Section 5.

2. EXISTING METRICS

Prior DoS research has focused on measuring DoS through selected legitimate traffic parameters:

  1. Packet loss,
  2. Traffic throughput or goodput,
  3. Request/response delay,
  4. Transaction duration, and
  5. Allocation of resources.

Researchers have used both simple metrics (a single traffic parameter) and combinations of them to report the impact of an attack on the network. None of the existing metrics is quantitative, because they do not specify ranges of loss, throughput, delay, duration, or resource shares that correspond to service denial. Indeed, such values cannot be specified in general because they depend heavily on the type of application whose traffic coexists with the attack: a 10 percent loss of VoIP traffic is devastating, while a 10 percent loss of DNS traffic is merely a glitch. The existing metrics are also not versatile, and we point out below the cases where they fail to measure service denial. Finally, they are inaccurate, since they have not been shown to correspond to a human user's perception of service denial.

3. PROPOSED APPROACHES TO DOS IMPACT MEASUREMENT

3.3 DoS Metrics

We aggregate the transaction success/failure measures into several intuitive composite metrics.
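For illustration, the per-transaction evaluation that precedes this aggregation can be sketched as follows. This is only an illustrative sketch, not the implementation used in our experiments; the threshold values, measurement names, and application categories below are assumed examples standing in for the scientifically determined thresholds discussed above.

    # Illustrative sketch: mark a transaction as failed if any measured
    # parameter exceeds its application's QoS threshold. Thresholds here
    # are assumed examples, not the values used in the paper.
    QOS_REQUIREMENTS = {
        "web":  {"request_response_delay_s": 4.0, "loss_rate": 0.05},
        "dns":  {"request_response_delay_s": 4.0},
        "voip": {"one_way_delay_s": 0.15, "loss_rate": 0.03, "jitter_s": 0.05},
    }

    def transaction_failed(app, measurements):
        """A transaction fails if any measurement exceeds its threshold."""
        thresholds = QOS_REQUIREMENTS[app]
        return any(measurements[name] > limit
                   for name, limit in thresholds.items()
                   if name in measurements)

    # Example: a VoIP transaction whose one-way delay grew during an attack.
    print(transaction_failed("voip", {"one_way_delay_s": 0.40, "loss_rate": 0.01}))  # True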

Percentage of failed transactions (pft) per application type. This metric directly captures the impact of a DoS attack on network services by quantifying the QoS experienced by users. For each transaction that overlaps with the attack, we evaluate transaction success or failure by applying Definition 3. A straightforward approach to the pft calculation is to divide the number of failed transactions by the number of all transactions during the attack. This produces biased results for clients that generate transactions serially. If a client does not generate each request in a dedicated thread, the timing of subsequent requests depends on the completion of previous requests. In this case, transaction density during an attack will be lower than without an attack, since transactions overlapping the attack last longer. This skews the pft calculation because each success or failure has a higher influence on the pft value during an attack than in its absence. In our experiments, IRC and telnet clients suffered from this deficiency. To remedy this problem, we calculate the pft value as the difference between 1 (100 percent) and the ratio of the number of successful transactions to the number of transactions that the application would have initiated during the same period had the attack not been present.
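A minimal sketch of this adjusted pft calculation, assuming the attack-free transaction count is estimated from the attack duration and the application's baseline inter-arrival time (the variable names are ours):

    def adjusted_pft(successes_during_attack, attack_duration_s, baseline_interarrival_s):
        """pft = 1 - successes / (transactions the application would have
        initiated in the same period without the attack)."""
        expected = attack_duration_s / baseline_interarrival_s
        return 1.0 - successes_during_attack / expected

    # Example: 60 successful transactions during a 1,300 s attack, with one
    # transaction initiated every 13 s in the baseline (100 expected).
    print(adjusted_pft(60, 1300, 13))  # 0.4, i.e., 40 percent failed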

The DoS-hist metric shows the histogram of pft measures across applications, and is helpful to understand each application’s resilience to the attack.

The DoS-level metric is the weighted average of pft measures for all applications of interest: DoS-level = (Σ_k w_k · pft_k) / (Σ_k w_k), where k spans all application categories and w_k is the weight associated with category k. We introduced this metric because in some experiments it may be useful to produce a single number that describes the DoS impact. But we caution that DoS-level is highly dependent on the chosen application weights and thus can be biased.
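A minimal sketch of the DoS-level computation, assuming per-application pft values and weights are already available (the equal weights in the example are an assumption):

    def dos_level(pft_by_app, weights):
        """Weighted average of pft over all application categories.
        (DoS-hist is simply the pft_by_app values plotted as a histogram.)"""
        total = sum(weights[app] for app in pft_by_app)
        return sum(weights[app] * pft for app, pft in pft_by_app.items()) / total

    pft = {"web": 0.40, "dns": 0.10, "voip": 0.90}
    w = {"web": 1.0, "dns": 1.0, "voip": 1.0}
    print(dos_level(pft, w))  # ~0.47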

QoS-ratio is the ratio of the difference between a transaction's traffic measurement and its corresponding threshold, divided by this threshold. The QoS metric for each successful transaction shows the user-perceived service quality, in the range (0, 1], where higher numbers indicate better quality. It is useful for evaluating service quality degradation during attacks. We compute it by averaging the QoS-ratios for all traffic measurements of a given transaction that have defined thresholds. For failed transactions, we compute the related QoS-degrade metric to quantify the severity of service denial.

QoS-degrade is the absolute value of the QoS-ratio of that transaction's measurement that exceeded its QoS threshold by the largest margin. This metric takes positive values and can exceed 1. Intuitively, a value N of QoS-degrade means that the service of a failed transaction was N times worse than a user could tolerate. While arguably any denial is significant and there is no need to quantify its severity, perception of DoS is highly subjective. Low values of QoS-degrade (e.g., < 1) may signify service quality that is still acceptable to some users.

The life diagram shows the birth and death of each transaction in the experiment with horizontal bars. The x-axis is time, and each bar's position shows a transaction's birth (start of the bar) and death (its end). We show failed and successful transactions on separate diagrams for clarity. This metric can quickly show which transactions failed and indicate clusters that may point to a common cause of failure.
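One possible reading of the QoS and QoS-degrade definitions in code; the sign convention (taking absolute values so that successful transactions score in (0, 1]) and all names are our assumptions:

    def qos_ratio(measurement, threshold):
        """(measurement - threshold) / threshold."""
        return (measurement - threshold) / threshold

    def qos(measurements, thresholds):
        """Average |QoS-ratio| over measurements with defined thresholds;
        for successful transactions (measurements below thresholds) this
        falls in (0, 1], with higher values meaning better perceived quality."""
        ratios = [abs(qos_ratio(measurements[m], t))
                  for m, t in thresholds.items() if m in measurements]
        return sum(ratios) / len(ratios)

    def qos_degrade(measurements, thresholds):
        """Largest threshold violation, in absolute value (failed transactions)."""
        return max(abs(qos_ratio(measurements[m], t))
                   for m, t in thresholds.items()
                   if m in measurements and measurements[m] > t)

    # A failed VoIP transaction: one-way delay of 0.45 s against a 0.15 s threshold.
    print(qos_degrade({"one_way_delay_s": 0.45}, {"one_way_delay_s": 0.15}))  # 2.0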

The failure ratio shows the percentage of live transactions in the current (1-second) interval that will fail in the future. The failure ratio is useful for evaluating DoS defenses, to capture the speed of a defense's response, and for time-varying attacks. Transactions born during the attack are considered live until they complete successfully or fail. Transactions born before the attack are considered live once the attack starts. A failed transaction contributes to the failed-transaction count in all intervals in which it was live.
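A sketch of the failure-ratio computation over 1-second bins, assuming each transaction is summarized by its birth time, death time, and final outcome:

    def failure_ratio(transactions, start_s, end_s):
        """Per 1-second interval, the fraction of live transactions that will
        eventually fail. Each transaction is (birth_s, death_s, failed)."""
        ratios = []
        for t in range(start_s, end_s):
            live = [tx for tx in transactions if tx[0] < t + 1 and tx[1] >= t]
            failed = sum(1 for tx in live if tx[2])
            ratios.append(failed / len(live) if live else 0.0)
        return ratios

    # Example: one transaction fails at t = 3 s, another succeeds at t = 5 s.
    print(failure_ratio([(0.0, 3.0, True), (1.0, 5.0, False)], 0, 6))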

4. EVALUATION IN TESTBED EXPERIMENTS

We first evaluate our metrics in experiments on the DETER testbed [15], which allows security researchers to evaluate attacks and defenses in a controlled environment. Fig. 2 shows our experimental topology. Four legitimate networks and two attack networks are connected via four core routers. Each legitimate network has four server nodes and two client nodes, and is connected to the core via an access router. Links between the access routers and the core have 100-Mbps bandwidth and 10-40-ms delay, while all other links have 1-Gbps bandwidth and no added delay. The location of the bottlenecks is chosen to mimic high-bandwidth local networks that connect over a limited access link to an overprovisioned core. The attack networks host two attackers each and connect directly to the core routers.

Fig. 2. Experimental topology.

4.1 Background Traffic

Each client generates a mixture of Web, DNS, FTP, IRC, VoIP, ping, and telnet traffic. We used open-source servers and clients where possible to generate realistic traffic at the application, transport, and network levels: for example, an Apache server and the wget client for Web traffic, the bind server and the dig client for DNS traffic, etc. The telnet, IRC, and VoIP clients and the VoIP server were custom-built in Perl. Clients talk with servers in their own and adjacent networks; Fig. 2 shows the traffic patterns. The patterns for IRC and VoIP differ because those application clients could not support multiple simultaneous connections. All attacks target the Web server in network 4 and cross its bottleneck link, so only this network's traffic should be affected by the attacks. These experiments illustrate our metrics in realistic traffic scenarios for various attacks. We modified the topology from [8] to ensure that bottlenecks occur only before the attack target, to create more realistic attack conditions. We used a somewhat artificial traffic mix, with regular service-request arrivals and identical file sizes for each application, to clearly isolate and illustrate features of our metrics. Traffic parameters were chosen to produce the same transaction density in each application category (Table 3): roughly 100 transactions per application during the 1,300-second attack. All transactions succeed in the absence of the attack.
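For concreteness, the target transaction density above corresponds to a simple per-application request inter-arrival time; the exact client configuration is not spelled out here, so the calculation below is only a back-of-the-envelope sketch:

    # Roughly 100 transactions per application over the 1,300 s attack window
    # implies one request every ~13 s per application category.
    target_transactions = 100
    attack_duration_s = 1300
    print(attack_duration_s / target_transactions)  # 13.0 s between requests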

4.2 Attacks

Attacks can deny service 1) by generating a high traffic volume that exhausts bandwidth on bottleneck links (the more frequent variant) and 2) by generating a high packet rate that exhausts the CPU at a router leading to the target. We generate the first attack type: a UDP bandwidth flood. Packet sizes were in the range [750 bytes, 1.25 Kbytes] and the total packet rate was 200 Kpps. This generates a volume that is roughly 16 times the bottleneck bandwidth. The expected effect is that the access link of network 4 becomes congested, and traffic between networks 1 and 4 and between networks 3 and 4 is denied service.
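The factor of roughly 16 follows directly from the attack parameters, taking the mean packet size (1,000 bytes, the midpoint of the given range), the 200 Kpps rate, and the 100-Mbps bottleneck:

    # Check that the flood volume is ~16 times the bottleneck bandwidth.
    mean_packet_bytes = (750 + 1250) / 2      # midpoint of [750 B, 1.25 KB]
    packet_rate_pps = 200_000                 # 200 Kpps
    bottleneck_bps = 100e6                    # 100-Mbps access link
    attack_bps = packet_rate_pps * mean_packet_bytes * 8
    print(attack_bps / bottleneck_bps)        # 16.0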

5. CONCLUSIONS

One cannot understand a complex phenomenon like DoS without being able to measure it objectively and accurately. The work described here defines accurate, quantitative, and versatile metrics for measuring the effectiveness of DoS attacks and defenses. Our approach is objective, reproducible, and applicable to a wide variety of attack and defense methodologies. Its value has been demonstrated in testbed environments.

Our approaches are usable by other researchers in their own work. They offer the first real opportunity to compare and contrast different DoS attacks and defenses on an objective head-to-head basis. We expect that this work will advance DoS research by providing a clear measure of success for any proposed defense, and helping researchers gain insight into strengths and weaknesses of their solutions.

REFERENCES

[1] A. Yaar, A. Perrig, and D. Song, “SIFF: A Stateless Internet Flow Filter to Mitigate DDoS Flooding Attacks,” Proc. IEEE Symp. Security and Privacy (S&P), 2004.

[2] A. Kuzmanovic and E.W. Knightly, “Low-Rate TCP-Targeted Denial of Service Attacks (The Shrew versus the Mice and Elephants),” Proc. ACM SIGCOMM ’03, Aug. 2003.

[3] CERT Advisory CA-1996-21 TCP SYN Flooding and IP Spoofing Attacks, CERT CC, http://www.cert.org/advisories/CA-1996-21.html, 1996.

[4] R. Mahajan, S.M. Bellovin, S. Floyd, J. Ioannidis, V. Paxson, and S. Shenker, “Controlling High Bandwidth Aggregates in the Network,” ACM Computer Comm. Rev., July 2001.

[5] G. Oikonomou, J. Mirkovic, P. Reiher, and M. Robinson, “A Framework for Collaborative DDoS Defense,” Proc. 11th Asia-Pacific Computer Systems Architecture Conf. (ACSAC ’06), Dec. 2006.

[6] Cooperative Association for Internet Data Analysis, CAIDA Web page, http://www.caida.org, 2008.

[7] MAWI Working Group Traffic Archive, WIDE Project, http://tracer.csl.sony.co.jp/mawi/, 2008.

[8] “QoS Performance requirements for UMTS,” The Third Generation Partnership Project (3GPP), Nortel Networks, http://www.3gpp.org/ftp/tsg_sa/WG1_Serv/TSGS1_03-HCourt/Docs/Docs/s1-99362.pdf, 2008.

[9] N. Bhatti, A. Bouch, and A. Kuchinsky, “Quality is in the Eye of the Beholder: Meeting Users’ Requirements for Internet Quality of Service,” Technical Report HPL-2000-4, Hewlett Packard, 2000.

[10] L. Yamamoto and J.G. Beerends, “Impact of Network Performance Parameters on the End-to-End Perceived Speech Quality,” Proc. EXPERT ATM Traffic Symp., Sept. 1997.

[11] T. Beigbeder, R. Coughlan, C. Lusher, J. Plunkett, E. Agu, and M. Claypool, “The Effects of Loss and Latency on User Performance in Unreal Tournament 2003,” Proc. ACM Network and System Support for Games Workshop (NetGames), 2004.

[12] N. Sheldon, E. Girard, S. Borg, M. Claypool, and E. Agu, “The Effect of Latency on User Performance in Warcraft III,” Proc. ACM Network and System Support for Games Workshop (NetGames), 2003.

[13] B.N. Chun and D.E. Culler, “User-Centric Performance Analysis of Market-Based Cluster Batch Schedulers,” Proc. Second IEEE/ACM Int’l Symp. Cluster Computing and the Grid (CCGRID ’02), May 2002.

[14] J. Ash, M. Dolly, C. Dvorak, A. Morton, P. Taraporte, and Y.E. Mghazli, Y.1541-QOSM—Y.1541 QoS Model for Networks Using Y.1541 QoS Classes, NSIS Working Group, Internet Draft, work in progress, May 2006.

[15] T. Benzel, R. Braden, D. Kim, C. Neuman, A. Joseph, K. Sklower, R. Ostrenga, and S. Schwab, “Experiences with DETER: A Testbed for Security Research,” Proc. Second Int’l IEEE/Create-Net Conf. Testbeds and Research Infrastructures for the Development of Networks and Communities (TridentCom ’06), Mar. 2006.

[16] D.J. Bernstein, TCP SYN Cookies, http://cr.yp.to/syncookies.html, 2008.

 
