Author Archives: Steve Uhlig

Workshop on Tracking Quality of Experience in the Internet: Summary and Outcomes

Fabián E. Bustamante, David Clark, Nick Feamster.
Abstract

This is a report on the Workshop on Tracking Quality of Experience in the Internet, held at Princeton, October 21–22, 2015, and jointly sponsored by the National Science Foundation and the Federal Communications Commission. The term Quality of Experience (QoE) describes a user’s subjective assessment of their experience when using a particular application. In the past, network engineers have typically focused on Quality of Service (QoS): performance metrics such as throughput, delay, jitter, packet loss, and the like. Yet performance as measured by QoS parameters matters only insofar as it affects the experience of users as they attempt to use a particular application. Ultimately, the user’s experience is determined by QoE impairments (e.g., rebuffering). Although QoE and QoS are related (for example, a video rebuffering event may be caused by a high packet-loss rate), it is the QoE impairments that the user ultimately perceives.

Identifying the causes of QoE impairments is complex, since the impairments may arise in one region of the network or another, in the home network, on the user’s device, in servers that are part of the application, or in supporting services such as the DNS. Additionally, metrics for QoE continue to evolve, as do the methods for relating QoE impairments to underlying causes that can be measured using standard network measurement techniques. Finally, as the capabilities of the underlying network infrastructure continue to evolve, researchers should also consider how infrastructure and tools can be designed to best support measurements that can better identify the locations and causes of QoE impairments.

The workshop’s aim was to understand the current state of QoE research and to develop a community agenda for integrating ongoing threads of QoE research into a collaborative effort. This summary report describes the topics discussed and summarizes the key points of the discussion. Materials related to the workshop are available at http://aqualab.cs.northwestern.edu/NSFWorkshop-InternetQoE
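
To make the QoS-versus-QoE distinction concrete, here is a minimal sketch (ours, not the report’s; all parameters are hypothetical) of how a QoS metric such as throughput maps onto a QoE impairment such as rebuffering, using a toy playback-buffer model:

```python
# Minimal sketch (not from the report): relating a QoS metric (throughput)
# to a QoE impairment (rebuffering). All parameters are hypothetical.

def count_rebuffering_events(throughputs_mbps, bitrate_mbps=4.0,
                             startup_buffer_s=5.0, interval_s=1.0):
    """Count how often a toy video player stalls.

    throughputs_mbps: per-interval download throughput samples (QoS).
    bitrate_mbps: constant video encoding rate.
    Returns the number of rebuffering events (QoE impairment).
    """
    buffer_s = startup_buffer_s  # seconds of video currently buffered
    stalls = 0
    stalled = False
    for tput in throughputs_mbps:
        # Seconds of video downloaded during this interval.
        buffer_s += (tput / bitrate_mbps) * interval_s
        if not stalled:
            buffer_s -= interval_s  # playback drains the buffer
        if buffer_s <= 0:
            buffer_s = 0.0
            if not stalled:
                stalls += 1  # a new rebuffering event begins
            stalled = True
        elif buffer_s >= 2.0:   # resume after rebuilding 2 s of video
            stalled = False
    return stalls

# Example: a temporary outage (a QoS degradation) causes one stall.
samples = [6.0] * 10 + [0.0] * 10 + [6.0] * 10
print(count_rebuffering_events(samples))  # 1

```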
Download the full article DOI: 10.1145/3041027.3041035

Can We Make a Cake and Eat It Too? A Discussion of ICN Security and Privacy

Edith Ngai, Börje Ohlman, Gene Tsudik, Ersin Uzun, Matthias Wählisch, Christopher A. Wood.
Abstract

In recent years, Information-centric Networking (ICN) has received much attention from both academia and industry. ICN offers data-centric inter-networking that is radically different from today’s host-based IP networks. Security and privacy features were not originally present on today’s Internet and have been incrementally retrofitted over the last 35 years. These issues therefore become increasingly important as ICN technology gradually matures towards real-world deployment. Thus, while ICN-based architectures (e.g., NDN, CCNx) are still evolving, it is both timely and important to explore ICN security and privacy issues as well as to devise and assess possible mitigation techniques.

This report documents the highlights and outcomes of the Dagstuhl Seminar 16251 on “Information-centric Networking and Security,” whose goal was to bring together researchers to discuss and address the security and privacy issues particular to ICN-based architectures. At the end of the three-day seminar, the outlook for ICN remained unclear. Many unsolved and ill-addressed problems remain, such as namespace and identity management, object security and forward secrecy, and privacy. Regardless of the fate of ICN, one thing is certain: much more research and practical experience with these systems is needed to make progress on these arduous problems.
Download the full article DOI: 10.1145/3041027.3041034

Toward a Taxonomy and Attacker Model for Secure Routing Protocols

Matthias Hollick, Cristina Nita-Rotaru, Panagiotis Papadimitratos, Adrian Perrig, Stefan Schmid.
Abstract

A secure routing protocol is a foundational building block of a dependable communication system. Unfortunately, no taxonomy currently exists to assist in the design and analysis of secure routing protocols. Based on the Dagstuhl Seminar 15102, this paper initiates the study of more structured approaches to describing secure routing protocols and the corresponding attacker models, in an effort to better understand existing secure routing protocols and to provide a framework for designing new ones. We decompose the routing system into its key components based on a functional model of routing. This allows us to classify possible attacks on secure routing protocols. Using our taxonomy, we observe that the most effective attacks target the information in the control plane. Accordingly, unlike classic attacker models, whose capabilities are often described in terms of computational complexity, we propose to classify the power of an attacker with respect to its reach, that is, the extent to which the attacker can influence routing information indirectly, beyond the locations under its direct control.
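
As a rough illustration of the “reach” notion (our sketch, not the paper’s formalization; the topology is hypothetical), one can model the flow of routing announcements as a directed graph and compute the set of routers whose routing state a set of compromised routers can influence, directly or transitively:

```python
# Illustrative sketch (not from the paper): an attacker's "reach" computed
# as the set of routers whose routing state can be tainted, directly or
# transitively, from a set of compromised routers.
from collections import deque

def attacker_reach(adjacency, compromised):
    """BFS over the flow of routing announcements.

    adjacency: dict mapping a router to the neighbours it sends
               routing information to.
    compromised: routers under the attacker's direct control.
    Returns all routers whose routing information the attacker can influence.
    """
    reached = set(compromised)
    queue = deque(compromised)
    while queue:
        node = queue.popleft()
        for neighbour in adjacency.get(node, ()):
            if neighbour not in reached:
                reached.add(neighbour)
                queue.append(neighbour)
    return reached

# Hypothetical topology: A announces to B, B to C and D, D to E.
topology = {"A": ["B"], "B": ["C", "D"], "D": ["E"]}
print(attacker_reach(topology, {"B"}))  # {'B', 'C', 'D', 'E'}
```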

Download the full article DOI: 10.1145/3041027.3041033

Exploring Domain Name Based Features on the Effectiveness of DNS Caching

Shuai Hao, Haining Wang.
Abstract

The DNS cache plays a critical role in domain name resolution, providing (1) high scalability at the root and top-level-domain (TLD) name servers, whose workloads are reduced, and (2) low response latency to clients when the resource records of the queried domains are cached. However, pervasive misuse of domain names, e.g., domains following a “one-time-use” pattern, has a negative impact on the effectiveness of DNS caching, because the cache becomes filled with entries that are highly unlikely to be retrieved again. In this paper, we investigate such misuse and identify domain name-based features that characterize these one-time domains. By leveraging features that are explicitly available from the domain name itself, we build a classifier that combines these features, propose simple policy modifications to caching resolvers for improving DNS cache performance, and validate their efficacy using real traces.
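
To illustrate what domain name-based features can look like (a sketch of ours; the paper’s actual feature set, classifier, and thresholds may differ), the following extracts a few lexical features computable from the name alone and combines them into a toy rule for flagging likely one-time domains:

```python
# Illustrative sketch (the paper's exact features and classifier may differ):
# features computable from a domain name alone, of the kind used to flag
# "one-time" domains that pollute resolver caches.
import math
from collections import Counter

def name_features(domain):
    """Extract simple lexical features from a domain name."""
    labels = domain.rstrip(".").split(".")
    leftmost = labels[0]
    counts = Counter(leftmost)
    total = len(leftmost) or 1
    # Shannon entropy of the leftmost label's characters.
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return {
        "num_labels": len(labels),
        "leftmost_len": len(leftmost),
        "digit_ratio": sum(ch.isdigit() for ch in leftmost) / total,
        "entropy": entropy,
    }

def looks_one_time(domain, entropy_threshold=3.5, len_threshold=20):
    """Toy rule combining the features; thresholds are hypothetical."""
    f = name_features(domain)
    return f["entropy"] > entropy_threshold or f["leftmost_len"] > len_threshold

print(looks_one_time("www.example.com"))                   # False
print(looks_one_time("x9q2v7t1kd0pzro38waj.example.com"))  # True
```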

Download the full article DOI: 10.1145/3041027.3041032

On the Potential Abuse of IGMP

Matthew Sargent, John Kristoff, Vern Paxson, Mark Allman.
Abstract

In this paper we investigate how the Internet Group Management Protocol (IGMP) can be leveraged for denial-of-service (DoS) attacks. IGMP is a connectionless protocol and is therefore susceptible to attackers spoofing a third-party victim’s source address in an effort to coax responders to send their replies to the victim. We find 305K IGMP responders that will indeed answer queries from arbitrary Internet hosts. Further, the responses are often larger than the requests, hence amplifying the attacker’s own expenditure of bandwidth. We conclude that attackers can coordinate IGMP responders to mount sizeable DoS attacks.
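
The amplification at work here is simple arithmetic: the attacker pays for the small spoofed query, while the victim receives the larger response. A back-of-the-envelope sketch, with purely hypothetical packet sizes:

```python
# Back-of-the-envelope sketch with hypothetical packet sizes: how much
# traffic spoofed IGMP queries can direct at a victim.

def amplification_factor(query_bytes, response_bytes_per_responder,
                         responses_per_query=1):
    """Bytes received by the victim per byte sent by the attacker."""
    return (response_bytes_per_responder * responses_per_query) / query_bytes

# Hypothetical sizes: a small query elicits a larger membership report.
factor = amplification_factor(query_bytes=60, response_bytes_per_responder=180)
print(f"amplification factor: {factor:.1f}x")  # 3.0x

# With these illustrative numbers, 10 Mb/s of spoofed queries spread across
# abusable responders yields roughly 30 Mb/s of traffic at the victim.
attacker_rate_mbps = 10
print(f"victim receives about {attacker_rate_mbps * factor:.0f} Mb/s")
```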

Download the full article DOI: 10.1145/3041027.3041031

A Database Approach to SDN Control Plane Design

Bruce Davie, Teemu Koponen, Justin Pettit, Ben Pfaff, Martin Casado, Natasha Gude, Amar Padmanabhan, Tim Petty, Kenneth Duda, Anupam Chanda.
Abstract

Software-defined networking (SDN) is a well-known example of a research idea that has been reduced to practice in numerous settings. Network virtualization, in particular, has been successfully developed commercially using SDN techniques. This paper describes our experience in developing production-ready, multi-vendor implementations of a complex network virtualization system. Having struggled with a traditional network protocol approach (based on OpenFlow) to achieving interoperability among vendors, we adopted a new approach: we focused first on defining the control information content and then used a generic database protocol to synchronize state between the elements. Less than nine months after starting the design, we had achieved basic interoperability between our network virtualization controller and the hardware switches of six vendors. This was a qualitative improvement on our decidedly mixed experience using OpenFlow. We found a number of benefits to the database approach, such as speed of implementation, greater hardware diversity, the ability to abstract away implementation details of the hardware, a clarified state consistency model, and extensibility of the overall system.
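
The abstract does not spell out the database protocol, but the approach it describes, defining the control information as tables and synchronizing row-level changes rather than exchanging protocol messages, can be sketched as follows (our illustration; the table name and schema are hypothetical):

```python
# Illustrative sketch (table name and schema are hypothetical): control
# information modelled as database rows that are synchronized to switches,
# rather than pushed as protocol messages such as OpenFlow flow_mods.

class Table:
    """A named table whose row changes are streamed to subscribers."""
    def __init__(self, name):
        self.name = name
        self.rows = {}          # row_id -> column dict
        self.subscribers = []   # callbacks invoked on each change

    def monitor(self, callback):
        # Replay the current state, then stream future changes.
        for row_id, columns in self.rows.items():
            callback("insert", row_id, columns)
        self.subscribers.append(callback)

    def upsert(self, row_id, columns):
        op = "update" if row_id in self.rows else "insert"
        self.rows[row_id] = columns
        for cb in self.subscribers:
            cb(op, row_id, columns)

# A switch-side agent translates row changes into local forwarding state.
def switch_agent(op, row_id, columns):
    print(f"switch: {op} {row_id} -> {columns}")

bindings = Table("logical_port_bindings")  # hypothetical table
bindings.monitor(switch_agent)
# The controller binds a VM's logical port to a physical location.
bindings.upsert("lport-7", {"logical_switch": "ls-blue",
                            "vtep_ip": "192.0.2.10", "tunnel_key": 5001})
```

One appeal of this style, as the abstract suggests, is that vendors only need to agree on the table contents and a generic synchronization mechanism, not on the semantics of a device-level programming protocol.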

Download the full article DOI: 10.1145/3041027.3041030

Towards a Context-Aware Forwarding Plane in Named Data Networking supporting QoS

Daniel Posch, Benjamin Rainer, Hermann Hellwagner.
Abstract

The emergence of Information-Centric Networking (ICN) provides considerable opportunities for context-aware data distribution in the network’s forwarding plane. While packet forwarding in classical IP-based networks is essentially predetermined by routing, ICN foresees an adaptive forwarding plane that considers the requirements of network applications. As research in this area is still at an early stage, most work so far has focused on providing basic functionality rather than on exploiting the available context information to improve Quality of Service (QoS). This article investigates to what extent existing forwarding strategies take account of available context information and can therefore increase service quality. The article examines a typical scenario encompassing different user applications (Voice over IP, video streaming, and classical data transfer) with varying demands (context), and evaluates how well the applications’ requirements are met by the existing strategies.
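
As an illustration of what a context-aware strategy could do (our sketch, not the article’s algorithm; the face metrics and traffic classes are hypothetical), a forwarding strategy might rank a node’s upstream faces differently per application class:

```python
# Illustrative sketch (not the article's algorithm): a forwarding strategy
# that ranks upstream faces per application class. The face metrics and
# traffic classes are hypothetical.

FACES = {
    "face-1": {"rtt_ms": 12.0, "throughput_mbps": 20.0, "loss": 0.01},
    "face-2": {"rtt_ms": 45.0, "throughput_mbps": 95.0, "loss": 0.002},
}

def choose_face(traffic_class, faces=FACES):
    """Pick an upstream face according to the application's requirements."""
    if traffic_class == "voip":   # delay-sensitive: minimise RTT
        return min(faces, key=lambda f: faces[f]["rtt_ms"])
    if traffic_class == "video":  # rate-sensitive: maximise throughput
        return max(faces, key=lambda f: faces[f]["throughput_mbps"])
    # Bulk/elastic data: minimise loss to avoid retransmissions.
    return min(faces, key=lambda f: faces[f]["loss"])

print(choose_face("voip"))   # face-1 (lowest RTT)
print(choose_face("video"))  # face-2 (highest throughput)
print(choose_face("data"))   # face-2 (lowest loss)
```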

Download the full article DOI: 10.1145/3041027.3041029

The October 2016 issue

This issue of Computer Communication Review comes a bit late due to delays in preparing the final versions of the articles to be published in the ACM Digital Library. These page layout problems are now solved, and this issue contains four technical articles and three editorials. The first technical article, Recursive SDN for Carrier Networks, proposes a recursive routing computation framework that could be applied to Software Defined Networks. The three other technical articles were selected as the best papers presented at three SIGCOMM’16 workshops. The first one, Measuring the Quality of Experience of Web users, was presented at the Internet-QoE ’16 workshop; it proposes and analyses tools to assess the Quality of Experience of web services. The second, Latency Measurement as a Virtualized Network Function using Metherxis, was presented at the LANCOMM’16 workshop; it proposes and evaluates a system to conduct latency measurements. The third, CliMB: Enabling Network Function Composition with Click Middleboxes, was presented at the HotMiddlebox’16 workshop; it proposes a modular implementation of TCP for Click.

This issue contains three editorial contributions. Two of these editorials are reports from workshops related to Internet measurements. The first one, The 8th Workshop on Active Internet Measurements (AIMS-8) Report, summarises the results of the Active Internet Measurements workshop that was held in February 2016. The second one, Report from the 6th PhD School on Traffic Monitoring and Analysis (TMA), summarises a doctoral school that was held in April 2016. Finally, “Resource Pooling” for Wireless Networks: Solutions for the Developing World is a position paper that argues for the utilisation of the resource pooling principle when designing network solutions in developing countries.

Before jumping to those papers, I’d like to point out several important changes to the Computer Communication Review submission process. These changes were already announced during the community feedback session at SIGCOMM’16. CCR continues to accept both editorial submissions and technical papers. The main change affects the technical papers. There have been many discussions in our community and elsewhere on the reproducibility of research results. SIGCOMM has already taken some actions to encourage the reproducibility of research results; a well-known example is the best dataset award at the Internet Measurement Conference. CCR will go one step further by accepting technical papers that are longer than six pages, provided that these papers are replicable. Following the recently adopted ACM Policy on Result and Artifact Review and Badging, CCR will consider a paper replicable if other researchers can obtain results similar to the authors’ by using the artifacts (software, dataset, …) used by the original authors. This implies that the authors of long papers will have to release the artifacts required to replicate most of the results of their papers. For these replicable papers there is no a priori page limit, but the acceptance bar will grow with the paper length. Replicable papers will be reviewed in two phases. The first phase will consider the technical merits of the paper, without analysing the provided artifacts. If the outcome of this first phase is positive, the artifacts will be evaluated and the paper will be tagged with the badges defined by the ACM Policy on Result and Artifact Review and Badging. The public review for replicable papers will contain a summary of the technical reviews and a summary of the evaluation of the artifacts.

In the long term, replicable papers are important for our community because the availability of the artifacts will encourage other researchers to expand and improve the work. The first to benefit from replicable papers will be our readers, who will have access to more information about each published paper. Authors will probably complain a bit, because it takes more time to write a paper together with all the associated artifacts than the standalone text. On the other hand, authors can expect more impact from their replicable papers, since it will be easier for other researchers to use, expand, and cite their work. Reviewers could also complain, because assessing paper artifacts requires more time than assessing a written paper. It is clear that assessing research artifacts will be easier for papers that describe simulation results than for papers describing a system that combines hardware and software prototypes or that has already been deployed at a large scale. As a community, we will need to work together to define criteria to correctly assess paper artifacts in our different subdomains. If you’d like to participate in the evaluation of paper artifacts, please contact me by email at ccr-editor@sigcomm.org
Olivier Bonaventure
CCR Editor
