
The October 2016 issue

This issue of Computer Communication Review comes a bit late due to delays in preparing the final versions of the articles to be published in the ACM Digital Library. These page layout problems are now solved, and this issue contains four technical articles and three editorials. The first technical article, Recursive SDN for Carrier Networks, proposes a recursive routing computation framework that could be applied to Software Defined Networks. The three other technical articles were selected as the best papers presented at three SIGCOMM’16 workshops. The first one, Measuring the Quality of Experience of Web users, was presented at the Internet-QoE ’16 workshop. It proposes and analyses tools to assess the Quality of Experience of web services. The second article, Latency Measurement as a Virtualized Network Function using Metherxis, was presented at the LANCOMM’16 workshop. It proposes and evaluates a system to conduct latency measurements. The third article, CliMB: Enabling Network Function Composition with Click Middleboxes, was presented at the HotMiddlebox’16 workshop. It proposes a modular implementation of TCP for Click.

This issue contains three editorial contributions. Two of these editorials are reports from workshops related to Internet measurements. The first one, The 8th Workshop on Active Internet Measurements (AIMS-8) Report, summarises the results of the Active Internet Measurements workshop that was held in February 2016. The second one, Report from the 6th PhD School on Traffic Monitoring and Analysis (TMA), summarises a doctoral school that was held in April 2016. Finally, “Resource Pooling” for Wireless Networks: Solutions for the Developing World is a position paper that argues for the utilisation of the resource pooling principle when designing network solutions in developing countries.

Before jumping to those papers, I’d like to point out several important changes to the Computer Communication Review submission process. These changes were already announced during the community feedback session at SIGCOMM’16. CCR continues to accept both editorial submissions and technical papers. The main change affects the technical papers. There have been many discussions in our community and elsewhere on the reproducibility of research results. SIGCOMM has already taken some actions to encourage the reproducibility of research results. A well-known example is the best dataset award at the Internet Measurements Conference. CCR will go one step further by accepting technical papers that are longer than six pages provided that these papers are replicable. According to the recently accepted ACM Policy on Result and Artifact Review and Badging, CCR will consider a paper as replicable if other researchers can obtain results similar to the authors’ by using the artifacts (software, dataset, …) used by the original authors of the paper. This implies that the authors of long papers will have to release the artifacts that are required to replicate most of the results of the paper. For these replicable papers, there is no a priori page limit, but the acceptance bar will grow with the paper length. Those replicable papers will be reviewed in two phases. The first phase will consider the technical merits of the paper, without analysing the provided artifacts. If the outcome of this first phase is positive, then the artifacts will be evaluated and papers will be tagged with the badges defined by the ACM Policy on Result and Artifact Review and Badging. The public review for the replicable papers will contain a summary of the technical reviews and a summary of the evaluation of the artifacts.

In the long term, replicable papers are important for our community because the availability of the artifacts will encourage other researchers to expand and improve the work. The first to benefit from replicable papers will be our readers, who will have access to more information about each published paper. However, authors will probably complain a bit because it takes more time to write a paper that contains all the associated artifacts than the standalone text. On the other hand, authors can expect more impact from their replicable papers since it will be easier for other researchers to use, expand and cite their work. Reviewers could also complain because assessing paper artifacts requires more time than assessing a written paper. It is clear that assessing research artifacts will be easier for papers that describe simulation results than for papers describing a system that combines hardware and software prototypes or has already been deployed at a large scale. As a community, we will need to work together to define criteria to correctly assess paper artifacts in our different subdomains. If you’d like to participate in the evaluation of paper artifacts, please contact me by email at ccr-editor@sigcomm.org
Olivier Bonaventure
CCR Editor

October 2016: Table of contents

Technical papers

Editorials

Report from the 6th PhD School on Traffic Monitoring and Analysis (TMA)

Idilio Drago, Fabio Ricciato, Ramin Sadre
Abstract

This is a summary report by the organizers of the 6th TMA PhD school, held in Louvain-la-Neuve on 5-6 April 2016. The insights and feedback received about the event may prove useful for the organization of future editions and similar events targeting students and young researchers.

Download the full article

CliMB: Enabling Network Function Composition with Click Middleboxes

Rafael Laufer, Massimo Gallo, Diego Perino, Anandatirtha Nandugudi.
Abstract

Click has significant advantages for middlebox development, including modularity, extensibility, and reprogrammability. Despite these features, Click still has no native TCP support and only uses nonblocking I/O, preventing its applicability to middleboxes that require access to application data and blocking I/O. In this paper, we attempt to bridge this gap by introducing Click middleboxes (CliMB). CliMB provides a full-fledged modular TCP layer supporting TCP options, congestion control, both blocking and nonblocking I/O, as well as socket and zero-copy APIs to applications. As a result, any TCP network function may now be realized in Click using a modular L2-L7 design. As proof of concept, we develop a zero-copy SOCKS proxy using CliMB that shows up to 4x gains compared to an equivalent implementation using the Linux in-kernel network stack.

Download the full article

Latency Measurement as a Virtualized Network Function using Metherxis

Diego Rossi Mafioletti, Alextian Bartholomeu Liberatto, Rodolfo da Silva Villaça, Cristina Klippel Dominicini, Magnos Martinello, Moises Renato Nunes Ribeiro.
Abstract

Network latency is critical to the success of many high-speed, low-latency applications. RFC 2544 discusses and defines a set of tests that can be used to describe the performance characteristics of a network device. However, most of the available measurement tools cannot perform all the tests as described in this standard. As a novel approach, this paper proposes Metherxis, a system that can be implemented on general purpose hardware and enables Virtualized Network Functions (VNFs) to measure network device latency with microsecond-grade accuracy. Results show that Metherxis achieves highly accurate latency measurements when compared to OFLOPS, a well known measurement tool.
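The core of an RFC 2544-style latency test is timestamping a probe when it is transmitted and again when it returns. The sketch below is a minimal software illustration of that idea, not Metherxis itself; a userspace loop like this cannot approach the microsecond-grade accuracy that hardware-assisted timestamping targets, and the socket pair stands in for a device under test that echoes probes back:

```python
import socket
import statistics
import time

def rtt_samples(sock_a, sock_b, probes=100, payload=b"x" * 64):
    """Measure round-trip latency over a connected socket pair: time how
    long a probe takes to travel a->b and be echoed b->a. Results are
    returned in microseconds."""
    samples = []
    for _ in range(probes):
        t0 = time.perf_counter_ns()
        sock_a.sendall(payload)
        echoed = sock_b.recv(len(payload))   # the "device under test"
        sock_b.sendall(echoed)               # echoes the probe back
        sock_a.recv(len(payload))
        samples.append((time.perf_counter_ns() - t0) / 1e3)
    return samples

a, b = socket.socketpair()
lat = rtt_samples(a, b)
print(f"median RTT: {statistics.median(lat):.1f} us over {len(lat)} probes")
```

Real RFC 2544 latency tests additionally require measurements at the throughput rate of the device and repeated trials; this loop only conveys the timestamp-and-echo structure.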

Download the full article

Measuring the Quality of Experience of Web users

Enrico Bocchi, Luca De Cicco, Dario Rossi.
Abstract

Measuring the quality of Web users’ experience (WebQoE) faces the following trade-off. On the one hand, current practice is to resort to metrics, such as the document completion time (onLoad), that are simple to measure though knowingly inaccurate. On the other hand, there are metrics, like Google’s SpeedIndex, that are better correlated with the actual user experience, but are quite complex to evaluate and, as such, relegated to lab experiments. In this paper, we first provide a comprehensive state of the art on the metrics and tools available for WebQoE assessment. We then apply these metrics to a representative dataset (the Alexa top-100 webpages) to better illustrate their similarities, differences, advantages, and limitations. We next introduce novel metrics, inspired by Google’s SpeedIndex, that offer significant advantage in terms of computational complexity, while maintaining a high correlation with the SpeedIndex. These properties make our proposed metrics highly relevant and of practical use.
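For readers unfamiliar with it, Google’s SpeedIndex is the integral over time of the fraction of the page that is still visually incomplete: lower is better, and a page that paints most of its content early scores well even if onLoad fires late. A minimal sketch of the computation, assuming the visual-completeness samples come from filmstrip captures as in the real tool:

```python
def speed_index(samples):
    """Approximate SpeedIndex from time-ordered (time_ms, completeness)
    samples, where completeness is the fraction of the final page already
    rendered, in [0, 1].  SpeedIndex = integral of (1 - completeness)."""
    si = 0.0
    for (t0, c0), (t1, _) in zip(samples, samples[1:]):
        si += (t1 - t0) * (1.0 - c0)   # rectangle rule between samples
    return si

# A page that paints half its content at 100 ms and finishes at 400 ms:
print(speed_index([(0, 0.0), (100, 0.5), (400, 1.0)]))  # → 250.0
```

The complexity the abstract alludes to lies not in this integral but in obtaining the completeness samples, which requires capturing and comparing video frames of the rendering page.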

Download the full article

“Resource Pooling” for Wireless Networks: Solutions for the Developing World

Junaid Qadir, Arjuna Sathiaseelan, Liang Wang.
Abstract

We live in a world in which there is a great disparity between the lives of the rich and the poor. Technology offers great promise in bridging this gap. In particular, wireless technology unfetters developing communities from the constraints of infrastructure, providing a great opportunity to leapfrog years of neglect and technological waywardness. In this paper, we highlight the role of resource pooling for wireless networks in the developing world. Resource pooling involves (i) abstracting a collection of networked resources to behave like a single unified resource pool and (ii) developing mechanisms for shifting load between the various parts of the unified resource pool. The popularity of resource pooling stems from its ability to provide resilience, high utilization, and flexibility at an acceptable cost. We show that “resource pooling”, which is very popular in its various manifestations, is the key unifying principle underlying a diverse number of successful wireless technologies (such as white space networking, community networks, etc.). We discuss various applications of resource pooled wireless technologies and provide a discussion on open issues.
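The benefit of principle (i) can be illustrated with a toy blocking calculation, assuming uniform link capacities and fluid traffic: a burst that would be dropped on an individual overloaded link is absorbed when the links behave as a single pooled resource, because only the aggregate load matters:

```python
def blocked_without_pooling(loads, caps):
    """Traffic dropped when each link carries only its own load."""
    return sum(max(0, load - cap) for load, cap in zip(loads, caps))

def blocked_with_pooling(loads, caps):
    """Traffic dropped when the links act as one unified resource pool."""
    return max(0, sum(loads) - sum(caps))

# Three links of capacity 4; one link sees a burst of 8.
loads, caps = [8, 2, 1], [4, 4, 4]
print(blocked_without_pooling(loads, caps))  # → 4 (the burst overflows)
print(blocked_with_pooling(loads, caps))     # → 0 (the pool absorbs it)
```

This is the same statistical-multiplexing argument that motivates pooling spectrum, backhaul, or community infrastructure in the scenarios the paper discusses.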

Download the full article

The 8th Workshop on Active Internet Measurements (AIMS-8) Report

Abstract

On 10-12 February 2016, CAIDA hosted the eighth Workshop on Active Internet Measurements (AIMS-8) as part of our series of Internet Statistics and Metrics Analysis (ISMA) workshops. This workshop series provides a forum for stakeholders in Internet active measurement projects to communicate their interests and concerns, and explore cooperative approaches to maximizing the collective benefit of deployed infrastructure and gathered measurements. Discussion topics included: infrastructure development status and plans; experimental design, execution, and cross-validation; challenges to incentivize hosting, sharing, and using measurement infrastructure; data access, sharing, and analytics; and challenges of emerging high bandwidth network measurement infrastructure. Other recurrent topics included paths toward increased interoperability and cooperative use of infrastructures, and ethical frameworks to support active Internet measurement. Materials related to the workshop are at http://www.caida.org/workshops/aims/1602/.

Download the full article

Recursive SDN for Carrier Networks

James McCauley, Zhi Liu, Aurojit Panda, Teemu Koponen, Barath Raghavan, Jennifer Rexford, Scott Shenker.
Abstract

Control planes for global carrier networks should be programmable and scalable. Neither traditional control planes nor new SDN-based control planes meet both of these goals. Here we propose a framework for recursive routing computations that combines the best of SDN (programmability through centralized controllers) and traditional networks (scalability through hierarchy) to achieve these two desired properties. Through simulation on graphs of up to 10,000 nodes, we evaluate our design’s ability to support a variety of unicast routing and traffic engineering solutions, while incorporating a fast failure recovery mechanism based on network virtualization.
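The scalability argument for combining central control with hierarchy can be conveyed with a toy two-level computation (an illustrative sketch, not the paper’s actual framework): each child controller solves routing inside its own cluster and exports the cluster to its parent as a single logical node, so the parent’s shortest-path computation runs only on the small abstracted graph:

```python
import heapq

def shortest_paths(graph, src):
    """Dijkstra over an adjacency dict {u: {v: cost}}; returns distances."""
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Child controllers each solve their own cluster...
cluster_a = {"a1": {"a2": 1}, "a2": {"a1": 1}}
cluster_b = {"b1": {"b2": 2}, "b2": {"b1": 2}}
# ...and the parent routes over one logical node per cluster, never
# seeing the clusters' internal topology.
parent = {"A": {"B": 5}, "B": {"A": 5}}

print(shortest_paths(parent, "A"))  # → {'A': 0, 'B': 5}
```

Applied recursively, each controller's problem stays small regardless of total network size, which is the scalability property the paper combines with SDN-style programmability.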

Public review by Joseph Camp

While software-defined networks have received significant attention in recent years, the networks studied are often multiple orders of magnitude smaller than today’s global carrier networks in geographical span and nodal scale. Hence, this paper sets forth a recursive routing computation framework that balances the programmability of SDNs with the scalability of a traditional hierarchical structure. Simulations on graphs of about 10,000 nodes are used to show the viability of such an approach. Remarkably, the authors show that their recovery approach can offer “five 9s” of network repair even under a heavy link failure scenario.

Download the full article
