The April 2017 Issue

This new issue of Computer Communication Review (CCR) again contains a mix of technical papers and editorials. In the first paper, Principles for Measurability in Protocol Design, Mark Allman and his colleagues argue for the importance of considering measurement requirements when designing protocols. Protocols are designed to solve operational problems, and the operators who deploy or support them often rely on measurements to monitor and tune their operation. This paper proposes six principles that protocol designers should follow when developing a new protocol or architecture. The vision differs from earlier work, which has often relied on specialised out-of-band measurement protocols. Based on a few primitives, the authors propose a high-level design for an In-Protocol Internet Measurement Facility. The proposal is illustrated with several examples and use cases. The next step will be to completely specify, implement and deploy such a protocol.

Our second technical paper, A Techno-Economic Approach for Broadband Deployment in Underserved Areas, was selected as the best paper of the SIGCOMM'16 GAIA workshop, but could not be included in the previous issue of CCR. In this paper, Ramakrishnan Durairajan and Paul Barford describe a framework that allows governments or companies to identify the best locations to deploy network infrastructure based on factors that include cost and user demographics.

Our first editorial, Learning Networking by Reproducing Research Results, co-authored by Lisa Yan and Nick McKeown, describes how students at Stanford University have reproduced experimental research results from over 40 different networking papers during the last five years. Since 2012, students taking this advanced graduate networking course have worked in pairs to select a scientific paper and try to reproduce one of its results. The project lasts three weeks. It appears to work really well and provides several benefits. Firstly, the students gain a more in-depth knowledge of the chosen paper than they would by simply presenting it. Secondly, the students can interact with the researchers who wrote the paper if they have specific questions about a particular experiment. Thirdly, the students learn the importance of performing reproducible experiments before starting their own research. This looks like an excellent way to educate the next generation of networking researchers, and I would strongly encourage you to consider this model if you teach an advanced graduate networking class or seminar.

In our second editorial, Summary of the Works-in-Progress Session at IMC'16, Dave Choffnes reports on the works-in-progress session that was organised during IMC'16. This session was intended as a forum for the exchange of ideas within the IMC community. Dave first describes how the session was organised and then briefly summarises the tools and research that participants shared during the session.

Finally, in our third editorial, 2016 International Teletraffic Congress (ITC 28) Report, Tobias Hossfeld provides a detailed summary of the International Teletraffic Congress that was held in September 2016 in Würzburg, Germany.

Olivier Bonaventure

CCR Editor

2016 International Teletraffic Congress (ITC 28) Report

Tobias Hossfeld
Abstract

The 28th International Teletraffic Congress (ITC 28) was held on 12–16 September 2016 at the University of Würzburg, Germany. The conference was technically co-sponsored by the IEEE Communications Society and the Information Technology Society within VDE, and held in cooperation with ACM SIGCOMM. ITC 28 provided a forum for leading researchers from academia and industry to present and discuss the latest advances and developments in the design, modelling, measurement, and performance evaluation of communication systems, networks, and services. The main theme of ITC 28, Digital Connected World, reflects the evolution of communications and networking, which is continually changing the world we live in. The technical program was composed of 37 contributed full papers, 6 short demo papers, and 3 keynote addresses. Three workshops dedicated to timely topics were co-located with the congress: Programmability for Cloud Networks and Applications, Quality of Experience Centric Management, and Quality Engineering for a Reliable Internet of Services.

See ITC 28 Homepage: https://itc28.org/

Download the full article DOI: 10.1145/3089262.3089268

Learning Networking by Reproducing Research Results

Lisa Yan, Nick McKeown
Abstract

In the past five years, the graduate networking course at Stanford has assigned over 200 students the task of reproducing results from over 40 networking papers. We began the project as a means of teaching both engineering rigor and critical thinking, qualities that are necessary for careers in networking research and industry. We have observed that reproducing research can simultaneously be a tool for education and a means for students to contribute to the networking community. Through this editorial we describe our project in reproducing network research and show through anecdotal evidence that this project is important for both the classroom and the networking community at large, and we hope to encourage other institutions to host similar class projects.
Download the full article DOI: 10.1145/3089262.3089266

A Techno-Economic Approach for Broadband Deployment in Underserved Areas

Ramakrishnan Durairajan, Paul Barford
Abstract

A large body of economic research has shown the strong correlation between broadband connectivity and economic productivity. These findings motivate government agencies such as the FCC in the US to provide incentives to service providers to deploy broadband infrastructure in unserved or underserved areas. In this paper, we describe a framework for identifying target areas for network infrastructure deployment. Our approach considers (i) infrastructure availability, (ii) user demographics, and (iii) deployment costs. We use multi-objective optimization to identify geographic areas that have the highest concentrations of un/underserved users and that can be upgraded at the lowest cost. To demonstrate the efficacy of our framework, we consider physical infrastructure and demographic data from the US and two different deployment cost models. Our results identify a list of counties that would be attractive targets for broadband deployment from both cost and impact perspectives. We conclude with a discussion of the implications and broader applications of our framework.
Download the full article DOI: 10.1145/3089262.3089265
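
To give a flavour of the kind of trade-off the abstract describes, the sketch below is a toy, hypothetical illustration of ranking candidate areas by balancing impact (underserved households) against deployment cost. It is not the authors' framework: the county names, the per-mile cost model, and the weighted-sum scalarisation are all invented for this example; the paper's actual multi-objective formulation and data sources are described in the full article.

```python
# Toy illustration (not the paper's framework): rank hypothetical counties by
# combining an impact proxy (underserved households) with an assumed
# deployment-cost model, using a simple weighted-sum of the two objectives.

from dataclasses import dataclass


@dataclass
class County:
    name: str
    underserved_households: int   # impact proxy (demographics)
    fiber_miles_needed: float     # infrastructure gap
    cost_per_mile: float          # assumed deployment cost model

    @property
    def deployment_cost(self) -> float:
        return self.fiber_miles_needed * self.cost_per_mile


def rank_counties(counties, impact_weight=0.5, cost_weight=0.5):
    """Score = weighted normalised impact minus weighted normalised cost."""
    max_impact = max(c.underserved_households for c in counties) or 1
    max_cost = max(c.deployment_cost for c in counties) or 1

    def score(c: County) -> float:
        return (impact_weight * c.underserved_households / max_impact
                - cost_weight * c.deployment_cost / max_cost)

    return sorted(counties, key=score, reverse=True)


if __name__ == "__main__":
    counties = [  # entirely made-up example data
        County("Alpha", underserved_households=12000, fiber_miles_needed=80, cost_per_mile=25000),
        County("Beta", underserved_households=4000, fiber_miles_needed=15, cost_per_mile=25000),
        County("Gamma", underserved_households=9000, fiber_miles_needed=200, cost_per_mile=30000),
    ]
    for c in rank_counties(counties):
        print(f"{c.name}: {c.underserved_households} underserved, ${c.deployment_cost:,.0f} cost")
```

Varying the two weights traces out different points of the cost/impact trade-off, which is the intuition behind treating site selection as a multi-objective problem rather than ranking by a single metric.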

Principles for Measurability in Protocol Design

Mark Allman, Robert Beverly, Brian Trammell
Abstract

Measurement has become fundamental to the operation of networks and at-scale services—whether for management, security, diagnostics, optimization, or simply enhancing our collective understanding of the Internet as a complex system. Further, measurements are useful across points of view—from end hosts to enterprise networks and data centers to the wide area Internet. We observe that many measurements are decoupled from the protocols and applications they are designed to illuminate. Worse, current measurement practice often involves the exploitation of side-effects and unintended features of the network; or, in other words, the artful piling of hacks atop one another. This state of affairs is a direct result of the relative paucity of diagnostic and measurement capabilities built into today’s network stack.

Given our modern dependence on ubiquitous measurement, we propose measurability as an explicit low-level goal of current protocol design, and argue that measurements should be available to all network protocols throughout the stack. We seek to generalize the idea of measurement within protocols, e.g., the way in which TCP relies on measurement to drive its end-to-end behavior. Rhetorically, we pose the question: what if the stack had been built with measurability and diagnostic support in mind? We start from a set of principles for explicit measurability, and define primitives that, were they supported by the stack, would not only provide a solid foundation for protocol design going forward, but also reduce the cost and increase the accuracy of measuring the network.
Download the full article DOI: 10.1145/3089262.3089264
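
The abstract's reference point is TCP, whose retransmission and congestion behaviour is driven by measurements the protocol takes itself. As a small illustrative sketch of that style of in-protocol measurement (not one of the primitives the authors define), the following implements TCP's smoothed RTT and retransmission-timeout estimator with the constants recommended by RFC 6298.

```python
# Sketch of in-protocol measurement in the TCP style: an EWMA-smoothed RTT
# estimator whose output (the RTO) directly drives protocol behaviour.
# Constants and update rules follow RFC 6298.

class RttEstimator:
    ALPHA = 1 / 8   # gain for the smoothed RTT
    BETA = 1 / 4    # gain for the RTT variation
    K = 4           # variance multiplier used in the RTO

    def __init__(self):
        self.srtt = None     # smoothed round-trip time (seconds)
        self.rttvar = None   # round-trip time variation (seconds)

    def on_rtt_sample(self, rtt: float) -> float:
        """Fold a new RTT measurement into the estimator and return the RTO."""
        if self.srtt is None:
            # First sample: SRTT = R, RTTVAR = R/2 (RFC 6298, section 2).
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            # Update RTTVAR with the old SRTT, then update SRTT.
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        # RFC 6298 recommends a minimum RTO of 1 second.
        return max(1.0, self.srtt + self.K * self.rttvar)


if __name__ == "__main__":
    est = RttEstimator()
    for sample in (0.120, 0.135, 0.110, 0.400):   # illustrative RTT samples in seconds
        print(f"sample={sample:.3f}s  rto={est.on_rtt_sample(sample):.3f}s")
```

The paper's argument is that this kind of measurement support should not remain a TCP-specific internal detail, but should be available as explicit, reusable primitives throughout the protocol stack.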