Author Archives: Olivier Bonaventure

Community feedback

Like most scientific journals, Computer Communication Review (CCR) publishes peer-reviewed papers. Reviewing papers takes time, and the papers that appear in CCR have typically been submitted four months before their online publication. Once a paper is submitted, it is reviewed by a few experts in the field who assess its technical merits. If the paper supplies artefacts (software, datasets, …), additional reviewers also evaluate those artefacts. These reviews and the associated discussions take time and often allow the authors to significantly improve the quality of their papers.

In parallel with their submission to CCR, some authors also distribute their paper to colleagues or post it on online repositories. To provide more feedback to such authors, we are starting a new experiment in CCR. Authors who submit papers that contain artefacts (software, datasets, …) can now opt for community feedback. In this case, the paper is quickly checked by the editor and, if suitable, is posted on ccronline.sigcomm.org during the review process, together with links to the additional material. We start this experiment with the following paper, which is currently under review:

We hope that this new service will be useful for the community, and we encourage you to provide feedback to the authors through website comments. Feel free to also contact the editor by email if you have any suggestions or comments on this new service.

[Community Feedback] Inside the Walled Garden: Deconstructing Facebook’s Free Basics Program

This paper has been submitted to CCR. This is a draft version of the paper that has not been peer-reviewed. Comments on the paper or the supplementary material are encouraged through the comment facility at the bottom of this page.
R. Sen, S. Ahmad, A. Phokeer, Z. Farooq, I. Qazi, D. Choffnes, K. Gummadi
Abstract

Free Basics is a Facebook initiative to provide zero-rated web services in developing countries. The program has grown rapidly to 60+ countries in the past two years. But it has also seen strong opposition from Internet activists and has been banned in some countries like India. Facebook highlights the societal benefits of providing low-income populations with free Internet access, while detractors point to concerns about privacy and network neutrality. In this paper, we provide the first independent analysis of such claims regarding the Free Basics service, using both the perspective of a Free Basics service provider and of web clients visiting the service via cellular phones providing access to Free Basics in Pakistan and South Africa. Specifically, with control of both endpoints, we not only provide a more detailed view of how the Free Basics service is architected, but also can isolate the likely causes of network performance impairments. Our analysis reveals that Free Basics services experience 4 to 12 times worse network performance than their paid counterparts. We isolate the root causes using factors such as network path inflation and throttling policies by Facebook and telecom service providers. The Free Basics service and its restrictions are designed with assumptions about users’ device capabilities (e.g., lack of JavaScript support). To evaluate such assumptions, we infer the types of mobile devices that generated 47K unique visitors to our Free Basics services between Sep 2016 and Jan 2017. We find that there are large numbers of requests from constrained WAP browsers, but also large fractions of high-capability mobile phones that send Free Basics requests. We discuss the implications of our observations, with the hope to aid more informed debates on such telecom policies.

 

Draft article

Supplementary material

The April 2017 Issue

This new issue of Computer Communication Review (CCR) again contains a mix of technical papers and editorials. In the first paper, Principles for Measurability in Protocol Design, Mark Allman and his colleagues argue for the importance of considering measurement requirements in the design of protocols. Protocols are designed to solve operational problems, and the operators who deploy or support those protocols often use measurements to monitor or tune their operation. This paper proposes six principles that protocol designers should apply when developing a new protocol or architecture. The vision differs from earlier work, which has often relied on specialised out-of-band measurement protocols. Based on a few primitives, the authors propose a high-level design for an In-Protocol Internet Measurement Facility. The solution is discussed with several examples and use cases. The next step will be to completely specify, implement and deploy such a protocol.

Our second technical paper, A Techno-Economic Approach for Broadband Deployment in Underserved Areas, was selected as the best paper of the SIGCOMM’16 GAIA workshop, but could not be included in the previous issue of CCR. In this paper, Ramakrishnan Durairajan and Paul Barford describe a framework that allows governments or companies to identify the best locations to deploy network infrastructure based on factors that include cost and user demographics.

Our first editorial, Learning Networking by Reproducing Research Results, co-authored by Lisa Yan and Nick McKeown, describes how students at Stanford University have reproduced experimental research results from over 40 different networking papers during the last five years. Since 2012, students taking this advanced graduate networking course have worked in pairs to select a scientific paper and reproduce one of its results. This project lasts three weeks. It appears to work really well and provides several benefits. Firstly, the students gain more in-depth knowledge of the chosen paper than they would by simply presenting it. Secondly, the students can interact with the researchers who wrote the paper if they have specific questions about one particular experiment. Thirdly, the students learn the importance of performing reproducible experiments before starting their own research. This looks like an excellent way to educate the next generation of networking researchers, and I would strongly encourage you to consider this model if you teach an advanced graduate networking class or seminar.

In our second editorial, Summary of the Works-in-Progress Session at IMC’16, Dave Choffnes reports on the works-in-progress session that was organised during IMC’16. This session was intended as a forum for the exchange of ideas within the IMC community. Dave first describes how the session was organised and then briefly summarises the tools and research that participants shared during it.

Finally, in our third editorial, 2016 International Teletraffic Congress (ITC 28) Report, Tobias Hossfeld provides a detailed summary of the International Teletraffic Congress that was held in September 2016 in Würzburg, Germany.

Olivier Bonaventure

CCR Editor

2016 International Teletraffic Congress (ITC 28) Report

Tobias Hossfeld
Abstract

The 28th International Teletraffic Congress (ITC 28) was held on 12–16 September 2016 at the University of Würzburg, Germany. The conference was technically co-sponsored by the IEEE Communications Society and the Information Technology Society within VDE, in cooperation with ACM SIGCOMM. ITC 28 provided a forum for leading researchers from academia and industry to present and discuss the latest advances and developments in the design, modelling, measurement, and performance evaluation of communication systems, networks, and services. The main theme of ITC 28, Digital Connected World, reflects the evolution of communications and networking, which is continually changing the world we live in. The technical program was composed of 37 contributed full papers, 6 short demo papers, and three keynote addresses. Three workshops dedicated to timely topics were sponsored: Programmability for Cloud Networks and Applications, Quality of Experience Centric Management, and Quality Engineering for a Reliable Internet of Services.

See ITC 28 Homepage: https://itc28.org/

Download the full article DOI: 10.1145/3089262.3089268

Learning Networking by Reproducing Research Results

Lisa Yan, Nick McKeown
Abstract

In the past five years, the graduate networking course at Stanford has assigned over 200 students the task of reproducing results from over 40 networking papers. We began the project as a means of teaching both engineering rigor and critical thinking, qualities that are necessary for careers in networking research and industry. We have observed that reproducing research can simultaneously be a tool for education and a means for students to contribute to the networking community. Through this editorial we describe our project in reproducing network research and show through anecdotal evidence that this project is important for both the classroom and the networking community at large, and we hope to encourage other institutions to host similar class projects.
Download the full article DOI: 10.1145/3089262.3089266

A Techno-Economic Approach for Broadband Deployment in Underserved Areas

Ramakrishnan Durairajan, Paul Barford
Abstract

A large body of economic research has shown the strong correlation between broadband connectivity and economic productivity. These findings motivate government agencies such as the FCC in the US to provide incentives to service providers to deploy broadband infrastructure in unserved or underserved areas. In this paper, we describe a framework for identifying target areas for network infrastructure deployment. Our approach considers (i) infrastructure availability, (ii) user demographics, and (iii) deployment costs. We use multi-objective optimization to identify geographic areas that have the highest concentrations of un/underserved users and that can be upgraded at the lowest cost. To demonstrate the efficacy of our framework, we consider physical infrastructure and demographic data from the US and two different deployment cost models. Our results identify a list of counties that would be attractive targets for broadband deployment from both cost and impact perspectives. We conclude with discussion on the implications and broader applications of our framework.
Download the full article DOI: 10.1145/3089262.3089265

Principles for Measurability in Protocol Design

Mark Allman, Robert Beverly, Brian Trammell
Abstract

Measurement has become fundamental to the operation of networks and at-scale services—whether for management, security, diagnostics, optimization, or simply enhancing our collective understanding of the Internet as a complex system. Further, measurements are useful across points of view—from end hosts to enterprise networks and data centers to the wide area Internet. We observe that many measurements are decoupled from the protocols and applications they are designed to illuminate. Worse, current measurement practice often involves the exploitation of side-effects and unintended features of the network; or, in other words, the artful piling of hacks atop one another. This state of affairs is a direct result of the relative paucity of diagnostic and measurement capabilities built into today’s network stack.

Given our modern dependence on ubiquitous measurement, we propose measurability as an explicit low-level goal of current protocol design, and argue that measurements should be available to all network protocols throughout the stack. We seek to generalize the idea of measurement within protocols, e.g., the way in which TCP relies on measurement to drive its end-to-end behavior. Rhetorically, we pose the question: what if the stack had been built with measurability and diagnostic support in mind? We start from a set of principles for explicit measurability, and define primitives that, were they supported by the stack, would not only provide a solid foundation for protocol design going forward, but also reduce the cost and increase the accuracy of measuring the network.
Download the full article DOI: 10.1145/3089262.3089264

Workshop on Tracking Quality of Experience in the Internet: Summary and Outcomes

Fabian E. Bustamante, David Clark, Nick Feamster.
Abstract

This is a report on the Workshop on Tracking Quality of Experience in the Internet, held at Princeton, October 21–22, 2015, jointly sponsored by the National Science Foundation and the Federal Communication Commission. The term Quality of Experience (QoE) describes a user’s subjective assessment of their experience when using a particular application. In the past, network engineers have typically focused on Quality of Service (QoS): performance metrics such as throughput, delay and jitter, packet loss, and the like. Yet, performance as measured by QoS parameters only matters if it affects the experience of users as they attempt to use a particular application. Ultimately, the user’s experience is determined by QoE impairments (e.g., rebuffering). Although QoE and QoS are related—for example, a video rebuffering event may be caused by a high packet-loss rate—it is QoE metrics that ultimately capture the user’s experience.

Identifying the causes of QoE impairments is complex, since the impairments may arise in one region of the network or another, in the home network, on the user’s device, in servers that are part of the application, or in supporting services such as the DNS. Additionally, metrics for QoE continue to evolve, as do the methods for relating QoE impairments to underlying causes that could be measurable using standard network measurement techniques. Finally, as the capabilities of the underlying network infrastructure continue to evolve, researchers should also consider how infrastructure and tools can best be designed to support measurements that can better identify the locations and causes of QoE impairments.

The workshop’s aim was to understand the current state of QoE research and to contemplate a community agenda to integrate ongoing threads of QoE research into a collaboration. This summary report describes the topics discussed and summarizes the key points of the discussion. Materials related to the workshop are available at http://aqualab.cs.northwestern.edu/NSFWorkshop-InternetQoE
Download the full article DOI: 10.1145/3041027.3041035

Can We Make a Cake and Eat it Too? A Discussion of ICN Security and Privacy

Edith Ngai, Börje Ohlman, Gene Tsudik, Ersin Uzun, Matthias Wählisch, Christopher A. Wood.
Abstract

In recent years, Information-centric Networking (ICN) has received much attention from both academic and industry participants. ICN offers data-centric inter-networking that is radically different from today’s host-based IP networks. Security and privacy features on today’s Internet were originally not present and have been incrementally retrofitted over the last 35 years. As such, these issues have become increasingly important as ICN technology gradually matures towards real-world deployment. Thus, while ICN-based architectures (e.g., NDN, CCNx, etc.) are still evolving, it is both timely and important to explore ICN security and privacy issues as well as devise and assess possible mitigation techniques.

This report documents the highlights and outcomes of the Dagstuhl Seminar 16251 on “Information-centric Networking and Security,” whose goal was to bring together researchers to discuss and address security and privacy issues particular to ICN-based architectures. At the end of the three-day workshop, the outlook for ICN remained unclear. Many unsolved and ill-addressed problems remain, such as namespace and identity management, object security and forward secrecy, and privacy. Regardless of the fate of ICN, one thing is certain: much more research and practical experience with these systems is needed to make progress towards solving these arduous problems.
Download the full article DOI: 10.1145/3041027.3041034