
July 2016: Table of contents

Technical papers

Editorials

The July 2016 Issue

This issue of Computer Communication Review is again a bit special. The previous issue was the last one published on paper; the issue that you are reading is the first to be printed only on electrons. We hope that moving to an entirely online publication model will allow CCR to better serve the needs of the community. CCR is now available through a dedicated website: https://ccronline.sigcomm.org. We will add new features to the website to encourage interactions among the entire community. Ideas and suggestions on how to improve the website are more than welcome.
This issue contains a wide range of articles. Four peer-reviewed articles have been accepted. In “Controlling Queuing Delays for Real-Time Communication: The Interplay of E2E and AQM Algorithms”, Gaetano Carlucci et al. analyse the performance of the Google Congestion Control algorithm for real-time communication with Active Queue Management (AQM) and scheduling techniques. In “What do Parrots and BGP Routers have in Common?”, David Hauweele et al. use controlled experiments and measurements to determine why BGP routers send duplicate messages. In “TussleOS: Managing Privacy versus Functionality Trade-offs on IoT Devices”, Rayman Preet Singh et al. propose a different model to improve OS-level support for privacy in the Internet of Things. In “InKeV: In-Kernel Distributed Network Virtualization for DCN”, Zaafar Ahmed et al. propose to leverage the extended Berkeley Packet Filter (eBPF), a way to safely introduce new functionality into the Linux kernel.
In addition to the peer-reviewed technical papers, this issue contains a record number of editorial papers. In “Opening Up Attendance at HotNets”, the HotNets Steering Committee reports the results of the more open attendance policy used for HotNets 2015. In “EZ-PC: Program Committee Selection Made Easy”, Vyas Sekar argues that selecting a technical program committee (PC) for a conference or workshop is a complex process that can be improved by using software tools. He proposes EZ-PC, open-source software that formulates this process as a simple constraint satisfaction problem, and reports on his experience with the software. In “New Kid on the Block: Network Functions Virtualization: From Big Boxes to Carrier Clouds”, Leonhard Nobach et al. provide an overview of the current state of the art and open research questions in Network Functions Virtualization (NFV). Then, five editorials summarise the main findings of five recently held workshops: the Roundtable on Real-Time Communications Research: 5G and Real-Time Communications — Topics for Research, the 2015 Workshop on Internet Economics (WIE), the Internet Research Task Force and Internet Society workshop on Research and Applications of Internet Measurements (RAIM), the workshop on Research and Infrastructure Challenges for Applications and Services in the Year 2021, and the 2016 BGP Hackathon. The 2016 BGP Hackathon was a joint effort between researchers who study the Border Gateway Protocol (BGP) and network operators who manage routers that rely on this protocol. During a few days in February 2016, members of these two communities met to develop new software tools together. The outcome of this hackathon is a clear example that excellent results can be achieved by joining forces and working together on a common objective. The organisers of the hackathon applied for community funding from ACM SIGCOMM and used this funding to offer travel grants and increase the participation of researchers in the hackathon. I hope that the SIGCOMM Executive Committee will receive requests to fund similar hackathons in the coming months and years.
Finally, we also have our regular columns. In his student mentoring column, Aditya Akella discusses the differences between journal and conference papers, different types of jobs, and conference talks. In the industrial column, Nandita Dukkipati and her colleagues discuss the deployment of new congestion control schemes in datacenters and on the Internet, based on their experience at a large cloud provider.
Olivier Bonaventure
CCR Editor

Student mentoring column

Dear students:

In this edition of the student mentoring column, we will cover three topics: the importance of journal and workshop papers, questions pertaining to giving and listening to talks, and two specific questions about jobs in networking. We hope you find the answers useful.

I got plenty of help in preparing this edition. In particular, many thanks to Rodrigo Fonseca, Harsha Madhyastha, George Porter, Vyas Sekar, and Minlan Yu.

Journal and Workshop Papers

Q: What are the pros and cons of submitting to a journal vs. a conference in networking?

A: The benefits of conferences are that they are timely (typically half a year or less from submission to publication) and that they have a very wide audience that keeps up with all the research published there. The benefits of a journal are that you aren’t constrained by page limits (although several conferences are also going this route by, e.g., allowing each submitted paper an unlimited appendix), and that your work is evaluated more or less on its own, rather than in a once-yearly batch of hundreds of other papers.

Q: There is a lot of pressure to publish high-profile papers in order to get research positions. How does one balance the trade-off between “quantity” and “depth”?

A: Research-oriented academic job searches don’t look for quantity; that’s a common myth that many graduate students appear to believe in! Academic job searches always look for impact and the potential to be a research superstar. One paper at the best venue in your field that solves a really important problem, or that has had a lot of impact, gets you noticed and could definitely get you an interview. So always shoot for quality and depth in your work.

Q: Are workshop papers helpful in the long term on my CV?

A: This is the wrong question to ask. What’s more important in your CV is the quality of your work overall, rather than individual papers or the number of papers you have published.

Workshop papers could be important toward your research program, though. They provide an opportunity to disseminate early or risky/crazy ideas, and to get feedback from the community. Also, going to workshops is a great way both to see what topics are active in your field and to meet the researchers carrying out that work.

Jobs

Q: How does one apply for and obtain a postdoc position in networking?

A: Typically, most postdocs are hired without a formal process. Your best bet is to identify whom you would like to work with, based on the research area that you hope to pursue during your postdoc. Then, accost your ideal prospective postdoc advisor in the hallway at the next conference. He/she may not know you, but this initial conversation will provide some context for you to follow up by email later, and you will find out whether he/she is interested in hiring a postdoc and has funding to do so. After that, it is just a matter of whether your prospective postdoc advisor is impressed by the research you have done over the course of your PhD. Some advisors may invite you to visit and give a talk before making this decision. If your record is strong, they may invite you mostly to impress upon you why you should do a postdoc with them and not someone else!

Q: Suppose I have entrepreneurial interests, but would rather not be tied to one company’s success. How easy or hard is it for a Ph.D. graduate to get a venture capital position?

A: (with help from Alex Benik from Battery Ventures) First, there is no clear correlation between having a PhD and being a successful venture capitalist; there are examples of very successful VCs with and without PhDs. While there is a set of overlapping skills, one stark difference is that a freshly minted PhD knows as much as, or more than, anyone else about a very specific technical area, whereas a VC usually has a much broader, but necessarily much less deep, view of many areas. Moving from one to the other is very personal, in that some might find it frustrating not to be able to go as deep, while others might find it exciting.

There is no single path that leads to a VC position. VC firms are interesting in that they are “never hiring and always hiring”: they are in the business of tracking interesting people who might become founders or technical advisors, or might join the firm. One potentially good strategy is to get involved in the industry, perhaps by joining a startup for a couple of years. It is really important to get a “sense of product”: what makes a good product is different from what makes a good research project. It is also important to understand how going to market interacts with research and development. Especially for technology-oriented folks, it is easy to underestimate how difficult it is to sell an idea or product, and to convince investors or companies to give you money. Technology for technology’s sake is not sufficient, and people won’t buy solely because of a good architecture, for example.

Finally, there is a lot of variation among different VC firms. Some firms, for example, will recruit people young and use an apprenticeship model, where people go from doing market research, to observing more senior people make investment decisions, until they finally get to make their own investments. Other firms prefer people with entrepreneurial or operational experience.

So, unfortunately, there is no direct answer to your question, but I hope this helps!

Talks: Giving them, and listening to them

Q: How do I listen to a talk effectively?

A: It depends on your goals in listening to the talk. Are you interested in getting better at giving talks, or in learning about the work being presented?

In the first case, some things I personally look for in a good talk are a clear overall structure and a good narrative arc throughout. It is important that the speaker tells a story that is interesting and anticipates the needs of the listener. A good talk, even one that is just 12-20 minutes long, has a beginning, middle, and end, and even when the speaker is explaining the details of their algorithm, you have a clear sense of why they are describing what they are describing.

In the second case, I’d say you mainly want to focus on the first part of the talk to understand the problem they are trying to solve, their understanding of the problem, and their main approach or idea. If all of those seem of interest, then you can dive into their paper to get all the details.

Q: Is there an “algorithm” for giving a good talk at a conference?

A: Yes: watch as many conference talks as you can. This could include going to conferences, but many conferences now live-stream their talks, so you have a wide set of potential talks to watch. You can see which aspects work and which don’t. You will also see which elements go into a talk on a measurement study, vs. a talk on a big systems-building effort, vs. a talk on a more theoretical result. Finally, when you create a talk, show it to as many people as possible, and listen to their feedback! Don’t hesitate to go through multiple rounds of practice and feedback, and be aware that you may have to rewrite your talk entirely to improve how you deliver your message.

Aditya Akella, UW-Madison

Controlling Queuing Delays for Real-Time Communication: The Interplay of E2E and AQM Algorithms

Gaetano Carlucci, Luca De Cicco, Saverio Mascolo.
Abstract

Real-time media communication requires not only congestion control, but also minimization of queuing delays to provide interactivity. In this work we consider the case of real-time communication between web browsers (WebRTC) and we focus on the interplay of an end-to-end delay-based congestion control algorithm, i.e., the Google congestion control (GCC), with two delay-based AQM algorithms, namely CoDel and PIE, and two flow-queuing schedulers, namely SFQ and FQ-CoDel. Experimental investigations show that, when only GCC flows are considered, the end-to-end algorithm is able to contain queuing delays without AQMs. Moreover, the interplay of GCC flows with PIE or CoDel leads to higher packet losses than with a DropTail queue. In the presence of concurrent TCP traffic, PIE and CoDel reduce the queuing delays with respect to DropTail, at the cost of increased packet losses. In this scenario, flow-queuing schedulers offer a better solution.


Public review by Fabian Bustamante

For an increasingly important class of Internet applications, such as videoconferencing and personalized live streaming, high delay, rather than limited bandwidth, is the main obstacle to improved performance. A common problem that impacts this class of applications is “bufferbloat”, where excess buffering in the network causes high latency and jitter. Solutions to the persistently-full-buffer problem, active queue management (AQM) schemes such as the original RED, have been known for two decades. Yet, while RED is simple and effective at reducing persistent queues, it is not widely or consistently configured and enabled in routers, and is sometimes simply unavailable.

Recent focus on bufferbloat has brought a number of new AQM proposals, including PIE and CoDel, which explicitly control the queuing delay and have no knobs for operators, users or implementers to adjust. This paper considers the interplay between some of these AQM protocols and the new end-to-end delay-based congestion control algorithm, Google Congestion Control (GCC), part of the WebRTC framework.
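
To make the AQM control law concrete, here is a minimal sketch of the drop decision that CoDel applies each time a packet is dequeued. It is our illustration of Nichols and Jacobson’s published description of CoDel, not code from the paper, and it omits details such as the MTU check and the hysteresis on the drop count.

    import math

    TARGET = 0.005    # target sojourn time: 5 ms (CoDel default)
    INTERVAL = 0.100  # initial interval: 100 ms (CoDel default)

    class CoDelSketch:
        def __init__(self):
            self.first_above = None  # when we may start dropping
            self.dropping = False    # currently in the dropping state?
            self.drop_next = 0.0     # time of the next scheduled drop
            self.count = 0           # drops in the current dropping state

        def should_drop(self, sojourn, now):
            """Decide at dequeue time whether to drop this packet;
            sojourn is the time the packet spent in the queue."""
            if sojourn < TARGET:
                # Queuing delay is acceptable again: leave the dropping state.
                self.first_above = None
                self.dropping = False
                self.count = 0
                return False
            if self.first_above is None:
                # Delay just exceeded the target: tolerate it for one interval.
                self.first_above = now + INTERVAL
                return False
            if not self.dropping:
                if now < self.first_above:
                    return False
                # Above target for a full interval: enter the dropping state.
                self.dropping = True
                self.count = 1
            elif now < self.drop_next:
                return False
            else:
                self.count += 1
            # Each drop is scheduled sooner than the last (interval/sqrt(count)),
            # gently increasing the drop rate until the delay falls below target.
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True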

Two sets of reviewers agree that, while the topic is well established, there is still significant work to be done, and the authors contribute an incremental yet valuable analysis in the context of real-time communication and the increasingly popular WebRTC. The authors were encouraged to release the software used for conducting their measurements, to let other researchers in the community replicate their results and explore some of the variants and alternative scenarios raised by different reviewers.

Download the full article

What do parrots and BGP routers have in common?

David Hauweele, Bruno Quoitin, Cristel Pelsser, Randy Bush.
Abstract

The Border Gateway Protocol propagates routing information across the Internet in an incremental manner: a router only advertises changes in routing to its peers. However, as early as 1998, observations were made of BGP announcing the same route multiple times, causing higher router CPU load, memory usage, and convergence time than expected.

In this paper, by performing controlled experiments, we pinpoint multiple causes of duplicates, ranging from the lack of full RIB-Outs to the discrete processing of update messages. To mitigate these duplicates, we insert a cache at the output of the routers. We test it on public BGP traces and discuss the relation of the cache’s performance to the existence of bursts of updates in the trace.


Public review by Alberto Dainotti

“What do parrots and BGP routers have in common?” “Nothing, of course,” you might answer the question in this paper’s title, since parrots simply repeat the sounds they hear with no understanding of their meaning, whereas BGP speakers process the messages they receive and, hopefully, understand them before talking. However, a careful check of the literature may (or may not) make you reconsider the question:

  • E. N. Colbert-White, M. A. Covington, and D. M. Fragaszy, “Social Context Influences the Vocalizations of a Home-Raised African Grey Parrot (Psittacus erithacus erithacus)”, Journal of Comparative Psychology, Online First Publication, March 7, 2011. doi:10.1037/a0022097

Moving from the paper title to the content: the authors investigate the problem of redundant BGP update messages (duplicate updates) generated by BGP routers. This phenomenon would normally be prevented by the Adj-RIBs-Out, which “contains the routes for advertisement to specific peers by means of the local speaker’s UPDATE messages” [RFC 4271]. However, the Adj-RIBs-Out is sometimes not fully implemented, or is disabled in order to save memory. Previous studies have shown that duplicates can exceed 80% of updates at busy times (making a BGP speaker look much like a parrot to its peer) and can be detrimental to operations by causing high CPU load.

This study contributes to the problem in two ways: (i) it explains the origin of several types of duplicate occurrences; (ii) it demonstrates that a simple cache, requiring less memory than the Adj-RIBs-Out, can significantly mitigate the problem. Reviewers appreciated the novelty of the contributions, but would have liked to see an exhaustive analysis and characterization of all the common causes of duplicates in real-world traces. This work is only a first step towards fully understanding the dynamics involved in redundant BGP update messages.
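
As a flavour of the proposed mitigation, the following is a minimal sketch of a per-prefix output cache that suppresses duplicate announcements. It illustrates only the principle; the cache evaluated in the paper is more elaborate, and the names below are ours.

    class DuplicateFilter:
        """Remember the last route advertised for each prefix and suppress
        an UPDATE that would re-announce the identical route to the peer."""

        def __init__(self):
            self.last_advertised = {}  # prefix -> path attributes

        def filter(self, prefix, attributes):
            if self.last_advertised.get(prefix) == attributes:
                return None  # identical to the previous announcement: suppress
            self.last_advertised[prefix] = attributes
            return (prefix, attributes)  # route changed: send to the peer

    # The second, identical announcement is suppressed.
    f = DuplicateFilter()
    print(f.filter("192.0.2.0/24", ("AS1 AS2", "10.0.0.1")))  # sent
    print(f.filter("192.0.2.0/24", ("AS1 AS2", "10.0.0.1")))  # None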

Download the full article

TussleOS: Managing Privacy Versus Functionality Trade-Offs on IoT Devices

Rayman Preet Singh, Benjamin Cassell, S. Keshav, Tim Brecht.
Abstract

Networked sensors and actuators are increasingly permeating our computing devices, and provide a variety of functions for Internet of Things (IoT) devices and applications. However, this sensor data can also be used by applications to extract private information about users. Applications and users are thus in a tussle over access to private data. Tussles occur in operating systems when stakeholders with competing interests try to access shared resources such as sensor data, CPU time, or network bandwidth. Unfortunately, existing operating systems lack a principled approach for identifying, tracking, and resolving such tussles. Moreover, users typically have little control over how tussles are resolved. Controls for sensor data tussles, for example, often fail to address trade-offs between functionality and privacy. Therefore, we propose a framework to explicitly recognize and manage tussles. Using sensor data as an example resource, we investigate the design of mechanisms for detecting and resolving privacy tussles in a cyber-physical system, enabling privacy and functionality to be negotiated between users and applications. In doing so, we identify shortcomings of existing research and present directions for future work.


Public review by Dave Choffnes

Ubiquitous Internet connectivity and sensing is quickly becoming reality. Many of us welcome this new world and its myriad applications, ranging from entertainment and communication to health and education. On the other hand, this new functionality comes with an often invisible and thorny cost: the exposure of private information. Historically, operating systems have focused on enabling functionality, with privacy controls being blunt, bolted-on features, if present at all. The Yelp app, for example, will use GPS coordinates to identify local businesses, but there is no easy way for the user to negotiate the use of coarser-grained location data, accepting potentially less customized results in exchange for a smaller privacy cost.
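
As a toy example of the kind of negotiation the paper calls for, an operating system could offer an application a coarsened location instead of an exact fix, trading answer quality for privacy. The sketch below is ours, not an interface from the paper.

    def coarsen_location(lat, lon, grid_deg):
        """Snap a GPS fix to a grid of grid_deg degrees; 0.01 degrees is
        roughly a 1 km cell, enough for neighbourhood-level results."""
        snap = lambda x: round(x / grid_deg) * grid_deg
        return snap(lat), snap(lon)

    # A restaurant-search app would receive roughly (41.88, -87.63)
    # instead of the exact fix (41.8781, -87.6298).
    print(coarsen_location(41.8781, -87.6298, 0.01))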

In this paper, the authors propose using tussles as a way to manage the trade-offs between functionality and the privacy settings that restrict it, and to provide this service at the operating-system layer. Specifically, the paper identifies high-level abstractions to specify privacy and functionality requirements, techniques to resolve competing requirements, and mechanisms to enforce the resolved behavior. Instead of focusing on a specific solution, the authors survey application functionality and user privacy requirements, and suggest how they might be addressed. Rather than offering a solution to the problem, this work serves as a starting point for a conversation about how to improve OS-level support for privacy.

The reviewers agreed that the authors identified an important problem and proposed an interesting potential direction for addressing it. The case studies in the paper provide supporting evidence that the approach is viable. There were concerns that the paper raises more questions than it answers (which is typical for a position paper) and that privacy negotiations have been proposed in previous work (impacting novelty). Despite these issues, the reviewers agreed that TussleOS is an interesting topic for future work.

Download the full article

InKeV: In-Kernel Distributed Network Virtualization for DCN

Zaafar Ahmed, Muhammad Hamad Alizai, Affan A. Syed.
Abstract

InKeV is a network virtualization platform based on eBPF, an in-kernel execution engine recently upstreamed into the Linux kernel. InKeV’s key contribution is that it enables in-kernel programmability and configuration of virtualized network functions, making it possible to create a distributed virtual network across all edges hosting tenant workloads. Despite the high performance demands of production environments, existing virtualization solutions have largely static in-kernel components, due to the difficulty of developing and maintaining kernel modules and their years-long feature delivery time. The resulting compromise is either in the programmability of network functions that rely on the data plane, such as payload processing, or in performance, due to expensive user-/kernel-space context switching. InKeV addresses these concerns: the use of eBPF allows it to dynamically insert programmable network functions into a running kernel, requiring neither packaging a custom kernel nor hoping for acceptance in the mainline kernel. Its novel stitching feature makes it possible to flexibly configure complete virtual networks by creating a graph of network functions inside the kernel. Our evaluation reports on the flexibility of InKeV and on in-kernel implementation benefits such as low latency and an impressive flow creation rate.


Public review by Katerina Argyraki

The ability to program the data plane, that is, to introduce new packet-processing functionality in the network, is one of the main challenges faced by network operators today, whether in the context of datacenters or Internet service providers. A popular approach is to introduce “network functions” at the edge of the network, in general-purpose machines (not custom network equipment). To maximize performance, we would normally run these network functions inside the kernel; however, the standard ways of doing this, e.g., packaging custom kernels, are impractical. Instead, this paper proposes to leverage the extended Berkeley Packet Filter (eBPF), a way to safely introduce new functionality into the kernel, which is now part of Linux. The paper contributes a framework for managing network functions that are implemented on top of eBPF, and it shows experimentally that the proposed approach significantly outperforms the current standard approach, represented by OpenStack Neutron (which does not run entire network functions in the kernel). The reviewers appreciated the proposed solution for its simplicity and the clear demonstration of its performance benefits; in a sea of proposals for how to build programmable data planes, this one stands out for its potential for practical impact.
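
InKeV’s own API is not reproduced here, but the underlying mechanism is easy to demonstrate. The sketch below uses the bcc toolkit to compile a trivial eBPF program and attach it to a running kernel, with no custom kernel build; the interface name is illustrative, and a real network function would parse and rewrite packets rather than pass them through.

    #!/usr/bin/env python
    # Requires root and a kernel with eBPF/XDP support, plus the bcc toolkit.
    import time
    from bcc import BPF

    prog = r"""
    #include <uapi/linux/bpf.h>

    int nf_pass(struct xdp_md *ctx) {
        // A real network function would inspect and rewrite the packet here.
        return XDP_PASS;
    }
    """

    b = BPF(text=prog)                  # compiled, then verified by the kernel
    fn = b.load_func("nf_pass", BPF.XDP)
    b.attach_xdp("eth0", fn, 0)         # attach to an (illustrative) interface
    print("eBPF network function attached to eth0")
    try:
        time.sleep(60)
    finally:
        b.remove_xdp("eth0", 0)         # detach, restoring the default path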

Download the full article

Opening up Attendance at HotNets

Jeffrey C. Mogul, Bruce Davie, Hari Balakrishnan, Ramesh Govindan.
Abstract

HotNets has historically been invitation-only. The SIGCOMM community has recently encouraged HotNets to allow broader participation. This note reports on a HotNets 2015 experiment with a more open attendance policy, and on the results of a post-workshop survey of the attendees. Based on this experiment and the survey, the HotNets Steering Committee believes it is possible for the workshop to support broader attendance, while preserving an atmosphere that encourages free-flowing discussions.


Download the full article

Research Topics related to Real-Time Communications over 5G Networks

Carol Davids, Vijay K. Gurbani, Gaston Ormazabal, Andrew Rollins, Kundan Singh, Radu State.
Abstract

In this article we describe the discussion and conclusions of the “Roundtable on Real-Time Communications Research: 5G and Real-Time Communications — Topics for Research” held at the Illinois Institute of Technology’s Real-Time Communications Conference and Expo, co-located with the IPTComm Conference, October 5-8, 2015.



Download the full article

Workshop on Internet Economics (WIE2015) Report

kc Claffy, Dave Clark.
Abstract

On December 16-17, 2015, we hosted the 5th interdisciplinary Workshop on Internet Economics (WIE) at UC San Diego’s Supercomputer Center. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to inform current and emerging regulatory and policy debates.

The FCC’s latest open Internet order ostensibly changes the landscape of regulation by using Title II as its basis. This year we discussed the implications of Title II (common-carrier-based) regulation for issues we have looked at in the past and for those shaping current policy conversations. Discussion topics included differentiated services on the public Internet, evolving approaches to interconnection across different segments of the ecosystem (e.g., content to access), QoE and QoS measurement techniques and their limitations, interconnection measurement and modeling challenges and opportunities, and transparency. The format was a series of focused sessions, where presenters prepared 10-minute talks on relevant issues, followed by in-depth discussions. This report highlights the discussions and presents relevant open research questions identified by participants. The slides presented and this report are available at http://www.caida.org/workshops/wie/1512/


Download the full article