Tag Archives: editorial

The July 2019 Issue

This July 2019 issue contains two technical papers and three editorial notes. In “Securing Linux with a Faster and Scalable IPtables”, Sebastiano Miano and his colleagues revisit how Linux firewalls work. Since version 2.4.0 of the Linux kernel, iptables has been the standard way of defining firewall rules in Linux. These iptables are widely used, but writing and maintaining them can be difficult, and they have some performance limitations. This paper leverages the eBPF virtual machine included in the Linux kernel to propose a replacement for iptables that preserves their semantics while providing improved performance. The authors release their implementation and evaluate its performance in detail.
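For readers unfamiliar with the mechanism, the following minimal sketch shows the building block this line of work relies on: an eBPF program attached at the XDP hook that filters packets before they reach the regular network stack. This is not the authors' implementation; the bcc toolkit choice and the interface name are assumptions. It drops ICMP traffic, the moral equivalent of iptables -A INPUT -p icmp -j DROP:

#!/usr/bin/env python3
# Minimal XDP packet filter using the bcc toolkit (requires root and bcc).
# Illustrative only: the paper's system is far more complete.
from bcc import BPF
import time

prog = r"""
#include <uapi/linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>

// Drop every ICMP packet, pass everything else.
int xdp_drop_icmp(struct xdp_md *ctx) {
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;                 // truncated frame: let it through
    if (eth->h_proto != htons(ETH_P_IP))
        return XDP_PASS;                 // not IPv4

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;
    return ip->protocol == IPPROTO_ICMP ? XDP_DROP : XDP_PASS;
}
"""

device = "eth0"                          # assumption: adjust to your NIC
b = BPF(text=prog)
b.attach_xdp(device, b.load_func("xdp_drop_icmp", BPF.XDP), 0)
print("Dropping ICMP on %s; Ctrl-C to detach" % device)
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    b.remove_xdp(device, 0)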

In “Towards Passive Analysis of Anycast in Global Routing: Unintended Impact of Remote Peering”, Rui Bian et al. analyse the deployment of anycast services. For this, they rely on different sources of BGP routing information and highlight the impact of remote peering on anycast performance. They release their data and analysis scripts.

In addition to these two peer-reviewed papers, this issue contains three editorial notes. In “Privacy Trading in the Surveillance Capitalism Age: Viewpoints on ‘Privacy-Preserving’ Societal Value Creation”, Ranjan Pal and Jon Crowcroft reconsider the current mobile app ecosystem from an economic and privacy viewpoint. They show that the current model is not the only possible one and propose the idea of a regulated privacy trading mechanism that provides a better compromise between privacy and the commercial interests of companies. In “Datacenter Congestion Control: Identifying what is essential and making it practical”, Aisha Mushtaq et al. take a step back from the datacenter congestion control problem. They argue that congestion control mechanisms that use Shortest-Remaining-Processing-Time are the best solution and discuss how commodity switches could be modified to support it. Finally, in “The 11th Workshop on Active Internet Measurements (AIMS-11) Workshop Report”, kc Claffy and Dave Clark summarise the discussions at the latest AIMS workshop. They mention several new measurement initiatives and interesting research projects.

Privacy Trading in the Surveillance Capitalism Age: Viewpoints on ‘Privacy-Preserving’ Societal Value Creation

Ranjan Pal and Jon Crowcroft

Abstract

In the modern era of mobile apps (part of the era of surveillance capitalism, a term famously coined by Shoshana Zuboff), huge quantities of data about individuals and their activities offer a wave of opportunities for economic and societal value creation. On the flip side, such opportunities also open up channels for privacy breaches of an individual’s personal information. Data holders (e.g., apps) may hence take commercial advantage of the individuals’ inability to fully anticipate the potential uses of their private information, with detrimental effects for social welfare. As steps to improve social welfare, we comment on the existence and design of efficient consumer-data releasing ecosystems aimed at achieving a maximum social welfare state amongst competing data holders. In view of (a) the behavioral assumption that humans are ‘compromising’ beings, (b) privacy not being a well-boundaried good, and (c) the practical inevitability of inappropriate data leakage by data holders upstream in the supply chain, we showcase the idea of a regulated and radical privacy trading mechanism that preserves the heterogeneous privacy preservation constraints (at an aggregate consumer, i.e., app, level) up to certain compromise levels, while at the same time satisfying the commercial requirements of agencies (e.g., advertising organizations) that collect and trade client data for the purpose of behavioral advertising. More specifically, our idea merges supply function economics, introduced by Klemperer and Meyer, with differential privacy; together, their powerful theoretical properties lead to a stable and efficient, i.e., maximum social welfare, state in an algorithmically scalable manner. As part of future research, we also discuss interesting additional techno-economic challenges related to realizing effective privacy trading ecosystems.
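The abstract merges supply-function economics with differential privacy; the market-design half does not fit in a few lines, but the differential-privacy ingredient can be made concrete. Below is a minimal sketch of the standard Laplace mechanism for releasing a numeric statistic under an epsilon privacy budget; the function names and the example figures are invented for illustration:

import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5              # uniform on [-0.5, 0.5)
    mag = max(1e-300, 1.0 - 2.0 * abs(u))  # clamp to avoid log(0)
    return -scale * math.copysign(1.0, u) * math.log(mag)

def dp_release(true_value, sensitivity, epsilon):
    """Release a numeric statistic with epsilon-differential privacy.
    sensitivity: how much one individual's data can change the statistic."""
    return true_value + laplace_noise(sensitivity / epsilon)

# Example: a noisy count of app users at epsilon = 0.5 (numbers invented).
print(dp_release(true_value=10000, sensitivity=1.0, epsilon=0.5))

Calibrating the noise scale to sensitivity/epsilon bounds how much any single individual's data can shift the released value, which is the kind of quantifiable privacy guarantee a trading mechanism could meter and price.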

Download the full article

Datacenter Congestion Control: Identifying what is essential and making it practical

Aisha Mushtaq, Radhika Mittal, James McCauley, Mohammad Alizadeh, Sylvia Ratnasamy, Scott Shenker

Abstract

Recent years have seen a slew of papers on datacenter congestion control mechanisms. In this editorial, we ask whether the bulk of this research is needed for the common case where congestion control involves hosts responding to simple congestion signals from the network and the performance goal is reducing some average measure of Flow Completion Time. We raise this question because we find that, out of all the possible variations one could make in congestion control algorithms, the most essential feature is the switch scheduling algorithm. More specifically, we find that congestion control mechanisms that use Shortest-Remaining-Processing-Time (SRPT) achieve superior performance as long as the rate-setting algorithm at the host is reasonable. We further find that while SRPT’s performance is quite robust to host behaviors, the performance of schemes that use scheduling algorithms like FIFO or Fair Queuing depends far more crucially on the rate-setting algorithm, and their performance is typically worse than what can be achieved with SRPT. Given these findings, we then ask whether it is practical to realize SRPT in switches without requiring custom hardware. We observe that approximate and deployable SRPT (ADS) designs exist, which leverage the small number of priority queues supported in almost all commodity switches, and require only software changes in the host and the switches. Our evaluations with one very simple ADS design show that it can achieve performance close to true SRPT and significantly better than FIFO. Thus, the answer to our basic question – whether the bulk of recent research on datacenter congestion control algorithms is needed for the common case – is no.
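The core claim, that the scheduling discipline matters more than rate-setting details, can be illustrated with a toy single-link model (a drastic simplification of the paper's evaluation, with an invented workload). The sketch below simulates preemptive SRPT and run-to-completion FIFO on a unit-rate link and compares average Flow Completion Time; with this workload FIFO averages about 10.9 time units against roughly 5.0 for SRPT, because short flows no longer wait behind the long one:

import heapq

def avg_fct_fifo(flows):
    """Run-to-completion FIFO on a unit-rate link.
    flows: list of (arrival_time, size) tuples."""
    flows = sorted(flows)
    t = 0.0
    total = 0.0
    for arrival, size in flows:
        t = max(t, arrival) + size       # wait for the link, then transmit
        total += t - arrival             # this flow's completion time
    return total / len(flows)

def avg_fct_srpt(flows):
    """Preemptive Shortest-Remaining-Processing-Time on the same link."""
    flows = sorted(flows)
    n, i = len(flows), 0
    t, total = 0.0, 0.0
    active = []                          # min-heap of [remaining, arrival]
    while i < n or active:
        if not active:                   # link idle: jump to next arrival
            t = max(t, flows[i][0])
        while i < n and flows[i][0] <= t:    # admit everything that arrived
            heapq.heappush(active, [flows[i][1], flows[i][0]])
            i += 1
        next_arrival = flows[i][0] if i < n else float("inf")
        remaining, arrival = heapq.heappop(active)
        run = min(remaining, next_arrival - t)   # serve until done or preempted
        t += run
        remaining -= run
        if remaining > 1e-12:
            heapq.heappush(active, [remaining, arrival])
        else:
            total += t - arrival
    return total / n

# One long flow followed by two short ones (sizes in arbitrary time units).
workload = [(0.0, 10.0), (0.1, 1.0), (0.2, 1.0)]
print("FIFO avg FCT:", round(avg_fct_fifo(workload), 2))   # ~10.9
print("SRPT avg FCT:", round(avg_fct_srpt(workload), 2))   # ~4.97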

Download the full article

The 11th Workshop on Active Internet Measurements (AIMS-11) Workshop Report

kc Claffy, David Clark

Abstract

On 16-17 April 2019, CAIDA hosted its eleventh Workshop on Active Internet Measurements (AIMS-11). This workshop series provides a forum for stakeholders in Internet active measurement projects to communicate their interests and concerns, and explore cooperative approaches to maximizing the collective benefit of deployed infrastructure and gathered data. An overarching theme this year was scaling the storage, indexing, annotation, and usage of Internet measurements. We discussed tradeoffs in the use of commercial cloud services to make measurement results more accessible and informative to researchers in various disciplines. Other agenda topics included status updates on recent measurement infrastructures and community feedback; measurement of poorly configured infrastructure; and recent successes and approaches to evolving challenges in geolocation, topology, route hijacking, and performance measurement. We review highlights of the discussions of the talks. This report does not cover each topic discussed; for more details, examine the workshop presentations linked from the workshop web page: http://www.caida.org/workshops/aims/1904/.

Download the full article

The April 2019 Issue

This April 2019 issue contains two technical papers and four editorial notes. In “On the Complexity of Non-Segregated Routing in Reconfigurable Data Center Architectures”, Klaus-Tycho Foerster and his colleagues analyse whether a data center network could dynamically reconfigure its topology to better meet the traffic demand. They explore this problem as a mathematical optimisation problem and seek exact algorithms. The second technical paper, “Precise Detection of Content Reuse in the Web”, addresses the problem of detecting whether the same content is available on different websites. Calvin Ardi and John Heidemann propose a new methodology that enables researchers to discover and detect the reutilisation of content on web servers. They provide both a large dataset and software to analyse it.

The four editorial notes cover very different topics. kc Claffy and Dave Clark summarise in “Workshop on Internet Economics (WIE2018) Final Report” a recent workshop on Internet economics. In “Democratizing the Network Edge”, Larry Peterson and six colleagues encourage the community to participate in the innovation that they predict at the intersection between the cloud and the access networks, which many refer to as the edge. They propose a plan for action to have a real impact on this emerging domain. In “A Broadcast-Only Communication Model Based on Replicated Append-Only Logs”, Christian Tschudin looks at the interplay between the append-only log data structure and broadcast communication techniques. He argues that some network architectures could leverage this interplay.

As a reader of CCR, you already know the benefits of releasing the artifacts associated with scientific papers. This encourages their replicability, and ACM has defined a badging system to recognise the papers that provide such artifacts. Several ACM SIGs have associated artifact evaluation committees with their flagship conferences and encourage their members to release their paper artifacts. Last year, two evaluations of paper artifacts were organised within SIGCOMM. The first one focussed on the papers that were accepted at the Conext’18 conference. Twelve of the papers presented at Conext’18 received ACM reproducibility badges. The second artifacts evaluation was open to papers accepted by CCR and other SIGCOMM conferences. Twenty-eight of these papers received ACM reproducibility badges. These evaluations and some lessons learned are discussed in “Evaluating the artifacts of SIGCOMM papers” by Damien Saucez, Luigi Iannone and myself. We hope that evaluating the artifacts will become a habit for all SIGCOMM conferences.

I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online (https://ccronline.sigcomm.org) or by email at ccr-editor at sigcomm.org.

Workshop on Internet Economics (WIE2018) Final Report

kc Claffy, David Clark

Abstract

On 12-13 December 2018, CAIDA hosted the 9th interdisciplinary Workshop on Internet Economics (WIE) at UC San Diego’s Supercomputer Center. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to exchange views on current and emerging regulatory and policy debates. To add clarity to a range of vigorous policy debates, and in pursuit of actionable objectives, this year’s meeting used a different approach to structuring the agenda. Each attendee chose a specific policy goal or harm, and structured their presentation to answer three questions: (1) What data is needed to measure progress toward/away from this goal/harm? (2) What methods do you propose to gather such data? (3) Who are the right entities to gather such data, and how should such data be managed and shared? With a specific focus on measurement challenges, the topics we discussed included: analyzing the evolution of the Internet in a layered-platform context to gain new insights; measurement and analysis of economic impacts of new technologies using old tools; security and trustworthiness, reach (universal service) and reachability, sustainability of investment into Internet infrastructure, as well as infrastructure to measure the Internet. All slides are available at https://www.caida.org/workshops/wie/1812/.

Download the full article

Evaluating the artifacts of SIGCOMM papers

Damien Saucez, Luigi Iannone, Olivier Bonaventure

Abstract

A growing fraction of the papers published by CCR and at SIGCOMM-sponsored conferences include artifacts such as software or datasets. Outside CCR, these artifacts were rarely evaluated. During the last months of 2018, we organised two different Artifacts Evaluation Committees to which authors could submit the artifacts of their papers for evaluation. The first one evaluated the papers accepted by Conext’18 shortly after the TPC decision. It assigned ACM reproducibility badges to 12 different papers. The second one evaluated papers accepted by CCR and any SIGCOMM-sponsored conference. Twenty-eight papers received ACM reproducibility badges. We report on the results of a short survey among artifact authors and reviewers and provide some suggestions for future artifact evaluations.

Download the full article

The January 2019 Issue

2019 is a special year for SIGCOMM as your SIG will celebrate its 50th birthday at SIGCOMM’19 in August. During the last half century, the networking field has evolved a lot, and SIGCOMM Computer Communication Review (CCR) contributed to this evolution by disseminating technical papers in a timely manner. CCR will celebrate SIGCOMM’s birthday with a special issue containing editorial notes that reflect on both the past and the future of your SIG. This special issue will be published in October 2019. Its detailed content is still being worked on, but we expect that you will find lots of interesting information in it. If you plan to submit papers to CCR, please note that the October 2019 issue will not publish any new technical papers. All the papers submitted between March 1st, 2019 and September 1st, 2019 will be considered for the January 2020 issue.

This January 2019 issue contains one technical paper and four editorial notes. In “Parametrized Complexity of Virtual Network Embeddings: Dynamic & Linear Programming Approximations”, Matthias Rost et al. analyse the problem of mapping a virtual network on a physical one. They present both theoretical and experimental results.

The first two editorial notes are position papers addressing different technical topics. In “Network Telemetry: Towards A Top-Down Approach”, Minlan Yu argues that we should view network telemetry from a different angle. Instead of using a bottom-up approach that relies on passively collecting data from various devices and then inferring the target network-wide information, she suggests a top-down approach and envisions the possibility of providing high-level declarative abstractions that would enable operators to define specific measurement queries. This editorial note should interest many Internet measurement researchers.

In “Thoughts on Load Distribution and the Role of Programmable Switches”, James McCauley and his colleagues take a step back and look at some uses of programmable network switches. More precisely, they wonder which types of functionality should be migrated to switches and which should not. This is a very interesting question that should be answered when writing the motivation for many papers on programmable switches.

The two other editorial notes were prepared at a recent Dagstuhl seminar that focused on the reproducibility of network research. In “The Dagstuhl Beginners Guide to Reproducibility for Experimental Networking Research”, Vaibhav Bajpai and his eight co-authors have assembled a very interesting and very useful guide filled with hints and recommendations for young researchers who begin to experiment with networks. This article will probably soon become a must read in many graduate schools.

During the same seminar, another group of researchers led by Alberto Dainotti brainstormed about our x-page, two-column papers. This format was interesting when articles were disseminated on real paper. Today, thirty years after the invention of the web, there are many other possibilities to disseminate scientific information. Many of these techniques are more collaborative and open than putting pdf files on web servers. “Open Collaborative Hyperpapers: A Call to Action” encourages the measurement community to collaborate on the preparation of hyperpapers. This editorial note explains the motivations for these hyperpapers and discusses some solvable technical challenges. An interesting point about this approach is that it could encourage both a faster dissemination of research results and a truly open model that encourages authors to collaborate. While brainstorming about the 50th birthday issue of SIGCOMM, we had an interesting teleconference with Vint Cerf, who reminded us of the role that SIGCOMM Computer Communication Review played in allowing a fast dissemination of recent research results. He compared CCR with publications such as the Journal of the ACM that had much longer publication delays.

The hyperpapers discussed in the last editorial note of this issue could be a modern way of disseminating important research results. I would love to see researchers collaborating on hyperpapers in the coming months and submitting their work to CCR. Such a submission would violate the CCR submission guidelines, which still assume that authors provide pdf files. If such a hyperpaper gets submitted to CCR, we will find a suitable reviewing process within the CCR Editorial Board.

I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online (https://ccronline.sigcomm.org) or by email at ccr-editor at sigcomm.org.

Olivier Bonaventure

CCR Editor

Network Telemetry: Towards A Top-Down Approach

Minlan Yu

Abstract

Network telemetry is about understanding what is happening in the current network. It serves as the basis for making a variety of management decisions to improve the performance, availability, security, and efficiency of networks. However, it is challenging to build real-time and fine-grained network telemetry systems because of the need to support a variety of measurement queries and handle a large amount of traffic for large networks, while staying within the resource constraints at hosts and switches. Today, most operators take a bottom-up approach: they passively collect data from individual devices and infer the network-wide information they need. They are often limited by the monitoring tools device vendors provide and find it hard to extract useful information. In this paper, we argue for a top-down approach: we should provide a high-level declarative abstraction for operators to specify measurement queries, programmable measurement primitives at switches and hosts, and a runtime that translates the high-level queries into low-level API calls. We discuss a few recent works taking this top-down approach and call for more research in this direction.
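As a purely hypothetical sketch of what such a high-level declarative abstraction might look like (the class and operator names are invented here, not taken from any real system), the toy below lets an operator state what to measure as a filter/group-by pipeline, with run() standing in for a runtime that would compile the query down to switch and host primitives:

from collections import Counter

class Query:
    """A toy declarative measurement query: the operator says *what*
    to measure; run() stands in for a runtime deciding *how*."""
    def __init__(self):
        self.ops = []
    def filter(self, predicate):
        self.ops.append(("filter", predicate))
        return self
    def groupby_count(self, key):
        self.ops.append(("groupby_count", key))
        return self
    def run(self, packets):
        stream = packets
        for kind, fn in self.ops:
            if kind == "filter":
                stream = [p for p in stream if fn(p)]
            else:  # groupby_count
                stream = Counter(fn(p) for p in stream)
        return stream

# Example: count TCP SYNs per destination, the core of a SYN-flood query.
packets = [
    {"proto": "tcp", "flags": "S", "dst": "10.0.0.1"},
    {"proto": "tcp", "flags": "S", "dst": "10.0.0.1"},
    {"proto": "udp", "flags": "",  "dst": "10.0.0.2"},
]
syn_counts = (Query()
              .filter(lambda p: p["proto"] == "tcp" and "S" in p["flags"])
              .groupby_count(lambda p: p["dst"]))
print(syn_counts.run(packets))    # Counter({'10.0.0.1': 2})

The appeal of the top-down design argued for in the abstract is that the same query could be compiled to different low-level primitives (switch counters, host-side sketches) without the operator having to rewrite it.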

Download the full article

Thoughts on Load Distribution and the Role of Programmable Switches

James McCauley, Aurojit Panda, Arvind Krishnamurthy, Scott Shenker

Abstract

The trend towards powerfully programmable network switching hardware has led to much discussion of the exciting new ways in which it can be used. In this paper, we take a step back, and examine how it should be used.

Download the full article