
Update on ACM SIGCOMM CCR reviewing process: towards a more open review process

Ralph Holz, Marco Mellia, Olivier Bonaventure, Hamed Haddadi, Matthew Caesar, Sergey Gorinsky, Gianni Antichi, Joseph Camp, kc claffy, Bhaskaran Raman, Anna Sperotto, Aline Viana, Steve Uhlig

Abstract

This editorial note aims first to inform the SIGCOMM community about the reviewing process currently in place at CCR and, second, to share our plans to make CCR a more open and welcoming venue by changing the review process, adding more value to the SIGCOMM community.

Download the full article (from ACM)

Preprint

Lessons learned organizing the PAM 2020 virtual conference

Chris Misa, Dennis Guse, Oliver Hohlfeld, Ramakrishnan Durairajan, Anna Sperotto, Alberto Dainotti, Reza Rejaie

Abstract

Due to the COVID-19 pandemic, the organizing committee of the 2020 edition of the Passive and Active Measurement (PAM) conference decided to organize it as a virtual event. Unfortunately, little is known about designing and organizing virtual academic conferences in the networking domain and their impact on the participants’ experience. In this editorial note, we first present the challenges we faced and the rationale behind various organizational decisions we made in designing the virtual format of PAM 2020. We then present key results from a questionnaire-based survey of participants’ experience, showing that, while virtual conferences have the potential to broaden participation and strengthen the focus on technical content, they face serious challenges in promoting social interactions and broadening the scope of discussions. We conclude with key takeaways, lessons learned, and suggestions for future virtual conferences distilled from this experience.

Download the full article (from ACM)

Preprint

Workshop on Internet Economics (WIE 2019) report

kc claffy, David Clark

Abstract

On 9-11 December 2019, CAIDA hosted the 10th interdisciplinary Workshop on Internet Economics (WIE) at the San Diego Supercomputer Center at UC San Diego. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policymakers, and other stakeholders to exchange views on current and emerging economic and policy debates. This year’s meeting had a narrower focus than in years past, motivated by a new NSF-funded project being launched at CAIDA: KISMET (Knowledge of Internet Structure: Measurement, Epistemology, and Technology). The objective of the KISMET project is to improve the security and resilience of key Internet systems by collecting and curating infrastructure data in a form that facilitates querying, integration, and analysis. This project is a part of NSF’s new Convergence Accelerator program, which seeks to support fundamental scientific exploration by creating partnerships across public and private sectors to solve problems of national importance.

Download the full article

Thoughts about Artifact Badging

Noa Zilberman, Andrew W. Moore

Abstract

Reproducibility, the extent to which consistent results are obtained when an experiment is repeated, is important as a means to validate experimental results, promote the integrity of research, and accelerate follow-up work. Commitment to artifact reviewing and badging seeks to promote reproducibility and rank the quality of submitted artifacts.

However, as illustrated in this issue, the current badging scheme, with its focus upon an artifact being reusable, may not identify limitations of architecture, implementation, or evaluation.

We propose that, to improve insight into artifact reproducibility, the depth and nature of artifact evaluation must move beyond simply considering whether an artifact is reusable. Artifact evaluation should consider the evaluation methods themselves alongside variations of the inputs to that evaluation. To achieve this, we suggest an extension to the scope of artifact badging, and we describe approaches and best practices arising in other communities. We seek to promote conversation and make a call to action intended to strengthen the scientific method within our domain.

Download the full article

The April 2020 Issue

SIGCOMM Computer Communication Review (CCR) is produced by a group of members of our community who volunteer their time to prepare the newsletter that you read every quarter. Olivier Bonaventure served as editor for the last four years, and his term is now over. It is my pleasure to now serve the community as the editor of CCR. As Olivier and past editors did, we will likely adjust the newsletter to the evolving needs of the community. A first change is the introduction of a new Education series led by Matthew Caesar, our new SIGCOMM Education Director. This series will be part of every issue of CCR and will contain different types of contributions: not only technical papers, as in the current issue, but also position papers (that promote discussion through a defensible opinion on a topic), studies (describing research questions, methods, and results), experience reports (that describe an approach with a reflection on why it did or did not work), and approach reports (that describe a technical approach with enough detail for adoption by others).

This April 2020 issue contains five technical papers, the first paper of our new education series, as well as three editorial notes.

The first technical paper, RIPE IPmap Active Geolocation: Mechanism and Performance Evaluation, by Ben Du and his colleagues, introduces the research community to the IPmap single-radius engine and evaluates its effectiveness against commercial geolocation databases.

It is often believed that traffic engineering changes are rather infrequent. In the second paper, Path Persistence in the Cloud: A Study of the Effects of Inter-Region Traffic Engineering in a Large Cloud Provider’s Network, Waleed Reda and his colleagues reveal the high frequency of traffic engineering activity within a large cloud provider’s network.

In the third paper, The Web is Still Small After More Than a Decade, Nguyen Phong Hoang and his colleagues revisit some of the decade-old studies on web presence and co-location.

The fourth paper, a repeatable paper that originated in the IMC reproducibility track, An Artifact Evaluation of NDP, by Noa Zilberman, provides an analysis of NDP (New Datacentre Protocol). NDP, first presented at ACM SIGCOMM 2017 (where it received the best paper award), proposes a novel datacentre transport architecture. In this paper, the author builds on the artifact provided by the original authors of NDP, showing how it is possible to carry out research and build new results on previous work done by other fellow researchers.

Achieving both low latency and high throughput with classic TCP congestion control remains difficult. The Low Latency, Low Loss, Scalable throughput (L4S) architecture addresses this problem by combining scalable congestion controls, such as DCTCP and TCP Prague, with early congestion signalling from the network. In our fifth technical paper, Validating the Sharing Behavior and Latency Characteristics of the L4S Architecture, Dejene Boru Oljira and his colleagues validate some of the experimental results reported in previous work demonstrating the coexistence of scalable and classic congestion controls and the low-latency service of L4S.
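To give a flavour of what “scalable” means here: a DCTCP-style sender reduces its congestion window in proportion to the fraction of ECN-marked packets, rather than halving it on any single congestion signal as classic TCP does. The sketch below is purely illustrative; the class, gain constant, and per-window update cadence are our assumptions, not the paper’s implementation.

```python
# Minimal sketch of a DCTCP-style "scalable" congestion response.
# Illustrative only: names, constants, and update cadence are assumptions.

G = 1 / 16  # EWMA gain for the marked fraction (DCTCP's g parameter)

class ScalableSender:
    def __init__(self, cwnd: float = 10.0):
        self.cwnd = cwnd   # congestion window, in packets
        self.alpha = 0.0   # estimate of the fraction of ECN-marked packets

    def on_window(self, acked: int, marked: int) -> None:
        """Update once per window: `acked` packets ACKed, `marked` ECN-marked."""
        frac = marked / acked if acked else 0.0
        self.alpha = (1 - G) * self.alpha + G * frac
        if marked:
            # Proportional back-off: gentle when marking is light, which lets
            # the sender keep queues short without sacrificing throughput.
            self.cwnd *= 1 - self.alpha / 2
        else:
            self.cwnd += 1.0  # classic additive increase

sender = ScalableSender()
sender.on_window(acked=10, marked=2)   # light marking -> small reduction
print(f"cwnd after light marking: {sender.cwnd:.2f}")
```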

The sixth paper, also our very first paper in the new education series, An Open Platform to Teach How the Internet Practically Works, by Thomas Holterbach and his colleagues, describes a software infrastructure that can be used to teach how the Internet works. The platform presented by the authors aims to be a much smaller, yet representative, copy of the Internet. The paper’s description and evaluation focus on the technical aspects of the design; as a teaching tool, however, more discussion of the pedagogical aspects would be helpful.

Then, we have three very different editorial notes. The first, Workshop on Internet Economics (WIE 2019) report, by kc claffy and David Clark, reports on the 2019 interdisciplinary Workshop on Internet Economics (WIE). The second, strongly related to the fourth technical paper, deals with reproducibility: in Thoughts about Artifact Badging, Noa Zilberman and Andrew Moore illustrate that the current badging scheme may not identify limitations of architecture, implementation, or evaluation. Our last editorial note is a comment on a past paper, “Datacenter Congestion Control: Identifying what is essential and making it practical” by Aisha Mushtaq et al., from our July 2019 issue. This comment, authored by James Roberts, disputes that shortest remaining processing time (SRPT) is the crucial factor in achieving good flow completion time (FCT) performance in datacenter networks.

Steve Uhlig — CCR Editor

The January 2020 Issue

This January 2020 issue starts the fiftieth volume of Computer Communication Review. This marks an important milestone for our newsletter. This issue contains four technical papers and three editorial notes. In C-Share: Optical Circuits Sharing for Software-Defined Data-Centers, Shay Vargaftik and his colleagues tackle the challenge of designing data-center networks that combine optical circuit switches with traditional packet switches.

In our second paper, A Survey on the Current Internet Interconnection Practices, Pedro Marcos and his colleagues present the results of a survey answered by about one hundred network operators. This provides very interesting data on why and how networks interconnect. In A first look at the Latin American IXPs, Esteban Carisimo and his colleagues analyze how Internet eXchange Points have been deployed in Latin America during the last decade.

Our fourth technical paper, Internet Backbones in Space, explores different aspects of using satellite networks to create Internet backbones. Giacomo Giuliari and his colleagues analyse four approaches to organising routing between the ground segment of satellite networks (SNs) and traditional terrestrial ISP networks.

Then, we have three very different editorial notes. In Network architecture in the age of programmability, Anirudh Sivaraman and his colleagues look at where programmable functions should be placed inside networks and the impact that programmability can have on the network architecture. In The State of Network Neutrality Regulation, Volker Stocker, Georgios Smaragdakis and William Lehr analyse network neutrality from the US and European viewpoints, considering the technical and legal implications of this debate. In Gigabit Broadband Measurement Workshop Report, William Lehr, Steven Bauer and David Clark report on a recent workshop that discussed the challenges of correctly measuring access networks as they reach bandwidths of 1 Gbps and beyond.

Finally, I’m happy to welcome Matthew Caesar as the new SIGCOMM Education Director. He is currently preparing different initiatives, including an education section in CCR.

I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online (https://ccronline.sigcomm.org) or by email at ccr-editor at sigcomm.org.

Network architecture in the age of programmability

Anirudh Sivaraman, Thomas Mason, Aurojit Panda, Ravi Netravali, Sai Anirudh Kondaveeti

Abstract

Motivated by the rapid emergence of programmable switches, programmable network interface cards, and software packet processing, this paper asks: given a network task (e.g., virtualization or measurement) in a programmable network, should we implement it at the network’s end hosts (the edge) or its switches (the core)? To answer this question, we analyze a range of common network tasks spanning virtualization, deep packet inspection, measurement, application acceleration, and resource management. We conclude that, while the edge is better or even required for certain network tasks (e.g., virtualization, deep packet inspection, and access control), implementing other tasks (e.g., measurement, congestion control, and scheduling) in the network’s core has significant benefits, especially as we raise the bar for the performance we demand from our networks.

More generally, we extract two primary properties that govern where a network task should be implemented: (1) time scales, or how quickly a task needs to respond to changes, and (2) data locality, or the placement of tasks close to the data that they must access. For instance, we find that the core is preferable when implementing tasks that need to run at short time scales, e.g., detecting fleeting and infrequent microbursts or rapid convergence to fair shares in congestion control. Similarly, we observe that tasks should be placed close to the state that they need to access, e.g., at the edge for deep packet inspection that requires private keys, and in the core for congestion control and measurement that needs to access queue depth information at switches.
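As a toy illustration of how these two criteria might interact, consider the following placement helper. It is a hedged sketch only: the function name, the 100-microsecond threshold, and the encoded examples are our own assumptions, not taken from the paper.

```python
# Toy illustration of the two placement criteria (time scales and data
# locality). Threshold and examples below are illustrative assumptions.

def place_task(reaction_time_us: float, state_at: str) -> str:
    """Suggest where to run a network task: at the 'edge' or in the 'core'.

    reaction_time_us: how quickly the task must respond to changes.
    state_at: where the state the task reads lives ('edge' or 'core').
    """
    # Criterion 1 (time scales): tasks that must react within microseconds,
    # e.g. detecting fleeting microbursts, favor the core, where queue
    # state is directly visible.
    if reaction_time_us < 100:
        return "core"
    # Criterion 2 (data locality): otherwise, run the task next to its state,
    # e.g. DPI needs private keys held at the edge, while measurement needs
    # queue depth information held at switches.
    return "edge" if state_at == "edge" else "core"

print(place_task(10, "core"))      # microburst detection -> core
print(place_task(50_000, "edge"))  # deep packet inspection -> edge
```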

Download the full article

The State of Network Neutrality Regulation

Volker Stocker, Georgios Smaragdakis and William Lehr

Abstract

The Network Neutrality (NN) debate refers to the battle over the design of a regulatory framework for preserving the Internet as a public network and open innovation platform. Fueled by concerns that broadband access service providers might abuse network management to discriminate against third-party providers (e.g., content or application providers), policymakers have struggled with designing rules that would protect the Internet from unreasonable network management practices. In this article, we provide an overview of the history of the debate in the U.S. and the EU and highlight the challenges that will confront network engineers designing and operating networks as the debate continues to evolve.

Download the full article

Gigabit broadband measurement workshop report

Steve Bauer, David Clark, William Lehr

Abstract

On July 24-25, 2018, MIT hosted an invitation-only workshop for network researchers from academia, industry, and the policy community engaged in the design and operation of test schemes to measure broadband access, in order to address the measurement challenges associated with high-speed (gigabit) broadband Internet access services. The focus of the workshop was on assessing the current state of the art in gigabit broadband measurement tools and practices, the technical and policy challenges that arise, and possible strategies for addressing these challenges. A goal of this initial workshop was to provide a level-setting and networking opportunity among representatives of many of the leading research and operational efforts within academia, industry, and government to collect, analyze, and make use of broadband and Internet performance measurement data, and to inform the design of measurement methods and tools.

Download the full article