The October 2020 issue

This October 2020 issue contains five technical papers, the third paper of our education series, as well as three editorial notes.

The first technical paper, Partitioning the Internet using Anycast Catchments, by Kyle Schomp and Rami Al-Dalky, deals with anycast, one of the core operational strategies to improve service performance, availability and resilience. Anycast is widely used by cloud providers, content delivery networks (CDNs), major DNS operators and many more popular Internet services. However, anycast comes with limited visibility into how traffic will be distributed among the different server locations. The authors of this paper propose a technique for partitioning the Internet using passive measurements of existing anycast deployments, such that all IP addresses within a partition are routed to the same location for an arbitrary anycast deployment.

The second technical paper, LoRadar: LoRa Sensor Network Monitoring through Passive Packet Sniffing, by Kwon Nung Choi and colleagues, moves us to a very different topic, in the area of IoT, and in particular Low Power WAN technologies (LPWANs) such as Long Range (LoRa). This paper develops a software tool, LoRadar, to monitor LoRa’s medium access control protocol on commodity hardware via passive packet sniffing.

Our third paper, A first look at the IP eXchange Ecosystem, by Andra Lutu and her colleagues, deals with the very important topic of the IPX Network, which interconnects about 800 Mobile Network Operators (MNOs) worldwide and which we use every time we roam with our smartphones. Despite its size, neither its organisation nor its operation is well known within our community. This paper provides a first analysis of the IPX Network, which we hope will be followed by other works on this under-studied topic.

The fourth paper, Mobile Web Browsing Under Memory Pressure, by Ihsan Ayyub Qazi and colleagues, investigates the impact of memory usage on mobile devices in the context of web browsing. The authors present a study using landing page loading time and memory requirements for a number of Android-based smartphones using Chrome, Firefox, Microsoft Edge and Brave. The extensive results of this paper cover the effect of tabs, scrolling, the number of images, and the number of requests made for different objects.

The fifth paper, Retrofitting Post-Quantum Cryptography in Internet Protocols: A Case Study of DNSSEC, by Moritz Mueller and his colleagues, analyses the implications of different Post-Quantum Cryptography solutions in the context of the Domain Name System Security Extensions (DNSSEC). What makes this paper very interesting is its timeliness: the networking and security communities are currently investigating suitable alternatives for DNSSEC, and candidate solutions are expected to be selected by early 2022.

The sixth paper, also our third paper in the new education series, COSMOS Educational Toolkit: Using Experimental Wireless Networking to Enhance Middle/High School STEM Education, by Panagiotis Skrimponis and his colleagues, describes COSMOS, a general-purpose educational toolkit for teaching students about a variety of computer science concepts, including computer networking. The notable aspect of this work is that the COSMOS testbed has already been deployed and used by a large number of students, and has already demonstrated great value to the community.

Then, we have three editorial notes. The first two are coincidentally on the very timely topic of contact tracing. The first one, Coronavirus Contact Tracing: Evaluating The Potential Of Using Bluetooth Received Signal Strength For Proximity Detection, by Douglas J. Leith and Stephen Farrell, reports on the challenges faced when deploying Covid-19 contact tracing apps that use Bluetooth Low Energy (LE) to detect proximity. The second editorial note, Digital Contact Tracing: Technologies, Shortcomings, and the Path Forward, by Amee Trivedi and Deepak Vasisht, investigates the technology landscape of contact-tracing apps and reports on what the authors believe are the missing pieces. Our third and final editorial note, Using Deep Programmability to Put Network Owners in Control, by Nate Foster and colleagues, shares their vision of deep programmability across the stack.

I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online or by email at ccr-editor.

The July 2020 issue

This July 2020 issue contains four technical papers, the second paper of our education series, as well as two editorial notes.

The first technical paper, Tracking the deployment of TLS 1.3 on the Web: A story of experimentation and centralization, by Ralph Holz and his colleagues, deals with Transport Layer Security (TLS) 1.3, a redesign of the Web’s most important security protocol. TLS 1.3 was standardized in August 2018 after a four-year-long, unprecedented design process involving many cryptographers and industry stakeholders. In their work, the authors track the deployment, uptake, and use of TLS 1.3 from the early design phase until well over a year after standardization.

The second technical paper, Does Domain Name Encryption Increase Users’ Privacy?, by Martino Trevisan and colleagues, is on a topic related to the first technical paper. This work shows that DNS over HTTPS (DoH) does not offer the privacy protection that many assume. For the purposes of reproducibility, the authors make the data they used available under NDA with the institution owning the data. The authors also share configuration files and details of their ML environment in the interest of promoting replicability in other environments.

Our third paper, Using Application Layer Banner Data to Automatically Identify IoT Devices, by Talha Javed and his colleagues, is of the “repeatable technical papers” type: technical contributions that provide their artefacts, e.g., software and datasets. This paper attempts to replicate a Usenix Security 2018 paper. It describes the authors’ efforts at re-implementing the solution described in that paper, and especially the challenges encountered when the authors of the original paper are unwilling to respond to requests for artefacts. We hope it will encourage additional reproducibility studies.

The fourth paper, Towards Declarative Self-Adapting Buffer Management, by Pavel Chuprikov and his colleagues, introduces a novel machine-learning-based approach to buffer management. The idea is to provide a queue management infrastructure that automatically adapts to traffic changes and identifies the policy that is hypothetically best suited to current traffic patterns. The authors adopt a multi-armed bandits model and, given that different objectives and assumptions lead to different bandit algorithms, they discuss and explore the design space while providing an experimental evaluation that validates their recommendations. The authors provide a GitHub repository that allows for the reproducibility of their results through the NS-2 simulator.
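To make the multi-armed bandit framing concrete, here is a minimal epsilon-greedy sketch. This is not the authors' algorithm: the policy names and the reward model below are invented for illustration, standing in for "observed performance under the currently selected buffer-management policy."

```python
import random

# Hypothetical candidate buffer-management policies (illustrative names,
# not the ones evaluated in the paper). Each is one "arm" of the bandit.
POLICIES = ["tail-drop", "longest-queue-drop", "largest-packet-drop"]

def epsilon_greedy(reward_fn, rounds=1000, epsilon=0.1, seed=0):
    """Each round, mostly exploit the arm with the best observed average
    reward; with probability epsilon, explore a random arm instead."""
    rng = random.Random(seed)
    counts = {p: 0 for p in POLICIES}
    totals = {p: 0.0 for p in POLICIES}

    def avg(p):
        return totals[p] / counts[p] if counts[p] else 0.0

    for _ in range(rounds):
        if rng.random() < epsilon or not any(counts.values()):
            arm = rng.choice(POLICIES)   # explore
        else:
            arm = max(POLICIES, key=avg)  # exploit
        counts[arm] += 1
        totals[arm] += reward_fn(arm, rng)
    return max(POLICIES, key=avg)

# Toy stand-in for measured performance: one arm is slightly better on
# average for the current (synthetic) traffic mix.
def toy_reward(arm, rng):
    base = {"tail-drop": 0.5, "longest-queue-drop": 0.7,
            "largest-packet-drop": 0.6}[arm]
    return base + rng.uniform(-0.1, 0.1)

best = epsilon_greedy(toy_reward)
```

After enough rounds the bandit settles on the policy with the best average reward for the observed traffic, which is the self-adaptation idea the paper builds on (with far more care about objectives and algorithm choice).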

The fifth paper, also our second paper in the new education series, Open Educational Resources for Computer Networking, by Olivier Bonaventure and his colleagues, describes an effort to create an online, interactive textbook for computer networking. What distinguishes this textbook from traditional ones is that it is not only free and available for anyone in the world to use, but also interactive. It therefore goes well beyond what a textbook usually offers: it is an interactive learning platform for computer networking. The authors report on roughly ten years of experience with the platform and the lessons learned along the way.

Then, we have two editorial notes. The first, Lessons Learned Organizing the PAM 2020 Virtual Conference, by Chris Misa and his colleagues, reports on the experience of the organizing committee of the 2020 edition of the Passive and Active Measurement (PAM) conference, which took place as a virtual event. It provides important lessons for future conferences that decide to go virtual. The second editorial note, Update on ACM SIGCOMM CCR reviewing process: making the review process more open, by the whole CCR editorial board, aims to inform the SIGCOMM community about the reviewing process currently in place at CCR, and to share our plans to make CCR a more open and welcoming venue, adding more value to the SIGCOMM community.

I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online or by email at ccr-editor.

The April 2020 Issue

SIGCOMM Computer Communication Review (CCR) is produced by a group of members of our community who spend time preparing the newsletter that you read every quarter. Olivier Bonaventure served as editor during the last four years and his term is now over. It is my pleasure to now serve the community as the editor of CCR. As Olivier and other editors did in the past, we will probably adjust the newsletter to the evolving needs of the community. A first change is the introduction of a new Education series led by Matthew Caesar, our new SIGCOMM Education Director. This series will be part of every issue of CCR and will contain different types of contributions: not only technical papers, as in the current issue, but also position papers (that promote discussion through a defensible opinion on a topic), studies (describing research questions, methods, and results), experience reports (that describe an approach with a reflection on why it did or did not work), and approach reports (that describe a technical approach with enough detail for adoption by others).

This April 2020 issue contains five technical papers, the first paper of our new education series, as well as three editorial notes.

The first technical paper, RIPE IPmap Active Geolocation: Mechanism and Performance Evaluation, by Ben Du and his colleagues, introduces the research community to the IPmap single-radius engine and evaluates its effectiveness against commercial geolocation databases.

It is often believed that traffic engineering changes are rather infrequent. In the second paper, Path Persistence in the Cloud: A Study of the Effects of Inter-Region Traffic Engineering in a Large Cloud Provider’s Network, Waleed Reda and his colleagues reveal the high frequency of traffic engineering activity within a large cloud provider’s network.

In the third paper, The Web is Still Small After More Than a Decade, Nguyen Phong Hoang and his colleagues revisit some of the decade-old studies on web presence and co-location.

The fourth paper, a repeatable paper originating from the IMC reproducibility track, An Artifact Evaluation of NDP, by Noa Zilberman, provides an analysis of NDP (New Datacentre Protocol), a novel datacentre transport architecture first presented at ACM SIGCOMM 2017, where it received the best paper award. In this paper, the author builds on the artefact provided by the original authors of NDP, showing how it is possible to carry out research and build new results on previous work done by fellow researchers.

The Low Latency, Low Loss, Scalable throughput (L4S) architecture aims to deliver consistently low queuing delay by combining scalable congestion controls, such as DCTCP and TCP Prague, with early congestion signalling from the network. In our fifth technical paper, Validating the Sharing Behavior and Latency Characteristics of the L4S Architecture, Dejene Boru Oljira and his colleagues validate some of the experimental results reported in previous works demonstrating the co-existence of scalable and classic congestion controls and the architecture’s low-latency service.

The sixth paper, also our very first paper in the new education series, An Open Platform to Teach How the Internet Practically Works, by Thomas Holterbach and his colleagues, describes a software infrastructure that can be used to teach how the Internet works. The platform presented by the authors aims to be a much smaller, yet representative, copy of the Internet. The paper’s description and evaluation focus on the technical aspects of the design, but as a teaching tool, it may benefit from more discussion of pedagogical issues.

Then, we have three very different editorial notes. The first, Workshop on Internet Economics (WIE 2019) report, by kc claffy and David Clark, reports on the 2019 interdisciplinary Workshop on Internet Economics (WIE). The second, strongly related to the fourth technical paper, deals with reproducibility. In Thoughts about Artifact Badging, Noa Zilberman and Andrew Moore illustrate that the current badging scheme may not identify limitations of architecture, implementation, or evaluation. Our last editorial note is a comment on a past editorial, “Datacenter Congestion Control: Identifying what is essential and making it practical” by Aisha Mushtaq, et al., from our July 2019 issue. This comment, authored by James Roberts, disputes that shortest remaining processing time (SRPT) is the crucial factor in achieving good flow completion time (FCT) performance in datacenter networks.

Steve Uhlig — CCR Editor

Using Deep Programmability to Put Network Owners in Control

Nate Foster, Nick McKeown, Jennifer Rexford, Guru Parulkar, Larry Peterson, Oguz Sunay


Controlling an opaque system by reading some “dials” and setting some “knobs,” without really knowing what they do, is a hazardous and fruitless endeavor, particularly at scale. What we need are transparent networks that start at the top with a high-level intent and map it all the way down, through the control plane to the data plane. If we can specify the behavior we want in software, then we can check that the system behaves as we expect. This is impossible if the implementation is opaque. We therefore need to use open-source software or write it ourselves (or both), and have mechanisms for checking actual behavior against the specified intent. With fine-grain checking (e.g., every packet, every state variable), we can build networks that are more reliable, secure, and performant. In the limit, we can build networks that run autonomously under verifiable, closed-loop control. We believe this vision, while ambitious, is finally within our reach, due to deep programmability across the stack, both vertically (control and data plane) and horizontally (end to end). It will emerge naturally in some networks, as network owners take control of their software and engage in open-source efforts; whereas in enterprise networks it may take longer. In 5G access networks, there is a pressing need for our community to engage, so these networks, too, can operate autonomously under verifiable, closed-loop control.

Download the full article (from ACM)

Digital Contact Tracing: Technologies, Shortcomings, and the Path Forward

Amee Trivedi and Deepak Vasisht


Since the start of the COVID-19 pandemic, technology enthusiasts have pushed for digital contact tracing as a critical tool for breaking the COVID-19 transmission chains. Motivated by this push, many countries and companies have created apps that enable digital contact tracing with the goal to identify the chain of transmission from an infected individual to others and enable early quarantine. Digital contact tracing applications like AarogyaSetu in India, TraceTogether in Singapore, SwissCovid in Switzerland, and others have been downloaded hundreds of millions of times. Yet, this technology hasn’t seen the impact that we envisioned at the start of the pandemic. Some countries have rolled back their apps, while others have seen low adoption.

Therefore, it is prudent to ask what the technology landscape of contact tracing looks like and what the missing pieces are. We attempt to undertake this task in this paper. We present a high-level review of the technologies underlying digital contact tracing, identify a set of metrics that are important when evaluating different contact tracing technologies, and evaluate where the different technologies stand today on this set of metrics. Our hope is two-fold: (a) future designers of contact tracing applications can use this review to understand the technology landscape, and (b) researchers can identify and solve the missing pieces of this puzzle, so that we are ready to face the rest of the COVID-19 pandemic and any future pandemics. The majority of this discussion is focused on the ability to identify contact between individuals. The questions of ethics, privacy, and security of such contact tracing are briefly mentioned but not discussed in detail.

Download the full article (from ACM)

Coronavirus Contact Tracing: Evaluating The Potential Of Using Bluetooth Received Signal Strength For Proximity Detection

Douglas J. Leith and Stephen Farrell


Many countries are deploying Covid-19 contact tracing apps that use Bluetooth Low Energy (LE) to detect proximity within 2m for 15 minutes. However, Bluetooth LE is an unproven technology for this application, raising concerns about the efficacy of these apps. Indeed, measurements indicate that the Bluetooth LE received signal strength can be strongly affected by factors including (i) the model of handset used, (ii) the relative orientation of handsets, (iii) absorption by human bodies, bags, etc., and (iv) radio wave reflection from walls, floors, and furniture. The impact on received signal strength is comparable with that caused by moving 2m, and so has the potential to seriously affect the reliability of proximity detection. These effects are due to the physics of radio propagation and suggest that the development of accurate methods for proximity detection based on Bluetooth LE received signal strength is likely to be challenging. We call for action in three areas. Firstly, measurements are needed that allow the added value of deployed apps within the overall contact tracing system to be evaluated, e.g., data on how many of the people notified by the app would not have been found by manual contact tracing and what fraction of people notified by an app actually test positive for Covid-19. Secondly, the 2m/15 minute proximity limit is only a rough guideline. The real requirement is to use handset sensing to evaluate infection risk, and this requires a campaign to collect measurements of both handset sensor data and infection outcomes. Thirdly, a concerted effort is needed to collect controlled Bluetooth LE measurements in a wide range of real-world environments, the data reported here being only a first step in that direction.
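To see why these factors matter, consider the textbook log-distance path-loss model often used to map received signal strength to distance. This is a standard formula, not the authors' method, and the reference power and exponent below are illustrative values: both vary with handset model, orientation, and environment, which is exactly the fragility the paper measures.

```python
def estimate_distance(rssi_dbm, ref_dbm=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: rssi = ref - 10 * n * log10(d),
    where ref is the RSSI expected at 1 m and n is the path-loss
    exponent. Solving for d gives the distance estimate below."""
    return 10 ** ((ref_dbm - rssi_dbm) / (10.0 * path_loss_exp))

d1 = estimate_distance(-59.0)  # nominal reading at 1 m
d2 = estimate_distance(-65.0)  # same handset, 6 dB more attenuation

# A 6 dB swing (well within the variation caused by handset model,
# orientation, or body absorption) roughly doubles the estimated
# distance, so a contact at 1 m can look like one at 2 m, and vice versa.
```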

Download the full article (from ACM)

COSMOS educational toolkit: using experimental wireless networking to enhance middle/high school STEM education

P. Skrimponis, N. Makris, S. Rajguru, K. Cheng, J. Ostrometzky, E. Ford, Z. Kostic, G. Zussman, T. Korakis


This paper focuses on the educational activities of COSMOS – Cloud enhanced Open Software defined MObile wireless testbed for city Scale deployment. The COSMOS wireless research testbed is being deployed in West Harlem (New York City) as part of the NSF Platforms for Advanced Wireless Research (PAWR) program. COSMOS’ approach for K–12 education is twofold: (i) create an innovative and concrete set of methods/tools that allow teaching STEM subjects using live experiments related to wireless networks/IoT/cloud, and (ii) enhance the professional development (PD) of K–12 teachers and collaborate with them to create hands-on educational material for the students. The COSMOS team has already conducted successful pilot summer programs for middle and high school STEM teachers, where the team worked with the teachers and jointly developed innovative real-world experiments that were organized as automated and repeatable math, science, and computer science labs to be used in the classroom. The labs run on the COSMOS Educational Toolkit, a hardware and software system that offers a large variety of pre-orchestrated K–12 educational labs. The software executes and manages the experiments in the same operational philosophy as the COSMOS testbed. Specifically, since it is designed for use by non-technical middle and high school teachers/students, it adds easy-to-use enhancements to the experiments’ execution and the results visualization. The labs are also supported by Next Generation Science Standards (NGSS)-compliant teacher/student material. This paper describes the teachers’ PD program, the NGSS lessons created and the hardware and software system developed to support the initiative. Additionally, it provides an evaluation of the PD approach as well as the expected impact on K–12 STEM education. Current limitations and future work are also included as part of the discussion section.

Download the full article (from ACM)

Retrofitting Post-Quantum Cryptography in Internet Protocols: A Case Study of DNSSEC

M. Mueller, J. de Jong, M. van Heesch, B. Overeinder, R. van Rijswijk-Deij


Quantum computing is threatening current cryptography, especially the asymmetric algorithms used in many Internet protocols. More secure algorithms, colloquially referred to as Post-Quantum Cryptography (PQC), are under active development. These new algorithms differ significantly from current ones: they can have larger signatures or keys, and often require more computational power. This means we cannot simply replace existing algorithms with PQC alternatives, but need to evaluate whether they meet the requirements of the Internet protocols that rely on them.

In this paper we provide a case study, analyzing the impact of PQC on the Domain Name System (DNS) and its Security Extensions (DNSSEC). In its main role, DNS translates human-readable domain names to IP addresses, and DNSSEC guarantees message integrity and authenticity. DNSSEC is particularly challenging to transition to PQC, since DNSSEC and its underlying transport protocols require small signatures and keys and efficient validation. We evaluate current candidate PQC signature algorithms in the third round of the NIST competition on their suitability for use in DNSSEC. We show that three algorithms partially meet DNSSEC’s requirements, but also show where and how we would still need to adapt DNSSEC. Thus, our research lays the foundation for making DNSSEC, and protocols with similar constraints, ready for PQC.
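As a back-of-the-envelope illustration of the size constraint, one can check whether a signed response would still fit in a single unfragmented UDP datagram. The byte counts below are ballpark public figures for a few algorithms, not the paper's measurements, and the 200-byte base response is an arbitrary example.

```python
# Approximate signature sizes in bytes (illustrative ballpark figures;
# RSA-2048 is the pre-quantum baseline, the rest are PQC candidates).
SIG_BYTES = {
    "RSA-2048": 256,
    "Falcon-512": 690,
    "Dilithium2": 2420,
    "Rainbow-I": 66,
}

# Widely used EDNS UDP payload limit chosen to avoid IP fragmentation.
EDNS_LIMIT = 1232

def fits_in_udp(base_response_bytes, algorithm, num_rrsigs=1):
    """Would a response carrying num_rrsigs signatures of the given
    algorithm still fit in one unfragmented UDP datagram?"""
    total = base_response_bytes + num_rrsigs * SIG_BYTES[algorithm]
    return total <= EDNS_LIMIT
```

Even this toy check shows the tension the paper explores: some PQC signatures fit comfortably, some fit only for responses carrying a single signature, and some never fit, forcing fallback to TCP or changes to DNSSEC itself.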

Download the full article (from ACM)

Mobile Web Browsing Under Memory Pressure

I. Qazi, Z. Qazi, T. Benson, E. Latif, A. Manan, G. Murtaza, M. Tariq


Mobile devices have become the primary mode of Internet access. Yet, differences in mobile hardware resources, such as device memory, coupled with the rising complexity of Web pages can lead to widely different quality of experience for users. In this work, we analyze how device memory usage affects Web browsing performance. We quantify the memory footprint of popular Web pages over different mobile devices, mobile browsers, and Android versions, analyze the induced memory distribution across different browser components (e.g., JavaScript engine and compositor), investigate how performance gets impacted under memory pressure and propose optimizations to reduce the memory footprint of Web browsing. We show that these optimizations can improve performance and reduce chances of browser crashes in low memory scenarios.

Download the full article (from ACM)

A first look at the IP eXchange ecosystem

A. Lutu, B. Jun, F. Bustamante, D. Perino, M. Braun, C. Bontje


The IPX Network interconnects about 800 Mobile Network Operators (MNOs) worldwide and a range of other service providers (such as cloud and content providers). It forms the core that enables global data roaming while supporting emerging applications, from VoLTE and video streaming to IoT verticals. This paper presents the first characterization of this, so-far opaque, IPX ecosystem and a first-of-its-kind in-depth analysis of an IPX Provider (IPX-P). The IPX Network is a private network formed by a small set of tightly interconnected IPX-Ps. We analyze an operational dataset from a large IPX-P that includes BGP data as well as statistics from signaling. We shed light on the structure of the IPX Network as well as on the temporal, structural and geographic features of the IPX traffic. Our results are a first step in understanding the IPX Network at its core, key to fully understand the global mobile Internet.

Download the full article (from ACM)

LoRadar: LoRa Sensor Network Monitoring through Passive Packet Sniffing

K. Choi, H. Kolamunna, A. Uyanwatta, K. Thilakarathna, S. Seneviratne, R. Holz, M. Hassan, A. Zomaya


IoT deployments targeting different application domains are being unfolded at various administrative levels such as countries, states, corporations, or even individual households. Facilitating data transfers between deployed sensors and back-end cloud services is an important aspect of IoT deployments. These data transfers are usually done using Low Power WAN technologies (LPWANs) that have low power consumption and support longer transmission ranges. LoRa (Long Range) is one such technology that has recently gained significant popularity due to its ease of deployment. In this paper, we present LoRadar, a passive packet sniffing framework for LoRa’s Medium Access Control (MAC) protocol, LoRaWAN. LoRadar is built using commodity hardware. By carrying out passive measurements at a given location, LoRadar provides key insights into LoRa deployments such as available LoRa networks, deployed sensors, their make, and transmission patterns. Since LoRa deployments are becoming more pervasive, this information is pivotal in characterizing network performance, comparing different LoRa operators, and in emergencies or tactical operations to quickly assess available sensing infrastructure at a given geographical location. We validate the performance of LoRadar in both laboratory and real network settings and conduct a measurement study at eight key locations distributed over a large city-wide geographical area to provide an in-depth analysis of the landscape of commercial IoT deployments. Furthermore, we show how LoRadar can be used to improve the network, for example through potential collision and jamming detection, device localization, and spectrum policing to identify devices that violate the daily duty-cycle quota. Our results show that most of the devices transmitting over the SF12 data rate at one of the survey locations were violating the network provider’s quota.
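The duty-cycle policing use case can be sketched as a simple aggregation over sniffed packets. This is a toy illustration with synthetic records, not LoRadar's implementation; in a real capture, per-packet airtime would be derived from the spreading factor, bandwidth, and payload length.

```python
from collections import defaultdict

def duty_cycle_violators(records, window_s=86400.0, quota=0.01):
    """records: (device_addr, timestamp_s, airtime_s) tuples observed
    within one window. Returns the devices whose total on-air time
    exceeds the duty-cycle quota (e.g., 1% of a 24-hour day)."""
    airtime = defaultdict(float)
    for dev, _ts, t_air in records:
        airtime[dev] += t_air
    return {dev for dev, total in airtime.items()
            if total / window_s > quota}

# Synthetic example: dev-A stays under a 1% daily quota, dev-B does not.
records = [
    ("dev-A", 0.0, 400.0),   # 400 s on-air per day: ~0.46% duty cycle
    ("dev-B", 0.0, 1200.0),  # 1200 s on-air per day: ~1.39% duty cycle
]
```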

Download the full article (from ACM)

Partitioning the Internet using Anycast Catchments

Kyle Schomp and Rami Al-Dalky


In anycast deployments, knowing how traffic will be distributed among the locations is challenging. In this paper, we propose a technique for partitioning the Internet using passive measurements of existing anycast deployments such that all IP addresses within a partition are routed to the same location for an arbitrary anycast deployment. One IP address per partition may then represent the entire partition in subsequent measurements of specific anycast deployments. We implement a practical version of our technique and apply it to production traffic from an anycast authoritative DNS service of a major CDN and demonstrate that the resulting partitions have low error even up to 2 weeks after they are generated.
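A much-simplified sketch of the partitioning idea, purely for illustration (the real technique operates on passive routing observations at far larger scale): group client prefixes that were served by the same site in every observed deployment, so one representative per group can stand in for the rest in later measurements.

```python
from collections import defaultdict

def partition(observations):
    """observations: {prefix: {deployment: serving_site}}.
    Prefixes with identical catchments across all observed deployments
    land in the same partition; returns a list of prefix sets."""
    groups = defaultdict(set)
    for prefix, catchments in observations.items():
        key = tuple(sorted(catchments.items()))
        groups[key].add(prefix)
    return list(groups.values())

# Synthetic observations (prefixes, deployments, and site codes invented).
obs = {
    "192.0.2.0/24":    {"dns": "fra", "cdn": "ams"},
    "198.51.100.0/24": {"dns": "fra", "cdn": "ams"},
    "203.0.113.0/24":  {"dns": "iad", "cdn": "ord"},
}
parts = partition(obs)  # two partitions: the first two prefixes agree
```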

Download the full article (from ACM)