Category Archives: CCR October 2018

The October 2018 issue

SIGCOMM’18 was held in Hungary and the participants were pleased with both the technical program and the social interactions with all the members of the community. During her keynote on Networks Capable of Change, Jennifer Rexford mentioned two papers published in CCR that have had an important impact on our community: OpenFlow: enabling innovation in campus networks and P4: programming protocol-independent packet processors. These two editorial notes paved the way for Software-Defined Networking and then programmable switches. They are a good illustration of the benefits of having a venue like CCR that is open to other types of papers than the scientific papers that usually appear in our conferences and workshops.

This issue contains three regular papers, two editorial notes and the best papers of five recent SIGCOMM’18 workshops. Our first paper, On Max-min Fair Allocation for Multi-source Transmission, co-authored by G. Li et al., proposes and evaluates an algorithm to provide max-min fairness in a network where the same information can be downloaded from multiple sources. In On Collaborative Predictive Blacklisting, Luca Melis and his colleagues study collaborative predictive blacklisting (CPB), wherein different organizations share information about attacks in real time and use it to update their blacklists. Finally, in Bootstrapping Privacy Services in Today’s Internet, T. Lee et al. propose and analyse different services that Internet Service Providers could offer to give their users better privacy.

The two editorial notes published in this issue are very different. In Toward Demand-Aware Networking: A Theory for Self-Adjusting Networks, C. Avin and S. Schmid propose to initiate the study of the theory of demand-aware, self-adjusting networks. In The 10th Workshop on Active Internet Measurements (AIMS-10) Report, kc Claffy and David Clark report the lessons learned from a recent workshop.

This issue also contains the best papers selected by the organizers of five SIGCOMM’18 workshops:

• Learning IP Network Representations presented by M. Li et al. at BIG-DAMA’18

• Measuring the Impact of a Successful DDoS Attack on the Customer Behaviour of Managed DNS Service Providers presented by A. Abhishta et al. at the Workshop on Traffic Measurements for Cybersecurity

• Making Content Caching Policies ‘Smart’ using the DeepCache Framework presented by A. Narayanan et al. at NetAI’18

• Refining Network Intents for Self-Driving Networks presented by A. Jacobs et al. at SelfDN’18

• A Formally Verified NAT Stack presented by S. Pirelli et al. at KBNets’18

The composition of the Editorial Board has also been modified recently. After several years of active participation, Costin Raiciu, Fahad Dogar, Alberto Dainotti and David Choffnes have concluded their term. I would like to thank them on behalf of the authors of all the papers that they handled during the last years. Sergey Gorinsky (IMDEA Networks, Spain) has agreed to join the Editorial board.

I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online (https:// or by email at ccr-editor at

Olivier Bonaventure

CCR Editor

On Max-min Fair Allocation for Multi-source Transmission

Geng Li, Yichen Qian, Y. Richard Yang


Max-min fairness is widely used in network traffic engineering to allocate available resources among different traffic transfers. Recently, as data replication techniques have matured, an increasing number of systems use multi-source transmission to maximize network utilization. However, existing TE approaches fail to deal with multi-source transfers because the optimization becomes a joint problem of bandwidth allocation and flow assignment among different sources. In this paper, we present a novel allocation approach for multi-source transfers to achieve global max-min fairness. The joint bandwidth-allocation and flow-assignment optimization problem poses a major challenge due to its nonlinearity and multiple objectives. We cope with this by deriving a novel transformation into a simple, equivalent canonical linear program, which achieves global optimality efficiently. We conduct data-driven simulations showing that our approach is more max-min fair than other single-source and multi-source allocation approaches, while also outperforming them with substantial gains in network throughput and transfer completion time.
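The paper's joint bandwidth-allocation and flow-assignment optimization is beyond a short sketch, but the underlying notion of max-min fairness on a single shared resource can be illustrated with the classical progressive-filling algorithm (the function below is a textbook illustration, not the paper's method):

```python
def max_min_fair(demands, capacity):
    """Progressive filling on one shared resource: repeatedly give every
    unsatisfied flow an equal share; flows whose demand is below that
    share keep only what they asked for, freeing capacity for the rest."""
    alloc = {}
    cap = float(capacity)
    pending = sorted(demands.items(), key=lambda kv: kv[1])  # smallest demand first
    for i, (flow, demand) in enumerate(pending):
        fair_share = cap / (len(pending) - i)  # equal split among remaining flows
        alloc[flow] = min(demand, fair_share)
        cap -= alloc[flow]
    return alloc
```

For example, demands of 2, 4, and 10 units on a link of capacity 12 yield allocations of 2, 4, and 6: the two small flows are fully satisfied and the large one receives the leftover fair share.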

Download the full article

On Collaborative Predictive Blacklisting

Luca Melis, Apostolos Pyrgelis, Emiliano De Cristofaro


Collaborative predictive blacklisting (CPB) allows forecasting future attack sources based on logs and alerts contributed by multiple organizations. Unfortunately, however, research on CPB has only focused on increasing the number of predicted attacks and has not considered the impact on false positives and false negatives. Moreover, sharing alerts is often hindered by confidentiality, trust, and liability issues, which motivates the need for privacy-preserving approaches to the problem. In this paper, we present a measurement study of state-of-the-art CPB techniques, aiming to shed light on the actual impact of collaboration. To this end, we reproduce and measure two systems: a non-privacy-friendly one that uses a trusted coordinating party with access to all alerts [12] and a peer-to-peer one using privacy-preserving data sharing [8]. We show that, while collaboration boosts the number of predicted attacks, it also yields high false positives, ultimately leading to poor accuracy. This motivates us to present a hybrid approach, using a semi-trusted central entity, that aims to increase the utility of collaboration while limiting information disclosure and false positives. This leads to a better trade-off between true and false positive rates, while at the same time addressing privacy concerns.
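The accuracy trade-off measured in the paper can be made concrete: given a predicted blacklist and the attack sources actually observed afterwards, false positives depress precision and false negatives depress recall. A minimal set-based sketch (function name and data are hypothetical):

```python
def blacklist_accuracy(predicted, observed):
    """Score a predicted blacklist against attacker IPs seen in the
    next time window; both arguments are sets of source addresses."""
    tp = len(predicted & observed)   # correctly predicted attackers
    fp = len(predicted - observed)   # blacklisted but never attacked
    fn = len(observed - predicted)   # attackers we failed to predict
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(observed) if observed else 0.0
    return precision, recall
```

A larger shared blacklist raises the chance of catching real attackers (recall) but, as the paper observes, can flood the result with false positives and drag precision down.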

Download the full article

Bootstrapping Privacy Services in Today’s Internet

Taeho Lee, Christos Pappas, Adrian Perrig


Internet users today have few solutions to cover a large space of diverse privacy requirements. We introduce the concept of privacy domains, which provide flexibility in expressing users’ privacy requirements. Then, we propose three privacy services that construct meaningful privacy domains and can be offered by ISPs. Furthermore, we illustrate that these services introduce little overhead for communication sessions and that they come with a low deployment barrier for ISPs.

Download the full article

Toward Demand-Aware Networking: A Theory for Self-Adjusting Networks

Chen Avin, Stefan Schmid


The physical topology is emerging as the next frontier in an ongoing effort to render communication networks more flexible. While first empirical results indicate that these flexibilities can be exploited to reconfigure and optimize the network toward the workload it serves, e.g., providing the same bandwidth at lower infrastructure cost, little is known today about the fundamental algorithmic problems underlying the design of reconfigurable networks. This paper initiates the study of the theory of demand-aware, self-adjusting networks. Our main position is that self-adjusting networks should be seen through the lens of self-adjusting data structures. Accordingly, we present a taxonomy classifying the different algorithmic models of demand-oblivious, fixed demand-aware, and reconfigurable demand-aware networks, introduce a formal model, and identify objectives and evaluation metrics. We also demonstrate, by examples, the inherent advantage of demand-aware networks over state-of-the-art demand-oblivious, fixed networks (such as expanders). We conclude by observing that the usefulness of self-adjusting networks depends on the spatial and temporal locality of the demand; as relevant data is scarce, we call for community action.
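The advantage of demand-awareness can be illustrated through the data-structure analogy the authors draw: a demand-aware tree, here a standard Huffman tree over request frequencies, achieves an expected depth close to the entropy of the demand, while a demand-oblivious balanced structure pays log2(n) regardless of skew. A small sketch (Huffman coding is a stand-in for illustration, not the paper's construction):

```python
import heapq
import math

def huffman_expected_depth(freqs):
    """Expected leaf depth of a Huffman tree built over the given
    request frequencies; this tracks the entropy of the demand."""
    total = sum(freqs)
    heap = list(freqs)
    heapq.heapify(heap)
    cost = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        cost += a + b            # merging adds one level above both subtrees
        heapq.heappush(heap, a + b)
    return cost / total
```

Under uniform demand the expected depth equals that of a balanced tree (log2 n, so no gain from adaptation), while under skewed demand it drops below log2 n: exactly the locality effect on which, per the authors, the usefulness of self-adjusting networks depends.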

Download the full article

The 10th Workshop on Active Internet Measurements (AIMS-10) Report

kc Claffy, David Clark


On 13-15 March 2018, CAIDA hosted its tenth Workshop on Active Internet Measurements (AIMS-10). This workshop series provides a forum for stakeholders in Internet active measurement projects to communicate their interests and concerns, and to explore cooperative approaches to maximizing the collective benefit of deployed infrastructure and gathered data. An overarching theme this year was how to inform new communications-policy legislation in the U.S. Given researchers' and policymakers' continued limited insight into Internet operations, we tried to focus these discussions on what data is or could be measured to shape and support current and emerging policy debates. Materials related to the workshop are at

Download the full article

Learning IP Network Representations

Mingda Li, Cristian Lumezanu, Bo Zong, Haifeng Chen


We present DIP, a deep-learning-based framework to learn structural properties of the Internet, such as node clustering or distance between nodes. Existing embedding-based approaches use linear algorithms on a single source of data, such as latency or hop count information, to approximate the position of a node in the Internet. In contrast, DIP computes low-dimensional representations of nodes that preserve structural properties and non-linear relationships across multiple, heterogeneous sources of structural information, such as IP, routing, and distance information. Using a large real-world data set, we show that DIP learns representations that preserve the real-world clustering of the associated nodes and predicts distance between them more than 30% better than a mean-based approach. Furthermore, DIP accurately imputes hop count distance to unknown hosts (i.e., not used in training) given only their IP addresses and routable prefixes. Our framework is extensible to new data sources and applicable to a wide range of problems in network monitoring and security.
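DIP's imputation task, predicting hop counts to unseen hosts from their IP addresses alone, can be contrasted with a crude baseline that averages the hop counts of known hosts sharing a routable prefix with the target. A minimal sketch (the function name and prefix lengths are illustrative, not part of DIP):

```python
import ipaddress

def impute_hops(known, target_ip, prefix_lens=(24, 16, 8)):
    """Baseline imputation: average the hop counts of known IPv4 hosts
    sharing the longest available prefix with the target address.
    known maps IP address strings to measured hop counts."""
    target = int(ipaddress.ip_address(target_ip))
    for plen in prefix_lens:                      # longest prefix first
        mask = (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF
        peers = [hops for ip, hops in known.items()
                 if (int(ipaddress.ip_address(ip)) & mask) == (target & mask)]
        if peers:
            return sum(peers) / len(peers)
    return None  # no known host shares even the shortest prefix
```

DIP's learned, non-linear embeddings are what such a prefix-average baseline is compared against; the baseline ignores routing structure entirely.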

Download the full article

Refining Network Intents for Self-Driving Networks

Arthur Selle Jacobs, Ricardo José Pfitscher, Ronaldo Alves Ferreira, Lisandro Zambenedetti Granville


Recent advances in artificial intelligence (AI) offer an opportunity for the adoption of self-driving networks. However, network operators and home-network users still do not have the right tools to exploit these advances, since they have to rely on low-level languages to specify network policies. Intent-based networking (IBN) allows operators to specify high-level policies that dictate how the network should behave without worrying about how they are translated into configuration commands on the network devices. However, existing research proposals for IBN fail to exploit the knowledge and feedback from the network operator to validate or improve the translation of intents. In this paper, we introduce a novel intent-refinement process that uses machine learning and feedback from the operator to translate the operator's utterances into network configurations. Our refinement process uses a sequence-to-sequence learning model to extract intents from natural language and uses the operator's feedback to improve learning. The key insight of our process is an intermediate representation that resembles natural language, which is suitable for collecting feedback from the operator yet structured enough to facilitate precise translations. Our prototype interacts with a network operator using natural language and translates the operator's input to the intermediate representation before translating it to SDN rules. Our experimental results show that our process achieves a correlation coefficient squared (i.e., R-squared) of 0.99 for a dataset with 5000 entries, and that the operator feedback significantly improves the accuracy of our model.
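The pipeline described here, from natural language to an intermediate representation to SDN rules, can be caricatured by substituting a keyword extractor for the learned sequence-to-sequence model (the phrase pattern and field names below are invented for illustration):

```python
import re

def utterance_to_ir(utterance):
    """Toy stand-in for the learned translation step: extract a
    structured intermediate representation from one fixed phrase shape.
    The real system learns this mapping and refines it with operator
    feedback; this regex handles a single hypothetical pattern only."""
    m = re.search(r"block (\w+) traffic from (\w+) to (\w+)", utterance.lower())
    if m is None:
        return None
    protocol, src, dst = m.groups()
    return {"action": "drop", "protocol": protocol, "from": src, "to": dst}
```

The point of the intermediate form is that an operator can read it back and confirm or correct it before it is compiled down to device rules, which is where the paper's feedback loop enters.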

Download the full article

Making Content Caching Policies ‘Smart’ using the DeepCache Framework

Arvind Narayanan, Saurabh Verma, Eman Ramadan, Pariya Babaie, Zhi-Li Zhang


In this paper, we present DeepCache, a novel framework for content caching, which can significantly boost cache performance. Our framework is based on powerful deep recurrent neural network models. It comprises two main components: i) an Object Characteristics Predictor, which builds upon a deep LSTM Encoder-Decoder model to predict the future characteristics of an object (such as object popularity); to the best of our knowledge, we are the first to propose an LSTM Encoder-Decoder model for content caching; ii) a caching policy component, which accounts for the predicted information about objects to make smart caching decisions. In our thorough experiments, we show that applying the DeepCache framework to existing cache policies, such as LRU and k-LRU, significantly boosts the number of cache hits.
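The second component of the framework, a policy that acts on predicted object popularity, can be sketched as a toy cache in which a recent-frequency count stands in for the paper's LSTM predictor (the class name, window size, and eviction rule below are illustrative assumptions, not DeepCache itself):

```python
from collections import deque

class PopularityCache:
    """Toy cache that evicts the object with the lowest predicted
    popularity. A recent-frequency count stands in for DeepCache's
    LSTM Encoder-Decoder predictor (an assumption of this sketch)."""

    def __init__(self, capacity, window=10):
        self.capacity = capacity
        self.cache = set()
        self.history = deque(maxlen=window)  # sliding window of recent requests
        self.hits = 0
        self.misses = 0

    def predict_popularity(self, obj):
        # Stand-in predictor: how often obj appeared in the recent window.
        return self.history.count(obj)

    def request(self, obj):
        """Serve one request; returns True on a cache hit."""
        self.history.append(obj)
        if obj in self.cache:
            self.hits += 1
            return True
        self.misses += 1
        if len(self.cache) >= self.capacity:
            # Evict the object predicted to be least popular.
            victim = min(self.cache, key=self.predict_popularity)
            self.cache.discard(victim)
        self.cache.add(obj)
        return False
```

Swapping the frequency count for a real predictor is exactly how the framework "smartens" policies like LRU: the eviction decision stays simple while the popularity signal improves.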

Download the full article

Measuring the Impact of a Successful DDoS Attack on the Customer Behaviour of Managed DNS Service Providers

Abhishta Abhishta, Roland van Rijswijk-Deij
and Lambert J. M. Nieuwenhuis

Distributed Denial-of-Service (DDoS) attacks continue to pose a serious threat to the availability of Internet services. The Domain Name System (DNS) is part of the core of the Internet and a crucial factor in the successful delivery of Internet services. Because of the importance of DNS, specialist service providers that offer managed DNS services have sprung up in the market. One of their key selling points is that they protect DNS for a domain against DDoS attacks. But what if such a service becomes the target of a DDoS attack, and that attack succeeds?

In this paper we analyse two such events, an attack on NS1 in May 2016, and an attack on Dyn in October 2016. We do this by analysing the change in the behaviour of the service’s customers. For our analysis we leverage data from the OpenINTEL active DNS measurement system, which covers large parts of the global DNS over time. Our results show an almost immediate and statistically significant change in the behaviour of domains that use NS1 or Dyn as a DNS service provider. We observe a decline in the number of domains that exclusively use NS1 or Dyn as a managed DNS service provider, and see a shift toward risk spreading by using multiple providers. While a large managed DNS provider may be better equipped to protect against attacks, these two case studies show they are not impervious to them. This calls into question the wisdom of using a single provider for managed DNS. Our results show that spreading risk by using multiple providers is an effective countermeasure, albeit probably at a higher cost.
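The behavioural shift measured here, domains moving away from exclusive reliance on one managed DNS provider, rests on classifying each domain's NS record set. A simplified sketch (the provider suffixes used are illustrative placeholders, not the providers studied):

```python
def classify_domains(ns_records, provider_suffixes):
    """Label each domain by its use of a given managed DNS provider:
    'exclusive' if every nameserver belongs to the provider, 'mixed'
    if only some do, 'other' if none do.
    ns_records maps a domain to its list of NS names."""
    labels = {}
    for domain, nameservers in ns_records.items():
        hits = sum(ns.endswith(provider_suffixes) for ns in nameservers)
        if nameservers and hits == len(nameservers):
            labels[domain] = "exclusive"
        elif hits > 0:
            labels[domain] = "mixed"
        else:
            labels[domain] = "other"
    return labels
```

Tracking how many domains move from "exclusive" to "mixed" across daily NS snapshots is the kind of signal the paper extracts from the OpenINTEL measurement data.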

Download the full article