Andreas Reuter, Randy Bush, Italo Cunha, Ethan Katz-Bassett, Thomas C. Schmidt, Matthias Wählisch
A proposal to improve routing security—Route Origin Authorization (ROA)—has been standardized. A ROA specifies which network is allowed to announce a set of Internet destinations. While some networks now specify ROAs, little is known about whether other networks check routes they receive against these ROAs, a process known as Route Origin Validation (ROV). Which networks blindly accept invalid routes? Which reject them outright? Which de-preference them if alternatives exist?
Recent analysis attempts to use uncontrolled experiments to characterize ROV adoption by comparing valid routes and invalid routes. However, we argue that gaining a solid understanding of ROV adoption is impossible using currently available data sets and techniques. Instead, we devise a verifiable methodology of controlled experiments for measuring ROV. Our measurements suggest that, although some ISPs are not observed using invalid routes in uncontrolled experiments, they are actually using different routes for (non-security) traffic engineering purposes, without performing ROV. We conclude by presenting three ASes that do implement ROV, as confirmed by their operators.
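To make the validation step concrete, here is a minimal sketch (not the authors' measurement tooling; the ROA entries, AS numbers, and function name are hypothetical) of how a route's origin can be checked against a set of ROAs, following the valid/invalid/not-found outcomes defined in RFC 6811:

```python
# Minimal sketch of RFC 6811 Route Origin Validation (ROV).
# ROA entries and AS numbers below are hypothetical examples.
import ipaddress

# Hypothetical ROA set: (authorized prefix, maxLength, authorized origin AS)
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    (ipaddress.ip_network("203.0.113.0/24"), 25, 64501),
]

def rov(prefix: str, origin_as: int) -> str:
    route = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        # A ROA "covers" the route if the route falls inside the ROA's prefix.
        if route.subnet_of(roa_prefix):
            covered = True
            # Valid if the origin AS matches and the announced prefix is
            # no more specific than the ROA's maxLength allows.
            if origin_as == asn and route.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(rov("192.0.2.0/24", 64500))     # valid
print(rov("192.0.2.0/25", 64500))     # invalid (exceeds maxLength 24)
print(rov("198.51.100.0/24", 64500))  # not-found (no covering ROA)
```

An ROV-enforcing router would reject or de-preference the "invalid" case; the paper's controlled experiments probe exactly which of these behaviors networks exhibit in practice.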
Download the full article DOI: 10.1145/3211852.3211856
Saeed Akhoondian Amiri, Klaus-Tycho Foerster, Riko Jacob, Stefan Schmid
Modern computer networks support interesting new routing models in which traffic flows from a source s to a destination t can be flexibly steered through a sequence of waypoints, such as (hardware) middleboxes or (virtualized) network functions (VNFs), to create innovative network services like service chains or segment routing. While the benefits and technological challenges of providing such routing models have been articulated and studied intensively over recent years, less is known about the underlying algorithmic traffic routing problems.
The goal of this paper is to provide the networking community with an overview of algorithmic techniques for waypoint routing and to point out limitations due to computational hardness. In particular, we put the waypoint routing problem into perspective with respect to classic graph-theoretical problems. For example, we find that while computing a shortest path from a source s to a destination t is simple (e.g., using Dijkstra's algorithm), the problem of finding a shortest route from s to t via a single waypoint already features a deep combinatorial structure.
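As an illustration of the easy case, the sketch below (assuming an undirected graph in which edges may be traversed repeatedly, i.e., no capacity constraints; the topology and weights are hypothetical) routes from s to t through one waypoint w by concatenating two Dijkstra runs. The combinatorial depth the paper discusses arises once link capacities forbid such edge reuse, where this simple concatenation no longer applies.

```python
# Minimal sketch: shortest s->t route via one waypoint w, assuming
# edges may be reused (no capacity constraints). Topology is hypothetical.
import heapq

def dijkstra(graph, src):
    """Shortest distances from src; graph maps node -> [(neighbor, weight)]."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def via_waypoint(graph, s, w, t):
    """Length of a shortest s->t walk through w when edges are reusable."""
    return dijkstra(graph, s).get(w, float("inf")) + \
           dijkstra(graph, w).get(t, float("inf"))

g = {
    "s": [("a", 1), ("w", 4)],
    "a": [("s", 1), ("w", 1), ("t", 5)],
    "w": [("s", 4), ("a", 1), ("t", 2)],
    "t": [("a", 5), ("w", 2)],
}
print(via_waypoint(g, "s", "w", "t"))  # 4, via s-a-w-t (1 + 1 + 2)
```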
Download the full article DOI: 10.1145/3211852.3211859
In this article I will first argue that a Service-Infrastructure Cycle is fundamental to networking evolution. Networks are built to accommodate certain services at an expected scale. New applications and/or a significant increase in scale require a rethinking of network mechanisms, which results in new deployments. Four decades' worth of iterations of this process have yielded the Internet as we know it today: a common and shared global networking infrastructure that delivers almost all services. I will further argue, using brief historical case studies, that the success of network mechanism deployments often hinges on whether or not mechanism evolution follows the iterations of this Cycle. Many have observed that this network, the Internet, has become ossified and unable to change in response to new demands. In other words, after decades of operation, the Service-Infrastructure Cycle has become stuck. However, novel service requirements and scale increases continue to exert significant pressure on this ossified infrastructure. The result, I will conjecture, will be a fragmentation, the beginnings of which are evident today, that will ultimately change the character of the network infrastructure in fundamental ways. By ushering in a ManyNets world, this fragmentation will lubricate the Service-Infrastructure Cycle so that it can continue to govern the evolution of networking. I conclude this article with a brief discussion of the possible implications of this emerging ManyNets world for networking research.
Download the full article DOI: 10.1145/3211852.3211861
Pavlos Sermpezis, Vasileios Kotronis, Alberto Dainotti, Xenofontas Dimitropoulos
BGP prefix hijacking is a threat to Internet operators and users. Several mechanisms or modifications to BGP that protect the Internet against it have been proposed. However, the reality is that most operators have not deployed them and are reluctant to do so in the near future. Instead, they rely on basic – and often inefficient – proactive defenses to reduce the impact of hijacking events, or on detection based on third-party services and reactive approaches that might take up to several hours. In this work, we present the results of a survey we conducted among 75 network operators to study: (a) the operators' awareness of BGP prefix hijacking attacks, (b) presently used defenses (if any) against BGP prefix hijacking, (c) the willingness to adopt new defense mechanisms, and (d) reasons that may hinder the deployment of BGP prefix hijacking defenses. We expect the findings of this survey to increase the understanding of existing BGP hijacking defenses and the needs of network operators, as well as contribute towards designing new defense mechanisms that satisfy the requirements of the operators.
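As background on why such hijacks are effective, the sketch below (illustrative only; the prefixes and AS numbers are hypothetical) shows longest-prefix-match route selection, by which a hijacker's more-specific announcement attracts traffic away from the legitimate origin:

```python
# Minimal sketch of longest-prefix-match selection, the mechanism that
# makes a more-specific hijack effective. Announcements are hypothetical.
import ipaddress

# (announced prefix, origin AS) entries visible to a router.
RIB = [
    (ipaddress.ip_network("198.51.100.0/24"), 64500),  # legitimate origin
    (ipaddress.ip_network("198.51.100.0/25"), 64666),  # hijacker's more-specific
]

def best_route(dst: str):
    addr = ipaddress.ip_address(dst)
    matches = [(p, asn) for p, asn in RIB if addr in p]
    # Longest-prefix match: the most-specific covering prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)

print(best_route("198.51.100.10"))  # selects the /25, i.e., the hijacker's AS
```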
Download the full article DOI: 10.1145/3211852.3211862
Damien Saucez, Luigi Iannone
Ensuring the reproducibility of results is an essential part of experimental sciences, including computer networking. Unfortunately, as highlighted recently, a large portion of research results are hardly, if at all, reproducible, raising reasonable doubts about the research carried out around the world.
Recent years have shown an increasing awareness of the reproducibility of results as an essential part of research carried out by members of the ACM SIGCOMM community. To address this important issue, ACM has introduced a new policy on results and artifacts review and badging. The policy defines the terminology to be used to assess results and artifacts but does not specify the review process or how to make research reproducible.
During SIGCOMM 2017, a side workshop was organized with the specific purpose of tackling this issue. The objective was to trigger discussion and activity in order to craft recommendations on how to introduce incentives for authors to share their artifacts, along with the details on how to use them, as well as to define the process to be used.
This editorial overviews the workshop activity and summarizes the main discussions and outcomes.
Download the full article DOI: 10.1145/3211852.3211863
Matthias Flittner, Mohamed Naoufal Mahfoudi, Damien Saucez, Matthias Wählisch, Luigi Iannone, Vaibhav Bajpai, Alex Afanasyev
Reproducibility of artifacts is a cornerstone of most scientific publications. To improve the current state and strengthen ongoing community efforts towards reproducibility by design, we conducted a survey of the papers published at leading ACM computer networking conferences in 2017: CoNEXT, ICN, IMC, and SIGCOMM.
The objective of this paper is to assess the current state of artifact availability and reproducibility based on this survey. We hope that it will serve as a starting point for further discussions and encourage researchers to ease the reproduction of scientific work published within the SIGCOMM community. Furthermore, we hope this work will inspire program chairs of future conferences to emphasize reproducibility within the ACM SIGCOMM community and will strengthen awareness among researchers.
Download the full article DOI: 10.1145/3211852.3211864