Pavlos Sermpezis, Vasileios Kotronis, Alberto Dainotti, Xenofontas Dimitropoulos
BGP prefix hijacking is a threat to Internet operators and users. Several mechanisms or modifications to BGP that protect the Internet against it have been proposed. However, the reality is that most operators have not deployed them and are reluctant to do so in the near future. Instead, they rely on basic – and often inefficient – proactive defenses to reduce the impact of hijacking events, or on detection based on third-party services and reactive approaches that might take up to several hours. In this work, we present the results of a survey we conducted among 75 network operators to study: (a) the operators' awareness of BGP prefix hijacking attacks, (b) presently used defenses (if any) against BGP prefix hijacking, (c) the willingness to adopt new defense mechanisms, and (d) reasons that may hinder the deployment of BGP prefix hijacking defenses. We expect the findings of this survey to increase the understanding of existing BGP hijacking defenses and the needs of network operators, as well as contribute towards designing new defense mechanisms that satisfy the requirements of the operators.
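One of the basic detection approaches the survey alludes to can be illustrated with a minimal sketch: comparing the origin AS announced for a prefix against a list of legitimate origins, in the spirit of origin validation. All prefixes, AS numbers, and records below are invented for illustration; real detection services work on live BGP feeds and handle far more cases.

```python
# Hypothetical sketch of control-plane origin checking. The "ground truth"
# mapping and all AS numbers are made up for illustration only.
legit_origins = {
    "203.0.113.0/24": {64500},
    "198.51.100.0/24": {64501, 64502},  # legitimately multi-origin prefix
}

def check_update(prefix, as_path):
    """Flag a BGP update whose origin AS (the last hop in the AS path)
    is not among the registered origins for the prefix."""
    origin = as_path[-1]
    allowed = legit_origins.get(prefix)
    if allowed is None:
        return "unknown prefix"
    return "ok" if origin in allowed else f"possible hijack by AS{origin}"

print(check_update("203.0.113.0/24", [64510, 64505, 64500]))  # ok
print(check_update("203.0.113.0/24", [64510, 64666]))         # possible hijack by AS64666
```

Even this toy check shows why purely origin-based detection is incomplete: a path-manipulation attack that keeps the legitimate origin at the end of a forged AS path would pass it, which is one reason operators in the survey combine several defenses.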
Download the full article DOI: 10.1145/3211852.3211862
Damien Saucez, Luigi Iannone
Ensuring the reproducibility of results is an essential part of experimental sciences, including computer networking. Unfortunately, as highlighted recently, a large portion of research results are hardly, if at all, reproducible, raising reasonable doubts about the research carried out around the world.
Recent years have shown an increasing awareness about reproducibility of results as an essential part of research carried out by members of the ACM SIGCOMM community. To address this important issue, ACM has introduced a new policy on results and artifacts review and badging. The policy defines the terminology to be used to assess results and artifacts but does not specify the review process or how to make research reproducible.
During SIGCOMM’17, a side workshop was organized with the specific purpose of tackling this issue. The objective was to trigger discussion and activity in order to craft recommendations on how to introduce incentives for authors to share their artifacts, along with the details of how to use them, as well as to define the review process to be used.
This editorial overviews the workshop activity and summarizes the main discussions and outcomes.
Download the full article DOI: 10.1145/3211852.3211863
Matthias Flittner, Mohamed Naoufal Mahfoudi, Damien Saucez, Matthias Wählisch, Luigi Iannone, Vaibhav Bajpai, Alex Afanasyev
Reproducibility of artifacts is a cornerstone of most scientific publications. To improve the current state and strengthen ongoing community efforts towards reproducibility by design, we conducted a survey among the papers published at leading ACM computer networking conferences in 2017: CoNEXT, ICN, IMC, and SIGCOMM.
The objective of this paper is to assess the current state of artifact availability and reproducibility based on a survey. We hope that it will serve as a starting point for further discussions encouraging researchers to ease the reproduction of scientific work published within the SIGCOMM community. Furthermore, we hope this work will inspire program chairs of future conferences to emphasize reproducibility within the ACM SIGCOMM community and will strengthen researchers' awareness of the issue.
Download the full article DOI: 10.1145/3211852.3211864
This report summarizes a two-and-a-half-day Dagstuhl seminar on “Using Networks to Teach About Networks”. The seminar brought together people with mixed backgrounds in order to exchange experiences gained with different approaches to teaching computer networking. Beyond the obvious question of what to teach, special attention was given to the questions of how to teach and which tools and infrastructures can be used effectively today for teaching purposes.
Download the full article
On December 8-9, 2016, CAIDA hosted the 7th interdisciplinary Workshop on Internet Economics (WIE) at UC San Diego's Supercomputer Center. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to inform current and emerging regulatory and policy debates. This year we first returned to the list of aspirations we surveyed at the 2014 workshop and described the challenges of mapping them to actions and measurable progress. We then reviewed evolutionary shifts in traffic, topology, business, and regulatory models, and (our best understanding of) the economics of the ecosystem. These discussions inspired an extended thought experiment for the second day of the workshop: outlining a new telecommunications legislative framework, including a proposed set of goals and scope for such regulation, and a minimal list of sections required to pursue and measure progress toward those goals. The format was a series of focused sessions in which presenters gave 10-minute talks on relevant issues, followed by in-depth discussions. This report highlights the discussions and presents relevant open research questions identified by participants. Slides presented and this report are available at
Download the full article
The 28th International Teletraffic Congress (ITC 28) was held on 12–16 September 2016 at the University of Würzburg, Germany. The conference was technically co-sponsored by the IEEE Communications Society and the Information Technology Society within VDE, in cooperation with ACM SIGCOMM. ITC 28 provided a forum for leading researchers from academia and industry to present and discuss the latest advances and developments in the design, modelling, measurement, and performance evaluation of communication systems, networks, and services. The main theme of ITC 28, Digital Connected World, reflects the evolution of communications and networking, which is continually changing the world we live in. The technical program was composed of 37 contributed full papers, 6 short demo papers, and 3 keynote addresses. Three workshops dedicated to timely topics were sponsored: Programmability for Cloud Networks and Applications, Quality of Experience Centric Management, and Quality Engineering for a Reliable Internet of Services.
See ITC 28 Homepage: https://itc28.org/
Download the full article DOI: 10.1145/3089262.3089268
This is a report on the Workshop on Tracking Quality of Experience in the Internet, held at Princeton, October 21–22, 2015, jointly sponsored by the National Science Foundation and the Federal Communications Commission. The term Quality of Experience (QoE) describes a user’s subjective assessment of their experience when using a particular application. In the past, network engineers have typically focused on Quality of Service (QoS): performance metrics such as throughput, delay and jitter, packet loss, and the like. Yet performance as measured by QoS parameters only matters insofar as it affects the experience of users as they attempt to use a particular application. Ultimately, the user’s experience is determined by QoE impairments (e.g., rebuffering). Although QoE and QoS are related (for example, a video rebuffering event may be caused by a high packet-loss rate), it is the QoE metrics that ultimately capture the user’s experience.
Identifying the causes of QoE impairments is complex, since the impairments may arise in one region of the network or another, in the home network, on the user’s device, in servers that are part of the application, or in supporting services such as the DNS. Additionally, metrics for QoE continue to evolve, as do the methods for relating QoE impairments to underlying causes that could be measurable using standard network measurement techniques. Finally, as the capabilities of the underlying network infrastructure continue to evolve, researchers should also consider how infrastructure and tools can be designed to best support measurements that better identify the locations and causes of QoE impairments.
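The QoS-to-QoE relationship described above can be made concrete with a toy playback-buffer model, in which a drop in throughput (a QoS metric) produces rebuffering (a QoE impairment). The bitrates and throughput traces below are invented for illustration; real players adapt bitrate and buffer far more elaborately.

```python
# Hypothetical sketch: one second of playback is simulated per throughput
# sample. All throughput and bitrate values are made up for illustration.
def rebuffer_seconds(throughput_mbps, bitrate_mbps, startup_buffer_s=1.0):
    """Count the seconds a player stalls waiting for its buffer to refill."""
    buffer_s = startup_buffer_s  # seconds of video buffered at playback start
    stalled = 0.0
    for tput in throughput_mbps:
        buffer_s += tput / bitrate_mbps  # video seconds downloaded this second
        if buffer_s >= 1.0:
            buffer_s -= 1.0              # play one second of video
        else:
            stalled += 1.0               # buffer empty: a rebuffering second
    return stalled

# Healthy 4 Mbps link, 2 Mbps video: buffer grows, no stalls.
print(rebuffer_seconds([4.0] * 5, 2.0))  # 0.0
# Congested 1 Mbps link, 2 Mbps video: periodic stalls appear.
print(rebuffer_seconds([1.0] * 6, 2.0))  # 2.0
```

The same QoS trace maps to different QoE outcomes depending on application state (here, the startup buffer), which is one reason inferring QoE from network-level measurements alone is hard.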
The workshop’s aim was to understand the current state of QoE research and to contemplate a community agenda for integrating ongoing threads of QoE research into a collaboration. This summary report describes the topics discussed and summarizes the key points of the discussion. Materials related to the workshop are available at http://aqualab.cs.northwestern.edu/NSFWorkshop-InternetQoE
Download the full article DOI: 10.1145/3041027.3041035
Edith Ngai, Börje Ohlman, Gene Tsudik, Ersin Uzun, Matthias Wählisch, Christopher A. Wood
In recent years, Information-centric Networking (ICN) has received much attention from both academic and industry participants. ICN offers data-centric inter-networking that is radically different from today’s host-based IP networks. Security and privacy features on today’s Internet were originally not present and have been incrementally retrofitted over the last 35 years. As such, these issues have become increasingly important as ICN technology gradually matures towards real-world deployment. Thus, while ICN-based architectures (e.g., NDN, CCNx, etc.) are still evolving, it is both timely and important to explore ICN security and privacy issues as well as devise and assess possible mitigation techniques.
This report documents the highlights and outcomes of Dagstuhl Seminar 16251 on “Information-centric Networking and Security,” the goal of which was to bring together researchers to discuss and address security and privacy issues particular to ICN-based architectures. At the close of the three-day workshop, the outlook for ICN remained unclear. Many unsolved and ill-addressed problems remain, such as namespace and identity management, object security and forward secrecy, and privacy. Regardless of the fate of ICN, one thing is certain: much more research and practical experience with these systems is needed to make progress on these arduous problems.
Download the full article DOI: 10.1145/3041027.3041034
Matthias Hollick, Cristina Nita-Rotaru, Panagiotis Papadimitratos, Adrian Perrig, Stefan Schmid
A secure routing protocol represents a foundational building block of a dependable communication system. Unfortunately, no taxonomy currently exists to assist in the design and analysis of secure routing protocols. Based on Dagstuhl Seminar 15102, this paper initiates the study of more structured approaches to describing secure routing protocols and the corresponding attacker models, in an effort to better understand existing secure routing protocols and to provide a framework for designing new ones. We decompose the routing system into its key components based on a functional model of routing. This allows us to classify possible attacks on secure routing protocols. Using our taxonomy, we observe that the most effective attacks target the information in the control plane. Accordingly, unlike classic attackers, whose capabilities are often described in terms of computational complexity, we propose to classify the power of an attacker with respect to its reach, that is, the extent to which the attacker can influence routing information indirectly, beyond the locations under its direct control.
Download the full article DOI: 10.1145/3041027.3041033
These Editor’s notes describe the contents of the January 2017 issue of CCR and summarize a survey on the reproducibility of networking research.
Download the full article DOI: 10.1145/3041027.3041028