Category Archives: CCR January 2020

The January 2020 Issue

This January 2020 issue starts the fiftieth volume of Computer Communication Review. This marks an important milestone for our newsletter. This issue contains four technical papers and three editorial notes. In C-Share: Optical Circuits Sharing for Software-Defined Data-Centers, Shay Vargaftik and his colleagues tackle the challenge of designing data-center networks that combine optical circuit switches with traditional packet switches.

In our second paper, A Survey on the Current Internet Interconnection Practices, Pedro Marcos and his colleagues provide the results of a survey answered by about one hundred network operators. This provides very interesting data on why and how networks interconnect. In A first look at the Latin American IXPs, Esteban Carisimo and his colleagues analyze how Internet eXchange Points have been deployed in Latin America during the last decade.

Our fourth technical paper, Internet Backbones in Space, explores different aspects of using satellite networks to create Internet backbones. Giacomo Giuliari and his colleagues analyse four approaches to organise routing between the ground segment of satellite networks (SNs) and traditional terrestrial ISP networks.

Then, we have three very different editorial notes. In Network architecture in the age of programmability, Anirudh Sivaraman and his colleagues look at where programmable functions should be placed inside networks and the impact that programmability can have on the network architecture. In The State of Network Neutrality Regulation, Volker Stocker, Georgios Smaragdakis and William Lehr analyse network neutrality from the US and European viewpoints by considering the technical and legal implications of this debate. In Gigabit Broadband Measurement Workshop Report, William Lehr, Steven Bauer and David Clark report on a recent workshop that discussed the challenges of correctly measuring access networks as they reach bandwidths of 1 Gbps and more.

Finally, I’m happy to welcome Matthew Caesar as the new SIGCOMM Education Director. He is currently preparing different initiatives, including an education section in CCR.

I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online (https://ccronline.sigcomm.org) or by email at ccr-editor at sigcomm.org.

C-Share: Optical Circuits Sharing for Software-Defined Data-Centers

S. Vargaftik, C. Caba, L. Schour, Y. Ben-Itzhak

Abstract

Integrating optical circuit switches in data-centers is an on-going research challenge. In recent years, state-of-the-art solutions have introduced hybrid packet/circuit architectures for different optical circuit switch technologies, control techniques, and traffic re-routing methods. These solutions are based on separated packet and circuit planes that cannot utilize an optical circuit with flows that do not arrive from, or are not delivered to, switches directly connected to the circuit's end-points. Moreover, current SDN-based elephant flow re-routing methods require a forwarding rule for each flow, which raises scalability issues.

In this paper, we present C-Share – a scalable SDN-based circuit sharing solution for data center networks. C-Share inherently enables elephant flows to share optical circuits by exploiting a flat top-of-rack tier network topology. C-Share is based on a scalable and decoupled SDN-based elephant flow re-routing method comprised of elephant flow detection, tagging, and identification, realized using a prevalent network sampling method (e.g., sFlow). C-Share requires only a single OpenFlow rule for each optical circuit, and therefore significantly reduces the required OpenFlow rule entry footprint and rule setup rate. It also mitigates the OpenFlow outbound latency for subsequent elephant flows. We implement a proof-of-concept system for C-Share based on Mininet, and test the scalability of C-Share by using an event-driven simulation. Our results show a consistent increase in the mice/elephant flow separation in the network, which, in turn, improves both network throughput and flow completion time.
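The sampling-based detection step the abstract describes can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the flow identifiers, sampling rate, and byte threshold are hypothetical choices.

```python
# Sketch of sampling-based elephant-flow detection (sFlow-style): count
# sampled bytes per flow and tag flows whose estimated volume crosses a
# threshold, so a single wildcard rule per circuit can match the tag.
from collections import defaultdict

class ElephantDetector:
    def __init__(self, threshold_bytes=1_000_000, sampling_rate=1 / 128):
        self.threshold = threshold_bytes      # estimated-volume cutoff (assumed)
        self.rate = sampling_rate             # e.g., 1-in-128 packet sampling
        self.sampled = defaultdict(int)       # flow_id -> sampled byte count

    def on_sample(self, flow_id, pkt_bytes):
        # Called for each sampled packet reported by the collector.
        self.sampled[flow_id] += pkt_bytes

    def is_elephant(self, flow_id):
        # Scale sampled bytes up by the sampling rate to estimate true volume.
        return self.sampled[flow_id] / self.rate >= self.threshold

d = ElephantDetector()
for _ in range(10):
    d.on_sample("A", 1500)    # ~15 KB sampled -> ~1.9 MB estimated
d.on_sample("B", 1500)        # one sampled packet -> well under threshold
print(d.is_elephant("A"), d.is_elephant("B"))  # True False
```

The point of tagging (rather than installing one rule per elephant) is that the data plane then needs only one match-on-tag rule per circuit, which is where the rule-footprint savings come from.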

Download the full article

A survey on the current internet interconnection practices

Pedro Marcos, Marco Chiesa, Christoph Dietzel, Marco Canini, Marinho Barcellos

Abstract

The Internet topology has significantly changed in the past years. Today, it is richly connected and flattened. Such a change has been driven mostly by the fast growth of peering infrastructures and the expansion of Content Delivery Networks as alternatives to reduce interconnection costs and improve traffic delivery performance. While the topology evolution is perceptible, it is unclear whether or not the interconnection process has evolved or if it continues to be an ad-hoc and lengthy process. To shed light on the current practices of the Internet interconnection ecosystem and how these could impact the Internet, we surveyed more than 100 network operators and peering coordinators. We divide our results into two parts: (i) the current interconnection practices, including the steps of the process, the reasons to establish new interconnection agreements or to renegotiate existing ones, and the parameters discussed by network operators. In part (ii), we report the existing limitations and how the interconnection ecosystem can evolve in the future. We show that despite the changes in the topology, interconnecting continues to be a cumbersome process that usually takes days, weeks, or even months to complete, which is in stark contrast with the desire of most operators to reduce the interconnection setup time. We also identify that, despite being primary candidates to evolve the interconnection process, emerging on-demand connectivity companies are only closing part of the existing gap between the current interconnection practices and the network operators' desires.

Download the full article

Internet backbones in space

Giacomo Giuliari, Tobias Klenze, Markus Legner, David Basin, Adrian Perrig and Ankit Singla

Abstract

Several “NewSpace” companies have launched the first of thousands of planned satellites for providing global broadband Internet service. The resulting low-Earth-orbit (LEO) constellations will not only bridge the digital divide by providing service to remote areas, but they also promise much lower latency than terrestrial fiber for long-distance routes. We show that unlocking this potential is non-trivial: such constellations provide inherently variable connectivity, which today’s Internet is ill-suited to accommodate. We therefore study cost–performance tradeoffs in the design space for Internet routing that incorporates satellite connectivity, examining four solutions ranging from naïvely using BGP to an ideal, clean-slate design. We find that the optimal solution is provided by a path-aware networking architecture in which end-hosts obtain information and control over network paths. However, a pragmatic and more deployable approach inspired by the design of content distribution networks can also achieve stable and close-to-optimal performance.
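The latency claim rests on simple physics: light propagates at roughly two thirds of c in fiber but at c in free space. A back-of-the-envelope check (ours, not the paper's; the distances below are illustrative) shows why a LEO path can beat a long terrestrial fiber route despite the extra up- and down-links:

```python
# One-way propagation delay: terrestrial fiber vs. a free-space LEO path.
C = 299_792.458          # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3     # approximate refractive slowdown in optical fiber

def fiber_delay_ms(km):
    return km / (C * FIBER_FACTOR) * 1000

def freespace_delay_ms(km):
    return km / C * 1000

great_circle = 9_000                 # an assumed long intercontinental route, km
leo_path = great_circle + 2 * 550    # add up/down links at ~550 km altitude

print(f"fiber:      {fiber_delay_ms(great_circle):.1f} ms")   # ~45.0 ms
print(f"free space: {freespace_delay_ms(leo_path):.1f} ms")   # ~33.7 ms
```

Real fiber routes are also longer than the great circle and real constellation paths hop between satellites, so this only bounds the effect, but the direction of the advantage holds for long routes.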

Download the full article

Network architecture in the age of programmability

Anirudh Sivaraman, Thomas Mason, Aurojit Panda, Ravi Netravali, Sai Anirudh Kondaveeti

Abstract

Motivated by the rapid emergence of programmable switches, programmable network interface cards, and software packet processing, this paper asks: given a network task (e.g., virtualization or measurement) in a programmable network, should we implement it at the network’s end hosts (the edge) or its switches (the core)? To answer this question, we analyze a range of common network tasks spanning virtualization, deep packet inspection, measurement, application acceleration, and resource management. We conclude that, while the edge is better or even required for certain network tasks (e.g., virtualization, deep packet inspection, and access control), implementing other tasks (e.g., measurement, congestion control, and scheduling) in the network’s core has significant benefits, especially as we raise the bar for the performance we demand from our networks.

More generally, we extract two primary properties that govern where a network task should be implemented: (1) time scales, or how quickly a task needs to respond to changes, and (2) data locality, or the placement of tasks close to the data that they must access. For instance, we find that the core is preferable when implementing tasks that need to run at short time scales, e.g., detecting fleeting and infrequent microbursts or rapid convergence to fair shares in congestion control. Similarly, we observe that tasks should be placed close to the state that they need to access, e.g., at the edge for deep packet inspection that requires private keys, and in the core for congestion control and measurement that needs to access queue depth information at switches.
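The two properties the authors extract can be distilled into a small decision helper. This is a hypothetical illustration of the placement logic described above, not the paper's method; the reaction-time threshold and state labels are our assumptions.

```python
# Illustrative edge-vs-core placement rule based on the two properties in the
# abstract: (1) time scale and (2) data locality.
def place_task(reaction_time_us, state_location):
    """Return 'edge' or 'core' for a network task.

    reaction_time_us: how quickly the task must respond to changes.
    state_location:   'host'   (e.g., private keys for deep packet inspection)
                      'switch' (e.g., queue-depth state for congestion control)
    """
    if state_location == "host":
        return "edge"    # data locality dominates: the state lives at end hosts
    if reaction_time_us < 100 or state_location == "switch":
        return "core"    # short time scales (microbursts) or switch-local state
    return "edge"

print(place_task(10, "switch"))     # core: fast reaction on queue-depth state
print(place_task(10_000, "host"))   # edge: DPI needing end-host private keys
```

The 100-microsecond cutoff is arbitrary; the point is that placement follows from where the state lives and how fast the task must react, not from a fixed edge-or-core doctrine.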

Download the full article

A first look at the Latin American IXPs

E. Carisimo, J. Del Fiore, D. Dujovne, C. Pelsser, I. Alvarez-Hamelin

Abstract

We investigated Internet eXchange Points (IXPs) deployed across Latin America. We discovered that many Latin American states have been actively involved in the development of their IXPs. We further found a correlation between the success of a national IXP and the absence of local monopolistic ASes that concentrate the country’s IPv4 address space. In particular, three IXPs have been able to gain local traction: IX.br-SP, CABASE-BUE and PIT Chile-SCL. We further compared these larger IXPs with others outside Latin America. We found that, in developing regions, IXPs have had a similar growth in recent years and are mainly populated by regional ASes. The latter point clearly contrasts with more internationally renowned European IXPs whose members span multiple regions.

Download the full article

The State of Network Neutrality Regulation

Volker Stocker, Georgios Smaragdakis and William Lehr

Abstract

The Network Neutrality (NN) debate refers to the battle over the design of a regulatory framework for preserving the Internet as a public network and open innovation platform. Fueled by concerns that broadband access service providers might abuse network management to discriminate against third party providers (e.g., content or application providers), policymakers have struggled with designing rules that would protect the Internet from unreasonable network management practices. In this article, we provide an overview of the history of the debate in the U.S. and the EU and highlight the challenges that will confront network engineers designing and operating networks as the debate continues to evolve.

Download the full article

Gigabit broadband measurement workshop report

Steve Bauer, David Clark, William Lehr

Abstract

On July 24-25, 2018, MIT hosted an invitation-only workshop for network researchers from academia, industry, and the policy community engaged in the design and operation of test schemes to measure broadband access, in order to address the measurement challenges associated with high-speed (gigabit) broadband Internet access services. The focus of the workshop was on assessing the current state-of-the-art in gigabit broadband measurement tools and practices, the technical and policy challenges that arise, and possible strategies for addressing these challenges. A goal of this initial workshop was to provide a level-setting and networking opportunity among representatives of many of the leading research and operational efforts within academia, industry, and government that collect, analyze, and make use of broadband and Internet performance measurement data, and that design measurement methods and tools.

Download the full article