Author Archives: Steve Uhlig

The April 2020 Issue

SIGCOMM Computer Communication Review (CCR) is produced by a group of members of our community who spend time preparing the newsletter that you read every quarter. Olivier Bonaventure served as editor during the last four years and his term is now over. It is my pleasure to now serve the community as the editor of CCR. As Olivier and other editors did in the past, we’ll probably adjust the newsletter to the evolving needs of the community. A first change is the introduction of a new Education series led by Matthew Caesar, our new SIGCOMM Education Director. This series will be part of every issue of CCR and will contain different types of contributions: not only technical papers, as in the current issue, but also position papers (that promote discussion through a defensible opinion on a topic), studies (describing research questions, methods, and results), experience reports (that describe an approach with a reflection on why it did or did not work), and approach reports (that describe a technical approach with enough detail for adoption by others).

This April 2020 issue contains five technical papers, the first paper of our new education series, as well as three editorial notes.

The first technical paper, RIPE IPmap Active Geolocation: Mechanism and Performance Evaluation, by Ben Du and his colleagues, introduces the research community to the IPmap single-radius engine and evaluates its effectiveness against commercial geolocation databases.

It is often believed that traffic engineering changes are rather infrequent. In the second paper, Path Persistence in the Cloud: A Study of the Effects of Inter-Region Traffic Engineering in a Large Cloud Provider’s Network, Waleed Reda and his colleagues reveal the high frequency of traffic engineering activity within a large cloud provider’s network.

In the third paper, The Web is Still Small After More Than a Decade, Nguyen Phong Hoang and his colleagues revisit some of the decade-old studies on web presence and co-location.

The fourth paper, An Artifact Evaluation of NDP, by Noa Zilberman, is a reproducible paper that originated in the IMC reproducibility track and provides an analysis of NDP (New Datacentre Protocol). NDP was first presented at ACM SIGCOMM 2017 (best paper award) and proposes a novel data centre transport architecture. In this paper, the author builds on the artefact provided by the original authors of NDP, showing how it is possible to carry out research and build new results on previous work done by fellow researchers.

Excessive queueing delay in the network hurts latency-sensitive applications. The Low Latency, Low Loss, Scalable throughput (L4S) architecture addresses this problem by combining scalable congestion controls, such as DCTCP and TCP Prague, with early congestion signalling from the network. In our fifth technical paper, Validating the Sharing Behavior and Latency Characteristics of the L4S Architecture, Dejene Boru Oljira and his colleagues validate some of the experimental results reported in previous work, demonstrating the coexistence of scalable and classic congestion controls and the low-latency service that L4S provides.

The sixth paper, also our very first paper in the new education series, An Open Platform to Teach How the Internet Practically Works, by Thomas Holterbach and his colleagues, describes a software infrastructure that can be used to teach how the Internet works. The platform presented by the authors aims to be a much smaller, yet representative, copy of the Internet. The paper’s description and evaluation focus on the technical aspects of the design, but as a teaching tool it would also be helpful to say more about the pedagogical aspects.

Then, we have three very different editorial notes. The first, Workshop on Internet Economics (WIE 2019) report, by kc claffy and David Clark, reports on the 2019 interdisciplinary Workshop on Internet Economics (WIE). The second, strongly related to the fourth technical paper, deals with reproducibility. In Thoughts about Artifact Badging, Noa Zilberman and Andrew Moore illustrate that the current badging scheme may not identify limitations of architecture, implementation, or evaluation. Our last editorial note is a comment on a past editorial, “Datacenter Congestion Control: Identifying what is essential and making it practical” by Aisha Mushtaq, et al., from our July 2019 issue. This comment, authored by James Roberts, disputes that shortest remaining processing time (SRPT) is the crucial factor in achieving good flow completion time (FCT) performance in datacenter networks.

Steve Uhlig — CCR Editor

The January 2020 Issue

This January 2020 issue starts the fiftieth volume of Computer Communication Review. This marks an important milestone for our newsletter. This issue contains four technical papers and three editorial notes. In C-Share: Optical Circuits Sharing for Software-Defined Data-Centers, Shay Vargaftik and his colleagues tackle the challenge of designing data-center networks that combine optical circuit switches with traditional packet switches.

In our second paper, A Survey on the Current Internet Interconnection Practices, Pedro Marcos and his colleagues provide the results of a survey answered by about one hundred network operators. This provides very interesting data on why and how networks interconnect. In A first look at the Latin American IXPs, Esteban Carisimo and his colleagues analyze how Internet eXchange Points have been deployed in Latin America during the last decade.

Our fourth technical paper, Internet Backbones in Space, explores different aspects of using satellite networks to create Internet backbones. Giacomo Giuliari and his colleagues analyse four approaches to organising routing between the ground segment of satellite networks (SNs) and traditional terrestrial ISP networks.

Then, we have three very different editorial notes. In Network architecture in the age of programmability, Anirudh Sivaraman and his colleagues look at where programmable functions should be placed inside networks and the impact that programmability can have on the network architecture. In The State of Network Neutrality Regulation, Volker Stocker, Georgios Smaragdakis and William Lehr analyse network neutrality from the US and European viewpoints by considering the technical and legal implications of this debate. In Gigabit Broadband Measurement Workshop Report, William Lehr, Steven Bauer and David Clark report on a recent workshop that discussed the challenges of correctly measuring access networks as they reach bandwidths of 1 Gbps and more.

Finally, I’m happy to welcome Matthew Caesar as the new SIGCOMM Education Director. He is currently preparing different initiatives, including an education section in CCR.

I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online (https://ccronline.sigcomm.org) or by email at ccr-editor at sigcomm.org.

C-Share: Optical Circuits Sharing for Software-Defined Data-Centers

S. Vargaftik, C. Caba, L. Schour, Y. Ben-Itzhak

Abstract

Integrating optical circuit switches in data-centers is an on-going research challenge. In recent years, state-of-the-art solutions have introduced hybrid packet/circuit architectures for different optical circuit switch technologies, control techniques, and traffic re-routing methods. These solutions are based on separated packet and circuit planes that cannot utilize an optical circuit with flows that do not arrive from, or are not delivered to, switches directly connected to the circuit’s end-points. Moreover, current SDN-based elephant flow re-routing methods require a forwarding rule for each flow, which raises scalability issues.

In this paper, we present C-Share – a scalable SDN-based circuit sharing solution for data center networks. C-Share inherently enables elephant flows to share optical circuits by exploiting a flat top-of-rack tier network topology. C-Share is based on a scalable and decoupled SDN-based elephant flow re-routing method comprising elephant flow detection, tagging and identification, which is realized using a prevalent network sampling method (e.g., sFlow). C-Share requires only a single OpenFlow rule for each optical circuit, and therefore significantly reduces the required OpenFlow rule entry footprint and rule setup rate. It also mitigates the OpenFlow outbound latency for subsequent elephant flows. We implement a proof-of-concept system for C-Share based on Mininet, and test the scalability of C-Share by using an event-driven simulation. Our results show a consistent increase in the mice/elephant flow separation in the network, which, in turn, improves both network throughput and flow completion time.
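To make the mechanism more concrete, here is a minimal sketch of the kind of sampling-based elephant-flow detection the abstract describes. It is an illustration only, not the authors' code: the flow keys, sampling rate and byte threshold are hypothetical.

```python
# Toy sketch (hypothetical parameters): estimate per-flow volume from packet
# samples, as an sFlow-style collector might, and flag flows above a threshold.
from collections import defaultdict

SAMPLING_RATE = 1 / 1000            # assume 1-in-1000 packet sampling
ELEPHANT_THRESHOLD_BYTES = 10e6     # assume ~10 MB marks an elephant flow

def detect_elephants(sampled_packets):
    """sampled_packets: iterable of (flow_key, packet_size_bytes) tuples."""
    estimated_bytes = defaultdict(float)
    for flow_key, size in sampled_packets:
        # Scale each sample by the inverse sampling rate to estimate flow volume.
        estimated_bytes[flow_key] += size / SAMPLING_RATE
    return {flow for flow, volume in estimated_bytes.items()
            if volume >= ELEPHANT_THRESHOLD_BYTES}

# Detected elephants would then be tagged at the edge so that a single
# pre-installed rule per optical circuit can match the tag, rather than
# installing one OpenFlow rule per elephant flow.
samples = [("h1->h2", 1500)] * 8000 + [("h3->h4", 1500)] * 5
print(detect_elephants(samples))    # {'h1->h2'}
```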

Download the full article

A survey on the current internet interconnection practices

Pedro Marcos, Marco Chiesa, Christoph Dietzel, Marco Canini, Marinho Barcellos

Abstract

The Internet topology has significantly changed in the past years. Today, it is richly connected and flattened. Such a change has been driven mostly by the fast growth of peering infrastructures and the expansion of Content Delivery Networks as alternatives to reduce interconnection costs and improve traffic delivery performance. While the topology evolution is perceptible, it is unclear whether or not the interconnection process has evolved or if it continues to be an ad-hoc and lengthy process. To shed light on the current practices of the Internet interconnection ecosystem and how these could impact the Internet, we surveyed more than 100 network operators and peering coordinators. We divide our results into two parts: (i) the current interconnection practices, including the steps of the process, the reasons to establish new interconnection agreements or to renegotiate existing ones, and the parameters discussed by network operators; and (ii) the existing limitations and how the interconnection ecosystem can evolve in the future. We show that despite the changes in the topology, interconnecting continues to be a cumbersome process that usually takes days, weeks, or even months to complete, which is in stark contrast with the desire of most operators to reduce the interconnection setup time. We also identify that, despite being primary candidates to evolve the interconnection process, emerging on-demand connectivity companies are only filling part of the existing gap between the current interconnection practices and the network operators’ desires.

Download the full article

Internet backbones in space

Giacomo Giuliari, Tobias Klenze, Markus Legner, David Basin, Adrian Perrig and Ankit Singla

Abstract

Several “NewSpace” companies have launched the first of thousands of planned satellites for providing global broadband Internet service. The resulting low-Earth-orbit (LEO) constellations will not only bridge the digital divide by providing service to remote areas, but they also promise much lower latency than terrestrial fiber for long-distance routes. We show that unlocking this potential is non-trivial: such constellations provide inherently variable connectivity, which today’s Internet is ill-suited to accommodate. We therefore study cost–performance trade-offs in the design space for Internet routing that incorporates satellite connectivity, examining four solutions ranging from naïvely using BGP to an ideal, clean-slate design. We find that the optimal solution is provided by a path-aware networking architecture in which end-hosts obtain information and control over network paths. However, a pragmatic and more deployable approach inspired by the design of content distribution networks can also achieve stable and close-to-optimal performance.

Download the full article

Network architecture in the age of programmability

Anirudh Sivaraman, Thomas Mason, Aurojit Panda, Ravi Netravali, Sai Anirudh Kondaveeti

Abstract

Motivated by the rapid emergence of programmable switches, programmable network interface cards, and software packet processing, this paper asks: given a network task (e.g., virtualization or measurement) in a programmable network, should we implement it at the network’s end hosts (the edge) or its switches (the core)? To answer this question, we analyze a range of common network tasks spanning virtualization, deep packet inspection, measurement, application acceleration, and resource management. We conclude that, while the edge is better or even required for certain network tasks (e.g., virtualization, deep packet inspection, and access control), implementing other tasks (e.g., measurement, congestion control, and scheduling) in the network’s core has significant benefits, especially as we raise the bar for the performance we demand from our networks.

More generally, we extract two primary properties that govern where a network task should be implemented: (1) time scales, or how quickly a task needs to respond to changes, and (2) data locality, or the placement of tasks close to the data that they must access. For instance, we find that the core is preferable when implementing tasks that need to run at short time scales, e.g., detecting fleeting and infrequent microbursts or rapid convergence to fair shares in congestion control. Similarly, we observe that tasks should be placed close to the state that they need to access, e.g., at the edge for deep packet inspection that requires private keys, and in the core for congestion control and measurement that needs to access queue depth information at switches.

Download the full article

A first look at the Latin American IXPs

E. Carisimo, J. Del Fiore, D. Dujovne, C. Pelsser, I. Alvarez-Hamelin

Abstract

We investigated Internet eXchange Points (IXPs) deployed across Latin America. We discovered that many Latin American states have been actively involved in the development of their IXPs. We further found a correlation between the success of a national IXP and the absence of local monopolistic ASes that concentrate the country’s IPv4 address space. In particular, three IXPs have been able to gain local traction: IX.br-SP, CABASE-BUE and PIT Chile-SCL. We further compared these larger IXPs with others outside Latin America. We found that, in developing regions, IXPs have had a similar growth in recent years and are mainly populated by regional ASes. The latter point clearly contrasts with more internationally renowned European IXPs whose members span multiple regions.

Download the full article

The State of Network Neutrality Regulation

Volker Stocker, Georgios Smaragdakis and William Lehr

Abstract

The Network Neutrality (NN) debate refers to the battle over the design of a regulatory framework for preserving the Internet as a public network and open innovation platform. Fueled by concerns that broadband access service providers might abuse network management to discriminate against third party providers (e.g., content or application providers), policymakers have struggled with designing rules that would protect the Internet from unreasonable network management practices. In this article, we provide an overview of the history of the debate in the U.S. and the EU and highlight the challenges that will confront network engineers designing and operating networks as the debate continues to evolve.

Download the full article

Gigabit broadband measurement workshop report

Steve Bauer, David Clark, William Lehr

Abstract

On July 24-25, 2018, MIT hosted an invitation-only workshop for network researchers from academia, industry, and the policy community engaged in the design and operation of test schemes to measure broadband access, in order to address the measurement challenges associated with high-speed (gigabit) broadband Internet access services. The focus of the workshop was on assessing the current state-of-the-art in gigabit broadband measurement tools and practices, the technical and policy challenges that arise, and possible strategies for addressing these challenges. A goal of this initial workshop was to provide a level-setting and networking opportunity among representatives of many of the leading research and operational efforts within academia, industry, and government to collect, analyze, and make use of broadband and Internet performance measurement data, and to inform the design of measurement methods and tools.

Download the full article

The First Fifty Years of ACM SIGCOMM

Network researchers know that packets are not always evenly spaced; they sometimes arrive in bursts. This burstiness is also present in history. Important events sometimes occur almost simultaneously even if there is no direct relationship between them. Fifty years ago, several historical events took place within a period of a few months. In July 1969, after years of effort, NASA engineers successfully launched Apollo 11 and the Eagle landed on the Moon. The first footsteps of Neil Armstrong and Buzz Aldrin opened the era of space exploration, and maybe another human will land on another planet during this century. One month later, hundreds of thousands of music fans gathered at the Woodstock festival. In parallel, a few computer scientists and engineers started to deploy prototype nodes of the ARPANET network. The first packets that they exchanged initiated the revolution that brought the Internet and today’s connected society. At that time, research in computer networks had already started and several members of this emerging community gathered in an interest group that later became ACM SIGCOMM.

SIGCOMM’s 50th birthday was celebrated at SIGCOMM’19 in Beijing in August with a special panel. This fiftieth birthday was a good opportunity to look back at the evolution of both the networking field and the SIGCOMM community over half a century. Earlier this year, after a very interesting teleconference with Vint Cerf, I contacted all the former SIGCOMM chairs and CCR editors to share their reflections on the evolution of our community. Many of them wrote an invited editorial. As our community is driven by scientific innovations, I also encouraged former recipients of lifetime SIGCOMM awards, test-of-time awards and best paper awards to share their vision with an invited editorial.
The thirty-three papers published in this special issue address a wide range of topics. Some provide a very unusual viewpoint.

SIGCOMM Turns 50

Wesley Chu chaired ACM SIGCOMM between 1973 and 1977. In Recalling the early days (first decade) of SIGCOMM and some thoughts of the future research directions, he discusses early networking conferences and the evolution of our field. In It’s Not About the Internet, Lyman Chapin, who served as Chair in the 1990s, argues that we should collaborate with professionals from other fields such as behavioral psychology, linguistics, sociology, education, history, ethnology, and political science to solve the policy issues that arise in today’s Internet.

In Patience, Jon Crowcroft, who chaired SIGCOMM during the second half of the 1990s, recalls his thirty years of participation in our community. He first recalls three important decisions made by SIGCOMM during his term: (i) avoid pangloss, (ii) embrace globalism and (iii) have values. These decisions are still very relevant today. He then looks back, provides useful guidance for young researchers and reports several anecdotes.

Jennifer Rexford served as SIGCOMM chair between 2003 and 2007. In Never Waste a Mid-Life Crisis: Change for the Better, she encourages our networking community to identify a few key problems that require a solution and work together to propose solutions which can eventually be widely deployed.

Mark Crovella, who chaired SIGCOMM after Jennifer Rexford, reflects in The Skillful Interrogation of the Internet on the mindset that researchers should have when studying the Internet. He argues that we have only started to apply a scientific approach to study the Internet and makes a parallel with the evolution of the field of biology.

In Reflections on SIGCOMM’s Fiftieth Anniversary, Bruce Davie, who chaired SIGCOMM from 2009 to 2013, discusses the relationships between networking research and the networking industry. He first argues that it is often useful to revisit old ideas. Since our environment continues to change, ideas that were considered unrealistic one or two decades ago could be deployed today. He also encourages us to think about the incremental deployment of new ideas. Several past examples show that incrementally deployable ideas had more impact than stronger ideas that were too difficult to deploy.

Srinivasan Keshav ends the reflections of our past Chairs. In Reflections on being SIG Chair 2013-2017, he recalls some issues that the SIGCOMM Executive Committee had to tackle during his term and the initiatives that they took. He ends his editorial by discussing how the current boom in robotics, AI, big data and deep learning could affect our community by reducing the number of networking students and networking researchers. He argues that, in the long term, this could be beneficial.

The evolution of CCR

The first issue of Computer Communication Review was published in 1969. In 2013, Craig Partridge published an informal history of the first forty years of our newsletter. This issue complements this article. In My view of Computer Communication Review, 1969–1976, Dave Walden explains how he ensured both the regular publication of CCR issues and the quality of the articles. He also presents some key articles published during his term as CCR Editor.

Craig Partridge was CCR Editor from 1988 to 1991. He describes in Changing ACM Computer Communication Review (1988-1991) how Vint Cerf encouraged him to convert CCR into an “entry-level journal”. This conversion was a success and Craig consumed two-thirds of the annual page budget for the January 1989 issue. After that, papers continued to flow.

In 2004, Christophe Diot started his four-year term as CCR Editor. In About velocity and dealing with “fake” scientific news, he shares his insight on some changes he brought to CCR, including the public reviews and the editorials. These are still very relevant today. He ends by suggesting, again, the adoption of a continuous submission model for the SIGCOMM conferences and by stressing the importance of volunteering for our community.

Before chairing SIGCOMM, Srinivasan Keshav served as CCR Editor during a four-year term. In Reflections on being CCR Editor 2008-2012, he explains how he replaced the email-based workflow for handling papers with an instance of HotCRP that he modified, and why he introduced a six-page limit for CCR papers.

Finally, Dina Papagiannaki shares in Looking back at being the editor of CCR (2013-2016) the initiatives that she introduced during her term as CCR Editor. These include an interview section and regular columns.

Fifty years of networking

In 1999, SIGCOMM celebrated its thirtieth birthday with a tutorial on The Technical History of the Internet organized by Vint Cerf, who assembled, as indicated in the tutorial announcement, a stellar cast of original technical contributors to the history of internetworking, including Paul Baran, Bob Braden, Dave Clark, Danny Cohen, Dave Farber, Sandy Fraser, Van Jacobson, Steve Kent, Peter Kirstein, Len Kleinrock, Larry Landweber, Dave Mills, Craig Partridge, Louis Pouzin, Larry Roberts, Dave Walden, Steve Wolff, and Hubert Zimmermann. To prepare this tutorial, Vint Cerf and his friends collected historical documents that were gathered by the late Chris Edmondson-Yurkanan. In ‘Capture it While You Can’: Revisiting SIGCOMM 99’s Technical History of the Internet, Frances Corry and Anna Loup discuss the importance of collecting historical information as computer networking gets older. They are currently planning an exhibition of the material assembled by Chris Edmondson-Yurkanan at a future SIGCOMM conference.

Our second technical editorial, Five Decades of the ACM Special Interest Group on Data Communications (SIGCOMM): A Bibliometric Perspective, was written by a team led by Waleed Iqbal. They contacted us earlier this year to propose a bibliometric study of the SIGCOMM publications based on a large dataset that they had collected. As this study gives a broad view of our publications, it was interesting to also include it in this special issue. They analyze our publications using different metrics and provide an interesting viewpoint on the output of our community as measured by our publications. Furthermore, they release their entire dataset to enable other researchers to compute their own metrics.

We then explore the evolution of computer networking research through the lenses of SIGCOMM award winners and authors of best and test-of-time papers. We start with two articles co-authored by Jeffrey Mogul and Christopher Kantarjiev. In Retrospective on “Fragmentation Considered Harmful”, they first look at the challenges of fragmenting IP packets in 1987. They analyze how things have changed since the publication of their paper with the deployment of Path MTU discovery. In Retrospective on “Measured Capacity of an Ethernet: Myths and Reality”, they revisit the measurements described in their 1988 paper. As this 1988 paper focused on Ethernet networks that used CSMA/CD, it is no longer directly relevant to today’s switched Ethernet, where CSMA/CD is disabled. However, some of the lessons learned are still relevant.

Then, Srinivasan Keshav, in Reflections on “Analysis and Simulation of a Fair Queueing Algorithm” by A. Demers, S. Keshav, and S. Shenker, Proc. ACM SIGCOMM 1989, recalls how the Weighted Fair Queueing algorithm was designed and eventually published. This paper initiated a huge amount of work on packet schedulers and is still very relevant today.
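For readers who have not seen the algorithm, its core can be summarised by the per-packet finish-time computation below; this is the standard textbook formulation, not an excerpt from the editorial.

```latex
% Finish number of the k-th packet of flow i, with packet length L_i^k,
% flow weight w_i, and V(a_i^k) the virtual time at the packet's arrival.
% Packets are transmitted in increasing order of their finish numbers.
\[
  F_i^{k} = \max\bigl(F_i^{k-1},\, V(a_i^{k})\bigr) + \frac{L_i^{k}}{w_i}
\]
```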

Our next witness is Radia Perlman. In Network Protocol Folklore: Whys and What-ifs, she looks at the evolution of Ethernet networks, with some anecdotes on the design of Spanning Tree Bridging. She then analyzes the evolution of the network layer protocols, starting from CLNP. Her editorial should be read by today’s students who might believe that there is no alternative to IP.

In Reflections on “A Control-Theoretic Approach to Flow Control”, Srinivasan Keshav explains how he applied principles from control theory to improve flow control in reliable protocols and the importance of regularly noting his ideas on paper.

Walter Willinger, Murad Taqqu and Daniel Wilson describe in Lessons from “On the Self-Similar Nature of Ethernet Traffic” how the networking community reacted to their SIGCOMM’93 paper. Thanks to a long and precise dataset, this paper demonstrated that the traffic in an Ethernet network could not be modeled using a Poisson process. This editorial provides a lot of insight into this important discovery and the impact that it had on the field. It should definitely be on the reading list of any networking researcher who analyzes measurement data.

Matt Mathis and Jamshid Mahdavi take a step back to look at the modeling of TCP congestion control algorithms in Deprecating The TCP Macroscopic Model. Their original paper proposed mathematical models to describe the evolution of TCP’s congestion control scheme. They argue for a deprecation of their mathematical model and encourage researchers to analyze and model the recently proposed BBR congestion control scheme, which takes a very different approach to solving the congestion problem.
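As a reminder (the formula below is the commonly cited form, not reproduced from the editorial), the macroscopic model relates TCP throughput to the segment size, the round-trip time and the loss probability:

```latex
% TCP macroscopic model: average throughput for maximum segment size MSS,
% round-trip time RTT and loss probability p, with C = sqrt(3/2) for
% periodic losses.
\[
  \text{throughput} \;\approx\; \frac{MSS}{RTT}\cdot\frac{C}{\sqrt{p}},
  \qquad C = \sqrt{3/2} \approx 1.22
\]
```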

In Sizing Router Buffers (Redux), Nick McKeown, Guido Appenzeller and Isaac Keslassy also discuss Internet congestion, but from the viewpoint of the router’s buffers. They discuss how the recommendations for the sizing of router buffers have changed during the last fifteen years.
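For context, the debate contrasts the classical bandwidth-delay-product rule of thumb with the much smaller buffers suggested by their SIGCOMM 2004 analysis for links carrying many long-lived flows; the formulas below are the commonly quoted ones, not taken from the editorial.

```latex
% Buffer size B for a link of capacity C and round-trip time RTT,
% shared by n long-lived TCP flows.
\[
  B_{\text{classic}} = RTT \times C
  \qquad\text{versus}\qquad
  B_{\text{revised}} = \frac{RTT \times C}{\sqrt{n}}
\]
```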

In Retrospective on “A Delay-Tolerant Network Architecture for Challenged Internets”, Kevin Fall explains the evolution of Delay Tolerant Networking during the last fifteen years and key results in this field.

Retransmissions remain a standard technique to recover from transmission errors in many protocols. Two of the editorials in this special issue describe the evolution of alternatives to retransmissions. In XORs in the past and future, Muriel Medard, Sachin Katti, Dina Katabi, Wenjun Hu, Hariharan Rahul and Jon Crowcroft describe the evolution of network coding. They discuss both the theoretical aspects and the deployment of protocols that include network coding techniques. In A Digital Fountain Retrospective, John Byers, Michael Luby and Michael Mitzenmacher explain how fast and practical erasure codes have enabled scalable reliable multicast services.
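As a reminder of the basic idea behind XOR-based network coding, the classic two-way relay example is sketched below; this is a generic illustration, not code from the editorial.

```python
# Toy example: a relay between Alice and Bob broadcasts one XORed packet
# instead of forwarding two separate packets; each endpoint decodes the
# other's packet using the packet it already knows (its own).
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Equal-length packets assumed for simplicity.
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"hello bob!"   # Alice's packet, destined to Bob
pkt_b = b"hi, alice!"   # Bob's packet, destined to Alice

coded = xor_bytes(pkt_a, pkt_b)          # the relay broadcasts this single packet
assert xor_bytes(coded, pkt_a) == pkt_b  # Alice recovers Bob's packet
assert xor_bytes(coded, pkt_b) == pkt_a  # Bob recovers Alice's packet
```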

In the late nineties, researchers proposed active networks as a solution to bring innovation back into networks that were already considered to be ossified. In Retrospective on “Towards an Active Network Architecture”, David Wetherall and David Tennenhouse take a step back to look at network programmability and compare today’s approaches to earlier ones. Other researchers looked at the control plane protocols. In Reflections on a clean slate 4D approach to network control and management, David Maltz, Geoffrey Xie, Albert Greenberg, Jennifer Rexford, Hui Zhang and Jibin Zhan discuss how their idea of centralizing network control has contributed to the development of Software Defined Networking (SDN). Another viewpoint on the evolution of SDN is provided by Martin Casado, Nick McKeown and Scott Shenker in From Ethane to SDN and Beyond. They explain how their design of a simple solution to manage their campus network demonstrated the feasibility of SDN and highlight important lessons.

In Lessons from “A first-principles approach to understanding the Internet’s router-level topology”, Walter Willinger, David Alderson and John Doyle remind us that the Internet has a highly engineered infrastructure and that it is important to understand its component technologies before analyzing topology measurements. They highlight the limitations of scale-free models that some researchers have tried to apply to the Internet.

In Don’t Mind the Gap: Bridging Network-wide Objectives and Device-level Configurations, Ryan Beckett, Ratul Mahajan, Todd Millstein, Jitendra Padhye and David Walker discuss the evolution of Propane, a high-level language and compiler that network operators can use to generate low-level router configurations that meet network-wide routing objectives. They highlight the importance of working with small models when dealing with complex systems.

In Perspective: White Space Networking with Wi-Fi like Connectivity, Ranveer Chandra and Thomas Moscibroda explore the utilization of the radio spectrum that was reserved for television signals to provide Internet access, notably in rural areas. This TV spectrum has excellent propagation characteristics, and they proposed to use it to provide Internet access in their SIGCOMM’09 paper. They discuss how the technology has evolved and the steps required to convert a research prototype into a widely deployed technology.

In Perspective: Eliminating Channel Feedback in Next-Generation Cellular Networks, Deepak Vasisht, Swarun Kumar, Hariharan Rahul and Dina Katabi discuss the impact of their SIGCOMM’16 paper that enabled cellular base stations to estimate the downlink channels without any user feedback.

Finally, in NDP: rethinking datacenter networks and stacks, two years after, Costin Raiciu and Gianni Antichi discuss their attempts to convince industry to deploy in datacenters the ideas proposed in their SIGCOMM’17 paper. It reminds us that there are many hurdles in the journey to convert a research prototype into a widely deployed solution. This is the difference between research and innovation. Convincing industry of the validity of research results often requires more time and effort than convincing one of our very selective TPCs...

I hope that you will enjoy reading this new issue and welcome comments and suggestions by email at ccr-editor at sigcomm.org.

Olivier Bonaventure