Martín Casado, Nick McKeown, Scott Shenker
We briefly describe the history behind the Ethane paper and its ultimate evolution into SDN and beyond.
There’s no way to understand today’s networks without knowing their history. Too often, network protocols are taught as ‘memorize the details of what is currently deployed’, which creates a lot of confusion and certainly does not encourage critical thinking. Some decisions have made today’s networks unnecessarily complex and less functional. But surprisingly, mechanisms that were created out of necessity, to compensate for previous decisions, sometimes turn out to be useful for purposes other than those for which they were invented. Had the world adopted a different approach originally, some very interesting technology might never have been invented. Given the limited space in this article, we will not worry about exact details, but rather convey the main conceptual points, and only cover a few examples.
John W. Byers, Michael Luby, Michael Mitzenmacher
We introduced the concept of a digital fountain as a scalable approach to reliable multicast, realized with fast and practical erasure codes, in a paper published in ACM SIGCOMM ’98. This invited editorial, on the occasion of the 50th anniversary of the SIG, reflects on the trajectory of work leading up to our approach, and the numerous developments in the field in the subsequent 21 years. We discuss advances in rateless codes, efficient implementations, applications of digital fountains in distributed storage systems, and connections to invertible Bloom lookup tables.
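To give a flavor of the digital-fountain idea, the toy sketch below encodes k source blocks into random XOR combinations and recovers them with a peeling decoder. It is a minimal LT-style illustration under simplifying assumptions (a uniform degree distribution, byte-sized blocks); it is not the Tornado or Raptor codes developed in this line of work, and every function name here is invented for illustration.

```python
# Toy digital-fountain sketch: a receiver can reconstruct the k source
# blocks from any sufficiently large set of random XOR combinations,
# regardless of which particular encoded symbols it happens to receive.
import random

def encode_symbol(blocks, rng):
    """XOR a random non-empty subset of source blocks into one output symbol.
    A real rateless code would draw the degree from a soliton-style
    distribution; uniform is used here only to keep the sketch short."""
    k = len(blocks)
    degree = rng.randint(1, k)
    idxs = frozenset(rng.sample(range(k), degree))
    value = 0
    for i in idxs:
        value ^= blocks[i]
    return idxs, value

def peel_decode(symbols, k):
    """Peeling decoder: repeatedly resolve any symbol that covers exactly
    one still-unknown block, subtracting (XOR-ing out) known blocks."""
    recovered = {}
    work = [(set(idxs), val) for idxs, val in symbols]
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for idxs, val in work:
            unknown = idxs - set(recovered)
            if len(unknown) == 1:
                (i,) = unknown
                v = val
                for j in idxs - {i}:
                    v ^= recovered[j]
                recovered[i] = v
                progress = True
    return recovered

# Demo: encode 8 source bytes into a stream of symbols, then decode.
rng = random.Random(2024)  # seeded so the demo is repeatable
blocks = [rng.randrange(256) for _ in range(8)]
stream = [encode_symbol(blocks, rng) for _ in range(64)]
decoded = peel_decode(stream, len(blocks))
```

The key property being illustrated is ratelessness: the sender can generate encoded symbols indefinitely, and the receiver needs only *enough* of them, not any particular ones.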
Creating a better Internet—a global communications infrastructure that is more secure, reliable, performant, flexible, and so on—is one of the grand challenges of our time. Yet, making substantive change to such a large, distributed, operational network is inherently difficult. This position paper argues that the networking research community should come together and adopt a sort of “ambitious pragmatism” that tackles the big problems while identifying the practical steps to take along the way. The community can work together to (i) identify and precisely formulate the main problems we need to address, (ii) more deeply understand a diverse array of practical constraints (including business drivers, economic incentives, government policies, and more), and (iii) create new deployment platforms and institutional structures to enable good research ideas to “cross the chasm” to deployment.
Nick McKeown, Guido Appenzeller, Isaac Keslassy
The queueing delay faced by a packet is arguably the largest source of uncertainty during its journey. It therefore seems crucial that we understand how big the buffers in Internet routers should be. Our 2004 SIGCOMM paper revisited the existing rule of thumb that a buffer should hold one bandwidth-delay product of packets. We claimed that for long-lived TCP flows, the buffer could be reduced by a factor of √N, where N is the number of active flows, potentially reducing the required buffers by well over 90% in Internet backbone routers. One might reasonably expect that such a result, which supports cheaper routers with smaller buffers, would be embraced by the ISP community. In this paper we revisit the result 15 years later, and explain where it has succeeded and where it has failed to affect how buffers are sized.
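To make the arithmetic concrete, here is a minimal sketch of the two sizing rules. The 10 Gb/s link speed, 250 ms RTT, and 10,000-flow count are illustrative assumptions chosen for round numbers, not figures taken from the paper.

```python
# Classic rule of thumb: buffer = one bandwidth-delay product (BDP).
# The 2004 result: for N long-lived TCP flows, BDP / sqrt(N) suffices.
import math

def buffer_bytes(link_bps, rtt_s, n_flows=None):
    """Bandwidth-delay product in bytes; divided by sqrt(N) if N is given."""
    bdp = link_bps * rtt_s / 8  # bits -> bytes
    return bdp if n_flows is None else bdp / math.sqrt(n_flows)

# A hypothetical 10 Gb/s backbone link, 250 ms average RTT,
# carrying 10,000 long-lived TCP flows:
rule_of_thumb = buffer_bytes(10e9, 0.25)          # 312.5 MB
small_buffer  = buffer_bytes(10e9, 0.25, 10_000)  # 3.125 MB
```

With 10,000 flows, sqrt(N) = 100, so the required buffer shrinks by a factor of 100—a 99% reduction, consistent with the “well over 90%” figure quoted above.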