Dina Papagiannaki
Our community is celebrating 50 years and I was asked to provide my perspective on one of its longest-standing communication vehicles, ACM SIGCOMM Computer Communication Review, as one of its editors.
Deepak Vasisht, Swarun Kumar, Hariharan Rahul, Dina Katabi
Matt Mathis, Jamshid Mahdavi
The TCP Macroscopic Model will be completely obsolete soon. It was a closed-form performance model for Van Jacobson's landmark congestion control algorithms presented at Sigcomm'88. Jacobson88 requires relatively large buffers to function as intended, while Moore's law is making them uneconomical. BBR-TCP is a break from the past, unconstrained by many of the assumptions and principles defined in Jacobson88. It already outperforms Reno and CUBIC TCP over large portions of the Internet, generally without creating queues of the sort needed by earlier congestion control algorithms. It offers the potential to scale better while using less queue buffer space than existing algorithms. Because BBR-TCP is built on an entirely new set of principles, it has the potential to deprecate many things, including the Macroscopic Model. New research will be required to lay a solid foundation for an Internet built on BBR.
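For readers unfamiliar with it, the Macroscopic Model named above is the well-known closed-form bound on steady-state TCP throughput (a sketch of the standard form; MSS is the segment size, RTT the round-trip time, and p the packet loss probability):

```latex
% Macroscopic model of TCP throughput:
% bandwidth scales inversely with RTT and with the square root of
% the loss rate p; C is a constant of order one whose exact value
% depends on modeling assumptions (e.g., C ~ sqrt(3/2) for
% Reno-style congestion avoidance).
BW = \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}}
```

BBR's departure from loss-driven window reduction is precisely why this relationship between loss rate and throughput no longer applies to it.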
David Wetherall, David Tennenhouse
Network programmability has metamorphosed over the past twenty years from the controversial research vision of active networks, through PlanetLab, to the juggernaut of SDN and OpenFlow that has swept industry. Now PISA switches are emerging with support for protocol-independent reconfigurability. We reflect on how network architecture has evolved along a different path than we had foreseen to arrive at a place that is not so different from what we and other researchers had hoped and imagined.
Martín Casado, Nick McKeown, Scott Shenker
We briefly describe the history behind the Ethane paper and its ultimate evolution into SDN and beyond.
Mark Crovella
As SIGCOMM turns 50, it’s interesting to ask how networking research has evolved over time. This is a set of personal observations about the “mindset” associated with Internet research.
Kevin Fall
This article provides a brief retrospective on the evolution of Delay Tolerant Networking since 2003.
Radia Perlman
There’s no way to understand today’s networks without knowing the history. Too often, network protocols are taught as ‘memorize the details of what is currently deployed’, which creates a lot of confusion and certainly does not encourage critical thinking. Some decisions have made today’s networks unnecessarily complex and less functional. But surprisingly, mechanisms that were created out of necessity, to compensate for previous decisions, sometimes turn out to be useful for purposes other than the ones for which they were invented. Had the world adopted a different approach originally, some very interesting technology might never have been invented. Given limited space in this article, we will not worry about exact details, but rather convey the main conceptual points, and only cover a few examples.
S. Keshav
This article discusses my personal view of the events that led to the publication of the paper “Analysis and Simulation of a Fair Queueing Algorithm” that won a Test-of-Time Award in 2007.
Jon Crowcroft
In this brief note I reflect on my experiences of 60% of the 50 years of ACM SIGCOMM. These are very personal views, and I encourage people who were around for any of this time to disagree.