Anirudh Sivaraman, Thomas Mason, Aurojit Panda, Ravi Netravali, Sai Anirudh Kondaveeti
Abstract
Motivated by the rapid emergence of programmable switches, programmable network interface cards, and software packet processing, this paper asks: given a network task (e.g., virtualization or measurement) in a programmable network, should we implement it at the network’s end hosts (the edge) or its switches (the core)? To answer this question, we analyze a range of common network tasks spanning virtualization, deep packet inspection, measurement, application acceleration, and resource management. We conclude that, while the edge is better or even required for certain network tasks (e.g., virtualization, deep packet inspection, and access control), implementing other tasks (e.g., measurement, congestion control, and scheduling) in the network’s core has significant benefits, especially as we raise the bar for the performance we demand from our networks.
More generally, we extract two primary properties that govern where a network task should be implemented: (1) time scales, or how quickly a task needs to respond to changes, and (2) data locality, or the placement of tasks close to the data that they must access. For instance, we find that the core is preferable for tasks that must run at short time scales, e.g., detecting fleeting and infrequent microbursts or converging rapidly to fair shares in congestion control. Similarly, we observe that tasks should be placed close to the state they need to access, e.g., at the edge for deep packet inspection that requires private keys, and in the core for congestion control and measurement that need access to queue depth information at switches.