Category Archives: 2019

Retrospective on “Measured Capacity of an Ethernet: Myths and Reality”

Jeffrey C. Mogul, Christopher A. Kantarjiev

Abstract

The original Ethernet design used CSMA/CD on a broadcast cable. Even after it became commercially popular, many people expressed concerns that Ethernet could not efficiently use the full channel bandwidth. In our 1988 paper, “Measured Capacity of an Ethernet: Myths and Reality,” we reported on experiments we ran showing that, even under relatively heavy loads, Ethernet typically still performed well. We describe the context in which we ran those experiments, and some subsequent research conducted by others.
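
For readers who have not encountered CSMA/CD since its heyday, the small C sketch below simulates the truncated binary exponential backoff that classic 10 Mbit/s Ethernet stations apply after a collision. It is purely illustrative and is unrelated to the instrumentation used in the 1988 experiments; the slot time and retry limit follow the standard, while the random seed and output format are arbitrary.

```c
/* Illustrative simulation of Ethernet truncated binary exponential backoff.
 * After the n-th successive collision, a station waits a random number of
 * 51.2 us slot times drawn uniformly from [0, 2^min(n,10) - 1], and gives
 * up after 16 attempts. */
#include <stdio.h>
#include <stdlib.h>

#define SLOT_TIME_US 51.2
#define MAX_ATTEMPTS 16

static double backoff_us(int collisions)
{
    int k = collisions < 10 ? collisions : 10;   /* exponent is capped at 10 */
    long slots = rand() % (1L << k);             /* uniform in [0, 2^k - 1]  */
    return slots * SLOT_TIME_US;
}

int main(void)
{
    srand(42);                                   /* arbitrary seed for repeatability */
    for (int n = 1; n <= MAX_ATTEMPTS; n++)
        printf("collision %2d: back off %.1f us\n", n, backoff_us(n));
    return 0;
}
```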

Download the full article

The July 2019 Issue

This July 2019 issue contains two technical papers and three editorial notes. In “Securing Linux with a Faster and Scalable IPtables”, Sebastiano Miano and his colleagues revisit how Linux firewalls work. Since version 2.4.0 of the Linux kernel, iptables has been the standard way of defining firewall rules in Linux. It is widely used, but writing and maintaining its rules can be difficult, and it has some performance limitations. This paper leverages the eBPF virtual machine included in the Linux kernel to propose a replacement for iptables that preserves its semantics while providing improved performance. The authors release their implementation and evaluate its performance in detail.

In “Towards Passive Analysis of Anycast in Global Routing: Unintended Impact of Remote Peering”, Rui Bian et al. analyse the deployment of anycast services. For this, they rely on globally collected BGP routing information and highlight the impact of remote peering on anycast performance. They release their data and analysis scripts.

In addition to these two peer-reviewed papers, this issue contains three editorials. In “Privacy Trading in the Surveillance Capitalism Age: Viewpoints on ‘Privacy-Preserving’ Societal Value Creation”, Ranjan Pal and Jon Crowcroft reconsider the current mobile app ecosystem from an economic and privacy viewpoint. They show that the current model is not the only possible one and propose the idea of a regulated privacy trading mechanism that provides a better compromise between privacy and the commercial interests of companies. In “Datacenter Congestion Control: Identifying what is essential and making it practical”, Aisha Mushtaq et al. take a step back and look at the datacenter congestion control problem. They argue that congestion control mechanisms that use Shortest-Remaining-Processing-Time are the best solution and discuss how commodity switches could be modified to support it. Finally, in “The 11th Workshop on Active Internet Measurements (AIMS-11) Workshop Report”, kc Claffy and Dave Clark summarise the discussions at the latest AIMS workshop. They mention several new measurement initiatives and interesting research projects.

Securing Linux with a Faster and Scalable IPtables

Sebastiano Miano, Matteo Bertrone, Fulvio Risso, Mauricio Vásquez Bernal,
Yunsong Lu, Jianwen Pi

Abstract

The sheer increase in network speed and the massive deployment of containerized applications on Linux servers have led to the awareness that iptables, the current de facto firewall in Linux, may not be able to cope with current requirements, particularly in terms of scalability in the number of rules. This paper presents an eBPF-based firewall, bpf-iptables, which emulates the iptables filtering semantics while guaranteeing higher throughput. We compare our implementation against the current version of iptables and other Linux firewalls, showing how it achieves a notable boost in terms of performance, particularly when a high number of rules is involved. This result is achieved without requiring custom kernels or additional software frameworks (e.g., DPDK) that may not be allowed in some scenarios, such as public data centers.
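
To give a flavour of the underlying technology, the hypothetical C sketch below shows a minimal eBPF/XDP program that drops TCP traffic to one port. It illustrates the kind of in-kernel filtering that eBPF enables, but it is only a toy: the real bpf-iptables system reproduces full iptables matching semantics and is considerably more elaborate.

```c
/* Minimal XDP packet filter: drop incoming TCP segments to port 8080.
 * Toy example of eBPF-based filtering; not the bpf-iptables design. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int drop_tcp_8080(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;
    if (ip->protocol != IPPROTO_TCP)
        return XDP_PASS;

    /* For brevity, assume no IP options (ihl == 5). */
    struct tcphdr *tcp = (void *)(ip + 1);
    if ((void *)(tcp + 1) > data_end)
        return XDP_PASS;

    if (tcp->dest == bpf_htons(8080))
        return XDP_DROP;          /* analogous to an iptables DROP rule */

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```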

Download the full article

Towards Passive Analysis of Anycast in Global Routing: Unintended Impact of Remote Peering

Rui Bian, Shuai Hao, Haining Wang, Amogh Dhamdhere, Alberto Dainotti, Chase Cotton

Abstract

Anycast has been widely adopted by today’s Internet services, including DNS, CDN, and DDoS protection, in which the same IP address is announced from distributed locations and clients are directed to the topologically nearest service replica. Prior research has focused on various aspects of anycast, either its usage in particular services such as DNS or the characterization of its adoption through Internet-wide active probing. In this paper, we first explore an alternative approach to characterizing anycast based on previously collected global BGP routing information. Leveraging state-of-the-art active measurement results as near-ground-truth, our passive method, which requires no Internet-wide probes, achieves 90% accuracy in detecting anycast prefixes. More importantly, our approach uncovers anycast prefixes that have been missed by prior datasets based on active measurements. While investigating the root causes of this inaccuracy, we reveal that anycast routing has become entangled with the increased adoption of remote peering, a type of layer-2 interconnection where an IP network may peer at an IXP remotely without being physically present at the IXP. The invisibility of remote peering at layer 3 breaks the assumption of shortest AS paths in BGP and has an unintended impact on anycast performance. We identify such cases from BGP routing information and observe that at least 19.2% of anycast prefixes have been potentially impacted by remote peering.
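
As a very rough illustration of what a passive, BGP-only signal for anycast can look like, the hypothetical C sketch below counts how many distinct upstream (penultimate-hop) ASes appear in the AS paths observed for a prefix across vantage points, and flags the prefix as a candidate anycast prefix when that diversity is high. The routes, the threshold, and the heuristic itself are invented for illustration; they are not the classification method of the paper.

```c
/* Toy anycast heuristic: a prefix whose AS paths (seen from different BGP
 * vantage points) are reached through many distinct upstream ASes is flagged
 * as a candidate anycast prefix. Illustrative only. */
#include <stdio.h>
#include <string.h>

#define MAX_UPSTREAMS 32

struct route {
    const char *prefix;
    int as_path[8];   /* AS path, origin AS last */
    int path_len;
};

static int count_distinct_upstreams(const struct route *routes, int n,
                                    const char *prefix)
{
    int upstreams[MAX_UPSTREAMS];
    int count = 0;

    for (int i = 0; i < n; i++) {
        if (strcmp(routes[i].prefix, prefix) != 0 || routes[i].path_len < 2)
            continue;
        int up = routes[i].as_path[routes[i].path_len - 2];  /* AS just before the origin */
        int seen = 0;
        for (int j = 0; j < count; j++)
            if (upstreams[j] == up) { seen = 1; break; }
        if (!seen && count < MAX_UPSTREAMS)
            upstreams[count++] = up;
    }
    return count;
}

int main(void)
{
    /* Hypothetical routes toward one prefix, as seen from three vantage points
     * (documentation prefix and private-use AS numbers). */
    struct route routes[] = {
        { "192.0.2.0/24", { 64496, 64500 }, 2 },
        { "192.0.2.0/24", { 64497, 64500 }, 2 },
        { "192.0.2.0/24", { 64498, 64500 }, 2 },
    };
    int n = sizeof(routes) / sizeof(routes[0]);

    int distinct = count_distinct_upstreams(routes, n, "192.0.2.0/24");
    printf("192.0.2.0/24: %d distinct upstreams -> %s\n", distinct,
           distinct >= 3 ? "candidate anycast" : "likely unicast");
    return 0;
}
```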

Download the full article

Privacy Trading in the Surveillance Capitalism Age: Viewpoints on ‘Privacy-Preserving’ Societal Value Creation

Ranjan Pal and Jon Crowcroft

Abstract

In the modern era of mobile apps (part of the era of surveillance capitalism, a term famously coined by Shoshana Zuboff), huge quantities of data about individuals and their activities offer a wave of opportunities for economic and societal value creation. On the flip side, such opportunities also open up channels for privacy breaches of an individual’s personal information. Data holders (e.g., apps) may hence take commercial advantage of the individuals’ inability to fully anticipate the potential uses of their private information, with detrimental effects for social welfare. As steps to improve social welfare, we comment on the existence and design of efficient consumer-data releasing ecosystems aimed at achieving a maximum social welfare state amongst competing data holders. In view of (a) the behavioral assumption that humans are ‘compromising’ beings, (b) privacy not being a well-boundaried good, and (c) the practical inevitability of inappropriate data leakage by data holders upstream in the supply chain, we showcase the idea of a regulated and radical privacy trading mechanism that preserves heterogeneous privacy preservation constraints (at an aggregate consumer, i.e., app, level) up to certain compromise levels, while at the same time satisfying the commercial requirements of agencies (e.g., advertising organizations) that collect and trade client data for the purpose of behavioral advertising. More specifically, our idea merges supply function economics, introduced by Klemperer and Meyer, with differential privacy; thanks to their powerful theoretical properties, this combination leads to a stable and efficient (i.e., maximum social welfare) state in an algorithmically scalable manner. As part of future research, we also discuss interesting additional techno-economic challenges related to realizing effective privacy trading ecosystems.
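
For context, the privacy side of the mechanism rests on the standard notion of differential privacy; the textbook guarantee is reproduced below (this is the generic definition, not the paper’s aggregate, app-level formulation).

```latex
% A randomized mechanism \mathcal{M} is \varepsilon-differentially private if,
% for every pair of neighbouring datasets D and D' and every set of outputs S,
\[
  \Pr\bigl[\mathcal{M}(D) \in S\bigr] \;\le\; e^{\varepsilon}\,\Pr\bigl[\mathcal{M}(D') \in S\bigr].
\]
```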

Download the full article

Datacenter Congestion Control: Identifying what is essential and making it practical

Aisha Mushtaq, Radhika Mittal, James McCauley, Mohammad Alizadeh, Sylvia Ratnasamy, Scott Shenker

Abstract

Recent years have seen a slew of papers on datacenter congestion control mechanisms. In this editorial, we ask whether the bulk of this research is needed for the common case, where congestion control involves hosts responding to simple congestion signals from the network and the performance goal is reducing some average measure of Flow Completion Time. We raise this question because we find that, out of all the possible variations one could make in congestion control algorithms, the most essential feature is the switch scheduling algorithm. More specifically, we find that congestion control mechanisms that use Shortest-Remaining-Processing-Time (SRPT) achieve superior performance as long as the rate-setting algorithm at the host is reasonable. We further find that while SRPT’s performance is quite robust to host behaviors, the performance of schemes that use scheduling algorithms like FIFO or Fair Queuing depends far more crucially on the rate-setting algorithm, and is typically worse than what can be achieved with SRPT. Given these findings, we then ask whether it is practical to realize SRPT in switches without requiring custom hardware. We observe that approximate and deployable SRPT (ADS) designs exist, which leverage the small number of priority queues supported in almost all commodity switches, and require only software changes in the host and the switches. Our evaluations with one very simple ADS design show that it can achieve performance close to true SRPT and is significantly better than FIFO. Thus, the answer to our basic question – whether the bulk of recent research on datacenter congestion control algorithms is needed for the common case – is no.
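
As a rough sketch of the approximate, deployable SRPT idea, the hypothetical C snippet below maps a flow’s remaining bytes onto a small number of priority levels, the way a handful of commodity-switch priority queues could be used to favour nearly finished flows. The queue count and thresholds are invented for the example and are not taken from the paper.

```c
/* Toy approximate-SRPT mapping: flows with fewer remaining bytes are placed
 * in higher-priority queues (queue 0 is served first). Thresholds grow
 * exponentially; both the thresholds and the queue count are illustrative. */
#include <stdio.h>

#define NUM_QUEUES 8

static int priority_for_remaining(long remaining_bytes)
{
    long threshold = 10 * 1024;                 /* 10 KB for the highest-priority queue */
    for (int q = 0; q < NUM_QUEUES - 1; q++) {
        if (remaining_bytes <= threshold)
            return q;
        threshold *= 4;                         /* exponentially spaced buckets */
    }
    return NUM_QUEUES - 1;                      /* everything else in the lowest priority */
}

int main(void)
{
    long remaining[] = { 2048, 80L * 1024, 5L * 1024 * 1024, 500L * 1024 * 1024 };
    for (unsigned i = 0; i < sizeof(remaining) / sizeof(remaining[0]); i++)
        printf("remaining=%ld bytes -> priority queue %d\n",
               remaining[i], priority_for_remaining(remaining[i]));
    return 0;
}
```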

Download the full article

The 11th Workshop on Active Internet Measurements (AIMS-11) Workshop Report

kc Claffy, David Clark

Abstract

On 16-17 April 2019, CAIDA hosted its eleventh Workshop on Active Internet Measurements (AIMS-11). This workshop series provides a forum for stakeholders in Internet active measurement projects to communicate their interests and concerns, and explore cooperative approaches to maximizing the collective benefit of deployed infrastructure and gathered data. An overarching theme this year was scaling the storage, indexing, annotation, and usage of Internet measurements. We discussed tradeoffs in the use of commercial cloud services to make measurement results more accessible and informative to researchers in various disciplines. Other agenda topics included status updates on recent measurement infrastructures and community feedback; measurement of poorly configured infrastructure; and recent successes and approaches to evolving challenges in geolocation, topology, route hijacking, and performance measurement. We review highlights of the discussions of the talks. This report does not cover every topic discussed; for more details, see the workshop presentations linked from the workshop web page: http://www.caida.org/workshops/aims/1904/.

Download the full article