Author Archives: Steve Uhlig

Perceived Performance of Top Retail Webpages In the Wild

Qingzhu Gao, Prasenjit Dey, Parvez Ahammad
Abstract

Clearly, no one likes webpages with poor quality of experience (QoE). Being perceived as slow or fast is a key element in the overall perceived QoE of web applications. While extensive effort has been put into optimizing web applications (both in industry and academia), little work exists on characterizing which aspects of the webpage loading process truly influence the human end-user's perception of a page's speed. In this paper we present SpeedPerception, a large-scale web performance crowdsourcing framework focused on understanding the perceived loading performance of above-the-fold (ATF) webpage content. Our end goal is to create free, open-source benchmarking datasets to advance the systematic analysis of how humans perceive the webpage loading process.

In Phase-1 of our SpeedPerception study using the Internet Retailer Top 500 (IR 500) websites, we found that commonly used navigation metrics such as onLoad and Time To First Byte (TTFB) fail (less than 60% match) to represent the majority human perception when comparing the speed of two webpages. We present a simple 3-variable machine learning model that explains the majority of end-user choices better (with 87 ± 2% accuracy). In addition, our results suggest that the time end-users need to evaluate the relative perceived speed of a webpage is far less than the time of its visualComplete event.
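The pairwise evaluation methodology can be made concrete with a small sketch. The data layout and helper below are hypothetical (not the SpeedPerception code): each trial shows two page-load videos, humans vote on which felt faster, and a metric "matches" when it orders the pair the same way as the majority vote:

```python
# Each trial pairs two webpage load videos; humans vote which felt faster.
# A metric "matches" when the page with the smaller metric value is the
# one the majority of voters picked.

def metric_match_rate(trials, metric):
    """Fraction of pairwise trials where the metric agrees with the
    majority human vote. `trials` is a list of dicts with per-page
    metric values and vote counts (hypothetical schema)."""
    matches = 0
    for t in trials:
        human_pick = "a" if t["votes_a"] > t["votes_b"] else "b"
        metric_pick = "a" if t[metric]["a"] < t[metric]["b"] else "b"
        matches += (human_pick == metric_pick)
    return matches / len(trials)

trials = [
    {"votes_a": 80, "votes_b": 20,
     "onload": {"a": 2.1, "b": 3.5}, "ttfb": {"a": 0.9, "b": 0.4}},
    {"votes_a": 30, "votes_b": 70,
     "onload": {"a": 4.0, "b": 2.8}, "ttfb": {"a": 0.3, "b": 0.6}},
]
print(metric_match_rate(trials, "onload"))  # agrees on both pairs -> 1.0
print(metric_match_rate(trials, "ttfb"))    # disagrees on both -> 0.0
```

A metric scoring below roughly 60% on such pairs is barely better than a coin flip at predicting which page people perceive as faster.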

Download the full article DOI: 10.1145/3155055.3155062

Hierarchical IP flow clustering

Kamal Shadi, Preethi Natarajan, Constantine Dovrolis
Abstract

The analysis of flow traces can help to understand a network's usage patterns. We present a hierarchical clustering algorithm for network flow data that can summarize terabytes of IP traffic into a parsimonious tree model. The method automatically finds an appropriate scale of aggregation so that each cluster represents a local maximum of the traffic density from a block of source addresses to a block of destination addresses. We apply this clustering method on NetFlow data from an enterprise network, find the largest traffic clusters, and analyze their stationarity across time. The existence of heavy-volume clusters that persist over long time scales can help network operators to perform usage-based accounting, capacity provisioning, and traffic engineering. Also, changes in the layout of hierarchical clusters can facilitate the detection of anomalies and significant changes in the network workload.
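As a rough illustration of the idea (not the paper's algorithm, which chooses the aggregation scale adaptively and builds a full tree), flows can be aggregated into source-block/destination-block clusters at a fixed prefix length and the heaviest clusters extracted:

```python
import ipaddress
from collections import defaultdict

def aggregate(flows, prefix_len=24):
    """Sum flow bytes into (src block, dst block) clusters at a fixed
    aggregation scale (the paper picks the scale adaptively; we fix it
    to /24 purely for illustration)."""
    clusters = defaultdict(int)
    for src, dst, nbytes in flows:
        src_blk = ipaddress.ip_network(f"{src}/{prefix_len}", strict=False)
        dst_blk = ipaddress.ip_network(f"{dst}/{prefix_len}", strict=False)
        clusters[(str(src_blk), str(dst_blk))] += nbytes
    return clusters

flows = [
    ("10.0.1.5", "192.168.7.9", 5000),
    ("10.0.1.77", "192.168.7.200", 7000),   # same /24 pair as above
    ("10.0.2.1", "172.16.0.4", 1200),
]
heavy = max(aggregate(flows).items(), key=lambda kv: kv[1])
print(heavy)  # (('10.0.1.0/24', '192.168.7.0/24'), 12000)
```

A persistent heavy cluster like the one above is exactly the kind of structure that supports usage-based accounting and capacity planning.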

Download the full article DOI: 10.1145/3155055.3155063

Contain-ed: An NFV Micro-Service System for Containing e2e Latency

Amit Sheoran, Puneet Sharma, Sonia Fahmy, Vinay Saxena
Abstract

Network Functions Virtualization (NFV) has enabled operators to dynamically place and allocate resources for network services to match workload requirements. However, unbounded end-to-end (e2e) latency of Service Function Chains (SFCs) resulting from distributed Virtualized Network Function (VNF) deployments can severely degrade performance. In particular, SFC instantiations with inter-data center links can incur high e2e latencies and Service Level Agreement (SLA) violations. These latencies can trigger timeouts and protocol errors with latency-sensitive operations.

Traditional solutions to reduce e2e latency involve physical deployment of service elements in close proximity. These solutions are, however, no longer viable in the NFV era. In this paper, we present our solution that bounds the e2e latency in SFCs and inter-VNF control message exchanges by creating micro-service aggregates based on the affinity between VNFs. Our system, Contain-ed, dynamically creates and manages affinity aggregates using light-weight virtualization technologies like containers, allowing them to be placed in close proximity and hence bounding the e2e latency. We have applied Contain-ed to the Clearwater IP Multimedia Subsystem and built a proof-of-concept. Our results demonstrate that, by utilizing application and protocol specific knowledge, affinity aggregates can effectively bound SFC delays and significantly reduce protocol errors and service disruptions.
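To illustrate the affinity-aggregate idea, here is a hypothetical sketch (not Contain-ed's actual placement logic): VNFs whose pairwise message-exchange affinity exceeds a threshold are merged, union-find style, into groups that would then be co-located as container aggregates:

```python
def affinity_groups(vnfs, affinity, threshold):
    """Union-find grouping of VNFs whose pairwise affinity score
    (e.g. normalized control-message exchange rate) exceeds a
    threshold; each group becomes a candidate container aggregate
    placed on one host. Illustrative only."""
    parent = {v: v for v in vnfs}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    for (a, b), score in affinity.items():
        if score >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for v in vnfs:
        groups.setdefault(find(v), []).append(v)
    return sorted(sorted(g) for g in groups.values())

# Hypothetical IMS-like components and affinity scores:
vnfs = ["pcscf", "scscf", "hss", "media"]
affinity = {("pcscf", "scscf"): 0.9, ("scscf", "hss"): 0.8,
            ("hss", "media"): 0.1}
print(affinity_groups(vnfs, affinity, threshold=0.5))
# [['hss', 'pcscf', 'scscf'], ['media']]
```

Chatty signaling components end up in one aggregate, so their message exchanges never cross an inter-data-center link.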

Download the full article DOI: 10.1145/3155055.3155064

 

How to Measure the Killer Microsecond

Mia Primorac, Edouard Bugnion, Katerina Argyraki
Abstract

Datacenter-networking research requires tools to both generate traffic and accurately measure latency and throughput. While hardware-based tools have long existed commercially, they are primarily used to validate ASICs and lack flexibility, e.g. to study new protocols. They are also too expensive for academics. The recent development of kernel-bypass networking and advanced NIC features such as hardware timestamping have created new opportunities for accurate latency measurements.

This paper compares these two approaches, and in particular whether commodity servers and NICs, when properly configured, can measure the latency distributions as precisely as specialized hardware.

Our work shows that well-designed commodity solutions can capture subtle differences in the tail latency of stateless UDP traffic. We use hardware devices as the ground truth, both to measure latency and to forward traffic. We compare this ground truth with observations that combine five latency-measuring clients and five different port-forwarding solutions and configurations. State-of-the-art software such as MoonGen, which uses NIC hardware timestamping, provides sufficient visibility into tail latencies to study the effect of subtle operating-system configuration changes. We also observe that the kernel-bypass-based TRex software, which relies only on the CPU to timestamp traffic, can provide solid results when NIC timestamps are not available for a particular protocol or device.
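The tail-latency comparisons reduce to computing high percentiles over timestamped samples. A minimal sketch with hypothetical data (nearest-rank percentiles, latencies in microseconds) shows why the tail, not the median, is where measurement quality matters:

```python
import math

def percentile(samples, p):
    """Nearest-rank p-th percentile of latency samples."""
    s = sorted(samples)
    k = math.ceil(p / 100 * len(s)) - 1
    return s[k]

# Two hypothetical measurement runs of the same device under test:
baseline = [10, 11, 10, 12, 11, 10, 11, 10, 12, 95]   # one 95us outlier
tuned    = [10, 11, 10, 12, 11, 10, 11, 10, 12, 13]

for name, run in [("baseline", baseline), ("tuned", tuned)]:
    print(name, "p50:", percentile(run, 50), "p99:", percentile(run, 99))
```

Both runs have an identical median, so only a measurement tool with microsecond-accurate timestamps can distinguish them at the 99th percentile.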

Download the full article DOI: 10.1145/3155055.3155065

Wi-Stitch: Content Delivery in Converged Edge Networks

Aravindh Raman, Nishanth Sastry, Arjuna Sathiaseelan, Jigna Chandaria, Andrew Secker
Abstract

Wi-Fi, the most commonly used access technology at the very edge, supports download speeds that are orders of magnitude faster than the average home broadband or cellular data connection. Furthermore, it is extremely common for users to be within reach of their neighbours’ Wi-Fi access points. Given the skewed nature of interest in content items, it is likely that some of these neighbours are interested in the same items as the users. We sketch the design of Wi-Stitch, an architecture that exploits these observations to construct a highly efficient content sharing infrastructure at the very edge and show through analysis of a real workload that it can deliver substantial (up to 70%) savings in network traffic. The Wi-Stitch approach can be used both by clients of fixed-line broadband, as well as mobile devices obtaining indoors access in converged networks.
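The savings argument can be sketched with a toy model (hypothetical names and data, not the paper's workload analysis): given a household's request stream and the set of items a neighbour has already fetched, the avoidable backhaul traffic is simply the cache-hit fraction:

```python
def traffic_savings(requests, neighbour_cache):
    """Fraction of a household's requests that could be served from
    content already fetched by a neighbour over local Wi-Fi instead
    of the broadband backhaul (toy model)."""
    hits = sum(1 for item in requests if item in neighbour_cache)
    return hits / len(requests)

# Skewed interest: popular items dominate the request stream.
requests = ["news", "show1", "show2", "news", "show1", "clip9"]
neighbour_cache = {"news", "show1"}
print(traffic_savings(requests, neighbour_cache))  # 4 of 6 requests hit
```

With realistically skewed (Zipf-like) popularity, a handful of neighbours covers most popular items, which is how savings in the range reported by the paper become plausible.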

Download the full article DOI: 10.1145/3155055.3155067

DICE: Dynamic Multi-RAT Selection in the ICN-enabled Wireless Edge

Gaurav Panwar, Reza Tourani, Travis Mick, Abderrahmen Mtibaa, Satyajayant Misra
Abstract

The rapid increase in mobile device users, together with their bandwidth and latency demands, has been accompanied by continuous growth in devices' processing capabilities, storage, and wireless connectivity options. Multi-radio access technology (multi-RAT) connectivity has been proposed to satisfy mobile users' increasing needs. The Information-Centric Networking (ICN) paradigm is better tuned (than the current Internet Protocol approach) to support multi-RAT communications. ICN eschews the connection-based content retrieval model used today and has desirable features such as data naming, in-network caching, and device mobility, making it a paradigm ripe for exploration.

We propose DICE, an ICN forwarding strategy that helps a device dynamically select a subset of its multi-RAT interfaces for communication. DICE assesses the state of edge links and network congestion to determine the minimum number of interfaces required to perform data delivery. We perform simulations to compare DICE's performance with the bestroute2 and multicast strategies (part of the named data networking simulator, ndnSIM). We show that DICE offers the best of both worlds: a higher delivery ratio (by 0.2–2 times) and much lower overhead (by 2–8 times) for different packet rates.
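As a loose illustration of interface-subset selection (not the actual DICE strategy, which uses richer edge-link and congestion state), one can greedily pick the fewest interfaces whose congestion-discounted capacity covers the current demand:

```python
def select_interfaces(interfaces, demand):
    """Greedily pick the fewest interfaces whose estimated usable
    capacity (capacity scaled down by observed congestion) covers
    the demand; returns all of them if it cannot be fully covered.
    Illustrative sketch only, with hypothetical interface metrics."""
    ranked = sorted(interfaces,
                    key=lambda i: i["capacity"] * (1 - i["congestion"]),
                    reverse=True)
    chosen, covered = [], 0.0
    for iface in ranked:
        chosen.append(iface["name"])
        covered += iface["capacity"] * (1 - iface["congestion"])
        if covered >= demand:
            break
    return chosen

ifaces = [
    {"name": "wifi", "capacity": 100.0, "congestion": 0.5},  # 50 usable
    {"name": "lte",  "capacity": 60.0,  "congestion": 0.1},  # 54 usable
    {"name": "bt",   "capacity": 3.0,   "congestion": 0.0},  #  3 usable
]
print(select_interfaces(ifaces, demand=40.0))   # ['lte']
print(select_interfaces(ifaces, demand=100.0))  # ['lte', 'wifi']
```

Using only as many interfaces as the demand requires is what keeps overhead low while preserving the delivery ratio.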

Download the full article DOI: 10.1145/3155055.3155066

 

Report on Networking and Programming Languages 2017

Nikolaj Bjørner, Marco Canini, Nik Sultana
Abstract

The third workshop on Networking and Programming Languages, NetPL 2017, was held in conjunction with SIGCOMM 2017. The workshop series attracts invited speakers from academia and industry and a selection of contributed abstracts for short presentations. NetPL brings together researchers from the networking community and researchers from the programming languages and verification communities. The workshop series is a timely forum for exciting trends, technological and scientific advances in the intersection of these communities.

We describe some of the highlights from the invited talks through the lens of three trends: Advances in network machine architectures, network programming abstractions, and network verification.

NetPL included five invited speakers: four from academia and one from industry. The program contained six contributed papers out of eight submitted for presentation. The workshop organizers reviewed the abstracts for quality and scope. A total of 42 registrations were received, and attendees filled the lecture room to capacity.

Slides and abstracts from all talks are available from the workshop home page: http://conferences.sigcomm.org/sigcomm/2017/workshop-netpl.html. Videos of the presentations are available in the NetPL YouTube channel: https://www.youtube.com/channel/UCqU8E2n4MHthZUVb1xK2nRQ.

Download the full article DOI: 10.1145/3155055.3155061

The July 2017 Issue

Computer Communication Review (CCR) continues to evolve. As announced earlier, we now accept technical articles that are longer than six pages provided that the authors release software or datasets to help readers repeat or reproduce the main results described in the paper. These articles are reviewed on the basis of their technical merits, and the supplied artifacts are also peer-reviewed. The summaries of those two reviews are attached to each accepted paper. All the technical papers that appear in this issue provide artifacts.

The CCR Online website, https://ccronline.sigcomm.org, has been enhanced with a comments section to encourage interactions between readers and authors. We have also launched the Community Comments section on CCR Online. This section contains preprints of submitted papers while they are being reviewed. Readers are encouraged to provide constructive comments on these papers. Authors have to opt in to have their papers listed in this section.

In 2003, David Clark and his colleagues proposed a vision of a knowledge plane for the Internet. In our first technical paper, Albert Mestres and his colleagues revisit this vision and propose Knowledge-Defined Networking. The paper argues that Software-Defined Networking (SDN) and Network Analytics (NA) are two subdomains where interactions between Artificial Intelligence and networking would bring many benefits. They provide both use cases and simple experimental results. The available software and datasets could serve as a starting point for researchers willing to explore this emerging field.

Our second technical paper tackles a different topic. Ivan Voitalov and his colleagues propose a new routing scheme in Geohyperbolic Routing and Addressing Schemes. A key concern when designing a routing scheme is the scalability of the solution and the size of the forwarding tables. The solution proposed in this paper couples network topology design with a specific addressing scheme so that forwarding tables and update messages are minimised. The proposed solution is evaluated by simulations and the authors release their software and datasets.

In our third technical paper, On the Evolution of ndnSIM: an Open-Source Simulator for NDN Experimentation, Spyridon Mastorakis et al. describe the ndnSIM network simulator. The first version of this simulator was released in 2012 and it has continuously evolved since. The paper describes the evolution of the simulator, some results and the lessons learned by the authors that would probably apply to other open source projects.

Two editorials also appear in this issue. The first one, co-authored by kc claffy and David Clark, summarises the 7th interdisciplinary Workshop on Internet Economics (WIE) that was held in December 2016. In our second editorial, Using Networks to Teach About Networks (Report on Dagstuhl Seminar #17112), Jürgen Schönwälder and his colleagues summarise a recent Dagstuhl seminar that was entirely devoted to network education.

I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online or by email at ccr-editor at sigcomm.org.

Using Networks to Teach About Networks (Report on Dagstuhl Seminar #17112)

Jürgen Schönwälder, Timur Friedman, Aiko Pras
Abstract

This report summarizes a two-and-a-half-day Dagstuhl seminar on “Using Networks to Teach About Networks”. The seminar brought together people with mixed backgrounds in order to exchange experiences gained with different approaches to teaching computer networking. Besides the obvious question of what to teach, special attention was given to the questions of how to teach and which tools and infrastructures can be used effectively for teaching purposes today.

Download the full article

Workshop on Internet Economics (WIE2016) Final Report

kc claffy, David Clark.
Abstract

On December 8-9, 2016, CAIDA hosted the 7th interdisciplinary Workshop on Internet Economics (WIE) at UC San Diego's Supercomputer Center. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to inform current and emerging regulatory and policy debates. This year we first returned to the list of aspirations we surveyed at the 2014 workshop, and described the challenges of mapping them to actions and measurable progress. We then reviewed evolutionary shifts in traffic, topology, business, and regulatory models, and (our best understanding of) the economics of the ecosystem. These discussions inspired an extended thought experiment for the second day of the workshop: outlining a new telecommunications legislative framework, including proposing a set of goals and scope for such regulation, and a minimal list of sections required to pursue and measure progress toward those goals. The format was a series of focused sessions, where presenters prepared 10-minute talks on relevant issues, followed by in-depth discussions. This report highlights the discussions and presents relevant open research questions identified by participants. Slides presented and this report are available at http://www.caida.org/workshops/wie/1612/.

Download the full article