Carol Davids, Vijay K. Gurbani, Gaston Ormazabal, Andrew Rollins, Kundan Singh, Radu State.
In this article we describe the discussion and conclusions of the “Roundtable on Real-Time Communications Research: 5G and Real-Time Communications — Topics for Research” held at the Illinois Institute of Technology’s Real-Time Communications Conference and Expo, co-located with the IPTComm Conference, October 5-8, 2015.
On December 16-17, 2015, we hosted the 5th interdisciplinary Workshop on Internet Economics (WIE) at the San Diego Supercomputer Center at UC San Diego. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to inform current and emerging regulatory and policy debates.
The FCC’s latest open Internet order ostensibly changes the landscape of regulation by using Title II as its basis. This year we discussed the implications of Title II (common-carrier-based) regulation for issues we have looked at in the past, as well as those shaping current policy conversations. Discussion topics included differentiated services on the public Internet, evolving approaches to interconnection across different segments of the ecosystem (e.g., content to access), QoE and QoS measurement techniques and their limitations, interconnection measurement and modeling challenges and opportunities, and transparency. The format was a series of focused sessions, in which presenters gave 10-minute talks on relevant issues, followed by in-depth discussions. This report highlights the discussions and presents relevant open research questions identified by participants. The slides presented and this report are available at http://www.caida.org/workshops/wie/1512/
Network management is currently undergoing massive changes toward more flexible management of complex networks. Recent efforts include slicing data-plane resources via network (link) virtualization and applying operating-system design principles to Software Defined Networking to rethink network management. Driven by network operators, network management principles are envisioned to improve even further by virtualizing network (middlebox) functions. The resulting Network Functions Virtualization (NFV) paradigm abstracts network functions from dedicated hardware to virtual machines running on commodity hardware. This change in the design of carrier networks is inspired by the success of virtualization in the server market. By deploying NFV, network operators expect to achieve benefits similar to those of the server market and elastic cloud services, e.g., flexible and dynamic service provisioning, increased resource utilization, improved energy efficiency, and ultimately decreased operational costs. Despite these efforts, the ability of NFV to satisfy performance demands is often questioned. Tackling these challenges opens a set of research questions that have received too little attention in the current discussion and are of particular relevance to the SIGCOMM community. In this position paper, we therefore provide an overview of the current state of the art and open research questions.
Algorithms proposed in networking research papers are widely used in areas including Congestion Control, Routing, Traffic Engineering, and Load Balancing. In this paper, we present algorithmic advancements that have impacted the practice of Congestion Control (CC) in datacenters and on the Internet. Where possible, we also describe negative examples: ideas that looked promising on paper or in simulations but performed poorly in practice. We conclude the paper with observations on the characteristics these ideas shared in moving from research to practical impact.
This report is a brief summary of the Internet Research Task Force and Internet Society workshop on Research and Applications of Internet Measurements (RAIM) in co-operation with ACM SIGCOMM that took place on Saturday, October 31, 2015 in Yokohama, Japan. The workshop provided an opportunity for researchers and practitioners in the field of Internet measurements to become acquainted and share their work.
The Applications and Services in the Year 2021 workshop was held on January 27-28, 2016 in Washington, DC, with funding support from the National Science Foundation (NSF). The goal of the workshop was to foster discussions that bring together applications researchers in multidisciplinary areas and developers/operators of research infrastructures at the national, regional, university, and city levels. Discussions were organized to identify grand-challenge applications and to obtain the community's voice and consensus on the key issues relating to applications and services that might be delivered by advanced infrastructures in the decade beginning in 2020. The timing and organization of the workshop are significant because today's digital infrastructure is undergoing deep technological changes, and new paradigms that pose fundamental challenges are rapidly taking shape in both the core and edge domains. The key outcomes of the discussions were targeted at enhancing the quality of people's lives while addressing important national priorities, leveraging today's cutting-edge applications such as the Internet of Things, Big Data Analytics, Robotics, the Industrial Internet, and Immersive Virtual/Augmented Reality. This report summarizes the workshop's efforts to bring together diverse groups for targeted short and long talks, to share the latest advances, and to identify gaps in the community's 'research' and 'infrastructure' needs that require future NSF funding.
Selecting a technical program committee (PC) for a conference or workshop can be an intimidating and time-consuming process. PC selection needs to balance several potential considerations, e.g., industry vs. academic participation, inclusion of under-represented communities, and ensuring "coverage" of topic areas, among others. This paper presents the design of an open-source tool called EZ-PC, which formulates these considerations as a simple constraint satisfaction problem to help PC chairs systematize the selection process. We report on some of the features we have incorporated and our experiences using the tool.
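The core idea of formulating PC selection as constraint satisfaction can be illustrated with a minimal sketch. This is not EZ-PC's actual implementation; the candidate names, topics, and constraints below are hypothetical, and real tools would use a proper CSP solver rather than brute-force search.

```python
# Hypothetical sketch: pick a committee of a given size such that
# (1) both industry and academia are represented, and
# (2) every required topic area is covered by at least one member.
from itertools import combinations

CANDIDATES = {
    "alice": {"sector": "academia", "topics": {"routing", "security"}},
    "bob":   {"sector": "industry", "topics": {"measurement"}},
    "carol": {"sector": "academia", "topics": {"sdn", "nfv"}},
    "dave":  {"sector": "industry", "topics": {"routing", "sdn"}},
    "erin":  {"sector": "academia", "topics": {"measurement", "nfv"}},
}
REQUIRED_TOPICS = {"routing", "security", "measurement", "sdn", "nfv"}

def select_pc(size):
    """Return the first committee of `size` members satisfying all constraints,
    or None if no such committee exists."""
    for committee in combinations(CANDIDATES, size):
        sectors = {CANDIDATES[name]["sector"] for name in committee}
        covered = set().union(*(CANDIDATES[name]["topics"] for name in committee))
        if sectors == {"academia", "industry"} and REQUIRED_TOPICS <= covered:
            return committee
    return None

print(select_pc(3))  # prints ('alice', 'bob', 'carol')
```

In practice the constraints (diversity targets, topic coverage quotas, conflict-of-interest rules) would be expressed declaratively and handed to a constraint solver, which scales far better than enumerating committees.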
Alberto Dainotti, Ethan Katz-Bassett, Xenofontas Dimitropoulos.
Internet routes, controlled by the Border Gateway Protocol (BGP), carry our communication and our commerce, yet many aspects of routing are opaque even to network operators, and BGP is known to contribute to performance, reliability, and security problems. The research and operations communities have developed a set of tools and data sources for understanding and experimenting with BGP, and in February 2016 we organized the first BGP Hackathon, themed around live measurement and monitoring of Internet routing. The Hackathon brought together students, researchers, operators, providers, policymakers, and funding agencies, working on projects to measure, visualize, and improve routing, or to improve the tools we use to study it. This report describes the tools used at the Hackathon and presents an overview of the projects. The Hackathon was a success, and we look forward to future iterations.