This issue contains three editorial contributions. Two of these editorials are reports from workshops related to Internet measurements. The first one, The 8th Workshop on Active Internet Measurements (AIMS-8) Report, summarises the results of the Active Internet Measurements workshop that was held in February 2016. The second one, Report from the 6th PhD School on Traffic Monitoring and Analysis (TMA), summarises a doctoral school that was held in April 2016. Finally, “Resource Pooling” for Wireless Networks: Solutions for the Developing World is a position paper that argues for the utilisation of the resource pooling principle when designing network solutions in developing countries.
Before jumping to those papers, I’d like to point out several important changes to the Computer Communication Review submission process. These changes were already announced during the community feedback session at SIGCOMM’16. CCR continues to accept both editorial submissions and technical papers. The main change affects the technical papers. There have been many discussions in our community and elsewhere on the reproducibility of research results. SIGCOMM has already taken some actions to encourage the reproducibility of research results. A well-known example is the best dataset award at the Internet Measurement Conference.

CCR will go one step further by accepting technical papers that are longer than six pages, provided that these papers are replicable. According to the recently accepted ACM Policy on Result and Artifact Review and Badging, CCR will consider a paper as replicable if other researchers can obtain results similar to the authors’ by using the artifacts (software, dataset, …) used by the original authors of the paper. This implies that the authors of long papers will have to release the artifacts that are required to replicate most of the results of their papers. For these replicable papers, there is no a priori page limit, but the acceptance bar will rise with the paper’s length.

These replicable papers will be reviewed in two phases. The first phase will consider the technical merits of the paper, without analysing the provided artifacts. If the outcome of this first phase is positive, then the artifacts will be evaluated and the papers will be tagged with the badges defined by the ACM Policy on Result and Artifact Review and Badging. The public review for the replicable papers will contain a summary of the technical reviews and a summary of the evaluation of the artifacts.