This January 2024 issue contains two technical papers.
The first technical paper, Planter: Rapid Prototyping of In-Network Machine Learning Inference, by Changgang Zheng and colleagues, proposes a framework that streamlines the deployment of machine learning models across a wide range of hardware devices, such as Intel Tofino, Xilinx/AMD Alveo, and NVIDIA BlueField-2. The authors discuss the challenges of deploying machine learning algorithms onto different programmable devices.
The second technical paper, iip: an integratable TCP/IP stack, by Kenichi Yasukata, presents a TCP/IP stack that aims to be a handy option for developers and researchers who need a high-performance TCP/IP implementation for their projects. Existing performance-optimized TCP/IP stacks often incur tremendous integration complexity, while existing portability-aware TCP/IP stacks suffer significant performance limitations. This paper introduces an API design that enables easy integration and good performance simultaneously.
I hope that you will enjoy reading this new issue and welcome comments and suggestions on CCR Online (https://ccronline.sigcomm.org) or by email at ccr-editor at sigcomm.org.