I am the assigned Gen-ART reviewer for this draft. The General Area Review Team (Gen-ART) reviews all IETF documents being processed by the IESG for the IETF Chair. Please treat these comments just like any other last call comments. For more information, please see the FAQ at .

Document: draft-ietf-bmwg-vswitch-opnfv-??
Reviewer: Dan Romascanu
Review Date: 2017-05-11
IETF LC End Date: 2017-05-15
IESG Telechat date: Not scheduled for a telechat

Summary: Almost Ready.

This document describes the progress of the Open Platform for NFV (OPNFV) project on virtual switch performance "VSPERF". That project reuses the BMWG framework and specifications to benchmark virtual switches implemented in general-purpose hardware. Some differences with the benchmarking of specialized HW platforms are identified, and they may become work items for BMWG in the future.

It is a well-written and clear document, but I have reservations about it being published as an RFC, and I cannot find coverage for it in the WG charter. I am also concerned that parts of the methodology used by OPNFV break the BMWG principles, especially repeatability and 'black-box', and this is not articulated clearly enough in the document.

Major issues:

1. It is not clear to me why this document needs to be published as an RFC. The introduction says:

   'This memo describes the progress of the Open Platform for NFV (OPNFV) project on virtual switch performance "VSPERF". This project intends to build on the current and completed work of the Benchmarking Methodology Working Group in IETF, by referencing existing literature.'

   Why should the WG and the IESG invest resources in publishing this? Why is an I-D or an Independent Stream RFC not sufficient? The WG charter says:

   'VNF and Related Infrastructure Benchmarking: Benchmarking Methodologies have reliably characterized many physical devices. This work item extends and enhances the methods to virtual network functions (VNF) and their unique supporting infrastructure. A first deliverable from this activity will be a document that considers the new benchmarking space to ensure that common issues are recognized from the start, using background materials from industry and SDOs (e.g., IETF, ETSI NFV).'

   I do not believe that this document covers the intent of the charter, as it focuses on one organization only.

2. Section 3 mentions 'repeatability', while acknowledging that in a virtual environment there is no guarantee and actually no way to know what other applications are being run. Measuring parameters such as the ones listed in 3.3 provides only part of the answer, and these are parameters internal to the SUT. Also, the different deployment scenarios in Section 4 require different configurations for the SUT, thus breaking the 'black-box' principle. I believe a clearer explanation is needed of why BMWG specifications are appropriate and how comparisons can be made when repeatability cannot be ensured and measurements depend on parameters internal to the SUT.

Minor issues:

1. Some of the tests mentioned in Section 4 have no prior or in-progress work in the IETF: Control Path and Datapath Coupling Tests, Noisy Neighbour Tests, and characterization of acceleration technologies. If new work is needed or proposed to be added to the BMWG scope and framework, it would be useful for BMWG to list these items separately.

Nits/editorial comments:
1. What is called 'Deployment scenarios' from the VS perspective in Section 4 describes, in fact, different configurations of the SUT in BMWG terms. It would be better to split this second part of Section 4 into a separate section. If it belongs to an existing section, it fits better in Section 3 than in Section 4.