I have reviewed draft-ietf-clue-framework-24 as part of the Operational directorate's ongoing effort to review all IETF documents being processed by the IESG. These comments were written with the intent of improving the operational aspects of the IETF drafts. Comments that are not addressed in last call may be included in AD reviews during the IESG review. Document editors and WG chairs should treat these comments just like any other last call comments.

This is a Standards Track document of the CLUE WG, and it defines a framework for a protocol to enable devices in a telepresence conference to interoperate (mainly by exchanging spatial data and device capabilities). This rather long I-D is quite clear and easy to understand, with multiple examples.

I found nothing in this framework document that could cause operational issues. CLUE devices are able to detect other CLUE-enabled devices, which is of course good for migration/interoperation.

Unrelated to this specific I-D (so feel free to ignore) and more about the other documents (data model & protocol): protocol extensions are briefly mentioned in Section 11 of this framework document, but can the WG take special care in versioning the CLUE protocol (as proposed in the current protocol I-D), as well as in allowing extension of the finite set of values for some information? For example, the "view" attribute (Section 7.1.1.8) has only a limited set of values and there appears to be no way to extend it; is there an intent to open an IANA registry for those values? Probably not, as the IANA considerations are 'none'.

Does the CLUE WG (or another WG) intend to develop a YANG model? It does not appear in the CLUE WG charter.

As a small nit, I would suggest moving the long Section 12 (informative examples) to an appendix.

Now, just for the sake of my curiosity, how do the spatial relationships work when one or several media capture devices are mobile (smart phones, 'Go Pro', ...)?

Hope this helps -éric