CURRENT_MEETING_REPORT_

Reported by Dave Raggett/Hewlett-Packard

Minutes of the HyperText Transfer Protocol Working Group (HTTP)

This group met as the HTTP BOF at the 31st IETF on Wednesday, 7 December, and has since become a working group. Further information on HTTP is currently available from the URLs:

   http://info.cern.ch/hypertext/WWW/Protocols/Overview.html
   http://www.ics.uci.edu/pub/ietf/http/

History

This BOF followed an earlier one at the World Wide Web Conference in Chicago in October, at which it was decided to pursue setting up a working group for the World Wide Web hypertext transfer protocol. The group would be set up to standardise existing practice, and then to work on improved performance, security and payments, and other improvements such as support for transaction mechanisms for distributed updates.

The meeting started with a summary by the chair of the steps leading up to the BOF. HTTP originated as a very simple protocol indeed. This version is now known as HTTP/0.9. It was soon extended to include a MIME-style wrapper to convey the content type and encoding of the returned document. A number of RFC 822 style headers are used to support negotiation and convey meta-information, as well as a basic authentication mechanism. This version is known as HTTP/1.0 and is in very widespread use. The on-line documentation at CERN is no longer adequate.

Other pressures on HTTP include the need to improve performance, to avoid the penalties associated with making separate connections for the document and each of the inlined images. We also need to standardise ways of adding support for authentication and encryption to support privacy and commercial services. Setting up an IETF working group is now a matter of urgency to ensure the continued health of both HTTP and the Web.

Internet-Draft for HTTP/1.0

The meeting continued with a presentation by Henrik Frystyk Nielsen and Roy Fielding on the Internet-Draft for HTTP/1.0 (draft-fielding-http-spec-00.ps), together with some ideas for extensions for consideration in HTTP/1.1. The consensus of the meeting was that the Internet-Draft for HTTP/1.0 be proposed for the standards track as soon as possible. Revisions to the HTTP/1.0 draft specification are being discussed on the http-wg mailing list.

The extensions proposed for the 1.1 revision were:

   o Session method

      - Intended for short-term sessions only (about 10 second request
        timeout), though specialised servers may use a longer timeout
      - Supports multiple transactions over a single connection
      - Allows session-long negotiation of Accept-*, authentication, and
        privacy extensions
      - More than just an MGET

   o Packetised Content-Encoding

   o Better access authentication

   o Base-URL as a new object header

   o Allow relative time in Expires header (seconds)

Performance Problems with HTTP/1.0

Simon Spero talked briefly on the performance problems with HTTP/1.0. The current protocol uses a separate connection for the document and each inlined image. Each connection requires at least two round-trip times, and in practice more due to the lengthy Accept headers in common use. Furthermore, TCP/IP uses a slow-start mechanism to avoid congestion and gradually increases the throughput to match the actual bandwidth available. As a result, most HTTP transactions operate at a reduced bandwidth.

Simon reported measurements indicating that, over a congested link from Bristol, England to North Carolina, for a representative home page the current protocol only uses about 10% of the bandwidth then available (as measured by a bulk transfer over a single connection). The approach used in the Netscape browser of opening multiple concurrent TCP/IP connections gives better performance, but still fails to utilise the full bandwidth. One explanation is that most TCP/IP implementations fail to share congestion information between connections to the same site. The approach also leads to problems with many more connections left in the TIME-WAIT state at the server.
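To make that overhead concrete, here is a minimal sketch (an illustration only, not code discussed at the meeting) of an HTTP/1.0-style client that fetches a page and its inlined images over one fresh TCP connection per object. The host and paths are invented; the point is simply that every object pays connection setup, and slow start, before any data flows.

   import socket, time

   HOST = "www.example.org"          # hypothetical server
   PATHS = ["/home.html", "/img1.gif", "/img2.gif", "/img3.gif"]

   def fetch(path):
       """Fetch one object over its own TCP connection, HTTP/1.0 style."""
       t0 = time.time()
       s = socket.create_connection((HOST, 80))   # setup costs a round trip
       s.sendall(f"GET {path} HTTP/1.0\r\n\r\n".encode("ascii"))
       body = b""
       while chunk := s.recv(4096):               # HTTP/1.0: server closes when done
           body += chunk
       s.close()
       return time.time() - t0, len(body)

   # One connection per object: four objects mean four connection setups and
   # four slow starts before the link ever reaches its available bandwidth.
   for path in PATHS:
       elapsed, size = fetch(path)
       print(f"{path}: {size} bytes in {elapsed:.3f}s")

Reusing a single connection, as the session method and keep-alive ideas discussed below propose, would remove all but the first of those setup costs.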
Discussion on HTTP Issues

Larry Masinter asked whether proposals for extensions such as keep-alive were based on experience or speculation. He prefers MIME boundary markers to packetisation schemes for determining message boundaries. Larry also suggested we should consider richer mechanisms for determining whether a document has expired; for instance, downloading conditions in SafeTCL to be evaluated by the client.

Alex Hopmann argued in favor of reusing the connection, e.g., for a series of images, and increasing the use of proxy servers. He has tried out a session method scheme, and has a written proposal for this and a separate proposal for a notification mechanism. (For more information send e-mail to Alex.)

Simon Spero described work on log analysis which showed clear groupings of requests at 3 to 4 images per document. He mentioned problems in analysing logs due to accesses by Netscape browsers, which initiate connections for images concurrently and then cancel them as the user surfs to the next document.

Tim Berners-Lee argued that users connecting over phone lines need browsers that can do things concurrently, with dynamic changes in priority as the user changes his/her actions; e.g., the browser should keep the pipe full by following links and then abort if the user does something else. He also raised the issue of abstraction layers for HTTP.

Brian Behlendorf discussed the need for user authentication and realms. He wants to be able to distinguish accesses to a given machine according to the alias used for the host name, and advocates using the full URL in the GET request.

Digital has collected some 9 gigabytes of log data recording the requested URL and the duration each connection was kept open. A paper analysing this data was presented at the World Wide Web Fall Conference held in October 1994, and can be found in the on-line proceedings.

Someone asked ``what does HTTP do in a couple of words?'' It is currently used for a wide variety of things -- should these be unbundled? Roy Fielding answered that HTTP is an extensible protocol used for information transfer. Tim Berners-Lee replied that this was a good question, and asked whether MIME is appropriate for small messages (for on-line access rather than e-mail).

Dave Crocker agreed with the need for performance improvements, and said that the current problem is in making connections. He argued in favor of TCPng and other ideas for optimising the underlying protocol rather than hacking session protocols, etc., above TCP. We should feel free to adapt the MIME syntax to better suit the needs of the Web for on-line use.

Eric Sink asked when we can start writing code for this (improved performance). The general reaction was that now was a good time to try out experiments to feed into the next versions of HTTP.

Alex Hopmann described his session method for multiple requests. He was encouraged to carry on with this work and to report the results so far. Tim asked Alex why a session method rather than an attribute of GET (e.g., a keep-alive pragma); the main thing is to avoid unnecessary round-trip delays. The question was raised as to the possible impact of keep-alive versus the session method on the operation of proxies. The discussion was taken off-line.
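As a rough sketch of the kind of experiment being encouraged (assumptions only, not Alex Hopmann's actual proposal): several GETs reuse one TCP connection, a hypothetical keep-alive style request header asks the server not to close it, and responses are framed by Content-Length so the client knows where each one ends -- which is exactly the framing question the packetised content-encoding and session proposals address.

   import socket

   HOST = "www.example.org"                      # hypothetical server
   PATHS = ["/home.html", "/img1.gif", "/img2.gif"]

   def read_response(f):
       """Read one response whose body is delimited by Content-Length."""
       status = f.readline()                     # e.g. b"HTTP/1.0 200 OK\r\n"
       length = 0
       while (line := f.readline()) not in (b"", b"\r\n", b"\n"):
           name, _, value = line.partition(b":")
           if name.strip().lower() == b"content-length":
               length = int(value.strip())
       return status, f.read(length)

   s = socket.create_connection((HOST, 80))      # a single connection for all requests
   f = s.makefile("rb")
   for path in PATHS:
       # "Connection: keep-alive" is a hypothetical extension header standing in
       # for whichever mechanism (keep-alive pragma or session method) is adopted.
       s.sendall(f"GET {path} HTTP/1.0\r\nConnection: keep-alive\r\n\r\n".encode("ascii"))
       status, body = read_response(f)
       print(path, status.strip().decode("ascii", "replace"), len(body), "bytes")
   s.close()

An unmodified HTTP/1.0 server would simply close the connection after the first response, which is why the proxy-interaction question raised above matters.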
HTTP Security Update from Tim Berners-Lee

Tim reported on the HTTP Security BOF, which had taken place the preceding evening. The idea was to split work on security off from the HTTP Working Group. This would lessen the workload and make it easier to involve security experts in a wider context than that of HTTP alone. See the minutes of that BOF for more details.

Dave Kristol -- A Proposed Extension Mechanism for HTTP

As the Web has grown, pressures have mounted to add a variety of facilities to HTTP. Some of the new features that have been proposed include keep-alive, packetized data, compression, security, and payment. Dave described an alternative: well-defined hooks in a slightly modified HTTP framework that make it possible to add extensions to the basic protocol in a way that retains compatible behaviour between clients and servers, yet allows both clients and servers to discover and use extended capabilities.

The proposed extension mechanism has two fundamental concepts: wrapping and negotiation. The idea is to avoid a proliferation of new methods and header fields. Instead, these would be handled through extensions, with new material passed through to feature managers. For further information please read the document at the URL:

   http://www.research.att.com/~dmk/extend.txt

Dave also asked whether payments would be covered by the HTTP Security group.

Simon Spero on the HTTPng Proposal

Simon raced through the major ideas for HTTPng. A new protocol is needed which is more efficient, has security built in from the start, and is caching and payment aware. HTTPng uses a session protocol above TCP/IP to support multiple asynchronous transactions interleaved on the same connection. This allows a browser to send the request for an inlined image before it has finished reading the HTML document that references it. The approach avoids the delays associated with starting up new connections and makes better use of available bandwidth and server resources.

The proposal uses a subset of ASN.1 and the packed encoding rules to formally specify messages. This simplifies implementations compared to using text-based representations. The lengthy Accept headers in HTTP/1.0 are avoided by using a bit vector for common cases, with an extension mechanism for the rare occasions when these are insufficient. Servers can challenge for payment. A simple implementation of HTTPng was found to operate approaching an order of magnitude faster than HTTP/1.0 over an intercontinental link. A transition strategy was described that allows existing HTTP/1.0 clients and servers to interoperate with HTTPng via proxy servers supporting both HTTP/1.0 and HTTPng.

Discussion of the Charter

A show of hands indicated unanimous support for recommending to the area directors that a working group be set up for HTTP. The group briefly discussed the draft charter prepared by Roy Fielding, and some minor revisions were agreed. The meeting expressed confidence in Dave Raggett continuing as chair. Subsequently, following a suggestion by John Klensin to co-opt an IETF old-timer as co-chair, Tim Berners-Lee agreed to take on this role.