Commit Graph

357 Commits

Author SHA1 Message Date
Scott Mitchell
3f82b53bae Add unit test for HttpObjectDecoder with message split on buffer boundaries
Motivation:
We should have a unit test which explicitly tests a HTTP message being split between multiple ByteBuf objects.

Modifications:
- Add a unit test to HttpRequestDecoderTest which splits a request between 2 ByteBuf objects

Result:
More unit test coverage for HttpObjectDecoder.
2016-12-20 12:59:00 -08:00
Norman Maurer
cb139043f3 [#5831] HttpServerCodec cannot encode a response to HEAD
request with a 'transfer-encoding: chunked' header

Motivation:

It is valid to send a response to a HEAD request that contains a transfer-encoding: chunked header, but it is not valid to include a body, and there is no way to do this using the netty4 HttpServerCodec.

The root cause is that the netty4 HttpObjectEncoder will transition to the state ST_CONTENT_CHUNK and the only way to transition back to ST_INIT is through the encodeChunkedContent method, which will write the terminating length (0\r\n\r\n), a protocol error when responding to a HEAD request.

Modifications:

- Keep track of the method of the request and depending on it handle the response differently when encoding it.
- Added a unit test.

Result:

Correctly handle HEAD responses that are chunked.
2016-12-15 07:54:51 +00:00
Stephane Maldini
ea0ddc0ea2 fix #6066 Support optional filename in HttpPostRequestEncoder
Motivation:

According to https://www.ietf.org/rfc/rfc2388.txt 4.4, filename after "content-disposition" is optional and arbitrary (does not need to match a real filename).

Modifications:

This change adds an extra addBodyFileUpload overload to specify the filename explicitly (defaulting to File.getName()). If empty or null, this argument is ignored during encoding.

Result:
- A backward-compatible addBodyFileUpload(String, File, String, boolean) that uses file.getName() as the filename.
- A new addBodyFileUpload(String, String, File, String, boolean) overload to specify the filename explicitly.
- A couple of tests for the empty use case.
2016-12-01 06:54:51 +01:00
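For illustration, a minimal sketch of how the two overloads described in the commit above might be used; the class name and the temporary file are invented for the example, and the 5-argument signature follows the Result section rather than verified Javadoc:

```java
import java.io.File;

import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http.multipart.DefaultHttpDataFactory;
import io.netty.handler.codec.http.multipart.HttpPostRequestEncoder;

public final class FileUploadEncoderSketch {
    public static void main(String[] args) throws Exception {
        HttpRequest request = new DefaultFullHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.POST, "/upload");
        File payload = File.createTempFile("upload", ".bin"); // stand-in for a real file

        HttpPostRequestEncoder encoder = new HttpPostRequestEncoder(
                new DefaultHttpDataFactory(DefaultHttpDataFactory.MINSIZE),
                request, true /* multipart */);

        // Existing overload: the multipart filename defaults to payload.getName().
        encoder.addBodyFileUpload("file1", payload, "application/octet-stream", false);

        // New overload from this change: the filename is given explicitly; per the
        // commit, an empty or null value is ignored during encoding.
        encoder.addBodyFileUpload("file2", "renamed.bin", payload,
                "application/octet-stream", false);

        request = encoder.finalizeRequest(); // write this first, then the chunks
        System.out.println(request.headers().get("Content-Type"));
        encoder.cleanFiles();
    }
}
```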
Stephane Landelle
f755e58463 Clean up following #6016
Motivation:

* DefaultHeaders from netty-codec has some duplicated logic for header date parsing
* Several classes keep on using deprecated HttpHeaderDateFormat

Modifications:

* Move HttpHeaderDateFormatter to netty-codec and rename it into HeaderDateFormatter
* Make DefaultHeaders use HeaderDateFormatter
* Replace HttpHeaderDateFormat usage with HeaderDateFormatter

Result:

Faster and more consistent code
2016-11-21 12:35:40 -08:00
radai-rosenblatt
886a7aae46 Fix timestamp parsing in HttpHeaderDateFormatter
Motivation:
The code assumes a numeric value of 0 means no digits were read between separators, which fails for timestamps like 00:00:00.
The code also accepts invalid timestamps like 0:0:000.

Modifications:
Explicitly check the number of digits between separators instead of relying on the numeric value.
Also add tests.

Result:
Timestamps with 00 components now parse successfully; timestamps with 000 components no longer do.

Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
2016-11-21 10:17:54 +01:00
Norman Maurer
0b3122d8ff Deprecate HttpUtil.getCharsetAsString(...) and introduce HttpUtil.getCharsetAsSequence(...).
Motivation:

The method HttpUtil.getCharsetAsString(...) is misleading as its return type is CharSequence and not String.

Modifications:

Deprecate HttpUtil.getCharsetAsString(...) and introduce HttpUtil.getCharsetAsSequence(...).

Result:

Less confusing method name.
2016-11-21 07:47:20 +01:00
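As a small usage sketch of the renamed accessor (the header value and request are invented for the example; getCharset(...) with a default charset is used on the assumption that it behaves as its name suggests):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public final class CharsetLookupSketch {
    public static void main(String[] args) {
        FullHttpRequest request = new DefaultFullHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.POST, "/form");
        request.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=UTF-8");

        // The replacement for the misleadingly named getCharsetAsString(...):
        // it returns the charset parameter as a CharSequence ("UTF-8" here).
        CharSequence raw = HttpUtil.getCharsetAsSequence(request);

        // Convenience variant resolving the value to a Charset, with a fallback
        // when the parameter is absent or unknown.
        Charset charset = HttpUtil.getCharset(request, StandardCharsets.ISO_8859_1);

        System.out.println(raw + " -> " + charset);
        request.release();
    }
}
```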
Stephane Landelle
edc4842309 Fix cookie date parsing, close #6016
Motivation:
* RFC6265 defines its own parser which is different from RFC1123 (it accepts RFC1123 format but also other ones). Basically, it's very lax on delimiters, ignores day of week and timezone. Currently, ClientCookieDecoder uses HttpHeaderDateFormat underneath, and can't parse valid cookies such as Github ones whose expires attribute looks like "Sun, 27 Nov 2016 19:37:15 -0000"
* ServerSideCookieEncoder currently uses HttpHeaderDateFormat underneath for formatting expires field, and it's slow.

Modifications:
* Introduce HttpHeaderDateFormatter that correctly implement RFC6265
* Use HttpHeaderDateFormatter in ClientCookieDecoder and ServerCookieEncoder
* Deprecate HttpHeaderDateFormat

Result:
* Proper RFC6265 dates support
* Faster ServerCookieEncoder and ClientCookieDecoder
* Faster tool for handling headers such as "Expires" and "Date"
2016-11-18 11:22:21 +00:00
Norman Maurer
0bc30a123e Eliminate usage of releaseLater(...) to reduce memory usage during tests
Motivation:

We used ReferenceCountUtil.releaseLater(...) in our tests, which simplifies releasing ReferenceCounted objects a bit. The problem is that while it simplifies things, it increases memory usage a lot, as memory may not be freed up in a timely manner.

Modifications:

- Deprecate releaseLater(...)
- Remove usage of releaseLater(...) in tests.

Result:

Less memory needed to build netty while running the tests.
2016-11-18 09:34:11 +01:00
Adrian Gonzalez
baac352f74 WebSocketClientHandshaker.rawPath(URI) should use the raw query
Motivation:

If the wsURL contains an encoded query, it will be decoded when generating the raw path.  For example if the wsURL is http://test.org/path?a=1%3A5, the returned raw path would be /path?a=1:5

Modifications:

Use wsURL.getRawQuery() rather than wsURL.getQuery()

Result:

rawPath will now return /path?a=1%3A5
2016-11-14 08:45:27 +01:00
Bryce Anderson
f0f0edbf78 HttpObjectAggregator adds 'Connection: close' header if necessary
Motivation:

The HttpObjectAggregator never appends a 'Connection: close' header to
the response of oversized messages, even though in the majority of cases
it is going to close the connection.

Modification:

This PR addresses that by ensuring the requisite header is present when
the connection is going to be closed.

Result:

Gracefully signal that we are about to close the connection.
2016-11-08 08:43:30 +01:00
Norman Maurer
8269e0f046 [#5892] Correctly handle HttpMessage that is EOF terminated
Motivation:

We need to ensure we do not add the Transfer-Encoding header if the HttpMessage is EOF terminated.

Modifications:

Only add the Transfer-Encoding header if a Content-Length header is present.

Result:

Correctly handle HttpMessage that is EOF terminated.
2016-11-01 11:13:44 +01:00
Moses Nakamura
bff951ca07 codec-http: HttpClientUpgradeHandler can handle streamed responses
Motivation:

We want to reject the upgrade as quickly as possible, so that we can
support streamed responses.

Modifications:

Reject the upgrade as soon as we inspect the headers if they're wrong,
instead of waiting for the entire response body.

Result:

If a remote server doesn't know how to use the http upgrade and tries to
respond with a streaming response that never ends, the client doesn't
buffer forever, but can instead pass it along.  Fixes #5954
2016-11-01 06:32:41 +01:00
Norman Maurer
cf8f6e3e2f [#5861] HttpUtil.getContentLength(HttpMessage, long) throws unexpected NumberFormatException
Motivation:

The Javadocs of HttpUtil.getContentLength(HttpMessage, long) and its int overload state that the provided default value is returned if the Content-Length value is not a number. Instead, a NumberFormatException is thrown.

Modifications:

Correctly handle when the value is not a number.

Result:

API works as stated in javadocs.
2016-09-29 21:32:07 +02:00
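A minimal sketch of the behaviour this fix describes (the response construction is illustrative; the point is the defaulted return value):

```java
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public final class ContentLengthSketch {
    public static void main(String[] args) {
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        response.headers().set(HttpHeaderNames.CONTENT_LENGTH, "not-a-number");

        // After the fix, a malformed Content-Length yields the supplied default
        // (-1 here) instead of escaping as a NumberFormatException.
        long length = HttpUtil.getContentLength(response, -1L);
        System.out.println(length); // -1
        response.release();
    }
}
```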
Scott Mitchell
dd1ba2a252 HttpObjectDecoder resetRequested not updated after reset
Motivation:
HttpObjectDecoder maintains a resetRequested flag which is used to determine if internal state should be reset when a decode occurs. However after a reset is done the resetRequested flag is not set to false. This leads to all data after this point being discarded.

Modifications:
- Set resetRequested to false when a reset is done

Result:
HttpObjectDecoder can still function after a reset.
2016-09-22 10:58:44 -07:00
Christopher O'Toole
c57d4bed91 Add HttpServerKeepAliveHandler
Motivation:

As discussed in #5738, developers need to concern themselves with setting
connection: keep-alive on the response as well as whether to close a
connection or not after writing a response.  This leads to special keep-alive
handling logic in many different places.  The purpose of the HttpServerKeepAliveHandler
is to allow developers to add this handler to their pipeline and therefore
free themselves of having to worry about the details of how Keep-Alive works.

Modifications:

Added HttpServerKeepAliveHandler to the io.netty.handler.codec.http package.

Result:

Developers can start using HttpServerKeepAliveHandler in their pipeline instead
of worrying about when to close a connection for keep-alive.
2016-09-15 15:59:21 -07:00
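For illustration, a minimal sketch of the pipeline described in the commit above; the initializer, aggregator size, and inline handler are invented for the example, while HttpServerKeepAliveHandler and its placement after the codec come from the commit:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpServerKeepAliveHandler;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public class KeepAliveServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new HttpServerCodec())
          // Decides per response whether to add the Connection header and whether
          // to close the channel once the write completes.
          .addLast(new HttpServerKeepAliveHandler())
          .addLast(new HttpObjectAggregator(64 * 1024))
          .addLast(new SimpleChannelInboundHandler<FullHttpRequest>() {
              @Override
              protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
                  FullHttpResponse response = new DefaultFullHttpResponse(
                          HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
                  HttpUtil.setContentLength(response, 0);
                  // No keep-alive bookkeeping here: the handler added above takes
                  // care of the Connection header and of closing when needed.
                  ctx.writeAndFlush(response);
              }
          });
    }
}
```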
Gaston Tonietti
245fb52c90 Provide extra info together with handshake complete event.
Motivation:

As described in #5734

Before this change, if the server had to do some sort of setup after a
handshake was completed based on the handshake's information, the only way
available was to wait (in a separate thread) for the handshaker to be
added as an attribute to the channel. Too much hassle.

Modifications:

The handshake completed event needs to be stateful now, so I've added a tiny
class holding just the HTTP upgrade request and the selected subprotocol,
which is fired as an event after the handshake has finished.
I've also deprecated the old enum used as a stateless event and left the
code that fires it for backward compatibility. It should be removed in
the next major release.

Result:

It should be much simpler now to do initialization stuff based on
subprotocol or request headers on handshake completion. No asynchronous
waiting needed anymore.
2016-09-11 17:52:07 +02:00
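A sketch of consuming the stateful event this change introduces, assuming it is exposed as WebSocketServerProtocolHandler.HandshakeComplete with requestUri()/selectedSubprotocol() accessors (the accessor names are an assumption, not quoted from the commit):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;

public class HandshakeListener extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof WebSocketServerProtocolHandler.HandshakeComplete) {
            WebSocketServerProtocolHandler.HandshakeComplete handshake =
                    (WebSocketServerProtocolHandler.HandshakeComplete) evt;
            // The event carries the upgrade request and the negotiated subprotocol,
            // so no waiting on channel attributes is required anymore.
            System.out.println("Upgraded " + handshake.requestUri()
                    + " with subprotocol " + handshake.selectedSubprotocol());
        }
        super.userEventTriggered(ctx, evt);
    }
}
```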
William Blackie
e3aca1f3d6 CorsHandler to respect http connection (keep-alive) header.
Motivation:

The CorsHandler currently closes the channel when it responds to a preflight (OPTIONS)
request or in the event of a short circuit due to failed validation.

Especially in an environment where there's a proxy in front of the service this causes
unnecessary connection churn.

Modifications:

CorsHandler now uses HttpUtil to determine if the connection should be closed
after responding and to set the Connection header on the response.

Result:

Channel will stay open when the CorsHandler responds unless the client specifies otherwise
or the protocol version is HTTP/1.0
2016-09-06 07:18:53 +02:00
Norman Maurer
a8b8553ad1 Revert "CorsHandler to respect http connection (keep-alive) header."
This reverts commit ecd6e5ce6d.
2016-08-24 08:54:29 +02:00
William Blackie
ecd6e5ce6d CorsHandler to respect http connection (keep-alive) header.
Motivation:

The CorsHandler currently closes the channel when it responds to a preflight (OPTIONS)
request or in the event of a short circuit due to failed validation.

Especially in an environment where there's a proxy in front of the service this causes
unnecessary connection churn.

Modifications:

CorsHandler now uses HttpUtil to determine if the connection should be closed
after responding

Result:

Channel will stay open when the CorsHandler responds unless the client specifies otherwise
or the protocol version is HTTP/1.0
2016-08-24 08:50:29 +02:00
Sergey Polovko
3451b3cbb3 Cookie name must be case sensitive
Motivation:

RFC 6265 does not state that cookie names must be case insensitive.

Modifications:

Fix io.netty.handler.codec.http.cookie.DefaultCookie#equals() method to
use case sensitive String#equals() and String#compareTo().

Result:

It is possible to parse several cookies with same names but with
different cases.
2016-08-23 09:44:38 +02:00
Akhil
8d043cc4dd Do not return Access-Control-Allow-Headers on Non-Preflight Cors requests
Motivation:

The CorsHandler currently returns the Access-Control-Allow-Headers
header on a Non-Preflight CORS request (Simple request).
As per the CORS specification, the Access-Control-Allow-Headers header
should only be returned on Preflight requests (not on Simple requests).

https://www.w3.org/TR/2014/REC-cors-20140116/#access-control-allow-headers-response-header

http://www.html5rocks.com/static/images/cors_server_flowchart.png

Modifications:

Modified CorsHandler.java to not add the Access-Control-Allow-Headers
header when responding to Non-preflight CORS request.

Result:

Access-Control-Allow-Headers header will not be returned on a Simple
request (Non-preflight CORS request).
2016-08-16 13:45:04 +02:00
Scott Mitchell
82b617dfe9 retainSlice() unwrap ByteBuf
Motivation:
retainSlice() currently does not unwrap the ByteBuf when creating the ByteBuf wrapper. This effectively forms a linked list of ByteBuf when it is only necessary to maintain a reference to the unwrapped ByteBuf.

Modifications:
- retainSlice() and retainDuplicate() variants should only maintain a reference to the unwrapped ByteBuf
- create new unit tests which generally verify the retainSlice() behavior
- Remove unnecessary generic arguments from AbstractPooledDerivedByteBuf
- Remove unnecessary int length member variable from the unpooled sliced ByteBuf implementation
- Rename the unpooled sliced/derived ByteBuf to include Unpooled in their name to be more consistent with the Pooled variants

Result:
Fixes https://github.com/netty/netty/issues/5582
2016-07-29 11:16:44 -07:00
Ngoc Dao
835f901d5f Fix #5590 QueryStringDecoder#path should decode the path info
Motivation:

Currently, QueryStringDecoder#path simply returns the path info as is, without decoding it as the Javadoc states.

Modifications:

* Make QueryStringDecoder#path decode the path info.
* Add tests to QueryStringDecoderTest.

Result:

QueryStringDecoder#path now decodes the path info as expected.
2016-07-27 09:29:54 +02:00
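A quick sketch of the decoded-path behaviour (the URI is made up; the expected outputs in the comments assume standard percent-decoding):

```java
import io.netty.handler.codec.http.QueryStringDecoder;

public final class QueryStringDecoderSketch {
    public static void main(String[] args) {
        QueryStringDecoder decoder = new QueryStringDecoder("/user%20data/info?q=a%3Ab");

        // After this change path() percent-decodes the path component...
        System.out.println(decoder.path());       // /user data/info
        // ...while parameters() keeps decoding the query string as before.
        System.out.println(decoder.parameters()); // {q=[a:b]}
    }
}
```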
Norman Maurer
c735b3e147 [#5514] Fix DiskFileUpload and MemoryFileUpload equals(...) method.
Motivation:

DiskFileUpload and MemoryFileUpload.equals(...) are broken.

Modifications:

Fix implementation and add unit test.

Result:

Equals method are correct now.
2016-07-14 09:09:16 +02:00
Tim Brooks
d964bf6f18 Remove usages of deprecated methods group() and childGroup().
Motivation:

These methods were recently deprecated. However, they remained in use in several locations in Netty's codebase.

Modifications:

Netty's code will now access the bootstrap config to get the group or child group.

Result:

No impact on functionality.
2016-06-21 14:06:57 +02:00
Norman Maurer
16be36a55f [#5402] sec-websocket-origin should mention HTTPS
Motivation:

When HTTPS is used we should use https in the sec-websocket-origin / origin header

Modifications:

- Correctly generate the sec-websocket-origin / origin header
- Add unit tests.

Result:

Generate correct header.
2016-06-20 11:22:09 +02:00
Nitesh Kant
ee0897a1d9 HttpContentDecompressor should change decompressed requests to chunked encoding. Fixes issue #5428
`HttpContentDecoder` was removing `Content-Length` header but not adding a `Transfer-Encoding` header which goes against the HTTP spec.

Added `Transfer-Encoding` header with value `chunked` when `Content-Length` is removed.
Modified existing unit test to also check for this condition.

Compliance with HTTP spec.
2016-06-20 07:43:06 +02:00
Norman Maurer
4a1e0ceb4d [5382] HttpContentEncoder should not set chunked transfer-encoding for HTTP/1.0
Motivation:

When using HttpContentCompressor and the HttpResponse is protocol version 1.0, HttpContentEncoder.encode() should not set the transfer-encoding header to chunked. Chunked transfer-encoding is not valid for HTTP 1.0 - this causes ERR_CONTENT_DECODING_FAILED errors in chrome and similar failures in IE.

Modifications:

Skip HTTP/1.0 messages

Result:

Be able to serve HTTP/1.0 as well when HttpContentEncoder is in the pipeline.
2016-06-17 06:35:33 +02:00
Norman Maurer
f5eea4698d Fix possible NPE in HttpChunkedInput if the wrapped ChunkedInput.readChunk(...) returns null.
Motivation:

It's completely fine for ChunkedInput.readChunk(...) to return null to indicate there is currently no data to read. We need to handle this in HttpChunkedInput to not produce an NPE when constructing the HttpContent.

Modifications:

If readChunk(...) returns null, just return null as well.

Result:

No more NPE.
2016-06-17 06:27:04 +02:00
Norman Maurer
398efb1f71 Ensure valid message sequence if channel is closed before receive headers.
Motivation:

When the channel is closed while we are still decoding the headers, we currently do not preserve the correct message sequence. In this case we should generate an invalid message with a current cause.

Modifications:

Create an invalid message with a PrematureChannelClosureException as cause when the channel is closed while we decode the headers.

Result:

Correct message sequence preserved and correct DecoderResult if the channel is closed while decoding headers.
2016-06-09 22:42:46 +02:00
Norman Maurer
7b25402e80 Add CompositeByteBuf.addComponent(boolean ...) method to simplify usage
Motivation:

At the moment the user is responsible for increasing the writerIndex of the composite buffer when a new component is added. We should add some methods that handle this for the user, as this is the most popular usage of the composite buffer.

Modifications:

Add new methods that automatically increase the writerIndex when buffers are added.

Result:

Easier usage of CompositeByteBuf.
2016-05-21 19:52:16 +02:00
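For illustration, a minimal sketch of the new addComponent(boolean, ...) usage from the commit above (buffer contents are invented):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public final class CompositeByteBufSketch {
    public static void main(String[] args) {
        CompositeByteBuf composite = Unpooled.compositeBuffer();
        ByteBuf header = Unpooled.copiedBuffer("HEAD", CharsetUtil.US_ASCII);
        ByteBuf body = Unpooled.copiedBuffer("BODY", CharsetUtil.US_ASCII);

        // Passing true increases the writerIndex automatically, so the added
        // bytes are readable without manual bookkeeping.
        composite.addComponent(true, header);
        composite.addComponent(true, body);

        System.out.println(composite.toString(CharsetUtil.US_ASCII)); // HEADBODY
        composite.release();
    }
}
```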
Scott Mitchell
1cb706ac93 HTTP/2 HPACK Header Name Validation and Trailing Padding
Motivation:
The HPACK code currently disallows empty header names. This is not explicitly forbidden by the HPACK RFC https://tools.ietf.org/html/rfc7541. However the HTTP/1.x RFC https://tools.ietf.org/html/rfc7230#section-3.2 and thus HTTP/2 both disallow empty header names, and so this precondition check should be moved from the HPACK code to the protocol level.
HPACK also requires that string literals which are huffman encoded must be treated as an encoding error if the string has more than 7 trailing padding bits https://tools.ietf.org/html/rfc7541#section-5.2, but this is currently not enforced.

Modifications:
- HPACK to allow empty header names
- HTTP/1.x and HTTP/2 header validation should not allow empty header names
- Enforce max of 7 trailing padding bits

Result:
Code is more compliant with the above mentioned RFCs
Fixes https://github.com/netty/netty/issues/5228
2016-05-17 13:42:16 -07:00
Trustin Lee
3a9f472161 Make retained derived buffers recyclable
Related: #4333 #4421 #5128

Motivation:

slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.

Modifications:

- Add the following methods which create a recyclable derived buffer
  - retainedSlice()
  - retainedDuplicate()
  - readRetainedSlice()
- Add the new recyclable derived buffer implementations, which have their
  own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that..
  - a user can replace the content of the holder in a consistent way
  - copy/duplicate/retainedDuplicate() can delegate the holder
    construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
  - Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
  - Make ReplayingDecoderByteBuf.reject() return an exception instead of
    throwing it so that its callers don't need to add dummy return
    statement

Result:

Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
2016-05-17 11:16:13 +02:00
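A small sketch contrasting the old and new derived-buffer idioms (buffer contents are invented; the refCnt comments reflect the behaviour described in the commit):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class RetainedSliceSketch {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(16);
        buf.writeBytes(new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 });

        // Before: buf.slice(2, 4).retain() created a non-recyclable wrapper.
        // Now: retainedSlice(2, 4) returns a recyclable derived buffer that keeps
        // the underlying memory referenced.
        ByteBuf slice = buf.retainedSlice(2, 4);

        buf.release();                        // the slice still holds the memory
        System.out.println(slice.getByte(0)); // 3
        slice.release();                      // last reference, memory returns to the pool
    }
}
```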
Norman Maurer
ef13d19b8b [#5202] Correctly throw ErrorDataDecoderException when invalid encoded form parameters are present.
Motivation:

At the moment we let the IllegalArgumentException escape when parsing form parameters. This is not expected.

Modifications:

Correctly catch IllegalArgumentException and rethrow as ErrorDataDecoderException.

Result:

Throw correct exception.
2016-05-04 21:14:53 +02:00
Daniel Bevenius
0557927b65 Updating allowNullOrigin to return 'null' instead of '*'.
Motivation:
Currently, the way a 'null' origin (a request that most often indicates
that the request is coming from a file on the local file system) is
handled is incorrect. We are currently returning a wildcard origin '*'
but should be returning 'null' for 'Access-Control-Allow-Origin',
which is valid according to the specification [1].

Modifications:
Updated CorsHandler to add a 'null' origin instead of the '*' origin in
the case the request origin is 'null'.

Result:
All tests pass, and the CORS example works, as does the cors.html example
if you serve it by opening the file directly in a web browser.

[1]
https://www.w3.org/TR/cors/#access-control-allow-origin-response-header
2016-05-03 08:39:38 +02:00
Norman Maurer
718bf2fa45 Fix resource-leak which was reported as a result of commit 69070c37ba 2016-04-12 16:27:02 +02:00
Norman Maurer
4652223dec Fix resource leak in test introduced by 69070c37ba 2016-04-10 08:04:57 +02:00
Norman Maurer
f46cfbc590 [#5059] Deprecate method with typo and introduce a new one without typo
Motivation:

There is a spelling error in FileRegion.transfered() as it should be transferred().

Modifications:

Deprecate old method and add a new one.

Result:

Fix typo and can remove the old method later.
2016-04-05 15:06:46 +02:00
Stephane Landelle
881ff3cd98 Drop broken DefaultCookie name validation, close #4999
Motivation:

DefaultCookie constructor performs a name validation that doesn’t match
RFC6265. Moreover, such validation is already performed in strict
encoders and decoders.

Modifications:

Drop DefaultCookie name validation, rely on encoders and decoders.

Result:

no more duplicate broken validation
2016-03-22 12:32:09 +01:00
Stephane Landelle
d747438366 Add ! to allowed cookie value chars
Motivation:

! is missing from allowed cookie value chars, as per https://tools.ietf.org/html/rfc6265#section-4.1.1.
Issue was originally reported on Play!, see https://github.com/playframework/playframework/issues/4460#issuecomment-198177302.

Modifications:

Stick to RFC6265 ranges.

Result:

RFC6265 compliance, ! is supported
2016-03-18 16:58:54 +01:00
Julien Viet
3d7cec6376 Bug fix for HttpPostMultipartRequestDecoder part decoding with an invalid charset not reported as an ErrorDataDecoderException
Motivation:

The current HttpPostMultipartRequestDecoder can decode multipart/form-data parts with a Content-Type that specifies a charset. When this charset is invalid, Charset.forName() throws an unchecked UnsupportedCharsetException. This exception is not caught by the decoder. It should actually be rethrown as an ErrorDataDecoderException, because the developer using the API would expect this validation failure to be reported as such.

Modifications:

Add a catch block for UnsupportedCharsetException and rethrow it as an ErrorDataDecoderException.

Result:

UnsupportedCharsetException is now rethrown as ErrorDataDecoderException.
2016-03-10 18:33:06 +01:00
Dmitry Spikhalskiy
0d3eda38e1 Helper method to get mime-type from Content-Type header of HttpMessage 2016-03-03 15:18:39 +01:00
Xiaoyan Lin
333f55e9ce Add unescapeCsvFields to parse a CSV line and implement CombinedHttpHeaders.getAll
Motivation:

See #4855

Modifications:

Unfortunately, unescapeCsv cannot be used here because the input could be a CSV line like `"a,b",c`. Hence this patch adds unescapeCsvFields to parse a CSV line, split it into multiple fields and unescape them. The unit tests should define the behavior of unescapeCsvFields.

Then this patch just uses unescapeCsvFields to implement `CombinedHttpHeaders.getAll`.

Result:

`CombinedHttpHeaders.getAll` will return the unescaped values of a header.
2016-02-15 15:26:15 -08:00
Norman Maurer
7ef6db3ffd [#4754] Correctly detect websocket upgrade
Motivation:

If the Connection header contains multiple values (which is valid) we fail to detect a websocket upgrade

Modification:

- Add a new method which allows checking whether a header field contains a specific value (and also respects multiple header values)
- Use this method to detect the handshake

Result:

Correctly detect the handshake if the Connection header contains multiple values (separated by ',').
2016-02-04 14:03:08 +01:00
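For illustration, a sketch of the multi-valued check, assuming the new method surfaced as HttpHeaders.containsValue(CharSequence, CharSequence, boolean) (the header value is invented):

```java
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpHeaders;

public final class UpgradeDetectionSketch {
    public static void main(String[] args) {
        HttpHeaders headers = new DefaultHttpHeaders();
        // A perfectly valid Connection header carrying more than one token.
        headers.set(HttpHeaderNames.CONNECTION, "keep-alive, Upgrade");

        // contains(...) compares against the whole header value and misses this.
        System.out.println(headers.contains(
                HttpHeaderNames.CONNECTION, HttpHeaderValues.UPGRADE, true)); // false

        // containsValue(...) treats ',' as a separator, so the upgrade is detected.
        System.out.println(headers.containsValue(
                HttpHeaderNames.CONNECTION, HttpHeaderValues.UPGRADE, true)); // true
    }
}
```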
Norman Maurer
a0758e7e60 [#4794] Support window size flag by default if ZlibCodecFactory supports it.
Motivation:

If the ZlibCodecFactory can support using a custom window size we should support it by default in the websocket extensions as well.

Modifications:

Detect if a custom window size can be handled by the ZlibCodecFactory and if so enable it by default for PerMessageDeflate*ExtensionHandshaker.

Result:

Support window size flag by default in most installations.
2016-02-04 14:01:40 +01:00
Norman Maurer
7a562943ad [#4533] Ensure replacement of decoder is delayed after finishHandshake() is called
Motivation:

If the user calls handshaker.finishHandshake() we need to ensure that the user has the chance to set up the pipeline before any WebSocketFrames are read. Because of this we need
to delay the removal of the HttpRequestDecoder.

Modifications:

- Remove the HttpRequestDecoder via the EventLoop and so delay it which gives the user a chance to setup the pipeline after finishHandshake() completes
- Add unit test for this.

Result:

Less surprising and correct behaviour even if the http response and websocket frame are received in one read operation.
2016-02-04 13:57:35 +01:00
houdejun214
a6fd8a96bf Set default CONTENT_TYPE when it is absent in multipart request body
Motivation:

I use Netty as an HTTP server; it fails to decode some POST requests when the Content-Type is absent in the multipart/form-data body.

Modifications:

Set the content type to the default application/octet-stream when parsing the uploaded file data if the Content-Type is absent in the multipart request body.

Result:

The HTTP request can be decoded as normal.
2016-01-26 10:47:11 +01:00
Fabian Lange
619d82b56f Removed unused imports
Motivation:

Warnings in IDE, unclean code, negligible performance impact.

Modification:

Deletion of unused imports

Result:

No more warnings in IDE, cleaner code, negligible performance improvement.
2016-01-04 14:32:29 +01:00
Norman Maurer
79bc90be32 Fix buffer leak introduced by 693633eeff
Motivation:

As we no longer use Unpooled to allocate buffers in the Base64.* methods, we need to ensure we release all the buffers.

Modifications:

Correctly release buffers

Result:

No more buffer leaks
2015-12-29 17:13:07 +01:00
Scott Mitchell
fd5316ed6f ChunkedInput.readChunk parameter of type ByteBufAllocator
Motivation:
ChunkedInput.readChunk currently takes a ChannelHandlerContext object as a parameter. All current implementations of this interface only use this object to get the ByteBufAllocator object. Thus taking a ChannelHandlerContext as a parameter is more restrictive for users of this API than necessary.

Modifications:
- Add a new method readChunk(ByteBufAllocator)
- Deprecate readChunk(ChannelHandlerContext) and update all implementations to call readChunk(ByteBufAllocator)

Result:
An API that only requires a ByteBufAllocator to use ChunkedInput.
2015-12-24 12:46:40 -08:00
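A minimal sketch of a ChunkedInput that only needs the allocator, assuming the 4.1 interface shape (isEndOfInput/close/both readChunk overloads/length/progress); the class itself is invented for the example:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.stream.ChunkedInput;

/** Streams a fixed byte array in small chunks without needing a ChannelHandlerContext. */
public final class ByteArrayChunkedInput implements ChunkedInput<ByteBuf> {

    private final byte[] data;
    private final int chunkSize;
    private int offset;

    public ByteArrayChunkedInput(byte[] data, int chunkSize) {
        this.data = data;
        this.chunkSize = chunkSize;
    }

    @Override
    public boolean isEndOfInput() {
        return offset >= data.length;
    }

    @Override
    public void close() {
        // nothing to release for an in-memory source
    }

    @Deprecated
    @Override
    public ByteBuf readChunk(ChannelHandlerContext ctx) {
        // Old signature: simply delegate to the allocator-based variant.
        return readChunk(ctx.alloc());
    }

    @Override
    public ByteBuf readChunk(ByteBufAllocator allocator) {
        if (isEndOfInput()) {
            return null; // "nothing to read right now" is a valid answer
        }
        int length = Math.min(chunkSize, data.length - offset);
        ByteBuf chunk = allocator.buffer(length).writeBytes(data, offset, length);
        offset += length;
        return chunk;
    }

    @Override
    public long length() {
        return data.length;
    }

    @Override
    public long progress() {
        return offset;
    }
}
```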
Norman Maurer
1a2162ec35 Fix broken tests introduced by dc615ecaaf 2015-12-18 10:16:07 +01:00
Norman Maurer
dc615ecaaf [#4212] Backport WebSocket Extension handlers for client and server.
Motivation:

We have websocket extension support (with compression) in old master. We should port this to 4.1

Modifications:

Backport relevant code.

Result:

websocket extension support (with compression) is now in 4.1.
2015-12-18 09:48:10 +01:00
Trustin Lee
412f719aa8 Extract the builder of CorsConfig to top level
Motivation:

Consistency in API design

Modifications:

- Deprecate CorsConfig.Builder and its factory methods
- Deprecate CorsConfig.DateValueGenerator
- Add CorsConfigBuilder and its factory methods
- Fix typo (curcuit -> circuit)

Result:

Consistency with other builder APIs such as SslContextBuilder and
Http2ConnectionHandlerBuilder
2015-12-18 12:38:44 +09:00
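As a usage sketch of the top-level builder this commit introduces (the origin, methods and header names are invented for the example):

```java
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.cors.CorsConfig;
import io.netty.handler.codec.http.cors.CorsConfigBuilder;
import io.netty.handler.codec.http.cors.CorsHandler;

public final class CorsConfigSketch {
    public static void main(String[] args) {
        // CorsConfigBuilder replaces the now-deprecated CorsConfig.Builder.
        CorsConfig config = CorsConfigBuilder.forOrigin("https://example.com")
                .allowedRequestMethods(HttpMethod.GET, HttpMethod.POST)
                .allowedRequestHeaders("X-Requested-With")
                .allowCredentials()
                .build();

        // The handler would be added to the pipeline right after the HTTP codec.
        CorsHandler corsHandler = new CorsHandler(config);
        System.out.println(config.isCorsSupportEnabled());
    }
}
```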
Norman Maurer
f31be51774 [#4505] Correctly handle whitespaces in websocket uri's.
Motivation:

If a uri contains whitespaces we need to ensure we correctly escape these when creating the request for the handshake.

Modifications:

- Correctly encode path for uri
- Add tests

Result:

Correctly handle whitespaces when doing websocket upgrade requests.
2015-12-10 13:52:42 +01:00
Luke Hutchison
4978266d52 Make cookie encoding conform better to RFC 6265 in STRICT mode.
Motivation:

- On the client, cookies should be sorted in decreasing order of path
  length. From RFC 6265:

      5.4.2. The user agent SHOULD sort the cookie-list in the following
      order:

        *  Cookies with longer paths are listed before cookies with
           shorter paths.

        *  Among cookies that have equal-length path fields, cookies with
           earlier creation-times are listed before cookies with later
           creation-times.

      NOTE: Not all user agents sort the cookie-list in this order, but
      this order reflects common practice when this document was
      written, and, historically, there have been servers that
      (erroneously) depended on this order.

  Note that the RFC does not define the path length of cookies without a
  path. We sort pathless cookies before cookies with the longest path,
  since pathless cookies inherit the request path (and setting a path
  that is longer than the request path is of limited use, since it cannot
  be read from the context in which it is written).

- On the server, if there are multiple cookies of the same name, only one
  of them should be encoded. RFC 6265 says:

      Servers SHOULD NOT include more than one Set-Cookie header field in
      the same response with the same cookie-name.

  Note that the RFC does not define which cookie should be set in the case
  of multiple cookies with the same name; we arbitrarily pick the last one.

Modifications:

- Changed the visibility of the 'strict' field to 'protected' in
  CookieEncoder.

- Modified ClientCookieEncoder to sort cookies in decreasing order of path
  length when in strict mode.

- Modified ServerCookieEncoder to return only the last cookie of a given
  name when in strict mode.

- Added a fast path for strict mode in both client and server code
  for cases with only one cookie, in order to avoid the overhead of sorting
  and memory allocation.

- Added unit tests for the new cases.

Result:

- Cookie generation on client and server is now more conformant to RFC 6265.
2015-11-26 21:41:58 +01:00
Dmitry Spikhalskiy
2a65ae256e [#4331] Helper methods to get charset from Content-Type header of HttpMessage
Motivation:

HttpHeaders already has specific methods for popular and simple headers like "Host", but if I need to convert a raw POST body to a string I have to parse the complex Content-Type header in my code.

Modifications:

Add getCharset and getCharsetAsString methods to parse the charset from the Content-Type header.

Result:

Easy to use utility method.
2015-11-19 15:59:34 -08:00
Louis Ryan
6e108cb96a Improve the performance of copying header sets when hashing and name validation are equivalent.
Motivation:
Headers and groups of headers are frequently copied and the current mechanism is slower than it needs to be.

Modifications:
Skip name validation and hash computation when they are not necessary.
Fix emergent bug in CombinedHttpHeaders identified with better testing
Fix memory leak in DefaultHttp2Headers when clearing
Added benchmarks

Result:
Faster header copying and some collateral bug fixes
2015-11-07 08:53:10 -08:00
Louis Ryan
3eb65797ed Make headers.set(self) a no-op instead of throwing. Makes it consistent with setAll
Motivation:

Makes the API contract of headers more consistent and simpler.

Modifications:

If self is passed to set then simply return

Result:

set and setAll will be consistent
2015-11-06 07:00:54 -08:00
Sverker Abrahamsson
e121c68e0f Created RTSPEncoder and RTSPDecoder which are now common for both requests and responses to be able to handle both types of messages on the same channel.
Keep RTSPRequestEncoder, RTSPRequestDecoder, RTSPResponseEncoder and
RTSPResponseDecoder for backwards compatibility, but they now just extend
the generic encoder/decoder and are marked as deprecated.

Renamed the decoder test, because the decoder is now generic. Added
testcase for when ANNOUNCE request is received from server.

Created testcases for encoder.

Mark abstract base classes RTSPObjectEncoder and RTSPObjectDecoder as
deprecated, that functionality is now in RTSPEncoder and RTSPDecoder.

Added annotation in RtspHeaders to suppress warnings about deprecation, no need when
whole class is deprecated.
2015-10-27 14:01:20 +01:00
Norman Maurer
ca44436ce6 [#4265] Not allow to add/set DefaultHttpHeaders to itself.
Motivation:

We should prevent adding/setting DefaultHttpHeaders to itself to prevent unexpected side-effects.

Modifications:

Throw IllegalArgumentException if the user tries to pass the same instance to set/add.

Result:

No surprising side-effects.
2015-09-30 08:57:53 +02:00
Norman Maurer
2dde3a386b [#3687] Correctly store WebSocketServerHandshaker in Channel attributes
Motivation:

As we stored the WebSocketServerHandshaker in the ChannelHandlerContext it was always null, and so no close frame was sent if WebSocketServerProtocolHandler was used.

Modifications:

Store the WebSocketServerHandshaker in the Channel attributes and so make it visible between different handlers.

Result:

Correctly send close frame.
2015-09-15 09:48:04 +02:00
Scott Mitchell
f89dfb0bd5 Deprecation cleanup for HTTP headers
Motivation:
The HttpHeaders and DefaultHttpHeaders have methods deprecated due to being removed in future releases, but no replacement methods to use in the current release. The deprecation policy should not be so aggressive as to not provide any non-deprecated method to use.

Modifications:
- Remove deprecated annotations and javadocs from methods which are the best we can do in terms of matching master's API for 4.1

Result:
There should be non-deprecated methods available for HttpHeaders in 4.1.
2015-09-09 14:30:21 -07:00
Sivasubramaniam S
5987cddf7c Fixed a typo [testEquansIgnoreCase() --> testEqualsIgnoreCase()] 2015-08-28 20:49:57 +09:00
Scott Mitchell
b6a4f5de9d Refactor of HttpUtil and HttpHeaderUtil
Motivation:
There currently exist http.HttpUtil, http2.HttpUtil, and http.HttpHeaderUtil. Having 2 HttpUtil classes can be confusing, and the utility methods in the http package could be consolidated.

Modifications:
- Rename http2.HttpUtil to http2.HttpConversionUtil
- Move http.HttpHeaderUtil methods into http.HttpUtil

Result:
Consolidated utilities whose names don't overlap.
Fixes https://github.com/netty/netty/issues/4120
2015-08-27 08:49:58 -07:00
Norman Maurer
e59ae12b42 [#4079] Fix IllegalStateException when HttpContentEncoder is used and 100 Continue response is used.
Motivation:

When a 100 Continue response was written, an IllegalStateException was produced as soon as the user wrote the following response. This regression was introduced by 41b0080fcc.

Modifications:

- Specially handle 100 Continue responses
- Added unit tests

Result:

Fixed regression.
2015-08-21 08:02:43 +02:00
Scott Mitchell
a7135e8677 HttpObjectAggregator doesn't check content-length header
Motivation:
The HttpObjectAggregator always responds with a 100-continue response. It should check the Content-Length header to see if the content length is OK, and if not respond with a 417.

Modifications:
- HttpObjectAggregator checks the Content-Length header in the case of a 100-continue.

Result:
HttpObjectAggregator responds with 417 if content is known to be too big.
2015-08-17 09:26:50 -07:00
Brendt Lucas
e796c99b23 Add unit tests for HTTP and SPDY headers
Motivation:

When attempting to retrieve a SPDY header using an AsciiString key, if the header was inserted using a String based key, the lookup would fail. Similarly, the lookup would fail if the header was inserted with an AsciiString key, and retrieved using a String key. This has been fixed with the header simplification commit (1a43923aa8).

Extra unit tests have been added to protect against this issue occurring in the future.  The tests check that a header added using String or AsciiString can be retrieved using AsciiString or String respectively.

Modifications:

Added more unit tests

Result:

Protect against issue #4053 happening again.
2015-08-17 08:51:14 -07:00
Scott Mitchell
ba6ce5449e Headers Performance Boost and Interface Simplification
Motivation:
A degradation in performance has been observed from the 4.0 branch as documented in https://github.com/netty/netty/issues/3962.

Modifications:
- Simplify Headers class hierarchy.
- Restore the DefaultHeaders to be based upon DefaultHttpHeaders from 4.0.
- Make various other modifications that are causing hot spots.

Result:
Performance is now on par with 4.0.
2015-08-17 08:50:11 -07:00
Norman Maurer
fd27c403d3 [#4010] Correctly handle whitespaces in HttpPostMultipartRequestDecoder
Motivation:

Due to not using a cast, we insert 32 and not a whitespace into the String.

Modifications:

Correctly cast to char.

Result:

Correct handling of whitespaces.
2015-08-14 21:16:42 +02:00
Scott Mitchell
b714297a44 HttpObjectDecoder half close behavior
Motivation:
In the event an HTTP message does not include either a content-length or a transfer-encoding header [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.3.3) states the behavior must be treated differently for requests and responses. If the channel is half closed then the HttpObjectDecoder is not invoking decodeLast and thus not checking if messages should be sent up the pipeline.

Modifications:
- Add comments to clarify regular decode default case.
- Handle the ChannelInputShutdownEvent in the HttpObjectDecoder and evaluate if messages need to be generated.

Result:
Messages are generated on half closed, and comments clarify existing logic.
2015-08-05 09:04:59 -07:00
Jakob Buchgraber
6fd0a0c55f Faster and more memory efficient headers for HTTP, HTTP/2, STOMP and SPDY. Fixes #3600
Motivation:

We noticed that the headers implementation in Netty for HTTP/2 uses quite a lot of memory
and that also at least the performance of randomly accessing a header is quite poor. The main
concern, however, was memory usage, as profiling has shown that a DefaultHttp2Headers
not only uses a lot of memory, it also wastes a lot due to the underlying hashmaps having
to be resized potentially several times as new headers are being inserted.

This is tracked as issue #3600.

Modifications:
We redesigned the DefaultHeaders to simply take a Map object in its constructor and
reimplemented the class using only the Map primitives. That way the implementation
is very concise and hopefully easy to understand and it allows each concrete headers
implementation to provide its own map or to even use a different headers implementation
for processing requests and writing responses i.e. incoming headers need to provide
fast random access while outgoing headers need fast insertion and fast iteration. The
new implementation can support this with hardly any code changes. It also comes
with the advantage that if the Netty project decides to add a third party collections library
as a dependency, one can simply plug in one of those very fast and memory efficient map
implementations and get faster and smaller headers for free.

For now, we are using the JDK's TreeMap for HTTP and HTTP/2 default headers.

Result:

- Significantly fewer lines of code in the implementation. While the total commit is only
  roughly 400 lines smaller, the actual implementation shrinks by a lot more; I just added
  some more tests and microbenchmarks.

- Overall performance is up. The current implementation should be significantly faster
  for insertion and retrieval. However, it is slower when it comes to iteration. There is simply
  no way a TreeMap can have the same iteration performance as a linked list (as used in the
  current headers implementation). That's totally fine though, because when looking at the
  benchmark results @ejona86 pointed out that the performance of the headers is completely
  dominated by insertion, that is insertion is so significantly faster in the new implementation
  that it does make up for several times the iteration speed. You can't iterate what you haven't
  inserted. I am demonstrating that in this spreadsheet [1]. (Actually, iteration performance is
  only down for HTTP, it's significantly improved for HTTP/2).

- Memory is down. The implementation with TreeMap uses on avg ~30% less memory. It also does not
  produce any garbage while being resized. In load tests for GRPC we have seen a memory reduction
  of up to 1.2KB per RPC. I summarized the memory improvements in this spreadsheet [1]. The data
  was generated by [2] using JOL.

- While it was my original intend to only improve the memory usage for HTTP/2, it should be similarly
  improved for HTTP, SPDY and STOMP as they all share a common implementation.

[1] https://docs.google.com/spreadsheets/d/1ck3RQklyzEcCLlyJoqDXPCWRGVUuS-ArZf0etSXLVDQ/edit#gid=0
[2] https://gist.github.com/buchgr/4458a8bdb51dd58c82b4
2015-08-04 17:12:24 -07:00
James Roper
b958263853 Send full response for unsupported websocket versions
Motivation:

WebSocketServerHandshakerFactory.sendUnsupportedVersionResponse does not
send a LastHttpContent, nor does it flush, and it doesn't send a content
length.

Modifications:

Changed sendUnsupportedVersionResponse to send FullHttpResponse, to
writeAndFlush, and to set a content length of 0. Also added a test for
this method.

Result:

Upstream handlers will be able to determine the end of the response, the
response will actually get written to the client, and the client will be
able to determine the end of the response.
2015-07-17 10:56:59 +02:00
Frederic Bregier
caa1505020 Get uploaded size while upload is in progress
Proposal to fix issue #3636

Motivations:
Currently, while adding the next buffers to the decoder
(`decoder.offer()`), there is no way to access the current HTTP
object being decoded, since it only becomes available once fully
decoded, via `decoder.hasNext()`.
Some users may want to know the progress of the overall transfer but also
of each HTTP object.
While overall progress could be computed using (if available) the global
Content-Length of the request and taking into account each HttpContent
size, the per-HttpData progress is unknown.

Modifications:
1) For HTTP objects, `AbstractHttpData` has 2 protected properties named
`definedSize` and `size`, respectively the supposedly final size and the
current (decoded until now) size.
This provides a new method `definedSize()` to get the current value for
`definedSize`. The `size` attribute is reachable by the `length()`
method.

Note however there are 2 different ways that currently manage the
`definedSize`:
a) `Attribute`: it is reset each time the value is less than the actual one
(when a buffer is added, the value is increased) since the final length
is not known (no Content-Length)
b) `FileUpload`: it is set at startup from the length provided

So these differences could lead to a wrong perception:
a) `Attribute`: definedSize = size always
b) `FileUpload`: definedSize >= size always

Therefore the comment tries to explain clearly the different behaviors.

2) In the InterfaceHttpPostRequestDecoder (and the derived classes), I
add a new method: `decoder.currentPartialHttpData()` which will return a
`InterfaceHttpData` (if any) as the current `Attribute` or `FileUpload`
(the 2 generic types), which will then allow the programmer to check,
according to the real type (instanceof), the 2 methods `definedSize()`
and `length()`.

This method checks whether currentFileUpload or currentAttribute is null and
returns the one (only one can be non-null) that is not null.

Note that if this method returns null, it might mean 2 situations:
a) the last `HttpData` (whether attribute or file upload) is already
finished and therefore accessible through `next()`
b) there is no `HttpData` being decoded yet (for instance, the body is
not yet parsed)

Result:
The developer has more access to, and therefore more control over, the
current upload.
The code on the developer's side could look like the example in
HttpUploadServerHandler.
2015-06-12 14:16:07 +02:00
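A sketch of how a handler might use the new accessor from this change to report per-part progress (the surrounding class and printout are invented; only currentPartialHttpData() and the existing HttpData.length() are relied on):

```java
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.multipart.FileUpload;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.handler.codec.http.multipart.InterfaceHttpData;

public final class UploadProgressSketch {

    private final HttpPostRequestDecoder decoder;

    public UploadProgressSketch(HttpRequest request) {
        this.decoder = new HttpPostRequestDecoder(request);
    }

    /** Feed each HttpContent chunk to the decoder and report per-part progress. */
    public void onContent(HttpContent chunk) {
        decoder.offer(chunk);

        // New in this change: the HttpData currently being decoded, or null if no
        // part is in progress (finished parts are reachable via hasNext()/next()).
        InterfaceHttpData partial = decoder.currentPartialHttpData();
        if (partial instanceof FileUpload) {
            FileUpload upload = (FileUpload) partial;
            // length() reports the bytes decoded so far; the commit also describes a
            // definedSize() accessor for the expected total when it is known.
            System.out.printf("%s: %d bytes received so far%n",
                    upload.getFilename(), upload.length());
        }
    }
}
```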
Trustin Lee
b169a76d46 Fix the failing HttpObjectAggregatorTest.testInvalidConstructorUsage()
Related: 950da2eae1
2015-06-10 12:20:50 +09:00
Norman Maurer
e9a2cac16d [#3869] Add unit test to ensure adding null header values is not allowed.
Motivation:

We need to ensure we never allow null values to be set on headers; otherwise we will see an NPE when encoding them.

Modifications:

Add unit test that shows we correctly handle null values.

Result:

Verify correct implementation.
2015-06-08 09:59:44 +02:00
Roelof Naude
757671b7cc Support empty http responses when using compression
Motivation:

Found a bug where Netty would generate a 20-byte body when returning a response
to an HTTP HEAD. The 20 bytes seem to be related to the compression footer.

RFC2616, section 9.4 states that responses to an HTTP HEAD MUST not return a message
body in the response.

Netty's own client implementation expected an empty response. The extra bytes led to a
2nd response with an error decoder result:
java.lang.IllegalArgumentException: invalid version format: 14

Modifications:

Track the HTTP request method. When processing the response we determine whether the response
is passed through unchanged. This decision now takes into account the request method, so
responses related to HTTP HEAD requests are passed through.

Result:

Netty's HTTP client works, and RFC conformance is better.
2015-05-26 09:55:34 +02:00
Stephane Landelle
f6299d942c Minor ClientCookieDecoder improvements
Motivation:

* Path attribute should be null, not empty String, if it's passed as "Path=".
* Only extract attribute value when the name is recognized.
* Only extract Expires attribute value String if MaxAge is undefined as it has precedence.

Modification:

Modify ClientCookieDecoder.
Add "testIgnoreEmptyPath" test in ClientCookieDecoderTest.

Result:

More idiomatic Path behavior (like Domain).
Minor performance improvement in some corner cases.
2015-05-12 11:25:28 +02:00
Stephane Landelle
97d871a755 Validate cookie name and value characters
Motivation:
RFC6265 specifies which characters are allowed in a cookie name and value.

Netty is currently too lax, which can be used for HttpOnly escaping.

Modification:

In ServerCookieDecoder: discard cookie key-value pairs that contain invalid characters.
In ClientCookieEncoder: throw an exception when trying to encode cookies with invalid characters.

Result:

The problem described in the motivation section is fixed.
2015-05-07 06:33:36 +02:00
Scott Mitchell
9a7a85dbe5 ByteString introduced as AsciiString super class
Motivation:
The usage and code within AsciiString has exceeded the original design scope for this class. Its usage as a binary string is confusing and on the verge of violating interface assumptions in some spots.

Modifications:
- ByteString will be created as a base class to AsciiString. All of the generic byte handling processing will live in ByteString and all the special character encoding will live in AsciiString.

Results:
The AsciiString interface will be clarified. Users of AsciiString can now be clear of the limitations the class imposes while users of the ByteString class don't have to live with those limitations.
2015-04-14 16:35:17 -07:00
Trustin Lee
6d5c38897e Fix header and initial line length counting
Related: #3445

Motivation:

HttpObjectDecoder.HeaderParser does not reset its counter (the size
field) when it fails to find the end of line.  If a header is split
into multiple fragments, the counter is increased as many times as the
number of fragments, resulting in an unexpected TooLongFrameException.

Modifications:

- Add test cases that reproduces the problem
- Reset the HeaderParser.size field when no EOL is found.

Result:

One less bug
2015-03-04 17:24:22 +09:00
Daniel Bevenius
5b1b334f01 When null origin is supported then credentials header must not be set.
Motivation:
Currently CORS can be configured to support a 'null' origin, which can
be set by a browser if a resource is loaded from the local file system.
When this is done 'Access-Control-Allow-Origin' will be set to "*" (any
origin). There is also a configuration option to allow credentials being
sent from the client (cookies, basic HTTP Authentication, client side
SSL). This is indicated by the response header
'Access-Control-Allow-Credentials' being set to true. When this is set
to true, the "*" origin is not valid as the value of
'Access-Control-Allow-Origin' and a browser will reject the request:
http://www.w3.org/TR/cors/#resource-requests

Modifications:
Updated CorsHandler's setAllowCredentials to check the origin and if it
is "*" then it will not add the 'Access-Control-Allow-Credentials'
header.

Result:
It is possible to have a client send a 'null' origin and, at the same
time, have CORS configured to support that and to allow credentials
in that combination.
2015-02-18 16:06:30 +01:00
Daniel Bevenius
c53b8d5a85 Suggestion for supporting single header fields.
Motivation:
At the moment, if you want to return an HTTP header containing multiple
values, you have to set/add that header once with the wanted values. If
you use set/add with an array/iterable, multiple HTTP header fields will
be returned in the response.

Note, that this is indeed a suggestion and additional work and tests
should be added. This is mainly to bring up a discussion.

Modifications:
Added a flag to specify that, when multiple values exist for a single
HTTP header, they are added as a comma-separated string.
In addition, added a method to StringUtil to help escape comma-separated
value CharSequences.

Result:
Allows for responses to be smaller.
2015-02-18 10:54:15 +01:00
Trustin Lee
3c6cbd40e2 Fix an sporadic failure in ServerCookieEncoderTest
In testEncodingSingleCookieV0():

Let's assume we encoded a cookie with MaxAge=50 when currentTimeMillis
is 10999.

Because the encoder will not encode the millisecond part for Expires,
the timeMillis value of the encoded Expires field will be 60000. (If we
did not drop the millisecond part, it would be 60999.)

Encoding a cookie will take some time, so currentTimeMillis will
increase slightly, such as to 11001.

  diff = (60000 - 11001) / 1000 = 48999 / 1000 = 48
  maxAge - diff = 50 - 48 = 2

Due to losing the millisecond part twice, we end up with the precision
problem illustrated above, and thus we should increase the tolerance
from 1 second to 2 seconds.

/cc @slandelle
2015-02-02 16:19:27 +09:00
Stephane Landelle
8614b88c18 Generate Expires attribute along MaxAge one so IE can honor it, close #1466
Motivation:

Internet Explorer doesn't honor the Set-Cookie header's Max-Age attribute. It only honors the Expires one.

Modification:

Always generate an Expires attribute along the Max-Age one.

Result:

Internet Explorer compatible expiring cookies. Close #1466.
2015-01-25 16:55:56 +01:00
igariev
ed10513238 Fixed several issues with HttpContentDecoder
Motivation:

HttpContentDecoder had the following issues:
- For chunked content, the decoder set an invalid "Content-Length" header
	with the length of the first decoded chunk.
- Decoding of FullHttpRequests put both the original content and the decoded
	content into the output. As a result, using HttpObjectAggregator before the
	decoder led to errors.
- Requests with an "Expect: 100-continue" header were not acknowledged:
	the decoder didn't pass the header message down the handler's chain
	until content was received. If the client expected a "100 Continue" response,
	a deadlock happened.

Modification:

- Invalid "Content-Length" header is removed; handlers down the chain can either
	rely on LastHttpContent message or ask HttpObjectAggregator to add the header.
- FullHttpRequest is split into HttpRequest and HttpContent (decoded) parts.
- Header (HttpRequest) part of request is sent down the chain as soon as it's received.

Result:

The issues are fixed; a unit test is added.
2015-01-23 11:14:20 +01:00
Trustin Lee
7d102084c1 Remove Rfc6265 prefix from cookie encoders and decoders
Motivation:

Rfc6265Client/ServerCookieEncoder is a better replacement for the old
Client/ServerCookieEncoder, and thus there's no point in keeping both.

Modifications:

- Remove the old Client/ServerCookieEncoder
- Remove the 'Rfc6265' prefix from the new cookie encoder/decoder
  classes
- Deprecate CookieDecoder

Result:

We have a much better cookie encoder/decoder implementation now.
2015-01-21 22:27:50 +09:00
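Taken together with the RFC 6265 work in the next entry, the resulting API looks roughly like the sketch below (names follow the io.netty.handler.codec.http.cookie package after the prefix removal; the cookie values and the serialized form in the comments are illustrative):

```java
import io.netty.handler.codec.http.cookie.ClientCookieDecoder;
import io.netty.handler.codec.http.cookie.ClientCookieEncoder;
import io.netty.handler.codec.http.cookie.Cookie;
import io.netty.handler.codec.http.cookie.DefaultCookie;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;

public final class Rfc6265CookieSketch {
    public static void main(String[] args) {
        // Server side: emit a Set-Cookie header value.
        Cookie cookie = new DefaultCookie("session", "abc123");
        cookie.setPath("/");
        cookie.setHttpOnly(true);
        String setCookie = ServerCookieEncoder.STRICT.encode(cookie);
        System.out.println(setCookie); // e.g. session=abc123; Path=/; HTTPOnly

        // Client side: parse Set-Cookie and echo the pair back in a Cookie header.
        Cookie parsed = ClientCookieDecoder.STRICT.decode(setCookie);
        System.out.println(ClientCookieEncoder.STRICT.encode(parsed)); // session=abc123
    }
}
```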
Stephane Landelle
c298230128 RFC6265 cookies support
Motivation:

Currently Netty supports a weird implementation of RFC 2965.
First, this RFC has been deprecated by RFC 6265 and nobody on the
internet uses this format.

Then, there's a confusion between client side and server side encoding
and decoding.

Typically, clients should only send name=value pairs.

This PR introduces RFC 6265 support, but keeps on supporting RFC 2965 in
the sense that old unused fields are simply ignored, and Cookie fields
won't be populated. Deprecated fields are comment, commentUrl, version,
discard and ports.

It also provides a mechanism for safe server-client-server roundtrip, as
User-Agents are not supposed to interpret cookie values but return them
as-is (e.g. if Set-Cookie contained a quoted value, it should be sent
back in the Cookie header in quoted form too).

Also, there are performance gains to be obtained by not allocating the
attribute name Strings, as we only want to match them to find which POJO
field to populate.

Modifications:

- New RFC6265ClientCookieEncoder/Decoder and
  RFC6265ServerCookieEncoder/Decoder pairs that live alongside old
  CookieEncoder/Decoder pair to not break backward compatibility.
- New Cookie.rawValue field, used for lossless server-client-server
  roundtrip.

Result:

RFC 6265 support.
Clean separation of client and server side.

Decoder performance gain:

Benchmark                   Mode  Samples        Score        Error  Units
parseOldClientDecoder      thrpt       20  2070169,228 ± 105044,970  ops/s
parseRFC6265ClientDecoder  thrpt       20  2954015,476 ± 126670,633  ops/s

This commit closes #3221 and #1406.
2015-01-21 19:07:13 +09:00
Norman Maurer
e1a53e61d0 Fix compilation error introduced by 7f907e8c2a 2015-01-16 16:47:51 +01:00
Frederic Bregier
7f907e8c2a Accept ';' '\\"' in the filename of HTTP Content-Disposition header
Motivation:
HttpPostMultipartRequestDecoder threw an ArrayIndexOutOfBoundsException
when trying to decode a Content-Disposition header with a filename
containing ';' or a protected \\".
See issues #3326 and #3327.

Modifications:
Added a splitMultipartHeaderValues method which takes quotes into account, and
use it in the splitMultipartHeader method instead of StringUtils.split.

Result:
Filenames can contain semicolons and protected \\".
2015-01-16 13:54:32 +01:00
Frederic Bregier
85fecba770 Fix for Issue #3308 related to slice missing retain
Motivations:
It seems that slicing a buffer and using this slice to write to CTX will
decrease the initial refCnt to 0, while the original buffer is not yet
fully used (not empty).

Modifications:
As suggested in the ticket and tested, the currentBuffer is retained when it
is sliced, since it will still be used later on.

Add a test case for this issue.

Result:
The currentBuffer still has its correct refCnt of 1 when reaching the last
write (not sliced) and therefore will be released correctly.
The exception no longer occurs.

This fix should be applied to all branches >= 4.0.
2015-01-06 21:05:33 +01:00
zcourts
bd6d0f3fd5 ensure getRawQuery is not null before appending
Motivation:

Without this check, given a URI with path /path, the resulting URL will be /path?null=

Modifications:

Check that getRawQuery() doesn't return null and only append when it doesn't.

Result:

URLs of the form /path will no longer have "?null=" appended.
2014-12-16 06:53:29 +01:00
Scott Mitchell
72a611a28f HTTP Content Encoder allow EmptyLastHttpContent
Motivation:
The HttpContentEncoder does not account for an EmptyLastHttpContent being provided as input.  This is useful in situations where the client is unable to determine if the current content chunk is the last content chunk (i.e. a proxy forwarding content when transfer encoding is chunked).

Modifications:
- HttpContentEncoder should not attempt to compress empty HttpContent objects

Result:
HttpContentEncoder supports an EmptyLastHttpContent to terminate the response.
2014-11-05 23:31:27 -05:00
Trustin Lee
4ce994dd4f Fix backward compatibility from the previous backport
Motivation:

The commit 50e06442c3 changed the type of
the constants in HttpHeaders.Names and HttpHeaders.Values, making 4.1
backward-incompatible with 4.0.

It also introduces newer utility classes such as HttpHeaderUtil, which
deprecates most static methods in HttpHeaders.  To ease the migration
between 4.1 and 5.0, we should deprecate all static methods that are
non-existent in 5.0, and provide proper counterparts.

Modification:

- Revert the changes in HttpHeaders.Names and Values
- Deprecate all static methods in HttpHeaders in favor of:
  - HttpHeaderUtil
  - the member methods of HttpHeaders
  - AsciiString
- Add integer and date access methods to HttpHeaders for easier future
  migration to 5.0
- Add HttpHeaderNames and HttpHeaderValues which provide standard HTTP
  constants in AsciiString
  - Deprecate HttpHeaders.Names and Values
  - Make HttpHeaderValues.WEBSOCKET lowercased because it's actually
    lowercased in all WebSocket versions but the oldest one
- Add RtspHeaderNames and RtspHeaderValues which provide standard RTSP
  constants in AsciiString
  - Deprecate RtspHeaders.*
- Do not use AsciiString.equalsIgnoreCase(CharSeq, CharSeq) if one of
  the parameters is an AsciiString
- Avoid using AsciiString.toString() repetitively
  - Change the parameter type of some methods from String to
    CharSequence

Result:

Backward compatibility is recovered.  New classes and methods will make
the migration to 5.0 easier, once (Http|Rtsp)Header(Names|Values) are
ported to master.
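For illustration, a minimal sketch of the migration this enables, using the
new AsciiString constants and the integer accessor added to HttpHeaders:

  import io.netty.handler.codec.http.DefaultFullHttpResponse;
  import io.netty.handler.codec.http.FullHttpResponse;
  import io.netty.handler.codec.http.HttpHeaderNames;
  import io.netty.handler.codec.http.HttpHeaderValues;
  import io.netty.handler.codec.http.HttpResponseStatus;
  import io.netty.handler.codec.http.HttpVersion;

  public class HeaderMigration {
      public static void main(String[] args) {
          FullHttpResponse res = new DefaultFullHttpResponse(
                  HttpVersion.HTTP_1_1, HttpResponseStatus.OK);

          // Before: res.headers().set(HttpHeaders.Names.CONTENT_TYPE, "text/plain");
          // After: AsciiString constants plus the new member accessors.
          res.headers().set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.TEXT_PLAIN);
          res.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, 0);

          System.out.println(res.headers());
      }
  }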
2014-11-01 01:00:25 +09:00
Scott Mitchell
50e06442c3 Backport header improvements from 5.0
Motivation:
The header class hierarchy and algorithm were improved on the master branch for the 5.x versions. These improvements should be backported to the 4.1 baseline.

Modifications:
- cherry-pick the following commits from the master branch: 2374e17, 36b4157, 222d258

Result:
Header improvements in master branch are available in 4.1 branch.
2014-11-01 00:59:57 +09:00
Matthias Einwag
7fbd66f814 Added an option to use websockets without masking
Motivation:

The requirements for masking frames and for checking correct masking in
the websocket specification have a large impact on performance.
While it is mandatory for browsers to use masking, there are other
applications (like IPC protocols) that want to use websocket framing and
its proxy-traversing characteristics without the overhead of masking.
The websocket standard also mentions that the requirement for mask
verification on the server side might be dropped in the future.

Modifications:

Added an optional allowMaskMismatch parameter to the websocket decoder
that allows a server to also accept unmasked frames (and clients to
accept masked frames).
The option can be set through the websocket handshaker constructors as
well as the websocket client and server handlers.
The public API for existing components doesn't change; calls are
forwarded to functions which implicitly enforce masking as required by
the specification.
For websocket clients an additional parameter is added that allows
disabling the masking of frames sent by the client.

Result:

This update gives netty users the ability to create and use completely
unmasked websocket connections in addition to the normal masked channels
that the standard describes.
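As a usage sketch (assuming the five-argument WebSocketServerHandshakerFactory
overload with a trailing allowMaskMismatch flag introduced by this change):

  import io.netty.handler.codec.http.websocketx.WebSocketServerHandshakerFactory;

  final class UnmaskedWebSocketSetup {
      static WebSocketServerHandshakerFactory newFactory(String webSocketUrl) {
          // subprotocols = null, allowExtensions = false,
          // maxFramePayloadLength = 65536, allowMaskMismatch = true:
          // unmasked frames from clients are accepted instead of failing the connection.
          return new WebSocketServerHandshakerFactory(webSocketUrl, null, false, 65536, true);
      }
  }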
2014-10-25 22:18:43 +09:00
George Agnelli
0666924e8c Don't close the connection whenever Expect: 100-continue is missing.
Motivation:

The 4.1.0-Beta3 implementation of HttpObjectAggregator.handleOversizedMessage closes the
connection if the client sent oversized chunked data with no Expect:
100-continue header. This causes a broken pipe or "connection reset by
peer" error in some clients (tested with Firefox 31 on OS X 10.9.5 and
with async-http-client 1.8.14).

This part of the HTTP 1.1 spec (below) seems to say that in this scenario the connection
should not be closed (unless the intention is to be very strict about
how data should be sent).

http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html

"If an origin server receives a request that does not include an
Expect request-header field with the "100-continue" expectation,
the request includes a request body, and the server responds
with a final status code before reading the entire request body
from the transport connection, then the server SHOULD NOT close
the transport connection until it has read the entire request,
or until the client closes the connection. Otherwise, the client
might not reliably receive the response message. However, this
requirement is not be construed as preventing a server from
defending itself against denial-of-service attacks, or from
badly broken client implementations."

Modifications:

Change HttpObjectAggregator.handleOversizedMessage to close the
connection only if keep-alive is off and Expect: 100-continue is
missing. Update test to reflect the change.

Result:

Broken pipe and connection reset errors on the client are avoided when
oversized data is sent.
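In simplified form, the resulting decision looks roughly like this (a sketch
of the described behaviour using the current HttpUtil helpers, not the actual
handleOversizedMessage code):

  import io.netty.channel.ChannelFutureListener;
  import io.netty.channel.ChannelHandlerContext;
  import io.netty.handler.codec.http.DefaultFullHttpResponse;
  import io.netty.handler.codec.http.FullHttpResponse;
  import io.netty.handler.codec.http.HttpRequest;
  import io.netty.handler.codec.http.HttpResponseStatus;
  import io.netty.handler.codec.http.HttpUtil;
  import io.netty.handler.codec.http.HttpVersion;

  final class OversizeSketch {
      static void rejectTooLarge(ChannelHandlerContext ctx, HttpRequest request) {
          FullHttpResponse tooLarge = new DefaultFullHttpResponse(
                  HttpVersion.HTTP_1_1, HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE);
          // Close only when keep-alive is off and no "Expect: 100-continue" was sent;
          // otherwise keep the connection so the client can receive the response reliably.
          if (!HttpUtil.isKeepAlive(request) && !HttpUtil.is100ContinueExpected(request)) {
              ctx.writeAndFlush(tooLarge).addListener(ChannelFutureListener.CLOSE);
          } else {
              ctx.writeAndFlush(tooLarge);
          }
      }
  }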
2014-10-24 21:35:17 +02:00
Trustin Lee
789e323b79 Handle an empty ByteBuf specially in HttpObjectEncoder
Related: #2983

Motivation:

It is a well known idiom to write an empty buffer and add a listener to
its future to close a channel when the last byte has been written out:

  ChannelFuture f = channel.writeAndFlush(Unpooled.EMPTY_BUFFER);
  f.addListener(ChannelFutureListener.CLOSE);

When HttpObjectEncoder is in the pipeline, this still works, but it
silently raises an IllegalStateException, because HttpObjectEncoder does
not allow writing a ByteBuf when it is expecting an HttpMessage.

Modifications:

- Handle an empty ByteBuf specially in HttpObjectEncoder, so that
  writing an empty buffer does not fail even if the pipeline contains an
  HttpObjectEncoder
- Add a test

Result:

HttpObjectEncoder no longer raises an exception when a user attempts to
write an empty buffer.
2014-10-22 14:46:22 +09:00
Daniel Bevenius
67c68ef8ba CorsHandler should release HttpRequest after processing preflight/error.
Motivation:
Currently, when the CorsHandler processes a preflight request, or
responds with a 403 Forbidden using the short-circuit option, the
HttpRequest is not released, which leads to a buffer leak.

Modifications:
Release the HttpRequest when done processing a preflight request or
responding with a 403.

Result:
Using the CorsHandler will not cause buffer leaks.
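The essence of the fix, as a simplified sketch (not the actual CorsHandler
source):

  import io.netty.channel.ChannelHandlerContext;
  import io.netty.handler.codec.http.HttpRequest;
  import io.netty.handler.codec.http.HttpResponse;
  import io.netty.util.ReferenceCountUtil;

  final class ShortCircuitSketch {
      static void respondDirectly(ChannelHandlerContext ctx, HttpRequest request, HttpResponse response) {
          // The request is not propagated further down the pipeline, so this
          // handler is responsible for releasing it to avoid a buffer leak.
          ReferenceCountUtil.release(request);
          ctx.writeAndFlush(response);
      }
  }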
2014-10-22 06:37:34 +02:00
Matthias Einwag
80ae2c9180 Add a test for handover from HTTP to Websocket
Motivation:
I was not fully reassured whether everything works correctly when a websocket client receives the websocket handshake HTTP response and a websocket frame in a single ByteBuf (which can happen when the server sends a frame directly or shortly after the handshake response). In this case some parts of the ByteBuf must be processed by the HTTP decoder and the remainder by the websocket decoder.

Modification:
Added a test that verifies that in this scenario the handshake and the message are correctly interpreted and delivered by Netty.

Result:
One more test for Netty.
The test succeeds - No problems
2014-10-13 07:23:10 +02:00
Matthias Einwag
4eb1529d2c Improve WebSocket performance
Motivation:

Websocket performance is to a large extent determined by the masking
and unmasking of frames. The current implementation of this in Netty can
be improved.

Modifications:

Perform the XOR operation not bytewise but in int-sized blocks as long as
possible. This reduces the number of necessary operations by a factor of
4. Also, don't read the writerIndex in each iteration.
Added a unit test for websocket decoding and encoding for verification.

Result:

A large performance gain (up to 50%) in websocket throughput.
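The core idea, as an illustrative sketch (assuming the payload sits in a
ByteBuf and the 4 mask bytes are packed big-endian into an int; this is not
the actual WebSocket08FrameDecoder code):

  import io.netty.buffer.ByteBuf;

  final class UnmaskSketch {
      /**
       * XORs the readable bytes of the frame payload with the mask in place.
       * The 4 mask bytes are packed big-endian into the int, matching ByteBuf.getInt.
       */
      static void unmask(ByteBuf frame, int mask) {
          int start = frame.readerIndex();
          int end = frame.writerIndex();   // read once, not in every iteration
          int i = start;

          // Process 4 bytes per step as long as a whole int remains.
          for (; i + 3 < end; i += 4) {
              frame.setInt(i, frame.getInt(i) ^ mask);
          }
          // Handle the trailing 0-3 bytes bytewise.
          for (; i < end; i++) {
              int maskByte = (mask >>> (8 * (3 - (i - start) % 4))) & 0xFF;
              frame.setByte(i, frame.getByte(i) ^ maskByte);
          }
      }
  }

For a 1 KiB payload this touches 256 ints plus at most 3 trailing bytes
instead of 1024 individual bytes, which is where the factor-of-4 reduction
in operations comes from.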
2014-10-12 19:48:49 +02:00