Motivation:
Calling a static method is faster than calling an instance method.
Modifications:
Add the 'static' keyword to methods where it was missing.
Result:
Slightly faster method calls.
Motivation:
We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the JDK-provided implementation when running on JDK >= 7.
Modification:
Using ThreadLocalRandom implementation of the JDK when possible.
Result:
Make use of JDK implementations when possible.
Motivation:
The allowMaskMismatch parameter used throughout websocketx allows frames
with noncompliant masks when set to true, not false.
Modification:
Changed the javadoc comment everywhere it appears.
Result:
Fixes #6387
Motivation:
Today, the HTTP codec in Netty responds to HTTP/1.1 requests containing
an "expect: 100-continue" header and a content-length that exceeds the
max content length for the server with a 417 status (Expectation
Failed). This is a violation of the HTTP specification. The purpose of
this commit is to address this situation by modifying the HTTP codec to
respond in this situation with a 413 status (Request Entity Too
Large). Additionally, the HTTP codec ignores expectations in the expect
header that are currently unsupported. This commit also addresses this
situation by responding with a 417 status.
Handling the expect header is tricky business as the specification (RFC
2616) is more complicated than it needs to be. The specification defines
the legitimate values for this header as "100-continue" and defines the
notion of expectation extensions. Further, the specification defines a
417 status (Expectation Failed) and this is where implementations go
astray. The intent of the specification was for servers to respond with
417 status when they do not support the expectation in the expect
header.
The key sentence from the specification follows:
The server MUST respond with a 417 (Expectation Failed) status if
any of the expectations cannot be met or, if there are other
problems with the request, some other 4xx status.
That is, a server should respond with a 417 status if and only if there
is an expectation that the server does not support (whether it be
100-continue, or another expectation extension), and should respond with
another 4xx status code if the expectation is supported but there is
something else wrong with the request.
Modifications:
This commit modifies the HTTP codec by changing the handling for the
expect header in the HTTP object aggregator. In particular, the codec
will now respond with 417 status if any expectation other than
100-continue is present in the expect header, the codec will respond
with 413 status if the 100-continue expectation is present in the expect
header and the content-length is larger than the max content length for
the aggregator, and otherwise the codec will respond with 100 status.
Result:
The HTTP codec can now be used to correctly reply to clients that send a
100-continue expectation with a content-length that is too large for the
server with a 413 status, and servers that use the HTTP codec will now
no longer ignore expectations that are not supported (any value other
than 100-continue).
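For illustration, a hedged sketch of the decision logic described above; the helper name decideContinueResponse is hypothetical and this is not the actual HttpObjectAggregator code:
```
import io.netty.handler.codec.http.*;

// Returns the interim/error status to reply with, or null if no expectation is present.
static HttpResponseStatus decideContinueResponse(HttpRequest req, long maxContentLength) {
    String expect = req.headers().get(HttpHeaderNames.EXPECT);
    if (expect == null) {
        return null;                                         // no expectation to answer
    }
    if (!HttpHeaderValues.CONTINUE.contentEqualsIgnoreCase(expect)) {
        return HttpResponseStatus.EXPECTATION_FAILED;        // 417: unsupported expectation
    }
    if (HttpUtil.getContentLength(req, -1L) > maxContentLength) {
        return HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE;  // 413: body too large
    }
    return HttpResponseStatus.CONTINUE;                      // 100: go ahead
}
```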
Motivation:
Netty 4.1 introduced AsciiString and defines HttpHeaderNames constants
as such.
It would be convenient to be able to pass them to `exposeHeaders` and
`allowedRequestHeaders` directly without having to call `toString`.
Modifications:
Add `exposeHeaders` and `allowedRequestHeaders` overloads that take a
`CharSequence`.
Result:
More convenient API
Motivation:
DefaultCookie currently used an undocumented magic value for undefined
maxAge.
Clients need to be able to identify such value so they can implement a
proper CookieJar.
Ideally, we should add a `Cookie::isMaxAgeDefined` method but I guess
we can’t add a new method without breaking API :(
Modifications:
Add a new constant on the `Cookie` interface so clients can use it to
compare with the value returned by `Cookie.maxAge` and decide if `maxAge` was
actually defined.
Result:
Clients have a better documented way to check if the maxAge attribute
was defined.
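A minimal client-side usage sketch, assuming the new constant is named Cookie.UNDEFINED_MAX_AGE:
```
import io.netty.handler.codec.http.cookie.Cookie;

// True only if the cookie carried an explicit Max-Age attribute.
static boolean hasExplicitMaxAge(Cookie cookie) {
    return cookie.maxAge() != Cookie.UNDEFINED_MAX_AGE;
}
```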
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.
Motivation:
HttpObjectAggregator yields full HTTP messages (AggregatedFullHttpMessages) that don't respect the decoder result when copied/replaced.
Modifications:
Copy the decoding result over to the new instance produced by AggregatedFullHttpRequest.replace or AggregatedFullHttpResponse.replace.
Result:
DecoderResult is now copied over when an original AggregatedFullHttpMessage is being replaced (i.e., AggregatedFullHttpRequest.replace or AggregatedFullHttpResponse.replace is being called).
New unit tests are passing on this branch but are failing on master.
Motivation:
HttpUtil.setTransferEncodingChunked could add a second Transfer-Encoding
header if one was already present. While this is technically valid, it
does not appear to be the intent of the method.
Result:
Only one Transfer-Encoding header is present after calling this method.
Motivation:
In Netty, currently, the HttpPostRequestEncoder only supports POST, PUT, PATCH and OPTIONS, while RFC 7231 allows (with a warning) GET, HEAD, DELETE and CONNECT to use a body too (but not TRACE, where it is explicitly not allowed).
The RFC in chapter 4.3 says:
"A payload within a XXX request message has no defined semantics;
sending a payload body on a XXX request might cause some existing
implementations to reject the request."
where XXX can be replaced by one of GET, HEAD, DELETE or CONNECT.
Current usage, in particular in REST mode, tends to use those extra HttpMethods for such queries.
So this PR proposes to remove the current restrictions, leaving only TRACE as explicitly not supported.
Modification:
In the constructor where the check is done, replace the full list by checking only against TRACE, and add one test verifying which methods are supported.
Result:
Fixes #6138.
Motivation:
cb139043f3 introduced special handling of responses to HEAD requests. Due to a bug we failed to handle FullHttpResponse correctly.
Modifications:
Correctly handle FullHttpResponse for HEAD requests.
Result:
Works as expected.
Motivation:
We should have a unit test which explicitly tests a HTTP message being split between multiple ByteBuf objects.
Modifications:
- Add a unit test to HttpRequestDecoderTest which splits a request between 2 ByteBuf objects
Result:
More unit test coverage for HttpObjectDecoder.
Motivation:
Enables optional .startsWith() matching of req.uri() with websocketPath.
Modifications:
New checkStartsWith boolean option with default false value added to both WebSocketServerProtocolHandler and WebSocketServerProtocolHandshakeHandler. req.uri() matching is based on this option.
Result:
By default the old behavior, matching via .equals(), is preserved. To enable checkStartsWith, use the constructor shortcut new WebSocketServerProtocolHandler(websocketPath, true) or set this flag on the full form of the constructor among the other options.
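For illustration, a minimal pipeline setup using the shortcut constructor mentioned above ("/ws" and the aggregator size are example values):
```
ChannelPipeline p = channel.pipeline();
p.addLast(new HttpServerCodec());
p.addLast(new HttpObjectAggregator(65536));
// true => match request URIs via startsWith("/ws") instead of equals("/ws")
p.addLast(new WebSocketServerProtocolHandler("/ws", true));
```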
Motivation:
It is valid to send a response to a HEAD request that contains a transfer-encoding: chunked header, but it is not valid to include a body, and there is no way to do this using the netty4 HttpServerCodec.
The root cause is that the netty4 HttpObjectEncoder will transition to the state ST_CONTENT_CHUNK and the only way to transition back to ST_INIT is through the encodeChunkedContent method which will write the terminating length (0\r\n\r\n\r\n), a protocol error when responding to a HEAD request
Modifications:
- Keep track of the method of the request and depending on it handle the response differently when encoding it.
- Added a unit test.
Result:
Correctly handle HEAD responses that are chunked.
Motivation:
According to https://www.ietf.org/rfc/rfc2388.txt 4.4, filename after "content-disposition" is optional and arbitrary (does not need to match a real filename).
Modifications:
This change adds an extra addBodyFileUpload overload to specify the filename (defaulting to File.getName()). If empty or null, this argument is ignored during encoding.
Result:
- A backward-compatible addBodyFileUpload(String, File, String, boolean) that uses file.getName() as the filename.
- A new addBodyFileUpload(String, String, File, String, boolean) overload to specify the filename.
- A couple of tests for the empty use case.
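A hedged usage sketch of the two overloads listed above (names, content type and the 'report.bin' filename are illustrative; both methods declare ErrorDataEncoderException):
```
HttpPostRequestEncoder encoder = new HttpPostRequestEncoder(request, true); // multipart
// Backward-compatible form: uses file.getName() as the filename.
encoder.addBodyFileUpload("file", file, "application/octet-stream", false);
// New form: explicit filename; empty or null is ignored during encoding.
encoder.addBodyFileUpload("file", "report.bin", file, "application/octet-stream", false);
```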
Motivation:
IntelliJ issues several warnings.
Modifications:
* `ClientCookieDecoder` and `ServerCookieDecoder`:
* `nameEnd`, `valueBegin` and `valueEnd` don't need to be initialized
* `keyValLoop` loop doesn't need to be labelled, as it's the innermost one (same thing for labelled breaks)
* Remove `if (i != headerLen)` as condition is always true
* `ClientCookieEncoder` javadoc still mentions the old logic
* `DefaultCookie`, `ServerCookieEncoder` and `DefaultHttpHeaders` use ternary ops that can be turned into simple boolean ones
* `DefaultHeaders` uses a for(int) loop over an array. It can be turned into a foreach one as javac doesn't allocate an iterator to iterate over arrays
* `DefaultHttp2Headers` and `AbstractByteBuf` `equal` can be turned into a single boolean statement
Result:
Cleaner code
Motivation:
* DefaultHeaders from netty-codec has some duplicated logic for header date parsing
* Several classes keep on using deprecated HttpHeaderDateFormat
Modifications:
* Move HttpHeaderDateFormatter to netty-codec and rename it into HeaderDateFormatter
* Make DefaultHeaders use HeaderDateFormatter
* Replace HttpHeaderDateFormat usage with HeaderDateFormatter
Result:
Faster and more consistent code
Motivation:
The code assumes a numeric value of 0 means no digits were read between separators, which fails for timestamps like 00:00:00.
The code also accepts invalid timestamps like 0:0:000.
Modifications:
Explicitly check the number of digits between separators instead of relying on the numeric value.
Also add tests.
Result:
Timestamps with 00 components now parse successfully; timestamps with 000 components no longer do.
Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
Motivation:
The method HttpUtil.getCharsetAsString(...) is misleading as its return type is CharSequence and not String.
Modifications:
Deprecate HttpUtil.getCharsetAsString(...) and introduce HttpUtil.getCharsetAsSequence(...).
Result:
Less confusing method name.
Motivation:
* RFC6265 defines its own parser which is different from RFC1123 (it accepts RFC1123 format but also other ones). Basically, it's very lax on delimiters, ignores day of week and timezone. Currently, ClientCookieDecoder uses HttpHeaderDateFormat underneath, and can't parse valid cookies such as Github ones whose expires attribute looks like "Sun, 27 Nov 2016 19:37:15 -0000"
* ServerSideCookieEncoder currently uses HttpHeaderDateFormat underneath for formatting expires field, and it's slow.
Modifications:
* Introduce HttpHeaderDateFormatter that correctly implements RFC6265
* Use HttpHeaderDateFormatter in ClientCookieDecoder and ServerCookieEncoder
* Deprecate HttpHeaderDateFormat
Result:
* Proper RFC6265 dates support
* Faster ServerCookieEncoder and ClientCookieDecoder
* Faster tool for handling headers such as "Expires" and "Date"
Motivation:
We used ReferenceCountUtil.releaseLater(...) in our tests, which simplifies releasing ReferenceCounted objects a bit. The problem is that while it simplifies things, it increases memory usage a lot, as memory may not be freed in a timely manner.
Modifications:
- Deprecate releaseLater(...)
- Remove usage of releaseLater(...) in tests.
Result:
Less memory needed to build netty while running the tests.
Motivation:
If the wsURL contains an encoded query, it will be decoded when generating the raw path. For example if the wsURL is http://test.org/path?a=1%3A5, the returned raw path would be /path?a=1:5
Modifications:
Use wsURL.getRawQuery() rather than wsURL.getQuery()
Result:
rawPath will now return /path?a=1%3A5
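The difference, shown with java.net.URI:
```
import java.net.URI;

URI wsURL = URI.create("http://test.org/path?a=1%3A5");
String decoded = wsURL.getQuery();    // "a=1:5"   -- percent-decoded
String raw     = wsURL.getRawQuery(); // "a=1%3A5" -- exactly as in the URI
```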
Motivation:
Some common values are missing from the HttpHeader values constants.
Modifications:
- Add constants for "application/json" Content-Type
- Add constants for "gzip,deflate" Content-Encoding
Result:
More HttpHeader values constants available, both in
`HttpHeaders.Values` and `HttpHeaderValues`.
Motivation:
The HttpObjectAggregator never appends a 'Connection: close' header to
the response of oversized messages even though in the majority of cases
it's going to close the connection.
Modification:
This PR addresses that by ensuring the requisite header is present when
the connection is going to be closed.
Result:
Gracefully signal that we are about to close the connection.
Motivation:
We need to ensure we do not add the Transfer-Encoding header if the HttpMessage is EOF terminated.
Modifications:
Only add the Transfer-Encoding header if a Content-Length header is present.
Result:
Correctly handle HttpMessage that is EOF terminated.
Motivation:
We want to reject the upgrade as quickly as possible, so that we can
support streamed responses.
Modifications:
Reject the upgrade as soon as we inspect the headers if they're wrong,
instead of waiting for the entire response body.
Result:
If a remote server doesn't know how to use the http upgrade and tries to
respond with a streaming response that never ends, the client doesn't
buffer forever, but can instead pass it along. Fixes #5954
Motivation:
The build doesn't seem to enforce this, so unused imports piled up.
Modifications:
Removed unused import lines.
Result:
Fewer unused imports.
Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
Motivation:
The Javadocs of HttpUtil.getContentLength(HttpMessage, long) and its int overload state that the provided default value is returned if the Content-Length value is not a number. NumberFormatException is thrown instead.
Modifications:
Correctly handle when the value is not a number.
Result:
API works as stated in javadocs.
Motivation:
HttpObjectDecoder maintains a resetRequested flag which is used to determine if internal state should be reset when a decode occurs. However after a reset is done the resetRequested flag is not set to false. This leads to all data after this point being discarded.
Modifications:
- Set resetRequested to false when a reset is done
Result:
HttpObjectDecoder can still function after a reset.
Motivation:
As discussed in #5738, developers need to concern themselves with setting
connection: keep-alive on the response as well as whether to close a
connection or not after writing a response. This leads to special keep-alive
handling logic in many different places. The purpose of the HttpServerKeepAliveHandler
is to allow developers to add this handler to their pipeline and therefore
free themselves of having to worry about the details of how Keep-Alive works.
Modifications:
Added HttpServerKeepAliveHandler to the io.netty.handler.codec.http package.
Result:
Developers can start using HttpServerKeepAliveHandler in their pipeline instead
of worrying about when to close a connection for keep-alive.
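A typical, illustrative server pipeline placement (MyRequestHandler is a hypothetical application handler):
```
ChannelPipeline p = channel.pipeline();
p.addLast(new HttpServerCodec());
p.addLast(new HttpServerKeepAliveHandler()); // tracks keep-alive and closes when appropriate
p.addLast(new MyRequestHandler());
```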
Motivation:
As described in #5734
Before this change, if the server had to do some sort of setup after a
handshake was completed based on handshake's information, the only way
available was to wait (in a separate thread) for the handshaker to be
added as an attribute to the channel. Too much hassle.
Modifications:
The handshake completed event needs to be stateful now, so I've added a tiny
class holding just the HTTP upgrade request and the selected subprotocol
which is fired as an event after the handshake has finished.
I've also deprecated the old enum used as a stateless event and left the
code that fires it for backward compatibility. It should be removed in
the next major release.
Result:
It should be much simpler now to do initialization stuff based on
subprotocol or request headers on handshake completion. No asynchronous
waiting needed anymore.
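A hedged sketch of consuming the new event in a handler; the nested event type and its accessors (requestHeaders(), selectedSubprotocol()) are assumed from the description above, and initState is hypothetical:
```
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    if (evt instanceof WebSocketServerProtocolHandler.HandshakeComplete) {
        WebSocketServerProtocolHandler.HandshakeComplete handshake =
                (WebSocketServerProtocolHandler.HandshakeComplete) evt;
        // Initialize per-connection state from the upgrade request and subprotocol.
        initState(handshake.requestHeaders(), handshake.selectedSubprotocol());
    }
    super.userEventTriggered(ctx, evt);
}
```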
Motivation:
The CorsHandler currently closes the channel when it responds to a preflight (OPTIONS)
request or in the event of a short circuit due to failed validation.
Especially in an environment where there's a proxy in front of the service this causes
unnecessary connection churn.
Modifications:
CorsHandler now uses HttpUtil to determine if the connection should be closed
after responding and to set the Connection header on the response.
Result:
Channel will stay open when the CorsHandler responds unless the client specifies otherwise
or the protocol version is HTTP/1.0
Motivation:
Documentation was added in #2401 to aid developers in understanding
how HttpObjectAggregator works and that it needs an encoder before it.
In #2471 it was pointed out that the documentation added can actually
add to the confusion and that it might have a typo.
This is an attempt at clearing up that confusion. Feedback is welcome.
Modifications:
- Adjust class level javadoc for HttpObjectAggregator
* Remove reference to HttpRequestEncoder
* Point out when HttpResponseEncoder is needed
* Point out that either HttpRequestDecoder or HttpResponseDecoder is needed
* Make clear everything must be added before HttpObjectAggregator
* Mention HttpServerCodec
Result:
Avoid confusion about dependencies for HttpObjectAggregator on the pipeline.
Motivation:
The CorsHandler currently closes the channel when it responds to a preflight (OPTIONS)
request or in the event of a short circuit due to failed validation.
Especially in an environment where there's a proxy in front of the service this causes
unnecessary connection churn.
Modifications:
CorsHandler now uses HttpUtil to determine if the connection should be closed
after responding
Result:
Channel will stay open when the CorsHandler responds unless the client specifies otherwise
or the protocol version is HTTP/1.0
Motivation:
RFC 6265 does not state that cookie names must be case insensitive.
Modifications:
Fix io.netty.handler.codec.http.cookie.DefaultCookie#equals() method to
use case sensitive String#equals() and String#compareTo().
Result:
It is possible to parse several cookies with the same name but with
different cases.
Motivation:
The CorsHandler currently returns the Access-Control-Allow-Headers
header on a non-preflight CORS request (simple request).
As per the CORS specification the Access-Control-Allow-Headers header
should only be returned on Preflight requests. (not on simple requests).
https://www.w3.org/TR/2014/REC-cors-20140116/#access-control-allow-headers-response-header
http://www.html5rocks.com/static/images/cors_server_flowchart.png
Modifications:
Modified CorsHandler.java to not add the Access-Control-Allow-Headers
header when responding to Non-preflight CORS request.
Result:
Access-Control-Allow-Headers header will not be returned on a Simple
request (Non-preflight CORS request).
Motivation:
There are a few duplicated byte[] CRLF fields in the code.
Modifications:
Removed duplicated fields as they could be inherited from parent encoder.
Result:
Less static fields.
Motivation:
Unboxing operations allocate unnecessary objects when it could be avoided.
Modifications:
Replaced Float.valueOf with Float.parseFloat where possible.
Result:
Fewer unnecessary object allocations.
Motivation:
retainSlice() currently does not unwrap the ByteBuf when creating the ByteBuf wrapper. This effectively forms a linked list of ByteBufs when it is only necessary to maintain a reference to the unwrapped ByteBuf.
Modifications:
- retainSlice() and retainDuplicate() variants should only maintain a reference to the unwrapped ByteBuf
- create new unit tests which generally verify the retainSlice() behavior
- Remove unnecessary generic arguments from AbstractPooledDerivedByteBuf
- Remove the unnecessary int length member variable from the unpooled sliced ByteBuf implementation
- Rename the unpooled sliced/derived ByteBuf to include Unpooled in their name to be more consistent with the Pooled variants
Result:
Fixes https://github.com/netty/netty/issues/5582
Motivation:
Currently, QueryStringDecoder#path simply returns the path info as is, without decoding it as the Javadoc states.
Modifications:
* Make QueryStringDecoder#path decode the path info.
* Add tests to QueryStringDecoderTest.
Result:
QueryStringDecoder#path now decodes the path info as expected.
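For example:
```
QueryStringDecoder decoder = new QueryStringDecoder("/example%20path?key=value");
decoder.path(); // now "/example path" (decoded); previously "/example%20path"
```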
Motivation:
DiskFileUpload and MemoryFileUpload.equals(...) are broken.
Modifications:
Fix implementation and add unit test.
Result:
The equals methods are correct now.
Motivation:
We don't have an argument named {@code value} but have {@code set} and
{@code expected} in HttpHeaders and HttpUtil respectively.
Modifications:
I replaced {@code value} with {@code set} and {@code expected} in HttpHeaders
and HttpUtil respectively.
Result:
Now the javadoc says:
If {@code set} is {@code true}, the {@code "Expect: 100-continue"} header is
set and all other previous {@code "Expect"} headers are removed. Otherwise,
all {@code "Expect"} headers are removed completely. in HttpHeaders
If {@code expected} is {@code true}, the {@code "Expect: 100-continue"} header
is set and all other previous {@code "Expect"} headers are removed. Otherwise,
all {@code "Expect"} headers are removed completely. in HttpUtil
Motivation:
These methods were recently deprecated. However, they remained in use in several locations in Netty's codebase.
Modifications:
Netty's code will now access the bootstrap config to get the group or child group.
Result:
No impact on functionality.
Motivation:
We use pre-instantiated exceptions in various places for performance reasons. These exceptions don't include a stacktrace, which makes it hard to know where the exception was thrown. This is especially true as we use the same exception type (for example ChannelClosedException) in different places. Setting some StackTraceElements will provide more context as to where these exceptions originate and make debugging easier.
Modifications:
Set a generated StackTraceElement on these pre-instantiated exceptions which at least contains the originating class and method name. The filename and line number are specified as unknown (as stated in the javadocs of StackTraceElement).
Result:
Easier to find the origin of a pre-instantiated exception.
Motivation:
When HTTPS is used we should use https in the sec-websocket-origin / origin header
Modifications:
- Correctly generate the sec-websocket-origin / origin header
- Add unit tests.
Result:
Generate correct header.
Motivation:
`HttpContentDecoder` was removing the `Content-Length` header but not adding a `Transfer-Encoding` header, which goes against the HTTP spec.
Modifications:
Added a `Transfer-Encoding` header with value `chunked` when `Content-Length` is removed.
Modified an existing unit test to also check for this condition.
Result:
Compliance with the HTTP spec.
Motivation:
When using HttpContentCompressor and the HttpResponse is protocol version 1.0, HttpContentEncoder.encode() should not set the transfer-encoding header to chunked. Chunked transfer-encoding is not valid for HTTP 1.0 - this causes ERR_CONTENT_DECODING_FAILED errors in chrome and similar failures in IE.
Modifications:
Skip HTTP/1.0 messages
Result:
Be able to serve HTTP/1.0 as well when HttpContentEncoder is in the pipeline.
Motivation:
It's completely fine for ChunkedInput.readChunk(...) to return null to indicate there is currently no data to read. We need to handle this in HttpChunkedInput to not produce an NPE when constructing the HttpContent.
Modifications:
If readChunk(...) returns null, just return null as well.
Result:
No more NPE.
Motivation:
I cherry-picked 819b26b too soon. There were entries added to a deprecated class which should only go into the non-deprecated version of the class.
Modifications:
- Remove the static final variables that were added as duplicates to the deprecated class
Result:
Deprecated code does not grow in volume without need.
Motivation:
Some common values are missing from the HttpHeader values constants.
Modifications:
- Add constants for "application/json" Content-Type
- Add constants for "gzip,deflate" Content-Encoding
Result:
More HttpHeader values constants available, both in
`HttpHeaders.Values` and `HttpHeaderValues`.
Motivation:
Support fetching data chunk by chunk for use with WebSocket chunked transfers.
Modifications:
Add a WebSocketChunkedInput class to the io.netty.handler.codec.http.websocketx package.
Result:
The WebSocket transfers/fetches data chunk by chunk.
Motivation:
JCTools supports both unsafe and non-unsafe versions of its queues as well as JDK6, which allows us to shade the library in netty-common, letting it stay "zero dependency".
Modifications:
- Remove copy-pasted JCTools code and shade the library (dependencies that are shaded should be removed from the <dependencies> section of the generated POM).
- Remove usage of OneTimeTask and remove it altogether.
Result:
Less code to maintain, easier updates of JCTools, and less GC pressure as the queue implementation no longer creates so much garbage.
Motivation:
When the channel is closed while we are still decoding the headers, we currently do not preserve the correct message sequence. In this case we should generate an invalid message with the cause attached.
Modifications:
Create an invalid message with a PrematureChannelClosureException as cause when the channel is closed while we decode the headers.
Result:
The correct message sequence is preserved, with a correct DecoderResult, if the channel is closed while decoding headers.
Motivation:
The user may specify a different allocator than the default. In this case we need to ensure it is shared when creating the EmbeddedChannel inside of a ChannelHandler.
Modifications:
Use the config of the "original" Channel in the EmbeddedChannel and so share the same allocator etc.
Result:
Same type of buffers are used.
Motivation:
At the moment the user is responsible for increasing the writer index of the composite buffer when a new component is added. We should add some methods that handle this for the user, as this is the most popular usage of the composite buffer.
Modifications:
Add new methods that automatically increase the writerIndex when buffers are added.
Result:
Easier usage of CompositeByteBuf.
Motivation:
The HPACK code currently disallows empty header names. This is not explicitly forbidden by the HPACK RFC https://tools.ietf.org/html/rfc7541. However the HTTP/1.x RFC https://tools.ietf.org/html/rfc7230#section-3.2 and thus HTTP/2 both disallow empty header names, and so this precondition check should be moved from the HPACK code to the protocol level.
HPACK also requires that string literals which are huffman encoded must be treated as an encoding error if the string has more than 7 trailing padding bits https://tools.ietf.org/html/rfc7541#section-5.2, but this is currently not enforced.
Modifications:
- HPACK to allow empty header names
- HTTP/1.x and HTTP/2 header validation should not allow empty header names
- Enforce max of 7 trailing padding bits
Result:
Code is more compliant with the above mentioned RFCs
Fixes https://github.com/netty/netty/issues/5228
Related: #4333 #4421 #5128
Motivation:
slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.
Modifications:
- Add the following methods which creates a non-recyclable derived buffer
- retainedSlice()
- retainedDuplicate()
- readRetainedSlice()
- Add the new recyclable derived buffer implementations, which have their
own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that..
- a user can replace the content of the holder in a consistent way
- copy/duplicate/retainedDuplicate() can delegate the holder
construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
- Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
- Make ReplayingDecoderByteBuf.reject() return an exception instead of
throwing it so that its callers don't need to add dummy return
statement
Result:
Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
Motivation:
At the moment we let the IllegalArgumentException escape when parsing form parameters. This is not expected.
Modifications:
Correctly catch IllegalArgumentException and rethrow as ErrorDataDecoderException.
Result:
Throw correct exception.
Motivation:
Currently the handling of a 'null' origin, a request that most often
indicates that the request is coming from a file on the local file
system, is incorrect. We are currently returning a wildcard origin '*'
but should be returning 'null' for the 'Access-Control-Allow-Origin'
which is valid according to the specification [1].
Modifications:
Updated CorsHandler to add a 'null' origin instead of the '*' origin in
the case where the request origin is 'null'.
Result:
All tests pass, and the CORS example works, as does the cors.html
example if you serve it by opening the file directly in a web browser.
[1]
https://www.w3.org/TR/cors/#access-control-allow-origin-response-header
Motivation:
Checking if a key exists in a TreeMap is O(log n);
doing it twice is not cheap.
Modifications:
Get the value instead, which has the same cost, and check if it is null.
Result:
Faster code due to one expensive operation removed.
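The pattern, sketched on a plain TreeMap (use(...) is hypothetical; valid only when null values are never stored):
```
// Before: two O(log n) tree walks.
if (map.containsKey(key)) {
    use(map.get(key));
}
// After: a single tree walk; null means the key is absent.
Object value = map.get(key);
if (value != null) {
    use(value);
}
```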
Motivation:
We failed to reset the decoder when asked to in HttpObjectDecoder and so could sometimes produce more than one LastHttpContent in a sequence during channelInactive.
This showed up as an AssertionError:
22:22:35.499 [nioEventLoopGroup-3-1] WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.AssertionError: null
at io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:205) ~[classes/:na]
at io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:57) ~[classes/:na]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) ~[classes/:na]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) [classes/:na]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278) [classes/:na]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:428) [classes/:na]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277) [classes/:na]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:343) [classes/:na]
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:309) [classes/:na]
at io.netty.handler.codec.http.HttpClientCodec$Decoder.channelInactive(HttpClientCodec.java:228) [classes/:na]
at io.netty.channel.CombinedChannelDuplexHandler.channelInactive(CombinedChannelDuplexHandler.java:213) [classes/:na]
...
Modifications:
Correctly reset decoder.
Result:
Correctly only produce one LastHttpContent per sequence.
Motivation:
When upgrading via h2c, I found that sometimes both the HTTP/2 SETTINGS frame and the HTTP response message arrived before the upgrade success event was received. This is because ByteToMessageDecoder propagates its internally buffered messages to the next handler when removing itself from the pipeline (refer to ByteToMessageDecoder#handlerRemoved).
I think it's better to propagate the upgrade success event when handling the 101 Switching Protocols response.
Modifications:
Upgrade success event will be propagated before removing source codec.
Result:
It guarantees that the upgrade success event arrives at the next handler first.
Motivation:
There is a spelling error in FileRegion.transfered() as it should be transferred().
Modifications:
Deprecate old method and add a new one.
Result:
Fix typo and can remove the old method later.
Motivation:
DefaultCookie constructor performs a name validation that doesn’t match
RFC6265. Moreover, such validation is already performed in strict
encoders and decoders.
Modifications:
Drop DefaultCookie name validation, rely on encoders and decoders.
Result:
No more duplicate, broken validation.
Motivation:
We failed to pass the decrement value to the wrapped FullHttpRequest and so failed to decrement the reference count in the correct way.
Modifications:
Correctly pass the decrement value to the wrapped request.
Result:
UpgradeEvent.release(decrement) works as expected.
Motivation:
HttpServerUpgradeHandler.UpgradeCodec.prepareUpgradeResponse should allow aborting the upgrade and just continuing with plain HTTP. Besides this, we should only pass in the response HttpHeaders, as this is in line with the docs.
Modifications:
- UpgradeCodec.prepareUpgradeResponse now returns a boolean which specifies whether the upgrade should take place.
- Change the param from FullHttpResponse to HttpHeaders to be in line with the javadocs.
Result:
More flexible and correct handling of upgrades.
Motivation:
upgradeTo(...) takes the response as a parameter, but the response itself was already written to the Channel. This gives the user the impression that the response can be changed or acted on, which may no longer be safe once it has been written and released.
Modifications:
Remove the response param from the method.
Result:
Less confusion and safer usage.
Motivation:
The current HttpPostMultipartRequestDecoder can decode multipart/form-data parts with a Content-Type that specifies a charset. When this charset is invalid, Charset.forName() throws an unchecked UnsupportedCharsetException. This exception is not caught by the decoder. It should actually be rethrown as an ErrorDataDecoderException, because the developer using the API would expect this validation failure to be reported as such.
Modifications:
Add a catch block for UnsupportedCharsetException and rethrow it as an ErrorDataDecoderException.
Result:
UnsupportedCharsetException is now rethrown as an ErrorDataDecoderException.
Motivation:
It is easier to support websockets in a server application by using the WebSocketServerProtocolHandshakeHandler class than by reinventing its functionality, but currently it handles all HTTP requests as if they were websocket handshake requests.
Modifications:
Check whether the HTTP request path equals the adjusted websocket path.
Fixed the websocket server example.
Result:
WebSocketServerProtocolHandshakeHandler handles only websocket handshake requests.
Motivation:
If the input buffer is empty we should not have decodeLast(...) call decode(...) as the user may not expect this.
Modifications:
- Do not call decode(...) in decodeLast(...) if the input buffer is empty.
- Add testcases.
Result:
decodeLast(...) will not call decode(...) if input buffer is empty.
Modifications:
When constructing the FullHttpMessage pass in the ByteBuf to use via the ByteBufAllocator assigned via the context.
Result:
The ByteBuf assigned to the FullHttpMessage can now be configured as a pooled/unpooled, direct/heap based ByteBuf via the ByteBufAllocator used.
Motivation:
See #4855
Modifications:
Unfortunately, unescapeCsv cannot be used here because the input could be a CSV line like `"a,b",c`. Hence this patch adds unescapeCsvFields to parse a CSV line, split it into multiple fields and unescape them. The unit tests should define the behavior of unescapeCsvFields.
Then this patch just uses unescapeCsvFields to implement `CombinedHttpHeaders.getAll`.
Result:
`CombinedHttpHeaders.getAll` will return the unescaped values of a header.
Motivation:
b714297a44 introduced ChannelInputShutdownEvent support for HttpObjectDecoder. However, this should have been added to the superclass ByteToMessageDecoder, and ByteToMessageDecoder should not propagate a channelInactive event through the pipeline in this case.
Modifications:
- Move the ChannelInputShutdownEvent handling from HttpObjectDecoder to ByteToMessageDecoder
- ByteToMessageDecoder doesn't call ctx.fireChannelInactive() on ChannelInputShutdownEvent
Result:
Half closed events are treated more generically, and don't get translated into a channelInactive pipeline event.
Motivation:
The initial buffer size used to decode HTTP objects is currently fixed at 128. This may be too small for some use cases and create a high amount of overhead associated with resizing/copying. The user should be able to configure the initial size as they please.
Modifications:
- Make HttpObjectDecoder's AppendableCharSequence initial size configurable
Result:
Users can more finely tune initial buffer size for increased performance or to save memory.
Fixes https://github.com/netty/netty/issues/4807
Motivation:
"CorsConfigBuilder.allowNullOrigin()" should be public otherwise people can not set it. See #4835
Modifications:
Make "CorsConfigBuilder.allowNullOrigin()" public.
Result:
The user can call "CorsConfigBuilder.allowNullOrigin()" now.
Motivation:
If the Connection header contains multiple values (which is valid) we fail to detect a websocket upgrade
Modification:
- Add a new method which allows checking whether a header field contains a specific value (also respecting multiple header values)
- Use this method to detect handshake
Result:
Correctly detect the handshake if the Connection header contains multiple values (separated by ',').
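A hedged sketch of the multi-valued check; the method actually added to HttpHeaders may differ in name and signature:
```
static boolean containsValue(HttpHeaders headers, CharSequence name, String expected) {
    for (String value : headers.getAll(name)) {
        for (String token : value.split(",")) {
            if (token.trim().equalsIgnoreCase(expected)) {
                return true; // e.g. "Connection: keep-alive, Upgrade" matches "Upgrade"
            }
        }
    }
    return false;
}
```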
Motivation:
If the ZlibCodecFactory can support using a custom window size we should support it by default in the websocket extensions as well.
Modifications:
Detect if a custom window size can be handled by the ZlibCodecFactory and if so enable it by default for PerMessageDeflate*ExtensionHandshaker.
Result:
Support window size flag by default in most installations.
Motivation:
If the user calls handshake.finishHandshake() we need to ensure that the user has the chance to set up the pipeline before any WebSocketFrames are read. Because of this we need
to delay the removal of the HttpRequestDecoder.
Modifications:
- Remove the HttpRequestDecoder via the EventLoop, delaying the removal, which gives the user a chance to set up the pipeline after finishHandshake() completes
- Add unit test for this.
Result:
Less surprising and correct behaviour even if the HTTP response and websocket frame are received in one read operation.
Motivation:
Request bodies can easily be larger than Integer.MAX_VALUE in practice.
There's no reason, or intention, for Netty to impose this artificial constraint.
Worse, it currently does not fail if the body is larger than this value;
it just silently only reads the first Integer.MAX_VALUE bytes and discards the rest.
This restriction doesn't affect chunked transfers, which have no Content-Length header.
Modifications:
Force the use of `long HttpUtil.getContentLength(HttpMessage, long)` instead of
`int HttpUtil.getContentLength(HttpMessage, int)`.
Result:
Netty will support HTTP request bodies of up to Long.MAX_VALUE length.
Motivation:
When HttpClientUpgradeHandler upgrades from HTTP/1 to another protocol,
it performs a two-step operation:
1. Remove the SourceCodec (HttpClientCodec)
2. Add the UpgradeCodec
When HttpClientCodec is removed from the pipeline, the decoder being
removed triggers channelRead() event with the data left in its
cumulation buffer. However, this is not received by the UpgradeCodec
because it is not added yet; e.g., an HTTP/2 SETTINGS frame sent by the
server can be missed.
To fix the problem, we need to reverse the steps:
1. Add the UpgradeCodec
2. Remove the SourceCodec
However, this does not work as expected either, because UpgradeCodec can
send a greeting message such as HTTP/2 Preface. Such a greeting message
will be handled by the SourceCodec and will trigger an 'unsupported
message type' exception.
To really fix the problem, we need to make the upgrade process a 3-step one:
1. Remove/disable the encoder of SourceCodec
2. Add the UpgradeCodec
3. Remove the SourceCodec
Modifications:
- Add SourceCodec.prepareUpgradeFrom() so that SourceCodec can remove or
disable its encoder
- Implement HttpClientCodec.prepareUpgradeFrom() properly
- Miscellaneous:
- Log the related channel as well when logging the failure to send a
GOAWAY
Result:
Cleartext HTTP/1-to-HTTP/2 upgrade works again.
Motivation:
See #3411. A reusable ArrayList in InternalThreadLocalMap can avoid allocations in the following pattern:
```
List<...> list = new ArrayList<...>();
// add something to list, but never use InternalThreadLocalMap
return list.toArray(new ...[list.size()]);
```
Modifications:
Add a reusable ArrayList to InternalThreadLocalMap and update code to use it.
Result:
Reuse a thread local ArrayList to avoid allocations.
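A sketch of the pattern after the change, assuming the accessor is InternalThreadLocalMap.get().arrayList() and 'inputs' is illustrative:
```
// The list is owned by the current thread and reused across calls,
// so its contents must be copied out before the method returns.
List<Object> list = InternalThreadLocalMap.get().arrayList();
for (Object in : inputs) {
    list.add(in);
}
return list.toArray(new Object[list.size()]);
```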
Motivation:
WebSocketClientCompressionHandler is stateless so it should be @Sharable.
Modifications:
Add @Sharable annotation to WebSocketClientCompressionHandler, make constructor private and add static field to get the instance.
Result:
Less object creation.
Motivation:
I use Netty as an HTTP server; it fails to decode some POST requests when the Content-Type is absent from the multipart/form-data body.
Modifications:
Fall back to a default Content-Type of application/octet-stream when parsing uploaded file data if the Content-Type is absent in the multipart request body.
Result:
The HTTP request can be decoded as normal.
Motivation:
ChannelInboundHandler and ChannelOutboundHandler both can implement exceptionCaught(...) method and so we need to dispatch to both of them.
Modifications:
- First dispatch exceptionCaught to the ChannelInboundHandler, but also ensure the next handler it is dispatched to is the ChannelOutboundHandler
- Add removeInboundHandler() and removeOutboundHandler(), which allow removing one of the combined handlers
- Let *Codec extends it and not ChannelHandlerAppender
- Remove ChannelHandlerAppender
Result:
Correctly handle events and also have the same behavior as in 4.0.
Motivation:
InternalAttribute doesn't extend Attribute, but its equals only returns true when it compares with an Attribute. So it will return false when comparing with itself.
Modifications:
Make sure InternalAttribute's equals returns false for non-InternalAttribute objects.
Result:
InternalAttribute's equals works correctly.
Motivation:
Boxing/unboxing can be avoided.
Modifications:
Use parseInt/parseLong to avoid unnecessary boxing/unboxing.
Result:
Remove unnecessary boxing/unboxing.
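The difference in a line:
```
int i = Integer.parseInt("42");        // returns a primitive, no boxing
Integer boxed = Integer.valueOf("42"); // returns a (possibly cached) Integer object
long l = Long.parseLong("42");         // likewise for long
```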
Motivation:
Warnings in IDE, unclean code, negligible performance impact.
Modification:
Deletion of unused imports
Result:
No more warnings in IDE, cleaner code, negligible performance improvement.
Motivation:
I failed to reset the MessageDigest before reusing it. This bug was introduced by 79634e661b.
Modifications:
Call reset() on the MessageDigest.
Result:
The MessageDigest is now correctly reset before being re-used.
Motivation:
SpdySession.StreamComparator should not be Serializable since SpdySession is not Serializable
Modifications:
Remove Serializable from SpdySession.StreamComparator
Result:
StreamComparator is not Serializable any more
Motivation:
Typos in javadoc ("combine", "recommendations") and an incorrect reference to IllegalReferenceCountException.
Modification:
Fix the incorrect reference and correct the typos.
Result:
Reference is correct, typos are fixed
Motivation:
Creating a new MessageDigest every time is wasteful; we should store them in FastThreadLocal.
Modifications:
Change WebSocketUtil to store MD5 and SHA1 MessageDigest in FastThreadLocal and use these.
Result:
Less overhead and less GC.
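A minimal sketch of the FastThreadLocal pattern described above (field and helper names are illustrative):
```
private static final FastThreadLocal<MessageDigest> SHA1 = new FastThreadLocal<MessageDigest>() {
    @Override
    protected MessageDigest initialValue() throws Exception {
        return MessageDigest.getInstance("SHA-1");
    }
};

static byte[] sha1(byte[] data) {
    MessageDigest digest = SHA1.get();
    digest.reset(); // the instance is reused, so reset it before each use
    return digest.digest(data);
}
```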
Motivation:
- AbstractHttp2ConnectionHandlerBuilder.encoderEnforceMaxConcurrentStreams can be the primitive boolean
- SpdySession.StreamComparator should not be Serializable since SpdySession is not Serializable
Modifications:
Use boolean instead and remove Serializable
Result:
- Minor improvement for AbstractHttp2ConnectionHandlerBuilder
- StreamComparator is not Serializable any more
Motivation:
Allow passing an HttpHeaders instance to DefaultHttpMessage
in order to avoid eager creation of headers and to
allow users to reuse their HttpHeaders instance.
Modifications:
Added a constructor with HttpHeaders to DefaultHttpMessage,
Modified DefaultHttpResponse and DefaultHttpRequest
to receive HttpHeaders instances.
Modified DefaultFullHttpRequest and DefaultFullHttpResponse
to receive HttpHeaders, and updated `duplicate` and
`copy` to use new constructors.
Result:
Users can now pass HttpHeaders instance when
constructing Http Requests and Responses.
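Illustrative usage of the new constructors; the header name/value pair is an example:
```
HttpHeaders headers = new DefaultHttpHeaders();
headers.set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.APPLICATION_JSON);
// Reuse the prepared headers instead of letting the message create its own.
HttpResponse response =
        new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK, headers);
```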
Motivation:
As we no longer use Unpooled to allocate buffers in the Base64.* methods, we need to ensure we release all the buffers.
Modifications:
Correctly release buffers
Result:
No more buffer leaks
Motivation:
Javadoc reports errors about invalid docs.
Modifications:
Fix some errors reported by javadoc.
Result:
A lot of javadoc errors are fixed by this patch.
Motivation:
There are some wrong links and tags in javadoc.
Modifications:
Fix the wrong links and tags in javadoc.
Result:
These links will work correctly in javadoc.
Motivation:
ChunkedInput.readChunk currently takes a ChannelHandlerContext object as a parameter. All current implementations of this interface only use this object to get the ByteBufAllocator. Thus taking a ChannelHandlerContext as a parameter is more restrictive for users of this API than necessary.
Modifications:
- Add a new method readChunk(ByteBufAllocator)
- Deprecate readChunk(ChannelHandlerContext) and update all implementations to call readChunk(ByteBufAllocator)
Result:
API that only requires ByteBufAllocator to use ChunkedInput.
Motivation:
HttpHeaderValues.IDENTITY is an AsciiString, but was compared using equals to a String.
Modifications:
Use contentEquals instead.
Result:
Correct comparison.
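The distinction:
```
HttpHeaderValues.IDENTITY.equals("identity");        // false: an AsciiString never equals a String
HttpHeaderValues.IDENTITY.contentEquals("identity"); // true: compares the character content
```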
Motivation:
We have websocket extension support (with compression) in old master. We should port this to 4.1
Modifications:
Backport relevant code.
Result:
websocket extension support (with compression) is now in 4.1.
Motivation:
Consistency in API design
Modifications:
- Deprecate CorsConfig.Builder and its factory methods
- Deprecate CorsConfig.DateValueGenerator
- Add CorsConfigBuilder and its factory methods
- Fix typo (curcuit -> circuit)
Result:
Consistency with other builder APIs such as SslContextBuilder and
Http2ConnectionHandlerBuilder
Motivation:
If a uri contains whitespaces we need to ensure we correctly escape these when creating the request for the handshake.
Modifications:
- Correctly encode path for uri
- Add tests
Result:
Correctly handle whitespaces when doing websocket upgrade requests.
Motivation:
HttpClientUpgradeHandler uses HttpHeaderNames.UPGRADE as the value of
the 'Connection' header, which is incorrect. It should use
HttpHeaderValues.UPGRADE instead (note Names vs Values.)
Also, HttpHeaderValues.UPGRADE should be 'upgrade' rather than
'Upgrade', as defined in:
- https://tools.ietf.org/html/rfc7230#section-6.7
Modifications:
- Use HttpHeaderValues.UPGRADE for a 'Connection' header
- Lowercase the value of HttpHeaderValues.UPGRADE
Result:
- Fixes #4508
- Correct behavior
Motivation:
On a successful protocol upgrade in HTTP, HttpClientUpgradeHandler calls
HttpClientCodec.upgradeFrom(), which removed both the HTTP encoder and
decoder from the pipeline immediately.
However, because the decoder is in the middle of the decode loop,
removing it from the pipeline immediately will cause the cumulation
buffer to be released prematurely.
This often leads to an IllegalReferenceCountException or missing first
response after the upgrade response.
Modifications:
- Remove the decoder *after* the decode loop is done
Result:
Fixes #4504
Motivation:
HttpClientUpgradeHandler currently throws an IllegalStateException when
the server sends a '101 Switching Protocols' response that has no
'Upgrade' header.
Some servers do not send the 'Upgrade' header on a successful protocol
upgrade and we could safely assume that the server accepted the
requested protocol upgrade in such a case, looking from the response
status code (101)
Modifications:
- Do not throw an IllegalStateException when the server responded 101
without an 'Upgrade' header
- Note that we still check the equality of the 'Upgrade' header when it
is present.
Result:
- Fixes #4523
- Better interoperability
Motivation:
- On the client, cookies should be sorted in decreasing order of path
length. From RFC 6265:
5.4.2. The user agent SHOULD sort the cookie-list in the following
order:
* Cookies with longer paths are listed before cookies with
shorter paths.
* Among cookies that have equal-length path fields, cookies with
earlier creation-times are listed before cookies with later
creation-times.
NOTE: Not all user agents sort the cookie-list in this order, but
this order reflects common practice when this document was
written, and, historically, there have been servers that
(erroneously) depended on this order.
Note that the RFC does not define the path length of cookies without a
path. We sort pathless cookies before cookies with the longest path,
since pathless cookies inherit the request path (and setting a path
that is longer than the request path is of limited use, since it cannot
be read from the context in which it is written).
- On the server, if there are multiple cookies of the same name, only one
of them should be encoded. RFC 6265 says:
Servers SHOULD NOT include more than one Set-Cookie header field in
the same response with the same cookie-name.
Note that the RFC does not define which cookie should be set in the case
of multiple cookies with the same name; we arbitrarily pick the last one.
Modifications:
- Changed the visibility of the 'strict' field to 'protected' in
CookieEncoder.
- Modified ClientCookieEncoder to sort cookies in decreasing order of path
length when in strict mode.
- Modified ServerCookieEncoder to return only the last cookie of a given
name when in strict mode.
- Added a fast path for both strict mode in both client and server code
for cases with only one cookie, in order to avoid the overhead of sorting
and memory allocation.
- Added unit tests for the new cases.
Result:
- Cookie generation on client and server is now more conformant to RFC 6265.
Motivation:
HttpHeaders already has specific methods for popular, simple headers like "Host", but if I need to convert a raw POST body to a string I have to parse the complex Content-Type header in my code.
Modifications:
Add getCharset and getCharsetAsString methods to parse the charset from the Content-Type header.
Result:
Easy to use utility method.
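Hedged usage, assuming the helper takes a fallback charset (as the later HttpUtil commits in this log suggest); 'content' is an illustrative ByteBuf:
```
Charset charset = HttpUtil.getCharset(request, CharsetUtil.UTF_8); // fallback if absent/invalid
String body = content.toString(charset); // decode a POST body with the declared charset
```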
Motivation:
FullHttp[Request|Response].hashCode() uses a releasable object and is vulnerable to an IllegalReferenceCountException if that object has been released.
Modifications:
- Ensure the released object is not used.
Result:
No more IllegalReferenceCountException.
Motivation:
Headers and groups of headers are frequently copied and the current mechanism is slower than it needs to be.
Modifications:
Skip name validation and hash computation when they are not necessary.
Fix emergent bug in CombinedHttpHeaders identified with better testing
Fix memory leak in DefaultHttp2Headers when clearing
Added benchmarks
Result:
Faster header copying and some collateral bug fixes
Motivation:
Makes the API contract of headers more consistent and simpler.
Modifications:
If self is passed to set then simply return
Result:
set and setAll will be consistent
Motivation:
The HTTP/2 RFC (https://tools.ietf.org/html/rfc7540#section-8.1.2) indicates that header names consist of ASCII characters. We currently use ByteString to represent HTTP/2 header names. The HTTP/2 RFC (https://tools.ietf.org/html/rfc7540#section-10.3) also alludes to header values inheriting the same validity characteristics as HTTP/1.x. Using AsciiString for the value type of HTTP/2 headers would allow for re-use of predefined HTTP/1.x values, and make comparisons more intuitive. The Headers<T> interface could also be expanded to allow for easier use of header types which do not have the same key and value type.
Modifications:
- Change Headers<T> to Headers<K, V>
- Change Http2Headers<ByteString> to Http2Headers<CharSequence, CharSequence>
- Remove ByteString. Having AsciiString extend ByteString complicates equality comparisons when the hash code algorithm is no longer shared.
Result:
Http2Header types are more representative of the HTTP/2 RFC, and relationship between HTTP/2 header name/values more directly relates to HTTP/1.x header names/values.
Keep RTSPRequestEncoder, RTSPRequestDecoder, RTSPResponseEncoder and
RTSPResponseDecoder for backwards compatibility, but they now just extend
the generic encoder/decoder and are marked as deprecated.
Renamed the decoder test, because the decoder is now generic. Added
testcase for when ANNOUNCE request is received from server.
Created testcases for encoder.
Mark abstract base classes RTSPObjectEncoder and RTSPObjectDecoder as
deprecated, that functionality is now in RTSPEncoder and RTSPDecoder.
Added an annotation in RtspHeaders to suppress warnings about deprecation;
there is no need for them when the whole class is deprecated.
Motivation:
As part of recent efforts to improve performance and make 4.1 headers more similar to 5.0, some methods were deprecated. Some of these methods were deprecated because they used String instead of CharSequence in the signature, which may require casting at the user level. Some of the deprecated methods have no direct alternatives and were deprecated to inform users that the method will go away in future releases.
Modifications:
- Remove the deprecated qualifier from methods where no direct replacement exists
Result:
Less warnings in user code.
Motivation:
As toString() is often used while logging, we need to ensure it produces no exception.
Modifications:
Ensure we never throw an IllegalReferenceCountException.
Result:
Be able to log without producing exceptions.
Motivation:
We should prevent adding/setting a DefaultHttpHeaders instance to itself to avoid unexpected side effects.
Modifications:
Throw IllegalArgumentException if user tries to pass the same instance to set/add.
Result:
No surprising side-effects.
Motivation:
Http2CodecUtils has some static variables which are defined as Strings instead of CharSequence. One of these defines is used as a header name and should be AsciiString.
Modifications:
- Change the String defines in Http2CodecUtils to CharSequence
Result:
Types are more consistently using CharSequence and adding the upgrade header will require less work.
Motivation:
According to the SPDY spec (https://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1#TOC-3.2.1-Request), header names must be lowercase. Our predefined SPDY extension headers are not lowercase.
Modifications:
- SpdyHttpHeaders should define header names in lower case
Result:
Compliant with SPDY spec, and header validation code does not detect errors for our own header names.
Motivation:
Currently there is a HttpConversionUtil.addHttp2ToHttpHeaders which requires a FullHttpMessage, but this may not always be available. There is no interface that can be used with just Http2Headers and HttpHeaders.
Modifications:
- add an overload for HttpConversionUtil.addHttp2ToHttpHeaders which does not take FullHttpMessage
Result:
An overload for HttpConversionUtil.addHttp2ToHttpHeaders exists which does not require FullHttpMessage.
Motivation:
As we stored the WebSocketServerHandshaker in the ChannelHandlerContext it was always null, and so no close frame was sent if WebSocketServerProtocolHandler was used.
Modifications:
Store the WebSocketServerHandshaker in the Channel's attributes, making it visible between different handlers.
Result:
Correctly send close frame.
Motivation:
HttpHeaders and DefaultHttpHeaders have methods that are deprecated because they will be removed in future releases, but offer no replacement method to use in the current release. The deprecation policy should not be so aggressive as to not provide any non-deprecated method to use.
Modifications:
- Remove deprecated annotations and javadocs from methods which are the best we can do in terms of matching the master's API for 4.1
Result:
There should be non-deprecated methods available for HttpHeaders in 4.1.
Motivation:
Related to issue #4185.
HTTP has the option to disable header validation for optimisation purposes. Introduce the same option for SPDY headers.
Also, optimise SpdyHttpEncoder by allowing the user to specify whether or not the encoder needs to convert header names to lowercase.
Modifications:
Added flags for validation and conversion.
Result:
SpdyHeader validation and conversion can be disabled.
Motivation:
When SpdyHttpEncoder attempts to create a SpdyHeadersFrame from an HttpResponse, an IllegalArgumentException is thrown if the original HttpResponse contains a header that includes uppercase characters. The IllegalArgumentException is thrown due to the additional validation check introduced by #4047.
Previous versions of the SPDY codec would handle this by converting the HTTP header name to lowercase before adding the header to the SpdyHeadersFrame.
Modifications:
Convert the header name to lowercase before adding it to SpdyHeaders
Result:
SpdyHttpEncoder can now convert a valid HttpResponse into a valid SpdyFrame
Motivation:
As all methods in the ChannelHandler are executed by the same thread there is no need to use synchronized.
Modifications:
Remove synchronized keyword.
Result:
No more unnecessary synchronized in SpdySessionHandler.
Motivation:
There currently exists http.HttpUtil, http2.HttpUtil, and http.HttpHeaderUtil. Having two HttpUtil classes can be confusing, and the utility methods in the http package could be consolidated.
Modifications:
- Rename http2.HttpUtil to http2.HttpConversionUtil
- Move http.HttpHeaderUtil methods into http.HttpUtil
Result:
Consolidated utilities whose names don't overlap.
Fixes https://github.com/netty/netty/issues/4120
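As an illustrative sketch (not part of the commit), helpers that previously lived on HttpHeaderUtil are now reached through HttpUtil:
HttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
HttpUtil.setContentLength(request, 0);             // formerly HttpHeaderUtil.setContentLength(...)
boolean keepAlive = HttpUtil.isKeepAlive(request); // formerly HttpHeaderUtil.isKeepAlive(...)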
Motivation:
The HttpRequestEncoder.encodeInitialLine can now be consistent with the master branch after 85c79dbbe4
Modifications:
- Use the AsciiString and ByteBufUtil.copy methods
Result:
Consistent behavior/code between 4.1 and master branches.
Motivation:
When a 100 Continue response was written, an IllegalStateException was produced as soon as the user wrote the following response. This regression was introduced by 41b0080fcc.
Modifications:
- Special-case 100 Continue responses
- Added unit tests
Result:
Fixed regression.
Motivation:
Hixie 76 needs special handling compared to other connection upgrade responses. Our detection code for non-websocket responses always used the special handling that should only apply to Hixie 76 responses.
Modifications:
Correctly detect connection upgrade responses which are not for websockets.
Result:
Be able to upgrade connections for protocols other than websockets.
Motivation:
The HTTP schemes defined by https://tools.ietf.org/html/rfc7230 don't have a common representation in Netty.
Modifications:
- Add a class to represent HttpScheme
Result:
The HTTP Scheme is now defined in 1 common location.
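A small usage sketch (assuming the class exposes the scheme name and default port as in 4.1):
HttpScheme scheme = HttpScheme.HTTPS;
int defaultPort = scheme.port();   // 443
CharSequence name = scheme.name(); // "https"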
Motivation:
The HTTP specification defines specific request-targets in https://tools.ietf.org/html/rfc7230#section-5.3. Netty does not have a way to distinguish between these different types, and there is currently no obvious location where such methods would live.
Modifications:
- Add methods to distinguish request-targets as defined in https://tools.ietf.org/html/rfc7230#section-5.3
Result:
Common utility methods exist to inspect request-targets.
Motivation:
HttpResponseStatus.reasonPhrase returns an AsciiString, but was compared using equals to a String. Other usages of the reasonPhrase also use the toString() method when not necessary.
Modifications:
- Use the contentEquals method
Result:
Correct comparison, and no toString() when not needed.
Motivation:
The HttpObjectAggregator always responds with a 100-continue response. It should check the Content-Length header to see if the content length is OK, and if not, respond with a 417.
Modifications:
- HttpObjectAggregator checks the Content-Length header in the case of a 100-continue.
Result:
HttpObjectAggregator responds with 417 if content is known to be too big.
Motivation:
When attempting to retrieve a SPDY header using an AsciiString key, if the header was inserted using a String based key, the lookup would fail. Similarly, the lookup would fail if the header was inserted with an AsciiString key, and retrieved using a String key. This has been fixed with the header simplification commit (1a43923aa8).
Extra unit tests have been added to protect against this issue occurring in the future. The tests check that a header added using String or AsciiString can be retrieved using AsciiString or String respectively.
Modifications:
Added more unit tests
Result:
Protect against issue #4053 happening again.
Motivation:
A degradation in performance has been observed from the 4.0 branch as documented in https://github.com/netty/netty/issues/3962.
Modifications:
- Simplify Headers class hierarchy.
- Restore the DefaultHeaders to be based upon DefaultHttpHeaders from 4.0.
- Make various other modifications that are causing hot spots.
Result:
Performance is now on par with 4.0.
Motivation:
Because we did not use a cast, we inserted the integer 32 instead of a whitespace character into the String.
Modifications:
Correctly cast to char.
Result:
Correct handling of whitespaces.
Motivation:
In the event an HTTP message does not include either a content-length or a transfer-encoding header [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.3.3) states the behavior must be treated differently for requests and responses. If the channel is half closed then the HttpObjectDecoder is not invoking decodeLast and thus not checking if messages should be sent up the pipeline.
Modifications:
- Add comments to clarify regular decode default case.
- Handle the ChannelInputShutdownEvent in the HttpObjectDecoder and evaluate if messages need to be generated.
Result:
Messages are generated on half closed, and comments clarify existing logic.
Motivation:
We noticed that the headers implementation in Netty for HTTP/2 uses quite a lot of memory
and that the performance of randomly accessing a header is also quite poor. The main
concern however was memory usage, as profiling has shown that a DefaultHttp2Headers
not only uses a lot of memory, it also wastes a lot due to the underlying hashmaps having
to be resized potentially several times as new headers are being inserted.
This is tracked as issue #3600.
Modifications:
We redesigned the DefaultHeaders to simply take a Map object in its constructor and
reimplemented the class using only the Map primitives. That way the implementation
is very concise and hopefully easy to understand and it allows each concrete headers
implementation to provide its own map or to even use a different headers implementation
for processing requests and writing responses i.e. incoming headers need to provide
fast random access while outgoing headers need fast insertion and fast iteration. The
new implementation can support this with hardly any code changes. It also comes
with the advantage that if the Netty project decides to add a third party collections library
as a dependency, one can simply plug in one of those very fast and memory efficient map
implementations and get faster and smaller headers for free.
For now, we are using the JDK's TreeMap for HTTP and HTTP/2 default headers.
Result:
- Significantly fewer lines of code in the implementation. While the total commit is still
only roughly 400 lines smaller, the actual implementation shrank considerably; I just
added some more tests and microbenchmarks.
- Overall performance is up. The current implementation should be significantly faster
for insertion and retrieval. However, it is slower when it comes to iteration. There is simply
no way a TreeMap can have the same iteration performance as a linked list (as used in the
current headers implementation). That's totally fine though, because when looking at the
benchmark results @ejona86 pointed out that the performance of the headers is completely
dominated by insertion, that is insertion is so significantly faster in the new implementation
that it does make up for several times the iteration speed. You can't iterate what you haven't
inserted. I am demonstrating that in this spreadsheet [1]. (Actually, iteration performance is
only down for HTTP, it's significantly improved for HTTP/2).
- Memory is down. The implementation with TreeMap uses on avg ~30% less memory. It also does not
produce any garbage while being resized. In load tests for GRPC we have seen a memory reduction
of up to 1.2KB per RPC. I summarized the memory improvements in this spreadsheet [1]. The data
was generated by [2] using JOL.
- While it was my original intent to only improve the memory usage for HTTP/2, it should be similarly
improved for HTTP, SPDY and STOMP as they all share a common implementation.
[1] https://docs.google.com/spreadsheets/d/1ck3RQklyzEcCLlyJoqDXPCWRGVUuS-ArZf0etSXLVDQ/edit#gid=0
[2] https://gist.github.com/buchgr/4458a8bdb51dd58c82b4
Motivation:
The SPDY spec requires that all header names be lowercase (see https://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1#TOC-3.2-HTTP-Request-Response). The SPDY codec header name validator does not enforce this requirement.
Modifications:
- SpdyCodecUtil.validateHeaderName should check for upper case characters and throw an error if any are found.
Result:
SPDY codec header validation enforces specification requirement.
Motivation:
The HttpObjectDecoder is on the hot code path for the http codec. There are a few hot methods which can be modified to improve performance.
Modifications:
- Modify AppendableCharSequence to provide unsafe methods which don't need to re-check bounds for every call.
- Update HttpObjectDecoder methods to take advantage of new AppendableCharSequence methods.
Result:
Performance boost for decoding http objects.
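The idea behind the unsafe methods, as a rough sketch with hypothetical names (the real AppendableCharSequence API may differ):
// check the bounds once for the whole run instead of once per character
if (seq.length() + len > seq.capacity()) { // capacity() is hypothetical
    seq.expand();                          // grow the backing array up front
}
for (int i = 0; i < len; i++) {
    seq.appendUnsafe(chars[i]);            // hypothetical: append without re-checking bounds
}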
Motivation:
WebSocketServerHandshakerFactory.sendUnsupportedVersionResponse does not
send a LastHttpContent, nor does it flush, and it doesn't send a content
length.
Modifications:
Changed sendUnsupportedVersionResponse to send FullHttpResponse, to
writeAndFlush, and to set a content length of 0. Also added a test for
this method.
Result:
Upstream handlers will be able to determine the end of the response, the
response will actually get written to the client, and the client will be
able to determine the end of the response.
Proposal to fix issue #3636
Motivations:
Currently, while adding the next buffers to the decoder
(`decoder.offer()`), there is no way to access the current HTTP object
being decoded, since it only becomes available once fully decoded,
through `decoder.hasNext()`.
Some might want to know the progression of the overall transfer, but also
per HTTP object.
While overall progression can be computed using (if available) the global
Content-Length of the request and taking into account each HttpContent
size, the progression per HttpData object is unknown.
Modifications:
1) For HTTP objects, `AbstractHttpData` has 2 protected properties named
`definedSize` and `size`, respectively the supposed final size and the
current (decoded until now) size.
This provides a new method `definedSize()` to get the current value for
`definedSize`. The `size` attribute is reachable by the `length()`
method.
Note however that there are 2 different ways in which `definedSize` is
currently managed:
a) `Attribute`: it is reset each time the value is less than the actual
size (when a buffer is added, the value is increased) since the final
length is not known (no Content-Length)
b) `FileUpload`: it is set at startup from the length provided
So these differences could lead to a wrong perception:
a) `Attribute`: definedSize = size always
b) `FileUpload`: definedSize >= size always
Therefore the comment tries to explain clearly the different behaviors.
2) In the InterfaceHttpPostRequestDecoder (and the derived classes), I
add a new method: `decoder.currentPartialHttpData()` which will return an
`InterfaceHttpData` (if any) as the current `Attribute` or `FileUpload`
(the 2 generic types), which then allows the programmer to check,
according to the real type (instanceof), the 2 methods `definedSize()`
and `length()`.
This method checks whether currentFileUpload or currentAttribute is null
and returns the one (only one can be non-null) that is not null.
Note that if this method returns null, it might mean one of 2 situations:
a) the last `HttpData` (whether attribute or file upload) is already
finished and therefore accessible through `next()`
b) there is not yet any `HttpData` being decoded (body not yet parsed,
for instance)
Result:
The developer has more access to, and therefore control over, the
current upload.
Code on the developer side could look like the example in
HttpUploadServerHandler.
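A usage sketch based on the method names described above (illustrative; see the javadoc for the final API):
decoder.offer(httpContent); // feed the next received chunk into the decoder
InterfaceHttpData partial = decoder.currentPartialHttpData();
if (partial instanceof FileUpload) {
    FileUpload upload = (FileUpload) partial;
    // length() = bytes decoded so far, definedSize() = expected total size
    System.out.printf("upload progress: %d / %d%n", upload.length(), upload.definedSize());
}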
Related: #3814
Motivation:
To implement the support for an upgrade from cleartext HTTP/1.1
connection to cleartext HTTP/2 (h2c) connection, a user usually uses
HttpServerUpgradeHandler.
It does its job, but it requires a user to instantiate the UpgradeCodecs
for all supported protocols upfront. It means redundancy for the
connections that are not upgraded.
Modifications:
- Change the constructor of HttpServerUpgradeHandler
- Accept an UpgradeCodecFactory instead of UpgradeCodecs
- The default constructor of HttpServerUpgradeHandler sets the
maxContentLength to 0 now, which shouldn't be a problem because a
usual upgrade request is a GET.
- Update the examples accordingly
Result:
A user can instantiate Http2ServerUpgradeCodec and its related objects
(Http2Connection, Http2FrameReader/Writer, Http2FrameListener, etc) only
when necessary.
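A minimal sketch of the factory-based setup (createHttp2Handler() is a hypothetical helper standing in for the user's HTTP/2 handler construction):
final HttpServerCodec sourceCodec = new HttpServerCodec();
HttpServerUpgradeHandler.UpgradeCodecFactory factory = new HttpServerUpgradeHandler.UpgradeCodecFactory() {
    @Override
    public HttpServerUpgradeHandler.UpgradeCodec newUpgradeCodec(CharSequence protocol) {
        if (AsciiString.contentEquals(Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME, protocol)) {
            // the HTTP/2 objects are created only when an upgrade is actually requested
            return new Http2ServerUpgradeCodec(createHttp2Handler()); // hypothetical helper
        }
        return null; // unknown protocol: no upgrade
    }
};
pipeline.addLast(sourceCodec, new HttpServerUpgradeHandler(sourceCodec, factory));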
Motivation:
We need to ensure we never allow null values to be set on headers, otherwise we will see an NPE while encoding them.
Modifications:
Add unit test that shows we correctly handle null values.
Result:
Verify correct implementation.
Motivation:
SpdyOrHttpChooser and Http2OrHttpChooser duplicate a fair amount of code with each other.
Modification:
- Replace SpdyOrHttpChooser and Http2OrHttpChooser with ApplicationProtocolNegotiationHandler
- Add ApplicationProtocolNames to define the known application-level protocol names
Result:
- Less code duplication
- A user can perform dynamic pipeline configuration that follows ALPN/NPN for any protocols.
Related: #3641 and #3813
Motivation:
When setting up an HTTP/1 or HTTP/2 (or SPDY) pipeline, a user usually
ends up with adding arbitrary set of handlers.
Http2OrHttpChooser and SpdyOrHttpChooser have two abstract methods
(create*Handler()) that expect a user to return a single handler, and
also have add*Handlers() methods that add the handler returned by
create*Handler() to the pipeline as well as the pre-defined set of
handlers.
The problem is, some users (read: I) don't need all of them or the
user wants to add more than one handler. For example, take a look at
io.netty.example.http2.tiles.Http2OrHttpHandler, which works around
this issue by overriding addHttp2Handlers() and making
createHttp2RequestHandler() a no-op.
Modifications:
- Replace add*Handlers() and create*Handler() with configure*()
- Rename getProtocol() to selectProtocol() to make what it does clear
- Provide the default implementation of selectProtocol()
- Remove SelectedProtocol.UNKNOWN and use null instead, because
'UNKNOWN' is not a protocol
- Proper exception handling in the *OrHttpChooser so that the
exception is logged and the connection is closed when failed to
select a protocol
- Make SpdyClient example always use SSL. It was always using SSL
anyway.
- Implement SslHandshakeCompletionEvent.toString() for debuggability
- Remove an orphaned class: JettyNpnSslSession
- Add SslHandler.applicationProtocol() to get the name of the
application protocol
- SSLSession.getProtocol() now returns transport-layer protocol name
only, so that it conforms to its contract.
Result:
- *OrHttpChooser have better API.
- *OrHttpChooser handle protocol selection failure properly.
- SSLSession.getProtocol() now conforms to its contract.
- SpdyClient example works with SpdyServer example out of the box
Motivation:
Found a bug in that netty would generate a 20 byte body when returning a response
to an HTTP HEAD. The 20 bytes seem to be related to the compression footer.
RFC2616, section 9.4 states that responses to an HTTP HEAD MUST not return a message
body in the response.
Netty's own client implementation expected an empty response. The extra bytes led to a
2nd response with an error decoder result:
java.lang.IllegalArgumentException: invalid version format: 14
Modifications:
Track the HTTP request method. When processing the response we determine whether the
response is passed through unchanged. This decision now takes the request method into
account and passes through responses related to HTTP HEAD requests.
Result:
Netty's http client works correctly and RFC conformance is improved.
Motivation:
* Path attribute should be null, not empty String, if it's passed as "Path=".
* Only extract attribute value when the name is recognized.
* Only extract Expires attribute value String if MaxAge is undefined as it has precedence.
Modification:
Modify ClientCookieDecoder.
Add "testIgnoreEmptyPath" test in ClientCookieDecoderTest.
Result:
More idiomatic Path behavior (like Domain).
Minor performance improvement in some corner cases.
Motivations:
When using HttpPostRequestEncoder and trying to set an attribute when a
charset is defined, currently the implicit Charset.toString() is used,
giving a wrong format.
For instance, on Android for UTF-16 it yields "com.ibm.icu4jni.charset.CharsetICU[UTF-16]".
Modifications:
Each time a charset is to be printed as its name, charset.name() is
used to get the canonical name.
Result:
Now get "UTF-16" instead.
(3.10 version)
RFC6265 specifies which characters are allowed in a cookie name and value.
Netty is currently too lax, which can be used for HttpOnly escaping.
Modification:
In ServerCookieDecoder: discard cookie key-value pairs that contain invalid characters.
In ClientCookieEncoder: throw an exception when trying to encode cookies with invalid characters.
Result:
The problem described in the motivation section is fixed.
Motivation:
Our automatic handling of non-auto-read failed because it did not detect the need to call read again by itself if nothing was decoded. Besides this, handling of non-auto-read never worked for SslHandler as it always triggered a read even if it decoded a message and auto-read was false.
This fixes [#3529] and [#3587].
Modifications:
- Implement handling of calling read when nothing was decoded (with non-auto-read) to ByteToMessageDecoder again
- Correctly respect non-auto-read by SslHandler
Result:
No more stalls, and non-auto-read is correctly respected by SslHandler.
Motivation:
Other implementations of FullHttpMessage allow .toString to be called after the message has been released.
This brings AggregatedFullHttpMessage into line with those implementations.
Modifications:
- Changed AggregatedFullHttpMessage to no longer be a sub-class of DefaultByteBufHolder
- Changed AggregatedFullHttpMessage to implement ByteBufHolder
- Hold the content buffer internally to AggregatedFullHttpMessage
- Implement the required content() and release() methods that were missing
- Do not check refcnt when accessing content() (similar to DefaultFullHttpMessage)
Result:
A released AggregatedFullHttpMessage can have .toString called without throwing an exception
Motivation:
While forward porting https://github.com/netty/netty/pull/3579 there were a few areas that had not been previously back ported.
Modifications:
Backport the missed areas to ensure consistency.
Result:
More consistent 4.1 and master branches.
Motivation:
The usage and code within AsciiString has exceeded the original design scope for this class. Its usage as a binary string is confusing and on the verge of violating interface assumptions in some spots.
Modifications:
- ByteString will be created as a base class to AsciiString. All of the generic byte handling processing will live in ByteString and all the special character encoding will live in AsciiString.
Results:
The AsciiString interface will be clarified. Users of AsciiString can now be clear of the limitations the class imposes while users of the ByteString class don't have to live with those limitations.
(Ported @luciferous's changes against 3.10)
Motivation:
The current implementation of the encoder writes each character of the
String as a single byte to the buffer, however not all characters are
mappable to a single byte.
Modifications:
If a character is outside the ASCII range, it's converted to '?'.
Result:
A safer encoder for String to ASCII, which substitutes unmappable
characters with '?'.
Motivation:
Not knowing which unit is used for the maxContentLength of the HttpObjectAggregator when reading the Javadoc is annoying and can be a source of bugs.
Modifications:
Added the mention "in bytes"
Result:
Javadoc is clear.
Related: #3445
Motivation:
HttpObjectDecoder.HeaderParser does not reset its counter (the size
field) when it failed to find the end of line. If a header is split
into multiple fragments, the counter is increased as many times as the
number of fragments, resulting in an unexpected TooLongFrameException.
Modifications:
- Add test cases that reproduces the problem
- Reset the HeaderParser.size field when no EOL is found.
Result:
One less bug
Motivation:
Currently CORS can be configured to support a 'null' origin, which can
be set by a browser if a resource is loaded from the local file system.
When this is done 'Access-Control-Allow-Origin' will be set to "*" (any
origin). There is also a configuration option to allow credentials being
sent from the client (cookies, basic HTTP Authentication, client side
SSL). This is indicated by the response header
'Access-Control-Allow-Credentials' being set to true. When this is set
to true, the "*" origin is not valid as the value of
'Access-Control-Allow-Origin' and a browser will reject the request:
http://www.w3.org/TR/cors/#resource-requests
Modifications:
Updated CorsHandler's setAllowCredentials to check the origin and if it
is "*" then it will not add the 'Access-Control-Allow-Credentials'
header.
Result:
It is possible to have a client send a 'null' origin, and at the same
time have configured CORS to support that and to allow credentials
in that combination.
Motivation:
At the moment if you want to return a HTTP header containing multiple
values you have to set/add that header once with the values wanted. If
you used set/add with an array/iterable multiple HTTP header fields will
be returned in the response.
Note that this is indeed a suggestion and additional work and tests
should be added. This is mainly to bring up a discussion.
Modifications:
Added a flag to specify that when multiple values exist for a single
HTTP header then add them as a comma separated string.
In addition added a method to StringUtil to help escape comma separated
value charsequences.
Result:
Allows for responses to be smaller.
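For the escaping helper, a hedged sketch (assuming a StringUtil.escapeCsv style API):
// a value containing a comma must be quoted before the values are joined with ','
CharSequence escaped = StringUtil.escapeCsv("a,b"); // -> "\"a,b\""
// so that  My-Header: v1,"a,b",v2  can later be split back into three values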
Motivation:
To use WebSocketClientHandshaker / WebSocketServerHandshaker it is currently required to have a HttpObjectAggregator in the ChannelPipeline. This is not a big deal when a user only wants to serve WebSockets, but it is a limitation if the server serves WebSockets and normal HTTP traffic.
Modifications:
Allow WebSocketClientHandshaker and WebSocketServerHandshaker to be used without a HttpObjectAggregator in the ChannelPipeline.
Result:
More flexibility
Motivation:
SonarQube (clinker.netty.io/sonar) reported a resource which may not have been properly closed in all situations in AbstractDiskHttpData.
Modifications:
- Ensure file channels are closed in the presence of exceptions.
- Correct instances where local channels were created but potentially not closed.
Result:
Less leaks. Less SonarQube vulnerabilities.
Motivation:
`HttpResponseDecoder` and `HttpRequestDecoder`, in the event the configured max sizes for the HTTP initial line, headers or content are breached, send a `DefaultHttpResponse` or `DefaultHttpRequest` respectively. After this `HttpObjectDecoder` gets into the `BAD_MESSAGE` state and ignores any other data received on this connection.
The combination of the above two behaviors means that the decoded response/request is not complete (no `LastHttpContent` is sent). So any code waiting for a complete message has to additionally check the decoder result to follow the correct semantics of HTTP.
If `HttpResponseDecoder` and `HttpRequestDecoder` create a Full* invalid message, then the request/response is a complete HTTP message and hence obeys the HTTP contract.
Modification:
Modified `HttpRequestDecoder`, `HttpResponseDecoder`, `RtspRequestDecoder` and `RtspResponseDecoder` to return Full* messages from `createInvalidMessage()`
Result:
Fixes the wrong behavior of sending incomplete messages from these codecs
In testEncodingSingleCookieV0():
Let's assume we encoded a cookie with MaxAge=50 when currentTimeMillis
is 10999.
Because the encoder will not encode the millisecond part for Expires,
the timeMillis value of the encoded Expires field will be 60000. (If we
did not drop the millisecond part, it would be 60999.)
Encoding a cookie will take some time, so currentTimeMillis will
increase slightly, such as to 11001.
diff = (60000 - 11001) / 1000 = 48999 / 1000 = 48
maxAge - diff = 50 - 48 = 2
Due to losing millisecond part twice, we end up with the precision
problem illustrated above, and thus we should increase the tolerance
from 1 second to 2 seconds.
/cc @slandelle
Motivation:
Internet Explorer doesn't honor Set-Cookie header Max-Age attribute. It only honors the Expires one.
Modification:
Always generate an Expires attribute along with the Max-Age one.
Result:
Internet Explorer compatible expiring cookies. Close#1466.
Motivation:
HTTP/2 codec was implemented in master branch.
Since master is not yet stable and it will be some time before it gets released, backporting it to 4.1 enables people to use the codec with a stable netty version.
Modification:
The code has been copied from master branch as is, with minor modifications to suit the `ChannelHandler` API in 4.x.
Apart from that change, there are two backward incompatible API changes included, namely,
- Added an abstract method:
`public abstract Map.Entry<CharSequence, CharSequence> forEachEntry(EntryVisitor<CharSequence> visitor)
throws Exception;`
to `HttpHeaders` and implemented the same in `DefaultHttpHeaders` as a delegate to the internal `TextHeader` instance.
- Added a method:
`FullHttpMessage copy(ByteBuf newContent);`
in `FullHttpMessage` with the implementations copied from relevant places in the master branch.
- Added missing abstract method related to setting/adding short values to `HttpHeaders`
Result:
HTTP/2 codec can be used with netty 4.1
Motivation:
HttpContentDecoder had the following issues:
- For chunked content, the decoder set an invalid "Content-Length" header
with the length of the first decoded chunk.
- Decoding of FullHttpRequests put both the original content and the decoded
content into the output. As a result, using HttpObjectAggregator before the
decoder led to errors.
- Requests with an "Expect: 100-continue" header were not acknowledged:
the decoder didn't pass the header message down the handler chain
until content was received. If the client expected a "100 Continue" response,
a deadlock happened.
Modification:
- Invalid "Content-Length" header is removed; handlers down the chain can either
rely on LastHttpContent message or ask HttpObjectAggregator to add the header.
- FullHttpRequest is split into HttpRequest and HttpContent (decoded) parts.
- Header (HttpRequest) part of request is sent down the chain as soon as it's received.
Result:
The issues are fixed and a unit test is added.
Motivation:
The pull request for RFC6265 support had an unused flag 'first' in ClientCookieDecoder.
Modification:
Remove the unused flag 'first'.
Result:
Cleaner code.
Motivation:
Rfc6265Client/ServerCookieEncoder is a better replacement for the old
Client/ServerCookieEncoder, and thus there's no point in keeping both.
Modifications:
- Remove the old Client/ServerCookieEncoder
- Remove the 'Rfc6265' prefix from the new cookie encoder/decoder
classes
- Deprecate CookieDecoder
Result:
We have much better cookie encoder/decoder implementation now.
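A usage sketch with the renamed classes (assuming the final 4.1 shape with STRICT encoder/decoder instances):
// server side: decode the Cookie request header, encode a Set-Cookie response header
Set<Cookie> cookies = ServerCookieDecoder.STRICT.decode(request.headers().get(HttpHeaderNames.COOKIE));
String setCookie = ServerCookieEncoder.STRICT.encode(new DefaultCookie("id", "42"));
response.headers().add(HttpHeaderNames.SET_COOKIE, setCookie);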
Motivation:
Currently Netty supports a weird implementation of RFC 2965.
First, this RFC has been deprecated by RFC 6265 and nobody on the
internet uses this format.
Then, there's a confusion between client side and server side encoding
and decoding.
Typically, clients should only send name=value pairs.
This PR introduces RFC 6265 support, but keeps on supporting RFC 2965 in
the sense that old unused fields are simply ignored, and Cookie fields
won't be populated. Deprecated fields are comment, commentUrl, version,
discard and ports.
It also provides a mechanism for safe server-client-server roundtrip, as
User-Agents are not supposed to interpret cookie values but return them
as-is (e.g. if Set-Cookie contained a quoted value, it should be sent
back in the Cookie header in quoted form too).
Also, there are performance gains to be obtained by not allocating the
attribute name Strings, as we only want to match them to find which POJO
field to populate.
Modifications:
- New RFC6265ClientCookieEncoder/Decoder and
RFC6265ServerCookieEncoder/Decoder pairs that live alongside old
CookieEncoder/Decoder pair to not break backward compatibility.
- New Cookie.rawValue field, used for lossless server-client-server
roundtrip.
Result:
RFC 6265 support.
Clean separation of client and server side.
Decoder performance gain:
Benchmark                  Mode  Samples        Score        Error  Units
parseOldClientDecoder      thrpt      20  2070169,228 ± 105044,970  ops/s
parseRFC6265ClientDecoder  thrpt      20  2954015,476 ± 126670,633  ops/s
This commit closes#3221 and #1406.
Motivation:
HttpPostMultipartRequestDecoder threw an ArrayIndexOutOfBoundsException
when trying to decode a Content-Disposition header with a filename
containing ';' or an escaped quote (\\").
See issues #3326 and #3327.
Modifications:
Added a splitMultipartHeaderValues method which takes quotes into account,
and used it in the splitMultipartHeader method instead of StringUtils.split.
Result:
Filenames can contain semicolons and escaped quotes (\\").
Motivation:
HttpResponseStatus, HttpMethod and HttpVersion have methods that return
AsciiString. There's no need for object-to-string conversion.
Modifications:
Use codeAsText(), name(), text() instead of setInt() and setObject()
Result:
Efficiency
Motivation:
The SpdyHttpDecoder was modified to support pushed resources that are
divided into multiple frames. The decoder accepts a pushed
SpdySynStreamFrame containing the request headers, followed by a
SpdyHeadersFrame containing the response headers.
Modifications:
This commit modifies the SpdyHttpEncoder so that it encodes pushed
resources in a format that the SpdyHttpDecoder can decode. The encoder
will accept an HttpRequest object containing the request headers,
followed by an HttpResponse object containing the response headers.
Result:
The SpdyHttpEncoder will create a SpdySynStreamFrame followed by a
SpdyHeadersFrame when sending pushed resources.
Motivations:
It seems that slicing a buffer and using this slice to write to CTX will
decrease the initial refCnt to 0, while the original buffer is not yet
fully used (not empty).
Modifications:
As suggested in the ticket and tested, when the currentBuffer is sliced
since it will still be used later on, the currentBuffer is retained.
Add a test case for this issue.
Result:
The currentBuffer still has its correct refCnt when reaching the last
write (not sliced) of 1 and therefore will be released correctly.
The exception no longer occurs.
This fix should be applied to all branches >= 4.0.
When handling an oversized message, HttpObjectAggregator does not wait
until the last chunk is received to produce the failed message, making
AggregatedFullHttpMessage.trailingHeaders() return null.
Related: #3019
Motivation:
We have multiple (Full)HttpRequest/Response implementations and only
some of them implements toString() properly.
Modifications:
- Add the reusable string converter for HttpMessages to HttpMessageUtil
- Implement toString() of (Full)HttpRequest/Response implementations
properly using HttpMessageUtil
Result:
Prettier string representation is returned by HttpMessage
implementations.
Motivation:
Even if it's against the HTTP RFC, there are situations where it may be useful to use chars other than US_ASCII in the headers. We should make this possible by allowing the user to override how headers are encoded.
Modifications:
- Add an encodeHeaders(...) method so it can be overridden.
Result:
It's now possible to encode headers with a charset other than US_ASCII by just extending the encoder and overriding the encodeHeaders(...) method.
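A sketch of such an override (the protected hook's exact signature is assumed from the description above):
public class Utf8HeadersHttpRequestEncoder extends HttpRequestEncoder {
    @Override
    protected void encodeHeaders(HttpHeaders headers, ByteBuf buf) {
        for (Map.Entry<String, String> h : headers) {
            // write "name: value\r\n" using UTF-8 instead of US_ASCII
            buf.writeBytes(h.getKey().getBytes(CharsetUtil.UTF_8));
            buf.writeByte(':').writeByte(' ');
            buf.writeBytes(h.getValue().getBytes(CharsetUtil.UTF_8));
            buf.writeByte('\r').writeByte('\n');
        }
    }
}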
Motivation:
HEAD requests will have a Content-Length set that doesn't match the
actual length. So we only want to set Content-Length header if it isn't
already set.
Modifications:
Add an if check around setting the Content-Length.
Result:
A HEAD request will now correctly return the specified Content-Length
instead of the body length.
Modifications:
Converted the AsciiString into a String by calling the toString() method before comparing with equals(). Also added a unit test to show that it works.
Result:
Major violation is gone. Code is correct.
Motivation:
Without this check, given a URI with path /path the resulting URL will be /path?null=
Modifications:
Check that getRawQuery doesn't return null and only append it if non-null
Result:
URLs of the form /path will not have "?null=" appended
Motivations:
The chunkSize might overflow after the comparison (the size being larger
than int capacity) if the file size is bigger than an int can hold.
Modifications:
Change it to long.
Result:
No more int overflow.
Same fix for 4.1 and Master
Motivation:
The new Headers interface contains methods to getTimeMillis but no add/set/contains variants. These should be added for consistency.
Modifications:
- Add three new methods: addTimeMillis, setTimeMillis, containsTimeMillis to the Headers interface.
- Add a new method to the Headers.ValueConverter interface: T convertTimeMillis(long)
- Bring these new interfaces up the class hierarchy
Result:
All Headers classes have setters/getters for timeMillis.
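A usage sketch (the exact signatures of the new variants are assumed from the names above):
headers.setTimeMillis(HttpHeaderNames.DATE, System.currentTimeMillis());
long date = headers.getTimeMillis(HttpHeaderNames.DATE, -1L);          // existing getter with default
boolean same = headers.containsTimeMillis(HttpHeaderNames.DATE, date); // new contains variant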
Related: #3157
Motivation:
It should be convenient to have an easy way to classify an
HttpResponseStatus based on the first digit of the HTTP status code, as
defined in the RFC 2616:
- Information 1xx
- Success 2xx
- Redirection 3xx
- Client Error 4xx
- Server Error 5xx
Modification:
- Add HttpStatusClass
- Add HttpResponseStatus.codeClass() that returns the class of the HTTP
status code
Result:
It's easier to determine the class of an HTTP status
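For example, after this change one can write:
HttpResponseStatus status = HttpResponseStatus.NOT_FOUND;
if (status.codeClass() == HttpStatusClass.CLIENT_ERROR) {
    // handle all 4xx statuses uniformly
}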
Motivation:
I found myself writing AsciiString constants in my code for
response statuses and thought that perhaps it might be nice to have
them defined by Netty instead.
Modifications:
Adding codeAsText to HttpResponseStatus that returns the status code as
an AsciiString.
In addition, added the 421 Misdirected Request response code from
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#section-9.1.2
This response code was renamed in draft 15:
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#appendix-A.1
But the code itself was not changed, and I thought using the latest would
be better.
Result:
It is now possible to specify a status like this:
new DefaultHttp2Headers().status(HttpResponseStatus.OK.codeAsText());
Motivation:
Found performance issues via FindBugs and PMD.
Modifications:
- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive is created from a string, just to extract the unboxed primitive value.
- Added a static modifier for DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid compiling it each time it is used when we need to replace some part of the authority.
- Improved the use of StringBuilders.
Result:
Performance improvements.
Motivation:
The SPDY/3.1 spec does not adequately describe how to push resources
from the server. This was solidified in the HTTP/2 drafts by dividing
the push into two frames, a PushPromise containing the request,
followed by a Headers frame containing the response.
Modifications:
This commit modifies the SpdyHttpDecoder to support pushed resources
that are divided into multiple frames. The decoder will accept a
pushed SpdySynStreamFrame containing the request headers, followed by
a SpdyHeadersFrame containing the response headers.
Result:
The SpdyHttpDecoder will create an HttpRequest object followed by an
HttpResponse object when receiving pushed resources.
Motivation:
RFC 2616, 4.3 Message Body states that:
All 1xx (informational), 204 (no content), and 304 (not modified) responses MUST NOT include a
message-body. All other responses do include a message-body, although it MAY be of zero length.
Modifications:
HttpContentEncoder was previously modified to cater for HTTP 100 responses. This check is enhanced to
include HTTP 204 and 304 responses.
Result:
Empty response bodies will not be modified to include the compression footer. This footer messed with Chrome's
response parsing leading to "hanging" requests.
Motivation:
HttpObjectDecoder extended ReplayingDecoder which is slightly slower than ByteToMessageDecoder.
Modifications:
- Changed the super class of HttpObjectDecoder from ReplayingDecoder to ByteToMessageDecoder.
- Rewrote decode() method of HttpObjectDecoder to use proper state machine.
- Changed the private methods HeaderParser.parse(ByteBuf), readHeaders(ByteBuf), readTrailingHeaders(ByteBuf) and skipControlCharacters(ByteBuf) to consider available bytes.
- Set HeaderParser and LineParser as static inner classes.
- Replaced the unsafe actualReadableBytes() with buffer.readableBytes().
Result:
Improved performance of HttpObjectDecoder by approximately 177%.
Motivation:
The HttpContentEncoder does not account for an EmptyLastHttpContent being provided as input. This is useful in situations where the client is unable to determine whether the current content chunk is the last content chunk (i.e. a proxy forwarding content when the transfer encoding is chunked).
Modifications:
- HttpContentEncoder should not attempt to compress empty HttpContent objects
Result:
HttpContentEncoder supports an EmptyLastHttpContent to terminate the response.
Motivation:
Headers has getTimeMillis(), not getDate()
Modification:
- Replace HttpHeaders.getDate() with getTimeMillis() so that migration
is smoother
Result:
User code which accesses a date header is easier to migrate
Motivation:
The commit 50e06442c3 changed the type of
the constants in HttpHeaders.Names and HttpHeaders.Values, making 4.1
backward-incompatible with 4.0.
It also introduces newer utility classes such as HttpHeaderUtil, which
deprecates most static methods in HttpHeaders. To ease the migration
between 4.1 and 5.0, we should deprecate all static methods that are
non-existent in 5.0, and provide proper counterpart.
Modification:
- Revert the changes in HttpHeaders.Names and Values
- Deprecate all static methods in HttpHeaders in favor of:
- HttpHeaderUtil
- the member methods of HttpHeaders
- AsciiString
- Add integer and date access methods to HttpHeaders for easier future
migration to 5.0
- Add HttpHeaderNames and HttpHeaderValues which provide standard HTTP
constants in AsciiString
- Deprecate HttpHeaders.Names and Values
- Make HttpHeaderValues.WEBSOCKET lowercased because it's actually
lowercased in all WebSocket versions but the oldest one
- Add RtspHeaderNames and RtspHeaderValues which provide standard RTSP
constants in AsciiString
- Deprecate RtspHeaders.*
- Do not use AsciiString.equalsIgnoreCase(CharSeq, CharSeq) if one of
the parameters are AsciiString
- Avoid using AsciiString.toString() repetitively
- Change the parameter type of some methods from String to
CharSequence
Result:
Backward compatibility is recovered. New classes and methods will make
the migration to 5.0 easier, once (Http|Rtsp)Header(Names|Values) are
ported to master.
Motivation:
The header class hierarchy and algorithm were improved on the master branch for versions 5.x. These improvements should be backported to the 4.1 baseline.
Modifications:
- cherry-pick the following commits from the master branch: 2374e17, 36b4157, 222d258
Result:
Header improvements in master branch are available in 4.1 branch.
Motivation:
The requirement for the masking of frames and for checks of correct
masking in the websocket specification has a large impact on performance.
While it is mandatory for browsers to use masking, there are other
applications (like IPC protocols) that want to use websocket framing and proxy-traversing
characteristics without the overhead of masking. The websocket standard
also mentions that the requirement for mask verification on the server side
might be dropped in the future.
Modifications:
Added an optional parameter allowMaskMismatch for the websocket decoder
that allows a server to also accept unmasked frames (and clients to accept
masked frames).
Allowed setting this option through the websocket handshaker
constructors as well as the websocket client and server handlers.
The public API for existing components doesn't change; it will be
forwarded to functions which implicitly set masking as required by the
specification.
For websocket clients an additional parameter is added that allows
disabling the masking of frames that are sent by the client.
Result:
This update gives netty users the ability to create and use completely
unmasked websocket connections in addition to the normal masked channels
that the standard describes.
Motivation:
At the moment the whole HTTP header must be parsed at once, which can lead to parsing the same bytes multiple times. We can do better here and allow parsing it in multiple steps.
Modifications:
- Not parse headers multiple times
- Simplify the code
- Eliminate unnecessary String[] creations
- Use readSlice(...).retain() when possible.
Result:
Performance improvements as shown in the included benchmark below.
Before change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 21.55ms 15.10ms 245.02ms 90.26%
Req/Sec 196.33k 30.17k 297.29k 76.03%
373954750 requests in 2.00m, 50.15GB read
Requests/sec: 3116466.08
Transfer/sec: 427.98MB
After change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 20.91ms 36.79ms 1.26s 98.24%
Req/Sec 206.67k 21.69k 243.62k 94.96%
393071191 requests in 2.00m, 52.71GB read
Requests/sec: 3275971.50
Transfer/sec: 449.89MB
Motivation:
The 4.1.0-Beta3 implementation of HttpObjectAggregator.handleOversizedMessage closes the
connection if the client sent oversized chunked data with no Expect:
100-continue header. This causes a broken pipe or "connection reset by
peer" error in some clients (tested on Firefox 31 OS X 10.9.5,
async-http-client 1.8.14).
This part of the HTTP 1.1 spec (below) seems to say that in this scenario the connection
should not be closed (unless the intention is to be very strict about
how data should be sent).
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
"If an origin server receives a request that does not include an
Expect request-header field with the "100-continue" expectation,
the request includes a request body, and the server responds
with a final status code before reading the entire request body
from the transport connection, then the server SHOULD NOT close
the transport connection until it has read the entire request,
or until the client closes the connection. Otherwise, the client
might not reliably receive the response message. However, this
requirement is not be construed as preventing a server from
defending itself against denial-of-service attacks, or from
badly broken client implementations."
Modifications:
Change HttpObjectAggregator.handleOversizedMessage to close the
connection only if keep-alive is off and Expect: 100-continue is
missing. Update test to reflect the change.
Result:
Broken pipe and connection reset errors on the client are avoided when
oversized data is sent.
Related: #2983
Motivation:
It is a well known idiom to write an empty buffer and add a listener to
its future to close a channel when the last byte has been written out:
ChannelFuture f = channel.writeAndFlush(Unpooled.EMPTY_BUFFER);
f.addListener(ChannelFutureListener.CLOSE);
When HttpObjectEncoder is in the pipeline, this still works, but it
silently raises an IllegalStateException, because HttpObjectEncoder does
not allow writing a ByteBuf when it is expecting an HttpMessage.
Modifications:
- Handle an empty ByteBuf specially in HttpObjectEncoder, so that
writing an empty buffer does not fail even if the pipeline contains an
HttpObjectEncoder
- Add a test
Result:
An exception is not triggered anymore by HttpObjectEncoder, when a user
attempts to write an empty buffer.
Motivation:
Currently, when the CorsHandler processes a preflight request, or
responds with a 403 Forbidden using the short-circuit option, the
HttpRequest is not released, which leads to a buffer leak.
Modifications:
Releasing the HttpRequest when done processing a preflight request or
responding with a 403.
Result:
Using the CorsHandler will not cause buffer leaks.
Motivation:
Issue #3004 shows that the "=" character was not supported as it should
be in the HttpPostRequestDecoder for form-data boundaries.
Modifications:
Add 2 methods in StringUtil:
- split with a maxParts argument: String split with a maximum number of
parts (to prevent multiple '=' from being a source of extra splits when
not needed)
- substringAfter: the String part after the delimiter (since the first
part is not needed)
Use those methods in HttpPostRequestDecoder.
Changed the HttpPostRequestDecoderTest to check using a boundary
beginning with "=".
Results:
The fix improves stability and resolves the issue.
Motivation:
There's no way for a user to get the encoder and the decoder of an
HttpClientCodec. The lack of such getter methods makes it impossible to
remove the codec handlers from the pipeline correctly.
For example, a user could add more than one HttpClientCodec to the
pipeline, and then the user cannot easily decide which encoder and
decoder to remove.
Modifications:
- Add encoder() and decoder() method to HttpClientCodec which returns
HttpRequestEncoder and HttpResponseDecoder respectively
- Also made the same changes to HttpServerCodec
Result:
A user can distinguish the handlers added by multiple HttpClientCodecs
easily.
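A sketch of the intended use per the motivation above (assuming the codec's handlers are separate pipeline entries, as in 4.1 at the time):
HttpClientCodec codec = new HttpClientCodec();
pipeline.addLast(codec);
// later, remove precisely this codec's encoder and decoder
pipeline.remove(codec.encoder());
pipeline.remove(codec.decoder());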
Motivation:
Websocket clients can request to speak a specific subprotocol. The list of
subprotocols the client understands is sent to the server. The server
should select one of the protocols and reply with it in the websocket
handshake response. The added code verifies that the subprotocol in the
response is valid.
Modifications:
Added verification of the subprotocol received from the server against the
subprotocol(s) that the user requests. If the user requests a subprotocol
but the server responds with none or a non-requested subprotocol, this is
an error and the handshake fails through an exception. If the user requests
no subprotocol but the server responds with one, this is also marked as an
error.
Additionally a getter for the WebSocketClientHandshaker in the
WebSocketClientProtocolHandler is added to enable the user of a
WebSocketClientProtocolHandler to extract the negotiated subprotocol.
Result:
The subprotocol field which is received from a websocket server is now
properly verified on client side and clients and websocket connection
attempts will now only succeed if both parties can negotiate on a
subprotocol.
If the client sends a list of multiple possible subprotocols it can
extract the negotiated subprotocol through the added handshaker getter (WebSocketClientProtocolHandler.handshaker().actualSubprotocol()).
Motivation:
I was not fully reassured that everything works correctly when a websocket client receives the websocket handshake HTTP response and a websocket frame in a single ByteBuf (which can happen when the server sends a response directly or shortly after the connect). In this case some parts of the ByteBuf must be processed by the HTTP decoder and the remaining ones by the websocket decoder.
Modification:
Adding a test that verifies that in this scenario the handshake and the message are correctly interpreted and delivered by Netty.
Result:
One more test for Netty.
The test succeeds - No problems
Motivation:
The WebSocketClientProtocolHandshakeHandler never releases the received handshake response.
Modification:
Release the message in a finally block.
Result:
No more leak
Motivation:
The WebSocket08FrameEncoder contains an optimization path for small messages which copies the message content into the header buffer to avoid vectored writes. However, in the current implementation this path is never taken because the target buffer is preallocated only for exactly the size of the header.
Modification:
For messages below a certain threshold, allocate the buffer so that the message can be directly copied. Thereby the optimized path is taken.
Result:
A speedup of about 25% for 100-byte messages, declining with bigger message sizes. I have currently set the threshold to 1kB, which is a point where I could still see a few percent speedup, but we should also avoid burning too many CPU cycles.
Motivation:
Websocket performance is to a large extent determined by the masking
and unmasking of frames. The current behavior of this in Netty can be
improved.
Modifications:
Perform the XOR operation not bytewise but in int blocks as long as
possible. This reduces the number of necessary operations by a factor of 4.
Also don't read the writerIndex in each iteration.
Added a unit test for websocket decoding and encoding for verification.
Result:
A large performance gain (up to 50%) in websocket throughput.
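A rough sketch of the int-block idea (illustrative only, not the exact Netty code):
int intMask = ((mask[0] & 0xFF) << 24) | ((mask[1] & 0xFF) << 16)
            | ((mask[2] & 0xFF) << 8) | (mask[3] & 0xFF);
int start = frame.readerIndex();
int end = frame.writerIndex(); // read once instead of per iteration
int i = start;
for (; i + 3 < end; i += 4) {
    frame.setInt(i, frame.getInt(i) ^ intMask); // 4 bytes per XOR instead of 1
}
for (; i < end; i++) { // remaining 0-3 bytes, keeping the mask rotation aligned
    frame.setByte(i, frame.getByte(i) ^ mask[(i - start) & 3]);
}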
Motivation:
According to the websocket specification peers may send a close frame when
they detect a protocol violation (with status code 1002). The current
implementation simply closes the connection. This update should add this
functionality. The functionality is optional - but it might help other
implementations with debugging when they receive such a frame.
Modification:
When a protocol violation is detected in the decoder and a close was not
already initiated by the remote peer, a close frame is sent.
Result:
Remotes which will send an invalid frame will now get a close frame that
indicates the protocol violation instead of only seeing a closed
connection.
Motivation:
Sometimes it is useful to be able to access the uri that was used to initialize the QueryStringDecoder.
Modifications:
Add a method which allows retrieving the uri.
Result:
Allow retrieving the uri that was used to create the QueryStringDecoder.
Motivation:
The HTTP content decoder's cleanup method is not cleaning up the decoder correctly.
The cleanup method is currently doing a readOutbound on the EmbeddedChannel, but
for decoding the call should be readInbound.
Modifications:
- Change readOutbound to readInbound in the cleanup method
Result:
The cleanup method now correctly releases unused resources
Motivation:
Currently we do more memory copies than needed.
Modification:
- Directly use heap buffers to reduce memory copy
- Correctly release buffers to fix buffer leak
Result:
Less memory copies and no leaks
Related issue: #2741 and #2151
Motivation:
There is no way for ChunkedWriteHandler to know the progress of the
transfer of a ChunkedInput. Therefore, ChannelProgressiveFutureListener
cannot get exact information about the progress of the transfer.
If you add a few methods that optionally provide the transfer progress
to ChunkedInput, it becomes possible for ChunkedWriteHandler to notify
ChannelProgressiveFutureListeners.
If the input has no definite length, we can still use the progress so
far, and consider the length of the input as 'undefined'.
Modifications:
- Add ChunkedInput.progress() and ChunkedInput.length()
- Modify ChunkedWriteHandler to use progress() and length() to notify
the transfer progress
Result:
ChunkedWriteHandler now notifies ChannelProgressiveFutureListener.
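A usage sketch in the style of the HttpStaticFileServer example (raf and fileLength are assumed to exist):
ChannelFuture sendFuture = ctx.write(
        new HttpChunkedInput(new ChunkedFile(raf, 0, fileLength, 8192)),
        ctx.newProgressivePromise());
sendFuture.addListener(new ChannelProgressiveFutureListener() {
    @Override
    public void operationProgressed(ChannelProgressiveFuture future, long progress, long total) {
        // total is -1 when ChunkedInput.length() reports an undefined length
        System.out.println("transferred: " + progress + " / " + total);
    }
    @Override
    public void operationComplete(ChannelProgressiveFuture future) {
        System.out.println("transfer complete");
    }
});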
Motivation:
The _0XFF_0X00 buffer is not duplicated and is empty after the first usage, preventing the connection close from happening on subsequent close frames.
Modifications:
Correctly duplicate the buffer.
Result:
Multiple CloseWebSocketFrames are handled correctly.
In Netty 3, downstream writes of SPDY data frames and upstream reads of
SPDY window update frames occur on different threads.
When receiving a window update frame, we synchronize on a java object
(SpdySessionHandler::flowControlLock) while sending any pending writes
that are now able to complete.
When writing a data frame, we check the send window size to see if we
are allowed to write it to the socket, or if we have to enqueue it as a
pending write. To prevent races with the window update frame, this is
also synchronized on the same SpdySessionHandler::flowControlLock.
In Netty 4, upstream and downstream operations on any given channel now
occur on the same thread. Since java locks are re-entrant, this now
allows downstream writes to occur while processing window update frames.
In particular, when we receive a window update frame that unblocks a
pending write, this write completes which triggers an event notification
on the response, which in turn triggers a write of a data frame. Since
this is on the same thread it re-enters the lock and modifies the send
window. When the write completes, we continue processing pending writes
without knowledge that the window size has been decremented.
Related issue: #2743
Motivation:
When there is more than one stream with the same priority, the set
returned by SpdySession.getActiveStreams() will not include all of them,
because it uses TreeSet and only compares the priority of streams. If
two different streams have the same priority, one of them will be
discarded by TreeSet.
Modification:
- Rename getActiveStreams() to activeStreams()
- Replace PriorityComparator with StreamComparator
Result:
Two different streams with the same priority are compared correctly.
Motivation:
If the request contains uri parameters but no path, the HttpRequestEncoder produces an invalid uri while trying to add the missing path.
Modifications:
Correctly handle the case of a uri with parameters but no path.
Result:
HttpRequestEncoder produces a correct uri in all cases.
Motivation:
Now Netty has a few problems with null values.
Modifications:
- Check HAProxyProxiedProtocol in HAProxyMessage constructor and throw NPE if it is null.
If HAProxyProxiedProtocol is null we will set AddressFamily as null. So we will get NPE inside checkAddress(String, AddressFamily) and it won't be easy to understand why addrFamily is null.
- Check File in DiskFileUpload.toString().
If File is null we will get NPE when calling toString() method.
- Check Result<String> in MqttDecoder.decodeConnectionPayload(...).
If !mqttConnectVariableHeader.isWillFlag() || !mqttConnectVariableHeader.hasUserName() || !mqttConnectVariableHeader.hasPassword() we will get NPE when we will try to create new instance of MqttConnectPayload.
- Check Unsafe before calling unsafe.getClass() in PlatformDependent0 static block.
- Removed unnecessary null check in WebSocket08FrameEncoder.encode(...).
Because msg.content() can not return null.
- Removed unnecessary null check in DefaultStompFrame(StompCommand) constructor.
Because we have this check in the super class.
- Removed unnecessary null checks in ConcurrentHashMapV8.removeTreeNode(TreeNode<K,V>).
- Removed unnecessary null check in OioDatagramChannel.doReadMessages(List<Object>).
Because tmpPacket.getSocketAddress() always returns new SocketAddress instance.
- Removed unnecessary null check in OioServerSocketChannel.doReadMessages(List<Object>).
Because socket.accept() always returns new Socket instance.
- Pass Unpooled.buffer(0) instead of null inside CloseWebSocketFrame(boolean, int) constructor.
If we will pass null we will get NPE in super class constructor.
- Added throw new IllegalStateException in GlobalEventExecutor.awaitInactivity(long, TimeUnit) if it is called before GlobalEventExecutor.execute(Runnable).
Because currently we would get an NPE; an IllegalStateException is better in this case.
- Fixed the null check in OpenSslServerContext.setTicketKeys(byte[]).
Now we throw an NPE if the byte[] is null.
Result:
Added new null checks when it is necessary, removed unnecessary null checks and fixed some NPE problems.
Modifications:
- Added a static modifier for CompositeByteBuf.Component.
This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Removed unnecessary boxing/unboxing operations in HttpResponseDecoder, RtspResponseDecoder, PerMessageDeflateClientExtensionHandshaker and PerMessageDeflateServerExtensionHandshaker
A boxed primitive is created from a String, just to extract the unboxed primitive value.
- Removed unnecessary 3 times calculations in DiskAttribute.addContent(...).
- Removed unnecessary checks whether the file exists before calling mkdirs() in NativeLibraryLoader and PlatformDependent.
Because the method mkdirs() has this check inside.
- Removed unnecessary `instanceof AsciiString` check in StompSubframeAggregator.contentLength(StompHeadersSubframe) and StompSubframeDecoder.getContentLength(StompHeaders, long).
Because StompHeaders.get(CharSequence) always returns java.lang.String.
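For illustration, a minimal sketch (not Netty's actual code) of why the static modifier matters:

```java
public class Outer {
    // A non-static inner class carries a hidden reference to Outer.this,
    // enlarging every instance and possibly keeping the creator alive.
    class Inner { }

    // A static nested class has no such reference; use it whenever the
    // enclosing instance is not needed, as done for CompositeByteBuf.Component.
    static class Nested { }
}
```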
Motivation:
When we receive an incomplete WebSocketFrame we need to make sure to wait for more data. Because we did not do this, we could produce an NPE.
Modification:
Make sure we do not try to add null into the RecyclableArrayList.
Result:
No more NPE on incomplete frames.
Motivation:
HTTP header validation can be expensive, so we should allow disabling it like we do in HttpObjectDecoder.
Modification:
Add a constructor argument to disable validation.
Result:
Performance improvement
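For illustration, a sketch of the caller-side effect, assuming the usual Netty pattern of a boolean 'validate' constructor argument as on DefaultHttpHeaders:

```java
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.HttpHeaders;

// Validation is on by default; pass false when the application already
// guarantees well-formed header names and values.
HttpHeaders validating = new DefaultHttpHeaders();
HttpHeaders unchecked = new DefaultHttpHeaders(false);
```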
Motivation:
HttpObjectAggregator currently creates a new FullHttpResponse / FullHttpRequest for each message it needs to aggregate. While doing so it also creates 2 DefaultHttpHeaders instances (one for the headers and one for the trailing headers). This is bad for two reasons:
- More objects are created than needed, and populating the headers is not free
- Headers may get validated even if validation was disabled in the decoder
Modification:
- Wrap the previously created HttpResponse / HttpRequest and so reuse the original HttpHeaders
- Reuse the previously created trailing HttpHeaders.
- Fix a bug where the trailing HttpHeaders were incorrectly mixed into the headers.
Result:
- Less GC
- Faster HttpObjectAggregator implementation
Motivation:
HttpOrSpdyChooser can be simplified so the user does not need to implement the getProtocol(...) method.
Modification:
Add a default implementation for the method. The user can override it if necessary.
Result:
Easier usage of HttpOrSpdyChooser.
Motivation:
The DecodeResult is dropped when aggregating HTTP messages.
Modification:
Make sure we do not drop the DecodeResult while aggregating HTTP messages.
Result:
Correctly include the DecodeResult for later processing.
Motivation:
Due to an integer overflow bug, writes of FileRegions with a length greater than Integer.MAX_VALUE to an HTTP server pipeline (e.g. the one from the HttpStaticFileServer example) are ignored in half of the cases (i.e. no data gets sent to the client).
Modification:
Correctly handle chunk sizes > Integer.MAX_VALUE.
Result:
Be able to use a FileRegion > Integer.MAX_VALUE when using chunked encoding.
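A self-contained sketch of this class of bug (illustrative, not the actual Netty code): narrowing a long length to int flips the sign for values above Integer.MAX_VALUE, so a later "data left?" comparison fails for about half of the possible lengths.

```java
public class OverflowDemo {
    public static void main(String[] args) {
        long length = 3_000_000_000L;      // a FileRegion larger than Integer.MAX_VALUE
        int truncated = (int) length;      // -1294967296: the high bits are lost
        System.out.println(truncated > 0); // false -> the write would be skipped
    }
}
```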
Motivation:
Pursuit of consistency in method naming
Modifications:
- Remove the 'get' prefix from all HTTP/SPDY message classes
- Fix some inspector warnings
Result:
Consistency
Motivation:
When Netty runs in a managed environment such as web application server,
Netty needs to provide an explicit way to remove the thread-local
variables it created to prevent class loader leaks.
FastThreadLocal uses different execution paths for storing a
thread-local variable depending on the type of the current thread.
It increases the complexity of thread-local removal.
Modifications:
- Moved FastThreadLocal and FastThreadLocalThread out of the internal
package so that a user can use it.
- FastThreadLocal now keeps track of all thread local variables it has
initialized, and calling FastThreadLocal.removeAll() will remove all
thread-local variables of the caller thread.
- Added FastThreadLocal.size() for diagnostics and tests
- Introduced InternalThreadLocalMap, which is a mixture of hard-wired
thread-local variable fields and extensible indexed variables
- FastThreadLocal now uses InternalThreadLocalMap to implement a
thread-local variable.
- Added ThreadDeathWatcher.unwatch() so that PooledByteBufAllocator
tells it to stop watching when its thread-local cache has been freed
by FastThreadLocal.removeAll().
- Added FastThreadLocalTest to ensure that removeAll() works
- Added microbenchmark for FastThreadLocal and JDK ThreadLocal
- Upgraded to JMH 0.9
Result:
- A user can remove all thread-local variables Netty created, as long as
he or she did not exit from the current thread. (Note that there's no
way to remove a thread-local variable from outside of the thread.)
- FastThreadLocal exposes more useful operations such as isSet() because
we always implement a thread local variable via InternalThreadLocalMap
instead of falling back to JDK ThreadLocal.
- FastThreadLocalBenchmark shows that this change improves the
performance of FastThreadLocal even more.
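A short usage sketch of the new API (the thread-local's contents are illustrative):

```java
import io.netty.util.concurrent.FastThreadLocal;

public class CleanupSketch {
    private static final FastThreadLocal<StringBuilder> BUF =
            new FastThreadLocal<StringBuilder>() {
                @Override
                protected StringBuilder initialValue() {
                    return new StringBuilder();
                }
            };

    public static void main(String[] args) {
        BUF.get().append("per-thread state");
        // Before a managed environment recycles or discards this thread:
        FastThreadLocal.removeAll(); // removes all thread-locals of the caller thread
    }
}
```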
Motivation:
We have quite a bit of code duplication between HTTP/1, HTTP/2, SPDY,
and STOMP codec, because they all have a notion of 'headers', which is a
multimap of string names and values.
Modifications:
- Add TextHeaders and its default implementation
- Add AsciiString to replace HttpHeaderEntity
- Borrowed some portion from Apache Harmony's java.lang.String.
- Reimplement HttpHeaders, SpdyHeaders, and StompHeaders using
TextHeaders
- Add AsciiHeadersEncoder so that the encoding of a TextHeaders can be reused
(a dedicated encoder is still used for HTTP headers for better performance,
though)
- Remove shortcut methods in SpdyHeaders
- Replace SpdyHeaders.getStatus() with HttpResponseStatus.parseLine()
Result:
- Removed quite a bit of code duplication in the header implementations.
- Slightly better performance thanks to improved header validation and
hash code calculation
Motivation:
Provide a faster ThreadLocal implementation
Modification:
Add a "FastThreadLocal" which uses an EnumMap and a predefined fixed set of possible thread locals (all of the static instances created by netty) that is around 10-20% faster than standard ThreadLocal in my benchmarks (and can be seen having an effect in the direct PooledByteBufAllocator benchmark that uses the DEFAULT ByteBufAllocator which uses this FastThreadLocal, as opposed to normal instantiations that do not, and in the new RecyclableArrayList benchmark);
Result:
Improved performance
Motivation:
According to RFC 2616 section 19, the boundary string may be quoted, but
currently the PostRequestDecoder does not support this while it should.
Modifications:
Once the boundary is found, a check is made to verify whether the boundary
is "quoted", and if so, it is "unquoted".
Note: in subsequent usages of this boundary (as a delimiter), quotes seem
to be no longer allowed according to the same RFC, which is the reason only
the boundary definition is corrected.
Result:
The boundary can now be quoted or unquoted. A JUnit test case
checks it.
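A simplified sketch of the unquoting step (the decoder's real parsing is more involved):

```java
// Per RFC 2616, the boundary parameter of the Content-Type header may
// arrive as a quoted-string:
//   Content-Type: multipart/form-data; boundary="--myBoundary"
// Strip the surrounding quotes before using the value as a delimiter.
String boundary = "\"--myBoundary\"";
if (boundary.length() > 1
        && boundary.charAt(0) == '"'
        && boundary.charAt(boundary.length() - 1) == '"') {
    boundary = boundary.substring(1, boundary.length() - 1);
}
```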
Motivation:
When an attribute ends with an odd number of CR (0x0D) bytes, the decoder
adds an extra CR to the decoded attribute when it should not.
Modifications:
Each time a CR is detected, the next byte is tested for being LF. If it is
not, in a number of places the CR byte was lost when it should not be.
Now, when a CR is detected and the next byte is not LF, the CR byte is
saved, since the position points to the next byte (not LF). When a CR is
detected and no further bytes are available yet, the position is reset to
the position of the CR (since an LF could still follow).
A new JUnit test case is added, using the DECODER and a variable number of
CR bytes in the final attribute (testMultipartCodecWithCRasEndOfAttribute).
Result:
The attribute is now correctly decoded with the right number of CR
ending bytes.
Motivation:
We have different message aggregator implementations for different
protocols, but they are very similar to each other. They all stem
from HttpObjectAggregator. If we provide an abstract class with this
generic message aggregation functionality, we can remove their code
duplication.
Modifications:
- Add MessageAggregator which provides generic message aggregation
- Reimplement all existing aggregators using MessageAggregator
- Add DecoderResultProvider interface and extend it wherever possible so
that MessageAggregator respects the state of the decoded message
Result:
Less code duplication
Motivation:
CORS requests are currently processed, and potentially failed, after the
target ChannelHandler(s) have been invoked. This might not be desired; for
example, an HTTP PUT or POST might already have been performed.
Modifications:
Added a shortCurcuit option to CorsConfig which, when set, will
cause a validation of the HTTP request's 'Origin' header and verify that
it is valid according to the configuration. If found invalid, a 403
"Forbidden" response will be returned and no further processing will
take place.
This is of course no help for non-browser requests, such as those made
with curl, which can set the 'Origin' header to any value.
Result:
Users can now configure if the 'Origin' header should be validated
upfront and have the request rejected before any further processing
takes place.
Motivation:
Because a buffer was not correctly released before nulling out the reference, a memory leak shows up.
Modifications:
Correctly call buffer.release() before nulling out the reference.
Result:
No more leak
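The fix follows the usual reference-counting pattern (sketch; the field name is illustrative):

```java
// Release the buffer *before* dropping the last reference to it;
// nulling out first would orphan the refcount and leak the buffer.
if (buffer != null) {
    buffer.release();
    buffer = null;
}
```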
Motivation:
Currently there's no way to configure maxFramePayloadSize from
WebSocketServerProtocolHandler, which is the most common entry point of a
WebSocket server.
Modifications:
Added another constructor for maxFramePayloadSize.
Result:
We can now configure the max frame size for WebSocket packets in
WebSocketServerProtocolHandler. It also keeps backward compatibility
with the default max size of 65536. (65536 was the hard-coded max size
in previous versions of Netty.)
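A usage sketch of the new constructor (path and limit illustrative; a ChannelPipeline named 'pipeline' is assumed to be in scope):

```java
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;

// Raise the limit from the former hard-coded 65536 to 1 MiB:
pipeline.addLast(new WebSocketServerProtocolHandler(
        "/websocket",  // websocketPath
        null,          // subprotocols
        false,         // allowExtensions
        1024 * 1024)); // max frame payload size
```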
Motivation:
Before, we aggregated the full text in the WebSocket08FrameDecoder just to fill in ContinuationWebSocketFrame.aggregatedText(). The problem was that there was no upper limit, so an OOME was possible if the remote peer sent a TextWebSocketFrame followed by a never-ending stream of ContinuationWebSocketFrames. Furthermore, the aggregation does not really belong in the WebSocket08FrameDecoder, as we provide an extra ChannelHandler for this anyway (WebSocketFrameAggregator).
Modification:
Remove the ContinuationWebSocketFrame.aggregatedText() method and the corresponding constructor. Also refactored WebSocket08FrameDecoder a bit to be more efficient, which is now possible as we do not need to aggregate here.
Result:
No more risk of OOME because of frames.
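Applications that still need aggregated text can add the dedicated handler with an explicit bound (limit illustrative; 'pipeline' assumed in scope):

```java
import io.netty.handler.codec.http.websocketx.WebSocketFrameAggregator;

// Aggregates fragmented frames up to maxContentLength; a misbehaving peer
// can no longer grow the aggregation buffer without bound.
pipeline.addLast(new WebSocketFrameAggregator(65536));
```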
Motivation:
When CORS has been configured to allow "*" origin, and at the same time
is allowing credentials/cookies, this causes an error from the browser
because when the response 'Access-Control-Allow-Credentials' header
is true, the 'Access-Control-Allow-Origin' must be an actual origin.
Modifications:
Changed CorsHandler setOrigin method to check for the combination of "*"
origin and allowCredentials, and if the check matches echo the CORS
request's 'Origin' value.
Result:
This addition enables the echoing of the request 'Origin' value as the
'Access-Control-Allow-Origin' value when the server has been configured
to allow any origin in combination with allowCredentials.
This allows client requests to succeed when expecting the server to
be able to handle "*" origin and at the same time be able to send cookies
by setting 'xhr.withCredentials=true'. A concrete example of this is
the SockJS protocol, which expects this behaviour.
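A sketch of the echoing logic ('request', 'response' and 'config' assumed in scope; accessor names to the best of my knowledge, not quoted from the patch):

```java
// Reflect the request's Origin instead of the wildcard when credentials
// are allowed; otherwise the browser rejects the response.
String origin = request.headers().get(HttpHeaders.Names.ORIGIN);
if (config.isCredentialsAllowed()) {
    response.headers().set(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN, origin);
} else {
    response.headers().set(HttpHeaders.Names.ACCESS_CONTROL_ALLOW_ORIGIN, "*");
}
```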
Motivation:
4 and 5 diverged a long time ago, and we recently reverted some of the
early commits in master. We must make sure 4.1 and master are not very
different now.
Modification:
Fix found differences
Result:
4.1 and master got closer.
Motivation:
4 and 5 diverged a long time ago, and we recently reverted some of the
early commits in master. We must make sure 4.1 and master are not very
different now.
Modification:
Fix found differences
Result:
4.1 and master got closer.
Motivation:
Make it clearer what the output of HttpObjectAggregator is and that it needs to come after the encoder in the pipeline.
Modifications:
Change javadocs to make things more clear.
Result:
Better docs
Motivation:
Fix leaks reported while running SpdyFrameDecoderTest.
Modifications:
Make sure the buffers produced by SpdyFrameDecoder and SpdyFrameDecoderTest are released.
Result:
No more leak reports during test runs.
Motivation:
Fix leaks reported while running SpdyFrameDecoderTest.
Modifications:
Make sure the buffer produced by SpdyFrameDecoder is released.
Result:
No more leak reports during test runs.
Motivation:
Fix leaks reported during SPDY test.
Modifications:
Use ReferenceCountUtil.releaseLater(...) to make sure everything is released once the tests are done.
Result:
No more leak reports during test runs.
Motivation:
Currently, the SPDY frame encoding and decoding code is based upon
the ChannelHandler abstraction. This requires maintaining multiple
versions for 3.x and 4.x (and possibly 5.x moving forward).
Modifications:
The SPDY frame encoding and decoding code is separated from the
ChannelHandler and SpdyFrame abstractions. Also test coverage is
improved.
Result:
SpdyFrameCodec now implements the ChannelHandler abstraction and is
responsible for creating and handling SpdyFrame objects.
Motivation:
Currently the CORS support only handles a single origin, or a wildcard
origin. This task enhances Netty's CORS support by allowing multiple
origins to be specified. Only being able to specify a single origin is
particularly limiting when a site supports both http and https, for
example.
Modifications:
- Updated CorsConfig and its Builder to accept multiple origins.
Result:
Users are now able to configure multiple origins for CORS.
[https://github.com/netty/netty/issues/2346]
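A configuration sketch of the extended builder; 'withOrigins' is assumed to be the multi-origin entry point added here, and the origin values and 'pipeline' variable are illustrative:

```java
import io.netty.handler.codec.http.cors.CorsConfig;
import io.netty.handler.codec.http.cors.CorsHandler;

// One config now covers the http and https variants of the same site:
CorsConfig config = CorsConfig
        .withOrigins("http://example.com", "https://example.com")
        .build();
pipeline.addLast(new CorsHandler(config));
```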
Motivation:
Previously, we used URLDecoder.decode(...) to decode url-encoded data. This generates a lot of garbage and takes a considerable amount of time.
Modifications:
Replace URLDecoder.decode(...) with QueryStringDecoder.decodeComponent(...)
Result:
Less garbage to GC and faster decode processing.
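A before/after sketch (input value illustrative):

```java
import io.netty.handler.codec.http.QueryStringDecoder;
import java.net.URLDecoder;

public class DecodeSketch {
    public static void main(String[] args) throws Exception {
        String encoded = "key%3Dvalue%20pair";
        // Before: allocation-heavy
        String slow = URLDecoder.decode(encoded, "UTF-8");
        // After: the same result with far less garbage
        String fast = QueryStringDecoder.decodeComponent(encoded);
        System.out.println(slow.equals(fast)); // true
    }
}
```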
Motivation:
When running the build with Java 8 the following error occurred:
java: reference to preflightResponseHeader is ambiguous
both method
<T>preflightResponseHeader(java.lang.CharSequence,java.lang.Iterable<T>)
in io.netty.handler.codec.http.cors.CorsConfig.Builder and method
<T>preflightResponseHeader(java.lang.String,java.util.concurrent.Callable<T>)
in io.netty.handler.codec.http.cors.CorsConfig.Builder match
The offending class was CorsConfigTest and its shouldThrowIfValueIsNull
which contained the following line:
withOrigin("*").preflightResponseHeader("HeaderName", null).build();
Modifications:
Updated the offending call to supply an explicit type and an Object
array, which avoids the ambiguity.
Result:
After this I was able to build with Java 7 and Java 8
Motivation:
An intermediary like a load balancer might require that a Cross Origin
Resource Sharing (CORS) preflight request have certain headers set.
As a concrete example the Elastic Load Balancer (ELB) requires the
'Date' and 'Content-Length' header to be set or it will fail with a 502
error code.
This work is an enhancement of https://github.com/netty/netty/pull/2290
Modifications:
CorsConfig has been extended to make additional HTTP response headers
configurable for preflight responses. Since some headers, like the
'Date' header, need to be generated each time, m0wfo suggested using a
Callable.
Result:
By default, the 'Date' and 'Content-Length' headers will be sent in a
preflight response. This can be overridden, and users can specify
any headers that might be required by different intermediaries.
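A configuration sketch using the Callable overload quoted above (origin value illustrative):

```java
import io.netty.handler.codec.http.cors.CorsConfig;
import java.util.Date;
import java.util.concurrent.Callable;

// A Callable is evaluated per response, so the 'Date' header stays fresh
// instead of being captured once at configuration time.
CorsConfig config = CorsConfig.withOrigin("http://example.com")
        .preflightResponseHeader("Date", new Callable<Date>() {
            @Override
            public Date call() {
                return new Date();
            }
        })
        .build();
```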
Motivation:
If the last item analyzed in a previously received HttpChunk/HttpContent was part of an attribute's name, the read index was not set to the correct new position, therefore raising an exception in some cases (since the "new" name analyzed is empty, which is not allowed, hence the exception).
What appears there is that the read index should be reset to the last valid position encountered, whatever the case. Currently it was set only when an attribute was not already finished (name is ok, but content possibly not).
Therefore the issue is that elements could be rescanned multiple times (including completed elements), and moreover some bad decoding could occur, such as when in the middle of an attribute's name.
Modifications:
To fix this issue, since "firstpos" contains the last "valid" read index of the decoding (when finding a '&', '=' or CR/LF), we should also set the read index in the following cases:
- 'lastchunk' encountered, therefore finishing the current buffer
- any case other than the current attribute being unfinished (in particular, name not found yet)
So for these 2 cases we add:
undecodedChunk.readerIndex(firstpos);
Result:
Now the decoding is done once, content is added from chunk/content to chunk/content, and the name is decoded correctly even when split across 2 chunks/contents.
A JUnit test, testChunkCorrect, was added; it should not raise any exception.
Motivation:
When an HttpResponseDecoder decodes an invalid chunk, a LastHttpContent instance is produced and the decoder enters the 'BAD_MESSAGE' state, in which it is not supposed to produce any further messages. However, because HttpObjectDecoder.invalidChunk() did not clear this.message out to null, decodeLast() could produce another LastHttpContent message in certain situations.
Modification:
Do not forget to null out HttpObjectDecoder.message in invalidChunk(), and add a test case for it.
Result:
No more consecutive LastHttpContent messages produced by HttpObjectDecoder.
Motivation:
Currently, it is impossible to give a user full control over what to do in response to a request with an 'Expect: 100-continue' header. A user has to do one of the following:
- Accept the request and respond with 100 Continue, or
- Send a reject response and close the connection.
... which means it is impossible to send a reject response and keep the connection alive so that the client can send additional requests.
Modification:
Added a public method called 'reset()' to HttpObjectDecoder so that a user can easily reset the state of the decoder. Once called, the decoder will assume the next input is the beginning of a new request.
HttpObjectAggregator now calls `reset()` right after calling 'handleOversizedMessage()' so that the decoder can continue to decode subsequent requests even after a request with an 'Expect: 100-continue' header is rejected.
Added relevant unit tests / minor clean-up.
Result:
This commit completes the fix of #2211
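A sketch of the pattern this enables ('ctx' and 'decoder' assumed in scope; response construction simplified):

```java
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;

// Reject the oversized request but keep the connection alive; reset()
// makes the decoder treat the next inbound bytes as a fresh request.
ctx.writeAndFlush(new DefaultFullHttpResponse(
        HttpVersion.HTTP_1_1, HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE));
decoder.reset();
```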
- Related: #2163
- Add ResourceLeakHint to allow a user to provide meaningful information about the leak when touching it
- DefaultChannelHandlerContext now implements ResourceLeakHint to tell where the message is going.
- Cleaner resource leak reports by excluding noisy stack trace elements