Motivation:
Slicing a buffer and using this slice to write to the ChannelHandlerContext
decreases the refCnt of the original buffer to 0, while the original buffer
is not yet fully used (not empty).
Modifications:
As suggested in the ticket and verified by a test, the currentBuffer is now
retained when it is sliced, since it will still be used later on.
Add a test case for this issue.
Result:
The currentBuffer still has its correct refCnt of 1 when reaching the last
(non-sliced) write and is therefore released correctly.
The exception no longer occurs.
This fix should be applied to all branches >= 4.0.
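A hedged sketch of the pattern with hypothetical variable names: a slice shares
the reference count of the buffer it was sliced from, so the source buffer is
retained before the slice is handed to a write that will release it.

    // illustrative only: ctx.write(...) releases the slice, and the slice
    // shares its refCnt with currentBuffer, so retain before writing
    ByteBuf chunk = currentBuffer.readSlice(chunkSize).retain();
    ctx.write(chunk);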
When handling an oversized message, HttpObjectAggregator does not wait
until the last chunk is received to produce the failed message, making
AggregatedFullHttpMessage.trailingHeaders() return null.
Related: #3019
Motivation:
We have multiple (Full)HttpRequest/Response implementations and only
some of them implement toString() properly.
Modifications:
- Add a reusable string converter for HttpMessages to HttpMessageUtil
- Implement toString() of the (Full)HttpRequest/Response implementations
properly using HttpMessageUtil
Result:
A prettier string representation is returned by the HttpMessage
implementations.
Motivation:
Even if it is against the HTTP RFC, there are situations where it may be
useful to use characters other than US_ASCII in the headers. We should make
this possible by allowing the user to override how headers are encoded.
Modifications:
- Add an encodeHeaders(...) method so that it can be overridden.
Result:
It is now possible to encode headers with a charset other than US_ASCII by
simply extending the encoder and overriding the encodeHeaders(...) method.
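A minimal sketch of such an override, assuming encodeHeaders(...) receives the
headers and the output buffer; the subclass name is illustrative and the exact
signature of the hook may differ, so check the encoder you extend.

    import java.util.Map;

    import io.netty.buffer.ByteBuf;
    import io.netty.handler.codec.http.HttpHeaders;
    import io.netty.handler.codec.http.HttpRequestEncoder;
    import io.netty.util.CharsetUtil;

    public class Utf8HttpRequestEncoder extends HttpRequestEncoder {
        @Override
        protected void encodeHeaders(HttpHeaders headers, ByteBuf buf) {
            // write "Name: Value\r\n" per header using UTF-8 instead of US_ASCII
            for (Map.Entry<String, String> header : headers) {
                buf.writeBytes(header.getKey().getBytes(CharsetUtil.UTF_8));
                buf.writeBytes(": ".getBytes(CharsetUtil.UTF_8));
                buf.writeBytes(header.getValue().getBytes(CharsetUtil.UTF_8));
                buf.writeBytes("\r\n".getBytes(CharsetUtil.UTF_8));
            }
        }
    }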
Motivation:
Responses to HEAD requests may carry a Content-Length that intentionally does
not match the actual body length, so we only want to set the Content-Length
header if it isn't already set.
Modifications:
Add an if check around setting the Content-Length.
Result:
A HEAD request will now correctly return the specified Content-Length
instead of the body length.
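A minimal sketch of the guard, assuming a FullHttpResponse named response;
only set Content-Length when the user has not set it already (e.g. for the
response to a HEAD request).

    // only set Content-Length if it is not already present
    if (!response.headers().contains(HttpHeaders.Names.CONTENT_LENGTH)) {
        HttpHeaders.setContentLength(response, response.content().readableBytes());
    }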
Motivation:
Without this check, given a URI with path /path and no query string, the
resulting URL will be /path?null=
Modifications:
Check that getRawQuery() does not return null and only append the query if it
is present.
Result:
URLs of the form /path will no longer have ?null= appended.
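An illustrative sketch of the check, assuming a java.net.URI named uri and a
StringBuilder used to build the request URL:

    StringBuilder url = new StringBuilder(uri.getRawPath());
    if (uri.getRawQuery() != null) {
        url.append('?').append(uri.getRawQuery());
    }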
Motivation:
The chunkSize might overflow after the comparison (the size exceeding int
capacity) if the file size is bigger than Integer.MAX_VALUE.
Modifications:
Change it to long.
Result:
The int overflow no longer occurs.
Same fix for 4.1 and Master
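An illustrative sketch with hypothetical names: keeping the chunk-size
computation in long arithmetic avoids the overflow for files larger than
Integer.MAX_VALUE.

    long remaining = fileLength - position;        // may exceed Integer.MAX_VALUE
    long chunkSize = Math.min(remaining, (long) maxChunkSize);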
Motivation:
Found performance issues via FindBugs and PMD.
Modifications:
- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive is created from a string, just to extract the unboxed primitive value.
- Added a static modifier for DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid compiling it each time it is used when we need to replace some part of the authority.
- Improved the usage of StringBuilders.
Result:
Performance improvements.
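Illustrative only (not the actual patch), with a hypothetical CharSequence
seq: parsing straight to a primitive avoids allocating a boxed Integer just to
unbox it again.

    int viaBoxing = Integer.valueOf(seq.toString()).intValue(); // allocates an Integer
    int direct    = Integer.parseInt(seq.toString());           // parses straight to int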
Motivation:
The SPDY/3.1 spec does not adequately describe how to push resources
from the server. This was solidified in the HTTP/2 drafts by dividing
the push into two frames, a PushPromise containing the request,
followed by a Headers frame containing the response.
Modifications:
This commit modifies the SpdyHttpDecoder to support pushed resources
that are divided into multiple frames. The decoder will accept a
pushed SpdySynStreamFrame containing the request headers, followed by
a SpdyHeadersFrame containing the response headers.
Result:
The SpdyHttpDecoder will create an HttpRequest object followed by an
HttpResponse object when receiving pushed resources.
Motivation:
RFC 2616, 4.3 Message Body states that:
All 1xx (informational), 204 (no content), and 304 (not modified) responses MUST NOT include a
message-body. All other responses do include a message-body, although it MAY be of zero length.
Modifications:
HttpContentEncoder was previously modified to cater for HTTP 100 responses. This check is enhanced to
include HTTP 204 and 304 responses.
Result:
Empty response bodies will not be modified to include the compression footer. This footer messed with Chrome's
response parsing leading to "hanging" requests.
Motivation:
HttpObjectDecoder extended ReplayingDecoder, which is slightly slower than ByteToMessageDecoder.
Modifications:
- Changed the super class of HttpObjectDecoder from ReplayingDecoder to ByteToMessageDecoder.
- Rewrote the decode() method of HttpObjectDecoder to use a proper state machine.
- Changed the private methods HeaderParser.parse(ByteBuf), readHeaders(ByteBuf), readTrailingHeaders(ByteBuf) and skipControlCharacters(ByteBuf) to consider the available bytes.
- Made HeaderParser and LineParser static inner classes.
- Replaced the unsafe actualReadableBytes() with buffer.readableBytes().
Result:
Improved performance of HttpObjectDecoder by approximately 177%.
Motivation:
The HttpContentEncoder does not account for an EmptyLastHttpContent being provided as input. This is useful in situations where the client is unable to determine if the current content chunk is the last content chunk (e.g. a proxy forwarding content when the transfer encoding is chunked).
Modifications:
- HttpContentEncoder should not attempt to compress empty HttpContent objects
Result:
HttpContentEncoder supports an EmptyLastHttpContent to terminate the response.
Motivation:
At the moment the whole HTTP header must be parsed at once, which can lead to parsing the same bytes multiple times. We can do better here and allow parsing in multiple steps.
Modifications:
- Do not parse headers multiple times
- Simplify the code
- Eliminate unnecessary String[] creations
- Use readSlice(...).retain() when possible.
Result:
Performance improvements as shown in the included benchmark below.
Before change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    21.55ms   15.10ms  245.02ms   90.26%
    Req/Sec   196.33k    30.17k   297.29k    76.03%
  373954750 requests in 2.00m, 50.15GB read
Requests/sec: 3116466.08
Transfer/sec:    427.98MB
After change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    20.91ms   36.79ms    1.26s    98.24%
    Req/Sec   206.67k    21.69k   243.62k    94.96%
  393071191 requests in 2.00m, 52.71GB read
Requests/sec: 3275971.50
Transfer/sec:    449.89MB
Related: #2983
Motivation:
It is a well known idiom to write an empty buffer and add a listener to
its future to close a channel when the last byte has been written out:
    ChannelFuture f = channel.writeAndFlush(Unpooled.EMPTY_BUFFER);
    f.addListener(ChannelFutureListener.CLOSE);
When HttpObjectEncoder is in the pipeline, this still works, but it
silently raises an IllegalStateException, because HttpObjectEncoder does
not allow writing a ByteBuf when it is expecting an HttpMessage.
Modifications:
- Handle an empty ByteBuf specially in HttpObjectEncoder, so that
writing an empty buffer does not fail even if the pipeline contains an
HttpObjectEncoder
- Add a test
Result:
An exception is not triggered anymore by HttpObjectEncoder, when a user
attempts to write an empty buffer.
Motivation:
Currently, when the CorsHandler processes a preflight request, or
responds with a 403 Forbidden using the short-circuit option, the
HttpRequest is not released, which leads to a buffer leak.
Modifications:
Release the HttpRequest when done processing a preflight request or
responding with a 403.
Result:
Using the CorsHandler will not cause buffer leaks.
Motivation:
4.0 was not modified at the same time as 4.1, although the difference was
limited.
Include the fix for the "=" character in the boundary.
Issue #3004 shows that the "=" character was not supported as it should be by
the HttpPostRequestDecoder in the form-data boundary.
Modifications:
Backport from 4.1 to 4.0 while respecting interfaces.
Add 2 methods in StringUtil:
- split with a max-parts argument: splits a String into at most that many
parts (to prevent multiple '=' characters from causing extra, unneeded splits)
- substringAfter: returns the part of the String after the delimiter (since
the first part is not needed)
Use those methods in HttpPostRequestDecoder.
Change the HttpPostRequestDecoderTest to check using a boundary beginning
with "=".
Result:
Backport done (fixes issue #2886).
Issue #3004 is fixed too.
The fix improves stability and addresses the related issues.
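Illustrative only, using the JDK equivalent of a limited split: capping the
number of parts keeps additional '=' characters inside the value instead of
producing extra parts.

    String[] parts = "boundary==ABC=123".split("=", 2);
    // parts[0] = "boundary", parts[1] = "=ABC=123"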
Motivation:
Websocket clients can request to speak a specific subprotocol. The list of
subprotocols the client understands is sent to the server. The server
should select one of the protocols and reply with it in the websocket
handshake response. The added code verifies that the subprotocol in the
response is valid.
Modifications:
Added verification of the subprotocol received from the server against the
subprotocol(s) that the user requested. If the user requests a subprotocol
but the server responds with none or with a subprotocol that was not
requested, this is an error and the handshake fails with an exception. If
the user requests no subprotocol but the server responds with one, this is
also treated as an error.
Additionally, a getter for the WebSocketClientHandshaker was added to the
WebSocketClientProtocolHandler to enable the user to extract the negotiated
subprotocol.
Result:
The subprotocol field received from a websocket server is now properly
verified on the client side, and websocket connection attempts will now only
succeed if both parties can negotiate a subprotocol.
If the client sends a list of multiple possible subprotocols it can
extract the negotiated subprotocol through the added handshaker getter (WebSocketClientProtocolHandler.handshaker().actualSubprotocol()).
Motivation:
I was not fully sure whether everything works correctly when a websocket client receives the websocket handshake HTTP response and a websocket frame in a single ByteBuf (which can happen when the server sends a response directly or shortly after the connect). In this case some parts of the ByteBuf must be processed by the HTTP decoder and the rest by the websocket decoder.
Modification:
Add a test that verifies that in this scenario the handshake and the message are correctly interpreted and delivered by Netty.
Result:
One more test for Netty.
The test succeeds - No problems
Motivation:
The WebSocketClientProtocolHandshakeHandler never releases the received handshake response.
Modification:
Release the message in a finally block.
Result:
No more leak
Motivation:
The WebSocket08FrameEncoder contains an optimization path for small messages which copies the message content into the header buffer to avoid vectored writes. However, in the current implementation this path is never taken because the target buffer is preallocated with exactly the size of the header.
Modification:
For messages below a certain threshold, allocate the buffer so that the message can be copied into it directly. Thereby the optimized path is taken.
Result:
A speedup of about 25% for 100-byte messages, declining with bigger message sizes. I have currently set the threshold to 1kB, which is a point where I could still see a few percent speedup, but we should also avoid burning too many CPU cycles.
Motivation:
Websocket performance is to a large extent determined by the masking and
unmasking of frames. The current behavior of this in Netty can be
improved.
Modifications:
Perform the XOR operation not bytewise but in int blocks as long as
possible. This reduces the number of necessary operations by a factor of 4.
Also, don't read the writerIndex in each iteration.
Added a unit test for websocket decoding and encoding for verification.
Result:
A large performance gain (up to 50%) in websocket throughput.
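A minimal sketch of the idea (not the actual Netty code), assuming a ByteBuf
named data holding the frame payload and a 4-byte mask array: XOR whole ints
while possible, read the writerIndex only once, and finish the few remaining
bytes bytewise.

    int intMask = ((mask[0] & 0xFF) << 24) | ((mask[1] & 0xFF) << 16)
                | ((mask[2] & 0xFF) << 8)  |  (mask[3] & 0xFF);
    int i = data.readerIndex();
    int end = data.writerIndex();        // read once instead of every iteration
    for (; i + 3 < end; i += 4) {
        data.setInt(i, data.getInt(i) ^ intMask);
    }
    for (int m = 0; i < end; i++, m++) { // at most 3 trailing bytes
        data.setByte(i, data.getByte(i) ^ mask[m]);
    }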
Motivation:
According to the websocket specification, peers may send a close frame when
they detect a protocol violation (with status code 1002). The current
implementation simply closes the connection. This update adds that
functionality. The functionality is optional, but it might help other
implementations with debugging when they receive such a frame.
Modification:
When a protocol violation is detected in the decoder and a close was not
already initiated by the remote peer, a close frame is sent.
Result:
Remote peers that send an invalid frame will now receive a close frame that
indicates the protocol violation instead of only seeing a closed
connection.
Motivation:
Sometimes it is useful to be able to access the uri that was used to initialize the QueryStringDecoder.
Modifications:
Add a method which allows retrieving the uri.
Result:
The uri that was used to create the QueryStringDecoder can now be retrieved.
Motivation:
The HTTP content decoder's cleanup method is not cleaning up the decoder correctly.
The cleanup method currently calls readOutbound on the EmbeddedChannel, but
for decoding the call should be readInbound.
Modifications:
- Change readOutbound to readInbound in the cleanup method
Result:
The cleanup method now correctly releases unused resources.
Motivation:
Currently we do more memory copies than needed.
Modification:
- Directly use heap buffers to reduce memory copy
- Correctly release buffers to fix buffer leak
Result:
Less memory copies and no leaks
Motivation:
The _0XFF_0X00 buffer is not duplicated and is empty after the first usage, preventing the connection close from happening on subsequent close frames.
Modifications:
Correctly duplicate the buffer.
Result:
Multiple CloseWebSocketFrames are handled correctly.
In Netty 3, downstream writes of SPDY data frames and upstream reads of
SPDY window update frames occur on different threads.
When receiving a window update frame, we synchronize on a java object
(SpdySessionHandler::flowControlLock) while sending any pending writes
that are now able to complete.
When writing a data frame, we check the send window size to see if we
are allowed to write it to the socket, or if we have to enqueue it as a
pending write. To prevent races with the window update frame, this is
also synchronized on the same SpdySessionHandler::flowControlLock.
In Netty 4, upstream and downstream operations on any given channel now
occur on the same thread. Since java locks are re-entrant, this now
allows downstream writes to occur while processing window update frames.
In particular, when we receive a window update frame that unblocks a
pending write, this write completes which triggers an event notification
on the response, which in turn triggers a write of a data frame. Since
this is on the same thread it re-enters the lock and modifies the send
window. When the write completes, we continue processing pending writes
without knowledge that the window size has been decremented.
Related issue: #2743
Motivation:
When there is more than one stream with the same priority, the set
returned by SpdySession.getActiveStreams() will not include all of them,
because it uses a TreeSet and only compares the priority of streams. If
two different streams have the same priority, one of them will be
discarded by the TreeSet.
Modification:
- Rename getActiveStreams() to activeStreams()
- Replace PriorityComparator with StreamComparator
Result:
Two different streams with the same priority are compared correctly.
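An illustrative, self-contained example with a hypothetical Stream type (not
the SPDY classes): a TreeSet treats elements that compare as equal as
duplicates, so a comparator that looks only at the priority silently drops one
of two different streams sharing a priority, while breaking ties by stream id
keeps both.

    import java.util.Comparator;
    import java.util.TreeSet;

    public class TreeSetTieBreakExample {
        static final class Stream {
            final int id, priority;
            Stream(int id, int priority) { this.id = id; this.priority = priority; }
        }

        public static void main(String[] args) {
            Comparator<Stream> priorityOnly = new Comparator<Stream>() {
                public int compare(Stream a, Stream b) {
                    return Integer.compare(a.priority, b.priority);
                }
            };
            Comparator<Stream> priorityThenId = new Comparator<Stream>() {
                public int compare(Stream a, Stream b) {
                    int c = Integer.compare(a.priority, b.priority);
                    return c != 0 ? c : Integer.compare(a.id, b.id);
                }
            };
            TreeSet<Stream> broken = new TreeSet<Stream>(priorityOnly);
            TreeSet<Stream> fixed = new TreeSet<Stream>(priorityThenId);
            Stream s1 = new Stream(1, 0);
            Stream s3 = new Stream(3, 0); // different stream, same priority
            broken.add(s1); broken.add(s3);
            fixed.add(s1);  fixed.add(s3);
            System.out.println(broken.size()); // 1 -- the second stream was dropped
            System.out.println(fixed.size());  // 2
        }
    }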
Motivation:
If the request contains uri parameters but no path, the HttpRequestEncoder produces an invalid uri while trying to add the missing path.
Modifications:
Correctly handle the case of a uri with parameters but no path.
Result:
HttpRequestEncoder produces a correct uri in all cases.
Motivation:
Netty currently has a few problems with null values.
Modifications:
- Check File in DiskFileUpload.toString().
If the File is null we will get an NPE when calling the toString() method.
- Check Result<String> in MqttDecoder.decodeConnectionPayload(...).
- Check Unsafe before calling unsafe.getClass() in PlatformDependent0 static block.
- Removed unnecessary null check in WebSocket08FrameEncoder.encode(...).
Because msg.content() cannot return null.
- Removed unnecessary null checks in ConcurrentHashMapV8.removeTreeNode(TreeNode<K,V>).
- Removed unnecessary null check in OioDatagramChannel.doReadMessages(List<Object>).
Because tmpPacket.getSocketAddress() always returns a new SocketAddress instance.
- Removed unnecessary null check in OioServerSocketChannel.doReadMessages(List<Object>).
Because socket.accept() always returns a new Socket instance.
- Pass Unpooled.buffer(0) instead of null inside the CloseWebSocketFrame(boolean, int) constructor.
If we pass null we will get an NPE in the super class constructor.
- Added throwing an IllegalStateException in GlobalEventExecutor.awaitInactivity(long, TimeUnit) if it is called before GlobalEventExecutor.execute(Runnable).
Because currently we get an NPE; an IllegalStateException is better in this case.
- Fixed the null check in OpenSslServerContext.setTicketKeys(byte[]).
Now we throw an NPE if the byte[] is null.
Result:
Added new null checks where necessary, removed unnecessary null checks and fixed some NPE problems.
Modifications:
- Added a static modifier for CompositeByteBuf.Component.
This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Removed an unnecessary boxing/unboxing operation where a boxed primitive was created from a String just to extract the unboxed primitive value.
- Removed unnecessary checks for whether a file exists before calling mkdirs() in NativeLibraryLoader and PlatformDependent.
Because the method mkdirs() performs this check internally.
Conflicts:
codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskAttribute.java
codec-stomp/src/main/java/io/netty/handler/codec/stomp/StompSubframeAggregator.java
codec-stomp/src/main/java/io/netty/handler/codec/stomp/StompSubframeDecoder.java
Motivation:
When we receive an incomplete WebSocketFrame we need to make sure to wait for more data. Because we did not do this, we could produce an NPE.
Modification:
Make sure we do not try to add null to the RecyclableArrayList.
Result:
No more NPE on incomplete frames.
Motivation:
Currently we do 4 ByteBuf.writeBytes(...) calls per header line. This can be improved.
Modification:
Introduce two new HttpHeaders methods that allow creating an HttpHeaderEntity which contains the separator. With this we can reduce it to 2 ByteBuf.writeBytes(...) calls per header line.
Result:
Performance improvement.
Motivation:
HTTP header validation can be expensive, so we should allow disabling it like we do in HttpObjectDecoder.
Modification:
Add constructor argument to disable validation.
Result:
Performance improvement
Motivation:
HttpObjectAggregator currently creates a new FullHttpResponse / FullHttpRequest for each message it needs to aggregate. While doing so it also creates 2 DefaultHttpHeaders instances (one for the headers and one for the trailing headers). This is bad for two reasons:
- More objects are created than needed, and populating the headers is not free
- Headers may get validated even if validation was disabled in the decoder
Modification:
- Wrap the previously created HttpResponse / HttpRequest and so reuse the original HttpHeaders
- Reuse the previously created trailing HttpHeaders.
- Fix a bug where the trailing HttpHeaders were incorrectly mixed into the headers.
Result:
- Less GC
- Faster HttpObjectAggregator implementation
Motivation:
HttpOrSpdyChooser can be simplified so the user does not need to implement the getProtocol(...) method.
Modification:
Add a default implementation of the method. The user can override it if necessary.
Result:
Easier usage of HttpOrSpdyChooser.
Motivation:
The DecodeResult is dropped when aggregating HTTP messages.
Modification:
Make sure we do not drop the DecodeResult while aggregating HTTP messages.
Result:
Correctly include the DecodeResult for later processing.
Revert the removal of 'get' prefix from HTTP classes to ensure ABI
compatibility. Note that this commit does not revert the changes in
SPDY, which is considered experimental.
Motivation:
Due to an integer overflow bug, writes of FileRegions with a length greater than Integer.MAX_VALUE to an http server pipeline (e.g. the one from the HttpStaticFileServer example) are ignored in half of the cases (i.e. no data gets sent to the client).
Modification:
Correctly handle chunk sizes > Integer.MAX_VALUE.
Result:
Be able to use a FileRegion > Integer.MAX_VALUE when using chunked encoding.
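An illustrative sketch of the failure mode (hypothetical values): narrowing a
long length to int overflows, so a sign or size check can see a negative value
and skip the write entirely.

    long remaining = 3L * 1024 * 1024 * 1024;   // 3 GiB left to transfer
    int truncated = (int) remaining;            // overflows to -1073741824
    System.out.println(truncated > 0);          // false: a "> 0" check skips the write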
Motivation:
Pursuit of consistency in method naming.
Modifications:
- Remove the 'get' prefix from all HTTP/SPDY message classes
- Fix some inspector warnings
Result:
Consistency
Fixes #2594
Motivation:
When Netty runs in a managed environment such as web application server,
Netty needs to provide an explicit way to remove the thread-local
variables it created to prevent class loader leaks.
FastThreadLocal uses different execution paths for storing a
thread-local variable depending on the type of the current thread.
It increases the complexity of thread-local removal.
Modifications:
- Moved FastThreadLocal and FastThreadLocalThread out of the internal
package so that a user can use it.
- FastThreadLocal now keeps track of all thread local variables it has
initialized, and calling FastThreadLocal.removeAll() will remove all
thread-local variables of the caller thread.
- Added FastThreadLocal.size() for diagnostics and tests
- Introduce InternalThreadLocalMap which is a mixture of hard-wired
thread local variable fields and extensible indexed variables
- FastThreadLocal now uses InternalThreadLocalMap to implement a
thread-local variable.
- Added ThreadDeathWatcher.unwatch() so that PooledByteBufAllocator
tells it to stop watching when its thread-local cache has been freed
by FastThreadLocal.removeAll().
- Added FastThreadLocalTest to ensure that removeAll() works
- Added microbenchmark for FastThreadLocal and JDK ThreadLocal
- Upgraded to JMH 0.9
Result:
- A user can remove all thread-local variables Netty created, as long as
he or she did not exit from the current thread. (Note that there's no
way to remove a thread-local variable from outside of the thread.)
- FastThreadLocal exposes more useful operations such as isSet() because
we always implement a thread local variable via InternalThreadLocalMap
instead of falling back to JDK ThreadLocal.
- FastThreadLocalBenchmark shows that this change improves the
performance of FastThreadLocal even more.
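A minimal usage sketch (illustrative variable names): a container that manages
its own threads can call FastThreadLocal.removeAll() on a Netty-using thread
before retiring it, so the thread-local state does not keep the application
class loader alive.

    import io.netty.util.concurrent.FastThreadLocal;

    public class RemoveAllExample {
        private static final FastThreadLocal<StringBuilder> BUF =
                new FastThreadLocal<StringBuilder>() {
                    @Override
                    protected StringBuilder initialValue() {
                        return new StringBuilder(128);
                    }
                };

        public static void main(String[] args) {
            BUF.get().append("hello");   // initializes the thread-local for this thread
            // ... later, when this thread is done with Netty:
            FastThreadLocal.removeAll(); // removes every FastThreadLocal of this thread
        }
    }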
Motivation:
Allow making use of our new FastThreadLocal wherever possible
Modification:
Make use of an array to store FastThreadLocals and so also allow using them in PooledByteBufAllocator, which is instantiated by users.
The maximum size of the array is configurable via a system property to allow tuning it if needed. By default we use 64 entries, which should be good enough.
Result:
More flexible usage of FastThreadLocal
Motivation:
According to RFC 2616 section 19, the boundary string may be quoted, but
currently the PostRequestDecoder does not support this while it should.
Modifications:
Once the boundary is found, a check is made to verify whether the boundary
is "quoted", and if so, it is "unquoted".
Note: in the following usage of this boundary (as a delimiter), quotes no
longer seem to be allowed according to the same RFC, which is the reason that
only the boundary definition is corrected.
Result:
The boundary can now be either quoted or unquoted. A JUnit test case
checks it.
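An illustrative sketch of the unquoting step (hypothetical value, not the
actual patch): strip a surrounding pair of double quotes from the boundary
parsed out of the Content-Type header.

    String boundary = "\"simpleboundary\"";   // hypothetical parsed value
    if (boundary.length() >= 2 && boundary.charAt(0) == '"'
            && boundary.charAt(boundary.length() - 1) == '"') {
        boundary = boundary.substring(1, boundary.length() - 1);
    }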
Motivation:
When an attribute ends with an odd number of CR (0x0D) bytes, the decoder
adds an extra CR to the decoded attribute, which it should not.
Modifications:
Each time a CR is detected, the next byte is tested to see whether it is an
LF. If it is not, in a number of places the CR byte was lost while it should
not be. Now, when a CR is detected and the next byte is not an LF, the CR
byte is kept, since the position points to the next byte (not an LF). When a
CR is detected and no further bytes are available yet, the position is reset
to the position of the CR (since an LF could still follow).
A new JUnit test case is added, using the DECODER and a variable number of
CR bytes in the final attribute (testMultipartCodecWithCRasEndOfAttribute).
Result:
The attribute is now correctly decoded with the right number of CR
ending bytes.