Related: #3173
Motivation:
DnsNameResolver was using InetSocketAddress.getHostString(), which is
only available since Java 7.
Modifications:
Use InetSocketAddress.getHostName() in lieu of getHostString() when the
current Java version is less than 7.
Result:
DnsNameResolver runs fine on Java 6.
Related: #3132
Motivation:
Changing the type of the string properties of HttpVersion and
HttpResponseStatus to AsciiString will give us a performance advantage
when encoding them into the wire.
Modifications:
- Change the type of the following properties to AsciiString:
- HttpVersion.protocolName()
- HttpVersion.text()
- HttpResponseStatus.reasonPhrase()
- Inline their respective encode() methods because they are used only in
the encoders.
- Fix the test failures incurred by the changes above
Result:
Getting close to the machine
Related: #3132
Motivation:
Changing the type of HttpMethod.name() to AsciiString gives us a
performance advantage when encoding it into the wire.
Modifications:
- Change the type of HttpMethod.name() to AsciiString
- Inline HttpMethod.encode() because it's used only in a single place
and it's trivial.
Result:
Getting close to the machine
Related: #3166
Motivation:
When a recyclable object created on one thread is returned from another
thread, it is stored in a WeakOrderedQueue.
The objects stored in the WeakOrderedQueue are added back to the stack by
WeakOrderedQueue.transfer() when the owner thread runs out of recyclable
objects.
However, WeakOrderedQueue.transfer() does not have any mechanism that
prevents the stack from growing beyond its maximum capacity.
Modifications:
- Make WeakOrderedQueue.transfer() increase the capacity of the stack
only up to its maximum
- Add tests for the cases where the recyclable object is returned from a
non-owner thread
- Fix a bug where Stack.scavengeSome() does not scavenge any objects the
first time the stack runs out of objects and its cursor is still null.
- Overall clean-up of scavengeSome() and transfer()
Result:
The capacity of Stack never increases beyond its maximum.
Motivation:
io.netty.util.internal.PlatformDependent.isRoot() depends on the IS_ROOT field, which is filled in during class initialization. This spawns processes and consumes resources that are not generally necessary for the class to function.
Modifications:
This switches the class to lazily initialize this field inside the isRoot() method using double-checked locking (http://en.wikipedia.org/wiki/Double-checked_locking).
Result:
The first call to isRoot() will be slightly slower; in exchange, class loading is faster, uses fewer resources, and platform errors are avoided unless the check is actually needed.
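For reference, a minimal sketch of the lazy-initialization pattern described above. The class name and the detection routine are illustrative stand-ins, not the actual PlatformDependent internals:

    // Illustrative double-checked locking sketch; the detection logic is a
    // placeholder, not the real PlatformDependent code.
    public final class LazyRootCheck {

        // volatile is required for double-checked locking to be safe
        private static volatile Boolean isRoot;

        public static boolean isRoot() {
            Boolean root = isRoot;
            if (root == null) {                       // first, unsynchronized check
                synchronized (LazyRootCheck.class) {
                    root = isRoot;
                    if (root == null) {               // second, synchronized check
                        root = detectRoot();          // expensive: may spawn processes
                        isRoot = root;
                    }
                }
            }
            return root;
        }

        // Placeholder for the platform-specific detection logic.
        private static boolean detectRoot() {
            return "root".equals(System.getProperty("user.name"));
        }

        private LazyRootCheck() { }
    }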
Motivation:
The example MemcacheClient set command doesn't work.
Modifications:
Fill the extras field buffer with zeros so that it gets written to the
request payload.
Result:
The example MemcacheClient set command works.
Related: #3156
Motivation:
Let's say we have a channel with the following pipeline configuration:
HEAD --> [E1] H1 --> [E2] H2 --> TAIL
when the channel is deregistered, the channelUnregistered() methods of
H1 and H2 will be invoked from the executor thread of E1 and E2
respectively. To ensure that the channelUnregistered() methods are
invoked from the correct thread, new one-time tasks will be created
accordingly and be scheduled via Executor.execute(Runnable).
As soon as the one-time tasks are scheduled,
DefaultChannelPipeline.fireChannelUnregistered() will start to remove
all handlers from the pipeline via teardownAll(). This process is
performed in reverse order of event propagation, i.e. H2 is removed
first, and then H1.
If the channelUnregistered() event has been passed to H2 before H2 is
removed, a user does not see any problem.
If H2 has been removed before the channelUnregistered() event is passed
to it, a user will often see the following confusing warning message:
An exceptionCaught() event was fired, and it reached at the tail of
the pipeline. It usually means the last handler in the pipeline did
not handle the exception.
Modifications:
To ensure that the handlers are removed *after* all events are
propagated, traverse the pipeline in ascending order before performing
the actual removal.
Result:
A user does not get the confusing warning message anymore.
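For reference, a minimal sketch of the pipeline layout described above, with H1 and H2 bound to separate executor groups. The handler bodies are illustrative, not the handlers from the original report:

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelPipeline;
    import io.netty.util.concurrent.DefaultEventExecutorGroup;
    import io.netty.util.concurrent.EventExecutorGroup;

    public final class PipelineSetup {

        // HEAD --> [E1] H1 --> [E2] H2 --> TAIL
        static void configure(Channel channel) {
            EventExecutorGroup e1 = new DefaultEventExecutorGroup(1);
            EventExecutorGroup e2 = new DefaultEventExecutorGroup(1);

            ChannelPipeline p = channel.pipeline();
            p.addLast(e1, "H1", new ChannelInboundHandlerAdapter() {
                @Override
                public void channelUnregistered(ChannelHandlerContext ctx) {
                    // invoked from an E1 thread
                    ctx.fireChannelUnregistered();
                }
            });
            p.addLast(e2, "H2", new ChannelInboundHandlerAdapter() {
                @Override
                public void channelUnregistered(ChannelHandlerContext ctx) {
                    // invoked from an E2 thread; H2 must still be in the pipeline
                    // when this event arrives, or it falls through to TAIL
                    ctx.fireChannelUnregistered();
                }
            });
        }

        private PipelineSetup() { }
    }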
Motivation:
The inbound flow control code was returning too many bytes to the connection window. This was resulting in GO_AWAYs being generated by peers with an error code indicating a flow control issue. Bytes were being returned to the connection window before the call to returnProcessedBytes, and not all of the state representing the connection window was updated when a local settings event occurred.
Modifications:
The DefaultHttp2InboundFlowController will be updated to correct the above defects.
The unit tests will be updated to reflect the changes.
Result:
The inbound flow control algorithm does not cause peers to send flow control errors for the above-mentioned cases.
- Parameterize DomainNameMapping to make it useful for use cases other
than mapping to SslContext (see the sketch below)
- Move DomainNameMapping to io.netty.util
- Clean up the API documentation
- Make SniHandler.hostname and sslContext volatile because they can be
accessed by non-I/O threads
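A short usage sketch of the parameterized DomainNameMapping described above, assuming the constructor takes a default value and that add() supports wildcard hostnames and chaining; the hostnames and contexts are placeholders:

    import io.netty.handler.ssl.SslContext;
    import io.netty.util.DomainNameMapping;

    final class SniMappingExample {

        // The SslContext arguments are assumed to have been built elsewhere.
        static DomainNameMapping<SslContext> buildMapping(SslContext fallback,
                                                          SslContext wildcard,
                                                          SslContext api) {
            return new DomainNameMapping<SslContext>(fallback) // used for unknown names
                    .add("*.example.com", wildcard)            // wildcard match
                    .add("api.example.org", api);              // exact match
        }

        private SniMappingExample() { }
    }

With such a mapping, map("shop.example.com") would resolve to the wildcard context, and any unknown hostname would fall back to the default.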
Motivation:
We use 3 (!) libraries to build mock objects: EasyMock, Mockito, and jMock.
Mockito and jMock pull in different versions of Hamcrest, which conflict
with the version pulled in by jUnit.
Modifications:
- Replace mockito-all with mockito-core to avoid pulling in outdated
jUnit and Hamcrest
- Exclude junit-dep when pulling in jmock-junit4, because it pulls in an
outdated Hamcrest version
- Pull in the hamcrest-library version used by jUnit explicitly
Result:
No more dependency hell that results in NoSuchMethodError during the
tests
Motivation:
When we need to host multiple server names on a single IP address, the
server has to support the Server Name Indication (SNI) extension to serve
clients with the proper certificate. The SniHandler therefore hosts
multiple SslContexts and appends an SslHandler for the requested hostname.
Modification:
* Added SniHandler to host multiple certificates on a single server
* Added a test case
Result:
Users can use SniHandler to host multiple certificates at a time.
It is server-side only.
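A hedged sketch of how the handler might be wired into a server pipeline, assuming SniHandler accepts a DomainNameMapping of SslContexts; the initializer is illustrative, and the DomainNameMapping import reflects the package after the move described earlier in this log:

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.ssl.SniHandler;
    import io.netty.handler.ssl.SslContext;
    import io.netty.util.DomainNameMapping;

    final class SniServerInitializer extends ChannelInitializer<SocketChannel> {

        private final DomainNameMapping<SslContext> mapping;

        SniServerInitializer(DomainNameMapping<SslContext> mapping) {
            this.mapping = mapping;
        }

        @Override
        protected void initChannel(SocketChannel ch) {
            // SniHandler inspects the ClientHello, picks the SslContext for the
            // requested hostname and replaces itself with the matching SslHandler.
            ch.pipeline().addFirst(new SniHandler(mapping));
            // ... application handlers follow ...
        }
    }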
Motivation:
8fbc513 introduced stray warnings in callsites of
PromiseAggregator#add and PromiseNotifier#(...).
Modifications:
This commit adds the @SafeVarargs annotation to PromiseAggregator#add
and PromiseNotifier#(...). As Netty is built with JDK7, this is a
recognized annotation and should not affect runtime VM versions 1.5 and
1.6.
Result:
Building Netty with JDK7 will no longer produce warnings in the
callsites mentioned above.
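For illustration, the kind of declaration the annotation applies to; this is a generic JDK-only sketch, not Netty's PromiseAggregator or PromiseNotifier code:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    final class SafeVarargsExample {

        // Without @SafeVarargs, every call site passing generic varargs such as
        // List<String>... emits an "unchecked generic array creation" warning.
        // The annotation is permitted on static and final methods and asserts
        // that the method does not misuse the implicitly created array.
        @SafeVarargs
        static <T> List<T> concat(List<T>... lists) {
            List<T> result = new ArrayList<T>();
            for (List<T> list : lists) {
                result.addAll(list);
            }
            return result;
        }

        public static void main(String[] args) {
            // No warning at this call site thanks to @SafeVarargs on concat()
            System.out.println(concat(Arrays.asList("a"), Arrays.asList("b", "c")));
        }
    }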
Related: #3149
Motivation:
DnsQueryContext, using the DatagramChannel bound in DnsNameResolver,
blindly writes to the channel without checking the bind future for
success.
Modifications:
Check the bindFuture before writing a DNS query to a DatagramChannel
Result:
The DNS query is written only after the channel has been bound successfully.
Motivation:
AbstractUnsafe considers two possibilities during channel registration. First,
the channel may be an outgoing connection, in which case it will be registered
before becoming active. Second, the channel may be an incoming connection,
in which case the channel will already be active when it is registered. To handle
the second case, AbstractUnsafe checks if the channel is active after
registration and calls ChannelPipeline.fireChannelActive() if so. However, if
an active channel is deregistered and then re-registered this logic causes a
second fireChannelActive() to be invoked. This is unexpected; it is reasonable
for handlers to assume that this method will only be invoked once per channel.
Modifications:
This change introduces a flag into AbstractUnsafe to recognize if this is the
first or a subsequent registration. ChannelPipeline.fireChannelActive() is
invoked only for the first registration.
Result:
ChannelPipeline.fireChannelActive() is only called once.
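A simplified sketch of the guard described above; field and method names are illustrative, not the exact AbstractUnsafe code:

    // Stripped-down shape of the registration guard.
    final class RegistrationGuard {

        private boolean neverRegistered = true;
        private boolean active;                 // stands in for Channel.isActive()

        void register() {
            boolean firstRegistration = neverRegistered;
            // ... perform the actual (re-)registration with the event loop ...
            neverRegistered = false;

            fireChannelRegistered();
            if (firstRegistration && active) {
                // fire channelActive only once, on the very first registration
                fireChannelActive();
            }
        }

        private void fireChannelRegistered() { /* propagate through the pipeline */ }
        private void fireChannelActive()     { /* propagate through the pipeline */ }
    }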
Motivation:
JdkSslContext used SSL_RSA_WITH_DES_CBC_SHA in its cipher suite list.
OpenSslServerContext used DES-CBC3-SHA in the same place in its cipher suite
list, which is equivalent to SSL_RSA_WITH_3DES_EDE_CBC_SHA.
This means the lists were out of sync. Furthermore, using
SSL_RSA_WITH_DES_CBC_SHA is not desirable as it uses DES, a weak cipher. Triple
DES should be used instead.
Modifications:
Replace SSL_RSA_WITH_DES_CBC_SHA with SSL_RSA_WITH_3DES_EDE_CBC_SHA in
JdkSslContext.
Result:
The JdkSslContext and OpenSslServerContext cipher suite lists are now in sync.
Triple DES is used instead of DES, which is stronger.
Motivation:
RC4 is no longer a recommended cipher suite, as recent research reveals,
for example:
- http://www.isg.rhul.ac.uk/tls/
Modifications:
- Remove most RC4 cipher suites from the default cipher suites
- For backward compatibility, leave RC4-SHA, while de-prioritizing it
Result:
Potentially safer default
Motivation:
The Http2SecurityUtil class lists a few ciphers that are explicitly prohibited by the HTTP/2 specification because of their characteristics.
Modifications:
Remove the ciphers that are prohibited.
Results:
Cipher suite used for HTTP/2 codec is compatible with HTTP/2 spec.
Motivation:
DefaultHttp2FrameWriter creates an exception but is missing the throw keyword.
Modifications:
Insert the missing throw keyword.
Result:
Exception thrown when it was intended to be thrown.
Motivation:
The HttpClientUpgradeHandler attempts to compare a supported codec against the input codec but the comparison logic is reversed.
Modification:
Negate the logic in HttpClientUpgradeHandler so an error is detected in the error condition.
Result:
HttpClientUpgradeHandler no longer fails the upgrade when the input protocol is valid.
Motivation:
Although the new IntObjectMap.values() that returns a Collection is
useful, the removed values(Class<V>) that returns an array is also
useful. It is also good for backward compatibility.
Modifications:
- Add IntObjectMap.values(Class<V>) back
- Miscellaneous improvements
  - Cache the collection returned by IntObjectHashMap.values()
  - Fix inspector warnings
- Update the IntObjectHashMapTest to test both values()
Result:
- Backward compatibility
- Potential performance improvement of values()
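A brief usage sketch of the two accessors, assuming the signatures shown below (with the array overload taking the component class); the map contents are placeholders:

    import io.netty.util.collection.IntObjectHashMap;
    import io.netty.util.collection.IntObjectMap;
    import java.util.Collection;

    final class ValuesExample {

        static void demo() {
            IntObjectMap<String> map = new IntObjectHashMap<String>();
            map.put(200, "OK");
            map.put(404, "Not Found");

            // Collection view (cached by IntObjectHashMap after this change)
            Collection<String> asCollection = map.values();

            // Restored array-returning overload for backward compatibility
            String[] asArray = map.values(String.class);

            System.out.println(asCollection.size() + " == " + asArray.length);
        }

        private ValuesExample() { }
    }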
Motivation:
The DefaultHttp2InboundFlowController uses processedBytes to determine
when to send the WINDOW_UPDATE, but uses window to determine the delta
to send in the request. This is incorrect since we shouldn't be
requesting bytes that haven't been processed.
Modifications:
Changed DefaultHttp2InboundFlowController to use processedBytes in the
calculation of the delta to send in the WINDOW_UPDATE request.
Result:
Inbound flow control only asks for bytes that have been processed in
WINDOW_UPDATE.
Motivation:
AbstractChannel.close() completes the closeFuture and then proceeds to
schedule a task to fireChannelInactive and unregister the channel. If
the user shuts down the EventLoop as a result of the closeFuture
completing this can result in a RejectedExecutionException when the
unregister task is scheduled. This is mostly noise, but it does
somewhat pollute the logs.
Modifications:
Changed AbstractChannel.close() to not complete the closeFuture and the
promise until after the unregistration task is scheduled.
Result:
Noisy error log should no longer be an issue.
Related: #3122
Motivation:
The HttpStaticFileServer example writes the LastHttpContent twice at the
end of the transfer. HttpChunkedInput already produces a
LastHttpContent at the end of the stream, so there's no reason to write
another.
Modifications:
Do not write LastHttpContent in HttpStaticFileServerHandler when
HttpChunkedInput is used to transfer a file.
Result:
HttpStaticFileServer does not violate the protocol anymore.
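A sketch of the corrected write path, roughly following the shape of the example handler; the chunk size, keep-alive handling, and method signature are illustrative:

    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelFutureListener;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.http.HttpChunkedInput;
    import io.netty.handler.stream.ChunkedFile;
    import java.io.RandomAccessFile;

    final class FileTransfer {

        static void sendFile(ChannelHandlerContext ctx, RandomAccessFile raf,
                             long fileLength, boolean keepAlive) throws Exception {
            // HttpChunkedInput emits LastHttpContent by itself once the underlying
            // ChunkedFile is exhausted, so no separate LastHttpContent write is needed.
            ChannelFuture sendFuture = ctx.writeAndFlush(
                    new HttpChunkedInput(new ChunkedFile(raf, 0, fileLength, 8192)),
                    ctx.newProgressivePromise());

            if (!keepAlive) {
                sendFuture.addListener(ChannelFutureListener.CLOSE);
            }
        }

        private FileTransfer() { }
    }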
Related: #3157
Motivation:
It would be convenient to have an easy way to classify an
HttpResponseStatus based on the first digit of the HTTP status code, as
defined in RFC 2616:
- Information 1xx
- Success 2xx
- Redirection 3xx
- Client Error 4xx
- Server Error 5xx
Modification:
- Add HttpStatusClass
- Add HttpResponseStatus.codeClass() that returns the class of the HTTP
status code
- Remove HttpResponseStatus.isInformational()
Result:
It's easier to determine the class of an HTTP status
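A small sketch of the intended usage, assuming the enum constants mirror the classes listed above and that a valueOf(int) classification helper exists:

    import io.netty.handler.codec.http.HttpResponseStatus;
    import io.netty.handler.codec.http.HttpStatusClass;

    final class StatusClassExample {

        static boolean isClientError(HttpResponseStatus status) {
            // e.g. HttpResponseStatus.NOT_FOUND.codeClass() == HttpStatusClass.CLIENT_ERROR
            return status.codeClass() == HttpStatusClass.CLIENT_ERROR;
        }

        static boolean isSuccess(int code) {
            // classification straight from the numeric code
            return HttpStatusClass.valueOf(code) == HttpStatusClass.SUCCESS;
        }

        private StatusClassExample() { }
    }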
Motivation:
When running the examples using the provided run-examples.sh script, the
log level is set to 'info'. It can be handy to be able to configure a
different level, for example 'debug', while learning and trying out the
examples.
Modifications:
Added a dependency on logback-classic to the examples pom.xml, and also
added a logback configuration file. The log level can be configured by
setting the 'logLevel' system property, and if that property is not set
the default will be the 'info' level.
The run-examples.sh script was updated to show an example of using the
system property to set the log level to 'debug'.
Result:
It is now possible to turn on debug logging by setting a system property
on the command line.
Motivation:
The HTTP/2 compressor does not release the input buffer when compression is done. This results in buffer leaks.
Modifications:
- Release the buffer in the HTTP/2 compressor
- Update tests to reflect the correct state
Result:
One fewer buffer leak.
Motivation:
The interface for HTTP/2 onDataRead states that buffers will be released by the codec. The decompressor and compressor methods are not releasing buffers created during the decompression/compression process.
Modifications:
The decompressor and compressor classes now release the data buffer after the onDataRead call.
Result:
HTTP/2 compressor/decompressors are consistent with onDataRead interface assumptions.
Motivation:
Some of the comments in HTTP/2 Frame Listener interface are misleading.
Modifications:
Clarify comments in Http2FrameListener.
Result:
Http2FrameListener onDataRead comments are clarified.
Motivation:
When DefaultHttp2FrameReader has read a settings frame, the settings
will be passed along the pipeline. This allows a client to hold off
sending data until it has received a settings frame. A server, however,
will always have received a settings frame before sending data, so
forwarding the settings is less useful there. It also causes a debug message
to be logged on the server side if there is no channel handler to handle
the settings:
[nioEventLoopGroup-1-1] DEBUG io.netty.channel.DefaultChannelPipeline -
Discarded inbound message {INITIAL_WINDOW_SIZE=131072,
MAX_FRAME_SIZE=16384} that reached at the tail of the pipeline. Please
check your pipeline configuration.
Modifications:
Added a builder for the InboundHttp2ToHttpAdapter and
InboundHttp2PriorityAdapter and a new parameter named 'propagateSettings'
to their constructors.
Result:
It is now possible to control whether settings should be passed along
the pipeline or not.
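A hypothetical sketch of the builder usage; apart from 'propagateSettings', the nested Builder class, the maxContentLength() call, and build() are assumptions about the API shape rather than confirmed signatures:

    import io.netty.handler.codec.http2.Http2Connection;
    import io.netty.handler.codec.http2.InboundHttp2ToHttpAdapter;

    final class AdapterSetup {

        // Assumed builder shape; only 'propagateSettings' is named by this commit.
        static InboundHttp2ToHttpAdapter create(Http2Connection connection) {
            return new InboundHttp2ToHttpAdapter.Builder(connection)
                    .maxContentLength(64 * 1024)
                    .propagateSettings(false) // drop settings instead of forwarding them
                    .build();
        }

        private AdapterSetup() { }
    }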
Motivation:
When authenticating with a proxy server, HttpProxyHandler should use the
'Proxy-Authorization' header rather than the 'Authorization' header.
Modifications:
- Use 'Proxy-Authorization' header
Result:
The client can now authenticate with an HTTP proxy server.
Motivation:
NameResolverGroup uses the EventExecutor specified with getResolver() as
the key of its internal map. The EventExecutor is often a wrapped one,
such as PausibleChannelEventExecutor, so different wrappers of the same
underlying executor end up creating multiple NameResolvers where only one
should be instantiated.
Modifications:
Unwrap the EventExecutor before using it as the key of the internal map
Result:
Memory leak is gone.
Motivation:
I found myself writing AsciiString constants in my code for
response statuses and thought that perhaps it might be nice to have
them defined by Netty instead.
Modifications:
Added codeAsText() to HttpResponseStatus, which returns the status code as
an AsciiString.
In addition, added the 421 Misdirected Request response code from
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#section-9.1.2
This status code was renamed in draft 15:
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#appendix-A.1
But the code itself was not changed, and I thought using the latest would
be better.
Result:
It is now possible to specify a status like this:
new DefaultHttp2Headers().status(HttpResponseStatus.OK.codeAsText());
Related: #3113
Motivation:
Because ChannelHandlerContext is most often instantiated by an I/O
thread, we can rely on thread-local variables to keep the skipFlags
cache, which should be faster than a partitioned synchronized map.
Modifications:
Use FastThreadLocal for skipFlagsCache instead of partitioned
synchronized map.
Result:
Less contention
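A short sketch of the kind of per-thread cache described above; the cached value type and the flag computation are illustrative, not the actual skipFlags representation:

    import io.netty.util.concurrent.FastThreadLocal;
    import java.util.Map;
    import java.util.WeakHashMap;

    final class SkipFlagsCacheExample {

        // One cache instance per thread; no synchronization or partitioning is
        // needed because lookups happen on the owning (usually I/O) thread.
        private static final FastThreadLocal<Map<Class<?>, Integer>> CACHE =
                new FastThreadLocal<Map<Class<?>, Integer>>() {
                    @Override
                    protected Map<Class<?>, Integer> initialValue() {
                        return new WeakHashMap<Class<?>, Integer>();
                    }
                };

        static int skipFlags(Class<?> handlerType) {
            Map<Class<?>, Integer> cache = CACHE.get();
            Integer flags = cache.get(handlerType);
            if (flags == null) {
                flags = computeSkipFlags(handlerType); // computed once per thread/type
                cache.put(handlerType, flags);
            }
            return flags;
        }

        // Placeholder for the reflective computation of the skip flags.
        private static int computeSkipFlags(Class<?> handlerType) {
            return 0;
        }

        private SkipFlagsCacheExample() { }
    }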
Motivation:
Found performance issues via FindBugs and PMD.
Modifications:
- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive is created from a string, just to extract the unboxed primitive value.
- Added a static modifier to the DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class but does not use its implicit reference to the object which created it. This reference makes instances of the class larger and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid compiling it each time it is used when replacing part of the authority.
- Improved the use of StringBuilder.
Result:
Performance improvements.
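An example of the precompiled-Pattern change; the pattern itself is illustrative, not the exact one used for the authority rewrite:

    import java.util.regex.Pattern;

    final class AuthorityRewrite {

        // Compiled once and reused for every call, instead of recompiling the
        // regular expression on each invocation.
        private static final Pattern USER_INFO = Pattern.compile("^.*@");

        static String stripUserInfo(String authority) {
            return USER_INFO.matcher(authority).replaceFirst("");
        }

        private AuthorityRewrite() { }
    }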
Motivation:
The HTTP/2 codec currently does not expose the boundaries of the initial settings. This could be useful for applications to define their own initial settings.
Modifications:
Add new static final variables to Http2CodecUtil and make Http2Settings use these in the bounds checks.
Result:
Applications can use the max (or min) variables to initialize their settings.
Motivation:
When the inbound flow controller recognizes that the flow control window
has been violated on a stream (not connection-wide), it throws a
connection error.
Modifications:
Changed the DefaultHttp2InboundFlowController to properly throw
connection error if the connection window is violated and stream error
if a stream window is violated.
Result:
Inbound flow control throws the correct error for window violations.
Motivation:
The current name of the class which converts from HTTP objects to HTTP/2 frames contains the text Http2ToHttp. This is misleading and the opposite of what is being done.
Modifications:
Rename the class so that its name contains HttpToHttp2.
Result:
Class names that more clearly identify what they do.
Motivation:
The decompression frame listener currently opts out of application-level flow control. The application should still be able to control flow even if decompression is in use.
Modifications:
- DecompressorFrameListener now keeps track of how many compressed, decompressed, and listener-processed bytes there are. A ratio of these values is used to translate the processed bytes into the application-level flow control amount.
Result:
The HTTP/2 decompressor delegates application-level flow control to the listener processing the decompressed data.
Motivation:
Currently, we only test our ZlibEncoders against our ZlibDecoders. It is
convenient to write such tests, but it does not necessarily guarantee
their correctness. For example, both encoder and decoder might be faulty
even if the tests pass.
Modifications:
Add another test that makes sure that our GZIP encoder generates the
GZIP trailer, using the fact that GZIPInputStream raises an EOFException
when the GZIP trailer is missing.
Result:
More coverage for GZIP compression
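To illustrate the property the new test relies on, a standalone JDK sketch (not the actual Netty test): GZIPInputStream reads a complete stream fine but fails with an EOFException once the 8-byte trailer is cut off:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.EOFException;
    import java.util.Arrays;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public final class GzipTrailerDemo {

        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            GZIPOutputStream gzip = new GZIPOutputStream(bos);
            gzip.write("hello".getBytes("US-ASCII"));
            gzip.close();                                  // close() writes the trailer
            byte[] complete = bos.toByteArray();

            // Drop the 8-byte trailer (CRC32 + ISIZE) to simulate a faulty encoder.
            byte[] truncated = Arrays.copyOf(complete, complete.length - 8);

            readFully(complete);                           // succeeds
            try {
                readFully(truncated);                      // trailer missing
            } catch (EOFException expected) {
                System.out.println("EOFException as expected: " + expected.getMessage());
            }
        }

        private static void readFully(byte[] data) throws Exception {
            GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(data));
            byte[] buf = new byte[64];
            while (in.read(buf) != -1) {
                // drain
            }
            in.close();
        }
    }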
Motivation:
Currently when an exception occurs during a listener.onDataRead
callback, we return all bytes as processed. However, the listener may
choose to return bytes via the InboundFlowState object rather than
returning the integer. If the listener returns a few bytes and then
throws, we will attempt to return too many bytes.
Modifications:
Added InboundFlowState.unProcessedBytes() to indicate how many
unprocessed bytes are outstanding.
Updated DefaultHttp2ConnectionDecoder to compare the unprocessed bytes
before and after the listener.onDataRead callback when an exception was
encountered. If there is a difference, it is subtracted off the total
processed bytes to be returned to the flow controller.
Result:
HTTP/2 data frame delivery properly accounts for processed bytes through
an exception.
Motivation:
The SPDY/3.1 spec does not adequately describe how to push resources
from the server. This was solidified in the HTTP/2 drafts by dividing
the push into two frames, a PushPromise containing the request,
followed by a Headers frame containing the response.
Modifications:
This commit modifies the SpdyHttpDecoder to support pushed resources
that are divided into multiple frames. The decoder will accept a
pushed SpdySynStreamFrame containing the request headers, followed by
a SpdyHeadersFrame containing the response headers.
Result:
The SpdyHttpDecoder will create an HttpRequest object followed by an
HttpResponse object when receiving pushed resources.