Commit Graph

36 Commits

skyguard1
266c987339
[Feature] Add zstd encoder (#11437)
Motivation:

As discussed in #10422, ZstdEncoder can be added separately

Modification:

Add ZstdEncoder separately

Result:

Netty supports Zstd compression via ZstdEncoder

Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
2021-07-06 14:57:09 +02:00
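
A minimal usage sketch for the ZstdEncoder added above. The initializer class and handler name are illustrative only; ZstdEncoder lives in io.netty.handler.codec.compression and compresses outbound ByteBufs.

```
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.compression.ZstdEncoder;

// Illustrative initializer: compress all outbound ByteBufs with Zstd.
public class ZstdPipelineInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // The encoder only handles the outbound path; as noted in #10422,
        // the matching decoder is tracked separately.
        ch.pipeline().addLast("zstd-encoder", new ZstdEncoder());
        // ... application handlers follow
    }
}
```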
Artem Smotrakov
e5951d46fc
Enable nohttp check during the build (#10708)
Motivation:

HTTP is a plaintext protocol, which means that someone may be able
to eavesdrop on the data. To prevent this, HTTPS should be used whenever
possible. However, consistently using https:// in all URLs can be
difficult to maintain. The nohttp tool can help here: it scans all the files
in a repository and reports where http:// is used.

Modifications:

- Added nohttp (via Checkstyle) to the build process.
- Suppressed findings for websites that don't support HTTPS
  or that are not reachable

Result:

- Prevents the use of HTTP in the future.
- Encourages users to use HTTPS when they follow links found in
  the code.
2020-10-23 14:44:18 +02:00
Norman Maurer
939e928312
Introduce MacOSDnsServerAddressStreamProvider which correctly detects all nameserver configuration on macOS (#9161)
Motivation:

On macOS it is not good enough to check /etc/resolv.conf to determine the nameservers to use. We should retrieve the nameservers the same way mDNSResponder and Chromium do, via a JNI call.

Modifications:

Add MacOSDnsServerAddressStreamProvider and a test case

Result:

Use the correct nameservers by default on macOS.
2019-10-28 15:02:45 +01:00
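
A hedged sketch of wiring the provider from the commit above into a DnsNameResolver; the resolver setup (event loop, channel type, host name) is illustrative and assumes the resolver-dns-native-macos module is on the classpath.

```
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.resolver.dns.DnsNameResolver;
import io.netty.resolver.dns.DnsNameResolverBuilder;
import io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider;

public final class MacOsResolverExample {
    public static void main(String[] args) {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        try {
            // Ask the native provider for the nameservers macOS actually uses,
            // instead of parsing /etc/resolv.conf.
            DnsNameResolver resolver = new DnsNameResolverBuilder(group.next())
                    .channelType(NioDatagramChannel.class)
                    .nameServerProvider(new MacOSDnsServerAddressStreamProvider())
                    .build();
            System.out.println(resolver.resolve("netty.io").syncUninterruptibly().getNow());
            resolver.close();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```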
Carl Mastrangelo
ff0045e3e1 Use Table lookup for HPACK decoder (#9307)
Motivation:
Table based decoding is fast.

Modification:
Use table based decoding in HPACK decoder, inspired by
https://github.com/python-hyper/hpack/blob/master/hpack/huffman_table.py

This modifies the table to be based on integers, rather than 3-tuples of
bytes.  This is for two reasons:

1.  It's faster
2.  Using bytes makes the static initializer too big, and doesn't
compile.

Result:
Faster Huffman decoding.  This only seems to help the ASCII case; the
other decoding is about the same.

Benchmarks:

```
Before:
Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt        Score       Error  Units
HpackDecoderBenchmark.decode            true         true   SMALL  thrpt   20   426293.636 ±  1444.843  ops/s
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   20    57843.738 ±   725.704  ops/s
HpackDecoderBenchmark.decode            true         true   LARGE  thrpt   20     3002.412 ±    16.998  ops/s
HpackDecoderBenchmark.decode            true        false   SMALL  thrpt   20   412339.400 ±  1128.394  ops/s
HpackDecoderBenchmark.decode            true        false  MEDIUM  thrpt   20    58226.870 ±   199.591  ops/s
HpackDecoderBenchmark.decode            true        false   LARGE  thrpt   20     3044.256 ±    10.675  ops/s
HpackDecoderBenchmark.decode           false         true   SMALL  thrpt   20  2082615.030 ±  5929.726  ops/s
HpackDecoderBenchmark.decode           false         true  MEDIUM  thrpt   10   571640.454 ± 26499.229  ops/s
HpackDecoderBenchmark.decode           false         true   LARGE  thrpt   20    92714.555 ±  2292.222  ops/s
HpackDecoderBenchmark.decode           false        false   SMALL  thrpt   20  1745872.421 ±  6788.840  ops/s
HpackDecoderBenchmark.decode           false        false  MEDIUM  thrpt   20   490420.323 ±  2455.431  ops/s
HpackDecoderBenchmark.decode           false        false   LARGE  thrpt   20    84536.200 ±   398.714  ops/s

After(bytes):
Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt        Score      Error  Units
HpackDecoderBenchmark.decode            true         true   SMALL  thrpt   20   472649.148 ± 7122.461  ops/s
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   20    66739.638 ±  341.607  ops/s
HpackDecoderBenchmark.decode            true         true   LARGE  thrpt   20     3139.773 ±   24.491  ops/s
HpackDecoderBenchmark.decode            true        false   SMALL  thrpt   20   466933.833 ± 4514.971  ops/s
HpackDecoderBenchmark.decode            true        false  MEDIUM  thrpt   20    66111.778 ±  568.326  ops/s
HpackDecoderBenchmark.decode            true        false   LARGE  thrpt   20     3143.619 ±    3.332  ops/s
HpackDecoderBenchmark.decode           false         true   SMALL  thrpt   20  2109995.177 ± 6203.143  ops/s
HpackDecoderBenchmark.decode           false         true  MEDIUM  thrpt   20   586026.055 ± 1578.550  ops/s
HpackDecoderBenchmark.decode           false        false   SMALL  thrpt   20  1775723.270 ± 4932.057  ops/s
HpackDecoderBenchmark.decode           false        false  MEDIUM  thrpt   20   493316.467 ± 1453.037  ops/s
HpackDecoderBenchmark.decode           false        false   LARGE  thrpt   10    85726.219 ±  402.573  ops/s

After(ints):
Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt        Score       Error  Units
HpackDecoderBenchmark.decode            true         true   SMALL  thrpt   20   615549.006 ±  5282.283  ops/s
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   20    86714.630 ±   654.489  ops/s
HpackDecoderBenchmark.decode            true         true   LARGE  thrpt   20     3984.439 ±    61.612  ops/s
HpackDecoderBenchmark.decode            true        false   SMALL  thrpt   20   602489.337 ±  5397.024  ops/s
HpackDecoderBenchmark.decode            true        false  MEDIUM  thrpt   20    88399.109 ±   241.115  ops/s
HpackDecoderBenchmark.decode            true        false   LARGE  thrpt   20     3875.729 ±   103.057  ops/s
HpackDecoderBenchmark.decode           false         true   SMALL  thrpt   20  2092165.454 ± 11918.859  ops/s
HpackDecoderBenchmark.decode           false         true  MEDIUM  thrpt   20   583465.437 ±  5452.115  ops/s
HpackDecoderBenchmark.decode           false         true   LARGE  thrpt   20    93290.061 ±   665.904  ops/s
HpackDecoderBenchmark.decode           false        false   SMALL  thrpt   20  1758402.495 ± 14677.438  ops/s
HpackDecoderBenchmark.decode           false        false  MEDIUM  thrpt   10   491598.099 ±  5029.698  ops/s
HpackDecoderBenchmark.decode           false        false   LARGE  thrpt   20    85834.290 ±   554.915  ops/s
```
2019-07-02 20:09:44 +02:00
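
An illustrative sketch of the int-based table entries described above: the former (nextState, flags, symbol) byte triple is packed into a single int. The field widths and helper names here are assumptions, not Netty's actual HpackHuffmanDecoder layout.

```
// Illustration only: pack a (nextState, flags, symbol) triple into one int.
final class HuffmanTableEntry {
    static int pack(int nextState, int flags, int symbol) {
        return (nextState << 16) | ((flags & 0xFF) << 8) | (symbol & 0xFF);
    }

    static int nextState(int entry) { return entry >>> 16; }
    static int flags(int entry)     { return (entry >>> 8) & 0xFF; }
    static int symbol(int entry)    { return entry & 0xFF; }

    public static void main(String[] args) {
        int entry = pack(42, 0x01, 'a');
        System.out.println(nextState(entry) + " " + flags(entry) + " " + (char) symbol(entry));
    }
}
```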
noSim
b11afd28f4 Updated jboss-marshalling dependency to current license (#9172)
Motivation:

The license mentioned for the jboss-marshalling dependency is outdated. The license has moved from LGPL v2.1 to Apache 2.0.
The version used by Netty (1.4.11.Final) is under Apache 2.0; see https://github.com/jboss-remoting/jboss-marshalling/blob/1.4.11.Final/LICENSE.txt

Modification:

Updated NOTICE file with correct license for jboss-marshalling.

Result:

NOTICE file shows correct license.
2019-05-23 07:21:11 +02:00
Trustin Lee
bda6cb01c6 Add the NOTICE of the forked portion of Apache Harmony
Motivation:

According to section 4d of the ASLv2:

If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

Modifications:

- Add a subset of NOTICE.txt of Apache Harmony.

Result:

- Fixes #7588
2018-01-30 11:22:51 +01:00
Abhijit Sarkar
f99a1985ce Include mvn wrapper to make setup of development env easier
Motivation:
Someone intending to contribute should be able to set up their development environment quickly and easily.

Modifications:
- Added a Maven wrapper such that a local Maven installation isn't necessary.
- Added a .gitattributes such that auto line-endings are enforced.

Result:
- ./mvnw is enough to build.
- Git line-endings are enforced.
- Fixes #7578.
2018-01-26 08:13:17 +01:00
Norman Maurer
5963279e58 Remove references to Akka code and ArrayDeque, which are not part of Netty anymore
Motivation:

We no longer ship any forked code from Akka or ArrayDeque.

Modifications:

Remove reference from NOTICE.txt and license folder.

Result:

Correctly document license-related things.
2017-03-07 21:30:51 +01:00
Robert Borg
3785ca9311 Added support for the Protobuf codec Nano runtime
Motivation:

Netty was missing support for the Protobuf Nano runtime, which targets
weaker systems such as Android devices.

Modifications:

Added ProtobufEncoderNano and ProtobufDecoderNano
in order to provide support for the Nano runtime.

Modified ProtobufVarint32FrameDecoder and
ProtobufLengthFieldPrepender in order to remove any
dependency on either the Nano or Lite runtime by copying
the code for handling Protobuf varint32 in from the
Protobuf library.

Modified the licenses and NOTICE in order to reflect the
changes made.

Added the Protobuf Nano runtime as an optional dependency

Result:

Netty now supports the Protobuf Nano runtime.
2016-01-19 21:39:17 +01:00
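
A hedged pipeline sketch for the Nano codec described above. MyNanoMessage stands in for a hypothetical class generated by the Protobuf Nano compiler; the frame decoder/prepender pair handles the varint32 length prefix.

```
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoderNano;
import io.netty.handler.codec.protobuf.ProtobufEncoderNano;
import io.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

public class NanoProtobufInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new ProtobufVarint32FrameDecoder())           // split inbound frames by varint32 length
          .addLast(new ProtobufDecoderNano(MyNanoMessage.class)) // bytes -> Nano message (hypothetical generated class)
          .addLast(new ProtobufVarint32LengthFieldPrepender())   // prepend varint32 length to outbound frames
          .addLast(new ProtobufEncoderNano());                   // Nano message -> bytes
    }
}
```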
Dmitry Spikhalskiy
2a65ae256e [#4331] Helper methods to get charset from Content-Type header of HttpMessage
Motivation:

HttpHeaders already has specific methods for popular and simple headers like "Host", but to convert a raw POST body to a string I need to parse the more complex Content-Type header in my own code.

Modifications:

Add getCharset and getCharsetAsString methods to parse the charset from the Content-Type header.

Result:

Easy-to-use utility methods.
2015-11-19 15:59:34 -08:00
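
A self-contained sketch of the parsing such helpers perform; the class and method names below are hypothetical, not the commit's actual API.

```
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Hypothetical helper: extract the charset parameter from a Content-Type value.
final class ContentTypeCharset {
    static Charset charsetFrom(String contentType, Charset fallback) {
        if (contentType == null) {
            return fallback;
        }
        for (String part : contentType.split(";")) {
            String trimmed = part.trim();
            if (trimmed.regionMatches(true, 0, "charset=", 0, 8)) {
                try {
                    return Charset.forName(trimmed.substring(8).trim());
                } catch (IllegalArgumentException invalidOrUnsupported) {
                    return fallback;
                }
            }
        }
        return fallback;
    }

    public static void main(String[] args) {
        // Prints UTF-8
        System.out.println(charsetFrom("text/html; charset=UTF-8", StandardCharsets.ISO_8859_1));
    }
}
```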
nmittler
8accc52b03 Forking Twitter's hpack
Motivation:

The Twitter hpack project no longer has the support that it used to have. See the discussion here: https://github.com/netty/netty/issues/4403.

Modifications:

Created a new module in Netty and copied the latest from twitter hpack master.

Result:

Netty no longer depends on twitter hpack.
2015-11-14 10:13:32 -08:00
Norman Maurer
81fee66c78 Let PoolThreadCache work even if the allocation and deallocation threads are different
Motivation:

PoolThreadCache only cached allocations if the allocation and deallocation Thread were the same. This is not optimal, as people often write from a different thread than the actual EventLoop thread.

Modification:

- Add MpscArrayQueue, which was forked from JCTools and lightly modified.
- Use MpscArrayQueue for the caches and always add a freed buffer back to the cache that belongs to its allocation thread.

Result:

PoolThreadCache is now also usable, and so gives performance improvements, when the allocation and deallocation threads are different.

Performance when using the same thread for allocation and deallocation is noticeably worse than before.
2015-05-27 14:38:11 +02:00
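
A conceptual sketch of the caching scheme described above, not Netty's implementation: each allocating thread owns a cache, and any thread that frees a buffer offers it back to the owner's queue. ConcurrentLinkedQueue stands in for the bounded MpscArrayQueue forked from JCTools.

```
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

final class ThreadLocalBufferCache<T> {
    // Many producers (threads releasing buffers), single consumer (the allocating thread).
    private final Queue<T> freed = new ConcurrentLinkedQueue<>();

    // May be called from any thread that frees a buffer allocated by the owner thread.
    void release(T buffer) {
        freed.offer(buffer);
    }

    // Called only from the owner (allocating) thread; null means fall back to the arena.
    T allocateFromCache() {
        return freed.poll();
    }
}
```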
Marek Jelen
d3e7a0bcd0 Integrate non-blocking XML parser as Netty codec (#2806)
Motivation:
Provide non-blocking XML parser as Netty codec.

Modifications:
New codec implemented/extracted.

io.netty.handler.codec.xml.XmlDecoder decodes XML fed by Netty without blocking.

Result:
Non-blocking XML stream parsing.
2015-02-19 13:46:14 +01:00
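
A hedged sketch of using the decoder from the commit above: XmlDecoder turns the inbound byte stream into XML events (XmlElementStart and friends) which downstream handlers consume; the handler here is illustrative.

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.xml.XmlDecoder;
import io.netty.handler.codec.xml.XmlElementStart;

public class XmlPipelineInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new XmlDecoder())
          .addLast(new SimpleChannelInboundHandler<XmlElementStart>() {
              @Override
              protected void channelRead0(ChannelHandlerContext ctx, XmlElementStart start) {
                  // Fires as soon as an opening tag has been parsed, without
                  // waiting for the rest of the document.
                  System.out.println("element: " + start.name());
              }
          });
    }
}
```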
Idel Pivnitskiy
cf5aea52ed Implemented LZMA frame encoder
Motivation:

LZMA compression algorithm has a very good compression ratio.

Modifications:

- Added the `lzma-java` library, which implements the LZMA algorithm.
- Implemented LzmaFrameEncoder, which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the LzmaFrameEncoder and that its output can be uncompressed with the original library.

Result:

LZMA encoder which can compress data using LZMA algorithm.
2014-09-15 15:05:36 +02:00
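
A small roundtrip-style sketch in the spirit of the tests described above, compressing a payload with LzmaFrameEncoder inside an EmbeddedChannel; the payload and printed output are illustrative.

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.compression.LzmaFrameEncoder;
import io.netty.util.CharsetUtil;

public final class LzmaEncodeExample {
    public static void main(String[] args) {
        EmbeddedChannel channel = new EmbeddedChannel(new LzmaFrameEncoder());
        channel.writeOutbound(Unpooled.copiedBuffer("hello hello hello", CharsetUtil.UTF_8));
        channel.finish();

        ByteBuf compressed;
        while ((compressed = channel.readOutbound()) != null) {
            System.out.println("compressed frame: " + compressed.readableBytes() + " bytes");
            compressed.release();
        }
    }
}
```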
Idel Pivnitskiy
c8841bc9de Implemented LZ4 compression codec
Motivation:

LZ4 compression codec provides sending and receiving data encoded by very fast LZ4 algorithm.

Modifications:

- Added the `lz4` library, which implements the LZ4 algorithm.
- Implemented Lz4FramedEncoder, which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the Lz4FramedEncoder and that its output can be uncompressed with the original library.
- Implemented Lz4FramedDecoder, which extends ByteToMessageDecoder and provides uncompression of incoming messages.
- Added tests to verify the Lz4FramedDecoder and that it can uncompress data compressed with the original library.
- Added integration tests for Lz4FramedEncoder/Decoder.

Result:

Full LZ4 compression codec which can compress/uncompress data using LZ4 algorithm.
2014-08-14 15:05:24 -07:00
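
A pipeline sketch wiring the codec on both directions of a channel. Note that current Netty spells the classes Lz4FrameEncoder and Lz4FrameDecoder; the initializer below is illustrative.

```
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.compression.Lz4FrameDecoder;
import io.netty.handler.codec.compression.Lz4FrameEncoder;

public class Lz4PipelineInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast("lz4-decoder", new Lz4FrameDecoder())  // inbound: decompress
          .addLast("lz4-encoder", new Lz4FrameEncoder()); // outbound: compress
        // The peer needs the matching encoder/decoder pair as well.
    }
}
```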
Idel Pivnitskiy
a0c466a276 Implemented FastLZ compression codec
Motivation:

FastLZ compression codec provides sending and receiving data encoded by fast FastLZ algorithm using block mode.

Modifications:

- Added part of `jfastlz` library which implements FastLZ algorithm. See FastLz class.
- Implemented FastLzFramedEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Implemented FastLzFramedDecoder which extends ByteToMessageDecoder and provides uncompression of incoming messages.
- Added integration tests for `FastLzFramedEncoder/Decoder`.

Result:

Full FastLZ compression codec which can compress/uncompress data using FastLZ algorithm.
2014-08-12 15:14:59 -07:00
Idel Pivnitskiy
ed7240b597 Implemented a Bzip2Encoder
Motivation:

Bzip2Encoder provides sending data compressed in bzip2 format.

Modifications:

Added classes:
- Bzip2Encoder
- Bzip2BitWriter
- Bzip2BlockCompressor
- Bzip2DivSufSort
- Bzip2HuffmanAllocator
- Bzip2HuffmanStageEncoder
- Bzip2MTFAndRLE2StageEncoder
- Bzip2EncoderTest

Modified classes:
- Bzip2Constants (split BLOCK_HEADER_MAGIC and END_OF_STREAM_MAGIC)
- Bzip2Decoder (use the split magic numbers)

Added integration tests for Bzip2Encoder/Decoder

Result:

Implemented new encoder which can compress outgoing data in bzip2 format.
2014-07-17 16:19:39 +02:00
Idel Pivnitskiy
3c6017a9b1 Implemented LZF compression codec
Motivation:

LZF compression codec provides sending and receiving data encoded by very fast LZF algorithm.

Modifications:

- Added the Compress-LZF library, which implements the LZF algorithm
- Implemented LzfEncoder, which extends MessageToByteEncoder and provides compression of outgoing messages
- Added tests to verify the LzfEncoder and that its output can be uncompressed with the original library
- Implemented LzfDecoder, which extends ByteToMessageDecoder and provides uncompression of incoming messages
- Added tests to verify the LzfDecoder and that it can uncompress data compressed with the original library
- Added integration tests for LzfEncoder/Decoder

Result:

Full LZF compression codec which can compress/uncompress data using LZF algorithm.
2014-07-17 07:18:07 +02:00
Idel Pivnitskiy
f9021a6061 Implement a Bzip2Decoder
Motivation:

Bzip2Decoder provides receiving data compressed in bzip2 format.

Modifications:

Added classes:
- Bzip2Decoder
- Bzip2Constants
- Bzip2BlockDecompressor
- Bzip2HuffmanStageDecoder
- Bzip2MoveToFrontTable
- Bzip2Rand
- Crc32
- Bzip2DecoderTest

Result:

Implemented and tested new decoder which can uncompress incoming data in bzip2 format.
2014-06-24 14:50:09 +09:00
Trustin Lee
681d460938 Introduce TextHeaders and AsciiString
Motivation:

We have quite a bit of code duplication between HTTP/1, HTTP/2, SPDY,
and STOMP codec, because they all have a notion of 'headers', which is a
multimap of string names and values.

Modifications:

- Add TextHeaders and its default implementation
- Add AsciiString to replace HttpHeaderEntity
  - Borrowed some portion from Apache Harmony's java.lang.String.
- Reimplement HttpHeaders, SpdyHeaders, and StompHeaders using
  TextHeaders
- Add AsciiHeadersEncoder to reuse the encoding of a TextHeaders
  - Used a dedicated encoder for HTTP headers for better performance
    though
- Remove shortcut methods in SpdyHeaders
- Replace SpdyHeaders.getStatus() with HttpResponseStatus.parseLine()

Result:

- Removed quite a bit of code duplication in the header implementations.
- Slightly better performance thanks to improved header validation and
  hash code calculation
2014-06-14 15:36:19 +09:00
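
A tiny illustration of AsciiString, which backs the header names and values mentioned above; in current Netty it lives in io.netty.util. The example values are arbitrary.

```
import io.netty.util.AsciiString;

public final class AsciiStringExample {
    public static void main(String[] args) {
        AsciiString name = new AsciiString("Content-Type");
        // Case-insensitive comparison without allocating lower-cased copies.
        System.out.println(name.contentEqualsIgnoreCase("content-type")); // true
        System.out.println(name.length() + " " + name.charAt(0));         // 12 C
    }
}
```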
Trustin Lee
2a6ed67efc Preparation for porting OpenSSL support in 3.10
- Add licenses and dependencies
2014-05-17 20:01:30 +09:00
Norman Maurer
d3ffa1b02b [#1259] Add optimized queue for SCMP pattern and use it in NIO and native transport
This queue also produces less GC than CLQ when making use of OneTimeTask
2014-02-27 13:28:37 +01:00
Trustin Lee
6e73d5471c Add back jzlib license file and notice 2013-02-21 14:00:59 -08:00
Norman Maurer
c93f5afa99 [#1012] Cleanup 2013-02-20 12:52:55 +01:00
Norman Maurer
1cc04e7dda Remove reference to metrics as it is not used anymore 2013-02-12 11:14:16 +01:00
Trustin Lee
42c65cca3a Make MessageBuf bounded
- Move common methods from ByteBuf to Buf
- Rename ensureWritableBytes() to ensureWritable()
- Rename readable() to isReadable()
- Rename writable() to isWritable()
- Add isReadable(int) and isWritable(int)
- Add AbstractMessageBuf
- Rewrite DefaultMessageBuf and QueueBackedMessageBuf
  - based on Josh Bloch's public domain ArrayDeque impl
2013-01-31 18:11:06 +01:00
Trustin Lee
b60e0b6a51 Modernize InternalLogger API and enable logging framework autodetection
- Borrow SLF4J API which is the best of the best
- InternalLoggerFactory now automatically detects the logging framework
  using static class loading. It tries SLF4J, Log4J, and then falls back
  to java.util.logging.
- Remove OsgiLogger because it is very likely that OSGi container
  already provides a bridge for existing logging frameworks
- Remove JBossLogger because the latest JBossLogger implementation seems
  to implement SLF4J binding
- Upgrade SLF4J to 1.7.2
- Remove tests for the untestable logging frameworks
- Remove TestAny
2013-01-19 20:50:52 +09:00
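
A conceptual sketch of the static-classloading detection described above; Netty's real InternalLoggerFactory has more cases and different method names.

```
// Illustration of framework autodetection via static class loading.
final class LoggerDetection {
    static String detectFramework() {
        try {
            Class.forName("org.slf4j.LoggerFactory");
            return "SLF4J";
        } catch (ClassNotFoundException slf4jAbsent) {
            try {
                Class.forName("org.apache.log4j.Logger");
                return "Log4J";
            } catch (ClassNotFoundException log4jAbsent) {
                return "java.util.logging"; // always available
            }
        }
    }

    public static void main(String[] args) {
        System.out.println("logging via " + detectFramework());
    }
}
```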
Luke Wood
43e40d6af6 Add Snappy compression codec 2012-12-18 21:09:31 +01:00
Trustin Lee
8945ce17ad Update license notices and dependencies 2012-12-14 00:38:05 +09:00
Veebs
e79d5720d3 Added webbit license and credits 2011-10-27 10:34:37 +11:00
Trustin Lee
e5ccdee14e Removed the license files for unused dependencies 2010-04-30 11:09:37 +00:00
Trustin Lee
b4e183a9fe Removed the license files for unused dependencies 2010-04-30 11:09:09 +00:00
Trustin Lee
a7132ee08e Relates issue: NETTY-80 Compression codec
* Initial implementation of a jzlib-based zlib compression handler
2009-10-16 06:10:25 +00:00
Trustin Lee
e424c5f87d * Added XNIO dependency (optional)
* Updated license files for XNIO
2009-02-17 10:22:53 +00:00
Trustin Lee
7860e999a7 Updated license information (NETTY-106 Add missing license files to the distribution) 2009-02-13 13:58:37 +00:00
Trustin Lee
026fc520bb * Moved all third party license files into the 'license' directory
* Beautified NOTICE.txt
2008-12-30 02:41:09 +00:00