This brings forward the build and release automation changes from 4.1 (#10879, #10883, #10884, #10886, #10888, #10889, #10893, #10900, #10933, #10945, #10966, #10968, #11002, and #11019) to 5.0. Details are as follows:

* Use GitHub workflows for CI (#10879)
  Motivation: We should just use GitHub Actions for CI.
  Modifications:
  - Adjust docker / docker-compose files
  - Add different workflows and jobs to build and deploy the project
  Result: No dependency on external CI services.
* Fix non-leak build condition
* Only use build and deploy workflows for 4.1 for now
* Add deploy job for cross-compiled aarch64 (#10883)
  Motivation: We should also deploy snapshots for our cross-compiled native jars.
  Modifications:
  - Add job and docker files for deploying cross-compiled native jars
  - Ensure we map the Maven cache into our docker containers
  Result: Deploy aarch64 jars and re-use the cache.
* Use correct docker-compose file to deploy cross-compiled artifacts
* Use correct docker-compose task to deploy cross-compiled artifacts
* Split PR and normal builds (#10884)
  Motivation: We should use separate workflows for PR and normal builds.
  Modifications:
  - Split workflows
  - Better cache reuse
  Result: Cleanup.
* Only deploy snapshots for one arch
  Motivation: We need to find a way to deploy SNAPSHOTS for different archs with the same timestamp; otherwise it will cause problems. See https://github.com/netty/netty/issues/10887
  Modification: Skip all deploys other than x86_64.
  Result: Users are able to use SNAPSHOTS for x86_64.
* Use Maven cache when running the analyze job (#10888)
  Motivation: To prevent failures due to problems while downloading dependencies, we should cache them.
  Modifications: Add Maven cache.
  Result: No more failures due to problems while downloading dependencies.
* Also include one PR job that uses boringssl (#10886)
  Motivation: When validating PRs we should also run at least one job that uses boringssl.
  Modifications:
  - Add job that uses boringssl
  - Clean up docker-compose files
  - Fix buffer leak in test
  Result: Also run with boringssl when PRs are validated.
* Use matrix for job configurations (#10889)
  Motivation: We can use the matrix feature to define our jobs, which reduces a lot of configuration.
  Modification: Use a job matrix.
  Result: Easier to maintain.
* Correctly deploy artifacts that are built on different archs (#10893)
  Motivation: We need to take special care when deploying snapshots, as we need to generate the jars in multiple steps.
  Modifications:
  - Use the Nexus staging plugin to stage jars locally in multiple steps
  - Add an extra job that merges these staged jars and deploys them
  Result: Fixes https://github.com/netty/netty/issues/10887
* Don't use cron for PRs
  Motivation: It doesn't make sense to use cron for PRs.
  Modifications: Remove cron config.
  Result: Cleanup.
* Run all combinations only when validating a PR; use just one build config for normal pushes
  Motivation: Let us use only one build config when building the 4.1 branch.
  Modifications: As we already do a full validation in the PR builds, we can use a single build config for pushes to the "main" branches.
  Result: Faster build times.
* Update action-docker-layer-caching (#10900)
  Motivation: We are three releases behind.
  Modifications: Update to the latest version.
  Result: Use an up-to-date action-docker-layer-caching version.
* Verify we can load native modules, and add a job that verifies this on aarch64 as well (#10933)
  Motivation: As shown in the past, we need to verify that we can actually load the native code, as otherwise we may introduce regressions.
  Modifications:
  - Add a new Maven module which tests loading of native modules (see the sketch below, after this change list)
  - Add a job that also tests loading on aarch64
  Result: Less likely to introduce regressions related to loading native code in the future.
* Let script fail if one command fails (#10945)
  Motivation: We should use `set -e` to ensure the script fails if one command fails.
  Modifications: Add `set -e` to the script.
  Result: Fail fast.
* Use action to report unit test errors (#10966)
  Motivation: To make it easier to understand why a build fails, use an action that reports which unit test failed.
  Modifications: Replace custom script with action-surefire-report.
  Result: Easier to understand test failures.
* Use custom script to check for build failures (#10968)
  Motivation: It turns out we can't use the action to check for build failures, as it can't be used when a PR comes from a fork. Let's just use our simple script.
  Modifications: Replace the action with a custom script.
  Result: Builds for PRs done via forks work again.
* Publish test results after PR run (#11002)
  Motivation: To make it easier to understand why a build failed, publish the test results.
  Modifications: Use a new workflow to be able to publish the test reports.
  Result: Easier to understand why a PR failed.
* Fix test reports name
* Add workflow to cut releases (#11019)
  Motivation: Doing releases manually is error-prone; it would be better if we could do it via a workflow.
  Modification:
  - Add workflow to cut releases
  - Add related scripts
  Result: Be able to easily cut a release via a workflow.
* Update build for master branch
  Motivation: The build changes were brought forward from 4.1 and contain many things specific to 4.1.
  Modification: Changed the baseline Java version from 8 to 11, and changed branch references from "4.1" to "master".
  Result: Builds should now work for the master branch.

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
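The native-loading check referenced in #10933 amounts to forcing the native library to load and failing fast with the underlying cause if it cannot. Below is a minimal sketch of such a check using Netty's public epoll availability API; the class name is hypothetical, and the actual test module in the repository is more elaborate than this.

```java
import io.netty.channel.epoll.Epoll;

// Hypothetical standalone check; illustrates the idea, not the real test module.
public final class NativeLoadingCheck {
    public static void main(String[] args) {
        // ensureAvailability() re-throws the original linkage error if the
        // native library could not be loaded (e.g. wrong arch, missing .so).
        // Note: the epoll transport is only expected to load on Linux.
        Epoll.ensureAvailability();
        System.out.println("epoll native transport loaded successfully");
    }
}
```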
Netty Project
Netty is an asynchronous event-driven network application framework for rapid development of maintainable, high-performance protocol servers & clients.
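To give a feel for the event-driven model, here is a minimal echo-server sketch using the standard Netty 4.x NIO transport API (the class name and port are illustrative only):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public final class EchoServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O events
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, workers)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             ctx.writeAndFlush(msg); // echo bytes back asynchronously
                         }
                     });
                 }
             });
            ChannelFuture f = b.bind(8080).sync(); // port chosen arbitrarily
            f.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```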
Links
How to build
For detailed information about building and developing Netty, please visit the developer guide. This page gives only very basic information.
You require the following to build Netty:
- Latest stable OpenJDK 8
- Latest stable Apache Maven
- If you are on Linux, you need additional development packages installed on your system, because you'll build the native transport.
Note that this is a build-time requirement. JDK 5 (for 3.x) or 6 (for 4.0+ / 4.1+) is enough to run your Netty-based application.
Branches to look at
Development of all versions takes place in the branch whose name is identical to <majorVersion>.<minorVersion>. For example, the development of 3.9 and 4.1 resides in the branch '3.9' and the branch '4.1' respectively.
Usage with JDK 9+
Netty can be used in modular JDK9+ applications as a collection of automatic modules. The module names follow the reverse-DNS style, and are derived from subproject names rather than root packages due to historical reasons. They are listed below:
io.netty.all
io.netty.buffer
io.netty.codec
io.netty.codec.dns
io.netty.codec.haproxy
io.netty.codec.http
io.netty.codec.http2
io.netty.codec.memcache
io.netty.codec.mqtt
io.netty.codec.redis
io.netty.codec.smtp
io.netty.codec.socks
io.netty.codec.stomp
io.netty.codec.xml
io.netty.common
io.netty.handler
io.netty.handler.proxy
io.netty.resolver
io.netty.resolver.dns
io.netty.transport
io.netty.transport.epoll (native omitted - reserved keyword in Java)
io.netty.transport.kqueue (native omitted - reserved keyword in Java)
io.netty.transport.unix.common (native omitted - reserved keyword in Java)
io.netty.transport.rxtx
io.netty.transport.sctp
io.netty.transport.udt
Automatic modules do not provide any means to declare dependencies, so you need to list each used module separately in your module-info file.
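For example, a hypothetical application module that uses the NIO transport and the HTTP codec would declare every Netty module it touches explicitly:

```java
// module-info.java of a hypothetical application module. Each Netty
// automatic module from the list above that the application uses must
// be required explicitly, since automatic modules declare no dependencies.
module com.example.app {
    requires io.netty.common;
    requires io.netty.buffer;
    requires io.netty.transport;
    requires io.netty.codec;
    requires io.netty.codec.http;
}
```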