<?xml version="1.0" encoding="UTF-8"?>
<!--
 * Copyright 2009 Red Hat, Inc.
 *
 * Red Hat licenses this file to you under the Apache License, version 2.0
 * (the "License"); you may not use this file except in compliance with the
 * License. You may obtain a copy of the License at:
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
-->

<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.docbook.org/xml/4.5/docbookx.dtd" [
  <!ENTITY % CustomDTD SYSTEM "../custom.dtd">
  %CustomDTD;
]>
<chapter id="start">
  <title>Getting Started</title>
  <para>
    This chapter tours the core constructs of Netty with simple examples to
    get you started quickly. By the end of this chapter, you will be able to
    write a client and a server on top of Netty right away.
  </para>

  <para>
    If you prefer a top-down approach to learning, you might want to start
    from <xref linkend="architecture"/> and come back here.
  </para>

  <section>
    <title>Before Getting Started</title>
    <para>
      There are only two requirements for running the examples introduced in
      this chapter: the latest version of Netty and JDK 1.5 or above. The
      latest version of Netty is available on
      <ulink url="&Downloads;">the project download page</ulink>. To download
      the right version of the JDK, please refer to your preferred JDK
      vendor's web site.
    </para>
    <para>
      As you read, you might have more questions about the classes introduced
      in this chapter. Please refer to the API reference whenever you want to
      know more about them. All class names in this document are linked to the
      online API reference for your convenience. Also, please don't hesitate
      to <ulink url="&Community;">contact the Netty project community</ulink>
      and let us know if there is any incorrect information, grammatical
      errors or typos, or if you have ideas to improve the documentation.
    </para>
  </section>
  <section>
    <title>Writing a Discard Server</title>
    <para>
      The simplest protocol in the world is not 'Hello, World!' but
      <ulink url="http://tools.ietf.org/html/rfc863">DISCARD</ulink>, a
      protocol which discards any received data without any response.
    </para>
    <para>
      To implement the DISCARD protocol, the only thing you need to do is
      to ignore all received data. Let us start straight from the handler
      implementation, which handles the I/O events generated by Netty.
    </para>
    <programlisting>package org.jboss.netty.example.discard;

public class DiscardServerHandler extends &SimpleChannelHandler; {<co id="example.discard.co1"/>

    @Override
    public void messageReceived(&ChannelHandlerContext; ctx, &MessageEvent; e) {<co id="example.discard.co2"/>
    }

    @Override
    public void exceptionCaught(&ChannelHandlerContext; ctx, &ExceptionEvent; e) {<co id="example.discard.co3"/>
        e.getCause().printStackTrace();

        &Channel; ch = e.getChannel();
        ch.close();
    }
}</programlisting>
    <calloutlist>
      <callout arearefs="example.discard.co1">
        <para>
          <classname>DiscardServerHandler</classname> extends
          &SimpleChannelHandler;, which is an implementation of
          &ChannelHandler;. &SimpleChannelHandler; provides various event
          handler methods that you can override. For now, it is enough to
          extend &SimpleChannelHandler; rather than implement the handler
          interfaces yourself.
        </para>
      </callout>
      <callout arearefs="example.discard.co2">
        <para>
          We override the <methodname>messageReceived</methodname> event
          handler method here. This method is called with a &MessageEvent;,
          which contains the received data, whenever new data is received
          from a client. In this example, we implement the DISCARD protocol
          by doing nothing with the received data.
        </para>
      </callout>
      <callout arearefs="example.discard.co3">
        <para>
          The <methodname>exceptionCaught</methodname> event handler method
          is called with an &ExceptionEvent; when an exception is raised by
          Netty due to an I/O error, or by a handler implementation due to an
          exception thrown while processing events. In most cases, the caught
          exception should be logged and its associated channel should be
          closed here, although the implementation of this method can differ
          depending on how you want to deal with an exceptional situation.
          For example, you might want to send a response message with an
          error code before closing the connection.
        </para>
      </callout>
    </calloutlist>
    <para>
      So far so good. We have implemented the first half of the DISCARD
      server. What's left now is to write the <methodname>main</methodname>
      method which starts the server with the
      <classname>DiscardServerHandler</classname>.
    </para>
    <programlisting>package org.jboss.netty.example.discard;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class DiscardServer {

    public static void main(String[] args) throws Exception {
        &ChannelFactory; factory =
            new &NioServerSocketChannelFactory;<co id="example.discard2.co1" />(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool());

        &ServerBootstrap; bootstrap = new &ServerBootstrap;<co id="example.discard2.co2" />(factory);

        bootstrap.setPipelineFactory(new &ChannelPipelineFactory;() {<co id="example.discard2.co3" />
            public &ChannelPipeline; getPipeline() {
                return &Channels;.pipeline(new DiscardServerHandler());
            }
        });

        bootstrap.setOption("child.tcpNoDelay", true);<co id="example.discard2.co4" />
        bootstrap.setOption("child.keepAlive", true);

        bootstrap.bind(new InetSocketAddress(8080));<co id="example.discard2.co5" />
    }
}</programlisting>
    <calloutlist>
      <callout arearefs="example.discard2.co1">
        <para>
          &ChannelFactory; is a factory which creates and manages &Channel;s
          and their related resources. It processes all I/O requests and
          performs I/O to generate &ChannelEvent;s. Netty provides various
          &ChannelFactory; implementations. We are implementing a server-side
          application in this example, and therefore
          &NioServerSocketChannelFactory; is used. Another thing to note is
          that it does not create I/O threads by itself. It acquires threads
          from the thread pools you specified in the constructor, which gives
          you more control over how threads should be managed in the
          environment where your application runs, such as an application
          server with a security manager.
        </para>
      </callout>
      <callout arearefs="example.discard2.co2">
        <para>
          &ServerBootstrap; is a helper class that sets up a server. You can
          set up the server using a &Channel; directly; however, please note
          that this is a tedious process which you do not need to go through
          in most cases.
        </para>
      </callout>
      <callout arearefs="example.discard2.co3">
        <para>
          Here, we configure the &ChannelPipelineFactory;. Whenever a new
          connection is accepted by the server, a new &ChannelPipeline; will
          be created by the specified &ChannelPipelineFactory;. The new
          pipeline contains the <classname>DiscardServerHandler</classname>.
          As the application gets complicated, it is likely that you will add
          more handlers to the pipeline and eventually extract this anonymous
          class into a top-level class.
        </para>
      </callout>
      <callout arearefs="example.discard2.co4">
        <para>
          You can also set parameters which are specific to the &Channel;
          implementation. We are writing a TCP/IP server, so we are allowed
          to set socket options such as <literal>tcpNoDelay</literal> and
          <literal>keepAlive</literal>. Please note the
          <literal>"child."</literal> prefix added to all the options. It
          means the options will be applied to the accepted &Channel;s
          instead of the &ServerSocketChannel; itself. To set the options of
          the &ServerSocketChannel;, you could do the following:
          <programlisting>bootstrap.setOption("reuseAddress", true);</programlisting>
        </para>
      </callout>
      <callout arearefs="example.discard2.co5">
        <para>
          We are ready to go now. What's left is to bind to the port and to
          start the server. Here, we bind to port <literal>8080</literal> on
          all NICs (network interface cards) in the machine. You can call the
          <methodname>bind</methodname> method as many times as you want
          (with different bind addresses).
        </para>
      </callout>
    </calloutlist>
    <para>
      Congratulations! You've just finished your first server on top of
      Netty.
    </para>
  </section>
  <section>
    <title>Looking into the Received Data</title>
    <para>
      Now that we have written our first server, we need to test whether it
      really works. The easiest way to test it is to use the
      <command>telnet</command> command. For example, you could enter
      "<command>telnet localhost 8080</command>" in the command line and type
      something.
    </para>
    <para>
      However, can we say that the server is working fine? We cannot really
      know, because it is a discard server; you will not get any response at
      all. To prove it is really working, let us modify the server to print
      what it has received.
    </para>
    <para>
      We already know that a &MessageEvent; is generated whenever data is
      received, and that the <methodname>messageReceived</methodname> handler
      method will be invoked. Let us put some code into the
      <methodname>messageReceived</methodname> method of the
      <classname>DiscardServerHandler</classname>:
    </para>
    <programlisting>@Override
public void messageReceived(&ChannelHandlerContext; ctx, &MessageEvent; e) {
    &ChannelBuffer;<co id="example.discard3.co1"/> buf = (ChannelBuffer) e.getMessage();
    while (buf.readable()) {
        System.out.println((char) buf.readByte());
        System.out.flush();
    }
}</programlisting>
    <calloutlist>
      <callout arearefs="example.discard3.co1">
        <para>
          It is safe to assume that the message type in socket transports is
          always &ChannelBuffer;. &ChannelBuffer; is a fundamental data
          structure which stores a sequence of bytes in Netty. It is similar
          to NIO <classname>ByteBuffer</classname>, but it is easier to use
          and more flexible. For example, Netty allows you to create a
          composite &ChannelBuffer; which combines multiple &ChannelBuffer;s,
          reducing the number of unnecessary memory copies.
        </para>
        <para>
          Although it resembles NIO <classname>ByteBuffer</classname> a lot,
          it is highly recommended that you refer to the API reference.
          Learning how to use &ChannelBuffer; correctly is a critical step in
          using Netty without difficulty.
        </para>
      </callout>
    </calloutlist>
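The readable()/readByte() loop above can be illustrated with a small plain-Java sketch. Note that this is only a toy illustration of the reader-index idea, not Netty's actual implementation; all names here are made up for the example.

```java
// A toy buffer with a separate reader index, sketched only to illustrate
// the ChannelBuffer-style readable()/readByte() contract. This is NOT
// Netty's implementation; all names are made up for illustration.
public class ToyBuffer {
    private final byte[] data;
    private int readerIndex;

    public ToyBuffer(byte[] data) {
        this.data = data;
    }

    // true while unread bytes remain, like ChannelBuffer.readable()
    public boolean readable() {
        return readerIndex < data.length;
    }

    // returns the next byte and advances the reader index
    public byte readByte() {
        return data[readerIndex++];
    }

    public static void main(String[] args) {
        ToyBuffer buf = new ToyBuffer("Hi".getBytes());
        StringBuilder out = new StringBuilder();
        while (buf.readable()) {
            out.append((char) buf.readByte());
        }
        System.out.println(out); // prints "Hi"
    }
}
```

The key point the real &ChannelBuffer; shares with this sketch is that reading consumes bytes by moving an index forward, so the loop naturally terminates when everything has been read.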
    <para>
      If you run the <command>telnet</command> command again, you will see
      the server print what it has received.
    </para>
    <para>
      The full source code of the discard server is located in the
      <literal>org.jboss.netty.example.discard</literal> package of the
      distribution.
    </para>
  </section>
  <section>
    <title>Writing an Echo Server</title>
    <para>
      So far, we have been consuming data without responding at all. A
      server, however, is usually supposed to respond to a request. Let us
      learn how to write a response message to a client by implementing the
      <ulink url="http://tools.ietf.org/html/rfc862">ECHO</ulink> protocol,
      where any received data is sent back.
    </para>
    <para>
      The only difference from the discard server we have implemented in the
      previous sections is that it sends the received data back instead of
      printing it out to the console. Therefore, it is again enough to modify
      the <methodname>messageReceived</methodname> method:
    </para>
    <programlisting>@Override
public void messageReceived(&ChannelHandlerContext; ctx, &MessageEvent; e) {
    &Channel;<co id="example.echo.co1"/> ch = e.getChannel();
    ch.write(e.getMessage());
}</programlisting>
    <calloutlist>
      <callout arearefs="example.echo.co1">
        <para>
          A &ChannelEvent; object has a reference to its associated
          &Channel;. Here, the returned &Channel; represents the connection
          which received the &MessageEvent;. We can get the &Channel; and
          call the <methodname>write</methodname> method to write something
          back to the remote peer.
        </para>
      </callout>
    </calloutlist>
    <para>
      If you run the <command>telnet</command> command again, you will see
      the server send back whatever you have sent to it.
    </para>
    <para>
      The full source code of the echo server is located in the
      <literal>org.jboss.netty.example.echo</literal> package of the
      distribution.
    </para>
  </section>
  <section>
    <title>Writing a Time Server</title>
    <para>
      The protocol to implement in this section is the
      <ulink url="http://tools.ietf.org/html/rfc868">TIME</ulink> protocol.
      It differs from the previous examples in that it sends a message, which
      contains a 32-bit integer, without receiving any request, and closes
      the connection once the message is sent. In this example, you will
      learn how to construct and send a message, and how to close the
      connection on completion.
    </para>
    <para>
      Because we are going to ignore any received data and instead send a
      message as soon as a connection is established, we cannot use the
      <methodname>messageReceived</methodname> method this time. Instead, we
      should override the <methodname>channelConnected</methodname> method.
      The following is the implementation:
    </para>
    <programlisting>package org.jboss.netty.example.time;

public class TimeServerHandler extends &SimpleChannelHandler; {

    @Override
    public void channelConnected(&ChannelHandlerContext; ctx, &ChannelStateEvent; e) {<co id="example.time.co1"/>
        &Channel; ch = e.getChannel();

        &ChannelBuffer; time = &ChannelBuffers;.buffer(4);<co id="example.time.co2"/>
        time.writeInt((int) (System.currentTimeMillis() / 1000));

        &ChannelFuture; f = ch.write(time);<co id="example.time.co3"/>

        f.addListener(new &ChannelFutureListener;() {<co id="example.time.co4"/>
            public void operationComplete(&ChannelFuture; future) {
                &Channel; ch = future.getChannel();
                ch.close();
            }
        });
    }

    @Override
    public void exceptionCaught(&ChannelHandlerContext; ctx, &ExceptionEvent; e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}</programlisting>
    <calloutlist>
      <callout arearefs="example.time.co1">
        <para>
          As explained, the <methodname>channelConnected</methodname> method
          is invoked when a connection is established. Here, we write the
          32-bit integer that represents the current time in seconds.
        </para>
      </callout>
      <callout arearefs="example.time.co2">
        <para>
          To send a new message, we need to allocate a new buffer which will
          contain the message. We are going to write a 32-bit integer, and
          therefore we need a &ChannelBuffer; whose capacity is
          <literal>4</literal> bytes. The &ChannelBuffers; helper class is
          used to allocate the new buffer. Besides the
          <methodname>buffer</methodname> method, &ChannelBuffers; provides
          a lot of useful methods related to &ChannelBuffer;. For more
          information, please refer to the API reference.
        </para>
        <para>
          It is also a good idea to use static imports for &ChannelBuffers;:
          <programlisting>import static org.jboss.netty.buffer.&ChannelBuffers;.*;
...
&ChannelBuffer; dynamicBuf = dynamicBuffer(256);
&ChannelBuffer; ordinaryBuf = buffer(1024);</programlisting>
        </para>
      </callout>
      <callout arearefs="example.time.co3">
        <para>
          As usual, we write the constructed message.
        </para>
        <para>
          But wait, where's the <methodname>flip</methodname>? Didn't we
          call <methodname>ByteBuffer.flip()</methodname> before sending a
          message in NIO? &ChannelBuffer; does not have such a method because
          it has two pointers: one for read operations and the other for
          write operations. The writer index increases when you write
          something to a &ChannelBuffer; while the reader index does not
          change. The reader index and the writer index represent where the
          message starts and ends, respectively.
        </para>
        <para>
          In contrast, an NIO buffer does not provide a clean way to figure
          out where the message content starts and ends without calling the
          <methodname>flip</methodname> method. You will be in trouble when
          you forget to flip the buffer, because either nothing or incorrect
          data will be sent. Such an error does not happen in Netty because
          there is a different pointer for each operation type. You will
          find it makes your life much easier as you get used to it -- a
          life without flipping out!
        </para>
        <para>
          Another point to note is that the <methodname>write</methodname>
          method returns a &ChannelFuture;. A &ChannelFuture; represents an
          I/O operation which has not yet occurred. This means any requested
          operation might not have been performed yet, because all
          operations are asynchronous in Netty. For example, the following
          code might close the connection even before the message is sent:
        </para>
        <programlisting>&Channel; ch = ...;
ch.write(message);
ch.close();</programlisting>
        <para>
          Therefore, you need to call the <methodname>close</methodname>
          method after the &ChannelFuture;, which was returned by the
          <methodname>write</methodname> method, notifies you that the write
          operation has been done. Please note that
          <methodname>close</methodname> also might not close the connection
          immediately; it, too, returns a &ChannelFuture;.
        </para>
      </callout>
      <callout arearefs="example.time.co4">
        <para>
          How do we get notified when the write request is finished, then?
          It is as simple as adding a &ChannelFutureListener; to the
          returned &ChannelFuture;. Here, we create a new anonymous
          &ChannelFutureListener; which closes the &Channel; when the
          operation is done.
        </para>
        <para>
          Alternatively, you could simplify the code using a pre-defined
          listener:
          <programlisting>f.addListener(&ChannelFutureListener;.CLOSE);</programlisting>
        </para>
      </callout>
    </calloutlist>
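The NIO flip pitfall described above can be observed with a small plain-JDK sketch, no Netty required: right after writing, an unflipped <classname>ByteBuffer</classname> reports zero readable bytes, so a channel write at that point would send nothing.

```java
import java.nio.ByteBuffer;

// Plain-JDK sketch (no Netty) of the flip pitfall: after writing, an
// unflipped ByteBuffer reports zero remaining bytes, so writing it to a
// channel at this point would send nothing.
public class FlipDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putInt((int) (System.currentTimeMillis() / 1000));

        // Without flip(): position == limit, so nothing is readable.
        System.out.println("before flip: " + buf.remaining()); // prints 0

        buf.flip();
        // After flip(): the 4 written bytes are readable again.
        System.out.println("after flip: " + buf.remaining()); // prints 4
    }
}
```

With &ChannelBuffer;'s separate reader and writer indexes, no such mode switch exists; the readable region is always the span between the two indexes.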
  </section>
  <section>
    <title>Writing a Time Client</title>
    <para>
      Unlike the DISCARD and ECHO servers, we need a client for the TIME
      protocol, because a human cannot translate 32-bit binary data into a
      date on a calendar. In this section, we discuss how to make sure the
      server works correctly, and learn how to write a client with Netty.
    </para>
    <para>
      The biggest, and only, difference between a server and a client in
      Netty is that a different &Bootstrap; and &ChannelFactory; are
      required. Please take a look at the following code:
    </para>
    <programlisting>package org.jboss.netty.example.time;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class TimeClient {

    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);

        &ChannelFactory; factory =
            new &NioClientSocketChannelFactory;<co id="example.time2.co1"/>(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool());

        &ClientBootstrap; bootstrap = new &ClientBootstrap;<co id="example.time2.co2"/>(factory);

        bootstrap.setPipelineFactory(new &ChannelPipelineFactory;() {
            public &ChannelPipeline; getPipeline() {
                return &Channels;.pipeline(new TimeClientHandler());
            }
        });

        bootstrap.setOption("tcpNoDelay"<co id="example.time2.co3"/>, true);
        bootstrap.setOption("keepAlive", true);

        bootstrap.connect<co id="example.time2.co4"/>(new InetSocketAddress(host, port));
    }
}</programlisting>
    <calloutlist>
      <callout arearefs="example.time2.co1">
        <para>
          &NioClientSocketChannelFactory;, instead of
          &NioServerSocketChannelFactory;, is used to create a client-side
          &Channel;.
        </para>
      </callout>
      <callout arearefs="example.time2.co2">
        <para>
          &ClientBootstrap; is the client-side counterpart of
          &ServerBootstrap;.
        </para>
      </callout>
      <callout arearefs="example.time2.co3">
        <para>
          Please note that there is no <literal>"child."</literal> prefix
          here; a client-side &SocketChannel; does not have a parent.
        </para>
      </callout>
      <callout arearefs="example.time2.co4">
        <para>
          We call the <methodname>connect</methodname> method instead of the
          <methodname>bind</methodname> method.
        </para>
      </callout>
    </calloutlist>
    <para>
      As you can see, it is not really different from the server-side
      startup. What about the &ChannelHandler; implementation? It should
      receive a 32-bit integer from the server, translate it into a
      human-readable format, print the translated time, and close the
      connection:
    </para>
    <programlisting>package org.jboss.netty.example.time;

import java.util.Date;

public class TimeClientHandler extends &SimpleChannelHandler; {

    @Override
    public void messageReceived(&ChannelHandlerContext; ctx, &MessageEvent; e) {
        &ChannelBuffer; buf = (&ChannelBuffer;) e.getMessage();
        long currentTimeMillis = buf.readInt() * 1000L;
        System.out.println(new Date(currentTimeMillis));
        e.getChannel().close();
    }

    @Override
    public void exceptionCaught(&ChannelHandlerContext; ctx, &ExceptionEvent; e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}</programlisting>
    <para>
      It looks very simple and does not look any different from the
      server-side example. However, this handler will sometimes refuse to
      work, raising an
      <exceptionname>IndexOutOfBoundsException</exceptionname>. We discuss
      why this happens in the next section.
    </para>
  </section>
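The conversion the handler performs can be checked in isolation with plain JDK code. The received value is seconds, while <classname>java.util.Date</classname> expects milliseconds, hence the <literal>* 1000L</literal>; the long literal matters, because plain int multiplication overflows for recent timestamps. The sample value below is an arbitrary choice for illustration.

```java
import java.util.Date;

// Plain-JDK check of the seconds-to-milliseconds conversion the TIME
// client handler performs. The "* 1000L" widens the arithmetic to long;
// an int multiplication would overflow for recent timestamps.
public class SecondsToDate {
    public static void main(String[] args) {
        int seconds = 1_234_567_890;     // a sample TIME value (illustrative)
        long millis = seconds * 1000L;   // widen to long before multiplying
        System.out.println(new Date(millis));

        // For comparison, int arithmetic silently overflows:
        System.out.println(seconds * 1000); // a wrong, overflowed value
    }
}
```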
  <section>
    <title>
      Dealing with a Stream-based Transport
    </title>
    <section>
      <title>
        One Small Caveat of Socket Buffer
      </title>
      <para>
        In a stream-based transport such as TCP/IP, received data is stored
        in a socket receive buffer. Unfortunately, the buffer of a
        stream-based transport is not a queue of packets but a queue of
        bytes. This means that even if you send two messages as two
        independent packets, an operating system will not treat them as two
        messages but just as a bunch of bytes. Therefore, there is no
        guarantee that what you read is exactly what your remote peer wrote.
        For example, let us assume that the TCP/IP stack of an operating
        system has received three packets:
      </para>
      <programlisting>+-----+-----+-----+
| ABC | DEF | GHI |
+-----+-----+-----+</programlisting>
      <para>
        Because of this general property of a stream-based protocol, there
        is a high chance of reading them in the following fragmented form in
        your application:
      </para>
      <programlisting>+----+-------+---+---+
| AB | CDEFG | H | I |
+----+-------+---+---+</programlisting>
      <para>
        Therefore, the receiving part, regardless of whether it is
        server-side or client-side, should defragment the received data into
        one or more meaningful <firstterm>frames</firstterm> that can be
        easily understood by the application logic. In the case of the
        example above, the received data should be framed like the
        following:
      </para>
      <programlisting>+-----+-----+-----+
| ABC | DEF | GHI |
+-----+-----+-----+</programlisting>
    </section>
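The caveat above can be demonstrated with a small plain-Java sketch: TCP preserves the byte stream, not the packet boundaries, so re-chunking the same messages into arbitrary fragments yields an identical byte stream, and the receiver alone must restore the boundaries.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

// Plain-Java sketch of the stream caveat: "ABC" + "DEF" + "GHI" sent as
// three packets and "AB" + "CDEFG" + "H" + "I" read as four fragments are
// the same byte stream; only the boundaries are lost.
public class FragmentationDemo {
    static byte[] concat(List<String> chunks) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (String s : chunks) {
            byte[] bytes = s.getBytes(StandardCharsets.US_ASCII);
            out.write(bytes, 0, bytes.length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Sender's view: three independent messages.
        List<String> sent = Arrays.asList("ABC", "DEF", "GHI");
        // Receiver's view: the same bytes, arbitrarily fragmented.
        List<String> received = Arrays.asList("AB", "CDEFG", "H", "I");

        // The byte streams are identical even though the boundaries differ.
        System.out.println(Arrays.equals(concat(sent), concat(received))); // true
    }
}
```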
    <section>
      <title>
        The First Solution
      </title>
      <para>
        Now let us get back to the TIME client example. We have the same
        problem here. A 32-bit integer is a very small amount of data, and
        it is not likely to be fragmented often. However, the problem is
        that it <emphasis>can</emphasis> be fragmented, and the possibility
        of fragmentation will increase as the traffic increases.
      </para>
      <para>
        The simplest solution is to create an internal cumulative buffer and
        wait until all 4 bytes are received into that internal buffer. The
        following is the modified <classname>TimeClientHandler</classname>
        implementation that fixes the problem:
      </para>
      <programlisting>package org.jboss.netty.example.time;

import static org.jboss.netty.buffer.&ChannelBuffers;.*;

import java.util.Date;

public class TimeClientHandler extends &SimpleChannelHandler; {

    private final &ChannelBuffer; buf = dynamicBuffer();<co id="example.time3.co1"/>

    @Override
    public void messageReceived(&ChannelHandlerContext; ctx, &MessageEvent; e) {
        &ChannelBuffer; m = (&ChannelBuffer;) e.getMessage();
        buf.writeBytes(m);<co id="example.time3.co2"/>

        if (buf.readableBytes() >= 4) {<co id="example.time3.co3"/>
            long currentTimeMillis = buf.readInt() * 1000L;
            System.out.println(new Date(currentTimeMillis));
            e.getChannel().close();
        }
    }

    @Override
    public void exceptionCaught(&ChannelHandlerContext; ctx, &ExceptionEvent; e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}</programlisting>
      <calloutlist>
        <callout arearefs="example.time3.co1">
          <para>
            A <firstterm>dynamic buffer</firstterm> is a &ChannelBuffer;
            which increases its capacity on demand. It is very useful when
            you don't know the length of the message in advance.
          </para>
        </callout>
        <callout arearefs="example.time3.co2">
          <para>
            First, all received data should be cumulated into
            <varname>buf</varname>.
          </para>
        </callout>
        <callout arearefs="example.time3.co3">
          <para>
            Then, the handler must check whether <varname>buf</varname> has
            enough data, 4 bytes in this example, and proceed to the actual
            business logic. Otherwise, Netty will call the
            <methodname>messageReceived</methodname> method again when more
            data arrives, and eventually all 4 bytes will be cumulated.
          </para>
        </callout>
      </calloutlist>
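The accumulate-until-complete pattern above can be sketched without Netty. In this plain-Java illustration (the class and method names are made up), fragments are appended to an internal buffer and the 32-bit big-endian value is decoded only once at least 4 bytes have accumulated:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Plain-Java sketch of the cumulative-buffer idea (no Netty): fragments
// are appended to an internal buffer, and the big-endian int is decoded
// only once 4 bytes have accumulated.
public class Accumulator {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();

    // Feed one fragment; returns the decoded value once 4 bytes are
    // available, or null while more data is still needed.
    public Integer feed(byte[] fragment) {
        buf.write(fragment, 0, fragment.length);
        if (buf.size() < 4) {
            return null; // not enough data yet; wait for the next fragment
        }
        return ByteBuffer.wrap(buf.toByteArray()).getInt();
    }

    public static void main(String[] args) {
        Accumulator acc = new Accumulator();
        System.out.println(acc.feed(new byte[] { 0x00, 0x00 })); // null
        System.out.println(acc.feed(new byte[] { 0x01, 0x02 })); // 258
    }
}
```

Just like the modified handler, the caller simply keeps feeding fragments; the decision of when enough data has arrived lives inside the accumulator.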
    </section>
    <section>
      <title>
        The Second Solution
      </title>
      <para>
        Although the first solution has resolved the problem with the TIME
        client, the modified handler does not look that clean. Imagine a
        more complicated protocol which is composed of multiple fields such
        as a variable-length field. Your &ChannelHandler; implementation
        will become unmaintainable very quickly.
      </para>
      <para>
        As you may have noticed, you can add more than one &ChannelHandler;
        to a &ChannelPipeline;, and therefore, you can split one monolithic
        &ChannelHandler; into multiple modular ones to reduce the complexity
        of your application. For example, you could split
        <classname>TimeClientHandler</classname> into two handlers:
        <itemizedlist>
          <listitem>
            <para>
              <classname>TimeDecoder</classname>, which deals with the
              fragmentation issue, and
            </para>
          </listitem>
          <listitem>
            <para>
              the initial simple version of
              <classname>TimeClientHandler</classname>.
            </para>
          </listitem>
        </itemizedlist>
      </para>
      <para>
        Fortunately, Netty provides an extensible class which helps you
        write the first one out of the box:
      </para>
      <programlisting>package org.jboss.netty.example.time;

public class TimeDecoder extends &FrameDecoder;<co id="example.time4.co1"/> {

    @Override
    protected Object decode(
            &ChannelHandlerContext; ctx, &Channel; channel, &ChannelBuffer; buffer)<co id="example.time4.co2"/> {

        if (buffer.readableBytes() &lt; 4) {
            return null; <co id="example.time4.co3"/>
        }

        return buffer.readBytes(4);<co id="example.time4.co4"/>
    }
}</programlisting>
      <calloutlist>
        <callout arearefs="example.time4.co1">
          <para>
            &FrameDecoder; is an implementation of &ChannelHandler; which
            makes it easy to deal with the fragmentation issue.
          </para>
        </callout>
        <callout arearefs="example.time4.co2">
          <para>
            &FrameDecoder; calls the <methodname>decode</methodname> method
            with an internally maintained cumulative buffer whenever new
            data is received.
          </para>
        </callout>
        <callout arearefs="example.time4.co3">
          <para>
            If <literal>null</literal> is returned, it means there is not
            enough data yet. &FrameDecoder; will call
            <methodname>decode</methodname> again when a sufficient amount
            of data has arrived.
          </para>
        </callout>
        <callout arearefs="example.time4.co4">
          <para>
            If a non-<literal>null</literal> value is returned, it means the
            <methodname>decode</methodname> method has decoded a message
            successfully. &FrameDecoder; will discard the read part of its
            internal cumulative buffer. Please remember that you don't need
            to decode multiple messages yourself; &FrameDecoder; will keep
            calling the <methodname>decode</methodname> method until it
            returns <literal>null</literal>.
          </para>
        </callout>
      </calloutlist>
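The call-until-null contract described in the callouts can be mimicked in plain Java. The sketch below is only an illustration of that contract, not Netty's internals; the class and method names are made up.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Plain-Java sketch of FrameDecoder's call-until-null contract: extract
// as many complete 4-byte frames as possible, leaving any partial frame
// buffered for later. Not Netty's actual code; names are made up.
public class FrameLoop {
    static List<byte[]> decodeAll(List<Byte> cumulation) {
        List<byte[]> frames = new ArrayList<>();
        while (cumulation.size() >= 4) {         // decode() would return non-null
            byte[] frame = new byte[4];
            for (int i = 0; i < 4; i++) {
                frame[i] = cumulation.remove(0); // consume the read part
            }
            frames.add(frame);
        }
        return frames;                           // stop once decode() returns null
    }

    public static void main(String[] args) {
        List<Byte> buf = new ArrayList<>(Arrays.asList(
                (byte) 1, (byte) 2, (byte) 3, (byte) 4,
                (byte) 5, (byte) 6, (byte) 7, (byte) 8, (byte) 9));
        List<byte[]> frames = decodeAll(buf);
        System.out.println(frames.size()); // 2 complete frames extracted
        System.out.println(buf.size());    // 1 leftover byte stays buffered
    }
}
```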
      <para>
        Now that we have another handler to insert into the
        &ChannelPipeline;, we should modify the &ChannelPipelineFactory;
        implementation in the <classname>TimeClient</classname>:
      </para>
      <programlisting>bootstrap.setPipelineFactory(new &ChannelPipelineFactory;() {
    public &ChannelPipeline; getPipeline() {
        return &Channels;.pipeline(
                new TimeDecoder(),
                new TimeClientHandler());
    }
});</programlisting>
      <para>
        If you are an adventurous person, you might want to try
        &ReplayingDecoder;, which simplifies the decoder even more. You will
        need to consult the API reference for more information, though.
      </para>
      <programlisting>package org.jboss.netty.example.time;

public class TimeDecoder extends &ReplayingDecoder;<&VoidEnum;> {

    @Override
    protected Object decode(
            &ChannelHandlerContext; ctx, &Channel; channel,
            &ChannelBuffer; buffer, &VoidEnum; state) {

        return buffer.readBytes(4);
    }
}</programlisting>
      <para>
        Additionally, Netty provides out-of-the-box decoders which enable
        you to implement most protocols very easily and help you avoid
        ending up with a monolithic, unmaintainable handler implementation.
        Please refer to the following packages for more detailed examples:
        <itemizedlist>
          <listitem>
            <para>
              <literal>org.jboss.netty.example.factorial</literal> for
              a binary protocol, and
            </para>
          </listitem>
          <listitem>
            <para>
              <literal>org.jboss.netty.example.telnet</literal> for
              a text line-based protocol.
            </para>
          </listitem>
        </itemizedlist>
      </para>
    </section>
  </section>
|
|
|
|
  <section id="start.pojo">
    <title>
      Speaking in POJO instead of ChannelBuffer
    </title>
    <para>
      All the examples we have reviewed so far used a &ChannelBuffer; as the
      primary data structure of a protocol message. In this section, we will
      improve the TIME protocol client and server example to use a
      <ulink url="http://en.wikipedia.org/wiki/POJO">POJO</ulink> instead of a
      &ChannelBuffer;.
    </para>
    <para>
      The advantage of using a POJO in your &ChannelHandler; is obvious;
      your handler becomes more maintainable and reusable when the code
      which extracts information from a &ChannelBuffer; is separated out of
      the handler. In the TIME client and server examples, we read only one
      32-bit integer, so using a &ChannelBuffer; directly is not a major
      issue. However, you will find the separation necessary as you
      implement a real-world protocol.
    </para>
    <para>
      First, let us define a new type called <classname>UnixTime</classname>.
    </para>
    <programlisting>package org.jboss.netty.example.time;

import java.util.Date;

public class UnixTime {
    private final int value;

    public UnixTime(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    @Override
    public String toString() {
        return new Date(value * 1000L).toString();
    }
}</programlisting>
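    <para>
      As an aside, the <literal>value * 1000L</literal> in
      <methodname>toString()</methodname> deserves a closer look: multiplying
      by the <literal>long</literal> literal <literal>1000L</literal> makes
      the seconds-to-milliseconds conversion happen in 64-bit arithmetic and
      avoids 32-bit overflow. The following self-contained sketch (the class
      name <classname>UnixTimeDemo</classname> is ours, not part of the
      example, and plain JDK classes are used) illustrates the difference:
    </para>
    <programlisting>import java.util.Date;

public class UnixTimeDemo {

    public static void main(String[] args) {
        int secondsSinceEpoch = 1000000000; // 2001-09-09T01:46:40Z

        // Correct: 'int * long' is promoted to 64-bit arithmetic.
        long millis = secondsSinceEpoch * 1000L;
        System.out.println(new Date(millis));

        // Wrong: 'int * int' silently wraps around before
        // the result is widened to long.
        long wrapped = secondsSinceEpoch * 1000;
        System.out.println(wrapped); // -727379968
    }
}</programlisting>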
    <para>
      We can now revise the <classname>TimeDecoder</classname> to return
      a <classname>UnixTime</classname> instead of a &ChannelBuffer;.
    </para>
    <programlisting>@Override
protected Object decode(
        &ChannelHandlerContext; ctx, &Channel; channel, &ChannelBuffer; buffer) {
    if (buffer.readableBytes() &lt; 4) {
        return null;
    }

    return new UnixTime(buffer.readInt());<co id="example.time5.co1"/>
}</programlisting>
    <calloutlist>
      <callout arearefs="example.time5.co1">
        <para>
          &FrameDecoder; and &ReplayingDecoder; allow you to return an object
          of any type. If they were restricted to returning only a
          &ChannelBuffer;, we would have to insert another &ChannelHandler;
          which transforms a &ChannelBuffer; into a
          <classname>UnixTime</classname>.
        </para>
      </callout>
    </calloutlist>
    <para>
      With the updated decoder, the <classname>TimeClientHandler</classname>
      does not use a &ChannelBuffer; anymore:
    </para>
    <programlisting>@Override
public void messageReceived(&ChannelHandlerContext; ctx, &MessageEvent; e) {
    UnixTime m = (UnixTime) e.getMessage();
    System.out.println(m);
    e.getChannel().close();
}</programlisting>
    <para>
      Much simpler and more elegant, right? The same technique can be applied
      on the server side. Let us update the
      <classname>TimeServerHandler</classname> first this time:
    </para>
    <programlisting>@Override
public void channelConnected(&ChannelHandlerContext; ctx, &ChannelStateEvent; e) {
    UnixTime time = new UnixTime((int) (System.currentTimeMillis() / 1000));
    &ChannelFuture; f = e.getChannel().write(time);
    f.addListener(&ChannelFutureListener;.CLOSE);
}</programlisting>
    <para>
      Now, the only missing piece is an encoder, an implementation of
      &ChannelHandler; that translates a <classname>UnixTime</classname> back
      into a &ChannelBuffer;. It is much simpler than writing a decoder
      because there is no need to deal with packet fragmentation and assembly
      when encoding a message.
    </para>
    <programlisting>package org.jboss.netty.example.time;

import static org.jboss.netty.buffer.&ChannelBuffers;.*;

public class TimeEncoder extends &SimpleChannelHandler; {

    public void writeRequested(&ChannelHandlerContext; ctx, &MessageEvent;<co id="example.time6.co1"/> e) {
        UnixTime time = (UnixTime) e.getMessage();

        &ChannelBuffer; buf = buffer(4);
        buf.writeInt(time.getValue());

        &Channels;.write(ctx, e.getFuture(), buf);<co id="example.time6.co2"/>
    }
}</programlisting>
    <calloutlist>
      <callout arearefs="example.time6.co1">
        <para>
          An encoder overrides the <methodname>writeRequested</methodname>
          method to intercept a write request. Please note that the
          &MessageEvent; parameter here is of the same type as the one
          specified in <methodname>messageReceived</methodname>, but they are
          interpreted differently. A &ChannelEvent; can be either an
          <firstterm>upstream</firstterm> or <firstterm>downstream</firstterm>
          event depending on the direction in which the event flows.
          For instance, a &MessageEvent; is an upstream event when passed to
          <methodname>messageReceived</methodname> but a downstream event
          when passed to <methodname>writeRequested</methodname>.
          Please refer to the API reference to learn more about the
          difference between an upstream event and a downstream event.
        </para>
      </callout>
      <callout arearefs="example.time6.co2">
        <para>
          Once done transforming a POJO into a &ChannelBuffer;, you should
          forward the new buffer to the previous &ChannelDownstreamHandler;
          in the &ChannelPipeline;. &Channels; provides various helper
          methods which generate and send a &ChannelEvent;. In this example,
          the &Channels;<literal>.write(...)</literal> method creates a new
          &MessageEvent; and sends it to the previous
          &ChannelDownstreamHandler; in the &ChannelPipeline;.
        </para>
        <para>
          Also, it is often a good idea to use static imports for
          &Channels;:
          <programlisting>import static org.jboss.netty.channel.&Channels;.*;
...
&ChannelPipeline; pipeline = pipeline();
write(ctx, e.getFuture(), buf);
fireChannelDisconnected(ctx);</programlisting>
        </para>
      </callout>
    </calloutlist>
    <para>
      The last task left is to insert a <classname>TimeEncoder</classname>
      into the &ChannelPipeline; on the server side, and it is left as a
      trivial exercise.
    </para>
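    <para>
      For reference, what the encoder puts on the wire is just the four bytes
      of the time value in big-endian (network) byte order, which is what
      <methodname>writeInt()</methodname> produces. The following
      self-contained sketch (the class name
      <classname>BigEndianIntDemo</classname> is ours, not part of the
      example) shows the same byte layout using the plain JDK
      <classname>java.nio.ByteBuffer</classname> instead of a &ChannelBuffer;:
    </para>
    <programlisting>import java.nio.ByteBuffer;
import java.util.Arrays;

public class BigEndianIntDemo {

    public static void main(String[] args) {
        // ByteBuffer is big-endian by default, like ChannelBuffer.writeInt().
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putInt(0x12345678);
        System.out.println(Arrays.toString(buf.array()));
        // [18, 52, 86, 120] - i.e. 0x12, 0x34, 0x56, 0x78
    }
}</programlisting>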
</section>
  <section>
    <title>
      Shutting Down Your Application
    </title>
    <para>
      If you ran the <classname>TimeClient</classname>, you must have noticed
      that the application does not exit but just keeps running, doing
      nothing. Looking at a full thread dump, you will also find that a
      couple of I/O threads are running. To shut down the I/O threads and let
      the application exit gracefully, you need to release the resources
      allocated by &ChannelFactory;.
    </para>
    <para>
      The shutdown process of a typical network application is composed of
      the following three steps:
      <orderedlist>
        <listitem>
          <para>
            Close all server sockets if there are any,
          </para>
        </listitem>
        <listitem>
          <para>
            Close all non-server sockets (i.e. client sockets and accepted
            sockets) if there are any, and
          </para>
        </listitem>
        <listitem>
          <para>
            Release all resources used by &ChannelFactory;.
          </para>
        </listitem>
      </orderedlist>
    </para>
    <para>
      To apply the three steps above to the <classname>TimeClient</classname>,
      <methodname>TimeClient.main()</methodname> could shut itself down
      gracefully by closing the only client connection and releasing all
      resources used by &ChannelFactory;:
    </para>
    <programlisting>package org.jboss.netty.example.time;

public class TimeClient {
    public static void main(String[] args) throws Exception {
        ...
        &ChannelFactory; factory = ...;
        &ClientBootstrap; bootstrap = ...;
        ...
        &ChannelFuture; future<co id="example.time7.co1"/> = bootstrap.connect(...);
        future.awaitUninterruptibly();<co id="example.time7.co2"/>
        if (!future.isSuccess()) {
            future.getCause().printStackTrace();<co id="example.time7.co3"/>
        }
        future.getChannel().getCloseFuture().awaitUninterruptibly();<co id="example.time7.co4"/>
        factory.releaseExternalResources();<co id="example.time7.co5"/>
    }
}</programlisting>
    <calloutlist>
      <callout arearefs="example.time7.co1">
        <para>
          The <methodname>connect</methodname> method of &ClientBootstrap;
          returns a &ChannelFuture; which notifies you when the connection
          attempt succeeds or fails. It also has a reference to the &Channel;
          which is associated with the connection attempt.
        </para>
      </callout>
      <callout arearefs="example.time7.co2">
        <para>
          Wait for the returned &ChannelFuture; to determine whether the
          connection attempt was successful or not.
        </para>
      </callout>
      <callout arearefs="example.time7.co3">
        <para>
          If the attempt failed, we print the cause of the failure to learn
          why it failed. The <methodname>getCause()</methodname> method of
          &ChannelFuture; returns the cause of the failure if the connection
          attempt was neither successful nor cancelled.
        </para>
      </callout>
      <callout arearefs="example.time7.co4">
        <para>
          Now that the connection attempt is over, we need to wait until the
          connection is closed by waiting for the
          <varname>closeFuture</varname> of the &Channel;. Every &Channel;
          has its own <varname>closeFuture</varname>, so that you are
          notified and can perform a certain action on closure.
        </para>
        <para>
          Even if the connection attempt has failed, the
          <varname>closeFuture</varname> will be notified, because the
          &Channel; is closed automatically when the connection attempt
          fails.
        </para>
      </callout>
      <callout arearefs="example.time7.co5">
        <para>
          All connections have been closed at this point. The only task left
          is to release the resources being used by &ChannelFactory;. It is
          as simple as calling its
          <methodname>releaseExternalResources()</methodname> method. All
          resources, including the NIO <classname>Selector</classname>s and
          thread pools, will be shut down and terminated automatically.
        </para>
      </callout>
    </calloutlist>
    <para>
      Shutting down a client was pretty easy, but how about shutting down a
      server? You need to unbind from the port and close all open accepted
      connections. To do this, you need a data structure that keeps track of
      the active connections, and that is not a trivial task. Fortunately,
      there is a solution: &ChannelGroup;.
    </para>
    <para>
      &ChannelGroup; is a special extension of the Java Collections API which
      represents a set of open &Channel;s. If a &Channel; is added to a
      &ChannelGroup; and the added &Channel; is closed, the closed &Channel;
      is removed from its &ChannelGroup; automatically. You can also perform
      an operation on all &Channel;s in the same group. For instance, you can
      close all &Channel;s in a &ChannelGroup; when you shut down your
      server.
    </para>
    <para>
      To keep track of open sockets, you need to modify the
      <classname>TimeServerHandler</classname> to add a new open &Channel; to
      the global &ChannelGroup;, <varname>TimeServer.allChannels</varname>:
    </para>
    <programlisting>@Override
public void channelOpen(&ChannelHandlerContext; ctx, &ChannelStateEvent; e) {
    TimeServer.allChannels.add(e.getChannel());<co id="example.time8.co1"/>
}</programlisting>
    <calloutlist>
      <callout arearefs="example.time8.co1">
        <para>
          Yes, &ChannelGroup; is thread-safe.
        </para>
      </callout>
    </calloutlist>
    <para>
      Now that the list of all active &Channel;s is maintained automatically,
      shutting down a server is as easy as shutting down a client:
    </para>
    <programlisting>package org.jboss.netty.example.time;

public class TimeServer {

    static final &ChannelGroup; allChannels = new &DefaultChannelGroup;("time-server"<co id="example.time9.co1"/>);

    public static void main(String[] args) throws Exception {
        ...
        &ChannelFactory; factory = ...;
        &ServerBootstrap; bootstrap = ...;
        ...
        &Channel; channel<co id="example.time9.co2"/> = bootstrap.bind(...);
        allChannels.add(channel);<co id="example.time9.co3"/>
        waitForShutdownCommand();<co id="example.time9.co4"/>
        &ChannelGroupFuture; future = allChannels.close();<co id="example.time9.co5"/>
        future.awaitUninterruptibly();
        factory.releaseExternalResources();
    }
}</programlisting>
    <calloutlist>
      <callout arearefs="example.time9.co1">
        <para>
          &DefaultChannelGroup; requires the name of the group as a
          constructor parameter. The group name is used solely to distinguish
          one group from others.
        </para>
      </callout>
      <callout arearefs="example.time9.co2">
        <para>
          The <methodname>bind</methodname> method of &ServerBootstrap;
          returns a server-side &Channel; which is bound to the specified
          local address. Calling the <methodname>close()</methodname> method
          of the returned &Channel; will make the &Channel; unbind from the
          bound local address.
        </para>
      </callout>
      <callout arearefs="example.time9.co3">
        <para>
          Any type of &Channel; can be added to a &ChannelGroup;, whether it
          is server-side, client-side, or accepted. Therefore, you can close
          the bound &Channel; along with the accepted &Channel;s in one shot
          when the server shuts down.
        </para>
      </callout>
      <callout arearefs="example.time9.co4">
        <para>
          <methodname>waitForShutdownCommand()</methodname> is an imaginary
          method that waits for the shutdown signal. You could wait for a
          message from a privileged client or for a JVM shutdown hook, for
          example.
        </para>
      </callout>
      <callout arearefs="example.time9.co5">
        <para>
          You can perform the same operation on all channels in the same
          &ChannelGroup;. In this case, we close all channels, which means
          the bound server-side &Channel; will be unbound and all accepted
          connections will be closed asynchronously. The
          <methodname>close()</methodname> method returns a
          &ChannelGroupFuture;, which plays a role similar to that of a
          &ChannelFuture;, so you can be notified when all connections have
          been closed.
        </para>
      </callout>
    </calloutlist>
  </section>
  <section>
    <title>
      Summary
    </title>
    <para>
      In this chapter, we had a quick tour of Netty with a demonstration of
      how to write a fully working network application on top of Netty.
      Further questions you may have will be covered in the upcoming chapters
      and in the revised version of this chapter. Please also note that the
      <ulink url="&Community;">community</ulink> is always waiting for your
      questions and ideas to help you and to keep improving Netty based on
      your feedback.
    </para>
  </section>
</chapter>