netty5/transport/src/main/java/io/netty/channel/SingleThreadEventLoop.java

/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.channel;
import static io.netty.util.internal.ObjectUtil.checkPositive;
import static java.util.Objects.requireNonNull;
import io.netty.util.concurrent.RejectedExecutionHandler;
import io.netty.util.concurrent.RejectedExecutionHandlers;
import io.netty.util.concurrent.SingleThreadEventExecutor;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.SystemPropertyUtil;
import java.util.Queue;
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadFactory;
/**
* {@link EventLoop} that executes all its submitted tasks in a single thread and uses an {@link IoHandler} for
* IO processing.
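*
* <p>A minimal construction sketch, assuming a transport-specific {@link IoHandler} is available
* (for example one created by the transport's {@code IoHandlerFactory}); the thread factory and
* the submitted task are only illustrative:
* <pre>{@code
* IoHandler ioHandler = ...; // supplied by the transport
* EventLoop loop = new SingleThreadEventLoop(Executors.defaultThreadFactory(), ioHandler);
* loop.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
* }</pre>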
*/
public class SingleThreadEventLoop extends SingleThreadEventExecutor implements EventLoop {
protected static final int DEFAULT_MAX_PENDING_TASKS = Math.max(16,
SystemPropertyUtil.getInt("io.netty.eventLoop.maxPendingTasks", Integer.MAX_VALUE));
// TODO: Is this a sensible default ?
protected static final int DEFAULT_MAX_TASKS_PER_RUN = Math.max(1,
SystemPropertyUtil.getInt("io.netty.eventLoop.maxTaskPerRun", 1024 * 4));
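// Both defaults can be overridden at JVM startup via the system properties referenced above, e.g.
// -Dio.netty.eventLoop.maxPendingTasks=32768 -Dio.netty.eventLoop.maxTaskPerRun=8192
// (illustrative values only; the Math.max lower bounds of 16 and 1 above still apply).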
private final IoExecutionContext context = new IoExecutionContext() {
@Override
public boolean canBlock() {
assert inEventLoop();
return !SingleThreadEventLoop.this.hasTasks() && !SingleThreadEventLoop.this.hasScheduledTasks();
}
@Override
public long delayNanos(long currentTimeNanos) {
assert inEventLoop();
return SingleThreadEventLoop.this.delayNanos(currentTimeNanos);
}
@Override
public long deadlineNanos() {
assert inEventLoop();
return SingleThreadEventLoop.this.deadlineNanos();
}
};
private final Unsafe unsafe = new Unsafe() {
@Override
public void register(Channel channel) throws Exception {
SingleThreadEventLoop.this.register(channel);
}
@Override
public void deregister(Channel channel) throws Exception {
SingleThreadEventLoop.this.deregister(channel);
}
};
private final IoHandler ioHandler;
private final int maxTasksPerRun;
/**
* Create a new instance
*
* @param threadFactory the {@link ThreadFactory} which will be used to create the {@link Thread} that runs this {@link EventLoop}
* @param ioHandler the {@link IoHandler} to use.
*/
public SingleThreadEventLoop(ThreadFactory threadFactory, IoHandler ioHandler) {
this(threadFactory, ioHandler, DEFAULT_MAX_PENDING_TASKS, RejectedExecutionHandlers.reject());
}
/**
* Create a new instance
*
* @param executor the {@link Executor} which will be used to run this {@link EventLoop}.
* @param ioHandler the {@link IoHandler} to use.
*/
public SingleThreadEventLoop(Executor executor, IoHandler ioHandler) {
this(executor, ioHandler, DEFAULT_MAX_PENDING_TASKS, RejectedExecutionHandlers.reject());
}
/**
* Create a new instance
*
* @param threadFactory the {@link ThreadFactory} which will be used to create the {@link Thread} that runs this {@link EventLoop}
* @param ioHandler the {@link IoHandler} to use.
* @param maxPendingTasks the maximum number of pending tasks before new tasks will be rejected.
* @param rejectedHandler the {@link RejectedExecutionHandler} to use.
*/
public SingleThreadEventLoop(ThreadFactory threadFactory,
IoHandler ioHandler, int maxPendingTasks,
RejectedExecutionHandler rejectedHandler) {
this(threadFactory, ioHandler, maxPendingTasks, rejectedHandler, DEFAULT_MAX_TASKS_PER_RUN);
}
/**
* Create a new instance
*
* @param executor the {@link Executor} which will be used to run this {@link EventLoop}.
* @param ioHandler the {@link IoHandler} to use.
* @param maxPendingTasks the maximum number of pending tasks before new tasks will be rejected.
* @param rejectedHandler the {@link RejectedExecutionHandler} to use.
*/
public SingleThreadEventLoop(Executor executor,
IoHandler ioHandler, int maxPendingTasks,
RejectedExecutionHandler rejectedHandler) {
this(executor, ioHandler, maxPendingTasks, rejectedHandler, DEFAULT_MAX_TASKS_PER_RUN);
}
/**
* Create a new instance
*
* @param threadFactory the {@link ThreadFactory} which will be used to create the {@link Thread} that runs this {@link EventLoop}
* @param ioHandler the {@link IoHandler} to use.
* @param maxPendingTasks the maximum number of pending tasks before new tasks will be rejected.
* @param rejectedHandler the {@link RejectedExecutionHandler} to use.
* @param maxTasksPerRun the maximum number of tasks per {@link EventLoop} run that will be processed
* before trying to handle IO again.
*/
public SingleThreadEventLoop(ThreadFactory threadFactory,
IoHandler ioHandler, int maxPendingTasks,
RejectedExecutionHandler rejectedHandler, int maxTasksPerRun) {
super(threadFactory, maxPendingTasks, rejectedHandler);
this.ioHandler = requireNonNull(ioHandler, "ioHandler");
this.maxTasksPerRun = checkPositive(maxTasksPerRun, "maxTasksPerRun");
}
/**
* Create a new instance
*
* @param executor the {@link Executor} which will be used to run this {@link EventLoop}.
* @param ioHandler the {@link IoHandler} to use.
* @param maxPendingTasks the maximum number of pending tasks before new tasks will be rejected.
* @param rejectedHandler the {@link RejectedExecutionHandler} to use.
* @param maxTasksPerRun the maximum number of tasks per {@link EventLoop} run that will be processed
* before trying to handle IO again.
*/
public SingleThreadEventLoop(Executor executor,
IoHandler ioHandler, int maxPendingTasks,
RejectedExecutionHandler rejectedHandler, int maxTasksPerRun) {
super(executor, maxPendingTasks, rejectedHandler);
this.ioHandler = requireNonNull(ioHandler, "ioHandler");
this.maxTasksPerRun = checkPositive(maxTasksPerRun, "maxTasksPerRun");
}
@Override
protected Queue<Runnable> newTaskQueue(int maxPendingTasks) {
// This event loop never calls takeTask()
return maxPendingTasks == Integer.MAX_VALUE ? PlatformDependent.newMpscQueue()
: PlatformDependent.newMpscQueue(maxPendingTasks);
}
@Override
protected final boolean wakesUpForTask(Runnable task) {
return !(task instanceof NonWakeupRunnable);
}
/**
* Marker interface for {@link Runnable}s that can be submitted without triggering a {@link #wakeup(boolean)}.
*/
interface NonWakeupRunnable extends Runnable { }
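// A hedged usage sketch: NonWakeupRunnable adds no methods to Runnable, so a lambda can be cast
// to it, and because it is package-private only code in this package can use it. Since
// wakesUpForTask(Runnable) above returns false for such tasks, submitting one queues it without
// forcing the IoHandler out of its blocking wait:
//
//     eventLoop.execute((NonWakeupRunnable) () -> { /* loop-local bookkeeping */ });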
@Override
public final Unsafe unsafe() {
return unsafe;
}
// Methods that a user can override to easily add instrumentation and other things.
@Override
protected void run() {
assert inEventLoop();
do {
runIo();
if (isShuttingDown()) {
ioHandler.prepareToDestroy();
}
runAllTasks(maxTasksPerRun);
} while (!confirmShutdown());
}
/**
* Called when IO will be processed for all the {@link Channel}s on this {@link SingleThreadEventLoop}.
* This method returns the number of {@link Channel}s for which IO was processed.
*
* This method must be called from the {@link EventLoop} thread.
*/
protected int runIo() {
assert inEventLoop();
return ioHandler.run(context);
}
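// A hedged sketch (class and field names are hypothetical) of the kind of subclass the
// "user can override" comment above has in mind, adding instrumentation by overriding runIo():
//
//     public class InstrumentedEventLoop extends SingleThreadEventLoop {
//         private long channelsHandled;
//
//         public InstrumentedEventLoop(ThreadFactory threadFactory, IoHandler ioHandler) {
//             super(threadFactory, ioHandler);
//         }
//
//         @Override
//         protected int runIo() {
//             int handled = super.runIo();
//             channelsHandled += handled; // runIo() only ever runs on the event loop thread
//             return handled;
//         }
//     }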
/**
* Called once a {@link Channel} should be registered on this {@link SingleThreadEventLoop}.
*
* This method must be called from the {@link EventLoop} thread.
*/
protected void register(Channel channel) throws Exception {
assert inEventLoop();
ioHandler.register(channel);
}
/**
* Called once a {@link Channel} should be deregistered from this {@link SingleThreadEventLoop}.
*/
protected void deregister(Channel channel) throws Exception {
assert inEventLoop();
ioHandler.deregister(channel);
}
@Override
protected final void wakeup(boolean inEventLoop) {
ioHandler.wakeup(inEventLoop);
}
@Override
protected final void cleanup() {
assert inEventLoop();
ioHandler.destroy();
}
}