The Servlet API and NIO: Together at last

Contents:
Threads on a budget
The Servlet API and NIO
The example server
The Server class
Request processing
Running the example
Performance results
Conclusion
Resources
Download
About the author
Build your own Servlet-based Web server, with nonblocking I/O

Level: Intermediate

Taylor Cowan (taylor_cowan@yahoo.com)
Senior Software Systems Engineer, Travelocity
03 Feb 2004

Think it's impossible to combine NIO and the Servlet API? Think again. In this article, Java developer Taylor Cowan shows you how to apply the producer/consumer model to consume nonblocking I/O, thus easing the Servlet API into a whole new compatibility with NIO. In the process, you'll see what it takes to build an actual Servlet-based Web server that implements NIO, and you'll find out how that server stacks up against a standard Java I/O server (Tomcat 5.0) in an enterprise environment.

NIO was among the most celebrated (if not the most glamorous) additions to the Java platform with JDK 1.4. Many articles followed, explaining the basics of NIO and how to leverage the benefits of nonblocking channels. One thing missing through all this, however, was an adequate demonstration of just how NIO might improve the scalability of a J2EE Web tier. For the enterprise developer this information is particularly relevant, because implementing NIO isn't as simple as changing a few import statements to a new I/O package. First, the Servlet API assumes blocking I/O semantics, so it can't take advantage of nonblocking I/O by default. Second, threads aren't the resource hogs they were in JDK 1.0, so using fewer threads does not necessarily indicate a server's ability to handle more clients.

In this article, you'll learn how to work around the Servlet API's aversion to nonblocking I/O to create a Servlet-based Web server that implements NIO. We'll then see how this server scales against a standard I/O server (Tomcat 5.0) in a multiplexed Web server environment. In keeping with the realities of life in the enterprise, we'll focus on how NIO compares to standard I/O when an exponentially increasing number of clients retain their socket connections.

Note that this article is for Java developers familiar with the basics of I/O programming on the Java platform. See the Resources section for an introduction to nonblocking I/O.

Threads on a budget
Threads have a well-earned reputation for being expensive. In the early days of the Java platform (JDK 1.0), thread overhead was such a burden that developers were forced to custom build solutions. One common workaround was to use a pool of threads created at VM startup, rather than creating each new thread on demand. Despite recent improvements to thread performance at the VM layer, standard I/O still requires that a unique thread be allocated to handle each new open socket. This works well enough in the short term, but standard I/O falls short when the number of threads increases beyond 1K. The CPU simply becomes overburdened by context switching between threads.

With the introduction of NIO in JDK 1.4, enterprise developers finally have a built-in alternative to the thread-per-user model: multiplexed I/O allows a growing number of users to be served by a fixed number of threads.

Multiplexing refers to the sending of multiple signals, or streams, simultaneously over a single carrier. A day-to-day example of multiplexing occurs when we use a cell phone. Wireless frequencies are a scarce resource, so wireless providers use multiplexing to send multiple calls over a single frequency. In one example, calls are divided into segments that are given a very short time duration and reassembled at the receiving end. This is called time-division multiplexing, or TDM.

Within NIO the receiving end is comparable to a "selector" (see java.nio.channels.Selector). Instead of calls, the selector handles multiple open sockets. Just as in TDM, the selector reassembles segments of data being written from multiple clients. This allows the server to manage multiple clients with a single thread.
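
To make the selector concrete, here is a minimal setup sketch. The class name and port are illustrative, not taken from the example server: a ServerSocketChannel is put into nonblocking mode and registered with a Selector for accept events, and a select() loop like the one shown later in Listing 1 then drives everything from a single thread.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class SelectorSetup {
    public static void main(String[] args) throws IOException {
        // One selector multiplexes every client socket on a single thread.
        Selector selector = Selector.open();

        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.configureBlocking(false);                    // selectors require nonblocking channels
        serverChannel.socket().bind(new InetSocketAddress(8080));  // listen on port 8080 (illustrative)
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);  // be told about new connections

        // A select() loop like the one in Listing 1 would now take over.
    }
}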

The Servlet API and NIO
Nonblocking reads and writes are essential to NIO, but they don't come trouble free. A nonblocking read makes no guarantee to the caller beyond the fact that it won't block. The client or server application may read the complete message, a partial message, or nothing at all. On the other hand, a nonblocking read might pull in more than one message's worth of data, forcing the excess to be buffered for the next call. And finally, unlike with streams, a read that returns zero bytes does not indicate that the message has been fully received.
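
To make these behaviors concrete, the fragment below (an illustration only, not part of the example server) shows the three outcomes a single nonblocking read can produce:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class NonblockingReadDemo {
    static void readOnce(SocketChannel channel) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(4096);
        int count = channel.read(buffer);  // never blocks on a nonblocking channel
        if (count == -1) {
            // The peer closed the connection.
        } else if (count == 0) {
            // Nothing available right now -- this says nothing about whether
            // the logical message is complete.
        } else {
            buffer.flip();
            // 'count' bytes arrived: perhaps a fragment of one request,
            // perhaps the tail of one request plus the start of the next.
        }
    }
}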

These factors make it impossible to implement even a simple readline method without polling, yet every servlet container must provide a readline method on its input streams. As a result, many developers have given up on building a Servlet-based Web application server that implements NIO. Fortunately, there is a solution, one that combines the power of the Servlet API with the multiplexed I/O of NIO.

In the sections that follow, you'll learn how to apply the producer/consumer model to consume nonblocking I/O, using the java.io.PipedInputStream and java.io.PipedOutputStream classes. As the nonblocking channel is read, the data is written into a pipe that is being consumed by a second thread. Note that this decomposition maps threads differently from most Java-based client-server apps. Here, we have a thread solely responsible for processing a nonblocking channel (the producer) and another thread solely responsible for consuming the data as a stream (the consumer). Pipes also alleviate the nonblocking I/O problem for application servers, because servlets can assume blocking semantics as they consume the stream.
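
A minimal sketch of that pipe wiring follows. The class and method names are illustrative (the example server folds this logic into its Client class); the essential point is that the selector thread writes and a worker thread reads.

import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.ByteBuffer;

class PipeBridge {
    private final PipedOutputStream producerSide = new PipedOutputStream();
    private final PipedInputStream consumerSide;

    PipeBridge() throws IOException {
        consumerSide = new PipedInputStream(producerSide);  // connect the two ends
    }

    // Called by the selector (producer) thread after each nonblocking read.
    void push(ByteBuffer buffer) throws IOException {
        buffer.flip();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        producerSide.write(bytes);  // the pipe buffers the data for the consumer thread
    }

    // Handed to the worker (consumer) thread, which reads it with ordinary
    // blocking-stream semantics -- exactly what the Servlet API expects.
    PipedInputStream stream() {
        return consumerSide;
    }
}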

The example server
Our example server demonstrates the producer/consumer solution to the incompatibility of the Servlet API and NIO. The server is similar enough to the Servlet API to provide proof of concept for a full-fledged NIO-based application server, and it has been written specifically to measure the performance of NIO against standard Java I/O. It handles simple HTTP GET requests and supports keep-alive connections from clients. This is important because multiplexed I/O only proves beneficial when the server is required to handle a large number of open socket connections.

The server is divided into two packages, org.sse.server and org.sse.http. The server package holds classes that provide primary server functionality such as receiving new client connections, reading messages, and spawning worker threads to handle requests. The http package supports a subset of the HTTP protocol. A detailed explanation of HTTP is beyond the scope of this article. Download the code examples from the Resources section for implementation details.

Now, let's take a look at the most important classes in the org.sse.server package.

The Server class
The Server class holds the multiplexer loop, the heart of any NIO-based server. In Listing 1, the call to select() blocks until the server either receives a new client or detects available bytes being written to an open socket. The major difference between this and standard Java I/O is that all data is read within this loop. Normally, a new thread would be given the task of reading bytes from a particular socket. It is actually possible to handle many thousands of clients with a single thread using the NIO selector event-driven approach, although we'll see later that threads still have a role to play.

Each call to select() returns a collection of events indicating that a new client is available, new data is ready to read, or a client is ready to receive a response. The server's handleKey() method is only interested in new clients (key.isAcceptable()) or incoming data (key.isReadable()). At that point the work is passed off to the ServerEventHandler class.

Listing 1. Server.java selector loop


public void listen() {
    SelectionKey key = null;
    try {
        while (true) {
            selector.select();
            Iterator it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                key = (SelectionKey) it.next();
                handleKey(key);
                it.remove();
            }
        }
    } catch (IOException e) {
        key.cancel();
    } catch (NullPointerException e) {
        // NullPointer at sun.nio.ch.WindowsSelectorImpl, Bug: 4729342
        e.printStackTrace();
    }
}
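
The actual handleKey() implementation ships with the downloadable source; a plausible sketch, based on the description above, might look like the following. The handler field and its handleRead() method are assumed names, not the article's.

private void handleKey(SelectionKey key) throws IOException {
    if (key.isAcceptable()) {
        // A new client is connecting: accept it, make the socket nonblocking,
        // and register it with the same selector for read events.
        ServerSocketChannel server = (ServerSocketChannel) key.channel();
        SocketChannel client = server.accept();
        if (client != null) {
            client.configureBlocking(false);
            client.register(key.selector(), SelectionKey.OP_READ);
        }
    } else if (key.isReadable()) {
        // Bytes are waiting on an existing connection: hand the key to
        // ServerEventHandler, which feeds the client's pipe.
        handler.handleRead(key);
    }
}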

The ServerEventHandler class
The ServerEventHandler class responds to server events. When a new client becomes available it instantiates a new Client object representing the state of that client. Data is read from the channel in a nonblocking fashion and written to the Client object. The ServerEventHandler also maintains a queue of requests. A variable number of worker threads are spawned to process (consume) requests off the queue. In traditional producer/consumer fashion, Queue is written so that threads block when it becomes empty, and are notified when new requests are available.

In Listing 2, the remove() method has been overridden to support waiting threads. If the list is empty, the number of waiting threads is incremented and the current thread is blocked. This essentially provides a very simple thread pool.

Listing 2. Queue.java
import java.util.LinkedList;

public class Queue extends LinkedList {

    private int waitingThreads = 0;

    public synchronized void insert(Object obj) {
        addLast(obj);
        notify();
    }

    public synchronized Object remove() {
        if (isEmpty()) {
            try {
                waitingThreads++;
                wait();
            } catch (InterruptedException e) {
                Thread.interrupted();
            }
            waitingThreads--;
        }
        return removeFirst();
    }

    public boolean isEmpty() {
        return (size() - waitingThreads <= 0);
    }
}

The number of worker threads is independent of the number of Web clients. Instead of allocating one thread per open socket, we place all requests into a generic queue serviced by a set of RequestHandlerThread instances. Ideally, the number of threads should be tuned based on the number of processors and the expected duration of each request. If requests are long-running because of heavy resource or processing demands, the perceived quality of service can be improved by adding more threads.

Note that this doesn't necessarily improve overall throughput, but it does improve the user's experience. Even under heavy load each thread will be given a slice of processing time. This principle applies equally to servers based on standard Java I/O; however, those servers are limited in that they are required to allocate one thread per open socket connection. NIO servers are relieved of this and therefore can scale to larger numbers of users. The bottom line is that NIO servers still need threads, just not quite as many.
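
As a rough illustration, the startup code only has to spin up a small, fixed set of workers against the shared queue; the RequestHandlerThread constructor signature shown here is assumed, not taken from the downloadable source.

int workerCount = 4;  // tuned to CPU count and request duration, not to client count
Queue requestQueue = new Queue();
for (int i = 0; i < workerCount; i++) {
    new RequestHandlerThread(requestQueue).start();  // each worker blocks in remove() until work arrives
}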

Request processing
The Client class serves two purposes. First, it solves the blocking/nonblocking problem by converting the incoming nonblocking I/O into a blocking InputStream consumable by the Servlet API. Second, it manages the request state of a particular client. Because nonblocking channels give no indication when a message has been fully read, we are forced to handle this at the protocol layer. The Client class indicates at any given point in time whether it is currently involved in an ongoing request. If it is ready to handle a new request, the write() method enqueues the client for request processing. If it is already engaged in a request, it simply transforms the incoming bytes into an InputStream using the PipedInputStream and PipedOutputStream classes.

Figure 1 shows the interactions of two threads around a pipe. The main thread writes bytes read from the channel into the pipe. The pipe provides the same data to consumers as an InputStream. Another important feature of the pipe is that it is buffered. If it were not, the main thread could become blocked trying to write to the pipe. Because the main thread is solely responsible for multiplexing between all clients, we cannot afford to allow it to block.

Figure 1. PipedInput/OutputStream
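
A hypothetical sketch of the Client.write() method described above follows; apart from clientInputStream, the field and helper names are assumptions rather than the names used in the downloadable source.

public synchronized void write(ByteBuffer buffer) throws IOException {
    if (!requestInProgress) {
        // First bytes of a new request: mark the client busy and hand it to the
        // worker threads by enqueuing it on the shared request queue (assumed field).
        requestInProgress = true;
        requestQueue.insert(this);
    }
    // In either case, feed the bytes into the pipe; a worker thread consumes them
    // through clientInputStream with blocking-stream semantics.
    buffer.flip();
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    pipedOutputStream.write(bytes);
}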

After the Client has enqueued itself, it is ready to be consumed by a worker thread. The RequestHandlerThread class takes on this role. So far we've seen how the main thread loops continuously, either accepting new clients or reading new I/O. The worker threads loop awaiting new requests. When a client becomes available on the request queue, it is immediately consumed by the first waiting thread blocked on the remove() method.

Listing 3. RequestHandlerThread.java


public void run() {
    while (true) {
        Client client = (Client) myQueue.remove();
        try {
            for (;;) {
                HttpRequest req = new HttpRequest(client.clientInputStream,
                                                  myServletContext);
                HttpResponse res = new HttpResponse(client.key);
                defaultServlet.service(req, res);
                if (client.notifyRequestDone())
                    break;
            }
        } catch (Exception e) {
            client.key.cancel();
            client.key.selector().wakeup();
        }
    }
}

The thread then creates new HttpRequest and HttpResponse instances and invokes the service method of the default servlet. Notice that the HttpRequest is constructed with the clientInputStream property of the Client object. This is the PipedInputStream responsible for converting nonblocking I/O to a blocking stream.

From this point on, request processing is similar to what you would expect in the J2EE Servlet API. When the call to the servlet returns, the worker thread checks to see if another request is available from the same client before returning to the pool. Note that the word pool is used loosely here: the thread simply attempts another remove() call on the queue and blocks until the next request becomes available.

Running the example
The example server implements a subset of the HTTP 1.1 protocol. It processes normal HTTP GET requests. It takes two command-line arguments: the first specifies the port number and the second designates the directory where your HTML files reside. After unzipping the files, cd into the project directory and issue the following command, replacing the webroot directory with your own:



java -cp bin org.sse.server.Start 8080 "C:\mywebroot"

Also note that the server doesn't implement directory listings, so you must specify a valid URL pointing to a file under your webroot.

Performance results
The example NIO server was compared to Tomcat 5.0 under heavy load. Tomcat was chosen because it is a 100 percent Java solution based on standard Java I/O. Some advanced app servers are optimized with JNI native code to improve scalability and therefore don't offer a good comparison between standard I/O and NIO. The objective was to determine if NIO gives any considerable performance benefits and under what conditions.

Here are the specs:

  • Tomcat was configured with a maximum thread count of 2000 while the example server was only allowed to run with four worker threads.

  • Each server was tested against the same set of simple HTTP GET requests, consisting mostly of textual content.

  • The load tool (Microsoft Web Application Stress Tool) was set to use "keep-alive" sessions, resulting in roughly one socket per user. This in turn results in one thread per user on Tomcat, while the NIO server handles the same load with a constant number of threads.

Figure 2 shows the request-per-second rate under increasing load. At 200 users performance was similar. As the number of users exceeded 600, however, Tomcat's performance began to deteriorate drastically. This is most likely due to the cost of context switching between so many threads. In contrast, the NIO-based server's performance degraded in a linear fashion. Keep in mind that Tomcat must allocate one thread per user, while the NIO server was configured with only four worker threads.

Figure 2. Requests per second

Figure 3 provides further indication of NIO's performance. It shows the number of socket-connect errors per minute of operation. Again, Tomcat's performance deteriorated drastically at about 600 users, while the NIO-based server's error rate remained relatively low.

Figure 3. Socket-connect errors per second

Conclusion
In this article you've learned that it is indeed possible to write a Servlet-based Web server using NIO, even with its nonblocking features enabled. This is good news for enterprise developers, because NIO scales better than standard Java I/O in enterprise environments. Unlike standard Java I/O, NIO can handle many clients with a fixed number of threads. The Servlet-based NIO Web server yields better performance when it comes to handling clients that hold their socket connections open.

Resources

Download

Name: j-nioserver-source.zip
Size: 94 KB
Download method: FTP
About the author
Taylor Cowan is a software engineer and occasional freelance author specializing in J2EE. He received his master's degree in computer science from the University of North Texas, as well as a Bachelor of Music in jazz arranging.
