News On Concurrency

Prior to Java 5, most people associated concurrency in Java with the Thread and Runnable concepts. That has all changed now! We now have a compelling high-level API at our hands, including a lot of new concurrent data structures in the Collections Framework and a brand new task Executor framework. In this article, you’ll get an introduction to the most important features in the new java.util.concurrent package.

Low-level constructs

In the new API, there are representations of the classical concurrency constructs:
locks, semaphores, latches and barriers. The Lock interface, for example, behaves very
much like the intrinsic lock (used in synchronized blocks). However, the biggest
advantage of Lock objects is their ability to back out if the lock isn't available, either
immediately or within a specified timeout, allowing the implementer to take an
alternate action. Furthermore, the new Lock interface also allows locks to be acquired
and released in different code blocks. The drawback of the new Lock interface is that
it's more complex; above all, it requires the programmer to remember to release the
lock. In a similar manner, there are useful implementations of semaphores, latches
and barriers.
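As a minimal sketch of these points, the following class acquires a ReentrantLock with a timeout, releases it in a finally block, and backs out if the lock can't be obtained (the Account class and its deposit() method are made up for illustration):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Account {
   private final Lock lock = new ReentrantLock();
   private int balance = 0;

   public boolean deposit(int amount) throws InterruptedException {
      // Back out if the lock isn't available within 50 ms
      if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
         try {
            balance += amount; // critical section
            return true;
         } finally {
            lock.unlock(); // the programmer must remember this!
         }
      }
      return false; // take an alternate action instead of blocking forever
   }
}
```

Note the try/finally idiom: without it, an exception in the critical section would leave the lock held forever, which is exactly the extra responsibility that synchronized blocks handle for you.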
The concept of atomicity is central in concurrency. An atomic operation is one
that cannot be interrupted by a concurrently running thread. The Java increment
operator, i++, is not an atomic operation (internally it's three operations: read,
add, write) but a simple assignment (i = 5;) is. The java.util.concurrent.atomic
package defines classes that support atomic operations on single variables. For
example, in order to protect a variable incrementation (i++) from thread
interference without resorting to synchronization, the variable can be kept in an
AtomicInteger and the increment can be performed using the incrementAndGet()
method, thus ensuring atomicity.
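For instance, a thread-safe counter based on AtomicInteger might be sketched like this:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
   public static void main(String[] args) throws InterruptedException {
      final AtomicInteger counter = new AtomicInteger(0);
      Runnable increment = new Runnable() {
         public void run() {
            for (int i = 0; i < 1000; i++) {
               counter.incrementAndGet(); // an atomic i++
            }
         }
      };
      Thread t1 = new Thread(increment);
      Thread t2 = new Thread(increment);
      t1.start();
      t2.start();
      t1.join();
      t2.join();
      System.out.println(counter.get()); // always 2000, never a lost update
   }
}
```

With a plain int and i++, the two threads could interleave their read-add-write steps and lose updates; the atomic version cannot.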

New Collection types

The java.util.concurrent package includes a number of additions to the Java
Collections Framework. The new concurrent collections are improvements over
synchronized collections in terms of throughput and efficiency. Synchronized
collections achieve their thread safety by protecting the collection's state with a
single collection-wide lock, so throughput suffers when the collection is accessed
by multiple threads. Concurrent collections, on the other hand, are designed for
concurrent access from multiple threads and instead use finer-grained locking,
where different portions of the data structure are guarded by different locks.
Two important new collection types are BlockingQueue and ConcurrentMap.
BlockingQueue extends the (non-concurrent) Queue interface, which holds
elements prior to processing. The thread-safe BlockingQueue is primarily designed
for producer-consumer queues and allows clients to wait for an element to appear
(possibly within a specified timeout). Waiting occurs if you either try to insert an
element into a full queue or try to retrieve one from an empty queue.
ConcurrentMap defines a couple of very useful atomic map operations:

  • A key/value pair is removed/replaced only if the key is present in the map.
  • A key/value pair is added only if the key is absent.

Hence, there's no need for client-side locking when accessing a ConcurrentMap.

High-level constructs

The new Callable interface is responsible for encapsulating a task. It's similar to the
Runnable interface; both are designed for classes whose instances are potentially
executed by another thread. However, Callable returns a result and may throw an
exception. The utility class Executors has methods for converting from other
common task forms (such as Runnable) to a Callable. Furthermore, a Callable can
be submitted for execution to an ExecutorService via its submit() method.
The Future interface is an important entity in the new Executor framework and
represents the result of an asynchronous computation. The result can be retrieved
using one of the get() methods when the computation has completed, blocking if
necessary until it's ready. The interface also makes it possible to cancel the
computation and to determine whether the task completed normally or was
canceled. A Future is normally used in the following manner:

Future<String> futureResult = executor.submit(
   new Callable<String>() {
      public String call() {
         // perform heavy op and return its result
         return "result";
      }
   });
try {
   String result = futureResult.get(); // blocks if not yet ready
} catch (InterruptedException ie) {
   Thread.currentThread().interrupt();
} catch (ExecutionException ee) { cleanup(); }

Notice how the result is retrieved – get() might block, depending on whether the
computation is ready or not at the time of the invocation.
Executor is a simple interface for launching new tasks. It provides a way of
decoupling task submission from the mechanics of how each task will be run,
including details of thread use, scheduling, etc. The single method in the interface,
execute(), has been designed to replace the way we normally use threads. That is,
instead of explicitly creating a new thread for each task to execute, all tasks are
executed by a single Executor. How the Executor actually executes the tasks
internally is implementation-specific.


The ExecutorService interface extends the Executor interface and adds features
for managing the life cycle of both individual tasks and the executor itself. Its
submit() methods accept both Runnable and Callable objects and return a
Future. An unused ExecutorService should be shut down, via shutdown(), to
reclaim its resources. The ExecutorService interface also provides methods for
submitting large collections of Callable objects.
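As a sketch of the bulk-submission methods, invokeAll() runs a whole collection of Callables and blocks until all of them have completed (the squaring tasks are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BulkSubmit {
   public static void main(String[] args)
         throws InterruptedException, ExecutionException {
      ExecutorService executor = Executors.newFixedThreadPool(4);
      List<Callable<Integer>> tasks = new ArrayList<Callable<Integer>>();
      for (int i = 1; i <= 3; i++) {
         final int n = i;
         tasks.add(new Callable<Integer>() {
            public Integer call() {
               return n * n;
            }
         });
      }
      // invokeAll() blocks until every task has completed
      List<Future<Integer>> results = executor.invokeAll(tasks);
      for (Future<Integer> f : results) {
         System.out.println(f.get()); // prints 1, 4, 9
      }
      executor.shutdown(); // reclaim the executor's resources
   }
}
```

The returned Futures arrive in the same order as the submitted tasks, so results can be matched back to their inputs without any extra bookkeeping.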
It’s time for an example. An implementation of a simple web server might look
something like this:

public class WebServer {
   /**
    * Creates a server socket, accepts connections,
    * and handles each incoming request in a new thread.
    */
   public void run() throws IOException {
      ServerSocket socket = new ServerSocket(8080);
      while (true) {
         final Socket connection = socket.accept();
         Runnable task = new Runnable() {
            public void run() {
               handleRequest(connection);
            }
         };
         new Thread(task).start();
      }
   }

   public static void main(String[] args)
         throws IOException {
      WebServer webServer = new WebServer();
      webServer.run();
   }

   private void handleRequest(Socket connection) {
      // Impl. details not important here!
   }
}

In order to achieve high responsiveness, each new request is handled in a new
thread. Under light to moderate load with sufficient CPU resources available, this
implementation offers relatively good throughput too. However, it would be nice
to have a more flexible solution supporting a wide variety of task execution
policies, e.g. being able to specify in which thread and in what order tasks will be
executed, and how many tasks may execute concurrently. Let's introduce an
Executor:

public class ThreadPerTaskExecutor implements Executor {
   public void execute(Runnable r) {
      new Thread(r).start();
   }
}
which leaves us with the following run() method:

public void run() throws IOException {
   Executor exec = new ThreadPerTaskExecutor();
   ServerSocket socket = new ServerSocket(8080);
   while (true) {
      final Socket connection = socket.accept();
      Runnable task = new Runnable() {
         public void run() {
            handleRequest(connection);
         }
      };
      exec.execute(task);
   }
}

The great advantage of this approach is of course that the execution policy can
very easily be substituted (even configured at deployment time) without altering
the rest of the code base. However, one big problem still exists with this solution!
Under heavy load a lot of threads will be created; thread creation and teardown
take time, and active threads consume a lot of memory. Furthermore, each
platform has an upper limit on how many threads can be created, and when this
limit is hit an OutOfMemoryError is likely thrown. Hence, up to a certain point,
more threads can improve throughput, but beyond that point creating more
threads just slows down the application and may even lead to a crash. The way to
stay out of danger is to place some bound on how many active threads your
application utilizes.
Most of the executor implementations internally make use of thread pools, which
consist of worker threads. Worker threads are often used to execute multiple
tasks, minimizing the overhead caused by thread creation. One common thread
pool type is the fixed thread pool, which always has a specified number of threads
running. The Executors class contains factory methods for obtaining
ExecutorService instances that in turn use different kinds of thread pools
internally. For example, newCachedThreadPool() creates a thread pool that creates
new threads as needed but will try to reuse previously created threads when
available.
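As a sketch, a cached thread pool can be obtained from the factory and used like this (the task bodies are placeholders):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CachedPoolDemo {
   public static void main(String[] args) throws InterruptedException {
      // Threads are created on demand; idle ones are reused for later tasks
      ExecutorService exec = Executors.newCachedThreadPool();
      for (int i = 0; i < 5; i++) {
         final int id = i;
         exec.execute(new Runnable() {
            public void run() {
               System.out.println("task " + id + " on "
                     + Thread.currentThread().getName());
            }
         });
      }
      exec.shutdown(); // no new tasks accepted; queued tasks still run
      exec.awaitTermination(5, TimeUnit.SECONDS);
   }
}
```

Because idle threads are reused, a burst of short tasks typically runs on far fewer threads than a thread-per-task design would create.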
To be on the safe side with regards to both throughput and responsiveness in the
web server example, an executor that uses a thread pool internally should be used:

public void run() throws IOException {
   Executor exec = Executors.newFixedThreadPool(100);
   ServerSocket socket = new ServerSocket(8080);
   while (true) {
      ... // same as above
   }
}


As you can imagine, Java 5 has given us a lot of great high-level features in the
new concurrency package. However, they extend (not replace) the traditional
low-level concurrency constructs, such as the synchronized keyword and the
wait/notify mechanism, which still have a vital role to play. Your concurrency
toolbox has just become bigger and more flexible.


