Thursday, January 23, 2014

Java 7 try-with-resources Statement

Introduction

One of the most exciting features added in Java 7 is automatic resource management (a.k.a. try-with-resources, or TWR). While basic TWR usage looks simple and straightforward, there are certain subtleties that can puzzle even experienced developers. So it's better to be prepared :)

Old School Resource Management

To fully appreciate the TWR approach to resource management, it is worth comparing it to the one used in earlier versions of Java. Suppose you want to open a file, write certain data into it, and then close the file. In addition, you want to make sure that the file is closed regardless of whether the write operation succeeded or failed. The most idiomatic way of doing this in Java 6 and earlier is a try-finally block:
public void writeDataToFile(String fileName, byte[] data)
        throws IOException {

    OutputStream out = null;

    try {
        out = new FileOutputStream(fileName);
        out.write(data);
    } finally {
        if (out != null) {
            out.close();
        }
    }
}
The finally block always executes when the try block exits. This ensures that the file referenced by the out variable gets closed even when the try block is terminated abruptly because of an exception. Cool, this is exactly what we wanted.

It is worth noting that in production code it would be better to wrap the FileOutputStream object in a BufferedOutputStream instance for efficiency, or to leverage NIO.2 facilities, but for this post I decided to keep the examples simple and focused.
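For the curious, here is a minimal sketch of the NIO.2 variant (file name and data are placeholders): Files.write opens the file, writes the bytes, and closes the file in one call.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class NioWriteExample {
    // NIO.2 one-liner equivalent of writeDataToFile(): Files.write
    // opens the file, writes the data, and closes the file for us
    public static void writeDataToFile(String fileName, byte[] data)
            throws IOException {
        Files.write(Paths.get(fileName), data);
    }

    public static void main(String[] args) throws IOException {
        writeDataToFile("example.dat", "hello".getBytes());
        // Read the file back to confirm the write happened
        System.out.println(Files.readAllBytes(Paths.get("example.dat")).length);
    }
}
```

This prints 5, the length of the written data.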

Checking for null in the finally block is necessary. If construction of the FileOutputStream object throws an exception, the out variable keeps its initial null value. An attempt to call out.close() would then generate a NullPointerException, which in turn would suppress the original IO-related exception.

Pay attention to the throws clause. It propagates checked exceptions up to the caller, making the code in the writeDataToFile() method more concise and readable. Without the throws clause it would be a bit more cluttered:
public void writeDataToFile(String fileName, byte[] data) {
    OutputStream out = null;

    try {
        out = new FileOutputStream(fileName);
        out.write(data);
    } catch (IOException e) {
        // exception handling code
    } finally {
        if (out != null) {
            try {
                out.close();
            } catch (IOException e) {
                // exception handling code
            }
        }
    }
}
As you can see, the code for managing even a single resource looks pretty verbose. What if there are a few? Suppose you want to read data from a URL and save it to a file. The following code can be used to do that:
public void saveUrlContentsToFile(URL url, String fileName) {
    InputStream in = null;

    try {
        in = url.openStream();

        OutputStream out = null;
                
        try {
            out = new FileOutputStream(fileName);                

            int len;
            byte[] buf = new byte[8192];
            
            while ((len = in.read(buf)) >= 0) {
                out.write(buf, 0, len);
            }
        } catch (IOException e) {
            // exception handling code
        } finally {
            try {
                if (out != null) {
                    out.close();
                }
            } catch (IOException e) {
                // exception handling code
            }
        }
    } catch (IOException e) {
        // exception handling code
    } finally {
        try {
            if (in != null) {
                in.close();
            }
        } catch (IOException e) {
            // exception handling code
        }
    }
}
It looks like a mess, doesn't it? Fortunately, the Java 7 try-with-resources statement makes resource management a much more pleasant experience.

Managing a Single Resource

Ok, without further ado, let's take a look at how TWR can be leveraged to improve the examples from the previous section. The writeDataToFile() method could be rewritten like this:
public void writeDataToFile(String fileName, byte[] data)
        throws IOException {

    try (OutputStream out = new FileOutputStream(fileName)) {
        out.write(data);
    }
}
As you can see, the code looks clearer and more concise. After the try block exits (normally or abruptly), the resource referenced by the out variable is automatically closed. There is also no need to worry about a null value of the out variable: TWR handles it. Execution of the following (contrived and useless but still working) code will print TWR rocks! without generating any kind of exception:
public class MainClass {
    public static void main(String[] args) throws IOException {
        try (InputStream in = null) {
            System.out.println("TWR rocks!");
        }
    }
}
TWR also takes appropriate action to propagate an exception thrown during resource initialization to the caller. Assuming there is no file named "inexistent-file" in the program's working directory, the following code generates a FileNotFoundException:
public class MainClass {
    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream("inexistent-file")) {
            System.out.println("TWR rocks!");
        }
    }
}
This is different from the analogous code without TWR and without the null check (described in the previous section), which suppresses the FileNotFoundException and throws a NullPointerException instead:
public class MainClass {
    public static void main(String[] args) throws IOException {
        InputStream in = null;

        try {
            in = new FileInputStream("inexistent-file");
        } finally {
            in.close();
        }
    }
}
One last note: variables referencing resources managed by TWR are implicitly final. Hence an attempt to assign to such a variable inside the try-with-resources block results in a compile-time error:
try (InputStream in = new FileInputStream("inexistent-file")) {
    // error: “in” is implicitly final
    in = new FileInputStream("another-file");
    // ...
}

Managing Multiple Resources

Let's rewrite the saveUrlContentsToFile() method using TWR to see how a single try-with-resources statement can easily manage multiple resources:
public void saveUrlContentsToFile(URL url, String fileName)
        throws IOException {
    
    try (InputStream in = url.openStream();
         OutputStream out = new FileOutputStream(fileName)) {

        int len;
        byte[] buf = new byte[8192];

        while ((len = in.read(buf)) > 0) {
            out.write(buf, 0, len);
        }
    }
}
Compare it to the original version. As you can see, with an increasing number of resources TWR usage gets more and more beneficial. After the try block exits (normally or abruptly), the resources referenced by the in and out variables are automatically closed. An interesting thing to note is that resources in a TWR block are initialized from left to right (top to bottom) and closed in reverse order.

What happens if initialization of the resource referenced by in succeeds but initialization of the resource referenced by out throws an exception of type E? Fortunately, TWR is smart enough to handle this situation. In this case the resource referenced by in will be automatically closed, and the whole try-with-resources block will terminate abruptly, propagating the exception of type E up the call stack.

In general, if you have n resources (n > 1) and initialization of one of them fails, then all resources which have already been successfully initialized will be automatically closed. In the following example all resources starting from r1 and up to ri (ri excluded) will be automatically closed:
try (R1 r1 = new R1();
     R2 r2 = new R2();
     // ...
     Ri ri = new Ri(); // this initialization fails
     // ...
     Rn rn = new Rn()) {
    // some action
}
Ok, so far so good, but wait a minute... closing a resource can also generate an exception. How does TWR behave in this case? Fortunately, the try-with-resources statement can handle this situation too. It attempts to automatically close all managed resources regardless of whether the try block itself or the closing procedure of any managed resource exits normally or abruptly. All exceptions (if any) generated in the process are collected and combined via the Throwable#addSuppressed method (the rules for combining suppressed exceptions are a bit complicated; all the gory details can be found in JLS 14.20.3.1). Let's take a look at a code snippet:
try (R1 r1 = new R1();
     R2 r2 = new R2(); // closing fails with exception of type E2
     R3 r3 = new R3(); // closing fails with exception of type E3
     R4 r4 = new R4();
     R5 r5 = new R5()) {
    // exits normally or abruptly with exception of type E
}
Assuming that initialization of all resources went fine, the execution of the previous example can produce the following results:
  • If the try block exits normally, the whole try-with-resources block terminates abruptly with an exception of type E3 that has the exception of type E2 in its suppressed array (accessible via Throwable#getSuppressed()).
  • If the try block exits abruptly, the whole try-with-resources block terminates abruptly with the exception of type E, which has the exceptions of types E3 and E2 in its suppressed array (accessible via Throwable#getSuppressed()).
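To make the suppression mechanism concrete, here is a minimal sketch (the Res class is made up for illustration; its close() always fails) showing how a caller can inspect the suppressed array:

```java
public class SuppressedDemo {
    // Toy resource whose close() always fails
    static class Res implements AutoCloseable {
        @Override
        public void close() {
            throw new IllegalStateException("close failed");
        }
    }

    public static void main(String[] args) {
        try {
            try (Res r = new Res()) {
                throw new RuntimeException("try block failed");
            }
        } catch (RuntimeException e) {
            // The try-block exception wins; the close() exception is
            // attached to it via Throwable#addSuppressed
            System.out.println("primary: " + e.getMessage());
            for (Throwable t : e.getSuppressed()) {
                System.out.println("suppressed: " + t.getMessage());
            }
        }
    }
}
```

Running this prints the try-block exception as primary, with the close() exception listed as suppressed.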
The two previous code snippets and the related discussion may seem a bit abstract and confusing. Let's play with more concrete examples to gain some hands-on experience. To get started, let's create the following helper classes:
class VerboseResource implements AutoCloseable {
    private String name;

    public VerboseResource(String name) {
        this.name = name;
        System.err.println("initializing " + name);
    }

    @Override
    public void close() throws IOException {
        System.err.println("closing " + name);
    }

    protected String getName() {
        return name;
    }
}

class InitFailedResource extends VerboseResource {
    public InitFailedResource(String name) {
        super(name);
        throw new IllegalStateException(
                "unable to initialize " + name);
    }
}

class CloseFailedResource extends VerboseResource {
    public CloseFailedResource(String name) {
        super(name);
    }

    @Override
    public void close() throws IOException {
        super.close();
        throw new IllegalStateException(
                "unable to close " + getName());
    }
}
The AutoCloseable interface was added in Java 7. It should be implemented by any class intended to be managed by the try-with-resources statement (see JLS 14.20.3). The Closeable interface was retrofitted to extend AutoCloseable, so every class that implements Closeable can be used with TWR. As you can see, the helper classes are pretty simple and straightforward:
  • VerboseResource outputs an appropriate message when its constructor or close() method gets executed.
  • InitFailedResource inherits from VerboseResource and throws an exception during initialization of a resource.
  • CloseFailedResource inherits from VerboseResource and throws an exception during closing of a resource.
Now let’s create a simple program that uses our fresh helper classes to successfully initialize and close three resources:
public class MainClass {
    public static void main(String[] args) throws IOException {
        try (VerboseResource res1 = new VerboseResource("res1");
             VerboseResource res2 = new VerboseResource("res2");
             VerboseResource res3 = new VerboseResource("res3")) {

            System.err.println("inside try block");
        }
    }
}
The output produced by the program confirms that resources are acquired from top to bottom and released in reverse order:
initializing res1
initializing res2
initializing res3
inside try block
closing res3
closing res2
closing res1
Now let’s see what happens if initialization of a resource fails:
public class MainClass {
    public static void main(String[] args) throws IOException {
        try (VerboseResource    res1 = new VerboseResource("res1");
             VerboseResource    res2 = new VerboseResource("res2");
             InitFailedResource res3 = new InitFailedResource("res3");
             VerboseResource    res4 = new VerboseResource("res4")) {

            System.err.println("inside try block");
        }
    }
}
The output produced by the program confirms that all successfully initialized resources (res1 and res2 in this case) are automatically closed regardless of the other resources' initialization results. As you might expect, the code in the try block is not executed:
initializing res1
initializing res2
initializing res3
closing res2
closing res1
Exception in thread "main" java.lang.IllegalStateException: unable to initialize res3
 at com.apolunin.twr.InitFailedResource.&lt;init&gt;(MainClass.java:26)
 at com.apolunin.twr.MainClass.main(MainClass.java:46)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Now let’s see what happens if closing of a resource fails:
public class MainClass {
    public static void main(String[] args) throws IOException {
        try (VerboseResource     res1 = new VerboseResource("res1");
             CloseFailedResource res2 = new CloseFailedResource("res2");
             VerboseResource     res3 = new VerboseResource("res3");
             VerboseResource     res4 = new VerboseResource("res4");
             CloseFailedResource res5 = new CloseFailedResource("res5")) {

            System.err.println("inside try block");
        }
    }
}
As you can see from the output, an attempt is made to release every managed resource. The first exception encountered (thrown by res5's closing procedure) is propagated up the call stack, with the second one (thrown by res2's closing procedure) added to its suppressed list:
initializing res1
initializing res2
initializing res3
initializing res4
initializing res5
inside try block
closing res5
closing res4
closing res3
closing res2
closing res1
Exception in thread "main" java.lang.IllegalStateException: unable to close res5
 at com.apolunin.twr.CloseFailedResource.close(MainClass.java:38)
 at com.apolunin.twr.MainClass.main(MainClass.java:51)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
 Suppressed: java.lang.IllegalStateException: unable to close res2
There are many possible execution paths through a try-with-resources statement. Interested readers can find the detailed specification in JLS 14.20.3. For a deeper understanding of TWR I would recommend reading the related JLS sections and then playing around with the aforementioned helper classes to clarify the most confusing cases.

Last but not least: there is a common Java idiom of using the "Decorator" design pattern for resource chaining:
OutputStream out = new BufferedOutputStream(new FileOutputStream("file.dat"));

When it comes to the try-with-resources statement, this idiom should be used with caution. The previous code snippet doesn't interact well with TWR. Let's add another helper class to illustrate the point:
class ChainedInitFailedResource extends InitFailedResource {
    private VerboseResource resource;

    public ChainedInitFailedResource(String name,
            VerboseResource resource) {
        
        super(name);
        this.resource = resource;
    }

    @Override
    public void close() throws IOException {
        resource.close();
        super.close();
    }
}
The whole purpose of the ChainedInitFailedResource class is to make resource chaining possible while keeping our trace output. Let's take a look at the following example:
public class MainClass {
    public static void main(String[] args) throws IOException {
        try (VerboseResource res1 = new VerboseResource("res1");
             ChainedInitFailedResource res3 = new ChainedInitFailedResource("res3",
                     new VerboseResource("res2"))) {

            System.err.println("inside try block");
        }
    }
}
The output produced by this program looks like this:
initializing res1
initializing res2
initializing res3
closing res1
Exception in thread "main" java.lang.IllegalStateException: unable to initialize res3
 at com.apolunin.twr.InitFailedResource.&lt;init&gt;(MainClass.java:26)
 at com.apolunin.twr.ChainedInitFailedResource.&lt;init&gt;(MainClass.java:48)
 at com.apolunin.twr.MainClass.main(MainClass.java:62)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
As you can see from the output, resource res2 (chained inside res3) was NOT closed. To avoid such unpleasant surprises, each managed resource should be specified separately:
public class MainClass {
    public static void main(String[] args) throws IOException {
        try (VerboseResource res1 = new VerboseResource("res1");
             VerboseResource res2 = new VerboseResource("res2");
             ChainedInitFailedResource res3 = new ChainedInitFailedResource("res3", res2)) {

            System.err.println("inside try block");
        }
    }
}
The output produced by the program confirms that res2 is released this time:
initializing res1
initializing res2
initializing res3
closing res2
closing res1
Exception in thread "main" java.lang.IllegalStateException: unable to initialize res3
 at com.apolunin.twr.InitFailedResource.&lt;init&gt;(MainClass.java:26)
 at com.apolunin.twr.ChainedInitFailedResource.&lt;init&gt;(MainClass.java:48)
 at com.apolunin.twr.MainClass.main(MainClass.java:63)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
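With real JDK streams the same advice translates into declaring each link of the decorator chain as its own resource. A minimal sketch (the file name is a placeholder):

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SafeChainingExample {
    public static void main(String[] args) throws IOException {
        // Each link of the chain is a separate resource, so the underlying
        // FileOutputStream is closed even if constructing or closing the
        // BufferedOutputStream fails; closing a stream twice is harmless
        try (OutputStream file = new FileOutputStream("chained.dat");
             OutputStream out = new BufferedOutputStream(file)) {
            out.write("some data".getBytes());
        }
        System.out.println("done");
    }
}
```

Here out.close() flushes the buffer and closes the underlying stream, and the subsequent file.close() is a no-op.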

Using catch and finally Blocks with TWR

It is possible to use catch and/or finally blocks with try-with-resources statement. There are two important points to remember:
  • The catch clause of a TWR statement can catch exceptions generated during initialization or automatic closing of any resource.
  • The finally clause of a TWR statement gets executed after all resources have been closed (or an attempt to close them has been made).
Here is a program to illustrate the aforementioned points:
public class MainClass {
    public static void main(String[] args) throws IOException {
        try (VerboseResource     res1 = new VerboseResource("res1");
             VerboseResource     res2 = new VerboseResource("res2");
             CloseFailedResource res3 = new CloseFailedResource("res3")) {

            System.err.println("inside try block");
        } catch (IllegalStateException e) {
            System.err.println("inside catch block: " + e.getMessage());
        } finally {
            System.err.println("inside finally block");
        }
    }
}
The program produces the following output:
initializing res1
initializing res2
initializing res3
inside try block
closing res3
closing res2
closing res1
inside catch block: unable to close res3
inside finally block
As the output confirms, the order of execution is as follows:
  • resources are initialized from top to bottom;
  • code inside try block gets executed;
  • an attempt is made to automatically close all resources (in reverse order);
  • catch clause of TWR statement handles exception;
  • finally clause of TWR statement gets executed.

Conclusion

Ok, that's it. I hope you are now armed with a thorough enough understanding of the powerful try-with-resources feature to use it in day-to-day programming tasks. To deepen your knowledge, I would recommend reading JLS 14.20.3 and playing around with the helper classes created in this article (or something similar).

I hope you enjoyed reading :)
Andrew

Friday, February 22, 2013

Mystery of Collections.max() Declaration

Introduction

I'm sure that every software developer who has used Java for a while knows about the Collections utility class. It contains a lot of useful methods which facilitate sorting a collection, finding its maximum and minimum, making synchronized or unmodifiable collections, etc. When I first looked at the source code of the Collections class, I was puzzled by the declaration of the max() method:
public static <T extends Object & Comparable<? super T>> T max(
        Collection<? extends T> coll) {
    // ...
}
In this post I'm going to explain why the aforementioned method is declared exactly that way.

Mystery Uncovered

The first attempt to define this method might look like this:
public static <T extends Comparable<T>> T max(Collection<T> coll) {
    // ...
}
This declaration is too restrictive. Assume you have two classes: the first one (Foo) defines a natural ordering (i.e. implements the Comparable interface), and the second one (Bar) is a subclass of the first:
class Foo implements Comparable<Foo> {

    // ...

    @Override
    public int compareTo(Foo o) {
        // ...
    }
}

class Bar extends Foo {
    // ...
}
It is safe to build a collection of Bar instances and pass it to the max() method (which should base its search on the natural ordering defined by the Foo class). However, the above declaration disallows this:
public class MaxDeclaration {
    public static void main(String[] args) {
        List<Bar> list = new ArrayList<Bar>();
        max(list); // compile-time error
    }

    public static <T extends Comparable<T>> T max(Collection<T> coll) {
        // ...
    }
}
To make the above code compile without an error, the constraints on type parameter T must be loosened a bit. Application of the "Get and Put Principle" leads us to the following declaration of the max() method:
public static <T extends Comparable<? super T>> T max(
        Collection<? extends T> coll) {
    // ...
}
This declaration looks almost like the one in the Collections class. The last thing to figure out is the reason for the presence of the phrase "Object &" in the bound of T. The short answer: it's there for backward compatibility. A longer answer follows.

If you look at the documentation of the Collections class for Java 1.4 or earlier, you will notice that the max() method is declared to return Object. According to the JLS, the erasure of a type variable is the erasure of its leftmost bound. Hence, if the phrase "Object &" were omitted in the Java 5 (or later) version of the Collections class, the erasure of type variable T would be Comparable, not Object, which would break backward compatibility with Java 1.4 and earlier. This reasoning leads us to the original max() declaration given at the beginning of the post.
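The effect of the leftmost bound on erasure can be observed directly with reflection: the erased return type of max() is indeed Object.

```java
import java.lang.reflect.Method;
import java.util.Collection;
import java.util.Collections;

public class ErasureDemo {
    public static void main(String[] args) throws NoSuchMethodException {
        // Look up the single-argument overload of Collections.max()
        Method max = Collections.class.getMethod("max", Collection.class);
        // The erasure of <T extends Object & Comparable<? super T>> is the
        // leftmost bound (Object), so the erased return type is Object
        System.out.println(max.getReturnType().getName());
    }
}
```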

Thanks for reading,
See you soon!

Wednesday, February 20, 2013

Get and Put Principle in Java Generics

Introduction

It is hard to imagine modern Java development without generics. While they look simpler and more straightforward than C++ templates, it makes sense to invest some time in learning their best practices. In this post I want to talk about the "Get and Put Principle", one of the most important rules to remember when working with generics.

Basic Information

Quite often the "Get and Put Principle" is defined in terms of getting values out of a data structure and putting values into it:

Use an extends wildcard when you only get values out of a structure, use a super wildcard when you only put values into a structure, and don’t use a wildcard when you both get and put.

Let's look at how judicious application of the "Get and Put Principle" allows for the creation of more flexible code. Assume you need to implement a method which gets values from a source collection and adds them to a destination collection. It might look like this:
public static <T> void addAll(Collection<T> dst, Collection<T> src) {
    for (T element : src) {
        dst.add(element);
    }
}
Here is a sample call of this method:
Collection<Integer> src = Arrays.asList(1, 2, 3, 4);
Collection<Integer> dst = new ArrayList<Integer>();

addAll(dst, src);
At first glance the addAll() method looks good, but it is not flexible enough. For example, it is safe to append the contents of the src collection to a dst collection of type Collection&lt;Object&gt;, but the current implementation of the addAll() method disallows this:
Collection<Integer> src = Arrays.asList(1, 2, 3, 4);
Collection<Object> dst = new ArrayList<Object>();

addAll(dst, src); // compile-time error
To make the previous call compile without an error, the dst parameter type should be modified in accordance with the "Get and Put Principle". Since we only put values into the dst collection, a super wildcard should be used:
public static <T> void addAll(Collection<? super T> dst,
        Collection<T> src) {
    for (T element : src) {
        dst.add(element);
    }
}
This version looks better and works fine in almost any case, but it can be made even more flexible. Occasionally it might be necessary to provide an explicit type argument for a generic method invocation. Let's look at a slightly contrived but illustrative example:
Collection<Integer> src = Arrays.asList(1, 2, 3, 4);
Collection<Object> dst = new ArrayList<Object>();

CollectionUtils.<Number>addAll(dst, src); // compile-time error
The example assumes that addAll() is a public static method in the CollectionUtils class. This is necessary because otherwise it is impossible to specify an explicit type argument:
<Number>addAll(dst, src); // this syntax is not allowed in Java
Alternatively, you can make addAll() a non-static member and provide an explicit type argument using the following code:
this.<Number>addAll(dst, src);
Ok, let's break the example down. In this case we opted out of the compiler's type inference and provided an explicit type argument for the generic method invocation (&lt;Number&gt;). Hence the src parameter of the addAll() method is expected to be of type Collection&lt;Number&gt;. But the actual argument is of type Collection&lt;Integer&gt;. Taking into account that generics are not covariant in Java (i.e. Collection&lt;Integer&gt; is not considered a subtype of Collection&lt;Number&gt; despite the fact that Integer is a subtype of Number), the compiler disallows the call.

To make the previous call compile without an error, the src parameter should be modified in accordance with the "Get and Put Principle". Since we only get values out of the src collection, an extends wildcard should be used:
public static <T> void addAll(Collection<? super T> dst,
        Collection<? extends T> src) {
    for (T element : src) {
        dst.add(element);
    }
}
As you can see, judicious application of the "Get and Put Principle" may add considerable flexibility to your code.

Another Look

Some programmers tend to think that the "Get and Put Principle" is worth considering only when working with the Collections Framework. But this is not the case. The "Get and Put Principle" can also be defined in terms of method arguments and return values, and applied to any generic class, not just a collection:

Use an extends wildcard when you only get return values out of a method, use a super wildcard when you only pass arguments into a method, and don’t use a wildcard when you both get and pass.

Let's look at the following code, which incorporates an extends wildcard:
List<? extends Number> list = new ArrayList<Integer>();
list.add(7); // compile-time error
list.add(null);
Number value1 = list.get(0);
The first line creates a reference list and points it to an instance of ArrayList&lt;Integer&gt;. The second line tries to add an integer value to the ArrayList&lt;Integer&gt; by accessing it via the wildcard-typed reference list. If this call were allowed by the compiler, the following code would also be allowed, which would definitely cause problems at runtime and compromise the Java type system:
List<? extends Number> list = new ArrayList<Double>();
list.add(7);
The third line compiles without an error because, according to the JLS, the null literal can be of any reference type. The fourth line also compiles and works as expected.

Now let's look at code which incorporates a super wildcard:
List<? super Integer> list = new ArrayList<Integer>();
list.add(7);
Number value2 = list.get(0); // compile-time error
Object value3 = list.get(0);
The first line creates a reference list and points it to an instance of ArrayList&lt;Integer&gt;. The second line tries to add an integer value to the ArrayList&lt;Integer&gt; by accessing it via the wildcard-typed reference list. This time the compiler allows the call because the type parameter is guaranteed to be a supertype of the Integer class; hence it is safe to pass an instance of Integer itself as an argument to the add() method. The third line tries to retrieve the first element from the list using the reference typed via the super wildcard. This call is disallowed by the compiler because it cannot guarantee that the return value of the get() method is of type Number. The fourth line compiles and works as expected, though.
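To show the principle outside the Collections Framework, here is a sketch with made-up Source and Sink interfaces (they are not part of the JDK): transfer() only gets values out of src and only passes values into dst, so extends and super wildcards apply.

```java
public class GetPutDemo {
    // Hypothetical producer/consumer interfaces, not part of the JDK
    interface Source<T> { T get(); }
    interface Sink<T> { void accept(T value); }

    // We only get values out of src (extends wildcard)
    // and only pass values into dst (super wildcard)
    static <T> void transfer(Source<? extends T> src, Sink<? super T> dst) {
        dst.accept(src.get());
    }

    public static void main(String[] args) {
        Source<Integer> src = new Source<Integer>() {
            @Override public Integer get() { return 42; }
        };
        // A Sink<Object> can consume Integers because Object
        // is a supertype of Integer
        Sink<Object> dst = new Sink<Object>() {
            @Override public void accept(Object value) {
                System.out.println("got " + value);
            }
        };
        transfer(src, dst);
    }
}
```

Without the wildcards, transfer() would require a Source and a Sink with exactly the same type argument, and this call would not compile.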

Conclusion

Ok, that’s it. As you can see, judicious application of “Get and Put Principle” may add considerable flexibility to your code while keeping compile-time guarantees. This is one of the most important rules to remember when designing APIs involving generics.

Thanks for reading,
See you soon!

Monday, January 28, 2013

"Observer" Design Pattern in Java

Introduction

The Java platform has included support for the Observer design pattern, in the form of the Observer interface and the Observable class, since its initial release. However, a long time has passed since then. A lot of enhancements have been added to the standard libraries and to the Java language itself. In this post I want to describe how generics and the Java Collections Framework can be used together to build a better Observer design pattern implementation.

What’s wrong with Observer and Observable?

Before building a better implementation we need to identify the flaws of what we currently have. The JDK support for Observer design pattern has the following problems:
  1. It doesn't use generics, which are a must in modern Java programming.
  2. Internally (at least in Oracle JDK 6 and JDK 7) the Observable implementation uses synchronization inefficiently: it leverages the Vector class, which is synchronized by itself, in addition to custom synchronization logic.
Let’s see how we can tackle the aforementioned issues using modern JDK facilities.

Observer Revised

The following subsections show how generics and the Java Collections Framework can be leveraged to build a modern Observer design pattern implementation.

Initial Implementation

First of all, we need to introduce generic analogues of the standard JDK Observer interface and Observable class. Let them be EventListener and EventPublisher respectively:
// EventListener.java
public interface EventListener<
        P extends EventPublisher<P, L, E>,
        L extends EventListener<P, L, E>,
        E> {

    void handleEvent(P sender, E event);

}

// EventPublisher.java
public interface EventPublisher<
        P extends EventPublisher<P, L, E>,
        L extends EventListener<P, L, E>,
        E> {

    void addListener(L listener);

    void removeListener(L listener);

    void clearListeners();

    void publishEvent(E event);

}
Programming to interfaces instead of concrete classes is considered good practice; that's why the Observable class has been replaced with a generic interface, not a generic class. The definitions of these interfaces may seem a bit weird at first glance. Let's break them down to see what's happening.

Let’s begin with the EventListener interface. We need to parameterize the type of event publisher to accept events from and the type of events to listen to. So we can start with the following definition:
// EventListener.java
public interface EventListener<P, E> {
    // ...
}
For EventPublisher interface we need to parameterize the type of event listener to notify and the type of events to publish. The following definition can be used to express this:
// EventPublisher.java
public interface EventPublisher<L, E> {
    // ...
}
It is known that every event publisher should implement EventPublisher interface and every event listener should implement EventListener interface. Let’s express this idea in code with type variable bounds:
// EventListener.java
public interface EventListener<P extends EventPublisher<???, E>, E> {
    // ...
}

// EventPublisher.java
public interface EventPublisher<L extends EventListener<???, E>, E> {
    // ...
}
This is invalid Java code of course. Triple question mark is used to indicate problems. Type variable P of EventListener is bounded by EventPublisher which is generic by itself. Hence we need to provide certain values (type arguments) for event type (E) and event listener (L) type variables of EventPublisher interface. The solution is trivial for event type: type variable E can be reused. But what should be passed as type argument for event listener?

Similar reasoning can be applied to EventPublisher interface. Type variable L of EventPublisher is bounded by EventListener which is generic by itself. Hence we need to provide certain values (type arguments) for event type (E) and event publisher (P) type variables of EventListener interface. The solution is trivial for event type: type variable E can be reused. But what should be passed as type argument for event publisher?

To solve the aforementioned problems and to convert the previous erroneous listing to the working Java code we need to introduce two more type variables:
  1. In EventListener interface definition L type variable will denote event listener type to notify. The only purpose of the L type variable is to serve as the type argument for EventPublisher interface which bounds P type variable of EventListener.
  2. In EventPublisher interface definition P type variable will denote event publisher to accept events from. The only purpose of the P type variable is to serve as the type argument for EventListener interface which bounds L type variable of EventPublisher.
In the end we have two generic interfaces with mutually recursive type variable bounds. The definitions of these interfaces now look similar to the ones at the beginning of this subsection:
// EventListener.java
public interface EventListener<
        P extends EventPublisher<P, L, E>,
        L extends EventListener<P, L, E>,
        E> {
    // ...
}

// EventPublisher.java
public interface EventPublisher<
        P extends EventPublisher<P, L, E>,
        L extends EventListener<P, L, E>,
        E> {
    // ...
}
And now let’s create AbstractEventPublisher – abstract base class which any event publisher can extend to avoid implementing EventPublisher interface from scratch:
// AbstractEventPublisher.java
public abstract class AbstractEventPublisher<
        P extends EventPublisher<P, L, E>,
        L extends EventListener<P, L, E>,
        E> implements EventPublisher<P, L, E> {

    private final List<L> listeners = new ArrayList<L>();

    @Override
    public void addListener(L listener) {
        listeners.add(listener);
    }

    @Override
    public void removeListener(L listener) {
        listeners.remove(listener);
    }

    @Override
    public void clearListeners() {
        listeners.clear();
    }

    @Override
    @SuppressWarnings("unchecked")
    public void publishEvent(E event) {
        for (L listener : listeners) {
            listener.handleEvent((P) this, event);
        }
    }
}
There is nothing special about this class. The only thing worth noting is that it is not thread-safe. This issue is tackled in the following subsection.

Naïve Thread-Safety

Quite often an event publisher needs to be thread-safe. This subsection presents a first attempt at making the AbstractEventPublisher class thread-safe. It looks like this:
// AbstractEventPublisher.java
public abstract class AbstractEventPublisher<
        P extends EventPublisher<P, L, E>,
        L extends EventListener<P, L, E>,
        E> implements EventPublisher<P, L, E> {

    private final List<L> listeners =
            Collections.synchronizedList(new ArrayList<L>());

    @Override
    public void addListener(L listener) {
        listeners.add(listener);
    }

    @Override
    public void removeListener(L listener) {
        listeners.remove(listener);
    }

    @Override
    public void clearListeners() {
        listeners.clear();
    }

    @Override
    @SuppressWarnings("unchecked")
    public void publishEvent(E event) {
        synchronized (listeners) {
            for (L listener : listeners) {
                listener.handleEvent((P) this, event);
            }
        }
    }
}
In this implementation the listeners field is instantiated using the synchronizedList() method. Iteration over listeners is wrapped in a synchronized block, as required by the synchronizedList() contract.

Now the AbstractEventPublisher class is thread-safe, but it has one serious flaw: it delivers events inside a synchronized block. The code inside a synchronized block should be as fast as possible because:
  1. No listener can be added or removed while iteration inside the synchronized block is in progress.
  2. No other event can be delivered until handling of the currently delivered event is finished.
Depending on the activity performed inside a handleEvent implementation, calling it from a synchronized block may lead to performance bottlenecks, exceptions, data corruption or deadlocks. Joshua Bloch, in his Effective Java book (Item 67), calls methods like handleEvent alien methods. It’s a very apt term because the AbstractEventPublisher class has no idea what event listeners might do in their handleEvent implementations. The next subsection shows how to protect against alien methods.
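To make the hazard concrete, here is a small self-contained sketch (the Listener interface and the re-registering listener are illustrative, not part of this post’s code). It shows one deterministic failure mode: an alien listener that calls back into the publisher’s listener list during synchronized iteration makes the fail-fast iterator throw ConcurrentModificationException.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.List;

public class AlienMethodDemo {

    interface Listener {
        void handleEvent(String event);
    }

    static String run() {
        final List<Listener> listeners =
                Collections.synchronizedList(new ArrayList<Listener>());

        // An "alien" listener: it calls back into the publisher's
        // listener list while delivery is still in progress.
        listeners.add(new Listener() {
            @Override
            public void handleEvent(String event) {
                listeners.add(this);
            }
        });

        try {
            synchronized (listeners) {
                for (Listener listener : listeners) {
                    listener.handleEvent("event");
                }
            }
            return "delivered without incident";
        } catch (ConcurrentModificationException e) {
            // The mutation performed by the alien listener invalidated
            // the iterator driving the for-each loop.
            return "ConcurrentModificationException";
        }
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Blocking, deadlocking or corrupting behavior is just as possible; the exception is simply the easiest variant to reproduce deterministically.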

Using List Snapshot

One fairly obvious solution to the alien methods problem is to make a copy (snapshot) of the listeners field inside the synchronized block and deliver the event using that copy outside the block:
@Override
@SuppressWarnings("unchecked")
public void publishEvent(E event) {
    final List<L> snapshot = new ArrayList<L>();

    synchronized (listeners) {
        snapshot.addAll(listeners);
    }

    for (L listener : snapshot) {
        listener.handleEvent((P) this, event);
    }
}
This approach works fine and allows for avoiding alien methods, but a better alternative exists.

Using Concurrent List Implementation

Starting with Java 5, concurrent collections were introduced to the Java platform as part of the java.util.concurrent package. This package contains a lot of useful facilities for developing concurrent programs. The CopyOnWriteArrayList class is of the greatest interest to us. According to its documentation, it is a thread-safe variant of ArrayList in which all mutative operations (add, set, and so on) are implemented by making a fresh copy of the underlying array. This is ordinarily too costly, but it may be more efficient than the alternatives when traversal operations vastly outnumber mutations, and it is useful when you cannot or don’t want to synchronize traversals, yet need to preclude interference among concurrent threads. Hence the CopyOnWriteArrayList class perfectly suits our needs:
// AbstractEventPublisher.java
public abstract class AbstractEventPublisher<
        P extends EventPublisher<P, L, E>,
        L extends EventListener<P, L, E>,
        E> implements EventPublisher<P, L, E> {

    private final List<L> listeners = new CopyOnWriteArrayList<L>();

    @Override
    public void addListener(L listener) {
        listeners.add(listener);
    }

    @Override
    public void removeListener(L listener) {
        listeners.remove(listener);
    }

    @Override
    public void clearListeners() {
        listeners.clear();
    }

    @Override
    @SuppressWarnings("unchecked")
    public void publishEvent(E event) {
        for (L listener : listeners) {
            listener.handleEvent((P) this, event);
        }
    }
}
As you can see, this code is almost identical to the very first implementation of the abstract event publisher. The only difference is that CopyOnWriteArrayList is used instead of ArrayList to instantiate the listeners field.
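The snapshot semantics that make this safe are easy to observe in isolation. In the following sketch (names are illustrative), a mutation performed during iteration neither fails nor affects the loop already in progress:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowSnapshotDemo {

    static int[] run() {
        List<String> listeners = new CopyOnWriteArrayList<String>();
        listeners.add("first");
        listeners.add("second");

        int visited = 0;
        for (String listener : listeners) {
            // Mutating the list mid-iteration is safe: the loop keeps
            // walking the array snapshot taken when iteration started.
            listeners.add("added-during-" + listener);
            visited++;
        }
        // Only the two original elements were visited, but the list
        // now also contains the two elements added during the loop.
        return new int[] { visited, listeners.size() };
    }

    public static void main(String[] args) {
        int[] result = run();
        System.out.println(result[0] + " visited, final size " + result[1]);
        // prints: 2 visited, final size 4
    }
}
```

This is exactly why publishEvent can iterate without any explicit locking: each for-each loop runs against its own immutable snapshot, so alien listeners cannot interfere with delivery.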

Practical Usage

Now let’s look at a practical example which incorporates all previously developed stuff. Assume we have class Circle which represents a circle and maintains its center, radius and color via instance fields. Suppose that Circle class instance should publish an event whenever any part of the instance state is changed. Let’s see how we can fulfill the aforementioned requirements based on the previously developed code.

First of all let’s look at the Circle class:
// Circle.java
public class Circle {
    private Point center;
    private int radius;
    private Color color;

    public Circle(Point center, int radius, Color color) {
        this.center = center;
        this.radius = radius;
        this.color = color;
    }

    public Point getCenter() {
        return center;
    }

    public void setCenter(Point center) {
        this.center = center;
    }

    public int getRadius() {
        return radius;
    }

    public void setRadius(int radius) {
        this.radius = radius;
    }

    public Color getColor() {
        return color;
    }

    public void setColor(Color color) {
        this.color = color;
    }
}
There is nothing special about this class for now. It contains instance fields, getters and setters to retrieve and update instance fields, and a constructor to set initial object state.

Now let’s introduce two interfaces: CircleEvent and CircleEventListener. The first one is a tag interface which every event type published by the Circle class should implement. The second one as you might guess is supposed to be implemented by every circle event listener:
// CircleEvent.java
public interface CircleEvent {
}

// CircleEventListener.java
public interface CircleEventListener extends
        EventListener<Circle, CircleEventListener, CircleEvent> {
}
Remember that EventListener and EventPublisher interfaces have mutually recursive bounds. So in order to compile successfully and build on top of the AbstractEventPublisher class functionality Circle class definition should be modified like this:
// Circle.java
public class Circle extends AbstractEventPublisher<Circle,
        CircleEventListener, CircleEvent> {
// …
}
Now let’s look at the classes representing the events themselves. The Circle class should fire an event whenever its center, radius or color is changed. Hence there should be three different event types:
// CenterChangedEvent.java
public class CenterChangedEvent implements CircleEvent {
    private final Point oldCenter;
    private final Point newCenter;

    public CenterChangedEvent(Point oldCenter, Point newCenter) {
        this.oldCenter = oldCenter;
        this.newCenter = newCenter;
    }

    public Point getOldCenter() {
        return oldCenter;
    }

    public Point getNewCenter() {
        return newCenter;
    }
}

// ColorChangedEvent.java
public class ColorChangedEvent implements CircleEvent {
    private final Color oldColor;
    private final Color newColor;

    public ColorChangedEvent(Color oldColor, Color newColor) {
        this.oldColor = oldColor;
        this.newColor = newColor;
    }

    public Color getOldColor() {
        return oldColor;
    }

    public Color getNewColor() {
        return newColor;
    }
}

// RadiusChangedEvent.java
public class RadiusChangedEvent implements CircleEvent {
    private final int oldRadius;
    private final int newRadius;

    public RadiusChangedEvent(int oldRadius, int newRadius) {
        this.oldRadius = oldRadius;
        this.newRadius = newRadius;
    }

    public int getOldRadius() {
        return oldRadius;
    }

    public int getNewRadius() {
        return newRadius;
    }
}
All three event classes are designed to be immutable. Each one contains old and new value of the corresponding Circle class instance field. Now let’s include event publishing code in the Circle class setters.
// Circle.java
public class Circle extends AbstractEventPublisher<Circle,
        CircleEventListener, CircleEvent> {

    // …

    public void setCenter(Point center) {
        if (!this.center.equals(center)) {
            publishEvent(new CenterChangedEvent(this.center, center));
        }

        this.center = center;
    }

    public void setRadius(int radius) {
        if (this.radius != radius) {
            publishEvent(new RadiusChangedEvent(this.radius, radius));
        }

        this.radius = radius;
    }

    public void setColor(Color color) {
        if (!this.color.equals(color)) {
            publishEvent(new ColorChangedEvent(this.color, color));
        }

        this.color = color;
    }
}
As you can see Circle class setters publish events using inherited publishEvent method. We’re almost done. To make sure everything works as expected let’s develop a simple circle event handler which will just print relevant message to the console:
// SimpleCircleEventHandler.java
public class SimpleCircleEventHandler implements CircleEventListener {

    private Map<Class<? extends CircleEvent>,
            CircleEventListener> handlers;

    public SimpleCircleEventHandler() {
        handlers = new HashMap<Class<? extends CircleEvent>,
                CircleEventListener>();
        
        handlers.put(CenterChangedEvent.class,
                new CenterChangedEventHandler());
        handlers.put(ColorChangedEvent.class,
                new ColorChangedEventHandler());
        handlers.put(RadiusChangedEvent.class,
                new RadiusChangedEventHandler());
    }

    @Override
    public void handleEvent(Circle sender, CircleEvent event) {
        CircleEventListener handler = handlers.get(event.getClass());

        if (handler != null) {
            handler.handleEvent(sender, event);
        }
    }

    private static class CenterChangedEventHandler implements
            CircleEventListener {
        
        @Override
        public void handleEvent(Circle sender, CircleEvent e) {
            CenterChangedEvent event = (CenterChangedEvent) e;
            String message = "center changed from %s to %s";
            System.out.println(String.format(message,
                    event.getOldCenter().toString(),
                    event.getNewCenter().toString()));
        }
    }

    private static class ColorChangedEventHandler implements
            CircleEventListener {
        
        @Override
        public void handleEvent(Circle sender, CircleEvent e) {
            ColorChangedEvent event = (ColorChangedEvent) e;
            String message = "color changed from %s to %s";
            System.out.println(String.format(message,
                    event.getOldColor().toString(),
                    event.getNewColor().toString()));
        }
    }

    private static class RadiusChangedEventHandler implements
            CircleEventListener {
        
        @Override
        public void handleEvent(Circle sender, CircleEvent e) {
            RadiusChangedEvent event = (RadiusChangedEvent) e;
            String message = "radius changed from %d to %d";
            System.out.println(String.format(message,
                    event.getOldRadius(), event.getNewRadius()));
        }
    }
}
And finally there is a small main program to wire up everything together:
// ObserverMain.java
public class ObserverMain {
    public static void main(String[] args) {
        Circle circle = new Circle(new Point(10, 10), 15, Color.RED);
        circle.addListener(new SimpleCircleEventHandler());

        circle.setCenter(new Point(5, 5));
        circle.setRadius(20);
        circle.setColor(Color.GREEN);
    }
}
This program will print the following output:

center changed from java.awt.Point[x=10,y=10] to java.awt.Point[x=5,y=5]
radius changed from 15 to 20
color changed from java.awt.Color[r=255,g=0,b=0] to java.awt.Color[r=0,g=255,b=0]

Conclusion

Ok, that’s it. As you can see, modern Java allows for building a better and more type-safe Observer design pattern implementation than the standard JDK facilities. Of course it isn’t as smooth as it could be (compared to events in C#, for example), but one day it will certainly get there :)

Thanks for reading,
See you soon!

Friday, January 11, 2013

"Singleton" Design Pattern in Java

Introduction

The Singleton design pattern is a very disputable topic. There are tons of discussions on the Internet about Singleton usage in application design: why to use it, why not to use it, why singletons are evil, etc. In this post I want to put aside the “why?” part of the issue and concentrate on the “how?” part, in particular how to implement the Singleton design pattern in Java. The rest of the post enumerates all the Singleton implementation options I’ve ever encountered and describes their pros and cons. It doesn’t pretend to completeness but rather reflects my personal experience of working with this design pattern.

General Information

Singleton design pattern is intended to restrict the instantiation of a class to one (single) object. It should do the following things:

  • Ensure that only one instance of a class is created.
  • Provide a global point of access to that instance.

Let’s see how aforementioned points can be implemented in Java.

Singleton in Java

The following subsections describe different approaches to implementing the Singleton design pattern in Java. If you are not interested in the evolution of Singleton in Java and just want the best working solution quickly, jump straight to the last subsection, which describes a Singleton implementation based on a single-element enumeration.

First Option

The most obvious Singleton implementation looks like this:
public class Singleton1 {
    private static Singleton1 instance;

    private Singleton1() {
    }

    public static Singleton1 getInstance() {
        if (instance == null) {
            instance = new Singleton1();
        }

        return instance;
    }
}
This solution has one serious drawback: it doesn’t take threading into account and hence may not work as expected in a multithreaded environment. If two or more threads call the getInstance() method simultaneously, a race condition occurs. For example, if 5 threads call getInstance() at the same time, you may end up with the static instance field being assigned up to 5 times (depending on thread scheduling).

Second Option

To tackle the threading issue the previous option suffers from, you can try the following solution:
public class Singleton2 {
    private static final Singleton2 instance = new Singleton2();

    private Singleton2() {
    }

    public static Singleton2 getInstance() {
        return instance;
    }
}
In this implementation the static instance field is created during the class initialization procedure, which the Java Language Specification guarantees to be thread-safe. Therefore this option is suitable for multithreaded environments, but it isn’t perfect either. It has the following drawbacks:

  • It cannot handle exceptions which may occur in the constructor.
  • Static instance field is no longer initialized lazily.

These points are not as critical as threading issue and they can be an acceptable trade-off depending on the task at hand. Moreover, you can easily overcome the first drawback by using a static initializer:
public class Singleton2 {
    private static final Singleton2 instance;

    static {
        try {
            instance = new Singleton2();
        } catch (Exception e) {
            // do possible error processing
            throw new RuntimeException(e);
        }
    }

    private Singleton2() throws Exception {
        // complex initialization logic which can throw an exception
    }

    public static Singleton2 getInstance() {
        return instance;
    }
}
This code looks a bit awkward, but it works and can be used if exceptions thrown from a constructor are a concern. But what if we want to preserve lazy initialization? This naturally leads us to the third option.

Third Option

The following Singleton implementation was initially suggested by Bill Pugh. It is called “Initialization on Demand Holder”. The trick is to use private nested class to hold Singleton instance:
public class Singleton3 {
    private Singleton3() {
    }

    private static class SingletonHolder {
        public static final Singleton3 instance = new Singleton3();
    }

    public static Singleton3 getInstance() {
        return SingletonHolder.instance;
    }
}
This solution leverages lazy initialization. The static instance field is initialized no earlier than the SingletonHolder class is loaded and initialized. The SingletonHolder class, in its turn, is loaded and initialized no earlier than it is first referenced. Finally, the SingletonHolder class is first referenced no earlier than the getInstance() method is called. And this is exactly what we need. Like the previous one, this implementation relies on the class initialization procedure, which the Java Language Specification guarantees to be thread-safe. Similarly to the second option, if exceptions thrown from a constructor are a concern, you can use a static initializer:
public class Singleton3 {
    private Singleton3() throws Exception {
        // complex initialization logic which can throw an exception
    }

    private static class SingletonHolder {
        public static final Singleton3 instance;

        static {
            try {
                instance = new Singleton3();
            } catch (Exception e) {
                // do possible error processing
                throw new RuntimeException(e);
            }
        }
    }

    public static Singleton3 getInstance() {
        return SingletonHolder.instance;
    }
}
So now we have a thread-safe Singleton implementation which initializes its instance lazily, on demand. If you are using Java 1.4 or earlier, this is the approach to stick with.
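The on-demand behavior is easy to observe directly. The following sketch (an illustrative class, not the Singleton3 above) logs when the constructor runs, showing that loading the outer class does not trigger construction; only the first getInstance() call does:

```java
import java.util.ArrayList;
import java.util.List;

public class HolderLazinessDemo {

    // Records the order of interesting events for inspection.
    static final List<String> log = new ArrayList<String>();

    private HolderLazinessDemo() {
        log.add("instance created");
    }

    private static class Holder {
        // Runs only when Holder is initialized, i.e. on first access.
        static final HolderLazinessDemo INSTANCE = new HolderLazinessDemo();
    }

    public static HolderLazinessDemo getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        log.add("before first getInstance() call");
        getInstance();
        log.add("after first getInstance() call");
        System.out.println(log);
        // "instance created" appears between the other two entries:
        // loading HolderLazinessDemo itself created nothing.
    }
}
```

This is the whole point of the holder idiom: laziness and thread-safety both come for free from the class initialization machinery, with no synchronization in getInstance().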

Fourth Option

Before Bill Pugh suggested his “Initialization on Demand Holder” technique one of the most popular implementations of lazy-initializing thread-safe Singleton used to look like this:
public class Singleton4 {
    private static Singleton4 instance;

    private Singleton4() {
    }

    public static synchronized Singleton4 getInstance() {
        if (instance == null) {
            instance = new Singleton4();
        }

        return instance;
    }
}
This solution has only one drawback: it uses synchronization inefficiently. Synchronization is necessary only until the static instance field is initialized; after that it simply wastes CPU cycles. In earlier implementations of the Java platform, synchronized methods and blocks were quite costly. That’s why the fifth option emerged.

Fifth Option

This solution is intended to tackle inefficient synchronization usage of the previous option. It leverages Double-Checked Locking pattern and like the previous one it also used to be very popular:
public class Singleton5 {
    private static Singleton5 instance;

    private Singleton5() {
    }

    public static Singleton5 getInstance() {
        if (instance == null) {
            synchronized (Singleton5.class) {
                if (instance == null) {
                    instance = new Singleton5();
                }
            }
        }

        return instance;
    }
}
There is only one problem with this approach: it doesn’t work! Broken Double-Checked Locking pattern is a complicated topic which goes beyond the scope of this post, but the interested reader can find more information here.

Sixth Option

Bill Pugh’s efforts on Double-Checked Locking led to changes in Java Memory Model which were incorporated in the Java Platform starting from Java 5. With these changes it became possible to make the previous solution work by adding volatile modifier to the instance field declaration:
public class Singleton6 {
    private static volatile Singleton6 instance;

    private Singleton6() {
    }

    public static Singleton6 getInstance() {
        if (instance == null) {
            synchronized (Singleton6.class) {
                if (instance == null) {
                    instance = new Singleton6();
                }
            }
        }

        return instance;
    }
}
This solution will work correctly in Java 5 and above.

Seventh Option

All previously mentioned Singleton implementations suffer from the following drawbacks:

  • It is possible to instantiate more than one Singleton instance using Java reflection. All you need to do is get the corresponding Constructor instance, make it accessible using the setAccessible() method and call the constructor reflectively. You can protect your Singleton class against this by throwing an exception from the constructor if an attempt is made to create more than one instance.
  • If the Singleton class implements the Serializable interface, additional precautions should be taken to maintain the Singleton guarantee: all fields of the Singleton class should be made transient, and the readResolve() method should be implemented to replace any deserialized instances coming from an ObjectInputStream with the only true one.
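As a sketch of the constructor-guard defense mentioned in the first point (the class name and message are illustrative, not from this post), a reflective attack is stopped by an exception thrown from the private constructor:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

public class GuardedSingleton {

    private static boolean created;

    private static final GuardedSingleton INSTANCE = new GuardedSingleton();

    private GuardedSingleton() {
        // Refuse to construct a second instance, even via reflection.
        synchronized (GuardedSingleton.class) {
            if (created) {
                throw new IllegalStateException("Already instantiated");
            }
            created = true;
        }
    }

    public static GuardedSingleton getInstance() {
        return INSTANCE;
    }

    public static void main(String[] args) throws Exception {
        Constructor<GuardedSingleton> ctor =
                GuardedSingleton.class.getDeclaredConstructor();
        ctor.setAccessible(true);
        try {
            ctor.newInstance();
            System.out.println("reflective attack succeeded");
        } catch (InvocationTargetException e) {
            // The guard fired inside the reflectively invoked constructor.
            System.out.println("blocked: " + e.getCause().getMessage());
        }
    }
}
```

It works, but every such defense has to be written and maintained by hand, which is exactly what the enum approach below makes unnecessary.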

While you could tackle the aforementioned problems manually, there is a much better approach which solves them automatically:
public enum Singleton7 {
    INSTANCE;
    // ...
}
As you might know, enumerations in Java are not just sets of constants. They can contain method implementations and hence can be used to implement Singleton logic instead of a plain class. Moreover, all serialization and multiple-instantiation issues are handled automatically. I first encountered the idea of using single-element enumerations to implement the Singleton design pattern in Joshua Bloch’s Effective Java book. At the time of this writing it is the best way to implement Singleton in Java.
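For completeness, here is a hypothetical single-element enum carrying real state and behavior, which shows that an enum-based Singleton is a full-fledged class rather than a bare constant:

```java
public enum CounterSingleton {
    INSTANCE;

    // Per-instance state: there is exactly one instance, so this is
    // effectively global state with a single point of access.
    private int count;

    public synchronized int increment() {
        return ++count;
    }

    public synchronized int current() {
        return count;
    }
}
```

Callers simply write CounterSingleton.INSTANCE.increment(). Serialization, reflective instantiation and thread-safe construction are all handled by the enum machinery itself.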

Thanks for reading,
See you soon!

Monday, January 7, 2013

Variation of "State" Design Pattern in Java

This post is about a variation of the “State” design pattern implemented in Java. I used a similar approach in one of the projects I’ve been working on and found it pretty useful. I assume you are familiar with the “State” design pattern; I would suggest reading this introduction if that is not the case. Alternatively, you can learn about the pattern from the famous GoF book.

Ok, let’s start. Consider the following scenario: you are working on a software system which should integrate at certain points with another system using SOAP web services. The workflow of your software system depends heavily on this integration, but the external system may occasionally be unavailable or take a long time to respond. This can significantly slow down your development activities.

If you know the expected outcome of the external system then you can simulate it locally during development phase without actual invocation. One of possible approaches is to put all the integration logic in a separate component and make the rest of your system talk to the external system via that component exclusively. The integration component in its turn can be configured to either invoke the real external system or to simulate locally its expected outcome. Let’s look at some code.

The integration component interface might look like this:
public interface ExternalSystemClient {

    AppResult invokeExternalSystem(AppParams params);

}
For simplicity it contains just a single method which invokes the external system and delivers its outcome. The AppParams and AppResult classes represent invocation parameters and the invocation result, respectively, in terms of the application. The integration component’s responsibility is to convert them to the form understood by the external system (the WebServiceParams and WebServiceResult classes introduced in the next listing). The implementation of the integration component might look like this:
public class ExternalSystemClientImpl implements ExternalSystemClient {
    private ClientActions clientActions;

    public ExternalSystemClientImpl(boolean isExternalSystemEnabled) {
        clientActions = isExternalSystemEnabled ?
                new ExternalSystemEnabled() : 
                new ExternalSystemDisabled();
    }

    private interface ClientActions {
        WebServiceResult invokeExternalSystem(
                WebServiceParams webParams);
    }

    private class ExternalSystemEnabled implements ClientActions {
        @Override
        public WebServiceResult invokeExternalSystem(
                WebServiceParams webParams) {
            // invoke external system and return the results
        }
    }

    private class ExternalSystemDisabled implements ClientActions {
        @Override
        public WebServiceResult invokeExternalSystem(
                WebServiceParams webParams) {
            // simulate external system outcome
        }
    }

    @Override
    public AppResult invokeExternalSystem(AppParams params) {
        // validate input parameters
        // ...
        WebServiceParams webParams = new WebServiceParams();
        // convert AppParams to WebServiceParams
        // ...
        WebServiceResult invocationResult =
                clientActions.invokeExternalSystem(webParams);

        AppResult result = new AppResult();
        // convert WebServiceResult to AppResult
        // ...
        return result;
    }
}
Let’s break this listing down into its individual parts. The ExternalSystemClientImpl class implements the ExternalSystemClient interface to provide integration with the external system. This class contains one private interface, ClientActions, which helps factor out the code that directly depends on whether the external system invocation should actually be performed. Hence there are two implementations of the ClientActions interface: ExternalSystemEnabled (which actually performs the external system invocation) and ExternalSystemDisabled (which is intended solely for development purposes and simulates the invocation locally).

Interface ClientActions plus classes ExternalSystemEnabled and ExternalSystemDisabled effectively form the crux of the “State” design pattern. A certain state is selected in the ExternalSystemClientImpl constructor depending on the value of isExternalSystemEnabled parameter. Method invokeExternalSystem() shows “State” design pattern in action. This method has the following responsibilities:

  • Validate input parameters.
  • Convert AppParams instance to the form understandable by the external system (i.e. convert AppParams instance to WebServiceParams instance).
  • Invoke the external system via web-service.
  • Convert WebServiceResult to the form understandable by our application (i.e. convert WebServiceResult to AppResult) and return it.

As you can see, the implementation of this method is completely independent of whether the external system is actually invoked or its outcome is generated locally. The more methods you have in the ExternalSystemClient interface, the greater the benefit you gain from applying the “State” design pattern.
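The structure described above can be boiled down to a self-contained sketch. All names here are illustrative stand-ins for the post’s classes, and plain strings stand in for the web-service parameter and result types:

```java
public class StateSketch {

    // Plays the role of the post's ClientActions interface.
    interface Actions {
        String invoke(String params);
    }

    // Stands in for ExternalSystemEnabled: would call the real system.
    static class RealInvoker implements Actions {
        @Override
        public String invoke(String params) {
            return "real result for " + params;
        }
    }

    // Stands in for ExternalSystemDisabled: simulates the outcome.
    static class Simulator implements Actions {
        @Override
        public String invoke(String params) {
            return "simulated result for " + params;
        }
    }

    // Stands in for ExternalSystemClientImpl: the state is selected
    // once in the constructor, and every call site stays unaware of
    // which state is currently active.
    static class Client {
        private final Actions actions;

        Client(boolean externalSystemEnabled) {
            actions = externalSystemEnabled
                    ? new RealInvoker()
                    : new Simulator();
        }

        String call(String params) {
            return actions.invoke(params);
        }
    }

    public static void main(String[] args) {
        System.out.println(new Client(false).call("order-42"));
        System.out.println(new Client(true).call("order-42"));
    }
}
```

Adding a third behavior means adding one more Actions implementation and touching only the constructor, which is the maintenance win the next section contrasts with flag checks.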

Alternative Techniques

Let’s look at some alternatives. The first and the most obvious one is to simply check the flag in the ExternalSystemClientImpl#invokeExternalSystem() method and act accordingly depending on the result. The implementation might look like this:
@Override
public AppResult invokeExternalSystem(AppParams params) {
    // validate input parameters
    // ...
    WebServiceParams webParams = new WebServiceParams();
    // convert AppParams to WebServiceParams
    // ...

    WebServiceResult invocationResult;

    if (isExternalSystemEnabled) {
        invocationResult = // ... invoke actual external system
    } else {
        invocationResult = // ... simulate invocation result locally
    }

    AppResult result = new AppResult();
    // convert WebServiceResult to AppResult
    // ...

    return result;
}
To be honest, this is not a good option. The more methods you have in the ExternalSystemClient interface, the more maintenance headaches you get due to code duplication. Assume you have 15 methods and need to introduce a third state. With the approach above you would have to revisit all 15 methods to add handling for the new state. Code duplication is always error-prone and should be avoided. The “State” design pattern works much better here because all you need to do is introduce another ClientActions implementation and slightly modify the ExternalSystemClientImpl constructor.

One viable alternative is to provide the ClientActions implementation directly via a constructor parameter. In this case the required object can easily be injected using Spring, Guice or any other dependency injection container. The invokeExternalSystem() method is implemented just as in the initial “State” design pattern version:
@Override
public AppResult invokeExternalSystem(AppParams params) {
    // validate input parameters
    // ...
    WebServiceParams webParams = new WebServiceParams();
    // convert AppParams to WebServiceParams
    // ...
    WebServiceResult invocationResult =
            clientActions.invokeExternalSystem(webParams);

    AppResult result = new AppResult();
    // convert WebServiceResult to AppResult
    // ...

    return result;
}
The constructor of the ExternalSystemClientImpl class changes slightly:
public ExternalSystemClientImpl(ClientActions clientActions) {
    this.clientActions = clientActions;
}
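To show this injection variant end to end, here is a hedged, self-contained sketch. The stub wired in main() plays the role a DI container (or a unit test) would fill; the string-based types and the "stubbed" message are assumptions made for illustration, not part of the original post.

```java
// In this variant ClientActions must be visible to callers,
// so it is a top-level interface rather than a private member.
interface ClientActions {
    String invokeExternalSystem(String webParams);
}

class ExternalSystemClientImpl {

    private final ClientActions clientActions;

    // The implementation is supplied from outside: manually here,
    // or by Spring/Guice in a real application.
    public ExternalSystemClientImpl(ClientActions clientActions) {
        this.clientActions = clientActions;
    }

    public String invokeExternalSystem(String params) {
        // validation and conversions omitted for brevity
        return clientActions.invokeExternalSystem(params);
    }
}

public class InjectionDemo {
    public static void main(String[] args) {
        // inject a local simulation, e.g. for development or a unit test
        ClientActions stub = new ClientActions() {
            @Override
            public String invokeExternalSystem(String webParams) {
                return "stubbed: " + webParams;
            }
        };
        ExternalSystemClientImpl client = new ExternalSystemClientImpl(stub);
        System.out.println(client.invokeExternalSystem("query"));
        // prints: stubbed: query
    }
}
```

The anonymous class keeps the example valid on Java 7; on Java 8+ the stub could be a lambda.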
This solution works fine and provides the same benefits as the initial one. However, the ClientActions interface and its implementations can no longer be kept private to the ExternalSystemClientImpl class. Sometimes this is exactly what we need (for example, when ClientActions is a full-fledged component injected into several other components in the system), but in this particular case ClientActions, ExternalSystemEnabled and ExternalSystemDisabled are just implementation details, not components in their own right. Hence they should be kept as private as possible.

Thanks for reading,
See you soon!