Sunday, November 30, 2014

JavaFX Code Structure

JavaFX is Oracle's attempt at a modern UI framework for the Java platform. Although it is better than Swing, it is far from what it could have been.

One of many things JavaFX is missing is the testing story.

JavaFX has this seemingly cool feature called FXML. With it, you can write declarative XML and easily let the runtime map it to code. Combine that with the UI tool SceneBuilder and you have a convenient way to create complex interfaces. There is a catch though. FXML uses the concept of a controller, which is where your Java code lives. Such a controller can look something like this:



public class PersonController {

    @FXML
    Button button;
    @FXML
    TextField input;
    @FXML
    Label label;
    // [...]
}

The FXMLLoader does the boring work of initializing the UI nodes and binding them together according to your instructions in the XML. Practical and easy. Eventually you want to test your code. You have two approaches: unit tests or automated UI testing. For the latter you can use TestFX or JemmyFX. Automated UI testing is an expensive task which I won't go into detail with here, since you probably want to be able to unit test the FXML controllers anyway.

But you can't!


@RunWith(MockitoJUnitRunner.class)
public class PersonControllerTest {

    PersonController controller;

    @Mock
    Button button;

    @Before
    public void setUp() throws Exception {
        controller = new PersonController();
        controller.button = button;
    }

    @Test
    public void testInitialize() throws Exception {
        assertTrue(true);
    }
}

Running a simple test like the above fails horribly with a java.lang.IllegalStateException: Toolkit not initialized.

I cannot believe that Oracle delivers a UI framework that includes neither its own automated UI test tool nor a design that allows for easy unit testing of all code in the project.

So how to work around it?

Although cumbersome and a bit annoying, the only way to achieve testability is to separate the JavaFX code from the presentation logic. GWT does something similar, and even has API support for it in the way its UI nodes all implement interfaces.

It's a basic MVP structure, so in that sense there is nothing new here. The Presenter defines its view using basic types/properties, which allows the view to be mocked in unit tests. Furthermore, it ensures that the presenter focuses on what is needed instead of on how it should be done. A simple example of a presenter and its view interface:



public class PersonPresenter implements Presenter<PersonPresenter.View> {

    public static interface View extends Initializable {
        StringProperty getLabelText();
        StringProperty getButtonText();
        StringProperty getInputText();
        ObjectProperty<EventHandler<ActionEvent>> getOnAction();
    }

    View view;

    @Override
    public void initialize(View view) {
        this.view = view;

        view.getLabelText().setValue("");
        view.getButtonText().setValue("Click here");
        view.getInputText().setValue("Type something here ...");

        view.getOnAction().setValue(this::onButtonClicked);
    }

    void onButtonClicked(ActionEvent event) {
        String inputText = view.getInputText().getValue();
        view.getLabelText().setValue(inputText);
        System.out.println("Updating label to: " + inputText);
    }
}

This interface is very simple and exists mostly so the view controller can call the initialize method on the presenter from its own initialize method:


public interface Presenter<T> {
    void initialize(T view);
}

By design the view controller is very simple. It should only expose the functionality the presenter requires and not contain any logic. Since we cannot unit test it, it should be as lightweight as possible and contain as little code as possible.


public class PersonController implements PersonPresenter.View {
    @FXML
    Button button;
    @FXML
    TextField input;
    @FXML
    Label label;

    Presenter<PersonPresenter.View> presenter;

    @Override
    public void initialize(URL location, ResourceBundle resources) {
        presenter = ... // Construct the presenter or use injection
        presenter.initialize(this);
    }
    @Override
    public ObjectProperty<EventHandler<ActionEvent>> getOnAction() {
        return button.onActionProperty();
    }

    @Override
    public StringProperty getLabelText() {
        return label.textProperty();
    }


    @Override
    public StringProperty getButtonText() {
        return button.textProperty();
    }

    @Override
    public StringProperty getInputText() {
        return input.textProperty();
    }
}
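With the view hidden behind an interface, the presenter's logic can be exercised without ever starting the JavaFX toolkit. The following is a stripped-down sketch of that idea: plain strings stand in for the JavaFX properties, and all names are illustrative rather than the actual classes above.

```java
// Sketch: unit testing a presenter without the JavaFX toolkit.
// Plain strings stand in for JavaFX properties; names are illustrative.
public class Main {

    interface View {
        String getInputText();
        void setLabelText(String text);
    }

    static class Presenter {
        private final View view;
        Presenter(View view) { this.view = view; }

        // Mirrors onButtonClicked: copy the input field into the label
        void onButtonClicked() {
            view.setLabelText(view.getInputText());
        }
    }

    public static void main(String[] args) {
        // Hand-rolled fake view; Mockito would serve the same purpose
        final String[] label = new String[1];
        View fake = new View() {
            public String getInputText() { return "hello"; }
            public void setLabelText(String text) { label[0] = text; }
        };
        new Presenter(fake).onButtonClicked();
        System.out.println("label = " + label[0]);
    }
}
```

No Toolkit not initialized exception here, because nothing in the test touches an actual JavaFX node.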

You can view the code under the sample-presenter folder in this project on GitHub.

Sunday, November 23, 2014

Database Semaphores

The Challenge

How do you run the EJB TimerService in a multi-node, non-clustered Java EE 7 environment?

Using the EJB Timer Service is a convenient way to produce scheduled tasks, either through the explicit API or through automatic timers with the annotation API.

If the environment were clustered, the EJB implementation might guarantee a single execution of the timeout methods. Clustering introduces other challenges though, and it really depends on which application server is used.

How do we handle the fact that each node in the farm will execute the timeout methods?

Solution 1: Special Node

One solution would be to promote one node to be special. This node would have the responsibility of performing the code that fulfills the business needs. The other nodes would recognize that they are not promoted and avoid performing the code. One way to do this could be a JVM system property or an environment variable that denotes the responsibility of timer executioner.
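As a sketch, such a check could look like this. The property name app.timer.master is hypothetical; any agreed-upon flag would do.

```java
// Sketch of the "special node" check; the property name is hypothetical.
public class Main {

    static boolean isTimerMaster() {
        // Only the node started with -Dapp.timer.master=true runs the timers
        return Boolean.getBoolean("app.timer.master");
    }

    public static void main(String[] args) {
        if (isTimerMaster()) {
            System.out.println("executing scheduled business logic");
        } else {
            System.out.println("skipping: not the special node");
        }
    }
}
```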

This solution has several drawbacks though:

  • Configuration complexity - the nodes in the farm are no longer equal; one needs to be configured differently
  • Operations - the special node has a greater responsibility, and its operation must be ensured before the other nodes'
  • Scalability - when the business needs increase, chances are the special node cannot keep up. The need arises for one more special node, further adding to the complexity

Solution 2: Database Semaphore

In concurrent programming, a semaphore is used to restrict access to a common resource. In this case the resource is the right to perform the business logic. Using a semaphore, it is possible to implement a strategy where each node in the farm attempts to do the business logic as long as it is able to obtain the semaphore lock.

In fact, this is one of the strategies scheduling frameworks like Quartz use when doing work in a concurrent environment.

For a simple EJB timer this is really easy to do. Create an entity and a corresponding database schema:


@Entity
@Table(name = "semaphore")
public class SemaphoreEntity {

    @Id
    @Enumerated(EnumType.STRING)
    Semaphore id;

    public Semaphore getId() {
        return id;
    }

    public void setId(Semaphore id) {
        this.id = id;
    }
}

And an enum; the database table is pre-filled with rows matching these values:


public enum Semaphore {
    BUSINESS,
    BLACKOPS
}

Now it is easy to get a lock for the task at hand:


try {
   entityManager.find(SemaphoreEntity.class, 
                   Semaphore.BUSINESS, 
                   LockModeType.PESSIMISTIC_WRITE);
   System.out.println("Perform Business Work here");
} catch (PersistenceException e) {
   System.out.println("Accept you are not the Special Node");
}

As with any solution there are a few drawbacks to this approach:

  • The database is the constraint of the operation. The ability to scale out is limited by the database.
  • There are faster approaches, although for most cases this is plenty fast
  • Using a pessimistic write lock works slightly differently depending on the JPA implementation and JDBC driver. For instance, the PostgreSQL driver requires a lock timeout of 0 for this to work
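The lock timeout can be passed through the standard JPA hint javax.persistence.lock.timeout. The fragment below is a sketch, not runnable on its own, since it assumes a container-managed EntityManager and the entity from above:

```java
// Sketch: pass the standard JPA lock timeout hint so the attempt
// fails fast instead of blocking on another node's lock.
Map<String, Object> hints = new HashMap<>();
hints.put("javax.persistence.lock.timeout", 0);

try {
    entityManager.find(SemaphoreEntity.class,
                       Semaphore.BUSINESS,
                       LockModeType.PESSIMISTIC_WRITE,
                       hints);
    // we hold the semaphore: perform the business work
} catch (PessimisticLockException | LockTimeoutException e) {
    // another node holds the semaphore: skip this round
}
```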

Check out the proof of concept code on GitHub.

Friday, November 14, 2014

Binding to Bean Validations

Currently I am working on a JavaFX application. As with most UI applications, there needs to be input validation. Doing that on the server is a solved problem with the Bean Validation API. It is easy to use and well integrated into the Java EE technology stack.

It is not the same story in the user interface!

In most cases you want to connect the error with a specific interface element. This could be done by displaying an error text next to the field or by changing the field's background color, all in order to make it easy for the user to find the place where something needs to be fixed.

To connect a UI element with a specific violation source on an entity, the ConstraintViolation interface provides the getPropertyPath() method. This returns a Path, which basically boils down to a string formatted after a bean-style convention, e.g. parent.name.

A naive approach to bind the violation to the UI elements would be something similar to:

  
void handleViolation(ConstraintViolation<?> violation) {
   String path = violation.getPropertyPath().toString();
   String message = violation.getMessage();
   if( "parent".equals(path) ) {
      showParentError(message);
   } else if( "parent.name".equals(path) ) {
      showParentNameError(message);
   } else {
      showDefaultError(message);
   }
}
  

The problem with this code is that you don't get any compiler support. When you refactor the entity, the code will break silently. We need a better way to do this: one that uses the strong typing of the language and remains flexible enough to do the binding.

  
  void bind(errorHandler, Entity::getField);
  

Fortunately, Java 8 brings method references that allow for this syntax. Now we can create an API like this:

  
ValidationBinder<Person> v = new BeanValidator<>(Person.class)
  // simple binding - a normal use case
  .bind(Handlers.messages(phoneErrors::setText), Person::getPhone)
  // multi binding with chained handlers
  .bind(Handlers.messages(nameErrors::setText)
        .andThen(Handlers.styling(nameBox, "error")),
        pojo -> {
          pojo.getFirstName();
          pojo.getLastName();
        })
  // binding to fields in the entity object graph
  .bind(Handlers.messages(carErrors::setText), pojo -> {
     pojo.getCar().getEngine();
     pojo.getCar().getPrice();
  });

The signature of the bind method is

  
ValidationBinder<T> bind(Consumer<List<String>> handler, Consumer<T> binder);

The first argument is the lambda that will receive the errors that validation of an entity has produced on the fields bound in the second argument. Both arguments use the Java 8 functional interfaces. This gives a certain convenience, as these contain solid default methods allowing for the chaining seen in the andThen call.

An example of creating a new handler:

  
Consumer<List<String>> messages(Consumer<String> consumer) {
    return (messages) -> {
        String message = String.join("\n", messages);
        consumer.accept(message);
    };
}
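The chaining needs nothing beyond the JDK, since andThen is a default method on java.util.function.Consumer. A self-contained sketch of two handlers reacting to the same violation list (the handler names here are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class Main {
    public static void main(String[] args) {
        StringBuilder label = new StringBuilder();
        StringBuilder log = new StringBuilder();

        // A handler like messages(...): join the violations with newlines
        Consumer<List<String>> toLabel =
                msgs -> label.append(String.join("\n", msgs));
        // A second, independent handler
        Consumer<List<String>> toLog =
                msgs -> log.append(msgs.size()).append(" violation(s)");

        // andThen runs both handlers, in order, on the same list
        Consumer<List<String>> chained = toLabel.andThen(toLog);
        chained.accept(Arrays.asList("name is required", "name is too short"));

        System.out.println(label);
        System.out.println(log);
    }
}
```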

This approach gives us some benefits:

  • Refactoring preserves validation handling
  • The API is flexible - you just need a lambda to bind error messages
  • It is possible to listen to the same violation source from several handlers
  • Nested entities can be bound

There are some drawbacks though:

  • The BeanValidator uses cglib to create the recording binder. That can be an issue in environments that use a SecurityManager
  • Recording is done by calling methods on the entity. These method names are converted to bean-style naming which - hopefully - matches the class field that holds the constraint annotation. If that is not the case, the BeanValidator won't work
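The recording idea itself can be illustrated with a plain JDK dynamic proxy, which works for interfaces; the real BeanValidator uses cglib so it can proxy classes. Everything below is a simplified sketch with made-up names:

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Illustrative entity interface, not the article's actual Person class
    interface Person {
        String getFirstName();
        String getLastName();
    }

    public static void main(String[] args) {
        List<String> recorded = new ArrayList<>();
        // The proxy records which getters the binder lambda touches
        Person recorder = (Person) Proxy.newProxyInstance(
                Main.class.getClassLoader(),
                new Class<?>[]{Person.class},
                (proxy, method, methodArgs) -> {
                    String name = method.getName();
                    if (name.startsWith("get") && name.length() > 3) {
                        // bean-style conversion: getFirstName -> firstName
                        recorded.add(Character.toLowerCase(name.charAt(3))
                                + name.substring(4));
                    }
                    return null; // getters return String here, null is fine
                });

        // "Record" the fields a binder lambda would touch
        recorder.getFirstName();
        recorder.getLastName();
        System.out.println(recorded);
    }
}
```

The recorded property names are then what gets matched against each violation's getPropertyPath().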

In conclusion, I believe the benefits of this solution far outweigh the drawbacks. It is simple to create the binding between the constraints and the handler that shows them to a user. The drawbacks are easily circumvented by using Java conventions and best practices.

Check out the code in my GitHub project validation-binder.