Manage Node.js dependencies properly

The author of any piece of software bigger than a simple “Hello World” will sooner or later face the problem of dependency management. In this post I would like to look at that problem and its possible solutions with regard to Node.js applications.
In the Node.js world managing dependencies with npm is exceptionally simple; however, there are some caveats everyone should be aware of.

Have a look at an example package.json file:


{  
  "name": "",  
  "description": "",  
  "version": "0.0.1",  
  "private": true,  
  "dependencies": {    
    "express": "~4.2.x",    
    "compression": "~1.0.x",    
    "serve-favicon": "~2.0.x",    
    "morgan": "~1.0.x",    
    "cookie-parser": "~1.1.x",    
    "body-parser": "~1.2.x",    
    "method-override": "~1.0.x",    
    "express-session": "~1.1.x",    
    "serve-static": "~1.1.x",    
    "connect-redis": "~2.0.x",    
    "redis": "~0.10.2",    
    "open": "~0.0.x",    
    "request": "~2.34.x",    
    "connect": "~2.14.x",    
    "connect-flash": "~0.1.x",    
    "crypto": "~0.0.x",    
    "passport": "~0.2.x",    
    "passport-local": "~1.0.x",    
    "underscore": "~1.6.x",    
    "async": "~0.6.x",    
    "moment": "~2.5.x",    
    "ejs": "~1.0.x",    
    "cookie": "~0.1.x",    
    "winston": "~0.7.x",    
    "path": "~0.4.x",    
    "stompjs": "~2.3.x",    
    "socket.io": "~0.9.x",    
    "forever-monitor": "~1.2.x",  
  },  

  "devDependencies": {    
    "supertest": "latest",    
    "should": "latest",    
    "karma": "latest",    
    "karma-junit-reporter": "latest",    
    "karma-ng-html2js-preprocessor": "latest",    
    "grunt": "latest",    
    "grunt-contrib-jshint": "latest",    
    "jshint-junit-reporter": "latest",    
    "grunt-devperf": "latest"  
  }
}

Every package has a version number which complies with what is described as semantic versioning. Generally speaking, it takes the form major.minor.patch, where major, minor and patch are integers increased with each new release. At first sight, a package.json like the one above seems to be a defensive approach to dependency management. With the tilde ranges we decide that every run of npm install downloads and installs the latest patch release of the specified major.minor version; for example, "~1.2.3" will match 1.2.4 but not 1.3.0. In other words, we accept bugfixes to the current version but refuse to download new minor or major versions. While this may be acceptable in a development environment, the approach needs to be reviewed when production deployment is considered. Deploying to production we want to be 100% sure that we deploy a software package which has already been tested. With this setup we cannot guarantee that: running npm install on production may well download a new (bugfix) version of some package.

How to deal with it? There are at least a few possible solutions:

  • use a specific version (for example “forever-monitor”: “1.2.3”) in package.json
    • pros
      • every time npm install is executed, the same version is downloaded and installed
    • cons
      • every version change requires editing the package.json file
      • it is still possible that the package author republishes an existing version of the package, and as a result we get different code on production
      • in a development environment it is desirable that new patches are installed automatically, since the software is constantly being tested, and exact versions prevent that
  • npm install dependencies on the development environment and commit the results into the version control system
    • pros
      • we deploy the exact bits which were checked in into version control
    • cons
      • in some cases npm install not only downloads packages but also builds system-dependent binaries, so packages installed on a Microsoft Windows system will not necessarily be suitable for a Linux operating system
      • it is error prone, since someone can check in a source change but not regenerate the binaries
      • it is redundant, since binaries can always be built from sources; just as we do not commit built artifacts in the Java ecosystem, we should not do it in the Node.js environment either
  • npm shrinkwrap – the command locks down the versions of a package’s dependencies; it produces npm-shrinkwrap.json, containing the specific versions at the current time, which is used instead of package.json in subsequent npm install commands (see the sketch after this list)
    • pros
      • all dependencies are locked down so every npm install will download the same version
    • cons
      • there is still a possibility that the package author will republish code without changing the version number
  • npm shrinkwrap on the development environment and tar up dependencies on the test environment before distributing them to production; the test environment is the same as production as far as the platform is concerned
    • pros
      • changes to package versions are acceptable on the development environment (update the shrinkwrapped package as described in the manual)
      • the version deployed on production is always the same version which was tested
    • cons
      • patches are no longer downloaded automatically on every npm install on the development environment; updating requires some manual actions, though not in the package.json file
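
To make the shrinkwrap route concrete, here is a minimal sketch of the workflow. The package name and exact versions below are hypothetical, and the file is truncated to a single dependency:

npm shrinkwrap

The command is run after npm install and freezes the tree currently present in node_modules into npm-shrinkwrap.json:

{
  "name": "my-app",
  "version": "0.0.1",
  "dependencies": {
    "express": {
      "version": "4.2.0",
      "from": "express@~4.2.x",
      "dependencies": {
        "cookie": {
          "version": "0.1.2",
          "from": "cookie@0.1.2"
        }
      }
    }
  }
}

Subsequent npm install runs then read npm-shrinkwrap.json instead of the version ranges in package.json.
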
This was a quick review of a few available package management methods. Of course there are more possibilities, such as maintaining a mirror of a part of the npm registry, but this is a subject for another post.
Do not expect to find one solution that always fits. If specifying exact package versions works for you, carry on with it. Personally, I prefer the last route (npm shrinkwrap and tarred-up dependencies).

Another useful tool from Zeroturnaround – Xrebel

Some time ago I signed up for beta testing of a new tool from Zeroturnaround: Xrebel. I had been waiting a bit impatiently for the testing program to start. Finally, I got the information that version 1.0.0 of the tool was available for download. Without much hesitation I started testing.

What does the tool do?

As you may expect, based on previous Zeroturnaround tools, it is generally intended to improve the quality of a Java developer's everyday work. More precisely, it allows developers to monitor a JVM application live with regard to session size and the SQL queries sent to the underlying database.

Installation

There is no doubt that the Zeroturnaround guys know all the ins and outs of the JVM, so the installation is as simple as adding the -javaagent:[path]xrebel.jar JVM parameter to the application server. And that is it. When the application server starts, the following output is presented:


Application successfully started with Xrebel assist.

When the web application is opened in the browser for the first time with Xrebel, a simple form is presented allowing the user to activate the tool.

Let’s explore the tool’s features.

Configuration

When the application is opened in the web browser, Xrebel adds a small toolbar on the left side of the screen:

Picture 1. Toolbar

On the first click on any option, the Settings window is shown, where the user can enter the name of the package to be monitored and the thresholds Xrebel will use to notify the user when they are exceeded:

Picture 2. Package settings

Picture 3. Thresholds settings

I expected an easy setup and Xrebel did not disappoint me.

Monitoring

The main feature of Xrebel is live application monitoring and, as shown in Picture 1, general information is presented on the toolbar itself. The first section regards SQL queries: the first number (5) indicates the number of SQL queries executed so far, and the next position shows the queries' execution time (288.5 ms). The section below shows session information: the total size and the size difference from the last request.

When a section is clicked, additional information is presented, such as the exact query executed in the database, the number of rows returned and the execution time:


As far as the session is concerned, the sizes of each stored element are shown.

Conclusion

The main purpose of Xrebel is to quickly find bugs related to dodgy database access and abnormal session size growth. In my opinion, even though it is an early beta version, it fulfills expectations.
It is a simple, unobtrusive tool providing all the information a developer needs to track down the cause of an issue.

If you are interested in the tool, sign up for Xrebel beta testing:

Xrebel beta tests

Hibernate + 2nd level cache bug

Recently, I have come across a nasty bug: when Hibernate executes a cacheable query and applies a ResultTransformer to the result of that query, a java.lang.ClassCastException is thrown.

session.createCriteria(RateEntity.class)
    .add(Restrictions.eq(CURR_ID, criteria.getCurrency()))
    .add(Restrictions.eq(TYPE, criteria.getRateType().name()))
    .add(Restrictions.gt(TS_CRT, criteria.getFromTime().toDate()))
    .setProjection(Projections.projectionList()
        .add(Projections.property(TS_CRT), TIME_DTO_PROPERTY)
        .add(Projections.property(RATE), RATE_DTO_PROPERTY))
    .addOrder(Order.asc(TS_CRT))
    .setCacheable(true)
    .setResultTransformer(Transformers.aliasToBean(RateDTO.class))
    .list();

It seems that the ResultTransformer is applied to the result before it is put into the cache. The cache expects a list of raw Object arrays but receives the transformed results (in this case a list of RateDTOs). It cannot deal with them, and as a result the exception is thrown.

The bug is present in Hibernate 3.6.6.Final version.

One of the workarounds would be to remove

.setResultTransformer(Transformers.aliasToBean(RateDTO.class))

and allow the query to return a list of raw Object arrays.

Having that list, the rows can be transformed manually into the desired beans:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.commons.collections.CollectionUtils;
import org.hibernate.transform.AliasToBeanResultTransformer;

@SuppressWarnings("unchecked")
public static <T> List<T> transformToBean(Class<T> resultClass, List<String> aliasList, List<Object[]> resultList) {
  if (CollectionUtils.isEmpty(aliasList)) {
    throw new IllegalArgumentException("aliasList is required");
  }
  if (CollectionUtils.isEmpty(resultList)) {
    return Collections.emptyList();
  }

  // toArray(new String[0]) is important: a plain toArray() returns Object[]
  // and casting that to String[] would itself throw a ClassCastException
  String[] aliases = aliasList.toArray(new String[0]);

  List<T> transformedList = new ArrayList<T>();
  AliasToBeanResultTransformer transformer = new AliasToBeanResultTransformer(resultClass);
  for (Object[] tuple : resultList) {
    transformedList.add((T) transformer.transformTuple(tuple, aliases));
  }
  return transformedList;
}
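
Putting it together, the criteria query can stay cacheable, just without the transformer; it then returns raw tuples which are converted after the fact. A minimal sketch reusing the constants from the query above (the alias list must mirror the projection aliases; java.util.Arrays is assumed imported):

@SuppressWarnings("unchecked")
List<Object[]> rows = session.createCriteria(RateEntity.class)
  .add(Restrictions.eq(CURR_ID, criteria.getCurrency()))
  .setProjection(Projections.projectionList()
    .add(Projections.property(TS_CRT), TIME_DTO_PROPERTY)
    .add(Projections.property(RATE), RATE_DTO_PROPERTY))
  .setCacheable(true)
  .list();

List<RateDTO> rates = transformToBean(RateDTO.class,
  Arrays.asList(TIME_DTO_PROPERTY, RATE_DTO_PROPERTY), rows);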

Java enums + template method design pattern

Let’s consider the following code snippets:

public enum Currency { 
  EUR, USD
}

 

String currency = "EUR";

if(Currency.EUR.name().equals(currency)){ 
  System.out.println("Transfer permitted");
}

How often do we see much the same scenario? It is not completely wrong, since there is an attempt to use enums, so it is not entirely “string-driven” programming. However, there is still room for some refactoring.

What about doing such things in a more object-oriented way? Enums are a very powerful Java feature, but in most cases they are used only in the simplest possible way.

public enum Currency {
  EUR() {
    @Override
    public boolean isTransferPermitted() {
      return true;
    }
  },
  USD() {
    @Override
    public boolean isTransferPermitted() {
      return false;
    }
  };

  public abstract boolean isTransferPermitted();
}

 

Currency currency = Currency.valueOf("EUR");

if(currency.isTransferPermitted()){  
  System.out.println("Transfer permitted");
}

In my opinion, the refactored code is clearer and more self-documenting than the original one. Generally speaking, it is the Template Method design pattern applied to Java enums.
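
One caveat worth remembering when the currency still arrives as a string: valueOf() throws an IllegalArgumentException for names it does not know. A minimal sketch of a guard (the helper method below is mine, not part of the original example):

public static boolean isTransferPermitted(String currencyCode) {
  try {
    return Currency.valueOf(currencyCode).isTransferPermitted();
  } catch (IllegalArgumentException e) {
    return false; // unknown currency - no transfer
  }
}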

Java application performance monitoring

I am going to present a quick overview of tools which can be helpful when you are trying to troubleshoot an application with performance issues.

For each tool I provide a short overview, the way it is installed, whether it is free or paid, whether it is included with the JDK, and whether it can be used in a production environment.

  • logs
    • application logs, application server logs, database logs
    • no installation
    • free
    • included with JDK
    • can be used in production
  • application server monitoring
    • administration console
    • provided by default with application server or installation needed – varies between application servers
    • free (comes with application server)
    • not included with JDK
    • can be used in production (if properly secured)
  • New Relic
    • application performance management and monitoring
    • application has to be started with agent
    • basic features: free; advanced features: paid
    • not included with JDK
    • can be used in production
  • AppDynamics
    • application performance management software; useful for developers as well as operations; great visual tool
    • application has to be started with agent
    • lite version: free; pro version: paid
    • not included with JDK
    • can be used in production
  • YourKit
    • CPU and memory profiler
    • installation needed
    • paid
    • not included with JDK
    • cannot be used in production
  • JVisualVM
    • CPU and memory profiler
    • no installation needed
    • free
    • provided with JDK
    • some features can be used in production
  • JConsole
    • JMX metrics
    • no installation needed
    • free
    • provided with JDK
    • can be used in production
  • JMap/JHat
    • JMap prints a Java process memory map and dumps the heap; JHat browses heap dumps
    • no installation needed, attaches to the running process
    • free
    • included with JDK
    • cannot be used in production
  • JStack
    • prints a thread dump of a Java process
    • no installation needed, attaches to the running process
    • free
    • included with JDK
    • can be used in production
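
To give a feel for the JDK command-line tools listed above, here is a typical troubleshooting sequence; the PID (12345) and file names are placeholders:

jps                                      # list running JVM processes and their PIDs
jstack 12345 > threads.txt               # capture a thread dump
jmap -dump:format=b,file=heap.bin 12345  # dump the heap to a binary file
jhat heap.bin                            # browse the dump at http://localhost:7000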

    JPA EntityManager operations order

    I ran into an interesting issue recently. In the project I use JPA + Hibernate + EJB. The issue concerns saving and deleting entities in the same transaction. The database table involved has a unique constraint defined on two columns.

    What I did was remove an entity by calling

    entityManager.remove();

    then a new entity was added, with the same values in the two properties mapped to the unique-constraint columns but different values in the other properties, using:

    entityManager.persist();

    Those two operations were carried out in a single transaction and were executed in the order presented above: removal first, addition second.
    As it turned out, the operations were executed on the database in the reverse order and the unique constraint got violated. It looked as if the new entity was inserted before the previous one was deleted.

    Apparently, looking at the JPA specification, it does not force implementations to execute operations on the database in the order they were added to the transaction.

    To deal with situations like the one above, JPA provides the

    entityManager.flush();

    method, whose responsibility is to synchronize the persistence context to the underlying database.
    So, to avoid the unique constraint violation, you need to call the flush() method after the remove() method.

    What is more, there is no risk that the entity will be removed anyway if the transaction is rolled back after calling flush(). flush() forces the persistence context to be synchronized to the database, but the transaction is still not committed unless it is committed manually. If the EJB layer uses the default configuration and JTA, the transaction will be committed only after the method returns from the EJB layer.
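
    A minimal sketch of the fix; the entity type and method below are hypothetical, what matters is the flush() call between the two operations:

    // runs inside a single (e.g. container-managed JTA) transaction
    public void replace(EntityManager em, Rate oldRate, Rate newRate) {
        em.remove(oldRate);
        // force the DELETE to hit the database now, before the INSERT below;
        // the transaction itself is still not committed at this point
        em.flush();
        em.persist(newRate);
    }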

    Grails + Axon = CQRS (part II)

    As promised in one of the previous posts, I am carrying on with the series concerning Groovy, Grails and, in this case, the Axon framework. In Part I a few words were written about Grails, CQRS and Axon.
    In this part, a sample application using the mentioned technologies and architecture is presented.
    The application is developed in Grails. The CQRS architecture is implemented with the support of the Axon framework.

    There are two simple use cases in the application:

    • entering a new credit card transaction
    • canceling a created credit card transaction

    If you need more information about the technologies used, go back to Part I. In this post I will mainly concentrate on the code base.

    I will try to present whole classes, but in some cases they contain less important code, not relevant to CQRS, so I will skip those parts. The entire project can be pulled from GitHub.

    Let’s start with a quick project structure overview:

    Picture 1. Project overview

    Controller

    CQRS architecture introduces a division of the application into two separate parts: one responsible for data persistence and domain logic, and another responsible for fetching data in order to present it to the user.
    I decided to use the Grails domain classes feature to implement the read part, while implementing the proper domain logic in separate classes placed outside grails-app, in the src folder. So grails-app/domain/grails/cqrs holds the read model, while the write model is located in src/grails/cqrs/domain.

    The analysis will be done starting from the view part down to the backend.

    A controller in Grails is a component handling requests and preparing responses. In Grails, controllers are simple classes whose names end in Controller. They are placed in the grails-app/controllers location.
    Have a look at part of the CreditCardTransactionController:

    class CreditCardTransactionController {
      DefaultCommandGateway defaultCommandGateway; 
      ....
    }

    The command gateway is an interface to the command dispatching/handling mechanism. Axon provides the CommandGateway interface containing signatures of methods responsible for sending commands synchronously, asynchronously, with a timeout and so on. Obviously, you can implement that interface and provide your own mechanism for dispatching commands. However, Axon provides the default implementation DefaultCommandGateway, which can be used straight away.

    In the controller we have two important methods: save and cancel. They are responsible for handling requests for the two basic use cases of the application: creating and canceling a credit card transaction.

    @Transactional
    def save(CreateTransactionCommand command) {
      if (command == null) {
        notFound()
        return
      }
      if (command.hasErrors()) {
        respond command.errors, view: 'create'
        return
      }
      command.id = randomUUID()
      defaultCommandGateway.send(command)
      flash.message = 'Credit card transaction created'
      redirect(action: "index")
    }

    @Transactional
    def cancel(CancelTransactionCommand command) {
      if (command == null) {
        notFound()
        return
      }
      CreditCardTransaction creditCardTransaction = CreditCardTransaction.findById(command.id)
      command.aggregateIdentifier = creditCardTransaction.aggregateIdentifier
      defaultCommandGateway.send(command)
      request.withFormat {
        form {
          flash.message = "Credit card transaction cancelled"
          redirect action: "index", method: "GET"
        }
        '*' { render status: NO_CONTENT }
      }
    }

    Command Object

    The first interesting thing is the command parameter of each method. A command object in Grails, when used as a controller parameter, is a convenient way to pass request data. It uses the data binding mechanism to bind request parameters to the command object's properties, so there is no need to get the data from the request itself any more. What is more, it allows data validation. It is also the mechanism to use when implementing the command design pattern.

    As an example have a look at the CreateTransactionCommand:

    @grails.validation.Validateable
    class CreateTransactionCommand {
      UUID id
      String creditCardNumber
      String creditCardCvvCode
      String creditCardOwner
      Date validDate
      BigDecimal amount

      static constraints = {
        creditCardNumber blank: false, size: 16..16
        creditCardCvvCode blank: false, size: 3..3
        creditCardOwner blank: false
        validDate nullable: false
        amount nullable: false
      }
    }

    It contains the request data, validation rules and an id. Because those commands will be dispatched to the backend, they should have a unique id. This id can be assigned in different parts of the application; I decided to assign it in the controller (command.id = randomUUID() in the save method). It will be used as the aggregate identifier later on during command processing.

    Command Gateway

    Commands are sent to the command handling mechanism using the command gateway mentioned earlier. Apart from the simple send(Object command) method, which sends a command and returns immediately without waiting for its execution, CommandGateway also provides the following methods (a usage sketch follows the list):

    • void send(Object command, CommandCallback<R> callback)
      • sends a command and has the command result reported to the given callback
    • R sendAndWait(Object command)
      • sends a command and waits for its execution
    • R sendAndWait(Object command, long timeout, TimeUnit unit)
      • sends a command and waits for its execution until the given timeout is reached
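
    For example, sending synchronously when the caller needs to know the outcome could look as follows; a minimal sketch, and the two-second timeout is arbitrary:

    import java.util.concurrent.TimeUnit

    // block until the command has been handled (or fails)
    def result = defaultCommandGateway.sendAndWait(command)

    // block, but give up after two seconds
    def timedResult = defaultCommandGateway.sendAndWait(command, 2, TimeUnit.SECONDS)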

    Let's come back for a moment to the controller and its cancel method, which is responsible for canceling a credit card transaction. The problem here is that the read and write models need to be matched in some way. On the view we have data from the read model (grails-app/domain), while the command sent to the write model needs to identify the proper aggregate to update (delete in this case; this is only for presentation purposes, in the real world such a case should not delete an aggregate).
    The matching is done through the aggregate identifier, which the read model stores for each transaction and which the cancel action looks up before sending the command.

    Command Handler

    So far we have sent a command using the command gateway. It is then forwarded through the command bus to the appropriate command handler. The infrastructure, like the command bus, will be discussed a little later. A command handler is an object receiving commands of a specified type.
    Command handlers can be created by implementing the CommandHandler interface, which defines a single method: Object handle(CommandMessage<T> command, UnitOfWork uow). Such a command handler needs to be subscribed to/unsubscribed from the command bus manually.
    I decided to use annotation-based command handlers. In that case, a command handler can be any POJO class with a handle method annotated with the @CommandHandler annotation. That method should declare the command object as its first argument. It is possible to add more arguments, but the first one must be the command object.

    @Component
    class CreateTransactionCommandHandler {
      @CommandHandler
      void handle(CreateTransactionCommand command) throws Throwable {
        CreditCardTransactionAggregate creditCardTransactionAggregate = new CreditCardTransactionAggregate(command)
        EventSourcingRepository eventSourcingRepository = SpringBeanResolver.resolve("creditCardTransactionRepository")
        eventSourcingRepository.add(creditCardTransactionAggregate)
      }
    }

    I created two separate command handlers, one for each command type. Annotation-based configuration allows defining multiple methods in one object, each handling a specific command. What is more, aggregate methods can be annotated as command handlers; for example, an aggregate constructor can be annotated with @CommandHandler, so sending that command results in creating a new aggregate.
    In the sample application the aggregate is created in the command handler itself and saved to the event store using a repository.

    The SpringBeanResolver.resolve() call in the above listing is specific to Grails. I needed to create a simple Spring bean resolver because Grails dependency injection (based on Spring) does not inject dependencies into objects located outside grails-app.
    Repository will be discussed later on during Spring configuration analysis since it is configured there.

    Aggregate

    As we have the aggregate created and saved to the store, let's have a look at its implementation:

    class CreditCardTransactionAggregate extends AbstractEventSourcedAggregateRoot {
      @AggregateIdentifier UUID identifier 
      String creditCardNumber 
      String creditCardCvvCode 
      Date validDate 
      BigDecimal amount 
      Date transactionDate 
      Date cancellationDate 
      CreditCardOwner creditCardOwner 
    
      CreditCardTransactionAggregate() {}
    
      CreditCardTransactionAggregate(CreateTransactionCommand command) {
        apply(new TransactionCreatedEvent(command.id, command.creditCardNumber, command.creditCardCvvCode, command.creditCardOwner, command.validDate, command.amount, new Date()))
      }
    
      void cancelTransaction(CancelTransactionCommand command) {
        apply(new TransactionCancelledEvent(command.id, command.aggregateIdentifier))
      }
    
      @Override 
      protected Iterable <? extends EventSourcedEntity > getChildEntities() {
        return null
      }
    
      @Override 
      protected void handle(DomainEventMessage event) {
        if (TransactionCreatedEvent.class.isAssignableFrom(event.getPayload().getClass())) {
          handleTransactionCreated(event.getPayload())
        }
    
        if (TransactionCancelledEvent.class.isAssignableFrom(event.getPayload().getClass())) {
          handleTransactionCancel(event.getPayload())
        }
      }
    
      private void handleTransactionCreated(TransactionCreatedEvent event) {
        this.identifier = event.id
        this.creditCardNumber = event.creditCardNumber
        this.creditCardCvvCode = event.creditCardCvvCode
        this.validDate = event.validDate
        this.amount = event.amount
        this.transactionDate = new Date()
        creditCardOwner = new CreditCardOwner(event.creditCardOwner)
      }

      private void handleTransactionCancel(TransactionCancelledEvent event) {
        cancellationDate = new Date()
        markDeleted()
      }
    }

    The aggregate extends AbstractEventSourcedAggregateRoot, which is a convenient abstract class to extend for aggregates that are stored in an event store. That class also tracks all uncommitted events and provides useful methods to initialize the aggregate state based on the events retrieved from the event store.
    As we will see in the configuration later on, events applied to the aggregate are stored on the file system in a folder named after the aggregate, each event in a separate file identified by the event id.

    Domain Event

    As nowadays we live in an event-driven environment, software, and especially its domain model, which is intended to reflect reality as faithfully as possible, should be event-driven as well. Going down that path, all important changes in the domain should be represented by events. Axon provides a special DomainEventMessage interface to be implemented by objects representing such events.
    Two domain events are applied to the sample aggregate: TransactionCreatedEvent and TransactionCancelledEvent. The AbstractEventSourcedAggregateRoot class provides a handle method whose responsibility is to handle events internally in the aggregate.
    An event applied to the aggregate is dispatched on the event bus, which forwards it to the applicable event listeners.

    class TransactionCreatedEvent { 
      UUID id 
      String creditCardNumber 
      String creditCardCvvCode 
      String creditCardOwner 
      Date validDate 
      BigDecimal amount 
      Date transactionDate
       
      TransactionCreatedEvent(UUID id, 
                              String creditCardNumber, 
                              String creditCardCvvCode, 
                              String creditCardOwner, 
                              Date validDate, 
                              BigDecimal amount, 
                              Date transactionDate) { 
        this.id = id 
        this.creditCardNumber = creditCardNumber 
        this.creditCardCvvCode = creditCardCvvCode 
        this.creditCardOwner = creditCardOwner 
        this.validDate = validDate 
        this.amount = amount 
        this.transactionDate = transactionDate 
      }
    }

    A domain event carries all the information that will be stored in the read part of the application. One important thing to notice is its id property, holding the aggregate identifier.

    Domain Event Listener

    Finally, there is an event listener which handles events and persists data in the store from which data is later retrieved and presented to the user.

    @Component
    class CreditCardTransactionEventListener {
    
      @EventHandler 
      public void handle(TransactionCreatedEvent event) { 
        CreditCardTransaction creditCardTransaction = new CreditCardTransaction() 
        creditCardTransaction.creditCardNumber = event.creditCardNumber 
        creditCardTransaction.creditCardCvvCode = event.creditCardCvvCode 
        creditCardTransaction.creditCardOwner = event.creditCardOwner 
        creditCardTransaction.validDate = event.validDate 
        creditCardTransaction.amount = event.amount 
        creditCardTransaction.transactionDate = event.transactionDate 
        creditCardTransaction.aggregateIdentifier = event.id 
        creditCardTransaction.save flush: true 
      } 
    
      @EventHandler 
      public void handle(TransactionCancelledEvent event) { 
        CreditCardTransaction creditCardTransaction = CreditCardTransaction.findById(event.transactionId) 
        creditCardTransaction.delete flush: true 
      }
    }

    So we have events stored in the write part of the application and data stored in the read part based on the events dispatched by the domain model.

    Spring configuration

    The last but not least element is the infrastructure. In the sample application it is configured using a Spring application context:

    <axon:annotation-config />
    <context:component-scan base-package="grails.cqrs" />
    <bean class="org.axonframework.commandhandling.annotation.AnnotationCommandHandlerBeanPostProcessor">
        <property name="commandBus" ref="commandBus" />
    </bean>
    <bean id="defaultCommandGateway" class="org.axonframework.commandhandling.gateway.DefaultCommandGateway">
        <constructor-arg name="commandBus" ref="commandBus" />
        <constructor-arg name="commandDispatchInterceptors">
            <list></list>
        </constructor-arg>
    </bean>
    <axon:filesystem-event-store id="eventStore" base-dir="/events" />
    <axon:command-bus id="commandBus" />
    <axon:event-bus id="eventBus" />
    <bean id="creditCardTransactionRepository" class="org.axonframework.eventsourcing.EventSourcingRepository">
        <constructor-arg value="grails.cqrs.domain.CreditCardTransactionAggregate"/>
        <property name="eventBus" ref="eventBus"/>
        <property name="eventStore" ref="eventStore"/>
    </bean>

    The <axon:annotation-config /> element allows Axon to process annotations and accordingly turn objects with @EventHandler-annotated methods into event handlers or those with @CommandHandler into command handlers, for example. Obviously, such beans need to be components managed by Spring.

    The AnnotationCommandHandlerBeanPostProcessor bean registers classes containing command handling methods with the command bus provided through the commandBus property.

    Next comes the command gateway definition. It receives a reference to the command bus and a list of interceptors through constructor arguments.
    Even though I haven't provided any command dispatch interceptors, it is worth saying a few words about them. Their purpose is to modify a command message before it is dispatched to the command bus. Interceptors are executed only on commands going through that particular gateway.
    What can be done in a command dispatch interceptor? Validation, for example, or enriching the command message with metadata.
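
    As an illustration, here is a minimal sketch of such an interceptor, assuming Axon 2's CommandDispatchInterceptor interface; the audit metadata key is made up for the example:

    import org.axonframework.commandhandling.CommandDispatchInterceptor
    import org.axonframework.commandhandling.CommandMessage

    class AuditDispatchInterceptor implements CommandDispatchInterceptor {
      @Override
      CommandMessage<?> handle(CommandMessage<?> commandMessage) {
        // attach audit metadata to every command passing through the gateway
        commandMessage.andMetaData([dispatchedAt: new Date()])
      }
    }

    Such an interceptor would be registered in the commandDispatchInterceptors list of the defaultCommandGateway bean above.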

    A very significant element is the event store. In this case it is a file-system-based store: it stores the events applied to the aggregates in the /events folder.
    Apart from the file system event store used in the example, the Axon framework provides other event store implementations, such as JpaEventStore.

    The axon:command-bus and axon:event-bus elements use the Axon namespace to define the command bus and the event bus.

    The creditCardTransactionRepository bean defines the repository used to persist and load aggregates. The EventSourcingRepository used in the sample automatically publishes domain events to the event bus. In addition, it delegates event storage to the provided event store.

    Summary

    That is nearly all. As you can probably see from this simple example, the Axon framework does a brilliant job of providing a well-designed infrastructure and architecture to support a CQRS implementation.
    A lot of boilerplate code is eliminated from the application and moved to the framework. That is exactly what we expect from a solid framework.
    Apart from all the advantages mentioned in Part I (and more that can be found on the Internet), I would like to stress one more thing. Having the application separated into two loosely coupled parts, write and read, each with its own storage, gives you a unique ability to switch off either part without affecting the other. Think about a failure: it greatly reduces the risk of the whole application going down.
    In my opinion, that is another great advantage of the CQRS architecture. An advantage sometimes underestimated.

    Jira – issue not visible on the agile board

    JIRA Agile, formerly known as GreenHopper, has lots of nice features supporting agile planning during software development. This post describes a problem I have run into using that tool.

    During my everyday duties involving Jira, from time to time I come across the following issue: an issue added to Jira does not appear on the agile board, specifically on the board showing the sprint backlog or product backlog.
    It must be said that each agile board has a corresponding filter which defines the issues shown on the board. Obviously, I have always made sure that the problematic issue falls within the borders defined by the filter. What is more, its status has been properly mapped in the configuration of the board. And yet the issue still did not appear on the board.
    What turns out after deeper investigation is that the problematic issues are those which are created, opened and resolved, and only after that added to the agile board. In such a scenario they do not appear on the board at all.

    The proper steps should be as follows:

    • create issue
    • open issue
    • add issue to the board
    • optional statuses (in progress, etc.)
    • resolve issue

    So the general rule is that issues should be added to the sprint before they begin.

    In order to fix issues which have already been resolved, they need to be reopened, then added to the agile board, and once again resolved.

    Grails + Axon = CQRS (part I)

    I am continuing the series of posts regarding Groovy and Grails. The previous post contained some starter information about the Groovy programming language. In this one and the next I would like to present a sample application built with Grails and Axon which demonstrates basic usage of the CQRS architecture.

    Grails

    Grails is a rapid application development framework for the JVM. It is largely built on top of the Spring framework and, in particular, a lot of it is built on top of Spring MVC. So if you're building an application that targets anything other than the JVM web platform, Grails is not for you: it is specifically written as a framework for building JVM web applications.
    There are lots of developer productivity gains that Grails has to offer. It is very easy to quickly get an application up and running on the Grails framework, thanks to things like convention over configuration and sensible defaults. Literally, in a couple of minutes you can have a simple Grails application up and running. And from there, Grails offers a lot to simplify the process of building serious enterprise web applications.

     


    CQRS

    Firstly, let’s try to answer a simple question: why might we need CQRS?
    Scalability. It is definitely a hard problem.
    What can we do if we need to handle a lot of concurrent connections?
    One way to deal with it is to use asynchronous concepts. Grails 2.3 has some async improvements: there is an API for doing async work, and Grails 2.3 introduces Promises, a concept similar to java.util.concurrent.Future instances.
    That is one approach: doing async work within the current programming model.

    The other way to deal with scalability is to change the architecture completely. This is where CQRS has its place. CQRS stands for Command Query Responsibility Segregation. It is very often associated with Domain Driven Design; while Domain Driven Design is mainly focused on getting an application's domain model right, CQRS is a wider concept.
    The basic idea is that the part of the application that updates the database (call it the write model) has different requirements than the part which reads data from the database and presents it to the user (call it the read model).
    CQRS embeds that idea into the application architecture. It clearly separates the part that processes commands from the one that queries data.
    Someone may ask why we need such a separation, why we need two storages in one application.
    If we look closer at the two models (read and write), we notice that their requirements are different. The command part (write) is responsible for validating and executing commands, whereas the query part just reads data and transfers it to the view layer. Going further, the storage for each part can be optimized specifically for the purpose it serves. For example, the read storage can store denormalized data; database tables/documents can be designed to serve a particular view, so that no joining is needed. At the same time, the write storage can persist data in normalized form. These are only a few examples which clearly justify the model CQRS provides.

    source: infoq.com

    When an update comes in as a command (Grails supports that by providing command objects), the command is processed by a command handling component; each command should have its own processing component. The command interacts with the domain model. The domain model may change, and the changes go to the persistence model to be stored in the database. It does not need to be a database as we know it; in most cases it should be an event store.
    The idea behind the event store is that the changes themselves are stored in the database. As a result, we never lose information: every change is persisted, so we can go back, get an audit trail, produce complicated reports and so on.
    When data is persisted in the write model, a message is produced that triggers an update in the other part of the application, the read model. The message is placed on the message/event bus and the update is done asynchronously. Then, using the read model and the storage on that part of the application, data is queried and presented to the user. Because of the asynchronous nature of the updates in the read model, we must assume that the data presented to the user can sometimes be stale. However, that issue matters only when we use views generated on the server side. Modern web applications use a wide variety of technologies in the view layer apart from server-side generated pages, so it is pretty simple to trigger a view refresh when a data update is captured in the read model. By the way, there is the Grails Atmosphere Plugin, which provides smooth integration with the Atmosphere framework, an Ajax Push/Comet framework.

    Axon

    Implementing a CQRS architecture requires quite a lot of “plumbing” and boilerplate code. Much of that code is project-independent and quite error-prone to do right. Axon aims at providing the basic building blocks, which are needed in most CQRS-style applications. Event dispatching, for example, is typically an asynchronous process, but you do want guaranteed delivery and, in some cases, sequential delivery. Axon provides building blocks and guidelines for the implementation of these features. Besides these core-components, Axon wants to make integration with third party libraries possible. Think of frameworks and libraries such as JMS, MQ and Spring Integration.

    The above was said by Allard Buijze, the author of the Axon Framework. It perfectly sums up the reasons behind creating the framework and the areas in which Axon can help us develop applications compliant with the CQRS model. In my opinion it is a brilliant framework, very well designed and open for extension. What is more, it provides simple implementations of particular CQRS building blocks, like the event store, the event bus, etc., which can be used straight away in our applications.


    In the next part I will present a simple Grails application which uses the Axon framework to implement a basic CQRS model.

    Stay tuned!

    Groovy – Getting Started

    As promised in the previous entry, I am continuing the series of Groovy posts. Starting from a simple introduction, we will move to intermediate and advanced topics in future posts.

    What is Groovy’s origin?

    Everything started in 2003, when James Strachan (the official Groovy creator) wrote to Bob McWhirter:

    Wouldn’t it be “groovy” if we could have native syntax for lists, maps and regexs in Java like in most scripting languages?

    Wouldn’t it be “groovy” if we could have duck typing in Java?

    Wouldn’t it be “groovy” if we had closures and properties in Java?

    Certainly it would be “groovy”, and as a result we have a brilliant scripting language on the JVM.

    Groovy is a dynamic language for the Java Virtual Machine. It is compiled directly to bytecode, which is then executed on the JVM alongside Java bytecode. That is one of the main strengths of the language: it uses the same API as Java (the JDK, collections, etc.), has the same security and threading models, and finally the same object-oriented concepts.

    Joint-compilation

    Talking about Groovy’s complementary nature to Java, the term joint compilation must be mentioned.
    Have a look at the following diagram:

    [diagram: java-groovy]

    A Groovy class can implement a Java interface, and such a Groovy class can be extended by a Java class. In the same way, a Java class can implement a Groovy interface, and such a class can be extended by a Groovy class.
    Such behavior is possible thanks to joint compilation. How does it work? The Groovy compiler parses the Groovy source files and creates Java stubs for them, then invokes the Java compiler, which compiles the stubs along with the Java source files. After that, the Groovy compiler continues normal compilation.
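
    As a minimal sketch of what joint compilation enables (the file names are hypothetical), a Java class can implement a Groovy interface directly:

    // Greeter.groovy - a Groovy interface
    interface Greeter {
        String greet(String name)
    }

    // JavaGreeter.java - a Java class implementing the Groovy interface;
    // both files are compiled together with: groovyc -j Greeter.groovy JavaGreeter.java
    public class JavaGreeter implements Greeter {
        public String greet(String name) {
            return "Hello " + name;
        }
    }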

    Java vs Groovy

    A simple class in Java:
    public class HelloWorld{
    
        private String name;
    
        public void setName(String name){
            this.name = name;
        }
    
        public String getName(){
            return name;
        }
    
        public String greet(){
            return "Hello " + name;
        }
    
        public static void main(String[] args){
            HelloWorld helloWorld = new HelloWorld();
            helloWorld.setName("Groovy");
            System.out.println(helloWorld.greet());
        }
    }
    The same class in Groovy:
    public class HelloWorld{
    
        private String name;
    
        public void setName(String name){
            this.name = name;
        }
    
        public String getName(){
            return name;
        }
    
        public String greet(){
            return "Hello " + name;
        }
    
        public static void main(String[] args){
            HelloWorld helloWorld = new HelloWorld();
            helloWorld.setName("Groovy");
            System.out.println(helloWorld.greet());
        }
    }
    As you can see, the Groovy compiler perfectly compiles Groovy code that is exactly the same as the Java version.
    So where is the difference? Why is Groovy called “Java on steroids”?
    Here is a class doing the same as the one above:
    class HelloWorld{
    
        String name
    
        String greet(){
            "Hello $name"
        }
     }
    def helloWorld = new HelloWorld(name: "Groovy")
    println helloWorld.greet()
    Now the differences should be clear.
    Groovy features visible in the above code snippet:
    • everything is public if not stated otherwise
    • automatic getters and setters
    • semicolons at the end of a line are optional
    • variable interpolation using GStrings
    • the return statement is optional; a function returns the value of its last statement
    • dynamic typing using the def keyword

    Groovy tries to be as natural as possible for Java developers; however, there are some differences which should be mentioned and remembered:

    • floating point number literals are of BigDecimal type by default
    • Groovy uses default imports which means that the following packages are imported by default:
      • java.io.*
      • java.lang.*
      • java.math.BigDecimal
      • java.math.BigInteger
      • java.net.*
      • java.util.*
      • groovy.lang.*
      • groovy.util.*
    • the operator == means equals for all types; if you need to check identity, use the is() method, like object.is(anotherObject)
    • the operator in is a keyword
    • an array in Java can be declared as int[] arr = {1,2,3,4}; but in Groovy we need
      int[] arr = [1,2,3,4]
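
    The == versus is() distinction deserves a tiny example:

    def a = new String('x')
    def b = new String('x')
    assert a == b     // equals() comparison - true
    assert !a.is(b)   // identity comparison - different objects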

    Native syntax for data structures

    • lists
      • def list = [1, 'a']
    • maps
      • def map = [PL: 'Poland', UK: 'United Kingdom'] – keys are strings by default
    • ranges
      • def range = 10..15
      • assert range.size() == 6
      • assert range.get(1) == 11
      • assert range instanceof java.util.List

    Operators

    Apart from the operators it shares with Java, Groovy supports a number of additional operators:

    • spread operator (*.)
      • used to invoke an action on all items of a given aggregate object
      • object*.action is equivalent to object.collect { child -> child?.action }
      • assert ['a', 'bb']*.size() == [1, 2]
    • Java field access (.@)
      • Groovy automatically creates getters for all properties in a class
      • the operator allows getting a property value directly, without using the getter method
      • object.@fieldName
    • Elvis operator (?:)
      • a shorter version of Java’s ternary operator
      • ternary operator:
        • def phoneNumber = user.phoneNumber ? user.phoneNumber : "N/A"
      • Elvis operator:
        • def phoneNumber = user.phoneNumber ?: "N/A"
    • safe navigation operator (?.)
      • used to avoid a NullPointerException
      • def phoneNumber = user?.phoneNumber
      • if user is null, the variable phoneNumber will get the value null instead of a NullPointerException being thrown

     Closures

    Closures are one of the “grooviest” features, in my opinion. A closure can be defined as a reusable piece of code which can be passed around as if it were a string or an integer.

    { println "Hello World!" }

    Closures, like functions, can take arguments and return values.

    { name -> println "Hello $name" }
    
    { number -> number * number }
    
    

    Moreover they can be assigned to variables.

    def closure = { a,b -> a + b }
    
    

    Closures can access variables which are in the particular closure's scope, which is quite obvious. However, variables defined within the scope in which the closure is defined can be accessed as well.

    def CONST = 1
    
    def incrementByConstance = { value -> value + CONST }
    
    println incrementByConstance(5)
    
    

    Some more closure examples:

    square = { it * it } //it - value passed to the closure
    
    [1,2,3,4].collect(square)

     

    printMap = { key, value -> println "key=" + key + " value=" + value }
    
    ["PL" : "Poland", "UK" : "United Kingdom"].each(printMap)

    Stay tuned for further posts regarding Groovy.