Monday, December 31, 2007

Hibernate: how to map a collection of embedded components keyed by one of the component's properties?

Quite often, during application development, I encounter the issue of mapping a collection of embedded components using Hibernate.

An embedded component is, in Hibernate, a user-defined value-typed class. It has no individual identity, hence the persistent component class requires no identifier property or identifier mapping; its lifespan is bounded by the lifespan of the owning entity instance.

When mapping a collection of embedded components, it is very important to override the equals() and hashCode() methods and compare all properties, because they are used by Hibernate to detect modifications to these components.
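For example, a component class along these lines (a minimal sketch; the no-argument constructor, getters, setters and null checks are omitted) compares all of its properties:

public class Image implements java.io.Serializable {

    private String name;
    private String filename;
    private int sizeX;
    private int sizeY;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Image)) return false;
        Image other = (Image) o;
        // compare every property so Hibernate can detect modifications
        return name.equals(other.name)
                && filename.equals(other.filename)
                && sizeX == other.sizeX
                && sizeY == other.sizeY;
    }

    @Override
    public int hashCode() {
        int result = name.hashCode();
        result = 31 * result + filename.hashCode();
        result = 31 * result + sizeX;
        result = 31 * result + sizeY;
        return result;
    }
}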

Very often, the components in such a collection also have a unique key property; that is, the collection should ideally be implemented as a map indexed by the value of that key property.

The book "Java Persistence with Hibernate" and Hibernate's documentation illustrate three ways to map a collection of embedded components.

The first and highly recommended (by Hibernate) option is to map the collection to a set. This method requires that all database columns mapped to the component class be declared with not-null="true". It does not address the key property issue, either. I often find it unwieldy when dealing with the key property because it becomes my responsibility to enforce map semantics. For example, when adding a new element, I need to iterate over the set and find any existing element whose key property value equals that of the new element. If such an element is found, depending on the business rule, I may throw an exception, or remove the existing element from the set and add the new element in. When several entity classes have a component map, you have to duplicate the same set-iteration logic in many places; a sketch of that boilerplate follows.
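As a rough illustration (the Item class is hypothetical and the rule applied here is to replace the existing element), the set-based mapping forces the owning entity to do something like this:

import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class Item {

    private Set<Image> images = new HashSet<Image>();

    // enforce map semantics by hand: at most one image per name
    public void addImage(Image newImage) {
        for (Iterator<Image> it = images.iterator(); it.hasNext();) {
            Image existing = it.next();
            if (existing.getName().equals(newImage.getName())) {
                it.remove(); // or throw an exception, depending on the business rule
                break;
            }
        }
        images.add(newImage);
    }
}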

The second option is to map the collection to an idbag. Again, it is the responsibility of my application to ensure the map semantics.

The third option is to map it to a map. Unfortunately, this option requires the removal of the key property from the component class. The following example, extracted from the above-mentioned book, demonstrates the mapping of the images belonging to an item in the CaveatEmptor sample application, where the image name must be unique.

<map name="images"
     table="ITEM_IMAGE"
     order-by="IMAGENAME asc">
    <key column="ITEM_ID"/>
    <map-key type="string" column="IMAGENAME"/>
    <composite-element class="Image">
        <property name="filename" column="FILENAME" not-null="true"/>
        <property name="sizeX" column="SIZEX"/>
        <property name="sizeY" column="SIZEY"/>
    </composite-element>
</map>

As can be seen from the mapping snippet above, the "Image" component class no longer has a "name" property. This removal can be quite problematic. The key property is usually the most important property of a component class; removing it from the component class and handling it merely as a map key not only potentially violates object-oriented principles in theory, but can also have significant consequences in practice. For example, the public interface of the entity class may need to be overhauled: instead of an addImage(Image image) method, you need to provide an addImage(String imageName, Image image) method. Or you have to create yet another value-type class just to wrap the name-deprived Image and the image name together.

Luckily, Hibernate 3.x provides a very powerful feature called formula, which easily solves our dilemma: the collection can be mapped as a map without removing the key property from the component class. With formula, the above mapping can be modified to:

<map name="images"
     table="ITEM_IMAGE"
     order-by="IMAGENAME asc">
    <key column="ITEM_ID"/>
    <map-key type="string" column="IMAGENAME"/>
    <composite-element class="Image">
        <property name="name" type="string" formula="IMAGENAME"/>
        <property name="filename" column="FILENAME" not-null="true"/>
        <property name="sizeX" column="SIZEX"/>
        <property name="sizeY" column="SIZEY"/>
    </composite-element>
</map>

In short, the "IMAGENAME" column is mapped to both the map key and the key property of the "Image" class when loading from the database. When persisting, only the map key is used to update the "IMAGENAME" column.

Now we can map a collection of embedded components to a map without sacrificing the key property or writing messy code to enforce map semantics; the owning entity's API stays clean, as the sketch below shows.
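With the formula-based mapping in place, the owning entity can expose a map-friendly API while the component keeps its own name (again a hedged sketch; the Item class is illustrative only):

import java.util.HashMap;
import java.util.Map;

public class Item {

    private Map<String, Image> images = new HashMap<String, Image>();

    // the component still carries its key property, so callers pass only the component
    public void addImage(Image image) {
        images.put(image.getName(), image);
    }

    public Image getImage(String name) {
        return images.get(name);
    }
}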

Friday, November 2, 2007

classpath*:BeanRefFactory.xml not found when instantiating a singleton Spring application context inside EJB2.x running on Sun application server 8.2

For the project I'm currently working on, we use Spring's EJB support. In order to share the application context among all EJB instances, SingletonBeanFactoryLocator is used to locate or load the shared application context. We used the default "classpath*:beanRefFactory.xml" selector key.

However, when the EJB is deployed and invoked, a FatalBeanException is thrown stating
Unable to find resource for specified definition. Group resource name
[classpath*:beanRefFactory.xml], factory key [...]


Looking into the implementation of SingletonBeanFactoryLocator, we found that it delegates to PathMatchingResourcePatternResolver to load the XML file. If the resource starts with the "classpath*:" prefix, it uses the ClassLoader's getResources(String) method to load the resource; otherwise, a ClassPathResource is returned, which eventually uses the ClassLoader's getResource(String) method. Note the plural form of the first method name.

In the base java.lang.ClassLoader, both getResource(String) and getResources(String) delegate to the parent class loader first. If the resource is not found by the parent, they invoke the findResource(String) and findResources(String) methods, which by default return null and an empty enumeration respectively. The Javadoc of ClassLoader recommends, for both finder methods:
Class loader implementations should override this method to specify where to
load resources from.


However, when tracing the execution in debug mode, we found that the EJBClassLoader used by Sun Application Server 8.2 only overrides the findResource(String) method, searching for the resource in the expanded directory of the EJB jar file; it does not override findResources(String), which therefore still returns an empty result, against their own advice.

Now it's obvious that this is due to a defect in Sun Application Server 8.2. The simplest workaround is to specify the selector key as "classpath:beanRefFactory.xml" instead of the default value.
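For illustration, a minimal sketch of such a lookup (the factory key "businessBeanFactory" is hypothetical and must match a bean defined in beanRefFactory.xml):

import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.access.BeanFactoryLocator;
import org.springframework.beans.factory.access.BeanFactoryReference;
import org.springframework.beans.factory.access.SingletonBeanFactoryLocator;

public class SharedContextLookup {

    public static BeanFactory lookupSharedContext() {
        // "classpath:" (singular getResource) works around the EJBClassLoader defect;
        // the default "classpath*:" relies on getResources(), which finds nothing here.
        BeanFactoryLocator locator =
                SingletonBeanFactoryLocator.getInstance("classpath:beanRefFactory.xml");
        BeanFactoryReference reference = locator.useBeanFactory("businessBeanFactory");
        return reference.getFactory();
    }
}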

Saturday, October 27, 2007

Integrating Spring, Hibernate and EJB 2.x

How do you configure a Hibernate session factory in a Spring application context used in an EJB 2.x stateless session bean?

Most of the examples found on the internet about Hibernate and Spring integration are targeted at web applications. How to configure Hibernate and Spring inside an EJB 2.x stateless session bean is rarely mentioned, and it is not as simple and straightforward as many assume.

The most important issue of the integration is around the management of JDBC connections:
In EJB 2.x, JDBC connections are managed by the data source registered with JNDI on the application server:

  • Applications are not expected to hold on to JDBC connections after each use; that is, applications should aggressively release JDBC connections, preferably after each statement.

  • Applications can acquire JDBC connections multiple times during the execution of one EJB invocation. The application server may return the same JDBC connection or different JDBC connections, but it ensures that all JDBC connections returned are enlisted in the same Container Managed Transaction.


When the database is accessed outside an EJB 2.x container, such as in a web application running in Tomcat, the application itself is responsible for transaction demarcation. When using a local database transaction, the application code must ensure that the same JDBC connection is used for all database access within the same transaction. Because database access occurs in multiple classes that collaborate to complete a transaction, it is very unwieldy for applications to pass the JDBC connection around in order to re-use it.

Enter Spring and Hibernate.

Spring guarantees that the same JDBC connection is re-used, provided the same Spring-managed data source is used. The magic happens when a JDBC connection is retrieved for the first time from a Spring-managed data source: Spring binds the JDBC connection to the current execution thread. When another JDBC connection is requested from the same Spring-managed data source within the same transaction, Spring returns the thread-bound JDBC connection. When the transaction ends, Spring invokes the commit() or rollback() method of the thread-bound JDBC connection and then unbinds it from the thread.
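As an illustration, the following sketch goes through Spring's DataSourceUtils (which JdbcTemplate uses internally); inside one Spring-managed transaction both lookups return the same thread-bound connection. Class and method names are illustrative only:

import java.sql.Connection;
import javax.sql.DataSource;

import org.springframework.jdbc.datasource.DataSourceUtils;

public class ConnectionSharingSketch {

    public void runTwoStatements(DataSource dataSource) {
        Connection first = DataSourceUtils.getConnection(dataSource);
        try {
            // ... execute some JDBC statements ...
        } finally {
            // does not physically close a transaction-bound connection
            DataSourceUtils.releaseConnection(first, dataSource);
        }

        Connection second = DataSourceUtils.getConnection(dataSource);
        try {
            // same physical connection as 'first' while the transaction is active
        } finally {
            DataSourceUtils.releaseConnection(second, dataSource);
        }
    }
}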

Hibernate 3.x employs a similar technique. By default, a Hibernate session is bound to the thread and returned whenever SessionFactory.getCurrentSession() is invoked within the same transaction. The same JDBC connection is used for all JDBC operations initiated by the same Hibernate session. Since Hibernate 3.1 it is possible to specify a current session context class other than the default thread-local context.
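A minimal DAO sketch (class and query names are hypothetical) showing the current-session pattern:

import java.util.List;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class ItemDao {

    private final SessionFactory sessionFactory;

    public ItemDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Repeated calls within the same transaction return the same Session,
    // and therefore work against the same underlying JDBC connection.
    public List findItems() {
        Session session = sessionFactory.getCurrentSession();
        return session.createQuery("from Item").list();
    }
}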

As can be seen from the above discussion, Spring and Hibernate by default re-use the same thread-bound JDBC connection. This does not sit well with the aggressive connection release mode expected by EJB 2.x. Therefore, non-default settings must be configured for both Spring and Hibernate before the combination can be integrated within EJB 2.x.

Below is the recommended configuration; the reasons behind it are explained afterwards.

<jee:jndi-lookup id="dataSource"
                 jndi-name="jdbc/datasource" resource-ref="true"/>

<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager"/>

<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>

<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <!-- default value is "false"
    <property name="useTransactionAwareDataSource" value="false"/>
    -->
    <property name="exposeTransactionAwareSessionFactory" value="false"/>
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.dialect">org.hibernate.dialect.SomeDialect</prop>
            <!--
            WARNING! 'hibernate.connection.release_mode' must not be set to
            'after_statement'. Otherwise, it will be overridden with 'after_transaction'
            by Hibernate because Spring's LocalDataSourceConnectionProvider does not
            support aggressive connection release.
            -->
            <prop key="hibernate.connection.release_mode">auto</prop>
            <prop key="hibernate.current_session_context_class">jta</prop>
            <prop key="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.SunONETransactionManagerLookup</prop>
            <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.CMTTransactionFactory</prop>
            <prop key="hibernate.transaction.flush_before_completion">true</prop>
            <prop key="hibernate.transaction.auto_close_session">true</prop>
        </props>
    </property>
    <property name="mappingResources">
        <list>...</list>
    </property>
</bean>


When reading the explanations below, it is helpful to have your IDE open and the source code of Spring and Hibernate 3.1 or above at hand.

Spring uses LocalSessionFactoryBean, which is a subclass of AbstractSessionFactoryBean, to configure a Hibernate session factory.

There is plenty of Javadoc to read for each bean property that can be set on a LocalSessionFactoryBean, but you are better off reading the buildSessionFactory() method to understand what each property value does at build time.

Note that the above configuration has two parts for LocalSessionFactoryBean: the 'normal' Spring bean properties, such as 'dataSource', and the 'hibernateProperties' property, which takes a java.util.Properties value. The 'normal' properties are used to configure the default settings, which can be overridden by 'hibernateProperties'.

As a side note, we don't set the 'jtaTransactionManager' property. Note that the type of this property is javax.transaction.TransactionManager, not the JtaTransactionManager that implements Spring's PlatformTransactionManager. In order to set this property, we would need to know the JNDI name under which the target application server binds its JTA transaction manager. The benefit of setting it would be that we would not need to configure Hibernate's 'hibernate.transaction.manager_lookup_class' and 'hibernate.transaction.factory_class' properties. We concluded that this benefit could not justify hard-coding the application-server-specific JNDI name of the transaction manager; we would rather set those two properties in the Hibernate configuration.

If the 'jtaTransactionManager' property is not set, Spring automatically sets the 'hibernate.connection.release_mode' property to 'on_close'. Because we are running Hibernate inside an EJB 2.x container, we must override this property with another value in 'hibernateProperties'.

The 'exposeTransactionAwareSessionFactory' property has a default value of true. If set to true, Spring will set the 'hibernate.current_session_context_class' property to Spring's own thread-bound implementation, which must then be overridden with a value of 'jta' in 'hibernateProperties'.

The 'useTransactionAwareDataSource' property must be left at its default value of 'false'. Otherwise, Spring will wrap the data source in a TransactionAwareDataSourceProxy, which would effectively re-use the same JDBC connection within the same transaction even if Hibernate aggressively releases connections.

It must be pointed out that when 'useTransactionAwareDataSource' is false, Spring supplies LocalDataSourceConnectionProvider as the implementation of Hibernate's ConnectionProvider. LocalDataSourceConnectionProvider informs Hibernate that it does not support aggressive release of connections. In Hibernate's SettingsFactory, if the ConnectionProvider does not support aggressive release and the connection release mode is set to 'after_statement', the release mode is automatically rectified to 'after_transaction', which effectively re-uses the same JDBC connection for the whole transaction, and the warning "Overriding release mode as connection provider does not support 'after_statement'" is logged. Therefore, the connection release mode in 'hibernateProperties' must be set to 'auto' instead of 'after_statement'. When the transaction factory is CMTTransactionFactory, the default connection release mode is 'after_statement', which is precisely what we want.

The other property settings are self-explanatory:

  • 'hibernate.current_session_context_class' should be set to 'jta'.

  • 'hibernate.transaction.manager_lookup_class' should be set to the lookup class matching the target application server.

  • 'hibernate.transaction.factory_class' should be set to 'org.hibernate.transaction.CMTTransactionFactory'.


I hope this post helps anyone who has encountered mysterious connection problems when integrating Spring, Hibernate and EJB 2.x.

Thursday, October 25, 2007

Cryptic JTS5031 and JTS5068 errors on Sun Application Server 8.1 and 8.2

We encountered the following exceptions when testing our EJB 2.1 + Spring + Hibernate + OSWorkflow (also using Hibernate) application.

When running on Sun Application Server 8.1 EE, the stack trace is listed below:
[#2007-10-25T10:50:34.347+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;TM: enlistComponentResources#]
[#2007-10-25T10:50:34.400+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;--Created new J2EETransaction, txId = 25#]
[#2007-10-25T10:50:34.400+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;TM: enlistComponentResources#]
[#2007-10-25T10:50:34.401+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;
In J2EETransactionManagerOpt.enlistResource, h=5 h.xares=com.sun.gjc.spi.XAResourceImpl@c21e52 h.alloc=com.sun.enterprise.resource.ConnectorAllocator@54d24d tx=J2EETransaction: txId=25 nonXAResource=null jtsTx=null localTxStatus=0 syncs=[]#]
[#2007-10-25T10:50:34.401+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;TM: begin#]
[#2007-10-25T10:50:34.402+1000FINEsun-appserver-ee8.1_02javax.enterprise.system.core.transaction_ThreadID=13;Control object :com.sun.jts.CosTransactions.ControlImpl@162aeda corresponding to this transaction has been createdGTID is : 19000000BBF79CD4616476627661707030312C5033373030#]
[#2007-10-25T10:50:34.402+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;TM: enlistResource#]
[#2007-10-25T10:50:34.402+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;--In J2EETransaction.enlistResource, jtsTx=com.sun.jts.jta.TransactionImpl@ffe966e9 nonXAResource=null#]
[#2007-10-25T10:50:34.404+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;--In J2EETransaction.registerSynchronization, jtsTx=com.sun.jts.jta.TransactionImpl@ffe966e9 nonXAResource=null#]
[#2007-10-25T10:50:34.405+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;--In J2EETransaction.registerSynchronization, jtsTx=com.sun.jts.jta.TransactionImpl@ffe966e9 nonXAResource=null#]
[#2007-10-25T10:50:34.406+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;TM: delistResource#]
[#2007-10-25T10:50:34.406+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13; ejbDestroyed: AccountProcessServiceBean; id: [B@7219a#]
[#2007-10-25T10:50:34.406+1000FINEsun-appserver-ee8.1_02javax.enterprise.resource.jta_ThreadID=13;TM: rollback#]
[#2007-10-25T10:50:34.406+1000FINEsun-appserver-ee8.1_02javax.enterprise.system.core.transaction_ThreadID=13;Within TopCoordinator.rollback() :GTID is : 19000000BBF79CD4616476627661707030312C5033373030#]
[#2007-10-25T10:50:34.409+1000SEVEREsun-appserver-ee8.1_02javax.enterprise.system.core.transaction_ThreadID=13;JTS5031: Exception [org.omg.CORBA.INTERNAL: vmcid: 0x0 minor code: 0 completed: Maybe] on Resource [rollback] operation.#]
[#2007-10-25T10:50:34.410+1000FINEsun-appserver-ee8.1_02javax.enterprise.system.container.ejb_ThreadID=13;context with empty container in ContainerSynchronization.afterCompletion#]
[#2007-10-25T10:50:34.410+1000FINEsun-appserver-ee8.1_02javax.enterprise.system.core.transaction_ThreadID=13;Within TopCoordinator.rollback() :GTID is : 19000000BBF79CD4616476627661707030312C5033373030#]
[#2007-10-25T10:50:34.411+1000FINEsun-appserver-ee8.1_02javax.enterprise.system.container.ejb_ThreadID=13;EJB5092:Exception occurred in postInvokeTx : [{0}]
javax.transaction.SystemException: org.omg.CORBA.INTERNAL: JTS5031: Exception [org.omg.CORBA.INTERNAL: vmcid: 0x0 minor code: 0 completed: Maybe] on Resource [rollback] operation. vmcid: 0x0 minor code: 0 completed: No
at com.sun.jts.jta.TransactionManagerImpl.rollback(TransactionManagerImpl.java:295)
at com.sun.enterprise.distributedtx.J2EETransactionManagerImpl.rollback(J2EETransactionManagerImpl.java:1054)
at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.rollback(J2EETransactionManagerOpt.java:391)
at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:2711)
at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:2521)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:819)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:137)
at $Proxy22.processUser(Unknown Source)
at au.net.ozgwei.services.userprocess.UserProcessServiceDelegate.processUser(UserProcessServiceDelegate.java:96)
...
[#2007-10-25T10:50:34.414+1000INFOsun-appserver-ee8.1_02javax.enterprise.system.container.ejb_ThreadID=13;EJB5018: An exception was thrown during an ejb invocation on [UserProcessServiceBean]#]


When running on Sun Application Server 8.2 PE, the stack trace is listed below:
[#2007-10-25T17:52:04.079+1000INFOsun-appserver-pe8.2javax.enterprise.system.stream.out_ThreadID=14;454609 [httpWorkerThread-2189-4] INFO  org.hibernate.impl.SessionFactoryObjectFactory  - Not binding factory to JNDI, no JNDI name configured
#]
[#2007-10-25T17:52:04.079+1000INFOsun-appserver-pe8.2javax.enterprise.system.stream.out_ThreadID=14;454609 [httpWorkerThread-2189-4] INFO org.hibernate.util.NamingHelper - JNDI InitialContext properties:{}
#]
[#2007-10-25T17:52:04.079+1000INFOsun-appserver-pe8.2javax.enterprise.system.stream.out_ThreadID=14;454609 [httpWorkerThread-2189-4] INFO org.springframework.context.support.ClassPathXmlApplicationContext - Bean 'siteManagerSessionFactory' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
#]
[#2007-10-25T17:52:04.110+1000INFOsun-appserver-pe8.2javax.enterprise.system.stream.out_ThreadID=14;454640 [httpWorkerThread-2189-4] INFO org.springframework.context.support.ClassPathXmlApplicationContext - Bean 'org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
#]
[#2007-10-25T17:52:04.110+1000INFOsun-appserver-pe8.2javax.enterprise.system.stream.out_ThreadID=14;454640 [httpWorkerThread-2189-4] INFO org.springframework.beans.factory.support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1956ba5: defining beans [siteManager,siteRepository,siteAssembler,org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor,dataSource,transactionManager,siteManagerSessionFactory,eventLogService]; root of factory hierarchy
#]
[#2007-10-25T17:52:05.207+1000INFOsun-appserver-pe8.2javax.enterprise.system.stream.out_ThreadID=14;455737 [httpWorkerThread-2189-4] INFO org.springframework.transaction.jta.JtaTransactionManager - Using JTA UserTransaction: com.sun.enterprise.distributedtx.UserTransactionImpl@ad9064
#]
[#2007-10-25T17:52:05.207+1000INFOsun-appserver-pe8.2javax.enterprise.system.stream.out_ThreadID=14;455737 [httpWorkerThread-2189-4] INFO org.springframework.transaction.jta.JtaTransactionManager - Using JTA TransactionManager: com.sun.ejb.containers.PMTransactionManagerImpl@1eafdce
#]
[#2007-10-25T17:52:26.027+1000WARNINGsun-appserver-pe8.2javax.enterprise.system.core.transaction_ThreadID=15;JTS5068: Unexpected error occurred in rollback
java.lang.NullPointerException
at com.sun.gjc.spi.ManagedConnection.transactionCompleted(ManagedConnection.java:429)
at com.sun.gjc.spi.XAResourceImpl.rollback(XAResourceImpl.java:140)
at com.sun.jts.jta.TransactionState.rollback(TransactionState.java:168)
at com.sun.jts.jtsxa.OTSResourceImpl.rollback(OTSResourceImpl.java:271)
at com.sun.jts.CosTransactions.RegisteredResources.distributeRollback(RegisteredResources.java:971)
at com.sun.jts.CosTransactions.TopCoordinator.rollback(TopCoordinator.java:2240)
at com.sun.jts.CosTransactions.CoordinatorTerm.rollback(CoordinatorTerm.java:504)
at com.sun.jts.CosTransactions.TerminatorImpl.rollback(TerminatorImpl.java:266)
at com.sun.jts.CosTransactions.CurrentImpl.rollback(CurrentImpl.java:728)
at com.sun.jts.jta.TransactionManagerImpl.rollback(TransactionManagerImpl.java:308)
at com.sun.enterprise.distributedtx.J2EETransactionManagerImpl.rollback(J2EETransactionManagerImpl.java:1058)
at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.rollback(J2EETransactionManagerOpt.java:391)
at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:2711)
at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:2521)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:819)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:137)
at $Proxy22.processUser(Unknown Source)
at au.net.ozgwei.services.userprocess.UserProcessServiceDelegate.processUser(UserProcessServiceDelegate.java:96)

These exceptions were really frustrating: they appeared only at transaction commit time, as if the commit had somehow failed and the EJB container, while trying to roll back, then hit an unexpected CORBA or null pointer error.

That led us to think that our configuration for Spring and Hibernate to work with EJB 2.1 was not set up properly.

Well, all that was just a red herring. With debugging turned on, it became clear that a minor and seemingly innocent change in the web tier resulted in the EJB being invoked with illegal arguments; the POJO implementation wrapped by the EJB threw an IllegalArgumentException, which was swallowed by Sun's EJB container and triggered the transaction rollback.

Once we fixed the bug at the web tier, the issue went away immediately.

So the real problem is with Sun's EJB container: when it catches a runtime exception, it should print the stack trace of the root cause instead of swallowing it. It cost us several hours to figure out what went wrong.

It also taught us a few lessons:

  1. Debugging early on may save you many hours of code reading and googling for a problem that was hidden/eclipsed by a seemingly more complex problem.
  2. Write more test cases for web-tier code, preferably with EasyMock 2 or jMock 2. If we had written the test cases for the UI, this problem would probably not have occurred in the first place.

So, if you see similar JTS5031 or JTS5068 errors on Sun Application Server 8.x, make sure you do some debugging to verify that they were not caused by a runtime exception thrown by your application code...

Friday, September 28, 2007

Spring 2.0 schemas not found? And the solution is...

We experienced a mysterious problem in which the Spring 2.0 schemas could not be found while the application context was being created inside an EJB 2.1, using Spring's AbstractStatelessSessionBean.

The problem manifests itself with the following exception:
org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from class path resource [springContext.xml] is invalid; nested exception is org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'beans'.
Caused by:
org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'beans'.
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
at org.apache.xerces.util.ErrorHandlerWrapper.error(Unknown Source)


Normally this is caused by the incorrect XML namespace or schema location declaration at the head of the application context. However, in our case, the declaration was correct:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
                           http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd
                           http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-2.0.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd">
...
</beans>


After some research, we found that other people have also experienced this problem.

The cause is that somehow the "META-INF/spring.schemas" and "META-INF/spring.handlers" files packaged in spring.jar could not be found.

What are these two files?
  • "spring.schemas" maps each schema URL to its classpath location within spring.jar. The content of this file is listed below:

http\://www.springframework.org/schema/beans/spring-beans-2.0.xsd=org/springframework/beans/factory/xml/spring-beans-2.0.xsd
http\://www.springframework.org/schema/tool/spring-tool-2.0.xsd=org/springframework/beans/factory/xml/spring-tool-2.0.xsd
http\://www.springframework.org/schema/util/spring-util-2.0.xsd=org/springframework/beans/factory/xml/spring-util-2.0.xsd
http\://www.springframework.org/schema/aop/spring-aop-2.0.xsd=org/springframework/aop/config/spring-aop-2.0.xsd
http\://www.springframework.org/schema/lang/spring-lang-2.0.xsd=org/springframework/scripting/config/spring-lang-2.0.xsd
http\://www.springframework.org/schema/tx/spring-tx-2.0.xsd=org/springframework/transaction/config/spring-tx-2.0.xsd
http\://www.springframework.org/schema/jee/spring-jee-2.0.xsd=org/springframework/ejb/config/spring-jee-2.0.xsd

http\://www.springframework.org/schema/beans/spring-beans.xsd=org/springframework/beans/factory/xml/spring-beans-2.0.xsd
http\://www.springframework.org/schema/tool/spring-tool.xsd=org/springframework/beans/factory/xml/spring-tool-2.0.xsd
http\://www.springframework.org/schema/util/spring-util.xsd=org/springframework/beans/factory/xml/spring-util-2.0.xsd
http\://www.springframework.org/schema/aop/spring-aop.xsd=org/springframework/aop/config/spring-aop-2.0.xsd
http\://www.springframework.org/schema/lang/spring-lang.xsd=org/springframework/scripting/config/spring-lang-2.0.xsd
http\://www.springframework.org/schema/tx/spring-tx.xsd=org/springframework/transaction/config/spring-tx-2.0.xsd
http\://www.springframework.org/schema/jee/spring-jee.xsd=org/springframework/ejb/config/spring-jee-2.0.xsd

  • "spring.handlers" specifies the handler class that implements the NamespaceHandler interface for each namespace. The content of this file is listed below:

http\://www.springframework.org/schema/util=org.springframework.beans.factory.xml.UtilNamespaceHandler
http\://www.springframework.org/schema/aop=org.springframework.aop.config.AopNamespaceHandler
http\://www.springframework.org/schema/lang=org.springframework.scripting.config.LangNamespaceHandler
http\://www.springframework.org/schema/tx=org.springframework.transaction.config.TxNamespaceHandler
http\://www.springframework.org/schema/jee=org.springframework.ejb.config.JeeNamespaceHandler
http\://www.springframework.org/schema/p=org.springframework.beans.factory.xml.SimplePropertyNamespaceHandler

One post in Spring's support forum claimed it was because the application server had no internet access to retrieve the schemas. Another post even provided a solution that references the schemas directly with a classpath prefix in the application context configuration files.

However, that is neither the root cause nor a good solution, because even if the application server can access the schemas over the internet or via a classpath prefix, the bootstrap code still cannot access the "spring.handlers" file and would not know how to handle namespaces other than the default "beans".

In fact, the root cause is that the META-INF directory of a JAR can be blocked when it is referenced from inside an EJB jar.

I tried a few ways to get around this problem. The final solution I adopted was to extract these two files into the META-INF directory of a JAR file that contains only that directory, and to place this JAR file in the lib/ext directory of the application server domain where the application is deployed.

I hope this would help others who may face this problem.

Tuesday, September 25, 2007

Nice improvements in NetBeans 6.0

NetBeans 6.0 Beta 1 was released last week.

There are a few nice improvements that I have been waiting for:

1) Different font style for different scoped variables.
In NetBeans 5.5.1 and earlier versions, all variables are displayed in the default black colour, unlike Eclipse, which displays instance variables in blue and static variables in italics.
NetBeans 6.0 has finally caught up: instance variables are now shown in green and static variables/methods in italics.

2) Visual JSF page editing now supports message resource bundles.
Previously, when visually editing a JSF page, if you needed to display a label with text from a resource bundle, you had to edit the JSP pages manually, and the resulting components were not displayed on the canvas.
This has been improved: these components are now displayed on the canvas with the text from the default resource bundle.

Nice, isn't it? I may consider switching from MyEclipse to NetBeans 6.0 when it's finally out.

Wednesday, September 19, 2007

Won an IntelliJ IDEA 6.0 licence!

Tonight I went to the Sydney Java User Group's Lightning Talks Night hosted by Atlassian and won the random audience draw. Of the three prizes: a Sun umbrella, a copy of "Java Concurrency in Practice" and an IntelliJ IDEA licence, I chose IntelliJ IDEA, since: 1) who needs an umbrella in Sydney? 2) I've already got that book, though I haven't started reading it...

I used IntelliJ at work previously, but we switched to MyEclipse, mainly because it's easier to find developers with Eclipse experience on the market, and also because the licence is much cheaper.

IntelliJ IDEA is actually a pretty cool IDE. One of the plugins I'd like to try out is the Grails plugin...

I'm also playing with NetBeans 6.0. The beta 1 has just been released.

If I have time, I will try to blog about my experience with these IDEs: Eclipse (and MyEclipse), IDEA and NetBeans...

Tuesday, August 28, 2007

Developer-friendly JAXB code generation with JAXB Commons and binding file

In my previous post "Defining service/component interfaces in WSDLs", I mentioned that "Not only are no-argument constructors and all public getter and setter methods generated for complex types, but also other useful methods, such as valued constructors, builder methods, hashCode, equals and toString methods can be implemented using XJC extensions." This post is about how to do this.

Our project uses XFire's WsGenTask Ant task to generate service interfaces, DTO classes and exceptions from a WSDL file and the referenced XSD files. WsGenTask delegates the generation of DTO classes to JAXB's XJC compiler. However, the DTO classes generated by XJC by default are not particularly developer-friendly, in the following respects:
  1. They have only one default constructor. This is quite inconvenient when you need to instantiate a DTO to hold some values:

    • You need to declare a local variable, which may otherwise be unnecessary.

    • You need to call the constructor to instantiate a new instance, then invoke the setter method for each property value you would like it to hold.

    • To re-use the code and get rid of the local variable declaration, you need to create a factory class with one or more overloading create methods for each DTO class, which essentially are value-taking constructors moved to a factory class.

  2. They do not override the hashCode(), equals(Object) and toString() methods of the Object class. DTOs are value objects with no identity: two instances holding exactly the same data should be considered interchangeable. Thus, overriding hashCode() and equals(Object) is essential for DTO classes. Moreover, not overriding toString() makes unit testing inconvenient:

    • You cannot invoke assertEquals(Object, Object) directly. You have to rely on a static method to determine whether two objects are equal in values. And if the assertion fails, you need to invoke another static method to have a useful string representation of both the expected and actual objects.

  3. Default values are not honoured in the generated classes, which means you have to set values explicitly even when a default would do in most cases.

  4. If you are not the owner of the schemas being used, the generated class and property names may not follow Java's camel-case conventions.
  5. There is no setter method for collections. You need to invoke the getter method to retrieve the collection and invoke the addAll(Collection) method to add all elements of the prepared collection.

  6. If an element has a maxOccurs attribute value greater than one, the generated getter method does not use the plural form by default.

  7. Date, time and dateTime are generated as XMLGregorianCalendar, which requires constant conversion to and from the java.util.Date and java.util.Calendar values you use in your domain models.

Fortunately, XJC has an extension option that allows third-party extensions. A whole bunch of XJC plugins have been developed in the open source community to iron out most of these issues, and many of them live under the JAXB 2.0 Commons project hosted on java.net.

Many of these plugins are very useful:
  1. The Value Constructor plugin generates a constructor that takes values for all properties besides the default no-argument constructor.

  2. If you have many properties in a DTO class, or some of the properties are optional, you can use the Fluent API plugin, which generates builder-style methods, essentially giving you named-parameter construction, something Java itself does not provide.

  3. The jakarta-commons-lang plugin generates overriding hashCode(), equals(Object) and toString() methods using Jakarta Commons Lang's HashCodeBuilder, EqualsBuilder and ToStringBuilder classes, which in turn use reflection.

  4. The Default Value plugin honours the default values specified in the schemas.

  5. The CamelCase Always plugin generates class and property names following the camel-case convention.

  6. The Collection Setter Injection plugin generates setter methods for collections.


Unfortunately, at the time of this writing, XFire's WsGen does not pass parameters on to JAXB's XJC, as tracked by JIRA issue XFIRE-1038. The way to get around it is to call the XJCTask Ant task after the WsGenTask call and overwrite all DTOs generated by WsGenTask.

Note that some of the plugins require JAXB 2.1 to work. If you intend to use JAXB 2.0 in the runtime environment, that is fine: you can generate the classes using JAXB 2.1 with the target specified as "2.0" to avoid generating annotations introduced in JAXB 2.1.

To generate a plural form for collections, use a simple JAXB binding file with XJCTask.

If you would like to work with java.util.Date or java.util.Calendar instead of XMLGregorianCalendar, you can provide your own parseMethod and printMethod and specify them in the JAXB binding file used by XJCTask, as detailed in Sun engineer Kohsuke Kawaguchi's blog entry.

The following is extracted from the Ant build file to illustrate how to generate developer-friendly DTO classes using JAXB:

<!-- This task will autogenerate the code from the XSD specification -->
<taskdef name="xjc" classname="com.sun.tools.xjc.XJCTask">
    <classpath>
        <pathelement path="${jaxb1-impl-2.1.3.jar}:${jaxb-api-2.1.3.jar}:${jaxb-impl-2.1.3.jar}:${jaxb-xjc-2.1.3.jar}"/>
        <pathelement path="${jaxb2-commons-commons-lang-plugin.jar}"/>
        <pathelement path="${jaxb2-commons-value-constructor.jar}"/>
        <pathelement path="${jaxb2-commons-fluent-api.jar}"/>
        <pathelement path="${jaxb2-commons-default-value-plugin.jar}"/>
        <pathelement path="${jaxb2-commons-collection-setter-injector.jar}"/>
        <pathelement path="${component.classpath}"/>
    </classpath>
</taskdef>

<!-- This task will autogenerate the code from the WSDL specification -->
<taskdef name="wsgen" classname="org.codehaus.xfire.gen.WsGenTask">
    <classpath>
        <pathelement path="${component.classpath}"/>
    </classpath>
</taskdef>

<!-- Auto generate classes based on the XSD definitions using JAXB bindings -->
<target name="xjc_gen">
    <mkdir dir="${component.autogen_src}"/>
    <delete>
        <fileset dir="${component.autogen_src}"
                 excludes="**/service/**/*.java"/>
    </delete>
    <xjc destdir="${component.autogen_src}" target="2.0" extension="true">
        <arg value="-Xcommons-lang"/>
        <arg value="-Xvalue-constructor"/>
        <arg value="-Xfluent-api"/>
        <arg value="-Xcollection-setter-injector"/>
        <arg value="-Xdefault-value"/>
        <schema dir="${component.home}/src/conf/wsdl" includes="*.xsd"/>
        <binding file="${component.home}/src/conf/wsdl/simple.xjb"/>
    </xjc>
</target>

<!-- Auto generate interfaces and classes based on the WSDL definitions using JAXB bindings -->
<target name="wsdl_gen">
    <mkdir dir="${component.autogen_src}"/>
    <wsgen outputDirectory="${component.autogen_src}"
           wsdl="${component.home}/src/conf/wsdl/service-foo.wsdl"
           package="com.blogspot.ozgwei.service.foo"
           overwrite="true"
           binding="jaxb"
           externalBindings="${component.home}/src/conf/wsdl/simple.xjb"/>
    <wsgen outputDirectory="${component.autogen_src}"
           wsdl="${component.home}/src/conf/wsdl/service-bar.wsdl"
           package="com.blogspot.ozgwei.service.bar"
           overwrite="true"
           binding="jaxb"
           externalBindings="${component.home}/src/conf/wsdl/simple.xjb"/>
</target>

<!-- Build all user, autogenerated and test code -->
<target name="compile" depends="wsdl_gen, xjc_gen">
...
</target>


Next is the content of the simple.xjb JAXB binding file, which uses java.util.Calendar:

<!--
This enables the simple binding mode in JAXB.
See http://weblogs.java.net/blog/kohsuke/archive/2006/03/simple_and_bett.html
-->
<jaxb:bindings jaxb:version="2.0" jaxb:extensionBindingPrefixes="xjc"
               xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
               xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
               xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <jaxb:globalBindings>
        <xjc:simple/>
        <jaxb:javaType name="java.util.Calendar" xmlType="xs:date"
                       parseMethod="javax.xml.bind.DatatypeConverter.parseDate"
                       printMethod="javax.xml.bind.DatatypeConverter.printDate"/>
        <jaxb:javaType name="java.util.Calendar" xmlType="xs:time"
                       parseMethod="javax.xml.bind.DatatypeConverter.parseTime"
                       printMethod="javax.xml.bind.DatatypeConverter.printTime"/>
        <jaxb:javaType name="java.util.Calendar" xmlType="xs:dateTime"
                       parseMethod="javax.xml.bind.DatatypeConverter.parseDateTime"
                       printMethod="javax.xml.bind.DatatypeConverter.printDateTime"/>
    </jaxb:globalBindings>
</jaxb:bindings>


If you prefer to use java.util.Date instead of java.util.Calendar, define the following class and change the binding file accordingly:

package com.blogspot.ozgwei.jaxb;

import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

import javax.xml.bind.DatatypeConverter;

public class DateConverter {

    public static Date parseDate(String s) {
        return DatatypeConverter.parseDate(s).getTime();
    }

    public static Date parseTime(String s) {
        return DatatypeConverter.parseTime(s).getTime();
    }

    public static Date parseDateTime(String s) {
        return DatatypeConverter.parseDateTime(s).getTime();
    }

    public static String printDate(Date dt) {
        Calendar cal = new GregorianCalendar();
        cal.setTime(dt);
        return DatatypeConverter.printDate(cal);
    }

    public static String printTime(Date dt) {
        Calendar cal = new GregorianCalendar();
        cal.setTime(dt);
        return DatatypeConverter.printTime(cal);
    }

    public static String printDateTime(Date dt) {
        Calendar cal = new GregorianCalendar();
        cal.setTime(dt);
        return DatatypeConverter.printDateTime(cal);
    }

}


The simple.xjb will be changed to:

<jaxb:bindings jaxb:version="2.0" jaxb:extensionBindingPrefixes="xjc"
               xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
               xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
               xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <jaxb:globalBindings>
        <xjc:simple/>
        <jaxb:javaType name="java.util.Date" xmlType="xs:date"
                       parseMethod="com.blogspot.ozgwei.jaxb.DateConverter.parseDate"
                       printMethod="com.blogspot.ozgwei.jaxb.DateConverter.printDate"/>
        <jaxb:javaType name="java.util.Date" xmlType="xs:time"
                       parseMethod="com.blogspot.ozgwei.jaxb.DateConverter.parseTime"
                       printMethod="com.blogspot.ozgwei.jaxb.DateConverter.printTime"/>
        <jaxb:javaType name="java.util.Date" xmlType="xs:dateTime"
                       parseMethod="com.blogspot.ozgwei.jaxb.DateConverter.parseDateTime"
                       printMethod="com.blogspot.ozgwei.jaxb.DateConverter.printDateTime"/>
    </jaxb:globalBindings>
</jaxb:bindings>

Tuesday, August 21, 2007

Defining service/component interfaces in WSDLs

We have just completed an iteration of the project we have been working on for the past few months. Now it's a good time to summarise the experiences we have gained so far.

Probably the most significant one is defining all service/component interfaces in WSDLs.

The project started with the interfaces of a few services being defined in WSDLs because they are invoked by the web tier as web service calls. Since the JAXB-generated Java classes of some complex types defined in the XML schema are reused in other service interfaces that were not originally intended to be invoked via web services, we eventually decided to define all service interfaces in WSDLs, regardless of whether they are intended to be invoked as web services or not. This brought us unexpected benefits and productivity gains.

So what are the benefits of defining all service interfaces in WSDLs?

First of all, it takes away most of the tedious work of implementing Data Transfer Objects (DTOs), resulting in higher productivity. If you wonder why DTOs are still in use when domain model objects persisted by Hibernate and JPA can be passed back to service clients directly, please read my previous blog entry: DTOs are a necessary devil in building SOA enterprise applications.

So, how does defining service interfaces in WSDLs take away the boring implementation of DTOs? In a WSDL file, each operation has input messages, output messages and/or fault messages, which in turn are defined as elements in an external or embedded XML schema. And each element is declared with an XML type. The XJC compiler from JAXB can be used to generate corresponding Java classes for any complex types and some restricted simple types. These classes are data holders with absolutely no behaviour; in other words, they are DTOs. Code generation of DTOs has other benefits: no behaviour can be added to the DTOs without subclassing and defining new operations, and these DTOs won't accidentally reference any domain model objects, which is always a concern for hand-coded DTOs.

XML schema allows very rich type definitions, such as abstract types and type inheritance, which have corresponding concepts in object-oriented languages such as Java, making it a perfect choice for generating Java classes from XML data types.

Not only are no-argument constructors and all public getter and setter methods generated for complex types, but also other useful methods, such as valued constructors, builder methods, hashCode, equals and toString methods can be implemented using XJC extensions. These are to be blogged in a follow-up post.

Moreover, defining service/component interfaces in WSDLs can partially enforce the "contract" using schema validation, promoting the best practice of "Design by Contract" (DBC).

Java, by itself, does not provide native support for DBC. Acceptable input values, return values and exceptions are only documented in the Javadoc of the interfaces. The implementation classes have to do all the hard work of validating input values, such as making sure a mandatory parameter is not null, an amount is greater than zero and less than a preset limit, etc.

Meanwhile, XML schema provides a rich set of basic data types, such as positive integers. It can also specify whether an element is mandatory using the "minOccurs" attribute. Furthermore, it allows you to add restrictions to simple types, such as the maximum length of a string or a regular expression pattern that a string must match. Thus, XML schema validation can catch many simple field validation errors. More complex validations, such as cross-field validations and business rule validations, cannot be done by schema validation. However, they can still be documented using schema annotations, which are carried over as comments in the generated Java classes.

Another benefit of defining service/component interfaces in WSDLs is that it promotes the best practice of "contract-first" web service design. Because the service interface is defined in a WSDL file, the service is web-service ready, even if it is not intended to be used as a web service at this stage. The opposite of "contract-first" is "code-first". When adopting a "code-first" practice, the service interfaces are defined in Java, potentially without using DTOs. When the service later needs to be accessed remotely or as a web service, it is not as easy as just exposing it by, say, wiring up the Spring application context. A proper service interface using DTOs has to be defined and implemented by delegating calls to the original interface and doing all the transformation between DTOs and domain models. Even if DTOs are used in the original Java interface, exposing the service as a web service is not as simple as adding JSR-181 annotations to the interface and all the DTOs: some common Java types, such as java.util.Map, have no counterpart in XML. Moreover, the methods defined in your original DTOs may not be suitable for JAXB use: you may not have defined a no-argument constructor; you may not have defined a setter method for each property... So why go the hard way, when defining the interfaces in WSDLs can save you the tedious work of implementing DTOs and, at the same time, provide an elegant way to expose a service as a web service when the time comes?

Saturday, August 18, 2007

DTOs are a necessary devil in building SOA enterprise applications

Bringing up the topic of Data Transfer Objects (DTOs), many people's first thought would probably be that they are a relic of the old Enterprise JavaBeans (EJB) 2.0 days.

Hibernate, JDO and JPA have made it possible to eliminate DTOs from web applications. However, DTOs are a necessary devil when building an enterprise application, especially in a Service-Oriented Architecture (SOA).

For each service or component, the underlying domain model that provides the service should be encapsulated and not leaked beyond the boundary of the service's interfaces.

It is often said that interfaces are the contracts between the client and the provider of a service. However, many new Java developers think that only the interface classes and their method names form the contract, without realising that any objects passed in and out of these operations are also part of it: method parameters, return values and thrown exceptions. And the contractual binding is transitive by reachability; that is, all classes directly or indirectly referenced by these parameters, return values and exceptions are part of the contract too. Therefore, if domain model objects are passed in and out of any operation defined in the service interfaces, they are leaked, making service clients susceptible to any change in the domain model, which is an implementation detail that is supposed to be encapsulated.

For a web application, this is probably not a big deal, as the presentation tier is typically closely coupled to the service and persistence tiers. However, for any enterprise applications in a Service-Oriented Architecture, the change of interfaces can have a ripple effect.

In order to achieve this encapsulation, DTOs are used as the parameters, return values and exceptions of the operations defined in the interfaces, isolating the domain model from the outside world. For this reason, it is not hard to imagine that DTOs are ubiquitous in an SOA world.

Thursday, August 9, 2007

EasyMock Class Extension 2.2 Gotcha

EasyMock mocks interfaces, and the Class Extension mocks classes.

When using the class extension, the EasyMock from the org.easymock.classextension package should be used instead of the same-named class from the original org.easymock package.

When working with more than one mock in Java 5, we can invoke the replay and verify methods using varargs, such as:
import static org.easymock.classextension.EasyMock.*;
...
replay(mockCustomerRepository, mockCustomer);
...
verify(mockCustomerRepository, mockCustomer);
However, if one of the mocks is mocking a class instead of an interface, you get an IllegalArgumentException: not a proxy instance.

Looking into the stack trace, we can see that EasyMock.replay/verify from the original org.easymock package is used instead of the one from the org.easymock.classextension package named in the import statement.

What happened is that org.easymock.classextension.EasyMock extends org.easymock.EasyMock without providing vararg versions of the replay(Object...) and verify(Object...) methods. Thus, when the vararg versions are invoked, control passes to the vararg versions in the superclass, which only handle mocks of interfaces created with the standard Java Proxy.

To get around this, invoke the single-parameter version instead, such as:
import static org.easymock.classextension.EasyMock.*;
...
replay(mockCustomerRepository);
replay(mockCustomer);
...
verify(mockCustomerRepository);
verify(mockCustomer);
Of course, this gotcha only exists in EasyMock Class Extension 2.2. The latest 2.2.2 version provides vararg versions of these two methods, so a better solution is simply to upgrade to that version.

Tuesday, July 31, 2007

EasyMock2 quirk...

When using EasyMock 2 for testing, typically we need to set up expectations before replay, like this:
expect(mockEmployeeRepository
.findByFirstNameAndLastName("John", "Doe"))
.andReturn(employees);
Sometimes you don't know exactly what parameter will be used for the expected call; you can instead specify the class of the parameter, such as:
expect(mockEmployeeRepository
        .findBySpecification(isA(EmployeeSearchSpecification.class)))
        .andReturn(employees);
What if you know the exact value of some but not all parameters? I tried the following:
expect(mockEmployeeRepository
        .findByDepartmentAndSpecification("HR",
                isA(EmployeeSearchSpecification.class)))
        .andReturn(employees);
Unfortunately, running this test gets the following exception thrown by EasyMock:
java.lang.IllegalStateException: 2 matchers expected, 1 recorded.
The correct way is to wrap the known parameter with an "eq" matcher:
expect(mockEmployeeRepository
        .findByDepartmentAndSpecification(eq("HR"),
                isA(EmployeeSearchSpecification.class)))
        .andReturn(employees);
This is a small quirk when using EasyMock 2.2...

Handling referential integrity when doing persistence testing using DbUnit

DbUnit is a great tool for testing database persistence code.

There are many ways to use DbUnit. My preferred way is to delete all records from the database tables and re-populate them with test data before each test. For unit testing, I test against an in-memory database, such as Hypersonic; for integration testing, I test against a database that mirrors the one in production.

However, this approach can sometimes get you into trouble with referential integrity violations.

For example, I had some integration tests against the PRODUCT table, and later I developed some integration tests against the ORDER table, which has a foreign key dependency on the PRODUCT table.

After the latest changes were checked in, the continuous integration tests failed because of a data integrity violation. The cause was that the PRODUCT integration tests saved some records in both the PRODUCT and ORDER tables, with some orders referencing some products. When DbUnit tried to delete all records from the PRODUCT table, the database detected the violation.

How do we solve this? It's pretty easy: just add an empty ORDER element to the test data XML file that is used to populate the database. The following is a sample FlatXmlDataSet file:

<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <product id="1" desc="abc" price="1.23"/>
    <order/>
</dataset>
The trick is that in a FlatXmlDataSet file the tables must be defined in dependency order; that is, a later-defined table may depend on an earlier-defined table. When DbUnit deletes records, it deletes from the last-defined table first.
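For completeness, a minimal sketch of the per-test setup this approach relies on, assuming DbUnit 2.x and an in-memory Hypersonic database (the JDBC URL and file path are placeholders):

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.operation.DatabaseOperation;

public class ProductPersistenceTestSetup {

    // Deletes all rows from the tables in the data set (last-defined table first)
    // and re-populates them before each test.
    public static void refreshDatabase() throws Exception {
        Connection jdbcConnection =
                DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
        IDataSet dataSet =
                new FlatXmlDataSet(new FileInputStream("src/test/data/dataset.xml"));
        try {
            DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        } finally {
            connection.close();
        }
    }
}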

Sunday, July 29, 2007

Springify a "Singleton"-infested application

The singleton pattern has been widely avoided by most developers since the introduction of Spring, which solved the problem in an elegant way.

However, many applications developed before then are infested with the "evil" singleton pattern. Quite often, these singletons are not really singletons; the singleton pattern was simply misapplied to a problem that would be better solved with a Registry.

Every now and then, developers are asked to add Spring to such applications with minimal code changes and refactoring, because these applications typically have no automated unit tests or poor test coverage.

Adding Spring to such applications is rather simple if there is no dependency between the existing codes and the Spring-managed beans.

If a Spring-managed bean depends on a "singleton" that is not managed by Spring, we can define the "singleton" as a Spring-managed bean using a "factory-method", such as "getInstance()", to retrieve the "singleton" instance and inject it into other beans.

Things get more complex when you want to use Spring to configure these "singletons".

For example, one of the applications that I needed to springify used a "singleton" to look up the data source from JNDI. In order to add tests that can run outside the container, we needed to inject a Spring-configured data source, whether registered with JNDI or not, into that "singleton" instance, instead of letting the "singleton" do the JNDI lookup itself.

What we did was add a new static factory method, "createInstance(DataSource)", that accepts a pre-configured data source and returns the singleton instance. We then configured Spring to invoke this new factory method to create the bean, which is stored in the original static variable and returned to callers of the static "getInstance()" method.
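
In code, the change looked roughly like the sketch below; the DataSourceLocator name and the bean ids are hypothetical, but the shape of the factory method is as described above:

import javax.sql.DataSource;

public class DataSourceLocator {

    private static DataSourceLocator instance;

    private final DataSource dataSource;

    private DataSourceLocator(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // The "back door": Spring calls this with a pre-configured data source.
    // The instance is stored in the original static field, so legacy callers
    // of getInstance() keep working unchanged.
    public static DataSourceLocator createInstance(DataSource dataSource) {
        instance = new DataSourceLocator(dataSource);
        return instance;
    }

    // The original accessor used by the legacy code.
    public static DataSourceLocator getInstance() {
        return instance;
    }

    public DataSource getDataSource() {
        return dataSource;
    }
}

// Spring constructs the "singleton" by calling the static factory method and
// supplies the data source as a factory-method argument:
//
//   <bean id="dataSourceLocator" class="com.example.DataSourceLocator"
//         factory-method="createInstance">
//       <constructor-arg ref="dataSource"/>
//   </bean>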

I have read on the internet that some people call this a "Singleton with a back door" pattern. To me, it is pretty clear that the singleton pattern was misused here. What the original developer really needed was the "Registry" pattern, which allows other objects to "find common objects and services" through "a well-known object". Spring creates and configures the common object or service, which is then registered so that other objects in the same application can locate it.

In fact, Spring itself uses the "Registry" pattern in many places, notably in transaction management and JDBC connection management: Spring uses a thread-local registry to hold the JDBC connection so that all JDBC operations within the same transaction use the same connection...

One thing to note when using Spring to configure such "singletons" is that there are typically many implicit dependencies between them. If Spring instantiates these "singletons" in the wrong order, the application cannot run properly, usually failing while constructing the application context. The solution is to use the "depends-on" attribute in the bean definitions to make the dependencies between these "singletons" explicit.

Friday, June 29, 2007

Workflow engine integration, where does it go?

In one of the projects I have worked on, OSWorkflow is used to manage workflows. It is integrated into the service layer instead of into the domain objects.

The design feels unnatural somehow:

  • The workflow id is stored in the domain object and passed as a constructor parameter. However, it is not used by the domain object itself; the domain class merely provides a getter so that the service layer can retrieve the workflow id and load the workflow instance from OSWorkflow.
  • After loading the domain object and then the workflow instance, the service layer needs to pass to the workflow engine some properties from the domain object and some properties from the DTO that carries the user's input for the current request, so that the workflow engine can work out the next status for the domain object.
  • The domain class also exposes a method that allows arbitrary updates to the 'status' field, so that the service layer can set whatever status the workflow engine comes up with. The domain model loses encapsulation.
  • The service layer becomes thick while the domain model becomes anaemic...

IMHO, this design smells.

The change of state in a domain object should be managed by the domain object itself. Application requirements, such as sending email notifications, can be implemented as state-change event listeners.

If the workflow logic gets too complicated, workflow engines like OSWorkflow come to the rescue. But even if the workflow logic is delegated, or outsourced, to a workflow engine, it is still domain logic; thus, the workflow engine should be integrated directly into the domain objects rather than into the service layer.
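
As a rough sketch of what this could look like, with a hypothetical WorkflowEngine abstraction standing in for OSWorkflow and an equally hypothetical LeaveRequest domain class:

// WorkflowEngine.java - a thin, made-up abstraction over the real engine.
public interface WorkflowEngine {
    // Performs the given action on the workflow instance and returns the new status.
    String performAction(long workflowId, String action);
}

// LeaveRequest.java - the domain object drives its own state changes.
public class LeaveRequest {

    private final long workflowId;
    private String status = "PENDING";
    // Injected, for example via Spring 2's domain object DI, or set through a Registry.
    private WorkflowEngine workflowEngine;

    public LeaveRequest(long workflowId) {
        this.workflowId = workflowId;
    }

    // A domain-meaningful method: callers never see the workflow engine and
    // never manipulate the status field directly.
    public void approve() {
        this.status = workflowEngine.performAction(workflowId, "approve");
    }

    public String getStatus() {
        return status;
    }

    public void setWorkflowEngine(WorkflowEngine workflowEngine) {
        this.workflowEngine = workflowEngine;
    }
}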

In order for domain objects to work with a workflow engine, domain objects need to have a reference to the workflow engine, which used to be an issue.

With the domain object dependency injection feature introduced in Spring 2, this problem is solved. Even if you are using Spring 1.2 or another IoC container that does not support domain object dependency injection, or no IoC container at all, you can still use the Registry pattern, whereby a domain object looks up its dependencies.
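
If dependency injection into domain objects is not an option, a tiny Registry, again just a sketch, can serve as the well-known lookup point:

public final class DomainRegistry {

    private static WorkflowEngine workflowEngine;

    private DomainRegistry() {
    }

    // Called once at application start-up, for example from a Spring-managed bean.
    public static void setWorkflowEngine(WorkflowEngine engine) {
        workflowEngine = engine;
    }

    // Domain objects obtain the engine through this well-known object.
    public static WorkflowEngine workflowEngine() {
        return workflowEngine;
    }
}

A domain object would then call DomainRegistry.workflowEngine().performAction(...) inside its state-changing methods instead of holding its own reference.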

In conclusion, state change is part of the domain logic, and even if it is outsourced to a workflow engine, the workflow engine integration should still happen in the domain object, without leaking the implementation details to other layers. Technically, there is no longer any barrier to injecting the workflow engine into domain objects.

Sunday, June 24, 2007

Is using domain entities in presentation layer encouraged?

There is an interesting discussion going on in the Spring support forum about using Hibernate entities in the web layer.

In his reply, Debasish said:
I am all for using domain entities in the presentation layer. If you read the Expert Spring MVC book or the Hibernate book, both of them encourage using smart domain models and reusing domain objects in the presentation layer. The Spring MVC book recommends using domain objects as command objects and the Hibernate book recommends doing away with the behaviorless JSF backing beans in favor of smart POJOs in SEAM.
I have plenty of respect for Debasish. However, I have to disagree on this point.

One thing that I keep in mind is that the development of both Spring MVC and Hibernate started in the dark age of EJB 2.0, which forced developers to build enterprise applications with anaemic domain models using transaction scripts. At that time, many people equated JavaBeans with POJOs and domain classes. Unfortunately, that was misleading... In a rich domain model, entity classes are typically NOT JavaBeans, because providing setter methods can easily break the invariants of a domain class. Since then, Domain Driven Design has gained wider adoption and recognition.

In a recent presentation, "Are We There Yet?", Rod Johnson, the founder of Spring, asked the attendees a question: when working with Hibernate, who uses property access and who uses field access? Those who use property access are persisting DTOs, or, in other words, using an anaemic domain model. I cannot recall the exact words, but that is basically what he said and meant.

I would like to add that those who use domain classes as command objects in the web layer are also working with an anaemic domain model.

For example, in one of the projects I have worked on, there is a Fraud class with four fields: fraudPaymentDate, fraudPaymentAmount, amountRecovered and dateRecovered. The invariants involving these fields are:
1) amountRecovered must not exceed fraudPaymentAmount.
2) When amountRecovered is $0.00, dateRecovered must be null.
3) When amountRecovered is greater than $0.00, dateRecovered must be present, must be in the past, and must not predate fraudPaymentDate.

If this class provides setter methods for all these fields and is used directly as a command object, there is no way to avoid breaking the invariants during the data-binding phase of Spring MVC:
If setAmountRecovered() is invoked with a non-zero amount before setDateRecovered(), then invariant no. 3 is broken.
If setDateRecovered() is invoked before setAmountRecovered(), then invariant no. 2 is broken.

Therefore, a command object typically does not maintain data integrity. Usually a validate() method is provided so that the client can check data integrity AFTER the binding.

There is one big drawback with the validate() approach: the domain object relies on its client calling validate() to check data integrity, or on a trigger from an event such as save(). However, when using Hibernate, domain objects can be persisted transparently, without the domain object ever being aware that it is being persisted. Thus, a corrupted domain object can easily end up in the database.

When using a rich domain model together with DTOs, this problem is solved:
The DTO can be used as a command object in the web layer and passed to the service tier. From the data in the DTO, the UpdateService can construct two value objects: fraudPaymentInfo, which contains both fraudPaymentDate and fraudPaymentAmount, and recoveryInfo, which contains both amountRecovered and dateRecovered. It then invokes the Fraud object's updatePaymentInfo() and updateRecoveryInfo() methods, so the invariants are maintained throughout the process.
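
To make this concrete, here is a sketch of what the rich Fraud class could look like; the value-object names follow the description above, while the field types, constructors and exception handling are my own assumptions:

import java.math.BigDecimal;
import java.util.Date;

public class Fraud {

    private Date fraudPaymentDate;
    private BigDecimal fraudPaymentAmount;
    private BigDecimal amountRecovered = BigDecimal.ZERO;
    private Date dateRecovered;

    public void updatePaymentInfo(FraudPaymentInfo paymentInfo) {
        this.fraudPaymentDate = paymentInfo.getDate();
        this.fraudPaymentAmount = paymentInfo.getAmount();
    }

    public void updateRecoveryInfo(RecoveryInfo recoveryInfo) {
        BigDecimal amount = recoveryInfo.getAmount();
        Date date = recoveryInfo.getDate();
        // Invariant 1: amountRecovered must not exceed fraudPaymentAmount.
        if (amount.compareTo(fraudPaymentAmount) > 0) {
            throw new IllegalArgumentException("Recovered amount exceeds the fraud payment amount");
        }
        // Invariant 2: a zero recovery must not carry a recovery date.
        if (amount.signum() == 0 && date != null) {
            throw new IllegalArgumentException("A zero recovery must not have a recovery date");
        }
        // Invariant 3: a non-zero recovery needs a date in the past that does not
        // predate the fraud payment date.
        if (amount.signum() > 0
                && (date == null || date.after(new Date()) || date.before(fraudPaymentDate))) {
            throw new IllegalArgumentException("Invalid recovery date");
        }
        this.amountRecovered = amount;
        this.dateRecovered = date;
    }
}

// Simple immutable value objects carrying the two groups of fields.
class FraudPaymentInfo {
    private final Date date;
    private final BigDecimal amount;
    FraudPaymentInfo(Date date, BigDecimal amount) { this.date = date; this.amount = amount; }
    Date getDate() { return date; }
    BigDecimal getAmount() { return amount; }
}

class RecoveryInfo {
    private final Date date;
    private final BigDecimal amount;
    RecoveryInfo(Date date, BigDecimal amount) { this.date = date; this.amount = amount; }
    Date getDate() { return date; }
    BigDecimal getAmount() { return amount; }
}

No matter in which order the web layer binds the DTO's fields, the Fraud object itself only ever changes through these two methods, so its invariants cannot be broken.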

Monday, June 18, 2007

Domain Classes should not be passed to UI layer.

Debasish Ghosh, one of my favourite bloggers, posted an entry titled
Domain Classes or Interfaces? last October. Later he posted a follow-up entry Abstract Classes or Aspect-Powered Interfaces?.

He had a great debate with Sergio Bossa on the Domain Driven Design (DDD) mailing list.

I was surprised to read that both Debasish and Sergio are in favour of passing domain objects to the UI layer and using them as command objects, in the context of a DDD discussion, because this kind of usage is a code smell of an Anaemic Domain Model.

As I understand it, one major reason cited by Debasish for using abstract domain classes is to maintain the integrity of domain objects, making sure no constraints are violated.

However, when a domain class is designed following the philosophy of DDD, it typically has few setter methods, making it unsuitable for use as a command object in the UI layer.

Moreover, there are typically some business methods that you would not like to expose to the UI layer, such as those that access the database and must be wrapped in a transaction, making it even less desirable to pass domain objects to the UI layer.

In my opinion, a domain class typically has three interfaces (not necessarily Java interfaces):
* Business interface - these methods usually bear domain-specific meaning in their names.
* State Exposure interface - typically getter methods.
* State Manipulation interface - these methods update the object's state without violating any constraints, and are usually named updateXXX rather than setXXX.

The Business interface should not be exposed to the UI layer. If the return value of any business method is to be displayed in the UI, a getter method should be defined in the State Exposure interface, which should then delegate to the business method.

The State Exposure interface is usually what gets passed to the UI layer for display. In a traditional application design, the role of exposing domain object state is played by a Data Transfer Object (DTO). In a Spring+Hibernate application, this Java interface can be implemented by the domain class directly, and the domain objects can be passed to the UI layer without exposing business methods; typically, Open Session In View (OSIV) is required for this usage.

Even if the domain class implements the State Exposure interface, the domain objects passed to the UI layer should not be used as command objects. Each update should instead be invoked on a facade, which in turn invokes the methods defined in the State Manipulation interface. The piece of information for the update is usually passed in as a DTO, as sketched below.
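
As a sketch of these three interfaces, using an Account example that is entirely made up:

import java.math.BigDecimal;

// State Exposure interface: getters only, safe to pass to the UI layer.
public interface AccountView {
    String getAccountNumber();
    BigDecimal getBalance();
}

// Business interface: domain-meaningful behaviour, not exposed to the UI layer.
interface AccountOperations {
    void debit(BigDecimal amount);
}

// State Manipulation interface: updateXXX methods that preserve constraints.
interface AccountUpdates {
    void updateOverdraftLimit(BigDecimal newLimit);
}

// The domain class implements all three, but only AccountView (or a DTO built
// from it) travels to the UI; updates arrive through a facade calling AccountUpdates.
class Account implements AccountView, AccountOperations, AccountUpdates {

    private final String accountNumber;
    private BigDecimal balance = BigDecimal.ZERO;
    private BigDecimal overdraftLimit = BigDecimal.ZERO;

    Account(String accountNumber) {
        this.accountNumber = accountNumber;
    }

    public String getAccountNumber() {
        return accountNumber;
    }

    public BigDecimal getBalance() {
        return balance;
    }

    public void debit(BigDecimal amount) {
        // Constraint: the balance may not drop below the negated overdraft limit.
        if (balance.subtract(amount).compareTo(overdraftLimit.negate()) < 0) {
            throw new IllegalStateException("Insufficient funds");
        }
        balance = balance.subtract(amount);
    }

    public void updateOverdraftLimit(BigDecimal newLimit) {
        // Constraint: the overdraft limit may never be negative.
        if (newLimit.signum() < 0) {
            throw new IllegalArgumentException("Overdraft limit must not be negative");
        }
        this.overdraftLimit = newLimit;
    }
}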

Note that there is a difference between the DTO for state exposure and the DTO for state manipulation:
The DTO for state exposure usually captures most, if not all, of the domain object's state.
The DTO for state manipulation usually captures only the piece of state that is being updated.

In conclusion, because of the three interfaces a domain object usually has, it is strongly discouraged to pass domain objects to UI layer or use them as command objects, when applying DDD.

Sunday, June 10, 2007

Paypal, You Did It Again!

This is a follow-up to my previous blog about Paypal's pathetic service.

I was able to log in eventually. However, since then, I have received two warning emails from Paypal, asking me to accept their Policy Updates to prevent Account Limitation.

The first email is like this:
Dear ***,

PayPal's records indicate that you have not accepted the Product Disclosure Statement.

Failure to accept the Product Disclosure Statement within thirty days will result in access to your PayPal account being limited. If your account is limited, you will no longer be able to receive or send payments.

PayPal values you as a customer and does not want you to lose the valuable benefits of your account. Please visit the PayPal website to accept the Product Disclosure Statement. To do this, copy and paste the following URL into your browser: https://www.paypal.com. Then, log in to your account and click the New Policy Update link on your Account Overview page.

Thank you for using PayPal!
The PayPal Team

----------------------------------------------------------------

PayPal, an eBay company

Copyright © 1999-2007 PayPal, Inc. All rights reserved.

PayPal Australia Pty Limited ABN 93 111 195 389 (AFSL 304962). Any general financial product advice provided in this site has not taken into account your objectives, financial situations or needs.
PayPal Email ID PP 878

I went to their website and visited the "Policy Updates" page after logging in. However, I could not find any control that would let me indicate whether I accepted or declined the policies, as shown in the screenshots below (the top and the bottom of the page):

A few days ago, I received the final warning email from Paypal:
Dear ***,

Recently, we sent an email notice to remind you to accept the updated Product Disclosure Statement within thirty days to avoid restrictions being placed on your account.

You now have seven days to accept the policy. If your account is limited, you will be unable to send or receive money, but will be able to withdraw any remaining balances.

PayPal values you as a customer and does not want you to lose the valuable benefits of your account. Please visit the PayPal website to accept the Product Disclosure Statement. To do this, copy and paste the following URL into your browser: https://www.paypal.com. Then, log in to your account and click the New Policy Update link on your Account Overview page.


----------------------------------------------------------------

Thank you for using PayPal!
The PayPal Team

----------------------------------------------------------------

PayPal, an eBay company

Copyright © 1999-2007 PayPal, Inc. All rights reserved.

PayPal Australia Pty Limited ABN 93 111 195 389 (AFSL 304962). Any general financial product advice provided in this site has not taken into account your objectives, financial situations or needs.
PayPal Email ID PP 879
I logged in to Paypal again and still couldn't find the "Accept" button or checkbox that I expected.

Today, I received the notification of limitation on my account:
Dear ***,

We regret to inform you that your account is now limited because you failed to accept one or more of the PayPal updated policies. Over the past month, we have contacted you by email twice asking you to accept the updated policies within 30 days to avoid account limitation.

Because your account is limited, you cannot send or receive money. Access the link below to learn how to withdraw any remaining balances from your account should you decline PayPal's updated policies.

Please visit the PayPal website to accept the policy updates and restore your account status. To do this, copy and paste the following URL into your browser: https://www.paypal.com. Then, log in to your account and click the New Policy Update link on your Account Overview page.

We encourage you to read the new User Agreement and Privacy Policy, as they contain important information about your PayPal account, your rights as a PayPal user, and the ways in which PayPal will use your personal information.

To view the Product Disclosure Agreement, please visit the PayPal website. To do this, copy and paste the following URL into your browser: https://www.paypal.com. Then, log in to your account and click the New Policy Update link on your Account Overview page.

Sincerely,
PayPal


-----------------------------------
Additional Information
-----------------------------------

Q. Why will my account be limited if I do not accept the policy updates within 30 days?

A. By agreeing to the terms of PayPal's updated policies, you are entering a binding, voluntary contract with PayPal. Therefore, if you decline the updated policies, we must assume that you no longer agree to abide by those rules, and your account will be limited. When your account is limited, you cannot send or receive money until you accept each policy update required. You will also have the right to withdraw any remaining balances if your account is limited.

Sincerely,
PayPal


----------------------------------------------------------------

PayPal, an eBay company

Copyright © 1999-2007 PayPal, Inc. All rights reserved.

PayPal Australia Pty Limited ABN 93 111 195 389 (AFSL 304962). Any general financial product advice provided in this site has not taken into account your objectives, financial situations or needs.
PayPal Email ID PP178

I logged in again and still could not work out how to accept the policies.

For God's sake, haven't they ever tested their website to make sure a user can accept their policies?

So, so, so pathetic... And I really don't want to do any business with them any more. Unfortunately, I have to... Yahoo!'s Flickr only accepts Paypal payments... But I'll try to stay away from Paypal as much as I can. I just don't trust them: if they can't even make their website function decently, how can I trust them to abide by privacy laws or safeguard credit card information...

Sunday, May 6, 2007

Paypal - the *WORST* e-Commerce website I have ever used!

Using Paypal is so painful!

1) Painful to reset your password if you forget the email address you signed up with.
I had an account with Paypal a long time ago, but I forgot which email address I signed up with.
So I clicked the "forgot your password" link.
I entered my Hotmail address, and the page displayed "The instructions on how to reset your password have been sent to your account."
Then I monitored my Hotmail account for hours and did not find any email from Paypal.
Then I tried again, this time with my Yahoo! account. Same message. And still no email from Paypal appeared in the Yahoo! inbox.
Can't they just tell you that you didn't sign up with that email address?!

2) Painful to sign up and activate your account.
Finally I gave up and decided to sign up with my Gmail address. All seemed fine until I clicked the link in the message to activate my account.
The page prompted me to enter my password. I entered the password I had set not long before. Alas, the page just displayed "Click here to retry", with no other error message telling me what was wrong! I tried and tried and still had no luck.


So I thought maybe I had set the initial password incorrectly, so I decided to reset my password. This time I did receive the password reset email. I clicked the link and it seemed fine.
I tried to reset my password to the password I had wanted to set initially. My password was rejected because it was the same as the current active password!!! Then why the hell did Paypal ask me to retry when I entered the correct password in the first place!!!
I changed to another password, which was accepted. Then Paypal prompted me to log in.
I entered my login details correctly with the second password, and Paypal just displayed the same "Click here to retry" message again!!!

I have never ever used an e-Commerce website that is more pathetic than Paypal!!!

IMHO, Paypal is the *WORST* high-profile website ever in the world!!!

Tuesday, March 27, 2007

Coding convention of prefixing instance variables and Java 7's language support for JavaBeans properties

The coding guidelines in our company specify that all instance variables must be prefixed with 'm'.

I think this is ugly; and if you need this prefix to tell whether a variable is an instance variable or a local variable/parameter, that is a bad code smell indicating that refactoring is needed.

In addition, a good IDE, such as Eclipse or IntelliJ, can display instance variables, static variables and local variables in different colours and styles. (NetBeans's support in this area is disappointing, even in 5.5.)

Rod Johnson, Spring's founder, said in his famous book "Expert One-on-One J2EE Design and Development" that he did not advocate this practice.

Anyway, this practice will conflict with the proposed language support for JavaBeans properties in Java 7:

To declare a simple JavaBeans property, currently you code:
public class Foo {
private Bar baz;
public Bar getBaz() { return baz; }
public void setBaz(Bar newBaz) { this.baz = newBaz; }
}

In Java 7 you'll code:
public class Foo {
public property Bar baz;
}

You only need to provide your own getter/setter methods if you need to override the default.

To access the properties, currently you code:
thisFoo.setBaz(thatFoo.getBaz());

In Java 7 you'll code:
thisFoo->baz = thatFoo->baz;

So you see, if you stick to the prefixing practice, you cannot utilise the new language support for properties.

So I'm in favour of abolishing the prefixing of instance variables altogether. However, depending on the degree of comfort in an organisation, prefixing other variables is still acceptable, such as prefixing class variables with 's' and prefixing local variables and method parameters with 'new', 'the', 'tmp', 'old', 'p' or 'l', etc.