Tuesday, February 2, 2016

Hibernate 5.0.2 - Specifying Schema Causes AssertionFailure in JoinedSubclassEntityPersister

After migrating to WildFly 10, my web application based on JPA and Hibernate could not be deployed due to the following error:

Caused by: org.hibernate.AssertionFailure: Table xxx.XXX not found
    at org.hibernate.persister.entity.AbstractEntityPersister.getTableId(AbstractEntityPersister.java:5107)
    at org.hibernate.persister.entity.JoinedSubclassEntityPersister.<init>(JoinedSubclassEntityPersister.java:433)
    at sun.reflect.GeneratedConstructorAccessor89.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.hibernate.persister.internal.PersisterFactoryImpl.createEntityPersister(PersisterFactoryImpl.java:96)
    ... 13 more

I tested against Hibernate versions from 5.0.1 up to 5.0.7, which is the version bundled in WildFly 10. It turns out that my application works just fine up to Hibernate 5.0.1; from Hibernate 5.0.2 onward, it's broken.

It seems the <schema> elements in my ORM mapping file, for entities in an entity hierarchy using the JOINED inheritance strategy, trigger the problem. I have reported the issue as HHH-10490.

The current workaround is to remove those <schema> elements for entities using the JOINED inheritance strategy from the mapping file. If that isn't acceptable for your case, you can also downgrade to Hibernate 5.0.1. For WildFly 10 users, add the following property to your persistence.xml:

<property name="jboss.as.jpa.providerModule" value="application" />

Then, add Hibernate 5.0.1 as a dependency of your application, so that the required Hibernate artifacts are packaged along with your application.
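Putting it together, the relevant part of persistence.xml might look like this (a minimal sketch; the persistence unit name is hypothetical):

```xml
<persistence-unit name="myPU">
    <properties>
        <!-- Use the Hibernate packaged with the application instead of
             the one bundled in WildFly 10 -->
        <property name="jboss.as.jpa.providerModule" value="application"/>
    </properties>
</persistence-unit>
```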

Environments

  • WildFly 10
  • Hibernate 5
  • MySQL Connector/J 5.1.37

Monday, November 23, 2015

WildFly 9 - Execute Batch Jobs from a JAR in a WAR

It's very common to package Java EE JSR-352 batch jobs and artifacts in a JAR file and execute the jobs in a web application that has the JAR as a dependency. In WildFly 9, due to class loading based on JBoss Modules and an issue in WildFly 9 itself, you might end up with an exception: javax.batch.operations.JobStartException: JBERET000601: Failed to get job xml file for job XXX.

This post is about how to execute Java EE JSR-352 batch jobs from a JAR file inside a WAR archive, specifically on WildFly 9.

META-INF/batch-jobs in the WAR

In the WAR archive, make sure you have a META-INF/batch-jobs directory; it goes under the WEB-INF/classes directory. If you don't actually have any job XML files in this directory, put a placeholder file (a README, for example) so the directory isn't empty and doesn't get ignored by the packaging tool.

WEB-INF/beans.xml in the WAR

If you are using CDI for job artifacts in the WAR, make sure you have a beans.xml file for CDI under the WEB-INF directory. This is the trigger that leads WildFly to add an implicit module dependency on the CDI subsystem.
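For a Java EE 7 deployment, a minimal WEB-INF/beans.xml might look like this (the bean discovery mode is a judgment call for your application):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       version="1.1" bean-discovery-mode="all">
</beans>
```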

META-INF/batch-jobs in the JAR

In the JAR file, you store the job XML documents under the META-INF/batch-jobs directory, of course.
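For reference, a minimal job document - say META-INF/batch-jobs/my-job.xml, where the job id, step id, and batchlet ref are hypothetical - looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<job id="my-job" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
    <step id="step1">
        <!-- "myBatchlet" refers to a batch artifact, e.g. a CDI-named bean -->
        <batchlet ref="myBatchlet"/>
    </step>
</job>
```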

META-INF/beans.xml in the JAR

If you are using CDI for job artifacts in the JAR, you should have a CDI beans.xml file under the META-INF directory in the JAR. This ensures that WildFly scans the JAR for job artifacts. It is optional for some deployments, though.

META-INF/services/org.jberet.spi.JobXmlResolver in the JAR

As a workaround specific to WildFly 9, you need to place a service provider configuration file in the resource directory META-INF/services in the JAR, named org.jberet.spi.JobXmlResolver. The configuration file contains only the following line:

org.jberet.tools.MetaInfBatchJobsJobXmlResolver

This service provider configuration file ensures that WildFly 9 scans the META-INF/batch-jobs directory in the same JAR file for job XML documents.


Thursday, November 5, 2015

JPA Inheritance and SQLException: Parameter Index Out of Range

JPA (Java Persistence API) supports inheritance. When working with the SINGLE_TABLE or JOINED mapping strategy, the @DiscriminatorColumn annotation (or the discriminator-column element if you are using an XML mapping descriptor) is used to specify the name of the type discriminator column. This column must not be mapped to any field in any of the classes in the entity hierarchy. If you do map it, you might encounter SQLException: Parameter index out of range. I'm using MySQL and Hibernate; the stack trace looks like this:

Caused by: org.hibernate.exception.GenericJDBCException: could not insert: [...]
    at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:126)
    at org.hibernate.id.insert.AbstractReturningDelegate.performInsert(AbstractReturningDelegate.java:65)
    at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3032)
    at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3556)
    at org.hibernate.action.internal.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:97)
    at org.hibernate.engine.spi.ActionQueue.execute(ActionQueue.java:480)
    at org.hibernate.engine.spi.ActionQueue.addResolvedEntityInsertAction(ActionQueue.java:191)
    at org.hibernate.engine.spi.ActionQueue.addInsertAction(ActionQueue.java:175)
    at org.hibernate.engine.spi.ActionQueue.addAction(ActionQueue.java:210)
    at org.hibernate.event.internal.AbstractSaveEventListener.addInsertAction(AbstractSaveEventListener.java:324)
    at org.hibernate.event.internal.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:288)
    at org.hibernate.event.internal.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:194)
    at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:125)
    at org.hibernate.jpa.event.internal.core.JpaPersistEventListener.saveWithGeneratedId(JpaPersistEventListener.java:84)
    at org.hibernate.event.internal.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:206)
    at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:149)
    at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:75)
    at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:807)
    at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:780)
    at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:785)
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.persist(AbstractEntityManagerImpl.java:1181)
    ... 1 more
Caused by: java.sql.SQLException: Parameter index out of range (1 > number of parameters, which is 0).
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:998)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:937)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:926)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:872)
    at com.mysql.jdbc.PreparedStatement.checkBounds(PreparedStatement.java:3367)
    at com.mysql.jdbc.PreparedStatement.setInternal(PreparedStatement.java:3352)
    at com.mysql.jdbc.PreparedStatement.setInternal(PreparedStatement.java:3389)
    at com.mysql.jdbc.PreparedStatement.setNull(PreparedStatement.java:3428)
    at org.hibernate.type.descriptor.sql.BasicBinder.bind(BasicBinder.java:77)
    at org.hibernate.type.AbstractStandardBasicType.nullSafeSet(AbstractStandardBasicType.java:282)
    at org.hibernate.type.AbstractStandardBasicType.nullSafeSet(AbstractStandardBasicType.java:277)
    at org.hibernate.type.AbstractSingleColumnStandardBasicType.nullSafeSet(AbstractSingleColumnStandardBasicType.java:56)
    at org.hibernate.persister.entity.AbstractEntityPersister.dehydrate(AbstractEntityPersister.java:2843)
    at org.hibernate.persister.entity.AbstractEntityPersister.dehydrate(AbstractEntityPersister.java:2818)
    at org.hibernate.persister.entity.AbstractEntityPersister$4.bindValues(AbstractEntityPersister.java:3025)
    at org.hibernate.id.insert.AbstractReturningDelegate.performInsert(AbstractReturningDelegate.java:57)
    ... 20 more

There is no need for a field mapped to the discriminator column, because for any specific concrete entity class, its type discriminator value is basically a constant.
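As a sketch (the entity and column names are hypothetical, not from my application), the discriminator column is declared on the root entity, and no field in the hierarchy maps to it:

```java
import javax.persistence.*;

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "ANIMAL_TYPE")
public abstract class Animal {
    @Id
    @GeneratedValue
    private Long id;
    // Correct: no field is mapped to the ANIMAL_TYPE column;
    // the persistence provider writes the discriminator value itself.
}

@Entity
@DiscriminatorValue("DOG")
class Dog extends Animal {
    // The discriminator value "DOG" is a constant for this class,
    // so no field is needed to hold it.
}
```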

Environment

  • Java Persistence API 2.1
  • Hibernate 4.3.1
  • MySQL Connector 5.1.36

Tuesday, June 30, 2015

ADF - Time Zone for History Columns

In my last post of the time zone series - Time Zone for Oracle JDBC Driver, I introduced how the Oracle JDBC Driver processes date values with regard to time zones. As a special note, in this post I will introduce how time zones affect the ADF Business Components date history columns, specifically the "Created On" and "Modified On" columns.

Here's the description from the official document (see the resources below):

  • Created On: This attribute is populated with the time stamp of when the row was created. The time stamp is obtained from the database.
  • Modified On: This attribute is populated with the time stamp whenever the row is updated/created.

So, the time stamp is obtained from the database (when there is a connection), rather than from the JVM in which ADF is running. Basically, an appropriate query is executed in the database to get the current database time; the value is then returned through JDBC and converted to the appropriate Java date type. In this process, the rules introduced in my last post apply.

For example, for Oracle Database, the query (see resources below) would be like this:

select sysdate from dual

Whenever an entity object is about to initialize or update its history columns, the current transaction object is queried for the current database time. The transaction object executes the query statement and returns the timestamp obtained from the database (as a performance optimization, the query is executed only the first time; the difference between the database time and the JVM time is then saved and added to the JVM time to answer subsequent requests).
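The caching scheme can be sketched in plain Java; this is my own illustration of the idea, not ADF's actual implementation:

```java
// Sketch of the delta-caching idea described above (not ADF's code).
class DbTimeCache {
    private Long delta; // cached (database time - JVM time), set on first use

    // 'databaseTimeMillis' stands in for the result of executing the
    // database time query (e.g. "select sysdate from dual") over JDBC.
    long currentDbTime(long databaseTimeMillis) {
        if (delta == null) {
            // First request: hit the database once and cache the difference.
            delta = databaseTimeMillis - System.currentTimeMillis();
        }
        // Subsequent requests: derive the database time from the JVM clock,
        // without another round trip to the database.
        return System.currentTimeMillis() + delta;
    }
}
```

Note that the cached delta bakes in whatever time zone assumptions the initial conversion made, which is why the JDBC conversion rules from the last post matter here.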

Now it's clear: to get correct history columns, we need to make sure both the database time zone and the JVM time zone are configured correctly. Typically, both of them should be configured as UTC, and we can then use converters on UI components to display date values, or accept user input for date values, in any other appropriate time zone, as introduced in another post of mine, ADF - Time Zone for af:convertDateTime.

Resources

Monday, March 23, 2015

Time Zone for Oracle JDBC Driver

In my last post - Time Zone for af:convertDateTime, I introduced how date values are passed around in a typical ADF application and, specifically, how ADF Faces handles date value conversion with respect to the time zone configuration. To review, let's take a look at this figure again:

Image: Date Data Handling

In this post I will be talking about another part of the puzzle - how the Oracle JDBC driver processes date values with regards to time zones. It can be illustrated as the following simple figure:

Image: Oracle JDBC Driver and Time Zone

As shown in the figure, this post will use java.sql.Date (or simply Date in monospace type) and the Oracle DATE datatype (or simply DATE in monospace type) for the discussion. The term "date value" will be used for general purposes.

Oracle Database stores date values in its own internal format. A DATE value is stored in a fixed-length field of seven bytes, corresponding to century, year, month, day, hour, minute, and second. When a date value goes from the application to the database, and out of the database back to the application, it basically works like this:

  1. A java.sql.Date value is created to hold the date value, and it's in the time zone GMT.
  2. The Date value is sent to the Oracle JDBC Driver, and the driver converts it to the Oracle DATE value and passes it to the database.
  3. The Oracle JDBC Driver retrieves the DATE value out of the database and converts it back to a java.sql.Date value.

The Java Date value implicitly carries time zone information, which is always GMT by definition; the Oracle DATE datatype does not. For the Oracle JDBC Driver to convert a value between these two datatypes, another time zone must be specified in some way as the source or destination time zone. If you just want a quick answer, here it is: the Oracle JDBC Driver uses the default time zone of the Java VM if none is explicitly specified.

The key lies in the class oracle.sql.DATE, which provides conversions between the Oracle DATE datatype and the Java java.sql.Date (and java.sql.Time, java.sql.Timestamp). Specifically, I'll talk about its two overloaded methods used to convert an Oracle DATE value into a Java Date value. The reverse conversions are handled by its constructors, with the same ideas shared.

One of the methods is:

public static Date toDate(byte[] date, Calendar cal)

And another one is:

public static Date toDate(byte[] date)

Calling the second one is simply equivalent to calling toDate(date, null). Let's focus on the first one. This method accepts two parameters. The first parameter represents the Oracle DATE value to be converted, with each byte in the array corresponding to a field in the internal format of the Oracle DATE datatype (that seven-byte, fixed-length format). The other parameter is documented as follows:

cal - Calendar which encapsulates the timezone information to be used to create Date object

Here is how this method works:

  1. A new Calendar instance is created using the TimeZone encapsulated in the specified Calendar parameter (cal1 = Calendar.getInstance(cal.getTimeZone())). In case the Calendar parameter is null, use the default time zone (cal1 = Calendar.getInstance()).
  2. Populate each field of the new Calendar instance with the value of each corresponding field in the byte array.
  3. Create and return a new java.sql.Date object using the long value of the time returned from the populated Calendar instance (new java.sql.Date(cal1.getTime().getTime())).
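The steps above can be reproduced in plain Java to see the effect of the time zone; the decoding of the seven-byte array is skipped here and the field values are passed in directly (a simplification of mine):

```java
import java.util.Calendar;
import java.util.TimeZone;

public class OracleDateSketch {
    // Interpret the given date/time fields in the given time zone,
    // mimicking toDate(byte[], Calendar) after the bytes are decoded.
    public static java.sql.Date toDate(int year, int month, int day,
                                       int hour, int minute, int second,
                                       TimeZone tz) {
        Calendar cal = (tz == null)
                ? Calendar.getInstance()      // null Calendar: JVM default zone
                : Calendar.getInstance(tz);   // otherwise: the caller's zone
        cal.clear();
        cal.set(year, month - 1, day, hour, minute, second); // populate fields
        // The resulting epoch millis depend on the time zone chosen above.
        return new java.sql.Date(cal.getTime().getTime());
    }
}
```

Feeding the same field values with GMT versus GMT+02:00 yields instants two hours apart, which is exactly why the JVM default time zone matters.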

In summary, the Oracle JDBC Driver by default interprets date values retrieved from the database as being in the time zone of the Java VM. The values actually loaded into Java Date objects may therefore vary depending on your Java VM's time zone, and the same applies in the reverse direction.


Environment

  • JDeveloper 12.1.3.0.0 Build JDEVADF12.1.3.0.0GENERIC_140521.1008.S
  • Oracle Database 12.1.0
  • Oracle JDBC 12.1.0
  • Mac OS X Version 10.10
