Mapping a Flat Data Model to a Relational Data Model Using Dozer and Proxies

Introduction

I’m working for an insurance company which uses the AL3 form standard along with the Acord XML standard for the insurance business.  Acord has created a mapping spreadsheet which maps AL3 fields to Acord XML fields.  This is great because it allows us to communicate business-to-business with our carriers.  But there is a problem with the mapping.  [no proprietary information is shared in this article]

You map MailingAddressStreet to /customer/address[1].street

Do you see the problem?  A customer can have zero to many addresses of types mailing and home.  This means that an address has a code, ‘H’ or ‘M’, to designate which one it is.  It also means that a customer’s list of addresses must have an Address with a code of ‘M’ at index 1 for this mapping to work.  And this is how the current version of our software works.

The scale of our system is small right now, so the problem is not yet apparent, but it must scale to a huge size by the end of our implementation.  I believe the chances of a vulnerability are high with this mapping design, and so I’ve created an alternative.

Technology

For my proof of concept I’ve chosen a simplified technology stack to test my design using Spring MVC, Spring JPA, Hibernate, SQL Server, JSP, and Dozer. You will recognize the first five, but you may not recognize the last.  Dozer is a Java Bean-To-Bean mapping API which we chose specifically for AL3 to Acord XML mapping.  It is elegantly simple.

Using an XML file (or annotations) I can map each field of a DTO (Data Transfer Object) to a JPA Entity with a single command:  mapper.map(DTO,Entity) and back mapper.map(Entity, DTO).  It’s a marvelous time saver.  Mapping from the web to the database is painstaking, low-level code made easy with Dozer and JPA.  I’m 100% sold on these technologies.

Dozer’s Solution

The Dozer folks have proposed exactly what the Acord folks propose:

dozer-index-mapping

Look familiar?  In this example userName1 must be first in the list.  But what if nothing in your code guarantees that ordering?  What if your business logic orders them differently?
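Since the dozer-index-mapping image may not render here, the gist of Dozer’s indexed mapping is a field entry like this (reconstructed from the Dozer documentation; the class and field names are illustrative):

```xml
<mapping>
	<class-a>com.example.UserGroup</class-a>
	<class-b>com.example.UserGroupPrime</class-b>
	<field>
		<a>userName1</a>
		<b>userNames[0]</b>
	</field>
</mapping>
```

The [0] hard-codes the position: whatever object happens to sit at index 0 of userNames receives userName1, regardless of what it actually is.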

My goal is to solve the AL3 to Acord XML multiple Address problem and make the solution foolproof while continuing to take advantage of the Dozer technology.

My Solution

My team has been looking at problems like this for many months, and one day a member of my team proposed this:  make the JPA Entity mirror the DTO on the surface and break up the data into objects underneath.

This means that if the DTO bean has methods called setMailingStreet and getMailingStreet, then the Entity, even though it does not have these properties, should have the same methods.  Dozer looks at the public getters and setters to map; it doesn’t care what you do with the data once they are called.  We would be transiently augmenting the entity, which simply means that we add these new methods to the entity and they do not persist anything.  We do this with the @Transient annotation on the methods.  These transient methods are proxies for a second layer of conversion.

The Code

To use this technology stack there is much configuration to do, and I will not cover that.  Instead, I will give you the essential snippets to demonstrate this design, starting with the form.

The app is simple.  We have a Customer and he/she has Addresses.  I want to present the customer with only two kinds of addresses:  home and mailing.  In the database, Customer has a relationship with Address that will allow for zero to many addresses per customer, and the Entity is the same way.  To the user, it’s all one record.  To the database, it is many records;  one-to-many.

The Customer app is simply a CRUD (Create Read Update Delete) app.

The form is very simple and looks like this

customer-form

This maps to a DTO bean.  Here’s a snippet to give you the idea.

public class CustomerDTO
{
	private int id;

    private String firstName;

    private String middleInitial;

    private String lastName;

    private Integer homeAddressId;

    private String homeStreet;

    private String homeCity;

    private String homeState;

    private String homeZip;

    private Integer mailingAddressId;

    private String mailingStreet;

    private String mailingCity;

    private String mailingState;

    private String mailingZip;

    public int getId()
    {
        return id;
    }

    public void setId(int id)
    {
        this.id = id;
    }

    public String getFirstName()
    {
        return firstName;
    }

    public void setFirstName(String firstName)
    {
        this.firstName = firstName;
    }

    public String getMiddleInitial()
    {
        return middleInitial;
    }

    public void setMiddleInitial(String middleInitial)
    {
        this.middleInitial = middleInitial;
    }

    public String getLastName()
    {
        return lastName;
    }

    public void setLastName(String lastName)
    {
        this.lastName = lastName;
    }

    public String getHomeStreet()
    {
        return homeStreet;
    }

    public void setHomeStreet(String homeStreet)
    {
        this.homeStreet = homeStreet;
    }
...

As you can see, the DTO maps precisely to the form.  However, the Customer entity bean only has firstName, middleInitial, lastName, id, and a list of Address beans.  This is not a complete mapping.

The Dozer mapping file looks like this.

<?xml version="1.0" encoding="UTF-8"?>
<mappings xmlns="http://dozer.sourceforge.net"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://dozer.sourceforge.net
          http://dozer.sourceforge.net/schema/beanmapping.xsd">
	<mapping>
		<class-a>com.mycompany.customer.dto.CustomerDTO</class-a>
		<class-b>com.mycompany.customer.domain.model.Customer</class-b>
	</mapping>
</mappings>

The way Dozer works is that it inspects each object for matching getters and setters and transfers the data for the ones which match.  Anything that doesn’t match is ignored.  The mapping can be done explicitly when the field names do not match, but what do you do when the field doesn’t exist at all, as in the case of the address fields?

That’s when my proxy pattern comes into play.  I add those fields explicitly to the Customer bean.  I take each getter and setter from the DTO and add them to the Customer bean with the @Transient annotation so JPA knows not to mess with them.

This is where it gets tricky because we have to take a zip or a street or a state or a city and put it into an Address object with a matching code.  For example, a mailing zip needs to be in an Address object with a code of ‘M’ for mailing.  In this application there can be only one mailing address for a customer.  That takes a bit of doing.  Here’s one way of doing it.

@Transient
public String getMailingZip()
{
	Address mailingAddress = null;
	if (addressList != null)
	{
		for (Address address : addressList)
		{
			if (AddressCode.MAILING.equals(address.getCode().trim()))
			{
				mailingAddress = address;
			}
		}
	}
	if (mailingAddress != null)
	{
		return mailingAddress.getZip();
	}
	return null;
}

@Transient
public void setMailingZip(String mailingZip)
{
	if (addressList == null)
	{
		addressList = new ArrayList<Address>();
	}
	for (Address address : addressList)
	{
		if (AddressCode.MAILING.equals(address.getCode().trim()))
		{
			// Found the existing mailing address; just update the zip.
			address.setZip(mailingZip);
			return;
		}
	}
	// No mailing address yet, so create one with the 'M' code.
	Address mailingAddress = new Address();
	mailingAddress.setCustomerId(this);
	mailingAddress.setCode(AddressCode.MAILING);
	mailingAddress.setZip(mailingZip);
	addressList.add(mailingAddress);
}

I’ll step you through it.

  1. To get a mailingZip we first need to find the mailing address.  The Customer bean has a list of Address beans.  In this case there is one with an address code of ‘H’ and one with ‘M’.  We want ‘M’ for mailing.
  2.  Once we have that object we can get the zip and return it.  Dozer thinks there really is a mailingZip property in Customer and so it maps it.

To set a zip we do a similar thing.

  1. We make sure that there is an addressList.  If this is a new customer there may not yet be one.  If there isn’t, then we create a new one.
  2.  Then we create a new Address, set the zip and add it to the list.
  3.  If there is already a list then we search for the ‘M’ code.  If we find it, then we set the zip, if we don’t then we create a new Address with the ‘M’ code and set the zip.

Dozer will call all of the setters or getters depending on which object is the target and which the source.  Here is an example of how to transfer from the DTO to the entity bean for editing.

To edit a customer, the method makes the customerDTO the source; the destination is an empty Customer for a new customer, or the existing Customer entity for an existing one.

New Customer

customer = mapper.map(customerDTO, Customer.class);

Existing Customer

mapper.map(customerDTO, customer);

However, if we need to display a customer, we need to do the opposite.

CustomerDTO dto = mapper.map(customer, CustomerDTO.class);

What all of this means is that there are two mappings happening here.  Dozer maps to and from the proxy methods and then the proxy methods are smart enough to map to the correct Address objects.

As a bonus, I’ll show how I can further abstract the proxy mapping so that all of that code doesn’t have to appear for every method.  I’m sure I’ll eventually make it so that it can be used for any such entity, but this is good enough for now.

@Transient
public String getMailingZip()
{
	return (String) getAddressPart(AddressCode.MAILING, "getZip", getAddressByCode(AddressCode.MAILING));
}

@Transient
public void setMailingZip(String mailingZip)
{
	this.setAddressPart(AddressCode.MAILING, mailingZip, "setZip",String.class);
}

@Transient
private Address getAddressByCode(String code)
{
	if (addressList != null)
	{
		for (Address address : addressList)
		{
			if (code.trim().equals(address.getCode().trim()))
			{
				return address;
			}
		}
	}
	return null;
}
@Transient
private Object getAddressPart(String code, String methodName, Address addr)
{
	Class<Address> addrClass = Address.class;
	Method getMethod = null;
	Object rtn = null;
	if (addr == null)
	{
		addr = new Address();
	}
	try
	{
		getMethod = addrClass.getMethod(methodName, new Class[]{});
		rtn = getMethod.invoke(addr);
	} catch (NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | NullPointerException | InvocationTargetException e)
	{
		// If reflection fails, fall through and return null.
		e.printStackTrace();
	}
	return rtn;
}
@Transient
private void setAddressPart(String code, Object addressPart, String methodName, Class dataType)
{
	Class<Address> addrClass = Address.class;
	Method setMethod = null;
	if (addressList == null)
	{
		addressList = new ArrayList<Address>();
	}
	try
	{
		setMethod = addrClass.getMethod(methodName, new Class[]{ dataType });
		for (Address existingAddr : addressList)
		{
			if (code.equals(existingAddr.getCode().trim()))
			{
				setMethod.invoke(existingAddr, addressPart);
				return;
			}
		}
		Address newAddr = new Address();
		newAddr.setCustomerId(this);
		newAddr.setCode(code);
		setMethod.invoke(newAddr, addressPart);
		addressList.add(newAddr);
	} catch (NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e)
	{
		// If reflection fails, log it and leave the address list unchanged.
		e.printStackTrace();
	}
}

First, look at the getter and setter for zip.  I’ve reduced each to one unique line of code.  All of the logic is now in getAddressByCode, getAddressPart, and setAddressPart.  So we have a lookup, a getter, and a setter which can be used for any part of any address.

Alternative

As I wrapped up this project, I thought of another possible way to do this.  Dozer is an extensible framework.  It allows you to add custom converters to the mapping that could do the same thing as what I’ve done here.  I like the notion of a custom converter because the DTO doesn’t need to know anything about the entity bean and vice versa.  My predecessor used custom converters, but they ended up doing the entire conversion, nearly removing Dozer from the picture completely.  I want Dozer to do the part it can do and have my solution do the rest.  That may end up being a cleaner solution with a clearer separation of concerns.  You may see another blog post on this.  But for now, my company has a working solution, and that may be all they care about.

Conclusion

To summarize, we’ve tricked Dozer into believing that it can map CustomerDTO to a Customer entity bean by adding transient proxy methods to the bean.  We’ve created an internal mapping that shifts the form data into a relational model between Customer and Address.  And we’ve protected the data by avoiding the fragile index-based mapping proposed by the Acord spreadsheet and by Dozer.

If you have suggestions or questions, please don’t hesitate to leave your comments.  I am, after all, just a Regular Average Java Programmer (RAJP).

org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency

Learning Spring MVC has been challenging for me.  I’ve encountered many problems along the way.  The error in the title is a problem I’ve encountered several times for several reasons, many of which have been mentioned on sites like Stack Overflow, but not all.

First, let me describe what I’m trying to accomplish.  I’m building a Spring MVC and Hibernate JPA application to demonstrate the best design for mapping flattened data to relational data using Dozer.  To accomplish this, I need several components.  The ones I’m concerned with in this post are the Spring components:  Controller, Service, and Repository.  The Repository is injected (autowired) into the Service and the Service is injected into the Controller.  Briefly, injection (@Inject or @Autowired) supplies an implementing class through an interface in order to hide implementation details.  I can inject a Service into my Controller to interact with the Repository, and in a JPA app, I can use the Repository to access the persistence layer.

This is only a fraction of what Spring does and yet it is very complex, especially the configuration.  The old way to configure a Spring app is through XML.  I’m using a combination of XML and annotations.  There are many things that can go wrong when configuring a Spring MVC app, but there is one error that I’ve encountered several times.

org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency

This means that upon attempting to inject a Spring component, no implementation was found.  Here are the reasons why this can happen.

No Implementation

The first reason is the most fundamental:  you didn’t bother building the implementation class.  You built the interface, autowired it, but never created a class which implements the Service, Component, or Repository interfaces you’ve built.

You must build a class which has the code “implements MyRepository” and implements all of its methods.  Then you must annotate it with, for example, @Repository.

No Annotations

Speaking of annotations:  a common cause of this error is that you’ve implemented the interfaces but you haven’t annotated the implementations.  A controller should have @Controller at the class signature.  The service should have @Service or @Component.  The repository should have @Repository or @Component.  These are hints to the Spring component scanner to load them when the server starts up.

No Scanner

Spring needs to know where to look for your Spring implementations.  You’ll have a file called [mydispatcherservlet]-servlet.xml.  In it, you must have a <context:component-scan> element.
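A minimal sketch of that element (assuming the context namespace is declared on the file’s root beans element):

```xml
<context:component-scan base-package="com.mycompany.customer" />
```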

This signals to Spring that the components are in com.mycompany.customer or its child packages.

Scanner base-package Not Catching Spring Components

You may have the component scanner, but your base package may not include all of your components.  For example, you might have a repository in com.mycompany.customer.repository, but your scan package might be com.mycompany.customer.controller.  It will find the controller, but not the repository.

Another App Causing the Error

This is the one which I spent several hours debugging.  It’s not common, and you won’t find it listed on other sites because it’s not a Spring-specific issue; it is an app server/servlet container issue.

I’m using Tomcat with Eclipse.  In Eclipse, you can add or remove your web projects to the Tomcat server, but there may be other apps in the Tomcat webapps directory which never show up in the Eclipse Tomcat server node.  I was having the above error and I tried all of the above solutions with no luck.  I began to wonder if something else in Tomcat could be spitting out the errors, so I removed my app from Tomcat.  I was surprised to see that the error continued.

Eventually, though, I looked in the Tomcat webapps directory and found a discarded version of the app.  I deleted it, and the errors ceased.  I redeployed my new version and it was fixed.  Good grief!

Maven Project – statement not supported in -source 1.5

Recently, I was working on a project and I coded a multi-catch statement because…you know…they’re cool.  Multi-catch was introduced in Java 1.7 and is therefore not supported by a compiler below 1.7.  Duh-doy!  But I’ve got it covered, right?  My Eclipse IDE is configured to use a 1.8 JDK.  My project is configured to compile with 1.8.  So why am I getting this message?

multi-catch statement is not supported in -source 1.5
(use -source 7 or higher to enable multi-catch statement)

I’ve been using Maven with Eclipse for 4 years and I’ve never run into this problem because a certain bit of config was always there and I didn’t know it.  I found this statement at maven.apache.org:

Also note that at present the default source setting is 1.5 and the default target setting is 1.5, independently of the JDK you run Maven with. If you want to change these defaults, you should set source and target as described in Setting the -source and -target of the Java Compiler.

This is how independent Maven is.  It doesn’t care a lick about your IDE configuration or your local JDK.  If you do not tell it to compile at a particular version, it will default to 1.5.  If you’re having this problem, the solution is very simple.  Add the following to the plugins tag of your project pom.xml.


<plugin>
	<artifactId>maven-compiler-plugin</artifactId>
	<version>3.1</version>
	<configuration>
		<source>1.8</source>
		<target>1.8</target>
	</configuration>
</plugin>

Now you can multi-catch, lambda, and stream with the cool kids!

9 Steps for dynamic filtering and paging of a JPA Entity

Folks, this isn’t by any means a show of brilliant software engineering, but I didn’t find anything exactly like it in the blogosphere.  I have a really basic scenario to solve.  I have a jQuery grid in a JSP with filterable columns which are mapped via Stripes MVC and Spring to a JPA Entity.

The AJAX call from jQuery gives me name/value pairs to filter.  It also gives me the row number to begin the page with and how many rows to retrieve.  From this, I can filter the entity list without hard-coding the Predicates or building a JPQL string.  The following method goes in a Spring Repository or wherever you keep business logic for entities.

Step 1:  #44 – Get a CriteriaBuilder from the entity manager

Step 2: #47 – Get a Root of type <YourEntity>

Step 3: #49 – declare a collection of Predicates

Step 4: #60 – For each name/value pair, instantiate a new Like Predicate and add it to the list.  Use the % sign around the data for wildcard searching

Step 5: #62 – Add all predicates to the query’s where clause

Step 6: #63 – create TypedQuery from the CriteriaQuery

Step 7: #70 – set the first row for your page

Step 8: #71 – set your max rows for your page

Step 9: #72 – retrieve the data

FilterPaging
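Since the embedded FilterPaging source (with the line numbers referenced above) may not be visible here, here is a sketch of the nine steps.  The injected EntityManager, the Customer entity, and the name/value filter map are all illustrative assumptions, not the exact code from the gist:

```java
public List<Customer> filterAndPage(EntityManager entityManager,
		Map<String, String> filters, int firstRow, int maxRows)
{
	// Step 1: get a CriteriaBuilder from the entity manager
	CriteriaBuilder cb = entityManager.getCriteriaBuilder();
	CriteriaQuery<Customer> cq = cb.createQuery(Customer.class);

	// Step 2: get a Root of type Customer
	Root<Customer> root = cq.from(Customer.class);

	// Step 3: declare a collection of Predicates
	List<Predicate> predicates = new ArrayList<>();

	// Step 4: one Like predicate per name/value pair; the % signs
	// around the value give wildcard searching
	for (Map.Entry<String, String> entry : filters.entrySet())
	{
		predicates.add(cb.like(root.<String>get(entry.getKey()),
				"%" + entry.getValue() + "%"));
	}

	// Step 5: add all predicates to the query's where clause
	cq.where(predicates.toArray(new Predicate[0]));

	// Step 6: create a TypedQuery from the CriteriaQuery
	TypedQuery<Customer> query = entityManager.createQuery(cq);

	// Steps 7 and 8: set the first row and the page size
	query.setFirstResult(firstRow);
	query.setMaxResults(maxRows);

	// Step 9: retrieve the data
	return query.getResultList();
}
```

The method assumes the usual javax.persistence and javax.persistence.criteria imports and an entity whose filtered columns are String properties.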

Add a Date Picker to the Netbeans Swing Controls Palette

For small Swing applications, I like to use the Netbeans GUI Builder.  The problem is that there is no date picker.  And today, I really needed a date picker.  Fortunately, there is a solution.  Remember SwingX?  It’s built into Netbeans, and it has a good date picker called JXDatePicker.  Here’s how to get it on your Swing Controls palette so that you can drag it onto your form.

By “palette”, I mean the view that pops up on the right side of the IDE when you use the GUI Builder.  Notice that the Swing Controls palette doesn’t have a date picker of any sort (shame on you Netbeans! This is basic stuff!).

The goal is to get JXDatePicker onto that palette.

1.) Pull up the Palette Manager for Swing/AWT Components.

2.)  Click “Add from JAR”

3.) Browse to [NETBEANS HOME]\ide\modules\ext and select swingx-0.9.5.jar

4.)  This will bring up a list of all the components available for the palette.  Lots of goodies here!  Select JXDatePicker.

5.)  Select Swing Controls

It will immediately show up on your Swing Controls palette.  All that’s left is to drag it onto your form!

Integrating CA Software Change Manager with a Java EE application

CA’s Software Change Manager is a tool that we use to manage software and documents.  Although much of our development staff is using Git, many of our engineers still use SCM (or Harvest as it used to be known).  The primary way our web applications use Harvest is as a version-controlled repository for documents.  Our web apps can link directly to the latest versions of documents and can even allow our users to check out and modify documents through the web.  I’d like to share some of the techniques we’ve used to build a relationship between Java EE applications and Harvest.

The obvious choice for using Harvest with Java is the JHSDK (Java Harvest SDK).  But through trial and error, I’ve learned that the JHSDK fails in Java EE because its classes are not thread safe.  At the CA World conference a few years ago, I had the privilege of consulting with the father of Harvest (can’t remember his name) about this problem.  He said that I had two options:  make the API thread safe, or just make system calls to the command line interface (the Harvest CLI).  Not knowing exactly how to make the API thread safe, I chose the latter.  It has never failed us.

Things to consider when designing a Harvest-related web app:

1.  The server on which your app server resides must have a Harvest client installed.   You’re essentially creating an automated Harvest user in your web app.

2.  Because you’re using a CLI instead of JHSDK, you have to retrieve errors and exceptions by reading logs.  Each Harvest command creates its own log file on the server.  So you have to manage log files in real time.  We create a new log file with a new random name with every command.  After the command runs, we check the log messages so that we can send the messages (including errors) back to the browser.  And finally, we delete the log file.  This is handled differently for asynchronous calls (see #5).

3.  Each Harvest command must contain user credentials.  When the user logs in, you could capture his/her username and password, but this isn’t very secure.  Ultimately, you want to use an auth file on the server.  This can be generated with a command using the username and password one time and never again (unless the password changes).  You name the auth file after the user name and then you can reference it anytime you need it.  The svrenc command looks like this:

String[] command = new String[11];
int i = 0;
command[i++] = "svrenc";
command[i++] = "-f";
command[i++] = userName + ".auth";
command[i++] = "-usr";
command[i++] = userName;
command[i++] = "-pw";
command[i++] = password;
command[i++] = "-dir";
command[i++] = siService.getAuthRootPath();
command[i++] = "-o";
command[i++] = log;

4.  Building commands can be error prone if it’s done in one big String.  Fortunately, Java’s Runtime.exec takes a String array.  It’s best to build your commands this way (see the svrenc example above).
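For instance, the array-building can be wrapped in a small helper.  This is my own sketch, not code from the original app; the class name is hypothetical, and draining stdout is a detail I’ve added so the child process cannot block on a full pipe:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class CommandRunner
{
	// One array element per argument, so paths or names containing
	// spaces never need manual quoting.
	public static String[] buildCommand(String executable, String... args)
	{
		String[] command = new String[args.length + 1];
		command[0] = executable;
		System.arraycopy(args, 0, command, 1, args.length);
		return command;
	}

	// Runtime.exec(String[]) runs the command; we drain stdout so the
	// child process cannot block, then wait for the exit code.
	public static int run(String[] command) throws Exception
	{
		Process process = Runtime.getRuntime().exec(command);
		try (BufferedReader reader = new BufferedReader(
				new InputStreamReader(process.getInputStream())))
		{
			while (reader.readLine() != null)
			{
				// discard; the real app would log each line
			}
		}
		return process.waitFor();
	}
}
```

CommandRunner.buildCommand("svrenc", "-f", userName + ".auth") produces the same shape of array as the example in point 3.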

5.  Commands can be run asynchronously for long-running processes by putting the command into a thread.  As the thread runs it writes the progress of the log file output into the database and the client polls it with AJAX calls.  That way you can show progress on the process.
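A pure-Java sketch of that asynchronous pattern (my own illustration: progress is kept in an in-memory list here, whereas the real app writes it to the database for the AJAX poller to read):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class AsyncCommandRunner
{
	// Thread-safe list standing in for the database progress table.
	private final List<String> progress = new CopyOnWriteArrayList<String>();

	// Run the long-running command (e.g. a Harvest CLI invocation) in
	// its own thread, recording progress markers as it goes.
	public Thread runAsync(final Runnable command)
	{
		Thread worker = new Thread(new Runnable()
		{
			public void run()
			{
				progress.add("STARTED");
				command.run(); // exec the command and tail its log file
				progress.add("FINISHED");
			}
		});
		worker.start();
		return worker;
	}

	// What the AJAX endpoint would return on each poll.
	public List<String> poll()
	{
		return progress;
	}
}
```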

6.  When the user needs to view a file, it’s easier and quicker to use SQL to get the blob and stream it out to the browser than to use the hco command.

7.  When the user needs to check out a file (the hco command), you’re checking it out to the server’s file system and then streaming a copy of that file back to the web browser.

8.  To check in a file, upload the file through the web browser to the exact spot where it was checked out, then run the hci command on it.

9.  Finally, use the documentation.  It comes with your installation and is called “CA Software Change Manager Command Line Reference Guide”.  Everything you can do with the SCM client can be done via the CLI, and therefore can be done in a web app.

Integrating your web apps with CA SCM can be a very powerful asset to your users.  We allow users to list, view, and manage files, promote/demote packages, edit forms, create comments, and approve/deny packages.  We had hoped that CA would ship a decent web version of SCM, but it never happened, so we built the parts that we need.  We’ve been very successful, and using CLI calls has been very reliable.

Testing Web Services in Glassfish 3

In Glassfish 2, the admin console had a separate node for web services.

Once there you could click on one of the services and get this page.

Notice the “Test” button.  That will take you to a page that will allow you to test all of your service operations.  Handy.  But in Glassfish 3, there is no node for web services.  I was dismayed when I found this.  How will I test my service on the server?  No worries.  It’s still there.  It’s just moved, and it takes a few more clicks.

Click on the Applications node and navigate to the web application that contains the web service you want to test.  You’ll find this page detailing the application and its modules.

You’ll notice that under the action column you see two rows with links for View Endpoint.  When you click on that link, you will find the link for testing your service.  It’s the same page as in Glassfish 2.  It looks like this.

The result of running your operation is a display of both the SOAP Request and the SOAP Response.

This testing mechanism in Glassfish is invaluable.  Take advantage of it.

For an in-depth read on web services in Glassfish, check out Java expert Arun Gupta’s article Creating and Invoking a Web service using GlassFish in NetBeans, IntelliJ, and Eclipse