In my previous post I explained why we should write more unit tests for our database code. Databases contain crucial business logic that is often developed and maintained in tandem with the code that depends on it. If we want to validate complex SQL statements in isolation and in detail, that means writing many more automated tests than we normally do. Testing against a real database server (not an in-memory emulation) is by nature slower and more cumbersome to set up and run than basic unit tests. Here I will explain some practical strategies to make that process faster and more manageable.
The tips and tricks fall into two categories. First, I’ll look at ways to arrange your code under test so that the database is only spun up and accessed when it is actually needed. Second, I’ll look at how to prepare your database container with schemas and data so that setup time is kept to a minimum.
Part 1: cleaning up your code
Look at the following sample code:
```java
// Retrieve all persons, then run follow-up statements depending on their state.
List<Person> listOfPersons = dbUtil.executeAndRetrieveAsEntities("RAW SQL HERE", Person.class);
for (Person person : listOfPersons) {
    if (person.isOver25()) {
        dbUtil.runSQL("RAW SQL HERE");
        if (person.isFixedEmployee() && person.tenureInYears() < 10) {
            dbUtil.runSQL("RAW SQL HERE");
        }
    }
}
```
Working against the standard JDBC interfaces with raw SQL statements is verbose. That’s why many lead developers have foisted their own awesome DBUtil
library on the team to execute SQL literals and retrieve results as a list of entities. Never mind the wheel-invention effort here – dozens of venerable libraries can already do this better and with fewer bugs. The real problem is mixing two languages in one body of code. The dbUtil handle is just a boilerplate reduction device here. The raw SQL is still there. We still can’t test the complex individual statements separate from the simple yet crucial control logic captured in the if-statements, which depend solely on the state of the person object, not on the database.
Sure, we can test this control logic fine if we mock out the calls to the database. The mock for dbUtil
returns a prepared list of person objects and we can verify that the update statements are invoked correctly for the two different conditions. That unavoidably leaves the SQL untested. If we want to test the execution of these statements, we need to run the entire body of the for loop, this time against a real database. That test needs to set up the conditions for all three execution paths (condition 1, conditions 1 and 2, or neither), as well as verify what happened to the database state after the void update statements have executed. It can be done, but we are of necessity testing both the Java and SQL realms here. That’s hardly the lean unit testing we’re looking for.
The solution is clear: to test our SQL code statement by statement it has to be properly extracted from the code that uses it. That’s not a difficult or risky refactoring. You don’t need an extra framework for this; you can keep whatever you have. Collect all your SQL statements into one or more functionally distinct classes with a single method per statement. You can put an interface on top of it if you’re feeling orthodox, but mocking frameworks like Mockito or JMockit don’t require it. You should also extract interactions with ORM frameworks, whether you use a criteria builder or a DSL like Hibernate Query Language. They are all turned into generated SQL behind the scenes.
```java
interface PersonDAO {
    List<Person> retrievePersons();
    void processPersonsOver25();
    void processJuniorFixedEmployees();
}
```
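After the extraction, the caller could look something like the sketch below (PersonProcessor and its process() method are hypothetical names, used only for illustration): only the person-state checks remain, while every SQL statement now sits behind the PersonDAO interface.

```java
// Illustrative shape of the control logic after extraction: no raw SQL left,
// only the checks on the person object and calls to the DAO.
class PersonProcessor {

    private final PersonDAO personDAO;

    PersonProcessor(PersonDAO personDAO) {
        this.personDAO = personDAO;
    }

    void process() {
        for (Person person : personDAO.retrievePersons()) {
            if (person.isOver25()) {
                personDAO.processPersonsOver25();
                if (person.isFixedEmployee() && person.tenureInYears() < 10) {
                    personDAO.processJuniorFixedEmployees();
                }
            }
        }
    }
}
```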
Now we can test our control logic as an isolated unit and test each individual statement to whatever level of precision we see fit. At such a granular level it’s also easier to bring the tables into a desired state and validate the outcome than in the non-extracted scenario, where you’d need to consider the effects of three statements instead of one. You would think a strategy like this makes perfect sense and that everybody uses it. Well, they don’t. You can bet that someone who is careless about liberally mixing SQL into code with high cyclomatic complexity is not going to be meticulous about testing either. My hunch is usually right: the more difficult some code is to test, the less likely it is to be tested properly, if at all.
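To make that concrete, here is a minimal sketch of a control-logic test with Mockito, reusing the hypothetical PersonProcessor from above. No database is started at all; both the DAO and the person are mocked.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

class PersonProcessorTest {

    @Test
    void juniorFixedEmployeeTriggersBothStatements() {
        PersonDAO dao = mock(PersonDAO.class);
        Person person = mock(Person.class);
        when(person.isOver25()).thenReturn(true);
        when(person.isFixedEmployee()).thenReturn(true);
        when(person.tenureInYears()).thenReturn(3);
        when(dao.retrievePersons()).thenReturn(List.of(person));

        new PersonProcessor(dao).process();

        // Both conditions hold, so both update statements must have been invoked.
        verify(dao).processPersonsOver25();
        verify(dao).processJuniorFixedEmployees();
    }
}
```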
Simpler and more efficient
The benefit of extracting all your SQL statements into atomic methods is twofold. First, it makes tests conceptually more sound and easier to reason about: the control logic and the technically very different retrieval/update functions can now be tested separately, as they should be. Second is the matter of efficiency: since the database is not involved in the control logic, we don’t have to access it at all. With hundreds of tests refactored in this way, this can add up to significant time savings.
Now let’s see where else we can speed things up. I showed you how we can save time by only rolling out the database for our data access objects (DAOs). An additional strategy, if you have many DAO tests, is to keep them in separately versioned modules. Keeping all database interactions and their targeted tests in their own directory or sub-project is good clean-architecture practice, but if the DAO tests fall under the same parent source project, a full build will still run them all, whether or not anything has changed.
To counteract this needless running of tests there is another, admittedly more contentious, approach: develop your DAO code and tests in separate modules, that is, not as children of the parent that produces the releasable artefact. Instead they are imported as binary dependencies and treated like any other in-house library, with their own versioning and life cycle. Using multiple modules makes sense because large databases usually separate their business functions into schemas, each at its own stage in the life cycle. Some parts may be under regular change while others are hardly ever touched. Separating the DAO code along these lines prevents running tests on code that has not changed and is not affected by changes elsewhere in the database. Building the end product will not run any of the DAO unit tests; it will involve the database only for its higher-level integration tests, and these will be smaller in number. This approach feels most natural when the database is of the monolithic legacy type: your new application under development is just one of its many clients, and developing the intricate interactions with it as a reusable library makes more sense than building them as an integral part of the application itself.
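As a rough sketch of what this looks like with Maven (the coordinates below are invented), the application simply declares the separately released DAO module as a dependency; its database tests only run when that module itself is built and released.

```xml
<!-- Hypothetical coordinates: the DAO module has its own version and release
     cycle, so building the application does not run its database tests. -->
<dependency>
    <groupId>com.example.hr</groupId>
    <artifactId>hr-person-dao</artifactId>
    <version>2.3.1</version>
</dependency>
```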
Part 2: preparing the container
The second category of time savings is to be found in how you run and prepare your test database. There are various options to choose from, but in my opinion the best compromise between reliability and speed is an optimised containerised database that is managed by the test framework. Mind you, this is not the fastest option. Having your test execution rely on an externally provisioned database that remains up after the test finishes is always faster than starting and discarding a locally running container. But tests can leave the database in a different state from the one they found it in, making tests unreliable in principle, if not in practice. And there’s the added problem of concurrency when parallel executions try to truncate the same tables to set up their own fixture data. Another objection to an external database is that a simple build from source (git pull, mvn install) is no longer possible without prior setup.
There are plenty of good tutorials on how to include a running container in your test lifecycle using the Testcontainers framework and its extensions for JUnit 5. The challenge is preparing an image that is ready to go. The image from which you start your container should already be initialised with all the schemas, tables, procedures and other objects that it needs. It makes sense to make such a generic image available in your in-house repository, ready to be pulled by the tests that need it, rather than having the project itself handle the Docker build. Setting up a large database from a dump script, even without data, is not cheap, but you only need to do it when the schema has changed. This housekeeping task can be performed as part of the regular procedure that handles upgrades: as soon as the production database is changed, a fresh test image is produced, tagged with an appropriate version number or change date and pushed to the repository.
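For illustration, wiring such a pre-built image into a JUnit 5 test with Testcontainers could look roughly like this. The registry path, the tag and the choice of PostgreSQL are assumptions; substitute whatever your housekeeping procedure publishes, and JdbcPersonDAO stands in for your own DAO implementation.

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
class PersonDAOIntegrationTest {

    // An image already initialised with all schemas and tables, published by the
    // upgrade/housekeeping pipeline and tagged with a version or change date.
    @Container
    static final PostgreSQLContainer<?> db = new PostgreSQLContainer<>(
            DockerImageName.parse("registry.example.com/testdb/hr-schema:2024-06-01")
                    .asCompatibleSubstituteFor("postgres"));

    @Test
    void retrievesPersons() {
        PersonDAO dao = new JdbcPersonDAO(db.getJdbcUrl(), db.getUsername(), db.getPassword());
        // assert against the fixture data baked into the image
    }
}
```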
A final tip is to consider populating certain tables with initial data that you expect will be required by multiple tests. This is more about speed than about avoiding duplication in setup code, since that can be centralized. If your test suite does not have to execute numerous inserts to bring the database into a desired state, so much the better. Bear in mind, however, that all such fixture data is a potential maintenance liability. Especially in the early stages of development, tables tend to sprout new columns organically, all of which have to be populated. Keep the number of rows to the absolute minimum required by your test; a list of one is still a valid list. At this stage we are testing the logic of the SQL code. Real-world concerns of performance belong at a later stage in the pyramid, and for those tests you probably want an actual production copy on commensurate hardware, not the Docker instance on your developer laptop.
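If you do add per-test fixture data on top of the baked-in tables, a single row usually suffices. A minimal sketch (the table and column names are invented):

```java
import java.sql.Connection;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.SQLException;

final class PersonFixtures {

    // One row is enough to exercise the SQL logic; add more only when a test
    // genuinely needs them.
    static void insertSinglePerson(Connection connection) throws SQLException {
        String sql = "INSERT INTO person (id, name, birth_date, contract_type) VALUES (?, ?, ?, ?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setLong(1, 1L);
            statement.setString(2, "Test Person");
            statement.setDate(3, Date.valueOf("1990-01-01"));
            statement.setString(4, "FIXED");
            statement.executeUpdate();
        }
    }
}
```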