Previously published on DZone. Date/time logic in code is where the messiness of the real world upsets the relatively straightforward rules of the digital realm. Blame the bewildering hodgepodge of edge cases on the movement of celestial bodies and Pope Gregory XIII (of the Gregorian calendar), but deal with it you must. I’m sure you all know that the 31st of December can fall in week 52 or 53, while the 1st of January can be in week 0 or 1, and that you know these rules by heart for various countries (oh yes!). I’m also confident you can hand-code the logic to calculate the difference in seconds between two timestamps that span multiple time zones as well as a jump in daylight saving time.
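To see why that last calculation is trickier than it looks, here is a minimal sketch using `java.time` (the dates and zones are merely an illustration):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;

public class DstDifference {
    public static void main(String[] args) {
        ZoneId amsterdam = ZoneId.of("Europe/Amsterdam");
        // On 2023-10-29 Amsterdam falls back from CEST to CET at 03:00,
        // so that calendar day lasts 25 hours, not 24.
        ZonedDateTime before = ZonedDateTime.of(2023, 10, 28, 12, 0, 0, 0, amsterdam);
        ZonedDateTime after  = ZonedDateTime.of(2023, 10, 29, 12, 0, 0, 0, amsterdam);

        // Measured on the instant timeline: 90000 seconds, not 86400.
        System.out.println(ChronoUnit.SECONDS.between(before, after));

        // Across zones, the offset difference is folded in as well:
        ZonedDateTime newYork = after.withZoneSameLocal(ZoneId.of("America/New_York"));
        System.out.println(ChronoUnit.SECONDS.between(before, newYork)); // 108000
    }
}
```

The point is that `ChronoUnit` measures elapsed time on the instant timeline, so the 25-hour day and the offset gap are handled for you; hand-rolled wall-clock arithmetic would get both wrong.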
In a room full of architects, no two will agree on their exact job description, as the joke goes. In a similar vein, someone on our team had a refreshing solution to another persistent bone of contention: how do you define an integration test? Don’t try to reconcile differing opinions; just stop using the term altogether. I liked it very much. Rather than thinking in terms of unit and integration tests, it’s more helpful to distinguish between tests that validate code and tests that validate public interfaces, and to recognize that within each category the scope varies per individual test.
Previously published on DZone. There are good reasons to give key stakeholders the opportunity to officially sign off on a new software release. We need some formal approval from the people who commissioned it, or at least from their delegates. This last stage prior to release is commonly called the user-acceptance test (UAT) and is executed in a dedicated UAT environment. It’s an indispensable stage, but treating it as the final step in testing is problematic for several reasons.
Let me start with a car example. Dealerships are generous with free test drives for the same reason that clothing stores let you try on three different shirts: to let the endowment effect do its dirty work. Wearing (or driving) something makes it feel like you already own it. It gives you a taste of the look and feel and acts as a catalyst for closing the deal. It’s not about really testing the vehicle; they expect it back unscratched. Toyota doesn’t need its customers taking an active part in its QA process.
Previously published on DZone. However long you work in software, you always feel late to the party. You encounter some seemingly cutting-edge new tool only to learn it has been around for decades, sometimes inspired by research papers from the 1970s. Still, you can’t keep up with everything and have a life. Property-based testing (PBT) is such an established technology, and it deserves more attention. Java has the Jqwik library, Scala has ScalaCheck, and Python has Hypothesis.
Check the links at the end for some good tutorials. Here I want to skip the nitty-gritty and focus on its killer feature: the ability to warn you when a change to production code is no longer sufficiently covered by the test suite.
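For readers who haven’t seen PBT before, here is a minimal sketch of what a Jqwik property looks like (the class and property names are mine):

```java
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class StringProperties {

    // jqwik generates many random strings per run and, on failure,
    // shrinks the counterexample to a minimal failing input.
    @Property
    boolean reversingTwiceIsIdentity(@ForAll String s) {
        String reversed = new StringBuilder(s).reverse().toString();
        return new StringBuilder(reversed).reverse().toString().equals(s);
    }
}
```

Instead of asserting on a handful of hand-picked examples, you state an invariant and let the framework hunt for inputs that break it.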
In my previous post I explained why we should write more unit tests for our database code. Databases contain crucial business logic that is often developed and maintained in tandem with the code that depends on it. If we want to validate complex SQL statements in isolation and in detail, that means writing many more automated tests than we normally do. Testing against a real database server (not an in-memory emulation) is by nature slower and more cumbersome to set up and run than basic unit tests are. Here I will explain some practical strategies to make that process faster and more manageable.
The tips and tricks fall into two categories. First, I’ll look at ways to arrange your code under test so that the database is only spun up and accessed when it is actually needed. Second, there are tips on how to prepare your database container with schemas and data, so that setup time is kept to a minimum.
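As a taste of the container-preparation side, here is a minimal sketch using Testcontainers (the image tag and init-script name are placeholders):

```java
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.utility.DockerImageName;

public final class DatabaseTestSupport {

    // One container for the entire test run: starting the container is the
    // expensive part, so share it instead of spinning one up per test class.
    private static final PostgreSQLContainer<?> POSTGRES =
            new PostgreSQLContainer<>(DockerImageName.parse("postgres:16-alpine"))
                    .withInitScript("schema-and-testdata.sql"); // seeds schema/data on first connection

    static {
        POSTGRES.start(); // Testcontainers reaps the container when the JVM exits
    }

    public static String jdbcUrl()  { return POSTGRES.getJdbcUrl(); }
    public static String username() { return POSTGRES.getUsername(); }
    public static String password() { return POSTGRES.getPassword(); }

    private DatabaseTestSupport() {}
}
```

Sharing one started container across the whole test run, rather than one per test class, is usually the single biggest time saver.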
The test pyramid is a well-known visualization aid for classifying software tests. As you climb the steps towards greater integration of components, you proceed from many detailed, fast, isolated tests towards fewer, slower, more global tests that validate the system as a whole. It makes good sense in principle, but it’s harder to explain how the stratification between unit, integration, and end-to-end tests is supposed to work. Opinions differ on which parts to integrate at which layer of the pyramid. You would think the database belongs in the upper strata, because it is expensive to set up and run. Yet it also makes sense to integrate it at the lower, detailed stage of the pyramid when it contains business-critical logic, the kind that needs rigorous validation you cannot leave to a few slow, global integration tests. But let’s start with a recap of the definition of a unit test.
SUMMARY: Any web service needs to export its public API if consumers are to make the best use of that service. A developer-friendly way to do so in the Java ecosystem is to package DTOs and endpoint interfaces in an API jar file and use the Retrofit framework to create type-safe clients for integration testing. This article discusses a complete example.
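To give a flavor of the approach, here is a minimal sketch (the `UserService` interface, `UserDto`, and base URL are hypothetical stand-ins for whatever ships in the API jar):

```java
import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.jackson.JacksonConverterFactory;
import retrofit2.http.GET;
import retrofit2.http.Path;

// Hypothetical DTO and endpoint interface as they might ship in the API jar.
record UserDto(long id, String name) {}

interface UserService {
    @GET("users/{id}")
    Call<UserDto> getUser(@Path("id") long id);
}

class UserClientExample {
    public static void main(String[] args) throws Exception {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("http://localhost:8080/") // assumed address of the service under test
                .addConverterFactory(JacksonConverterFactory.create())
                .build();

        // Retrofit generates a type-safe implementation of the interface.
        UserService service = retrofit.create(UserService.class);
        UserDto user = service.getUser(42L).execute().body();
        System.out.println(user);
    }
}
```

Because the interface and DTOs come straight from the provider’s API jar, the compiler checks your integration test against the real contract.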
If you’ve worked on enterprise Java projects you will remember good old Web Services Description Language, the XML-based format for describing network services from IBM and Microsoft. Maybe you still work with it? WSDL and its twin, XML Schema, are among those W3C standards that seasoned developers love to hate. Its specification files are not very human-readable, let alone human-writable. Fortunately, you don’t have to write them by hand: they can be generated by your server endpoint and fed straight into a code generator to create transfer objects (DTOs) and service stubs.
Summary: Pushing for test coverage is a bad incentive. It favors quantity over quality. You should instead prioritize your code with regard to test-worthiness. I discuss three such categories. First, there is code that processes unmanaged and possibly malicious (user) input; test this for robustness. Second, there is complex business logic with possible implementation errors; test this for correctness. Third, there are quirks where trusted frameworks run code that doesn’t perform at scale; run performance tests under realistic production loads to capture these.
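For the first category, a robustness test might look like this minimal sketch (`OrderParser` is a hypothetical class under test, defined inline to keep the example self-contained):

```java
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

// Hypothetical parser under test; a real one would live in production code.
class OrderParser {
    static String parse(String raw) {
        return raw == null ? "" : raw.trim();
    }
}

class InputRobustnessTest {

    @ParameterizedTest
    @ValueSource(strings = {
            "",                        // empty
            "   ",                     // whitespace only
            "\u0000\uFFFF",            // control characters and non-characters
            "'; DROP TABLE users; --"  // classic injection payload
    })
    void survivesHostileInput(String input) {
        // Robustness means: hostile input may be rejected, but never crashes.
        assertDoesNotThrow(() -> OrderParser.parse(input));
    }
}
```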
I love reviews. When it comes to consumer equipment and whether it’s worth your money, it’s a blessing to have so much free advice to choose from. Last year, during the pandemic, I bought a new 3000-euro Casio Celviano digital piano online, purely based on internet reviews. Not my preferred way to shop, but retail was locked down and I couldn’t try it out anywhere. Most reviews ranged from favourable to raving, so I found safety in numbers. Granted, reviews by retailers of their own wares are biased, but then again nobody goes to the trouble of writing a scathing review simply because they dislike digital pianos or musical instruments in general, or have it in for Casio as a company. Pianos are not controversial, in other words. What on earth does this have to do with code reviews? Bear with me.
There are plenty of opinions on how to do Test-Driven Development well. Today I want to discuss the drawbacks of the orthodox TDD school; let’s call it OTDD. Although as a development method it lacks a how-to bible like the Scrum Guide, some proponents defend their take with a religious level of conviction. To my understanding, OTDD has the following characteristics:
You start by writing a failing test against a skeleton implementation. Then you dress it up by implementing the scenario, making the test pass. Test and production code evolve strictly in tandem, and you typically have a one-to-one relationship between a test suite and a source file, usually a class.
It follows that all dependencies of a class under test must be replaced with test doubles (spies, stubs, or mocks). This applies not only to unmanaged, out-of-process dependencies like databases, network connections, and file handles, but also to any class in your own deliverable, as the sketch below illustrates.
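A minimal sketch of that last point, using Mockito (`OrderService` and `DiscountRepository` are hypothetical collaborators, defined inline to keep the example self-contained):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical collaborators; in a real project these are production classes.
interface DiscountRepository {
    double discountFor(String tier);
}

class OrderService {
    private final DiscountRepository repository;

    OrderService(DiscountRepository repository) {
        this.repository = repository;
    }

    double priceFor(String tier, double basePrice) {
        return basePrice * (1 - repository.discountFor(tier));
    }
}

class OrderServiceTest {

    @Test
    void appliesDiscountFromRepository() {
        // Under OTDD even this in-process collaborator gets a test double.
        DiscountRepository repository = mock(DiscountRepository.class);
        when(repository.discountFor("gold")).thenReturn(0.1);

        OrderService service = new OrderService(repository);

        assertEquals(90.0, service.priceFor("gold", 100.0), 0.001);
        verify(repository).discountFor("gold");
    }
}
```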