My last Java certification is the SCJP 6 from 2010, so when the OCP 17 came out, I thought it high time for a refresher. I am setting aside an hour every day when I’m at my brightest, which is after coffee and before breakfast. I’ll share my personal study tips later; it’s more credible if I pass the exam first. Meanwhile, I found another topic to distract me from my studies that I want to dwell on: the effect that modern IDEs and resources like Stack Overflow are having on our brains. Once you dig into the 900+ pages of schoolwork that is the OCP Study Guide, you’re painfully reminded that they have been making us lazy, and maybe even a bit stupid.
Previously published on DZone. Date/time logic in code is where the messiness of the real world upsets the relatively straightforward rules of the digital realm. Blame the bewildering hodgepodge of edge cases on the movement of celestial bodies and Pope Gregory XIII (of the Gregorian calendar), but deal with it you must. I’m sure you all know that the 31st of December can fall in week 52 or 53, while the 1st of January can fall in week 0 or 1, and that you know these rules by heart for various countries (oh yes!). I’m also confident you can hand-code the logic to calculate the difference in seconds between two representations that span multiple time zones as well as a jump in daylight saving time.
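To make those quirks concrete, here is a small `java.time` sketch (the dates and zone are my own picks for illustration). It shows the same calendar day landing in a different week number depending on locale rules, and a time-zone-aware duration across a daylight-saving switch:

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.temporal.WeekFields;
import java.util.Locale;

public class DateTimeQuirks {
    public static void main(String[] args) {
        // Week-of-year depends on locale conventions: ISO weeks start on
        // Monday and need at least four days in the new year; US weeks
        // start on Sunday and need only one.
        LocalDate newYearsEve = LocalDate.of(2021, 12, 31);
        int isoWeek = newYearsEve.get(WeekFields.ISO.weekOfYear());
        int usWeek = newYearsEve.get(WeekFields.of(Locale.US).weekOfYear());
        System.out.println("ISO week: " + isoWeek + ", US week: " + usWeek); // 52 vs. 53

        // Crossing a DST boundary: in Europe/Amsterdam, clocks fell back
        // one hour at 03:00 on 31 October 2021, so a two-hour gap on the
        // wall clock is three hours of elapsed time.
        ZoneId amsterdam = ZoneId.of("Europe/Amsterdam");
        ZonedDateTime before = ZonedDateTime.of(2021, 10, 31, 1, 30, 0, 0, amsterdam);
        ZonedDateTime after = ZonedDateTime.of(2021, 10, 31, 3, 30, 0, 0, amsterdam);
        System.out.println("Elapsed hours: " + Duration.between(before, after).toHours()); // 3
    }
}
```

The point being: you could hand-roll this arithmetic, but `WeekFields` and `Duration` already encode the edge cases the exam expects you to know.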
Ricky Gervais cemented his comic talent in 2005 with the two-season series Extras, in which struggling actor Andy Millman keeps himself and the dream of a real acting career alive with freelance gigs as a movie extra. Each episode he rubs cold shoulders with a genuine star for some truly unforgettable awkward moments. Kate Winslet, Samuel L. Jackson, David Bowie, and Ian McKellen were all game. The bit where Diana ‘Miss Emma Peel’ Rigg gets a condom (albeit unused) flung into her hair by teenager Daniel Radcliffe amazingly sounds far grosser in words than it looked on screen.
Under this genre of celebrity embarrassment porn – a term I just made up – you may class the films that veteran Michael Winterbottom made with his favourite comedian Steve Coogan. It started with The Trip in 2010. Steve is asked to write a series of culinary reviews for the prestigious Observer newspaper and takes along his old friend and fellow comedian Rob Brydon for the trip. In between footage of busy kitchen staff dousing steaming pots of scallops with alcohol, our heroes sit around, eat, bicker, and laugh in a barely adult effort to outwit and outsmart each other. There’s a hundred minutes of the stuff per serving, with not much more plot to go round.
In my previous post, I discussed the difference between tests that target code and those that target an API. A subset of the second category is automated tests for a web/mobile interface that mimic user behavior and validate the rendered responses, using Cucumber/Selenium, Cypress, or any other stack. These are typically written and executed as end-to-end tests, in that they require a production-like setup of the backend, but that needn’t be the case. GUI tests can turn into true component tests if they target the browser but run against a fully mocked backend. In a complex microservices architecture, it makes good sense to do so. In this article, I will highlight the motivation for writing such tests, and in a follow-up, I will give tips and examples on how to do so with the Karate framework. Feel free to dig into its excellent documentation if you can’t wait.
In a room full of architects, no two will agree about their exact job description, as the joke goes. In a similar vein, someone in our team had a refreshing solution to another persistent bone of contention: how do you define an integration test? Don’t try to reconcile differing opinions and just stop using the term altogether. I liked it very much. Rather than think of unit and integration tests, it’s more helpful to distinguish between tests that validate code versus ones that validate public interfaces and recognize that within each category the scope varies for each individual test.
I started my programming career without a proper computer science qualification, which wasn’t exceptional in the Netherlands in the wild years preceding the dotcom boom. A sensible dose of impostor syndrome and a lucky sense of how best to fill the knowledge gaps have stood me in good stead. Starting out as a glorified amateur myself, I have always sympathized with the poor end user, perhaps out of a sense of my own bewilderment at all this needless complexity. I was an early fan of Jakob Nielsen, whose Alertbox column has covered (web) usability since the nineties.
I don’t think there has ever been a time when computers and software had the same gentle learning curve as using a toaster. Certainly not in the 1950s, when programmer and user were the same person, i.e., an engineer. Dedicated systems remained far from idiot-proof long after that and took considerable (memorization) skills to master. Before barcode scanners became common, checkout operators at German ALDI supermarkets had to enter all prices from memory (items had no price tags). It must have been a steep learning curve, but it sure was fast! Such mental feats were required less than a generation ago for a job we would now classify as unskilled labour.
In a recent article, I argued that it is becoming even harder to be a credible full stack developer, one who is sufficiently up to date to contribute to every stage of developing a modern web- and/or mobile-based application. You’re not expected to build it all by yourself, obviously, but given enough time and with a little help from Stack Overflow, you could.
True, a battle was raging between competing web “standards,” with proprietary goodies like Internet Explorer’s marquee tag and Netscape’s layers. But otherwise, the stack was reassuringly stable and manageable. The necessary tools on your belt didn’t weigh you down. And we just built simpler things back then, before the internet became indispensable. Today’s demands for scaling and security are much higher.
There are reasons to give key stakeholders the opportunity to officially sign off on a new software release. We need some formal approval from the people who commissioned it, or at least from their delegates. This last stage before release is commonly called the user-acceptance test and is executed in a UAT environment. It’s an indispensable stage, but treating it as the final step in testing is problematic for several reasons.
Let me start with a car example. Dealerships are generous with free test drives for the same reason that clothing stores let you try on three different shirts: to let the endowment effect do its dirty work. Wearing (or driving) something makes it feel as if you already own it. It gives you a taste of the look and feel and acts as a catalyst for closing the deal. It’s not about really testing the vehicle; they expect it back unscratched. Toyota doesn’t need its customers taking an active part in its QA process.
Today I want to talk about remembering and forgetting, and particularly the vast difference between human and computer memory. Popular fiction likes to cling to some flawed analogies, but any AI expert or neuroscientist knows better. The brain doesn’t distinguish between software and hardware. Memories are not pieces of data. You can’t upload them to the cloud. Everything worth remembering is stored associatively and will fade without context. There are no neat folders and drawers in your head to keep work and private affairs organized. If only there were.
Joshua Foer’s Moonwalking with Einstein from 2011 is a respectable piece of immersive and participatory journalism. While researching his book about the workings of human memory, he took an active and successful part in memory championships. These are what the name suggests: competitions to memorize and reproduce random facts against the clock. The winners are experts in creative mnemonics, the age-old practice of connecting random facts into a memorable narrative by making links that stick, however far-fetched.
Many non-technical skills, qualities, and mindsets are part of software craftsmanship. Today I want to talk about two:
Resilience helps us cope with difficulties by not giving up too soon. Improvisation deals with compromise and creativity in the face of the unexpected. The intuition to distinguish negotiable best practices from unchangeable truths is what agility is about. In earlier posts, I drew analogies with people from art and fiction (Stanley Kubrick, Woody Allen, and Gomer Goof).
Here’s the story of an agile musician I find very inspiring.