I started my programming career without a proper computer science qualification, which wasn’t exceptional in the Netherlands in the wild years preceding the dotcom boom. A sensible dose of impostor syndrome and a lucky sense of how best to fill the knowledge gaps have stood me in good stead. Starting out as a glorified amateur myself, I have always sympathized with the poor end user, perhaps out of a sense of my own bewilderment at all this needless complexity. I was an early fan of Jakob Nielsen’s Alertbox column; he has been writing about (web) usability since the nineties.
I don’t think there has ever been a time when computers and software had the same gentle learning curve as using a toaster. Certainly not in the 1950s, when programmer and user were the same person, i.e., an engineer. Dedicated systems remained far from idiot-proof long after that and took considerable (memorization) skills to master. Before barcode scanners became common, checkout operators at German ALDI supermarkets had to enter all prices from memory (items had no price tags). It must have been a steep learning curve, but checkout sure was fast! Such mental feats were required less than a generation ago for a job we would now classify as unskilled labour.
In a recent article, I argued that it is becoming ever harder to be a credible full-stack developer: one who is sufficiently up to date to contribute to every stage of developing a modern web- and/or mobile-based application. You’re not expected to build it all by yourself, obviously, but given enough time and with a little help from Stack Overflow, you could.
True, a battle was raging between competing web “standards,” with proprietary goodies like Internet Explorer’s marquee tag and Netscape’s layers. But otherwise, the stack was reassuringly stable and manageable. The necessary tools on your belt didn’t weigh you down. And we just built simpler things back then, before the internet became indispensable. Today’s demands for scaling and security are much higher.
Previously published on DZone. There are good reasons to give key stakeholders the opportunity to officially sign off on a new software release. We need some formal approval from the people who commissioned it, or at least from their delegates. This last stage prior to release is commonly called the user-acceptance test and is executed in a dedicated UAT environment. It’s an indispensable stage, but treating it as the final step in testing is problematic for several reasons.
Let me start with a car example. Dealerships are generous with free test drives for the same reason that clothing stores let you try on three different shirts. It’s to let the endowment effect do its dirty work: wearing (or driving) something makes it feel like you already own it. It lets you get a taste of the look and feel, and acts as a catalyst for closing the deal. It’s not about really testing the vehicle; they expect it back unscratched. Toyota doesn’t need its customers taking an active part in its QA process.
Today I want to talk about remembering and forgetting, and particularly the vast difference between human and computer memory. Popular fiction likes to cling to some flawed analogies, but any AI expert or neuroscientist knows better. The brain doesn’t distinguish between software and hardware. Memories are not pieces of data. You can’t upload them to the cloud. Everything worth remembering is stored associatively and will fade without context. There are no neat folders and drawers in your head to keep work and private affairs organized. If only there were.
Joshua Foer’s Moonwalking with Einstein from 2011 is a respectable piece of immersive and participatory journalism. While researching his book about the workings of human memory, he took an active and successful part in memory championships. These are exactly what the name suggests: competitions to store and reproduce random facts against the clock. The winners are experts at creative mnemonics, the age-old practice of connecting random facts into a memorable narrative by making links that stick, however far-fetched.
Many non-technical skills, qualities, and mindsets are part of software craftsmanship. Today I want to talk about two of them:
Resilience helps us cope with difficulties by not giving up too soon. Improvisation deals with compromise and creativity in the face of the unexpected. The intuition to distinguish negotiable best practices from unchangeable truths is what agility is about. In earlier posts, I drew analogies with people from art and fiction (Stanley Kubrick, Woody Allen, and Gomer Goof).
Here’s the story of an agile musician I find very inspiring.
Dave Eggers’ new novel is a darkly comical techno-dystopia. The Every (no link: buy it from your local book shop) is the sequel to The Circle. Its eponymous company is an unholy alliance of the major tech behemoths whose names need no mention. The Every is well on its way to wiping out or enslaving all competition when our heroine Delaney joins it, on a secret mission to bring down the system from within. But after the first few chapters we already know that resistance is futile.
Winston Smith from George Orwell’s Ur-dystopia 1984 likewise knew he did not stand a chance against the all-seeing Party. Yet whereas the surveillance technologies that Orwell conceived were science fiction in 1948, most of The Every’s toys are already here or coming soon to an Apple Store near you. Every new addictive app is a wolf in sheep’s clothing, adding more data points to your privacy profile. Eggers’ mission as a techno sceptic is anything but subtle.
Previously published on DZone. Joel Spolsky’s once prolific blogging output dried up years ago, but Things You Should Never Do, Part I is still a classic after 22 years. He wrote it as an outsider’s postmortem following the first beta of version 6 of Netscape’s browser, three years after major version 4. There never was a version 5. The team had decided on a full rewrite, and the resulting delay probably cost them their competitive advantage over Microsoft’s Internet Explorer.
“If Netscape actually had some adult supervision with software industry experience, they might not have shot themselves in the foot so badly”, he closes.
I like reading computing history and considering to what degree such articles are still relevant today. Here’s the gist of the original argument in my own words.
By stalling development on their current product, Netscape appeared to have closed for business, at least from the end user’s perspective. That was strategically disastrous. It made no business sense to abandon their flagship product like that. From a technical standpoint, it was even worse to throw away all working code and start from scratch. Most code is harder to read than to write, but you should always prefer refactoring to rewriting. A full rewrite is no guarantee that you won’t reintroduce the same bugs that were so painstakingly discovered and fixed in the old code base.
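To make that concrete, here is a minimal sketch of the idea. The shipping example, names, and rates are hypothetical, not Netscape’s or Spolsky’s; the point is that small, behaviour-preserving refactoring steps keep hard-won bug fixes that a rewrite might silently lose.

```java
// A minimal sketch: the shipping domain, names, and rates are made up.
public class ShippingCost {

    // Legacy version. The weight guard below was a painstakingly discovered
    // production fix; a from-scratch rewrite could easily reintroduce the bug.
    static double legacyCost(double weightKg, String country) {
        if (country == null || country.isBlank()) {
            throw new IllegalArgumentException("country is required");
        }
        if (weightKg <= 0) {
            return 0.0; // bug fix: free-return parcels are booked with weight 0
        }
        double base = country.equals("NL") ? 4.95 : 8.95;
        return base + Math.ceil(weightKg) * 0.75;
    }

    // Refactored in place with extract-method: same behaviour, clearer intent.
    // Each step is small enough to verify against the existing test suite.
    static double cost(double weightKg, String country) {
        requireCountry(country);
        return isFreeReturn(weightKg) ? 0.0 : baseRate(country) + weightSurcharge(weightKg);
    }

    private static void requireCountry(String country) {
        if (country == null || country.isBlank()) {
            throw new IllegalArgumentException("country is required");
        }
    }

    private static boolean isFreeReturn(double weightKg) {
        return weightKg <= 0; // the old fix, preserved and now self-documenting
    }

    private static double baseRate(String country) {
        return country.equals("NL") ? 4.95 : 8.95;
    }

    private static double weightSurcharge(double weightKg) {
        return Math.ceil(weightKg) * 0.75;
    }

    public static void main(String[] args) {
        // Sanity check: legacy and refactored versions must agree.
        System.out.println(legacyCost(2.3, "NL") == cost(2.3, "NL")); // true
        System.out.println(legacyCost(0.0, "DE") == cost(0.0, "DE")); // true
    }
}
```

No release ever has to wait for a refactoring like this: the product keeps shipping while the code gets better one verifiable step at a time.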
Near the end of a laid-back chat on Marc Maron’s WTF podcast, actor Peter Dinklage suddenly let his irritation run free over the planned live-action remake of Disney’s Snow White and the Seven Dwarfs. The casting of a Latina actress as the female lead did not warm him to the project. It was still an excuse to rehash an archaic story, for no better reason than to cash in on its legendary status. “Have I done nothing to advance the cause from my soap box?”, he jokes, 56 minutes in. I agree. You can rewrite and re-cast all you want to make it more palatable, but you’re still telling Snow White. Why not spend that energy on new stories?
I consider soft skills crucial to success in an IT career, but I find the term crude and confusing for two reasons. It suggests a tidy separation into non-overlapping categories that doesn’t exist, and it carries a hidden, more pernicious value judgment by hinting that they are less important than hard skills.
I hope professional coaches have a clearer understanding of the scope of these skills than a layperson like myself, who is regularly encouraged to keep up his soft skills, with little idea where to start. I duckducked into the definition and present you with this unscientific motley array:
Originally published on DZone. There’s not much buzz about design patterns these days. They appear to have joined the hall of fame of accepted wisdom, alongside the Silver Bullet, SOLID, and DRY. Lately, I had the opportunity to share some thoughts on the importance of good old design patterns with Koen Aerts, CTO of Team Rockstars IT. Here’s the gist of that talk in a more digestible format.
Before I start, let me set some boundaries, as people can get doctrinaire about definitions. I refer to the good old bridge, builder, decorator, and factory patterns. Architectural patterns like MVC do not fall into the same category, much less paradigms like serverless and microservices (aka SOA the next generation).
Then again, the latter do constitute a legitimate grey area. They’re clearly about design, patterns, and best practices as well. They offer standard solutions to common challenges. But what makes them different is the scale at which they operate. Classic design patterns are recipes for manageable bits of code, solutions that often fit in a single screen. They explain how to stack the bricks for your new house, whereas microservices show you how to lay out the entire neighborhood.
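To make that scale difference concrete, here is one such single-screen recipe: a minimal decorator sketch. The pricing domain and all names are hypothetical; the shape of the pattern is the point.

```java
// A classic decorator in miniature: wrap an object to add behaviour
// without touching its class. Domain and names are made up for illustration.
interface Price {
    double amount();
}

// The plain component.
record BasePrice(double amount) implements Price {}

// Decorators implement the same interface and wrap another Price,
// so they can be stacked in any order at runtime.
record WithDiscount(Price inner, double discount) implements Price {
    public double amount() {
        return inner.amount() - discount;
    }
}

record WithVat(Price inner, double rate) implements Price {
    public double amount() {
        return inner.amount() * (1 + rate);
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Price price = new WithVat(new WithDiscount(new BasePrice(100.0), 10.0), 0.21);
        System.out.printf("%.2f%n", price.amount()); // (100 - 10) * 1.21 = 108.90
    }
}
```

That is the whole brick. A microservice landscape tackles problems of a similar flavour (decoupling, single responsibility), but at the level of deployable units, teams, and networks, which is why it belongs in a different category.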