Abstract. While AI encroaches on our coding skills, it has not aced human language by any stretch. That’s where our competitive advantage lies, so be prepared. Previously posted on DZone.
I recently devoted three posts to my reluctant study for the OCP-17 Java exam, offering advice on how to make the effort less of an ordeal. I haven’t passed it yet. With every new advance in AI coding assistance, honing your skills as a human compiler strikes me as ever more anachronistic. It always was an act of masochism, but I am increasingly convinced that there is no professional advantage in becoming good at something the machine is superior at. I concede that any pursuit can be beneficial or enjoyable for reasons other than mere utility, but as a developer, I am paid to be productive. Having a good time on the job is a nice-to-have, and the skills the OCP calls for are not my idea of fun.
Many intellectual tasks that are hard for humans are easy for computers (chess, arithmetic, rote learning) and have been that way for decades. We invented higher-level programming languages and garbage collection because human beings are terrible at flipping bits and managing memory. The roadmap of computer languages and tooling points towards ever greater abstraction. GitHub Copilot and the like are only the next unavoidable step in removing accidental complexity.
Last month I shared my first experience using ChatGPT and GitHub Copilot. I was impressed, but not overwhelmed. AI suggestions can be accurate, but only to the extent that your question is specific and to the point. It can’t read your mind, but it can find solutions to well-defined tasks that have been solved before – and for which there may already exist a perfectly good library solution. Knowing what to build remains the hardest part of all and we are nowhere near automating the entire journey from idea to code.
I have two more points to make on the topic. First, since we as programmers are required to stay in the driver’s seat for a while yet, we had better stay sharp. Second, I believe the focus should be less on solutions that convert ideas to code. I personally can’t wait for usable AI that can do the reverse: scan an undocumented tangle of eighty microservices and explain to me in clear English what it is all about, because nobody bothered to keep the documentation up to date.
In a recent article titled Why Do Many Developers Consider Scrum to Be an Evil Scam?, Scrum consultant Willem-Jan Ageling bemoaned that people outside his professional circle clearly don’t share his devotion to the framework: “I am a Scrum enthusiast. When I’m in my Agile bubble, I have the pleasure of having great conversations. We may have different opinions about approaches, but in the end, we mostly agree on the merits of Agile and Scrum. But when I leave my bubble, I often receive backlash. The worst thing I have been called is a snake oil salesman, selling an evil scam”.
I won’t unpick his arguments in detail. I don’t think Scrum is an evil scam and I don’t think Mr Ageling is a snake oil salesman. Yet I have seen Scrum misapplied too often to ever become an enthusiast again. Here are my very personal reasons, based on first-hand evidence. They are biased, but then I don’t pretend to speak for ‘many’ developers, let alone all of them.
ChatGPT is taking the world by storm and redefining hype in the process. Dutch high school students use it for their homework and instantly give themselves away by handing in essays with perfect spelling and grammar. Everybody’s talking about it; I’m already late to the party. Comments range from admiration to stoic resignation, to plain fear for one’s job or the fate of humanity. I’m no pundit with a crystal ball, but my instinct is that our jobs as software developers are safe. For now.
Thanks to my background in linguistics and a short stint as a translator, I took an early interest in machine translation and am still unimpressed by the state of the art. It’s hardly surprising. Perfect comprehension of human language is the holy grail of AI, not self-driving cars. If those performed as poorly as the best translation engine, none would be allowed off the test lot.
The native speaker’s intuition of a meaningful sentence is not something you need to be taught. You start picking it up as a baby. For most of human history, we didn’t have schools, and yet we understood each other perfectly — within the same tribe, naturally. This capacity is a defining part of being human. We’re born to speak, with no neat separation into hardware and software. All that makes human language a very hard problem to express in algorithms.
Machines are better at systems we designed ourselves, with predictable and exceptionless rules. No board game is safe from AI domination, it would seem. This doesn’t diminish the impressive mental prowess of professional chess or go players. The computer just happens to be better at it, the way a deer can outrun you and salmon are better at swimming. None of this should spoil your appetite for competing in these pursuits with other human beings. Just don’t cheat.
AI does an impressive job of stealing and interpolating. It invents the room where the original Gioconda posed for da Vinci, but the result is still the output of an algorithm and by nature predictable. Predictable is bland. Great art emerges when otherwise mundane elements (words, notes and chords, or brush strokes) combine into something that is more than the sum of its parts. You can’t predict, much less force, this originality, and few artists manage to produce a consistent stream of genius output. They all have their off days.
When AI can write a sequel to Macbeth and an album of original Beatles songs worthy of Shakespeare and McCartney, the singularity has arrived. Humans are effectively redundant. It won’t happen in our lifetime, so let’s set our present expectations lower. Surely it can help us write Java?
Yes, it can. The upbeat, competitive mood around the annual Advent of Code challenge was seriously soured when it turned out ChatGPT did a more than decent job at solving the tough daily puzzles. I’m not surprised. All these brain teasers are variations on problems that have long been solved. This is AI’s forte: digging up, sorting, and repackaging such collective knowledge.
I don’t like puzzles in general, and I was never any good at math. I have failed job interviews for not being able to implement a linked list fast enough for the interviewer’s taste. He considered it basic stuff. It’s basic, all right. We solved it so we could deal with greater levels of abstraction. I don’t want to re-invent it, not even as an exercise. Sure, there are software niches where such mental acuity comes in handy, but I haven’t worked in them. I can’t remember or reproduce the proof of the Pythagorean theorem. I trust it’s correct.
Back to ChatGPT. I signed up and tasked it with a coding challenge I imagined would be right up its street. “Write me a piece of Java code that checks if a String is in camel case.” Unsurprisingly, the answer looked fine. But what a waste of perfectly good processing power and network packets! Of all the sensible options, I chose the laziest, most wasteful, least maintainable one. I did not check whether a common String library or the JRE itself had what I needed. I could have written it myself in ten minutes, but how boring. Instead, I went straight to the oracle.
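I won’t reproduce the bot’s answer verbatim, but a hand-rolled check might look something like the sketch below. It is my own reconstruction, and the definition of camel case it assumes (a lower-case first letter, then only letters and digits, no separators) is mine, not ChatGPT’s or any library’s.

```java
public class CamelCaseChecker {

    // Returns true if the input looks like camelCase under the assumed definition:
    // non-empty, starts with a lower-case letter, and contains only letters and digits.
    public static boolean isCamelCase(String input) {
        if (input == null || input.isEmpty()) {
            return false;
        }
        if (!Character.isLowerCase(input.charAt(0))) {
            return false;
        }
        // No spaces, hyphens or underscores allowed.
        return input.chars().allMatch(Character::isLetterOrDigit);
    }

    public static void main(String[] args) {
        System.out.println(isCamelCase("camelCaseExample")); // true
        System.out.println(isCamelCase("not_camel_case"));   // false
        System.out.println(isCamelCase("PascalCase"));       // false
    }
}
```

Under these assumptions the whole check fits in a dozen lines of plain JDK code; no round trip to a large language model required.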
I should not be surprised if my ten lines of code were lifted straight from an Apache Commons library. That’s where they should stay, to be properly imported as a library dependency, along with dozens of other useful functions you didn’t know about and instead scavenged from the web to add to your homegrown StringUtils.java. I know, we don’t live in a world of perfect re-use where we only ever write a line of new code if it serves a novel solution to a problem. I know we should not take ‘don’t repeat yourself’ to the extreme. One can in fact over-dry, Jerry Seinfeld. But GitHub Copilot and ChatGPT are Stack Overflow on steroids. They make it all too easy. They will lead to more clueless copy-pasting instead of sensible re-use. I call for DROP: do not repeat other people.
From a business perspective, new code is a liability, especially in complex organizations. Senior developers deal with code, but we’re not writing many new lines. Most of our days are spent keeping what we already have in working order. That’s a good thing. If you want to code for fun, start or join an open-source project.
I can’t pinpoint many activities in a typical workday that I could safely entrust to AI. Some days I don’t deal with code at all. I simply can’t, because coding is the formulation of the answer, and they’re not telling me the question. I don’t mean the question as the stakeholder jotted it down in JIRA, but formulated in crystal-clear human language to ensure we’re not wasting money building the wrong thing from day one. ChatGPT isn’t going to help.
There’s the famous and hilarious scene from the Hitchhiker’s Guide where the supercomputer Deep Thought has found the succinct answer to the Ultimate Question after seven and a half million years of number-crunching. It’s 42. Too focused on getting to the answer, everyone had forgotten how complex the question was.
Even if AI can work out 95% of the answers as well as the questions, do you really want to be a machine minder, sipping coffee and making three manual adjustments every fifteen minutes? Where’s the fun in that?
Perhaps it’s only fair that the promise of a perfect code generator is making developers nervous. We’ve been encroaching on people’s livelihood for half a century. We’re not so evil that we want to rob people of a professional purpose in life. It’s just that automating things for its own sake is such darn fun. And sometimes we assume we’re doing people a favor by cutting out a task that only seems monotonous and trivial. I’ll leave you with a case in point.
Eight years ago, I worked on the vessel traffic control software for the port authority in Rotterdam. One of the enhancements was a component to visually manage the passage of vessels coming through the sea lock complex in IJmuiden on their way to Amsterdam. Each ship gets a time slot to pass through one of the six locks in an assigned order, depending on its size. Traffic control wants to squeeze in as many ships as is safely possible because each passage is costly.
“As a developer, I want the software to assign each incoming vessel a time slot and location within the lock, to use the available capacity optimally and maximize throughput.” I want it because it’s an irresistible coding challenge.
Wrong user story. All traffic control wanted was a graphic depiction of the incoming vessels, true to scale, which they could manually drag and drop into the available locks.
As developers, we had over-abstracted the “problem.” We knew the exact size of the locks and the vessels from the maritime database. It looked like another traveling salesman problem, only way simpler. Maybe it worked 90% of the time, but what did we know? What about the cargo of the vessel? Does that make a difference? Surely bad weather must affect the minimum safe distance between vessels. Add a few more of these chaotic parameters and you have one epic of a user story.
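Just to show how seductive the puzzle was, here is the kind of naive first-fit sketch a developer’s fingers itch to write. Everything in it is invented for illustration, lock names and dimensions included, and it conveniently ignores cargo, weather, and every other parameter that makes the real planning hard.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical greedy planner: assign each vessel to the first lock with enough
// remaining length. A toy model of the real scheduling problem, nothing more.
public class LockPlanner {

    record Vessel(String name, double lengthMetres) {}

    static class Lock {
        final String name;
        double freeLengthMetres;
        final List<Vessel> assigned = new ArrayList<>();

        Lock(String name, double lengthMetres) {
            this.name = name;
            this.freeLengthMetres = lengthMetres;
        }
    }

    static void plan(List<Vessel> vessels, List<Lock> locks) {
        // Biggest vessels first, so they are not squeezed out by the small ones.
        vessels.sort(Comparator.comparingDouble(Vessel::lengthMetres).reversed());
        for (Vessel vessel : vessels) {
            locks.stream()
                 .filter(lock -> lock.freeLengthMetres >= vessel.lengthMetres())
                 .findFirst()
                 .ifPresent(lock -> {
                     lock.assigned.add(vessel);
                     lock.freeLengthMetres -= vessel.lengthMetres();
                 });
        }
    }

    public static void main(String[] args) {
        List<Lock> locks = List.of(new Lock("North", 400), new Lock("Middle", 225));
        List<Vessel> vessels = new ArrayList<>(List.of(
                new Vessel("Coral Sky", 180),
                new Vessel("Tide Runner", 120),
                new Vessel("Petrel", 90)));
        plan(vessels, locks);
        locks.forEach(lock -> System.out.println(lock.name + ": " + lock.assigned));
    }
}
```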
Traffic control wanted the simple system, but not to save us work or their employer money. Doing the planning was one of the highlights of their day, an opportunity to use their intuition and decades of experience. No one was going to take that away from them.
I first dabbled in Kotlin soon after its 1.0 release in 2016. For lack of paying gigs in which to use it, I started my own open-source project and released the first alpha over the Christmas holidays. I’m now firmly in love with the language. But I’m not here to promote my pet project. I want to talk about the emotional value of the tools we use, the joys and annoyances beyond mere utility.
Some will tell you that there’s nothing you can do in Kotlin that you can’t do just as well in Java. There’s no compelling reason to switch. Kotlin is just a different tool doing the same thing. Software is a product of the mind, not of your keyboard. You’re not baking an artisanal loaf of bread, where the ingredients and the right oven matter as much as your craftsmanship. Tools only support your creativity. They don’t create anything.
I agree that we mustn’t get hung up on our tools, but they are important. Both the hardware and software we use to create our code matter a great deal. I’ll argue that we pick these tools not only for their usefulness but also for the joy of working with them. And don’t forget snob appeal. Kotlin can be evaluated on all three of these motivations. Let’s take a detour outside the world of software to illustrate.
I like reading books about corporate dysfunction when they come in the shape of a compelling (fictional) narrative. Business writers know how storytelling can spice up dry theory and support their argument. Patrick Lencioni’s The Five Dysfunctions of a Team and Gene Kim’s The Unicorn Project and The Phoenix Project are good examples. It works for popular science too. In Snakes in Suits, psychologist Robert Hare, a renowned authority on psychopathy, explains for a lay readership the manifestations and biological foundations of this dark human design flaw. He interweaves the science with a chilling fictional narrative of a parasitic young suit slithering his way up the corporate ladder. So, when a coworker told me the other day about an especially glib colleague who lied, cheated, charmed, and flunked his way to job security, I immediately thought: psycho!
In software, any rule or recommendation, whether it’s the Law of Demeter, the SOLID principles, or the Agile Manifesto, is the distillation of years of experience, spirited discussion, and plenty of compromises. Observing how teams work has led us to recommendations that boil the specific down to the generic. Stories are a wonderful aid to explain and justify such rules because they can show how the rules were arrived at in the first place. They supply the back story that reconnects the generic to the specific. You need that back story to know and respect the justifications behind a principle. It’s not enough to learn a rule by heart if you want to apply it well. Concise lists of opinionated statements make for pithy posters, but the necessary back story is missing from the text.
Life means dealing with bad things that may or may not happen. We call them risks. We assess, evaluate, mitigate, accept, and sometimes blithely ignore them. Building complex and original software is inherently risky, and the Agile way of working does not fix that. That’s why we need to be true to the value of courage. I’ll start my argument with a refresher on the topic and some practical examples.
The dictionary defines risk as the likelihood of bad things happening. Equally important is the nature and extent of the damage when the risk becomes a reality. The exact balance between probability and consequences is the bread and butter of actuaries at insurance companies. Their models inform them how likely it is that an average dwelling goes up in flames, so they can put a price on the collective risk of their millions of customers. Several houses are certain to burn down each year and their owners need to be reimbursed. It’s a predictable risk.
This won’t do in aviation or civil engineering. Every lethal incident prompts an investigation to raise the standard and reduce risk up to the point where we are left with black swans only: events so rare and unpredictable you can’t possibly prepare for them. Most airline crashes are the result of these unknown unknowns.
The Scala World Hiking Trip
Here’s a more mundane example. At Scala World 2019 I joined the attendees for the traditional pre-conference hike in the English Lake District. I had visited the area before and arrived with waterproof gear and sturdy boots, knowing the terrain and the unpredictable British weather, which even in September can be brutal.
We set off in the sunshine and of course, it had to rain for much of the walk. Several walkers had not read or minded the instruction email, or even checked the weather forecast. Arriving woefully unprepared in cotton jeans and t-shirts, they got thoroughly soaked and a little miserable. But there was safety in numbers. Spare ponchos were shared, and nobody would have been in mortal danger if they had sprained an ankle while clambering over slippery boulders with their inadequate footwear.
Previously published on DZone. At this year’s Antwerp edition of the Devoxx Java conference (12-14 October 2022), I attended some talks that managed to really inspire me, which is a rare gift. They made me think about my own 20+ year career as a developer and how I want and expect to spend the remainder. I’d like to share these thoughts here, giving credit to the excellent speakers where credit is due. Here are my three conclusions for the impatient:
Been there, done that? I don’t buy it. You can never have been everywhere or done everything. The Java universe expands faster than anyone can keep up with, so don’t burn yourself out.
The world needs excellent craftspeople more than it needs people to manage them. Don’t become a victim of the Peter Principle.
If you aspire to a different job only for its status and are incompetent at it, you make more than one life miserable.
Books on bad programming habits take up a fraction of the shelf space dedicated to best practices. We know what good habits are – or we pay convincing lip service to them – but we lack the discipline to prevent falling into bad habits. Especially when writing test code, it is easy for good intentions to turn into bad habits, which will be the focus of this article. But first, let’s get the definitions right.
An anti-pattern isn’t simply the absence of any structured approach, which would amount to no preparation, no plan, no automated tests, and just hacking the shortest line from brainwave to source code. That chaotic approach is more like non-pattern programming. Anti-patterns are still patterns, just unproductive ones. According to the official definition, the approach must first be structured and repeatable, even when it is counter-productive. Second, a more effective, documented, and proven solution to the same problem must be available.
Many (in)famous anti-patterns consistently flout one or more good practices. Spaghetti code and the god object testify to someone’s ignorance or disdain of the principles of loose coupling and cohesion, respectively. Education can fix that. More dangerous, however, are the folks who never fell out of love with over-engineering after the day they read the Gang of Four, because doing too much of a good thing is the anti-pattern that rules them all. It’s much harder to fight because it doesn’t feel like you’re doing anything wrong.
The seventeen participants could not have predicted the success of their collective weekend brainstorm in February 2001 that resulted in the Agile Manifesto. Their recommendations have held sway over common thinking about software development ever since. Scrum saw the light of day in 1995, but cleverly hitched a ride on the Agile bandwagon, to the point that many consider the two one and the same. Many young developers have never known a different way of working in their professional careers.
But the spirit of Agile is becoming a dead letter. Coaches complain that most organisations do as they please. They explain in their blogs and books how true Agile should be practised. Developers grumble too, but rather because the enthusiasm over a better way of working has turned into going through the motions, rigidly and uninspired. What’s going on? Didn’t the Manifesto stand for careful deliberation and adaptation?