Not worrying about AI
I keep hearing these extreme narratives about the incredible opportunities offered by AI, or how it's going to make me redundant. But I'm not seeing much evidence of either.
Having had an especially productive couple of sprints recently, I've been trying to figure out where those opportunities — or threats — fit in.
The work
If my job was primarily programming, I'd be more worried — especially if I did it in a very predictable, regulated environment. But that's not really the case. Instead, I tend to be problem-solving in messy, real-world scenarios.
Significant bits of recent work include:
New UI component integration
Our design system has had some new chunks of UI added to its component library, and my team was assigned to integrate these with our flagship product. It seemed like a straightforward technical task, but it ended up being a complex multi-disciplinary problem in which content editors, UX designers, and backend and frontend developers had to collaborate to meet the business ask in an optimal way.
I wrote some code. The backend developer wrote some code. But the real work was engineering a solution that struck the right balance between legacy data models and the needs of current and future content design.
Writing it up
When we talk about design systems in the abstract it seems like a no-brainer to start using one, but in practice it's a much slower, harder process than I ever thought possible. We're at a point now where theory and reality are making contact — where the rubber meets the road, as they say — but it feels like there's more friction than traction.
The only way to improve that is by learning from the process, so I keep track of my thoughts and write them up to try to make sure I can articulate them well.
There's been plenty of grist for the mill over the last couple of weeks — the current article is one example.
Saving time and effort
Organisations will never admit to being adequately resourced, but the public sector is going through a particularly lean period right now. I've formulated my annual goals in the context of a demanding technical roadmap with the explicit aim of reducing waste and optimising for delivery.
Having completed my contributions to the next release, I was asked if I could reinstate a prototype we'd previously used for data-gathering purposes. It was very difficult to see how this could be done in a responsible way.
Long story short, I was able to demonstrate to the PO's satisfaction that the functional requirements of the business could be met with a combination of modules already available to editors in the CMS. We were able to immediately commission the content changes without a release or additional development work.
I hope this will prove to be more than just a stop-gap measure; that the business users will either be happy with the current solution, or at least that any development work required later will take the form of small adjustments rather than development of a new, full-blown feature.
More loose ends
I picked up another bit of incomplete work from the backlog.
This one was less pleasing from a technical perspective: it meant putting a prototype-quality service into production. But it's functional, and I'm a pragmatist. People often use "technical debt" pejoratively to describe work that's low quality, unfinished, or which requires some sort of future maintenance, but I have a stricter definition. I'm OK with technical debt, if that's what it really is — a well-reasoned cost/benefit decision with quantified, manageable risks.
So I plumbed the thing together, exhaustively documented all the moving parts, and advised the PO that from a technical perspective this was a prototype. I was clear that any decision to release needed to come from the business, not the developers.
Not going away
What has any of this got to do with AI? Nothing.
I've had a notably productive couple of sprints in which I've achieved and exceeded the work I was tasked with. I can't see how AI could have done this without me, or helped someone else do it better than I did.
I'm not writing from a rabidly anti-AI perspective here. I've (cautiously) employed LLMs before: to scaffold out a single-file component when I've not spent much time with a framework's APIs, or to suggest an efficient way to compose functions implementing business logic. But those situations aren't my bread and butter.
It's become a bit of a cliché now to assert that writing code is "the easy part" or "the smallest part of a developer's job", but for me, at least, it really is true. Not only do I not need AI to do a good job, I struggle to even identify any opportunities where it could help.
Nothing I've described here could have been done more quickly or efficiently with the help of a large language model, because it's all too specific: to the organisational structures, to the tech stacks and systems architecture, to the culture and processes of the business units and disciplines involved, and the personalities within them.