$/home/emma/random

Re. Are Devs Overreacting?

Are you sick and tired of seeing all the shite, everywhere, about 'AI'? Are you tired of having 'AI' rammed down your throat every five minutes? I can't even settle down with a nice glass of gin to watch a documentary about wasps without adverts every ten minutes for some device with 'AI' (a privacy-invading 'feature' that's yet another pretext for harvesting data from our devices). I get hammered with invitations to 'webinars' on the subject, and I'm dismayed to see Hacker News and The Reg saturated with articles about damn chatbots.

But, onto the main point...

Naturally, about 18 months ago, several of us at work ended up chatting about the potential usefulness of 'AI' as a development tool. The long and short of it is that, after some experimenting, we shrugged our shoulders and moved on - but I was going to post something very similar to what blackentropy recently did.

The central question blackentropy addressed was this: Should software developers be concerned about the possibility of being replaced by 'AI'?

The answer is heavily context-dependent, but it mainly depends on where one sits on the spectrum between working as a coder in a Scrum feature factory and working as a software engineer who is valued for that expertise. All things being equal, of course - the complexity of what developers and engineers need to work with is always increasing.

I always make the distinction between software developers and software engineers: Software engineers are harder to come by. Their expertise is much broader and deeper than anything 'AI' could even begin to emulate. Software engineers are concerned with design principles, insist on definite code patterns, and are generally quite opinionated about how software should be engineered. If an 'AI' chatbot can't even form an opinion on whether Kamala Harris would have made a better president than Donald Trump, I think we can safely assume it couldn't form opinions on the things software engineers argue about.

The people who should be more concerned are software developers working in Scrum feature factories, who could probably get away with copying existing code into functional software. No, 'AI' can't replace them entirely, but they're more likely to be working for organisations that would make a strategic decision to experiment with the idea of retaining a couple of developers/engineers while getting ChatGPT or Copilot to do the rest.

ChatGPT can generate working code for a basic application; the success rate depends on how widely used and well documented the programming language in question is. The code it generates is roughly the quality of what a junior developer would submit - it needs reviewing, refactoring and modification to fit into an engineered code base. The major difference, of course, is that junior developers can learn and become engineers themselves. There is also the problem of telling ChatGPT what to generate, quite apart from the obvious problem of extracting the specifics of what clients want in a software product. Some level of expertise is required to adequately define the problem we want it to solve, and to know which parts of the generated code are usable.

#development