
AI as a software development tool

Over the past couple of months, a colleague has been experimenting with ChatGPT as a software development tool, and he recently showed a demo of it generating the jQuery/JavaScript for a simple Web application. Anyone could do the same, by the way; the demo was more about encouraging a debate about how we should use it.

Personally, I see ChatGPT as something between a chatbot and a smart search engine, and I've always had reservations about passing off anything copied and pasted as my own work, especially if it's derivative of what someone else has published.

Obviously, a couple of us commented on the idea that AI might eventually replace software developers/engineers (and professionals in other fields). I don't think that will happen in the foreseeable future, considering how much of our time is taken up by things that don't actually involve coding, and the overhead involved in designing and releasing software. I believe ChatGPT, or something like it, will instead become another tool for developers, alongside things like IntelliSense and ReSharper (which I'm actually less reliant on).

Problems need to be defined in order to be solved

ChatGPT did generate a (small) working application every time it was given the task - the jQuery could be copied and pasted into a scaffolded Web page and it would run as a Web application. The thing is, it gave us exactly what we asked for, which was different from what we wanted.

It soon became apparent that we needed to actually think about the problem and define it with some precision: What output should the application give? Should the variables be strongly typed (the answer was yes)? What should the application do when the user does something unexpected? What about exception handling? How should the code itself be structured? Our queries, for a very small application, ended up looking more like pseudocode.
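To illustrate the level of detail involved - this is a made-up sketch, not code from the demo - even a requirement as trivial as "add two numbers from form fields" forces the decisions above: typed values, defined behaviour on bad input, and explicit error handling.

```typescript
// Hypothetical sketch of what our prompts ended up having to specify:
// typed inputs, defined behaviour on unexpected input, explicit errors.

/** Parse a form field into a number, or throw with a clear message. */
function parseField(name: string, raw: string): number {
  const trimmed = raw.trim();
  const value = Number(trimmed);
  if (trimmed === "" || Number.isNaN(value)) {
    throw new Error(`Field "${name}" must be a number, got "${raw}"`);
  }
  return value;
}

/** The behaviour we actually wanted: a typed sum, not a silent NaN
 *  or an accidental string concatenation. */
function addFields(a: string, b: string): number {
  return parseField("a", a) + parseField("b", b);
}
```

Left unspecified, generated jQuery will happily concatenate the strings or return NaN; pinning down exactly these behaviours is what made our queries read like pseudocode.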

Defining a problem at the level of detail needed, interpreting abstract (and sometimes vague) design documents and translating them into working systems, deciding how software should implement a solution, and knowing when software is complete enough for beta testing all require engineering expertise. That's even more the case in a team that's shipping software as a collection of small APIs and containers, as appears to be the trend now. And we need to get this mostly right from an early stage, as something that appears correct, such as what ChatGPT would give us, might be very costly to fix later on.

Coding is a small part of an engineer's job

The technologies software engineers use are evolving faster than 'AI', and there's an ever-increasing overhead involved in getting software deployed.

When I started out, the job was largely about adding functional code, compiling the application and copying the .exe and .dll files to a staging server. Another team would perform their regression tests and the application would be deployed on the production servers. A few years later, the principal engineers insisted on test-driven development, SOLID design, Unity Containers, a ridiculous (in my opinion) amount of dependency injection, specific coding patterns and whatnot. There were good reasons for most of that - we were, after all, working with systems that processed very sensitive data - but it was one hell of a learning curve. In that scenario, anything generated by ChatGPT would need to be heavily rewritten, modified and refactored, to the extent that one would be shifting lines of its code into different layers of classes.
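As a sketch of what that kind of rewrite looks like - hypothetical names here, and the real codebase was C#/.NET with Unity, but the shape is the same - generated code typically creates its dependencies inline, whereas the house style demanded they be injected through an interface so the layers stay separate and testable.

```typescript
// Hypothetical before/after sketch of the refactoring generated code
// needed. Names are illustrative, not from any real codebase.

// "Generated" style: the dependency is created inline, hard to test.
class ReportServiceBefore {
  run(id: string): string {
    const data = `record-${id}`; // imagine a direct database call here
    return data.toUpperCase();
  }
}

// Refactored style: the dependency is an interface, injected via the
// constructor, so a test can pass a fake and no database is touched.
interface Repository {
  fetch(id: string): string;
}

class ReportService {
  constructor(private readonly repo: Repository) {}
  run(id: string): string {
    return this.repo.fetch(id).toUpperCase();
  }
}
```

A test can then supply a stub such as `{ fetch: id => "record-" + id }` - which is precisely the kind of restructuring that meant shifting lines of generated code into different layers of classes.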

Things are very different in my current position, since I'm working with mature software that has outlived the technologies and frameworks used by my predecessors, and someone saw fit to add complex deployment pipelines. My job is less about coding and more about building and configuring the frameworks in which the software runs, migrating things from legacy systems, addressing technical debt, improving application security, documentation, etc. Most software development teams will inevitably end up doing the same.

We still need to review code

ChatGPT can generate a direct and working solution to a problem, but the output is derivative. Some of the solutions we saw included variables that weren't strongly typed, code that could have been refactored, and even suggestions for outdated dependencies.
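A made-up illustration of the sort of thing a review would flag - loose typing and an inline guard repeated instead of extracted:

```typescript
// Hypothetical review example: loosely-typed generated code versus the
// refactored version a reviewer would ask for. Not code from the demo.

// Generated style: `any` everywhere, the validity check buried inline.
function totalGenerated(prices: any): any {
  let total = 0;
  for (const p of prices) {
    if (typeof p === "number" && p >= 0) total += p;
  }
  return total;
}

// Reviewed style: typed input and output, the guard extracted and
// named once as a reusable type predicate.
const isValidPrice = (p: unknown): p is number =>
  typeof p === "number" && p >= 0;

function total(prices: unknown[]): number {
  return prices.filter(isValidPrice).reduce((sum, p) => sum + p, 0);
}
```

Both versions "work", which is exactly the point: the generated one passes a quick demo but still needs a human reviewer before it belongs in a shared codebase.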

Abstracted software development still requires a lot of effort

I think this is relevant because software engineering appears to be heading in the direction of becoming more abstracted - I'm thinking of 'low-code' solutions here - and AI might provide one type of abstraction. Currently I'm using Microsoft Azure to develop Logic Apps, Function Apps and other services, because it's believed things on that platform will be more reliable, secure and maintainable. It's very likely that some company, perhaps Microsoft, will release a low-code platform that uses something like ChatGPT to interpret elements in a graphical interface. Perhaps Azure already does this in some way.

You'd think getting a low-code solution to work on Azure would be easier than developing a .NET application in Visual Studio, but quite often it's not. There's still a vast amount of code under the surface that needs to work with a collection of other components, and there's still a large amount of configuration, testing, integration and liaising (and negotiating) with engineers from other departments involved.

#development