Despite ChatGPT raising the spectre of AI replacing programmers, an experiment I ran suggests a shift in the role of programmers rather than their replacement. Programmers will not be irrelevant in the AI-powered future; they will just “program” in a very different way.

Society will always need people to translate raw ideas into something that makes sense to a computer. The form that communication takes may change over time, as it has in the past. Today, no one programs directly in machine code. Everyone works in a higher-level language. Even compilers themselves are now written in a higher-level language, usually C (Rust may take over that privilege, but that’s another blog post).
In the same way, programmers of the future won't write imperative code. At some point, languages like C, Rust, and Java won't be written by humans at all. I used to think that we would develop declarative languages to program in, but now that I've been playing with ChatGPT, I wonder if programmers in the future will use natural language to describe what they want the computer to do.
This line of thought led me to run an experiment to see what it was like to write a real program with ChatGPT. I started simple. I asked ChatGPT to write a To-Do App in React Redux. I typed the code in and asked follow-up questions to solve the errors I got. After a great deal of back and forth with ChatGPT, I finally got a complete, working project that implements a To-Do App in React Redux. You can look at the code at https://github.com/ziroby/todo-chatgpt in the v1-react-redux directory.
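To give a sense of what I was asking for: the core of a Redux To-Do app is a reducer that manages the list of items. The sketch below is my own minimal illustration of that pattern in TypeScript, not the code ChatGPT generated (that code is in the repo linked above), and the names and action shapes are my own.

```typescript
// A minimal, self-contained sketch of the kind of Redux code a To-Do app
// is built from: a plain reducer managing a list of to-do items.
// Illustrative only; not ChatGPT's actual output.

interface Todo {
  id: number;
  text: string;
  completed: boolean;
}

type TodoAction =
  | { type: "ADD_TODO"; id: number; text: string }
  | { type: "TOGGLE_TODO"; id: number };

// Reducers must be pure: never mutate state, only return new state.
function todosReducer(state: Todo[] = [], action: TodoAction): Todo[] {
  switch (action.type) {
    case "ADD_TODO":
      return [...state, { id: action.id, text: action.text, completed: false }];
    case "TOGGLE_TODO":
      return state.map((todo) =>
        todo.id === action.id ? { ...todo, completed: !todo.completed } : todo
      );
    default:
      return state;
  }
}

// Exercising the reducer in isolation, without React or a Redux store:
let state = todosReducer(undefined, { type: "ADD_TODO", id: 1, text: "Buy milk" });
state = todosReducer(state, { type: "TOGGLE_TODO", id: 1 });
console.log(state); // [{ id: 1, text: "Buy milk", completed: true }]
```

A human programmer learns this shape once and reuses it everywhere; the interesting question in the experiment was whether ChatGPT could keep all the surrounding pieces (store, actions, components) consistent with it.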
So what was it like working with ChatGPT to produce real code? It felt like I was talking to an errant child with more confidence than ability. ChatGPT always responded to my questions, but the code wasn't a coherent whole. The first bit of code it gave me had a bug that produced a blank page and an error message in the console. I had to ask it to solve each error, and it didn't always respond with corrected code. Instead, it would explain how to fix the problem in general, rather than fixing the specific code it had generated.
When a person first learns programming, one of the skills they pick up is using error messages to improve their code. Deciphering error messages is a core skill for programmers, and ChatGPT seemed to lack it.
I wonder what would happen if you gave ChatGPT access to a complete development environment and had it work out how to generate a complete project under different constraints. There seems to be potential for unsupervised learning there. Or maybe it needs partially supervised learning, where a human gives it a task and it learns how to create a complete program to solve that task.
Back to the original question: will AI take over programming? My experiment suggests that ChatGPT and similar programs will not take over programming. However, they will morph programming into something different. I see a future where “programmers” talk to humans to find out what they want built, then talk to AIs to describe the program in terms the AI can understand. This will be an iterative process, with the human asking the AI to refine each version of the program and add more to it.
I allude to this today when people ask me to describe my job: I say my job is to state a task in terms so simple that even a computer can understand it. In the future, the computer will understand tasks expressed in more complex terms, but it will still need a human in the loop.