> "There will be no programmers in five years." – Emad Mostaque, founder and CEO of Stability AI (Stable Diffusion)
I agree with this, except I’d put the timeframe more conservatively around 10 years.
I think in 5 years, most software development will be done by computers. Humans will prompt AIs that generate the code. AI at that point will be able to work within existing codebases and implement trivial feature requests on its own, detect and fix bugs, and debug itself.
In 10 years, AI will unquestionably be superior to humans in software development – just like AI is already better than humans at chess and Go. Writing code by hand will not only be considered useless, it will be considered bad practice, because humans will be seen as less reliable and more error-prone than the machine.
A “software engineer” in 2033 will be more of a CTO-like managerial position, where that person is in charge of the tech of an organization but delegates the tasks to AIs. Companies will need fewer and fewer software engineers as the AI advances and ultimately displaces humans. Of course there will be engineers building and maintaining the AI, researching, and working at the bleeding edge, but overall there will be significantly less demand for human engineers. What few jobs remain will be more demanding, since AI will only improve – and a human has to be able to do what an AI can’t in order to be useful, a bar that will only get higher and higher until it is eventually unattainable.
With humans rendered economically obsolete by machines, governments will have no choice but to implement a universal basic income in order to prevent mass poverty and chaos. Unfortunately, many workers will be displaced and people will suffer before governments act, due to the enormous capitalistic brainwashing against UBI.
Why this time is different
The reason technological advancement will actually result in net job destruction this time is that technology is getting better than humans and approaching Artificial General Intelligence (AGI). Large Language Models (LLMs) like ChatGPT can reason, and thus solve problems they haven’t specifically been programmed to solve. In the past, a technology had to be built to solve a specific problem in order for the human to be rendered obsolete (eg. the loom), but AGI can solve any problem. As we approach AGI, the only value humans can provide will be whatever AGI is not yet capable of. There is no upper bound on how smart AIs can get, but humans have a clear biological ceiling.
AI will beat humans at software development
Just like AI is superior to humans at games like chess and Go, there’s no reason that AI won’t reign supreme when it comes to programming. AI can already pass Google’s coding interview, the bar exam, the MCAT, etc. at a speed that no human could even come close to – like a flashlight competing with a sloth.
Humans look at ChatGPT now and laugh at its often nonsensical answers and hallucinations. But you’re looking at the equivalent of one of the first computers in the 1950s, dialup internet in the 90s, a 1MB floppy disk, or mobile phones in 2007. Remember, ChatGPT was only released November 30, 2022. OpenAI’s Code Interpreter – an AI that can execute Python code and accept file uploads – was released in beta last week to ChatGPT Plus users. We’re in the first inning of an AI arms race, and at the speed at which things are progressing, it’s pretty clear that AI will be drastically more developed even just another 6 months from now.
As a software engineer, I have full awareness that most of what I know that makes me valuable will be obsolete within a decade because AI will be able to do it faster, cheaper, and better.
The stages of AI progress in software engineering look something like this:
- Chatbots tell you the answer, for example by giving you code, which you then need to copy & paste in the right files – adjusting to fit within your codebase.
- AI assistants write the code directly inside your codebase (eg. like “autocomplete” except with a task more complex than simply predicting the end of the word you’re typing)
- AI assistants/agents execute the code and debug themselves to correct any errors
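The third stage can be sketched as a generate-run-repair loop: generate code, execute it, and feed any error back to the model until the program runs. This is a minimal illustrative sketch, not a real product – `ask_llm` is a hypothetical stand-in for an actual model API call, stubbed here so the loop is runnable:

```python
# Minimal sketch of a self-debugging code-generation agent loop.
# `ask_llm` is a hypothetical placeholder for a real LLM API call;
# this stub "fixes" a missing import once it sees the error message.
import subprocess
import sys
import tempfile

def ask_llm(prompt: str) -> str:
    if "NameError" in prompt:
        return "import math\nprint(math.sqrt(16))"  # corrected attempt
    return "print(math.sqrt(16))"                   # first, buggy attempt

def generate_and_debug(task: str, max_attempts: int = 3) -> str:
    prompt = task
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        # Write the generated code to a temp file and execute it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout  # success: return the program's output
        # Failure: feed the error back to the model and retry.
        prompt = f"{task}\nPrevious code failed with:\n{result.stderr}"
    raise RuntimeError("agent gave up after max_attempts")

print(generate_and_debug("print the square root of 16"))  # → 4.0
```

A real agent would replace the stub with a model call and add safeguards (sandboxing, timeouts), but the detect-error-and-patch cycle is the same shape.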
The reality is that the vast majority of dev work has already been done before, and is thus trivial to replace with AI.
Humans basically get paid by arbitraging knowledge and producing some output. Software engineers have knowledge of tools, libraries, frameworks, commands, version control, infrastructure, data structures & algorithms, networking, APIs, etc. and use this to write computer programs and create software.
Search engines, Stack Overflow, open source, internet forums, tutorials, YouTube, books, etc. democratized much of this knowledge – but a human is still required to research and apply this information to create software by writing code.
AI can do all of this far better than humans, because AI has all of the world’s knowledge and can execute at the speed of light.
Even if AI code were inferior to human code – which by the end of the decade certainly won’t be the case (just like AI beats humanity’s best chess and Go players) – humans will still be rendered mostly obsolete. Businesses don’t care about code; they want solutions that improve their bottom line. If an AI can solve the problem with half the performance and code quality in a couple minutes, is available 24/7, and only costs pennies or dollars, vs. a team of engineers with $200k/yr salaries taking a month, only available during business hours, and with vacation time, maternity/paternity leave, and other liabilities – who do you think is going to win out? Remember, the AI is only going to get better, faster, and more knowledgeable, while humans are biologically limited in work output, and there will be a point where humans no longer have the edge.
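A back-of-envelope version of that comparison, using the $200k/yr figure from above (the team size of 3 and the few dollars of API cost are my own illustrative assumptions, not real pricing):

```python
# Illustrative cost comparison: one month of a small engineering team
# vs. a handful of AI API calls. Team size and AI cost are assumptions.
team_size = 3
salary_per_year = 200_000
human_cost = team_size * salary_per_year / 12  # one month of salaries
ai_cost = 5.00                                 # a few dollars of API usage

print(f"human: ${human_cost:,.0f}/month, ai: ${ai_cost:.2f}, "
      f"ratio: {human_cost / ai_cost:,.0f}x")  # → ratio: 10,000x
```

Even if the assumed numbers are off by an order of magnitude in either direction, the gap stays enormous, which is the point of the argument.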
I think it’s fairly obvious at this point that AI will supplant humans, but most people are drastically underestimating how quickly this is all going to happen because most humans struggle to understand exponential growth.
I don’t have a crystal ball, but here are some rough conservative predictions I’ll make off the top of my head:
- 2024-07-14 (1 year from now) – AI agents can create codebases, write code in existing codebases directed by prompts (eg. “add this feature”, “create this API endpoint that does this”), and debug themselves (eg. detect compilation/runtime errors and patch them). The tech will be slow and awful at first, so engineers will initially laugh at it, claiming they’re not worried about their jobs.
- 2025-07-14 (2 years from now) – These AI agents will become good enough to automate the more mundane, trivial work. People will still have to know what to prompt to get the right results, and to modify and fine-tune the resulting AI-generated code, but AI will start to take the driver’s seat. Instead of the default being for the human to start tackling the problem and ask the AI for help when in need of assistance, it’ll become expected to let the AI tackle the problem and bring in a human when that fails.
- 2026-07-14 (3 years from now) – AI agents become competent enough that their place is no longer questioned. Everyone is using them in some capacity. AI is the driver, but humans are still necessary to direct it and handle the edge cases where it fails. Typical dev work like building trivial web apps and APIs is a commodity at this point, and nobody is doing it by hand.
- 2027-07-14 (4 years from now) – AI is the norm. At this point, as an engineer you’re either building the AI itself or using it. The job of an engineer has evolved from hand-writing code to more of a managerial, CTO-type position – except instead of delegating to other managers and engineers, one delegates to AI agents.
- 2028-07-14 (5 years from now) – So much human work has been made obsolete by AI that all governments in the modern world have implemented universal basic income.
How to position yourself
Tech has always been a rapidly evolving field that requires constantly staying on top of the latest advancements, and that pace is only going to increase exponentially.
- Stay ahead of the curve on AI advancement. Ensure you’re leveraging the most advanced tools.
- Don’t be overly attached to, or emotionally invested in, any one programming language, library, framework, or technology. Assume it will be displaced sooner than you think.
- Learn architecture and systems design. As we move up the ladder of abstraction, an engineer’s focus and responsibility will be more on deciding what to implement, rather than actually implementing the thing. Individual contributors (ICs) will become more like managers directing AIs.
- Stay humble. Assume that everything you know will be irrelevant in a few years.
- Either become more of a CTO-like generalist or entrepreneur, or pick a specialty/niche (eg. machine learning, distributed systems, product, etc.) to stay at the forefront of. If you specialize, realize that you’re competing with AI.
- Get your government representatives to implement a universal basic income, so that AI advancement becomes an unquestionable good for society instead of what the current system produces: the masses losing their livelihoods in an arms race for an ever-dwindling pot of human jobs as robots overtake us. This can end either in the freedom-and-leisure utopia that Keynes envisioned, or in mass poverty coupled with enormous wealth inequality – and inaction will lead to the latter.
For those who prefer video: https://www.youtube.com/watch?v=B1OZjGaeZj8