Coding after coders: The end of computer programming as we know it?

by angst on 3/12/2026, 10:29 AM with 353 comments

by dsQTbR7Y5mRHnZv on 3/14/2026, 6:38 AM

> in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.

I've always hated solving puzzles with my deterministic toolbox, learning along the way and producing something of value at the end.

Glad that's finally over so I can focus on the soulful art of micromanaging chatbots with markdown instead.

by comrade1234 on 3/13/2026, 8:50 PM

Having an AI is like having a dedicated assistant or junior programmer that sometimes has senior-level insights. I use it to do tedious tasks where I don't care about the code - like today I used it to generate a static web page that let me experiment with the spring-ai chatbot code I was writing - basic. But yesterday it was able to track down the cause of a very obscure bug having to do with a pom.xml loading two versions of the same library. In my experience I've spent a full day on that type of bug, and Claude was able to figure it out from the exception in just minutes.

But when I've used AI to generate new code for features I care about and will need to maintain, it's never gotten it right. I can do it myself in less code, and cleaner. It reminds me of code in the 2000s that you would get from your team in India - lots of unnecessary code copy-pasted from other projects/customers. (I remember getting code for an Audi project that had method names related to McDonald's.)

I think, though, that the day is coming when I can trust the code it produces, and at that point I'll just be writing specs. It's not there yet though.

by dorfsmay on 3/14/2026, 2:55 PM

For me, the biggest shift is people who don't care about local AI. The idea that you can no longer code without paying a tax to one of the billion-dollar-backed companies isn't sitting well.

by bikelang on 3/14/2026, 6:25 AM

If coding truly becomes effortless - and by extension a product becomes nearly free to produce - then I find it quite odd that the executive class thinks their businesses won't be completely upended by a raging sea of competition.

by d4rkp4ttern on 3/14/2026, 11:52 AM

I see lots of discussion about humans no longer writing code, but the elephant in the room is the rapid extinction of human review of AI-made code. I expect this will lead to a massive hangover. In the meantime we try to mitigate it by ensuring the structure of the code remains AI-friendly. I also expect some new types of tools to emerge that will help with this “cognitive debt”.

by nativeit on 3/14/2026, 8:33 PM

So you’ve vibe coded an app. That’s just swell. You want to release it? Don’t. You can’t support it. You can’t update it. You are one bad prompt away from it collapsing.

We are convincing a generation of morons that they can do something they plainly cannot. This will be a major problem, and soon.

by allreduce on 3/14/2026, 8:08 AM

I'm starting to find the naive techno-optimism here annoying. If you don't have capital or can't do something else, you will be homeless.

by yubainu on 3/14/2026, 7:29 AM

In the near future, a "good programmer" might not be defined by someone who can write bug-free, clear code, but rather by someone who can prompt for code that works consistently within the context of AI. If that happens, I'll have to find a different job.

by suheilaaita on 3/14/2026, 10:49 AM

I'm from an accounting/finance background and spent about 10 years in Big4. I was always into tech, but never software development because writing code (as I thought) takes years to master, and I had already chosen accounting.

Fast forward to 2024 when I saw Cursor (the IDE coding agent tool). I immediately felt like this was going to be the way for someone like me.

Back then, it was brutal. I'd fight with the models for 15 prompts just to get a website working without errors on localhost, let alone QA it. None of the plan modes or orchestration features existed. I had to hack around context engineering, memories, all that stuff. Things broke constantly. 10 failures for 1 success. But it was fun. To top it all off, most of the terminology sounded like science fiction, but it got better in time. I basically used AI itself to hack my way into understanding how things worked.

Fast forward again (only ~2 years later). The AI not only builds the app, it builds the website, the marketing, full documentation, GIFs, videos, content, screen recordings. It even hosts it online (literally controls the browser and configures everything). Letting the agent control the browser and the tooling around that is really, genuinely, just mad science fiction type magic stuff. It's unbelievable how often these models get something mostly right.

The reality though is that it still takes time. Time to understand what works well and what works better. Which agent is good for building apps, which one is good for frontend design, which one is good for research. Which tools are free, paid, credit-based, API-based. It all matters if you want to control costs and just get better outputs.

Do you use Gemini for a website skeleton? Claude for code? Grok for research? Gemini Deep Search? ChatGPT Search? Both? When do you use plan mode vs just prompting? Is GPT-5.x better here or Claude Opus? Or maybe Gemini actually is.

My point is: while anyone can start prompting an agent, it still takes a lot of trial and error to develop intuition about how to use them well. And even then everything you learn is probably outdated today because the space changes constantly.

I'm sure there are people using AI 100Ă— better than I am. But it's still insane that someone with no coding background can build production-grade things that actually work.

The one-person company feels inevitable.

I'm curious how software engineers think about this today. Are you still writing most of your code manually?

by flux3125 on 3/14/2026, 4:04 PM

> You can’t just tell an agent, Build me the code for a successful start-up. The agents work best when they’re being asked to perform one step at a time

That's also true for humans. If you sit down with an LLM and take the time to understand the problem you're trying to solve, it can guide you through it step by step. Even a non-technical person could build surprisingly solid software if, instead of immediately asking for shiny new features, they first ask questions, explore trade-offs, and get the model's opinion on design decisions.

LLMs are powerful tools in the hands of people who know they don't know everything. But in the hands of people who think they always know the best way, they can be much less useful (I'd say even dangerous)

by bryanrasmussen on 3/14/2026, 6:01 AM

how many times in the history of computer programming has there been an end to computer programming as we know it, successfully, and how many times predicted?

I can think of one successfully, off hand, although you could probably convince me there was more than one.

the principal phrase being "as we know it", since that implies a large-scale change to how it works, but it continues afterwards, altered.

by anonzzzies on 3/14/2026, 4:37 PM

I dunno; I can finally focus on writing the logic I wanted to write all along, and my upbringing in formal verification finally makes sense, as I can spend my time on it instead of figuring out what garbage updates - which I will never, ever need - my friends added to the framework or language or IDE I happen to use (I cannot use it in my work, but sbcl is one of the things that does not grow tumors in software).

by fixxation92 on 3/13/2026, 8:08 PM

Conversations of the future...

"Can you believe that Dad actually used to have to go into an office and type code all day long, MAUALLY??! Line by line, with no advice from AI, he had to think all by himself!"

by bwhiting2356 on 3/14/2026, 4:26 PM

> Pushing code that fails pytest is unacceptable and embarrassing.

CI is for preventing regressions. Agents.md is for avoiding wasted CI cycles.
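A minimal sketch of that division of labor, assuming the AGENTS.md convention (the section name and wording below are illustrative, not quoted from the article): an instructions entry that makes the agent run the suite locally before handing work back, so CI stays a regression net rather than the first place tests run.

```markdown
## Before committing

- Run the full test suite locally: `pytest -q`.
- Never propose a commit while any test fails; fix the code or revert.
- Treat CI as a regression net, not as the first test run.
```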

by __mharrison__ on 3/14/2026, 1:54 PM

I wasn't around when we moved to that stack from assembly. I didn't experience the mourning then.

Most folks I hang out with are infatuated with turning tokens into code. They are generally very senior 15+ years of experience.

Most folks I hang out with experience existential dread for juniors and those coming up in the field, who won't necessarily have the battle scars to orchestrate systems that will work in the real world.

Was talking with one fellow yesterday (at an AI meetup) who says he has six folks under him, but that he could now run the team with just two of them; the others are basically a time suck.

by jazz9k on 3/12/2026, 12:12 PM

Because they are still making the same salary. In 5 years, when their job is eliminated, and they can't find work, they will regret their decision.

by ripe on 3/12/2026, 12:33 PM

> it could also be that these software jobs won’t pay as well as in the past, because, of course, the jobs aren’t as hard as they used to be. Acquiring the skills isn’t as challenging.

This sounds like the opposite of what the article said earlier: newbies aren't able to get as much use out of these coding agents as more experienced programmers do.

by lelanthran on 3/13/2026, 8:01 PM

This is a very one-sided article, unashamedly so.

Where are the references to the decline in quality and the embarrassing outages at Amazon, Microsoft, etc.?

by htx80nerd on 3/13/2026, 8:00 PM

You have to hold the AI's hand to get even simple vanilla JS done correctly, or stick to framework code that is well documented all over the net. I love AI and use it for programming a lot, but the limitations are real.

by youknownothing on 3/14/2026, 2:50 PM

I once suggested a drinking game: shot every time someone says "X is dead". I was told to f** off because I'd kill half of humanity.

COBOL is dead. Java is dead. Programming is dead. AI is dead (yes, some people are already claiming this: https://hexa.club/@phooky/116087924952627103)

I must be the kid from The Sixth Sense because I keep seeing all these allegedly dead guys around me.

by cineticdaffodil on 3/14/2026, 5:21 PM

Revenge of the writers and software managers: the wishful hope of those made redundant that hurt will come to the people they blame for their redundancy.

by IntrepidPig on 3/13/2026, 9:46 PM

> “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”

This doesn’t really make sense to me. GenAI ostensibly removes the drudgery from other creative endeavors too. You don’t need to make every painstaking brushstroke anymore; you can get to your intended final product faster than ever. I think what’s commonly misunderstood is that the drudgery is really inseparable from the soulful part.

Also, I think GenAI in coding actually has the exact same failure modes as GenAI in painting, music, art, writing, etc. The output lacks depth, it lacks context, and it lacks an understanding of its own purpose. For most people, it’s much easier to intuitively see those shortcomings of GenAI manifest in traditional creative mediums, just because they come more naturally to us. For coding, I suspect the same shortcomings apply, they just aren’t as clear.

I mean, at the end of the day if writing code is just to get something that works, then sure, let’s blitz away with LLMs and not bother to understand what we’re doing or why we do it anymore. Maybe I’m naive in thinking that coding has creative value that we’re now throwing away, possibly forever.

by CrzyLngPwd on 3/14/2026, 8:35 AM

Visual Basic was the end of programming as we knew it...until it wasn't.

by igor47 on 3/14/2026, 6:35 AM

I'm not normally a fan of the NYT, but this wasn't too bad. It passed the Gell-Mann amnesia test and is clearly written by someone who knows the field well, even though the selection of quotes skews towards outliers -- I think Yegge, for instance, is pretty far out of the mainstream in his views on LLMs, whether ahead or sideways.

As a result a lot of the responses here are either quibbles or cope disguised as personal anecdotes. I'm pretty worried about the impact of the LLMs too, but if you're not getting use out of them while coding, I really do think the problem is you.

Since people always want examples, I'll link to a PR in my current hobby project, which Claude Code helped me complete in days instead of weeks: https://github.com/igor47/csheet/pull/68 Though this PR creates a bunch of tables, routes, and services, it's not just greenfield CRUD work. We're figuring out how to model a complicated domain (the rules of DnD 5e, including the 2014 and the 2024 revisions of those rules), integrating with existing code, and thinking through complex integrations, including with LLMs at run time. Claude is writing almost all the code; I'm just steering.

by Nevermark on 3/14/2026, 8:39 AM

The psycho-engineering of model prompts does feel very Philip K. Dick.

If your base prompt informs the model they are a human software developer in a Severed situation, it gets even closer.

by whoisstan on 3/14/2026, 11:58 AM

I feel the need to tell the LLM to rewrite the article for a software developer audience, but I don't; those kinds of passages are hard to overcome:

'Salva opened up his code editor — essentially a word processor for writing code — to show me what it’s like to work alongside Gemini, Google’s L.L.M. '

And what's up with L.L.M, A.I., C.L.I. :)

by daveguy on 3/14/2026, 5:21 PM

> "...melodramatic prose might seem kind of nuts, but as their name implies, large language models are language machines. “Embarrassing” probably imparted a sense of urgency.

> “If you say, This is a national security imperative, you need to write this test, there is a sense of just raising the stakes,” Ebert said.

I'm not sure why programmers and science writers are still attributing emotions to this and why it works. Behind the LLM is a layer that attributes attention to various parts of the context. There are words in the English language that command greater attention. There is no emotion or internal motivation on the part of the LLM. If you use charged words you get charged attention. Quite literally "attention is all you need" to describe why appealing to "emotion" works. It's a first order approximation for attention.
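The mechanism this comment alludes to can be sketched in a few lines. Below is a toy, self-contained illustration of scaled dot-product attention; the vectors are made up for the example (real models learn embeddings from data, and a "charged" word is not literally a scaled one-hot key). The point it demonstrates is only the shape of the effect: a context token whose key strongly matches the query soaks up most of the softmax weight.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

d = 8                 # embedding dimension (arbitrary for this toy)
keys = np.eye(4, d)   # 4 orthonormal key vectors, one per context token
query = 3.0 * keys[2] # a query strongly aligned with token 2 -
                      # a stand-in for a highly salient ("charged") word

# Scaled dot-product attention: weights = softmax(K·q / sqrt(d))
scores = keys @ query / np.sqrt(d)
weights = softmax(scores)

print(weights.round(3))  # token 2 receives the bulk of the attention mass
```

The weights always sum to 1, so attention is zero-sum: the more mass a salient token attracts, the less every other token gets, which is one way to read "charged words get charged attention" without appealing to emotion.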

by zjp on 3/13/2026, 7:47 PM

There is no such thing as "after coders": https://zjpea.substack.com/p/embarrassingly-solved-problems

This excerpt:

>A.I. had become so good at writing code that Ebert, initially cautious, began letting it do more and more. Now Claude Code does the bulk of it.

is a little overstated. I think the brownfield section has things exactly backwards. Claude Code benefits enormously from large, established codebases, and it’s basically free riding on the years of human work that went into those codebases. I prodded Claude to add SNFG depictions to the molecular modeling program I work on. It couldn’t have come up with the whole program on its own and if I tried it would produce a different, maybe worse architecture than our atomic library, and then its design choices for molecules might constrain its ability to solve the problem as elegantly as it did. Even then, it needed a coworker to tell me that it had used the incorrect data structure and needed to switch to something that could, when selected, stand in for the atoms it represented.

Also this:

>But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose.

Isn’t really true. It’s the free-riding problem again. The thing about an ESP is that the LLM has the advantage of either a blank canvas (if you’re using one to vibe code a startup), or at least the fact that several possibilities converge on one output, but, genuinely, not all of those realities include good coding architecture. Models can make mistakes, and without a human in the loop those mistakes can render a codebase unmaintainable. It’s a balance. That’s why I don’t let Claude stamp himself to my commits even if he assisted or even did all the work. Who cares if Claude wrote it? I’m the one taking responsibility for it. The article presents Greenfield as good for a startup, and it might be, but only for the early, fast, funding rounds, when you have to get an MVP out right now. That’s an unstable foundation they will have to go back and fix for regulatory or maintenance reasons, and I think that’s the better understanding of the situation than framing Aayush’s experience as a user error.

Even so, “weirdly jazzed about their new powers” is an understatement. Every team, including ours, has decades of programmer-years of tasks in the backlog; what’s not to love about something you can set loose on pet peeves for free and then see if the reality matches the ideal? git reset --hard if you don't like what it does, and if you do, all the better. The Cuisy thing with the script for the printer is a perfect application of LLMs: a one-off that doesn’t have to be maintained.

Also, the whole framing is weirdly self-limiting. The architectural taste that LLMs are, again, free-riding off of is hard-won by doing the work that senior engineers are now giving to LLMs instead of to juniors. We’re setting ourselves up for a serious collective action problem as a profession. The article gestures at this a couple of times.

The thing about threatening LLMs is pretty funny too but something in me wants to fall back to Kant's position that what you do to anything you do to yourself.

by nenadg on 3/14/2026, 7:38 AM

Sensationalism. Give it a couple of months.

by lagrange77 on 3/13/2026, 8:09 PM

It's really time that mainstream media picks up on 'agentic coding' and the implications of writing software becoming a commodity.

I'm an engineer (not only software) at heart, but after seeing what Opus 4.6-based agents are capable of, and especially the rate of improvement, I think the direction is clear.

by 0xbadcafebee on 3/14/2026, 6:48 AM

Back in the day, programming was done on punch cards. In 20 years, that's how kids will see typing out lines of program code by hand.

by somewhereoutth on 3/14/2026, 11:49 AM

I have a suspicion that for a task (or to make an artifact) of a given complexity, there is a minimum level of human engagement required to complete it successfully - and that human engagement cannot be replaced by anything else. However, the actual human engagement for a task is not bounded above - efficiency is often less (much less?) than 100%.

So tools (like AI) can move us closer to the 100% efficiency (or indeed further away if they are bad tools!) but there will always be the residual human engagement required - but perhaps moved to different activities (e.g. reviewing instead of writing).

Probably very effective teams/individuals were already close to 100% efficiency, so AI won't make much difference to them.

by holoduke on 3/14/2026, 11:03 AM

The best developers are the ones using AI to its fullest. Mediocre devs will become useless, since even a PO could do their job. But one who understands architecture, software, code, and AI will be expensive to hire. I know plenty of them. I worry for the ones not willing to adopt AI.

by DGAP on 3/14/2026, 2:00 PM

Lots of cope here. Highly paid white collar jobs are going to disappear.

by xenadu02 on 3/13/2026, 9:15 PM

It's an accelerator. A great tool if used well. But just like all the innovations before it that were going to replace programmers, it simply won't.

I used Claude just the other day to write unit test coverage for a tricky system that handles resolving updates into a consistent view of the world and handles record resurrection/deletion. It wrote great test coverage because it parsed my headerdoc and code comments that went into great detail about the expected behavior. The hard part of that implementation was the prose I wrote and the thinking required to come up with it. The actual lines of code were already a small part of the problem space. So yeah Claude saved me a day or two of monotonously writing up test cases. That's great.

Of course Claude also spat out some absolute garbage code using reflection to poke at internal properties because the access level didn't allow the test to poke at the things it wanted to poke at, along with some methods that were calling themselves in infinite recursion. Oh and a bunch of lines that didn't even compile.

The thing about those errors is that most of them came from a fundamental inability to reason. They were technically correct in a sense. I can see how a model that learned from other code written by humans would learn those patterns and apply them; in some contexts they would be best practice or even required. But the model can't reason. It has no executive function.

I think that is part of what makes these models both amazingly capable and incredibly stupid at the same time.

by CollinEMac on 3/13/2026, 8:14 PM

>but like most of their peers now, they only rarely write code.

Citation needed. Are most developers "rarely" writing code?

by sjeiuhvdiidi on 3/14/2026, 5:15 AM

It's all nonsense. It's just better search; the intelligence in it is not artificial. They are trying to convince everyone that they don't need to pay programmers. That's all it is. It'll work on the ignorant, who'll take less money to make sure it works and fix the bugs, which is mostly what they were paying for anyway. They just want to devalue the work of the people they are reliant on. Nothing new.

by gist on 3/13/2026, 9:42 PM

For one thing, comments here appear to apply to the quality and issues of today, not going forward. Quality will change quicker than anyone expects. I wonder how many people on HN remember when the first Mac came out with MacPaint, and then PageMaker or Quark. That didn't evolve anywhere near as quickly as AI appears to be.

Also, I am not seeing anyone consider that in any business what matters is not what a programmer considers quality but what 'gets the job done' (as mentioned in the article). (Example from typesetting: the original laser printers were only 300 dpi, but after a short period they reached 1200 dpi, 'good enough' for camera-ready copy.)

by kittikitti on 3/13/2026, 7:55 PM

Another trash article from the New York Times, which financially benefits from this type of content because of its ongoing litigation against OpenAI. I think the assumption that developers don't code is wrong. Most software engineers don't even want to code; they are opportunists looking to make money. I have yet to experience this cliff of coding. These people aren't asking hard enough questions. I have a bunch of things I want AI to build that it completely fails on.

The article could have been written from a very different perspective. Instead, the "journalists" likely interviewed a few insiders from Big Tech and generalized. They don't get it. They never will.

Before the advent of ChatGPT, maybe 2 in 100 people could code. I was actually hoping AI would increase programming literacy, but it didn't; coding became even rarer. Many journalists could have come at it from this perspective, but instead painted doom and gloom for coders and computer programming.

The New York Times should look in the mirror. With the advent of the iPad, most experts agreed that they would go out of business because a majority of their revenue came from print media. Look what happened.

Understand this: most professional software and IT engineers hate coding. It was a flex to say you no longer code professionally before ChatGPT, and it's still a flex now. But it's corrupt journalism when there is a clear conflict of interest, because the NYT is suing the hell out of AI companies.

by deflator on 3/12/2026, 4:34 PM

What is a coder? Someone who is handed the full specs and sits down and just types code? I have never met such a person. The most annoying part of SWE is everyone who isn't an SWE has inane ideas about what we do.

by fraywing on 3/13/2026, 8:42 PM

I keep getting stuck on the liability problem of this supposed "new world". If we take this as far as it goes: AI agent societies that design, architect, and maintain the entire stack E2E with little to no oversight. What happens when rogue AIs do bad things? Who is responsible? You have to have fireable senior engineers who understand deep fundamentals to make sure things aren't going awry, right? /s

by ramesh31 on 3/12/2026, 1:15 PM

Because we love tech? I'm absolutely terrified about the future of employment in this field, but I wouldn't give up this insane leap of science fiction technology for anything.