People who are saying they're not seeing a productivity boost: can you please share where it is failing?
Because I am terrified by the output I am getting while working on huge legacy codebases: it works. I described one of my workflow changes here: https://news.ycombinator.com/item?id=47271168 but in general, compared to the old way of working, I am consistently saving half the steps, whether it's researching the codebase, integrating new things, or even making fixes. I have stopped writing code. Occasionally I jump into the changes proposed by the LLM and make manual edits if that's feasible; otherwise I revert the changes and ask it to generate again, based on what I learned from the rejected output.
I am terrified about what's coming
I don't write code for a living but I administer and maintain it.
Every time I say this people get really angry, but: so far AI has had almost no impact on my job. Neither my dev team nor my vendors are getting me software faster than they were two years ago. Docker had a bigger impact on my pipeline than AI has.
Maybe this will change, but until it does I'm mostly watching bemusedly.
From my experience as a software engineer, doubling my productivity hasn’t reduced my workload. My output per hour has gone up, but expectations and requirements have gone up just as fast. Software development is effectively endless work, and AI has mostly compressed timelines rather than reduced total demand.
I'm working on a project right now that is heavily informed by AI. I wouldn't even try it if I didn't have the help. It's a big job.
However, I can't imagine vibe-coders actually shipping anything.
I really have to ride herd on the output from the LLM. Sometimes the error is PEBCAK, because I erred when I prompted, and that can lead to very subtle issues.
I no longer review every line, but I also have not yet gotten to the point where I can just "trust" the LLM. I assume there are going to be problems, and I haven't been disappointed yet. The good news is, the LLM is pretty good at figuring out where we messed up.
I'm afraid to turn on SwiftLint. The LLM code is ... prolix ...
All that said, it has enormously accelerated the project. I've been working on a rewrite (server and native client) that took a couple of years to write, the first time, and it's only been a month. I'm more than half done, already.
To be fair, the slow part is still ahead. I can work alone (at high speed) on the backend and communication stuff, but once the rest of the team (especially shudder the graphic designer) gets on board, things are going to slow to a crawl.
I don't think there's been much of an impact, really. Those who know how to use AI just got marginally more productive (because why would you reveal your fake 10x productivity boost so your boss hands you 10x more tasks to finish?), and those without AI knowledge stayed the way they were.
The real impact is for indie devs and freelancers, but that usually doesn't account for much of the GDP.
I am not going to trust a single word from a company whose business is selling you AI products.
One of the more interesting takes I heard from a colleague, who’s in the marketing department, is that he uses the corporate approved LLM (Gemini) for “pretend work” or very basic tasks. At the same time he uses Claude on his personal account to seriously augment his job.
His rationale is that he won't let the company log his prompts and responses, so they can't build an agentic replacement for him. Corporate rules about shadow IT be damned.
Only the paranoid survive I guess
The numbers they show are barely distinguishable from noise, as far as I can interpret them.
For me, the impact is absolutely in hiring juniors. We basically just stopped doing it. There's almost no work a junior can do that I wouldn't look at and think it's easier to hand off in some form (possibly different from what the junior would do) to an AI.
It's a bit illusory though. It was always the case that handing off work to a junior person was often more work than doing it yourself. It's an investment in the future to hire someone and get their productivity up to a point of net gain. As much as anything it's a pause while we reassess what the shape of expertise now looks like. I know what juniors did before is now less valuable than it used to be, but I don't know what the value proposition of the future looks like. So until we know, we pause and hold - and the efficiency gains from using AI currently are mostly being invested in that "hold" - they are keeping us viable from a workload perspective long enough to restructure work around AI. Once we do that, I think there will be a reset and hiring of juniors will kick back in.
I know kids avoiding many high-paying careers because of AI right now, and artists just giving up everywhere I look. Thanks, AI.
I know multiple devs who would have a very large productivity increase but instead choose to slow down their output on purpose and play video games instead. I get it.
Based on my experience using AI for development work, you really need to work with it instead of expecting it to do the work for you. Rather than typing the code yourself, you now need to explain the task very clearly, then review or test the generated code, then ask it to refactor and fix the issues you identify. This is itself work, a different way of working compared to manual coding, and it doesn't mean significant overall productivity gains are guaranteed.
Wait a second, did they measure exposure from Claude logs and just assumed impact?
Let's say I sell snake oil and I survey every buyer, trying to convince everyone doctors won't be needed in the future.
First conclusion is that retired population seeks medical services the most (reality check - according to CDC most doctor visits are for infants).
Second conclusion is that because it's a snake oil, it heals all the problems and those people will never return to outdated healthcare system.
AI is coming for jobs—but the real risk isn’t where most people are looking.
The leading AI exposure indices (Anthropic, Eloundou et al.) focus on which jobs get automated. They treat low exposure as “safe.”
But the least exposed workers—cooks, roofers, dishwashers, construction laborers—are often in the worst jobs: low pay, high physical toll, short career spans, and little upward mobility. Safe from AI, but not from burnout or injury.
I built JQADI (Job Quality-Adjusted Displacement Index) to combine AI exposure with job quality. It surfaces three kinds of risk:
High AI exposure → classic displacement risk
Low AI, low quality → "trapped" workers in grinding, unsustainable jobs
Moderate AI, low quality → partial automation strips cognitive work and leaves physical drudgery (the "task residual" effect)
Findings: 83.5M workers are in low-AI, low-quality jobs. Customer service reps, data entry keyers, and medical records specialists sit at the intersection of high exposure and poor quality. Meanwhile, chief executives and lawyers are both low-exposure and high-quality.
The index uses O*NET, BLS, and Anthropic exposure data. Code and methodology are open source: https://github.com/quinndupont/JQADI
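The three risk buckets above can be sketched as a tiny classifier. This is only an illustration with made-up thresholds and a hypothetical `classify_risk` helper; it is not the published JQADI methodology (see the repo for that).

```python
# Minimal sketch of a quality-adjusted displacement bucketer.
# Thresholds (0.33 / 0.66) and the 0-1 scores are illustrative
# assumptions, not taken from the JQADI methodology.

def classify_risk(ai_exposure: float, job_quality: float,
                  hi: float = 0.66, lo: float = 0.33) -> str:
    """Bucket an occupation by AI exposure and job quality (both 0-1)."""
    if ai_exposure >= hi:
        return "displacement"   # high exposure: classic automation risk
    if ai_exposure < lo and job_quality < lo:
        return "trapped"        # safe from AI, but a grinding job
    if job_quality < lo:
        return "task residual"  # partial automation leaves the drudgery
    return "low risk"

# e.g. a customer-service rep: high exposure, low quality
print(classify_risk(0.8, 0.2))  # -> displacement
```

Real occupations would pull both scores from the O*NET/BLS-derived data rather than hand-picked numbers.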
I think it really depends on what you're working on. I do some consulting and found it's not helping the C++ devs as much as it's helping the HTML/JS devs.
Productivity up by 10%. Happiness, life satisfaction and feeling of self-worth down by 20%.
My day-to-day is even busier now, with agents all over the place making code changes. The security landscape became more complex practically overnight. The clearest negative impact I see is that there's not much need for junior devs right now; the agent fills that role in a way. But we'll have to backfill one way or another.
A possible outcome of AI: domestic technical employment goes up because the economics of outsourcing change. Domestic technical workers working with AI tools can replace outsourcing shops, eliminating time-shift issues, etc at similar or lower costs.
The problem with using unemployment as a metric is that hiring is driven by perception. You're making an educated guess about how many people you'll need in the future.
Anthropic can cause layoffs through pure marketing. People were crediting an Anthropic statement with causing a drop in IBM's stock value, which may genuinely lead to layoffs: https://finance.yahoo.com/news/ibm-stock-plunges-ai-threat-1...
We'll probably have to wait for the hype to wear off to get a better idea, but that might take a long while.
My speed shipping software increased but so did the demands of features by my company.
The endgame will be workers competing with networks of AIs that can solve business problems at all levels.
I'm curious how the system will maneuver itself to deprive workers of pay so that they can stay competitive with the ever-decreasing cost of AI.
Conversely, I'm curious how disruptors will find ways to provide workers with pay (perhaps through mutual aid networks, grants and alternative socioeconomic systems) so that they can use AI to produce the resources they need outside of the contracting labor market.
The productivity-to-workload compression point is real, but I think the more interesting dynamic is what happens to team shape over time. Running a small sales team with agents, the bottleneck shifted from execution to judgment almost overnight. The work didn't disappear, but who's doing which part of it changed pretty fast.
The junior hiring slowdown makes sense in that context. Junior roles were often the execution layer, and that layer is getting absorbed. Whether that's bad long-term depends on whether there's still a path to build judgment without first doing the execution work for years. But on entry-level teams you typically have 20% of people who are outstanding and 80% who are average. I assume that 20% will simply be able to cover more ground.
If people think Elite Overproduction (https://en.wikipedia.org/wiki/Elite_overproduction) is causing strife now, wait until tens of thousands of people with degrees get thrown out of work.
You know you're having a real impact when you have to self-report on the impact you're having.
> There's suggestive evidence that hiring of young workers (ages 22–25) into exposed occupations has slowed — roughly a 14% drop in the job-finding rate
There goes my excuse of not finding a job in this market.
We are rewriting our entire frontend from Webpack + Gatsby to Vite + React, we converted all static pages in one day using Claude Code.
We basically have ~40 components and 6 pages to go until the rewrite is complete. I am sure we will run into bumps in the road, but it's been crazy to watch.
We also added i18n (English + Spanish), ThemeProvider for white labeling solution, and WCAG 2A compliance, all in one shot.
If I went to a third party and asked them to rewrite just the static pages it would have been $200k and 3 months of work.
One thing I’m noticing in organizations is that AI tends to amplify judgment rather than replace it.
I think experienced people move faster because they can evaluate the output and redirect it, less experienced people often struggle because they don’t yet know what “good” looks like.
The interesting long-term question is how companies rebuild the environments where that judgment gets developed in the first place.
I'm an SDE with 1 YOE using AI tools heavily (doing "day's work" in ~2 hrs, perfect reviews). Spending most time on specs/review vs. raw coding. Worried I'm optimising short-term output over long-term skill development. Should I consider pivoting to AI/ML roles? Would love advice from anyone who's hired juniors in the current era.
Most of my recent interviews have mostly been people telling me that I am an idiot because I can't leetcode proficiently enough, so those jobs going away doesn't really affect me, and at the same time it makes sense: LLMs should be good at the leetcode classics that are the basis for rating software development productivity.
I shipped solo what would've been a 4-5 person team's output. The productivity gain is real but wildly uneven across tasks within the same role — and that unevenness is what makes aggregate labor stats misleading.
This keeps me up at night. I'm in a role that is essentially deployment management for LLMs at a FAANG-esque company. Very little coding or need to code; mostly navigating GUIs, pipelines, and Docker to get deployments updated with a new venting or model version or some patch.
There seem to be some errors in the conversion from PDF to web in this report. For example, the web version of Figure 7 has the legend colours reversed.
Anthropic should be outsourcing studies like this by providing data to non-affiliated researchers instead of doing the analysis themselves.
How is Anthropic getting this data? Are they running science experiments on people's chat history? (In the app, API or both?)
Did you all read about the AWS outage that lasted 13 hours because their autonomous AI agent decided to delete everything and write it from scratch?
Has this been peer-reviewed?
Never trust a statistic you haven't forged yourself.
I'm not really concerned about the availability of SW dev jobs, but I am concerned about the quality of them. For many companies the velocity (and quality, much to my chagrin) of the code you can produce doesn't really matter. What matters more is whether or not you're building the right thing, and too often you're not.
These companies also tend to keep more headcount than seems justified, I think because they are gambling that a few employees are going to do something awesome, but they don't know which ones. As AI gets better, what will these companies do? I don't think they will fire a bunch of SW devs. I think instead they will embrace the slop and just take more shots, and crazier shots. It doesn't just give us something to do, it also gives a bunch of PHBs something to do.
This is a pretty interesting report.
The TL;DR is that there is little measurable impact (and I'd personally add "yet").
To quote:
"We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations"
My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.
I have doubts that this impact is measurable yet - there is a lag between hiring intention and impact on jobs, and outside Silicon Valley large scale hiring decisions are rarely made in a 3 month timeframe.
The most interesting part is the radar plot showing the lack of usage of AI in many industries where the capability is there!
What's interesting from a practical standpoint: the paper confirms what we're seeing in SME deployments – AI augments, not replaces. But the real productivity gain only kicks in when you redesign the process around the AI, not just bolt it on. Most small businesses skip that step entirely and then wonder why their 'AI tool' isn't delivering. The organizational restructuring is the hard part, not the technology. Anyone here seen teams actually get this right systematically?
> Claude is extensively used for coding, Computer Programmers are at the top, with 75% coverage
I think there are some advantages to being first.
It's time to re-evaluate strategies if we've been operating under the assumption that this is going to be a bubble, or otherwise largely bullshit. It definitely works. Not everywhere all the time, but often enough to be "scary" now. Some of my prior dismissals like "text 2 sql will never work" are looking pale in the face today.
I call BS on this, as the ones displaced aren't in the workforce anymore. I haven't been able to find work in over a year, despite applying to over 200 jobs a month.
This rhymes with another recent study from the Dallas Fed: https://www.dallasfed.org/research/economics/2026/0224 - it suggests AI is displacing younger workers but boosting experienced ones. This matches the discussion here, as well as a couple of similar studies we've seen.
Also, it seems to me the concept of "observed exposure" is analogous to OpenAI's concept of "capability overhang" - https://cdn.openai.com/pdf/openai-ending-the-capability-over...
I think the underlying reason is simply because companies are "shaped wrong" to absorb AI fully. I always harp on how there's a learning curve (and significant self-adaptation) to really use AI well. Companies face the same challenge.
Let's focus on software. By many estimates, code-related activities are only 20-60%, maybe even as low as 11%, of software engineers' time (e.g. https://medium.com/@vikpoca/developers-spend-only-11-of-thei...). But consider where the rest of the time goes: largely coordination overhead. Meetings and the like drain a lot of time (more so the more senior you get), and those are mostly about getting a bunch of people across the company, along the dependency web, to align on technical directions and roadmaps.
I call this "Conway Overhead."
This is inevitable because the only way to scale cognitive work was to distribute it across a lot of people with narrow, specialized knowledge and domain ownership. It's effectively the overhead of distributed systems applied to organizations. Hence each team owned a couple of products / services / platforms / projects, with each member working on an even smaller part at a time. Coordination happened along the hierarchy of the org chart because that is most efficient.
Now imagine, a single AI-assisted person competently owns everything a team used to own.
Suddenly the team at the leaf layer is reduced to 1 from about... 5? This instantly gets rid of a lot of overhead like daily standups, regular 1:1s and intra-team blockers. And inter-team coordination is reduced to a couple of devs hashing it out over Slack instead of meetings and tickets and timelines and backlog grooming and blockers.
So not only has the speed of coding increased, the amount of time spent coding has also gone up. The acceleration is super-linear.
But, this headcount reduction ripples up the org tree. This means the middle management layers, and the total headcount, are thinned out by the same factor that the bottom-most layer is!
And this focused only on the engineering aspect. Imagine the same dynamic playing out across departments when all kinds of adjacent roles are rolled up into the same person: product, design, reliability...
These are radical changes to workflows and organizations. However, at this stage we're simply shoe-horning AI into the old, now-obsolete ticket-driven way of doing things.
So of course AI has a "capability overhang" and is going to take time to have broad impact... but when it does, it's not going to be pretty.
Much like LLM output, this seems convincing at first glance, but it's stating assumption as fact, that assumption being "when LLMs get better". They don't say what that ceiling is, but then go on to say "it can't represent someone in court". Why not? It can reason about more law and precedent than any human can, right? As a society, surely we want a fair and by-the-book justice system?
Look at GPT 5.4 and Opus, we’re clearly hitting diminishing returns already and these guys are pumping unsustainable amounts of money into them.
I’m bullish on AI, it’s been a net positive for me and my team. All I see here though is propaganda disguised as science to convince businesses to shrink their engineering budgets and redirect it to AI companies.
TL;DR: AI company says AI is amazing, more at 10.
I really hate to say it, but this article in particular needs a TL;DR. The author takes the web-recipe approach: the actual factual info isn't up front, so you have to parse through everything to find anything important.
Kinda done with this.
If you have something important to say, say it up front and back it up with literature later.
cigarettes don't cause cancer! -cigarette companies
I was at a big tech for last 10 years, quit my job last month - I feel 50x more productive outside than inside.
Here is my take on AI's impact on productivity:
First let's review what are LLMs objectively good at: 1. Writing boiler plate code 2. Translating between two different coding languages (migration) 3. Learning new things: Summarizing knowledge, explaining concepts 4. Documentation, menial tasks
At a big tech product company, #1, #2, and #3 are not as frequent as one would think; most of the time is spent in meetings and meetings about meetings. Things move slowly, and it's designed to be like that. The majority of devs are working on integrating systems: whatever their manager sold to their manager, and so on. The only time AI really helped me at my job was during a one-week hackathon. Outside of that, integrating AI felt like more work rather than less, without much of a productivity boost.
Outside, it has proven to be a real productivity boost for me. It checks all four boxes. Plus, I don't have to worry about legal, integrations, or production bugs (eventually those will come).
So, depends who you are asking -- it is a huge game changer (or not).