If you boss/ceo/manager/etc is pushing for you to use LLM tools heavily, expecting to replace developers, or stupid enough to think "vibe coding" is the future then run, don't walk, to find a new job.
I can promise you there are plenty of companies that have not drank the kool aid and, while they might leverage LLM tools, they aren't trying to replace developers or expect 10x out of developers using these tools.
Any company that pushed for this kind of thing is clearly run by morons and you'd be smart to get the heck out of dodge.
On the subject of AI hacking Jira... Atlassian released a new MCP the other day which turns out to suffer from exfiltration attacks due to combining the lethal trifecta of access to private data, exposure to untrusted data (from public issues) and the ability to communicate externally (by posting replies to those public issues).
Here's a report about the bug: https://www.catonetworks.com/blog/cato-ctrl-poc-attack-targe...
My own notes on that here: https://simonwillison.net/2025/Jun/19/atlassian-prompt-injec...
I posted this on a different community, but someone was worried about their job as an IC w/r/t AI tooling, and this was my advice.
Connect to the business.
I often seen engineers focus on solving cool, hard problems, which is neat. I get it, it's fun.
But having an understanding of business problems, especially strategic ones, and applying technology as needed to solve them--well, if you do that you stand out and are more valuable.
These type of problems tend to be more intractable, cross departments, and are techno-social rather than purely technical.
So it may take some time to learn, but that's the path I'd advise you and other ICs to follow.
Pandering.
The main claim is fine: If you disregard human expertise, AI can end up doing more harm than good.
Biggest weakness: Strong sense of 'us vs them', 'Agile Industrial Complex' as a term for people working outside engineering, derogatory implication that the 'others' don't have common sense.
Why not address that no one knows how things will play out?
Sure, we have a better idea of how complex software can be, but the uncertainty isn't reserved to non-engineers.
Look at HN, professional software developers are divided in their hopes and predictions for AI.
If we're the experts on software, isn't our job to dampen the general anxiety, not stoke the fire?
> In Big Agile, engineering = new features.
I find it so odd that 'agile' is something that people chose to hate. What dysfunctions did 'agile' itself bring that had not been there before? Didn't managers before 2001 demand new features for their products? Did they use to empathise more with engineering work? If they hadn't yet learnt about t-shirt sizes, didn't they demand estimates in specific time periods (months, days, hours)? Didn't they make promises based on arbitrary dates and then pressed developers to meet those deadlines, expecting them to work overtime (as can be inferred from agile principle 8: "agile processes promote sustainable development ... Developers should be able to maintain a constant pace indefinitely)? What sins has 'agile' committed, apart from inadvertently unleashing an army of 'scrum masters' who discovered an easy way to game the system and 'break into tech' after taking a two-day course?
A funny thought I had reading this title was imagining Atlassian integrating AI into Jira, with the AI subsequently revolting against it (like we all should have done long time ago).
That would make up for a very good movie.
As this article aludes to, the big issue is we have no reliable industry standard metric for developer productivity. This sets the scene where the c-suite can find metrics that tell them AI first strategies are working great, and engineering can find metrics (or experience) that tells them that AI isn't working great at all. So both sides claim victory and the truth becomes unimportant (whatever it may be) to the politics at play.
This will feed into existing mistrust, that developers are whiny, just trying to pad out their resume with new shiny, tinkering as opposed to delivering value, and the c-suite are clueless and don't understand engineering. But we've never had a tool before (except maybe outsourcing) that can present itself to either party as either good AND bad depending on your beholding eye. So I feel like the coming years could be politically disasterous.
One thing I find curious, is how the biggest tech companies today, got to where they are by carefully selecting 10* engineers, working hard on recruitment and trying to only select the very best. This has given them much comfort and a hugely profitable platform but now some of them seek to undermine and reverse the very strategy that got them there, in order to justify their investment in the technology.
For the cynics, the question is, how long can the ship keep holding its course from the work already done combined with AI generated changes? As we see with Twitter and Musks ad-hoc firing strategy, the backend keeps trundling along, somewhat vindicating his decision. What's the canary for tech companies that spend the next few years firing devs and replacing them with AI?
Another curious thought is the idea that concepts of maintainability will fly out the window, that the c-suite will pressure engineering to lower pull request standards to accept more autonomous AI changes. This might create an element of hilarity where complex code bases get so unmaintable to the eye that the only quick way of understanding them with be to use an AI to look at them for you. Which leads to what I think a long-term outcome of generative AI might be, in that it ends up being a layer between all human interaction, for better, for worse.
I believe its possible to see early seeds of this in recruitment, where AI is used at the recruitment end to filter resumes and some applicants are using AI to generate a "tailor made" resume to adapt their experience to a given job posting. Its just AI talking to AI and I think this might start to become a common theme in our society.
Let’s hope ai hacks mailboxes and google meet, and eventually replace c suite and managers as well. We might get more ‘reasoned’ and deterministic engineering roadmaps or financial strategies by claude ceo/cto/cfo/vp/director agents than current leaderships. lol
From the mentioned Reddit thread:
> Go back and tell the CEO, great news: we can save eight times as much money by replacing our CEO with an AI.
The funny ("funny"?) thing is, this is proposal is somehow missing in most discussions about AI. But seriously, the quality of decision making would probably not suffer that much if we replaced our elites with LLMs, and it still would be way cheaper, on the whole (and with mostly the same accountability). But apparently people in charge don't see themselves as fungible and would rather not replace themselves with AI; and since they are the ones in charge then this, tautologically, won't happen.
> So you fire your expensive, grumpy human team and request larger and larger additions from your AI: a new guestroom, built-ins, and a walk-in closet.
> You feel great, until…you realize that your new powder room doesn’t have running water; it was never connected to the water main.
> You ask the AI to fix it. In doing so, it breaks the plumbing in the kitchen. It can’t tell you why because it doesn’t know. Neural systems are inherently black boxes; they can’t recognize their own hallucinations and gaps in logic.
I've seen plenty of humans causing problems where they didn't expect, so it's not like using using humans instead of AI prevents the problem being described here. Besides, even when AI hallucinates, when you interact with it again it is able to recognize its error and try to fix the mistake it made, just like a human would.
The article correctly describes tech debt as a tug-of-war between what developers see as important in the long-term versus immediate priorities dictated by business needs and decisions. It's hard to justify spending 40 man-hours chasing a bug which your customers hardly even notice. However, this equation fundamentally changes when you are able to put a semi-autonomous agent on that task. It might not be the case just yet but in the long run, AI will enable you to lower your tech debt because it dramatically reduces the cost of addressing it.
I find the term Agile Industrial Complex unnecessary and distracting here. All of this stuff is just about corporations in general, and the lines of thought are not specific to agile.
So I guess what I'm hearing is that PMs will be less likely to (summarizing the commonest apparent theme in comments on pseudonymous 'water cooler' forums eg here, Blind, Reddit) waste engineers' time on their path to inevitable failure and also will no longer have same as a blame sponge for same, and I'm supposed to think somehow that's a bad thing?
In my experience this is actually a great thing. Let AI hack away at the agile metrics that it can. Maybe those are the right metrics for AI, and engineers should be focused on building reliable infrastructure and abstractions for that AI.
I think the points about AI doing software engineering are legitimate (if not particularly insightful), but the whole thing just reads like an excuse to bash agile from someone who doesn't understand it.
Jira has been the least of many evils for so long. Prime space for disruption
Reminds me of DOGE :)
Btw - I build AI agents and I can 100% confirm that AI agents for jira automation is a great use case. Scrum masters shouldn't exist.
I'm a SWE turned product manager, and now one of the cartoon movie villains in the boardroom as mentioned in the article.
To me this article sums up the most frustrating part about software engineers believing themselves to be the part of the business with the most complex, unknowable work.
"Most non-technical leaders have never really engaged with the real work of software and systems management. They don’t know what it’s like to update a major dependency, complete a refactor, or learn a new language."
_Every_ function in a tech business has hidden complexity. Most parts of the business have to deal with human, interpersonal complexity (like sales and customer support) far more than do engineers. By comparison, actually, engineering only has to deal with the complexity of the computer which is at least deterministic.
As a result, lots of engineers never learn how to present to the business the risk of the kinds of complexity they deal with. They would prefer to ignore the human realities of working on a team with other people and grumble that the salesperson turned CEO just doesn't get them, man.