I'd argue the only new development is that now it's cheaper / easier to do. But the same concept has been used previously with human-augmented bot farms.
I wonder if AI is going to track like nuclear power. In the 1950s it was the greatest thing. Electricity would be plentiful and cheap. "Too cheap to meter." All kinds of new conveniences would be possible. The future was bright.
Then we had growing environmental concerns. And the costs were much higher than initially promoted. Then we had Three Mile Island. Then Chernobyl. Then Fukushima. New reactor construction came to a standstill. There was no trust anymore that humans could handle the technology.
Now, there's some interest again. Different designs, different approaches.
Will AI follow the same path? A few disasters, a retreat, and then perhaps a renewal with lessons learned?
I think it's pretty clear we're at this stage already.
I have done some research on AI disinformation. This is a really complex topic, as disinformation and influence operations are complex phenomena that can take many different forms. What I would argue is that disinformation in general does not have a supply problem (how to generate as much of it as possible) but a demand problem (how to get what is generated in front of some eyes). You don't really need a botnet of fake users pushing something; you need a few popular accounts/politicians to spread your message. There is no significant advantage in using AI there.
But there are still situations where botnets would be useful, for example spreading propaganda on social media during hot phases of various conflicts (the R-U war, the Israeli wars, the Indo-Pakistani war) or running short-term influence operations before elections. These cases need to be handled by social media platforms detecting nefarious activity, whether human or AI. So far they could half-ass it, since running human-based campaigns was pretty expensive, but they will probably have to step up their game to handle the relatively cheap AI campaigns that people will attempt to run.
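To give a sense of what "stepping up their game" involves: the cheapest detection signals are about coordination rather than content, e.g. bursts of near-duplicate posts across accounts. A toy sketch in Python (the Post structure and all thresholds are illustrative assumptions, not any platform's real pipeline):

    # Toy coordination detector: flag accounts posting near-duplicate
    # text within a short time window. Illustrative only.
    from dataclasses import dataclass
    from difflib import SequenceMatcher
    from itertools import combinations

    @dataclass
    class Post:
        account: str
        text: str
        ts: float  # unix seconds

    def similar(a, b, threshold=0.9):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

    def flag_coordinated(posts, window=3600.0):
        flagged = set()
        for p, q in combinations(posts, 2):
            if (p.account != q.account
                    and abs(p.ts - q.ts) <= window
                    and similar(p.text, q.text)):
                flagged.update({p.account, q.account})
        return flagged

The catch is that LLM-driven campaigns defeat exactly this kind of check by paraphrasing every post, which is why platforms will need behavioral signals (timing, follow graphs, account age) rather than text similarity.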
This paper builds on a series of pathways towards harm. Those are plausible in principle, but we still have frustratingly little evidence of the magnitude of such harms in the field.
To settle the question of whether these harms can or will actually materialize, we would need causal attribution, which is really hard to do, in particular with all involved actors actively monitoring society and reacting to new research.
Personally, I think that transparency measures and tools that help civil society (and researchers) better understand what's going on are the most promising approach here.
Sections of Reddit and Twitter have been taken over by an incredibly toxic cesspit of bots. They fuel polarization and hate like nothing I've ever seen before.
It's all tailored to the algorithm, which pumps it out to users.
>Fusing LLM reasoning with multi-agent architectures [1], these systems are capable of coordinating autonomously, infiltrating communities, and fabricating consensus at minimal cost. Where legacy botnets acted like megaphones, repeating one script, AI swarms behave like adaptive conversationalists with thousands of distinct personas that learn from feedback, pivot narratives, and blend seamlessly into real discourse. By mirroring human social dynamics using adaptive tactics, they threaten democratic discourse.
A trip to Reddit will show you just how real this already is. Humans are far more likely to adopt a belief if they think that everyone else also believes it. The moment GPT became available, bad actors began exploiting this to spread disinformation and convince everyone to believe it.
I suspect this is what a node in an AI swarm looks like: https://www.reddit.com/u/Low-Ocelot-992/s/95ORJu8j9U
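Mechanically there isn't much to such a node. A minimal sketch of the persona loop the quoted paper describes, with generate() standing in for any LLM API (every name here is hypothetical):

    # Minimal sketch of one swarm node: a fixed persona, an LLM call,
    # and a feedback loop that pivots narratives when engagement drops.
    # generate() is a placeholder, not a real API.
    def generate(prompt):
        return "[LLM reply conditioned on: " + prompt[:60] + "...]"

    class SwarmNode:
        def __init__(self, persona, narratives):
            self.persona = persona
            self.narratives = narratives
            self.current = 0

        def reply(self, thread_text):
            prompt = ("You are " + self.persona + ". Blend into the thread "
                      "and steer it toward: " + self.narratives[self.current]
                      + "\nThread: " + thread_text)
            return generate(prompt)

        def feedback(self, engagement):
            # pivot when the current narrative stops landing
            if engagement < 0.2 and len(self.narratives) > 1:
                self.current = (self.current + 1) % len(self.narratives)

Scale that to thousands of personas with distinct posting histories and you get the "fabricated consensus" the paper is describing, for roughly the cost of the API calls.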
Malicious AI swarms are merely the tool wielded by the actual danger, which is the threat actors directing them toward a particular goal.
I'm spreading the message because I want more socially conscious people to engage with this. Look into Curtis Yarvin and the Dark Enlightenment. Look into Peter Thiel (chairman of Palantir, aka America's biggest surveillance contractor) and his explicitly technofascist musings on using technology to bulldoze democracy.
Are malicious AI swarms a greater threat to democracy than British soldiers armed with muskets and cannons?
Are there benevolent AI swarms?
I'd love to hear why this was flagged, as it's very squarely within HN guidelines. Curiously, although it's not [dead] and was only posted a few hours ago, this story is totally missing from the HN topic list. I can only speculate as to why.
AI execs push the message of how dangerous and powerful AI is because who doesn't want to invest in the next most powerful thing?
But AI is not dangerous because of the potential for sentience; it's dangerous because it makes the rich richer and the poor poorer. It gives those with resources more advantage. It's dangerous because it gives a few individuals the power to supply the answers when people ask questions out of curiosity they don't know the answer to, which is when they are most able to be influenced.
This is a popular tactic that the managerial class has been trying to use to keep its power in an age of decentralized mass communication.
Whenever you want to seize control of something or take power away from people just present it as an existential “threat to democracy”.
Real AI safety is enforced regulation and smart policies, period.
Don't let the government's FOMO on new weapons enable these companies to add new coal and methane power to the grid and build data centres in water-stressed regions. Make them pay for the externalities they cause. If it weren't subsidized, these companies wouldn't be operating at the scale they are; AI would be in a lab where it belongs, doing cool stuff for science.
Heck, don't let government let alone private corporations weaponize this technology, full stop.
Economic policy that protects businesses and individuals online. People's hosting bills are going through the roof from AI scrapers. The harm is nuts. These companies aren't respecting any of the informal rules and are doing everything they can to form monopolies and shut down competition.
We need social policies that prevent AI use cases from harming the economy. Policies that prevent displacing skilled workers without redistributing the accrued wealth to the labour class. If it's really improving productivity by such a huge factor then we should all be working less and living comfortably.
But I dunno how you prevent the disinformation, fraud, and scams. They've been around for years and it's always a cat-and-mouse game. Social media has just made it worse and AI is just more fuel for the tire fire.
Citizens United already killed democracy.
.. AI ... swarms! Like bees! Scary! We should be scared!
Anyways, I'm not sure AI is relevant here. Misinformation is just a form of propaganda, and other than allowing falsehoods to be created more quickly, it doesn't seem any more "threatening" than any other lie.
Sure but who needs democracy anyway? Monarchy is the way of the future for famous tech oligarchs like Peter Thiel (and aren't we all just aspiring tech oligarchs on HN?). AI monarchs are the way of the future.
I've been saying this for years.
It's curious that 90% of the top-level comments here are all dismissing it outright. And we have the usual themes:
1) "This has always been possible before. AI brings nothing new."
2) "We haven't seen anything really bad yet, so it is a non-issue
3) "AI execs are pushing this narrative to make AI sound important and get more money"
Never mind the fact that many famous people behind the invention of AI, including Geoffrey Hinton (the "godfather of AI") and others, have quit their jobs and are spending their time loudly warning people, or signing major letters warning about human extinction or job loss... it's all a grift, according to the vocal HN denizens.
This is like the opposite of web3 where everyone piles on the other way.
Well... swarms of AI agents take time to amass karma, pagerank, and other metrics, but within the coming years they will indeed be able to churn out content 24/7, create normal-looking influencer accounts, and dominate the online discussion on every platform. Very likely, the percentage of human-generated content on the internet will trend toward 0-1%, the internet will become a dark forest, and this will be true in "siloed" ecosystems like HN as well: https://maggieappleton.com/ai-dark-forest/
Certainly, saying "this was always possible before" misses the forest for the trees. No, it wasn't.
List of attacks made possible by swarms of agents:
1) Edits to Wikipedia and publishing articles to push a certain narrative, as an Advanced Persistent Threat
2) Popular accounts on social media, videos on YouTube, Instagram and TikTok pushing AI-generated narratives, biased "news" and entertainment across many accounts growing in popularity (already happening)
3) Comments under articles, videos, and shared posts arguing for or against the thing, plus coordinated upvoting that bypasses voting-ring detection (most social networks don't care about it as much as HN; a rough sketch of that detection follows this list). Tactical piling on.
4) Sleeper accounts that appear normal for months or years and amass karma/points until they gradually start coordinating, either subtly or overtly. AI can play the long strategic game and outmaneuver groups of people as well, including experts (see #7 for discrediting them).
5) Astroturfing attacks on people who disagree with the narrative. Maybe coordinating posts on HN that make it seem like it is an unpopular position.
6) Infiltrating and distracting opponents of a position, by getting them mired in constant defenses or explanations, where the interlocutors are either AI or friends / allies that have been "turned" or "flipped" by AI to question them
7) Reputational destruction, along the lines of the NSA PRISM PowerPoint slides (https://archive.org/details/NSA-PRISM-Slides), but at scale and implacable.
8) Astroturfing support for wars, unrest, or whatever else, but at scale, along the lines of Mahachkala Protests in Russia, etc.
These are just some of the early low-hanging fruit for 2026 and 2027.
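On #3, for concreteness: classic voting-ring detection is essentially co-voting overlap analysis, something like the rough sketch below (not any site's actual algorithm; thresholds are made up). Swarms bypass it by spreading votes across many sleeper accounts so that no pair overlaps enough to cluster.

    # Rough sketch of classic voting-ring detection: pairs of accounts
    # whose upvote histories overlap far more than chance get reviewed.
    # Illustrative only.
    from itertools import combinations

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    def suspect_pairs(votes, threshold=0.6):
        # votes: dict mapping account -> set of item ids it upvoted
        return [(u, v) for u, v in combinations(votes, 2)
                if jaccard(votes[u], votes[v]) >= threshold]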
You know what concerns me? Venture capital has started a ball rolling, and HN plays no small part in it. VCs exist as an apparatus of America's massive financialized economy, and therefore understand that the system is overleveraged again. We have far more debt than we're capable of working off, so the people with money want to spend it on moonshots: cryptocurrency scams, viral marketing schemes, and AI shenanigans among the latest. What is the sum of all this? What work is really being done, besides the financier economics we've seen since the 1980s and the dev outsourcing we've seen since the '90s?
Maybe AI swarms do pose some weird contrived threat to democracy. Pieces like this will inevitably be laundered by the American intelligentsia like Karpathy or Hinton, and turned into some polemic hype-piece on social media proving that "safe AI" must be prioritized for regulation. It's borderline taboo to admit on HN, but America's obsession with speculative economics has pretty much ruined our chance at seizing a technological future that can benefit anyone. Now AI, like crypto before it and the dotcom bubble too, is overleveraged. "Where's the money, Lebowski?"
Stop blaming technology for the way humans misuse it. AI, like any technology, is a lever. Like a big metal rod. You could use that to move stones for building a structure, or to dislodge a boulder to roll down a hill and destroy someone else's building.
The pre-AI situation is actually incredibly bad for most people in the world who are relatively unprivileged.
"Democracy" alternates between ideological extremes. Even without media, the structure of the system is obviously wholly inadequate.
Advanced technologies can be used to make things worse. But they are also the best hope for improving things. And especially the best hope for empowering those with less privilege.
The real problems are the humans, their belief systems and social structures. The status quo may seem okay to you on most days, but it is truly awful in general. We need as many new tools as possible.
Don't blame the tools. This is the worst kind of ignorance.
What democracy lol, didn't need AI for that.
I’ve been hearing about the dangers of influence campaigns like Russian bot farms, deep fakes, and LLMs for the last ten years now. While these sorts of papers go out of their way to use the word “democratic,” it always seems to be motivated by a (not-very-compelling) idea that right wing populism is an essentially artificial creation rather than an organic backlash to the excesses of the Obama era. The scale of these influence campaigns may have increased in the last five years, but the (common, in these circles) idea that Russia stole the election with a few million posts on Twitter is silly and itself begins to look like an attempt to implement anti-democratic measures that would allow established authorities to pull the ladder up behind them.
The authors of this paper have at least acknowledged that there are (practical, more so than moral) limitations to strict identification systems ("Please insert your social security number to submit this post") and cite a few (non-US) examples of instances where these kinds of influence campaigns have ostensibly occurred. The countermeasures similarly appear to be reasonable, being focused on providing trust scores and counter-narratives. What they are describing, though, might end up looking a lot like a Reddit-style social credit system which has the impact of shutting out dissenting opinions. One of my favorite things about Hacker News over Reddit is that I can view and vouch for posts that have been flagged dead by other users. 95% of the time the flags were warranted (low-effort comments, vulgarity, or spam), but every once in a while I come across an intriguing comment that others preferred to be censored.
Killing civilians in war has been completely normalized, even in highly visible conflicts like Gaza and Ukraine. People just shrug and move on.
Political discourse has lost all sense of decency. Senators, the VP and POTUS all routinely mock and demean their opponents and laugh even at murder. Arrests are made by unknown masked men with assault rifles.
AI is simply irrelevant to this - humans are selfish, tribal and ugly.
We are in an age where the technology of oppression gives the rich the feeling that they are above consequences. That people can be controlled in a way they will not be able to break free from.
Malicious AI swarms are only one manifestation of technology which gives incredible leverage to a handful of people. Incredible amounts of information are collected, and an individual AI agent per person watching for disobedience is becoming more and more possible.
Companies like Clearview already scour the internet for any public pictures and associated public opinions and offer a facial recognition database with political opinions to border patrol and police agencies. If you go to a protest, border patrol knows. Our government intelligence has outsourced functions to private companies like Palantir. Privatizing intelligence means intelligence capabilities in private hands, that might sound tautological, but if this does not make you fearful, then you did not fully understand. We have license plate tracking everywhere, cameras everywhere, mapped out "social graphs," and we carry around devices in our pockets that betray every facet of our personal lives. The vast majority of transactions are electronic, itemized, and tracked.
When every location you visit is logged, every interaction you have is logged, every associate you communicate with is known, and every transaction is itemized and logged for query, and there is a database designed to join that data seamlessly to look for disobedience, plus the resources to fully utilize that data, then how do you mount a resistance if those people assert their own power?
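The join itself is the trivial part. A toy illustration of how little engineering sits between those datasets and a "disobedience" query (every table, column, and value here is hypothetical):

    # Toy illustration: cross-referencing phone pings, license-plate
    # reads, and card transactions on a resolved person_id. All data
    # and names are hypothetical.
    import pandas as pd

    pings = pd.DataFrame({"person_id": [1, 2], "cell_tower": ["T7", "T7"],
                          "ts": ["06-01 14:02", "06-01 14:05"]})
    plates = pd.DataFrame({"person_id": [1], "camera": ["protest_route_3"],
                           "ts": ["06-01 13:40"]})
    cards = pd.DataFrame({"person_id": [1, 2], "merchant": ["transit", "hardware"],
                          "ts": ["06-01 13:10", "06-01 12:55"]})

    # "Who was near the protest, how did they get there, what did they buy?"
    joined = (pings.merge(plates, on="person_id", suffixes=("_ping", "_plate"))
                   .merge(cards, on="person_id"))
    print(joined)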
We are getting dangerously close to not being able to resist those who own or operate the technology of oppression, and it is very much outpacing the technology of resistance.
> system-level oversight—a UN-backed AI Influence Observatory
> The Observatory should maintain and continually update an open, searchable database of verified influence-operation incidents, allowing researchers, journalists, and election authorities to track patterns and compare response effectiveness across countries in real time. To guarantee both legitimacy and skill, its governing board would mix rotating member-state delegates, independent technologists, data engineers, and civil society watchdogs.
We've really found ourselves in a pickle when the only way to keep Grandma from being psychologically manipulated is to have the UN keep a spreadsheet of Facebook groups she's not allowed to join. Honestly what a time to be alive.