Ask HN: How to deal with long vibe-coded PRs?

by philippta on 10/29/2025, 8:37 AM, with 349 comments

Today I came across a PR for a (in theory) relatively simple service.

It spanned 9000 LOC and 63 new files, including a DSL parser and much more.

How would you go about reviewing a PR like this?

by Yizahi on 11/4/2025, 12:11 PM

An alternative to the reject-and-request-rewrite approach, which may not work in a corporate environment: you schedule a really long video call with the offending person, with an agenda politely describing that for such a huge and extensive change, a collaborative meeting is required. You then notify your lead that a new huge task has arrived which will take X hours from you, so if he wishes to re-prioritize tasks, he is welcome to. And then, if the meeting happens, you literally go line by line, demanding that the author explain each one to you. And if the explanation or the meeting is refused, you can reject the PR with a clear explanation of why.

by throwawayffffas on 11/4/2025, 2:57 AM

> How would you go about reviewing a PR like this?

Depends on the context. Is this from:

1. A colleague in your workplace. You go "Hey ____, that's kind of a big PR. I'm not sure I can review this in a reasonable time frame; can you split it up into more manageable pieces? PS: Do we really need a DSL for this?"

2. A new contributor to your open source project. You go "Hey ____, Thanks for your interest in helping us develop X. Unfortunately we don't have the resources to go over such a large PR. If you are still interested in helping please consider taking a swing at one of our existing issues that can be found here."

3. A contributor you already know. You go "Hey, I can't review this ___, it's just too long. Can we break it up into smaller parts?"

Regardless of the situation be honest, and point out you just can't review that long a PR.

by viccis on 11/4/2025, 6:03 AM

Open source? Close it and ask them to resubmit a smaller one, and to justify the complexity of things like a DSL if they want it included.

For work? Close it and remind them that their AI velocity doesn't save the company time if it takes me many hours (or even days depending on the complexity of the 9k lines) to review something intended to be merged into an important service. Ask them to resubmit a smaller one and justify the complexity of things like a DSL if they wanted it included. If my boss forces me to review it, then I do so and start quietly applying for new jobs where my job isn't to spend 10x (or 100x) more time reviewing code than my coworkers did "writing" it.

by jonchurch_ on 11/4/2025, 2:23 AM

We are seeing a lot more drive-by PRs in well-known open source projects lately. Here is how I responded to a 1k-line PR most recently, before closing and locking it. For context, it was (IMO) a well-intentioned PR. It purported to implement a grab bag of perf improvements, caching of various code paths, and a clustering feature.

Edit: I left out that the user got flamed by non-contributors for their apparently AI-generated PR and description (rude); in their defense, they did say they were using several AI tools to drive the work. My response:

We have a performance working group which is the venue for discussing perf based work. Some of your ideas have come up in that venue, please go make issues there to discuss your ideas

my 2 cents on AI output: these tools are very useful; please wield them in a way that respects the time of the human who will be reading your output. This is the longest PR description I have ever read, and it does not sound like a human wrote it, nor does it sound like a PR description. The PR also does multiple unrelated things in a single 1k-line changeset, which is a nonstarter without prior discussion.

I don't doubt your intention is pure, ty for wanting to contribute.

There are norms in open source which are hard to learn from the outside, idk how to fix that, but your efforts here deviate far enough from them in what I assume is naivety that it looks like spam.

by yodsanklai on 10/30/2025, 11:01 PM

You review it like it wasn't AI generated. That is: ask the author to split it into reviewable blocks. Or, if you don't have an obligation to review it, you leave it there.

by EagnaIonat on 11/4/2025, 5:30 AM

Everyone is talking about having them break it down into smaller chunks. With vibe coding there is a near guarantee the person doesn't know what the code does either.

That alone should be reason to block it. But LLM-generated code is also not protected by copyright law, and by extension you can damage your code base.

My company does not allow LLM-generated code into anything that is their IP. Generic stuff outside of IP is fine, but every piece has to be flagged as created by an LLM.

In short, these are just the next evolution of low quality PRs.

by MikeNotThePope on 11/4/2025, 2:55 AM

How about this?

“This PR is really long and I’m having a hard time finding the energy to review it all. My brain gets full before I get to the end. Does it need to be this long?”

Force them to make a case for it. Then see how they respond. I’d say good answers could include:

- “I really tried to make it smaller, but I couldn’t think of a way; here’s why…”

- “Now that I think about it, 95% of this code could be pushed into a separate library.”

- “To be honest, I vibe coded this and I don’t understand all of it. When I try to make it smaller, I can’t find a way. Can we go through it together?”

by grodriguez100 on 11/4/2025, 8:57 AM

Don’t. I would refuse to review a PR with 9000 LOC and 63 new files even if written by a human. Something that large needs to be discussed first to agree on an architecture and general approach, then split in manageable pieces and merged piece-wise in a feature branch, with each individual PR having reasonable test coverage, and finally the feature branch merged into master.

by TriangleEdge on 11/4/2025, 2:14 AM

Amazon eng did some research and found the number of comments on a code review is inversely proportional to the number of lines changed: huge CRs get few comments, small CRs get a lot of comments. At Amazon, it's common to have a 150-to-300-line limit on changes. It depends on the team.

In your case, I'd just reject it and ensure repo merges require your approval.
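A hard line limit like the one described is usually enforced by a small merge gate. A minimal sketch, assuming a Python-based check (the function names, messages, and 300-line threshold are all illustrative, not any real tool's API):

```python
# Hypothetical pre-merge size gate in the spirit of a per-team line limit.

def pr_within_limit(added: int, deleted: int, limit: int = 300) -> bool:
    """Return True when the total diff fits within the team's review budget."""
    return added + deleted <= limit

def gate_message(added: int, deleted: int, limit: int = 300) -> str:
    """Produce the accept/reject message a hypothetical merge gate would post."""
    total = added + deleted
    if total <= limit:
        return f"OK: {total} changed lines is within the {limit}-line limit"
    return (f"REJECT: {total} changed lines exceeds the {limit}-line limit; "
            f"please split this into smaller PRs")
```

Under such a gate, the 9000-line PR from the question is rejected automatically, before any human has to look at it.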

by onion2k on 11/4/2025, 10:00 AM

> How would you go about reviewing a PR like this?

AI is a red herring in discussions like this. How the change was authored makes no difference here.

I wouldn't. I'd reject it. I'd reject it even if the author had lovingly crafted each line by hand. A change request is not "someone must check my work". It's a collaboration between an author and a reviewer. If the author can't be bothered to respect the reviewer's time, then they don't deserve to get a review.

by alexdowad on 11/4/2025, 2:34 AM

Be tactful and kind, but straightforward about what you can't/don't want to spend time reviewing.

"Thanks for the effort, but my time and energy is limited and I can't practically review this much code, so I'm closing this PR. We are interested in performance improvements, so you are welcome to pick out your #1 best idea for performance improvement, discuss it with the maintainers via ..., and then (possibly) open a focused PR which implements that improvement only."

by JohnFen on 10/29/2025, 2:07 PM

I'd just reject it for being ridiculous. It didn't pass the first step of the review process: the sniff test.

by CharlieDigital on 11/3/2025, 1:45 AM

Ask the submitter to review and leave their comments first or do a peer code review with them and force them to read the code. It's probably the first time they'll have read the code as well...

by rhubarbtree on 11/4/2025, 7:31 AM

In our company, you would immediately reject the PR based on size. There are a bunch of other quick bounce items it could also fail on, eg documentation.

The PR would then be split into small ones up to 400 lines long.

In truth, such a big PR is an indicator that either (a) the original code is a complete mess and needs reengineering or more likely (b) the PR is vibe coded and is making lots of very poor engineering decisions and goes in the bin.

We don’t use AI agents for coding. They’re not ready. Autocomplete is fine. Agents don’t reason like engineers, they make crap PRs.

by andreygrehov on 11/4/2025, 5:40 AM

That 10+ year old joke never gets old:

10 lines of code = 10 issues.

500 lines of code = "looks fine."

Code reviews.

by dosinga on 11/4/2025, 2:59 AM

Ideally you have a document in place saying this is how we handle vibe coding, something like: if you have the AI write the first version, it is your responsibility to make it reviewable.

Then you can say (and this is hard): this looks like vibe code and it misses that first human pass we want to see in these situations (link); please review it and afterwards feel free to (re)submit.

In my experience they'll go away. Or they come back with something that isn't cleaned up and you point out just one thing. Or sometimes! they actually come back with the right thing.

by rvrs on 10/31/2025, 3:46 PM

Enforce stacked PRs, reject PRs over 500-1k LoC (I'd argue even lower, but it's a hard sell)

by arianjm on 11/4/2025, 10:48 AM

It always depends on your position and their position, but by the sounds of it... I'd say it's too big for a simple "review this".

I'd ask them to write up their thought process: why they made the decisions they made, and what the need is for so many files and so many changes. I might ask for a videoconference to understand better, if it's a colleague from work.

By now you should hopefully know whether their approach is valid. If you're not sure yet, I'd take a look at the code, especially at the parts they refer to most in their answers to the previous questions. So not a detailed review; a more general pass, to decide whether this is valid or not.

If it's a valid approach, then I guess I'd review it. If not, then give feedback as to how to make it valid, and why it isn't.

Not valid is very subjective. From "this is just garbage", to "this is a good approach, but we can implement this iteratively in separate PRs that will make my life easier", again, it depends on your and their position.

by raincole on 11/4/2025, 5:59 AM

You ask questions. Literally anything, like asking them why they believe this feature is needed, what their code does, why they made a DSL parser, etc.

The question itself doesn't matter. Just ask something. If their answer is genuine and makes sense, you deal with it like a normal PR. If their answer is LLM-generated too, then block.

by collingreen on 11/5/2025, 4:48 PM

My gut says close it.

My more professional side says invite the person to review it together - I do this for big or confusing PRs regardless of AI and it is both helpful and a natural backpressure to big PRs.

My tactical side says invite the person to show you their AI process, because wow, that's a lot of code, and that's super cool if it's good enough. Then see if the AI can turn the PR into small, coherent, atomic chunks (rewritten with some architecture learned from the existing project) and leave the person with those prompts and workflows.

My manager side is already very explicit with the team that code review is the bottleneck AND that the code both working and being easy to understand is the author's responsibility, which makes these conversations much, much easier.

by Cthulhu_ on 11/4/2025, 9:58 AM

I wouldn't, they can start by writing requirements and a design first, then break it up into manageable components.

Or just refuse to review and let the author take full responsibility in running and maintaining the thing, if that's possible. A PR is asking someone else to share responsibility in the thing.

by bluerooibos on 11/4/2025, 9:06 AM

I wouldn't review it - bad engineering practice to submit this much work in one go - it puts too much expectation on the reviewer and makes it more likely that something gets broken.

Even 1000 lines is pushing it, IMO. Tell them to split the PR up into more granular work if they want it merged.

by LaFolle on 11/4/2025, 7:23 AM

There are good suggestions in the thread.

One suggestion possibly not covered: you can clearly document how AI-generated PRs will be handled, make it easy for contributors to discover, and if/when such a PR shows up, point to the documented section to save yourself time.

by siwatanejo on 11/4/2025, 2:23 AM

Forget about code for a second. This all depends a lot on what goal the PR achieves. Does it align with the goals of the project?

by zigcBenx on 10/29/2025, 8:45 AM

In my opinion no PR should have this many changes. It's impossible to review such things.

The only exception is some large migration or version upgrade that requires lots of files to change.

As far as gigantic vibe-coded PRs go, it's a straight reject from me.

by devrundown on 10/31/2025, 1:09 AM

9000 LOC is way too long for a pull request unless there is some very special circumstance.

I would ask them to break it up into smaller chunks.

by smsm42 on 11/4/2025, 7:53 AM

The only way such a PR can be reviewed is if it's accompanied by a detailed PRD and tech design documents, and at least half of that LOC count is tests. Even then it requires a lot of interactive work from both sides. I have seen PRs a third or a quarter of this size that took weeks to properly review and bring to production quality. Unless there's something artificially inflating the size of it (like auto-generated files or massive test fixtures, etc.), I wouldn't ever commit to reviewing such a behemoth without a very, very good reason.

by le-mark on 11/4/2025, 2:22 AM

How long was this person working on it? Six months? Anything this big should’ve had some sort of design review. The worst is some junior going off and coding some garbage no one sees for a month.

by O-stevns on 11/4/2025, 7:47 AM

That's a lot of code for a PR, though I should admit I have made PRs half that size myself.

Personally I think it's difficult to address these kinds of PRs, but I also think that git is terrible at providing solutions to this problem.

The concept of stacked PRs is fine up to the point where you need to make changes throughout all your branches; then it becomes a mess. If you (like me) have a tendency to rewrite your solution several times before ending up with the final result, then having to split this into several PRs does not help anyone. The first PR will likely be outdated the moment I begin working on the next.

Open source is also more difficult in this case because, contrary to working for a company with schedules, deadlines, etc., you can't (well, you shouldn't) rush a review when it's on your own time. As such, PRs can sit for weeks or months without being addressed. When you eventually need to reply to comments about how, why, etc., you have forgotten most of it and need to read the code yourself to reclaim the reasoning. At that point it might be easier to re-read a 9000-line PR over time than to read 5-10 PRs with maybe-meaningful descriptions, where the implementation changes every time.

Also, if it's from a new contributor, I wouldn't accept such a PR, vibe coded or not.

by fhd2 on 11/4/2025, 10:28 AM

I'd say you have three options:

1. Reject it on the grounds of being too large to meaningfully review. Whether they used AI or not, this is effectively asking them to start over in an iterative process where you review every version of the thing and get to keep complexity in check. You'll need the right power and/or standing for this to be a reasonable option. At many organisations, you'd get into trouble for it as "blocking progress". If the people that pay you don't value reliability or maintainability, and you couldn't convince them that they should, that's a tough one, but it is how it is.

2. Actually review it in good faith: Takes a ton of time for large, over engineered changes, but as the reviewer, it is usually your job to understand the code and take on responsibility for it. You could propose to help out by addressing any issues you find yourself rather than making them do it, they might like that. This feels like a compromise, but you could still be seen as the person "blocking progress", despite, from my perspective, biting the bullet here.

3. Accept it without understanding it. For this you could _test_ it and give feedback on the behaviour, but you'd ignore the architecture, maintainability etc. You could still collaboratively improve it after it goes live. I've seen this happen to big (non-AI generated) PRs a lot. It's not always a bad thing. It might not be good code, but it could well be good business regardless.

Now, however you resolve it, it seems like this won't be the last time you'll struggle to work with that person. Can they change, and do they want to? Do you want to change? If you can't answer either of these questions with a yes, you'll probably want to look for ways of not working with them going forward.

by rurban on 11/6/2025, 5:40 AM

I just came back from installing a vibe coded new service and frontend. It had most of the new required features the old python single file service didn't have. A new big shiny react typescript monster. Good UI.

The client would have loved to use it, as it was much easier to use. But in the end it was premature, not tested, and not adjustable to the client's needs on-site. Too many states, even a global PostgreSQL store. Super fragile. So I had to sidestep the new shiny Claude-generated React code, after 2 people tried to fix it for 3 weeks, implemented the basic new features on the old system in a day, and this works stably now. No global state, just a single job file you copy there manually.

Enforce good SW practices. Test that it works. Have a backup solution ready if it doesn't. Have simulators to test real behaviors. My simulators saved my day.

by johnnyanmac on 11/4/2025, 2:56 AM

Excuse me, 9000? If that isn't mostly codegen, including some new plugin/API, or a fresh repository, I'd reject it outright. LLMs or not.

In my eyes, there really shouldn't be more than 2-3 "full" files worth of LOC for any given PR (which should aim to address 1 task/bug each; if not, maybe 2-3 at most), and general wisdom is to keep "full" files around 600 LOC each (for legacy code this is obviously very flexible, if not infeasible, but it's a nice ideal to keep in mind).

An 1800-2000 LOC PR is already pushing what I'd want to review, but I've reviewed a few like that when laying scaffolding for a new feature. Most PRs are usually a few dozen lines across 4-5 files each, so far below that.

9000 just raises so many red flags. Do they know what problem they are solving? Can they explain their solution approach? Give a general architectural structure for their implementation? And all that is before the actual PR concerns of performance, halo effects, stakeholders, etc.

by throwaway290 on 11/4/2025, 5:17 AM

Don't accept this PR. If it's bot generated you are not here to review it. They can find a bot to review bot generated requests.

by renewiltord on 11/4/2025, 8:04 AM

It's basic engineering principle: you do not do work amplification. e.g. debouncing, request coalescing, back-pressure are all techniques to prevent user from making server do lots of work in response to small user effort.

As example, you have made summarization app. User is try to upload 1 TB file. What you do? Reject request.

You have made summarization app. User is try upload 1 byte file 1000 times. What you do? Reject request.

However, this is for accidental or misconfigured user. What if you have malicious user? There are many technique for this as well: hell-ban, tarpit, limp.

For hell-ban simply do not handle request. It appear to be handled but is not.

For tarpit, raise request maker difficulty. e.g. put Claude Code with Github MCP on case, give broad instructions to be very specific and request concise code and split etc. etc. then put subsequent PRs also into CC with Github MCP.

For limp, provide comment slow using machine.

Assuming you're not working with such person. If working with such person, email boss and request they be fired. For good of org, you must kill the demon.

by dbgrman on 11/4/2025, 3:07 AM

TBH, depends on what is being reviewed. Is it a prototype that might not see light of day and is only for proof-of-concept? Did an RFC doc precede it and reviewers are already familiar with the project? Were the authors expecting this PR? Was there a conversation before the PR was sent out? Was there any effort to have a conversation after the PR was shared? Was this even meant to be merged into main?

I'll just assume good intent first of all. Second, 9000 LOC spanning 63 files is not necessarily AI-generated code. It could be a code mod. It could be a prolific coder. It could be a lot of codegen'd code.

Finally, the fact that someone is sending you 9000 LOC hints that they find this OK, and this is an opportunity to align on your values. If you find it hard to review, tell them: I find it hard to review, I can't follow the narrative, it's too risky, etc.

Code review is almost ALWAYS an opportunity to have a conversation.

by flambojones on 11/6/2025, 9:56 PM

Sounds like an opportunity for the person who sent it to learn to improve their prompts. I've set my default Claude rules to emphasize small, self-contained commits, and I use a stacking tool like Graphite to help them get reviewed.

by hsbauauvhabzb on 11/4/2025, 8:10 AM

“Hey chatgpt, reject this pr for me. Be extremely verbose about the following topics:

- Large PRs
- Vibe coding
- Development quality”

by ivankahl on 11/4/2025, 8:51 AM

What are your organization's expectations or policies regarding PR size and acceptable AI usage? Even if your organization hasn't set any expectations, what are yours—and have you communicated them to the author?

If expectations have been shared and these changes contradict them, you can quickly close the PR, explain why it's not acceptable, and ask them to redo it.

If you don't have clear guidelines on AI usage or haven't shared your expectations, you'll need to review the PR more carefully. First, verify whether your assumption that it’s a simple service is accurate (although from your description, it sounds like it is). If it is, talk to the author and point out that it's more complicated than necessary. You can also ask if they used AI and warn them about the complexities it can introduce.

by fathermarz on 11/4/2025, 6:22 AM

Let me ask a different question. A large refactor ended up as a 60K-line Python PR because the new lead didn’t feel like merging it in until it was basically done. He even asked other devs to merge into his branch so we would merge later.

How does one handle that with tact and not lose their minds?

by ilc on 11/4/2025, 5:47 PM

The same way I would with a human:

If I thought the service should only be 1000 lines tops:

- Reject due to excess complexity.

If it is a proper solution:

- Use AI to review it, asking it to be VERY critical of the code, and look for spots where human review may be needed, architecture wise, design wise and implementation wise.

- Ask the AI again to do a security review etc.

- Tell the author to break the PR down into human size chunks using git.

Why those things? It's likely some manager is gonna tell me to review it anyway, and if so, I want a head start. And if there are critical, shoot-it-down-level issues I can find with an AI quickly, I'd just shut the PR down now.

As in any "security" situation, in this case the security of your codebase and sanity, defense in depth is the answer.

by anarticle on 11/4/2025, 6:25 AM

No face, no case. They have to break it way down, just like at any org. In fact, I would ask for more tests than usual, with a test plan/proof they passed. 9k is a little spicy: separate PRs, or an ad hoc huddle with them rubber-ducking you through the code. Depends on whether you care about this that much or not.

Unless you really trust them, it's up to the contributor to make their reasoning work for the target. Else, they are free to fork it if it's open source :).

I am a believer in using LLM codegen as a ride-along expert, but it definitely triggers my desire to over-test software. I treat most codegen as if the most junior coder had written it, and set up guardrails against as many things as the LLM and I can come up with.

by fancyfredbot on 11/4/2025, 12:56 PM

If it shouldn't be 9k LOC, and it doesn't need a DSL parser, then reject it as overcomplicated and unmaintainable. Make it clear how large and complex you expect it to be, and where existing code or frameworks should be reused, so they can go away and address your concerns, and so there's a high chance you'll be able to approve if they do.

Above all, you aim to allow the contributor to be productive: you make it clear what constraints they need to operate under in order to use AI codegen effectively. You want to come across as trying to help them, and need to take care not to appear obstructive or dismissive.

by throwaway106382 on 11/4/2025, 2:25 AM

You don't.

Was your project asking for all this? No? Reject.

by EdwardDiego on 11/5/2025, 2:27 AM

I wrote a lot of comments - for humans, but then I also specifically addressed some to the bot - "Cursor, remove all emojis in log messages, and do not use print for debugging, use a logger, where you are using a logger you are repeatedly importing the logging library in various conditional clauses, you should always import at the top level of the file" etc. etc. etc. - because you know that they're going to feed my review back to the bot.

The fact that someone submitted this PR in that state though...

by jasonjmcghee on 11/4/2025, 11:53 PM

That's unreasonably large. Depending on the content, PRs tend to get harder and harder to read with every line of code.

1k added lines is imo already pushing it.

9k and 63 files is astronomical and very difficult to review.

A proper review means being able to understand the system and what's being changed, how, and why in order to be able to judge if it was done properly and includes everything it should and nothing it shouldn't.

9k lines is just too much to be able to do this properly.

by tayo42 on 11/4/2025, 6:46 AM

You can't really review this. Rubber stamp it or reject it.

by lionkor on 11/4/2025, 8:09 AM

Close them and report to your boss. If your boss doesn't care, look for a new job. Once you have a new job, quit the old and cite that specific case as the reason.

by cyrusradfar on 11/5/2025, 11:43 PM

Trigger: Shameful Self Promotion

I created a tool in VSCode for this called Intraview. It allows you to create a dynamic code tour to provide feedback.

It works with your existing agent and creates a sharable tour that you can navigate and provide feedback step by step.

Rationally, this is much easier than reviewing the diff, because you can prompt to break up the PR logically so you can approve in functional pieces.

by T_Potato on 11/4/2025, 6:42 AM

I have a tangent question: how do you deal with a team that spends days nitpicking implementation and double-speaking, saying "I didn't actually expect you to implement this the way I said, I was just saying it would be nice if it was like this, can you undo it"? I spent 3 weeks on a code review because of the constant back and forth; and I wish, oh I wish, they would allow PRs to be small, but the rule is that a PR has to implement the full deliverable feature. And that can mean 20 files to constantly change and change and change. Oh, and then the "why did you use Lombok" question that comes up even though the project uses Lombok, so you are stuck defending the use of a library that is already used in the project, for no reason other than to flatter the egos of the gatekeepers who say "yes, this is good, but I want you to name this abc instead of ab before we merge", when in context it doesn't add or remove any value, not even clarity.

by reactordev on 11/4/2025, 10:54 AM

Easy, auto reject and close it. If asked why, state that each feature should be its own PR. Don’t waste any more brain cells on it.

If an engineer really cared, they would discuss these changes with you. Each new feature would be added incrementally, ensuring that it doesn’t break the rest of the system. This will allow you to understand their end goal while giving them an avenue to achieve it without disrupting your end goal.

by javier_e06 on 11/4/2025, 1:17 PM

I would request, in the PR, references to the unit tests with 100% coverage. Once I run them, and if they pass, I would do a spot check and look for glaring errors. Nothing deep. Perhaps I would run lint or some static analysis tool on the code. If the analysis tools come out squeaky clean and the unit tests pass? Well, what's not to like? One or more problems? Reject the whole thing.

by phendrenad2 on 11/4/2025, 10:02 AM

Are they truly vibe-coded? Or is the person simply accomplishing months of work in one day? Do you think the submitter reviewed it themselves? There's a difference you know. Like it or not, AI coding is not going away.

In your case, 9000 LOC and 63 files isn't that crazy for a DSL. Does the DSL serve a purpose? Or is it just someone's feature fever dream to put your project on their resume?

by fifilura on 11/4/2025, 10:27 AM

Is it Java/Spring? Then probably go along and be happy that a human didn't have to write those 9000 lines for a trivial service.

by NumberCruncher on 11/4/2025, 11:21 PM

Why not fight fire with fire and use AI to:

Version A: find 100 LOC which can be reduced to 50 LOC without changing the functionality. Then ask the author to go through the PR making sure it's not bloated. Repeat.

Version B: find hidden bugs. Ask the author to fix them. Repeat.

Keep them occupied while saving face. I would also fine-tune my own agent to automate this kind of work for me.

by dzink on 11/4/2025, 1:25 PM

With AI, code complexity is a cost bigger than money, because it takes an infinite amount of time from humans (maintainers, engineers) and requires an increasing amount of memory and hardware to handle (unnecessarily). You have to account for it and hold contributors accountable for it. Otherwise any code base will become unmanageable, un-runnable and un-upgradable.

by locknitpicker on 11/4/2025, 7:07 AM

> How would you go about reviewing a PR like this?

State the PR is too large to be reviewed, and ask the author to break it down into self-contained units.

Also, ask which functional requirements the PR is addressing.

Ask for a PR walkthrough meeting to have the PR author explain in detail to an audience what they did and what they hope to achieve.

Establish max diff size for PRs to avoid this mess.
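A max diff size can be checked mechanically in CI. A sketch of how that might look, assuming a Python check fed the output of `git diff --numstat` (the tab-separated added/deleted/path format, with "-" for binary files, is standard git output; the 400-line limit is illustrative):

```python
# Sketch of a CI-side max-diff-size check over `git diff --numstat` output.

def total_changed_lines(numstat_output: str) -> int:
    """Sum added + deleted lines across all files in a numstat listing."""
    total = 0
    for line in numstat_output.strip().splitlines():
        added, deleted, _path = line.split("\t", 2)
        # Binary files report "-" for both counts; they can't be line-counted.
        if added != "-":
            total += int(added) + int(deleted)
    return total

def check_diff(numstat_output: str, limit: int = 400) -> bool:
    """Return True if the PR's diff is small enough to review."""
    return total_changed_lines(numstat_output) <= limit
```

In a pipeline you would feed it something like the output of `git diff --numstat origin/main...HEAD` and fail the build when `check_diff` returns False.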

by thw_9a83c on 11/5/2025, 11:29 AM

> How to deal with long vibe-coded PRs?

This is partly a joke, but it works: rewrite your project in an obscure, unpopular, uncool programming language that LLMs cannot meaningfully write code in. You will get zero vibe-coded PRs and you will remain in full control of your source code.

by jeremyjh on 11/4/2025, 2:53 AM

I'd just close it without comment. Or maybe if I'm feeling really generous I'll make a FAQ.md that gives a list of reasons why we'll close PRs without review or comment and link that in the close comments. I don't owe anyone any time on my open source projects. That said, I haven't had this issue yet.

by tacostakohashi on 10/31/2025, 1:01 AM

Use AI to generate the review, obviously.

by jake-coworker on 11/4/2025, 1:50 PM

I usually share this resource when people start doing this https://google.github.io/eng-practices/review/developer/smal...

by wheelerwj on 11/4/2025, 5:50 AM

The same way you review a non-vibe-coded PR. What's that got to do with anything? A shit PR is a shit PR.

by self_awareness on 11/4/2025, 1:14 PM

Reject, citing unnecessary technical debt. Most of the time, custom DSLs are not needed.

The question is what was the original task that needed to be fixed? I doubt it required a custom DSL.

Issue a research task first to scope the fix: what needs to be changed and how.

by giantg2on 11/4/2025, 11:05 AM

Start with the test files. There's no way the AI had meaningful and working test cases. Pop a comment on each test file about missing tests or expanding them. That will force the dev to review their own code and make substantial changes.

by ares623on 11/4/2025, 8:21 AM

Ask them if they reviewed the AI’s output before opening the PR. If they didn’t then ask them to at least review it first rather than having you do all the work. If they did then is a 2nd review from you really necessary? ;)

by brutal_chaos_on 11/4/2025, 10:54 AM

Having experienced AI at $job and having tried to make vibecoding a thing: run when you see it. Yes, that means good-enough AI gets through, but what's the harm in that if it works as you need it to?

by aaronrobinsonon 10/29/2025, 8:53 AM

Reject it

by abhimanyue1998on 11/4/2025, 4:40 AM

vibe review it with AI then run it on vibe production support. simple.

by wengo314on 10/29/2025, 9:15 AM

Reject outright. Ask to split it into a reasonable chain of changesets.

by cat_plus_pluson 11/4/2025, 6:49 AM

Vibe review with all the reasons it should not be merged obviously.

by pacifikaon 11/4/2025, 6:32 PM

It roughly takes an hour to review 1,000 LOC. Tell your manager to book you in for a day and a half on the review. Usually scheduling it in is a deterrent to a quick approval.

by ontouchstarton 11/4/2025, 1:13 PM

A more difficult question might be: if it were merged now and, 100 merges later, you found a serious bug whose root cause is in this PR, would you ask the same person to fix it?

by aryehofon 11/4/2025, 5:51 AM

This is effectively a product, not a feature (or bug fix). To start with, ask the submitter how you can determine whether it meets functional and non-functional requirements.

by bmitcon 11/4/2025, 2:59 AM

Reject it and request the author makes it smaller.

PRs should be under 1000 lines.

The alternative is to sit down with them and ask what they're trying to accomplish and solve the problem from that angle.

by zzzeekon 11/4/2025, 12:35 PM

It's garbage; reject it. Over-engineered. Update your PR guidelines: AI is fine for helping write code, but PRs ultimately have to be human-designed.

by ojron 11/4/2025, 6:57 AM

I would test whether the new features work and whether there are any regressions around critical business functions, and merge it if my manual tests pass.

by fxtentacleon 11/4/2025, 9:24 AM

"I trust that you have proof-read this," and then just merge. When production explodes, their name will be all over "git blame".

by james_markson 11/4/2025, 12:12 PM

“This is unnecessarily complex” and cite 1-2 egregious examples, with a LOC estimate that you think is more reasonable.

5 minutes, off the cuff.

by 999900000999on 11/4/2025, 2:18 AM

Reject it and tell them to actually code it.

by dustingetzon 11/4/2025, 10:31 AM

zoom call

ask them to walk you through it

ask for design doc if appropriate

what is test plan who is responsible for prod delivery and support

(no difference from any other large pr)

by ugh123on 11/4/2025, 7:37 AM

Are there tests written? You could start by demanding that the tests pass and demonstrate some kind of coverage metric.
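One mechanical way to back that demand up, assuming coverage.py's `coverage json` report format and an arbitrary 80% per-file threshold (both assumptions, not anything the commenter specified):

```python
def undercovered_files(coverage_report: dict, changed_files: list,
                       threshold: float = 80.0) -> list:
    """Return the changed files whose line coverage falls below `threshold`.

    `coverage_report` is the dict produced by coverage.py's `coverage json`,
    i.e. {"files": {path: {"summary": {"percent_covered": ...}}}}.
    Files absent from the report count as 0% covered.
    """
    files = coverage_report.get("files", {})
    flagged = []
    for path in changed_files:
        pct = files.get(path, {}).get("summary", {}).get("percent_covered", 0.0)
        if pct < threshold:
            flagged.append(path)
    return flagged
```

Run it over the PR's changed files (e.g. `git diff --name-only origin/main...HEAD`) and refuse to review, or to merge, until the flagged list is empty.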

by dlisboaon 11/4/2025, 1:12 PM

Close them. It's not a PR made in good faith. A pull request is meant to be reviewable; 9k LOC is not.

by akoon 11/4/2025, 6:02 AM

AI code generators are getting better fast; in the near future they will be able to produce good changes faster than you can review them. How will you deal with it then? Most vibe-coding tools can also produce smaller PRs, but then you have to deal with 250+ PRs in a week. Is that more manageable? My guess is we need new tools that get the human out of the loop: more automated reviews, tests, etc.

by sshineon 11/4/2025, 6:24 AM

Same standard as if they had made it themselves: a sequence of logically ordered commits.

by ethinon 11/4/2025, 8:41 AM

If it's obviously AI generated and is an absurdly long PR, I'd ask them to extensively justify the complexity (especially if it does side quest-isms like this example where the AI created a DSL and stuff: why exactly is the DSL required?). If the project already implements the feature, I'd ask that they remove the re-implemented parts and use what already exists. If one of the dependencies of the project does this, I'd ask that they update the PR to use those instead of wholesale redoing it. If they respond, at all, with AI-generated responses instead of doing it themselves, or their PR description is AI generated, or it's blatantly obvious they used AI, I would immediately mentally classify the PR as an ultra low effort/quality PR until proven otherwise. Might seem harsh, but I prefer PRs from people who actually both understand the project and what the PR is trying to do. I don't mind if people use AI to assist in that understanding; I don't even mind if they use AI to help write parts of the PR. But if I can tell that it's AI generated (and completely re-implementing something that the project either has already or is in the stdlib or a dep is a very good sign of AI generated code in my experience), I'm far more inclined to dismiss it out of hand.

by exe34on 11/4/2025, 7:21 AM

Simple: ask them to break it down into smaller pieces with a clear explanation of what each does and why it's needed. Then set up an AI to drag them through the dirt with pointless fixes. Or just close them as won't-fix.

by drbojingleon 11/4/2025, 1:55 PM

If they can vibe code it they can vibe disassemble it and vibe small PR it.

by atoavon 11/4/2025, 6:19 AM

Tell them to give you a phone call and have them explain the code to you : )

by alganeton 11/4/2025, 10:15 AM

"too big, please break it into smaller self-contained PRs"

[ Close with comment ]

by calinion 11/4/2025, 8:08 AM

Vibe merge review it using Copilot or equivalent, and then close it :)

by meltynesson 11/4/2025, 12:01 PM

Proof by counterexample, just find the inevitable security flaw.

by Roark66on 11/4/2025, 6:55 AM

Many people gave good tips, so let me answer in general.

As someone on the "senior" side, AI has been very helpful in speeding up my work. I work with many languages and with many projects I haven't touched in months, and while my code is relatively simple, the underlying architecture is rather complex. So where I do use AI, my prompts are very detailed. Often I spot mistakes that get corrected, etc. With this I still see a big speedup (at least 2x, often more). The quality is almost the same.

However, I noticed many "team leads" try to use the AI as an excuse to push too difficult tasks onto "junior" people. The situation described by the OP is what happens sometimes.

Then when I go to the person and ask about some weird thing they are doing, I get "I don't know, Copilot told me"...

Many times I have tried to gently steer such AI users towards using it as a learning tool: "Ask it to explain things you don't understand," "Ask questions about why something is written this way," and so on. Not once have I seen it used like this.

But this is not everyone. Some people have a skill that lets them get a lot more out of pair programming and AI. I had a couple of trainees on the current team 2 years ago who were great at this. This was "pre-AI" at this company, but when I was asked to help them, they asked all sorts of questions, and 6 months later they were hired on a permanent basis. Contrast this with: "So how should I change this code?" You give them a fragment, they put it in verbatim, and they come back via Teams with a screenshot of an error message...

Basically expecting you will do the task for them. Not a single question. No increased ability to do it on their own.

This is how they try to use AI as well. And it's a huge time waster.

by Lapsaon 11/4/2025, 10:06 AM

Strict enforcement of lines-of-code limits will lead to half-finished change requests and leak technological gibberish upstream to the lovely business folk.

by ninetyninenineon 11/4/2025, 5:35 AM

You vibe review it. I’m actually only half kidding here.

by occzon 11/4/2025, 7:48 AM

Easy, you reject it.

by pomarieon 11/4/2025, 10:57 AM

One thing that actually works is getting AI to review the basic stuff first so you can focus on architecture and design decisions. The irony of using AI to review AI-generated code isn't lost on me, but it does help.

That said, even with automated review, a 9000 line PR is still a hard reject. The real issue is that the submitter probably doesn't understand the code either. Ask them to walk you through it or break it down into smaller pieces. If they can't, that tells you everything.

The asymmetry is brutal though. Takes an hour to generate 9000 lines, takes days to review it properly. We need better tooling to handle this imbalance.

(Biased take: I'm building cubic.dev to help with this exact problem. Teams like n8n and Resend use it to catch issues automatically so reviewers can focus on what matters. But the human review is still essential.)

by CamperBob2on 11/4/2025, 5:55 AM

Please review this PR. Look carefully for bugs, security issues, and logical conflicts with existing code. Report 'Pass' if the PR is of sufficient quality or 'Fail' if you find any serious issues. In the latter case, generate a detailed report to pass along to the submitter.

(ctrl-v)

by ErroneousBoshon 11/4/2025, 1:28 PM

Instant reject, advising them not to resubmit.

by deariloson 11/4/2025, 2:08 PM

Put up guardrails to enforce quality code.

by paul_hon 11/4/2025, 5:50 PM

Ask a second AI to summarize the intention (look at the .patch) into markdown. Reset. Ask your AI to read the intention as if the original author had written it, say you have grave doubts about the contribution both functionally and non-functionally, and have it help you put that into words to feed back to the contributor. Basically the playbook from https://paulhammant.com/images/SimonSinghsFermatExcerpt.jpg

by nish__on 11/4/2025, 7:11 PM

Build it locally and QA test it.

by bitbasheron 11/4/2025, 5:40 PM

"CoS" - Close on Sight

by drfrank3on 11/4/2025, 12:28 PM

AI creates slop of dead or inefficient code that can be cleaned up. I think that developers who obsess over control have a difficult time adjusting to this.

The greater danger is that AI can create or modify code into something that is disconnected, stubbed, and/or deceptive and claim it’s complete. This is much worse because it wastes much more time, but AI can fix this too, just like it can the slop- maybe not deterministically, but it can.

And because of this, those that get in the way of creating source with AI are just cavemen rejecting fire.

by vasanon 11/2/2025, 9:18 AM

Just reflect on it, and consider whether you gave them too little time to complete it. I would just have a meeting with them and raise it directly.

by ZeroGravitason 11/4/2025, 8:46 AM

How you reject the first one of these, compared with the hundredth and the millionth(!), is probably going to be an interesting development over the next few years.

Personally, I've felt drained dealing with small PRs fixing actual bugs by enthusiastic students new to projects in the pre-slop era.

Particularly if I felt they were doing it more to say they'd done it, rather than to help the project.

I imagine that motive might help drive an increase in this kind of thing.

by shinycodeon 11/4/2025, 8:09 AM

Don’t read it, approve it.

by ChrisMarshallNYon 11/4/2025, 2:18 AM

I write full app suites that have less than 9000 LoC. I tend toward fewer, large-ish source files, separated by functional domains.

I once had someone submit a patch (back in the SVN days), that was massive, and touched everything in my system. I applied it, and hundreds of bugs popped up.

I politely declined it, but the submitter got butthurt, anyway. He put a lot of work into it.

by 0x000xca0xfeon 11/4/2025, 12:27 PM

Fight slop with slop. Use an AI to review it in excruciating detail and write a lengthy justification for the rejection. Make sure to really hit a couple thousand words.

Maybe getting their own time wasted will teach the submitter about the value of clarity and how it feels to be on the receiving end of a communication with highly asymmetric effort.

by hshdhdhehdon 11/1/2025, 5:55 AM

With a middle finger

by userbinatoron 11/4/2025, 5:30 AM

If it's full of the typical vibe-coded nonsense that's easy to spot upon a quick-but-close inspection (unused functions, dead-end variables and paths that don't make sense, excessively verbose and inaccurate comments, etc.), I would immediately reject.

by darepublicon 11/6/2025, 6:04 PM

> a DSL parser

oh no

by PeterStueron 11/4/2025, 7:36 AM

Before reviewing, ask for a rationale and justification. It might be just overcomplicated AI slop; it could also be that someone actually went beyond the basics and really produced something next-level.

A simple email could tell the difference.

by never_inlineon 11/4/2025, 2:24 AM

close button.

by Sirikonon 11/4/2025, 2:03 PM

Reject them

by mort96on 11/4/2025, 7:48 AM

Close them.

by irvingprimeon 11/4/2025, 6:29 PM

Reject. Period. No compromise. No friendly comments about how it can be improved. Just reject it as unreviewable.

Then ban the idiot who submitted it.

by mexicocitinluezon 11/4/2025, 10:50 AM

The same way you would do literally any other PR. I don't know why this is special.

If the code sucks, reject it. If it doesn't, accept it.

This isn't hard.

by ripped_britcheson 11/4/2025, 2:25 AM

Obviously by vibe reviewing it

by HelloNurseon 11/4/2025, 12:46 PM

Complaining about inadequate tests and documentation should be a very efficient and effective strategy against slop.

by eston 11/4/2025, 7:31 AM

Write another AI to hardcore-review it and eventually reject it.

by foxfiredon 11/4/2025, 2:24 AM

It's funny; just today I published an article with the solution to this problem.

If they don't bother writing the code, why should you bother reading it? Use an LLM to review it, and eventually approve it. Then of course, wait for the customer to complain, and feed the complaint back to the LLM. /s

Large LLM generated PRs are not a solution. They just shift the problem to the next person in the chain.

by exclipyon 11/4/2025, 2:46 AM

I made a /split-commit prompt that automatically splits a megacommit into smaller commits. I've found this massively helpful for making more reviewable commits. You can either run this yourself or send this to your coworker to have them run it before asking you to re-review it.

Sometimes it doesn't split along optimal boundaries, but it's usually good enough to help. There's probably room for improvement and extension (e.g., re-splitting a branch containing many non-logical commits, moving changes between commits, merging commits, ...) – contributions welcome!

You can install it as a Claude Code plugin here: https://github.com/KevinWuWon/kww-claude-plugins (or just copy out the prompt from the repo into your agent of choice)
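For contributors who can't (or won't) run such a prompt, a crude manual fallback is to uncommit the branch and recommit in logical chunks. The sketch below groups changed files by top-level directory, a deliberately naive stand-in for "logical units" (all names here are illustrative, not part of the plugin above):

```python
from collections import defaultdict

def group_by_top_dir(paths):
    """Group changed file paths by their top-level directory (or bare filename).

    A naive heuristic for commit boundaries; a human (or agent) should
    still reorder and merge the resulting groups afterwards.
    """
    groups = defaultdict(list)
    for p in paths:
        groups[p.split("/", 1)[0]].append(p)
    return dict(groups)
```

Wired up with git it would be roughly: `git reset --mixed origin/main` to uncommit while keeping the working tree, then for each group `git add -- <files>` followed by `git commit`, yielding one commit per directory instead of one 9,000-line blob.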