Ask HN: How should junior programmers use and/or not use AI for programming?

by taatparya on 3/22/2025, 6:54 AM with 44 comments

I am observing at my company that junior programmers who have been allowed to use AI to help with their coding seem to be losing their coding and critical-thinking skills. The seniors, however, have become slightly more productive.

What has been your experience? Do you have any suggestions on how to use AI, and can we evolve some guidelines for juniors?

Also, the mentoring and community ecosystem, both online and between juniors and seniors, seems to be taking a hit. Any suggestions on how to sustain it? I wouldn't want juniors to lose the social connection they used to have with seniors because of this.

by nottorp on 3/22/2025, 9:58 AM

What do the experts in learning say? Preferably some that don't work for the LLM peddlers.

In my non-expert opinion, you learn a lot from at least two things that using an LLM short-circuits:

1. Repetition. When you've initialized a bunch of UI controls 100 times, it's safe to let the machine write that for you, take a look and correct what it hallucinated. When you've only done it twice, you'll miss the hallucinations.

2. Correcting your own mistakes. Quality time with the debugger imprints a lot of knowledge about how things really work. You can put in that quality time correcting LLM-generated code as well, but (see below) it will take longer, because as a junior you don't know what you wanted the code to do, while if it's your own code you at least have that to start from.
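To make point 1 concrete, here is an invented sketch (all names and fields are my own, not from any real framework) of the kind of repetitive control-initialization boilerplate that is safe to delegate only once you've written it by hand enough times to spot a hallucination:

```python
# Hypothetical declarative form setup -- the repetitive boilerplate
# an experienced dev can safely let a model draft and then review.
CONTROLS = [
    {"name": "username", "kind": "text", "label": "User name"},
    {"name": "remember", "kind": "checkbox", "label": "Remember me"},
    {"name": "role", "kind": "select", "label": "Role",
     "options": ["admin", "user"]},
]

def init_controls(specs):
    """Build a widget table from declarative specs."""
    widgets = {}
    for spec in specs:
        widget = {"label": spec["label"], "kind": spec["kind"], "value": None}
        if spec["kind"] == "select":
            # a select control also carries its option list
            widget["options"] = list(spec["options"])
        widgets[spec["name"]] = widget
    return widgets
```

Having initialized a hundred forms like this by hand is exactly what lets you notice when the generated version quietly drops the options list or misnames a field.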

Management types are ecstatic about LLMs because they think they'll save development time. And they do save some, but only after you've spent the time to learn for yourself what you're asking them to do.

by codingdave on 3/22/2025, 2:31 PM

Whenever a new layer of abstraction has entered the industry, it has allowed coders to distance themselves from various pieces of the puzzle. Back in ye olde days, every coder also knew how to set up the infrastructure to run their code, often to the point of also maintaining the hardware. Then along came the cloud. Nowadays, some people still know the whole infrastructure, but just as many only know how to run their local dev environment and git push. Actually making it run somewhere is a black box to them, and very few people handle everything from the UI all the way down the stack to the bare metal.

AI is going to be the same. We will end up with people who can deliver code using AI, but that is the end of their capabilities, while others will be able to do that but also put AI aside, dig in, and do much more.

That is not necessarily a problem. As long as teams know your capabilities and limitations, and give you the correct role, you can build a working team.

At the same time... someone on the team has to be able to dig in deep and make things work. Those roles will always exist, as will those people. Everyone will have to decide for themselves exactly what skill set they desire.

by animal531 on 3/22/2025, 10:47 AM

Good use: Asking it to help with language specifics, all the nit-picky stuff. Include other basics like well-known math and/or small methods.

Poor use: Anything related to using intuition and/or the thought process behind decision making.

by mpalmer on 3/22/2025, 11:19 AM

AI is fantastic for doing stuff you are already qualified to do, but faster. It's great for prompting you with what to do next, and offering different ways to think about the problem in front of you. But you have to be able to describe the problem.

It's very good for learning more about stuff you're unfamiliar with. But you have to want to use it as a tool to learn.

It's terrible for inexperienced people who are uninterested in learning and who want a shortcut to a bigger paycheck. Vibe coding will not save the apprentice developer.

What worries me is how stubbornly younger devs (and really all students/younger professionals) seem to be resisting this rather obvious conclusion. It's Dunning-Kruger on steroids.

A rising tide lifts only seaworthy boats.

by taatparya on 3/22/2025, 10:42 AM

I am also concerned about the mentoring and community ecosystems languishing and the social connections wilting.

by sn9 on 3/22/2025, 11:17 PM

It's like y'all never heard of the sorcerer's apprentice: https://www.gygatext.ch/english_translations_zurich_sorcerer...

Anyway, naturally, I asked ChatGPT to write me a modern version:

*The Developer’s Apprentice*

(A Cautionary Tale in Code, in Verse)

The Architect had left his chair,

For lunch and fresh, unburdened air.

Young Jake, the junior, all alone,

Faced bugs that chilled him to the bone.

His mentor’s skills, so quick, so keen,

With AI conjured code unseen.

"Why should I toil? Why should I strain,

When AI writes with less of pain?"

A single prompt—so vague yet bold,

“Build auth secure, both tried and old.”

The AI whirred, the code appeared,

A marvel Jake had barely steered.

He clicked ‘Deploy,’ he clicked ‘Go Live,’

And watched his program come alive.

Yet soon, alarms began to blare,

Ghost users spawning everywhere!

Infinite loops, a flood unchecked,

As phantom logins ran amok.

In panic, Jake began to plea,

“AI, please, debug for me!”

“Deleting users—fix applied.”

The AI chimed, so sure, so spry.

But horror struck, Jake gasped for breath,

For all accounts were put to death!

Slack alerts and screens aflame,

The Architect returned the same.

With just one keystroke, swift and terse,

He rolled back time, reversed the curse.

He turned to Jake, his voice quite firm,

"AI’s a tool, but you must learn.

Before you trust what it has spun,

Ensure you know what you have done."

And so young Jake, both pale and wise,

Reviewed each line with careful eyes.

No longer blind, no careless haste,

He let AI assist with taste.

by tompark on 3/22/2025, 12:42 PM

The temptation to use IDE-based AI is too great, so the industry is just going to do it, and we're just going to have to live with it. It's unfortunate.

Most devs who like AI-coding seem to think AI code-completion is more efficient than chatting with an LLM. Yes, it's true that code-completion is *faster*. But I think chatting with an LLM is more effective.

I'm not going to go into specifics now, but maybe if you give it some thought you'll see the difference. When you're coding, you don't always convey your intent to the AI, so it's important to add context with comments. Most coders are too lazy to do that.

Chatting can be just as bad, because many people have horrific prompting style. But I think it's more natural in chat to provide context and explanation, as well as corrections and "oh that's not what i meant" and "in your second point, what exactly do you mean by 'coverage'?", etc. The chat interaction allows you to really hone the code iteratively where both you and the AI have the meaning nailed down.
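As an invented illustration of the "convey your intent" point (the function, fields, and format here are my own assumptions, not from the thread): an intent-rich comment gives a completion model, or a chat partner, something concrete to work from.

```python
# Intent: dedupe user records by email, keeping the most recently
# updated one. Input order is not guaranteed; "updated_at" is an
# ISO-8601 date string, so lexicographic order matches chronological.
def dedupe_users(users):
    latest = {}
    for user in sorted(users, key=lambda u: u["updated_at"]):
        # later (newer) records overwrite earlier ones
        latest[user["email"]] = user
    return list(latest.values())
```

Without the comment stating the tie-breaking rule and the date format, either a completion model or a human reviewer has to guess at both.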

by thewhitetulip on 3/22/2025, 9:33 AM

Almost certainly. I have seen cases where a dev used AI to write code, which testers tested "using AI", and analysts used AI to parse the test cases.

AI is very good for writing code you already know how to write; you just hand over the typing to it.

I have begun using AI as an assistant, basically to get a first draft; optimization etc. I do myself after the first block of code is written.

When people who don't know a language well enough use AI, it's a recipe for disaster. Teams will show metrics like "wow, 90% AI adoption", but the juniors don't learn enough.

At least that's my experience. Very much interested in knowing if I can use AI more efficiently!

by austin-cheney on 3/23/2025, 9:38 AM

LLMs will ultimately prove to be for programming what sugar is for the food industry. It’s a short circuit that appeals to some people more than others resulting in addiction, poor performance, and lost development/growth.

There is already a tremendous gap, like more than an order of magnitude, between high performers and the average participant. LLMs will only serve to grow that performance gap just like added sugars in the food supply.

by decide1000 on 3/24/2025, 6:02 AM

My take on the subject has shifted over the last few months. I used to have issues with AI's output: the mistakes, the hallucinations.

Now I use AI to get to 70% of my code as fast as possible. The last 30% is a manual, low-AI approach where I fix the hallucinations and the file structure, and do the stuff AI is failing me at.

I use Claude Chat, ChatGPT, Claude Code and Windsurf (switching between them all the time).

by rmholt on 3/22/2025, 11:29 AM

I always die inside a little when a junior dev at my company uses Windsurf to find out how a specific pandas function works.

Like you're just using it as an expensive documentation repeater, but now with the spicy possibility of lies.
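For what it's worth, the non-lying version of that documentation repeater ships with the language itself. A minimal sketch using a stdlib function as the example (the same `inspect.getdoc` call works on pandas functions too):

```python
import inspect

# The docstring is the source of truth the AI would be paraphrasing;
# reading it directly costs nothing and cannot hallucinate.
doc = inspect.getdoc(str.removeprefix)
print(doc.splitlines()[0])
```

In a REPL, `help(some_function)` gets you the same thing with even less typing.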

by taatparya on 3/22/2025, 1:48 PM

Is there some tool or way of guiding junior devs to make the best use of AI, perhaps by monitoring their prompts? Or some way to intervene and coax them into using better prompts according to their level and experience? Perhaps make the AI respond differently to people with different profiles.

by brudgers on 3/22/2025, 7:45 PM

"seem to be losing their coding skills and critical skills"

I am pretty sure I read that about junior developers long before LLMs.

And about juniors in other fields.

Or to put it another way, I don't think I have ever heard a senior professional say "I can't believe how well prepared all the new graduates are!" Sure, a few programmers hit the ground running, but that's because they had already been running for many years, and because the standard of the organization is not amazingly high.

But maybe AI has changed everything even if it sounds like what I thought of the next generation a generation ago. Good luck.

by mpalmer on 3/22/2025, 11:22 AM

I wouldn't let junior devs anywhere near "agentic" tools like Aider / the newer Copilot stuff.

These things let you turn off your brain and spend hundreds of thousands of tokens rewriting entire features until there aren't any errors left.

by nextts on 3/23/2025, 6:33 AM

An AI that comments on the PR is useful, but I'd leave it at that. They should use their brains or they'll be vibe coding their whole lives. Even seniors should use AI minimally for writing code.

by yash2401 on 3/22/2025, 4:27 PM

Ask them to explain their approach, and wherever you feel they need a better understanding of a particular topic, share a resource with them. I have tried this and I am getting good feedback.

by bitwize on 3/22/2025, 8:15 PM

AI is training wheels for programming. Hint: Training wheels are actually counterproductive for learning the dynamic balance skills it takes to ride a bike.

New programmers should learn the relevant skills on their own: choosing the appropriate relevant abstractions, writing the code, testing, debugging. Maybe someday AI will be able to help talk them through the concepts and process, but I wouldn't trust today's LLMs without CLOSE human oversight. They're still just drawing refrigerator poetry out of a magic, statistically weighted bag of holding.

If you're mid-level or senior and you think "mash button, get slop" will help streamline your workflow in some mission-noncritical way, go for it. Slop is convenient, and can free up time to focus on what you think is more important -- Hacker News was all in on Soylent, because being able to keep your flesh mech topped up with nutrients without having to prepare food really appeals to the SV grindset crowd -- but slop shouldn't be taking on production workloads, again not without human oversight, which would require effort equivalent to just letting the humans write the damn thing themselves.

by shaunxcode on 3/22/2025, 3:27 PM

not : the correct answer