Coding agent in 94 lines of Ruby

by radanskoric on 5/14/2025, 2:17 PM, with 84 comments

by elif on 5/17/2025, 11:30 AM

Thank you for showing off why Ruby is useful not just in the current year, but particularly in the current time and AI situation. When you're dealing with code written with hallucinations, you want a language that's quick to understand (Ruby is S tier here), where out-of-place behavior can't hide in code so repetitive and unnecessary that your mind tries to skip over it.

by Mystery-Machine on 5/17/2025, 2:26 AM

Just out of curiosity, I never understood why people do `ENV.fetch("ANTHROPIC_API_KEY", nil)` which is the equivalent of `ENV["ANTHROPIC_API_KEY"]`. I thought the whole point of calling `.fetch` was to "fail fast". Instead of assigning `nil` as default and having `NoMethodError: undefined method 'xxx' for nil` somewhere random down the line, you could fail on the actual line where a required (not optional) ENV var wasn't found. Can someone please explain?

by rbitar on 5/16/2025, 11:15 PM

RubyLLM has been a joy to work with, so it's nice to see it being used here. This project is also great: it will make it easier to build an agent that can fetch data outside the codebase for context, and/or to experiment with different system prompts. I've been a personal fan of Claude Code, but this will be fun to work with.

by fullstackwife on 5/17/2025, 1:15 AM

This reminds me of PHP hello-world programs that would take a string from GET, use it as a path, read the file at that path, and return its contents in the response. You could make a website without knowing anything about websites.

Agents are the new PHP scripts!

by ColinEberhardt on 5/17/2025, 7:30 AM

Great post, thanks for sharing. I wrote something similar a couple of years ago, showing just how simple it is to work with LLMs directly rather than through LangChain, adding tool use, etc.

https://blog.scottlogic.com/2023/05/04/langchain-mini.html

It is of course quite out of date now as LLMs have native tool use APIs.

However, it proves a similar point to yours: in most applications, 99% of the power is in the LLM. The rest is often just simple plumbing.

by melvinroest on 5/17/2025, 9:18 AM

The way I'd create extra functionality is to give it command-line access with a permission step in between. I'd then create a folder of useful scripts and give it permission to execute those.
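A minimal, gem-free sketch of what that permission step could look like (the `run_with_permission` helper and its return shape are my own invention, not from the article):

```ruby
# Hypothetical helper: ask the user before running any shell command
# the agent proposes. Injectable IO objects make it easy to test.
def run_with_permission(command, input: $stdin, output: $stdout)
  output.puts "Agent wants to run: #{command}"
  output.print "Allow? (y/n) "
  answer = input.gets&.strip&.downcase
  return { error: "User declined to execute the command" } unless answer == "y"

  { output: `#{command}` } # backticks capture the command's stdout
end
```

Restricting the agent to a folder of known scripts would then just be a matter of checking `command` against that folder before prompting.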

You can make it much more than just a coding agent. I personally use my personal LLMs for data analysis by integrating it with some APIs.

These types of LLM systems are basically acting as a frontend now, one that responds to very fuzzy user input. Such an LLM can reach out to your own defined functions (aka a backend).

The app space that I think is interesting, and that I'm working on, is combining these systems with some solid data to create advising/coaching/recommendation systems.

If you want some input on building something like that, my email is in my profile. Currently I'm playing around with an LLM chat interface with database access that gives study advice based on:

* HEXACO data (personality)

* Motivational data (self-determination theory)

* ESCO data (skills data)

* Descriptions of study programs described in ESCO data

I'm currently also looking for freelance opportunities around things like this, as I think there are many LLM applications where we've only scratched the surface.

by RangerScience on 5/16/2025, 11:26 PM

This is very cool, somewhat inspiring, and (personally) very informative: I didn't actually know what "agentic" AI use was, but this did an excellent job (incidentally!) explaining it.

Might poke around...

What makes something a good potential tool, if the shell command can (technically) do anything, like running tests?

(or it is just the things requiring user permission vs not?)

by matt_s on 5/17/2025, 1:08 PM

Wow, so the RubyLLM gem makes writing an agent mostly about basic IO operations. I had somehow thought building things like this required a deep understanding of LLMs and/or AI APIs: that I would need to research and read a lot of docs, stay up to date on the endless updates the various AI systems ship, etc. The example from the article is about files and directories, but the same concept could apply to any text input, like data out of a Rails app.

by johnisgood on 5/17/2025, 8:45 AM

Side note: I do not understand the appeal of "N lines of X". You import a library, which presumably consists of many lines. I do not see the point. It would only be true that this is 94 lines of Ruby if there were no `require "ruby_llm/tool"` at the top.

by thih9 on 5/17/2025, 7:33 AM

> Claude is trained to recognise the tool format and to respond in a specific format.

Does that mean that it wouldn’t work with other LLMs?

E.g. I run Qwen3-14B locally; would that or any other model similar in size work?

by sagarpatil on 5/17/2025, 5:47 AM

I don’t understand the hype in the original post.

OpenAI launched function calling two years ago, and it has been possible to create a simple coding agent ever since.

by thih9 on 5/17/2025, 7:42 AM

> return { error: "User declined to execute the command" }

I wonder if AIs that receive this information within their prompt might try to change the user’s mind as part of reaching their objective. Perhaps even in a dishonest way.

To be safe, I'd write "error: Command cannot be executed at this time", or "error: Authentication failure". Unless you control the training set, or don't care about the result.

Interesting times.