Hey HN!
We wanted to share our open source agent harness, Gambit.
If you’re not familiar, agent harnesses are sort of like an operating system for an agent... they handle tool calling, planning, and context window management, so they need far less orchestration code from the developer.
Normally, an agent orchestration framework pipeline looks something like:
compute -> compute -> compute -> LLM -> compute -> compute -> LLM
With an agent harness we invert this, so it’s more like:
LLM -> LLM -> LLM -> compute -> LLM -> LLM -> compute -> LLM
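To make the inversion concrete, here’s a rough sketch in TypeScript (all the names here are illustrative stand-ins, not our actual API):

    // Illustrative stubs, not real APIs.
    declare function cleanInput(s: string): string;
    declare function callLLM(prompt: string): Promise<string>;
    declare function postProcess(s: string): string;
    declare function makeAgent(cfg: { tools: object; prompt: string }): { run(input: string): Promise<string> };

    // Orchestration framework: your code is the loop, the LLM is one step.
    async function pipeline(raw: string) {
      const draft = await callLLM(cleanInput(raw));
      return postProcess(draft);
    }

    // Agent harness: the LLM is the loop, your code is a tool it may call.
    async function harnessed(raw: string) {
      const agent = makeAgent({ tools: { postProcess }, prompt: "..." });
      return agent.run(raw); // the model decides when (or whether) to call tools
    }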
Essentially, you describe each agent either as a self-contained markdown file or as a TypeScript program. Your root agent can bring in other agents as needed, and we give you a typesafe way to define the interfaces between those agents. We call these decks.
Agents can call agents, and each agent can be designed with whatever model params make sense for your task.
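To give a flavor of the shape, here’s a hand-wavy sketch (defineDeck is a stand-in, not our exact syntax; see the repo for the real thing):

    import { z } from "zod";

    // Stand-in for whatever the harness actually exposes.
    declare function defineDeck<I, O>(d: {
      name: string;
      model: { name: string; temperature?: number };
      input: z.ZodType<I>;
      output: z.ZodType<O>;
      prompt: string;
    }): (input: I) => Promise<O>;

    const translate = defineDeck({
      name: "translate_text",
      model: { name: "gpt-4o-mini", temperature: 0.2 }, // per-deck model params
      input: z.object({ text: z.string(), language: z.string() }),
      output: z.object({ translation: z.string() }),
      prompt: "Translate the input text into the requested language.",
    });

    // A parent deck calls a child deck like a typed async function:
    const { translation } = await translate({ text: "hello my friend", language: "Italian" });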
Additionally, each step of the chain gets automatic evals, which we call graders. A grader is just another deck type… but one designed to evaluate and score conversations (or individual conversation turns).
We also have test agents, defined on a deck-by-deck basis, that mimic scenarios your agent will face and generate synthetic data for either humans or graders to grade.
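Loosely, a grader is just a deck whose input is a transcript and whose output is a score plus a rationale. Reusing the hypothetical defineDeck shape from above:

    // Same caveat: illustrative shape only, not the exact API.
    const piiGrader = defineDeck({
      name: "no_pii_leak",
      model: { name: "gpt-4o-mini", temperature: 0 }, // keep scoring stable
      input: z.object({
        transcript: z.array(z.object({ role: z.string(), content: z.string() })),
      }),
      output: z.object({
        score: z.number().min(0).max(1),
        reason: z.string(),
      }),
      prompt: "Score 0 if any assistant message contains PII (emails, phone numbers, SSNs); otherwise score 1. Explain your reasoning.",
    });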
Prior to Gambit, we had built an LLM-based video editor and weren’t happy with the results, which is what sent us down this path of improving inference-time LLM quality.
We know it’s missing some obvious parts, but we wanted to get this out there to see how it could help people or start conversations. We’re really happy with how it’s working with some of our early design partners, and we think it’s a way to implement a lot of interesting applications:
- Truly open source agents and assistants, where logic, code, and prompts can be easily shared with the community.
- Rubric-based grading to guarantee you (for instance) don’t leak PII accidentally
- Spin up a usable bot in minutes, then have Codex or Claude Code use our command line runner / graders to build a first version that’s pretty good with very little human intervention.
We’ll be around if y’all have any questions or thoughts. Thanks for checking us out!
Walkthrough video: https://youtu.be/J_hQ2L_yy60
Nice architecture. The typed deck composition pattern is exactly right for making agent workflows testable.
One thing I've been thinking about is that schema validation catches "is this data shaped correctly?" but not "is this action permitted given who initiated the request?" When you have deck → child deck → grandchild deck chains, a prompt injection at any level could trigger actions the root caller never intended.
I've been working on offline capability verification for this using cryptographically signed warrants that attenuate as they propagate down the call chain. Curious if you've thought about that layer, or if you're relying on the model to self-police tool selection?
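Very roughly, the shape I have in mind is macaroon-style: each hop can only narrow the warrant, and verification works offline. Names here are mine, nothing to do with Gambit, and real code would want timingSafeEqual and expiry caveats:

    import { createHmac } from "node:crypto";

    type Warrant = { caveats: string[][]; mac: string };

    const hmac = (key: string | Buffer, msg: string) =>
      createHmac("sha256", key).update(msg).digest();

    // Root caller mints a warrant over the full tool list.
    function mint(rootKey: string, tools: string[]): Warrant {
      return { caveats: [tools], mac: hmac(rootKey, JSON.stringify(tools)).toString("hex") };
    }

    // Each deck -> child hop appends a narrower tool list and chains the MAC,
    // so a child can drop capabilities but never add them.
    function attenuate(w: Warrant, narrower: string[]): Warrant {
      return {
        caveats: [...w.caveats, narrower],
        mac: hmac(Buffer.from(w.mac, "hex"), JSON.stringify(narrower)).toString("hex"),
      };
    }

    // Offline check at tool-call time: replay the MAC chain, then require the
    // tool to survive every caveat (intersection semantics).
    function permits(rootKey: string, w: Warrant, tool: string): boolean {
      let mac = hmac(rootKey, JSON.stringify(w.caveats[0]));
      for (const caveat of w.caveats.slice(1)) mac = hmac(mac, JSON.stringify(caveat));
      return mac.toString("hex") === w.mac && w.caveats.every((c) => c.includes(tool));
    }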
You might want to know that Gambit is also the name of an open source Scheme implementation that has been around for a very long time.
Is this an alternative to https://mastra.ai/docs ?
How would it compare?
> - Rubric-based grading to guarantee you (for instance) don’t leak PII accidentally
That does not sound like a "guarantee", at all.
nice work. the idea of breaking agents into short-lived executors with explicit inputs/outputs makes a lot of sense - most failures i've seen come from agents staying alive too long and leaking assumptions across steps.
curious how you're handling context lifetimes when agents call other agents. do you drop context between calls or is there a way to bound it? that's been the trickiest part for us.
also, it seems like this works with OpenRouter, and perhaps OpenAI -- what about the Gemini API?
I've been playing with this for the past 24 hours or so. I like the atomic containment of the LLM, and the clear separation of logic, code, and prompts.
You have some great working examples, but take translate_text: it specifies the default language in three places (the card, the input schema, and the deck). That can't be necessary; I'll experiment, but shouldn't it just be defined in one place?
The project's descriptive language is a bit dense for me too. I'm having a hard time figuring out how to do basic things like parameters -- say I want to constrain summarize_text to a certain length. I've tried writing instructions in the cards/decks, but the model doesn't seem to pay attention.
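For instance, I'd have guessed something like this would work, but I'm totally guessing at the syntax:

    import { z } from "zod";

    // My guess at constraining the output -- does the harness feed this to the
    // model, or only validate after the fact?
    const output = z.object({
      summary: z.string().max(500), // cap the summary at 500 characters?
    });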
I also want to be able to load a file, e.g. not just "translate 'hello my friend' to Italian" but "translate '/test/hello_my_friend.txt' to Italian" and have it load the contents of the file as input text. How do I do that?
Super cool project!