This is wonderful news.
I was actually scratching my head over how to structure a regular prompt to produce CSV data without extra nonsense like "Here is your data" and "Please note blah blah" at the beginning and end, so this is most welcome: I can define exactly what I want returned and then just push the structured output to CSV.
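Something like this is what I have in mind (a rough, untested sketch against the /api/chat endpoint; the model name, prompt, and column names are just placeholders I made up):

    # Rough sketch: constrain the reply to a schema, then dump it to CSV
    # with no "Here is your data" chatter to strip out afterwards.
    import csv, json, requests

    schema = {
        "type": "object",
        "properties": {
            "rows": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"},
                        "population": {"type": "integer"},
                    },
                    "required": ["city", "population"],
                },
            }
        },
        "required": ["rows"],
    }

    resp = requests.post("http://localhost:11434/api/chat", json={
        "model": "llama3.2",
        "messages": [{"role": "user",
                      "content": "List three large European cities with their populations."}],
        "format": schema,   # the new structured-outputs field
        "stream": False,
    })
    rows = json.loads(resp.json()["message"]["content"])["rows"]

    with open("cities.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["city", "population"])
        writer.writeheader()
        writer.writerows(rows)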
Yay! It works. I used gemma2:2b and gave it the text below:

    You have spent 190 at Fresh Mart. Current balance: 5098

and it gave this output:

    {"amount": 190, "balance": 5098, "category": "Shopping", "place": "Fresh Mart"}
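For anyone curious, a schema along these lines in the request's `format` field would pin the output to that exact shape (illustrative only, not necessarily the one I used):

    {
      "type": "object",
      "properties": {
        "amount":   { "type": "number" },
        "balance":  { "type": "number" },
        "category": { "type": "string" },
        "place":    { "type": "string" }
      },
      "required": ["amount", "balance", "category", "place"]
    }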
No way. This is amazing and one of the things I actually wanted. I love ollama because it makes using an LLM feel like using any other UNIX program. It makes LLMs feel like they belong on UNIX.
Question though. Has anyone had luck running it on AMD GPUs? I've heard it's harder but I really want to support the competition when I get cards next year.
Has anyone seen how these constraints affect the quality of the LLM's output?
In some instances, I'd rather parse Markdown or plain text if it means the quality of the output is higher.
So I can use this with any supported model? The reason I'm asking is that I can only run 1B-3B models reliably on my hardware.
PRs on this have been open for something like a year! I'm a bit sad about how quiet the maintainers have been on this.
I'm still running oobabooga because of its ExLlamaV2 (exl2) support, which does much more efficient inference on dual 3090s.
What's the value-add compared to `outlines`?
https://www.souzatharsis.com/tamingLLMs/notebooks/structured...
Is there a best approach for providing structured input to LLMs? Example: feed in 100 sentences and get each one classified in different ways. It's easy to get structured data out, but my approach of prefixing line numbers seems clumsy.
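A slightly more structured spin on the same idea that I've been meaning to try (just a sketch, not an established best practice): give every sentence an id in the input JSON and make the output schema echo that id back, so the results can be joined deterministically.

    # Sketch: id-tagged input plus an output schema that echoes the id.
    # The sentiment labels and field names are arbitrary examples.
    import json

    sentences = ["The movie was great.", "Shipping took forever."]
    items = [{"id": i, "text": s} for i, s in enumerate(sentences)]
    prompt = "Classify the sentiment of each item:\n" + json.dumps(items, indent=2)

    output_schema = {
        "type": "object",
        "properties": {
            "results": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "id": {"type": "integer"},
                        "sentiment": {"type": "string",
                                      "enum": ["positive", "negative", "neutral"]},
                    },
                    "required": ["id", "sentiment"],
                },
            }
        },
        "required": ["results"],
    }
    # `prompt` goes in the messages, `output_schema` in the request's format field.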
That's very useful. To see why, try to get an LLM to _reliably_ generate JSON output without this. Sometimes it will, but sometimes it'll just YOLO and produce something you didn't ask for, that can't be parsed.
I must say it is nice to see the curl example first. As much as I like Pydantic, I still prefer to hand-code the schemas, since it makes it easier to move my prototypes to Go (or something else).
Could someone explain how this is implemented? I saw on Meta's Llama page that the model has intrinsic support for structured output. My 30k ft mental model of LLM is as a text completer, so it's not clear to me how this is accomplished.
Are llama.cpp and ollama leveraging llama's intrinsic structured output capability, or is this something else bolted ex-post on the output? (And if the former, how is the capability guaranteed across other models?)
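My naive guess is that it's not intrinsic to the model at all, but grammar-constrained sampling in the runtime: at each decoding step, candidate tokens that can't extend a valid prefix of the requested format get masked out before sampling, which would also explain why it works across models. A toy sketch of that idea (my illustration, not actual Ollama/llama.cpp code):

    # Toy sketch of constrained decoding: a fake "model" proposes ranked next
    # tokens; the sampler skips any candidate that can't extend a valid JSON
    # prefix, so the text completer can only complete toward valid output.
    import json

    def valid_json_prefix(s: str) -> bool:
        # A real implementation walks a grammar (e.g. GBNF) state machine;
        # here we just check whether some small completion of s parses as JSON.
        for suffix in ("", "}", '"}', ": 0}", '": 0}'):
            try:
                json.loads(s + suffix)
                return True
            except json.JSONDecodeError:
                pass
        return False

    def fake_model(prefix: str) -> list[str]:
        # Ranked "next token" candidates a chatty model might emit.
        return ["Sure, here is", "{", '"amount"', ": 190", "}", " the data"]

    prefix = ""
    while not (valid_json_prefix(prefix) and prefix.endswith("}")):
        for tok in fake_model(prefix):
            if valid_json_prefix(prefix + tok):   # mask tokens the format forbids
                prefix += tok
                break

    print(prefix)   # {"amount": 190}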
Wow, neat! The first step toward format ambivalence! Curious to see how well this performs on the edge; our overhead there is always so scarce!
Amazing work as always, looking forward to taking this for a spin!
This is fantastic news! I spent hours fine-tuning my prompt to summarise text and output JSON, and it still has issues sometimes. Is this feature also available with Go?
Very annoying marketing, pretending to be anything other than just a wrapper around llama.cpp.
If anyone needs a more powerful way to constrain outputs, llama.cpp supports GBNF grammars:
https://github.com/ggerganov/llama.cpp/blob/master/grammars/...
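For example, a tiny grammar of my own (illustrative, not one of the files in that directory) that forces the output to be lines of the form `key: value`:

    root  ::= line+
    line  ::= key ": " value "\n"
    key   ::= [a-z]+
    value ::= [0-9]+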