Llama 3 looks particularly good at tool calling
Groq's low latency is particularly good for tool calling
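For anyone curious, here's a minimal sketch of what tool calling looks like with Groq's OpenAI-compatible chat API via the groq Python SDK. The tool name/schema and the exact model id are illustrative, not from the thread:

```python
from groq import Groq  # pip install groq; reads GROQ_API_KEY from the environment

client = Groq()

# Hypothetical tool definition for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="llama3-70b-8192",  # example Llama 3 70B model id; check Groq's model list
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call the tool
)

# If the model chose to call a tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```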
Seems like two techs that will make coding obsolete :-)
Is the Python lib open-source? I could only find the JS lib for Groq.
What is the cost per million tokens for Llama 3 70B on Groq?
When is Mixtral 8x22B coming?
That's impressive. I asked it to summarise an article in 5 bullet points, and it generated the output at 812.81 T/s on Llama 3 8B.