Nvidia puts 30 years of high-value know-how in a 13B LLM

by rayval on 11/16/2023, 9:25 AM with 7 comments

by klaussilveira on 11/16/2023, 3:05 PM

They seem to be using RAG to prevent hallucinations (which I imagine would be really bad in their context).
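
For readers unfamiliar with the approach, here is a minimal retrieval-augmented generation (RAG) sketch in Python. It is not NVIDIA's pipeline; embed() and generate() are hypothetical stand-ins for an embedding model and the domain-tuned LLM. The idea is to retrieve the most relevant documentation chunks and prepend them to the prompt so the model answers from sources rather than from memory.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function; swap in any sentence embedder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    """Hypothetical LLM call; swap in the domain-tuned 13B model."""
    return f"<answer conditioned on: {prompt[:60]}...>"

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    # Rank chunks by cosine similarity to the query embedding.
    q = embed(query)
    scores = []
    for c in chunks:
        v = embed(c)
        scores.append(float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def answer(query: str, chunks: list[str]) -> str:
    # Ground the prompt in retrieved context to reduce hallucination.
    context = "\n".join(retrieve(query, chunks))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)
```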

by petters on 11/16/2023, 2:05 PM

The 13B Llama 2 model is not that good compared to the best ones. Maybe it was easier for them to fine-tune?
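
If ease of fine-tuning was indeed the motivation, a parameter-efficient setup like LoRA is the usual reason a 13B model is tractable on domain data. Below is a hedged sketch using Hugging Face transformers and peft; the checkpoint name and hyperparameters are illustrative assumptions, not details from the article.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-13b-hf"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA trains small low-rank adapter matrices instead of all 13B weights,
# which is what makes fine-tuning a mid-sized model on domain data cheap.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a tiny fraction is trainable
# ...train with Trainer or a custom loop on the domain corpus...
```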

by RecycledEle on 11/16/2023, 7:36 PM

This reminds me of a Star Trek (The Next Generation) character consulting an AI expert on the Holodeck.

by cyanydeez on 11/16/2023, 11:48 PM

How much do you think China is investing in industrial espionage for this?