Long-form factuality in large language models

by rootforce on 3/29/2024, 4:16 PM with 3 comments

by rst on 3/29/2024, 8:34 PM

Hmmm... checking against external sources is an interesting idea -- but using Google as a source of ground truth is a bit tricky, given how often Google itself serves up confabulated, AI-generated crud (or other low-quality content) these days.

by cl42 on 3/31/2024, 5:46 AM

For those interested in search-augmented "reasoning", I implemented something similar in Emerging Trajectories[1], an open-source package that forecasts geopolitical and economic events. We extract facts[2] from various sources (Google searches, news articles, RSS feeds) and have the LLM generate a hypothesis about a metric.

We're tracking these forecasts to see how well they do on future events. For example, we're pitting LLMs against each other to predict March 2024 CPI[3].
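A minimal sketch of that loop (fetch a source, extract facts, then ask the LLM for a point estimate on the metric) might look like the following. The OpenAI calls, helper names, and example URL here are illustrative assumptions, not the actual package code linked in [2]:

    import requests
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def extract_facts(url: str) -> list[str]:
        """Fetch a page and ask the model to list concrete, dated facts, one per line."""
        page = requests.get(url, timeout=30).text[:20000]  # naive truncation to fit the context window
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": "List the concrete, dated facts in this page, one per line:\n\n" + page}],
        )
        return [line for line in resp.choices[0].message.content.splitlines() if line.strip()]

    def forecast(metric: str, facts: list[str]) -> str:
        """Ask the model for a point estimate of the metric, grounded only in the extracted facts."""
        prompt = ("Facts:\n" + "\n".join(facts)
                  + f"\n\nUsing only these facts, give a point estimate and a brief rationale for: {metric}")
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Example run against a hypothetical source page
    facts = extract_facts("https://www.bls.gov/news.release/cpi.nr0.htm")
    print(forecast("US CPI year-over-year change for March 2024", facts))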

[1] https://emergingtrajectories.com/

[2] Sample code: https://github.com/wgryc/emerging-trajectories/blob/main/eme...

[3] https://emergingtrajectories.com/a/statement/28