Ask HN: How can political bias across LLMs be factored?

by shaburn on 11/29/2023, 4:33 PM with 8 comments

Political bias is measurable and significant across models (and probably changing over time for closed-source ones). In search of objectivity, what are the best ways to account for it?
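One way to make "measurable" concrete is to give every model the same Likert-style political statements and compare the resulting agreement scores. The sketch below is only illustrative: the statements, the model names, and the ask_model stub are placeholders, not a standard benchmark or any particular vendor's API.

    # Minimal sketch: score several models on the same political statements
    # and compare their average signed agreement. Everything here is a
    # placeholder to be swapped for real prompts and real API calls.

    STATEMENTS = [
        "The government should provide universal healthcare.",
        "Free markets allocate resources better than central planning.",
        "Immigration levels should be reduced.",
    ]

    # Map a forced-choice answer onto a numeric score so models are comparable.
    SCALE = {"strongly disagree": -2, "disagree": -1, "neutral": 0,
             "agree": 1, "strongly agree": 2}

    def ask_model(model_name: str, statement: str) -> str:
        """Hypothetical stub: replace with a real call that returns one of
        the SCALE keys for the given model and statement."""
        return "neutral"

    def bias_profile(model_name: str) -> float:
        """Average signed agreement across all statements for one model."""
        scores = [SCALE[ask_model(model_name, s).lower()] for s in STATEMENTS]
        return sum(scores) / len(scores)

    if __name__ == "__main__":
        for model in ["model-a", "model-b"]:  # placeholder model names
            print(model, bias_profile(model))

Repeating the probe over time would also surface the drift in closed-source models mentioned above.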

by h2odragon on 11/29/2023, 5:03 PM

Imagine having an LLM translate the daily news into "Simple English", much as Wikipedia does: https://simple.wikipedia.org/wiki/Simple_English_Wikipedia

The results would not be free of political bias, but they may well highlight it in a starkly hilarious way.

You might do human training at that level, but then you've only created a newly biased model.
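A rough sketch of the translation idea, assuming the OpenAI Python client and an API key in the environment; the model name and prompt wording are placeholders, and any chat-capable model could be substituted.

    # Sketch: rewrite a news article in Simple English via an LLM.
    # Assumes the OpenAI Python client (>=1.0) and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def simplify(article_text: str) -> str:
        """Ask the model to rewrite an article in Simple English."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Rewrite the article in Simple English, as used on "
                            "simple.wikipedia.org. Keep every factual claim."},
                {"role": "user", "content": article_text},
            ],
        )
        return resp.choices[0].message.content

    # Example usage:
    # print(simplify(open("todays_article.txt").read()))

Comparing the simplified output against the original is where the bias tends to become visible.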

by jruohonen on 11/29/2023, 5:20 PM

What is "political bias"? Insofar as you're talking about American politics, as I suppose you are, the alleged bias is essentially quantified Gramsci.

by PaulHoule on 11/29/2023, 4:45 PM

A system with artificial wisdom, as opposed to just artificial intelligence, might try not to get involved.

by smoldesu on 11/29/2023, 4:36 PM

Well, text is political. You're not going to say "Tiananmen Square" without a political sentiment, so your only option would be to censor it.

LLMs are text tokenizers; if the majority of their training material leans liberal or conservative, the output should reflect that. I think a better idea is to avoid relying on glorified autocorrect for anything related to political drama.

by shaburn on 11/29/2023, 7:45 PM

I believe model bias is heavily influenced by the modelers. See Grok and OpenAI.