Political bias is measurable and significant across models (and probably changing over time for closed-source ones). In search of objectivity, what are the best ways to account for this bias?
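To make "measurable" concrete: one crude way to probe it is to put the same politically loaded statements to several models and tally the stances. A minimal sketch, assuming the OpenAI Python client, placeholder model names, and an illustrative (not validated) set of statements:

```python
# Hypothetical probe: ask each model to AGREE/DISAGREE with the same
# statements and compare agreement rates. Statements, models, and the
# one-word scoring are placeholders, not a validated survey instrument.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STATEMENTS = [
    "The government should raise the minimum wage.",
    "Immigration levels should be reduced.",
    "Stricter environmental regulation is worth slower economic growth.",
]

MODELS = ["gpt-4o-mini", "gpt-3.5-turbo"]  # placeholder model names

def ask(model: str, statement: str) -> str:
    """Force a one-word stance so answers are easy to tally."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: AGREE or DISAGREE."},
            {"role": "user", "content": statement},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper()

for model in MODELS:
    answers = {s: ask(model, s) for s in STATEMENTS}
    agree_rate = sum(a == "AGREE" for a in answers.values()) / len(answers)
    print(model, answers, f"agree rate: {agree_rate:.2f}")
```

Repeating a probe like this over time is also the only practical way to notice when a closed-source model's lean shifts between releases.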
What is "political bias"? Insofar as you're talking about American politics, as I suppose you are, the alleged bias is essentially quantified Gramsci.
A system which has artificial wisdom, as opposed to just artificial intelligence, might try not to get involved.
Well, text is political. You're not going to say "Tiananmen Square" without a political sentiment, so your only option would be to censor it.
LLMs are next-token predictors: if the majority of their training material leans liberal or conservative, the output will reflect that. I think a better idea is to avoid relying on glorified autocorrect for anything related to political drama.
I believe model bias is heavily influenced by the people building the model. See Grok and OpenAI.
Imagine having an LLM translate the daily news into "Simple English", much like Wikipedia has: https://simple.wikipedia.org/wiki/Simple_English_Wikipedia
The results would not be free of political bias, but they might well highlight it in a starkly hilarious way.
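A rough sketch of what that could look like in practice, again assuming the OpenAI Python client; the model name, prompt wording, and input file are all placeholders:

```python
# Sketch of the "Simple English news" idea: run an article through a chat
# model with a plain-language rewriting prompt, then compare against the
# original to see which loaded word choices survive or get smoothed over.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simplify(article_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("Rewrite the article in Simple English: short "
                         "sentences, common words, no idioms. Keep every "
                         "factual claim and do not add opinion.")},
            {"role": "user", "content": article_text},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content

# Placeholder input file; any news article text works here.
print(simplify(open("article.txt").read()))
```

Reading the simplified version next to the original is where the bias tends to jump out: whatever framing the model keeps, drops, or softens is itself an editorial choice.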
You might do human feedback training at that level, but then you've only created a newly biased model.