Not sure if this is even possible, but is there any site or benchmark for testing which AI model is best for the task of coding?
Like Claude 3.5 vs GPT-4o vs Gemini 2, etc.
What exists beyond our opinions to more objectively measure the quality of the code these models produce?
As far as I know, we don't have any way to objectively measure or compare "quality", "coding performance", or "best" even for code produced by human programmers.
You may find this useful:
https://www.gitclear.com/coding_on_copilot_data_shows_ais_do...
Or this analysis if you don't want to sign up to download that white paper:
The two that I know of are SWE-bench and CodeElo. SWE-bench is oriented towards "real world" performance (resolution of GitHub issues), and CodeElo is oriented towards competitive programming (CodeForces).
https://www.swebench.com/
https://codeelo-bench.github.io/
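If you want a feel for what SWE-bench actually measures, each instance is a real GitHub issue paired with the repository state and the tests a fix should make pass. Below is a minimal sketch for poking at a few instances, assuming the data is mirrored on Hugging Face as "princeton-nlp/SWE-bench_Lite" and that the field names (instance_id, repo, problem_statement) match the published schema; it's illustrative only, not the official evaluation harness.

    # Minimal sketch: inspect a few SWE-bench Lite instances.
    # Assumes: pip install datasets, and that the dataset name and
    # field names below match what the SWE-bench authors published.
    from datasets import load_dataset

    ds = load_dataset("princeton-nlp/SWE-bench_Lite", split="test")

    for row in ds.select(range(3)):
        print(row["instance_id"])               # repo + issue identifier
        print(row["repo"])                      # the GitHub repository
        print(row["problem_statement"][:300])   # the issue text the model sees
        print("-" * 40)

Actually scoring a model means generating a patch for each instance and running the repo's test suite against it, which is what the official harness on swebench.com automates.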