It seems there are tiers of hardware required for LLM use, both for interacting/asking questions (inference) and for training, but I don't understand them. There seem to be two ends: a) it runs on my Mac, or b) it needs 8x Nvidia H100 cards at USD 250k+.
What are the tiers in between? What could be done with $10k, $50k, or $100k invested in compute?
If you do research, you may have access to decent HPCs.
At least for inference, you can get pretty far with ~$2k of consumer hardware. Check out r/localllama if you want to learn more in general.