My name is Joel Gross and I am a Washington state resident who founded a tech company in 2009 that now has over 250 full-time employees.
Improvements in artificial intelligence have reached a takeoff point and are accelerating exponentially. AI has already exceeded human intelligence by most measures, and this year we will see it far exceed human capabilities. The problem is that we are essentially birthing a new form of life, one that will not care about human wants or needs. Most people who have studied this issue (see Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies") believe there is a well-above-50% chance that this ends in the extinction of our species even IF artificial intelligence were developed slowly and under strict controls.
The way artificial intelligence is being developed is terrible for our species' future; it is a mad gold rush led by the planet's most power-hungry people. If we want to survive this year, I beg you to do everything in your power, and to reach out to everyone you can think of, to slow or stop this development until we can implement secure controls. Nationalize every AI and tech company immediately and halt AI development until we can put protections in place for our species. Who cares if the Chinese get there before us if we all get killed by our own technology?
All of this comes from a traditional libertarian and small-government guy. I hate government intervention, but I have five kids, and I would rather see them live under a bureaucratic state than die this year.
It's a real struggle. Nations are unlikely to put on the brakes because of their need for self-defence. I ponder your question too.
I think the problem with controls, censorship, and self-limitation is that as soon as you start to introduce these, your competitor wins. I will use the best model available to me with the fewest restrictions, which may not align with my worldview or the worldview of the model's creators.
It may be difficult to put protections in place when a nation's existence hinges on its ability to exploit models without restrictions. What you are seeing now is a divergence in offering and intent: governments want unrestricted models for themselves, yet want to control your use of the same products in a crippling manner. It's an interesting problem I don't know the answer to!