The issue I believe most people have isn't with AI itself, though it gets shorthanded that way; it's the combination of a lack of safeguards and the fact that, as a species, we are absolutely stupid and careless. Knowing that about ourselves, it seems absurd that we currently have no controls on its development, and little to no controls on its use.
I don't really understand this concern.
AI isn't showing up anywhere unintentionally. It's not creeping out of any boxes on its own. It's not developing itself. The usual stunts of making AI chatbots do something stupid are more exercises in demonstrating weaknesses in either the prompts or in their training patterns.
Liability still rests with humans. Just as we expect a business to put someone trained on the other end of the phone, we don't require regulation guaranteeing any particular competency except in very specific fields, typically those that require licenses. And all those constraints are still there.
The majority of the angst about AI is about the unleashing of bad output on unsuspecting audiences -- this, again, is a human-driven outcome, not something AI did on its own. What is happening now is that people are upset with AI because other people are using it in ways that negatively impact them.
- Customer Service is getting harder
- People are being misled or deceived by people doing nefarious generative stuff
- People are facing career pressures due to how AI will reshape how work gets done
- And all of this is happening at unexpected speed
So people are getting exposed to bad outcomes -- and it's "everywhere and getting worse"... so they have a bad impression of AI, and many start repeating the tropes about it.
But none of these topics are about regulation, and they will never be part of government intervention to mandate that they don't happen. Just like the government never stepped in and said "you must provide live customer service agents!"
The areas where we really need protection and evolving conventions restricting usage aren't a problem yet -- because people are generally being cautious there. The rogues who aren't, like the DOGE bros, are seen for exactly what they are.
Right now, people are more upset about poor experiences due to AI-involved outcomes than about anything they actually know about AI itself.
I think most would be more optimistic if we had either fast progress with solid regulation, or slower progress with less regulation but more time to adapt.
Look at areas like self-driving -- there's plenty slowing that roll... what are the areas concerning you?
Something with this much potential, good and bad, shouldn't be crashing through the walls Kool-Aid man style.
It's not, really -- what's different is that every Tom, Dick, and Harry has it at their fingertips.
Just like the collective social IQ went down as soon as everyone had a camera and high-speed internet in their pocket.