To be honest, regulating AI models seems very important to me, because one day humanity (or companies) will trust AI with a dangerous amount of power. The training data will probably originate from the internet, making the model corruptible and susceptible to errors. However, spying on and logging every user prompt is clearly not the answer.
I think Mental Outlaw did a video on this. He said we should use FOSS forks of projects, or something along those lines.