

I think there’s an important semantic difference between performing worse and performing incorrectly. Tools like AI can underperform compared to humans and still be very useful and worth investing in, but only as long as they perform correctly.
OP is responding to your original question. If you’re asking whether it is safe to do it, then you’re worried about the consequences and wondering whether you should stop. That would mean you are “obeying a rule before it exists”.
As to your current comment, you should understand that current “AI” solutions do not think anything. If they were used for mass surveillance, they would be used to classify your actions into some predefined categories, and whoever operates them would use that classification in some (possibly nefarious) unknown way.
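
For concreteness, here is a minimal, entirely hypothetical sketch of what “classify into predefined categories” means here. The labels, features, and scoring rule are all made up for illustration; the point is that the model just maps an observed action onto a label the operator chose in advance, with no understanding involved, and the label is all the downstream system ever sees:

```python
from dataclasses import dataclass

# Hypothetical labels fixed by the operator before any classification happens.
PREDEFINED_LABELS = ["routine", "suspicious", "flagged"]

@dataclass
class Classification:
    label: str        # one of the operator's predefined labels
    confidence: float

def classify(action_features: dict) -> Classification:
    # Stand-in for a trained model: a fixed scoring rule, no "thinking".
    score = 0.9 if action_features.get("matches_watchlist") else 0.1
    label = PREDEFINED_LABELS[2] if score > 0.5 else PREDEFINED_LABELS[0]
    return Classification(label, score)

# Whatever the person does, the output is just a label plus a score,
# and what happens next is decided by whoever runs the system.
result = classify({"matches_watchlist": False})
print(result)  # Classification(label='routine', confidence=0.1)
```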