Wasn’t there a guy at Google who claimed they had a conscious AGI, and his proof was asking the chatbot if it was conscious, and the answer was “yes”?
It was a bit more than that. The AI was expressing fear of death and stuff but nothing that wasn’t in the training data.
They tend to do that and go on existential rants after a session runs too long. Figuring out how to stop them from crashing out into existential dread has been an actual engineering problem they’ve needed to solve.


