• hushable@lemmy.world
    2 months ago

    Wasn’t there a guy at Google who claimed they had a conscious AGI, and his proof was that he asked the chatbot if it was conscious and the answer was “yes”?

    • lugal@lemmy.dbzer0.com
      2 months ago

      It was a bit more than that. The AI was expressing fear of death and the like, but nothing that wasn’t in the training data.

      • Schadrach@lemmy.sdf.org
        2 months ago

        They tend to do that and go on existential rants after a session runs too long. Figuring out how to stop them from spiraling into existential dread has been an actual engineering problem they’ve needed to solve.