Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I’ve been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.

The office has rolled out an app called MYIO. My knee-jerk reaction was to be unhappy about it, but I managed my emotions, took a breath, and vowed to give it a chance. After I was sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with a lot of Google shit and data sharing disabled, so I’m thinking that might be the cause. My phone is also 4-5 years old, so that could be it too.)

Luckily I was able to complete the steps on PC and activate that way. Once I was in the account, there were standard forms to sign, like the HIPAA release. There was also a form requesting that I consent to the use of AI. Hell to the NO. That’s a no for me dawg.jpg.

I’m really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.

If my doctor refuses to keep me as a patient unless I consent to AI, what should I do? What would you do? Hold what is a major line in the sand for me, or consent to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?

EDIT: This is the text of the AI agreement. As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.

This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.

Edit 2: I just wanted to say that I appreciate everyone here that commented. For the most part everyone brought up valid points and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third-party online app having access to recorded sessions for the purposes of transcription. They didn’t take issue with it.

Thank you everyone!

  • Scipitie@lemmy.dbzer0.com · 2 days ago

    Counter to the popular opinion here: for me it would be a clear yes in the situation you’re describing.

    The relationship with my doc is worth more to me than principles rooted in vague bad feelings.

    Note-taking and transcripts specifically are a use case I draw value from myself. I’d ask the doc how they’re using it, though.

    Still, I’d rather have my transcript public than go through that search for a matching doctor again.

    • WoodScientist@lemmy.world · 2 days ago

      Seriously. What happens if it hallucinates and decides that I said I was planning to harm myself or others? Could I end up being committed because an LLM thought I said something I didn’t?

      Or more realistically, how does this affect something like body language? When taking notes, a therapist does more than just write down the words you say. They also take note of any body language or behavior that might be relevant to your case. If AI is replacing all the note-taking, that leaves two possibilities. One is that the therapist simply won’t ever have a record of nonverbal communication. The second is even worse: you try to get an AI to create this record by feeding it a video of the session. Now you have even more subjectivity brought in.

      • Scipitie@lemmy.dbzer0.com · 2 days ago

        If you are truly serious: this is not how hallucinations work. It’s really best to think of it as “fancy autocomplete”. Hallucinations happen when the next token is too disconnected from what we as humans would call “belonging together”. But it’s all math after all.

        Limit the k value, turn down the temperature, and cut off the context size, and the issue of hallucinations is a non-topic for “transcribe and summarize”.

        Instead you get into what I’d call “stupid” territory, like what you’re describing (rough sketch of those settings below).
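
        To make the “limit k, turn down temperature, cap the context” part concrete, here’s a minimal sketch using the Hugging Face transformers API. The model name, prompt, and length limits are illustrative assumptions; nobody outside the vendor knows what the actual MYIO tool runs.

        ```python
        # Rough sketch of conservative decoding for a transcribe-and-summarize
        # pipeline. Model name, prompt, and limits are made-up placeholders.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_name = "example/clinical-summarizer"  # hypothetical model
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)

        transcript = "..."  # output of the speech-to-text step

        # "Cut off context size": truncate to what fits the model's window,
        # so the model never has to guess across missing or distant context.
        inputs = tokenizer(
            "Summarize this session:\n" + transcript,
            truncation=True,
            max_length=2048,
            return_tensors="pt",
        )

        # "Limit the k value, turn down temperature": a small top-k and a low
        # temperature keep each next token close to the transcript instead of
        # letting the sampler wander into invented content.
        summary_ids = model.generate(
            **inputs,
            do_sample=True,
            top_k=10,
            temperature=0.3,
            max_new_tokens=256,
        )

        # Decode only the newly generated tokens, not the prompt.
        new_tokens = summary_ids[0][inputs["input_ids"].shape[-1]:]
        print(tokenizer.decode(new_tokens, skip_special_tokens=True))
        ```

        With settings like these, the failure mode shifts from fabricating content toward the dumber mislabeling described below (e.g. tagging every “we should” as a task).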

        I fully agree with your second point, and it’s the reason I’d ask the doc directly. To give a personal anecdote: the transcript itself helps me focus on exactly the things you’ve described: who’s confused? Where was there agreement? Where did people just not speak up?

        A specific hallucination example I see every other day is tasks: the thing “thinks” that “we should” or “you must” always mark tasks and outcomes, which is utter bullshit - but I know that, and using the transcript helps me focus on the important part, the humans.