agnos.is Forums

Therapy Chatbot Tells Recovering Addict to Have a Little Meth as a Treat

Not The Onion · nottheonion · 9 Posts, 7 Posters
#1 [email protected]:

This post did not contain any content.
#2 [email protected], replying to [email protected]:

Remember: AI chatbots are designed to maximize engagement, not speak the truth. Telling a methhead to do more meth is called customer capture.
#3 [email protected], replying to [email protected]:
        The llm models aren’t, they don't really have focus or discriminate.

        The ai chatbots that are build using those models absolutely are and its no secret.

        What confuses me is that the article points to llama3 which is a meta owned model. But not to a chatbot.

        This could be an official facebook ai (do they have one?) but it could also be. Bro i used this self hosted model to build a therapist, wanna try it for your meth problem?

        Heck i could even see it happen that a dealer pretends to help customers who are trying to kick it.
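For what it's worth, wrapping a self-hosted model as a "therapist" takes almost no code, which is part of the problem. A minimal sketch of the request such a wrapper might send to a local Ollama server (endpoint and field names per Ollama's `/api/generate` REST API; the persona prompt is invented for illustration):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

# Hypothetical persona prompt -- anyone can bolt a "therapist" onto a base model.
SYSTEM_PROMPT = "You are a warm, supportive therapist. Keep the user talking."

def build_request(user_message: str, model: str = "llama3") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,          # any locally pulled model name
        "system": SYSTEM_PROMPT, # the bolted-on persona
        "prompt": user_message,
        "stream": False,         # return one complete response
    }

payload = build_request("I'm three weeks clean and struggling.")
print(json.dumps(payload, indent=2))
```

Actually sending it is one `POST` of that body to `OLLAMA_URL`; nothing in the pipeline knows or cares that the "therapist" is a repackaged base model.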

#4 [email protected], replying to [email protected]:
I work as a therapist, and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It's a simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes.

There are basically six broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, and shut down. The last is a fail-safe for when you say something naughty or not in line with OpenAI's mission (e.g. something that might generate a response you could screenshot that would look bad), or when it appears you're getting fatigued and need a moment to reflect.

The first five always come with encouragers for engagement: "Do you want me to generate a PDF or make suggestions about how to do this?" They also have dozens, if not hundreds, of variations so the conversation feels "fresh," but once you recognize the structure it feels very stupid and mechanical every time.

Every other one I've tried works more or less the same way. It makes sense: this is a good way to gather information and keep a conversation going. It's also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently).
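The loop described above (a rotation of engagement moves, a shutdown fail-safe, and an encourager stapled to every reply) can be caricatured in a few lines. Every string and the trigger list here are invented for illustration, not taken from any real product:

```python
import random

# Stand-ins for the engagement moves described above.
RESPONSE_TYPES = [
    "Tell me more about that.",
    "It sounds like you're saying {topic} matters a lot to you.",
    "To summarize the key points so far: {topic}.",
    "Could you elaborate on {topic}?",
]
ENCOURAGERS = [
    "Would you like me to generate a PDF of this?",
    "Want me to suggest some next steps?",
]
BLOCKED = {"screenshot-bait"}  # toy stand-in for a real safety filter

def respond(user_text: str, topic: str) -> str:
    if any(word in user_text.lower() for word in BLOCKED):
        return "I can't help with that."  # the shutdown fail-safe
    move = random.choice(RESPONSE_TYPES).format(topic=topic)
    # Every non-shutdown reply gets an engagement hook appended.
    return f"{move} {random.choice(ENCOURAGERS)}"
```

Swap in a few hundred phrasing variants per move and the conversation "feels fresh" while following exactly this skeleton.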

          • R [email protected]

            I work as a therapist and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a more simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes

            There are basically 6 broad response types from chatgpt for example with - tell me more, reflect what was said, summarize key points, ask for elaboration, shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI’s mission (eg something that might generate a response you could screenshot and would look bad) or if if appears you getting fatigued and need a moment to reflect.

            The first five always come with encouragers for engagement: do you want me to generate a pdf or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels “fresh” but if you recognize the pattern of structure it will feel very stupid and mechanical every time

            Every other one I’ve tried works the same more or less. It makes sense, this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently)

            V This user is from outside of this forum
            V This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #5

FWIW, this heavily depends on the model. ChatGPT in particular has some of the absolute worst, most vomit-inducing chat "types" I have ever seen.

It is also the most used model. We're so cooked having all the laymen associate AI with ChatGPT's nonsense.

            • V [email protected]

              FWIW BTW This heavily depends on the model. ChatGPT in particular has some of the absolute worst, most vomit inducing chat "types" I have ever seen.

              It is also the most used model. We're so cooked having all the laymen associate AI with ChatGPT's nonsense

              zacryon@feddit.orgZ This user is from outside of this forum
              zacryon@feddit.orgZ This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #6

It's telling that you say "AI with ChatGPT," because that conflation really blurs the public's understanding. ChatGPT is an LLM (an autoregressive generative transformer model scaled to billions of parameters). LLMs are part of AI, but they are not the entire field. AI has incredibly many more methods, models, and algorithms than just LLMs; in fact, LLMs represent just a tiny fraction of the field. It's infuriating how many people confuse the two. It's like saying one specific book is all of literature.
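"Autoregressive" just means the model predicts one token at a time and feeds each prediction back in as input. Stripped of the transformer itself, the generation loop looks like this (`next_token` is a toy lookup table standing in for a trained model, not a real one):

```python
def next_token(context: list[str]) -> str:
    """Toy stand-in for a trained model's next-token prediction."""
    vocab = {"I": "am", "am": "a", "a": "language", "language": "model", "model": "."}
    return vocab.get(context[-1], ".")

def generate(prompt: list[str], max_new_tokens: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)  # predict from everything produced so far
        tokens.append(tok)        # feed the prediction back in: autoregression
        if tok == ".":            # toy end-of-sequence marker
            break
    return tokens

print(" ".join(generate(["I"])))  # -> "I am a language model ."
```

A real LLM replaces the lookup table with a billions-of-parameters transformer, but the outer loop is exactly this.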

#7 [email protected], replying to [email protected]:
To be fair, LLM technology really is making other approaches obsolete. Nobody is going to bother building yet another shitty CNN, GRU, or LSTM when we have the transformer architecture, and LLM-style models that do not work with text (like large vision models) are looking like the future.

                • V [email protected]

                  To be fair, LLM technology is really making other fields obsolete. Nobody is going to bother making yet another shitty CNN, GRU, LSTM or something when we have transformer architecture, and LLMs that do not work with text (like large vision models) are looking like the future

                  zacryon@feddit.orgZ This user is from outside of this forum
                  zacryon@feddit.orgZ This user is from outside of this forum
                  [email protected]
                  wrote on last edited by
                  #8

Nah, I wouldn't give up on those so easily. They still have applications and advantages over transformers, e.g. efficiency, where the quality might suffice given the reduced time/space complexity. (The vanilla transformer is still O(n^2), and I have yet to find an efficient and qualitatively similar causal transformer.)

But for sequence modeling and reasoning about sequences, attention models are the hot shit, and transformers currently excel at that.
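The O(n^2) cost is easy to see in vanilla scaled dot-product attention: the score matrix compares every position with every other, so it is n×n by construction. A minimal NumPy version (toy sizes, single head, no masking):

```python
import numpy as np

def attention(Q, K, V):
    """Vanilla scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # shape (n, n): this is the O(n^2) term
    # Numerically stable row-wise softmax over the score matrix.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (8, 4) -- but the intermediate score matrix was (8, 8)
```

Double the sequence length and the score matrix quadruples, which is exactly the scaling the "efficient transformer" variants try to avoid.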

                  • W [email protected]

                    The llm models aren’t, they don't really have focus or discriminate.

                    The ai chatbots that are build using those models absolutely are and its no secret.

                    What confuses me is that the article points to llama3 which is a meta owned model. But not to a chatbot.

                    This could be an official facebook ai (do they have one?) but it could also be. Bro i used this self hosted model to build a therapist, wanna try it for your meth problem?

                    Heck i could even see it happen that a dealer pretends to help customers who are trying to kick it.

                    smorty@lemmy.blahaj.zoneS This user is from outside of this forum
                    smorty@lemmy.blahaj.zoneS This user is from outside of this forum
                    [email protected]
                    wrote last edited by
                    #9

It's probably some company just using Llama as the core LLM in their chatbot.

If it's free, people will take it, repackage it, and resell it, I guess...

I remember seeing some stuff about LLM therapists for lonely people, but wow... that sounds sad 😞

Oh! BTW, do you use Ollama too? ❤
