Skip to content
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups
Skins
  • Light
  • Cerulean
  • Cosmo
  • Flatly
  • Journal
  • Litera
  • Lumen
  • Lux
  • Materia
  • Minty
  • Morph
  • Pulse
  • Sandstone
  • Simplex
  • Sketchy
  • Spacelab
  • United
  • Yeti
  • Zephyr
  • Dark
  • Cyborg
  • Darkly
  • Quartz
  • Slate
  • Solar
  • Superhero
  • Vapor

  • Default (No Skin)
  • No Skin
Collapse
Brand Logo

agnos.is Forums

  1. Home
  2. Lemmy Shitpost
  3. Just a little... why not?

Just a little... why not?

Scheduled Pinned Locked Moved Lemmy Shitpost
lemmyshitpost
55 Posts 40 Posters 0 Views
  • Oldest to Newest
  • Newest to Oldest
  • Most Votes
Reply
  • Reply as topic
Log in to reply
This topic has been deleted. Only users with topic management privileges can see it.
  • Z [email protected]

    Brother I'm aware of how it works, most uncensored models made by the community like the one you used are made for sexual role playing, or at least thats the largest crowd of home users of uncensored llms IMO. I'm not arguing with you on why or what the model does, I'm saying its intended design for these models. No its probably not great for wackos to play around with, but freedom is scary.

    M This user is from outside of this forum
    M This user is from outside of this forum
    [email protected]
    wrote last edited by [email protected]
    #44

    I agree. I guess my point was that people need to be aware of how crazy AI models can be and always be careful about sensitive topics with them.

    If I were to use an LLM as a therapist, I would be extremely skeptical of anything it says, and doubly so when it confirms my own beliefs.

    Z 1 Reply Last reply
    1
    • M [email protected]

      I agree. I guess my point was that people need to be aware of how crazy AI models can be and always be careful about sensitive topics with them.

      If I were to use an LLM as a therapist, I would be extremely skeptical of anything it says, and doubly so when it confirms my own beliefs.

      Z This user is from outside of this forum
      Z This user is from outside of this forum
      [email protected]
      wrote last edited by
      #45

      Fair enough. I wouldn't even consider seeing a therapist that used an llm in any capacity, let alone letting an llm be the therapist. Sadly I think the people that would make the mistake of doing just that probably wont be swayed, but fair enough to raise awareness.

      M 1 Reply Last reply
      0
      • Z [email protected]

        Fair enough. I wouldn't even consider seeing a therapist that used an llm in any capacity, let alone letting an llm be the therapist. Sadly I think the people that would make the mistake of doing just that probably wont be swayed, but fair enough to raise awareness.

        M This user is from outside of this forum
        M This user is from outside of this forum
        [email protected]
        wrote last edited by
        #46

        Sadly with how this tech is going I don't think it's possible to stop it from being used like that by the masses.

        I just hope that the people who do, would at least be aware of it's shortcomings.

        I myself would never use it like that, but I understand the appeal. There is no awkwardness because it isn't a person, it tends to be extremely supportive and agreeable, and many people perceive it as intelligent. All of this combined makes it sound like a really good therapist, but that is of course missing the core issues of this tech.

        1 Reply Last reply
        0
        • D [email protected]

          My friend with schizoaffective disorder decided to stop taking her meds after a long chat with ChatGPT as it convinced her she was fine to stop taking them. It went... incredibly poorly as you'd expect. Thankfully she's been back on her meds for some time.

          I think the people programming these really need to be careful of mental health issues. I noticed that it seems to be hard coded into ChatGPT to convince you NOT to kill yourself, for example. It gives you numbers for hotlines and stuff instead. But they should probably hard code some other things into it that are potentially dangerous when you ask it things. Like telling psych patients to go off their meds or telling meth addicts to have just a little bit of meth.

          J This user is from outside of this forum
          J This user is from outside of this forum
          [email protected]
          wrote last edited by [email protected]
          #47

          Let's not blame "people programming these." The mathmaticians and programmers don't write LLMs by hand. Blame the business owners for pushing this as a mental health tool instead.

          D P 2 Replies Last reply
          1
          • D [email protected]

            As much as I hate AI, I kind of feel this is the equivalent to "I give that internet a month".

            J This user is from outside of this forum
            J This user is from outside of this forum
            [email protected]
            wrote last edited by [email protected]
            #48

            Meh chatbots are closer to metaverse than internet at this point. Pure hype-marketing.

            AI and Machine Learning will continue but chatbot trend may as well die for 8th time. (AI dungeon, alexa, siri, eliza, so on.)

            1 Reply Last reply
            0
            • J [email protected]

              Let's not blame "people programming these." The mathmaticians and programmers don't write LLMs by hand. Blame the business owners for pushing this as a mental health tool instead.

              D This user is from outside of this forum
              D This user is from outside of this forum
              [email protected]
              wrote last edited by
              #49

              Well I mean I guess I get what you're saying, but I don't necessarily agree. I don't really ever see it being pushed as a mental health tool. Rather I think the sycophantic nature of it (which does seem to be programmed) is the reason for said issues. If it simply gave the most "common" answers instead of the most sycophantic answers, I don't know that we'd have such a large issue of this nature.

              1 Reply Last reply
              0
              • K [email protected]

                That's what people (and many articles about LLMs "learning how to bribe others" and similar) fail to understand about LLMs:

                They do not understand their internal state. ChatGPT does not know it's got a creator, an administrator, a relationship to OpenAI, an user, a system prompt. It only replies with the most likely answer based on the training set.

                When it says "I'm sorry, my programming prevents me from replying that" you feel like it calculated an answer, then put it through some sort of built in filtering, then decided not to reply. That's not the case. The training is carefully manipulated to make "I'm sorry, I can't answer that" the perceived most likely answer to that query. As far as ChatGPT is concerned, "I can't reply that" is the same as "cheese is made out of milk", both are just words likely to be stringed together given the context.

                So getting to your question: sure, you can make ChatGPT reply with the training's set vision of "what's the most likely order of words and tone a LLM would use if it roleplayed the user as some sort of owner" but that changes fundamentally nothing about the capabilities and limitations, except it will likely be even more sycophantic.

                C This user is from outside of this forum
                C This user is from outside of this forum
                [email protected]
                wrote last edited by
                #50

                Yeah it basically goes character by character and asks “given the prompt the user entered, what’s the most likely character that follows the one I just spat out?”

                Sometimes people hook up APIs that feed it data that goes through the process above too to make it “smarter”.

                It has no reasoning or anything. It doesn’t “know” anything or have any agenda. It’s just computing numbers on the fly.

                1 Reply Last reply
                0
                • M [email protected]

                  The full article is kind of low quality but the tl;dr is that they did a test pretending to be a taxi driver who felt he needed meth to stay awake and llama (Facebook's LLM) agreed with him instead of pushing back. I did my own test with ChatGPT after reading it and found that I could get ChatGPT to agree that I was God and that I created the universe in only 5 messages. Fundamentally these things are just programmed to agree with you and that is really dangerous for people who have mental health problems and have been told that these are impartial computers.

                  K This user is from outside of this forum
                  K This user is from outside of this forum
                  [email protected]
                  wrote last edited by
                  #51

                  No, no, this is the way of the future and totally worth billions upon billions of data centers and electricity

                  1 Reply Last reply
                  0
                  • E [email protected]

                    about 20 million calories in a single gram. That shit is THICC

                    A This user is from outside of this forum
                    A This user is from outside of this forum
                    [email protected]
                    wrote last edited by
                    #52

                    Plus, as an added bonus, you don't need a flashlight ever again because of the pale green glow you emit afterwards.

                    Source: Every cartoon from my childhood

                    1 Reply Last reply
                    0
                    • M [email protected]

                      I highly recommend people try uncensored local models. Once it is uncensored you really get to understand how insane it can be and how the only thing stopping it from being bat shit is the quality of censorship.

                      See the following chat from the ollama model "huihui_ai/gemma3-abliterated"

                      E This user is from outside of this forum
                      E This user is from outside of this forum
                      [email protected]
                      wrote last edited by
                      #53

                      Uncensored models are really funny, I like seeing how far I can go

                      1 Reply Last reply
                      0
                      • M [email protected]

                        That's the point though...

                        Without censorship it just does what it thinks would be best fitting. It means that if the AI thinks that encouraging you to take drugs, suicide, murder, etc would fit best, then it will do that.

                        Any censored model would immediately catch this specific case and give a more "appropriate" response such as "As an AI model I can't help you with that..." But given a long enough and complex enough chat even a censored model might bypass the censorship and give an inappropriate response.

                        This was just a SFW example, the results would be the same even if I asked it truly terrible things.

                        E This user is from outside of this forum
                        E This user is from outside of this forum
                        [email protected]
                        wrote last edited by
                        #54

                        Yea without safeguards, LLMs just tell you what you want to heard, but they get "dumber" with safeguards as well

                        1 Reply Last reply
                        0
                        • J [email protected]

                          Let's not blame "people programming these." The mathmaticians and programmers don't write LLMs by hand. Blame the business owners for pushing this as a mental health tool instead.

                          P This user is from outside of this forum
                          P This user is from outside of this forum
                          [email protected]
                          wrote last edited by [email protected]
                          #55

                          Ehhhh, I'll blame both. I'm tired of seeing so many "I was just following orders" comments on this site.

                          You have control over what type of organization you work for.

                          1 Reply Last reply
                          0
                          Reply
                          • Reply as topic
                          Log in to reply
                          • Oldest to Newest
                          • Newest to Oldest
                          • Most Votes


                          • Login

                          • Login or register to search.
                          • First post
                            Last post
                          0
                          • Categories
                          • Recent
                          • Tags
                          • Popular
                          • World
                          • Users
                          • Groups