agnos.is Forums

WhatsApp Deploys AI, for Those Incapable of Comprehending Straightforward Messages From Their Friends and Family

Not The Onion · nottheonion · 39 Posts · 24 Posters
• [email protected] (#14), in reply to the original post by [email protected]:

  (no text content captured)
• [email protected] (#15), in reply to the original post by [email protected]:

  Ahh, the intellect of the average American on display. So... óóóh, is that a donut?
• [email protected] (#16), in reply to the original post by [email protected]:

  To be fair, my father tends to make messages quite incomprehensible by adding irrelevant information all over the place, sometimes going on for multiple screens when it could easily have been a two- or three-sentence message.

  Sadly, I think AI would be even worse at picking out which information is important. But I understand why people want it.

  As for very active group chats, I'm not gonna read 25+ messages a day, but being able to get the gist of it at a glance would be awesome.
        • A [email protected]

          Ahh, intellect of the average American on display. So... -óóóh, is that a Donut?

          L This user is from outside of this forum
          L This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #17

          Why are you making fun of topological toroidal surfaces mr smarty pants

          1 Reply Last reply
          1
          • A [email protected]

            Ahh, intellect of the average American on display. So... -óóóh, is that a Donut?

            P This user is from outside of this forum
            P This user is from outside of this forum
            [email protected]
            wrote on last edited by [email protected]
            #18

            People in the US don't use WhatsApp, for the most part.

            N A G 3 Replies Last reply
            5
• [email protected] (#19), in reply to [email protected]:

  > > An LLM could be trained on the way a specific person communicates over time
  >
  > Are there any companies doing anything similar to this? From what I've seen, companies avoid this stuff like the plague; their LLMs are always frozen, with no custom training. Training takes a lot of compute, but it also carries a huge risk of the LLM going off the rails and saying bad things that could get the company into trouble or generate bad publicity. There's also the disk space per customer and the loading time of individual models.
  >
  > The only hope for your use case is that the LLM has a large enough context window to look at previous examples from your chat and use those for each request, but that isn't the same thing as training.

  There are plenty of people and organisations doing stuff like this; there are plenty of examples on Hugging Face, though typically it's to get an LLM to communicate in a specific manner (e.g. one trained on Lovecraft's works). People drastically overestimate the compute time and resources that training and running an LLM take; do you think Microsoft could force their AI onto every single Windows computer if it were as challenging as you imply? Also, you don't need to start from scratch: take a model that's already robust and fine-tune it with additional training data, or, for a hack job, just merge a LoRA into the base model.

  The intent, by the way, isn't for the LLM to respond for you; it's just to interpret a message and offer suggestions about what a message means, or to rewrite it to be clearer (while still displaying the original).
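  The "merge a LoRA into the base model" step mentioned above is, mathematically, just folding a low-rank weight update into the frozen base weights. A toy NumPy sketch of that merge (dimensions and scaling here are illustrative; a real adapter applies this per target layer):

  ```python
  import numpy as np

  # LoRA expresses a weight update as a low-rank product: dW = (alpha / r) * B @ A,
  # where A is (r x d_in), B is (d_out x r), and the rank r is small.
  # "Merging" folds dW into the frozen base weight once, so inference
  # afterwards needs no extra adapter machinery.

  rng = np.random.default_rng(0)
  d_out, d_in, r, alpha = 8, 6, 2, 4

  W = rng.normal(size=(d_out, d_in))   # frozen base weight
  A = rng.normal(size=(r, d_in))       # trained low-rank factor
  B = rng.normal(size=(d_out, r))      # trained low-rank factor

  W_merged = W + (alpha / r) * (B @ A)  # the merge step

  # The merged weight matches base-plus-adapter applied separately.
  x = rng.normal(size=d_in)
  y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
  y_merged = W_merged @ x
  assert np.allclose(y_adapter, y_merged)
  ```

  Libraries such as Hugging Face's peft expose exactly this operation (e.g. `merge_and_unload`), which is what makes the "hack job" route cheap compared to full retraining.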
              • D [email protected]

                the same thing a person could do

                asking for clarification seems like a reasonable thing to do in a conversation.

                A tool is not about to do that because it would feel weird and creepy for it to just take over the conversation.

                A This user is from outside of this forum
                A This user is from outside of this forum
                [email protected]
                wrote on last edited by [email protected]
                #20

                The intent isn’t for the LLM to respond for you, it’s just to interpret a message and offer suggestions on what a message means or rewrite it to be clear (while still displaying the original).

                1 Reply Last reply
                0
                • A [email protected]

                  There are plenty of people and organisations doing stuff like this, there are plenty of examples on HuggingFace, though typically it's to get an LLM to communicate in a specific manner (e.g. this one trained on Lovecraft's works). People drastically overestimate the amount of compute time/resources training and running an LLM takes; do you think Microsoft could force their AI on every single Windows computer if it was as challenging as you imply? Also, you do not need to start from scratch. Get a model that's already robust and developed and fine tune it with additional training data, or for a hack job, just merge a LoRA into the base model.

                  The intent, by the way, isn't for the LLM to respond for you, it's just to interpret a message and offer suggestions on what a message means or rewrite it to be clear (while still displaying the original).

                  die4ever@retrolemmy.comD This user is from outside of this forum
                  die4ever@retrolemmy.comD This user is from outside of this forum
                  [email protected]
                  wrote on last edited by [email protected]
                  #21

                  Huggingface isn't customer-facing, it's developer-facing. Letting customers retrain your LLM sounds like a bad idea for a company like Meta or Microsoft, it's too risky and could make them look bad. Retraining an LLM for Lovecraft is a totally different scale than retraining an LLM for hundreds of millions of individual customers.

                  do you think Microsoft could force their AI on every single Windows computer if it was as challenging as you imply?

                  It's a cloned image, not unique per computer

                  A 1 Reply Last reply
                  2
• [email protected] (#22), in reply to [email protected]:

  > > An LLM could be trained on the way a specific person communicates over time
  >
  > Are there any companies doing anything similar to this? From what I've seen, companies avoid this stuff like the plague; their LLMs are always frozen, with no custom training. Training takes a lot of compute, but it also carries a huge risk of the LLM going off the rails and saying bad things that could get the company into trouble or generate bad publicity. There's also the disk space per customer and the loading time of individual models.
  >
  > The only hope for your use case is that the LLM has a large enough context window to look at previous examples from your chat and use those for each request, but that isn't the same thing as training.

  My friend works for a startup that does exactly that: it trains AIs on the conversations and responses of a specific person (some business higher-ups) for "coaching" and "mentoring" purposes. I don't know how well it works.
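  The context-window route described in the quoted post (showing the model previous examples from the chat instead of training on them) is ordinary few-shot prompting. A minimal sketch, where the helper function and chat data are invented for illustration:

  ```python
  # Sketch of the "large context window" approach: rather than fine-tuning
  # on a person's chat history, recent (message, meaning) pairs are pasted
  # into the prompt as examples for in-context learning.

  def build_prompt(history: list[tuple[str, str]], new_message: str) -> str:
      """Assemble a prompt showing prior (message, meaning) pairs
      before asking the model to interpret a new message."""
      lines = ["Interpret the final message in the style shown by the examples.", ""]
      for message, meaning in history:
          lines.append(f"Message: {message}")
          lines.append(f"Meaning: {meaning}")
          lines.append("")
      lines.append(f"Message: {new_message}")
      lines.append("Meaning:")
      return "\n".join(lines)

  # Made-up chat data for illustration.
  history = [
      ("brb 5", "I'll be right back in about five minutes."),
      ("u up for sat?", "Are you free on Saturday?"),
  ]
  prompt = build_prompt(history, "cya @ 8?")
  print(prompt)
  ```

  The resulting string would be sent to whatever completion endpoint the app uses; nothing is retrained, which is why the post calls it "not the same thing as training".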
                    • Y [email protected]

                      My friend works for a startup that does exactly that - trains AIs on conversations and responses from a specific person (some business higher-ups) for purposes of "coaching" and "mentoring". I don't know how well it works.

                      die4ever@retrolemmy.comD This user is from outside of this forum
                      die4ever@retrolemmy.comD This user is from outside of this forum
                      [email protected]
                      wrote on last edited by
                      #23

                      it probably works pretty well when it's tested and verified instead of unsupervised

                      and for a small pool of people instead of hundreds of millions of users

                      1 Reply Last reply
                      0
• [email protected] (#24), in reply to [email protected]:

  > Hugging Face isn't customer-facing; it's developer-facing. Letting customers retrain your LLM sounds like a bad idea for a company like Meta or Microsoft: it's too risky and could make them look bad. Retraining an LLM for Lovecraft is a totally different scale from retraining an LLM for hundreds of millions of individual customers.
  >
  > It's a cloned image, not unique per computer.

  Hugging Face being developer-facing is completely irrelevant, considering the question you asked was whether I was aware of any companies doing anything like this.

  Your concern that companies like Meta and Microsoft are too scared to let users retrain their models is also irrelevant, considering both companies have already released models that anyone can retrain or checkpoint-merge: Llama by Meta and Phi by Microsoft.

  > It's a cloned image, not unique per computer.

  Microsoft's Copilot works off a base model, yes, but it's an example of how LLMs aren't as compute-intensive as they're made out to be. Further automated fine-tuning isn't out of the realm of possibility either, and I fully expect Microsoft to do it in the future.
                        • A [email protected]

                          Hugging Face being developer-facing is completely irrelevant considering the question you asked was whether I was aware of any companies doing anything like this.

                          Your concern that companies like Meta and Microsoft are too scared to let users retrain their models is also irrelevant considering both of these companies have already released models so that anyone can retrain or checkpoint merge them i.e. Llama by Meta and Phi by Microsoft.

                          It’s a cloned image, not unique per computer

                          Microsoft's Copilot works off a base model, yes, but just an example that LLMs aren't as CPU intensive as made out to be. Further automated finetuning isn't out of the realm of possibility either and I fully expect Microsoft to do this in the future.

                          die4ever@retrolemmy.comD This user is from outside of this forum
                          die4ever@retrolemmy.comD This user is from outside of this forum
                          [email protected]
                          wrote on last edited by [email protected]
                          #25

                          Your concern that companies like Meta and Microsoft are too scared to let users retrain their models is also irrelevant considering both of these companies have already released models so that anyone can retrain or checkpoint merge them i.e. Llama by Meta and Phi by Microsoft.

                          they release them to developers, not automatically retrain them unsupervised in their actual products and put them in the faces of customers to share screenshots of the AI's failures on social media and give it a bad name

                          A 1 Reply Last reply
                          1
                          • P [email protected]

                            People in the US don't use WhatsApp, for the most part.

                            N This user is from outside of this forum
                            N This user is from outside of this forum
                            [email protected]
                            wrote on last edited by
                            #26

                            No, but they develop it

                            1 Reply Last reply
                            2
                            • tonytins@pawb.socialT [email protected]
                              This post did not contain any content.
                              T This user is from outside of this forum
                              T This user is from outside of this forum
                              [email protected]
                              wrote on last edited by
                              #27

                              I get it, big stupid money hungry tech wants to put itself between you and every other person on earth for the almighty dollar.

                              How about we use tech to fix problems not create new ones!?

                              1 Reply Last reply
                              2
                              • P [email protected]

                                People in the US don't use WhatsApp, for the most part.

                                A This user is from outside of this forum
                                A This user is from outside of this forum
                                [email protected]
                                wrote on last edited by
                                #28

                                Interesting! What do people in the US generally use?

                                bananaisaberry@lemmy.zipB 1 Reply Last reply
                                0
• [email protected] (#29), in reply to [email protected]:

  > They release them to developers; they don't automatically retrain them, unsupervised, in their actual products and put them in front of customers, who would share screenshots of the AI's failures on social media and give it a bad name.

  They release them under permissive licences so that anyone can do that.
                                  • A [email protected]

                                    They release them under permissive licences so that anyone can do that.

                                    die4ever@retrolemmy.comD This user is from outside of this forum
                                    die4ever@retrolemmy.comD This user is from outside of this forum
                                    [email protected]
                                    wrote on last edited by [email protected]
                                    #30

                                    yea someone could take the model and make their own product with their own PR and public perception

                                    that's very different from directly spoonfeeding it as a product to the general public consumers inside of WhatsApp or something

                                    it's like saying someone can mod Skyrim to put nude characters in it, that's very different from Bethesda selling the game with nude characters

                                    1 Reply Last reply
                                    0
                                    • B [email protected]

                                      To be fair, my father tends to make messages quite incomprehensible by adding irrelevant information all over the place. Sometimes going on for multiple screens while it could easily have been a 2-3 sentence message.

                                      Sadly I think AI would even be worse at picking up what information is important from that. But I understand why people want it.

                                      As for very active groupchats, I am not gonna read +25 messages a day, but being able to glance the gist of it would be awesome.

                                      D This user is from outside of this forum
                                      D This user is from outside of this forum
                                      [email protected]
                                      wrote on last edited by
                                      #31

                                      The exact point at which the gist of it can be manipulated, leaving out context and nudging you toward a different opinion than you might have formed if you'd read the whole thread.

                                      bananaisaberry@lemmy.zipB 1 Reply Last reply
                                      5
                                      • tonytins@pawb.socialT [email protected]
                                        This post did not contain any content.
                                        D This user is from outside of this forum
                                        D This user is from outside of this forum
                                        [email protected]
                                        wrote on last edited by
                                        #32

                                        They're interjecting themselves between us and our contacts. They'll have the power to "summarize", which is also the power to subtly re-interpret meaning. The AI will be given a broad goal, and it will chip away at that goal bit by bit, across millions of summaries to mold public opinion.

                                        1 Reply Last reply
                                        14
                                        • P [email protected]

                                          People in the US don't use WhatsApp, for the most part.

                                          G This user is from outside of this forum
                                          G This user is from outside of this forum
                                          [email protected]
                                          wrote on last edited by
                                          #33

                                          You're right.

                                          They don't use WhatsApp, they use Facebook Messenger.

                                          1 Reply Last reply
                                          0