agnos.is Forums

The new 3B "fully open source" model from AMD

Technology · 44 Posts · 23 Posters
  • F [email protected]

    This is again a big win for the red team, at least for me.
    They developed a "fully open" 3B-parameter model family trained from scratch on AMD Instinct™ MI300X GPUs.

    AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

    As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), this model outperforms the other current "fully open" models and comes close to open-weight-only models.

    A step further, thank you AMD.

    PS: I'm not doing AMD propaganda, but thanks to them for helping and contributing to the open-source world.

    [email protected]
    #23

    It's about AI.

    • F [email protected]

      [email protected]
      #24

      I don't know why open-sourcing malicious software is worthy of praise, but okay.

      • ulrich@feddit.orgU [email protected]

        [email protected]
        #25

        I'll bite: what is malicious about this?

        • D [email protected]

          [email protected]
          #26

          What's malicious about AI and LLMs? Have you been living under a rock?

          At best it is useless, and at worst it is detrimental to society.

          • ulrich@feddit.orgU [email protected]

            [email protected]
            #27

            I disagree. LLMs have been very helpful for me, and I do not see how an open-source AI model trained on open-source datasets is detrimental to society.

            • 1 [email protected]

              Every AI model outperforms every other model in the same weight class when you cherry-pick the metrics... Still, it's always good to have more to choose from.

              [email protected]
              #28

              I shared this model because it's one of the best fully open-source AI models available.
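
              For anyone who wants to try it locally, here is a minimal sketch using Hugging Face Transformers. The repo id "amd/Instella-3B-Instruct" and the trust_remote_code flag are assumptions based on the announcement, so check AMD's model page for the exact names.

              from transformers import AutoModelForCausalLM, AutoTokenizer

              # Hypothetical repo id; the exact name on Hugging Face may differ.
              repo = "amd/Instella-3B-Instruct"

              tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
              model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

              prompt = "Explain what 'fully open' means for a language model."
              inputs = tokenizer(prompt, return_tensors="pt")
              outputs = model.generate(**inputs, max_new_tokens=64)
              print(tokenizer.decode(outputs[0], skip_special_tokens=True))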

              • ulrich@feddit.orgU [email protected]

                [email protected]
                #29

                So, in a nutshell, it's malicious because you said so.

                OK, gotcha, Mr/Ms/Mrs TechnoBigot.

                • F [email protected]

                  [email protected]
                  #30

                  I'll be bookmarking the website; thank you.

                  • D [email protected]

                    [email protected]
                    #31

                    I don't know what to say other than pull your head outta the sand.

                    • mitm0@lemmy.worldM [email protected]

                      [email protected]
                      #32

                      Yes, that's totally what I said.

                      • ulrich@feddit.orgU [email protected]

                        [email protected]
                        #33

                        No you.

                        Explain your exact reasons for thinking it's malicious. There's a lot of FUD surrounding "AI," much of which comes from unrealistic marketing BS and poor choices by C-suite types that have nothing to do with the technology itself. If you can describe your concerns, maybe I or others can help clarify things.

                        • S [email protected]

                          [email protected]
                          #34

                          These models are trained on human creations with the express intent to drive out those same human creators. There is no social safety net available so those creators can maintain a reasonable standard of living without selling their art. It won't even work: the models aren't good enough to replace these jobs, but they're good enough to fool the C-suite into thinking they can, and they'll do a lot of damage in the attempt.

                          The issues are primarily social, not technical. In a society that judges itself on how well it takes care of the needs of everyone, I would have far less of an issue with it.

                          • art@lemmy.worldA [email protected]

                            Help me understand how this is Open Source? Perhaps I'm missing something, but this is Source Available.

                            [email protected]
                            #35

                            The source code for these models is almost too boring to care about. Training data and weights are what really matter.
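
                            As a concrete check on "fully open," you could try pulling both artifacts with huggingface_hub. This is only a sketch; the repo ids below are placeholders I'm assuming for illustration, not confirmed names.

                            from huggingface_hub import snapshot_download

                            # Placeholder repo ids, assuming AMD publishes both the weights and the
                            # training data on Hugging Face; the real names may differ.
                            weights_dir = snapshot_download("amd/Instella-3B")  # model checkpoints
                            data_dir = snapshot_download("amd/instella-pretrain-data", repo_type="dataset")  # training data

                            print("weights:", weights_dir)
                            print("data:", data_dir)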

                            • ulrich@feddit.orgU [email protected]

                              [email protected]
                              #36

                              Something we all agree on.

                              • F [email protected]

                                [email protected]
                                #37

                                The issues are primarily social, not technical.

                                Right, and having a FOSS alternative is certainly a good thing.

                                I think it's important to separate opposition to AI policy from a specific implementation. If your concerns are related to the social impact of a given technology, that is where the opposition should go, not toward the technology itself.

                                That said, this is largely similar to opposition to other types of technological change. Every time a significant change in technology comes about, there is a significant impact to jobs. The printing press destroyed the livelihood of scribes, but it made books dramatically cheaper, which created new jobs for typesetters, booksellers, etc. The automobile dramatically cut back jobs like farriers, stable hands, etc, but created new jobs for drivers, mechanics, etc. I'm sure each of those large shifts in technology also had an overreaction by business owners as they adjusted to the new normal. It certainly sucks for those impacted, but it tends to benefit those who can quickly adapt and make use of the new technology.

                                So I totally understand the hesitation around AI, especially given the overreaction by C-suites in gutting their workforce based on the promises made by AI marketing teams. However, that has nothing to do with the technology itself; it has to do with the social issues around it. Instead of hating AI in general, redirect that anger onto the actual problems:

                                • poor social safety net
                                • expensive education
                                • lack of consequences for false marketing
                                • lack of consequences for C-suite mistakes

                                Hating on a FOSS model just because it's related to an industry that is seeing abuse is the wrong approach.

                                • S [email protected]

                                  [email protected]
                                  #38

                                  Was there anything in the posts above mine that suggests this was a technical issue, or did you read that in as an assumption?

                                  Every time a significant change in technology comes about, there is a significant impact to jobs. The printing press destroyed the livelihood of scribes, but it made books dramatically cheaper, which created new jobs for typesetters, booksellers, etc.

                                  Take a look at the history of the first people called "Luddites". They were early socialists focusing on the dismal working conditions that new automation would bring to the workers. And they were correct.

                                  Not every technological change is good. Our society has defaulted to saying yes to every change, and it's caused a whole lot of problems.

                                  • F [email protected]

                                    [email protected]
                                    #39

                                    Was there anything in the posts above mine that suggests this was a technical issue, or did you read that in as an assumption?

                                    I was responding both to you and to the parent of your comment, making it clear that it's not a technical issue. I'm agreeing with you.

                                    And they were correct.

                                    I disagree.

                                    Yes, not every technological change is good; we can look at social media as a shining example of that. However, technological change is usually inevitable, especially if you value freedom in your society, so it's a lot better to solve the issues that surround it than to ban it.

                                    • S [email protected]

                                      [email protected]
                                      #40

                                      There is absolutely nothing inevitable about technological change. We think that way because of the specific place we occupy in history, a place that is an aberration in how fast those changes have come. For most of history, humans have used much the same techniques and tools that their parents did.

                                      You also can't separate AI technology from the social change around it. They're not dumping billions into data centers and talking about powering them with entire nuclear reactors just because they think AI is a fun toy.

                                      • F [email protected]

                                        [email protected]
                                        #41

                                        OpenCL isn't mentioned, so this is most likely raw hardware-level code. Maybe no one else cares, but higher-level code means more portability.
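
                                        For what it's worth, on AMD GPUs portability usually comes from the higher-level stack rather than OpenCL: a ROCm build of PyTorch exposes the same torch.cuda API and routes it through HIP, so the same code runs on NVIDIA and AMD hardware. A minimal sketch, assuming a CUDA or ROCm build of PyTorch is installed:

                                        import torch

                                        # On a ROCm build of PyTorch, torch.cuda.* transparently targets AMD GPUs via HIP,
                                        # so this code is identical on NVIDIA and AMD hardware.
                                        device = "cuda" if torch.cuda.is_available() else "cpu"

                                        x = torch.randn(1024, 1024, device=device)
                                        y = x @ x.T  # dispatched to rocBLAS on AMD, cuBLAS on NVIDIA

                                        # torch.version.hip is set on ROCm builds and None on CUDA-only builds.
                                        print(device, getattr(torch.version, "hip", None))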

                                        • H [email protected]

                                          [email protected]
                                          #42

                                          What is the link with ROCm?
