agnos.is Forums

The new 3B "fully open source" model from AMD

Technology · 44 posts · 23 posters
  • F [email protected]

    This is again a big win on the red team at least for me.
    They developed a "fully open" 3B parameters model family trained from scratch on AMD Instinct™ MI300X GPUs.

    AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

    As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png) this model outperforms current other "fully open" models, coming next to open weight only models.

    A step further, thank you AMD.

    PS : not doing AMD propaganda but thanks them to help and contribute to the Open Source World.

  • [email protected] (#6), replying to the OP

    And we are still waiting for the day when these models can actually be run on AMD GPUs without jumping through hoops.
  • [email protected]

    3B

    That's one more than 2B, so she must be really hot!

    /nierjokes

    AMD knew what they were doing.

  • [email protected] (#7), replying to snotflickerman

    That's a real stretch. 3B is basically stating the size of the model, not the name of the model.
      • G [email protected]

        That's a real stretch. 3B is basically stating the size of the model, not the name of the model.

        W This user is from outside of this forum
        W This user is from outside of this forum
        [email protected]
        wrote on last edited by
        #8

        Are you calling her fat?

        L 1 Reply Last reply
        0
        • F [email protected]

          This is again a big win on the red team at least for me.
          They developed a "fully open" 3B parameters model family trained from scratch on AMD Instinct™ MI300X GPUs.

          AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

          As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png) this model outperforms current other "fully open" models, coming next to open weight only models.

          A step further, thank you AMD.

          PS : not doing AMD propaganda but thanks them to help and contribute to the Open Source World.

          a_a@lemmy.worldA This user is from outside of this forum
          a_a@lemmy.worldA This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #9

          Nice and open source . Similar performance to Qwen 2.5.
          (also ... https://www.tomsguide.com/ai/i-tested-deepseek-vs-qwen-2-5-with-7-prompts-heres-the-winner ← tested DeepSeek vs Qwen 2.5 ... )
          → Qwen 2.5 is better than DeepSeek.
          So, looks good.

          F 1 Reply Last reply
          0
          • Z [email protected]

            And we are still waiting on the day when these models can actually be run on AMD GPUs without jumping through hoops.

            G This user is from outside of this forum
            G This user is from outside of this forum
            [email protected]
            wrote on last edited by
            #10

            In other words, waiting for the day when antitrust law is properly applied against Nvidia's monopolization of CUDA.

            1 Reply Last reply
            0
            • F [email protected]

              This is again a big win on the red team at least for me.
              They developed a "fully open" 3B parameters model family trained from scratch on AMD Instinct™ MI300X GPUs.

              AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

              As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png) this model outperforms current other "fully open" models, coming next to open weight only models.

              A step further, thank you AMD.

              PS : not doing AMD propaganda but thanks them to help and contribute to the Open Source World.

              B This user is from outside of this forum
              B This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #11

              Nice. Where do I find the memory requirements? I have an older 6GB GPU so I've been able to play around with some models in the past.

              D ikidd@lemmy.worldI F 3 Replies Last reply
              0
              • W [email protected]

                Are you calling her fat?

                L This user is from outside of this forum
                L This user is from outside of this forum
                [email protected]
                wrote on last edited by
                #12

                Scott Steiner is

                1 Reply Last reply
                0
                • F [email protected]

                  This is again a big win on the red team at least for me.
                  They developed a "fully open" 3B parameters model family trained from scratch on AMD Instinct™ MI300X GPUs.

                  AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

                  As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png) this model outperforms current other "fully open" models, coming next to open weight only models.

                  A step further, thank you AMD.

                  PS : not doing AMD propaganda but thanks them to help and contribute to the Open Source World.

                  art@lemmy.worldA This user is from outside of this forum
                  art@lemmy.worldA This user is from outside of this forum
                  [email protected]
                  wrote on last edited by
                  #13

                  Help me understand how this is Open Source? Perhaps I'm missing something, but this is Source Available.

                  F F 2 Replies Last reply
                  0
                  • B [email protected]

                    Nice. Where do I find the memory requirements? I have an older 6GB GPU so I've been able to play around with some models in the past.

                    D This user is from outside of this forum
                    D This user is from outside of this forum
                    [email protected]
                    wrote on last edited by
                    #14

                    No direct answer here, but my tests with models from HuggingFace measured about 1.25GB of VRAM per 1B parameters.

                    Your GPU should be fine if you want to play around.

                    1 Reply Last reply
                    0
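    That rule of thumb is easy to turn into a quick estimator (a minimal sketch; the 1.25GB per 1B parameters figure is the poster's empirical measurement, not an official number, and real usage varies with quantization and context length):

    ```python
    def estimated_vram_gb(params_billions: float, gb_per_billion: float = 1.25) -> float:
        """Rough VRAM estimate using the ~1.25 GB per 1B parameters rule of thumb."""
        return params_billions * gb_per_billion

    # A 3B model such as Instella under this rule of thumb:
    print(estimated_vram_gb(3.0))  # → 3.75
    ```

    By that estimate a 3B model needs roughly 3.75GB, which is why a 6GB GPU should be fine for playing around, though a long context's KV cache will eat into the remaining headroom.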
  • [email protected] (#15), replying to snotflickerman

    Can't judge you for wanting to **** her or whatever, just don't ask her for freebies. She won't care if you are a human at that point.
                      • B [email protected]

                        Nice. Where do I find the memory requirements? I have an older 6GB GPU so I've been able to play around with some models in the past.

                        ikidd@lemmy.worldI This user is from outside of this forum
                        ikidd@lemmy.worldI This user is from outside of this forum
                        [email protected]
                        wrote on last edited by
                        #16

                        LMstudio usually lists the memory recommendations for the model.

                        1 Reply Last reply
                        0
                        • C [email protected]

                          I know it's not the point of the article but man that ai generated image looks bad. Like who approved that?

                          F This user is from outside of this forum
                          F This user is from outside of this forum
                          [email protected]
                          wrote on last edited by
                          #17

                          Oh yeah you're right 🙂

                          1 Reply Last reply
                          0
                          • Z [email protected]

                            And we are still waiting on the day when these models can actually be run on AMD GPUs without jumping through hoops.

                            F This user is from outside of this forum
                            F This user is from outside of this forum
                            [email protected]
                            wrote on last edited by
                            #18

                            That is a improvement, if the model is properly trained with rocm it should be able to run on amd GPU easier

                            1 Reply Last reply
                            0
  • [email protected] (#19), replying to a_a

    I don't know if this test is a good representation of the two AIs, but in this case it seems pretty promising. The only thing missing is a higher-parameter model.
                              • B [email protected]

                                Nice. Where do I find the memory requirements? I have an older 6GB GPU so I've been able to play around with some models in the past.

                                F This user is from outside of this forum
                                F This user is from outside of this forum
                                [email protected]
                                wrote on last edited by
                                #20

                                Following this page it should be enough based on the requirements of qwen2.5-3B
                                https://qwen-ai.com/requirements/

                                1 Reply Last reply
                                0
  • [email protected] (#21), replying to art

    Unlike traditional open models (like Llama, Qwen, Gemma...) that are only open-weight, this model claims a:

    Fully open-source release of model weights, training hyperparameters, datasets, and code

    making it different from other big tech "open" models. Though other "fully open" models do exist, like GPT-Neo and more.
                                  • F [email protected]

                                    This is again a big win on the red team at least for me.
                                    They developed a "fully open" 3B parameters model family trained from scratch on AMD Instinct™ MI300X GPUs.

                                    AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

                                    As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png) this model outperforms current other "fully open" models, coming next to open weight only models.

                                    A step further, thank you AMD.

                                    PS : not doing AMD propaganda but thanks them to help and contribute to the Open Source World.

                                    1 This user is from outside of this forum
                                    1 This user is from outside of this forum
                                    [email protected]
                                    wrote on last edited by
                                    #22

                                    Every AI model outperforms every other model in the same weight class when you cherry pick the metrics... Although it's always good to have more to choose from

                                    F 1 Reply Last reply
                                    0
                                    • F [email protected]

                                      This is again a big win on the red team at least for me.
                                      They developed a "fully open" 3B parameters model family trained from scratch on AMD Instinct™ MI300X GPUs.

                                      AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

                                      As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png) this model outperforms current other "fully open" models, coming next to open weight only models.

                                      A step further, thank you AMD.

                                      PS : not doing AMD propaganda but thanks them to help and contribute to the Open Source World.

                                      M This user is from outside of this forum
                                      M This user is from outside of this forum
                                      [email protected]
                                      wrote on last edited by
                                      #23

                                      It's about AI.

                                      1 Reply Last reply
                                      0
                                      • F [email protected]

                                        This is again a big win on the red team at least for me.
                                        They developed a "fully open" 3B parameters model family trained from scratch on AMD Instinct™ MI300X GPUs.

                                        AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

                                        As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png) this model outperforms current other "fully open" models, coming next to open weight only models.

                                        A step further, thank you AMD.

                                        PS : not doing AMD propaganda but thanks them to help and contribute to the Open Source World.

                                        ulrich@feddit.orgU This user is from outside of this forum
                                        ulrich@feddit.orgU This user is from outside of this forum
                                        [email protected]
                                        wrote on last edited by
                                        #24

                                        I don't know why open sourcing malicious software is worthy of praise but okay.

                                        D 1 Reply Last reply
                                        0
  • [email protected] (#25), replying to ulrich

    I'll bite: what is malicious about this?