agnos.is Forums

I'm excited for dots.llm (142BA14B)!

LocalLLaMA · 4 Posts · 3 Posters
[email protected] (#1)
    • It seems like it'll be the best local model that can be run fast if you have a lot of RAM and medium VRAM.
    • It uses a shared expert (like DeepSeek and Llama 4), so it'll be even faster on partially offloaded setups.
    • There are a ton of options for fine-tuning, or for training from one of their many partially trained checkpoints.
    • I'm hoping for a good reasoning finetune. Hoping Nous does it.
    • It has a unique voice because it has very little synthetic data in it.

    llama.cpp support is in the works, and hopefully won't take too long, since its architecture is reused from other models llama.cpp already supports.

    Are y'all as excited as I am? Also, is there any other upcoming release that you're excited for?
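
    Some rough arithmetic shows why the "lots of RAM, medium VRAM" claim is plausible. A sketch, where the quant density (~4.5 bits/weight, roughly a Q4 quant) and the memory bandwidth (60 GB/s, dual-channel DDR5-ish) are assumed round numbers, not measurements:

    ```python
    # Back-of-envelope for a 142B-total / 14B-active MoE on a RAM-heavy box.
    # All constants are illustrative assumptions, not measurements.

    def gguf_size_gb(params_b, bits_per_weight=4.5):
        """Approximate quantized model size in GB at the given bit density."""
        return params_b * 1e9 * bits_per_weight / 8 / 1e9

    def tokens_per_sec(active_params_b, bandwidth_gbps, bits_per_weight=4.5):
        """Bandwidth-bound decode rate: one pass over the active weights per token."""
        bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
        return bandwidth_gbps * 1e9 / bytes_per_token

    total_b, active_b = 142, 14  # dots.llm1: 142B total, 14B active per token

    print(f"~{gguf_size_gb(total_b):.0f} GB GGUF at ~4.5 bits/weight")
    print(f"dense 142B read per token at 60 GB/s: ~{tokens_per_sec(total_b, 60):.1f} T/s")
    print(f"MoE, 14B active, at 60 GB/s:          ~{tokens_per_sec(active_b, 60):.1f} T/s")
    ```

    Keeping attention and the shared expert in VRAM pushes the real number above this CPU-only estimate; the headline is the roughly 10x gap between reading 142B and 14B parameters per token.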

    • P [email protected]
      • It seems like it'll be the best local model that can be ran fast if you have a lot of RAM and medium VRAM.
      • It uses a shared expert (like deepseek and llama4) so it'll be even faster on partial offloaded setups.
      • There is a ton of options for fine tuning or training from one of their many partially trainined checkpoints.
      • I'm hoping for a good reasoning finetune. Hoping Nous does it.
      • It has a unique voice because it has very little synthetic data in it.

      llama.CPP support is in the works, and hopefully won't take too long since it's architecture is reused from other models llamacpp already supports.

      Are y'all as excited as I am? Also is there any other upcoming release that you're excited for?

[email protected] (#2)

      Haven't heard of this one before now. It will be interesting to see how it actually performs. I didn't see what license the models will be released under; hope it's a more permissive one like Apache. Their marketing should try cooking up a catchy name that's easy to remember. They seem to be a native Western-language company, so I also hope it doesn't have too many random Chinese characters like Qwen does sometimes.

      I've never really gotten into MoE models; people say you can get great performance gains with a clever partial-offloading strategy between the various experts. Maybe one of these days!

[email protected] (#3)

        Yes, with llama.cpp it's easy to put just the experts on the CPU. Since only some of the experts are used for each token, the gigabytes moved to RAM slow things down far less than moving parts of the model that run on every token, and those always-used parts get to stay on the GPU. I was able to get Llama 4 Scout running at around 15 T/s on 96 GB RAM and 24 GB VRAM with a large context. The whole GGUF was about 80 GB.
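
        For anyone who wants to try that, the launch looks something like this (a sketch, assuming a recent llama.cpp build with the `--override-tensor` / `-ot` flag; the model filename and tensor-name regex are illustrative and may need adjusting to the actual GGUF's tensor names):

        ```shell
        # -ngl 99 : offload all layers to the GPU by default
        # -ot ... : then override that for the routed-expert FFN tensors,
        #           keeping them in system RAM while attention, the router,
        #           and the shared expert stay in VRAM
        ./llama-server \
          -m ./dots-llm1-Q4_K_M.gguf \
          -ngl 99 \
          -ot "ffn_.*_exps.*=CPU" \
          -c 16384
        ```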

        Also, they actually are a Chinese company. I'm pretty sure it's the company that makes RedNote (Chinese TikTok), and that's why they had access to so much non-synthetic data. I tried the demo on Hugging Face and never got any Chinese characters.

        I also really enjoyed its prose. I think this will be a winner for creative writing.

        • P [email protected]
          • It seems like it'll be the best local model that can be ran fast if you have a lot of RAM and medium VRAM.
          • It uses a shared expert (like deepseek and llama4) so it'll be even faster on partial offloaded setups.
          • There is a ton of options for fine tuning or training from one of their many partially trainined checkpoints.
          • I'm hoping for a good reasoning finetune. Hoping Nous does it.
          • It has a unique voice because it has very little synthetic data in it.

          llama.CPP support is in the works, and hopefully won't take too long since it's architecture is reused from other models llamacpp already supports.

          Are y'all as excited as I am? Also is there any other upcoming release that you're excited for?

[email protected] (#4)

          This is like a perfect model for a Strix Halo mini PC.

          Man, I really want one of those Framework Desktops now...
