agnos.is Forums

My 2.5 year old laptop can write Space Invaders in JavaScript now, using GLM-4.5 Air and MLX

Posted in LocalLLaMA · 5 posts, 4 posters
#1 · [email protected] wrote:

This post did not contain any content.
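The linked post's body didn't survive the scrape, but the title describes running GLM-4.5 Air locally through MLX. As a rough sketch of what that usually looks like with the mlx-lm Python package; the model repo id below is an assumption, and any MLX-converted GLM-4.5 Air quant from Hugging Face that fits your RAM would do:

```python
# Rough sketch using mlx-lm's Python API; the repo id is an assumption,
# substitute whichever MLX-converted GLM-4.5 Air quant fits your machine.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/GLM-4.5-Air-3bit")  # assumed repo id

messages = [{
    "role": "user",
    "content": "Write a playable Space Invaders clone as a single HTML file "
               "using vanilla JavaScript and the canvas API.",
}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# A full game needs a generous token budget; tune for your hardware.
text = generate(model, tokenizer, prompt=prompt, max_tokens=8192, verbose=True)
print(text)
```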
    • C [email protected]
      This post did not contain any content.
      P This user is from outside of this forum
      P This user is from outside of this forum
      [email protected]
      wrote last edited by
      #2

      The local model scene is getting really good, I just wish I’d sprung for more RAM and GPU when I bought my macbook m1.

      Even then, I can still run 8-12gb models that are decently good, and I’m looking forward to the new Qwen3 30b to move my tool use local.
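For the tool-use part, the usual local setup is an OpenAI-compatible server (for example llama.cpp's llama-server, LM Studio, or mlx_lm.server) with the quantized model loaded, plus the standard tools parameter on the client side. A sketch under those assumptions; the port, model name, and weather tool are placeholders, and how reliably tool calls come back well-formed depends on the model and server:

```python
# Sketch of local tool calling against an OpenAI-compatible server.
# The base_url, model name, and get_weather tool are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical local tool
        "description": "Return the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen3-30b-a3b-instruct",  # whatever name your local server reports
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```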

      • P [email protected]

        The local model scene is getting really good, I just wish I’d sprung for more RAM and GPU when I bought my macbook m1.

        Even then, I can still run 8-12gb models that are decently good, and I’m looking forward to the new Qwen3 30b to move my tool use local.

        H This user is from outside of this forum
        H This user is from outside of this forum
        [email protected]
        wrote last edited by
        #3

        Im really hoping to see this as a new graphics thing. Specialized chips to bring down power consumption and having oses built around it where you have an ai installation interface.

        • H [email protected]

          Im really hoping to see this as a new graphics thing. Specialized chips to bring down power consumption and having oses built around it where you have an ai installation interface.

          P This user is from outside of this forum
          P This user is from outside of this forum
          [email protected]
          wrote last edited by
          #4

          The new Ryzen AI chips and Apples Neural Engine (or whatever is called) have great efficiency for performance and can run strong local models.

          Intel also announced they’re going this route.

          I know soldered memory isn’t popular, but right now the performance/energy benefits are big — you just have to buy the premium models.

I think NVIDIA will keep doing their massive GPU toaster ovens; Project Digits was supposed to be their low-energy competitor and has been underwhelming.

          • P [email protected]

            The local model scene is getting really good, I just wish I’d sprung for more RAM and GPU when I bought my macbook m1.

            Even then, I can still run 8-12gb models that are decently good, and I’m looking forward to the new Qwen3 30b to move my tool use local.

            B This user is from outside of this forum
            B This user is from outside of this forum
            [email protected]
            wrote last edited by [email protected]
            #5

            Be sure to grab a DWQ quant, like this:

            https://huggingface.co/nightmedia/Qwen3-30B-A3B-Instruct-2507-dwq3-mlx

            https://huggingface.co/models?sort=modified&search=DWQ

DWQ is an enhanced MLX quantization format that holds up much better at tight quants, around 3-4 bpw.
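For sizing one of these quants against available RAM, the back-of-the-envelope math is just parameter count times bits per weight. A rough sketch, counting weights only and ignoring the KV cache and runtime overhead:

```python
# Back-of-the-envelope weight footprint at a given bits-per-weight (bpw).
# Illustrative only; real MLX files add overhead for embeddings and
# quantization scales, and the KV cache needs extra memory at runtime.
def weight_gb(params_billion: float, bpw: float) -> float:
    bytes_total = params_billion * 1e9 * bpw / 8
    return bytes_total / 1e9

for bpw in (3.0, 4.0, 8.0):
    print(f"Qwen3-30B-A3B @ {bpw} bpw ≈ {weight_gb(30, bpw):.1f} GB of weights")
# ≈ 11.2 GB at 3 bpw, 15.0 GB at 4 bpw, 30.0 GB at 8 bpw
```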
