agnos.is Forums

need help understanding if this setup is even feasible.

LocalLLaMA
21 Posts 5 Posters 3 Views
  [email protected] wrote:

    I'll have to check out mikupad. For the most part I've been using SillyTavern with a generic assistant card, because it looked like it would give me plenty of room to tweak things, even if it's not technically meant for the more traditional assistant use case.

    Thanks for the cheatsheet; it will come in really handy once I manage to set everything up. Most likely I'll use podman to make a container for each engine.

    As for the hardware side: the ThinkCentre arrived today, but the card still has to arrive. Unfortunately I can't really ask more questions until I set it all up and see what goes wrong / get a sense of what I haven't understood.

    I'll keep you guys updated on the whole case-modding stuff. I think it will be pretty fun to see it come along.

    Thanks for everything.

    [email protected] wrote, last edited by [email protected] #21

    Most likely I'll use podman to make a container for each engine.

    IDK about Windows, but on Linux I find it easier to just make a Python venv for each engine. There's less CPU/RAM (and maybe GPU?) overhead that way, and it's best to pull bleeding-edge git versions of the engines anyway. As an added benefit, the Python that ships with some OSes (like CachyOS) is more optimized than what podman would pull.
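    A minimal sketch of the venv-per-engine idea, assuming a Linux box with python3 installed; the engine names and the commented install commands are placeholders, not specific recommendations:

    ```shell
    # One isolated Python environment per inference engine,
    # as an alternative to one podman container per engine.
    mkdir -p engines
    for engine in llamacpp exllamav2; do
        python3 -m venv "engines/$engine"
        # Then, inside each venv, install that engine's bleeding-edge build:
        #   . "engines/$engine/bin/activate"
        #   pip install <engine package, or a local git checkout>
        #   deactivate
    done
    ls engines
    ```

    Each venv gets its own interpreter and site-packages, so upgrading one engine can't break another's dependencies.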

    Podman is great if security is a concern, though, i.e. if you don't 'trust' the code of the engine runtimes.

    ST (SillyTavern) is good, though its sampling presets are kinda funky and I don't use it personally.
