agnos.is Forums


Proxmox vs. Debian: Running media server on older hardware

Selfhosted · 66 Posts · 25 Posters · 363 Views
  • J [email protected]

    yeah, and qemu and lxc are very much legacy at this point. Stick with docker/podman/kubernetes for containers.

[email protected] wrote (#31):

    QEMU is legacy? Pray tell me how you're running VMs on architectures other than x86 on modern computers without QEMU

    • J [email protected]

      I'm not saying it's bad software, but the times of manually configuring VMs and LXC containers with a GUI or Ansible are gone.

      All new build-outs are gitops and containerd-based containers now.

      For the legacy VM appliances, Proxmox works well, but there's also OpenShift Virtualization (aka KubeVirt) if you want to take advantage of the Kubernetes ecosystem.

      If you need bare-metal, then usually that gets provisioned with something like packer/nixos-generators or cloud-init.

[email protected] wrote (#32):

      Sometimes, VMs are simply the better solution.

      I run a semi-production DB cluster at work. We have 17 VMs running and it's resilient (a different team handles VMware and the hardware).

      • B [email protected]

        I'm still running a 6th-generation Intel CPU (an i5-6600K) in my media server, with 64GB of RAM and a Quadro P1000 for the rare 1080p transcoding needs. Windows 10 is still the OS from when it was a gaming PC, and I want to switch to Linux. I'm a casual user on my personal machine, as well as with OpenWRT on my network hardware.

        Here are the few features I need:

        • MergerFS with a RAID option for drive redundancy. I use multiple 12TB drives right now, with my media types separated between them. I'd like one pool so I can be flexible with space between the shares.
        • Docker for the *arr apps, media downloaders, an RSS feed reader, and various FOSS tools and gizmos.
        • I'd like to start working with Home Assistant. Installing it under WSL hasn't worked for me, so switching to Linux seems like the best option.

        Guides like Perfect Media Server say that Proxmox is better than a traditional distro like Debian/Ubuntu, but I'm concerned about performance on my 6600K. Will LXCs and/or a VM for Docker push my CPU to its limits? Or should I just run standard Debian, or even OpenMediaVault?

        I'm comfortable learning Proxmox and its intricacies, especially if I can move my Windows 10 install into a VM as a failsafe while building a storage pool with new drives.

[email protected] wrote (#33):

        I'm surprised no one's mentioned Incus. It's a hypervisor like Proxmox, but it's designed to install onto plain Debian with no trouble. It does VMs and containers just like Proxmox, and snapshots too. The web UI is essential; you add a repo for it.

        Proxmox isn't reliable if you're not paying them; the free-repo users are effectively the testers. A while back they pushed a bad update that broke things, and if I'd updated before they pulled it, I'd have been hosed.

        Basically you want a box whose updates you don't have to worry about, because updates are good for security. Proxmox isn't that.

        On top of that, their custom kernel and stack get fewer eyes than, say, the kernel Debian ships. Proxmox isn't worth the lock-in and brittleness just for making VMs.

        So, to summarize: Debian with Incus installed. BTRFS if you're happy with one drive or two drives in RAID 1 (BTRFS gets you scrubbing and bitrot detection, with actual protection under RAID 1); ZFS for more drives. Toss Cockpit on top too.

        If you want something less hands-on, go with OpenMediaVault. I see no room for Proxmox here, especially with no clustering involved.

        Also, the iGPU on the 6600K is likely good enough for whatever transcoding you'd do (especially if it's rare and 1080p; it'll handle 4K and multiple streams at once no problem). The Nvidia card is just wasting power.
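
        For anyone who wants to try Incus on Debian, the rough shape of it is below. This is a sketch from memory, not a verified recipe: on Debian 12 the packages come from the Zabbly repo (the Incus docs have the exact repo and key lines), and on Debian 13 Incus is in the main archive.

            apt install incus                         # after adding the Zabbly repo on Debian 12
            incus admin init                          # interactive: pick a storage backend (btrfs/zfs) and a bridge
            incus launch images:debian/12 media       # create a Debian system container
            incus exec media -- bash                  # shell into it
            incus snapshot create media pre-upgrade   # snapshots, much like Proxmox
            # the web UI is a separate package from the same repo; install it and browse to the listed port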

        • J [email protected]

          Yes, it's fine to still have VMs, but you shouldn't be building out new applications and new environments on VMs or LXC.

          The only VMs I've seen in production at my customers recently are application test environments for applications that require kernel access. Those test environments are managed by software running in containers, and often even use something like Openshift Virtualization so that the entire VM runs inside a container.

[email protected] wrote (#34):

          Some of us don't build applications, we use them as built by other companies. If we're really unlucky they refuse to support running on a VM.

          • M [email protected]

            QEMU is legacy? Pray tell me how you're running VMs on architectures other than x86 on modern computers without QEMU

[email protected] wrote (#35):

            Not QEMU in particular, poor phrasing on my part. I just mean setting up new environments that run applications on VMs.

            • J [email protected]

              Not QEMU in particular, poor phrasing on my part. I just mean setting up new environments that run applications on VMs.

[email protected] wrote (#36):

              I prefer some of my applications to be on VMs. For example, my observability stack (ELK + Grafana), which I like to keep separate from other environments. I suppose the argument could be made that I should spin up a separate k8s cluster for that, but it's faster to deploy directly on VMs, and there are fewer moving parts (I run two 50-node k8s clusters, so I'm not averse to containers, just saying). It's the easier and reasonably secure tool for the job.

              Sure, I could mess with cgroups and play with kernel parameters and all of that jazz to secure k8s further, but why bother when I can make my life easier by trusting Red Hat? Also, I'm not yet running a k8s version that supports SELinux, and I tend to keep it enabled.

              • S [email protected]

                What's so nice about it? Have you tried quadlets or docker compose? Could you give a quick comparison to show what you like about it?

[email protected] wrote (#37):

                Sure!

                I haven't used quadlets yet, but I did set up a few systemd services for containers back in the day before quadlets came out. I also used to use docker compose back in 2017/2018.

                Docker Compose and Kubernetes look very similar from a homelab admin's perspective. Compose syntax is a little less verbose, and it has some shortcuts for storage and networking, but that also means it's less flexible if you're doing more complex things. Docker Compose doesn't start containers on boot by default, I think(?), which is pretty bad for application hosting. It also has no way of automatically deploying from git the way ArgoCD does.

                Kubernetes also has a lot of self-healing automation: health checks that can pull a container out of the load balancer and/or restart it if the app is failing, automatic killing of containers when resources are low, refusing to schedule new containers when resources are low, gradual roll-out of containers so the old version doesn't get killed until the new version is up and healthy (helpful when a new config is broken), mounting secrets as files in a container, and automatic retry of failed containers.

                There are also a lot of ubiquitous automation tools in the Kubernetes space, like cert-manager for setting up certificates (both ACME and a local CA), Ingress for reverse proxying, CNPG for setting up Postgres clusters with automated backups, and first-class instrumentation/integration with Prometheus and Loki (both were designed for Kubernetes first).

                The main downsides with Kubernetes in a homelab are the roughly 1-2GiB of RAM overhead for small clusters, and that most documentation and examples are written for docker-compose, so you have to convert apps into a Deployment (you get used to writing Deployments for new apps). That said, installing things like Ingress or CNPG is probably easier than setting up similar reverse-proxy automation with docker-compose.
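
                To make the verbosity comparison concrete, here is roughly the same service written both ways. The image and probe path are illustrative only, not a config to copy:

                    # docker-compose.yml: restart behaviour is a one-liner you have to remember to set
                    services:
                      jellyfin:
                        image: jellyfin/jellyfin:latest
                        restart: unless-stopped
                        ports:
                          - "8096:8096"

                    # Kubernetes Deployment: more boilerplate, but health checks and rollout handling come with it
                    apiVersion: apps/v1
                    kind: Deployment
                    metadata:
                      name: jellyfin
                    spec:
                      replicas: 1
                      selector:
                        matchLabels: { app: jellyfin }
                      template:
                        metadata:
                          labels: { app: jellyfin }
                        spec:
                          containers:
                            - name: jellyfin
                              image: jellyfin/jellyfin:latest
                              ports:
                                - containerPort: 8096
                              livenessProbe:                          # restart the container if this starts failing
                                httpGet: { path: /health, port: 8096 }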

                • C [email protected]

                  Some of us don't build applications, we use them as built by other companies. If we're really unlucky they refuse to support running on a VM.

[email protected] wrote (#38):

                  Yeah, that's fair. I have set up Openshift Virtualization for customers using 3rd party appliances. I've even worked on some projects where a 3rd party appliance is part of the original spec for the cluster, so installing Openshift Virtualization to run VMs is part of the day 1 installation of the Kubernetes cluster.

                  • In reply to [email protected]'s post #36, quoted in full above.

[email protected] wrote (#39):

                    The production-scale Grafana LGTM stack only runs on Kubernetes fwiw. Docker and VMs are not supported. I'm a bit surprised that Kubernetes wouldn't have enough availability to be able to co-locate your general workloads and your observability stack, but that's totally fair to segment those workloads.

                    I've heard the argument that "kubernetes has more moving parts" a lot, and I think that is a misunderstanding. At a base level, all computers have infinite moving parts. QEMU has a lot of moving parts, containerd has a lot of moving parts. The reason why people use kubernetes is that all of those moving parts are automated and abstracted away to reduce the daily cognitive load for us operations folk. As an example I don't run manual updates for minor versions in my homelab. I have a cronjob that runs renovate, which goes and updates my Deployments, and ArgoCD automatically deploys the changes. Technically that's a lot of moving parts to use, but it saves me a lot of manual work and thinking, and turns my whole homelab into a sort of automated cloud service that I can go a month without thinking about.
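
                    For readers who haven't seen the gitops side, the Argo CD piece is just an Application object pointing at a repo; everything in the repo path then gets synced automatically. The repo URL and names below are placeholders, a sketch rather than the poster's actual setup:

                        apiVersion: argoproj.io/v1alpha1
                        kind: Application
                        metadata:
                          name: homelab-apps
                          namespace: argocd
                        spec:
                          project: default
                          source:
                            repoURL: https://git.example.com/homelab/manifests.git   # placeholder repo
                            targetRevision: main
                            path: apps
                          destination:
                            server: https://kubernetes.default.svc
                            namespace: default
                          syncPolicy:
                            automated:
                              prune: true      # remove resources deleted from git
                              selfHeal: true   # revert manual drift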

                    I'm not sure if container break-out attacks are a reasonable concern for homelabs. See the relatively minor concern in the announcement I made as an Unraid employee last year when Leaky Vessels happened. Keep in mind that containerd uses cgroups under the hood.

                    Yeah, AppArmor/SELinux aren't very popular in the k8s space. I think they're easy enough to use, and there's plenty of documentation out there, but OpenShift/OKD is the only distribution that runs SELinux out of the box.

                    • M [email protected]

                      Sometimes, VMs are simply the better solution.

                      I run a semi-production DB cluster at work. We have 17 VMs running and it's resilient (a different team handles VMware and the hardware).

[email protected] wrote (#40):

                      I have 33 database servers in my homelab across 11 postgres clusters, all with automated barman backups to S3.

                      This stuff is all automated these days.
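
                      Presumably this is CNPG (mentioned a few posts up); for reference, a cluster with object-store backups is roughly this much YAML. The names, bucket, and credential keys are placeholders, not the poster's config:

                          apiVersion: postgresql.cnpg.io/v1
                          kind: Cluster
                          metadata:
                            name: db-example
                          spec:
                            instances: 3                 # one primary plus two replicas
                            storage:
                              size: 20Gi
                            backup:
                              barmanObjectStore:
                                destinationPath: s3://backups/db-example    # placeholder bucket
                                s3Credentials:
                                  accessKeyId:
                                    name: backup-creds
                                    key: ACCESS_KEY_ID
                                  secretAccessKey:
                                    name: backup-creds
                                    key: ACCESS_SECRET_KEY
                          # a ScheduledBackup object (not shown) triggers the recurring backups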

                      • irmadlad@lemmy.worldI [email protected]

                        Hmmmm. Wouldn't you have to remove the Debian kernel and use the Proxmox kernel? Sorry, not trying to be obtuse, I just have never installed Proxmox 'on top of' Debian. I always opted for the clean install.

[email protected] wrote (#41):

                        Yes, but that's a supported way to install Proxmox.

                        https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
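
                        The linked wiki boils down to roughly the following for Bookworm; treat this as a paraphrase and double-check the repo and key lines on the wiki before running anything:

                            # add the Proxmox repository and signing key
                            echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
                              > /etc/apt/sources.list.d/pve-install-repo.list
                            wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
                              -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
                            apt update && apt full-upgrade

                            # install the Proxmox kernel first, reboot into it, then install proxmox-ve
                            apt install proxmox-default-kernel
                            reboot
                            apt install proxmox-ve postfix open-iscsi chrony

                            # finally, remove the stock Debian kernel and refresh grub
                            apt remove linux-image-amd64 'linux-image-6.1*'
                            update-grub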

                        • In reply to [email protected]'s earlier post, quoted in full above.

[email protected] wrote (#42):

                          but you shouldn't be building out new applications and new environments on VMs or LXC

                          That's a bold statement; VMs might be just fine for some.

                          Use whatever is best for you: if that's containers, great; if that's a VM, sure. Just make sure you keep it secure.

                          • possiblylinux127@lemmy.zipP [email protected]

                            ZFS is probably what you want

[email protected] wrote (#43):

                            Not at all

                            • In reply to [email protected]'s post #33 above.

[email protected] wrote (#44):

                              Thanks so much for mentioning this, trying it out now

                              • In reply to [email protected]'s original post, quoted in full above.

[email protected] wrote (#45):

                                I use OpenMediaVault to run something similar. It's a headless Debian-based distribution with web-based configuration. It takes a bit of work, but I like it.

                                • In reply to [email protected]'s original post, quoted in full above.

[email protected] wrote (#46):

                                  I don't know about your first requirement (MergerFS), but if it helps: I have an old Intel NUC 6i3SYH (i3-6100U) with 16GB of RAM, and I was running Windows 10 for Plex + *arr plus Home Assistant in VirtualBox. I kept running into issues until I switched to Proxmox. Now I run Docker with a bunch of containers (Plex, the *arrs, and others) plus a virtual machine for Home Assistant, and everything is smooth. I have to say there is a learning curve, but it's very stable.

                                  • D [email protected]

                                    None of your listed use cases will even come close to taxing the 6600k. It's going to probably sit happily in idle states most of the time.

                                    Proxmox also has great snapshotting and backup features. Makes it easier to mess around with your containers/VMs without worrying too much.

[email protected] wrote (#47):

                                    Only when using ZFS, which OP is not.

                                    • In reply to [email protected]'s original post, quoted in full above.

[email protected] wrote (#48):

                                      My server runs Debian VMs in Proxmox on an i7-2600, which benchmarks lower than the 6600K. I also followed the Perfect Media Server guide: I have 2 x 8TB data drives pooled with MergerFS and one drive for SnapRAID parity, and these are passed through to the main VM from Proxmox with 'qm set'. One thing I often forget after deleting/restoring that VM is to run qm set again, making sure the drives carry the flag that excludes them from backups; otherwise backups fail and I have to go uncheck the backup option to fix it. If I need to spin up another VM for tinkering, it's also easy enough to mount the NFS share as a volume with docker compose.

                                      Proxmox rarely shows CPU usage above 50%, and this handles the whole *arr stack plus Usenet and torrents in a single VM and compose file. I don't even have GPU passthrough set up because the motherboard on this older rig doesn't support IOMMU. Never had issues with Plex or Jellyfin transcoding for Chromecast.
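
                                      For reference, the passthrough (including the don't-back-this-up flag mentioned above) is one command per disk; the VM ID and disk ID here are placeholders:

                                          # attach a whole disk to VM 100 by its stable by-id path; backup=0 makes vzdump skip it
                                          qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL,backup=0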

                                        • In reply to [email protected]'s original post, quoted in full above.

[email protected] wrote (#49):

                                        Proxmox is pretty much focused on ZFS, LXC containers, and VMs. You want MergerFS and Docker, so I'd say avoid Proxmox and go for Debian or another distro.

                                          • In reply to [email protected]'s original post, quoted in full above.

[email protected] wrote (#50):

                                            MergerFS and SnapRAID could be good for you. It's not real-time parity like ZFS RAID (you run a regular cron job to recalculate parity), but it supports mismatched drive sizes, expanding the pool at any time, and some other features that suit a media server where live parity isn't critical.
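
                                            A rough sketch of what that looks like in practice; the mount points, disk names, and schedule are examples only:

                                                # /etc/fstab: pool the data disks into one mount with mergerfs
                                                /mnt/disk*  /mnt/storage  fuse.mergerfs  cache.files=off,dropcacheonclose=true,category.create=mfs  0 0

                                                # /etc/snapraid.conf: one parity disk plus the data disks
                                                parity /mnt/parity1/snapraid.parity
                                                content /var/snapraid.content
                                                data d1 /mnt/disk1
                                                data d2 /mnt/disk2

                                                # /etc/crontab: recalculate parity nightly
                                                0 3 * * * root snapraid sync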

                                            Proxmox and TrueNAS are nice because they wrap ZFS and other remote management in a friendly UI, but really you can just use Debian over SSH and do the same stuff. DietPi adds a few nice utilities on top of Debian (a DDNS manager and CLI fstab tools, for example), but they're not strictly necessary.

                                            Personally I use TrueNAS now, but I used DietPi/Debian for years; both have benefits, and it really comes down to your workflow.

                                            Docker or LXC containers won't hurt your performance, by the way. There's supposedly some tiny overhead, but both are designed to use the underlying Linux system as much as possible, and they're way faster than on WSL. For hardware acceleration, most of the work is deferred to the GPU, and there's lots of documentation on setting that up. The best thing about Docker is that every application is kept separate from the others: updates can be done incrementally, and rollbacks are possible too!
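
                                            For the hardware acceleration piece with the iGPU, it's usually just a device mapping in the compose file; the service below is illustrative (check your app's docs for its exact requirements):

                                                services:
                                                  jellyfin:
                                                    image: jellyfin/jellyfin:latest
                                                    restart: unless-stopped
                                                    devices:
                                                      - /dev/dri:/dev/dri        # expose the Intel iGPU for Quick Sync transcoding
                                                    volumes:
                                                      - ./config:/config
                                                      - /mnt/storage/media:/media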
