agnos.is Forums

From Docker with Ansible to k3s: I don't get it...

Selfhosted · 36 Posts, 19 Posters
  • T [email protected]

    I agree. k8s and helm have a steep learning curve. I have an engineering background and understand k8s in and out. Therefore, for me helm is the cleanest solution.
    I would recommend getting to know k8s and its resources before using (or creating) helm charts.

    [email protected] wrote (#15):

    Yeah - I did come down a bit harder on helm charts than perhaps I intended - but starting out with them was a confusing mess for me. Especially since they all create a new 'custom-to-this-thing' config file for you to work with rather than 'standard yml you can google'. The layer of indirection was very confusing when I was learning. Once I abandoned them and realized how simple a basic deployment in k8s really is then I was able to actually make progress.

    I've deployed half a dozen or so services now and I still don't think I'd bother with helm for any of it.
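    For anyone following along, a basic Deployment really is just a short chunk of standard YAML you can google. A minimal sketch, assuming a hypothetical Navidrome service (the name, image and port are placeholders, not something from this thread):

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: navidrome
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: navidrome
          template:
            metadata:
              labels:
                app: navidrome
            spec:
              containers:
                - name: navidrome
                  image: deluan/navidrome:latest   # placeholder image and tag
                  ports:
                    - containerPort: 4533          # assumed app port

    Apply it with kubectl apply -f and you have a running pod; the Service and Ingress are separate, similarly small files.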

    • T [email protected]

      Hey there,

      I made a similar journey a few years ago. But I only have one home server and do not run my services in high availability (HA). As @[email protected] mentioned, to run a service in HA you need more than "just scaling up". You need to know exactly what talks to whom, and when. For example, database entries or file writes will be difficult when scaling up a service that is not ready for HA.

      Here are my solutions for your challenges:

      • No, you are not supposed to run kubectl apply -f for each file. I would strongly recommend helm. Then you just have to run helm install per service. If you want to write each service by yourself, you will end up with multiple .yaml files. I do it this way. Normally, you create one repository per service, which holds all YAML files (a rough sketch of such a chart skeleton follows this list). Alternatively, you could use a predefined Helm Chart and just customize the settings. This is comparable to DockerHub.
      • If you want to deploy to a cluster, you just have to deploy to one server. If multiple replicas are defined in your .yaml configuration, k8s will automatically balance these replicas across multiple servers and split the load over all servers in the same cluster.
        If you are just looking for configuration examples, look into Helm Charts. Often services provide examples only for Docker (and Docker Compose) and not for K8s.
      • As I see it, you only have to run a single line of install script on your first server and afterward join the cluster with the second server. Then you have k3s deployed. Traefik will be installed alongside k3s. If you want to access the dashboard of Traefik and install Rancher and Longhorn, yes, you will have to run multiple installations. Since you already have experience with Ansible, I suggest putting everything for the "base installation" into one playbook and then executing that playbook once.
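      To make the "one repository per service" idea concrete, here is a rough sketch of what the skeleton of such a chart could look like (everything below is a placeholder that only illustrates Helm's standard layout; it is not taken from the poster's setup):

          # Chart.yaml - minimal chart metadata
          apiVersion: v2
          name: pihole
          version: 0.1.0
          appVersion: "latest"

          # values.yaml - the settings you customize per install
          image:
            repository: pihole/pihole   # placeholder image
            tag: latest
          replicaCount: 2               # k8s spreads these replicas across the cluster's nodes
          service:
            port: 80

      The actual Deployment/Service manifests live under templates/ and read these values; helm install pihole ./pihole then renders and applies them in one go.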

      Changelog:

      • Removing the k3s install command. If you want to use it, look it up on the official website. Do not copy-paste the command from a random user on lemmy 😉 Thanks to @[email protected] for bringing up this topic.
      [email protected] wrote (#16):

      curl -sfL https://get.k3s.io/ | sh -

      Never, ever install anything this way. The trend of "just run this shell script off the internet" is a menace. You don't know what that script does, what repositories it may add, what it may install, whether somebody is typo-squatting the URL and you're running something else, etc.

      It's just a bad idea. If you disagree then I have one question - how would you uninstall k3s after you ran that blackbox?

      • S [email protected]

        Hey!
        I have been using Ansible to deploy Docker containers for a few services on my Raspberry Pi for a while now and it's working great, but I want to learn MOAR and I need help...

        Recently, I've been considering migrating to bare metal K3S for a few reasons:

        • To learn and actually practice K8S.
        • To have redundancy and to try HA.
        • My RPis are all already running on MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
        • Maybe eventually being able to manage my two separate server locations with a neat k3s + Tailscale setup!

        Here is my problem: I don't understand how things are supposed to be done. All the examples I find feel wrong.
        More specifically:

        • Am I really supposed to have a collection of small yaml files for everything, that I use with kubectl apply -f ?? It feels wrong and way too "by hand"! Is there a more scripted way to do it? Should I stay with everything in Ansible ??
        • I see little to no example on how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?
        • Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and try to get SSL certs just to have Rancher and its dashboard?!

        I feel that having a K3S + Traefik + Longhorn + Rancher on MicroOS should be straightforward, but it's really not.

        It's very much a noob question, but I really want to understand what I am doing wrong. I'm really looking for advice and especially configuration examples that I could try to copy, use and modify!

        Thanks in advance,

        Cheers!

        [email protected] wrote (#17):

        I'm the creator of the ansible k3s playbooks, even if I'm no longer an active maintainer. But there's a big community: give it a try and contribute! https://github.com/k3s-io/k3s-ansible

        • J [email protected]

          Helm charts are awful; I didn't really like cdk8s either, tbh. I think the future "package format" might be operators or Crossplane Composite Resources.

          [email protected] wrote (#18):

          Oh, operators are absolutely the way for "released" things.

          But on bigger projects with lots of different pods etc, it's a lot of work to make all the CRD definitions, hook all the events, and write all the code to deploy the pods etc.
          Similar to helm charts, I don't see the point for personal projects. I'm not sharing it with anyone, I don't need helm/operator abstraction for it.
          And something like cdk8s will generate the yaml for you to inspect. So you can easily validate that you are "doing the right thing" before slinging it into k8s.

          • M [email protected]

            garden seems similar to GitOps solutions like ArgoCD or FluxCD for deploying helm charts.

            Here is an example of authentik deployed using helm and fluxcd.

            [email protected] wrote (#19):

            Interesting, I might check them out.
            I liked garden because it was "for kubernetes". It was a horse and it had its course.
            I had the wrong assumption that all those CD tools were specifically tailored to run as workers in a deployment pipeline.

            I'm willing to re-evaluate my deployment stack, tbh.
            I'll definitely dig more into flux and ansible.
            Thanks!

            • S [email protected] (quoting the original post above)

              [email protected] wrote (#20):

              Don't use kubernetes.

              • T [email protected] (quoting #19 above)

                [email protected] wrote (#21):

                that all those CD tools were specifically tailored to run as workers in a deployment pipeline

                That's CI 🙃

                Confusing terms, but yeah. With ArgoCD and FluxCD, they just read from a git repo and apply it to the cluster. In my linked git repo, flux is used to install "helmreleases" but argo has something similar.
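                For reference, a hedged sketch of roughly what such a Flux HelmRelease looks like (the chart, repository URL and namespaces are placeholders, not taken from the linked repo, and the exact API versions depend on your Flux version):

                    apiVersion: source.toolkit.fluxcd.io/v1
                    kind: HelmRepository
                    metadata:
                      name: authentik
                      namespace: flux-system
                    spec:
                      interval: 1h
                      url: https://charts.goauthentik.io   # placeholder chart repository
                    ---
                    apiVersion: helm.toolkit.fluxcd.io/v2
                    kind: HelmRelease
                    metadata:
                      name: authentik
                      namespace: authentik
                    spec:
                      interval: 10m
                      chart:
                        spec:
                          chart: authentik
                          sourceRef:
                            kind: HelmRepository
                            name: authentik
                            namespace: flux-system
                      values: {}   # whatever you would normally put in values.yaml

                Flux watches the git repo containing these files and reconciles the release whenever they change.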

                • S [email protected] (quoting the original post above)

                  [email protected] wrote (#22):

                  I've actually been moving away from Kubernetes for this kind of deployment personally, and I'm a big fan of using Ansible to deploy containers using Podman systemd units. You have a series of systemd .container files like the one below:

                  [Unit]
                  Description=Loki
                  
                  [Container]
                  Image=docker.io/grafana/loki:3.4.1
                  
                  # Use volume and network defined below
                  Volume=/mnt/loki-config:/mnt/config
                  Volume=loki-tmp:/tmp/loki
                  PublishPort=3100:3100
                  AutoUpdate=registry
                  
                  [Service]
                  Restart=always
                  TimeoutStartSec=900
                  
                  [Install]
                  # Start by default on boot
                  WantedBy=multi-user.target default.target
                  

                  You use Ansible to write these into your /etc/containers/systemd/ directory. For example, the file above gets written as /etc/containers/systemd/loki.container.

                  Your Ansible playbook then calls systemctl daemon-reload, and you can systemctl start loki to finish the example.
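                  A hedged sketch of what those Ansible tasks might look like (the file and unit names mirror the Loki example above; the module choices are my assumption, not the poster's actual playbook):

                      # tasks/main.yml - drop the .container file in place and start it
                      - name: Install the loki container unit
                        ansible.builtin.copy:
                          src: loki.container
                          dest: /etc/containers/systemd/loki.container
                          mode: "0644"

                      - name: Reload systemd so the generated loki.service exists
                        ansible.builtin.systemd:
                          daemon_reload: true

                      - name: Start loki
                        ansible.builtin.systemd:
                          name: loki
                          state: started

                  Podman's systemd generator turns the .container file into a regular loki.service unit at daemon-reload time, which is why no handwritten unit file is needed.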

                  • A [email protected] (quoting #16 above)

                    [email protected] wrote (#23):

                    Yes, just running a random script from the internet is a very bad idea. You should also not copy and paste the command from above, since I'm only a random lemmy user.
                    Nevertheless, if you trust k3s, and they promote this command on the official website (make sure it's the official one), you can use it.
                    As you want to install k3s, I'm going to assume you trust k3s.

                    If you want to review the script, go for it. And you should, I agree.
                    I myself reviewed it (or at least looked it over) when I used the script.

                    For the uninstall: just follow the instructions on the official website and run /usr/local/bin/k3s-uninstall.sh (source)

                    • T [email protected] (quoting #23 above)

                      [email protected] wrote (#24):

                      I really want to push back on the entire idea that it's okay to distribute software via a curl | sh command. It's a bad practice. I shouldn't be reading hundreds of lines of shell script to see what sort of malarkey your installer is going to do to my system. This application creates an uninstall script. Neat. Many don't.

                      Of the myriad ways to distribute Linux software (deb, rpm, snap, flatpak, AppImage) an unstructured shell script is by far the worst.

                      • A [email protected] (quoting #16 above)

                        [email protected] wrote (#25):

                        https://docs.k3s.io/installation/uninstall

                        There is also a k3s option for NixOS, which removes the security and side-effect risks of running a random bash script installer.

                        • S [email protected] (quoting the original post above)

                          [email protected] wrote (#26):

                          And this is why I do not like K8s at all. The only reason to use it is to have something on your CV. Besides that, Docker Swarm and Hashicorp Nomad feel a lot better and are a lot easier to manage.

                          • A [email protected] (quoting #24 above)

                            [email protected] wrote (#27):

                            I think that distributing general software via curl | sh is pretty bad, for all the reasons that curl | sh is bad and frustrating.

                            But I do make an exception for "platforms" and package managers. The question I ask myself is: "Does this software enable me to install more software from a variety of programming languages?"

                            If the answer to that question is yes, which it is for k3s, then I think it's an acceptable exception. curl | sh is okay for bootstrapping things like Nix on non-Nix systems, because then you get a package manager to install various versions of tools that would normally try to get you to install them with curl | bash, but then you can use Nix instead.

                            K3s is pretty similar, because Kubernetes is a whole platform, with its own package manager (helm), and applications you can install. It's especially difficult to get the latest versions of Kubernetes on stable release distros, as they don't package it at all, so getting it from the developers is kinda the only way to get it installed.

                            Relevant discussion on another thread: https://programming.dev/post/33626778/18025432

                            One of my frustrations that I express in the linked discussion is that it's "developers" who are making bash scripts to install. But k3s is not just developers; it's made by SUSE, who have their own distro, openSUSE, using openSUSE tooling. It's "packagers" making k3s and its install script, and that's another reason why I find it more acceptable.

                            • S [email protected] (quoting the original post above)

                              [email protected] wrote (#28):

                              I've thought about k8s, but there is so much about Docker that I still don't fully know.

                              • M [email protected] (quoting #27 above)

                                [email protected] wrote (#29):

                                Microk8s manages to install with a snap. I know that snap is "of the devil" around these parts but it's still better than a custom bash script.

                                Custom bash scripts will always be worse than any alternative.

                                • A [email protected] (quoting #29 above)

                                  [email protected] wrote (#30):

                                  I've tried snap, juju, and Canonical's suite. They were uniquely frustrating and I'm not interested in interacting with them again.

                                  The future of installing system components like k3s on generic distros is probably systemd sysexts, which are extension images that can be overlaid onto a base system. It's designed for immutable distros, but it can be used on any standard enough distro.

                                  There is a k3s sysext, but it's still in the "bakery". Plus sysext isn't in stable release distros anyways.

                                  Until it's out and stable, I'll stick to the one-time bash script to install SUSE's k3s.

                                  • M [email protected] (quoting #30 above)

                                    [email protected] wrote (#31):

                                    You're welcome to make whatever bad decisions you like. I can manage snaps with standard tooling. I can install, update, remove them with simple ansible scripts in a standard way.

                                    Bash installers are bad. End of.

                                    • A [email protected] (quoting #31 above)

                                      [email protected] wrote (#32):

                                      Canonical's snap uses a proprietary backend and comes with a risk of vendor lock-in to their ecosystem.

                                      The bash installer is fully open source.

                                      You can make the bad decision of locking yourself into a closed ecosystem, but many sensible people recognize that snap is "of the devil" for a good reason.

                                      • D [email protected]

                                        Or Kustomize, though I prefer Helm.

                                        [email protected] wrote (#33):

                                        Then helmfile might be worth checking out.
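                                        In case it helps, a rough sketch of a minimal helmfile.yaml (the repository, chart and version are placeholders):

                                            repositories:
                                              - name: jetstack
                                                url: https://charts.jetstack.io

                                            releases:
                                              - name: cert-manager
                                                namespace: cert-manager
                                                chart: jetstack/cert-manager
                                                version: v1.14.0        # placeholder version
                                                values:
                                                  - installCRDs: true

                                        helmfile apply then installs or upgrades every release declared in the file, so it sits one layer above plain helm install.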

                                        • S [email protected]

                                          I'll post more later (reply here to remind me), but I have your exact setup. It's a great way to learn k8s and yes, it's going to be an uphill battle for learning - but the payoff is worth it. Both for your professional career and your homelab. It's the big leagues.

                                          For your questions, no to all of them. Once you learn some of it the rest kinda falls together.

                                          I'm going into a meeting, but I'll post here with how I do it later. In the meantime, pick one and only one container you want to get started with. Stateless is easier to start with than something that needs volumes. Piece by piece, brick by brick, you will add more to your knowledge and understanding. Don't try to take it all on day one. First just get a container running. Then access via a port and http. Then proxy. Then certs. Piece by piece, brick by brick. Take small victories; if you try to say "tomorrow everything will be on k8s" you're setting yourself up for anger and frustration.

                                          @[email protected]
                                          Edit: To help out I would do these things in these steps, note that steps are not equal in length, and they are not complete - but rather to help you get started without burning out on your journey. I recommend just taking each one, and when you get it working rather than jumping to the next one, instead taking a break, having a drink, and celebrating that you got it up and running.

                                          1. Start documenting everything you do. The great thing about kubernetes is that you can restart from scratch if you have written everything down. I would start a new git repository with a README that contains every command you ran, what it did, and why you did it. Assume that you will be tearing down your cluster and rebuilding it - in fact I would even recommend that. Treat this first cluster as your testing grounds, and then you won't feel crappy spinning up temporary resources. Then, you can rebuild it and know that you did a great job - and you'll feel confident in rebuilding in case of hardware failure.

                                          2. Get the sample nginx pod up and running with a service and deployment, simply so you can curl the IP of your main node and port and see the response. This I assume you have played with already (a rough sketch is included after the values.yaml example below).

                                          3. Point DNS to your main node, get the nginx pod with http://your.dns.tld:PORT. This should be the same as anything you've done with docker before.

                                          4. Convert the yaml to a helm chart as others have said, but don't worry about "templating" yet; get comfortable with helm install, helm upgrade -i, and helm uninstall. Understand what each one does and how they operate. Then go back and template, upgrading after each change to understand how it works. It's pretty standard to template the image and tag, for example, so it's easy to upgrade them. There's a million examples online, but don't go overboard, just do the basics. My (templated) values.yaml usually looks like:

                                              <<servicename>>:
                                                name: <<servicename>>
                                                image:
                                                  repository: path/to/image
                                                  tag: v1.1.1
                                                network:
                                                  port: 8888

                                          Just keep it simple for now.
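                                          Since step 2 comes up a lot, here is a hedged sketch of the sample nginx Deployment plus a NodePort Service you can curl on the node's IP (the names and node port are placeholders):

                                              apiVersion: apps/v1
                                              kind: Deployment
                                              metadata:
                                                name: nginx-sample
                                              spec:
                                                replicas: 1
                                                selector:
                                                  matchLabels:
                                                    app: nginx-sample
                                                template:
                                                  metadata:
                                                    labels:
                                                      app: nginx-sample
                                                  spec:
                                                    containers:
                                                      - name: nginx
                                                        image: nginx:stable
                                                        ports:
                                                          - containerPort: 80
                                              ---
                                              apiVersion: v1
                                              kind: Service
                                              metadata:
                                                name: nginx-sample
                                              spec:
                                                type: NodePort
                                                selector:
                                                  app: nginx-sample
                                                ports:
                                                  - port: 80
                                                    targetPort: 80
                                                    nodePort: 30080   # placeholder; NodePorts must fall in 30000-32767

                                          With that applied, curl http://<node-ip>:30080 should return the nginx welcome page.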

                                          5. Decide on your proxy service. Traefik, as you see, comes out of the box. I personally use Istio. I can go into more detail why later, but I like that I can create a "VirtualService" for "$appname.my.custom.tld" and it will point to it.
                                          6. Implement your proxy service, and get the (still http-only) app set up. Set up something like nginx.your.tld and be able to curl http://nginx.your.tld and see that it routes properly to your sample nginx service. Congrats, this is a huge one.
                                          7. Add the cert-manager chart. This will set it up so you can create Certificate types in k8s. You'll need to use the proxy in the previous step to route the /.well-known endpoints on the http port from the open web to cert-manager; for Istio this was another virtual service on the gateway - I assume Traefik would have something similar to "route all traffic on port 80 that starts with /.well-known to this service". Then, in your nginx helm chart, add a Certificate type for your nginx endpoint, nginx.your.tld, and wait for it to be successfully granted (a sketch of such a Certificate follows this list). With Istio, this is all I need now to finally curl https://nginx.your.tld!
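                                          A hedged sketch of the Certificate resource from that last step (the hostname and issuer are placeholders, and it assumes you have already created a ClusterIssuer):

                                              apiVersion: cert-manager.io/v1
                                              kind: Certificate
                                              metadata:
                                                name: nginx-your-tld
                                                namespace: default
                                              spec:
                                                secretName: nginx-your-tld-tls   # where cert-manager stores the issued cert
                                                issuerRef:
                                                  name: letsencrypt-prod         # placeholder ClusterIssuer name
                                                  kind: ClusterIssuer
                                                dnsNames:
                                                  - nginx.your.tld

                                          Your proxy (Istio gateway, Traefik, etc.) then references that TLS secret to terminate https for the host.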

                                          At this point you have routing, ports, and https set up. Have 2 drinks after this one. You can officially deploy any stateless service at this point.

                                          Now, the big one, stateful. Longhorn is a bear, there are a thousand caveats to it.

                                          Step one is deciding where your backups are going to go. This can be a simple NFS/SMB share on a local server, or it can be an s3 endpoint, but seriously, this is step 1. Backups are critical with Longhorn. You will fuck up Longhorn - multiple times. Losing these backups means losing all configs to all of your pods, so step one is to decide on your stable backup location.

                                          Now, read the Longhorn install guide: https://longhorn.io/docs/1.9.0/deploy/install/. Do not skip reading the install guide. There are incredibly important things in there that I regretted glossing over that would have saved me. (Like setting up backups first).

                                          The way I use longhorn is to create a PV in longhorn, and then the PVC (you can look up what both of these are later). Then I use Helm to set what the PVC name is to attach it to my pod. Try and do this with another sample pod. You are still not ready to move production things over yet, so just attach it to nginx. exec into it, write some data into the pvc. Helm uninstall. See what happens in longhorn. Helm install. Does your PVC reattach? Exec in, is your data still there? Learn how it works. I fully expect you to ping me with questions at this point, don't worry, I'll be here.
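                                          For illustration, a hedged sketch of a PVC backed by Longhorn (size and names are placeholders). This shows the dynamically provisioned variant, where the PV is created for you, which is slightly different from creating the PV in the Longhorn UI first as described above:

                                              apiVersion: v1
                                              kind: PersistentVolumeClaim
                                              metadata:
                                                name: nginx-data
                                                namespace: default
                                              spec:
                                                accessModes:
                                                  - ReadWriteOnce
                                                storageClassName: longhorn   # StorageClass installed by Longhorn
                                                resources:
                                                  requests:
                                                    storage: 1Gi

                                          In the pod spec (or your Helm values) you then mount it by referencing nginx-data as the claimName of a volume.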

                                          Longhorn will take time in learning, give yourself grace. Also after you feel comfortable with it, you'll need to start moving data from your old docker setup to Longhorn, and that too will be a process. You'll get there though. Just start with some of your lower priority projects, and migrate them one by one.

                                          After all of this, there is still more. You can automount smb/nfs shares directly into pods for media or anything. You can pass in GPUs - or I even pass in some USB devices. You can encrypt your longhorn things, you can manage secrets with your favorite secret manager. There's thousands of things you'll be able to do. I wish you luck, and feel free to ping me here or on Matrix (@[email protected]) if you ever need an ear. Good luck!

                                          [email protected] wrote (#34):

                                          Great writeup.
