agnos.is Forums

rm'd a project that was months in the making a few days before launch... What to do if you kill the wrong file

Linux
36 Posts 21 Posters 121 Views
  • R [email protected]

    100%. The organization wasn't there yet, and seeing that I wanted to remain employed at the time, I wasn't going to put up a fight against management 3 layers above me. Legacy businesses are a different beast when it comes to dumb stuff like that.

    [email protected] wrote (#12):

    put up a fight against management 3 layers above me

    Eh, yeah. I've been in that kind of situation before. Sucks.

    Still, you should try to go rogue where you can. Not for the company, fuck the company, do it to protect yourself. Like, maybe you could create your own git repo and push the changes there yourself. Don't tell anyone else, just do it privately. You don't need to use GitHub, you could push to a local folder on your computer or a USB drive.

    • J [email protected]

      Photorec, on the other hand, was truly a gift from the cosmos

      Can confirm. Over the years I've had recourse to this little tool several times and always found it to be almost disturbingly effective.

      [email protected] wrote (#13):

      Disturbingly effective is definitely the right phrase. It actually inspired me to create a script on my desktop that moves folders to ~/Trash; then I have another script that /dev/random's the files and /dev/zeros them before deletion. It eliminates the risk of an accidental rm, AND makes sure that once something is gone, it is GONE.
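      A minimal sketch of what those two scripts might look like (names and paths are hypothetical, and note that overwrite-in-place scrubbing is unreliable on SSDs and on copy-on-write or journaling filesystems):

```shell
#!/bin/sh
# trash: move targets into ~/Trash instead of rm'ing them outright
trash() {
    mkdir -p "$HOME/Trash"
    mv -- "$@" "$HOME/Trash/"
}

# scrub_trash: overwrite each trashed file with random bytes, then zeros,
# then delete it. Best-effort only -- SSD wear leveling and CoW/journaling
# filesystems may keep old copies of the blocks anyway.
scrub_trash() {
    for f in "$HOME/Trash"/*; do
        [ -f "$f" ] || continue
        size=$(($(wc -c < "$f")))        # normalize wc's padded output
        if [ "$size" -gt 0 ]; then
            dd if=/dev/urandom of="$f" bs="$size" count=1 conv=notrunc 2>/dev/null
            dd if=/dev/zero    of="$f" bs="$size" count=1 conv=notrunc 2>/dev/null
        fi
        rm -f -- "$f"
    done
}
```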

      • R [email protected]

        Yep... it happened to me. I killed a docker compose file with 550 lines of God-forsaken yaml less than a week before the project launch, and the most recent backup we had was nearly a month old and would have taken at least a day to get back up to speed. With a stroke of luck, I handled it about as well as I could have for thinking on my feet, and I'd like to share my experience and lessons learned for anyone else who may ever find themselves in these smelly shoes:

        Disclaimer! I'm a young engineer still growing my expertise and experience. Some stuff in here may be bad advice or wrong, like my assertion that using dd to pull data off of an unmounted drive doesn't risk data loss; I'm pretty damn sure of that, but I wouldn't stake my life (or your data) on it. I'll happily update this post as improvements are suggested.

        IF YOU RM'D THE WRONG THING:

        1. Stop all writes to that partition as quickly as possible.

        this step has some optional improvements at the bottom

        Up to this point I'd been keeping a lazy backup of the deleted file on another partition. To preserve the disk as well as possible and avoid overwriting the blocks holding the lost file, I cd'd to the backup dir and ran a docker compose down. There were a few stragglers, but docker stop $containerName handled them fine.

        2. Unmount the partition

        The goal is to ensure nothing writes to this disk at all. This, in tandem with the fact that most data recovery tools require an unmounted disk, is a critical step in preserving all hopes of recovering your data. Get that disk off of the accessible filesystem.
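        Concretely, something like this (the device and mount point here are hypothetical; substitute your own from lsblk or mount):

```shell
# See what's still holding the filesystem open before unmounting:
sudo fuser -vm /mnt/appdata

# Unmount; this fails loudly if something is still using it:
sudo umount /mnt/appdata

# Last resort if a process won't let go -- detaches now, cleans up later:
sudo umount -l /mnt/appdata
```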

        3. Save what you have

        Once your partition is unmounted, you can use dd or a similar tool to create a backup somewhere else without risking corruption of the data. You should restore to a different disk/partition if at all possible, but I know sometimes that isn't possible, and /boot can come in handy in an emergency. It would have been big enough to save me if I hadn't been working on a dedicated app-data partition.
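        A sketch of the imaging step; /dev/sdb1 and the destination path are placeholders, so double-check both with lsblk before touching anything:

```shell
# Raw image of the unmounted partition onto a DIFFERENT disk:
sudo dd if=/dev/sdb1 of=/mnt/other_disk/appdata.img bs=4M status=progress conv=noerror,sync

# If the drive itself is dying, GNU ddrescue is the better tool --
# it retries bad sectors and keeps a resumable map file:
# sudo ddrescue /dev/sdb1 /mnt/other_disk/appdata.img /mnt/other_disk/appdata.map
```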

        4. Your sword of choice

        It's time to choose your data recovery tool. I tried both extundelete and testdisk/photorec, and extundelete got some stuff back but not what I was looking for, while also running into seg faults and other issues. Photorec, on the other hand, was truly a gift from the cosmos. It worked like a dream, it was quick and easy, and it saved my sanity and my project.
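        PhotoRec is normally driven through its interactive TUI, but it also accepts the device or image on the command line (flags per the TestDisk docs; paths below are hypothetical):

```shell
# Point photorec at the unmounted partition and give it a recovery
# directory on a DIFFERENT partition:
sudo photorec /log /d /mnt/other_disk/recup /dev/sdb1

# Running it against the dd image from step 3 instead of the live
# device is even safer:
# sudo photorec /log /d /mnt/other_disk/recup /mnt/other_disk/appdata.img
```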

        5. The search for gold

        Use grep -r -e 'term in your file' ./restore/directory to look through everything ever deleted on the partition since the beginning of time for the file you need.

        It was a scary time for me, and hopefully this playbook can help some of you recover from a really stupid, preventable mistake.

        potential improvements

        In hindsight, two things could have gone better here:
        1. Quicker shutdown: I could have stopped the containers immediately had I been less panicked and remembered this little trick: docker stop $(docker ps -q)
        2. Export the running config: I could have run docker inspect $containerName > /path/to/other/partition to aid restoration if I ended up reconstructing the file by hand. I decided it was worth it to risk it for the biscuit, though; shutting the stack down as quickly as possible was worth the potential sacrifice.

        If you fight to preserve a running config of some sort, whether k8s, Docker, or other, MAKE SURE YOU WRITE IT TO ANOTHER PARTITION. It's generally wise to give an application its own data partition, but hey, you don't have a usable backup, so if you don't have a partition to spare, consider using /boot if you really want to save your running config.
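        If you do go that route, a loop like this (hypothetical paths) dumps every running container's config to another partition in one shot:

```shell
# Dump the full inspect output of each running container
# to a partition other than the damaged one:
mkdir -p /mnt/other_disk/config-dump
for c in $(docker ps -q); do
    docker inspect "$c" > "/mnt/other_disk/config-dump/$c.json"
done
```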

        If you're considering a donation to FOSS, join me in sending a few bucks over to CGSecurity.

        remove, recurse, force
        wrong path, there is no backup
        desperate panic

        [email protected] wrote (#14):

        Footgun protection: Automated hourly/daily/whatever backups and trash-cli.
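        For example, a crontab along these lines (paths and schedule are illustrative) covers both halves of that advice:

```shell
# crontab -e
# Hourly rsync snapshot of the app-data directory to another disk:
0 * * * *  rsync -a --delete /srv/appdata/ /mnt/backup/appdata-hourly/
# Nightly purge of trashed files older than 30 days (trash-cli):
0 3 * * *  trash-empty 30
```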

        • R [email protected]

          Disturbingly effective is definitely the right phrase. It actually inspired me to create a script on my desktop that moves folders to ~/Trash; then I have another script that /dev/random's the files and /dev/zeros them before deletion. It eliminates the risk of an accidental rm, AND makes sure that once something is gone, it is GONE.

          [email protected] wrote (#15):

          Yep, I use trash-put, and trash-empty with a 30-day timeout. But no bit-scrubbing needed because the partition is encrypted.

          • R [email protected]

            100%. The organization wasn't there yet, and seeing that I wanted to remain employed at the time, I wasn't going to put up a fight against management 3 layers above me. Legacy businesses are a different beast when it comes to dumb stuff like that.

            [email protected] wrote (#16):

            git init .

            git doesn't need GitHub, GitLab, or even a server. It's designed to let devs cooperate via patches sent by email.
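            A quick sketch of that flow; no server or hosting account involved, just a patch file you can mail (the repo and names here are made up):

```shell
# Create a repo and make a commit:
workdir=$(mktemp -d)
cd "$workdir"
git init -q project
cd project
git config user.email "[email protected]"
git config user.name "Demo"
echo 'hello' > file.txt
git add file.txt
git commit -q -m 'initial commit'

# Turn the latest commit into a mailable patch file:
git format-patch -1 HEAD -o "$workdir/outbox"

# A collaborator applies it to their own clone with:
#   git am /path/to/outbox/0001-initial-commit.patch
```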

            • R [email protected]

              100%. The organization wasn't there yet, and seeing that I wanted to remain employed at the time, I wasn't going to put up a fight against management 3 layers above me. Legacy businesses are a different beast when it comes to dumb stuff like that.

              [email protected] wrote (#17):

              You can just "git init ." on your PC somewhere, copy relevant stuff into it occasionally, and commit. It might not be automated, and might not be used directly in production (or on the prototype), but it at least exists.

              • R [email protected]

                The server we were working on at the time wasn't configured with frequent backups, just a full backup once a month as a stopgap until the project got some proper funding. Any sort of remote version control is absolutely the preventative factor here, but my goal is to help others who have yet to learn that lesson.

                [email protected] wrote (#18):

                It didn't even need remote version control. All it required was the essential files version-controlled in the local folder.

                • D [email protected]

                  git init .

                  git doesn't need GitHub, GitLab, or even a server. It's designed to let devs cooperate via patches sent by email.

                  [email protected] wrote (#19):

                  I'm aware. Any local storage wouldn't do much about a poorly aimed rm, though.

                  • J [email protected]

                    Yep, I use trash-put, and trash-empty with a 30-day timeout. But no bit-scrubbing needed because the partition is encrypted.

                    [email protected] wrote (#20):

                    That's the move.

                    • Quoting [email protected]:

                      Footgun protection: Automated hourly/daily/whatever backups and trash-cli.

                      [email protected] wrote (#21):

                      100% my stack going forward. Thanks!

                      • R [email protected]

                        Yep... it happened to me. I killed a docker compose file with 550 lines of God-forsaken yaml less than a week before the project launch, and the most recent backup we had was nearly a month old and would have taken at least a day to get back up to speed. With a stroke of luck, I handled it about as well as I could have for on the feet thinking and I'd like to share my experience and lessons learned for anyone else that may ever find themselves in these smelly shoes:

                        Disclaimer! I'm a young engineer still growing my expertise and experience. Some stuff in here may be bad advice or wrong, like my assertion that using dd to pull data off of an unmounted drive doesn't risk data loss; I'm pretty damn sure of that, but I wouldn't stake my life (or your data) on it. I'll happily update this post as improvements are suggested.

                        IF YOU RM'D THE WRONG THING:

                        1. Stop all writes to that partition as quickly as possible.

                        this step has some optional improvements at the bottom

                        Up to this point I'd been keeping a lazy backup of the file deleted on another partition. In order to preserve the disk as well as possible and prevent overwriting the blocks with the lost file, I cd to the backup dir and run a docker compose down. There were a few stragglers, but docker stop $containerName worked fine.

                        2. Unmount the partition

                        The goal is to ensure nothing writes to this disk at all. This, in tandem with the fact that most data recovery tools require an unmounted disk, is a critical step in preserving all hopes of recovering your data. Get that disk off of the accessible filesystem.

                        3. Save what you have

                        Once your partition is unmounted, you can use dd or a similar tool to create a backup somewhere else without risking corruption of the data. You should restore to a different disk/partition if at all possible, but I know sometimes things aren't possible and /boot can come in handy in an emergency. It would have been big enough to save me if I wasn't working on a dedicated app-data partition.

                        4. Your sword of choice

                        It's time to choose your data recovery tool. I tried both extundelete and testdisk/photorec, and extundelete got some stuff back but not what I was looking for, while also running into seg faults and other issues. Photorec, on the other hand, was truly a gift from the cosmos. It worked like a dream, it was quick and easy, and it saved my sanity and my project.

                        5. The search for gold

                        Use "grep -r './restore/directory' -e 'term in your file'" to look through everything you've deleted on the part since the beginning of time for the file you need.

                        It was a scary time for me, and hopefully this playbook can help some of you recover from a really stupid, preventable mistake.

                        potential improvements

                        In hindsight, two things could have gone better here:
                        1. Quicker: I could have shut them down immediately if I was less panicked and remembered this little trick: docker stop $(docker ps -q)
                        2. Exporter running config: I could have used 'docker inspect > /path/to/other/partition' to aid in the restoration process if I ended up needing to reconstruct it by hand. I decided it was worth it to risk it for the biscuit, though, and choosing to shut the stack down as quickly as possible was worth the potential sacrifice.

                        If you fight to preserve a running config of some sorts, whether k8s docker or other, MAKE SURE YOU WRITE IT TO ANOTHER PARTITION. It's generally wise to give an application it's own data partition but hey, you don't have a usable backup so if you don't have a partition to spare consider using the /boot partition if you really want to save your running config.

                        If you're considering a donation to FOSS, join me in sending a few bucks over to CGSecurity.

                        remove, recurse, force
                        wrong path, there is no backup
                        desperate panic

                        I This user is from outside of this forum
                        I This user is from outside of this forum
                        [email protected]
                        wrote on last edited by
                        #22

                        While you can't use Syncthing to share a git repo, it actually works quite well in an A -> B setup, where updates happen only on A and versioned backups are enabled on B. YMMV, though.

                        • R [email protected]

                          I'm aware. Any local storage wouldn't do much about a poorly aimed rm, though.

                          [email protected] wrote (#23):

                          It's a lot harder to rm a whole directory than a single file. And even then, you can git init --bare a "remote" directory on the local machine and push to it to keep a backup copy.

                          • R [email protected]

                            Yep... it happened to me. I killed a docker compose file with 550 lines of God-forsaken yaml less than a week before the project launch, and the most recent backup we had was nearly a month old and would have taken at least a day to get back up to speed. With a stroke of luck, I handled it about as well as I could have for on the feet thinking and I'd like to share my experience and lessons learned for anyone else that may ever find themselves in these smelly shoes:

                            Disclaimer! I'm a young engineer still growing my expertise and experience. Some stuff in here may be bad advice or wrong, like my assertion that using dd to pull data off of an unmounted drive doesn't risk data loss; I'm pretty damn sure of that, but I wouldn't stake my life (or your data) on it. I'll happily update this post as improvements are suggested.

                            IF YOU RM'D THE WRONG THING:

                            1. Stop all writes to that partition as quickly as possible.

                            this step has some optional improvements at the bottom

                            Up to this point I'd been keeping a lazy backup of the file deleted on another partition. In order to preserve the disk as well as possible and prevent overwriting the blocks with the lost file, I cd to the backup dir and run a docker compose down. There were a few stragglers, but docker stop $containerName worked fine.

                            2. Unmount the partition

                            The goal is to ensure nothing writes to this disk at all. This, in tandem with the fact that most data recovery tools require an unmounted disk, is a critical step in preserving all hopes of recovering your data. Get that disk off of the accessible filesystem.

                            3. Save what you have

                            Once your partition is unmounted, you can use dd or a similar tool to create a backup somewhere else without risking corruption of the data. You should restore to a different disk/partition if at all possible, but I know sometimes things aren't possible and /boot can come in handy in an emergency. It would have been big enough to save me if I wasn't working on a dedicated app-data partition.

                            4. Your sword of choice

                            It's time to choose your data recovery tool. I tried both extundelete and testdisk/photorec, and extundelete got some stuff back but not what I was looking for, while also running into seg faults and other issues. Photorec, on the other hand, was truly a gift from the cosmos. It worked like a dream, it was quick and easy, and it saved my sanity and my project.

                            5. The search for gold

                            Use "grep -r './restore/directory' -e 'term in your file'" to look through everything you've deleted on the part since the beginning of time for the file you need.

                            It was a scary time for me, and hopefully this playbook can help some of you recover from a really stupid, preventable mistake.

                            potential improvements

                            In hindsight, two things could have gone better here:
                            1. Quicker: I could have shut them down immediately if I was less panicked and remembered this little trick: docker stop $(docker ps -q)
                            2. Exporter running config: I could have used 'docker inspect > /path/to/other/partition' to aid in the restoration process if I ended up needing to reconstruct it by hand. I decided it was worth it to risk it for the biscuit, though, and choosing to shut the stack down as quickly as possible was worth the potential sacrifice.

                            If you fight to preserve a running config of some sorts, whether k8s docker or other, MAKE SURE YOU WRITE IT TO ANOTHER PARTITION. It's generally wise to give an application it's own data partition but hey, you don't have a usable backup so if you don't have a partition to spare consider using the /boot partition if you really want to save your running config.

                            If you're considering a donation to FOSS, join me in sending a few bucks over to CGSecurity.

                            remove, recurse, force
                            wrong path, there is no backup
                            desperate panic

                            X This user is from outside of this forum
                            X This user is from outside of this forum
                            [email protected]
                            wrote on last edited by
                            #24

                            Git. Why you would even think to use anything else is... weird.

                            Data recovery is a complete shot in the dark in a situation like this.

                            If you commit often, you don't have to worry about data loss, and git already has a workflow for this exact situation: branches.

                            git checkout work_being_done
                            # dozens and dozens of commits while working
                            git rebase -i main
                            git checkout main
                            git merge work_being_done
                            

                            Lets you do any amount of work and save a state for each step. You can even push your working branch to the repository, so even if you have data loss like this, you can always just re-pull the repository.

                            • R [email protected]

                              I'm aware. Any local storage wouldn't do much about a poorly aimed rm, though.

                              [email protected] wrote (#25):

                              I don't know if it fits your use case, but a little-known trick is to use a second local drive/folder as a remote, like this:

                              D:
                              mkdir D:\git_repos\my_project.git
                              git init --bare D:\git_repos\my_project.git
                              
                              C:
                              cd C:\path\to\your\project
                              git init
                              git remote add origin file:///D:/git_repos/my_project.git
                              

                              This way, you can now push to origin and it will send your commits to your repo on your second drive.

                              • R [email protected]

                                It didn't even need remote version control. All it required was the essential files version-controlled in the local folder.

                                [email protected] wrote (#26):

                                A simple rm -rf says hello

                                • R [email protected]

                                  Yep... it happened to me. I killed a docker compose file with 550 lines of God-forsaken yaml less than a week before the project launch, and the most recent backup we had was nearly a month old and would have taken at least a day to get back up to speed. With a stroke of luck, I handled it about as well as I could have for on the feet thinking and I'd like to share my experience and lessons learned for anyone else that may ever find themselves in these smelly shoes:

                                  Disclaimer! I'm a young engineer still growing my expertise and experience. Some stuff in here may be bad advice or wrong, like my assertion that using dd to pull data off of an unmounted drive doesn't risk data loss; I'm pretty damn sure of that, but I wouldn't stake my life (or your data) on it. I'll happily update this post as improvements are suggested.

                                  IF YOU RM'D THE WRONG THING:

                                  1. Stop all writes to that partition as quickly as possible.

                                  this step has some optional improvements at the bottom

                                  Up to this point I'd been keeping a lazy backup of the file deleted on another partition. In order to preserve the disk as well as possible and prevent overwriting the blocks with the lost file, I cd to the backup dir and run a docker compose down. There were a few stragglers, but docker stop $containerName worked fine.

                                  2. Unmount the partition

                                  The goal is to ensure nothing writes to this disk at all. This, in tandem with the fact that most data recovery tools require an unmounted disk, is a critical step in preserving all hopes of recovering your data. Get that disk off of the accessible filesystem.

                                  3. Save what you have

                                  Once your partition is unmounted, you can use dd or a similar tool to create a backup somewhere else without risking corruption of the data. You should restore to a different disk/partition if at all possible, but I know sometimes things aren't possible and /boot can come in handy in an emergency. It would have been big enough to save me if I wasn't working on a dedicated app-data partition.

                                  4. Your sword of choice

                                  It's time to choose your data recovery tool. I tried both extundelete and testdisk/photorec, and extundelete got some stuff back but not what I was looking for, while also running into seg faults and other issues. Photorec, on the other hand, was truly a gift from the cosmos. It worked like a dream, it was quick and easy, and it saved my sanity and my project.

                                  5. The search for gold

                                  Use "grep -r './restore/directory' -e 'term in your file'" to look through everything you've deleted on the part since the beginning of time for the file you need.

                                  It was a scary time for me, and hopefully this playbook can help some of you recover from a really stupid, preventable mistake.

                                  potential improvements

                                  In hindsight, two things could have gone better here:
                                  1. Quicker: I could have shut them down immediately if I was less panicked and remembered this little trick: docker stop $(docker ps -q)
                                  2. Exporter running config: I could have used 'docker inspect > /path/to/other/partition' to aid in the restoration process if I ended up needing to reconstruct it by hand. I decided it was worth it to risk it for the biscuit, though, and choosing to shut the stack down as quickly as possible was worth the potential sacrifice.

                                  If you fight to preserve a running config of some sorts, whether k8s docker or other, MAKE SURE YOU WRITE IT TO ANOTHER PARTITION. It's generally wise to give an application it's own data partition but hey, you don't have a usable backup so if you don't have a partition to spare consider using the /boot partition if you really want to save your running config.

                                  If you're considering a donation to FOSS, join me in sending a few bucks over to CGSecurity.

                                  remove, recurse, force
                                  wrong path, there is no backup
                                  desperate panic

                                  C This user is from outside of this forum
                                  C This user is from outside of this forum
                                  [email protected]
                                  wrote on last edited by
                                  #27

                                  Docker is annoying and unnecessary for a lot of the situations people use it in.

                                  • R [email protected]

                                    100%. The organization wasn't there yet, and seeing that I wanted to remain employed at the time, I wasn't going to put up a fight against management 3 layers above me. Legacy businesses are a different beast when it comes to dumb stuff like that.

                                    [email protected] wrote (#28):

                                    Not trying to victim blame but your org was kind of asking for it here. I hope someone above takes responsibility for the situation they put you in.

                                    • R [email protected]

                                      Yep... it happened to me. I killed a docker compose file with 550 lines of God-forsaken yaml less than a week before the project launch, and the most recent backup we had was nearly a month old and would have taken at least a day to get back up to speed. With a stroke of luck, I handled it about as well as I could have for on the feet thinking and I'd like to share my experience and lessons learned for anyone else that may ever find themselves in these smelly shoes:

                                      Disclaimer! I'm a young engineer still growing my expertise and experience. Some stuff in here may be bad advice or wrong, like my assertion that using dd to pull data off of an unmounted drive doesn't risk data loss; I'm pretty damn sure of that, but I wouldn't stake my life (or your data) on it. I'll happily update this post as improvements are suggested.

                                      IF YOU RM'D THE WRONG THING:

                                      1. Stop all writes to that partition as quickly as possible.

(This step has some optional improvements listed at the bottom.)

Up to this point I'd been keeping a lazy backup of the deleted file on another partition. To preserve the disk as well as possible and avoid overwriting the blocks holding the lost file, I cd'd to the backup dir and ran a docker compose down. There were a few stragglers, but docker stop $containerName handled them fine.

                                      2. Unmount the partition

                                      The goal is to ensure nothing writes to this disk at all. This, in tandem with the fact that most data recovery tools require an unmounted disk, is a critical step in preserving all hopes of recovering your data. Get that disk off of the accessible filesystem.

                                      3. Save what you have

Once your partition is unmounted, you can use dd or a similar tool to create a backup image somewhere else without risking corruption of the data. You should write the image to a different disk/partition if at all possible, but I know sometimes that isn't an option, and /boot can come in handy in an emergency. It would have been big enough to save me if I hadn't been working on a dedicated app-data partition.
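A minimal sketch of the imaging step, assuming GNU dd. The device and target paths in the comment are placeholders, and the runnable part below demonstrates the same mechanics on a throwaway file in /tmp so nothing real gets touched:

```shell
# Real-world form (placeholders -- substitute your own partition and target):
#   dd if=/dev/sdXN of=/mnt/other/sdXN.img bs=4M conv=noerror,sync status=progress
# conv=noerror,sync keeps going past read errors; write the image to a DIFFERENT disk.

# Safe demonstration of the same mechanics on a throwaway file:
dd if=/dev/zero of=/tmp/fake_part.img bs=1K count=64 2>/dev/null    # stand-in "partition"
dd if=/tmp/fake_part.img of=/tmp/fake_part.backup bs=4K 2>/dev/null # byte-for-byte copy
cmp /tmp/fake_part.img /tmp/fake_part.backup && echo "images match"
```

Once you have the image, point your recovery tooling at the image, not at the original disk.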

                                      4. Your sword of choice

It's time to choose your data recovery tool. I tried both extundelete and testdisk/photorec: extundelete got some stuff back, but not what I was looking for, and it ran into segfaults and other issues along the way. Photorec, on the other hand, was truly a gift from the cosmos. It worked like a dream, it was quick and easy, and it saved my sanity and my project.

                                      5. The search for gold

Use grep -rl -e 'term in your file' ./restore/directory to look through everything you've ever deleted on the partition since the beginning of time for the file you need. Photorec gives recovered files generic names, so searching for a string you know was inside the lost file is the practical way to find it.
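To make the search concrete, here's a sketch: the recup_dir names mimic photorec's output convention, but the directories and file contents below are fabricated purely for the demo:

```shell
# Photorec restores files under generic names (f0000001.txt etc.), so grep
# for a string you know was in the lost file. Demo with fabricated dirs:
mkdir -p /tmp/recup_dir.1 /tmp/recup_dir.2
echo "services:" > /tmp/recup_dir.1/f0000001.txt
echo "unrelated junk" > /tmp/recup_dir.2/f0000002.txt
# -r: recurse into directories, -l: print only the names of matching files
grep -rl -e 'services:' /tmp/recup_dir.1 /tmp/recup_dir.2
```

The filename it prints is your candidate file; open it and check it's the right version before celebrating.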

                                      It was a scary time for me, and hopefully this playbook can help some of you recover from a really stupid, preventable mistake.

                                      potential improvements

                                      In hindsight, two things could have gone better here:
1. Quicker: I could have shut them down immediately had I been less panicked and remembered this little trick: docker stop $(docker ps -q)
2. Export running config: I could have used 'docker inspect $containerName > /path/to/other/partition/config.json' to aid the restoration process if I'd ended up needing to reconstruct the file by hand. I decided it was worth it to risk it for the biscuit, though: shutting the stack down as quickly as possible was worth the potential sacrifice.

If you fight to preserve a running config of some sort, whether k8s, docker, or other, MAKE SURE YOU WRITE IT TO ANOTHER PARTITION. It's generally wise to give an application its own data partition, but hey, you don't have a usable backup, so if you don't have a partition to spare, consider using the /boot partition if you really want to save your running config.

                                      If you're considering a donation to FOSS, join me in sending a few bucks over to CGSecurity.

                                      remove, recurse, force
                                      wrong path, there is no backup
                                      desperate panic

                                      B This user is from outside of this forum
                                      [email protected]
                                      wrote on last edited by
                                      #29

This doesn't sound like it'll help you now, but in the future, you really should have cloud-synced backups of that kind of thing.

                                      R 1 Reply Last reply
                                      0
                                      • C [email protected]

                                        Docker is annoying and unnecessary for a lot of the situations people use it in.

                                        C This user is from outside of this forum
                                        [email protected]
                                        wrote on last edited by
                                        #30

                                        My secret Linux shame is that however much I try, I just can't understand Docker at all. Like I get the general idea of what it is, but I can't visualise how it works if that makes sense.

                                        I have an app that runs in Docker, that I installed by just following the instructions, but I don't know where it is on my computer or what exactly it's doing, which I don't really like.

                                        1 Reply Last reply
                                        0
                                        • X [email protected]

                                          Git. Why you would even think to use anything else is...weird.

                                          Data recovery is a complete shot in the dark in a situation like this.

If you commit often, you don't have to worry about data loss, and git already has a workflow for this exact situation: git branches.

                                          git checkout work_being_done
# dozens and dozens of commits while working
                                          git rebase -i main
                                          git checkout main
                                          git merge work_being_done
                                          

Lets you do any amount of work and save states for each step. You can even push your working branch to the repository, so even if you have data loss like this, you can always just re-clone the repository.

                                          M This user is from outside of this forum
                                          [email protected]
                                          wrote on last edited by
                                          #31

Yeah, I did a read-through, and despite everything they wrote there's still no mention of git, which means their project-critical YAML file has no way to roll back changes, bisect issues, manage concurrent development, or audit changes, nor is it backed up via any git provider out there.

I can see they're a new dev, so I don't want to blame them; this is entirely on their project management and the experienced devs to put some kind of version control in place.

                                          I worked in a job which basically had me dragging-and-dropping files into a live production environment and I didn't last more than 8 months before I scarpered for a job with better pay and better development practices.
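Since it keeps coming up in this thread: a minimal sketch of why a committed file makes the whole recovery playbook unnecessary. The repo path and filename here are hypothetical, created just for the demo:

```shell
# Minimal demo: a committed file can be restored in one command after an errant rm.
rm -rf /tmp/git-rescue-demo && mkdir -p /tmp/git-rescue-demo && cd /tmp/git-rescue-demo
git init -q
echo "services:" > docker-compose.yml
git add docker-compose.yml
git -c user.email=dev@example.com -c user.name=dev commit -qm "compose file"
rm docker-compose.yml                # the fatal mistake
git checkout -- docker-compose.yml   # back in milliseconds, no photorec needed
cat docker-compose.yml
```

And if the whole disk dies instead of one file, a pushed branch means you just re-clone.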

                                          1 Reply Last reply
                                          0