agnos.is Forums

rm'd a project that was months in the making a few days before launch... What to do if you kill the wrong file

Linux · 36 Posts · 21 Posters · 121 Views
  • R [email protected]

Yep... it happened to me. I killed a docker compose file with 550 lines of God-forsaken yaml less than a week before the project launch, and the most recent backup we had was nearly a month old and would have taken at least a day to bring back up to speed. With a stroke of luck, I handled it about as well as I could have for thinking on my feet, and I'd like to share my experience and lessons learned for anyone else who may ever find themselves in these smelly shoes:

    Disclaimer! I'm a young engineer still growing my expertise and experience. Some stuff in here may be bad advice or wrong, like my assertion that using dd to pull data off of an unmounted drive doesn't risk data loss; I'm pretty damn sure of that, but I wouldn't stake my life (or your data) on it. I'll happily update this post as improvements are suggested.

    IF YOU RM'D THE WRONG THING:

    1. Stop all writes to that partition as quickly as possible.

(This step has some optional improvements at the bottom.)

Up to this point I'd been keeping a lazy backup of the deleted file on another partition. To preserve the disk as well as possible and avoid overwriting the blocks that held the lost file, I cd'd to that backup dir and ran docker compose down. There were a few stragglers, but docker stop $containerName took care of them.
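A minimal sketch of that shutdown, assuming a plain docker host where nothing else is writing to the partition:

    docker stop $(docker ps -q)    # stop every running container in one shot, stragglers included
    sync                           # flush anything already buffered before touching the mount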

    2. Unmount the partition

    The goal is to ensure nothing writes to this disk at all. This, in tandem with the fact that most data recovery tools require an unmounted disk, is a critical step in preserving all hopes of recovering your data. Get that disk off of the accessible filesystem.
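Concretely, that looks something like this; the mount point is a made-up example:

    fuser -vm /mnt/appdata    # list anything still holding files open on that filesystem
    umount /mnt/appdata       # don't move on until this actually succeeds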

    3. Save what you have

Once your partition is unmounted, you can use dd or a similar tool to create a backup somewhere else without risking corruption of the data. You should write the image to a different disk/partition if at all possible, but sometimes that isn't an option, and /boot can come in handy in an emergency. It would have been big enough to save me if I hadn't been working on a dedicated app-data partition.
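A sketch of that imaging step; the device and destination paths are placeholders, with the destination on a different disk:

    dd if=/dev/sdb1 of=/mnt/backup/appdata.img bs=4M status=progress
    # if the drive is throwing read errors, GNU ddrescue retries bad sectors and keeps a resumable map file
    ddrescue /dev/sdb1 /mnt/backup/appdata.img /mnt/backup/appdata.map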

    4. Your sword of choice

    It's time to choose your data recovery tool. I tried both extundelete and testdisk/photorec, and extundelete got some stuff back but not what I was looking for, while also running into seg faults and other issues. Photorec, on the other hand, was truly a gift from the cosmos. It worked like a dream, it was quick and easy, and it saved my sanity and my project.
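Roughly how I'd point PhotoRec at the image from step 3; the paths are placeholders, and the tool walks you through the rest interactively:

    photorec /log /d /mnt/backup/restore appdata.img
    # in the menus: pick the image, the partition, the filesystem type, and optionally
    # trim [File Opt] down to the types you care about (plain-text files like compose yaml typically land under txt)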

    5. The search for gold

    Use "grep -r './restore/directory' -e 'term in your file'" to look through everything you've deleted on the part since the beginning of time for the file you need.

    It was a scary time for me, and hopefully this playbook can help some of you recover from a really stupid, preventable mistake.

    potential improvements

    In hindsight, two things could have gone better here:
    1. Quicker: I could have shut them down immediately if I was less panicked and remembered this little trick: docker stop $(docker ps -q)
2. Export the running config: I could have used docker inspect $(docker ps -q) > /path/to/other/partition to aid in the restoration process if I'd ended up needing to reconstruct the file by hand (sketch below). I decided to risk it for the biscuit, though: shutting the stack down as quickly as possible was worth the potential sacrifice.
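If I had gone that route, the whole thing would have been roughly the following, with the target path a placeholder on a different partition:

    docker inspect $(docker ps -q) > /mnt/other-partition/running-config.json    # snapshot the live container config somewhere safe
    docker stop $(docker ps -q)                                                  # then kill the stack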

If you fight to preserve a running config of some sort, whether k8s, docker, or other, MAKE SURE YOU WRITE IT TO ANOTHER PARTITION. It's generally wise to give an application its own data partition, but hey, you don't have a usable backup, so if you don't have a partition to spare, consider using the /boot partition if you really want to save your running config.

    If you're considering a donation to FOSS, join me in sending a few bucks over to CGSecurity.

    remove, recurse, force
    wrong path, there is no backup
    desperate panic

[email protected] #22

    While you can't use Syncthing to share a git repo, it actually works quite well in an A -> B setup, where updates happen only on A and versioned backup is enabled on B. YMMV tho.

    • R [email protected]

      I'm aware. Any local storage wouldn't do much about a poorly aimed rm, though.

[email protected] #23

A lot harder to rm a whole directory vs a single file. And even then, you can git init --bare a "remote" directory on the local machine that you push to, to keep a backup copy; rough sketch below.
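A rough sketch of that, assuming the project is already a git repo with a main branch; all paths are placeholders:

    git init --bare /mnt/backup/my_project.git       # the local "remote"
    cd ~/projects/my_project
    git remote add backup /mnt/backup/my_project.git
    git push backup main                             # this copy survives an rm of the work tree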

      • R [email protected]

        Yep... it happened to me. I killed a docker compose file with 550 lines of God-forsaken yaml less than a week before the project launch, and the most recent backup we had was nearly a month old and would have taken at least a day to get back up to speed. With a stroke of luck, I handled it about as well as I could have for on the feet thinking and I'd like to share my experience and lessons learned for anyone else that may ever find themselves in these smelly shoes:

        Disclaimer! I'm a young engineer still growing my expertise and experience. Some stuff in here may be bad advice or wrong, like my assertion that using dd to pull data off of an unmounted drive doesn't risk data loss; I'm pretty damn sure of that, but I wouldn't stake my life (or your data) on it. I'll happily update this post as improvements are suggested.

        IF YOU RM'D THE WRONG THING:

        1. Stop all writes to that partition as quickly as possible.

        this step has some optional improvements at the bottom

        Up to this point I'd been keeping a lazy backup of the file deleted on another partition. In order to preserve the disk as well as possible and prevent overwriting the blocks with the lost file, I cd to the backup dir and run a docker compose down. There were a few stragglers, but docker stop $containerName worked fine.

        2. Unmount the partition

        The goal is to ensure nothing writes to this disk at all. This, in tandem with the fact that most data recovery tools require an unmounted disk, is a critical step in preserving all hopes of recovering your data. Get that disk off of the accessible filesystem.

        3. Save what you have

        Once your partition is unmounted, you can use dd or a similar tool to create a backup somewhere else without risking corruption of the data. You should restore to a different disk/partition if at all possible, but I know sometimes things aren't possible and /boot can come in handy in an emergency. It would have been big enough to save me if I wasn't working on a dedicated app-data partition.

        4. Your sword of choice

        It's time to choose your data recovery tool. I tried both extundelete and testdisk/photorec, and extundelete got some stuff back but not what I was looking for, while also running into seg faults and other issues. Photorec, on the other hand, was truly a gift from the cosmos. It worked like a dream, it was quick and easy, and it saved my sanity and my project.

        5. The search for gold

        Use "grep -r './restore/directory' -e 'term in your file'" to look through everything you've deleted on the part since the beginning of time for the file you need.

        It was a scary time for me, and hopefully this playbook can help some of you recover from a really stupid, preventable mistake.

        potential improvements

        In hindsight, two things could have gone better here:
        1. Quicker: I could have shut them down immediately if I was less panicked and remembered this little trick: docker stop $(docker ps -q)
        2. Exporter running config: I could have used 'docker inspect > /path/to/other/partition' to aid in the restoration process if I ended up needing to reconstruct it by hand. I decided it was worth it to risk it for the biscuit, though, and choosing to shut the stack down as quickly as possible was worth the potential sacrifice.

        If you fight to preserve a running config of some sorts, whether k8s docker or other, MAKE SURE YOU WRITE IT TO ANOTHER PARTITION. It's generally wise to give an application it's own data partition but hey, you don't have a usable backup so if you don't have a partition to spare consider using the /boot partition if you really want to save your running config.

        If you're considering a donation to FOSS, join me in sending a few bucks over to CGSecurity.

        remove, recurse, force
        wrong path, there is no backup
        desperate panic

[email protected] #24, in reply to [email protected]

        Git. Why you would even think to use anything else is...weird.

        Data recovery is a complete shot in the dark in a situation like this.

If you commit often, you don't have to worry about data loss, and git already has a workflow for this exact situation: git branches.

git checkout -b work_being_done
# dozens and dozens of commits while working
git rebase -i main
git checkout main
git merge work_being_done
        

Lets you do any amount of work and save a state for each step. You can even push your working branch to the repository, so even if you have data loss like this, you can always just re-pull the repository.
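And the recovery half of that is just a fresh clone, since the pushed branch survives whatever happened to the local copy; the remote URL here is a placeholder:

    git clone git@example.com:me/my_project.git
    cd my_project
    git checkout work_being_done    # pick the in-progress branch back up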

        • R [email protected]

          I'm aware. Any local storage wouldn't do much about a poorly aimed rm, though.

[email protected] #25, in reply to [email protected]

I don't know if it fits your use case, but a little-known trick is to use a second local drive/folder as a remote, like this:

          D:
          mkdir D:\git_repos\my_project.git
          git init --bare D:\git_repos\my_project.git
          
          C:
          cd C:\path\to\your\project
          git init
          git remote add origin file:///D:/git_repos/my_project.git
          

          This way, you can now push to origin and it will send your commits to your repo on your second drive.
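After that setup, the day-to-day flow is just a normal push, assuming the default branch is named main; every push then lands a second full copy of the history on the D: drive:

    git add .
    git commit -m "work in progress"
    git push -u origin main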

          • R [email protected]

            Didn't even need remote version control. All it required was essential files version controlled in the local folder.

[email protected] #26

            A simple rm -rf says hello

            • R [email protected]

              Yep... it happened to me. I killed a docker compose file with 550 lines of God-forsaken yaml less than a week before the project launch, and the most recent backup we had was nearly a month old and would have taken at least a day to get back up to speed. With a stroke of luck, I handled it about as well as I could have for on the feet thinking and I'd like to share my experience and lessons learned for anyone else that may ever find themselves in these smelly shoes:

              Disclaimer! I'm a young engineer still growing my expertise and experience. Some stuff in here may be bad advice or wrong, like my assertion that using dd to pull data off of an unmounted drive doesn't risk data loss; I'm pretty damn sure of that, but I wouldn't stake my life (or your data) on it. I'll happily update this post as improvements are suggested.

              IF YOU RM'D THE WRONG THING:

              1. Stop all writes to that partition as quickly as possible.

              this step has some optional improvements at the bottom

              Up to this point I'd been keeping a lazy backup of the file deleted on another partition. In order to preserve the disk as well as possible and prevent overwriting the blocks with the lost file, I cd to the backup dir and run a docker compose down. There were a few stragglers, but docker stop $containerName worked fine.

              2. Unmount the partition

              The goal is to ensure nothing writes to this disk at all. This, in tandem with the fact that most data recovery tools require an unmounted disk, is a critical step in preserving all hopes of recovering your data. Get that disk off of the accessible filesystem.

              3. Save what you have

              Once your partition is unmounted, you can use dd or a similar tool to create a backup somewhere else without risking corruption of the data. You should restore to a different disk/partition if at all possible, but I know sometimes things aren't possible and /boot can come in handy in an emergency. It would have been big enough to save me if I wasn't working on a dedicated app-data partition.

              4. Your sword of choice

              It's time to choose your data recovery tool. I tried both extundelete and testdisk/photorec, and extundelete got some stuff back but not what I was looking for, while also running into seg faults and other issues. Photorec, on the other hand, was truly a gift from the cosmos. It worked like a dream, it was quick and easy, and it saved my sanity and my project.

              5. The search for gold

              Use "grep -r './restore/directory' -e 'term in your file'" to look through everything you've deleted on the part since the beginning of time for the file you need.

              It was a scary time for me, and hopefully this playbook can help some of you recover from a really stupid, preventable mistake.

              potential improvements

              In hindsight, two things could have gone better here:
              1. Quicker: I could have shut them down immediately if I was less panicked and remembered this little trick: docker stop $(docker ps -q)
              2. Exporter running config: I could have used 'docker inspect > /path/to/other/partition' to aid in the restoration process if I ended up needing to reconstruct it by hand. I decided it was worth it to risk it for the biscuit, though, and choosing to shut the stack down as quickly as possible was worth the potential sacrifice.

              If you fight to preserve a running config of some sorts, whether k8s docker or other, MAKE SURE YOU WRITE IT TO ANOTHER PARTITION. It's generally wise to give an application it's own data partition but hey, you don't have a usable backup so if you don't have a partition to spare consider using the /boot partition if you really want to save your running config.

              If you're considering a donation to FOSS, join me in sending a few bucks over to CGSecurity.

              remove, recurse, force
              wrong path, there is no backup
              desperate panic

[email protected] #27, in reply to [email protected]

              Docker is annoying and unnecessary for a lot of the situations people use it in.

              • R [email protected]

100%. The organization wasn't there yet, and seeing that I wanted to remain employed at the time, I wasn't going to put up a fight with management 3 layers above me. Legacy businesses are a different beast when it comes to dumb stuff like that.

[email protected] #28

                Not trying to victim blame but your org was kind of asking for it here. I hope someone above takes responsibility for the situation they put you in.

                • R [email protected]

                  Yep... it happened to me. I killed a docker compose file with 550 lines of God-forsaken yaml less than a week before the project launch, and the most recent backup we had was nearly a month old and would have taken at least a day to get back up to speed. With a stroke of luck, I handled it about as well as I could have for on the feet thinking and I'd like to share my experience and lessons learned for anyone else that may ever find themselves in these smelly shoes:

                  Disclaimer! I'm a young engineer still growing my expertise and experience. Some stuff in here may be bad advice or wrong, like my assertion that using dd to pull data off of an unmounted drive doesn't risk data loss; I'm pretty damn sure of that, but I wouldn't stake my life (or your data) on it. I'll happily update this post as improvements are suggested.

                  IF YOU RM'D THE WRONG THING:

                  1. Stop all writes to that partition as quickly as possible.

                  this step has some optional improvements at the bottom

                  Up to this point I'd been keeping a lazy backup of the file deleted on another partition. In order to preserve the disk as well as possible and prevent overwriting the blocks with the lost file, I cd to the backup dir and run a docker compose down. There were a few stragglers, but docker stop $containerName worked fine.

                  2. Unmount the partition

                  The goal is to ensure nothing writes to this disk at all. This, in tandem with the fact that most data recovery tools require an unmounted disk, is a critical step in preserving all hopes of recovering your data. Get that disk off of the accessible filesystem.

                  3. Save what you have

                  Once your partition is unmounted, you can use dd or a similar tool to create a backup somewhere else without risking corruption of the data. You should restore to a different disk/partition if at all possible, but I know sometimes things aren't possible and /boot can come in handy in an emergency. It would have been big enough to save me if I wasn't working on a dedicated app-data partition.

                  4. Your sword of choice

                  It's time to choose your data recovery tool. I tried both extundelete and testdisk/photorec, and extundelete got some stuff back but not what I was looking for, while also running into seg faults and other issues. Photorec, on the other hand, was truly a gift from the cosmos. It worked like a dream, it was quick and easy, and it saved my sanity and my project.

                  5. The search for gold

                  Use "grep -r './restore/directory' -e 'term in your file'" to look through everything you've deleted on the part since the beginning of time for the file you need.

                  It was a scary time for me, and hopefully this playbook can help some of you recover from a really stupid, preventable mistake.

                  potential improvements

                  In hindsight, two things could have gone better here:
                  1. Quicker: I could have shut them down immediately if I was less panicked and remembered this little trick: docker stop $(docker ps -q)
                  2. Exporter running config: I could have used 'docker inspect > /path/to/other/partition' to aid in the restoration process if I ended up needing to reconstruct it by hand. I decided it was worth it to risk it for the biscuit, though, and choosing to shut the stack down as quickly as possible was worth the potential sacrifice.

                  If you fight to preserve a running config of some sorts, whether k8s docker or other, MAKE SURE YOU WRITE IT TO ANOTHER PARTITION. It's generally wise to give an application it's own data partition but hey, you don't have a usable backup so if you don't have a partition to spare consider using the /boot partition if you really want to save your running config.

                  If you're considering a donation to FOSS, join me in sending a few bucks over to CGSecurity.

                  remove, recurse, force
                  wrong path, there is no backup
                  desperate panic

[email protected] #29, in reply to [email protected]

This doesn't sound like it'll help you now, but in the future, you really should have cloud-synced backups of that kind of thing.

                  • C [email protected]

                    Docker is annoying and unnecessary for a lot of the situations people use it in.

[email protected] #30, in reply to [email protected] (#27)

                    My secret Linux shame is that however much I try, I just can't understand Docker at all. Like I get the general idea of what it is, but I can't visualise how it works if that makes sense.

                    I have an app that runs in Docker, that I installed by just following the instructions, but I don't know where it is on my computer or what exactly it's doing, which I don't really like.

                    • X [email protected]

                      Git. Why you would even think to use anything else is...weird.

                      Data recovery is a complete shot in the dark in a situation like this.

                      If you commit often, you don't have to worry about data loss, and git already has a workflow for this exact situation--git branches;

                      git checkout work_being_done
                      // dozens and dozens of commits while working
                      git rebase -i main
                      git checkout main
                      git merge work_being_done
                      

                      Let's you do any amount of work and save states for each step. You can even commit you working branch to the repository, so even if you have data loss like this, you can always just re-pull the repository.

[email protected] #31, in reply to [email protected] (#24)

Yeah, I did a read-through; despite everything they wrote, there's still no mention of git, which means their project-critical YAML file has no way to roll back changes, bisect issues, manage concurrent development, audit changes, or even be backed up via any git provider out there.

I can see they're a new dev, so I don't want to blame them; this is entirely on their project management and experienced devs to put some kind of version control in place.

                      I worked in a job which basically had me dragging-and-dropping files into a live production environment and I didn't last more than 8 months before I scarpered for a job with better pay and better development practices.

                      • R [email protected]

                        100%. The organization wasn't there yet and seeing that I wanted to remain employed at the time I wasn't going to put up a fight against management 3 layers above me. Legacy business are a different beast when it comes to dumb stuff like that.

[email protected] #32, in reply to [email protected]

                        This is red flag shit. 3 layers of management trying to restrict version control on project critical code? You need to update your CV and start looking for a better role before they fuck up. I say this from experience.

                        • R [email protected]

                          I'm aware. Any local storage wouldn't do much about a poorly aimed rm, though.

[email protected] #33, in reply to [email protected]

It's really easy to configure a self-hosted Forgejo instance. Even if you rm your local work, you can clone it back from your server, whether that's hosted on the same system over localhost or on another system on your network.

                          • M [email protected]

                            This is red flag shit. 3 layers of management trying to restrict version control on project critical code? You need to update your CV and start looking for a better role before they fuck up. I say this from experience.

[email protected] #34, in reply to #32

This happened a while ago and I'm well past it. The point of the post was to help others who end up in this situation, not to sell best practices.

                            • B [email protected]

                              This doesn't sound like it'll help you now, but I'm the future, you really should have cloud synced backups of that kind of thing.

[email protected] #35, in reply to [email protected] (#29)

                              I'm aware. The post was simply to get a recovery guide out there for a crappy situation.

                              • D [email protected]

                                I don't know if it fits your use-case but a little known feature is to use a second local drive/folder as a remote, like this:

                                D:
                                mkdir D:\git_repos\my_project.git
                                git init --bare D:\git_repos\my_project.git
                                
                                C:
                                cd C:\path\to\your\project
                                git init
                                git remote add origin file:///D:/git_repos/my_project.git
                                

                                This way, you can now push to origin and it will send your commits to your repo on your second drive.

[email protected] #36, in reply to #25

                                I'm aware, but thank you. This post was intended to be a guide for people that end up in this situation.
