agnos.is Forums

Syncthing alternatives

selfhosted
64 Posts 26 Posters 0 Views
  • N [email protected]

    Ah, just one question - is your current Syncthing use internal to your home network, or does it sync remotely?

    Because if you're just having your mobile devices sync files when they get on your home wifi, it's reasonably safe for that to be fire-and-forget; but if you're syncing from public networks into a private one, that really should require more specific configuration and active control.

    [email protected]
    wrote on last edited by
    #24

    Syncthing runs encrypted anyway.

    • Z [email protected]

      Had a Pixel 8, and now a new Pixel 9a. I think the problem is actually a bit messy. In my house I have several access points. There is a chance that while Syncthing is working and I am going up or down the stairs, the phone changes access points. Syncthing possibly gets an 'oops, didn't finish that! Let's go for the next one' kind of issue. Of course, I never looked into the logs or anything, so this is pure speculation.

      Usually the corruption in the photos is the kind where the beginning is always there, but at some point the image gets replaced with gray, hence my theory about the files.

      [email protected]
      wrote on last edited by [email protected]
      #25

      Yea, gotta be something odd with your setup.

      Currently I have one phone (of several) that's syncing in excess of 10,000 files, some only on WiFi (with 3 access points), some WiFi/cell data.

      ST knows the state of a file, so a disconnect should have no effect. If you're getting corrupted files, I wonder if something else is going on which may also affect another sync tool.

      Try Resilio for the same folders, see if you have the same problem (disable Syncthing of course, otherwise conflicting edits will cause file corruption).
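      The gray-tail pattern described above is what a truncated JPEG typically looks like: decoders render the missing blocks as flat gray. A quick way to flag suspect photos is to check for the JPEG end-of-image marker. This is a heuristic sketch of my own, not anything Syncthing or Resilio provides; some valid files carry trailing padding, so treat hits as candidates, not proof.

      ```python
      from pathlib import Path

      def looks_truncated(path: Path) -> bool:
          """True if the file lacks JPEG SOI/EOI framing (FF D8 ... FF D9)."""
          data = path.read_bytes()
          # Strip trailing NUL padding some tools append before checking EOI.
          return not (data.startswith(b"\xff\xd8")
                      and data.rstrip(b"\x00").endswith(b"\xff\xd9"))

      def scan(folder: Path) -> list[Path]:
          """List .jpg files under folder that look cut off mid-transfer."""
          return sorted(p for p in folder.rglob("*.jpg") if looks_truncated(p))
      ```

      Running `scan` over the synced photo folder on both source and destination would at least tell you on which side the damage first appears.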

      • Z [email protected]

        Hi,

        As the title suggests: what are alternatives to Syncthing that are basically fire-and-forget, work on multiple device types, and just focus on file syncing?

        Over the months I've had the weirdest problems with Syncthing, and lately I noticed some of my photos got corrupted, which is an absolute no-no for me. I currently use Syncthing as an easy automatic backup of documents, photos and other files, between my PCs and my phones (they all send only to the server; folders are not shared with other devices).

        [email protected]
        wrote on last edited by [email protected]
        #26

        That's really weird. I've been using it for mobile-desktop-server-offsite sync for many years, with transfer sizes over 15TB, over WiFi, cellular, cable, fiber. I've never seen data corruption. Conflicts, sometimes. Permission issues, sometimes. Wiping something accidentally, sometimes. It's even more weird because Syncthing computes hash values for the files it manages. I don't know if it performs hash validation after copying remotely, but if not, it can be forced manually, which would tell you what's fucked so it can be pulled again from the source, if it still exists.

        Nevermind, it verifies the result:

        When a block is copied or received from another device, its SHA256 hash is computed and compared with the expected value. If it matches the block is written to a temporary copy of the file, otherwise it is discarded and Syncthing tries to find another source for the block.

        According to this, if you have data corruption it can only occur between copying/moving a temporary file on your destination to another directory, or it could occur on the source itself. Both of those scenarios are a cause for concern and would likely persist with any utility. Moving or copying a file from one location to another on a sane machine should not corrupt it. If I were you I'd ensure my server doesn't eat bits. If it's not the storage media, it could be bit rot, or bad RAM.
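        The quoted behaviour, hashing each received block and keeping it only if the digest matches, can be sketched in a few lines. This is an illustrative Python sketch of the idea, not Syncthing's actual code, and the function names are made up:

        ```python
        import hashlib

        def verify_block(block: bytes, expected_sha256: str) -> bool:
            """Accept a block only if its SHA-256 digest matches the expected value."""
            return hashlib.sha256(block).hexdigest() == expected_sha256

        def receive_blocks(blocks, expected_hashes, tmp_file):
            """Write verified blocks to a temporary copy of the file.
            On a mismatch, a real client would discard the block and
            try to fetch it from another source instead."""
            for block, expected in zip(blocks, expected_hashes):
                if not verify_block(block, expected):
                    raise IOError("checksum mismatch: try another source for this block")
                tmp_file.write(block)
        ```

        The point of the scheme is that a block flipped in transit never reaches the temporary file, which is why in-transit corruption is effectively ruled out.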

        Just in case everything seems fine, let me tell you what I dealt with. I had a Ryzen 5950X machine with 32GB of RAM. It worked well since inception with no signs of RAM or data corruption issues. I test every new machine with Memtest86+. At some point I migrated the storage from Ext4 on LVMRAID to ZFS. All good. Then I wrote an alarm for Prometheus to tell me if there are any issues in ZFS.

        A week later I get an email about a ZFS error. I check the system: checksum errors, data has been corrected, applications unaffected, run a scrub to clear. I ran a scrub. A few more checksum errors found, all corrected, we're clean now. There was a strong solar storm around that time, probably that. A couple of weeks later I get another email. Same symptoms, same procedure. No solar storm. Shit. Memtest86+: pass. Hm. A couple of weeks later I get another. Same thing. Memtest again: nothing. This went on for several months. Meanwhile the off-site backup saw nothing like it.

        While running Memtest on another machine I noticed that the passes following the first took longer than the first, a lot longer. I thought something might be wrong with that machine. I dug into Memtest's source code and discovered that the first pass is shorter on purpose so that it quickly flags obviously bad RAM. Apparently, if you want to detect less obvious issues, you have to run multiple passes. OK. Memtest the main server again, pass 1: OK, pass 2: OK, pass 3: OK, pass 4: FAIL. FUCK. Memtest each stick separately for 4 passes: OK. Memtest 2 at a time: OK. Memtest all 4: FAIL. Alright, now we know why ZFS keeps finding checksum errors. Long story short, this machine could not run this RAM in a 4-DIMM config. I replaced it with RAM that's rated to run in a 4-DIMM config on that processor. No more checksum issues.

        If I had been running the older Ext4-on-LVMRAID storage stack, I would have caught NONE of this and it would have happily corrupted files here and there. In fact it likely did, and I have some corruption. Moral of the story: run many Memtest passes and use a checksumming storage stack like ZFS or Btrfs. I strongly recommend ZFS, since its striped RAID works fine, unlike Btrfs's. If you don't find bad RAM, start using it today, even if you're working with a single disk, and add redundancy when you can. Only after that, switch Syncthing for something else, if you still somehow get corruption without ZFS noticing. And if ZFS tells you that you have checksum errors, you likely have bad hardware.
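        If moving to ZFS or Btrfs isn't possible right away, a crude stand-in for their checksumming is keeping SHA-256 sidecar files and re-verifying them periodically. This is a hypothetical sketch of my own (the sidecar naming is an arbitrary choice), and unlike ZFS it can only detect rot, not repair it:

        ```python
        import hashlib
        from pathlib import Path

        CHUNK = 1 << 20  # read 1 MiB at a time to keep memory use flat

        def sha256_of(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(CHUNK), b""):
                    h.update(chunk)
            return h.hexdigest()

        def record(path: Path) -> None:
            """Store the file's checksum in a sidecar file next to it."""
            Path(str(path) + ".sha256").write_text(sha256_of(path))

        def scrub(path: Path) -> bool:
            """True if the file still matches its recorded checksum (no bit rot)."""
            return sha256_of(path) == Path(str(path) + ".sha256").read_text()
        ```

        Run `record` once after each file lands and `scrub` from a periodic job; a False result on a file nobody edited points at hardware, exactly the kind of signal ZFS gives for free.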

        • avidamoeba@lemmy.caA [email protected]

          That's really weird. I've been using it for mobile-desktop-server-offsite sync for many years, with transfer sizes over 15TB, over WiFi, cellular, cable, fiber. I've never seen data corruption. [...] If I were you I'd ensure my server doesn't eat bits.

          [email protected]
          wrote on last edited by
          #27

          That is some good info. My HDD is totally fine (I checked it very recently, actually); as for the RAM, last time I checked it was OK, but I can check again to be sure.

          • Z [email protected]

            That is some good info. My HDD is totally fine (I checked it very recently, actually); as for the RAM, last time I checked it was OK, but I can check again to be sure.

            [email protected]
            wrote on last edited by
            #28

            Check my edit.

            • Z [email protected]

              Had a Pixel 8, and now a new Pixel 9a. I think the problem is actually a bit messy. [...] at some point the image gets replaced with gray, hence my theory about the files.

              [email protected]
              wrote on last edited by [email protected]
              #29

              Could be a bad AP.

              I once had a switch with a failing power supply that would corrupt MP3 artwork when writing to the MP3. That was a weird one to track down.

              • avidamoeba@lemmy.caA [email protected]

                That's really weird. I've been using it for mobile-desktop-server-offsite sync for many years, with transfer sizes over 15TB, over WiFi, cellular, cable, fiber. I've never seen data corruption. [...] Moral of the story: run many Memtest passes and use a checksumming storage stack like ZFS or Btrfs.

                [email protected]
                wrote on last edited by
                #30

                Dug into it, got into Memtest’s source code and discovered that the first pass is shorter on purpose so that it quickly flags obviously bad RAM. Apparently if you want to detect less obvious issues, you have to run multiple passes.

                I thought it was common knowledge that Memtest needed to be run for multiple passes to truly verify there are no issues. Seems that's one of those things that stopped being passed down in the community over the years. Back when I was first learning about overclocking around 2005 that was emphasized HEAVILY, with the recommendation to run it at least overnight, and a minimum of 10 passes.

                • halcyoncmdr@lemmy.worldH [email protected]

                  I thought it was common knowledge that Memtest needed to be run for multiple passes to truly verify there are no issues. [...]

                  [email protected]
                  wrote on last edited by [email protected]
                  #31

                  It's kind of embarrassing because I used to work as a service technician at a popular computer store in the 2000s, and Memtest86+ was standard fare for testing. I guess outside of OC, the shorter first pass truly was enough to spot bad RAM in the vast majority of cases. Plus multichannel interactions were not nearly as prevalent in the DDR1/2/3 days. I recently installed 4 DIMMs for 128GB on an AM5 machine, just to discover that the 5600 RAM only boots at 3600 in a 4-DIMM config, as per AMD's docs. I could force it higher, but without extra adjustment it can't go beyond 4600 on this machine. Back in the day, different DIMMs, often with different chips, worked in 2- and 4-DIMM configs so long as they matched their JEDEC spec. backinmyday.jpg

                  • S [email protected]

                    Really surprised about this. I have been using Syncthing for many years now on various devices and have never encountered issues with it.
                    Also, file sync is not a backup solution.

                    [email protected]
                    wrote on last edited by
                    #32

                    Ditto.

                    I get angry with SyncThing; don't get me wrong. I really wish they'd add a per-file-type merge plugin capability, and I get far more sync conflicts than I care for. I get situations where a client on one computer stops (mostly, Android killing it) and it needs to be manually restarted.

                    What I've never had is data corruption. It's to the point where I implicitly trust that if SyncThing says it's synced, I know it's on the destination. It might be stored as a sync conflict, but it's there.

                    • avidamoeba@lemmy.caA [email protected]

                      It's kind of embarrassing because I used to work as a service technician at a popular computer store in the 2000s and Memtest86+ has been a standard fare of testing. I guess outside of OC, the shorter first pass truly was enough to spot bad RAM in the vast majority of cases. Plus multichannel interactions were not nearly as prevalent in the DDR1/2/3 days. I recently installed 4 DIMMS for 128GB on an AM5 machine just to discover that the 5600 RAM only boots at 3600 in a 4-DIMM config, as per AMD's docs. Could force it higher but without extra adjustment it can't go beyond 4600 on this machine. Back in the day, different DIMMs, often with different chips worked in 2, 4-DIMM configs so long as they matched their JEDEC spec. backinmyday.jpg

                      [email protected]
                      wrote on last edited by
                      #33

                      Yeah, AMD's memory controllers, especially for DDR5, seem to have a lot more difficulty at high speeds with all 4 slots filled. I used to plan upgrades around populating 2 slots and doubling later if needed; now you really need to plan to leave those extra slots empty if you need memory performance for things like gaming rather than raw capacity.

                      • halcyoncmdr@lemmy.worldH [email protected]

                        Yeah AMD's memory controllers, especially DDR5 seem to have a lot more difficulty at high speed with 4 slots filled. I used to plan upgrades around populating 2 slots and doubling if needed a few years later, instead now you really need to plan to ignore those slots if you are needing memory performance for things like gaming versus raw capacity.

                        [email protected]
                        wrote on last edited by
                        #34

                        Yeah, I didn't need 128GB, but as soon as I figured out what was going on with the 4-DIMM config, I ordered another kit to fill out what I think I'll need for the lifetime of the system.

                        • avidamoeba@lemmy.caA [email protected]

                          Check my edit.

                          [email protected]
                          wrote on last edited by
                          #35

                          That is some crazy story right there. I do know for a fact that Memtest needs multiple passes. But in my case the machine only has one stick of RAM (it used to have two; one died). I will probably run a Memtest overnight and get back to you tomorrow.

                          • avidamoeba@lemmy.caA [email protected]

                            Yeah, I didn't need 128GB, but as soon as I figured what's going on with the 4-DIMM config, I ordered another kit to fill what I think I'd need for the lifetime of the system.

                            [email protected]
                            wrote on last edited by
                            #36

                            Similar issues occur even with just 2 DIMMs, with some XMP/EXPO profiles not working on AMD systems because of board/CPU limits. It should technically work, but for whatever reason it just can't handle it, and speeds need to be dropped or the timings loosened a bit even though the RAM itself is rated for that.

                            Not that the higher speeds are even necessary for 90% of users outside extreme overclocking. DDR5-6000 is basically where you reach diminishing returns anyway, and that's often where that limit seems to appear.

                            • Z [email protected]

                              That is some crazy story right there. I do know for a fact that memtest needs multiple passes. But in my case the machine only has 1 stick of ram (used to have 2, one died). I will probably do a memtest overnight and get at you tomorrow.

                              [email protected]
                              wrote on last edited by
                              #37

                              (used to have 2, one died)

                              That would make me immediately look to the RAM as the possible source of corruption. If it used to be a matched pair and one stick died, the odds of the other being on its way out are MUCH higher than normal. I would never trust the surviving stick of that matched pair.

                              • Z [email protected]

                                Never tried Unison or Resilio; I can check. As for Seafile, that is what I had before. At some point I realized I was getting several issues, mostly on desktop, and the storage was only accessible through Seafile, which in my case I am not OK with. Mostly it was the inconsistencies between OSes.

                                [email protected]
                                wrote on last edited by [email protected]
                                #38

                                There is a dockerized version of GoodSync, though I've only used the Windows version, so I can't really vouch for it. Might be something worth looking at. I'll chime in with the others here: I use Syncthing and I've never had any issues with corrupt files, but I can understand how that would be unacceptable.

                                • Z [email protected]

                                  I could try to do that, but I simply do not have reproducible steps that are certain to make the problem happen. I am a developer myself, and I absolutely despise it when someone says 'hey, something random happened the other day; I cannot say what the steps are, but it is there', just to find out in the end that nothing is there, or that it is simply not reproducible no matter what, for reasons I might never find out.

                                  [email protected]
                                  wrote on last edited by
                                  #39

                                  Someone might recognise your issue, though, and have suggestions, even if you can't reproduce it exactly.

                                  • halcyoncmdr@lemmy.worldH [email protected]

                                    Similar issues even with just 2 DIMMs, with some XMP/EXPO profiles not working on AMD systems because of board/CPU limits. [...]

                                    [email protected]
                                    wrote on last edited by
                                    #40

                                    Ugh. And as far as I'm reading, we're hitting limits with the connectors and interconnects, so the next iteration up might require some type of CAMM interface. 😔

                                    • O [email protected]

                                      Syncthing runs encrypted anyway.

                                      [email protected]
                                      wrote on last edited by
                                      #41

                                      Encrypting the connection is good; it means no one should be able to capture the data and read it. But my concern is more about the holes in the network boundary you have to create to establish the connection.

                                      My point of view is that that's not something you want happening automatically, unless you configured it manually yourself and you know exactly how it works, what it connects to, and how it authenticates (and preferably have some kind of inbound/outbound traffic monitoring for that connection).

                                      • Z [email protected]

                                        From what I see, Kopia is for the desktop. Unless I missed something, it is not available for Android, which is where it's more important for me to have backups.

                                        [email protected]
                                        wrote on last edited by
                                        #42

                                        Aah, my bad... I was half asleep. What I meant was: use Round Sync / Syncthing to copy files to the PC, and then use Kopia to back up. Round Sync can do one-way copying, so the source files are not corrupted.

                                        • Z [email protected]

                                          As the title suggests: what are alternatives to syncthing that are basically fire and forget, works on multiple device types, and just focuses on file syncing? [...]

                                          [email protected]
                                          wrote on last edited by
                                          #43

                                          I've been using rsync triggered by cron jobs for this task for... well... nearly forever.
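                                          The cron-plus-rsync pattern can be wrapped in a small script, sketched below in Python. This is a hedged illustration: the paths and script location are placeholders, rsync must be installed, and the flags shown (`-a` preserves attributes, `--delete` mirrors removals, `--partial` keeps interrupted transfers resumable) are standard rsync options.

                                          ```python
                                          import subprocess

                                          def rsync_cmd(src: str, dest: str) -> list[str]:
                                              """Build a one-way mirror command; a trailing slash on src
                                              copies its contents rather than the directory itself."""
                                              return ["rsync", "-a", "--delete", "--partial", src, dest]

                                          def run_sync(src: str, dest: str) -> None:
                                              # check=True raises if rsync exits non-zero, so a failed
                                              # cron run shows up in cron's mail instead of passing silently.
                                              subprocess.run(rsync_cmd(src, dest), check=True)

                                          # Scheduled from cron, e.g.:
                                          # 0 3 * * * /usr/bin/python3 /opt/sync.py
                                          ```

                                          Note this is one-way mirroring, not two-way sync: with `--delete`, anything removed on the source disappears from the destination too, so it complements rather than replaces a real backup tool.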
