16 comments

  • fabian2k 2 hours ago
    Looks interesting for something like local development. I don't intend to run production object storage myself, but some of the stuff in the guide to the production setup (https://garagehq.deuxfleurs.fr/documentation/cookbook/real-w...) would scare me a bit:

    > For the metadata storage, Garage does not do checksumming and integrity verification on its own, so it is better to use a robust filesystem such as BTRFS or ZFS. Users have reported that when using the LMDB database engine (the default), database files have a tendency of becoming corrupted after an unclean shutdown (e.g. a power outage), so you should take regular snapshots to be able to recover from such a situation.

    It seems like you can also use SQLite, but a default database engine that isn't robust against power failure or crashes seems surprising to me.
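
    Switching the metadata engine is a one-line change in the Garage config. A minimal sketch, assuming the `db_engine` option described in the Garage documentation (paths are illustrative):

```toml
# garage.toml — metadata engine selection (illustrative paths)
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"

# "lmdb" is the default; "sqlite" trades some speed for
# better tolerance of unclean shutdowns
db_engine = "sqlite"
```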

    • igor47 2 hours ago
      I've been using minio for local dev but that version is unmaintained now. However, I was put off by the minimum requirements for garage listed on the page -- does it really need a gig of RAM?
      • archon810 2 hours ago
        The current latest Minio release that is working for us for local development is now almost a year old and soon enough we will have to upgrade. Curious what others have replaced it with that is as easy to set up and has a management UI.
    • moffkalast 1 hour ago
      That's not something you can do reliably in software: datacenter-grade NVMe drives come with power-loss protection and additional capacitors to handle it gracefully. Otherwise, if power is cut at the wrong moment, the partition may not be mountable afterwards.

      If you really live somewhere with frequent outages, buy an industrial drive that has a PLP rating. Or get a UPS, they tend to be cheaper.

      • crote 1 hour ago
        Isn't that the entire point of write-ahead logs, journaling file systems, and fsync in general? A roll-back or roll-forward due to a power loss causing a partial write is completely expected, but surely consumer SSDs wouldn't just completely ignore fsync and blatantly lie that the data has been persisted?

        As I understood it, the capacitors on datacenter-grade drives are to give it more flexibility, as it allows the drive to issue a successful write response for cached data: the capacitor guarantees that even with a power loss the write will still finish, so for all intents and purposes it has been persisted, so an fsync can return without having to wait on the actual flash itself, which greatly increases performance. Have I just completely misunderstood this?
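
        For reference, the durability contract being discussed is the classic write-flush-fsync-rename pattern; a minimal sketch in Python (assuming the drive actually honors fsync, which is exactly the property in question):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data so it survives a crash, assuming the drive honors fsync."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()              # push Python/libc buffers into the kernel
        os.fsync(f.fileno())   # ask kernel and drive to persist the bytes
    os.replace(tmp, path)      # atomic rename: readers see old or new, never partial
    # fsync the containing directory so the rename itself is durable
    dfd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

        A drive that lies about fsync breaks the second step silently: the call returns, but the bytes are still in a volatile cache.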

        • unsnap_biceps 51 minutes ago
          You actually don't need capacitors for rotating media: Western Digital has a feature called "ArmorCache" that uses the rotational energy in the platters to power the drive long enough to flush the volatile cache to non-volatile storage.

          https://documents.westerndigital.com/content/dam/doc-library...

          • toomuchtodo 42 minutes ago
            Very cool, like the ram air turbine that deploys on aircraft in the event of a power loss.
        • Nextgrid 1 hour ago
          > ignore fsync and blatantly lie that the data has been persisted

          Unfortunately they do: https://news.ycombinator.com/item?id=38371307

          • btown 1 hour ago
            If the drives continue to have power, but the OS has crashed, will the drives persist the data once a certain amount of time has passed? Are datacenters set up to take advantage of this?
            • Nextgrid 55 minutes ago
              > will the drives persist the data once a certain amount of time has passed

              Yes, otherwise those drives wouldn't work at all and would have a 100% warranty return rate. The reason they get away with it is that the misbehavior is only a problem in a specific edge-case (forgetting data written shortly before a power loss).

            • unsnap_biceps 54 minutes ago
              Yes, the drives are unaware of the OS state.
  • thhck 2 hours ago
    BTW https://deuxfleurs.fr/ is one of the most beautiful websites I have ever seen
    • codethief 23 minutes ago
      It's beautiful from an artistic point of view but also rather hard to read and probably not very accessible (haven't checked it, though, since I'm on my phone).
  • SomaticPirate 3 hours ago
    Seeing a ton of adoption of this after the Minio debacle

    https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... was useful.

    RustFS also looks interesting but for entirely non-technical reasons we had to exclude it.

    Anyone have any advice for swapping this in for Minio?

    • dpedu 3 hours ago
      I have not tried either myself, but I wanted to mention that Versity S3 Gateway looks good too.

      https://github.com/versity/versitygw

      I am also curious how Ceph S3 gateway compares to all of these.

      • zipzad 14 minutes ago
        I'd be curious to know how versitygw compares to rclone serve S3.
    • scottydelta 1 hour ago
      From what I have seen in previous discussions here (since and before the Minio debacle) and at work, Garage is a solid replacement.
    • Implicated 3 hours ago
      > but for entirely non-technical reasons we had to exclude it

      Able/willing to expand on this at all? Just curious.

      • NitpickLawyer 2 hours ago
        Not the same person you asked, but my guess would be that it is seen as a chinese product.
        • lima 2 hours ago
          RustFS appears to be very early-stage with no real distributed systems architecture: https://github.com/rustfs/rustfs/pull/884

          I'm not sure if it even has any sort of cluster consensus algorithm? I can't imagine it not eating committed writes in a multi-node deployment.

          Garage and Ceph (well, radosgw) are the only open-source S3-compatible object stores that have undergone serious durability/correctness testing. Anything else will most likely eat your data.

        • dewey 2 hours ago
          What is this based on, honest question as from the landing page I don't get that impression. Are many committers China-based?
          • NitpickLawyer 2 hours ago
            https://rustfs.com.cn/

            > Beijing Address: Area C, North Territory, Zhongguancun Dongsheng Science Park, No. 66 Xixiaokou Road, Haidian District, Beijing

            > Beijing ICP Registration No. 2024061305-1

            • dewey 2 hours ago
              Oh, I misread the initial comment and thought they had to exclude Garage. Thanks!
    • klooney 41 minutes ago
      Seaweed looks good in those benchmarks; I haven't heard much about it for a while.
  • topspin 48 minutes ago
    No tags on objects.

    Garage looks really nice: I've evaluated it with test code and benchmarks and it looks like a winner. Also, very straightforward deployment (self contained executable) and good docs.

    But no tags on objects is a pretty big gap, and I had to shelve it. If Garage folk see this: please think on this. You obviously have the talent to make a killer application, but tags are table stakes in the "cloud" API world.

  • ai-christianson 3 hours ago
    I love garage. I think it has applications beyond the standard self host s3 alternative.

    It's a really cool system for hyper-converged architectures where storage requests can pull data from the local machine and only hit the network when needed.

  • JonChesterfield 47 minutes ago
    Corrupts data on power loss, according to their own docs, and power loss is exactly what you get outside of data centers. Not reliable, then.
    • lxpz 1 minute ago
      Losing a node is a regular occurrence, and a scenario for which Garage has been designed.

      The assumption Garage makes, which is well-documented, is that of 3 replica nodes, only 1 will be in a crash-like situation at any time. With 1 crashed node, the cluster is still fully functional. With 2 crashed nodes, the cluster is unavailable until at least one additional node is recovered, but no data is lost.

      In other words, Garage makes a very precise promise to its users, which is fully respected. Database corruption upon power loss falls within the definition of a "crash state", just like a node being offline due to a lost internet connection. We recommend taking metadata snapshots so that recovery of a crashed node is faster and simpler, but they are not strictly required: Garage can always start over from an empty database and recover the data from the remaining copies in the cluster.

      To talk about more concrete scenarios: if you have 3 replicas in 3 different physical locations, the assumption of at most one crashed node is pretty reasonable; it's quite unlikely that 2 of the 3 locations will be offline at the same time. Concerning data corruption on power loss, the probability of losing power at 3 distant sites at the exact same time, with the same data in the write buffers, is extremely low, so in practice it's not a problem.

      Of course, this all implies a Garage cluster running with 3-way replication, which everyone should do.
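
      The availability rules above can be sketched as a simple quorum check; a toy model (not Garage code), assuming the documented 2-of-3 read/write quorum for 3-way replication:

```python
REPLICAS = 3
QUORUM = 2  # majority of 3 replicas

def cluster_state(crashed: int) -> str:
    """Toy model of a 3-replica cluster under the at-most-one-crash assumption."""
    healthy = REPLICAS - crashed
    if healthy >= QUORUM:
        return "fully functional"           # reads and writes both reach quorum
    if healthy >= 1:
        return "unavailable, no data lost"  # surviving copy can reseed the others
    return "data loss possible"
```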

  • Powdering7082 3 hours ago
    No erasure coding seems like a pretty big loss in terms of how many resources you need to get good resiliency and efficiency
    • munro 1 hour ago
      I was looking at using this on an LTO tape library. It seems the only resiliency is through replication, which was my main concern with this project: what happens when hardware goes bad?
  • faizshah 2 hours ago
    One really useful usecase for Garage for me has been data engineering scripts. I can just use the S3 integration that every tool has to dump to garage and then I can more easily scale up to cloud later.
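
    Scaling from local Garage to cloud S3 usually comes down to swapping the endpoint; a minimal sketch for boto3-style clients (the endpoint URL and region label are illustrative; 3900 is Garage's default S3 API port per its docs):

```python
def s3_client_kwargs(target: str) -> dict:
    """Return connection kwargs for a boto3-style S3 client.

    'local' points at a self-hosted Garage endpoint (illustrative URL);
    anything else omits endpoint_url so the SDK uses the provider default.
    """
    common = {"region_name": "garage"}  # Garage's default region label
    if target == "local":
        return {**common, "endpoint_url": "http://127.0.0.1:3900"}
    return common

# Usage (requires boto3, not run here):
#   s3 = boto3.client("s3", **s3_client_kwargs("local"))
```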
  • Jhsto 44 minutes ago
    I have been using Garage for Nix storage in my home lab. Easy to set up! I hope to see more self-hosted projects operate over S3 APIs (or at least support it) so that making redundant setups would be easier. It might sound very niche, but I can't see myself replacing many cloud or centralized services unless I can do redundant object storage -- compute is trivial with a load balancer.
  • wyattjoh 2 hours ago
    Wasn't expecting to see it hosted on forgejo. Kind of a breath of fresh air to be honest.
  • apawloski 1 hour ago
    Is it the same consistency model as S3? I couldn't see anything about it in their docs.
  • agwa 2 hours ago
    Does this support conditional PUT (If-Match / If-None-Match)?
  • Eikon 2 hours ago
    Unfortunately, this doesn’t support conditional writes through if-match and if-none-match [0] and thus is not compatible with ZeroFS [1].

    [0] https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/1052

    [1] https://github.com/Barre/ZeroFS
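
    For context, `If-None-Match: *` makes a PUT succeed only when the key does not already exist, giving lock-free create semantics. A toy in-memory model of that behavior (not Garage or ZeroFS code):

```python
class ConditionalStore:
    """Toy model of S3 conditional PUT semantics (If-None-Match: *)."""

    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, body: bytes, if_none_match: bool = False) -> int:
        if if_none_match and key in self._objects:
            return 412  # Precondition Failed: the object already exists
        self._objects[key] = body
        return 200
```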

  • doctorpangloss 2 hours ago