9 comments

  • betaby 9 minutes ago
    Very theoretical question: if there were a usable microkernel, how hard would it be to have an FS as a service? Are macOS FSes processes, or are they 'monolithic'?
  • qalmakka 4 hours ago
    RIP BCacheFS. I was hopeful I could finally have a modern filesystem mainlined in Linux (I don't trust Btrfs anymore), but I guess I'll keep having to install ZFS for the foreseeable future.

    As I predicted, out-of-tree bcachefs is basically dead on arrival: everybody interested is already on ZFS, and btrfs is still around only because ZFS can't be mainlined.

    • StopDisinfo910 3 hours ago
      > btrfs is still around only because ZFS can't be mainlined

      ZFS is extremely annoying with the way it handles expansion, and the fact that you can't mix drive sizes. It's not a panacea. There clearly is space for an improved design.

      • cyphar 2 hours ago
        This is being worked on (they call it AnyRaid); the work is being sponsored by HexOS[1].

        [1]: https://hexos.com/blog/introducing-zfs-anyraid-sponsored-by-...

      • nubinetwork 3 hours ago
        Underprovision your disks, then you don't have to worry about those edge cases...
        • StopDisinfo910 3 hours ago
          If you need to consider how to buy your drives so you can use a filesystem, that's a flaw of said filesystem, not an edge case.

          It clearly is an acceptable one for a lot of people but it does leave space for alternative designs.

          • estimator7292 1 hour ago
            This is the way it's always been. RAID can't really handle mismatched drives either, and you must consider that when purchasing drives. It's not a flaw, it's a consequence of geometry.
    • sureglymop 4 hours ago
      I've never had any issues with either ZFS or Btrfs after 2020. I wonder what you all are doing to have such issues with them.
      • Volundr 57 minutes ago
        Pre-2020, but I had a BTRFS filesystem with over 40% free space start failing on all writes, including deletes, with a "no space left on device" error. It took the main storage array for our company offline for over a day while we struggled to figure out what was going on. The underlying cause: BTRFS marks blocks as data or metadata, and once marked, a block won't be reassigned without a rebalance. Supposedly this is better now, but it happened after BTRFS had been considered stable for a few years. After that and some smaller foot guns, I'll never willingly run BTRFS on a critical system.
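
        For anyone who hits this: checking whether it's chunk exhaustion rather than a real lack of space, and the usual workaround, look roughly like this (mount point illustrative):

            # show how space is split between data, metadata and unallocated chunks
            btrfs filesystem usage /mnt/storage

            # compact data chunks that are mostly empty so their space returns to unallocated
            btrfs balance start -dusage=20 /mnt/storage
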
      • pantalaimon 2 hours ago
        One lovely experience I had when trying to remove a failing disk from my array was that the `btrfs device remove` failed with an I/O error - because the device was failing.

        I then had to manually delete the file with the I/O error (for which I had to resolve the inode number it barfed into dmesg) and try again - until the next I/O error.

        (I'm still not sure if the disk was really failing. I did a full wipe afterwards and a full read to /dev/null and experienced no errors - might have just been the meta-data that was messed up)
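
        For reference, mapping the inode number it barfs into dmesg back to a file path can be done with something like this (inode number and mount point are illustrative):

            # resolve an inode number reported in dmesg to its path on the mounted filesystem
            btrfs inspect-internal inode-resolve 257 /mnt/array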

      • pizza234 3 hours ago
        Just a few days ago I had a checksum mismatch on a RAID-1 setup, on the metadata on both devices, which is very confusing.

        Over the last one or two years I've twice experienced a checksum mismatch on the file storing the memory of a VMware Workstation virtual machine.

        Both are very likely bugs in Btrfs; it's very unlikely that they were caused by the user (me).

        In the relatively far past (around 5 years ago), I had the system (with root on Btrfs) turn unbootable for no obvious reason a couple of times.

      • jamesnorden 4 hours ago
        Ah yes, the famous "holding it wrong".
        • ziml77 6 minutes ago
          If you complain about a knife crushing your food because you're holding it upside down, it's good for everyone else to know that context, because anyone using it with the sharp side down can safely ignore that problem rather than being scared away by an issue they won't experience.
        • happymellon 4 hours ago
          I've also not had issues with BTRFS.

          The question was about usage, because without knowing people's use cases and configurations there's no way to explain why it's unusable for you while working fine for others.

          • pizza234 3 hours ago
            If 1% of users report a given issue (say, data corruption), the fact that 99% of users report that they don't experience it doesn't mean the issue is not critical.
            • happymellon 23 minutes ago
              > If 1% of users report a given issue (say, data corruption)

              If 0.1% of users say it corrupted for them, don't provide any further details, and no one can replicate their scenario, then it does make it hard to resolve.

            • izacus 3 hours ago
              The fact that you see an issue reported loudly on social media doesn't mean it's critical or more common than for other FSes.

              As usual with these Linux debates, there's a loud group grinding old hatreds that can be a decade old.

        • metadat 3 hours ago
          I've experienced unrecoverable corruption with btrfs within the past 2 years.
        • motorest 2 hours ago
          > Ah yes, the famous "holding it wrong".

          Is it wrong to ask how to reproduce an issue?

    • Ygg2 4 hours ago
      Wait. You don't trust Btrfs, but you would trust BCacheFS, which is obviously very experimental?
      • phire 3 hours ago
        Btrfs claims to be stable. IMO, it's not.

        It's generally fine if you stay on the happy path. It will work for 99% of people. But if you fall off that happy path, bad things might happen and nobody is surprised. In my personal experience, nobody associated with the project seems to trust a btrfs filesystem that fell off the happy path, and they strongly recommend you delete it and start from scratch. I was horrified to discover that they don't trust fsck to actually fix a btrfs filesystem into a canonical state.

        BCacheFS had the massive advantage that it knew it was experimental and embraced it. It took measures to keep data integrity despite the chaos, generally seems to be a better design and has a more trustworthy fsck.

        It's not that I'd trust BCacheFS; it's still not quite there (even ignoring the project management issues). But my trust in Btrfs is just so much lower.

        • ahartmetz 2 hours ago
          btrfs seems to be a wonky, ill-considered design with ten years of hotfixes. bcachefs seems to be a solid design that is (or has been, it's mostly done) regularly improved where trouble was found. Now it's basically just fixing little coding oversights. In two years, I will trust bcachefs to be a much more reliable filesystem than btrfs.
      • rurban 4 hours ago
        Still more stable than btrfs. btrfs is also dead slow
        • Iridiumkoivu 3 hours ago
          I agree with this sentiment.

          Btrfs has destroyed itself on my testing/lab machines three times during the last two years, up to the point where recovery wasn't possible. Metadata corruption was the main issue (or that's how it looks to me, at least).

          As of now I trust BCacheFS way more, and I've given it roughly the same time to prove itself as I gave Btrfs. BCacheFS has issues, but so far I've managed to resolve them without major data loss.

          Please note that I currently use ext4 in all "really important" desktop/laptop installations and OpenZFS on my server, performance being the main concern for the desktop and reliability for the server.

    • kiney 4 hours ago
      btrfs has many technical advantages over zfs
      • debazel 4 hours ago
        Yes, like destroying itself and losing all data.
        • natebc 4 hours ago
          ZFS is perfectly capable of this too.

          source: worked as a support engineer for a block storage company, witnessed hundreds of customers blowing one or both of their feet off with ZFS.

          • hebocon 3 hours ago
            To what extent are these customers blaming the hammer for hitting their thumb?

            (Legitimate question: I manage several PB with ZFS and would like to know where I should be more cautious.)

            • natebc 2 hours ago
              A great deal. Which is why my cringe reflex still activates when I read about people running ZFS in places that aren't super tightly configured. ZFS is just such a massively complex piece of software.

              There were legitimate bugs in ZFS that we hit. Mostly around ZIL/SLOG and L2ARC and the umpteen million knobs that one can tweak.

              • TheNewsIsHere 2 hours ago
                Customers blowing off their feet with ZFS because they felt the need to tweak tunables they didn’t need to use, or didn’t properly understand, is not the fault of ZFS though.

                You can do the same with just about any file system. In the Windows world you can blow your feet off with NTFS configuration too.

                Of course there have been bugs, but every filesystem has had data-impacting bugs. Redundancy and backups are a critical caveat for all file systems for a reason. I once heard it said that “you can always afford to lose the data you don’t have backed up”. I do not think that broadly applies (such as with individuals), but it certainly applies in most business contexts.

                • natebc 1 hour ago
                  Yeah, my reaction is mostly to how quickly and how frequently it's recommended for general use.

                  Obviously there's footguns in everything. Filesystem ones are just especially impactful.

              • motorest 2 hours ago
                > A great deal. Which is why my cringe reflex (...)

                Can you provide some specifics? So far all I see is vague complaints with no substance, and when the complainers are lightly pressed they get defensive.

                • natebc 1 hour ago
                  I don't have specifics for how many people running a fork of ZFS on Linux (or the forks for OpenSolaris, Nexenta, etc.) have copy-pasted some configuration from a wiki/forum/StackExchange and ended up with a pool that's misconfigured in some subtly fatal way. I don't have any personal anecdotes to share about my own homelab or enterprise IT experience with ZFS because I don't use it at home and nowhere I've worked in IT has used it.

                  I did live through specific situations, over several years in a support engineer role, where a double-digit percentage of customers in enterprise configurations ended up somewhere between terrible performance and catastrophic data loss due to misunderstood configuration of a very complex piece of software.

                  If you wanna use ZFS, use ZFS. I'm not the internet's crusader against it. I have no doubt there are thousands of PB out there of perfectly happy, well-configured and healthy zpools. It has some truly next-gen features that are extremely useful. I've just seen it recommended so, so many times as a panacea when something simpler would be just as safe and long-lasting.

                  It's kinda like using Kubernetes to run a few containers. Right?

            • nubinetwork 3 hours ago
              Pool feature mismatch on send/receive, dedup send/receive, new features breaking randomly on bleeding-edge releases.
              • TheNewsIsHere 2 hours ago
                The intent of feature flags in ZFS is to denote changes in on-disk structures. Replication isn't supported between pools that don't support the same flags, because otherwise ZFS couldn't read the data from disk properly on the receiving side.

                There are workarounds, with their respective caveats and warnings.
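
                For example, a rough way to compare what each side has enabled before replicating (pool names are illustrative):

                    # list feature flags and their state (disabled/enabled/active) on each pool
                    zpool get all sourcepool | grep feature@
                    zpool get all targetpool | grep feature@

                    # with no arguments, lists pools that don't have all supported features enabled
                    zpool upgrade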

          • throw0101a 3 hours ago
            > source: worked as a support engineer for a block storage company, witnessed hundreds of customers blowing one or both of their feet off with ZFS.

            The phrasing of this leads me to believe that the customers set up ZFS in a 'strange' (?) way. Or was it a bug (or bugs) within ZFS itself?

            Because when people talk about Btrfs issues, they are talking about the code itself and bugs that cause volumes to go AWOL and such.

            (All file systems have foot-guns.)

            • natebc 2 hours ago
              Mostly customers thinking they fully understand the thousands of parameters in ZFS.

              There was a _very_ nasty bug in the ZFS L2ARC that took out a few PB at a couple of large installations. This was back in 2012/2013, when multiple PBs were very expensive. It was a case of ZFS putting data from the ARC into the pool after the ZIL/SLOG had been flushed.

      • crest 3 hours ago
        Can you give an example? Because to me it has always looked like an NIH copy-cat FS.
  • the_duke 4 hours ago
    This is a tragedy, bcachefs has so many great features...
  • motorest 4 hours ago
    Ultimately that's the right call, and the inevitable one as well.
  • lupusreal 4 hours ago
    The way the BCacheFS situation has been playing out is a tragedy. I had very high hopes for it.
    • johnisgood 4 hours ago
      Same. I liked many of its features (actually, all features, see https://bcachefs.org) and I was waiting for it to become usable, but I guess that day will never come now?

      So, the alternative is ZFS only, maybe HAMMER2. HAMMER2 does not look too bad either, except you need DragonflyBSD for that.

      • ahartmetz 2 hours ago
        What I expect to happen is that bcachefs stabilizes outside of mainline, and after that, it can be merged back because no large patches = not much drama potential.
      • ThatPlayer 3 hours ago
        It's not unusable; I use it on a spare computer for fun because I want tiering of SSD + HDDs. And this doesn't mean development has stopped, just that it's no longer done in the kernel.
    • InsideOutSanta 4 hours ago
      Yeah, this all seems so unnecessary. I hope Kent can either figure out how to work in the context of a larger team or find somebody who can do it on his behalf.
      • johnisgood 4 hours ago
        > Once the BCacheFS maintainer behaves [...]

        So, there are still behavioral issues here I take it? That is a bummer. This is not news to me, but I thought the situation had changed since then.

        • motorest 1 hour ago
          > So, there are still behavioral issues here I take it?

          From the past discussion, it's mainly grave behavioral issues, but they also end up being technical: things like trying to push new untested features into RCs and breaking builds, and resorting to flame wars to push these problematic patches forward instead of actually working them out with the maintainers.

          But yeah, the final straw was a very abusive email sent to a maintainer in the mailing list.

  • M95D 2 hours ago
    I'm still waiting for an overlayfs that does read caching on the overlay without the need to format the backing storage.
  • jtickle 3 hours ago
    All of the "btrfs eats your data" bugs have been fixed, and the people who constantly repeat them are people who relied on an experimental filesystem for files they cared not to lose. FUD all around. I have a btrfs on my home file server that's been running just fine for almost 10 years now and has survived the mechanical death of its initial underlying hard drives. Since then I have used it in plenty of production environments.

    Don't do RAID 5. Just don't. That's not just a btrfs shortcoming. I lost a hardware RAID 5 due to a "puncture", which would have been fascinating to learn about if it hadn't happened to a production database. It's an academically interesting concept, but it is too dangerous, especially with how large drives are now; if you're buying three, buy four instead. RAID 10 is much safer, especially for software RAID.

    Stop parroting lies about btrfs. Since it became marked stable, it has been a reliable, trustworthy, performant filesystem.

    But as much as I trust it I also have backups because if you love your data, it's your own fault if you don't back it up and regularly verify the backups.

    • plqbfbv 2 hours ago
      > All of the "btrfs eats your data" bugs have been fixed ... I have a btrfs on my home file server that's been running just fine for almost 10 years now and has survived the mechanical death of its initial underlying hard drives

      In the last 10 years, btrfs:

      1. Blew up three times on two unrelated systems due to internal bugs (one a desktop, one a server). Very few people were/are aware of the remount-only-once-in-degraded "FEATURE" where, if a filesystem crashed, you could mount with -odegraded exactly once, and then the superblock would completely prevent mounting (error: invalid superblock). I'm not sure whether that's still the case or whether it got fixed (I hope so). By the way, these were RAID1 arrays with 2 identical disks with metadata=dup and data=dup, so the filesystem was definitely mountable and usable. It basically killed the use case of RAID1 for availability reasons. ZFS has allowed me to perform live data migrations while missing one or two disks across many reboots.

      2. Developers merged patches to mainline, later released to stable, that completely broke discard=async (or something similar), which was a supported mount option according to the manpages. My desktop SSD basically ate itself; I had to restore from backups. IIRC the bug/mailing-list discussions I found later were along the lines of "nobody should be using it", so no impact.

      3. Had (maybe still has - haven't checked) a bug where if you fill the whole disk and then remove data, you can't rebalance, because the filesystem sees it has no more space available (all chunks are allocated). The trick I figured out was to shrink the filesystem to force data relocation, then re-expand it, then balance (roughly the commands sketched after this list). It was ~5 years ago and I even wrote a blog post about it.

      4. Quota tracking when using docker subvolumes is basically unusable due to the btrfs-cleaner "background" task (imagine VSCode + DevContainers taking 3 minutes on a modern SSD to clean up 1 big docker container). This is on 6.16.

      5. Hit a random bug just 3 days ago on 6.16, where I was doing periodic rebalancing and removing a docker subvolume. 200+ lines of logs in dmesg, filesystem "corrupted" and remounted read-only. I was already sweating, not wanting to spend hours restoring from backups, but unexpectedly the filesystem mounted correctly after reboot. (first pleasant experience in years)
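
      For item 3, the workaround was roughly the following (mount point and shrink amount are illustrative; the details are in the blog post):

          # shrink the filesystem a little to force btrfs to relocate chunks
          btrfs filesystem resize -10G /mnt/data

          # grow it back to fill the whole device again
          btrfs filesystem resize max /mnt/data

          # now a balance can actually reclaim the mostly-empty chunks
          btrfs balance start -dusage=10 /mnt/data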

      ZFS in 10y+ has basically only failed me when I had bad non-ECC RAM, period. Unfortunately I want the latest features for graphics etc on my desktop and ZFS being out of tree is a no-go. I also like to keep the same filesystem on desktop and server, so I can troubleshoot locally if required. So now I'm still on btrfs, but I was really banking on bcachefs.

      Oh well, at least I won't have to wait >4 weeks for a version that I can compile with the latest stable kernel.

      The only stable implementation is Synology's; the rest, even mainline stable, has failed on me at least once in the last 10 years.

    • arccy 3 hours ago
      "performant", it's still slow if you actually use any of the advanced features like copy on write.
      • FirmwareBurner 3 hours ago
        Every CoW filesystem is just as slow. There's no magic pill to fix the performance; it's a known tradeoff.
    • betaby 3 hours ago
      > FUD all around

      ????

      > Don't do RAID 5.

      Ah, OK, so not FUD

      > Stop parroting lies about btrfs.

      I seee

  • bgwalter 4 hours ago
    [deleted wrongthink]
    • graemep 4 hours ago
      There is an apology for that comment and a rewording further down the thread. It was evidently made by someone who is not a native speaker and did not realise how it comes across.
      • teekert 4 hours ago
        Good addition, thanx.

        I've been in a similar situation, letting everyone know I was fired. Apparently in the US this has a negative connotation, and they use "being let go" (or something as confusing as "handing in/being handed your 2 weeks notice", a concept completely unknown here). Here we only have one word for "your company terminating your employment", and there is no negative connotation associated with it. This can be difficult for non-natives. We can come across as very weird or less intelligent.

        • T3OU-736 3 hours ago
          In the US, the terminology tends to split into "fired" (implies "for valid reasons") vs "laid off" (implies "the position was terminated; this was not about the employee or their qualities and performance").
          • graemep 2 hours ago
            In the UK, "fired" would mean the same, "laid off" would mean the same, and "made redundant" also means the same but more clearly, with emphasis on the position no longer existing. "Sacked" means about the same as "fired".
      • dbdr 3 hours ago
        Funnily enough the apology ends with:

        > If the above offended anyone, I sincerely apology them.

        Unless this was tongue-in-cheek, this kind of proves the point that language was the cause. The apology is a good move in any case.

      • t51923712 3 hours ago
        Why would the "behave" comment mean anything different in Czech than in English?

        The revised version, "Once the bcachefs maintainer conforms to the agreed process and the code is maintained upstream again" is still lecturing and piling on, as the LWN comments say:

        https://lwn.net/Articles/1037496/

        It is the classic case of CoC people and their inner circle denouncing someone, after which the entire Internet keeps piling on the target.

    • hebocon 4 hours ago
      "behave" in this context can refer to simply respecting existing norms about RC code freezing.
  • rurban 4 hours ago
    > Once the BCacheFS maintainer behaves and the code is maintained upstream again, we will re-enable... (As IMO, it is a useful feature.)

    How cynical. It's the kernel maintainer, not the bcachefs maintainer, who does not behave and has a huge history of unprofessional behavior for decades.

    • nicman23 3 hours ago
      How cynical. It's the bcachefs maintainer, not the kernel maintainer, who does not behave and has a huge history of unprofessional behavior for decades.

      it is not like he was not explicitly warned.

    • happymellon 3 hours ago
      The bcachefs maintainer has added new features during bugfix windows, and lied about it.
      • pantalaimon 3 hours ago
        It's still an experimental module, and the feature was about gathering more debug information.
        • StopDisinfo910 3 hours ago
          So?

          Bug fix windows are for bug fixes. If it's not a bug fix, it goes in the next version. That's how the kernel release cycle works. It's not very complicated.

          If it’s so unstable that it urgently needs new features shipped regularly, I think it’s entirely legitimate that it has to live out of tree until it’s actually stable enough.

    • boricj 3 hours ago
      The original author later sent an apology email explaining that it sounded too harsh in English and it wasn't meant to be offensive:

      https://lwn.net/ml/all/bece61a0-b818-4d59-b340-860e94080f0d@...

    • fj23Z741GAh 2 hours ago
      There was a time when Linus encouraged critics of "unprofessional behavior" to snap back at him:

      https://lkml.org/lkml/2013/7/15/374

      That is a reasonable compromise. Except when someone actually snaps back at him.

    • self_awareness 3 hours ago
      The reason is that people like Linus, because he's entertaining. And people don't like Kent, because he opposed Linus, who is liked. That's all there is to it. Like in some high school.