Pretty slow, but I wonder if you could optimise a floppy RAID setup by pairing it with an NVMe cache. Please someone try this, I find the concept absurdly funny.
Yeah, my comment was mostly tongue-in-cheek. USB floppies don't need a floppy driver anyway, and a NEC 765 controller probably supports only a few drives. It probably can't even access multiple floppies simultaneously.
Kernel devs/maintainers should wear hats or shirts that say as much so that I can hug each one if I ever see one in public. The work they do is amazing. I am so glad I get to use Linux every day and benefit from their work, which is being released so frequently.
I keep seeing Btrfs improvements in each new kernel release, but never anything about fixing RAID 5/6. I was able to pick up multiple 20TB hard drives over Black Friday, but now I've got to figure out how to configure redundancy with Btrfs optimally, try out FreeNAS, or go the easy route with Synology. I'm leaning towards FreeNAS.
Or just use a filesystem with working RAID 5/6 code, like ZFS. How Btrfs can keep shipping broken code and get a pass is beyond me. Here's hoping Bcachefs eventually gets merged into the kernel.
I live with a 6-disk NAS (+ some VMs and services like Grafana), part RAID6 (6 drives, bulk storage), part RAID10 (5+1, VMs and other stuff needing a bit more bandwidth), and it has been entirely fine.
But FreeNAS is a perfect alternative if you don't want to fuck with penguin stuff and just want a blob of storage.
You could just use ZFS. I like butter, and maybe keeping your metadata on a different RAID level helps, but it torched my array a couple of times, so I switched and have had no issues.
This is a you problem, that no one else reproduces, and which you have reported only here. It's degrading to the conversation to have this kind of casual slander.
Btrfs RAID is stupid reliable, if you don't f it up & don't put metadata in RAID 5/6.
Having checksum consistency & checkpoints is a superpower most of the alternatives don't have. Btrfs is much more flexible than ZFS about adding/removing drives, even of various sizes, to a RAID pool, & is imo ridiculously easier to operate: no out-of-tree modules needed, and a lot less specialist knowledge required in general about the different ZFS caches for tuning/setup. There's no alternative that comes close, & it runs great across millions of systems, with operators such as Facebook/Meta.
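To make the mixed-sizes point concrete, here's a toy Python model (my own simplification for illustration, not the actual btrfs allocator) of a raid1 profile placing two copies of every chunk on whichever drives currently have the most free space:

    # Toy model: allocate 1GB chunks, mirroring each across the two drives
    # with the most free space, until no two drives have room left.
    def usable_raid1_gb(drive_sizes_gb, chunk_gb=1):
        free = list(drive_sizes_gb)
        usable = 0
        while True:
            a, b = sorted(range(len(free)), key=lambda i: free[i], reverse=True)[:2]
            if free[a] < chunk_gb or free[b] < chunk_gb:
                break
            free[a] -= chunk_gb
            free[b] -= chunk_gb
            usable += chunk_gb
        return usable

    # A mixed 8TB + 4TB + 4TB pool still yields ~8TB of mirrored, usable space.
    print(usable_raid1_gb([8000, 4000, 4000]))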
Everyone in production reproduces this. Intentionally or not, everything on a long enough timeline will suddenly lose power.
I'm sorry to offend what is otherwise a decent filesystem, but I, and the systems I'm responsible for, will not use BTRFS with RAID and this characteristic.
There's a lot of qualification there, read it again.
I would much rather let the journal replay and do its thing than rebuild the array. That's something every alternative offers, consistently.
BTRFS RAID has routinely burned me every time I have given it the opportunity. It hasn't survived nearly as much as the other test arrays.
I'm fine discussing filesystems, benefits, and all of that - but I can't entertain it when you try to make this so heated.
These are anecdotes from reliability tests I've been doing... I'm sharing them so that they can be considered.
You accuse me, while opening like this? At least my anecdotes weren't insulting.
edit: To add, this isn't something I've kept secret. I don't know why you say that. Have you read every communication I've written?
I can show you screenshots that span the last three months discussing this exact situation
I haven't reported it upstream because I have better things to do; ship.
You might see this, but hopes aren't high. This is about the response I expected. It's an extreme edge case, but it consistently cuts.
The filesystem we use is incredibly uninteresting. By design.
I find it utterly hilarious you think I'm the only person who has had issues with BTRFS RAID
Yeah, I think I'm leaning in that direction but I may look into Btrfs with metadata on a different level. I've had good luck with Btrfs as my desktop filesystem.
For people specifically looking for where the read/write hole on raid56 is addressed: the problem is stated at minute 9, and the solution, a new "raid-stripe-tree", is presented from minute 14 on.
You could just keep to RAID 1. Btrfs RAID works at the filesystem level, so RAID 1 with more than 2 drives just means 2 copies spread across those drives. You benefit from increased storage space, like RAID 5 (albeit slightly less), but of course, unlike normal RAID 1, you still have 1-drive fault tolerance across the entire array.
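For a rough sense of the capacity numbers, here's my own back-of-the-envelope Python, assuming equal-size 20TB drives:

    # Usable capacity with 1-drive fault tolerance, equal-size drives.
    def btrfs_raid1_usable(n_drives, size_tb):
        return n_drives * size_tb / 2    # two copies of everything

    def raid5_usable(n_drives, size_tb):
        return (n_drives - 1) * size_tb  # one drive's worth of parity

    for n in (3, 4, 6):
        print(f"{n} x 20TB: btrfs raid1 {btrfs_raid1_usable(n, 20):.0f} TB, "
              f"RAID 5 {raid5_usable(n, 20):.0f} TB usable, both tolerate 1 failure")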
As someone who is looking to purchase their first NAS, any advice on keeping energy costs down (e.g. do particular NAS hardware or hard drives have running-cost advantages in this area)? My main use case (at least initially) will be storing and backing up photos for infrequent access.
I would assume most 7200 RPM drives have similar energy requirements, although it does appear some drives get hotter than others, so efficient cooling might increase those requirements a bit. The cost of the storage itself can be a bit high, though. I was able to find 20TB CMR drives for around $329 each.
2x storage is still expensive for home NAS use. RAID 5/6 has other benefits too over RAID 10. Maybe RAID 5/6 isn't the right choice, but it seemed like the best choice when I was comparing options.
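Quick cost math using the ~$329 / 20TB drives mentioned above (the 6-drive pool size is just my assumption):

    # Dollars per usable TB for a hypothetical 6 x 20TB pool at $329/drive.
    DRIVE_TB, DRIVE_COST, N = 20, 329, 6
    layouts = {
        "RAID 10": (N // 2) * DRIVE_TB,  # half the raw space, mirrored
        "RAID 6":  (N - 2) * DRIVE_TB,   # two drives of parity
        "RAID 5":  (N - 1) * DRIVE_TB,   # one drive of parity
    }
    for name, usable_tb in layouts.items():
        print(f"{name}: {usable_tb} TB usable, "
              f"${N * DRIVE_COST / usable_tb:.2f} per usable TB")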
The time it takes to initialize RAID 5 or 6, or to swap out a drive, was manageable when the drives were smaller. Now, the benefit of fewer drives is swamped by the extremely long time needed before you can do anything with them.
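For a sense of scale: a rebuild has to stream the whole drive, so it's bounded below by sequential throughput. Rough numbers (the 200 MB/s sustained rate is my assumption; real rebuilds under load are slower):

    # Lower bound on rebuild time for one 20TB drive.
    DRIVE_TB = 20
    THROUGHPUT_MB_S = 200
    seconds = DRIVE_TB * 1_000_000 / THROUGHPUT_MB_S  # 1 TB ~ 1,000,000 MB
    print(f"~{seconds / 3600:.0f} hours just to stream one drive")
    # ~28 hours, during which a RAID 5 array has no remaining redundancy.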
That sounds backward; there's already a monoculture: only GCC. Official support for Clang would break that monoculture.
edit: You probably meant more broadly in the FOSS ecosystem. Personally, I'm not worried about that, as GCC seems to be doing OK competing with Clang on technical grounds. For instance, last I heard GCC was ahead of Clang in support for new C++ features.
Carbon will use LLVM for its backend, right? I presume Google are still invested in LLVM itself, for that reason.
I was going to mention Apple, but they're famously prone to adopting non-mainstream languages like Swift. I don't think they have much interest in modern C++ features.
Academic research for security purposes is almost exclusively done with LLVM. Why? It's a compiler framework that is (relatively) easy to work with compared to GCC.
So there is already a monoculture in some circles.
However there's plenty of custom gcc toolchains for embedded systems. It isn't going anywhere for a while yet.
The illusion of a monoculture disappears if you zoom out a bit.
Even if clang was a "first-class citizen" compiler for the kernel and was picked up by some big distros, there's an absolutely gigantic market of companies designing and building embedded systems around GCC. There already is a LLVM/Clang monoculture in some areas (e.g. research, "modern" C++ shops), but while I don't have the numbers or anything, I'd imagine that those fields make up a very small percentage of worldwide C/C++ developers. A lot of those boring embedded systems companies have low margins and don't think particularly highly of software developers, so they'd never consider refactoring all of their tooling and projects to use clang unless they absolutely had to.
If the kernel _dropped_ support for GCC, you might see an actual monoculture develop. But I don't see the kernel _supporting_ GCC, or particular distros choosing to build their kernel with it, making any real dent in the huge market share (for lack of a better term) that GCC has built up. It might seem that way if you read HN and talk to people working in SV, but GCC is still pretty ubiquitous.
All of Apple's toolchain is LLVM, which also means all 3rd party apps running on Apple devices are (almost?) all LLVM. Google Chrome is now built with LLVM, Android is getting there with Clang, and who knows what else inside of Google is built with Clang. These aren't tiny percentages.
I guess it depends on how you look at the percentages.
Lots of people _use_ Apple’s developer tools, Google Chrome and other Google products. But in each of those cases, I believe the actual group of people directly dealing with C++ and Clang are small relative to the global C/C++ developer population. Android has a strong argument since Google can force the hands of OEMs. I don’t see how the size of Chrome’s user base will bring about an LLVM monoculture if 99.999% of C++ developers on earth never build Chrome themselves.
The majority of electronic devices on earth have (non-Android) firmware or software written in C/C++, written by people all over the world who don’t work for Google, Apple or anyone you’d even consider a tech company. You will never hear about these projects on HN because they’re closed-source, proprietary and (frankly) boring. But almost all of those projects use whatever toolchain their vendor provides, and more often than not (in my experience), thats GCC.
Like I said, I don’t have any numbers to back this up. I’m not claiming that GCC reigns supreme and that LLVM is irrelevant outside of academia. I’m just saying that if you’re really concerned about LLVM _support_ in the kernel leading to an LLVM monoculture, you might not understand how widely used GCC is in “boring” tech.
[edit] Adding to this, I guess it also depends on how you view a “monoculture” w.r.t compilers. I could imagine a not-too-distant future where LLVM gets the vast majority of research effort, and you see cool new optimizations brought into it while GCC stagnates. We’re arguably already there. But I don’t see that driving thousands of low-margin hardware OEMs to switch over from GCC to LLVM. Maybe that will happen one day, but I think we’re pretty far away from that.
I noticed Mangohud wasn't showing CPU watts. I did some searching and found https://github.com/Ta180m/zenpower3. Blacklisted k10temp and now it's working.
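In case it helps anyone debugging the same thing, here's a small sketch that just walks the standard hwmon sysfs interface to see which driver (k10temp, zenpower3, ...) is actually exposing a power sensor; whether your chip reports one at all will vary:

    # List every hwmon power sensor; power*_input is reported in microwatts
    # per the kernel hwmon sysfs ABI. Read-only, best-effort.
    from pathlib import Path

    for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
        name = (hwmon / "name").read_text().strip()
        for p in sorted(hwmon.glob("power*_input")):
            watts = int(p.read_text()) / 1_000_000
            print(f"{name} {p.name}: {watts:.2f} W")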
Great. For Ubuntu, it has technically already landed on https://kernel.ubuntu.com/~kernel-ppa/mainline/ However, I say "technically" because the latest kernel build failed, as do many other versions there. I wonder why ...
While I have no idea if this is another case, Ubuntu seems to have a tendency to carry Ubuntu-specific patches that may cause unforeseen consequences.
I've had an Ubuntu patch cause squid to crash when the config had an empty line at the end of the file or something like that.
And Ubuntu's "openbsd-netcat" requires an EOF marker at the end of input, otherwise it won't terminate the connection. The original "openbsd-netcat" does that. Ubuntu even patched in a parameter to make their "openbsd-netcat" behave like the original.
Maybe they should have called it "ubuntu-netcat" instead.
These antics, combined with the mess that is the "snaps" system, made me swear off Ubuntu and leave for RockyLinux.
This is their mainline channel, delivered as a Personal Package Archive (PPA), a repo that's fairly simple to add/enable on Ubuntu; there are no patches on top of those kernels.
If your issue with Ubuntu is Ubuntu-specific patches, I don't think Debian is a solution. They patch a lot.
I personally think it's too much, mostly because I don't value most of the reasons Debian patches (I don't care about exotic architectures; I value conformance to upstream more than the ability to have modular and small packages; I don't care about having some non-free parts in my packages).
Very excited! I have used the MGLRU patch on a 13-year-old iMac with 8GB of RAM for the last 6 months, with Chrome holding lots of tabs and heavy use of GIMP/ImageJ/LibreOffice. MGLRU is such a great improvement; without it the machine frequently ran out of memory.