The 6.1 kernel is out (lwn.net)
233 points by mfiguiere on Dec 12, 2022 | 55 comments


Exciting, but I can’t wait for the further updated floppy driver in the upcoming 6.2 release! https://www.phoronix.com/news/Linux-6.2-Floppy


Finally we can enjoy floppy RAIDs on Linux: https://www.youtube.com/watch?v=1hc52_PWeU8


Pretty slow, but I wonder if you could optimise a floppy RAID setup by pairing it with an NVMe cache. Please someone try this, I find the concept absurdly funny.
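
If anyone actually tries it: a minimal sketch using mdadm plus bcache, assuming four drives show up as /dev/fd0..fd3 (device names are hypothetical, and the floppies may well choke long before the cache helps):

    # Stripe four floppy drives into a RAID 0 array
    mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/fd0 /dev/fd1 /dev/fd2 /dev/fd3

    # Register an NVMe partition as the cache and the array as the backing device
    make-bcache -C /dev/nvme0n1p1 -B /dev/md0

    # The cached device appears as /dev/bcache0
    mkfs.ext4 /dev/bcache0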


People laugh at this until they learn that such things actually exist in hardware: https://news.ycombinator.com/item?id=22428321


I think you can already do that - this just fixed a memory leak on init failure.


Yeah, my comment was mostly tongue-in-cheek. USB floppies don't need a floppy driver anyway, and a NEC 765 controller probably supports only a few drives. It probably can't even access multiple floppies simultaneously.



Really glad this got out & will hopefully be the end-of-year LTS release.

Some other favs:

power management fixes for AMD

big BTRFS perf boosts

ton of embedded controller/fan speed updates

faster retbleed mitigation

usb4 host-to-host improvements

lots of gpu + hid improvements


Kernel devs/maintainers should wear hats or shirts that say as much, so that I can hug each one if I ever see one in public. The work they do is amazing. I am so glad I get to use Linux every day and benefit from their work, released so frequently.


I keep seeing Btrfs improvements in each new kernel release, but never anything about fixing RAID 5/6. I was able to pick up multiple 20TB hard drives over Black Friday, but now I've got to figure out how to configure redundancy optimally with Btrfs, try out FreeNAS, or go the easy route with Synology. I'm leaning towards FreeNAS.


"The 'write hole' bug, only affects metadata not data. Thus if you don't make metadata RAID 5/6, you avoid that problem.

Fortunately BTRFS allows you to have different RAID levels for data and metadata so you could run RAID 1 for metadata and RAID 5/6 for data."

https://www.reddit.com/r/linux/comments/yrlljv/comment/ivvop...
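
In practice that split is just a couple of flags; a sketch with placeholder device names (with RAID 6 data, raid1c3 metadata is the commonly suggested pairing):

    # New filesystem: RAID 5 for data, RAID 1 for metadata
    mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Or convert an existing filesystem in place
    btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/pool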


Thanks... I'm going to look more into this and see if it is an option.


Or just use a filesystem with working RAID 5/6 code, like ZFS. How Btrfs can keep shipping broken code and get a pass is beyond me. Here's hoping bcachefs eventually gets merged into the kernel.
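
For comparison, a double-parity pool in ZFS is a one-liner (pool name and device names are placeholders):

    # raidz2 survives any two simultaneous drive failures, like RAID 6
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg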


I live with a 6-disk NAS (+ some VMs and services like Grafana), part RAID6 (6 drives, bulk storage) and part RAID10 (5+1, VMs and other stuff needing a bit more bandwidth), and it has been entirely fine.

But FreeNAS is a perfect alternative if you don't want to fuck with penguin stuff and just want a blob of storage.


You could just use ZFS. I like butter, and maybe keeping your metadata on a different RAID level helps, but it torched my array a couple of times, so I switched and have had no issues.


Here to also suggest alternatives

BTRFS is the only 'RAID' that I've encountered that will almost predictably fail if forcefully powered off

RAID10 on gen4 NVMe drives should sync pretty quick, you'd think.

LVM RAID, ZFS, mdadm: all of them are considerably more reliable.

I can hear lamenting already: "that's not normal! you shouldn't expect consistency here!"

I tested this because reality says we do unusual things all of the time. Don't hate me - hate the results.

Other implementations are demonstrably more robust
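
For reference, the mdadm equivalent of the RAID10 setup described above is a single command (device names are hypothetical); the initial sync can be watched in /proc/mdstat:

    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1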


This is a you problem that no one else reproduces, and which you have reported only here. It's degrading to the conversation to have this kind of casual slander.

Btrfs RAID is stupid reliable, if you don't f it up & don't put metadata in raid5/6.

Having checksum consistency & checkpoints is a superpower most of the alternatives don't have. Btrfs is much more flexible than ZFS about adding/removing drives, even of various sizes, to a RAID pool, & is IMO ridiculously easier to operate, from not needing out-of-tree modules to just requiring a lot less specialist knowledge of the various ZFS caches for tuning/setup. There's no alternative that comes close, & it runs great across millions of systems, with operators such as Facebook/Meta.
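
That flexibility really is just a couple of commands; a sketch with placeholder mount point and device names:

    # Grow the pool with a new (even differently sized) drive, then rebalance
    btrfs device add /dev/sde /mnt/pool
    btrfs balance start /mnt/pool

    # Shrink it again; data is migrated off the drive automatically
    btrfs device remove /dev/sdb /mnt/pool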


Everyone in production reproduces this. Intentionally or not, everything on a long enough timeline will suddenly lose power.

I'm sorry to offend what is otherwise a decent filesystem, but I, and the systems I'm responsible for, will not use BTRFS with RAID and this characteristic.

There's a lot of qualification there; read it again.

I would much rather let the journal replay and do its thing, than rebuild the array. Something every alternative offers, consistently.

BTRFS RAID has routinely burned me every time I have given it the opportunity. It hasn't survived nearly as much as the other test arrays.

I'm fine discussing filesystems, benefits, and all of that - but I can't entertain it when you try to make this so heated.

These are anecdotes from reliability tests I've been doing... I'm sharing them so that they can be considered.

You accuse me, while opening like this? At least my anecdotes weren't insulting.

edit: To add, this isn't something I've kept secret. I don't know why you say that. Have you read every communication I've written?

I can show you screenshots from the last three months discussing this exact situation.

I haven't reported it upstream because I have better things to do: ship.

You might see this, but hopes aren't high. This is about the response I expected. It's an extreme edge case, but it consistently cuts.

The filesystem we use is incredibly uninteresting. By design.

I find it utterly hilarious you think I'm the only person who has had issues with BTRFS RAID


Yeah, I think I'm leaning in that direction but I may look into Btrfs with metadata on a different level. I've had good luck with Btrfs as my desktop filesystem.


There's ongoing work on fixing this by Johannes Thumshirn, see https://old.reddit.com/r/btrfs/comments/wcxfqs/btrfs_declust...


For people specifically looking for where the read/write hole on raid56 is addressed: the problem is stated at minute 9, and the solution, a new "raid-stripe-tree", is presented from minute 14 on.


You could just stick to RAID 1. Btrfs RAID operates at the filesystem level, so RAID 1 with more than 2 drives just means 2 copies spread across those drives. You benefit from increased storage space like RAID 5 (albeit slightly less), but of course, unlike normal RAID 1, you still only have 1-drive fault tolerance across the entire array.
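
A sketch of that layout with placeholder device names (since kernel 5.5 there are also raid1c3/raid1c4 profiles if you want more than two copies):

    # Three drives, two copies of every block: ~1.5 drives of usable space,
    # and any single drive can fail
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd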


As someone who is looking to purchase their first NAS, any advice on keeping the energy costs down (e.g. do particular NAS hardware or hard drives have running-cost advantages in this area)? My main use case (at least initially) will be storing and backing up photos for infrequent access.


I would assume most 7200 RPM drives have similar energy requirements, although it does appear some drives run hotter than others, so efficient cooling might increase those requirements a bit. The storage itself, though, can be a bit expensive. I was able to find 20TB CMR drives for around $329 each.


RAID 5 and 6 are obsolete anyway, a relic of times when 2x storage cost too much.


That just seems like an absurd statement to me.

It ignores so many use cases and financial realities.


2x storage is still expensive for home NAS use. RAID 5/6 has other benefits too over RAID 10. Maybe RAID 5/6 isn't the right choice, but it seemed like the best choice when I was comparing options.


The time it takes to initialize RAID 5 or 6, or to swap out a drive, was manageable when the drives were smaller. Now, the benefit of fewer drives is swamped by the extremely long time needed before you can do anything with them.


LLVM-based CFI is interesting; does this mean that distros are going to start building kernels with Clang instead of GCC?
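
(For the curious: mainline already supports building with the LLVM toolchain, roughly as below, assuming a recent enough Clang; kCFI sits behind CONFIG_CFI_CLANG.)

    # Build the whole kernel with LLVM tools (clang, lld, llvm-ar, ...)
    make LLVM=1 defconfig
    scripts/config -e CFI_CLANG
    make LLVM=1 -j"$(nproc)"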


IIRC Android is already there, but a lot of distros are also pretty deep into GCC, so probably only on the fringes.


Android and ChromeOS.


OpenMandriva too


I worry about a compiler monoculture if so. GCC will take a big blow if it gets replaced for the kernel.


That sounds backward; there's already a monoculture: only GCC. Official support for Clang would break that monoculture.

edit: You probably meant more broadly in the FOSS ecosystem. Personally, I'm not worried about that, as GCC seems to be doing OK competing with Clang on technical grounds. For instance, last I heard GCC was ahead of Clang in support for new C++ features.


GCC often generates faster code, too, though far from always.

Clang is suffering because Google is off chasing "Carbon" (which hasn't been abandoned yet) and so neglects everything but LLVM.

This doesn't affect the kernel much, because the C parser needs little attention. The Clang project needs someone other than Google to step up.


Carbon will use LLVM for its backend, right? I presume Google are still invested in LLVM itself, for that reason.

I was going to mention Apple, but they're famously prone to adopting non-mainstream languages like Swift. I don't think they have much interest in modern C++ features.


Academic research for security purposes is almost exclusively done with LLVM. Why? It is a compiler framework that is (relatively) easy to work with compared to GCC.

So there is already a monoculture in some circles.

However, there are plenty of custom GCC toolchains for embedded systems. It isn't going anywhere for a while yet.


The illusion of a monoculture disappears if you zoom out a bit.

Even if Clang were a "first-class citizen" compiler for the kernel and was picked up by some big distros, there's an absolutely gigantic market of companies designing and building embedded systems around GCC. There already is an LLVM/Clang monoculture in some areas (e.g. research, "modern" C++ shops), but while I don't have the numbers or anything, I'd imagine that those fields make up a very small percentage of worldwide C/C++ developers. A lot of those boring embedded-systems companies have low margins and don't think particularly highly of software developers, so they'd never consider refactoring all of their tooling and projects to use Clang unless they absolutely had to.

If the kernel _dropped_ support for GCC, you might see an actual monoculture develop. But I don't see the kernel _supporting_ Clang, or particular distros choosing to build their kernel with it, making any real dent in the huge market share (for lack of a better term) that GCC has built up. It might seem that way if you read HN and talk to people working in SV, but GCC is still pretty ubiquitous.


All of Apple's toolchain is LLVM, which also means all 3rd party apps running on Apple devices are (almost?) all LLVM. Google Chrome is now built with LLVM, Android is getting there with Clang, and who knows what else inside of Google is built with Clang. These aren't tiny percentages.


I guess it depends on how you look at the percentages.

Lots of people _use_ Apple’s developer tools, Google Chrome and other Google products. But in each of those cases, I believe the actual group of people directly dealing with C++ and Clang is small relative to the global C/C++ developer population. Android has a strong argument since Google can force the hands of OEMs. I don’t see how the size of Chrome’s user base will bring about an LLVM monoculture if 99.999% of C++ developers on earth never build Chrome themselves.

The majority of electronic devices on earth have (non-Android) firmware or software written in C/C++, written by people all over the world who don’t work for Google, Apple or anyone you’d even consider a tech company. You will never hear about these projects on HN because they’re closed-source, proprietary and (frankly) boring. But almost all of those projects use whatever toolchain their vendor provides, and more often than not (in my experience), that's GCC.

Like I said, I don’t have any numbers to back this up. I’m not claiming that GCC reigns supreme and that LLVM is irrelevant outside of academia. I’m just saying that if you’re really concerned about LLVM _support_ in the kernel leading to an LLVM monoculture, you might not understand how widely used GCC is in “boring” tech.

[edit] Adding to this, I guess it also depends on how you view a “monoculture” w.r.t compilers. I could imagine a not-too-distant future where LLVM gets the vast majority of research effort, and you see cool new optimizations brought into it while GCC stagnates. We’re arguably already there. But I don’t see that driving thousands of low-margin hardware OEMs to switch over from GCC to LLVM. Maybe that will happen one day, but I think we’re pretty far away from that.


What's the story with power usage sensors in k10temp? Can AMD contribute proper documentation or actual code to support them?


I’ve noticed MangoHud wasn’t showing CPU watts. I did some searching and found https://github.com/Ta180m/zenpower3. Blacklisted k10temp and now it's working.
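
For anyone replicating this, the blacklisting step looks roughly like so, assuming the module is named zenpower as in the original zenpower project:

    # Stop k10temp from claiming the sensors, then load zenpower instead
    echo "blacklist k10temp" | sudo tee /etc/modprobe.d/zenpower.conf
    sudo modprobe zenpower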


That's not upstream, so not really a good solution.


Great. For Ubuntu it has technically already landed on https://kernel.ubuntu.com/~kernel-ppa/mainline/ However, I say "technically" because the latest kernel build failed, as do many other versions there. I wonder why ...


While I have no idea if this is another such case, Ubuntu has a tendency to carry Ubuntu-specific patches that can have unforeseen consequences. I've had an Ubuntu patch cause squid to crash when the config had an empty line at the end of the file, or something like that. And Ubuntu's "openbsd-netcat" requires an EOF marker at the end of input, otherwise it won't terminate the connection; the original "openbsd-netcat" does close the connection on plain EOF. Ubuntu even patched in a parameter to make their "openbsd-netcat" behave like the original. Maybe they should have called it "ubuntu-netcat" instead. These antics, combined with the mess that is the "snaps" system, made me swear off Ubuntu and leave for RockyLinux.


This is their mainline channel, delivered over a Personal Package Archive (PPA), a kind of repo that's just slightly simpler to add/enable on Ubuntu; there are no patches on top of those kernels.


No need to go that far afield; Debian is good.


If your issue with Ubuntu is Ubuntu-specific patches, I don't think Debian is a solution. They patch a lot.

I personally think it's too much, mostly because I don't value most of the reasons Debian patches: I don't care about exotic architectures, I value conformance to upstream more than the ability to have modular and small packages, and I don't care about having some non-free parts in my packages.


I wonder when 6.anything is going to land in Ubuntu Lunar. Perhaps they were waiting to skip 6.0 and go directly to 6.1?


If you want a newer kernel on Ubuntu, consider installing a third-party one. The XanMod kernel works great for me.

https://xanmod.org/


There's also a nice bash script I've used that will update your kernel to a newer version on Ubuntu:

https://github.com/pimlie/ubuntu-mainline-kernel.sh
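
Usage is pleasantly simple; from memory of the project's README (check it for the authoritative flags):

    # Check whether a newer mainline kernel is available
    ubuntu-mainline-kernel.sh -c

    # Install the latest mainline kernel
    sudo ubuntu-mainline-kernel.sh -i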


The mainline PPA does not include linux-tools, so using it means you lose access to many tools: perf, turbostat, cpupower, powertop, …


Very excited! I have been using the MGLRU patches on a 13-year-old iMac with 8GB of RAM for the last 6 months, with Chrome holding lots of tabs and heavy use of GIMP/ImageJ/LibreOffice. MGLRU is such a great improvement; without it the machine frequently ran out of memory.
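
(With 6.1, MGLRU is in mainline behind CONFIG_LRU_GEN; per the kernel docs it can be checked and toggled at runtime via sysfs:)

    # Non-zero means the multi-generational LRU is active
    cat /sys/kernel/mm/lru_gen/enabled

    # Enable it if the kernel was built with CONFIG_LRU_GEN but it's off
    echo y | sudo tee /sys/kernel/mm/lru_gen/enabled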


On that note, anyone have a status page of frequently updated distros showing what kernel versions they're running?

Want an easy path to using the new Snapdragon ARM chips.


Are the PREEMPT_RT features fully merged in the current mainline kernels?



