A Look at the CPU Security Mitigation Costs Three Years After Spectre/Meltdown (phoronix.com)
285 points by zdw on Jan 6, 2021 | 121 comments


So can someone please explain to me whether it's better to use "mitigations=off" with Linux kernel 5.10, or to keep the kernel defaults, on an "AMD Phenom II X4 955 Processor"? Running "lscpu" in default mode (not using "mitigations=off" right now) I see only two mitigation lines; everything else is "Not affected":

  Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Vulnerability Spectre v2: Mitigation; Full AMD retpoline, STIBP disabled, RSB filling
Do those two mitigations affect my CPU performance? Will applying "mitigations=off" help with performance? Also, with "mitigations=off", what are the real security implications? Thank you for your help!
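For reference, this is how I'm checking the current status (a quick sketch; the sysfs files should exist on any reasonably recent kernel):

  grep . /sys/devices/system/cpu/vulnerabilities/*
  cat /proc/cmdline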


It's really hard to define what a regular user is. You're your own commander here, and the recommendation is to run benchmarks (or just use the machine) under both configurations to determine whether having the mitigations is worth it. All things considered, the performance impact for your average Linux desktop user is likely not going to be unacceptable.


Exactly. I just tried shutting off mitigations on an old laptop running BOINC, triggering the benchmarks before and after shutting them off. It made no difference whatsoever - not surprising, since that's compute-bound and isn't likely to have lots of system calls. How much that translates to the running applications remains to be seen.


The performance part is either irrelevant or easy to measure.

The security risk part is hard for most people to understand well enough to make an informed decision. (Mitigations ON is obviously safer, but even if I decide I want to go faster, I still don't understand the risk I'm taking.)


One datapoint: we run our production HFT machines with mitigations=off. Latency at the microsecond level is highly critical, and these machines don't have any public-facing interface.

This is obviously a niche case. But IMO, I can see a similar calculus for other, more mainstream use cases. I wouldn't turn mitigations=off on a public-facing web server. But if you have a cluster that does ETL data processing, it's probably CPU-bound and has very little externally exposed surface area. Arguably, even backend SQL servers can benefit, assuming the set of clients that directly access the database is tightly controlled and audited.


Another: we do not. Our code runs with exactly zero system calls once it has finished starting up, so we never execute the code affected by mitigations.

When every cycle counts, wasting them in the kernel is not a good idea anyhow. So, isolcpus and kernel bypass FTW.
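For anyone curious, a rough sketch of what that looks like (the core list is just an example and the binary name is made up; the right set depends entirely on the box and workload):

  # boot parameters: isolate cores 2-7 from the scheduler, the timer tick and RCU callbacks
  isolcpus=2-7 nohz_full=2-7 rcu_nocbs=2-7

  # then pin the latency-critical process onto an isolated core
  taskset -c 2 ./trading_app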


How many other "trade security for performance" options does Linux have? For example can you disable paging entirely and have all user code mix with kernel code as long as you trust all code not to segfault?


Also, are MDS attacks mitigated or not? I seem to remember that on their website [0] it said that the mitigations were not possible or extremely hard, but now it says that Intel has released patches. I'm wondering if those patches actually mitigate MDS attacks.

0: https://mdsattacks.com/#ridl-ng


At least on my AMD processor by using default kernel 5.10 settings it says "Vulnerability Mds: Not affected".

(EDIT) Honestly I did not notice any performance issues by NOT using "mitigations=off". Default kernel 5.10 settings seem to be working well enough, at least for my tasks :-) But then again, I'm not playing any games on it LOL


That's because AMD processors were never affected by MDS attacks in the first place.


The context switching thing is the biggest.

If you have something that's just making a huge number of syscalls in a tight loop, the mitigations absolutely dumpster performance.
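A crude way to see it for yourself (dd with bs=1 issues roughly one read and one write syscall per byte, so this is on the order of two million syscalls; absolute numbers vary wildly by machine):

  time dd if=/dev/zero of=/dev/null bs=1 count=1000000

Run it with the mitigations on and off and compare the wall-clock time.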


It can be mitigated somewhat if you have pcid enabled for the processors, so that a flush can be process specific instead of cpu-wide. There’s still a performance hit of course but you’re going to lose some locality with a context switch anyway.
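A quick sketch for checking whether your setup actually has it (exact dmesg wording differs a bit across kernel versions):

  grep -m1 -ow pcid /proc/cpuinfo
  grep -m1 -ow invpcid /proc/cpuinfo
  dmesg | grep -i 'page tables isolation'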


If you're on Windows you can easily turn them off with InSpectre from GRC [1]

[1] https://www.grc.com/inspectre.htm


On Linux kernel >=5.1.13 you can add the mitigations=off kernel boot parameter to turn off all mitigations.

There are more granular parameters that are supported by all kernels implementing CPU mitigations: noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off mitigations=off

https://unix.stackexchange.com/questions/554908/disable-spec...
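If you want it to persist across reboots, a minimal sketch of the usual GRUB route (the file location and regen command vary by distro; this is the Debian/Ubuntu-style version):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"

  # regenerate the bootloader config, then reboot
  sudo update-grub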


I have a test suite that went from 19 minutes to 8 with mitigations=off


which CPU?


Threadripper, 12-core 2920X


I don't think so.

I do lots of compiling at work and our compiler hasn't changed in 5+ years.

Pre-Spectre my 8700k could do 20k lines/sec. Post-Spectre with mitigations it's about 10k, with them disabled it's about 15k.

There have clearly been some under-the-hood changes to Windows beyond these (optional?) changes.


I am trying to understand this: you are compiling programs on Windows?


Yes, our application compiles its own sub-applications. It is Win32 software.


did you do a bios update or something you forgot about?


No bios update, but I wouldn't be surprised if windows shipped updated microcode or something.


It looks like it doesn't work on my PC: if I run it with elevated privileges and toggle off protections, they're back on when I restart the app.


You will need to restart your PC for the changes to take effect and show in the app.


Fwiw a reboot didn't change anything!


Oh OK thanks.



Does turning them off improve performance?


Yes, very significantly, especially on old CPUs that don’t have hardware mitigations.


I've read the word "mitigation" so much (not only here, and not only from you; so this comment is not directed at you personally) that I only hear doublespeak now: when people say mitigation I immediately think of a slab of concrete in the back of my car to compensate for not having any brakes installed. Please recognize it for what it is: a design failure that is being papered over. A mitigation is not a solution; it lessens the impact of the failure mode that was identified. AFAIK, if the flushes and changes actually work, it is not a mitigation but a workaround: the failure mode cannot occur anymore. Mitigation sounds better than workaround, though (and that is probably the reason it was chosen).


Mitigation and workaround mean the same thing.


Not really: mitigation lessens the effect, workaround avoids the effect.


I don't think it turns off all mitigations. Only a few of them.


I wonder how the Apple M1 hardware stacks up. I don't know if they are vulnerable to any existing Spectre attacks but surely they are still using speculative execution and thus have to be aware of similar side-channels.


Incredibly doubtful. Current attacks are, by this point, pretty well understood. One can simply "check off the list" of attacks and simulate the code gadgets to see what happens (EDIT: e.g. https://github.com/IAIK/transientfail provides a SoK with simple PoCs). In other words, I'm positive that no "script kiddie" can exploit any silicon using current attacks, not just the M1. *

The more difficult/interesting problem is being future-proof against whatever else is still undiscovered. Many Spectre attacks are heavily timing based, and even a single-cycle variation in pipeline stages or flushing structures will spawn a new variant (see MDS vs. Spectre). This is partly why current hardware security research is trending towards fuzzing-type stuff [1].

Something also worth noting is that it's incredibly difficult to quantify "leakage". There's a pretty big difference between vulnerable and exploitable -- e.g. the original Spectre papers had 10KB/s of kernel dumping, which is a big reason it was scary, but would it really be a big deal if it had <1b/s? Not going to explicitly name and shame, but there have been a handful of reasonably high profile "vulns" with cute domain names that I'm shocked to even see accepted at conferences, given how contrived the exploits were and how tiny their leakage rates were.

I personally don't really know how to address the quantification problem, but I very much think it's necessary in any discussion of a bug's impact/severity. Definitely gets exhausting when every cute name gets a headline, and it's easy to blow things out of proportion without some grounding in reality.

* NOTE: I do think script kiddies have their place still. Definitely important to have automation to determine whether a system has updated security [2], it's just that I doubt such a scenario is applicable to Apple.

[1] https://www.usenix.org/conference/usenixsecurity20/presentat...

[2] https://owasp.org/www-project-top-ten/2017/A6_2017-Security_...


> I'm positive that no "script kiddie" can exploit any silicon using current attacks, not just the M1. *

Oh, do tell how spectre is solved. I would love to know, defending against it is a real problem I currently have. If you allow untrusted code to execute on your machine (e.g. JavaScript) then you're vulnerable to it. There are no practical attacks in the wild that I'm aware of, and it's tricky to do, but it's not impossible and the only defense really is to make it harder and more time consuming. This is the approach that I and others have taken.


Use a CPU that does constant-time everything (e.g. has no caches). Most of the super-simple in-order RISC-V cores that are available on github will do. Just put one of them in an FPGA, and you are set up.

Most of those CPUs are designed by CS or EE students taking a computer architecture class, so... in a sense... one can argue that defending against Spectre-like attacks is actually super simple: just use a simple CPU design.

To actually become vulnerable to Spectre, you need a very complex CPU design, so in the same way it can be argued that making a CPU vulnerable to Spectre is actually hard, since it takes a lot of work to create such a CPU design.

Now, if what you want is a CPU that's both fast and secure, then I'm sorry to tell you that such a thing cannot exist. Those two goals are in tension. You can either get an F1 car or a tank, but no vehicle that offers the same amount of protection as a tank is going to be able to compete against an F1 car, and vice versa.


Caches aren't the problem (in the context of SPECTRE), branch prediction is. Getting rid of caches would be very costly performance-wise. VLIW CPUs don't predict branches themselves, but rely on the compiler to generate optimal code ahead of time. I was actually expecting an updated Itanium after the SPECTRE debacle.


AFAIK branch prediction enables the attack by enabling speculative execution, but the data is not leaked through the branch predictor, but rather through the timing differences observed due to speculative loads bringing data to cache.

I guess if you remove the branch predictor, you might avoid spectre while keeping caches, but I think you can keep the branch predictor and remove caches to also avoid spectre.

The downside I see in keeping the caches is that you keep the _source_ of the timing differences, so an attacker just needs to find a different attack vector to create a new timing attack.

If you remove the caches, you kill the source of most timing attacks.

I'm not an expert on this though.


Doesn’t Spectre/Meltdown rely on hyperthreading? In that case, not doing that would stop it, no? Or am I misinformed and it would affect multi core single threaded chips as well?


There were multiple attack vectors, and some of them were significantly better at leaking information with hyperthreading enabled, but the basic cache side channel leak worked just fine on single-core systems before kernel patches mitigated the impact by flushing hardware buffers as part of the context switch (at a significant performance cost).


If you had a machine that was air-gapped, could you disable the flushing?


Any machine that runs only verified code (i.e. no JavaScript in the browser, no downloading nifty things and running them) doesn't need the flushes. Problem is (as shown by the SolarWinds hack) even code you should be able to trust can be backdoored. However: if that is the attack vector, Spectre and its offspring are not your real concern: there are much more efficient and effective ways to compromise your machine if a binary blob ends up being backdoored.


Yes and no. Yes I think it will protect independent processes sharing a computer that's fully patched. No it will not protect you if you run multiple untrusted scripts in the same process. This is partly why Chrome moved to isolate tabs in their own process after the V8 team eventually admitted defeat in implementing spectre mitigations.


Run this [1].

If it works, flush your caches or just update your kernel. If it doesn't, you checked off an item on your list.

I misspoke, "silicon" should be more like "system". I said more about silicon fixes in this comment: https://news.ycombinator.com/item?id=25665276

[1] https://gist.github.com/anonymous/99a72c9c1003f8ae0707b4927e...


I don't think you understand spectre. Defending against a single POC is not the same thing as defending against this class of side channel vulnerabilities.

Also your references to flushing caches and updating the kernel underlines to me that you don't know what you're talking about.


>> One can simply "check off the list" of attacks

>> The more difficult/interesting problem is being future-proof


I feel like you're not communicating well here. What does checking an attack off your list mean to you? If it means you're safe from it now and in the future, then obviously you're wrong. I feel like that's a strawman and not your actual argument though.


I read it as: defending against known vulnerabilities is well defined, defending against future exploits in the same vein is unknown. Pretty reasonable analysis unless there’s some doubt about the known part?


The impedance mismatch here is that someone is talking about defending against "vulnerabilities" when what they're really defending against is "exploits". Defending against exploits is antivirus, not security. Spectre is a broad class of attacks; there are categorical defenses, but you can't verify them by running a single POC.


Yes, you've put your finger right on it, I think that's where he's coming from.


TIL you can make an anonymous gist?


This functionality was phased out by GitHub ~2 years ago. Anonymous gists created before then still exist.


Could someone with an M1 try and run some Spectre gadgets?

Obviously read the source first, but there are a few implementations of the basic ones on GitHub; not sure about the higher versions of Spectre (I don't know whether CVEs require an implementation to be published).

If I can find it, I'll link a paper that describes a system that tries to automatically characterize these types of side channels.


I think there still aren't any Spectre/Meltdown attacks in the wild, just proofs of concept. On my home PC I might deactivate the mitigations since I rarely execute unknown code. On the other hand, CPU power isn't really a bottleneck for me.

It is a memory leakage problem and in the world of today with apps stealing info left and right, I am almost unsure if I should care about it that much. Maybe attackers might be able to steal a key here and there, but I managed to stay quite cool when the architectural flaws were published.

I believe I saw a demonstration about the M1 not being affected by these side channel attacks, but I have no source.


Is it possible to attack via JS/WASM through the browser? That's pretty much the only unknown code I am running.


In theory, yes, it should be possible.

But I don't think anybody has managed to, due to the amount of noise the browsers' sandboxes add to the necessary syscalls.


That's not true at all. Some of the first proof of concept attacks were done in JS.

Now the question would be are those attacks still viable given the additional hardening browsers have done independent of the kernel mitigations?


That is why I got NoScript when the attacks were discovered

Although many pages do not work with it, so I disable it quite often

I wonder if it is possible to activate the mitigations only for the browser?

Or only for one user? I have created a separate user for the browser, so it cannot change my actual files
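In case it helps anyone, a rough sketch of what that looks like on an X11 desktop (the user name is made up, and a Wayland setup needs a different mechanism for display access):

  sudo useradd -m browseruser
  xhost +SI:localuser:browseruser
  sudo -u browseruser firefox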


Almost certainly the M1 is still "vulnerable" to the same attacks that other current gen CPUs are. The M1 is of course a speculating CPU (and quite a deep one at that), and there doesn't appear to be any rollback for modifications to L1/L2 during that speculation when a branch miss occurs.


Why are they still releasing new CPUs with these issues?


Cheaper to keep the mitigations in place than to design around the side channel. Rearchitecting an entire CPU seems like it would be a boondoggle, especially since the well of side channels runs infinitely deep unless you're back to single-core, single-thread CPUs. As soon as they 'fix' this side channel, another will be found. It's just an inevitability of multi-tenant processing.

I'd have preferred a hardware flag that says "I accept that if untrusted code runs on this CPU, I've already lost" and keep the performance boosts of speculation, but alas.


well there's the mitigations=off kernel command-line switch in linux :)


A few reasons.

1) It takes a long time for desired design changes to make it to fabs, so it was always going to take years for the classes of vuln that were discovered to get mitigated in hardware.

2) More have been discovered in the intervening period.

3) Speculative execution really speeds things up, so the fixes are removing/changing as little spec. ex as you think you can get away with, and leaving the rest.


4) Intel's design pipeline is significantly messed up by its 10nm fab issues.


I think the tick-tock strategy[0] was still salvageable when these vulnerabilities were discovered, and it would be reasonable to assume that the next tock would fix these. Isn't Intel's 10 nm mass production five years late?

[0] https://en.wikipedia.org/wiki/Tick-tock_model


Spectre was disclosed to Intel sometime in 2017. Skylake (14nm) was the most recent rearchitecture at that point. The 10 nm rearchitecture was most certainly already in development, and may have been too far in to substantially address. I would expect it to be substantially addressed in the 7 nm rearchitecture, whenever that happens.


Many large customers don't need the mitigations because the attacks aren't relevant in their threat model and they prefer the peak performance provided by as much speculative execution as they can buy.


Because they aren't a bug, nominally - they're very potent side channels that peer directly into the CPU's operation at a very topological level.

CPUs are extremely expensive to design and verify, and astronomically expensive to make - these things take time.


Because if you discover how to fix Spectre completely, that would make (specialized) news.

Given common high perf microarchitectures, that's quasi-impossible. You can use somewhat efficient mitigations though.


Would segmented caches be a solution? What I mean is, allow the operating system to flag each memory page as belonging to a certain tenant id, and not allow speculation across tenant boundaries? This doesn't seem like a complicated addition, given that we already have multi-level page tables, caches and MMU accelerators for virtual environments.

Then again, if the solution was simple, I'm sure it would have already been implemented.


You want to protect against multiprocessor attacks through MESI. At which point I just have no idea of what to do purely in hardware.

The best system design change IMO would be to tag at programming language level (to systematize the introduction of the needed barriers, without needing too many of them, thus minimizing the overhead -- which is a must otherwise you can as well remove OOO entirely), but I suspect that's not going to happen for most systems. Or even any of them.


For Meltdown, they aren't. It was silly to do lazy validation of memory accesses and Intel fixed that.

For Spectre, you can't really have a high performance chip that isn't vulnerable to it in theory. A processor can't read your mind and know which code within a process should be able to communicate with which data within the process unless you tell it. You could just go to strict in-order execution, but that's taking a huge performance hit. Programmers combining secure information and JITs running untrusted code within the same process will have to work to make sure they're not vulnerable to Spectre going forward.


> For Spectre, you can't really have a high performance chip that isn't vulnerable to it in theory.

You could if you had a fully transactional cache such that branch misses could be rolled back. It's highly non-trivial and likely far too expensive in die space to justify, but theoretically solvable.

But I think realistically CPUs are just going to say "processes are the only security boundary we offer" and leave it at that. Which puts things like WASM in a very questionable spot (particularly things like WASM in the kernel), but Intel & AMD probably don't care about that too much. For web browsers all the major ones just gave up on in-process sandboxing and that's why we have things like per-iframe process sandboxes now ( and then reduced privileges in-process iframes with the 'sandbox' attribute: https://caniuse.com/?search=sandbox )


For decades now hardware has been designed that can only be fully exploited by sufficiently advanced software. With CPUs this started in a big way when multiple cores became the norm. But the trend exists in completely different areas too, like DNA sequencing machines. Some people argued about this kind of thing many years ago, but it shows no signs of changing.


Because they mostly don't matter. Where is a practical, real world exploit? (This means not POC that works under perfect conditions.)


You're talking to people who are paranoid about Google using their search history to show more relevant ads... Just disable mitigations and let everyone else decide for themselves.

I do understand the need for top security in enterprise applications. But for personal use, all of this seems a bit ridiculous.

I've disabled mitigations on every computer in my home, will enable if I get burned. ~10-20% performance difference - I consider it a decent tradeoff.

But then again, I don't stress much about being hacked or privacy. It's not worth the mental effort, plus I'm a nobody and I accept that.


Even for many enterprise applications it is not necessary. What is the point of enabling mitigations on a single tenant database server, for example?


CPUs would be extremely slow without the issues.


Surprised to see significant mitigation penalties even on newer CPUs.


Ultimately spectre mitigations mean fiddling with the speculative execution, which means a real paradigm shift microarchitecturally.

The literature is still very thin too, so we're still very much in the early days of these side channels.


Current mitigations are still largely software based, and frankly, aren't fundamentally any different from the ones proposed just a couple of days after 1/3/2018. Additionally, current academic literature still hasn't addressed proper silicon mitigations in a sufficiently performant manner -- around a 7-15% penalty from what I've seen. While in my opinion some of these mitigations are provably future-proof (and very clever!), those penalties are still quite heavy. Furthermore, if software yields similar penalties, then why not just stick with software mitigations?

Just look at the mitigations list of i9 10900k or Ryzen 9 5950X as used in the article, both released in latter half of 2020.

  itlb_multihit: Not affected
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Full AMD retpoline, IBPB: conditional, IBRS_FW, STIBP: always-on, RSB filling
  srbds: Not affected
  tsx_async_abort: Not affected

Many of these are still by and large "disable this optimization" and "barriers", all done in software, with the potential exception of EIBRS which just essentially tags the branch predictors (which, in my opinion, they don't do in a particularly effective way, but stay tuned for Research tm). Also Meltdown, which I think is a pretty easy fix, and even Intel agreed [1].

Given typical design --> market time might be ~2 years, designers only had a few months to 1) finish currently pipelined work and 2) work on mitigations. On top of that, given that this is a pretty hard problem, a few months (not even > 1 year IMO) is definitely not enough time.

I'd guess that this problem won't be "solved" in a sense for many many more years, much in the same way that the rich history of buffer overflows has been a cat and mouse game for decades, with each mitigation coming with its own tradeoffs and potential performance penalties. On the "Hardware Mitigations" side, think of ARM's pointer authentication-- it's only just now that some pretty nice hardware support has been spawned, decades after buffer overflows were "known".

Still though, I personally think some more silicon mitigations will come in the next 2 years. Part of the reason "Pointer Auth" came "decades after buffer overflows" is due to the rise of cloud compute, and security matters much much more to the general public these days. I just mentioned current research generally has penalties on the order of 10%-- While pretty distasteful, it can definitely be (and has been) gradually improved on and is a far step from initial estimates of ~33-50%.

[1] https://www.rockpapershotgun.com/2018/01/29/intel-cannon-lak...


Hyper-threading (along with some forms of speculative execution that become subject to side-channel attacks) is another example of a performance-enabling technology with unforeseen security vulnerabilities that nowadays often has to be disabled.

With the death of Moore's law [1], it's not too surprising that newer CPUs (or at least x86 ones) are still recoiling from Spectre and Meltdown et al.

[1] https://youtu.be/MtrZJ4UqSn8?t=1327


CPUs are still very much superscalar, and hyperthreading still exists most places.


I couldn't find it in the article if it was addressed, but I'd like to see 2017-era microcode used in the mitigations=off testing as well, to see how much of a difference that makes.


I also wonder if there are changes to the kernel that can't be reversed with a simple mitigations=off and what effect compiler-based retpoline hacks have these days. Wouldn't be surprised if there's some other variable that I'm missing.


I've given up tracking this stuff. Just been adding https://make-linux-fast-again.com/ to my grub configs everywhere.


Even if you are OK with using that on a machine, that list of options absolutely reeks of magical thinking. On a modern Linux, a simple "mitigations=off" is equivalent to more than half of the other options listed, and many of the remaining listed options don't seem to exist anymore.


I'm the author of the website - I keep the whole list because older kernels didn't have mitigations=off, and nonexistent options won't prevent your system from booting.


Like how some people still advocate for wiping hard drives with the Gutmann method (35 wipe passes)[0] when a single pass is more than sufficient for practically everyone?[a][b]

[a]: Obviously, for absolute security, a hammer and/or a shredder does much better.

[b]: This also ignores that the Gutmann method was designed for hard drive encoding methods that aren’t used anymore.

[0]: https://en.wikipedia.org/wiki/Gutmann_method


Method A is also known as "Widlarizing" or the "Widlar" method. "How do you Widlarize something? You take it over to the anvil part of the vice and you beat on it with a hammer, until it is all crunched down into tiny little pieces, so small that you don't even have to sweep it off the floor. It sure makes you feel better. And you know that component will never vex you again." Bob Pease, describing Bob Widlar's method for dealing with broken electronic components.


> [a]: Obviously, for absolute security, a hammer and/or a shredder does much better.

This still leaves large pieces of the magnetic platters intact. At uni we could read some data from floppies that went through a shredder.

Hammer and shredder is NOT "absolute security".


1.44 MB 3.5in floppies had 17434 bpi. Hard disks apparently have around 1 Tbpi. That's like saying it's possible to jump to the moon because you could do a slam dunk back in school. You're probably still right, though. https://www.anandtech.com/show/11925/western-digital-stuns-s... https://en.m.wikipedia.org/wiki/Disk_density


Unfortunately e.g. the RHEL 7 world doesn't have mitigations=off yet (in addition to many other kernel command line sugars).


It's intended to be a catch all for multiple kernel versions. The 'mitigations=off' option was introduced with version 5.2.

https://www.phoronix.com/scan.php?page=news_item&px=Spectre-...


I would much rather have liked to see a list of kernel versions with customized options for each.

I mean, imagine coming upon a system where you want to add some other kernel option. You see that the system has all those options already added to the default kernel options. Are you going to research and clean up those options and remove the irrelevant ones? Or are you going to punt, add your own option to the end, thereby adding to the chaos?


Please, not in production systems


Why not?

Or do you mean "please, not in a production system where you share a host"?

If that's the case, I think you have it backwards. In production, please, don't share a host.


There’s zero reason to change 8 digit passwords every 6 months if your /etc/passwd isn’t insecure


Related: does anyone know how good current browsers are at mitigating Spectre-related attacks through JavaScript? It's hard to run a browser on the modern Internet without at least running _some_ untrusted JS.

Turning off the mitigations in the kernel sure speeds things up a lot on older machines, but if browsers don't do a proper job of mitigating those attacks then someone could extract data through JavaScript.


So companies had security flaws in their designs, and the fix makes their CPUs a lot slower, so you need to buy a new one from them sooner.


The interesting thing about these particular measurements is how they cut against the meme of AMD dancing on Intel's grave. The i9-10900K, a $500 part that's readily available, is 1st or 2nd place in many of these benchmarks, and where it's 2nd place it's to the quite unavailable and more expensive Ryzen 5000.


> more expensive Ryzen

Only because this list happened to pick an expensive chip as the representative model for Ryzen 5000. For an even competition, look at the numbers for the 5800X and 5900X, which are respectively $50 cheaper and $50 more expensive. Compared to those, the 10900K isn't impressive. https://openbenchmarking.org/embed.php?i=2011098-FI-AMDRYZEN...

And the reason people got excited is not because Intel parts became useless, it's because AMD finally managed a generation of chips that flat-out beat Intel's, even in single core performance. Especially in light of how important a role AMD played in breaking the status quo of quad cores, putting fear in Intel's eyes is great for competition.


I think the new Zen 3 stuff is pretty good, but your take is highly biased. Objectively, the 10900K came out 6 months ago. The Ryzen 5000 series came out, effectively, at some point in the future. It's not in stock anywhere and nobody owns one. Using the benchmarks in this article, the Ryzen 5950X is 2% faster.

As far as stuff you can buy today there's Apple way out in front (if you can live with their RAM configs) followed distantly by last year's i9, then there's AMD.


It's not biased to say that a newer product won at something, let alone highly biased. And I didn't dispute the stock issues.

> nobody owns one.

https://i.imgur.com/8EMeiCZ.png

This is only one source, but the 10900 numbers since launch are dwarfed by the 5800X numbers since launch. For the people building PCs with this site, Zen 3 chips are currently outselling all of Intel combined.

> Using the benchmarks in this article, the Ryzen 5950X is 2% faster.

I think this article is all single core stuff. Which is the aspect they "finally managed" to win at. It's not their strength at all, it's the thing Intel was able to lord over them. Should I have said "barely" in addition to "finally managed"? I thought it was clear enough.


The Denver, CO Microcenter gets about 20 5950X chips every week it seems. Many more of the other models, of course.

It took me a while of waiting but I finally got mine a couple of weeks ago. It is pretty nice.

Since the chips sell out in the same day they arrive at the store you won't ever see them "in stock" but obviously, out of all the chip models at least 100 new PC enthusiasts every week own one just from that one store.

I also know several friends of mine who finally received their prebuilt gaming systems with Ryzen 5800 or 5900 chips and Nvidia 3080's. There was a lot of delay but the OEMs are shipping a lot of boxes.


I personally don't think AMD is "dancing on Intel's grave." As much as I love (and try to buy) AMD's excellent CPUs, as you noted, the latest Ryzen 5000 series are in very limited supply, and I found the same with the Ryzen 4000 laptop chips all year. I eventually got an HP Omen 15 which I'm pretty happy with, but I'll remain bitter that options with brighter and/or faster screens or beefier graphic cards were never an option, and what options I had were often hard to find.

As for the i9-10900K, if you check out a review from launch [0], you'll see nothing but good things to be said about the performance, with a very large trade-off of being a ridiculously power hungry, heat-producing CPU. If you're OK with that, then you're probably perfectly happy with the Intel CPU. For me, a cool and quiet CPU that can still smash my parallel processing needs and meet my gaming needs is a better sweet spot.

[0] https://www.tomshardware.com/reviews/intel-core-i9-10900k-cp...


The previous gen i9 launched at $989; Intel had to cut their prices by half to be competitive. Intel isn't dead for sure, but they're no longer able to set whatever prices they want.


I have a (maybe stupid) question about this. If we have the mitigations in place and compare two otherwise rather similar processors where one has hyperthreading and the other one doesn't (for example the Intel i7 7700 and the i5 7600), would there still be a significant difference in performance between them?


I assume there are people here that can state in general what kind of performance drops they've seen since security patching?


There is no in-general. It is very use case specific.

If you're doing media encoding, it would be hard to measure. If you're doing an HAProxy TCP proxy, it's big. If you're doing HTTPS, it's probably not so big (encryption eats enough cycles that you wouldn't necessarily see the slowdown).


Does anyone have any numbers for turning these mitigations off on GCP instances? Especially for machines running Apache or nginx.


How does macOS deal with it? Any slowdowns there too?


Can anyone explain why the geometric mean in particular was used for the overall results?


I think this paper should explain it https://dl.acm.org/doi/10.1145/5666.5673
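The short version: when you average normalized ratios, the arithmetic mean can manufacture a result. A toy example:

  arithmetic mean of 2.0x and 0.5x: (2.0 + 0.5) / 2 = 1.25  -> suggests a 25% net win
  geometric mean of 2.0x and 0.5x:  sqrt(2.0 * 0.5) = 1.0   -> correctly reports no net change

The geometric mean is also independent of which machine you pick as the normalization baseline, which IIRC is the main argument in the paper.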


tldr: with all software-side mitigations turned on you are looking at ~15-25% decrease in performance.

Considering the millions of chips running worldwide and the CPU cycles / energy wasted on this, this really is absolutely horrendous.


Consider all the CPU energy "wasted" on playing games which contribute absolutely nothing to society.


I'm guessing you're assuming that if people didn't play games, they'd instead have time to do other, more meaningful things to contribute to society, which is naive at best.


If anyone was wondering:

   cat /proc/cmdline


What I personally learned? Never trust any hardware manufacturer for anything other than security theater.


And what then? Never use hardware?



