
I would be extremely cautious about using any consumer-grade TLC or QLC SSD in a "NAS" for serious purposes, because of well-known write-lifespan issues.

There's a reason a big price difference exists between a QLC 2TB SSD and an expensive enterprise-grade one with a much higher TBW (terabytes-written-before-dead) rating.

This might look cool, but check back in a few years and see how much of the drives' cumulative write lifespan has been used up.

I also cannot imagine spending $4000+ on a home file server/NAS and equipping it with a copper-only 10GbE NIC rather than at least one 10G SFP+ network card.

Okay, so he wants it to be tiny? But in a home environment the bigger problems are power consumption and noise, so you can often go with a well-ventilated 4U rackmount case for a full-size ATX motherboard, which is roughly the size of a midtower PC case turned on its side.

That lets you use motherboards with enough PCIe 3.0 x8 slots for at least one dual-port Intel SFP+ 10G NIC, which are very, very cheap on eBay these days.



> I would be extremely cautious about using any consumer-grade TLC or QLC SSD in a "NAS" for serious purposes, because of well-known write-lifespan issues.

I don't know what you're using your NAS for, but the author is using it as scratch space for raw video files. It's not an OLTP DBMS or anything. It just needs really fast ingest of files beyond the capacity that a DRAM cache can provide.

> I also cannot even imagine spending $4000+ on a home file server/NAS with copper only 10GbE NIC and it not having at least one 10G SFP+ interface network card.

The author's editing rig doesn't necessarily have a 10G NIC, let alone a connection to a 10G switch with runs of CAT6a; and there's only one device talking to this NAS at a time (as the author cannot be in two places at once). So what'd be the point?


If the author's editing rig doesn't have a 10G port, what's the point of paying the $/TB premium for SSDs on the other end of the network connection? Something like a six-drive RAID of spinning disks can very easily saturate the roughly 110 MB/s of usable throughput on a 1000BASE-T link.
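A quick sanity check of that claim, as a minimal sketch (the ~6% protocol-overhead figure is an assumption, a common rule of thumb rather than a measured value):

    # Back-of-the-envelope throughput for a 1000BASE-T link, assuming
    # ~6% Ethernet/IP/TCP framing overhead (rule-of-thumb assumption).
    line_rate_bits_per_s = 1_000_000_000
    overhead = 0.06
    usable_mb_per_s = line_rate_bits_per_s * (1 - overhead) / 8 / 1_000_000
    print(f"~{usable_mb_per_s:.0f} MB/s usable")  # -> ~118 MB/s

Even a single modern 7200 RPM disk can sustain sequential reads in that range, so a multi-drive array fills the pipe with ease.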


Latency, for scrubbing through projects.


TrueNAS used to be designed to boot off small SATA DOMs used only for boot; they were effectively WORM. At least, that was the case a few years ago. Everything the server wrote went either to a RAM disk or was spread across the RAID drives (as a separate partition, which has its own issues, but still).

I had assumed this is what he was using the TLC SSD for. If that’s the case, so long as there isn’t much writing to it, it should be fine.


The 8TB Samsung QVO drives are quad-level-cell (QLC) consumer-grade drives; that's his main storage.


> well known write lifespan issues

Reports of SSD write-lifespan issues have been greatly exaggerated ;)

At least nowadays, even with QVOs, that's not something consumers have to think about much anymore.

These SSDs are 8TB models rated at 2,880 TBW each, so to exceed their write endurance Jeff would need to write 4TB to each of them every day for about two full years.
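The arithmetic, as a quick sketch (the 2,880 TBW figure is Samsung's published rating for the 8TB 870 QVO; substitute your own drive's spec):

    # Endurance math, assuming a 2,880 TBW rating per drive.
    tbw_rating_tb = 2880     # rated terabytes written before endurance is exceeded
    daily_writes_tb = 4      # hypothetical sustained writes per drive, per day

    days_to_exhaust = tbw_rating_tb / daily_writes_tb
    print(f"{days_to_exhaust:.0f} days (~{days_to_exhaust / 365:.1f} years)")
    # -> 720 days (~2.0 years)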


SMART will report the expected remaining lifespan. Another thing you can do is pre-write k TB to drive k, for k = 0, 1, ..., N-1, where N is the number of drives. That staggers the endurance so they don't all fail simultaneously.
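A minimal sketch of the staggering idea (device paths are hypothetical, and actually doing this burns endurance on purpose, so treat it as an illustration only):

    # Pre-wear drive k with k TB of throwaway writes so the drives
    # reach end-of-life at different times instead of all at once.
    N = 4  # number of drives in the array
    drives = [f"/dev/sd{chr(ord('b') + k)}" for k in range(N)]

    for k, dev in enumerate(drives):
        print(f"{dev}: pre-wear with {k} TB of writes")
        # In practice: write (then delete) k TB of scratch data to a
        # filesystem on that drive, and confirm the wear delta with
        # `smartctl -A <dev>` (Wear_Leveling_Count on Samsung SATA SSDs).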


> I would be extremely cautious about using any consumer-grade TLC or QLC SSD in a "NAS" for serious purposes, because of well-known write-lifespan issues.

How many full-drive writes does your average video editing server need? I would expect a pretty small number. The average source file is sitting there for weeks or months.


Yeah, my bet is that the rarity of writes (some metadata in the video project files) will give these drives plenty of longevity.

For a use case like database transactions, log storage and frequent data dumps, the game changes quite a bit. I would definitely shy away from the QVO drives for that use case.

I've had these drives in service for about 8 months in my regular NAS before transferring them to this new build, and they all still check out okay.

But this is also why I'm doing the striped mirror plus a hot spare. The only real challenge would be if the drives have a firmware issue, and they all die at the same moment after like 4 years due to a bug (like how HN's servers died...).
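For reference, "striped mirror plus a hot spare" corresponds to a ZFS layout along these lines; the pool name and device paths below are hypothetical, not the author's actual configuration:

    import subprocess

    # Two mirrored pairs striped together, plus a fifth drive as a hot
    # spare. Running this destroys any data on the named devices.
    cmd = [
        "zpool", "create", "tank",
        "mirror", "/dev/sdb", "/dev/sdc",  # first mirrored pair
        "mirror", "/dev/sdd", "/dev/sde",  # second pair, striped with the first
        "spare", "/dev/sdf",               # hot spare, swapped in on failure
    ]
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually create the pool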


Jeff edits on a Mac with a 10 gigabit NIC; he says as much in the article. Unless he's in an odd-duck situation with an SFP device connected to his Mac, I'm not sure what value going to SFP and back would add.


SFP is nice for the switch-to-device connection. It lets you choose between DAC (cheap and easy to route, if you're close enough), fiber, or copper. If I had the option, I'd use SFP+ everywhere.

But copper is usually a little simpler for consumer/prosumer devices. Someone does make a Thunderbolt to SFP+ adapter, but that thing's like $300!


Yeah that $300 device didn't seem like anything you'd use for a budget setup, so unless you need the distance advantage of SFP+ over copper, I don't see a reason to use it. Other than it's cool, of course. Which may be reason enough. :-)


Personally I'd not be too worried since the last QVO devices I put in a NAS lasted three years, and had about 300TB of reads and 500TB of writes before they triggered the SMART endurance alerts and were replaced.

At 500x whole-drive rewrites I think I got my money's worth out of an $84 1TB drive.


Samsung QVO SSDs aren't cheap enough to justify their lower TBW, especially if you write as much as that. There are plenty of TLC drives with great TBW ratings at a similar price, like the Seagate FireCuda. Some are DRAM-less, but HMB (Host Memory Buffer) is good enough, especially compared to QLC drives.


I was thinking the same thing, but wouldn’t these be okay if his workload is mainly WORM?


This is definitely an engineering disaster. Sometimes we get so caught up in how to do something that we never ask ourselves whether we should.



