SSDs were a mistake

Attached: F910B367-3BC3-49CD-B2EB-F76FE5A798CF.jpg (720x405, 44K)

>implying magnetic and optical media don't have their own set of issues.....

The industry knows that, in order for them to keep thriving, everything must be fleeting... by that I mean everything must be fast as hell, stability/retention be damned. I'm O.K. with SSDs as the host for the OS, I just don't like how fragile they are.

Attached: ginger_snaps_1.jpg (1920x1080, 284K)

What even is the problem here? What you're showing is a reliability/density trade-off. Good MLC and (now somewhat rare) SLC will last way longer when they're powered on than any traditional HDD. Hell, the MLC drives of a few years ago lasted many petabytes before failure. If you're worried about durability, put some OS log stuff on a RAMDisk. Better yet, get some Optane and use ZFS to make it a cache drive. That'll reduce the number of writes your flash devices receive, if you're really worried.

Those names are terrible. Why not 1LC, 2LC, 3LC or something

>Better yet, get some Optane
thanks for the laugh Brian

Samshit EVOs are 3D TLC and they're already on par with MLC; future 3D NAND will be better

Or just use cheap shit and back your data up. It's not rocket science.

If you really want your mind blown, look up 256-QAM, a modulation technique used for cable internet and someday wireless. It's amazing how we are able to transmit data with so little room for error.

pic semi-related, 64 qam.

Attached: 64qam.png (455x359, 92K)
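To make the constellation picture concrete, here's a rough Python sketch of how 6 bits land on one of 64 points in a square 64-QAM grid with Gray coding on each axis. The gray() and qam_constellation() helpers are just names made up for this illustration, not anything from a real modem stack.

```python
# Minimal sketch: mapping bits onto a square M-QAM constellation (here 64-QAM,
# so 6 bits per symbol). Gray coding per axis keeps adjacent points one bit
# apart, which is why a little noise usually only costs a single bit.
import numpy as np

def gray(n: int) -> int:
    return n ^ (n >> 1)

def qam_constellation(m: int) -> dict:
    """Return {bit string: complex point} for square m-QAM (sqrt(m) a power of two)."""
    k = int(np.sqrt(m))                    # points per axis, e.g. 8 for 64-QAM
    bits_per_axis = int(np.log2(k))
    levels = np.arange(k) * 2 - (k - 1)    # ..., -3, -1, 1, 3, ...
    mapping = {}
    for i in range(k):                     # in-phase index
        for q in range(k):                 # quadrature index
            bits_i = format(gray(i), f"0{bits_per_axis}b")
            bits_q = format(gray(q), f"0{bits_per_axis}b")
            mapping[bits_i + bits_q] = complex(levels[i], levels[q])
    return mapping

const = qam_constellation(64)
print(len(const), "symbols, 6 bits each")
print("bits 000000 ->", const["000000"])
```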

Optane is a meme. You only need like 32GB of ZIL unless you've got a crazy heavy DB load, but the L2ARC should be 10-20% of your array size. At more than a dollar per GB, Optane is just not an option.
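Putting rough numbers on that sizing rule, a back-of-the-envelope sketch in Python; the pool size and per-GB prices are assumed ballpark figures, not quotes:

```python
# Back-of-the-envelope for the sizing rule above: ~32 GB of SLOG/ZIL,
# L2ARC at 10-20% of the pool, Optane assumed at roughly $1+/GB.
pool_tb = 20                              # assumed example pool
slog_gb = 32
l2arc_low = pool_tb * 1000 * 0.10
l2arc_high = pool_tb * 1000 * 0.20
optane_usd_per_gb = 1.20                  # assumed ballpark
flash_usd_per_gb = 0.25                   # assumed ballpark

print(f"L2ARC target: {l2arc_low:.0f}-{l2arc_high:.0f} GB for a {pool_tb} TB pool")
print(f"Optane cost at the high end: ${l2arc_high * optane_usd_per_gb:,.0f}")
print(f"Ordinary flash for the same: ${l2arc_high * flash_usd_per_gb:,.0f}")
```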

My point was that Optane has very high durability vs flash. Performance is whatever, but if you're super paranoid about durability in an SSD you can use Optane caching to reduce cycles.

>actual autist posting on Jow Forums
Holy shit.

can someone explain this to a brainlet

>SSDs

Attached: sdcardhackbybunnieandxobs2900x674.jpg (900x674, 93K)

The real point is Optane is so expensive per GB that you're actually better off just z1-ing two good MLC SSDs and swapping them out when they break, which will likely take years even with a heavy workload.

Attached: sdcardhackbybunnieandxobs3900x674.jpg (900x674, 109K)

Attached: sdcardhackbybunnieandxobs4900x674.jpg (900x674, 98K)

This is how all HDDs work as well. In fact, this is how all storage has ever worked. You're never guaranteed 100% reliability. What you do about that depends on the application. In the case of mass storage, using FEC to get closer to the Shannon limit is perfectly reasonable.
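As a toy illustration of the FEC idea, here's a minimal Hamming(7,4) encode/decode in Python: 4 data bits stored in 7 cells, any single flipped bit corrected on readback. Real drives use much stronger codes (BCH/LDPC), so this only shows the principle:

```python
# Toy FEC: Hamming(7,4) stores 4 data bits in 7 cells and corrects any single
# flipped bit on readback. The point is that you never rely on every raw cell
# reading back perfectly.

def encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3 # 1-based index of the flipped bit, 0 if clean
    c = c[:]
    if syndrome:
        c[syndrome - 1] ^= 1        # fix the bad cell
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
stored = encode(data)
stored[5] ^= 1                      # one cell reads back wrong
print(decode(stored) == data)       # True: the error is corrected
```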

HDDs weren't any better. If you want your data to last thousands of years, carve it in stone. If you want it to be reasonably safe for your lifetime, make a bunch of live backups.

It's not a problem if the probability of what you read back matching what was stored is high enough. And as someone already pointed out, mechanical hard drives have the exact same "problem". Attempting to make "perfect" solutions is hardly ever worth the effort and overhead.

>just z1-ing two good MLC SSDs
Wtf does this even mean

I mean... you need pretty high SNR to do it though

This is hilarious. Is this the first time you've seen how storage works at the hardware level?

This guy gets it.

I remember my external 3.5" drive fell 2 inches and stopped working.

Meanwhile my 850 Pro has a 10-year warranty and I will die before it runs out of writes.

How is this bad?
If you buy a 2GB chip and you get a 2GB chip, why should you care that in reality it's a butchered, partially broken 16GB chip?

This is somehow different from HDDs, how? You think needles encode exact zeros and ones onto magnetic disks and not some encoded value that has error resistance?

Jesus, kiddo, please stop posting this literal bait.

Just to add to that: the new 802.11ax standard in development uses 1024-QAM to achieve hilarious data rates in such small channels, but you will need an extreme SNR to pull it off.
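For a feel of how extreme that SNR is, a quick Python calculation of the Shannon lower bound on the SNR needed to carry log2(M) bits per symbol; real uncoded hardware needs a fair bit of margin on top of these floors:

```python
# Why 1024-QAM needs "extreme SNR": to push log2(1024) = 10 bits per symbol
# through a channel, Shannon's C = B*log2(1+SNR) requires SNR >= 2^10 - 1 even
# with perfect coding. Real systems need extra margin on top of this floor.
import math

for m in (64, 256, 1024):
    bits = math.log2(m)
    snr_min = 2 ** bits - 1
    print(f"{m:>5}-QAM: {bits:.0f} bits/symbol, "
          f"Shannon floor ~{10 * math.log10(snr_min):.1f} dB SNR")
```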

>Performance is whatever
Can we stop this meme?
The tech is horrifically underutilized when not in NVDIMM format, and for its intended niche applications as an NVMe device it still outclasses traditional SSDs. The industry believes in the future it will threaten the DRAM market.

Attached: 2018-07-12-With-SSD.jpg (861x557, 78K)

OK, this is a valid gripe. Managed flash is fucking garbage, but not a lot of people are willing to write a block filesystem that would know how to properly write to a plethora of nonstandard, nonfree shitware MMCs. That being said, companies literally can't even make proper firmware to manage flash. Hell, I've gone through all kinds of SD cards because of how shit the managed flash is.

>SSD defense force on Jow Forums
wow

As opposed to the HDD defense force? Hard drives are so shit they literally can't be defended. I'd take predictable failure over the meme that HDDs are, any day.

Can anyone explain why on Earth we are still on SATA 3? What the fuck is the SATA group even doing? Clearly not working on any new standard.

this is what SSD shills actually believe

Don't even try, kiddo. At least the microcontroller on the flash can keep counters on cells and see them fail. Hard drives have useless metrics (SMART), including nonfree ones that you can't even decode without sekret vendor knowledge.

Yeah, let's see how reliable those dual-head HAMR drives end up being.

SATA 3 still fits 90% or more of use cases outside of enterprise. NVMe exists now, but more than a third of the actual drive manufacturing cost ends up going to just the cache and controller.

>j-just wait

Remember me?

Attached: SATAe-3.jpg (700x525, 92K)

Enterprise switched to high-end SLC drives for critical services the moment they were available. Rebuilding arrays holding HDDs larger than 4TB is terrifying. Anyone with experience knows this.
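A quick sanity check on why big-drive rebuilds are terrifying; the 150 MB/s sustained rate is an assumed, optimistic full-disk average, and rebuild_hours is just a throwaway helper name:

```python
# Why rebuilds on big HDDs are scary: a rebuild/resilver has to read or write
# the whole drive, and the array stays degraded the entire time.
def rebuild_hours(capacity_tb: float, mb_per_s: float = 150.0) -> float:
    return capacity_tb * 1e6 / mb_per_s / 3600

for tb in (4, 8, 12, 16):
    print(f"{tb:>2} TB drive: ~{rebuild_hours(tb):.0f} h best case")
```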

We can compromise, user: let's agree to just hope soldered QLC doesn't end up in consumer laptops, even though we know it will.

Don't understand your pic, but how is waiting 30 seconds on an HDD to do anything better?

>this is what SSD shills actually believe
Have you seen the results of a disk head crash? After one of those, your data is still there... in the dust at the bottom of the drive enclosure. Like the world's hardest ever jigsaw puzzle.

I hope they keep the SATA plug or something similar around. With PCIe 4.0 coming, a single lane of that will be enough for most storage applications while also giving the better IO performance of NVMe, and hopefully they use something similar to SATA cables for it. HDDs are still around, and I doubt consumer use is going to move towards U.2 connectors, and M.2 isn't helpful when you want a lot of devices.
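Rough per-lane numbers behind that claim; these are headline link rates after line-code overhead, not real-world throughput:

```python
# Ballpark of what one PCIe lane gives you vs SATA 3, which is why a single
# 4.0 lane per drive is plenty for most consumer storage.
links = {
    "SATA 3 (6 Gb/s, 8b/10b)":          6e9 * 8 / 10 / 8,
    "PCIe 3.0 x1 (8 GT/s, 128b/130b)":  8e9 * 128 / 130 / 8,
    "PCIe 4.0 x1 (16 GT/s, 128b/130b)": 16e9 * 128 / 130 / 8,
}
for name, bytes_per_s in links.items():
    print(f"{name:<36} ~{bytes_per_s / 1e9:.2f} GB/s")
```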

I would say efficient use. No need to discard the product.
>high end CPU has some fabrication errors, those errors are only noticeable if HT is enabled
>bin it as a midsegment CPU

-Less waste for the manufacturer
-Lower price for consumers that don't need more performance

Win/win scenario

You can kill an SSD by powering it off while it's writing, you dumb shill.

>please poz my asshole

and yet somehow SSDs are STILL more reliable than the abortions that are mechanical spinning disks.

PCIe 4.0 won't make it to consumers for a long time. Consumer NVMe doesn't fully utilize the 3.0 spec as it is.

You realize that SSD manufacturers include additional capacity to make up for this, right? If you buy a 500GB drive, it's actually closer to 540GB or more in raw capacity, because they account for this and for additional failures from writes.
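The arithmetic on that example, using the 500GB/540GB figures quoted above (actual spare area varies by model):

```python
# Overprovisioning arithmetic for the 500 GB example: the controller keeps the
# spare area for wear leveling and for retiring worn-out blocks, so a few dead
# cells never shrink the capacity you see.
user_gb = 500
raw_gb = 540          # figure quoted above; exact spare area varies by model
spare = raw_gb - user_gb
print(f"spare area: {spare} GB ({spare / user_gb:.0%} overprovisioning)")
```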

>source: my ass
dear god, are you even trying?

>Annualized failure rate for enterprise SLC drives getting hammered with writes nonstop
0.44%
>Annualized failure rate for archive HDD inactive 90% of the time
1.65%, with some models up to 29.08%

Is there something you gain from ignoring reality?
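To translate those AFR figures into something intuitive, a small Python calc of the chance a drive survives five years, assuming a constant failure rate (a simplification; real drives follow a bathtub curve):

```python
# Convert the annualized failure rates quoted above into a rough probability
# of a drive surviving 5 years, assuming the rate stays constant.
def survival(afr: float, years: int = 5) -> float:
    return (1 - afr) ** years

rates = [("enterprise SLC", 0.0044), ("archive HDD", 0.0165), ("worst HDD model", 0.2908)]
for label, afr in rates:
    print(f"{label:<16} AFR {afr:.2%} -> {survival(afr):.1%} chance of surviving 5 years")
```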

A proper backup is indeed pretty much rocket science.

>made-up numbers
>reality

Apparently the PCI-SIG said it's okay to ignore 4.0 because PCIe 5.0 is already at revision 0.7 and will be done soon. However, one company did just release a controller for it. So expect PCIe 4.0 x8 NVMe drives soon.

>PCIe 4.0 x8 NVMe
>soon
>for consumers
You're gonna be disappointed, user.

>Reddit spacing

It isn't my fault Samsung is releasing consumer drives faster than PCIe 3.0 x4 can handle.

RAIDZ1, i.e. pooling them with ZFS redundancy; with only two drives it's effectively a mirror.

>PCIe 4.0 x8 NVMe drives
oh god please yes

Attached: 1534471358884.png (246x220, 109K)