QLC is going to replace HDDs

What a time to be alive

>dies
>all data is irrecoverable
WOW GUYS IT'S TOTALLY GONNA REPLACE HDDS ANY DAY NOW
Fuck off, we have this thread every other week and you get btfo'd every fucking time.

>NAND
youtube.com/watch?v=i7urtyV6KGo

Imagine making such a shit argument and thinking you BTFO'd anyone.

>implying
I can't be arsed to put any more effort into my shitposting than OP does. But as the thread moves along he'll get ripped apart, as always, with the same facts that haven't changed since last week, when we had this exact same fucking discussion with some retard thinking QLC of all fucking things is the future of mass storage.

>QLC
kys, i value my data

If QLC is new, then how do we know it's so bad?

that'll be $2000 +tip

contrarians talking out of their ass

If I haven't kicked you in the nuts yet, how do you know that it'll hurt? Come here, faggot.

>imagine being this retarded

So far I've heard that more dense = less reliable, but how do we know if it's the controller dying rather than the memory dying?

>So far I've heard that more dense = less reliable, but how do we know if it's the controller dying rather than the memory dying?
That is a fact, but it's blown out of proportion by these faggots. Also, all SSDs ship with specific endurance specs (TBW ratings).
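
If you actually want to tell worn-out flash apart from a dead controller instead of guessing, the drive's NVMe health log is the place to look. Below is a minimal Python sketch, assuming smartmontools 7.x for the -j JSON output; the field names and the /dev/nvme0 path are just what recent builds emit on my end, so check against your own output. A drive with climbing percentage_used and media_errors is genuinely wearing out; one that simply drops off the bus with clean wear counters most likely lost its controller.

```python
# Sketch: read NVMe wear indicators via smartctl's JSON output to tell
# "the flash is actually wearing out" apart from "the controller just died".
# Assumes smartmontools >= 7.0 (for -j) and that the JSON field names below
# match your build; /dev/nvme0 is only an example device path.
import json
import subprocess

def nvme_wear_report(dev="/dev/nvme0"):
    out = subprocess.run(["smartctl", "-a", "-j", dev],
                         capture_output=True, text=True, check=False)
    data = json.loads(out.stdout)
    log = data["nvme_smart_health_information_log"]
    # Data Units Written is reported in thousands of 512-byte units (NVMe spec).
    tb_written = log["data_units_written"] * 512_000 / 1e12
    return {
        "percentage_used": log["percentage_used"],  # vendor's wear estimate, 0-100+
        "available_spare": log["available_spare"],  # % of spare blocks remaining
        "media_errors": log["media_errors"],        # actual NAND-level errors
        "tb_written": round(tb_written, 1),
    }

if __name__ == "__main__":
    print(nvme_wear_report())
```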

>be 2006
>RAID 10 suffers a dead drive
>1TB drives take 3 weeks to rebuild the array
>another drive dies
>now N+0
>rebuild slows down
>will now take 8 weeks
>8 weeks at N+0
Spinning disks are a 20th-century cancer upon the enterprise and I can't wait for them to be relegated to two-minute bits on I Love the '90s.

>that is a fact but it's blown out of proportion by these faggots
Look up MLC vs TLC vs QLC failure rates, you fucking faggot, before you talk even more out of your ass.

>RAID 10
>No backup ready to replace the failed array
You got what you deserved.

How tf does a rebuild take that long?
Get good, man, it takes like an hour to rebuild a RAID 10 on 1TB drives.

not him but
you buy the drives depending on the endurance you need. It's pretty obvious.

Can't happen fast enough.
Can't wait for affordable 4-8TB SSDs so I can ditch all of my HDDs and dedicate them to long-term backup.

>4D NAND
They have finally managed to stack flash memory cells in the time dimension. What a time to be alive.

And again, look up failure rates. I know damn well SSDs have a TBW rating, but QLC has a, let's favourably call it, tendency to fail well before you're even close to the warranty TBW limit.
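
For reference, the warranty math itself is trivial; here's a minimal sketch with made-up numbers (400 TBW, 50 GB/day) purely for illustration, so plug in your own drive's spec sheet and workload. Which is exactly the point above: when a drive dies early, it's usually not from write exhaustion, because on paper the rated TBW takes a typical desktop workload decades to hit.

```python
# Back-of-the-envelope: how long a warranty TBW rating lasts at a given daily
# write volume. The example numbers (400 TBW, 50 GB/day) are hypothetical.
def years_until_tbw(tbw_rating_tb: float, gb_written_per_day: float) -> float:
    tb_per_year = gb_written_per_day * 365 / 1000
    return tbw_rating_tb / tb_per_year

# A hypothetical 1TB QLC drive rated for 400 TBW, written at 50 GB/day:
print(f"{years_until_tbw(400, 50):.1f} years")  # ~21.9 years on paper
```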

It's rather new, but they will eventually replace spinning disks. There will be HLC one day as well.
HAMR failed constantly and produced absolutely shitty drives that didn't work for the past 10 years, yet within the next 2 years it'll be the only way to make big drives.

This. My MacBook Air SSD died in 5 years (I think it was SLC, since it was one of the first, maybe TLC), meanwhile my Seagate drive still works after 8 years. Fucking Seagate, not even WD or Hitachi.

Can I use this technology to travel back in time, and get my dad a new condom?

>certain combinations of 1s and 0s are no longer illegal on 4D drives

>raid 10 suffers a dead drive
>1 tb drives take 3 weeks to rebuild the array
Anon uses consumer trash gear, and on top of that on consumer trash Windows, and has a bad time. Shocking.

In reality, then and now, you should be done rebuilding in a day or under, even with Linux SW RAID (which is faster now than it was in 2006, but even then it wasn't that slow). Never mind with decent HW RAID.
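
A minimal sanity check on that, assuming a RAID 10 rebuild is essentially a sequential mirror copy: the floor is capacity divided by the replacement drive's sustained write speed. The 80 MB/s figure below is a guess at a 2006-era 1TB SATA drive, not a benchmark.

```python
# Rough RAID 10 rebuild-time floor: a mirror rebuild is roughly a sequential
# copy, so time ~= capacity / sustained write speed of the new drive.
# rebuild_share < 1.0 models the controller reserving bandwidth for live I/O.
def rebuild_hours(capacity_gb: float, write_mb_s: float, rebuild_share: float = 1.0) -> float:
    seconds = (capacity_gb * 1000) / (write_mb_s * rebuild_share)
    return seconds / 3600

print(f"{rebuild_hours(1000, 80):.1f} h at full speed")      # ~3.5 h
print(f"{rebuild_hours(1000, 80, 0.25):.1f} h at 25% duty")  # ~13.9 h
```

Even throttled to a quarter of the bandwidth to keep foreground I/O alive, that's under a day, nowhere near 3 weeks.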

No, it can only time travel in the period of time when it existed, and only to fetch data.

This only applies if you're careful to store them in time periods when they aren't illegal.

>implying I'm complaining about having no backups
Availability matters more to the enterprise than backups. You try soothing the CFO with "but the data is safe" when the reports are running slowly because of the rebuild, let alone not running at all.

>NetApp and EqualLogic are consumer trash
k

But what if I don't set the RTC and the date is Jan 1, 1970?

>not having automatic failover for something this critical
Ask me how I know you're a fucking larping consumer who has never worked in the industry.

Rebuild times of 3 weeks are consumer trash at work. There pretty much isn't even a way to accidentally misconfigure decent hardware to take that long to rebuild.
This almost certainly only happened with trash on Windows where some Windows software handled the rebuild, yes.

How can it be 4D?

Rebuilds shouldn't take 3 weeks for 1TB even with consumer hardware. What the fuck is he doing?

My old Opteron with software RAID took like 30 minutes to rebuild an array.

Then your SSD's 4D components will encode your data based on that timestamp, and when you correct it later you may experience data loss.

post a link for your shitpost, you fat incompetent faggot.

>RAID 0
This always makes me happy to see.

Yeah. Not saying all consumer hardware was that slow, it's just the only conceivable way I can see even a poorly configured Windows box taking that long.

You pretty much need to pair Windows' crappy scheduling back then with a software rebuild of the array. It's basically impossible to make hardware RAID 10 run that slow; even if it was still IDE and 5 years earlier, it wouldn't have slowed down that much.

> 30 minutes
Pretty sure that also wouldn't quite have worked with 1TB drives back then. But I'm sure your array rebuilt quickly. If you made sensible enough choices, it worked fine even on Windows.

>quadruple level cell
fucking WHY
WHY ARE WE GOING BACKWARDS AND MAKING THINGS EVEN WORSE
TLC WAS FUCKING BAD ENOUGH

Because cost per GB is more critical than most other factors. You even see it here.
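
A quick illustration of that with placeholder prices (hypothetical, not quotes): the $/GB column is the one that actually drives bulk-storage purchases, which is why QLC exists at all.

```python
# Toy cost-per-GB comparison; every price below is a hypothetical placeholder.
drives = {
    "HDD 8TB":     {"price_usd": 150, "capacity_gb": 8000},
    "QLC SSD 2TB": {"price_usd": 180, "capacity_gb": 2000},
    "TLC SSD 2TB": {"price_usd": 250, "capacity_gb": 2000},
}

for name, d in drives.items():
    print(f"{name}: ${d['price_usd'] / d['capacity_gb']:.3f}/GB")
```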

>windows software raid
what?
My drives weren't full, but the RAID 10 rebuilt as fast as the write speed of the drive.