Welp, I got a blue screen today. Hard drive is starting to fail.

Does Jow Forums have a hard drive guide? Otherwise I'm just gonna buy this.

Attached: 61ErQF2DqaL._SL1200_[1].jpg (1200x900, 93K)

SSD > HDD

In my experience SSDs die faster than HDDs.

I'm also packing about 4TB of data I'd like to centralize onto one disk.

sounds like you already knew what to buy and didn't really need Jow Forums's help after all then

Attached: 38_MetroBlitz-approaching-Sportpark-18-Jan-1984a.jpg (1024x683, 254K)

If you've got the room, buy two, and mirror them.

Fair enough, I'll explain a little more. WD sells different colored hard drives that all do different tasks. I notice the Blue one seems to be the general one-size-fits-all solution, but I can't help wondering if I should go for a different color. I do a lot of 3D and video work and the files really add up after a while. Just wondering if one of the other types would be more beneficial.

Lastly, my motherboard is from 2011; these hard drives have SATA III versions that I might look into. All I need is a SATA III cable and it will work, right?

Attached: maxresdefault[1].jpg (1280x720, 69K)

Buy two and mirror them.

Dump Windows, as its fake Intel RAID can't deal with bad sectors. It will corrupt shit.

Use Linux with btrfs or ZFS.

Attached: 1521561305846.gif (320x180, 2.15M)
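
If you do go the Linux route, a mirrored btrfs setup is only a couple of commands. Rough, untested sketch below; /dev/sdb and /dev/sdc are placeholder device names, mkfs will wipe whatever is on them, and the mountpoint is made up.

#!/usr/bin/env python3
# Sketch: mirrored btrfs across two drives, data and metadata both RAID1.
# Device names and mountpoint are placeholders; run as root.
import os
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc"]   # hypothetical: the two new drives
MOUNTPOINT = "/mnt/storage"

# One filesystem spanning both disks; every block gets a copy on each drive.
subprocess.run(
    ["mkfs.btrfs", "-L", "storage", "-d", "raid1", "-m", "raid1", *DISKS],
    check=True,
)

os.makedirs(MOUNTPOINT, exist_ok=True)
subprocess.run(["mount", DISKS[0], MOUNTPOINT], check=True)

# A scrub reads everything back and repairs from the second copy when a
# checksum doesn't match -- worth running on a schedule.
subprocess.run(["btrfs", "scrub", "start", MOUNTPOINT], check=True)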

That's very interesting, I did not know that was possible. From what I gather it's basically two hard drives that update at the same time so that they are always identical? Does it take longer to write files if two disks need to be used at the same time? Also would the heat from both disks running for long periods of rendering pose a problem?

Are you sure your blue screen is related to your HDD? What makes you say so?

>From what I gather it's basically two hard drives that update at the same time so that they are always identical?
yes
>Does it take longer to write files if two disks need to be used at the same time?
writes are parallel, I think reads might be faster, not sure though
>Also would the heat from both disks running for long periods of rendering pose a problem?
is this even a thing?
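
To make the mirroring mechanics above concrete, this is roughly what setting one up looks like with Linux md RAID 1. Sketch only; device names are placeholders and mdadm --create destroys whatever is on them.

#!/usr/bin/env python3
# Sketch: Linux md RAID1. Every write goes to both members, and reads can be
# served from either one, which is why mirrored reads can be a bit faster
# while writes are no faster than a single drive.
import subprocess

MEMBERS = ["/dev/sdb", "/dev/sdc"]   # hypothetical pair of identical drives

subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
     *MEMBERS],
    check=True,
)

# /proc/mdstat shows the initial sync and the ongoing health of the mirror.
with open("/proc/mdstat") as f:
    print(f.read())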

He could also use ReFS on top of Storage Spaces inside Windows, for the same effect.

I live in Arizona and summer is coming; also, rendering videos can be an overnight thing, so basically the disks are being written to for about 8 hours nonstop. With two disks in close proximity it might be an issue.


I could be wrong, but I'm going off of history with my computer. I've replaced a few hard drives already; usually when they start to click and I have start-up trouble, it means they are getting ready to go.

Plus they are pretty full right now, and I think that shortens their life span? Heard that somewhere, not sure if it's true.

Fuck that noise.

If you want actual data integrity dodge that (not)NTFS gay shit.

Attached: 1523939097699.jpg (960x720, 34K)

True.
The WD10EZEX is a great drive though.
Single 1TB platter, reliable as all fuck and plenty fast with 2 in RAID 0.

For slow and unreliable effect?
Storage Spaces is actual garbage.

If you want softraid on Windows just use the tried and tested LDM raid, which is faster AND more reliable than Storage Spaces.

Wait did WD get rid of their NAS drives or are they just not listed there?

Grab CrystalDiskInfo (or some other S.M.A.R.T. shit) and look at the hard drive info. It would let you confirm for sure that shit's starting to go awry. There's a version without the anime if that bothers you.

Attached: crystaldisk.jpg (671x429, 71K)
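
If you'd rather skip the GUI entirely, the same attributes can be pulled with smartctl from smartmontools. Rough sketch; /dev/sda is a placeholder, the attribute names are the usual S.M.A.R.T. ones but check what your drive actually reports, and the column parsing assumes the standard smartctl -A table layout (like the dump posted later in this thread).

#!/usr/bin/env python3
# Sketch: print the attributes that actually predict death (reallocated and
# pending sectors, CRC errors). Run as root. smartctl's exit status is a
# bitmask, so non-zero isn't treated as a hard failure here.
import subprocess

DEVICE = "/dev/sda"   # hypothetical: the suspect drive
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "UDMA_CRC_Error_Count"}

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    fields = line.split()
    # Table rows look like: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
    if len(fields) >= 10 and fields[1] in WATCH:
        raw = fields[9]
        print(f"{fields[1]}: raw = {raw}"
              + ("   <-- not great" if raw != "0" else ""))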

Bcachefs > Petty squabbles over the downsides of either tech

Attached: 1523586590580.gif (268x325, 1.96M)

Right, but if he needs to stay on Windows for software compatibility, it achieves the same effect without having to build a storage server on a 10Gb/s LAN.

What about the 4TB one?

If I remember right, I'm supposed to have a small hard drive for the operating system and basic programs and then a large one for file storage? Or does that not matter anymore?

No idea, probably 4x1TB platters though.
Check the HDD database - rml527.blogspot.com.au/2010/09/hdd-platter-capacity-database.html
And the Backblaze blog.
I know about the WD10EZEX because I have 2 in RAID 0 on this machine and several others around; the most reliable disks I've had since the WD6400AAKS.

You can do that better with LDM raid though, without any of the reliability issues that plague Storage Spaces and ReFS in general.
NTFS isn't great, but it's better than ReFS.

Don't buy a Blue; buy a Black or at least a Red. WD uses different classes of mechanisms inside the drive based on the class of the drive. It used to be that the Blacks shared the same internals as the Golds and just had different firmware, for RAID array vs desktop usage. For the durability of the mechanism it was something like:

gold/black
red
blue

You can see it in the MTBF ratings and expected bytes written. I want to mirror (heh) what others say here about using 2 in RAID 1. You want to be sure your case draws air across the drives too.

Good stuff guys, gonna do some last bits of research and then make a decision. Thanks for the help Jow Forums.

This post is wrong.

WD Blue == Caviar SE (i.e. the good drives)
WD Black is just Blues with power management turned off and AAC set to loud.

Double-check with SMART to make sure. It seems to me this is a recurring problem on your computer.

Blues typically are 5400 RPM drives with a 2-year warranty. Blacks are 7200 RPM drives with a 5-year warranty. And the Blacks are faster, for about 4W in extra power per drive.

SSDs haven't existed long enough to prove their long-term durability.
I have HDDs from '01/'05/'07 still running.
Granted they are cold storage, but still, they are over a decade old each.

SSDs from 4 years ago massively failed, and even actively used drives two years old are rare.

I use them for scratch disks or swap or a read-only system, but never for my /home/ or storage.
They are nice for mobile though because of shock/power/noise, but better to have an HDD base station at home with synced backups.

Instead of buying 1 high-capacity hard drive, buy multiple.
For example, if you want 4TB of storage, don't buy a single 4TB drive. Buy 4x1TB drives. Why?

The answer's simple. After reaching about 1TB, hard drives must add on more platters. More platters = more moving parts. More moving parts = more likely to fail.

Don't put all your eggs in one basket.

Only a single set of WD Blues has been 5400 RPM.
They are not 'typically' that at all.

Blacks are faster than 7200 RPM Blues because, as I said, power management is disabled and AAC is set to loud.

Did you buy OCZ drives like we told you not to?
Still using an Intel 320 that's ~6 years old now.

SMART stats screenshot now.
I bet it's barely been used.

Greens have been dropped for Blue since Blue does the same shit as Greens.

I am looking at the Blue datasheet now. The entire range - 6TB, 5TB, 4TB, 3TB, 2TB - is '5400 RPM class'.

There are only 7200 RPM class Blues in 1TB, 750GB, and 500GB.

It's surprisingly hard to find info on these, but IIRC the 4TB Black drives had dual actuator heads, while the Blues had only single actuator heads.

Gimme a minute, it's on another machine.

WD doesn't list every WD blue they've ever made on the spec sheet for currently sold products...
rml527.blogspot.com.au/2010/10/hdd-platter-database-western-digital-35.html

ONLY buy Seagate hard drives
keeping your Seagate hard drive in a static enclosure is bad for its health, after all, the manufacturer needs to protect it with an anti-static bag
you need a kinetic enclosure for your Seagate hard drive, one that vibrates in sync with the rotation of the platters

If you care about your data at all then use this combo: UPS + RAID + Backup.

UPS = allows a safe shutdown in the event of a mains power failure, prevents the RAID "write hole", eliminates RAID rebuilds due to improper shutdown, protects against spike and surge damage, and helps keep your data error free.

RAID = large volume creation with protection, so the volume can be backed up in time before the whole volume and its data are lost for good due to a failure.

Backup = in case the RAID volume or the whole server goes tits up, your data is safe. Keep the backup powered off when not in use.

It's your data; how important it is and how much time it would take to recreate it all are questions you gotta ask and deal with. NTFS/ZFS - the file system itself doesn't mean shit when the hardware it all runs on shits itself due to a failure. Yeah, ZFS has nice things, but again it's all tied to hardware. Hardware can fail; in the end it's your backup that will save you.

Keeping that backup error free means that your foundation (the server itself) must be stable and protected. Otherwise the errors creep into your backup, which over time means the backup as a whole is worthless. A good measure is to keep two backups: an archive one (core stuff you write once and forget about till you need it), and a regular one (monthly/weekly) that you keep constantly updated with new data from your server. But again it's all tied to your foundation (the server itself); it must be solid or it's all worthless.
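
For the "regular backup you keep constantly updated" part, something like dated rsync snapshots with --link-dest does the job: each run is a browsable copy, but unchanged files are hard-linked against the previous run so they don't cost space twice. Sketch only; the paths are made-up placeholders for whatever your server and backup disk actually look like.

#!/usr/bin/env python3
# Sketch: dated, hard-linked rsync snapshots onto a backup disk.
import datetime
import os
import subprocess

SOURCE = "/srv/data/"                  # hypothetical: data to protect
DEST = "/mnt/backup"                   # hypothetical: backup disk, powered on for the run
snapshot = os.path.join(DEST, datetime.date.today().isoformat())
latest = os.path.join(DEST, "latest")

cmd = ["rsync", "-a", "--delete"]
if os.path.exists(latest):
    cmd += ["--link-dest", latest]     # reuse unchanged files from the last snapshot
cmd += [SOURCE, snapshot]
subprocess.run(cmd, check=True)

# Point "latest" at the snapshot we just made, for the next run to link against.
if os.path.lexists(latest):
    os.remove(latest)
os.symlink(snapshot, latest)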

the colors are all bullshit
they sell 7200 rpm blue drives, 10,000 rpm black drives, 5400 rpm blue drives, and 7200 rpm black drives
just pick the one that has the correct features

Better yet, buy Hitachi drives.

Model Family: Intel 320 Series SSDs
Device Model: INTEL SSDSA2CW120G3

Firmware Version: 4PC10362
User Capacity: 120,034,123,776 bytes [120 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
ATA Version is: ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 2.6, 3.0 Gb/s

SMART overall-health self-assessment test result: PASSED

SMART Attributes Data Structure revision number: 5
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0032 100 100 000 Old_age Always - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 14567
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 5936
171 Program_Fail_Count 0x0032 100 100 000 Old_age Always - 0
172 Erase_Fail_Count 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0032 100 100 090 Old_age Always - 0
199 CRC_Error_Count 0x0030 100 100 000 Old_age Offline - 0
225 Host_Writes_32MiB 0x0032 100 100 000 Old_age Always - 135467
226 Workld_Media_Wear_Indic 0x0032 100 100 000 Old_age Always - 1012
227 Workld_Host_Reads_Perc 0x0032 100 100 000 Old_age Always - 35
228 Workload_Minutes 0x0032 100 100 000 Old_age Always - 876078
232 Available_Reservd_Space 0x0033 100 100 010 Pre-fail Always - 0
233 Media_Wearout_Indicator 0x0032 100 100 000 Old_age Always - 0
241 Host_Writes_32MiB 0x0032 100 100 000 Old_age Always - 135467
242 Host_Reads_32MiB 0x0032 100 100 000 Old_age Always - 73609

SMART Error Log Version: 1
No Errors Logged

4233GB written.

Hurry up and photoshop those SMART stats ishmael "israel inside"

No photoshop, couldn't fit the entire smartctl output in a single post though.
Had to compile smartmontools for OSX.

Waste of digits
The reds are right there, and you also have purple which are meant for surveillance

That's not a screenshot rabbi

I'm not paying 60 burger shekels for a gui SMART utility on OSX.

Great drives, but I prefer Toshiba
>inb4 Durr they got bought out by WD
Calling bullshit, had a 330 die on me out of nowhere and the only thing I had on it was Firefox

1. absolutely proprietary
2. cucked
3. that SSD model has a common controller failure so quit shilling
4. Available_Reservd_Space 0x0033 100 100 010 Pre-fail Always - 0

>0
Don't do this on flash media retard.
>what is TRIM

Western Digital Blues are pretty good. The WD Black ones are the best WD makes.

DON'T buy WD Green or Shitgate ones.

It sounds like you don't have an SSD at all. If that's the case then you should definitely buy one. Startup times for programs are way shorter; you'll go from waiting for programs to start to having them ready instantly. An SSD really does make that much of a difference. Put Windows and your programs on the SSD.

Buy a minimum of two drives of the same size, but not from the same manufacturer (and not the same type of drive if you do buy from the same manufacturer), and put them in RAID 1 if you just have two, or RAID 5 if you buy 3. Use this for your storage. Music, movies and files like that don't need fast access times, and neither do documents. RAID 1 will give you the write speed of the slowest drive since it writes your data to both drives. Random read speeds will be faster since data will be read from both drives, but one single file will only be read from one drive (it doesn't improve the read speed of one huge file).

Don't worry about SATA standards; drives are backwards compatible and so are motherboards. You'll just get limited to the speed of the lowest part (a SATA3 drive on a SATA2 motherboard gives SATA2, a SATA2 drive on a SATA3-capable motherboard gives SATA2, both work fine).

Also, heat isn't a problem. Drives get hot, sure, but copying 4TB to a 9TB 4-drive RAID array in one go doesn't cause any problems.
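
On the RAID 1 vs RAID 5 sizing above, the capacity math is simple enough to write down. Quick back-of-envelope helper; nothing here is specific to any vendor or controller, the drive sizes are just whatever you buy.

# Back-of-envelope sizing for the RAID 1 / RAID 5 advice above.

def usable_tb(level, drives):
    """Return (usable capacity in TB, drives you can lose) for RAID 1 or RAID 5."""
    size = min(drives)                      # the array is limited by the smallest member
    if level == 1:
        return size, len(drives) - 1        # one full copy per member, all but one can die
    if level == 5:
        return size * (len(drives) - 1), 1  # one drive's worth of space goes to parity
    raise ValueError("only RAID 1 and RAID 5 handled here")

print(usable_tb(1, [4.0, 4.0]))        # (4.0, 1): two 4TB drives mirrored
print(usable_tb(5, [4.0, 4.0, 4.0]))   # (8.0, 1): three 4TB drives in RAID 5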

They call it a WD blue screen for a reason

Meh, it's my laptop; Linux is markedly slower than OSX on the same machine.
Got 2x Intel 520 120GB and a 240GB Intel 730 too on my desktop.
All still working.

As are my Samsung 850Evo and 960Pro - but they're far too young for failure yet anyway.

Call it whatever you want mate.
Doesn't stop it from being the truth.

Attached: main machine.png (1887x1376, 578K)

SSDs fail along a bathtub curve. If your SSDs fail differently, then get a different brand.

Attached: 350px-Bathtub_curve.svg.png (350x247, 19K)
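
The bathtub shape is just three failure modes added together: early (infant mortality) failures that fade out, a flat background rate of random failures, and wear-out failures that climb late in life. The constants below are made up purely to show the shape, not real failure rates for any drive.

# Toy bathtub curve: falling + flat + rising hazard terms.
import math

def hazard(t_years):
    infant = 0.08 * math.exp(-t_years / 0.5)     # early failures fade out quickly
    random = 0.02                                # constant background rate
    wearout = 0.01 * math.exp(t_years - 5.0)     # climbs steeply after ~5 years
    return infant + random + wearout

for t in [0.1, 0.5, 1, 2, 3, 4, 5, 6, 7]:
    print(f"{t:>4} yr  {hazard(t):.3f} failures/drive/yr")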

I've also been considering the WD Blue hard drives. I was going to get the 4TB one, but it is only 5400 RPM, so I will probably be getting two of the 1TB 7200 RPM Blues. Half the storage, better performance.

120 IOPS +33% is still only 160 IOPS.

SSDs number in the tens to hundreds of thousands of IOPS.

Get a hard disk with a larger cache, use a Linux kernel with RAM caching, or use bcachefs.
Those are your performance options.
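
For anyone wondering where a figure like 120 IOPS comes from: one random read costs roughly an average seek plus half a platter rotation. The seek time below is an assumed ballpark, not a measured spec for any particular drive.

# Rough random-read IOPS estimate for a spinning disk.
rpm = 7200
avg_seek_ms = 4.2                        # assumed, optimistic short-stroke seek
half_rotation_ms = 0.5 * 60_000 / rpm    # ~4.17 ms at 7200 RPM

iops = 1000 / (avg_seek_ms + half_rotation_ms)
print(f"~{iops:.0f} IOPS")               # ~120; a full-stroke 8-9 ms seek lands nearer 75-80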

>boot drive is HDD

Hello 3rd worlder.

They're not bad in md raid0.

Attached: wd10 100mb.png (628x571, 172K)

How's the latency look on a single random read spatter benchmark of one disk?

Don't have a graph saved and it's a bit hard to do it now since the array is busy.
Sorry mate.
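
For reference, a single-disk random-read latency run of that sort could be done with fio, something along these lines. Device path is a placeholder; the job is read-only, but double-check before pointing it at anything you care about.

#!/usr/bin/env python3
# Sketch: 4K random-read latency on one disk with fio, queue depth 1,
# bypassing the page cache so you measure the disk rather than RAM.
import subprocess

subprocess.run([
    "fio", "--name=randread-latency",
    "--filename=/dev/sdb",        # hypothetical: the one disk under test
    "--rw=randread", "--bs=4k",
    "--iodepth=1", "--direct=1",  # one outstanding request, no page cache
    "--ioengine=libaio",
    "--runtime=30", "--time_based",
    "--readonly",                 # extra guard: refuse any writes
], check=True)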

Do SSDs really die faster?
I've had my Inspiron for 5 years now and it has an HDD. Been looking to upgrade and almost all the laptops I've been looking at have exclusively SSDs, 1 drive. I'll be doing almost all of my work on it and would like my new laptop to last a long time too. Should I skip the SSD? I'll be starting my master's soon and I do a lot of development and calculation stuff.

No.
techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

An HDD is more likely to die sooner due to wear on mechanical components than even a ~2013-2014 and beyond SSD.

They don't die from endurance.

They all just die randomly.
Ask my pile of 6 SSDs with bad sectors and CRC errors out the ass.

Hey pile of SSDs, what brand and model were you?

How long did it take before they died? Days, months, years? Have you checked the power supply?

Hi human, I'm just a bunch of no-name generic chink trash from eBay. My owner is a fucking mongoloid and he seems to think that all SSDs say, from Samsung are the same as my xhohxuahxhioeo $20 SSD shipping from a shed in China.

>posting report from 2015
you're probably not buying drives manufactured in 2015

Why do Seagate HDDs have a bad rep?

Even less likely to fail.

A string of bad models.
They weren't as bad as Deathstars, but they weren't good.

What is WD fucking made of? I have a WD Black 2TB external from over 10 years ago that is still running fine (it's showing its age with some slower read times, but it still works once it's "heated up").

Is it still a thing?

Under normal usage, no. In an environment where you fill them up with data on a daily basis, they die way faster than mechanical disks, unless you use an enterprise variant.

At work we take terabytes of data at a time and process the hell out of it. I purchased a 1TB Crucial and ran it out of endurance in about a year. I recovered all the data off of it with a disk recovery program. It refused to write any more sectors, leaving it with a slightly corrupted filesystem. SMART data confirmed what happened.
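
The "about a year" figure lines up with simple TBW math. The endurance rating below is an assumed round number for illustration, not the actual spec of that drive, and the daily write volume is a ballpark.

# Rough endurance math: rated terabytes written / daily writes = lifespan.
drive_tbw = 360            # assumed endurance rating, terabytes written
daily_writes_tb = 1.0      # assumed ~1 TB/day of ingest and processing

days = drive_tbw / daily_writes_tb
print(f"~{days:.0f} days, i.e. about {days / 365:.1f} years before the NAND is spent")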

Nah, they're alright again now.
Slower than WD though.

>OS on HDD
What year is it?

Next time get one with 3D NAND; those have more endurance. However, you must really have been pounding the hell out of it, with like 24/7 max R/W, to get it to die so fast.

A Samsung
Three Crucials
An Intel
And a LiteOn

Years. Used in various bits.
Now they sit in a substantial ZFS RAID 10 to prevent shedding sectors from damaging files.
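
For what it's worth, a pool like that is one command in ZFS: mirrored pairs striped together. Sketch below with placeholder device names; it will eat the disks it's given.

#!/usr/bin/env python3
# Sketch: "RAID 10"-style ZFS pool -- two mirrored pairs striped into one pool.
import subprocess

subprocess.run([
    "zpool", "create", "tank",
    "mirror", "/dev/sdb", "/dev/sdc",   # first mirrored pair (placeholders)
    "mirror", "/dev/sdd", "/dev/sde",   # second pair, striped with the first
], check=True)

# Periodic scrubs are what actually catch shed sectors before they spread.
subprocess.run(["zpool", "scrub", "tank"], check=True)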

7200.11

>WD Blue
There you go.

I got an IronWolf lemon but got it RMA'd after some bitching.
This was last year and I guess it's within normal badness.

Sure. Don't buy WD or Seagate HDDs.

Why get Blue? Black has 3 more years of warranty over the Blue's 2. That alone makes me cautious.

Blacks are just as quiet and data retrieval is quicker.