1TB HDD after 5 years

Attached: Screenshot_2.jpg (1919x1042, 649K)

nice seagate bruh lmao

>implying that's a seagate
>no bad sectors

I'm curious, would a bigger block size reduce this kind of fragmentation? Like 16KB instead of the default 4KB allocation unit?

my WD black looks like that after 9 years

>no blue sectors
What's wrong with it?

Very slow, and sometimes files come up unreadable

I don't know, I usually use the default settings

Attached: Screenshot_3.jpg (1225x528, 88K)

If this thread is still up I'll post the HDD Sentinel surface scan results for both of my 1TB Seagates.
For those who don't know, it takes hours.

Nice WD

Ok. Nice

The Apple Macbook Pro with Retina Display doesn't have this problem.

what program is that?

>macshit
Found your problem.

Attached: 1538237486558.jpg (4032x3024, 1.52M)

HDDScan hddscan.com/

have you ever defragmented your HDD?

Try this on Linux before you blame the drive.

yes, once or twice in that time

rate my quads

Attached: wd.png (685x538, 90K)

I have a 1tb HDD from 2010 I've been using this entire time

If you're storing mainly large files, use a large block size. If you're storing small files you should use a small block size.

uwu

Attached: qLaFboM.jpg (2004x1398, 581K)

Not what I'm saying M80.

does a larger block size (i.e. 16KB vs 4KB) = less fragmentation, regardless of small or large files???

nice/10

larger block sizes will increase internal fragmentation, especially with small files. It also means more shit gets fucked when a sector or block goes corrupt

there is no "regardless of small or large files"

if you're mainly storing small files, use a small block size; if large files, a large block size

Attached: prepare me sides we're going for a rides.gif (320x180, 2.11M)

I'm genuinely confused here, because say I had a theoretical HDD of just 4096KB; my understanding is:
4KB block size = 1,024 total blocks, and a 40KB file = 10 blocks

16KB block size = 256 total blocks, and a 40KB file = 3 blocks (2.5 rounded up)

So how can there be more fragmentation with a larger block size? I'm honestly confused here.
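
FWIW the thing the other anon means is internal fragmentation: slack space wasted inside the last block of every file, not the file being scattered across the platter. A quick bash sketch using the toy numbers from the post above (4096KB disk, one 40KB file):

disk_kb=4096
file_kb=40
for block_kb in 4 16; do
  blocks_total=$(( disk_kb / block_kb ))
  blocks_used=$(( (file_kb + block_kb - 1) / block_kb ))   # round up to whole blocks
  slack_kb=$(( blocks_used * block_kb - file_kb ))         # space wasted in the last block
  echo "${block_kb}KB blocks: ${blocks_total} total, file uses ${blocks_used}, wastes ${slack_kb}KB"
done

It prints 0KB wasted at 4KB blocks and 8KB wasted at 16KB blocks; multiply that by thousands of small files and that's the cost being talked about.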

>using a filesystem that needs to defragment
>Not using an SSD

Brother?

Attached: 45745753.png (1039x903, 786K)

>raw values not in decimal
puh lease

>relying on brainlet snakeoil "tuning" software

Attached: 34ba565.png (1920x1080, 1.33M)

Window title says wdc

single platter master race

When you have >100GBs of rarely accessed files, some that haven't been touched in over 3 years, and that one perfect image for a reply is corrupted, you won't be laughing.

Disk surface refresh scans completely work. Drift and strength loss of the magnetic charge are not a myth. Also, total reinitialization scans (a complete format and a sector-by-sector patterned overwrite) have saved several of my HDDs from weak sectors turning into bad sectors.
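
If anyone wants to try the patterned-overwrite part on Linux, badblocks can do it. A rough sketch, assuming the drive shows up as /dev/sdX and nothing on it matters anymore (the -w test is destructive):

# DESTRUCTIVE: writes the 0xaa/0x55/0xff/0x00 patterns over the whole drive and reads each one back
badblocks -wsv -b 4096 /dev/sdX
# then see whether the drive quietly remapped anything while it was at it
smartctl -A /dev/sdX | grep -i -e reallocated -e pending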

>When you have >100GBs of rarely accessed files, some that haven't been touched in over 3 years, and that one perfect image for a reply is corrupted, you won't be laughing.

Since I am not a literal retard, I have file hashes of all my files on the disk and the backup disk.
All I have to run is a hash check to verify integrity, so what you described would only happen to a retard like yourself, since you probably have no SHA256 sums of all your files.
Stop talking to me, brainlet. Your tech illiteracy is giving me a migraine already.

Attached: 3704a222.gif (480x252, 1.37M)
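
For anyone wanting the same setup: the manifest really is a one-liner each way. A minimal sketch, assuming the files live under /mnt/archive (path is made up):

# build the manifest (rerun after every batch of new files)
find /mnt/archive -type f -print0 | xargs -0 sha256sum > ~/archive.sha256
# verify later; --quiet only prints files that failed or went missing
sha256sum -c --quiet ~/archive.sha256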

Actually, that ALSO fragments, but in this case random 4KB reads are so batshit fast (like 1,000x faster) that it's really hard to notice. This is why "defragmenting" an SSD gives no noticeable performance improvement.

I thought it had more to do with how a NAND array is naturally interleaved, and on top of wear-leveling effects, having files spread across the multiple chips is what gives better performance. Kind of like multiple RAM channels - you get the most bandwidth by having every channel contain a portion of your working data, instead of filling up one DIMM at a time.

Basically, fragmentation just doesn't hurt an SSD, so the drive should be left alone.

If that's what you call retarded I'd rather not have your level of autism.

>presents a working integrity checking solution by running a single line of bash code
>"hurr autism"

Attached: 1a6a6614.gif (298x224, 3.59M)

And pray tell how often you hash check your 100s of GB, and what you do to prevent the backup from losing bits when it too has very large portions which are rarely touched, if at all.

Attached: mandelpepe.gif (720x720, 1.44M)

obviously before and after backups, most of the time whilst I sleep since it is automated

en.wikipedia.org/wiki/Backup_rotation_scheme#Tower_of_Hanoi

Hey, I don't want to scold you too hard, frogposter. But don't try to mock me; I basically rotate backups the way IT companies do when they follow the data security practices defined in ISO standards.

Attached: c0a2757a.jpg (765x1138, 772K)
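
The Tower of Hanoi scheme from that link boils down to the ruler sequence: on backup number n you grab the set whose index is the number of times 2 divides n. A toy bash sketch with five sets A-E (the names are made up):

sets=(A B C D E)
for day in $(seq 1 16); do
  n=$day; i=0
  while (( n % 2 == 0 )); do n=$(( n / 2 )); i=$(( i + 1 )); done
  (( i >= ${#sets[@]} )) && i=$(( ${#sets[@]} - 1 ))   # the oldest set absorbs the overflow
  echo "day $day -> set ${sets[i]}"
done

Set A ends up used every 2nd run, B every 4th, C every 8th, and so on, which is exactly the staggered retention the scheme is after.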

Are you telling me you constantly perform full disk images to back up shit you'll forget exists in a year?

Well within weekly timeframes, since even conventional HDDs have a rated annual workload limit. Check your disk manufacturer's spec sheet; it generally depends on whether the drives are consumer, "NAS" or enterprise grade.
You can back up as often as you feel like and your hardware can handle.
Remember, it is automated and I sleep. I spend none of my lifetime on the process: I swap a drive into the backup bay, hit start and nap away.

Attached: b7c0b531.jpg (720x540, 50K)

Not the same anon, but the mdadm / snapraid arrays I have are checked monthly, and the backups are verified on every bi-weekly run before the old backups get pruned (staggered retention on mostly differential backups).

The checks also run at the next opportunity if the machine was offline at the time they were originally scheduled. It's really not a problem at all to set up.
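
For reference, the check commands themselves are tiny; scheduling is just whatever cron/systemd timer you like, set to catch up after downtime. A sketch assuming the md array is /dev/md0 and snapraid is already configured:

# start an mdadm consistency check (watch progress in /proc/mdstat)
echo check > /sys/block/md0/md/sync_action
# scrub a slice of the snapraid array against its parity; run it regularly to cover everything
snapraid scrub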

Don't talk to me with that "lol u must not understand MTBF statistics or manufacturer specifications" shit.
You're imaging whole drives on a weekly basis for personal data.

That's so autistic it made my nephew with Ch.16 deletion syndrome ask me why some faggot on the internet is trying to brag.

You're only mildly less autistic.

Typical HDDs are solid for months at a time minimum.
If you're talking about drives you use for work/projects, protect your income, sure. If you're talking about just personal bullshit, you have an OCD problem.

Attached: 1468123966068.gif (250x200, 346K)

Get a UPS and stop battering your equipment.

>Not storing your chinese reaction bitmaps using raidz1

Attached: 1530619907429.jpg (1619x1725, 474K)

>MTBF statistics
are not a maximum annual write figure, you brainlet

Attached: explained for retards.png (731x594, 126K)

> Typical HDDs are solid for months at a time minimum.
"Solid"?

And either way, you're ultimately testing these checksums against a possible failure in ALL the HDDs involved, your other hardware, even software. That's a higher probability of something going wrong at some point [or even constantly] than for just a single HDD. Doing it only bi-annually is possible, but might not be such a good idea.

> If you're talking about just personal bullshit, you have an OCD problem.
No. It's just a decent guarantee that my hardware redundancy and my backup copies still exist.

The impact of letting a computer run these checks regularly is very low anyhow; they run automatically. So why would you set the period to something super long like bi-annually / annually and risk anything more? The periods I chose seem very reasonable to me.

Don't bother, he might be trolling us both. Nobody can actually be as low IQ as him.

Attached: kurt is having a laugh.png (900x900, 517K)

zpool scrub $POOL
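
Worth adding for anyone copying that: the scrub runs in the background, so check the result afterwards with something like

zpool status -v $POOL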

Can someone explain this image to a brainlet like me?

Some HDD whose sectors are becoming unreadable