Why the fuck doesn't anyone make hard drives bigger than 14TB?

I just want somewhere to store all my porn and user folders without having them divided across multiple drives.

Attached: seagate-ironwolf-pro-14tb-hard-drive-100773646-large.jpg (700x467, 38K)

>what is RAID
t.brainlet with a fap problem

because that's dumb

the more data it stores, the more data you lose if it fails, so it's better to have it spread across several drives

how much porn do you need, you fiend

I currently have 2 8TBs in RAID 1. I need more though, but I don't want to have half my stuff in one volume and the other half in another.

Fuck man, get an LTO-8 tape; you can get sizes ranging from 12TB native to 30TB compressed.

>14TB Seagate
yikes

Seagate are releasing 16TB drives this year, only the brave would put all their data on that.

there are also non porn things to store

Just buy two and put them in raid 1????

wait, I thought they were releasing the 20s with heat assist and 2 arms this year

Set up RAID.
Cheap 4TB drives at $100 each, RAID6, rtorrent checks 20GB in one minute and total space is 10+ TB - it's great, although it felt underwhelming while I was setting it all up.
Just purchased an HBA card and a hydra SAS cable, ready for the expansion.
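A setup like that boils down to a couple of commands. A minimal sketch, assuming five hypothetical 4TB drives at /dev/sdb through /dev/sdf (device names, filesystem, and mount point are all placeholders, not what that user actually ran):

```shell
# RAID6 over five 4TB members: 3 data + 2 parity, ~12TB usable.
mdadm --create /dev/md0 --level=6 --raid-devices=5 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Filesystem on top, then mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage

# Watch the initial sync (and any later rebuild) progress.
cat /proc/mdstat
```

Extra drives off the HBA later are an `mdadm --add` plus `mdadm --grow --raid-devices=N` away.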

Good luck with resyncing that onto another 14TB HDD.

why the hell do you need more than 14TB

Why the fuck does anyone need 14TB of ones and zeros?

Attached: Waterhouse-Diogenes.jpg (3202x5000, 2.03M)

By going two drives over one, you double the chance of a fatal error. A big drive with a big backup drive is safer than having it split over like 4 smaller ones.

> by going two drives over one, you double the chance of a fatal error
> 2 drive bad
> backup good
This is NOT about RAID0.

Porn

i didn't imply that
by having more HDDs you increase the chance of one going belly up
user /a/ has 3 drives
user /b/ has 12 drives
guess who is more likely to lose one

Seedbox

Drives are consumables. Guess which user in your example is more likely to lose data.

Is there an intermediate solution? Say I buy 3 or 4 4TB drives - is there a setup that lets me sequentially read and write to multiple drives at the same time but still keeps the drives somewhat in sync, to avoid losing data?

What the fuck are you even typing

i don't know what that guy is typing, but you can run RAID 5 or 6 across 12 drives, so your data is more secure than on 3 drives

Attached: Adorable.jpg (740x740, 93K)

HDDs aren't indestructible. They come and go and you should expect it to happen.

>Waah i don't want things divided over multiple drives
Merge them then you ponce.

>But it's still physically two different--
What, you think there's just one thick disk in those things?

Attached: 1357725402655.gif (160x120, 763K)

you want RAID 1 user

To download and store a single modern videogame.

RAID 1 and RAID 6 seem to be roughly equivalent for my case, would RAID 6 be an overcomplication over a simple RAID 1 over 4 disks?

So you add more drives and you use RAID 5/6... or snapraid.

3 drives: RAID1 and an offline backup disk
4 drives: RAID10

Guys. I want a Nas. What should I do? And what's the cheapest/tb drives that won't shit the fan on me. Is wd still the go to?

>downloading porn

RAID 6 is a hell of a lot more stress on the RAID controller. Also way more complex and riskier if you need to rebuild. RAID 1 with an off-site overnight backup is your best bet.

>hardware raid
>not zfs
plebs

>8TB drives in raid5
Might as well go with RAID0. Pretty much as redundant, but you'll get mad speedz.

Then put them in a stripe or preferably JBOD and accept possible loss of data.

>14TB
How many platters is that? I usually buy the biggest single-platter drive around.

> Pretty much as redundant
Uh, no. The ability to lose one drive without losing data is big.

> you'll get mad speedz
Not much of a difference from RAID5. Use typical Linux mdadm RAID or such if you care about it working well.

Not some trash proprietary implementation tied to the onboard BIOS or Windows software RAID.

>Uh, no. The ability to lose one drive without losing data is big.
No it's not. You're pretty much playing russian roulette during rebuild with your placebo redundancy.

Probably a physical limitation because of how platter drives work...
The real question is:
>why would you want to scrub through literally terabytes of data just to get to your sectors?

Because they haven't invented it yet.
100TB drives are scheduled for 2025.

Doesn't take that long on ZFS resilvering as long as you have enough RAM

Didn't fucking look very hard did you?
30.72TB enough?
news.samsung.com/global/samsung-electronics-begins-mass-production-of-industrys-largest-capacity-ssd-30-72tb-for-next-generation-enterprise-systems

>SSD
That probably costs $10k at least.

tape it man

more like 15k, kek

pretty much this
I have 8TB of movies and I've maybe rewatched like 10 of those. Same with games: since new ones come out all the time, I never really watch the old ones. There is no point in saving anything these days, since everything is available legally on streaming services for like 10 bucks a month.

>he doesn't run a cluster of nosql databases
>he doesn't split his data over many nodes with many drives
>he's asking for data loss

Attached: serveimage.jpg (300x168, 8K)

t a p e
a
p
e

Uh, you simply rebuild for ~10h or so, and then it's back up to one drive of redundancy.

This would be Russian roulette only if the revolver had a 150k-capacity magazine with one bullet in it.

It will take around 1 day or so with mdadm, and it will likely resume the rebuild even after it's interrupted by a poweroff.

But keeping a machine powered for a day isn't an extreme feat anyhow.

>exerting this much effort, money, and time to store porn

Go for 6, muthafuggas.
I lost two 3TB drives in one month; pulled the 2nd one when the SMART test failed and the last RAID check ran at 2MB/s. Wouldn't trust RAID5 when disks are over 2TB now.

> a cluster of nosql databases
Uh, you know distributed filesystems exist, right? LizardFS/MooseFS. Ceph. SeaweedFS. XTreemFS.

Or you can just do more domestic replication with Syncthing; it can even do versioning with staggered retention.

>what is ure
nigger pls. besides, you're looking at a lot longer rebuild times with 8TB drives.

> and the last RAID check had 2MB/s speed
This to me indicates you are using some shit software RAID implementation no one should ever use (and not just for performance reasons).
It's probably more dangerous than a degraded RAID5 in itself.

Use long-tested Linux mdadm RAID [the industry standard for NAS boxes and RAID in general, really] or Snapraid. They'll also rebuild at ~100MB/s on an Atom / ARM SBC and at full speed on pretty much any better desktop CPU if you don't rate limit.

>all this just to watch pixelated japanese pussy on a screen

> Use long-tested Linux mdadm RAID
That's what I use. You don't get it. I tested access speed with Victoria later; many sectors of that HDD had access times over 500ms and even over 1.5s, so probably one of the platters is dying. It's not dead yet de facto, since it's readable, but you should understand why I consider it dead.

>besides, you're looking a lot longer rebuild time with 8TB drives.
You might be looking at 11h to 1 day rebuilds depending on how fast your HDD is (typical HDDs have raw speeds of 150-250MB/s, and on a decent setup rebuilds run at about that minus 30-50MB/s).

Of course it's longer if you rate limit, but maybe you should simply do full speed rebuilds with RAID5.
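Those numbers are easy to sanity-check with back-of-the-envelope math over the raw speeds quoted above (pure sequential arithmetic; real rebuilds run somewhat slower):

```shell
# Time for a full-speed pass over an 8TB drive at typical raw HDD
# speeds. 8TB = 8,000,000 MB in decimal drive-maker units.
capacity_mb=$((8 * 1000 * 1000))
for speed in 250 150; do                  # MB/s, fast to slow end
    seconds=$((capacity_mb / speed))
    echo "${speed} MB/s: ~$((seconds / 3600)) h"
done
# 250 MB/s works out to ~8-9 hours and 150 MB/s to ~14-15 hours,
# which lines up with the 11h-to-1-day figure once overhead is added.
```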

>what is ure
Something that has no greater chance of appearing during the rebuild than while the array was up.

Obviously finding these and other corruption is part of the periodic scrubs.

That said, even if you got a URE, you lose one block of data, maybe 2MB. (Caveat: DON'T USE TRASH ONBOARD SOFTWARE RAID! Use Linux mdadm RAID or snapraid. They won't kill themselves over these things.)

A single 2MB block loss will likely not be disastrous in typical home usage. Of course, if it would be for you, use RAID6.

>what the fuck is RAID

>That's what I use.
Good job then. Now check:

cat /proc/sys/dev/raid/speed_limit_min

#replace "X" in mdX
cat /sys/block/mdX/md/stripe_cache_size

mergerfs
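For the curious: mergerfs pools existing filesystems under one mount point without striping, so a dead drive only takes its own files with it. A minimal sketch with hypothetical mount points:

```shell
# Pool /mnt/disk1 and /mnt/disk2 into a single tree at /mnt/pool.
# category.create=mfs sends new files to whichever member has the
# most free space; each underlying filesystem stays independently
# readable if the others die.
mergerfs -o defaults,allow_other,category.create=mfs \
    /mnt/disk1:/mnt/disk2 /mnt/pool
```

People often put snapraid on top of a mergerfs pool to get parity back.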

Ah and also check
cat /proc/sys/dev/raid/speed_limit_max

Of course. If it's fucked beyond that, I wonder how that happened only just now and not on prior scrubs.

Either way, you'd just be one of the very unlucky people. Sort of like winning a bad lottery. Nothing wrong with using RAID6, you are not the usual case with RAID5 either.

Get moar drives and use raid0

cat /proc/sys/dev/raid/speed_limit_min
1000
cat /sys/block/md127/md/stripe_cache_size
256
speed_limit_max
200000

That's another RAID tho, I stopped that one and will do something else with disks, maybe RAID0 for unimportant info. However, both were built with default settings, so probably they're the same.

>tfw ten 8TB disks in raidz2
:^)

>everything is available legally on streaming services
No, you're full of shit

Raid5 craps out with 2 bad drives...
With 3 drives you'll most likely make a single pool for the data; with 12 drives you'll most likely make two: a bigger one for data and a smaller one for backup.

Anime

so i can install the newest AAA game that won't compress their fucking sound files

I have a seedbox VM in HyperV with two 4TB VHDs. The HyperV host has 4x4TB in two RAID1 volumes. I plan on swapping to SnapRaid with two parity drives, and in the near future I'll add another two 4TB HDDs, making 16TB usable space in total. After that, I'll merge the two VHDs into a single 8TB VHD and extend it to 16TB.

Is SnapRaid capable of handling files larger than the size of a single parity drive? Will HyperV have issues with SnapRaid volumes? The vm and its boot VHD are stored on a separate SSD which won't be included in the SnapRaid. Would it be smarter to pass through the drives to the VM and run the SnapRaid on the VM instead of bare metal?

Well, you're allowing it to run at minimally 1MB/s then (the 1000 is in KB/s; maybe try 50000 for ~50MB/s?), so other things that use the md might slow the rebuild down a lot.

And your cache is small (try the 32768 maximum; it's counted in pages per device, not bytes, so watch your RAM), so it might have to wait after each HDD access, particularly if your HDDs aren't doing much readahead and don't have much of a quickly-accessible cache of their own.

There are a few other tunables in "man 4 md" too.
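Applying new values is just writing back through the same paths. A sketch (needs root; the numbers are suggestions, not gospel, and md127 is the array from the output above):

```shell
# Raise the rebuild floor (units are KB/s; 50000 = ~50MB/s) so
# concurrent IO can't throttle a resync down to a crawl.
echo 50000 > /proc/sys/dev/raid/speed_limit_min

# Enlarge the stripe cache. It's counted in pages, and costs
# page_size * stripe_cache_size * nr_disks of RAM, so check free
# memory before jumping straight to the 32768 maximum.
echo 32768 > /sys/block/md127/md/stripe_cache_size
```

Both reset on reboot, so persist them via sysctl.conf / a udev rule once you're happy with the numbers.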

> That's another RAID tho, I stopped that one and will do something else with disks
Eh, okay.

> maybe RAID0 for unimportant info
Maybe try snapraid with the redundancy level you want as a variant. It works slightly differently, which can actually also be useful.

If you need more than 14tb for porn storage what you need in reality is help, my friend

Dis nigga is ready for SHTF though. Imagine if the net goes down: this guy can fap away to his heart's content whilst the world outside burns down.

>BASED

1TB holds only twenty 4K movies.

Attached: 4k.png (1264x554, 69K)

Why do people store porn locally?

>Is SnapRaid capable of handling files larger than the size of a single parity drive?
I think no, unless you combine them in a layer on top (LVM, or Dynamic Disks, I think).

It's a variant on RAID that has advantages and disadvantages over the standard variant, see snapraid.it/manual - clever, but it doesn't "do everything you could ever want".

[That's more Ceph's approach, but Ceph is a difficult behemoth that definitely needs its ongoing cleanup before it's reasonable to understand and manage even for professional admins; don't use it yet.]
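To give a feel for it, a snapraid setup is just a short config plus scheduled runs. A sketch with hypothetical mount points; see snapraid.it/manual for the real reference:

```shell
# /etc/snapraid.conf would contain something like:
#   parity  /mnt/parity1/snapraid.parity
#   content /var/snapraid.content
#   content /mnt/data1/snapraid.content
#   data d1 /mnt/data1/
#   data d2 /mnt/data2/

snapraid sync        # compute/update parity after files change
snapraid scrub       # periodically verify data against parity
snapraid -d d1 fix   # reconstruct a failed data drive onto a replacement
```

Since parity is computed on a schedule rather than per-write, files changed after the last `sync` are unprotected until the next one - that's the main trade-off versus realtime RAID.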

just buy loadsa SD cards and make a necklace - then they're all together in one place

> 1080p movie
> 40GB
Someone is picking rather extreme encoder settings.

It's raw bluray.

My nigga

Figures. Would it work better if I ran it inside the VM, so it would handle the individual files instead of the massive VHDs?

I would be extremely picky about what things I have in 4k. None of the marvel shit deserves that much space on my server.

Yea I usually pick Sparks, Drones or Geckos releases in 1080p or even 720p when it's something mediocre that I'm only mildly interested in.

iktf, i just want a 100TB drive where i can copy everything to the root without any directory shit getting in the way

virtual machines and full system backups

Never tried that. HyperV was not good for me; I stuck with Linux for these things.

And there it might be more convenient to just run conventional md RAID5/6. And the VMs/containers also might not even be big but have their storage partly "outside" on the host or any other host.

That said, I can't see any reason why snapraid or even mdraid couldn't work inside VMs. With the various IO virtualization speedups that a lot of hardware has, it is probably fine?

Would be nice if it was available and affordable, but for now you just do a big RAID array (/variant thereof) or a distributed filesystem. Or if you accept a rather very high risk, you combine 7+ drives JBOD-style.

At least it's feasible and doesn't cost that much more than the cost of the drives.

Because I store everything I watch locally. Shit disappears off the Internet all the time.

And by that I mean you neither need very srs sysadmin skills nor exotic hardware.

Just get the required drive space plus two drives for RAID 6, and grab a current desktop mainboard where the SATA/SAS ports plus distinct fast-enough PCIe slots add up to the drive count. Slap it all into a big enough gaymen tower or a cheapo rackmount case, with a PSU that has enough power connectors and normal to even modest RAM and CPU. Then you run a few commands to tell Linux or BSD or maybe Windows to create the array/storage pool, partitions (or equivalents), and filesystems on top, and you've got your 100TB+ single storage.
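The drive count behind that is simple arithmetic: RAID6 yields (n - 2) drives' worth of usable space. A quick sketch with hypothetical 14TB drives:

```shell
# Data drives needed for 100TB usable (rounding up), plus 2 for parity.
target_tb=100
drive_tb=14
data=$(( (target_tb + drive_tb - 1) / drive_tb ))   # ceil(100/14) = 8
total=$(( data + 2 ))
echo "${total} x ${drive_tb}TB drives -> $(( data * drive_tb ))TB usable"
# -> 10 x 14TB drives -> 112TB usable
```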

Because we (those of us who have 200TB+) don't trust the internet, and are slowly building up our entertainment so we can quit this internet hell.

For mediocre stuff, whatever release is on popcorn time works. I like the 80s stuff Steve Martin did; that deserves a nice 1080p, not a bluray, but with 4GB I'm good. But Interstellar or The Dark Knight surely deserve a high quality encode.

For the sake of hoarding shit: I doubt anyone watches any porn more than 2 times. In a 30-minute movie you skip to the bj, some doggy or anal, and then the cumshot; you watched 5 minutes at most for a fap that lasts 3 minutes. It's not worth storing 4K porn for that use.
Unless it's good ol' homemade porn where you are fucking your niece or cousin, you little degenerate fuck.

One of these with 24 6TB LFF drives and RAID60. Voila, 100TB. Very available, but not very affordable. It takes more time to unwrap all that stuff from the cardboard boxes and clean up than to set up a 100TB file server.
hpe.com/us/en/product-catalog/servers/proliant-servers/pip.hpe-apollo-4200-gen10-server.1011147097.html

Paying a bit more for a preassembled hotswap bay server also works, sure.

Can't see any prices though, how much are these?

sorry man, I prefer mother son incest porn that is filmed in 2160p. Also blondelashes19 (the tranny camgirl)

Can I get those on netflix?

Probably around 30k€

>having more than 2TBs of porn
You have a problem.