What is the optimal storage solution for big bois?(30TB+ Jow Forumsentoomen.)

Buying 4~8 bay boxes doesn't scale well. They also usually have poor cooling.

Attached: drive_large_4.jpg (1300x726, 225K)

FreeNAS box. You can have them backup to another box somewhere else for redundancy.

What's so great about Freenas, or rather ZFS in comparison to NTFS?

Would transferring large files over local LAN have any impact on my regular internet speeds?

>zfs vs ntfs
Guess that would really depend on your use case? Are you doing raid under Windows server or something?
>Would transferring large files over local LAN have any impact on my regular internet speeds?
are you saturating gigabit connections already?

My main OS is Windows but I'm going to be running FreeNAS with RAID and connecting to my storage via LAN.

>are you saturating gigabit connections already?
No, was just curious

Also
>Buying 4~8 bay boxes doesn't scale well. They also usually have poor cooling.

There are 8-11 bay desktop towers that will pull air right over the drives. If you need more than that, you're going to have to start looking at rackable server chassis or large HBA expansion cases.

>8-11 bay desktop towers
Yeah and those are going to cost you $120+ because they're aimed at high-end users who will be using their desktop towers for more than just HDD storage.

You'd be better off just buying a used server chassis

I have a few boxes that can saturate gigabit connections when doing backups etc., but I've never noticed that having an impact on the rest of the computers in the house.

>$120+ is a lot
As opposed to the cost of 8+ hard drives?

I have not been able to find chassis with more than 8 bays that aren't $120+.

I have a Rosewill 15 bay for one box, and it is okay but a PITA to pull drives.

At the moment I'm actually looking for a tower for another backup box that I can put somewhere else in the house. But I won't have access to a rack, and I don't want to buy another.

HAF X or HAF 932

NTFS fragments and has no checksums. You lose data easily with it.
FreeNAS uses ZFS, a COW (copy-on-write) file system that checksums all data before it is written.
FreeNAS also allows you to schedule jobs, like SMART tests on disks, and then have it email you when there is a warning.
Regarding transferring on LAN affecting your internet speed: generally no. Traffic between two machines on your LAN stays on your switch and never touches your internet uplink. It only matters if the NAS itself is pulling or pushing data over the internet; then it competes with your other traffic for whatever bandwidth your connection has.
I used a Fractal Design node 804 for my FreeNAS box. You can put a lot of drives in it. I think the max is 11 3.5" drives and 2 2.5" drives.
Watch the Feature Overview Video on this page. Search 'node 804' on youtube, tons of reviews out there.
newegg.com/Product/Product.aspx?Item=N82E16811352047&cm_re=node_804-_-11-352-047-_-Product
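For reference, FreeNAS drives those scheduled SMART tests with smartmontools under the hood, so a quick manual check from the shell looks something like this (assuming your first disk shows up as /dev/ada0; substitute your own device name):

# run a short self-test on the first SATA disk
smartctl -t short /dev/ada0
# dump SMART attributes and the self-test log afterwards
smartctl -a /dev/ada0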

>I used a Fractal Design node 804 for my FreeNAS box. You can put a lot of drives in it. I think the max is 11 3.5" drives and 2 2.5" drives.
>Watch the Feature Overview Video on this page. Search 'node 804' on youtube, tons of reviews out there.
>newegg.com/Product/Product.aspx?Item=N82E16811352047&cm_re=node_804-_-11-352-047-_-Product

>what's so great about _ in comparison to ntfs?
every single modern file system is better than NTFS, heck, even Apple's we-need-to-make-our-own-special-snowflake fs is better than NTFS

I was referring to server chassis on the cheap. Can't find 8+ bay for under $200. Most higher-capacity chassis are $300-$500.

>Fractal Design node 804
I've looked at that and NZXT 210/H440. In the 210 you could cram 13 in with a 3x5.25" to 3.5" cage. But no dust filters and bottom 8 drives would be a little more difficult to work with.

Currently I'm using DrivePool for JBOD drive pooling in Windows Server 2012 R2. So far so good. I like the ability to add or remove any number of disks of any capacity at will. Using ZFS sounds appealing at first, but then what if I want to replace an old 2tb drive with an 8tb? or replace a failed 8tb with a couple 4tb drives on sale? is there a better solution for that?
Also, since these are all NTFS I can remove any drive and read data directly from it, if for example my motherboard and HBAs exploded.
I freely admit I am a moron that doesn't understand ZFS very well.
Currently sitting at (pic related) TB with the vast majority 'duplicated'

Attached: Screenshot_20180703-123655_Remote Desktop.jpg (1077x818, 149K)

>I was referring to server chassis on the cheap. Can't find 8+ bay for under $200. Most higher-capacity chassis are $300-$500.
20 bay for $12
govdeals.com/index.cfm?fa=Main.Item&itemid=16744&acctid=2863

You have no reason to be holding onto more than 3TB of data.

Attached: 1521873670465.jpg (355x440, 130K)

FreeNAS and most of its competition is absolute dogshit

>yeah you're gonna need like 16GB of RAM to have that 2TB array up, goy
>why wouldn't I just run a basic windows 10 LTSB install with 4GB and not have to deal with any stupid shit?
>B-BECAUSE THEN YOU AREN'T LIKE A REAL HACKER

Attached: 1492197754892.png (640x480, 411K)

>what if I want to replace an old 2tb drive with an 8tb?
You can do that. Offline the 2TB drive in the software, and then turn off the system. Take the drive out, replace it with the larger one. Turn on, and resilver. Just note the vdev's capacity only grows once every disk in it has been replaced with a larger one.
>or replace a failed 8tb with a couple 4tb drives on sale?
This one, not so much. In FreeNAS you have a zpool, the total storage. You can expand this by adding vdevs. vdevs are made up of disks. If you lose a vdev, you lose the zpool. So, you want your vdevs to be hardy. Ideally each vdev you add should have about the same usable size and redundancy as the existing ones (ZFS won't stop you from adding a mismatched vdev, but it unbalances the pool). Example layout like this:
zpool
-vdev0
--Disk0 2TB
--Disk1 2TB
--Disk2 2TB
--Disk3 2TB
You put this in a raidz1 (6TB, can lose one disk) or raidz2 (4TB, can lose two disks)

To expand the pool, you add another vdev of the same size usable (6TB or 4TB). So, if you made a 4TB vdev (raidz2) you can add another vdev to the pool of, say 5 1TB disks, in a raidz1 config.

The robustness of your zpool is based on the weakest vdev. Say you just add one 4TB drive as its own vdev. Now you have 8TB total, but if that 4TB drive goes out, the whole 8TB is gone.
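To make that layout concrete, here's a rough shell sketch of the same idea (device names da1 through da10 and the pool name 'tank' are made up; the FreeNAS GUI does all of this for you):

# one 4x2TB raidz2 vdev -> ~4TB usable
zpool create tank raidz2 da1 da2 da3 da4
# expand later by adding another vdev of similar usable size (5x1TB raidz1 -> ~4TB)
zpool add tank raidz1 da5 da6 da7 da8 da9
# swap a disk for a bigger replacement, then let it resilver
zpool replace tank da2 da10
# watch resilver/scrub progress
zpool status tank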

>FreeNAS and most of its competition is absolute dogshit
>>yeah you're gonna need like 16GB of RAM to have that 2TB array up, goy
>>why wouldn't I just run a basic windows 10 LTSB install with 4GB and not have to deal with any stupid shit?
>>B-BECAUSE THEN YOU AREN'T LIKE A REAL HACKER

Bullshit. The recommendation is 1GB of ECC RAM per 1TB of storage, but plenty of people build arrays of tens of TB with only 4GB of non-ECC RAM. People use FreeNAS because of all of the features it has, as well as the data protection.

forums.freenas.org/index.php?threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Buy hot swap bays, assemble a cheap machine out of new parts in a half-depth rackmount case. Any processor that's not an Atom and less than 5 years old is fine. Save shekels and get a Celeron, then spend those saved shekels on more RAM. You don't need a ton*, but more will make life more pleasant. *If you want to use dedup, then you need the 512MB-1GB per TB of storage though. Also no BIOS "RAID". Ever.

Unless you need more than fits in a shitty rack mount case, just use freenas. If you need more, don't buy a bigger box, buy more shitty boxes (3+), add a bit more RAM, and use ceph. As a bonus, ceph is a marketable skill. Downside, you'll learn why it's a marketable skill. Say bye to weekends for a while.

tapes

it's porn

Attached: 1528480307607.jpg (500x332, 35K)

i just put drives in my desktop computer and use the ext4 filesystem on them. has worked fine for years.

What should I be using to access my files from Freenas, Samba?

uhhhhh
you know that I never thought about this part?

4gb ram on 10s of TBs will run like dog shit if you are using ZFS

Samba (CIFS) is single-threaded per client connection, so NFS is arguably faster, but the NFS client isn't available on Home editions of Windows. Not a problem for me of course. I just never used it before.

There's also FTP or SFTP, and this is supported natively with Directory Opus (an Explorer replacement).

I think I'll go with NFS, but I wanted opinions.
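If you do go NFS, a quick sanity check from a client looks roughly like this (the hostname 'freenas' and the export path are placeholders; the Windows mount command only exists once the 'Client for NFS' feature is enabled, which Home editions don't have):

# list what the server exports
showmount -e freenas
# map the export to a drive letter from a Windows Pro/Enterprise box
mount \\freenas\mnt\tank\media Z: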

>Samba?
Yes, if you're on Windows. WinSCP might work too, never tried it. I don't use Windows much. Most people have difficulty with permissions on FreeNAS. It's just because they aren't familiar with the GUI. On youtube, search for 'Lawrence Systems freenas', he has a good video on it.

anyone even remotely considering freenas better fucking have 16GB ECC ram, MINIMUM. Even if you aren't using dedup or special freenas features.

Mine has 48GB ram, but that's also because I have around 36TB and plan to expand up to 48TB.

>4gb ram on 10s of TBs will run like dog shit if you are using ZFS
Don't use Deduplication.
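Worth checking whether dedup ever got switched on, since it's just a dataset property (the pool name 'tank' is an example):

zfs get dedup tank
zfs set dedup=off tank   # only affects newly written blocks; existing deduped data stays in the table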

>anyone even remotely considering freenas better fucking have 16GB ECC ram, MINIMUM. Even if you aren't using dedup or special freenas features.
>Mine has 48GB ram, but that's also because I have around 36TB and plan to expand up to 48TB.
Post a screenshot of your RAM usage. The new GUI has some pretty graphs. I want to see how much you use, because I hardly use any when transferring files. What are you doing that's eating RAM? VMs? Transcoding streams?
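If you'd rather have numbers than a screenshot, the ARC size is exposed through sysctl on FreeBSD/FreeNAS, e.g.:

# current ARC size in bytes
sysctl -n kstat.zfs.misc.arcstats.size
# configured ARC ceiling
sysctl -n vfs.zfs.arc_max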

build your own... a Supermicro X11SSH-F is a decent board, combo that with a Xeon E3 v6, enough DDR4 ECC for your needs, shuck some WD MyBooks for those cheap 8TB Reds... and throw FreeNAS on it

Attached: rack.jpg (3120x4160, 3.89M)

what's the device at the bottom?

the very bottom? it's a 1U CyberPower 1500VA UPS.

the one above that is my vm server (e5-2620v4 / 64gb ddr4 ecc)

>what's the device at the bottom?
CyberPower OR1500LCDRM1U
You can't read, faggot?

Details on this thing? How is your pool setup? Sup with that sexy fibre?

too lazy to google faggot

thanks. is there a device that exists which can read the cpu/mobo/hdd temps and display them on an LED panel?

A home-made NAS is a better solution than a QNAP/Synology, right? Are there any benefits to those two except that they're prebuilt?

Not with a decent switch.

>Are there any benefits to these two except that its prebuilt
They're great botnets. Literally.
tripwire.com/state-of-security/featured/vpnfilter-botnet-has-hacked-500000-routers-reboot-and-patch-now/

FreeNAS allows you to make a NAS as cheap or expensive, small or large, as you want, on hardware you want. And it's free. You just buy the hardware.

>IOT devices like routers and network access storage (NAS)
What an article.
How do we know this doesn't affect FreeNAS devices? What specifically allows this to work on a NAS?

sure, i would upload a network diagram, but im lazy and never made one... lol

>as we covered - the bottom thingy is a UPS.
>above that is my vm host, got like 20 vms running off it... i really need to redo the chassis, mobo and drives. it kinda was the first part of the lab, and it doesn't have any raid, ipmi, and the case is hacked to bits. e5-2640v4 / 64gb ddr4 ecc / hgst 6tb drive + a 120gb ssd for esxi as an install drive. also has a random 2x 10gbps sfp+ card in it. when i redo this, ill probably be running 4 2tb ssd's in raid 10 with 2 40gb ssd's in raid 1 as a boot device. not 100% sure what im doing for the raid card since esxi doesn't do software raid.
>above that we have a norco 3u chassis with 10 of the drive bays populated by shucked wd mybook 8tb's with 256mb of cache each. 2 40gb ssd's in raid 1 as a boot device. e3-1220v6, 64gb of ddr4, SUPERMICRO MBD-X11SSM-F as a mobo. also a dual port sfp+ card.
>above that i have some blank plates for that sexy factor, and a pdu since the ups only has 4 battery-backed ports.
>above that, fiber switch... us-16-xg from ubiquiti. fucking love that company. 320gbps total switching throughput... ill get to wiring later.
>above that, some cable management shit plastic from amazon. will probably eventually go with a brush style one since you can't even see the core switch above it.
>above that (not seen) is a switch-48. also unifi. it has like 3 things plugged into it, and two fiber uplinks, cause everything (almost) is LAGG'd
>above that, a patch panel that's not being used. will be once i move out of an apartment.
>above that, my router. 4gb ddr4 ecc, another e3-1220v6. another SUPERMICRO MBD-X11SSM-F. also a 2 port sfp+ card.

as you see, all my devices are 2x10gbps sfp+... so they all go into that fiber switch with LAGG's set up to aggregate the throughput. one cable is missing since i need to get another sfp module on the VM switch. will be doing that when i do the hardware overhaul.

Why wouldn't you just buy an old rack and get some rackmounted bays that slave to some cheap dual xeon master you pick up for nothing?

oh, didn't even mention... 80tb raw on the nas. it's raidz2 so i get like 51.6tb usable. crazy read speeds, and pretty slow write speeds. i have a 50gb optane drive in there as a SLOG.

>How do we know this doesn't affect FreeNAS devices?
FreeNAS is a free and open-source network-attached storage (NAS) software based on FreeBSD and the OpenZFS file system. It is licensed under the terms of the BSD License and runs on commodity x86-64 hardware.
en.wikipedia.org/wiki/FreeNAS

Post a screencap of a speedtest.net run. I want to see your 20 bonded gigabit fibre connections.

This baby is wicked nice. 5 bays + an internal USB slot for a bootable USB stick. Server-grade parts + ECC RAM. Uses only 150W fully loaded. Has eSATA, 6 external USB ports, gigabit LAN, two expansion slots (x4, x16). Small size. I've got two.

Attached: HP-ProLiant-MicroServer-N40L-4GB-RAM-AMD-Turion.jpg (400x378, 16K)

It depends on the workload. Lots of random IO will clog up the ARC and ZIL. Installing a pair of extremely fast disks as the SLOG and L2ARC will drastically reduce memory pressure and increase performance when you're plowing through hundreds of MB/s of random IO. If you have dedup on (don't) then you can't really get around the GB per TB memory requirements.
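Adding those two devices to an existing pool is a one-liner each; a sketch with made-up device names (remember the SLOG only helps synchronous writes, and the L2ARC only helps reads):

# fast, power-loss-protected SSD/Optane as a separate intent log (SLOG)
zpool add tank log nvd0
# larger, cheaper SSD as a second-level read cache (L2ARC)
zpool add tank cache ada6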

Continued: combined with a UPS, FreeNAS & WD Reds + backups, my data is safe from pretty much anything. (99% of shit anyway; if my house catches fire, well, I'm totally fucked on every level)

Again: The first box has 16GB ECC RAM w/ 5x 6TB WD Red drives in RAIDZ1 (21.4TB usable space). The 2nd box has 4GB ECC RAM w/ 5x 2TB regular Seagate drives in RAIDZ1. The first box is the main file server. That's all it does. The 2nd box acts as a backup (kept shut down). Contains solely data that would be a pain to replace/redo. It's not my only backup. I've got another NAS (Zyxel) that contains everything. So depending on what data it is, there are at least 3 copies of it. Yes, I know RAIDZ1 is not the norm anymore, but I've got backups and I'm pretty good at keeping them all in sync. If I was a lazy fucker or really really had to have 99.9% uptime then I'd go with RAIDZ2 or even 3.

150W is quite a lot. Like 5 HDDs is what, 50W? So 100W for the rest is crazy.

Don't know about optimal, but I settled on a Norco RPC-4220 with some used Opteron gear running FreeNAS for mine.
I probably should have run CentOS and ZFS raw, I don't really like FreeNAS much, but oh well. Live and learn. It's been running fine for probably 2 years now.

Attached: PICT0301.jpg (2592x1944, 1.2M)

>What's so great about Freenas, or rather ZFS in comparison to NTFS?
>>>What's so bad about NTFS

Attached: 1526451519186.jpg (323x520, 107K)

Have you had any problems with the hot-swap bays? I've long wanted one of those cases, but I've always been put off by the horror stories in the reviews.

>why wouldn't I just run a basic windows 10 LTSB install with 4GB and not have to deal with any stupid shit?
When your database gets a run through the bad sector hole shredder, you change your tune very quickly.
Disks fail in /gruesome/ ways.

Ask me
How I know

ZFS uses RAM because all of the error correction would fuck with random I/O if not for the insane levels of caching.

Attached: Screenshot_2018-07-03_21-42-20.png (937x512, 98K)

>NTFS

Attached: Disgusted Beta.jpg (1280x720, 116K)

Turn off dedup. Faggot.
Use a lighter checksum with smaller block sizes and lz4.
Add an SSD L2ARC, if you're so strapped for I/O, and can't into RAM.
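All of those are plain ZFS properties, so the whole tuning pass is a handful of commands (pool/dataset names and the cache device are placeholders, and recordsize only applies to files written after the change):

zfs set dedup=off tank
zfs set checksum=fletcher4 tank    # lighter than sha256
zfs set compression=lz4 tank       # cheap and usually a net win
zfs set recordsize=16K tank/db     # smaller blocks for random-I/O heavy datasets
zpool add tank cache ada6          # SSD L2ARC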

NTFS isn't bad
For a 20 year old fs at least.

It's definitely not good at the tens of TB level.

The drive bays are far and away the weakest link on the design. I had to seat and reseat one or two of the drives several times in order to get it to connect to the backplane properly when I was building it, and I dread the day I need to replace one. Calling them hot-swappable is probably a bit of a stretch. If any of them fail I'll probably offline the system, and check in BIOS whether or not the replacement drive is recognized before I attempt a restore.
There is a reason I have offsite backups running.

[ 3] local 192.168.1.200 port 58050 connected with 192.168.1.5 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-50.0 sec 38.2 GBytes 6.57 Gbits/sec

there's an iperf run with one machine... the LAGG kicks in when another machine is also trying to talk to it, to 2x that bandwidth. i could get more with jumbo frames but w/e
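For anyone who wants to reproduce that, it's plain iperf (iperf2, judging by the output format): start a server on one box and point a client at it for 50 seconds. The IP matches the paste above; swap in your own.

# on the receiving box (the NAS)
iperf -s
# on the sending box, 50-second run
iperf -c 192.168.1.5 -t 50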

Anything that would work with 8 bay USB 3.0 externals?
I have 2 of them and would like to separate them from my main computer.

Doesn't need to do raid or anything fancy, just serve JBOD over gigabit.

Also, is 10GBe worth the 100USD nics and switches? I'm seeing netgear 5~16port GS series for under $100 and QNAP QXG-10G1T nics for $100

Is it possible to remove the backplanes and just use the hot-swap trays as carriers for not-hot-swappable drives connected conventionally? I don't even really need hot-swappability, but it seems like you get that whether you want it or not if you want more drives than that Rosewill 15-bay case.

>Doesn't need to do raid or anything fancy, just serve JBOD over gigabit.
Just about anything under the sun. Pull a Core 2 out of a dumpster and you're probably fine.

>Also, is 10GBe worth the 100USD nics and switches?
Do any of your use cases result in a saturated 1Gbps pipe, and/or is the money an object or better spent elsewhere?
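For the "just serve JBOD over gigabit" case, a stock Samba install is enough; a minimal smb.conf share looks something like this (share name, path and user are placeholders):

[global]
   workgroup = WORKGROUP
   server min protocol = SMB2

[externals]
   path = /mnt/usb_jbod
   read only = no
   guest ok = no
   valid users = youruser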

NTFS is fine in a complete and utter vacuum.

The moment you introduce it to the real world and the faults of hardware it shits the bed.
You can have folder structures which will be irreparably inaccessible even after a CHKDSK, the repair mechanism just paves over holes in files and badblocks them, and the MFT is centralized into two files (a primary and a backup) that are both contiguous, meaning they both get shredded on a head crash.

The default behavior is to operate the filesystem in /infinite/ halving-retry mode, which completely destroys sectors on unaware consumer machines, making data recovery a bitch for exactly the files you really need.
There are no checksums, not even for metadata.

The OS itself cannot boot from compressed data, the compression scheme is shit and best left off so you can just zip files yourself rather than double-burn CPU cycles, and the snapshot functionality is super undefined in the event of failure.
There is no native application to identify holes in an individual file or bad blocks on a disk; you are blind in the event of an error other than some event logs on I/O. There is no readback caching to a RAM disk in the NT kernel; the disk surface is tried for every single I/O operation.

Linux is the goto for when disks fail because the native recovery solutions are trash freeware that does more harm than good.

They have so pajeet'd themselves that they have to implement certain logging capability by means of hidden files rather than actually work it into the FS in a structurally sound manner, so they have backwards compatibility but pay in speed and stability.

It's worse than you can know.

Attached: 1526470197981.jpg (1920x808, 197K)

The trays clip in at the front, and are on guides, so you could probably remove the backplane and run cables directly to the drives. I'm not sure if you'd need a screwdriver or a grinder to do it though.

Which way are your fans pointing? Toward the backplane or away from it?

You have no reason to be holding onto more than 3TB of data.
>its porn

Nope.jpg. I have been into 24-bit for a while and big into 5.6MHz DSD this year. My hi-res music collection is 4TB of very rare, low-seeded stuff that would take an eternity to replace. That's why it's mirrored offline. So that takes it to 8TB, and 2TB mirrored again on a portable USB external, so that's 10TB, and that isn't even my photos and memes going back to 2005.

Hoarding TV and movies is pretty dumb IMHO.... thank god my decade old 50" 1080p is still chugging along ... as soon as it bites the dust and i go 4k my storage situation is going to need a serious rethink.

ZFS and RAID and NAS are all gravy but i just like to prioritise my data and hardware mirror.

Attached: 1eac6ce98a0a460151329c35e38dc66f (2).jpg (800x450, 76K)

>What is the optimal storage solution for big bois?
CORAID

>I don't even really need hot-swappability,
If you want 30TB, you want RAID, you want hot-swap.

>raid under Windows server
software raid in windows truly sucks

Nice rant. I definitely know how bad things are, but I've never lost data from anything but a totally dead disk or a PEBKAC fuckup. And I've managed a lot of data across a lot of machines. I do use Linux recovery tools on winboxen, but only because there's usually only one disk, and you shouldn't be booting from a disk that's barely functional.

NTFS is shitty but it's hardly a black hole.

Default configuration, drawing air in the front and venting out the back.

>Buying 4~8 bay boxes doesn't scale well.
Look into Icy Dock. It will fit 5x 3.5" drives into the space of 3x 5.25" bays.

>going to cost you $120+
If you're doing it on a budget, then go get a bunch of computer cases. You can pick these up for peanuts at Goodwill or Sally Ann. Or even find them on the side of the street. Pull all the 3.5" cages out of them. Bolt them to a plywood sheet. Bolt your mobo to the sheet too. Get extra-long SATA cables to run from your cages to your mobo and SATA PCI card. Mount the plywood vertically so convection will create natural airflow over your drives.

>3x 10TB drives
>done
>want reason to buy giant fucking rack and box and keep expanding

>thanks. is there a device that exists which can read the cpu/mobo/hdd temps and display them on an LED panel?
Yes. Look at LCDproc

>single 10tb drive.
increased risk of drive failure
higher chance for corruption
usually lower rpm/performance
not ideal for raiding
higher cost of replacement

those drives are for archival purposes only, not main storage.

Sexy as all fuck, m8.

That's exactly what I was looking for, thanks.

>Also, is 10GBe worth the 100USD nics and switches?
Not unless the storage is being used by multiple video editors at once.

>backblaze showed no increase in drive failure for larger platter drives
>7200rpm Seagates
>higher cache 256mb
>5 year warranty
>nothing to stop you from raiding
>implying 5x2TB drives is cheaper

Keep thinking it's 2005

It's mostly other lower-TB drives that are slower; all WD Reds and Blues are 5400RPM, 1 model is 5900 I think.

all ssd arrays

>backblaze showed no increase in drive failure for larger platter drives
>backblaze

>Seagates
no good

>nothing to stop you from raiding
I didn't say you couldn't. I said it wasn't ideal.

>5 year warranty
doesn't cover data loss or the fact you won't have access to a whopping 10TB of your storage for a few days.

>implying 5x2TB drives is cheaper
10tb = $378
2tb = $50 ea
5x2TB = $250 total.

you're saving $128. that's a lot of savings user. you could buy another 2x hdds user.

>implying a mass backup service that offers unlimited storage isn't a good metric for hdd workloads
>ruling out seagate
>had 1 seagate die a decade ago
>so now uses WD

top kek

>compares 5400RPM 64mb max 2tb drives to a 10TB that is 7200rpm and 256mb cache
>spreading data over 5-7 drives

>compares 5400RPM 64mb max 2tb drives to a 10TB that is 7200rpm and 256mb cache
that's 7200RPM.

>muh 256mb cache!
shit doesn't matter. 64mb is plenty.

>spreading data over 5-7 drives
you know more drives increase the speed of the raid right? it literally spreads the workload across all drives.

yes, you should feel very ashamed, guilty and bad for buying a bunch of big ol 10TB's

I am doing RAID, but I'm doing it for bit-rot protection and not availability/uptime. Hot-swap and not having to bring down the system to swap a drive would be very convenient. But I'm very much willing to sacrifice convenience in order to save some money. I'm a home hoarder, my only SLA is with myself.

>losing 2+ drives to parity
>or being stupid and raid 0

again you're living like it's 2005 and can't handle that the drives are actually decent

top kek

also bonus kek

>shit doesn't matter. 64mb is plenty.