I've been thinking about getting a NAS for home use. I want to do RAID 5 or 6 and store all of my porn, illegally downloaded movies, and Chinese cartoons on it. The drives in my computer are starting to fill up quickly and I need more storage space. I also want to be able to stream my shit to my TV, so Plex support is important.
Most likely going to buy the HDDs separately so I can scale up storage space as I need it. Probably going to get four 4TB WD Reds to start out, do a RAID 5 for 12TB total storage, and have an external drive connected for backing up important stuff.
Don't do RAID 5, do a ZFS RAIDZ2. Make your own; it's usually cheaper unless you get a deal on a FreeNAS box. Search up how ZFS works and how you could expand first though, to make sure it's futureproof.
Anthony Carter
I got a free desktop PC at work. Bought some 3.5" HDDs, a PCI WiFi card and a 3.5" hotswap bay, and voilà: an SFTP/SSH/virtualization home server for cheap burger bux
Jordan Nelson
please ignore this thread I fucked up and made a new thread instead of replying to /sqt/
I'll look into it thanks.
John Cox
whatever you do stay out of the home server general if it ever crops up again, I've never seen such a festering pile of autism
Thomas Gutierrez
You don't enjoy being told you don't really run servers because you don't run two virtualised NTP servers?
Sebastian Stewart
You can run your own on a Linux distro just as well. For that amount of data you can probably get away with ext4. Read a few things on mdadm and you should be good to go.
The only other thing I'd suggest is use a separate boot drive if you want to keep your initramfs simple.
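The mdadm + ext4 setup suggested above is only a handful of commands. A minimal sketch; the device names (/dev/sdb through /dev/sde) and mount point are assumptions, substitute your own, and note this destroys whatever is on those disks:

```shell
# Create a 4-disk RAID5 array (run as root; wipes the member disks)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync progress
cat /proc/mdstat

# Format with ext4 and mount it
mkfs.ext4 /dev/md0
mkdir -p /mnt/nas
mount /dev/md0 /mnt/nas

# Persist the array definition and the mount across reboots
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /mnt/nas ext4 defaults,nofail 0 2' >> /etc/fstab
```

The array is usable immediately; the background sync just means degraded performance until it finishes.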
Justin Garcia
> (OP) >I got a free desktop pc at work. Bought some 3.5 hdds, a pci wifi card and a 3.5 hotswap bay and voilà, sftp/ssh/virtualization homeserver for cheap burger bux
Wifi for fucking file storage? You must be retarded. user, make sure your NAS is wired if you're storing all data on it.
>You must be retarded
I live in a very old house with a very shitty electric system. PLC doesn't work and I get 11mb/s with wifi so yeah, I'll take wifi.
Michael Rodriguez
You're wrong. Wire it with ethernet.
Xavier Green
Shut the fuck up. If I use WiFi it's because I don't have the choice. I probably have 10x your knowledge of computer systems and network administration. Stop calling other people retarded and step up your post quality, you fucking parasite of a human being
Josiah Thomas
>what is a good prebuilt NAS computer?
Literally anything that can hold a few HDDs.
Henry Parker
>Probably going to get four 4tb WD reds to start out, and do a RAID 5 for 12tb total storage
I think buying 8TB externals and ripping the hard drives out is the best bang for your buck. Three of those in RAID5 would give you 16TB of space, but ideally four drives in RAID10 due to better performance.
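The capacity math here is quick to sanity-check. A small sketch of the usable-space formulas for the common RAID levels, assuming equal-sized drives and ignoring filesystem overhead and TB/TiB differences:

```shell
# RAID5 loses one drive to parity, RAID6 loses two, RAID10 halves the total.
# Arguments: <drive count> <TB per drive>
raid5()  { echo $(( ($1 - 1) * $2 )); }
raid6()  { echo $(( ($1 - 2) * $2 )); }
raid10() { echo $(( $1 / 2 * $2 )); }

echo "OP's plan, 4x4TB RAID5: $(raid5 4 4) TB"   # 12 TB
echo "3x8TB RAID5:            $(raid5 3 8) TB"   # 16 TB
echo "4x8TB RAID10:           $(raid10 4 8) TB"  # 16 TB
```

So three 8TB drives in RAID5 and four in RAID10 really do land on the same 16TB; the RAID10 option just buys performance and better rebuild behavior for one more drive.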
Then get a 256+GB SSD
Fuck FreeNAS. ZFS is shit and hasn't aged well. mdadm+LVM is god tier.
SSD cache your raid array with bcache/dmcache/flashcache for even better results.
Install Ubuntu/Mint/Xubuntu, install XRDP and Chrome Remote Desktop, set up NFS and SMB, Plex, Kodi and whatever else you want.
>Old/cheap/whatever PC
>4 x 8TB HDDs from external hard drives
>RAID10 using mdadm + LVM
>256GB+ PCIe SSD
>Half of SSD used for OS, the other half for bcache/dm-cache/flashcache
>XRDP/Chrome Remote Desktop/VNC
>SMB/NFS/ownCloud
>Plex/Kodi
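The SSD-caching step of that recipe, sketched with bcache (one of the three options named). Device names are assumptions; bcache-tools must be installed and this runs as root:

```shell
# Register the RAID array as the backing device and the SSD's
# spare partition as the cache device
make-bcache -B /dev/md0
make-bcache -C /dev/nvme0n1p2

# Attach the cache set to the backing device; get the cache-set UUID
# from the make-bcache output or from:
#   bcache-super-show /dev/nvme0n1p2
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Writeback mode gives the biggest speedup but risks dirty data if the
# SSD dies; the default writethrough is the safe choice
echo writeback > /sys/block/bcache0/bcache/cache_mode

# From here on, put LVM (or a filesystem) on /dev/bcache0, not /dev/md0
```

The one gotcha is that bcache has to wrap the backing device before you put data on it, so plan the layering before filling the array.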
Luke Carter
>what is a good prebuilt NAS computer?
QNAP / Synology have relatively good features for prebuilt machines.
You're still paying a premium vs just assembling a Linux machine and they do not NEARLY have the amount of software you can easily run with a package manager.
>do a zfs raid z2
Don't. Not only does ZFS require a lot more hardware [faster CPU, more RAM, maybe a cache SSD] than Linux mdadm RAID to achieve the same performance, you also cannot add drives one by one and grow the array on ZFS like your plan describes.
mdadm has the required tooling and it's well tested [you can even switch between most RAID levels]; ZFS does not.
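Both operations mentioned look like this with mdadm (array and device names are assumptions; reshapes run in the background and can take a long time on big drives):

```shell
# Grow an existing RAID5 array from 4 to 5 data-bearing drives
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5
# ...data is restriped in the background; watch progress in /proc/mdstat

# Switch the array from RAID5 to RAID6 (needs one more drive for the
# second parity; older kernels may also want a --backup-file)
mdadm --add /dev/md0 /dev/sdg
mdadm --grow /dev/md0 --level=6 --raid-devices=6

# Afterwards, enlarge whatever sits on top, e.g. for ext4:
resize2fs /dev/md0
```

The filesystem resize step is easy to forget; until you run it, the extra capacity exists in the array but not in the mounted filesystem.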
Thomas Stewart
>SSD cache your raid array with bcache/dmcache/flashcache for even better results.
If you're using LVM already, why not use lvmcache? It's one of the best solutions.
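A minimal lvmcache sketch to go with that suggestion. The VG/LV names and devices are assumptions; the idea is that the mdadm array and the SSD partition live in the same volume group:

```shell
# Put the RAID array into a volume group and carve out the data LV
vgcreate nas /dev/md0
lvcreate -l 100%FREE -n data nas

# Add the SSD partition to the same VG
vgextend nas /dev/sda4

# Create a cache pool on the SSD and attach it to the data LV
lvcreate --type cache-pool -L 100G -n datacache nas /dev/sda4
lvconvert --type cache --cachepool nas/datacache nas/data

# To remove the cache later, this flushes it and leaves the LV intact:
#   lvconvert --uncache nas/data
```

The nice part versus bcache is exactly that last line: the cache can be attached and detached from a live LV without re-laying-out the stack.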
Ethan Gutierrez
Low power consumption while providing reasonable drive / network speed is usually still a measure of it being "good".
Throw your old gaymen machine that consumes an extra 100W at it and you'll be paying something like an extra US$100-200 / year to operate the thing if you run it 24/7 like most people would.
Adrian Reed
I don't think anyone who plays with toys is old enough to have an "old" game machine.
Brayden Morales
>LVM is god tier
Sure, if you're in 2002 and have 60GB drives. LVM is shit. Your ZFS sucks ass because you didn't set it up correctly, and you're probably blaming shitty SMB.
Nolan Evans
What toys? It's a media dump.
>Your ZFS sucks ass because you didn't set it up correctly
No, he's correct. ZFS sucks in comparison. Takes too much hardware for no good reason, and can't do what OP asks (add drives to the array as needed).
There is no element of "setting it up wrong" other than the ZFS user's inclination to throw a fuckton of RAM and CPU and a caching SSD at it to fix its shitty performance whereas mdadm + lvm runs happily even on a low power potato machine with like 256MB RAM.
Noah Scott
>Don't. Not only does ZFS require a lot more hardware [faster CPU, more RAM, maybe a cache SSD] than Linux mdadm RAID to achieve the same performance,
The processor depends on what the server is doing. If you're transcoding video or hosting virtual machines, you will need a better processor. An SSD isn't needed at all. ECC RAM isn't needed. The 1GB of RAM per 1TB of space rule isn't really needed past 16GB of RAM. These are all old memes.
>you also cannot add drives one by one and grow the array on ZFS like you describe your plan.
You certainly can. That's the whole point of using Z2 or Z3, in case a drive fails during the resilvering process. Read the fucking documentation.
Luis Anderson
>Processor is going to be based on what the server is doing.
Not with ZFS.
I guess you can make it run on the side of some large transcoding-on-the-fly machine [why? make your client devices decode current media files and save that 24/7 transcoding monster's power] with 16GB of RAM, but the point is that it will not work well on some Atom / Pentium with half a GB or one GB of RAM, which will however serve files off mdadm just fine.
Likewise, even if you do build a 16GB RAM bloated monster machine, it still won't serve data off HDDs as fast as mdadm; it'll only about match it once you fix the worst flaws with an additional buffer/cache SSD, and even then latencies are generally higher.
>You certainly can.
You can't.
>That's the whole point of using Z2 or Z3
No. RAIDZ2 provides you with redundancy like RAID6 or an erasure-coded x,2 pool on Ceph. But it still does not permit adding drives one by one.
ZFS users still have to find the storage space to create the new array with the new drive count and then move data over and delete the old array to add one drive.
On mdadm and obviously ceph, you just add one to the array / pool, it redistributes the data, done.
Tyler Rogers
PS: Growing ZFS RAIDZ arrays has been a planned feature for years now. Never got implemented.
Gavin Gutierrez
>make your client devices decode current media files, save that 24/7 running transcoding monster's power What? That's the whole point. Transcoding at the source is faster than making a tablet fucking decode it. Server wouldn't be doing this 24/7, only when serving it. You don't run a server on Atom/Pentium faggot. You want to transcode/decode on that shit? No.
Yes, you can swap drives out. You can do it with the system running. It's enterprise level server software. You obviously don't understand how to use it, or have read the documentation. Just another faggot that repeats shit he reads here.
Brayden Clark
>Growing ZFS RAIDZ arrays
You grow them by increasing the size of the disks, or adding zpools together. Read the manual. Enterprise storage is done by the rack, not the single disk. Enjoy your hobby software.
>You grow them by increasing the size of the disks, or adding zpools together.
You can't grow it by just adding one drive to the raidz2 array. That's the whole problem, and no, the other methods are not equal to it. They have constraints like not working well with just one drive added, or acting more like JBOD.
>Enterprise storage is done
*Extremely* rarely done with ZFS [and then usually because the sysadmin doesn't know their shit], because the latencies and the terrible scaling at higher drive counts make it pretty useless by the rack.
Also, we weren't really discussing enterprise storage in the context of OP, but a media home storage setup that can grow by the disk as-needed.
That said, if you want enterprise storage, go with Ceph or MooseFS or such. Then you can add a drive, server, a whole rack... and it actually scales okay, unlike ZFS.
Jason Gonzalez
>Shingled Magnetic Recording
You better know what you're getting yourself into m8.
Ryan Ramirez
ZFS is literally enterprise. It was never developed for anything else.
If you want to upgrade storage, you swap drives one at a time. That's why you use Z2/3. You replace a drive, and once the resilver is done, replace the next.
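The replace-one-drive-at-a-time approach described here looks roughly like this with the zpool commands (pool and device names are assumptions); note the extra capacity only shows up after the last drive in the vdev has been swapped:

```shell
# Let the pool auto-grow once all members of a vdev are bigger
zpool set autoexpand=on tank

# Swap the first old 4TB drive for a new 8TB one
zpool replace tank sda sdf

# Wait for the resilver to complete before touching the next drive;
# with Z2/Z3 you keep redundancy even if a drive dies mid-resilver
zpool status tank

# ...repeat replace + resilver for every remaining drive in the vdev...

# Extra space appears only after the final replacement finishes
zpool list tank
```

That works, but it means buying a whole set of bigger drives; it still isn't the "add one drive to grow the array" operation the other posters are asking for.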
Grow up faggot
Carson Bailey
>Your ZFS sucks ass because you didn't set it up correctly, and are probably blaming shitty SMB.
Do a benchmark comparison using the exact same disks.
>pointlessly overspec'd 1U rackmount Synology "NAS"
>4x15TB SMR drives
Is the problem you are trying to solve that you've got no Linux/BSD skills and too much money? This setup is beyond weird.
Joshua Lewis
>I don't understand the difference between COW filesystems, how RAID works with LVM, or their use cases.
I hope more of you meme faggots infiltrate businesses. My future is bright, and I don't even specialize in this shit, it's a hobby.
Jacob Cruz
Don't do RAID5/6 if you use large capacity HDDs (larger than 3TB; even that is pushing it IMO).
Austin Sanders
New NASs are too overpriced, but you can find something cheap used on eBay or Yahoo JP auctions. Got a Netgear ReadyNAS Pro 2 for something like $80 last month. You can install the latest ReadyNAS OS on it too. Put two 4TB HGSTs in it, and have things like Logitech Media Server and Transmission running on it 24/7. It's a great machine, better than the pseudo-NAS I made with a Raspberry Pi and a 2TB WD My Passport. Don't go for the hurr durr repurpose-an-old-laptop/PC-as-a-NAS thing. It's dumb and your electricity bill will explode, along with that laptop power brick that will burn your feet.
>that ubiquiti setup Ah, I see you are a man of great taste.
Jose Diaz
>If you want to add drives to upgrade storage, you do it one at a time. That's why you use Z2/3. You replace the drive, and once the pool is done, replace the next.
>Pool degraded
>Replace disk
>Pool rebuilds
>Another disk fails
>Replace disk
>Pool rebuilds
>Another disk fails
>Replace disk
>Pool rebuilds
>Another disk fails
>Replace disk
>Pool rebuilds
>Another disk fails
>Replace disk
>Pool rebuilds
I'm constantly changing out fucking enterprise disks on enterprise servers because ZFS has so much god damn overhead that the disks get thrashed.
Used ZFS with Proxmox, and the performance of our VMs fucking sucked. Six NVMe drives in ZFS RAID0 can't even push 1.5GB/s; it's fucking stupid.
Nicholas Cooper
I'm using 4 3tb drives in raid 10. I think it's the best of both worlds and it works for me.
Austin Price
I literally just have a raspberry pi hooked up to a SATA-to-USB adapter with a 500GB HDD in it.
Owen Powell
>Doesn't understand HPC requirements
Ian Price
>ZFS is literally enterprise. It was never developed for anything else.
Maybe that was the intent, but they didn't succeed for shit. It has been around for a long time and basically none of big data / big processing uses ZFS.
It's simply because ZFS RAIDZ [and deduplication and other features] scales like ass no matter what [no, really, it's already a very apparent difference at like 10-20 drives - long before you get to full racks filled with drives], runs like ass unless you throw very much higher spec'd servers at it, has weird latency spikes of 200ms and such even then [at least on BSD and Linux, no clue about Solaris derivatives], and so on.
The reputation that it works well on large data probably came from the age when Solaris was still very important and the other solutions actually had weird trouble, ridiculous fsck and rebuild times and so on with a few TB. These days ZFS is just one of the worse choices, and it's mainly the BSD-using FreeNAS crowd that still thinks it's avant-garde cool or something.
Alexander Lewis
>chrome remote desktop why do this when ssh and vnc exist
Cameron Robinson
What do large datacenters do? Like AWS, or Azure? Do they even use RAID? What filesystem? It must be pretty robust, but is it even possible to scale that down to a consumer-grade 2+ bay NAS?
Jacob Martin
This guy knows whats up.
ZFS's compression/deduplication/checksumming made sense back when HDDs were slow as fuck and bottlenecking the system. But it's 2018 now, we've got SSDs, and ZFS wasn't written with SSDs in mind.
Thomas Murphy
Learn how to build a pool faggot. It's like you don't even know the name of the nigger who created this place. Protip:it's concrete.
I'm in huge data, and we use it. You dumb faggots don't understand how to segregate and combine.
That's fine, I like it this way. The dumber you are, the longer I make money and advance.
Blake Brown
I didn't have the rebuild/drive failure issues on ZFS, but the performance DID fucking suck here too. It was so bad that I suspected some weird hardware incompatibility and tested it on more [worse] machines to get a feel for how it should run and also looked / asked people for performance figures.
But no, it's just ZFS that sucks.
Parker Flores
Because it performs better, punches through firewalls, and works on macOS, Linux, Windows, Android, iOS and ChromeOS.
SSH +VNC is fine, but chrome-remote-desktop is a hidden gem that works better.
David King
>I'm in huge data, and we use it.
That's only possible if your CTO is an idiot and you really don't care that monster machines with 20 enterprise drives run at the speed of like 5 consumer drives unless you add a sizable fucking buffer/cache SSD, at which point you can run them at the speed of maybe 6-8 enterprise drives.
> You dumb faggots don't understand how to segregate and combine. You are the dumb faggot that thinks you need to "segregate and combine" and use monster computers with tons of RAM and all the other desperate shit because you're using ZFS, apparently even in production. Holy shit.
Sure, Ceph for example also has you fuck with parameters and isn't super nice in all situations either, but its baseline untweaked performance on some basic bitch ext4/xfs is already FAR better than ZFS with SSD cache and all the hardware and tweaks you can throw at it.
And at a smaller scale, mdadm just werks better, and more reliably, than ZFS on like 1/5 of the hardware.
Benjamin Garcia
Trust me dude, nobody wants to work with an attitude like yours.
Landon Peterson
You ZFS shills got baited hard. Do you really think that someone who is considering buying an off-the-shelf solution has the time, patience, or skill to set it up properly? Also, the fuck are you going on about using SSDs for RAID without talking about networking equipment as well?
user, just take the fucking middle ground and build yourself a nice Ubuntu/Debian normie box; read the fucking mdadm links posted above. If you have an old box laying around, buy this extra shit and you should be good to go.
Finally, there are hot swap trays; these are something you really should wait for deals on. I've gotten them for 50% off by being patient.
Nicholas Barnes
/SQG/ I had a prof at uni telling us how an ex-student was a dumbfuck imbecile. Long story short, he supervised a student who did not set up RAID properly. According to him, RAID5 is bad with SSDs because you won't get that much speed out of it. He never told us the solution: what do you do when you have a bunch of HDDs and an industrial-level SSD? He didn't reason the why, and he didn't tell us the proper way to set up the RAID in that scenario.
redpill me
Lincoln Turner
You can affect the raw speed and latency by doing RAID5/6 over SSD, but the impact is by no means prohibitively large in most situations. Somewhat sequential writes/reads should be fairly decent with even completely default mdadm RAID5/6.
The amount of IOPS likely won't scale well by default however. If your SSD are running databases or hash-addressed small file storage or something, adding another drive to the array may result in nearly no increase in performance. I think that is tweakable to a worthwhile extent, but I haven't really done that yet.
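The tweaking alluded to here mostly means the md sysfs knobs plus measuring the result. A sketch; md0 is an assumed array name and the values are starting points to experiment with, not recommendations:

```shell
# Bigger stripe cache helps parity-RAID write throughput
# (costs RAM: entries * 4KiB * number_of_drives)
echo 8192 > /sys/block/md0/md/stripe_cache_size

# Spread parity computation across several threads; mainly helps
# on fast (SSD-backed) RAID5/6 arrays
echo 4 > /sys/block/md0/md/group_thread_cnt

# Chunk size is fixed at creation time and matters for IOPS-heavy loads;
# smaller chunks tend to suit random I/O
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 /dev/sd[b-e]

# Measure rather than guess, e.g. random 4k reads with fio:
fio --name=randread --filename=/dev/md0 --rw=randread --bs=4k \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --time_based
```

Re-run the fio numbers after each change; these knobs interact, and what helps a database workload can hurt sequential streaming.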
Brayden Hall
>prof at uni, telling how an ex student was a dumbfuck imbecile
Sounds like the professor is bad at teaching.
>what to do when you have a bunch of hdd, and an industrial lvl ssd? He didnt reason the why, he didnt tell us the proper way to set it up the raid in that scenario.
You make a hybrid array using MDADM and use the SSD for caching.
Like for an enterprise environment where you had 12 HDDs and two SSDs:
>12x HDDs in RAID5 + 1-2 spares using mdadm
>2x SSDs in RAID10 far2 using mdadm
>Use half of the SSD array for caching the HDD array with flashcache/dm-cache/bcache
>Use LVM on top of everything
>Walk away and never have to look back
Justin Gonzalez
BTW, about what to do if you
>have a bunch of hdd, and an industrial lvl ssd
I'd say just mdadm RAID6 + LVM the HDDs, and then optionally add the SSD as lvmcache if you want such a thing. Or just use it elsewhere.
Bentley Ward
Most likely a bunch of different systems depending on use. There's object storage (AWS S3, Azure Blob): it's used for dumping single files with metadata; don't expect random byte access. Block storage (EC2 EBS, Azure Block): your regular old filesystem can be formatted over block storage, but it doesn't scale massively like object storage. Then tables, databases, etc.: not too familiar with these, but they're different forms of data storage delivered as a service.
Christian Murphy
>What does large datacenters do? Like AWS, or Azure?
Have a look at Ceph, OpenStack, or maybe the more SME-oriented MooseFS or GlusterFS. That's the general idea of what AWS or Azure do. They don't use these exact solutions, but the other entities that wanted a cooperative open sauce implementation of the same kind of thing [like CERN and so on] do use them.
You'll see both OpenStack and Ceph really got a bunch of somewhat distinct components that handle the different ways of accessing data in the storage pools.
> is it even possible to scale that down to a consumer grade 2+ bay nas? Possible for some of these, but generally pointless unless you got more of these 2+ bay NAS.
On NAS, the typical solution actually used by almost anyone is to use Linux md/dm RAID. QNAP does it, Synology does it, basically everyone does it.
Charles Brooks
OP here
I have a spare 250gb 850 EVO, could I use that in an array with like 4-6 4TB drives for caching? I might buy a 500GB SSD, clone my current OS onto it, and use the 250gb evo in my computer as well.
I've never used Linux in any major way before, but I'm kind of interested in setting up this mdadm stuff.
Cooper Lewis
Most likely YAGNI, and it will wear out fast. You should be able to saturate your gigabit network (in sequential reads) with 6 drives in RAID6.
Matthew Ramirez
>I have a spare 250gb 850 EVO, could I use that in an array with like 4-6 4TB drives for caching?
Yes. With most of the software mentioned, you can add/remove SSD caches as you please.
>I'm kind of interested in setting up this MDAM stuff.
Sure. That said, mdadm is a rather boring "just werks" affair overall; don't expect much entertainment from it.
Create a RAID5 array from drives x,y,z, it does it and done. Tell it to change to RAID6, it does it eventually and done. Add a drive to the array, it rebuilds the structure and done.
Kevin Rivera
YAGNI sure, but it won't wear out particularly fast.
It's overall basically like writing/reading to this SSD would be without an array underneath.
Adam Carter
The 12/15 bay case is exactly what I'm looking for. Biggest issue is the noise. High usage will cause the 80mm fans, at least in the power supply, to ramp up to unmanageable levels. I'll probably end up fitting an ATX power supply in there.
I could cram everything in a standard computer case with lots of drive bays. But I want to eventually have an actual server closet/room in the future.
Gavin Price
>Using WiFi on a server What a retard
>I live in a very old house with a very shitty electric system. Has nothing to do with internet and network connection types
>PLC doesn't work and I get 11mb/s with wifi so yeah, I'll take wifi. You have no idea what a PLC is.
Forgot to add to my last post >If I use wifi its because I don't have the choice >its because I don't have the choice >I don't have the choice >the choice
Cheap switches and routers can be bought new or used. Ethernet cables are cheap.
Nathaniel Scott
Yeah, that's pretty much the path I went down. Throw in an ATX power supply and you'll be good to go. I threw everything into a 12U rack and DESU it's not loud at all.
Aiden Reed
does anyone know of a good 19" case with less than 380mm depth and more than 4 bays?
Lucas Murphy
Can I make a NAS from my old laptop with two 2.5" drive slots? Or do I need a third drive to install the operating system on?
Lincoln Foster
Can you? Yes. Should you? If it were me I wouldn't do it as anything other than a project, any money you throw at it is better spent on building something dedicated from scratch. As far as a drive for the OS you can run it off a USB stick or SD card.
Eli Hughes
I like this guy. He gets it. The rest of you idiots need to RTFM.
Evan Nelson
>12 drive RAID5
Jesus fucking christ, why. Do you hate redundancy?
Hunter Harris
Okay great, nice LVM setup. Now take a snapshot.
Charles Reyes
I've got two ML350p G8s with 32 cores and 256GB of RAM in total, but no drives. Both have SFF cages and I'm simply not willing to buy 2.5" drives. I've already got 4x4TB and 3x2TB 3.5" drives too. I'm a bit on the fence here. Should I buy a bunch of SAS extension cords and store my HDDs outside of the servers, or just sell them and buy something more modular that isn't proprietary as fuck? LFF cages aren't really a viable option due to the price.