I've been thinking about getting a NAS for home use...

I've been thinking about getting a NAS for home use. I want to do RAID 5 or 6 and store all of my porn and illegally downloaded movies and chinese cartoons on it. The drives in my computer are starting to fill up quickly and I need more storage space. I also want to be able to stream my shit to my TV, so Plex support is important.

Most likely going to buy the HDDs separately so I can scale up storage space as I need it. Probably going to get four 4TB WD Reds to start out and do a RAID 5 for 12TB total storage, and have an external drive connected for backing up important stuff.

what is a good prebuilt NAS computer?

should I try and make my own with FreeNAS?

Attached: file.png (640x360, 104K)

dont raid 5, do a zfs raid z2. make your own, its usually cheaper unless you get a deal on a freenas box. search up how zfs works and how you could expand first tho to make sure its futureproof.
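for reference a raidz2 pool is basically a one liner, something like this (drive names are placeholders, check lsblk first):

# four drive raidz2 pool called "tank"
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# check layout and health
zpool status tank
# lz4 compression is basically free
zfs set compression=lz4 tank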

I got a free desktop pc at work. Bought some 3.5 hdds, a pci wifi card and a 3.5 hotswap bay and voilà, sftp/ssh/virtualization homeserver for cheap burger bux

please ignore this thread I fucked up and made a new thread instead of replying to /sqt/

I'll look into it thanks.

whatever you do stay out of the home server general if it ever crops up again, I've never seen such a festering pile of autism

you dont enjoy being told you dont really run servers because you dont run two virtualised ntp servers?

You can run your own on a Linux distro just as well. For the amount of data you can probably get away with EXT4. Read a few things on mdadm and you should be good to go.

digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04
wiki.archlinux.org/index.php/RAID

The only other thing I'd suggest is to use a separate boot drive if you want to keep your initramfs simple.
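The basic flow from those links is short. A minimal sketch, assuming four data drives at /dev/sdb through /dev/sde (placeholder names, don't copy them blindly):

# 4-disk RAID5, i.e. OP's 4x4TB plan
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# watch the initial sync
cat /proc/mdstat
# plain ext4 on top, then mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage
# save the array definition so it assembles on boot (Debian/Ubuntu paths)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u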

> (OP)
>I got a free desktop pc at work. Bought some 3.5 hdds, a pci wifi card and a 3.5 hotswap bay and voilà, sftp/ssh/virtualization homeserver for cheap burger bux


Wifi for fucking file storage? You must be retarded.
user, make sure your NAS is wired if you're storing all data on it.

Attached: 1538947650922.gif (289x149, 333K)

>You must be retarded
I live in a very old house with a very shitty electric system. PLC doesn't work and I get 11mb/s with wifi so yeah, I'll take wifi.

You are retarded.
Wire with ethernet.

Shut the fuck up you fucking faggot. If I use wifi its because I don't have the choice. I probably have 10x your knowledge of computer systems and network administration. Stop calling other people retarded and step up your post quality, you fucking parasite of a human being.

>what is a good prebuilt NAS computer?
Literally anything that can hold a few HDDs.

>Probably going to get four 4tb WD reds to start out, and do a RAID 5 for 12tb total storage

I think buying 8TB externals and ripping the hard drives out is the best bang for your buck.
3 of those in raid5 would give you 16TB of space, but ideally 4 drives in raid10 due to better performance.

Then get a 256+GB SSD

Fuck freenas. ZFS is shit and hasn't aged well. MDADM+LVM is god tier.

SSD cache your raid array with bcache/dmcache/flashcache for even better results.

Install ubuntu/mint/xubuntu, install XRDP, Chrome Remote Desktop, set up NFS and SMB, Plex, Kodi and whatever else you want.

>Old/Cheap/whatever PC
>4 x 8tb hdds from external harddrives
> Raid10 using MDADM + LVM (rough commands at the end of this post)
> 256GB+ pcie SSD
> Half of SSD used for OS, the other half for bcache/dmcache/flashcache
> XRDP/Chrome Remote Desktop/VNC
> SMB/NFS/OwnCloud
> Plex/Kodi
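If you go that route, the RAID10 + LVM part might look roughly like this (device names made up, adjust to whatever lsblk shows):

# 4-disk mdadm RAID10
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
# LVM on top so you can carve out and resize volumes later
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -l 100%FREE -n media storage
mkfs.ext4 /dev/storage/media
mount /dev/storage/media /mnt/media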

>what is a good prebuilt NAS computer?
QNAP / Synology have relatively good features for prebuilt machines.

You're still paying a premium vs just assembling a Linux machine and they do not NEARLY have the amount of software you can easily run with a package manager.

> do a zfs raid z2
Don't. Not only does ZFS require a lot more hardware [faster CPU, more RAM, maybe a cache SSD] than Linux mdadm RAID to achieve the same performance, you also cannot add drives one by one and grow the array on ZFS like your plan describes.

Mdadm has the required tooling and it's well tested [you can even switch between most RAID levels]; ZFS does not.

>SSD cache your raid array with bcache/dmcache/flashcache for even better results.
If you're using LVM already, why not use lvmcache? It's one of the best solutions.
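lvmcache is only a couple of commands once the SSD is in the same volume group. A rough sketch, assuming the VG is called "storage" with the array as its PV and the SSD is /dev/sdf (names made up):

# put the SSD into the existing volume group
pvcreate /dev/sdf
vgextend storage /dev/sdf
# create a cache pool on the SSD and attach it to the media LV
lvcreate --type cache-pool -L 100G -n mediacache storage /dev/sdf
lvconvert --type cache --cachepool storage/mediacache storage/media
# if you ever want it gone:
# lvconvert --uncache storage/media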

Low power consumption while providing reasonable drive / network speed is usually still a measure of it being "good".

Throw your old gaymen machine that consumes an extra 100W at it and you'll be paying something like an extra US$100-200 / year to operate the thing if you run it 24/7 like most people would.

I don't think anyone who plays with toys is old enough to have an "old" game machine.

>LVM is god tier
Sure, if you're in 2002 and have 60GB drives.
LVM is shit. Your ZFS sucks ass because you didn't set it up correctly, and are probably blaming shitty SMB.

What toys? It's a media dump.

>Your ZFS sucks ass because you didn't set it up correctly
No, he's correct. ZFS sucks in comparison. Takes too much hardware for no good reason, can't do what OP asks (add drives to the array as-needed).

There is no element of "setting it up wrong" other than the ZFS user's inclination to throw a fuckton of RAM and CPU and a caching SSD at it to fix its shitty performance whereas mdadm + lvm runs happily even on a low power potato machine with like 256MB RAM.

>Don't. Not only does ZFS require a lot more hardware [faster CPU, more RAM, maybe a cache SSD] than Linux mdadm RAID to achieve the same performance,
Processor is going to be based on what the server is doing. If you're transcoding video, or hosting virtual machines, you will need a better processor.
SSD isn't needed at all. ECC Ram isn't needed. The 1GB of Ram to 1TB of space isn't really needed after 16GB of Ram. These are all old memes.
>you also cannot add drives one by one and grow the array on ZFS like you describe your plan.
You certainly can. That's the whole point of using Z2 or Z3, in case a drive fails during the resilvering process. Read the fucking documentation mong.

>Processor is going to be based on what the server is doing.
Not with ZFS.

I guess you can make it run on the side with some large transcoding-on-the-fly machine [why? make your client devices decode current media files, save that 24/7 running transcoding monster's power] and 16GB of RAM, but the point is that it will not work well on some Atom / Pentium with half a GB or one GB of RAM. Which however will serve files off mdadm just fine.

Likewise, even if you do 16GB RAM / bloated monster machine, it still won't run data off HDD as fast as mdadm, it'll only about match it once you fix the worst flaws with an additional buffer/cache SSD and even then latencies are generally higher.

> You certainly can.
You can't.

> That's the whole point of using Z2 or Z3
No. RAIDZ2 provides you with redundancy like RAID6 or an x,2 EC pool on Ceph. But it still does not permit adding drives one by one.

ZFS users still have to find the storage space to create the new array with the new drive count and then move data over and delete the old array to add one drive.

On mdadm and obviously ceph, you just add one to the array / pool, it redistributes the data, done.

PS: Growing ZFS RAIDZ arrays has been a planned feature for years now. Never got implemented.
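Growing by one disk on mdadm really is just this (assuming /dev/md0 is the existing 4-disk array and /dev/sdf is the new drive, placeholder names):

# add the new disk, then reshape to use it as a data disk
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5
# reshape runs in the background
cat /proc/mdstat
# once it finishes, grow the filesystem into the new space (ext4 can do this online)
resize2fs /dev/md0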

>make your client devices decode current media files, save that 24/7 running transcoding monster's power
What? That's the whole point. Transcoding at the source is faster than making a tablet fucking decode it.
Server wouldn't be doing this 24/7, only when serving it.
You don't run a server on Atom/Pentium faggot. You want to transcode/decode on that shit? No.

Yes, you can swap drives out. You can do it with the system running. It's enterprise level server software. You obviously don't understand how to use it, or haven't read the documentation. Just another faggot that repeats shit he reads here.

>Growing ZFS RAIDZ arrays
You grow them by increasing the size of the disks, or adding more vdevs to the pool.
Read the manual.
Enterprise storage is done by the rack, not the single disk. Enjoy your hobby software.

I'm talking about games. Games are toys.

>plex

Attached: 1512706429944.jpg (640x480, 293K)

waiting for this to come out
I hope the price will be reasonable

Attached: rs1619xsp.png (2800x1575, 465K)

Why?

Imagine not being able to drill a hole for a cable.

4 bays? LMAO

He has a degree in network administration.
He doesn't have a degree for drilling holes or pulling cable.

This is how college works now.

Good.
Let him pay for that degree by hiring someone to do some basic bullshit for him

This. Synology is a fucking scam.

yes 4

Attached: AF5BA692-D0E3-4AAB-B7BE-1AFB4418AA52.png (678x445, 151K)

why not?
just don’t mention some diy bull crap

Do you have a rack?

>You grow them by increasing the size of the disks, or adding zpools together.
Can't grow it by just adding one drive to the raidz2 array. That's the whole problem, and no, the other methods are not equal to it. They have constraints like not working okay with just one drive added, or acting more like JBOD.

> Enterprise storage is done
*Extremely* rarely done with ZFS [and then usually because the sysadmin doesn't know his/her shit] because the latencies and terrible scaling at higher drive counts make it pretty useless by the rack.

Also, we weren't really discussing enterprise storage in the context of OP, but a media home storage setup that can grow by the disk as-needed.

That said, if you want enterprise storage, go with Ceph or MooseFS or such. Then you can add a drive, server, a whole rack... and it actually scales okay, unlike ZFS.

>Shingled Magnetic Recording
You better know what you're getting yourself into m8.

ZFS is literally enterprise. It was never developed for anything else.

If you want to add drives to upgrade storage, you do it one at a time. That's why you use Z2/3. You replace the drive, and once the pool is done resilvering, replace the next.
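In practice that swap-every-disk dance looks like this (pool and disk names are placeholders):

# let the pool pick up the extra space automatically
zpool set autoexpand=on tank
# replace disks one at a time, waiting for each resilver to finish
zpool replace tank old-disk new-disk
zpool status tank   # wait for the resilver to complete before the next disk
# note: the extra space only shows up once every disk in the vdev has been replaced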

Grow up faggot

>Your ZFS sucks ass because you didn't set it up correctly, and are probably blaming shitty SMB.

Do a benchmark comparison using the exact same disks.

MDADM RAID0 + LVM2 + Ext4 > ZFS RAID0
MDADM RAID1 + LVM2 + Ext4 > ZFS RAID1
MDADM RAID5 + LVM2 + Ext4 > ZFS RAIDZ
MDADM + Flashcache + LVM2 + EXT4 > ZFS ZIL & L2ARC
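If you actually want to run that comparison yourself instead of taking anyone's word for it, fio against the mounted filesystem is enough. A rough sketch (path and sizes are placeholders; drop --direct=1 on ZFS since it may not support O_DIRECT):

# sequential read throughput
fio --name=seqread --filename=/mnt/storage/fio.test --rw=read --bs=1M --size=8G --direct=1 --ioengine=libaio --runtime=60 --time_based
# random 4k reads at some queue depth
fio --name=randread --filename=/mnt/storage/fio.test --rw=randread --bs=4k --size=8G --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based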

> pointlessly overspec'd 1U rackmount Synology "NAS"
>4x15TB SMR drives
Is the problem you are trying to solve that you got no Linux/BSD skills and too much money? This setup is beyond weird.

>I don't understand the difference between COW filesystems and how RAID works with LVM, or their use cases.

I hope more of you meme faggots infiltrate businesses. My future is bright, and I don't even specialize in this shit, it's a hobby.

Don't do raid5/6 if you use large capacity HDDs. (larger than 3TB, even that is pushing it IMO)

New NASes are overpriced. But you can find something cheap used on ebay or yahoo jp auctions.
Got a Netgear Readynas Pro 2 for something like $80 last month. You can install the latest Readynas OS on it too. Put two 4TB HGSTs in it, and have things like logitechmediaserver and transmission running on it 24/7.
It's a great machine, better than the pseudo NAS I made with a raspi and a 2TB wd my passport.
Don't go for the hurr durr repurpose an old laptop/pc as a NAS. It's retarded and your electricity bill will explode along with that laptop power brick that will burn your feet.

Attached: 98w43eq7598u435.png (482x480, 432K)

will probably get some rack later
currently I keep everything on wall shelves

Attached: 20181004_185250.jpg (2560x1440, 1.07M)

>that hanging dongle

>that ubiquiti setup
Ah, I see you are a man of great taste.

>If you want to add drives to upgrade storage, you do it one at a time. That's why you use Z2/3. You replace the drive, and once the pool is done, replace the next.

>Pool Degraded
>Replace Disk
>Pool rebuilds
>Another disk fails
>Replace Disks
>Pool rebuilds
>Another disk fails
>Replace Disks
>Pool rebuilds
>Another disk fails
>Replace Disks
>Pool rebuilds
>Another disk fails
>Replace Disks
>Pool rebuilds

I'm constantly changing out fucking enterprise disks on enterprise servers because ZFS has so much god damn overhead that the disks get thrashed.

Used ZFS with proxmox, and the performance of our VMs fucking sucked. 6 NVMe drives in ZFS RAID0 can't even push 1.5GB/s, it's fucking stupid.

I'm using 4 3tb drives in raid 10. I think it's the best of both worlds and it works for me.

I literally just have a raspberry pi hooked up to a SATA-to-USB adapter with a 500GB HDD in it.

>Doesn't understand HPC requirements

>ZFS is literally enterprise. It was never developed for anything else.
Maybe it was the intent, but they didn't succeed for shit. It has been around for a long time and basically none of big data / big processing uses ZFS.

It's simply because ZFS RAIDZ [and deduplication and other features] scales like ass no matter what [no, really, it's already a very apparent difference at like 10-20 drives - long before you get to full racks filled with drives], runs like ass unless you throw very much higher spec'd servers at it, has weird latency spikes of 200ms and such even then [at least on BSD and Linux, no clue about Solaris derivatives], and so on.

The reputation that it worked well on large data probably came from the age where Solaris was still very important and the other solutions mostly actually had weird trouble, ridiculous fsck and rebuild times and so on with a few TB. These days ZFS is just one of the worse choices, and it's mainly the BSD-using freenas crowd that still thinks it's avant-garde or something.

>chrome remote desktop
why do this when ssh and vnc exist

What do large datacenters do? Like AWS, or Azure? Do they even use raid? What filesystem? It must be pretty robust, but is it even possible to scale that down to a consumer grade 2+ bay nas?

This guy knows whats up.

ZFS's compression/deduplication/checksumming made sense back when HDDs were slow as fuck and bottlenecking the system. But it's 2018 now, and we've got SSDs, and ZFS wasn't written with SSDs in mind.

Learn how to build a pool faggot. It's like you don't even know the name of the nigger who created this place.
Protip:it's concrete.

I'm in huge data, and we use it. You dumb faggots don't understand how to segregate and combine.

That's fine, I like it this way. The dumber you are, the longer I make money and advance.

I didn't have the rebuild/drive failure issues on ZFS, but the performance DID fucking suck here too. It was so bad that I suspected some weird hardware incompatibility and tested it on more [worse] machines to get a feel for how it should run and also looked / asked people for performance figures.

But no, it's just ZFS that sucks.

Because it performs better, punches through firewalls, works on mac, linux, windows, android, ios, chromeos.

SSH +VNC is fine, but chrome-remote-desktop is a hidden gem that works better.

> I'm in huge data, and we use it.
That's only possible if your CTO is an idiot and you really don't care that monster machines with 20 enterprise drives run at the speed of like 5 consumer drives unless you add a sizable fucking buffer/cache SSD, at which point you can run them at the speed of maybe 6-8 enterprise drives.


> You dumb faggots don't understand how to segregate and combine.
You are the dumb faggot that thinks you need to "segregate and combine" and use monster computers with tons of RAM and all the other desperate shit because you're using ZFS, apparently even in production. Holy shit.

Sure, Ceph for example also has you fuck with parameters and isn't super nice in all situations either, but its baseline untweaked performance on some basic bitch ext4/xfs is already FAR better than ZFS with SSD cache and all the hardware and tweaks you can throw at it.

And at a smaller scale, mdadm just already werks better and more reliably so than ZFS on like 1/5 of the hardware.

Trust me dude, nobody wants to work with an attitude like yours.

You ZFS shills got baited hard. Do you really think that someone who is considering buying an off the shelf solution has the time, patience, or skill to get it set up properly? Also, what the fuck are you going on about, pushing SSDs for RAID without talking about networking equipment as well?

user just take the fucking middle ground and build yourself a nice Ubuntu/Debian normie box, and read the fucking mdadm links posted above. If you have an old box lying around, buy this extra shit and you should be good to go.

Get some cheap 10gig NICs, they will work with Linux/Windows (optional: get a dual port for the NAS if you want to set up bonding, rough commands at the bottom of this post):
ebay.com/itm/MNPA19-XTR-10GB-MELLANOX-CONNECTX-2-PCIe-X8-10Gbe-SFP-NETWORK-CARD/131634470127?hash=item1ea6069cef:g:7HgAAOSwULVb2dcx:rk:2:pf:1&frcectupt=true

If you aren't using a DAC to connect your NAS to your desktop/workstation then you want a switch that can handle it:
amazon.com/TP-Link-JetStream-24-Port-Ethernet-T1700G-28TQ/dp/B01CHP5IAC

If you need more SATA ports then get something like this:
amazon.com/IO-Crest-Controller-Non-Raid-SI-PEX40064/dp/B00AZ9T3OU/ref=sr_1_4?ie=UTF8&qid=1541074991&sr=8-4&keywords=sata pcie

Buy a big ass case:
amazon.com/Rosewill-Rackmount-Computer-Pre-Installed-RSV-L4000/dp/B0056OUTBK/ref=pd_sbs_147_25?_encoding=UTF8&pd_rd_i=B0056OUTBK&pd_rd_r=90a1f13c-ddd3-11e8-852c-1fd6a0133398&pd_rd_w=j3Mgv&pd_rd_wg=Eg66h&pf_rd_i=desktop-dp-sims&pf_rd_m=ATVPDKIKX0DER&pf_rd_p=7d5d9c3c-5e01-44ac-97fd-261afd40b865&pf_rd_r=D04XRBE32EEFNW6828ES&pf_rd_s=desktop-dp-sims&pf_rd_t=40701&psc=1&refRID=D04XRBE32EEFNW6828ES

Finally, hot swap trays: these are something you really should wait for deals on. I've gotten them for 50% off by being patient.
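For the bonding mentioned above, the quick and dirty iproute2 version is something like this (interface names and the address are placeholders; 802.3ad needs a switch doing LACP, otherwise use balance-alb). It's not persistent, so put the equivalent into netplan/ifupdown once it works:

# create the bond and enslave both 10gig ports
ip link add bond0 type bond mode 802.3ad
ip link set enp3s0f0 down
ip link set enp3s0f1 down
ip link set enp3s0f0 master bond0
ip link set enp3s0f1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0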

/SQG/
I had a prof at uni telling us how an ex-student was a dumbfuck imbecile. Long story short, he supervised a student that did not set up RAID properly.
So according to him raid5 is bad with SSDs, because you won't get that much speed out of it.
He never gave the solution: what do you do when you have a bunch of HDDs and an industrial level SSD? He didn't reason out the why, and he didn't tell us the proper way to set up the RAID in that scenario.

redpill me

You can affect the raw speed and latency by doing RAID5/6 over SSD, but the impact is by no means prohibitively large in most situations. Somewhat sequential writes/reads should be fairly decent with even completely default mdadm RAID5/6.

The amount of IOPS likely won't scale well by default however. If your SSD are running databases or hash-addressed small file storage or something, adding another drive to the array may result in nearly no increase in performance. I think that is tweakable to a worthwhile extent, but I haven't really done that yet.
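If anyone wants to poke at that, the usual md knobs for parity RAID live in sysfs. Not promising they fix IOPS scaling on SSDs, but they're what people usually try first (md0 is a placeholder):

# bigger stripe cache for RAID5/6 (memory used scales with the number of disks)
echo 8192 > /sys/block/md0/md/stripe_cache_size
# allow more threads to handle parity work
echo 4 > /sys/block/md0/md/group_thread_cnt
# check what's currently set
cat /sys/block/md0/md/stripe_cache_size /sys/block/md0/md/group_thread_cnt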

>prof at uni, telling how an ex student was a dumbfuck imbecile
Sounds like the professor is bad at teaching.

>what to do when you have a bunch of hdd, and an industrial lvl ssd? He didnt reason the why, he didnt tell us the proper way to set it up the raid in that scenario.

You make a hybrid array using MDADM and use the SSD for caching.

Like for an enterprise environment where you had 12 HDDs and two SSDs:
>12xHDDs in raid5 + 1-2 spares using MDADM
>2xSSDs in raid10 far2 using MDADM
>Use half of the SSD array for caching the HDD array by using either flashcache/dmcache/bcache (bcache example below)
>Use LVM on top of everything
>Walk away and never have to look back

BTW, about what to do if you
> have a bunch of hdd, and an industrial lvl ssd
I'd say just mdadm RAID6 + LVM the HDDs and then optionally add the SSD as lvmcache if you want such a thing. Or just use it elsewhere.
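If you'd rather do the bcache route from the list above instead of lvmcache, setup is also short. A sketch with made-up device names; note this formats the backing device, so do it before putting data on the array:

apt install bcache-tools
# md0 = backing HDD array, sdf1 = SSD partition used as cache
make-bcache -B /dev/md0 -C /dev/sdf1
# the cached device shows up as /dev/bcache0, format and mount that
mkfs.ext4 /dev/bcache0
mount /dev/bcache0 /mnt/storage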

Most likely a bunch of different systems depending on use.
There's object storage (AWS S3, Azure blob). It's used for dumping single files with metadata. Don't expect random byte access.
Block storage (EC2, Block). Your regular old filesystem can be formatted over block storage. Doesn't scale massively like object.
Table, database, etc. Not too familiar with these but they're different forms of data storage delivered as a service.

> What does large datacenters do? Like AWS, or Azure?
Have a look at Ceph, OpenStack, or maybe the more SME-oriented MooseFS or GlusterFS. That's the general idea of what AWS or Azure do. They don't use these exact solutions, but the other entities that wanted a cooperative open sauce implementation of the same kind of thing [like CERN and so on] do use them.

You'll see both OpenStack and Ceph really got a bunch of somewhat distinct components that handle the different ways of accessing data in the storage pools.

> is it even possible to scale that down to a consumer grade 2+ bay nas?
Possible for some of these, but generally pointless unless you got more of these 2+ bay NAS.

On NAS, the typical solution actually used by almost anyone is to use Linux md/dm RAID. QNAP does it, Synology does it, basically everyone does it.

OP here

I have a spare 250gb 850 EVO, could I use that in an array with like 4-6 4TB drives for caching? I might buy a 500GB SSD, clone my current OS onto it, and use the 250gb evo in my computer as well.

I've never used linux in any major way before but I'm kind of interested in setting up this mdadm stuff.

Most likely YAGNI and it will wear out fast. You should be able to saturate your gigabit network (in sequential reads) with 6 drives in RAID6.

>I have a spare 250gb 850 EVO, could I use that in an array with like 4-6 4TB drives for caching?
Yes. With most of the software mentioned, you can add/remove SSD caches as you please.

> I'm kind of interested in setting up this MDAM stuff.
Sure. That said, mdadm is a rather boring "just werks" affair overall, don't expect much entertainment from that.

Create a RAID5 array from drives x,y,z, it does it and done. Tell it to change to RAID6, it does it eventually and done. Add a drive to the array, it rebuilds the structure and done.
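For what it's worth, the "change to RAID6" step looks like this (you need one extra disk for the second parity and a backup file somewhere NOT on the array; device names are placeholders):

# add the disk that will carry the extra parity
mdadm --add /dev/md0 /dev/sdg
# reshape 4-disk RAID5 into 5-disk RAID6
mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-reshape.bak
cat /proc/mdstat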

YAGNI sure, but it won't wear out particularly fast.

It's overall basically like writing/reading to this SSD would be without an array underneath.

The 12/15 bay case is exactly what I'm looking for. Biggest issue is the noise. High usage will cause the 80mm fans, at least in the power supply, to ramp up to unmanageable levels. I'll probably end up putting an ATX power supply in there.

I could cram everything in a standard computer case with lots of drive bays. But I want to eventually have an actual server closet/room in the future.

>Using WiFi on a server
What a retard

>I live in a very old house with a very shitty electric system.
Has nothing to do with internet and network connection types

>PLC doesn't work and I get 11mb/s with wifi so yeah, I'll take wifi.
You have no idea what a PLC is.

Retard
>I probably have 10x your knowlegde on computer systems and network administration,
>Still has no idea what a PLC is
Here is a hint
google.com/search?source=hp&ei=7yLbW7GYGcGosgX1sJ7QDw&q=PLC&btnK=Google Search&oq=PLC&gs_l=psy-ab.3..0l10.735.1087..1840...0.0..0.153.420.3j1......0....1..gws-wiz.....0..0i131.udeLcNR5kUg

Forgot to add to my last post >If I use wifi its because I don't have the choice
>its because I don't have the choice
>I don't have the choice
>the choice

Cheap switches and routers can be bought new or used. Ethernet cables are cheap.

Yeah, that's pretty much the path I went down. Throw in an ATX power supply and you'll be good to go. I threw everything into a 12u rack and DESU it's not loud at all.

does anyone know of a good 19" case with less than 380mm depth and more than 4 bays?

Can I make a NAS from my old laptop with two 2,5" drive slots?
Or do I need a third drive to install the operating system on?

Can you? Yes. Should you? If it were me I wouldn't do it as anything other than a project; any money you throw at it is better spent on building something dedicated from scratch. As for a drive for the OS, you can run it off a USB stick or SD card.

I like this guy. He gets it. The rest of you idiots need to RTFM.

>12 drive RAID5
Jesus fucking christ why. Do you hate redundancy?

Okay great, nice LVM setup.
Now take a snapshot.
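LVM snapshots do exist, they just need space reserved up front instead of being free like ZFS ones. A minimal sketch (VG/LV names made up to match the earlier example):

# snapshot the media LV, reserving 20G for changed blocks
lvcreate --snapshot --size 20G --name media-snap /dev/storage/media
# mount it read-only somewhere and pull a backup from it
mount -o ro /dev/storage/media-snap /mnt/snap
# throw it away when done
umount /mnt/snap
lvremove /dev/storage/media-snap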

I've got two ML350p G8s with 32 cores and 256gb of ram in total, but no drives. Both have SFF cages and I'm simply not willing to buy 2,5" drives. I've already got 4x4TB and 3x2TB 3,5" drives too.
I'm a bit on the fence here. Should I buy a bunch of SAS extension cords and store my HDDs outside of the servers, or just sell them and buy something more modular that isn't proprietary as fuck? LFF cages aren't really a viable option due to the price.

Attached: milftreefiddy.jpg (940x587, 107K)

KEK

>The 1GB of Ram to 1TB of space isn't really needed after 16GB of Ram.
>16GB of RAM in personal NAS

>16GB of RAM in personal NAS
All of my devices have at least 16GB of ram, except my phone.

Anybody use SnapRAID? It feels pretty practical for write-once media storage.
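It's basically a config file listing data disks plus one or more parity disks, then a sync on a schedule. A rough sketch with made-up mount points:

# write a minimal /etc/snapraid.conf
cat > /etc/snapraid.conf <<'EOF'
parity  /mnt/parity1/snapraid.parity
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
EOF
# run these from cron or a systemd timer
snapraid sync    # compute/refresh parity for new and changed files
snapraid scrub   # periodically verify data against parity
snapraid status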

What a waste of money.

Roll your own is fine. Did that for years. Then discovered Synology and would never go back.

>Here is a hint
What did he mean by this?

Attached: f4035b50-1e50-4fe8-8d9f-39b5b9bb5e95[1].png (998x509, 47K)

Buffalo Terastation.