/hsg/ Home Server General

Home server thread
Forever Archived Edition

NAS is how most people get into this. It’s nice to have a /comfy/ home for all your data. Streaming your movies/shows around the house and to friends is good feels. Repurpose an old desktop, buy an SBC, or go with cheap used enterprise gear. Lots of options, and there’s even a flowchart. Ask.

/hsg/ is about learning and expanding your horizons. Know all about NAS? Learn virtualization. Spun up some VMs? Learn about networking by standing up a pfSense box and configuring some VLANs. There’s always more to learn and chances to grow. Think you’re godtier already? Set up OpenStack and report back.

>What software should I run?
install gentoo. Or whatever flavor of *nix is best for the job or most comfy for you. Emby to replace Netflix, Nextcloud to replace Google, Ampache to replace Spotify, the list goes on and on. Look at the awesome-selfhosted list and ask.

>Datahoarding ok here?
YES - you are in good company. Shuck those easystores and flash IT mode on your H310. All datahoarding talk welcome.

>Do I need a rack and all that noisy enterprise gear?
No. An old laptop or rpi can be a server if you want.

>A T T E N T I O N:
>The /hsg/ wiki is up!
hsg.shortlink.club/

Please expand it. Also, don't use your real name or a password you use anywhere else when you register; preferably use cock.li or something similarly anonymous. Or just email the admin with the username and password you want.

>Links
server tips: pastebin.com/SXuHp12J
github.com/Kickball/awesome-selfhosted
old.reddit.com/r/datahoarder
labgopher.com
reddit.com/r/homelab/wiki/index
wiki.debian.org/FreedomBox/Features
>Chat
irc.rizon.net #_hsg_
riot.im/app/#/room/#homeservergeneral:matrix.org

previous thread:

Attached: 1557243399670.jpg (1200x368, 166K)

Let's try this again. What did /hsg/ buy recently?

>me
>odroid hc1
>pic not mine

I hope Armbian is comfy. Is SSH active by default? On Raspbian I had to drop a zero-byte file called ssh into the boot partition first to flip it on.

Attached: odroid_hc1_nas.jpg (2428x1775, 698K)

>What did /hsg/ buy recently?

hmm not much recently - I am kinda poor. The last server-related thing I got would be 9 x 16GB USB sticks for my Pi cluster to make a replicated Gluster volume.

Which of the following CPU options represents the best value for server use at these prices?

Ryzen 1600 - $137 USD
Ryzen 2400G - $154 USD (no gpu required)
Ryzen 2600 - $168 USD

quick skim of docs.armbian.com/User-Guide_Getting-Started/

suggests that sshd is indeed running by default

Maybe this isn't entirely on-topic for this thread, but this would be the group of people most likely to know about it.
I'm going to be running a bunch of cat6 cable in my house while there is opportunity from some renovations happening. I'm going to centralize a bunch of my stuff in one location, and get a small rack mount for my crappy server and probably a switch, and run 2 cables to every room from there.

So I'm going to need to buy a fair bit of cat6 cable (probably a 300m roll, with the extra used for making patch cables), but looking around I can see some pretty wildly different prices between them. What's the sort of shit I should be looking for when buying cat6 cable? Is all of the cheap shit just unshielded chink garbage?

Attached: 1460577020386.png (227x367, 38K)

>Is all of the cheap shit just unshielded chink garbage?

iirc you generally want shielded for longer runs, unshielded is fine for short/patch runs.

I would go for shielded if you can afford it - it's going to be serving you for years, so get the best you can.

From my own experience of wiring my house with cat6 - get a cable tester, will save you a lot of hassle.

There is a non-negligible chance that 10GbE becomes popular within the lifetime of that cabling, so popping in a cable grade that works for that may save you a lot of hassle down the line.

1600, I guess. Maybe a 1700 if you can get one for around $160.

> What's the sort of shit I should be looking for when buying cat6 cable?
New crimping tool.
I guess cat5 will work too for runs up to 50m.

>no gpu required
If it's not a pure headless server but some HTPC NAS hybrid, this is the best option.

Source on pic?

>Is all of the cheap shit just unshielded chink garbage?
Unshielded Chinese cables are usually okay, and depending on the situation easier to lay down.

Cat 6 shielded usually isn't THAT stiff yet, but if you were going with muh maximum shielding Cat8 or Cat7, it is sometimes actually harder to install.

Thank you friend! A very sensible choice by the devs. Some headlessbros don't want to use UART or screens for trivial tasks.

The APU is good. Not sure if it transcodes, but integrated GPUs often offer HW acceleration for codec converting tasks.

you should still spoil yourself once in a while
i kinda save on food money or on holidays .w.

It's for an Unraid NAS. Visuals will not be necessary aside from installation and the occasional tweak to the BIOS when needed.
Is it really the best option when I'm missing out on the extra cores/threads from a 2600 or a 1700? Installing a low-profile GPU is cheap as chips. Does power consumption become a factor? Will a non-graphics Ryzen not boot up unless it has a GPU?

>The APU is good. Not sure if it transcodes, but integrated GPUs often offer HW acceleration for codec converting tasks.
Wait what? Are there any benchmarks for this? Would make my decision a hell of a lot easier.

I should, but there's a long list of things that need to be bought - need a new phone soonish, I've taken on a treasurer role with a conservation group so I need a lockable metal filing box (and filing stuff) - and I still have a server with a dead motherboard I need to do something with.

Saying that, I've just got a design job that should be ~£1k and a couple of other ~£50 ones, so I might get myself a pressie.

got an i3 laptop with a dead display that might do for some jobs too.

I've been telling people for several threads now to roll with the 2400G

>reddit.com/r/PleX/comments/ahhlmv/ryzen_5_2400g_server_success/

Attached: bvAmUlP.png (1916x932, 1.41M)

true - the downside is, I imagine, folks not too knowledgeable about such things leaving sshd open with the default password.

On first login it does force you to set up an account and password, according to the docs.
Such users could be protected by setting
PermitRootLogin no
in sshd_config.
But yes, I agree with all your points.
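For the extra-cautious, a minimal sketch of /etc/ssh/sshd_config hardening (assuming stock OpenSSH; the service name varies by distro):

PermitRootLogin no          # no direct root logins
PasswordAuthentication no   # key-only auth, so put your key in ~/.ssh/authorized_keys first

Then systemctl restart ssh (Debian/Ubuntu) or systemctl restart sshd (most others) to apply.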

pcpartpicker.com/list/cLwN4q

>windows 10
>no ryzen igpu-accelerated transcoding documentation on plex website

Attached: 1436713903844.jpg (250x241, 24K)

>It's for an Unraid NAS.
No recent experience with that, not my thing when I last tried it.

> when I'm missing out on the extra cores/threads from a 2600 or a 1700?
Are you? What for?

> Does power consumption become a factor?
If you leave a typical GPU plugged in? That tends to increase the power consumption of the machine (minus drives) by like 50%, yes.

> Will a non-graphics Ryzen not boot up unless it has a GPU?
No, usually you can boot... but you may need a GPU from the start to set your BIOS up so that it doesn't halt when it doesn't detect a GPU or something.

>4k Ultra HD transcoded to 1080p with Ryzen 5 2400G
>(newest Vega drivers at the time manually installed)


mimimimimi!

forums.unraid.net/topic/70114-amd-raven-ridge-support/?tab=comments#comment-642197

>The accelerated Vega iGPU drivers are in Linux 4.15 which would be needed for Plex transcoding presumably. We only include the Intel iGPU drivers at this time but we're open to including Vega iGPU drivers when we upgrade kernels, Plex supports it and the Vega iGPU drivers don't conflict with normal dedicated AMD video cards that would normally be used for passthrough.
Tell the Plex folks to step up their game already!

Attached: 5b9178420b6530b17e971635ebcce1e58498910c.png (1556x844, 198K)

>pcpartpicker.com/list/cLwN4q
>that build
>850W PSU
>possibly incompatible motherboard without a bios update

Attached: b36.png (420x420, 7K)

>The APU is good.
Well, this is already a gaymen-tier machine anyhow.

But it's one without too much extra power consumption; if you need a setup with processing power and/or a decent amount of SATA/PCI bandwidth, you can't really go much lower in terms of power anyhow.

Why are you shitposting pepe, /b/?

The build obviously works for the guy, else he wouldn't post screenshots.

Yes, the exact build you complain about runs fine in pic related

Attached: wjruvlirhoy01.jpg (1175x848, 593K)

>Tell the Plex folks to step up their game already!
Huh... doesn't this say the Unraid people are the ones that need to do their job and update the kernel and include the newer drivers?

[I don't get why some Linux distros don't offer more or less all current linux-stable kernels anyhow.]

>estimated wattage: 195W
>PSU: 850W

Attached: 1407119379594.png (568x479, 93K)

>[I don't get why some Linux distros don't offer more or less all current linux-stable kernels anyhow.]
Me neither, but there were some changes to filesystems in the kernel. If you care I can look it up later today.

>wojak spam
Okay, time to leave this thread for a couple of hours. I'll wait till this is /g/ gentoomen again.

>[I don't get why some Linux distros don't offer more or less all current linux-stable kernels anyhow.]

I would think that upgrading a distro's kernel isn't a trivial task. Doing it is easy, but verifying that every package/install script works as expected will take a good bit of effort - much easier if someone has done it before.

But I'm no distro release expert, just an apprentice sysadmin with an opinion.

> If you care I can look it up later today.
I've already got the 5.1 kernel on a Linux box with a Ryzen 2400G as a matter of very uneventful kernel updating.

I'm not using Unraid though and probably won't any time soon. Snapraid and mdadm work for me (on various machines) and I'm more interested in trying Ceph again if it ever stops being a giant collection of issues and bugs... or maybe bcachefs.

>I would think that upgrading a distro's kernel isn't a trival task.
Wrong, it's very trivial.

> verifying that every package/install script works as expected will take a good bit of effort
Not required. The kernel generally doesn't just break userspace. Never mind that nobody actually does this verification.

And also, you don't need to push every new linux-stable on everyone, just compile it and have it available for install.

Interesting read
jro.io/nas/

Goes into great detail about how this guy built his 100TB freenas server.

Attached: rack_back.jpg (1000x1544, 491K)

>Not required. The kernel generally doesn't just break the userspace.

true that, I seem to remember Linus saying something about that ("we do not break userspace").

>Never mind nobody actually does this verification.

Don't they? I always thought the testing would be a huge part - thank ye, now I know better.

>reddit.com/r/PleX/comments/ahhlmv/ryzen_5_2400g_server_success/
how do you have the upload for 6 streams

those must all be in your house/home?

The sequel question: How easy will it be to sell codes for The Division 2 and World War Z? I can save $10 USD buying off Ebay, but then I don't get the AMD meme game promo. Idk about any of this gray market shit.

> 100TB
Unless the goal is to run the array at 70%+ of all the drives' combined speed, it's not even really tricky.

10 drives on consumer hardware (well, you need to pick the PSU to supply enough power... but the PCI SATA controllers and so on can all be very mundane) on typical Linux will work fine.

Not same user, but home internet is like this now:
fiber.salt.ch/en/

Or at least 1GBE in a lot of places.

How do hosting companies assign multiple external IP addresses to one physical machine? I have a VPS hosted on a machine shared with many other VPSes. Each VPS has a different IP address. How do they do that? Do their servers have multiple network cards, or maybe a network card with multiple RJ-45 sockets? Or do they rent multiple IPs from their ISP and the implementation is entirely software-based?

Fired up my cold storage for the first time this year. Should probably do these backups more often.

You can assign multiple IPs to the same port or the same IP to multiple ports depending on the hardware.

Among other methods it's as simple as
ip addr add <address> dev <interface> on Linux, but there are a good number of variants to this even on the server side on Linux.
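For example (a sketch; the addresses are made up from the documentation range):

ip addr add 203.0.113.10/24 dev eth0   # second public IP on the same NIC
ip addr add 203.0.113.11/24 dev eth0   # and a third
ip addr show eth0                      # all of them now answer on eth0

To survive a reboot you'd put that in the distro's network config (netplan, ifupdown, NetworkManager, whatever) instead of running it by hand.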

They could possibly also do it on the networking infrastructure, routers and stuff. I don't know what they actually prefer.

What hardware are you referring to? How does one funnel multiple IP addresses into one port?

You might want to automate it a little, yes.

Send WOL magic packet - wait until ssh connects - run borg/restic - shutdown.
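Rough sketch of that (hostname, MAC and repo path are placeholders; assumes wakeonlan and borg are installed):

#!/bin/sh
wakeonlan AA:BB:CC:DD:EE:FF                                       # magic packet to the cold box
until ssh -o ConnectTimeout=5 coldstore true; do sleep 10; done   # wait for sshd to come up
borg create coldstore:/backups/repo::'{now}' /data                # or restic with an sftp: repo
ssh coldstore sudo poweroff                                       # back to sleep

Stick it in a monthly cron job or systemd timer and forget about it.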

The current setup could use some work for sure. I've been leaving it as is while I build a new cold storage box with new drives and whatnot, with an automated wake and backup every month or so.

Any hardware can do it really. I don't think any OS restricts you from doing so. I could be wrong though, never played around with OSX.

I'm going to download and set up VMware Horizon over the weekend to use MTGA on my mobile device

vmware.com/ca/products/horizon.html

Seems more elegant than setting up an RDS gateway.
youtu.be/oJMCXhZeHIA

It's a simple firewall static NAT entry.

Usually it's a single external IP to a single internal IP. But the external address added on a firewall is part of a block: you can assign an external /27 and the firewall can be reached by the entire subnet. Then you set up a firewall rule so that a /29 can access a single internal IP.
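On a Linux-based firewall the same static NAT entry would look roughly like this (sketch, made-up addresses):

iptables -t nat -A PREROUTING  -d 203.0.113.20 -j DNAT --to-destination 10.0.0.20   # inbound: external -> internal
iptables -t nat -A POSTROUTING -s 10.0.0.20 -j SNAT --to-source 203.0.113.20        # outbound: internal -> external

One pair per mapping; commercial firewalls just wrap the same idea in a GUI.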

neither one of you, but i use nmtui to have several IPs on the same NIC. since i run several DNS servers on the same bare metal server
for let's say
10.0.0.44:53
10.0.0.22:53
i can make the same server reply. two different DNS resolvers have their own unique IP on the net, despite running on the same device.
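as a sketch of the same idea, if one of those resolvers were, say, dnsmasq (just an example, no idea what the anon actually runs):

ip addr add 10.0.0.44/24 dev eth0                      # the extra IP nmtui adds for you
dnsmasq --listen-address=10.0.0.44 --bind-interfaces   # this instance only answers on .44

the second resolver gets bound to 10.0.0.22 the same way, so both can listen on port 53 without colliding.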

hey boys, WAYWO?
just now getting around to setting up surveillance in my new place, not using shitty old ZoneMinder this time. Too quick to run out of disk space with my hoarding tendencies so I'm using Shinobi since it's got S3 support

Attached: hsg-20190507.jpg (2962x1500, 1.94M)

Will you use imagepy or opencv to track and recognize the sub-human invaders?

>S3
As in Amazon? Good call, if you get footage you want them to be unable to destroy it.
Probably would have gone for a cheap random virtual server though and rotate logs/footage.
Honestly it only needs to record movement when you flip it on and leave your place. Then your recording space usage should be near zero, unless sudden changes in light or movement are detected.

Attached: pi_home_reaching_for_beer.jpg (500x375, 52K)

my camera does the human/face detection and overlays the squares on the image, i figure that's good enough.
ye it's S3 as in amazon but I don't use actual amazon s3. there's a company called wasabi that offers s3 compatible storage for $5/month and no egress or API fees

>camera does the human/face detection
Nice, is it some chink shit I can order as well if I live in a different country?

Also that sounds like a good deal. I always found Amazon servers/services mildly overpriced. The only reasons for me to pay a company would be their connection, their availability, and having the data off-site.

Attached: 68747470733a2f2f64726976652e676f6f676c652e636f6d2f75633f6578706f72743d766965772669643d3176446563786a (754x638, 2.91M)

not sure how availability is outside the US but it's the Wyze cam v2, ~$25 at a bunch of different retailers. Just gotta pop an sd card in it with the firmware they recently published with RTSP support. Not sure if it's chinese spy hardware but i've got a separate vlan with no internet access for cheap shit that I don't trust not to phone home

The price for your Wyze is double that at online stores here. If I find a Chinese alternative with tracking I might quarantine it inside a VLAN. My shitty Netgear switch could do that. Thanks for the hardware suggestion senpai.

cool
im buying some additional drives later this month when i get my paycheck
4 more 8tb drives to my snapraid array and one 1tb ssd for virtual machines/containers

Attached: 1546406567392.jpg (1920x1080, 163K)

how much porn is /hsg/ hoarding?

Attached: 2019-05-10-185044_398x52_scrot.png (398x52, 8K)

As much as I watch. And I delete boring stuff. Amateur couples are for sure better than professionally produced gangbang garbage.

i agree. the only professional stuff i really hoard is some jav, the rest is almost exclusively amateur and homemade

Yeah jav is sweet. For some reason the nips are really good at fetish productions.

Attached: a2828fc3.png (739x697, 87K)

Is anyone hyped for MAMR and HAMR technology in HDDs? Are manufacturers suffering enough from SSDs yet to finally offer real alternatives to helium memes?

Attached: server.jpg (2000x1500, 1.92M)

For the past few months I've been trying to migrate my steam library to my NAS, and I'm finally getting around to restructuring it for performance.
A while back I memed myself into doing the smallest form factor build possible with a custom water loop, but obviously that doesn't leave a lot of room for storage. I have a 256GB NVMe drive and a 1TB M.2 SATA drive but like 4TB of games and a shit internet connection. I'm using the smallest case that supports an mATX motherboard (Sliger Cerberus), so expansion cards were not out of the question. I ended up buying a used Dell R320 for about $200, loading it up with 4x 4TB drives in RAIDZ 10 (I know this isn't the ZFS-approved terminology but you get the idea), 48GB of RAM and FreeNAS, and then running 10Gbit fiber from my computer to the closet the server lives in. It has insanely fast sequential read rates (800MB/s) on account of RAIDZ 10 distributing reads across both drives in a given mirrored pair, and queued random reads and writes perform a bit better than a single drive in the system.

I currently run it over SMB, which works pretty well for most situations. The main issue is that the way SMB works, requesting part of a file requires the whole file to get transferred, so games that pack assets into archives are fucking slow, and games that don't pack assets into archives are slow because of the per-file protocol latency. I was thinking the best way to deal with this would be to just bite the bullet and get iSCSI working for those hot block-level transfers. I've tried NFS, but Windows NFS support is fucking awful, so whatever.

Like this chummer?
forum.level1techs.com/t/run-your-steam-library-from-a-nas-break-a-leg-challenge-update/107912

Attached: Unifont_Full_Map.png (4128x4160, 929K)

A lot like that, but his setup performs fucking awfully even for gigabit. I think my NICs support iSCSI hardware offload, so I should be able to pump around 6Gbit/s over iSCSI depending on how cooperative my shitty chelsio card is.

Hardware features on NICs are quite fun to have. iSCSI is still too enigmatic for me. Should read more about it myself to understand it better.

I ordered a bigger case to prepare my home server for storage capacity expansion, when the time comes. It should arrive either tomorrow or on Monday, I don't know whether the shipping company does deliveries on Saturday or not.

Attached: 992308_7__63700-7.jpg (800x800, 84K)

Comfy drive bays. Do you have enough front coolers as well?

Attached: IMG_20180823_150653.jpg (4128x3096, 3.27M)

ASRock J4105-ITX or add that 20% extra for J5005-ITX? I will be building my first machine with remote access. LFS for starters, then maybe Xpenology. I will be treating it as a lab/edu and slowly adding stuff.

The case comes with 2 140mm fans right in front of the HDD bays, so it should be perfectly fine. I guess I'll see when I move my server into its new home, but I don't expect any issues at all with temps.

Attached: 4260285295065-9.jpg (800x800, 131K)

>btrfs backup volume full
>delete a bunch of old snapshots to free up space
>btrfs doesn't free up any space immediately, just starts slowly cleaning up garbage in the background
>freeing about 10GB/day
>after about 4 days, notice that btrfs has been steadily using more and more of the available system memory
>it uses all of it and the system stops responding entirely
>physically reboot it
>btrfs starts using all the memory much faster
>within hours it uses it all and brings down the system again
>have to take the whole disk out of fstab so btrfs won't fuck over the entire box
I thought this shit was supposed to work now

Excellent soundproofing too, I see. Feel free to post pics of your final setup. I enjoy seeing people put together sweet hardware.

Attached: Free-Shipping-20Pcs-Lot-IRF7319TRPBF-IRF7319-F7319-MOSFET-N-P-CH-30V-SOP-8-Quality-assurance.jpg (578x560, 75K)

I got almost 50TB of storage attached to a J4205, but frankly, just get the J5005 if you don't mind buying the better RAM for it. It's not really significantly more expensive, but it's definitely better hardware.

>freeing about 10GB/day
That's really slow.

> btrfs has been steadily using more and more of the available system memory
I didn't have any issue like this. But I'm only using it on one drive without any fancy features. Do you run the unstable btrfs internal RAID or something like that?

> I thought this shit was supposed to work now
Probably? OTOH I can only suggest that you use xfs or ext4 if you want "justwerks" with basically no odd issues ever.

>Do you run the unstable btrfs internal RAID or something like that?
I'm just using a normal btrfs RAID1 with two disks.

>I can only suggest that you use xfs or ext4 if you want "justwerks" with basically no odd issues ever.
I don't think either of those do snapshots, which is the primary purpose of this volume.

The J4105 is already on DDR4 and can run 32GB, numeration can be confusing. The J5005's processor is ~10% better in tests and it has a slightly better GPU than the J4105, and that's it. My issue is I can get a cheap 250GB SSD for the price difference between the boards here.

Maybe it's that then, but of course I don't know. Btrfs does have a few diagnostic commands that would be worth running here though.
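A minimal sketch of the usual suspects (the path being whatever your mountpoint is):

btrfs filesystem usage /mnt/backup    # what's actually allocated vs free
btrfs subvolume list /mnt/backup      # check the deleted snapshots are really gone
btrfs device stats /mnt/backup        # per-device error counters
btrfs scrub start -B /mnt/backup      # checksum pass; on RAID1 it repairs from the good copy

dmesg output from while the memory is ballooning would also be interesting.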

>I don't think either of those do snapshots, which is the primary purpose of this volume.
They don't really need to, LVM2 does this okay.
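A minimal sketch (VG/LV names are made up):

lvcreate -s -n data_snap -L 10G vg0/data    # CoW snapshot; only changed blocks eat into the 10G
mount -o ro /dev/vg0/data_snap /mnt/snap    # browse it or point your backup at it (xfs also wants -o nouuid)
umount /mnt/snap && lvremove vg0/data_snap  # drop it when done

It's copy-on-write, so the snapshot only consumes space as the origin changes; you just need some free extents left in the VG.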

>numeration can be confusing
Ah, it was a later model of the same generation as the J5005? Fuck this confusing naming scheme, yes.

Either way, then it probably doesn't matter.

> My issue is I can have cheap 250GB SSD for the difference between boards here.
Then go for the SSD instead, yes.

I might do that when I get the time to swap it over. Hardware isn't that sweet I'm afraid, it's just a Z87 mATX board with a HBA and a bunch of drives.
>Excellent soundproofing to i see
It's supposed to be, yeah. It has a built-in 2-channel fan controller too. My server sits in the corner of my bedroom, so I don't want it to be loud.

what's a good cheap 120gb SSD for a boot drive

So I'm having a think.
I want to run my server as both a media streamer and a seedbox.
Problem is, I don't want to have my VPN up while I stream and I don't want to have it down while I'm seeding.
The next problem is that if I run one of those services in a VM I'll have to mount the hard drive in both the host machine and in the VM. And I'm not sure that's a good idea because I heard that it can fuck things up.
Anyone got any ideas? If I could circumvent my VPN for my Plex server that'd be cool but all the solutions I've seen are hacky as fuck and essentially refresh on a cron job.

Anyone got ideas?

Attached: 1531943375456.jpg (675x725, 67K)

any up-to-date ZNC + Tor guide out there?
the ZNC wiki on the subject doesn't work at all

Attached: 015712.jpg (228x221, 8K)

>LVM2 does this okay
I didn't know LVM was capable of decent snapshots. Last I checked its concept of a snapshot was just literally duplicating everything and using a complete copy's worth of additional space

>arrakis
ok this is cute

Attached: arrakis.png (1026x158, 6K)

Linux containers or docker

A /hsg/???????????????????

I just realised that my FM2 board is SATA2; I also looked around at other FM2 motherboards and most of them are SATA2 as well.
Should I just ditch FM2 altogether and use an Intel mobo/CPU?

Crucial BX500 is very cheap, dunno if it's any good.

SATA2 is pretty fast.

I've been looking at LGA1356/1366 CPUs and mobos; if it's gonna cost me $40 to get a good FM2 CPU anyway, should I just invest in a Xeon X5650?

But, how did you end up with a FM2 mobo and no CPU in 2019?

I've got a CPU, an A4-3400. I don't know if it's gonna be able to carry the weight of what I want to do, being a NAS with Emby/Jellyfin/Plex.

techpowerup.com/255384/trendforce-ssd-price-per-gb-could-drop-as-low-as-usd-0-1-by-years-end

I can only hope this affects HDDs as well. Is absolutely no one hyped for MAMR and HAMR technology?

They are still holding out on us. The Thai flood prices are BS. A 2TB should not cost more than $45.

Attached: IMG_20180724_215853.jpg (4128x3096, 2.72M)

hello /hsg/, how can 1 parity block cover any number of drives?
seems counter-intuitive to me that I only need 1 drive for parity no matter if I have a total of 4 drives or 100 drives
doesn't this also mean that rebuilding gets more complicated and slower the more drives my RAID-5 has?

Attached: raid-5-configuration.png (600x444, 15K)

Consider a single bit across 4 drives one of which is a parity drive.
1 xor 0 xor 1 = 0 (parity)
Suppose we lose one of the drives with a 1.
? = 1 xor 0 xor 0(parity) = 1. Bit recovered.
Suppose we lose the drive with a 0.
? = 1 xor 1 xor 0(parity) = 0. Bit recovered.
Obviously if the parity drive goes we can retrieve it by xoring the other drives together.
Xoring a bunch of things together gives you a 1 if there is an odd number of ones in the input and a 0 if there is an even number. So if you lose a drive and the parity bit is 0, an even number of ones on the remaining drives means the missing drive had a 0, and an odd number means it had a 1. Likewise if the parity bit is 1: an odd count of ones on the remaining drives means the missing drive had a 0, and an even count means it had a 1. That's why one parity block covers any number of data drives.
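If you want to poke at it yourself, here's a toy version in bash (one byte standing in for a whole drive, values made up):

d1=0xA7; d2=0x3C; d3=0x51
parity=$(( d1 ^ d2 ^ d3 ))                               # what the parity block stores
recovered=$(( d1 ^ d3 ^ parity ))                        # pretend drive 2 died
printf 'lost %#x, recovered %#x\n' $((d2)) $recovered    # both print 0x3c

Same deal per stripe no matter how many drives you add, which is why one parity block covers any number of them (but only ever one failure at a time).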

Thanks for the detailed explanation.
So the speed of rebuilding is the sum of all reads of a block across all drives plus XOR'ing them?
Of which only the latter can be done fast.
You speak of a parity drive.
Do I even have the chance to select if I want a dedicated parity drive or if I prefer striping parity across all drives?
I don't see that any method would be superior, as I still have to read all drives to XOR for a rebuild.
However, if only the parity drive dies, which is a 1 in 3 chance with the smallest possible RAID-5 and even less as the drive count goes up, could I still access the data immediately without a rebuild?
Or do I always have access and the RAID just degrades in speed until a rebuild?

Attached: 74c.gif (480x270, 2.23M)

There is no meaningful difference between having a dedicated parity drive and striping; most things stripe to stop one drive from eating more writes than any of the others, since a dedicated parity drive would have to be written to each time any other drive is written. Most RAID controllers orchestrate the drive reads so that they all happen in parallel during a rebuild, so it should take about as long as copying the entirety of a single drive; in practice it ends up taking longer because RAID controllers suck and writes are slower than reads. If you stripe your parity bits across drives, the RAID controller can calculate the missing data on the fly for you, so you never lose access to data, just performance. Remember that RAID is not a backup but an availability and performance thing, so it is mostly intended so you can just rip a drive out and have your data still available. RAID 5 is a bit dicey due to something called the "write hole", where power loss puts the array into an unrecoverable corrupt state, which requires a system like ZFS to address.

Alright, a good RAID controller may explode my budget.
I'd rather do it in software anyway.
Will the write hole and its corruption occur only in software RAID-5, or in HW RAID too, or perhaps even both?

Attached: FortunatePreciousHagfish-size_restricted.gif (498x278, 598K)

Don't bother with RAID controllers; unless it's the best enterprise ones, they fall short of standard ez Linux mdadm RAID in most to all regards.

You can still have both variants (striped parity or a dedicated parity drive) with mdadm, snapraid, and other solutions.
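For reference, the striped flavour with mdadm is just this (sketch; device names and the config path vary by distro):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]   # classic striped-parity RAID-5
cat /proc/mdstat                                                  # watch the initial sync
mkfs.ext4 /dev/md0 && mount /dev/md0 /mnt/array                   # then use it like any block device
mdadm --detail --scan >> /etc/mdadm/mdadm.conf                    # so it assembles on boot (Debian path)

snapraid is the dedicated-parity-drive flavour instead: normal filesystems on each data disk, a snapraid.conf listing your data/parity/content paths, and a scheduled snapraid sync.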

See (You). Don't bother with a raid controller.

Also, no, the array is not irrecoverable if you hit a write hole unless you use some trash RAID solution. The loss in this improbable situation is generally one block (could be 512 bytes times the drive count).

Alright then, thank you for your time. I'll just do that and invest in a mainboard with enough SATA connectors.

Attached: ccc283f3.gif (500x264, 636K)