/hsg/ Home Server General

Home server thread
waning interest edition

NAS is how most people get into this. It’s nice to have a /comfy/ home for all your data. Streaming your movies/shows around the house and to friends is good feels. Repurpose an old desktop, buy an SBC, or go with cheap used enterprise gear. Lots of options and there’s even a flowchart. Ask.

/hsg/ is about learning and expanding your horizons. Know all about NAS? Learn virtualization. Spun up some VMs? Learn about networking by standing up a pfsense box and configuring some vlans. There’s always more to learn and chances to grow. Think you’re godtier already? Set up openstack and report back.

>What software should I run?
install gentoo. Or whatever flavor of *nix is best for the job or most comfy for you. Emby to replace netflix, nextcloud to replace google, ampache to replace spotify, the list goes on and on. Look at the awesome selfhosted list and ask.

>Datahoarding ok here?
YES - you are in good company. Shuck those easystores and flash IT mode on your H310. All datahoarding talk welcome.

>Do I need a rack and all that noisy enterprise gear?
No. An old laptop or rpi can be a server if you want.

>A T T E N T I O N:
>The /hsg/ wiki is up!
hsg.shortlink.club/

Please expand it. Also, don't use your real name or a password you use elsewhere when you register. Preferably use cock.li or something anonymous. Or just email the admin with the username and password you want.

>Links
server tips: pastebin.com/SXuHp12J
github.com/Kickball/awesome-selfhosted
old.reddit.com/r/datahoarder
labgopher.com
reddit.com/r/homelab/wiki/index
wiki.debian.org/FreedomBox/Features
>Chat
irc.rizon.net #_hsg_
riot.im/app/#/room/#homeservergeneral:matrix.org

previous thread:

Attached: hsg_5.jpg (1663x738, 368K)

looks like interest in /hsg/ is at an all-time low again
maybe it will return in due time

Attached: 2019-05-13-202402_433x667_scrot.png (433x667, 298K)

If I want a low power NAS, should I use 2.5 inch drives? Should I "shuck" some 4TB drives?

>low power
get out poorfag, this is for real server owners

You should use SSDs!

sure why not, but i dont think you'd end up saving much more than a couple of watts per drive using 2.5" instead of 3.5", though i could be wrong
ssds save some more power too
you're better off with some smart power management and spindown on your disks
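
a minimal sketch of what i mean by spindown, assuming /dev/sdb is a data disk you rarely touch (the timeout and path are just examples, tune to taste):

# spin the drive down after ~10 minutes idle (-S 120 = 120 * 5 seconds)
hdparm -S 120 /dev/sdb

# check the current power state without waking the drive
hdparm -C /dev/sdb

to make it stick across reboots, drop it into /etc/hdparm.conf (on debian) or a udev rule / small systemd unit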

you should use a helium tank

I think 3.5 inch drives are still better in terms of power per storage.

Maybe if you use a PCIe/M.2 SSD it's better than an HDD, but that tends to be uneconomical.

> Should I "shuck" some 4TB drives?
No, 8 or 10TB drives.

still working on my server room in my new house

Attached: rack.jpg (756x1344, 252K)

I think a lot of people, even if they're interested in hosting services, just use the cloud. soibois can't handle running their own metal

Attached: hsg-20190507.jpg (2962x1500, 1.94M)

so I've got an always on linux machine that I'd like to add as a network accessible drive. given the fact that windows10 has completely fucked sharing, what's the best way to go about this so I can see and access shit from the windows machines?

Still looks nice, but also like something that has a bunch more machines involved than needed.

I think many also just don't want to allocate the money, even if it's just $400 for an SBC and two drives.

You simply configure samba to make it accessible.

If it's more than one drive, you're better off putting it into md raid or snapraid+mergerfs or some other erasure coded array first, but that's just related to keeping your data safe against drive failures, not making it accessible.

anyone using g suite for business? how many TB do you store?

is there any sort of config utility for samba or do I just have to do it by hand and figure out what I'm doing the hard way?

you'll have to edit a couple of config files but you won't really have to figure anything out - it's a pretty common scenario so there will be a million tutorials online
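
to give an idea of the scale of it, a minimal share in /etc/samba/smb.conf looks roughly like this (share name, path and user are placeholders):

[storage]
   path = /mnt/storage
   browseable = yes
   read only = no
   valid users = youruser

# then give that user a samba password and restart the service (smbd on debian/ubuntu):
#   smbpasswd -a youruser
#   systemctl restart smbd

after that the share shows up as \\hostname\storage from the windows machines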

pretty nice, how much storage? Planning on ordering some 8TB drives soon to upgrade from my babby tier storage capacity (probably like 2tb total across all this)

WD10EZEX-08WN4A0 size: 931.51 GiB | First drive my server ever had, bunch of active projects, not a part of the hoard
WD30EZRX-00MMMB0 size: 2.73 TiB | Dad's drive, let him put his music, vacation photos and videos, documents and shit here
WD30EZRX-00MMMB0 size: 2.73 TiB | Anime, downloads, games, music, etc - my stuff, my current hoard, maybe 45% full
WD60EFRX-68L0BN1 size: 5.46 TiB | (New) Not sure what to use this for? Right now it just has `backup` folders where i rsync the other drives into this one, but there has to be a better organization for these drives.


help me organize my drives

i use arch btw

sweet thanks

I don't know, it's possible that there is a GUI.

I've been editing it with a text editor all along and I prefer it that way. It's easy, really - a GUI has more to learn than text files with their comments and examples.

BTW to the user that warned that some external HDDs internally have USB soldered on rather than a SATA port - the 8TB WDBBGB0080HBK-EESN drives I got did not, & they work fine in the snapraid setup I have here. I now regret not taking the risk and ordering more. The price here shot up again, and we don't get as many good offers as some of you in the USA do.

The drives do however have the 3.3V power-disable pin thing, where you either need a new enough PSU, have to cover the pin on the drive (kapton tape works), or use an AMP MATE-N-LOK->SATA adapter (those don't carry 3.3V).

your dad needs a raid 1 for that 2.73 because you're a dumbass for putting his shit on a single drive, so get him to buy you another 5.46 and give him two drives in raid 1
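
and in the meantime, at least make that rsync into the 5.46 a regular scripted thing rather than a now-and-then chore - a rough sketch, assuming mount points like /mnt/dads_drive, /mnt/hoard and /mnt/backup (swap in your real paths):

#!/bin/sh
# nightly mirror into the backup drive; --delete keeps an exact copy, so accidental
# deletions propagate on the next run - add dated snapshots if that worries you
rsync -aHAX --delete /mnt/dads_drive/ /mnt/backup/dads_drive/
rsync -aHAX --delete /mnt/hoard/ /mnt/backup/hoard/

run it from cron or a systemd timer and it's one less thing to forget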

r8 my setup. It puts more hardware into my hands while also keeping all my private trackers happy (due to my shitty home connection), with the benefit of some cool automation like massive youtube-channel rips and soundcloud archives.


Actual torrenting happens here, and it uses sshfs as a "disc" into my server at home. The vps drive acts as an 8GB cache that is constantly being flushed into my server. More recent private tracker torrents seed from the 20GB sshfs server, and stale ones get pushed down to my home server.

|$5 vultr vps (8GB), 300MB/s down/up|
                |
          sshfs tunnel
                |
|$30/yr VPS (20GB), 100MB/s down/up|
                |
          sshfs tunnel
                |
|Home server (16TB), 10MB/s down/up|
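
the glue is basically just an sshfs mount on the vps pointing at the home box - a rough sketch if anyone wants to copy the idea (hostname/paths made up):

# on the vps: mount the home server's storage over ssh; the reconnect options keep
# the mount alive when the residential connection flaps
sshfs torrentuser@home.example.org:/mnt/storage /mnt/home \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# then point the torrent client's "move completed downloads to" directory at /mnt/home
# so finished torrents drain off the small vps disk into the 16TB at home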

raid is not backup

Backup isn't RAID either. So?

>I think many also just don't want to allocate the money, even if it's just $400 for an SBC and two drives.

A server may be expensive in the beginning. But if you have a 100 Mbit uplink + 100 Mbit downlink even a stupid raspberry pi could do better than Dropbox, Onedrive and whatever these services are called.

Their unique selling point is an app that takes away the ugly of protocols like SFTP and makes it normo-friendly.

Only a tech illiterate would pay this much for hosting:
>€16.58/mo * 12 months = €198.96/year


>synology 2bay
>2x 4TB
>RAID-1
>have double storage and no spying on your data
>slightly less than 400€
Heh, how is 400€ for a server expensive now if it is fully paid after 2 years?
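
to spell out the break-even (dropbox number from above, the ~400€ is the synology + 2x4TB figure):

dropbox:  €16.58/mo * 24 months ≈ €398
synology: ~€400 once
-> paid for itself after roughly 2 years, after that it's basically just electricity
   (a 2-bay NAS idles at maybe 10-20W, call it €20-40/year depending on your rates)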

>dropbox.com/en/individual/plans-comparison
absolute cancer ^

Attached: 1533865768663.jpg (400x400, 24K)

ok peabrain. what is a backup?

Not sure, but seems fine to me. Did you pick sshfs over wireguard or was the VPS just restricted to that?

Well yes, people are tech illiterate and plan poorly.

That said, a lot of them virtually only use free services apart from one $3-10 or something paid service like iCloud. No, they're generally not going for $20+ plans, they're just not using that much storage if it costs them that much.

[That doesn't mean they'll plan 2 years ahead and do some hands-on with servers either, though.]

Why delete your post? Subscription based models are terrible. Cloud services are terrible. Just take Office 2019 vs Office 365 as an example. Hosting is no different.

>apart from one $3-10 or something paid service like iCloud. No, they're generally not going for $20+ plans
I don't even want to know how garbage hosting for 3€ must be. That doesn't even get you a half decent VPS, which you'd at least "own" in the sense of having a root shell.

Well yes, people are tech illiterate and plan poorly.

That said, a lot of them virtually only use free services apart from one $3-10 or something paid service like iCloud. No, they're generally not going for $20+ plans, they're just not using that much storage if it costs them that much. Either way, they'll not plan 2 years ahead and do hands-on with servers that don't even have easy app integration and online access and "all problems solved" already. It's the mixture of what's advertised in their face and cheap and easy.

The /hsg/ approach is only cheap per GB / for the processing power you get. It's not necessarily cheap in the absolute sense, or immediately providing as many features with no effort.

> Why delete your post?
Because I thought I could improve it a little.

>I don't even want to know how garbage hosting for 3€ must be.
US$3/month. That's just the 200GB iCloud last time I checked.

>US$3/month. That's just the 200GB iCloud last time I checked.

Damn, that is scary as hell when I come to think about it.

If hosting is that cheap.. you already know who owns your data and how they mine and crawl it for advertisers. Like, where is the point in a company offering it this cheap if there are no hidden motives? I'd cause extra bandwidth to them just for fun.

>If hosting is that cheap.. you already know who owns your data and how they mine and crawl it for advertisers.
They don't have to do that. $26/year actually covers the cost of 200GB: you can fit 50 such accounts on a 10TB drive (make that 35 or so after erasure coding redundancy), and not everyone will use most of that storage either.

Apart from that, they don't really care as long as it's nothing too obvious. In case you haven't noticed, even the operating systems and software people use pretty much all already have the botnet issue.
> Inb4 they'll not only run their own servers but also switch their OSes and software to non-botnet alternatives, learning entirely new tools.
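
rough napkin math on why they don't even need to mine it (the drive price is a guess, the rest is from above):

10TB drive ≈ $300, good for ~35 paying 200GB accounts after redundancy
35 accounts * $26/year = $910/year against a one-off ~$300 of disk
-> the storage pays for itself many times over, even before overselling
   (most people come nowhere near filling their 200GB)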

why do you need ampache or emby if you can have plex which does both

I agree with all of your points. Convenience and ease of use leads down the road of slavery into the walled gardens of Silicon Valley. Which is why I envy isolationists and *NIX outcasts. But for 200GB it is still too expensive; if I wanted 4TB as in my scenario the price would go up to $60/month for Apfel memecloud. And 33€ for Cockbox.

what are all the parts to this? looks like a network switch, hard drives...what else?

can you list the equipment in here?

Most people can easily make do with these $3/month 200GB, it's far more than they actually use. It might also be more than you use if you only saved your own data.

Of course if you want 40TB, you're already with us and don't need to be convinced.

I'm kinda new to servers and network management in general so I just lurk and procrastinate, sorry for not being able to contribute yet.
Thanks for putting these threads up if you're OP.

> so I just lurk and procrastinate, sorry for not being able to contribute yet
So you're still evaluating if/how you can do a storage box or something like that?

Don't hold back too much, it's not THAT hard.

PLEX refuses to load .mkv

any tips?
tried root:root, nothing
tried 777 nothing
....help?

Attached: 085.jpg (1920x1080, 107K)

Try hub.docker.com/r/linuxserver/plex/ instead, can't easily mess up permissions or anything much with that.

oh shit, how did i never think of that!?
thanks a lot bro.

also can i get a quick tl;dr on the
-v :/transcode \
i know what transcode is, i just dont know whether i need to / can set up a folder for it..

>also can i get a quick tl;dr on the -v :/transcode \
Ah, the thing before the colon is the path on the docker host machine that you want to map to the thing behind the colon (which is the path inside the container and shouldn't be changed - it's what plex inside the container expects).

Basically for all -v parameters, delete whatever is before the colon and use your own paths. Leave the thing after the colon.
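
roughly what the full docker create from that page ends up looking like once you fill in your own paths (the /srv paths and PUID/PGID here are placeholders, check the readme for the authoritative list):

# PUID/PGID = the uid/gid that owns your media on the host; each -v maps a host
# path (left of the colon) to the path plex expects inside the container
docker create \
  --name=plex \
  --net=host \
  -e PUID=1000 \
  -e PGID=1000 \
  -e VERSION=docker \
  -v /srv/plex/config:/config \
  -v /srv/media/tv:/tv \
  -v /srv/media/movies:/movies \
  -v /tmp/plex-transcode:/transcode \
  --restart unless-stopped \
  linuxserver/plex

docker start plex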

you are doing god's work user, thanks a lot :)

Attached: serveimage.jpg (300x168, 15K)

BTW it looks like dockerhub's parser simply wrecked the tags (interpreted them as html and "sanitized" them?) they used in the description on:
github.com/linuxserver/docker-plex

Over there, it's clearer. But the -v thing is anyhow from docker create.

Also, let me remark that using docker-compose and its file usually is simpler in the long run, but not to the point that you should immediately jump on it now if you are already trying the docker CLI instead.
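
for reference, the docker-compose equivalent is just the same parameters in a yaml file (same placeholder paths as the docker create above); once it's a file, updating is just docker-compose pull && docker-compose up -d:

version: "2"
services:
  plex:
    image: linuxserver/plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
    volumes:
      - /srv/plex/config:/config
      - /srv/media/tv:/tv
      - /srv/media/movies:/movies
      - /tmp/plex-transcode:/transcode
    restart: unless-stopped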

that autist vcenter spammer killed any hope for these threads

why do morans think raid is backup? it's availability; data loss is always around the corner and you need all disks up and running to access any data
snapraid is superior to real raid when it comes to backup
hell, even two copies on separate external drives are too

Hi IT guy!
Do you prefer ZFS or LVM?

Transcode it.

ZFS is a filesystem. LVM is a volume manager.

They have different purposes, friend. Yet to answer your question: Both are pretty good at their job. Former if you have enough RAM.

I always see the word "nas" thrown around in compatibility lists. Cameras have an option for sending stuff to "nas"
what protocol is "nas" and is it any different than an old computer running samba

"Samba" is an SMB implementation. And yes, it is the most comon protocol if you think Windows/NIX for cross platform network shares. Whilst *NIX has better alternatives like SFTP and sshfs, these are only used by pros. Consumer grade shit will use SMB1/2/3.

ok so I was right and all of these hot shit nas cubes you see for $2000 are just a tiny cucked computer that literally just runs samba?

Haha, I would be lying if I defended any of these companies. Yes, every consumer grade NAS is just a stinking locked down Linux box some company set up for you.

Synology is pretty great though, and you can get proper solutions for about 200€. Saving money and building your own NAS is cheaper and more fun though.

Attached: 1552055963919.png (400x450, 14K)

They are not the same.

But I prefer not to use ZFS over management and performance concerns. You can't grow or shrink RAIDZ vdevs, and ZFS is comparatively slow (or rather, you throw a lot of hardware at it to get the same-ish performance as with mdadm or snapraid - even more than with Ceph).

Not all is bad with ZFS; it's not completely useless or anything, but I don't like the compromises it makes.

>Of course if you want 40TB, you're already with us and don't need to be convinced.
Honestly I would love to host that for friends, family and coworkers, if the legal situation allowed me to not give a damn about what my clients store. I delete data too often to hoard this much myself.

Is he right?

>Which is a concern for almost no one AT HOME.


>Hell, if you use RAID at home, I honestly think you're a fucking retard to begin with. There is simply no need for 24/7 up-time in a home server environment, you're not hosting mission critical data that needs 24/7 access or you'll lose thousands of dollars per minute.


>Make regular backups with offline disks and save your money and effort for other things, RAID is a meme for the home user, only reason it still perpetuates is because people don't really understand that RAID is pretty much only useful for keeping data up and running continuously and RAID in itself is NOT a backup solution. And if you don't need the 24/7 up-time, you're far better off with JUST making regular backups.

Sshfs isn't actually generally "better" than the new versions of smb or nfs which perform really quite well and are secured properly. But it's nice to have that convenient option. Ultimately you can do things with it like pull your backups only through ssh. If you're interested, I recently implemented this after some pointers from the internet:

# remote machine already has passwordless privilege escalation set up for sftp-server only, /etc/sudoers[.d] configuration as follows:
# username ALL=NOPASSWD: /usr/libexec/openssh/sftp-server
# and of course ssh-copy-id and borg init were done beforehand in this directory

set MOUNTPOINT (pwd)/remote_mount
sshfs username@remoteserver:/ $MOUNTPOINT -o ro -o sftp_server="sudo /usr/libexec/openssh/sftp-server"

# at this point $MOUNTPOINT has the remote filesystem mounted read-only and you can just run borg

borg create --stats --show-rc --compression zstd (pwd)::'backup-{now}' $MOUNTPOINT/etc $MOUNTPOINT/home $MOUNTPOINT/root
umount $MOUNTPOINT


Pretty convenient and useful. BTW in case you use BASH rather than fish, (pwd) is $(pwd), 'set MOUNTPOINT (pwd)/remote_mount' becomes MOUNTPOINT="$(pwd)/remote_mount", and you probably double quote the whole thing.

I'm hoping someone with more knowledge of servers could help me decide on what to get, currently I'm thinking of getting the Dell R410 for my first server.
I heard that nixOS is decent so I may go with that.
My use cases are to host my website, some bots and potentially host vidya. Also if possible I'd rather have something not heavy when it comes to power consumption, budget is ≈ £400

>Dell R410
Are you sure? Usually something with more modest power consumption is more economical.

> I heard that nixOS is decent so I may go with that.
It is a somewhat decent design overall, but the main CLI, the nix-* commands, is pretty shitty and you won't find enough documentation on how to set it up and change things unless you're already a Linux expert or pretty much want to become one no matter the time invested. It's probably not the right choice. Gentoo is actually easier, and relatively few choose that.

More likely, you just want some boring old fedora, debian, suse or whatever.

> Also if possible I'd rather have something not heavy when it comes to power consumption
Then probably don't get the old Dell. Get some J5005 or 200GE or low end Ryzen, or an ARM SBC like the RockPro64 or Odroid N2 or something.

Idiot.

>compression zstd
That's where you're wrong.

Use LZ4, use LZO. Zstd is a meme compression. As much as I love Jarek Duda and Asymmetric Numeral Systems, we've got to stay reasonable when applying these to a filesystem. This is not cold storage or a tape library where Zstd shines.

Attached: 1547363891719.png (1600x869, 923K)

10TB on sale again:

bestbuy.com/site/wd-easystore-10tb-external-usb-3-0-hard-drive-black/6278208.p?skuId=6278208

not much going on here - design work and volunteering taking up a lot of my time atm, besides faffing around with docker swarm stack deployments isn't really /hsg/

>That's where you're wrong.
No, I am not.

> Use LZ4, use LZO.
No. Zstd compresses better at almost the same speed with the default settings (its settings also cover a greater range of possible compression levels).

There is no advantage in using LZ4 since even the lowest end box involved (a J4205) can handle zstd just fine, so I prefer the better compression efficiency of zstd.
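
if anyone wants to settle this on their own data instead of arguing, both tools ship a benchmark mode - quick sketch (the /etc tarball is just an example corpus, use whatever you actually back up):

# build a sample of the kind of data you back up
tar cf /tmp/sample.tar /etc

# zstd: benchmark level 3 (its default) - prints ratio plus compression/decompression MB/s
zstd -b3 /tmp/sample.tar

# lz4: benchmark its default level for comparison
lz4 -b1 /tmp/sample.tar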

How do they run sales that make 8TB/10TB drives so cheap per GB every two months in the USA but not in Europe?

Because eurocucks don't deserve shit.

Then you really are a retard as pointed out already, since we have two use cases on drives:

>software
binaries can be compressed very well, but it is useless since you want them to execute fast
>media
most of media today is lossy compressed already:
mp3, x264, hevc, aac, opus are all lossy compression codecs

So you are really using one of the strongest compression algorithms to attempt to losslessly re-compress an already lossy compressed file? Fucking moron, what is entropy and runlength coding? Look up why chaining compression algorithms on top of each other is a waste of time.

I was there when FB just took Jarek Duda's FSE algos to create Zstd. Don't try to explain compression algorithms to me. I want I/O, I want speed. LZ4 and LZO for RAM and storage all the way. GB/s throughputs, not slow recompression attempts on media.

I truly despise these inane tech illiterates who read somewhere that something is the best compression algorithm but don't understand its use case and where it is pointless to apply. Dumb wikipedia scholars.

Attached: 1531548474237.gif (245x186, 961K)

I was asking about why drives become so cheap in the USA every two months. But thanks for throwing in your dumbass racist opinion on Europeans.

No, you can't read. It runs at the speed of the GBE networking.

I don't include media in the backups BTW and configs and so on compress rather well (so do actually a few binaries), but even that wouldn't matter; it'd still run at the speed of GBE networking.

> one of the strongest compression algorithms
That is not the case, it's still a pretty speed-oriented one, faster than LZMA ("7z"). Not strong and slow like PAQ or something.

Plus it's a variably configurable algorithm.

> I truly despise these inane tech illiterates
That's you. Again, can you understand "it won't actually run faster if I choose a faster algorithm"? Right, it's just that.

You are the one who is theorycrafting a problem here because you heard lz4 is fast and nice (which it is), but you don't seem to have much experience with zstd.

>he doesn't know about the transmission hole

Attached: the zstd hole.png (930x485, 70K)

>imagine being this autistic

Do you realize that all of what you just wrote down in rage

WILL NOT CHANGE THE FACT

That you cannot recompress a 1080p h265 file, no matter how hard you sperg and roll on the floor?

Unless the data you hoard is _only_ text files and _not_ music and movies, these compression methods give you ZERO benefit.

Forcing lossless recompression on an already lossy compressed file is a waste of time and CPU cycles. Something only someone would do, whose time is worthless.

Attached: 1538512742053.jpg (2048x1637, 1.41M)

You know, I even looked this 2015 diagram up. That diagram is zstd on one setting for:
> compression, communication, and decompression time
I'm clearly not doing that as you can see on the code provided.

Was it too hard to read? It's just a few lines and I even added comments.

I'm more in this case (same website):
> Name   Ratio   C.speed (MB/s)   D.speed (MB/s)
> zstd   2.872   201              498

Also on the 2019 zstd 1.4, not the 2015 one.

so what's the sauce, I know I've read it before

>use Jow Forums archive
>view same image
>your answer is there
was that so hard?

s-sorry and thanks

> imagine being this autistic
Imagine linking dead and replaced posts hours later.

> That you cannot recompress a 1080p h265 file, no matter how hard you sperg and roll on the floor?
You have dyslexia, right? I said "I don't include media in the backups".

And again, even if I compressed it, it wouldn't make the backup process appreciably slower vs lz4 or uncompressed, although SURE for media files there'd be NO issue doing another borg create with that directory uncompressed or lz4 compressed. But yea, I already said I don't include media files in the backup.

no problem

>posting the same shit 3 times because you fat fingered errors into it
Nice projection when you're the one with dyslexia or brain worms. Time to take your anti-psychotics!

> Honestly I would love to host that for friends, family and coworkers
For their own personal use (backups etc.), this shouldn't be a problem. Just give each of them an isolated account on your machine, provide vpn/wireguard/ssh or something else safe to connect, then let them use smb/nfs/scp/ftp/syncthing and maybe a bunch of docker-ized things like owncloud or plex. If you're motivated you can also put it all into an encrypted storage that only their cryptographic keys can auth into. Not easy with all solutions and people right now, but in not too much time FIDO2 U2F might allow this more easily (passwordless auth with a cryptokey supported by browsers, phones and so on).

Certainly it gets more problematic if they need to be able to share files, then you might actually need to be able to handle takedown requests or comparable, but for "internal" use it's surely not a real problem.
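
the isolated-account part is mostly plain openssh config; a minimal sketch of a chrooted, sftp-only account (username/paths made up, nologin path varies per distro):

# create the account with no interactive shell
useradd -m -s /usr/sbin/nologin friend1

# /etc/ssh/sshd_config - the chroot dir must be root-owned and not writable by the
# user, so give them a writable subdirectory inside it
Match User friend1
    ChrootDirectory /srv/tank/friend1
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

# then: mkdir -p /srv/tank/friend1/files && chown friend1: /srv/tank/friend1/files
# and restart sshd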

What is the best Motherboard/CPU combination I can get for a home media server at $240 USD? Already have my DDR4 ram from Newegg.

Don't suggest any specialized server motherboard because they're not widely available where I live.
Currently I'm thinking Asus B450M-A with Ryzen 2400G. A lot of people on different forums seem to suggest Intel CPUs are better for transcoding performance though.

QuickSync is leagues better than Vega for transcoding.

Improving text (including a misparsed code block) for more clarity is not dyslexia. It had already been corrected for hours by the time you posted. Fix your autistic browser script or whatever it is that triggers you hours later.

Dyslexia is what triggers whining about media files after you've been squarely told they're not included in the backup. On top of that, you've also already been told there is no effective impact on performance on that GBE setup either way, the machine is fast enough to compress at the speed the network has. Pic related and the accompanying inflated ego is the other issue you have.

Attached: d0fe87648b9c6a4b765552e4a8c16b41874e999c90b6aa4051e921e2568bb775.png (613x481, 36K)

A J5005 might already be adequate if you don't want to drive too many PCI controllers.

Ryzen 2400G is certainly fine in pretty much all not huge home servers. Some anons prefer to have a 1600 and remove/add the GPU as needed for maintenance - makes sense if you can be arsed.

Lmao, nice strawman and tu quoque in one post. Dr. Freud would have a field day with the number of psychological defense mechanisms you exhibit.

Attached: weaponized_autism.png (846x1352, 90K)

So it just comes down to iGPU hardware acceleration? That's not a factor for me because I don't have a Plex pass and don't intend to buy one, at least not in the near future. Nevertheless, do you have benchmarks? I find it hard to find benchmarks for things like this because most focus on gaming shit.

I already have a mid-tower so not going to bother with Mini-ITX. I'm also only going to cap out at around 8 storage drives at most. The B450M-A already has 6 SATA ports.

Yeah I was thinking of the 1600, but I think I just missed the boat where I live. Also don't get anywhere near the same value as americans do at Microcenter. I'm probably just going to get an iGPU CPU to save the hassle of needing a GPU, which is why I'm deciding between a 2400G and whatever the similarly priced Intel equivalent is.

The two worse versions were already deleted; the only one you needed to pay attention to is the current one. If you set up scripting to keep them all and not even collapse them, that's your problem - fuck off with your complaints.

>I'm also only going to cap out at around 8 storage drives at most
That's just at the limit of what you can pretty comfortably operate on a J5005 (with an extra -cheap- SATA controller, of course). The Ryzen really won't have any problems with that.

> Yeah I was thinking of the 1600, but I think I just missed the boat where I live.
No worries then, you're not missing out on very much power savings and/or processing power. It's not at all a huge difference or anything.

And having to insert a GPU to diagnose some issues isn't always all that great either - who knows if it's not an inconveniently urgent problem at the time when it happens.

Anyone?

Quicksync is objectively better than AMD and sometimes even Nvidia for encoding, decoding and transcoding video content, heck QuickSync is the only way to watch UltraHD 4K blurays and shit.

There was also the Netflix 4k fiasco a while back.

Hence why media NAS/HTPCs tend to favor Intel. Also beats having to have another 75W dedicated GPU sapping power.

No. he is wrong. RAID5/6 in particular are very sensible compromises to keep your data in a world where drives fail. Until cloud storage solutions like Ceph are easier to manage and more stable (as software), this will remain the best option.

After they become less problematic, maybe adopting the more flexible solutions like Ceph [or perhaps bcachefs?] will be more attractive for almost everyone. Having different / no erasure coding levels for different folders in a drive pool and so on is generally convenient.

What about the 2200G? Will that struggle with more than 1 good transcode?

Possibly.

But it depends on the exact transcode, and generally speaking you're best off avoiding transcodes anyhow. Having playback devices that play back nearly all common video formats isn't rare, hard or expensive.

If your smart TV has some terrible meme OS, just attach a $25-75 HTPC box. It's better and cheaper than running a monster machine that can transcode for it.

Smartphones and laptop/desktop computers and the like will handle reasonably common video formats fine anyhow.

realized my synology is able to transcode 1080p HEVC content so I'm pretty happy to switch over to that now.
My phone and shield tv can direct play it anyway and it looks great.

Also started using Nextcloud for my bookmarking.
It's decent but could be a lot better, was hoping for something more like Pocket.

I can't call him wrong. I wouldn't call him right either. Depends on the application. I'm no expert so take this at face value.
If you're running a NAS to store movies, music, games, and other things you can easily re-download, then a raid is great. These normally don't get backed up - the difference between pulling a movie off a torrent site again and restoring it from some personal cloud backup is that the torrent will usually be faster anyway - so in that kind of application the difference between "some fault tolerance" and "no fault tolerance" is massive.
For a server that stores things with legitimate importance - things that need to exist even if the server fails - routine backups are the only solution. RAID 1/5/6/10 will save you from a hard drive failing, but not from a broken water main turning your home server into a soggy mess or those Seagates conveniently failing all at once 2 weeks after the warranty expires. That's why every company worth mentioning does routine backups and then takes them off site. Does a RAID help in this application? Sure it does, but it won't be guaranteed to prevent data loss.

(cont'd)
BTW he also implied that it's somehow some super difficult enterprise server tech.

It just is not. It only requires you to install one or two extra same-sized drives and then run one fucking single command like:
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

After that md0 is the "drive" you're working with, same as you'd have with the individual drives. Put your filesystem on it, optionally with partitioning, lvm2, LUKS encryption... the usual.
Maybe you do still want a timed job to run 'echo "check" > /sys/block/md0/md/sync_action' to regularly check the array, but it's really nothing difficult at all.

No, you don't really need to worry about hardware requirements or anything; this basically just works fine on even current-ish low-end machines.

It's analogous for zfs (but now with higher hardware requirements) and for snapraid+mergerfs (more configuration, ~same kind of low hardware requirements), nothing amateurs at home can't handle though.
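
for the snapraid+mergerfs variant the config is similarly small - a rough sketch assuming two data disks and one parity disk mounted under /mnt (paths are placeholders):

# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
exclude *.tmp

then run 'snapraid sync' after adding data and 'snapraid scrub' now and then; mergerfs just pools /mnt/disk1 and /mnt/disk2 into one mount on top of that.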

Transcoding is necessary for remote connections, unless you want to invest time manually re-encoding every single movie/TV show you have.

>remote connections

stop pretending you're fucking youtube

I am an expert. He is basically wrong.

The trade-off is simple and evident. You put in one or two more drives for RAID5/6, and you get the ability for any one or two drives to fail without losing data (this includes the storage your backup is on).

Nothing else gives you this trade-off any better right now, and you can't ignore the hordes of people who have had a drive fail.

Even on Jow Forums people constantly ask whether brand x drive y is "reliable", and this is the main actual way to make storage hardware "reliable".

> a broken water main turning your home server into a soggy mess
This actually is unlikely to kill your drives unless you leave them to rust for ages. On top of that, you still may have the situation where not all drives died simultaneously, but just some. RAID6 greatly enhances your chances in such a situation, too.

You should still take versioned backups of course, but even these you might really want to place on (another) RAID6 array.