Virtualized file server

Sup Jow Forums
I'm planning on getting myself a server for VMs; however, I've heard that I should run a file server on bare hardware so it can fully access drive SMART data and such.
What are your thoughts? It would be a home file server, so 1-2 computers simulatiously max, for vacation photos and similar crap.
I have also heard about people using shit like a Raspberry Pi for file servers; wouldn't USB hurt the already shitty performance of hard drives?

Attached: server from leddit.jpg (4128x2322, 663K)

>simulatiously

what is that? seriously

just put the disks in the virtualization server
also use KVM, it's good for you

You mean just passing drives through? Is that possible with KVM? I was thinking about the VMware hypervisor, but it would be interesting to test this.
please just kys

I use Proxmox (which uses KVM/QEMU) to host FreeNAS, and it definitely supports passing the HDDs through (I have a separate M.2 device for booting Proxmox). It's not supported through the web UI, or at least wasn't when I set mine up, but it was easy enough to set up by skimming the docs and editing the VM's config file to add the pass-through devices.
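
For anyone who wants the concrete version: on current Proxmox you can also do it from the CLI instead of editing the config by hand. A sketch, where the VM ID and disk ID are placeholders for your own:

qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_WD-EXAMPLE

That maps the whole physical disk into VM 100 as scsi1, and the result shows up in /etc/pve/qemu-server/100.conf.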

You can pass whole disks with a little perf overhead (use virtio-scsi for best perf), but if you also want a file server you need something like Samba plus a network bridge.
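
For reference, the libvirt XML for whole-disk pass-through on a virtio-scsi bus looks roughly like this (the by-id path is a placeholder):

<controller type='scsi' model='virtio-scsi'/>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-id/ata-EXAMPLE_DISK'/>
  <target dev='sda' bus='scsi'/>
</disk>

And on the guest side, a bare-bones smb.conf share is just a few lines (the path is an example):

[photos]
   path = /srv/photos
   read only = no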

virt-manager is a good GUI for KVM

>server
>1-2 computers max

hilarious

>I've heard that I should run a file server on bare hardware so it can fully access drive SMART data and such.
I think you can also do that from a virtualized server, but frankly, for a file server you're probably best off just running it on a less power-hungry machine than the monstrosity you likely have in mind.

>I have also heard about people using shit like a Raspberry Pi for file servers; wouldn't USB hurt the already shitty performance of hard drives?
It's not really a USB issue if you're talking about the performance levels of a normal HDD rather than some PCIe SSD monster.

But the RPis are *all* shit if you want full performance from drive to network. They have a specific bottleneck (everything, including the NIC, hangs off one USB 2.0 bus) that isn't present on many other ARM single-board computers.

Get an Odroid XU4/HC1/HC2 instead. Or maybe a Rock64 [Pro]. They can just about saturate GbE.

I ran ESXi with a 4TB virtual drive (sliced out of a 9TB RAID5 array which was running all my VMs) attached to a Windows Server 2008 R2 VM as a file server for YEARS.
Yes, a dedicated bare metal file server is better, but at the time I knocked out all my infrastructure needs in one shot, and it suited me just fine.
You can always add a dedicated file server and migrate your data later.

>You can always add a dedicated file server and migrate your data later.
Or you just do it from the start and don't run a Windows-based power leech that likely costs you more every year [maybe every two years if your power is cheap] than an SBC or onboard x86 would.

I think I'm missing something. What does SBC mean in this context? All I'm coming up with is "session border controller".
Also, my solution would have worked just as well with Linux. I just didn't know any better at the time.

Basically the whole setup would be an FX-8320-based computer (basically my old PC) with 3-4 HDDs for file storage and one SSD for the hypervisor and the VMs themselves. My VMs consist of an OpenVPN server and a few Python scripts that run 24/7. So I'm thinking maybe I should dump this idea and get a few of those boards instead.
He has a point - it does seem like overkill.

Single-board computer, like the aforementioned Odroids.

> So I'm thinking maybe I should dump this idea and get a few of those boards
Yea. Well, for 4 drives in RAID you might just want the usual ASRock or whatever board with an onboard Atom/Pentium CPU.

But maybe you don't even need that online 24/7. A modest 1-2 drives and a bit of processing power that's online 24/7 is usually enough at home or even in a small office. You can turn on the big powerful machine with 10 HDDs and 8 CPU cores or whatever when you need to work with it, or to do a backup (or automatic sync) of those more modest 1-2 drives.
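
The sync part of that is a one-liner; e.g. run something like this on the big machine after you power it up (the hostname and paths are made up):

rsync -a --delete smallbox:/srv/data/ /mnt/bigarray/data/

That pulls an exact mirror of the always-on box's data onto the big array.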

Don't be that guy. He meant simultaneously.
Yes, the best-case scenario is the file server running bare metal. Running anything in a VM adds some overhead, which is not something you want from a file server.

The Raspberry Pi has USB 2.0. It also has a 100 Mbit NIC, which is itself connected via USB. You won't get very high speeds.
When I ran that setup, the average transfer was about 10 MB/s.
Better would be a Pi-sized device with a 1 Gbit NIC and USB 3.0.
If that isn't an option, a 1 Gbit NIC with USB 2.0 is doable; I get about 38 MB/s file transfers.
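
Those figures match the napkin math: USB 2.0 is 480 Mbit/s raw, i.e. 60 MB/s theoretical, and after protocol overhead you realistically get around 35-40 MB/s, so ~38 MB/s is basically the ceiling. A 100 Mbit NIC caps out at 12.5 MB/s, which is why the Pi hovers near 10 MB/s.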

>Don't be that guy. He meant simultaneously.
too bad ungoogled chromium doesn't support spellcheck
>Better would be a Pi-sized device with a 1 Gbit NIC and USB 3.0.
I completely agree, and it would be an option; the question is which one to get and what software to use.

Odroid XU4
Or even better, an Odroid HC2. You can connect the drive over SATA, so you're not limited to USB:
hardkernel.com/main/products/prdt_info.php?g_code=G151505170472

Is the R710 still worth it in 2018?

I'm deciding between a Ryzen build and an R710. The R710 idles at 200 W, while the Ryzen box is ~70 W with a lot more power on hand. DDR4 RAM prices are just insane though...

But it only has one SATA connector, and I was thinking about RAIDing at least 3 drives.

Yea, of course, considering the less power-consuming setup is worth it if you actually use it.

Have you even calculated the costs involved? If this thing runs all day, you'll spend something like US$130-260/year extra between 200 W and 70 W [an approximate range between the US and European average prices; could be more]. And that's without the amplifying effect of any air conditioning you might also run.
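
The math, for anyone who wants it: 200 W - 70 W = 130 W extra, times 8760 hours in a year ≈ 1139 kWh/year. At roughly $0.12/kWh (US ballpark) that's ~$137/year; at ~$0.23/kWh (European ballpark) it's ~$262/year.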

That "insane" RAM price will probably be amortized really quickly.

How do you get 200 or even 70 W at idle when modern CPUs and even things like SATA have power-saving modes up the ass?

You can team network adapters and connect your host to a SAN for fast reads on a virtual file server.
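
On a Linux host, the teaming half of that is a few iproute2 commands; a minimal sketch, assuming eth0/eth1 are your NICs and the switch is configured for LACP:

ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up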

I would if I had money and space for a fucking datacenter

You've got multiple options, like getting more HC2s / XU4s with a CloudShell2 and stacking them, and/or connecting one or a bunch of drives by USB [definitely preferably with USB 3 and UASP support].
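
On a Linux host you can check whether a USB enclosure actually negotiated UASP by looking at which driver claimed it:

lsusb -t

UASP-capable enclosures show up with Driver=uas in that tree; plain bulk-only ones show Driver=usb-storage.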

Attached: Orico_2_bay.jpg (1000x1000, 89K)

I don't understand. How does this relate to vidya gaming?

This; even with a Haswell E3-1240 v3 I get around 30 W at idle, and 75 W is my absolute max.

You can try googling R710 power figures; 99% of them will be around 180-200 W.

The Ryzen's 70 W includes a few VMs and 3-4 spinning-rust drives, plus PSU losses.

Half the year the server would be an actual space heater, so it'd cut the extra power cost to approx. $50.

The price of DDR4 is just really rustling my jimmies.

Well yea, but you are using an E3-1240 v3, not a crappy R710.

>Half the year the server would be an actual space heater, so it'd cut the extra power cost to approx. $50.
You'd otherwise heat with much cheaper gas/oil/wood and not electricity, right?

Either way, calculate the difference properly; you'll probably see it's not worth it vs. running a nice little ~6 W SBC or a ~20 W onboard x86.

An R710 is not a good choice in 2018. That power usage adds up quickly and your room will be a space heater. Your electric bill will scream. Not to mention how fucking loud it's gonna be.

For a home user, a regular ATX case with a rack of drives will be fine; you don't even really need a server-specific board unless you want features like IPMI and multiple NICs built in. You can just add what you need via PCIe to a standard ATX motherboard with a CPU that supports virtualization and, optionally, an IOMMU.
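
A quick sanity check on a Linux install for whether the IOMMU is actually active (for Intel you typically also need intel_iommu=on on the kernel command line):

dmesg | grep -i -e DMAR -e IOMMU

If that comes back empty, enable VT-d/AMD-Vi in the firmware first.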

Is it possible to boot a drive that was inside a normal computer (so it has data, an OS, etc.) from a VM, or is it only possible to use it as an external drive under another OS?

Yes, either clone it to a VHD/VMDK and add it normally, or just pass it through in physical form. On VMware Workstation you can add a physical drive to a VM config and boot it.
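
If you take the clone route on a Linux box, the conversion is a single command (the source device and output name here are just examples):

qemu-img convert -f raw -O vmdk /dev/sdb old-machine.vmdk

The -O flag picks the output format, so the same command works for qcow2, vhdx, etc.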

nice
is it possible to do it if the drive is on a SATA-to-USB adapter, or would I need to connect it directly via SATA?

You can only pass through entire controllers, not individual drives, IIRC, but I might be wrong.

Easily doable with qemu/kvm (which is probably the most used hypervisor on Linux anyhow, but I don't have statistics).

It's all the same to Linux, just a block device.

With UASP it's a SCSI/SATA-like situation anyhow.
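
As a concrete sketch, booting a whole physical disk under qemu/kvm is one line (the device path and RAM size are examples; you need read/write access to the device):

qemu-system-x86_64 -enable-kvm -m 4096 -drive file=/dev/sdb,format=raw,if=virtio

If the installed OS has no virtio drivers, swap if=virtio for if=ide and it should still boot.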

Use the free version of VMware ESXi. Pic related, it's my home server specs.

Attached: Capture.png (1156x846, 78K)

You can pretty much pass anything on the PCI or USB buses, AFAIK.

Not sure about the performance though. For performant I/O remapping with VT-d, I think the device needs to sit in its own IOMMU group, which usually means passing a whole controller.

Yes, you can still map it as a physical device, I'm fairly certain. If it won't let you, you can try to attach it as USB and boot it. If it's Linux you can use Plop Boot Manager to boot it (Plop loads as a floppy device to bootstrap a USB device); if it's Windows you won't be able to, because it will BSOD if loaded from a USB drive.

Either way, if you plan on making it a permanent VM you might as well just clone it to a virtual disk image.

Not really permanent, just wanted to look around some stuff on an old drive.

Then just add it as a physical drive in the VM config and boot it, or just plug it in and browse it.