Do you think dedicated RAM disks will become a thing again with PCIe 4.0? Imagine 256gb ECC DDR4 3200 as a boot drive... fuck.

Attached: s-l1000[1].jpg (1000x571, 148K)

>storing much gaymes on ram while I play with muh i9
Hmm

>batterie expoodes
Nothin' personnel kid

how long do those batteries last?

NVMe is comparable for sequential read/write and a lot cheaper dummy

this, those things are fast as fuck... and non-volatile

But if jews actually willed it this could be a lot cheaper.

Say you let it use ddr2 or ddr3

um no sweaty

Attached: untitled-1.png (674x518, 24K)

What?
No, it's not even close.
Even "high-end" NVME 4-lane SSDs are only pulling 3000-4000 MB/s at max speeds.
DDR4 RAM disks hit well into mid-way 5-digits in the same tests.

sure, but how much ram do you have? it only takes a few seconds to fill typical ram sizes at 3000-4000MB/s
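quick back-of-envelope, assuming a ~3.5 GB/s NVMe drive is what's feeding the RAM (rough numbers, not a benchmark):

[code]
# time to fill a RAM disk from an NVMe SSD reading at ~3.5 GB/s
ssd_gbps = 3.5
for ram_gb in (16, 32, 64, 256):
    print(f"{ram_gb:>3} GB -> {ram_gb / ssd_gbps:5.1f} s")
# 16 GB ~ 4.6 s, 32 GB ~ 9.1 s, 64 GB ~ 18.3 s, 256 GB ~ 73 s
[/code]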

AMD's X570 chipset has a DDR4 controller in it. Shouldn't be too hard to add an extra 4 slots.

Attached: 28750892.jpg (1124x768, 533K)

RAM exists for the low latency and high bandwidth.
Within roughly a hundred nanoseconds you've already got what you requested from RAM, but even a fast NVMe SSD needs tens of microseconds just to return a read.

Attached: 1561009548748.jpg (658x662, 51K)

look, i'm not arguing an NVMe ssd is as fast as DRAM
but considering the cost of DRAM, and its volatile nature, it doesn't make sense for permanent storage
if you like, you could have your OS extract itself into a ramdisk on boot, then init from there, should be pretty fucking fast (i've done this before, not from an NVMe ssd though)
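the copy-into-RAM part of that looks roughly like this, as a very loose sketch (not an actual init script, and the paths are made up):

[code]
# stage a directory tree onto an already-mounted tmpfs so later reads come from RAM
import shutil, time

SOURCE = "/opt/bigproject"           # made-up example of what you want in RAM
RAMDISK = "/mnt/ramdisk/bigproject"  # assumes a tmpfs is already mounted at /mnt/ramdisk

start = time.time()
shutil.copytree(SOURCE, RAMDISK)     # one-time copy; everything after this reads at RAM speed
print(f"staged in {time.time() - start:.1f}s")
[/code]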

PCIe 4.0 SSDs are doing 5GB/s.

hence batteries, 5VSB, and people who aren't poor

currently available chips that support PCIe 4 only have dual-channel RAM, so a dedicated card would be nice: you wouldn't need to copy shit on every power cycle or keep a synchronization script running in the background, and spending a few PCIe lanes is probably cheaper than paying for higher-capacity DIMMs anyway

The battery is a backup for when there isn't power to the socket.
It should last for a week at least.

No. Most NVMe drives are stuck at x4 at best; you could run a RAM disk at x16. NVMe controllers get crazy hot even at 2GB/s, but RAMdisks don't have this problem, and RAM has better latency.

For about 5 seconds and then it throttles because the controller is overheating.

imagine the smelle

>boot
>from DDR4
Is no one going to call this man a retard?

Terrible.
Luckily RAM drives don't need a controller as complex as an SSD's, because you don't need wear leveling or any of the other tricks to maximize the life of the storage medium, so it doesn't even require a heatsink.

but how much dedotated wam do i need to server

You're the retard here. This has always been possible.

>so it doesn't even require a heatsink

What's the real world benefit though? On paper NVMe is way faster than any normal SSD, but in practice, you can hardly notice the difference in loading times.

The controller doesn't require a heatsink.
The RAM could use one, but even then it doesn't get anywhere near as toasty as SSD controllers.

Not much, really. Latency would be the biggest benefit.

Makes me wonder though, doesn't it allow mobo makers to make x570 boards with quad channel memory...? Even if the 2 chipset channels would be slower than direct CPU channels.

Has it ever been practical? Is it practical now?
Buy Optane persistent memory if you have more money than sense.

That's fantastic.
That's still just 5,000 MB/s vs "slow" DDR4 RAMDisks at 40,000 MB/s.

>Few seconds to fill RAM
If your RAM is taking whole seconds to fill, the RAM isn't what's slow, the drive feeding it is. The DIMMs themselves move data at tens of GB/s.

DRAM sucks for permanent storage.
But for working storage and cache, it's fucking amazing.
Being able to cache a large project file on RAM Disk is fucking brilliant.

For "PrOfEsSiOnAl", Professional, and Scientific workloads, RAMDisks offer amazing benefits for workstations that need low-latency, rapid-access to large amounts of data. Project files, data sets, assets, automated scripts, and more. I've built a few workstations with RAMDisks for the meteorology division of the uni in town, and worked with their software guys, so when they pull down massive, pre-crunched data-sets from their local, or remote, data processing servers to their local workstations to generate maps, predictions, and routing for things such as weather balloon paths, it gets loaded into the RAMDisks, and read as fast as the software can call for it, so they can get their 180MP equivalent map generated in seconds, not minutes, which matters for when relaying information to field data collectors, so they know where they need to go.

RAM cooling is bullshit.
You just fell for my trap :----DD

But to GAY-MERS and 1337 H4CK3RS online, RAMDisks are just really neat toys to play with. Your storage I/O is rarely the bottleneck in a lot of consumer-focused software.

Zen 2 RAM write speed = 25 GB/s
Four PCIe 4.0 NVMe SSDs at 5 GB/s in an x16 slot adapter = 20 GB/s

Closer to RAM than you thought, huh

>muh sequential speeds

Except that the flash memory itself cannot achieve those speeds, the controller cannot handle that throughput, and it throttles under sustained load anyway.

Stay butthurt bro
Sequential speed for video editing is a perfect example of a use case. NVMe in RAID 0 has helped me so much with this

Why does this board always passive aggressively attack technology that isn't necessarily beneficial to normie NPCs who do nothing with their computer? These people don't need anything but a chromebook and maybe a video game console.

Four of them in RAID 0 means their buffer is four times larger, and since that buffer is basically RAM and the controller on an adapter card is fan cooled, they can in fact handle those speeds up to a point, unless you're attempting to fill the drive or something, at which point you're going well beyond the capacity of regular RAM amounts in non-server motherboards anyway.

>why do people make fun of dumb ideas
Dunno mate. It's a mystery to me.

more like twice that

Attached: AIDA-cache.jpg (545x524, 58K)

>I don't have a use for it so nobody does!
Don't you have some books to burn or nerds to harass?

>if i install four of them i can match this one RAM disk
Hold on, let me install 20 SATA SSDs and RAID 0 them together to invalidate you.

tomshardware.com/reviews/highpoint-ssd7101-ssd,5200-2.html

Even existing solutions that use 4x NVMe SSDs in RAID configurations still barely hit 10,000 MB/s, and only in sustained sequential reads.

And a RAMDisk on Zen 2, and on future Threadripper and other CPUs, is or will be faster still, by around 2x that figure.

Why would it? We can now use SSDs or NVMe SSDs, which on paper are a lot slower, but hardly anyone notices the difference.

I wish there was a way to repurpose old RAM sticks as a RAM drive. I have ~70GB of DDR1, DDR2, and DDR3 just lying around doing nothing. It would be fast enough for a RAM drive.

Absolutely not. There's almost no perceivable difference in boot time between a 500 MB/s SATA drive and a 2.5 GB/s NVMe drive, nobody wants to pay 10x the price for a RAM disk to shave an extra half a second.

>it only takes a few seconds to fill typical ram sizes
If you're still on SDRAM sure, maybe.

>Volatile memory
>Boot disk
Retard

Because using a RAM disk as a *boot drive*, as OP wants, is a retarded idea beneficial to no one.
RAMdisks are for specific cases like the ones described above.

Barely anyone notices a difference between PCIe 4.0 vs 2.0, 12 core vs 4 core CPUs, or any other advancements over the last decade or so either. Go be a luddite somewhere else.

>strawman
OP said for booting, dude. Of course a 12-core can be faster than a 4-core and you WILL notice it in certain programs, but for booting up, is there really that much of a difference between a 1-second boot and a 3-second one?

How big is your ram disk buddy? Because my 16x raid 0 ssd array can hold and run a fair few virtual machines and sandboxed apps.

You best believe I'm getting the pcie4 version of it when available.

Only applies to 3900x. Every chip below that has gimped write perf. Pic related

Attached: untitled-10.png (722x874, 72K)

There's no difference in boot time between 4 and 12 cores, yes. You don't buy a 3900x just to make your computer boot faster.

>in practice, you can hardly notice the difference in loading times.
That's mainly because the software doesn't know what to do with all that bandwidth yet and does a piss poor job of parallelizing I/O.

Come back when you have to boot a hypervisor and 15 VMs.

I'm sure your 15 Arch linux vms running on Winblows 10 are very important user

Zen max throughput is determined by chiplet count. Two chiplets = double the max throughput. Or actually a little less because of overhead with the IF.
R/W rate for one chiplet is the same across SKUs at a given clock.
So the 3900X can write 52GB/sec total, but both the 3900X and 3700X will write 1GB in around 35-40 milliseconds.
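Sanity check on that per-chiplet figure using the numbers above (back-of-envelope, nothing more):

[code]
# rough check: per-chiplet write rate implied by a 52 GB/s two-chiplet total
total_write_gbps = 52                    # claimed total for the 3900X (two chiplets)
per_chiplet_gbps = total_write_gbps / 2
ms_per_gb = 1000 / per_chiplet_gbps
print(f"{per_chiplet_gbps:.0f} GB/s per chiplet -> ~{ms_per_gb:.0f} ms to write 1 GB")
# -> 26 GB/s per chiplet, ~38 ms per GB, same ballpark as the figure above
[/code]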

>come back when you've done this really specific task only turbo-nerds or tech companies use that invalidates your arguments

>dozens of VMs on a single server
In this case boot time is irrelevant since a full reboot should almost never happen.

I don't need a link to benchmarks, I've already got four 970 EVOs on an x16 adapter with fan cooling. I get a consistent 10GB/sec for about 30GB of writes, which is usually enough for an edited video, but my use case (Fuji X-T3 footage is lovely but huge) is the reason I want PCIe 4.

Yes there is. You can literally measure this

Eyes can't even see under 3 seconds boot anyway.

That's a very convoluted way of saying that both chips have the same RAM read speed and only the 3900x has fast RAM write speed.

My point stands, the benchmarks don't lie

Measurable yes, perceivable hardly. How many people would pay several hundred dollars for something they can only measure with a stopwatch? Not to mention that it's far easier to mishandle a RAMdrive and lose your data.

>projecting

It's okay, Cleetus. Not everyone has to work minimum wage at Starbucks.

>labs before entering a production environment don't matter

>ad hominem

NVMe SSD vs SATA is not a hundreds-of-dollars difference. Also you don't need to configure PCIe NVMe RAID as a RAM drive at boot, you can simply create one and unassign it from within the OS with a few clicks (I don't know how to do this on Linux though)
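
On Linux a RAM drive is just a tmpfs mount. Rough sketch (needs root, and the size/mountpoint are just examples):

[code]
# create a 16 GB ramdisk at /mnt/ramdisk (example values) by mounting a tmpfs
import subprocess, pathlib

mountpoint = pathlib.Path("/mnt/ramdisk")
mountpoint.mkdir(parents=True, exist_ok=True)
subprocess.run(["mount", "-t", "tmpfs", "-o", "size=16g", "tmpfs", str(mountpoint)], check=True)
# use it like any other filesystem; `umount /mnt/ramdisk` when you're done
[/code]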

>staging dozens of VMs at once
That's one of those extremely rare use cases, like the weather data sets described earlier, and one of the reasons why RAMdisk hardware exists at all.
(I also have a hard time imagining a case where you'd need to boot 15 VMs simultaneously lots of times in a staging environment instead of setting them up one by one)

They both read and write at the same rate per CCX which imposes the same limit on any given thread. The fact that total throughput scales with CCX count doesn't gimp the 1-chiplet SKUs so much as it prevents the multi-chiplet SKUs from getting gimped. If throughput didn't scale like that you'd hit a hard performance limit as you added threads and the 3900X would have shit perf/thread.

It's pretty much the same reason 24/32T Pro/HEDT runs quad channel. If it didn't you'd be wasting threads.

This is why I'm waiting before I upgrade to NVMe. I don't need extra speed that 99% of the software I use can't utilize.

I use a RAMdisk since I have 64GB of RAM.
The problem is your physical RAMdisk as presented needs either to be powered 24/7, because that's just how it works, or to have some automatic backup before shutting down and automatic recovery at boot, as I have.

Just buy RAM and a software RAMdisk. I have 64GB of DDR2 and it is freaking fast, I can't imagine how fast it must be on DDR4 4000MHz.
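The backup/recovery part doesn't need to be more than something like this, as a rough sketch (the paths are made up, and you'd wire it into your own shutdown/startup scripts):

[code]
# back up the ramdisk to persistent storage at shutdown, restore it at boot
import shutil, sys, pathlib

RAMDISK = pathlib.Path("/mnt/ramdisk")                # example mountpoint
BACKUP  = pathlib.Path("/home/anon/ramdisk-backup")   # example backup location

def backup():
    # copy ramdisk contents out before power-off
    shutil.copytree(RAMDISK, BACKUP, dirs_exist_ok=True)

def restore():
    # put everything back once the ramdisk is mounted again
    if BACKUP.exists():
        shutil.copytree(BACKUP, RAMDISK, dirs_exist_ok=True)

if __name__ == "__main__":
    backup() if sys.argv[1:] == ["backup"] else restore()
[/code]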

Chill, it was an honest question, especially considering the OP. I know it has its uses, as some other user pointed out. I was just asking about "normal" use.

In fairness every Xeon since forever has had quad-channel memory. It has really only become a limitation on the desktop because core counts are up.

0bytes.
I'm arguing in principle.

Can't you see the battery attached to it?

how does that work

the battery can be replaced while it's on?

No, it can't, it just recharges when the computer is on.
You have to back up the contents of the drive if you want to replace the battery.

that's very specific
I turn off my computers every night

You can do that, the battery keeps the data for a couple days. But you can't leave the computer off for a month or more, like with regular drives.

Not really, you won't see any benefit over, say, two PCIe 4.0 x4 NVMe drives in RAID 0; you just can't process the information fast enough.
Sure, it would be great as a cache disk for video editing and the like, but not for games or a system drive.

The main benefit of solid-state memory is its seek time over hard drives; raw speed really isn't that important for things like a system or game drive.

Attached: 1562797066756.png (596x391, 131K)

No Nigga. Ram is blazing fucking fast. I miss the DDR2 ram disk days, before you nvme zoomers.

>Imagine 256gb ECC DDR4 3200 as a boot drive
>on a RAMdrive
Except RAM gets cleared when there's no power.

>she's too dumb to notice the battery

>She's too dumb to realize that using pci-e defeats the point of a ramdisk

We need to shill these on /csg/ so the chinks make them accessible with DDR4, so we can use them cheaply once DDR5 comes out.

Foolish zoomer. They came with batteries.