Hey Jow Forums, where can I buy a working used IBM mainframe?

Attached: 1*iW4VBoVn6OCBQxfp_2bZ9A.jpg (449x600, 60K)

Other urls found in this thread:

youtube.com/watch?v=45X4VP8CGtk
pastebin.com/umhdBsPu
youtube.com/watch?v=vuXrsCqfCU4
hercules-390.org/
spectrum.ieee.org/tech-talk/tech-history/space-age/what-does-it-take-to-keep-a-classic-mainframe-alive
en.wikipedia.org/wiki/ECC_memory#Advantages_and_disadvantages

Look down a shotgun barrel

You can't, and even if you manage it, the fucking thing won't do anything without a license that starts at 100k a year.

ebay, but I wouldn't bother

There should be some old mainframes that don't need that license, no?

Why?

Don't they just run GNU/Linux?

>implying that they aren't just normal servers inside running some IBM meme OS with 4-8 weird huge custom CPUs with god knows what kind of vulns in them

hahahahahahahahahahahaha
Trust me, you don't know what you're asking; there's a reason datacenters are full of racks of x86-style servers. It's a completely different instruction set, and they're meant for batch processing of repeated, specific workloads for billion-dollar companies, research labs and the like.

Why the fuck would you possibly want one?

There's plenty, but they're usually 100+ kg for even the most basic ones, so most people who ended up with one in their home or warehouse don't even bother giving them away on Craigslist.

I'm fascinated by these things. They're used in every bank and big company, but I don't even know how they work.

Woah calm down there buddy, the reason I asked is exactly because I didn't know. Could you explain in more detail how these work? I literally have no idea.

>normal servers
they're not, they have options where you can have a process run concurrently across 3 CPUs doing the exact same thing, and at every step they compare their state to make sure a CPU didn't go bad and produce the wrong result. you get autistic features like this
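As a toy illustration of the voting idea (the real thing happens in hardware, per instruction; this is just the concept sketched in Python with a made-up faulty "CPU"):

# Toy illustration of lockstep / triple-modular-redundancy voting.
# Real mainframes do this in silicon per instruction; this only shows the idea.
from collections import Counter

def good_add(a, b):
    return a + b

def faulty_add(a, b):
    # pretend this "CPU" has gone bad and flips a bit in its result
    return (a + b) ^ 1

def vote(results):
    """Return the majority result, or raise if no two units agree."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no two units agree, cannot mask the fault")
    return value

cpus = [good_add, good_add, faulty_add]   # one unit has silently failed
results = [cpu(40, 2) for cpu in cpus]    # [42, 42, 43]
print(vote(results))                      # 42 -- the bad unit gets outvoted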

That isn't how corporate licensing works.

The only real difference you'll notice is the system or board management controller for remotely managing it. After it boots up it's really no different to use than a regular x64 box.

You have to be over 18 to post here you fucking mongoloid.

If they're so similar, why do big companies still use them instead of normal servers that cost 100x less?

no but there is a version of Linux available, apparently

>It's a completely different instruction set
Yes, which Linux supports. Just google "Linux z14" or "Linux z13"

Why do companies pay for redhat?

Lots of ECC memory, a shitload of sockets, obscure setups like the lockstep thing mentioned above, vendor support, more specialised hardware such as Fibre Channel networking, and so on. None of this will be of any use to you whatsoever.

And a lot of companies are moving away from these and towards racks of cheaper x64 systems.

Google z14
Look at ibm documentation, try searching on redbooks to start

If you want the next best thing get a Talos II
It's not z/Architecture, but POWER9 is what IBM builds supercomputers out of.

Attached: Talos II.jpg (749x374, 68K)

Theory is good but practice is better. That's why I want an old one.

not him, but I'll pitch in
I don't know a whole lot about the differences between high-end server CPUs and consumer CPUs, but I remember reading once that server CPUs didn't fare well on consumer benchmarks because they're not interrupt-oriented. They're designed for continuous number crunching (I think the article was talking about SPARC and Alpha at the time).

There is some kid who bought an ancient one and got it running, he has a talk up on YouTube somewhere, and a blog. It sounded like a massive waste of time.

Search "here's what happens when an 18 year old buys a mainframe" on YouTube

Learning something new is never a waste of time. If they're still used today, there must be some reason.

SPARC has been significantly slower than x86 at just about any single-threaded task for decades. Sun hardware was only really any good at massive amounts of concurrency.

the reason it's stupid is because server CPUs are generally focused on multithreading and high memory bandwidth rather than high-end single-core performance, and the money spent on a server motherboard could have been spent on a more suitable setup
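Not mainframe-specific, but here's a rough Python sketch of that throughput-vs-single-thread tradeoff; on a many-core server the parallel number wins by a lot, on a quad-core desktop much less (numbers and workload are made up):

# Throughput vs latency: many cores win on embarrassingly parallel batch work,
# a single fast core wins interactive tasks.
import time
from multiprocessing import Pool

def crunch(n):
    # stand-in for a batch job: pure number crunching, no interrupts / IO
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 16

    t0 = time.perf_counter()
    serial = [crunch(n) for n in chunks]
    t1 = time.perf_counter()

    with Pool() as pool:              # uses every core it can see
        parallel = pool.map(crunch, chunks)
    t2 = time.perf_counter()

    assert serial == parallel
    print(f"serial:   {t1 - t0:.2f}s")
    print(f"parallel: {t2 - t1:.2f}s  (the gap grows with core count)")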

The majority of companies don't use mainframes, and plenty of mainframe users are very slowly migrating away from them. They are still used because, in the short term, it's cheaper than upgrading a bunch of legacy systems, and most business leadership doesn't care about the long term and just kicks the can down the road. Plus, there is a degree of vendor lock-in. IBM has used legal maneuvers, like copyright claims, to prevent people from emulating the instruction sets used by some of their mainframes. If this weren't the case, many users would just emulate the environment for their legacy systems. The only users left would be the niche cases that genuinely need features like the lockstep redundancy described above.

>If there still used today, there must be some reason.

Legacy. Most of the places that still have them have a huge stack of specialised in-house software that would be a nightmare to port to another system.

>and most business leadership doesn't care about the long term and just kicks the can down the road

Porting and/or rewriting millions of lines of undocumented, untestable spaghetti is almost impossible not to fuck up. It's a ginormous, risky investment.

What kind of application needs autistic features like this?

Apparently there are multinational companies that use mainframes because they have mission-critical binaries they no longer have the source code for, and since “it just works”, it would cost too much to rewrite, test and re-implement, so they just don’t.

there's still supercomputers with bulldozer opterons running, that doesn't make bulldozer good

Hey, COBOL fag here. Someone did this and they did a talk at SHARE about it. You can watch it here: youtube.com/watch?v=45X4VP8CGtk

Tl;dr, it's doable but a pain in the ass and you're pretty much on your own. Also you might need a backhoe.

A better idea is to run z/OS in a VM. But you'll need some specialized software and, more importantly, the DASD images for installation! There's a guide over here. SEED THE FUCKING TORRENT THANKS. pastebin.com/umhdBsPu

Alright, you just spent two days downloading a DASD for the ADCD image and got it booting. Fun fact: that software is nearly a decade old and lacks a bunch of features. So now what? You can license a single-core VM for $5000/year and run it locally, or sign up for Master the Mainframe and get access to a system with an active sysop, the latest software, USS and other good shit.

Attached: 800px-z14_mainframe_drawer_showing.jpg (800x600, 101K)

Ever watched Office Space?

When the 1.0x10^-233 place in a FLOP is the difference between your financial institution losing millions of dollars or not, you triple-check your work like you're an autist.
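If you want to see the problem on your own box, here's a quick (non-mainframe) Python sketch of why money code uses exact decimal arithmetic instead of binary floats; as far as I know z hardware even has decimal floating-point units for exactly this reason:

# Binary floats can't represent most decimal fractions exactly; the error is
# tiny per operation but it compounds across millions of transactions.
from decimal import Decimal

balance_float = 0.0
balance_exact = Decimal("0.00")

for _ in range(1_000_000):
    balance_float += 0.10            # one dime, a million times
    balance_exact += Decimal("0.10")

print(balance_float)                 # 100000.00000133288 (ish) -- already off
print(balance_exact)                 # 100000.00 exactly
print(balance_float == 100000.0)     # False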

youtube.com/watch?v=vuXrsCqfCU4

This guy tears down and recycles old mainframes. Satisfy your hardware curiosity there.
As for software curiosity: generally, they're giant stacks that emulate one system, and the management layer in the stacks divides the tasks they're given among the resources.
It's like handling a mining quarry with 25-story-tall earth-moving equipment. Efficient at specific types of quarrying, with a shit-ton of setup and maintenance ahead of time, costly operations, and specialized operators required.
But everyone else is quarrying with newer, more efficient techniques, smaller, more targeted equipment, and less specialized operators.

Looks like the same heatsink mounting system as POWER9, with that black bar across the middle and metal clamps on the sides

Attached: SUMMIT_NODE11.png (2048x1356, 1.89M)

I want this computer so much

>get your IBM mainframe
>immediately realize you don't even have the correct power adapter to plug it in for power
>the moment you do it blows every fuse on startup

>ECC memory
what? why do you need lots of memory for ECC? each calculation takes up only 32 bytes, and that's for just a split second..

If you've ever used any IBM z-linux shit you'd know that's a joke. Or more like an expensive nightmare.

Honestly I barely interact with them, and the times that I have it's been a huge PITA (namely dumb IBM software that has a version for their stupid z-Linux stuff). Basically they are meant for specialized applications that require massive data crunching and queue shit. Bank transactions are a big one: every actual transaction is put onto a stack for a mainframe to crunch away with some monstrous core banking software written decades ago, maintained by aging mainframe wizards who are halfway through a 200-years-of-service blood contract with The IBM. They don't have the same instruction sets as x64/x86 computers. If you are a mainframe developer, you get a fancy special flash drive thingy from Big Blue that authorizes a weird VM to emulate a mainframe OS, and they keep tabs on who has all of those devices. If you're a credit card company who needs to crunch billions of transactions, or a laboratory running some whacky equation to build a black hole, or some three-letter agency doing [redacted] resource-intensive process, then a mainframe is for you. Most things consuming the data are running on regular servers, and as said, most mainframe users are shifting away from mainframes to normal server racks now, with the biggest users being super-rich companies and governments stuck on them because of the legacy lock-in described above. And good luck getting software or support or anything for it without being good friends with IBM. I'm an IBM partner and it's a bitch enough to get licenses and files and stuff sent over properly, and that's when lots of money is changing hands. All the mainframe guys I know have been doing it since they were sucked into IBM fresh out of school, and they realize their numbers are shrinking. Also, whatever surplus you'll find will be ancient, insanely power hungry, and even more of a PITA.

Really? Source? There's gotta be a way to boot it and run programs without the thing phoning home, right? There's a video on YouTube of some museum booting up an S/390, so when did they begin to ask for a valid license?

They aren't. You have hot-swappable CPUs and RAM, plus the OS sees a unified huge pool of memory and CPUs, because they use special buses that allow all that stuff without slowing down to a crawl. If you tried that through regular Ethernet-connected servers it would be slow as fuck. At least that's my understanding, I'm not an expert.

>Office Space
I don't remember any of this relating to Office Space?

They try to get rich by redirecting rounding errors to their account, but they fuck up by one decimal point and almost end up in fuck-me-in-the-ass federal prison.
Best comedy movie ever.

Followup:
Mainframes are beautiful and at least historically awe-inspiring, but it's ridiculously impractical to just jump into one. I think what you want is a homelab with a rack. I found a 42U rack for cheap and am slowly filling it up with servers and switches and stuff, so it's all 'ooh look, I have a datacenter thing in my living room!', and that's plenty of useful power. Rack servers are basically like a regular desktop but with redundant everything (two PSUs, two NICs, usually two physical processors each with their own RAM channels) in a form factor that lets you slide them into a standard 19" rack and slide them out to work on them. You can get one and start adding more over time to increase available power. I got a Dell R710 a couple years ago for ~300 bucks with two processors and 144 GB of RAM. An R720 can be had now for better power savings and processing power; my roommate has an IBM 3650 in the rack as well (though IBM has since sold its x86 server business to Lenovo). Mine has a hypervisor and it's running all kinds of useful VMs: media center, storage, firewall, security cameras, and whatever lab environment I need to play with other software. Virtualization/hypervisors, networking, and whatever normal sysadmin stuff you want to practice are far more useful and worth the surprisingly mild power consumption. Hell, I've got some friends who just buy a rack server and lay it against the wall to use as their main desktop.

Dell is used more often for homelabs because they keep posting all their software updates publicly; my IBM account seems to get me the software to update the IBM server we have; HP puts their shit behind a paywall. Just look up homelab and watch for the threads on here. LARP as a datacenter, not a bank's basement.

>aging mainframe wizards who are halfway through a 200-years-of-service blood contract with The IBM
rofl
good posts user, thanks

OP you can download the IBM mainframe z/OS VM via torrents. Besides, running those things costs a fortune.

Nope, z/OS.

Is that Curious Marc?

Before you look into buying one, maybe play with an emulator to get a feel for what a tiny system would be like.

hercules-390.org/

The only programs you're going to boot are weird old undocumented terminal applications for managing a flight booking database or something.

Perspective on the size of these things: I have an IBM-branded oscilloscope that used to be at a customer site in the 80s. The mainframe guy regaled me with tales of working with that system; they would use the scopes to diagnose the machines. As in, "oh, result is weird, is it communicating with memory right, let's probe the physical bus and see what square waves are going by" so they could identify the instruction words being sent. That's the scale of these things. He showed me a processor, didn't call it a CPU, had some other name like modular processing something. It looked like 4 giant CPUs attached to a giant heatsink; you would slide out a vertical section of the mainframe, turn off the water (cuz it's hooked to the building water supply), unscrew some lag bolts that hold down the waterpipe heat exchanger, and change them out with a giant handle like some kind of scifi show. It was something weird like a 112 bit instruction set. I think it was from a 3090 or something.

Mainframe guys are funny. Some of them seem to barely know how to use a regular (ironically, IBM form factor) x64 computer, and they don't quite understand why people are moving to rackmount servers, because they view all these personal-computer-tier machines as little more than a child's plaything: cartoonishly tiny versions of 'real' computers, whose processors are the size of dinner plates and whose I/O channels were as wide as our fucking RAM sticks.

spectrum.ieee.org/tech-talk/tech-history/space-age/what-does-it-take-to-keep-a-classic-mainframe-alive
Just saw this on hacker news.

>work at a big company
>we have one mainframe left for some legacy application
>it's maintained by 2 old guys
>company never tries to look for new people to take over
>both of the old guys want to retire
>company finally announced a plan to axe the mainframe in 3 years time for a different third party application on Linux

Kind of feels bad.

Interesting tale, but you can't really compare the machines from the 70s and 80s to now. Back then, at least on home computers, almost everything was made with logic chips soldered to the motherboard, so if one of the TTL logic or RAM chips was acting up on, say, an IBM PC, you'd need to probe it with a logic analyzer to repair the board.
Nowadays I think nobody is gonna hook up a logic analyzer to a modern z mainframe, because everything is probably modularized, so if anything fails you just swap the unit that failed with a new one, just like with PCs.

>The only programs you're going to boot are weird old undocumented terminal applications for managing a flight booking database or something.
But that's pretty much all these machines are meant to do: run one or a few business applications and do it fast, 24/7 with zero downtime. You can also use them to run Linux virtual machines while knowing that the underlying hardware is fully redundant out of the box.

I think it's a shame these things cost so much and their reliability features haven't made their way to more low end hardware.

I've never seen 'industrial' water cooling before. Cool.

>But that's pretty much all these machines are meant to do
Yeah, I think we're in complete agreement; I just mean it's utterly useless in your home besides looking super cool and making the power company think you're growing weed in every room. To be fair, modern rack servers have redundant hardware and don't need to have downtime save for the rare BIOS update (and my surplus gear isn't getting many more of those outside SPECTRE patching). Any time you get more complicated than one or two business applications written for the exact hardware they run on, you introduce complexity that will result in issues at some point. Even the modern mainframe systems have problems sometimes, hence the top-tier mainframe guys just sit around waiting for a call that a Fortune 100 is having a mainframe issue and losing billions of dollars a minute, or some end-of-their-world scenario. I don't think super-general-purpose computers that are running many different applications on a wide variety of hardware configurations can compete, though rackmount servers are pretty damn close. With multiple servers and vSphere or other hypervisors, you can migrate all activity off a physical machine if it needs to go down for some kind of update or maintenance, and most parts in them can be hot-swapped anyway. I think for some rack servers you can even hot-swap CPUs and RAM if you tell your management module to shut down that processor and RAM bank, though I sure as shit am not gonna try that on mine.

I learned how to write assembly code for the original IBM System/360 instruction set (contemporary in ~1965) at a medium-large state university in 2016. Ask me anything.

my uncle works for IBM

+ others, I don't want to quote everyone

Not OP here but thanks for the info, this stuff is really fascinating.

I am IBM AMA

Why would you plug it in if you realise you don't have the correct adapter?

The implication is you hire an electrician to set you up after an expensive bill.

thanks user, that was a really interesting video. His parents must be pretty cool people, no way mine would let me have a mainframe

Just curious, what's the advantage of running services in one VM instead of, say, one server running all of the daemons you need to do all of this stuff simultaneously? Surely the VM overhead is significant.

one OS*

Close, there is Linux for Z and dedicated hardware for it, but usually you would just use commodity hardware for that unless being colocated on a mainframe would be advantageous. FFS, IBM even has a processor type dedicated to running Linux workloads.

Fuck that, use a modern mainframe. Get creds for the Master the Mainframe learning system. It's a bit of a weird time, but if you want to learn, it's an invaluable resource alongside the Redbooks.

You seem to know what's up. I remember hearing a tale years ago of a way to connect two mainframes to share workload in a redundant manner. The idea being that you would be able to dismantle the first one and move it up to 100 km away or something. Was I just really fucked on LSD, or is that a thing?

Take a look at what IBM Summit is using for cooling; it heats a fucking building.

Fuck yes, hot-swapping CPs is one of the coolest aspects of z/OS. The Honeywell guys used to do a demo with a frame where they'd set it up at a firing range and pump it full of lead while it kept processing.

How the fuck do I get a job with z/OS and no CS degree. Fucking love this shit and fuck with COBOL/DB2 on the daily in Canada.

Yeah, but compartmentalization. You can take down a VM and only a portion of your infra goes down. If you're orchestrating properly, your fallback systems are activated and any transactions replayed. Too much load on one box? Spin up a new VM and let the load balancer handle the rest.
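In toy form (not any real orchestrator's API, just the fail-over-and-replay idea sketched in Python with made-up backends):

# Try backends in order; if one dies mid-request, the same transaction is
# simply replayed against the next healthy one.
import random

class Backend:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def handle(self, txn):
        if not self.healthy or random.random() < 0.1:   # simulated crash
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} committed {txn}"

def submit(txn, backends):
    for b in backends:
        try:
            return b.handle(txn)
        except ConnectionError:
            continue                  # replay the same transaction elsewhere
    raise RuntimeError("all backends down, queue the transaction for later")

pool = [Backend("vm-1", healthy=False), Backend("vm-2"), Backend("vm-3")]
for txn in ("debit#1", "credit#2", "debit#3"):
    print(submit(txn, pool))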

Cheaper to virtualize.
Makes more use of your hardware.
Easier to change configuration.
Easier to set up failovers.

>How the fuck do I get a job with z/OS and no CS degree. Fucking love this shit and fuck with COBOL/DB2 on the daily in Canada.
Ah, you're the guy who made that thread some days ago. As everybody told you, when the old fucks who are running those things are close to dying, the companies find ways to migrate everything to commodity hardware using the least amount of money possible. Sorry pal but no matter how hard IBM shills its shit, nobody is falling for the mainframe meme except branches of the military with some spare trillions to throw around to their corporate friends. So maybe find out what government bodies are still investing in big iron and apply there.

Nah I was the guy responding in it. I know that it's not going to happen. :P

Also, if anyone wants to fuck with old systems, check out the Living Computer Museum.

Like some banks with their systems written in COBOL still running today.

I am Cray AMA

How many FLOPS?

I forgot I had z/OS installed in Hercules, never knew what to do after logging in

Attached: Screen Shot 2018-11-03 at 03.27.13.png (1680x708, 238K)

>Surely the VM overhead is significant.
It actually isn't with modern hardware and hypervisors. A few hundred MB of RAM and 1% CPU.

Yeah, a type 1 hypervisor like ESXi basically swaps around what gets to run on bare metal; it isn't anything like running a VM from inside VMware Player or something. Also, the vast majority of my VMs are just PhotonOS (a super-ultralight *nix OS designed to run Docker containers) with the relevant sets of Docker containers inside. It's like I have dozens of separate servers, and the images could be moved to other servers if I needed more breathing room. You can even overallocate RAM and disk space and such, because all your machines won't use all their RAM all the time, so it spreads their usage out to make the most use of your hardware. E.g., I have 144 GB of RAM, but I can allocate maybe 300 GB spread across various images, knowing that some images will just be turned off when I'm done or won't all be under heavy load simultaneously.
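The overcommit arithmetic itself is nothing fancy; here's a rough Python sketch with made-up VM names and sizes:

# Back-of-envelope RAM overcommit check for a homelab hypervisor host.
# Numbers are invented; the point is that allocated != resident.
physical_gb = 144
vm_allocations_gb = {"media": 16, "nas": 32, "firewall": 4,
                     "cameras": 8, "lab-win": 64, "lab-k8s": 96,
                     "docker-host": 80}

allocated = sum(vm_allocations_gb.values())   # 300 GB promised
ratio = allocated / physical_gb               # ~2.1x overcommit

print(f"allocated {allocated} GB on {physical_gb} GB physical ({ratio:.1f}x)")
# Fine as long as the VMs don't all demand their full allocation at once;
# if they do, the hypervisor starts ballooning/swapping and everything crawls.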

generally you can't because those are licensed out instead of sold to the businesses.

That's quite interesting thx

normally, mainframes are not sold. instead IBM rents them with support included

Build your own

This is a good thread

read
en.wikipedia.org/wiki/ECC_memory#Advantages_and_disadvantages
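If code is easier than the wiki article, here's a toy Hamming(7,4) sketch in Python. Real ECC DIMMs use a wider SECDED code over 64-bit words and do it entirely in hardware, but it's the same trick:

# Toy Hamming(7,4): 4 data bits protected by 3 parity bits; any single flipped
# bit can be located and corrected.

def encode(d):                       # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]     # codeword positions 1..7

def correct(c):                      # c = 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s4  # = 1-based index of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1         # flip it back
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

word = [1, 0, 1, 1]
sent = encode(word)
sent[5] ^= 1                         # cosmic ray flips one bit in "RAM"
assert correct(sent) == word         # ...and nobody ever notices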

Government auctions.
Also good for lots of used PCs and servers in case you need them for something.
They do, but why would you do that?
Retard.
>they're meant for batch processing
Double retard. IBM has several time-sharing OSs built for their mainframes.
>but I don't even know how they work
Contrary to popular belief, there's not much to it. The core is 1960s computing paradigms with modern shit bolted on top in a backwards-compatible way. That goes for hardware as well as software. You'll find numerous coprocessors and off-die caches and interconnects and so on, and if you find something oddly kludgy in z/OS, chances are you'll be able to trace its origins back to punched cards.
Nah, that's bullshit. The real reason is that Intel (and AMD) started making enough money to out-innovate everyone else, and in the late '90s/early '00s the UNIX/RISC crowd found themselves short of breath. The apocalypse that was Itanium killed off PA-RISC, Alpha, and MIPS, some of the best architectures of the time. PowerPC fell behind in desktops, POWER was only barely competitive in servers, and Sun failed as a company and was bought out by Oracle, who only does maintenance.

AIX probably.

*z/OS

You already have a computer far more powerful than an 80s mainframe.

Attached: 1539121305230.jpg (685x720, 68K)

What's stopping you from sucking 5000 dicks for $1 to get one?
Let's assume you can fit 10 dicks per day. 10 hours of work, 1 dick per hour. 3650 dicks per year.
In roughly 1.5 years you can afford one.

Start sucking before it gets outdated and you'll want to upgrade.

But the case isn't very gucci compared to IBM

What are the pros and cons of POWER9?

Crazy interesting. Thx. So they built their computers like they built their typewriters...

Attached: IBM Selectric Guts.jpg (1280x852, 124K)