Does architecture matter?

Does CPU architecture really matter anymore? I figured if you slap on enough transistors and run the chip at a high enough clock frequency, you should be able to beat all the competition, right?
Is there any reason, other than the lack of compiled software, why non-x86/non-x64 architectures can't thrive?

in the end, must there only be one - or two? (a question quite relevant to operating systems too)

Attached: collage.jpg (1805x1200, 250K)

it matters from the perspective of the wellbeing of whoever R&Ds the chip.
also CISC (x86) generally performs better than RISC (arm/etc) afaik

please be baiting...

CISC is to RISC as parallel buses are to serial buses. It used to be worth it, but now the overhead has caught up to it and RISC is simply much superior.

Attached: SATA_vs._PATA.jpg (489x237, 13K)

If you slap a ton of transistors on one core you won't be able to clock it very high, since the signal has to propagate through that much longer chain of gates within every cycle
And that philosophy ignores the limitations and bottlenecks in other parts of the processor, along with production cost

this is where high level languages and abstraction have got us

Interesting perspective, you'd probably know better than me

>slap as many transistors as possible
>gate delay accumulates so much that you can't run it past 1GHz
>no cache brings your ipc back to the 70s
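rough numbers on the gate delay point (invented but ballpark figures): a critical path of 50 gate delays at ~20 ps each is ~1000 ps per cycle, i.e. a ~1 GHz ceiling no matter how many transistors you pile on. cut the path to 25 gates and the ceiling roughly doubles, which is why pipelining and smarter layout beat raw transistor count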

>Does CPU architecture really matter anymore?
It does matter, since proprietary software exists.

Intel guy here. Yes, architecture matters, and it's a constant trade-off, like all of engineering. You can do one thing well or do many things just ok. A lot depends on algorithms/software. Most of a CPU's area is cache, so transistor count isn't all compute. More transistors and a bigger die area are harder to cool and to clock higher.

>Does CPU architecture really matter anymore? I figured if you slap enough transistors and run the chip at high enough of a clock frequency, you should be able to defeat all the competition, right?
??
>Is there any reason other than lack of compiled software why the non-x86/non-x64 can't thrive?
??????????

what the actual fuck are you trying to say/ask? good lord you come off as really dense

You're going at this idea from the wrong direction. Don't think of an instruction set architecture as all of the combined logic on the chip, but rather as the interfaces it exposes to the user for running software, because that's what really matters. And yes, the ISA does matter. Intel/AMD CISC designs are fairly bloated with all sorts of shit tacked on over the years. RISC is better in that you can relegate the interesting stuff to the compiler. Tweaking performance on the software side is always better.
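Rough sketch of the "let the compiler do the interesting stuff" point (actual codegen depends entirely on your compiler and flags, this is just illustrative):

/* same trivial function, viewed through two ISAs */
int get(int *p, long i) {
    return p[i];
}
/* typical x86-64 output: one instruction with a fancy addressing mode
 *     mov eax, dword ptr [rdi + rsi*4]
 *     ret
 * typical RISC-V (RV64) output: the compiler spells each step out
 *     slli a1, a1, 2      # scale the index
 *     add  a0, a0, a1     # form the address
 *     lw   a0, 0(a0)      # load
 *     ret
 */

Either way the hardware underneath can implement it however it likes; the ISA is just the contract it has to honor.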

Not really sure how to explain this stuff, but just know that RISC is generally more secure due to a simpler hardware design. The only downside is speed, but fuck your video games and pajeet javascript.

Attached: 97dbdec11405f205ac267c5045964072653307cf1d94ae03737f3cd768fa87fb.jpg (242x247, 11K)

It really doesn't. It's just a guise to get people interested enough to actually research and understand and eventually contribute, too. Then they quickly realize that it doesn't matter and that they've only wasted their lives on a futile game.

>high enough of a clock frequency, you should be able to defeat all the competition
What if the competition achieves roughly equal clocks AND has a better arch?

If by architecture you mean the ISA, it doesn't matter as much as it used to. Back in the day the decode logic took up a ton of space on the die (like half of it). Thus RISC was thought up to shrink the decode logic. Today it doesn't represent much space on the die compared to what the caches eat up.

That said, in the end RISC kind of won. I've heard that if you work at intel and peel back the x86 magic, under the hood of the CISC ISA they basically have a system that runs on RISC-like microcode.
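The usual illustration (simplified, and I don't know the exact internal encoding) is a read-modify-write instruction like add dword ptr [rdi], eax: the decoder cracks it into roughly a load micro-op, an ALU add, and a store, i.e. pretty much the same load/op/store sequence a RISC compiler would have emitted in the first place, so the CISC surface is really a translation layer over a load/store machine.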

After working in industry for a while and having done a bunch of low level kernel programming on X86, ARM and PPC I can give you my theory about what will happen.

As I've alluded to, the ISA does not matter. Intel's advantage is that they are way further ahead in cache design, branch prediction, out-of-order execution (oh wait, kind of, lol meltdown) and just overall optimization of the pipeline. Hence the raw clock speed and IPC of x86. This is not that strongly tied to the ISA.

IMO, where intel and x86 are going to get burned, and ultimately lose the market slowly over the long run, is that when you buy intel you have to buy all of their peripherals (some very out of date).

For example, if you want to make an amazing cellphone SOC that uses your very well designed peripherals on a single package, you can't do it with x86. Intel won't license the core and/or PCH to you. So at a bare minimum you need to get an intel SOC and connect your chip over the PCB. This makes your design more expensive and makes it harder to pack more stuff onto a small phone PCB.

However, you can go to ARM, license an ARM core, DDR controller, cache controller, etc., or whatever combo you need, and attach your novel peripheral. It won't be as fast as intel, but it will be cheaper to make and use less power.

Thus as ARM slowly grows in market share they will throw more resources at closing the gap with Intel. I bet within a decade or two ARM will catch up.

And then there's that RISC-V wildcard...

Attached: 1415153797012.jpg (500x440, 62K)

ARM is around 10% more efficient than x86. This has to do with x86 being hot trash tho. Jim Keller specifically mentioned that AMD K12 would have been 10% higher performance because of an ISA advantage.

>I bet within a decade or two ARM will catch up
Dude, it's 5 years away at most. Aren't you seeing how Apple's CPUs are performing? They're all fanless.

It absolutely did, at one time. The simplicity of early RISC designs allowed for much more efficient and/or powerful designs than would otherwise have been possible under the same constraints with a common CISC design like x86 or 68k, and CISC designs were very much loved among developers who preferred an assembler to a compiler and knew how to use those powerful but specific instructions. A lot of differences are also still apparent in more constrained use cases like embedded systems, where the smaller memory footprint offered by SuperH's 16-bit instructions may make it a more attractive alternative to MIPS or ARM in a project where memory will be scarce, for example.
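To put crude numbers on the SuperH point (invented figures, and it ignores that the 16-bit encoding sometimes needs a few extra instructions): a 50,000-instruction firmware image is roughly 50,000 x 2 bytes = ~100 KB of code with SH's fixed 16-bit encoding, versus 50,000 x 4 bytes = ~200 KB with classic fixed 32-bit MIPS or ARM, which can easily be the difference between fitting in on-chip memory and needing an external part.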

But the story's different for desktops and other "real" computers that we typically think of when we talk about these kinds of things. As processes continued to advance, abstractions piled on and the fundamental algorithms driving most of the software we use every day stayed the same, the differences between one family of processors and another became largely meaningless. The once bulky x86 decoder still takes roughly the same number of transistors as it did years ago and claims a smaller share of the die with every shrink, saving a few hundred kilobytes here and there with a SuperH chip doesn't make as much of a dent when you've got gigabytes to throw around, nobody develops full applications with nothing but an assembler anymore, and you wouldn't even know what was inside a Talos II if someone peeled the sticker off the front before showing it to you. That's just how it is from a purely practical perspective, no matter what the shills for x86, ARM, POWER, Itanium or whatever else are going to tell you. Especially in the GPL era, it all runs and does the same shit, and the most you're going to see are a few meaningless percentage points on a benchmark outside of some very specific cases, for example POWER's blazing fast crypto acceleration.

The tl;dr of all of this is: in 95% of common applications, it really doesn't matter. Considering a MIPS and an ARM design of roughly similar price point and theoretical performance, the decision of what to use is going to be based more on tools, support, supply and other external factors than on any part of the architecture.

I've spent a long time collecting all kinds of "weird" workstations, servers, laptops, PDAs and just about anything else with all different kinds of chips in them, and in the end unless you actively try to lift up the curtain, it's all just a sticker on the case. I don't judge a system by just its ISA anymore.

You're forgetting to mention the performance-per-watt factor, which is very important nowadays since mobile took over. x86 is just too shitty for mobile since it scales down very badly.

Yeah, I was trying to touch on that with the first sentence, but it probably didn't really get that across. It was definitely a huge deal especially in the early '90s, though I'd blame x86's terrible showing in the embedded market nowadays more on external factors like lackluster management, an inability to stand out against established players in a market already flooded with choices, and maybe even market stigma, rather than just the architecture by itself. x86 (along with many other CISC designs) was once reasonably successful in that market, after all.

For some reason, having fewer, simpler instructions leads to better performance, but of course larger binaries.
x86 has thousands of instructions now, it's incredible.

Easier to write compilers for, and early RISCs could also clock extremely fast to compensate for some operations taking longer than on a CISC chip with dedicated silicon.
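Think of something like a block copy: on x86 that can be a single rep movsb chewed through by dedicated microcode/hardware, while an early RISC runs a little load/store/increment/branch loop, maybe five simple instructions per word. Each of those is trivial to decode and pipeline, so if the RISC part clocks, say, twice as fast (made-up ratio), it can roughly break even despite executing far more instructions.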

>I've spent a long time collecting all kinds of "weird" workstations, servers, laptops, PDAs and just about anything else with all different kinds of chips in them, and in the end unless you actively try to lift up the curtain, it's all just a sticker on the case.

Have you worked with any VLIWs?

I have a couple Transmeta thin clients and a single Itanium system buried in my garage, but I guess neither of those are really "true" VLIW in practical terms and I haven't worked with either of them enough to say I have a worthwhile opinion.

I'd love to dig up something like an i860 Stardent box to mess around and experiment with some day, though.