Tejas and Jayhawk

Has anyone in the wild ever found/purchased a Tejas engineering sample CPU?

They existed as per anandtech.com/show/1217 but it'd be interesting to find out if any survived their cancellation.

Attached: Tejas.jpg (250x251, 13K)

Other urls found in this thread:

tweakers.net/reviews/740/chip-magicians-at-work-patching-at-45nm.html
anandtech.com/show/1217

underaged thread

I've got some in storage.

I'm not clear on whether it was Pentium 4 based or Core 2 based, but I'm pretty sure Core 2 was all 65nm or 45nm, whereas Tejas was 90nm like its Prescott predecessor. That said, if you still have a 775 board, for your sanity's sake get a 45nm Core 2.

It was supposed to be the next in the Netburst line. Got canned after running into the thermal wall; I think the rumor at the time was that a 4+ GHz Tejas would have been a 150+ watt CPU. No way OP would want one to actually use; if any exist, they're collectors' items.

That makes sense. I have an Intel workstation stored in a closet that ran Xeon 5030s (basically hyperthreaded P4s). Their performance was shit even though "muh 4 cores per CPU." I later got Core 2 based Xeon 5150s (both are socket 771), and it ran cooler, quieter, faster, etc.

Tejas is only unique because it was supposed to have a 40+ stage pipeline.

150W TDP for the 2.8GHz single-core ES part.

I have never heard of anyone having a post-Prescott-2M P4 that isn't a Smithfield, ever.
Even the entire Voodoo 5 6000 production run is less rare.
Intel is pretty good at keeping its secrets and failures within the company, and at destroying them.

In theory they don't actually exist, since the design got to its first tape-out for ES silicon but never actually made it into the fab.
>tweakers.net/reviews/740/chip-magicians-at-work-patching-at-45nm.html

Since they'd done all the work already, I wonder why they didn't start putting them out on 65nm later on, instead of shrinking Prescott.

Because at that point Pentium M (which was a souped up Pentium 3 with a fuckton of L2 cache) was the only architecture that they had that was any good. Itanium and Netburst both flopped in the face of Athlon 64.

Ultimately it was better for them to tack on another core (for Core Duo/Solo) and then fatten the pipes to 64-bit for Core 2 than to keep riding the flaming fireball that was Netburst and its derived architectures (Tejas and Jayhawk).

Still though, they did shrink down Prescott and call it Cedar Mill. If they were going through that effort, why not just throw Tejas out there?
A die shrink would probably have brought the TDP well enough under control to give the P4 a last hurrah. The first Core Solos and Duos got off to a pretty weak start, while the P4 was still holding its (housefire-starting) own against the Athlon 64.

Mind you, even Intel's old architecture was a shitshow compared to AMD's K7; T-birds ran circles around it all day long. Considering K8 was a direct evolution of that, it's very impressive how fast Intel was able to dust its old design off and make it compete again.

>anandtech.com/show/1217
Jesus Christ, what were they smoking at Intel at the time to come up with that Netburst shit?

Imagine if AMD hadn't been at its finest at the time; we'd probably have had to live with Netburst and Itanium.

Optical shrinks are/were easier to do than bringing in an entirely new architecture. Just doing a die shrink, you didn't have to deal with all the debug and re-spins needed to implement fixes.

Tejas and Jayhawk might only have really been doable on Intel's 45 and 32nm nodes. At 90 and 65nm? It would have been even worse. They also would have required a monstrously powerful front-end to get the most out of the stupidly deep pipeline, which in turn demands a tithe in die area and power draw.

>MAKE HIGHER HERTZ
then AMD came with
>MAKE MORE CORES
and shitposting was never the same again.

It sort of still holds true. Intel is still all about higher hertz and AMD is aiming at more cores. The main disadvantage of Ryzen is those hertz.

For your viewing pleasure.

Attached: Untitled-1.png (608x738, 229K)

Both "graphs" need a red line pointing straight up labelled "chance of house fire".

To expound upon this poster,
It was way easier to port a 200-300 million transistor design from 90nm to 65nm than it is to port today's 1+ billion transistor designs from, say, 14 to 7nm. Back then CPUs had significantly less complex cache designs and fewer execution ports, they didn't have every functional block power-gated with internal hardware controlling clock states, and engineers didn't have to apply extensive knowledge of quantum fuckery to make things work, like knowing which part of which circuit needs triple the wall thickness or double the density to keep electron tunneling from totally bricking the chip's synchronization.

One caveat: with a GPU like Vega they're mostly working with easily replicable units (the shaders, the render and geometry blocks), so it's do X once, copy-paste Y times, then fiddle with the video decoder and memory interface (memory interfaces are simpler than ALUs). This is why AMD can bring Vega over to 7nm after saying "it's mostly a waste of time to port old designs to new processes" - not only will 7nm Vega fill a GPGPU niche, but porting a GPU is nothing like porting a CPU.

They thought they were going to be able to scale the clocks upwards of 10GHz. Pentium 4 is a giant skidmark on the history of CPU development.

Fun fact:

In all 32-bit Pentium 4 CPUs the ALUs were double-clocked, so on the fastest parts they could execute certain instructions, at stock frequencies, at an effective rate above 7GHz. World-record LN2 OCs did in fact have (parts of) the P4 running much faster than 10GHz.
Intel even had designs for 64-bit P4s in which the ALUs could operate double-pumped, but it's not certain that this ever made it into consumer units.
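A rough back-of-the-envelope on what double-pumping means; this is just a sketch, and the clock figures in it are illustrative assumptions rather than numbers from the thread:

# Sketch (Python): effective ALU rate on a double-pumped Netburst core.
# The fast ALUs tick on both clock edges, so simple ops complete at twice the core clock.
def effective_alu_ghz(core_ghz, pump_factor=2):
    return core_ghz * pump_factor

print(effective_alu_ghz(3.8))  # an assumed 3.8GHz stock part -> 7.6GHz effective ALU rate
print(effective_alu_ghz(5.2))  # a hypothetical 5.2GHz LN2 overclock -> 10.4GHz effective ALU rate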

How did the P4 perform worse than the P3 at the same frequencies and with a simpler instruction set?

The P4 had double or more the pipeline depth compared to the P3: something like 20-31 stages depending on the model, versus 10-12. While a longer/deeper pipeline lets a CPU clock higher, it also scales up the penalty for a mispredicted branch or other pipeline flush by roughly the same ratio (2-2.8x depending on who you ask; the Willamette P4 had a full pipeline of 28 stages but was reported as having 20 "critical" stages) in time/energy/throughput.
So a top-of-the-line 1.3GHz Pentium 3 running 133MHz SDR was technically better than a 1.5GHz Willamette running 200MHz RDRAM unless the code being run was more or less perfectly written.
But when the P4 was updated, a 2GHz part using 266MHz DDR was indeed faster than that aging 1.3GHz P3. The P3 was limited, at that point, by its memory bus, outdated cache system, and outdated instructions.
And then we had the Athlon XP, and then the Athlon 64, come down like a sledgehammer and smash Intel to pieces.
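To put rough numbers on the mispredict penalty argument above, here is a sketch that assumes a mispredicted branch costs a full pipeline refill and reuses the stage counts and clocks mentioned in the thread (real penalties vary with where in the pipe the branch resolves):

# Sketch (Python): wall-clock time lost to one mispredicted branch,
# assuming the whole pipeline has to refill afterwards.
def mispredict_penalty_ns(pipeline_stages, clock_ghz):
    return pipeline_stages / clock_ghz  # cycles wasted / cycles per nanosecond

print(mispredict_penalty_ns(10, 1.3))  # ~7.7ns on a 10-stage 1.3GHz P3
print(mispredict_penalty_ns(20, 1.5))  # ~13.3ns on a 20-stage 1.5GHz Willamette
print(mispredict_penalty_ns(31, 3.8))  # ~8.2ns on a 31-stage 3.8GHz Prescott

The deeper pipe costs more cycles per mispredict, but higher clocks claw some of that back in absolute time; Willamette's problem was that its clock advantage over the P3 (about 1.15x) was nowhere near its pipeline-depth disadvantage (about 2x).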