How fast could a hand-made CPU actually be?
Every single CPU is hand made but factory produced
I mean one that was made with parts only from Home Depot
3 Hertz.
Soft processors on FPGAs typically run in the 100-400 MHz range. Bill Buzbee's Magic-1 TTL CPU runs at 4.09 MHz.
Wow, so many hertz.
Kek
Tell me how you'd make an FPGA by hand?
this guy is making a RISC-V processor from scratch
youtube.com
I don't know, maybe 100 km/h?
en.wikipedia.org
The computers which actually were assembled by hand - that is, the ones from the 1960s, before we had integrated-circuit CPUs in the '70s - ran in the single-digit MHz range. The designs were constantly improved, but smaller manufacturing is needed to get a decent clock rate. Signal propagation takes time, and physical size puts a limit on how fast the whole circuit can switch.
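To put rough numbers on the size argument, here's a back-of-envelope sketch in Python (the wire length and velocity factor are my own assumptions, not measurements of any real machine):

```python
# Hedged sketch: how physical size alone caps the clock rate
# of a cabinet-sized machine. All numbers are assumptions.

C = 3e8                 # speed of light, m/s
VELOCITY_FACTOR = 0.66  # signals in copper travel at roughly 2/3 c

def max_clock_hz(longest_path_m: float) -> float:
    """The clock period can't be shorter than the time a signal
    needs to cross the longest wire in the machine."""
    signal_speed = C * VELOCITY_FACTOR
    propagation_time = longest_path_m / signal_speed
    return 1.0 / propagation_time

# A 1960s machine with ~10 m of wiring in its worst-case path:
print(f"{max_clock_hz(10) / 1e6:.0f} MHz ceiling from wire length alone")
# ~20 MHz; stack a few dozen gate delays on top of that and
# single-digit MHz is about right.
```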
>every CPU is hand-made
wrong
VLSI tools are used to compile written specs/code into circuit models that can then be tested for correct electromagnetic properties (self-interference). Designers might have some small part in tweaking designs, but the amount of complexity is far beyond a human's ability to understand or edit by hand.
I want to know if a processor that is double the standard size, with the same-sized internal components, would run any faster or at least have a bigger cache.
It's kinda interesting that a CPU designer's job is writing code describing how to connect different modules. A layperson wouldn't be able to tell them apart from a software developer.
Interesting. I was recently looking up how modern CPUs with millions of transistors are designed, and I was amazed to see that they're effectively programmed.
But how can features like better multicore performance on Ryzen CPUs (or anything, really) be achieved with that much complexity?
longer circuits mean more metal that has to be brought up to voltage, and longer wait times for signals to get anywhere
same thing with larger caches: they're slower. that's why the CPU has registers, then one or two layers of cache, then RAM, then the hard drive.
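A hedged sketch of the wire-delay point, using an Elmore-style distributed-RC estimate (the per-mm resistance and capacitance below are illustrative guesses, not real process values):

```python
# Why longer wires hurt: in a distributed RC wire, both resistance
# and capacitance grow with length, so delay grows with length SQUARED.

R_PER_MM = 100.0    # ohms per mm (assumed)
C_PER_MM = 0.2e-12  # farads per mm (assumed)

def wire_delay_s(length_mm: float) -> float:
    """Elmore-style delay estimate for a distributed RC line."""
    r = R_PER_MM * length_mm
    c = C_PER_MM * length_mm
    return 0.38 * r * c   # classic distributed-RC factor

for mm in (1, 2, 4, 8):
    print(f"{mm} mm wire -> {wire_delay_s(mm) * 1e12:.1f} ps")
# Doubling the wire quadruples the delay. Bigger caches mean longer
# wordlines/bitlines, which is part of why they're slower.
```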
MyCPU.eu if TTL is allowed
I wasn't ready for this knowledge.
marketing scams
amdtoddlers btfo
there was an article not long ago about some enterprising lunatic who built a 6502 out of discrete transistors. If I remember right he said the non-IC nature of it all limited it to around 400 kHz.
lol the snake is sucking his own dick
intel fag here. I do verification on server parts. The entire front end development and validation effort feels somewhat like a software dev project. Tools are different, HDL is used in place of conventional languages, perl is a universal glue for us though python is creeping in.
Some days I want to jump from the roof.
My knowledge of VLSI is limited to creating/verifying/linking a handful of different adders, and I know nothing about Ryzen.
But I would guess that the easiest way to make a multicore setup faster is to make the cores as dumb and fast as possible. Still, MOAR CORES only speeds up the concurrent parts of any process, so the hard speed limit is gonna be all parts of the process that must run in a single thread.
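That serial bottleneck has a name: Amdahl's law. A minimal sketch in Python (the 95% parallel fraction below is just an assumed example):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallel fraction of the work and n is the core count.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 95% of the work parallelized, MOAR CORES flattens out fast:
for n in (2, 4, 8, 64, 1024):
    print(f"{n:>4} cores -> {amdahl_speedup(0.95, n):.1f}x")
# Caps at 1/0.05 = 20x no matter how many cores you throw at it.
```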
Wanna know something even more fucked up? Our professor told us that advances in Fast Matrix Multiplication are at the forefront of all modern endeavors. Faster asymptotic matrix multiplication means faster VLSI CPU design and verification(how do these billion wires interact with each of their 100s of neighbors at all stages of operation?), which means more and larger CPUs, which means more and better devices, which means more ability to multiply matrices. And also run businesses and governments and research and that other inconsequential shit.
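To make "faster asymptotic matrix multiplication" concrete, here's a toy Strassen sketch in Python; it's the classic O(n^2.807) scheme, not whatever the EDA vendors actually run, and the recursion threshold is arbitrary:

```python
# Strassen: 7 recursive multiplies instead of 8 per split,
# giving O(n^2.807) instead of O(n^3). Power-of-two sizes only.

import numpy as np

def strassen(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    n = a.shape[0]
    if n <= 64:                      # below this, plain multiply wins
        return a @ b
    h = n // 2
    a11, a12, a21, a22 = a[:h, :h], a[:h, h:], a[h:, :h], a[h:, h:]
    b11, b12, b21, b22 = b[:h, :h], b[:h, h:], b[h:, :h], b[h:, h:]
    m1 = strassen(a11 + a22, b11 + b22)
    m2 = strassen(a21 + a22, b11)
    m3 = strassen(a11, b12 - b22)
    m4 = strassen(a22, b21 - b11)
    m5 = strassen(a11 + a12, b22)
    m6 = strassen(a21 - a11, b11 + b12)
    m7 = strassen(a12 - a22, b21 + b22)
    top = np.hstack((m1 + m4 - m5 + m7, m3 + m5))
    bottom = np.hstack((m2 + m4, m1 - m2 + m3 + m6))
    return np.vstack((top, bottom))

a, b = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(strassen(a, b), a @ b)
```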
>tfw we have essentially created the second most powerful known device on the planet (before the human brain) by bootstrapping progressively more powerful processors going all the way back to one of the original processors some autist put together with graph paper and tweezers in his basement
I wasn't talking specifically about Ryzen or AMD, nor Intel for that matter
I just find it fascinating how, even with all the complexity and abstraction of designing a CPU, they can still create features and improve performance
With that said, software is worse than ever and shitty developers are not helping to improve anything
Today I was looking at BOINC projects and noticed that there are distributed computing programs aimed at aiding hardware and software design (it would help with compilers, too).
Have you heard of anything like that before?
dhep.ga
boinc.multi-pool.info
gerasim.boinc.ru
It's pretty wild, but I wonder the impact.
It seems like combined we'd have seen exponential growth by now at an insane rate. But I guess the hope is that it's around the corner.
I always wonder, if a human can think of concepts like algebra and nature and life and death, but a human toenail or nerve cell can merely exist and perhaps react; What sort of concepts could an entity such as Facebook, Apple, or Google contemplate? An entity composed of thousands of intelligent apes and millions of computers. Or The Internet, what sorts of things could The Internet perceive and conceive of?
Nobody ever seems to want to think about it.
Ol Musky touched on this during his JRE appearance. He pointed out that the internet is just a projection of your limbic system.
It's fucking crazy to think about. What would happen if you shut down large portions of the internet without warning? Would people revert to smaller peer groups/their own consciences and end up starting huge fucking wars or something?
Sometimes I wonder if game developers program a complex emotion system with stored values for emotions and shit that doesn't get exposed visually.
What if they're programmed to be afraid, what if that's real enough.
I think this stems from the report on the AI in F.E.A.R. IIRC each AI only knows what it knows, but shares its state when it is close enough to other AIs. Like they talked to each other and could coordinate based on the combined knowledge.
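A minimal sketch of that knowledge-sharing idea in Python (the names and comms radius are my own toy assumptions, not FEAR's actual code):

```python
# Each agent only knows what it has observed, but merges knowledge
# with squadmates when they're within range.

import math

SHARE_RANGE = 10.0  # assumed comms radius

class Agent:
    def __init__(self, name: str, x: float, y: float):
        self.name, self.x, self.y = name, x, y
        self.known_facts: set[str] = set()

    def observe(self, fact: str) -> None:
        self.known_facts.add(fact)

    def maybe_share(self, other: "Agent") -> None:
        if math.hypot(self.x - other.x, self.y - other.y) <= SHARE_RANGE:
            merged = self.known_facts | other.known_facts
            self.known_facts = other.known_facts = merged

a, b = Agent("alpha", 0, 0), Agent("bravo", 6, 8)   # exactly 10 units apart
a.observe("player_at_door")
a.maybe_share(b)
print(b.known_facts)  # {'player_at_door'} -- bravo now "knows" it too
```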
You can see how it's implemented sure, but still. I don't like to acknowledge things like that.
Great talk by Jonathan Blow; at the start he compares Photoshop from 1990 to the 2000s version.
youtube.com
TL;DW: Fucking godawful, with the same UI, controls, essentially the same thing.
Yea Neuralink and the like is fascinating. I'd definitely be willing to try it out some time after introduction.
Yea, pretty much. Internet collapse would be interesting. I'm mostly sick of hearing about every retard who stubs their toe everywhere around the god damn globe. I don't even give a fuck if someone 500 miles away is murdered.
FEAR AI is great, task-based planning on the individual and group level with audio cues _after_ plans have been decided. As for what degree that makes them sentient, I'm not sure. As a """game designer""" myself I'm awfully fond of abstractions and building up worthwhile foundations and lower layers for the player to use. IE my current game is a slime platformer where the player controls a slime core that controls a dynamic/distributed network of slime body nodes.
There's a fuckton of math under the hood that the player can control with just a few buttons and I think that's really interesting.
More on Jon Blow's photoshop complaints
>Single-threaded performance is about 24x as fast between 1990 and 2000
>Multi-core should be between 24x and 192x faster, or way fucking faster with GPU computation
>photoshop 2000 takes 6-7 seconds to load an image
>about a full second for some basic bitch menu
>IE my current game is a slime platformer where the player controls a slime core that controls a dynamic/distributed network of slime body nodes.
Nice.
>Jon Blow
That man has been an asset in my life in more ways than one. What a nice man. I strive to be more like him in multiple ways.
Looking forward to seeing where Jai goes.
you'd obviously program a custom CPU inside the FPGA, retard.
That's fucking peanuts compared to dealing with the capacitances, etc. of real-world-sized logic gates and how slow it'd be.
Ha a lot of people tell me about that game. Mercury Meltdown is another interesting one, and mine was inspired by Mushroom 11.
slimeresearch.com, you should try it out and let me know what you think.
>Parasitic Capacitance not even once
Yeah those things can go die in a fire.
>FPGA
>fly wires to perfboard
that thing probably does 10MHz
>menu
hmm, if I ever build my own CPU(s), I'll add one dedicated core and IO just for menu output and user input. The OS will have to abide by this design, and no matter how busy the HDD is or how hard the other cores are crunching numbers, at least the user input won't freeze up.
That's the basis of organization theory.
The idea is that humans have a finite amount of processing power (also called a "bounded rationality") and that organizations emerge to overcome this limitation.
Complex behaviors in nature emerge from the application of simple algorithms (e.g., ants); we're just two or three levels deep into the process (cells > human > society > society + computers/internet)
That's part of the android guidelines for developers. Don't do your shit on the main UI thread, always use another thread
>thread
well, that's certainly better and more optimized, but if all cores are busy it will still result in that shitty UI lag.
That's all down to the scheduler
not really. I've tried a bunch of different schedulers with different concepts and timings in my life; when it comes down to it, it's all shit and it lags. You might think you could work out a "good enough" balance, but that's like telling a gamer 25 or 33 frames per second is good enough because it's fluid.
A menu taking a full second to open is a clear symptom of terrible software design.
Fuck off with your reductio ad absurdum garbage.
OS... just compress or extract something big on the HDD your OS lives on, and be it Windows or Linux, it will lag like shit. That's what I mean to prevent. That currently every browser available will even block your mouse cursor from moving if your internet connection slows down is simply abysmal software design/OS design/hardware design. I stand by my initial thought of a dedicated CPU and IO so that I simply don't have to deal with the brainfarts of others.
Perl? That sounds slow. Would've thought they had all kinds of C- or CUDA-accelerated shit for optimizing and simulating the performance of the chip.
Haven't Java GUI programs been like that forever? Even before Android.
If there weren't new things on the horizon, I'd invest the time to learn it.
>those plebs yawning and on their phone during this talk
Literally all GUI programs have been like that since forever.
1) don't try to multithread GUI, because it's way too fucking hard, so there's always some kind of dedicated GUI thread
2) never block that thread
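A minimal sketch of both rules in Python/tkinter (the sleep is a stand-in for real work; the names are made up, the pattern is the point):

```python
# Rule 2 in practice: the worker thread does the slow job and hands
# the result back through a queue that the UI thread polls. Widgets
# are only ever touched from the UI thread.

import queue
import threading
import time
import tkinter as tk

results: queue.Queue[str] = queue.Queue()

def slow_job() -> None:
    time.sleep(3)           # stand-in for compressing a file etc.
    results.put("done")     # queues are thread-safe; widgets aren't

def poll() -> None:
    try:
        label.config(text=results.get_nowait())
    except queue.Empty:
        root.after(100, poll)   # check again in 100 ms; UI stays live

root = tk.Tk()
label = tk.Label(root, text="working...")
label.pack()
threading.Thread(target=slow_job, daemon=True).start()
poll()
root.mainloop()
```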
lol I agree
lmao
A CPU composed of 7400-series ICs with a gate delay of 10 ns cascading from one to another on breadboards could maybe make it to 1 MHz.
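Quick sanity check on that figure (the level count and the breadboard fudge factor are my assumptions, not measurements):

```python
GATE_DELAY_NS = 10         # typical LS-TTL 7400-series propagation delay
LOGIC_LEVELS = 40          # gates in the critical path, a guess
BREADBOARD_OVERHEAD = 2.5  # fudge factor for stray breadboard capacitance

period_ns = GATE_DELAY_NS * LOGIC_LEVELS * BREADBOARD_OVERHEAD
print(f"{1e3 / period_ns:.1f} MHz")  # -> 1.0 MHz, matching the claim
```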
Didn't you say gate delay wasn't the main limiting factor for maximum frequency?
>clueless
60 kHz
I've been experimenting with boids recently, and I've built a system where it's easy to assign different steering behaviors to different classes of boids. It's super cool watching them radically change their behaviors at the touch of a button.
Boids are fucking great. I'm the slime guy from above and one of my past """games""" was this little webtoy where boids just fly around.
They can also land on the ground, and the player can peck at the ground to grow a tree using invisible recursive(?) boids.
Last time I checked in with that guy it was like 400 kHz, actually.
Wrong. I was a layout designer for some time, and every IP was designed by hand, because Cadence is simply too stupid to use space efficiently and preserve good routing at the same time. Even if a block is marked as auto-generated, it was 100% tweaked by hand. There are projects where literally hundreds of people worked just on layout.
Most of the interference is avoided by the process technology itself. That's why development of a manufacturing technology can cost billions of $$$.
What kind of device and integration scale? Because hundreds of people don't sound like enough to fix billions of transistors in, say, a CPU.
that depends on the propulsion force of your arm and the air resistance/air profile of your cpu
with the one like in your pic i could easily do 40m/s
He is still right though: to get a CPU made, human input (via hands) is needed. There's no Zero-One company making them all by itself without human input.
You don't have to fix billions of transistors. You do know how a CPU works, right?
I can only say it was some Matrox device. You have to realize that the designers on a project are making IPs that can be reused. Every GPU, for example, has many identical parts/cores, and you work on just one; the other ones are just (simplified) ctrl+c ctrl+v. The other thing is matching. If you have to design a simple current mirror (that's just 2 transistors), you have to split it into 8-16 transistors and interleave them in 2 rows, because process variation can shift the characteristics in any direction and both halves have to match. Transistor count then jumps very quickly.
Hmm, I see.
what is boids?
is it a programming tool or what
It hertz my feelings.
WTF WHY IS RYZEN WITH THE HIGHEST IPC
How many nanometers is this
Wonder how fast you could clock a TTL CPU made with potato chips? (GHz TTL clones for those who don't know.)
potatosemi.com
Facebook is the toenail in your metaphor.
It is a complex structure that can only react to stimulation.
It cannot contemplate.
>0.8 ns rise time
>every trace longer than 0.83" should be treated as a transmission line
Jesus christ that sounds like a pain in the ass to work with
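For the curious, a hedged reconstruction of where a figure like 0.83" comes from, using the common 1/6-of-rise-time rule of thumb (the FR-4 propagation delay is an assumed typical value, not the poster's actual source):

```python
# Treat a trace as a transmission line once it's longer than ~1/6
# of the distance the edge travels during the rise time.

RISE_TIME_NS = 0.8             # from the datasheet quoted above
PROP_DELAY_NS_PER_INCH = 0.16  # typical for FR-4 (assumed)

critical_inches = RISE_TIME_NS / (6 * PROP_DELAY_NS_PER_INCH)
print(f"{critical_inches:.2f} in")  # -> 0.83 in
```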
What did you expect? At those switching speeds you're dealing with RF bullshit.
>he doesn't work on microwave systems daily
Kek
Name of a flocking algorithm concept.
en.wikipedia.org
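For the anon asking above, a minimal boids sketch in Python with the three classic rules from that article (the weights and ranges are arbitrary toy values):

```python
# Boids: each boid steers by cohesion (toward flock centre),
# alignment (match average heading), and separation (avoid crowding).

import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids: list[Boid]) -> None:
    for b in boids:
        others = [o for o in boids if o is not b]
        if not others:
            continue
        n = len(others)
        cx = sum(o.x for o in others) / n    # flock centre (cohesion)
        cy = sum(o.y for o in others) / n
        avx = sum(o.vx for o in others) / n  # average heading (alignment)
        avy = sum(o.vy for o in others) / n
        b.vx += 0.01 * (cx - b.x) + 0.05 * (avx - b.vx)
        b.vy += 0.01 * (cy - b.y) + 0.05 * (avy - b.vy)
        for o in others:                     # separation: push away when close
            if abs(o.x - b.x) < 5 and abs(o.y - b.y) < 5:
                b.vx -= 0.02 * (o.x - b.x)
                b.vy -= 0.02 * (o.y - b.y)
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
```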
I remember when I thought people who could make shit like this were smart. Now I know we're just glorified code monkeys whose job could be taken by a pajeet any day
FEAR's AI only works because of tight corners; the same AI in larger spaces tends to shit itself. The AI itself is fucking stupid, it's shocking how simple it is, but effective use along with sound bites makes most people think it's smarter than it is.
meh, for most big advancements you have 1 or a few people who trailblaze, then you have thousands of others who look at their work, amazed that they could even think of it, and possibly 1 of them will have something to add, which will then be scrutinized for decades.
you weren't designing a commodity cpu bruh
>Sometimes I wonder if game developers program a complex emotion system with stored values for emotions and shit that doesn't get exposed visually.
they do, it's a good way to create believable NPCs
All the features he talks about, Smalltalk had a million years ago.
The only interesting part is where you can compile it to a cross platform SDL thingy.
My knowledge of Smalltalk is limited. Can you offer any insight on it?
Specifically negative insight.
I know Go and the authors reference Smalltalk a lot, saying feature X is inspired by feature Y from Smalltalk, or acts similarly.
I like Go, and I like Jai in concept, but I'm curious: if this already exists, why is it not used? What is it really that stifles it?
Lack of widespread education on it?
The syntax?
The tools?
Even if it's unfair, I want to know why it isn't as prevalent as say C++.
Without even looking, my initial guess is the syntax, because it seems impossible to transition between paradigms without the equivalent of a programming lingua franca.
A language useless in every regard except easing someone from one paradigm to the other; those don't seem as popular among programming languages as they are among human languages.
Not to mention the inability to know if something is worth the effort, before putting in that effort to learn it.
At least with Japanese or something you know there's stuff to consume and people to speak to.
But how can you trust "you'll like using us more!" up front.
I'm real curious about these things.
>job
As fast as you build it. Also depends on what it's for. It could be super fast at adding two registers but completely incapable of running Crysis.
Go to Xilinx.com and buy one with my hands.
Probably something like 200,000, assuming 0.2 mm gate width.
Actually, parasitic inductance is a bigger problem