When you write code, how does the computer know what it means? Sure, it might match what you wrote against the grammar of the language the code is written in, but how does that translate into the ones and zeros and so on? Can someone walk me through the ENTIRE process? Everything online isn't too clear.
How the fuck do computers work?
You should go through nand2tetris (nand2tetris.org) or some other computer architecture course
It doesn't, that's the magic. Every letter, number, symbol you type is absolutely meaningless to a computer.
What if you write in machine code?
That's also meaningless. It gets decoded into micro-ops, which the silicon finally executes by altering the on and off states of its transistors. In the end the computer will only ever see 1s and 0s.
Code by Petzold is also a good intro and noob friendly.
Once you get to machine code, each symbol is converted into 1s and 0s, after which the specific patterns route the data through the computer
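To make that concrete, here's a rough C sketch with a completely made-up 16-bit instruction format (not any real ISA): the opcode, register number, and constant are just bit fields packed into one word, and that bit pattern is all the hardware ever gets to see.

#include <stdio.h>
#include <stdint.h>

/* made-up toy format, NOT a real ISA:
   bits 15-12 = opcode, bits 11-8 = destination register, bits 7-0 = immediate */
int main(void) {
    uint16_t opcode = 0x1;   /* pretend 0x1 means "load immediate" */
    uint16_t dest   = 0x3;   /* pretend register r3 */
    uint16_t imm    = 42;    /* the constant to load */

    uint16_t word = (uint16_t)((opcode << 12) | (dest << 8) | imm);

    /* dump the raw bit pattern the hardware would actually see */
    for (int bit = 15; bit >= 0; bit--)
        putchar(((word >> bit) & 1) ? '1' : '0');
    putchar('\n');   /* prints 0001001100101010 */
    return 0;
}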
I've built 4-bit microcomputers in Minecraft; once you understand gates and memory cells it's not too bad
A shitton of lightbulbs that switch between on (1) and off (0). That's essentially what a computer is.
I recommend reading "Manual of Logic Circuits" by Gerald A. Maley
0s and 1s at the lowest level are just low voltage and high voltage, which then get used in shit like gates, MUXes, etc., which are mostly built out of NAND gates made from transistors I think. So basically billions of tiny transistors make up the physical shit that can process your hentai video games with teraFLOPS of power. Idk, I took some digital logic design class (required for the EE major) and it is so goddamned boring, especially with a professor that looks like some old-ass mummy that escaped from his dusty tomb
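If you want to see the "everything reduces to NAND" idea without touching hardware, here's a tiny C sketch where 0/1 stand in for low/high voltage and every other gate is built only out of a NAND function. It's just an illustration, not how any real chip is laid out.

#include <stdio.h>

/* 0 and 1 stand in for low and high voltage; everything below is built from NAND */
static int nand_(int a, int b) { return !(a && b); }
static int not_(int a)         { return nand_(a, a); }
static int and_(int a, int b)  { return not_(nand_(a, b)); }
static int or_(int a, int b)   { return nand_(not_(a), not_(b)); }
static int xor_(int a, int b)  { return and_(or_(a, b), nand_(a, b)); }

int main(void) {
    /* print the truth tables to check the gates behave as expected */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  AND=%d OR=%d XOR=%d\n",
                   a, b, and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}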
logic gates and black magic
White magic*
I've never seen a nigger program, pajeets don't count.
>he's never used bash
uh oh
pcpartpicker.com
This is my "Budget" build, feel free voice your opinion on it.
I apologize for looking like a jackass, this is my first build
How long did it take you to render a single square of captcha?
Programming languages exist solely for the purposes of readability and convenience for humans, so that you don't have to sit there typing out 1s and 0s all day. When you write a C program you use words and numbers that you can understand, and you save it as a text file. When you want to turn that human readable code into something the computer can read, you do what's called compiling. You run that text file through a compiler program that converts it into a binary format you can execute. The binary is readable by the computer but not by you (at least not easily).
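If you want to actually watch those stages, assuming a Linux box with GCC installed, the commands in the comment below spit out the intermediate files for a trivial program, and objdump shows you the machine code disassembled back into something a human can read.

/* hello.c
 *   gcc -S hello.c        -> hello.s  (human-readable assembly)
 *   gcc -c hello.c        -> hello.o  (machine code in an object file)
 *   gcc hello.c -o hello  -> hello    (linked executable the OS can run)
 *   objdump -d hello      -> the binary disassembled back into mnemonics
 */
#include <stdio.h>

int main(void) {
    printf("hello\n");
    return 0;
}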
>well user, how is the compiler compiled?
It can compile itself. It's called bootstrapping. It's an ancient Chinese secret.
>so how does code take advantage of certain processor instructions while still maintaining backwards compatibility?
When dealing with a CISC processor like x86 you need to think of the architecture not as the sum of all the silicon but instead as the interfaces it exposes to software and the OS. Some things are simply processed internally, and neither the user programs nor the OS nor the compiler has a say in the matter. These things will work as long as the code you're running is x86 compatible. You can run the same software on a Core2 as you can on an i7 for the most part because of this isolation. Go on YouTube and look for the OpenVMS presentation on VAX vs Alpha or something like that. They explain it well.
The video I was talking about that explains x86 hardware in more depth is actually called "Introduction to x86 (OpenVMS Boot Camp 2016 session 10084)"
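As for using newer instructions while staying backwards compatible, one common approach (this sketch uses a GCC/Clang-specific builtin, so it assumes those compilers) is to check the CPU's feature bits at runtime and pick a code path; the same binary still runs on an older chip, it just takes the plain path there.

#include <stdio.h>

/* GCC/Clang builtins that read the CPUID feature bits at runtime */
int main(void) {
    __builtin_cpu_init();                     /* populate the feature flags */
    if (__builtin_cpu_supports("avx2"))
        printf("AVX2 available: dispatch to the vectorized routine\n");
    else
        printf("no AVX2: fall back to the plain scalar routine\n");
    return 0;
}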
Transistors, lots of 'em.
>>/pcbg/
Also ditch the cooler, the stock cooler is fine, and get dual-channel RAM at least
where do GCC and a Linux binary fit in this?
say for example, you write something in C
you compile it; this turns your C code into assembly, which is then assembled into the machine code your processor can understand
assembly is made up of an instruction set
instruction sets are just ID numbers (with arguments) assigned to mathematical operations that the ALU (arithmetic logic unit) in your processor can calculate
different types of processors have different instruction sets (x86, MIPS, etc...)
the ALU (and a bunch of other stuff I glossed over or didn't even mention) is made up of logic gates (AND, NAND, OR, XOR, NOR, etc...)
logic gates are made up of transistors, which have unique properties that allow them to be used as electric switches or to control gain (depending on voltage input); in the logical part of computers they are used as switches (for example, an inverter: 5v in -> 0v out, 0v in -> 5v out)
I jumped over the necessary storage cells when talking about the ALU as well (transistor circuits configured in such a way as to hold a charge state). These are called latches and flip-flops (RS latch, JK flip-flop; do a google search to learn more). This is where the ALU gets arguments for the operations it carries out (operations such as add, subtract, move, store, logical and, arithmetic shift, etc...). There's a rough C sketch of the whole idea below.
major in computer engineering, you'll learn everything from bottom to top (well, if "top" for you is a basic operating system/scheduler and nothing venturing into web technologies or more computer science type stuff)
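Here's that rough C sketch of the fetch/decode/execute idea, using a completely made-up one-byte-opcode machine (not any real instruction set). The reg[] array stands in for the register file (the latches/flip-flops), and the ADD case is the "ALU" doing its one job.

#include <stdio.h>
#include <stdint.h>

/* completely made-up toy machine: one byte of opcode, then its arguments */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_PRINT = 3 };

int main(void) {
    /* the "program": LOAD r0,2  LOAD r1,3  ADD r0,r1  PRINT r0  HALT */
    uint8_t program[] = { OP_LOAD, 0, 2,
                          OP_LOAD, 1, 3,
                          OP_ADD,  0, 1,
                          OP_PRINT, 0,
                          OP_HALT };
    uint8_t reg[4] = { 0 };   /* stand-in for the register file (latches/flip-flops) */
    size_t pc = 0;            /* program counter */

    for (;;) {
        uint8_t op = program[pc++];                 /* fetch the ID number */
        if (op == OP_HALT) break;
        if (op == OP_LOAD) {                        /* LOAD reg, constant */
            uint8_t r = program[pc++];
            reg[r] = program[pc++];
        } else if (op == OP_ADD) {                  /* the "ALU": dst = dst + src */
            uint8_t dst = program[pc++], src = program[pc++];
            reg[dst] = (uint8_t)(reg[dst] + reg[src]);
        } else if (op == OP_PRINT) {                /* PRINT reg */
            uint8_t r = program[pc++];
            printf("r%d = %d\n", r, reg[r]);
        }
    }
    return 0;   /* prints: r0 = 5 */
}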
Thank fucking god we're past the point where you had to write shit in ASM
>black magic
blacks cannot into mage class. not enough INT & WIS
Ever heard of bash, my man?
The 4th semester of a computer science degree covers all of this.