RISC-V

My school teaches the RISC-V instruction set instead of ARM for the intro architecture course... is less hardware complexity actually better or am i just being brainwashed? Redpill me

Attached: D05B36E8-D230-4E46-8035-C2B357C3243D.png (225x225, 4K)

Are there any big differences?

it's easier for them to teach but less useful for you to learn

Based and Redpilled

Mine did MIPS

doesn't really matter what you learn there, it's an intro course. you can learn the basics from RISC-V

same

Their main argument was hardware complexity. With fewer instructions you get less hardware in the control unit, ALU, and datapath

not really, RISC-V is becoming the fastest-adopted ISA in industry
less hardware complexity is generally good for timing, and RISC-V beats comparable ARM cores on several benchmarks

Berkeley switched from MIPS to RISC-V for their introductory computer architecture course (I was there). For all practical purposes, assembly-programming-wise, it was just syntactic differences for students, but they could start to get a feel for why RISC-V was better when they implemented the datapath.

Modularity: you can pick and choose which instructions you want to support and get different flavors of RISC-V.
Better-designed vector operations: you no longer have to deal with AVX2-style fixed-width SIMD and can have variable-length vectors and shapes (vectors, matrices, etc.). All of this is handled in a vector fashion by the hardware, of course.
Better use of opcode space: they had the benefit of hindsight, and since RISC-V is a RISC architecture, it doesn't need large numbers of bits just to identify what kind of instruction a bit string is (see the field-extraction sketch below).
With respect to MIPS, the 2 big things are that you no longer have a branch delay slot (delay slots bake one particular pipeline depth into the ISA, which makes superscalar and speculative implementations much uglier) and that you have more argument registers (but this is a calling convention thing rather than a hardware thing).
Of course the biggest thing is that it's not proprietary, so the community has the opportunity to build off of the ISA.
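To make the opcode-space point concrete: in the RV32 base encoding the major opcode is just the low 7 bits, and rd/rs1/rs2 sit at the same bit positions in every instruction format. Here's roughly what pulling the fields apart looks like in Chisel (module and port names are my own invention, but the bit positions are straight from the spec):

[code]
import chisel3._

// Toy decoder front-end: slice the fixed fields out of a 32-bit
// RV32 instruction. Because the register fields never move between
// formats, the register-file read path needs no extra muxing.
class FieldExtract extends Module {
  val io = IO(new Bundle {
    val inst   = Input(UInt(32.W))
    val opcode = Output(UInt(7.W))
    val rd     = Output(UInt(5.W))
    val funct3 = Output(UInt(3.W))
    val rs1    = Output(UInt(5.W))
    val rs2    = Output(UInt(5.W))
    val funct7 = Output(UInt(7.W))
  })
  io.opcode := io.inst(6, 0)
  io.rd     := io.inst(11, 7)
  io.funct3 := io.inst(14, 12)
  io.rs1    := io.inst(19, 15)
  io.rs2    := io.inst(24, 20)
  io.funct7 := io.inst(31, 25)
}
[/code]

That's basically the whole front end of the decode stage; compare that to figuring out what an x86 byte string even is.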

but what's the difference that makes it less complex? what's missing?

I've found rfwireless-world.com/Tutorials/ARM-tutorial.html
so judging from this:
- RISC-V has no inline shift on the second operand (see the shifter sketch at the end of this post)
- no Thumb instruction sets (modern ARM basically has 3 different instruction sets: ARM, Thumb/Thumb-2, and AArch64)
- no condition flags (conditional execution combined with an inline shift is my favorite ARM assembly feature desu)
- only one endianness supported
- overall simpler
and wikipedia suggests that ARM is not royalty-free

is that correct? is that all?
I guess it's interesting to look at ARM instructions to see what you can do with 32 bits, but I'd guess the course focuses on basic CPU architecture and developing some intuition, not on making you a pro ARM programmer. So it doesn't matter that much.
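Re: the inline shift, that feature isn't free: ARM hangs a barrel shifter off the second operand of every data-processing instruction. A rough Chisel sketch of what RISC-V gets to leave out (the names and the 2-bit type encoding are made up for illustration):

[code]
import chisel3._

// ARM-style operand-B barrel shifter: LSL/LSR/ASR/ROR applied before
// the value ever reaches the ALU. A RISC-V datapath omits this block;
// shifts are just ordinary ALU operations.
class OperandShifter extends Module {
  val io = IO(new Bundle {
    val in    = Input(UInt(32.W))
    val shamt = Input(UInt(5.W))
    val typ   = Input(UInt(2.W)) // 0=LSL, 1=LSR, 2=ASR, 3=ROR
    val out   = Output(UInt(32.W))
  })
  val lsl = (io.in << io.shamt)(31, 0)
  val lsr = io.in >> io.shamt
  val asr = (io.in.asSInt >> io.shamt).asUInt
  val ror = (io.in >> io.shamt) | (io.in << (32.U - io.shamt))(31, 0)
  io.out := VecInit(lsl, lsr, asr, ror)(io.typ)
}
[/code]

Every data-processing instruction pays for that shifter in area and operand-setup delay whether it shifts or not. Nice to program with, not free to build.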

Lol ironically i just finished 61C here and that's why i asked whether RISC-V is actually better. We never learned other ISAs so we don't really have much to compare to, and I didn't know whether RISC-V is actually used in industry.

>learning assembly
>computer science
fucking kek just do webdev like everyone else
>easy money
>easy code
>attractive as fuck in the workforce
nobody cares about you smelly neckbeards in the sub-basement floor

Web dev is soulless

Tbh i don't have that much (or any for that matter) knowledge of how the ARM datapath works or even what instructions are in it.

But what we were taught was that RISC-V has fewer instructions. From a quick glance at the ARM instruction set, it has a lot more instructions, mostly arithmetic. That means you need to pack more circuitry into the ALU to support those operations. But I'm not sure exactly how much this helps. It just makes the hardware more complex.

Not sure about this, but it could also make the datapath run slightly slower? The result mux would take slightly longer (like a couple picoseconds per cycle) to select among more operations... don't quote me on that last part.
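fwiw you can see that effect directly if you just write the ALU down. A toy Chisel sketch (the op encodings are arbitrary, not real ISA bits): every operation you support is another arm on the result mux, so dropping instructions literally deletes logic.

[code]
import chisel3._
import chisel3.util._

// Toy ALU: each supported operation adds an arm to the result mux.
// More instructions => more arms => more area and a longer select path.
class ToyAlu extends Module {
  val io = IO(new Bundle {
    val op  = Input(UInt(3.W)) // arbitrary local encoding
    val a   = Input(UInt(32.W))
    val b   = Input(UInt(32.W))
    val out = Output(UInt(32.W))
  })
  io.out := 0.U // default
  switch(io.op) {
    is(0.U) { io.out := io.a + io.b } // ADD
    is(1.U) { io.out := io.a - io.b } // SUB
    is(2.U) { io.out := io.a & io.b } // AND
    is(3.U) { io.out := io.a | io.b } // OR
    is(4.U) { io.out := io.a ^ io.b } // XOR
  }
}
[/code]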

>pajeets
>attractive
>not smelly
the fuck am i reading?

Do you go to berkeley, user?

I'd rather do something meaningful as my career, thanks

Yep

There's extremely little difference between them from a teaching PoV, but if I had to choose I'd use RISC-V. I've implemented both MIPS and RISC-V (for FPGA) and the difference was negligible

Haha do you use that atrocity SODOR? It's so fucking awful, true pajeet tier.

What the fuck even is RISC-V anyway? Just a fluffy theory architecture, or are there actual chips being made that support it?

False, RISC-V is useful and no easier or harder to learn

it's typically used for education, but commercial RISC-V cores on silicon are available. I dunno if anyone buys them, though, or what for.

This might change a lot in the future

Yep, Patterson from Berkeley invented it, but i wanted to know if it's actually used / good

Sodor is an implementation written in Chisel. Chisel is great, but the codebase for Sodor is beyond abysmal.

Oh okay didnt know that

It's actually a good thing for us though; we implement a 5-stage RISC-V in our course, and the Sodor code is so bad that it's impossible to plagiarize.

Real talk
ARM comes from the huge company ARM Holdings. ARM has a lot of licenses, restrictions, testing tools, and other IP required to make an ARM CPU work. RISC-V is free and allows companies to build their own designs.

Our class doesn't spend as much time on assembly as it does on the datapath / how the architecture is implemented. We had to use Logisim to design a 5-stage pipelined CPU. Actually learned a decent amount even tho i'm not a huge fan of working with circuits
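If you ever redo that in Chisel instead of Logisim, a single pipeline register of that 5-stage machine comes out to something like this (hypothetical names; it resets to a NOP so a bubble in the pipe is harmless):

[code]
import chisel3._

// IF/ID pipeline register: latch the PC and fetched instruction each
// cycle unless the hazard unit asserts stall.
class IfIdReg extends Module {
  val io = IO(new Bundle {
    val pcIn    = Input(UInt(32.W))
    val instIn  = Input(UInt(32.W))
    val stall   = Input(Bool())
    val pcOut   = Output(UInt(32.W))
    val instOut = Output(UInt(32.W))
  })
  val pc   = RegInit(0.U(32.W))
  val inst = RegInit("h00000013".U(32.W)) // NOP: addi x0, x0, 0
  when(!io.stall) {
    pc   := io.pcIn
    inst := io.instIn
  }
  io.pcOut   := pc
  io.instOut := inst
}
[/code]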

that is pretty much one and the same. The instruction set dictates what the pipeline must look like

How did you like the instructor this semester? If you just finished it, they had an unnatural ordering for the class material this semester, but maybe I'm in the minority thinking that. I know for sure there are at least a couple of others who agree with me though.

I loved Garcia but i 100% agree that they didn’t really have a great order of teaching things.

Are you also in the class user?

Not this semester, but I've taken it and let's say I know a lot about the course ;^).
I think if you want a true understanding of the computer architecture side, you should take 152 and 162. For a working knowledge of assembly, you should only need 162, but 161 will force you to learn a bit of x86 for exploits. If you're interested in the datapath stuff, 151 lets you build your own.

And regarding the ordering, I for sure liked the older one. A comparative component in the course, where you look at different ISAs, would have been nice. I'm just happy you weren't driven away from systems by 61c. It seems like a weeder course for the systems upper divs lol.

Planning to take 161 next semester, then 162 after that. Does it matter what order i take those in? Are you a GSI? Or one of the professors :0

I can definitely see how it is a weeder but that stuff seems super important. I just wasn’t good at the cache and virtual memory stuff, though it is really interesting

Garbage

MIPS is the intellectual predecessor, from the Hennessy half of P&H; Patterson, the other half, is on the board at RISC-V.

This is just painful reading.

Also, ARM is in a death spiral after its toxic embrace of Chinese interests. A 99 percent reduction in profit means people are also looking at RISC-V as an exit before ARM implodes.

It is true ARM has a lot more infrastructure behind it; that is now probably its main selling point.

It really depends on your schedule. 161 isn't too bad, 162 projects are time sinks. But every undergrad interested in cs at berkeley should take 162 imo.

Thx for the advice

I-I was in your class this semester user. I think the ordering this semester was better than previous ones.

Attached: keyboardtypingcomputer.gif (650x366, 3.72M)

no problem. most importantly, have fun in whatever area you decide to focus on. don't jump on the ai/ml bandwagon just because everyone else is.

My gripes were mostly with how the first half was arranged. The datapath stuff was fine, but small things, like when we did floating point, just seemed kinda out of place. Generally tho i think the ordering was still fine, but i would have liked the ordering from spring 2018

Doing caches/floating point before datapath/pipelining seems like a smoother transition from higher level to lower level. Iirc in sp18 they jumped right into the datapath after doing C and then did caches afterwards, which seems like a weird order.

I think floating point should have been done in the first few lectures though, right after introducing two's complement.

Everyone's switching to teaching RISC-V: you're learning the future. It's a beautifully clean design and it's better than MIPS and ARM.

There are already shipping microcontrollers in hardware, and both Nvidia and Western Digital are going to use it for certain embedded cores soon.

It is capable of going really fast, but so far nobody's tried making a general-purpose high-power CPU on a recent process, as some of the extensions you'd want for that are still being discussed. Spectre has also sparked some caution about speculative execution: before we design more CPUs that speculatively execute code, it would be a good idea to have ways to actually Spectre-proof things from the ground up. It might even be possible to prove noninterference via things like cache partitioning; the trick is going to be minimising the performance impact.

Didn't they recently tape out the Berkeley out-of-order processor? How does it compare to the ARM cores that the big boys are putting out?

Attached: question.jpg (454x584, 70K)

bimp

>it's a beautifully clean design compared to
The 68000 was a beautifully clean design compared to x86, yet here we are today. It doesn't matter what's actually better, unfortunately. The only thing that matters is who has more money, and who spends more money in the right places. ARM will be around for a while because of that. They have a tight grip on the embedded market, and embedded moves slowly; we still use the 8051, for fuck's sake.

Oh, it still gets worse than ARM/ARM64/Thumb. The mid-grade Cortex-M uCs are technically Thumb-only, but Thumb-2 gives them 32-bit encodings covering much of what the full ARM ISA can do.
No condition flags is a huge detractor, though.

wow, your school is way fucking ahead of the times, congrats
and yes, the less hardware complexity the better.

not really desu, Patterson and Hennessy switched over to RISC-V in their comp org book years ago

Motorola should have invested in the 68000 instead of creating the 88000 and PowerPC

... a place is actually teaching you that simpler is almost always better?
shit put some of that in software courses, especially webdev
they really fucking need it

software and hardware design are really different. software design allows for mistakes, so you get some "artistes" who overcomplicate things because there's no real risk in fucking up the first time. With hardware you have to get it as right as possible the first time, or you burn millions of dollars.

it's really not different
when you push out a piece of shit software product, people will still use it
when people use it their own shit becomes dependent on it
if enough things become dependent on it, it will become immortal and everybody will have to deal with it until it become completely obsolete

big enough mistakes in software have just as large of an effect as hardware fuckups
(see: Flash)
(see: pretty much any post-2014 javascript library shit-thing)

There's a hint in the name user

>Reduced instruction set computer


It is easier to learn, so yes, for an intro course it's the better choice.

>is less hardware complexity actually better or am i just being brainwashed?
Seems RISC-y

Attached: Carlos.png (350x350, 138K)

RISC is usually more efficient for most use cases, plus the base instruction set is much simpler and thus easier to learn.

RISC-V in particular is good because it's an entirely open standard and is super extensible.

>not really, RISC-V is becoming the fastest-adopted ISA in industry
Where? I haven't heard of any RISC-V silicon other than that overpriced SiFive development board.