/VHDL/ Central

Models, Simulation, Synthesis. What are some good resources for learning and using VHDL?

Attached: VHDL.png (200x128, 19K)

Other urls found in this thread:

github.com/ghdl/ghdl
gtkwave.sourceforge.net/
vhdlguru.blogspot.com/2010/04/how-to-implement-state-machines-in-vhdl.html
en.wikipedia.org/wiki/Logic_synthesis

There are plenty of tutorial websites out there but they are mostly trash. I'd recommend books on hardware design using VHDL instead.

Peter Ashenden's book "The Designer's Guide to VHDL" is often recommended, but I prefer Douglas L. Perry's "VHDL: Programming by Example".

GHDL is an interesting free VHDL compiler to create executable simulations.
github.com/ghdl/ghdl

GTKWave can then be used to view the results.
gtkwave.sourceforge.net/

Attached: GTKwave.png (700x422, 15K)
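A minimal session with that toolchain might look like this (file and entity names here are hypothetical; the GHDL steps are analyze, elaborate, run):

```shell
# Assumes a design in counter.vhdl and a testbench entity counter_tb
# in counter_tb.vhdl, with ghdl and gtkwave on PATH.
ghdl -a counter.vhdl counter_tb.vhdl   # analyze (compile) the units
ghdl -e counter_tb                     # elaborate the testbench
ghdl -r counter_tb --wave=counter.ghw  # run, dumping a waveform file
gtkwave counter.ghw                    # inspect the signals
```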

>VHDL
>not Verilog
kys

If you want to participate in the conversation, learn something first.

Attached: come-on-youre-vaspni.jpg (600x460, 39K)

how do you properly implement a state machine in VHDL? The one I kinda banged together for my hardware design class didn't quite feel right.

An example of a direct way for beginners:
vhdlguru.blogspot.com/2010/04/how-to-implement-state-machines-in-vhdl.html

btw - just use google for questions like that.
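The usual textbook pattern is the two-process style: one clocked process holding nothing but the state register, one combinational process computing the next state. A minimal sketch (the entity and states are made up for illustration):

```vhdl
-- Hypothetical two-state Moore FSM in the two-process style.
library ieee;
use ieee.std_logic_1164.all;

entity simple_fsm is
  port (clk, rst, go : in  std_logic;
        red, green   : out std_logic);
end entity;

architecture rtl of simple_fsm is
  type state_t is (s_red, s_green);
  signal state, next_state : state_t;
begin
  -- Process 1: the state register, the only clocked logic.
  process (clk, rst) begin
    if rst = '1' then
      state <= s_red;
    elsif rising_edge(clk) then
      state <= next_state;
    end if;
  end process;

  -- Process 2: purely combinational next-state logic.
  process (state, go) begin
    next_state <= state;  -- default assignment avoids inferred latches
    case state is
      when s_red   => if go = '1' then next_state <= s_green; end if;
      when s_green => if go = '0' then next_state <= s_red;   end if;
    end case;
  end process;

  -- Moore outputs depend on the state alone.
  red   <= '1' when state = s_red   else '0';
  green <= '1' when state = s_green else '0';
end architecture;
```

The default assignment at the top of the combinational process is the detail most beginners miss; without it the synthesizer infers latches.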

Anyone wanna make a Tomasulo MIPS processor with me?

Learning hardware description languages seems pointless to me. Companies want Software, since programmable processors are much cheaper, usually faster and new features can be added and bugs fixed without having to throw the entire hardware away.
What kinds of jobs do people do with these languages? Is there even a demand?
I'm just curious, not trying to offend.

start reading the RISC-V specification.
It will be more useful than anything MIPS.

ASICs are expensive to manufacture but FPGAs are reprogrammable.

Yes, but the goal should be to make something that you can mass-produce as an ASIC.
If you distribute your hardware on FPGAs, that's a sure sign that you should have written software for a DSP instead.

Not necessarily.
The production volume might not warrant the cost of an ASIC and/or the system may need to be changed after deployment.
FPGAs are very flexible in the type of logic system that can be implemented. Microprocessors and DSPs might not be suitable.

Processors are horribly inefficient at tasks like signal processing. Imagine an HDMI-to-VGA converter: doing such conversion in software would be pants-on-head retarded. Basically anything that you can implement as a pipeline without any branching (think reactive programming) can be made practically real-time (albeit single-purpose) on an FPGA. The single-purpose aspect can also be a security benefit: you don't want scrubs installing NetBSD (or worse, hackers installing rootkits) on your new toaster line for 2018.
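The "pipeline without branching" idea concretely, as a sketch: a multiply-accumulate where every rising clock edge accepts a new sample and emits a result, with no control flow at all (the entity and widths are illustrative, not from any real design):

```vhdl
-- Hypothetical 3-stage pipelined multiply-accumulate: y = a*b + c.
-- One new sample in and one result out per clock; pure dataflow.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity mac_pipe is
  port (clk     : in  std_logic;
        a, b, c : in  signed(15 downto 0);
        y       : out signed(31 downto 0));
end entity;

architecture rtl of mac_pipe is
  signal a_r, b_r  : signed(15 downto 0);
  signal c_r, c_rr : signed(31 downto 0);
  signal prod      : signed(31 downto 0);
begin
  process (clk) begin
    if rising_edge(clk) then
      -- stage 1: register the operands
      a_r  <= a;
      b_r  <= b;
      c_r  <= resize(c, 32);
      -- stage 2: multiply (c is delayed to stay aligned)
      prod <= a_r * b_r;
      c_rr <= c_r;
      -- stage 3: accumulate
      y    <= prod + c_rr;
    end if;
  end process;
end architecture;
```

The registers between stages are what let the clock run fast: each stage only has to finish its own small piece of work per cycle.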

That's a pretty good point, thank you.
How bad is the difference between having no branching and having a few tiny ones? Does that make it perform much worse?

Yes, because Indians have the mentality that as long as it comes close to the minimum requirements, there's no need for improvement. Which is why their country is literal shit.

The FPGA is programmable and the digital logic system that is implemented in the FPGA can also be programmable (two different levels of programming). As an obvious example, a typical microprocessor can be implemented in an FPGA. More interestingly, programmable logic systems which are not of the von Neumann architecture, Turing machine type can be implemented. This is a very important point.

You can, of course, have if and case statements in VHDL/Verilog, but the difference is that the "code" is not evaluated line by line; it has to be translated to a physical layout, a pipeline, a block with inputs and outputs if you will. Ergo, the penalty for lots of branches is the growing "cost" of said design: you need a bigger and bigger FPGA, or more silicon in an ASIC.
Stupidly convoluted designs can also introduce timing issues, but with modern FPGAs and mundane use cases, you will find that the "speed" of an FPGA is practically infinite.
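To make that concrete: in hardware a branch is not a jump but a multiplexer, so every alternative is built and computed in parallel, and the condition merely selects one result. A tiny illustrative sketch (names are made up):

```vhdl
-- Hypothetical example: an if/else on 'sel' synthesizes to an adder,
-- a subtractor, and a 2:1 mux. Both "branches" exist as silicon
-- every cycle, which is why each extra branch costs area, not time.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity branch_mux is
  port (sel  : in  std_logic;
        a, b : in  unsigned(7 downto 0);
        y    : out unsigned(7 downto 0));
end entity;

architecture rtl of branch_mux is
begin
  y <= a + b when sel = '1' else a - b;
end architecture;
```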

Why would you use VHDL over Verilog? Are you from the EU or something? Verilog is the industry standard in the US FYI.

So, just like modern software developers are at the mercy of their compiler, hardware developers also rely on their compilers to not generate expensive garbage?
Are there optimization flags, like gcc has, where you can optimize for lowest latency, lowest number of blocks, cheapest design etc.?
I wonder how often you have to revise the circuits verilog/vhdl spit out. I mean, I can hardly believe that entire CPUs are generated, in their exact physical layout and everything, just from a software that reads code that just describes logic and nothing else.

Not OP, but VHDL is the standard in France as far as I know. I'm in an engineering school that has an embedded system major where VHDL and FPGA are the main things you learn.
I'm not in this major tho, so I can't really tell you more. I know that some guys created a startup and made the fastest trading system on the planet with a VHDL / FPGA system 2 or 3 years back tho.

Not him but yes of course there are flags, and there are flags for each part of the toolchain. Compiling the code into logic is just the first step. To actually get a functioning chip you need timing and area constraints for the optimizer to work with, and in the case of ASICs you need to provide a standard cell library to map your logic onto. It's a bit more than just logic. Generally you have a team working on the first part (design) and another team working on the second part (backend implementation).

en.wikipedia.org/wiki/Logic_synthesis

Thank you

I'm from the EU. VHDL is taught first here because its syntax is dissimilar from programming languages so it forces people to learn anew. Verilog looks more like C and apparently students used to write it like it *actually was* C and produced shitty code. So we switched to VHDL in order to break that mental link. Also it's not like you can't learn the Verilog syntax after you get the basics of HDL.

In general whenever there is a requirement for some digital functionality in any chip, there will be a need for people who understand hardware description languages. Even if that chip includes a microprocessor, there still needs to be a guy who needs to define the behavior of that processor.

For example, any modern ADC or DAC has a significant digital portion which literally cannot be implemented with a processor-software combination because of the frequency of operation. A 2.5 GHz DAC needs an interpolation filter operating at 2.5 GHz, which is quite hard for a generic processor but viable with dedicated hardware. Someone needs to write that. Even more important is the need for analog mixed-signal design. Analog circuits need to communicate with and get configuration from digital circuits, and someone needs to define the behavior of that link.

What's even more important is even if there's a miraculous processor verilog code that can handle everything written in software, there still needs to be a guy who maintains that code.

Eventually automated hardware/software co-optimization will come around, but not yet, and even then there are some merits to learning hardware description languages.

I think that we tend to check the generated RTL diagrams (to make sure that the compiler understood us) a lot more than software engineers check the generated machine code, hue hue.

>software engineers
>understand machine code

s/machine code/assembly
s/machine code/bytecode
Whatever.

The fuck kinda uni doesn't force software engineers to get a fairly robust understanding of hardware and low level execution?

>I wonder how often you have to revise the circuits verilog/vhdl spit out. I mean, I can hardly believe that entire CPUs are generated, in their exact physical layout and everything, just from a software that reads code that just describes logic and nothing else.
Pretty often.

Usually there will be digital engineers who just look at code and run simulations and then there will be backend engineers who do the actual layout generation. Once the circuit is synthesized and place&route is complete, backend engineers really hate going over it again. So they make digital engineers go through the synthesized code and find gates that they can use or find minimally invasive changes. And changes after P&R do happen although everyone hates it.

Also, the nice thing about HDLs is that if you do something tremendously wrong, it generates real garbage not pseudo-garbage. If you cannot follow the implementation of the functionality you described, something must've gone badly already.

>risc-v
that's some shit, a friend and I started making a simulation of RV32I in Logisim before doing it in VHDL

Well, it's not really that, but have you seen the kind of shit modern compilers spit out?
Who even knows instructions like vcvtsi2sd, vfmadd132sd, and whatever the fuck the compiler generates when it's allowed to use SSE, AVX, and co?

You don't know what you are talking about.

VHDL is an offshoot of Ada, borrowing its syntax and strong typing.

>Verilog looks more like C and apparently students used to write it like it *actually was* C and produced shitty code. So we switched to VHDL in order to break that mental link. Also it's not like you can't learn the Verilog syntax after you get the basics of HDL.
That's when you make fun of them until they stop; you wouldn't change HDLs for that reason. VHDL is just very annoying at some things. And once you understand the difference between what's synthesizable and what is not, writing Verilog tests becomes even enjoyable at times, because you can write a synthesizable circuit and imaginary (non-synthesizable) test code around it.

Shorthand notations condense more information into a smaller visual area, which can make reasoning easier for someone trying to hold the whole design in their head and work with it. With more verbose notations, what's being described is more spread out and can feel less accessible to that person. However, when designing large, critical systems where mistakes are very expensive, the more verbose notations tend to be preferred.

So you are saying a person can comprehend the big picture more easily if the information is condensed into a smaller visual area?

First for SystemVerilog

Yes. I think this is what makes the shorthand notations more appealing to the people who need to sit and stare at a segment of text while thinking. It is cognitively more expensive and generally more time consuming if it is necessary to scroll through more verbose text and hop between files and navigate through interfaces while reasoning about the system. But this sit and stare at it phase of development is a small part of a long-lived hardware/software system.

based

Yeah, we hear you. What do you like about it?

I guess he is either searching Wikipedia for something to say or he scampered off completely. Based indeed.

It's Verilog with additional features like 2D arrays; it's great.

tard spanking isn't nice ;)

Considering UVM is all System Verilog and is becoming the de facto standard for formal verification we shall see VHDL fall out of use sometime in our lifetimes.

I doubt it unless there is some significant new form of logic synthesis from a much higher level description of requirements and constraints.

Most Xilinx IP is now SystemVerilog with assertions.

If Xilinx maintains it and it's reliable, that's cool with me.

I don't know if Xilinx IP would be included in any high reliability and high security applications.

TS/SCI DoD and C4ISRS is all VHDL.

...

You'd be surprised. Not flight critical shit as it's not do254 or whatever. But people could certainly die if it went wrong.

We are certainly living in interesting times. There are still some of us though who prefer to build systems that are trustworthy.

Do you think because you wrote something 3 times it works better?

Real, formal verification comes from thousands and thousands of test vectors that cover the majority of states and scenarios. These are most easily constructed with constrained-random SystemVerilog.

Formal verification and verification through testing are different methods.

What? You don't know what you are talking about.

You're right. Verification through testing is a crapshoot done by fly-by-night orgs.

Formal verification and verification through testing complement each other, and many systems must be fully verified, partly by one method and the rest by the other.

Are you on drugs?

Proof that Jow Forums will disagree with anything just to be contrary.

Attached: elon misk.png (575x413, 171K)