HDL languages and Synthesis discussion

Status
Not open for further replies.

Cogman

Lifer
Sep 19, 2000
10,278
126
106
I've been using Design Compiler a lot, and I think it's pretty amazing. I'm pretty blown away by it, actually. Even if I have to hand-hold it occasionally and it sometimes does some incredibly dumb things, overall it's really impressive to me.

I disagree a bit with this too. System Verilog is supported by tools and it's pretty impressive... and it's new(ish). The new UPF format for power is pretty cool and also has pretty broad industry support.

I'll be honest, I've not read up too much on System Verilog and I'm not currently working in the industry. Most of my knowledge on the subject is second hand.

That being said, what I've seen of System Verilog is that, while nicer than straight Verilog, it is still quite rough compared to most popular programming languages. It is still very low level, with very few higher-level constructs available. On top of that, nobody really has a full implementation of it yet.

For example, could you imagine how nice it would be if, at the hardware level, you could use things like closures? Or interfaces? Or even just templates? These are constructs that appear almost everywhere now in the software world.

And then let's talk about what could be done (but hasn't been). How about splitting the synthesis process into a frontend and a backend with a common HDL bytecode in between? (Think LLVM.) Then you could create two different optimization layers that function independently. But more excitingly, you could create new HDLs much more easily.

In the software world (where I live) we enjoy an almost innumerable collection of languages and tools. The results have been amazing: languages have progressed tremendously, with new and exciting innovations happening daily.

In the hardware world, you have what, 2 or 3 languages (OK, maybe more like 5 or 6 if you count some of the lesser-used ones), provided by 3 or 4 companies. No open source community to really speak of, and tools provided by 2 or 3 different companies.

I think the hardware world can be better. Much better. The problem is that it is such a specialized field that the demand for improvement hasn't happened. Perhaps, though, the rise of FPGAs will change that.


Moved from CPUs to Highly Technical because as a mod I try not to be a hypocrite and divert threads by discussing off-topic things, and Cogman's post was too interesting not to discuss further.

Moderator PM
 
Last edited by a moderator:

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
SV has a lot of high-level functionality; it just doesn't synthesize. I use it for test-benches and validation. Design Compiler already has a terrible time taking low-level HDL and turning it into a design that meets the requirements.

The Synopsys synthesis toolset runs their optimization on some sort of proprietary intermediate database which I believe is structured for their algorithms. As far as I know, end users are allowed no access to that data structure.

As you said, any new development of more flexible HDLs will come from the low(er)-speed world of FPGAs.
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
I don't have much experience with Mentor or Cadence, but I thought that Design Compiler had pretty good support for System Verilog. I spent today pushing System Verilog files through DC with pretty decent results. It seems to work... what's missing in terms of support?

So, while I'm not bad at hardware and circuitry, the entire software industry has moved a long way from the FORTRAN, Pascal and C that I was taught in college (I'm old). So, could you please explain to me (like I'm 5 years old) what closures and interfaces are? I googled them and I'm still mystified. And while you are at it, what's LLVM?

As far as splitting synthesis, maybe I'm missing something but that's sort of what happens. You have synthesis which takes RTL and turns it into a boolean logic netlist and then you have place 'n route which takes that and makes transistors and wires out of it. The "bytecode" here would be the netlist. Is this what you are referring to? Or something else?

I have seen other HDL languages too beyond just Verilog, VHDL and System Verilog... but I will concede your point that there aren't many, and they are all a bit limited ...except I still think System Verilog is pretty powerful even if you disagree with me... I'm no programmer though, so maybe my vision is blinded by staring at too many transistors and not reading enough Java manuals.

Definitely agreed hardware could be much (much!) better. While I think state-of-the-art synthesis tools and flows aren't pathetically bad, they could be so much better. But I will say that chip design is hard... and the tools are way way better than they used to be. Chip design is a hard thing - it's not binary, it's analog - and having a program which gets it all mostly right is - to me - amazing. You have wire-to-wire noise, you have power rail droop, you have transistor characteristics, parasitic electrical properties, R/L/C characteristics of wires, a ridiculous number of design rules for creating masks, and you have all of these libraries of boolean devices with all of their myriad of characteristics, plus multiple power-domains and all the craziness that that brings. To me it's amazing that synthesis and place 'n route tools can do much of anything to make a decent design from all of this. But there's a whole lot of room for improvement. As engineers we often talk about the mystical "powerpoint -> GDSII flow" (even though we'd all be out of a job) that takes Powerpoint slides from architects and turns them into a mask for fabrication.
 

bullzz

Senior member
Jul 12, 2013
405
23
81
Didn't even know there was a Highly Technical forum.

@cogman - SV does support interfaces. What am I missing?

I agree with dmens. It takes time to understand how the tool interprets the RTL. A lower-level RTL owner would want clear control over where gates and flops are placed; a higher-level HDL doesn't give you that control.
 

Nothingness

Platinum Member
Jul 3, 2013
2,722
1,357
136
In fact there are many smaller companies that do/did/will do synthesis tools. You can even find tools that will generate HDL from a subset of C (high-level synthesis).

The problem is that most of these companies want to be bought by one of the big three, so in the end everything is merged into other tools. And the companies that are not part of the big three are considered with disdain by potential customers (sometimes with good reasons...).

That being said, I'm sure synthesis tools use traditional compiler technology. I was working at a company that did high-level synthesis, and we certainly had a front-end, intermediate code, multiple back-ends, front/back-end-independent optimizers, etc. We were in the second category of EDA companies, so everything simply disappeared, though our tool was used by TI and Intel R&D.
 

Cogman

Lifer
Sep 19, 2000
10,278
126
106
So, while I'm not bad at hardware and circuitry, the entire software industry has moved a long way from the FORTRAN, Pascal and C that I was taught in college (I'm old). So, could you please explain to me (like I'm 5 years old) what closures and interfaces are? I googled them and I'm still mystified. And while you are at it, what's LLVM?

A closure is an anonymous function. It is pretty useful in the software world. Seeing one is probably easier than trying to explain it, so I'll use some pseudo C++11 to demonstrate:

Code:
#include <functional>

int doThing(std::function<int()> funct)
{
	return funct() + 7;
}

int caller()
{
	int bob = 7;
	// [&bob] captures bob by reference, so the closure can modify it.
	int val = doThing([&bob]() { return bob++; });
	return val; // funct() returned 7, so val == 14; bob is now 8
}

So the closure is the code [&bob](){...}. It is handy because the logic in doThing is independent of the logic in the closure. Closures come in handy when you want to run an algorithm but be able to plug in specific code required for the algorithm to complete. The classic example is sorting. One approach is to say that all objects pushed into the sort algorithm must implement comparators for whatever they are being sorted against. Another approach is to supply a closure that defines the sort methodology for your collection of objects.

The closure method is powerful because, at the point of sorting, you can quickly change behavior. It isn't strictly tied to the objects themselves (so you could reverse sort, random sort, whatever). On the sorter's side of things, the algorithm can be written very generically, not even caring what the things being sorted look like, only that it can execute the comparison method.

Another beauty of this is that the anonymous function can be saved (not so anonymous anymore?) and passed around to different algorithms, making it very easy to compose algorithms. You could make algorithms that generate algorithms. It is extremely powerful.
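To make the sorting example concrete, here is a minimal C++11 sketch (my own illustration; the name sortWith is made up): the same generic std::sort call is steered by whichever comparator closure the caller picks at the call site.

```cpp
#include <algorithm>
#include <vector>

// Sort ascending or descending by swapping the comparator closure
// at the call site; std::sort itself never changes.
std::vector<int> sortWith(std::vector<int> v, bool descending)
{
    if (descending)
        std::sort(v.begin(), v.end(), [](int a, int b) { return a > b; });
    else
        std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
    return v;
}
```

The sort algorithm never enumerates the possible orderings; the caller supplies the one it wants.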

As for interfaces:

At their very core, interfaces are all about defining a standard set of functionality on an object. They have a similar sort of power to closures. By using interfaces, you can write your objects to a standard and then treat them generically in your algorithms (even though they are specialized). The C++ equivalent is a base class where all methods are pure virtual.

It is the ability to say "This object does X" and then use that fact to perform generic algorithms on that object.
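A minimal sketch of that C++ pattern (Shape, Square, and doubledArea are just illustrative names): the pure-virtual base class is the interface, and generic code is written against it.

```cpp
// The "interface": any Shape promises it can report its area.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;  // pure virtual method
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

// Generic code that only knows "this object does area()".
double doubledArea(const Shape& s) { return 2.0 * s.area(); }
```

doubledArea works on any Shape ever written, without knowing the concrete types.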

And while you are at it, what's LLVM?

The LLVM project is a pretty interesting one. It defines a standard, architecture-agnostic bytecode along with several tools to generate that bytecode. The idea is that the bytecode is easy to target (it was designed to be easy to compile to), while generating machine code from it is also simplified. That makes life easy for language writers because they have one compilation target (the LLVM bytecode); after that, platform maintainers can write the translator from LLVM to whatever platform they want.

I don't know if that explains things well enough, so let me just point out what has been done as a result of LLVM. We now have C++/C/Fortran -> JavaScript compilers (Emscripten), and there is even work underway on a C -> Verilog flow. In other words, if they can get a good LLVM-bytecode-to-Verilog generator, all of a sudden you can start using more complex languages like C and C++ to write systems. It really is impressive.

I don't necessarily think it is the right route (hardware is way too different from software), but it shows what can be done with a good layer of abstraction.

As far as splitting synthesis, maybe I'm missing something but that's sort of what happens. You have synthesis which takes RTL and turns it into a boolean logic netlist and then you have place 'n route which takes that and makes transistors and wires out of it. The "bytecode" here would be the netlist. Is this what you are referring to? Or something else?

It is sort of the same, but not really. For example, LLVM bytecode defines a machine with an infinite number of registers, which simplifies things tremendously for language writers. It isn't until the code is translated for the specific architecture that the registers are broken up and reassigned.

In other words, LLVM was designed to be easy to compile against, whereas netlists and RTL syntax are designed to more closely match the system they are targeting.

I have seen other HDL languages too beyond just Verilog, VHDL and System Verilog... but I will concede your point that there aren't many, and they are all a bit limited ...except I still think System Verilog is pretty powerful even if you disagree with me... I'm no programmer though, so maybe my vision is blinded by staring at too many transistors and not reading enough Java manuals.

Definitely agreed hardware could be much (much!) better. While I think state-of-the-art synthesis tools and flows aren't pathetically bad, they could be so much better. But I will say that chip design is hard... and the tools are way way better than they used to be. Chip design is a hard thing - it's not binary, it's analog - and having a program which gets it all mostly right is - to me - amazing. You have wire-to-wire noise, you have power rail droop, you have transistor characteristics, parasitic electrical properties, R/L/C characteristics of wires, a ridiculous number of design rules for creating masks, and you have all of these libraries of boolean devices with all of their myriad of characteristics, plus multiple power-domains and all the craziness that that brings. To me it's amazing that synthesis and place 'n route tools can do much of anything to make a decent design from all of this. But there's a whole lot of room for improvement. As engineers we often talk about the mystical "powerpoint -> GDSII flow" (even though we'd all be out of a job) that takes Powerpoint slides from architects and turns them into a mask for fabrication.
I agree wholeheartedly. I just think the industry can do better. After all, if we can make self-driving cars, why can't we make self-programming and self-routing computers? Yes, it is difficult, but I think the true reason it hasn't been done yet is that there isn't a huge push for it. We have so few people who actually need to use HDLs versus people using software tools.

That may change with the advent of FPGAs.
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
That may change with the advent of FPGAs.

I'm still parsing the rest of your post to make an intelligent reply - thanks for writing it all out - but I got stuck on the last line.

What advent of FPGAs? They've been around since the mid 1990's... and I haven't seen any appreciable difference in the modern versions except that they are bigger, faster and come with more IPs. What has changed that makes it an advent?
 

Cogman

Lifer
Sep 19, 2000
10,278
126
106
I'm still parsing the rest of your post to make an intelligent reply - thanks for writing it all out - but I got stuck on the last line.

What advent of FPGAs? They've been around since the mid 1990's... and I haven't seen any appreciable difference in the modern versions except that they are bigger, faster and come with more IPs. What has changed that makes it an advent?

Availability. It is true that they have been around for a long time, but for the most part they have been toys only for the big boys. Now we have seen pretty dramatic price drops (you can get one for $50!), and there is some work to move FPGAs onto the CPUs themselves (see here). The work is being done to make them a consumer-level product.

I guess the advent hasn't quite happened yet, but I would say we are on the cusp. We are (possibly) very close to seeing FPGAs everywhere.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
A closure is an anonymous function. <snip>

What would you use them (or other high-level language features) for, in a hardware world? My perspective is likely warped by my experience (I have a hammer, and I've been turning things into nails for years, so the value of a screwdriver doesn't immediately jump out at me)... I might be missing something obvious.
 

Cogman

Lifer
Sep 19, 2000
10,278
126
106
What would you use them (or other high-level language features) for, in a hardware world? My perspective is likely warped by my experience (I have a hammer, and I've been turning things into nails for years, so the value of a screwdriver doesn't immediately jump out at me)... I might be missing something obvious.

The value would be similar to the value of inheritance. The difference is that anonymous methods allow more control, at the call site, over how a method is run.

For how it would apply to hardware: let's say you have some piece of hardware that, as part of its algorithm, needs to add two things together. Those things could be floats, ints, doubles, whatever; it just depends on how you call it. Now, one way of handling this is to pass in some sort of flag that says "these values are floats/ints/doubles" and then choose the addition algorithm based on that. However, that would require you, at the call site, to know every way a set of binary values can be added together. On top of that, it would require some conditional logic to switch based on which value comes in; that's yucky!

If, on the other hand, you took the anonymous-method route, you could pass in the addition function. That module would be plugged right into the correct place, and even better, you don't have to worry about over-preparing and implementing too many addition algorithms (no dead circuitry).
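As a software-side sketch of the contrast (plain C++ rather than an HDL, with made-up names): the flag version enumerates every mode inside the algorithm, while the plugged version takes the adder from the caller and never switches on anything.

```cpp
#include <functional>

// Flag-based version: the caller must know every supported mode,
// and the switch logic lives inside the algorithm.
int runFlagged(int a, int b, int mode)
{
    switch (mode) {
        case 0: return a + b;           // plain add
        case 1: return (a + b) & 0xFF;  // 8-bit wrapped add
        default: return 0;
    }
}

// Closure-based version: the caller plugs in exactly the adder it
// needs; the algorithm never enumerates the possibilities.
int runPlugged(int a, int b, std::function<int(int, int)> add)
{
    return add(a, b);
}
```

With runPlugged, adding a new kind of adder requires no change to the algorithm, and no dead branches exist for modes you never use.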

On top of that, testability becomes MUCH easier. Instead of having a complex circuit where you must trace each route by which addition may occur and generate cases that trigger those routes (and woe be to you if someone changes which flag triggers which addition circuitry), you can simply test each addition method piecewise, and for the overall algorithm the test can provide an addition algorithm that makes testing simpler.

Again, this can be partially achieved with inheritance. However, in the software world that is falling out of favor, mainly because inheritance hides a lot of evils. The ability to override methods is just too kludgy and can cause unexpected consequences (when the overriding method doesn't do everything the method it overrides expects). It has led to the cute saying "composition over inheritance".
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
How does that ease testability compared to classic Verilog modules?

Testability meaning post-silicon test - as in being able to test specific transistors? Or pre-silicon validation?

I was thinking that more advanced constructs could maybe help with formal verification...
 

Nothingness

Platinum Member
Jul 3, 2013
2,722
1,357
136
I guess Cogman meant pre-silicon validation.

In my experience, block-level validation already uses various methods, including formal ones. I fail to see how anonymous functions would bring anything, as you can already write modules as small as you want and test them individually.
 

Cogman

Lifer
Sep 19, 2000
10,278
126
106
How does that ease testability compared to classic Verilog modules?

I guess Cogman meant pre-silicon validation.

In my experience, block-level validation already uses various methods, including formal ones. I fail to see how anonymous functions would bring anything, as you can already write modules as small as you want and test them individually.

Yes, I was specifically referring to pre-silicon validation.

Now for the why. Yes, you can make very simple modules and test them. However, what happens when you have more complex modules made up of multiple simple modules? What if one of those submodules fails, is moved, etc.? All of a sudden you get a very noisy "everything is failing!" error and you need to track it down.

On top of that, writing tests ends up baking in assumptions about which submodules are being used. This is testing too deep.

By using closures/anonymous functions/lambdas, you can limit how deep you are testing. You can limit it because you are providing the submodule functionality at the test level.

Let's say, for example, that you have a module that, in the course of things, needs to sort its elements. You could just embed an element-sorting module; however, what happens if that sorting module starts failing? Or what happens if you decide to swap one sorting module for another with slightly different characteristics (stable vs. unstable sorter)? All of a sudden, your tests for the parent module may start to fail because some submodule changed its logic.

To improve testability, the better approach is to provide the sorter module to the parent module and then have the parent module's test define its own sorting module. That way, if/when the parent module fails to work, we can attribute it directly to a logical error in the parent module and not the sorting submodule. And if we later want to redefine the parent module to not require sorting, our test is much easier to modify to accommodate that change.
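In software terms, that injection pattern might look like this (a C++ sketch with made-up names; processTopThree stands in for the parent module, and the sorter is supplied by the caller or the test):

```cpp
#include <functional>
#include <vector>

// The parent "module" takes its sorter as a parameter instead of
// hard-wiring a particular sorting submodule.
std::vector<int> processTopThree(std::vector<int> data,
                                 std::function<void(std::vector<int>&)> sorter)
{
    sorter(data);                         // delegate ordering to the injected sorter
    if (data.size() > 3) data.resize(3);  // the parent's own logic: keep the first three
    return data;
}
```

A test can inject a real sorter to check end-to-end behavior, or a do-nothing "sorter" to exercise the parent's own logic in isolation; a failure then points at the parent, not some buried submodule.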
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
^

But we do have submodule testing. In my projects we use formal verification, Verilog test-benches, or sub-fullchip test environments.

Formal methods provide total coverage, but as far as I know they are very difficult to use outside arithmetic logic (ALUs) or basic transaction protocols. Test-benches and lower-level TEs are useful because they reduce the scope of complexity (i.e., you don't have to run a long top-level test).

Unfortunately, the last two can only provide so much coverage, which is why a lot of time is spent debugging top-level tests that die with "EAX register mismatch, have fun with that".
 