So, while I'm not bad at hardware and circuitry, the entire software industry has moved a long way from the FORTRAN, Pascal and C that I was taught in college (I'm old). So, could you please explain to me (like I'm 5 years old) what closures and interfaces are? I googled them and I'm still mystified. And while you are at it, what's LLVM?
A closure is an anonymous function that can capture variables from the scope it was created in. It is pretty useful in the software world. Seeing one is probably easier than explaining it, so I'll use some pseudo C++11 to demonstrate:
Code:
#include <functional>

int doThing(std::function<int()> funct)
{
    return funct() + 7;
}

int caller()
{
    int bob = 7;
    int val = doThing([&bob]() { return bob++; }); // val is 7 + 7 = 14
    return bob; // bob is now 8: the closure captured it by reference and incremented it
}
So the closure is the code that defines [&bob](){...}. It is handy because the logic underlying doThing is independent of the logic in the closure. These come in handy when you want to run an algorithm but be able to plug in specific code required for the algorithm to complete. The classic example is sorting. One approach is to say that all objects pushed into the sort algorithm must implement comparators for whatever they are being sorted against. Another approach is to supply a closure which defines the sort methodology for your collection of objects.
The closure method is powerful because at the point of sorting you can quickly change behavior. It isn't strictly tied to the objects themselves. (So you could reverse sort, random sort, whatever.) On the sorter's side of things, the algorithm can be written very generically, not even caring what the things being sorted look like, only caring that it can execute the comparison method.
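To make that concrete, std::sort in C++ already works this way: the algorithm is completely generic, and the comparison is a closure you supply at the call site. A small sketch (the function names are my own, not from the post above):

```cpp
#include <algorithm>
#include <vector>

// Same generic algorithm (std::sort), two different behaviors,
// selected purely by the closure passed in at the call site.
std::vector<int> sortAscending(std::vector<int> v)
{
    std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
    return v;
}

std::vector<int> sortDescending(std::vector<int> v)
{
    std::sort(v.begin(), v.end(), [](int a, int b) { return a > b; });
    return v;
}
```

Nothing about std::sort changes between the two calls; only the plugged-in comparison does.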
Another beauty of this is that the anonymous function can be saved (not so anonymous anymore?) and passed around to different algorithms, making it very easy to compose algorithms. You could even make algorithms that generate algorithms. It is extremely powerful.
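For instance, in C++11 a saved closure is just a std::function value, and you can write a function that builds a new closure out of two existing ones. A minimal sketch (compose is an illustrative name I made up, not a standard facility):

```cpp
#include <functional>

// Build a brand-new closure from two saved ones: an "algorithm
// that generates an algorithm". The returned closure captures
// copies of f and g.
std::function<int(int)> compose(std::function<int(int)> f,
                                std::function<int(int)> g)
{
    return [f, g](int x) { return f(g(x)); };
}
```

Here the result of compose can itself be stored, passed to other algorithms, or composed again.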
At their very core, interfaces are all about defining a standard set of functionality on an object. They have a similar sort of power to closures. By using interfaces, you can write your objects to a standard and then treat them generically in your algorithms (even though they are specialized). The C++ equivalent is a base class where all methods are pure virtual.
It is the ability to say "This object does X" and then use that fact to perform generic algorithms on that object.
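In C++ terms, that "this object does X" promise looks like the sketch below (Shape and Square are illustrative names of my own, not from the post):

```cpp
// The "interface": a base class made entirely of pure virtual methods.
struct Shape
{
    virtual ~Shape() = default;
    virtual double area() const = 0; // the promise: "this object does X"
};

// A specialized object that fulfills the promise.
struct Square : Shape
{
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

// A generic algorithm that only cares that the object is a Shape,
// never what kind of Shape it actually is.
double doubledArea(const Shape& s)
{
    return 2.0 * s.area();
}
```

doubledArea works unchanged on any future class that implements Shape, which is exactly the generic-algorithm payoff described above.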
And while you are at it, what's LLVM?
The LLVM project is a pretty interesting one. They have defined a standard, architecture-agnostic bytecode along with several tools to generate that bytecode. The idea is that it is easy to target that bytecode (they designed it to be easy to compile to), and generating machine code from the bytecode is also simplified. It makes life easy for language writers because they have one compilation target (the LLVM bytecode); after that, the platform maintainers can write the translator from LLVM to whatever platform they want.
I don't know if that explains things well enough, so let me just point out what has been done as a result of LLVM. We now have C++/C/Fortran -> JavaScript compilers (emscripten) as a result of the LLVM project. There is even work underway to support C -> Verilog. In other words, if they can get a good "LLVM bytecode to Verilog" generator, all of a sudden you can start using more complex languages like C and C++ to write systems. It really is impressive.
I don't necessarily think it is the right route (hardware is way too different from software), but it shows what can be done with a good layer of abstraction.
As far as splitting synthesis, maybe I'm missing something but that's sort of what happens. You have synthesis which takes RTL and turns it into a boolean logic netlist and then you have place 'n route which takes that and makes transistors and wires out of it. The "bytecode" here would be the netlist. Is this what you are referring to? Or something else?
It is sort of the same, but not really. For example, LLVM bytecode defines a machine with an infinite number of registers. For the language writers that simplifies things tremendously. It isn't until the code is translated into the specific architecture that the registers are broken up and reassigned.
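For a sense of what that looks like, here is a tiny hand-written LLVM IR function. The %-prefixed names are virtual registers; the front end can mint as many as it needs, and it isn't until the backend runs that they get mapped onto the target machine's real, finite register set (this is a hand-rolled sketch, not output from any particular compiler):

```llvm
define i32 @addSeven(i32 %x) {
entry:
  %result = add i32 %x, 7
  ret i32 %result
}
```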
In other words, LLVM was designed to be easy to compile against. Whereas the netlist and RTL syntax is designed to more closely match the system they are targeting.
I have seen other HDL languages too beyond just Verilog, VHDL and System Verilog... but I will concede your point that there aren't many, and they are all a bit limited ...except I still think System Verilog is pretty powerful even if you disagree with me... I'm no programmer though, so maybe my vision is blinded by staring at too many transistors and not reading enough Java manuals.
Definitely agreed hardware could be much (much!) better. While I think state-of-the-art synthesis tools and flows aren't pathetically bad, they could be so much better. But I will say that chip design is hard - it's not binary, it's analog - and the tools are way, way better than they used to be. Having a program which gets it all mostly right is, to me, amazing. You have wire-to-wire noise, power rail droop, transistor characteristics, parasitic electrical properties, R/L/C characteristics of wires, a ridiculous number of design rules for creating masks, and libraries of boolean devices with all of their myriad characteristics, plus multiple power domains and all the craziness that brings. To me it's amazing that synthesis and place 'n route tools can do much of anything to make a decent design from all of this. But there's a whole lot of room for improvement. As engineers we often talk about the mystical "PowerPoint -> GDSII flow" (even though we'd all be out of a job) that takes PowerPoint slides from architects and turns them into a mask for fabrication.
I agree wholeheartedly. I just think the industry can do better. After all, if we can make self-driving cars, why can't we make self-programming and self-routing computers? Yes, it is difficult, but I think the true reason it hasn't been done yet is that there isn't a huge push for it. We have so few people who actually need to use HDLs versus people using software tools.
That may change with the advent of FPGAs.