
RISC V ?


Comments

  • Heater. wrote: »
    Hi Dolu,

    You still about?

    Back in July I said I would try to get Spinal running on the Windows Subsystem for Linux on Windows 10. After a busy time I just found time to do it.

    Pleased to say it works fine. Again the verilated Murax boots!

    Took a while to install the Win 10 Creators update and then update the Bash shell to Ubuntu 16 so that I could get a JDK 8 installation. It's amazing how many gigabytes of stuff one needs to install nowadays to get a program running!

    Anyway, I was wondering what is the preferred way to start with a brand new Spinal project. Say one just wants to make a simple component and get it built as Verilog? I imagine it needs a new project directory and some structure in there?

    I guess the SpinalBaseProject is a good start. How much of that is actually needed from scratch?


    Great :D

    So for a new Spinal project, sure, the SpinalBaseProject is the way to go. There is also a workshop with many labs and self-test regressions:
    https://github.com/SpinalHDL/SpinalBaseProject/tree/workshop

    Personally I would say that an IDE could really help (IntelliJ 14.1.x).

    "How much of that is actually needed from scratch" I'm not sure I understand. Basically the SpinalBaseProject is an SBT project which can be used to do SpinalHDL stuff. SBT is a Scala build tool ^^
  • Heater. Posts: 21,230
    edited 2017-09-04 19:43
    Dolu1990,

    Firstly, let me say I think Spinal looks like a wonderful thing. You have done a fantastic job on the language design, implementation and documentation. I'm itching to put it through its paces.

    I'm sure I will figure out how to work with it from the base project and the workshop. Thanks.
    Personally I would say that an IDE could really help (IntelliJ 14.1.x).
    There is the thing. I'm totally resistant to having to use "Yet Another F... IDE" (YAFIDE)

    I have the Altera IDE, the Xilinx IDE, the Cypress IDE, the Arduino IDE, the Microchip PIC IDE, Visual Studio, Visual Studio Code, Lazarus, Qt Creator, etc, etc.

    I really don't want any more! Especially as my current plans involve using Spinal components in an existing Quartus project.

    I'm going to try to get on with Visual Studio Code, my current weapon of choice when not using vim.
    "How much of that is actually needed from scratch" I'm not sure I understand. Basically the SpinalBaseProject is an SBT project which can be used to do SpinalHDL stuff. SBT is a Scala build tool
    Well, ideally there would be a single executable for the compiler, call it spinal. And it would work very simply:
    $ spinal myComponent.scala
    
    Boom, done. There is the verilog output.

    Again, after decades of having to deal with Yet Another F... Build System (YAFBS) I'm a bit tired of it all. I'm sure SBT is great and all but I'm already juggling make, cmake, qmake, npm, webpack, etc, etc.

    Anyway. Never mind my whining. I'll get on with it.

    Thanks for the great effort!

  • Heater. wrote: »
    Dolu1990,
    There is the thing. I'm totally resistant to having to use "Yet Another F... IDE" (YAFIDE)

    I understand it ^^ but here the goal isn't to manage a tool flow, but more to help you find errors and navigate, especially if you don't have experience in Scala.
    Personally, I can't code without it, I'm way too lazy XD


    So it should be possible to pack everything into a spinal executable, but really, it's not a priority. Currently you can also do SpinalHDL development by using makefiles:
    https://github.com/SpinalHDL/SpinalBaseProject/tree/makefile
    Libraries linked in this example aren't updated, but the idea is there.

    SBT is damned slow the first time you run your project on it because it has to download the Scala compiler itself plus some dependencies, and it looks like the download speed isn't great :/


  • Heater. Posts: 21,230
    Actually, Dolu, despite my negativity toward Yet Another IDE/build tool I bit the bullet and installed IntelliJ. Only another 2 GB of my 256 GB SSD gone, so what the heck! I was curious to see what all the fuss about IntelliJ was.

    Anyway, in odd free hours I have been using IntelliJ and working through your documentation. Starting from the simplest component, trying out the syntax. Just yesterday I got as far as creating the transmitter of a UART. I tried not to look at your UART example.

    All in all it has gone very smoothly. There were some odd error messages from the compiler that took a while to work out but that is normal with any new language. You get used to them.

    Just now I was wondering a couple of things:

    1) How to test my Spinal creations? I was thinking of creating test benches in Verilog to run under Icarus.

    2) How to integrate any Spinal components into my existing Quartus project.

    3) Is it so that there is no concept of tri-state logic in Spinal? That might make 2) above a bit more difficult than I expected.

    One niggle with IntelliJ I have at the moment is that it highlights a lot of my val's and the hover message says:

    "Explicit type annotation required....)

    or:

    "Advanced language feature: postfix operator notation"

    I can get rid of these by changing for example:
    val a = in Bool
    
    to:
    val a : Bool = in.Bool
    
    Which is not the kind of verbosity I like, and not in the style of your documentation. So far I have not found a way to turn those highlights off.

    All in all, good fun so far.


  • Heater. wrote: »
    1) How to test my Spinal creations? I was thinking of creating test benches in Verilog to run under Icarus.
    To test it, you can use your tools as with regular Verilog, nothing really changes. Personally I can't use VHDL/Verilog for testing any more; I prefer cocotb + Icarus for testbenches that don't need to be fast, and Verilator for very fast and heavy simulations (at least 100-200 times faster).
    Heater. wrote: »
    2) How to integrate any Spinal components into my existing Quartus project.
    As you are used to doing with Verilog, just point it at the generated Verilog that you want to include.
    Heater. wrote: »
    3) Is it so that there is no concept of tri-state logic in Spinal? That might make 2) above a bit more difficult than I expected.
    There is a TriState bundle definition in the SpinalHDL lib, which isn't a Verilog inout definition but a data structure with 3 signals (write/writeEnable/read). The idea is to keep the notion of tristate "pure" and not use the VHDL/Verilog inout "flawed" thing ^^. Keeping to pure digital design has some advantages, and 'Z' isn't pure digital design, it is analog stuff. Anyway, the idea is to design your stuff with SpinalHDL in a non-board-specific way, and then instantiate your SpinalHDL-generated toplevel in a VHDL/Verilog board wrapper to add PLL stuff, do in/out/inout adaptation, special buffers and special board things.
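    To give the rough idea, a simplified sketch of such a bundle (not the exact library code, which is generic over the payload type) would be:

    import spinal.core._
    import spinal.lib._

    // Simplified sketch only: a tristate pin becomes three ordinary signals,
    // no inout anywhere in the generated design.
    case class TriStateBool() extends Bundle with IMasterSlave {
      val write       = Bool()   // value the design wants to drive
      val writeEnable = Bool()   // actually drive the pad when True
      val read        = Bool()   // value sampled from the pad

      override def asMaster(): Unit = {
        out(write, writeEnable)
        in(read)
      }
    }

    The board-level VHDL/Verilog wrapper is then the only place where write/writeEnable/read get mapped onto a real inout pad.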
    Heater. wrote: »
    One niggle with IntelliJ I have at the moment is that it highlights a lot of my val's and the hover message says:

    "Explicit type annotation required....)

    or:

    "Advanced language feature: postfix operator notation"

    I can get rid of these by changing for example:
    val a = in Bool
    
    to:
    val a : Bool = in.Bool
    
    Which is not the kind of verbosity I like, and not in the style of your documentation. So far I have not found a way to turn those highlights off.
    Don't worry about coding style highlights from IntelliJ; SpinalHDL uses a lot of things to make the syntax smoother. I always use things like:

    val io = new Bundle {
      val a, b = in Bool
      val result = out Bool
    }

    There is nothing to worry about :)

    It doesn't highlight them red, right?
    Which version of IntelliJ do you have? 14.1.x?





  • Heater. Posts: 21,230
    edited 2017-09-13 00:43
    Dolu1990,

    Thanks for the heads up on cocotb. I have used both icarus and verilator for creating tests. Looks like cocotb would be much nicer.

    Looks like I'll be removing use of tri-state logic in my efforts. You are not the first one to suggest that I do so.


    Hmm... I have no idea what IntelliJ version this is. The about box says that it is IntelliJ IDEA 2017.2.3, built in August this year.

    The highlighting I get is not red, just a bit shaded. It does no harm except it's a bit too shaded to read on my monitor and, well, looks messy.

    [Attached screenshot of the IntelliJ highlighting]

    I think SpinalHDL needs a bit more advertising. It's such a wonderful thing more people should know!

  • "Thanks for the heads up on cocotb."

    Yeah. It would be nice to have a widely available verification tool as an alternative to SystemVerilog and UVM.
  • Heater. Posts: 21,230
    I'm not sure what you mean by "verification tool" exactly. But the Icarus and Verilator simulators have been available for a long while. From my limited experience they do a good job of allowing me to exercise my Verilog designs. Cocotb is a nice Python "front end" for using various simulators, including Icarus.
  • KeithE Posts: 957
    edited 2017-09-13 03:53
    Heater - Verilator can't even handle behavioral code, much less all of SystemVerilog and the UVM. Icarus can handle behavioral code but not all of SystemVerilog and the UVM. Look at any job listing for verification engineers and you will see UVM listed. If you want to get experience then you're stuck with EDA Playground or taking classes to get access to the tools. So I'm a fan of any alternatives, for multiple reasons.

    I think that cocotb does more than you realize.

    For example read about coverage here:

    https://github.com/potentialventures/cocotb/pull/490
  • E.g. doing constrained random requires a SAT solver (as far as I can tell...). So to me this isn't just a front end; the verification code is more complex than the design code, and both are running in parallel. That's why, I believe, I was asking about what SpinalHDL does for verification earlier in the thread. The big problem isn't design, it's verification, by far.

    Separately from this there is formal verification and I see that Clifford Wolf quit his FPGA job to work on this area full-time. So that's exciting too. He has some recent presentations on formal RISC V up.

    See his Twitter feed:

    I've been doing FPGA design and math modelling for LIDAR devices as day job for the last 10 years. Today is my last day in that job.

    What's next?

    My own company providing services around Yosys, with the focus on formal hw verification.
  • Heater. Posts: 21,230
    You are speaking at a higher level than I can comprehend here, Keith :) I'm a noob to all this hardware description language business.

    From what I understand the boundary between RTL and behavioral-level code is a bit fluffy. The exact subset of behavioral code allowed varies between synthesis tools. Hardly surprising. It turns out that neither Verilog nor VHDL was originally designed for synthesis. They were designed for hardware description (behavioral) and simulation. Only later were they co-opted for actually generating connected gates. It's all a massive kludge.

    I have never read anyone say nice things about SystemVerilog. To my mind I don't even want to be able to write anything that cannot be synthesized. Years ago I played with VHDL by means of the GHDL simulator. I was dismayed to find that what I had created could not be synthesized. Worse still, there was nothing in GHDL to warn me that might be the case. I gave up toying with hardware description languages for years after, also discouraged by the horrendous ugliness of VHDL.

    Seems others agree with me. That is why Berkeley created the Chisel language and now Dolu has created Spinal.

    So, yes, Spinal for RTL design and cocotb (Python) for creating test benches seems like a great idea.

    Which makes me wonder.... Why Python? If we are using Scala for the design why not have a thing like Cocotb also in Scala?

    I imagine something like:
    class MyTestbench extends TestBench {
    
      // Instantiate device under test
      val myDesign = new MyDesign
    
      // Stimulate DUT
      ...
      ...
    }
    






  • KeithE Posts: 957
    edited 2017-09-13 05:23
    > To my mind I don't even want to be able to write anything that cannot be synthesized

    Let's stick to Python terms, but for verification wouldn't you enjoy using a dictionary if appropriate? Even though you might not want to synthesize one? Or a list as a FIFO/queue, a set,... Pass a data structure by reference? That's the distinction. In Verilator you have to write stuff that can be synthesized, so it's painful in comparison to built-in types. SystemVerilog introduces this ability as well, plus it's really easy to interface to C code via the DPI. It was trivial for me to use some off-the-shelf cryptographic C code to verify some crypto hardware. (Much easier than the older PLI.)
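    Sketched in Scala rather than Python, purely to illustrate the point (the names here are made up): a reference model can just lean on an ordinary Queue, something you would never try to synthesize.

    import scala.collection.mutable

    // Golden-model FIFO check: the expected values live in a plain Queue
    class FifoScoreboard[T] {
      private val expected = mutable.Queue[T]()

      // record what was pushed into the DUT
      def push(item: T): Unit = expected.enqueue(item)

      // compare what the DUT produced against the model
      def check(dutOutput: T): Unit = {
        val want = expected.dequeue()
        assert(dutOutput == want, s"DUT gave $dutOutput, expected $want")
      }
    }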


    >Which makes me wonder.... Why Python? If we are using Scala for the design why not have a thing like Cocotb also in Scala?

    Maybe the last one on their list makes managers salivate over lower salaries ;-)
    All verification is done using Python which has various advantages over using SystemVerilog or VHDL for verification:

    Writing Python is fast - it’s a very productive language
    It’s easy to interface to other languages from Python
    Python has a huge library of existing code to re-use like packet generation libraries.
    Python is interpreted. Tests can be edited and re-run without having to recompile the design or exit the simulator GUI.
    Python is popular - far more engineers know Python than SystemVerilog or VHDL
  • Heater. Posts: 21,230
    I see what you mean.

    In my design I want only stuff that can be synthesized. In my test bench anything goes as it's running in a simulator on my PC.

    I think my gripe was that I did not know the limits of what was synthesizable and what was not. AND there was no way to tell the compiler to warn me about it. I could not be sure until my code hit Quartus and it threw up. Very tedious.

    In Verilator your Verilog is translated to C++ for simulation; the test bench you wrap around that and run it from can use any and all features of C++.

    So presumably you could have used that off-the-shelf cryptographic C code in a Verilator test bench to check your crypto hardware design against.

    Or have I missed a point?

    Having had a brief look at SystemVerilog I can understand why nobody wants to deal with it.

    Everyone seems to be going Python crazy at the moment. Which I find very depressing.





  • Heater. Posts: 21,230
    No luck with cocotb.

    In the Bash shell on Windows 10 the build fails with some message about a missing _ctypes module.

    Now I'm in the usual Python hell. Did I say something about YAFL above? I back out and delete it all immediately.

    I guess it's back to writing test benches in Verilog for Icarus for me.
  • One interesting thing is that SystemVerilog is not one, not two, but three languages mashed together.

    SystemVerilog - what you're using (at least a subset of it)
    SystemVerilog Assertions (SVA)
    SystemVerilog Functional Coverage (SFC)

    On a separate note, because I'm curious to see what you have to say and it's somewhat verification-related: have you used TDD for embedded development? I was looking into these tools a bit (see http://www.throwtheswitch.org):
    Unity - Curiously Powerful Unit Testing in C for C
    CMock - Automagical generation of stubs and mocks for Unity Tests
    Ceedling - Test build management

    CMock seems like the clever part which makes this really useful.
  • Heater. Posts: 21,230
    It's a long time since I worked at any place that was producing safety-critical embedded systems and was fanatical about testing. Back then nobody would dream of using C for such things.

    Lucas Aerospace (Jet engine controllers) had their own language called Lucol. A language designed from the ground up to make testing and coverage analysis very easy. The only language I have ever seen where the compiler could reliably report the execution time of every module, at compilation time.

    GEC-Marconi Avionics did everything in Ada at the time.

    I think half the battle with getting good test coverage and making unit testing easier in C is having the discipline to write code in such a way that individual modules and functions can be tested in isolation from the rest of the program.

    CMock looks like a really useful tool in this respect.

    Also have a look at the new generation of static analyzers that have grown up with clang/llvm:

    https://clang-analyzer.llvm.org/

    And the runtime sanitizers for finding memory leaks, checking test coverage, thread problems, etc:

    https://clang.llvm.org/docs/

    Used in conjunction with CMock this should make a pretty good unit test setup.
  • Heater. wrote: »
    I think half the battle with getting good test coverage and making unit testing easier in C is having the discipline to write code in such a way that individual modules and functions can be tested in isolation from the rest of the program.

    Yes, the TDD guys say that it forces a better design. I know in my career I've seen software developers wait until hardware is available to start coding, but that wouldn't stop the TDD guys. The discovery for me was that it's being done in the embedded space, so a path has been prepared. There are a lot of alternative tools but I've just started exploring this Unity/CMock/Ceedling set of tools.

    The static checking is good too. I worked with a group who even used it to avoid bugs in certain C compilers. They would create rules to flag code that would likely fail on certain customer platforms. (Customers would often stick with certain obsolete tool versions and you couldn't force them to update.)
  • Heater. Posts: 21,230
    Wow, who has that luxury? Pretty much every embedded system I have worked on since 1980 involved a year or so of software effort targeting a platform that did not exist yet. It's expected that software gets developed in parallel with the hardware. This includes boards that don't exist, custom peripherals that are not fab'ed yet, in one case a non-existent processor architecture!

    The typical approach is to kick off by creating software that pretends to be the not-yet built hardware. Then one can exercise the application code against that. Mock hardware. I guess I have been doing TDD before anyone had a name for it!

    A positive side effect of all that is that while the software guys are developing code against the mock hardware, problems with the hardware interface specification often show up. Hopefully the hardware design can be tweaked before it goes to production.





  • Ale Posts: 2,363
    edited 2017-09-13 19:13
    It's a long time since I worked at any place that was producing safety-critical embedded systems and was fanatical about testing. Back then nobody would dream of using C for such things.

    We write ASIL-D-compliant code in C++. Of course there is no dynamically allocated anything. I cannot say that it is crazy. But without unit tests and code coverage reports, nothing gets accepted. No idea what ASIL D is in avionics terms. It is probably not as strict as what is used in planes and so on.
    I still think the vendors should provide a MISRA C compiler; the standard approach is imho flawed. Not that MISRA C really makes much sense; I'm pretty sure it has many contradictory rules.

    Now, Ale, get back to your parallel Saturn core, ok ?
  • Heater. Posts: 21,230
    It's interesting to look at the C++ coding standard for the Joint Strike Fighter: http://www.stroustrup.com/JSF-AV-rules.pdf A lot of that seems to me to be MISRA rules or revised MISRA rules.

    Looking at the "shall not" rules I always think that a lot of them are pointing out bugs in the C++ language. For example, goto is not allowed, so why on Earth does C++ have a goto? Similarly, use of the types int, char etc. is not allowed, so why do they even exist? And so on for exceptions, and a bunch of other things.

    There really should be a compiler switch that removes all these disallowed language features.

    Looks like there are efforts going on to provide MISRA-checking analysers for C++ using clang/llvm:
    https://github.com/rettichschnidi/clang-tidy-misra
    https://github.com/rettichschnidi/clang-misracpp2008

    The "shall not" rules:

    There shall not be any self-modifying code.

    The error indicator errno shall not be used.

    The macro offsetof, in library <stddef.h>, shall not be used.

    <locale.h> and the setlocale function shall not be used.

    The setjmp macro and the longjmp function shall not be used.

    The signal handling facilities of <signal.h> shall not be used.

    The input/output library <stdio.h> shall not be used.

    The library functions atof, atoi and atol from library <stdlib.h> shall not be used.

    The library functions abort, exit, getenv and system from library <stdlib.h> shall not
    be used.

    The time handling functions of library <time.h> shall not be used.

    The #define pre-processor directive shall not be used to create inline macros. Inline functions
    shall be used instead.

    The #define pre-processor directive shall not be used to define constant values. Instead, the const qualifier shall be applied to variable declarations to specify constant values.

    The following character sequences shall not appear in header file names: ‘, \, /*, //, or ".

    An object shall not be improperly used before its lifetime begins or after its lifetime ends.

    Calls to an externally visible operation of an object, other than its constructors, shall not be allowed until the object has been fully initialized.

    A class’s virtual functions shall not be invoked from its destructor or any of its constructors.

    Unnecessary default constructors shall not be defined.

    The definition of a member function shall not contain default arguments that produce a
    signature identical to that of the implicitly-declared copy constructor for the corresponding class/structure.

    A base class shall not be both virtual and non-virtual in the same hierarchy.

    An inherited nonvirtual function shall not be redefined in a derived class.

    Arrays shall not be treated polymorphically

    Arrays shall not be used in interfaces. Instead, the Array class should be used.

    Functions with variable numbers of arguments shall not be used.

    A function shall not return a pointer or reference to a non-static local object.

    Functions shall not call themselves, either directly or indirectly (i.e. recursion shall not be allowed).

    Identifiers in an inner scope shall not use the same name as an identifier in an outer scope, and therefore hide that identifier.

    Identifiers shall not simultaneously have both internal and external linkage in the same translation unit.

    The register storage class specifier shall not be used.

    In an enumerator list, the ‘=‘ construct shall not be used to explicitly initialize members other than the first, unless all items are explicitly initialized.

    The underlying bit representations of floating point numbers shall not be used in any way by the programmer.

    Octal constants (other than zero) shall not be used.

    A string literal shall not be modified

    Multiple variable declarations shall not be allowed on the same line.

    Unions shall not be used.

    The right hand operand of a && or || operator shall not contain side effects.

    Operators ||, &&, and unary & shall not be overloaded

    Signed and unsigned values shall not be mixed in arithmetic or comparison operations.

    Unsigned arithmetic shall not be used.

    The left-hand operand of a right-shift operator shall not have a negative value.

    The unary minus operator shall not be applied to an unsigned expression.

    The sizeof operator will not be used on expressions that contain side effects.

    The comma operator shall not be used.

    More than 2 levels of pointer indirection shall not be used.

    Relational operators shall not be applied to pointer types except where both operands are ...

    The address of an object with automatic storage shall not be assigned to an object which persists after the object has ceased to exist.

    The null pointer shall not be de-referenced.

    A pointer shall not be compared to NULL or be assigned NULL; use plain 0 instead.

    A pointer to a virtual base class shall not be converted to a pointer to a derived class.

    Implicit conversions that may result in a loss of information shall not be used.

    Type casting from any type to or from pointers shall not be used.

    Floating point numbers shall not be converted to integers unless such a conversion is a
    specified algorithmic requirement or is necessary for a hardware interface.

    The goto statement shall not be used.

    The continue statement shall not be used.

    The break statement shall not be used (except to terminate the cases of a switch statement).

    Floating point variables shall not be used as loop counters.

    Numeric variables being used within a for loop for iteration counting shall not be modified in the body of the loop.

    Floating point variables shall not be tested for exact equality or inequality.

    Evaluation of expressions shall not lead to overflow/underflow

    The volatile keyword shall not be used unless directly interfacing with hardware.

    Allocation/deallocation from/to the free store (heap) shall not occur after initialization.

    C++ exceptions shall not be used

    The basic types of int, short, long, float and double shall not be used, but specific-length equivalents should be typedef’d accordingly for each compiler, and these type names used in the code.

    Algorithms shall not make assumptions concerning how data is represented in memory

    Algorithms shall not make assumptions concerning the order of allocation of nonstatic data members separated by an access specifier.

    Algorithms shall not assume that shorts, ints, longs, floats, doubles or long doubles begin at particular addresses.

    Underflow or overflow functioning shall not be depended on in any special way.

    Assuming that non-local static objects, in separate translation units, are initialized in a special order shall not be done.




  • Ale Posts: 2,363
    Algorithms shall not make assumptions concerning how data is represented in memory

    And because of that and many others, the whole thing imho doesn't make that much sense:

    We have two processors one is little endian the other one is big endian, and a CAN interface. We have to know the endianness and code around it so the data gets positioned in the right bits.
    Floating point variables shall not be used as loop counters.

    This was a bad idea anyway. But consider that in many BASIC implementations, there is only a floating-point numeric type.
    The #define pre-processor directive shall not be used to define constant values. Instead, the const qualifier shall be applied to variable declarations to specify constant values.

    In a previous job, it was required that every number be #defined. So there were only fancy names and no numbers in the code. "Don't use magic numbers", it was called. Even if they have fancy names, they are still magic numbers.
  • Tor Posts: 2,010
    Hey, I want my octal... 0775... I immediately know what that means for an open() call with O_CREAT. I would have more trouble with 509.

  • Heater. Posts: 21,230
    Certainly if two processors of opposite endianness are communicating through shared memory the endian swap has to happen somewhere. As is the case when reading bytes off a communications link, one has to know the endianness of the bytes on the line.

    However, one's algorithm need not know anything about that. The algorithm should access these bytes with getter and setter macros or functions. Those functions will take care of byte order without the algorithm needing to be changed. That is why in the networking world there are the macros htons, ntohs etc: http://beej.us/guide/bgnet/output/html/multipage/htonsman.html I have seen projects burned by endian issues a number of times.

    In general one should not rely on the byte layout of structures. Don't go poking around in a structure with pointers offset from the base address and the like. Structure fields have names for a reason. Use them!
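    As a rough sketch of what I mean (in Scala here rather than C, and the names are invented), the accessors are tiny and the algorithm above them never sees a byte order:

    // Keep byte order out of the algorithm by going through small helpers,
    // the same idea as htons/ntohs.
    object NetOrder {
      // read a 16-bit big-endian (network order) field from a byte buffer
      def getU16(buf: Array[Byte], off: Int): Int =
        ((buf(off) & 0xFF) << 8) | (buf(off + 1) & 0xFF)

      // write a 16-bit value back out in big-endian order
      def putU16(buf: Array[Byte], off: Int, value: Int): Unit = {
        buf(off)     = ((value >> 8) & 0xFF).toByte
        buf(off + 1) = (value & 0xFF).toByte
      }
    }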

    Using floating point variables as loop counters is not only a bad idea in general, it does not work. For example, in JavaScript, this loop does not terminate:
    for (let x = 0.0; x != 1.0; x += 0.1) {
      console.log(x)
    }
    
    Not only that, it sails past 1000000 with values that are far away from the 0.1 steps one is expecting; the errors have been accumulating:
    999999.8998389754
    999999.9998389754
    1000000.0998389754
    1000000.1998389753
    
    Of course, JS only has float numbers as well. But it does guarantee that if you use integer values they will come out exact.

    Yes, they are still "magic numbers". There are some great advantages to naming them though.

    If a magic number ever changes it's easier to change its definition in one place than to scour the code looking for some particular numeric value.

    It makes it obvious to the reader what the magic number is and hence what the code is doing. For example:
    for (core = 0; core < NUM_CORES; core++)
    {
        for (thread = 0; thread < MAX_THREADS; thread++)
        {
            spawnThread(....)
        }
    }
    

    What if you have the same number, say "42", used for different things throughout your code? If you find you need to change one use of it to "43", you don't want to have to check all occurrences of "42", read the code and figure out if each needs changing or not. Far better to name your magic numbers. Then you have less work to do, and less chance of making a silly error:
    #define THE_MEANING_OF_LIFE 42
    #define SIX_TIMES_SEVEN     42
    
    Then in C++ we use "const" instead of "#define", so your magic numbers not only have a name but also a type that can be checked.
  • Heater. wrote: »
    Dolu1990,

    Thanks for the heads up on cocotb. I have used both icarus and verilator for creating tests. Looks like cocotb would be much nicer.

    Looks like I'll be removing use of tri-state logic in my efforts. You are not the first one to suggest that I do so.


    Hmm... I have no idea what IntelliJ version this is. The about box says that it is IntelliJ IDEA 2017.2.3, built in August this year.

    The highlighting I get is not red, just a bit shaded. It does no harm except it's a bit too shaded to read on my monitor and, well, looks messy.

    I think SpinalHDL needs a bit more advertising. It's such a wonderful thing more people should know!

    So cocotb is nice. We could have something equivalent to it in Scala, or even a Verilator wrapper in Scala; I would like to work on that but I haven't the time for it. I try to really focus on SpinalHDL itself.

    Personally, I don't like Python, it can turn into a nightmare so easily (not-small projects, multiple people working on a project, broken refactoring, no compilation check, very weakly typed -> comparing a double with a function pointer is completely fine: it returns false most of the time :P)

    About IntelliJ, you can disable those stupid highlightings ^^ It is really too picky.

    Then about SpinalHDL advertising. Let's assume SpinalHDL is perfectly implemented and has all imaginable features; even in this case, it will be rejected by a vast majority of people for many superficial and (in fact) pointless reasons. The hardware design world is really cruel ^^ People don't have the time / don't want to learn Scala/OOP/FP/SpinalHDL to do something that they think they are already able to do in Verilog/VHDL. To understand SpinalHDL and how far you can go with it, you really need those OOP/FP notions. That's why most of the time when I show SpinalHDL to people, they don't understand it, don't get the point, reject it, and generally just think it is a kind of VHDL/Verilog++ with an interface definition feature (which is really, really far from reality).

    So I don't know how to advertise it. I also sent many mails to university professors but they don't give any attention/answer to it, and generally they try to educate their students for the industry, and the industry is using VHDL/Verilog/SystemVerilog ^^. It's really a closed situation.
  • Heater. Posts: 21,230
    I guess you are right, Dolu. I think I'd rather your time went on perfecting SpinalHDL. For my own selfish reasons of course :)

    Oh yeah, Python is really annoying. It's not the weak types that bug me, after all I love JavaScript under node.js. No, it's the brain-dead syntax. It's the lack of an event-driven programming model. Hmmm... I wonder if that would be helpful in creating test benches?

    I wish I could find out how to disable that silly highlighting in IntelliJ.

    I'm sorry to hear your problems in promoting SpinalHDL.

    I can imagine the existing hardware design world is cruel. Those guys have invested years learning their Verilog and VHDL and whatever tooling goes with it. They don't want anyone making easy-to-use design systems that threaten their job security! They are not software engineers, so they don't get the point. The HDL industry obviously does not want anyone eating into their revenue with new easy-to-use, open-source tools.

    It's rather like back in the day when the first high level programming languages arrived. Fortran, Ada, etc. The old guard of software engineers who used assembler did not see the point and sneered at it. It took a whole new generation of programmers to arrive to overthrow all that.

    On the bright side, the University of California, Berkeley is seriously into Chisel for hardware design courses. They realize that they have more students who understand software engineering than hardware. Students that would turn their noses up at the low level of Verilog. I know you were not happy with Chisel but it shows a sea change can happen.

    Also, I was looking at this from a different angle. Not industry or even academia. I was thinking more grass roots and alternative. The hobbyists, for example: there are now a lot of cheap FPGAs out there which would be much easier for people to get into if they did not have to use Verilog or VHDL. Who would have imagined a few years back that millions of people would be using 8-bit micro-controllers? They are complicated and need special tools. Well, now they hack away in C++ on Arduinos very happily.

    Then there are the software engineers who might like to accelerate some part of their projects with FPGA. But they are not hardware designers so they don't. Give them a language they understand and they might fly with it.

    An example of that is Microsoft's Catapult. FPGA in your cloud servers. Would be great if anyone knew how to program it!
    https://www.wired.com/2016/09/microsoft-bets-future-chip-reprogram-fly/

  • cgracey Posts: 14,133
    I think the reason VHDL / Verilog / SystemVerilog prevail in the industry is due simply to risk management. It costs so much money to turn a chip that zero risk is tolerated in the tool system. Risk is consolidated to the design, only.

    If it became way cheaper to make chips, I'm sure better tool methodologies would be adopted, as more risk could be tolerated.
  • Heater. Posts: 21,230
    Exactly so.

    There was a time when a program might build up a big data structure in RAM. It would save that data to disk by blindly writing out all the memory bytes from the start of the structure to the end.

    Fine.

    Until you compile the program for a machine with a different endianness. Say moving from PC to old Mac. Then everything is totally corrupted. Then comes the #ifdef BIG_ENDIAN. So that the new build of the program can untangle the mess.

    Except the bug here is that the file was written and read from disk wrongly in the first place.

    I had a bit of a fight with a developer about exactly this problem years ago when his code failed on moving to Motorola from x86. This was writing network packets but the issue is the same. He insisted on putting the #ifdef in there and adding a bunch of ugly code to reverse things on reception. Grrr...

  • Ale Posts: 2,363
    There was a time when a program might build up a big data structure in RAM. It would save that data to disk by blindly writing out all the memory bytes from the start of the structure to the end.

    Wait, you mean that was/is a bad idea?... Glad to know that the people at M$ never heard of it! The Word empire would collapse!
  • cgracey wrote: »
    I think the reason VHDL / Verilog / SystemVerilog prevail in the industry is due simply to risk management. It costs so much money to turn a chip that zero risk is tolerated in the tool system. Risk is consolidated to the design, only.

    If it became way cheaper to make chips, I'm sure better tool methodologies would be adopted, as more risk could be tolerated.

    I agree about the risk management. A full mask set can cost millions of dollars and the process of getting a chip built takes months.

    Add on top of this that you're going to be working at the Verilog level regardless of using a front-end like SpinalHDL. For example: debugging simulations, formal verification (proving assertions, doing formal compares), using existing IP, patching bugs at both the RTL and gate level, inserting test logic, creating various test vectors,...