
Is it time to re-examine the P2 requirements ???


Comments

  • jmg Posts: 15,155
    edited 2015-03-03 10:30
    Heater. wrote: »
    Since we have been waiting on the P2 we have had:

    1) The Raspberry Pi

    2) The new Raspberry Pi, multi-core and all.

    3) The Espruino and Micro Python, or even just the STM32 F4.

    4) Recently, the tiny ultra-cheap ESP8266 WiFi devices, which are also powerful processors in their own right. No extra MCU required.

    5) Heck, today I'm ordering an 8-core 64-bit ARM device for a hundred bucks or so.

    6) And so on.

    Since you are making a list, I would add another item that 'comes from below'. While it does not displace a P1 or P2 (it can work very well with a P1 or P2), it does change the dynamics of choice for a new user.
    7) Silabs EFM8BB1 - from $0.21 @ 10k, for 2KF and serious peripherals: 12b ADC/PWM/SPI/I2C/UID...

    Looks like it can run at 3 & 4 MBd as a slave, no crystal needed. It could replace the crystal in a P1 design.

    Heater. wrote: »
    Problem is, those unique application areas are vanishing rapidly.
    Maybe
    Heater. wrote: »
    Hence my suggestion to abandon ship and adopt the RISC V instruction set. Get in with the new frontier of open standard hardware that I strongly suspect RISC V will foster. Wrap the RISC V around with Propeller I/O simplicity and goodness.
    Interesting idea, but before this advances any further, some real-world (simulation is OK) numbers on
    a) MHz of RISC V in the P2 process
    b) Die area needed in the P2 process

    would indicate just what such a core could bring.
    The FPGA point mentioned above suggests this is rather a large core, in P1/P2 terms.
    Certainly it does not look like more than one is practical at 180nm, but perhaps an asymmetric design with 1 x RISC V and multiple P2 COGs could have a niche. It depends greatly on those numbers above.
  • mindrobots Posts: 6,506
    edited 2015-03-03 10:37
    KC_Rob,

    You are correct, Heater has just enumerated some of the things he (and I) have fallen in techno-lust with since the advent of the P2. Item #6 covers a multitude of other distractions.

    One of the fun things about the P1 (which the P2 is losing) is the ability to take a DIP and a couple of discrete parts, plug them into a breadboard, and start creating and playing. The go-to chip for me now in that respect is the Micromite MkII, a PIC32MX170 loaded with Micromite BASIC, all wrapped up in a 28-pin DIP. Add a capacitor and a $5 USB/serial connector and you're back to that clean fun of toys on a breadboard!

    Seriously, the ESP8266 - a low-power Tensilica-based micro with WiFi! Once everyone figures out how to program it, there's your IoT solution for almost everything.

    ...as you said, that barely scratches the surface.



    (So, Heater, what new toy did you order?? I'm already tapped out for March, April, and May but I can put it on the toy list!)
  • David Betz Posts: 14,511
    edited 2015-03-03 10:41
    Dave Hein wrote: »
    The P2 will be whatever Chip thinks it should be. This thread is somewhat pointless because I think that Chip is ignoring the forum these days, and is busy working on getting the FPGA image out ASAP. I'm sure that Ken and Chip know that the future of Parallax depends on the P2, and I think they are serious about having the get-together in the Fall and having a complete P2 FPGA image by that time. As I said before, in order to achieve this goal they need to have an initial P2 FPGA image by June. It could come sooner, but I'm expecting it in June.
    Yes, I know that Parallax will never jump on the RISC V bandwagon. Maybe that's actually for the best for those of us who like the unique features of the P1 and hope to see something similar but even better in the P2. However, are we really a big enough market to make the P2 a success? Maybe we are. In any case, we'll have something interesting to play with when the P2 comes to life.
  • KC_Rob Posts: 465
    edited 2015-03-03 10:50
    mindrobots wrote: »
    KC_Rob,

    You are correct, Heater has just enumerated some of the things he (and I) have fallen in techno-lust with since the advent of the P2. Item #6 covers a multitude of other distractions.

    One of the fun things about the P1 (which the P2 is losing) is the ability to take a DIP and a couple of discrete parts, plug them into a breadboard, and start creating and playing.
    Agreed. Not only fun but practical. There's already plenty of complicated, often unwieldy, "stuff" out there. I say that, generally, the more KISS the better.
  • Heater. Posts: 21,230
    edited 2015-03-03 11:11
    @Dave,
    This thread is somewhat pointless...
    Agreed. But whilst we are waiting...

    @jmg,
    Interesting idea, but before this advances any further, some real-world (simulation is OK) numbers on
    a) MHz of RISC V in the P2 process
    b) Die area needed in the P2 process

    would indicate just what such a core could bring.
    Yes, but, that is the whole thesis of the RISC V concept. In the modern world the instruction set is basically irrelevant when it comes down to determining the speed or power consumption of your system. When you take into account process technology, clock speed, compiler optimization, the algorithms used, skill of the programmer, etc, etc you find the instruction set used does not matter.

    Ergo, it's a good idea if we all just agree on one and use that. Skip the closed, single-source Intel and ARM etc.

    @mindrobots,
    One of the fun things about the P1 ... is the ability to take a DIP and a couple discrete parts...and start creating and playing.
    That is a critical observation. One huge reason I reach for DIP Props.
  • jmg Posts: 15,155
    edited 2015-03-03 11:44
    Heater. wrote: »
    Yes, but, that is the whole thesis of the RISC V concept. In the modern world the instruction set is basically irrelevant when it comes down to determining the speed or power consumption of your system. When you take into account process technology, clock speed, compiler optimization, the algorithms used, skill of the programmer, etc, etc you find the instruction set used does not matter.
    That was my point: the process is an important pivot here, and a RISC V may not be viable on 180nm - so those numbers are critically important.
  • jmg Posts: 15,155
    edited 2015-03-03 11:47
    Heater. wrote: »
    It should be replaced by the RISC V instruction set architecture as specified by the University of California, Berkeley.

    At its base level RISC V is only 40 simple instructions. RISC V already has GCC support. RISC V supports extensions to the ISA - for example, whatever special things a P2 hardware needs. RISC V scales to 64 and even 128 bits, which will be in demand in the not-so-distant future. RISC V has the backing of many companies in embedded, mobile, and phone development that want to get away from Intel and ARM. An example of RISC V, the lowRISC, will become available via some guys involved with the Raspberry Pi Foundation in a year or so.

    See here: http://riscv.org/
    Hehe, looking at that link, I see it already has the same 'single instruction set' mirage as ARM.
    ARMs are not binary compatible across the huge number of cores hidden under the 'ARM' brand either.
  • mindrobots Posts: 6,506
    edited 2015-03-03 12:05
    Heater!! What have you done to us????

    RISC-V has softcores written in Chisel
    Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages.

    Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference.

    Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.

    RISC-V ----> Chisel ----> Scala ----> .jar files ----> JVM

    You've infected my machine!!!!! :o)

    If I ever get to Helsinki, you're in BIG trouble!!
  • jmg Posts: 15,155
    edited 2015-03-03 12:18
    Yes, that Chisel tool sounds very interesting.

    There is more on RISC V here:
    http://en.wikipedia.org/wiki/RISC-V

    and this sounds brave - no carry bit?! That is going to affect a LOT of things:
    "RISC-V intentionally lacks condition codes, and even lacks a carry bit. The designers claim that this can simplify CPU designs by minimizing interactions between instructions. Instead, RISC-V builds comparison operations into its conditional jumps. Use of comparisons may slightly increase its power usage in some applications. The lack of a carry bit complicates multiple-precision arithmetic. RISC-V does not detect or flag most arithmetic errors, including overflow, underflow and divide by zero. RISC-V also lacks the "count leading zero" and bit-field operations normally used to speed software floating-point in a pure-integer processor."

    Not sounding like a very embedded-centric core to me - poor boolean and byte support. Academics think everything is 32 bits?
    Wonder how long before they add another of their "Standard Extensions" to fix this?
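
    To make the trade-off concrete, here is a minimal C sketch (an illustration, not anything from the spec) of multiple-precision addition on a machine with no carry flag. The carry out of each word is recovered with an unsigned compare - the kind of compare a RISC-V compiler turns into a single sltu:

    #include <stdint.h>

    /* Add two n-word unsigned numbers (least significant word first)
       without a carry flag.  After an unsigned add, the carry out is
       just (sum < addend), so each word costs a few extra instructions
       instead of a single adc. */
    static void add_wide(uint32_t *r, const uint32_t *a,
                         const uint32_t *b, int n)
    {
        uint32_t carry = 0;
        for (int i = 0; i < n; i++) {
            uint32_t s = a[i] + carry;
            uint32_t c1 = (s < carry);   /* carry from adding carry-in */
            s += b[i];
            uint32_t c2 = (s < b[i]);    /* carry from adding b[i] */
            r[i] = s;
            carry = c1 | c2;             /* at most one can be set */
        }
    }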
  • rod1963 Posts: 752
    edited 2015-03-03 12:32
    My bet is that the lowRISC designers are expecting the "minion processors" to do the bit banging and grunt work the way the PRUs on the Sitaras and the TPUs on the MC68332s (and now PPCs) handle it.
  • jmg Posts: 15,155
    edited 2015-03-03 13:23
    rod1963 wrote: »
    My bet is that the lowRISC designers are expecting the "minion processors" to do the bit banging and grunt work the way the PRUs on the Sitaras and the TPUs on the MC68332s (and now PPCs) handle it.

    You may be right - in which case we then need a standard for "minion processors" ;)
    The 8051 would be one, but I suspect it is larger than a P1 COG.
  • David Betz Posts: 14,511
    edited 2015-03-03 13:24
    jmg wrote: »
    You may be right - in which case we then need a standard for "minion processors" ;)
    The 8051 would be one, but I suspect it is larger than a P1 COG.
    The minion processors also use the RISC V instruction set, I think, although maybe with extensions.
  • David Betz Posts: 14,511
    edited 2015-03-03 13:29
    Here is what they say about the lowRISC chip and its minion cores:
    The system contains two superscalar RISC-V cores and a larger number of smaller RISC-V cores or “minions”. The minion cores have direct access to external I/O pins through a thin layer of logic or “I/O shim” which gives hardware support for basic tasks such as shifting data in or out efficiently.
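
    To picture what that "I/O shim" saves, here is a rough C sketch of the software-only alternative - the bit-banged shift-out loop a minion core would otherwise sit in. The register name, address, and pin masks here are hypothetical, not from the lowRISC docs:

    #include <stdint.h>

    #define GPIO_OUT (*(volatile uint32_t *)0x40000000u) /* hypothetical port register */
    #define PIN_DATA (1u << 0)
    #define PIN_CLK  (1u << 1)

    /* Shift one byte out MSB-first, toggling the clock by hand.
       An I/O shim moves this inner loop into logic, so the core
       only has to feed it bytes. */
    static void shift_out(uint8_t value)
    {
        for (int i = 7; i >= 0; i--) {
            uint32_t d = ((value >> i) & 1u) ? PIN_DATA : 0u;
            GPIO_OUT = d;           /* present data bit, clock low  */
            GPIO_OUT = d | PIN_CLK; /* clock high: receiver samples */
        }
        GPIO_OUT = 0u;              /* return to idle */
    }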
  • Heater. Posts: 21,230
    edited 2015-03-03 14:56
    @jmg,
    .. it [RISC V] already has the same 'single instruction set' mirage as ARM. ARMs are not binary compatible across the huge number of cores hidden under the 'ARM' brand either
    I'm curious as to what you mean by "mirage".

    "RISC" certainly is a mirage in ARM. That thing has multiple instruction sets, including two different forms of compressed (Thumb) instruction sets. The latest chips support all of these. The instruction set manual runs to three thousand pages or so!

    RISC V does not care about "Reduced Instruction Set" per se. Rather the aim is to arrive at a common standard instruction set. One that anyone can build a machine for without any licensing deals. As they say, the instruction set is the most important interface in computing. The interface between software and hardware. We have international standards for all kinds of things, programming languages, network protocols, etc, etc. Yet not the instruction set. It's time there was such a standard.

    To that end the aim is simplicity. Instead of thousands of pages of ISA manual the whole thing can be summarised on one page!
    Not sounding like a very embedded-centric core to me - poor boolean and byte support. Academics think everything is 32 bits?
    That may well be true.

    At least, looked at from the old perspective. Today a 32-bit CPU is very fast, very cheap, very small, and very low in power consumption. It may not have optimal support for "bit twiddling", but it will outrun everything that came before anyway, so why bother with the complication of supporting "odd" instructions?

    Yes, RISC V omits the carry bit and overflow detection etc. Basically, there is no point putting anything in the spec that high-level-language compilers do not use and which makes the chip more complex.

    There is of course nothing stopping users of the spec adding such extensions.

    Those academics do at least build a lot of actual chips.
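
    For what it's worth, the check a compiler generates on a flag-less target is cheap anyway. Here is a minimal C sketch (one standard trick, not anything mandated by the RISC V spec) of detecting signed-add overflow without a status register:

    #include <stdbool.h>
    #include <stdint.h>

    /* Signed addition overflows exactly when both operands differ in
       sign from the result.  The arithmetic is done unsigned so the
       wraparound itself is well-defined C. */
    static bool add_overflows(int32_t a, int32_t b, int32_t *sum)
    {
        uint32_t ua = (uint32_t)a, ub = (uint32_t)b;
        uint32_t us = ua + ub;
        *sum = (int32_t)us;
        return (((ua ^ us) & (ub ^ us)) >> 31) != 0;
    }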
  • rod1963 Posts: 752
    edited 2015-03-03 15:07
    If you look at Freescale's eTPU microengine, it's not anything like the lowRISC's proposed minion processor. In some respects they are reminiscent of Chip's cogs, with a hardware task scheduler and threads, and a lot of hardware piled on top.

    Here's a description of the eTPU2 (taken from the MPC5676R Reference Manual, section 21) - there's more to them than this. The MPC5676R is a dual-core PPC micro designed to control automotive systems and is an I/O beast.

    Event-triggered VLIW processor (microengine):
    — 2-stage pipeline implementation (fetch and execution), with separate instruction memory (SCM) and data memory (SDM) - a Harvard architecture
    — fixed-length instruction execution in two system clock microcycles
    — interleaved SCM access in the dual eTPU engine avoids contention in time for instruction memory
    — SCM address space of up to 16K positions (64 KB)
    — SDM with interleaved access in the dual eTPU engine avoids contention for data memory
    — SDM address space of up to 8 KB (both engines)
    — instruction set with embedded channel support, including specialized channel control subinstructions and conditional branching on channel-specific flags
    — channel-oriented addressing: channel-bound address mode with host-configured channel base address allows channel data isolation, independent of microengine application code
    — channel-bound data address space of up to 128 32-bit parameters (512 bytes)
    — global parameter address mode allows access to common channel data of up to 256 32-bit parameters (1024 bytes)
    — support for indirect and stacked data access schemes
    — parallel execution of data access, ALU, channel control, and flow control subinstructions in selected combinations
    — 32-bit microengine registers and 24-bit resolution ALU, with 1-microcycle addition and subtraction, absolute value, bitwise logical operations on 24-bit, 16-bit, or byte operands; single-bit manipulation, shift operations, sign extension and conditional execution
    — additional 24-bit Multiply/MAC/Divide unit which supports all signed/unsigned Multiply/MAC combinations and unsigned 24-bit Divide. The MAC/Divide unit works in parallel with the regular microcode commands
  • brucee Posts: 239
    edited 2015-03-03 15:21
    I'll continue the pointlessness here then, at least until Heater sends me one of his P2's :)

    Back to why I am here. As it is apparent that a P2 won't be around this year, maybe not even the next, what should Parallax do in the meantime? As has been brought up, the P1 makes a good hobbyist tool mostly because it is almost a single chip and comes in a DIP form factor. How about the LPC1114, which is available in a DIP28? (The LPC812, as a DIP8, may be too limited.) These two are single chips - no external flash - and each has internal supplies and internal trimmed oscillators.

    So how to differentiate? Eclipse (the open-source dev system) is bloated, slow, complex... The others are limited in time or program size, and are also bloated and complex. Parallax has a simple IDE that works in C, and GCC already supports the LPC1114. Mate the two together.
  • jmg Posts: 15,155
    edited 2015-03-03 15:28
    Heater. wrote: »
    There is of course nothing stopping users of the spec adding such extensions.
    True, they already have a lot of "Standard Extensions", so I expect more will come....
    Of course, that myriad of extensions rather contradicts the holy grail of an 'international standard for the instruction set'.... :)
    Given the HLL hides the opcodes anyway, I'm not sure how important an 'international standard for the instruction set' is.

    I have no idea what the latest extended opcode list from Intel is, yet I use their cores every day.
  • Heater. Posts: 21,230
    edited 2015-03-03 15:30
    mindrobots,

    (So, Heater, what new toy did you order?? I'm already tapped out for March, April, and May but I can put it on the toy list!)

    Actually, I lied - I haven't got the order in yet. What I'm after now is the beast: 64-bit ARM, 8 cores, 1.2GHz, 1GB RAM, credit-card sized. The HiKey ARM board:
    https://www.96boards.org/
    https://www.96boards.org/products/hikey/

    Supposedly available at Avnet and Arrow. Arrow have no stock, it seems, but I was hoping to talk a friend of a friend at the local Arrow office into getting me one ASAP.
  • jmg Posts: 15,155
    edited 2015-03-03 15:35
    brucee wrote: »
    So how to differentiate? Eclipse (the open-source dev system) is bloated, slow, complex... The others are limited in time or program size, and are also bloated and complex. Parallax has a simple IDE that works in C, and GCC already supports the LPC1114. Mate the two together.

    I'm lost? You are saying Parallax should switch SimpleIDE to the LPC1114, and not sell any more P1's?
    DIP packages are a dying niche; for low-volume development there are PCB modules with DIP form factors - there is no real need to encapsulate dies into large DIPs anymore.


    what should Parallax do in the meantime?
    Not become an NXP sales vendor.
    To me it makes more sense for Parallax to do a [P1 & FPGA module], which allows their software infrastructure to continue and seeds designs for the P2, as well as allowing for P2 Verilog testing.
  • brucee Posts: 239
    edited 2015-03-03 15:58
    You are saying Parallax should switch SimpleIDE to the LPC1114, and not sell any more P1's?
    Basically, yes. They could continue to sell the few they do, and continue waiting for a vaporware P2.

    The P1 is slow, expensive, and 10 years old. Yes, DIPs are a throwback, but they are very easy to use in the DIY community.

    I just don't see a P1/FPGA module making much sense outside a very small community, unless Parallax can tackle the Verilog side and make it simple for DIY/educators, and then make it cheap enough.

    SoCs are a commodity now, so I don't care whether NXP or Freescale or Microchip is the supplier, much as I don't care who supplies the resistors or capacitors. Spending $5M to develop a chip that might sell a million units over its lifetime really doesn't make much business sense when you can buy chips with similar capabilities for less than $5 in volume.
  • Heater. Posts: 21,230
    edited 2015-03-03 16:01
    jmg,

    At first sight the idea of an instruction set standard designed with extensibility in mind is a worry. Surely things would run out of control as companies add their own incompatible bells and whistles?

    On reflection, I don't think that would happen very much.

    Given there is such an adopted standard, and given there are all the software tools available to go with it - compilers, operating systems, debuggers, etc. - and given that you are a chip foundry or SoC vendor, you are not about to start adding random junk of your choosing to the instruction set. The cost of providing the software tools would be too great and adoption would be low. There would be no point.

    On the other hand, for extensions that everyone feels the need for, e.g. floating point support, extensions will be developed by the user base (the chip fabs and SoC vendors) as a communal, mutually beneficial effort. Those extensions will become a common standard themselves.

    One can already see how this plays out in the development of GCC and Linux and so on in that world. Why not the very chip you are running on as well?
    Given the HLL hides the opcodes anyway, I'm not sure how important an 'international standard for the instruction set' is.
    If you are a chip fab or SoC vendor it's becoming very important. For your next whizzy mobile or Internet of Things device you need a CPU in there. You need all the software tools to support it. What to do?

    1) Use an Intel instruction set. There are lots of tools for that - GCC, Linux, etc. Not going to happen: Intel won't license it to you, and they will sue your butt if you use their ISA without permission.

    2) Use an ARM core. Fine, but just negotiating that license is expensive and time-consuming. As chips get cheaper and cheaper, the license looks more and more expensive.

    3) Use MIPS, or design your own. Not really viable.

    What might the chip fab and SoC vendors' solution be? Let's all get together behind an open standard and build to that. No licensing hassles. Common software support. Easy.
    I have no idea what the latest extended Opcode list from intel is, yet I use their cores every day.
    As do we all. But can you take the Windows apps that you have bought and paid for and run them on ARM?

    Almost every day a question comes up on the Raspberry Pi forum asking how to install WINE so someone can run some Windows program. Can't be done: their program is for x86, the Raspi is ARM, and WINE is not an emulator.

    We would all benefit from a common instruction set even if we never get down and use it ourselves. It opens up the market and releases us from the Intel lock-in. In the ever-growing mobile/IoT world it releases us from ARM.
  • kwinn Posts: 8,697
    edited 2015-03-03 16:08
    Heater. wrote: »
    Dave,

    Yes. How did you guess? It's an All Relay Machine. :)

    Would those be mechanical or optical relays?
  • Heater. Posts: 21,230
    edited 2015-03-03 16:08
    Bruce,
    Parallax has a simple IDE that works in C, and GCC already supports the LPC1114. Mate the two together.
    What a lovely idea. That would be sweet.

    No way for Parallax to make money out of it, though. SimpleIDE is open source and the LPC is not their product.

    Besides, it's effectively been done already, has it not: http://developer.mbed.org
  • jmg Posts: 15,155
    edited 2015-03-03 16:11
    @Heater,

    GCC can already support cores available now on OpenCores, so I see all of that more as a 'critical mass' issue, and not so much to do with 'standards'.
    Certainly it is nice to have enough 'critical mass' behind an available design that it is easy to deploy, but I can also see that the more 'extensions' are added, the more that critical mass is diluted.

    Does anyone have fitter reports on RISC V in an FPGA that also runs the P1V, so we can compare size/speed?

    I found this:
    Xilinx XC6VLX240T-1FFG1156 FPGA
    RISC-V mapping and place-and-route:
    LUTs    DSP48s    BRAMs    Max Freq
    5570    3         5        91 MHz
  • mindrobots Posts: 6,506
    edited 2015-03-03 16:22
    Fitter reports from the same FPGA running both will take a while. RISC-V only runs on the Xilinx Artix as far as I've seen; no P1V yet.

    Edit: Sorry, Xilinx Zynq 7000 series, my bad! The other FPGA I ordered is the Artix.
  • rod1963 Posts: 752
    edited 2015-03-03 16:29
    Heater

    The BS2 line is based on Microchip MCUs, yet they are the prime money maker for Parallax. They could have easily upgraded the BASIC Stamp line to the LPC1114 or PIC32 and kept the product line very competitive. Both would have made great BS3's.

    The thing is, Parallax is more of a VAR - you don't buy from them because they are the cheapest, you buy their stuff because of the support you get. That's still worth a lot to many hobbyists and educators.
  • Heater. Posts: 21,230
    edited 2015-03-03 16:31
    jmg,

    The most likely candidate on OpenCores for an open instruction set architecture is the OpenRISC. As it happens, the OpenRISC is basically a processor architecture designed by the same guys at UCB behind RISC V.

    Quite why they decided against writing the OpenRISC into their proposed standard I don't know.

    Just to be clear, this is not about a particular processor design or FPGA core implementation. Their proposed RISC V standard is only a specification for an instruction set; it does not care how you design your core to support it.

    As I said, I don't see those extension possibilities diluting anything. It's not in a chip/SoC vendor's best interest to go it alone and do that. But I can imagine that common, useful extensions may well arise collaboratively, as happens in the Linux kernel world.

    I have no idea how big RISC V is in an FPGA yet, but you can download the RISC V Rocket core and all the software to support it from here: http://riscv.org/download.html#tab_rocket_core. Seems it fits in a Zynq chip anyway..... Hey, I happen to have a Zynq board....
  • Heater. Posts: 21,230
    edited 2015-03-03 16:42
    rod1963,

    The BS2 is a historical relic, isn't it?

    Seriously, I have no idea; I don't even know what a BS2 really is. I've certainly never seen anyone using one in decades of hanging around the embedded industry and mixing with hobbyists, and I haven't programmed in BASIC seriously since about 1975.

    I do agree with the idea of Parallax as a VAR, specializing in hobby and educational electronics.

    Which then poses the hard question: If that is their core business, core competency, why invest millions in a chip design?

    Don't get me wrong. I'm very glad there are people around like Chip and Ken who do such crazy things. And I'm still looking forward to the P2 "Chipmas".
  • mindrobots Posts: 6,506
    edited 2015-03-03 16:56
    A Zynq chip with a dual-core Cortex-A9 on it! It appears to use that for a Linux front end. So much for "simple" at this stage of the game. The Zybo Zynq board may show up in time for the weekend, but more typically Monday.... curse that day job!!!
  • jmg Posts: 15,155
    edited 2015-03-03 17:05
    rod1963 wrote: »
    The BS2 line is based on Microchip MCUs, yet they are the prime money maker for Parallax. They could have easily upgraded the BASIC Stamp line to the LPC1114 or PIC32 and kept the product line very competitive. Both would have made great BS3's.

    That is a very good point - aside from all of the P1 and P2 activity, there is great scope to bump the BS2 -> BS3.
    The best target MCU would be one of the newer 1.8~5.5V ARMs, as that avoids 3V3 issues in retrofit apps.
    IIRC Infineon have the best-looking peripherals @ 1.8~5.5V, and good supply lines. Some Asian vendors also have 5V ARMs, but perhaps without the long-term supply performance of Infineon.

    I see Infineon have added 4mm QFN24 and 5mm QFN40 packages up to 128kF, and these parts have QuadSPI, so a nifty BS3 could result.