Prop II: Speculation & Details... Will it do what you want??? - Page 13 — Parallax Forums



Comments

  • LeonLeon Posts: 7,620
    edited 2011-05-11 07:31
    Is FreeBASIC relevant to embedded development?
  • davidsaundersdavidsaunders Posts: 1,559
    edited 2011-05-11 07:36
    Leon:
    No it is not, though neither is this old school thing (VB) that you keep mentioning. Unfortunately I am not aware of any complete BASIC compilers that are good for embedded development. Perhaps PropBASIC could become one if the effort is put into it to make it a complete BASIC Compiler.
  • ctwardellctwardell Posts: 1,716
    edited 2011-05-11 07:38
    Mini-Rant...

    There are a lot of smart people on this thread, wasting time.

    C.W.
  • localrogerlocalroger Posts: 3,451
    edited 2011-05-11 08:19
    Leon wrote: »
    I've never seen BASIC used for software development in any of the companies I've worked for.

    Then you occupy an interesting niche. For a period of about 8 years VB was the only low barrier to entry solution for writing Microsoft Windows apps. When Windows first came out the example "Hello, World" program was two pages long. VB reduced that to one line and some drag and drop.

    By the time other solutions became available VB was very entrenched. With VB4 it became compiled and despite lacking pointers was fairly fast. It was also an extremely safe language with full array and string length bounds checking, nearly invisible garbage collection, seamless handling of even very long variable-length strings, and an extremely solid IDE. Up until the day it was deprecated in favor of the visibly inferior and incompatible .Net, VB was the one language I could count on any customer being able to work with if I needed to collaborate on code.

    You would not want to use VB to write a game or an operating system but most of the world's software isn't games or operating systems, and when most of the heavy lifting is being done by the OS through its API anyway the rapid development for what is otherwise a very difficult environment becomes much more useful than the advantages offered by other languages. For years VB was therefore the most popular development tool for the most common OS in the world. I know I have seen statistics putting its penetration at over 50% in the early 2000's. If you've never had occasion to work with it that is quite a remarkable feat of avoidance.
  • Heater.Heater. Posts: 21,230
    edited 2011-05-11 09:15
    localroger,
    If you've never had occasion to work with it that is quite a remarkable feat of avoidance.

    Not remarkable at all. I've never seen VB used for real work either. Perhaps that's because I'm over here in Europe or perhaps it's because I've mostly been immersed in large embedded systems work for big technology companies.

    Had I worked on business software perhaps things would be different.

    Either way, I can't see what relevance it has nowadays in the embedded world that the Propeller occupies, and at a time when cross-platform apps are a good idea (as if they ever were not).
  • Dave HeinDave Hein Posts: 6,347
    edited 2011-05-11 09:31
    I've never worked with VB either. It's probably a function of when people learned computer programming and what the colleges were teaching at the time. I grew up on Fortran and then transitioned to C, with a short period of RatFor in between. Most of my work has been in image and signal processing on embedded processors, where C seems to dominate.
  • davidsaundersdavidsaunders Posts: 1,559
    edited 2011-05-11 09:48
    I may be completely off base here, though:

    I do not think that many waited till college to learn to program. Of those of us under about 45, I do not even think that most waited till we were 8 years old.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-05-11 09:49
    My first experience with BASIC was on a timeshare system via a Teletype terminal and an acoustic coupler to the phone. It was pretty much pure Dartmouth BASIC, with no frills. Next came a variant of Hewlett Packard BASIC on my Poly 88 8080 machine. I switched to Microsoft BASIC when I "upgraded" to a TRS 80. I liked the HP BASIC better, but my MS BASIC programs ran not only on the TRS 80, but on a later CP/M machine, and later still under QuickBASIC with MSDOS. I still have occasion to run those original TRS 80 programs (3D geometry and G-code generator for milling fishing lure molds) on WinXP, but they require a DOS window emulator for the QuickBASIC editor.

    Later, I used VB3 to develop a machine vision IDE for Win3.1 and was quite satisfied with it at the time. But, by the time Win95 came around, the VB3 windows were looking pretty dated. I later investigated VB6 and was horrified by its bloated size and unnecessary complexity, so I never used it. 'Been using Perl/Tk ever since for Windows (and Linux) programming. Perl/Tk also runs on OS/X, but requires the X11 windows manager. I like Perl's elegant string-handling capabilities, above all else, and its vast module libraries. Once you get the hang of it and develop good programming habits, it really does make the hard things easy.

    -Phil
  • Sal AmmoniacSal Ammoniac Posts: 213
    edited 2011-05-11 10:11
    I have also never used BASIC professionally. I've always worked in the embedded industry, and BASIC is just not a big player in this field. When I started almost thirty years ago, embedded development was almost exclusively assembly language. As embedded processors started getting faster and larger, development gradually moved to C, which is now the predominant language by far. C++ is a distant second. Other languages are used so seldom that they don't really appear on the radar screen.

    BASIC, primarily VB, is very popular for writing applications used internally by companies large and small. Since this is where the majority of overall software development takes place, it's not surprising that VB has such a large market share.

    Shrink wrapped applications, however, are still primarily written in C++, not VB.
  • D.PD.P Posts: 790
    edited 2011-05-11 10:14
    PicBasic Pro and RealBasic from Real Software: enough said about professional-level BASIC compilers. Please get back to the subject of this thread.

    dp
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-05-11 10:16
    Please get back to the subject of this thread.
    Aye, aye, sir! I've said about everything I'm gonna say. :)

    -Phil
  • Heater.Heater. Posts: 21,230
    edited 2011-05-17 08:19
    Prop II: Too little, too late???

    Yes.

    Compared to this monster: http://www.hotchips.org/uploads/archive22/HC22.23.310-Brown-PowerEN%20Presentation%2028July2010.pdf

    16 Cores
    64 hardware scheduled threads.
    64 bit Power PC architecture
    Hardware accelerators for: Crypto, compression, XML parsing, Regular expressions.
    1.43 Billion transistors!
    1.75GHz operation.
    Three 2.5GHz links for multiple chip solutions. Up to four chips with 256 coherent threads.
    Floating point (of course)
    Four 10Gb/s Ethernet MACs.

    If only they could put that in a big DIP package. Oh, and I can't see any way to wiggle general-purpose I/O pins.
    Seems to consume about 60 amps at 1 volt at full tilt!!!

    If my boss's plans come together we will soon be porting our core applications to that device.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-05-17 08:27
    Why in ---- would they waste silicon on an XML engine? I can see crypto and regex acceleration, but XML?!!

    -Phil
  • potatoheadpotatohead Posts: 10,254
    edited 2011-05-17 09:10
    What a monster.

    As for implementing XML in silicon, wouldn't that have the same justification as anything else does? E.g. multiply requires many instructions, or just one.

    If the device is to stream XML, then that silicon would make sense. Seems awfully thick though. Lots of more compact ways to stream data...
  • Heater.Heater. Posts: 21,230
    edited 2011-05-17 09:14
    Good question. It occurred to me too.

    Apparently it's all to do with SOA:
    Service Oriented Architecture Acceleration: XML appliances help manage SOA functions that sit between the network infrastructure layer and the application infrastructure layers. IBM's PowerEN processor can help these network devices secure and accelerate XML and Web services deployments. With its multi threading and XML accelerator, the PowerEN can parse, hash, perform schema validation and cache Web traffic, resulting in faster data delivery.
    Also this from a PowerEN white paper:
    Processors and threads alone are not enough to keep up with wire-speed computing across multiple 10G ports. PowerEN uses embedded hardware accelerators as a power-efficient way to deliver performance for standard functions:
    • Host Ethernet acceleration for network protocol processing
    • Encryption / Decryption acceleration
    • Pattern Matching acceleration [7]
    • XML acceleration [8]
    • Compression / decompression acceleration [9]
    PowerEN accelerators are designed for high throughput as shown in Table I. They can be activated directly from user space, using virtual memory addresses, which is key for application developers ease of use. Accelerators further support virtual functions to enable quality of service; state preservation 3 and occasionally complementary software for the more complex XML and RegX accelerators. The accelerators can be used in combination to achieve certain function at very high throughput; for example, achieve 100% deep packet inspection at 40Gbps, which is a key capability for security workloads.
    Also this:
    Using an XML accelerator may make it more cost efficient to store and query XML documents in a database [30] instead of storing preprocessed information in a relational database.
    Turns out the XML engine is not such a big area of the chip. See attached pic.

    Just so happens that our core apps generate a huge amount of real-time data in XML. I have been campaigning to replace this with some more efficient scheme like Google's Protocol Buffers. To no avail so far. Seems the "solution" might end up being throwing silicon at it.

    All that data ends up in an SQL database, which is going to get slower and slower as the data accumulates into piles of gigabytes. Given the way we finally use the data, it may be as well just to dump most of it into a file as XML and let that hardware parser deal with it.

    That only leaves the issue of actual network bandwidth wasted in using XML. But then that is only a small fraction of the Gb/s they are looking forward to.

    If our plans work out we are looking to scale our current small field trial systems up to the thousands of units. At which point some serious horse power will be required.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-05-17 10:10
    Both XML and HTML are bandwidth hogs, especially when they're "properly" indented. I'm surprised nobody has come up with a shortform for these protocols -- short of non-textual data compression.

    -Phil
  • Heater.Heater. Posts: 21,230
    edited 2011-05-17 10:39
    Hence the compression engine on that PowerEN chip.

    It's always puzzled me. HTML and XML were invented as document markup languages, meant to be nice and human-readable. A modern-day web page is anything but nice to read. Have a look at the page source for this forum, for example. XML as a data transport is a nice idea, but when the markup is ten times the size of the actual data, as I have often seen, it seems to have gone badly wrong.

    I guess spending so much time working on space constrained, relatively slow MCUs and such makes one sensitive to such blatant waste.
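    Heater's "markup ten times the size of the data" complaint is easy to quantify. A minimal sketch (the sample document is made up, Python stdlib only) that measures how many bytes of a small XML payload are markup versus actual data:

```python
# Illustrative sketch: ratio of total XML bytes to payload bytes.
# The document below is a hypothetical sensor-reading snippet.
import xml.etree.ElementTree as ET

doc = ("<readings>"
       "<reading><id>1</id><value>23.5</value><unit>C</unit></reading>"
       "<reading><id>2</id><value>24.1</value><unit>C</unit></reading>"
       "</readings>")

root = ET.fromstring(doc)
# Collect only the text payload, ignoring all tags.
data = "".join(e.text for e in root.iter() if e.text)
overhead = len(doc) / len(data)
print(f"total {len(doc)} bytes, data {len(data)} bytes, ratio {overhead:.1f}x")
```

    For this toy document the markup-to-data ratio comes out above 10x, in line with the figure complained about in the post.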
  • potatoheadpotatohead Posts: 10,254
    edited 2011-05-17 10:40
    Well, could always zlib them, ship them, and un-zlib them :)

    @Heater, that makes a good use case. Tons of things are going SOA. From an overall efficiency perspective, it's ugly. However, from an integration perspective, the "waste" is worth it. Extending that to devices makes some sense to me.
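    The "zlib them, ship them, un-zlib them" idea works well precisely because XML markup is so repetitive. A quick round-trip sketch (made-up document, Python stdlib only):

```python
# Sketch of compressing repetitive XML before shipping it.
# The document is a hypothetical run of 100 similar records.
import zlib

xml = ("<readings>" +
       "".join(f"<reading><id>{i}</id><value>{20 + i}</value></reading>"
               for i in range(100)) +
       "</readings>")

packed = zlib.compress(xml.encode("utf-8"))
restored = zlib.decompress(packed).decode("utf-8")

assert restored == xml  # lossless round trip
print(f"{len(xml)} bytes -> {len(packed)} bytes compressed")
```

    The trade-off, of course, is CPU time at both ends, which is exactly what the compression accelerator on a chip like PowerEN is there to absorb.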
  • Heater.Heater. Posts: 21,230
    edited 2011-05-17 10:42
    Phil,
    I'm surprised nobody has come up with a shortform for these protocols

    Have a look at Google's Protocol Buffers http://code.google.com/apis/protocolbuffers/
    Does everything you can do with XML but about 10 times smaller and faster and easier. OK, the protocol buffers are not human-readable ASCII, but so what?
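    Real Protocol Buffers needs a compiled `.proto` schema and generated classes, which isn't shown here; as a rough stand-in, this sketch compares the same hypothetical record encoded as XML text versus a fixed binary layout via `struct`, just to illustrate where the size difference comes from (protobuf's actual wire format uses tagged varints, not fixed fields):

```python
# Illustrative size comparison: text markup vs. a compact binary layout.
# The record and field names are made up for the example.
import struct

record = (42, 23.5)  # hypothetical sensor id and value
as_xml = f"<reading><id>{record[0]}</id><value>{record[1]}</value></reading>"
as_bin = struct.pack("<If", *record)  # 4-byte unsigned int + 4-byte float

print(len(as_xml), "bytes as XML vs", len(as_bin), "bytes binary")
```

    The binary form is several times smaller before any compression, and parsing it is a couple of fixed-offset reads instead of a text scan.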
  • jazzedjazzed Posts: 11,803
    edited 2011-05-17 10:47
    Heater. wrote: »
    If my bosses plans come together we will soon be porting our core applications to that device.
    Heater, you should also seriously consider a mature solution from Cavium.

    I worked with Cavium's Octeon series on a 96 core blade for security applications for 4 years until 2008. Really nice chip. Industry standard and mature GNU/GCC and full SMP Linux 2.6+ open source port.
  • Heater.Heater. Posts: 21,230
    edited 2011-05-17 11:05
    Cavium looks interesting.

    But the thing is our core application is not at the server end it's at the embedded control end of the system. Until we have 100% reliable networking that intelligence has to stay out at the edge of the network preferably in as small a box as possible. So like all such large infrastructure projects you need:
    a) The server stuff somewhere.
    b) The remote intelligence out in the field, often its own processor in its own box.
    c) A network, cable, optical, wireless, whatever. Routers, switches etc.
    d) Security. A high priority requirement.

    Now, that PowerEN is obviously designed as a super duper chip for routers and such. BUT what if you could have a device like a router that would also allow you to run your remote intelligence application? Poof, half of your hardware requirements have gone away. We no longer have to worry about providing hardware to run our code on. That is the direction our company is hopefully going.

    And that's how I got to looking at PowerEN which will be used in such "edge" devices.

    I only mentioned it here because of all those lovely cores and hardware threads and I/O tightly linked with the CPU.
  • Sal AmmoniacSal Ammoniac Posts: 213
    edited 2011-05-17 12:19
    There's another company making multi-core chips, Tilera, who claim to have chips with up to 100 64-bit CPU cores. Of course, all of these are in huge BGA packages and consume quite a lot of power, so they're not likely to appeal to anyone currently using the Propeller. ;-)

    I don't know if these chips are actually in production yet.
  • HShankoHShanko Posts: 402
    edited 2011-05-17 12:33
    For anyone interested, simply go to http://www.tilera.com/
  • Kevin WoodKevin Wood Posts: 1,266
    edited 2011-05-17 15:34
    There's also Chuck Moore's multicore stuff... http://greenarraychips.com
  • Sal AmmoniacSal Ammoniac Posts: 213
    edited 2011-05-17 16:19
    Kevin Wood wrote: »
    There's also Chuck Moore's multicore stuff... http://greenarraychips.com

    That's a strange beast... I've never seen an 18 bit processor before (I've seen 12, 36, and 60 bitters, but never an 18). Looks like it's memory-starved like Prop cogs and is designed to run Forth. Very non-mainstream.
  • markaericmarkaeric Posts: 282
    edited 2011-05-17 17:09
    But is there any other chip out there that will have ninety-something ADCs/DACs?
  • localrogerlocalroger Posts: 3,451
    edited 2011-05-17 18:06
    That's a strange beast... I've never seen an 18 bit processor before (I've seen 12, 36, and 60 bitters, but never an 18). Looks like it's memory-starved like Prop cogs and is designed to run Forth. Very non-mainstream.

    If you google for Chuck Moore you will see that this approach is consistent with an almost religiously contrarian philosophy he has followed throughout his career. Although I have taken a lot of inspiration from him in the design of my Windmill large-code system I'm sure he would consider me an overcomplicating apostate for implementing such unnecessary things as R-stack frame local variables and an overcomplicated range of "dictionary entry" types.

    Chuck preached two guiding principles to use in all design processes:

    1. Keep it simple. If you don't need it, don't put it in. The simpler it is, the easier it will be to debug and if you make it simple enough it will become almost impossible to even create a bug.

    2. Don't anticipate. If you don't need it NOW, don't put it in. Don't put in hooks for future functionality, more bits than you need, or more anything than you need for the current task in hand. If you need more later, you'll actually spend less time and do a better job by starting at the beginning with the new mix of requirements.

    The reason for Chuck's 18-bit structure has to do with this philosophy; he decided this was the ideal width to support his stack machine instruction set and what amounts to 16-bit but easily extensible math. The fact that it isn't a power of 2 didn't matter to him, presumably for reasons similar to DEC's decision to use 36-bit words in the PDP-10.
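    The stack-machine model behind that design can be sketched in a few lines: no registers, no stack frames; every operation consumes and produces values on the top of one data stack. This toy evaluator and its word set are made up for illustration and are not GreenArrays' actual instruction set:

```python
# Minimal toy stack machine in the Forth spirit: a program is a
# whitespace-separated string of words; numbers push themselves,
# operators act on the top of the stack.
def run(program):
    stack = []
    for word in program.split():
        if word == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif word == "dup":
            stack.append(stack[-1])    # duplicate top of stack
        else:
            stack.append(int(word))    # anything else is a literal
    return stack

print(run("3 dup * 4 dup * +"))  # 3*3 + 4*4 -> prints [25]
```

    The appeal, in Moore's terms, is that the whole execution model fits in your head: there is nothing to the machine but the stack and the current word.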
  • Sal AmmoniacSal Ammoniac Posts: 213
    edited 2011-05-18 09:48
    localroger wrote: »
    If you google for Chuck Moore you will see that this approach is consistent with an almost religiously contrarian philosophy he has followed throughout his career. Although I have taken a lot of inspiration from him in the design of my Windmill large-code system I'm sure he would consider me an overcomplicating apostate for implementing such unnecessary things as R-stack frame local variables and an overcomplicated range of "dictionary entry" types.

    I've always considered Forth somewhat of a cult programming language. It really polarizes people: you either love it or you hate it--there's very little middle ground. Some of my friends consider it a hippie language. ;-)

    I can see the usefulness of Forth in applications such as Open Firmware, but I can't see much use for it in mainstream applications or even in embedded systems. Anyone know of any familiar applications that have been written in Forth?

    Moore seems to be pushing colorForth these days. I took a look and quickly got a headache from the weird colors used to replace punctuation. Some of the colors it uses, like cyan and yellow, are really hard to read on most systems.
  • localrogerlocalroger Posts: 3,451
    edited 2011-05-18 19:26
    Anyone know of any familiar applications that have been written in Forth?

    There is a device which was the flagship instrument in its field, designed in 1981 and upgraded in fully compatible increments until 1995, so that you could take a 1981 instrument which failed and put a 1995 motherboard in it and it would work with all the same peripherals. We installed literally thousands of them. They only stopped making compatible upgrades because Texas Instruments stopped making the CPU. Its firmware was written in Forth.

    That device was rock-solid. Nothing they have made since, mostly programmed in C++, has been nearly as reliable.
  • jmgjmg Posts: 15,148
    edited 2011-05-20 21:08
    markaeric wrote: »
    But is there any other chip out there that will have ninety-something ADCs/DACs?

    Yes, try 96 x 12-bit ADCs (!), and 48 x 8-bit PWMs, and 64K Flash, all in a 128-pin package.

    http://www.coreriver.co.kr/product-lines/CORERIVERmcu_linkSejong100.htm