
Any ideas for P2 demos?


Comments

  • Publison wrote: »
    Guitar stuff would be gratifying to the younger crowd. A ramped up version of the Coyote would be cool:
    https://forums.parallax.com/discussion/101493/project-openstomp-tm-coyote-1-guitar-effects-pedal/p1
    I'm thinking delay with the built in A/D and D/A with some HyperRam would bring parts count down, (minus some filtering).

    Arrrgh no!

    FX are 10-a-penny. IK multimedia Amplitube on the iPhone can do amazing stuff.

    I am talking direct string processing where actual string tension is irrelevant.

    One can be in standard tuning and seamlessly switch to open G (Keef Richards) without touching the tuners.

    Emulate the presence of a capo without needing a capo.

    PERFECT intonation that is impossible to achieve on a traditional guitar, no matter the price.
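    For anyone curious about the math behind that kind of virtual retuning: each string just gets its own fixed pitch-shift ratio of 2^(semitones/12). A rough sketch in plain C (the open-G offsets and names are only my illustration; the actual per-string pitch-shifting DSP is a much bigger job than this):

        #include <math.h>
        #include <stdio.h>

        /* Per-string pitch-shift ratios for virtual retuning.
           Standard tuning E-A-D-G-B-E -> open G D-G-D-G-B-D is
           -2, -2, 0, 0, 0, -2 semitones; a capo at fret n adds +n to every string. */
        int main(void)
        {
            const int open_g[6] = { -2, -2, 0, 0, 0, -2 };  /* semitone offsets, low to high */
            const int capo_fret = 0;                        /* e.g. 2 to emulate a capo at fret 2 */

            for (int s = 0; s < 6; s++) {
                double semis = open_g[s] + capo_fret;
                double ratio = pow(2.0, semis / 12.0);      /* feed this to the per-string shifter */
                printf("string %d: %+.0f semitones -> ratio %.4f\n", s + 1, semis, ratio);
            }
            return 0;
        }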
  • msrobots Posts: 3,701
    edited 2020-02-10 06:25
    I think some small lab instrument should do it.

    HDMI, keyboard/mouse, oscilloscope mode, signal generator, maybe some logic analyzer. HyperRAM as buffer, SD, and maybe some code to connect serial to that open source protocol used by a lot of scopes (forgot the name right now, tired).

    It could show the versatility of the smart pins, use ADC/DAC, SD, and HyperRAM, and the best of it is that whoever buys it will need a second P2, because he/she will not give up the first one since it is already a useful tool.

    Make it configurable, say a 16-channel scope, or a 4-channel scope plus 8 logic channels plus 4 signal generators, or...

    Just an EVAL board plus HyperRAM, some BNC accessory board(s) for the scope pins, something USEFUL to show the potential.

    Mike
  • The trouble with doing just one thing is that the P2 either gets put into a box as "good for that type of thing" or it gets compared to some other XYZ that does it better and is way cheaper. No, the P2 has to be seen to be juggling all kinds of fruit while deftly peeling and slicing and dicing them at the same time. So it will be obvious not only that it can handle various kinds of fruit, but that it can do it all at the same time, without skipping a beat.

    Combining these instruments together as a combo package would be a start but it would be good to see it controlling some motors or actuators as well.
  • ErNa Posts: 1,738
    Hello Peter, the main point is identical hardware for everyone. The moment P2D2 is available, and maybe the companion or a baseboard, the game will start. Motor control is a big issue and certainly will be a lot of fun. There are different projects underway. While commercial products look for any cent to save, hobby and research spend a lot of money just for fun. I can foresee that logging motor control data will open a new dimension of motor control applications.
    As soon as the P2D2s arrive we will find a way to have them produced in masses ;-) What about an autonomous quadcopter swarm to do firefighting ;-) Or, if we are not ready in time, we could use them as a rake surrogate to clean up the forests so no new wildfires can start. Also a market in California. ;-) On one hand, such a drone could drain a swamp and on the other hand fight the fires. But a drone could also dry a swamp by blowing the water out and fan flames if a wildfire is needed. I think I have to stop sinking ;-)
    Waiting for arrival of redeemers...

  • In addition to commercial products, P2 can also be used for pure research.
    https://www.ligo.caltech.edu/
    https://www.glowconsortium.de/index.php/en/lofar-about/stations-featured
    Many of these applications are based on FPGA and/or DSP.
    P2 can fill this gap.
  • A demo needs to include something visual and eye-catching for the Arduino masses. I think that is critical. Playing video, such as what Peter has done in TaqOZ, seems ideal, as that is something the Arduino crowd has never seen, and there are many other cogs that could be utilized at the same time. It places the end user's mind in the space that the P2 is more of an SoC than a microcontroller, and that is a good thing.
  • As millions of P2's get out there controlling motors and doing cool IoT things, we should have a movement to dedicate a cog or two that isn't doing anything else to working on SETI data.

    Friend: "Hey Roark! What is your SmartFridge doing?"
    Me: "Searching for intelligent alien life."
    Friend: "Cool!"
  • ;-) ;-)
    If that's an allusion to my post, then I can reassure you. LIGO / LOFAR has nothing to do with SETI search. And even if that were the case, that's not the stupidest thing people can deal with.
    Take a look here:
    https://www.gw-openscience.org/about/
    there is a lot of software / signal processing / know-how needed.
    And that can also be worth it commercially.
  • Reinhard wrote: »
    ;-) ;-)
    If that's an allusion to my post, then I can reassure you. LIGO / LOFAR has nothing to do with SETI search. And even if that were the case, that's not the stupidest thing people can deal with.
    Take a look here:
    https://www.gw-openscience.org/about/
    there is a lot of software / signal processing / know-how needed.
    And that can also be worth it commercially.

    Wow! That's pretty cool!

    I didn't know about LIGO/LOFAR when I posted that, and I certainly wasn't poking fun at anyone. I was just thinking about using spare cogs in future P2 IoT devices for a useful purpose. SETI (using BOINC) is sort of the "million monkeys" approach to distributed problem solving, and it really would be a hoot if some toaster somewhere, during its off-time, found an ET's signal, or cracked the genetic code for cancer.
  • All right.
    I think LIGO works in a frequency range where the P2 snores. The amount of data must be managed with an SD card / fast SPI.

    LOFAR has the baseband of 80 ... 900 MHz.
    I wrote about it a long time ago:
    https://forums.parallax.com/discussion/135391/my-first-larger-propeller-project#latest
    Not all links may be valid anymore, but much of this can now be implemented on the P2 without additional hardware.
  • K2 Posts: 691
    I think it was Heater who once poo-pooed the idea of using video as a selling point for P1. Guess video wasn't his thing...and that's fine. But it was definitely what attracted me and many others to the P1. It is a major part of what attracts many to the P2. As ke4pjw suggests, it takes a heroic programming effort to create even the crudest VDG from an Arduino. To create one (or several!) quality raster scan video stream(s) and still juggle many other demanding real-time tasks with perfection is stunning.

    But that's just half the magic. The other half is that any old Joe can do it. That was the dumbfounding moment for me when, after a bit of staring at the code of one of Chip's video demos, I saw how I could use it to make a composite video display for my needs. I'll never forget walking away shaking my head in wonder because the whole dang thing worked! I hadn't read the manual and I hadn't written a single line of Spin or PASM before. It was the ultimate embodiment of the principle of least-surprise.

    If there were some way to communicate the sheer accessibility of Propeller chips, that would be a coup! It's nothing at all like starting from scratch with a Blackfin processor (a shoot-me-now prospect if ever there were one).
  • ke4pjw wrote: »
    A demo needs to include something visual and eye-catching for the Arduino masses. I think that is critical. Playing video, such as what Peter has done in TaqOZ, seems ideal, as that is something the Arduino crowd has never seen, and there are many other cogs that could be utilized at the same time. It places the end user's mind in the space that the P2 is more of an SoC than a microcontroller, and that is a good thing.

    I also believe that to move real numbers, Parallax has to show that real world work can be done -- and I think the code base has to come from them; the idea being it has been vetted. For me, I would love to see a palette of industrial objects -- things I've written in Spin/PASM for the P1 -- released by Parallax so that engineers can take this cool silicon and deploy it very quickly.

    On my mind...
    -- updated FDS
    -- high-speed I2C and SPI comms
    -- multi-file SD driver
    -- OLED driver(s) with graphics interface
    -- HD RS-485 interface
    -- MODBUS RTU slave (using HD 485)
    -- Ethernet interface
    -- PWM engine
    -- smart LED drivers (WS2812, APA102C, etc.)
    -- stereo WAV player
    -- I2S interface
    -- G-Code interpreter
    -- multi-motor stepper driver

    What happens, sadly, is people will say, "Oh, those are easy -- someone will do it." and then they don't, until one of us is paid by a client to do it and then we release that object. But that code NOT coming from Parallax has a potentially negative impact on Propeller sales (my opinion)

    All the work on the silicon has to be backed up, for new users, with a big library of real-world code. I have a couple hundred libraries that I will port, but that's going to take time and learning the ins and outs of Spin2 and PASM2. Because of the continual flux of P2 development, I didn't jump in until just now.
  • So that is my point about the video display graphics and also playing a video WHILE doing everything else, without breaking a sweat. When I get some time I will play the video in a floating window while running a live scope in another, with a logic analyzer in another, and a console shell too. I don't think that even a Pi can do all that except with assistance (such as a P2).
  • One key feature is high speed ADC/DAC on every pin.

    I don't really want to put myself under pressure to write this, but: We could have multiple NTSC inputs. Then put them in a grid like the video quad processors from days past. Most likely it would be black and white. Or totally warp the pixels for a surround view like new cars.


    Connect a long wire and receive AM radio?


    Multichannel recorder/mixer. This seems to be used often for voice assistants. But it could also be used for triangulation.
    Massive oscilloscope >4 channels. Maybe 24 channels.

    FM transmitter. Pi does this, P2 will do it better.

    I wonder what the competition thought when the Pi4 came out with dual HDMI. How about 4 or 6 ports? :sunglasses:

    A super precise ultrasound array?

    Could we run a huge number of WS2812s that would be problematic for other devices?
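    Rough numbers on the WS2812 question, just as a sanity check (my own arithmetic, assuming the usual 800 kbps data rate and 24 bits per LED):

        #include <stdio.h>

        /* Back-of-envelope: how many WS2812 LEDs fit in one 30 fps frame?
           Assumes the common 800 kbps data rate, 24 bits per LED, >=50 us reset gap. */
        int main(void)
        {
            const double bit_us   = 1.25;            /* 1 / 800 kHz              */
            const double led_us   = 24 * bit_us;     /* 30 us per LED            */
            const double frame_us = 1e6 / 30.0;      /* ~33,333 us at 30 fps     */
            const double reset_us = 50.0;

            double per_pin = (frame_us - reset_us) / led_us;   /* ~1,100 LEDs per data pin */
            printf("LEDs per pin at 30 fps: %.0f\n", per_pin);
            printf("With 32 pins driven in parallel: %.0f\n", per_pin * 32);
            return 0;
        }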

  • I'd like to try some time of flight measurements by streaming the entire port A to hub memory, one clock per long.

    Each port A input would be connected to a different length of wire, in approx 1 inch increments, which would all meet at a common node to which the photosensor's output would be connected. So P0 would go via 1" of wire to the common node, and P31 would go via 32" of wire to the same common node.

    That way you can subdivide 1 clock pulse (1/360,000,000 sec) by 32, maybe ~1 inch resolution may be attainable if this works? (The signal travels at some fraction of light speed once in the wire, depending on its characteristic impedance)

    I was thinking starting with a high speed ethernet fibre optic transceiver may be the way to start this, and try different lengths of fibre to do some calibration runs.
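    For scale, a quick back-of-envelope on that 1/360,000,000 s figure (my numbers, using free-space light speed; signals in wire or fibre travel slower, which is exactly what the calibration runs would nail down):

        #include <stdio.h>

        /* How far does light travel in one 360 MHz clock, and per 1/32 sub-slice? */
        int main(void)
        {
            const double c     = 299792458.0;       /* m/s, free space             */
            const double t_clk = 1.0 / 360e6;       /* ~2.78 ns per clock          */
            const double m_clk = c * t_clk;         /* ~0.83 m (~32.8 in) / clock  */
            const double m_sub = m_clk / 32.0;      /* ~26 mm (~1 in) per slice    */

            printf("per clock: %.3f m (%.1f in)\n", m_clk, m_clk / 0.0254);
            printf("per 1/32 slice: %.3f m (%.2f in)\n", m_sub, m_sub / 0.0254);
            /* For a reflected (round-trip) measurement, halve these for range resolution. */
            return 0;
        }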
  • cgracey Posts: 14,133
    Tubular wrote: »
    I'd like to try some time of flight measurements by streaming the entire port A to hub memory, one clock per long.

    Each port A input would be connected to a different length of wire, in approx 1 inch increments, which would all meet at a common node to which the photosensor's output would be connected. So P0 would go via 1" of wire to the common node, and P31 would go via 32" of wire to the same common node.

    That way you can subdivide 1 clock pulse (1/360,000,000 sec) by 32, maybe ~1 inch resolution may be attainable if this works? (The signal travels at some fraction of light speed once in the wire, depending on its characteristic impedance)

    I was thinking starting with a high speed ethernet fibre optic transceiver may be the way to start this, and try different lengths of fibre to do some calibration runs.

    Using Goertzel to output a sine and measure returning phase might yield time of flight, too.
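    For reference, a minimal single-bin Goertzel that recovers the phase of the returning tone (plain C on a buffer of samples; turning the phase difference into distance, and doing it with the streamer/Goertzel hardware instead, is left open):

        #include <math.h>

        /* Single-bin Goertzel: phase of frequency bin k across N samples.
           The phase difference between transmitted and received tones gives
           time of flight: t = phase / (2*pi*f). */
        double goertzel_phase(const double *x, int n, int k)
        {
            double w  = 2.0 * M_PI * k / n;
            double cw = cos(w), sw = sin(w);
            double coeff = 2.0 * cw;
            double s1 = 0.0, s2 = 0.0;

            for (int i = 0; i < n; i++) {
                double s0 = x[i] + coeff * s1 - s2;
                s2 = s1;
                s1 = s0;
            }
            double re = s1 - s2 * cw;   /* real part of the bin  */
            double im = s2 * sw;        /* imaginary part        */
            return atan2(im, re);       /* radians, -pi..pi      */
        }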
  • Cluso99 Posts: 18,066
    For the P1, the ability to do multiple things in parallel was a key point in me deciding to jump in.

    The fact that it could do video, which at that time usually required specialised chips, was a great teaser even tho' I didn't need it then.

    I think a demo (the P2-EVAL2 can do it) showing off say 4 VGA/HDMI 1080p screens doing different things would be amazing. Not necessarily useful, but a great talking point. Perhaps with a Keyboard inputting text and a Mouse just moving around on a different screen. Needs to make one think WOW all that on a chip - what else can it do???

    Perhaps one of the screens can be tracing one of the videos' pins in scope mode??
  • ErNa Posts: 1,738
    If we are able to do a line follower with a simple robot, and there is a ping sensor or laser range finder, why not create a propeller hat to support disabled people? Just to give them a direction or warn of proximate obstacles. TOF with phase control might be possible with multiple frequencies, so Goertzel can track the phase of the different components and so create a longer range or higher resolution. The signal acquisition unit can feed into an ANN and so support decision making of the hat wearer. The hat should not carry a sign of the propeller, as this would too obviously be a commercial, but it can use some camouflage like an emitting colour in the lower visible frequency range. As an ANN is not designed for a special purpose, but can learn, additional sensors like those for humidity, temperature, and chemical vapors might help to find hidden information about upcoming changes in the environment and so wirelessly stimulate the ability to increase awareness of the ecosystem we all live in.
  • rogloh wrote: »
    You probably need to focus on the real strengths of the P2...

    -Flexible I/O mapping, something that hot swaps/auto detects and reconfigures could show this.
    -Video generation VGA/composite/component/DVI - multiple simultaneous outputs
    -DACs/ADCs on any pin
    ...

    There were surely a lot of good proposals about possible demos. However, please don't forget that the raw power of the hardware is only one side. There are other chips with lots of computing power. Demonstrating what the chip can do is by far not as impressive as showing that it can all be done with very little effort.

    Creating something with multiple video outputs would also be possible with an FPGA. But it requires much more manpower, more sophisticated tools and so on.

    The biggest advantage of the Propeller is that you don't need a software suite that takes hours to install and configure. You can create something impressive within minutes with only some lines of code. You press a button and it takes less than a second to compile, load and run. It's simple and straightforward, but still with very few limits. If you convince people of this, you've won.

    So I think it's important to also focus on the tool side. Create good documentation and an easy to use IDE. Tutorials would also be nice. Something like the "Propeller Education Kit Labs" would be a good starting point (although I think nobody wants to read manuals these days. Younger people look up everything on Youtube)

    And a single-step, source-level debugger with a GUI is absolutely essential. I haven't seen an IDE without one in the last 20 years, except for the Propeller Tool. I've used it anyway because everything else was so cool and I could put up with having to use an external tool for simulation. But newcomers could be convinced more easily if a debugger were integrated.
  • Cluso99 Posts: 18,066
    edited 2020-02-14 10:25
    ManAtWork wrote: »
    rogloh wrote: »
    You probably need to focus on the real strengths of the P2...

    -Flexible I/O mapping, something that hot swaps/auto detects and reconfigures could show this.
    -Video generation VGA/composite/component/DVI - multiple simultaneous outputs
    -DACs/ADCs on any pin
    ...

    There were surely a lot of good proposals about possible demos. However, please don't forget that the raw power of the hardware is only one side. There are other chips with lots of computing power. Demonstrating what the chip can do is by far not as impressive as showing that it can all be done with very little effort.

    Creating something with multiple video outputs would also be possible with an FPGA. But it requires much more manpower, more sophisticated tools and so on.

    The biggest advantage of the Propeller is that you don't need a software suite that takes hours to install and configure. You can create something impressive within minutes with only some lines of code. You press a button and it takes less than a second to compile, load and run. It's simple and straightforward, but still with very few limits. If you convince people of this, you've won.

    So I think it's important to also focus on the tool side. Create good documentation and an easy to use IDE. Tutorials would also be nice. Something like the "Propeller Education Kit Labs" would be a good starting point (although I think nobody wants to read manuals these days. Younger people look up everything on Youtube)

    And a single-step, source-level debugger with a GUI is absolutely essential. I haven't seen an IDE without one in the last 20 years, except for the Propeller Tool. I've used it anyway because everything else was so cool and I could put up with having to use an external tool for simulation. But newcomers could be convinced more easily if a debugger were integrated.

    I wrote a P1 debugger years ago which could single-step a single cog, either in PASM or Spin, where it could either single-step the whole interpreter or just the bytecode. It took no cog RAM as it hid in the shadow RAM. Zero interest!

    Hopefully this time around, with added hardware support, there might be more interest.
  • Cluso99 wrote: »
    I wrote a P1 debugger years ago which could single-step a single cog, either in PASM or Spin, where it could either single-step the whole interpreter or just the bytecode. It took no cog RAM as it hid in the shadow RAM. Zero interest!

    Wow, interesting. You mean this one? This time I found it immediately when searching for "Cluso99 debugger". This was in 2008, before I joined the propeller community at the end of 2009. I remember that I searched a lot. The best I had found was "pPropellerSim". I still use it for debugging P1 asm code.

    The problem is that something buried in past forum discussions is often hard to find. To have success it must be listed in the "Downloads and Documentation" tab of the product page. You can only google for something if you know it exists and you know its name, or at least a good keyword.

  • ;-))
    quote: Alice Cooper, 1972, Luney Tune
    Is this all real?
    Is this all necessary?
    Or is this a joke?
  • JonnyMac wrote: »
    -- I2S interface

    I've used I2S to interface to a 2-channel 24 bit 96kHz audio ADC with the P1. I could port that to the P2.
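    For anyone else eyeing I2S converters, the clocking works out roughly like this (my arithmetic; the 32-bit slot per channel is an assumption, since some parts use 24- or 16-bit slots instead):

        #include <stdio.h>

        /* I2S clock budget for a 2-channel, 24-bit, 96 kHz ADC.
           Assumes the common 32-bit slot per channel (64 * fs bit clock). */
        int main(void)
        {
            const double fs        = 96000.0;
            const int    slot_bits = 32;                   /* per channel           */
            const double lrclk     = fs;                   /* word/frame clock      */
            const double bclk      = fs * slot_bits * 2;   /* 6.144 MHz bit clock   */

            printf("LRCLK: %.0f Hz, BCLK: %.3f MHz\n", lrclk, bclk / 1e6);
            printf("Data: 24 bits MSB-first per slot, one BCLK after each LRCLK edge\n");
            return 0;
        }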
  • cgracey Posts: 14,133
    ManAtWork wrote: »
    rogloh wrote: »
    You probably need to focus on the real strengths of the P2...

    -Flexible I/O mapping, something that hot swaps/auto detects and reconfigures could show this.
    -Video generation VGA/composite/component/DVI - multiple simultaneous outputs
    -DACs/ADCs on any pin
    ...

    There were surely a lot of good proposals about possible demos. However, please don't forget that the raw power of the hardware is only one side. There are other chips with lots of computing power. Demonstrating what the chip can do is by far not as impressive as showing that it can all be done with very little effort.

    Creating something with multiple video outputs would also be possible with an FPGA. But it requires much more manpower, more sophisticated tools and so on.

    The biggest advantage of the Propeller is that you don't need a software suite that takes hours to install and configure. You can create something impressive within minutes with only some lines of code. You press a button and it takes less than a second to compile, load and run. It's simple and straightforward, but still with very few limits. If you convince people of this, you've won.

    So I think it's important to also focus on the tool side. Create good documentation and an easy to use IDE. Tutorials would also be nice. Something like the "Propeller Education Kit Labs" would be a good starting point (although I think nobody wants to read manuals these days. Younger people look up everything on Youtube)

    And a single-step, source-level debugger with a GUI is absolutely essential. I haven't seen an IDE without one in the last 20 years, except for the Propeller Tool. I've used it anyway because everything else was so cool and I could put up with having to use an external tool for simulation. But newcomers could be convinced more easily if a debugger were integrated.

    I think THIS is the way to market the P2.
  • ManAtWork wrote: »
    rogloh wrote: »
    You probably need to focus on the real strengths of the P2...

    -Flexible I/O mapping, something that hot swaps/auto detects and reconfigures could show this.
    -Video generation VGA/composite/component/DVI - multiple simultaneous outputs
    -DACs/ADCs on any pin
    ...

    There were surely a lot of good proposals about possible demos. However, please don't forget that the raw power of the hardware is only one side. There are other chips with lots of computing power. Demonstrating what the chip can do is by far not as impressive as showing that it can all be done with very little effort.

    Creating something with multiple video outputs would also be possible with an FPGA. But it requires much more manpower, more sophisticated tools and so on.

    The biggest advantage of the Propeller is that you don't need a software suite that takes hours to install and configure. You can create something impressive within minutes with only some lines of code. You press a button and it takes less than a second to compile, load and run. It's simple and straightforward, but still with very few limits. If you convince people of this, you've won.

    So I think it's important to also focus on the tool side. Create good documentation and an easy to use IDE. Tutorials would also be nice. Something like the "Propeller Education Kit Labs" would be a good starting point (although I think nobody wants to read manuals these days. Younger people look up everything on Youtube)

    And a single-step, source-level debugger with a GUI is absolutely essential. I haven't seen an IDE without one in the last 20 years, except for the Propeller Tool. I've used it anyway because everything else was so cool and I could put up with having to use an external tool for simulation. But newcomers could be convinced more easily if a debugger were integrated.
    Actually, the thing that really made the P1 easy to use was the library of tested objects in OBEX. The bare chip was probably harder to use than ones with built-in hardware peripherals, since you had to roll your own in software. So one of the first steps for the P2 needs to be to build a library of standard peripherals that are tested and can be integrated into an application as easily as P1 objects could.

  • I have 2 projects I'm currently working on: 1) ATU replica; 2) Educational CubeSat prototype

    The Audio Terminal Unit is the primary audio communication interface onboard the ISS. I worked on this system for 5 years as its subsystem engineer before being forced into retirement. The ATU used a monolithic FPGA, a time division multiple access (TDMA) protocol and a fiber optic network to process and move audio around the station. I thought it might be fun for nostalgia to build a replica based on wireless communication between units and Bluetooth on my phone.

    I'm currently working on the software suite for a Propeller based CubeSat for educational use. I've already designed and 3D printed the frame and milled the necessary PCBs into a PC/104 format. I'm currently designing the primary processor board with KiCAD. My objective is to move the necessary hardware from one of my Propeller Project Boards (#32810) onto a milled PC/104 board.

    I'm excited about the P2 because I think it will work nicely with both my projects. Sticking with a space theme, the latest SBIR RFP released by NASA has several requirements for hardware that might make interesting demos. For example: wireless sensor networks, wireless communication/data networks for use across [rather than through] bulkheads, autonomous systems, neural networks, fault management, coordination/control of drone swarms, navigation and control (i.e. CORDIC?), etc.

    I know JSC down here is designing new space suits and communication systems. The suits could probably benefit from a powerful microcontroller. From an educational perspective, though, what about a wearable device that can collect multiple sensor data and present that information and fault predictions on a small forearm-based video device, with an input interface designed for gloved use and an audio interface?

    When I was working in the RaPID lab with the procedures group (i.e. the guys who wrote all the procedures the astronauts used on the ISS), they were hot on trying to produce an automated way of performing procedures onboard, which used a form of augmented video where just looking at a valve, for example, would bring up the required procedure on their pad, consisting of a mixture of manual and automated steps with data being fed to the pad by the valve hardware itself.

    Simple demos in this venue might be enough to capture the interest of kids.
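    Since CORDIC came up for navigation and control, here is a tiny software reference of the rotation-mode iteration (floating-point C just to show the idea; the P2's CORDIC solver does this in hardware, and the code below is my own sketch, not Parallax code):

        #include <math.h>
        #include <stdio.h>

        /* Rotation-mode CORDIC: drive z to 0 to compute cos/sin of an angle.
           Valid for |angle| up to ~1.74 rad without pre-rotation. */
        void cordic_sincos(double angle, int iters, double *c, double *s)
        {
            double gain = 1.0;
            for (int i = 0; i < iters; i++)
                gain *= sqrt(1.0 + pow(2.0, -2.0 * i));

            double x = 1.0 / gain, y = 0.0, z = angle;
            for (int i = 0; i < iters; i++) {
                double d = (z >= 0.0) ? 1.0 : -1.0;
                double p = pow(2.0, -i);        /* a right shift in fixed point */
                double xn = x - d * y * p;
                double yn = y + d * x * p;
                z -= d * atan(p);
                x = xn; y = yn;
            }
            *c = x;  *s = y;
        }

        int main(void)
        {
            double c, s;
            cordic_sincos(0.5, 24, &c, &s);
            printf("cos(0.5)=%.6f sin(0.5)=%.6f\n", c, s);   /* ~0.877583, ~0.479426 */
            return 0;
        }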
  • JonnyMac wrote: »
    I also believe that to move real numbers, Parallax has to show that real world work can be done -- and I think the code base has to come from them; the idea being it has been vetted. For me, I would love to see a palette of industrial objects -- things I've written in Spin/PASM for the P1 -- released by Parallax so that engineers can take this cool silicon and deploy it very quickly.

    On my mind...
    -- updated FDS
    -- high-speed I2C and SPI comms
    -- multi-file SD driver
    -- OLED driver(s) with graphics interface
    -- HD RS-485 interface
    -- MODBUS RTU slave (using HD 485)
    -- Ethernet interface
    -- PWM engine
    -- smart LED drivers (WS2812, APA102C, etc.)
    -- stereo WAV player
    -- I2S interface
    -- G-Code interpreter
    -- multi-motor stepper driver

    What happens, sadly, is people will say, "Oh, those are easy -- someone will do it." and then they don't, until one of us is paid by a client to do it and then we release that object. But that code NOT coming from Parallax has a potentially negative impact on Propeller sales (my opinion)

    All the work on the silicon has to be backed up, for new users, with a big library of real-world code. I have a couple hundred libraries that I will port, but that's going to take time and learning the ins and outs of Spin2 and PASM2. Because of the continual flux of P2 development, I didn't jump in until just now.

    Jon,
    I agree with this approach. The above P2 code modules would persuade many users to take the plunge.
    My particular frustration is the lack of focus on G-code interpreters/modules.
    It seems to me that G-code is the basis of all serious industrial robotic machinery, and surely that should be reflected in STEM and hobby robotics?
    I am sure a basic STEM 2-D or 3-D table with stepper motors and G-code modules would be of great value and interest.
    Even small hobby mobile robots (like the ActivityBot or Scribbler, for instance) could have G-code control options.
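    To make the G-code point concrete, the core of a parser really is small; here's a minimal sketch (my own simplification: one letter/value word at a time, linear moves only, no modal state or feed planning):

        #include <ctype.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Parse one G-code line such as "G1 X10.5 Y-3 F600" into letter/value
           words and act on a couple of them. Real interpreters add modal state,
           arcs, acceleration planning, etc. */
        static void parse_gcode_line(const char *line)
        {
            char  *p = (char *)line;   /* strtod wants char**; the string is never modified */
            double x = 0, y = 0, f = 0;
            int    g = -1;

            while (*p) {
                if (isalpha((unsigned char)*p)) {
                    char   letter = (char)toupper((unsigned char)*p++);
                    double value  = strtod(p, &p);
                    switch (letter) {
                        case 'G': g = (int)value; break;
                        case 'X': x = value;      break;
                        case 'Y': y = value;      break;
                        case 'F': f = value;      break;
                        default:  break;          /* ignore words we don't handle */
                    }
                } else {
                    p++;                          /* skip spaces and other separators */
                }
            }
            if (g == 0 || g == 1)
                printf("move to X=%.3f Y=%.3f at F=%.1f (G%d)\n", x, y, f, g);
        }

        int main(void)
        {
            parse_gcode_line("G1 X10.5 Y-3 F600");
            return 0;
        }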
  • David Betz wrote: »
    ManAtWork wrote: »
    rogloh wrote: »
    You probably need to focus on the real strengths of the P2...

    -Flexible I/O mapping, something that hot swaps/auto detects and reconfigures could show this.
    -Video generation VGA/composite/component/DVI - multiple simultaneous outputs
    -DACs/ADCs on any pin
    ...

    There were surely a lot of good proposals about possible demos. However, please don't forget that the raw power of the hardware is only one side. There are other chips with lots of computing power. Demonstrating what the chip can do is by far not as impressive as showing that it can all be done with very little effort.

    Creating something with multiple video outputs would also be possible with an FPGA. But it requires much more manpower, more sophisticated tools and so on.

    The biggest advantage of the Propeller is that you don't need a software suite that takes hours to install and configure. You can create something impressive within minutes with only some lines of code. You press a button and it takes less than a second to compile, load and run. It's simple and straightforward, but still with very few limits. If you convince people of this, you've won.

    So I think it's important to also focus on the tool side. Create good documentation and an easy to use IDE. Tutorials would also be nice. Something like the "Propeller Education Kit Labs" would be a good starting point (although I think nobody wants to read manuals these days. Younger people look up everything on Youtube)

    And a single-step, source-level debugger with a GUI is absolutely essential. I haven't seen an IDE without one in the last 20 years, except for the Propeller Tool. I've used it anyway because everything else was so cool and I could put up with having to use an external tool for simulation. But newcomers could be convinced more easily if a debugger were integrated.
    Actually, the thing that really made P1 easy to use was the library of tested objects in OBEX. The bare chip was probably harder to use than ones with built-in hardware peripherals since you had to roll your own in software. So one of the first steps for P2 needs to be to build a library of standard peripherals that are tested and can be integrated into an application as easily as the P1 object could.

    Yeah! I totally agree with David. Something like an OBEX for the P2, with many drivers available, could help a lot of people get to the end goal of what they want to do.
  • I've worked with a ton of MCUs over the years, as well as doing sysadminny and webdevvy stuff, so I've surfed a LOT of codebases over the past 20 years.

    Something that jumps out at me: Most people's "Example" codebases *SUCK*.
    It's usually a list of classes and function calls with a couple of 10-30 line snippets that are practically useless for integration because they don't document any of the quirks.

    For webapps, it's usually some kind of simple todo-list CRUD application. No authentication, no cookies, no JWT, no javascript...

    It bugged me so much when I started learning flask in python, I had to make my own.
    https://github.com/kamilion/kaizen/blob/master/app/__init__.py

    And leave reasonable comments...
    https://github.com/kamilion/kaizen/blob/master/auth/authmodel.py#L81-L86

    And even some unreasonably long comments.
    https://gist.github.com/kamilion/e7c703fa52882e2a5e84c82e667c8f64
    The compiler strips them anyway.


    As an example of some *good* examples I've run across recently: Espressif's ESP32 IDF.
    https://github.com/espressif/esp-idf/tree/master/examples

    They could be better, but already there's a number of completely built projects you can just compile and flash.

    Unfortunately, they really suck at helpful comments documenting inter-library quirks, which is something they could do better with.

    As an example of some *poor* examples, almost any of Unity3D's scripting reference.
    https://docs.unity3d.com/ScriptReference/Events.UnityEvent.html
    The manual's just as terrible.
    https://docs.unity3d.com/Manual/MessagingSystem.html

    They have an "asset store" / paid-OBEX, in which they publish certain example projects as no-cost/free downloads.
    Most of them have accompanying tutorial videos, but no supporting documentation, and lack comments.

    The best example projects are paid-download from thirdparties, ranging from $14 to $300. (Which is a tiny buy-in price when you consider the average PC game costs about $60)
    Many of these come with PDF documentation, class references, example code, and example projects with multiple integrations.
    Here's one of the best I've worked with: https://assetstore.unity.com/packages/templates/systems/ultimate-character-controller-99962
    And a list of its integrations with other assets: https://opsive.com/support/documentation/ultimate-character-controller/integrations/final-ik/

  • Cluso99 Posts: 18,066
    ManAtWork wrote: »
    Cluso99 wrote: »
    I wrote a P1 debugger years ago which could single-step a single cog, either in PASM or Spin, where it could either single-step the whole interpreter or just the bytecode. It took no cog RAM as it hid in the shadow RAM. Zero interest!

    Wow, interesting. You mean this one? This time I found it immediately when searching for "Cluso99 debugger". This was in 2008, before I joined the propeller community at the end of 2009. I remember that I searched a lot. The best I had found was "pPropellerSim". I still use it for debugging P1 asm code.

    The problem is that something buried in past forum discussions is often hard to find. To have success it must be listed in the "Downloads and Documentation" tab of the product page. You can only google for something if you know it exists and you know its name, or at least a good keyword.

    Yes, that’s it. Cannot believe it was 12 years ago. How time flies :smiley: