Simulating neurons with an array of props. — Parallax Forums


kwinn Posts: 8,697
edited 2011-07-06 06:09 in Propeller 1
My original thought was to have the neurons send messages to the individual neurons they are connected to, but perhaps it would be better to have the neurons listen for messages from neurons they are connected to. This may speed things up.

Comments

  • kwinn Posts: 8,697
    edited 2011-06-07 20:16
    jazzed, you are absolutely right about using the "core" props on the boards for visual processing. I'm having a bit of trouble getting out of the "single board with I/O connectors around the edges" box. With 6 faces on the stack of boards the majority of the I/O connections could very well be on the top and bottom rather than the edges of the boards.
  • jazzed Posts: 11,803
    edited 2011-06-07 20:32
    Glad you brought this topic to the Propeller forum from the Robotics forum.

    Guess we need to start looking for an appropriate camera for image capture if recognition is one of the goals of the Artificial Neural Network "ANN".

    I agree with listening for messages. Imagine a COG listening to pre-determined HUB locations for a mailbox flag.

    In the simplest case, a mailbox could be stuffed with information received from a "transport" COG. Neurons can send information to other Neurons via the transport COG. Neural network connections themselves are a bit "fuzzy" to me right now and deserve more reading.

    The details of a Neuron would be determined later, but I expect that "K" Neural processing "jmpret threads" could be active in a COG.

    People always fret about floating point with a Neuron, but I contend all that is really necessary is signed arithmetic and Propeller is pretty good with that using 32 bit longs.

    If a COG based Neural processor can be developed, the actual physical transport mechanism can be whatever the user chooses. The only requirement for interfacing with a Neuron is the mailbox.
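jazzed's signed-arithmetic point is easy to sketch. The following is host-side Python rather than Spin, and the Q16.16 fixed-point format is purely an assumed illustration of doing a neuron's weighted sum with nothing but signed 32-bit-style integer math:

```python
# Fixed-point neuron sketch: all arithmetic is signed integer, as it
# would be with the Propeller's 32-bit longs. Q16.16 (16 integer bits,
# 16 fraction bits) is an assumption made for this illustration.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in Q16.16

def to_fix(x):
    """Convert a float to Q16.16 (scaffolding only; a real build
    would store weights as integers from the start)."""
    return int(round(x * ONE))

def fix_mul(a, b):
    """Multiply two Q16.16 values; the result is again Q16.16."""
    return (a * b) >> FRAC_BITS

def neuron(inputs, weights, threshold):
    """Weighted sum with a hard threshold -- no floating point needed."""
    acc = 0
    for x, w in zip(inputs, weights):
        acc += fix_mul(x, w)
    return 1 if acc >= threshold else 0
```

The same shift-and-multiply pattern maps directly onto PASM's signed 32-bit operations.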
  • kwinn Posts: 8,697
    edited 2011-06-07 23:51
    jazzed wrote: »
    Glad you brought this topic to the Propeller forum from the Robotics forum.

    Since we were discussing an array of Propellers to do this it seemed like the natural choice.
    Guess we need to start looking for an appropriate camera for image capture if recognition is one of the goals of the Artificial Neural Network "ANN".

    Agreed, but we also need audio if we are to process the usual inputs to any intelligence.
    I agree with listening for messages. Imagine a COG listening to pre-determined HUB locations for a mailbox flag.

    On further thought it seemed like a less bandwidth intensive approach than sending messages to every connected neuron. It also seemed to be closer to the way our nervous system works.
    In the simplest case, a mailbox could be stuffed with information received from a "transport" COG. Neurons can send information to other Neurons via the transport COG. Neural network connections themselves are a bit "fuzzy" to me right now and deserve more reading.

    True. A message could be broadcast and all the neurons could process it as required without regard as to how it was received. What the message is and how the neurons process it is a bit fuzzy to me as well. As Steve Ciarcia at Circuit Cellar said, "my low-level language is solder," and so is mine.
    The details of a Neuron would be determined later, but I expect that "K" Neural processing "jmpret threads" could be active in a COG.

    Some form of LMM and use of hub memory should allow a cog to simulate multiple neurons.
    People always fret about floating point with a Neuron, but I contend all that is really necessary is signed arithmetic and Propeller is pretty good with that using 32 bit longs.

    I don't really see floating point or even signed arithmetic as being required for this, and the beauty of the prop is that messages are not limited to any specific bit length. You may have to write a driver but you can choose whatever bit length suits your requirement.
    If a COG based Neural processor can be developed, the actual physical transport mechanism can be whatever the user chooses. The only requirement for interfacing with a Neuron is the mailbox.

    That is why I proposed a cube or hypercube connection scheme. It allows for a low latency communications scheme while allowing a very flexible neuron interconnect method.
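The "multiple neurons per cog" idea can be sketched as a round-robin service loop. This is illustrative Python, not Spin or LMM code, and the `Neuron`/`cog_loop` names are invented for the sketch:

```python
# Round-robin sketch: one "cog" loop services several neuron slots
# held in shared ("hub") memory, in the spirit of jmpret-style
# cooperative threads on the Propeller.

class Neuron:
    def __init__(self, weights, threshold):
        self.weights = weights
        self.threshold = threshold
        self.output = 0

    def step(self, inputs):
        # Plain integer weighted sum with a hard threshold.
        acc = sum(x * w for x, w in zip(inputs, self.weights))
        self.output = 1 if acc >= self.threshold else 0

def cog_loop(neurons, inputs, passes=1):
    """One cog visits each neuron in turn, like cooperative threads."""
    for _ in range(passes):
        for n in neurons:
            n.step(inputs)
    return [n.output for n in neurons]
```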
  • Martin_H Posts: 4,051
    edited 2011-06-08 07:35
    I really think you have two projects here.

    One project is building interconnect software to allow multiple propeller chips to form a simple network and cooperate on a task. If you solve this problem correctly it will be useful for more than just neural networks. This problem has been solved several times over the past 30 years, so it would be best to copy a known good approach. Obviously in a much reduced and less general form because of the limited resources.

    The other project is simulating a neural network on a propeller or propellers using this interconnect software. But it might be worth just simulating a small network on a single propeller first. With 32K of RAM you should be able to have a sparse matrix of maybe 100 neurons to write all the math routines required.

    If you allow multi-cast I would be curious how you'll handle bus collisions. In the robotics forum I was suggesting copying the concepts used with TCP/IP in a much reduced form. Ethernet, underneath TCP/IP, lets nodes broadcast on the bus, and if transmissions collide they rebroadcast after a back-off algorithm.
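For reference, the back-off behaviour Martin_H describes is Ethernet's truncated binary exponential backoff. A minimal sketch (slot-time granularity is assumed; the 10-collision cap matches classic Ethernet):

```python
import random

# Truncated binary exponential backoff, as used by Ethernet's CSMA/CD.
# After the n-th collision a node waits a random number of slot times
# drawn from 0 .. 2**min(n, 10) - 1.

def backoff_slots(collision_count, rng=random.randrange):
    """Return how many slot times to wait after `collision_count` collisions."""
    k = min(collision_count, 10)  # cap the contention window at 2**10 slots
    return rng(1 << k)
```

A reduced Propeller version could use CNT as its slot clock and a cheap pseudo-random source.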
  • kwinn Posts: 8,697
    edited 2011-06-08 10:19
    Martin_H wrote: »
    I really think you have two projects here.

    One project is building interconnect software to allow multiple propeller chips to form a simple network and cooperate on a task.

    Absolutely. The first challenge would be to settle on an interconnect scheme and a communications protocol.
    If you solve this problem correctly it will be useful for more than just neural networks. This problem has been solved several times over the past 30 years, so it would be best to copy a known good approach. Obviously in a much reduced and less general form because of the limited resources.

    Good point. I threw out the cube/hypercube idea for its simplicity, ease of expansion, simple routing, and high speed. Other connection schemes should be discussed before a decision is made.
    The other project is simulating a neural network on a propeller or propellers using this interconnect software. But it might be worth just simulating a small network on a single propeller first. With 32K of RAM you should be able to have a sparse matrix of maybe 100 neurons to write all the math routines required.

    I was thinking that two of the TetraProp boards would be good for this. I was also considering laying out a board with a 3x3 array of props with one prop as a master that provides the clock and software download to the other 8 props. A stack of 3 such boards should be a pretty good start.
    If you allow multi-cast I would be curious how you'll handle bus collisions. In the robotics forum I was suggesting copying the concepts used with TCP/IP in a much reduced form. Ethernet, underneath TCP/IP, lets nodes broadcast on the bus, and if transmissions collide they rebroadcast after a back-off algorithm.

    My thought was also to use a reduced form of TCP/IP, with the prop address being related to its physical location in the array. Possibly a 12 bit address with 4 bits each for X, Y, and Z coordinates in the array. Individual neuron numbers in each prop would be appended, much as host addresses are appended in IP today. So it would be "PropCoordinate.NeuronNumber".

    Again, alternate methods should be discussed before a decision is made. Perhaps some form of time slicing similar to hub access on the propeller could be combined with the CSMA/CD protocol used by Ethernet. With all the props in the array in such close proximity they could share a common clock of some kind to coordinate things.
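The 12-bit X/Y/Z addressing packs neatly into a few bit operations. A Python sketch (the field layout, X in the high nibble and Z in the low, is an assumption for illustration):

```python
# Pack/unpack the 12-bit prop address described above: 4 bits each for
# X, Y, and Z. Appending a neuron number gives the proposed
# "PropCoordinate.NeuronNumber" form.

def pack_addr(x, y, z):
    """Pack coordinates (each 0-15) into a 12-bit address."""
    assert 0 <= x < 16 and 0 <= y < 16 and 0 <= z < 16
    return (x << 8) | (y << 4) | z

def unpack_addr(addr):
    """Recover (x, y, z) from a 12-bit address."""
    return (addr >> 8) & 0xF, (addr >> 4) & 0xF, addr & 0xF

def full_id(x, y, z, neuron):
    """Human-readable 'PropCoordinate.NeuronNumber' form (hex coordinate)."""
    return f"{pack_addr(x, y, z):03X}.{neuron}"
```

Routing then reduces to comparing one nibble at a time against the local prop's coordinate.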
  • jazzed Posts: 11,803
    edited 2011-06-08 11:33
    The basic functional units seem to be:
    1. Create "threaded" Neurons.
    2. Create a protocol independent communications layer.
    3. Connect the communications layer to some transport layer.
    4. Connect any hardware to support the transport layer.
    Some parallelization of effort is possible of course.

    The type of hardware connections used is less important than the methodology for using it. Threaded Neurons would have training/operational modes and can read/write intra-chip mailbox pointers. Protocol independent communications layer will encode/decode buffers with inter-chip end-points. Transport layer would be encapsulating data on TCP/UDP or some other underlying protocol.

    Additional topics for research:
    • A connection manager (nameserver?) may be required for inter-node communications.
    • Some form of Genetic Algorithm (GA) to control directions of growth (not computational GA).
  • kwinn Posts: 8,697
    edited 2011-06-08 20:32
    Is anyone aware of any available code or tutorials for simulating neurons and/or neural networks?
  • jazzed Posts: 11,803
    edited 2011-06-08 21:11
    kwinn wrote: »
    Is anyone aware of any available code or tutorials for simulating neurons and/or neural networks?


    Here's a source code repository: http://tralvex.com/pub/nap/zip/
    I like nasanets.zip - except it has weird C BEGIN/END body macros :)

    Same site, different code in C: http://tralvex.com/pub/nap/nn-src/


    Wikipedia has lots of info. Some algorithms are buried there.

    http://en.wikipedia.org/wiki/Artificial_neural_network
    http://en.wikipedia.org/wiki/Artificial_neuron
    http://en.wikipedia.org/wiki/Perceptron
    http://en.wikipedia.org/wiki/Feedforward_neural_network#Multi-layer_perceptron

    http://en.wikipedia.org/wiki/Computational_neuroscience

    Here is my list of PDP links.

    http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain&p=1006938&viewfull=1#post1006938

    Some Spin character-recognition code, poorly documented.

    http://forums.parallax.com/attachment.php?attachmentid=52090&d=1202847726
  • Leon Posts: 7,620
    edited 2011-06-08 23:53
    This is the best resource for neural net material:

    ftp://ftp.sas.com/pub/neural/FAQ.html
  • prof_braino Posts: 4,313
    edited 2011-06-09 05:29
    TCP/IP with the prop address being related to its physical location in the array. Possibly a 12 bit address with 4 bits each for X, Y, and Z

    Does this mean there is a common bus that handles all the props communication?
    I had the idea that biological neurons just have direct connections between neurons, and "use" determines the amount of traffic on a given channel.
    Obviously, I have a lot of reading to catch up....
  • Leon Posts: 7,620
    edited 2011-06-09 05:58
    ANNs usually have direct connections between the neurons, as is the case with real ones, which can have thousands of connections to other neurons. It depends on the type of neuron, though.
  • jazzed Posts: 11,803
    edited 2011-06-09 07:22
    If a propeller had infinite memory, any number of "direct connections" would be possible.

    A "bus architecture" or some other connection interface will be required for multiple propeller connections. There are different ways to do it. Hopefully the code can be designed so that it can be used on any hardware connections.
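The "usable on any hardware connection" goal amounts to hiding the link behind a mailbox-style interface. A toy Python sketch with a loopback stand-in for the real transport (the class and method names here are invented, not from the thread):

```python
# Transport-agnostic sketch: neurons talk only to send()/poll() on a
# transport object; the physical layer behind it is interchangeable
# (serial bus, Ethernet, or just shared hub memory).

class LoopbackTransport:
    """Stand-in for a real link; queues messages per destination mailbox."""
    def __init__(self):
        self.mailboxes = {}

    def send(self, dest, message):
        # A "transport COG" would do this on behalf of a sending neuron.
        self.mailboxes.setdefault(dest, []).append(message)

    def poll(self, addr):
        """Neuron-side check of its own mailbox; returns None if empty."""
        box = self.mailboxes.get(addr, [])
        return box.pop(0) if box else None
```

Swapping in a different transport only means reimplementing `send`/`poll` over the chosen wires; the neuron code never changes.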
  • Duane Degn Posts: 10,588
    edited 2011-06-09 08:01
    I keep reading about the Prop's limited memory being a problem in creating an ANN. Would this be a good application for external memory?

    If so, how frequently would the memory need to be written to? Would flash work as external memory or would one need to use SRAM?
  • Leon Posts: 7,620
    edited 2011-06-09 08:36
    The usual approach is a fast processor and RAM; it's simple, cheap, and works very well. The very successful Talon ANPR system uses that approach. I used to work for the company that developed it (in a different division), and can remember seeing cars fitted with cameras for testing the system, parked at the entrance to our car park.
  • jazzed Posts: 11,803
    edited 2011-06-09 09:48
    Duane Degn wrote: »
    I keep reading about the Prop's limited memory being a problem in creating an ANN. Would this be a good application for external memory?
    Memory is a problem because of the typical implementation. Direct connections in this case are really pointers or indices to memory locations. If the neurons are distributed across multiple processors, memory should not be an issue. Direct connections can be made via inter-processor virtual connections. While that will slow things a little, having multiple CPU/MCUs would allow more computational parallelization.
    Duane Degn wrote: »
    If so, how frequently would the memory need to be written to? Would flash work as external memory or would one need to use SRAM?
    A centralized SDCard or wear-leveled flash store may be useful for saving neuron "weights" ... a per-propeller flash like what SpinSocket-Flash offers could be used for saving weights, but it's probably not necessary. SRAM would be required for normal operations.
  • Leon Posts: 7,620
    edited 2011-06-09 09:58
    It's a good idea to decide on the application before fixing on a particular technique and hardware. Different applications require different types of network.
  • jazzed Posts: 11,803
    edited 2011-06-09 10:06
    Leon wrote: »
    It's a good idea to decide on the application before fixing on a particular technique and hardware. Different applications require different types of network.
    Sure, makes sense. Any solution with Propellers would be just as good as the next Propeller solution, though. I'm not looking at using different CPU/MCU hardware at this point, so don't bother with that suggestion :)

    Since you are interested enough in the thread, it would be really nice for you to provide a couple of examples of pairing applications and techniques to further the cause.
  • Leon Posts: 7,620
    edited 2011-06-09 10:13
    Multi-layer perceptrons are often used for pattern recognition, like recognising hand-drawn numerals.
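As a toy illustration of why the "multi-layer" part matters for pattern recognition, here is a two-input MLP with hand-picked (untrained) weights computing XOR, the classic function a single-layer perceptron cannot represent:

```python
# Minimal multi-layer perceptron with hand-chosen weights: XOR.
# A single-layer perceptron cannot compute XOR because the classes
# are not linearly separable; one hidden layer fixes that.

def step(x):
    """Hard-threshold activation."""
    return 1 if x >= 0 else 0

def xor_mlp(x1, x2):
    h_or = step(x1 + x2 - 0.5)        # hidden unit acting as OR
    h_and = step(x1 + x2 - 1.5)       # hidden unit acting as AND
    return step(h_or - h_and - 0.5)   # output: OR but not AND
```

Real recognisers learn such weights (e.g. by back-propagation) rather than hand-picking them.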
  • jazzed Posts: 11,803
    edited 2011-06-09 10:20
    Leon wrote: »
    Multi-layer perceptrons are often used for pattern recognition, like recognising hand-drawn numerals.
    True. There are multiple references to multi-layer perceptrons (a neural node form) being used in interpreting obfuscated information. In what cases would they not be appropriate?
  • Leon Posts: 7,620
    edited 2011-06-09 10:24
    Reducing high-dimensional data to low-dimensional data, as is conventionally done with multi-dimensional scaling. Kohonen's self-organising map techniques are used for that sort of thing, with unsupervised learning.
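A single Kohonen update step is short enough to sketch. This Python version finds the best-matching unit and pulls it toward the input; the neighbourhood function and decaying learning rate a full SOM would use are omitted to keep the sketch small:

```python
# One Kohonen self-organising map update step (unsupervised learning):
# find the best-matching unit, then move its weight vector toward the
# input. A complete SOM also updates the winner's neighbours.

def bmu(weights, x):
    """Index of the best-matching unit (smallest squared distance to x)."""
    def sqdist(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    return min(range(len(weights)), key=lambda i: sqdist(weights[i]))

def som_step(weights, x, lr=0.5):
    """Pull the winning unit's weights toward x; return the winner index."""
    i = bmu(weights, x)
    weights[i] = [wi + lr * (xi - wi) for wi, xi in zip(weights[i], x)]
    return i
```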
  • jazzed Posts: 11,803
    edited 2011-06-09 10:30
    Leon wrote: »
    Reducing high-dimensional data to low-dimensional data, as is conventionally done with multi-dimensional scaling. Kohonen's self-organising map techniques are used for that sort of thing, with unsupervised learning.
    Fascinating :) ... http://www.ai-junkie.com/ann/som/som1.html
  • Leon Posts: 7,620
    edited 2011-06-09 10:40
    Jim Austin at the University of York is doing some interesting stuff with neural net technology:

    http://www.cs.york.ac.uk/auramol/

    I was thinking of doing a PhD with him when I worked for British Aerospace, and visited him once. He still had some transputer hardware I'd designed some years before.

    The Propeller isn't suitable for any of those applications I've mentioned, and I can't think of one to which it is particularly well-suited. The architecture is all wrong, it hasn't got enough on-chip memory, external memory will be too slow, and the comms channels will slow everything down still more. You could try implementing something simple, and see how you get on, if you don't believe me.
  • jazzed Posts: 11,803
    edited 2011-06-09 11:34
    Leon wrote: »
    You could try implementing something simple, and see how you get on, if you don't believe me.

    That's the direction this is headed. I have no reason to question your judgement. Not trying is not knowing. One way or another something interesting will come out of it.

    If you want to help, that's great. Otherwise, you should find something else to do with your time.
  • Leon Posts: 7,620
    edited 2011-06-09 11:39
    Have you decided what it's going to do?
  • jazzed Posts: 11,803
    edited 2011-06-09 11:45
    There are various ideas floating around. Writing code and developing infrastructure to support those ideas at this point is the highest priority. I'm writing other code just now though.

    Maybe you would like to contribute to the infrastructure? It would be a great way to lift your Propeller standing and demonstrate that you're not just a distractor suggesting other MCUs.
  • kwinn Posts: 8,697
    edited 2011-06-09 11:54
    Does this mean there is a common bus that handles all the props communication?
    I had the idea that biological neurons just have direct connections between neurons, and "use" determines the amount of traffic on a given channel.
    Obviously, I have a lot of reading to catch up....

    The bus architecture is still under discussion. What I proposed was multiple buses. One for each row and column of props on a board/board plane, and one for each stacked row of props. Each prop would then have a separate connection to one x, one y, and one z bus. This makes for simple fast routing, low bus contention, and redundant communications paths. Neuron connections would be virtual connections using these buses.

    I also suggested that neurons "listen" for signals from neurons they have connections to rather than have them send multiple messages or messages with multiple recipients. Seems like it would keep message traffic down.
  • jazzed Posts: 11,803
    edited 2011-06-09 12:05
    With virtual connections available, only one physical connection to each processor is required. Redundant connections would be best however. There are various ways to achieve physical connectivity.

    The software infrastructure should support any physical mechanism if possible to allow optimizing for *any* connection. If you want image recognition for example, the software should be able to configure the network for that purpose, but it should also be able to configure a network for aural processing too if there are enough resources available.
  • kwinn Posts: 8,697
    edited 2011-06-09 22:24
    The problem with one physical connection to each processor is that it limits the bandwidth and reduces the possibility of redundancy. Hardware such as the ultra-high-speed switches used in clusters is a possible single point of failure. Better to have multiple connections and a simple robust physical connection scheme. Having each prop connected to multiple buses does that and can increase bandwidth and reduce latency at the same time.

    Physical mechanisms for optimizing any connection scheme would range from extremely difficult to impossible. For this particular project we should settle for a software connection mechanism that optimizes use of the chosen hardware connection scheme for the purpose intended.

    In other words, instead of trying to come up with an optimum connection scheme for current neuron software and attempting to fit the prop to that, let's come up with an optimum connection scheme for an array of Propellers and write neuron software that takes advantage of the prop's strengths.
  • Rayman Posts: 14,876
    edited 2011-06-10 08:03
    I did some error back-propagation neural network stuff a long time ago... It'd be fun to do that with the Prop and include some visual feedback...

    I wonder if the ROM math tables would be useful for generating a nice non-linear response...
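Table-driven activation in the spirit of Rayman's ROM-table idea can be sketched as follows. The table size and input range here are arbitrary choices, and a real Propeller version would index the chip's ROM sine/log tables rather than a precomputed sigmoid table:

```python
import math

# Sketch of a table-driven non-linear response: precompute a sigmoid
# into a small lookup table, then activate by table lookup instead of
# computing exp() at run time -- the kind of trick ROM tables enable.

TABLE_SIZE = 256
IN_MIN, IN_MAX = -8.0, 8.0  # arbitrary illustration range

SIGMOID_TABLE = [
    1.0 / (1.0 + math.exp(-(IN_MIN + (IN_MAX - IN_MIN) * i / (TABLE_SIZE - 1))))
    for i in range(TABLE_SIZE)
]

def activate(x):
    """Clamp x to the table's range and look up the non-linear response."""
    x = max(IN_MIN, min(IN_MAX, x))
    i = round((x - IN_MIN) / (IN_MAX - IN_MIN) * (TABLE_SIZE - 1))
    return SIGMOID_TABLE[i]
```

The lookup costs one scaling and one memory read per activation, which suits an integer-only cog far better than a series expansion of exp().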
  • prof_braino Posts: 4,313
    edited 2011-06-11 08:35
    "feedforward networks with a single hidden layer and trained by least-squares are statistically consistent estimators of arbitrary square-integrable regression functions under certain practically-satisfiable assumptions regarding sampling, target noise, number of hidden units, size of weights, and form of hidden-unit activation function (White, 1990)."

    Is this what we're talking about then, or is the quote talking about an option chosen for a particular research case?

    Would a simple application for neural net be something like coordinating legs on a hexapod robot, so the feet all touch the ground at the same time on uneven surfaces?