DAVE HEIN: @Humanoido, the Big Brain should be able to re-wire around defective parts of its brain, just like a human brain can partially recover when there is injury to part of the brain.
It's agreed the Big Brain should have the capability to do some self-rewiring. This would likely take place in software: if one section of the Brain goes DOA, another subroutine could handle the rerouting.
Self programming is an interesting topic. A defined set of wiring parameters could address the most important parts of the Brain.
Challenges exist for rewiring when failures occur in hardware or cause hangs. Self rewiring would most likely address a loose wire or a fused port.
One suggestion is to have self tests in place immediately upon boot-up and do any self programming to rewire at that time. A lot will depend on the comprehensiveness of the diagnostics.
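As an illustrative sketch of the boot-time self-test and software rewiring described above (Python, with invented section and function names, not project code): each section runs its diagnostic at boot, and any function assigned to a failed section is diverted to a healthy spare.

```python
# Hypothetical sketch of boot-time self-test and software "rewiring":
# each Brain section runs a diagnostic, and any function assigned to a
# failed section is rerouted to a healthy spare. All names are made up.

def run_diagnostics(sections):
    """Return the set of sections that pass their self-test."""
    return {name for name, self_test in sections.items() if self_test()}

def reroute(assignments, healthy, spares):
    """Remap any function whose section failed onto a healthy spare."""
    pool = [s for s in spares if s in healthy]
    routed = {}
    for function, section in assignments.items():
        if section in healthy:
            routed[function] = section
        elif pool:
            routed[function] = pool.pop(0)  # divert to a working spare
        else:
            routed[function] = None         # no spare left: function offline
    return routed

# Example: the "span2" section fails its self-test at boot-up.
sections = {"stem": lambda: True, "span1": lambda: True, "span2": lambda: False}
healthy = run_diagnostics(sections)
new_map = reroute({"vision": "span2", "motor": "span1"}, healthy, ["span1", "stem"])
```

How much such a scheme can actually recover depends, as noted above, on the comprehensiveness of the diagnostics.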
Automatic program modification is the topic of some posts. An area in software could be reserved for this type of code.
What about breakage due to accident or injury? Let's say the humanoid robot topples to the floor and the Brain is damaged. Diagnostics may be the first step in assessing the damage and creating reconstructive patterns of healing.
Distributed diagnostics could run in the background, making sure all primary Brain parts are healthy. For example, the Brain Stem would be responsible for its own diagnostics, as would each Brain Span.
This opens up the question of redundancy. If a PING goes bad, will another one be available? Perhaps not. But if two nearly identical eyes are available like humans, a diversion to the good eye is possible. This could be as simple as changing input port identifications.
The human brain has a left hemisphere and a right hemisphere, and learning can cross over. It's possible to give task reassignments from one part of the Brain to another.
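The "divert to the good eye by changing input port identifications" idea above could be sketched like this (Python for illustration only; the port numbers, the EyePair class, and the read callback are all invented):

```python
# Illustrative sketch of redundancy by changing the input port ID:
# when the sensor on the primary port stops responding, reads are
# diverted to the redundant sensor's port.

class EyePair:
    def __init__(self, primary_port, backup_port):
        self.ports = [primary_port, backup_port]
        self.active = 0  # index of the port currently in use

    def read(self, sample_port):
        """Read the active port; on failure, switch to the backup."""
        value = sample_port(self.ports[self.active])
        if value is None and self.active == 0:
            self.active = 1  # divert to the good eye
            value = sample_port(self.ports[self.active])
        return value

# Example: port 4 (primary eye) is dead, port 5 (backup) returns data.
readings = {5: 128}
eyes = EyePair(4, 5)
value = eyes.read(lambda p: readings.get(p))
```

The rerouting here is pure software, which matches the point above: with two nearly identical sensors, recovery can be as simple as swapping a port identification.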
BST is not installing properly on the Mac. This is holding up Big Brain development. Since the Propeller Software Tool does not run on the Mac, there is no way to get programs into the prop.
Big Breakthrough in Machine Brain Technology!
100,000 Neurons into the 1st 100 Propellers
This is a big breakthrough in big brain machine technology using Propellers. I have successfully delivered (Neural Matter Injected) 100,000 neurons into the first 100 Propellers. It's time to celebrate!!! See you tomorrow!
When I first read this I thought, cool, 1,000 neurons, but I was more interested in how you injected the neurons. Now that I have messed around with ANNs a bit, I am finding that 1,000 neurons in a single Propeller is a huge accomplishment.
The Propeller has 32K of general-purpose RAM. That means 32K/1000 = 32.768 bytes for each neuron, which is equivalent to 8 PASM instructions or 8 longs. There is also 16K of COG RAM. It is possible that you could get a total of 50 bytes for each neuron, but you'd have to do some HUB cleanup after loading the COGs.
I believe 1000 virtual neurons are the most feasible approach, but also the most complicated in terms of supporting logic.
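The arithmetic above, written out as a quick calculation (Python used here only to check the numbers, not as project code):

```python
# Mike G's memory budget: 32 KB of hub RAM divided among 1,000 neurons,
# and the per-neuron budget expressed in longs (4 bytes each, the size
# of one PASM instruction).

HUB_RAM = 32 * 1024                      # 32,768 bytes of hub RAM
NEURONS = 1000

bytes_per_neuron = HUB_RAM / NEURONS             # 32.768 bytes each
longs_per_neuron = int(bytes_per_neuron) // 4    # 8 longs / 8 PASM instructions
```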
Please explain the method or methods used to estimate 1,000+ neurons per Propeller. How is Propeller memory allocated to support your neuron structure?
Estimates as to what will fit into a Propeller chip can be based on the amount of available memory less the size of a single neuron times the number of neurons. Another option is to read the Propeller Tool memory map from actual downloads.
By this statement, I have to assume you are not using virtual neurons. You are actually allocating RAM. The neuron map has to be quite small. Can you give the community a map of the neuron in HUB RAM with data types?
Post 755 is quite accurate, as individual exampling neurons were code-trimmed to fit 32 bytes, which is a good handful of statements each. The virtual neurons must of course fit into memory. They're virtual because no physical biological or hardware matter is used in their representations. The injectors are not included at this time, though maybe they will be counted in the future; I mean the code appends to the inside or outside of the neuron. I think it's similar to that now, but I don't know how to explain it - new machine terminology is needed, and that will take some time to define.

I had a nice screen shot showing the memory map of this, which is how I kept track of what was available, but did not find the pic on the restored hard drive material (yet). I figure it will be necessary to make a new memory pic when I have a computer that can run code from a working BST. I have actually spent most of the time, including the part in the quote on injection, getting that many examples to fit and smoothly load. The injector is the brunt of the focus and in-development time. There's a post about neural data types and what is expected from the neurons.

For the neural map, it's currently the same neuron which is cloned (I never said cloned), then injected. In the future, I plan to evolve (I never said evolve) these neurons using a machine DNA (I never said machine DNA), so they start out alike and then evolve. I have a great interest in evolutionary programs. For this to happen, I'm looking at fewer exampling neurons so they have more memory and can become more filled with function.

All of this is still in the works and at some stage of design or development. When the new Mac arrives, one of the first programs to install will be a good design program to work up some of these maps and charts. But right now, it's really just at the exampling level and ultimately simple.
Pictorial map representing Big Brain neural
exampling layout, includes early multi-state
machine neurons, propeller array, three populated
partitions, clock, encapsulation, neural injector,
and clone disseminator
Note that Partitions 1, 2, and 3 are only the initial ones; an unlimited number of partitions may be attached, their count determined by the amount of resources dedicated to the Brain. In the three-Partition model, varying populations are possible, though each is capped at exactly 50. The upper Partition is always the Expandor, while the lower will always hold 50.
Every rectangle in 1, 2, and 3, and every tick mark in 2 and 3, represents one Propeller chip with eight cogs. The design shows Partitions 1, 2, and 3 representing 400 cogs each.
The Disseminator, Cloner, and techniques of individual prop densities have not been introduced and no additional information is available.
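As a quick sanity check on the counts quoted above (illustrative Python, not project code): 50 chips per partition times 8 cogs per chip gives the 400 cogs per partition, and three initial partitions give the totals used later in the thread.

```python
# Partition figures: 50 Propeller chips per partition, 8 cogs per chip,
# three initially populated partitions.
CHIPS_PER_PARTITION = 50
COGS_PER_CHIP = 8
PARTITIONS = 3

cogs_per_partition = CHIPS_PER_PARTITION * COGS_PER_CHIP  # 400 cogs
total_chips = CHIPS_PER_PARTITION * PARTITIONS            # 150 chips
total_cogs = cogs_per_partition * PARTITIONS              # 1,200 cogs
```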
The diagram could already be revised. I noted that the output stators are shown on the neuron, but the actual output devices could also be shown on the partitions of the Propeller array. For each rectangle and dashed line appearing inside the Propeller Array Blocks, attach one LED output device.
Each Brain Partition has 50 LEDs that run simultaneously in parallel. To keep things simple, I have connected a single LED on each Propeller chip at the P15 spot. Why P15? This is purely for physical-location reasons on the Injector planes, as it allows a higher density of prop chips and the LEDs' wiring to coexist on a solderless breadboard. This is a new development.
The EXO is handled differently because it was built first using PPPBs and a ten pin connector socket array that spans the side of the prop which goes to VDD, VSS, P24, P25, P26, P27, P28, P29, P30, P31. These Spans use the board's existing power LEDs converted to Data LEDs as outputs.
Yes, good observation! I thought about and considered as many elements as possible including memory constraints long before April of 2011. Overall, I've thought many years about this project. It was stepped up in development around the year 2002 with up to five processors (OEM BASIC Stamps included) to control a humanoid. The example humanoid was built using parts from a Parallax Toddler robot kit. But of course things just grew from that point on. It's now nine years and around 100 parallel multi-processing machines and robots built to get to the Big Brain. If you think this happened overnight, it did not!
There are two main challenges in the field of Humanoid Robotics. One is the ability to deliver a long lasting source of power. The second is having access to a powerful Brain. Working with chemicals is not as interesting or safe as working on a Brain, so I chose the Machine Brain to develop.
Humanoido, being that you thought about memory constraints for a long time, please explain in technical detail what you mean by "code trimmed to fit 32 bytes". What were the challenges you had to overcome?
I was using trial and error with program size, i.e. if it worked, it fit; if it failed, it didn't fit. That was a kind of trimming action to fit the 32K HUB. That worked but needed a more specific solution. First I searched for a program that could calculate the size of specific Propeller code in HUB, but none was available. Next, I used the memory map in the Propeller Software Tool and monitored the usage of LONGs etc. For each piece of code, I snapped the HEX screen and filed it all together. I also did some short study in the past about the size of code statements and added them together to determine usage. So there you have three methods.
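The third method (summing per-statement sizes) might be sketched like this in Python. The byte counts per statement type here are invented placeholders for illustration, not measured Spin bytecode sizes.

```python
# Sketch of estimating hub usage by tallying an assumed size for each
# statement type. The per-statement byte counts are assumptions only.
STATEMENT_BYTES = {"assign": 4, "if": 6, "call": 5, "return": 1}

def estimate_usage(statements):
    """Sum assumed per-statement sizes to estimate hub RAM usage."""
    return sum(STATEMENT_BYTES[s] for s in statements)

def fits_in_hub(statements, hub_bytes=32 * 1024):
    """Trial-and-error test, done on paper: does the estimate fit?"""
    return estimate_usage(statements) <= hub_bytes

# A tiny hypothetical neuron body, five statements long.
neuron = ["assign", "assign", "if", "call", "return"]
size = estimate_usage(neuron)  # 4 + 4 + 6 + 5 + 1 = 20 bytes, by assumption
```

In practice the Propeller Tool's memory map (method two) gives the authoritative figures; a tally like this only gives a rough pre-download estimate.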
I'm looking forward to new code development and rewriting the old code from memory (after the hard drive crash), only this time newer goals will be in place. It's likely there will be two sections to code development: A) Discrete and B) Unionized. A is simply the smallest and best possible demo of a function; B is the connection of one or more functions.
Success was achieved today in installing the tools necessary to program Propellers again, only this time - using the Mac computer. Good News! But now there are other challenges in the same older MacBook (OSX 10.4 Tiger) and I'll need to install a newer OSX (OS10.5 Leopard) after obtaining the install DVD set, and then reinstall the drivers and prop tools. Some of those challenges were talked about in the other thread.
This MacBook computer is to hold the system over until the new more powerful Mac is obtained. The reason I use the word "obtained" is because it's a real ordeal to get a computer out here in this stick of the Earth. Apple pulled a fast one on China and will only ship outdated less powerful CPU versions to China Apple Stores to dispose of their old inventory. What a bummer!
That means I'll need to order the Mac computer from the USA and have it shipped to an alternate country where I must travel there in person and pick it up, then hand carry it back into China as a "used computer." Why? Because shipments of new technology items are confiscated in China at China Customs. Another bummer! So I simply don't know when things will be up to speed again.
I would like to follow Dave Hein's good advice and when everything is up to par, begin posting code for specific snippet review. So like the open source development trial, we'll do a stint of posting.
On another venue, the Brain is continuing to receive some new hardware additions, so this can be a time out for collecting designs, developing ideas, hardware extensions, and efficiency mods. We're looking at expansion with another big partition. So if you have 50 props to spare, that will be just about right... we intend to assimilate...
Partitions            x      x      x      x
Props per partition   50     50     50     50
Cogs per partition    400    400    400    400
Cumulative cogs       400    800    1200   1600
I was using trial and error with program size, i.e. if it worked, it fit, if it failed, it didn't fit. That was a kind of trimming action to fit the 32K HUB
You did not answer the question. Probably my fault for not being specific enough.
You said, "individual exampling neurons were code trimmed to fit 32 bytes". I'm not querying about the entire program, only the individual neuron code trimming. I guess a more specific question is: what functionality is contained in a 32-byte neuron? I'd think that code trimming at the byte level means optimization or removing a function. You must have weighed pros and cons related to functionality. We're talking about 32 bytes here. Admittedly, my memory is not the best, but I'm sure I'd remember whittling code to fit in a 32-byte space. Anyway, that was the basis of my question, "What were the challenges you had to overcome?"
As written in post 766, it seems like your test case was simply about size; if it worked, it fit, if it failed, it didn't fit.
I guess a more specific question is, what functionality is contained in a 32 byte neuron?
Already answered your question and posted. If you look back across more than one post, you can see the details of each neuron. All these tested types fit. The SN is probably the best example. But these are the maximum quantity exampling ones. You can certainly build much larger neurons in smaller numbers.
As written in post 766, it seems like your test case was simply about size; if it worked, it fit, if it failed, it didn't fit.
Size matters. The number of neurons matter. Both can work toward determining characteristics of a neural net. That was the first method which was simply trial and error (and easy). However the other methods are more sophisticated, accurate and predictable. There are three methods used, not just one.
I was using trial and error with program size, i.e. if it worked, it fit, if it failed, it didn't fit. That was a kind of trimming action to fit the 32K HUB. That worked but needed a more specific solution. First I searched for a program that could calculate the size of specific propeller code in HUB but none was available. Next, I used the memory map in the Propeller Software Tool and monitored the usage of LONGs etc. For each code, I snapped the HEX screen and filed it all together. I did some short study in past about the size of code statements and added together to determine usage. So there you have three methods.
Don't knock it. There's a lot of exampling you can do with 32 bytes.
Already posted. If you look back, you can see the details of each neuron.
URL?
But these are the maximum quantity exampling ones.
That's what I'm looking for... functionality of the exampling ones
Size matters. The number of neurons matter. Both can work toward determining characteristics of a neural net.
Sure, size is a constraint in 32K of main RAM. I would think that functionality is king though.
Don't knock it. There's a lot of exampling you can do with 32 bytes.
??? I'm not knocking anything. There's a lot of exampling you can do with 32 bytes. This is exactly what I'm looking for, can you provide an example? Source code is best and easiest to understand.
Looking forward to your neuron functionality post.
URL? That's what I'm looking for... functionality of the exampling ones. Looking forward to your neuron functionality post
Already posted. Look at specific posts about neurons. Just pick out the type of neuron that you're interested in and read about what it can or cannot do. It's time again to update the index. It will more readily help find posts.
Sure, size is a constraint in 32K of main RAM. I would think that functionality is king though.
Yes we do need function, even with the smallest (SN) or test neuron.
Quantity (1) will determine size (2), or vice versa.
Size will determine function (3).
There's a lot of exampling you can do with 32 bytes.
How long will you continue to be suckered by this charade? Either Humanoido has something he can post source code for -- right now -- or he has nothing. There is no middle ground.
Mike, How long will you continue to be suckered by this charade? Either Humanoido has something he can post source code for -- right now -- or he has nothing. There is no middle ground. -Phil
Thank you Phil, you have a remarkable delicate way with words. I will always remember your patience, support and kindness. May you have a long life and live in good health.
Page 33
641 Remote Brain Posting - Carry your posting "computer" in your shirt pocket
642 Online Brain Index Updated to Page 33
643 Brain Add On Computers - Are they useful?
644 Phil comments Lenovo, T32, WinXP, Mandrake Linux
645 Humanoido comments pc without CDs
646 Stats on the Restored Hard Drive - the Brain Wins! Almost nothing is restored
647 Schematic Request Related to Brain Development
648 Phil comments how to retrieve attachments
649 Humanoido comments
650 Apple AMD Radeon HD 6750M Graphics Card 480 Stream Processors
651 Brain Gets Apple Mac Computer
652 Stepping up the Big Brain
653 Brain Config in April
654 Brain Config in May
655 Brain Programming Languages Selection SPIN, OPENCL, PBASIC, PASM, XCODE
656 Brain Relegated to ProMac Symbiotic Union
657 New Ideas for Autonomous Brain Backup - Cloning from Backup
658 Robotic Brain Mobility & Robotics
659 Zoot comments trademarks
660 Brain Naming Conventions
Page 34
661 Big Brain Design Breakthroughs with PROPELLERS & MACS
662 Big Brain Gains Utilizing Hard Drives
663 Big Brain's Supplanted Memory
664 Brain Programming in XCode 4
665 *** Change in Brain Project Development ***
666 Phil comments less stream of consciousness
667 Duane Degn comments
668 Mike G comments you're working on the cutting edge
669 Brain Project Overview - Props and computers
670 Humanoido comment
671 Humanoido comment celebration
672 Phil comments incredulous
673 Dave Hein comments be patient Phil
674 Is Brain DNA Genome Possible?
675 Brain Life Power Challenges & Considerations
676 Big Brain Genome Maps Instead of Schematics
677 Parallax-Propeller-Equipped Brain Genetic Machine Genome Project
678 Big Brain Sex - Is the Big Brain Hermaphrodite, Male or Female?
679 Humanoido comments grasshopper
680 Big Brain Propeller Waves - Different clocks for different blocks
Page 35
681 Build a Big Brain Propeller Qualitative EEG Machine - Measure Big Brain's Brain Waves
682 Big Brain EEG Machine as a Diagnostic
683 Leon comments neuron example to solve xor problem
684 Big Brain State of the INIT-Neuron
685 Phil comments
686 Leon comments ANN, Hull University, MSc project, Dataglove, BAe, MAD, Brough
687 Humanoido comments
688 Mike G comments asking for neuron code
689 Leon comments Here is a neural net toolbox for Scilab
690 Humanoido comments not posting code
691 Links to Managed Neural Projects
692 Mike G comments
693 Humanoido comments no posting of development or test code for the reasons cited
694 Dave Hein comments you have expressed a lot of good ideas
695 Humanoido comments Brain development or documenting partial Brain development
696 Jazzed Tetra Prop Spins Brain Life Ideas
697 Dave Hein comments on posting code
698 Duane Degn comments on his great projects!
699 Brain Methodology
700 Humanoido comments to Dave Hein
Page 36
701 Potential of Connecting the Degn Massive LED Array
702 Future Brains with Afflictions - The 1st Guide to Machine Intelligence Sickness
703 Duane Degn comments about LED Array
704 Machine Brain Neurons and Neural Matter - New Brain Dictionary Definitions
705 Refinement Brain's Hybrid Interface - Successful Sharing Hybrid results in fewer wires
706 Brain Neural Complexity - Adding to the Brain Dictionary
707 Dictionary of Propeller Machine Brain Terms
708 Big Brain's Hyper Neural Threading HNT
709 Big Brain Domain Partition - Propeller DPs increase volume performance
710 Propeller Brain Change with Supporting Computers
711 Leon comments TFLOPS
712 What is the Giant Brain? Propeller or Other?
713 Leon comments simulate the ANN on the Mac
714 The Propeller ANN Mac
715 Propeller Brain Mac GPUs Selected
716 Many Faces of the Giant Propeller Brain
717 Moving Toward Ultra Brain with Props & Macs
718 Giant Brain TFLOPS
719 Leon comments 5,000 I/Os
720 Big Brain Backup Terabyte Drives
Page 37
721 Massive Propeller Brain Trinary State Output Device - a plane with quadrillions...
722 Brain Merging with the Massive Transposition Machine
723 Big Brain BOSS - Brain Operating Software System
724 Brain Doctor Meeting 05.14.11
725 AI Quotations
726 ElectricAye comments Kurzweil, Chinese Room
727 Humanoido comments Chinese Room
728 New Type of Propeller Simplex Neuron SN - Modified INTNeuron and Test Neurons? (with pseudo code)
729 Phil comments problem with early AI research, top down, bottom up
730 Reverse Engineering Evolution
731 Giant Machine Brain Eats Propellers
732 Little Parts - a Machine in the Machine - Isolate your brains
733 ElectricAye comments
734 Dave Hein comments modeling nerve cells
735 Phil comments cannot simulate intelligence without considering environmental factors
736 Leon comments consciousness
737 Phil comments unique possession of consciousness
738 Leon comments qualia
739 Time Stamping Human Evolution
740 Humanoido comments human consciousness and pure thought
Page 38
741 Mike G comments neurons, simple neural net
742 Defining Consciousness
743 ElectricAye comments
744 Exploring the Neural Net with the Simplex Neuron
745 Phil comments code
746 Programming to be Self Aware
747 Humanoido comments intention is to release BOSS when it reaches BETA
748 Controlling the Brain with Light
749 Mike G comments neurons
750 Simplex Neurons
751 Phil comments
752 Humanoido comments to Phil
753 Big Brain Self Rewiring
754 BST and Mac
755 Mike G comments neuron memory
756 Mike G comments neuron memory
757 Handling Neurons
758 NEURAL BRAIN MAP
759 Mike G comments neuron memory
760 Phil comments source code
Page 39
761 Brain Map Revisions
762 Mike G comments on muffler wiring
763 Brain History & Challenges - Historical roots, humanoids, multi-processing machines
764 Humanoido comments car forum
765 Mike G comments
766 Determining Memory Usage
767 The Uphill Computer Journey
768 Mike G comments neuron memory
769 Humanoido comments neuron size matters
770 Mike G comments
771 Humanoido comments
772 Phil comments
773 Humanoido comments
774 Big Brain Index Update
Humanoido, here's a very basic neuron-like SPIN program that took all of about 5 minutes to write and debug. Probably quicker than the time it took you to update the index.
CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000

DAT
pattern byte %10101100
output  byte $00

OBJ
  pst : "Parallax Serial Terminal.spin"

PUB Main | i
  pst.Start(115_200)
  waitcnt((clkfreq / 1_000 * 2_000) + cnt)

  SampleInput(0, $AC)
  SampleInput(1, $BB)
  SampleInput(2, $AC)
  SampleInput(3, $BB)
  SampleInput(4, $AC)
  SampleInput(5, $BB)
  SampleInput(6, $AC)
  SampleInput(7, $BB)

  pst.bin(output, 8)
  pst.char(13)

  SampleInput(0, $BB)
  SampleInput(1, $AC)
  SampleInput(2, $AC)
  SampleInput(3, $AC)
  SampleInput(4, $AC)
  SampleInput(5, $BB)
  SampleInput(6, $AC)
  SampleInput(7, $BB)

  pst.bin(output, 8)

' Check the input byte and set bit `id` of output to 1 if
' the input matches the pattern, otherwise clear it to 0
PUB SampleInput(id, input) | tmp
  tmp := |< id
  if (input == pattern)
    output |= tmp
  else
    tmp := !tmp
    output &= tmp
I am patient, but I don't suffer hedging, dodging, evasion, and smokescreens gladly. You've been asked direct questions about your work and, like an adroit politician who has something to hide, you either ignore the question or throw up a cloud of new buzzwords to obscure the lack of a direct answer. Someone has to play the Clara Peller role in this thread. If no one else will do it, I guess it has to be me.
Thank you Mike G, excellent - with your programming talent, computer setup, and ability to knock these programs out so quickly, it's likely that what would take me one year to write, you could probably do in one hour. Your contributed software is greatly appreciated.
The Big Brain is a machine based on the Propeller chip that can run simulated Neurons. In its simplest form, a Simplex Neuron, or SN, is injected into each Propeller chip and simulates neuronal input (held constant) and firing output states.
___________________________________________________________________________
The map of the Big Machine Brain Project shows a Simplex Neuron (SN) with inputs and outputs and three possible Fire states: Zero, One, or Limbo. The Injector Cell Coating represents code to inject the SN into the Propeller Array (PA). The PA is currently subdivided into three smaller arrays called Partitions. Each Partition holds a maximum of fifty populated Parallax Propeller chips. The design handles an unlimited number of Partitions based on available resources; the rule of thumb is that all Partitions must be fully populated except for the last Partition.

The map with three Partitions shows 150 Propeller chips. The small boxes represent Propeller chips and the red dots represent SN outputs. SN input is simply held static for testing, and in testing each red dot is a dedicated LED (see photos). In SL, or Simple Loading, the Injectors distribute a single SN to each chip. The SN is cloned to every Propeller, and each Propeller works in unison with every other Propeller.

Clocking is not an issue during neural matter activity, as each Propeller can run its own clock. Clocking is more of an issue during Neural Matter Injection (NMI), when all the Propellers are in sync and handled specifically by the Neural Injection OverLord. SNs are made to do exampling: Fire in any chosen state and set outputs. All SNs work together at the same time. This is primarily what is shown in the map.

What is not shown are the representations of Cogs, clocks, 4,800K of distributed memory, EEPROMs, and specific distribution wiring. To count Cogs, multiply each red dot by eight. The map represents 1,200 small RISC processors and a 150-point display.
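A hedged model of the Simplex Neuron described above, in Python: static held input, and one of three fire states (Zero, One, or Limbo). The threshold logic below is an assumption for illustration; the thread does not specify how the SN chooses its state.

```python
# Illustrative Simplex Neuron (SN) model: one held input, three fire
# states. The thresholds are invented; only the three-state output and
# the clone-to-every-chip loading scheme come from the description above.
ZERO, ONE, LIMBO = 0, 1, 2

def simplex_fire(held_input, low=10, high=20):
    """Fire Zero below `low`, One at or above `high`, else Limbo."""
    if held_input < low:
        return ZERO
    if held_input >= high:
        return ONE
    return LIMBO

# Simple Loading: the same SN cloned across one 50-chip Partition,
# all firing in the same chosen state on the same static input.
partition = [simplex_fire(15) for _ in range(50)]
```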
These are new terabyte-capacity drives (for the Apple Macintosh) that just arrived from Taiwan for the Big Propeller Brain project. The massive TB drives will hold the influx of new Big Brain programming, data, designs, and research, and will include duplicate backups. These are possibly holdover drives until RAID is introduced with new Thunderbolt connectivity.
The drives are quality Seagate mechanisms which are 10x faster than USB 2.0 and can share files between Macs and PCs. The drive set includes preformatted 1TB USB 3.0 drives with cables. Faster and more efficient drive access is obtained by reformatting in extended Mac formats. The drives passed all tests with high marks and received a CNET Editors' Choice award in May of 2010.
This is the first photo taken and processed with the Apple MacBook computer (seen under the drives). While PICASA is installed and working, iPhoto is doing a similar job quite well. The programs used were iPhoto and Preview. Is there a difference between this and the previous XP Windows-processed photos? This photo is greatly reduced in quality by two full jpg levels to cut the file size down from half a megabyte. The original photos on the Mac appear to jump out of the screen with vivid detail, color, and a more lifelike appearance. Several development memory sticks are seen in the background.
_________________________________________
Considerable attention is being placed on backups and backup systems to ensure the survivability of data and new programming. Several drive plans are being initiated. When the computer setup is complete, a drive image will be created. Development will happen for the moment on the desktop (internal hard drive) and move to external drives, including USB sticks. The DT will also get frequent backups on externals. Externals will duplicate Brain files and important data. OSX now has special automatic backup programs which will be utilized.
The Propeller Brain and the Brain Computer will travel. During travel, the Big Brain will retain its normal programming in EEPROM while the support computer will be entirely clean. Data will be transferred to pocket 1 TB USB external drives and carried in a separate location(s).
Hard drives will have additional functions. For special data that does not need frequent access, a memory will be set up for the Machine Brain in a specific location. Brain memory data can be downloaded to the drive, accessed from the drive, and updated on the drive. One advantage is mirroring the Brain state and saving it on the hard drive.
A lot of thought is going into the use of Big Brain big TB drives. One use is the technique of Brain Mirrors. A Brain Mirror uses software and storage to create an image of the current state of the Brain, a snapshot of current variables and parameters which may include dreams, memories, thoughts, feelings, learning, timing states, and other parametric identities.
TB drives on a supporting Mac have access to Propeller uploading and additional programming languages on the Mac that can be tailored to a Big Brain Mirror. Sets of mirrors are possible which can be used to study the Brain's behavior over a period of time slices.
These are big mirrors. What about the use of small mirrors? Small mirrors are also possible and can include smaller data sets. These are Selectable data mirrors and can include a vertical domain of specific data.
Some vertical domains can include the data collection center, the current activity cache, the mobility status, a diagnostic center, a frozen thought, and other aspects. Small Mirrors can be made to fit smaller drives, such as Flash. Propeller 64K and larger eeproms can also store Small Mirrors directly on the Propeller chip.
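A minimal sketch of the Brain Mirror idea described above: snapshot selected state to storage, then restore it later. The state fields below are invented examples; the thread only names candidates such as memories, timing states, and other parametric identities.

```python
# Illustrative "Small Mirror": serialize a selected data set (a vertical
# domain of state) to a file, then load it back. Field names are made up.
import json
import os
import tempfile

def save_mirror(state, path):
    """Snapshot the current Brain state to storage."""
    with open(path, "w") as f:
        json.dump(state, f)

def load_mirror(path):
    """Restore a previously saved Brain state snapshot."""
    with open(path) as f:
        return json.load(f)

# Example snapshot: timing state, a memory list, and a mode flag.
state = {"timing": 80_000_000, "memories": [1, 2, 3], "mode": "idle"}
path = os.path.join(tempfile.gettempdir(), "brain_mirror.json")
save_mirror(state, path)
restored = load_mirror(path)
```

A set of such snapshots taken over time would give the sequence of time slices mentioned above for studying the Brain's behavior.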
Comments
http://forums.parallax.com/showthread.php?131688-Hecka-Prop-)&p=1000779#post1000779
DAVE HEIN: @Humanoido, the Big Brain should be able to re-wire around defective parts of it's brain, just like a human brain can partially recover when there is injury to part of the brain.
It's agreed the Big Brain should have the capability to do some self rewire. This would likely take place with software. If one section of the Brain goes DOA, another subroutine could handle the rerouting.
Self programming is an interesting topic. A given set of wiring parameters defined could address the most important parts of the Brain.
Challenges exist for rewiring if failures occur through hardware and hanging results. Self rewiring would most likely address a loose wire or a fused port.
One suggestion is to have self tests in place immediately upon boot-up and do any self programming to rewire at that time. A lot will depend on the comprehensiveness of the diagnostics.
Automatic program modification is the topic of some posts. An area in software could be reserved for this type of code.
What about breakage due to accident or injury? Let's say the humanoid robot topples to the floor and the Brain is damaged. Diagnostics may be the first step in assessing the damage and creating reconstructive patterns of healing.
Distributed diagnostics could run in the background making sure all primary Brain parts are healthy. For example, the Brain Stem would be responsible for its own diagnostics, as would each Brain Span have responsibility as well.
This opens up the question of redundancy. If a PING goes bad, will another one be available? Perhaps not. But if two nearly identical eyes are available like humans, a diversion to the good eye is possible. This could be as simple as changing input port identifications.
The human brain has left hemisphere and right hemisphere. Learning can cross over. It's possible to give task reassignments from one part of the Brain to another.
BST is not installing properly on the Mac. This is holding up Big Brain development. Since the Propeller Software Tool does not run on the Mac, there is no way to get programs into the prop.
http://forums.parallax.com/showthread.php?131758-BST-Mac-Install
EDIT: success was finally achieved
http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain&p=991102&viewfull=1#post991102
When I first read this I thought, cool, 1,000 neurons, but I was more interested in how you injected the neurons. Now that I have messed around with ANNs a bit, I am finding that 1,000 neurons in a single Propeller is a huge accomplishment.
The Propeller has 32K of general-purpose RAM. That means 32K/1000 = 32.768 bytes for each neuron, which is equivalent to 8 longs, or 8 PASM instructions. There is also 16K of COG RAM. It is possible you could get a total of 50 bytes for each neuron, but you'd have to do some HUB cleanup after loading the COGs.
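Mike G's arithmetic checks out directly. A quick sketch of the numbers (the 50-byte figure he mentions assumes hub RAM can be reclaimed after the cogs are loaded, so it is not computed here):

```python
# Check the per-neuron memory budget quoted in the post.
HUB_RAM_BYTES = 32 * 1024   # 32K of shared hub RAM on the Propeller
NEURONS = 1000
LONG_BYTES = 4              # one long = 4 bytes; one PASM instruction = one long

bytes_per_neuron = HUB_RAM_BYTES / NEURONS
longs_per_neuron = int(bytes_per_neuron // LONG_BYTES)

print(bytes_per_neuron)     # 32.768 bytes per neuron
print(longs_per_neuron)     # 8 longs, i.e. roughly 8 PASM instructions
```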
I believe 1,000 virtual neurons are the most feasible approach, but also the most complicated in terms of supporting logic.
Please explain the method or methods used to estimate 1,000+ neurons per Propeller. How is Propeller memory allocated to support your neuron structure?
By this statement, I have to assume you are not using virtual neurons. You are actually allocating RAM. The neuron map has to be quite small. Can you give the community a map of the neuron in HUB RAM, with data types?
Post 755 is quite accurate: individual exampling neurons were code trimmed to fit 32 bytes, which is a good handful of statements each. The virtual neurons must of course fit into memory. They're virtual because no physical biological or hardware matter is used in their representations. The injectors are not included at this time, though maybe they will be counted in the future; I mean the code that appends to the inside or outside of the neuron. I think it's similar to that now, but I don't know how to explain it; new machine terminology is needed, and that will take some time to define.
I had a nice screen shot showing the memory map, which is how I kept track of what was available, but I have not found the pic on the restored hard drive material (yet). I figure it will be necessary to make a new memory pic when I have a computer that can run code from a working BST. I have actually spent most of the time, including the part in the quote on injection, getting that many examples to fit and load smoothly. The injector is the brunt of the focus and in-development time.
There's a post about neural data types and what is expected from the neurons. For the neural map, it's currently the same neuron which is cloned (I never said cloned), then injected. In the future, I plan to evolve (I never said evolve) these neurons using a machine DNA (I never said machine DNA), so they start out alike and then evolve. I have a great interest in evolutionary programs. For this to happen, I'm looking at fewer exampling neurons so they have more memory and can become more filled with function. All of this is still in the works and at some stage of design or development. When the new Mac arrives, one of the first programs to install will be a good design program to work up some of these maps and charts. But right now, it's really just at the exampling level and ultimately simple.
Pictorial map representing Big Brain neural
exampling layout, includes early multi-state
machine neurons, propeller array, three populated
partitions, clock, encapsulation, neural injector,
and clone disseminator
Note that Partitions 1, 2, and 3 are only initial; an unlimited number of Partitions may be attached, the number determined by the amount of resources dedicated to the Brain. In the three-Partition model, varying populations are possible, though each Partition is capped at exactly 50. The upper Partition is always the Expandor, while the lower will always hold 50.
Every rectangle in Partitions 1, 2, and 3, and every tic mark in 2 and 3, represents one Propeller chip with eight cogs. The design shows Partitions 1, 2, and 3 holding 400 cogs each.
The Disseminator, Cloner, and techniques of individual prop densities have not been introduced and no additional information is available.
Humanoido, you said individual exampling neurons were code trimmed to fit 32 bytes. Obviously you considered the shared memory constraint of 32K well before 04-09-2011, when you announced "Big Breakthrough in Machine Brain Technology! 100,000 Neurons into the 1st 100 Propellers". Sorry to sound like a broken record…
Please explain the method or methods used to estimate 1,000+ neurons per Propeller.
-Phil
The diagram could be revised already.. I noted the output stators are shown on the neuron but the actual output devices could also be shown on the partitions of the Propeller array. For each rectangle and dashed line appearing inside the Propeller Array Blocks, attach one LED output device.
Each Brain Partition has 50 LEDs that run simultaneously in parallel. To keep things simple, I have connected a single LED on each Propeller chip at the P15 spot. Why P15? This is purely for physical location reasons on the Injector planes, as it allows a higher density of prop chips and the LEDs' wiring to coexist on a solderless breadboard. This is a new development.
The EXO is handled differently because it was built first, using PPPBs and a ten-pin connector socket array that spans the side of the prop, connecting to VDD, VSS, P24, P25, P26, P27, P28, P29, P30, and P31. These Spans use the boards' existing power LEDs, converted to data LEDs, as outputs.
Historical roots based on humanoids and creating multi-processing machines
Yes, good observation! I thought about and considered as many elements as possible including memory constraints long before April of 2011. Overall, I've thought many years about this project. It was stepped up in development around the year 2002 with up to five processors (OEM BASIC Stamps included) to control a humanoid. The example humanoid was built using parts from a Parallax Toddler robot kit. But of course things just grew from that point on. It's now nine years and around 100 parallel multi-processing machines and robots built to get to the Big Brain. If you think this happened overnight, it did not!
There are two main challenges in the field of Humanoid Robotics. One is the ability to deliver a long lasting source of power. The second is having access to a powerful Brain. Working with chemicals is not as interesting or safe as working on a Brain, so I chose the Machine Brain to develop.
http://www.carsforums.com/
Source code would be great.
I was using trial and error with program size: if it worked, it fit; if it failed, it didn't fit. That was a kind of trimming action to fit the 32K HUB. It worked, but a more specific solution was needed. First, I searched for a program that could calculate the size of specific Propeller code in HUB, but none was available. Next, I used the memory map in the Propeller Software Tool and monitored the usage of LONGs, etc. For each piece of code, I snapped the HEX screen and filed it all together. I also did a short study in the past on the size of code statements and added them together to determine usage. So there you have three methods.
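The third method, adding up per-statement sizes, is plain bookkeeping. A hypothetical sketch of the idea in Python; the per-statement byte costs here are invented for illustration and are not actual SPIN bytecode sizes:

```python
# Hypothetical per-statement byte costs; real SPIN bytecode sizes differ.
STATEMENT_COST = {"assign": 4, "if": 6, "repeat": 8, "outa": 5}

# A toy program described as a list of statement kinds.
program = ["assign", "repeat", "if", "outa", "assign"]

# Sum the costs to estimate the code's footprint in hub RAM.
footprint = sum(STATEMENT_COST[s] for s in program)

HUB_RAM = 32 * 1024
fits = footprint <= HUB_RAM
print(footprint, fits)   # 27 True
```

The first method (trial and error) answers only "does it fit"; this kind of tally also says how close to the 32K ceiling a program sits.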
I'm looking forward to new code development and rewriting the old code from memory (after the hard drive crash), only this time with newer goals in place. It's likely there will be two sections to code development: A) Discrete and B) Unionized. A is simply the smallest and best possible demo of a function. B is the connection of one or more functions.
Success was achieved today in installing the tools necessary to program Propellers again, only this time using the Mac computer. Good news! But now there are other challenges with the same older MacBook (OS X 10.4 Tiger): I'll need to install a newer OS X (10.5 Leopard) after obtaining the install DVD set, and then reinstall the drivers and prop tools. Some of those challenges were discussed in the other thread.
This MacBook computer is to hold the system over until the new, more powerful Mac is obtained. The reason I use the word "obtained" is that it's a real ordeal to get a computer out here in the sticks. Apple pulled a fast one on China and will only ship outdated, less powerful CPU versions to China Apple Stores to dispose of their old inventory. What a bummer!
That means I'll need to order the Mac computer from the USA and have it shipped to an alternate country, where I must travel in person to pick it up, then hand carry it back into China as a "used computer." Why? Because shipments of new technology items are confiscated by China Customs. Another bummer! So I simply don't know when things will be up to speed again.
I would like to follow Dave Hein's good advice and when everything is up to par, begin posting code for specific snippet review. So like the open source development trial, we'll do a stint of posting.
On another venue, the Brain is continuing to receive some new hardware additions so this can be a time out for collecting designs, development of ideas, hardware extensions and efficiency mods. We're looking at expansion with another big partition. So if you have 50 props to spare, that will be just about right.. we intend to assimilate..
Partition         1     2     3     4
Props            50    50    50    50
Cogs            400   400   400   400
Total cogs      400   800  1200  1600
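The figures above follow directly from 50 Propellers per partition and eight cogs per chip. A quick check:

```python
# Verify the partition arithmetic: 50 props x 8 cogs = 400 cogs
# per partition, accumulating across four partitions.
PROPS_PER_PARTITION = 50
COGS_PER_PROP = 8

cogs_per_partition = PROPS_PER_PARTITION * COGS_PER_PROP
cumulative = [cogs_per_partition * n for n in range(1, 5)]

print(cogs_per_partition)   # 400
print(cumulative)           # [400, 800, 1200, 1600]
```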
You said, "individual exampling neurons were code trimmed to fit 32 bytes". I'm not asking about the entire program, only the individual neuron code trimming. A more specific question is: what functionality is contained in a 32-byte neuron? I'd think that code trimming at the byte level means optimization or removing a function. You must have weighed pros and cons related to functionality. We're talking about 32 bytes here. Admittedly, my memory is not the best, but I'm sure I'd remember whittling code to fit in a 32-byte space. Anyway, that was the basis of my question: "What were the challenges you had to overcome?"
As written in post 766, it seems like your test case was simply about size; if it worked, it fit, if it failed, it didn't fit.
I already answered your question and posted it. If you look back across more than one post, you can see the details of each neuron. All of these tested types fit. The SN is probably the best example, but these are the maximum-quantity exampling ones. You can certainly build much larger neurons in smaller numbers.
Size matters. The number of neurons matter. Both can work toward determining characteristics of a neural net. That was the first method which was simply trial and error (and easy). However the other methods are more sophisticated, accurate and predictable. There are three methods used, not just one.
To quote my earlier post: "I was using trial and error with program size, i.e. if it worked, it fit, if it failed, it didn't fit. That was a kind of trimming action to fit the 32K HUB. That worked but needed a more specific solution. First I searched for a program that could calculate the size of specific propeller code in HUB but none was available. Next, I used the memory map in the Propeller Software Tool and monitored the usage of LONGs etc. For each code, I snapped the HEX screen and filed it all together. I did some short study in past about the size of code statements and added together to determine usage. So there you have three methods."
Don't knock it. There's a lot of exampling you can do with 32 bytes.
That's what I'm looking for... the functionality of the exampling ones.
Sure, size is a constraint in 32K of main RAM. I would think that functionality is king though.
??? I'm not knocking anything. There's a lot of exampling you can do with 32 bytes. This is exactly what I'm looking for, can you provide an example? Source code is best and easiest to understand.
Looking forward to your neuron functionality post.
Already posted. Look at the specific posts about neurons. Just pick out the type of neuron you're interested in and read about what it can or cannot do. It's time to update the index again; that will make posts easier to find.
Yes we do need function, even with the smallest (SN) or test neuron.
Quantity (1) will determine size (2), or vice versa.
Size will determine function (3).
There's a lot of exampling you can do with 32 bytes.
No source code is available at this time. You may review the pseudo code example in a previous post.
How long will you continue to be suckered by this charade? Either Humanoido has something he can post source code for -- right now -- or he has nothing. There is no middle ground.
-Phil
Thank you Phil, you have a remarkable delicate way with words. I will always remember your patience, support and kindness. May you have a long life and live in good health.
Cheers.
Now added seven more pages to the Big Brain Index, bringing it up to date with the current posting.
For the complete index:
http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain&p=977025&viewfull=1#post977025
Page 33
641 Remote Brain Posting - Carry your posting "computer" in your shirt pocket
642 Online Brain Index Updated to Page 33
643 Brain Add On Computers - Are they useful?
644 Phil comments Lenovo, T32, WinXP, Mandrake Linux
645 Humanoido comments pc without CDs
646 Stats on the Restored Hard Drive - the Brain Wins! Almost nothing is restored
647 Schematic Request Related to Brain Development
648 Phil comments how to retrieve attachments
649 Humanoido comments
650 Apple AMD Radeon HD 6750M Graphics Card 480 Stream Processors
651 Brain Gets Apple Mac Computer
652 Stepping up the Big Brain
653 Brain Config in April
654 Brain Config in May
655 Brain Programming Languages Selection SPIN, OPENCL, PBASIC, PASM, XCODE
656 Brain Relegated to ProMac Symbiotic Union
657 New Ideas for Autonomous Brain Backup - Cloning from Backup
658 Robotic Brain Mobility & Robotics
659 Zoot comments trademarks
660 Brain Naming Conventions
Page 34
661 Big Brain Design Breakthroughs with PROPELLERS & MACS
662 Big Brain Gains Utilizing Hard Drives
663 Big Brain's Supplanted Memory
664 Brain Programming in XCode 4
665 *** Change in Brain Project Development ***
666 Phil comments less stream of consciousness
667 Duane Degn comments
668 Mike G comments you're working on the cutting edge
669 Brain Project Overview - Props and computers
670 Humanoido comment
671 Humanoido comment celebration
672 Phil comments incredulous
673 Dave Hein comments be patient Phil
674 Is Brain DNA Genome Possible?
675 Brain Life Power Challenges & Considerations
676 Big Brain Genome Maps Instead of Schematics
677 Parallax-Propeller-Equipped Brain Genetic Machine Genome Project
678 Big Brain Sex - Is the Big Brain Hermaphrodite, Male or Female?
679 Humanoido comments grasshopper
680 Big Brain Propeller Waves - Different clocks for different blocks
Page 35
681 Build a Big Brain Propeller Qualitative EEG Machine - Measure Big Brain's Brain Waves
682 Big Brain EEG Machine as a Diagnostic
683 Leon comments neuron example to solve xor problem
684 Big Brain State of the INIT-Neuron
685 Phil comments
686 Leon comments ANN, Hull University, MSc project, Dataglove, BAe, MAD, Brough
687 Humanoido comments
688 Mike G comments asking for neuron code
689 Leon comments Here is a neural net toolbox for Scilab
690 Humanoido comments not posting code
691 Links to Managed Neural Projects
692 Mike G comments
693 Humanoido comments no posting of development or test code for the reasons cited
694 Dave Hein comments you have expressed a lot of good ideas
695 Humanoido comments Brain development or documenting partial Brain development
696 Jazzed Tetra Prop Spins Brain Life Ideas
697 Dave Hein comments on posting code
698 Duane Degn comments on his great projects!
699 Brain Methodology
700 Humanoido comments to Dave Hein
Page 36
701 Potential of Connecting the Degn Massive LED Array
702 Future Brains with Afflictions - The 1st Guide to Machine Intelligence Sickness
703 Duane Degn comments about LED Array
704 Machine Brain Neurons and Neural Matter - New Brain Dictionary Definitions
705 Refinement Brain's Hybrid Interface - Successful Sharing Hybrid results in fewer wires
706 Brain Neural Complexity - Adding to the Brain Dictionary
707 Dictionary of Propeller Machine Brain Terms
708 Big Brain's Hyper Neural Threading HNT
709 Big Brain Domain Partition - Propeller DPs increase volume performance
710 Propeller Brain Change with Supporting Computers
711 Leon comments TFLOPS
712 What is the Giant Brain? Propeller or Other?
713 Leon comments simulate the ANN on the Mac
714 The Propeller ANN Mac
715 Propeller Brain Mac GPUs Selected
716 Many Faces of the Giant Propeller Brain
717 Moving Toward Ultra Brain with Props & Macs
718 Giant Brain TFLOPS
719 Leon comments 5,000 I/Os
720 Big Brain Backup Terabyte Drives
Page 37
721 Massive Propeller Brain Trinary State Output Device - a plane with quadrillions...
722 Brain Merging with the Massive Transposition Machine
723 Big Brain BOSS - Brain Operating Software System
724 Brain Doctor Meeting 05.14.11
725 AI Quotations
726 ElectricAye comments Kurzweil, Chinese Room
727 Humanoido comments Chinese Room
728 New Type of Propeller Simplex Neuron SN - Modified INTNeuron and Test Neurons? (with pseudo code)
729 Phil comments problem with early AI research, top down, bottom up
730 Reverse Engineering Evolution
731 Giant Machine Brain Eats Propellers
732 Little Parts - a Machine in the Machine - Isolate your brains
733 ElectricAye comments
734 Dave Hein comments modeling nerve cells
735 Phil comments cannot simulate intelligence without considering environmental factors
736 Leon comments consciousness
737 Phil comments unique possession of consciousness
738 Leon comments qualia
739 Time Stamping Human Evolution
740 Humanoido comments human consciousness and pure thought
Page 38
741 Mike G comments neurons, simple neural net
742 Defining Consciousness
743 ElectricAye comments
744 Exploring the Neural Net with the Simplex Neuron
745 Phil comments code
746 Programming to be Self Aware
747 Humanoido comments intention is to release BOSS when it reaches BETA
748 Controlling the Brain with Light
749 Mike G comments neurons
750 Simplex Neurons
751 Phil comments
752 Humanoido comments to Phil
753 Big Brain Self Rewiring
754 BST and Mac
755 Mike G comments neuron memory
756 Mike G comments neuron memory
757 Handling Neurons
758 NEURAL BRAIN MAP
759 Mike G comments neuron memory
760 Phil comments source code
Page 39
761 Brain Map Revisions
762 Mike G comments on muffler wiring
763 Brain History & Challenges - Historical roots, humanoids, multi-processing machines
764 Humanoido comments car forum
765 Mike G comments
766 Determining Memory Usage
767 The Uphill Computer Journey
768 Mike G comments neuron memory
769 Humanoido comments neuron size matters
770 Mike G comments
771 Humanoido comments
772 Phil comments
773 Humanoido comments
774 Big Brain Index Update
Humanoido, here's a very basic neuron-like SPIN program that took all of about 5 minutes to write and debug. Probably quicker than the time it took you to update the index.
-Phil
Thank you Mike G, excellent - with your programming talent, computer setup and ability to knock these programs out so quickly, it's likely what would take me one year to write, you could probably do in one hour. Your contributed software is greatly appreciated.
Phil, I think you're a bit jealous! Haha! Plus, I always knew you were the one that wears the dress in the family. ROTFLOL @ Phil!!!
The Big Brain is a machine based on the Propeller chip that can run simulated Neurons. In its most simple form, a Simplex Neuron, or SN, is injected into each Propeller chip and simulates neuronal input (held constant) and fire/firing output states.
___________________________________________________________________________
A map of the Big Machine Brain Project shows a Simplex Neuron (SN) with input and outputs and three possible Fire states: Zero, One, or Limbo. The Injector Cell Coating represents code to inject the SN into the Propeller Array (PA). The PA is currently subdivided into three smaller arrays called Partitions. Each Partition holds a maximum of fifty populated Parallax Propeller chips. The design handles an unlimited number of Partitions based on available resources; the rule of thumb is that all Partitions must be fully populated except for the last.
The map with three Partitions shows 150 Propeller chips. The small boxes represent Propeller chips and the red dots represent SN outputs. SN input is simply held static for testing. In testing, each red dot is a dedicated LED (see photos). In SL, or Simple Loading, the Injectors distribute a single SN to each chip: the SN is cloned to every Propeller, and each Propeller works in unison with every other.
Clocking is not an issue during neural matter activity; each Propeller can run its own clock. Clocking is more of an issue during Neural Matter Injection (NMI), when all the Propellers are in sync, handled specifically by the Neural Injection OverLord. SNs are made to do exampling: Fire in any chosen state and set outputs. All SNs work together at the same time. This is primarily what is shown in the map. What is not shown are the representations of cogs, clocks, 4,800K of distributed memory, EEPROMs, and specific distribution wiring. To count cogs, multiply each red dot by eight. The map represents 1,200 small RISC processors and a 150-point display.
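The three-state fire described above can be pictured with a short sketch. This is Python rather than SPIN, and only the state names, the constant-input rule, and the 50-per-partition cloning come from the description; the threshold logic is a hypothetical stand-in:

```python
# Hypothetical Simplex Neuron (SN) sketch: input held constant, and a
# fire output that can be 0, 1, or "limbo" (the third, undecided state).
ZERO, ONE, LIMBO = 0, 1, "limbo"

class SimplexNeuron:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.input = 1.0           # input held static for testing, as in SL

    def fire(self):
        if self.input > self.threshold:
            return ONE
        if self.input < self.threshold:
            return ZERO
        return LIMBO               # exactly at threshold: undecided

# Simple Loading: one SN cloned to each of the 50 Propellers in a Partition,
# all firing together; each output would drive that chip's LED.
partition = [SimplexNeuron() for _ in range(50)]
outputs = [sn.fire() for sn in partition]
print(outputs[0], len(outputs))    # 1 50
```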
These are new terabyte-capacity drives (for Apple Macintosh) that just arrived from Taiwan for the Big Propeller Brain project. The massive TB drives will hold the influx of new Big Brain programming, data, designs, and research, and will include duplicate backups. These are possibly holdover drives until RAID is introduced with the new Thunderbolt connectivity.
The drives are quality Seagate mechanisms, 10x faster than USB 2.0, with the ability to share files between Macs and PCs. The set includes preformatted 1TB USB 3.0 drives with cables. Faster and more efficient drive access is obtained by reformatting in the extended Mac format. The drives passed all tests with high marks and received a CNET Editors' Choice award in May of 2010.
This represents the first photo taken and processed with the Apple MacBook computer (seen under the drives). While PICASA is installed and working, iPhoto is doing a similar job quite well. The programs used were iPhoto and Preview. Is there a difference between this and the previous Windows XP-processed photos? This photo is greatly reduced in quality by two full JPG levels, bringing the file size down from half a megabyte. The original photos on the Mac appear to jump out of the screen with vivid detail, color, and a more lifelike appearance. Several development memory sticks are seen in the background.
_________________________________________
Considerable attention is being placed on backups and backup systems to ensure the survivability of data and new programming. Several drive plans are being initiated. When the computer setup is complete, a drive image will be created. Development will happen for now on the desktop (internal hard drive) and move to external drives, including USB sticks. The DT will also get frequent backups on externals. Externals will duplicate Brain files and important data. OS X now has automatic backup programs, which will be utilized.
The Propeller Brain and the Brain Computer will travel. During travel, the Big Brain will retain its normal programming in EEPROM while the support computer will be entirely clean. Data will be transferred to pocket 1TB USB external drives and carried in a separate location(s).
Hard drives will have additional functions. For special data that does not need frequent access, a memory area will be set up for the Machine Brain in a specific location. Brain memory data can be downloaded to the drive, accessed from the drive, and updated on the drive. One advantage is mirroring the Brain state and saving it on the hard drive.
A lot of thought is going into the use of Big Brain big TB drives. One use is the technique of Brain Mirrors. A Brain Mirror uses software and storage to create an image of the current state of the Brain, a snapshot of current variables and parameters which may include dreams, memories, thoughts, feelings, learning, timing states, and other parametric identities.
TB drives on a supporting Mac have access to Propeller uploading and additional programming languages on the Mac that can be tailored to a Big Brain Mirror. Sets of mirrors are possible which can be used to study the Brain's behavior over a period of time slices.
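A Brain Mirror as described could be as little as a timestamped dump of the current state to a drive. A hypothetical sketch using only Python's standard library; the field names and file name are invented for illustration:

```python
import json
import os
import tempfile
import time

# Hypothetical snapshot of current Brain state: one "Brain Mirror".
brain_state = {
    "timestamp": time.time(),
    "current_activity": "exampling",
    "mobility_status": "docked",
    "diagnostics": {"brain_stem": "ok", "span_1": "ok"},
}

# Write the mirror to a drive; a series of such files gives the
# time slices mentioned above for studying the Brain's behavior.
path = os.path.join(tempfile.gettempdir(), "brain_mirror_0001.json")
with open(path, "w") as f:
    json.dump(brain_state, f)

# Restoring the mirror recovers the snapshot for study or comparison.
with open(path) as f:
    restored = json.load(f)
print(restored["mobility_status"])   # docked
```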
These are big mirrors. What about the use of small mirrors? Small mirrors are also possible and can include smaller data sets. These are selectable data mirrors and can include a vertical domain of specific data.
Some vertical domains can include the data collection center, the current activity cache, the mobility status, a diagnostic center, a frozen thought, and other aspects. Small Mirrors can be made to fit smaller drives, such as Flash. The Propeller's 64K and larger EEPROMs can also store Small Mirrors directly on the Propeller.