Opening Up Brain Channels
The Brain is capable of multiple channels
Since the Brain has the capability to run many channels of activity at once, the idea is to develop Brain Channels that each deliver a unique service.
For example, a number of channels are already working with the previously described apps, including the app in the next post (the game of Life).
Channels can deliver sound, speech, singing, languages,
input, output, hearing, vision, etc.
This is an early Brain development phase, and dedicating
a channel to an app is one objective.
Eventually, when the number of channels or channel apps increases, a list of channels will be provided.
For example, the Brain folder holds a number of programs which can run in channels and are already working and tested.
A preliminary working list looks something like this:
Speech, sing, whisper, recite, vocalize, choir
Sound (audio volume level)
Graphics (color, resolution, text, fonts)
Keyboard (type input, letter key to speech sound)
TV (large screen, small screen, row/line char, color, see graphics)
Brain Achieves Cellular Automatronic Life
Propeller Program gives life to the Brain
An actual photo of this amazing Universe of Life, still unfolding inside the Propeller Brain, shows what has evolved over a period of one hour! Looking like stars in a galaxy, these are actually evolutionary cellular life forms.
Wired for Evolving Automatronic Life, the Brain is
alive as seen in this window of the unfolding Universe
portal, made possible by a Parallax 3.5-inch TV as a
viewer and computational Propeller chips.
Sometimes you find life in the strangest of places, living under the strangest conditions. This is one such example, developed by Cambridge mathematician John Conway. In this Brain Automaton, Life is created through collections of living cells that are born, breed and die based on the mathematical conditions imposed upon the Universe of Life. Throughout their lifetimes, they form groups and patterns of considerable importance.
Conway's work was popularized in a 1970 Scientific American article. It has become a must-know, must-read work for Artificial Intelligence followers as well as a litmus test for various fields of mathematical academia.
The universe of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two states, alive or dead. Each cell interacts with its eight neighbors, which are the cells horizontally, vertically, or diagonally adjacent.
During each step of evolutionary time, the following happens.
For a space that is 'populated':
Each cell with one or no neighbors dies, as if by loneliness.
Each cell with four or more neighbors dies, as if by overpopulation.
Each cell with two or three neighbors survives.
For a space that is 'empty' or 'unpopulated':
Each cell with three neighbors becomes populated.
The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed—births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick (each generation is a pure function of the preceding one). The rules continue to be applied repeatedly to create further generations.
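To make the rules concrete, here is a minimal Spin sketch (an illustration, not the Brain's channel code) that applies them for one generation on a small grid; the 20 x 20 size and the array names are invented for the example:

CON
  W = 20                                  ' example grid width
  H = 20                                  ' example grid height

VAR
  byte cur[W * H]                         ' current generation, 1 = alive
  byte nxt[W * H]                         ' next generation

PUB Step | x, y, n
  repeat y from 1 to H - 2                ' borders left empty for simplicity
    repeat x from 1 to W - 2
      n := Neighbors(x, y)
      if cur[x + y * W]                   ' populated cell...
        if n == 2 or n == 3
          nxt[x + y * W] := 1             ' ...survives with 2 or 3 neighbors
        else
          nxt[x + y * W] := 0             ' ...dies of loneliness or overpopulation
      elseif n == 3
        nxt[x + y * W] := 1               ' empty cell with 3 neighbors is born
      else
        nxt[x + y * W] := 0
  bytemove(@cur, @nxt, W * H)             ' all births/deaths land at once: a "tick"

PRI Neighbors(x, y) : n | dx, dy
  repeat dy from -1 to 1
    repeat dx from -1 to 1
      if dx or dy                         ' skip the cell itself
        n += cur[(x + dx) + (y + dy) * W]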
Brain Cellular Automaton Life Options
Screen, algorithm, resolution, cell, evolution
John Conway, the genius mathematician who created the cellular automaton "Life," is alive today. There are many followers of CA, and for good reasons. Read the Wiki excerpt and find out why...
Propeller Life by Kenn Pitts http://obex.parallax.com/objects/141/
Propeller demo cellular automaton based on John Conway's Game of Life. Fun to watch! Click on the samples provided in the app, or roll your own. Works great on the Prop demo board; uses a VGA screen and mouse. Enjoy!

Added commentary: The Propeller can only do one-dimensional arrays, but in this scenario I needed a two-dimensional array, so I was mapping one virtually onto the array using some quick math: array[x + (y * 20)]. While this works, the Propeller has no built-in multiply instruction, so it has to add. A lot. So, instead, I decided on this: you have a location array. To the left of it is array[i-1], to the right is array[i+1], below it is array[i+20], and so on, using only addition and subtraction instead. That alone boosted performance about 3x.
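Here is a sketch of that trick in Spin (assuming a 20-cell-wide board stored row by row in a hypothetical byte array called cells): the eight neighbors of index i are reached purely by adding and subtracting fixed offsets.

CON
  W = 20                                   ' row width from the example above

VAR
  byte cells[W * W]                        ' hypothetical board, stored row by row

PUB SumNeighbors(i) : n | up, dn
  up := i - W                              ' index one row up
  dn := i + W                              ' index one row down
  n := cells[up - 1] + cells[up] + cells[up + 1]
  n += cells[i - 1] + cells[i + 1]
  n += cells[dn - 1] + cells[dn] + cells[dn + 1]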
Cellular Automata 256 Rules with Sound by William Como, modified by Jesse Burt (Avsa242) to run on Rayman's Propeller Touchscreen Platform (PTP) (Rayslogic.com, LLC). VERSION 1.0-modified. Requires Graphics.spin and PTP_LcdDriver for the PTP; make sure Graphics.spin is from the PTP sources, not Propeller Tool/Hydra/etc! Uses 320x240 resolution. CA256rules_ptp.spin
Cellular Automaton on the Prop with Wolfram's Rule 90 by Acantostega http://forums.parallax.com/showthread.php?90296-Blinky-Lights rule90.spin http://forums.parallax.com/attachment.php?attachmentid=44483&d=1165400523
Cellular automaton on the Prop. I just went with Wolfram's rule 90 (it can be done in a more efficient way with XOR) and didn't try to enforce any sort of global synchronization. The idea is that each cog is a cell, and each cell corresponds to an LED. Port A is used by each cog to assess the state of its neighbors. It cycles between two or three states... (rule 90 forms a Sierpinski triangle, although you won't be able to see anything like that in 8 LEDs). While a cog per cell is inefficient, I guess it's truer to the spirit of CAs. It also opens up the possibility of asynchronous CAs, although other rules apply in that case. Maybe the clock.spin module can be used for global synch.
Humanoido's Comment: Works with a Parallax Demo Board. The LEDs evolve with a rather short life span.
Amazing Sand Physics: 6 cogs animate 10,000 grains of sand by Dennis Ferron http://forums.parallax.com/showthread.php?92322-Amazing-Sand-Physics-demo-uses-6-cogs-to-animate-10-000-grains-of-sand-in-realt SandDemo - Archive [Date 2007.02.25 Time 00.30].zip http://forums.parallax.com/attachment.php?attachmentid=45611&d=1172416571
I can animate 10,000 grains of sand with just 12K of memory and a small processor because the grains of sand are cellular automata. I got the idea from Conway's Life game. In the game of Life, cells are "alive" or "dead", and there are simple rules which govern individual cells to create complex overall behavior. After seeing what arises in Conway's Life, I figured cellular automata would work for doing the physics for particle interactions, too, and guess what, it does! There are only two basic rules which operate on the sand particles: 1. A sand particle will fall if there is an empty space beneath it. 2. If 3 or more particles are stacked up, the bottom particle will be pushed left or right out of the tower, if there is space.
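A rough Spin sketch of those two rules (not Dennis Ferron's actual code; the playfield size, array name and scan order are assumptions):

CON
  W = 32                                   ' assumed playfield width
  H = 24                                   ' assumed playfield height

VAR
  byte sand[W * H]                         ' 1 = grain of sand, 0 = empty

PUB SandPass | x, y, i
  repeat y from H - 2 to 1                 ' bottom-to-top scan
    repeat x from 1 to W - 2
      i := x + y * W
      if sand[i]
        if not sand[i + W]                 ' rule 1: fall into empty space below
          sand[i + W] := 1
          sand[i] := 0
        elseif sand[i - W] and sand[i - W * 2]  ' rule 2: 3+ grains stacked up...
          if not sand[i + 1]               ' ...push the bottom grain right,
            sand[i + 1] := 1
            sand[i] := 0
          elseif not sand[i - 1]           ' ...or left, if there is space
            sand[i - 1] := 1
            sand[i] := 0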
Here's a version of the demo that achieves smooth scrolling using only 1 video page, using a "mode x" method. It is the same as the original sand demo, except that now the whole background (including moving sand) scrolls in a continuous loop. It doesn't require any more CPU resources than the nonscrolling version, because it doesn't require any memory moves (block-image transfers) to scroll. SandDemo - Archive [Date 2007.02.25 Time 22.40].zip
Dennis Ferron comments on implementing cellular automatons on multiple cogs: http://forums.parallax.com/showthread.php?92322-Amazing-Sand-Physics-demo-uses-6-cogs-to-animate-10-000-grains-of-sand-in-realt
...this particular algorithm is easy to run in parallel because each "cell" only looks at the cells very close to it, so you can run multiple cogs on it as long as they are more than 4 or 5 pixels away from each other as they work. So each cog works from bottom to top, line by line, and it's OK as long as all the cogs move from bottom to top at the same speed and maintain some separation between them. Problems ensue if a cog gets bogged down and a cog "below" it catches up to the line it's on.

The thing with dividing the screen into buckets is that it's difficult to handle the edge cases where two buckets touch, and if you ignore the boundaries and just let cogs move sand in and out of each other's buckets, then there is still the possibility that another cog will be writing sand into your bucket area while you are trying to put a grain in the same spot. So instead of buckets I thought I'd just have all the cogs scan the whole screen, but dynamically insert waits after each line so that the cogs all remain exactly 1/6 of the screen away from each other as they scan. If one cog is getting a little bogged down (happens if there is a lot of sand) the others would all have to slow down too, to maintain an equal distance.

There are still caveats with that; for instance, a hole at the bottom of a sand dune can only "bubble up" to the top as fast as the scan speed of a single cog doing the sim, no matter how many other cogs are used. Having more cogs doesn't make holes bubble up any faster, but it allows more holes to bubble up at once.
Commentary by Epmoyer http://forums.parallax.com/showthread.php?96312-John-Conway-s-Game-of-Life
One way to speed the processing up considerably (for this particular set of CA rules) is to take advantage of the fact that 1) the patterns which evolve from Life are never heavily populated (i.e. there are typically more empty cells than full) and 2) the rule set in Life depends only upon the number of neighbors and not their relative positions.

Having made those two observations, you can create a "neighbor count" array of 20 x 20 bytes (i.e. one per cell). For each pass you zero the neighbor count array, then search the cell array for populated cells. Most will be empty, and you can move on doing no math at all. When you do find a cell, increment the neighbor count for all surrounding cells. When done, make another pass and apply the spawn/survive/die logic to each cell based on its neighbor count.

You'll end up having to access far fewer than 3200 (8 x 20 x 20) cells and should get a significant speed improvement. For very sparse screens you'll be down around 400 (cell check pass) + 200 (incrementing adjacent cells on 25 populated cells) = 600 or so cell accesses instead of 3200, which is about a 6x boost in execution speed. Well, actually there's also the pass to zero the array (400) and the pass to check the array when done (400), which puts you at 1400 instead of 3200, so perhaps it's a 2x increase in speed, but you can do some things to speed that up as well. For one thing you can use nibbles instead of bytes (i.e. store two counts per byte), which will halve the size of your neighbor count array, and you can use a block fill to zero it, which will be very fast.
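Epmoyer's counting pass might look like this in Spin (this code is our illustration of his description, not his; only the 20 x 20 size comes from his post):

CON
  W = 20

VAR
  byte cell[W * W]                         ' 1 = populated
  byte ncount[W * W]                       ' neighbor count per cell

PUB CountPass | x, y, dx, dy
  bytefill(@ncount, 0, W * W)              ' fast block fill to zero the counts
  repeat y from 1 to W - 2
    repeat x from 1 to W - 2
      if cell[x + y * W]                   ' empty cells cost no math at all
        repeat dy from -1 to 1             ' populated: bump all 8 neighbors
          repeat dx from -1 to 1
            if dx or dy
              ncount[(x + dx) + (y + dy) * W]++
  ' a second pass then applies the spawn/survive/die logic from ncount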
Jonlink0 describes another automaton: http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain/page3
The Brian's Brain cellular automaton (http://en.wikipedia.org/wiki/Brian%27s_Brain) is an interesting brain model. Even though the only action potentials that can be sent are excitatory, and each neuron only has an 8-cell (Moore) neighborhood, "waves" of action potentials will grow chaotically from a disturbance in the medium. Some method of creating inhibitory action potentials (perhaps utilizing extra cell states?), as well as input/output methods (easily done by changing certain cell states according to the states of the robot's sensors), may cause interesting behavior. Some sort of inhibitory action potential is obviously necessary to prevent the automaton from becoming saturated with chaotic activity, which is what Brian's Brain will default to if given semi-random sensor input. (The same thing occurs in BEAM technology, such as in bicore- and microcore-based neural/nervous networks; specialized circuits are often added to prevent this.)
Tracy Allen comments: http://forums.parallax.com/showthread.php?85001-Propeller-supercomputing/page2
It seems to me the Propeller also has potential for "supra-computer" explorations. Like... Cellular Automata: "A regular array of identical finite state automata whose next state is determined solely by their current state and the state of their neighbours." <www.cs.bham.ac.uk/~wbl/thesis.glossary.html> Neural Network: "An interconnected assembly of simple processing elements, units or nodes, whose functionality is loosely based on the animal brain. The processing ability of the network is stored in the inter-unit connection strengths, or weights, obtained by a process of adaptation to, or learning from, a set of training patterns. Neural nets are used in bioinformatics to map data and make predictions." <www.inproteomics.com/nwglosno.html>
Part of my grad school work was on phase locking and chaos in coupled neurons. I did experiments using coupled oscillators built with programmable unijunction transistors, and numerical simulation on a PDP-7. One Propeller could do 8 neurons, and with the PWM output and summed sigma-delta inputs, it could even do the analog weighting functions. Wow, use a Propeller to build an analog super-computer! Tracy Allen www.emesystems.com
Excerpt From WIKI:
Ever since its publication, Conway's Game of Life has attracted much interest, because of the surprising ways in which the patterns can evolve. Life provides an example of emergence and self-organization. It is interesting for computer scientists, physicists, biologists, economists, mathematicians, philosophers, generative scientists and others to observe the way that complex patterns can emerge from the implementation of very simple rules. The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that "design" and "organization" can spontaneously emerge in the absence of a designer. For example, philosopher and cognitive scientist Daniel Dennett has used the analogue of Conway's Life "universe" extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws governing our own universe.[4][5][6]
The popularity of Conway's Game of Life was helped by its coming into being just in time for a new generation of inexpensive minicomputers which were being released into the market. The game could be run for hours on these machines, which would otherwise have remained unused at night. In this respect, it foreshadowed the later popularity of computer-generated fractals. For many, Life was simply a programming challenge; a fun way to use otherwise wasted CPU cycles. For some, however, Life had more philosophical connotations. It developed a cult following through the 1970s and beyond; current developments have gone so far as to create theoretic emulations of computer systems within the confines of a Life board.[7][8]
Conway chose his rules carefully, after considerable experimentation, to meet three criteria:
There should be no initial pattern for which there is a simple proof that the population can grow without limit.
There should be initial patterns that apparently do grow without limit.
There should be simple initial patterns that grow and change for a considerable period of time before coming to an end in the following possible ways:
Fading away completely (from overcrowding or from becoming too sparse), or
Settling into a stable configuration that remains unchanged thereafter, or
Entering an oscillating phase in which they endlessly repeat a cycle of two or more periods.
It is now possible to use a Brain offsetting technique to keep cogs available for specific multi-processing thinking while highly specific, sole-purpose code is tasked out to other dedicated processors. These can easily be Parallax host processors dedicated to singular tasks.
One example of this is host board #23. Board 23 is in charge of TX/RX and nothing else. Its sole objective is to act as a data pipe for remote sources that require data exchange.
Another example is the board found in the Brain Stem. This nerve center is responsible for one thing - motion control. While motion control can include many sub-facets of motion and movement, it is nonetheless single-mindedly tasked, i.e. control of all motions and the governing of primary motor responses.
These examples are much like autonomic responders, leaving the cogs free for thinking. Like the human brain, these "host regions" will take up the slack for performing elements such as speech, sound recognition, and numerous other functions (taste, smell, vision, touch, etc.).
It works with or without the screen. Just type on the keyboard and hear the keys spoken. For example, type the number 1 key and you'll hear "one."
Speaking, voice, speech synthesis, vocal tract modeling, phoneme generation, and text-to-speech are all important elements of the Giant Brain. The development continues...
I thought it would be cool to use a keyboard and actually hear the keys typed. There's a bit of history to this type of program. I have written a talking keyboard program on almost every computer that I've owned. So the Propeller Brain should have one too!
The thread describes the code and has the download, both linked below.
In continuing with programs that vocalize the Brain, this next addition adds a set of lips.
These are special lips that you can only hear, but hearing is the first step with lips, isn't it?!
These programs are written for the Brain as tools to become available for future uses and
as experience in learning the range and capabilities of the code.
Flipping lips is an idea that originates in early grade school, when kids have fun making sounds by rapidly flipping a finger against the lips, thus varying the glottal exit sound.
The Brain is heavy into sound, having a dedicated sound processor region. Like a human, the
Propeller chip in the Brain can reproduce many human-like sounds and create many sounds
that go beyond human capabilities.
Brain Automaton Singing
Beginning of Automatonic Brain Song
Back deep inside that speech region of the Brain is the capability of song.
This is the first song program created for the brain based on the Automaton
algorithm.
In the future, we have the possibility for the Brain AI to create its own songs.
So what is presented here is a first step.
Brain Mood 1st Considerations
Psychological state of mind
Intro
Mood can set the personality of an individual, the behavior response, state of mind, and characteristics. A Brain emotion model affects individuality and the perception that others have of it.
Establishing Mood
Mood is set through communications such as speech and
various forms of expression.
Emotion/Mood Lifespan
An emotion is related to a mood and has a life span in the
Brain. The longevity factor of emotion affects the behavior
response of the Brain.
Transient Model
The mood model can thus be transient and affected by sensor
input. The model is primarily a collection of rules governing the
behavior response to input and stimuli.
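As a sketch of what such a rule collection might look like in Spin (the mood scale, decay rate and sensor hook are invented placeholders, not settled Brain design):

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000

VAR
  long mood                                ' -100 (depressed) .. +100 (elated)

PUB MoodLoop | stimulus
  repeat
    stimulus := ReadSensor                 ' hypothetical signed sensor input
    mood := -100 #> (mood + stimulus) <# 100  ' rule: clamp mood to its range
    if mood > 0                            ' rule: emotion decays toward neutral,
      mood--                               ' giving the emotion a finite lifespan
    elseif mood < 0
      mood++
    waitcnt(clkfreq / 10 + cnt)            ' re-evaluate ten times a second

PRI ReadSensor : s
  s := 0                                   ' placeholder for real sensor stimuli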
Mood Characteristics
The Brain will first have basic mood characteristics embedded. Some suggestions: pleased, surprised, disapproving, curious and inquisitive, short, curt and quick, lazy or whimsical, sad or depressed, happy, excited and enthusiastic, intellectually challenging.
Recursion
Mood can be highly dependent on memory recursion in more
advanced models. Without recursive techniques, a Solid Static
model is workable.
Knowledge Compression
The Recursive Model consumes larger portions of memory. Recursion can use methods of knowledge compression, which is another very important part of the Brain.
Level of AI
Much of this will depend on the generation and level of AI permeated throughout the Brain. It's still too early to begin writing code, because the AI model needs to be in place first; then the various module subroutines can be implemented.
Face
Will a Brain face be mandatory and a prerequisite for the Mood Module?
Probably not. Communications with the Brain in conversation could
elicit bouts of evident personality without a smiling or frowning face.
Inherent traits and mannerisms come through in verbal speech, inflection, the strength of text and the placement/use of words.
News http://www.nyumbanilabs.com/2009/07/robot-with-moods/
A hyper-realistic Einstein robot at the University of California, San Diego has learned to smile and make facial expressions through a process of self-guided learning. The UC San Diego researchers used machine learning to "empower" their robot to learn to make realistic facial expressions.

"As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions," said Tingfan Wu, the computer science Ph.D. student from the UC San Diego Jacobs School of Engineering who presented this advance on June 6 at the IEEE International Conference on Development and Learning.

The faces of robots are increasingly realistic and the number of artificial muscles that controls them is rising. In light of this trend, UC San Diego researchers from the Machine Perception Laboratory are studying the face and head of their robotic Einstein in order to find ways to automate the process of teaching robots to make lifelike facial expressions. This Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, a highly trained person must manually set up these kinds of realistic robots so that the servos pull in the right combinations to make specific face expressions. In order to begin to automate this process, the UCSD researchers looked to both developmental psychology and machine learning.
Sovr said:
Re: Fill the Big Brain - wow, this brain project is amazing! I have been working on a project for more than a year where the robot implements moods into its everyday tasks. But just looking at this tells me I have a long way to go. I will be posting PAD, my robot project, soon on the forums.
Thanks Sovr. The brain is just getting started and although wiring will continue for some time, at least I can run processors in various stages and parts of the entity and do Spin programming. Much of this is a learn-as-you-go project. I'm especially interested in your robot mood algorithms and in knowing how you implemented PAD. Is it based on any kind of memory recursion? Or mainly a solid static model? I look forward to seeing your robot project posted on the Forum.
iBrain On LOGO for Unfolding Dream Worlds
Dimensionally Dream with LOGO
Enter a vast Brain unfolding dream world made possible with "dream compressing" LOGO language
Is this the first Brain Dream? Using a LOGO language Brain
Channel and techniques of Dream Code Compression, the
Brain can accomplish Propeller-based dreaming.
Dreaming
Dreaming is a big and important thing with the Brain. Evolving a dreamscape in full dual dimensional real time rendering is challenging both in terms of memory and processing power. Compressing the vast scale of changing motion images in an unfolding dream and displaying it is the objective. But how to do this?
Vector Graphics Dream Rendering
One way to accomplish all of this is through the rendering of iBrain dreaming in real time with a special designed language that can handle vector graphics drawing. It must also compress the images, i.e. generate compression code that can store code to create images. What strange new language can do this and would we be lucky enough for a version that works on the Propeller platform?
LOGO
The answer to today's Brain dreaming stems back to the 1980s, when Turtle LOGO became popular. A spinoff version of this language is now available for the Propeller chip - the Brain is now on LOGO.
Logo was created in 1967 for educational use, more specifically for constructivist teaching, by Daniel G. Bobrow, Wally Feurzeig, Seymour Papert and Cynthia Solomon. The name is derived from the Greek logos, meaning word, emphasising the contrast between itself and other existing programming languages that processed numbers. It can be used to teach most computer science concepts, as UC Berkeley Lecturer Brian Harvey does in his Computer Science Logo Style trilogy.[1]
Why LOGO
Why LOGO language? The language is a perfect high speed rendering planar-multi-dimensional construct with compacted image and graphics coding compression for evolving Brain dreamworlds.
Increasing LOGO Dimensions
LOGO is considered a two-dimensional construct that can explore angles, geometry and measurement. However, the Brain makes use of other dimensional parameters such as time, color, processor positioning, channeling, and luminosity.
Programming the Brain in LOGO
How to program Brain LOGO? Here's a list of commands to follow.
Dream Unfolding
What is Dream Unfolding? It's the construction of a dream by expanding code. Dreams are also folded using code in LOGO. For example, here's code for a simple spiral dream that unfolds in only a few seconds time. The illustration shows results made by the specific rotation of a geometrical construct.
to spiral :ln
fd :ln
rt 92
fd :ln+1
rt 92
spiral :ln+2
end
Then just enter (the procedure needs a starting length):
spiral 1
Here are 4 programs to unfold geometrical objects.
Square: REPEAT 4 [FD 100 RT 90]
Circle: REPEAT 90 [FD 10 RT 4]
Octagon: REPEAT 8 [FD 50 RT 45]
Star: REPEAT 5 [FD 100 RT 144]
Or enter
repeat 180 [fd 4 rt 2]
and get a (pretty good approximation of a) circle.
A Fractal Tree
to tree :size
if :size < 5 [stop]
forward :size
right 30
tree :size - 5
left 60
tree :size - 5
right 30
back :size
end
Sample in Spin by Dreamwriter
case Command
  COMMAND_FD, COMMAND_FORWARD:
    eraseTurtle
    parameter1 := getIntParameter(ParameterList)
    ParameterList := secondaryReturn
    'standard geometry, x = delta * cos(angle), y = delta * sin(angle). +90 so zero degrees points straight up.
    turtlex := turtlex + ((parameter1 * getCos(turtleangle + 90)) / 65535)
    turtley := turtley + ((parameter1 * getSin(turtleangle + 90)) / 65535)
    'plot the line
    gr.colorwidth(2, 0)
    gr.plot(oldTurtlex, oldTurtley)
    gr.line(turtlex, turtley)
    drawTurtle
    NewStringPosition := ParameterList
Discussion
Ralfw is a LOGO hacker, one of the originals from the 1980s. He writes, "I did LOGO on the Apple II, the MIT version which was since distributed by Terrapin and Krell. (I was taking a seminar taught by Hal Abelson and Andy diSessa.) My dream is for LOTS of LOGO varieties to experiment with; i.e., the Atari (LCSI) version did things the Apple II/MIT version did not, for example. Robotics-focused LOGOs have features that neither '80s version did. Right now I have the Chameleon AVR (8-bit AVR), but I'm hoping to wire-wrap up a second Propeller card as an expansion card for my Apple II, to get VGA output, stereo sound, and networking of some sort."
Conclusion
In conclusion, the Brain's use of a form of the LOGO language to trace out the pathways of unfolding Brain Dimensional Dreaming is now a possibility. LOGO is the means of creating unfolding dream worlds.
Brain X-RAY Machine
Developing Brain X-Rays now a Reality
It's not often that you have the opportunity, with various scientific medical equipment, to x-ray your project for the purposes of study, component placement analysis, continuing extension and development, upgrading and enhancement, revision, and various engineering requirements.
With new emerging technology, you don't need a medical CT scan machine. For the first time, with technical three-dimensional data supplied by NikosG, it becomes possible to x-ray the Brain using the Google SketchUp CAD program.
This technical Brain X-Ray was achieved through a technique of data development made possible by Master technical artisan NikosG of Greece. The TV view is remarkable, showing Brain detail through the TV in addition to maintaining the screen and markings. This x-ray can be extended from the outside-in as shown here or from the inside-out through the back side of the Brain.
Compared to the x-ray of a human brain, the Parallax Propeller
iBrain is significantly different with boards of silicon
processors. At this development time, the iBrain is electrical
but that could change with the mixing of bio modules that can
interface to electrical modules. In particular, the module for
tasting substances could be governed by bio sampling.
Much like a typical NMR (nuclear magnetic resonance) image, the Brain view can be developed inside the computer. Typical processing is remarkable, with the ability to move a camera around to any angle and photograph the result. Brain X-rays will become a part of Brain analysis in the future.
Brain TV Tuning for BIG Characters
How to tune a TV driver
The ROM font text driver
The graphics driver with a mix of colors
Left - graphics red text screen Right - green screen version
The How & Why of Propeller TV
You'd think it would be a simple thing. Grab a TV driver, run your demo program, and voila! Instant selection of big characters easily visible on a tiny TV screen... NOT! One must go into the gritty part of the code and know what to adjust and where to find it. If you don't know, it won't go!
How to Get Those BIG Characters?
This post is designed to cover everything you need to know when making BIG characters on the 3.5-inch TFT LCD Color Monitor from Parallax. This is just a small TV with 320 x 240 pixel resolution, small enough to fit inside the belly of the Brain. It's an ideal TV because it weighs almost nothing and is extremely small and easy to mount in a tight space.
The Challenge of Small TVs
With small TVs comes small text. It's so small, in fact, that when the letters blur together one may not be able to read the results. So in our continuing Brain adventure, we learn that most TV drivers are designed for big-screen TVs. It makes you wonder if most TV-driver developers live in big-screen TV movie theaters! That tiny micron nano font looks great on the big screen but is totally useless on the tiny TV.
Grit Driver Diving
OK, time to roll up your shirt sleeves, dive into the driver, make the changes, and get out of there ASAP! There are two approaches to handling the TV drivers. Yes, drivers, as in plural. It boils down to two main drivers for the Propeller TV, with a lot of variations. Let's take a look at the two main drivers.
Two Main Drivers
Text TV: First we have the Chip Gracey Text TV Driver, which uses the Propeller chip fonts stored in ROM. This, as viewed on the tiny TV, is the clearest and sharpest we have seen, so modifying this driver is a top priority. These characters are white on a normal dark TV screen backdrop.
Graphics TV: Next we have the Graphics TV Driver, which uses interlacing to make characters, letters, and graphics, plus adds color to the equation. The font size is normally very tiny in the demo code for this driver.
The ROM text driver is tack sharp
The graphics text can mix color
A graphics view in standard white char on dark backdrop
There's nothing like a retro green screen
Interlacing
Another factor comes into play. The graphics driver uses a method called interlacing. Tiny TVs may or may not handle interlacing well, i.e. the font letter may show a smudge, smear, blur, or a kind of stamped echo. This can happen at the bottom of the font, in the middle or at the top. It can also vary across the screen. For example, in some TVs the first and second lines are tack sharp but farther down there's the introduction of artifacts.
Never Fear, Help is Near
The next step was recognizing the need to modify these drivers and finding a way to do it. But don't worry, it's all accomplished. Several brilliant-minded techies came to the rescue. OBC refined the objective. Jrjr had VGA options for alternative approaches. Rayman made recommendations for a graphics approach. Ariba (Andy) shot out expert advice for the Text TV Driver and how to modify it. Perry masterminded the changes to the Graphics TV Driver for large text and color, and remarkably found a way to improve the interlacing with a single code change, thereby minimizing artifacts. Roger offered his code that handles large numbers and regular text. Publison was kind enough to photograph the results on two other monitors of different sizes and manufacture for appearance comparisons with the Graphics Driver installed. Potatohead noted the interlacing challenges faced by small TVs. Phil Pilgrim recalled that the Propeller BackPack is capable of font scaling and suggested checking the code.
BIG Results are in
What we now have are two main modified drivers, one based on the ROM font and another based on graphics. These are tuned to create large characters on several lines. The final code is posted below. The first is for tack-sharp text and the second is for text, graphics, color.
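For orientation, loading the stock ROM-font driver takes only a few lines; a minimal sketch (the base pin is board-dependent, and the row/column constants that set character size live inside the driver, which is where the modifications discussed above are made):

CON
  _clkmode = xtal1 + pll16x                ' TV driver wants the full 80 MHz
  _xinfreq = 5_000_000

OBJ
  text : "TV_Text"                         ' Chip Gracey's ROM-font text driver

PUB Demo
  text.start(12)                           ' TV DAC base pin, board dependent
  text.str(string(13, " BIG BRAIN TV"))    ' 13 starts a new line in this driver
  repeat                                   ' keep the main cog alive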
Small sample programs will come out by the thousands, like intelligent but small neural packages (programs) that contribute to the overall thought.
What can we do with a thought?
Hold it whereupon it becomes a memory
Solve it where a solution is demanded
Utilize it in a contributory fashion
What is developed?
The Propeller Brain collective is targeting a density of one thousand of these small contributory thought programs for every eight cogs - this amounts to 21 boards with 168 cogs and a total of 21,000 thought programs, which fits into our established model range.
The Key
Developing some initial small examples of simple thought processes
Machine/Human Brain Time Travel
Speed up and slow down brain time
The Propeller Brain Perspective
The Propeller Brain project is a large endeavor that can be scaled up or down to fit many levels of application and available resources. While the human brain is an extremely complex organ, its simulation is possible on several levels. Knowing the resultant behaviors of the human brain can help our machine brain formulation. We can learn from millions of years of evolution.
Time in the Brain
One area of Brain learning where the human model can contribute a valuable and useful algorithmic process is that of time. In human physiology, there is a process governed by the brain, tied to the survival instinct, that can alter perceived time. It aids the survival of the species - the brain can think faster and come up with solutions quickly when life is threatened. It can also think slower, in ways that may extend life.
Time in Review
There are several modes of time that exist in the real world. Let's examine four of these types of time.
Physical Time
Time can be physical, as shown by Albert Einstein. His time dilation equations show how moving clocks run slow and events seem to speed up. We can measure this effect, as the formula predicts, in various gravity fields and at speeds above roughly 1/10th the speed of light.
Event Time
In event time, time is merely the passage of events. It flows only in the forward direction. Although one can review unfolding time, it is not possible to go back and wind the clock backwards. Event time can be seen through a telescope: you are looking at the unfolding history of a deep-space galaxy located billions of light years away. Today, the galaxy is evolutionarily different from the image seen.
Physiological Time
In human physiology, time does the opposite: it compounds progressively from the moment we are born. As we age, time continues to increase in speed and passage. Here is a simple example: when we are young, time passes very slowly; hours can seem like days. When we are older, time speeds up; months can seem like days.
Take for example a child who is only 12 months of age. To that child, 6 months is half his entire lifetime. Six months seems like an eternity. However, when the child is a grown person at age 50, a half year is a relatively fast moment in time, merely 1/100th of the total life lived.
So this sense of time is relative to our physiology. Time begins slow and runs faster and faster in the physiological world. Theoretically, physiologically speaking, if you live long enough, you will seem to travel into the future.
Machine Time
Electronic Machine Brain time is dependent on clock cycles. It is limited by an Einsteinian world, as electromagnetic radiation as we currently know it (excluding theory) cannot be made to travel faster than the speed of light.
Can we physically time-travel our computer chips? Yes: the electromagnetic component can travel faster than 1/10th the speed of light. Can we physiologically time-travel with a machine chip? Yes: we can alter the clock from a baseline and progress it.
Can we make use of event time? Yes. In a machine, we can record and store the elements that happen with the passage of events, and those elements can unfold in the forward time direction.
Some Useful Applications of Time
In a human, the brain processes images one at a time, at a rate of roughly 30 fps. In high levels of stress or life-threatening situations, time appears to slow down, and images process at up to 120 fps in the brain. Can we do this with a computer-chip brain? Yes. The clock can run at a normal baseline and accelerate under given conditions, thus processing more information in unit time.
The Propeller can run at internal-clock slow time (approx. 20 kHz), internal-clock fast time (approx. 12 MHz), or with an external crystal at high-speed time, typically 80 MHz. Clock modes: (a) external crystal 4-8 MHz (with 16x PLL), (b) internal oscillator ~12 MHz or ~20 kHz, (c) direct drive.
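In Spin these modes are chosen with the standard clock-setup constants; a minimal sketch (assuming the usual 5 MHz crystal fitted to Parallax boards):

CON
  _clkmode = xtal1 + pll16x                ' "fast time": 5 MHz crystal x 16 = 80 MHz
  _xinfreq = 5_000_000

  ' alternatives, one at a time:
  ' _clkmode = rcfast                      ' internal oscillator, ~12 MHz
  ' _clkmode = rcslow                      ' internal oscillator, ~20 kHz

The clkset command can also switch the clock at run time, which is one route to the baseline-and-accelerate scheme described above.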
Conclusion
We have only touched upon some basic principles of time and briefly discussed how different types of time can be applied to the Propeller Brain.
When I first came across your posts, I thought, wow, these guys really have it together. Connecting many STAMPs or Propellers together to make a supercomputer, wow, that is a serious endeavor. I spent several hours reading posts and following links from the 40-board Propeller skyscraper to the STAMP SEED supercomputer. I have to conclude that you have created nothing more than a bunch of distinct and separate boards stacked one on top of the other. I looked at the STAMP supercomputer. It's just a bunch of STAMP boards connected to pin 0. Sorry for the rant. I'm just mad at myself for spending time on this. I wish you the best.
The iBrain is a Parallax Propeller based robot brain made up of many processors programmed in Spin language.
Sorry; I was a little upset that day after spending hours weeding through tons of serial threads. I thought it would be easier to just ask. So do you have any examples of Prop to Prop communication and multiple propellers problem solving? I'm having a hard enough time with parallel processing on a single Propeller.
Fair enough. Apology accepted. I have some test examples of Propeller to Propeller communication. The code was first used when the 1st and 2nd PPPBs were installed. It's working on the first BUS. After that, I designed a reconfigurable hardware serial interface. There is also code designed for adding other processor types down by the Brain Stem. This code is completely different for communications. The test for this is working too. I believe I can get both to work on the same BUS. As you can see, it's still being tested. Now I introduced something new called Brain channeling. This lets an app run on a specific addressed board and then channel communicate results to other boards or the display. I need a program scheme so multiple boards can be switched in and out on the TV for display of data.
It's a very nice project. A couple of thoughts relevant to what you're trying to achieve:
Intelligence exists with regards to something external, in our case the human society and the universe. You can test your AI against both, however action in the physical world is slow and energy-consuming -- I'd advise instead in the first stage of your AI development process to use a simulation. This makes development quicker and also cheaper.
You can design the AI by hand, either from a formal theory or using trial-and-error (or both), but this is quite tedious as well. Another way is to provide the initial program reflectivity, snapshots, debugging & self-modification capabilities. Then evolve a collection of programs against your simulated environment. The "final" program can then be transferred to the iBrain.
Hi cde, thanks for the kind words, background and ideas. The simulation idea is workable, but there's nothing I could find to simulate 170 cogs connected together. Maybe some code run on the PC can be designed to emulate the connection, and then we write the code to run on it. Has anyone accomplished work in this field? I recently saved some languages that can run on the PC - these are free and could be used to develop a simulation. But is Spin different enough from these languages that the programming should be done in Spin, and not in the emulation language? Another idea is to use one Propeller chip and run a small version of the code there, in up to 8 cogs, then expand the working code to more processors. I have done some work using this approach. Indeed, intelligence is always a barometer against the real world condition.
Brain Base Communications
Design, code, schematics, tests
January 8, 2011 Post: First successful test results with the first gamut of testing programs, matching phase and using PST as output, see post 141 page 8. Showing the PST on COM52 with Propeller #1 as a talker and #2 as a listener.
This is one-wire serial communication at 9600 baud with eight stop bits and positive polarity over a BUS interface. Each Propeller can talk or receive. The first Propeller is a designated Master. In the test block, the remaining two PPPBs are slaves. Both slaves are currently listeners while the Master is the talker. Serial transmission is accurate and stable over a million-bit test. The code pair is a talk/listen configuration.
Introduction
A while back (two months ago), the test code for connecting Propeller to Propeller was established. The interface of the first two Propeller boards is called the Brain Base.
Configuration
Note: the Brain Stem connected a Basic Stamp 2 to a Propeller and used different test code. The Brain Stem resides under the Brain Base. Above the Brain Base reside the Brain Spans.
Code
This code concerns the Brain Base only. Posted here is working test code for Propeller to Propeller communications, testing on a BUS. Refer to the connection schematic for wiring.
As reviewed on page 6, post 112, these are options for serial interfacing: http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain/page6
More napkin sketches: the Brain Base schematic - the first tests were compounded on this framework.
The wiring schematic/sketch for the Brain Base includes wiring for a data LED, a protected one-wire BUS, and a processor decoupling capacitor. On top side boards, Prop Plug attaches with the logo facing up. On the Brain Spans, this concept is extended to accommodate more boards.
SW1 | SW2 | INTERFACE
----|-----|----------
 0  |  0  | FD
 0  |  1  | HD, PL2
 1  |  0  | HD, PL1
 1  |  1  | PL1, PL2

KEY
1   INDICATES SWITCH ON
0   INDICATES SWITCH OFF
FD  FULL DUPLEX
HD  HALF DUPLEX
PL1 1ST PARTY LINE
PL2 2ND PARTY LINE
Note the early proposal for the Hybrid nature of the iBrain at post 252, p. 13: http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain/page13
Parameters
Talk code is provided (both Tx and Rx), tested at 9,600 baud. Set communications polarity to one and stop bits to eight. Match the baud rate in both Tx and Rx. The test code sends the numbers 0 to 9, one each second, to the Propeller and loops.
Indicator Programs
Indicator test programs are also attached: one keeps a single LED on, another pulses a single LED. The programs set the pin number and the pulse rate. Note the use of a Repeat loop to keep the LED on. Refer to the comments in all code for more information.
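The Brain's actual talk/listen pair is not reproduced here, but a minimal sketch of the talker side using the standard FullDuplexSerial object looks like this (the bus pin number is an assumption, and the stock object manages its own framing via mode bits rather than the exact polarity/stop-bit settings described above):

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000
  BUS_PIN  = 0                             ' assumed one-wire bus pin

OBJ
  ser : "FullDuplexSerial"                 ' standard Parallax serial object

PUB TalkerDemo | n
  ser.start(BUS_PIN, BUS_PIN, %0100, 9_600)  ' open-drain tx lets props share the wire
  repeat
    repeat n from 0 to 9
      ser.tx("0" + n)                      ' send digits 0..9, one per second
      waitcnt(clkfreq + cnt)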
TEST RESULTS
============
WIRING - GOOD
CODE - GOOD
FUNCTION LIBRARY - GOOD
CLOCK - GOOD
TIMING DELAY - GOOD
BAUD - GOOD
DATA - GOOD
BUS - GOOD
PIN - GOOD
POLARITY - MATCHED/GOOD
STOP BITS - MATCHED/GOOD
Humanoido's Communication Program - Receiver
Prop to Prop
Receives the numbers 0 to 9, one each second, in a loop
Propeller to Propeller Receiver v1.1
prop_prop_rx.spin
Updated Wednesday March 9, 2011
Giant Brain Project
Receives data on a one wire bus
Propeller to Propeller BUS Communications
Testing the Brain Base
Humanoido's Communication Program - Transmitter
Prop to Prop
Sends the numbers 0 to 9, one each second, to the Propeller and loops
Propeller to Propeller Transmitter v1.1
prop_prop_tx.spin
Updated Wednesday March 9, 2011
Giant Brain Project
Transmits data on a one wire bus
Propeller to Propeller BUS Communications
Testing the Brain Base
Brain Genealogy: From Stamps to Propellers
A collective tracing of multi-processor brain design and development leading up to the iBrain
Projects with a Purpose
To fully understand the how and why of the current Propeller collective Brain design, it's beneficial to trace the family roots over time. The original design sprang from a humanoid robot in use during 2002, consisting of several networked Parallax processors. Where did the design for the Propeller-based Brain Base come from? What are its origins? Follow the progressive evolution of these designs, culminating in the Brain's arsenal of Propeller chips.
The first multi-processor robot used
a design that predates this Toddler
Humanoid robot version which also
uses a network of Parallax processors.
The development of iBrain is ongoing. No one knows where this will lead. Will the Brain become self-aware? Will these early designs evolve into a life form? How will the iBrain help understanding in today's world? Amidst the chaos of interminable development efforts, inside this Brain Universe there are indications of the dawning of logic and evolution moving towards a greater purpose. http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain
1) PEK 1
2) MC Computer
3) LED Machine
4) 2-Proper - 2 props, 1 PEK, 1 on same breadboard, 2-Prop-Experiment
5) Spark 2 - 2 props, 1 Proto Board & 1 in parallel
6) PiggyTwins - 2 props, 1 piggybacked on another
7) Dueling Breadboards - 2 props, one on ea., f/interface tests
8) Spark 4 Tiny Tim - 4 props, two proto boards w/2 props on ea.
9) Spark 5 - 5 props, stacked Proto Boards
10) Spark 6 - 6 props, 3 proto boards, 2 props on each
11) Spark 8 Tertiary ADJUNCT - 8 props, 4 proto boards, 2 props on each
12) Propalot - 10 props on solderless breadboard
13) Spark 10 - 10 props, 5 proto boards w/10 props total
14) TTB Test Bed of Ten - 10 props, single board
15) Twelvenator Board of Twelve - 12 props, green board
16) UltraSpark 15 - 15 props, interrupted stack Proto Boards
17) Tertiary 20 - 20 props, 15 proto boards stacked, 5 props
18) UltraSpark 20 - 20 props stacked
19) Boe-Bot Brain Project - 20 props as Brain on Boe-Bot
20) MLEPS Super Language Machine - 25 props
21) UltraSpark 40 Supermicrocontroller - 40 props, 320 cores, 6,400/8,320 MIPS
22) Smartest Boe-Bot Brain Temporal Experiment - 40 props, US40, 1 BOE, 321C
The above list omits several test machines. These will be added after the photos are reviewed.
The newest machines:
1) Brain Span - 3 rows of six Propeller boards each
2) Brain Stem - 1 Propeller board and 1 Stamp board
3) Brain Base - 2 Propeller boards
4) Current level of the Brain - 22 boards
Brain Stem Communications
Revealing Brain Stem test code
This chronicles the development of test code, phase I, for the Brain Stem.
Note, the Brain Stem is made up of two boards, a BOE and a PPPB. Many months ago,
Many months ago, the first Brain Stem was awaiting more wiring. Since
that time, the Stem was completed and is now a fully functional and
integrated module inside the Giant Brain.
Last year in November, the first Brain Stem was completed and tested. Seen here, it sits on the green pad for further analysis. At that time, the Brain did not exist and was only a vision in the mind's eye!
____________________________________________
Robotic Brain Stem Discovery Thread http://forums.parallax.com/showthread.php?127310-Robotic-Brain-Stem&p=955611#post955611
Introduction
The Brain Stem is one of the most important parts of the Brain. It resides below the Brain Base. The Stem passes signals like a nerve center for reflex, muscle and primarily mobility control. It harnesses mobility software to give the brain motion control.
Compatibility
The Brain Stem has another purpose. It serves as a commonality/compatibility interface with other processor-based robots, for example Parallax robots using BASIC Stamps. It is also compatible with other Parallax robots, including the Boe-Bot, SumoBot, S2, Stingray, and QuadRover, and is an ideal candidate for the Robot Base Full Kit.
SuperSTAMP
Communications were developed and tested on the first Propeller above the first BS2, a combination known as the SuperSTAMP. The SuperSTAMP is another project developed to give maximum power to a BASIC Stamp. The SuperSTAMP mates to the PPPB, and the link consists of both Tx and Rx programs along a common protected BUS.
Wiring Diagram for a Brain Stem. Another napkin sketch, the Stem
consists of a Propeller board (PPPB) and a BASIC Stamp 2 Board (BOE).
Test wiring is straightforward.
Wiring
Important wiring note: the schematic shows a Propeller-to-Stamp connection but does not show the lines running from Vss to Vss. Note that Vdd to Vdd is not connected, as the Propeller is a 3.3-volt device and the Stamp is a 5-volt device. The actual feed comes from the Stamp regulator for the 5 volts and from the external power supply for the 3.3 volts.
Programming Code
Brain Stem code consists of one folder with three files.
PROP-BS2 LEVEL5.bs2
PROP-BS2-LEVEL5.spin
BS2_Functions.spin
Operating the Programs
Load the first program into the BS2; this will act as the receiver. Then load the second, "Propeller" program and run it; this is the transmitter.
Testing
In the test, the Propeller is continually talking to the Stamp and the Stamp is listening. The Stamp code makes use of the Debug screen for output.
Error Detection
The code also has error detection. If the system hangs, a timeout occurs and a message is given. Following this, the loop continues looking for the next character.
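On a Propeller listener, the same timeout pattern can be sketched with FullDuplexSerial's rxtime method (a generic illustration, not the Brain Stem's actual code; the pin, window and message are assumptions):

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000

OBJ
  ser : "FullDuplexSerial"

PUB ListenLoop | c
  ser.start(0, 0, %0100, 9_600)            ' assumed one-wire bus on pin 0
  repeat
    c := ser.rxtime(2_000)                 ' wait up to 2 seconds for a byte
    if c == -1                             ' -1 signals a timeout
      ser.str(string("timeout, resuming", 13))
    else
      ser.tx(c)                            ' echo as a stand-in for real handling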
Prop to STAMP Voltage Levels
The BUS data transmission from the Prop to the Stamp is at a compatible voltage level.
Do we add some appendages and a mobility platform to the brain?
If this is added, then does it become just another robot?
Where is the line in the sand?
The brain could include simple appendages
and a mobility platform for locomotion. Call
the Brain and it goes to you.
Brain Appendages
One idea is to add some appendages to the Brain, not necessarily hands and arms, but rather protective stubs or stumps that can bump around, protect it from threatening animals, bump open swinging doors, push things, make some gestures and signs with signals, and possibly do some light, limited physical work.
Types and Needs of Appendages
stubs
stumps
protection
light work
signaling, signing & gesturing
bumping
pressing & pushing
initiating an alarm
blocking
shielding
soundless vision for work in the vacuum of space
writing, printing
Mobility
Mobility can also include motion to move short
distances. If mounted on a robotics platform, the
Brain could find you, face you when talking to it,
move to a power feeding station, turn to a specific
Flip Mode when you command it, and adjust its
position for better hearing, speaking, vision, balance,
and other environmental positioning. A brain in fluid could have flippers and fins to adjust its underfluid positioning or flotation orientation.
Types & Needs of Mobility
adjust hearing
compensate incline
transport
feed
engage flip modes
increase vision acuity
direct speech & sound
environmental positioning
positioning for engagement
Top Down Approach with Bottom Under
Arm attachments can mount on gimbals near the top, while motion controllers attach underneath. Wheels, treads, rollers, snaking skin or other forms of transport can initiate motion.
Quoting Humanoido's reply above (on simulating the 170-cog collective), cde responds:
Yes, it hasn't been done yet. The first thing needed is, of course, a simulator at the Spin/PASM level. The simulation can either strive to be precise (cycle-accurate) and faithful to the hardware, but this will be slow and may take too long (i.e. years) for any significant form of evolution to occur; or you could settle for inaccurate but fast simulation, for example translating PASM code to x86 and letting it run (you'll have to catch exceptions). The generated code might then no longer be suited to run on the real hardware because of the timing imprecisions.
Another point for AI is that it may require some heavy lifting, especially floating-point computations, which the Propeller lacks in hardware form. Have you considered adding a uM-FPU to your design? Obviously, I assume you don't want to move to another microcontroller that has support for floating point.
In the real-life scenario, the brain is active during a normal day's thinking. During the day, it stores some key words regarding its experiences. These words are nouns, verbs and adjectives, and they are quantified. There is either a weight specific to each word or a random selection. The weight could be equal to the numerical re-occurrence of the word, or based on time.
At the end of the day, the brain requires sleep. A dream world takes over. Using the dream algorithm matching capabilities of LOGO language (remembering the brain functions in a hybrid mode with multiple programming languages) the key words are used to unfold the image(s). The fuzzy logic of combining various random nouns, verbs and adjectives creates a surrealist dream world. The images of this dream world unfold on the TV screen.
LOGO is a very capable language for the use of dream folding and unfolding. There is a web site showing how the language was used to image beautiful colorful birds, so graphically there's much capability in matching the dreaming application. If PropLOGO has the features, it can be used. If it does not have the feature, each feature can be created in SPIN.
The first code example should be a simple one. A choice of 3 words will fuel the Dream Folding algorithm. Each word will have stored code to recreate the word's meaning. Nouns are the simplest, representing people, places or things. Verbs are given the associative property. Adjectives fall into different classes: color and size adjectives are recognizable, while others are not (such as hot, cold, and actions like bubbling, rapid, bouncing). The adjective is like a skin applied to the dream world.
The first dreams are static frames of unfolding dreams. More powerful dreaming could achieve 30fps for motion picture quality. Distributed imaging can create more advancing dreaming modes.
The equation hierarchy is noun, then adjective in the first lite model. Two sets of each class are stored and the images are folded. Perhaps three elements of each can demo the effect.
The Process of Dream Folding
Noun      Color Adjective (CA)    Size Adjective (SA)
------    --------------------    -------------------
Car       Red                     Large
House     Blue                    Medium
Person    Green                   Small
df = \int_{0}^{3} \left[\, N \cdot SA + CA \,\right] dt

where df is the dream folding value, the integral runs over 0 to three (one unit of t per word set), N is the noun, SA is the size adjective, CA is the color adjective, and dt is the differential of time; t is directly proportional to the re-occurrence of a word in unit time.
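As a thought experiment, here is a tiny PC-side sketch of the lite model (Python, in the spirit of prototyping on the PC before porting to Spin). The numeric word encodings are invented for illustration; the sum is a discrete stand-in for the integral above, with dt = 1 per word set.

NOUNS  = {'car': 1, 'house': 2, 'person': 3}
SIZES  = {'large': 3, 'medium': 2, 'small': 1}   # size adjective SA
COLORS = {'red': 1, 'blue': 2, 'green': 3}       # color adjective CA

def dream_fold(triples):
    # Discrete version of df = integral 0..3 of [N*SA + CA] dt, dt = 1 per set.
    return sum(NOUNS[n] * SIZES[s] + COLORS[c] for n, s, c in triples)

df = dream_fold([('car', 'large', 'red'),
                 ('house', 'medium', 'blue'),
                 ('person', 'small', 'green')])
print(df)    # a single fold value that would seed the dream renderer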
More advanced dreaming can take on the parameters of psychology, encompass larger dream worlds, hold more nouns, adjectives and verbs, and function in real time as dreaming unfolds. More advanced dreaming can use more sophisticated dream compression for folding and unfolding.
cde said: "Another point for AI is that it may require some heavy lifting, esp. floating point computations which the Propeller lacks in HW form..."
cde: actually this is a misconception. The Prop is fully capable of floating point. I know; when Mike Green provided the tutoring on this topic, I was totally glued to it.
The Brain at this point is completely INT (integer math). I spent years programming with integers and so far envision sticking with them. I know there are other chips out there, but the focus is mostly Propeller only; more work is accomplished that way with the resources at hand. But of course we would look at another chip to help implement algorithms on the Prop, if needed.
The only reason for moving to another Parallax chip is to add it to the collective in a kind of Brain absorption. For example, the Brain has already absorbed Propellers and Stamps, though it's primarily touted as a Propeller machine. Other processors provide greater compatibility when connecting sensors in the real world.
For actually needing FP, that remains to be seen. Most people rarely calculate things to several decimal points, or any decimals at all. I just don't see the need for such high-precision accuracy when we really need to go fuzzy logic on most things and use guesstimates.
Perhaps the necessity of a brain designed for very precise calculations is merely an illusion. I would be very happy if this is a brain designed to take us one step closer to life.
As an unknown person has stated, in reference to the schools of astronomy and pools of artificial intelligence, the place where we discover life may not necessarily be (first) from the giant collective hands of radio telescopes through project SETI but rather through a life form of our own creation.
cde said: "Yes, it hasn't been done yet. The first thing needed is of course a simulator on Spin/PASM level..."
cde, indeed, these are points well made. If the simulation can be made, say on Windows, and the code already has a one-to-one relationship with the Prop, then we're ready to write code and can transfer it relatively unscathed in a reasonable amount of time. I'm thinking of installing a similar version of the LOGO programming language, doing the straightforward folding code on the PC, then simply cutting and pasting the folds into the Spin Tool.
Humanoido said: "For actually needing FP, that remains to be seen. Most people rarely calculate things to several decimal points, or any decimals at all..."
I imagine this would depend on the kind of data processing the AI would need to do. A quick guess could be done using fixed point, and a more precise but slower calculation could involve the floating point module you mentioned. One aspect of AI you might also want to look into is competition, that is, to have a pool of competing programs in a Propeller with a "judge" somehow rewarding the most correct/efficient program based on an evaluator relevant to the task -- a bit like Tierra/Avida.
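The competition idea is easy to prototype on the PC first. A hedged sketch (Python; the task, scoring, and mutation scheme are all invented stand-ins, loosely in the spirit of Tierra/Avida):

import random

TARGET = 42                                    # stand-in task: guess this number

def judge(program):
    return -abs(program['guess'] - TARGET)     # higher score = closer guess

pool = [{'guess': random.randint(0, 100)} for _ in range(8)]   # one per cog, say
for generation in range(20):
    pool.sort(key=judge, reverse=True)         # the judge ranks the pool
    winners = pool[:4]                         # winners survive...
    pool = winners + [{'guess': p['guess'] + random.randint(-5, 5)}
                      for p in winners]        # ...and their mutated copies breed
print(pool[0], 'score', judge(pool[0]))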
John Conway's Game of Life by Spork Frog
http://forums.parallax.com/showthread.php?96312-John-Conway-s-Game-of-Life
http://forums.parallax.com/attachment.php?attachmentid=48968&d=1188635041
This is a simple implementation of John Conway's Game of Life, using a 20x20 grid, on a Hydra with VGA output. Surprisingly, I don't think this has been done yet with the Hydra. Written completely in Spin as a self-programming test over about 6 hours.
Propeller Life by Kenn Pitts
http://obex.parallax.com/objects/141/
Propeller demo cellular automaton based on John Conway's Game of Life. Fun to watch! Click on the samples provided in the app, or roll your own. Works great on the Prop demo board; uses a VGA screen and mouse. Enjoy!
Added Commentary: The Propeller can only do one-dimensional arrays, but in this scenario I needed a 2-dimensional array, so I was mapping one virtually onto the array using some quick math: array[x + (y * 20)]. While this works, the Propeller has no built-in multiply instruction, so it has to add. A lot. So instead I decided on this: you have a location array; to the left of it is array[i-1], to the right is array[i+1], below it is array[i+20], and so on -- using only addition and subtraction instead. That alone boosted performance about 3x.
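The indexing trick is worth seeing in code. A PC-side sketch (Python; a 20x20 grid flattened into one array, edge handling omitted for brevity -- the offsets are the point, not a full Life implementation):

W = 20
cells = [0] * (W * W)                 # 2-D grid flattened into a 1-D array

def neighbors_multiply(x, y):
    # Original approach: virtual 2-D indexing, one multiply per access.
    return [cells[(y + dy) * W + (x + dx)]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if dx or dy]

OFFSETS = (-W - 1, -W, -W + 1, -1, 1, W - 1, W, W + 1)

def neighbors_add(i):
    # Faster approach: precomputed offsets, addition/subtraction only.
    return [cells[i + o] for o in OFFSETS]

print(neighbors_add(1 * W + 1))       # the eight neighbors of cell (1, 1)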
PropBASIC demo program "Conway's Game of Life" by Bean
http://forums.parallax.com/showthread.php?118102-PropBASIC-demo-program-quot-Conway-s-Game-of-Life-quot
video_life.pbas
video_life.spin
MonoVid.spin
Here is another PropBASIC demo program. This is Conway's game of life on a 256x192 matrix. Output is NTSC from the Demo board.
Into cellular automata? Try Propeller Life! by Tallyken
PropellerLife.zip
http://forums.parallax.com/showthread.php?92367-Into-cellular-automata-Try-Propeller-Life!
http://forums.parallax.com/attachment.php?attachmentid=45644&d=1172556846
You'll need a VGA screen + mouse. Chip Gracey comments: For those who haven't run it, it's a complete application that uses a mouse and VGA to show some 'cellular automata' (is that right?). It has a clean, tight feeling about it that reminds me of how computers USED to be, and hopefully will be someday, again. I suppose there are other such apps posted in the forum, but I tried this one out because it just used the demo board and was easy to run.
Hardware Version Life by RobotWorkshop
http://www.sparetimegizmos.com/Hardware/Life_Game.htm
http://forums.parallax.com/showthread.php?92367-Into-cellular-automata-Try-Propeller-Life!
I built a hardware version of Life and have it hanging up on the wall. It's a very cool and relaxing display.
Cellular Automata 256 Rules with Sound by William Como
http://forums.parallax.com/attachment.php?attachmentid=54843&d=1217417073
Requires GRAPHICS and TV drivers (see IDE sources directory) and comboKeyboard driver. Uses 512x192 Resolution. For Demoboard/Protoboard/SpinStudio Boards. To see what it looks like running rule #150 (using the tv driver) go here: http://www.youtube.com/watch?v=Dh9EglZJvZs
Cellular Automata 256 Rules with Sound by William Como - modified by Jesse Burt (Avsa242) to run on Rayman's Propeller Touchscreen Platform (PTP) (Rayslogic.com, LLC). VERSION 1.0-modified. Requires Graphics.spin and the PTP_LcdDriver for the PTP; make sure Graphics.spin is from the PTP sources, not Propeller Tool/Hydra/etc! Uses 320x240 resolution. CA256rules_ptp.spin
Cellular Automata 256 Rules with Fractals & Sound by Virand
http://forums.parallax.com/showthread.php?100033-Miscellaneous-Demo%28s%29
CellularAutomata256Rules.spin
http://forums.parallax.com/attachment.php?attachmentid=54843&d=1217417073
For Hydra. More fractals with sound. All 256 "1-D" cellular automata rules to choose from, although not all are interesting. Don't forget you need the drivers from the Sources directory. This one uses the keyboard. If you liked entropysynth, this is very similar and very different. And if you select 257 you get the same as 001, and so on, in case you wonder about that.
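For readers who want to play with the same idea on a PC before touching the Hydra, here is a minimal sketch of all 256 elementary rules (Python; wrap-around edges assumed, which the Propeller demos may or may not use):

def ca_step(cells, rule):
    # Wolfram numbering: bit (left*4 + center*2 + right) of the rule byte.
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 +
                      cells[(i + 1) % n])) & 1 for i in range(n)]

row = [0] * 31 + [1] + [0] * 31          # single seed cell in the middle
for _ in range(16):
    print(''.join('#' if c else '.' for c in row))
    row = ca_step(row, 90)               # rule 90 = left XOR right; try 30, 110...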
"256 Rules" with improved graphics and ability to randomize the CA by Virand improvedCA256rules.spin
http://forums.parallax.com/attachment.php?attachmentid=55106&d=1218684438
"256 Rules" with improved graphics and ability to randomize the CA to make the boring rules interesting and vice versa. Needs comboKeyboard.spin driver.
Cellular Automaton on the Prop with Wolfram's Rule 90 by Acantostega
http://forums.parallax.com/showthread.php?90296-Blinky-Lights
rule90.spin
http://forums.parallax.com/attachment.php?attachmentid=44483&d=1165400523
Cellular automaton on the Prop. I just went with Wolfram's rule 90 (it can be done in a more efficient way with XOR) and didn't try to enforce any sort of global synchronization. The idea is that each cog is a cell, and each cell corresponds to a LED. Port A is used by each cog to assess the state of its neighbors. It cycles between two or three states... (rule 90 forms a Sierpinski triangle, although you won't be able to see anything like that in 8 LEDs). While a cog per cell is inefficient, I guess it's truer to the spirit of CAs. It also opens up the possibility of asynchronous CAs, although other rules apply in that case. Maybe the clock.spin module can be used for global synch.
Humanoido's Comment: Works with a Parallax Demo Board. The LEDs evolve with a rather short life span.
Amazing Sand Physics: 6 cogs animate 10,000 grains of sand by Dennis Ferron
http://forums.parallax.com/showthread.php?92322-Amazing-Sand-Physics-demo-uses-6-cogs-to-animate-10-000-grains-of-sand-in-realt
SandDemo - Archive [Date 2007.02.25 Time 00.30].zip
http://forums.parallax.com/attachment.php?attachmentid=45611&d=1172416571
I can animate 10,000 grains of sand with just 12K of memory and a small processor because the grains of sand are cellular automata. I got the idea from Conway's Life game. In the game of Life, cells are "alive" or "dead", and there are simple rules which govern individual cells to create complex overall behavior. After seeing what arises in Conway's Life, I figured cellular automata would work for doing the physics for particle interactions, too, and guess what, it does! There are only two basic rules which operate on the sand particles:
1. A sand particle will fall if there is an empty space beneath it.
2. If 3 or more particles are stacked up, the bottom particle will be pushed left or right out of the tower, if there is space.
Here's a version of the demo that achieves smooth scrolling using only 1 video page, using a "mode x" method. It is the same as the original sand demo, except that now the whole background (including moving sand) scrolls in a continuous loop. It doesn't require any more CPU resources to do this over the nonscrolling version, because it doesn't require any memory moves (block-image transfers) to scroll. SandDemo - Archive [Date 2007.02.25 Time 22.40].zip
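A minimal PC-side sketch of the two sand rules (Python; the pouring pattern, grid size, and tie-breaking are invented, and the bottom-up scan means each grain moves at most once per tick):

import random

W, H = 40, 20
grid = [[0] * W for _ in range(H)]               # 0 = empty, 1 = sand

def step():
    for y in range(H - 2, -1, -1):               # scan bottom-up
        for x in range(W):
            if grid[y][x] != 1:
                continue
            if grid[y + 1][x] == 0:              # rule 1: fall if empty below
                grid[y + 1][x], grid[y][x] = 1, 0
            elif y >= 2 and grid[y - 1][x] == 1 and grid[y - 2][x] == 1:
                side = random.choice((-1, 1))    # rule 2: bottom of a 3-stack
                if 0 <= x + side < W and grid[y][x + side] == 0:
                    grid[y][x + side], grid[y][x] = 1, 0

for _ in range(200):                             # pour grains and let them settle
    grid[0][random.randrange(W // 3, 2 * W // 3)] = 1
    step()
for row in grid:
    print(''.join('#' if c else '.' for c in row))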
Dennis Ferron comments on implementing Cellular Automatrons on Multiple Cogs
http://forums.parallax.com/showthread.php?92322-Amazing-Sand-Physics-demo-uses-6-cogs-to-animate-10-000-grains-of-sand-in-realt ..this particular algorithm is easy to run in parallel because each "cell" only looks at the cells very close to it, so you can run multiple cogs on it as long as they are more than 4 or 5 pixels away from each other as they work. So each cog works from bottom to top, line by line, and it's ok as long as all the cogs move from bottom to top at the same speed and maintain some separation between them. Problems ensue if a cog gets bogged down and a cog "below" it catches up to the line it's on. The thing with dividing the screen into buckets is that it's difficult to handle the edge cases where two buckets touch, and if you ignore the boundaries and just let cogs move sand in and out of each other's buckets, then there is still the possibility that another cog will be writing sand into your bucket area while you are trying to put a grain in the same spot. So instead of buckets I thought I'd just have all the cogs scan the whole screen, but dynamically insert waits after each line so that the cogs all remain exactly 1/6 of the screen away from each other as they scan. If one cog is getting a little bogged down (happens if there is a lot of sand) the others would all have to slow down too, to maintain an equal distance. There are still caveats with that; for instance a hole at the bottom of a sand dune can only "bubble up" to the top as fast as the scan speed of a single cog doing the sim no matter how many other cogs are used. Having more cogs doesn't make holes bubble up any faster, but it allows more holes to bubble up at once.
Commentary by Epmoyer
http://forums.parallax.com/showthread.php?96312-John-Conway-s-Game-of-Life
One way to speed the processing up considerably (for this particular set of CA rules) is to take advantage of the facts that 1) the patterns which evolve from Life are never heavily populated (i.e. there are typically more empty cells than full) and 2) the rule set in Life depends only upon the number of neighbors and not their relative positions. Having made those two observations, you can create a "neighbor count" array of 20 x 20 bytes (i.e. one per cell). For each pass you zero the neighbor count array, then search the cell array for populated cells. Most will be empty, and you can move on doing no math at all. When you do find a cell, increment the neighbor count for all surrounding cells. When done, make another pass and apply the spawn/survive/die logic to each cell based on its neighbor count. You'll end up having to access far fewer than 3200 (8x20x20) cells and should get a significant speed improvement. For very sparse screens you'll be down around 400 (cell check pass) + 200 (incrementing adjacent cells on 25 populated cells) = 600 or so cell accesses instead of 3200, which is about a 6x boost in execution speed. Well, actually there's also the pass to zero the array (400) and the pass to check the array when done (400), which puts you at 1400 instead of 3200, so perhaps it's a 2x increase in speed, but you can do some things to speed that up as well. For one thing you can use nibbles instead of bytes (i.e. store two counts per byte), which will halve the size of your neighbor count array, and you can use a block fill to zero it, which will be very fast.
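Epmoyer's neighbor-count pass translates directly into a PC-side sketch (Python; a sparse set of populated cells stands in for the 20x20 byte arrays):

W = H = 20

def life_step(cells):
    # cells: set of (x, y) populated coordinates.
    counts = {}                                   # sparse neighbor counts
    for (x, y) in cells:                          # one pass over live cells only
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx or dy:
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < W and 0 <= ny < H:
                        counts[(nx, ny)] = counts.get((nx, ny), 0) + 1
    # Second pass: spawn on 3 neighbors, survive on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

blinker = {(5, 4), (5, 5), (5, 6)}
for _ in range(3):
    print(sorted(blinker))
    blinker = life_step(blinker)                  # oscillates every generation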
Jonlink0 describes another Automaton
http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain/page3
The Brian's Brain cellular automaton (http://en.wikipedia.org/wiki/Brian%27s_Brain) is an interesting brain model. Even though the only action potentials that can be sent are excitatory, and each neuron only has an 8-cell (Moore) neighborhood, "waves" of action potentials will grow chaotically from a disturbance in the medium. Some method of creating inhibitory action potentials (perhaps utilizing extra cell states?), as well as input/output methods (easily done by changing certain cell states according to the states of the robot's sensors), may cause interesting behavior. Some sort of inhibitory action potential is obviously necessary to prevent the automaton from becoming saturated with chaotic activity, which is what Brian's Brain will default to if given semi-random sensor input. (The same thing occurs in BEAM technology, such as in bicore- and microcore-based neural/nervous networks; specialized circuits are often added to prevent this.)
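For experimentation on a PC, Brian's Brain reduces to a few lines (Python; the torus edges and random seed are my assumptions):

import random

W = H = 32                                   # states: 0 ready, 1 firing, 2 refractory

def step(grid):
    new = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            if grid[y][x] == 1:
                new[y][x] = 2                # firing -> refractory
            elif grid[y][x] == 2:
                new[y][x] = 0                # refractory -> ready
            else:                            # ready fires on exactly 2 firing neighbors
                firing = sum(grid[(y + dy) % H][(x + dx) % W] == 1
                             for dy in (-1, 0, 1) for dx in (-1, 0, 1) if dx or dy)
                new[y][x] = 1 if firing == 2 else 0
    return new

grid = [[random.choice((0, 0, 0, 1)) for _ in range(W)] for _ in range(H)]
for _ in range(10):
    grid = step(grid)                        # chaotic waves grow and propagate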
Tracy Allen comments: http://forums.parallax.com/showthread.php?85001-Propeller-supercomputing/page2
It seems to me the Propeller also has potential for "supra-computer" explorations. Like...
Cellular Automata: "A regular array of identical finite state automata whose next state is determined solely by their current state and the state of their neighbours." <www.cs.bham.ac.uk/~wbl/thesis.glossary.html>
Neural Network: "An interconnected assembly of simple processing elements, units or nodes, whose functionality is loosely based on the animal brain. The processing ability of the network is stored in the inter-unit connection strengths, or weights, obtained by a process of adaptation to, or learning from, a set of training patterns. Neural nets are used in bioinformatics to map data and make predictions." <www.inproteomics.com/nwglosno.html>
Part of my grad school work was on phase locking and chaos in coupled neurons. I did experiments using coupled oscillators built with programmable unijunction transistors, and using numerical simulation on a PDP7. One propeller could do 8 neurons, and with the pwm output and summed sigma-delta inputs, it could even do the analog weighting functions. Wow, use propeller to build an analog super-computer! Tracy Allen www.emesystems.com
Excerpt From WIKI:
Ever since its publication, Conway's Game of Life has attracted much interest, because of the surprising ways in which the patterns can evolve. Life provides an example of emergence and self-organization. It is interesting for computer scientists, physicists, biologists, economists, mathematicians, philosophers, generative scientists and others to observe the way that complex patterns can emerge from the implementation of very simple rules. The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that "design" and "organization" can spontaneously emerge in the absence of a designer. For example, philosopher and cognitive scientist Daniel Dennett has used the analogue of Conway's Life "universe" extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws governing our own universe.[4][5][6]
The popularity of Conway's Game of Life was helped by its coming into being just in time for a new generation of inexpensive minicomputers which were being released into the market. The game could be run for hours on these machines, which would otherwise have remained unused at night. In this respect, it foreshadowed the later popularity of computer-generated fractals. For many, Life was simply a programming challenge; a fun way to use otherwise wasted CPU cycles. For some, however, Life had more philosophical connotations. It developed a cult following through the 1970s and beyond; current developments have gone so far as to create theoretic emulations of computer systems within the confines of a Life board.[7][8]
Conway chose his rules carefully, after considerable experimentation, to meet three criteria:
1. There should be no initial pattern for which there is a simple proof that the population can grow without limit.
2. There should be initial patterns that apparently do grow without limit.
3. There should be simple initial patterns that grow and change for a considerable period of time before coming to an end in three possible ways: fading away completely (from overcrowding or from becoming too sparse), settling into a stable configuration that remains unchanged thereafter, or entering an oscillating phase in which they repeat an endless cycle of two or more periods.
Maintaining multi-processing purity
It is now possible to use a Brain offsetting technique to keep cogs available for specific multi-processing thinking while highly specific, sole-purposed code is tasked out to other dedicated processors. These can easily be Parallax host processors dedicated to singular tasks.
One example of this is host board #23, which is in charge of TX/RX and nothing else. Its sole objective is to serve as a data pipe for remote sources with a requirement of data exchange.
Another example is the board found in the Brain Stem. This nerve center is responsible for one thing - motion control. While motion control can include many sub-facets of motion and movement, it is nonetheless single-mindedly tasked, i.e. control of all motions and the governing of primary motor responses.
These examples are much like autonomic responders, leaving the cogs free for thinking. Like the human brain, these "host regions" will take up the slack by performing elements such as speech, sound recognition, and numerous other functions (taste, smell, vision, touch, etc.).
Input by hearing the keys
It works with or without the screen. Just type on the keyboard and hear the keys spoken. For example, type the number 1 key and you'll hear "one."
Speaking, voice, speech synthesis, vocal tract modeling, phoneme generation, and text to speech are all important elements of the Giant Brain. The development continues..
I thought it would be cool to use a keyboard and actually hear the keys typed. There's a bit of history to this type of program. I have written a talking keyboard program on almost every computer that I've owned. So the Propeller Brain should have one too!
The thread describes the code and has the download, both linked below.
the thread
http://forums.parallax.com/showthread.php?130041-Propeller-Talking-Keyboard
the code
TALKING KEYBOARD.zip
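The idea itself fits in a few lines. A hypothetical PC-side sketch (Python; speak() is a stand-in for the Propeller's speech synthesizer -- the real code is in the download above):

WORDS = {'1': 'one', '2': 'two', '3': 'three', '4': 'four', '5': 'five'}

def speak(text):
    print('[speaking]', text)            # stand-in for a real speech synth call

def on_key(ch):
    speak(WORDS.get(ch, ch))             # look up the word, fall back to the key

for ch in '135':                         # pretend the user typed 1, 3, 5
    on_key(ch)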
Vocalization sound series
In continuing with programs that vocalize the Brain, this next addition adds a set of lips.
These are special lips that you can only hear, but hearing is the first step with lips, isn't it?!
These programs are written for the Brain as tools to become available for future uses and
as experience in learning the range and capabilities of the code.
Flipping lips is an idea that originates from early grade school, when kids have fun making
sounds by the rapid alternation of finger to lips, thus varying the glottal exit sound component.
The Brain is heavy into sound, having a dedicated sound processor region. Like a human, the
Propeller chip in the Brain can reproduce many human-like sounds and create many sounds
that go beyond human capabilities.
flipping_lips.zip
http://forums.parallax.com/showthread.php?130061-Propeller-Flipping-Lips
Beginning of Automatonic Brain Song
Back deep inside that speech region of the Brain is the capability of song.
This is the first song program created for the brain based on the Automaton
algorithm.
In the future, we have the possibility for the Brain AI to create its own songs.
So what is presented here is a first step.
automaton_song.zip
http://forums.parallax.com/showthread.php?130060-Propeller-Automaton-Machine-Song
Regular, Slow & Slow-Fast Counting
Counting is very important for Brains. Counting up or counting down,
counting regular, fast, slow, or in combination, it's all in the code.
These are Brain speech programs that run on a Parallax Demo Board
with a headset or amplifier.
Psychological state of mind
Intro
Mood can set the personality of an individual, behavior response,
state of mind, and characteristics. A Brain emotion model affects
individuality and the perception that others have towards it.
Establishing Mood
Mood is set through communications such as speech and
various forms of expression.
Emotion/Mood Lifespan
An emotion is related to a mood and has a life span in the
Brain. The longevity factor of emotion affects the behavior
response of the Brain.
Transient Model
The mood model can thus be transient and affected by sensor
input. The model is primarily a collection of rules governing the
behavior response to input and stimuli.
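A hedged sketch of such a transient model (Python; the emotion names, lifespans, and linear decay are invented placeholders for the rule collection described above):

import time

LIFESPAN = {'happy': 60.0, 'curious': 30.0, 'sad': 120.0}   # seconds, assumed

emotions = {}                       # emotion -> (weight, time of last stimulus)

def stimulate(name, weight=1.0):
    # Sensor input re-excites an emotion and restarts its life span.
    w, _ = emotions.get(name, (0.0, 0.0))
    emotions[name] = (w + weight, time.time())

def current_mood(default='neutral'):
    # The strongest still-living emotion sets the mood; weights decay linearly.
    now, best, best_w = time.time(), default, 0.0
    for name, (w, t0) in emotions.items():
        w_now = w * max(0.0, 1.0 - (now - t0) / LIFESPAN.get(name, 60.0))
        if w_now > best_w:
            best, best_w = name, w_now
    return best

stimulate('curious')                # e.g. triggered by a new sensor reading
print(current_mood())               # -> 'curious' until the weight decays away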
Mood Characteristics
The Brain will first have basic mood characteristics embedded.
Some suggestions are pleased, surprised, frown upon, curious
and inquisitive, short curt and quick, lazy or whimsical, sad or
depressed, happy excited enthusiastic, intellectually challenging.
Recursion
Mood can be highly dependent on memory recursion in more
advanced models. Without recursive techniques, a Solid Static
model is workable.
Knowledge Compression
The Recursive Model consumes larger portions of memory.
Recursion can use methods of knowledge compression, which is another very important part of the Brain.
Level of AI
Much of this will depend on the generation and level of AI
permeated throughout the Brain. It's still too early to begin writing
code because the AI model needs to be in place first and then
various module subroutines can be implemented.
Face
Will a Brain face be mandatory and a prerequisite for the Mood Module?
Probably not. Communications with the Brain in conversation could
elicit bouts of evident personality without a smiling or frowning face.
Inherent traits and mannerisms march through in verbal speech,
inflection, the strength of text and the placement/use of words.
News
http://www.nyumbanilabs.com/2009/07/robot-with-moods/
A hyper-realistic Einstein robot at the University of California, San Diego has learned to smile and make facial expressions through a process of self-guided learning. The UC San Diego researchers used machine learning to “empower” their robot to learn to make realistic facial expressions. “As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said Tingfan Wu, the computer science Ph.D. student from the UC San Diego Jacobs School of Engineering who presented this advance on June 6 at the IEEE International Conference on Development and Learning. The faces of robots are increasingly realistic and the number of artificial muscles that controls them is rising. In light of this trend, UC San Diego researchers from the Machine Perception Laboratory are studying the face and head of their robotic Einstein in order to find ways to automate the process of teaching robots to make lifelike facial expressions. This Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, a highly trained person must manually set up these kinds of realistic robots so that the servos pull in the right combinations to make specific face expressions. In order to begin to automate this process, the UCSD researchers looked to both developmental psychology and machine learning.
Sovr said:
wow, this brain project is amazing! I have been working on a project for more than a year where the robot implements moods into its everyday tasks. But just looking at this tells me I have a long way to go. I will be posting PAD, my robot project, soon on the forums
Thanks Sovr. The Brain is just getting started and although wiring will continue for some time, at least I can run processors in various stages and parts of the entity and do Spin programming. Much of this is a learn-as-you-go project. I'm especially interested in your robot mood algorithms and knowing how you implemented PAD. Is it based on any kind of memory recursion? Or mainly a solid static model? I look forward to seeing your robot project posted on the Forum.
Dimensionally Dream with LOGO
Enter a vast Brain unfolding dream world made possible with "dream compressing" LOGO language
Is this the first Brain Dream? Using a LOGO language Brain
Channel and techniques of Dream Code Compression, the
Brain can accomplish Propeller-based dreaming.
Dreaming
Dreaming is a big and important thing with the Brain. Evolving a dreamscape in full dual dimensional real time rendering is challenging both in terms of memory and processing power. Compressing the vast scale of changing motion images in an unfolding dream and displaying it is the objective. But how to do this?
Vector Graphics Dream Rendering
One way to accomplish all of this is to render iBrain dreaming in real time with a specially designed language that can handle vector graphics drawing. It must also compress the images, i.e. generate compact code that can recreate the images. What strange new language can do this, and would we be lucky enough to find a version that works on the Propeller platform?
LOGO
The answer to today's Brain dreaming stems back to the 1980s, when Turtle LOGO became popular. A spinoff version of this language is now available for the Propeller chip - the Brain is now on LOGO.
LOGO According to Wiki
LOGO is a computer programming language used for functional programming.[1] It is an adaptation and dialect of the Lisp language; some have called it Lisp without the parentheses. Today, it is known mainly for its turtle graphics, but it also has significant facilities for handling lists, files, I/O, and recursion.
Logo was created in 1967 for educational use, more so for constructivist teaching, by Daniel G. Bobrow, Wally Feurzeig, Seymour Papert and Cynthia Solomon. The name is derived from the Greek logos meaning word, emphasising the contrast between itself and other existing programming languages that processed numbers. It can be used to teach most computer science concepts, as UC Berkeley Lecturer Brian Harvey does in his Computer Science Logo Style trilogy.[1]
Why LOGO
Why the LOGO language? LOGO offers high-speed rendering of planar, multi-dimensional constructs, and its compact drawing code acts as image and graphics compression for evolving Brain dreamworlds.
Downloads
The original Propeller implementation of LOGO is a HYDRA version. The modified version linked here was converted over to the Parallax Demo Board by OBC. The thread is here http://forums.parallax.com/showthread.php?93780-Hydra-Logo-1.4-ported and the download is here JMM_logo_014_propeller.zip
- HYDRA version LOGO by Joshua Meeds
- Parts based on Keyboard Demo by Andre' LaMothe
- Ported to the Propeller by Jeff Ledger
- Remarked NES controller lines
- Remarked Sound Routines (right pin might fix)
- Changed speed settings for Propeller
- Changed video pin settings
- Using "graphics" , "tv", and "comboKeyboard"
A LOGO Primer
http://el.media.mit.edu/logo-foundation/logo/turtle.html
Increasing LOGO Dimensions
LOGO is considered a two dimensional construct that can explore angles, geometry and measurement. However, the Brain makes use of other dimensional parameters such as time, color, processor positioning, channeling, and luminosity.
Programming the Brain in LOGO
How to program Brain LOGO? Here's a list of commands to follow.
Dream Unfolding
What is Dream Unfolding? It's the construction of a dream by expanding code. Dreams are also folded using code in LOGO. For example, here's code for a simple spiral dream that unfolds in only a few seconds time. The illustration shows results made by the specific rotation of a geometrical construct.
Here are 4 programs to unfold geometrical objects, plus a finer-grained circle.
- Square: REPEAT 4 [FD 100 RT 90]
- Circle: REPEAT 90 [FD 10 RT 4]
- Octagon: REPEAT 8 [FD 50 RT 45]
- Star: REPEAT 5 [FD 100 RT 144]
- Circle (finer): REPEAT 180 [FD 4 RT 2] gives a (pretty good approximation of a) circle
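The same unfolding can be tried on a PC with Python's standard turtle module, itself a direct LOGO descendant. A sketch in the spirit of the spiral dream above (the exact geometry of the original illustration is not preserved; these numbers are invented):

import turtle                        # standard-library turtle graphics

t = turtle.Turtle()
t.speed(0)                           # draw as fast as possible
for i in range(120):
    t.forward(2 + i * 2)             # FD grows each step, so the path spirals out
    t.right(89)                      # RT just under 90 degrees rotates the square
turtle.done()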
A Fractal Tree
Sample in Spin by Dreamwriter
Explore Brain LOGO Programming with Online Applets
http://www.mathsnet.net/logo/turtlelogo/index.html
Discussion
Ralfw is a LOGO hacker, one of the originals from the 1980s. He writes, "I did LOGO on the Apple II, the MIT version which was since distributed by Terrapin and Krell. (I was taking a seminar taught by Hal Abelson and Andy diSessa.) My dream is for LOTS of LOGO varieties to experiment with; i.e. the Atari (LCSI) version did things the Apple II/MIT version did not, for example. Robotics-focused LOGOs have features that neither '80s version did. Right now I have the Chameleon AVR (8-bit AVR), but I'm hoping to wire-wrap up a second Propeller card as an expansion card for my Apple II, to get: VGA output, stereo sound, networking of some sort."
Additional Sources
http://forums.parallax.com/showthread.php?91381-Hydra-Logo-the-future-of-realtime-Propeller-programming!
http://www.meeds.net/Hydra.html
Downloads
http://forums.parallax.com/showthread.php?92990-Hydra-Logo-1.4-ready-for-download!
HYDRA LOGO Ported to Demo or Proto boards
http://forums.parallax.com/showthread.php?93780-Hydra-Logo-1.4-ported
Conclusion
In conclusion, the Brain utilizing a form of the LOGO language to trace out the pathways of unfolding Brain Dimensional Dreaming is now a possibility. LOGO is the means of creating unfolding dream worlds.
Developing Brain X-Rays now a Reality
It's not often that you have the opportunity to x-ray your project with scientific medical equipment, whether for study, component placement analysis, continuing extension and development, upgrading and enhancement, revision, or various other engineering requirements.
With new emerging technology, you don't need a medical CT-Scan machine. For the first time with technical three dimensional data supplied by NikosG, it becomes possible to x-ray the Brain using the Google SketchUp CAD program.
This technical Brain X-Ray was achieved through a technique of data development made possible by Master technical artisan NikosG of Greece. The TV view is remarkable, showing Brain detail through the TV in addition to maintaining the screen and markings. This x-ray can be extended from the outside-in as shown here or from the inside-out through the back side of the Brain.
Compared to the x-ray of a human brain, the Parallax Propeller
iBrain is significantly different with boards of silicon
processors. At this development time, the iBrain is electrical
but that could change with the mixing of bio modules that can
interface to electrical modules. In particular, the module for
tasting substances could be governed by bio sampling.
Much like a typical NMR (nuclear magnetic resonance) image, the Brain view can be developed inside the computer. Typical processing is remarkable, with the ability to move a camera around to any angle and photograph the result. Brain x-rays will become a part of Brain analysis in the future.
SketchUp is a free program by Google.
How to tune a TV driver
The ROM font text driver
The graphics driver with a mix of colors
Left - graphics red text screen Right - green screen version
The How & Why of Propeller TV
You would think it would be a simple thing. Grab a TV driver, run your demo program, and voila! Instant selection of big characters easily visible on a tiny TV screen... NOT! One must go into the gritty part of the code and know what to adjust and where to find it. If you don't know, it won't go!
How to Get Those BIG Characters?
This post is designed to cover everything you need to know when making BIG characters on the 3.5-inch TFT LCD Color Monitor from Parallax. This is just a small TV with 320 x 240 pixel resolution. It's small to fit inside the belly of the Brain. This is an ideal TV because it weighs almost nothing and is extremely small and easy to mount into a small place.
The Challenge of Small TVs
With small TVs comes small text. It's so small, in fact, that when the letters blur together, one may not be able to read the results. So in our continuing Brain adventure, we learn that most TV drivers are designed for big screen TVs. It makes you wonder if most developers of TV drivers live in big-screen TV movie theaters! That tiny micron nano font looks great on the big screen but is totally useless on the tiny TV.
Grit Driver Diving
Ok, time to roll up your shirt sleeves and dive into the driver, make the changes, and get out of there asap! There are two approaches to handling the TV drivers. Yes, drivers as in plural. It boils down to two main drivers for the Propeller TV with a lot of variations. Let's take a look at the two main drivers.
Two Main Drivers
The ROM text driver is tack sharp
The graphics text can mix color
A graphics view in standard white char on dark backdrop
There's nothing like a retro green screen
Interlacing
Another factor comes into play. The graphics driver uses a method called interlacing. Tiny TVs may or may not handle interlacing well, i.e. the font letter may show a smudge, smear, blur, or a kind of stamped echo. This can happen at the bottom of the font, in the middle or at the top. It can also vary across the screen. For example, in some TVs the first and second lines are tack sharp but farther down there's the introduction of artifacts.
Never Fear, Help is Near
Exploring the need to modify these drivers and finding a way to do it is the next step. But don't worry, it's all accomplished. Several brilliant-minded techies came to the rescue. OBC refined the objective. Jrjr had VGA options for alternative approaches. Rayman made recommendations for a graphics approach. Ariba (Andy) shot out expert advice for the Text TV Driver and how to modify it. Perry masterminded the changes for the Graphics TV Driver for large text and color, and remarkably found a way to improve the interlacing with a single code change, thereby minimizing artifacts. Roger offered his code that handles large numbers and regular text. Publison was kind enough to photograph the results from two other monitors of different sizes and manufacture for appearance comparisons with the Graphics Driver installed. Potatohead noted the interlacing challenges faced by small TVs. Phil Pilgrim recalled that the Propeller BackPack is capable of font scaling and to check the code.
Discovery thread
http://forums.parallax.com/showthread.php?130008-Really-BIG-Letters-amp-Numbers
BIG Results are in
What we now have are two main modified drivers, one based on the ROM font and another based on graphics. These are tuned to create large characters on several lines. The final code is posted below. The first is for tack-sharp text and the second is for text, graphics, color.
Program bouncing in multi cog systems
It's time to begin addressing the issue of establishing pure thought in the Brain. The overall idea will include several fields.
- Alive The thought is a living entity like Conway's Life
- Progressive Thinking the ability to look forward like the game of Chess
- Cycles live (contribute to the task), grow (take on data), and die (delete or recycle itself)
Discovery thread
http://forums.parallax.com/showthread.php?130125-Program-Bouncing-in-Multi-Cog-Sysems&p=982512#post982512
Small example programs will come out by the thousands, like intelligent but small neural packages (programs) that contribute to the overall thought.
What can we do with a thought?
- Hold it whereupon it becomes a memory
- Solve it where a solution is demanded
- Utilize it in a contributory fashion
What is developed?
The Propeller Brain collective is doing a density of one thousand of these small contributory thought programs for every eight cogs - this amounts to 21 boards with 168 cogs and a total of 21,000 thought programs, which fits into our established model range.
The Key
Developing some initial small examples of simple thought processes
Speed up and slow down brain time
The Propeller Brain Perspective
The Propeller Brain project is a large endeavor that can be scaled up or down to fit many levels of applications and available resources. While the human brain is an extremely complex organism, its simulation is possible on several levels. Knowing the resultant behaviors of the human brain can help our machine brain formulation. We can learn from millions of years of evolution.
Time in the Brain
One area of Brain learning where the human model can contribute a valuable and useful algorithmic process is that of time. In the human physiology, there is a process governed by the brain, with survival instinct, that can alter time. It allows the survival of the species - the brain can think faster, come up with solutions quickly when life is threatened. It can also think slower in ways that are chemically proven to extend life.
Time in Review
There are several modes of time that exist in the real world. Let's examine four of these types of time.
Physical Time
Time can be physical, as shown by Albert Einstein. His time dilation equation shows how moving clocks run slow, so that from the moving frame outside events seem to speed up. We can measure this effect, as the formula predicts, in varying gravity fields and at speeds above roughly 1/10th the speed of light.
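For reference, the standard time dilation factor (textbook special relativity, not taken from the original post) is

\Delta t' = \frac{\Delta t}{\sqrt{1 - v^{2}/c^{2}}}

At v = c/10 this works out to 1/\sqrt{0.99} \approx 1.005, about a half-percent slowdown, which is roughly where the effect starts to become readily measurable.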
Event Time
In event time, time is merely the passage of events. It flows only in the forward direction. Although one can review unfolding time, it is not possible to go back, thus winding the clock backwards. Event time can be seen through a telescope. You are looking at the unfolding history of a deep space galaxy that is located billions of light years away. Today, the galaxy is evolutionarily different from the image seen.
Physiological Time
In human physiology, time does the opposite: it compounds progressively from the moment we are born. As we age, time seems to increase in speed of passage. A simple example: when we are young, time passes very slowly, and hours can seem like days. When we are more aged, time speeds up, and months can seem like days.
Take for example a child who is only 12 months of age. To that child, 6 months is half his entire lifetime. Six months seems like an eternity. However, when the child is a grown person at age 50, a half year is relatively a fast moment in time, merely 1/100th of the total life lived.
So this value of time is relative to our physiology. Time begins slow and runs faster and faster in the physiological world. Theoretically, physiologically speaking, if you live long enough, you will travel into the perceived future.
Machine Time
Electronic Machine Brain time is dependent on clock cycles. It is limited by an Einsteinian world, as electromagnetic radiation as we currently know it (excluding theory) cannot be made to travel faster than the speed of light.
Can we physically time-travel our computer chips? Yes: the electromagnetic component can travel faster than 1/10th the speed of light. Can we physiologically time-travel with a machine chip? Yes: we can alter the clock from a baseline and progress it.
Can we make use of event time? Yes. In a machine, we can record and store the elements that happen with the passage of events, and those elements can unfold in the forward time direction.
Some Useful Applications of Time
In a human, the brain processes images, one at a time, at a rate of roughly 30fps. In high levels of stress or life-threatening situations, time appears to slow down, and images are said to process at up to 120fps in the brain. Can we do this with a computer chip brain? Yes. The clock can run at a normal baseline and accelerate under given conditions, thus processing more information in unit time.
The Propeller can process at internal clock slow time (approx. 20 kHz), at internal clock fast time (approx. 12 MHz), or run with an external crystal at high speed time, typically 80 MHz. Clock modes: (a) external crystal 4-8 MHz (16x PLL), (b) internal oscillator ~12 MHz or ~20 kHz, (c) direct drive.
Conclusion
We have only touched upon some basic principles of time and briefly discussed how different types of time can be applied to the Propeller Brain.
Design, code, schematics, tests
January 8, 2011 Post: First successful test results with the first gamut of testing programs, matching phase and using PST as output, see post 141 page 8. Showing the PST on COM52 with Propeller #1 as a talker and #2 as a listener.
This is one-wire serial communications at 9600 baud with eight stop bits and positive polarity over a BUS interface. Each Propeller can talk or receive. The first Propeller is a designated Master; in the test block, the remaining two PPPBs are slaves. Both slaves are currently listeners while the Master is the talker. Serial transmission is accurate and stable over a million-bit test. The code pair is a talk/listen configuration.
Introduction
A while back (two months ago) the test code for connecting Propeller to Propeller was established. The interfacing of the first two propeller boards is called the Brain Base.
Configuration
Note the Brain Stem connects a BASIC Stamp 2 to a Propeller and uses different test code. The Brain Stem resides under the Brain Base. Above the Brain Base reside the Brain Spans.
Code
This code concerns the Brain Base only. Posted here is working test code for Propeller to Propeller communications, testing on a BUS. Refer to the connection schematic for wiring.
As reviewed on page 6 post 112, these are
options for serial interfacing
http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain/page6
More napkin sketches: Brain Base Schematic - 1st tests
were compounded on this framework
The wiring schematic/sketch for the Brain Base includes wiring for a data LED, a protected one-wire BUS, and a processor decoupling capacitor. On top side boards, Prop Plug attaches with the logo facing up. On the Brain Spans, this concept is extended to accommodate more boards.
| SW1 | SW2 | INTERFACE |
|-----|-----|-----------|
|  0  |  0  | FD        |
|  0  |  1  | HD, PL2   |
|  1  |  0  | HD, PL1   |
|  1  |  1  | PL1, PL2  |
KEY
1   switch ON
0   switch OFF
FD  full duplex
HD  half duplex
PL1 1st party line
PL2 2nd party line
Note the early proposal Hybrid nature of the iBrain at post 252 p13
http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain/page13
Parameters
Talk code is provided (both Tx and Rx), tested at 9,600 baud. Set communications polarity to one and stop bits to eight. Match the baud rate in both Tx and Rx. The test code sends the numbers 0 to 9, one each second, to the Propeller and loops.
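A PC-side check of the same stream can be sketched with pyserial (Python; the port name is an assumption -- the tests above used PST on COM52 -- and the framing must match the talker):

import serial                                  # pyserial

ser = serial.Serial('COM52', 9600, timeout=2)  # assumed port; match your setup

expected = 0
while True:
    b = ser.read(1)                            # one character per second expected
    if not b:
        print('timeout - is the talker running?')
        break
    ch = b.decode('ascii', errors='replace')
    print('got', ch, 'ok' if ch == str(expected) else 'UNEXPECTED')
    expected = (expected + 1) % 10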
Indicator Programs
Indicator test programs are also attached, to keep a single LED on, and pulse a single LED. Programs set the pin number and the pulse rate. Note use of a Repeat loop to keep the LED on. Refer to the comments in all code for more information.
TEST RESULTS
============
WIRING - GOOD
CODE - GOOD
FUNCTION LIBRARY - GOOD
CLOCK - GOOD
TIMING DELAY - GOOD
BAUD - GOOD
DATA - GOOD
BUS - GOOD
PIN - GOOD
POLARITY - MATCHED/GOOD
STOP BITS - MATCHED/GOOD
Humanoido's Communication Program - Receiver
Prop to Prop
Receive numbers 0 to 9 each every second from the loop
Propeller to Propeller Receiver v1.1
prop_prop_rx.spin
Updated Wednesday March 9, 2011
Giant Brain Project
Receives data on a one wire bus
Propeller to Propeller BUS Communications
Testing the Brain Base
Humanoido's Communication Program - Transmitter
Prop to Prop
Send numbers 0 to 9 each every second to the Propeller and loop
Propeller to Propeller Transmitter v1.1
prop_prop_tx.spin
Updated Wednesday March 9, 2011
Giant Brain Project
Transmits data on a one wire bus
Propeller to Propeller BUS Communications
Testing the Brain Base
From Stamps to Propellers
A collective tracing of multi-processor brain
design and development, leading up to the iBrain
Projects with a Purpose
To fully understand the how and why of the current Propeller collective Brain design, it's beneficial to trace the family roots over time. The original design sprang forth from a humanoid robot in use during 2002 and consisting of several networked Parallax processors. Where did the design for the Propeller-based Brain Base come from? What are its origins? Follow the progressive evolution of these designs, culminating with the Brain's arsenal of Propeller chips.
The first multi-processor robot used
a design that predates this Toddler
Humanoid robot version which also
uses a network of Parallax processors.
Review this wiring post 109 on page 6
4D Morphing Computer (with CoProcessor)
The first developed Hybrid
AM Algorithm Machine's Hybrid Bus
The BSS Interface Page 6 post 108
Largest mix of processor models known
Download software (22 programs)
Self Adjusting Master Code (New!)
Read the article in Penguin Tech 4
View the Schematic (page 6)
Watch the movie
The TriCore schematic
Runs the smallest known AI program
http://forums.parallax.com/showthrea...puter&p=822511
The MSS
Smallest two core system
http://forums.parallax.com/showthread.php?p=821451
The 2S, post 112
Two of the most powerful processors
http://forums.parallax.com/showthrea...computer/page6
Powerful SEED
Self Evolving Enumerating Deterministic
http://forums.parallax.com/showthrea...puter&p=817126
Now enter the design phase of iBrain.
The development of iBrain is ongoing. No one knows where this will lead. Will the Brain become self-aware? Will these early designs evolve into a life form? How will the iBrain help understanding in today's world? Amidst the chaos of interminable development efforts, inside this Brain Universe there are indications of the dawning of logic and evolution moving towards a greater purpose.
http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain
Brain genealogy also includes a rather large number of Propeller multi-processing machines.
http://forums.parallax.com/showthread.php?123828-40-Props-in-a-Skyscraper/page3
Beginning with the PEK 1 experiments, the list includes 22 various multi-processor Brain predecessor machines.
1) PEK 1
2) MC Computer
3) LED Machine
4) 2-Proper 2 props, 1 PEK, 1 on same breadboard, 2-Prop-Experiment
5) Spark 2 2 props, 1 Proto Board & 1 in parallel
6) PiggyTwins 2 props, 1 piggybacked on another
7) Dueling Breadboards 2 props, one on ea., f/interface tests
8) Spark 4 Tiny Tim, 4 props, two proto boards w/2 props on ea
9) Spark 5 5 props stacked Proto Boards
10) Spark 6 6 props 3 proto boards 2 props on each
11) Spark 8 Tertiary ADJUNCT, 8 props 4 proto boards 2 props on each
12) Propalot 10 props on solderless breadboard
13) Spark 10 10 props, 5 proto boards w/10 props total
14) TTB Test Bed of Ten 10 props single board
15) Twelvenator Board of Twelve, 12 props green board
16) UltraSpark 15 15 props, interrupted stack Proto Boards
17) Tertiary 20 20 props, 15 proto boards stacked 5 props
18) UltraSpark 20 20 props stacked
19) Boe-Bot Brain Project 20 props as Brain on Boe-Bot
20) MLEPS Super Language Machine 25 props
21) UltraSpark 40 Supermicrocontroller 40 props 320 cores, 6,400/8,320 MIPS
22) Smartest Boe-Bot Brain Temporal Experiment 40 props, US40, 1 BOE, 321C
The above list omits several test machines. These will be added after the photos are reviewed.
The newest machines
1) Brain Span 3 rows of six Propeller boards each
2) Brain Stem 1 Propeller board and 1 Stamp board
3) Brain Base 2 Propeller boards
4) Current level of the Brain 22 boards
Revealing Brain Stem test code
This chronicles the development of test code, phase I, for the Brain Stem.
Note, the Brain Stem is made up of two boards, a BOE and a PPPB.
Many months ago, the first Brain Stem was awaiting more wiring. Since
that time, the Stem was completed and is now a fully functional and
integrated module inside the Giant Brain.
Last year in November, the first Brain Stem was
completed and tested. Seen here it sets on the
green pad for further analysis. At that time, the
Brain did not exist and was only a vision in the
mind's eye!
____________________________________________
Robotic Brain Stem Discovery Thread
http://forums.parallax.com/showthread.php?127310-Robotic-Brain-Stem&p=955611#post955611
Introduction
The Brain Stem is one of the most important parts of the Brain. It resides below the Brain Base. The Stem passes signals like a nerve center for reflex, muscle and primarily mobility control. It harnesses mobility software to give the brain motion control.
Compatibility
The Brain Stem has another purpose. It serves as a commonality compatibility interface with other processor-based robots, for example, Parallax robots using BASIC Stamps. It is also compatible with Propeller-based robots, including the Boe-Bot, SumoBot, S2, Stingray, QuadRover, and is an ideal candidate for the Robot Base Full Kit.
SuperSTAMP
Communications was developed and tested on the first Propeller above the first BS2 which is known as the SuperSTAMP. The SuperSTAMP is another project developed to give maximum power to a BASIC Stamp. The SuperSTAMP mates to the PPPB and consists of both Tx and Rx programs along a common protected BUS.
Wiring Diagram for a Brain Stem. Another napkin sketch, the Stem
consists of a Propeller board (PPPB) and a BASIC Stamp 2 Board (BOE).
Test wiring is straightforward.
Wiring
Important wiring note: the schematic shows a Propeller to Stamp connection but does not show the lines running from Vss to Vss. Note that Vdd to Vdd is not implemented, as the Propeller is a 3.3-volt device and the Stamp is a 5-volt device. The actual feed is from the Stamp regulator to gain the 5 volts and from the external power supply to gain the 3.3 volts.
Programming Code
Brain Stem code consists of one folder with three files.
- PROP-BS2 LEVEL5.bs2
- PROP-BS2-LEVEL5.spin
- BS2_Functions.spin
Operating the Programs
Load the first program into the BS2. This will act as the receiver. Load the second "Propeller" program and run it. This is the transmitter.
Testing
In the test, the Propeller is continually talking to the Stamp and the Stamp is listening. The Stamp code makes use of the Debug screen for output.
Error Detection
The code also has error detection. If the system hangs, a timeout occurs and a message is given. Following this, the loop will continue looking for the next character.
Prop to STAMP Voltage Levels
The BUS data transmission from the Prop to the Stamp is a compatible voltage level.
Do we add some appendages and a mobility platform to the brain?
If this is added, then does it become just another robot?
Where is the line in the sand?
The brain could include simple appendages
and a mobility platform for locomotion. Call
the Brain and it goes to you.
Brain Appendages
One idea is to add some appendages to the Brain,
not necessarily hands and arms, but rather
protective stubs or stumps that can bump around,
protect itself from threatening animals, and bump open
swinging doors, push things, make some gestures
and signs with signals, and possibly do some
light limited physical work activity.
Types and Needs of Appendages
- stubs
- stumps
- protection
- light work
- signaling, signing & gesturing
- bumping
- pressing & pushing
- initiating an alarm
- blocking
- shielding
- soundless vision for work in the vacuum of space
- writing, printing
Mobility
Mobility can also include motion to move short
distances. If mounted on a robotics platform, the
Brain could find you, face you when talking to it,
move to a power feeding station, turn to a specific
Flip Mode when you command it, and adjust its
position for better hearing, speaking, vision, balance,
and other environmental positioning. A brain in fluid
could have flippers and fins to adjust its under-
fluid positioning or flotation orientation.
Types & Needs of Mobility
- adjust hearing
- compensate incline
- transport
- feed
- engage flip modes
- increase vision acuity
- direct speech & sound
- environmental positioning
- positioning for engagement
Top Down Approach with Bottom Under
Armaments can mount on gimbals near the top
while motion controllers attach underneath.
Wheels, treads, rollers, snaking skin or other
forms of transport can initiate motion.
Yes, it hasn't been done yet. The first thing needed is of course a simulator at the Spin/PASM level. The simulation can either strive to be precise (cycle accurate) and faithful to the hardware, which will be slow and may take too long (i.e., years) for any significant form of evolution to occur, or it can settle for inaccurate but fast simulation, for example translating PASM code to x86 and letting it run (you'll have to catch exceptions). The generated code might then no longer be suited to run on the real hardware because of the timing imprecision.
Another point for AI is that it may require some heavy lifting, especially floating-point computation, which the Propeller lacks in hardware form. Have you considered adding a uM-FPU to your design? Obviously, I assume you don't want to move to another microcontroller that has floating-point support.
Moving closer to real Propeller dreaming
In the real-life scenario, the brain is active during a normal day's thinking. During the day, it stores some key words regarding its experiences. These words are nouns, verbs and adjectives, and they are quantified: each word either carries a specific weight or is chosen by random selection. The weight could equal the number of times the word recurs, or be based on time.
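That weighting scheme is easy to prototype in Spin. The sketch below is my own minimal illustration, not code from the Brain: a small table (size and method names assumed) gives each observed key word a slot and an occurrence count, and a weighted random pick favors the heavier words.

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000
  MAX_WORDS = 16                         ' assumed table size for one day's key words

VAR
  long  wordptr[MAX_WORDS]               ' pointer to each stored word
  long  weight[MAX_WORDS]                ' occurrence count = the word's weight
  long  count, seed

PUB Observe(s) | i
  ' record one occurrence of word s: bump its weight, or add a new entry
  i := 0
  repeat while i < count
    if strcomp(s, wordptr[i])
      weight[i]++
      return
    i++
  if count < MAX_WORDS
    wordptr[count] := s
    weight[count] := 1
    count++

PUB PickWeighted | i, total, r
  ' weighted random pick: words seen more often come back more often
  if count == 0
    return 0
  total := 0
  repeat i from 0 to count - 1
    total += weight[i]
  r := ||(?seed) // total
  repeat i from 0 to count - 1
    r -= weight[i]
    if r < 0
      return wordptr[i]                  ' returns the address of the chosen word

PUB Demo | p
  seed := cnt                            ' randomize from the free-running counter
  Observe(string("garden"))
  Observe(string("garden"))              ' "garden" now weighs twice as much as "robot"
  Observe(string("robot"))
  p := PickWeighted                      ' usually comes back pointing at "garden"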
At the end of the day, the brain requires sleep. A dream world takes over. Using the dream-matching capabilities of the LOGO language (remembering that the Brain functions in a hybrid mode with multiple programming languages), the key words are used to unfold the image(s). The fuzzy logic of combining various random nouns, verbs and adjectives creates a surrealist dream world. The images of this dream world unfold on the TV screen.
LOGO is a very capable language for dream folding and unfolding. There is a web site showing how the language was used to draw beautiful, colorful birds, so graphically there is plenty of capability to match the dreaming application. If PropLOGO has the needed features, it can be used. If it does not, each missing feature can be created in Spin.
The first code example should be a simple one. A choice of three words will fuel the Dream Folding algorithm. Each word will have code stored to recreate the word's meaning. Nouns are the simplest, representing people, places or things. Verbs are given the associative property. Adjectives fall into different classes: color and size adjectives are recognizable, while others are not (such as hot, cold, and actions like bubbling, rapid, bouncing). The adjective is like a skin applied to the dream world.
The first dreams are static frames of unfolding dreams. More powerful dreaming could achieve 30 fps for motion-picture quality, and distributed imaging can create more advanced dreaming modes.
The equation hierarchy in the first lite model is noun, then adjective. Two sets of each class are stored and the images are folded; perhaps three elements of each can demo the effect, as sketched below.
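Sticking to that lite noun-plus-adjective model, a toy demo in Spin might look like this, using the standard TV_Text driver already present in the Propeller library. The base pin and the three placeholder words per class are assumptions; a real version would fold images rather than print words.

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000
  TV_BASEPIN = 12                        ' assumed; depends on the board's TV wiring

OBJ
  text : "TV_Text"

VAR
  long seed

PUB DreamDemo | n, a
  text.start(TV_BASEPIN)
  seed := cnt                            ' randomize from the system counter
  repeat                                 ' one static dream frame at a time
    a := ||(?seed) // 3 + 1              ' lookup indexes run 1..3
    n := ||(?seed) // 3 + 1
    text.out($00)                        ' TV_Text control code: clear the screen
    text.str(lookup(a: string("red "), string("giant "), string("bubbling ")))
    text.str(lookup(n: string("garden"), string("ocean"), string("robot")))
    waitcnt(clkfreq * 5 + cnt)           ' hold each dream frame for five seconds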
The Process of Dream Folding
More advanced dreaming can take on the parameters of psychology, encompass larger dream worlds, hold more nouns, adjectives and verbs, and function in real time as the dream unfolds. It can also use more sophisticated dream compression for folding and unfolding.
cde: actually this is a misnomer. The Prop is fully capable of floating point, in software. I know, when Mike Green provided the tutoring on this topic, I was totally glued to it.
The Brain at this point is completely INT. I spent years programming with INT and so far envision sticking with it. I know there are other chips out there, but the focus is mostly Propeller-only; more work gets accomplished that way with the resources at hand. But of course we would look at another chip, if needed, to implement algorithms the Prop cannot handle.
The only reason for moving to another Parallax chip is to add it to the collective in a kind of Brain absorption. For example, the Brain has already absorbed Propellers and Stamps, though it is primarily touted as a Propeller machine. Other processors also bring greater compatibility when connecting sensors in the real world.
Whether FP is actually needed remains to be seen. Most people rarely calculate things to several decimal places, or to any decimals at all. I just don't see the need for such high-precision accuracy when we really need to go fuzzy logic on most things and use guesstimates.
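And when a guesstimate does want a fraction, integer-only 16.16 fixed point keeps the Brain in INT territory without any FP hardware. A minimal sketch of my own in Spin (the ** operator returns the upper 32 bits of the 64-bit product; this simple form assumes positive operands):

PUB FixMul(a, b)
  ' multiply two 16.16 fixed-point values: combine the upper and lower
  ' halves of the 64-bit product, shifted back into 16.16 form
  return ((a ** b) << 16) + ((a * b) >> 16)

PUB Demo | r
  ' 2.5 * 1.5 = 3.75: $0002_8000 times $0001_8000 gives $0003_C000
  r := FixMul($0002_8000, $0001_8000)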
Perhaps the necessity of a brain designed for very precise calculations is merely an illusion. I would be very happy if this is a brain designed to take us one step closer to life.
As an unknown person has stated, in reference to the schools of astronomy and the pools of artificial intelligence: the place where we first discover life may not be the giant collective hands of radio telescopes through project SETI, but rather a life form of our own creation.
I imagine this would depend on the kind of data processing the AI would need to do. A quick guess could be done using fixed point, and a more precise but slower calculation could involve the floating-point module you mentioned. One aspect of AI you might also want to look into is competition, that is, having a pool of competing programs in a Propeller with a "judge" rewarding the most correct/efficient program based on an evaluator relevant to the task -- a bit like Tierra/Avida.
I agree.