Brain Merging with the Massive Transposition Machine
Several machine technologies are now merged with the Big Brain. One is the Massive Transposition Machine (MTM), a machine I designed and invented about two years ago, mentioned at the signature link. The MTM was originally designed for a thousand Propellers but was scaled back in the actual construction, falling shy of 1,000 props, and ultimately led to the Big Brain. It was spun off from the devices gleaned from the US40. Rather than run two incredibly large Propeller projects, many details of the MTM were, and are being, incorporated into the Extremely Large Brain project. This is not the only previously invented technology going into the Big Brain: some time ago the technique of Cloning was likewise developed (see my off-site paper). Brain Cloning has resulted in more effective Brain packages, propagation, and time-saving methods; it increases effective software speed, reduces code, and has led to better techniques of folding and unfolding in dreaming and neural packaging. More details will follow.
BOSS, the Brain Operating Software System (the OS code development name), now enters its second year of development. In Phase II, possible Brain additions to be integrated include testing, neural packaging, propagation, cloning, injection, partitioning, reporting, parameterization, an effect called crystallization, plus some output handling. On the other hand, memory boundaries may cap functions. The new BOSS edition increases the number of neurons across the array, meaning larger numbers handled in single chips and propagated across the neural net. More development is anticipated after two Macs are made ready for Propeller Brain support. Currently BOSS runs only on Propellers and has its greatest ability as a multiple-Propeller OS. Still in its infancy, BOSS development will continue ad hoc. The policy for BOSS is non-release until a Beta stage is available.
A special weekend meeting with the medical brain doctor, who flew in for the Exposition, was remarkable on several counts. We talked at length about machine brain and human brain technology. He sees many applications for machine brain devices working alongside the human brain. He thinks the machine Big Brain with its many operating Propeller cores is fantastic technology and wanted to see more. I showed him the wiring, EXO, injectors, I/O, and neural matter partitions.
The worldwide Exposition was about stem cell research and its many applications. We talked about the processes of regeneration created by the application of stem cells in humans, and how machine brains could directly benefit if the inherent mechanisms of biological stem cell activity were converted to a software and hardware basis: ultimately regenerative, restorative, reconstructive, and uniquely rebirthing devices modeled on DNA and genomic mechanisms through floating machine algorithms.
Many of the stem cell mechanisms are not understood at this time. The application of stem cells is known, in many cases, to have restored vision to the blind, cured diabetics, healed the heart and circulatory system, cured brain disease, reconstructed skin and organ tissue, and conquered a myriad of other human anomalies and ailments.
How DNA is restored by the mechanisms of stem cells is also unknown at this time. There was some discussion about discovering these mechanisms through a personified machine Brain technology, which is a very interesting conceptual idea.
The last time we met, aside from never drinking pop or eating fast food, he carried a new small black Windows PC and a new black cell phone with many features. This time, I was surprised how much I had influenced him. I'm not sure it was all good: he was eating fast food and drinking Coca-Cola! But it was commendable that he followed my Apple advice, gave up his cell phone for a new white Apple iPhone 4, and was carrying his new white Apple MacBook Air. I was impressed!
http://altered-states.net/barry/newsletter329/index.htm
Stem cells have the remarkable potential to develop into many different cell types in the body. Serving as a sort of repair system for the body, they can theoretically divide without limit to replenish other cells as long as the person or animal is still alive. When a stem cell divides, each new cell has the potential to either remain a stem cell or become another type of cell with a more specialized function, such as a muscle cell, a red blood cell, or a brain cell.
"If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts."
-- Bill Gates
I visualize a time when we will be to robots what dogs are to humans...
CLAUDE SHANNON
Our ultimate objective is to make programs that learn from their experience as effectively as humans do. We shall…say that a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows.
JOHN MCCARTHY, "Programs with Common Sense", 1958
Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.
ALAN TURING, "Computing Machinery and Intelligence"
An important concept both in Artificial Life and in Artificial Intelligence is that of a genetic algorithm (GA). GAs employ methods analogous to the processes of natural evolution in order to produce successive generations of software entities that are increasingly fit for their intended purpose.
JACK COPELAND, The Essential Turing
The human brain has about 100 billion neurons. With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation ... (but) only 200 calculations per second.... With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second. This is a conservatively high estimate.... In 1997, $2,000 of neural computer chips using only modest parallel processing could perform around 2 billion calculations per second.... This capacity will double every twelve months. Thus by the year 2020, it will have doubled about twenty-three times, resulting in a speed of about 20 million billion neural connection calculations per second, which is equal to the human brain.
RAY KURZWEIL, The Age of Spiritual Machines
The key issue as to whether or not a non-biological entity deserves rights really comes down to whether or not it's conscious.... Does it have feelings?
RAY KURZWEIL, USA Today, Aug. 19, 2007
Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans' ability to control or even understand them.
RAY KURZWEIL, Scientific American, June 2010
Play is the only way the highest intelligence of humankind can unfold.
JOSEPH CHILTON PEARCE
Man becomes man only by his intelligence, but he is man only by his heart.
HENRI FREDERIC AMIEL
Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats.
HOWARD AIKEN
I've always wondered what Kurzweil and his buddies think about the Chinese Room:
http://en.wikipedia.org/wiki/Chinese_room
Nice find, and fascinating, debatable material, but it has raised a lot of heat in the AI field.
http://forums.parallax.com/showthread.php?124495-Fill-the-Big-Brain&p=1001046&viewfull=1#post1001046
However, don't you think it's a lot of hogwash? For example, Searle fails to define exactly what constitutes "thinking" because he doesn't know. The argument indicts only the weakest mechanisms of symbol manipulation. The comparison also fails to address the outside mechanism that slides the papers of Chinese symbols around into understandable response patterns. It reduces the mechanisms of machine intelligence to random acts of symbol placement - a very poor representation, handicapped and one-sided for any machine intelligence comparison. Saying that a program cannot give a machine understanding is completely unfounded; the real question should concern the type and level of machine understanding and its position in an evolution, as Expert Systems today can already run circles around the Chinese Room. The Chinese Room also trades on a mismatch of cultural language and communication, forcing a lack of understanding in order to make a subversive point against machine intelligence, and it reeks of dark-ages material, moving backwards rather than forwards in science and technology.
http://en.wikipedia.org/wiki/Chinese_room
The Chinese room is an argument against certain claims of leading thinkers in the field of artificial intelligence,[3] and is not concerned with the level of intelligence that an AI program can display.[4] Searle's argument is directed against functionalism and computationalism (philosophical positions inspired by AI), rather than the goals of applied AI research itself.[5] The argument leaves aside the question of creating an artificial mind by methods other than symbol manipulation.[6]
New Type of Propeller Simplex Neuron (SN)
Modified INTNeuron and Test Neurons?
A discovery? Not exactly; more something born of necessity. For testing purposes, a new neuron formulation is added to the list of Big Brain neuron types. This is an example neuron intended as a real-time, defined representation of a real neuron. The focus is to create a representative neuron with the following objectives:
Least amount of code
Least amount of memory
Maximum number of neurons
Able to represent output
Very inexpensive and cost effective
Stripped of complexity
Retains ability to fire in at least two states
Simple trigger mechanism
Easily injected
State expandable
Convertible to the next higher neuron
Compatible with existing neural software
Functions with same existing code
What is the number one objective of the new SN? It can fire. How this firing can be detected will become the subject of future discussions and designs.
What is the name of this new neuron? It is the Simplex Neuron, or simply SN. Simplex Neurons are so simple that they begin to look like other things. For example, their firing mechanism can simply set a software state that can be read in real time or, for visual interpretation, turn a single LED on and off. SNs have no periodicity and no parametric tuning. SNs have the simplest mechanism and trigger either on or off, with the output state viewed at a pin or within internal software.
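Since BOSS itself is unreleased, here is a minimal sketch of what an SN along these lines might look like, written in C for illustration (the nearest reference point in this thread is the C/C++ ANN examples mentioned below); the struct name, the threshold-free trigger, and the LED mapping are assumptions, not the project's actual code.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical Simplex Neuron: one bit of state, a simple trigger,
   no periodicity, no parametric tuning - per the description above. */
typedef struct {
    uint8_t fired;            /* 0 = quiet, 1 = fired */
} SimplexNeuron;

/* Trigger mechanism: any nonzero input fires the neuron (two states). */
static void sn_trigger(SimplexNeuron *sn, int input)
{
    sn->fired = (input != 0);
}

/* Output: readable as a software state in real time; on hardware the
   same bit could drive a single LED pin instead. */
static int sn_read(const SimplexNeuron *sn)
{
    return sn->fired;
}

int main(void)
{
    SimplexNeuron sn = { 0 };
    sn_trigger(&sn, 1);
    printf("SN state: %d\n", sn_read(&sn));   /* prints 1 */
    return 0;
}
```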
The hot new thing about SNs is their sheer numbers. This is very exciting because it doubles the number of neurons in the machine. Preliminary estimates based on the percentage reduction of code seem to indicate more than a doubling effect, i.e. where we had 1,000 neurons firing in a single chip, we now have 2,000 to 3,000 neurons firing. For example, in the first two Propeller array partitions, up to 300,000 neurons can be tested, incrementing at a rate of 150,000 neurons per partition.
Exactly how these will be assembled is in the works. It is possible to modify previous neural code by reduction and end up with the SN shape. LEDs are often favored for quick go/no-go identification; however, given power management and the numbers predicted, a kind of real-time array inspection needs to be developed. This would be a neural inspector capable of examining any particular SN state. On the other hand, if LEDs are retained, the firing state output can be increased to more states.
This is neither the beginning nor the end of experimentation with various types of machine neurons and neural matter. It's likely that more advanced, though possibly not simpler, neurons will be invented for the Tremendously Large Propeller-based Brain.
Very Large Propeller Brain
Injectable Simplex Neuron Outline
Structure Defined Internal / External
Soft Wiring to Internal / Hard wiring to Ext
Define Fire Out: On / Off / State
End / Repeat / Refresh Internal / Refresh Ext
The problem, IMO, with most early AI research -- especially that of the McCarthy/Minsky crowd -- is that they began by working top-down from thinking, logic, and conceptualizing, rather than building bottom-up from primitive stimulus and response. I believe you have to model the behavior of a nematode successfully before you can hope to emulate human thought.
Indeed, working from the bottom up in AI research creates a solid foundation of science, with each layer building upon the previous. McCarthy and Minsky are not the only people to have tried reverse engineering from the top down. And why not? Human brain evolution has seen 5 million years of development; in today's impatient society, it's doubtful the AI community is going to schedule that amount of development time.
BTW, the Big Brain project started with the same number of neurons that exist in a Sponge and Trichoplax.
http://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
There are many predictions about human brain reverse engineering:
Reverse-engineering the human brain so we can simulate it using computers may be just two decades away, says Ray Kurzweil, artificial intelligence expert and author of the best-selling book The Singularity is Near. It would be the first step toward creating machines that are more powerful than the human brain. These supercomputers could be networked into a cloud computing architecture to amplify their processing capabilities. Meanwhile, algorithms that power them could get more intelligent. Together these could create the ultimate machine that can help us handle the challenges of the future, says Kurzweil.
The Giant Machine Brain has now consumed ALL known Propellers from all known Propeller projects. Everyone remembers exactly what happened when the US40 parked itself just a little too close to the Giant Brain. This hungry Brain Beast, with an appetite for cogs in the morning and chips in the evening, would continue its consumption were it not for the new lack of cogs and chips. Current indications show that not one extra prop is available for running the simplest of tests. Even Mr. PEK met dire consequences and was assimilated into the collective. This situation suggests one of two things: either order more props or create a testing partition within the Giant Brain. It is proposed that a switchable, isolated test partition inside one of the existing partitions could recycle the Giant Brain into more functions, one of which is a testing machine inside the machine.
Little Parts - a Machine in the Machine
Isolate your brains
Little Parts is a tiny collective of parts inside the Giant Brain that can be isolated for running small tests. Little Parts, or LP, requires the EEPROM, the circuit for the Prop Tool, and supporting components (crystal, resistor, decouplers, LEDs). LP is located inside the number one Giant Brain partition and uses the same schematic as the PEK when one chip is tested.
To use LP, isolate the number of Propeller chips required for the test by removing the interface sectional wires after the final prop to terminate the sequence. Simply provide termination, optionally remove power/ground from the remaining collective, and run the tests.
The problem, IMO, with most early AI research -- especially that of the McCarthy/Minsky crowd -- is that they began by working top-down from thinking, logic, and conceptualizing, rather than building bottom-up from primitive stimulus and response. I believe you have to model the behavior of a nematode successfully before you can hope to emulate human thought....
No doubt that is true. In fact, I would guess you need to start even much, much farther down the evolutionary tree than nematodes.
I think one mistake being made by the AI people is their presumption that consciousness is based on merely synaptic activity, as though neurons and "wiring" are what it's all about, whereas information processing in living organisms begins at the molecular level, and it's becoming more obvious to researchers that quantum physical phenomena in those molecular interactions play a big role, too. There's a form of molecular computing going on that boggles the mind.
... Human brain evolution has seen 5 million years of development....
Life has existed on earth for at least 3.2 billion years. There's some evidence for it going back to 3.8 billion and it was probably getting its act together even before that. Because the human brain sprang from a multitude of electrical and biochemical systems that came long before it, I would say 5 million years is a big underestimate.
.... the Big Brain project started with the same number of neurons that exist in a Sponge and Trichoplax.....
Keep in mind that the lowly sponge has been around for about 500 million years. That means its organization, on macroscopic and molecular levels, benefits from roughly 3 billion years of experimentation and the countless multitudes of individual organisms that came long before it. I think it's a bit presumptuous to begin with the complexity of a sponge and believe that's somehow a simple, easy-to-understand, well-understood toehold on which to begin a technological ascent of the type many AI people talk about.
...The coordinating mechanism is unknown, but may involve chemicals similar to neurotransmitters. However glass sponges rapidly transmit electrical impulses through all parts of the syncytium, and use this to halt the motion of their flagella if the incoming water contains toxins or excessive sediment. Myocytes are thought to be responsible for closing the osculum and for transmitting signals between different parts of the body.
Sponges contain genes very similar to those that contain the "recipe" for the post-synaptic density, an important signal-receiving structure in the neurons of all other animals. However in sponges these genes are only activated in "flask cells" that appear only in larvae and may provide some sensory capability while the larvae are swimming. This raises questions about whether flask cells represent the predecessors of true neurons or are evidence that sponges' ancestors had true neurons but lost them as they adapted to a sessile lifestyle.
http://en.wikipedia.org/wiki/Sponge
I don't think modeling nerve cells is the best way to create artificial intelligence. That's like saying the best way to synthesize voice is to model the cells in the vocal tract. Voice synthesis has improved over the years by starting with a simple acoustic model, and then adding more complexity based on fairly high-level features of the human vocal tract. It would be unnecessarily complicated to model it starting from the cellular level.
I think the same is true for AI. Start with a high level model of human thought, and then add more complexity to the model to more accurately simulate how a human thinks. Let's say we wanted to simulate an ant. We could spend decades trying to accurately simulate the 250,000 neurons in an ant, or we could spend a much shorter time accurately simulating how an ant responds to stimuli.
The point of my comment is that you cannot hope to simulate intelligence without considering environmental factors, such as external stimuli and responses. What makes humans smart is more than an isolated logic machine -- a talking head on a post, if you will -- but the memory of sensory experiences and our interactions with an external world. I sincerely do not believe that human intelligence can be abstracted away from its fundamental, reptilian stimulus/response mode, except in narrow domains, such as chess-playing or trivia knowledge. If you can't model how a nematode responds to its environment, you can't hope to model human thought since it, too, relies so heavily upon environmental interactions.
The main difference is that we seem to possess consciousness, although people can't agree on exactly what it is. Whether machines will ever possess it is an interesting question.
By claiming a unique possession of consciousness, I think we exalt our kind way too much. How can we say we have it, if we don't even know what it is? How can we be sure that "consciousness" is not just a consequence of complexity, whether biological or mechanical. Too often it's defined in circular, anthropomorphic terms: consciousness being that which is uniquely possessed by humans (and maybe chimps and porpoises if we're feeling particularly generous).
What about qualia, such as the "redness" of a tomato? It's very difficult to talk about qualia without invoking consciousness, although some philosophers argue that qualia don't exist. It's difficult to see how a machine could possess qualia.
Life has existed on earth for at least 3.2 billion years. There's some evidence for it going back to 3.8 billion and it was probably getting its act together even before that. Because the human brain sprang from a multitude of electrical and biochemical systems that came long before it, I would say 5 million years is a big underestimate.
One could also think about dating human evolution back to the Big Bang of the Universe, 13.7 billion years ago, which would be even more accurate.
http://en.wikipedia.org/wiki/Big_bang#Overview
What we may find in the future is that human consciousness and pure thought is nothing more than a very complex computer program run by advanced biological mechanisms.
I took a little break from Spinneret development and spent a few days looking into a simple artificial neural network. I found some simple C/C++ examples, then thought about how to do a similar exercise on the Propeller. My focus is simple pattern detection using a single-layer feed-forward network. The neuron input is an 8x8 (64-bit) matrix where 1 is on and 0 is off.
I thought I would try the alphabet. I need 26 neurons, a 64-bit input, and 64 weighted 32-bit values per neuron for the inputs. I guess I could use something smaller than 32-bit weighted values.
a = i1*w1 + i2*w2 + i3*w3 + ... + in*wn
The weighted values are like memory.
If I neglected any prop to prop I/O, and executed the neural net inside one Propeller, it would easily take up an entire Prop. Plus the neurons would have to share COGs and run sequentially.
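For anyone wanting to try this, here is a rough, self-contained C sketch of the single-layer feed-forward detector described above: 26 neurons, the 8x8 matrix packed into a 64-bit word, and 64 signed 32-bit weights per neuron. The weights, thresholds, and the sample glyph are placeholder assumptions; a real version would train the weights (e.g., with the perceptron rule).

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_NEURONS 26   /* one per letter A..Z */
#define NUM_INPUTS  64   /* 8x8 pixel matrix, 1 = on, 0 = off */

/* 26 neurons x 64 signed 32-bit weights. All zero here as
   placeholders; training would set real values. */
static int32_t weights[NUM_NEURONS][NUM_INPUTS];
static int32_t threshold[NUM_NEURONS];

/* Activation: a = i1*w1 + i2*w2 + ... + in*wn, then compare against
   the neuron's threshold. Returns 1 if the neuron fires. */
static int neuron_fire(int n, uint64_t pixels)
{
    int32_t a = 0;
    for (int i = 0; i < NUM_INPUTS; i++)
        if ((pixels >> i) & 1)
            a += weights[n][i];
    return a >= threshold[n];
}

int main(void)
{
    uint64_t glyph = 0x183C66667E666666ULL;   /* a rough 8x8 'A' bitmap */
    for (int n = 0; n < NUM_NEURONS; n++)
        if (neuron_fire(n, glyph))
            printf("neuron %c fired\n", 'A' + n);
    return 0;
}
```

With untrained (zero) weights every neuron fires at threshold zero; the point of the sketch is only the data layout and the weighted-sum loop.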
I agree... I believe this comes in the form of memory.
The point of my comment is that you cannot hope to simulate intelligence without considering environmental factors, such as external stimuli and responses
What about qualia, such as the "redness" of a tomato? It's very difficult to talk about qualia without invoking consciousness, although some philosophers argue that qualia don't exist.
I can’t see how 2,000 to 3,000 neurons can be jammed into a Propeller.
Humanoido, please explain the methods you use to inject 2,000 to 3,000 neurons into a single Propeller. As I understand it, a neuron takes multiple inputs and produces a single output, so at the very least you'd have 4,000 to 6,000 inputs and 2,000 to 3,000 outputs. A neuron has to learn or be trained, so you have that overhead as well. I used weighted values.
...a robot can be programmed to be aware of its own mind....
I would like to see the code that performs that function. Or even an algorithm will do.
If I program a robot to say "I think therefore I am," does that make the robot aware?
Is it possible that it is the process of living things - all those molecules and electrical signals playing in concert - that makes consciousness, and that without that type of process the best you can hope for is nothing but a nice simulation? Why are AI people so certain that the process itself isn't responsible for the entire phenomenon of consciousness?
please explain the methods you use to inject 2,000 to 3,000 neurons into a single Propeller. As I understand it, a neuron takes multiple inputs and produces a single output, so at the very least you'd have 4,000 to 6,000 inputs and 2,000 to 3,000 outputs. A neuron has to learn or be trained, so you have that overhead as well. I used weighted values. What kind of ANN are you building?
We are only estimating, or speculating, that 2K or 3K SNs can fit. The SN was just invented. After making that quote, I realized the number may be more accurately around 9 thousand or more. It's going to depend on the actual size of the SN.
The method of injecting thousands of neurons into a Propeller Brain array is adequately discussed in reference to one-, two- and three-Propeller Partitions. A Propeller Partition can hold 50 props.
The details of injecting that many SN neurons into a single chip have not been previously discussed, nor has the way the OS is handled. They will be described in future posts.
Simplex Neural Injection into a single Propeller currently does not use the Neural Matter Injector but rather a different method related to BOSS.
It may, however, incorporate the existing NMI in the future, so development integration will take place first. It is unknown at this time whether ranges of chips for injection should be program-specified or developed.
The post does, however, explain that the left-half entrance of the derived Machine Simplex Neuron is held constant, so the inputs are just software dummies for simplicity and example purposes. SNs are "pre-trained," so to speak (see list). Outputs are actual output firings, though obviously with the limitations explained in the post and pseudo code.
In one test some time ago, I used the LED Machine, which has 32 LEDs. My solution was to use cog switching in and out of banks.
With many more SNs we need a new detector, which I think should be some internal mechanism, as noted in the post. In another thread we talked about simulating the firing on a TV screen using pixels in high numbers, but I don't know an easy way to do that.
With SNs, I think a code probe could simply read their states in real time as selected.
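To make the code-probe idea concrete, here is a minimal sketch under one assumption the thread has not specified: that SN states are packed one bit per neuron into 32-bit words. Any selected neuron's state can then be read in real time without an LED.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_SN 3000
static uint32_t sn_state[(MAX_SN + 31) / 32];   /* 1 bit per neuron */

/* Record a fire (1) or quiet (0) event for neuron n. */
static void sn_set(int n, int fired)
{
    if (fired) sn_state[n / 32] |=  (1u << (n % 32));
    else       sn_state[n / 32] &= ~(1u << (n % 32));
}

/* Probe: read the current state of any selected neuron. */
static int sn_probe(int n)
{
    return (int)((sn_state[n / 32] >> (n % 32)) & 1u);
}

int main(void)
{
    sn_set(1234, 1);
    printf("SN 1234: %d\n", sn_probe(1234));    /* prints 1 */
    return 0;
}
```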
ElectricAye: I would like to see the code that performs that function. Or even an algorithm will do.
There's a published Penguin Robot algorithm (in PBASIC using the BS2px) that can do that. In fact, its brain can leave messages to itself across resets. It would be quite well aware of its own brain using that algorithm.
ElectricAye: If I program a robot to say "I think therefore I am," does that make the robot aware?
No. The robot must know it's thinking by a mechanism which provides that knowledge to the robot.
ElectricAye: Is it possible that it is the process of living things - all those molecules and electrical signals playing in concert - that is making consciousness, and that without that type of process the best you can hope for is nothing but a nice simulation? Why are Ai people so certain that the process itself isn't responsible for the entire phenomena of consciousness?
Of course, yes, it takes a large concert, and that's why we have Expert Systems to narrow it down to a smaller band of specialty.
The AI community is split on the mechanisms to achieve machine consciousness. Plus, they cannot agree on the process that defines consciousness.
Check out these latest developments in brain technology using light. There's some talk of downloading and uploading memories... controlling epilepsy, restoring vision, ...
http://www.ted.com/talks/ed_boyden.html?utm_source=newsletter_weekly_2011-05-17&utm_campaign=newsletter_weekly&utm_medium=email
Ed Boyden shows how, by inserting genes for light-sensitive proteins into brain cells, he can selectively activate or de-activate specific neurons with fiber-optic implants. With this unprecedented level of control, he's managed to cure mice of analogs of PTSD and certain forms of blindness. On the horizon: neural prosthetics. Session host Juan Enriquez leads a brief post-talk Q&A.
About Ed Boyden: At the MIT Media Lab, Ed Boyden leads the Synthetic Neurobiology Group, which invents technologies to reveal how cognition and emotion arise from brain networks and to enable systematic repair of…
Thanks Nikos!
We are only estimating or speculating that 2K or 3K SN's can fit. The SN was just invented.
Put the SN and injection process aside for a moment. Your single-Propeller neuron count estimate is 2 orders of magnitude larger than what I believe can be achieved. IMO, the ideal environment is one neuron per COG. Realistically, you would have several interleaved neurons in a single COG. Another process would orchestrate neuron inputs and outputs to and from HUB or pin I/O. For example, if you had two neurons in a COG, you certainly need to know which neuron is sampling and firing at a given time; memory locking and such. Then you have to deal with the neuron outputs, which must be assembled and fed back, fed forward, or wrapped in a message for transport. The whole transport mechanism takes resources as well.
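Mike's bookkeeping point can be illustrated with a sketch: several neurons time-multiplexed in one COG, updated round-robin, publishing into a shared output word that a separate transport process would collect. This is illustrative C only; on a real Propeller it would be Spin/PASM, and the shared word would need a hub lock.

```c
#include <stdint.h>

#define NEURONS_PER_COG   4
#define INPUTS_PER_NEURON 8

typedef struct {
    int32_t inputs[INPUTS_PER_NEURON];
    int32_t weights[INPUTS_PER_NEURON];
    int32_t threshold;
} Neuron;

static Neuron cog_neurons[NEURONS_PER_COG];
static volatile uint32_t outbox;   /* shared with the transport process */

/* One round-robin pass: each neuron samples and fires in turn, so the
   COG always knows which neuron is active; the result bits are packed
   into 'outbox' for feedback, feed-forward, or message transport. */
static void cog_update(void)
{
    for (int n = 0; n < NEURONS_PER_COG; n++) {
        int32_t a = 0;
        for (int i = 0; i < INPUTS_PER_NEURON; i++)
            a += cog_neurons[n].inputs[i] * cog_neurons[n].weights[i];
        /* a real system would take a lock around this shared update */
        if (a >= cog_neurons[n].threshold) outbox |=  (1u << n);
        else                               outbox &= ~(1u << n);
    }
}

int main(void)
{
    cog_update();   /* one orchestration pass over this COG's neurons */
    return 0;
}
```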
We are only estimating or speculating that 2K or 3K SN's can fit. The SN was just invented. After making that quote, I thought the number may be more accurately around 9 thousand or more. It's going to depend on the actual size outcome of the SN.
Please explain the method or methods used to estimate 1,000+ neurons per Propeller.
The count estimate is for SNs. A different neuron will have a different size, operate differently, and thus have a different count. The entire purpose of the new SN idea is to investigate the basic properties and testing of the smallest SN in the highest quantities.
Sure, one could go in the opposite direction and write fewer, more complex neurons per cog. There are two schools. Some say it's more reasonable to start with a simple neuron model and evolve it, rather than going with a more complex one off the bat.
Estimates of what will fit into a Propeller chip can be based on the amount of available memory divided by the size of a single neuron. Another option is to read the Propeller Tool memory map from actual downloads.
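As a worked example of that first method, with loudly assumed numbers (the actual SN size and BOSS overhead are still unknown): a Propeller 1 has 32 KB of hub RAM, so the count is roughly the free memory divided by the per-neuron size.

```c
#include <stdio.h>

int main(void)
{
    int hub_ram  = 32768;   /* Propeller 1 hub RAM, bytes */
    int overhead = 8192;    /* assumed: OS, stack, variables */
    int sn_size  = 8;       /* assumed bytes per Simplex Neuron */

    int count = (hub_ram - overhead) / sn_size;
    printf("estimated SNs per chip: %d\n", count);   /* 3072 here */
    return 0;
}
```

Under these assumptions the result lands in the 2K-3K range quoted above; an SN of roughly 2 to 3 bytes would be needed to approach the 9-thousand figure.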
You haven't even come close to answering Mike's query, "Please explain the method or methods used to estimate 1,000+ neurons per Propeller," with anything concrete.
-Phil