Leon,
I suspect that the graphics board was "built from scratch" so that either the graphics buffer could be used for data more quickly than going through the (probably old) graphics chip, or perhaps so the graphics chip could be used for vector/array processing, or both. Newer graphics boards AFAIK are already designed for this sort of usage.
Quick interview with Dr. Jim regarding the attributes of both PC and Propeller architectures:
A PC has all the bloatware that it has to plow through to get to your application. By the time it executes a small portion of your application's code and your application makes an I/O request (any request is processed as a software interrupt, the 2F interrupt vector), it has to get through all the bloatware to make the request and then has to get the result back to you. Your 3.2 GHz is brought to its knees.
If you have multiple applications running in the background, by the time you measure the throughput (the actual I/O in to I/O out of your program), it will not process any video information in real time itself. Instead, it has to use the "video accelerator card", which is actually one or more DSPs running the video portion of the application, and you have to write your own code for that as well, because there is no package available to do what has to be done. This is not just putting a movie on the screen.
Your throughput is actually less than that of the 8 dedicated cogs, each running instructions in 4 clock cycles of 12.5 nanoseconds, which comes out to a 50 ns per instruction average. Some branch instructions require 8 cycles, but those are not the bulk of the code (machine intelligence software).
The 3.2 GHz software, due to the bloatware, is not capable of doing the job.
The operating system is written specifically for and to support machine intelligence functions. It manages all of the memory assets as well as the sensors and servo mechanics. This requires a total of 4 Propeller chips, or 32 cogs running at 50 ns per instruction per cog, to operate. This puts us at about 1.5 ns per instruction average over the 32 cogs. This architecture leaves a 3.2 GHz multicore processor and all of its bloatware in the dust. It cannot possibly achieve the I/O throughput that 4 Propellers can sustain, and which must be maintained for real-time machine intelligence applications, i.e. the android we are building.
Thanks for listening,
Dr. Jim
Mark Allred
P.S. Dr. Jim says, "And now I must get back to my lab :)"
Mike Green said...
Newer graphics boards AFAIK are already designed for this sort of usage.
In fact a lot of non-graphics processing can be done using the GPU in modern video cards as a fast math coprocessor. Here's a short article: smart-machines.blogspot.com/2007/02/nvidia-makes-it-easy-to-do-math-on.html. I suspect that Jim Gouge's built-from-scratch graphics board was done many years ago, when such amenities as we have now were not available off-the-shelf, and that it was done without cost being a consideration (since it was for the military). I'm sure it was a real tour de force at the time.
BTW, none of my comments should be construed as personal attacks. But that doesn't mean I will sit still for statements that strain credulity or fall short of factual completeness. Without challenging such assertions, a novice forum visitor might just go ahead and drink the Kool Aid without first reading the label.
-Phil
Addendum: Having now read Mark's further explication, there is more than a little truth to what he says. But the bloat that gets in the way is due to the OS and is not an inherent characteristic of the processor. What makes the Propeller so attractive, by comparison, is not so much that it's fast (which it is), but that it's simple and fun (which was Chip's design criterion from the get-go). I do wonder what could be accomplished with a 3.2 GHz Pentium running MS-DOS.
Post Edited (Phil Pilgrim (PhiPi)) : 8/12/2009 11:25:45 PM GMT
Can you tell Dr. Jim that you can bypass the OS and write a simple time-slicing OS that will be almost as efficient as multiple cores?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
propmod_us and propmod_1x1 are in stock. Only $30. PCB available for $5
Want to make projects and have Gadget Gangster sell them for you? propmod-us_ps_sd and propmod-1x1 are now available for use in your Gadget Gangster Projects.
Need to upload large images or movies for use in the forum? You can do so at uploader.propmodule.com for free.
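For what it's worth, here is roughly the shape of that bypass-the-OS, time-slicing idea in C. This is only a sketch under assumptions of my own: the task functions and the round-robin loop are hypothetical placeholders, not any real OS's API.

    /* Minimal cooperative "time slicing" sketch for a bare-metal target.
       The task functions are hypothetical placeholders. */
    #include <stddef.h>

    typedef void (*task_fn)(void);

    static void read_sensors(void)  { /* poll inputs here */ }
    static void update_servos(void) { /* drive outputs here */ }
    static void think(void)         { /* run the application here */ }

    static task_fn tasks[] = { read_sensors, update_servos, think };

    int main(void)
    {
        size_t i = 0;
        for (;;) {                  /* the whole "OS": no kernel underneath */
            tasks[i]();             /* each task runs briefly and returns */
            i = (i + 1) % (sizeof tasks / sizeof tasks[0]);
        }
    }

Each task gets the whole machine for its slice, which is the sense in which this approaches the efficiency of multiple cores.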
Yah, Phil, just think of those old DOS games that used clock cycles as a timing device for the game. They would be going supersonic with today's chips. It would render them unplayable.
Of course, if you're building a graphics card from scratch to do data computation (never mind the nVidias with their hundreds of cores...), wouldn't it be something else? Why give it any graphics processing capability?
Anyway, the biggest problem I see is that I think any near-intelligent software needs a ton of memory... not just some SRAM or something, but terabytes or more. The process of learning requires memory, and a lot of it, so that multiple instances of lots of inputs and responses can be stored and compared against each other... so a more accurate 'understanding' of the event can be stored. Only after an event (and aspects of it) have been idle for a very, very long time can it be erased... or 'forgotten'.
If the machine has 8 bits of input at a rate of 100 ms per sample... how much data would need to be stored to learn that the act of not closing the fridge door was the real cause of the milk going bad? Or does AI not ever learn to close the fridge door properly?
... So how much RAM do you have again?
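A quick back-of-the-envelope in C for that sample rate; the figures follow directly from 8 bits every 100 ms and are not from the project itself:

    /* Raw storage cost of logging one byte every 100 ms. */
    #include <stdio.h>

    int main(void)
    {
        double bytes_per_sec = 1.0 / 0.1;        /* one byte per 100 ms */
        double per_day  = bytes_per_sec * 60 * 60 * 24;
        double per_year = per_day * 365;
        printf("%.0f bytes/day (~%.0f KB), ~%.0f MB/year\n",
               per_day, per_day / 1024, per_year / (1024 * 1024));
        return 0;
    }

That comes out to 864,000 bytes a day (about 844 KB), so even a raw one-byte log would fill a 4 MB board in under five days; learning has to compress, not just record.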
It definitely needs more than 32k, which is why we built our SRAM memory modules. Dr. Jim has complete control over every aspect of the board, something he desires to maintain. The OS will barely fit into 2 MB. It really needs 4 MB to be workable.
Again, we are talking about the intelligence of a bee or maybe a rat, not that of a human, so keep that in perspective.
We are not using 8 or 16 bits, but Dr. Jim has made a 32-bit virtual machine.
We are creating layers of memory which we intend to have take care of short- and long-term memory issues. It is true that eventually no more learning will occur without the addition of more memory.
Currently, all the development that has been done to date on this project has been accomplished with 2 MB. I have occasionally seen Dr. Jim use 4 MB, but that is rare, more for testing the two boards working together. However, now with the KISS OS about to be released, we will probably need 4 MB for continuing development.
Hope I answered a few of your questions.
Mark
mallred said...
... can teach your robot to talk and understand you. can hold conversations with it, rough at first, then more elaborate. can build a robot around it and pass in data streams from any sensory device, it will learn by vision, hearing, touch, or any sense you choose to add to it.
mallred said...
Again, we are talking about the intelligence of a bee or maybe a rat, not that of a human, so keep that in perspective.
Mark, I do appreciate you trying to pass on info from the good Dr. Here's my take on what you mentioned:
mallred said...
Quick interview with Dr. Jim regarding the attributes of both PC and Propeller architectures:
A PC has all the bloatware that it has to plow through to get to your application. By the time it executes a small portion of your application's code and your application makes an I/O request (any request is processed as a software interrupt, the 2F interrupt vector), it has to get through all the bloatware to make the request and then has to get the result back to you. Your 3.2 GHz is brought to its knees.
"To it's knees", as in 50% loss? 99% loss? OK, say I lose 99% of my 3200MHz machine, that leaves me at 32MHz. Hey, look! I'm operating at about the speed of 1 cog! Oh, wait, I still have HW multiplies, SIMD, CUDA if I want it, not to mention I will never give you 99% loss. I call B.S.
mallred said...
If you have multiple applications running in the background, by the time you measure the throughput (the actual I/O in to I/O out of your program), it will not process any video information in real time itself. Instead, it has to use the "video accelerator card", which is actually one or more DSPs running the video portion of the application, and you have to write your own code for that as well, because there is no package available to do what has to be done. This is not just putting a movie on the screen.
Weird. When writing video-game-type frameworks you can get 100 fps, with very little controller lag and plenty of processing going on every frame. True, it isn't "realtime", for a suitably "instantaneous" definition of "realtime", but it's plenty fast. Are you talking about video input? Yes, there are capture cards for that. I worked with one system that would pull down 2000 fps. The HW was beefy, and you _really_ had to illuminate the subject, but there it is. Regarding "This is not just putting a movie on the screen.", I'm assuming you have never written any video decompression software. Again, I call B.S.
mallred said...
Your throughput is actually less than that of the 8 dedicated cogs, each running instructions in 4 clock cycles of 12.5 nanoseconds, which comes out to a 50 ns per instruction average. Some branch instructions require 8 cycles, but those are not the bulk of the code (machine intelligence software).
I don't know what to say to this bit. You give numbers for one side of the comparison, and "actually less" for the other side. If you would care to set up a benchmark, we could get some actual numbers in there. I kind of get the feeling you are taking someone's word for it.
mallred said...
The 3.2 GHz software, due to the bloatware, is not capable of doing the job.
The operating system is written specifically for and to support machine intelligence functions. It manages all of the memory assets as well as the sensors and servo mechanics. This requires a total of 4 Propeller chips, or 32 cogs running at 50 ns per instruction per cog, to operate. This puts us at about 1.5 ns per instruction average over the 32 cogs. This architecture leaves a 3.2 GHz multicore processor and all of its bloatware in the dust. It cannot possibly achieve the I/O throughput that 4 Propellers can sustain, and which must be maintained for real-time machine intelligence applications, i.e. the android we are building.
A 1.5 ns average instruction time is 666 MIPS; well, let's round to 667 to avoid The Beast. Fine. That is well under the 2 GFLOPS you can (easily) get from a desktop machine, and your 2/3 GIPS is integer only, no floating point (not that your application needs floating point). If you are talking about the turnaround time for strictly digital input to be processed and reflected back to a digital output, then I may give you the latency issue, but it really depends on the amount of processing you need to do in the interim. In summary, I think you may be confusing latency with throughput.
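For anyone who wants to check the arithmetic both sides are using, here it is worked through in C; the 80 MHz clock (12.5 ns per cycle) and 4 cycles per instruction are the published Propeller figures, and the rest follows:

    /* Propeller instruction timing, worked through. */
    #include <stdio.h>

    int main(void)
    {
        double cycle_ns = 12.5;                   /* 1 / 80 MHz */
        double instr_ns = 4 * cycle_ns;           /* 50 ns per instruction */
        double mips_per_cog = 1000.0 / instr_ns;  /* 20 MIPS per cog */
        double mips_total   = 32 * mips_per_cog;  /* 4 chips x 8 cogs */
        printf("%.0f ns/instr, %.0f MIPS/cog, %.0f MIPS over 32 cogs\n",
               instr_ns, mips_per_cog, mips_total);
        printf("aggregate: %.4f ns per instruction equivalent\n",
               1000.0 / mips_total);
        return 0;
    }

The exact aggregate is 640 MIPS (1.5625 ns equivalent); Dr. Jim's 1.5 ns rounding is where the 666 figure comes from. Either way it is throughput: any single instruction still takes 50 ns on its cog, which is the latency.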
mallred said...
Thanks for listening,
Dr. Jim
Mark Allred
P.S. Dr. Jim says, "And now I must get back to my lab :)"
Yep, these sentient computers aren't going to program themselves, you know.
Jonathan
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
lonesock
Piranha are people too.
The other thing, as I mentioned in the last thread, is you can take that 3.2 GHz machine, reformat the hard drive, and put FreeDOS on it, which gives you unfettered access to gigabytes of RAM with a 32-bit flat memory model. Now, DOS was called bloated back in its day, but that day was 1985, and the thing about DOS is it doesn't do anything unless you ask it to. So all you have to do is avoid system calls and you have instantaneous hardware access to all that fast RAM and whatever I/O you need. You can load up the RAM from the hard drive, then ignore the drive until turnoff time, and save the volatile RAM contents back to it when you need to. And while you're not saving and loading, your OS overhead will be *zero*, but you won't have to do any of the hardware interfaces yourself if you don't have time, because FreeDOS has most of them for you.
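To make that concrete: direct hardware access under DOS-era C looks roughly like this. The Borland-style outportb/inportb names and the classic parallel-port address are assumptions here; substitute whatever your compiler and I/O card actually use.

    /* Direct port I/O with no OS call in the path (Borland-style dos.h). */
    #include <dos.h>

    #define IO_PORT 0x378              /* parallel-port data register */

    void write_output(unsigned char value)
    {
        outportb(IO_PORT, value);      /* straight to the hardware */
    }

    unsigned char read_input(void)
    {
        return inportb(IO_PORT + 1);   /* status register, read directly */
    }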
localroger: But what will that cost? Price was an issue in component selection because Dr. Jim wants everyone interested to be able to afford to build it on their kitchen table. This is also the reason for using DIP-style chips, so people can actually work with them. So far, no one has had to. We have built and tested every board we have sent out. And we just sold another one today, so we have to build that one tomorrow and send it out.
I know you think our boards are expensive. I took my own initiative to reduce the price and took some heat for that as it is. Believe me, you have a quality board for the price. The 512K SRAM chip itself is $6; in quantities of 100 it is $5.50. So we are talking about $24 our cost just to populate a 2 MB board with SRAM. That does not include any of the logic chips or the 4-layer board itself, not to mention that the entire board is socketed, plus a few capacitors and a power connector. Add in the price to get the boards made, at $20 a board our cost, and it all adds up. If anyone can offer recommendations for less expensive materials while allowing us to keep our superb quality, then I'm all ears. In fact, Dr. Jim says that if anyone can do what he has done for $30, holding the same quality, you can become our supplier. We don't want to be in the business of selling boards. We want to concentrate on machine intelligence research.
What is your PCB fab house? Try Golden Phoenix PCB. You can get 2-layer boards for $110 a square meter. The smaller you make your board, the more boards you get. I do not know their 4-layer board rate.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Computers are microcontrolled.
Robots are microcontrolled.
I am microcontrolled.
But you can call me micro.
If it's not Parallax then don't even bother.
I have changed my avatar so that I will no longer be confused with others who use generic avatars.
Mark,
I think that most of those of us that have experience in building stuff for sale understand where the money goes and that, unlike hobbyists, you can't be expected to work for nothing. It's tedious to assemble PCBs. Several people have suggested that these boards could be built more cheaply and that's probably true. At some point, you may want to farm the work out.
The main complaint is that there's very little information you've provided about the product (and your other products). This has been said before and it's still true. This is one of the ugly truths about the difference between development and marketing. Just like building the PCBs, it's not fun for most tech people (or business people, for that matter) to assemble the dull facts and whip them into an understandable form so that others can understand them. For an OS, that means a brief list of the functions implemented, in terms of how the user might make use of them. This would include commands too. This might be considered the executive summary of the user's manual. This is not something to wait to do later. People are already judging the potential quality of your (you and Dr. Gouge) future work (the AI stuff) on the basis of how you're communicating now, on the "simple stuff". Unfortunately, you're not doing as well as you could and should be at this point.
From the Golden Phoenix PCB site, the board setup fee is $270, and each board (3x4", as it looks in the picture on your site, with single silkscreen, double soldermask, 100 mil thickness, 6 mil minimum spacing) is $6.20.
I wasn't able to find readership or subscriber numbers for Robot magazine, but with a serial publishing deal you can probably expect to sell a thousand boards at least. So that's $6.50 a board, which would cut $13.50 off of your price.
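Spelling out the arithmetic behind that figure:

    /* Amortizing the quoted fab costs over a 1000-board run. */
    #include <stdio.h>

    int main(void)
    {
        double setup = 270.00;    /* one-time setup fee */
        double each  = 6.20;      /* quoted per-board price */
        int    qty   = 1000;
        printf("$%.2f per board\n", each + setup / qty);   /* $6.47 */
        return 0;
    }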
@Rsadeika: Just because it is too expensive at $200 doesn't mean that we expect it for $30. It is silly to expect someone to put up (a 2 MB board for $30) or shut up (about an overpriced board for $200).
Some of the arguments are so strange that I sometimes think they do come from a different universe!
Bloatware on other OSes than the KISS:
Other OSes have been named, but trading huge memory (gigabytes) with linear access and no swapping for 2 MB that has to be accessed through "bloatware" is just far out.
Price of a KISS-OS-environment:
It needs at least two 2 MB boards and 4 Proto Boards. That comes to about 450 US$, right? Does that save money compared to a mini-ITX board plus power supply? OK, less I/O, but there are many cheap I/O boards for the PC. And if you need processing power on them, you should have a look at MESA boards.
Size of CPU:
What's the footprint of 2 memory-boards and 4 Proto-boards?
Regarding teaching the OS (that is promised for speech recognition):
Suppose I do have the required hardware. I apply power and say "turn left and move 3 meters ahead". What would that teaching look like until my robot actually does that? How do I teach it the meaning of "turn", "left", "move", "3", "meter", "ahead", and what will happen if I then say "Left, then 3 forward"? Will it say "How many degrees to the left? What unit of '3'?"
And most of all, how long will it take to teach that, and what is the expected time until it reacts?
I still have to laugh about the time, around 10 years ago, when my boss bought that IBM dictating software. In the afternoon, he was so proud of what the software did after intensive training. I said to him: "Now take that letter from yesterday and dictate it". The outcome was so funny that we both cried from laughing. He never used the software again.
Nick
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Never use force, just go for a bigger hammer!
The DIY Digital-Readout for mills, lathes etc.: YADRO
mallred said...
Quick interview with Dr. Jim regarding the attributes of both PC and Propeller architectures:
A PC has all the bloatware that it has to plow through to get to your application. By the time it executes a small portion of your application's code and your application makes an I/O request (any request is processed as a software interrupt, the 2F interrupt vector), it has to get through all the bloatware to make the request and then has to get the result back to you. Your 3.2 GHz is brought to its knees.
If you have multiple applications running in the background, by the time you measure the throughput (the actual I/O in to I/O out of your program), it will not process any video information in real time itself. Instead, it has to use the "video accelerator card", which is actually one or more DSPs running the video portion of the application, and you have to write your own code for that as well, because there is no package available to do what has to be done. This is not just putting a movie on the screen.
Your throughput is actually less than that of the 8 dedicated cogs, each running instructions in 4 clock cycles of 12.5 nanoseconds, which comes out to a 50 ns per instruction average. Some branch instructions require 8 cycles, but those are not the bulk of the code (machine intelligence software).
The 3.2 GHz software, due to the bloatware, is not capable of doing the job.
The operating system is written specifically for and to support machine intelligence functions. It manages all of the memory assets as well as the sensors and servo mechanics. This requires a total of 4 Propeller chips, or 32 cogs running at 50 ns per instruction per cog, to operate. This puts us at about 1.5 ns per instruction average over the 32 cogs. This architecture leaves a 3.2 GHz multicore processor and all of its bloatware in the dust. It cannot possibly achieve the I/O throughput that 4 Propellers can sustain, and which must be maintained for real-time machine intelligence applications, i.e. the android we are building.
Thanks for listening,
Dr. Jim
Mark Allred
P.S. Dr. Jim says, "And now I must get back to my lab :)"
Instead of making his own graphics card, he could have used a modern PC with something like an nVidia GeForce card. This has its own GPU (the latest ones have lots of processors) that can be used for number crunching with no OS "bloatware", which would make development of the application much easier. Free development software is available. This would offer much more performance than the proposed Propeller system and be a lot more convenient for the user.
Graphics cards don't use DSPs, BTW.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
@mallred: Why would I want a robot that I would have to teach when I could just program it to do what I want? This is quite a simple question, really.
I've tried to come up with numerous ways to explain this; however, my eloquence escapes me. If you give it a series of instructions, it will never be able to do anything except those instructions. If you can teach it, it must have the ability to learn, and therefore it can become a much more versatile and interesting machine.
I guess it's the difference between you learning what is in your math book, and learning how to actually apply the math. My ability to create a workable example appears to be significantly reduced currently.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
It's not particularly silly, is it?
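One way to picture the taught-versus-programmed difference described above is a stimulus-response table filled in at run time instead of hard-coded branches. A toy sketch in C, purely hypothetical and in no way how the KISS OS works:

    /* "Taught" behavior: the table starts empty and grows at run time. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_PAIRS 16

    static char stimulus[MAX_PAIRS][32];
    static char response[MAX_PAIRS][32];
    static int  learned;

    static void teach(const char *s, const char *r)
    {
        if (learned < MAX_PAIRS) {
            strcpy(stimulus[learned], s);   /* behavior added at run time */
            strcpy(response[learned], r);
            learned++;
        }
    }

    static const char *react(const char *s)
    {
        for (int i = 0; i < learned; i++)
            if (strcmp(stimulus[i], s) == 0)
                return response[i];
        return "?";                         /* unknown: has to be taught */
    }

    int main(void)
    {
        printf("%s\n", react("ball"));      /* "?" - nothing learned yet */
        teach("ball", "fetch");
        printf("%s\n", react("ball"));      /* "fetch" - behavior it was
                                               never explicitly coded for */
        return 0;
    }

A fixed program is the react table frozen at compile time; a teachable one keeps teach available forever, which is also exactly why it needs ever-growing memory.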
microcontrolled said...
@mallred: Why would I want a robot that I would have to teach when I could just program it to do what I want? This is quite a simple question, really.
- Learning is not straightforward, and I also wonder as you do. What if it learns stuff I don't want it to do? How do you guide its learning, and how does it then apply that to new situations and inputs? If I have to teach it everything, is that more or less tedious than coding it? Can I teach it to selectively forget things? How do I know that it has completely unlearned that? How do I know it won't learn it again?
Some learning seems feasible, like learning the limits of the operating space without the need for complex descriptions defining it, or learning the limits of pressure to apply to different materials and remembering that, or other similar items. Things where you outline the broader picture of what you want it to do with a program and it learns the details of doing it in the environment it's in. That would be very desirable, I think.
@mallred, it's *cheap*. FreeDOS is, as the name suggests, free and open-source. You can take a step down from the state of the art and get a 1 to 1.5 GHz modern processor (say a netbook or mini-ITX) for under $500, which will still run rings around a stack of Props. That will include nonvolatile mass storage and all standard I/O, and you can get massively parallel I/O controls in a number of different ways (including direct to hardware through PCI) for a couple of hundred more.
You could even use Propellers to do the I/O, with standard objects translating serial commands to jitter-free control signals, with much less work than building custom memory boards to try to get Propellers to do the thinking too. Using standard objects I could put together a board to let one Propeller serially control servos and return feedback signals in an afternoon, for about fifty dollars.
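The host side of that serial scheme could be as trivial as the sketch below. The device name and the "S<channel>=<microseconds>" command format are invented here for illustration; the Propeller end would own the jitter-free pulse timing.

    /* Host-side sketch: send one servo command over a serial link. */
    #include <stdio.h>

    int main(void)
    {
        FILE *link = fopen("COM1", "w");   /* serial link to the Propeller */
        if (!link)
            return 1;
        fprintf(link, "S3=1500\n");        /* channel 3 to 1500 us (center) */
        fclose(link);
        return 0;
    }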
Actually, if this proposed AI operating system were to reach the goals advertised here, I would think it would be an economic waste to limit it to the small Propeller audience... If what they state can be done is true, this would be a breakthrough for any platform, and it would be in their financial interest to begin with the largest market base, which is inevitably the PC architecture.
You can buy a quad-core 3 GHz AMD PC for $400 with 3 GB of RAM.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
propmod_us and propmod_1x1 are in stock. Only $30. PCB available for $5
Want to make projects and have Gadget Gangster sell them for you? propmod-us_ps_sd and propmod-1x1 are now available for use in your Gadget Gangster Projects.
Need to upload large images or movies for use in the forum? You can do so at uploader.propmodule.com for free.
I've never talked to a rat. Dr. Dolittle?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
My Prop Info&Apps: http://www.rayslogic.com/propeller/propeller.htm
"To it's knees", as in 50% loss? 99% loss? OK, say I lose 99% of my 3200MHz machine, that leaves me at 32MHz. Hey, look! I'm operating at about the speed of 1 cog! Oh, wait, I still have HW multiplies, SIMD, CUDA if I want it, not to mention I will never give you 99% loss. I call B.S.
Weird. When writing video game type frameworks you can get 100 fps, with very little controller lag and ptenty of processing going on every frame. True, it isn't "realtime", for a suitably "instantaneous" definition of "realtime", but it's plenty fast. Are you talking about video input? Yes there are capture cards for that. I worked with one system that would pull down 2000 fps. The HW was beefy, and you _really_ had to illuminate the subject, but there it is. Regarding "This is not just putting a movie on the screen. ", I'm assuming you have never written any video decompression software. Again, I call B.S.
I don't know what to say to this bit. You give numbers for one side of the comparison, and "actually less" for the other side. If you would care to set up a benchmark we could get some actual numbers in there. I kind of get the feeling you are taking someones word for it.
1.5ns average instruction count is 666 MIPS, well let's round to 667 to avoid The Beast . Fine. That is well under the 2GFLOPS you can (easily) get from a desktop machine, and your 2/3 GIPS is integer only, no floating point (not that your application needs floating point). If you are talking about the turnaround time for strictly digital input to be processed and reflected back to a digital output, then I may give you the latency issue, but it really depends on the amount of processing you need to do in the interim. In summary I think you may be confusing latency with throughput.
Yep, these sentient computers aren't going to program themselves, you know.
Jonathan
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
lonesock
Piranha are people too.
I know you think our boards are expensive. I put in my own initiative to reduce the price and took some heat for that as it is. Believe me, you have a quality board for the price. The 512K SRAM chip itself is $6, in quantities of 100 it is $5.50. So we are talking about $24 our cost to populate a 2 MB board. That does not include any of the logic chips, the 4-layer board itself, not to mention the entire board is socketed, plus a few capacitors and a power connector. You add in the price to get the boards made at $20 a board our cost, and it all adds up. If anyone can offer recommendations for less expensive materials while allowing us to keep our superb quality, then I'm all ears. In fact, Dr. Jim says that if anyone can do what he has done for $30, holding the same quality, you can become our supplier. We don't want to be in the business of selling boards. We want to concentrate on machine intelligence research.
Mark
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Computers are microcontrolled.
Robots are microcontrolled.
I am microcontrolled.
But you·can·call me micro.
If it's not Parallax then don't even bother.
·
I have changed my avatar so that I will no longer be confused with others who use genaric avatars.
I think that most of those of us that have experience in building stuff for sale understand where the money goes and that, unlike hobbyists, you can't be expected to work for nothing. It's tedious to assemble PCBs. Several people have suggested that these boards could be built more cheaply and that's probably true. At some point, you may want to farm the work out.
The main complaint is that there's very little information you've provided about the product (and your other products). This has been said before and it's still true. This is one of the ugly truths about the difference between development and marketing. Just like building the PCBs, it's not fun for most tech people (or business people for that matter) to assemble the dull facts and whip them into an understandable form so that others can understand them. For an OS, a brief list of functions implemented in terms of how the user might make use of them. This would include commands too. This might be considered the executive summary of the user's manual. This is not something to wait to do later. People are already judging the potential quality of your (you and Dr. Gouge) future work (the AI stuff) on the basis of how you're communicating now, on the "simple stuff". Unfortunately you're not doing as well as you could and should be at this point.
From the Golden Pheonix PCB site, the board setup fee is $270, each board (3x4", as it looks in the picture on your site, with single silkscreen, double soldermask, 100 mil thickness, 6 mil minimum distance) is $6.20 a board.
I wasn't able to find a number for readership or subscriber of Robot magazine, but with a serial publishing deal you can probably expect to sell a thousand boards at least. So, that's $6.50 a board, which would cut $13.50 off of your price.
Bloatware on other OSes than the KISS:
Other OSes have been named, but trading in huge memory (GigaBytes) with linear access and no swapping with 2MBs that have to be accessed by "bloatware" is just far out.
Price of a KISS-OS-environment:
Needs at least two 2MB-boards and 4 Proto-boards. That makes at about 450 US$, right? Does that save money if you compare it to a mini-ITX-board plus power-supply? OK, less IO, but there are many cheap IO-boards for the PC. And if you need processing power on them, you should have a look at MESA-boards.
Size of CPU:
What's the footprint of 2 memory-boards and 4 Proto-boards?
Regarding teaching the OS (that is promised for speech recognition):
Supposed I do have the required hardware. I apply power and say "turn left and move 3 meters ahead". How would that teaching look like until my robot actually does that. How do I teach him the meaning of "turn", "left", "move", "3" "meter" "ahead" and what will happen if I then say "Left, then 3 forward". Will it say "how many degrees to the left? What unit of '3'?".
And most of all, how long will it take to teach that and what is the expected time until it reacts.
I still have to laugh when about 10 years ago my boss bought that IBM dictating software. In the afternoon, he was so pround what the software did after intensive training. I said to him: "Now take that letter from yesterday and dictate it". The outcome was so funny that we both really cried from laughing. He never used the software again.
Nick
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Never use force, just go for a bigger hammer!
The DIY Digital-Readout for mills, lathes etc.:
YADRO
Instead of making his own graphics card, he could have used a modern PC with something like an nVidia GeForce card. This has its own GPU (the latest ones have lots of processors) that can be used for number crunching with no OS "bloatware", which would make development of the application much easier. Free development software is available. This would offer much more performance than the proposed Propeller system and be a lot more more convenient for the user.
Graphics cards don't use DSPs, BTW.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Post Edited (Leon) : 8/13/2009 10:06:50 AM GMT
"The Plane The Plane"
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Caelum videre iussit, et erectos ad sidera tollere vultus
Certe, toto, sentio nos in kansate non iam adesse
Unfortunately this is one of those rare times one can accurately use the phrase "you're too young to understand" ;)
In the interest of not seeming to be that much of a swine, I present exhibit (A):
en.wikipedia.org/wiki/Fantasy_Island
It will be interesting to watch.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
propmod_us and propmod_1x1 are in stock. Only $30. PCB available for $5
Want to make projects and have Gadget Gangster sell them for you? propmod-us_ps_sd and propmod-1x1 are now available for use in your Gadget Gangster Projects.
Need to upload large images or movies for use in the forum? You can do so at uploader.propmodule.com for free.