Dr. Jim Gouge's articles on machine intelligence with the Propeller - download - Page 2 — Parallax Forums


24 Comments

  • heater Posts: 3,370
    edited 2009-08-24 21:00
    @Ken: There is one important point which I forgot to mention in my first post on this thread.

    Jim and mallred have repeatedly made assertions about the performance of the Propeller chip that would lead the unsuspecting Propeller newbie to believe it has far more horsepower than it really has, that it can outperform a regular PC in raw performance. They liken it to a "super computer".

    Those assertions are there again in those articles.

    Now, the Propeller is a wonderful microcontroller, its interrupt-free parallel processing is a gem, and for sure there is a whole class of tasks for which it will outperform a PC. However, it should not be compared in raw horsepower to a gigahertz multi-core processor with gigabytes of memory. Bolting a bunch of Props together does not get you there either.

    This "over Hyping" of the Prop is going to reflect badly when their customers discover what they have actually been sold is a bunch of micro-controllers lashed together and not a Deep Blue.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    For me, the past is not over yet.
  • potatohead Posts: 10,261
    edited 2009-08-24 21:28
    Well Hanno, I thought maybe a protoboard smoothie would be the easiest way to go, should it come to that! Consider that a well realized "exit" strategy! :) (yes, folks! I'm here all night!)

    --->and isn't "Will It Blend?" fun?

    I share the same concerns heater does on this. If the right expectations are set, nobody but MIT has any real worries. At this time, I don't see them well set, and that's a worry. Not enough of one to comment much on, but one nonetheless.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Propeller Wiki: Share the coolness!
    Chat in real time with other Propellerheads on IRC #propeller @ freenode.net
    Safety Tip: Life is as good as YOU think it is!
  • Guns of Fun Posts: 26
    edited 2009-08-24 21:29
    I take technological breakthroughs with a grain of salt, but I was definitely excited by Dr. Jim's interview. I think we should be supporting those who think outside the box. Even if the whole project doesn't work out, some part of it may give the next inventor a great idea.
  • localroger Posts: 3,452
    edited 2009-08-24 21:51
    @GoF, I don't think any of us would be here if we weren't in favor of thinking outside the box. It's thinking off the edge of the cliff that we have a problem with.
  • Guns of Fun Posts: 26
    edited 2009-08-24 22:06
    Your cliff just becomes another box.
  • localroger Posts: 3,452
    edited 2009-08-24 22:14
    That depends on how big a box it is.
  • Guns of Fun Posts: 26
    edited 2009-08-24 22:23
    It gets too big when guys like Moller http://www.moller.com/ squander millions of investors' money on a pipe dream.
  • Cluso99 Posts: 18,069
    edited 2009-08-25 10:38
    Sorry Ken, I have to agree with heater. Parallax should distance itself as far away from this as it can get.

    And for the record, I believe it was Hanno who offered to eat a prop chip. Maybe I should offer to eat one too??? Hanno has proved he has the qualifications to comment.

    My concern is that the prop will inevitably wear some of the mud when it begins to stick!

    Postedit: Sorry, missed the second page of the thread, but comments still apply.

    Should also have commented on Parallax's new speech recognition board! You should really take a look.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Links to other interesting threads:

    · Home of the MultiBladeProps: TriBladeProp, RamBlade, TwinBlade, SixBlade, website
    · Single Board Computer: 3 Propeller ICs and a TriBladeProp board (ZiCog Z80 Emulator)
    · Prop Tools under Development or Completed (Index)
    · Emulators: Micros eg Altair, and Terminals eg VT100 (Index) ZiCog (Z80), MoCog (6809)
    · Search the Propeller forums (uses advanced Google search)
    My cruising website is: www.bluemagic.biz  MultiBladeProp is: www.bluemagic.biz/cluso.htm

    Post Edited (Cluso99) : 8/25/2009 10:47:15 AM GMT
  • Nick Mueller Posts: 815
    edited 2009-08-25 11:40
    Strange that Parallax is adding fuel to the M.I.T. hype. Seems they don't read all posts here.


    Nick

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Never use force, just go for a bigger hammer!

    The DIY Digital-Readout for mills, lathes etc.:
    YADRO
  • NetHog Posts: 104
    edited 2009-08-25 12:38
    Guys, remember, Parallax has to take a neutral stance. Taking a negative stance can lead to issues just as much as taking a positive stance.

    Dr Jim reminds me of the Artilleryman in the book "The War of the Worlds".

    Dr Jim makes a number of technical errors that, if he were working for me, I would be pulling him up on left, right, and center (such as asserting that a 32-way Propeller is more powerful than a multi-core PC, or that things are imprecise because of logic).

    However, I thought this statement to be particularly interesting:

    "On a disk, at the end of a sector or track, you have an error-correcting
    code [ECC] so that if you lose bits in that data stream in that sector,
    you can reconstruct it. I own a patent that performs a similar function
    to encode logic streams (Patent no. 5267328). The concept is about how
    error-correcting codes are generated. Error-correcting codes describe
    the information to which they are attached in sufficient detail that if
    some information is lost, it can be reconstructed and filled back in."

    The patent quoted actually reads:
    "This invention describes methods and apparatus for the radial extraction and pattern recognition of image information based on 'back scatter' produced by focused beam of energy such as those used in radar, sonar, and ultrasound."

    It's a very impressive patent. However, the subject matters are entirely unrelated. It's not an error-correcting code (which is mathematically precise) but rather a means of decoding an image effect to identify its cause.
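For readers who haven't met the ECC concept being quoted: the defining property is that the code carries enough redundancy to reconstruct lost bits exactly. A minimal, self-contained Hamming(7,4) sketch illustrates this; it is a textbook code chosen purely for illustration, not the technique in the patent or anything of Dr. Jim's.

```python
# Hamming(7,4): 4 data bits -> 7-bit codeword; any single flipped bit
# can be located and repaired, i.e. the lost information is "filled
# back in" exactly as the interview describes for disk sectors.

def hamming74_encode(d):
    """d: list of 4 bits. Returns the 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute parities; a nonzero syndrome is the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                 # flip the bad bit back
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # recovered data bits

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                     # simulate a corrupted bit "on disk"
assert hamming74_correct(word) == [1, 0, 1, 1]
```

The contrast with the patent NetHog found is the point: an ECC like this gives an exact mathematical guarantee of reconstruction, whereas the patent describes pattern recognition on backscatter imagery, which is a statistical matter.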
  • Cluso99 Posts: 18,069
    edited 2009-08-25 13:36
    NetHog: Excellent point. Given your quote above and the actual patent, which I just skimmed, I seriously wonder if the patent is actually owned by the same person???

  • localroger Posts: 3,452
    edited 2009-08-25 14:24
    Wow, it just gets weirder and weirder.
  • Leon Posts: 7,620
    edited 2009-08-25 14:41
    I was rather confused by a PM someone sent me about a James O. Gouge who appears to be working in the same area. The sender looked up some of Gouge's work, and he seems to have gotten a first degree in about 1996. However, Dr Gouge seems to have adult children and sounds as if he is in his dotage, as am I. Perhaps they are father and son.

    Leon

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Amateur radio callsign: G1HSM
    Suzuki SV1000S motorcycle

    Post Edited (Leon) : 8/25/2009 2:47:51 PM GMT
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2009-08-25 16:30
    The patent was issued in 1993 and expires less than three years from now, so the clock is ticking. Without seeing the figures in conjunction with the text, it's really hard to fathom what he's doing. He does claim that the algorithms will work in "real time" on a (1993-vintage) PC.

    The patent relates to sonar and ultrasound feature extraction. This may seem odd to say, but there are some nice parallels between sonar/radar imagery and speech spectra. For example, compare these images:

    Speech spectrum over time.
    Ground-penetrating radar image.
    Sonar (chart recorder) image (false color).

    In each, the objective is feature extraction from background noise and making sense of the "shapes" that are revealed. But, like other posters, I don't get how this relates to error-correcting codes.

    -Phil
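The parallel Phil draws can be made concrete: in sonar, radar, and speech work alike, the signal is cut into short frames, each frame is transformed to the frequency domain, and the "shapes" are energy concentrations in the resulting 2-D picture. A stdlib-only toy sketch (the test tone and frame size here are arbitrary choices for illustration, not anything from the patent):

```python
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude spectrum of one frame via a naive DFT (O(n^2), fine for a toy)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectrogram(signal, frame_len=32):
    """List of per-frame spectra: the 2-D 'image' both fields inspect."""
    return [dft_magnitudes(signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

# A formant-like tone at 4 cycles per frame: its frequency bin dominates
# the spectrum, i.e. the "feature" stands out against the background.
tone = [math.sin(2 * math.pi * 4 * t / 32) for t in range(64)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
assert peak_bin == 4
```

Whether the input is a speech spectrum, ground-penetrating radar, or a sonar chart, the feature-extraction step that follows operates on the same kind of time-frequency picture.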
  • CounterRotatingProps Posts: 1,132
    edited 2009-08-25 20:38
    Phil Pilgrim (PhiPi) said...

    I do nurse a nagging fear, however, that Dr. Jim may leverage any eventual success by dragging the Propeller into the mire of software patents. [...] The establishment of prior art may turn out to be one of the OBEX's biggest assets!
    Hasn't most of the low-level stuff that would underpin such a claim already been done for other things, and isn't it already in the OBEX? (Formants, audio, FFT, general toolbox 'bit twiddling', etc.)

    The problem is, in America you can sue your dear mother for giving birth to you.
    She then, of course, will counter-sue for a lifetime of pain and suffering.

    I'm not worried for the Prop - or anyone who uses it - if they fail. No one will really care what platform they tried to use, because it would be self-evident that they were on the wrong track in the first place. If they succeed, as has been said, then that would make the Prop look good. But if they succeed, get greedy, *and* try to claim patent infringement - then there are a LOT of us here as witnesses who would defend Parallax with loyal earnestness.

    They'd get their behinds burned.
  • CounterRotatingProps Posts: 1,132
    edited 2009-08-25 21:01
    Phil Pilgrim (PhiPi) said...
    ... This may seem odd to say, but there are some nice parallels between sonar/radar imagery and speech spectra. For example, compare these images:


    [images]

    In each, the objective is feature extraction from background noise and making sense of the "shapes" that are revealed. But, like other posters, I don't get how this relates to error-correcting codes.
    Could it be as simple as them mapping such unique features into a lookup table? It's been common practice in music synths for some years for the sound to be generated no longer by additive synthesis (Reverse/DFFT) but just by emitting samples stored in a big lookup table (with some post-processing to change pitch).

    Isn't the reverse possible, where you 'know' your sample by tokenizing it down to essential data points? Once identified via its token signature, it then calls a dispatch (command) table. I don't know the gory details of Audiotex speech recognition algorithms, but I have always assumed that's how they work, but without the tokenization.

    So maybe that mapping is what they are just calling an "error code": more obfuscation than accuracy.

    But it would still take tables much larger than the sin/log/char ROMs in the Prop... oh, maybe that's where their big memory device comes in...

    Am I way off here?

    (gads, I'm speculating too now!)

    " Bridge to Sickbay... another one for Threads Anonymous "


    Post Edited (CounterRotatingProps) : 8/25/2009 9:07:49 PM GMT
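The signature-then-dispatch idea speculated about above can be sketched in a few lines. Everything here (the names token_signature, teach, recognize, and the crude min/max quantizer) is a hypothetical illustration, not Audiotex's or anyone's actual recognizer:

```python
def token_signature(samples, levels=4):
    """Quantize a sample window to a coarse tuple: the 'token signature'."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1
    return tuple((s - lo) * (levels - 1) // span for s in samples)

dispatch = {}   # signature -> command, i.e. the dispatch (command) table

def teach(samples, command):
    """'Training' is just filling in the lookup table."""
    dispatch[token_signature(samples)] = command

def recognize(samples):
    return dispatch.get(token_signature(samples), "unknown")

teach([0, 5, 9, 5, 0], "yes")   # a rising-falling envelope
teach([9, 5, 0, 5, 9], "no")    # a falling-rising envelope
assert recognize([0, 5, 9, 5, 0]) == "yes"
assert recognize([1, 1, 1, 1, 1]) == "unknown"
```

A real system would need a signature robust to noise, pitch, and timing variation, which is exactly where the large table sizes worried about above would come from.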
  • mallred Posts: 122
    edited 2009-09-01 03:17
    We are not thieves. We would never try a ploy against Ken et al. They have been very good to us. Please be reasonable in your posts.
  • ElectricAye Posts: 4,561
    edited 2009-09-01 04:12
    I'm sure glad I have no idea what any of this is about.
    Because if I did have some idea of what any of this is about, I'm guessing my sarcasm circuits would be fried to a crisp.

    Lucky me.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Watching the world pass me by, one photon at a time.
  • Oldbitcollector (Jeff) Posts: 8,091
    edited 2009-09-01 14:26
    mallred said...
    They have been very good to us. Please be reasonable in your posts.


    Any chance we could get some video of something in action? Even baby steps, without compromising your IP? I, for one, would like to understand what you are doing, but so far all I've seen is video with theory.

    If you are selling your OS and boards, how about a video demonstration of the product? Seems reasonable to me, but I've made this request twice before and was ignored.

    I'm sure we could get Hanno to video his eating of a Protoboard. (Blended with fruit of course!)

    OBC

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    New to the Propeller?

    Visit the: The Propeller Pages @ Warranty Void.
  • mallred Posts: 122
    edited 2009-09-02 01:11
    OBC: That is a nice thought. We would post that video first on our webpage for all time.

    I know you have all been very patient. We are getting very close to the time when we will have everything in place for video demonstrations of not only voice recognition but also machine intelligence, since VR uses MI in order to work. We can sort of kill two birds with one stone with that approach.

    All we need now is time for Dr. Jim to write the code. You will have video of a working prototype soon; even if it is not completely assembled into a robot, it will be able to communicate with us. We will definitely put that on video as soon as it is ready. Give us about a month.

    Thanks,

    Mr. Mark
  • jazzed Posts: 11,803
    edited 2009-09-02 01:20
    Honestly, I can't wait to be amazed. Anything that demonstrates intelligence would bring credibility. In the meantime, why don't you ask Dr. Jim exactly what he defines as demonstrable intelligence, so that we know what to expect. Good luck.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    --Steve

    Propeller Tools
  • mallred Posts: 122
    edited 2009-09-02 02:24
    That's fair enough. Dr. Jim says that the robot will know two words in English to begin with, "Yes" and "No". From there, you must teach it about words and syntax and structure. It will take time, and at first it will only speak a pidgin language. But, given time and practice, it will cluster similar items together internally. It will be able to make relationships and store those relationships in permanent memory. This is how it will learn. It will learn from us, its teachers, through the two ears we have built for audio input.

    This may take time to accomplish. The computer should eventually be able to formulate questions when it does not understand something, so that it can learn from its own generated questions. Would that be a fair demonstration?

    Mark
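Purely as a toy illustration of the loop Mark describes (a two-word starting vocabulary, taught relationships stored in "permanent memory", and self-generated questions for unknown words), one might sketch it like this; every name is hypothetical, and this is in no way the actual system being built:

```python
class Learner:
    """Toy word-association learner with a built-in two-word vocabulary."""

    def __init__(self):
        self.known = {"yes": set(), "no": set()}   # word -> related words

    def teach(self, new_word, related_to):
        # Store the relationship both ways: the 'permanent memory'.
        self.known.setdefault(new_word, set()).add(related_to)
        self.known.setdefault(related_to, set()).add(new_word)

    def hear(self, word):
        if word in self.known:
            links = ", ".join(sorted(self.known[word])) or "nothing yet"
            return f"I know '{word}' (related to: {links})."
        # Unknown input produces a question the teacher can answer.
        return f"What does '{word}' mean?"

bot = Learner()
assert bot.hear("beer") == "What does 'beer' mean?"
bot.teach("beer", "yes")
assert "yes" in bot.hear("beer")
```

Whether a lookup structure like this counts as "intelligence" is, of course, exactly the definitional question raised elsewhere in this thread.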
  • NetHog Posts: 104
    edited 2009-09-02 14:24
    mallred said...
    That's fair enough. Dr. Jim says that the robot will know two words in English to begin with, "Yes" and "No". From there, you must teach it about words and syntax and structure. It will take time, and at first it will only speak a pidgin language. But, given time and practice, it will cluster similar items together internally. It will be able to make relationships and store those relationships in permanent memory. This is how it will learn. It will learn from us, its teachers, through the two ears we have built for audio input.

    This may take time to accomplish. The computer should eventually be able to formulate questions when it does not understand something, so that it can learn from its own generated questions. Would that be a fair demonstration?

    Mark
    Given that the ability of speech recognition to differentiate between "Yes" and "No" was demonstrated many, many years ago, I'm assuming that what will be demonstrated is the computer actually going through the process of interactive learning?

    What will we see come December?

    PS: Don't let my management chain see this. I've scheduled a project through the middle of next year. If they see that someone can achieve machine intelligence by the end of the year, they'll have me working nights!
  • Agent420 Posts: 439
    edited 2009-09-02 14:26
    A two-word 'Yes'/'No' vocabulary sounds rather limiting. And if that is all it understands, what method would be used to 'teach' it additional words and syntax?

    On the one hand, I do understand that many AI implementations are based around 'teaching' the system. On the other, much of the way this project has been presented seems to have resulted in much higher initial expectations than what you describe.

    And I can't help but think of the Furby.

  • Nick Mueller Posts: 815
    edited 2009-09-02 14:35
    > Dr. Jim says that the robot will know two words in English to begin with, "Yes" and "No".
    > From there, you must teach it about words and syntax and structure.

    Brilliant! I didn't expect that much information! Not from you, at least.
    Do you ever ask questions when Dr. Jim tells you something, or do you directly create a new thread to spread the incredible news?

    Assuming I do have that miracle of intelligence and I switch it on: what happens? Nothing?

    Then when I say "No", what did he learn? Will he switch off? And when I say "Yes", will he stay powered on until I say "No"?
    How will I get that thingy to react to "Get beer"? Even if he brings me cold tea instead, that'd be OK for now.


    Nick

  • jazzed Posts: 11,803
    edited 2009-09-02 17:04
    mallred said...
    That's fair enough. Dr. Jim says that the robot will know two words in English to begin with, "Yes" and "No". From there, you must teach it about words and syntax and structure. It will take time, and at first it will only speak a pidgin language. But, given time and practice, it will cluster similar items together internally. It will be able to make relationships and store those relationships in permanent memory. This is how it will learn. It will learn from us, its teachers, through the two ears we have built for audio input.

    This may take time to accomplish. The computer should eventually be able to formulate questions when it does not understand something, so that it can learn from its own generated questions. Would that be a fair demonstration?

    So, to eliminate any dependency on the user's ability to train this "brain" and to speed up the process, MIT should come up with some audio recordings, via MP3 or some other means, that make the training fool-proof and allow 24-hour teaching ... an interface that is not audible could also be useful. In this way you at least get an apples-to-apples comparison for objectivity, rather than telling the user, "My bot is in the Stanford MBA program; it is not my fault that you're not smart enough to train yours."

    I suggest maybe a Bangalore call-center replacement training series :) At least we would have something to compare to the current line of call-center voice recognition bots used by AT&T and other big-company customer service numbers. This would be a good start.

    However just following a script, offering a list of options, recognizing a response, and acting accordingly does not demonstrate intelligence (although I personally know several people in Bangalore who are very intelligent).

    Some evidence of an "emergent property" in communications from all of this would, I think, be required to demonstrate intelligence. Does formulating questions qualify as demonstrating intelligence? Probably not if the questions are just keyword-lookup lists related to the subject (my browser's spell checker can do this, and I know it is not intelligent). This requires clarification.

    You need to define an objective and measurable response to the question: "what demonstrates intelligence?"

    Good luck.

  • Agent420 Posts: 439
    edited 2009-09-02 17:11
    From the interview articles:
    ROBOT mag said...
    DR. JIM: I have made some inroads in this. I have developed software that runs on PCs and some mainframes. I am now porting this over to a Parallax Propeller-based “hobby research super computer” setup. I cannot predetermine the internal interconnections it makes between its neuronal groups, and it can make mistakes and learn by correcting them.
    ...
    ROBOT: Have you tested the synthetic brain in a verbal interaction?

    DR. JIM:
    Yes; on a PC, but I really want it on the Parallax Propeller.

    As this seems to be the basis for his work on the Propeller architecture, is there any demonstration or description of the PC or mainframe applications? Even something like that would go a long way toward showing what he's talking about.
    I'd think that a video of verbal interaction would be something they'd really want to publicize.· I don't understand the cryptic nature of this program.

  • Praxis Posts: 333
    edited 2009-09-02 17:20
    > Dr. Jim says that the robot will know two words in English to begin with, "Yes" and "No".

    Wow, impressive achievement.

    >And I can't help but think of the Furby.

    Is that something like a gerbil?

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Caelum videre iussit, et erectos ad sidera tollere vultus

    Certe, toto, sentio nos in kansate non iam adesse
  • mallred Posts: 122
    edited 2009-09-02 17:26
    We have actually considered setting the robot in front of the TV and seeing how well it learns that way. Unfortunately, there is usually nothing intelligent on TV.
  • Praxis Posts: 333
    edited 2009-09-02 17:29
    mallred said...

    Unfortunately, there is usually nothing intelligent on TV.

    Sorry to say, but that pretty much sums up this thread.

    Cheers
