
A computer achieving self-awareness

.:John:. Posts: 64
edited 2008-08-23 17:18 in General Discussion
DELETED. Because nobody cares about noobs.

Post Edited By Moderator (Joshua Donelson (Parallax)) : 10/23/2009 4:37:12 AM GMT

Comments

  • jazzed Posts: 11,803
    edited 2008-08-23 03:01
    A computer achieving self-awareness is probably decades away. Today we imprint our thinking via algorithms to solve problems, and that allows large amounts of automation. Even neural networks need to be trained (see the sketch below this post), and fuzzy logic is just a better way to deal with input. Someone might someday get a neural system that is massively parallel and able to achieve the selective connections a more generic machine could make, but that seems well beyond even symmetric multiprocessing with many GHz-speed CPU cores. On-demand reprogrammable FPGAs could solve some parallelism and performance limits, but the cost would have to be covered by sponsored research ... and that would suck money like an oil-producing third-world country during a "war on terror" ....

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    --Steve
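
    A minimal sketch of what that training looks like (my own illustration, not from the post, assuming a lone perceptron and the logical AND function):

        # Toy example (assumed details): a single perceptron learning AND
        # from labeled samples. The weights are adjusted by the perceptron
        # learning rule rather than being programmed directly.
        samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w = [0.0, 0.0]   # input weights
        b = 0.0          # bias
        rate = 0.1       # learning rate

        for epoch in range(20):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                # Nudge the weights toward the correct output on each miss.
                w[0] += rate * err * x1
                w[1] += rate * err * x2
                b    += rate * err

        print(w, b)  # the learned weights now encode AND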
  • waltc Posts: 158
    edited 2008-08-23 04:44
    I remember my CS teacher (a software engineer at Lockheed) laughing at the mere mention of AI back in the early 90s. Even a few years back, the mention of AI still elicited laughter from CS teachers I know.

    Lots of very smart folks have been trying for decades with no success in sight.

    The DoD has been spending bucketloads of cash for a while now, with nothing to show for it except a few dodgy, very fragile robotic vehicles that take a team of geniuses to make do anything at all.

    Neural nets, IMS (if memory serves), still collapse under their connections after a certain size. Intel, which pursued AI with a line of neural-net chips, gave up and gave away the $1k chips for the asking. I still have a few somewhere, along with a couple of transputer chips. Really bizarre chips, more like analog computers; they could do some decent image recognition back in the day.

    Parallel processing ≠ AI; it's only faster number crunching. Then there's the coding issue.

    Genetic algorithms were another holy grail that went nowhere. Then there was the mix of GAs and NNs.

    After that I lost interest in the field.
  • SRLM Posts: 5,045
    edited 2008-08-23 06:02
    My school has an AI teacher. Guess they think it's something that's going to develop soon...
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2008-08-23 07:40
    waltc,

    I will agree that AI has been the "flying car" of CS for decades, but I have to take issue with your assertion that genetic algorithms have gone "nowhere". The list of successful applications of mathematical optimization through genetic recombination and propagation is large and growing; one has only to Google genetic-algorithm applications for examples (a minimal sketch of the technique follows below). GAs are like hammers, though, and not all problem domains are nails.

    -Phil

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    'Still some PropSTICK Kit bare PCBs left!
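
    A minimal sketch of the optimization loop Phil describes, with selection, recombination, and mutation (the all-ones bit-string target and all parameter values are my own toy choices, not from the post):

        # Toy GA (assumed details): evolve a bit string toward all ones.
        import random

        TARGET_LEN = 32    # length of each candidate bit string
        POP_SIZE   = 50    # candidates per generation
        MUT_RATE   = 0.02  # per-bit mutation probability

        def fitness(bits):
            # Count of 1-bits; the optimum is a string of all ones.
            return sum(bits)

        def crossover(a, b):
            # Single-point recombination of two parents.
            cut = random.randrange(1, TARGET_LEN)
            return a[:cut] + b[cut:]

        def mutate(bits):
            # Flip each bit with a small probability.
            return [b ^ 1 if random.random() < MUT_RATE else b for b in bits]

        population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                      for _ in range(POP_SIZE)]

        for generation in range(200):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == TARGET_LEN:
                break
            # Keep the top half, refill by breeding random survivors.
            survivors = population[:POP_SIZE // 2]
            children = [mutate(crossover(random.choice(survivors),
                                         random.choice(survivors)))
                        for _ in range(POP_SIZE - len(survivors))]
            population = survivors + children

        best = max(population, key=fitness)
        print("generation", generation, "fitness", fitness(best))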
  • waltc Posts: 158
    edited 2008-08-23 17:18
    Phil

    I wasn't saying GAs were useless; I only mentioned them because some AI folks were trying to mix GAs and NNs (software-based, IMS) in order to solve the connection-overload problem that severely limited NN size (a rough sketch of the idea follows below).
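
    A rough sketch of that GA+NN mix (my own toy reconstruction of the general idea, not the actual research waltc refers to: mutation-only evolution of the weights of a fixed 2-2-1 network until it fits XOR, with no backpropagation):

        # Toy neuroevolution (assumed details): a mutation-only GA
        # searches the 9-weight space of a tiny fixed 2-2-1 network.
        import math, random

        XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

        def forward(w, x1, x2):
            # w holds 9 weights: two hidden neurons (3 each), one output (3).
            h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
            h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
            return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

        def error(w):
            return sum((forward(w, *x) - t) ** 2 for x, t in XOR)

        best = [random.uniform(-1, 1) for _ in range(9)]
        best_err = error(best)
        for _ in range(5000):
            # Perturb the weight vector; keep the child only if it improves.
            child = [wi + random.gauss(0, 0.3) for wi in best]
            child_err = error(child)
            if child_err < best_err:
                best, best_err = child, child_err

        print(round(best_err, 4))  # small error: the evolved net fits XOR

    This usually finds a low-error solution, though a run can stall in a local optimum; the point is only that evolution, not gradient training, sets the weights.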