How does a computer learn?

AIman Posts: 531
edited 2012-01-22 08:36 in General Discussion
Can someone explain to me how a computer learns and why it matters?

Comments

  • GordonMcComb Posts: 3,366
    edited 2012-01-19 21:58
    First define "learn."

    A computer can be programmed to evaluate based on new data (a form of learning), but that programming eventually comes down to something created by a human. Is it true learning if you have to teach the machine how to learn each new thing? Some say not.

    In any case, this Wiki article is a good place to start: http://en.wikipedia.org/wiki/Cognitive_science

    -- Gordon
  • lanternfish Posts: 366
    edited 2012-01-19 23:13
    OT:
    First define "learn." ... Is it true learning if you have to teach the machine how to learn each new thing? ... -- Gordon

    Replace machine with student. Some training institutions are pouring out so-called 'techs' who have no idea of words like 'manual', 'search engine', or 'similar'. They seem to see each problem as new and unique, and have no ability to problem-solve based on previous experience. And as a mentor I find I have to be very, very patient and take time to explain the similarity to a previous problem/solution.

    And that is the end of my grumpy old tech muttering.
  • ElectricAye Posts: 4,561
    edited 2012-01-19 23:35
    Wikipedia has some stuff on it sometimes.

    http://en.wikipedia.org/wiki/Machine_learning
  • LoopyByteloose Posts: 12,537
    edited 2012-01-20 01:57
    A comprehensive understanding of learning is rather elusive even with humans, so the computer analog is rather mythical. At best, computers are suited to narrowly focused tasks, and often what you are getting is mathematical interpolation or a lookup against a recent record of human behavior (a data buffer).

    And if you want a computer to use human language, it can speak in a limited manner but is much worse at listening to everyday conversation.
  • Heater. Posts: 21,230
    edited 2012-01-20 02:10
    Who said anything about language? We humans are born without a language in our heads. Somehow we learn one or more as we grow. The learning capability is there before any language.
    In the animal world, young chimps learn to smash nuts with rocks by watching older chimps do it. Young bears learn to catch fish by watching their parents. And there are countless other examples.
    Can a computer do any of this? I have yet to see it, except in very limited, restricted cases.
  • Mike G Posts: 2,702
    edited 2012-01-20 03:26
    Let's say a computer is connected to a distance sensor. The distance sensor is rotated 360 degrees while recording distance readings every degree. You could argue the computer learned the position of nearby objects.
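
    A minimal sketch of that setup in Python (the sensor here is simulated with random values; a real rig would read actual hardware):

        import random

        def read_distance(angle):
            # Stand-in for a real distance sensor; returns centimeters.
            return random.uniform(10, 200)

        # One reading per degree of a full rotation.
        readings = [read_distance(angle) for angle in range(360)]

        # The smallest reading marks the closest return in the scan.
        nearest = min(range(360), key=readings.__getitem__)
        print("closest return at", nearest, "degrees:", readings[nearest], "cm")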
  • Heater. Posts: 21,230
    edited 2012-01-20 03:47
    Mike G,

    Well, this conversation could get a bit philosophical.
    Let's say a computer is connected to a distance sensor.

    OK.
    The distance sensor is rotated 360 degrees while recording distance readings every degree.

    You mean the computer has recorded something. OK.
    You could argue the computer learned the position of nearby objects.

    No, I would not. The computer knows nothing about "objects" or "nearby" or "position" in this simple setup. There is now a bunch of numbers stored away in its memory, but any interpretation of them is entirely down to you.

    Let's replace "computer" with "Mike G". Mike G memorizes all the numbers coming from somewhere. What has Mike G learned from this? Nothing useful.
  • Leon Posts: 7,620
    edited 2012-01-20 04:13
    Artificial neural networks are a nice example of computer learning:

    ftp://ftp.sas.com/pub/neural/FAQ.html

    I've done some work with them, and they are quite fascinating. They are used here in the UK for number plate recognition.
  • mindrobots Posts: 6,506
    edited 2012-01-20 04:59
    Stanford University is offering a wonderful online class (taught to the standards of their live classes) called Machine Learning (ml-class.org). I believe the next session starts Jan 23rd.

    You can learn how a computer can learn statistical classification, data mining and other related tasks through supervised and unsupervised learning techniques.

    Google learns more about you, your search interests, and searching in general each time you use it. Your spam blocker learns. Many programs learn about things and are able to make very good predictions and classifications based on their learning. There are many examples of Machine Learning around that you may not even realize, as Leon and others have pointed out.

    At this point, learning isn't the issue; it's a matter of thinking and reasoning and abstracting: taking the learned data and using it outside the framework in which it was originally presented and intended. This isn't a problem particular to the machine condition, as Lanternfish and Heater pointed out above.

    I went through a good portion of the Machine Learning class last fall until I got short on time. I'm going to try it again some day, it was fascinating.
  • Mike G Posts: 2,702
    edited 2012-01-20 05:21
    No, the computer learned something about the surroundings. If the computer took a step in the direction of a 1 but was unable to move, the executing code could record that moving in the direction of a 1 is not good. The distance mapping is used as a feedback mechanism in the learning process. It's all philosophical stuff...
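
    A toy sketch of that feedback loop in Python (the "world" here is invented: one direction is walled off, and the program scores a direction down when a move fails):

        import random

        blocked = {1}                               # invented: east is a wall
        scores = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0}   # N, E, S, W

        for step in range(50):
            # Mostly pick the best-scoring direction, sometimes explore.
            if random.random() < 0.2:
                d = random.choice(list(scores))
            else:
                d = max(scores, key=scores.get)
            if d in blocked:
                scores[d] -= 1.0   # the move failed: "not good"
            else:
                scores[d] += 0.1   # the move succeeded

        print(scores)   # the blocked direction typically ends up scored lowest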
  • Heater. Posts: 21,230
    edited 2012-01-20 05:53
    Years ago, before micro-processors and such, Wireless World magazine published a construction article about what we might now call a "bot" that learned.

    Cybernetic Cynthia was built in the form of a snail. When you touched her shell she pulled her head in, in case of danger. Slowly she would pop her head out again, until the shell was touched again. But if you touched her shell rapidly and repeatedly, she would "learn" that there was "no danger" and leave her head outside the shell. If you then stopped touching her shell for a long time she would forget what had happened, and if you came back and touched her she would again hide her head in her shell in "fear".

    Cynthia's "learning" was done with nothing more than a simple circuit with a micro switch as the shell sensor, a transistor or two and the charge on a capacitor as the memory.
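
    A rough digital analog of that capacitor memory, as a Python sketch (the time constants are made up):

        # Each touch adds charge to the memory; between touches the charge
        # leaks away. A well-charged memory means touches are familiar, so
        # she keeps her head out; otherwise a touch makes her hide.
        memory = 0.0

        def tick(touched):
            global memory
            memory *= 0.95                    # the charge leaks away over time
            if touched:
                memory += 1.0                 # each touch adds charge
            return touched and memory < 2.0   # unfamiliar touch -> hide

        # Rapid repeated touches: after a few, she stops hiding.
        for i in range(10):
            print("touch", i, "-> hide?", tick(True))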
  • Heater. Posts: 21,230
    edited 2012-01-20 06:04
    Quite so. The incoming sensor data changed the internal state of the computer, perhaps in ways far more complicated than just recording the pattern of numbers as in your initial example. If we saw the machine scan its environment, then try to step in some direction it cannot go, then connect the data from its scan with that collision such that it never again stepped in a direction where the scan showed nearby returns, then we would be observing some kind of learned behavior.

    In a casual way we call that "learning". But is it really the same as when I, as a baby, shove my fingers into the fascinating bright red glow of an electric heater and learn never to do that again because it hurts like hell?
  • Mike G Posts: 2,702
    edited 2012-01-20 06:14
    But is it really the same as when I, as a baby, shove my fingers into the fascinating bright red glow of an electric heater and learn never to do that again because it hurts like hell?
    Sure, some events are more important to remember than others.
  • Heater. Posts: 21,230
    edited 2012-01-20 06:34
    Sure, some events are more important to remember than others.

    Now there's a thing. How could any machine that we have built have any idea about what is important or not?
    "important" is a value judgment based on feeling, emotions, experience, which most people would say a bunch of connect transistors or other switches does not and cannot have.
  • Martin_H Posts: 4,051
    edited 2012-01-20 06:38
    An interesting book which inspired BEAM robotics is "Vehicles: Experiments in Synthetic Psychology", by Valentino Braitenberg. Here's the Amazon URL:

    http://www.amazon.com/Vehicles-Experiments-Psychology-Valentino-Braitenberg/dp/0262521121

    The book is a bit abstract at times and initially doesn't seem to be related to AI. But that's because he's trying to separate the principles of machine learning from their implementation in a digital computer. The BEAM guys took some of his ideas and ran with them, since analog computation is as valid as digital computation, and often cheaper. As an aside, he died back in September 2011; if I had known, I would have put up a notice in this forum.
  • mindrobots Posts: 6,506
    edited 2012-01-20 07:53
    OK, now I find this funny and interesting and to the point of this thread. From the famous BRIGE thread running concurrently, we have this (insightful and well-researched) quote from Heater:
    I googled for it before posting as well. Then I realized there are a lot of sites out there that look like dictionaries offering meanings for words like "brige", even including examples of usage. It then became clear that all the examples are taken from typos and misspellings dredged up on the internet. They are just bots auto-generating misinformation.

    Like this one for example: http://www.wordnik.com/words/brige

    It's nuts.

    Which leads me to propose that the bots and crawlers on the web are learning to misspell, to maldefine language, and to trust the Internet as an authoritative source of information, at the same level as a large portion of the human population! That speaks highly of the "State of the Art" in Machine Learning... and not so much for human learning!
  • Heater. Posts: 21,230
    edited 2012-01-20 08:06
    Good example, mindrobots. I had not made the connection.

    So, no amount of computers hoovering up bits, mashing them around according to whatever algorithms (neural net, expert system, etc.) and then regurgitating the results is any kind of learning.

    On the other hand, often when I think about this I can't find any reason why a human is any different from a machine. Stimulus in, response out, that's it. Then I get very depressed.

    Except, in the middle of all that stimulus in response out business there is me experiencing something about it all.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-01-20 08:07
    "Learning" implies empirical behavior. IOW, to learn something, a computer must be programmed to test various behaviors in a given environment, to evaluate the outcome of each test, and to modify its future behavior in order to increase "good" outcomes and reduce "bad" outcomes. From that standpoint, learning can be thought of as an exercise in optimization where, through trail and error, the structure of a given "utility function" can be teased out so that behavior becomes more confined to those regions in its domain that yield maximum reward (or minimum punishment). The best learning systems are those which employ optimal strategies in their choices of trials, rather than blindly trying things at random.

    -Phil
  • mindrobots Posts: 6,506
    edited 2012-01-20 08:10
    Uncanny!
    rather than blindly trying things at random.

    Phil's watched me program!! :lol:
  • Mike G Posts: 2,702
    edited 2012-01-20 09:25
    Now there's a thing. How could any machine that we have built have any idea about what is important or not?
    That's the point of the feedback. What Phil said.
  • Heater. Posts: 21,230
    edited 2012-01-20 09:55
    OK. So we determine that we need feedback for any kind of learning. Crack open any old mechanical light switch and you will see that it uses feedback to maintain its on or off state. Or consider the humble flip-flop built with NAND gates, or the typical PID control loop. They all have feedback. Can any of those be said to be "learning" in a commonly accepted meaning of the word?
    How are we to define this "learning"? What else does it need?
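
    For contrast, here is a bare PID loop as a Python sketch (the gains and the one-line plant model are invented). It is all feedback, yet nothing in it improves with experience: the gains stay fixed forever.

        kp, ki, kd = 1.0, 0.05, 0.5        # fixed gains: no learning here
        setpoint, value = 10.0, 0.0
        integral, prev_error = 0.0, 0.0

        for step in range(300):
            error = setpoint - value
            integral += error
            derivative = error - prev_error
            output = kp * error + ki * integral + kd * derivative
            prev_error = error
            value += 0.05 * output         # crude invented plant model

        print("settled at", round(value, 2))   # near the setpoint, every run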
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-01-20 10:21
    Heater,

    The examples you gave lack empirical behavior. See my previous post.

    -Phil
  • Pliers Posts: 280
    edited 2012-01-20 10:25
    I think that the word “learn” is being confused with the word “intelligence”.
    A computer is a collection of items put together to do work.

    A computer could “learn” a lot by being “taught” via programming and data gathering, but it will never be intelligent.

    Human intelligence is full of wispy stuff: dreams, desires, fear, and anger. These are emotions a machine cannot have.

    Machines do learn, but they are not intelligent.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-01-20 10:37
    Pliers wrote:
    A computer could “learn” a lot by being “taught” via programming and data gathering, but it will never be intelligent.
    Au contraire! There are multiple examples of computer programs that learn. Two learning mechanisms in common use are neural network simulations and genetic algorithms. Granted, a computer has to be programmed in advance to implement these methods but, once let loose on the problems they're applied to, they can exhibit adaptive behavior that's too complex to analyze deductively.
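
    A single artificial neuron, the smallest building block of a neural network, fits in a few lines of Python; here it learns the AND function from examples (a minimal sketch, not a full network or a genetic algorithm):

        # Perceptron learning rule applied to the four AND examples.
        samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w = [0.0, 0.0]
        bias = 0.0
        rate = 0.1

        for epoch in range(20):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
                err = target - out
                # Nudge the weights in the direction that reduces the error.
                w[0] += rate * err * x1
                w[1] += rate * err * x2
                bias += rate * err

        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            print((x1, x2), "->", out)   # reproduces AND after training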

    Also, to say that computers will never think is to say that humans cannot think -- unless you ascribe human thought processes to mystical powers that lie beyond the realm of physics.

    -Phil
  • bill190 Posts: 769
    edited 2012-01-20 11:13
    Most computers do not learn. They are like that light switch on the wall: you flip the switch, the light turns on. The switch never learns when you want it turned on, or anything else.
  • Martin_H Posts: 4,051
    edited 2012-01-20 11:41
    Also, to say that computers will never think is to say that humans cannot think -- unless you ascribe human thought processes to mystical powers that lie beyond the realm of physics.

    I sometimes wonder about the human ability to learn and think.
  • ilovepi Posts: 9
    edited 2012-01-20 15:24
    I like Phil's answer.

    The way I think of machine learning is you have a model and you have a way to update that model.
    The model might be: f(x) = mx+b
    The update step tries to adjust 'm' and 'b' to make f(x) meet the objective as closely as possible.

    There are many approaches to modeling. The model above will work well when the output needs to be a linear function of the input, but not so well in other cases. More complicated models give you more freedom: you can tweak them so they mimic almost anything. The problem is that it's harder to find the right adjustments for a complex model, and harder to compute them.

    So there are many ways to update the model. A general approach is "gradient descent" where you adjust the parameters of the model in the direction where it seems to be getting closer to the right answers. But depending on the model, gradient descent can also converge on a non-optimal answer. Models and their update strategy are designed together to find good solutions.
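
    A minimal sketch of gradient descent in Python, fitting f(x) = mx+b (the data and learning rate are invented; the true line is m=2, b=1):

        data = [(x, 2.0 * x + 1.0) for x in range(10)]   # invented samples
        m, b = 0.0, 0.0
        rate = 0.01

        for step in range(2000):
            # Gradient of the mean squared error with respect to m and b.
            grad_m = sum(2 * (m * x + b - y) * x for x, y in data) / len(data)
            grad_b = sum(2 * (m * x + b - y) for x, y in data) / len(data)
            m -= rate * grad_m          # step downhill along the gradient
            b -= rate * grad_b

        print("learned m =", round(m, 2), "b =", round(b, 2))   # ~2.0, ~1.0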

    I think one of the big ideas we see everywhere today is parsimony in the form of sparseness. We can have a complicated model but during learning prefer simple variations of the model. In the example above, we could prefer solutions for 'm' and 'b' where one of the two is close to 0. If you were learning the grammar of English, you can prefer simpler grammars. In the EE world, this idea is doing amazing things like recovering signals that are sampled below the Nyquist limit (compressed sensing).

    I also have to recommend the Stanford ML class. It was a lot of fun. The online AI class was also good and covers how you use these models to make decisions in the presence of uncertainty.
  • AIman Posts: 531
    edited 2012-01-20 21:50
    For illustration's sake, let's say an IR sensor is hooked to a Propeller chip, and every second the distance to the object in front of the IR sensor is logged and recorded with a time stamp. Say a 2 GB thumb drive holds the information.

    As time progresses the log is updated enough times to be usable, and the information is averaged so that "it" knows that on a certain day of the week, at a certain time, the odds are that the nearest object will be at a certain distance.
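
    In Python, that scheme might look like this sketch (the sample log is invented): bucket the readings by (day, hour) and average each bucket.

        from collections import defaultdict

        log = [                       # invented (day, hour, distance) entries
            ("mon", 9, 42.0), ("mon", 9, 44.0), ("mon", 10, 80.0),
            ("tue", 9, 43.0), ("mon", 9, 40.0), ("mon", 10, 78.0),
        ]

        buckets = defaultdict(list)
        for day, hour, distance in log:
            buckets[(day, hour)].append(distance)

        def expected_distance(day, hour):
            seen = buckets.get((day, hour))
            return sum(seen) / len(seen) if seen else None   # None: no data yet

        print(expected_distance("mon", 9))   # 42.0
        print(expected_distance("wed", 9))   # None: never observed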

    Is that what you are trying to describe as how a computer learns?
  • ilovepi Posts: 9
    edited 2012-01-21 14:44
    AIman,

    That's certainly one model and update strategy. You're calculating P(ir_reading | time_dow) which you can read as "the probability of a specific ir reading given a particular time and day of week."

    Now let's make it more interesting. What if your sensor isn't always on? If you have no reading for a particular day, then on that day the next week you might predict the expected ir_reading as 0, which certainly isn't right. Just because the sensor hasn't observed a particular circumstance doesn't mean it doesn't happen. So there are many different schemes for smoothing and interpolating over missing data.
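
    One simple such scheme, as a Python sketch (the observations are invented): when a (day, hour) bucket is empty, back off to the average over the same hour on any day, and failing that to the global average.

        from collections import defaultdict

        buckets = defaultdict(list)    # invented observations
        for day, hour, dist in [("mon", 9, 42.0), ("tue", 9, 50.0)]:
            buckets[(day, hour)].append(dist)

        def smoothed(day, hour):
            exact = buckets.get((day, hour))
            if exact:
                return sum(exact) / len(exact)
            # Back off: same hour, any day of the week.
            same_hour = [d for (dy, hr), vals in buckets.items()
                         if hr == hour for d in vals]
            if same_hour:
                return sum(same_hour) / len(same_hour)
            # Last resort: the average over everything seen so far.
            all_vals = [d for vals in buckets.values() for d in vals]
            return sum(all_vals) / len(all_vals) if all_vals else None

        print(smoothed("wed", 9))   # 46.0: borrowed from mon/tue at 9:00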

    We can also give the model more predictive power by looking not only at the time/day-of-week but also at the last N hours. So we calculate P(ir_reading | time_dow, ir_reading 1 hour ago, ir_reading 2 hours ago, ..., ir_reading N hours ago), which you can read as "the probability of a specific ir reading given a particular time/day-of-week and the history of the last N ir readings". That gives us recordings of the sort:

    ir_reading, time_dow, 1 hour ago, 2 hours ago, ..., N hours ago
    4, mon 9:00, 5, 6, 7
    3, mon 10:00, 2, 3, 6
    7, mon 11:00, 5, 3, 2
    1, tue 12:00, 4, 2, 6
    10, tue 1:00, 8, 2, 5

    Because there are so many combinations of time_dow and history, it would take a lot of storage to make observations for each of them, and chances are we'd never observe most of them. In these cases, instead of making our predictions purely from our recorded observations, we can try to adjust a flexible model of the sort f(x) = mx+b from my previous post to mimic the observations we do have, and use that to generalize over situations we've never encountered before.

    Here we're taking quite a leap of faith: just because a model that we've tweaked to mimic the observed data is flexible enough to do so, doesn't mean it also does a good job in novel situations we haven't trained it on. For that reason, models are tweaked (the learning part) with only part of the observed data, and then tested using the omitted data. From that, we have a good idea how well it generalizes over circumstances that haven't happened yet.
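
    In Python, that holdout idea might look like this sketch (the noisy data, split sizes, and least-squares fit are invented for illustration):

        import random

        random.seed(1)
        # Invented noisy samples of a true line y = 2x + 1.
        data = [(x, 2.0 * x + 1.0 + random.uniform(-1, 1)) for x in range(40)]
        random.shuffle(data)
        train, test = data[:30], data[30:]   # tweak on 30, judge on 10

        # Least-squares fit of m and b on the training portion only.
        n = len(train)
        mx = sum(x for x, y in train) / n
        my = sum(y for x, y in train) / n
        m = (sum((x - mx) * (y - my) for x, y in train)
             / sum((x - mx) ** 2 for x, y in train))
        b = my - m * mx

        # Score on the held-out data the fit never saw.
        mse = sum((m * x + b - y) ** 2 for x, y in test) / len(test)
        print("held-out mean squared error:", round(mse, 3))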

    If you're interested in seeing concrete but accessible uses of machine learning, you may want to look at http://robowiki.net/ which archives strategies (including many history-based learning strategies) used to make bots to compete in a game called Robocode.
  • Humanoido Posts: 5,770
    edited 2012-01-21 23:12
    There are many different ways of learning depending on the challenge, both for humans and for machines. With computers, it's all about how the computer programmer decides to write the program. In AI, the idea is to give the machine some guidelines and let it make decisions and carry on.