It is easy to see what mallred has to deal with now; notice Dr. Jim getting agitated when he (mallred, I assume) tries to video the schematics in the vid. Funny.
and of course...Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a total mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.
That's really quite remarkable. Thanks for sharing it. Now I won't feel so bad about trnasposing letters when I type — another characteristic that seems to get worse as I get older.
I'm just thinking that's where the brain vs. the AI will be a struggle. We have other cues besides sound to make things clear. I work as a fire dispatcher in the summers, and I can tell you that what you hear and what is said can be quite different, depending on your past experience and what your brain is expecting to hear. Differentiating clutter from cross traffic is an art that I'm better at than my peers. But I fought fire where they did not, and as such can guess at what they are saying. For what it's worth...
The same thing that you describe happens when you become used to people talking in a different dialect or severe accent. Your brain is an amazing filter that can pick-up on the most subtle cues.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔ Beau Schwabe
IC Layout Engineer
Parallax, Inc.
Have you ever ridden in a small plane and listened to the control tower give instructions? I'm surprised anyone is able to land safely! How do pilots interpret such distorted gobbledygook?
> How do pilots interpret such distorted gobbledygook?
I remember sitting on a train with two men, obviously Indian, sitting facing me and talking. It took me a full ten minutes to realize they were speaking English. From that moment on I understood them.
Nick
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Never use force, just go for a bigger hammer!
The DIY Digital-Readout for mills, lathes etc.: YADRO
Phil Pilgrim (PhiPi) said...
Have you ever ridden in a small plane and listened to the control tower give instructions? I'm surprised anyone is able to land safely! How do pilots interpret such distorted gobbledygook?
-Phil
It's really easy after you do it for a while. It's just like the steps a program goes through. There are certain things which are relayed at a certain time (in a certain order), and the processor is looking for those things in order. A pilot does the same thing. Information is relayed in a very specific order, and the pilot is trained to listen for that information in the order it is presented. A new air traffic controller will often trip up a seasoned pilot, mostly because they sometimes stumble or relay information differently from the standard method.
It does take some time to learn to hear the information when talking to a tower. It realistically took me 8 months to finally catch on to what was being relayed and feel good about it. That's also why a pilot reads back what he or she "thinks" was heard: if there is some misinterpretation, the controller can correct the pilot.
The real key here is being able to predict a pattern, or knowing the order of events, as has been mentioned. I would bet that the Cambridge University study would fail if the jumbled words were presented to the control group as stand-alone words rather than as part of a sentence.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔ Beau Schwabe
IC Layout Engineer
Parallax, Inc.
(English) sentences are composed of "key" and "noise" words. The sense of a sentence (spoken or otherwise) can be recognised mostly from the key words. The keys provide the essence of what is being communicated; the noise (often) provides additional (or sometimes necessary) context. An algorithm that extracts and distills keys from "sentences" into "like" bit streams lends itself reasonably well to compression techniques and, combined with ECC codes, could provide an efficient method of storing "like" categories together. Given that ECC technologies are designed to replace or supply bits that are missing or in error in a data stream, this technique could be applied to a variety of technologies, speech recognition included.
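To make that concrete, here is a minimal Python sketch (illustrative only; the stop-word list and the 16-bit CRC folding are my own assumptions, it is not the algorithm described above, and the ECC part is not shown):

import zlib

NOISE = {"the", "a", "an", "of", "on", "in", "is", "was", "that", "and", "to", "had", "did"}

def keys(sentence):
    # "key" words are whatever is left after the assumed "noise" words are dropped
    return [w for w in sentence.lower().split() if w not in NOISE]

def key_stream(sentence):
    # fold each key into a 16-bit code; the mapping is many-to-one by design
    return [zlib.crc32(w.encode()) & 0xFFFF for w in keys(sentence)]

print(keys("The Cat sat on the mat"))          # ['cat', 'sat', 'mat']
print(key_stream("Matt had a Cat that sat"))   # three 16-bit codes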
I don't think these ideas of using such techniques are without merit for storage and even retrieval of audio information. I do find fault with the technologies in use today; for example, when I call my ISP about a technical problem I need to go through a ~15-minute dialogue, initiated at their end. It goes something like this:
Them: So do you have a problem with your ISP account?
Me: yes
(repeat between 2 and 4 times)
Them: I'm sorry, I did not understand your response, please answer "yes" or "no"
Me: yes
...
Them: OK, So we have determined you have a problem with your ISP account; is this correct?
Me: yes
...
Them: I'm sorry, I did not understand your response, please answer "yes" or "no"
...
Me: YES
Them: OK, so it seems you have a problem communicating, I'll pass you to one of our Engineers, is that OK?
Me: yes
Them: I'm sorry, I did not understand your response, it's now been some 20++ minutes since you called and I'm hopeful you are highly agitated so now I'll pass you to a Human who can then hang up on you because you will probably be rude because you are frustrated and annoyed...
I'm sure you get the idea. :) In any case, I can speed up the process of getting to talk to a Human by botching the responses - an "arrgghhgghh!!!" as a "yes" response seems to confuse it, so the AI at the other end determines it can't understand and gives up - that way, _I_ get somewhere.
Back on track; I sure hope this pans out, because I for one am sick of trying to communicate with VR that has a poor hit rate. As far as VR goes, I'm sure the Dr will succeed in some ways using his "IP" techniques for word recognition - I think his idea is workable insofar as words go, but as for interpretation of same, I'd like to see it work. How cool would it be to say "drive the car into the river" and have the AI "understand" "please park the car in the garage"? Of course driving the car into the river would be:
a) a bad idea
b) void your warranty
c) proof of a form of AI because it either "understood" or was able to "learn" the difference between a and b or finally
d) contravene a "Human Directive" because that's what you told it to do and it disobeyed in non-violation of a Prime Directive!
References:
1a. The Cat sat on the mat
1b. Matt had a Cat that sat
1c. On the mat, Matt's Cat sat
1d. Did a Cat on the mat sat
1e. Where did a Cat sat
1f. Where is the Cat of Matt
1g. The satellite has a catatonic state
1h. On which mat is Matt's Cat
1j. Which cat sat on the mat
1k. Why did Matt's matte Cat sat on a matte mat
1m. What is the difference between a Cat and a mat and a Matt or even a matte mat
The examples above provide both keys and noise; determining which is which, from what, where and how, is what the Dr needs to be able to demonstrate as "AI" WRT VR - proving this can be accomplished using
a) VR to text and/or text to speech and
b) interpretation between a Human and an AI/MI to perform either
1) conversation and/or
2) action
- then I'll consider some trust (somewhat). A pure, no-audience video will not be accepted, because it's too easy to fake and has already been done, so it would not be original.
Of course, some of the above is borderline - deliberately - because these are "pseudo" real examples of being able to filter the noise from the keys. Words are words, but what about context and meaning?
"Wordy Rappinghood"
...
Words are stupid,
Words are fun,
Words can put you on the run
...
Credit extended to the Band.
he1957 said...
... I sure hope this pans out because I for one am sick of trying to communicate with VR that has a poor hit rate.
It is altogether possible that institutions using phone-bots don't really want to help anyone at all ... there are solutions for this. Better for them to use keypad numbers, which are the most reasonable lowest common denominator. No doubt they would be in denial about their phone-bot investment, though, and would not relent.
If I had a choice of the "service provider" in question, I'd jump ship now.
They had that technology but decided it got me to a rep too quickly using a few (remembered) keypad presses;
... _that_ was unacceptable because they'd have to have staff to actually provide service!
I was reading my Communications of the ACM journal (or magazine?), and it had a news blurb about a new face recognition algorithm. For the most part, it was completely over my head. But the picture they provided showed a woman with sunglasses being compared to a database of women (all photos mug shots). It showed how the face was converted into some sort of pixelated blur that was completely unidentifiable as a face, but it matched. It seemed that they threw away information like Dr. Jim is doing, but it managed to have a very high accuracy.
SRLM said...
I was reading my Communications of the ACM journal ... it managed to have a very high accuracy.
I remember someone explaining the difference between precision and accuracy. It went like this:
Fred had a machine gun. He fired it at a target on a tree. All bullets missed the bulls-eye, but landed within 1mm of each other - that's precision, but not accuracy.
Jim had another machine gun. He fired it at a target on a tree. Bullets peppered the target, with one hitting the bulls eye, and about 100 surrounding the area. That's accurate, but not precise.
Why do I say this? One of the techniques Dr Jim is using (per one of his videos) is a hashing algorithm. Actually, Dr Jim calls it "CRC" and "Compression", but both descriptions are incorrect: a CRC could be used to calculate a hash, but is not itself a hash, nor what Dr Jim does; compression assumes the recovery of information. Hashing is a many-to-one mapping: many semantically different words and phrases may "hash" to one number.
Hashing algorithms are mathematically accurate; it is always true that "Hash(n) == Hash(n)". Most hashing algorithms are not precise, i.e. there are many cases of 'm' where "Hash(n) == Hash(m)". Cryptographic hashing algorithms tend to be both accurate and precise (by design). However, this is not what Dr Jim demonstrated in his video.
Thus, using an accurate but imprecise hashing algorithm, you might be able to accurately show that No == No and Yes == Yes. But you might also confuse "No" with "Row", "Sow", "Sew", "Flow", "Tow", etc., and "Yes" with "Guess", "Less", "Mess", etc.
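For illustration, a tiny Python example (mine, not Dr Jim's code) of an "accurate but imprecise" hash: equal inputs always hash equal, yet plenty of different words collide, whereas a cryptographic hash makes such accidental collisions vanishingly rare.

import hashlib

def weak_hash(word):
    # deliberately weak 8-bit hash: just sum the character codes
    return sum(ord(c) for c in word.upper()) & 0xFF

print(weak_hash("no"), weak_hash("on"))       # same value: a false hit ("imprecise")
print(weak_hash("yes") == weak_hash("yes"))   # always True ("accurate")

# SHA-256 on the same inputs: no accidental collision
print(hashlib.sha256(b"no").hexdigest() == hashlib.sha256(b"on").hexdigest())   # False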
I think the point of hashing, though, is to map similar values in the domain to disparate values in the range. That way hash "collisions" among similar words, like "row" and "sow", become less likely.
I don't mind his use of the word "compression", BTW. A hash is really just a lossy compression, like JPEG, but unlike LZW, which is lossless.
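And a quick contrast in Python (my example) between lossless compression and a hash treated as lossy compression: zlib round-trips the original text exactly, while the fixed-size digest cannot be reversed at all.

import zlib, hashlib

text = b"the cat sat on the mat " * 40
packed = zlib.compress(text)                  # lossless: decompress() recovers every byte
digest = hashlib.sha256(text).digest()        # "lossy": 32 bytes, no way back to the text

assert zlib.decompress(packed) == text
print(len(text), len(packed), len(digest))    # e.g. 920, a few dozen, 32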
Some hashes, like MD5, can be reversed, but only through ridiculously large lookup tables. Salting the value makes this impossible, though.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
propmod_us and propmod_1x1 are in stock. Only $30. PCB available for $5
Want to make projects and have Gadget Gangster sell them for you? propmod-us_ps_sd and propmod-1x1 are now available for use in your Gadget Gangster Projects.
Need to upload large images or movies for use in the forum? You can do so at uploader.propmodule.com for free.
There is more than one type of hash. A Wikipedia overview (YMMV) ...
A hash function can be computed over any kind of data. Some types are:
- Cryptographic hash function, a hash function suitable for use in information security
- Hash table, a data structure associating keys with values using a hash function
- Associative array, an abstract data type often implemented as a hash table
- Geometric hashing, a method for efficiently finding geometric objects of the same or similar shape
I thought mpark was talking about a hash table, which would have no problem doing a reverse lookup of MD5 or whatever as long as there was capacity for the key. In the thread context, I suppose an unchained hash would have value. What happens if the data being represented have completely different meanings? Would you "push" multiple meanings and "peek" the stack when required until something suitable comes up?
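One possible shape for that "push multiple meanings" idea, sketched in Python (an assumption on my part, not something from the thread): chain every meaning stored under a hash bucket, then walk the chain until a candidate fits the current context, falling back to the first entry (a potential false hit).

from collections import defaultdict

table = defaultdict(list)                       # bucket -> list of (word, meaning) pairs

def hash_bucket(word, buckets=256):
    return sum(ord(c) for c in word.upper()) % buckets   # deliberately weak hash

def push(word, meaning):
    table[hash_bucket(word)].append((word, meaning))

def peek(word, context):
    chain = table[hash_bucket(word)]
    for cand_word, meaning in chain:
        if cand_word == word and meaning in context:
            return meaning                      # a candidate that suits the context
    return chain[0][1] if chain else None       # otherwise the first meaning pushed

push("bank", "river bank")
push("bank", "money bank")
print(peek("bank", {"money bank"}))             # 'money bank'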
A hash function is a many-to-one function that has multiple uses. I.e., a hash table is not itself a hash, but is a use of a hash (even though some people call the hash table the hash).
A good hashing algorithm that uses a 32-bit word has no more than about a 1 in 2 billion chance of a false hit. A poor hashing algorithm will tend to have false hits for similar inputs.
A cryptographic hashing algorithm, such as SHA-256, has a very, very good distribution, such that the chance of a false hit is about one in 5x10^76 (for SHA-256; erm, pretty unlikely), along with other special properties that make it good for cryptographic applications (I'm quite happy to go into this, but it's not relevant here).
When using a hashing algorithm for searching, as in the application Dr Jim is using it for, some normalization will need to occur before applying the hashing algorithm. I'm confident Dr Jim can achieve a method of normalization that will increase the likelihood of hits, but at the cost of not being able to differentiate. An example of normalization in the text world is phonetic matching (e.g. "thick" and "fik" considered 'equal').
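As a toy example of that kind of normalization (not Dr Jim's method), a handful of crude spelling-to-sound rules in Python is enough to make "thick" and "fik" normalize to the same key; a real system would use something like Soundex or Metaphone:

# illustrative rules only - just enough to collapse a few sound-alike spellings
RULES = [("th", "f"), ("ck", "k"), ("ph", "f"), ("c", "k")]

def normalize(word):
    w = word.lower()
    for old, new in RULES:
        w = w.replace(old, new)
    return w

print(normalize("thick"), normalize("fik"))   # both 'fik', so they hash identically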
Using a hashing algorithm on top of the normalization helps with faster searches (i.e. a hash table). In a typical use of a hash table, the hashes reduce the search time, but a final compare is performed on the normalized or non-normalized data. Eliminating the final compare, which I believe is what Dr Jim intends, would cause words that hash to the same value to be considered equal even if they do not normalize to the same word (see the probabilities above).
FYI, I've attached the code listing c/o Dr Jim from http://www.youtube.com/watch?v=_MjudhviBVk showing an example hashing algorithm, which he has incorrectly denoted as "CRC". You can see an example of normalization, "Temp := Temp & $DF", as well as the hash algorithm, "CRC := (CRC <- 1) ^ Temp". It is academically interesting to note that with this particular algorithm only the last 32 characters are significant.
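For reference, here is a rough Python transcription of those two expressions (my reading of the listing, and it assumes "<-" is Spin's 32-bit rotate-left operator):

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def spin_style_hash(text):
    crc = 0
    for ch in text:
        temp = ord(ch) & 0xDF          # Temp := Temp & $DF  (folds a..z onto A..Z)
        crc = rotl32(crc, 1) ^ temp    # CRC := (CRC <- 1) ^ Temp
    return crc

print(hex(spin_style_hash("YES")), hex(spin_style_hash("yes")))   # equal after normalization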
That made me spill some beer onto the screen (taken from the M.I.T. blog):
"The full version will require a modified Propeller Proto USB board, at least one 2MB memory expansion board (two would be better), a datalogger, and will be able to support multi-terrabyte hard drives for permanent storage.
This amount of storage should be enough to allow for most and possibly all the words in all the spoken languages on earth to be stored. The system should be able to learn and respond to multiple languages."
They announced a free trial version of the voice recognition (limited to about ten words).
Nick
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Never use force, just go for a bigger hammer!
The DIY Digital-Readout for mills, lathes etc.: YADRO
^ There is a big difference between simply recognizing all those words and understanding them, or at least being capable enough to process them in terms of sentences. The best speech-to-text dictation apps still have some trouble.
FYI: the Motorola VCP200 voice recognizer IC, (C) VCPI 1988 (obsolete): support.radioshack.com/support_supplies/15365.htm
This chip was sold by Radio Shack 20 years ago. I was reminded of it by the post of the Goertzel learning demo, and also my brother bought some at a hamfest. It is speaker-independent and uses zero-crossing detection (primitive clipped analog/1-bit sound input). It can recognize yes, no, on, off, and some directional phrases, and was used in voice-controlled toy robots. It's copyrighted, so it must contain interesting software, and thus necessarily something to execute the software also, like ye mere olde little PIC1654, which has done a lot of similarly incredible and amazing things with only 1K in its time.
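To give a feel for that kind of 1-bit, zero-crossing front end, here is a toy Python sketch (not the VCP200's actual algorithm): keep only the sign of the waveform and count sign changes, a crude stand-in for frequency.

import math

def zero_crossings(samples):
    signs = [1 if s >= 0 else -1 for s in samples]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]   # 1 s of 440 Hz at 8 kHz
print(zero_crossings(tone))   # roughly 880: two crossings per cycle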
They do seem to have gone very quiet, I'd forgotten about them.
Here is their latest update:
By Dr. Jim on Nov 23, 2009 | In Applications
Dr. Jim continues to work on the Voice Recognition software. He is getting closer to completion. He has been out of town for a few weeks, but once he returns, Voice Recognition should be released soon after. Keep an eye on the blog for release information.
Mark Allred
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
How many steps further is that from learning single words?
mallred was talking about pidgin English.
Master:
no hope Mr. Jim robot understand man
command clear, robot nonsense
money back!
Robot:
Synaptical error at $BEE7FADE, rebooting now ...
Nick
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Never use force, just go for a bigger hammer!
The DIY Digital-Readout for mills, lathes etc.:
YADRO
BTW: those FET switches might alternatively allow experimenting with the Propeller sigma-delta A/D approach.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
--Steve
Propeller Tools
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
-Phil
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
machineinteltech.com/blog/blog1.php
It was posted a couple of days ago.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
The paper by the original researcher (with the images) can be found here: decision.csl.illinois.edu/~yima/psfile/Sparse_Vision.pdf
The website of the researcher is here: decision.csl.illinois.edu/~yima/Publication.html
Basically, it's probably possible to have something similar in the audio realm.