Using the Say-It Module 30080 with the Propeller??? Deal of the Day
WBA Consulting
Posts: 2,934
After seeing the Say It Module up today as the Deal of the Day, I am considering getting one for a project idea I have for my 4 year old daughter. However, I would like to use it with the Propeller, but I do not see any documentation or threads to support that. After looking at the datasheet, it appears that the Say It Module is geared specifically toward the BS2.
Will I be starting from scratch if I try to use the Say It Module with the Propeller?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Andrew Williams
WBA Consulting
WBA-TH1M Sensirion SHT11 Module
Special Olympics Polar Bear Plunge, Mar 20, 2010
Comments
From a software standpoint, you're mostly starting from scratch, particularly since there's no document describing the command/response codes for the Say It Module. The communication between the Module and a Propeller is straightforward serial I/O, and any of the existing serial I/O objects could be used. You could pretty easily translate the existing demo program from PBasic to Spin. Unfortunately, you couldn't use the GUI with a Propeller since the "bridge" program source is not available and there's no equivalent program for the Propeller. You could train the Say It Module using the GUI and a Stamp, then use it with a Propeller. You might be able to do some experimenting to figure out most of the training codes from the PBasic demo program, which seems to use some of them.
The documentation provided does have a table of commands that can be used to control the Module using any of the existing serial I/O objects. You can't use the provided GUI with the Propeller, but you could take the generated PBasic code and use that for a template for your Propeller application.
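In other words, the serial side only takes a few lines of Spin. Here's a minimal sketch using the FullDuplexSerial object from the Propeller Library; the pin numbers and the command byte are placeholders, not anything from the Say It documentation:

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000

  SAYIT_RX = 0                          ' Propeller pin wired to the module's data out
  SAYIT_TX = 1                          ' Propeller pin wired to the module's data in

OBJ
  sayIt : "FullDuplexSerial"

PUB Main | c
  sayIt.start(SAYIT_RX, SAYIT_TX, 0, 9_600)   ' 9600 baud is the module's default
  sayIt.tx("b")                               ' commands are single characters
  c := sayIt.rxtime(500)                      ' reply byte, or -1 after 500 ms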
Post Edited (Mike Green) : 3/2/2010 7:10:22 PM GMT
Mike - Aren’t the commands listed on page 15 of the pdf enough to use the Say It Module?
Off to order a Say It Module.
Duane
If anybody does write something - be a pal and share it with us. I'm kinda new at the Prop, and rewriting the BS2 program into Spin would probably see me well into my sixties.
Cheers.
thanks!
Jeff
I purchased a couple of these modules when they were the Deal of the Day and I'm just getting around to using them.
I thought I'd add the Say It Module to the wireless controller of my most recent robot project.
If someone else has some Propeller code and is willing to share it, I'd prefer to have a head start.
I'll be sure and post any (working) code I come up with.
Are there still others wanting to use a Say It with the Prop?
Now you can still use the pre-programmed speaker independent commands with the Propeller chip, or use a Basic Stamp to train the speaker dependent commands and then use the Say-It with a Propeller. The Say-It serial protocol is fairly simple to use and translating a GUI generated program from PBasic to Spin shouldn't be hard.
The GUI can also generate the PBasic source for the bridge program, so it might be possible to translate that program into Spin and use the Propeller to program the Say-It. But that's for advanced users.
I was just wondering if anyone has something I could start from.
Duane
The GUI lets you train the Say-It with additional commands, but only when the Say-It is connected to a supported uController in a supported configuration. For example I added commands like North, East, South, and West to allow me to align the robot to a compass. But even without additional commands the Say-It is pretty neat. It occurred to me that the Say-It coupled with the Propeller chip is more powerful than using it with a Basic Stamp or Arduino. This is because one cog could always tend the Say-It and allow for asynchronous verbal commands.
The manual lists all the commands and parameters needed to program speaker dependent commands.
I believe the GUI just makes it easier to add the additional commands. I don't think there is anything the GUI can do that can't be done by having the Propeller give the appropriate command.
I do have several Basic Stamps. I just want to see if I can come up with a Propeller-only solution.
Yeah, that's what I'm leaning toward. So far I've written code to wake up the Say It and to have it use English. I still have a ways to go.
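Roughly, the wake-up and language code amounts to this (a sketch, assuming the VRbot-style protocol where "b" is the break/wake command, "l" selects the language, arguments are encoded as "A" plus the value, and "o" means success; sayIt is a FullDuplexSerial instance):

PUB WakeSayIt : ok
  repeat 10                             ' try up to ten times
    sayIt.rxflush
    sayIt.tx("b")                       ' break/wake command
    if sayIt.rxtime(200) == "o"         ' "o" = success
      return true
  return false

PUB UseEnglish : ok
  sayIt.tx("l")                         ' set-language command
  sayIt.tx("A")                         ' "A" encodes language index 0 (English)
  return sayIt.rxtime(200) == "o"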
It turns out making a bridge program for the Propeller is relatively easy. (You can still call me an advanced user if you want.)
I've attached a simple bridge program. I've had to disconnect and reconnect the Say It from the Prop after the program has started in order to get the GUI to find the Say It. I also have to try connecting several times from the GUI before it is successful.
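The whole idea of a bridge is byte-for-byte relaying between two serial ports. This isn't the attached program, just a sketch of the concept (the Say It pins are assumed; P31/P30 are the Propeller's programming port):

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000

OBJ
  pc    : "FullDuplexSerial"            ' programming port to the PC GUI
  sayIt : "FullDuplexSerial"            ' second port to the Say It

PUB Main | c
  pc.start(31, 30, 0, 9_600)
  sayIt.start(0, 1, 0, 9_600)           ' adjust to your Say It wiring
  repeat                                ' shuttle bytes in both directions
    if (c := pc.rxcheck) => 0
      sayIt.tx(c)
    if (c := sayIt.rxcheck) => 0
      pc.tx(c)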
I'm also writing a more advanced/useful demo for using the Say It. By using a second serial line to the computer, I can have both the GUI up and the PST connected at the same time. My program listens in on the conversation between the computer and Say It and displays the conversation to the terminal window. This has been very useful in seeing the protocol in action.
BTW, the Say It works fine with 3.3V.
One thing that wasn't clear from just reading the manual was that the different wordsets don't have to be used in any particular order.
You can kind of think of the wordsets as button pads. Only one button pad (wordset) can be active at a time but your program determines what happens when one of the words in the set is said. Just like you can program any behavior from a press of a button. Your program can either switch to a different wordset or take some other action.
I had thought the trigger word was to wake up the Say It. This is not the case. You could use any of the wordsets as the first set of words the Say It listens for. This makes for many, many possible combinations of words.
I programmed the Say It with Calculator, plus, minus, times, divided_by, equals and digits. I figure I can use the Say It to make a verbal calculator. I'll use the word "digits" to have the Propeller switch the Say It to wordset3 (the numbers zero through ten). I'll say the digits into the Say It and use the word "ten" to indicate I'm through entering digits. I figure I don't need "ten" to be a number since I can use "one" and "zero". I think this will be fun.
I'll post my other code when I have it working better. If anyone else is using the Say It with the Propeller and would like to see the other code as it is now, let me know. I'm always willing to share my code even if it's a work in progress.
Duane
Edit(3/11/15): Warning, the code attached is an old version. There are better options available.
I plan to upload this program or an improved version to my GitHub account.
If there isn't a replacement on GitHub send me a message and I'll make sure to upload the replacement code.
The attached code is very buggy but it does have some useful stuff.
If you use a Prop Plug as a second serial connection to the Prop, you can use the second connection as a bridge to the GUI and listen in on the conversation with a terminal program.
There is also a feature to record the GUI/Say It conversation to EEPROM. I don't think this record feature is needed for now. I included it so one could record the information if only using one serial line. I haven't been able to get the bridge to work using the same line as the debug line.
My suggestion to anyone trying this program is to type "w" at the menu to wake the Say It up. Then type "o" and enter "0" to change the timeout to infinity. Then select "t" from the menu to have the Say It listen for the trigger word.
Once the Say It hears the trigger, the terminal will prompt you with a list of words of the current wordset it is listening for. The program will then lead you through the various branches of options depending on which word is spoken. Many of the options end with needing to say three digits. The idea is that if you want your robot to turn right, you would then give the angle to indicate how much to turn.
If you try it out, I hope you let me know.
There are all sorts of unused variables. The code is horribly inefficient. I normally would want to rewrite this but I just don't have time right now.
I don't make use of any speaker dependent word groups yet.
Hopefully this will be useful to someone.
The Say It module is a lot of fun to play with.
Duane
P.S. The program writes the value of one variable to the EEPROM the first time the program is run. There are several menu items that will write to the EEPROM. These menu items are marked with an "*". If you do use the EEPROM menu options, the program assumes you have a 64 KB EEPROM on your board.
Edit(3/11/15): Warning, the code attached is an old version. There are better options available.
I plan to upload this program or an improved version to my GitHub account.
If there isn't a replacement on GitHub send me a message and I'll make sure to upload the replacement code.
The GUI and BS2 sample program may be found here.
I was just reviewing the program attached to post #16 of this thread. It looks like it might be a useful program if it didn't require a second com line to the PC (via Prop Plug) to work. I'd like to fix this 20-month-old code today if I have time, but my day job has decided to get in the way of my bot building.
I think the code in post #16 gives an example of the way the SayIt works. You need to specify to the SayIt which word group to listen for, and then the program needs to decide if the word group should be changed or not based on the word spoken. It looks like the main decision making is taking place in the "FindBranch" method and the "TakeAction" method.
I haven't used the SayIt much yet. I keep intending to incorporate it into a robot or a robot remote. I still intend to do so.
While I don't have a lot of experience using the SayIt, I think I understand generally how it works, so if anyone has questions about using it with a Propeller, I'll try to answer them.
The SimpleBridge program attached to post #15 works as a bridge between the SayIt and the GUI program.
I removed the bridge section of the program I attached to post #16. I also simplified the program a bit and hopefully made it a bit easier to understand. I've attached the newer version to this post.
Here's the output to the terminal window of my recent use of this program. I'm going to break the output into small parts to make it easier to explain what's going on. The program waits for a keypress before starting.
It then displays a menu of choices. The SayIt needs to be awakened before use, so one of the first options to choose should be "w".
The SayIt woke on the second attempt. The menu was then displayed again which I'll omit here.
I then chose "t" from the menu to have the SayIt listen for the trigger word.
After the above was displayed, the menu was also displayed. The menu is displayed frequently while the program is running. I'll omit future main menu output without comment from now on.
At this point I said the word "robot". If I were to wait longer than the timeout period, I'd receive an error message and I'd again need to select "t" from the menu before continuing. The timeout amount can be changed by selecting "o" from the main menu.
Since I said "robot" within the timeout period, the following was displayed.
I then said the word "attack".
I then said "up".
The program accepts three-digit numbers. I might switch to using the word "ten" and an end-of-number indicator to allow the number of digits to be variable. For the program as it is now, I need to say three digits. Each digit needs to be said separately from the others. I said the numbers "7", "6" and "3".
Here's the output while saying these numbers (I've cut some of what was displayed).
Once the SayIt had heard all three numbers, it could then act on the command. In this case the action is just to display what the command was, but if I were to use this with a robot, I could have the robot take some preprogrammed action. I just need to decide what the robot should do when it hears "Attack Up 763".
Here's the remainder of the output.
The robot went back to waiting for the trigger word.
The microcontroller (uC) needs to keep track of which word group the SayIt is listening for and the uC also needs to decide if the word group should be changed or not once the SayIt has heard a word.
You can kind of think of the SayIt as a touch screen device with changing menu options. The welcome screen would have just the word "Robot" on it. Once "Robot" is selected, another menu appears with the options:
"action"
"move"
"turn"
"run"
"look"
"attack"
"stop"
"hello"
The program needs to decide which menu (word set) to present (or listen for). When one of the above words is spoken, you could either have the program command the SayIt to listen to word set 2 or word set 3 or go back to listening for "robot".
The SayIt doesn't make decisions. It just tells the uC which word of the word set has been spoken. It's up to you as the programmer to decide which actions should happen based on which word was spoken. This includes needing to program the uC to send a command to SayIt to start listening for whichever word set you decide should be next.
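In Spin that decision making boils down to a case statement. A hypothetical sketch for word set #1 as listed above, where ListenForSet is an assumed helper that commands the SayIt to start listening to a given word set:

PUB TakeAction(wordIndex)
  case wordIndex
    0:                                  ' "action"
      ListenForSet(2)                   ' present the next "menu"
    5:                                  ' "attack"
      ListenForSet(3)                   ' collect digits next
    6:                                  ' "stop"
      ListenForSet(0)                   ' back to waiting for "robot"
    other:
      ListenForSet(1)                   ' stay on this menu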
I listed word set #1 above. Word set #0 is just the trigger word "robot". There are also word sets #2 and #3. Here are the word sets as I've listed them in my program.
I included trailing zeros so each word takes up nine bytes of memory. This makes it easier to display the word based on its assigned number.
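Reconstructed from that description (this isn't my actual listing), the table looks something like this, with a word's text found at @wordSet1 + 9 * wordIndex:

DAT
wordSet1  byte "action", 0, 0, 0
          byte "move", 0, 0, 0, 0, 0
          byte "turn", 0, 0, 0, 0, 0
          byte "run", 0, 0, 0, 0, 0, 0
          byte "look", 0, 0, 0, 0, 0
          byte "attack", 0, 0, 0
          byte "stop", 0, 0, 0, 0, 0
          byte "hello", 0, 0, 0, 0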
By using speaker dependent words the number of possible branches gets very large. Instead of "word sets" the speaker dependent words are grouped together in "groups". It's possible to define speaker dependent words using a uC but it would probably be easier to implement speaker dependent words and groups by entering them in through the GUI and then listing these groups and words within your program.
I haven't used speaker dependent words from within my program yet. I think the commands to send to the SayIt are a bit different but the concept is the same as with using word sets.
As I previously mentioned, I haven't used the SayIt much, but I think I've got a basic understanding of how it works, and I'd be glad to help out with questions if I can.
There's a bit of an issue with the labels on the SayIt. The TX and RX labels on the SayIt correspond to the TX and RX of the uC, not the SayIt module. The diagram in the manual adds to the confusion. Here's my annotated diagram.
Edit(3/11/15): Warning, the code attached is an old version. There are better options available.
I plan to upload this program or an improved version to my GitHub account.
If there isn't a replacement on GitHub send me a message and I'll make sure to upload the replacement code.
You're just three years too late. To the day apparently (kind of weird).
Parallax hasn't sold SayIt modules for a while. I received a PM asking about my bridge program so I thought I'd update the program I had been working on. Deal of the Day, those were the days. I purchased a lot of stuff through DoD.
While the SayIt isn't around anymore, the module the SayIt used is still for sale. Here it is at the Robot Shop as EasyVR. I actually like the EasyVR form factor better than the SayIt's form factor.
Since the GUI program uses the com port to the Propeller, an additional serial connection to the Propeller is required to monitor the exchange between PC and SayIt. I used a Prop Plug attached to P18 and P19.
I found the PC GUI program uses commands not listed in the SayIt documentation. I don't think it's important to understand these extra commands, but I think it's interesting they exist.
I monitored the exchange between PC and SayIt as I experimented with speaker dependent commands. For example, this is the exchange to rename the command in group #1 index #1 to "KITCHEN":
While the command to change a name is given in the SayIt documentation, I found it very helpful to see a few actual examples of the code in use.
Here's the command to add the command "HELP" to group #2 index #0 (the group was previously empty):
You can see in the case where there wasn't already a command for that group, the command had to be created. The documentation states the command "g" is used to insert a new speaker dependent command. The following two characters identify the group index and command position.
After creating and naming the new command, I clicked "train" and followed the GUI prompts. The training exchange was:
When I attempted to test group 2 and didn't speak soon enough, I received an error message from the GUI. Here's the exchange:
In the above example, the characters "<$20>" represent the space character. I wanted to make sure each character was displayed in the terminal window. Any characters with ASCII values of 32 or less, or 127 and above, were displayed as their hexadecimal values.
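The display logic amounts to this (a sketch, assuming a Parallax Serial Terminal style object named pst):

PRI ShowByte(c)
  if c > 32 and c < 127                 ' normal printable ASCII
    pst.Char(c)
  else                                  ' show everything else as <$xx>
    pst.Str(string("<$"))
    pst.Hex(c, 2)
    pst.Char(">")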
Here's another attempt at using group #2 in which the SayIt successfully identified the command with index #7:
I hope it's clear where the displayed data originated.
The program adds the heading "PC: " to data received from the PC GUI program and the program adds "Say It: " to data from the SayIt module.
I think the attached program is useful in understanding the commands expected by the SayIt and also makes it clear what data to expect to receive back from the SayIt when commands are issued.
I plan to add some speaker dependent commands to the previously posted SayItAlpha program and share the updated version here.
If anyone else is using the SayIt with the Propeller, I hope they share their experience with the rest of us.
Edit(3/6/13): Some of the output didn't copy correctly. I've fixed it.
Edit(3/11/15): Warning, the code attached is an old version. There are likely better options available.
I plan to upload this program or an improved version to my GitHub account.
If there isn't similar code on my GitHub, send me a message and I'll check for any improved versions of the code.
I've been looking through the forums and have found a lot of things about the SayIt. I've tried to use this code with the EasyVR that I had purchased and used with my Arduino, but I recently switched over to the Propeller and have been trying to use it with that, with no luck. I have used the code provided on the Parallax site for the SayIt, but connected to the EasyVR. So far it only works with the built-in commands that go along with the Robot trigger, but when I try to use the words that I create, it does not work at all. Does anyone know where I may have gone wrong, or can anyone provide any kind of assistance?
Thanks
"no luck"? It sounds like you've had some? You found this thread right? That's lucky.
What code? Do you mean code posted in this thread? If so, which post?
I assume you're using the code from this thread. I don't recall if the code I posted included a menu item for speaker dependent (SD) words. If it did include a menu item for SD words, then it was more of a placeholder for a future feature than an included feature.
I'm just barely figuring out how to use SD words myself so I'm pretty sure there's not any SD features in the code I've posted so far.
So it sounds like your EasyVR is working as well as a SayIt with the code posted in this thread. Wow, what luck!
The SD words present additional challenges for a programmer since not only do you need to figure out what combination of words will do what but you also need to decide how many SD groups to use. There are lots of possible SD groups (something like 16) but you only get to define 32 (IIRC) SD words.
How many SD words should be in each group? How many groups should be used? I'm still trying to figure this out.
It seems like one group should be used to describe a general area of tasks. I was thinking of using words like "kitchen" and "yard" to indicate to the program which type of task I want the robot to perform. Once I select kitchen, do I want the SayIt (I'll use SayIt to mean either a SayIt or EasyVR) to listen to yet another SD group, or should it listen for one of the preprogrammed sets?
These are some of the things I'm trying to figure out. As I mentioned previously in this thread, the SayIt doesn't make decisions; it just tells the microcontroller (uC) which word of a group or set was heard. It's up to the uC to tell the SayIt which group or set should be active and what should happen when a specific word is said. Should a different group or set be selected next, or should it still listen in the same group/set?
The commands for using SD words are included in the SayIt documentation. The above "conversation" between SayIt and PC also gives an indication of which characters the uC should be expecting and which characters the uC will need to send to the SayIt in order to select a specific group.
If you have an FTDI device to use as a second serial connection between Propeller and PC, you could use the monitor program yourself to see which characters are passed between SayIt and PC.
I don't think I have any example code of using SD with the Prop yet. Once I have some sample code, I'll post it here.
If the commander program operates at the same baud as the SayIt, it should work. Otherwise, you'd just need to change the baud settings in the bridge program.
I had trouble with the tx and rx lines being switched on the SayIt. I don't know if the VR shield has the same issue or not.
It's possible to define words and train words with the Propeller as the input, but I think it's easier to just use the PC program to program the SD words and groups and then program the Propeller to watch for these words.
As I mentioned above, I think the trick is to figure out how many groups to use and what words should be assigned to which groups.
Wow, I hadn't ever seen that object.
Based on the description, it sounds like it does what the PBasic program does.
I'm betting I'll get to programming some code to use SD words within a week or so. I want to use the SayIt to control an automated pipette gizmo I've made. I used the SI words "turn" and "forward" to indicate I want the stepper motor to deliver some acid, but I think I'd rather use SD words like "fill" and "drops" to make the interface more intuitive.
My code "SayItAlpha" posted above has an option of listening for the trigger followed by listening for the other word sets. I think you need to use the initialize option before you can have the module listen for the trigger word.
The default timeout isn't very long. I'd suggest setting it to zero so it doesn't timeout at all. There's a menu option in the "SayItAlpha" program for setting the timeout.
I think one of the weaknesses of my "SayItAlpha" program is it's too complicated. I'll try to write some simplified example code to make it easier to see what is needed to interface with the SayIt. The simplified code will have fewer possible outcomes, since keeping track of which "branch" of the menu tree is active is what makes the code so complicated.
The PC program outputs some data that's not in the documentation. Any character that's not a normal printable character is displayed as a hexadecimal value; I add the brackets <$xx> to indicate it's a character outside the normal printable range. Space characters get displayed as hex since it would otherwise be hard to tell where they were. The "FF" value could either be one of these undocumented characters or noise on the line. I think the serial driver I used doesn't catch framing errors.
No, the SayIt or VRbot doesn't make any decisions on its own. It has to be instructed to listen for another set or group.
To switch to group 1, you have to send "dB" to the module. It will then output "r" if it recognizes the word spoken. The Prop then needs to send a space character to tell the module to continue. The module will then output a character identifying the index number of the word recognized. If the module outputs a "B", it means it heard the word with index #1.
The program then needs to decide what group or set the module should listen for next. If the module isn't instructed to change to a different group or set it will continue to monitor the same one.
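Put together, that whole exchange only takes a few lines of Spin. A sketch based on the protocol as described above (sayIt is a FullDuplexSerial instance; the recognition timeout here is arbitrary):

PUB RecognizeSD(group) : index | c
  sayIt.rxflush
  sayIt.tx("d")                         ' recognize-SD command...
  sayIt.tx("A" + group)                 ' ..."B" selects group 1
  if sayIt.rxtime(10_000) == "r"        ' "r" = a word was recognized
    sayIt.tx(" ")                       ' space asks for the result byte
    c := sayIt.rxtime(200)
    if c => "A"
      return c - "A"                    ' "B" decodes to index #1
  return -1                             ' error or timeout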