ARLO and Robotics Developer Studio
T3rr0rByte13
Posts: 29
Hi, I was thinking about buying an Eddie until I realized it's being phased out. I have heard that ARLO, while not officially supported, can work with Microsoft Robotics Developer Studio. I guess my questions are: 1. Are there any gotchas with using the ARLO for RDS? 2. Should I use the Propeller Activity Board or the Eddie Control Board, and what are the benefits of using one or the other? 3. Lastly, what other components would I need that are not shown in the diagram, e.g. batteries, a charger, and components for communication between the laptop and the Eddie Control Board/Propeller Activity Board? Thanks for the input!
Comments
NWCCTV: I program in C# and already have a component that will need to run in Windows. While I am still quite new to RDS, I should be able to figure it out (that's half the fun). I will ask Matt these questions about the control board.
The differences are substantial.
Encoders
============
Arlo has quadrature encoders with signals brought out for direct Propeller interfacing.
Eddie uses the Position Controller, a serial-interface co-processor.
Motors
============
Arlo motors are about 100 RPM, low-backlash, and support a tighter fit under the deck (more deck space available).
Eddie motors are about 150 RPM, higher backlash, and require more deck space since they protrude from the top.
Control Board
============
Eddie has plugs for everything needed. Motor drivers on the board.
Arlo is a choose-your-own-adventure control board system at present. Use the Propeller Activity Board with HB-25s.
If you want a Microsoft RDS robot, certainly choose the Eddie. While some of the Arlo features are truly preferred, we've not written RDS-compatible firmware to turn the Arlo setup into something RDS will recognize. It's entirely possible, but not practical.
If you go the Arlo route, you will need two batteries. We suggest two 12 V 7 Ah SLAs wired in parallel. They fit perfectly under the deck. You'll have to do a bit of work to rig up your own switch plate system.
The parts you see from Arlo are stable and won't be changing. This results in the à la carte presentation that you see today. What's missing is a switch plate and an Arlo-specific control board for an out-of-the-box experience that compares to Eddie. The benefit of the approach you see today, however, is the flexibility to choose some of your own parts without us making all of the decisions for you.
Unless you want a PC in this mix, choose Arlo!
For me, the two main reasons to use RDS are Kinect, and the ability to run off-the-shelf vision tools in Windows. Neither Kinect nor vision *require* Windows, but it's pretty much plug-and-play on the Windows platform. If you have something else that needs Windows, that's all the more reason, but I personally prefer using Windows only in a supervisory role, and offloading everything else to the robot's core processor, which has the benefit of operating in real-time.
The Eddie control board is clever and useful, but keep in mind that most any Propeller board will do. You could start with a $50 Activity Board, which has some terrific features already built in. You don't NEED the Eddie board, and may prefer developing your own. Eddie contains the Prop plus the motor control bridges, which is nice, but I always worry about burning things out when connecting up high-amperage motors. I'd rather toast a $50 HB-25 than a $250 board. That said, the HB-25s do everything in their power to keep you from toasting them. They don't even get lightly brown on one side...
I think you need to start by asking Microsoft if there's a path forward. They haven't updated the RDS site (news included) since June of last year.
Whether or not any of this has bearing on Arlo I can't say, as I'm not in that inner-circle. But there have been concerns voiced within the community re: updates and ongoing commitment regarding RDS. I think anyone supporting the platform needs to be diligent regarding their time and other investment.
This is a fine question, Gordon.
Perhaps they just can't make any profit: everyone who wants to use it is using the free community version, but they can't make any sales of the software to commercial companies. I find it a bit odd, though, that they have kept their site up (while it doesn't take much to keep a site running); perhaps this is one of those things that needs to sit for a while until a commercial demand for the software can be found. That raises a question for me: is there really even a need for PC-based robotics? I can find a use for it, but can anyone else? For me it's not really controlling the robot through Robotics Studio that interests me, but the convenience of having a PC-based program (that could run without a robot) and giving it a physical presence. Ultimately my robot is just going to move in all four directions and avoid obstacles. Maybe I'm just one of the odd ones out on this one?
Is there really a place for PC robotics? I think so, but the real practical market need is probably more or less satisfied by embedded 32-bit processors doing simpler functions than the skeleton tracking and advanced blob depth perception that are possible with the Kinect plus a PC. However, the Kinect has made its way into an astounding number of applications well beyond what was originally intended for video games. Vision is one of the game changers that currently warrants a PC. And some complex processing is probably best done on a PC.
My opinion is that if there's a place for a PC in robotics, it probably doesn't require a PC at all, because the market needs determine the kind of technology used to satisfy the market. Take telepresence as an example. This could be accomplished with a Slate/Android/iPad *tablet* with some serial I/O for the sensors. And this tablet would run Skype, and with a Skype API people should be able to attend a meeting and roam around an office "in" a robot. This kind of semi-autonomous control would rely on the robot's sensors to avoid people and doors, and maybe steps, but basically allow somebody to navigate an office and attend a meeting. It could require a PC, but could be done with something less, too.
I think Skype is the tool that Microsoft should be capitalizing on in robotics.
I guess there's another audience for using a PC to do robotics - people who want to program on a PC and don't want to get into electronics, sensors and the embedded stuff and just treat it like a serial stream of data. I mean, heck, if you want to program on a PC, then program on a PC. There's no right or wrong way to do anything anymore, and you're pursuing this interest I assume for personal reasons rather than mass manufacturing of a product. There's nothing wrong with doing what you want.
Most people you'll encounter on this forum have an embedded view first. There are some PC software developers, too, but not a whole lot of them.
You are not alone.
However, looking down the road at your project: will your robot need to know which of the four directions it is going? Do you want to know if it's N, S, E, or W, or just whether you've turned 180 degrees from where you were? If the former, you will soon need a compass. Then you'll need to modify some of the Eddie Propeller firmware and how RDS handles it. But if you just want to use the Position Controller to avoid objects, you should be able to use the default firmware.
http://www.leafproject.org/
Alex (guy on the right) devised the PC-to-robot interface using an HC-11. That's a fine processor, but I think the Propeller is better for any RTOS robotics application. The interface between the two is USB serial. The Leaf software communicates to the controller via message chunks. Robin (woman on the left) has written some very clever non-Kinect vision routines for doing things like fairly sophisticated object detection.
These are HUGE robots because at least in the case of Bruce's application, they are for physical interaction with children, so they are about the size of a small person. Your robot need not be as large. If you go the PC route, you can get netbooks that are fairly robust and lightweight. The laptops used on these prototypes were from several generations ago.
Ken makes excellent points regarding whether you want or need a full Windows PC. There's certainly a comfort in writing in C# or VB if you already write in these languages for a day job. But we're to the point now where we really don't need all the overhead of Windows just to steer a robot.
I think it would be worth porting the Eddie firmware to the new Activity Board and just updating the motor control code to drive the HB-25s. That way you could still leverage the new Arlo platform as a replacement for Eddie. It seems the only missing piece is the firmware to make RDS think it is talking to an Eddie board. If it is an issue of resources, put a bounty on the project and offer it up on the forums for the community to help with.
Robert
For larger autonomous robots it can be very useful to leverage the power of a PC on-board. It can handle tasks that are often beyond what someone would typically do in an embedded system. The embedded systems can handle all the lower level and interface tasks that the PC wouldn't be well suited for.
I have LEAF running as the brain for one of my very large robots, and so far it has been working out great. It has the HC11 board to control the drive and many sensors, but has an extra Propeller board to control all the LED lighting and some extra sensors I have on the robot. I've always thought that it wouldn't be too hard to write some firmware on the Propeller to have it emulate what the HC11 LEAF board does and use that instead. It would probably cost less and open up projects like LEAF to more people. Also, now that the Eddie and Arlo bases have good quadrature encoders, that base could end up being the standard for LEAF-based robots. Ideally those could be the classic HC11-based board or a new Propeller board emulating it. Take that a step further: if someone wants to run RDS, they could either re-load the firmware on the Propeller controller or perhaps write an interface to let RDS leverage the LEAF board (and also emulated LEAF on the Propeller) so that a person could launch LEAF or RDS. That would really be cool and useful. Having the laptop on board makes it easier to use speech recognition, custom voices (my robot uses a UK English voice), vision (Kinect or other), and higher-level AI software, and to leverage the Internet when that connection is available.
Having the new Arlo base configured so it is easy to use stand alone and leverage the existing ActivityBot material is great. However if there isn't too much extra work involved to let it ALSO work as what I would consider an intelligent peripheral for a PC or laptop then that should seriously be considered.
Robert
I took a quick peek at Alex's code and it looks like only two files really need to be re-engineered to make a Leaf emulator on the Propeller. These are ProcessCmds.c and USB.c. These are the files concerned with receiving and dispatching messages from the host PC. If a Propeller-based variant used PropC, the syntax could remain largely intact. For the other functions, like reading sonar, accelerometers, etc., those already have OBEX or other drivers available, and could be retrofitted to provide their data in a format for the message dispatcher. Any code related to interrupts, timers, and other hardware in the HC-11/12 would be dropped completely, as there is no need for it.
Anyway, I've long thought the Leaf Project and the Prop would make a good team, but haven't personally had the time to pursue it. Just about everything except RoboRealm on the Leaf side is already open source. Or you can use OpenCV if you're so inclined, or if you know Windows DirectShow (like I do!), you can write your own frame-grabbing and vision analysis routines. The CodeProject is filled with very nice vision analysis algorithms.
All this is to say I'm not knocking RDS, as I think it put some gravitas to the notion of PC-based robotics. But there's so much else out there with great functionality, it's worth looking at the alternatives.
I think that if the same FTDI USB-to-parallel adapter module were used, then the existing Nav and Control code on the PC wouldn't know the difference.
In case you haven't seen it here is an old clip of my LEAF based robot. It was all put into the shell of an old DC-2 robot by Android Amusement. The robot was an empty shell when I found it and the robot is a lot smarter now....
http://www.youtube.com/watch?v=F83TwxJ9FJo
I'm in the process of rebuilding the laptop with Windows 8.1 which I think may actually work well with the touchscreen in the front of the robot. I haven't tried RDS myself but eventually may want to load it on the system as well and may try to write a driver that will allow it to talk to the LEAF board so I could launch either one.
http://whiteboxrobotics.com/index.html
http://images.search.yahoo.com/search/images;_ylt=AwrTcYDAH3hS7FQAegijzbkF?p=914+pc+bot&fr=&ei=utf-8&n=30&x=wrt
Robert, Nice video. Most any USB to serial adapter should work. I think (but didn't look closely) Alex is receiving message blocks as serial commands, then processing them in a case statement. Or, if he's reading parallel data from the USB module, then that's something to change. That might have been a design feature due to parts availability and cost back then. The Leaf software would (ideally) be unchanged.
Erco, I looked at the White Box robots MANY years ago. I just checked again, and none of the dealer links work. Last News and Events dated "March, 2009." Oh, well.
It was originally touted at a fairly low price point, then got progressively more expensive. In a 2007-ish Sharper Image economy, it might have worked, but who has that kind of money these days?
So true. Back then I was getting 5% interest from any number of online savings banks. Now the best is 0.90%! Where has all the money gone?