Any Compelling Raspberry Pi Robots?
The Raspberry Pi is neat and seems like a good value, but I haven't seen any compelling Raspberry Pi robots. A Google search tends to locate fairly basic robots that you could build with a microcontroller. With that kind of CPU power I'd hope to see some sort of machine vision using the camera module to control a robot arm, but I haven't seen any projects like that.
There's a fair amount of technology to interface the Raspberry Pi to microcontrollers (e.g. Propeller Hat and RoboPi) which shows there's interest. But so far most of the projects seem like a microcontroller could do it on its own, and the Raspberry Pi is being used as a programming environment or scripting language to talk to the microcontroller.
The reason I'm asking is that I don't own a Raspberry Pi and I am wondering if I am missing out on something really cool, or sticking with a PC and microcontroller is all I need to get the job done.
Comments
Seems to me that there is a natural division of functions between a microcontroller and something like the RPi when it comes to robots. Higher-level functions like vision, speech, and goal execution need the power a system like the RPi provides. A microcontroller (or several micros) like the Propeller is ideal for controlling the robot's hardware based on commands from the RPi.
As kninn points out, there are applications in which a R Pi could really be useful. I think the most common use of the R Pi on a robot is to stream video or images from the camera. While this is a useful ability (and one I plan to copy), I think some sort of wireless IP camera would also do this.
I'm pretty sure I've seen some good examples of R Pi based robots but they're not coming to my mind. I'll add links to the examples if I can remember them.
You don't need anything special to stream video, and it can be handled separately from the controller, so I'm not sure that's the killer app. I've looked at some of the vision processing possible with the Pi and have been underwhelmed. I'm familiar with processing on Windows (using DirectShow), but that knowledge only helps if I want to run some version of Windows, and I don't. The angst comes from this: vision libraries for Windows are so common that there's a constant tug to find some Windows solution. I think that's why running a Windows laptop with Eddie and a Kinect (with or without what's left of RDS) is still so compelling. People can just go out, write some very simple VB.NET code to connect up all these open source vision analysis filters, and go to town.
I think this all comes down to who the RPi is made for -- people who like fiddling with kernels and Ubuntu packages and various drivers they find on the odd GitHub. But the average robotics experimenter is not at that level. So instead they fall back to the fairly generic demonstrations of RPi-powered bots. Not that there aren't some truly fantastic robots out there using the Pi, but they're well beyond what your weekend robotics warrior is able to do. (And Martin, I'm not lumping you in that WRW group, as you and many others here are well beyond that; just making an observation of the kinds of videos I see on YouTube.)
http://www.mikronauts.com/robot-zoo/elf-2wd-pi-robot/
I am now working on getting h.264 streaming going.
Using a Pi on a robot is really only needed for higher end robotics experiments - vision and speech were already mentioned above - I'd add complex navigation and mapping issues, and any experiments where you need a lot of code/data.
With all respect to Gordon, I disagree about using Windows libraries and a laptop for vision experiments for most users, largely due to the costs involved, and for the size of the larger robotics platform needed to support the laptop etc.
Unfortunately I have not been able to do as much with my Pi bots as I'd like due to other commitments for my time, but I will be doing more as time allows.
It was intended to be an ultra cheap computer that kids could hack on and hopefully learn something about programming with.
Do you need a Pi to be the "brains" of your robot creation? Well, if it's a simple line follower probably not. If you have bigger plans then perhaps yes.
Round here the Pi makes a very good Prop Plug, at a similar price!
I have some C++ programs on the RPi 2 that give some useful response. Here is a video of my Stingray robot color-detecting an orange ball using the RPi camera module. The front arm tracks the ball vertically and the whole robot tracks it horizontally.
https://www.youtube.com/watch?v=26EfYeDxc3M
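Dave's detector is written in C++ and the code isn't reproduced in this thread, so here is a minimal Python/OpenCV sketch of the same basic idea: threshold the frame in HSV space and take the centroid of the matching pixels. The HSV range below is a rough guess for an orange ball and would need tuning for your lighting and target.

```python
import cv2
import numpy as np

# /dev/video0 -- the RPi camera via the uv4l-raspicam driver, or a USB webcam.
cap = cv2.VideoCapture(0)
# Property names below are the OpenCV 3+/4+ spellings; the 2.4-era API used
# in this thread spells them cv2.cv.CV_CAP_PROP_FRAME_WIDTH, etc.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

# Rough HSV range for an orange ball -- tune for your lighting and target.
LOWER = np.array([5, 120, 120])
UPPER = np.array([20, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m['m00'] > 0:                          # any matching pixels at all?
        cx = int(m['m10'] / m['m00'])         # centroid of the orange pixels
        cy = int(m['m01'] / m['m00'])
        cv2.rectangle(frame, (cx - 10, cy - 10), (cx + 10, cy + 10),
                      (255, 255, 255), 1)
        # cx/cy relative to frame center is the error a robot base (or a
        # pan/tilt arm, like Dave's) would try to drive to zero.
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == 27:           # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```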
But there are all these Windows XP laptops, you see, which are effectively free. They need a larger robot, to be sure, but projects like Leaf make all the files available in single downloads. They developed it some 10 years ago, so you know older hardware will still work. (This is one of the surprising benefits of DirectShow. The architecture works well even on hardware-limited computers. I got near 30 fps frame grabbing and basic color blob recognition on a 2005-era Toshiba with 512 MB of RAM. Go figure. This is all done using standard filters and a .NET wrapper called DirectShow.Net, which makes DS available to non-C++ apps.)
The ease of getting started makes all the difference. Willow Garage uses this approach, and it uses a fairly small turtle robot for a base. Eddie is on the large side for a robot that can move a laptop. They don't need to be that big, and some folks (schools, for some reason) like the bigger bots.
That said, it would be better to have it on an ARM-based card-sized controller. Does Microsoft make an XP Lite available for the Pi? (Not a serious question, but I bet people would get it for their robots with vision.)
Robots with two-way video can be more like the very funny Modern Family episode ("American Skyper") that was on a few weeks ago. In it, Phil was stuck in a hotel in Seattle but had an avatar bot, with his face on the main screen via Skype, roam through the house. The LCD screen was on a self-balancing robot base. No need for the video to pass through the microcontroller, which has its hands full doing navigation and balancing. (These kinds of bots, BTW, are what Willow Garage has morphed into.)
Dave, nice to hear you've successfully used Pi for video processing.
I agree that a Windows laptop can be a good solution, budget and robot size permitting - and I also agree that it is easier to implement.
I think where we disagree (somewhat) is that I like the tiny/small SBC approach, as it allows vision and more complex problem solving on a fairly small robot. I think our difference comes down to your preferring an easier .NET solution vs. my preferring a smaller SBC.
The rest of what I enjoy about RoboPi+RPi robots would work as well with a laptop - namely:
- ssh'ing into the bot for development
- RDP into the bot for a desktop
I find that being able to do the above REALLY helps the software development cycle vs. attach Prop Plug, download, detach Prop Plug, try it, rinse and repeat - but that of course would also work with a laptop.
Hmm... maybe I should make a bot for one of my Acer netbooks...
There is a reason why we need things like the Pi in the world.
Elf is currently being updated, I hope to get h.264 streaming working today!
Actually I just realized something.
The $25 5MP CSI camera is a pretty strong argument for the Raspberry Pi.
The flaw in saying that the RPi is only a bit more than a microcontroller board is what happens when you let the magic smoke out? A wiring mishap with a DIP microcontroller will set you back less than $10. But the RPi is all surface mount parts, and would make me more nervous about wiring mishaps. They aren't frequent, but they happen.
If I need one to develop a "Killer App", even a cheapskate like me won't lose sleep over blowing up a few $35 RasPis. If you wanna make an omelette...
Don't fail to see the forest for the trees. Why don't we see more vision-capable robots using these ARM-based processors? They're capable of it. Well, I think I know the reason. It has to do with simple human nature, and going with what you know.
I'm not a Pi user -- mainly just Propeller and Arduino -- but for all of you who are, and build robots with them, let's share links to your efforts, like Dave did with his video. (BTW, Bill, I know you go out of your way to demonstrate your findings. Bravo!)
The best way to convince someone is to show it in action. I applaud those who are willing to publish in print or the Web what they've done with their accumulated knowledge. Dave, is there any write-up on your color blob code?
Cue Erco to remind us that SERVO Magazine loves articles like these!
FYI, I just got h.264 streaming working from Elf. I stumbled a number of times following blog instructions that appear to be outdated, until I found a nice simple one that "just worked".
Now I have to figure out how to reduce the latency; I find 2 sec to be way too high. I do love the 24 fps @ 1296x972! (1/4 sensor resolution)
I wish that the raspivid command had a built-in h.264 streaming mode; that would greatly reduce latency.
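Bill doesn't link the recipe that "just worked", so treat the sketch below as a guess at the general shape rather than his actual setup: raspivid can write raw h.264 to stdout, and anything that forwards those bytes over TCP will do. The port number is an arbitrary assumption; the resolution flags match the 1296x972 @ 24 fps mode he mentions.

```python
# A guess at the usual raspivid-over-TCP pipeline, not necessarily the
# recipe Bill found. raspivid writes raw h.264 to stdout; we forward the
# bytes to whoever connects on port 5001 (an arbitrary choice).
import socket
import subprocess

RASPIVID = ['raspivid', '-t', '0',        # run until killed
            '-w', '1296', '-h', '972',    # 1/4 sensor resolution
            '-fps', '24',
            '-o', '-']                    # raw h.264 to stdout

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('0.0.0.0', 5001))
server.listen(1)
conn, _ = server.accept()                 # wait for the player to connect

cam = subprocess.Popen(RASPIVID, stdout=subprocess.PIPE)
try:
    while True:
        chunk = cam.stdout.read(4096)
        if not chunk:
            break
        conn.sendall(chunk)
finally:
    conn.close()
    cam.terminate()
```

On the desktop side, a player that understands raw h.264 over TCP can connect to port 5001; a good chunk of the 2-second latency Bill mentions often turns out to be player-side buffering.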
I did post the code somewhere but cannot remember where. Here is one of my latest detectors, tuned for red objects. It captures 200 x 140 video frames and displays the largest detected red object's x, y screen centroid location, area, and frames per second on the video, with a box over the detected object. It also sends out a 6-byte serial packet with the above information after each processed video frame. Then it is up to your robot what to do with that information.
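Dave doesn't spell out the byte layout of that packet, so the sketch below is made up purely to illustrate the idea: one small fixed-size packet per processed frame. The port name and layout are assumptions, not his actual protocol.

```python
# Serial side of a detector like Dave's. His actual 6-byte packet layout
# isn't posted in this thread, so this layout (sync byte, x, y, area split
# high/low, frame counter) is a made-up illustration.
import serial  # pyserial

link = serial.Serial('/dev/ttyAMA0', 115200)   # whatever UART your micro listens on

def send_blob(x, y, area, frame_no):
    """Pack the largest blob's centroid and area into 6 bytes and send it."""
    packet = bytearray([0xFF,                   # sync byte
                        x & 0xFF, y & 0xFF,     # centroid fits one byte each at 200x140
                        (area >> 8) & 0xFF,     # area in pixels, 16 bits big-endian
                        area & 0xFF,
                        frame_no & 0xFF])       # rolling frame counter
    link.write(bytes(packet))
```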
The Pi was created as an educational tool, a way to attract kids into the world of programming. It happens to support a camera really easily, both still and video. It can do wonders with GLES. It can handle most of what you need for networking: ssh, http, https, websockets, ntp, etc. With its Linux OS you can take your pick of languages to program in.
I would say that its abilities in the real-world, real-time interfacing department are limited. But that's why we have Bill and his Propeller boards for the Pi to offload that to. There are other examples.
Is that what you want for your robot? Maybe, maybe not. What are your goals and requirements?
Certainly there are examples of autonomous boats and so on that use the Pi.
It's all good fun; use it where it is appropriate.
I believe you can compile C++ programs from the above also; I just have not figured it out yet. Installing with the above terminal line gets you OpenCV 2.4.3, last I checked.
Right now I am building up a new card and do not have OpenCV access. But I will put back one of my prior images and see if I can come up with the Python version
of the detector program and post it here.
To use the RPi camera module (instead of a USB web cam) you need to install the uv4l-raspicam driver; the directions are here - http://www.linux-projects.org/modules/sections/index.php?op=viewarticle&artid=14
From the config file I change the camera output to 640 x 480 and set the no-preview option to yes. The Python program I am posting here puts up two 200 x 140 video windows: one showing the video with a small white box over the largest detected object's centroid, the other showing a binary masked image video. Sorry Martin for going O.T.
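Dave's Python listing itself isn't in the thread, so here is a sketch of what he describes: a red detector that picks the largest blob via findContours and shows both the annotated video and the binary mask. The HSV thresholds are guesses; red wraps around hue 0 in HSV, hence the two slices.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)   # uv4l-raspicam makes the CSI camera /dev/video0

# Red straddles hue 0, so threshold two slices and OR them. These ranges
# are guesses -- tune for your object and lighting.
LOW1, HIGH1 = np.array([0, 120, 80]),   np.array([8, 255, 255])
LOW2, HIGH2 = np.array([172, 120, 80]), np.array([179, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (200, 140))
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOW1, HIGH1) | cv2.inRange(hsv, LOW2, HIGH2)
    # The [-2] index picks out the contour list regardless of whether this
    # is OpenCV 2.4, 3.x, or 4.x, which return differently shaped tuples.
    contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        m = cv2.moments(biggest)
        if m['m00'] > 0:
            cx, cy = int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])
            cv2.rectangle(frame, (cx - 5, cy - 5), (cx + 5, cy + 5),
                          (255, 255, 255), 1)
    cv2.imshow('detected', frame)   # video with the small white box
    cv2.imshow('mask', mask)        # binary masked image
    if cv2.waitKey(1) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
```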
https://www.raspberrypi.org/forums/viewforum.php?f=105 It's all in a very preliminary release state as far as I can tell from reading that thread.
What sort of speed are you getting with this? Could you make something interactive and compelling for a Maker Faire type event or is it too slow?
After sending out the serial data with a 110x80 video screen I have seen it bounce between 20-30 FPS on a Pi 2. So if you need more speed, make the picture smaller.
I am using 110x80 video on an ActivityBot that has a pan/tilt servo mechanism with a camera mounted on the back of the bot. It keeps pretty good track of the object it is tracking.
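The tracking rule on the controller side can be as simple as a proportional nudge: move each servo a fraction of the blob's offset from image center every frame. On Dave's ActivityBot that logic presumably runs on the Propeller; the Python below is only an illustration, and the gain and pulse-width limits are made-up numbers.

```python
# Proportional pan/tilt tracking rule -- a sketch, not Dave's code.
FRAME_W, FRAME_H = 110, 80       # matches Dave's 110x80 video
GAIN = 0.3                       # fraction of the pixel error corrected per frame

pan_us, tilt_us = 1500, 1500     # start both servos centered (1.5 ms pulses)

def track(cx, cy):
    """Nudge servo pulse widths toward a blob centroid (cx, cy)."""
    global pan_us, tilt_us
    pan_us  += int(GAIN * (FRAME_W // 2 - cx))
    tilt_us += int(GAIN * (FRAME_H // 2 - cy))
    pan_us  = max(1000, min(2000, pan_us))    # clamp to a safe servo range
    tilt_us = max(1000, min(2000, tilt_us))
    return pan_us, tilt_us
```

A pure proportional rule like this will oscillate around the target if the gain is too high; halving GAIN is the usual first fix.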
A frame rate of 20-30 fps should be more than enough for object tracking, and is better than I had expected. Maybe not for playing ping-pong, but good for following a colored object.
I'm compelled!
Python with OpenCV does a really good job and is not much slower than C++; it will provide useful speed. Also, I should point out that both the C++ and Python programs I posted here are written expecting /dev/video0 to be the camera. If you plug in a USB webcam and fire up the RPi, the programs will use it instead of the RPi camera module.
The earlier version of OpenCV only grabs a small picture from the camera, which looks pixelated when resized bigger. I knew there was a reason I took the time to build OpenCV for a later version that works properly with the camera.
I have been using OpenCV 2.4.9 and am getting ready to build OpenCV 2.4.11. It is not hard after doing it a few times. If anybody wants to know how I build it, start a new thread and I can guide you through Robert Castle's method of installing OpenCV on the Pi.
Edit: The earlier version of OpenCV still works properly with USB webcams; it only has the small-picture problem with the RPi camera module.