
Any Compelling Raspberry Pi Robots?

Martin_HMartin_H Posts: 4,051
edited 2015-06-15 12:07 in Robotics
The Raspberry Pi is neat and seems like a good value, but I haven't seen any compelling Raspberry Pi robots. A Google search tends to locate fairly basic robots that you could build with a microcontroller. With that kind of CPU power I'd hope to see some sort of machine vision using the camera module to control a robot arm, but I haven't seen any projects like that.

There's a fair amount of technology to interface the Raspberry Pi to microcontrollers (e.g. Propeller Hat and RoboPi) which shows there's interest. But so far most of the projects seem like a microcontroller could do it on its own, and the Raspberry Pi is being used as a programming environment or scripting language to talk to the microcontroller.

The reason I'm asking is that I don't own a Raspberry Pi, and I'm wondering if I'm missing out on something really cool, or if sticking with a PC and a microcontroller is all I need to get the job done.

Comments

  • kwinnkwinn Posts: 8,697
    edited 2015-06-07 07:24
    I think the answer depends on your goals. If you want to build a robot with capabilities like vision, speech recognition, and speech synthesis, then you are probably missing out on something cool. If all you want is a robot for line following, obstacle detection, etc., then you're not missing much.

    Seems to me that there is a natural division of functions between a microcontroller and something like the RPi when it comes to robots. Higher-level functions like vision, speech, and goal execution need the power a system like the RPi provides. A microcontroller (or several micros) like the Propeller is ideal for controlling the robot's hardware based on commands from the RPi.
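
    To make that concrete, here's a minimal sketch of the Pi side of such a split, assuming a hypothetical micro firmware that accepts plain-text drive commands over a serial link (the port name and the "M left right" protocol are made up for illustration):

    import serial  # pyserial

    # Hypothetical protocol: "M <left> <right>\n", speeds in percent.
    # The micro owns the PWM/servo timing; the Pi only decides where to go.
    port = serial.Serial('/dev/ttyAMA0', 115200)

    def drive(left, right):
        port.write('M %d %d\n' % (left, right))

    drive(50, 50)   # forward at half speed
    drive(50, -50)  # pivot
    drive(0, 0)     # stop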
  • Duane DegnDuane Degn Posts: 10,588
    edited 2015-06-07 08:12
    Most of the Raspberry Pi robots I've seen could have easily been controlled with a microcontroller.

    As kwinn points out, there are applications for which an R Pi could be really useful. I think the most common use of the R Pi on a robot is to stream video or images from the camera. While this is a useful ability (and one I plan to copy), I think some sort of wireless IP camera would also do this.

    I'm pretty sure I've seen some good examples of R Pi based robots, but they're not coming to mind. I'll add links to the examples if I can remember them.
  • GordonMcCombGordonMcComb Posts: 3,366
    edited 2015-06-07 09:48
    This is my feeling as well -- RPi examples don't always showcase its strengths. The thinking goes that it's only a little more money than an official Arduino (the clones are much cheaper now, though), so why not use it. But that defeats the purpose of the RPi's promise.

    You don't need anything special to stream video, and it can be handled separately from the controller, so I'm not sure that's the killer app. I've looked at some of the vision processing possible with the Pi, and have been underwhelmed. I'm familiar with vision processing on Windows (using DirectShow), but that knowledge is limiting unless I want to run some version of Windows, and I don't. The angst comes from this: vision libraries for Windows are so common that it's a constant tug to find some Windows solution. I think that's why running a Windows laptop with Eddie and a Kinect (with or without what's left of RDS) is still so compelling. People can just go out, write some very simple VB.NET code to connect up all these open source vision analysis filters, and go to town.

    I think this all comes down to who the RPi is made for -- people who like fiddling with kernels and Ubuntu packages and various drivers they find on the odd GitHub. But the average robotics experimenter is not at that level, so instead they fall back to the fairly generic demonstrations of RPi-powered bots. Not that there aren't some truly fantastic robots out there using the Pi, but they're well beyond what your weekend robotics warrior is able to do. (And Martin, I'm not lumping you in that WRW group, as you and many others here are well beyond that; I'm just making an observation about the kinds of videos I see on YouTube.)
  • Bill HenningBill Henning Posts: 6,445
    edited 2015-06-07 10:29
    I recently added MJPEG streaming to Elf (one of my RoboPi + Raspberry Pi robots) and while it was very simple to get running, I was not happy with the frame rate.

    http://www.mikronauts.com/robot-zoo/elf-2wd-pi-robot/

    I am now working on getting h.264 streaming going.

    Using a Pi on a robot is really only needed for higher-end robotics experiments -- vision and speech were already mentioned above; I'd add complex navigation and mapping, and any experiments where you need a lot of code or data.

    With all respect to Gordon, I disagree about using Windows libraries and a laptop for vision experiments for most users, largely due to the costs involved, and for the size of the larger robotics platform needed to support the laptop etc.

    Unfortunately I have not been able to do as much with my Pi bots as I'd like due to other commitments for my time, but I will be doing more as time allows :)
  • Heater.Heater. Posts: 21,230
    edited 2015-06-07 11:17
    I don't know. As far as I can tell the Pi was not created to be an alternative to an Arduino or Propeller or whatever.

    It was intended to be an ultra cheap computer that kids could hack on and hopefully learn something about programming with.

    Do you need a Pi to be the "brains" of your robot creation? Well, if it's a simple line follower probably not. If you have bigger plans then perhaps yes.

    Round here the Pi makes a very good Prop Plug, at a similar price! :)
  • ratronicratronic Posts: 1,451
    edited 2015-06-07 11:23
    Vision processing is the only thing I have used a RPi for. But vision processing can be complicated and a little slow on the RPi. I have come up with some C++ programs on the RPi 2 that give some useful response. Here is a video of my Stingray robot color-detecting an orange ball using the RPi camera module. The front arm tracks the ball vertically and the whole robot tracks it horizontally.

    https://www.youtube.com/watch?v=26EfYeDxc3M
  • GordonMcCombGordonMcComb Posts: 3,366
    edited 2015-06-07 11:47
    Bill Henning wrote: »
    With all respect to Gordon, I disagree about using Windows libraries and a laptop for vision experiments for most users, largely due to the costs involved, and for the size of the larger robotics platform needed to support the laptop etc.

    But there are all these Windows XP laptops, you see, which are effectively free. They need a larger robot, to be sure, but projects like Leaf make all the files available in single downloads. They developed it some 10 years ago, so you know older hardware will still work. (This is one of the surprising benefits of DirectShow. The architecture works well even on hardware-limited computers. I got near 30 fps frame grabbing and basic color blob recognition on a 2005-era Toshiba with 512 MB of RAM. Go figure. This is all done using standard filters and a .NET wrapper called DirectShow.Net, which makes DS available to non-C++ apps.)

    The ease of getting started makes all the difference. Willow Garage uses this approach, with a fairly small turtle robot for a base. Eddie is on the large side for a robot that can move a laptop. They don't need to be that big, and some folks (schools, for some reason) like the bigger bots.

    That said, it would be better to have it on an ARM-based card-sized controller. Does Microsoft make an XP Lite available for the Pi? (Not a serious question, but I bet people would get it for their robots with vision.)

    Robots with two-way video can be more like the very funny Modern Family episode ("American Skyper") that was on a few weeks ago. In it, Phil was stuck in a hotel in Seattle, but had an avatar bot, with his face on the main screen via Skype, roam through the house. The LCD screen was on a self-balancing robot base. No need for the video to pass through the microcontroller, which has its hands full doing navigation and balancing. (These kinds of bots, BTW, are what Willow Garage has morphed into.)

    Dave, nice to hear you've successfully used Pi for video processing.
  • NWCCTVNWCCTV Posts: 3,629
    edited 2015-06-07 12:09
    I messed with my RPi for a while last year, but I found the projects to be similar to what I had already done with the Stamp and Propeller, so I set it aside. When it comes to the point that I need machine vision I may explore it again, but for now it is shelved.
  • Bill HenningBill Henning Posts: 6,445
    edited 2015-06-07 12:14
    Gordon,

    I agree that a Windows laptop can be a good solution, budget and robot size permitting - and I also agree that it is easier to implement.

    I think where we disagree (somewhat) is that I like the tiny/small SBC approach, as it allows vision and more complex problem solving on a fairly small robot. Our difference comes down to your preferring an easier .NET solution vs. my preferring a smaller SBC :)

    The rest of what I enjoy about RoboPi+RPi robots would work as well with a laptop - namely:

    - ssh'ing into the bot for development
    - RDP into the bot for a desktop

    I find that being able to do the above REALLY helps the software development cycle vs. attach Prop Plug, download, detach Prop Plug, try it, rinse-and-repeat -- but that of course would also work with a laptop.

    Hmm... maybe I should make a bot for one of my Acer netbooks...
  • Heater.Heater. Posts: 21,230
    edited 2015-06-07 12:26
    Do what? Use a 15-year-old OS and software and some big old clunky laptop on a robot?

    There is a reason why we need things like the Pi in the world.
  • Bill HenningBill Henning Posts: 6,445
    edited 2015-06-07 12:42
    Heater. wrote: »
    Do what? Use a 15-year-old OS and software and some big old clunky laptop on a robot?

    There is a reason why we need things like the Pi in the world.

    I agree that we need things like the Pi! I am, however, curious about the relative performance, weight, implications for the bot, etc., between, say, a dual-core Atom netbook and an RPi 2.

    Elf is currently being updated; I hope to get h.264 streaming working today!

    Actually, I just realized something: the $25 5MP CSI camera is a pretty strong argument for the Raspberry Pi.
  • Martin_HMartin_H Posts: 4,051
    edited 2015-06-07 15:04
    @All, thanks for the feedback. It sounds like a killer robotics app for the Raspberry Pi still needs to be developed.

    The flaw in saying that the RPi costs only a bit more than a microcontroller board is what happens when you let the magic smoke out. A wiring mishap with a DIP microcontroller will set you back less than $10, but the RPi is all surface-mount parts, which makes me more nervous about wiring mishaps. They aren't frequent, but they happen.
  • ercoerco Posts: 20,255
    edited 2015-06-07 17:32
    Martin_H wrote: »
    A wiring mishap with a DIP microcontroller will set you back less than $10.

    If I need one to develop a "Killer App", even a cheapskate like me won't lose sleep over blowing up a few $35 RasPis. If you wanna make an omelette... :)
  • GordonMcCombGordonMcComb Posts: 3,366
    edited 2015-06-07 18:01
    Too bad it comes down to a 15-year-old OS that's been demonstrated to be a good platform for video image processing, with free code falling out of every crevice of CodeProject and similar repositories, versus a promising card-sized computer that's being relegated to line following.

    Don't fail to see the forest for the trees. Why don't we see more vision-capable robots using these ARM-based processors? They're capable of it. Well, I think I know the reason: it has to do with simple human nature, and going with what you know.

    I'm not a Pi user -- mainly just Propeller and Arduino -- but for all of you who are, and build robots with them, let's share links to your efforts, like Dave did with his video. (BTW, Bill, I know you go out of your way to demonstrate your findings. Bravo!)

    The best way to convince someone is to show it in action. I applaud those who are willing to publish in print or the Web what they've done with their accumulated knowledge. Dave, is there any write-up on your color blob code?

    Cue Erco to remind us that SERVO Magazine loves articles like these!
  • Bill HenningBill Henning Posts: 6,445
    edited 2015-06-07 18:43
    Thanks Gordon. I try to write articles that will help people, and save them mucho hair tearing...

    FYI, I just got h.264 streaming working from Elf. I stumbled a number of times following blog instructions that appear to be outdated, until I found a nice simple one that "just worked".

    Now I have to figure out how to reduce the latency; I find 2 seconds to be way too high. I do love the 24fps @ 1296x972, though! (1/4 sensor resolution)

    I wish that the raspivid command had a built-in h.264 streaming mode; that would greatly reduce latency.
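
    For anyone who wants to experiment along similar lines, one way to wire this up (not necessarily the exact pipeline I used) is to let raspivid write raw h.264 to stdout and push it to a viewer over TCP from a small Python script; the port number is arbitrary:

    import socket
    import subprocess

    # Wait for one viewer (e.g. mplayer or vlc reading the raw h.264 stream).
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('', 5001))
    server.listen(1)
    conn, _ = server.accept()

    # raspivid: -t 0 = run until killed, -o - = raw h.264 to stdout
    cam = subprocess.Popen(['raspivid', '-t', '0', '-w', '1296', '-h', '972',
                            '-fps', '24', '-o', '-'], stdout=subprocess.PIPE)
    try:
        while True:
            chunk = cam.stdout.read(4096)
            if not chunk:
                break
            conn.sendall(chunk)
    finally:
        cam.terminate()
        conn.close()
        server.close()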
  • ratronicratronic Posts: 1,451
    edited 2015-06-08 08:51
    GordonMcComb wrote: »
    Dave, is there any write-up on your color blob code?

    I did post the code somewhere but cannot remember where. Here is one of my latest detectors, tuned for red objects. It captures 200 x 140 video frames and displays the largest detected red object's x, y screen centroid location, its area, and the frames per second on the video, with a box over the detected object. It also sends out a 6-byte serial packet with the above information after each processed video frame. Then it is up to your robot what to do with that information.
  • Heater.Heater. Posts: 21,230
    edited 2015-06-08 10:29
    For sure the Pi is not an Arduino or Propeller or a traditional little micro-controller.

    The Pi was created as an educational tool, a way to attract kids into the world of programming. It happens to support a camera really easily, both still and video. It can do wonders with GLES. It can handle most of what you need for networking: ssh, http, https, websockets, ntp, etc. With its Linux OS you can take your pick of languages to program in.

    I would say that its abilities in the real-world, real-time interfacing department are limited. But that's why we have Bill and his Propeller boards for the Pi to offload that to. There are other examples.

    Is that what you want for your robot? Maybe, maybe not. What are your goals and requirements?

    Certainly there are examples of autonomous boats and so on that use the Pi.

    It's all good fun; use it where it is appropriate.
  • GordonMcCombGordonMcComb Posts: 3,366
    edited 2015-06-08 11:20
    Dave, nice code. I see it uses OpenCV. I keep reading that it can be a bear to install and build. Did you find that, or are there better package installers available now?
  • ratronicratronic Posts: 1,451
    edited 2015-06-08 11:50
    I had to build it on the Pi. It takes a few hours. There is an easy way to install OpenCV for use with Python by typing into a terminal: sudo apt-get install python-opencv libopencv-dev

    I believe you can compile C++ programs against that installation as well; I just have not figured it out yet. Installing that way gets you OpenCV 2.4.3, last I checked.
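
    To see which version you actually ended up with, you can check from Python:

    import cv2
    print(cv2.__version__)  # e.g. 2.4.3 for the packaged build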

    Right now I am building up a new card and do not have OpenCV access. But I will put back one of my prior images, see if I can come up with the Python version of the detector program, and post it here.
  • ratronicratronic Posts: 1,451
    edited 2015-06-08 15:19
    I took a fresh Raspbian image, and the installation of OpenCV using the above method takes ~13 minutes via WiFi on a RPi 2. Also, to use the RPi camera module as /dev/video0 (like a USB web cam) you need to install the uv4l-raspicam driver; the directions are here - http://www.linux-projects.org/modules/sections/index.php?op=viewarticle&artid=14

    In its config file I change the camera output to 640 x 480 and set the no-preview option to yes.
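
    From memory, the lines I touched look roughly like this -- treat the option names as an illustration and check the comments in your own copy of the file, since they may differ between uv4l versions:

    # /etc/uv4l/uv4l-raspicam.conf (illustrative -- verify against your version)
    width = 640
    height = 480
    nopreview = yes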

    The Python program I am posting here will open two 200 x 140 video windows: one shows the video with a small white box over the largest detected object's centroid, and the other shows the binary masked image. Sorry Martin for going O.T.
    # detect red - output object location and size of largest detected red object
    import numpy as np
    import cv2
    import serial
    
    #ser = serial.Serial('/dev/ttyUSB0', 115200)
    ser = serial.Serial('/dev/ttyAMA0', 115200)
    cap = cv2.VideoCapture(0)
    
    while(True):
        _, frame = cap.read()
        frame = cv2.resize(frame, (200, 140))
        orig = frame.copy()
        frame = cv2.blur(frame, (4, 4))
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
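        # hue 0..3 with strong saturation/value isolates bright reds in HSV space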
        thresh = cv2.inRange(hsv, np.array((0, 50, 50)), np.array((3, 255, 255)))
        thresh2 = thresh.copy()
        contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        max_area = 0
    
        for cnt in contours:
            area = cv2.contourArea(cnt)
            if area > max_area:
                max_area = area
                best_cnt = cnt
    
        if max_area > 0:
            M = cv2.moments(best_cnt)
            cx, cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])
            cz = int(max_area)
            cv2.rectangle(orig, (cx - 5, cy - 5), (cx + 5, cy + 5), (255, 255, 255), 2)
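            # 6-byte packet: '!' '&' header, then x, y, and 16-bit area (low byte first)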
            ser.write('!')
            ser.write('&')
            ser.write(chr(cx))
            ser.write(chr(cy))
            ser.write(chr(cz & 0xff))
            ser.write(chr(cz >> 8))
        # Show it, if key pressed is 'Esc', exit program
        cv2.imshow('Fr', orig)
        cv2.imshow('Th', thresh2)
        if cv2.waitKey(1) == 27:
            break
    
    # Clean up everything before leaving
    ser.close()
    cap.release()
    cv2.destroyAllWindows()
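
    If it helps, here is a rough sketch of the receiving end of that 6-byte packet, in the same Python 2 style (the port name is just an example; on a Propeller you would do the equivalent in Spin or C):

    import serial

    port = serial.Serial('/dev/ttyUSB0', 115200)  # example port name

    def read_packet():
        # Sync on the '!' '&' header, then read x, y, and the 16-bit area
        # (low byte first, matching the sender above).
        while port.read() != '!':
            pass
        if port.read() != '&':
            return None  # false sync; caller can simply try again
        cx = ord(port.read())
        cy = ord(port.read())
        area = ord(port.read()) | (ord(port.read()) << 8)
        return cx, cy, area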
    
  • Heater.Heater. Posts: 21,230
    edited 2015-06-09 01:15
    GordonMcComb wrote: »
    Does Microsoft make an XP Lite available for the Pi?
    There is no such Windows for the Pi. Recently MS put out a thing called "Windows 10 IoT Core" for the Pi. It does not actually have any windows though.
    https://www.raspberrypi.org/forums/viewforum.php?f=105 It's all in a very preliminary release state as far as I can tell from reading that thread.
  • Keith YoungKeith Young Posts: 569
    edited 2015-06-09 07:24
    ratronic,

    What sort of speed are you getting with this? Could you make something interactive and compelling for a Maker Faire type event or is it too slow?
  • ratronicratronic Posts: 1,451
    edited 2015-06-09 07:51
    The frame rate wobbles quite a bit with the Raspbian operating system running in the background. It also depends on how many pixels are in the video.

    After sending out the serial data with a 110 x 80 video screen, I have seen it bounce between 20-30 FPS on a Pi 2. So if you need more speed, make the picture smaller.

    I am using 110 x 80 video on an ActivityBot that has a pan/tilt servo mechanism with a camera mounted on the back of the bot. It keeps pretty good track of the object it is tracking.
  • GordonMcCombGordonMcComb Posts: 3,366
    edited 2015-06-09 12:01
    Thanks for the code, Dave. I'm ordering a Pi2 as we speak!

    A frame rate of 20-30 fps should be more than enough for object tracking, and is better than I had expected. Maybe not for playing ping-pong, but good for following a colored object.
  • mindrobotsmindrobots Posts: 6,506
    edited 2015-06-09 12:12
    Coconut Pi is a rather interesting autonomous UAV powered by a Raspberry Pi.

    I'm compelled!
  • ratronicratronic Posts: 1,451
    edited 2015-06-09 12:33
    The video I posted in post #7 with the Stingray is actually using a Pi 1 B+. That frame rate of 20-30 FPS is on a RPi 2 using a compiled C++ binary. But I have found that Python does a really good job and is not much slower than C++; it will provide useful speed. Also, I should point out that both the C++ and Python programs I posted here are written expecting /dev/video0 to be the camera. If you plug in a USB webcam and fire up the RPi, the programs will use it instead of the RPi camera module.
  • ratronicratronic Posts: 1,451
    edited 2015-06-10 10:06
    I need to let anybody who is going to try video on the Pi know that with the version of OpenCV you get using the easy method I posted above, OpenCV only sees 64 x 64 output from the camera, which looks pixelated when resized bigger. I knew there was a reason I took the time to build a later version of OpenCV that works properly with the camera. I have been using OpenCV 2.4.9 and am getting ready to build OpenCV 2.4.11. It is not hard after doing it a few times. If anybody wants to know how I build it, start a new thread and I can guide you through Robert Castle's method of installing OpenCV on the Pi.

    Edit: The earlier version of OpenCV still works properly with USB webcams; it only has the small-picture problem with the RPi camera module.
  • Keith YoungKeith Young Posts: 569
    edited 2015-06-14 10:49
    https://www.youtube.com/watch?v=3BJFxnap0AI Not mine, but a great idea. Simple and doable, yet fairly impressive.
  • Heater.Heater. Posts: 21,230
    edited 2015-06-15 02:47
    Yes, and he does more impressive things: https://www.youtube.com/user/74Samy/videos.
  • Martin_HMartin_H Posts: 4,051
    edited 2015-06-15 07:10
    I'd bet that a CMUcam and a Propeller or Arduino could do that object-tracking balance-bot.