
Face detection

Erlend Posts: 612
edited 2014-01-09 17:59 in General Discussion
These days even quite low-priced cameras come with a face detection feature where the screen shows a live marker - usually a red square frame - superimposed over the picture wherever there is a face. Lots of cameras can even track several faces simultaneously. What if I could get my hands on one of the chips responsible for this computing achievement? Somewhere in there are the x-y coordinates of the face in the picture. How nice it would be to hack into that and use these coordinates to orient the head or eyes of a robot - a robot that effectively looks into your face at all times.

Can it be done? Anyone researched this?

Erlend

Comments

  • Erlend Posts: 612
    edited 2014-01-08 04:24
    Ooops - a glitch in my google-before-posting algorithm. Here's one chip. Probably not affordable, though.

    Erlend
  • Heater. Posts: 21,230
    edited 2014-01-08 04:41
    An interesting project idea.

    My first run at this would be as follows:

    1) Get a Raspberry Pi and its camera module.

    2) Ask Google to find me some facial recognition software that I can build to run on the Pi. There is at least one program that will analyse an image and spit out face coordinates. Sadly I can't remember what it is called. Perhaps you can do something with OpenCV.

    3) Hook the Pi up to a Propeller via the UART on the Pi GPIO pins. The Prop will drive servos for robot steering and other I/O.

    4) Hook all these together with a program written in Python, or JavaScript or whatever you like.
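
    Something along these lines might do it on the Pi side, assuming OpenCV with its bundled Haar cascade and pyserial; the serial device, baud rate, and parameters below are just placeholders, and older OpenCV builds need the full path to the cascade XML file:

        # Sketch only: detect faces on the Pi and send the centre of the
        # largest one to the Propeller over the UART as "x,y\n".
        import cv2
        import serial

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        uart = serial.Serial("/dev/ttyAMA0", 115200)   # Pi GPIO UART (placeholder)
        cam = cv2.VideoCapture(0)

        while True:
            ok, frame = cam.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
            if len(faces):
                # pick the largest face and send its centre point
                x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
                uart.write(("%d,%d\n" % (x + w // 2, y + h // 2)).encode())

    The Prop side then only has to parse "x,y" lines off its serial pin and point the servos accordingly.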
  • Erlend Posts: 612
    edited 2014-01-08 05:19
    1) Finally a reason to get a Raspberry?
    2) Yes, there is something out there; the most likely candidate is probably OpenCV, or alternatively Android 4
    3) Plain vanilla
    4) With reference to previous language wars, maybe I should be silent on this, but my preference would certainly be to slave the Raspberry and let Spin be the supreme ruler

    - but I cannot get my mind off those tiny cheap chips inside cameras - maybe piggyback a red-square-x/y-detector onto one such camera...

    Erlend
  • Leon Posts: 7,620
    edited 2014-01-08 06:28
    They don't use dedicated hardware - the face detection is performed using software on the processor (probably an ARM).
  • ctwardell Posts: 1,716
    edited 2014-01-08 06:44
    Leon wrote: »
    They don't use dedicated hardware - the face detection is performed using software on the processor (probably an ARM).

    Actually based on the link in post #2, they sometimes do.

    C.W.
  • Erlend Posts: 612
    edited 2014-01-08 07:08
    Another idea: take a mobile phone running Android 4, put together a simple app which reads the face coordinates and sends them by Bluetooth or wires over to the Propeller, which is in control. The display of the mobile could even run a robo-face animation.

    Erlend
  • kwinn Posts: 8,697
    edited 2014-01-08 07:45
    Wouldn't it be simpler to capture a low res version of the image and calculate the position of the frame in the image? Once you have the frame position you can determine which way to move the camera to center it.
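
    Something like this might be all the centring logic needs, assuming the marker position arrives as an (x, y) pixel coordinate in a known frame size; the frame size, deadband, and servo stubs below are placeholders:

        # Sketch only: decide which way to nudge pan/tilt to centre the marker.
        FRAME_W, FRAME_H = 160, 120   # low-res capture (placeholder)
        DEADBAND = 10                 # pixels of slack before we bother moving

        def move_pan(step):           # stand-in for the real servo command
            print("pan", step)

        def move_tilt(step):
            print("tilt", step)

        def centre_on(x, y):
            dx = x - FRAME_W // 2
            dy = y - FRAME_H // 2
            if abs(dx) > DEADBAND:
                move_pan(1 if dx > 0 else -1)    # sign depends on how the servo is mounted
            if abs(dy) > DEADBAND:
                move_tilt(1 if dy > 0 else -1)

        centre_on(120, 40)   # example: marker is right of and above centre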
  • Beau Schwabe Posts: 6,566
    edited 2014-01-08 08:19
    But isn't facial recognition simply a modified edge detection? ... If you take an image and shift it by one pixel in the X and Y and XOR that image with the original, most of the common mode pixels will drop off leaving only contrasting pixels or the edge. The Eyes, nose, and mouth should also produce contrast pixels that based on their proportion to one another could be recognized as a face. .... I would think a Propeller could do facial recognition based on a similar approach.
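
    A quick numpy sketch of that shift-and-XOR idea on a thresholded (1-bit) image, just to show the effect on a PC (a Propeller version would work on raw pixel buffers instead); the file names are placeholders and it needs numpy and Pillow:

        # Sketch only: shift the image one pixel in X and Y and XOR with the
        # original; pixels that don't change (common mode) drop out, edges remain.
        import numpy as np
        from PIL import Image

        img = np.array(Image.open("face.png").convert("L")) > 128   # threshold to 1-bit
        shift_x = np.roll(img, 1, axis=1)
        shift_y = np.roll(img, 1, axis=0)
        edges = (img ^ shift_x) | (img ^ shift_y)

        Image.fromarray((edges * 255).astype(np.uint8)).save("edges.png")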
  • erco Posts: 20,256
    edited 2014-01-08 08:30
    This guy shows you the illusion of tracking you using... paper. http://www.youtube.com/watch?v=A4QcyW-qTUg
  • xanadu Posts: 3,347
    edited 2014-01-08 10:46
    OpenCV explains how their algorithm works - http://docs.opencv.org/trunk/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html

    I hesitate to mention RoboRealm because it's so bulky but it does recognition really well.
  • prof_braino Posts: 4,313
    edited 2014-01-08 11:26
    http://en.wikipedia.org/wiki/OpenCV
    http://opencv.org/

    Oh, you beat me to it. Anyway, this is what I have been meaning to do for ages. The guys in the robot club use this and can identify hands and faces. The demo they showed used an old Android phone, but it should work with anything, even a USB web cam. I have a Pi + camera, but have not gotten to this project yet. If you go this route I would like to play along.
  • Duane Degn Posts: 10,588
    edited 2014-01-08 11:55
    But isn't facial recognition simply a modified edge detection? ... If you take an image and shift it by one pixel in the X and Y and XOR that image with the original, most of the common mode pixels will drop off leaving only contrasting pixels or the edge. The Eyes, nose, and mouth should also produce contrast pixels that based on their proportion to one another could be recognized as a face. .... I would think a Propeller could do facial recognition based on a similar approach.

    This is on my todo list.

    I hope Hanno's method of capturing NTSC B&W images can be used to generate the image to manipulate.

    I know I've shown this image a bunch of times on the forum, but I just think it's so cool to be able to capture a low res image like this with the Propeller.

    (attached image: LedHi.jpg)


    I think Phil's PropCAM would probably make the process even easier.

    I think the Parallax Laser Range Finder (which has a small camera) and the CMUcam4 are also possible ways of using a Propeller to capture an image to be used in a face finding algorithm.

    I personally would like to leave PCs (and berry pies) out of a robot unless absolutely needed.

    BTW, post #4 of my index has links to various Propeller machine vision threads.
  • Cats92 Posts: 149
    edited 2014-01-08 12:50
    Agree with Xanadu: RoboRealm does it well and works on a small Windows computer (that is the problem, but try something like the ebox 3350mx).
    It is a very efficient toolbox for image analysis. I used it with a Propeller on a robot platform.
    Easy to track a colored object.

    Jean Paul
  • xanadu Posts: 3,347
    edited 2014-01-08 18:37
    Cats92 wrote: »
    Agree with Xanadu: RoboRealm does it well and works on a small Windows computer (that is the problem, but try something like the ebox 3350mx).
    It is a very efficient toolbox for image analysis. I used it with a Propeller on a robot platform.
    Easy to track a colored object.

    Jean Paul

    Or don't even put the PC on the robot.

    Robot *video <wireless> host PC <XBee> robot.

    *IP camera or analog tx/rx (1.2 GHz and 5.8 GHz work very well). IP cams use less battery; most are ~120 mA @ 5 VDC. Analog cameras have a much higher sample rate.

    You have to be close enough to your mini-PC because of the wireless link, but your robot will have a ton of CPU to play with. It makes tracking, identifying, and even navigation really easy.
  • SteveWoodrough Posts: 190
    edited 2014-01-08 19:02
    I did something similar over the Christmas holiday with RoboRealm and a wireless router. RR is not perfect, but it is easy to use and to interface with the Prop.

    https://www.youtube.com/watch?v=UZe48U0K89k

    Xanadu: Were you ever able to determine which USB composite video device you used on your project?

    Regards,
    Steve
  • xanadu Posts: 3,347
    edited 2014-01-08 19:50
    I did something similar over the Christmas holiday with RoboRealm and a wireless router. RR is not perfect, but it is easy to use and to interface with the Prop.

    https://www.youtube.com/watch?v=UZe48U0K89k

    Xanadu: Were you ever able to determine which USB composite video device you used on your project?

    Regards,
    Steve

    The one I have is no longer sold, and it was like $8 too. With most of the capture devices that cost $50 and up, you're paying for garbage bundled software. The closest I can find to it (same manufacturer according to Device Manager) doesn't list the color format, so it's a gamble. In fact I spent some time looking around, and all of the good ones seem to be discontinued.

    I think if you email RR support they have some kind of unofficial list. I'm pretty sure that is what I ended up doing originally, but there were also way more options to choose from back then. Don't get roped into spending $80, though; even if you have to find a place that takes returns and pay a restocking fee, you should be able to find something for under $20.

    You can use any capture device that is DirectX compliant and in RGB24 or RGB555 color format. Of course the hardware will also need to be compatible with your OS.

    Sweet video, and really nice bot! What's your average FPS into RR with that webcam/router setup? Also if you don't mind what is the power draw?

    Edit: Just heard 15 FPS in your video; another 10-15 FPS and that thing will be screaming for navigation. If 15 FPS is working okay for you, I'd consider the power consumption; you might end up with higher FPS and fewer mA, which is worthwhile. Otherwise I'm not sure trading power for FPS is a good idea; I guess it depends on how much battery you have.
  • SteveWoodrough Posts: 190
    edited 2014-01-09 16:07
    I'm not positive, but I think the FPS limitation has to do with the fact that I'm transmitting the video over the TP-Link. Power is supplied by the 2400 mAh battery bank and I can let it run for about 3 hours. So roughly 800 mA? Of course that is the combined load of the camera, router, and the bank itself. I don't have a way to "turn up" the FPS on this camera. This was more a proof of concept and the next step will be a wireless 900 MHz tx/rx system. If we ever get this snow to melt I thought a fun experiment might be back yard navigation with my Magellan bot staying only on the "green" grass.

    Thanks,
    Steve
  • Kotobuki Posts: 82
    edited 2014-01-09 17:44
    Not to hijack a thread, but I have been wondering about this too. The neat thing with human faces is that they all have the same features, in roughly the same proportions. Here is a thought that I have been considering... something that would recognize a dog's face. There are god only knows how many breeds of dogs, and those different breeds all seem to have a different shape. Would it be possible to gather several thousand head-on photos of different dogs, make a composite generic dog face, and then have a computer (a small one, not a super computer) recognize it as a dog? (Or cat, monkey, fish, or whatever?)
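
    A toy sketch of the "composite" half of that idea, assuming a folder of roughly aligned, same-size, head-on photos (the folder, image size, and file names are placeholders); actually recognizing the composite in a scene is the much harder part:

        # Sketch only: average many aligned head-on photos into one generic template.
        import glob
        import numpy as np
        from PIL import Image

        SIZE = (128, 128)
        stack = [np.array(Image.open(p).convert("L").resize(SIZE), dtype=np.float64)
                 for p in glob.glob("dogs/*.jpg")]
        composite = np.mean(stack, axis=0)
        Image.fromarray(composite.astype(np.uint8)).save("generic_dog.png")

    Real detectors (like OpenCV's cascades) train a classifier on those photos instead of just averaging them, but the average at least shows how consistent the shape really is.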

    Just a thought.


    Joe
  • xanadu Posts: 3,347
    edited 2014-01-09 17:59
    Staying on green grass will be interesting with different lighting conditions and the contrast between the edge of the grass and the outer perimeter.

    It sounds like you could cut power consumption and increase the FPS by switching to analog video. There are probably a lot of benefits to having the wireless network on the robot, though. In fact it's almost against my religion to remove a network device; I usually only add them, hehe.

    Here's a breakdown of my analog setup:

    CCD 700-line board camera = 80 mA

    1.2 GHz 700 mW video transmitter = 300 mA

    I'm not familiar with your access point, but 700 mW will hit the same range as the average access point with stock antennas.