Skynet is Tracking 16 Bogeys — Parallax Forums

Skynet is Tracking 16 Bogeys

erco Posts: 20,255
edited 2012-06-24 22:25 in Robotics
Incoming!

http://www.youtube.com/watch?v=dXiubArqPQk&feature=digest_wed

The Terminator will surely rise from ExDiv's workshop.

Comments

  • Martin_H Posts: 4,051
    edited 2011-08-31 19:59
    Amazing and it also puts my two machine vision projects to shame.
  • ExDxV Posts: 29
    edited 2011-10-04 11:01
    Machine that can see!

    Route training and navigation by map:

    See also: AVM Navigator help page, Using of AVM plugin in RoboRealm

    Note:

    What does your robot see, and how does it affect navigation?

    In our case the robot uses only a limited sequence of camera images for
    navigation. The AVM Navigator application simply tries to recognize images
    in order to determine the robot's location. If you show the robot some
    sequence of images during route training (just lead your robot in "Marker
    mode"), then in automatic navigation mode the robot must be able to see
    the same sequence of images, unchanged. Otherwise AVM Navigator will not
    be able to recognize the images, and localization and navigation will fail.
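
    The localization idea described above can be sketched in a few lines. This is a minimal Python illustration (the `similarity` function is a hypothetical stand-in, since the actual AVM matching internals are not published): the robot localizes by finding the best-matching frame in the trained route sequence.

```python
def localize(current_frame, route_frames, similarity, min_score=0.5):
    """Return the index of the trained route frame that best matches the
    current camera view, or None if nothing scores above min_score
    (i.e. the view was never seen during route training)."""
    best_index, best_score = None, min_score
    for i, frame in enumerate(route_frames):
        score = similarity(current_frame, frame)
        if score > best_score:
            best_index, best_score = i, score
    return best_index
```

    If `localize` returns None, the robot has no match anywhere on the trained route, which is exactly the localization failure described above.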
  • Martin_H Posts: 4,051
    edited 2011-10-04 11:27
    That was really cool, thanks for posting that.
  • ExDxV Posts: 29
    edited 2011-10-18 01:28
    This is a test of a new algorithm in AVM Navigator v0.7.3.

    First, in the video, the robot received the command "go to the checkpoint." Each time it arrived at the checkpoint, I carried it back (several times) to a different position on the learned route. The robot noticed the change and indicated that it had been displaced: its view had changed even though it had issued no commands to its motors.

    The robot then looked around and localized its current position. From there it simply computed a path from its current position to the checkpoint and went there (and so on).
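
    The displacement check described above can be sketched in Python (hypothetical names; the actual logic is internal to the AVM Navigator plugin):

```python
def was_displaced(view_changed, motor_commands_issued):
    # The robot infers it was picked up and moved: the camera view
    # changed although it sent no commands to its own motors.
    return view_changed and not motor_commands_issued
```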
  • ExDxV Posts: 29
    edited 2012-04-07 07:33
  • Ttailspin Posts: 1,326
    edited 2012-04-07 07:54
    I am just a hobbyist... so the only comment I can make is: WOW, very cool!


    -Tommy
  • ExDxV Posts: 29
    edited 2012-04-21 12:24
    This is a test of the enhanced tracking algorithm, which takes variable servo speed into account:

    See the VBScript program and diagram below for more details:
    ' Get turret control variables
    flt_base = 10000
    
    rate_v  = GetVariable("RATE_V")/flt_base
    turret_v  = GetVariable("TURRET_V")/flt_base
    turret_Sv = GetVariable("TURRET_V_SPEED")
    
    rate_h  = GetVariable("RATE_H")/flt_base
    turret_h  = GetVariable("TURRET_H")/flt_base
    turret_Sh = GetVariable("TURRET_H_SPEED")
    turret_f  = GetVariable("TURRET_FIRE")
    step_counter = GetVariable("STEP_COUNTER")
    
    dX = 0
    dY = 0
    
    status = ""
    turret_v_initial = -80
    
    nvObjectsTotal = GetVariable("NV_OBJECTS_TOTAL")
    
    if nvObjectsTotal>0 then   ' If any object was found
    
    	' Get image size
    	img_w = GetVariable("IMAGE_WIDTH")
    	img_h = GetVariable("IMAGE_HEIGHT")
    	
    	' Get array variables of recognized objects
    	nvArrObjRectX = GetArrayVariable("NV_ARR_OBJ_RECT_X")
    	nvArrObjRectY = GetArrayVariable("NV_ARR_OBJ_RECT_Y")
    	nvArrObjRectW = GetArrayVariable("NV_ARR_OBJ_RECT_W")
    	nvArrObjRectH = GetArrayVariable("NV_ARR_OBJ_RECT_H")
    
    	' Get center coordinates of first object from array
    	obj_x = nvArrObjRectX(0) + nvArrObjRectW(0)/2
    	obj_y = nvArrObjRectY(0) - nvArrObjRectH(0)/2
    	
    	' Get difference between object and screen centers
    	dX = img_w/2 - obj_x
    	dY = img_h/2 - obj_y
    	
    	dXr = 1 - abs(dX*4/img_w)
    	if dXr < 0 then dXr = 0
    	
    	dYr = 1 - abs(dY*4/img_h)
    	if dYr < 0 then dYr = 0
    	
    	turret_min = -100
    	turret_max = 100
    	reaction   = 7
    	speed_min  = 1
    	speed_max  = 100
    	filtering  = 0.7
    	decay      = 0.1
    	threshold  = round(img_w*0.03)
    
    	sRateH = exp(-dXr*reaction)
    	sRateV = exp(-dYr*reaction)
    	
    	rate_h = rate_h + (sRateH - rate_h)*filtering
    	rate_v = rate_v + (sRateV - rate_v)*filtering
    	
    	turret_Sh = round(speed_min + rate_h*(speed_max - speed_min))
    	turret_Sv = round(speed_min + rate_v*(speed_max - speed_min))
    	
    	delta_h = (img_w/8)*rate_h
    	delta_v = (img_h/8)*rate_v
    
    	if step_counter <= 0 then
    		step_counter = round(exp(-(dXr*dYr)*reaction*0.7)*15)
    		
    		if dX > threshold then
    			' The object is at left side
    			turret_h = turret_h - delta_h
    		
    			if turret_h < turret_min then turret_h = turret_min
    		end if
    
    		if dX < -threshold then
    			' The object is at right side
    			turret_h = turret_h + delta_h
    		
    			if turret_h > turret_max then turret_h = turret_max
    		end if
    	
    		if dY > threshold then
    			' The object is at the bottom
    			turret_v = turret_v - delta_v
    		
    			if turret_v < turret_min then turret_v = turret_min
    		end if
    	
    		if dY < -threshold then
    			' The object is at the top
    			turret_v = turret_v + delta_v
    		
    			if turret_v > turret_max then turret_v = turret_max
    		end if
    	else
    		step_counter = step_counter - 1
    	end if
    		
    	' Is the target locked?
    	if dX < threshold and dX > -threshold and dY < threshold and dY > -threshold then
    		status = "Target is locked"
    		turret_f = 1
    	else
    		status = "Tracking"
    		turret_f = 0
    	end if
    else
    	' Back to the center if object is lost
    	if turret_h > 0 then turret_h = turret_h - 1
    	if turret_h < 0 then turret_h = turret_h + 1
    	if turret_v > turret_v_initial then turret_v = turret_v - 1
    	if turret_v < turret_v_initial then turret_v = turret_v + 1
    	
    	turret_Sh = speed_min
    	turret_Sv = speed_min
    	
    	rate_h = rate_h - rate_h*decay
    	rate_v = rate_v - rate_v*decay
    	
    	turret_f = 0
    end if
    
    ' Set turret control variables
    SetVariable "RATE_V", rate_v*flt_base
    SetVariable "TURRET_V", turret_v*flt_base
    SetVariable "TURRET_V_CONTROL", round(turret_v)
    SetVariable "TURRET_V_SPEED", turret_Sv
    SetVariable "RATE_H", rate_h*flt_base
    SetVariable "TURRET_H", turret_h*flt_base
    SetVariable "TURRET_H_CONTROL", round(turret_h)
    SetVariable "TURRET_H_SPEED", turret_Sh
    SetVariable "TURRET_FIRE", turret_f
    SetVariable "STEP_COUNTER", step_counter
    SetVariable "DELTA_X", dX
    SetVariable "DELTA_Y", dY
    SetVariable "TURRET_STATUS", status
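
    The core of the variable-speed idea in the script above is that the turret slews fast when the target is far off-center and slows down as it approaches the lock point. The same mapping, expressed as a standalone Python sketch (parameter names follow the VBScript; the filtering/decay smoothing is omitted for brevity):

```python
import math

def servo_speed(err, img_dim, reaction=7, speed_min=1, speed_max=100):
    # Closeness to center, as in dXr/dYr: 0 when far off-center
    # (the 4/img_dim scaling saturates beyond a quarter of the frame),
    # 1 when the target is exactly centered.
    d = max(1 - abs(err * 4 / img_dim), 0)
    rate = math.exp(-d * reaction)  # ~1 far away, ~exp(-7) near center
    return round(speed_min + rate * (speed_max - speed_min))
```

    For a 320-pixel-wide frame, an error of 160 pixels gives the maximum speed of 100, while a perfectly centered target gives the minimum speed of 1, so the servo creeps rather than overshoots near lock.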
    

    plingboot wrote:
    I've had a 'fiddle' with AVM navigator and managed to teach it a few objects/faces, but not the first idea how to turn that into tracking commands…
    It is easy: just use "Object recognition mode" in the AVM Navigator module.
    First clear the AVM search tree by clicking the "Set key image size (New)" button and then pressing "Yes".

    Now you can train AVM on some faces like in video below:

    Face training demo

    When training is done, use the variables described below in your VBScript program:

    NV_OBJECTS_TOTAL - total number of recognized objects
    NV_ARR_OBJ_RECT_X - left-top corner X coordinate of recognized object
    NV_ARR_OBJ_RECT_Y - left-top corner Y coordinate of recognized object
    NV_ARR_OBJ_RECT_W - width of recognized object
    NV_ARR_OBJ_RECT_H - height of recognized object
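
    As a minimal Python sketch, here is one common way to turn these rectangle variables into a tracking error, essentially what the VBScript above does (the rectangle origin is the left-top corner, so the center is the corner plus half the size):

```python
def tracking_error(img_w, img_h, rect_x, rect_y, rect_w, rect_h):
    # Object center from the left-top corner plus half the size.
    obj_cx = rect_x + rect_w / 2
    obj_cy = rect_y + rect_h / 2
    # Offset of the object from the screen center (dX, dY);
    # (0, 0) means the object is dead center.
    return img_w / 2 - obj_cx, img_h / 2 - obj_cy
```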

    As examples, you can use these VBScript programs that were published in these topics:
    http://www.roborealm.com/forum/index.php?thread_id=3881#
    http://forums.trossenrobotics.com/showthread.php?4764-Using-of-AVM-plugin-in-RoboRealm&p=48865#post48865
  • erco Posts: 20,255
    edited 2012-04-21 13:51
    @ExDxV: Thanks for another amazing demo video. Your "Terminator" sunglasses only prove my prediction that Skynet will arise from the excellent work that you are doing!
  • ExDxV Posts: 29
    edited 2012-04-21 22:00
    I am just trying to create a machine that can see the real world, and I think such a solution is quite possible, because Mother Nature already made one a long time ago ;)
  • ExDxV Posts: 29
    edited 2012-06-24 12:35
    This is a test of human tracking by AVM Navigator in "Object recognition" mode with the "Learn from motion" option.
  • erco Posts: 20,255
    edited 2012-06-24 19:25
    Beautiful! So do you wear all-white exclusively, or does it help the software track you? :)
  • ExDxV Posts: 29
    edited 2012-06-24 22:25
    The source information for tracking is the motion history between frames.
    So if I wore other garments with a more contrasting texture, it could work even better ;)
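
    Frame-to-frame motion detection of the kind described here can be sketched in a few lines of Python (grayscale frames as flat lists of pixel intensities; this is an illustration, not the AVM implementation):

```python
def motion_mask(prev_frame, curr_frame, thresh=30):
    """Mark pixels whose brightness changed by more than `thresh`
    between two consecutive grayscale frames."""
    return [1 if abs(a - b) > thresh else 0
            for a, b in zip(prev_frame, curr_frame)]
```

    Higher-contrast clothing produces larger brightness differences as the person moves, so more pixels clear the threshold, which is why a contrasting texture would tend to track better.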