Motion detection
The motion node is state dependent. For this reason, two steps are required before it can execute: first, initialize the node with the appropriate roslaunch command, and second, switch to one of the appropriate states with a rosrun command. This node is responsible for identifying motion in a scene and works with one camera, which can be a Kinect, an Xtion, or a simple USB web camera. The corresponding launcher allows you to choose which camera to capture from.
You initiate the execution by running:
roslaunch pandora_vision_motion pandora_vision_motion_node_standalone.launch [option]
Argument option:
- With a Kinect plugged into your computer:
openni:=true
- With an Xtion plugged into your computer:
openni2:=true
- With a USB camera plugged into your computer:
usbcamera:=true
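For example, to run the standalone launcher with an Xtion connected, combine the launch file with the corresponding argument from the list above:
roslaunch pandora_vision_motion pandora_vision_motion_node_standalone.launch openni2:=true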
Alternatively, you can initiate the execution by running:
roslaunch pandora_vision_motion pandora_vision_motion_node.launch
The current node begins its execution only if the robot state is changed to either MODE_SENSOR_HOLD or MODE_SENSOR_TEST. After the initialization of the motion node, we change the state by running:
rosrun state_manager state_changer state
Argument state:
- state = 4 corresponds to MODE_SENSOR_HOLD
- state = 7 corresponds to MODE_SENSOR_TEST
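For example, to switch to MODE_SENSOR_HOLD:
rosrun state_manager state_changer 4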
Run:
rosrun rqt_reconfigure rqt_reconfigure
and choose pandora_vision -> motion_node.
From there you can view:
- the input image, by choosing show_image
- the background of the current scene, by choosing show_background
- bounding boxes around detected moving objects, by choosing show_moving_objects_contours
- the difference between foreground and background, by choosing show_diff_image
- all of the above, by choosing visualization
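The same flags can also be toggled from the command line with dynamic_reconfigure's dynparam tool. The reconfigure server name used below (/pandora_vision/motion_node) is an assumption based on the rqt_reconfigure path above; list the available servers first to confirm it on your setup:
rosrun dynamic_reconfigure dynparam list
rosrun dynamic_reconfigure dynparam set /pandora_vision/motion_node show_moving_objects_contours true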