sl-jetson c44a30561a feat: person detection + tracking (YOLOv8n TensorRT)
New package: saltybot_perception

person_detector_node.py:
- Subscribes /camera/color/image_raw + /camera/depth/image_rect_raw
  (ApproximateTimeSynchronizer, slop=50ms)
- Subscribes /camera/color/camera_info for intrinsics
- YOLOv8n inference via TensorRT FP16 engine (Orin Nano Super, 67 TOPS)
  Falls back to ONNX Runtime when engine not found (dev/CI)
- Letterbox preprocessing (640x640), YOLOv8n post-process + NMS
- Median-window depth lookup at bbox centre (7x7 px)
- Back-projects 2D pixel + depth to 3D point in camera frame
- tf2 transform to base_link (fallback: camera_color_optical_frame)
- Publishes:
    /person/detections  vision_msgs/Detection2DArray  all persons
    /person/target      geometry_msgs/PoseStamped     tracked person 3D
    /person/debug_image sensor_msgs/Image              (optional)
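The depth lookup and back-projection steps above can be sketched as below. The helper names match `detection_utils.py`, but the exact signatures are assumptions; the sketch assumes a depth image already converted to float metres and treats zeros as invalid returns.

```python
import numpy as np

def get_depth_at(depth: np.ndarray, u: int, v: int, window: int = 7) -> float:
    """Median depth (metres) over a window x window patch centred on pixel (u, v).

    Zero pixels (invalid depth returns) are excluded from the median.
    """
    h, w = depth.shape[:2]
    half = window // 2
    patch = depth[max(0, v - half):min(h, v + half + 1),
                  max(0, u - half):min(w, u + half + 1)]
    valid = patch[patch > 0]
    return float(np.median(valid)) if valid.size else 0.0

def pixel_to_3d(u: float, v: float, z: float,
                fx: float, fy: float, cx: float, cy: float) -> tuple:
    """Back-project pixel (u, v) at depth z with the pinhole model.

    fx, fy, cx, cy come from /camera/color/camera_info; the result is a
    point in the camera optical frame (x right, y down, z forward).
    """
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```

The median over a 7x7 window rejects the speckle and zero-fill holes typical of stereo depth at a person's torso, at the cost of a small bias near depth edges.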

tracker.py — SimplePersonTracker:
- Single-target IoU-based tracker
- Picks closest valid person (smallest depth) on first lock
- Re-associates across frames using IoU threshold
- Holds last known position for configurable duration (default 2s)
- Monotonically increasing track IDs
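A minimal sketch of the tracker's state machine. The class name and the behaviours (closest-person first lock, IoU re-association, 2 s hold, monotonically increasing IDs) are from the list above; the `update()` signature and the `iou()` helper are assumptions for illustration.

```python
import time

def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

class SimplePersonTracker:
    def __init__(self, iou_threshold=0.3, hold_duration=2.0):
        self.iou_threshold = iou_threshold
        self.hold_duration = hold_duration
        self.track_id = 0   # monotonically increasing across re-locks
        self.box = None     # last associated bbox, None when unlocked
        self.last_seen = 0.0

    def update(self, detections, now=None):
        """detections: list of (bbox, depth_m). Returns tracked bbox or None."""
        now = time.monotonic() if now is None else now
        if self.box is None:
            if detections:  # first lock: closest valid person
                self.box = min(detections, key=lambda d: d[1])[0]
                self.track_id += 1
                self.last_seen = now
        else:
            best = max(detections, key=lambda d: iou(self.box, d[0]),
                       default=None)
            if best and iou(self.box, best[0]) >= self.iou_threshold:
                self.box = best[0]  # re-associated across frames
                self.last_seen = now
            elif now - self.last_seen > self.hold_duration:
                self.box = None     # track lost: require a fresh lock
        return self.box
```

Holding the last box (rather than dropping immediately) bridges brief occlusions without the follow loop chasing a stale target forever.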

detection_utils.py — pure helpers (no ROS2 deps, testable standalone):
- nms(), letterbox(), remap_bbox(), get_depth_at(), pixel_to_3d()
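For reference, `nms()` can be sketched as the standard greedy suppression pass; the actual signature in `detection_utils.py` may differ.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.45) -> list:
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes.

    Returns indices of kept boxes, highest score first.
    """
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # vectorised IoU of box i against all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop overlapping lower-score boxes
    return keep
```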

scripts/build_trt_engine.py:
- Converts ONNX to TensorRT FP16 engine using TRT Python API
- Prints trtexec CLI alternative
- Includes YOLOv8n download instructions

config/person_detection_params.yaml:
- confidence_threshold: 0.40, min_depth: 0.5m, max_depth: 5.0m
- track_hold_duration: 2.0s, target_frame: base_link
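With those values, the params file plausibly looks like the following; the node name `person_detector` and the exact key layout are assumptions, only the values are from this commit.

```yaml
person_detector:
  ros__parameters:
    confidence_threshold: 0.40
    min_depth: 0.5             # metres; closer detections are ignored
    max_depth: 5.0             # metres; farther detections are ignored
    track_hold_duration: 2.0   # seconds to hold a lost track
    target_frame: base_link
```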

launch/person_detection.launch.py:
- engine_path, onnx_path, publish_debug_image, target_frame overridable

Tests: 26/26 passing (test_tracker.py + test_postprocess.py)
- IoU computation, NMS suppression, tracker state machine,
  depth filtering, hold duration, re-association, track ID

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 23:21:24 -05:00

package.xml:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>saltybot_perception</name>
  <version>0.1.0</version>
  <description>
    Person detection and tracking for saltybot person-following mode.
    Uses YOLOv8n with TensorRT FP16 on Jetson Orin Nano Super (67 TOPS).
    Publishes person bounding boxes and 3D target position for the follow loop.
  </description>
  <maintainer email="seb@vayrette.com">seb</maintainer>
  <license>MIT</license>

  <depend>rclpy</depend>
  <depend>sensor_msgs</depend>
  <depend>geometry_msgs</depend>
  <depend>vision_msgs</depend>
  <depend>tf2_ros</depend>
  <depend>tf2_geometry_msgs</depend>
  <depend>cv_bridge</depend>
  <depend>image_transport</depend>

  <exec_depend>python3-numpy</exec_depend>
  <exec_depend>python3-opencv</exec_depend>
  <exec_depend>python3-launch-ros</exec_depend>

  <!-- TensorRT (Jetson): optional, falls back to ONNX Runtime -->
  <!-- <exec_depend>python3-tensorrt</exec_depend> -->
  <!-- <exec_depend>python3-pycuda</exec_depend> -->
  <!-- ONNX Runtime fallback -->
  <!-- <exec_depend>python3-onnxruntime</exec_depend> -->

  <test_depend>ament_copyright</test_depend>
  <test_depend>ament_flake8</test_depend>
  <test_depend>ament_pep257</test_depend>
  <test_depend>python3-pytest</test_depend>

  <export>
    <build_type>ament_python</build_type>
  </export>
</package>