sl-jetson c44a30561a feat: person detection + tracking (YOLOv8n TensorRT)
New package: saltybot_perception

person_detector_node.py:
- Subscribes /camera/color/image_raw + /camera/depth/image_rect_raw
  (ApproximateTimeSynchronizer, slop=50ms)
- Subscribes /camera/color/camera_info for intrinsics
- YOLOv8n inference via TensorRT FP16 engine (Orin Nano 67 TOPS)
  Falls back to ONNX Runtime when engine not found (dev/CI)
- Letterbox preprocessing (640x640), YOLOv8n post-process + NMS
- Median-window depth lookup at bbox centre (7x7 px)
- Back-projects 2D pixel + depth to 3D point in camera frame
  (both sketched below)
- tf2 transform to base_link (fallback: camera_color_optical_frame)
- Publishes:
    /person/detections  vision_msgs/Detection2DArray  all persons
    /person/target      geometry_msgs/PoseStamped     tracked person 3D
    /person/debug_image sensor_msgs/Image              (optional)
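
A minimal sketch of those two helpers, assuming a 16UC1 RealSense depth image in
millimetres and the 3x3 K matrix from camera_info (signatures are illustrative,
not necessarily the exact ones in detection_utils.py):

```python
import numpy as np

def get_depth_at(depth_img: np.ndarray, u: int, v: int, window: int = 7) -> float:
    """Median depth in metres over a window x window patch centred on (u, v).

    Zero pixels mark invalid returns and are excluded from the median.
    """
    h, w = depth_img.shape[:2]
    r = window // 2
    patch = depth_img[max(0, v - r):min(h, v + r + 1),
                      max(0, u - r):min(w, u + r + 1)]
    valid = patch[patch > 0]
    return float(np.median(valid)) / 1000.0 if valid.size else float("nan")  # mm -> m

def pixel_to_3d(u: float, v: float, z: float, K: np.ndarray) -> np.ndarray:
    """Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.

    Result is in the camera optical frame (z forward, x right, y down);
    the node then transforms it to base_link via tf2.
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```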

tracker.py — SimplePersonTracker:
- Single-target IoU-based tracker
- Picks closest valid person (smallest depth) on first lock
- Re-associates across frames using IoU threshold
- Holds last known position for configurable duration (default 2s)
- Monotonically increasing track IDs
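
The association logic boils down to roughly this (a sketch; attribute names and
the (bbox, depth) detection tuple are assumptions, timestamps in float seconds):

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

class SimplePersonTracker:
    def __init__(self, iou_threshold=0.3, hold_duration=2.0):
        self.iou_threshold = iou_threshold
        self.hold_duration = hold_duration  # seconds to coast on a lost target
        self.track_id = 0                   # monotonically increasing
        self.target = None                  # (bbox, depth) of the locked person
        self.last_seen = None               # stamp of last successful association

    def update(self, detections, stamp):
        """detections: list of depth-filtered (bbox, depth); returns target or None."""
        if self.target is None:
            if detections:  # first lock: closest valid person (smallest depth)
                self.target = min(detections, key=lambda d: d[1])
                self.track_id += 1
                self.last_seen = stamp
            return self.target
        # Re-associate by best IoU against the previous bbox
        best = max(detections, key=lambda d: iou(d[0], self.target[0]), default=None)
        if best is not None and iou(best[0], self.target[0]) >= self.iou_threshold:
            self.target, self.last_seen = best, stamp
        elif stamp - self.last_seen > self.hold_duration:
            self.target = None  # hold expired: drop the track
        return self.target      # held position is returned until expiry
```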

detection_utils.py — pure helpers (no ROS2 deps, testable standalone):
- nms(), letterbox(), remap_bbox(), get_depth_at(), pixel_to_3d()
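
For instance, nms() can be the standard greedy suppression over score-sorted
boxes (a sketch; boxes in (x1, y1, x2, y2) form):

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.45) -> list:
    """Greedy NMS: keep highest-scoring boxes, drop overlaps above iou_thresh.

    boxes: (N, 4) array in (x1, y1, x2, y2); returns indices of kept boxes.
    """
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        ix1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        iy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        ix2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        iy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        overlap = inter / (area_i + area_r - inter)
        order = rest[overlap <= iou_thresh]
    return keep
```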

scripts/build_trt_engine.py:
- Converts ONNX to TensorRT FP16 engine using TRT Python API
- Prints trtexec CLI alternative
- Includes YOLOv8n download instructions
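
The core of the conversion, sketched against the TensorRT 8.x Python API
(paths are placeholders):

```python
import tensorrt as trt

def build_fp16_engine(onnx_path: str, engine_path: str) -> None:
    """Parse an ONNX model and serialize a TensorRT FP16 engine."""
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # half precision for the tensor cores
    engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine)

# trtexec CLI alternative:
#   trtexec --onnx=yolov8n.onnx --saveEngine=yolov8n.engine --fp16
```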

config/person_detection_params.yaml:
- confidence_threshold: 0.40, min_depth: 0.5m, max_depth: 5.0m
- track_hold_duration: 2.0s, target_frame: base_link
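
In ROS2 params-file form that is roughly the following (the node name is an
assumption):

```yaml
person_detector_node:
  ros__parameters:
    confidence_threshold: 0.40
    min_depth: 0.5            # metres; discard closer depth returns
    max_depth: 5.0            # metres; discard farther depth returns
    track_hold_duration: 2.0  # seconds to hold a lost target
    target_frame: base_link
```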

launch/person_detection.launch.py:
- engine_path, onnx_path, publish_debug_image, target_frame overridable
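
The override plumbing presumably pairs DeclareLaunchArgument with
LaunchConfiguration, roughly as below (the executable name is an assumption):

```python
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node

def generate_launch_description():
    # Each argument is overridable from the CLI, e.g. engine_path:=/path/to.engine
    args = [
        DeclareLaunchArgument('engine_path', default_value=''),
        DeclareLaunchArgument('onnx_path', default_value=''),
        DeclareLaunchArgument('publish_debug_image', default_value='false'),
        DeclareLaunchArgument('target_frame', default_value='base_link'),
    ]
    detector = Node(
        package='saltybot_perception',
        executable='person_detector_node',
        parameters=[{
            'engine_path': LaunchConfiguration('engine_path'),
            'onnx_path': LaunchConfiguration('onnx_path'),
            'publish_debug_image': LaunchConfiguration('publish_debug_image'),
            'target_frame': LaunchConfiguration('target_frame'),
        }],
    )
    return LaunchDescription(args + [detector])
```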

Tests: 26/26 passing (test_tracker.py + test_postprocess.py)
- IoU computation, NMS suppression, tracker state machine,
  depth filtering, hold duration, re-association, track ID

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 23:21:24 -05:00

Jetson Nano — AI/SLAM Platform Setup

Self-balancing robot: Jetson Nano dev environment for ROS2 Humble + SLAM stack.

Stack

| Component    | Version / Part               |
|--------------|------------------------------|
| Platform     | Jetson Nano 4GB              |
| JetPack      | 4.6 (L4T R32.6.1, CUDA 10.2) |
| ROS2         | Humble Hawksbill             |
| DDS          | CycloneDDS                   |
| SLAM         | slam_toolbox                 |
| Nav          | Nav2                         |
| Depth camera | Intel RealSense D435i        |
| LiDAR        | RPLIDAR A1M8                 |
| MCU bridge   | STM32F722 (USB CDC @ 921600) |

Quick Start

# 1. Host setup (once, on fresh JetPack 4.6)
sudo bash scripts/setup-jetson.sh

# 2. Build Docker image
bash scripts/build-and-run.sh build

# 3. Start full stack
bash scripts/build-and-run.sh up

# 4. Open ROS2 shell
bash scripts/build-and-run.sh shell

Docs

- docs/pinout.md: GPIO/I2C/UART pinout reference
- docs/power-budget.md: power budget analysis (10W envelope)

Files

jetson/
├── Dockerfile              # L4T base + ROS2 Humble + SLAM packages
├── docker-compose.yml      # Multi-service stack (ROS2, RPLIDAR, D435i, STM32)
├── README.md               # This file
├── docs/
│   ├── pinout.md           # GPIO/I2C/UART pinout reference
│   └── power-budget.md     # Power budget analysis (10W envelope)
└── scripts/
    ├── entrypoint.sh       # Docker container entrypoint
    ├── setup-jetson.sh     # Host setup (udev, Docker, nvpmodel)
    └── build-and-run.sh    # Build/run helper

Power Budget (Summary)

| Scenario               | Total   |
|------------------------|---------|
| Idle                   | 2.9 W   |
| Nominal (SLAM active)  | ~10.2 W |
| Peak                   | 15.4 W  |

Target: 10 W (MAXN nvpmodel). To stay inside the envelope, put the RPLIDAR in standby when idle and run the D435i at 640x480. See docs/power-budget.md for the full analysis.
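
The power profile is selected on the host with nvpmodel (on the Nano, mode 0 is
the 10W MAXN profile, mode 1 is 5W):

```bash
sudo nvpmodel -m 0    # select MAXN (10W)
sudo nvpmodel -q      # confirm the active profile
sudo jetson_clocks    # optional: pin clocks to the profile maximum
```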