0ecf341c57
feat: Add Issue #365 — UWB DW3000 anchor/tag tracking (bearing + distance)
...
Software-complete implementation of the two-anchor UWB ranging stack.
All ROS2 / serial code written against an abstract interface so tests run
without physical hardware (anchors on order).
New message
- UwbTarget.msg: valid, bearing_deg, distance_m, confidence,
anchor0/1_dist_m, baseline_m, fix_quality (0=none 1=single 2=dual)
Core library — _uwb_tracker.py (pure Python, no ROS2/runtime deps)
- parse_frame(): ASCII RANGE,<id>,<tag>,<mm> protocol decoder
- bearing_from_ranges(): law-of-cosines 2-anchor bearing with confidence
(penalises extreme angles + close-range geometry)
- bearing_single_anchor(): fallback bearing=0, conf≤0.3
- BearingKalman: 1-D constant-velocity Kalman filter [bearing, rate]
- UwbRangingState: thread-safe per-anchor state + stale timeout + Kalman
- AnchorSerialReader: background thread, readline() interface (real or mock)
ROS2 node — uwb_node.py
- Opens /dev/ttyUSB0 + /dev/ttyUSB1 (configurable)
- Non-fatal serial open failure (will publish FIX_NONE until plugged in)
- Publishes /saltybot/uwb_target at 10 Hz (configurable)
- Graceful shutdown: stops reader threads
Tests — test/test_uwb_tracker.py: 64/64 passing
- Frame parsing: valid, malformed, STATUS, CR/LF, mm→m conversion
- Bearing geometry: straight-ahead, ±45°, ±30°, symmetry, confidence
- Kalman: seeding, smoothing, convergence, rate tracking
- UwbRangingState: single/dual fix, stale timeout, thread safety
- AnchorSerialReader: mock serial, bytes decode, stop()
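The two-anchor bearing math above can be sketched as follows. This is a minimal illustration of how `bearing_from_ranges()` might work, assuming anchors at (±baseline/2, 0) with bearing measured from the forward axis; the actual confidence penalties are omitted.

```python
import math

def bearing_from_ranges(d0, d1, baseline):
    """Bearing (deg) of the tag from two anchor ranges.

    Anchors assumed at (-baseline/2, 0) and (+baseline/2, 0); bearing is
    measured from the forward (+y) axis, positive toward anchor 1.
    Returns None when the ranges cannot close the triangle.
    """
    # Lateral offset of the tag relative to the baseline midpoint
    # (from subtracting the two range equations).
    x = (d0**2 - d1**2) / (2.0 * baseline)
    # Forward distance, from d0^2 = (x + baseline/2)^2 + y^2.
    y_sq = d0**2 - (x + baseline / 2.0) ** 2
    if y_sq < 0.0:
        return None  # geometrically inconsistent ranges
    return math.degrees(math.atan2(x, math.sqrt(y_sq)))
```

Equal ranges give bearing 0 (straight ahead), matching the symmetry cases the tests cover.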
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 15:25:23 -05:00
c620dc51a7
feat: Add Issue #363 — P0 person tracking for follow-me mode
...
Implements real-time person detection + tracking pipeline for the
follow-me motion controller on Jetson Orin Nano Super (D435i).
Core components
- TargetTrack.msg: bearing_deg, distance_m, confidence, bbox, vel_bearing_dps,
vel_dist_mps, depth_quality (0-3)
- _person_tracker.py (pure-Python, no ROS2/runtime deps):
· 8-state constant-velocity Kalman filter [cx,cy,w,h,vcx,vcy,vw,vh]
· Greedy IoU data association
· HSV torso colour histogram re-ID (16H×8S, Bhattacharyya similarity)
with fixed saturation clamping (s = (cmax−cmin)/cmax, clipped to [0,1])
· FollowTargetSelector: nearest person auto-lock, hold_frames hysteresis
· TENTATIVE→ACTIVE after min_hits; LOST track removal after max_lost_frames
with per-frame lost_age increment across all LOST tracks
· bearing_from_pixel, depth_at_bbox (median, quality flags)
- person_tracking_node.py:
· YOLOv8n via ultralytics (TRT FP16 on first run) → HOG+SVM fallback
· Subscribes colour + depth + camera_info + follow_start/stop
· Publishes /saltybot/target_track at ≤30 fps
- test/test_person_tracker.py: 59/59 tests passing
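The greedy IoU association step can be sketched like this; a minimal version, assuming (x1, y1, x2, y2) boxes and an illustrative `min_iou` gate (the shipped threshold may differ).

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_iou_match(tracks, detections, min_iou=0.3):
    """Greedily pair track boxes with detection boxes by descending IoU."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < min_iou:
            break  # sorted descending: nothing better remains
        if ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return matches
```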
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 15:19:02 -05:00
672120bb50
feat(perception): geometric face emotion classifier (Issue #359)

...
Classifies facial expressions into neutral/happy/surprised/angry/sad
using geometric rules over MediaPipe Face Mesh landmarks — no ML model
required at runtime.
Rules
-----
surprised: brow_raise > 0.12 AND eye_open > 0.07 AND mouth_open > 0.07
happy: smile > 0.025 (lip corners above lip midpoint)
angry: brow_furl > 0.02 AND smile < 0.01
sad: smile < -0.025 AND brow_furl < 0.015
neutral: default
Changes
-------
- saltybot_scene_msgs/msg/FaceEmotion.msg — per-face emotion + features
- saltybot_scene_msgs/msg/FaceEmotionArray.msg
- saltybot_scene_msgs/CMakeLists.txt — register new msgs
- _face_emotion.py — pure-Python: FaceLandmarks, compute_features,
classify_emotion, detect_emotion, from_mediapipe
- face_emotion_node.py — subscribes /camera/color/image_raw,
publishes /saltybot/face_emotions (≤15 fps)
- test/test_face_emotion.py — 48 tests, all passing
- setup.py — add face_emotion entry point
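The rule table above translates directly into code. A sketch of `classify_emotion`, taking the features as plain floats and applying the rules in the listed priority order:

```python
def classify_emotion(brow_raise, eye_open, mouth_open, smile, brow_furl):
    """Rule-based emotion from normalised face-mesh features.

    Thresholds and ordering are taken from the rule table in this commit;
    the real function consumes a features struct rather than bare floats.
    """
    if brow_raise > 0.12 and eye_open > 0.07 and mouth_open > 0.07:
        return "surprised"
    if smile > 0.025:  # lip corners above lip midpoint
        return "happy"
    if brow_furl > 0.02 and smile < 0.01:
        return "angry"
    if smile < -0.025 and brow_furl < 0.015:
        return "sad"
    return "neutral"
```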
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 14:39:49 -05:00
677e6eb75e
feat(perception): MFCC nearest-centroid audio scene classifier (Issue #353)
...
Classifies ambient audio into indoor/outdoor/traffic/park at 1 Hz using
a 16-d feature vector (13 MFCC + spectral centroid + rolloff + ZCR) with
a normalised nearest-centroid classifier. Centroids are computed at import
time from seeded synthetic prototypes, ensuring deterministic behaviour.
Changes
-------
- saltybot_scene_msgs/msg/AudioScene.msg — label + confidence + features[16]
- saltybot_scene_msgs/CMakeLists.txt — register AudioScene.msg
- _audio_scene.py — pure-numpy feature extraction + NearestCentroidClassifier
- audio_scene_node.py — subscribes /audio/audio, publishes /saltybot/audio_scene
- test/test_audio_scene.py — 53 tests (all passing) with synthetic audio
- setup.py — add audio_scene entry point
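The normalised nearest-centroid step can be sketched as below (shown with a 2-d toy feature space rather than the real 16-d one). The margin-based confidence formula is an assumption for illustration, not necessarily the package's.

```python
import numpy as np

class NearestCentroidClassifier:
    """Nearest centroid over z-scored features; confidence from distance margin."""

    def __init__(self, centroids, mean, std):
        # centroids: {label: feature vector}, all in the same normalised space.
        self.labels = list(centroids)
        self.centroids = np.stack([centroids[l] for l in self.labels])
        self.mean, self.std = mean, std

    def predict(self, features):
        z = (np.asarray(features, dtype=float) - self.mean) / self.std
        dists = np.linalg.norm(self.centroids - z, axis=1)
        order = np.argsort(dists)
        best, runner_up = dists[order[0]], dists[order[1]]
        # Margin confidence in [0, 1]: 0 when tied, approaching 1 when unambiguous.
        conf = 1.0 - best / runner_up if runner_up > 0 else 0.0
        return self.labels[order[0]], conf
```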
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 14:03:11 -05:00
2a9b03dd76
feat(perception): depth-based obstacle size estimator (Issue #348)
...
Projects LIDAR clusters into the D435i depth image to estimate 3-D
obstacle width and height in metres.
- saltybot_scene_msgs/msg/ObstacleSize.msg — new message
- saltybot_scene_msgs/msg/ObstacleSizeArray.msg — array wrapper
- saltybot_scene_msgs/CMakeLists.txt — register new msgs
- saltybot_bringup/_obstacle_size.py — pure-Python helper:
CameraParams (intrinsics + LIDAR→camera extrinsics)
ObstacleSizeEstimate (NamedTuple)
lidar_to_camera() LIDAR frame → camera frame transform
project_to_pixel() pinhole projection + bounds check
sample_depth_median() uint16 depth image window → median metres
estimate_height() vertical strip scan for row extent → height_m
estimate_cluster_size() full pipeline: cluster → size estimate
- saltybot_bringup/obstacle_size_node.py — ROS2 node
sub: /scan, /camera/depth/image_rect_raw, /camera/depth/camera_info
pub: /saltybot/obstacle_sizes (ObstacleSizeArray)
width from LIDAR bbox; height from depth strip back-projection;
graceful fallback (LIDAR-only) when depth image unavailable;
intrinsics latched from CameraInfo on first arrival
- test/test_obstacle_size.py — 33 tests, 33 passing
- setup.py — add obstacle_size entry
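The projection and depth-sampling helpers can be sketched as below; a minimal version of `project_to_pixel()` and `sample_depth_median()`, assuming the optical-frame convention (x right, y down, z forward) and a uint16 millimetre depth image with zero meaning no return.

```python
import numpy as np

def project_to_pixel(point_cam, fx, fy, cx, cy, width, height):
    """Pinhole-project a camera-frame point to integer (u, v).

    Returns None when the point is behind the camera or out of bounds.
    """
    x, y, z = point_cam
    if z <= 0.0:
        return None
    u, v = fx * x / z + cx, fy * y / z + cy
    if 0 <= u < width and 0 <= v < height:
        return int(u), int(v)
    return None

def sample_depth_median(depth_mm, u, v, half=2):
    """Median of a (2*half+1)^2 window of a uint16 mm depth image, in metres."""
    win = depth_mm[max(0, v - half):v + half + 1,
                   max(0, u - half):u + half + 1]
    valid = win[win > 0]  # zero depth = no return
    if valid.size == 0:
        return None
    return float(np.median(valid)) / 1000.0
```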
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 13:32:41 -05:00
bd9cb6da35
feat(perception): lane/path edge detector (Issue #339)
...
Adds Canny+Hough+bird-eye perspective pipeline for detecting left/right
path edges from the forward camera. Pure-Python helper (_path_edges.py)
is fully tested; ROS2 node publishes PathEdges on /saltybot/path_edges.
- saltybot_scene_msgs/msg/PathEdges.msg — new message
- saltybot_scene_msgs/CMakeLists.txt — register PathEdges.msg
- saltybot_bringup/_path_edges.py — PathEdgeConfig, PathEdgesResult,
build/apply_homography, canny_edges,
hough_lines, classify_lines,
average_line, warp_segments,
process_frame
- saltybot_bringup/path_edges_node.py — ROS2 node (sensor_msgs/Image →
PathEdges, parameters for all
tunable Canny/Hough/birdseye params)
- test/test_path_edges.py — 38 tests, 38 passing
- setup.py — add path_edges console_script
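The left/right split in `classify_lines` can be sketched without OpenCV; a simplified version, assuming image coordinates (y grows downward) and an illustrative minimum-slope gate — the real tunables live in PathEdgeConfig.

```python
def classify_lines(segments, img_width, min_abs_slope=0.3):
    """Split Hough segments into left/right path-edge candidates.

    In image coordinates a left edge typically slopes negatively and sits
    in the left half; a right edge the opposite. Near-horizontal segments
    are discarded; perfectly vertical ones are skipped in this sketch.
    """
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # vertical: side is ambiguous from slope alone
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < min_abs_slope:
            continue  # near-horizontal, not a path edge
        mid_x = (x1 + x2) / 2.0
        if slope < 0 and mid_x < img_width / 2:
            left.append((x1, y1, x2, y2))
        elif slope > 0 and mid_x >= img_width / 2:
            right.append((x1, y1, x2, y2))
    return left, right
```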
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 11:33:22 -05:00
eb61207532
feat(perception): dynamic obstacle velocity estimator (Issue #326)
...
Adds ObstacleVelocity/ObstacleVelocityArray msgs and an
ObstacleVelocityNode that clusters /scan points, tracks each centroid
with a constant-velocity Kalman filter, and publishes velocity vectors
on /saltybot/obstacle_velocities.
New messages (saltybot_scene_msgs):
msg/ObstacleVelocity.msg — obstacle_id, centroid, velocity,
speed_mps, width_m, depth_m,
point_count, confidence, is_static
msg/ObstacleVelocityArray.msg — array wrapper with header
New files (saltybot_bringup):
saltybot_bringup/_obstacle_velocity.py — pure helpers (no ROS2 deps)
KalmanTrack constant-velocity 2-D KF: predict(dt) / update(centroid)
coasting counter → alive flag; confidence = age/n_init
associate() greedy nearest-centroid matching (O(N·M), strict <)
ObstacleTracker predict-all → associate → update/spawn → prune cycle
saltybot_bringup/obstacle_velocity_node.py
Subscribes /scan (BEST_EFFORT); reuses _lidar_clustering helpers;
publishes ObstacleVelocityArray on /saltybot/obstacle_velocities
Parameters: distance_threshold_m=0.20, min_points=3, range 0.05–12m,
max_association_dist_m=0.50, max_coasting_frames=5,
n_init_frames=3, q_pos=0.05, q_vel=0.50, r_pos=0.10,
static_speed_threshold=0.10
test/test_obstacle_velocity.py — 48 tests, all passing
Modified:
saltybot_scene_msgs/CMakeLists.txt — register new msgs
saltybot_bringup/setup.py — add obstacle_velocity console_script
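The predict/update cycle of KalmanTrack can be sketched as a standard constant-velocity 2-D filter; a minimal version using the q_pos/q_vel/r_pos parameters above, with the coasting counter and confidence bookkeeping left out.

```python
import numpy as np

class KalmanTrack:
    """Constant-velocity 2-D Kalman filter over state [x, y, vx, vy]."""

    def __init__(self, centroid, q_pos=0.05, q_vel=0.50, r_pos=0.10):
        self.x = np.array([centroid[0], centroid[1], 0.0, 0.0])
        self.P = np.diag([r_pos, r_pos, 1.0, 1.0])
        self.q_pos, self.q_vel, self.r_pos = q_pos, q_vel, r_pos

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt  # position integrates velocity
        Q = np.diag([self.q_pos, self.q_pos, self.q_vel, self.q_vel]) * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, centroid):
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # observe position only
        R = np.eye(2) * self.r_pos
        innov = np.asarray(centroid) - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(4) - K @ H) @ self.P

    @property
    def velocity(self):
        return self.x[2:].copy()
```

Feeding it centroids that drift steadily makes the velocity estimate converge toward the true motion, which is what the speed_mps / is_static outputs build on.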
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 06:53:04 -05:00
4dbb4c6f0d
feat(perception): appearance-based person re-identification (Issue #322)
...
Adds PersonTrack/PersonTrackArray msgs and a PersonReidNode that matches
individuals across camera views using HSV colour histogram appearance
features and cosine similarity, with EMA gallery update and 30s stale timeout.
New messages (saltybot_scene_msgs):
msg/PersonTrack.msg — track_id, camera_id, bbox, confidence,
first_seen, last_seen, is_stale
msg/PersonTrackArray.msg — array wrapper with header
New files (saltybot_bringup):
saltybot_bringup/_person_reid.py — pure appearance helpers (no ROS2 deps)
extract_hsv_histogram() 2-D HS histogram (H=16, S=8 → 128-dim, L2-norm)
cosine_similarity() handles zero/non-unit vectors
match_track() best gallery match above threshold (strict >)
TrackGallery add/update/match/mark_stale/prune_stale
TrackEntry mutable dataclass; EMA feature blend (α=0.3)
saltybot_bringup/person_reid_node.py
Subscribes /camera/color/image_raw + /saltybot/scene/objects (BEST_EFFORT)
Crops COCO person (class_id=0) ROIs; extracts features; matches gallery
Publishes PersonTrackArray on /saltybot/person_tracks at 5 Hz
Parameters: camera_id, similarity_threshold=0.75, stale_timeout_s=30,
max_tracks=20, publish_hz=5.0
test/test_person_reid.py — 50 tests, all passing
Modified:
saltybot_scene_msgs/CMakeLists.txt — register PersonTrack/Array msgs
saltybot_bringup/setup.py — add person_reid console_script
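The matching and gallery-update primitives can be sketched as below: `cosine_similarity` with the zero-vector guard, and the EMA feature blend with the commit's α=0.3. Re-normalising after the blend is an assumption of this sketch.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity; returns 0.0 when either vector is all-zero."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0.0 or nb == 0.0:
        return 0.0
    return float(np.dot(a, b) / (na * nb))

def ema_update(gallery_feat, new_feat, alpha=0.3):
    """Blend a fresh appearance feature into the stored one (EMA, alpha=0.3)."""
    blended = (1.0 - alpha) * gallery_feat + alpha * new_feat
    n = np.linalg.norm(blended)
    return blended / n if n > 0 else blended  # keep gallery entries unit-norm
```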
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 06:45:43 -05:00
f5093ecd34
feat(perception): HSV color object segmenter — Issue #274
...
- Add ColorDetection.msg + ColorDetectionArray.msg to saltybot_scene_msgs
- Add _color_segmenter.py: HsvRange/ColorBlob types, COLOR_RANGES defaults,
mask_for_color() (dual-band red wrap), find_color_blobs() with morph open,
contour extraction, area filter and max-blob-per-color limit
- Add color_segment_node.py: subscribes /camera/color/image_raw (BEST_EFFORT),
publishes /saltybot/color_objects (ColorDetectionArray) per frame;
active_colors, min_area_px, max_blobs_per_color params
- Add saltybot_scene_msgs exec_depend to saltybot_bringup/package.xml
- Register color_segmenter console_script in setup.py
- 34/34 unit tests pass (no ROS2 required)
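The dual-band red wrap in `mask_for_color()` can be sketched in plain numpy; thresholds here are illustrative, not the package's tuned COLOR_RANGES, and the morphological open is omitted.

```python
import numpy as np

def mask_for_red(h, s, v, s_min=80, v_min=60):
    """Dual-band red mask over uint8 HSV planes (OpenCV hue range 0-179).

    Red wraps around hue 0, so two hue bands (0-10 and 170-179) are OR-ed
    before AND-ing with the saturation/value gates.
    """
    hue_hit = (h <= 10) | (h >= 170)
    return hue_hit & (s >= s_min) & (v >= v_min)
```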
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 17:32:41 -05:00
b2fdc3a500
feat(perception): QR code reader on CSI surround frames (Issue #233)
...
Adds cv2.QRCodeDetector-based QR reader that subscribes to all four IMX219
CSI camera streams, deduplicates detections with a 2 s per-payload cooldown,
and publishes /saltybot/qr_codes (QRDetectionArray) at 10 Hz. New
QRDetection / QRDetectionArray messages added to saltybot_scene_msgs.
16/16 pure-Python tests pass (no ROS2 required).
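The 2 s per-payload cooldown dedup can be sketched as a timestamp map; names here are illustrative, assuming a monotonic clock supplied by the caller.

```python
class PayloadCooldown:
    """Suppress repeat QR payloads seen within `cooldown_s` seconds."""

    def __init__(self, cooldown_s=2.0):
        self.cooldown_s = cooldown_s
        self._last_seen = {}  # payload -> last-published timestamp

    def should_publish(self, payload, now):
        last = self._last_seen.get(payload)
        if last is not None and now - last < self.cooldown_s:
            return False  # still cooling down; timestamp not refreshed
        self._last_seen[payload] = now
        return True
```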
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 12:12:57 -05:00
a9717cd602
feat(scene): semantic scene understanding — YOLOv8n TRT + room classification + hazards (Issue #141)
...
New packages:
saltybot_scene_msgs — 4 msgs (SceneObject, SceneObjectArray, RoomClassification, BehaviorHint)
saltybot_scene — 3 nodes + launch + config + TRT build script
Nodes:
scene_detector_node — YOLOv8-nano TRT FP16 (target ≥15 FPS @ 640×640);
synchronized RGB+depth input; filters scene classes
(chairs, tables, doors, stairs, pets, appliances);
3D back-projection via aligned depth; depth-based hazard
scan (HazardClassifier); room classification at 2Hz;
publishes /social/scene/objects + /social/scene/hazards
+ /social/scene/room_type
behavior_adapter_node — adapts speed_limit_mps + personality_mode from room
type and hazard severity; publishes BehaviorHint on
/social/scene/behavior_hint (on-change + 1Hz heartbeat)
costmap_publisher_node — converts SceneObjectArray → PointCloud2 disc rings
for Nav2 obstacle_layer + MarkerArray for RViz;
publishes /social/scene/obstacle_cloud
Modules:
yolo_utils.py — YOLOv8 preprocess/postprocess (letterbox, cx/cy/w/h decode,
NMS), COCO+custom class table (door=80, stairs=81, wet=82),
hazard-by-class mapping
room_classifier.py — rule-based (object co-occurrence weights + softmax) with
optional MobileNetV2 TRT/ONNX backend (Places365-style 8-class)
hazard_classifier.py — depth-only hazard patterns: drop (row-mean cliff), stairs
(alternating depth bands), wet floor (depth std-dev), glass
(zero depth + strong Sobel edges in RGB)
scripts/build_scene_trt.py — export YOLOv8n → ONNX → TRT FP16; optionally build
MobileNetV2 room classifier engine; includes benchmark
Topic map:
/social/scene/objects SceneObjectArray ~15+ FPS
/social/scene/room_type RoomClassification ~2 Hz
/social/scene/hazards SceneObjectArray on hazard
/social/scene/behavior_hint BehaviorHint on-change + 1 Hz
/social/scene/obstacle_cloud PointCloud2 Nav2 obstacle_layer
/social/scene/object_markers MarkerArray RViz debug
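The rule-based branch of room_classifier.py (object co-occurrence weights + softmax) can be sketched as below; the weight values are illustrative, not the shipped table.

```python
import numpy as np

def classify_room(detected_classes, weights, labels):
    """Score rooms by summed object co-occurrence weights, softmax to confidence.

    weights: (n_rooms, n_classes) matrix; detected_classes: iterable of
    class ids present in the frame. Returns (label, confidence).
    """
    scores = np.zeros(len(labels))
    for c in detected_classes:
        scores += weights[:, c]  # each detection votes for rooms it co-occurs with
    e = np.exp(scores - scores.max())  # numerically stable softmax
    probs = e / e.sum()
    i = int(np.argmax(probs))
    return labels[i], float(probs[i])
```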
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 09:59:53 -05:00