cfa8ee111d
Merge pull request 'feat: Replace GNOME with Cage+Chromium kiosk (Issue #374)' (#377) from sl-webui/issue-374-cage-kiosk into main
2026-03-03 17:46:14 -05:00
b04fd916ff
Merge pull request 'feat: MageDok 7in display setup for Orin (Issue #369)' (#373) from sl-webui/issue-369-display-setup into main
2026-03-03 17:20:15 -05:00
042c0529a1
feat: Add Issue #375 — adaptive camera power mode manager
...
Implements a 5-mode FSM for dynamic sensor activation based on speed,
scenario, and battery level — avoids running all 4 CSI cameras + full
sensor suite when unnecessary, saving ~1 GB RAM and significant compute.
Five modes (sensor sets):
SLEEP — no sensors (~150 MB RAM)
SOCIAL — webcam only (~400 MB RAM, parked/socialising)
AWARE — front CSI + RealSense + LIDAR (~850 MB RAM, indoor/<5km/h)
ACTIVE — front+rear CSI + RealSense + LIDAR + UWB (~1.15 GB, 5-15km/h)
FULL — all 4 CSI + RealSense + LIDAR + UWB (~1.55 GB, >15km/h)
Core library — _camera_power_manager.py (pure Python, no ROS2 deps)
- CameraPowerFSM.update(speed_mps, scenario, battery_pct) → ModeDecision
- Speed-driven upgrades: instant (safety-first)
- Speed-driven downgrades: held for downgrade_hold_s (default 5s, anti-flap)
- Scenario overrides (instant, bypass hysteresis):
· CROSSING / EMERGENCY → FULL always
· PARKED → SOCIAL immediately
· INDOOR → cap at AWARE (never ACTIVE/FULL indoors)
- Battery low cap: battery_pct < threshold → cap at AWARE
- Idle timer: near-zero speed holds at AWARE for idle_to_social_s (30s)
before dropping to SOCIAL (avoids cycling at traffic lights)
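The speed-driven part of the FSM above can be sketched as follows (illustrative only: class and attribute names are invented here, not the actual _camera_power_manager.py API; thresholds are the 5/15 km/h boundaries from the mode table, converted to m/s):

```python
import time

MODES = ["SLEEP", "SOCIAL", "AWARE", "ACTIVE", "FULL"]
SPEED_THRESHOLDS_MPS = {"ACTIVE": 5 / 3.6, "FULL": 15 / 3.6}

class SpeedModeFSM:
    """Instant upgrades, downgrades held for downgrade_hold_s (anti-flap)."""

    def __init__(self, downgrade_hold_s=5.0):
        self.mode = "AWARE"
        self.hold_s = downgrade_hold_s
        self._pending = None               # (target_mode, first_seen_ts)

    def _target_for(self, speed_mps):
        if speed_mps > SPEED_THRESHOLDS_MPS["FULL"]:
            return "FULL"
        if speed_mps > SPEED_THRESHOLDS_MPS["ACTIVE"]:
            return "ACTIVE"
        return "AWARE"

    def update(self, speed_mps, now=None):
        now = time.monotonic() if now is None else now
        target = self._target_for(speed_mps)
        if MODES.index(target) > MODES.index(self.mode):
            self.mode, self._pending = target, None    # upgrade: instant
        elif MODES.index(target) < MODES.index(self.mode):
            if self._pending is None or self._pending[0] != target:
                self._pending = (target, now)          # start hold timer
            elif now - self._pending[1] >= self.hold_s:
                self.mode, self._pending = target, None
        else:
            self._pending = None                       # speed recovered: cancel hold
        return self.mode
```

Scenario overrides and the battery cap would then be applied on top of this result, as described above.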
ROS2 node — camera_power_node.py
- Subscribes: /saltybot/speed, /saltybot/scenario, /saltybot/battery_pct
- Publishes: /saltybot/camera_mode (CameraPowerMode, latched, 2 Hz)
- Publishes: /saltybot/camera_cmd/{front,rear,left,right,realsense,lidar,uwb,webcam}
(std_msgs/Bool, TRANSIENT_LOCAL so late subscribers get last state)
- Logs mode transitions with speed/scenario/battery context
Tests — test/test_camera_power_manager.py: 64/64 passing
- Sensor configs: counts, correct flags per mode, safety invariants
- Speed upgrades: instantaneous at all thresholds, no hold required
- Downgrade hysteresis: hold timer, cancellation on speed spike, hold=0 instant
- Scenario overrides: CROSSING/EMERGENCY/PARKED/INDOOR, all CSIs on crossing
- Battery low: cap at AWARE, threshold boundary
- Idle timer: delay AWARE→SOCIAL, motion resets timer
- Reset, labels, ModeDecision fields
- Integration: full ride scenario (walk→jog→sprint→crossing→indoor→park→low bat)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 16:48:17 -05:00
e2587b60fb
feat: SaltyFace web app UI for Chromium kiosk (Issue #370)
...
Animated robot expression interface as lightweight web application:
**Architecture:**
- HTML5 Canvas rendering engine
- Node.js HTTP server (localhost:3000)
- ROSLIB WebSocket bridge for ROS2 topics
- Fullscreen responsive design (1024×600)
**Features:**
- 8 emotional states (happy, alert, confused, sleeping, excited, emergency, listening, talking)
- Real-time ROS2 subscriptions:
- /saltybot/state (emotion triggers)
- /saltybot/battery (status display)
- /saltybot/target_track (EXCITED emotion)
- /saltybot/obstacles (ALERT emotion)
- /social/speech/is_speaking (TALKING emotion)
- /social/speech/is_listening (LISTENING emotion)
- Tap-to-toggle status overlay
- 60fps Canvas animation on Wayland
- ~80MB total memory (Node.js + browser)
**Files:**
- public/index.html — Main page (1024×600 fullscreen)
- public/salty-face.js — Canvas rendering + ROS2 integration
- server.js — Node.js HTTP server with CORS support
- systemd/salty-face-server.service — Auto-start systemd service
- docs/SALTY_FACE_WEB_APP.md — Complete setup & API documentation
**Integration:**
- Runs in Chromium kiosk (Issue #374)
- Depends on rosbridge_server for WebSocket bridge
- Serves on localhost:3000 (configurable)
**Next:** Issue #371 (Accessibility enhancements)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 16:42:41 -05:00
82b8f40b39
feat: Replace GNOME with Cage + Chromium kiosk (Issue #374)
...
Lightweight fullscreen kiosk for MageDok 7" display:
**Architecture:**
- Cage: Minimal Wayland compositor (replaces GNOME)
- Chromium: Fullscreen kiosk browser for SaltyFace web UI
- PulseAudio: HDMI audio routing (from Issue #369)
- Touch: HID input from MageDok USB device
**Memory Savings:**
- GNOME desktop: ~650MB RAM
- Cage + Chromium: ~200MB RAM
- Net gain: ~450MB for ROS2 workloads
**Files:**
- config/cage-magedok.ini — Cage display settings (1024×600@60Hz)
- config/wayland-magedok.conf — Wayland output configuration
- scripts/chromium_kiosk.sh — Cage + Chromium launcher
- systemd/chromium-kiosk.service — Auto-start systemd service
- launch/cage_display.launch.py — ROS2 launch configuration
- docs/CAGE_CHROMIUM_KIOSK.md — Complete setup & troubleshooting guide
**Next:** Issue #370 (Salty Face as web app in Chromium kiosk)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 16:41:00 -05:00
6592b58f65
feat: Add Issue #350 — smooth velocity ramp controller
...
Adds a rate-limiting shim between raw /cmd_vel and the drive stack to
prevent wheel slip, tipping, and jerky motion from step velocity inputs.
Core library — _velocity_ramp.py (pure Python, no ROS2 deps)
- VelocityRamp: applies independent accel/decel limits to linear-x and
angular-z with configurable max_lin_accel, max_lin_decel,
max_ang_accel, max_ang_decel
- _ramp_axis(): per-axis rate limiter with correct accel/decel selection
(decel when |target| < |current| or sign reversal; accel otherwise)
- Emergency stop: step(0.0, 0.0) bypasses ramp → immediate zero output
- Asymmetric limits supported (e.g. faster decel than accel)
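The per-axis limiter described above amounts to a few lines (a sketch with an invented function name, not the actual _ramp_axis signature):

```python
def ramp_axis(current, target, max_accel, max_decel, dt):
    """One rate-limited step toward target.

    Decel limit applies when the magnitude is shrinking or the sign
    reverses; accel limit applies otherwise (matching the selection
    rule described above).
    """
    decelerating = abs(target) < abs(current) or target * current < 0
    limit = (max_decel if decelerating else max_accel) * dt
    delta = target - current
    if abs(delta) <= limit:
        return target                  # within one step: snap, no overshoot
    return current + (limit if delta > 0 else -limit)
```

Called once per timer tick at rate_hz with dt = 1/rate_hz; asymmetric limits fall out of passing different max_accel/max_decel values.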
ROS2 node — velocity_ramp_node.py
- Subscribes /cmd_vel, publishes /cmd_vel_smooth at configurable rate_hz
- Parameters: max_lin_accel (0.5 m/s²), max_lin_decel (0.5 m/s²),
max_ang_accel (1.0 rad/s²), max_ang_decel (1.0 rad/s²), rate_hz (50)
Tests — test/test_velocity_ramp.py: 50/50 passing
- _ramp_axis: accel/decel selection, sign reversal, overshoot prevention
- Construction: invalid params raise ValueError, defaults verified
- Linear/angular ramp-up: step size, target reached, no overshoot
- Deceleration: asymmetric limits, partial decel (non-zero target)
- Emergency stop: immediate zero, state cleared, resume from zero
- Sign reversal: passes through zero without jumping
- Reset: state cleared, next ramp starts from zero
- Monotonicity: linear and angular outputs are monotone toward target
- Rate accuracy: 50Hz/10Hz step sizes, 100-step convergence verified
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 15:45:05 -05:00
45d456049a
feat: MageDok 7in display setup for Jetson Orin (Issue #369)
...
Add complete display integration for MageDok 7" IPS touchscreen:
Configuration Files:
- X11 display config (xorg-magedok.conf) — 1024×600 @ 60Hz
- PulseAudio routing (pulseaudio-magedok.conf) — HDMI audio to speakers
- Udev rules (90-magedok-touch.rules) — USB touch device permissions
- Systemd service (magedok-display.service) — auto-start on boot
ROS2 Launch:
- magedok_display.launch.py — coordinate display/touch/audio setup
Helper Scripts:
- verify_display.py — validate 1024×600 resolution via xrandr
- touch_monitor.py — detect MageDok USB touch, publish status
- audio_router.py — configure PulseAudio HDMI sink routing
Documentation:
- MAGEDOK_DISPLAY_SETUP.md — complete installation and troubleshooting guide
Features:
✓ DisplayPort → HDMI video from Orin DP connector
✓ USB touch input as HID device (driver-free)
✓ HDMI audio routing to built-in speakers
✓ 1024×600 native resolution verification
✓ Systemd auto-launch on boot (no login prompt)
✓ Headless fallback when display disconnected
✓ ROS2 status monitoring (touch/audio/resolution)
Supports Salty Face UI (Issue #370) and accessibility features (Issue #371)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 15:44:03 -05:00
0ecf341c57
feat: Add Issue #365 — UWB DW3000 anchor/tag tracking (bearing + distance)
...
Software-complete implementation of the two-anchor UWB ranging stack.
All ROS2 / serial code written against an abstract interface so tests run
without physical hardware (anchors on order).
New message
- UwbTarget.msg: valid, bearing_deg, distance_m, confidence,
anchor0/1_dist_m, baseline_m, fix_quality (0=none 1=single 2=dual)
Core library — _uwb_tracker.py (pure Python, no ROS2/runtime deps)
- parse_frame(): ASCII RANGE,<id>,<tag>,<mm> protocol decoder
- bearing_from_ranges(): law-of-cosines 2-anchor bearing with confidence
(penalises extreme angles + close-range geometry)
- bearing_single_anchor(): fallback bearing=0, conf≤0.3
- BearingKalman: 1-D constant-velocity Kalman filter [bearing, rate]
- UwbRangingState: thread-safe per-anchor state + stale timeout + Kalman
- AnchorSerialReader: background thread, readline() interface (real or mock)
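The two-anchor geometry above reduces to planar trilateration; a minimal sketch (assumed anchor layout: anchors at x = ±baseline/2 on the robot's front, forward along +y; the real bearing_from_ranges() additionally scores confidence):

```python
import math

def bearing_from_ranges(r0_m, r1_m, baseline_m):
    """Bearing + distance from two anchor ranges.

    Returns (bearing_deg, distance_m), or None when the ranges are
    geometrically inconsistent with the baseline.
    """
    # From r0^2 - r1^2 = 2*b*x with anchors at (-b/2, 0) and (+b/2, 0):
    x = (r0_m ** 2 - r1_m ** 2) / (2.0 * baseline_m)
    y_sq = r0_m ** 2 - (x + baseline_m / 2.0) ** 2
    if y_sq < 0:
        return None                     # infeasible geometry
    y = math.sqrt(y_sq)
    bearing_deg = math.degrees(math.atan2(x, y))   # 0 deg = straight ahead
    return bearing_deg, math.hypot(x, y)
```

Equal ranges give bearing 0; the confidence penalties mentioned above would kick in as |bearing| grows or distance approaches the baseline.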
ROS2 node — uwb_node.py
- Opens /dev/ttyUSB0 + /dev/ttyUSB1 (configurable)
- Non-fatal serial open failure (will publish FIX_NONE until plugged in)
- Publishes /saltybot/uwb_target at 10 Hz (configurable)
- Graceful shutdown: stops reader threads
Tests — test/test_uwb_tracker.py: 64/64 passing
- Frame parsing: valid, malformed, STATUS, CR/LF, mm→m conversion
- Bearing geometry: straight-ahead, ±45°, ±30°, symmetry, confidence
- Kalman: seeding, smoothing, convergence, rate tracking
- UwbRangingState: single/dual fix, stale timeout, thread safety
- AnchorSerialReader: mock serial, bytes decode, stop()
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 15:25:23 -05:00
c620dc51a7
feat: Add Issue #363 — P0 person tracking for follow-me mode
...
Implements real-time person detection + tracking pipeline for the
follow-me motion controller on Jetson Orin Nano Super (D435i).
Core components
- TargetTrack.msg: bearing_deg, distance_m, confidence, bbox, vel_bearing_dps,
vel_dist_mps, depth_quality (0-3)
- _person_tracker.py (pure-Python, no ROS2/runtime deps):
· 8-state constant-velocity Kalman filter [cx,cy,w,h,vcx,vcy,vw,vh]
· Greedy IoU data association
· HSV torso colour histogram re-ID (16H×8S, Bhattacharyya similarity)
with fixed saturation clamping (s = (cmax−cmin)/cmax, clipped to [0,1])
· FollowTargetSelector: nearest person auto-lock, hold_frames hysteresis
· TENTATIVE→ACTIVE after min_hits; LOST track removal after max_lost_frames
with per-frame lost_age increment across all LOST tracks
· bearing_from_pixel, depth_at_bbox (median, quality flags)
- person_tracking_node.py:
· YOLOv8n via ultralytics (TRT FP16 on first run) → HOG+SVM fallback
· Subscribes colour + depth + camera_info + follow_start/stop
· Publishes /saltybot/target_track at ≤30 fps
- test/test_person_tracker.py: 59/59 tests passing
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 15:19:02 -05:00
672120bb50
feat(perception): geometric face emotion classifier (Issue #359)
...
Classifies facial expressions into neutral/happy/surprised/angry/sad
using geometric rules over MediaPipe Face Mesh landmarks — no ML model
required at runtime.
Rules
-----
surprised: brow_raise > 0.12 AND eye_open > 0.07 AND mouth_open > 0.07
happy: smile > 0.025 (lip corners above lip midpoint)
angry: brow_furl > 0.02 AND smile < 0.01
sad: smile < -0.025 AND brow_furl < 0.015
neutral: default
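The rule table transcribes directly to a cascade (thresholds are taken verbatim from the rules above; the top-to-bottom evaluation order shown here is an assumption):

```python
def classify_emotion(brow_raise, eye_open, mouth_open, smile, brow_furl):
    """Geometric rule cascade over face-mesh-derived features."""
    if brow_raise > 0.12 and eye_open > 0.07 and mouth_open > 0.07:
        return "surprised"
    if smile > 0.025:                       # lip corners above lip midpoint
        return "happy"
    if brow_furl > 0.02 and smile < 0.01:
        return "angry"
    if smile < -0.025 and brow_furl < 0.015:
        return "sad"
    return "neutral"
```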
Changes
-------
- saltybot_scene_msgs/msg/FaceEmotion.msg — per-face emotion + features
- saltybot_scene_msgs/msg/FaceEmotionArray.msg
- saltybot_scene_msgs/CMakeLists.txt — register new msgs
- _face_emotion.py — pure-Python: FaceLandmarks, compute_features,
classify_emotion, detect_emotion, from_mediapipe
- face_emotion_node.py — subscribes /camera/color/image_raw,
publishes /saltybot/face_emotions (≤15 fps)
- test/test_face_emotion.py — 48 tests, all passing
- setup.py — add face_emotion entry point
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 14:39:49 -05:00
677e6eb75e
feat(perception): MFCC nearest-centroid audio scene classifier (Issue #353)
...
Classifies ambient audio into indoor/outdoor/traffic/park at 1 Hz using
a 16-d feature vector (13 MFCC + spectral centroid + rolloff + ZCR) with
a normalised nearest-centroid classifier. Centroids are computed at import
time from seeded synthetic prototypes, ensuring deterministic behaviour.
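A normalised nearest-centroid classifier of the kind described can be sketched like this (illustrative class, not the actual NearestCentroidClassifier API; the confidence formula is an invented placeholder):

```python
import numpy as np

class NearestCentroid:
    """Z-score features with training stats, label by closest centroid."""

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        self.mean_ = X.mean(axis=0)
        self.std_ = X.std(axis=0) + 1e-9          # guard constant features
        Xn = (X - self.mean_) / self.std_
        self.labels_ = sorted(set(y))
        self.centroids_ = np.stack(
            [Xn[[yi == lbl for yi in y]].mean(axis=0) for lbl in self.labels_])
        return self

    def predict(self, x):
        xn = (np.asarray(x, dtype=float) - self.mean_) / self.std_
        dists = np.linalg.norm(self.centroids_ - xn, axis=1)
        i = int(np.argmin(dists))
        conf = float(1.0 / (1.0 + dists[i]))       # 1 at centroid, decaying
        return self.labels_[i], conf
```

With centroids built from seeded synthetic prototypes at import time, predictions are fully deterministic, as the commit notes.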
Changes
-------
- saltybot_scene_msgs/msg/AudioScene.msg — label + confidence + features[16]
- saltybot_scene_msgs/CMakeLists.txt — register AudioScene.msg
- _audio_scene.py — pure-numpy feature extraction + NearestCentroidClassifier
- audio_scene_node.py — subscribes /audio/audio, publishes /saltybot/audio_scene
- test/test_audio_scene.py — 53 tests (all passing) with synthetic audio
- setup.py — add audio_scene entry point
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 14:03:11 -05:00
2a9b03dd76
feat(perception): depth-based obstacle size estimator (Issue #348)
...
Projects LIDAR clusters into the D435i depth image to estimate 3-D
obstacle width and height in metres.
- saltybot_scene_msgs/msg/ObstacleSize.msg — new message
- saltybot_scene_msgs/msg/ObstacleSizeArray.msg — array wrapper
- saltybot_scene_msgs/CMakeLists.txt — register new msgs
- saltybot_bringup/_obstacle_size.py — pure-Python helper:
CameraParams (intrinsics + LIDAR→camera extrinsics)
ObstacleSizeEstimate (NamedTuple)
lidar_to_camera() LIDAR frame → camera frame transform
project_to_pixel() pinhole projection + bounds check
sample_depth_median() uint16 depth image window → median metres
estimate_height() vertical strip scan for row extent → height_m
estimate_cluster_size() full pipeline: cluster → size estimate
- saltybot_bringup/obstacle_size_node.py — ROS2 node
sub: /scan, /camera/depth/image_rect_raw, /camera/depth/camera_info
pub: /saltybot/obstacle_sizes (ObstacleSizeArray)
width from LIDAR bbox; height from depth strip back-projection;
graceful fallback (LIDAR-only) when depth image unavailable;
intrinsics latched from CameraInfo on first arrival
- test/test_obstacle_size.py — 33 tests, 33 passing
- setup.py — add obstacle_size entry
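The projection step at the heart of the pipeline is standard pinhole geometry; a sketch (parameter names here are illustrative, not the actual project_to_pixel() signature):

```python
def project_to_pixel(x_cam, y_cam, z_cam, fx, fy, cx, cy, width, height):
    """Project a camera-frame point to pixel coords with a bounds check.

    Returns (u, v), or None when the point is behind the camera or
    falls outside the image.
    """
    if z_cam <= 0:
        return None
    u = fx * x_cam / z_cam + cx
    v = fy * y_cam / z_cam + cy
    if 0 <= u < width and 0 <= v < height:
        return u, v
    return None
```

estimate_height() then back-projects a vertical pixel extent at the sampled depth: height_m ≈ (row_extent_px / fy) * depth_m.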
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 13:32:41 -05:00
bd9cb6da35
feat(perception): lane/path edge detector (Issue #339)
...
Adds Canny+Hough+bird's-eye perspective pipeline for detecting left/right
path edges from the forward camera. Pure-Python helper (_path_edges.py)
is fully tested; ROS2 node publishes PathEdges on /saltybot/path_edges.
- saltybot_scene_msgs/msg/PathEdges.msg — new message
- saltybot_scene_msgs/CMakeLists.txt — register PathEdges.msg
- saltybot_bringup/_path_edges.py — PathEdgeConfig, PathEdgesResult,
build/apply_homography, canny_edges,
hough_lines, classify_lines,
average_line, warp_segments,
process_frame
- saltybot_bringup/path_edges_node.py — ROS2 node (sensor_msgs/Image →
PathEdges, parameters for all
tunable Canny/Hough/birdseye params)
- test/test_path_edges.py — 38 tests, 38 passing
- setup.py — add path_edges console_script
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 11:33:22 -05:00
eb61207532
feat(perception): dynamic obstacle velocity estimator (Issue #326)
...
Adds ObstacleVelocity/ObstacleVelocityArray msgs and an
ObstacleVelocityNode that clusters /scan points, tracks each centroid
with a constant-velocity Kalman filter, and publishes velocity vectors
on /saltybot/obstacle_velocities.
New messages (saltybot_scene_msgs):
msg/ObstacleVelocity.msg — obstacle_id, centroid, velocity,
speed_mps, width_m, depth_m,
point_count, confidence, is_static
msg/ObstacleVelocityArray.msg — array wrapper with header
New files (saltybot_bringup):
saltybot_bringup/_obstacle_velocity.py — pure helpers (no ROS2 deps)
KalmanTrack constant-velocity 2-D KF: predict(dt) / update(centroid)
coasting counter → alive flag; confidence = age/n_init
associate() greedy nearest-centroid matching (O(N·M), strict <)
ObstacleTracker predict-all → associate → update/spawn → prune cycle
saltybot_bringup/obstacle_velocity_node.py
Subscribes /scan (BEST_EFFORT); reuses _lidar_clustering helpers;
publishes ObstacleVelocityArray on /saltybot/obstacle_velocities
Parameters: distance_threshold_m=0.20, min_points=3, range 0.05–12m,
max_association_dist_m=0.50, max_coasting_frames=5,
n_init_frames=3, q_pos=0.05, q_vel=0.50, r_pos=0.10,
static_speed_threshold=0.10
test/test_obstacle_velocity.py — 48 tests, all passing
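The greedy matcher in the cycle above can be sketched as follows (an illustrative stand-in, not the actual associate() signature; note the strict < on the distance gate, as the commit specifies):

```python
def associate(track_centroids, detections, max_dist):
    """Greedy nearest-centroid matching, O(N*M).

    Scores every track/detection pair under the gate, then lets the
    closest pairs claim each other first. Returns {track_idx: det_idx}.
    """
    pairs = []
    for ti, t in enumerate(track_centroids):
        for di, d in enumerate(detections):
            dist = ((t[0] - d[0]) ** 2 + (t[1] - d[1]) ** 2) ** 0.5
            if dist < max_dist:                 # strict threshold
                pairs.append((dist, ti, di))
    pairs.sort()
    matches, used_t, used_d = {}, set(), set()
    for _, ti, di in pairs:
        if ti not in used_t and di not in used_d:
            matches[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return matches
```

Unmatched tracks then coast (and eventually prune); unmatched detections spawn new tracks.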
Modified:
saltybot_scene_msgs/CMakeLists.txt — register new msgs
saltybot_bringup/setup.py — add obstacle_velocity console_script
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 06:53:04 -05:00
4dbb4c6f0d
feat(perception): appearance-based person re-identification (Issue #322)
...
Adds PersonTrack/PersonTrackArray msgs and a PersonReidNode that matches
individuals across camera views using HSV colour histogram appearance
features and cosine similarity, with EMA gallery update and 30s stale timeout.
New messages (saltybot_scene_msgs):
msg/PersonTrack.msg — track_id, camera_id, bbox, confidence,
first_seen, last_seen, is_stale
msg/PersonTrackArray.msg — array wrapper with header
New files (saltybot_bringup):
saltybot_bringup/_person_reid.py — pure appearance helpers (no ROS2 deps)
extract_hsv_histogram() 2-D HS histogram (H=16, S=8 → 128-dim, L2-norm)
cosine_similarity() handles zero/non-unit vectors
match_track() best gallery match above threshold (strict >)
TrackGallery add/update/match/mark_stale/prune_stale
TrackEntry mutable dataclass; EMA feature blend (α=0.3)
saltybot_bringup/person_reid_node.py
Subscribes /camera/color/image_raw + /saltybot/scene/objects (BEST_EFFORT)
Crops COCO person (class_id=0) ROIs; extracts features; matches gallery
Publishes PersonTrackArray on /saltybot/person_tracks at 5 Hz
Parameters: camera_id, similarity_threshold=0.75, stale_timeout_s=30,
max_tracks=20, publish_hz=5.0
test/test_person_reid.py — 50 tests, all passing
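The feature side of the matcher amounts to a normalised HS histogram plus guarded cosine similarity; a sketch (assumes OpenCV's H in [0,180) / S in [0,256) convention on already-split channel arrays, so cv2 itself is not needed here; names are illustrative):

```python
import numpy as np

def extract_hs_histogram(h, s, h_bins=16, s_bins=8):
    """L2-normalised 2-D hue/saturation histogram (16 x 8 -> 128-dim)."""
    hist, _, _ = np.histogram2d(h.ravel(), s.ravel(),
                                bins=[h_bins, s_bins],
                                range=[[0, 180], [0, 256]])
    vec = hist.ravel()
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def cosine_similarity(a, b):
    """Cosine similarity with a zero-vector guard, as noted above."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0 or nb == 0:
        return 0.0
    return float(np.dot(a, b) / (na * nb))
```

The gallery's EMA update then blends new features in as f = (1-α)*f_old + α*f_new with α=0.3.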
Modified:
saltybot_scene_msgs/CMakeLists.txt — register PersonTrack/Array msgs
saltybot_bringup/setup.py — add person_reid console_script
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 06:45:43 -05:00
067a871103
feat(perception): wheel encoder differential drive odometry (Issue #184)
...
Adds saltybot_bridge_msgs package with WheelTicks.msg (int32 left/right
encoder counts) and a WheelOdomNode that subscribes to
/saltybot/wheel_ticks, integrates midpoint-Euler differential drive
kinematics (handling int32 counter rollover), and publishes
nav_msgs/Odometry on /odom_wheel at 50 Hz with optional TF broadcast.
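The two tricky pieces, rollover-safe tick deltas and the midpoint-Euler step, can be sketched as (illustrative names, not the actual _wheel_odom.py API):

```python
import math

INT32_RANGE = 2 ** 32

def tick_delta(prev, curr):
    """Signed tick difference that survives int32 counter rollover."""
    d = (curr - prev) % INT32_RANGE
    return d - INT32_RANGE if d >= INT32_RANGE // 2 else d

def integrate(x, y, theta, d_left_m, d_right_m, wheel_base_m):
    """One midpoint-Euler step of differential-drive kinematics:
    evaluate heading at the middle of the step, not its start."""
    d_center = (d_left_m + d_right_m) / 2.0
    d_theta = (d_right_m - d_left_m) / wheel_base_m
    mid = theta + d_theta / 2.0
    return (x + d_center * math.cos(mid),
            y + d_center * math.sin(mid),
            theta + d_theta)
```

Tick deltas become metres via ticks * (wheel circumference / ticks-per-rev) before integration.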
New files:
jetson/ros2_ws/src/saltybot_bridge_msgs/
msg/WheelTicks.msg
CMakeLists.txt, package.xml
jetson/ros2_ws/src/saltybot_bringup/
saltybot_bringup/_wheel_odom.py — pure kinematics (no ROS2 deps)
saltybot_bringup/wheel_odom_node.py — 50 Hz timer node + TF broadcast
test/test_wheel_odom.py — 42 tests, all passing
Modified:
saltybot_bringup/package.xml — add saltybot_bridge_msgs, nav_msgs deps
saltybot_bringup/setup.py — add wheel_odom console_script entry
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 00:41:39 -05:00
e24c0b2e26
feat(perception): sky detector for outdoor navigation — Issue #307
...
- Add _sky_detector.py: SkyResult NamedTuple; detect_sky() with dual HSV
band masking (blue sky H∈[90,130]/S∈[40,255]/V∈[80,255] OR overcast
S∈[0,50]/V∈[185,255]), cv2.bitwise_or combined mask; sky_fraction over
configurable top scan_frac region; horizon_y = bottommost row where
per-row sky fraction ≥ row_threshold (−1 when no sky detected)
- Add sky_detect_node.py: subscribes /camera/color/image_raw (BEST_EFFORT),
publishes Float32 /saltybot/sky_fraction and Int32 /saltybot/horizon_y
per frame; scan_frac (default 0.60) and row_threshold (default 0.30) params
- Register sky_detector console script in setup.py
- 33/33 unit tests pass (no ROS2 required)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 21:39:28 -05:00
3bf603f685
feat(perception): terrain roughness estimator via Gabor + LBP — Issue #296
...
- Add _terrain_roughness.py: RoughnessResult NamedTuple; gabor_energy() with
4-orientation × 2-wavelength (5px, 10px) quadrature Gabor bank, DC removal
via image mean subtraction (prevents false high energy on uniform surfaces);
lbp_variance() using 8-point radius-1 LBP in vectorised numpy slice
comparisons (no sklearn); estimate_roughness() with bottom roi_frac crop,
normalised blend roughness = 0.5*(gabor/500) + 0.5*(lbp/5000) clipped [0,1]
- Add terrain_rough_node.py: subscribes /camera/color/image_raw (BEST_EFFORT),
publishes Float32 /saltybot/terrain_roughness at 2Hz (configurable via
publish_hz param); roi_frac param default 0.40 (bottom 40% = floor region)
- Register terrain_roughness console script in setup.py
- 37/37 unit tests pass (no ROS2 required)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 21:12:51 -05:00
c5f3a5b2ce
feat(perception): motion blur detector via Laplacian variance — Issue #286
...
- Add _blur_detector.py: BlurResult NamedTuple, laplacian_variance() (ksize=3
Laplacian on greyscale, with optional ROI crop), detect_blur() returning
variance + is_blurred flag + threshold; handles greyscale and BGR inputs,
empty ROI returns 0.0
- Add blur_detect_node.py: subscribes /camera/color/image_raw (BEST_EFFORT),
publishes Bool /saltybot/image_blurred and Float32 /saltybot/blur_score per
frame; threshold and roi_frac ROS params
- Register blur_detector console script in setup.py
- 25/25 unit tests pass (no ROS2 required)
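The scoring idea is compact enough to show; cv2.Laplacian would normally compute the response, so a dependency-free numpy equivalent of the 3x3 kernel is used in this sketch (function names illustrative, default threshold invented for the example):

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 3x3 Laplacian response over the valid interior:
    sharp images produce strong second derivatives, blurred ones do not."""
    g = np.asarray(gray, dtype=float)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return float(lap.var())

def detect_blur(gray, threshold=100.0):
    """Returns (score, is_blurred)."""
    v = laplacian_variance(gray)
    return v, v < threshold
```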
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 20:46:37 -05:00
f5093ecd34
feat(perception): HSV color object segmenter — Issue #274
...
- Add ColorDetection.msg + ColorDetectionArray.msg to saltybot_scene_msgs
- Add _color_segmenter.py: HsvRange/ColorBlob types, COLOR_RANGES defaults,
mask_for_color() (dual-band red wrap), find_color_blobs() with morph open,
contour extraction, area filter and max-blob-per-color limit
- Add color_segment_node.py: subscribes /camera/color/image_raw (BEST_EFFORT),
publishes /saltybot/color_objects (ColorDetectionArray) per frame;
active_colors, min_area_px, max_blobs_per_color params
- Add saltybot_scene_msgs exec_depend to saltybot_bringup/package.xml
- Register color_segmenter console_script in setup.py
- 34/34 unit tests pass (no ROS2 required)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 17:32:41 -05:00
f0e11fe7ca
feat(bringup): depth image hole filler via bilateral interpolation (Issue #268)
...
Adds multi-pass spatial-Gaussian hole filler for D435i depth images.
Each pass replaces zero/NaN pixels with the Gaussian-weighted mean of valid
neighbours in a growing kernel (×1, ×2.5, ×6 default); original valid
pixels are never modified. Handles uint16 mm → float32 m conversion,
border pixels via BORDER_REFLECT, and above-d_max pixels as holes.
Publishes filled float32 depth on /camera/depth/filled at camera rate.
37/37 pure-Python tests pass (no ROS2 required).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 14:19:27 -05:00
9d12805843
feat(bringup): visual odometry drift detector (Issue #260)
...
Adds sliding-window drift detector that compares cumulative path lengths
of visual odom and wheel odom over a configurable window (default 10 s).
Drift = |vo_path − wheel_path|; flagged when ≥ 0.5 m (configurable).
OdomBuffer handles per-source rolling storage with automatic age eviction.
Publishes Bool on /saltybot/vo_drift_detected and Float32 on
/saltybot/vo_drift_magnitude at 2 Hz. 27/27 pure-Python tests pass.
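The buffer-plus-comparison structure described above could look like this (a sketch; class and method names are illustrative, not the actual OdomBuffer API):

```python
from collections import deque
import math

class OdomBuffer:
    """Rolling (t, x, y) buffer for one odom source with age eviction."""

    def __init__(self, window_s=10.0):
        self.window_s = window_s
        self.samples = deque()

    def add(self, t, x, y):
        self.samples.append((t, x, y))
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()          # evict samples older than window

    def path_length(self):
        pts = list(self.samples)
        return sum(math.hypot(b[1] - a[1], b[2] - a[2])
                   for a, b in zip(pts, pts[1:]))

def drift(vo, wheel, threshold_m=0.5):
    """Drift = |vo_path - wheel_path|; flagged at >= threshold."""
    magnitude = abs(vo.path_length() - wheel.path_length())
    return magnitude, magnitude >= threshold_m
```

Comparing cumulative path lengths rather than endpoint poses makes the check insensitive to heading differences between the two sources.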
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 13:26:07 -05:00
32857435a1
feat(bringup): floor surface type classifier on D435i RGB (Issue #249)
...
Adds multi-feature nearest-centroid classifier for 6 surface types:
carpet, tile, wood, concrete, grass, gravel. Features: circular hue mean,
saturation mean/std, brightness, Laplacian texture variance, Sobel edge
density — all extracted from the bottom 40% of each frame (floor ROI).
Majority-vote temporal smoother (window=5) suppresses single-frame noise.
Publishes std_msgs/String on /saltybot/floor_type at 2 Hz.
34/34 pure-Python tests pass (no ROS2 required).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 12:51:14 -05:00
ff34f5ac43
feat(bringup): LIDAR Euclidean object clustering + RViz visualisation (Issue #239)
...
Adds gap-based Euclidean distance clustering of /scan LaserScan points.
Each cluster is published as a labelled semi-transparent CUBE + TEXT marker
in /saltybot/lidar_clusters (MarkerArray), sorted nearest-first. Stale
markers from shrinking cluster counts are explicitly deleted each cycle.
22/22 pure-Python tests pass (no ROS2 required).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 12:21:35 -05:00
b722279739
feat(bringup): scan height filter with IMU pitch compensation (Issue #211)
...
Two files added to saltybot_bringup:
- _scan_height_filter.py: pure-Python helpers (no rclpy) —
filter_scan_by_height() projects each LIDAR ray to world-frame height
using pitch/roll from the IMU and filters ground/ceiling returns;
pitch_roll_from_accel() uses convention-agnostic atan2 formula;
AttitudeEstimator low-pass filters the accelerometer attitude.
- scan_height_filter_node.py: subscribes /scan + /camera/imu, publishes
/scan_filtered (LaserScan) for Nav2 at source rate (up to 20 Hz).
setup.py: adds scan_height_filter entry point.
18/18 unit tests pass.
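One common atan2 form of the accelerometer-attitude step is sketched below; the axis convention here (x forward, y left, z up, radians out) is an assumption for illustration, since the actual pitch_roll_from_accel() is convention-agnostic:

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Gravity-referenced pitch/roll from a static accelerometer reading.

    Only meaningful when the robot is not accelerating hard; the
    AttitudeEstimator's low-pass filter smooths out the rest.
    """
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

The filter then projects each LIDAR ray through this attitude to get its world-frame height and drops ground/ceiling returns.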
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 11:50:56 -05:00
e6065e1531
feat(jetson): camera health watchdog node (Issue #198)
...
Adds camera_health_node.py + _camera_state.py to saltybot_bringup:
• _camera_state.py — pure-Python CameraState dataclass (no ROS2):
on_frame(), age_s, fps(window_s), status(),
should_reset() + mark_reset() with 30s cooldown
• camera_health_node.py — subscribes 6 image topics (D435i color/depth
+ 4× IMX219 CSI front/right/rear/left);
1 Hz tick: WARNING at >2s silence, ERROR at
>10s + v4l2 stream-off/on reset for CSI cams;
publishes /saltybot/camera_health JSON with
per-camera status, age_s, fps, total_frames
• test/test_camera_health.py — 15 unit tests (15/15 pass, no ROS2 needed)
• setup.py — adds camera_health_monitor console_scripts entry point
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 11:11:48 -05:00
c26293d000
feat(jetson): depth confidence filter node (Issue #190)
...
Adds depth_confidence_filter_node.py to saltybot_bringup:
- Synchronises /camera/depth/image_rect_raw + /camera/depth/confidence
via ApproximateTimeSynchronizer (10ms slop)
- Zeros pixels where confidence uint8 < threshold * 255 (default 0.5)
- Republishes filtered float32 depth on /camera/depth/filtered
- Registered as depth_confidence_filter console_scripts entry point
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 11:02:15 -05:00
57420807ca
feat(webui): live camera viewer — multi-stream + detection overlays (Issue #177)
...
UI (src/hooks/useCamera.js, src/components/CameraViewer.jsx):
- 7 camera sources: front/left/rear/right CSI, D435i RGB/depth, panoramic
- Compressed image subscription via rosbridge (sensor_msgs/CompressedImage)
- Client-side 15fps gate (drops excess frames, reduces JS pressure)
- Per-camera FPS indicator with quality badge (FULL/GOOD/LOW/NO SIGNAL)
- Detection overlays: face boxes + names (/social/faces/detections),
gesture icons (/social/gestures), scene object labels + hazard colours
(/social/scene/objects); overlay mode selector (off/faces/gestures/objects/all)
- 360° panoramic equirect viewer with mouse/touch drag azimuth pan
- Picture-in-picture: up to 3 pinned cameras via ⊕ button
- One-click recording (MediaRecorder → MP4/WebM download)
- Snapshot to PNG with detection overlay composite + timestamp watermark
- Cameras tab added to TELEMETRY group in App.jsx
Jetson (rosbridge bringup):
- rosbridge_params.yaml: whitelist + /camera/depth/image_rect_raw/compressed,
/camera/panoramic/compressed, /social/faces/detections,
/social/gestures, /social/scene/objects
- rosbridge.launch.py: D435i colour republisher (JPEG 75%) +
depth republisher (compressedDepth/PNG16 preserving uint16 values)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 10:47:01 -05:00
9341e9d986
feat(mapping): RTAB-Map persistence + multi-session + map management (Issue #123)
...
- Add saltybot_mapping package: MapDatabase, MapExporter, MapManagerNode
- 6 ROS2 services: list/save_as/load/delete maps + export occupancy/pointcloud
- Auto-save current.db every 5 min; keep last 5 autosaves; warn at 2 GB
- Update rtabmap_params.yaml: database_path, Mem/InitWMWithAllNodes=true,
Rtabmap/StartNewMapOnLoopClosure=false (multi-session persistence by default)
- Update slam_rtabmap.launch.py: remove --delete_db_on_start, add fresh_start
arg (deletes DB before launch) and database_path arg (load named map)
- CLI tools: backup_map.py, export_map.py
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 09:17:54 -05:00
039355d5bb
feat: full_stack.launch.py — one-command autonomous stack bringup
...
Adds saltybot_bringup/launch/full_stack.launch.py: a single launch file
that brings up the entire SaltyBot software stack in dependency order,
with mode selection (indoor / outdoor / follow).
Launch sequence (wall-clock delays):
t= 0s robot_description (URDF + TF)
t= 0s STM32 bidirectional serial bridge
t= 2s sensors (RPLIDAR A1M8 + RealSense D435i)
t= 2s cmd_vel safety bridge (deadman + ramp + AUTONOMOUS gate)
t= 4s UWB driver (MaUWB DW3000 anchors on USB)
t= 4s CSI cameras — 4x IMX219 (optional, enable_csi_cameras:=true)
t= 6s SLAM — RTAB-Map RGB-D+LIDAR (indoor only)
t= 6s Outdoor GPS nav (outdoor only)
t= 6s YOLOv8n person detection (TensorRT)
t= 9s Person follower (UWB primary + camera fusion)
t=14s Nav2 navigation stack (indoor only)
t=17s rosbridge WebSocket server (port 9090)
Modes:
indoor — SLAM + Nav2 + full sensor suite + follow + UWB (default)
outdoor — GPS nav + sensors + follow + UWB (no SLAM)
follow — sensors + UWB + perception + follower only
Launch arguments:
mode, use_sim_time, enable_csi_cameras, enable_uwb, enable_perception,
enable_follower, enable_bridge, enable_rosbridge, follow_distance,
max_linear_vel, uwb_port_a, uwb_port_b, stm32_port
Also updates saltybot_bringup/package.xml:
- Adds exec_depend for all saltybot_* packages included by full_stack
- Updates maintainer to sl-jetson
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 00:56:39 -05:00
6420e07487
feat: rosbridge WebSocket server for web UI (port 9090)
...
Adds rosbridge_suite to the Jetson stack so the browser dashboard can
subscribe to ROS2 topics via roslibjs over ws://jetson:9090.
docker-compose.yml
New service: saltybot-rosbridge
- Runs saltybot_bringup/launch/rosbridge.launch.py
- network_mode: host → port 9090 directly reachable on Jetson LAN
- Depends on saltybot-ros2, stm32-bridge, csi-cameras
saltybot_bringup/launch/rosbridge.launch.py
- rosbridge_websocket node (port 9090, params from rosbridge_params.yaml)
- 4× image_transport/republish nodes: compress CSI camera streams
/camera/<name>/image_raw → /camera/<name>/image_raw/compressed (JPEG 75%)
saltybot_bringup/config/rosbridge_params.yaml
Whitelisted topics:
/map /scan /tf /tf_static
/saltybot/imu /saltybot/balance_state
/cmd_vel
/person/*
/camera/*/image_raw/compressed
max_message_size: 10 MB (OccupancyGrid headroom)
saltybot_bringup/SENSORS.md
Added rosbridge connection section with roslibjs snippet,
topic reference table, bandwidth estimates, and throttle_rate tips.
saltybot_bringup/package.xml
Added exec_depend: rosbridge_server, image_transport,
image_transport_plugins (all already installed in Docker image).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 00:22:02 -05:00
772a70b545
feat: Nav2 path planning + obstacle avoidance (Phase 2b)
...
Integrates Nav2 autonomous navigation stack with RTAB-Map SLAM on Orin
Nano Super. No AMCL/map_server needed — RTAB-Map provides /map + TF.
New files:
- jetson/config/nav2_params.yaml DWB controller,
NavFn planner, RPLIDAR obstacle layer, RealSense voxel layer;
10Hz local / 5Hz global costmap; robot_radius 0.15m, max_vel 1.0 m/s
- jetson/ros2_ws/src/saltybot_bringup/launch/nav2.launch.py
wraps nav2_bringup navigation_launch with saltybot params + BT XML
- jetson/ros2_ws/src/saltybot_bringup/behavior_trees/
navigate_to_pose_with_recovery.xml BT: replan@1Hz, DWB follow,
recovery: clear maps → spin 90° → wait 5s → back up 0.30m
Updated:
- jetson/docker-compose.yml add saltybot-nav2 service
(depends_on: saltybot-ros2)
- jetson/ros2_ws/src/saltybot_bringup/setup.py install behavior_trees/*.xml
- jetson/ros2_ws/src/saltybot_bringup/package.xml add rtabmap_ros + nav2_bringup
- projects/saltybot/SLAM-SETUP-PLAN.md Phase 2b ✅ Done
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 22:54:24 -05:00
c5d6a72d39
feat: update SLAM stack for Jetson Orin Nano Super (67 TOPS, JetPack 6)
...
Platform upgrade: Jetson Nano 4GB → Orin Nano Super 8GB (March 1, 2026)
All Nano-era constraints removed — power/rate/resolution limits obsolete.
Dockerfile: l4t-jetpack:r36.2.0 (JetPack 6 / Ubuntu 22.04 / CUDA 12.x),
ROS2 Humble via native apt, added ros-humble-rtabmap-ros,
ros-humble-v4l2-camera for future IMX219 CSI (Phase 2c)
New: slam_rtabmap.launch.py — Orin primary SLAM entry point
RTAB-Map with subscribe_scan (RPLIDAR) + subscribe_rgbd (D435i)
Replaces slam_toolbox as docker-compose default
New: config/rtabmap_params.yaml — Orin-optimized
DetectionRate 10Hz, MaxFeatures 1000, Grid/3D true,
TimeThr 0 (no limit), Mem/STMSize 0 (unlimited)
Updated: config/realsense_d435i.yaml — 848x480x30, pointcloud enabled
Updated: config/slam_toolbox_params.yaml — 10Hz rate, 1s map interval
Updated: SLAM-SETUP-PLAN.md — full rewrite for Orin: arch diagram,
Phase 2c IMX219 plan (4x 160° CSI surround), 25W power budget
docker-compose.yml: image tag jetson-orin, default → slam_rtabmap.launch.py
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 21:46:27 -05:00
76067d6d89
feat(bd-a2j): RealSense D435i + RPLIDAR A1M8 ROS2 driver integration
...
Adds saltybot_bringup ROS2 package with four launch files:
- realsense.launch.py — D435i at 640x480x15fps, IMU unified topic
- rplidar.launch.py — RPLIDAR A1M8 via /dev/rplidar udev symlink
- sensors.launch.py — both sensors + static TF (base_link→laser/camera)
- slam.launch.py — sensors + slam_toolbox online_async (compose entry point)
Sensor config YAMLs (mounted at /config/ in container):
- realsense_d435i.yaml — Nano power-budget settings (15fps, no pointcloud)
- rplidar_a1m8.yaml — Standard scan mode, 115200 baud, laser frame
- slam_toolbox_params.yaml — Nano-tuned (2Hz processing, 5cm resolution)
Fixes docker-compose volume mount: ./ros2_ws/src:/ros2_ws/src
(was ./ros2_ws:/ros2_ws/src — would have double-nested the src directory)
Topic reference and verification commands in SENSORS.md.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 17:14:21 -05:00