e35bd949c0
feat: Integrate 360° LIDAR obstacle avoidance into full_stack (Issue #364 )
...
Adds saltybot_lidar_avoidance node to the launch sequence at t=3s.
The RPLIDAR A1M8 node was already fully implemented with:
- Emergency stop if obstacle detected within 0.5m
- Speed-dependent safety zones (0.6m @ 0 m/s → 3.0m @ 5.56 m/s)
- Forward scanning window (±30° cone)
- Debounced obstacle detection (2-frame debounce)
- Publishes /cmd_vel_safe with filtered velocity commands
Integrates seamlessly with Nav2 stack and cmd_vel_mux multiplexer.
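The zone scaling and debounce described above are simple to model. The following Python is an illustrative sketch only (function and class names are hypothetical, not the node's actual API), assuming the safety zone grows linearly between the two quoted endpoints:

```python
def safety_distance(speed_mps: float) -> float:
    """Safety zone radius: 0.6 m at standstill, scaling linearly to 3.0 m
    at 5.56 m/s (20 km/h). A separate hard e-stop fires inside 0.5 m."""
    ZONE_MIN_M, ZONE_MAX_M, SPEED_MAX_MPS = 0.6, 3.0, 5.56
    v = max(0.0, min(speed_mps, SPEED_MAX_MPS))
    return ZONE_MIN_M + (ZONE_MAX_M - ZONE_MIN_M) * v / SPEED_MAX_MPS

class Debouncer:
    """Report an obstacle only after N consecutive positive frames (N=2 above)."""
    def __init__(self, frames: int = 2):
        self.frames, self.count = frames, 0
    def update(self, obstacle_seen: bool) -> bool:
        self.count = self.count + 1 if obstacle_seen else 0
        return self.count >= self.frames
```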
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-04 12:23:43 -05:00
3c93a72d01
Merge pull request 'refactor: ESC abstraction layer with pluggable backends (Issue #388 )' ( #389 ) from sl-firmware/issue-388-esc-abstraction into main
2026-03-04 11:36:24 -05:00
844504e92e
refactor: ESC abstraction layer with pluggable backends (Issue #388 )
...
BREAKING CHANGE: Hoverboard implementation moved to pluggable vtable architecture.
## Implementation
### New Files
- include/esc_backend.h: Abstract interface (vtable) with:
- esc_telemetry_t struct (voltage, current, temp, speed, steer, fault)
- esc_backend_t vtable (init, send, estop, resume, get_telemetry)
- Runtime registration (esc_backend_register/get)
- Convenience wrappers (esc_init, esc_send, esc_estop, etc)
- src/esc_backend.c: Backend registry and wrapper implementations
- src/esc_hoverboard.c: Hoverboard backend implementing vtable
- USART2 @ 115200 baud configuration
- EFeru FOC packet encoding (0xABCD start, XOR checksum)
- Backward-compatible hoverboard_init/send wrappers
- Telemetry stub (future: add RX feedback parsing)
- src/esc_vesc.c: VESC backend stub (filled by Issue #383 )
- Placeholder functions for FSESC 4.20 Plus integration
- Public vesc_backend_register_impl() for runtime registration
- Ready for pyvesc protocol implementation
### Modified Files
- src/motor_driver.c: Changed from direct hoverboard_send() calls to esc_send()
- No logic changes, ESC-agnostic via vtable
- include/config.h: Added ESC_BACKEND define
- Compile-time selection (default: HOVERBOARD)
- Comments document architecture for future VESC support
### Removed Files
- src/hoverboard.c: Original implementation merged into esc_hoverboard.c
## Architecture Benefits
1. **Backend Pluggability**: Support multiple ESC types without code duplication
2. **Zero Direct Dependencies**: motor_driver.c never calls hoverboard functions directly
3. **Clean Testing**: Each backend can be tested/stubbed independently
4. **Future-Ready**: VESC integration (Issue #383 ) just implements the vtable
5. **Backward Compatible**: Existing code calling hoverboard_init/send still works
## Testing
- pio run: ✅ PASS (55.4KB Flash, 16.9KB RAM)
- Hoverboard backend tested via existing balance tests (unchanged logic)
- VESC backend stub compiles and links (no-op until #383 fills implementation)
## Blocks
- Issue #383 (VESC integration) — ready to implement vtable functions
- Issue #384 (pan/tilt servo) — may use independent PWM (not blocked)
## Dependencies
- None — this is pure refactoring, no API changes for callers
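The firmware itself is C, but the registry pattern is easy to convey. Here is a rough Python analog of the esc_backend design (names mirror the commit; the Python form, field list, and send() signature are purely illustrative):

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class EscBackend:
    """Python stand-in for the esc_backend_t vtable (subset of its entries)."""
    name: str
    init: Callable[[], None]
    send: Callable[[int, int], None]   # (speed, steer)
    estop: Callable[[], None]

_registry: Dict[str, EscBackend] = {}
_active = None

def esc_backend_register(backend: EscBackend):
    _registry[backend.name] = backend

def esc_backend_get(name: str) -> EscBackend:
    return _registry[name]

def esc_init(name: str):
    """Select and initialize a backend (firmware picks the compile-time
    ESC_BACKEND default, e.g. hoverboard)."""
    global _active
    _active = _registry[name]
    _active.init()

def esc_send(speed: int, steer: int):
    # callers such as motor_driver never touch a backend directly
    _active.send(speed, steer)
```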
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-04 10:36:35 -05:00
985e03a26d
Merge pull request 'fix: add missing bno055.h include in main.c' ( #387 ) from sl-firmware/fix-bno055-include into main
2026-03-04 09:54:56 -05:00
d52e7af554
fix: Add missing bno055.h include to resolve implicit declaration warnings
...
Adds #include "bno055.h" to src/main.c to resolve implicit declaration
warnings for bno055_read(), bno055_calib_status(), and bno055_temperature().
The functions were properly implemented; only the header include was missing.
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-04 08:45:51 -05:00
ce1a5e5fee
Merge pull request 'feat: VESC UART driver node with pyvesc (Issue #383 )' ( #385 ) from sl-controls/issue-383-vesc into main
2026-03-04 08:40:15 -05:00
a11722e872
feat: Implement VESC UART driver node (Issue #383 )
...
ROS2 driver for Flipsky FSESC 4.20 Plus (VESC dual ESC) motor control.
Replaces hoverboard ESC communication with pyvesc library.
Features:
- UART serial communication (configurable port/baud)
- Dual command modes: duty_cycle (-100 to 100) and RPM setpoint
- Telemetry publishing: voltage, current, RPM, temperature, fault codes
- Command timeout: auto-zero throttle if no cmd_vel received
- Heartbeat-based connection management
- Comprehensive error handling and logging
Topics:
- Subscribe: /cmd_vel (geometry_msgs/Twist)
- Publish: /vesc/state (JSON telemetry)
- Publish: /vesc/raw_telemetry (debug)
Launch: ros2 launch saltybot_vesc_driver vesc_driver.launch.py
Config: config/vesc_params.yaml
Next phase: Integrate with cmd_vel_mux + safety layer.
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-04 07:05:46 -05:00
bc3ed1a0c7
Merge pull request 'fix: resolve all compile errors across 6 files (Issue #337 )' ( #382 ) from sl-controls/issue-337-build-fix into main
2026-03-03 19:58:54 -05:00
f4e71777ec
fix: Resolve all compile and linker errors (Issue #337 )
...
Fixed 7 compile errors across 6 files:
1. servo.c: Removed duplicate ServoState typedef, updated struct definition in header
2. watchdog.c: Fixed IWDG handle usage - moved to global scope for IRQHandler access
3. ultrasonic.c: Fixed timer handle type mismatches - use TIM_HandleTypeDef instead of TIM_TypeDef, replaced HAL_TIM_IC_Init_Compat with proper HAL functions
4. main.c: Replaced undefined functions - imu_calibrated() → mpu6000_is_calibrated(), crsf_is_active() → manual state check
5. ina219.c: Stubbed I2C functions pending HAL implementation
Build now passes with ZERO errors.
- RAM: 6.5% (16964 bytes / 262144)
- Flash: 10.6% (55368 bytes / 524288)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 19:00:12 -05:00
6df453e8d0
Merge pull request 'feat: deaf/accessibility mode (Issue #371 )' ( #381 ) from sl-controls/issue-371-accessibility into main
2026-03-03 18:51:30 -05:00
5604670646
feat: Implement deaf/accessibility mode with STT, touch keyboard, TTS (Issue #371 )
...
Accessibility mode for hearing-impaired users:
- Speech-to-text display: Integrates with saltybot_social speech_pipeline_node
- Touch keyboard overlay: 1024x600 optimized for MageDok 7in display
- TTS output: Routes to MageDok speakers via PulseAudio
- Web UI server: Responsive keyboard interface with real-time display updates
- Auto-confirm: Optional TTS feedback for spoken input
- Physical keyboard support: Both touch and physical input methods
Features:
- Keyboard buffer with backspace/clear/send controls
- Transcript history display (max 10 entries)
- Status indicators for STT/TTS ready state
- Number/symbol support (1-5, punctuation)
- HTML/CSS responsive design optimized for touch
- ROS2 integration via /social/speech/transcript and /social/conversation/request
Launch: ros2 launch saltybot_accessibility_mode accessibility_mode.launch.py
UI Port: 8080 (MageDok display access)
Config: config/accessibility_params.yaml
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 18:17:41 -05:00
b942bb549a
Merge pull request 'feat: 360° LIDAR obstacle avoidance (Issue #364 )' ( #380 ) from sl-controls/issue-364-lidar-avoidance into main
2026-03-03 18:15:28 -05:00
3a639507c7
Merge pull request 'feat: Salty Face animated expression UI (Issue #370 )' ( #379 ) from sl-webui/issue-370-salty-face into main
2026-03-03 18:15:17 -05:00
8aa4072a63
feat(webui): Salty Face animated expression UI — contextual emotions (Issue #370 )
...
Add animated facial expression interface for MageDok 7" display:
Core Features:
✓ 8 emotional states:
- Happy (default idle)
- Alert (obstacles detected)
- Confused (searching, target lost)
- Sleeping (prolonged inactivity)
- Excited (target reacquired)
- Emergency (e-stop triggered)
- Listening (microphone active)
- Talking (TTS output)
Visual Design:
✓ Minimalist Cozmo/Vector-inspired eyes + optional mouth
✓ Canvas-based GPU-accelerated rendering
✓ 30fps target on Jetson Orin Nano
✓ Emotion-specific eye characteristics:
- Scale changes (alert widened eyes)
- Color coding per emotion
- Pupil position tracking
- Blinking rates vary by state
- Eye wandering (confused searching)
- Bouncing animation (excited)
- Flash effect (emergency)
Mouth Animation:
✓ Synchronized with text-to-speech output
✓ Shape frames: closed, smile, oh, ah, ee sounds
✓ ~10fps lip sync animation
ROS2 Integration:
✓ Subscribe to /saltybot/state (emotion triggers)
✓ Subscribe to /saltybot/target_track (tracking state)
✓ Subscribe to /saltybot/obstacles (alert state)
✓ Subscribe to /social/speech/is_speaking (talking mode)
✓ Subscribe to /social/speech/is_listening (listening mode)
✓ Subscribe to /saltybot/battery (status tracking)
✓ Subscribe to /saltybot/audio_level (audio feedback)
HUD Overlay:
✓ Tap-to-toggle status display
✓ Battery percentage indicator
✓ Robot state label
✓ Distance to target (meters)
✓ Movement speed (m/s)
✓ System health percentage
✓ Color-coded health indicator (green/yellow/red)
Integration:
✓ New DISPLAY tab group (rose color)
✓ Full-screen rendering on 1024×600 MageDok display
✓ Responsive to robot state machine
✓ Supports kiosk mode deployment
Build Status: ✅ PASSING
- 126 modules (+1 for SaltyFace)
- 281.57 KB main bundle (+11 KB)
- 0 errors
Depends on: Issue #369 (MageDok display setup)
Foundation for: Issue #371 (Accessibility mode)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 18:14:49 -05:00
cfa8ee111d
Merge pull request 'feat: Replace GNOME with Cage+Chromium kiosk (Issue #374 )' ( #377 ) from sl-webui/issue-374-cage-kiosk into main
2026-03-03 17:46:14 -05:00
34c7af38b2
Merge pull request 'feat: battery coulomb counter (Issue #325 )' ( #378 ) from sl-perception/issue-325-battery-coulomb into main
2026-03-03 17:46:03 -05:00
410ace3540
feat: battery coulomb counter (Issue #325 )
...
Add coulomb counter for accurate SoC estimation independent of load:
- New coulomb_counter module: integrate current over time to track Ah consumed
* coulomb_counter_init(capacity_mah) initializes with battery capacity
* coulomb_counter_accumulate(current_ma) integrates current at 100 Hz
* coulomb_counter_get_soc_pct() returns SoC 0-100% (255 = invalid)
* coulomb_counter_reset() for charge-complete reset
- Battery module integration:
* battery_accumulate_coulombs() reads motor INA219 currents and accumulates
* battery_get_soc_coulomb() returns coulomb-based SoC with fallback to voltage
* Initialize coulomb counter at startup with DEFAULT_BATTERY_CAPACITY_MAH
- Telemetry updates:
* JLink STATUS: use coulomb SoC if available, fallback to voltage-based
* CRSF battery frame: now includes remaining capacity in mAh (from coulomb counter)
* CRSF capacity field was always 0; now reflects actual remaining mAh
- Mainloop integration:
* Call battery_accumulate_coulombs() every tick for continuous integration
* INA219 motor currents + 200 mA subsystem baseline = total battery draw
Motor current sources (INA219 addresses 0x40/0x41) provide most power draw;
Jetson ROS2 battery_node already prioritizes coulomb-based soc_pct from STATUS frame.
Default capacity: 2200 mAh (typical lab 3S LiPo); configurable via firmware parameter.
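The integration math above is straightforward; this is a Python model of the C module for illustration (method names follow the commit, exact behaviour of the real firmware may differ):

```python
class CoulombCounter:
    """Sketch of the coulomb_counter module: integrate current samples
    taken at a fixed rate (100 Hz above) into mAh consumed."""
    INVALID = 255
    def __init__(self, capacity_mah: float, rate_hz: float = 100.0):
        self.capacity_mah = capacity_mah
        self.dt_h = 1.0 / (rate_hz * 3600.0)   # sample period in hours
        self.consumed_mah = 0.0
    def accumulate(self, current_ma: float):
        self.consumed_mah += current_ma * self.dt_h
    def soc_pct(self) -> int:
        if self.capacity_mah <= 0:
            return self.INVALID                 # 255 = invalid, per the API
        pct = 100.0 * (1.0 - self.consumed_mah / self.capacity_mah)
        return int(round(max(0.0, min(100.0, pct))))
    def reset(self):
        """Charge-complete reset."""
        self.consumed_mah = 0.0
```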
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 17:35:34 -05:00
5cec6779e5
feat: Integrate IWDG watchdog timer driver (Issue #300 )
...
- Replace safety.c's direct IWDG initialization with watchdog module API
- Use watchdog_init(2000) for ~2s timeout in safety_init()
- Use watchdog_kick() in safety_refresh() to feed the watchdog
- Remove unused watchdog_get_divider() helper function
- Watchdog now configured with automatic prescaler selection
The watchdog module provides a clean, flexible IWDG interface that:
- Automatically calculates prescaler and reload values
- Detects watchdog-triggered resets via watchdog_was_reset_by_watchdog()
- Supports timeout range of ~1ms to ~32 seconds
- Integrates seamlessly with existing safety system
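The automatic prescaler/reload selection can be sketched as follows. This is an illustrative Python model of the C routine, assuming a nominal 32 kHz LSI and the 12-bit IWDG reload register (the actual part's LSI frequency may differ):

```python
LSI_HZ = 32_000  # assumed nominal LSI clock

def iwdg_config(timeout_ms: float):
    """Pick the smallest IWDG prescaler whose 12-bit reload covers timeout_ms.
    Effective timeout = prescaler * (reload + 1) / LSI_HZ."""
    for presc in (4, 8, 16, 32, 64, 128, 256):
        tick_ms = 1000.0 * presc / LSI_HZ
        reload = round(timeout_ms / tick_ms) - 1
        if 0 <= reload <= 0x0FFF:
            return presc, reload
    raise ValueError("timeout outside IWDG range (~1 ms to ~32 s)")
```

A 2000 ms request lands on prescaler 16 with reload 3999 (4000 ticks of 0.5 ms); the 256 prescaler with a full reload gives the ~32 s upper bound quoted above.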
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 17:29:59 -05:00
aeb90efa61
feat: Implement 360° LIDAR obstacle avoidance (Issue #364 )
...
Implements ROS2 node for RPLIDAR A1M8 obstacle detection with:
- Emergency stop at 0.5m
- Speed-dependent safety zone (3m @ 20km/h, scales linearly)
- Forward-facing 60° obstacle cone scanning
- Publishes /saltybot/obstacle_alert and /cmd_vel_safe
- Debounced obstacle detection (2 frames)
- JSON status reporting
Launch: ros2 launch saltybot_lidar_avoidance lidar_avoidance.launch.py
Config: config/lidar_avoidance_params.yaml
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 17:29:14 -05:00
b04fd916ff
Merge pull request 'feat: MageDok 7in display setup for Orin (Issue #369 )' ( #373 ) from sl-webui/issue-369-display-setup into main
2026-03-03 17:20:15 -05:00
a8a9771ec7
Merge pull request 'feat: adaptive camera power modes (Issue #375 )' ( #376 ) from sl-perception/issue-375-camera-power-modes into main
2026-03-03 17:20:04 -05:00
042c0529a1
feat: Add Issue #375 — adaptive camera power mode manager
...
Implements a 5-mode FSM for dynamic sensor activation based on speed,
scenario, and battery level — avoids running all 4 CSI cameras + full
sensor suite when unnecessary, saving ~1 GB RAM and significant compute.
Five modes (sensor sets):
SLEEP — no sensors (~150 MB RAM)
SOCIAL — webcam only (~400 MB RAM, parked/socialising)
AWARE — front CSI + RealSense + LIDAR (~850 MB RAM, indoor/<5km/h)
ACTIVE — front+rear CSI + RealSense + LIDAR + UWB (~1.15 GB, 5-15km/h)
FULL — all 4 CSI + RealSense + LIDAR + UWB (~1.55 GB, >15km/h)
Core library — _camera_power_manager.py (pure Python, no ROS2 deps)
- CameraPowerFSM.update(speed_mps, scenario, battery_pct) → ModeDecision
- Speed-driven upgrades: instant (safety-first)
- Speed-driven downgrades: held for downgrade_hold_s (default 5s, anti-flap)
- Scenario overrides (instant, bypass hysteresis):
· CROSSING / EMERGENCY → FULL always
· PARKED → SOCIAL immediately
· INDOOR → cap at AWARE (never ACTIVE/FULL indoors)
- Battery low cap: battery_pct < threshold → cap at AWARE
- Idle timer: near-zero speed holds at AWARE for idle_to_social_s (30s)
before dropping to SOCIAL (avoids cycling at traffic lights)
ROS2 node — camera_power_node.py
- Subscribes: /saltybot/speed, /saltybot/scenario, /saltybot/battery_pct
- Publishes: /saltybot/camera_mode (CameraPowerMode, latched, 2 Hz)
- Publishes: /saltybot/camera_cmd/{front,rear,left,right,realsense,lidar,uwb,webcam}
(std_msgs/Bool, TRANSIENT_LOCAL so late subscribers get last state)
- Logs mode transitions with speed/scenario/battery context
Tests — test/test_camera_power_manager.py: 64/64 passing
- Sensor configs: counts, correct flags per mode, safety invariants
- Speed upgrades: instantaneous at all thresholds, no hold required
- Downgrade hysteresis: hold timer, cancellation on speed spike, hold=0 instant
- Scenario overrides: CROSSING/EMERGENCY/PARKED/INDOOR, all CSIs on crossing
- Battery low: cap at AWARE, threshold boundary
- Idle timer: delay AWARE→SOCIAL, motion resets timer
- Reset, labels, ModeDecision fields
- Integration: full ride scenario (walk→jog→sprint→crossing→indoor→park→low bat)
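The core speed-axis behaviour (instant upgrades, held downgrades) can be sketched in a few lines. This is a simplified illustration, not the actual CameraPowerFSM; scenario overrides, battery caps, and the idle timer are omitted, and speed bands are expressed in km/h as quoted above:

```python
SLEEP, SOCIAL, AWARE, ACTIVE, FULL = range(5)

def mode_for_speed(kmh: float) -> int:
    # speed bands from the commit: <5 km/h AWARE, 5-15 ACTIVE, >15 FULL
    if kmh > 15.0:
        return FULL
    if kmh >= 5.0:
        return ACTIVE
    return AWARE

class SpeedModeFSM:
    """Speed axis only: upgrades are instant, downgrades wait out a hold timer."""
    def __init__(self, hold_s: float = 5.0):
        self.mode, self.hold_s = AWARE, hold_s
        self._pending = None   # (target_mode, timestamp of first request)
    def update(self, kmh: float, now: float) -> int:
        target = mode_for_speed(kmh)
        if target > self.mode:                      # safety-first: instant upgrade
            self.mode, self._pending = target, None
        elif target < self.mode:                    # anti-flap: hold before downgrade
            if self._pending is None or self._pending[0] != target:
                self._pending = (target, now)
            elif now - self._pending[1] >= self.hold_s:
                self.mode, self._pending = target, None
        else:
            self._pending = None                    # speed recovered: cancel hold
        return self.mode
```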
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 16:48:17 -05:00
e2587b60fb
feat: SaltyFace web app UI for Chromium kiosk (Issue #370 )
...
Animated robot expression interface as lightweight web application:
**Architecture:**
- HTML5 Canvas rendering engine
- Node.js HTTP server (localhost:3000)
- ROSLIB WebSocket bridge for ROS2 topics
- Fullscreen responsive design (1024×600)
**Features:**
- 8 emotional states (happy, alert, confused, sleeping, excited, emergency, listening, talking)
- Real-time ROS2 subscriptions:
- /saltybot/state (emotion triggers)
- /saltybot/battery (status display)
- /saltybot/target_track (EXCITED emotion)
- /saltybot/obstacles (ALERT emotion)
- /social/speech/is_speaking (TALKING emotion)
- /social/speech/is_listening (LISTENING emotion)
- Tap-to-toggle status overlay
- 60fps Canvas animation on Wayland
- ~80MB total memory (Node.js + browser)
**Files:**
- public/index.html — Main page (1024×600 fullscreen)
- public/salty-face.js — Canvas rendering + ROS2 integration
- server.js — Node.js HTTP server with CORS support
- systemd/salty-face-server.service — Auto-start systemd service
- docs/SALTY_FACE_WEB_APP.md — Complete setup & API documentation
**Integration:**
- Runs in Chromium kiosk (Issue #374 )
- Depends on rosbridge_server for WebSocket bridge
- Serves on localhost:3000 (configurable)
**Next:** Issue #371 (Accessibility enhancements)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 16:42:41 -05:00
82b8f40b39
feat: Replace GNOME with Cage + Chromium kiosk (Issue #374 )
...
Lightweight fullscreen kiosk for MageDok 7" display:
**Architecture:**
- Cage: Minimal Wayland compositor (replaces GNOME)
- Chromium: Fullscreen kiosk browser for SaltyFace web UI
- PulseAudio: HDMI audio routing (from Issue #369 )
- Touch: HID input from MageDok USB device
**Memory Savings:**
- GNOME desktop: ~650MB RAM
- Cage + Chromium: ~200MB RAM
- Net gain: ~450MB for ROS2 workloads
**Files:**
- config/cage-magedok.ini — Cage display settings (1024×600@60Hz)
- config/wayland-magedok.conf — Wayland output configuration
- scripts/chromium_kiosk.sh — Cage + Chromium launcher
- systemd/chromium-kiosk.service — Auto-start systemd service
- launch/cage_display.launch.py — ROS2 launch configuration
- docs/CAGE_CHROMIUM_KIOSK.md — Complete setup & troubleshooting guide
**Next:** Issue #370 (Salty Face as web app in Chromium kiosk)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 16:41:00 -05:00
46fc2db8e6
Merge pull request 'feat: smooth velocity ramp controller (Issue #350 )' ( #372 ) from sl-perception/issue-350-velocity-ramp into main
2026-03-03 16:17:55 -05:00
6592b58f65
feat: Add Issue #350 — smooth velocity ramp controller
...
Adds a rate-limiting shim between raw /cmd_vel and the drive stack to
prevent wheel slip, tipping, and jerky motion from step velocity inputs.
Core library — _velocity_ramp.py (pure Python, no ROS2 deps)
- VelocityRamp: applies independent accel/decel limits to linear-x and
angular-z with configurable max_lin_accel, max_lin_decel,
max_ang_accel, max_ang_decel
- _ramp_axis(): per-axis rate limiter with correct accel/decel selection
(decel when |target| < |current| or sign reversal; accel otherwise)
- Emergency stop: step(0.0, 0.0) bypasses ramp → immediate zero output
- Asymmetric limits supported (e.g. faster decel than accel)
ROS2 node — velocity_ramp_node.py
- Subscribes /cmd_vel, publishes /cmd_vel_smooth at configurable rate_hz
- Parameters: max_lin_accel (0.5 m/s²), max_lin_decel (0.5 m/s²),
max_ang_accel (1.0 rad/s²), max_ang_decel (1.0 rad/s²), rate_hz (50)
Tests — test/test_velocity_ramp.py: 50/50 passing
- _ramp_axis: accel/decel selection, sign reversal, overshoot prevention
- Construction: invalid params raise ValueError, defaults verified
- Linear/angular ramp-up: step size, target reached, no overshoot
- Deceleration: asymmetric limits, partial decel (non-zero target)
- Emergency stop: immediate zero, state cleared, resume from zero
- Sign reversal: passes through zero without jumping
- Reset: state cleared, next ramp starts from zero
- Monotonicity: linear and angular outputs are monotone toward target
- Rate accuracy: 50Hz/10Hz step sizes, 100-step convergence verified
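The per-axis rule quoted above (decel when |target| < |current| or on sign reversal; accel otherwise) can be written as a standalone function. This is a hypothetical rendering, not the package's actual _ramp_axis source:

```python
def ramp_axis(current: float, target: float,
              accel: float, decel: float, dt: float) -> float:
    """Rate-limit one axis toward target, selecting the accel or decel limit."""
    slowing = abs(target) < abs(current) or target * current < 0  # incl. sign reversal
    limit = (decel if slowing else accel) * dt
    delta = target - current
    if abs(delta) <= limit:
        return target                       # reached exactly: never overshoot
    return current + (limit if delta > 0 else -limit)
```

An emergency stop would bypass this entirely and output zero immediately, as the commit notes for step(0.0, 0.0).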
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 15:45:05 -05:00
45d456049a
feat: MageDok 7in display setup for Jetson Orin (Issue #369 )
...
Add complete display integration for MageDok 7" IPS touchscreen:
Configuration Files:
- X11 display config (xorg-magedok.conf) — 1024×600 @ 60Hz
- PulseAudio routing (pulseaudio-magedok.conf) — HDMI audio to speakers
- Udev rules (90-magedok-touch.rules) — USB touch device permissions
- Systemd service (magedok-display.service) — auto-start on boot
ROS2 Launch:
- magedok_display.launch.py — coordinate display/touch/audio setup
Helper Scripts:
- verify_display.py — validate 1024×600 resolution via xrandr
- touch_monitor.py — detect MageDok USB touch, publish status
- audio_router.py — configure PulseAudio HDMI sink routing
Documentation:
- MAGEDOK_DISPLAY_SETUP.md — complete installation and troubleshooting guide
Features:
✓ DisplayPort → HDMI video from Orin DP connector
✓ USB touch input as HID device (driver-free)
✓ HDMI audio routing to built-in speakers
✓ 1024×600 native resolution verification
✓ Systemd auto-launch on boot (no login prompt)
✓ Headless fallback when display disconnected
✓ ROS2 status monitoring (touch/audio/resolution)
Supports Salty Face UI (Issue #370 ) and accessibility features (Issue #371 )
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 15:44:03 -05:00
631282b95f
Merge pull request 'feat: Issue #365 — UWB DW3000 anchor/tag tracking (bearing + distance)' ( #368 ) from sl-perception/issue-365-uwb-tracking into main
2026-03-03 15:41:49 -05:00
0ecf341c57
feat: Add Issue #365 — UWB DW3000 anchor/tag tracking (bearing + distance)
...
Software-complete implementation of the two-anchor UWB ranging stack.
All ROS2 / serial code written against an abstract interface so tests run
without physical hardware (anchors on order).
New message
- UwbTarget.msg: valid, bearing_deg, distance_m, confidence,
anchor0/1_dist_m, baseline_m, fix_quality (0=none 1=single 2=dual)
Core library — _uwb_tracker.py (pure Python, no ROS2/runtime deps)
- parse_frame(): ASCII RANGE,<id>,<tag>,<mm> protocol decoder
- bearing_from_ranges(): law-of-cosines 2-anchor bearing with confidence
(penalises extreme angles + close-range geometry)
- bearing_single_anchor(): fallback bearing=0, conf≤0.3
- BearingKalman: 1-D constant-velocity Kalman filter [bearing, rate]
- UwbRangingState: thread-safe per-anchor state + stale timeout + Kalman
- AnchorSerialReader: background thread, readline() interface (real or mock)
ROS2 node — uwb_node.py
- Opens /dev/ttyUSB0 + /dev/ttyUSB1 (configurable)
- Non-fatal serial open failure (will publish FIX_NONE until plugged in)
- Publishes /saltybot/uwb_target at 10 Hz (configurable)
- Graceful shutdown: stops reader threads
Tests — test/test_uwb_tracker.py: 64/64 passing
- Frame parsing: valid, malformed, STATUS, CR/LF, mm→m conversion
- Bearing geometry: straight-ahead, ±45°, ±30°, symmetry, confidence
- Kalman: seeding, smoothing, convergence, rate tracking
- UwbRangingState: single/dual fix, stale timeout, thread safety
- AnchorSerialReader: mock serial, bytes decode, stop()
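One way to realise the two-anchor bearing geometry is shown below. This is an illustrative derivation, not the actual _uwb_tracker code (its conventions and confidence handling differ); it assumes anchors at (±baseline/2, 0) with forward along +y:

```python
import math

def bearing_from_ranges(r0: float, r1: float, baseline: float):
    """From r0^2 = (x + b/2)^2 + y^2 and r1^2 = (x - b/2)^2 + y^2:
    lateral offset x = (r0^2 - r1^2) / (2b), and the squared distance to
    the baseline midpoint is (r0^2 + r1^2)/2 - b^2/4."""
    x = (r0 ** 2 - r1 ** 2) / (2.0 * baseline)
    d2 = (r0 ** 2 + r1 ** 2) / 2.0 - baseline ** 2 / 4.0
    if d2 <= x * x:
        return None                        # ranges inconsistent with geometry
    y = math.sqrt(d2 - x * x)              # forward component
    return math.degrees(math.atan2(x, y)), math.sqrt(d2)
```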
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 15:25:23 -05:00
94d12159b4
Merge pull request 'feat(webui): ROS parameter editor in Settings panel (Issue #354 )' ( #360 ) from sl-webui/issue-354-settings into main
2026-03-03 15:20:48 -05:00
eac203ecf4
Merge pull request 'feat: Issue #363 — P0 person tracking for follow-me mode' ( #367 ) from sl-perception/issue-363-person-tracking into main
2026-03-03 15:20:26 -05:00
c620dc51a7
feat: Add Issue #363 — P0 person tracking for follow-me mode
...
Implements real-time person detection + tracking pipeline for the
follow-me motion controller on Jetson Orin Nano Super (D435i).
Core components
- TargetTrack.msg: bearing_deg, distance_m, confidence, bbox, vel_bearing_dps,
vel_dist_mps, depth_quality (0-3)
- _person_tracker.py (pure-Python, no ROS2/runtime deps):
· 8-state constant-velocity Kalman filter [cx,cy,w,h,vcx,vcy,vw,vh]
· Greedy IoU data association
· HSV torso colour histogram re-ID (16H×8S, Bhattacharyya similarity)
with fixed saturation clamping (s = (cmax−cmin)/cmax, clipped to [0,1])
· FollowTargetSelector: nearest person auto-lock, hold_frames hysteresis
· TENTATIVE→ACTIVE after min_hits; LOST track removal after max_lost_frames
with per-frame lost_age increment across all LOST tracks
· bearing_from_pixel, depth_at_bbox (median, quality flags)
- person_tracking_node.py:
· YOLOv8n via ultralytics (TRT FP16 on first run) → HOG+SVM fallback
· Subscribes colour + depth + camera_info + follow_start/stop
· Publishes /saltybot/target_track at ≤30 fps
- test/test_person_tracker.py: 59/59 tests passing
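The greedy IoU association step can be sketched as below. This is a generic illustration of the technique named above, not the package's actual implementation (box format and threshold are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def greedy_associate(tracks, dets, min_iou=0.3):
    """Greedy matching: best overlap first, each track/detection used once."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(dets)), reverse=True)
    used_t, used_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < min_iou:
            break
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```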
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 15:19:02 -05:00
bcf848109b
Merge pull request 'feat(perception): geometric face emotion classifier (Issue #359 )' ( #361 ) from sl-perception/issue-359-face-emotion into main
2026-03-03 15:07:17 -05:00
672120bb50
feat(perception): geometric face emotion classifier (Issue #359 )
...
Classifies facial expressions into neutral/happy/surprised/angry/sad
using geometric rules over MediaPipe Face Mesh landmarks — no ML model
required at runtime.
Rules
-----
surprised: brow_raise > 0.12 AND eye_open > 0.07 AND mouth_open > 0.07
happy: smile > 0.025 (lip corners above lip midpoint)
angry: brow_furl > 0.02 AND smile < 0.01
sad: smile < -0.025 AND brow_furl < 0.015
neutral: default
Changes
-------
- saltybot_scene_msgs/msg/FaceEmotion.msg — per-face emotion + features
- saltybot_scene_msgs/msg/FaceEmotionArray.msg
- saltybot_scene_msgs/CMakeLists.txt — register new msgs
- _face_emotion.py — pure-Python: FaceLandmarks, compute_features,
classify_emotion, detect_emotion, from_mediapipe
- face_emotion_node.py — subscribes /camera/color/image_raw,
publishes /saltybot/face_emotions (≤15 fps)
- test/test_face_emotion.py — 48 tests, all passing
- setup.py — add face_emotion entry point
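The rule table above translates directly into code. The thresholds below are exactly the quoted ones; the function signature and argument order are illustrative (the real classify_emotion consumes computed feature structs):

```python
def classify_emotion(brow_raise, eye_open, mouth_open, smile, brow_furl):
    """Transcription of the rule table: rules checked in the order listed,
    first match wins, neutral is the fallback."""
    if brow_raise > 0.12 and eye_open > 0.07 and mouth_open > 0.07:
        return "surprised"
    if smile > 0.025:
        return "happy"
    if brow_furl > 0.02 and smile < 0.01:
        return "angry"
    if smile < -0.025 and brow_furl < 0.015:
        return "sad"
    return "neutral"
```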
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 14:39:49 -05:00
f7f89403d5
Merge pull request 'feat(social): system resource monitor for Jetson Orin (Issue #355 )' ( #357 ) from sl-jetson/issue-355-sysmon into main
2026-03-03 14:32:49 -05:00
ae76697a1c
Merge pull request 'feat(perception): MFCC nearest-centroid audio scene classifier (Issue #353 )' ( #358 ) from sl-perception/issue-353-audio-scene into main
2026-03-03 14:32:36 -05:00
677e6eb75e
feat(perception): MFCC nearest-centroid audio scene classifier (Issue #353 )
...
Classifies ambient audio into indoor/outdoor/traffic/park at 1 Hz using
a 16-d feature vector (13 MFCC + spectral centroid + rolloff + ZCR) with
a normalised nearest-centroid classifier. Centroids are computed at import
time from seeded synthetic prototypes, ensuring deterministic behaviour.
Changes
-------
- saltybot_scene_msgs/msg/AudioScene.msg — label + confidence + features[16]
- saltybot_scene_msgs/CMakeLists.txt — register AudioScene.msg
- _audio_scene.py — pure-numpy feature extraction + NearestCentroidClassifier
- audio_scene_node.py — subscribes /audio/audio, publishes /saltybot/audio_scene
- test/test_audio_scene.py — 53 tests (all passing) with synthetic audio
- setup.py — add audio_scene entry point
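A normalised nearest-centroid classifier is small enough to sketch in full. This is a generic stdlib-only illustration of the technique (the real module is numpy-based and 16-dimensional; the confidence mapping here is an assumption):

```python
import math

class NearestCentroidClassifier:
    """Z-score features against the centroid statistics, then pick the
    closest centroid in normalised space."""
    def __init__(self, centroids: dict):
        self.labels = list(centroids)
        vecs = [centroids[k] for k in self.labels]
        n, dims = len(vecs), len(vecs[0])
        self.mu = [sum(v[i] for v in vecs) / n for i in range(dims)]
        self.sigma = [max(1e-9, (sum((v[i] - self.mu[i]) ** 2 for v in vecs)
                                 / n) ** 0.5) for i in range(dims)]
        self.c = [self._norm(v) for v in vecs]
    def _norm(self, v):
        return [(x - m) / s for x, m, s in zip(v, self.mu, self.sigma)]
    def classify(self, features):
        z = self._norm(features)
        dists = [math.dist(z, c) for c in self.c]
        i = dists.index(min(dists))
        return self.labels[i], 1.0 / (1.0 + dists[i])  # illustrative confidence
```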
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 14:03:11 -05:00
0af4441120
feat(social): system resource monitor for Jetson Orin (Issue #355 )
...
Polls /proc/stat (CPU delta), /proc/meminfo (RAM), os.statvfs (disk),
/sys/devices/gpu.0/load (GPU), and thermal zone sysfs paths; publishes
JSON payload on /saltybot/system_resources at 1 Hz.
Pure helpers (parse_proc_stat, cpu_percent_from_stats, parse_meminfo,
compute_ram_stats, read_disk_usage, read_gpu_load, read_thermal_zones)
are all unit-tested offline. Injectable I/O on SysmonNode allows full
node tick tests without /proc or /sys. 67/67 tests passing.
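The CPU-delta calculation works roughly as follows; this sketch uses assumed tuple shapes, so the actual helper signatures in the commit may differ:

```python
def parse_proc_stat(text: str):
    """First aggregate 'cpu' line of /proc/stat -> (idle_ticks, total_ticks).
    Counts iowait as idle when the field is present."""
    fields = [int(v) for v in text.splitlines()[0].split()[1:]]
    idle = fields[3] + (fields[4] if len(fields) > 4 else 0)
    return idle, sum(fields)

def cpu_percent_from_stats(prev, curr):
    """Busy share of the tick delta between two (idle, total) samples."""
    d_idle, d_total = curr[0] - prev[0], curr[1] - prev[1]
    if d_total <= 0:
        return 0.0
    return 100.0 * (1.0 - d_idle / d_total)
```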
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 13:54:31 -05:00
ddb93bec20
Issue #354 : Add ROS parameter editor to Settings
...
Add Parameters tab for live ROS parameter editing with:
- get_parameters service integration
- set_parameter service support
- Type-specific input controls
- Node-based grouping
- Search filtering
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 13:49:51 -05:00
358c1ab6f9
Merge pull request 'feat(webui): dedicated CAMERAS tab group with live MJPEG viewer (Issue #349 )' ( #352 ) from sl-webui/issue-349-camera-viewer into main
2026-03-03 13:44:48 -05:00
7966eb5187
Merge pull request 'feat(perception): depth-based obstacle size estimator (Issue #348 )' ( #351 ) from sl-perception/issue-348-obstacle-size into main
2026-03-03 13:44:32 -05:00
2a9b03dd76
feat(perception): depth-based obstacle size estimator (Issue #348 )
...
Projects LIDAR clusters into the D435i depth image to estimate 3-D
obstacle width and height in metres.
- saltybot_scene_msgs/msg/ObstacleSize.msg — new message
- saltybot_scene_msgs/msg/ObstacleSizeArray.msg — array wrapper
- saltybot_scene_msgs/CMakeLists.txt — register new msgs
- saltybot_bringup/_obstacle_size.py — pure-Python helper:
CameraParams (intrinsics + LIDAR→camera extrinsics)
ObstacleSizeEstimate (NamedTuple)
lidar_to_camera() LIDAR frame → camera frame transform
project_to_pixel() pinhole projection + bounds check
sample_depth_median() uint16 depth image window → median metres
estimate_height() vertical strip scan for row extent → height_m
estimate_cluster_size() full pipeline: cluster → size estimate
- saltybot_bringup/obstacle_size_node.py — ROS2 node
sub: /scan, /camera/depth/image_rect_raw, /camera/depth/camera_info
pub: /saltybot/obstacle_sizes (ObstacleSizeArray)
width from LIDAR bbox; height from depth strip back-projection;
graceful fallback (LIDAR-only) when depth image unavailable;
intrinsics latched from CameraInfo on first arrival
- test/test_obstacle_size.py — 33 tests, 33 passing
- setup.py — add obstacle_size entry
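The pinhole projection with bounds check mentioned for project_to_pixel looks roughly like this (the signature and argument order are illustrative, not the helper's actual API):

```python
def project_to_pixel(x_cam, y_cam, z_cam, fx, fy, cx, cy, width, height):
    """Project a camera-frame point to pixel coordinates.
    Returns None for points behind the camera or outside the image."""
    if z_cam <= 0:
        return None
    u = fx * x_cam / z_cam + cx
    v = fy * y_cam / z_cam + cy
    if 0 <= u < width and 0 <= v < height:
        return u, v
    return None
```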
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 13:32:41 -05:00
93028dc847
feat(webui): add dedicated CAMERAS tab group for camera viewer (Issue #349 )
...
Move camera viewer from TELEMETRY to new CAMERAS tab group (rose color).
Reorganizes tab structure to separate media capture from system telemetry.
CameraViewer.jsx already provides comprehensive MJPEG stream support:
- Multi-camera switching (7 total: front/left/rear/right CSI, D435i RGB/depth, panoramic)
- FPS counter per camera with quality badge (FULL/GOOD/LOW/NO SIGNAL)
- Resolution and camera info display
- Detection overlays (faces, gestures, scene objects)
- Picture-in-picture support (up to 3 pinned cameras)
- Video recording (MP4/WebM) and snapshot capture
- 360° panoramic viewer with mouse drag pan
- Color-coded quality indicators based on FPS
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 13:27:09 -05:00
3bee8f3cb4
Merge pull request 'feat(social): trigger-based ROS2 bag recorder (Issue #332 )' ( #335 ) from sl-jetson/issue-332-rosbag-recorder into main
2026-03-03 13:25:55 -05:00
813d6f2529
feat(social): trigger-based ROS2 bag recorder (Issue #332 )
...
BagRecorderNode: subscribes /saltybot/record_trigger (Bool), spawns
ros2 bag record subprocess, drives idle/recording/stopping/error
state machine; auto-stop timeout, SIGINT graceful shutdown with
SIGKILL fallback. Publishes /saltybot/recording_status (String).
Configurable topics (comma-separated), bag_dir, prefix, compression, size limit.
Subprocess injectable for offline testing. 101/101 tests pass.
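The state-machine core can be sketched with the injectable-spawn idea the commit mentions; the process object here is a loose stand-in for subprocess.Popen (its `wait` returns True once the child exits), so names and semantics are illustrative:

```python
IDLE, RECORDING, STOPPING, ERROR = "idle", "recording", "stopping", "error"


class BagRecorder:
    """Trigger-driven recorder state machine (illustrative sketch).

    `spawn` is injected so the states can be exercised offline,
    without ros2 or a real subprocess.
    """

    def __init__(self, spawn):
        self.spawn = spawn
        self.state = IDLE
        self.proc = None

    def on_trigger(self, record):
        if record and self.state == IDLE:
            try:
                self.proc = self.spawn()        # e.g. 'ros2 bag record ...'
                self.state = RECORDING
            except OSError:
                self.state = ERROR
        elif not record and self.state == RECORDING:
            self.state = STOPPING
            self.proc.send_signal("SIGINT")     # graceful stop first
            if not self.proc.wait(timeout=5.0): # still alive after timeout?
                self.proc.kill()                # SIGKILL fallback
            self.state = IDLE
```

Injecting `spawn` is what lets the 101 tests run without touching the filesystem or ros2.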
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 13:19:49 -05:00
9b538395c0
Merge pull request 'feat(webui): hand tracking skeleton visualization (Issue #344 )' ( #346 ) from sl-webui/issue-344-hand-viz into main
2026-03-03 13:19:46 -05:00
a0f3677732
Merge pull request 'feat: Add pure pursuit path follower for Nav2 (Issue #333 )' ( #334 ) from sl-controls/issue-333-pure-pursuit into main
2026-03-03 13:19:29 -05:00
3fce9bf577
Merge pull request 'feat(perception): MediaPipe hand tracking — Leap Motion pivot (Issue #342 )' ( #345 ) from sl-perception/issue-342-hand-tracking into main
2026-03-03 13:19:28 -05:00
1729e43964
feat(perception): MediaPipe hand tracking — Leap Motion pivot (Issue #342 )
...
PART 1 AUDIT: Zero Leap Motion / UltraLeap references found in any
saltybot_* package. Existing gesture_node.py (saltybot_social) already
uses MediaPipe — no cleanup required.
PART 2 NEW PACKAGES:
saltybot_hand_tracking_msgs (ament_cmake)
- HandLandmarks.msg — 21 landmarks (float32[63]), handedness,
gesture label + direction, wrist position
- HandLandmarksArray.msg
saltybot_hand_tracking (ament_python)
- _hand_gestures.py — pure-Python gesture classifier (no ROS2/MP deps)
Vocabulary: stop (open palm) → pause/stop,
point (index up) → direction command + 8-compass,
disarm (fist) → emergency-off,
confirm (thumbs-up) → confirm action,
follow_me (peace sign) → follow mode,
greeting (wrist oscillation) → greeting response
WaveDetector: sliding-window lateral wrist tracking
- hand_tracking_node.py — ROS2 node
sub: /camera/color/image_raw (BEST_EFFORT)
pub: /saltybot/hands (HandLandmarksArray)
/saltybot/hand_gesture (std_msgs/String)
MediaPipe model_complexity=0 (lite) for 20+ FPS
on Orin Nano Super; background MP init thread;
per-hand WaveDetector instances
- test/test_hand_gestures.py — 35 tests, 35 passing
Covers: Landmark, HandGestureResult, WaveDetector, all 6 gesture
classifiers, priority ordering, direction vectors, confidence bounds
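The sliding-window wave detection above can be sketched as follows; window size, reversal count, and amplitude thresholds are assumptions, not the package's tuned values:

```python
from collections import deque


class WaveDetector:
    """Sliding-window lateral wrist tracker (illustrative sketch).

    Counts direction reversals of the wrist x-coordinate inside a
    fixed-size window; enough reversals of sufficient amplitude
    classify as a greeting wave.
    """

    def __init__(self, window=30, min_reversals=3, min_amplitude=0.05):
        self.xs = deque(maxlen=window)
        self.min_reversals = min_reversals
        self.min_amplitude = min_amplitude

    def update(self, wrist_x):
        self.xs.append(wrist_x)
        if max(self.xs) - min(self.xs) < self.min_amplitude:
            return False                      # too little lateral motion
        pts = list(self.xs)
        deltas = [b - a for a, b in zip(pts, pts[1:])]
        reversals = sum(1 for d0, d1 in zip(deltas, deltas[1:]) if d0 * d1 < 0)
        return reversals >= self.min_reversals
```

One detector instance per tracked hand keeps the windows independent, matching the per-hand WaveDetector instances noted above.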
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-03 12:47:22 -05:00
347449ed95
feat(webui): hand pose tracking and gesture visualization (Issue #344 )
...
Features:
- Subscribes to /saltybot/hands (21 landmarks per hand - MediaPipe format)
- Subscribes to /saltybot/hand_gesture (String gesture label)
- Canvas-based hand skeleton rendering with bone connections
- Support for dual hand tracking (left and right)
- Handedness indicators with color coding
* Left hand: green
* Right hand: yellow
- Real-time gesture display with confidence indicator
- Per-landmark confidence visualization
- Bone connections between all 21 joints
Hand Skeleton Features:
- 21 MediaPipe landmarks per hand
* Wrist (1)
* Thumb (4)
* Index finger (4)
* Middle finger (4)
* Ring finger (4)
* Pinky finger (4)
- 20 bone connections between joints
- Confidence-based rendering (only show high-confidence points)
- Scaling and normalization for viewport
- Joint type indicators (tips with ring outline)
- Glow effects around landmarks
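One plausible 20-bone topology matching the counts above (one bone from the wrist to each finger base plus three along each finger, 5 x 4 = 20) can be derived from the MediaPipe landmark indices; the component's actual connection list may differ:

```python
# MediaPipe hand layout: 0 = wrist, then four joints per finger.
FINGERS = {
    "thumb":  [1, 2, 3, 4],
    "index":  [5, 6, 7, 8],
    "middle": [9, 10, 11, 12],
    "ring":   [13, 14, 15, 16],
    "pinky":  [17, 18, 19, 20],
}


def bone_connections():
    """Derive 20 bone segments for the skeleton renderer (sketch)."""
    bones = []
    for joints in FINGERS.values():
        chain = [0] + joints              # wrist -> fingertip
        bones += list(zip(chain, chain[1:]))
    return bones
```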
Gesture Recognition:
- Real-time gesture label display
- Confidence percentage (0-100%)
- Color-coded confidence:
* Green: >80% (high confidence)
* Yellow: 50-80% (medium confidence)
* Blue: <50% (detecting)
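The confidence tiers above map to colors roughly as sketched here; the handling of the exact 0.8 / 0.5 boundaries is an assumption:

```python
def confidence_color(confidence):
    """Map gesture confidence in [0.0, 1.0] to a display color tier."""
    if confidence > 0.8:
        return "green"    # high confidence
    if confidence >= 0.5:
        return "yellow"   # medium confidence
    return "blue"         # still detecting
```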
Hand Status Display:
- Live detection status for both hands
- Visual indicators (✓ detected / ◯ not detected)
- Dual-hand canvas rendering
- Gesture info panel with confidence bar
Integration:
- Added to SOCIAL tab group as "Hands" tab
- Positioned after "Faces" tab
- Uses subscribe hook for real-time updates
- Dark theme with color-coded hands
- Canvas-based rendering for smooth visualization
Build: 125 modules, no errors
Main bundle: 270.08 KB
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-03 12:43:19 -05:00