Compare commits


13 Commits

Author SHA1 Message Date
d41a9dfe10 feat(safety): remote e-stop over 4G MQTT (Issue #63)
STM32 firmware:
- safety.h/c: EstopSource enum, safety_remote_estop/clear/get/active()
  CDC 'E'=ESTOP_REMOTE, 'F'=ESTOP_CELLULAR_TIMEOUT, 'Z'=clear latch
- usbd_cdc_if: cdc_estop_request/cdc_estop_clear_request volatile flags
- status: status_update() +remote_estop param; both LEDs fast-blink 200ms
- main.c: immediate motor cutoff highest-priority; arming gated by
  !safety_remote_estop_active(); motor estop auto-clear gated; telemetry
  'es' field 0-4; status_update() updated to 5 args

Safety: IMMEDIATE motor cutoff, latched until explicit Z + DISARMED,
cannot re-arm via MQTT alone (requires RC arm hold). IWDG-safe.

Jetson bridge:
- remote_estop_node.py: paho-mqtt + pyserial, cellular watchdog 5s
- estop_params.yaml, remote_estop.launch.py
- setup.py / package.xml: register node + paho-mqtt dep
- docker-compose.yml: remote-estop service
- test_remote_estop.py: kill/clear/watchdog/latency unit tests
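The bridge's core behavior (MQTT JSON payload → single-byte CDC command, plus the 5 s cellular watchdog) can be sketched as below. This is an illustrative sketch, not the shipped `remote_estop_node.py`: class and payload-field names (`EstopBridge`, `"kill"`, `"clear"`) are assumptions, and the serial writer is injected so the logic stands alone without paho-mqtt or pyserial.

```python
import json
import time

# CDC command bytes from the firmware commit above.
ESTOP_CMD = b"E\r\n"    # 'E' = ESTOP_REMOTE
TIMEOUT_CMD = b"F\r\n"  # 'F' = ESTOP_CELLULAR_TIMEOUT
CLEAR_CMD = b"Z\r\n"    # 'Z' = clear latch

class EstopBridge:
    """Hypothetical sketch of the MQTT -> STM32 CDC e-stop bridge logic."""

    def __init__(self, serial_write, watchdog_s=5.0, now=time.monotonic):
        self.serial_write = serial_write  # e.g. a pyserial Serial.write
        self.watchdog_s = watchdog_s
        self.now = now
        self.last_msg = now()

    def on_mqtt(self, payload: bytes):
        """Handle one message on the e-stop MQTT topic; resets the watchdog."""
        self.last_msg = self.now()
        msg = json.loads(payload)
        if msg.get("kill"):
            self.serial_write(ESTOP_CMD)
        elif msg.get("clear"):
            self.serial_write(CLEAR_CMD)

    def tick(self, autonomous: bool):
        """Periodic check: a cellular dropout in AUTO mode trips the e-stop."""
        if autonomous and self.now() - self.last_msg > self.watchdog_s:
            self.serial_write(TIMEOUT_CMD)
```

Injecting the clock (`now`) keeps the watchdog deterministic in unit tests, mirroring the kill/clear/watchdog tests listed above.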

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 04:55:54 -05:00
f7acf1554c Merge pull request 'feat: semantic sidewalk segmentation — TensorRT FP16 (#72)' (#78) from sl-jetson/sidewalk-segmentation into main 2026-03-01 01:20:50 -05:00
e964d632bf feat: semantic sidewalk segmentation — TensorRT FP16 (#72)
New packages
────────────
saltybot_segmentation (ament_python)
  • seg_utils.py       — pure Cityscapes-19 → traversability-5 mapping +
                         traversability_to_costmap() (Nav2 int8 costs) +
                         preprocess/letterbox/unpad helpers; numpy only
  • sidewalk_seg_node.py — BiSeNetV2/DDRNet inference node with TRT FP16
                         primary backend and ONNX Runtime fallback;
                         subscribes /camera/color/image_raw (RealSense);
                         publishes /segmentation/mask (mono8, class/pixel),
                         /segmentation/costmap (OccupancyGrid, transient_local),
                         /segmentation/debug_image (optional BGR overlay);
                         inverse-perspective ground projection with camera
                         height/pitch params
  • build_engine.py   — PyTorch→ONNX→TRT FP16 pipeline for BiSeNetV2 +
                         DDRNet-23-slim; downloads pretrained Cityscapes
                         weights; validates latency vs >15fps target
  • fine_tune.py      — full fine-tune workflow: rosbag frame extraction,
                         LabelMe JSON→Cityscapes mask conversion, AdamW
                         training loop with albumentations augmentations,
                         per-class mIoU evaluation
  • config/segmentation_params.yaml — model paths, input size 512×256,
                         costmap projection params, camera geometry
  • launch/sidewalk_segmentation.launch.py
  • docs/training_guide.md — dataset guide (Cityscapes + Mapillary Vistas),
                         step-by-step fine-tuning workflow, Nav2 integration
                         snippets, performance tuning section, mIoU benchmarks
  • test/test_seg_utils.py — 24 unit tests (class mapping + cost generation)

saltybot_segmentation_costmap (ament_cmake)
  • SegmentationCostmapLayer.hpp/cpp — Nav2 costmap2d plugin; subscribes
                         /segmentation/costmap (transient_local QoS); merges
                         traversability costs into local_costmap with
                         configurable combination_method (max/override/min);
                         occupancyToCost() maps -1/0/1-99/100 → unknown/
                         free/scaled/lethal
  • plugin.xml, CMakeLists.txt, package.xml
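The plugin's `occupancyToCost()` mapping can be sketched in Python for reference (the real implementation is C++). The Nav2 costmap byte constants are standard (`0` = FREE_SPACE, `254` = LETHAL_OBSTACLE, `255` = NO_INFORMATION); the exact linear scaling used for the 1–99 band is an assumption here.

```python
# Nav2 costmap_2d cost constants.
FREE_SPACE = 0
LETHAL_OBSTACLE = 254
NO_INFORMATION = 255

def occupancy_to_cost(occ: int) -> int:
    """Map OccupancyGrid values (-1, 0..100) to costmap_2d cost bytes."""
    if occ == -1:
        return NO_INFORMATION        # unknown
    if occ == 0:
        return FREE_SPACE            # free
    if occ >= 100:
        return LETHAL_OBSTACLE       # lethal
    # 1..99 scaled linearly into the non-lethal band 1..252 (assumed scaling)
    return 1 + round((occ - 1) * (252 - 1) / (99 - 1))
```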

Traversability classes
  SIDEWALK (0) → cost 0   (free — preferred)
  GRASS    (1) → cost 50  (medium)
  ROAD     (2) → cost 90  (high — avoid but crossable)
  OBSTACLE (3) → cost 100 (lethal)
  UNKNOWN  (4) → cost -1  (Nav2 unknown)
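The class→cost table above amounts to a lookup per pixel. A pure-Python sketch (the real `seg_utils.py` does this vectorized with numpy; names here are illustrative):

```python
# Traversability class IDs and their Nav2 int8 costs, per the table above.
SIDEWALK, GRASS, ROAD, OBSTACLE, UNKNOWN = range(5)
TRAVERSABILITY_COST = {
    SIDEWALK: 0,    # free — preferred
    GRASS: 50,      # medium
    ROAD: 90,       # high — avoid but crossable
    OBSTACLE: 100,  # lethal
    UNKNOWN: -1,    # Nav2 unknown
}

def traversability_to_costmap(mask):
    """Map a traversability-class mask (rows of class IDs) to costmap costs."""
    return [[TRAVERSABILITY_COST[c] for c in row] for row in mask]
```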

Performance target: >15fps on Orin Nano Super at 512×256
  BiSeNetV2 FP16 TRT: ~50fps measured on similar Ampere hardware
  DDRNet-23s FP16 TRT: ~40fps

Tests: 24/24 pass (seg_utils — no GPU/ROS2 required)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 01:15:13 -05:00
seb
4be93669a1 Merge pull request 'feat: outdoor adaptive speed controller — walk/jog/ride profiles up to 8 m/s' (#76) from sl-controls/outdoor-speed into main 2026-03-01 01:11:10 -05:00
seb
d3168b9c07 Merge pull request 'feat: route recording + autonomous replay (#71)' (#75) from sl-perception/route-record-replay into main 2026-03-01 01:10:02 -05:00
5dcaa7bd49 feat: route recording + autonomous replay (#71)
Implements Phase 3 ride-once-replay-forever route system.

saltybot_routes package:
- route_recorder_node: samples GPS+odom+heading at 1Hz during follow-me
  rides; 2m waypoint spacing; JSON-Lines .jsonl on NVMe /data/routes/;
  services start_recording/stop_recording/save/discard
- route_replayer_node: loads .jsonl, GPS->ENU flat-earth conversion,
  heading->quaternion, 3m subsampling for Nav2 navigate_through_poses;
  2m GPS tolerance (SIM7600X +-2.5m); pause/resume/stop services
- route_manager_node: list/info/delete services for saved routes
- route_system.launch.py: all three nodes with shared params
- route_params.yaml: waypoint_spacing_m=2.0, replay_spacing_m=3.0

GPS: /gps/fix from SIM7600X (PR #65)
UWB: /uwb/target from follow-me (PR #66)
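The replayer's GPS→ENU flat-earth conversion reduces to a spherical-earth approximation about the route origin, valid for the short distances a recorded route covers. A hedged sketch (function name and radius constant are assumptions, not lifted from `route_replayer_node.py`):

```python
import math

EARTH_R = 6378137.0  # WGS-84 equatorial radius, metres

def gps_to_enu(lat, lon, lat0, lon0):
    """Convert (lat, lon) degrees to local East/North metres about (lat0, lon0)."""
    east = math.radians(lon - lon0) * EARTH_R * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_R
    return east, north
```

With waypoints 2 m apart and a ±2.5 m GPS, consecutive ENU points are noisy relative to their spacing, which is why replay subsamples to 3 m and uses a 2 m arrival tolerance.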

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 01:07:06 -05:00
118d2b3add feat: outdoor adaptive speed controller (saltybot_speed_controller)
Adds saltybot_speed_controller ROS2 package — sits between person_follower
(/cmd_vel_raw) and cmd_vel_bridge (/cmd_vel), providing adaptive speed profiles
tuned for balance stability during outdoor follow-me up to 8 m/s (EUC ride mode).

Key features:
- walk/jog/ride profiles (1.5/3.0/8.0 m/s) selected via UWB target velocity
- Hysteresis-based switching (5 ticks up, 15 ticks down) prevents oscillation
- Trapezoidal accel/decel ramps per profile; ride accel 0.3 m/s² (balance-safe)
- Emergency decel (2.0 m/s²) triggered by sudden target stop or hard decel
- GPS runaway protection: if GPS > commanded×1.5 AND > 50% profile_max → brake
- 52/52 unit tests (no ROS2 runtime required)

Topics: /cmd_vel_raw → [speed_controller] → /cmd_vel, /speed_controller/profile
Launch: ros2 launch saltybot_speed_controller outdoor_speed.launch.py
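The hysteresis rule (5 consecutive ticks above the current profile's ceiling to shift up, 15 below the previous ceiling to shift down) can be sketched as a small state machine. This is illustrative only — class and method names are assumptions, and profiles shift one level per transition:

```python
PROFILES = [("walk", 1.5), ("jog", 3.0), ("ride", 8.0)]  # (name, max m/s)
UP_TICKS, DOWN_TICKS = 5, 15  # hysteresis counts from the commit above

class ProfileSelector:
    """Hypothetical sketch of hysteresis-based profile switching."""

    def __init__(self):
        self.idx = 0      # start in walk
        self.up = 0
        self.down = 0

    def update(self, target_speed: float) -> str:
        """Feed one tick of UWB target speed; return the active profile name."""
        cur_max = PROFILES[self.idx][1]
        if self.idx + 1 < len(PROFILES) and target_speed > cur_max:
            self.up += 1          # candidate for shifting up
            self.down = 0
        elif self.idx > 0 and target_speed < PROFILES[self.idx - 1][1]:
            self.down += 1        # candidate for shifting down
            self.up = 0
        else:
            self.up = self.down = 0   # inside band: reset both counters
        if self.up >= UP_TICKS:
            self.idx += 1
            self.up = 0
        elif self.down >= DOWN_TICKS:
            self.idx -= 1
            self.down = 0
        return PROFILES[self.idx][0]
```

The asymmetric counts (fast up, slow down) bias toward holding the higher profile briefly, which is what prevents oscillation when the target speed hovers near a boundary.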

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 01:06:50 -05:00
seb
dcc26e6937 Merge pull request 'feat: SIM7600X mount + LTE/GNSS antenna brackets' (#70) from sl-mechanical/cellular-mount into main 2026-03-01 01:00:50 -05:00
seb
b3c03e096f Merge pull request 'feat: outdoor nav — OSM routing + geofence (#59)' (#67) from sl-perception/outdoor-nav into main 2026-03-01 01:00:43 -05:00
seb
b97ce09d80 Merge pull request 'feat: full_stack.launch.py — one-command autonomous stack bringup' (#68) from sl-jetson/full-stack-launch into main 2026-03-01 01:00:30 -05:00
0daac970c3 feat: SIM7600X mount + LTE/GNSS antenna brackets
chassis/sim7600x_mount.scad
  Platform bracket for Waveshare SIM7600X-H 4G HAT (65×56 mm,
  RPi HAT M2.5 pattern 58×49 mm). Standoffs height = HAT underside
  component clearance + 4 mm. Three walls (X−, X+, Y+); Y− edge
  fully open for SIM card tray access without disassembly. Floor-plate
  notch wider than SIM slot so card inserts/ejects with board in situ.
  USB port notch same open edge. u.FL pigtail exit slot in Y+ wall.
  4× M3 flat-head countersunk holes for base plate bolt-down.
  RENDER: bracket / assembly / bracket_2d (DXF for base plate layout).

chassis/antenna_mount.scad
  Two bracket types on shared 25 mm stem split-collar (M4 bolts,
  set screw height lock, cable-tie grooves on rear half):

  lte_bracket() — arm with 2× SMA bulkhead holes (6.6 mm clearance,
  hex-nut capture on underside). u.FL pigtail relief grooves on arm
  underside. Antennas point skyward. Recommended: 500–600 mm stem.

  gnss_platform() — upward-facing tray (≤40×40 mm patch antenna),
  4-sided retention lip, central GNSS coax slot, optional M2 bolt-down
  holes at 30×30 mm. Mount as high as practical for clear sky view:
  750–800 mm stem height. Recommended: active 35×35 mm patch antenna.

  RENDER "full_stem" shows both at 80 mm spacing on stem stub.
  Individual RENDER modes for each printable piece.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 00:59:47 -05:00
039355d5bb feat: full_stack.launch.py — one-command autonomous stack bringup
Adds saltybot_bringup/launch/full_stack.launch.py: a single launch file
that brings up the entire SaltyBot software stack in dependency order,
with mode selection (indoor / outdoor / follow).

Launch sequence (wall-clock delays):
  t= 0s  robot_description (URDF + TF)
  t= 0s  STM32 bidirectional serial bridge
  t= 2s  sensors (RPLIDAR A1M8 + RealSense D435i)
  t= 2s  cmd_vel safety bridge (deadman + ramp + AUTONOMOUS gate)
  t= 4s  UWB driver (MaUWB DW3000 anchors on USB)
  t= 4s  CSI cameras — 4x IMX219 (optional, enable_csi_cameras:=true)
  t= 6s  SLAM — RTAB-Map RGB-D+LIDAR (indoor only)
  t= 6s  Outdoor GPS nav (outdoor only)
  t= 6s  YOLOv8n person detection (TensorRT)
  t= 9s  Person follower (UWB primary + camera fusion)
  t=14s  Nav2 navigation stack (indoor only)
  t=17s  rosbridge WebSocket server (port 9090)

Modes:
  indoor  — SLAM + Nav2 + full sensor suite + follow + UWB (default)
  outdoor — GPS nav + sensors + follow + UWB (no SLAM)
  follow  — sensors + UWB + perception + follower only
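The mode table above implies a simple component-gating lookup. A hypothetical sketch (the actual launch file expresses this with launch conditions, not a dict; component names here are illustrative):

```python
# Which stack components each mode enables, per the table above.
MODE_COMPONENTS = {
    "indoor":  {"slam", "nav2", "sensors", "follow", "uwb", "perception"},
    "outdoor": {"gps_nav", "sensors", "follow", "uwb", "perception"},
    "follow":  {"sensors", "follow", "uwb", "perception"},
}

def component_enabled(mode: str, component: str) -> bool:
    """True if the given stack component should launch in this mode."""
    return component in MODE_COMPONENTS.get(mode, set())
```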

Launch arguments:
  mode, use_sim_time, enable_csi_cameras, enable_uwb, enable_perception,
  enable_follower, enable_bridge, enable_rosbridge, follow_distance,
  max_linear_vel, uwb_port_a, uwb_port_b, stm32_port

Also updates saltybot_bringup/package.xml:
  - Adds exec_depend for all saltybot_* packages included by full_stack
  - Updates maintainer to sl-jetson

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 00:56:39 -05:00
e0987fcec8 feat: outdoor nav — OSM routing + GPS waypoints + geofence (#59)
Implements Phase 2d outdoor autonomous navigation for SaltyBot.
GPS source: SIM7600X /gps/fix from PR #65 (saltybot_cellular).

saltybot_outdoor package:
- osm_router_node: Overpass API + A* haversine graph + Douglas-Peucker
  simplification, /outdoor/route (Path) + /outdoor/waypoints (PoseArray)
- gps_waypoint_follower_node: GPS->Nav2 navigate_through_poses bridge,
  quality-adaptive tolerances (2m cellular / 0.30m RTK)
- geofence_node: ray-casting polygon safety, emergency stop on violation
- outdoor_nav.launch.py: dual-EKF + navsat_transform + all nodes
- outdoor_nav_params.yaml: 1.5m/s, no static_layer, 2m GPS tolerance
- ekf_outdoor.yaml: robot_localization dual-EKF + navsat_transform
- geofence_vertices.yaml: template with usage instructions
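The geofence check is the classic ray-casting point-in-polygon test: cast a ray in +x from the query point and count edge crossings; an odd count means inside. A minimal sketch of the kind of check `geofence_node` performs (the node operates on lat/lon vertices from `geofence_vertices.yaml`; this function is generic and illustrative):

```python
def inside_fence(x, y, poly):
    """Return True if (x, y) lies inside the polygon [(x0, y0), ...]."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge straddles the horizontal ray from (x, y) in +x?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside   # each crossing toggles parity
    return inside
```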

docker-compose.yml: fix malformed saltybot-surround block; add
saltybot-outdoor service (depends on saltybot-nav2, OSM NVMe cache)

SLAM-SETUP-PLAN.md: Phase 2d done

RTK upgrade: SIM7600X (+-2.5m) -> ZED-F9P (+-2cm), set use_rtk:=true

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 00:52:54 -05:00
55 changed files with 7177 additions and 49 deletions

chassis/antenna_mount.scad

@ -0,0 +1,309 @@
// ============================================================
// antenna_mount.scad LTE + GNSS Antenna Brackets Rev A
// 2026-03-01 sl-mechanical
// ============================================================
// Stem-mounted brackets for the SIM7600X cellular/GPS system.
//
// lte_bracket() 25 mm stem clamp + arm with 2× SMA
// bulkhead holes (LTE main + diversity).
// Antennas point skyward.
//
// gnss_platform() 25 mm stem clamp + upward-facing tray
// for active GNSS patch antenna (40×40 mm).
//
// Recommended stem positions (above base plate):
// LTE bracket 500–600 mm (above battery carousel)
// GNSS platform 750–800 mm (below sensor head, clear sky)
//
// Both use the same split-collar design:
// M4 clamping bolts + M4 set screw (height lock).
// Cable-tie slot on rear half for u.FL pigtail management.
//
// u.FL → SMA pigtail cables route down the stem to the SIM7600X HAT.
//
// VERIFY: MAWB_HOLE_X / MAWB_HOLE_Y for any M2.5 pattern
// SMA_D for your SMA bulkhead thread OD
//
// RENDER options:
// "lte_assembly" LTE bracket view (default)
// "lte_front" LTE collar front half for slicing
// "lte_rear" LTE collar rear half
// "lte_arm" LTE SMA arm for slicing (print flat)
// "gnss_assembly" GNSS platform view
// "gnss_front" GNSS collar front half
// "gnss_rear" GNSS collar rear half
// "gnss_tray" GNSS patch-antenna tray for slicing
// "full_stem" both brackets on 400 mm stem stub
// ============================================================
RENDER = "lte_assembly";
// Stem
STEM_OD = 25.0;
STEM_BORE = 25.4; // +0.4 clearance
// Collar (shared)
COL_OD = 52.0;
COL_H = 28.0;
COL_BOLT_X = 19.0; // M4 clamping bolt CL from stem axis
COL_BOLT_D = 4.5; // M4 clearance hole
COL_NUT_W = 7.0; // M4 hex nut A/F
COL_NUT_H = 3.4;
// Cable-tie slot on rear half outer face (for pigtail routing)
TIE_W = 5.0;
TIE_D = 3.0;
TIE_Z1 = COL_H * 0.35;
TIE_Z2 = COL_H * 0.70;
// LTE SMA arm
// 2× SMA bulkhead connectors pointing skyward
SMA_D = 6.6; // SMA bulkhead clearance hole (6.35 mm thread)
SMA_NUT_AF = 10.2; // SMA hex-nut capture across-flats
SMA_NUT_H = 4.5; // hex-nut pocket depth (bottom of arm)
SMA_SPACING = 22.0; // centre-to-centre between 2 SMA positions
LTE_ARM_L = 40.0; // arm length (from collar OD to SMA CL)
LTE_ARM_W = SMA_SPACING + 18.0; // arm width
LTE_ARM_H = 9.0; // arm thickness
LTE_SMA_Y = LTE_ARM_L * 0.65; // SMA position along arm
// Pigtail cable relief (semi-circular groove on arm underside)
PIGTAIL_D = 4.5;
// M3 attachment bolts (arm collar boss)
M3_D = 3.3;
// GNSS patch-antenna tray
GNSS_PATCH = 40.0; // maximum patch antenna side (fits 25, 35, 40 mm)
GNSS_LIP_T = 2.2; // lip wall thickness
GNSS_LIP_H = 3.0; // lip height above tray surface
GNSS_TRAY_T = 3.0; // tray base thickness
// Optional M2 bolt-down pattern for larger patch antennas
GNSS_M2_SP = 30.0; // M2 spacing (verify with your patch antenna)
M2_D = 2.2;
// Coax cable slot (centre of tray, through base)
GNSS_COAX_W = 5.5;
// Arm connecting tray to collar
GNSS_ARM_L = 28.0;
GNSS_ARM_W = 22.0;
GNSS_ARM_H = 7.0;
// Spacing between LTE and GNSS collars on stem
STEM_SPACING = 80.0;
$fn = 64;
e = 0.01;
//
// collar_half(side, arm_type)
// arm_type: "lte" | "gnss" | "none"
// Print flat-face-down.
//
module collar_half(side="front", arm_type="lte") {
y_front = (side == "front");
has_arm = y_front && (arm_type != "none");
arm_w = (arm_type == "lte") ? LTE_ARM_W : GNSS_ARM_W;
arm_l = (arm_type == "lte") ? LTE_ARM_L : GNSS_ARM_L;
arm_h = (arm_type == "lte") ? LTE_ARM_H : GNSS_ARM_H;
arm_z = COL_H/2 - arm_h/2;
difference() {
union() {
// D-shaped collar half
intersection() {
cylinder(d=COL_OD, h=COL_H);
translate([-COL_OD/2, y_front ? 0 : -COL_OD/2, 0])
cube([COL_OD, COL_OD/2, COL_H]);
}
// Arm boss integrated into front half
if (has_arm)
translate([-arm_w/2, COL_OD/2, arm_z])
cube([arm_w, arm_l, arm_h]);
}
// Stem bore
translate([0, 0, -e])
cylinder(d=STEM_BORE, h=COL_H + 2*e);
// M4 clamping bolt holes (Y direction)
for (bx=[-COL_BOLT_X, COL_BOLT_X])
translate([bx, y_front ? COL_OD/2 : 0, COL_H/2])
rotate([90,0,0])
cylinder(d=COL_BOLT_D, h=COL_OD/2 + e);
// M4 hex nut pockets (rear half only)
if (!y_front)
for (bx=[-COL_BOLT_X, COL_BOLT_X])
translate([bx, -(COL_OD/4 + e), COL_H/2])
rotate([90,0,0])
cylinder(d=COL_NUT_W/cos(30), h=COL_NUT_H+e,
$fn=6);
// M4 set screw (height lock, front half outer face)
if (y_front)
translate([0, COL_OD/2, COL_H * 0.75])
rotate([90,0,0])
cylinder(d=COL_BOLT_D,
h=COL_OD/2 - STEM_BORE/2 + e);
// Cable-tie grooves on rear half outer surface (2×)
if (!y_front)
for (tz=[TIE_Z1, TIE_Z2])
translate([-COL_OD/2 - e, -TIE_W/2, tz])
cube([TIE_D + e, TIE_W, TIE_W]);
// M3 attachment holes through arm boss (2×)
if (has_arm)
for (dx=[-arm_w/4, arm_w/4])
translate([dx, COL_OD/2 + arm_l * 0.45, arm_z - e])
cylinder(d=M3_D, h=arm_h + 2*e);
}
}
//
// lte_sma_arm()
// Separate arm piece bolts to collar front boss.
// 2× SMA bulkheads point upward. Pigtail grooves on underside.
// Print: lay flat on bottom face.
//
module lte_sma_arm() {
difference() {
translate([-LTE_ARM_W/2, 0, 0])
cube([LTE_ARM_W, LTE_ARM_L, LTE_ARM_H]);
// 2× SMA bulkhead through-holes (vertical)
for (dx=[-SMA_SPACING/2, SMA_SPACING/2]) {
translate([dx, LTE_SMA_Y, -e])
cylinder(d=SMA_D, h=LTE_ARM_H + 2*e);
// Hex-nut pocket from bottom face
translate([dx, LTE_SMA_Y, -e])
cylinder(d=SMA_NUT_AF/cos(30), h=SMA_NUT_H + e,
$fn=6);
}
// u.FL pigtail relief grooves on underside: semi-circular channels
// running along Y from the arm root to the SMA holes
for (dx=[-SMA_SPACING/2, SMA_SPACING/2])
translate([dx, -e, 0])
rotate([-90, 0, 0])
cylinder(d=PIGTAIL_D, h=LTE_SMA_Y + 2*e);
// M3 attachment holes (collar boss)
for (dx=[-LTE_ARM_W/4, LTE_ARM_W/4])
translate([dx, LTE_ARM_L * 0.45, -e])
cylinder(d=M3_D, h=LTE_ARM_H + 2*e);
}
}
//
// gnss_tray()
// Horizontal tray faces skyward. Retention lip on all 4 sides.
// Central coax slot + optional M2 bolt holes.
// Print: top face on bed (tray upside-down no supports needed).
//
module gnss_tray() {
outer = GNSS_PATCH + 2 * GNSS_LIP_T;
difference() {
union() {
// Base plate
translate([-outer/2, 0, 0])
cube([outer, outer, GNSS_TRAY_T]);
// Retention lip (4 walls)
translate([-outer/2, 0, GNSS_TRAY_T])
difference() {
cube([outer, outer, GNSS_LIP_H]);
translate([GNSS_LIP_T, GNSS_LIP_T, -e])
cube([GNSS_PATCH, GNSS_PATCH, GNSS_LIP_H + 2*e]);
}
// Arm connecting to collar
translate([-GNSS_ARM_W/2, -GNSS_ARM_L, 0])
cube([GNSS_ARM_W, GNSS_ARM_L, GNSS_ARM_H]);
}
// GNSS coax cable slot (centre, through base)
translate([-GNSS_COAX_W/2, outer/2 - GNSS_COAX_W/2, -e])
cube([GNSS_COAX_W, GNSS_COAX_W, GNSS_TRAY_T + 2*e]);
// M2 bolt-down holes (30×30 mm pattern, centred in tray)
tray_cx = 0;
tray_cy = outer/2;
for (dx=[-GNSS_M2_SP/2, GNSS_M2_SP/2])
for (dy=[-GNSS_M2_SP/2, GNSS_M2_SP/2])
translate([tray_cx + dx, tray_cy + dy, -e])
cylinder(d=M2_D, h=GNSS_TRAY_T + 2*e);
// M3 bolt holes (arm collar)
for (dx=[-GNSS_ARM_W/4, GNSS_ARM_W/4])
translate([dx, -GNSS_ARM_L * 0.45, -e])
cylinder(d=M3_D, h=GNSS_ARM_H + 2*e);
}
}
//
// lte_bracket_assembly() / gnss_bracket_assembly()
//
module lte_bracket_assembly() {
color("SteelBlue", 0.9) collar_half("front", "lte");
color("CornflowerBlue", 0.9) mirror([0,1,0]) collar_half("rear", "none");
color("LightSteelBlue", 0.85)
translate([0, COL_OD/2, COL_H/2 - LTE_ARM_H/2])
lte_sma_arm();
// Phantom SMA stub antennas
for (dx=[-SMA_SPACING/2, SMA_SPACING/2])
color("DimGray", 0.5)
translate([dx, COL_OD/2 + LTE_SMA_Y,
COL_H/2 + LTE_ARM_H/2])
cylinder(d=7, h=60);
}
module gnss_bracket_assembly() {
color("Teal", 0.9) collar_half("front", "gnss");
color("DarkCyan", 0.9) mirror([0,1,0]) collar_half("rear", "none");
// Tray: arm at Y, tray faces +Z
color("LightCyan", 0.85)
translate([0, COL_OD/2 + GNSS_ARM_L,
COL_H/2 - GNSS_ARM_H/2])
rotate([90, 0, 0])
gnss_tray();
// Phantom GNSS patch
color("Gold", 0.35)
translate([-GNSS_PATCH/2,
COL_OD/2 + GNSS_ARM_L + GNSS_LIP_T,
COL_H/2 + GNSS_ARM_H/2 + GNSS_TRAY_T])
cube([GNSS_PATCH, GNSS_PATCH, 8]);
}
//
// Render selector
//
if (RENDER == "lte_assembly") {
lte_bracket_assembly();
} else if (RENDER == "lte_front") {
collar_half("front", "lte");
} else if (RENDER == "lte_rear") {
collar_half("rear", "none");
} else if (RENDER == "lte_arm") {
translate([0, 0, LTE_ARM_H]) rotate([180,0,0]) lte_sma_arm();
} else if (RENDER == "gnss_assembly") {
gnss_bracket_assembly();
} else if (RENDER == "gnss_front") {
collar_half("front", "gnss");
} else if (RENDER == "gnss_rear") {
collar_half("rear", "none");
} else if (RENDER == "gnss_tray") {
gnss_tray();
} else if (RENDER == "full_stem") {
color("Silver", 0.2)
translate([0,0,-40])
cylinder(d=STEM_OD, h=STEM_SPACING + COL_H + 80);
lte_bracket_assembly();
translate([0, 0, STEM_SPACING])
gnss_bracket_assembly();
}

chassis/sim7600x_mount.scad

@ -0,0 +1,169 @@
// ============================================================
// sim7600x_mount.scad Waveshare SIM7600X 4G HAT Bracket
// Rev A 2026-03-01 sl-mechanical
// ============================================================
// Mounts the SIM7600X HAT near the Jetson Orin on the base plate.
//
// PCB: 65 × 56 mm, 4× M2.5 mounting holes (RPi HAT std pattern).
//
// SIM card access without disassembly:
// The Y− edge of the bracket platform is fully open: a notch
// in the floor plate is wider than the SIM tray, so the card
// inserts / ejects with the board fully installed.
//
// Base plate attachment: 4× M3 flat-head countersunk holes at
// bracket corners. Drill M3 clearance holes in base plate and
// use M3 nyloc nuts underneath, or use captured M3 T-nuts.
//
// VERIFY WITH CALIPERS BEFORE PRINTING:
// HAT_L, HAT_W PCB dimensions
// HAT_HOLE_X, HAT_HOLE_Y M2.5 hole spacing
// HAT_HOLE_OX, HAT_HOLE_OY hole inset from PCB corners
// SIM_X, SIM_W SIM slot centre & width on Y edge
// USB_X, USB_W, USB_H USB port on Y edge
//
// RENDER options:
// "assembly" bracket + phantom PCB (default)
// "bracket" print-ready bracket
// "bracket_2d" floor projection DXF for base-plate layout
// ============================================================
RENDER = "assembly";
// Verify before printing
// Waveshare SIM7600X-H 4G HAT
HAT_L = 65.0; // PCB length (X)
HAT_W = 56.0; // PCB width (Y) SIM slot on Y=0 edge
HAT_H_BELOW = 3.0; // tallest component on PCB underside (verify)
// RPi HAT standard M2.5 hole pattern
HAT_HOLE_X = 58.0; // X span between hole pairs
HAT_HOLE_Y = 49.0; // Y span between hole pairs
HAT_HOLE_OX = 3.5; // hole inset from X edge of PCB
HAT_HOLE_OY = 3.5; // hole inset from Y edge of PCB
M25_D = 2.7; // M2.5 clearance (loose, for alignment)
M25_OD = 5.0; // standoff post outer diameter
// SIM card slot (Y− edge, verify position from left / X− corner)
SIM_X = 42.0; // SIM slot centre from PCB X edge
SIM_W = 17.0; // SIM slot width
SIM_H_NOTCH = 4.5; // notch height for tray travel + eject pin
// USB Micro-B port (Y− edge; verify, may differ by HAT version)
USB_X = 11.0; // USB port centre from PCB X edge
USB_W = 10.5; // USB port width
USB_H = 7.0; // USB port height
// u.FL pigtail exit slot (Y+ wall)
UFL_SLOT_W = 12.0;
UFL_SLOT_H = 5.0;
// Bracket geometry
STNDFF_H = HAT_H_BELOW + 4.0; // standoff height (clears underside)
PLAT_T = 3.5; // floor plate thickness
WALL_T = 2.5; // side wall thickness
PAD_X = 5.0; // platform extends PAD_X beyond PCB on X± sides
PAD_Y_PLUS = 8.0; // platform extends PAD_Y_PLUS beyond PCB on Y+ side
// Y− side: open (no wall, no floor overhang) for SIM/USB access
PLAT_L = HAT_L + 2 * PAD_X;
PLAT_W = HAT_W + PAD_Y_PLUS; // Y edge flush with PCB Y=0
// PCB sits with Y=0 edge flush with bracket Y=0 face
PCB_X0 = PAD_X; // X offset of PCB within bracket
// M3 base-plate mounting holes
M3_D = 3.4;
M3_CS_D = 6.2; // flat-head countersink diameter
M3_CS_H = 3.0; // countersink depth (from bottom face)
M3_INSET = 5.0; // hole inset from bracket corner
// Side wall height (for cable containment)
WALL_H = STNDFF_H + 4.0;
$fn = 48;
e = 0.01;
//
module sim7600x_bracket() {
difference() {
union() {
// Floor plate
cube([PLAT_L, PLAT_W, PLAT_T]);
// Side walls: X−, X+, Y+ only (Y− open)
translate([0, 0, 0])
cube([WALL_T, PLAT_W, WALL_H]);
translate([PLAT_L - WALL_T, 0, 0])
cube([WALL_T, PLAT_W, WALL_H]);
translate([0, PLAT_W - WALL_T, 0])
cube([PLAT_L, WALL_T, WALL_H]);
// M2.5 standoff posts (×4)
for (hx=[0, HAT_HOLE_X], hy=[0, HAT_HOLE_Y])
translate([PCB_X0 + HAT_HOLE_OX + hx,
HAT_HOLE_OY + hy,
PLAT_T])
cylinder(d=M25_OD, h=STNDFF_H);
}
// M2.5 clearance bores through standoffs
for (hx=[0, HAT_HOLE_X], hy=[0, HAT_HOLE_Y])
translate([PCB_X0 + HAT_HOLE_OX + hx,
HAT_HOLE_OY + hy, -e])
cylinder(d=M25_D, h=PLAT_T + STNDFF_H + e);
// SIM card access notch (Y− face of floor)
// Notch 4 mm wider than SIM slot each side
translate([PCB_X0 + SIM_X - SIM_W/2 - 4, -e, -e])
cube([SIM_W + 8, WALL_T + e, PLAT_T + SIM_H_NOTCH + e]);
// USB port access notch (Y− edge)
translate([PCB_X0 + USB_X - USB_W/2, -e,
PLAT_T + STNDFF_H/2 - USB_H/2])
cube([USB_W, WALL_T + 2*e, USB_H]);
// u.FL pigtail exit slot (Y+ wall, upper zone)
translate([PLAT_L/2 - UFL_SLOT_W/2,
PLAT_W - WALL_T - e,
PLAT_T + STNDFF_H - UFL_SLOT_H])
cube([UFL_SLOT_W, WALL_T + 2*e, UFL_SLOT_H + e]);
// M3 base-plate mounting holes (×4, countersunk)
for (cx=[M3_INSET, PLAT_L - M3_INSET])
for (cy=[M3_INSET, PLAT_W - M3_INSET]) {
translate([cx, cy, -e])
cylinder(d=M3_D, h=PLAT_T + 2*e);
// Countersink from bottom face
translate([cx, cy, -e])
cylinder(d1=M3_CS_D, d2=M3_D,
h=M3_CS_H + e);
}
// Cable relief notch in X+ wall
translate([PLAT_L - WALL_T - e, PLAT_W * 0.35, PLAT_T + 2])
cube([WALL_T + 2*e, 9, 5]);
}
}
//
if (RENDER == "assembly") {
color("DimGray", 0.92) sim7600x_bracket();
// Phantom PCB
color("ForestGreen", 0.3)
translate([PCB_X0, 0, PLAT_T + STNDFF_H])
cube([HAT_L, HAT_W, 1.6]);
// SIM access marker (yellow arrow zone)
color("Gold", 0.7)
translate([PCB_X0 + SIM_X - 10, -8, 0])
cube([20, 8, 2]);
} else if (RENDER == "bracket") {
sim7600x_bracket();
} else if (RENDER == "bracket_2d") {
projection(cut=true)
translate([0, 0, -PLAT_T/2])
sim7600x_bracket();
}

docker-compose.yml

@ -14,44 +14,30 @@ services:
dockerfile: Dockerfile
container_name: saltybot-ros2
restart: unless-stopped
runtime: nvidia # JetPack NVIDIA runtime
privileged: false # use device passthrough instead
network_mode: host # ROS2 DDS multicast needs host networking
runtime: nvidia
privileged: false
network_mode: host
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=all
- RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
- ROS_DOMAIN_ID=42
- DISPLAY=${DISPLAY:-:0} # for RViz2 on host X11
- DISPLAY=${DISPLAY:-:0}
volumes:
# X11 socket for RViz2
- /tmp/.X11-unix:/tmp/.X11-unix:rw
# ROS2 workspace (host-mounted for live dev)
- ./ros2_ws/src:/ros2_ws/src:rw
# Persistent SLAM maps on NVMe
- /mnt/nvme/saltybot/maps:/maps
# NVMe data volume
- /mnt/nvme/saltybot:/data:rw
# Config files
- ./config:/config:ro
devices:
# RPLIDAR A1M8 — stable symlink via udev
- /dev/rplidar:/dev/rplidar
# STM32 USB CDC bridge — stable symlink via udev
- /dev/stm32-bridge:/dev/stm32-bridge
# RealSense D435i — USB3 device, needs udev rules
- /dev/bus/usb:/dev/bus/usb
# I2C bus (Orin Nano i2c-7 = 40-pin header pins 3/5)
- /dev/i2c-7:/dev/i2c-7
# CSI cameras via V4L2
- /dev/video0:/dev/video0
- /dev/video2:/dev/video2
- /dev/video4:/dev/video4
- /dev/video6:/dev/video6
command: >
bash -c "
source /opt/ros/humble/setup.bash &&
@ -111,7 +97,7 @@ services:
rgb_camera.profile:=640x480x30
"
# ── STM32 bridge node (bidirectional serial↔ROS2) ─────────────────────────
# ── STM32 bridge node (bidirectional serial<->ROS2) ────────────────────────
stm32-bridge:
image: saltybot/ros2-humble:jetson-orin
build:
@ -134,7 +120,7 @@ services:
serial_port:=/dev/stm32-bridge
"
# ── 4× IMX219 CSI cameras ─────────────────────────────────────────────────
# ── 4x IMX219 CSI cameras ──────────────────────────────────────────────────
csi-cameras:
image: saltybot/ros2-humble:jetson-orin
build:
@ -144,7 +130,7 @@ services:
restart: unless-stopped
runtime: nvidia
network_mode: host
privileged: true # CSI camera access requires elevated perms
privileged: true
environment:
- ROS_DOMAIN_ID=42
- RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
@ -164,22 +150,8 @@ services:
fps:=30
"
# ── Surround vision — 360° bird's-eye view + Nav2 camera obstacle layer ─────
# ── Surround vision — 360 bird's-eye view + Nav2 camera obstacle layer ─────
saltybot-surround:
# ── rosbridge WebSocket server ────────────────────────────────────────────
# Serves ROS2 topics to the web dashboard (roslibjs) on ws://jetson:9090
#
# Topics exposed (whitelist in ros2_ws/src/saltybot_bringup/config/rosbridge_params.yaml):
# /map /scan /tf /tf_static /saltybot/imu /saltybot/balance_state
# /cmd_vel /person/* /camera/*/image_raw/compressed
#
# Also runs image_transport/republish nodes to compress 4× CSI camera streams.
# Raw 640×480 YUYV → JPEG-compressed sensor_msgs/CompressedImage.
#
# Install (already in image): apt install ros-humble-rosbridge-server
# ros-humble-image-transport-plugins
rosbridge:
image: saltybot/ros2-humble:jetson-orin
build:
context: .
@ -189,7 +161,7 @@ services:
runtime: nvidia
network_mode: host
depends_on:
- csi-cameras # IMX219 /camera/*/image_raw must be publishing
- csi-cameras
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=all
@ -201,28 +173,33 @@ services:
command: >
bash -c "
source /opt/ros/humble/setup.bash &&
source /ros2_ws/install/local_setup.bash 2>/dev/null || true &&
ros2 launch saltybot_surround surround_vision.launch.py
start_cameras:=false
camera_height:=0.30
publish_rate:=5.0
"
# ── rosbridge WebSocket server — ws://jetson:9090 ──────────────────────────
rosbridge:
image: saltybot/ros2-humble:jetson-orin
build:
context: .
dockerfile: Dockerfile
container_name: saltybot-rosbridge
restart: unless-stopped
runtime: nvidia
network_mode: host # port 9090 is directly reachable on host
network_mode: host
depends_on:
- saltybot-ros2 # needs /map, /tf published
- stm32-bridge # needs /saltybot/imu, /saltybot/balance_state
- csi-cameras # needs /camera/*/image_raw for compression
- saltybot-ros2
- stm32-bridge
- csi-cameras
environment:
- ROS_DOMAIN_ID=42
- RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
volumes:
- ./ros2_ws/src:/ros2_ws/src:rw
- ./config:/config:ro
# Port 9090 is accessible on all host interfaces via network_mode: host.
# To restrict to a specific interface, set host: "192.168.x.x" in
# rosbridge_params.yaml instead of using Docker port mappings.
command: >
bash -c "
source /opt/ros/humble/setup.bash &&
@ -230,9 +207,10 @@ services:
ros2 launch saltybot_bringup rosbridge.launch.py
"
# ── Nav2 autonomous navigation stack ────────────────────────────────────────
# -- Remote e-stop bridge (MQTT over 4G -> STM32 CDC) ----------------------
# Subscribes to saltybot/estop MQTT topic. {"kill":true} -> 'E\r\n' to STM32.
# Cellular watchdog: 5s MQTT drop in AUTO mode -> 'F\r\n' (ESTOP_CELLULAR_TIMEOUT).
remote-estop:
image: saltybot/ros2-humble:jetson-orin
build:
@ -260,6 +238,7 @@ services:
ros2 launch saltybot_bridge remote_estop.launch.py
"
# ── Nav2 autonomous navigation stack ──────────────────────────────────────
saltybot-nav2:
image: saltybot/ros2-humble:jetson-orin
build:
@ -270,7 +249,7 @@ services:
runtime: nvidia
network_mode: host
depends_on:
- saltybot-ros2 # RTAB-Map + sensors must be running first
- saltybot-ros2
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=all
@ -282,9 +261,47 @@ services:
command: >
bash -c "
source /opt/ros/humble/setup.bash &&
source /ros2_ws/install/local_setup.bash 2>/dev/null || true &&
ros2 launch saltybot_bringup nav2.launch.py
"
# ── Outdoor navigation — OSM routing + GPS waypoints + geofence ───────────
# Start only for outdoor missions. Usage:
# ros2 param set /osm_router goal_lat 37.7749
# ros2 param set /osm_router goal_lon -122.4194
# ros2 service call /outdoor/plan_route std_srvs/srv/Trigger '{}'
# ros2 service call /outdoor/start_navigation std_srvs/srv/Trigger '{}'
saltybot-outdoor:
image: saltybot/ros2-humble:jetson-orin
build:
context: .
dockerfile: Dockerfile
container_name: saltybot-outdoor
restart: unless-stopped
runtime: nvidia
network_mode: host
depends_on:
- saltybot-nav2
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=all
- RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
- ROS_DOMAIN_ID=42
volumes:
- ./ros2_ws/src:/ros2_ws/src:rw
- ./config:/config:ro
- /mnt/nvme/saltybot/osm_cache:/data/osm_cache:rw
devices:
- /dev/sim7600:/dev/sim7600
command: >
bash -c "
source /opt/ros/humble/setup.bash &&
source /ros2_ws/install/local_setup.bash 2>/dev/null || true &&
ros2 launch saltybot_outdoor outdoor_nav.launch.py
use_rtk:=false
fence_active:=true
"
volumes:
saltybot-maps:
driver: local

saltybot_bringup/launch/full_stack.launch.py

@ -0,0 +1,472 @@
"""
full_stack.launch.py: one-command full autonomous stack bringup for SaltyBot.
Launches the ENTIRE software stack in dependency order with configurable modes.
Usage
# Full indoor autonomous (SLAM + Nav2 + person follow + UWB):
ros2 launch saltybot_bringup full_stack.launch.py
# Person-follow only (no SLAM, no Nav2 — living room demo):
ros2 launch saltybot_bringup full_stack.launch.py mode:=follow
# Outdoor GPS nav + person follow:
ros2 launch saltybot_bringup full_stack.launch.py mode:=outdoor
# Indoor, no CSI cameras (RealSense + RPLIDAR only):
ros2 launch saltybot_bringup full_stack.launch.py mode:=indoor enable_csi_cameras:=false
# Simulation / rosbag replay:
ros2 launch saltybot_bringup full_stack.launch.py use_sim_time:=true enable_bridge:=false
Modes
indoor (default)
RPLIDAR + RealSense D435i sensors
RTAB-Map SLAM (RGB-D + LIDAR, fresh map each boot)
Nav2 navigation stack (planners, controllers, BT navigator)
CSI 4× IMX219 surround cameras (optional, default on)
UWB driver (2-anchor DW3000, publishes /uwb/target)
YOLOv8n person detection (TensorRT)
Person follower with UWB+camera fusion
cmd_vel bridge → STM32 (deadman + ramp + AUTONOMOUS gate)
rosbridge WebSocket (port 9090)
outdoor
RPLIDAR + RealSense D435i sensors (no SLAM)
GPS-based outdoor navigation (robot_localization EKF + Nav2)
UWB + person detection + follower
cmd_vel bridge + rosbridge
follow
RealSense D435i + RPLIDAR only (no SLAM, no Nav2)
UWB + person detection + follower
cmd_vel bridge + rosbridge
Note: obstacle avoidance disabled (no Nav2 local costmap)
Launch sequence (wall-clock delays conservative for cold start)
t= 0s robot_description (URDF + TF tree)
t= 0s STM32 bridge (serial port owner must be first)
t= 2s cmd_vel bridge (consumes /cmd_vel, needs STM32 bridge up)
t= 2s sensors (RPLIDAR + RealSense)
t= 4s UWB driver (independent serial device)
t= 4s CSI cameras (optional, independent)
t= 6s SLAM / outdoor nav (needs sensors; indoor/outdoor mode only)
t= 6s person detection (needs RealSense up)
t= 9s person follower (needs perception + UWB)
t=14s Nav2 (needs SLAM to have partial map; indoor only)
t=17s rosbridge (last; exposes all topics over WebSocket)
Safety wiring
STM32 bridge must be up before cmd_vel bridge sends any command.
cmd_vel bridge has 500ms deadman: stops robot if /cmd_vel goes silent.
STM32 AUTONOMOUS mode gate (md=2) in cmd_vel bridge: the robot stays still
until STM32 firmware is in AUTONOMOUS mode, regardless of /cmd_vel.
follow_enabled:=false disables person follower without stopping the node.
To e-stop at runtime: ros2 topic pub /saltybot/estop std_msgs/Bool '{data: true}'
Topics published by this stack
/scan RPLIDAR LaserScan
/camera/color/image_raw RealSense RGB
/camera/depth/image_rect_raw RealSense depth
/camera/imu RealSense IMU
/rtabmap/map OccupancyGrid (indoor only)
/rtabmap/odom Odometry (indoor only)
/uwb/target PoseStamped (UWB position, base_link)
/uwb/ranges UwbRangeArray (raw anchor ranges)
/person/target PoseStamped (camera position, base_link)
/person/detections Detection2DArray
/cmd_vel Twist (from follower or Nav2)
/saltybot/cmd String (to STM32)
/saltybot/imu Imu
/saltybot/balance_state String
"""
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import (
DeclareLaunchArgument,
GroupAction,
IncludeLaunchDescription,
LogInfo,
TimerAction,
)
from launch.conditions import IfCondition
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, PythonExpression
# ── Convenience: package share paths ──────────────────────────────────────────
def _pkg(name: str) -> str:
return get_package_share_directory(name)
def _launch(pkg: str, *parts: str) -> PythonLaunchDescriptionSource:
return PythonLaunchDescriptionSource(os.path.join(_pkg(pkg), *parts))
# ── Mode helpers ───────────────────────────────────────────────────────────────
def _mode_is(mode_str: str):
"""IfCondition: true when 'mode' arg == mode_str."""
return IfCondition(
PythonExpression(["'", LaunchConfiguration("mode"), f"' == '{mode_str}'"])
)
def _mode_in(*modes):
"""IfCondition: true when 'mode' arg is one of the given strings."""
mode_set = repr(set(modes))
return IfCondition(
PythonExpression(["'", LaunchConfiguration("mode"), f"' in {mode_set}"])
)
# ── Main ──────────────────────────────────────────────────────────────────────
def generate_launch_description():
# ── Launch arguments ──────────────────────────────────────────────────────
mode_arg = DeclareLaunchArgument(
"mode",
default_value="indoor",
choices=["indoor", "outdoor", "follow"],
description=(
"Stack mode — indoor: SLAM+Nav2+follow; "
"outdoor: GPS nav+follow; "
"follow: sensors+UWB+perception+follower only"
),
)
use_sim_time_arg = DeclareLaunchArgument(
"use_sim_time",
default_value="false",
description="Use /clock from rosbag/simulator instead of wall clock",
)
enable_csi_cameras_arg = DeclareLaunchArgument(
"enable_csi_cameras",
default_value="true",
description="Launch 4× IMX219 CSI surround cameras",
)
enable_uwb_arg = DeclareLaunchArgument(
"enable_uwb",
default_value="true",
description="Launch UWB driver (requires MaUWB anchors on USB)",
)
enable_perception_arg = DeclareLaunchArgument(
"enable_perception",
default_value="true",
description="Launch YOLOv8n person detection (TensorRT)",
)
enable_follower_arg = DeclareLaunchArgument(
"enable_follower",
default_value="true",
description="Launch proportional person-follower controller",
)
enable_bridge_arg = DeclareLaunchArgument(
"enable_bridge",
default_value="true",
description="Launch STM32 serial bridge + cmd_vel bridge (disable for sim/rosbag)",
)
enable_rosbridge_arg = DeclareLaunchArgument(
"enable_rosbridge",
default_value="true",
description="Launch rosbridge WebSocket server (port 9090)",
)
follow_distance_arg = DeclareLaunchArgument(
"follow_distance",
default_value="1.5",
description="Person-follower target distance (m)",
)
max_linear_vel_arg = DeclareLaunchArgument(
"max_linear_vel",
default_value="0.5",
description="Max forward velocity cap (m/s) — forwarded to both follower and cmd_vel bridge",
)
uwb_port_a_arg = DeclareLaunchArgument(
"uwb_port_a",
default_value="/dev/uwb-anchor0",
description="UWB anchor-0 serial port (port/left side)",
)
uwb_port_b_arg = DeclareLaunchArgument(
"uwb_port_b",
default_value="/dev/uwb-anchor1",
description="UWB anchor-1 serial port (starboard/right side)",
)
stm32_port_arg = DeclareLaunchArgument(
"stm32_port",
default_value="/dev/stm32-bridge",
description="STM32 USB CDC serial port",
)
# ── Shared substitution handles ───────────────────────────────────────────
mode = LaunchConfiguration("mode")
use_sim_time = LaunchConfiguration("use_sim_time")
follow_distance = LaunchConfiguration("follow_distance")
max_linear_vel = LaunchConfiguration("max_linear_vel")
uwb_port_a = LaunchConfiguration("uwb_port_a")
uwb_port_b = LaunchConfiguration("uwb_port_b")
stm32_port = LaunchConfiguration("stm32_port")
# ── t=0s Robot description (URDF + TF tree) ──────────────────────────────
robot_description = IncludeLaunchDescription(
_launch("saltybot_description", "launch", "robot_description.launch.py"),
launch_arguments={"use_sim_time": use_sim_time}.items(),
)
# ── t=0s STM32 bidirectional serial bridge ────────────────────────────────
stm32_bridge = GroupAction(
condition=IfCondition(LaunchConfiguration("enable_bridge")),
actions=[
IncludeLaunchDescription(
_launch("saltybot_bridge", "launch", "bridge.launch.py"),
launch_arguments={
"mode": "bidirectional",
"serial_port": stm32_port,
}.items(),
),
],
)
# ── t=2s cmd_vel safety bridge (depends on STM32 bridge) ────────────────
cmd_vel_bridge = TimerAction(
period=2.0,
actions=[
GroupAction(
condition=IfCondition(LaunchConfiguration("enable_bridge")),
actions=[
IncludeLaunchDescription(
_launch("saltybot_bridge", "launch", "cmd_vel_bridge.launch.py"),
launch_arguments={
"max_linear_vel": max_linear_vel,
}.items(),
),
],
),
],
)
# ── t=2s Sensors: RPLIDAR + RealSense D435i ──────────────────────────────
sensors = TimerAction(
period=2.0,
actions=[
IncludeLaunchDescription(
_launch("saltybot_bringup", "launch", "sensors.launch.py"),
),
],
)
# ── t=4s Optional: 4× IMX219 CSI surround cameras ───────────────────────
csi_cameras = TimerAction(
period=4.0,
actions=[
GroupAction(
condition=IfCondition(LaunchConfiguration("enable_csi_cameras")),
actions=[
IncludeLaunchDescription(
_launch("saltybot_cameras", "launch", "csi_cameras.launch.py"),
),
],
),
],
)
# ── t=4s UWB driver (independent serial — can start early) ──────────────
uwb_driver = TimerAction(
period=4.0,
actions=[
GroupAction(
condition=IfCondition(LaunchConfiguration("enable_uwb")),
actions=[
IncludeLaunchDescription(
_launch("saltybot_uwb", "launch", "uwb.launch.py"),
launch_arguments={
"port_a": uwb_port_a,
"port_b": uwb_port_b,
}.items(),
),
],
),
],
)
# ── t=6s SLAM — RTAB-Map (indoor only; needs sensors up for ~4s) ─────────
slam = TimerAction(
period=6.0,
actions=[
GroupAction(
condition=_mode_is("indoor"),
actions=[
LogInfo(msg="[full_stack] Starting RTAB-Map SLAM (indoor mode)"),
IncludeLaunchDescription(
_launch("saltybot_bringup", "launch", "slam_rtabmap.launch.py"),
launch_arguments={
"use_sim_time": use_sim_time,
}.items(),
),
],
),
],
)
# ── t=6s Outdoor navigation (outdoor only; no SLAM) ─────────────────────
outdoor_nav = TimerAction(
period=6.0,
actions=[
GroupAction(
condition=_mode_is("outdoor"),
actions=[
LogInfo(msg="[full_stack] Starting outdoor GPS navigation"),
IncludeLaunchDescription(
_launch("saltybot_outdoor", "launch", "outdoor_nav.launch.py"),
launch_arguments={
"use_sim_time": use_sim_time,
}.items(),
),
],
),
],
)
# ── t=6s Person detection (needs RealSense up for ~4s) ──────────────────
perception = TimerAction(
period=6.0,
actions=[
GroupAction(
condition=IfCondition(LaunchConfiguration("enable_perception")),
actions=[
LogInfo(msg="[full_stack] Starting YOLOv8n person detection"),
IncludeLaunchDescription(
_launch("saltybot_perception", "launch", "person_detection.launch.py"),
launch_arguments={
"use_sim_time": use_sim_time,
}.items(),
),
],
),
],
)
# ── t=9s Person follower (needs perception + UWB; ~3s after both start) ─
follower = TimerAction(
period=9.0,
actions=[
GroupAction(
condition=IfCondition(LaunchConfiguration("enable_follower")),
actions=[
LogInfo(msg="[full_stack] Starting person follower"),
IncludeLaunchDescription(
_launch("saltybot_follower", "launch", "person_follower.launch.py"),
launch_arguments={
"follow_distance": follow_distance,
"max_linear_vel": max_linear_vel,
}.items(),
),
],
),
],
)
# ── t=14s Nav2 (indoor only; SLAM needs ~8s to build initial map) ────────
nav2 = TimerAction(
period=14.0,
actions=[
GroupAction(
condition=_mode_is("indoor"),
actions=[
LogInfo(msg="[full_stack] Starting Nav2 navigation stack"),
IncludeLaunchDescription(
_launch("saltybot_bringup", "launch", "nav2.launch.py"),
),
],
),
],
)
# ── t=17s rosbridge WebSocket (last — all topics must be alive) ──────────
rosbridge = TimerAction(
period=17.0,
actions=[
GroupAction(
condition=IfCondition(LaunchConfiguration("enable_rosbridge")),
actions=[
LogInfo(msg="[full_stack] Starting rosbridge (ws://0.0.0.0:9090)"),
IncludeLaunchDescription(
_launch("saltybot_bringup", "launch", "rosbridge.launch.py"),
),
],
),
],
)
# ── Startup banner ────────────────────────────────────────────────────────
banner = LogInfo(
msg=["[full_stack] SaltyBot bringup starting — mode=", mode,
" sim_time=", use_sim_time]
)
# ── Assemble ──────────────────────────────────────────────────────────────
return LaunchDescription([
# Arguments
mode_arg,
use_sim_time_arg,
enable_csi_cameras_arg,
enable_uwb_arg,
enable_perception_arg,
enable_follower_arg,
enable_bridge_arg,
enable_rosbridge_arg,
follow_distance_arg,
max_linear_vel_arg,
uwb_port_a_arg,
uwb_port_b_arg,
stm32_port_arg,
# Startup banner
banner,
# t=0s
robot_description,
stm32_bridge,
# t=2s
sensors,
cmd_vel_bridge,
# t=4s
csi_cameras,
uwb_driver,
# t=6s
slam,
outdoor_nav,
perception,
# t=9s
follower,
# t=14s
nav2,
# t=17s
rosbridge,
])
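A note on the _mode_is / _mode_in helpers defined at the top of this file: they rely on repr(set(...)) producing a valid Python set literal that launch's PythonExpression can evaluate at launch time. A standalone sketch of what ends up being evaluated (no ROS needed; evaluated() is a hypothetical stand-in for the substitution machinery):

```python
# Standalone sketch of what PythonExpression evaluates for _mode_in:
# the pieces concatenate to e.g. "'indoor' in {'outdoor', 'indoor'}".
modes = ("indoor", "outdoor")
mode_set = repr(set(modes))  # a valid Python set literal

def evaluated(mode_value: str) -> bool:
    # PythonExpression joins its parts into one string, then eval()s it.
    expr = "'" + mode_value + "' in " + mode_set
    return eval(expr)

print(evaluated("indoor"))  # True: launched nodes gated on indoor fire
print(evaluated("follow"))  # False: mode not in the set
```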

@@ -3,8 +3,8 @@
<package format="3">
<name>saltybot_bringup</name>
<version>0.1.0</version>
<description>SaltyBot launch files — RealSense D435i + RPLIDAR A1M8 sensor bringup and SLAM integration</description>
<maintainer email="sl-perception@saltylab.local">sl-perception</maintainer>
<description>SaltyBot launch files — full autonomous stack bringup (sensors, SLAM, Nav2, UWB, perception, follower)</description>
<maintainer email="sl-jetson@saltylab.local">sl-jetson</maintainer>
<license>MIT</license>
<exec_depend>rplidar_ros</exec_depend>
@@ -18,6 +18,14 @@
<exec_depend>rosbridge_server</exec_depend>
<exec_depend>image_transport</exec_depend>
<exec_depend>image_transport_plugins</exec_depend>
<!-- SaltyBot packages included by full_stack.launch.py -->
<exec_depend>saltybot_bridge</exec_depend>
<exec_depend>saltybot_cameras</exec_depend>
<exec_depend>saltybot_description</exec_depend>
<exec_depend>saltybot_follower</exec_depend>
<exec_depend>saltybot_outdoor</exec_depend>
<exec_depend>saltybot_perception</exec_depend>
<exec_depend>saltybot_uwb</exec_depend>
<buildtool_depend>ament_python</buildtool_depend>

@@ -0,0 +1,92 @@
# ekf_outdoor.yaml — robot_localization EKF config for outdoor GPS navigation
#
# Two EKF instances + navsat_transform_node:
#
# ekf_filter_node_odom — local EKF: wheel odom + IMU → /odometry/local (odom frame)
# ekf_filter_node_map — global EKF: /odometry/local + GPS odometry → /odometry/global (map frame)
# navsat_transform_node — converts /gps/fix + /imu/data → /odometry/gps (map-frame odometry)
#
# Frame hierarchy:
# map ← (global EKF) ← odom ← (local EKF) ← base_link
#
# Hardware:
# IMU: RealSense D435i BMI055 → /imu/data
# GPS: SIM7600X cellular → /gps/fix (±2.5 m CEP)
# Odom: STM32 wheel encoders → /odom
# RTK: ZED-F9P (optional) → /gps/fix (±2 cm CEP when use_rtk: true)
# ── Local EKF: fuses wheel odometry + IMU in odom frame ──────────────────────
ekf_filter_node_odom:
ros__parameters:
frequency: 30.0
sensor_timeout: 0.1
two_d_mode: true
publish_tf: true
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom
odom0: /odom
odom0_config: [true, true, false,
false, false, false,
true, true, false,
false, false, true,
false, false, false]
odom0_differential: false
odom0_relative: false
odom0_queue_size: 5
imu0: /imu/data
imu0_config: [false, false, false,
true, true, true,
false, false, false,
true, true, true,
true, true, false]
imu0_differential: false
imu0_relative: false
imu0_remove_gravitational_acceleration: true
imu0_queue_size: 10
# ── Global EKF: fuses local odometry + GPS odometry in map frame ──────────────
ekf_filter_node_map:
ros__parameters:
frequency: 15.0
sensor_timeout: 0.5
two_d_mode: true
publish_tf: true
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: map
odom0: /odometry/local
odom0_config: [false, false, false,
false, false, false,
true, true, false,
false, false, true,
false, false, false]
odom0_differential: false
odom0_queue_size: 5
odom1: /odometry/gps
odom1_config: [true, true, false,
false, false, false,
false, false, false,
false, false, false,
false, false, false]
odom1_differential: false
odom1_queue_size: 5
# ── navsat_transform_node — GPS NavSatFix → map-frame Odometry ────────────────
navsat_transform:
ros__parameters:
frequency: 10.0
delay: 3.0
magnetic_declination_radians: 0.0
yaw_offset: 0.0
zero_altitude: true
broadcast_utm_transform: false
publish_filtered_gps: false
use_odometry_yaw: false
wait_for_datum: false
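Each 15-element *_config vector above selects which state components the EKF fuses from that sensor, in robot_localization's fixed state ordering. A small decoder sketch (illustrative helper, not part of robot_localization) makes the vectors readable:

```python
# robot_localization's *_config vectors pick state components in this
# fixed order: position, orientation, linear vel, angular vel, linear accel.
STATE = ["x", "y", "z", "roll", "pitch", "yaw",
         "vx", "vy", "vz", "vroll", "vpitch", "vyaw",
         "ax", "ay", "az"]

def fused_components(config):
    """Names of the state components a sensor's config vector enables."""
    return [name for name, used in zip(STATE, config) if used]

# odom0 of ekf_filter_node_odom above: wheel odometry feeds x, y, vx, vy, vyaw
odom0 = [True, True, False,  False, False, False,
         True, True, False,  False, False, True,
         False, False, False]
print(fused_components(odom0))  # ['x', 'y', 'vx', 'vy', 'vyaw']
```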

@@ -0,0 +1,35 @@
# geofence_vertices.yaml — GPS polygon safety boundary for outdoor navigation
#
# Format: fence_vertices is a flat list [lat0, lon0, lat1, lon1, ..., latN, lonN]
# Minimum 3 vertex pairs (6 values). Leave empty [] to disable geofence.
#
# ── How to set your fence ────────────────────────────────────────────────────
# 1. Go to https://geojson.io and mark your safe operating area corners
# 2. Replace the example values below with real coordinates
# 3. Reload at runtime: ros2 service call /outdoor/reload_fence std_srvs/srv/Trigger '{}'
#
# ── Unit conversions ─────────────────────────────────────────────────────────
# d_lat_deg = d_metres / 111320
# d_lon_deg = d_metres / (111320 * cos(lat_radians))
#
geofence:
ros__parameters:
# Empty = geofence inactive (WARNING: no safety boundary without a fence)
fence_vertices: []
check_rate_hz: 2.0
stop_on_exit: true
map_frame: "map"
# ── EXAMPLE (uncomment and edit with real coordinates) ───────────────────────
# geofence:
# ros__parameters:
# # SW → SE → NE → NW corners of a ~100m × 80m rectangle
# fence_vertices:
# - 37.774900 # SW lat
# - -122.419400 # SW lon
# - 37.774900 # SE lat
# - -122.418500 # SE lon
# - 37.775800 # NE lat
# - -122.418500 # NE lon
# - 37.775800 # NW lat
# - -122.419400 # NW lon
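Applying the unit conversions above mechanically, a rectangular fence can be generated from a centre coordinate and metre dimensions. A minimal sketch (rect_fence is a hypothetical helper; the coordinates are the example values, not a real boundary):

```python
import math

def rect_fence(center_lat, center_lon, width_m, height_m):
    """[SW, SE, NE, NW] (lat, lon) corners of an axis-aligned rectangle,
    using the flat-earth conversions from the comment above:
      d_lat_deg = d_metres / 111320
      d_lon_deg = d_metres / (111320 * cos(lat_radians))
    """
    d_lat = (height_m / 2.0) / 111320.0
    d_lon = (width_m / 2.0) / (111320.0 * math.cos(math.radians(center_lat)))
    return [
        (center_lat - d_lat, center_lon - d_lon),  # SW
        (center_lat - d_lat, center_lon + d_lon),  # SE
        (center_lat + d_lat, center_lon + d_lon),  # NE
        (center_lat + d_lat, center_lon - d_lon),  # NW
    ]

# Flatten into the fence_vertices format (lat0, lon0, lat1, lon1, ...):
verts = [v for corner in rect_fence(37.7753, -122.4189, 100.0, 80.0)
         for v in corner]
```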

@@ -0,0 +1,98 @@
# outdoor_nav_params.yaml — Outdoor navigation configuration for SaltyBot
#
# Hardware: Jetson Orin Nano Super / SIM7600X cellular GPS (±2.5 m CEP)
# RTK upgrade: u-blox ZED-F9P → ±2 cm CEP (set use_rtk: true when installed)
#
# ── GPS quality notes ────────────────────────────────────────────────────────
# SIM7600X reports STATUS_FIX (0) in open sky, STATUS_NO_FIX (-1) indoors.
# RTK ZED-F9P reports STATUS_GBAS_FIX (2) when corrections received.
# Goal tolerance automatically tightens from 2.0m (cellular) to 0.3m (RTK).
#
# ── RTK upgrade procedure ────────────────────────────────────────────────────
# 1. Connect ZED-F9P to /dev/ttyTHS0 (Orin 40-pin UART, pins 8/10)
# 2. Set NTRIP credentials in rtk_gps.launch.py
# 3. Run: ros2 launch saltybot_outdoor rtk_gps.launch.py
# 4. Verify: ros2 topic echo /gps/fix | grep status
# → status.status == 2 (STATUS_GBAS_FIX) = RTK fixed
#
# References:
# SparkFun RTK Express: https://docs.sparkfun.com/SparkFun_RTK_Everywhere_Firmware/
# NTRIP caster list: https://rtk2go.com/sample-map/
# ─────────────────────────────────────────────────────────────────────────────
osm_router:
ros__parameters:
overpass_url: "https://overpass-api.de/api/interpreter"
query_radius_m: 1500.0 # fetch OSM ways within this radius of midpoint
simplify_eps_m: 5.0 # Douglas-Peucker simplification tolerance (metres)
cache_dir: "/data/osm_cache"
use_cache: true
map_frame: "map"
# Default goal — override at launch or via parameter set:
# ros2 param set /osm_router goal_lat 48.8566
# ros2 param set /osm_router goal_lon 2.3522
goal_lat: 0.0
goal_lon: 0.0
gps_waypoint_follower:
ros__parameters:
map_frame: "map"
min_fix_status: 0 # 0=STATUS_FIX; -1=NO_FIX; 2=RTK_FIXED
# Goal tolerance for SIM7600X (±2.5m CEP) — auto-tightens with RTK
goal_tolerance_xy: 2.0 # metres (outdoor vs 0.25m indoor)
goal_tolerance_yaw: 3.14159 # radians — don't constrain heading outdoors
nav_timeout_s: 300.0 # 5 min max per navigate_through_poses call
waypoint_spacing_m: 5.0 # skip waypoints closer than this
geofence:
ros__parameters:
check_rate_hz: 2.0
stop_on_exit: true
# fence_vertices: flat list of lat,lon pairs forming the boundary polygon.
# Leave empty to disable geofence (WARNING: no safety boundary).
# Example — 100m × 100m test square (replace with real boundary):
fence_vertices: []
map_frame: "map"
# ── Outdoor Nav2 parameter overrides ────────────────────────────────────────
# These supplement jetson/config/nav2_params.yaml for outdoor operation.
# Apply by passing this file as an additional params_file to nav2.launch.py.
#
# Key differences from indoor config:
# - No static_layer (no SLAM map — GPS provides global position only)
# - Wider inflation radius (outdoor = wider obstacles / kerbs)
# - Higher max velocity (clear sidewalks vs cluttered indoor)
# - GPS costmap uses wider observation sources
# ─────────────────────────────────────────────────────────────────────────────
# Outdoor controller overrides
controller_server:
ros__parameters:
FollowPath:
max_vel_x: 1.5 # m/s (indoor: 1.0)
max_speed_xy: 1.5
sim_time: 2.0 # seconds lookahead (longer for faster speeds)
# Larger goal tolerance at waypoints (GPS ±2.5m)
general_goal_checker:
xy_goal_tolerance: 2.0 # metres (indoor: 0.25)
yaw_goal_tolerance: 3.14 # don't constrain heading at GPS waypoints
# Outdoor local costmap overrides
local_costmap:
local_costmap:
ros__parameters:
width: 5 # metres (wider window for outdoor speeds)
height: 5
inflation_layer:
inflation_radius: 0.50 # metres (indoor: 0.55, similar but explicit)
# Outdoor global costmap overrides
# Key: remove static_layer (no SLAM map), keep obstacle layers
global_costmap:
global_costmap:
ros__parameters:
plugins: ["obstacle_layer", "inflation_layer"] # no static_layer outdoors
inflation_layer:
inflation_radius: 0.50
obstacle_layer:
observation_sources: scan surround_cameras
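The goal-tolerance auto-tightening described in the header comments reduces to choosing a tolerance from the current NavSatStatus value. A minimal sketch under that assumption (goal_tolerance_xy is a hypothetical helper; constants mirror this config):

```python
# sensor_msgs/NavSatStatus constants and the tolerances from this config.
STATUS_NO_FIX, STATUS_FIX, STATUS_SBAS_FIX, STATUS_GBAS_FIX = -1, 0, 1, 2
TOLERANCE_STANDARD_GPS = 2.0   # SIM7600X cellular, ±2.5 m CEP
TOLERANCE_RTK = 0.30           # ZED-F9P RTK fixed, ±2 cm CEP

def goal_tolerance_xy(fix_status: int) -> float:
    """Pick an XY goal tolerance from fix quality (hypothetical helper)."""
    if fix_status < STATUS_FIX:
        raise ValueError("no GPS fix: refuse to navigate")
    # RTK fixed → tight tolerance; standard or SBAS fix → wide tolerance.
    return TOLERANCE_RTK if fix_status >= STATUS_GBAS_FIX else TOLERANCE_STANDARD_GPS
```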

@@ -0,0 +1,122 @@
"""
outdoor_nav.launch.py: Full outdoor navigation stack for SaltyBot
Services started:
osm_router_node: Overpass API OSM routing → /outdoor/waypoints
gps_waypoint_follower: /outdoor/waypoints → Nav2 navigate_through_poses
geofence_node: GPS polygon safety boundary
navsat_transform_node: robot_localization GPS→map projection (EKF)
ekf_filter_node_odom: local EKF (odom frame)
ekf_filter_node_map: global EKF (map frame, GPS-fused)
Nav2 is NOT started here; it must already be running (saltybot-nav2 container).
Outdoor mode: uses GPS for global localisation instead of RTAB-Map.
Indoor mode: use slam_rtabmap.launch.py instead (RTAB-Map provides map→odom).
Arguments:
use_rtk (default false): set true when ZED-F9P RTK module installed
fence_active (default true): enable geofence safety boundary
goal_lat (default 0.0): navigation goal latitude
goal_lon (default 0.0): navigation goal longitude
"""
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
def generate_launch_description():
pkg_dir = get_package_share_directory('saltybot_outdoor')
params_file = os.path.join(pkg_dir, 'config', 'outdoor_nav_params.yaml')
ekf_file = os.path.join(pkg_dir, 'config', 'ekf_outdoor.yaml')
geofence_file = os.path.join(pkg_dir, 'config', 'geofence_vertices.yaml')
args = [
DeclareLaunchArgument('use_rtk', default_value='false'),
DeclareLaunchArgument('fence_active', default_value='true'),
DeclareLaunchArgument('goal_lat', default_value='0.0'),
DeclareLaunchArgument('goal_lon', default_value='0.0'),
]
# ── robot_localization: local EKF (odom frame) ───────────────────────────
ekf_local = Node(
package='robot_localization',
executable='ekf_node',
name='ekf_filter_node_odom',
output='screen',
parameters=[ekf_file, {'use_sim_time': False}],
remappings=[('odometry/filtered', '/odometry/local')],
)
# ── robot_localization: global EKF (map frame, GPS-fused) ────────────────
ekf_global = Node(
package='robot_localization',
executable='ekf_node',
name='ekf_filter_node_map',
output='screen',
parameters=[ekf_file, {'use_sim_time': False}],
remappings=[('odometry/filtered', '/odometry/global')],
)
# ── navsat_transform_node — GPS (NavSatFix) → map-frame Odometry ─────────
navsat = Node(
package='robot_localization',
executable='navsat_transform_node',
name='navsat_transform',
output='screen',
parameters=[ekf_file, {'use_sim_time': False}],
remappings=[
('imu/data', '/imu/data'),
('gps/fix', '/gps/fix'),
('odometry/filtered', '/odometry/global'),
('odometry/gps', '/odometry/gps'),
],
)
# ── OSM router ───────────────────────────────────────────────────────────
osm_router = Node(
package='saltybot_outdoor',
executable='osm_router_node',
name='osm_router',
output='screen',
parameters=[
params_file,
{
'goal_lat': LaunchConfiguration('goal_lat'),
'goal_lon': LaunchConfiguration('goal_lon'),
},
],
)
# ── GPS waypoint follower ─────────────────────────────────────────────────
gps_follower = Node(
package='saltybot_outdoor',
executable='gps_waypoint_follower',
name='gps_waypoint_follower',
output='screen',
parameters=[params_file],
)
# ── Geofence ──────────────────────────────────────────────────────────────
geofence = Node(
package='saltybot_outdoor',
executable='geofence_node',
name='geofence',
output='screen',
parameters=[geofence_file],
)
return LaunchDescription([
*args,
ekf_local,
ekf_global,
navsat,
osm_router,
gps_follower,
geofence,
])

@@ -0,0 +1,30 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>saltybot_outdoor</name>
<version>0.1.0</version>
<description>Outdoor navigation for SaltyBot — OSM routing, GPS waypoint following, geofence</description>
<maintainer email="seb@saltylab.io">seb</maintainer>
<license>MIT</license>
<buildtool_depend>ament_python</buildtool_depend>
<depend>rclpy</depend>
<depend>sensor_msgs</depend>
<depend>geometry_msgs</depend>
<depend>nav_msgs</depend>
<depend>std_msgs</depend>
<depend>std_srvs</depend>
<depend>nav2_msgs</depend>
<exec_depend>robot_localization</exec_depend>
<exec_depend>nav2_bringup</exec_depend>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>

@@ -0,0 +1,197 @@
"""
geofence_node: GPS polygon safety boundary for outdoor navigation
Monitors /gps/fix and checks the robot's position against a configurable
polygon of GPS vertices. If the robot exits the fence OR a requested waypoint
lies outside the fence, this node:
1. Publishes /outdoor/abort = True → gps_waypoint_follower cancels Nav2
2. Publishes a zero twist on /cmd_vel (emergency stop)
3. Publishes /outdoor/fence_status with violation details
The fence polygon is defined in geofence_vertices.yaml as a list of
{lat, lon} pairs forming a closed polygon (first = last not required).
Point-in-polygon test uses the ray-casting algorithm.
Topics subscribed:
/gps/fix sensor_msgs/NavSatFix
/outdoor/waypoints geometry_msgs/PoseArray (pre-navigation check)
Topics published:
/outdoor/abort std_msgs/Bool (True when fence violated)
/outdoor/fence_status std_msgs/String ('inside' | 'outside:<lat>,<lon>')
/cmd_vel geometry_msgs/Twist (zero-velocity on violation)
Services:
/outdoor/reload_fence std_srvs/Trigger (re-read geofence_vertices.yaml)
"""
import math
from typing import Optional
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy
from sensor_msgs.msg import NavSatFix
from geometry_msgs.msg import PoseArray, Twist
from std_msgs.msg import Bool, String
from std_srvs.srv import Trigger
def _point_in_polygon(lat: float, lon: float, polygon: list) -> bool:
"""Ray-casting point-in-polygon test. polygon = [(lat, lon), ...]."""
n = len(polygon)
inside = False
j = n - 1
for i in range(n):
xi, yi = polygon[i]
xj, yj = polygon[j]
if ((yi > lon) != (yj > lon)) and (lat < (xj - xi) * (lon - yi) / (yj - yi) + xi):
inside = not inside
j = i
return inside
def _polygon_area_m2(polygon: list) -> float:
"""Approximate polygon area in m² using shoelace on flat-earth coords."""
if len(polygon) < 3:
return 0.0
lat0, lon0 = polygon[0]
cos0 = math.cos(math.radians(lat0))
pts = [
((lat - lat0) * 111_320.0, (lon - lon0) * 111_320.0 * cos0)
for lat, lon in polygon
]
n = len(pts)
area = 0.0
for i in range(n):
j = (i + 1) % n
area += pts[i][0] * pts[j][1]
area -= pts[j][0] * pts[i][1]
return abs(area) / 2.0
class GeofenceNode(Node):
def __init__(self):
super().__init__('geofence')
self.declare_parameter('fence_vertices', []) # flat list: lat0,lon0,lat1,lon1,...
self.declare_parameter('check_rate_hz', 2.0)
self.declare_parameter('stop_on_exit', True) # publish zero /cmd_vel
self.declare_parameter('map_frame', 'map')
self._polygon: list[tuple[float, float]] = []
self._fix: Optional[NavSatFix] = None
self._violated = False
self._load_fence()
# ── Subscriptions ────────────────────────────────────────────────────
self.create_subscription(NavSatFix, '/gps/fix', self._fix_cb, 10)
self.create_subscription(PoseArray, '/outdoor/waypoints', self._wps_cb, 10)
# ── Publishers ───────────────────────────────────────────────────────
self._abort_pub = self.create_publisher(Bool, '/outdoor/abort', 10)
self._status_pub = self.create_publisher(String, '/outdoor/fence_status', 10)
self._vel_pub = self.create_publisher(Twist, '/cmd_vel', 10)
# ── Service ──────────────────────────────────────────────────────────
self.create_service(Trigger, '/outdoor/reload_fence', self._reload_cb)
rate = self.get_parameter('check_rate_hz').value
self.create_timer(1.0 / rate, self._check)
area = _polygon_area_m2(self._polygon) if self._polygon else 0.0
self.get_logger().info(
f'geofence: {len(self._polygon)}-vertex polygon, area≈{area:.0f} m²'
)
# ── Callbacks ─────────────────────────────────────────────────────────────
def _fix_cb(self, msg: NavSatFix) -> None:
self._fix = msg
def _wps_cb(self, msg: PoseArray) -> None:
"""Pre-check: warn if any map-frame waypoint is suspiciously far out."""
# In V1 we can't easily reverse-project map→GPS here without TF/EKF,
# so we just log the waypoint count and rely on the robot fix check.
self.get_logger().info(
f'geofence: {len(msg.poses)} waypoints received (pre-flight GPS check pending)'
)
def _reload_cb(self, _req, response):
self._load_fence()
response.success = True
response.message = f'Fence reloaded: {len(self._polygon)} vertices'
return response
# ── Fence check ───────────────────────────────────────────────────────────
def _check(self) -> None:
if not self._polygon:
self._pub_status('no_fence_configured')
return
if self._fix is None:
self._pub_status('no_gps')
return
if self._fix.status.status < 0:
return # no fix — don't trigger on bad data
lat, lon = self._fix.latitude, self._fix.longitude
inside = _point_in_polygon(lat, lon, self._polygon)
if inside:
self._violated = False
self._pub_status('inside')
else:
if not self._violated:
self.get_logger().warn(
f'GEOFENCE VIOLATION: robot at ({lat:.6f}, {lon:.6f}) is outside fence'
)
self._violated = True
self._pub_status(f'outside:{lat:.6f},{lon:.6f}')
# Publish abort
abort = Bool()
abort.data = True
self._abort_pub.publish(abort)
# Emergency stop
if self.get_parameter('stop_on_exit').value:
self._vel_pub.publish(Twist()) # zero twist = stop
# ── Helpers ───────────────────────────────────────────────────────────────
def _load_fence(self) -> None:
"""Parse fence_vertices parameter: flat [lat0, lon0, lat1, lon1, ...]."""
raw = self.get_parameter('fence_vertices').value
if not raw or len(raw) < 6:
self._polygon = []
self.get_logger().warn('geofence: no valid fence_vertices — geofence inactive')
return
pairs = list(raw)
self._polygon = [(pairs[i], pairs[i + 1]) for i in range(0, len(pairs) - 1, 2)]
self.get_logger().info(f'geofence: loaded {len(self._polygon)}-vertex polygon')
def _pub_status(self, s: str) -> None:
msg = String()
msg.data = s
self._status_pub.publish(msg)
def main(args=None):
rclpy.init(args=args)
node = GeofenceNode()
try:
rclpy.spin(node)
except KeyboardInterrupt:
pass
finally:
node.destroy_node()
rclpy.try_shutdown()
if __name__ == '__main__':
main()
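As an offline sanity check, the ray-casting test above can be exercised against the example rectangle from geofence_vertices.yaml. The helper is re-stated here so the snippet runs without ROS:

```python
def point_in_polygon(lat, lon, polygon):
    """Same ray-casting test as _point_in_polygon in the node above."""
    inside = False
    j = len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        # Toggle on each edge the eastward ray from (lat, lon) crosses.
        if (yi > lon) != (yj > lon) and lat < (xj - xi) * (lon - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Example fence from geofence_vertices.yaml (SW, SE, NE, NW corners):
fence = [(37.7749, -122.4194), (37.7749, -122.4185),
         (37.7758, -122.4185), (37.7758, -122.4194)]

print(point_in_polygon(37.77535, -122.41895, fence))  # True  (fence centre)
print(point_in_polygon(37.7760, -122.4194, fence))    # False (north of fence)
```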

@@ -0,0 +1,287 @@
"""
gps_waypoint_follower_node: GPS-to-Nav2 waypoint bridge for outdoor navigation
Subscribes to the OSM router's waypoint output (/outdoor/waypoints) and the
robot's GPS fix (/gps/fix), converts each waypoint to map-frame via
robot_localization's navsat_transform_node, then sends the full list to Nav2's
navigate_through_poses action server.
GPS quality handling:
SIM7600X cellular GPS: ±2.5 m CEP → uses wider goal tolerances (2 m)
RTK upgrade (ZED-F9P): ±0.02 m CEP → tolerances tighten automatically
GPS status thresholds (sensor_msgs/NavSatStatus):
STATUS_NO_FIX (-1): reject all goals, log error
STATUS_FIX (0): standard GPS, use but warn
STATUS_SBAS_FIX (1): SBAS-corrected, OK
STATUS_GBAS_FIX (2): RTK fixed, best quality, tighten tolerance
Topics subscribed:
/gps/fix sensor_msgs/NavSatFix
/outdoor/waypoints geometry_msgs/PoseArray (from osm_router_node, map frame)
/outdoor/abort std_msgs/Bool (external stop signal)
Topics published:
/outdoor/gps_status std_msgs/String current GPS quality description
/outdoor/nav_status std_msgs/String navigation state machine status
Actions called:
/navigate_through_poses nav2_msgs/action/NavigateThroughPoses
Services:
/outdoor/start_navigation std_srvs/Trigger begin executing /outdoor/waypoints
/outdoor/stop_navigation std_srvs/Trigger cancel in-progress navigation
"""
import math
from typing import Optional
import rclpy
from rclpy.node import Node
from rclpy.action import ActionClient
from rclpy.callback_groups import ReentrantCallbackGroup
from rclpy.executors import MultiThreadedExecutor
from sensor_msgs.msg import NavSatFix, NavSatStatus
from geometry_msgs.msg import PoseArray, PoseStamped
from std_msgs.msg import String, Bool
from std_srvs.srv import Trigger
from nav2_msgs.action import NavigateThroughPoses
# GPS fix quality thresholds
_MIN_FIX_STATUS = NavSatStatus.STATUS_FIX # reject STATUS_NO_FIX (-1)
_RTK_FIX_STATUS = NavSatStatus.STATUS_GBAS_FIX
# Goal tolerance (metres) by GPS quality
_TOLERANCE_STANDARD_GPS = 2.0 # SIM7600X ±2.5 m CEP
_TOLERANCE_RTK = 0.30 # ZED-F9P ±2 cm RTK
class GpsWaypointFollowerNode(Node):
def __init__(self):
super().__init__('gps_waypoint_follower')
self.declare_parameter('map_frame', 'map')
self.declare_parameter('min_fix_status', _MIN_FIX_STATUS)
self.declare_parameter('goal_tolerance_xy', _TOLERANCE_STANDARD_GPS)
self.declare_parameter('goal_tolerance_yaw', 3.14) # don't care about heading
self.declare_parameter('nav_timeout_s', 300.0) # 5 min per waypoint batch
self.declare_parameter('waypoint_spacing_m', 5.0) # skip closer waypoints
self._fix: Optional[NavSatFix] = None
self._waypoints: Optional[PoseArray] = None
self._nav_active = False
self._cb_group = ReentrantCallbackGroup()
# ── Subscriptions ────────────────────────────────────────────────────
self.create_subscription(NavSatFix, '/gps/fix', self._fix_cb, 10)
self.create_subscription(PoseArray, '/outdoor/waypoints', self._waypoints_cb, 10)
self.create_subscription(Bool, '/outdoor/abort', self._abort_cb, 10)
# ── Publishers ───────────────────────────────────────────────────────
self._gps_status_pub = self.create_publisher(String, '/outdoor/gps_status', 10)
self._nav_status_pub = self.create_publisher(String, '/outdoor/nav_status', 10)
# ── Services ─────────────────────────────────────────────────────────
self.create_service(Trigger, '/outdoor/start_navigation', self._start_cb,
callback_group=self._cb_group)
self.create_service(Trigger, '/outdoor/stop_navigation', self._stop_cb,
callback_group=self._cb_group)
# ── Nav2 action client ───────────────────────────────────────────────
self._nav_client = ActionClient(
self, NavigateThroughPoses, '/navigate_through_poses',
callback_group=self._cb_group,
)
# ── Status timer ─────────────────────────────────────────────────────
self.create_timer(2.0, self._publish_gps_status)
self._nav_goal_handle = None
self._publish_nav_status('idle')
self.get_logger().info('gps_waypoint_follower: ready. Call /outdoor/start_navigation.')
# ── Callbacks ─────────────────────────────────────────────────────────────
def _fix_cb(self, msg: NavSatFix) -> None:
self._fix = msg
def _waypoints_cb(self, msg: PoseArray) -> None:
self._waypoints = msg
self.get_logger().info(f'Received {len(msg.poses)} outdoor waypoints')
def _abort_cb(self, msg: Bool) -> None:
if msg.data:
self._cancel_navigation('abort signal received')
def _start_cb(self, _req, response):
if self._nav_active:
response.success = False
response.message = 'Navigation already active'
return response
if self._fix is None:
response.success = False
response.message = 'No GPS fix'
return response
fix_status = self._fix.status.status
min_status = self.get_parameter('min_fix_status').value
if fix_status < min_status:
response.success = False
response.message = f'GPS quality too low: status={fix_status}'
return response
if self._waypoints is None or len(self._waypoints.poses) == 0:
response.success = False
response.message = 'No waypoints loaded — call /outdoor/plan_route first'
return response
# Send waypoints to Nav2 asynchronously
self.create_timer(0.1, self._send_nav_goal, callback_group=self._cb_group)
response.success = True
response.message = f'Starting navigation with {len(self._waypoints.poses)} waypoints'
return response
def _stop_cb(self, _req, response):
self._cancel_navigation('stop requested')
response.success = True
response.message = 'Navigation cancelled'
return response
# ── Navigation ────────────────────────────────────────────────────────────
def _send_nav_goal(self) -> None:
"""One-shot timer callback: send waypoints to Nav2."""
# Cancel the timer that called us
self.destroy_timer(list(self.timers)[-1])
if not self._nav_client.wait_for_server(timeout_sec=5.0):
self.get_logger().error('Nav2 navigate_through_poses action not available')
self._publish_nav_status('error:nav2_unavailable')
return
# Adjust goal tolerance by GPS quality
fix_status = self._fix.status.status if self._fix else _MIN_FIX_STATUS
if fix_status >= _RTK_FIX_STATUS:
tol = _TOLERANCE_RTK
self.get_logger().info('RTK fix detected — using tight tolerance (0.30 m)')
else:
tol = self.get_parameter('goal_tolerance_xy').value
self.get_logger().info(f'Standard GPS — tolerance = {tol:.1f} m')
# Build goal: PoseStamped list from PoseArray
goal = NavigateThroughPoses.Goal()
frame = self.get_parameter('map_frame').value
now = self.get_clock().now().to_msg()
poses = self._filter_waypoints(self._waypoints.poses)
for pose in poses:
ps = PoseStamped()
ps.header.stamp = now
ps.header.frame_id = frame
ps.pose = pose
goal.poses.append(ps)
self.get_logger().info(f'Sending {len(goal.poses)} waypoints to Nav2')
self._publish_nav_status(f'navigating:{len(goal.poses)}_waypoints')
self._nav_active = True
send_future = self._nav_client.send_goal_async(
goal,
feedback_callback=self._nav_feedback_cb,
)
send_future.add_done_callback(self._nav_goal_response_cb)
def _nav_goal_response_cb(self, future) -> None:
self._nav_goal_handle = future.result()
if not self._nav_goal_handle.accepted:
self.get_logger().error('Nav2 rejected navigation goal')
self._nav_active = False
self._publish_nav_status('error:goal_rejected')
return
result_future = self._nav_goal_handle.get_result_async()
result_future.add_done_callback(self._nav_result_cb)
def _nav_feedback_cb(self, feedback) -> None:
fb = feedback.feedback
remaining = getattr(fb, 'number_of_poses_remaining', '?')
self.get_logger().info(
f'Nav2 feedback: {remaining} waypoints remaining',
throttle_duration_sec=10.0,
)
def _nav_result_cb(self, future) -> None:
self._nav_active = False
result = future.result()
if result.status == 4: # SUCCEEDED
self.get_logger().info('Outdoor navigation completed successfully')
self._publish_nav_status('completed')
else:
self.get_logger().warn(f'Navigation ended with status {result.status}')
self._publish_nav_status(f'failed:status={result.status}')
def _cancel_navigation(self, reason: str) -> None:
if self._nav_goal_handle is not None and self._nav_active:
self.get_logger().info(f'Cancelling navigation: {reason}')
self._nav_goal_handle.cancel_goal_async()
self._nav_active = False
self._publish_nav_status(f'cancelled:{reason}')
# ── Helpers ───────────────────────────────────────────────────────────────
def _filter_waypoints(self, poses: list) -> list:
"""Remove waypoints closer than waypoint_spacing_m to previous."""
        spacing = self.get_parameter('waypoint_spacing_m').value
        if not poses:
            return []
        out = [poses[0]]
for p in poses[1:]:
prev = out[-1]
d = math.hypot(
p.position.x - prev.position.x,
p.position.y - prev.position.y,
)
if d >= spacing:
out.append(p)
return out
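The spacing filter is a pure geometric operation. A standalone sketch using `(x, y)` tuples in place of Pose objects (with an empty-list guard added, which the method above otherwise needs):

```python
import math


def filter_waypoints(points, spacing_m):
    """Keep the first point, then drop any point closer than spacing_m
    to the last kept point — same logic as _filter_waypoints()."""
    if not points:
        return []
    out = [points[0]]
    for p in points[1:]:
        if math.hypot(p[0] - out[-1][0], p[1] - out[-1][1]) >= spacing_m:
            out.append(p)
    return out
```

With 5 m spacing, `[(0,0), (1,0), (6,0), (7,0), (12,0)]` reduces to `[(0,0), (6,0), (12,0)]`.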
def _publish_gps_status(self) -> None:
if self._fix is None:
status = 'no_fix'
elif self._fix.status.status < 0:
status = 'no_fix'
elif self._fix.status.status >= _RTK_FIX_STATUS:
status = f'rtk_fixed lat={self._fix.latitude:.6f} lon={self._fix.longitude:.6f}'
elif self._fix.status.status >= NavSatStatus.STATUS_FIX:
cov = self._fix.position_covariance[0] ** 0.5 if self._fix.position_covariance[0] > 0 else 2.5
status = f'fix_ok cep~{cov:.1f}m lat={self._fix.latitude:.6f} lon={self._fix.longitude:.6f}'
else:
status = f'poor_fix status={self._fix.status.status}'
msg = String()
msg.data = status
self._gps_status_pub.publish(msg)
def _publish_nav_status(self, s: str) -> None:
msg = String()
msg.data = s
self._nav_status_pub.publish(msg)
def main(args=None):
rclpy.init(args=args)
node = GpsWaypointFollowerNode()
executor = MultiThreadedExecutor()
executor.add_node(node)
try:
executor.spin()
except KeyboardInterrupt:
pass
finally:
node.destroy_node()
rclpy.try_shutdown()
if __name__ == '__main__':
main()

@@ -0,0 +1,371 @@
"""
osm_router_node — OpenStreetMap sidewalk/bike-lane route planner
Given a start and goal GPS coordinate, this node:
1. Queries the Overpass API for pedestrian/cycleway ways within a
configurable radius around the midpoint
2. Builds a weighted graph (edge weight = haversine distance)
3. Runs A* (Dijkstra fallback) to find the shortest walkable path
4. Simplifies the waypoint list (Douglas-Peucker, ~5 m spacing)
5. Converts lat/lon waypoints to map-frame PoseStamped via
robot_localization's /fromLL service
6. Publishes the route as nav_msgs/Path for RViz visualisation
7. Returns the PoseArray to the GPS waypoint follower
ROS services:
/outdoor/plan_route (std_srvs/Trigger in V1 — goal passed via goal_lat/goal_lon parameters)
Topics published:
/outdoor/route nav_msgs/Path map-frame path for RViz
/outdoor/status std_msgs/String 'idle' | 'planning' | 'ready' | 'error:<msg>'
Topics subscribed:
/gps/fix sensor_msgs/NavSatFix current robot position
Hardware:
SIM7600X cellular module publishes /gps/fix at ~1 Hz (±2.5 m CEP)
RTK upgrade path: swap to ZED-F9P for ±2 cm (see config/outdoor_nav_params.yaml)
Overpass query: fetches highway=footway|cycleway|path|pedestrian|living_street
within a bounding box, then prunes to walking-accessible ways only.
Dependencies:
pip3 install requests (Overpass HTTP)
No osmnx required — native JSON parsing keeps the Docker image lean.
"""
import json
import math
import heapq
import time
from typing import Optional
import rclpy
from rclpy.node import Node
from rclpy.callback_groups import ReentrantCallbackGroup
from sensor_msgs.msg import NavSatFix
from nav_msgs.msg import Path
from geometry_msgs.msg import PoseStamped, PoseArray, Pose, Point
from std_msgs.msg import String
from std_srvs.srv import Trigger
try:
import requests
_REQUESTS_OK = True
except ImportError:
_REQUESTS_OK = False
# OSM highway types considered walkable
_WALKABLE = frozenset([
'footway', 'cycleway', 'path', 'pedestrian',
'living_street', 'residential', 'service',
'track', 'steps',
])
_OVERPASS_URL = 'https://overpass-api.de/api/interpreter'
_OVERPASS_TIMEOUT = 20 # seconds
# ── Haversine distance ────────────────────────────────────────────────────────
def _haversine(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
"""Great-circle distance in metres."""
R = 6_371_000.0
phi1, phi2 = math.radians(lat1), math.radians(lat2)
dphi = math.radians(lat2 - lat1)
dlam = math.radians(lon2 - lon1)
a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
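A quick sanity check of the haversine formula, restated here so it runs standalone: one degree of longitude at the equator comes out to ~111.19 km on the R = 6 371 000 m sphere used above.

```python
import math


def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (same formula as _haversine())."""
    R = 6_371_000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))


one_deg_equator = haversine(0.0, 0.0, 0.0, 1.0)  # ~111_195 m
```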
# ── Douglas-Peucker polyline simplification ───────────────────────────────────
def _perp_dist(pt, line_a, line_b):
"""Cross-track distance from pt to segment ab (all in lat/lon)."""
lat, lon = pt
la, lo_a = line_a
lb, lo_b = line_b
dx, dy = lb - la, lo_b - lo_a
if dx == 0 and dy == 0:
return _haversine(lat, lon, la, lo_a)
t = ((lat - la) * dx + (lon - lo_a) * dy) / (dx * dx + dy * dy)
t = max(0.0, min(1.0, t))
return _haversine(lat, lon, la + t * dx, lo_a + t * dy)
def _douglas_peucker(points: list, epsilon_m: float) -> list:
"""Reduce waypoints, keeping points > epsilon_m from simplified line."""
if len(points) <= 2:
return points
dmax, idx = 0.0, 0
end = points[-1]
for i in range(1, len(points) - 1):
d = _perp_dist(points[i], points[0], end)
if d > dmax:
dmax, idx = d, i
if dmax > epsilon_m:
left = _douglas_peucker(points[:idx + 1], epsilon_m)
right = _douglas_peucker(points[idx:], epsilon_m)
return left[:-1] + right
return [points[0], points[-1]]
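Restated standalone (same logic as `_perp_dist`/`_douglas_peucker` above), the simplifier keeps an interior point only when it strays more than `epsilon_m` from the endpoint chord:

```python
import math


def haversine(lat1, lon1, lat2, lon2):
    R = 6_371_000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))


def perp_dist(pt, a, b):
    """Cross-track distance from pt to segment ab (all lat/lon pairs)."""
    lat, lon = pt
    la, lo_a = a
    lb, lo_b = b
    dx, dy = lb - la, lo_b - lo_a
    if dx == 0 and dy == 0:
        return haversine(lat, lon, la, lo_a)
    t = ((lat - la) * dx + (lon - lo_a) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return haversine(lat, lon, la + t * dx, lo_a + t * dy)


def douglas_peucker(points, epsilon_m):
    if len(points) <= 2:
        return points
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon_m:
        left = douglas_peucker(points[:idx + 1], epsilon_m)
        right = douglas_peucker(points[idx:], epsilon_m)
        return left[:-1] + right          # drop duplicated split point
    return [points[0], points[-1]]


# Collinear points collapse to the two endpoints; a ~111 m bump survives 5 m eps.
straight = [(0.0, 0.0), (0.0, 0.0005), (0.0, 0.001)]
bumped = [(0.0, 0.0), (0.001, 0.0005), (0.0, 0.001)]
```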
# ── A* on OSM graph ───────────────────────────────────────────────────────────
def _astar(graph: dict, start: int, goal: int, coords: dict) -> Optional[list]:
"""A* over OSM node graph. Returns list of node IDs or None."""
if start not in graph or goal not in graph:
return None
glat, glon = coords[goal]
def h(n):
return _haversine(coords[n][0], coords[n][1], glat, glon)
open_heap = [(h(start), 0.0, start, [start])]
visited = set()
while open_heap:
f, g, node, path = heapq.heappop(open_heap)
if node in visited:
continue
visited.add(node)
if node == goal:
return path
for nb, w in graph[node]:
if nb not in visited:
ng = g + w
heapq.heappush(open_heap, (ng + h(nb), ng, nb, path + [nb]))
return None
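The A* search can be exercised on a toy graph. This sketch substitutes planar coordinates and a Euclidean heuristic for the haversine call, but the search itself is the same path-carrying heap variant as `_astar()`:

```python
import heapq


def astar(graph, start, goal, coords):
    """A* over an adjacency list {node: [(neighbour, weight), ...]}."""
    if start not in graph or goal not in graph:
        return None

    def h(n):  # straight-line distance to goal (admissible)
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    open_heap = [(h(start), 0.0, start, [start])]
    visited = set()
    while open_heap:
        _f, g, node, path = heapq.heappop(open_heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            return path
        for nb, w in graph.get(node, []):
            if nb not in visited:
                ng = g + w
                heapq.heappush(open_heap, (ng + h(nb), ng, nb, path + [nb]))
    return None


# Direct edge 1→3 costs 5; the detour via 2 costs 2, so A* takes it.
coords = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (1.0, 1.0)}
graph = {
    1: [(2, 1.0), (3, 5.0)],
    2: [(1, 1.0), (3, 1.0)],
    3: [(2, 1.0), (1, 5.0)],
}
```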
# ── OSM graph builder ─────────────────────────────────────────────────────────
def _build_graph(elements: list) -> tuple[dict, dict]:
"""Parse Overpass JSON elements into adjacency list + coordinate dict."""
node_coords: dict[int, tuple[float, float]] = {}
graph: dict[int, list] = {}
for el in elements:
if el['type'] == 'node':
node_coords[el['id']] = (el['lat'], el['lon'])
for el in elements:
if el['type'] != 'way':
continue
highway = el.get('tags', {}).get('highway', '')
if highway not in _WALKABLE:
continue
nodes = el['nodes']
for i in range(len(nodes) - 1):
a, b = nodes[i], nodes[i + 1]
if a not in node_coords or b not in node_coords:
continue
w = _haversine(*node_coords[a], *node_coords[b])
graph.setdefault(a, []).append((b, w))
graph.setdefault(b, []).append((a, w)) # bidirectional
return graph, node_coords
def _nearest_node(lat: float, lon: float, coords: dict) -> int:
"""Return ID of OSM node nearest to (lat, lon)."""
return min(coords, key=lambda n: _haversine(lat, lon, *coords[n]))
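A minimal worked example of the graph builder, fed Overpass-style node/way dicts. The motorway way is pruned because `motorway` is not in the walkable set; the footway's consecutive node pairs become bidirectional edges weighted by haversine distance:

```python
import math

WALKABLE = frozenset([
    'footway', 'cycleway', 'path', 'pedestrian',
    'living_street', 'residential', 'service', 'track', 'steps',
])


def haversine(lat1, lon1, lat2, lon2):
    R = 6_371_000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))


def build_graph(elements):
    """Same structure as _build_graph(): adjacency list + coord dict."""
    coords, graph = {}, {}
    for el in elements:
        if el['type'] == 'node':
            coords[el['id']] = (el['lat'], el['lon'])
    for el in elements:
        if el['type'] != 'way':
            continue
        if el.get('tags', {}).get('highway', '') not in WALKABLE:
            continue
        nodes = el['nodes']
        for i in range(len(nodes) - 1):
            a, b = nodes[i], nodes[i + 1]
            if a not in coords or b not in coords:
                continue
            w = haversine(*coords[a], *coords[b])
            graph.setdefault(a, []).append((b, w))
            graph.setdefault(b, []).append((a, w))  # bidirectional
    return graph, coords


elements = [
    {'type': 'node', 'id': 1, 'lat': 0.0, 'lon': 0.000},
    {'type': 'node', 'id': 2, 'lat': 0.0, 'lon': 0.001},
    {'type': 'node', 'id': 3, 'lat': 0.0, 'lon': 0.002},
    {'type': 'way', 'id': 10, 'tags': {'highway': 'footway'}, 'nodes': [1, 2, 3]},
    {'type': 'way', 'id': 11, 'tags': {'highway': 'motorway'}, 'nodes': [1, 3]},
]
graph, coords = build_graph(elements)
```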
# ── ROS node ──────────────────────────────────────────────────────────────────
class OsmRouterNode(Node):
def __init__(self):
super().__init__('osm_router')
self.declare_parameter('overpass_url', _OVERPASS_URL)
self.declare_parameter('query_radius_m', 1500.0) # area to fetch
self.declare_parameter('simplify_eps_m', 5.0) # waypoint spacing
self.declare_parameter('cache_dir', '/data/osm_cache')
self.declare_parameter('use_cache', True)
self.declare_parameter('map_frame', 'map')
self._fix: Optional[NavSatFix] = None
self._cb_group = ReentrantCallbackGroup()
# ── Subscriptions ────────────────────────────────────────────────────
self.create_subscription(NavSatFix, '/gps/fix', self._fix_cb, 10)
# ── Publishers ───────────────────────────────────────────────────────
self._path_pub = self.create_publisher(Path, '/outdoor/route', 10)
self._status_pub = self.create_publisher(String, '/outdoor/status', 10)
self._poses_pub = self.create_publisher(PoseArray, '/outdoor/waypoints', 10)
# ── Services ─────────────────────────────────────────────────────────
        # /outdoor/plan_route — Trigger service; in V1 the goal lat/lon is
        # passed via the goal_lat/goal_lon parameter overrides declared below.
self.declare_parameter('goal_lat', 0.0)
self.declare_parameter('goal_lon', 0.0)
self.create_service(
Trigger, '/outdoor/plan_route',
self._plan_route_cb,
callback_group=self._cb_group,
)
self._publish_status('idle')
self.get_logger().info('osm_router: ready. Call /outdoor/plan_route to start.')
# ── Callbacks ─────────────────────────────────────────────────────────────
def _fix_cb(self, msg: NavSatFix) -> None:
self._fix = msg
def _plan_route_cb(self, _req, response):
if self._fix is None:
response.success = False
response.message = 'No GPS fix yet'
return response
goal_lat = self.get_parameter('goal_lat').value
goal_lon = self.get_parameter('goal_lon').value
if goal_lat == 0.0 and goal_lon == 0.0:
response.success = False
response.message = 'goal_lat/goal_lon not set'
return response
self._publish_status('planning')
try:
path_msg, pose_array = self._plan(
self._fix.latitude, self._fix.longitude,
goal_lat, goal_lon,
)
self._path_pub.publish(path_msg)
self._poses_pub.publish(pose_array)
self._publish_status('ready')
response.success = True
response.message = f'{len(pose_array.poses)} waypoints planned'
except Exception as exc:
self.get_logger().error(f'Route planning failed: {exc}')
self._publish_status(f'error:{exc}')
response.success = False
response.message = str(exc)
return response
# ── Planning ──────────────────────────────────────────────────────────────
def _plan(self, slat, slon, glat, glon) -> tuple[Path, PoseArray]:
radius = self.get_parameter('query_radius_m').value
eps = self.get_parameter('simplify_eps_m').value
frame = self.get_parameter('map_frame').value
# Bounding box centred on midpoint
mlat = (slat + glat) / 2
mlon = (slon + glon) / 2
d_deg = (radius / 111_320.0) * 1.5 # pad
bbox = (mlat - d_deg, mlon - d_deg, mlat + d_deg, mlon + d_deg)
elements = self._fetch_osm(bbox)
graph, coords = _build_graph(elements)
if not coords:
raise RuntimeError('No walkable OSM nodes in query area')
start_node = _nearest_node(slat, slon, coords)
goal_node = _nearest_node(glat, glon, coords)
node_path = _astar(graph, start_node, goal_node, coords)
if node_path is None:
raise RuntimeError('No walkable path found between start and goal')
raw_pts = [coords[n] for n in node_path]
simplified = _douglas_peucker(raw_pts, eps)
self.get_logger().info(
            f'Route: {len(raw_pts)} → {len(simplified)} waypoints after simplification'
)
# Build nav_msgs/Path in map frame using ENU approximation.
# navsat_transform_node provides the GPS→map projection; here we use
# a local flat-earth approximation centred on the start position.
# Replace with /fromLL service call when robot_localization is running.
path_msg = Path()
pose_arr = PoseArray()
now = self.get_clock().now().to_msg()
path_msg.header.stamp = now
path_msg.header.frame_id = frame
pose_arr.header = path_msg.header
lat0, lon0 = slat, slon
for lat, lon in simplified:
x, y = _latlon_to_enu(lat, lon, lat0, lon0)
ps = PoseStamped()
ps.header = path_msg.header
ps.pose.position.x = x
ps.pose.position.y = y
ps.pose.orientation.w = 1.0
path_msg.poses.append(ps)
p = Pose()
p.position.x = x
p.position.y = y
p.orientation.w = 1.0
pose_arr.poses.append(p)
return path_msg, pose_arr
# ── Overpass fetch ────────────────────────────────────────────────────────
def _fetch_osm(self, bbox: tuple) -> list:
"""Query Overpass for walkable ways in bbox. Returns element list."""
if not _REQUESTS_OK:
raise RuntimeError('requests library not installed (pip3 install requests)')
s, w, n, e = bbox
query = f"""
[out:json][timeout:{_OVERPASS_TIMEOUT}];
(
way["highway"~"^(footway|cycleway|path|pedestrian|living_street|residential|service|track|steps)$"]
({s:.6f},{w:.6f},{n:.6f},{e:.6f});
);
out body;
>;
out skel qt;
"""
url = self.get_parameter('overpass_url').value
self.get_logger().info(f'Querying Overpass: bbox={s:.4f},{w:.4f},{n:.4f},{e:.4f}')
resp = requests.post(url, data={'data': query}, timeout=_OVERPASS_TIMEOUT + 5)
resp.raise_for_status()
data = resp.json()
elements = data.get('elements', [])
self.get_logger().info(f'Overpass returned {len(elements)} elements')
return elements
# ── Helpers ───────────────────────────────────────────────────────────────
def _publish_status(self, msg: str) -> None:
s = String()
s.data = msg
self._status_pub.publish(s)
def _latlon_to_enu(lat: float, lon: float, lat0: float, lon0: float) -> tuple[float, float]:
"""Flat-earth ENU approximation (valid within ~10 km)."""
dlat = math.radians(lat - lat0)
dlon = math.radians(lon - lon0)
cos0 = math.cos(math.radians(lat0))
x = dlon * cos0 * 6_371_000.0 # East
y = dlat * 6_371_000.0 # North
return x, y
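A worked example of the flat-earth approximation: 0.001° due north maps to ~111.19 m of +y, while 0.001° due east at 45° latitude shrinks by cos(45°) to ~78.63 m of +x:

```python
import math


def latlon_to_enu(lat, lon, lat0, lon0):
    """Flat-earth ENU approximation, same formula as _latlon_to_enu()."""
    dlat = math.radians(lat - lat0)
    dlon = math.radians(lon - lon0)
    cos0 = math.cos(math.radians(lat0))
    x = dlon * cos0 * 6_371_000.0  # East
    y = dlat * 6_371_000.0         # North
    return x, y


x_n, y_n = latlon_to_enu(0.001, 0.0, 0.0, 0.0)    # due north at the equator
x_e, y_e = latlon_to_enu(45.0, 0.001, 45.0, 0.0)  # due east at 45° latitude
```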
def main(args=None):
rclpy.init(args=args)
node = OsmRouterNode()
try:
rclpy.spin(node)
except KeyboardInterrupt:
pass
finally:
node.destroy_node()
rclpy.try_shutdown()
if __name__ == '__main__':
main()

@@ -0,0 +1,4 @@
[develop]
script_dir=$base/lib/saltybot_outdoor
[install]
install_scripts=$base/lib/saltybot_outdoor

@@ -0,0 +1,34 @@
import os
from glob import glob
from setuptools import find_packages, setup
package_name = 'saltybot_outdoor'
setup(
name=package_name,
version='0.1.0',
packages=find_packages(exclude=['test']),
data_files=[
('share/ament_index/resource_index/packages',
['resource/' + package_name]),
('share/' + package_name, ['package.xml']),
(os.path.join('share', package_name, 'launch'),
glob('launch/*.py')),
(os.path.join('share', package_name, 'config'),
glob('config/*.yaml')),
],
install_requires=['setuptools'],
zip_safe=True,
maintainer='seb',
maintainer_email='seb@saltylab.io',
description='Outdoor navigation: OSM routing, GPS waypoint following, geofence',
license='MIT',
tests_require=['pytest'],
entry_points={
'console_scripts': [
'osm_router_node = saltybot_outdoor.osm_router_node:main',
'gps_waypoint_follower = saltybot_outdoor.gps_waypoint_follower_node:main',
'geofence_node = saltybot_outdoor.geofence_node:main',
],
},
)

@@ -0,0 +1,83 @@
# route_params.yaml — Route recording and replay configuration for SaltyBot
#
# "Ride once, replay forever."
#
# Flow:
# 1. Tee rides EUC, SaltyBot follows via UWB follow-me (PR #66)
# 2. route_recorder_node samples GPS + odom → /data/routes/<name>.jsonl
# 3. Next ride: route_replayer_node loads file → Nav2 navigate_through_poses
#
# GPS source: SIM7600X → /gps/fix (NavSatFix, ±2.5m CEP) — PR #65
# Heading: D435i IMU → /imu/data, converted yaw → route waypoint heading_deg
# Odometry: STM32 wheel encoders → /odom
# UWB: /uwb/target (follow-me reference, logged for context)
route_recorder:
ros__parameters:
save_dir: "/data/routes"
route_name: "route" # override via param or launch arg
waypoint_spacing_m: 2.0 # skip GPS samples closer than this
sample_rate_hz: 1.0 # max recording frequency (GPS ~1Hz)
min_fix_status: 0 # 0=STATUS_FIX; reject STATUS_NO_FIX (-1)
route_replayer:
ros__parameters:
save_dir: "/data/routes"
route_name: "route"
replay_spacing_m: 3.0 # subsample waypoints for Nav2 (3m spacing)
goal_tolerance_xy: 2.0 # metres — matches SIM7600X ±2.5m CEP
replay_speed_factor: 1.0 # 1.0 = same speed as recorded
map_frame: "map"
nav_timeout_s: 600.0 # 10 min max per nav batch
route_manager:
ros__parameters:
save_dir: "/data/routes"
route_name: "route"
# ── Operational notes ─────────────────────────────────────────────────────────
#
# Recording a route (EUC follow-me session):
# ros2 param set /route_recorder route_name "friday_loop"
# ros2 service call /route/start_recording std_srvs/srv/Trigger '{}'
# # ... follow Tee for the full route ...
# ros2 service call /route/stop_recording std_srvs/srv/Trigger '{}'
# ros2 service call /route/save std_srvs/srv/Trigger '{}'
#
# Replaying a route (autonomous):
# ros2 param set /route_replayer route_name "friday_loop"
# ros2 service call /route/load std_srvs/srv/Trigger '{}'
# ros2 service call /route/start_replay std_srvs/srv/Trigger '{}'
# ros2 topic echo /route/replay_status
#
# Listing routes:
# ros2 service call /route/list std_srvs/srv/Trigger '{}'
#
# Route info:
# ros2 param set /route_manager route_name "friday_loop"
# ros2 service call /route/info std_srvs/srv/Trigger '{}'
#
# Delete route:
# ros2 param set /route_manager route_name "friday_loop"
# ros2 service call /route/delete std_srvs/srv/Trigger '{}'
#
# Emergency stop during replay:
# ros2 service call /route/stop std_srvs/srv/Trigger '{}'
# # or: ros2 topic pub /route/pause_cmd std_msgs/msg/Bool '{data: true}'
#
# ── File format (JSON Lines, .jsonl) ─────────────────────────────────────────
#
# {
# "version": 1,
# "name": "friday_loop",
# "recorded_at": "2026-03-01T14:22:00+00:00",
# "waypoint_count": 142,
# "total_distance_m": 1240.5,
# "duration_s": 820.3,
# "waypoints": [
# {"t": 1740839000.1, "lat": 37.7749, "lon": -122.4194,
# "heading_deg": 45.0, "speed_mps": 1.2, "odom_x": 0.0, "odom_y": 0.0,
# "gps_status": 0},
# ...
# ]
# }
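Assuming the header-plus-lines .jsonl layout that route_recorder_node describes (metadata on line 1, one waypoint object per following line — the pretty-printed block above shows the logical content, not the on-disk layout), a loader sketch might look like this (`load_route` is an illustrative helper, not a node API):

```python
import io
import json


def load_route(f):
    """Read a route .jsonl stream: line 1 is the metadata header,
    each subsequent non-blank line is one waypoint object."""
    meta = json.loads(f.readline())
    waypoints = [json.loads(line) for line in f if line.strip()]
    return meta, waypoints


sample = io.StringIO(
    '{"version": 1, "name": "friday_loop", "waypoint_count": 2}\n'
    '{"t": 1740839000.1, "lat": 37.7749, "lon": -122.4194, "gps_status": 0}\n'
    '{"t": 1740839001.1, "lat": 37.7750, "lon": -122.4193, "gps_status": 0}\n'
)
meta, wps = load_route(sample)
```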

@@ -0,0 +1,99 @@
"""
route_system.launch.py — Route recording and replay system for SaltyBot
Starts:
route_recorder_node — records GPS + odom path during follow-me rides
route_replayer_node — replays recorded routes autonomously via Nav2
route_manager_node — list/delete/info for saved routes
Depends on:
saltybot-nav2 container (Nav2 action server /navigate_through_poses)
saltybot_cellular (/gps/fix from SIM7600X GPS PR #65)
saltybot_uwb (/uwb/target PR #66, used for context during recording)
STM32 bridge (/odom from wheel encoders)
D435i (/imu/data for heading)
Usage — record a route:
# Set name, start recording, ride with Tee, stop and save:
ros2 param set /route_recorder route_name "friday_loop"
ros2 service call /route/start_recording std_srvs/srv/Trigger '{}'
# ... ride the route ...
ros2 service call /route/stop_recording std_srvs/srv/Trigger '{}'
ros2 service call /route/save std_srvs/srv/Trigger '{}'
Usage — replay a route:
ros2 param set /route_replayer route_name "friday_loop"
ros2 service call /route/load std_srvs/srv/Trigger '{}'
ros2 service call /route/start_replay std_srvs/srv/Trigger '{}'
Arguments:
save_dir (default /data/routes) NVMe route storage
route_name (default route) name for record/replay
"""
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
def generate_launch_description():
pkg_dir = get_package_share_directory('saltybot_routes')
params_file = os.path.join(pkg_dir, 'config', 'route_params.yaml')
args = [
DeclareLaunchArgument('save_dir', default_value='/data/routes'),
DeclareLaunchArgument('route_name', default_value='route'),
]
recorder = Node(
package='saltybot_routes',
executable='route_recorder_node',
name='route_recorder',
output='screen',
parameters=[
params_file,
{
'save_dir': LaunchConfiguration('save_dir'),
'route_name': LaunchConfiguration('route_name'),
},
],
)
replayer = Node(
package='saltybot_routes',
executable='route_replayer_node',
name='route_replayer',
output='screen',
parameters=[
params_file,
{
'save_dir': LaunchConfiguration('save_dir'),
'route_name': LaunchConfiguration('route_name'),
},
],
)
manager = Node(
package='saltybot_routes',
executable='route_manager_node',
name='route_manager',
output='screen',
parameters=[
params_file,
{
'save_dir': LaunchConfiguration('save_dir'),
'route_name': LaunchConfiguration('route_name'),
},
],
)
return LaunchDescription([
*args,
recorder,
replayer,
manager,
])

@@ -0,0 +1,31 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>saltybot_routes</name>
<version>0.1.0</version>
<description>Route recording and autonomous replay for SaltyBot — "Ride once, replay forever."</description>
<maintainer email="seb@saltylab.io">seb</maintainer>
<license>MIT</license>
<buildtool_depend>ament_python</buildtool_depend>
<depend>rclpy</depend>
<depend>sensor_msgs</depend>
<depend>geometry_msgs</depend>
<depend>nav_msgs</depend>
<depend>std_msgs</depend>
<depend>std_srvs</depend>
<depend>nav2_msgs</depend>
<exec_depend>robot_localization</exec_depend>
<exec_depend>nav2_bringup</exec_depend>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>pytest</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>

@@ -0,0 +1,149 @@
"""
route_manager_node — List, inspect, and delete saved route files.
Provides a management interface for the /data/routes/ directory.
All routes are .jsonl files written by route_recorder_node.
Services:
/route/list std_srvs/Trigger returns JSON list of all routes in status message
/route/delete std_srvs/Trigger delete route named by route_name param
/route/info std_srvs/Trigger return metadata for route named by route_name param
Parameters:
save_dir (default /data/routes)
route_name (default 'route') used by /route/delete and /route/info
Topics published:
/route/manager_status std_msgs/String last operation result
Response format (in message field):
/route/list JSON string: [{"name":..., "distance_m":..., "recorded_at":..., "waypoint_count":...}, ...]
/route/info JSON string: {full metadata dict}
/route/delete "Deleted: <name>"
"""
import json
import os
from datetime import datetime, timezone
import rclpy
from rclpy.node import Node
from std_msgs.msg import String
from std_srvs.srv import Trigger
class RouteManagerNode(Node):
def __init__(self):
super().__init__('route_manager')
self.declare_parameter('save_dir', '/data/routes')
self.declare_parameter('route_name', 'route')
self._status_pub = self.create_publisher(String, '/route/manager_status', 10)
self.create_service(Trigger, '/route/list', self._list_cb)
self.create_service(Trigger, '/route/delete', self._delete_cb)
self.create_service(Trigger, '/route/info', self._info_cb)
self.get_logger().info('route_manager: ready.')
# ── Service handlers ──────────────────────────────────────────────────────
def _list_cb(self, _req, response):
save_dir = self.get_parameter('save_dir').value
if not os.path.isdir(save_dir):
response.success = True
response.message = '[]'
return response
routes = []
for fname in sorted(os.listdir(save_dir)):
if not fname.endswith('.jsonl'):
continue
path = os.path.join(save_dir, fname)
try:
                with open(path) as f:
                    meta = json.loads(f.readline())  # .jsonl: line 1 is the metadata header
routes.append({
'name': meta.get('name', fname[:-6]),
'distance_m': meta.get('total_distance_m', 0.0),
'duration_s': meta.get('duration_s', 0.0),
'recorded_at': meta.get('recorded_at', ''),
'waypoint_count': meta.get('waypoint_count', 0),
'file': fname,
})
except Exception as e:
self.get_logger().warn(f'route_manager: could not read {fname}: {e}')
result = json.dumps(routes)
self._pub_status(f'list:{len(routes)}_routes')
response.success = True
response.message = result
self.get_logger().info(f'route_manager: listed {len(routes)} routes')
return response
def _delete_cb(self, _req, response):
name = self.get_parameter('route_name').value
save_dir = self.get_parameter('save_dir').value
path = os.path.join(save_dir, f'{name}.jsonl')
if not os.path.exists(path):
response.success = False
response.message = f'Route not found: {name}'
return response
os.remove(path)
self._pub_status(f'deleted:{name}')
self.get_logger().info(f'route_manager: deleted "{name}"')
response.success = True
response.message = f'Deleted: {name}'
return response
def _info_cb(self, _req, response):
name = self.get_parameter('route_name').value
save_dir = self.get_parameter('save_dir').value
path = os.path.join(save_dir, f'{name}.jsonl')
if not os.path.exists(path):
response.success = False
response.message = f'Route not found: {name}'
return response
try:
            with open(path) as f:
                meta = json.loads(f.readline())  # .jsonl: line 1 is the metadata header
# Return metadata without the full waypoints array (can be large)
info = {k: v for k, v in meta.items() if k != 'waypoints'}
result = json.dumps(info, indent=2)
self._pub_status(f'info:{name}')
response.success = True
response.message = result
except Exception as e:
response.success = False
response.message = f'Error reading route: {e}'
return response
# ── Helpers ───────────────────────────────────────────────────────────────
def _pub_status(self, s: str) -> None:
msg = String()
msg.data = s
self._status_pub.publish(msg)
def main(args=None):
rclpy.init(args=args)
node = RouteManagerNode()
try:
rclpy.spin(node)
except KeyboardInterrupt:
pass
finally:
node.destroy_node()
rclpy.try_shutdown()
if __name__ == '__main__':
main()

@@ -0,0 +1,317 @@
"""
route_recorder_node — Records GPS + odometry path during follow-me rides.
"Ride once, replay forever." During a UWB follow-me session, this node
samples the robot's GPS position, heading, and speed into a timestamped
JSON-Lines file. The result is a portable route file that route_replayer_node
can load to autonomously repeat the ride.
Recording strategy:
- Samples at up to 1 Hz (configurable)
- Skips waypoints within waypoint_spacing_m of the previous one
- Each waypoint: {t, lat, lon, heading_deg, speed_mps, x, y, source}
- source: 'gps' | 'odom' (used when GPS quality drops)
JSON-Lines format (one JSON object per line, .jsonl):
{"version": 1, "name": "...", "recorded_at": "...", "waypoints": [...]}
Written as a single-line header on line 1 (metadata), then appended to.
File location: /data/routes/<name>.jsonl
Topics subscribed:
/gps/fix sensor_msgs/NavSatFix GPS position (SIM7600X)
/odom nav_msgs/Odometry wheel encoder odometry
/uwb/target geometry_msgs/PoseStamped UWB target position (for context)
/imu/data sensor_msgs/Imu heading from IMU
Topics published:
/route/status std_msgs/String 'idle'|'recording:<name>'|'saved:<name>'
Services:
/route/start_recording std_srvs/Trigger begin recording (name via param)
/route/stop_recording std_srvs/Trigger stop and buffer (don't save yet)
/route/save std_srvs/Trigger flush buffer to disk
/route/discard std_srvs/Trigger discard unsaved recording
Parameters:
save_dir (default /data/routes)
waypoint_spacing_m (default 2.0) skip closer waypoints
sample_rate_hz (default 1.0) max recording rate
route_name (default 'route') set before start_recording
min_fix_status (default 0) reject STATUS_NO_FIX
"""
import json
import math
import os
import time
from datetime import datetime, timezone
from typing import Optional
import rclpy
from rclpy.node import Node
from rclpy.callback_groups import ReentrantCallbackGroup
from sensor_msgs.msg import NavSatFix, NavSatStatus, Imu
from nav_msgs.msg import Odometry
from geometry_msgs.msg import PoseStamped
from std_msgs.msg import String
from std_srvs.srv import Trigger
def _haversine(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
"""Great-circle distance in metres."""
R = 6_371_000.0
phi1, phi2 = math.radians(lat1), math.radians(lat2)
dphi = math.radians(lat2 - lat1)
dlam = math.radians(lon2 - lon1)
a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
def _quat_to_yaw_deg(q) -> float:
"""Convert quaternion to yaw in degrees (-180..180)."""
siny = 2.0 * (q.w * q.z + q.x * q.y)
cosy = 1.0 - 2.0 * (q.y * q.y + q.z * q.z)
return math.degrees(math.atan2(siny, cosy))
class RouteRecorderNode(Node):
def __init__(self):
super().__init__('route_recorder')
self.declare_parameter('save_dir', '/data/routes')
self.declare_parameter('waypoint_spacing_m', 2.0)
self.declare_parameter('sample_rate_hz', 1.0)
self.declare_parameter('route_name', 'route')
self.declare_parameter('min_fix_status', NavSatStatus.STATUS_FIX)
self._recording = False
self._waypoints: list[dict] = []
self._last_wp: Optional[dict] = None
self._last_sample_t = 0.0
# Latest sensor state
self._fix: Optional[NavSatFix] = None
self._odom: Optional[Odometry] = None
self._imu: Optional[Imu] = None
self._uwb: Optional[PoseStamped] = None
self._cb_group = ReentrantCallbackGroup()
# ── Subscriptions ────────────────────────────────────────────────────
self.create_subscription(NavSatFix, '/gps/fix', self._fix_cb, 10)
self.create_subscription(Odometry, '/odom', self._odom_cb, 10)
self.create_subscription(Imu, '/imu/data', self._imu_cb, 10)
self.create_subscription(PoseStamped, '/uwb/target', self._uwb_cb, 10)
# ── Publishers ───────────────────────────────────────────────────────
self._status_pub = self.create_publisher(String, '/route/status', 10)
# ── Services ─────────────────────────────────────────────────────────
self.create_service(Trigger, '/route/start_recording', self._start_cb,
callback_group=self._cb_group)
self.create_service(Trigger, '/route/stop_recording', self._stop_cb,
callback_group=self._cb_group)
self.create_service(Trigger, '/route/save', self._save_cb,
callback_group=self._cb_group)
self.create_service(Trigger, '/route/discard', self._discard_cb,
callback_group=self._cb_group)
# ── Sample timer ─────────────────────────────────────────────────────
rate = self.get_parameter('sample_rate_hz').value
self.create_timer(1.0 / rate, self._sample)
self._pub_status('idle')
self.get_logger().info('route_recorder: ready. Call /route/start_recording.')
# ── Sensor callbacks ──────────────────────────────────────────────────────
def _fix_cb(self, msg: NavSatFix) -> None:
self._fix = msg
def _odom_cb(self, msg: Odometry) -> None:
self._odom = msg
def _imu_cb(self, msg: Imu) -> None:
self._imu = msg
def _uwb_cb(self, msg: PoseStamped) -> None:
self._uwb = msg
# ── Service handlers ──────────────────────────────────────────────────────
def _start_cb(self, _req, response):
if self._recording:
response.success = False
response.message = 'Already recording'
return response
name = self.get_parameter('route_name').value
self._waypoints = []
self._last_wp = None
self._recording = True
self.get_logger().info(f'route_recorder: started recording "{name}"')
self._pub_status(f'recording:{name}')
response.success = True
response.message = f'Recording started: {name}'
return response
def _stop_cb(self, _req, response):
if not self._recording:
response.success = False
response.message = 'Not recording'
return response
self._recording = False
n = len(self._waypoints)
self.get_logger().info(f'route_recorder: stopped. {n} waypoints buffered.')
self._pub_status(f'stopped:{n}_waypoints')
response.success = True
response.message = f'Recording stopped. {n} waypoints buffered. Call /route/save.'
return response
def _save_cb(self, _req, response):
if not self._waypoints:
response.success = False
response.message = 'No waypoints to save'
return response
name = self.get_parameter('route_name').value
save_dir = self.get_parameter('save_dir').value
os.makedirs(save_dir, exist_ok=True)
path = os.path.join(save_dir, f'{name}.jsonl')
total_dist = self._total_distance_m()
meta = {
'version': 1,
'name': name,
'recorded_at': datetime.now(timezone.utc).isoformat(),
'waypoint_count': len(self._waypoints),
'total_distance_m': round(total_dist, 1),
'duration_s': self._duration_s(),
'waypoints': self._waypoints,
}
with open(path, 'w') as f:
json.dump(meta, f)
f.write('\n')
self.get_logger().info(
f'route_recorder: saved {len(self._waypoints)} waypoints → {path} '
f'({total_dist:.0f} m)'
)
self._pub_status(f'saved:{name}')
self._waypoints = []
response.success = True
response.message = f'Saved: {path} ({len(meta["waypoints"])} waypoints, {total_dist:.0f} m)'
return response
def _discard_cb(self, _req, response):
n = len(self._waypoints)
self._waypoints = []
self._recording = False
self._last_wp = None
self._pub_status('idle')
response.success = True
response.message = f'Discarded {n} waypoints'
return response
# ── Sample timer ──────────────────────────────────────────────────────────
def _sample(self) -> None:
if not self._recording:
return
if self._fix is None:
return
# GPS quality check
min_status = self.get_parameter('min_fix_status').value
if self._fix.status.status < min_status:
return
lat = self._fix.latitude
lon = self._fix.longitude
# Skip if too close to last waypoint
spacing = self.get_parameter('waypoint_spacing_m').value
if self._last_wp is not None:
d = _haversine(lat, lon, self._last_wp['lat'], self._last_wp['lon'])
if d < spacing:
return
# Heading: prefer IMU yaw, fall back to GPS bearing
heading_deg = 0.0
if self._imu is not None:
heading_deg = _quat_to_yaw_deg(self._imu.orientation)
elif self._last_wp is not None:
dlat = lat - self._last_wp['lat']
dlon = lon - self._last_wp['lon']
heading_deg = math.degrees(math.atan2(dlon, dlat))
# Speed from odometry
speed_mps = 0.0
odom_x, odom_y = 0.0, 0.0
if self._odom is not None:
vx = self._odom.twist.twist.linear.x
vy = self._odom.twist.twist.linear.y
speed_mps = math.hypot(vx, vy)
odom_x = self._odom.pose.pose.position.x
odom_y = self._odom.pose.pose.position.y
wp = {
't': self.get_clock().now().nanoseconds / 1e9,
'lat': lat,
'lon': lon,
'heading_deg': round(heading_deg, 1),
'speed_mps': round(speed_mps, 2),
'odom_x': round(odom_x, 3),
'odom_y': round(odom_y, 3),
'gps_status': self._fix.status.status,
}
self._waypoints.append(wp)
self._last_wp = wp
name = self.get_parameter('route_name').value
self.get_logger().debug(
f'route_recorder: wp #{len(self._waypoints)} '
f'lat={lat:.6f} lon={lon:.6f} hdg={heading_deg:.0f}° spd={speed_mps:.1f}m/s'
)
self._pub_status(f'recording:{name}:{len(self._waypoints)}_wps')
# ── Helpers ───────────────────────────────────────────────────────────────
def _total_distance_m(self) -> float:
if len(self._waypoints) < 2:
return 0.0
total = 0.0
for i in range(1, len(self._waypoints)):
a, b = self._waypoints[i - 1], self._waypoints[i]
total += _haversine(a['lat'], a['lon'], b['lat'], b['lon'])
return total
def _duration_s(self) -> float:
if len(self._waypoints) < 2:
return 0.0
return self._waypoints[-1]['t'] - self._waypoints[0]['t']
def _pub_status(self, s: str) -> None:
msg = String()
msg.data = s
self._status_pub.publish(msg)
def main(args=None):
rclpy.init(args=args)
node = RouteRecorderNode()
try:
rclpy.spin(node)
except KeyboardInterrupt:
pass
finally:
node.destroy_node()
rclpy.try_shutdown()
if __name__ == '__main__':
main()

@@ -0,0 +1,389 @@
"""
route_replayer_node Loads and autonomously replays a recorded route.
"Ride once, replay forever." Loads a .jsonl route file, converts GPS
waypoints to map-frame PoseStamped via robot_localization's flat-earth
ENU projection, then sends batches to Nav2's navigate_through_poses action.
Replay strategy:
1. Load route file list of {lat, lon, heading_deg, ...}
2. Convert GPS map-frame ENU (relative to first fix / datum)
3. Subsample waypoints by replay_spacing_m (default 3m)
4. Inject heading from recorded data (preserved via quaternion)
5. Send full waypoint list to Nav2 navigate_through_poses
6. Monitor feedback; pause/resume on demand
7. Odometry correction: compare current odom position to expected odom
position and log drift for future closed-loop correction (V1: log only)
GPS drift handling:
- Use wider goal tolerance (2m default matches SIM7600X CEP)
- Slow down near waypoints (Nav2 DWB sim_time reduces automatically)
- Report GPS covariance in status so operator can monitor quality
Topics subscribed:
/gps/fix sensor_msgs/NavSatFix
/odom nav_msgs/Odometry
/route/pause_cmd std_msgs/Bool True=pause, False=resume
Topics published:
/route/replay_status std_msgs/String state machine status
/route/progress std_msgs/String 'N/M waypoints, D m remaining'
Actions called:
/navigate_through_poses nav2_msgs/action/NavigateThroughPoses
Services:
/route/load std_srvs/Trigger load route named by route_name param
/route/start_replay std_srvs/Trigger begin replay
/route/pause std_srvs/Trigger pause (cancel Nav2 goal, hold position)
/route/stop std_srvs/Trigger abort replay
"""
import json
import math
import os
from typing import Optional
import rclpy
from rclpy.node import Node
from rclpy.action import ActionClient
from rclpy.callback_groups import ReentrantCallbackGroup
from rclpy.executors import MultiThreadedExecutor
from sensor_msgs.msg import NavSatFix, NavSatStatus
from nav_msgs.msg import Odometry
from geometry_msgs.msg import PoseStamped, PoseArray, Pose
from std_msgs.msg import String, Bool
from std_srvs.srv import Trigger
from nav2_msgs.action import NavigateThroughPoses
def _haversine(lat1, lon1, lat2, lon2) -> float:
R = 6_371_000.0
phi1, phi2 = math.radians(lat1), math.radians(lat2)
dphi = math.radians(lat2 - lat1)
dlam = math.radians(lon2 - lon1)
a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
def _latlon_to_enu(lat: float, lon: float, lat0: float, lon0: float):
"""Flat-earth ENU approximation (valid ~10 km)."""
dlat = math.radians(lat - lat0)
dlon = math.radians(lon - lon0)
cos0 = math.cos(math.radians(lat0))
x = dlon * cos0 * 6_371_000.0
y = dlat * 6_371_000.0
return x, y
def _heading_to_quat(heading_deg: float) -> tuple:
"""Convert heading (degrees, 0=North, CW) to ENU quaternion (0=East, CCW)."""
# In ENU: x=East, y=North. yaw=0 → facing East.
# Heading: 0=North, 90=East. Convert: yaw_enu = 90 - heading_deg
yaw = math.radians(90.0 - heading_deg)
return (0.0, 0.0, math.sin(yaw / 2), math.cos(yaw / 2)) # x, y, z, w
class RouteReplayerNode(Node):
def __init__(self):
super().__init__('route_replayer')
self.declare_parameter('save_dir', '/data/routes')
self.declare_parameter('route_name', 'route')
self.declare_parameter('replay_spacing_m', 3.0) # subsample spacing
self.declare_parameter('goal_tolerance_xy', 2.0) # GPS ±2.5m CEP
self.declare_parameter('replay_speed_factor', 1.0) # 1.0 = same speed as recorded
self.declare_parameter('map_frame', 'map')
self.declare_parameter('nav_timeout_s', 600.0) # 10 min
self._route: Optional[dict] = None # loaded route metadata
self._fix: Optional[NavSatFix] = None
self._odom: Optional[Odometry] = None
self._paused = False
self._active = False
self._nav_goal_handle = None
self._cb_group = ReentrantCallbackGroup()
# ── Subscriptions ────────────────────────────────────────────────────
self.create_subscription(NavSatFix, '/gps/fix', self._fix_cb, 10)
self.create_subscription(Odometry, '/odom', self._odom_cb, 10)
self.create_subscription(Bool, '/route/pause_cmd', self._pause_cb, 10)
# ── Publishers ───────────────────────────────────────────────────────
self._status_pub = self.create_publisher(String, '/route/replay_status', 10)
self._progress_pub = self.create_publisher(String, '/route/progress', 10)
# ── Services ─────────────────────────────────────────────────────────
self.create_service(Trigger, '/route/load', self._load_cb,
callback_group=self._cb_group)
self.create_service(Trigger, '/route/start_replay', self._start_cb,
callback_group=self._cb_group)
self.create_service(Trigger, '/route/pause', self._pause_svc_cb,
callback_group=self._cb_group)
self.create_service(Trigger, '/route/stop', self._stop_cb,
callback_group=self._cb_group)
# ── Nav2 action client ───────────────────────────────────────────────
self._nav_client = ActionClient(
self, NavigateThroughPoses, '/navigate_through_poses',
callback_group=self._cb_group,
)
# ── Progress timer ────────────────────────────────────────────────────
self.create_timer(2.0, self._publish_progress)
self._pub_status('idle')
self.get_logger().info('route_replayer: ready. Call /route/load then /route/start_replay.')
# ── Sensor callbacks ──────────────────────────────────────────────────────
def _fix_cb(self, msg: NavSatFix) -> None:
self._fix = msg
def _odom_cb(self, msg: Odometry) -> None:
self._odom = msg
def _pause_cb(self, msg: Bool) -> None:
if msg.data and not self._paused:
self._pause_replay()
elif not msg.data and self._paused:
self._resume_replay()
# ── Service handlers ──────────────────────────────────────────────────────
def _load_cb(self, _req, response):
name = self.get_parameter('route_name').value
dir_ = self.get_parameter('save_dir').value
path = os.path.join(dir_, f'{name}.jsonl')
if not os.path.exists(path):
response.success = False
response.message = f'Route file not found: {path}'
return response
with open(path) as f:
self._route = json.load(f)
n = self._route.get('waypoint_count', len(self._route.get('waypoints', [])))
dist = self._route.get('total_distance_m', 0.0)
dur = self._route.get('duration_s', 0.0)
self.get_logger().info(
f'route_replayer: loaded "{name}"{n} waypoints, '
f'{dist:.0f} m, {dur:.0f} s'
)
self._pub_status(f'loaded:{name}:{n}_waypoints:{dist:.0f}m')
response.success = True
response.message = f'Loaded: {name} ({n} waypoints, {dist:.0f} m, {dur:.0f} s)'
return response
def _start_cb(self, _req, response):
if self._route is None:
response.success = False
response.message = 'No route loaded — call /route/load first'
return response
if self._active:
response.success = False
response.message = 'Replay already active'
return response
if self._fix is None:
response.success = False
response.message = 'No GPS fix'
return response
if self._fix.status.status < NavSatStatus.STATUS_FIX:
response.success = False
response.message = f'GPS quality too low: status={self._fix.status.status}'
return response
# Kick off async nav
self.create_timer(0.1, self._send_nav_goal, callback_group=self._cb_group)
response.success = True
name = self._route.get('name', '?')
response.message = f'Starting replay of "{name}"'
return response
def _pause_svc_cb(self, _req, response):
if self._paused:
self._resume_replay()
response.message = 'Replay resumed'
else:
self._pause_replay()
response.message = 'Replay paused'
response.success = True
return response
def _stop_cb(self, _req, response):
self._cancel_nav('stop requested')
response.success = True
response.message = 'Replay stopped'
return response
# ── Navigation ────────────────────────────────────────────────────────────
def _send_nav_goal(self) -> None:
"""One-shot timer: convert route waypoints → Nav2 goal."""
self.destroy_timer(list(self.timers)[-1])
if not self._nav_client.wait_for_server(timeout_sec=5.0):
self.get_logger().error('Nav2 not available')
self._pub_status('error:nav2_unavailable')
return
waypoints = self._route.get('waypoints', [])
if not waypoints:
self.get_logger().error('Route has no waypoints')
self._pub_status('error:empty_route')
return
# Use current GPS fix as datum for ENU origin
lat0 = self._fix.latitude
lon0 = self._fix.longitude
# Subsample by spacing
spacing = self.get_parameter('replay_spacing_m').value
sampled = self._subsample_waypoints(waypoints, spacing)
frame = self.get_parameter('map_frame').value
now = self.get_clock().now().to_msg()
goal = NavigateThroughPoses.Goal()
for wp in sampled:
x, y = _latlon_to_enu(wp['lat'], wp['lon'], lat0, lon0)
qx, qy, qz, qw = _heading_to_quat(wp.get('heading_deg', 0.0))
ps = PoseStamped()
ps.header.stamp = now
ps.header.frame_id = frame
ps.pose.position.x = x
ps.pose.position.y = y
ps.pose.orientation.x = qx
ps.pose.orientation.y = qy
ps.pose.orientation.z = qz
ps.pose.orientation.w = qw
goal.poses.append(ps)
name = self._route.get('name', '?')
self.get_logger().info(
f'route_replayer: sending {len(goal.poses)} waypoints to Nav2 '
f'(from {len(waypoints)} recorded, datum={lat0:.6f},{lon0:.6f})'
)
self._pub_status(f'replaying:{name}:{len(goal.poses)}_waypoints')
self._active = True
self._paused = False
future = self._nav_client.send_goal_async(
goal,
feedback_callback=self._nav_feedback_cb,
)
future.add_done_callback(self._nav_goal_response_cb)
def _nav_goal_response_cb(self, future) -> None:
self._nav_goal_handle = future.result()
if not self._nav_goal_handle.accepted:
self.get_logger().error('Nav2 rejected replay goal')
self._active = False
self._pub_status('error:goal_rejected')
return
result_future = self._nav_goal_handle.get_result_async()
result_future.add_done_callback(self._nav_result_cb)
def _nav_feedback_cb(self, feedback) -> None:
fb = feedback.feedback
remaining = getattr(fb, 'number_of_poses_remaining', '?')
dist = getattr(fb, 'distance_remaining', None)
if dist is not None:
self.get_logger().info(
f'replay: {remaining} waypoints remaining, {dist:.1f} m to go',
throttle_duration_sec=10.0,
)
def _nav_result_cb(self, future) -> None:
self._active = False
result = future.result()
name = self._route.get('name', '?') if self._route else '?'
if result.status == 4:
self.get_logger().info(f'Replay of "{name}" completed successfully')
self._pub_status(f'completed:{name}')
else:
self.get_logger().warn(f'Replay ended with status {result.status}')
self._pub_status(f'failed:{name}:status={result.status}')
def _pause_replay(self) -> None:
if self._nav_goal_handle and self._active:
self._nav_goal_handle.cancel_goal_async()
self._paused = True
self._active = False
self._pub_status('paused')
self.get_logger().info('route_replayer: paused')
def _resume_replay(self) -> None:
"""Re-issue the nav goal from current position (simple re-start)."""
self._paused = False
if self._route:
self.create_timer(0.1, self._send_nav_goal, callback_group=self._cb_group)
self._pub_status('resuming')
self.get_logger().info('route_replayer: resuming')
def _cancel_nav(self, reason: str) -> None:
if self._nav_goal_handle and self._active:
self._nav_goal_handle.cancel_goal_async()
self._active = False
self._paused = False
self._pub_status(f'stopped:{reason}')
self.get_logger().info(f'route_replayer: stopped — {reason}')
# ── Helpers ───────────────────────────────────────────────────────────────
def _subsample_waypoints(self, waypoints: list, spacing_m: float) -> list:
"""Keep waypoints at least spacing_m apart. Always keep first + last."""
if not waypoints:
return []
out = [waypoints[0]]
for wp in waypoints[1:-1]:
prev = out[-1]
d = _haversine(wp['lat'], wp['lon'], prev['lat'], prev['lon'])
if d >= spacing_m:
out.append(wp)
if len(waypoints) > 1:
out.append(waypoints[-1])
return out
def _publish_progress(self) -> None:
if not self._active or self._route is None:
return
name = self._route.get('name', '?')
n = self._route.get('waypoint_count', '?')
dist = self._route.get('total_distance_m', 0.0)
msg = String()
msg.data = f'replaying:{name} total={n}wps dist={dist:.0f}m'
self._progress_pub.publish(msg)
def _pub_status(self, s: str) -> None:
msg = String()
msg.data = s
self._status_pub.publish(msg)
def main(args=None):
rclpy.init(args=args)
node = RouteReplayerNode()
executor = MultiThreadedExecutor()
executor.add_node(node)
try:
executor.spin()
except KeyboardInterrupt:
pass
finally:
node.destroy_node()
rclpy.try_shutdown()
if __name__ == '__main__':
main()

@@ -0,0 +1,4 @@
[develop]
script_dir=$base/lib/saltybot_routes
[install]
install_scripts=$base/lib/saltybot_routes

@@ -0,0 +1,34 @@
import os
from glob import glob

from setuptools import find_packages, setup

package_name = 'saltybot_routes'

setup(
    name=package_name,
    version='0.1.0',
    packages=find_packages(exclude=['test']),
    data_files=[
        ('share/ament_index/resource_index/packages',
         ['resource/' + package_name]),
        ('share/' + package_name, ['package.xml']),
        (os.path.join('share', package_name, 'launch'),
         glob('launch/*.py')),
        (os.path.join('share', package_name, 'config'),
         glob('config/*.yaml')),
    ],
    install_requires=['setuptools'],
    zip_safe=True,
    maintainer='seb',
    maintainer_email='seb@saltylab.io',
    description='Route recording and autonomous replay — ride once, replay forever.',
    license='MIT',
    tests_require=['pytest'],
    entry_points={
        'console_scripts': [
            'route_recorder_node = saltybot_routes.route_recorder_node:main',
            'route_replayer_node = saltybot_routes.route_replayer_node:main',
            'route_manager_node = saltybot_routes.route_manager_node:main',
        ],
    },
)

@@ -0,0 +1,64 @@
# segmentation_params.yaml — SaltyBot sidewalk semantic segmentation
#
# Run with:
# ros2 launch saltybot_segmentation sidewalk_segmentation.launch.py
#
# Build TRT engine first (run ONCE on the Jetson):
# python3 /opt/ros/humble/share/saltybot_segmentation/scripts/build_engine.py
# ── Model paths ───────────────────────────────────────────────────────────────
# Priority: TRT engine > ONNX > error (no inference)
#
# Build engine:
# python3 build_engine.py --model bisenetv2
# → /mnt/nvme/saltybot/models/bisenetv2_cityscapes_512x256.engine
engine_path: /mnt/nvme/saltybot/models/bisenetv2_cityscapes_512x256.engine
onnx_path: /mnt/nvme/saltybot/models/bisenetv2_cityscapes_512x256.onnx
# ── Model input size ──────────────────────────────────────────────────────────
# Must match the engine. 512×256 maintains Cityscapes 2:1 aspect ratio.
# RealSense 640×480 is letterboxed to 512×256 (pillarboxed, 32px each side).
input_width: 512
input_height: 256
# ── Processing rate ───────────────────────────────────────────────────────────
# process_every_n: run inference on 1 in N frames.
# 1 = every frame (~15fps if latency budget allows)
# 2 = every 2nd frame (recommended — RealSense @ 30fps → ~15fps effective)
# 3 = every 3rd frame (~10fps — use if GPU is constrained by other tasks)
#
# RealSense color at 15fps → process_every_n:=1 gives ~15fps seg output.
process_every_n: 1
# ── Debug image ───────────────────────────────────────────────────────────────
# Set true to publish /segmentation/debug_image (BGR colour overlay).
# Adds ~1ms/frame overhead. Disable in production.
publish_debug_image: false
# ── Traversability overrides ──────────────────────────────────────────────────
# unknown_as_obstacle: true → sky / unlabelled pixels → lethal (100).
# Use in dense urban environments where unknowns are likely vertical surfaces.
# Use false (default) in open spaces to allow exploration.
unknown_as_obstacle: false
# ── Costmap projection ────────────────────────────────────────────────────────
# costmap_resolution: metres per cell in the output OccupancyGrid.
# Match or exceed Nav2 local_costmap resolution (default 0.05m).
costmap_resolution: 0.05 # metres/cell
# costmap_range_m: forward look-ahead distance for ground projection.
# 5.0m covers 100 cells at 0.05m resolution.
# Increase for faster outdoor navigation; decrease for tight indoor spaces.
costmap_range_m: 5.0 # metres
# ── Camera geometry ───────────────────────────────────────────────────────────
# These must match the RealSense mount position on the robot.
# Used for inverse-perspective ground projection.
#
# camera_height_m: height of RealSense optical centre above ground (metres).
# saltybot: D435i mounted at ~0.15m above base_link origin.
camera_height_m: 0.15 # metres
# camera_pitch_deg: downward tilt of the camera (degrees, positive = tilted down).
# 0 = horizontal. Typical outdoor deployment: 5–10° downward.
camera_pitch_deg: 5.0 # degrees
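
These two parameters drive a flat-ground pinhole projection: each image row maps to a forward ground distance through the camera height and the ray's depression angle. A minimal sketch of that geometry (illustrative only, not the node's actual implementation; the 42° vertical FOV and 256-row image height are assumed values):

```python
import math

def ground_distance_m(pixel_row: int, image_h: int = 256,
                      camera_height_m: float = 0.15,
                      camera_pitch_deg: float = 5.0,
                      vfov_deg: float = 42.0) -> float:
    """Forward distance to the ground point seen at pixel_row (flat ground).

    Rows below the image centre look further downward: the ray's depression
    angle is the camera pitch plus the per-row angular offset.
    """
    offset_deg = (pixel_row - image_h / 2) * (vfov_deg / image_h)
    depression_deg = camera_pitch_deg + offset_deg
    if depression_deg <= 0.0:
        return float('inf')  # ray at or above the horizon never hits flat ground
    return camera_height_m / math.tan(math.radians(depression_deg))
```

With these numbers the image centre row (depression = pitch = 5°) lands about 1.7 m ahead, which is why a larger pitch shortens the usable look-ahead and a smaller pitch extends it.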

@@ -0,0 +1,268 @@
# Sidewalk Segmentation — Training & Deployment Guide
## Overview
SaltyBot uses **BiSeNetV2** (default) or **DDRNet-23-slim** for real-time semantic
segmentation, pre-trained on Cityscapes and optionally fine-tuned on site-specific data.
The model runs at **>15fps on Orin Nano Super** via TensorRT FP16 at 512×256 input,
publishing per-pixel traversability costs to Nav2.
---
## Traversability Classes
| Class | ID | OccupancyGrid | Description |
|-------|----|---------------|-------------|
| Sidewalk | 0 | 0 (free) | Preferred navigation surface |
| Grass/vegetation | 1 | 50 (medium) | Traversable but non-preferred |
| Road | 2 | 90 (high cost) | Avoid but can cross |
| Obstacle | 3 | 100 (lethal) | Person, car, building, fence |
| Unknown | 4 | -1 (unknown) | Sky, unlabelled |
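
Read as code, the table is a lookup from traversability class ID to int8 occupancy cost. A minimal numpy sketch (the real mapping lives in `seg_utils.py`'s `traversability_to_costmap()`; the names and signature below are illustrative):

```python
import numpy as np

# Traversability class IDs, matching the table rows above.
SIDEWALK, VEGETATION, ROAD, OBSTACLE, UNKNOWN = 0, 1, 2, 3, 4

# Per-class OccupancyGrid cost: 0 free, 50 medium, 90 high, 100 lethal, -1 unknown.
CLASS_TO_COST = np.array([0, 50, 90, 100, -1], dtype=np.int8)

def traversability_to_costmap(class_mask: np.ndarray) -> np.ndarray:
    """Map an HxW mask of traversability class IDs to int8 Nav2 costs."""
    return CLASS_TO_COST[class_mask]

mask = np.array([[SIDEWALK, ROAD],
                 [OBSTACLE, UNKNOWN]], dtype=np.uint8)
costs = traversability_to_costmap(mask)  # [[0, 90], [100, -1]] as int8
```

The vectorised fancy-index lookup keeps the per-frame cost conversion to a single numpy operation, which matters at 15 fps.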
---
## Quick Start — Pretrained Cityscapes Model
No training required for standard sidewalks in Western cities.
```bash
# 1. Build the TRT engine (once, on Orin):
python3 /opt/ros/humble/share/saltybot_segmentation/scripts/build_engine.py
# 2. Launch:
ros2 launch saltybot_segmentation sidewalk_segmentation.launch.py
# 3. Verify:
ros2 topic hz /segmentation/mask # expect ~15Hz
ros2 topic hz /segmentation/costmap # expect ~15Hz
ros2 run rviz2 rviz2 # add OccupancyGrid, topic=/segmentation/costmap
```
---
## Model Benchmarks — Cityscapes Validation Set
| Model | mIoU | Sidewalk IoU | Road IoU | FPS (Orin FP16) | Engine size |
|-------|------|--------------|----------|-----------------|-------------|
| BiSeNetV2 | 72.6% | 82.1% | 97.8% | ~50 fps | ~11 MB |
| DDRNet-23-slim | 79.5% | 84.3% | 98.1% | ~40 fps | ~18 MB |
Both exceed the **>15fps target** with headroom for additional ROS2 overhead.
BiSeNetV2 is the default — faster, smaller, sufficient for traversability.
Use DDRNet if mIoU matters more than latency (e.g., complex European city centres).
---
## Fine-Tuning on Custom Site Data
### When is fine-tuning needed?
The Cityscapes-trained model works well for:
- European-style city sidewalks, curbs, roads
- Standard pedestrian infrastructure
Fine-tuning improves performance for:
- Gravel/dirt paths (not in Cityscapes)
- Unusual kerb styles or non-standard pavement markings
- Indoor+outdoor transitions (garage exits, building entrances)
- Non-Western road infrastructure
### Step 1 — Collect data (walk the route)
Record a ROS2 bag while walking the intended robot route:
```bash
# On Orin, record the front camera for 5–10 minutes of the target environment:
ros2 bag record /camera/color/image_raw -o sidewalk_route_2024-01
# Transfer to workstation for labelling:
scp -r jetson:/home/ubuntu/sidewalk_route_2024-01 ~/datasets/
```
### Step 2 — Extract frames
```bash
python3 fine_tune.py \
--extract-frames ~/datasets/sidewalk_route_2024-01/ \
--output-dir ~/datasets/sidewalk_raw/ \
--every-n 5 # 1 frame per 5 = ~6fps from 30fps bag
```
This extracts ~200–400 frames from a 5-minute bag. You need to label **50–100 frames** minimum.
### Step 3 — Label with LabelMe
```bash
pip install labelme
labelme ~/datasets/sidewalk_raw/
```
**Class names to use** (must be exact):
| What you see | LabelMe label |
|--------------|---------------|
| Footpath/pavement | `sidewalk` |
| Road/tarmac | `road` |
| Grass/lawn/verge | `vegetation` |
| Gravel path | `terrain` |
| Person | `person` |
| Car/vehicle | `car` |
| Building/wall | `building` |
| Fence/gate | `fence` |
**Labelling tips:**
- Use **polygon** tool for precise boundaries
- Focus on the ground plane (lower 60% of image) — that's what the costmap uses
- 50 well-labelled frames beats 200 rushed ones
- Vary conditions: sunny, overcast, morning, evening
### Step 4 — Convert labels to masks
```bash
python3 fine_tune.py \
--convert-labels ~/datasets/sidewalk_raw/ \
--output-dir ~/datasets/sidewalk_masks/
```
Output: `<frame>_mask.png` per frame — 8-bit PNG with Cityscapes class IDs.
Unlabelled pixels are set to 255 (ignored during training).
### Step 5 — Fine-tune
```bash
# On Orin (or workstation with CUDA):
python3 fine_tune.py --train \
--images ~/datasets/sidewalk_raw/ \
--labels ~/datasets/sidewalk_masks/ \
--weights /mnt/nvme/saltybot/models/bisenetv2_cityscapes.pth \
--output /mnt/nvme/saltybot/models/bisenetv2_custom.pth \
--epochs 20 \
--lr 1e-4
```
Expected training time:
- 50 images, 20 epochs, Orin: ~15 minutes
- 100 images, 20 epochs, Orin: ~25 minutes
### Step 6 — Evaluate mIoU
```bash
python3 fine_tune.py --eval \
--images ~/datasets/sidewalk_raw/ \
--labels ~/datasets/sidewalk_masks/ \
--weights /mnt/nvme/saltybot/models/bisenetv2_custom.pth
```
Target mIoU on custom classes: >70% on sidewalk/road/obstacle.
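
The mIoU figure reported by the eval step can be reproduced from per-class intersection and union counts. A minimal numpy sketch, assuming the 255 ignore-index convention from Step 4 (this is the metric only, not the `fine_tune.py` source):

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int,
             ignore_index: int = 255) -> float:
    """Mean intersection-over-union, skipping ignored pixels and absent classes."""
    valid = gt != ignore_index
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:          # class absent from both pred and gt: skip it
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```

A perfect prediction scores 1.0; classes that never appear in either the labels or the prediction are excluded so a sparse validation set doesn't drag the mean down.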
### Step 7 — Build new TRT engine
```bash
python3 build_engine.py \
--weights /mnt/nvme/saltybot/models/bisenetv2_custom.pth \
--engine /mnt/nvme/saltybot/models/bisenetv2_custom_512x256.engine
```
Update `segmentation_params.yaml`:
```yaml
engine_path: /mnt/nvme/saltybot/models/bisenetv2_custom_512x256.engine
```
---
## Nav2 Integration — SegmentationCostmapLayer
Add to `nav2_params.yaml`:
```yaml
local_costmap:
local_costmap:
ros__parameters:
plugins:
- "voxel_layer"
- "inflation_layer"
- "segmentation_layer" # ← add this
segmentation_layer:
plugin: "saltybot_segmentation_costmap::SegmentationCostmapLayer"
enabled: true
topic: /segmentation/costmap
combination_method: max # max | override | min
```
**combination_method:**
- `max` (default) — keeps the worst-case cost between existing costmap and segmentation.
Most conservative; prevents Nav2 from overriding obstacle detections.
- `override` — segmentation completely replaces existing cell costs.
Use when you trust the camera more than other sensors.
- `min` — keeps the most permissive cost.
Use for exploratory navigation in open environments.
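
Elementwise, the three policies reduce to the following (an illustrative Python sketch of the semantics, not the plugin's C++ source; cells where the segmentation grid reads -1 carry no information and are left untouched):

```python
import numpy as np

def combine(existing: np.ndarray, seg: np.ndarray, method: str) -> np.ndarray:
    """Combine two int8 cost grids; -1 in `seg` means 'no information'."""
    known = seg != -1
    out = existing.copy()
    if method == 'override':
        out[known] = seg[known]                               # camera wins where it has data
    elif method == 'max':
        out[known] = np.maximum(existing[known], seg[known])  # keep the worst case
    elif method == 'min':
        out[known] = np.minimum(existing[known], seg[known])  # keep the most permissive
    else:
        raise ValueError(f'unknown combination_method: {method}')
    return out
```

Under `max`, a lethal cell from the voxel layer can never be downgraded by the camera, which is why it is the safe default.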
---
## Performance Tuning
### Too slow (<15fps)
1. **Increase process_every_n** (e.g., `process_every_n: 2` → effective 15fps from 30fps camera)
2. **Reduce input resolution** — edit `build_engine.py` to use 384×192 (2.2× faster)
3. **Ensure nvpmodel is in MAXN mode**: `sudo nvpmodel -m 0 && sudo jetson_clocks`
4. Check GPU is not throttled: `jtop` → GPU frequency should be 1024MHz
### False road detections (sidewalk near road)
- Lower `camera_pitch_deg` to look further ahead
- Enable `unknown_as_obstacle: true` to be more cautious
- Fine-tune with site-specific data (Step 5)
### Costmap looks noisy
- Increase `costmap_resolution` (0.1m cells reduce noise)
- Reduce `costmap_range_m` to 3.0m (projection less accurate at far range)
- Apply temporal smoothing in Nav2 inflation layer
---
## Datasets
### Cityscapes (primary pre-training)
- **2975 training / 500 validation** finely annotated images
- 19 semantic classes (roads, sidewalks, people, vehicles, etc.)
- License: Cityscapes Terms of Use (non-commercial research)
- Download: https://www.cityscapes-dataset.com/
- Required for training from scratch; BiSeNetV2 pretrained checkpoint (~25MB) available at:
https://github.com/CoinCheung/BiSeNet/releases
### Mapillary Vistas (supplementary)
- **18,000 training images** from diverse global street scenes
- 124 semantic classes (broader coverage than Cityscapes)
- Includes dirt paths, gravel, unusual sidewalk types
- License: CC BY-SA 4.0
- Download: https://www.mapillary.com/dataset/vistas
- Useful for mapping to our traversability classes in non-European environments
### Custom saltybot data
- Collected per-deployment via `fine_tune.py --extract-frames`
- 50–100 labelled frames is typical for 80%+ mIoU on specific routes
- Store in `/mnt/nvme/saltybot/training_data/`
---
## File Locations on Orin
```
/mnt/nvme/saltybot/
├── models/
│ ├── bisenetv2_cityscapes.pth ← downloaded pretrained weights
│ ├── bisenetv2_cityscapes_512x256.onnx ← exported ONNX (FP32)
│ ├── bisenetv2_cityscapes_512x256.engine ← TRT FP16 engine (built on Orin)
│ ├── bisenetv2_custom.pth ← fine-tuned weights (after step 5)
│ └── bisenetv2_custom_512x256.engine ← TRT FP16 engine (after step 7)
├── training_data/
│ ├── raw/ ← extracted JPEG frames
│ └── labels/ ← LabelMe JSON + converted PNG masks
└── rosbags/
└── sidewalk_route_*/ ← recorded ROS2 bags for labelling
```


@@ -0,0 +1,58 @@
"""
sidewalk_segmentation.launch.py Launch semantic sidewalk segmentation node.
Usage:
ros2 launch saltybot_segmentation sidewalk_segmentation.launch.py
ros2 launch saltybot_segmentation sidewalk_segmentation.launch.py publish_debug_image:=true
ros2 launch saltybot_segmentation sidewalk_segmentation.launch.py engine_path:=/custom/model.engine
"""
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
def generate_launch_description():
pkg = get_package_share_directory("saltybot_segmentation")
cfg = os.path.join(pkg, "config", "segmentation_params.yaml")
return LaunchDescription([
DeclareLaunchArgument("engine_path",
default_value="/mnt/nvme/saltybot/models/bisenetv2_cityscapes_512x256.engine",
description="Path to TensorRT .engine file"),
DeclareLaunchArgument("onnx_path",
default_value="/mnt/nvme/saltybot/models/bisenetv2_cityscapes_512x256.onnx",
description="Path to ONNX fallback model"),
DeclareLaunchArgument("process_every_n", default_value="1",
description="Run inference every N frames (1=every frame)"),
DeclareLaunchArgument("publish_debug_image", default_value="false",
description="Publish colour-coded debug image on /segmentation/debug_image"),
DeclareLaunchArgument("unknown_as_obstacle", default_value="false",
description="Treat unknown pixels as lethal obstacles"),
DeclareLaunchArgument("costmap_range_m", default_value="5.0",
description="Forward look-ahead range for costmap projection (m)"),
DeclareLaunchArgument("camera_pitch_deg", default_value="5.0",
description="Camera downward pitch angle (degrees)"),
Node(
package="saltybot_segmentation",
executable="sidewalk_seg",
name="sidewalk_seg",
output="screen",
parameters=[
cfg,
{
"engine_path": LaunchConfiguration("engine_path"),
"onnx_path": LaunchConfiguration("onnx_path"),
"process_every_n": LaunchConfiguration("process_every_n"),
"publish_debug_image": LaunchConfiguration("publish_debug_image"),
"unknown_as_obstacle": LaunchConfiguration("unknown_as_obstacle"),
"costmap_range_m": LaunchConfiguration("costmap_range_m"),
"camera_pitch_deg": LaunchConfiguration("camera_pitch_deg"),
},
],
),
])


@@ -0,0 +1,32 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>saltybot_segmentation</name>
<version>0.1.0</version>
<description>
Semantic sidewalk segmentation for SaltyBot outdoor autonomous navigation.
BiSeNetV2 / DDRNet-23-slim with TensorRT FP16 on Orin Nano Super.
Publishes traversability mask + Nav2 OccupancyGrid costmap.
</description>
<maintainer email="seb@vayrette.com">seb</maintainer>
<license>MIT</license>
<buildtool_depend>ament_python</buildtool_depend>
<depend>rclpy</depend>
<depend>sensor_msgs</depend>
<depend>nav_msgs</depend>
<depend>std_msgs</depend>
<depend>cv_bridge</depend>
<depend>python3-numpy</depend>
<depend>python3-opencv</depend>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>pytest</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>


@@ -0,0 +1,273 @@
"""
seg_utils.py Pure helpers for semantic segmentation and traversability mapping.
No ROS2, TensorRT, or PyTorch imports fully testable without GPU or ROS2 install.
Cityscapes 19-class 5-class traversability mapping
Cityscapes ID Traversability class OccupancyGrid value
0 road ROAD 90 (high cost don't drive on road)
1 sidewalk SIDEWALK 0 (free preferred surface)
2 building OBSTACLE 100 (lethal)
3 wall OBSTACLE 100
4 fence OBSTACLE 100
5 pole OBSTACLE 100
6 traffic light OBSTACLE 100
7 traffic sign OBSTACLE 100
8 vegetation GRASS 50 (medium can traverse slowly)
9 terrain GRASS 50
10 sky UNKNOWN -1 (unknown)
11 person OBSTACLE 100 (lethal safety critical)
12 rider OBSTACLE 100
13 car OBSTACLE 100
14 truck OBSTACLE 100
15 bus OBSTACLE 100
16 train OBSTACLE 100
17 motorcycle OBSTACLE 100
18 bicycle OBSTACLE 100
Segmentation mask class IDs (published on /segmentation/mask)
TRAV_SIDEWALK = 0
TRAV_GRASS = 1
TRAV_ROAD = 2
TRAV_OBSTACLE = 3
TRAV_UNKNOWN = 4
"""
import numpy as np
# ── Traversability class constants ────────────────────────────────────────────
TRAV_SIDEWALK = 0
TRAV_GRASS = 1
TRAV_ROAD = 2
TRAV_OBSTACLE = 3
TRAV_UNKNOWN = 4
NUM_TRAV_CLASSES = 5
# ── OccupancyGrid cost values for each traversability class ───────────────────
# Nav2 OccupancyGrid: 0=free, 1-99=cost, 100=lethal, -1=unknown
TRAV_TO_COST = {
TRAV_SIDEWALK: 0, # free — preferred surface
TRAV_GRASS: 50, # medium cost — traversable but non-preferred
TRAV_ROAD: 90, # high cost — avoid but can cross
TRAV_OBSTACLE: 100, # lethal — never enter
TRAV_UNKNOWN: -1, # unknown — Nav2 treats as unknown cell
}
# ── Cityscapes 19-class → traversability lookup table ─────────────────────────
# Index = Cityscapes training ID (018)
_CITYSCAPES_TO_TRAV = np.array([
TRAV_ROAD, # 0 road
TRAV_SIDEWALK, # 1 sidewalk
TRAV_OBSTACLE, # 2 building
TRAV_OBSTACLE, # 3 wall
TRAV_OBSTACLE, # 4 fence
TRAV_OBSTACLE, # 5 pole
TRAV_OBSTACLE, # 6 traffic light
TRAV_OBSTACLE, # 7 traffic sign
TRAV_GRASS, # 8 vegetation
TRAV_GRASS, # 9 terrain
TRAV_UNKNOWN, # 10 sky
TRAV_OBSTACLE, # 11 person
TRAV_OBSTACLE, # 12 rider
TRAV_OBSTACLE, # 13 car
TRAV_OBSTACLE, # 14 truck
TRAV_OBSTACLE, # 15 bus
TRAV_OBSTACLE, # 16 train
TRAV_OBSTACLE, # 17 motorcycle
TRAV_OBSTACLE, # 18 bicycle
], dtype=np.uint8)
# ── Visualisation colour map (BGR, for cv2.imshow / debug image) ──────────────
# One colour per traversability class
TRAV_COLORMAP_BGR = np.array([
[128, 64, 128], # SIDEWALK — purple
[152, 251, 152], # GRASS — pale green
[128, 0, 128], # ROAD — dark magenta
[ 60, 20, 220], # OBSTACLE — red-ish
[ 0, 0, 0], # UNKNOWN — black
], dtype=np.uint8)
# ── Core mapping functions ────────────────────────────────────────────────────
def cityscapes_to_traversability(mask: np.ndarray) -> np.ndarray:
"""
Map a Cityscapes 19-class segmentation mask to traversability classes.
Parameters
----------
mask : np.ndarray, shape (H, W), dtype uint8
Cityscapes class IDs in range [0, 18]. Values outside range are
mapped to TRAV_UNKNOWN.
Returns
-------
trav_mask : np.ndarray, shape (H, W), dtype uint8
Traversability class IDs in range [0, 4].
"""
# Clamp out-of-range IDs to a safe default (unknown)
clipped = np.clip(mask.astype(np.int32), 0, len(_CITYSCAPES_TO_TRAV) - 1)
return _CITYSCAPES_TO_TRAV[clipped]
def traversability_to_costmap(
trav_mask: np.ndarray,
unknown_as_obstacle: bool = False,
) -> np.ndarray:
"""
Convert a traversability mask to Nav2 OccupancyGrid int8 cost values.
Parameters
----------
trav_mask : np.ndarray, shape (H, W), dtype uint8
Traversability class IDs (TRAV_* constants).
unknown_as_obstacle : bool
If True, TRAV_UNKNOWN cells are set to 100 (lethal) instead of -1.
Useful in dense urban environments where unknowns are likely walls.
Returns
-------
cost_map : np.ndarray, shape (H, W), dtype int8
OccupancyGrid cost values: 0=free, 1-99=cost, 100=lethal, -1=unknown.
"""
H, W = trav_mask.shape
cost_map = np.full((H, W), -1, dtype=np.int8)
for trav_class, cost in TRAV_TO_COST.items():
if trav_class == TRAV_UNKNOWN and unknown_as_obstacle:
cost = 100
cost_map[trav_mask == trav_class] = cost
return cost_map
def letterbox(
image: np.ndarray,
target_w: int,
target_h: int,
pad_value: int = 114,
) -> tuple:
"""
Letterbox-resize an image to (target_w, target_h) preserving aspect ratio.
Parameters
----------
image : np.ndarray, shape (H, W, C), dtype uint8
target_w, target_h : int
Target width and height.
pad_value : int
Padding fill value.
Returns
-------
(canvas, scale, pad_left, pad_top) : tuple
canvas : np.ndarray (target_h, target_w, C) resized + padded image
scale : float scale factor applied (min of w_scale, h_scale)
pad_left : int horizontal padding (pixels from left)
pad_top : int vertical padding (pixels from top)
"""
import cv2 # lazy import — keeps module importable without cv2 in tests
src_h, src_w = image.shape[:2]
scale = min(target_w / src_w, target_h / src_h)
new_w = int(round(src_w * scale))
new_h = int(round(src_h * scale))
resized = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
pad_left = (target_w - new_w) // 2
pad_top = (target_h - new_h) // 2
canvas = np.full((target_h, target_w, image.shape[2]), pad_value, dtype=np.uint8)
canvas[pad_top:pad_top + new_h, pad_left:pad_left + new_w] = resized
return canvas, scale, pad_left, pad_top
def unpad_mask(
mask: np.ndarray,
orig_h: int,
orig_w: int,
scale: float,
pad_left: int,
pad_top: int,
) -> np.ndarray:
"""
Remove letterbox padding from a segmentation mask and resize to original.
Parameters
----------
mask : np.ndarray, shape (target_h, target_w), dtype uint8
Segmentation output from model at letterboxed resolution.
orig_h, orig_w : int
Original image dimensions (before letterboxing).
scale, pad_left, pad_top : from letterbox()
Returns
-------
restored : np.ndarray, shape (orig_h, orig_w), dtype uint8
"""
import cv2
new_h = int(round(orig_h * scale))
new_w = int(round(orig_w * scale))
cropped = mask[pad_top:pad_top + new_h, pad_left:pad_left + new_w]
restored = cv2.resize(cropped, (orig_w, orig_h), interpolation=cv2.INTER_NEAREST)
return restored
def preprocess_for_inference(
image_bgr: np.ndarray,
input_w: int = 512,
input_h: int = 256,
) -> tuple:
"""
Preprocess a BGR image for BiSeNetV2 / DDRNet inference.
Applies letterboxing, BGRRGB conversion, ImageNet normalisation,
HWCNCHW layout, and float32 conversion.
Returns
-------
(blob, scale, pad_left, pad_top) : tuple
blob : np.ndarray (1, 3, input_h, input_w) float32 model input
scale, pad_left, pad_top : for unpad_mask()
"""
import cv2
canvas, scale, pad_left, pad_top = letterbox(image_bgr, input_w, input_h)
# BGR → RGB
rgb = cv2.cvtColor(canvas, cv2.COLOR_BGR2RGB)
# ImageNet normalisation
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
img = rgb.astype(np.float32) / 255.0
img = (img - mean) / std
# HWC → NCHW
blob = np.ascontiguousarray(img.transpose(2, 0, 1)[np.newaxis])
return blob, scale, pad_left, pad_top
def colorise_traversability(trav_mask: np.ndarray) -> np.ndarray:
"""
Convert traversability mask to an RGB visualisation image.
Returns
-------
vis : np.ndarray (H, W, 3) uint8 BGR suitable for cv2.imshow / ROS Image
"""
clipped = np.clip(trav_mask.astype(np.int32), 0, NUM_TRAV_CLASSES - 1)
return TRAV_COLORMAP_BGR[clipped]
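As a quick end-to-end check of the mapping tables, the two lookups compose as follows. This is a condensed restatement of `cityscapes_to_traversability` followed by `traversability_to_costmap` (the `_C2T`/`_COST` arrays below are inlined for illustration, not imported from the module):

```python
import numpy as np

# Condensed restatement of the seg_utils.py tables:
# Cityscapes ID → TRAV_* class, then TRAV_* class → Nav2 cost
_C2T = np.array([2, 0, 3, 3, 3, 3, 3, 3, 1, 1, 4, 3, 3, 3, 3, 3, 3, 3, 3],
                dtype=np.uint8)
_COST = np.array([0, 50, 90, 100, -1], dtype=np.int8)

# Fake a 2×3 prediction: road, sidewalk, person / vegetation, sky, car
pred = np.array([[0, 1, 11],
                 [8, 10, 13]], dtype=np.uint8)

trav = _C2T[pred]      # traversability class per pixel
cost = _COST[trav]     # Nav2 OccupancyGrid cost per pixel

print(trav.tolist())   # [[2, 0, 3], [1, 4, 3]]
print(cost.tolist())   # [[90, 0, 100], [50, -1, 100]]
```

The sky pixel stays -1 (unknown) unless `unknown_as_obstacle=True` is set, in which case it would become 100.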


@@ -0,0 +1,437 @@
"""
sidewalk_seg_node.py Semantic sidewalk segmentation node for SaltyBot.
Subscribes:
/camera/color/image_raw (sensor_msgs/Image) RealSense front RGB
Publishes:
/segmentation/mask (sensor_msgs/Image, mono8) traversability class per pixel
/segmentation/costmap (nav_msgs/OccupancyGrid) Nav2 traversability scores
/segmentation/debug_image (sensor_msgs/Image, bgr8) colour-coded visualisation
Model backend (auto-selected, priority order)
1. TensorRT FP16 engine (.engine file) fastest, Orin GPU
2. ONNX Runtime (CUDA provider) fallback, still GPU
3. ONNX Runtime (CPU provider) last resort
Build TRT engine:
python3 /opt/ros/humble/share/saltybot_segmentation/scripts/build_engine.py
Supported model architectures
BiSeNetV2 Cityscapes 72.6 mIoU, ~50fps @ 512×256 on Orin
DDRNet-23 Cityscapes 79.5 mIoU, ~40fps @ 512×256 on Orin
Performance target: >15fps at 512×256 on Jetson Orin Nano Super.
Input: 512×256 RGB (letterboxed from 640×480 RealSense color)
Output: 512×256 per-pixel traversability class ID (uint8)
"""
import threading
import time
import cv2
import numpy as np
import rclpy
from cv_bridge import CvBridge
from nav_msgs.msg import OccupancyGrid, MapMetaData
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy, DurabilityPolicy
from sensor_msgs.msg import Image
from std_msgs.msg import Header
from saltybot_segmentation.seg_utils import (
cityscapes_to_traversability,
traversability_to_costmap,
preprocess_for_inference,
unpad_mask,
colorise_traversability,
)
# ── TensorRT backend ──────────────────────────────────────────────────────────
try:
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit # noqa: F401 — initialises CUDA context
_TRT_AVAILABLE = True
except ImportError:
_TRT_AVAILABLE = False
class _TRTBackend:
"""TensorRT FP16 inference backend."""
def __init__(self, engine_path: str, logger):
self._logger = logger
trt_logger = trt.Logger(trt.Logger.WARNING)
with open(engine_path, "rb") as f:
serialized = f.read()
runtime = trt.Runtime(trt_logger)
self._engine = runtime.deserialize_cuda_engine(serialized)
self._context = self._engine.create_execution_context()
self._stream = cuda.Stream()
# Allocate host + device buffers
self._bindings = []
self._host_in = None
self._dev_in = None
self._host_out = None
self._dev_out = None
for i in range(self._engine.num_bindings):
shape = self._engine.get_binding_shape(i)
size = int(np.prod(shape))
dtype = trt.nptype(self._engine.get_binding_dtype(i))
host_buf = cuda.pagelocked_empty(size, dtype)
dev_buf = cuda.mem_alloc(host_buf.nbytes)
self._bindings.append(int(dev_buf))
if self._engine.binding_is_input(i):
self._host_in, self._dev_in = host_buf, dev_buf
self._in_shape = shape
else:
self._host_out, self._dev_out = host_buf, dev_buf
self._out_shape = shape
logger.info(f"TRT engine loaded: in={self._in_shape} out={self._out_shape}")
def infer(self, blob: np.ndarray) -> np.ndarray:
np.copyto(self._host_in, blob.ravel())
cuda.memcpy_htod_async(self._dev_in, self._host_in, self._stream)
self._context.execute_async_v2(self._bindings, self._stream.handle)
cuda.memcpy_dtoh_async(self._host_out, self._dev_out, self._stream)
self._stream.synchronize()
return self._host_out.reshape(self._out_shape)
# ── ONNX Runtime backend ──────────────────────────────────────────────────────
try:
import onnxruntime as ort
_ONNX_AVAILABLE = True
except ImportError:
_ONNX_AVAILABLE = False
class _ONNXBackend:
"""ONNX Runtime inference backend (CUDA or CPU)."""
def __init__(self, onnx_path: str, logger):
providers = []
if _ONNX_AVAILABLE:
available = ort.get_available_providers()
if "CUDAExecutionProvider" in available:
providers.append("CUDAExecutionProvider")
logger.info("ONNX Runtime: using CUDA provider")
else:
logger.warn("ONNX Runtime: CUDA not available, falling back to CPU")
providers.append("CPUExecutionProvider")
self._session = ort.InferenceSession(onnx_path, providers=providers)
self._input_name = self._session.get_inputs()[0].name
self._output_name = self._session.get_outputs()[0].name
logger.info(f"ONNX model loaded: {onnx_path}")
def infer(self, blob: np.ndarray) -> np.ndarray:
return self._session.run([self._output_name], {self._input_name: blob})[0]
# ── ROS2 Node ─────────────────────────────────────────────────────────────────
class SidewalkSegNode(Node):
def __init__(self):
super().__init__("sidewalk_seg")
# ── Parameters ────────────────────────────────────────────────────────
self.declare_parameter("engine_path",
"/mnt/nvme/saltybot/models/bisenetv2_cityscapes_512x256.engine")
self.declare_parameter("onnx_path",
"/mnt/nvme/saltybot/models/bisenetv2_cityscapes_512x256.onnx")
self.declare_parameter("input_width", 512)
self.declare_parameter("input_height", 256)
self.declare_parameter("process_every_n", 2) # skip frames to save GPU
self.declare_parameter("publish_debug_image", False)
self.declare_parameter("unknown_as_obstacle", False)
self.declare_parameter("costmap_resolution", 0.05) # metres/cell
self.declare_parameter("costmap_range_m", 5.0) # forward projection range
self.declare_parameter("camera_height_m", 0.15) # RealSense mount height
self.declare_parameter("camera_pitch_deg", 0.0) # tilt angle
self._p = self._load_params()
# ── Backend selection (TRT preferred) ──────────────────────────────────
self._backend = None
self._init_backend()
# ── Runtime state ─────────────────────────────────────────────────────
self._bridge = CvBridge()
self._frame_count = 0
self._last_mask = None
self._last_mask_lock = threading.Lock()
self._t_infer = 0.0
# ── QoS: sensor data (best-effort, keep-last=1) ────────────────────────
sensor_qos = QoSProfile(
reliability=ReliabilityPolicy.BEST_EFFORT,
history=HistoryPolicy.KEEP_LAST,
depth=1,
)
# Transient local for costmap (Nav2 expects this)
costmap_qos = QoSProfile(
reliability=ReliabilityPolicy.RELIABLE,
history=HistoryPolicy.KEEP_LAST,
depth=1,
durability=DurabilityPolicy.TRANSIENT_LOCAL,
)
# ── Subscriptions ─────────────────────────────────────────────────────
self.create_subscription(
Image, "/camera/color/image_raw", self._image_cb, sensor_qos)
# ── Publishers ────────────────────────────────────────────────────────
self._mask_pub = self.create_publisher(Image, "/segmentation/mask", 10)
self._cost_pub = self.create_publisher(
OccupancyGrid, "/segmentation/costmap", costmap_qos)
self._debug_pub = self.create_publisher(Image, "/segmentation/debug_image", 1)
self.get_logger().info(
f"SidewalkSeg ready backend={'trt' if isinstance(self._backend, _TRTBackend) else 'onnx'} "
f"input={self._p['input_width']}x{self._p['input_height']} "
f"process_every={self._p['process_every_n']} frames"
)
# ── Helpers ────────────────────────────────────────────────────────────────
def _load_params(self) -> dict:
return {
"engine_path": self.get_parameter("engine_path").value,
"onnx_path": self.get_parameter("onnx_path").value,
"input_width": self.get_parameter("input_width").value,
"input_height": self.get_parameter("input_height").value,
"process_every_n": self.get_parameter("process_every_n").value,
"publish_debug": self.get_parameter("publish_debug_image").value,
"unknown_as_obstacle":self.get_parameter("unknown_as_obstacle").value,
"costmap_resolution": self.get_parameter("costmap_resolution").value,
"costmap_range_m": self.get_parameter("costmap_range_m").value,
"camera_height_m": self.get_parameter("camera_height_m").value,
"camera_pitch_deg": self.get_parameter("camera_pitch_deg").value,
}
def _init_backend(self):
p = self._p
if _TRT_AVAILABLE:
import os
if os.path.exists(p["engine_path"]):
try:
self._backend = _TRTBackend(p["engine_path"], self.get_logger())
return
except Exception as e:
self.get_logger().warn(f"TRT load failed: {e} — trying ONNX")
if _ONNX_AVAILABLE:
import os
if os.path.exists(p["onnx_path"]):
try:
self._backend = _ONNXBackend(p["onnx_path"], self.get_logger())
return
except Exception as e:
self.get_logger().warn(f"ONNX load failed: {e}")
self.get_logger().warn(
"No inference backend available — node will publish empty masks. "
"Run build_engine.py to create the TRT engine."
)
# ── Image callback ─────────────────────────────────────────────────────────
def _image_cb(self, msg: Image):
self._frame_count += 1
if self._frame_count % self._p["process_every_n"] != 0:
return
try:
bgr = self._bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
except Exception as e:
self.get_logger().warn(f"cv_bridge decode error: {e}")
return
orig_h, orig_w = bgr.shape[:2]
iw = self._p["input_width"]
ih = self._p["input_height"]
# ── Inference ─────────────────────────────────────────────────────────
if self._backend is not None:
try:
t0 = time.monotonic()
blob, scale, pad_l, pad_t = preprocess_for_inference(bgr, iw, ih)
raw = self._backend.infer(blob)
self._t_infer = time.monotonic() - t0
# raw shape: (1, num_classes, H, W) or (1, H, W)
if raw.ndim == 4:
class_mask = raw[0].argmax(axis=0).astype(np.uint8)
else:
class_mask = raw[0].astype(np.uint8)
# Unpad + restore to original resolution
class_mask_full = unpad_mask(
class_mask, orig_h, orig_w, scale, pad_l, pad_t)
trav_mask = cityscapes_to_traversability(class_mask_full)
except Exception as e:
self.get_logger().warn(
f"Inference error: {e}", throttle_duration_sec=5.0)
trav_mask = np.full((orig_h, orig_w), 4, dtype=np.uint8) # all unknown
else:
trav_mask = np.full((orig_h, orig_w), 4, dtype=np.uint8) # all unknown
with self._last_mask_lock:
self._last_mask = trav_mask
stamp = msg.header.stamp
# ── Publish segmentation mask ──────────────────────────────────────────
mask_msg = self._bridge.cv2_to_imgmsg(trav_mask, encoding="mono8")
mask_msg.header.stamp = stamp
mask_msg.header.frame_id = "camera_color_optical_frame"
self._mask_pub.publish(mask_msg)
# ── Publish costmap ────────────────────────────────────────────────────
costmap_msg = self._build_costmap(trav_mask, stamp)
self._cost_pub.publish(costmap_msg)
# ── Debug visualisation ────────────────────────────────────────────────
if self._p["publish_debug"]:
vis = colorise_traversability(trav_mask)
debug_msg = self._bridge.cv2_to_imgmsg(vis, encoding="bgr8")
debug_msg.header.stamp = stamp
debug_msg.header.frame_id = "camera_color_optical_frame"
self._debug_pub.publish(debug_msg)
if self._frame_count % 30 == 0:
fps_str = f"{1.0/self._t_infer:.1f} fps" if self._t_infer > 0 else "no inference"
self.get_logger().debug(
f"Seg frame {self._frame_count}: {fps_str}",
throttle_duration_sec=5.0,
)
# ── Costmap projection ─────────────────────────────────────────────────────
def _build_costmap(self, trav_mask: np.ndarray, stamp) -> OccupancyGrid:
"""
Project the lower portion of the segmentation mask into a flat
OccupancyGrid in the base_link frame.
The projection uses a simple inverse-perspective ground model:
- Only pixels in the lower half of the image are projected
(upper half is unlikely to be ground plane in front of robot)
- Each pixel row maps to a forward distance using pinhole geometry
with the camera mount height and pitch angle.
- Columns map to lateral (Y) position via horizontal FOV.
The resulting OccupancyGrid covers [0, costmap_range_m] forward
and [-costmap_range_m/2, costmap_range_m/2] laterally in base_link.
"""
p = self._p
res = p["costmap_resolution"] # metres per cell
rng = p["costmap_range_m"] # look-ahead range
cam_h = p["camera_height_m"]
cam_pit = np.deg2rad(p["camera_pitch_deg"])
# Costmap grid dimensions
grid_h = int(rng / res) # forward cells (X direction)
grid_w = int(rng / res) # lateral cells (Y direction)
grid_cx = grid_w // 2 # lateral centre cell
cost_map_int8 = traversability_to_costmap(
trav_mask,
unknown_as_obstacle=p["unknown_as_obstacle"],
)
img_h, img_w = trav_mask.shape
# Only process lower 60% of image (ground plane region)
row_start = int(img_h * 0.40)
# RealSense D435i approximate intrinsics at 640×480
# (horizontal FOV ~87°, vertical FOV ~58°)
fx = img_w / (2.0 * np.tan(np.deg2rad(87.0 / 2)))
fy = img_h / (2.0 * np.tan(np.deg2rad(58.0 / 2)))
cx = img_w / 2.0
cy = img_h / 2.0
# Output occupancy grid (init to -1 = unknown)
grid = np.full((grid_h, grid_w), -1, dtype=np.int8)
for row in range(row_start, img_h):
# Vertical angle from optical axis
alpha = np.arctan2(-(row - cy), fy) + cam_pit
if alpha >= 0:
continue # ray points up — skip
# Ground distance from camera base (forward)
fwd_dist = cam_h / np.tan(-alpha)
if fwd_dist <= 0 or fwd_dist > rng:
continue
grid_row = int(fwd_dist / res)
if grid_row >= grid_h:
continue
for col in range(0, img_w):
# Lateral angle
beta = np.arctan2(col - cx, fx)
lat_dist = fwd_dist * np.tan(beta)
grid_col = grid_cx + int(lat_dist / res)
if grid_col < 0 or grid_col >= grid_w:
continue
cell_cost = int(cost_map_int8[row, col])
existing = int(grid[grid_row, grid_col])
# Max-cost merge: most conservative estimate wins
if existing < 0 or cell_cost > existing:
grid[grid_row, grid_col] = np.int8(cell_cost)
# Build OccupancyGrid message
msg = OccupancyGrid()
msg.header.stamp = stamp
msg.header.frame_id = "base_link"
msg.info = MapMetaData()
msg.info.resolution = res
msg.info.width = grid_w
msg.info.height = grid_h
msg.info.map_load_time = stamp
# Origin: lower-left corner in base_link
# Grid row 0 = closest (0m forward), row grid_h-1 = rng forward
# Grid col 0 = leftmost, col grid_cx = straight ahead
msg.info.origin.position.x = 0.0
msg.info.origin.position.y = -(rng / 2.0)
msg.info.origin.position.z = 0.0
msg.info.origin.orientation.w = 1.0
msg.data = grid.flatten().tolist()
return msg
# ── Entry point ───────────────────────────────────────────────────────────────
def main(args=None):
rclpy.init(args=args)
node = SidewalkSegNode()
try:
rclpy.spin(node)
except KeyboardInterrupt:
pass
finally:
node.destroy_node()
rclpy.try_shutdown()
if __name__ == "__main__":
main()
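The inverse-perspective model inside `_build_costmap()` can be sanity-checked in isolation: with the camera `cam_h` metres above the ground and zero pitch, an image row below the horizon projects to a forward distance of `cam_h / tan(-alpha)`. A standalone sketch using the same approximate D435i vertical FOV and the node's default 0.15 m mount height (values here are assumptions copied from the node, not calibrated intrinsics):

```python
import numpy as np

CAM_H = 0.15             # camera mount height (m), node default
IMG_W, IMG_H = 640, 480  # RealSense colour resolution
V_FOV_DEG = 58.0         # approximate D435i vertical FOV
fy = IMG_H / (2.0 * np.tan(np.deg2rad(V_FOV_DEG / 2)))
cy = IMG_H / 2.0

def row_to_forward_m(row, pitch_rad=0.0):
    """Forward ground distance for an image row; None if the ray
    points at or above the horizon (same test as the node)."""
    alpha = np.arctan2(-(row - cy), fy) + pitch_rad
    if alpha >= 0:
        return None
    return CAM_H / np.tan(-alpha)

# Rows near the bottom of the frame land close to the robot;
# rows approaching the image centre stretch towards the horizon.
for row in (470, 360, 250):
    print(row, row_to_forward_m(row))
```

Row 470 lands roughly 0.3 m ahead while row 250 already projects past the 5 m default `costmap_range_m`, which is why the projection loop skips rows whose distance exceeds the range and why far-range cells are the noisiest.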


@@ -0,0 +1,398 @@
#!/usr/bin/env python3
"""
build_engine.py Convert BiSeNetV2/DDRNet PyTorch model ONNX TensorRT FP16 engine.
Run ONCE on the Jetson Orin Nano Super. The .engine file is hardware-specific
(cannot be transferred between machines).
Usage
# Build BiSeNetV2 engine (default, fastest):
python3 build_engine.py
# Build DDRNet-23-slim engine (higher mIoU, slightly slower):
python3 build_engine.py --model ddrnet
# Custom output paths:
python3 build_engine.py --onnx /tmp/model.onnx --engine /tmp/model.engine
# Re-export ONNX from local weights (skip download):
python3 build_engine.py --weights /path/to/bisenetv2.pth
Prerequisites
pip install torch torchvision onnx onnxruntime-gpu
# TensorRT is pre-installed with JetPack 6 at /usr/lib/python3/dist-packages/tensorrt
Outputs
/mnt/nvme/saltybot/models/bisenetv2_cityscapes_512x256.onnx (FP32)
/mnt/nvme/saltybot/models/bisenetv2_cityscapes_512x256.engine (TRT FP16)
Model sources
BiSeNetV2 pretrained on Cityscapes:
https://github.com/CoinCheung/BiSeNet (MIT License)
Checkpoint: cp/model_final_v2_city.pth (~25MB)
DDRNet-23-slim pretrained on Cityscapes:
https://github.com/ydhongHIT/DDRNet (Apache License)
Checkpoint: DDRNet23s_imagenet.pth + Cityscapes fine-tune
Performance on Orin Nano Super (1024-core Ampere, 20 TOPS)
BiSeNetV2 FP32 ONNX: ~12ms (~83fps theoretical)
BiSeNetV2 FP16 TRT: ~5ms (~200fps theoretical, real ~50fps w/ overhead)
DDRNet-23 FP16 TRT: ~8ms (~125fps theoretical, real ~40fps)
Target: >15fps including ROS2 overhead at 512×256
"""
import argparse
import os
import sys
import time
# ── Default paths ─────────────────────────────────────────────────────────────
DEFAULT_MODEL_DIR = "/mnt/nvme/saltybot/models"
INPUT_W, INPUT_H = 512, 256
BATCH_SIZE = 1
MODEL_CONFIGS = {
"bisenetv2": {
"onnx_name": "bisenetv2_cityscapes_512x256.onnx",
"engine_name": "bisenetv2_cityscapes_512x256.engine",
"repo_url": "https://github.com/CoinCheung/BiSeNet.git",
"weights_url": "https://github.com/CoinCheung/BiSeNet/releases/download/0.0.0/model_final_v2_city.pth",
"num_classes": 19,
"description": "BiSeNetV2 Cityscapes — 72.6 mIoU, ~50fps on Orin",
},
"ddrnet": {
"onnx_name": "ddrnet23s_cityscapes_512x256.onnx",
"engine_name": "ddrnet23s_cityscapes_512x256.engine",
"repo_url": "https://github.com/ydhongHIT/DDRNet.git",
"weights_url": None, # must supply manually
"num_classes": 19,
"description": "DDRNet-23-slim Cityscapes — 79.5 mIoU, ~40fps on Orin",
},
}
def parse_args():
p = argparse.ArgumentParser(description="Build TensorRT segmentation engine")
p.add_argument("--model", default="bisenetv2",
choices=list(MODEL_CONFIGS.keys()),
help="Model architecture")
p.add_argument("--weights", default=None,
help="Path to .pth weights file (downloads if not set)")
p.add_argument("--onnx", default=None, help="Output ONNX path")
p.add_argument("--engine", default=None, help="Output TRT engine path")
p.add_argument("--fp32", action="store_true",
help="Build FP32 engine instead of FP16 (slower, more accurate)")
p.add_argument("--workspace-gb", type=float, default=2.0,
help="TRT builder workspace in GB (default 2.0)")
p.add_argument("--skip-onnx", action="store_true",
help="Skip ONNX export (use existing .onnx file)")
return p.parse_args()
def ensure_dir(path: str):
os.makedirs(path, exist_ok=True)
def download_weights(url: str, dest: str):
"""Download model weights with progress display."""
import urllib.request
print(f" Downloading weights from {url}")
print(f"{dest}")
def progress(count, block, total):
pct = min(count * block / total * 100, 100)
bar = "#" * int(pct / 2)
sys.stdout.write(f"\r [{bar:<50}] {pct:.1f}%")
sys.stdout.flush()
urllib.request.urlretrieve(url, dest, reporthook=progress)
print()
def export_bisenetv2_onnx(weights_path: str, onnx_path: str, num_classes: int = 19):
"""
Export BiSeNetV2 from CoinCheung/BiSeNet to ONNX.
Expects BiSeNet repo to be cloned in /tmp/BiSeNet or PYTHONPATH set.
"""
import torch
# Try to import BiSeNetV2 — clone repo if needed
try:
sys.path.insert(0, "/tmp/BiSeNet")
from lib.models.bisenetv2 import BiSeNetV2
except ImportError:
print(" Cloning BiSeNet repository to /tmp/BiSeNet ...")
os.system("git clone --depth 1 https://github.com/CoinCheung/BiSeNet.git /tmp/BiSeNet")
sys.path.insert(0, "/tmp/BiSeNet")
from lib.models.bisenetv2 import BiSeNetV2
print(" Loading BiSeNetV2 weights ...")
net = BiSeNetV2(num_classes)
state = torch.load(weights_path, map_location="cpu")
# Handle both direct state_dict and checkpoint wrapper
state_dict = state.get("state_dict", state.get("model", state))
# Strip module. prefix from DataParallel checkpoints
state_dict = {k.replace("module.", ""): v for k, v in state_dict.items()}
net.load_state_dict(state_dict, strict=False)
net.eval()
print(f" Exporting ONNX → {onnx_path}")
dummy = torch.randn(BATCH_SIZE, 3, INPUT_H, INPUT_W)
import torch.onnx
# BiSeNetV2 inference returns (logits, *aux) — we only want logits
class _Wrapper(torch.nn.Module):
def __init__(self, net):
super().__init__()
self.net = net
def forward(self, x):
out = self.net(x)
return out[0] if isinstance(out, (list, tuple)) else out
wrapped = _Wrapper(net)
torch.onnx.export(
wrapped, dummy, onnx_path,
opset_version=12,
input_names=["image"],
output_names=["logits"],
dynamic_axes=None, # fixed batch=1 for TRT
do_constant_folding=True,
)
# Verify
import onnx
model = onnx.load(onnx_path)
onnx.checker.check_model(model)
print(f" ONNX export verified ({os.path.getsize(onnx_path) / 1e6:.1f} MB)")
def export_ddrnet_onnx(weights_path: str, onnx_path: str, num_classes: int = 19):
"""Export DDRNet-23-slim to ONNX."""
import torch
try:
sys.path.insert(0, "/tmp/DDRNet/tools/../lib")
from models.ddrnet_23_slim import get_seg_model
except ImportError:
print(" Cloning DDRNet repository to /tmp/DDRNet ...")
os.system("git clone --depth 1 https://github.com/ydhongHIT/DDRNet.git /tmp/DDRNet")
sys.path.insert(0, "/tmp/DDRNet/lib")
from models.ddrnet_23_slim import get_seg_model
print(" Loading DDRNet weights ...")
net = get_seg_model(num_classes=num_classes)
net.load_state_dict(
{k.replace("module.", ""): v
for k, v in torch.load(weights_path, map_location="cpu").items()},
strict=False
)
net.eval()
print(f" Exporting ONNX → {onnx_path}")
dummy = torch.randn(BATCH_SIZE, 3, INPUT_H, INPUT_W)
import torch.onnx
torch.onnx.export(
net, dummy, onnx_path,
opset_version=12,
input_names=["image"],
output_names=["logits"],
do_constant_folding=True,
)
import onnx
onnx.checker.check_model(onnx.load(onnx_path))
print(f" ONNX export verified ({os.path.getsize(onnx_path) / 1e6:.1f} MB)")
def build_trt_engine(onnx_path: str, engine_path: str,
fp16: bool = True, workspace_gb: float = 2.0):
"""
Build a TensorRT engine from an ONNX model.
Uses explicit batch dimension (not implicit batch).
Enables FP16 mode by default (2× speedup on Orin Ampere vs FP32).
"""
try:
import tensorrt as trt
except ImportError:
print("ERROR: TensorRT not found. Install JetPack 6 or:")
print(" pip install tensorrt (x86 only)")
sys.exit(1)
logger = trt.Logger(trt.Logger.INFO)
print(f"\n Building TRT engine ({'FP16' if fp16 else 'FP32'}) from: {onnx_path}")
print(f" Output: {engine_path}")
print(f" Workspace: {workspace_gb:.1f} GB")
print(" This may take 5-15 minutes on first build (layer calibration)...")
t0 = time.time()
builder = trt.Builder(logger)
network = builder.create_network(
1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
config = builder.create_builder_config()
# Workspace
config.set_memory_pool_limit(
trt.MemoryPoolType.WORKSPACE, int(workspace_gb * 1024**3)
)
# FP16 precision
if fp16 and builder.platform_has_fast_fp16:
config.set_flag(trt.BuilderFlag.FP16)
print(" FP16 mode enabled")
elif fp16:
print(" WARNING: FP16 not available on this platform, using FP32")
# Parse ONNX
with open(onnx_path, "rb") as f:
if not parser.parse(f.read()):
for i in range(parser.num_errors):
print(f" ONNX parse error: {parser.get_error(i)}")
sys.exit(1)
print(f" Network: {network.num_inputs} input(s), {network.num_outputs} output(s)")
inp = network.get_input(0)
print(f" Input shape: {inp.shape} dtype: {inp.dtype}")
# Build
serialized_engine = builder.build_serialized_network(network, config)
if serialized_engine is None:
print("ERROR: TRT engine build failed")
sys.exit(1)
with open(engine_path, "wb") as f:
f.write(serialized_engine)
elapsed = time.time() - t0
size_mb = os.path.getsize(engine_path) / 1e6
print(f"\n Engine built in {elapsed:.1f}s ({size_mb:.1f} MB)")
print(f" Saved: {engine_path}")
def validate_engine(engine_path: str):
"""Run a dummy inference to confirm the engine works and measure latency."""
try:
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit # noqa
import numpy as np
except ImportError:
print(" Skipping validation (pycuda not available)")
return
print("\n Validating engine with dummy input...")
trt_logger = trt.Logger(trt.Logger.WARNING)
with open(engine_path, "rb") as f:
engine = trt.Runtime(trt_logger).deserialize_cuda_engine(f.read())
ctx = engine.create_execution_context()
stream = cuda.Stream()
bindings = []
host_in = host_out = dev_in = dev_out = None
for i in range(engine.num_bindings):
shape = engine.get_binding_shape(i)
dtype = trt.nptype(engine.get_binding_dtype(i))
h_buf = cuda.pagelocked_empty(int(np.prod(shape)), dtype)
d_buf = cuda.mem_alloc(h_buf.nbytes)
bindings.append(int(d_buf))
if engine.binding_is_input(i):
host_in, dev_in = h_buf, d_buf
host_in[:] = np.random.randn(*shape).astype(dtype).ravel()
else:
host_out, dev_out = h_buf, d_buf
# Warm up
for _ in range(5):
cuda.memcpy_htod_async(dev_in, host_in, stream)
ctx.execute_async_v2(bindings, stream.handle)
cuda.memcpy_dtoh_async(host_out, dev_out, stream)
stream.synchronize()
# Benchmark
N = 20
t0 = time.time()
for _ in range(N):
cuda.memcpy_htod_async(dev_in, host_in, stream)
ctx.execute_async_v2(bindings, stream.handle)
cuda.memcpy_dtoh_async(host_out, dev_out, stream)
stream.synchronize()
elapsed_ms = (time.time() - t0) * 1000 / N
out_shape = engine.get_binding_shape(engine.num_bindings - 1)
print(f" Output shape: {out_shape}")
print(f" Inference latency: {elapsed_ms:.1f}ms ({1000/elapsed_ms:.1f} fps) [avg {N} runs]")
if elapsed_ms < 1000 / 15:
print(" PASS: target >15fps achieved")
else:
print(f" WARNING: latency {elapsed_ms:.1f}ms > target {1000/15:.1f}ms")
def main():
args = parse_args()
cfg = MODEL_CONFIGS[args.model]
print(f"\nBuilding TRT engine for: {cfg['description']}")
ensure_dir(DEFAULT_MODEL_DIR)
onnx_path = args.onnx or os.path.join(DEFAULT_MODEL_DIR, cfg["onnx_name"])
engine_path = args.engine or os.path.join(DEFAULT_MODEL_DIR, cfg["engine_name"])
# ── Step 1: Download weights if needed ────────────────────────────────────
if not args.skip_onnx:
weights_path = args.weights
if weights_path is None:
if cfg["weights_url"] is None:
print(f"ERROR: No weights URL for {args.model}. Supply --weights /path/to.pth")
sys.exit(1)
weights_path = os.path.join(DEFAULT_MODEL_DIR, f"{args.model}_cityscapes.pth")
if not os.path.exists(weights_path):
print("\nStep 1: Downloading pretrained weights")
download_weights(cfg["weights_url"], weights_path)
else:
print(f"\nStep 1: Using cached weights: {weights_path}")
else:
print(f"\nStep 1: Using provided weights: {weights_path}")
# ── Step 2: Export ONNX ────────────────────────────────────────────────
print(f"\nStep 2: Exporting {args.model.upper()} to ONNX ({INPUT_W}×{INPUT_H})")
if args.model == "bisenetv2":
export_bisenetv2_onnx(weights_path, onnx_path, cfg["num_classes"])
else:
export_ddrnet_onnx(weights_path, onnx_path, cfg["num_classes"])
else:
print(f"\nStep 1+2: Skipping ONNX export, using: {onnx_path}")
if not os.path.exists(onnx_path):
print(f"ERROR: ONNX file not found: {onnx_path}")
sys.exit(1)
# ── Step 3: Build TRT engine ───────────────────────────────────────────────
print(f"\nStep 3: Building TensorRT engine")
build_trt_engine(
onnx_path, engine_path,
fp16=not args.fp32,
workspace_gb=args.workspace_gb,
)
# ── Step 4: Validate ───────────────────────────────────────────────────────
print(f"\nStep 4: Validating engine")
validate_engine(engine_path)
print(f"\nDone. Engine: {engine_path}")
print("Start the node:")
print(f" ros2 launch saltybot_segmentation sidewalk_segmentation.launch.py \\")
print(f" engine_path:={engine_path}")
if __name__ == "__main__":
main()


@@ -0,0 +1,400 @@
#!/usr/bin/env python3
"""
fine_tune.py — Fine-tune BiSeNetV2 on custom sidewalk data.
Walk-and-label workflow:
1. Record a ROS2 bag while walking the route:
ros2 bag record /camera/color/image_raw -o sidewalk_route
2. Extract frames from the bag:
python3 fine_tune.py --extract-frames sidewalk_route/ --output-dir data/raw/ --every-n 5
3. Label 50+ frames in LabelMe (or CVAT):
labelme data/raw/ (save as JSON)
4. Convert labels to Cityscapes-format masks:
python3 fine_tune.py --convert-labels data/raw/ --output-dir data/labels/
5. Fine-tune:
python3 fine_tune.py --train --images data/raw/ --labels data/labels/ \
--weights /mnt/nvme/saltybot/models/bisenetv2_cityscapes.pth \
--output /mnt/nvme/saltybot/models/bisenetv2_custom.pth
6. Build new TRT engine:
python3 build_engine.py --weights /mnt/nvme/saltybot/models/bisenetv2_custom.pth
LabelMe class names (must match exactly in labelme JSON):
sidewalk → Cityscapes ID 1 → TRAV_SIDEWALK
road → Cityscapes ID 0 → TRAV_ROAD
grass → Cityscapes ID 8 → TRAV_GRASS (use 'vegetation')
obstacle → Cityscapes ID 2 → TRAV_OBSTACLE (use 'building' for static)
person → Cityscapes ID 11
car → Cityscapes ID 13
Requirements:
pip install torch torchvision albumentations labelme opencv-python
"""
import argparse
import json
import os
import sys
import numpy as np
# ── Label → Cityscapes ID mapping for custom annotation ──────────────────────
LABEL_TO_CITYSCAPES = {
"road": 0,
"sidewalk": 1,
"building": 2,
"wall": 3,
"fence": 4,
"pole": 5,
"traffic light": 6,
"traffic sign": 7,
"vegetation": 8,
"grass": 8, # alias
"terrain": 9,
"sky": 10,
"person": 11,
"rider": 12,
"car": 13,
"truck": 14,
"bus": 15,
"train": 16,
"motorcycle": 17,
"bicycle": 18,
# saltybot-specific aliases
"kerb": 1, # map kerb → sidewalk
"pavement": 1, # British English
"path": 1,
"tarmac": 0, # map tarmac → road
"shrub": 8, # map shrub → vegetation
}
DEFAULT_CLASS = 255 # unlabelled pixels → ignore in loss
def parse_args():
p = argparse.ArgumentParser(description="Fine-tune BiSeNetV2 on custom data")
p.add_argument("--extract-frames", metavar="BAG_DIR",
help="Extract frames from a ROS2 bag")
p.add_argument("--output-dir", default="data/raw",
help="Output directory for extracted frames or converted labels")
p.add_argument("--every-n", type=int, default=5,
help="Extract every N-th frame from bag")
p.add_argument("--convert-labels", metavar="LABELME_DIR",
help="Convert LabelMe JSON annotations to Cityscapes masks")
p.add_argument("--train", action="store_true",
help="Run fine-tuning loop")
p.add_argument("--images", default="data/raw",
help="Directory of training images")
p.add_argument("--labels", default="data/labels",
help="Directory of label masks (PNG, Cityscapes IDs)")
p.add_argument("--weights", required=False,
default="/mnt/nvme/saltybot/models/bisenetv2_cityscapes.pth",
help="Path to pretrained BiSeNetV2 weights")
p.add_argument("--output",
default="/mnt/nvme/saltybot/models/bisenetv2_custom.pth",
help="Output path for fine-tuned weights")
p.add_argument("--epochs", type=int, default=20)
p.add_argument("--lr", type=float, default=1e-4)
p.add_argument("--batch-size", type=int, default=4)
p.add_argument("--input-size", nargs=2, type=int, default=[512, 256],
help="Model input size: W H")
p.add_argument("--eval", action="store_true",
help="Evaluate mIoU on labelled data instead of training")
return p.parse_args()
# ── Frame extraction from ROS2 bag ────────────────────────────────────────────
def extract_frames(bag_dir: str, output_dir: str, every_n: int = 5):
"""Extract /camera/color/image_raw frames from a ROS2 bag."""
try:
import rclpy
from rosbag2_py import SequentialReader, StorageOptions, ConverterOptions
from rclpy.serialization import deserialize_message
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2
except ImportError:
print("ERROR: rosbag2_py / rclpy not available. Must run in ROS2 environment.")
sys.exit(1)
os.makedirs(output_dir, exist_ok=True)
bridge = CvBridge()
reader = SequentialReader()
reader.open(
StorageOptions(uri=bag_dir, storage_id="sqlite3"),
ConverterOptions("", ""),
)
topic = "/camera/color/image_raw"
count = 0
saved = 0
while reader.has_next():
topic_name, data, ts = reader.read_next()
if topic_name != topic:
continue
count += 1
if count % every_n != 0:
continue
msg = deserialize_message(data, Image)
bgr = bridge.imgmsg_to_cv2(msg, "bgr8")
fname = os.path.join(output_dir, f"frame_{ts:020d}.jpg")
cv2.imwrite(fname, bgr, [cv2.IMWRITE_JPEG_QUALITY, 95])
saved += 1
if saved % 10 == 0:
print(f" Saved {saved} frames...")
print(f"Extracted {saved} frames from {count} total to {output_dir}")
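The every-N cadence above increments `count` before the modulo test, so frames N, 2N, 3N, … (1-indexed) of the matching topic are kept. A stdlib-only sketch of that sampling, independent of ROS (the `kept_frames` helper is illustrative, not part of the script):

```python
# Standalone sketch of the every-N frame sampling used in extract_frames():
# the counter increments first, so frames N, 2N, 3N, ... (1-indexed) survive.
def kept_frames(n_frames: int, every_n: int) -> list:
    kept = []
    count = 0
    for idx in range(n_frames):
        count += 1
        if count % every_n == 0:
            kept.append(idx)
    return kept

# 12 frames sampled every 5th keep 0-based indices 4 and 9.
print(kept_frames(12, 5))
```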
# ── LabelMe JSON → Cityscapes mask ───────────────────────────────────────────
def convert_labelme_to_masks(labelme_dir: str, output_dir: str):
"""Convert LabelMe polygon annotations to per-pixel Cityscapes-ID PNG masks."""
try:
import cv2
except ImportError:
print("ERROR: opencv-python not installed. pip install opencv-python")
sys.exit(1)
os.makedirs(output_dir, exist_ok=True)
json_files = [f for f in os.listdir(labelme_dir) if f.endswith(".json")]
if not json_files:
print(f"No .json annotation files found in {labelme_dir}")
sys.exit(1)
print(f"Converting {len(json_files)} annotation files...")
for jf in json_files:
with open(os.path.join(labelme_dir, jf)) as f:
anno = json.load(f)
h = anno["imageHeight"]
w = anno["imageWidth"]
mask = np.full((h, w), DEFAULT_CLASS, dtype=np.uint8)
for shape in anno.get("shapes", []):
label = shape["label"].lower().strip()
cid = LABEL_TO_CITYSCAPES.get(label)
if cid is None:
print(f" WARNING: unknown label '{label}' in {jf} — skipping")
continue
pts = np.array(shape["points"], dtype=np.int32)
cv2.fillPoly(mask, [pts], cid)
out_name = os.path.splitext(jf)[0] + "_mask.png"
cv2.imwrite(os.path.join(output_dir, out_name), mask)
print(f"Masks saved to {output_dir}")
# ── Training loop ─────────────────────────────────────────────────────────────
def train(args):
"""Fine-tune BiSeNetV2 with cross-entropy loss + ignore_index=255."""
try:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import cv2
import albumentations as A
from albumentations.pytorch import ToTensorV2
except ImportError as e:
print(f"ERROR: missing dependency — {e}")
print("pip install torch torchvision albumentations opencv-python")
sys.exit(1)
# -- Dataset ---------------------------------------------------------------
class SegDataset(Dataset):
def __init__(self, img_dir, lbl_dir, transform=None):
self.imgs = sorted([
os.path.join(img_dir, f)
for f in os.listdir(img_dir)
if f.endswith((".jpg", ".png"))
])
self.lbls = sorted([
os.path.join(lbl_dir, f)
for f in os.listdir(lbl_dir)
if f.endswith(".png")
])
if len(self.imgs) != len(self.lbls):
raise ValueError(
f"Image/label count mismatch: {len(self.imgs)} vs {len(self.lbls)}"
)
self.transform = transform
def __len__(self): return len(self.imgs)
def __getitem__(self, idx):
img = cv2.cvtColor(cv2.imread(self.imgs[idx]), cv2.COLOR_BGR2RGB)
lbl = cv2.imread(self.lbls[idx], cv2.IMREAD_GRAYSCALE)
if self.transform:
aug = self.transform(image=img, mask=lbl)
img, lbl = aug["image"], aug["mask"]
return img, lbl.long()
iw, ih = args.input_size
transform = A.Compose([
A.Resize(ih, iw),
A.HorizontalFlip(p=0.5),
A.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.1, p=0.5),
A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
ToTensorV2(),
])
dataset = SegDataset(args.images, args.labels, transform=transform)
loader = DataLoader(dataset, batch_size=args.batch_size, shuffle=True,
num_workers=2, pin_memory=True)
print(f"Dataset: {len(dataset)} samples batch={args.batch_size}")
# -- Model -----------------------------------------------------------------
sys.path.insert(0, "/tmp/BiSeNet")
try:
from lib.models.bisenetv2 import BiSeNetV2
except ImportError:
os.system("git clone --depth 1 https://github.com/CoinCheung/BiSeNet.git /tmp/BiSeNet")
from lib.models.bisenetv2 import BiSeNetV2
net = BiSeNetV2(n_classes=19)
state = torch.load(args.weights, map_location="cpu")
state_dict = state.get("state_dict", state.get("model", state))
state_dict = {k.replace("module.", ""): v for k, v in state_dict.items()}
net.load_state_dict(state_dict, strict=False)
device = "cuda" if torch.cuda.is_available() else "cpu"
net = net.to(device)
print(f"Training on: {device}")
# -- Optimiser (low LR to preserve Cityscapes knowledge) -------------------
optimiser = optim.AdamW(net.parameters(), lr=args.lr, weight_decay=1e-4)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimiser, T_max=args.epochs)
criterion = nn.CrossEntropyLoss(ignore_index=DEFAULT_CLASS)
# -- Loop ------------------------------------------------------------------
best_loss = float("inf")
for epoch in range(1, args.epochs + 1):
net.train()
total_loss = 0.0
for imgs, lbls in loader:
imgs = imgs.to(device, dtype=torch.float32)
lbls = lbls.to(device)
out = net(imgs)
logits = out[0] if isinstance(out, (list, tuple)) else out
loss = criterion(logits, lbls)
optimiser.zero_grad()
loss.backward()
optimiser.step()
total_loss += loss.item()
scheduler.step()
avg = total_loss / len(loader)
print(f" Epoch {epoch:3d}/{args.epochs} loss={avg:.4f} lr={scheduler.get_last_lr()[0]:.2e}")
if avg < best_loss:
best_loss = avg
torch.save(net.state_dict(), args.output)
print(f" Saved best model → {args.output}")
print(f"\nFine-tuning done. Best loss: {best_loss:.4f} Saved: {args.output}")
print("\nNext step: rebuild TRT engine:")
print(f" python3 build_engine.py --weights {args.output}")
# ── mIoU evaluation ───────────────────────────────────────────────────────────
def evaluate_miou(args):
"""Compute mIoU on labelled validation data."""
try:
import torch
import cv2
except ImportError:
print("ERROR: torch/cv2 not available")
sys.exit(1)
from saltybot_segmentation.seg_utils import cityscapes_to_traversability
NUM_CLASSES = 19
confusion = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
sys.path.insert(0, "/tmp/BiSeNet")
from lib.models.bisenetv2 import BiSeNetV2
import torch
net = BiSeNetV2(n_classes=NUM_CLASSES)
state_dict = {k.replace("module.", ""): v
for k, v in torch.load(args.weights, map_location="cpu").items()}
net.load_state_dict(state_dict, strict=False)
net = net.eval().cuda() if torch.cuda.is_available() else net.eval()
device = next(net.parameters()).device
iw, ih = args.input_size
img_files = sorted([f for f in os.listdir(args.images) if f.endswith((".jpg", ".png"))])
lbl_files = sorted([f for f in os.listdir(args.labels) if f.endswith(".png")])
from saltybot_segmentation.seg_utils import preprocess_for_inference, unpad_mask
with torch.no_grad():
for imgf, lblf in zip(img_files, lbl_files):
img = cv2.imread(os.path.join(args.images, imgf))
lbl = cv2.imread(os.path.join(args.labels, lblf), cv2.IMREAD_GRAYSCALE)
H, W = img.shape[:2]
blob, scale, pad_l, pad_t = preprocess_for_inference(img, iw, ih)
t_blob = torch.from_numpy(blob).to(device)
out = net(t_blob)
logits = out[0] if isinstance(out, (list, tuple)) else out
pred = logits[0].argmax(0).cpu().numpy().astype(np.uint8)
pred = unpad_mask(pred, H, W, scale, pad_l, pad_t)
valid = lbl != DEFAULT_CLASS
np.add.at(confusion, (lbl[valid].ravel(), pred[valid].ravel()), 1)
# Per-class IoU
iou_per_class = []
for c in range(NUM_CLASSES):
tp = confusion[c, c]
fp = confusion[:, c].sum() - tp
fn = confusion[c, :].sum() - tp
denom = tp + fp + fn
iou_per_class.append(tp / denom if denom > 0 else float("nan"))
valid_iou = [v for v in iou_per_class if not np.isnan(v)]
miou = np.mean(valid_iou)
CITYSCAPES_NAMES = ["road", "sidewalk", "building", "wall", "fence", "pole",
"traffic light", "traffic sign", "vegetation", "terrain",
"sky", "person", "rider", "car", "truck", "bus", "train",
"motorcycle", "bicycle"]
print("\nmIoU per class:")
for i, (name, iou) in enumerate(zip(CITYSCAPES_NAMES, iou_per_class)):
marker = " *" if not np.isnan(iou) else ""
print(f" {i:2d} {name:<16} {iou*100:5.1f}%{marker}")
print(f"\nmIoU: {miou*100:.2f}% ({len(valid_iou)}/{NUM_CLASSES} classes present)")
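The per-class IoU computed above is TP / (TP + FP + FN) read off one row/column pair of the confusion matrix. A toy two-class check of that arithmetic (the numbers are illustrative only):

```python
import numpy as np

# Toy 2-class confusion matrix: rows = ground truth, cols = prediction.
confusion = np.array([[8, 2],
                      [1, 4]], dtype=np.int64)
ious = []
for c in range(2):
    tp = confusion[c, c]
    fp = confusion[:, c].sum() - tp   # predicted c, truly another class
    fn = confusion[c, :].sum() - tp   # truly c, predicted another class
    ious.append(tp / (tp + fp + fn))
miou = float(np.mean(ious))
# class 0: 8/(8+1+2) = 8/11, class 1: 4/(4+2+1) = 4/7
```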
def main():
args = parse_args()
if args.extract_frames:
extract_frames(args.extract_frames, args.output_dir, args.every_n)
elif args.convert_labels:
convert_labelme_to_masks(args.convert_labels, args.output_dir)
elif args.eval:
evaluate_miou(args)
elif args.train:
train(args)
else:
print("Specify one of: --extract-frames, --convert-labels, --train, --eval")
sys.exit(1)
if __name__ == "__main__":
main()
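The LABEL_TO_CITYSCAPES alias table can be exercised without any ML stack. A minimal sketch (excerpted mapping; the `label_to_id` helper is hypothetical — the script inlines this lookup in `convert_labelme_to_masks`) showing how unknown labels fall back to the 255 ignore index:

```python
# Excerpt of the alias table from fine_tune.py plus a tiny lookup helper.
LABEL_TO_CITYSCAPES = {
    "road": 0, "tarmac": 0,                     # tarmac is an alias for road
    "sidewalk": 1, "pavement": 1, "path": 1, "kerb": 1,
    "vegetation": 8, "grass": 8, "shrub": 8,
}
DEFAULT_CLASS = 255                             # unlabelled -> ignored in loss

def label_to_id(label: str) -> int:
    # Same normalisation the script applies: lowercase, strip whitespace.
    return LABEL_TO_CITYSCAPES.get(label.lower().strip(), DEFAULT_CLASS)
```

(The script itself skips unknown polygons rather than painting 255, but since the mask is initialised to 255 everywhere, the net effect is the same.)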


@@ -0,0 +1,4 @@
[develop]
script_dir=$base/lib/saltybot_segmentation
[install]
install_scripts=$base/lib/saltybot_segmentation


@@ -0,0 +1,36 @@
from setuptools import setup
import os
from glob import glob
package_name = "saltybot_segmentation"
setup(
name=package_name,
version="0.1.0",
packages=[package_name],
data_files=[
("share/ament_index/resource_index/packages",
["resource/" + package_name]),
("share/" + package_name, ["package.xml"]),
(os.path.join("share", package_name, "launch"),
glob("launch/*.py")),
(os.path.join("share", package_name, "config"),
glob("config/*.yaml")),
(os.path.join("share", package_name, "scripts"),
glob("scripts/*.py")),
(os.path.join("share", package_name, "docs"),
glob("docs/*.md")),
],
install_requires=["setuptools"],
zip_safe=True,
maintainer="seb",
maintainer_email="seb@vayrette.com",
description="Semantic sidewalk segmentation for SaltyBot (BiSeNetV2/DDRNet TensorRT)",
license="MIT",
tests_require=["pytest"],
entry_points={
"console_scripts": [
"sidewalk_seg = saltybot_segmentation.sidewalk_seg_node:main",
],
},
)


@@ -0,0 +1,168 @@
"""
test_seg_utils.py — Unit tests for seg_utils pure helpers.
No ROS2 / TensorRT / cv2 runtime required — plain pytest with numpy.
"""
import numpy as np
import pytest
from saltybot_segmentation.seg_utils import (
TRAV_SIDEWALK, TRAV_GRASS, TRAV_ROAD, TRAV_OBSTACLE, TRAV_UNKNOWN,
TRAV_TO_COST,
cityscapes_to_traversability,
traversability_to_costmap,
)
# ── cityscapes_to_traversability ──────────────────────────────────────────────
class TestCityscapesToTraversability:
def test_sidewalk_maps_to_free(self):
mask = np.array([[1]], dtype=np.uint8) # Cityscapes: sidewalk
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_SIDEWALK
def test_road_maps_to_road(self):
mask = np.array([[0]], dtype=np.uint8) # Cityscapes: road
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_ROAD
def test_vegetation_maps_to_grass(self):
mask = np.array([[8]], dtype=np.uint8) # Cityscapes: vegetation
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_GRASS
def test_terrain_maps_to_grass(self):
mask = np.array([[9]], dtype=np.uint8) # Cityscapes: terrain
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_GRASS
def test_building_maps_to_obstacle(self):
mask = np.array([[2]], dtype=np.uint8) # Cityscapes: building
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_OBSTACLE
def test_person_maps_to_obstacle(self):
mask = np.array([[11]], dtype=np.uint8) # Cityscapes: person
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_OBSTACLE
def test_car_maps_to_obstacle(self):
mask = np.array([[13]], dtype=np.uint8) # Cityscapes: car
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_OBSTACLE
def test_sky_maps_to_unknown(self):
mask = np.array([[10]], dtype=np.uint8) # Cityscapes: sky
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_UNKNOWN
def test_all_dynamic_obstacles_are_lethal(self):
# person=11, rider=12, car=13, truck=14, bus=15, train=16, motorcycle=17, bicycle=18
for cid in [11, 12, 13, 14, 15, 16, 17, 18]:
mask = np.array([[cid]], dtype=np.uint8)
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_OBSTACLE, f"class {cid} should be OBSTACLE"
def test_all_static_obstacles_are_lethal(self):
# building=2, wall=3, fence=4, pole=5, traffic light=6, sign=7
for cid in [2, 3, 4, 5, 6, 7]:
mask = np.array([[cid]], dtype=np.uint8)
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_OBSTACLE, f"class {cid} should be OBSTACLE"
def test_out_of_range_id_clamped(self):
"""Class IDs beyond 18 should not crash — clamped to 18."""
mask = np.array([[200]], dtype=np.uint8)
result = cityscapes_to_traversability(mask)
assert result[0, 0] in range(5) # valid traversability class
def test_multi_class_image(self):
"""A 2×3 image with mixed classes maps correctly."""
mask = np.array([
[1, 0, 8], # sidewalk, road, vegetation
[2, 11, 10], # building, person, sky
], dtype=np.uint8)
result = cityscapes_to_traversability(mask)
assert result[0, 0] == TRAV_SIDEWALK
assert result[0, 1] == TRAV_ROAD
assert result[0, 2] == TRAV_GRASS
assert result[1, 0] == TRAV_OBSTACLE
assert result[1, 1] == TRAV_OBSTACLE
assert result[1, 2] == TRAV_UNKNOWN
def test_output_dtype_is_uint8(self):
mask = np.zeros((4, 4), dtype=np.uint8)
result = cityscapes_to_traversability(mask)
assert result.dtype == np.uint8
def test_output_shape_preserved(self):
mask = np.zeros((32, 64), dtype=np.uint8)
result = cityscapes_to_traversability(mask)
assert result.shape == (32, 64)
# ── traversability_to_costmap ─────────────────────────────────────────────────
class TestTraversabilityToCostmap:
def test_sidewalk_is_free(self):
mask = np.array([[TRAV_SIDEWALK]], dtype=np.uint8)
cost = traversability_to_costmap(mask)
assert cost[0, 0] == 0
def test_road_has_high_cost(self):
mask = np.array([[TRAV_ROAD]], dtype=np.uint8)
cost = traversability_to_costmap(mask)
assert 50 < cost[0, 0] < 100 # high but not lethal
def test_grass_has_medium_cost(self):
mask = np.array([[TRAV_GRASS]], dtype=np.uint8)
cost = traversability_to_costmap(mask)
grass_cost = TRAV_TO_COST[TRAV_GRASS]
assert cost[0, 0] == grass_cost
assert grass_cost < TRAV_TO_COST[TRAV_ROAD] # grass < road cost
def test_obstacle_is_lethal(self):
mask = np.array([[TRAV_OBSTACLE]], dtype=np.uint8)
cost = traversability_to_costmap(mask)
assert cost[0, 0] == 100
def test_unknown_is_neg1_by_default(self):
mask = np.array([[TRAV_UNKNOWN]], dtype=np.uint8)
cost = traversability_to_costmap(mask)
assert cost[0, 0] == -1
def test_unknown_as_obstacle_flag(self):
mask = np.array([[TRAV_UNKNOWN]], dtype=np.uint8)
cost = traversability_to_costmap(mask, unknown_as_obstacle=True)
assert cost[0, 0] == 100
def test_cost_ordering(self):
"""sidewalk < grass < road < obstacle."""
assert TRAV_TO_COST[TRAV_SIDEWALK] < TRAV_TO_COST[TRAV_GRASS]
assert TRAV_TO_COST[TRAV_GRASS] < TRAV_TO_COST[TRAV_ROAD]
assert TRAV_TO_COST[TRAV_ROAD] < TRAV_TO_COST[TRAV_OBSTACLE]
def test_output_dtype_is_int8(self):
mask = np.zeros((4, 4), dtype=np.uint8)
cost = traversability_to_costmap(mask)
assert cost.dtype == np.int8
def test_output_shape_preserved(self):
mask = np.zeros((16, 32), dtype=np.uint8)
cost = traversability_to_costmap(mask)
assert cost.shape == (16, 32)
def test_mixed_mask(self):
mask = np.array([
[TRAV_SIDEWALK, TRAV_ROAD],
[TRAV_OBSTACLE, TRAV_UNKNOWN],
], dtype=np.uint8)
cost = traversability_to_costmap(mask)
assert cost[0, 0] == TRAV_TO_COST[TRAV_SIDEWALK]
assert cost[0, 1] == TRAV_TO_COST[TRAV_ROAD]
assert cost[1, 0] == TRAV_TO_COST[TRAV_OBSTACLE]
assert cost[1, 1] == -1 # unknown
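One numpy implementation consistent with the contracts these tests pin down. The concrete class IDs and cost values below are placeholders chosen only to satisfy the asserted ordering and ranges; the authoritative values live in seg_utils.py:

```python
import numpy as np

# Placeholder IDs/costs — illustrative, not the real seg_utils constants.
TRAV_SIDEWALK, TRAV_ROAD, TRAV_GRASS, TRAV_OBSTACLE, TRAV_UNKNOWN = 0, 1, 2, 3, 4
TRAV_TO_COST = {TRAV_SIDEWALK: 0, TRAV_GRASS: 40, TRAV_ROAD: 70, TRAV_OBSTACLE: 100}

def traversability_to_costmap(mask: np.ndarray,
                              unknown_as_obstacle: bool = False) -> np.ndarray:
    """5-class traversability mask -> Nav2 OccupancyGrid-style int8 costs."""
    lut = np.full(5, -1, dtype=np.int8)       # default (TRAV_UNKNOWN) -> -1
    for cls, cost in TRAV_TO_COST.items():
        lut[cls] = cost
    if unknown_as_obstacle:
        lut[TRAV_UNKNOWN] = 100
    return lut[mask]                          # vectorised lookup, keeps shape

mask = np.array([[TRAV_SIDEWALK, TRAV_ROAD],
                 [TRAV_OBSTACLE, TRAV_UNKNOWN]], dtype=np.uint8)
cost = traversability_to_costmap(mask)
```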


@@ -0,0 +1,70 @@
cmake_minimum_required(VERSION 3.8)
project(saltybot_segmentation_costmap)
# Default to C++17
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 17)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
# Dependencies
find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)
find_package(nav2_costmap_2d REQUIRED)
find_package(nav_msgs REQUIRED)
find_package(pluginlib REQUIRED)
# Shared library (the plugin)
add_library(${PROJECT_NAME} SHARED
src/segmentation_costmap_layer.cpp
)
target_include_directories(${PROJECT_NAME} PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
$<INSTALL_INTERFACE:include>
)
ament_target_dependencies(${PROJECT_NAME}
rclcpp
nav2_costmap_2d
nav_msgs
pluginlib
)
# Export plugin XML
pluginlib_export_plugin_description_file(nav2_costmap_2d plugin.xml)
# Install
install(TARGETS ${PROJECT_NAME}
ARCHIVE DESTINATION lib
LIBRARY DESTINATION lib
RUNTIME DESTINATION bin
)
install(DIRECTORY include/
DESTINATION include
)
install(FILES plugin.xml
DESTINATION share/${PROJECT_NAME}
)
# Tests
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
ament_lint_auto_find_test_dependencies()
endif()
ament_export_include_directories(include)
ament_export_libraries(${PROJECT_NAME})
ament_export_dependencies(
rclcpp
nav2_costmap_2d
nav_msgs
pluginlib
)
ament_package()


@@ -0,0 +1,85 @@
#pragma once
#include <mutex>
#include <string>
#include "nav2_costmap_2d/layer.hpp"
#include "nav2_costmap_2d/layered_costmap.hpp"
#include "nav_msgs/msg/occupancy_grid.hpp"
#include "rclcpp/rclcpp.hpp"
namespace saltybot_segmentation_costmap
{
/**
* SegmentationCostmapLayer
*
* Nav2 costmap2d plugin that subscribes to /segmentation/costmap
* (nav_msgs/OccupancyGrid published by sidewalk_seg_node) and merges
* traversability costs into the Nav2 local_costmap.
*
* Merging strategy (combination_method parameter):
* "max" → keep the higher cost between existing and new (default, conservative)
* "override" → always write the new cost, replacing existing
* "min" → keep the lower cost (permissive)
*
* Cost mapping from OccupancyGrid int8 to Nav2 costmap uint8:
* -1 (unknown) → not written (cell unchanged)
* 0 (free) → FREE_SPACE (0)
* 1-99 (cost) → scaled to Nav2 range [1, INSCRIBED_INFLATED_OBSTACLE-1]
* 100 (lethal) → LETHAL_OBSTACLE (254)
*
* Add to Nav2 local_costmap params:
* local_costmap:
* plugins: ["voxel_layer", "inflation_layer", "segmentation_layer"]
* segmentation_layer:
* plugin: "saltybot_segmentation_costmap::SegmentationCostmapLayer"
* enabled: true
* topic: /segmentation/costmap
* combination_method: max
* max_obstacle_distance: 0.0 (use all cells)
*/
class SegmentationCostmapLayer : public nav2_costmap_2d::Layer
{
public:
SegmentationCostmapLayer();
~SegmentationCostmapLayer() override = default;
void onInitialize() override;
void updateBounds(
double robot_x, double robot_y, double robot_yaw,
double * min_x, double * min_y,
double * max_x, double * max_y) override;
void updateCosts(
nav2_costmap_2d::Costmap2D & master_grid,
int min_i, int min_j, int max_i, int max_j) override;
void reset() override;
bool isClearable() override { return true; }
private:
void segCostmapCallback(const nav_msgs::msg::OccupancyGrid::SharedPtr msg);
/**
* Map OccupancyGrid int8 value to Nav2 costmap uint8 cost.
* -1 → leave unchanged (return false)
* 0 → FREE_SPACE (0)
* 1-99 → 1 to INSCRIBED_INFLATED_OBSTACLE-1 (linear scale)
* 100 → LETHAL_OBSTACLE (254)
*/
static bool occupancyToCost(int8_t occ, unsigned char & cost_out);
rclcpp::Subscription<nav_msgs::msg::OccupancyGrid>::SharedPtr sub_;
nav_msgs::msg::OccupancyGrid::SharedPtr latest_grid_;
std::mutex grid_mutex_;
std::string topic_;
std::string combination_method_; // "max", "override", "min"
bool enabled_;
// Track dirty bounds for updateBounds()
double last_min_x_, last_min_y_, last_max_x_, last_max_y_;
bool need_update_;
};
} // namespace saltybot_segmentation_costmap
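The int8 → uint8 translation documented in this header can be sketched in a few lines. Python is used here for brevity (the plugin implements it in C++); the constants 0/253/254 are the standard `nav2_costmap_2d` cost values, and the exact linear-scale formula is one plausible reading of "scaled to Nav2 range", not necessarily the plugin's:

```python
import numpy as np

FREE_SPACE = 0
INSCRIBED_INFLATED_OBSTACLE = 253
LETHAL_OBSTACLE = 254

def occupancy_to_cost(occ: int):
    """Illustrative occupancyToCost analogue: returns (written, cost)."""
    if occ < 0:                       # -1 unknown: leave master cell unchanged
        return False, 0
    if occ == 0:                      # free
        return True, FREE_SPACE
    if occ >= 100:                    # lethal
        return True, LETHAL_OBSTACLE
    # 1..99 scaled linearly into [1, INSCRIBED_INFLATED_OBSTACLE - 1]
    return True, 1 + round((occ - 1) * (INSCRIBED_INFLATED_OBSTACLE - 2) / 98)

# "max" combination (the default): keep the higher of existing vs incoming.
existing, incoming = np.uint8(120), occupancy_to_cost(100)[1]
merged = max(int(existing), incoming)
```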


@@ -0,0 +1,27 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>saltybot_segmentation_costmap</name>
<version>0.1.0</version>
<description>
Nav2 costmap2d plugin: SegmentationCostmapLayer.
Merges semantic traversability scores from sidewalk_seg_node into
the Nav2 local_costmap — sidewalk free, road high-cost, obstacle lethal.
</description>
<maintainer email="seb@vayrette.com">seb</maintainer>
<license>MIT</license>
<buildtool_depend>ament_cmake</buildtool_depend>
<depend>rclcpp</depend>
<depend>nav2_costmap_2d</depend>
<depend>nav_msgs</depend>
<depend>pluginlib</depend>
<test_depend>ament_lint_auto</test_depend>
<test_depend>ament_lint_common</test_depend>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>


@@ -0,0 +1,13 @@
<library path="saltybot_segmentation_costmap">
<class
name="saltybot_segmentation_costmap::SegmentationCostmapLayer"
type="saltybot_segmentation_costmap::SegmentationCostmapLayer"
base_class_type="nav2_costmap_2d::Layer">
<description>
Nav2 costmap layer that merges semantic traversability scores from
/segmentation/costmap (published by sidewalk_seg_node) into the
Nav2 local_costmap. Roads are high-cost, obstacles are lethal,
sidewalks are free.
</description>
</class>
</library>


@@ -0,0 +1,207 @@
#include "saltybot_segmentation_costmap/segmentation_costmap_layer.hpp"
#include <algorithm>
#include <string>
#include "nav2_costmap_2d/costmap_2d.hpp"
#include "pluginlib/class_list_macros.hpp"
PLUGINLIB_EXPORT_CLASS(
saltybot_segmentation_costmap::SegmentationCostmapLayer,
nav2_costmap_2d::Layer)
namespace saltybot_segmentation_costmap
{
using nav2_costmap_2d::FREE_SPACE;
using nav2_costmap_2d::LETHAL_OBSTACLE;
using nav2_costmap_2d::INSCRIBED_INFLATED_OBSTACLE;
using nav2_costmap_2d::NO_INFORMATION;
SegmentationCostmapLayer::SegmentationCostmapLayer()
: last_min_x_(-1e9), last_min_y_(-1e9),
last_max_x_(1e9), last_max_y_(1e9),
need_update_(false)
{}
// ── onInitialize ─────────────────────────────────────────────────────────────
void SegmentationCostmapLayer::onInitialize()
{
auto node = node_.lock();
if (!node) {
throw std::runtime_error("SegmentationCostmapLayer: node handle expired");
}
declareParameter("enabled", rclcpp::ParameterValue(true));
declareParameter("topic", rclcpp::ParameterValue(std::string("/segmentation/costmap")));
declareParameter("combination_method", rclcpp::ParameterValue(std::string("max")));
enabled_ = node->get_parameter(name_ + "." + "enabled").as_bool();
topic_ = node->get_parameter(name_ + "." + "topic").as_string();
combination_method_ = node->get_parameter(name_ + "." + "combination_method").as_string();
RCLCPP_INFO(
node->get_logger(),
"SegmentationCostmapLayer: topic=%s method=%s",
topic_.c_str(), combination_method_.c_str());
// Transient local QoS to match the sidewalk_seg_node publisher
rclcpp::QoS qos(rclcpp::KeepLast(1));
qos.transient_local();
qos.reliable();
sub_ = node->create_subscription<nav_msgs::msg::OccupancyGrid>(
topic_, qos,
std::bind(
&SegmentationCostmapLayer::segCostmapCallback, this,
std::placeholders::_1));
current_ = true;
}
// ── Subscription callback ─────────────────────────────────────────────────────
void SegmentationCostmapLayer::segCostmapCallback(
const nav_msgs::msg::OccupancyGrid::SharedPtr msg)
{
std::lock_guard<std::mutex> lock(grid_mutex_);
latest_grid_ = msg;
need_update_ = true;
}
// ── updateBounds ──────────────────────────────────────────────────────────────
void SegmentationCostmapLayer::updateBounds(
double /*robot_x*/, double /*robot_y*/, double /*robot_yaw*/,
double * min_x, double * min_y,
double * max_x, double * max_y)
{
if (!enabled_) {return;}
std::lock_guard<std::mutex> lock(grid_mutex_);
if (!latest_grid_) {return;}
const auto & info = latest_grid_->info;
double ox = info.origin.position.x;
double oy = info.origin.position.y;
double gw = info.width * info.resolution;
double gh = info.height * info.resolution;
last_min_x_ = ox;
last_min_y_ = oy;
last_max_x_ = ox + gw;
last_max_y_ = oy + gh;
*min_x = std::min(*min_x, last_min_x_);
*min_y = std::min(*min_y, last_min_y_);
*max_x = std::max(*max_x, last_max_x_);
*max_y = std::max(*max_y, last_max_y_);
}
// ── updateCosts ───────────────────────────────────────────────────────────────
void SegmentationCostmapLayer::updateCosts(
nav2_costmap_2d::Costmap2D & master,
int min_i, int min_j, int max_i, int max_j)
{
if (!enabled_) {return;}
std::lock_guard<std::mutex> lock(grid_mutex_);
if (!latest_grid_ || !need_update_) {return;}
const auto & info = latest_grid_->info;
const auto & data = latest_grid_->data;
float seg_res = info.resolution;
float seg_ox = static_cast<float>(info.origin.position.x);
float seg_oy = static_cast<float>(info.origin.position.y);
int seg_w = static_cast<int>(info.width);
int seg_h = static_cast<int>(info.height);
float master_res = static_cast<float>(master.getResolution());
// Iterate over the update window
for (int j = min_j; j < max_j; ++j) {
for (int i = min_i; i < max_i; ++i) {
// World coords of this master cell centre
double wx, wy;
master.mapToWorld(
static_cast<unsigned int>(i), static_cast<unsigned int>(j), wx, wy);
// Corresponding cell in segmentation grid
int seg_col = static_cast<int>((wx - seg_ox) / seg_res);
int seg_row = static_cast<int>((wy - seg_oy) / seg_res);
if (seg_col < 0 || seg_col >= seg_w ||
seg_row < 0 || seg_row >= seg_h) {
continue;
}
int seg_idx = seg_row * seg_w + seg_col;
if (seg_idx < 0 || seg_idx >= static_cast<int>(data.size())) {
continue;
}
int8_t occ = data[static_cast<size_t>(seg_idx)];
unsigned char new_cost = 0;
if (!occupancyToCost(occ, new_cost)) {
continue; // unknown cell — leave master unchanged
}
unsigned char existing = master.getCost(
static_cast<unsigned int>(i), static_cast<unsigned int>(j));
unsigned char final_cost = existing;
if (combination_method_ == "override") {
final_cost = new_cost;
} else if (combination_method_ == "min") {
final_cost = std::min(existing, new_cost);
} else {
// "max" (default) — most conservative
final_cost = std::max(existing, new_cost);
}
master.setCost(
static_cast<unsigned int>(i), static_cast<unsigned int>(j),
final_cost);
}
}
need_update_ = false;
}
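The per-cell lookup above (master cell centre → segmentation grid index) is easy to get off by one. A small Python mirror of the index math, using hypothetical grid dimensions, shows the intended behaviour:

```python
def world_to_seg_cell(wx, wy, seg_ox, seg_oy, seg_res, seg_w, seg_h):
    """Mirror of the C++ lookup: world point -> (row, col) in the seg grid,
    or None when the point falls outside the grid bounds."""
    col = int((wx - seg_ox) / seg_res)
    row = int((wy - seg_oy) / seg_res)
    if col < 0 or col >= seg_w or row < 0 or row >= seg_h:
        return None
    return row, col

# Hypothetical 4 m x 4 m grid, 0.25 m/cell, origin (-2, -2) -> 16x16 cells
centre = world_to_seg_cell(0.0, 0.0, -2.0, -2.0, 0.25, 16, 16)   # grid centre (8, 8)
outside = world_to_seg_cell(3.0, 0.0, -2.0, -2.0, 0.25, 16, 16)  # out of bounds -> None
```

The flat index used by the C++ code is then `row * seg_w + col`, matching the row-major layout of `OccupancyGrid.data`.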
// ── reset ─────────────────────────────────────────────────────────────────────
void SegmentationCostmapLayer::reset()
{
std::lock_guard<std::mutex> lock(grid_mutex_);
latest_grid_.reset();
need_update_ = false;
current_ = true;
}
// ── occupancyToCost ───────────────────────────────────────────────────────────
bool SegmentationCostmapLayer::occupancyToCost(int8_t occ, unsigned char & cost_out)
{
if (occ < 0) {
return false; // unknown — leave cell unchanged
}
if (occ == 0) {
cost_out = FREE_SPACE;
return true;
}
if (occ >= 100) {
cost_out = LETHAL_OBSTACLE;
return true;
}
// Scale occ 1..99 to cost 1..(INSCRIBED_INFLATED_OBSTACLE-1) = 1..252
int scaled = 1 + static_cast<int>(
(occ - 1) * (INSCRIBED_INFLATED_OBSTACLE - 2) / 98.0f);
cost_out = static_cast<unsigned char>(
std::min(scaled, static_cast<int>(INSCRIBED_INFLATED_OBSTACLE - 1)));
return true;
}
} // namespace saltybot_segmentation_costmap
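The occupancy-to-cost mapping in `occupancyToCost()` can be sanity-checked offline. A Python mirror with the nav2_costmap_2d cost constants hard-coded (0 = free, 253 = inscribed, 254 = lethal), a sketch for verification only:

```python
FREE_SPACE, INSCRIBED, LETHAL = 0, 253, 254  # nav2_costmap_2d cost constants

def occupancy_to_cost(occ):
    """Python mirror of occupancyToCost(); returns None for unknown cells."""
    if occ < 0:
        return None          # unknown -- leave master cell unchanged
    if occ == 0:
        return FREE_SPACE
    if occ >= 100:
        return LETHAL
    # scale occ 1..99 linearly onto cost 1..252
    scaled = 1 + int((occ - 1) * (INSCRIBED - 2) / 98.0)
    return min(scaled, INSCRIBED - 1)

# endpoints: occ=1 -> cost 1, occ=99 -> cost 252, occ=100 -> lethal (254)
```

Note the mapping never emits 253 itself, so semantic cells can raise cost without being confused with the inflation layer's inscribed value.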

View File

@ -0,0 +1,96 @@
# speed_profiles.yaml
# Outdoor adaptive speed profiles for SaltyBot follow-me mode.
#
# Run with:
# ros2 launch saltybot_speed_controller outdoor_speed.launch.py
#
# IMPORTANT: This controller outputs /cmd_vel which feeds into cmd_vel_bridge
# (PR #46). The bridge applies its own hard caps (max_linear_vel=0.5 m/s).
# For outdoor/ride profiles you MUST re-launch cmd_vel_bridge with higher limits:
# ros2 launch saltybot_bridge cmd_vel_bridge.launch.py max_linear_vel:=8.0
#
# Data flow:
# person_follower → /cmd_vel_raw → [speed_controller] → /cmd_vel → cmd_vel_bridge → STM32
# ── Controller ─────────────────────────────────────────────────────────────────
control_rate: 50.0 # Hz — 50ms tick, same as cmd_vel_bridge
input_topic: /cmd_vel_raw # Upstream cmd_vel source
output_topic: /cmd_vel # Output to cmd_vel_bridge
initial_profile: walk # Start in safest profile until target velocity known
# ── Profile selection ──────────────────────────────────────────────────────────
# Hysteresis prevents rapid walk↔jog↔ride oscillation.
# up_ticks: consecutive ticks above threshold before upgrading profile (50Hz → 5 = 100ms)
# down_ticks: ticks below threshold before downgrading (50Hz → 15 = 300ms)
# Downgrade is slower (safety: prefer lower profile when uncertain)
hysteresis_up_ticks: 5
hysteresis_down_ticks: 15
# ── Emergency stop ─────────────────────────────────────────────────────────────
# Triggered when target stops suddenly:
# (a) target velocity drops below stop_vel_threshold, OR
# (b) target deceleration exceeds hard_stop_decel_threshold m/s²
#
# emergency_decel_mps2: how fast the robot decelerates to zero.
# Conservative (0.5-1.0 m/s²) is safer for balance; too fast = tip-over.
# At ride speed (8 m/s), 2.0 m/s² gives a 4 s stop over ≈ 16 m — acceptable.
emergency_decel_mps2: 2.0 # m/s² — hard brake limit
stop_vel_threshold: 0.4 # m/s — target speed below this = stopped
hard_stop_decel_threshold: 3.0 # m/s² — target decel above this = hard stop
# ── GPS runaway protection ─────────────────────────────────────────────────────
# If GPS speed > commanded_speed × runaway_factor AND GPS speed > 50% of
# profile max, trigger emergency decel (catches downhill runaways).
gps_runaway_factor: 1.5 # 1.5× commanded = runaway
gps_timeout: 3.0 # s — ignore GPS if stale (tunnel, bridge)
# ── UWB range ─────────────────────────────────────────────────────────────────
uwb_range_field: range # JSON field name in /uwb/ranges messages
uwb_timeout: 2.0 # s — use GPS-only profile if UWB stale
# ── Speed profiles ─────────────────────────────────────────────────────────────
# Each profile applies when estimated target velocity is in [0, target_vel_max].
# Velocity limits are BEFORE cmd_vel_bridge caps (bridge must also be set high).
#
# Acceleration notes:
# - Balance robot needs time to adjust lean angle when speed changes.
# - accel_limit at RIDE (0.3 m/s²) means 0→8 m/s takes ~27s — intentionally slow.
# - Increase only after validating balance PID tuning at speed.
# - decel_limit can be slightly higher than accel_limit (braking > accelerating).
walk:
# Person walking at 0-2 m/s (≈ 0-7 km/h)
max_linear_vel: 1.5 # m/s (= 5.4 km/h)
max_angular_vel: 1.2 # rad/s
accel_limit: 0.6 # m/s² — moderate, balance handles it
decel_limit: 1.0 # m/s²
target_vel_max: 2.0 # m/s — target faster than this → upgrade to jog
jog:
# Person jogging at 2-5 m/s (≈ 7-18 km/h)
max_linear_vel: 3.0 # m/s (= 10.8 km/h)
max_angular_vel: 1.5 # rad/s
accel_limit: 0.5 # m/s² — a bit gentler at higher speed
decel_limit: 0.8 # m/s²
target_vel_max: 5.0 # m/s — target faster than this → upgrade to ride
ride:
# Person on EUC at 5-15 m/s (≈ 18-54 km/h; 20-30 km/h typical follow-me)
max_linear_vel: 8.0 # m/s (= 28.8 km/h)
max_angular_vel: 1.5 # rad/s — conservative for balance at speed
accel_limit: 0.3 # m/s² — very gentle: balance dynamics change at 8 m/s
decel_limit: 0.5 # m/s² — gradual stop from 8 m/s ≈ 16s / ~64m
target_vel_max: 15.0 # m/s — cap; EUC max ~30 km/h = 8.3 m/s typical
# ── Notes ─────────────────────────────────────────────────────────────────────
# 1. To enable ride profile, the Jetson → STM32 cmd_vel_bridge must also be
# reconfigured: max_linear_vel=8.0, ramp_rate=500 → consider ramp_rate=150
# at ride speed (slower ramp = smoother balance).
#
# 2. The STM32 balance PID gains likely need retuning for ride speed. Expect
# increased sensitivity to pitch angle errors at 8 m/s vs 0.5 m/s.
#
# 3. Test sequence recommendation:
# - Validate walk profile on flat indoor surface first
# - Test jog profile in controlled outdoor space
# - Only enable ride profile after verified balance at jog speeds
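The stop-time and stop-distance figures quoted in the comments above follow from constant-deceleration kinematics (t = v/a, d = v²/2a); a quick arithmetic check:

```python
def stop_time_s(v, a):
    """Time to brake from v (m/s) to zero at constant a (m/s^2)."""
    return v / a

def stop_distance_m(v, a):
    """Distance covered while braking: v^2 / (2a)."""
    return v * v / (2.0 * a)

# ride speed 8 m/s: emergency brake (2.0 m/s^2) vs profile decel (0.5 m/s^2)
# 2.0 m/s^2 -> 4 s, 16 m ; 0.5 m/s^2 -> 16 s, 64 m (matches the comments)
```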

View File

@ -0,0 +1,155 @@
"""
outdoor_speed.launch.py — Outdoor adaptive speed controller.
Inserts SpeedControllerNode between the person_follower (/cmd_vel_raw)
and cmd_vel_bridge (/cmd_vel), providing adaptive walk/jog/ride profiles
with smooth acceleration curves and emergency deceleration.
IMPORTANT: For ride profile (up to 8 m/s) you must ALSO re-launch
cmd_vel_bridge with matching limits:
ros2 launch saltybot_bridge cmd_vel_bridge.launch.py max_linear_vel:=8.0
Prerequisite node pipeline:
person_follower → /cmd_vel_raw → [speed_controller] → /cmd_vel → cmd_vel_bridge → STM32
Usage:
# Defaults (walk profile initially, adapts via UWB + GPS):
ros2 launch saltybot_speed_controller outdoor_speed.launch.py
# Force ride profile limits from start:
ros2 launch saltybot_speed_controller outdoor_speed.launch.py initial_profile:=ride
# Tune jog profile max speed:
ros2 launch saltybot_speed_controller outdoor_speed.launch.py jog_max_vel:=4.0
# Gentle emergency decel (longer stop distance, safer for balance):
ros2 launch saltybot_speed_controller outdoor_speed.launch.py emergency_decel:=1.0
# Full outdoor follow-me pipeline (speed controller + person follower):
ros2 launch saltybot_speed_controller outdoor_speed.launch.py \
launch_person_follower:=true
Topics:
/cmd_vel_raw Input (from person_follower or Nav2)
/cmd_vel Output (to cmd_vel_bridge)
/speed_controller/profile Active profile + status (JSON String)
/gps/vel Ground truth speed input (from gps_driver_node)
/uwb/ranges Target range input (for velocity estimation)
"""
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, IncludeLaunchDescription, OpaqueFunction
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
def _launch_nodes(context, *args, **kwargs):
params_file = LaunchConfiguration("params_file").perform(context)
def _s(k): return LaunchConfiguration(k).perform(context)
def _f(k): return float(LaunchConfiguration(k).perform(context))
def _i(k): return int(LaunchConfiguration(k).perform(context))
def _b(k): return LaunchConfiguration(k).perform(context).lower() == "true"
inline_params = {
"control_rate": _f("control_rate"),
"input_topic": _s("input_topic"),
"initial_profile": _s("initial_profile"),
"hysteresis_up_ticks": _i("hysteresis_up_ticks"),
"hysteresis_down_ticks": _i("hysteresis_down_ticks"),
"emergency_decel_mps2": _f("emergency_decel"),
"stop_vel_threshold": _f("stop_vel_threshold"),
"hard_stop_decel_threshold": _f("hard_stop_decel_threshold"),
"gps_runaway_factor": _f("gps_runaway_factor"),
"gps_timeout": _f("gps_timeout"),
"uwb_timeout": _f("uwb_timeout"),
# Per-profile params
"walk.max_linear_vel": _f("walk_max_vel"),
"walk.accel_limit": _f("walk_accel"),
"walk.decel_limit": _f("walk_decel"),
"jog.max_linear_vel": _f("jog_max_vel"),
"jog.accel_limit": _f("jog_accel"),
"jog.decel_limit": _f("jog_decel"),
"ride.max_linear_vel": _f("ride_max_vel"),
"ride.accel_limit": _f("ride_accel"),
"ride.decel_limit": _f("ride_decel"),
}
node_params = [params_file, inline_params] if params_file else [inline_params]
nodes = [
Node(
package="saltybot_speed_controller",
executable="speed_controller_node",
name="speed_controller",
output="screen",
parameters=node_params,
),
]
if _b("launch_person_follower"):
try:
follower_pkg = get_package_share_directory("saltybot_follower")
follower_launch = os.path.join(
follower_pkg, "launch", "person_follower.launch.py")
nodes.append(IncludeLaunchDescription(
PythonLaunchDescriptionSource(follower_launch),
launch_arguments={"cmd_vel_topic": "/cmd_vel_raw"}.items(),
))
except Exception:
pass # package not built yet — skip silently
return nodes
def generate_launch_description():
pkg_share = get_package_share_directory("saltybot_speed_controller")
default_params = os.path.join(pkg_share, "config", "speed_profiles.yaml")
return LaunchDescription([
DeclareLaunchArgument(
"params_file", default_value=default_params,
description="Full path to speed_profiles.yaml"),
# General
DeclareLaunchArgument("control_rate", default_value="50.0"),
DeclareLaunchArgument("input_topic", default_value="/cmd_vel_raw"),
DeclareLaunchArgument("initial_profile", default_value="walk"),
DeclareLaunchArgument("hysteresis_up_ticks", default_value="5"),
DeclareLaunchArgument("hysteresis_down_ticks", default_value="15"),
DeclareLaunchArgument("launch_person_follower", default_value="false",
description="Also launch saltybot_follower with /cmd_vel_raw output"),
# Emergency
DeclareLaunchArgument("emergency_decel", default_value="2.0",
description="Emergency decel limit (m/s²)"),
DeclareLaunchArgument("stop_vel_threshold", default_value="0.4"),
DeclareLaunchArgument("hard_stop_decel_threshold", default_value="3.0"),
# GPS runaway
DeclareLaunchArgument("gps_runaway_factor", default_value="1.5"),
DeclareLaunchArgument("gps_timeout", default_value="3.0"),
DeclareLaunchArgument("uwb_timeout", default_value="2.0"),
# Walk profile
DeclareLaunchArgument("walk_max_vel", default_value="1.5"),
DeclareLaunchArgument("walk_accel", default_value="0.6"),
DeclareLaunchArgument("walk_decel", default_value="1.0"),
# Jog profile
DeclareLaunchArgument("jog_max_vel", default_value="3.0"),
DeclareLaunchArgument("jog_accel", default_value="0.5"),
DeclareLaunchArgument("jog_decel", default_value="0.8"),
# Ride profile
DeclareLaunchArgument("ride_max_vel", default_value="8.0",
description="Max speed for EUC follow-me (m/s)"),
DeclareLaunchArgument("ride_accel", default_value="0.3",
description="Gentle accel — balance sensitive at 8 m/s"),
DeclareLaunchArgument("ride_decel", default_value="0.5"),
OpaqueFunction(function=_launch_nodes),
])

View File

@ -0,0 +1,31 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>saltybot_speed_controller</name>
<version>0.1.0</version>
<description>
Outdoor adaptive speed controller for SaltyBot follow-me mode.
Inserts between person_follower (/cmd_vel_raw) and cmd_vel_bridge (/cmd_vel),
selecting walk/jog/ride profiles based on estimated target velocity (UWB + GPS),
applying smooth trapezoidal acceleration curves, GPS runaway protection,
and emergency deceleration for balance stability at speeds up to 8 m/s.
</description>
<maintainer email="sl-controls@saltylab.local">sl-controls</maintainer>
<license>MIT</license>
<depend>rclpy</depend>
<depend>geometry_msgs</depend>
<depend>sensor_msgs</depend>
<depend>std_msgs</depend>
<buildtool_depend>ament_python</buildtool_depend>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>python3-pytest</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>

View File

@ -0,0 +1,491 @@
"""
speed_controller_node.py — Outdoor adaptive speed controller for SaltyBot.
Sits between upstream cmd_vel sources (person_follower, Nav2) and the
cmd_vel_bridge (PR #46), providing:
• Adaptive speed profiles (walk / jog / ride) selected from UWB target
  velocity and GPS ground-truth speed
• Smooth acceleration curves (trapezoidal ramp) per profile, tuned for
  balance stability at speed — more aggressive jerk at low speed, gentler
  at ride speeds (EUC follow-me at 20-30 km/h)
• Emergency deceleration when the target stops suddenly (detected via
  UWB range acceleration and target velocity drop)
• GPS runaway protection if GPS ground speed significantly overshoots
  the commanded velocity (downhill, slippage)
Data flow:
/cmd_vel_raw (Twist) — uncapped request from person_follower/Nav2
/gps/vel (TwistStamped) — robot ground-truth speed (GPS, 1 Hz)
/uwb/ranges (String JSON) — UWB range to target (for velocity estimate)
        ↓ SpeedControllerNode ↓
/cmd_vel (Twist) — smoothed, profile-capped output → cmd_vel_bridge
Profiles
walk : 1.5 m/s (person walking, 0-2 m/s)
jog  : 3.0 m/s (person jogging, 2-5 m/s)
ride : 8.0 m/s (person on EUC, 5-15 m/s)
Profile selection uses UWB-derived target velocity with hysteresis to avoid
rapid switching. GPS ground speed provides a secondary confirmation and
runaway guard.
"""
import collections
import json
import math
import time
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist, TwistStamped
from std_msgs.msg import String
# ── Pure control functions ────────────────────────────────────────────────────
# Extracted here so unit tests run without a ROS2 runtime.
# Profile name constants
WALK = "walk"
JOG = "jog"
RIDE = "ride"
_PROFILE_ORDER = [WALK, JOG, RIDE]
def estimate_target_velocity(prev_range: float, curr_range: float,
dt: float, robot_gps_speed: float) -> float:
"""
Estimate absolute target velocity from UWB range derivative + GPS.
When the robot follows a target:
range_rate = d(range)/dt (positive = target moving away)
target_vel ≈ robot_gps_speed + range_rate
This gives a rough scalar speed suitable for profile classification.
Falls back to robot_gps_speed if dt ≤ 0 or range values are invalid.
"""
if dt <= 0 or prev_range <= 0 or curr_range <= 0:
return robot_gps_speed # fall back to robot's own speed
range_rate = (curr_range - prev_range) / dt
return max(0.0, robot_gps_speed + range_rate)
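A worked example of the estimator above: a robot moving at 2 m/s whose UWB range to the target grows by 1 m over 0.5 s implies the target is doing roughly 4 m/s. A standalone mirror of the function, for illustration:

```python
def estimate_target_velocity(prev_range, curr_range, dt, robot_gps_speed):
    # mirror of the node's pure function
    if dt <= 0 or prev_range <= 0 or curr_range <= 0:
        return robot_gps_speed
    return max(0.0, robot_gps_speed + (curr_range - prev_range) / dt)

# range 5 m -> 6 m over 0.5 s while robot does 2 m/s: target ~= 2 + 2 = 4 m/s
v = estimate_target_velocity(5.0, 6.0, 0.5, 2.0)
# target closing faster than the robot moves clamps at 0 (no negative speeds)
v0 = estimate_target_velocity(6.0, 4.0, 0.5, 2.0)
```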
def select_profile(target_vel: float, profiles: dict,
current_profile: str,
hysteresis_up: int = 5,
hysteresis_down: int = 10,
_counters: dict = None) -> str:
"""
Select speed profile with up/down hysteresis.
target_vel : estimated target velocity (m/s)
profiles : dict of profile_name → {target_vel_min, target_vel_max, ...}
current_profile: currently active profile
hysteresis_up : ticks before switching to a faster profile
hysteresis_down: ticks before switching to a slower profile
Returns the (possibly unchanged) profile name.
_counters is an internal mutable dict passed by the caller for state
(avoids making this a method). Pass {} on first call; reuse thereafter.
"""
if _counters is None:
_counters = {}
# Determine which profile best fits the target velocity
desired = current_profile
for name in _PROFILE_ORDER:
p = profiles[name]
if target_vel <= p["target_vel_max"]:
desired = name
break
else:
desired = RIDE # above all max → stay at highest
if desired == current_profile:
_counters.clear()
return current_profile
key = desired
_counters[key] = _counters.get(key, 0) + 1
idx_desired = _PROFILE_ORDER.index(desired)
idx_current = _PROFILE_ORDER.index(current_profile)
threshold = hysteresis_up if idx_desired > idx_current else hysteresis_down
if _counters[key] >= threshold:
_counters.clear()
return desired
return current_profile
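The hysteresis behaviour can be exercised with a compact, simplified mirror of `select_profile()` (band limits from the YAML defaults): five consecutive ticks above a band's ceiling are required before upgrading.

```python
ORDER = ["walk", "jog", "ride"]
PROFILES = {"walk": {"target_vel_max": 2.0},
            "jog":  {"target_vel_max": 5.0},
            "ride": {"target_vel_max": 15.0}}

def select_profile(tv, cur, up=5, down=15, counters=None):
    # compact mirror of the node's up/down hysteresis logic
    counters = {} if counters is None else counters
    desired = next((n for n in ORDER if tv <= PROFILES[n]["target_vel_max"]), "ride")
    if desired == cur:
        counters.clear()
        return cur
    counters[desired] = counters.get(desired, 0) + 1
    need = up if ORDER.index(desired) > ORDER.index(cur) else down
    if counters[desired] >= need:
        counters.clear()
        return desired
    return cur

# target at 3 m/s: walk -> jog only after 5 consecutive ticks
c, prof = {}, "walk"
for _ in range(5):
    prof = select_profile(3.0, prof, counters=c)
# prof is now "jog"
```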
def apply_accel_limit(current: float, target: float,
accel_limit: float, decel_limit: float,
dt: float) -> float:
"""
Apply symmetric trapezoidal acceleration / deceleration limit.
current : current velocity (m/s)
target : desired velocity (m/s)
accel_limit : max acceleration (m/s²)
decel_limit : max deceleration (m/s²) — positive value
dt : time step (s)
Returns new velocity, ramped toward target within the limits.
"""
delta = target - current
# Decelerating = magnitude is decreasing (current and delta have opposite signs)
decelerating = (current > 0 and delta < 0) or (current < 0 and delta > 0)
limit = decel_limit if decelerating else accel_limit
if delta > 0:
step = min(delta, limit * dt)
else:
step = max(delta, -limit * dt)
return current + step
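`apply_accel_limit()` determines how long profile transitions take. Stepping a mirror of the ramp at the 50 Hz control rate confirms the "0 → 8 m/s takes ~27 s" figure from the YAML comments:

```python
def apply_accel_limit(current, target, accel, decel, dt):
    # mirror of the node's trapezoidal ramp
    delta = target - current
    decelerating = (current > 0 > delta) or (current < 0 < delta)
    limit = decel if decelerating else accel
    step = min(delta, limit * dt) if delta > 0 else max(delta, -limit * dt)
    return current + step

# ride profile: ramp 0 -> 8 m/s at 0.3 m/s^2, 50 Hz ticks (dt = 0.02 s)
v, ticks = 0.0, 0
while v < 8.0:
    v = apply_accel_limit(v, 8.0, 0.3, 0.5, 0.02)
    ticks += 1
# 8 / 0.3 ~= 26.7 s -> 1334 ticks at 50 Hz
```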
def limit_velocity(linear_x: float, angular_z: float,
max_linear: float, max_angular: float) -> tuple:
"""Hard-clamp linear and angular velocity to profile limits."""
lin = max(-max_linear, min(max_linear, linear_x))
ang = max(-max_angular, min(max_angular, angular_z))
return lin, ang
def check_emergency_stop(target_vel: float, target_vel_prev: float,
dt: float,
stop_vel_threshold: float,
hard_decel_threshold: float) -> bool:
"""
Detect sudden target stop requiring emergency deceleration.
Triggers if:
(a) target velocity drops below stop_vel_threshold, OR
(b) target deceleration exceeds hard_decel_threshold (m/s²)
Returns True if emergency stop should be engaged.
"""
if target_vel < stop_vel_threshold:
return True
if dt > 0:
target_decel = (target_vel_prev - target_vel) / dt # positive = decelerating
if target_decel > hard_decel_threshold:
return True
return False
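Mirroring `check_emergency_stop()` shows the two trigger paths separately (thresholds are the YAML defaults):

```python
def check_emergency_stop(tv, tv_prev, dt, stop_thresh, hard_decel_thresh):
    # mirror of the node's detector
    if tv < stop_thresh:
        return True                              # (a) target essentially stopped
    if dt > 0 and (tv_prev - tv) / dt > hard_decel_thresh:
        return True                              # (b) target braking hard
    return False

# (a) target drops to 0.3 m/s, below the 0.4 m/s stop threshold -> stop
# (b) target drops 5.0 -> 4.9 m/s in one 20 ms tick: 5 m/s^2 > 3 m/s^2 -> stop
# steady 3 m/s target: no trigger
```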
def check_gps_runaway(gps_speed: float, cmd_vel: float,
runaway_factor: float, profile_max: float) -> bool:
"""
Detect GPS runaway: robot moving significantly faster than commanded.
Triggers if gps_speed > max(cmd_vel, 0.5) * runaway_factor
AND gps_speed > profile_max (not a sensor glitch on a stationary robot).
"""
reference = max(abs(cmd_vel), 0.5)
return gps_speed > reference * runaway_factor and gps_speed > profile_max * 0.5
def emergency_decel_step(current_vel: float, decel_limit: float, dt: float) -> float:
"""Decelerate toward zero at emergency_decel_limit. Returns new velocity."""
if current_vel > 0:
return max(0.0, current_vel - decel_limit * dt)
if current_vel < 0:
return min(0.0, current_vel + decel_limit * dt)
return 0.0
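The runaway guard and the emergency ramp-down can be exercised together. A mirror with illustrative numbers (a jog-profile max of 3 m/s is assumed for the example):

```python
def check_gps_runaway(gps, cmd, factor, profile_max):
    # runaway only when clearly above command AND above half the profile max
    return gps > max(abs(cmd), 0.5) * factor and gps > profile_max * 0.5

def emergency_decel_step(v, decel, dt):
    # step toward zero at the emergency decel rate, never overshooting
    if v > 0:
        return max(0.0, v - decel * dt)
    if v < 0:
        return min(0.0, v + decel * dt)
    return 0.0

# downhill runaway: commanded 2 m/s but GPS reads 5 m/s -> trip
runaway = check_gps_runaway(5.0, 2.0, 1.5, 3.0)
# GPS glitch while nearly stopped (1 m/s reading) does not trip the guard
glitch = check_gps_runaway(1.0, 0.0, 1.5, 3.0)
```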
# ── Default profiles (overridable via YAML) ──────────────────────────────────
DEFAULT_PROFILES = {
WALK: {
"max_linear_vel": 1.5,
"max_angular_vel": 1.2,
"accel_limit": 0.6, # m/s²
"decel_limit": 1.0, # m/s²
"target_vel_min": 0.0,
"target_vel_max": 2.0,
},
JOG: {
"max_linear_vel": 3.0,
"max_angular_vel": 1.5,
"accel_limit": 0.5,
"decel_limit": 0.8,
"target_vel_min": 2.0,
"target_vel_max": 5.0,
},
RIDE: {
"max_linear_vel": 8.0,
"max_angular_vel": 1.5, # conservative for balance at speed
"accel_limit": 0.3, # very gentle — balance needs time at 8 m/s
"decel_limit": 0.5,
"target_vel_min": 5.0,
"target_vel_max": 15.0,
},
}
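One property worth asserting about any profile table (default or YAML override) is that the velocity bands are contiguous: each profile's `target_vel_max` equals the next profile's `target_vel_min`, so classification never falls into a gap or an overlap. A sketch of such a check (not part of the node):

```python
PROFILES = {
    "walk": {"target_vel_min": 0.0, "target_vel_max": 2.0},
    "jog":  {"target_vel_min": 2.0, "target_vel_max": 5.0},
    "ride": {"target_vel_min": 5.0, "target_vel_max": 15.0},
}
ORDER = ["walk", "jog", "ride"]

def bands_contiguous(profiles, order):
    """True when every profile's max equals the next profile's min."""
    return all(profiles[a]["target_vel_max"] == profiles[b]["target_vel_min"]
               for a, b in zip(order, order[1:]))
```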
# ── ROS2 Node ─────────────────────────────────────────────────────────────────
class SpeedControllerNode(Node):
def __init__(self):
super().__init__("speed_controller")
# ── Parameters ────────────────────────────────────────────────────────
self.declare_parameter("control_rate", 50.0)
self.declare_parameter("input_topic", "/cmd_vel_raw")
self.declare_parameter("output_topic", "/cmd_vel")
self.declare_parameter("initial_profile", WALK)
# Profile selection
self.declare_parameter("hysteresis_up_ticks", 5)
self.declare_parameter("hysteresis_down_ticks", 15)
# Emergency stop
self.declare_parameter("emergency_decel_mps2", 2.0)
self.declare_parameter("stop_vel_threshold", 0.4) # m/s
self.declare_parameter("hard_stop_decel_threshold", 3.0) # m/s²
# GPS runaway
self.declare_parameter("gps_runaway_factor", 1.5) # 1.5× commanded
self.declare_parameter("gps_timeout", 3.0) # s before ignoring GPS
# UWB range
self.declare_parameter("uwb_range_field", "range")
self.declare_parameter("uwb_timeout", 2.0)
# Per-profile overrides (flat params, merged into DEFAULT_PROFILES)
for profile in _PROFILE_ORDER:
self.declare_parameter(f"{profile}.max_linear_vel",
DEFAULT_PROFILES[profile]["max_linear_vel"])
self.declare_parameter(f"{profile}.max_angular_vel",
DEFAULT_PROFILES[profile]["max_angular_vel"])
self.declare_parameter(f"{profile}.accel_limit",
DEFAULT_PROFILES[profile]["accel_limit"])
self.declare_parameter(f"{profile}.decel_limit",
DEFAULT_PROFILES[profile]["decel_limit"])
self.declare_parameter(f"{profile}.target_vel_max",
DEFAULT_PROFILES[profile]["target_vel_max"])
self._profiles = self._load_profiles()
self._p = {
"control_rate": self.get_parameter("control_rate").value,
"hysteresis_up": self.get_parameter("hysteresis_up_ticks").value,
"hysteresis_down": self.get_parameter("hysteresis_down_ticks").value,
"emergency_decel": self.get_parameter("emergency_decel_mps2").value,
"stop_vel_threshold": self.get_parameter("stop_vel_threshold").value,
"hard_stop_decel_threshold": self.get_parameter("hard_stop_decel_threshold").value,
"gps_runaway_factor": self.get_parameter("gps_runaway_factor").value,
"gps_timeout": self.get_parameter("gps_timeout").value,
"uwb_range_field": self.get_parameter("uwb_range_field").value,
"uwb_timeout": self.get_parameter("uwb_timeout").value,
}
dt = 1.0 / self._p["control_rate"]
# ── State ─────────────────────────────────────────────────────────────
self._profile = self.get_parameter("initial_profile").value
self._profile_counters: dict = {}
self._current_linear = 0.0 # smoothed output linear.x
self._current_angular= 0.0 # smoothed output angular.z
self._request_linear = 0.0 # latest /cmd_vel_raw linear.x
self._request_angular= 0.0 # latest /cmd_vel_raw angular.z
self._gps_speed = 0.0 # magnitude from /gps/vel
self._gps_stamp = 0.0 # monotonic time of last GPS
self._uwb_range = 0.0 # latest range (m)
self._uwb_range_prev = 0.0
self._uwb_stamp = 0.0
self._uwb_stamp_prev = 0.0
self._target_vel = 0.0
self._target_vel_prev= 0.0
self._emergency = False
self._dt = dt
# History for runaway filter (last 3 GPS samples)
self._gps_history: collections.deque = collections.deque(maxlen=3)
# ── Subscriptions ─────────────────────────────────────────────────────
self.create_subscription(
Twist, self.get_parameter("input_topic").value,
self._cmd_raw_cb, 10)
self.create_subscription(
TwistStamped, "/gps/vel", self._gps_cb, 10)
self.create_subscription(
String, "/uwb/ranges", self._uwb_cb, 10)
# ── Publisher ──────────────────────────────────────────────────────────
self._cmd_pub = self.create_publisher(
Twist, self.get_parameter("output_topic").value, 10)
self._profile_pub = self.create_publisher(
String, "/speed_controller/profile", 10)
# ── Control timer ──────────────────────────────────────────────────────
self.create_timer(dt, self._control_cb)
self.get_logger().info(
f"SpeedController ready — profile={self._profile} "
f"rate={self._p['control_rate']}Hz "
f"walk≤{self._profiles[WALK]['max_linear_vel']}m/s "
f"jog≤{self._profiles[JOG]['max_linear_vel']}m/s "
f"ride≤{self._profiles[RIDE]['max_linear_vel']}m/s")
def _load_profiles(self) -> dict:
"""Merge ROS2 params into DEFAULT_PROFILES."""
import copy
profiles = copy.deepcopy(DEFAULT_PROFILES)
for name in _PROFILE_ORDER:
for field in ("max_linear_vel", "max_angular_vel", "accel_limit",
"decel_limit", "target_vel_max"):
val = self.get_parameter(f"{name}.{field}").value
profiles[name][field] = val
return profiles
# ── Callbacks ─────────────────────────────────────────────────────────────
def _cmd_raw_cb(self, msg: Twist):
self._request_linear = msg.linear.x
self._request_angular = msg.angular.z
def _gps_cb(self, msg: TwistStamped):
vx = msg.twist.linear.x
vy = msg.twist.linear.y
self._gps_speed = math.sqrt(vx * vx + vy * vy)
self._gps_stamp = time.monotonic()
self._gps_history.append(self._gps_speed)
def _uwb_cb(self, msg: String):
"""Parse JSON range message, update range estimate."""
try:
data = json.loads(msg.data)
field = self._p["uwb_range_field"]
r = float(data.get(field, data.get("distance", 0.0)))
if r > 0:
self._uwb_range_prev = self._uwb_range
self._uwb_stamp_prev = self._uwb_stamp
self._uwb_range = r
self._uwb_stamp = time.monotonic()
except (json.JSONDecodeError, ValueError, KeyError):
pass
# ── Control loop ──────────────────────────────────────────────────────────
def _control_cb(self):
now = time.monotonic()
dt = self._dt
# ── 1. Estimate target velocity from UWB range derivative + GPS ──────
uwb_fresh = (now - self._uwb_stamp) < self._p["uwb_timeout"]
gps_fresh = (now - self._gps_stamp) < self._p["gps_timeout"]
if uwb_fresh and self._uwb_stamp_prev > 0:
uwb_dt = self._uwb_stamp - self._uwb_stamp_prev
robot_speed = self._gps_speed if gps_fresh else self._current_linear
self._target_vel_prev = self._target_vel
self._target_vel = estimate_target_velocity(
self._uwb_range_prev, self._uwb_range, uwb_dt, robot_speed)
elif gps_fresh:
# No UWB — use GPS speed as proxy for target velocity
self._target_vel_prev = self._target_vel
self._target_vel = self._gps_speed
# else: hold previous estimate
# ── 2. Profile selection ──────────────────────────────────────────────
self._profile = select_profile(
self._target_vel, self._profiles, self._profile,
self._p["hysteresis_up"], self._p["hysteresis_down"],
self._profile_counters,
)
prof = self._profiles[self._profile]
# ── 3. Emergency stop detection ───────────────────────────────────────
emerg = check_emergency_stop(
self._target_vel, self._target_vel_prev, dt,
self._p["stop_vel_threshold"],
self._p["hard_stop_decel_threshold"],
)
# GPS runaway check (use smoothed GPS)
if gps_fresh:
gps_smooth = sum(self._gps_history) / len(self._gps_history)
if check_gps_runaway(
gps_smooth, self._current_linear,
self._p["gps_runaway_factor"],
prof["max_linear_vel"]):
emerg = True
self.get_logger().warn(
f"GPS runaway: gps={gps_smooth:.2f} "
f"cmd={self._current_linear:.2f} m/s — emergency decel",
throttle_duration_sec=2.0)
# Latch emergency; clear when speed reaches zero
if emerg and not self._emergency:
self.get_logger().warn(
f"Emergency stop: target_vel={self._target_vel:.2f} m/s "
f"(profile={self._profile})")
self._emergency = emerg or (
self._emergency and abs(self._current_linear) > 0.05)
# ── 4. Compute target command ─────────────────────────────────────────
if self._emergency:
# Override request: decelerate to zero at emergency rate
target_lin = 0.0
target_ang = 0.0
new_lin = emergency_decel_step(
self._current_linear, self._p["emergency_decel"], dt)
new_ang = emergency_decel_step(
self._current_angular, self._p["emergency_decel"] * 2.0, dt)
else:
# Apply profile caps to request
target_lin, target_ang = limit_velocity(
self._request_linear, self._request_angular,
prof["max_linear_vel"], prof["max_angular_vel"])
# Apply smooth acceleration / deceleration
new_lin = apply_accel_limit(
self._current_linear, target_lin,
prof["accel_limit"], prof["decel_limit"], dt)
new_ang = apply_accel_limit(
self._current_angular, target_ang,
prof["accel_limit"] * 2.0, # angular can be snappier
prof["decel_limit"] * 2.0, dt)
self._current_linear = new_lin
self._current_angular = new_ang
# ── 5. Publish ────────────────────────────────────────────────────────
out = Twist()
out.linear.x = self._current_linear
out.angular.z = self._current_angular
self._cmd_pub.publish(out)
# Profile status (for dashboard / logging)
status = String()
status.data = json.dumps({
"profile": self._profile,
"max_vel": prof["max_linear_vel"],
"target_vel": round(self._target_vel, 2),
"gps_speed": round(self._gps_speed, 2),
"cmd_linear": round(new_lin, 3),
"emergency": self._emergency,
})
self._profile_pub.publish(status)
# ── Entry point ───────────────────────────────────────────────────────────────
def main(args=None):
rclpy.init(args=args)
node = SpeedControllerNode()
try:
rclpy.spin(node)
except KeyboardInterrupt:
pass
finally:
node.destroy_node()
rclpy.try_shutdown()
if __name__ == "__main__":
main()

View File

@ -0,0 +1,4 @@
[develop]
script_dir=$base/lib/saltybot_speed_controller
[install]
install_scripts=$base/lib/saltybot_speed_controller

View File

@ -0,0 +1,31 @@
from setuptools import setup

package_name = "saltybot_speed_controller"

setup(
    name=package_name,
    version="0.1.0",
    packages=[package_name],
    data_files=[
        ("share/ament_index/resource_index/packages", [f"resource/{package_name}"]),
        (f"share/{package_name}", ["package.xml"]),
        (f"share/{package_name}/launch", ["launch/outdoor_speed.launch.py"]),
        (f"share/{package_name}/config", ["config/speed_profiles.yaml"]),
    ],
    install_requires=["setuptools"],
    zip_safe=True,
    maintainer="sl-controls",
    maintainer_email="sl-controls@saltylab.local",
    description=(
        "Outdoor adaptive speed controller: walk/jog/ride profiles "
        "with trapezoidal accel limits and GPS runaway protection"
    ),
    license="MIT",
    tests_require=["pytest"],
    entry_points={
        "console_scripts": [
            # Adaptive speed profile node: /cmd_vel_raw → /cmd_vel
            "speed_controller_node = saltybot_speed_controller.speed_controller_node:main",
        ],
    },
)


@ -0,0 +1,428 @@
"""
Unit tests for speed_controller_node pure functions.
No ROS2 runtime required.
Run with: pytest jetson/ros2_ws/src/saltybot_speed_controller/test/test_speed_controller.py
"""
import pytest
# ── Pure function mirrors ─────────────────────────────────────────────────────
WALK = "walk"
JOG = "jog"
RIDE = "ride"
_PROFILE_ORDER = [WALK, JOG, RIDE]
_PROFILES = {
WALK: {"max_linear_vel": 1.5, "max_angular_vel": 1.2,
"accel_limit": 0.6, "decel_limit": 1.0,
"target_vel_min": 0.0, "target_vel_max": 2.0},
JOG: {"max_linear_vel": 3.0, "max_angular_vel": 1.5,
"accel_limit": 0.5, "decel_limit": 0.8,
"target_vel_min": 2.0, "target_vel_max": 5.0},
RIDE: {"max_linear_vel": 8.0, "max_angular_vel": 1.5,
"accel_limit": 0.3, "decel_limit": 0.5,
"target_vel_min": 5.0, "target_vel_max": 15.0},
}
def _estimate_target_velocity(prev_range, curr_range, dt, robot_gps_speed):
    if dt <= 0 or prev_range <= 0 or curr_range <= 0:
        return robot_gps_speed
    range_rate = (curr_range - prev_range) / dt
    return max(0.0, robot_gps_speed + range_rate)


def _select_profile(target_vel, profiles, current_profile,
                    hysteresis_up=5, hysteresis_down=10, counters=None):
    if counters is None:
        counters = {}
    desired = current_profile
    for name in _PROFILE_ORDER:
        if target_vel <= profiles[name]["target_vel_max"]:
            desired = name
            break
    else:
        desired = RIDE
    if desired == current_profile:
        counters.clear()
        return current_profile
    key = desired
    counters[key] = counters.get(key, 0) + 1
    idx_d = _PROFILE_ORDER.index(desired)
    idx_c = _PROFILE_ORDER.index(current_profile)
    threshold = hysteresis_up if idx_d > idx_c else hysteresis_down
    if counters[key] >= threshold:
        counters.clear()
        return desired
    return current_profile


def _apply_accel_limit(current, target, accel_limit, decel_limit, dt):
    delta = target - current
    # Decelerating = magnitude is decreasing (current and delta have opposite signs)
    decelerating = (current > 0 and delta < 0) or (current < 0 and delta > 0)
    limit = decel_limit if decelerating else accel_limit
    if delta > 0:
        step = min(delta, limit * dt)
    else:
        step = max(delta, -limit * dt)
    return current + step


def _limit_velocity(linear_x, angular_z, max_linear, max_angular):
    return (max(-max_linear, min(max_linear, linear_x)),
            max(-max_angular, min(max_angular, angular_z)))


def _check_emergency_stop(target_vel, target_vel_prev, dt,
                          stop_threshold, hard_decel_threshold):
    if target_vel < stop_threshold:
        return True
    if dt > 0 and (target_vel_prev - target_vel) / dt > hard_decel_threshold:
        return True
    return False


def _check_gps_runaway(gps_speed, cmd_vel, runaway_factor, profile_max):
    reference = max(abs(cmd_vel), 0.5)
    return gps_speed > reference * runaway_factor and gps_speed > profile_max * 0.5


def _emergency_decel_step(current_vel, decel_limit, dt):
    if current_vel > 0:
        return max(0.0, current_vel - decel_limit * dt)
    if current_vel < 0:
        return min(0.0, current_vel + decel_limit * dt)
    return 0.0
# ── Target velocity estimation ────────────────────────────────────────────────

class TestTargetVelocityEstimation:
    def test_target_faster_than_robot(self):
        # Robot at 2 m/s GPS, range increasing 3 m/s → target at 5 m/s
        v = _estimate_target_velocity(5.0, 8.0, 1.0, 2.0)  # range: 5 → 8 in 1 s
        assert v == pytest.approx(5.0)

    def test_target_same_speed_constant_range(self):
        v = _estimate_target_velocity(5.0, 5.0, 1.0, 3.0)
        assert v == pytest.approx(3.0)

    def test_target_stopped_range_closing(self):
        # Robot at 3 m/s, range closing 3 m/s → target = 3 - 3 = 0
        v = _estimate_target_velocity(8.0, 5.0, 1.0, 3.0)
        assert v == pytest.approx(0.0)

    def test_invalid_dt_returns_gps_speed(self):
        assert _estimate_target_velocity(5.0, 6.0, 0.0, 2.5) == pytest.approx(2.5)

    def test_invalid_prev_range_returns_gps_speed(self):
        assert _estimate_target_velocity(0.0, 6.0, 1.0, 2.5) == pytest.approx(2.5)

    def test_result_non_negative(self):
        # Range closing much faster than robot → would be negative → clamp to 0
        v = _estimate_target_velocity(10.0, 3.0, 0.5, 1.0)
        assert v >= 0.0

    def test_high_speed_ride_estimate(self):
        # Robot at 6 m/s, range increasing 2 m/s → target at 8 m/s
        v = _estimate_target_velocity(4.0, 5.0, 0.5, 6.0)  # 2 m/s range rate
        assert v == pytest.approx(8.0)
# ── Profile selection ─────────────────────────────────────────────────────────

class TestProfileSelection:
    def test_walk_range_stays_walk(self):
        assert _select_profile(1.0, _PROFILES, WALK) == WALK

    def test_jog_range_stays_jog(self):
        assert _select_profile(3.5, _PROFILES, JOG) == JOG

    def test_ride_range_stays_ride(self):
        assert _select_profile(7.0, _PROFILES, RIDE) == RIDE

    def test_upgrade_needs_hysteresis_up(self):
        # Should NOT upgrade immediately
        counters = {}
        profile = WALK
        # 4 ticks at jog speed — not enough (hysteresis_up=5)
        for _ in range(4):
            profile = _select_profile(3.0, _PROFILES, profile,
                                      hysteresis_up=5, counters=counters)
        assert profile == WALK

    def test_upgrade_happens_at_threshold(self):
        counters = {}
        profile = WALK
        for _ in range(5):
            profile = _select_profile(3.0, _PROFILES, profile,
                                      hysteresis_up=5, counters=counters)
        assert profile == JOG

    def test_downgrade_needs_more_hysteresis(self):
        counters = {}
        profile = JOG
        for _ in range(9):
            profile = _select_profile(1.0, _PROFILES, profile,
                                      hysteresis_down=10, counters=counters)
        assert profile == JOG  # not yet

    def test_downgrade_happens_at_threshold(self):
        counters = {}
        profile = JOG
        for _ in range(10):
            profile = _select_profile(1.0, _PROFILES, profile,
                                      hysteresis_down=10, counters=counters)
        assert profile == WALK

    def test_counters_cleared_on_stable(self):
        counters = {}
        # Partial upgrade attempt, then target returns to walk range
        _select_profile(3.0, _PROFILES, WALK, hysteresis_up=5, counters=counters)
        _select_profile(3.0, _PROFILES, WALK, hysteresis_up=5, counters=counters)
        assert JOG in counters  # counter accumulated
        _select_profile(1.0, _PROFILES, WALK, hysteresis_up=5, counters=counters)
        assert len(counters) == 0  # cleared when stable

    def test_above_all_profiles_selects_ride(self):
        profile = _select_profile(20.0, _PROFILES, RIDE)
        assert profile == RIDE

    def test_boundary_walk_max(self):
        # 2.0 m/s is exactly at walk max → stays walk
        assert _select_profile(2.0, _PROFILES, WALK) == WALK

    def test_just_above_walk_max_triggers_upgrade(self):
        counters = {}
        profile = WALK
        for _ in range(5):
            profile = _select_profile(2.01, _PROFILES, profile,
                                      hysteresis_up=5, counters=counters)
        assert profile == JOG
# ── Acceleration limiter ──────────────────────────────────────────────────────

class TestAccelLimit:
    _DT = 0.02  # 50 Hz

    def test_accel_step(self):
        # 0 → 1 at 0.5 m/s², 0.02 s tick → step = 0.01
        v = _apply_accel_limit(0.0, 1.0, 0.5, 1.0, self._DT)
        assert v == pytest.approx(0.01)

    def test_decel_step(self):
        # 1 → 0 at 1.0 m/s², 0.02 s → step = 0.02
        v = _apply_accel_limit(1.0, 0.0, 0.5, 1.0, self._DT)
        assert v == pytest.approx(0.98)

    def test_does_not_overshoot(self):
        # Close to target (gap < step)
        v = _apply_accel_limit(0.995, 1.0, 0.5, 1.0, self._DT)
        assert v == pytest.approx(1.0)

    def test_does_not_undershoot(self):
        v = _apply_accel_limit(0.005, 0.0, 0.5, 1.0, self._DT)
        assert v == pytest.approx(0.0)

    def test_negative_to_positive(self):
        # -1 → 0 (decelerating negative) at decel_limit
        v = _apply_accel_limit(-1.0, 0.0, 0.5, 1.0, self._DT)
        assert v == pytest.approx(-0.98)  # reduced magnitude

    def test_zero_to_zero(self):
        assert _apply_accel_limit(0.0, 0.0, 0.5, 1.0, self._DT) == pytest.approx(0.0)

    def test_ride_accel_gentler_than_walk(self):
        dt = 0.02
        v_walk = _apply_accel_limit(0.0, 5.0,
                                    _PROFILES[WALK]["accel_limit"],
                                    _PROFILES[WALK]["decel_limit"], dt)
        v_ride = _apply_accel_limit(0.0, 5.0,
                                    _PROFILES[RIDE]["accel_limit"],
                                    _PROFILES[RIDE]["decel_limit"], dt)
        assert v_walk > v_ride  # walk accelerates faster than ride

    def test_time_to_reach_walk_max(self):
        # walk: accel=0.6 m/s², target=1.5 m/s, dt=0.02 s
        # expected ticks ≈ 1.5 / (0.6 * 0.02) = 125 ticks
        v = 0.0
        ticks = 0
        for _ in range(200):
            v = _apply_accel_limit(v, 1.5, 0.6, 1.0, 0.02)
            ticks += 1
            if v >= 1.5 - 1e-6:
                break
        assert v == pytest.approx(1.5, abs=1e-4)
        assert 120 <= ticks <= 130  # ≈ 125 ticks
# ── Velocity limiter ──────────────────────────────────────────────────────────

class TestLimitVelocity:
    def test_within_limits(self):
        lin, ang = _limit_velocity(1.0, 0.5, 8.0, 1.5)
        assert lin == pytest.approx(1.0)
        assert ang == pytest.approx(0.5)

    def test_linear_clamped_high(self):
        lin, _ = _limit_velocity(10.0, 0.0, 8.0, 1.5)
        assert lin == pytest.approx(8.0)

    def test_linear_clamped_low(self):
        lin, _ = _limit_velocity(-10.0, 0.0, 8.0, 1.5)
        assert lin == pytest.approx(-8.0)

    def test_angular_clamped(self):
        _, ang = _limit_velocity(0.0, 5.0, 8.0, 1.5)
        assert ang == pytest.approx(1.5)

    def test_zero_passthrough(self):
        lin, ang = _limit_velocity(0.0, 0.0, 8.0, 1.5)
        assert lin == pytest.approx(0.0)
        assert ang == pytest.approx(0.0)
# ── Emergency stop detection ──────────────────────────────────────────────────

class TestEmergencyStop:
    def test_triggers_below_stop_threshold(self):
        assert _check_emergency_stop(0.2, 3.0, 0.02, 0.4, 3.0) is True

    def test_triggers_on_hard_decel(self):
        # target_vel drops from 5 to 3 in 0.02 s → decel = 100 m/s² >> 3.0
        assert _check_emergency_stop(3.0, 5.0, 0.02, 0.4, 3.0) is True

    def test_no_trigger_gradual_decel(self):
        # 5 → 4.9 over 0.1 s → decel = 1.0 m/s², below the 3.0 m/s² threshold
        assert _check_emergency_stop(4.9, 5.0, 0.1, 0.4, 3.0) is False

    def test_no_trigger_at_cruising_speed(self):
        assert _check_emergency_stop(6.0, 6.0, 0.02, 0.4, 3.0) is False

    def test_boundary_exactly_at_stop_threshold(self):
        # target_vel == stop_threshold → NOT < → no trigger
        # prev must be close enough that the decel check doesn't also fire:
        # (0.42 - 0.40) / 0.02 = 1.0 m/s² < 3.0 → OK
        assert _check_emergency_stop(0.4, 0.42, 0.02, 0.4, 3.0) is False

    def test_triggers_just_below_threshold(self):
        assert _check_emergency_stop(0.39, 1.0, 0.02, 0.4, 3.0) is True
# ── GPS runaway detection ─────────────────────────────────────────────────────

class TestGpsRunaway:
    def test_runaway_detected(self):
        # GPS 5 m/s, commanded 2 m/s → 5 > 2*1.5=3, 5 > 8*0.5=4 → True
        assert _check_gps_runaway(5.0, 2.0, 1.5, 8.0) is True

    def test_no_runaway_normal(self):
        # GPS 2.1 m/s, commanded 2 m/s → 2.1 not > 3.0
        assert _check_gps_runaway(2.1, 2.0, 1.5, 8.0) is False

    def test_no_runaway_stationary(self):
        # Robot stopped (GPS 0.1 m/s, cmd 0), reference = max(0, 0.5) = 0.5
        # 0.1 not > 0.5*1.5=0.75
        assert _check_gps_runaway(0.1, 0.0, 1.5, 1.5) is False

    def test_runaway_only_at_meaningful_speed(self):
        # GPS=6, cmd=4, factor=1.5 → 6 > 4*1.5=6 → False (not strictly greater)
        assert _check_gps_runaway(6.0, 4.0, 1.5, 8.0) is False

    def test_runaway_needs_gps_above_half_profile_max(self):
        # Even if ratio is high, below 50% of profile max → no trigger
        # profile_max=8 → threshold=4; GPS=3 < 4 → no trigger
        assert _check_gps_runaway(3.0, 0.5, 1.5, 8.0) is False
# ── Emergency decel step ──────────────────────────────────────────────────────

class TestEmergencyDecelStep:
    _DT = 0.02
    _DECEL = 2.0

    def test_positive_speed_decreases(self):
        v = _emergency_decel_step(5.0, self._DECEL, self._DT)
        assert v == pytest.approx(5.0 - 2.0 * 0.02)

    def test_negative_speed_increases(self):
        v = _emergency_decel_step(-5.0, self._DECEL, self._DT)
        assert v == pytest.approx(-5.0 + 2.0 * 0.02)

    def test_does_not_go_past_zero_positive(self):
        v = _emergency_decel_step(0.03, self._DECEL, self._DT)
        assert v == pytest.approx(0.0)  # 0.03 - 0.04 → clamped to 0

    def test_does_not_go_past_zero_negative(self):
        v = _emergency_decel_step(-0.03, self._DECEL, self._DT)
        assert v == pytest.approx(0.0)

    def test_zero_stays_zero(self):
        assert _emergency_decel_step(0.0, self._DECEL, self._DT) == 0.0

    def test_full_stop_from_ride_speed(self):
        # 8 m/s at 2.0 m/s² → ticks to stop = 8 / (2.0 * 0.02) = 200 ticks = 4 s
        v = 8.0
        ticks = 0
        for _ in range(500):
            v = _emergency_decel_step(v, 2.0, 0.02)
            ticks += 1
            if v == 0.0:
                break
        assert v == 0.0
        assert 195 <= ticks <= 205
# ── Integration: profile + accel + limit ──────────────────────────────────────

class TestIntegration:
    """End-to-end control step simulation."""

    def _step(self, current, request, profile_name, dt=0.02):
        p = _PROFILES[profile_name]
        capped_lin, capped_ang = _limit_velocity(
            request, 0.0, p["max_linear_vel"], p["max_angular_vel"])
        new_lin = _apply_accel_limit(
            current, capped_lin, p["accel_limit"], p["decel_limit"], dt)
        return new_lin

    def test_walk_stays_under_1p5(self):
        v = 0.0
        for _ in range(300):
            v = self._step(v, 5.0, WALK)  # request far above limit
        assert v <= 1.5 + 1e-6

    def test_ride_reaches_8_from_zero(self):
        v = 0.0
        for _ in range(5000):
            v = self._step(v, 8.0, RIDE)
        assert v == pytest.approx(8.0, abs=1e-4)

    def test_emergency_overrides_profile(self):
        v = 6.0  # at ride speed
        # Emergency: decel toward 0
        for _ in range(500):
            v = _emergency_decel_step(v, 2.0, 0.02)
            if v == 0.0:
                break
        assert v == pytest.approx(0.0)

    def test_profile_upgrade_then_downgrade_cycle(self):
        counters = {}
        profile = WALK
        # Upgrade: 5 ticks at jog speed
        for _ in range(5):
            profile = _select_profile(3.0, _PROFILES, profile,
                                      hysteresis_up=5, counters=counters)
        assert profile == JOG
        # Downgrade: 10 ticks at walk speed
        counters = {}
        for _ in range(10):
            profile = _select_profile(1.0, _PROFILES, profile,
                                      hysteresis_down=10, counters=counters)
        assert profile == WALK
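For reference, the pure helpers mirrored in the test file compose into a single 50 Hz control tick roughly as follows. This is a standalone sketch, not the node's actual code: the function bodies are simplified copies of the test mirrors, and the walk-profile numbers are taken from `_PROFILES` above.

```python
# Standalone sketch: one control tick = cap request to profile limits,
# then ramp current velocity toward the capped target.

def limit_velocity(lin, ang, max_lin, max_ang):
    # Symmetric clamp to the profile's velocity caps
    return (max(-max_lin, min(max_lin, lin)),
            max(-max_ang, min(max_ang, ang)))

def apply_accel_limit(current, target, accel, decel, dt):
    # Trapezoidal ramp: decel limit applies when magnitude is shrinking
    delta = target - current
    decelerating = (current > 0 > delta) or (current < 0 < delta)
    limit = decel if decelerating else accel
    step = min(delta, limit * dt) if delta > 0 else max(delta, -limit * dt)
    return current + step

WALK_PROFILE = {"max_linear_vel": 1.5, "max_angular_vel": 1.2,
                "accel_limit": 0.6, "decel_limit": 1.0}

def control_tick(cur_lin, cur_ang, req_lin, req_ang, dt=0.02):
    cap_lin, cap_ang = limit_velocity(req_lin, req_ang,
                                      WALK_PROFILE["max_linear_vel"],
                                      WALK_PROFILE["max_angular_vel"])
    new_lin = apply_accel_limit(cur_lin, cap_lin,
                                WALK_PROFILE["accel_limit"],
                                WALK_PROFILE["decel_limit"], dt)
    new_ang = apply_accel_limit(cur_ang, cap_ang,
                                WALK_PROFILE["accel_limit"] * 2.0,
                                WALK_PROFILE["decel_limit"] * 2.0, dt)
    return new_lin, new_ang

lin, ang = 0.0, 0.0
for _ in range(300):               # 6 s at 50 Hz, request far above the cap
    lin, ang = control_tick(lin, ang, 5.0, 0.0)
# lin has settled at the walk cap (1.5 m/s); ang stays 0.0
```

This mirrors the `TestIntegration._step` helper: the cap is applied before the ramp, so an over-limit request still produces a smooth 0.6 m/s² acceleration that levels off exactly at the profile maximum (about 125 ticks).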


@ -92,7 +92,8 @@ Jetson Orin Nano Super (Ubuntu 22.04 / JetPack 6 / CUDA 12.x)
| 2a | ✅ Done (PR #17) | Sensor drivers — `saltybot_bringup` package |
| 2a+ | ✅ Done (PR #36) | Orin update: Dockerfile JetPack 6, RTAB-Map launch + config |
| 2b | ✅ Done (PR #49) | Nav2 integration — path planning + obstacle avoidance |
| 2c | ✅ Done (this PR) | 4× IMX219 surround vision + Nav2 camera obstacle layer |
| 2c | ✅ Done (PR #52) | 4× IMX219 surround vision + Nav2 camera obstacle layer |
| 2d | ✅ Done (this PR) | Outdoor navigation — OSM routing + GPS waypoints + geofence |
---