sl-perception b9b866854f feat(fleet): multi-robot SLAM — map sharing + cooperative exploration (Issue #134)
New packages:
  saltybot_fleet_msgs — 4 msgs (RobotInfo, MapChunk, Frontier, FrontierArray)
                        + 2 srvs (RegisterRobot, RequestFrontier)
  saltybot_fleet      — 4 nodes + 2 launch files + config + CLI tool

Nodes:
  map_broadcaster_node  — zlib-compress local OccupancyGrid → /fleet/maps @ 1Hz
                          + /fleet/robots heartbeat with battery/status
  fleet_manager_node    — robot registry, MapMerger (multi-grid SE2-aligned merge),
                          frontier aggregation, /fleet/request_frontier service,
                          heartbeat timeout + stale frontier re-assignment
  frontier_detector_node — scipy label-based frontier detection on merged map
                           → /fleet/exploration_frontiers_raw
  fleet_explorer_node   — Nav2 NavigateToPose cooperative exploration state machine:
                          IDLE→request→NAVIGATING→ARRIVED→IDLE + STALLED backoff (sketch below)
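
The explorer's control flow reduces to a small polling state machine. Below is a minimal sketch of just the transitions; request_frontier, goal_reached and goal_stalled are hypothetical stubs standing in for the /fleet/request_frontier call and the Nav2 action result, not the node's actual API.

# Minimal sketch of the explorer state machine transitions (illustrative only).
# request_frontier() / goal_reached() / goal_stalled() are hypothetical stubs.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    NAVIGATING = auto()
    ARRIVED = auto()
    STALLED = auto()

def step(state, request_frontier, goal_reached, goal_stalled):
    if state is State.IDLE:
        # Ask the coordinator for a frontier; stay IDLE if none is available.
        return State.NAVIGATING if request_frontier() else State.IDLE
    if state is State.NAVIGATING:
        if goal_stalled():
            return State.STALLED        # back off, then retry from IDLE
        return State.ARRIVED if goal_reached() else State.NAVIGATING
    # ARRIVED and STALLED both fall back to IDLE to request the next frontier.
    return State.IDLE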

Supporting modules (illustrative sketches below):
  map_compressor.py    — binary serialise + zlib OccupancyGrid encode/decode
  map_merger.py        — SE(2)-transform-aware multi-grid merge with conservative
                         obstacle inflation (occupied beats free on conflict)
  frontier_detector.py — numpy frontier mask + scipy connected-components + scoring
  landmark_aligner.py  — ArUco-landmark SE(2) estimation (Horn 2D least-squares)
                         to align robot map frames into common fleet_map frame
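
As an illustration of the map_compressor.py idea, here is a minimal zlib round-trip for an OccupancyGrid-style int8 cell array; the function names and the width/height/data layout are assumptions for the sketch, not the real MapChunk fields.

# Illustrative zlib round-trip for an OccupancyGrid-style int8 cell array.
import zlib
import numpy as np

def compress_grid(cells: np.ndarray) -> bytes:
    # OccupancyGrid data is int8 (-1 unknown, 0 free, 100 occupied).
    return zlib.compress(cells.astype(np.int8).tobytes())

def decompress_grid(blob: bytes, width: int, height: int) -> np.ndarray:
    flat = np.frombuffer(zlib.decompress(blob), dtype=np.int8)
    return flat.reshape(height, width)

grid = np.full((64, 64), -1, dtype=np.int8)   # mostly unknown map
grid[10:20, 10:20] = 0                        # a free patch
blob = compress_grid(grid)
assert np.array_equal(decompress_grid(blob, 64, 64), grid)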
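
The conservative merge rule in map_merger.py ("occupied beats free on conflict") reduces, per cell, to keeping the most pessimistic known value once grids are resampled into the common frame. A simplified sketch, with the SE(2) resampling step omitted:

# Simplified cell-wise merge: unknown (-1) defers to any known value, and on
# conflict the more pessimistic (occupied) value wins.
import numpy as np

def merge_cells(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    out = a.copy()
    out[a == -1] = b[a == -1]                     # fill unknown cells from b
    both_known = (a != -1) & (b != -1)
    out[both_known] = np.maximum(a[both_known], b[both_known])  # occupied wins
    return out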
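
Frontier detection as described in frontier_detector.py (free cells adjacent to unknown, grouped with scipy connected components) can be sketched roughly as follows; the size-based scoring here is a placeholder for the package's actual scoring:

# Sketch of label-based frontier detection: a frontier cell is a free cell
# with at least one unknown 8-neighbour; connected frontier cells are grouped
# with scipy.ndimage.label.
import numpy as np
from scipy import ndimage

def find_frontiers(grid: np.ndarray):
    free = grid == 0
    unknown = grid == -1
    # Dilate the unknown mask so "touches unknown" becomes a single lookup.
    near_unknown = ndimage.binary_dilation(unknown, structure=np.ones((3, 3)))
    frontier_mask = free & near_unknown
    labels, n = ndimage.label(frontier_mask, structure=np.ones((3, 3)))
    centroids = ndimage.center_of_mass(frontier_mask, labels, range(1, n + 1))
    sizes = ndimage.sum(frontier_mask, labels, range(1, n + 1))
    return list(zip(centroids, sizes))    # (row, col) centroid and cell count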
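
The landmark alignment in landmark_aligner.py is the 2-D least-squares point-set fit (Horn-style, without scale). A compact sketch, assuming matched landmark coordinates are already available in both frames:

# Least-squares SE(2) fit between matched 2-D landmark sets: returns theta
# and t such that dst ≈ R(theta) @ src + t in a least-squares sense.
import numpy as np

def fit_se2(src: np.ndarray, dst: np.ndarray):
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # The 2-D cross-covariance gives the rotation angle in closed form.
    num = np.sum(src_c[:, 0] * dst_c[:, 1] - src_c[:, 1] * dst_c[:, 0])
    den = np.sum(src_c[:, 0] * dst_c[:, 0] + src_c[:, 1] * dst_c[:, 1])
    theta = np.arctan2(num, den)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return theta, t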

Topic layout:
  /fleet/maps                    MapChunk       per-robot compressed grids
  /fleet/robots                  RobotInfo      heartbeats + status
  /fleet/merged_map              OccupancyGrid  coordinator merged output
  /fleet/exploration_frontiers   FrontierArray  consolidated frontiers
  /fleet/status                  String (JSON)  coordinator health
  /<robot_ns>/rtabmap/map        input per robot
  /<robot_ns>/rtabmap/odom       input per robot
  /<robot_ns>/navigate_to_pose   Nav2 action per robot
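
For reference, the JSON status heartbeat can be consumed with a plain rclpy subscriber; the node name below is illustrative, and the JSON field names are defined by fleet_manager_node, not shown here.

# Minimal rclpy subscriber for the JSON heartbeat on /fleet/status.
import json
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class FleetStatusWatcher(Node):
    def __init__(self):
        super().__init__('fleet_status_watcher')
        self.create_subscription(String, '/fleet/status', self.on_status, 10)

    def on_status(self, msg: String):
        status = json.loads(msg.data)
        self.get_logger().info(f'fleet status: {status}')

def main():
    rclpy.init()
    rclpy.spin(FleetStatusWatcher())

if __name__ == '__main__':
    main()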

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 09:36:17 -05:00

Jetson Nano — AI/SLAM Platform Setup

Self-balancing robot: Jetson Nano dev environment for ROS2 Humble + SLAM stack.

Stack

Component      Version / Part
Platform       Jetson Nano 4GB
JetPack        4.6 (L4T R32.6.1, CUDA 10.2)
ROS2           Humble Hawksbill
DDS            CycloneDDS
SLAM           slam_toolbox
Nav            Nav2
Depth camera   Intel RealSense D435i
LiDAR          RPLIDAR A1M8
MCU bridge     STM32F722 (USB CDC @ 921600)

Quick Start

# 1. Host setup (once, on fresh JetPack 4.6)
sudo bash scripts/setup-jetson.sh

# 2. Build Docker image
bash scripts/build-and-run.sh build

# 3. Start full stack
bash scripts/build-and-run.sh up

# 4. Open ROS2 shell
bash scripts/build-and-run.sh shell

Docs

docs/pinout.md          GPIO/I2C/UART pinout reference
docs/power-budget.md    Power budget analysis (10W envelope)

Files

jetson/
├── Dockerfile              # L4T base + ROS2 Humble + SLAM packages
├── docker-compose.yml      # Multi-service stack (ROS2, RPLIDAR, D435i, STM32)
├── README.md               # This file
├── docs/
│   ├── pinout.md           # GPIO/I2C/UART pinout reference
│   └── power-budget.md     # Power budget analysis (10W envelope)
└── scripts/
    ├── entrypoint.sh       # Docker container entrypoint
    ├── setup-jetson.sh     # Host setup (udev, Docker, nvpmodel)
    └── build-and-run.sh    # Build/run helper

Power Budget (Summary)

Scenario                Total
Idle                    2.9W
Nominal (SLAM active)   ~10.2W
Peak                    15.4W

Target: 10W (MAXN nvpmodel). Nominal SLAM load slightly exceeds this, so use RPLIDAR standby and the D435i at 640x480 to stay within the envelope. See docs/power-budget.md for the full analysis.