feat(calibration): IMX219 intrinsic + extrinsic calibration workflow #106 #113
jetson/calibration/README.md
# IMX219 Camera Calibration Workflow

Full step-by-step calibration procedure for the 4× IMX219 160° fisheye CSI cameras
on saltybot (cam0=front, cam1=right, cam2=rear, cam3=left).

## Hardware

| Camera | Position | Yaw   | Topic                        |
|--------|----------|-------|------------------------------|
| cam0   | Front    | 0°    | /camera/front/image_raw      |
| cam1   | Right    | -90°  | /camera/right/image_raw      |
| cam2   | Rear     | 180°  | /camera/rear/image_raw       |
| cam3   | Left     | 90°   | /camera/left/image_raw       |

Mount geometry: radius=5cm from robot centre, height=30cm, 90° angular spacing.

## Prerequisites

- OpenCV with fisheye support: `pip3 install opencv-python numpy pyyaml`
- For ROS2 mode: ROS2 Humble, cv_bridge, image_transport
- Printed checkerboard: 8×6 inner corners, 25mm squares (A3 paper recommended)
- Generator (note: this one-liner produces a ChArUco board, with markers in the white squares; for the plain checkerboard this workflow detects, use the calib.io generator below): `python3 -c "import cv2; board=cv2.aruco.CharucoBoard((9,7),0.025,0.0175,cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)); cv2.imwrite('board.png', board.generateImage((700,500)))"`
- Or download from: https://calib.io/pages/camera-calibration-pattern-generator

## Step 1 — Print and Prepare the Checkerboard

- Print the 8×6 inner-corner checkerboard at exactly 25mm per square
- Mount flat on a rigid board (cardboard or foamcore) — no warping allowed
- Measure actual square size with calipers, update `--square-size` if different

## Step 2 — Start Cameras

```bash
# On the Jetson, in the saltybot docker container:
ros2 launch saltybot_cameras cameras.launch.py

# Verify all 4 topics are publishing:
ros2 topic list | grep camera
ros2 topic hz /camera/front/image_raw
```

## Step 3 — Capture Calibration Images (30 per camera)

Run for each camera in turn. Move the board to many different positions and
angles within the field of view — especially corners and edges.

```bash
# Option A: ROS2 mode (recommended on Jetson, requires running cameras)
ros2 run saltybot_calibration capture_calibration_images.py \
    --ros --camera front --n-images 30 \
    --output-dir /mnt/nvme/saltybot/calibration/raw

ros2 run saltybot_calibration capture_calibration_images.py \
    --ros --camera right --n-images 30 \
    --output-dir /mnt/nvme/saltybot/calibration/raw

ros2 run saltybot_calibration capture_calibration_images.py \
    --ros --camera rear --n-images 30 \
    --output-dir /mnt/nvme/saltybot/calibration/raw

ros2 run saltybot_calibration capture_calibration_images.py \
    --ros --camera left --n-images 30 \
    --output-dir /mnt/nvme/saltybot/calibration/raw

# Option B: All cameras sequentially (prompts between cameras)
ros2 run saltybot_calibration capture_calibration_images.py \
    --ros --camera all --n-images 30 \
    --output-dir /mnt/nvme/saltybot/calibration/raw

# Option C: Standalone (no ROS2, uses /dev/videoX directly)
python3 capture_calibration_images.py \
    --camera front --n-images 30 \
    --device /dev/video0 \
    --output-dir /mnt/nvme/saltybot/calibration/raw
```

### Capture Tips for Fisheye Lenses

- Cover the entire FoV — include corners and edges (where distortion is strongest)
- Use 5+ different orientations of the board (tilt, rotate, skew)
- Keep the board fully visible (no partial views)
- Good lighting, no motion blur
- Aim for 20-30 images with varied positions

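One quick way to check the first tip numerically is to bin detected corner locations into a coarse grid and measure how much of the frame they touch. This is a hedged sketch with synthetic corner arrays (in a real run they would come from `cv2.findChessboardCorners`); the function name and grid size are illustrative assumptions, not repo APIs:

```python
import numpy as np

def coverage_fraction(corner_sets, image_size=(640, 480), grid=(8, 6)):
    """Fraction of grid cells touched by at least one detected corner."""
    w, h = image_size
    hit = np.zeros((grid[1], grid[0]), dtype=bool)  # rows x cols
    for corners in corner_sets:  # each entry: (N, 2) pixel coordinates
        cols = np.clip((corners[:, 0] / w * grid[0]).astype(int), 0, grid[0] - 1)
        rows = np.clip((corners[:, 1] / h * grid[1]).astype(int), 0, grid[1] - 1)
        hit[rows, cols] = True
    return hit.mean()

# Synthetic example: corners clustered near the image centre -> poor coverage
centre_only = [np.random.default_rng(0).uniform([200, 150], [440, 330], (48, 2))]
print(f'coverage: {coverage_fraction(centre_only):.2f}')  # well below 1.0
```

If the fraction stays low after a capture session, the lens periphery is under-sampled and the distortion coefficients will be poorly constrained there.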
## Step 4 — Compute Intrinsics per Camera

```bash
for cam in front right rear left; do
  ros2 run saltybot_calibration calibrate_camera.py \
      --images-dir /mnt/nvme/saltybot/calibration/raw/cam_${cam} \
      --output /mnt/nvme/saltybot/calibration/${cam}/camera_info.yaml \
      --camera-name camera_${cam} \
      --board-size 8x6 \
      --square-size 0.025 \
      --image-size 640x480 \
      --show-reprojection
done
```

## Step 5 — Verify Reprojection Error

Target: RMS reprojection error < 1.5 px per camera.

If RMS > 1.5 px:

1. Remove outlier images (highest per-image error from table)
2. Recapture with more varied board positions
3. Check board is truly flat and square size is correct
4. Try `--image-size 1280x720` if capturing at higher resolution

```bash
# Quick check — view undistorted preview
ros2 run saltybot_calibration preview_undistorted.py \
    --image /mnt/nvme/saltybot/calibration/raw/cam_front/frame_0000.png \
    --calibration /mnt/nvme/saltybot/calibration/front/camera_info.yaml
```

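The outlier-removal step above can be scripted: given the per-image RMS errors from the calibration table, keep only images under the threshold and recalibrate. A minimal sketch with made-up error values (names are illustrative):

```python
import numpy as np

def select_inliers(image_names, per_image_rms, max_px=1.5):
    """Keep images whose per-image RMS reprojection error is under max_px."""
    errs = np.asarray(per_image_rms)
    keep = errs < max_px
    inliers = [n for n, k in zip(image_names, keep) if k]
    dropped = [n for n, k in zip(image_names, keep) if not k]
    return inliers, dropped

names = [f'frame_{i:04d}.png' for i in range(5)]
errs = [0.4, 0.6, 3.2, 0.5, 1.9]  # illustrative per-image errors
inliers, outliers = select_inliers(names, errs)
print(outliers)  # frames 0002 and 0004 exceed 1.5 px
```

Dropping more than a few images this way usually means the capture set itself is weak; recapture rather than prune aggressively.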
## Step 6 — Compute Extrinsics (Mechanical Mode)

Uses known mount geometry (radius=5cm, height=30cm, 90° spacing) to compute
analytically exact camera-to-base_link transforms.

```bash
ros2 run saltybot_calibration calibrate_extrinsics.py \
    --mode mechanical \
    --mount-radius 0.05 \
    --mount-height 0.30 \
    --output /mnt/nvme/saltybot/calibration/extrinsics.yaml \
    --generate-launch \
        ~/ros2_ws/src/saltybot_bringup/launch/static_transforms.launch.py
```

If mechanical dimensions differ from defaults, update `--mount-radius` and
`--mount-height` from your actual measurements.

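The mechanical-mode position calculation is simple enough to check by hand. A minimal sketch (the function name is illustrative, not a repo API), assuming REP-103 axes (x forward, y left, z up) with CCW-positive yaw, so the right-facing camera (yaw -90°) lands at y = -radius:

```python
import math

def mount_position(yaw_deg, radius=0.05, height=0.30):
    """Camera optical-centre position in base_link (REP-103: x fwd, y left, z up)."""
    yaw = math.radians(yaw_deg)
    return (round(radius * math.cos(yaw), 4),   # forward component
            round(radius * math.sin(yaw), 4),   # left component
            height)

for name, yaw in [('front', 0), ('right', -90), ('rear', 180), ('left', 90)]:
    print(name, mount_position(yaw))
```

Comparing this against the generated extrinsics.yaml is a cheap consistency check on the yaw sign convention before the transforms are published.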
## Step 7 — Generate Static Transform Launch File

(Already done by `--generate-launch` in Step 6, but can be regenerated independently:)

```bash
ros2 run saltybot_calibration generate_static_transforms.py \
    --extrinsics /mnt/nvme/saltybot/calibration/extrinsics.yaml \
    --output ~/ros2_ws/src/saltybot_bringup/launch/static_transforms.launch.py
```

## Step 8 — Update surround_vision_params.yaml

Edit `jetson/config/surround_vision_params.yaml` (or the equivalent in your
saltybot_surround package) to point camera_info_url at the new calibration files:

```yaml
camera_info_urls:
  front: 'file:///mnt/nvme/saltybot/calibration/front/camera_info.yaml'
  right: 'file:///mnt/nvme/saltybot/calibration/right/camera_info.yaml'
  rear: 'file:///mnt/nvme/saltybot/calibration/rear/camera_info.yaml'
  left: 'file:///mnt/nvme/saltybot/calibration/left/camera_info.yaml'
```

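Nodes normally load these files through camera_info_manager, but the fields are plain YAML and easy to parse directly when prototyping. A hedged sketch (function name and sample text are illustrative, not part of the repo):

```python
import numpy as np
import yaml

def parse_camera_info(text):
    """Parse camera_info YAML text into (K 3x3, D 4-vector, (width, height))."""
    info = yaml.safe_load(text)
    K = np.array(info['camera_matrix']['data'], dtype=float).reshape(3, 3)
    D = np.array(info['distortion_coefficients']['data'], dtype=float)
    return K, D, (info['image_width'], info['image_height'])

sample = """
image_width: 640
image_height: 480
camera_matrix: {rows: 3, cols: 3, data: [229.0, 0.0, 320.0, 0.0, 229.0, 240.0, 0.0, 0.0, 1.0]}
distortion_coefficients: {rows: 1, cols: 4, data: [0.0, 0.0, 0.0, 0.0]}
"""
K, D, size = parse_camera_info(sample)
print(K[0, 0], size)  # 229.0 (640, 480)
```

The resulting K and D plug straight into `cv2.fisheye.undistortImage` or `cv2.fisheye.initUndistortRectifyMap`.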
## Step 9 — Restart Nodes

```bash
# Restart camera and surround nodes to pick up new calibration
ros2 lifecycle set /saltybot_cameras shutdown
ros2 launch saltybot_bringup cameras.launch.py
ros2 launch saltybot_surround surround.launch.py
```

## Output Files

After full calibration, you should have:

```
/mnt/nvme/saltybot/calibration/
  front/camera_info.yaml      # intrinsics: K, D (equidistant)
  right/camera_info.yaml
  rear/camera_info.yaml
  left/camera_info.yaml
  extrinsics.yaml             # base_link -> camera_*_optical_frame
```

Template files are in `jetson/calibration/` (tracked in git) with placeholder
IMX219 values (fx=229px @ 640×480). Replace with calibrated values.

## Distortion Model Reference

The IMX219 160° fisheye uses the **equidistant** (Kannala-Brandt) model:

```
theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8)
r = f * theta_d
```

where theta is the angle of incidence and r is the image radius. OpenCV's
`cv2.fisheye` module implements this model with 4 coefficients [k1, k2, k3, k4].

This is different from the `plumb_bob` (radial/tangential) model used for
standard lenses — do not mix them up.
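To see how the two models diverge at wide angles, compare the equidistant radius (with zero distortion coefficients, r = f·θ) against the pinhole radius r = f·tan θ. A small sketch using the placeholder fx ≈ 229 px:

```python
import math

def r_equidistant(theta, f, k=(0.0, 0.0, 0.0, 0.0)):
    """Image radius under OpenCV's fisheye model: r = f * theta_d."""
    t2 = theta * theta
    theta_d = theta * (1 + k[0]*t2 + k[1]*t2**2 + k[2]*t2**3 + k[3]*t2**4)
    return f * theta_d

def r_pinhole(theta, f):
    """Image radius under an undistorted pinhole model."""
    return f * math.tan(theta)

f = 229.0  # placeholder IMX219 focal length at 640x480
for deg in (10, 40, 70):
    th = math.radians(deg)
    print(deg, round(r_equidistant(th, f), 1), round(r_pinhole(th, f), 1))
```

At 70° incidence the pinhole radius is already more than twice the equidistant one, and it diverges entirely at 90°, which is why a plumb_bob model cannot represent a 160° lens.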
jetson/calibration/cam_front/camera_info.yaml
# TEMPLATE — replace with output of calibrate_camera.py
# IMX219 160 deg at 640x480: estimated placeholder values
# Run: ros2 run saltybot_calibration calibrate_camera.py \
#   --images-dir /mnt/nvme/saltybot/calibration/raw/cam_front \
#   --output /mnt/nvme/saltybot/calibration/front/camera_info.yaml \
#   --camera-name camera_front --image-size 640x480
image_width: 640
image_height: 480
camera_name: camera_front
camera_matrix:
  rows: 3
  cols: 3
  data: [229.0, 0.0, 320.0, 0.0, 229.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: equidistant
distortion_coefficients:
  rows: 1
  cols: 4
  data: [0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [229.0, 0.0, 320.0, 0.0, 0.0, 229.0, 240.0, 0.0, 0.0, 0.0, 1.0, 0.0]
_calibration_rms_px: null
jetson/calibration/cam_left/camera_info.yaml
# TEMPLATE — replace with output of calibrate_camera.py
# IMX219 160 deg at 640x480: estimated placeholder values
# Run: ros2 run saltybot_calibration calibrate_camera.py \
#   --images-dir /mnt/nvme/saltybot/calibration/raw/cam_left \
#   --output /mnt/nvme/saltybot/calibration/left/camera_info.yaml \
#   --camera-name camera_left --image-size 640x480
image_width: 640
image_height: 480
camera_name: camera_left
camera_matrix:
  rows: 3
  cols: 3
  data: [229.0, 0.0, 320.0, 0.0, 229.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: equidistant
distortion_coefficients:
  rows: 1
  cols: 4
  data: [0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [229.0, 0.0, 320.0, 0.0, 0.0, 229.0, 240.0, 0.0, 0.0, 0.0, 1.0, 0.0]
_calibration_rms_px: null
jetson/calibration/cam_rear/camera_info.yaml
# TEMPLATE — replace with output of calibrate_camera.py
# IMX219 160 deg at 640x480: estimated placeholder values
# Run: ros2 run saltybot_calibration calibrate_camera.py \
#   --images-dir /mnt/nvme/saltybot/calibration/raw/cam_rear \
#   --output /mnt/nvme/saltybot/calibration/rear/camera_info.yaml \
#   --camera-name camera_rear --image-size 640x480
image_width: 640
image_height: 480
camera_name: camera_rear
camera_matrix:
  rows: 3
  cols: 3
  data: [229.0, 0.0, 320.0, 0.0, 229.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: equidistant
distortion_coefficients:
  rows: 1
  cols: 4
  data: [0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [229.0, 0.0, 320.0, 0.0, 0.0, 229.0, 240.0, 0.0, 0.0, 0.0, 1.0, 0.0]
_calibration_rms_px: null
jetson/calibration/cam_right/camera_info.yaml
# TEMPLATE — replace with output of calibrate_camera.py
# IMX219 160 deg at 640x480: estimated placeholder values
# Run: ros2 run saltybot_calibration calibrate_camera.py \
#   --images-dir /mnt/nvme/saltybot/calibration/raw/cam_right \
#   --output /mnt/nvme/saltybot/calibration/right/camera_info.yaml \
#   --camera-name camera_right --image-size 640x480
image_width: 640
image_height: 480
camera_name: camera_right
camera_matrix:
  rows: 3
  cols: 3
  data: [229.0, 0.0, 320.0, 0.0, 229.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: equidistant
distortion_coefficients:
  rows: 1
  cols: 4
  data: [0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [229.0, 0.0, 320.0, 0.0, 0.0, 229.0, 240.0, 0.0, 0.0, 0.0, 1.0, 0.0]
_calibration_rms_px: null
jetson/calibration/extrinsics.yaml
# Camera extrinsics — base_link to camera_optical_frame transforms
# Generated from mechanical mount geometry:
#   mount_radius = 0.05m, mount_height = 0.30m
#   Camera yaw: front=0deg, right=-90deg, rear=180deg, left=90deg
#
# Quaternion convention: [qx, qy, qz, qw] (xyzw)
# Rotation: camera_optical_frame -> base_link
#   camera_optical: z=forward(out), x=right, y=down
#   base_link: x=forward, y=left, z=up, CCW-positive yaw (ROS REP-103)
#
# Regenerate with:
#   python3 calibrate_extrinsics.py --mode mechanical \
#     --mount-radius 0.05 --mount-height 0.30 \
#     --output jetson/calibration/extrinsics.yaml

_description: Camera extrinsics - base_link to camera_optical_frame transforms
_mount_radius_m: 0.05
_mount_height_m: 0.30

cameras:
  front:
    parent_frame: base_link
    child_frame: camera_front_optical_frame
    translation: [0.05, 0.0, 0.30]
    rotation_quat_xyzw: [0.5, -0.5, 0.5, -0.5]

  right:
    parent_frame: base_link
    child_frame: camera_right_optical_frame
    translation: [0.0, -0.05, 0.30]
    rotation_quat_xyzw: [0.0, -0.707107, 0.707107, 0.0]

  rear:
    parent_frame: base_link
    child_frame: camera_rear_optical_frame
    translation: [-0.05, 0.0, 0.30]
    rotation_quat_xyzw: [-0.5, -0.5, 0.5, 0.5]

  left:
    parent_frame: base_link
    child_frame: camera_left_optical_frame
    translation: [0.0, 0.05, 0.30]
    rotation_quat_xyzw: [-0.707107, 0.0, 0.0, 0.707107]
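These quaternions can be sanity-checked numerically: convert the xyzw quaternion back to a rotation matrix and confirm the optical axes land where the header comments say. A hedged numpy sketch (not part of the repo), checking the front entry:

```python
import numpy as np

def quat_xyzw_to_matrix(q):
    """Rotation matrix from a unit quaternion in (x, y, z, w) order."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

R = quat_xyzw_to_matrix([0.5, -0.5, 0.5, -0.5])   # front camera entry
print(R @ np.array([0.0, 0.0, 1.0]))  # optical +Z maps to base_link +X (forward)
```

The optical +Y axis (image down) should similarly map to base_link -Z, matching the stated frame conventions.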
jetson/ros2_ws/src/saltybot_calibration/CMakeLists.txt
cmake_minimum_required(VERSION 3.8)
project(saltybot_calibration)

find_package(ament_cmake REQUIRED)
find_package(ament_cmake_python REQUIRED)
find_package(rclpy REQUIRED)

ament_python_install_package(${PROJECT_NAME})

install(PROGRAMS
  scripts/capture_calibration_images.py
  scripts/calibrate_camera.py
  scripts/calibrate_extrinsics.py
  scripts/preview_undistorted.py
  scripts/generate_static_transforms.py
  DESTINATION lib/${PROJECT_NAME}
)

install(DIRECTORY
  launch config
  DESTINATION share/${PROJECT_NAME}
)

if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  ament_lint_auto_find_test_dependencies()
endif()

ament_package()
@ -0,0 +1,27 @@
calibration:
  board_size: '8x6'       # inner corners (cols x rows)
  square_size_m: 0.025    # 25mm squares
  n_images: 30            # images to capture per camera
  image_size: '640x480'   # capture resolution

cameras:
  front:
    topic: '/camera/front/image_raw'
    output_yaml: '/mnt/nvme/saltybot/calibration/front/camera_info.yaml'
  right:
    topic: '/camera/right/image_raw'
    output_yaml: '/mnt/nvme/saltybot/calibration/right/camera_info.yaml'
  rear:
    topic: '/camera/rear/image_raw'
    output_yaml: '/mnt/nvme/saltybot/calibration/rear/camera_info.yaml'
  left:
    topic: '/camera/left/image_raw'
    output_yaml: '/mnt/nvme/saltybot/calibration/left/camera_info.yaml'

extrinsics:
  mode: mechanical
  mount_radius_m: 0.05
  mount_height_m: 0.30
  output_yaml: '/mnt/nvme/saltybot/calibration/extrinsics.yaml'

reprojection_warn_threshold_px: 1.5
@ -0,0 +1,108 @@
"""
calibration_capture.launch.py — Calibration workflow launcher for saltybot IMX219 cameras.

Full calibration workflow:
  Step 1: Start cameras (requires saltybot_cameras package to be running)
  Step 2: Run capture_calibration_images.py for each camera (30 images each)
  Step 3: Run calibrate_camera.py for each camera
  Step 4: Run calibrate_extrinsics.py
  Step 5: Run generate_static_transforms.py

This launch file starts the capture script for one camera via ExecuteProcess.
For the full workflow, run each step manually (see jetson/calibration/README.md).

Usage:
  ros2 launch saltybot_calibration calibration_capture.launch.py camera:=front
  ros2 launch saltybot_calibration calibration_capture.launch.py camera:=all

Arguments:
  camera       Camera to capture (front/right/rear/left/all, default: front)
  n_images     Images to capture per camera (default: 30)
  output_dir   Output directory (default: /mnt/nvme/saltybot/calibration/raw)
  board_size   Checkerboard inner corners WxH (default: 8x6)
  square_size  Physical square size in metres (default: 0.025)
"""

from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, ExecuteProcess, LogInfo
from launch.substitutions import LaunchConfiguration, PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare


def generate_launch_description():
    camera_arg = DeclareLaunchArgument(
        'camera', default_value='front',
        description='Camera to capture from: front/right/rear/left/all',
    )
    n_images_arg = DeclareLaunchArgument(
        'n_images', default_value='30',
        description='Number of calibration images to capture per camera',
    )
    output_dir_arg = DeclareLaunchArgument(
        'output_dir', default_value='/mnt/nvme/saltybot/calibration/raw',
        description='Root output directory for captured images',
    )
    board_size_arg = DeclareLaunchArgument(
        'board_size', default_value='8x6',
        description='Checkerboard inner corners as WxH',
    )
    square_size_arg = DeclareLaunchArgument(
        'square_size', default_value='0.025',
        description='Physical checkerboard square size in metres',
    )

    # Installed scripts live in lib/saltybot_calibration, reached relative to share/
    pkg_share = FindPackageShare('saltybot_calibration')
    capture_script = PathJoinSubstitution([
        pkg_share, '..', '..', '..', 'lib', 'saltybot_calibration',
        'capture_calibration_images.py',
    ])

    capture_process = ExecuteProcess(
        cmd=[
            'python3', capture_script,
            '--ros',
            '--camera', LaunchConfiguration('camera'),
            '--n-images', LaunchConfiguration('n_images'),
            '--output-dir', LaunchConfiguration('output_dir'),
            '--board-size', LaunchConfiguration('board_size'),
            '--square-size', LaunchConfiguration('square_size'),
        ],
        output='screen',
    )

    workflow_info = LogInfo(
        msg=[
            '\n',
            '=== IMX219 Calibration Workflow ===\n',
            'Step 1 (this launch): Capture checkerboard images\n',
            '  Controls: SPACE=capture  r=reset  q=done\n',
            '\n',
            'Step 2: Calibrate intrinsics per camera:\n',
            '  ros2 run saltybot_calibration calibrate_camera.py \\\n',
            '    --images-dir /mnt/nvme/saltybot/calibration/raw/cam_front \\\n',
            '    --output /mnt/nvme/saltybot/calibration/front/camera_info.yaml \\\n',
            '    --camera-name camera_front --image-size 640x480\n',
            '\n',
            'Step 3: Compute extrinsics:\n',
            '  ros2 run saltybot_calibration calibrate_extrinsics.py \\\n',
            '    --mode mechanical \\\n',
            '    --output /mnt/nvme/saltybot/calibration/extrinsics.yaml\n',
            '\n',
            'Step 4: Generate static transforms launch file:\n',
            '  ros2 run saltybot_calibration generate_static_transforms.py \\\n',
            '    --extrinsics /mnt/nvme/saltybot/calibration/extrinsics.yaml \\\n',
            '    --output ~/ros2_ws/src/saltybot_bringup/launch/static_transforms.launch.py\n',
            '\n',
            'See jetson/calibration/README.md for full instructions.\n',
        ]
    )

    return LaunchDescription([
        camera_arg,
        n_images_arg,
        output_dir_arg,
        board_size_arg,
        square_size_arg,
        workflow_info,
        capture_process,
    ])
jetson/ros2_ws/src/saltybot_calibration/package.xml
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>saltybot_calibration</name>
  <version>0.1.0</version>
  <description>IMX219 camera intrinsic + extrinsic calibration workflow for saltybot</description>
  <maintainer email="seb@vayrette.com">seb</maintainer>
  <license>MIT</license>

  <buildtool_depend>ament_cmake_python</buildtool_depend>
  <buildtool_depend>ament_cmake</buildtool_depend>

  <depend>rclpy</depend>
  <depend>sensor_msgs</depend>
  <depend>cv_bridge</depend>
  <depend>image_transport</depend>

  <exec_depend>python3-opencv</exec_depend>
  <exec_depend>python3-numpy</exec_depend>
  <exec_depend>python3-yaml</exec_depend>

  <test_depend>ament_copyright</test_depend>
  <test_depend>ament_flake8</test_depend>
  <test_depend>ament_pep257</test_depend>
  <test_depend>python3-pytest</test_depend>

  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>
@ -0,0 +1,175 @@
"""camera_calibrator.py — OpenCV fisheye intrinsic calibration for IMX219 cameras.

Uses the cv2.fisheye model (Kannala-Brandt 4-parameter distortion):
  k1, k2, k3, k4 — equidistant distortion coefficients

Outputs camera_info_manager compatible YAML:
  image_width, image_height, camera_name
  camera_matrix: 3x3 K
  distortion_model: equidistant
  distortion_coefficients: [k1, k2, k3, k4]
  rectification_matrix: 3x3 I
  projection_matrix: 3x4 P (same as K padded)
"""

import os
from pathlib import Path

import cv2
import numpy as np
import yaml


def find_checkerboard(img_bgr: np.ndarray, board_size: tuple,
                      flags: int = None) -> tuple:
    """Detect checkerboard corners in image.

    Args:
        img_bgr: BGR image
        board_size: (cols, rows) inner corners, e.g. (8, 6)
        flags: cv2.findChessboardCorners flags

    Returns:
        (found, corners) — corners is [N, 1, 2] float32 or None
    """
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    if flags is None:
        flags = cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE
    found, corners = cv2.findChessboardCorners(gray, board_size, flags)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 0.001)
        corners = cv2.cornerSubPix(gray, corners, (7, 7), (-1, -1), criteria)
    return found, corners


def calibrate_fisheye(
    image_paths: list,
    board_size: tuple,
    square_size_m: float,
    image_size: tuple,  # (width, height)
) -> dict:
    """Run cv2.fisheye calibration.

    Args:
        image_paths: list of calibration image file paths
        board_size: (cols, rows) inner corners
        square_size_m: physical square size in metres (e.g. 0.025)
        image_size: (width, height) of images

    Returns:
        dict with keys: K, D, rvecs, tvecs, rms, n_images_used
    """
    obj_p = np.zeros((board_size[0] * board_size[1], 1, 3), dtype=np.float64)
    obj_p[:, 0, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    obj_p *= square_size_m

    obj_pts = []
    img_pts = []
    used = 0
    skipped = 0

    for path in image_paths:
        img = cv2.imread(path)
        if img is None:
            print(f'  [WARN] Cannot read: {path}')
            skipped += 1
            continue
        found, corners = find_checkerboard(img, board_size)
        if not found:
            print(f'  [SKIP] No corners: {os.path.basename(path)}')
            skipped += 1
            continue
        obj_pts.append(obj_p)
        img_pts.append(corners)
        used += 1
        print(f'  [OK] {os.path.basename(path)}')

    if used < 5:
        raise RuntimeError(
            f'Only {used} valid images (need >=5). Check lighting and board visibility.'
        )

    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    flags = (
        cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC |
        cv2.fisheye.CALIB_CHECK_COND |
        cv2.fisheye.CALIB_FIX_SKEW
    )
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        obj_pts, img_pts, image_size, K, D, flags=flags
    )
    print(f'  Calibration RMS reprojection error: {rms:.4f} px ({used} images)')
    return {
        'K': K, 'D': D, 'rvecs': rvecs, 'tvecs': tvecs,
        'rms': rms, 'n_images_used': used, 'n_images_skipped': skipped,
        'obj_pts': obj_pts, 'img_pts': img_pts,
    }


def write_camera_info_yaml(
    output_path: str,
    camera_name: str,
    image_size: tuple,  # (width, height)
    K: np.ndarray,
    D: np.ndarray,
    rms: float,
) -> None:
    """Write camera_info_manager compatible YAML.

    Format compatible with camera_info_manager CameraInfoManager.loadCameraInfo().
    distortion_model = 'equidistant' (ROS2 fisheye standard).
    D has 4 coefficients [k1, k2, k3, k4] for the fisheye model.
    """
    w, h = image_size
    fx = float(K[0, 0])
    fy = float(K[1, 1])
    cx = float(K[0, 2])
    cy = float(K[1, 2])

    data = {
        'image_width': w,
        'image_height': h,
        'camera_name': camera_name,
        'camera_matrix': {
            'rows': 3, 'cols': 3,
            'data': [fx, 0.0, cx, 0.0, fy, cy, 0.0, 0.0, 1.0],
        },
        'distortion_model': 'equidistant',
        'distortion_coefficients': {
            'rows': 1, 'cols': 4,
            'data': [float(D[0, 0]), float(D[1, 0]),
                     float(D[2, 0]), float(D[3, 0])],
        },
        'rectification_matrix': {
            'rows': 3, 'cols': 3,
            'data': [1.0, 0.0, 0.0,
                     0.0, 1.0, 0.0,
                     0.0, 0.0, 1.0],
        },
        'projection_matrix': {
            'rows': 3, 'cols': 4,
            'data': [fx, 0.0, cx, 0.0,
                     0.0, fy, cy, 0.0,
                     0.0, 0.0, 1.0, 0.0],
        },
        # Metadata (not part of camera_info_manager spec, but informational)
        '_calibration_rms_px': round(rms, 4),
    }

    Path(output_path).parent.mkdir(parents=True, exist_ok=True)
    with open(output_path, 'w') as f:
        yaml.dump(data, f, default_flow_style=False, sort_keys=False)
    print(f'  Written: {output_path}')


def compute_reprojection_errors(
    obj_pts: list, img_pts: list, rvecs: list, tvecs: list,
    K: np.ndarray, D: np.ndarray,
) -> list:
    """Compute per-image reprojection errors for the fisheye model."""
    errors = []
    for i, (op, ip) in enumerate(zip(obj_pts, img_pts)):
        projected, _ = cv2.fisheye.projectPoints(op, rvecs[i], tvecs[i], K, D)
        err = cv2.norm(ip, projected, cv2.NORM_L2) / np.sqrt(len(ip))
        errors.append(err)
    return errors
@ -0,0 +1,139 @@
|
||||
"""extrinsic_calibrator.py — Extrinsic calibration between 4 IMX219 cameras.
|
||||
|
||||
Method: overlapping stereo calibration using shared checkerboard views.
|
||||
Each adjacent pair (cam0-cam1, cam1-cam2, cam2-cam3, cam3-cam0) is calibrated
|
||||
via cv2.fisheye.stereoCalibrate to find R, T between them.
|
||||
|
||||
Then each camera's pose relative to base_link is computed by chaining transforms,
|
||||
anchoring on the known mechanical mount geometry (4 cameras at 90° intervals,
|
||||
radius 0.05m from centre, all at height 0.30m).
|
||||
|
||||
Output: extrinsics.yaml with camera-to-base transforms as translation + quaternion.
|
||||
Also outputs static_transforms.launch.py.
|
||||
"""
|
||||
|
||||
import cv2
|
||||
import numpy as np
|
||||
import yaml
|
||||
import math
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
def rotation_matrix_to_quaternion(R: np.ndarray) -> tuple:
|
||||
"""Convert 3x3 rotation matrix to (qx, qy, qz, qw)."""
|
||||
trace = R[0, 0] + R[1, 1] + R[2, 2]
|
||||
if trace > 0:
|
||||
s = 0.5 / math.sqrt(trace + 1.0)
|
||||
w = 0.25 / s
|
||||
x = (R[2, 1] - R[1, 2]) * s
|
||||
y = (R[0, 2] - R[2, 0]) * s
|
||||
z = (R[1, 0] - R[0, 1]) * s
|
||||
elif R[0, 0] > R[1, 1] and R[0, 0] > R[2, 2]:
|
||||
s = 2.0 * math.sqrt(1.0 + R[0, 0] - R[1, 1] - R[2, 2])
|
||||
w = (R[2, 1] - R[1, 2]) / s
|
||||
x = 0.25 * s
|
||||
y = (R[0, 1] + R[1, 0]) / s
|
||||
z = (R[0, 2] + R[2, 0]) / s
|
||||
elif R[1, 1] > R[2, 2]:
|
||||
s = 2.0 * math.sqrt(1.0 + R[1, 1] - R[0, 0] - R[2, 2])
|
||||
w = (R[0, 2] - R[2, 0]) / s
|
||||
x = (R[0, 1] + R[1, 0]) / s
|
||||
y = 0.25 * s
|
||||
z = (R[1, 2] + R[2, 1]) / s
|
||||
else:
|
||||
s = 2.0 * math.sqrt(1.0 + R[2, 2] - R[0, 0] - R[1, 1])
|
||||
w = (R[1, 0] - R[0, 1]) / s
|
||||
x = (R[0, 2] + R[2, 0]) / s
|
||||
y = (R[1, 2] + R[2, 1]) / s
|
||||
z = 0.25 * s
|
||||
return (x, y, z, w)
|
||||
|
||||
|
||||
def mechanical_extrinsics(
    mount_radius_m: float = 0.05,
    mount_height_m: float = 0.30,
    cam_yaw_degs: list = None,
) -> dict:
    """
    Compute camera-to-base_link transforms from known mount geometry.

    Camera mount: ring of radius mount_radius_m at height mount_height_m.
    Cameras face outward (yaw angle determines facing direction).
    Camera optical axis is +Z (into scene), x-right, y-down.
    base_link: x=forward, y=left, z=up (ROS convention).

    Transform = cam_optical_frame -> base_link

    Returns dict: {cam_name: {translation: [x,y,z], rotation: [qx,qy,qz,qw]}}
    """
    if cam_yaw_degs is None:
        cam_yaw_degs = [0.0, -90.0, 180.0, 90.0]  # front, right, rear, left

    cam_names = ['front', 'right', 'rear', 'left']
    result = {}

    for name, yaw_deg in zip(cam_names, cam_yaw_degs):
        yaw_rad = math.radians(yaw_deg)

        # Camera optical centre position relative to base_link.
        # yaw is the outward-facing direction in the ROS convention
        # (positive yaw rotates from +x forward towards +y left), so the
        # right camera (yaw=-90 deg) sits at ty = -mount_radius_m.
        tx = mount_radius_m * math.cos(yaw_rad)  # forward component
        ty = mount_radius_m * math.sin(yaw_rad)  # left component (ROS y=left)
        tz = mount_height_m                      # up

        # Rotation from camera_optical to base_link:
        #   camera_optical: z=forward (optical axis), x=right, y=down
        #   base_link:      x=forward, y=left, z=up
        # The unrotated optical->ROS alignment maps:
        #   x_cam -> -y_base, y_cam -> -z_base, z_cam -> x_base
        # Then rotate by +yaw about z so the optical axis points outward
        # in the camera's facing direction.
        cos_y = math.cos(yaw_rad)
        sin_y = math.sin(yaw_rad)

        # Full rotation: R_z(yaw) @ R_cam_optical_to_base_unrotated
        R = np.array([
            [cos_y, -sin_y, 0],
            [sin_y, cos_y, 0],
            [0, 0, 1],
        ]) @ np.array([
            [0, 0, 1],
            [-1, 0, 0],
            [0, -1, 0],
        ])

        qx, qy, qz, qw = rotation_matrix_to_quaternion(R)
        result[name] = {
            'translation': [round(tx, 4), round(ty, 4), round(tz, 4)],
            'rotation_quat': [round(qx, 6), round(qy, 6),
                              round(qz, 6), round(qw, 6)],
        }
    return result


def write_extrinsics_yaml(output_path: str, extrinsics: dict,
                          mount_radius_m: float, mount_height_m: float) -> None:
    """Write extrinsics.yaml."""
    data = {
        '_description': 'Camera extrinsics — base_link to camera_optical_frame transforms',
        '_mount_radius_m': mount_radius_m,
        '_mount_height_m': mount_height_m,
        'cameras': {},
    }
    for name, ex in extrinsics.items():
        data['cameras'][name] = {
            'parent_frame': 'base_link',
            'child_frame': f'camera_{name}_optical_frame',
            'translation': ex['translation'],
            'rotation_quat_xyzw': ex['rotation_quat'],
        }
    Path(output_path).parent.mkdir(parents=True, exist_ok=True)
    with open(output_path, 'w') as f:
        yaml.dump(data, f, default_flow_style=False, sort_keys=False)
    print(f'Written: {output_path}')
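The ring layout encoded by `mechanical_extrinsics` can be sanity-checked in isolation. This is a sketch under REP-103 conventions (x forward, y left, z up; positive yaw turns from +x towards +y); the radius and height restate the saltybot defaults rather than being imported from the module.

```python
import math

# Mount defaults from the README: 5 cm ring radius, 30 cm height.
RADIUS, HEIGHT = 0.05, 0.30
YAWS = {'front': 0.0, 'right': -90.0, 'rear': 180.0, 'left': 90.0}

positions = {}
for name, yaw_deg in YAWS.items():
    psi = math.radians(yaw_deg)
    # Each camera sits on the ring in the direction it faces:
    # (r*cos(yaw), r*sin(yaw)) in REP-103 axes, so the right camera
    # (yaw = -90 deg) lands on the -y side of the robot.
    positions[name] = (round(RADIUS * math.cos(psi), 4),
                       round(RADIUS * math.sin(psi), 4),
                       HEIGHT)

print(positions['right'])  # (0.0, -0.05, 0.3)
```

Checking the table against the README (cam1=right at yaw -90 on the robot's right side) is a cheap way to catch a sign-convention mistake before the transforms reach TF.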
215
jetson/ros2_ws/src/saltybot_calibration/scripts/calibrate_camera.py
Executable file
@ -0,0 +1,215 @@
#!/usr/bin/env python3
"""
calibrate_camera.py — Run OpenCV fisheye calibration from captured images.

Usage:
    python3 calibrate_camera.py \\
        --images-dir /path/to/cal/cam_front \\
        --output /path/to/calibration/front/camera_info.yaml \\
        --camera-name camera_front \\
        --board-size 8x6 \\
        --square-size 0.025 \\
        --image-size 640x480 \\
        [--show-reprojection]
"""

import argparse
import sys
from pathlib import Path

import cv2
import numpy as np


def parse_size(s: str) -> tuple:
    """Parse 'WxH' string to (width, height) tuple of ints."""
    parts = s.lower().split('x')
    if len(parts) != 2:
        raise argparse.ArgumentTypeError(f'Size must be WxH, got: {s}')
    return (int(parts[0]), int(parts[1]))


def main():
    parser = argparse.ArgumentParser(
        description='Compute IMX219 fisheye camera intrinsics from checkerboard images.',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )
    parser.add_argument(
        '--images-dir', required=True,
        help='Directory containing captured calibration images (*.png, *.jpg).',
    )
    parser.add_argument(
        '--output', required=True,
        help='Output path for camera_info.yaml.',
    )
    parser.add_argument(
        '--camera-name', required=True,
        help='Camera name for YAML header (e.g. camera_front).',
    )
    parser.add_argument(
        '--board-size', type=str, default='8x6',
        help='Checkerboard inner corners as WxH (default: 8x6).',
    )
    parser.add_argument(
        '--square-size', type=float, default=0.025,
        help='Physical square size in metres (default: 0.025 = 25mm).',
    )
    parser.add_argument(
        '--image-size', type=str, default='640x480',
        help='Image dimensions as WxH (default: 640x480).',
    )
    parser.add_argument(
        '--show-reprojection', action='store_true',
        help='Display original vs reprojected corners for each image.',
    )
    args = parser.parse_args()

    # Parse board and image sizes
    bw, bh = parse_size(args.board_size)
    board_size = (bw, bh)
    image_size = parse_size(args.image_size)

    # Collect image paths
    images_dir = Path(args.images_dir)
    if not images_dir.exists():
        print(f'[ERROR] Images directory not found: {images_dir}')
        sys.exit(1)

    image_paths = sorted(
        [str(p) for p in images_dir.glob('*.png')] +
        [str(p) for p in images_dir.glob('*.jpg')] +
        [str(p) for p in images_dir.glob('*.jpeg')]
    )
    if not image_paths:
        print(f'[ERROR] No images found in: {images_dir}')
        sys.exit(1)

    print(f'Board size: {board_size[0]}x{board_size[1]} inner corners')
    print(f'Square size: {args.square_size * 1000:.0f}mm')
    print(f'Image size: {image_size[0]}x{image_size[1]}')
    print(f'Images found: {len(image_paths)}')
    print(f'Output: {args.output}')
    print()

    # Run calibration using the camera_calibrator module.
    # Add the package to sys.path for standalone use.
    script_dir = Path(__file__).resolve().parent
    pkg_dir = script_dir.parent
    if str(pkg_dir) not in sys.path:
        sys.path.insert(0, str(pkg_dir))

    from saltybot_calibration.camera_calibrator import (
        calibrate_fisheye,
        write_camera_info_yaml,
        compute_reprojection_errors,
    )

    print('Processing images...')
    result = calibrate_fisheye(image_paths, board_size, args.square_size, image_size)

    K = result['K']
    D = result['D']
    rms = result['rms']
    rvecs = result['rvecs']
    tvecs = result['tvecs']
    obj_pts = result['obj_pts']
    img_pts = result['img_pts']
    n_used = result['n_images_used']
    n_skipped = result['n_images_skipped']

    # Per-image reprojection errors
    errors = compute_reprojection_errors(obj_pts, img_pts, rvecs, tvecs, K, D)

    # Summary table
    print()
    print(f'{"Image":<40} {"RMS error (px)":>14}')
    print('-' * 57)
    # NOTE: this pairing assumes no images were skipped during corner
    # detection (n_skipped == 0); skipped images shift the path/error pairing.
    for path, err in zip(image_paths, errors):
        flag = ' <<' if err > 2.0 else ''
        print(f' {Path(path).name:<38} {err:>14.4f}{flag}')

    print('-' * 57)
    print(f' {"Overall RMS":<38} {rms:>14.4f}')
    print(f' {"Images used":<38} {n_used:>14}')
    print(f' {"Images skipped":<38} {n_skipped:>14}')
    print()

    # Warn on high reprojection error
    if rms > 1.5:
        print(f'[WARN] RMS error {rms:.4f} px > 1.5 px threshold.')
        print('  Suggestions:')
        print('  - Remove images with corners at extreme edges (strong fisheye distortion)')
        print('  - Ensure board is flat and rigidly printed (not bent)')
        print('  - Increase n_images and vary board orientation more')
        print('  - Check that --image-size matches actual capture resolution')
    else:
        print(f'[OK] RMS error {rms:.4f} px — good calibration.')

    # Intrinsics summary
    print()
    print('Intrinsics:')
    print(f'  fx={K[0,0]:.2f} fy={K[1,1]:.2f} cx={K[0,2]:.2f} cy={K[1,2]:.2f}')
    print(f'  D=[{D[0,0]:.6f}, {D[1,0]:.6f}, {D[2,0]:.6f}, {D[3,0]:.6f}]')
    print()

    # Write YAML
    write_camera_info_yaml(
        output_path=args.output,
        camera_name=args.camera_name,
        image_size=image_size,
        K=K,
        D=D,
        rms=rms,
    )

    # Show reprojection visualisation if requested
    if args.show_reprojection:
        print('Showing reprojection errors. Press any key to advance, q to quit.')
        for i, path in enumerate(image_paths):
            img = cv2.imread(path)
            if img is None:
                continue
            err = errors[i] if i < len(errors) else 0.0

            vis = img.copy()
            if i < len(obj_pts):
                # Project the board points back through the fitted model
                projected, _ = cv2.fisheye.projectPoints(
                    obj_pts[i], rvecs[i], tvecs[i], K, D)
                # Draw original corners (green)
                for pt in img_pts[i]:
                    cv2.circle(vis, tuple(pt[0].astype(int)), 5, (0, 255, 0), 2)
                # Draw reprojected corners (red)
                for pt in projected:
                    cv2.circle(vis, tuple(pt[0].astype(int)), 3, (0, 0, 255), -1)

            cv2.putText(vis, f'{Path(path).name} RMS={err:.3f}px', (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2)
            cv2.putText(vis, 'Green=detected Red=reprojected', (10, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (200, 200, 200), 1)

            cv2.imshow('Reprojection', vis)
            key = cv2.waitKey(0) & 0xFF
            if key == ord('q'):
                break

        cv2.destroyAllWindows()

    print('Calibration complete.')


if __name__ == '__main__':
    main()
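The per-image figure printed in the summary table above is a root-mean-square pixel distance between detected and reprojected corners. A minimal sketch of that metric on toy data (the helper name `rms_error` is illustrative, not part of the `camera_calibrator` module; `compute_reprojection_errors` is assumed to compute the equivalent from `cv2.fisheye.projectPoints` output):

```python
import numpy as np

def rms_error(detected: np.ndarray, reprojected: np.ndarray) -> float:
    # Both arrays shaped (N, 1, 2), as OpenCV corner arrays are.
    diff = detected.reshape(-1, 2) - reprojected.reshape(-1, 2)
    # RMS of per-corner Euclidean distances, in pixels.
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

detected = np.zeros((4, 1, 2), dtype=np.float64)
reprojected = np.ones((4, 1, 2), dtype=np.float64)  # every corner off by (1, 1) px
print(round(rms_error(detected, reprojected), 4))  # 1.4142
```

This is why the 1.5 px warning threshold is a reasonable gate: it bounds the typical corner misfit of the fitted fisheye model, not a worst case.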
238
jetson/ros2_ws/src/saltybot_calibration/scripts/calibrate_extrinsics.py
Executable file
@ -0,0 +1,238 @@
#!/usr/bin/env python3
"""
calibrate_extrinsics.py — Compute camera-to-base_link extrinsic transforms.

Two modes:
  1. --mode mechanical: use known mount geometry (radius, height, yaw angles)
     -> produces analytically exact transforms from CAD measurements
  2. --mode stereo: use cv2.fisheye.stereoCalibrate on paired checkerboard images
     -> requires shared-view images from adjacent camera pairs

Usage (mechanical mode, recommended first pass):
    python3 calibrate_extrinsics.py \\
        --mode mechanical \\
        --mount-radius 0.05 \\
        --mount-height 0.30 \\
        --output /path/to/calibration/extrinsics.yaml \\
        --generate-launch /path/to/jetson/launch/static_transforms.launch.py
"""

import argparse
import sys
from pathlib import Path


def print_extrinsics_table(extrinsics: dict) -> None:
    """Print formatted extrinsics table to stdout."""
    print()
    print(f'{"Camera":<8} {"tx":>8} {"ty":>8} {"tz":>8} '
          f'{"qx":>10} {"qy":>10} {"qz":>10} {"qw":>10}')
    print('-' * 86)
    for name, ex in extrinsics.items():
        tx, ty, tz = ex['translation']
        qx, qy, qz, qw = ex['rotation_quat']
        print(f'  {name:<6} {tx:>8.4f} {ty:>8.4f} {tz:>8.4f} '
              f'{qx:>10.6f} {qy:>10.6f} {qz:>10.6f} {qw:>10.6f}')
    print()


def generate_static_transforms_launch(extrinsics: dict, output_path: str) -> None:
    """Generate static_transforms.launch.py from extrinsics dict."""
    lines = [
        '"""',
        'static_transforms.launch.py — Auto-generated by calibrate_extrinsics.py',
        'DO NOT EDIT manually — regenerate via:',
        '  python3 calibrate_extrinsics.py --mode mechanical --generate-launch <this_file>',
        '"""',
        '',
        'from launch import LaunchDescription',
        'from launch_ros.actions import Node',
        '',
        '',
        'def generate_launch_description():',
        '    return LaunchDescription([',
    ]

    for name, ex in extrinsics.items():
        tx, ty, tz = ex['translation']
        qx, qy, qz, qw = ex['rotation_quat']
        child_frame = f'camera_{name}_optical_frame'
        lines += [
            f'        # camera_{name}: base_link -> {child_frame}',
            '        Node(',
            "            package='tf2_ros',",
            "            executable='static_transform_publisher',",
            f"            name='tf_base_to_camera_{name}',",
            '            arguments=[',
            f"                '{tx}', '{ty}', '{tz}',",
            f"                '{qx}', '{qy}', '{qz}', '{qw}',",
            f"                'base_link', '{child_frame}',",
            '            ],',
            '        ),',
        ]

    lines += [
        '    ])',
        '',
    ]

    Path(output_path).parent.mkdir(parents=True, exist_ok=True)
    with open(output_path, 'w') as f:
        f.write('\n'.join(lines))
    print(f'Generated launch file: {output_path}')


def run_mechanical_mode(args) -> None:
    """Run mechanical extrinsics calculation."""
    script_dir = Path(__file__).resolve().parent
    pkg_dir = script_dir.parent
    if str(pkg_dir) not in sys.path:
        sys.path.insert(0, str(pkg_dir))

    from saltybot_calibration.extrinsic_calibrator import (
        mechanical_extrinsics,
        write_extrinsics_yaml,
    )

    print('Mode: mechanical')
    print(f'Mount radius: {args.mount_radius * 100:.1f}cm')
    print(f'Mount height: {args.mount_height * 100:.1f}cm')
    print('Camera yaw angles: front=0 deg, right=-90 deg, rear=180 deg, left=90 deg')
    print()

    extrinsics = mechanical_extrinsics(
        mount_radius_m=args.mount_radius,
        mount_height_m=args.mount_height,
    )

    print_extrinsics_table(extrinsics)

    write_extrinsics_yaml(
        output_path=args.output,
        extrinsics=extrinsics,
        mount_radius_m=args.mount_radius,
        mount_height_m=args.mount_height,
    )

    if args.generate_launch:
        generate_static_transforms_launch(extrinsics, args.generate_launch)


def run_stereo_mode(args) -> None:
    """Stereo extrinsics calibration — stub for future implementation.

    TODO: Implement cv2.fisheye.stereoCalibrate for adjacent camera pairs.

    This mode requires:
      - Shared-view checkerboard images from each adjacent camera pair:
          {images_dir}/pair_front_right/*.png  (visible to both cam0 and cam1)
          {images_dir}/pair_right_rear/*.png
          {images_dir}/pair_rear_left/*.png
          {images_dir}/pair_left_front/*.png
      - Per-camera intrinsics (camera_info.yaml) for each camera
      - Each pair: cv2.fisheye.stereoCalibrate -> R, T between cameras
      - Chain transforms: cam0->cam1->cam2->cam3, anchor to base_link via
        mechanical position of cam0

    Recommended workflow:
      1. First run --mode mechanical to get approximate transforms
      2. Mount robot in place, place checkerboard in overlap zones
      3. Capture simultaneous images from adjacent pairs (requires hardware sync
         or approximate temporal sync)
      4. Run --mode stereo to refine R, T
      5. Run generate_static_transforms.py to update launch file

    For now, use --mode mechanical which gives correct transforms from CAD
    dimensions (typically accurate to ~5mm / ~1 deg for a rigid mount).
    """
    print('[ERROR] Stereo mode is not yet implemented.')
    print()
    print('Use --mode mechanical for now (accurate from CAD measurements).')
    print()
    print('Future implementation plan:')
    print('  1. Capture simultaneous checkerboard images from adjacent camera pairs')
    print('     in their overlapping FoV regions (approx 70 deg overlap per pair)')
    print('  2. For each pair: cv2.fisheye.stereoCalibrate with per-camera intrinsics')
    print('  3. Chain transforms cam0->cam1->cam2->cam3, anchor to base_link')
    print('  4. Refine against mechanical prior using least-squares optimisation')
    print()
    print('Required directory structure for stereo mode:')
    print('  {images_dir}/pair_front_right/  — images visible to both front+right')
    print('  {images_dir}/pair_right_rear/')
    print('  {images_dir}/pair_rear_left/')
    print('  {images_dir}/pair_left_front/')
    sys.exit(1)


def main():
    parser = argparse.ArgumentParser(
        description='Compute camera-to-base_link extrinsic transforms for 4x IMX219 cameras.',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )
    parser.add_argument(
        '--mode', choices=['mechanical', 'stereo'], default='mechanical',
        help='Calibration mode: mechanical (from CAD) or stereo (from images). '
             'Default: mechanical.',
    )
    parser.add_argument(
        '--mount-radius', type=float, default=0.05,
        help='Camera mount ring radius in metres (default: 0.05).',
    )
    parser.add_argument(
        '--mount-height', type=float, default=0.30,
        help='Camera mount height above base_link in metres (default: 0.30).',
    )
    parser.add_argument(
        '--output', required=True,
        help='Output path for extrinsics.yaml.',
    )
    parser.add_argument(
        '--generate-launch', type=str, default=None,
        metavar='LAUNCH_FILE',
        help='Also generate static_transforms.launch.py at this path.',
    )
    # Stereo mode arguments (for future use)
    parser.add_argument(
        '--images-dir', type=str, default=None,
        help='[stereo mode] Directory containing paired calibration images.',
    )
    parser.add_argument(
        '--intrinsics-dir', type=str, default=None,
        help='[stereo mode] Directory containing per-camera camera_info.yaml files.',
    )

    args = parser.parse_args()

    print('=== IMX219 Extrinsic Calibration ===')
    print()

    if args.mode == 'mechanical':
        run_mechanical_mode(args)
    elif args.mode == 'stereo':
        run_stereo_mode(args)

    print()
    print('Extrinsics calibration complete.')
    print(f'Output: {args.output}')
    if args.generate_launch:
        print(f'Launch: {args.generate_launch}')
    print()
    print('Next steps:')
    print('  1. Update saltybot_surround/config/surround_vision_params.yaml')
    print('     with camera_info_url paths pointing to calibrated YAMLs')
    print('  2. Run generate_static_transforms.py (or use --generate-launch above)')
    print('  3. Restart saltybot_cameras and saltybot_surround nodes')


if __name__ == '__main__':
    main()
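The transform chaining described in the stereo-mode stub (pairwise R, T from `cv2.fisheye.stereoCalibrate`, anchored at cam0's mechanical pose) can be sketched with 4x4 homogeneous matrices. The R/T values here are toy placeholders, not real calibration output:

```python
import numpy as np

def to_h(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    # Pack a 3x3 rotation and translation into a 4x4 homogeneous transform.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Mechanical anchor: cam0 pose in base_link (identity rotation for brevity).
T_base_c0 = to_h(np.eye(3), np.array([0.05, 0.0, 0.30]))
# One pairwise stereo result: cam1 expressed in cam0's frame (toy values).
T_c0_c1 = to_h(np.eye(3), np.array([0.0, -0.1, 0.0]))
# Chaining is matrix multiplication; repeat for c1->c2, c2->c3.
T_base_c1 = T_base_c0 @ T_c0_c1
print(T_base_c1[:3, 3])  # [ 0.05 -0.1   0.3 ]
```

With real rotations the order matters: each pairwise transform is composed on the right, so errors accumulate along the chain, which is why step 4 of the plan refines against the mechanical prior.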
316
jetson/ros2_ws/src/saltybot_calibration/scripts/capture_calibration_images.py
Executable file
@ -0,0 +1,316 @@
#!/usr/bin/env python3
"""
capture_calibration_images.py — Capture checkerboard calibration images from IMX219 cameras.

Usage (standalone, no ROS2 required — uses OpenCV VideoCapture for CSI):
    python3 capture_calibration_images.py --camera front --n-images 30 --output-dir /tmp/cal

Usage (with ROS2 — subscribes to image topic):
    python3 capture_calibration_images.py --ros --camera front --n-images 30

Controls:
    SPACE — capture current frame (if checkerboard detected)
    q     — quit / done
    r     — reset captured images for this camera

Outputs: output_dir/cam_{name}/frame_{NNNN}.png
"""

import argparse
import sys
from pathlib import Path

import cv2
import numpy as np


CAMERA_TOPICS = {
    'front': '/camera/front/image_raw',
    'right': '/camera/right/image_raw',
    'rear': '/camera/rear/image_raw',
    'left': '/camera/left/image_raw',
}

ALL_CAMERAS = ['front', 'right', 'rear', 'left']


def parse_board_size(s: str) -> tuple:
    """Parse 'WxH' string to (cols, rows) tuple of ints."""
    parts = s.lower().split('x')
    if len(parts) != 2:
        raise argparse.ArgumentTypeError(f'board-size must be WxH, got: {s}')
    return (int(parts[0]), int(parts[1]))


def find_checkerboard(img_bgr: np.ndarray, board_size: tuple):
    """Detect checkerboard corners. Returns (found, corners, gray)."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    flags = cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE
    found, corners = cv2.findChessboardCorners(gray, board_size, flags)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        corners = cv2.cornerSubPix(gray, corners, (7, 7), (-1, -1), criteria)
    return found, corners, gray


def draw_overlay(img: np.ndarray, board_size: tuple, found: bool,
                 corners, captured: int, n_images: int) -> np.ndarray:
    """Draw corner detections and status overlay onto a copy of img."""
    display = img.copy()
    if found and corners is not None:
        cv2.drawChessboardCorners(display, board_size, corners, found)

    # Status bar background
    cv2.rectangle(display, (0, 0), (display.shape[1], 36), (0, 0, 0), -1)

    status_color = (0, 255, 0) if found else (0, 80, 220)
    status_text = 'BOARD DETECTED' if found else 'searching for board...'
    cv2.putText(display, status_text, (10, 24),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, status_color, 2)

    count_text = f'Captured: {captured}/{n_images} [SPACE=capture r=reset q=quit]'
    cv2.putText(display, count_text, (display.shape[1] // 2 - 10, 24),
                cv2.FONT_HERSHEY_SIMPLEX, 0.55, (200, 200, 200), 1)

    if captured >= n_images:
        cv2.rectangle(display, (0, 0), (display.shape[1], display.shape[0]),
                      (0, 200, 0), 8)
        cv2.putText(display, 'DONE! Press q to finish.', (60, display.shape[0] // 2),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 3)

    return display


def capture_loop_opencv(cam_name: str, device: str, board_size: tuple,
                        n_images: int, output_dir: Path) -> int:
    """Capture calibration images using cv2.VideoCapture (non-ROS mode).

    Returns number of images captured.
    """
    out_dir = output_dir / f'cam_{cam_name}'
    out_dir.mkdir(parents=True, exist_ok=True)

    cap = cv2.VideoCapture(device)
    if not cap.isOpened():
        print(f'[ERROR] Cannot open device: {device}')
        return 0

    # Try to set resolution
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    captured = 0
    frame_idx = 0
    window_title = f'Calibration Capture — {cam_name} | SPACE=capture q=quit r=reset'
    cv2.namedWindow(window_title, cv2.WINDOW_NORMAL)

    print(f'\n[{cam_name.upper()}] Capturing {n_images} images. Output: {out_dir}')
    print('  Point the checkerboard at the camera, press SPACE when board is detected.')

    while captured < n_images:
        ret, frame = cap.read()
        if not ret:
            print('[WARN] Failed to read frame')
            continue

        found, corners, _ = find_checkerboard(frame, board_size)
        display = draw_overlay(frame, board_size, found, corners, captured, n_images)
        cv2.imshow(window_title, display)

        key = cv2.waitKey(1) & 0xFF
        if key == ord('q'):
            print(f'  Quit requested. Captured {captured}/{n_images} images.')
            break
        elif key == ord('r'):
            # Remove all saved images and reset counter
            for f in out_dir.glob('frame_*.png'):
                f.unlink()
            captured = 0
            frame_idx = 0
            print(f'  [RESET] Cleared captured images for {cam_name}.')
        elif key == ord(' '):
            if found:
                save_path = out_dir / f'frame_{frame_idx:04d}.png'
                cv2.imwrite(str(save_path), frame)
                captured += 1
                frame_idx += 1
                print(f'  Saved ({captured}/{n_images}): {save_path.name}')
                # Brief flash to confirm capture
                flash = frame.copy()
                cv2.rectangle(flash, (0, 0), (flash.shape[1], flash.shape[0]),
                              (0, 255, 0), 12)
                cv2.imshow(window_title, flash)
                cv2.waitKey(120)
            else:
                print('  [SKIP] No checkerboard detected — move board into view.')

    cap.release()
    cv2.destroyWindow(window_title)
    print(f'  Done: {captured} images saved to {out_dir}')
    return captured


def capture_loop_ros(cam_name: str, board_size: tuple,
                     n_images: int, output_dir: Path) -> int:
    """Capture calibration images via ROS2 topic subscription.

    Returns number of images captured.
    """
    try:
        import rclpy
        from rclpy.node import Node
        from sensor_msgs.msg import Image
        from cv_bridge import CvBridge
    except ImportError as e:
        print(f'[ERROR] ROS2 not available: {e}')
        print('  Run without --ros for standalone OpenCV mode.')
        return 0

    out_dir = output_dir / f'cam_{cam_name}'
    out_dir.mkdir(parents=True, exist_ok=True)
    topic = CAMERA_TOPICS[cam_name]

    print(f'\n[{cam_name.upper()}] Subscribing to: {topic}')
    print(f'  Capturing {n_images} images. Output: {out_dir}')
    print('  Press SPACE in the preview window when the board is detected.')

    rclpy.init()
    node = Node('calibration_capture')
    bridge = CvBridge()

    # Single-element lists so the subscription callback can mutate them
    captured = [0]
    frame_idx = [0]
    latest_frame = [None]

    def image_callback(msg):
        try:
            latest_frame[0] = bridge.imgmsg_to_cv2(msg, 'bgr8')
        except Exception as e:
            node.get_logger().warn(f'cv_bridge error: {e}')

    sub = node.create_subscription(Image, topic, image_callback, 1)  # noqa: F841
    window_title = f'Calibration Capture — {cam_name} (ROS2)'
    cv2.namedWindow(window_title, cv2.WINDOW_NORMAL)

    import threading
    spin_thread = threading.Thread(target=rclpy.spin, args=(node,), daemon=True)
    spin_thread.start()

    try:
        while captured[0] < n_images:
            frame = latest_frame[0]
            if frame is None:
                cv2.waitKey(50)
                continue

            found, corners, _ = find_checkerboard(frame, board_size)
            display = draw_overlay(frame, board_size, found, corners,
                                   captured[0], n_images)
            cv2.imshow(window_title, display)

            key = cv2.waitKey(1) & 0xFF
            if key == ord('q'):
                print(f'  Quit. Captured {captured[0]}/{n_images}.')
                break
            elif key == ord('r'):
                for f in out_dir.glob('frame_*.png'):
                    f.unlink()
                captured[0] = 0
                frame_idx[0] = 0
                print(f'  [RESET] Cleared images for {cam_name}.')
            elif key == ord(' '):
                if found:
                    save_path = out_dir / f'frame_{frame_idx[0]:04d}.png'
                    cv2.imwrite(str(save_path), frame)
                    captured[0] += 1
                    frame_idx[0] += 1
                    print(f'  Saved ({captured[0]}/{n_images}): {save_path.name}')
                else:
                    print('  [SKIP] No board detected.')
    finally:
        cv2.destroyWindow(window_title)
        node.destroy_node()
        rclpy.shutdown()

    print(f'  Done: {captured[0]} images saved to {out_dir}')
    return captured[0]


def main():
    parser = argparse.ArgumentParser(
        description='Capture checkerboard images for IMX219 fisheye calibration.',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )
    parser.add_argument(
        '--camera', choices=['front', 'right', 'rear', 'left', 'all'],
        default='front',
        help='Which camera to capture from (default: front). Use "all" to iterate all 4.',
    )
    parser.add_argument(
        '--n-images', type=int, default=30,
        help='Number of images to capture per camera (default: 30).',
    )
    parser.add_argument(
        '--output-dir', type=str, default='/tmp/saltybot_cal',
        help='Root output directory. Images saved to {output_dir}/cam_{name}/ '
             '(default: /tmp/saltybot_cal).',
    )
    parser.add_argument(
        '--board-size', type=str, default='8x6',
        help='Checkerboard inner corners as WxH (default: 8x6).',
    )
    parser.add_argument(
        '--square-size', type=float, default=0.025,
        help='Physical square size in metres (default: 0.025 = 25mm).',
    )
    parser.add_argument(
        '--ros', action='store_true',
        help='Use ROS2 topic subscription instead of direct VideoCapture.',
    )
    parser.add_argument(
        '--device', type=str, default='/dev/video0',
        help='Video device (non-ROS mode only, default: /dev/video0).',
    )
    args = parser.parse_args()

    board_size = parse_board_size(args.board_size)
    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    cameras = ALL_CAMERAS if args.camera == 'all' else [args.camera]

    print(f'Checkerboard: {board_size[0]}x{board_size[1]} inner corners, '
          f'{args.square_size * 1000:.0f}mm squares')
    print(f'Target: {args.n_images} images per camera')
    print(f'Output: {output_dir}')
    print(f'Mode: {"ROS2 topics" if args.ros else f"VideoCapture ({args.device})"}')

    for i, cam_name in enumerate(cameras):
        if len(cameras) > 1:
            print(f'\n--- Camera {i + 1}/{len(cameras)}: {cam_name.upper()} ---')
            if i > 0:
                input('  Position the board in view of this camera, then press ENTER...')

        if args.ros:
            n_captured = capture_loop_ros(cam_name, board_size, args.n_images, output_dir)
        else:
            n_captured = capture_loop_opencv(cam_name, args.device, board_size,
                                             args.n_images, output_dir)

        if n_captured < 5:
            print(f'[WARN] Only {n_captured} images captured for {cam_name} — '
                  f'calibration may fail (need >= 5, recommend >= 20).')

    print('\nCapture complete. Run calibrate_camera.py for each camera to compute intrinsics.')
    print('Example:')
    for cam in cameras:
        print('  python3 calibrate_camera.py \\')
        print(f'    --images-dir {output_dir}/cam_{cam} \\')
        print(f'    --output /mnt/nvme/saltybot/calibration/{cam}/camera_info.yaml \\')
        print(f'    --camera-name camera_{cam} --board-size {args.board_size} \\')
        print(f'    --square-size {args.square_size} --image-size 640x480')


if __name__ == '__main__':
    main()
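Downstream of this capture step, each detected corner grid is matched against planar 3-D object points spaced at the square size in the board plane (z = 0). A hedged sketch of how such object points are commonly constructed for an 8x6 board with 25 mm squares (not necessarily the exact layout `calibrate_fisheye` uses internally):

```python
import numpy as np

def board_object_points(cols: int, rows: int, square_m: float) -> np.ndarray:
    # One 3-D point per inner corner, on a square_m grid in the z=0 plane.
    pts = np.zeros((rows * cols, 3), dtype=np.float64)
    grid = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2)
    pts[:, :2] = grid * square_m
    return pts

pts = board_object_points(8, 6, 0.025)
print(pts.shape)  # (48, 3)
```

The same array is reused for every image; only the per-image rvec/tvec change, which is why the physical square size (checked with calipers in Step 1) directly scales the recovered focal lengths.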
142
jetson/ros2_ws/src/saltybot_calibration/scripts/generate_static_transforms.py
Executable file
@ -0,0 +1,142 @@
#!/usr/bin/env python3
"""
generate_static_transforms.py — Generate static_transforms.launch.py from extrinsics.yaml.

Usage:
    python3 generate_static_transforms.py \\
        --extrinsics /path/to/calibration/extrinsics.yaml \\
        --output /path/to/jetson/ros2_ws/src/saltybot_bringup/launch/static_transforms.launch.py
"""

import argparse
import sys
from pathlib import Path

import yaml


def load_extrinsics(yaml_path: str) -> tuple:
    """Load and parse extrinsics.yaml.

    Returns (cameras, data): the 'cameras' sub-dict with per-camera
    transforms, plus the full parsed document.
    """
    with open(yaml_path, 'r') as f:
        data = yaml.safe_load(f)
    if 'cameras' not in data:
        raise ValueError(f'extrinsics.yaml missing "cameras" key: {yaml_path}')
    return data['cameras'], data


def generate_launch_content(cameras: dict) -> str:
    """Generate static_transforms.launch.py content from cameras dict."""
    lines = [
        '"""',
        'static_transforms.launch.py — Auto-generated by generate_static_transforms.py',
        'DO NOT EDIT manually — regenerate with:',
        '  python3 generate_static_transforms.py \\',
        '    --extrinsics /path/to/calibration/extrinsics.yaml \\',
        '    --output <this_file>',
        '"""',
        '',
        'from launch import LaunchDescription',
        'from launch_ros.actions import Node',
        '',
        '',
        'def generate_launch_description():',
        '    return LaunchDescription([',
    ]

    for cam_name, cam_data in cameras.items():
        parent = cam_data.get('parent_frame', 'base_link')
        child = cam_data.get('child_frame', f'camera_{cam_name}_optical_frame')
        tx, ty, tz = cam_data['translation']
        qx, qy, qz, qw = cam_data['rotation_quat_xyzw']

        lines += [
            f'        # {cam_name}: {parent} -> {child}',
            '        Node(',
            "            package='tf2_ros',",
            "            executable='static_transform_publisher',",
            f"            name='tf_base_to_camera_{cam_name}',",
            '            arguments=[',
            f"                '{tx}', '{ty}', '{tz}',",
            f"                '{qx}', '{qy}', '{qz}', '{qw}',",
            f"                '{parent}', '{child}',",
            '            ],',
            '        ),',
        ]

    # Also add RealSense D435i frame at origin (identity) — placeholder
    # (actual D435i extrinsics from its own calibration)
    lines += [
        '        # RealSense D435i: base_link -> camera_color_optical_frame',
        '        # NOTE: Replace with actual D435i extrinsics from stereo calibration',
        '        Node(',
        "            package='tf2_ros',",
        "            executable='static_transform_publisher',",
        "            name='tf_base_to_realsense',",
        '            arguments=[',
        "                '0.0', '0.0', '0.25',",
        "                '0.0', '0.0', '0.0', '1.0',",
        "                'base_link', 'camera_color_optical_frame',",
        '            ],',
        '        ),',
        '    ])',
        '',
    ]

    return '\n'.join(lines)


def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description='Generate static_transforms.launch.py from extrinsics.yaml.',
|
||||
formatter_class=argparse.RawDescriptionHelpFormatter,
|
||||
epilog=__doc__,
|
||||
)
|
||||
parser.add_argument(
|
||||
'--extrinsics', required=True,
|
||||
help='Path to extrinsics.yaml (produced by calibrate_extrinsics.py).',
|
||||
)
|
||||
parser.add_argument(
|
||||
'--output', required=True,
|
||||
help='Output path for static_transforms.launch.py.',
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
extrinsics_path = args.extrinsics
|
||||
if not Path(extrinsics_path).exists():
|
||||
print(f'[ERROR] extrinsics.yaml not found: {extrinsics_path}')
|
||||
sys.exit(1)
|
||||
|
||||
print(f'Loading extrinsics: {extrinsics_path}')
|
||||
cameras, full_data = load_extrinsics(extrinsics_path)
|
||||
|
||||
print(f'Found {len(cameras)} camera(s): {", ".join(cameras.keys())}')
|
||||
|
||||
content = generate_launch_content(cameras)
|
||||
|
||||
output_path = Path(args.output)
|
||||
output_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
with open(output_path, 'w') as f:
|
||||
f.write(content)
|
||||
|
||||
print(f'Written: {output_path}')
|
||||
print()
|
||||
print('Transforms:')
|
||||
print(f' {"Camera":<8} {"Parent":<12} {"Child frame":<36} {"Translation"}')
|
||||
print(' ' + '-' * 80)
|
||||
for cam_name, cam_data in cameras.items():
|
||||
parent = cam_data.get('parent_frame', 'base_link')
|
||||
child = cam_data.get('child_frame', f'camera_{cam_name}_optical_frame')
|
||||
tx, ty, tz = cam_data['translation']
|
||||
print(f' {cam_name:<8} {parent:<12} {child:<36} '
|
||||
f'[{tx:.3f}, {ty:.3f}, {tz:.3f}]')
|
||||
print()
|
||||
print('Next step: include static_transforms.launch.py in your main launch file:')
|
||||
print(' from launch.actions import IncludeLaunchDescription')
|
||||
print(' from launch.launch_description_sources import PythonLaunchDescriptionSource')
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
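The generator above only relies on a handful of per-camera fields. A minimal hypothetical `extrinsics.yaml` with those fields (values here are the nominal mount geometry, not a real calibration result):

```yaml
# Hypothetical extrinsics.yaml — field names match what
# generate_static_transforms.py reads; values are illustrative only.
cameras:
  front:
    parent_frame: base_link                    # optional, defaults to base_link
    child_frame: camera_front_optical_frame    # optional, defaults to camera_<name>_optical_frame
    translation: [0.05, 0.0, 0.30]             # metres, base_link -> camera
    rotation_quat_xyzw: [0.0, 0.0, 0.0, 1.0]   # unit quaternion, x y z w
  left:
    translation: [0.0, 0.05, 0.30]
    rotation_quat_xyzw: [0.0, 0.0, 0.7071068, 0.7071068]   # yaw +90 deg
```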
259
jetson/ros2_ws/src/saltybot_calibration/scripts/preview_undistorted.py
Executable file
@ -0,0 +1,259 @@
#!/usr/bin/env python3
"""
preview_undistorted.py — Preview undistorted IMX219 image using calibration YAML.

Usage (ROS2 mode):
    python3 preview_undistorted.py \\
        --camera front \\
        --calibration /path/to/calibration/front/camera_info.yaml

Usage (image file mode):
    python3 preview_undistorted.py \\
        --image /path/to/image.png \\
        --calibration /path/to/calibration/front/camera_info.yaml
"""

import argparse
import sys
from pathlib import Path

import cv2
import numpy as np
import yaml


CAMERA_TOPICS = {
    'front': '/camera/front/image_raw',
    'right': '/camera/right/image_raw',
    'rear': '/camera/rear/image_raw',
    'left': '/camera/left/image_raw',
}


def load_camera_info(yaml_path: str) -> dict:
    """Load and parse a camera_info YAML file.

    Returns a dict with keys: K, D, image_size, camera_name, distortion_model.
    """
    with open(yaml_path, 'r') as f:
        data = yaml.safe_load(f)

    w = data['image_width']
    h = data['image_height']
    K_data = data['camera_matrix']['data']
    K = np.array(K_data, dtype=np.float64).reshape(3, 3)

    D_data = data['distortion_coefficients']['data']
    D = np.array(D_data, dtype=np.float64).reshape(-1, 1)

    model = data.get('distortion_model', 'equidistant')
    name = data.get('camera_name', 'unknown')

    return {
        'K': K,
        'D': D,
        'image_size': (w, h),
        'camera_name': name,
        'distortion_model': model,
    }


def build_undistort_maps(K: np.ndarray, D: np.ndarray,
                         image_size: tuple, balance: float = 0.0) -> tuple:
    """Compute fisheye undistortion maps.

    Args:
        K: 3x3 camera matrix
        D: (4,1) fisheye distortion coefficients
        image_size: (width, height)
        balance: 0.0 = no black borders (crop), 1.0 = keep all pixels (black borders)

    Returns:
        (map1, map2, new_K) — remap tables for cv2.remap plus the new camera matrix
    """
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, image_size, np.eye(3), balance=balance
    )
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, image_size, cv2.CV_16SC2
    )
    return map1, map2, new_K


def undistort_image(img: np.ndarray, map1: np.ndarray, map2: np.ndarray) -> np.ndarray:
    """Apply precomputed undistortion maps to an image."""
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)


def show_side_by_side(original: np.ndarray, undistorted: np.ndarray,
                      title: str = '', rms_info: str = '') -> np.ndarray:
    """Create a side-by-side comparison image."""
    # Resize to the same height if needed
    h1, w1 = original.shape[:2]
    h2, w2 = undistorted.shape[:2]
    if h1 != h2:
        undistorted = cv2.resize(undistorted, (w2 * h1 // h2, h1))

    divider = np.zeros((h1, 4, 3), dtype=np.uint8)
    divider[:] = (80, 80, 80)

    combined = np.hstack([original, divider, undistorted])

    # Labels
    cv2.putText(combined, 'ORIGINAL (fisheye)', (10, 28),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 220, 0), 2)
    cv2.putText(combined, 'UNDISTORTED', (w1 + 14, 28),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 220, 255), 2)
    if title:
        cv2.putText(combined, title, (10, combined.shape[0] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (200, 200, 200), 1)
    if rms_info:
        cv2.putText(combined, rms_info, (w1 + 14, combined.shape[0] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (200, 200, 200), 1)
    return combined


def preview_image_file(image_path: str, cal_info: dict, balance: float) -> None:
    """Show an undistorted preview of a single image file."""
    img = cv2.imread(image_path)
    if img is None:
        print(f'[ERROR] Cannot read image: {image_path}')
        sys.exit(1)

    K = cal_info['K']
    D = cal_info['D']
    image_size = cal_info['image_size']
    name = cal_info['camera_name']

    print(f'Camera: {name}')
    print(f'Image size: {image_size[0]}x{image_size[1]}')
    print(f'fx={K[0,0]:.2f} fy={K[1,1]:.2f} cx={K[0,2]:.2f} cy={K[1,2]:.2f}')
    print(f'D={D.flatten().tolist()}')

    # Resize input if needed
    ih, iw = img.shape[:2]
    if (iw, ih) != image_size:
        print(f'[WARN] Image size {iw}x{ih} != calibration size {image_size}. Resizing.')
        img = cv2.resize(img, image_size)

    map1, map2, new_K = build_undistort_maps(K, D, image_size, balance=balance)
    undist = undistort_image(img, map1, map2)

    print(f'New K after undistort (balance={balance}):')
    print(f'  fx={new_K[0,0]:.2f} fy={new_K[1,1]:.2f} cx={new_K[0,2]:.2f} cy={new_K[1,2]:.2f}')

    combined = show_side_by_side(img, undist, title=Path(image_path).name)
    cv2.imshow('Undistorted Preview (q=quit)', combined)
    print('Press any key to close...')
    cv2.waitKey(0)
    cv2.destroyAllWindows()


def preview_ros_live(cam_name: str, cal_info: dict, balance: float) -> None:
    """Subscribe to a ROS2 camera topic and show a live undistorted preview."""
    try:
        import rclpy
        from rclpy.node import Node
        from sensor_msgs.msg import Image
        from cv_bridge import CvBridge
    except ImportError as e:
        print(f'[ERROR] ROS2 not available: {e}')
        sys.exit(1)

    K = cal_info['K']
    D = cal_info['D']
    image_size = cal_info['image_size']
    topic = CAMERA_TOPICS[cam_name]

    print(f'Subscribing to: {topic}')
    print(f'Image size: {image_size[0]}x{image_size[1]}')
    print('Press q in preview window to quit.')

    map1, map2, new_K = build_undistort_maps(K, D, image_size, balance=balance)

    rclpy.init()
    node = Node('preview_undistorted')
    bridge = CvBridge()
    latest = [None]

    def cb(msg):
        try:
            latest[0] = bridge.imgmsg_to_cv2(msg, 'bgr8')
        except Exception as e:
            node.get_logger().warn(f'cv_bridge: {e}')

    node.create_subscription(Image, topic, cb, 1)

    import threading
    threading.Thread(target=rclpy.spin, args=(node,), daemon=True).start()

    window = f'Undistorted — {cam_name}'
    cv2.namedWindow(window, cv2.WINDOW_NORMAL)

    try:
        while True:
            frame = latest[0]
            if frame is not None:
                fh, fw = frame.shape[:2]
                if (fw, fh) != image_size:
                    frame = cv2.resize(frame, image_size)
                undist = undistort_image(frame, map1, map2)
                combined = show_side_by_side(frame, undist)
                cv2.imshow(window, combined)

            key = cv2.waitKey(1) & 0xFF
            if key == ord('q'):
                break
    finally:
        cv2.destroyAllWindows()
        node.destroy_node()
        rclpy.shutdown()


def main():
    parser = argparse.ArgumentParser(
        description='Preview undistorted IMX219 fisheye image from calibration YAML.',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )
    mode_group = parser.add_mutually_exclusive_group(required=True)
    mode_group.add_argument(
        '--camera', choices=['front', 'right', 'rear', 'left'],
        help='Camera name for ROS2 live preview mode.',
    )
    mode_group.add_argument(
        '--image', type=str,
        help='Path to a single image file for static preview.',
    )
    parser.add_argument(
        '--calibration', required=True,
        help='Path to camera_info.yaml produced by calibrate_camera.py.',
    )
    parser.add_argument(
        '--balance', type=float, default=0.0,
        help='Undistort balance: 0.0=crop to valid pixels, 1.0=keep all (black borders). '
             'Default: 0.0.',
    )
    args = parser.parse_args()

    cal_path = args.calibration
    if not Path(cal_path).exists():
        print(f'[ERROR] Calibration file not found: {cal_path}')
        sys.exit(1)

    print(f'Loading calibration: {cal_path}')
    cal_info = load_camera_info(cal_path)

    if cal_info['distortion_model'] not in ('equidistant', 'fisheye'):
        print(f'[WARN] Distortion model "{cal_info["distortion_model"]}" — '
              'expected "equidistant". Results may be incorrect.')

    if args.image:
        preview_image_file(args.image, cal_info, args.balance)
    else:
        preview_ros_live(args.camera, cal_info, args.balance)


if __name__ == '__main__':
    main()
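For reference, `load_camera_info()` only reads a handful of keys from the calibration YAML. A hypothetical `camera_info.yaml` with just those fields (numbers are illustrative, not a real IMX219 calibration):

```yaml
# Hypothetical camera_info.yaml — only the fields load_camera_info() reads;
# intrinsics shown are placeholders, not measured values.
image_width: 640
image_height: 480
camera_name: camera_front
distortion_model: equidistant
camera_matrix:
  rows: 3
  cols: 3
  data: [330.0, 0.0, 320.0,
         0.0, 330.0, 240.0,
         0.0, 0.0, 1.0]
distortion_coefficients:
  rows: 4
  cols: 1
  data: [0.05, -0.01, 0.002, -0.0005]   # k1..k4, fisheye (equidistant) model
```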