- Add SpeechTranscript.language and ConversationResponse.language fields (BCP-47)
- speech_pipeline_node: whisper_language param (""=auto-detect via Whisper LID);
detected language published in every transcript
- conversation_node: track per-speaker language; inject a "[Please respond in X.]"
  hint for non-English speakers; propagate language to ConversationResponse.
  _LANG_NAMES maps 24 BCP-47 codes to English names. Also add Issue #161 emotion
  context plumbing (co-located in the same branch for a clean merge)
- tts_node: voice_map_json param (JSON BCP-47->ONNX path); lazy voice loading
per language; playback queue now carries (text, lang) tuples for voice routing
- speech_params.yaml, tts_params.yaml: new language params with docs
- 47/47 tests pass (test_multilang.py)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
15 lines · 570 B · YAML
speech_pipeline_node:
  ros__parameters:
    mic_device_index: -1            # -1 = system default; use `arecord -l` to list
    sample_rate: 16000
    wake_word_model: "hey_salty"
    wake_word_threshold: 0.5
    vad_threshold_db: -35.0
    use_silero_vad: true
    whisper_model: "small"          # small (~500ms), medium (better quality, ~900ms)
    whisper_compute_type: "float16"
    whisper_language: ""            # "" = auto-detect; set e.g. "fr" to force
    speaker_threshold: 0.65
    speaker_db_path: "/social_db/speaker_embeddings.json"
    publish_partial: true
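One way the `whisper_language` parameter semantics above ("" = auto-detect, otherwise force a BCP-47 code) could map onto a transcription call is sketched below. The parameter meaning comes from the YAML comment; the helper name `resolve_whisper_language` and the node wiring shown in the comment are assumptions, not the actual speech_pipeline_node code.

```python
from typing import Optional

def resolve_whisper_language(param_value: str) -> Optional[str]:
    """Map the ROS parameter to a transcriber `language` argument:
    an empty (or whitespace-only) string means "let Whisper's language
    ID decide" (None); any other value forces that BCP-47 code."""
    return param_value.strip() or None

# In the node, the result would feed the Whisper transcribe call, e.g.
# (hypothetical wiring, assuming a faster-whisper style API):
#   segments, info = model.transcribe(audio, language=resolve_whisper_language(lang_param))
#   detected = info.language   # published with every transcript
```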