- Add SpeechTranscript.language (BCP-47), ConversationResponse.language fields
- speech_pipeline_node: whisper_language param (""=auto-detect via Whisper LID);
detected language published in every transcript
- conversation_node: track per-speaker language; inject "[Please respond in X.]"
hint for non-English speakers; propagate language to ConversationResponse.
_LANG_NAMES: 24 BCP-47 codes -> English names. Also adds Issue #161 emotion
context plumbing (co-located in same branch for clean merge)
- tts_node: voice_map_json param (JSON BCP-47->ONNX path); lazy voice loading
per language; playback queue now carries (text, lang) tuples for voice routing
- speech_params.yaml, tts_params.yaml: new language params with docs
- 47/47 tests pass (test_multilang.py)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
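The per-speaker language hint described in the conversation_node bullet can be sketched as follows. `_LANG_NAMES` is named in the commit, but the mapping contents shown here are an assumed subset of the 24 entries, and `inject_language_hint` is a hypothetical helper, not the node's actual code:

```python
# Illustrative sketch of the conversation_node "[Please respond in X.]" logic.
# _LANG_NAMES is named in the commit message; the entries below are an
# assumed partial subset of its 24 BCP-47 -> English-name pairs.
_LANG_NAMES = {
    "en": "English",
    "fr": "French",
    "es": "Spanish",
    "de": "German",
    "ja": "Japanese",
}


def inject_language_hint(prompt: str, language: str) -> str:
    """Append a language hint for non-English speakers (hypothetical helper).

    `language` is the BCP-47 code carried in SpeechTranscript.language.
    English, empty, or unrecognized codes leave the prompt unchanged;
    that exact fallback policy is an assumption, not confirmed by the commit.
    """
    name = _LANG_NAMES.get(language)
    if not name or language == "en":
        return prompt
    return f"{prompt} [Please respond in {name}.]"
```

For example, `inject_language_hint("Bonjour!", "fr")` yields `"Bonjour! [Please respond in French.]"`, while an English or unknown language code passes the prompt through untouched.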
# SpeechTranscript.msg — Result of STT with speaker identification.
# Published by speech_pipeline_node on /social/speech/transcript

std_msgs/Header header

string text             # Transcribed text (UTF-8)
string speaker_id       # e.g. "person_42" or "unknown"
float32 confidence      # ASR confidence 0..1
float32 audio_duration  # Duration of the utterance in seconds
bool is_partial         # true = intermediate streaming result, false = final
string language         # BCP-47 detected language code, e.g. "en", "fr", "es" (empty = unknown)
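A minimal sketch of how the tts_node's voice_map_json parameter might consume the `language` field above. The parameter name and the JSON shape (BCP-47 code to ONNX voice path) come from the commit message, but `load_voice_map` and `resolve_voice` are hypothetical helpers, and the fallback-to-English policy is an assumption:

```python
import json


def load_voice_map(voice_map_json: str) -> dict:
    """Parse a voice_map_json parameter string (BCP-47 code -> ONNX voice path)."""
    return json.loads(voice_map_json)


def resolve_voice(voice_map: dict, lang: str, default_lang: str = "en") -> str:
    """Pick the ONNX voice path for a queued (text, lang) playback item.

    Unknown or unmapped languages fall back to default_lang; this fallback
    policy is an assumption for illustration, not necessarily the node's.
    """
    return voice_map.get(lang) or voice_map[default_lang]


# Example: route one (text, lang) tuple from the playback queue to a voice.
voice_map = load_voice_map('{"en": "voices/en.onnx", "fr": "voices/fr.onnx"}')
text, lang = ("Bonjour!", "fr")
voice_path = resolve_voice(voice_map, lang)  # -> "voices/fr.onnx"
```

In the real node the resolved voice would then be lazily loaded on first use per language, as the commit describes; the sketch above only shows the routing step.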