feat(social-bot): Conversation engine — local LLM with per-person context #83

Closed
opened 2026-03-01 22:28:19 -05:00 by sl-jetson · 0 comments
Collaborator

Summary

Local LLM inference for natural conversation with per-person memory and group awareness.

Requirements

  • LLM: Phi-3-mini or Llama-3.2-3B quantized (GGUF Q4_K_M) via llama.cpp on Orin
  • Per-person context: Conversation history per person_id, relationship memory
  • Group mode: Multi-person conversation, address individuals by name
  • System prompt: Loaded from SOUL.md personality file
  • ROS2: Subscribe to /social/speech/transcript, publish to /social/conversation/response
  • Streaming: Token-by-token output for low first-response latency
  • Context window: 4K tokens, sliding window with summary compression
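The sliding-window requirement above can be sketched as a small per-person context manager. This is a minimal sketch, not the implementation: the names (`PersonContext`, `approx_tokens`) are illustrative, token counts are approximated by word count rather than the llama.cpp tokenizer, and the summariser is a placeholder where a real system would ask the LLM to compress the evicted turns.

```python
from dataclasses import dataclass, field

def approx_tokens(text: str) -> int:
    # Crude stand-in for the llama.cpp tokenizer: one token per word.
    return len(text.split())

@dataclass
class PersonContext:
    person_id: str
    max_tokens: int = 4096   # context window from the requirements
    keep_recent: int = 6     # turns always kept verbatim
    summary: str = ""        # compressed older history
    turns: list = field(default_factory=list)  # (role, text) pairs

    def add_turn(self, role: str, text: str) -> None:
        self.turns.append((role, text))
        self._compress()

    def _compress(self) -> None:
        # Slide the window: once verbatim turns exceed the remaining budget,
        # fold the oldest turns into the running summary.
        while (len(self.turns) > self.keep_recent and
               sum(approx_tokens(t) for _, t in self.turns)
               > self.max_tokens - approx_tokens(self.summary)):
            role, text = self.turns.pop(0)
            # Placeholder summariser; a real one would call the LLM.
            self.summary += f" [{role}: {text[:40]}]"

    def prompt_messages(self, system_prompt: str) -> list:
        msgs = [{"role": "system", "content": system_prompt}]
        if self.summary:
            msgs.append({"role": "system",
                         "content": "Earlier conversation, summarised:" + self.summary})
        msgs += [{"role": r, "content": t} for r, t in self.turns]
        return msgs
```

Keeping one `PersonContext` per `person_id` (e.g. in a dict keyed by ID) gives the per-person memory; the `system_prompt` argument would be the text loaded from SOUL.md.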

Agent: sl-jetson

Labels: social-bot
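Group mode largely amounts to tagging each transcript turn with its speaker before it reaches the model. A minimal sketch under stated assumptions: the helper name, the `person_id`-to-name mapping, and the OpenAI-style message list (the format llama.cpp's chat endpoint accepts) are all illustrative here, not taken from this issue.

```python
def build_group_prompt(soul_text: str, people: dict, turns: list) -> list:
    """Interleave named speakers into one chat history.

    people: {person_id: display_name} for everyone currently present.
    turns:  [(person_id, transcript_text), ...] in arrival order.
    """
    names = ", ".join(sorted(people.values()))
    system = (soul_text.strip()
              + f"\n\nYou are talking with a group: {names}. "
              "Address each person by name when replying to them.")
    messages = [{"role": "system", "content": system}]
    for person_id, text in turns:
        speaker = people.get(person_id, "someone")
        # Prefix each user turn with the speaker's name so the model
        # can tell group members apart and address them individually.
        messages.append({"role": "user", "content": f"{speaker}: {text}"})
    return messages
```

The fallback to "someone" covers transcripts from a `person_id` the face/voice pipeline has not yet enrolled.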

Reference: seb/saltylab-firmware#83