- No OpenTelemetry setup required
- Automatic input/output capture from agent invocations
- Model call tracking with token usage
- Tool call span tracking
- Works with both TypeScript and Python SDKs
Installation
- TypeScript
- Python
Quick Start
- TypeScript
- Python
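The quick-start code did not survive export, so here is a minimal Python sketch. `register_hooks` is named later in this doc and `from strands import Agent` is the Strands Agents SDK's entry point, but the `raindrop` import path is an assumption — check the SDK reference for the exact module name.

```python
from strands import Agent            # Strands Agents SDK
from raindrop import register_hooks  # hypothetical import path; register_hooks is named in this doc

agent = Agent()                      # configure model/tools as usual
register_hooks(agent)                # attach Raindrop telemetry hooks

result = agent("What is the weather in Seattle?")
print(result)
```

Once the hooks are registered, every invocation of `agent(...)` is traced automatically — no OpenTelemetry setup required.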
What Gets Traced
The Strands integration automatically captures:
- Agent invocations — input prompt (last user message), output text, model name
- Token usage — prompt_tokens and completion_tokens from model responses
- Model calls — output and usage extracted from each model call within an invocation
- Tool calls — tool name, input, and output for each tool invocation
- Errors — captured and re-raised; telemetry failures never crash your pipeline
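To make the captured fields concrete, here is a runnable stand-in (not the integration's actual handler) that records the same span shape described above — invocation input/output, model name, token usage, and tool calls — and swallows its own errors so telemetry can never crash the pipeline:

```python
class SpanRecorder:
    """Illustrative stand-in for the integration's hook handler."""

    def __init__(self):
        self.spans = []

    def on_invocation_end(self, prompt, output, model, usage, tool_calls):
        try:
            self.spans.append({
                "input": prompt,            # last user message
                "output": output,           # final output text
                "model": model,
                "prompt_tokens": usage.get("prompt_tokens", 0),
                "completion_tokens": usage.get("completion_tokens", 0),
                "tool_calls": tool_calls,   # name/input/output per call
            })
        except Exception:
            pass  # telemetry failures never propagate to the agent

recorder = SpanRecorder()
recorder.on_invocation_end(
    "What's 2+2?", "4", "example-model",
    {"prompt_tokens": 12, "completion_tokens": 3},
    [{"name": "calculator", "input": "2+2", "output": "4"}],
)
print(recorder.spans[0]["completion_tokens"])  # -> 3
```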
Configuration
- TypeScript
- Python
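The configuration snippets are missing from this export. As a sketch of the usual pattern, the example below reads settings from environment variables — the variable names `RAINDROP_API_KEY` and `RAINDROP_DEBUG` are assumptions, not confirmed SDK names:

```python
import os

def load_config():
    """Read Raindrop settings from the environment.
    Variable names here are illustrative assumptions."""
    return {
        "api_key": os.environ.get("RAINDROP_API_KEY"),        # assumed env var
        "debug": os.environ.get("RAINDROP_DEBUG", "0") == "1" # assumed flag
    }
```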
Identifying Users
- TypeScript
- Python
Signals (Feedback)
Track user feedback on AI responses:
- TypeScript
- Python
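The feedback code blocks were lost in export; the Python sketch below shows the general shape of sending a signal. The `signal` function and its field names are assumptions about the API, not confirmed names — consult the SDK reference:

```python
from raindrop import signal  # hypothetical import; exact name may differ

# Attach user feedback to a previously traced invocation.
signal(
    event_id=result.event_id,  # id of the traced agent invocation (assumed field)
    name="thumbs_up",          # feedback label
    comment="Great answer",
)
```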
Flush & Shutdown
Always flush before your process exits to ensure all data is sent:
- TypeScript
- Python
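The flush pattern can be sketched with a stand-in client — the real client's flush method name and shape are assumptions here, but the `atexit` technique is standard:

```python
import atexit

class StubClient:
    """Stand-in for the Raindrop client; flush() is an assumed method name."""

    def __init__(self):
        self.flushed = False

    def flush(self):
        # In the real SDK this would drain buffered spans over the network.
        self.flushed = True

client = StubClient()
atexit.register(client.flush)  # guarantee a flush on normal interpreter exit
client.flush()                 # or flush explicitly before exiting
print(client.flushed)  # -> True
```

Registering the flush with `atexit` covers normal exits; for long-running services, also flush explicitly in your shutdown path.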
Known Limitations
- Streaming: The integration captures the final result of each model call. Streaming token-by-token output is not individually traced.
- Multi-agent: Each agent needs its own registerHooks()/register_hooks() call. Nested agent hierarchies are not automatically linked.
- Python WeakRef: The Python handler uses id(agent) for internal tracking, since some agent-like objects don't support weak references. Context is cleaned up explicitly after each invocation.
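The id(agent) bookkeeping described above can be sketched as follows — an illustrative model of the pattern, not the handler's actual code:

```python
class HandlerContext:
    """Tracks per-agent state keyed by id(), with explicit cleanup,
    for objects that don't support weak references."""

    def __init__(self):
        self._contexts = {}

    def start(self, agent):
        self._contexts[id(agent)] = {"spans": []}

    def get(self, agent):
        return self._contexts.get(id(agent))

    def finish(self, agent):
        # Explicit cleanup after each invocation: id() keys would otherwise
        # leak, and an id can be reused once the original object is freed.
        return self._contexts.pop(id(agent), None)

class Agent:  # minimal placeholder agent for the demo
    pass

ctx = HandlerContext()
a = Agent()
ctx.start(a)
ctx.get(a)["spans"].append("model_call")
done = ctx.finish(a)
print(done)        # -> {'spans': ['model_call']}
print(ctx.get(a))  # -> None
```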
That’s it! You’re ready to explore your Strands agent events in the Raindrop dashboard. Ping us on Slack or email us if you get stuck!