## Installation

```bash
pip install raindrop-agno agno
```
## Quick Start

```python
from raindrop_agno import create_raindrop_agno
from agno.agent import Agent
from agno.models.openai import OpenAIChat

raindrop = create_raindrop_agno(
    api_key="rk_...",  # Required: your Raindrop API key
    user_id="user-123",
    tracing_enabled=True,  # Recommended: enables nested trace spans
)

agent = Agent(model=OpenAIChat(id="gpt-4o"))
wrapped = raindrop["wrap"](agent)

result = wrapped.run("What is the capital of France?")
print(result.content)

raindrop["shutdown"]()
```
## What Gets Traced

The Agno integration automatically captures:

- Agent runs — input prompt, output text, model name
- Token usage — `input_tokens` and `output_tokens` from the Agno `RunOutput` metrics
- Tool calls — nested spans with name, arguments, result, errors, and duration
- Model calls — LLM invocations as nested child spans under the agent run
- Team delegation — member agent calls appear as nested spans under the team
- Errors — captured (with error message) and re-raised to the caller
- Async support — both `run()` (sync) and `arun()` (async) are instrumented
- Agno identity — `run_id`, `session_id`, and `agent_name` forwarded as properties
## Configuration

```python
raindrop = create_raindrop_agno(
    api_key="rk_...",      # Required: your Raindrop API key
    user_id="user-123",    # Optional: associate events with a user
    convo_id="convo-456",  # Optional: conversation/thread ID
    tracing_enabled=True,  # Recommended: enables the Agno OpenInference instrumentor
)
```
When `tracing_enabled=True`, the integration enables the Agno OpenInference instrumentor (`Instruments.AGNO`), which automatically creates properly nested OTEL spans for agent runs, model calls, and tool executions. This gives full trace visibility in the Raindrop dashboard with a hierarchical span tree:

```text
Agent run
├── Model call (gpt-4o-mini) — 1.2s
├── Tool: get_stock_price — 0.1ms
├── Model call (gpt-4o-mini) — 0.8s
└── Tool: calculate — 0.1ms
```
## Wrapping Agents

Use `raindrop["wrap"]()` to instrument an agent. The wrapped agent behaves identically to the original — `run()` and `arun()` return the same `RunOutput` objects:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
    name="Stock Price Agent",
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[get_stock_price],  # defined in the next example
    instructions="Answer questions in the style of a stock analyst.",
)

wrapped = raindrop["wrap"](agent)
result = wrapped.run("What is the current price of AAPL?")
print(result.content)
```
When your agent uses tools and tracing is enabled, each tool execution appears as a nested span in the trace view:

```python
def get_stock_price(symbol: str) -> str:
    """Get the current stock price."""
    return "189.50"

agent = Agent(
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[get_stock_price],
)

wrapped = raindrop["wrap"](agent)
result = wrapped.run("What is the price of AAPL?")

# Tool calls appear as nested spans with:
# - tool name, input arguments, output result
# - errors (if any), duration
```

The number of tool calls is also included in event properties as `agno.tool_calls_count`.
## Teams

For Agno Teams, wrap each member agent. The OpenInference instrumentor automatically traces team coordination, member delegation, and nested agent calls:

```python
from agno.agent import Agent
from agno.team import Team
from agno.models.openai import OpenAIChat

researcher = Agent(
    name="Researcher",
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[search_tool],
    role="Research analyst",
)
writer = Agent(
    name="Writer",
    model=OpenAIChat(id="gpt-4o-mini"),
    role="Content writer",
)

raindrop["wrap"](researcher)
raindrop["wrap"](writer)

team = Team(
    name="ResearchTeam",
    mode="coordinate",
    model=OpenAIChat(id="gpt-4o-mini"),
    members=[researcher, writer],
)
result = team.run("Write a report on AI trends")
```
## Wrapping Workflows

The same `wrap()` function works with Agno Workflows:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.workflow import Workflow

class ResearchWorkflow(Workflow):
    researcher: Agent = Agent(
        model=OpenAIChat(id="gpt-4o"),
        instructions="Research the given topic thoroughly.",
    )
    writer: Agent = Agent(
        model=OpenAIChat(id="gpt-4o"),
        instructions="Write a summary based on the research.",
    )

workflow = ResearchWorkflow()
raindrop["wrap"](workflow.researcher)
raindrop["wrap"](workflow.writer)
wrapped = raindrop["wrap"](workflow)

result = wrapped.run("Explain quantum computing")
print(result.content)
```
## Async Usage

The wrapper supports both sync and async agent runs:

```python
import asyncio

async def main():
    result = await wrapped.arun("What is quantum computing?")
    print(result.content)
    raindrop["shutdown"]()

asyncio.run(main())
```
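Because `arun()` is instrumented, concurrent fan-out with `asyncio.gather` also works. The sketch below shows the pattern only; `FakeAgent` is a stand-in for a wrapped agent (an assumption for this demo, not part of the SDK):

```python
import asyncio

# FakeAgent stands in for a wrapped Agno agent in this sketch;
# with the real SDK you would call wrapped.arun() instead.
class FakeAgent:
    async def arun(self, prompt: str) -> str:
        await asyncio.sleep(0)  # yield control, like a real network call
        return f"answer to: {prompt}"

async def main() -> list[str]:
    agent = FakeAgent()
    # gather runs both calls concurrently and preserves argument order.
    return await asyncio.gather(
        agent.arun("What is quantum computing?"),
        agent.arun("What is the capital of France?"),
    )

results = asyncio.run(main())
print(results)
```

Each concurrent run still gets its own trace, since spans are created per call.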
## Captured Properties

Each event includes the following properties when available:

| Property | Description |
|---|---|
| `ai.usage.prompt_tokens` | Input token count |
| `ai.usage.completion_tokens` | Output token count |
| `ai.model` | Model name |
| `agno.run_id` | Agno run identifier |
| `agno.session_id` | Agno session identifier |
| `agno.agent_id` | Agno agent identifier |
| `agno.agent_name` | Agent name |
| `agno.workflow_id` | Workflow identifier (if applicable) |
| `agno.tool_calls_count` | Number of tool calls in the run |
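For illustration, a property payload is a flat key/value map. Only the keys below come from the table; the values are hypothetical:

```python
# Hypothetical event payload — values are illustrative only.
event_properties = {
    "ai.usage.prompt_tokens": 42,
    "ai.usage.completion_tokens": 128,
    "ai.model": "gpt-4o-mini",
    "agno.run_id": "run_abc123",
    "agno.session_id": "sess_xyz789",
    "agno.agent_name": "Stock Price Agent",
    "agno.tool_calls_count": 2,
}

# A total token count can be derived from the two usage counters.
total_tokens = (
    event_properties["ai.usage.prompt_tokens"]
    + event_properties["ai.usage.completion_tokens"]
)
print(total_tokens)
```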
## Flushing and Shutdown

Always call `shutdown()` before your process exits to ensure all telemetry is shipped:

```python
raindrop["shutdown"]()  # flush + release resources
```
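If your process may exit through an unexpected path, one option is registering the shutdown call with the standard-library `atexit` module so the flush still runs at interpreter exit. This is a sketch with a stub in place of the real `raindrop["shutdown"]` callable:

```python
import atexit

# Stub standing in for raindrop["shutdown"] in this sketch.
flushed = {"done": False}

def _flush_telemetry():
    # In real code: raindrop["shutdown"]()
    flushed["done"] = True

# atexit runs registered callables (last registered first) at interpreter exit.
atexit.register(_flush_telemetry)

# Called directly here only to demonstrate the effect.
_flush_telemetry()
print(flushed["done"])
```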
## Known Limitations

- Streaming: `run(stream=True)` does not produce events, but trace spans are still captured when `tracing_enabled=True`.
- Multi-step agent runs: the event captures the final result. Individual LLM and tool calls appear as nested trace spans when `tracing_enabled=True`.

For `identify()`, `track_signal()`, and other SDK functions, use `import raindrop.analytics as raindrop` directly alongside the wrapper.