Installation

pip install raindrop-pydantic-ai pydantic-ai

Quick Start

from raindrop_pydantic_ai import create_raindrop_pydantic_ai
from pydantic_ai import Agent

raindrop = create_raindrop_pydantic_ai(
    api_key="rk_...",
    user_id="user-123",
)

agent = Agent("openai:gpt-4o", system_prompt="Be helpful")
wrapped = raindrop["wrap"](agent)

result = wrapped.run_sync("What is the capital of France?")
print(result.output)

raindrop["flush"]()

What Gets Traced

The Pydantic AI integration automatically captures:
  • Agent runs — input prompt, output text (including structured Pydantic model output), model name
  • Token usage — input_tokens and output_tokens from the agent result
  • Errors — captured and re-raised to the caller
  • Async support — both run() (async) and run_sync() (sync) are instrumented
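The capture-and-re-raise behavior above can be sketched in plain Python. This is a hypothetical illustration of the wrapping pattern, not the real SDK internals: `WrappedAgent`, `EVENTS`, and `record_event` are stand-in names, and the event fields are assumptions.

```python
# Hypothetical sketch of the wrap-and-record pattern; names and event
# fields are illustrative, not the actual raindrop-pydantic-ai internals.
EVENTS = []

def record_event(event):
    EVENTS.append(event)  # stand-in for shipping telemetry

class WrappedAgent:
    """Records input, output, and errors around each run; errors are re-raised."""

    def __init__(self, agent):
        self._agent = agent

    def run_sync(self, prompt):
        event = {"input": prompt, "model": getattr(self._agent, "model", None)}
        try:
            result = self._agent.run_sync(prompt)
        except Exception as exc:
            event["error"] = repr(exc)
            record_event(event)
            raise  # the caller still sees the original exception
        event["output"] = result.output
        record_event(event)
        return result
```

The key property is that tracing never swallows an exception: the error is recorded, then propagated unchanged to the caller.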

Configuration

raindrop = create_raindrop_pydantic_ai(
    api_key="rk_...",      # Required: your Raindrop API key
    user_id="user-123",    # Optional: associate events with a user
    convo_id="convo-456",  # Optional: conversation/thread ID
)

Structured Output

The integration handles Pydantic AI’s structured output types — the output is serialized to JSON for telemetry:
from pydantic import BaseModel

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

agent = Agent("openai:gpt-4o", output_type=CityInfo)
wrapped = raindrop["wrap"](agent)

result = wrapped.run_sync("Tell me about Paris")
print(result.output)  # CityInfo(name='Paris', country='France', population=2161000)

raindrop["flush"]()
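The JSON serialization step can be sketched as follows. This is an assumption about how such a wrapper might flatten structured output for telemetry, using duck typing on Pydantic v2's `model_dump()`; `serialize_output` and `FakeCityInfo` are hypothetical names, and `FakeCityInfo` stands in for a real Pydantic model so the sketch has no dependencies.

```python
import json

# Hedged sketch: flatten structured output to a JSON string for telemetry.
# Pydantic v2 models expose model_dump(); plain text falls through as-is.
def serialize_output(output):
    if hasattr(output, "model_dump"):  # looks like a Pydantic v2 model
        return json.dumps(output.model_dump())
    return json.dumps(output)          # plain string or JSON-safe value

class FakeCityInfo:  # stand-in for a Pydantic model in this sketch
    def model_dump(self):
        return {"name": "Paris", "country": "France", "population": 2161000}

print(serialize_output(FakeCityInfo()))
# {"name": "Paris", "country": "France", "population": 2161000}
```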

Async Usage

The wrapper supports both sync and async agent runs:
import asyncio

async def main():
    result = await wrapped.run("What is quantum computing?")
    print(result.output)
    raindrop["flush"]()

asyncio.run(main())

Flushing and Shutdown

Always call flush() before your process exits to ensure all telemetry is shipped:
raindrop["flush"]()     # flush pending data
raindrop["shutdown"]()  # flush + release resources
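The reason flushing matters is that telemetry is typically buffered in memory and shipped in batches. A minimal sketch of that pattern, with `TelemetryBuffer` as a hypothetical stand-in (the real SDK's buffering is not shown here); registering the shutdown hook with `atexit` is one defensive way to guarantee a final flush:

```python
import atexit

# Illustrative sketch of buffered telemetry: events accumulate in memory
# and flush() ships them in a batch; shutdown() is flush plus cleanup.
class TelemetryBuffer:
    def __init__(self):
        self._pending = []
        self.shipped = []  # stand-in for data sent over the network

    def record(self, event):
        self._pending.append(event)

    def flush(self):
        self.shipped.extend(self._pending)  # stand-in for a network send
        self._pending.clear()

    def shutdown(self):
        self.flush()  # final flush before releasing resources

buf = TelemetryBuffer()
atexit.register(buf.shutdown)  # flushes even if the explicit call is missed
buf.record({"event": "agent_run"})
buf.flush()
```

Events recorded after the last flush would otherwise be lost when the process exits, which is why the docs recommend an explicit flush before shutdown.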

Known Limitations

  • run_stream() is not instrumented — only run() and run_sync() are captured. Streaming runs produce no telemetry.
  • Multi-step agent runs: In agents with multiple LLM calls (e.g., tool use loops), only the final result’s data is captured. Intermediate LLM calls are not tracked individually.