Documentation Index
Fetch the complete documentation index at: https://raindrop.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Installation
Quick Start
What Gets Traced
- generateContent — input text (user messages only), output, model, token usage (promptTokenCount/candidatesTokenCount)
- Cached tokens — cached_content_token_count from usage metadata → ai.usage.cached_tokens
- Thinking tokens — thoughts_token_count from usage metadata (Gemini 2.5) → ai.usage.thoughts_tokens
- Finish reason — candidate.finish_reason (STOP, MAX_TOKENS, SAFETY, RECITATION) → vertex_ai.finish_reason
- Errors — captured with error status, re-thrown to caller
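The error behavior in the last bullet (capture, then re-throw) can be sketched as a generic decorator. This is an illustration only, not the SDK's actual internals; `record_event` is a hypothetical callback standing in for the wrapper's event recorder.

```python
import functools

def traced(record_event):
    """Decorator sketch: record success or error status, always re-raise.

    record_event is a hypothetical callback; the real wrapper's recording
    mechanism is not shown in this document.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                result = fn(*args, **kwargs)
                record_event({"status": "ok"})
                return result
            except Exception as exc:
                # Capture the error on the trace, then re-throw to the caller
                record_event({"status": "error", "error": type(exc).__name__})
                raise
        return wrapper
    return decorator
```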
Configuration
identify()
track_signal()
Flushing and Shutdown
finish_reason Tracking
The Python wrapper captures candidate.finish_reason from Vertex AI responses and maps it to vertex_ai.finish_reason in event properties. Possible values: STOP, MAX_TOKENS, SAFETY, RECITATION.
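The mapping can be mimicked with plain Python. The response shape below is a stand-in for illustration, not the actual Vertex AI type:

```python
def map_finish_reason(candidate) -> dict:
    """Map a candidate's finish_reason to the vertex_ai.finish_reason property.

    `candidate` is any object with a `finish_reason` attribute, which may be
    an enum member (STOP, MAX_TOKENS, SAFETY, RECITATION) or a plain string.
    """
    reason = candidate.finish_reason
    # Enum members expose .name; plain strings pass through unchanged.
    name = getattr(reason, "name", reason)
    return {"vertex_ai.finish_reason": name}
```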
Token Tracking
The following token usage fields are captured from usage_metadata:
| Field | Property Key | Description |
|---|---|---|
| prompt_token_count | ai.usage.prompt_tokens | Input tokens |
| candidates_token_count | ai.usage.completion_tokens | Output tokens |
| cached_content_token_count | ai.usage.cached_tokens | Cached input tokens |
| thoughts_token_count | ai.usage.thoughts_tokens | Thinking tokens (Gemini 2.5) |
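The table's field-to-property mapping can be sketched as follows. This is a minimal illustration, assuming usage_metadata exposes the counts as attributes; it is not the wrapper's actual code:

```python
# Field-to-property mapping, per the table above.
TOKEN_FIELDS = {
    "prompt_token_count": "ai.usage.prompt_tokens",
    "candidates_token_count": "ai.usage.completion_tokens",
    "cached_content_token_count": "ai.usage.cached_tokens",
    "thoughts_token_count": "ai.usage.thoughts_tokens",
}

def extract_token_usage(usage_metadata) -> dict:
    """Copy whichever token counts are present into event properties."""
    props = {}
    for field, key in TOKEN_FIELDS.items():
        value = getattr(usage_metadata, field, None)
        if value is not None:
            props[key] = value
    return props
```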
Factory Function
A create_raindrop_vertex_ai() factory is also available:
Known Limitations
- Python SDK: No events.* API — use raindrop.analytics directly. identify() and track_signal() are available on the wrapper instance.
- Streaming: generateContentStream() is not instrumented. Only generateContent() is traced.