Installation
Install with your package manager of choice:

npm install raindrop-ai

Then import and initialize the client:
import { Raindrop } from "raindrop-ai";
// Replace with the key from your Raindrop dashboard
const raindrop = new Raindrop({ writeKey: RAINDROP_API_KEY });
Quick Start: Interaction API
The Interaction API uses a simple three-step pattern:
1. begin() – Create an interaction and log the initial user input
2. Update – Optionally call setProperty, setProperties, or addAttachments
3. finish() – Record the AI’s final output and close the interaction
Using Vercel AI SDK? Check out our automatic integration to track AI events and traces with zero configuration.
Example: Chat Completion
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { randomUUID } from "crypto";
import { Raindrop } from "raindrop-ai";
const raindrop = new Raindrop({ writeKey: RAINDROP_API_KEY });
const message = "What is love?";
const eventId = randomUUID(); // Generate your own ID for log correlation
// 1. Start the interaction
const interaction = raindrop.begin({
eventId,
event: "chat_message",
userId: "user_123",
input: message,
model: "gpt-4o",
convoId: "convo_123",
properties: {
tool_call: "reasoning_engine",
system_prompt: "you are a helpful...",
experiment: "experiment_a",
},
});
// 2. Make the LLM call
const { text } = await generateText({
model: openai("gpt-4o"),
prompt: message,
});
// 3. Finish the interaction
interaction.finish({
output: text,
});
Updating an Interaction
Update an interaction at any point using setProperty, setProperties, or addAttachments:
interaction.setProperty("stage", "embedding");
interaction.addAttachments([
{
type: "text",
name: "Additional Info",
value: "A very long document",
role: "input",
},
{ type: "image", value: "https://example.com/image.png", role: "output" },
{
type: "iframe",
name: "Generated UI",
value: "https://newui.generated.com",
role: "output",
},
]);
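setProperties merges several properties in one call. The sketch below shows the shape using a stub in place of the real interaction object, so the snippet stands alone:

```typescript
// Stub mirroring the documented setProperties shape; the real method
// attaches these key-value pairs to the in-flight interaction.
const recorded: Record<string, string> = {};
const interaction = {
  setProperties(update: Record<string, string>): void {
    Object.assign(recorded, update);
  },
};

interaction.setProperties({ stage: "retrieval", experiment: "experiment_a" });
console.log(recorded); // { stage: "retrieval", experiment: "experiment_a" }
```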
Resuming an Interaction
If you no longer have the interaction object returned from begin(), resume it with resumeInteraction():
const interaction = raindrop.resumeInteraction(eventId);
Interactions are subject to a 1 MB event limit. Oversized payloads will be truncated. Contact us if you have custom requirements.
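If you attach large documents, you can bound their size on the client before sending. This helper is illustrative, not part of the SDK; the server truncates oversized payloads regardless:

```typescript
// Illustrative guard: cap an attachment value near the 1 MB event limit.
const MAX_EVENT_BYTES = 1_000_000;

function truncateForEvent(value: string, maxBytes: number = MAX_EVENT_BYTES): string {
  const encoded = new TextEncoder().encode(value);
  if (encoded.length <= maxBytes) return value;
  // Slice on a byte boundary, then decode; a trailing multi-byte
  // character may be replaced with U+FFFD.
  return new TextDecoder().decode(encoded.slice(0, maxBytes));
}

const big = "x".repeat(2_000_000);
const bounded = truncateForEvent(big);
console.log(bounded.length); // 1000000 for ASCII input
```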
Single-Shot Tracking (trackAi)
For simple request-response interactions, you can use trackAi() directly:
raindrop.trackAi({
event: "user_message",
userId: "user123",
model: "gpt-4o-mini",
input: "Who won the 2023 AFL Grand Final?",
output: "Collingwood by four points!",
properties: {
tool_call: "reasoning_engine",
system_prompt: "you are a helpful...",
experiment: "experiment_a",
},
});
We recommend using begin() → finish() for new code to take advantage of partial-event buffering, tracing, and upcoming features like automatic token counts.
Tracking Signals (Feedback)
Signals capture quality ratings on AI events. Use trackSignal() with the same eventId from begin() or trackAi():
| Parameter | Type | Description |
|---|---|---|
| eventId | string | The ID of the AI event you’re evaluating |
| name | "thumbs_up", "thumbs_down", or any string | Signal name |
| type | "default", "feedback", "edit" | Optional; defaults to "default" |
| comment | string | User comment (for feedback signals) |
| after | string | User’s edited content (for edit signals) |
| sentiment | "POSITIVE", "NEGATIVE" | Signal sentiment (defaults to "NEGATIVE") |
await raindrop.trackSignal({
eventId: "my_event_id",
name: "thumbs_down",
comment: "Answer was off-topic",
});
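An edit-type signal carries the user’s rewritten content in after. Below, a stub client records the payload so the snippet is self-contained; with the real SDK you would pass the same fields to raindrop.trackSignal:

```typescript
// Payload shape drawn from the parameter table above; the stub only records it.
type Signal = {
  eventId: string;
  name: string;
  type?: "default" | "feedback" | "edit";
  comment?: string;
  after?: string;
  sentiment?: "POSITIVE" | "NEGATIVE";
};

const sent: Signal[] = [];
const raindrop = {
  async trackSignal(signal: Signal): Promise<void> {
    sent.push(signal); // the real client delivers this to the Raindrop API
  },
};

void raindrop.trackSignal({
  eventId: "my_event_id",
  name: "user_edit",
  type: "edit",
  after: "Collingwood won the 2023 AFL Grand Final by four points.",
  sentiment: "NEGATIVE",
});
```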
Attachments
Attachments let you include additional context—documents, images, code, or embedded content—with your events. They work with both begin() interactions and trackAi() calls.
| Property | Type | Description |
|---|---|---|
| type | string | "code", "text", "image", or "iframe" |
| name | string | Optional display name |
| value | string | Content or URL |
| role | string | "input" or "output" |
| language | string | Programming language (for code attachments) |
interaction.addAttachments([
{
type: "code",
role: "input",
language: "typescript",
name: "example.ts",
value: "console.log('hello');",
},
{
type: "text",
name: "Additional Info",
value: "Some extra text",
role: "input",
},
{ type: "image", value: "https://example.com/image.png", role: "output" },
{ type: "iframe", value: "https://example.com/embed", role: "output" },
]);
Identifying Users
raindrop.setUserDetails({
userId: "user123",
traits: {
name: "Jane",
email: "[email protected]",
plan: "pro",
os: "macOS",
},
});
PII Redaction
Read more about how Raindrop handles privacy and PII redaction here. Enable client-side PII redaction when initializing the SDK:
new Raindrop({
writeKey: RAINDROP_API_KEY,
redactPii: true,
});
Error Handling
The SDK raises exceptions when events fail to send to Raindrop. Catch and handle these so that tracking failures never disrupt your application’s main flow.
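A minimal sketch of this pattern follows. The stubbed client simulates a delivery failure so the snippet stands alone; substitute your real Raindrop instance:

```typescript
// Sketch: isolate tracking failures from the request path.
const raindrop = {
  trackAi(_event: { event: string; input: string; output: string }): void {
    throw new Error("network unreachable"); // simulated send failure
  },
};

function safeTrack(event: { event: string; input: string; output: string }): boolean {
  try {
    raindrop.trackAi(event);
    return true;
  } catch (err) {
    // Log and move on: tracking should never break the user-facing flow.
    console.error("raindrop tracking failed:", (err as Error).message);
    return false;
  }
}

const ok = safeTrack({ event: "chat", input: "hi", output: "hello" });
console.log("tracked:", ok); // prints "tracked: false" with the failing stub
```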
Configuration
new Raindrop({
writeKey: RAINDROP_API_KEY,
debugLogs: process.env.NODE_ENV !== "production", // Print queued events
disabled: process.env.NODE_ENV === "test", // Disable all tracking
});
Call await raindrop.close() before your process exits to flush any buffered events.
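For example, a shutdown hook can await close() before the process exits. The client below is a stub so the snippet is runnable on its own; in a real service, register the hook for SIGINT/SIGTERM:

```typescript
// Sketch of flushing buffered events on shutdown. The stub stands in for a
// real Raindrop instance, whose close() flushes queued events to the API.
let flushed = false;
const raindrop = {
  async close(): Promise<void> {
    flushed = true;
  },
};

async function onShutdown(signal: string): Promise<void> {
  console.log(`received ${signal}, flushing events...`);
  await raindrop.close();
}

// In a real service:
// process.on("SIGTERM", () => onShutdown("SIGTERM").then(() => process.exit(0)));
onShutdown("SIGTERM").then(() => console.log("flushed:", flushed));
```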
Tracing
Tracing captures detailed execution information from your AI pipelines—multi-model interactions, chained prompts, and tool calls. This helps you:
- Visualize the full execution flow of your AI application
- Debug and optimize prompt chains
- Understand the intermediate steps that led to a response
Getting Started
Wrap your code with withSpan or withTool on an interaction, and LLM calls inside are automatically captured:
import { Raindrop } from "raindrop-ai";
const raindrop = new Raindrop({ writeKey: RAINDROP_API_KEY });
const interaction = raindrop.begin({ ... });
await interaction.withSpan({ name: "my_task" }, async () => {
// LLM calls here are automatically traced
});
Next.js users: Add raindrop-ai to serverExternalPackages in your config:
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
serverExternalPackages: ['raindrop-ai'],
};
module.exports = nextConfig;
Using withSpan
Use withSpan to trace tasks or operations. Any LLM calls within the span are automatically captured:
// Basic span
const result = await interaction.withSpan(
{ name: "generate_response" },
async () => {
return "Generated response";
}
);
// Span with metadata
const result = await interaction.withSpan(
{
name: "embedding_generation",
properties: { model: "text-embedding-3-large" },
inputParameters: ["What is the weather today?"],
},
async () => {
return [0.1, 0.2, 0.3, 0.4];
}
);
| Parameter | Type | Description |
|---|---|---|
| name | string | Name for identification in traces |
| properties | Record<string, string> | Additional metadata |
| inputParameters | unknown[] | Input parameters for the task |
Using withTool
Use withTool to trace agent actions—memory operations, web searches, API calls, and more:
// Basic tool call
const result = await interaction.withTool(
{ name: "search_tool" },
async () => {
return "Search results";
}
);
// Tool with metadata
const result = await interaction.withTool(
{
name: "calculator",
properties: { operation: "multiply" },
inputParameters: { a: 5, b: 10 },
},
async () => {
return "Result: 50";
}
);
| Parameter | Type | Description |
|---|---|---|
| name | string | Name for identification in traces |
| version | number | Version number of the tool |
| properties | Record<string, string> | Additional metadata |
| inputParameters | Record<string, any> | Input parameters for the tool |
| traceContent | boolean | Whether to trace content |
| suppressTracing | boolean | Suppress tracing for this invocation |
For more control over tool span tracking, use trackTool or startToolSpan.
Use trackTool to log a tool call after it has completed:
const interaction = raindrop.begin({
eventId: "my-event",
event: "agent_run",
userId: "user_123",
input: "Search for weather data",
});
// Log a completed tool call
interaction.trackTool({
name: "web_search",
input: { query: "weather in NYC" },
output: { results: ["Sunny, 72°F", "Clear skies"] },
durationMs: 150,
properties: { engine: "google" },
});
// Log a failed tool call
interaction.trackTool({
name: "database_query",
input: { query: "SELECT * FROM users" },
durationMs: 50,
error: new Error("Connection timeout"),
});
interaction.finish({ output: "Weather search complete" });
| Parameter | Type | Description |
|---|---|---|
| name | string | Name of the tool |
| input | unknown | Input passed to the tool |
| output | unknown | Output returned by the tool |
| durationMs | number | Duration in milliseconds |
| startTime | Date \| number | When the tool started (defaults to now - durationMs) |
| error | Error \| string | Error if the tool failed |
| properties | Record<string, string> | Additional metadata |
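The startTime default can be pictured as follows; defaultStartTime is a hypothetical helper shown for illustration, not part of the SDK:

```typescript
// Hypothetical helper mirroring the documented default:
// startTime = now - durationMs.
function defaultStartTime(durationMs: number, now: number = Date.now()): Date {
  return new Date(now - durationMs);
}

// With a fixed "now" for a deterministic example:
const start = defaultStartTime(150, 1_000_000);
console.log(start.getTime()); // 999850
```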
Use startToolSpan to track a tool as it executes:
const interaction = raindrop.begin({
eventId: "my-event",
event: "agent_run",
userId: "user_123",
input: "Process this data",
});
const toolSpan = interaction.startToolSpan({
name: "api_call",
properties: { endpoint: "/api/data" },
inputParameters: { method: "GET", path: "/api/data" },
});
try {
const result = await fetchData();
toolSpan.setOutput(result);
} catch (error) {
toolSpan.setError(error);
} finally {
toolSpan.end();
}
interaction.finish({ output: "Data processed" });
| Method | Description |
|---|---|
| setInput(input) | Set the input (JSON stringified if object) |
| setOutput(output) | Set the output (JSON stringified if object) |
| setError(error) | Mark the span as failed |
| end() | End the span (required when execution completes) |
Module Instrumentation
In some environments, automatic instrumentation may not work due to module loading order or bundler behavior. Use instrumentModules to explicitly specify which modules to instrument:
Anthropic users: You must use a module namespace import (import * as ...), not the default export.
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
import * as AnthropicModule from "@anthropic-ai/sdk"; // Required for instrumentation
import { Raindrop } from "raindrop-ai";
const raindrop = new Raindrop({
writeKey: RAINDROP_API_KEY,
instrumentModules: {
openAI: OpenAI,
anthropic: AnthropicModule, // Pass the module namespace, not the default export
},
});
Supported modules: openAI, anthropic, cohere, bedrock, google_vertexai, google_aiplatform, pinecone, together, langchain, llamaIndex, chromadb, qdrant, mcp.
OpenTelemetry Integration
If you already have an OpenTelemetry setup (Sentry, Datadog, Honeycomb, etc.), integrate Raindrop alongside it using useExternalOtel:
import { NodeSDK } from "@opentelemetry/sdk-node";
import * as AnthropicModule from "@anthropic-ai/sdk";
import Anthropic from "@anthropic-ai/sdk";
import { Raindrop } from "raindrop-ai";
// 1. Create Raindrop with useExternalOtel
const raindrop = new Raindrop({
writeKey: RAINDROP_API_KEY,
useExternalOtel: true,
instrumentModules: { anthropic: AnthropicModule },
});
// 2. Add Raindrop's processor and instrumentations to your NodeSDK
const sdk = new NodeSDK({
spanProcessors: [
raindrop.createSpanProcessor(), // Sends traces to Raindrop
sentryProcessor, // Your existing processor
],
instrumentations: raindrop.getInstrumentations(),
});
sdk.start();
// 3. Create AI clients AFTER SDK starts
const anthropic = new Anthropic({ apiKey: "..." });
// 4. Use Raindrop normally
const interaction = raindrop.begin({
eventId: "my-event",
event: "chat_request",
userId: "user_123",
input: "Hello!",
});
await interaction.withSpan({ name: "generate_response" }, async () => {
const response = await anthropic.messages.create({
model: "claude-3-haiku-20240307",
max_tokens: 100,
messages: [{ role: "user", content: "Hello!" }],
});
return response;
});
interaction.finish({ output: "Response from Claude" });
| Method | Description |
|---|---|
| createSpanProcessor() | Returns a span processor that sends traces to Raindrop |
| getInstrumentations() | Returns OpenTelemetry instrumentations for AI libraries |
Without instrumentModules, getInstrumentations() returns instrumentations for all supported AI libraries. Specify instrumentModules to instrument only specific libraries.
That’s it! You’re ready to explore your events in the Raindrop dashboard. Ping us on Slack or email us if you get stuck!