# Documentation Index

Fetch the complete documentation index at: https://raindrop.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.
## Installation

```bash
npm install @raindrop-ai/bedrock @aws-sdk/client-bedrock-runtime
```
## Quick Start

```typescript
import { createRaindropBedrock } from "@raindrop-ai/bedrock";
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

const raindrop = createRaindropBedrock({
  writeKey: "your-write-key",
  userId: "user-123",
});

const client = new BedrockRuntimeClient({ region: "us-east-1" });
const wrapped = raindrop.wrap(client);

const response = await wrapped.send(
  new ConverseCommand({
    modelId: "anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages: [
      { role: "user", content: [{ text: "What is the capital of France?" }] },
    ],
  }),
);

console.log(response.output?.message?.content?.[0]?.text);

await raindrop.flush();
```
## What Gets Traced
The Bedrock integration automatically captures:
- Converse API — input messages, output text, model ID, token usage (inputTokens/outputTokens), stop reason, cached token counts
- InvokeModel API — raw request/response, model ID, token usage (supports Claude, Titan, and Llama response formats), stop reason, cached tokens (Claude)
- Errors — captured with error status on the span, re-thrown to caller
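The error behavior above can be sketched with a minimal wrapper. Note this is an illustration of the described semantics, not the SDK's internals: `sendWithSpan`, `Span`, and `SendableClient` are hypothetical names invented for this sketch.

```typescript
// Illustrative sketch of the error-capture behavior described above:
// the span is marked with an error status, then the error is re-thrown
// so the caller still sees the original failure.
type Span = { setStatus: (s: "ok" | "error") => void; end: () => void };

interface SendableClient {
  send(command: unknown): Promise<unknown>;
}

async function sendWithSpan(
  client: SendableClient,
  command: unknown,
  span: Span,
): Promise<unknown> {
  try {
    const response = await client.send(command);
    span.setStatus("ok");
    return response;
  } catch (err) {
    span.setStatus("error"); // record the failure on the span...
    throw err;               // ...then re-throw to the caller
  } finally {
    span.end();
  }
}
```

Because the error is re-thrown, your existing `try`/`catch` around `wrapped.send(...)` keeps working unchanged.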
## Captured Properties
| Property | Source | Description |
|---|---|---|
| `ai.usage.prompt_tokens` | Both APIs | Input/prompt token count |
| `ai.usage.completion_tokens` | Both APIs | Output/completion token count |
| `ai.usage.cached_tokens` | Converse: `cacheReadInputTokenCount`; Claude InvokeModel: `cache_read_input_tokens` | Tokens read from cache |
| `ai.usage.cache_write_tokens` | Converse: `cacheWriteInputTokenCount` | Tokens written to cache |
| `bedrock.finish_reason` | Converse: `stopReason`; InvokeModel: varies by model | Why the model stopped generating (e.g. `end_turn`, `tool_use`, `max_tokens`) |
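As a sketch of how these properties might be consumed downstream, here is a small helper that sums usage counts. The flat property-bag shape is an assumption for illustration, not the actual Raindrop event payload:

```typescript
// Hypothetical event-properties shape keyed by the property names in the
// table above; the real Raindrop payload may be structured differently.
interface UsageProperties {
  "ai.usage.prompt_tokens"?: number;
  "ai.usage.completion_tokens"?: number;
  "ai.usage.cached_tokens"?: number;
}

// Total tokens consumed by a single traced call (illustrative only).
function totalTokens(props: UsageProperties): number {
  return (
    (props["ai.usage.prompt_tokens"] ?? 0) +
    (props["ai.usage.completion_tokens"] ?? 0)
  );
}
```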
## Configuration

```typescript
const raindrop = createRaindropBedrock({
  writeKey: "your-write-key", // Optional: your Raindrop write key (omit to disable telemetry)
  endpoint: "...", // Optional: custom API endpoint
  userId: "user-123", // Optional: associate events with a user
  convoId: "convo-456", // Optional: conversation/thread ID
  debug: false, // Optional: enable verbose logging
});
```
## Using InvokeModel
The wrapper also supports the legacy InvokeModel API. Token usage extraction works with Claude, Titan, and Llama response formats:
```typescript
import { InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";

const response = await wrapped.send(
  new InvokeModelCommand({
    modelId: "anthropic.claude-3-5-sonnet-20241022-v2:0",
    contentType: "application/json",
    body: JSON.stringify({
      anthropic_version: "bedrock-2023-05-31",
      messages: [{ role: "user", content: "Hello!" }],
      max_tokens: 256,
    }),
  }),
);

const result = JSON.parse(new TextDecoder().decode(response.body));
console.log(result.content[0].text);

await raindrop.flush();
```
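InvokeModel request and response bodies vary by model family. For contrast with the Claude body above, here is a sketch using the Titan text format; the field names follow Amazon Titan's documented InvokeModel schema, and the decoder helper is illustrative, not part of this SDK:

```typescript
// Titan text-generation InvokeModel request body (compare with the
// Claude-style body in the example above).
const titanBody = JSON.stringify({
  inputText: "Hello!",
  textGenerationConfig: { maxTokenCount: 256 },
});

// Titan responses carry generated text under `results[0].outputText`;
// this mirrors the TextDecoder pattern used for Claude above.
function decodeTitanOutput(body: Uint8Array): string {
  const parsed = JSON.parse(new TextDecoder().decode(body));
  return parsed.results[0].outputText;
}
```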
## Identifying Users

Associate a user with optional traits:

```typescript
raindrop.identify("user-123", { plan: "pro", company: "Acme" });
```
## Tracking Signals

Track feedback, edits, or custom signals:

```typescript
raindrop.trackSignal({
  eventId: "evt_abc123",
  name: "thumbs_up",
  signalType: "feedback",
  sentiment: "POSITIVE",
  comment: "Great answer!",
});
```
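A common pattern is building the signal payload from a UI thumb vote. The helper below is a sketch under that assumption: the field names come from the `trackSignal` example above, but `thumbSignal` and `thumbs_down` are illustrative, not documented SDK names:

```typescript
type Sentiment = "POSITIVE" | "NEGATIVE";

// Hypothetical payload shape based on the trackSignal example above.
interface SignalPayload {
  eventId: string;
  name: string;
  signalType: "feedback";
  sentiment: Sentiment;
  comment?: string;
}

// Map a thumbs-up/down vote to a feedback signal payload.
function thumbSignal(eventId: string, up: boolean, comment?: string): SignalPayload {
  const payload: SignalPayload = {
    eventId,
    name: up ? "thumbs_up" : "thumbs_down",
    signalType: "feedback",
    sentiment: up ? "POSITIVE" : "NEGATIVE",
  };
  if (comment !== undefined) payload.comment = comment;
  return payload;
}
```

The resulting object can then be passed to `raindrop.trackSignal(...)`.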
## Flushing and Shutdown

Always call `flush()` before your process exits to ensure all telemetry is shipped:

```typescript
await raindrop.flush(); // flush pending data
await raindrop.shutdown(); // flush + release resources
```
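In a long-running service, the shutdown call above can be wired to termination signals. The sketch below assumes only the `flush()`/`shutdown()` methods shown above; `installShutdownHook`, `Flushable`, and `ProcessLike` are illustrative names, not part of the SDK:

```typescript
// Minimal shape of the raindrop client used here, per the methods above.
interface Flushable {
  flush(): Promise<void>;
  shutdown(): Promise<void>;
}

// Minimal shape of Node's `process` needed for the hook; pass the real
// global `process` in an application.
interface ProcessLike {
  once(event: string, handler: () => void): unknown;
  exit(code: number): void;
}

// Flush telemetry and release resources on graceful termination.
function installShutdownHook(raindrop: Flushable, proc: ProcessLike): () => Promise<void> {
  const handler = async () => {
    await raindrop.shutdown(); // flush + release resources
    proc.exit(0);
  };
  proc.once("SIGTERM", handler);
  proc.once("SIGINT", handler);
  return handler; // returned so the hook can also be invoked directly
}
```

Usage in an application would be `installShutdownHook(raindrop, process)` once at startup.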