## Documentation Index

Fetch the complete documentation index at: https://raindrop.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
## Installation

```bash
npm install @raindrop-ai/langchain @langchain/core @langchain/openai
```
## Quick Start

```typescript
import { createRaindropLangChain } from "@raindrop-ai/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const raindrop = createRaindropLangChain({
  writeKey: "your-write-key",
  userId: "user-123",
});

const model = new ChatOpenAI({ model: "gpt-4o" });

const result = await model.invoke(
  [new HumanMessage("What is the capital of France?")],
  { callbacks: [raindrop.handler] },
);

await raindrop.flush();
```
## What Gets Traced

The LangChain integration automatically captures:

- LLM calls — model name, input messages, output text, token usage (prompt/completion/total), finish reason
- Tool calls — tool name, input arguments, output, duration (via `interaction.track_tool()` spans)
- Chains — chain execution with nested spans
- Retrievers — query text and document count
- Agent actions — tool selection and execution
- Errors — captured with error status on the span
- Tags and metadata — LangChain tags and metadata are forwarded to Raindrop event properties
- Extended token categories — cached tokens (`ai.usage.cached_tokens`) and reasoning tokens (`ai.usage.thoughts_tokens`) when available from the provider (e.g. OpenAI)
- Finish reason — captured as `ai.finish_reason` in event properties (e.g. `"stop"`, `"length"`)
All operations are linked with parent-child relationships, so you can see the full execution tree in the Raindrop dashboard.
## Configuration

```typescript
const raindrop = createRaindropLangChain({
  writeKey: "your-write-key",        // Optional: your Raindrop write key (omit to disable telemetry)
  endpoint: "...",                   // Optional: custom API endpoint
  userId: "user-123",                // Optional: associate events with a user
  convoId: "convo-456",              // Optional: conversation/thread ID
  debug: false,                      // Optional: enable verbose logging
  traceChains: true,                 // Optional: create spans for chain execution (default: true)
  traceRetrievers: true,             // Optional: create spans for retriever calls (default: true)
  filterLangGraphInternals: true,    // Optional: filter LangGraph noise + dedup (default: true)
});
```
## Using with LangGraph

The Raindrop handler works with LangGraph out of the box. It automatically filters
LangGraph-internal chain events (graph executor, `__start__`, `__end__`, channel nodes)
and deduplicates LLM callbacks that LangGraph may fire multiple times.

Pass the handler inside your LLM node only, not at graph invocation time:
```typescript
import { createRaindropLangChain } from "@raindrop-ai/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { StateGraph, MessagesAnnotation, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const raindrop = createRaindropLangChain({
  writeKey: "your-write-key",
  userId: "user-123",
  convoId: "convo-456",
});

const getWeather = tool(
  async ({ city }) => `The weather in ${city} is 22°C.`,
  { name: "get_weather", description: "Get weather", schema: z.object({ city: z.string() }) },
);

const model = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools([getWeather]);

const callModel = async (state: typeof MessagesAnnotation.State) => {
  const response = await model.invoke(state.messages, {
    callbacks: [raindrop.handler],
  });
  return { messages: [response] };
};

const graph = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", new ToolNode([getWeather]))
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", (state) => {
    const last = state.messages[state.messages.length - 1];
    if ("tool_calls" in last && Array.isArray(last.tool_calls) && last.tool_calls.length > 0) return "tools";
    return END;
  })
  .addEdge("tools", "agent")
  .compile();

// Pass callbacks only to the model inside the node — NOT to graph.invoke().
// This ensures clean, deduplicated events without LangGraph internal noise.
const result = await graph.invoke(
  { messages: [new HumanMessage("What's the weather in Paris?")] },
);

await raindrop.shutdown();
```
## LangGraph Best Practices

- Pass callbacks to the model, not the graph — use `callbacks: [raindrop.handler]` inside your LLM node function only. Do NOT pass callbacks to `graph.invoke()` — LangGraph's internal callback propagation causes duplicate events and noisy chain spans when callbacks are set at the graph level.
- Create a new handler per request in server environments to avoid state collisions between concurrent graph executions.
- LangGraph-internal filtering is on by default — the `filterLangGraphInternals` option (default: `true`) skips noisy internal chain events from the graph executor and node wrappers. Set it to `false` if you want full visibility into LangGraph internals.
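The per-request pattern above can be sketched as follows. This is a runnable illustration, not the SDK's API: `createRaindropForRequest` and the `RequestTracer` stub are hypothetical stand-ins for `createRaindropLangChain`, recording only whether `flush()` ran, so the lifecycle (create a fresh handler per request, use it, flush in `finally`) is the focus.

```typescript
// Hypothetical stand-in for the Raindrop client; real code would use
// createRaindropLangChain({ writeKey, userId }) instead.
type RequestTracer = {
  handler: { userId: string }; // stands in for raindrop.handler
  flushed: boolean;
  flush(): Promise<void>;
};

function createRaindropForRequest(userId: string): RequestTracer {
  const tracer: RequestTracer = {
    handler: { userId },
    flushed: false,
    async flush() {
      tracer.flushed = true; // the real flush() ships buffered telemetry
    },
  };
  return tracer;
}

async function handleChatRequest(userId: string, prompt: string): Promise<string> {
  // Fresh handler per request: no state shared between concurrent graph executions.
  const raindrop = createRaindropForRequest(userId);
  try {
    // In a real app: model.invoke(..., { callbacks: [raindrop.handler] })
    // inside the LLM node, as shown in the LangGraph example above.
    return `echo:${prompt}`;
  } finally {
    await raindrop.flush(); // always flush before the response completes
  }
}
```

Because each request gets its own tracer object, concurrent executions cannot interleave their span state.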
## Using with LangSmith

Raindrop and LangSmith can coexist — both use LangChain's callback system and receive
the same events independently. There is no conflict, but you should be aware of the
following:

- Both tracers are active simultaneously when `LANGSMITH_TRACING=true` is set. Each builds its own trace tree from the same callback events. This is safe but means LLM calls are traced twice (once by each system).
- To use Raindrop only, disable LangSmith tracing by unsetting `LANGSMITH_TRACING` (or setting it to `false`).
- To use both, no changes are needed. Both handlers receive callbacks and ship data independently. Raindrop traces appear in the Raindrop dashboard; LangSmith traces appear in LangSmith.
- Performance: having two tracers adds minimal overhead since both operate asynchronously and don't block the LLM pipeline.
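Assuming a POSIX shell, the choice between the two modes comes down to the `LANGSMITH_TRACING` environment variable mentioned above:

```shell
# Raindrop only: turn off LangSmith's tracer
export LANGSMITH_TRACING=false

# Both tracers active (each builds its own trace tree from the same callbacks)
export LANGSMITH_TRACING=true
```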
## Usage with Chains

The same handler works with plain LangChain chains:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromTemplate("Tell me about {topic}");
const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke(
  { topic: "quantum computing" },
  { callbacks: [raindrop.handler] },
);

await raindrop.flush();
```
LangChain tags and metadata are forwarded to Raindrop event properties:

```typescript
const result = await model.invoke(
  [new HumanMessage("Hello")],
  {
    callbacks: [raindrop.handler],
    tags: ["production", "v2"],
    metadata: { experimentId: "exp-123" },
  },
);
```
## Identify Users

Associate events with a user identity after initialization:

```typescript
raindrop.identify("user-123", { name: "Alice", plan: "pro" });
```
## Track Signals

Send feedback, edits, or custom signals tied to a specific event:

```typescript
raindrop.trackSignal({
  eventId: "evt-abc",
  name: "thumbs_up",
  signalType: "feedback",
  sentiment: "POSITIVE",
});
```
## Flushing and Shutdown

Always call `flush()` before your process exits to ensure all telemetry is shipped:

```typescript
await raindrop.flush();    // flush pending data
await raindrop.shutdown(); // flush + release resources
```
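For a long-running service, one common pattern is to wire `shutdown()` into the process signal handlers so telemetry is flushed when the service is stopped. The sketch below uses a stubbed client (the signal wiring is an illustration, not part of the SDK):

```typescript
// Stubbed client: the real one comes from createRaindropLangChain(...).
const raindrop = {
  flushed: false,
  async flush() {
    this.flushed = true; // the real flush() ships pending data
  },
  async shutdown() {
    await this.flush(); // the real shutdown() also releases resources
  },
};

async function onExit(): Promise<void> {
  await raindrop.shutdown();
  // In a real service: process.exit(0) after shutdown completes.
}

// `once` so a repeated signal doesn't trigger a second shutdown.
process.once("SIGTERM", () => { void onExit(); });
process.once("SIGINT", () => { void onExit(); });
```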