MyNitor SDK Docs
MyNitor provides fire-and-forget telemetry for LLM applications (latency, token usage, errors, and workflow-level visibility) with no impact on your app's response path.
Quickstart (Python)
1) Install
```
pip install mynitor
```

2) Save your API key safely
- Production: store the key in your cloud secrets manager / platform environment variables.
- Local dev: store it in an environment variable or a `.env` file.
- Never commit API keys to git or ship them to the browser.
Option A: Environment variable
Option A: Environment variable

```
# macOS / Linux
export MYNITOR_API_KEY="pk_dev_..."

# Windows
setx MYNITOR_API_KEY "pk_dev_..."
```

Option B: .env file (local dev)

```
MYNITOR_API_KEY=pk_dev_...
```

(Load it with your preferred .env loader.)
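If you'd rather not pull in a dependency for local dev, a `.env` loader can be hand-rolled in a few lines. This is a sketch only — `load_env_file` is a hypothetical helper, not part of the SDK; for real projects prefer a maintained loader such as python-dotenv:

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments skipped.

    Existing environment variables are not overwritten, following the common
    convention that the real environment wins over the .env file.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```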
3) Universal Instrumentation (v0.2.1)
Put this at the top of your application entry point (example: main.py, app.py). One line covers OpenAI, Anthropic, and Google Gemini (Sync & Async).
```python
# main.py
import mynitor
from openai import OpenAI

# Reads MYNITOR_API_KEY from the environment and starts "Zero-Code" tracking
mynitor.init().instrument(agent="support-bot-v2")

client = OpenAI()

# Use OpenAI normally - everything is auto-captured!
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Help me."}]
)
print(response.choices[0].message.content)
```

4) Verify
- Run your app.
- Open /events in the MyNitor dashboard to see events live (best for debugging).
- Use /dashboard for aggregated cost/token/latency metrics.
5) Serverless note (IMPORTANT)
In AWS Lambda / Vercel / Netlify, call `mn.flush()` before returning; otherwise the runtime may freeze before background telemetry is delivered.
```python
import mynitor
from openai import OpenAI

mn = mynitor.init()
client = OpenAI()
mn.instrument_openai(client, agent="support-bot-v2")

def handler(event, context):
    # ... call OpenAI ...
    mn.flush()  # ✅ important in serverless
    return {"ok": True}
```

Quickstart (TypeScript / Node.js)
1) Install
```
npm install @mynitorai/sdk openai
```

2) Save your API key safely
- Production: use your platform's secret manager / environment variables.
- Local dev: a `.env` file is fine.

```
MYNITOR_API_KEY=pk_dev_...
OPENAI_API_KEY=sk-...
```

3) Initialize at the very top of your entry point (CRITICAL)
Because MyNitor uses global patching in Node, initialize it at the top of your entry file (before other imports execute code that calls OpenAI).
```typescript
// src/index.ts
import { MyNitor } from "@mynitorai/sdk";

const mn = MyNitor.init({ apiKey: process.env.MYNITOR_API_KEY! });
mn.instrument();
```

4) Use OpenAI normally
```typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const result = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Help me." }],
});
console.log(result.choices[0].message?.content);
```

5) Serverless note (IMPORTANT)
In AWS Lambda / Vercel / Netlify, call `await mn.flush()` before returning.
```typescript
import { MyNitor } from "@mynitorai/sdk";
import OpenAI from "openai";

const mn = MyNitor.init({ apiKey: process.env.MYNITOR_API_KEY! });
mn.instrument();

export const handler = async () => {
  const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Help me." }],
  });
  await mn.flush(); // ✅ important in serverless
  return { statusCode: 200, body: "ok" };
};
```

Concepts
Agent vs Workflow
Agent = who is running this?
A stable identifier for the service or system producing calls.
Workflow = what task is being performed?
A logical task name used to group calls.
Automatic workflow naming (the default)
If you don't provide a workflow, MyNitor auto-detects it from the callsite as:
`<filename>` (the basename of the calling file). Example: calls made from `chat_logic.py` are grouped under the workflow `chat_logic`.
Python and TypeScript both roll up multiple functions into a single file-based workflow for a cleaner dashboard view. You can still see function-level details in the "Codebase Attribution" tab.
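To make the file-based grouping concrete, here is an illustrative sketch of deriving a workflow name from the caller's filename. This mirrors the documented behavior; `default_workflow_name` is a hypothetical helper, not part of the SDK:

```python
import inspect
import os

def default_workflow_name():
    # Illustrative only: derive a workflow name from the caller's file,
    # the way MyNitor's file-based auto-detection groups calls.
    caller = inspect.stack()[1]
    return os.path.splitext(os.path.basename(caller.filename))[0]

# Calling this from chat_logic.py would yield "chat_logic".
```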
Environments (key prefixes)
MyNitor routes data automatically based on the key prefix:
- `pk_live_...` → Production (persisted indefinitely)
- `pk_stage_...` → Staging / QA
- `pk_dev_...` → Development (ephemeral)
Swapping the key routes data to the corresponding environment automatically.
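The routing rule can be expressed in a few lines. This is an illustrative helper (not part of the SDK) that mirrors the documented prefix mapping:

```python
def environment_for_key(api_key):
    # The environment is inferred from the key itself, so swapping keys
    # is the only change needed to redirect data.
    prefixes = {
        "pk_live_": "production",
        "pk_stage_": "staging",
        "pk_dev_": "development",
    }
    for prefix, env in prefixes.items():
        if api_key.startswith(prefix):
            return env
    return "unknown"
```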
Python SDK
Initialization
```python
import mynitor

mn = mynitor.init()  # reads MYNITOR_API_KEY
# or:
mn = mynitor.init(api_key="pk_...")  # explicit
```

OpenAI instrumentation
```python
from openai import OpenAI
import mynitor

mn = mynitor.init()
client = OpenAI()
mn.instrument_openai(client, agent="support-bot-v2")
```

Override workflow name (optional)

```python
mn.instrument_openai(client, agent="support-bot-v2", workflow="customer_chat")
```

Anthropic & Gemini (Sync & Async)
The Python SDK provides full parity for Anthropic and Google Gemini. For the best experience, use the universal instrument() method.
```python
import mynitor
from anthropic import AsyncAnthropic  # or Anthropic (sync)
import google.generativeai as genai

mynitor.init().instrument()

# Both are now automatically tracked!
async_client = AsyncAnthropic()
model = genai.GenerativeModel("gemini-1.5-pro")
```

Universal Auto-Instrumentation
The recommended way to enable tracking for all supported providers at once.
```python
import mynitor

mynitor.init(api_key="pk_...")
mynitor.instrument(agent="support-bot-v2", workflow="my-app")
```

Manual tracking
Use monitor() when you want to track something MyNitor doesn't auto-instrument yet (or when you want explicit control).
```python
import mynitor

mn = mynitor.init()
with mn.monitor(agent="support-bot-v2", model="llama-3.1", provider="other") as t:
    result = call_custom_model(...)
    t.set_usage(input_tokens=120, output_tokens=40)
    t.set_retry(2)
    t.set_metadata("customer_id", "cus_123")
```

Flush
- Long-running apps: flushing is handled automatically on process exit.
- Serverless: call `mn.flush()` before returning.

```python
mn.flush()
```

Instance-level Workflow Overrides
You can force a specific workflow name for all events from a given SDK instance.
```python
from mynitor import Mynitor

mn = Mynitor(workflow_id="data-pipeline")
mn.instrument()
```

TypeScript SDK
Initialization
```typescript
import { MyNitor } from "@mynitorai/sdk";

const mn = MyNitor.init({
  apiKey: process.env.MYNITOR_API_KEY!,
});
```

Auto-instrumentation

```typescript
mn.instrument();
```

Flush

```typescript
await mn.flush();
```

Custom endpoint (optional)
```typescript
const mn = MyNitor.init({
  apiKey: process.env.MYNITOR_API_KEY!,
  endpoint: "https://app.mynitor.ai/api/v1/events",
});
mn.instrument();
```

API Reference (HTTP)
If you aren't using an official SDK, you can send events directly via HTTP.
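Using the endpoint and payload schema documented in the sections that follow, a direct submission can be sketched with only the Python standard library. This is a sketch, not a client library — field values are placeholders, and `build_event` / `send_event` are hypothetical helper names:

```python
import json
import os
import urllib.request
from datetime import datetime, timezone

def build_event(agent, workflow, model, latency_ms, input_tokens, output_tokens):
    # Assembles a minimal payload matching the documented schema.
    return {
        "event_version": "1.0",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "workflow": workflow,
        "provider": "openai",
        "model": model,
        "latency_ms": latency_ms,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "status": "success",
    }

def send_event(event):
    # POSTs one event to the documented endpoint with Bearer auth.
    req = urllib.request.Request(
        "https://app.mynitor.ai/api/v1/events",
        data=json.dumps(event).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MYNITOR_API_KEY']}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (requires MYNITOR_API_KEY and network access):
# send_event(build_event("support-bot-v2", "customer_chat", "gpt-4o", 450, 120, 40))
```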
Event endpoint
```http
POST https://app.mynitor.ai/api/v1/events
Content-Type: application/json
Authorization: Bearer <API_KEY>
```

Payload schema
```json
{
  "event_version": "1.0",
  "timestamp": "ISO8601 String",
  "agent": "agent-identifier",
  "workflow": "workflow-name",
  "provider": "openai",
  "model": "gpt-4o",
  "latency_ms": 450,
  "input_tokens": 120,
  "output_tokens": 40,
  "status": "success",
  "error_type": "Error",
  "file": "main.py",
  "function_name": "run",
  "line_number": 42,
  "metadata": { "any_key": "any_value" },
  "retry_count": 0,
  "request_id": "optional-request-id"
}
```

Troubleshooting
Connectivity Diagnostics (CLI)
Use our built-in CLI tools to verify your environment and connection independently of your application code.
1. Ping (Network Check)
Send a lightweight verification signal to ensure your firewall allows traffic to MyNitor Cloud.
```
# Node
export MYNITOR_API_KEY="your_key_here" && npx @mynitorai/sdk@latest ping

# Python
export MYNITOR_API_KEY="your_key_here" && python3 -m mynitor ping
```

2. Doctor (Environment Check)
Automatically verify your credential status and cloud reachability.
```
# Node
export MYNITOR_API_KEY="your_key_here" && npx @mynitorai/sdk@latest doctor

# Python
export MYNITOR_API_KEY="your_key_here" && python3 -m mynitor doctor
```

3. Mock (Dashboard Test)
Simulate a realistic AI event to test your dashboard's breakdown and pricing logic.
```
# Node
export MYNITOR_API_KEY="your_key_here" && npx @mynitorai/sdk@latest mock

# Python
export MYNITOR_API_KEY="your_key_here" && python3 -m mynitor mock
```

No events showing up
- Confirm `MYNITOR_API_KEY` is set (and you're using the correct key prefix for the environment).
- Confirm your instrumentation runs before the first LLM call.
- If you're serverless, ensure you call:
  - Python: `mn.flush()`
  - TypeScript: `await mn.flush()`
- Check /events first (live debugging), then /dashboard for aggregated metrics.