MyNitor SDK Docs

MyNitor provides fire-and-forget telemetry for LLM applications (latency, token usage, errors, and workflow-level visibility) with no impact on your app's response path.


Quickstart (Python)

1) Install

bash
pip install mynitor

2) Save your API key safely

  • Production: store the key in your cloud secrets manager / platform environment variables.
  • Local dev: store it in an environment variable or a .env file.
  • Never commit API keys to git or ship them to the browser.

Option A: Environment variable

macOS / Linux
export MYNITOR_API_KEY="pk_dev_..."
Windows PowerShell
setx MYNITOR_API_KEY "pk_dev_..."
# Note: setx only affects new shells. For the current session, also run:
# $env:MYNITOR_API_KEY = "pk_dev_..."

Option B: .env file (local dev)

.env
MYNITOR_API_KEY=pk_dev_...

(Load it with your preferred .env loader, such as python-dotenv, before calling mynitor.init().)
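
If you'd rather not add a dependency for local development, a minimal stdlib-only loader is enough. This is a sketch, not a full .env parser: it skips quoting, escaping, and multi-line values.

```python
import os

def load_dotenv_minimal(path: str = ".env") -> None:
    """Load simple KEY=VALUE lines into os.environ (skips comments and blanks)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite variables already set in the real environment.
            os.environ.setdefault(key.strip(), value.strip())

# At startup, before mynitor.init():
if os.path.exists(".env"):
    load_dotenv_minimal()
```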

3) Universal Instrumentation (v0.2.1)

Put this at the top of your application entry point (e.g., main.py or app.py). One line covers OpenAI, Anthropic, and Google Gemini, for both sync and async clients.

main.py
# main.py
import mynitor
from openai import OpenAI

# Reads MYNITOR_API_KEY from the environment and starts "Zero-Code" tracking
mynitor.init().instrument(agent="support-bot-v2")

client = OpenAI()

# Use OpenAI normally - everything is auto-captured!
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Help me."}]
)

print(response.choices[0].message.content)

4) Verify

  1. Run your app.
  2. Open /events in the MyNitor dashboard to see events live (best for debugging).
  3. Use /dashboard for aggregated cost/token/latency metrics.

5) Serverless note (IMPORTANT)

In AWS Lambda / Vercel / Netlify, call mn.flush() before returning, or the runtime may freeze before background telemetry is delivered.

python
import mynitor
from openai import OpenAI

mn = mynitor.init()
client = OpenAI()
mn.instrument_openai(client, agent="support-bot-v2")

def handler(event, context):
    # ... call OpenAI ...
    mn.flush()  # ✅ important in serverless
    return {"ok": True}

Quickstart (TypeScript / Node.js)

1) Install

bash
npm install @mynitorai/sdk openai

The TypeScript SDK auto-instruments OpenAI, Anthropic, and Gemini.

2) Save your API key safely

  • Production: use your platform's secret manager / environment variables.
  • Local dev: .env is fine.

.env
MYNITOR_API_KEY=pk_dev_...
OPENAI_API_KEY=sk-...

3) Initialize at the very top of your entry point (CRITICAL)

Because MyNitor uses global patching in Node, initialize it at the top of your entry file, before any other import executes code that calls OpenAI. Note that ES module imports are hoisted, so if you use ESM, put the init call in its own module and import that module first.

src/index.ts
// src/index.ts
import { MyNitor } from "@mynitorai/sdk";

const mn = MyNitor.init({ apiKey: process.env.MYNITOR_API_KEY! });
mn.instrument();

4) Use OpenAI normally

typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const result = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Help me." }],
});

console.log(result.choices[0].message?.content);

5) Serverless note (IMPORTANT)

In AWS Lambda / Vercel / Netlify, call await mn.flush() before returning.

typescript
import { MyNitor } from "@mynitorai/sdk";
import OpenAI from "openai";

const mn = MyNitor.init({ apiKey: process.env.MYNITOR_API_KEY! });
mn.instrument();

export const handler = async () => {
  const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Help me." }],
  });

  await mn.flush(); // ✅ important in serverless
  return { statusCode: 200, body: "ok" };
};

Concepts

Agent vs Workflow

Agent = who is running this?
A stable identifier for the service or system producing calls.

support-bot-v2
triage-agent
invoice-extractor-prod

Workflow = what task is being performed?
A logical task name used to group calls.

Automatic workflow naming (the default)

If you don't provide a workflow, MyNitor auto-detects it from the callsite, using the calling file's name without its extension:

text
<filename>

Example: calls made from chat_logic.py are grouped under the workflow chat_logic.

Python and TypeScript both roll up multiple functions into a single file-based workflow for a cleaner dashboard view. You can still see function-level details in the "Codebase Attribution" tab.
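
The file-based default is easy to reason about: the workflow name is just the calling file's stem. A quick illustration of the naming rule (pure Python, independent of the SDK):

```python
from pathlib import Path

def default_workflow(callsite_file: str) -> str:
    """Illustrates the documented default: filename without its extension."""
    return Path(callsite_file).stem

print(default_workflow("src/chat_logic.py"))  # chat_logic
```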

Environments (key prefixes)

MyNitor routes data automatically based on the key prefix:

  • pk_live_... → Production (persisted indefinitely)
  • pk_stage_... → Staging / QA
  • pk_dev_... → Development (ephemeral)

Swapping the key routes data to the corresponding environment automatically.
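
For scripts that need to know where their data will land, the routing rule can be expressed directly. This is an illustrative helper, not part of the SDK:

```python
def environment_for_key(api_key: str) -> str:
    """Map a MyNitor key prefix to its environment, per the table above."""
    prefixes = {
        "pk_live_": "production",
        "pk_stage_": "staging",
        "pk_dev_": "development",
    }
    for prefix, env in prefixes.items():
        if api_key.startswith(prefix):
            return env
    raise ValueError("Unrecognized MyNitor key prefix")

print(environment_for_key("pk_dev_abc123"))  # development
```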


Python SDK

Initialization

python
import mynitor
mn = mynitor.init()                 # reads MYNITOR_API_KEY
# or:
mn = mynitor.init(api_key="pk_...") # explicit

OpenAI instrumentation

python
from openai import OpenAI
import mynitor

mn = mynitor.init()
client = OpenAI()

mn.instrument_openai(client, agent="support-bot-v2")

Override workflow name (optional)

python
mn.instrument_openai(client, agent="support-bot-v2", workflow="customer_chat")

Anthropic & Gemini (Sync & Async)

The Python SDK provides full parity for Anthropic and Google Gemini. For the best experience, use the universal instrument() method.

python
import mynitor
from anthropic import AsyncAnthropic  # or Anthropic for the sync client
import google.generativeai as genai

mynitor.init().instrument()

# Both are now automatically tracked!
async_client = AsyncAnthropic()
model = genai.GenerativeModel("gemini-1.5-pro")

Universal Auto-Instrumentation

The recommended way to enable tracking for all supported providers at once.

python
import mynitor

mynitor.init(api_key="pk_...")
mynitor.instrument(agent="support-bot-v2", workflow="my-app")

Manual tracking

Use monitor() when you want to track something MyNitor doesn't auto-instrument yet (or when you want explicit control).

python
import mynitor

mn = mynitor.init()

with mn.monitor(agent="support-bot-v2", model="llama-3.1", provider="other") as t:
    result = call_custom_model(...)
    t.set_usage(input_tokens=120, output_tokens=40)
    t.set_retry(2)
    t.set_metadata("customer_id", "cus_123")

Flush

  • Long-running apps: flushing is handled automatically on process exit.
  • Serverless: call mn.flush() before returning.

python
mn.flush()

Instance-level Workflow Overrides

You can force a specific workflow name for all events from a given SDK instance.

python
from mynitor import Mynitor
mn = Mynitor(workflow_id="data-pipeline")
mn.instrument()

TypeScript SDK

Initialization

typescript
import { MyNitor } from "@mynitorai/sdk";

const mn = MyNitor.init({
  apiKey: process.env.MYNITOR_API_KEY!,
});

Auto-instrumentation

typescript
mn.instrument();

Flush

typescript
await mn.flush();

Custom endpoint (optional)

typescript
const mn = MyNitor.init({
  apiKey: process.env.MYNITOR_API_KEY!,
  endpoint: "https://app.mynitor.ai/api/v1/events",
});
mn.instrument();

API Reference (HTTP)

If you aren't using an official SDK, you can send events directly via HTTP.

Event endpoint

http
POST https://app.mynitor.ai/api/v1/events
Content-Type: application/json
Authorization: Bearer <API_KEY>

Payload schema

json
{
  "event_version": "1.0",
  "timestamp": "ISO8601 String",

  "agent": "agent-identifier",
  "workflow": "workflow-name",

  "provider": "openai",
  "model": "gpt-4o",

  "latency_ms": 450,
  "input_tokens": 120,
  "output_tokens": 40,

  "status": "success",
  "error_type": "Error",

  "file": "main.py",
  "function_name": "run",
  "line_number": 42,

  "metadata": { "any_key": "any_value" },
  "retry_count": 0,
  "request_id": "optional-request-id"
}
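
As a sketch of posting an event with only the Python standard library (the field values are illustrative, and the key is a placeholder; swap in your real key before sending):

```python
import json
import time
import urllib.request

event = {
    "event_version": "1.0",
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "agent": "support-bot-v2",
    "workflow": "customer_chat",
    "provider": "openai",
    "model": "gpt-4o",
    "latency_ms": 450,
    "input_tokens": 120,
    "output_tokens": 40,
    "status": "success",
    "retry_count": 0,
}

req = urllib.request.Request(
    "https://app.mynitor.ai/api/v1/events",
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer pk_dev_...",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send
```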

Troubleshooting

Connectivity Diagnostics (CLI)

Use our built-in CLI tools to verify your environment and connection independently of your application code.

1. Ping (Network Check)

Send a lightweight verification signal to ensure your firewall allows traffic to MyNitor Cloud.

TypeScript
export MYNITOR_API_KEY="your_key_here" && npx @mynitorai/sdk@latest ping
Python
export MYNITOR_API_KEY="your_key_here" && python3 -m mynitor ping

2. Doctor (Environment Check)

Automatically verify your credential status and cloud reachability.

TypeScript
export MYNITOR_API_KEY="your_key_here" && npx @mynitorai/sdk@latest doctor
Python
export MYNITOR_API_KEY="your_key_here" && python3 -m mynitor doctor

3. Mock (Dashboard Test)

Simulate a realistic AI event to test your dashboard's breakdown and pricing logic.

TypeScript
export MYNITOR_API_KEY="your_key_here" && npx @mynitorai/sdk@latest mock
Python
export MYNITOR_API_KEY="your_key_here" && python3 -m mynitor mock

No events showing up

  • Confirm MYNITOR_API_KEY is set (and you're using the correct key prefix for the environment).
  • Confirm your instrumentation runs before the first LLM call.
  • If you're serverless, ensure you call:
    • Python: mn.flush()
    • TypeScript: await mn.flush()
  • Check /events first (live debugging), then /dashboard for aggregated metrics.