SDK Reference

Complete TypeScript SDK documentation for @promptops/sdk.

Installation

npm install @promptops/sdk
# or
yarn add @promptops/sdk
# or
pnpm add @promptops/sdk

Requirements: Node.js 18+ (native fetch), TypeScript 5+. Zero production dependencies.

Quick Start

import { PromptOps } from '@promptops/sdk'

const client = new PromptOps({
  apiKey: process.env.PROMPTOPS_API_KEY!,
  baseUrl: 'https://your-api-url',
})

const prompt = await client.getPrompt('welcome-email')
const rendered = client.render(prompt, { userName: 'Sarah' })

Constructor: new PromptOps(config)

Creates a new PromptOps client instance.

Config Options

interface PromptOpsConfig {
  // Required: Your PromptOps API key
  apiKey: string

  // Required: Base URL of your PromptOps API
  baseUrl: string

  // Optional: Default environment to fetch prompts for
  // Default: 'production'
  defaultEnvironment?: string

  // Optional: Cache TTL in milliseconds
  // Default: 60000 (60 seconds)
  cacheTtlMs?: number

  // Optional: Request timeout in milliseconds
  // Default: 5000 (5 seconds)
  timeoutMs?: number
}

Example

const client = new PromptOps({
  apiKey: 'po_live_abc1234567890...',
  baseUrl: 'https://api.your-domain.com',
  defaultEnvironment: 'staging',
  cacheTtlMs: 30000,  // 30 seconds
  timeoutMs: 3000,    // 3 seconds
})

Method: getPrompt(slug, options?)

Fetches the currently active version of a prompt for the specified environment.

Parameters

  • slug (string) — The prompt's unique identifier (e.g., "welcome-email")
  • options.environment (string, optional) — Override the default environment

Returns

interface ResolvedPrompt {
  slug: string
  versionNumber: number
  systemPrompt: string | null
  userTemplate: string | null
  model: string
  temperature: number
  maxTokens: number | null
  metadata: Record<string, unknown>
}

Behavior

  1. Cache hit (fresh) — Returns immediately from memory, no API call.
  2. Cache miss / expired — Fetches from the API, updates cache, returns.
  3. API error + stale cache — Returns the stale cached value and logs a warning.
  4. API error + no cache — Throws an error.

Example

// Default environment (production)
const prompt = await client.getPrompt('welcome-email')

// Specific environment
const devPrompt = await client.getPrompt('welcome-email', {
  environment: 'dev',
})
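
The caching behavior above (items 1 and 2) is directly observable: within the TTL window, only the first call touches the network.

// First call fetches from the API and fills the cache.
const first = await client.getPrompt('welcome-email')

// Second call within the TTL is served from the in-memory cache — no API call.
const second = await client.getPrompt('welcome-email')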

Method: render(prompt, variables)

Interpolates {{variable}} placeholders in the prompt's userTemplate with the provided values.

Parameters

  • prompt (ResolvedPrompt) — The prompt object from getPrompt()
  • variables (Record<string, string>) — Key-value pairs to substitute

Returns

string — The rendered template with all variables replaced.

Example

const prompt = await client.getPrompt('welcome-email')

// If userTemplate is "Hello {{userName}}, welcome to {{plan}}!"
const message = client.render(prompt, {
  userName: 'Sarah',
  plan: 'Pro',
})
// → "Hello Sarah, welcome to Pro!"
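
Conceptually, the interpolation amounts to a simple placeholder substitution. Here is a rough sketch of the idea, not the SDK's actual implementation; how unmatched placeholders are handled below is an assumption, not documented behavior:

// Hypothetical stand-in for client.render(), shown for illustration only.
// Replaces each {{name}} with variables[name]; placeholders with no
// matching key are left untouched (an assumption).
function renderSketch(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) => variables[name] ?? match)
}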

Caching

The SDK maintains an in-memory cache keyed by slug + environment.

  • TTL — Configurable via cacheTtlMs (default: 60s). During this window, getPrompt() is served from memory with no network request.
  • Stale Fallback — If the API is unreachable and the cache has expired, the last-known-good value is served. This insulates your app from PromptOps outages.
  • Per-instance — Each PromptOps instance has its own cache. If you need shared caching, reuse a single instance across your app, e.g. via the singleton sketch below.
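
A minimal module-level singleton looks like this (the module name, accessor, and PROMPTOPS_BASE_URL environment variable are illustrative, not part of the SDK):

// promptops-client.ts — hypothetical module exporting one shared client.
import { PromptOps } from '@promptops/sdk'

let client: PromptOps | undefined

export function getPromptOpsClient(): PromptOps {
  // Reuse the same instance everywhere so all callers share one cache.
  client ??= new PromptOps({
    apiKey: process.env.PROMPTOPS_API_KEY!,
    baseUrl: process.env.PROMPTOPS_BASE_URL!, // illustrative env var name
  })
  return client
}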

Error Handling

try {
  const prompt = await client.getPrompt('my-prompt')
} catch (error) {
  // Error is thrown only if:
  // 1. The API returns an error, AND
  // 2. There is no cached (even stale) value
  console.error('Failed to fetch prompt:', error instanceof Error ? error.message : error)
}
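
If your app must keep working even when both the API and the cache are unavailable, one option is a hard-coded fallback prompt. A sketch of that pattern follows; the fallback values are illustrative, and it assumes the SDK exports the ResolvedPrompt type:

import type { ResolvedPrompt } from '@promptops/sdk' // assumed export

// Hypothetical last-resort prompt baked into the application.
const FALLBACK: ResolvedPrompt = {
  slug: 'my-prompt',
  versionNumber: 0,
  systemPrompt: null,
  userTemplate: 'Hello {{userName}}!', // illustrative
  model: 'gpt-4o-mini',                // illustrative
  temperature: 0.7,
  maxTokens: null,
  metadata: {},
}

// Fall back only when getPrompt() throws (API error and no cached value).
const prompt = await client.getPrompt('my-prompt').catch(() => FALLBACK)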

Usage with LLMs

PromptOps is LLM-agnostic. Here's how to use it with popular providers:

OpenAI

import OpenAI from 'openai'

const openai = new OpenAI() // reads OPENAI_API_KEY from the environment

const prompt = await client.getPrompt('classifier')
const message = client.render(prompt, { content: userInput })

const response = await openai.chat.completions.create({
  model: prompt.model,
  temperature: prompt.temperature,
  max_tokens: prompt.maxTokens ?? undefined,
  messages: [
    { role: 'system', content: prompt.systemPrompt ?? '' },
    { role: 'user', content: message },
  ],
})

Anthropic (Claude)

import Anthropic from '@anthropic-ai/sdk'

const anthropic = new Anthropic() // reads ANTHROPIC_API_KEY from the environment

const prompt = await client.getPrompt('assistant')
const message = client.render(prompt, { question: userQuery })

const response = await anthropic.messages.create({
  model: prompt.model, // e.g. "claude-3-sonnet-20240229"
  max_tokens: prompt.maxTokens ?? 1024,
  system: prompt.systemPrompt ?? '',
  messages: [{ role: 'user', content: message }],
})