Layer AI provides an OpenAI-compatible API endpoint. You can use the official OpenAI SDK by pointing it to Layer’s base URL instead of OpenAI’s.

Quick Start

Prerequisites

  • Layer AI account (sign up)
  • Layer API key from your Dashboard
  • A configured gate (gate UUID)

Installation

npm install openai

Configuration

Update your OpenAI client initialization - just 2 lines:
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.uselayer.ai/v1',
  apiKey: process.env.LAYER_API_KEY,
});

Making Requests

Layer supports three ways to specify your gate; the examples in this guide use the simplest: the gateId request parameter.

Streaming Example

Streaming works exactly like the standard OpenAI SDK:
const stream = await openai.chat.completions.create({
  gateId: process.env.GATE_ID,
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Tell me a joke.' }
  ],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content);
}
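If you want the complete message as well as live output, you can accumulate the deltas while streaming. A minimal sketch; the chunk shape below types only the fields the helper reads, mirroring the OpenAI SDK's streamed chunks:

```typescript
// Minimal chunk shape; only the fields this helper reads are typed.
type DeltaChunk = { choices: Array<{ delta?: { content?: string } }> };

// Collects streamed deltas into the full assistant message,
// optionally echoing each piece as it arrives.
async function collectStream(
  stream: AsyncIterable<DeltaChunk>,
  onDelta?: (text: string) => void,
): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content ?? '';
    if (content) {
      full += content;
      onDelta?.(content);
    }
  }
  return full;
}
```

Pass the stream from the example above: `const full = await collectStream(stream, t => process.stdout.write(t));`.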

What’s Supported

  • Full support for stream: true with identical behavior to OpenAI
  • Works across all providers - Layer handles format conversion
  • Image inputs in messages fully supported
  • System, user, assistant, and tool messages all supported
  • temperature, max_tokens, top_p, and other OpenAI parameters
  • Standard response.usage with token counts, plus Layer’s cost field
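The usage block can be read straight off the response. A sketch of a small logging helper; the `cost` field name is taken from the description above (verify it against your actual responses):

```typescript
// Standard OpenAI usage fields plus Layer's cost field (assumed shape).
interface UsageWithCost {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  cost?: number; // Layer-specific; absent on plain OpenAI responses
}

// Renders a one-line usage summary, e.g. for request logging.
function summarizeUsage(usage: UsageWithCost): string {
  const base = `${usage.prompt_tokens} in / ${usage.completion_tokens} out (${usage.total_tokens} total)`;
  return usage.cost !== undefined ? `${base}, $${usage.cost.toFixed(6)}` : base;
}
```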

Language Examples

import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.uselayer.ai/v1',
  apiKey: process.env.LAYER_API_KEY,
});

const response = await openai.chat.completions.create({
  gateId: process.env.GATE_ID,
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
});

console.log(response.choices[0].message.content);

Migration Guide

Follow these steps to migrate an existing OpenAI application:

1. Update Client Initialization

Find where you initialize the OpenAI client and update it:
// Before
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// After
const openai = new OpenAI({
  baseURL: 'https://api.uselayer.ai/v1',
  apiKey: process.env.LAYER_API_KEY,
});

2. Add Gate ID

Add gateId to your completion calls:
// Before
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [...],
});

// After
const response = await openai.chat.completions.create({
  gateId: process.env.GATE_ID,  // ← Add this
  messages: [...],               // Everything else stays the same
});

3. Update Environment Variables

.env
LAYER_API_KEY=layer_your_key
GATE_ID=your-gate-uuid
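A fail-fast check at startup catches a missing variable before the first request returns a 401. A minimal sketch; the variable names match the .env file above:

```typescript
// Reads and validates the Layer configuration from the environment,
// failing fast with a clear message instead of an auth error at request time.
function loadLayerConfig(
  env: Record<string, string | undefined> = process.env,
): { apiKey: string; gateId: string } {
  const apiKey = env.LAYER_API_KEY;
  const gateId = env.GATE_ID;
  if (!apiKey) throw new Error('LAYER_API_KEY is not set');
  if (!gateId) throw new Error('GATE_ID is not set');
  return { apiKey, gateId };
}
```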

4. Test & Deploy

  1. Test in development
  2. Verify requests appear in Layer dashboard
  3. Check cost tracking is working
  4. Deploy to production
Easy Rollback: If anything breaks, revert the 2 changes (baseURL + apiKey) - takes 30 seconds.

Advanced Usage

Multiple Gates

Use different gates based on the task:
const COMPLEX_TASK_GATE = 'uuid-for-gpt-4o';
const SIMPLE_TASK_GATE = 'uuid-for-gpt-4o-mini';

async function complexTask(prompt: string) {
  return await openai.chat.completions.create({
    gateId: COMPLEX_TASK_GATE,
    messages: [{ role: 'user', content: prompt }],
  });
}

async function simpleTask(prompt: string) {
  return await openai.chat.completions.create({
    gateId: SIMPLE_TASK_GATE,
    messages: [{ role: 'user', content: prompt }],
  });
}
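The routing decision can be lifted into a single helper so call sites stay uniform. A sketch with an illustrative length-based heuristic; the gate UUIDs and the threshold are placeholders, and real routing criteria (task type, user tier, etc.) are up to you:

```typescript
// Placeholder gate UUIDs; substitute your own.
const COMPLEX_TASK_GATE = 'uuid-for-gpt-4o';
const SIMPLE_TASK_GATE = 'uuid-for-gpt-4o-mini';

// Illustrative heuristic: route longer prompts to the stronger model.
function chooseGate(prompt: string, threshold = 500): string {
  return prompt.length > threshold ? COMPLEX_TASK_GATE : SIMPLE_TASK_GATE;
}
```

Call sites then reduce to `openai.chat.completions.create({ gateId: chooseGate(prompt), ... })`.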

Environment-Specific Configuration

const baseURL = process.env.NODE_ENV === 'production'
  ? 'https://api.uselayer.ai/v1'
  : 'http://localhost:3001/v1';  // Local Layer API for dev

const openai = new OpenAI({ baseURL, apiKey: process.env.LAYER_API_KEY });

Error Handling

try {
  const response = await openai.chat.completions.create({
    gateId: process.env.GATE_ID,
    messages: [{ role: 'user', content: 'Hello!' }],
  });

  console.log(response.choices[0].message.content);
} catch (error: any) {
  if (error.status === 404) {
    console.error('Gate not found - check your GATE_ID');
  } else if (error.status === 429) {
    console.error('Rate limit exceeded');
  } else {
    console.error('Request failed:', error.message);
  }
}
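For the 429 case, a simple backoff policy is often enough. A sketch of the decision logic only (wiring it into a retry loop is left to your HTTP layer); the attempt limit and delays are illustrative:

```typescript
// Whether a failed request is worth retrying: rate limits and
// transient server errors, up to a maximum number of attempts.
function shouldRetry(status: number | undefined, attempt: number, maxAttempts = 3): boolean {
  if (attempt >= maxAttempts) return false;
  return status === 429 || (status !== undefined && status >= 500);
}

// Exponential backoff delay in ms: 500, 1000, 2000, ... capped at 8s.
function backoffMs(attempt: number): number {
  return Math.min(500 * 2 ** attempt, 8000);
}
```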

How It Works

  1. You send → an OpenAI SDK request to https://api.uselayer.ai/v1/chat/completions
  2. Layer receives → validates the gate, applies routing rules
  3. Layer routes → sends to the model configured in your gate (GPT, Claude, Gemini, etc.)
  4. Provider responds → returns the response in the provider’s format
  5. Layer normalizes → converts back to OpenAI format
  6. You receive → a standard OpenAI response with added cost/metadata
Your code doesn’t know the difference.
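The flow above is visible in the raw request itself. A sketch of the wire shape, assuming the body mirrors OpenAI's chat-completions format with gateId added and the standard Bearer auth header (both assumptions; no network call is made here):

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Builds the HTTP request Layer receives in step 1; handy for
// debugging with curl or a plain fetch.
function buildChatRequest(apiKey: string, gateId: string, messages: ChatMessage[]) {
  return {
    url: 'https://api.uselayer.ai/v1/chat/completions',
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ gateId, messages }),
    },
  };
}
```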

FAQ

Do I need to rewrite my application? No. Just change baseURL and apiKey in the OpenAI client initialization.
Can I roll back to OpenAI? Yes. Just revert baseURL and apiKey to OpenAI values.
Is streaming supported? Yes. stream: true works exactly like OpenAI.
Can I use non-OpenAI models? Yes. Configure your gate to use any model; your code stays the same.
Are image inputs supported? Fully supported. Layer handles the conversion across all providers.
Do I need the Layer SDK? No. This approach uses only the OpenAI SDK.
Does it work with the Vercel AI SDK? Yes. The Vercel AI SDK’s OpenAI adapter works with Layer’s OpenAI-compatible endpoint.

Comparison: OpenAI SDK vs Layer SDK

Feature          | OpenAI SDK + Layer  | Layer SDK
Migration Effort | 2 lines of code     | Full refactor
API Format       | OpenAI format       | Layer native format
Use Case         | Drop-in replacement | New projects
Multi-modal      | Chat only           | Chat, image, video, audio
Streaming        | ✓                   | ✓
Tool Calling     | ✓                   | ✓
Cost Tracking    | ✓                   | ✓
Admin Operations | ✗                   | ✓ (via Admin SDK)
Recommendation:
  • Existing OpenAI apps → Use OpenAI SDK + Layer (this guide)
  • New projects → Use Layer SDK for full feature set

Next Steps