The video() method generates videos from text descriptions using AI models like Runway, Kling, and Luma.

Basic Video Generation

const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'A serene ocean sunset with gentle waves rolling onto a sandy beach'
  }
});

console.log(response.videos[0].url);
console.log('Duration:', response.videos[0].duration, 'seconds');
console.log('Cost:', response.cost);

Video Sizes

Control video dimensions and aspect ratio:
const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'A bustling city street at night',
    size: '1280x720'  // Landscape HD
  }
});

Supported Sizes

| Size | Aspect Ratio | Use Case |
|------|--------------|----------|
| 1280x720 | 16:9 | Landscape/YouTube |
| 720x1280 | 9:16 | Portrait/TikTok/Instagram |
| 1024x1792 | 9:16 | Portrait/high-res |
| 1792x1024 | 16:9 | Landscape/high-res |
Use 720x1280 for social media vertical videos (TikTok, Instagram Reels, YouTube Shorts).
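
If you route requests by target platform, a small helper can map platform names to the sizes above. This is a sketch; sizeForPlatform is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper, not part of the SDK: map a target platform
// to one of the supported sizes listed above.
type Platform = 'youtube' | 'tiktok' | 'instagram-reels' | 'youtube-shorts';

function sizeForPlatform(platform: Platform): string {
  // Vertical platforms share the 9:16 format; everything else gets 16:9.
  const vertical: Platform[] = ['tiktok', 'instagram-reels', 'youtube-shorts'];
  return vertical.includes(platform) ? '720x1280' : '1280x720';
}
```

For example, sizeForPlatform('tiktok') returns '720x1280', which you can pass directly as the size parameter.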

Video Duration

Set how long the video should be:
const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'Time-lapse of clouds moving across a blue sky',
    duration: 10  // 10 seconds
  }
});
Supported durations vary by model. Most support 5-10 seconds. Check your gate’s model capabilities.
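
Because supported ranges differ per model, one defensive pattern is to clamp a requested duration to your gate's known range before sending the request. A sketch; clampDuration is a hypothetical helper, and the 5 and 10 second bounds below are illustrative rather than SDK defaults:

```typescript
// Hypothetical helper: clamp a requested duration to a known supported
// range before passing it to layer.video(). Bounds are illustrative.
function clampDuration(requested: number, min = 5, max = 10): number {
  return Math.min(max, Math.max(min, requested));
}
```

With the defaults above, clampDuration(30) yields 10 and clampDuration(2) yields 5.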

Image-to-Video

Generate video from a starting image:
const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'Camera slowly zooms into the building',
    image: {
      url: 'https://example.com/building.jpg'
    }
  }
});

Base64 Images

const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'The flower blooms and petals unfold',
    image: {
      base64: 'data:image/jpeg;base64,/9j/4AAQSkZJRg...'
    }
  }
});

Reference Images

Guide video generation with reference images:
const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'A person walking through a forest',
    referenceImages: [
      {
        url: 'https://example.com/person.jpg',
        referenceType: 'subject'  // Keep this person's appearance
      },
      {
        url: 'https://example.com/forest-style.jpg',
        referenceType: 'style'  // Match this artistic style
      }
    ]
  }
});

Reference Types

| Type | Purpose |
|------|---------|
| subject | Maintain subject appearance |
| style | Match artistic style |
| asset | Include specific objects |

Advanced Parameters

FPS (Frames Per Second)

const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'Smooth camera pan across a mountain range',
    fps: 30  // Smoother motion
  }
});

Negative Prompts

Specify what to avoid:
const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'A peaceful garden scene',
    negativePrompt: 'people, cars, buildings, modern objects'
  }
});

Multiple Videos

Generate variations:
const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'Aurora borealis dancing in the night sky',
    numberOfVideos: 3  // Generate 3 variations
  }
});

response.videos.forEach((video, i) => {
  console.log(`Video ${i + 1}:`, video.url);
});

Deterministic Generation

Use seeds for reproducible results:
const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'Abstract colorful patterns morphing',
    seed: 42  // Same seed + prompt = same video
  }
});

Parameters

Request Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | string | Required | Video description |
| duration | number \| string | Model default | Video length in seconds |
| size | string | Model default | Video dimensions |
| fps | number | Model default | Frames per second |
| seed | number | Random | Seed for reproducibility |
| negativePrompt | string | None | What to avoid |
| numberOfVideos | number | 1 | Number of videos to generate |
| image | object | None | Starting image |
| lastFrame | object | None | Ending image |
| referenceImages | array | None | Style/subject references |
| personGeneration | string | None | Person generation mode |
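
Since size is a plain 'WIDTHxHEIGHT' string, you may want to parse it when validating input or checking orientation client-side. A minimal sketch; parseSize is a hypothetical helper, not an SDK export:

```typescript
// Hypothetical helper: split a 'WIDTHxHEIGHT' size string into numbers,
// e.g. to check orientation before submitting a request.
function parseSize(size: string): { width: number; height: number } {
  const [width, height] = size.split('x').map(Number);
  if (!Number.isFinite(width) || !Number.isFinite(height)) {
    throw new Error(`Invalid size string: ${size}`);
  }
  return { width, height };
}
```

For example, parseSize('720x1280') yields { width: 720, height: 1280 }, so height > width flags a portrait request.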

Response

interface VideoResponse {
  id: string;                    // Request ID
  model: string;                 // Model used
  videos: VideoOutput[];         // Generated videos
  cost: number;                  // Request cost in USD
  latency: number;               // Response time in ms
}

interface VideoOutput {
  url?: string;                  // Hosted video URL
  base64?: string;               // Base64-encoded video data
  duration?: number;             // Video length in seconds
  revisedPrompt?: string;        // AI-revised prompt
}
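
Because a VideoOutput may carry either a hosted url or inline base64 data, code that saves results locally should handle both forms. The sketch below, a hypothetical helper assuming a Node.js runtime, decodes the base64 form, stripping a data-URI prefix if present:

```typescript
// Hypothetical helper, assuming a Node.js runtime: turn a base64 video
// payload (with or without a 'data:video/...;base64,' prefix) into bytes.
function decodeVideoBase64(data: string): Buffer {
  const comma = data.indexOf(',');
  const payload = comma >= 0 ? data.slice(comma + 1) : data;
  return Buffer.from(payload, 'base64');
}
```

Pass the result to something like fs.writeFileSync('out.mp4', bytes); for the url form, download the file with an HTTP client instead.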

Best Practices

1. Write Motion-Focused Prompts

// Good ✅ - Describes movement
{
  prompt: 'Camera slowly pans right across a mountain landscape, revealing a lake in the distance'
}

// Less effective - Static description
{
  prompt: 'A mountain landscape with a lake'
}

2. Be Specific About Camera Movement

// Good ✅ - Clear camera direction
{
  prompt: 'Drone shot rising up from ground level, revealing a coastal town at sunset'
}

// Good ✅ - Detailed motion
{
  prompt: 'Close-up of raindrops hitting a window, camera slowly pulling back to reveal a city view'
}

3. Use Appropriate Dimensions

// Good ✅ - Vertical for social media
{
  prompt: 'Fashion model walking down a runway',
  size: '720x1280'  // TikTok/Instagram format
}

// Good ✅ - Landscape for YouTube
{
  prompt: 'Product demonstration video',
  size: '1280x720'  // YouTube format
}

4. Always Handle Errors

try {
  const response = await layer.video({
    gateId: 'your-gate-id',
    data: {
      prompt: 'A sunset over the mountains'
    }
  });

  return response.videos[0].url;
} catch (error) {
  console.error('Video generation failed:', error);
  return null;
}
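
Transient failures (timeouts, rate limits) are common with long-running generation, so a retry wrapper can be layered on top of the try/catch above. A sketch with exponential backoff; withRetry is not part of the SDK, and in production you would likely retry only on errors you know to be retryable:

```typescript
// Hypothetical retry wrapper with exponential backoff, not part of
// the SDK. Retries every failure; filter for retryable errors in
// real code.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < attempts - 1) {
        // Wait 1s, 2s, 4s, ... between attempts (with the defaults above).
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Usage: const response = await withRetry(() => layer.video({ gateId: 'your-gate-id', data: { prompt: 'A sunset over the mountains' } }));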

5. Monitor Costs

Video generation costs significantly more than image generation, so log the cost on every response:
const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'Ocean waves',
    duration: 10
  }
});

console.log(`Duration: ${response.videos[0].duration}s`);
console.log(`Cost: $${response.cost.toFixed(4)}`);
console.log(`Cost per second: $${(response.cost / response.videos[0].duration).toFixed(4)}`);
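
When generating at scale, it also helps to track cumulative spend and stop before exceeding a budget. A minimal sketch, assuming you set the limit yourself; BudgetGuard is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper: track cumulative spend against a fixed USD budget.
class BudgetGuard {
  private spent = 0;

  constructor(private readonly limitUsd: number) {}

  // Record the cost reported on a response.
  record(costUsd: number): void {
    this.spent += costUsd;
  }

  // True if a request of the estimated cost would stay within budget.
  canAfford(estimatedUsd: number): boolean {
    return this.spent + estimatedUsd <= this.limitUsd;
  }
}
```

After each call, record the response cost with guard.record(response.cost); before the next call, check guard.canAfford(estimate) and stop if it returns false.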

Examples

Social Media Content

const response = await layer.video({
  gateId: 'your-gate-id',
  metadata: { platform: 'instagram' },
  data: {
    prompt: 'Product reveal: sleek smartphone rotating on a pedestal with studio lighting',
    size: '720x1280',  // Vertical
    duration: 5
  }
});

Marketing Video

const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'Aerial drone shot flying over a modern office building, smooth descent revealing the entrance',
    size: '1280x720',  // Landscape
    duration: 10,
    fps: 30
  }
});

Animation from Image

const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'The character turns their head and smiles at the camera',
    image: {
      url: 'https://example.com/character.jpg'
    },
    duration: 5
  }
});

Nature Scene

const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'Time-lapse of sunrise over a misty forest, camera slowly panning left',
    size: '1792x1024',
    duration: 8,
    negativePrompt: 'people, buildings, cars'
  }
});

Style-Consistent Video

const response = await layer.video({
  gateId: 'your-gate-id',
  data: {
    prompt: 'A cat playing with a ball of yarn',
    referenceImages: [
      {
        url: 'https://example.com/art-style.jpg',
        referenceType: 'style'
      }
    ],
    duration: 6
  }
});

Advanced Usage

Batch Generation

Generate multiple videos:
const prompts = [
  'Waves crashing on a beach',
  'City traffic at night',
  'Leaves falling in autumn'
];

const videos = await Promise.all(
  prompts.map(prompt =>
    layer.video({
      gateId: 'your-gate-id',
      data: { prompt, size: '1280x720' }
    })
  )
);

videos.forEach((response, i) => {
  console.log(prompts[i], '->', response.videos[0].url);
});
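
Note that Promise.all fires every request at once, which can trip provider rate limits on larger batches. One way to bound the load is a concurrency limiter like the sketch below; mapWithConcurrency is a hypothetical helper, not an SDK function:

```typescript
// Hypothetical helper: run an async mapper over items with at most
// `limit` calls in flight at a time, preserving input order.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // safe: no await between the check and increment
      results[i] = await fn(items[i]);
    }
  }

  // Start up to `limit` workers that pull items until the list is drained.
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```

Usage: const videos = await mapWithConcurrency(prompts, 2, prompt => layer.video({ gateId: 'your-gate-id', data: { prompt, size: '1280x720' } }));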

Override Model

Use a specific video model:
const response = await layer.video({
  gateId: 'your-gate-id',
  model: 'runway-gen3',  // Force specific model
  data: {
    prompt: 'A futuristic city scene'
  }
});

Custom Metadata

Track video generation:
const response = await layer.video({
  gateId: 'your-gate-id',
  metadata: {
    userId: 'user-123',
    campaign: 'summer-launch',
    platform: 'youtube-shorts'
  },
  data: {
    prompt: 'Product demonstration with dynamic camera movement',
    size: '720x1280',
    duration: 15
  }
});

Next Steps