Lab

Train, evaluate, and deploy AI models with our frontier research infrastructure.

Hosted Training

Run large-scale RL training without managing infrastructure

Environments Hub

Create, share, and discover RL environments for training and evaluation

Evaluations

Run evaluations on your environments with hosted inference

verifiers

Our library for building RL environments and evaluations

Sandboxes

Secure code execution environments for AI agents

prime-rl

Our large-scale async RL framework

Inference

Access frontier models via our inference API

Guides

Step-by-step workflows for training, environments, and Lab

Compute

Access GPUs and infrastructure for AI workloads.

On-Demand Cloud

Deploy single-GPU instances in under a minute

Multi-Node Clusters

Scale to clusters of 64+ H100 GPUs for distributed training

Storage

Persistent storage for datasets and checkpoints

Reserved Clusters

Dedicated GPU clusters with monitoring

Resources

API Reference

REST API documentation

CLI Reference

Prime CLI commands

Compute FAQ

Common questions