    Introduction

    Welcome to the Prime Intellect Docs!

1. Search across all clouds and find the best and cheapest GPUs available.

    2. Ready-to-use Docker containers get your AI workloads up to speed.

    3. 1-click deploy your GPUs. No quotas, commitments, or hidden fees.

    4. Scale to 64+ multi-node H100 clusters on-demand.
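
    The typical flow is: find available GPUs, pick a container image, deploy, and scale. The sketch below illustrates that flow in Python against a hypothetical REST client; the base URL, endpoint paths, and field names are assumptions for illustration only, not the documented Prime Intellect API (see the API Reference for the real endpoints).

    ```python
    # Illustrative sketch only: the base URL, paths, and response fields below
    # are assumptions, not the documented Prime Intellect API. Consult the
    # API Reference for the real endpoints and schemas.
    import requests

    API_BASE = "https://api.primeintellect.ai"  # assumed base URL
    HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}

    # 1. Search across clouds for the cheapest available GPUs (hypothetical endpoint).
    offers = requests.get(
        f"{API_BASE}/availability",
        params={"gpu_type": "H100_80GB", "gpu_count": 1},
        headers=HEADERS,
    ).json()
    cheapest = min(offers, key=lambda offer: offer["price_per_hour"])

    # 2-3. Deploy a pod from a ready-to-use container image (hypothetical endpoint).
    pod = requests.post(
        f"{API_BASE}/pods",
        json={
            "offer_id": cheapest["id"],
            "image": "pytorch/pytorch:latest",  # ready-to-use Docker image
        },
        headers=HEADERS,
    ).json()
    print("Pod created:", pod["id"])

    # 4. Scaling to multi-node clusters follows the same pattern with a larger gpu_count.
    ```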

    Quickstart - Deploy a Pod

    The most cost-effective GPUs, deployed in less than one minute

    FAQ

    Answers to common questions about our platform

    Tutorials

    Guides to get your use case running with Prime Intellect

    Multi-Node On-Demand

    Guides on how to deploy multi-node clusters of 16-64+ H100s on-demand
