Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
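As a hedged illustration of sending this header, the Python sketch below makes an authenticated request. Only the Authorization: Bearer <token> header format is documented above; the host, endpoint path, and pod ID placeholder are assumptions for illustration only.

```python
import os
import requests

# Hypothetical request: the host and path below are assumptions, not documented
# on this page. Only the Bearer header format comes from the text above.
API_BASE = "https://api.primeintellect.ai"          # assumed host
POD_ID = "00000000-0000-0000-0000-000000000000"     # placeholder pod UUID

token = os.environ["PRIME_API_TOKEN"]               # your auth token
response = requests.get(
    f"{API_BASE}/api/v1/pods/{POD_ID}",             # assumed path
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
pod = response.json()
```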
Response
Successful Response
ID of the user associated with this pod, if applicable.
Name of the pod.
Type of the pod, based on PodTypeEnum.
Available options: HOSTED, EXTERNAL
Type of provider associated with the pod, based on ProviderTypeEnum.
Available options: runpod, fluidstack, lambdalabs, hyperstack, oblivus, cudocompute, scaleway, tensordock, datacrunch, latitude, crusoecloud, massedcompute, akash, primeintellect, primecompute, dc_impala, dc_kudu, dc_roan, nebius, dc_eland, dc_wildebeest
Model of the GPU allocated.
Available options: CPU_NODE, A10_24GB, A100_80GB, A100_40GB, A30_24GB, A40_48GB, B200_180GB, RTX3070_8GB, RTX3080_10GB, RTX3080Ti_12GB, RTX3090_24GB, RTX3090Ti_24GB, RTX4070Ti_12GB, RTX4080_16GB, RTX4080Ti_16GB, RTX4090_24GB, RTX5090_32GB, H100_80GB, H200_96GB, GH200_96GB, H200_141GB, GH200_480GB, GH200_624GB, L4_24GB, L40_48GB, L40S_48GB, RTX4000_8GB, RTX5000_16GB, RTX6000_24GB, RTX8000_48GB, RTX2000Ada_16GB, RTX4000Ada_20GB, RTX5000Ada_32GB, RTX6000Ada_48GB, A2000_6GB, A4000_16GB, A4500_20GB, A5000_24GB, A6000_48GB, V100_16GB, V100_32GB, P100_16GB, T4_16GB, P4_8GB, P40_24GB
Number of GPUs allocated to the node.
Example: 1
Type of socket used by the GPU.
Available options: PCIe, SXM2, SXM3, SXM4, SXM5, SXM6
Hourly price for running the pod.
Example: 1.23
Unique identifier for the pod, generated as a UUID.
ID of the team owning this pod, if applicable.
ID of the wallet associated with this pod for billing or resource tracking.
Current status of the pod, based on PodStatusEnum.
Available options: PROVISIONING, PENDING, ACTIVE, STOPPED, ERROR, DELETING, UNKNOWN, TERMINATED
Installation status of the pod, based on InstallationStatusEnum.
Available options: PENDING, ACTIVE, FINISHED, ERROR, TERMINATED
Finalizer status of the pod, based on FinalizerStatusEnum.
Available options: PENDING, ERROR, SUCCESS
Details about any installation failures that occurred, if applicable.
Percentage of the installation process completed.
Timestamp when the pod was created.
Timestamp when the pod was last updated.
Timestamp when the pod was terminated.
Timestamp when the provider machine became ready (SSH accessible).
Timestamp when Prime Intellect installation completed.
Password for accessing the Jupyter environment on the pod, if applicable.
Hourly price when the pod is stopped. If empty, priceHr is used.
Example: 0.005
Hourly price during the provisioning process. If empty, priceHr is used.
Base hourly price for the pod, if a base currency is set.
Currency in which the base price is calculated.
Type of image selected for the pod.
Available options: ubuntu_22_cuda_12, cuda_12_1_pytorch_2_2, cuda_11_8_pytorch_2_1, cuda_12_1_pytorch_2_3, cuda_12_1_pytorch_2_4, cuda_12_4_pytorch_2_4, cuda_12_4_pytorch_2_5, cuda_12_4_pytorch_2_6, cuda_12_6_pytorch_2_7, stable_diffusion, axolotl, bittensor, hivemind, petals_llama, vllm_llama_8b, vllm_llama_70b, vllm_llama_405b, custom_template, flux, prime_rl
ID of the custom template applied to the pod, if applicable.
ID of the cluster to which the pod belongs.
Port mapping.
SSH connection details.
IP address(es) of the instance.
Instance resources.
Resources attached to the instance.
Whether the instance is a spot instance.
Whether to automatically restart the instance.
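To show how a few of these response fields might be consumed together, here is a minimal Python sketch. Only priceHr appears by name on this page; status, gpuType, gpuCount, and stoppedPriceHr are assumed, illustrative key names, and the stopped-price fallback simply mirrors the field descriptions above.

```python
# Minimal, hypothetical sketch: only "priceHr" is named on this page; the other
# keys ("status", "gpuType", "gpuCount", "stoppedPriceHr") are assumed and may
# differ from the actual response schema.
def summarize_pod(pod: dict) -> str:
    status = pod.get("status")                 # e.g. PROVISIONING, ACTIVE, STOPPED
    gpu = pod.get("gpuType")                   # e.g. H100_80GB
    count = pod.get("gpuCount")                # e.g. 1
    # Per the descriptions above, the stopped-state price falls back to priceHr
    # when it is empty.
    stopped_price = pod.get("stoppedPriceHr") or pod.get("priceHr")
    return f"{count}x {gpu}, status={status}, stopped price/hr={stopped_price}"

# Illustrative values only, not real output from the API.
print(summarize_pod({"status": "STOPPED", "gpuType": "H100_80GB",
                     "gpuCount": 1, "priceHr": 1.23}))
```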