Get Pods History
curl --request GET \
  --url https://api.primeintellect.ai/api/v1/pods/history \
  --header 'Authorization: Bearer <token>'
{
  "total_count": 0,
  "offset": 0,
  "limit": 100,
  "data": [
    {
      "id": "<string>",
      "name": "<string>",
      "type": "HOSTED",
      "providerType": "runpod",
      "createdAt": "2023-11-07T05:31:56Z",
      "terminatedAt": "2023-11-07T05:31:56Z",
      "gpuName": "CPU_NODE",
      "gpuCount": 1,
      "socket": "PCIe",
      "priceHr": 1.23,
      "userId": "<string>",
      "teamId": "<string>",
      "customTemplateId": "<string>",
      "environmentType": "ubuntu_22_cuda_12"
    }
  ]
}
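The same request can be made from Python. A minimal sketch using the requests library; the PRIME_API_KEY environment variable is illustrative, not part of the API:

import os
import requests

# Minimal sketch: assumes your API token is exported as PRIME_API_KEY
# (the variable name is an assumption, not documented behavior).
url = "https://api.primeintellect.ai/api/v1/pods/history"
headers = {"Authorization": f"Bearer {os.environ['PRIME_API_KEY']}"}

resp = requests.get(url, headers=headers)
resp.raise_for_status()
payload = resp.json()

print(f"{payload['total_count']} pods in history")
for pod in payload["data"]:
    print(pod["id"], pod["gpuName"], pod["gpuCount"], pod["priceHr"])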
Authorizations
Authorization (header): Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Response
id (string): Unique identifier for the record.
name (string): Name assigned to the resource.
type (enum): Type of the pod, either EXTERNAL or HOSTED. Available options: HOSTED, EXTERNAL.
providerType (enum): ID of the provider. Available options: runpod, fluidstack, lambdalabs, hyperstack, oblivus, cudocompute, scaleway, tensordock, datacrunch, latitude, crusoecloud, massedcompute, akash, primeintellect, primecompute, dc_impala, dc_kudu, dc_roan.
createdAt (string): Timestamp when the resource was created.
gpuName (enum): Model of the GPU used in the resource. Available options: CPU_NODE, A10_24GB, A100_80GB, A100_40GB, A30_24GB, A40_48GB, RTX3070_8GB, RTX3080_10GB, RTX3080Ti_12GB, RTX3090_24GB, RTX3090Ti_24GB, RTX4070Ti_12GB, RTX4080_16GB, RTX4080Ti_16GB, RTX4090_24GB, H100_80GB, H200_96GB, H200_141GB, GH200_480GB, GH200_624GB, L4_24GB, L40_48GB, L40S_48GB, RTX4000_8GB, RTX5000_16GB, RTX6000_24GB, RTX8000_48GB, RTX4000Ada_20GB, RTX5000Ada_32GB, RTX6000Ada_48GB, A2000_6GB, A4000_16GB, A4500_20GB, A5000_24GB, A6000_48GB, V100_16GB, V100_32GB, P100_16GB, T4_16GB, P4_8GB, P40_24GB.
gpuCount (integer): Number of GPUs allocated to the resource. Example: 1
socket (enum): Type of socket used by the GPU, if applicable. Available options: PCIe, SXM2, SXM3, SXM4, SXM5.
priceHr (number): Price per hour for using the resource (average). Example: 1.23
userId (string): ID of the user who owns the resource.
terminatedAt (string): Timestamp when the resource was terminated.
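Together, createdAt, terminatedAt, and priceHr allow an approximate total cost per pod to be computed client-side. A minimal sketch; treating a missing terminatedAt as "still running" is an assumption, not documented behavior:

from datetime import datetime, timezone

def pod_cost(pod: dict) -> float:
    # Timestamps are ISO 8601 with a "Z" (UTC) suffix, e.g. "2023-11-07T05:31:56Z".
    started = datetime.fromisoformat(pod["createdAt"].replace("Z", "+00:00"))
    ended_str = pod.get("terminatedAt")
    ended = (
        datetime.fromisoformat(ended_str.replace("Z", "+00:00"))
        if ended_str
        else datetime.now(timezone.utc)  # assumption: no terminatedAt means still running
    )
    hours = (ended - started).total_seconds() / 3600
    return hours * pod["priceHr"]  # priceHr is an average price per hour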
teamId (string): ID of the team to which the resource is assigned, if applicable.
customTemplateId (string): ID of the custom template used for the resource, if applicable.
environmentType (enum): Type of image used. Available options: ubuntu_22_cuda_12, cuda_12_1_pytorch_2_2, cuda_11_8_pytorch_2_1, cuda_12_1_pytorch_2_3, cuda_12_1_pytorch_2_4, cuda_12_4_pytorch_2_4, cuda_12_4_pytorch_2_5, stable_diffusion, axolotl, bittensor, hivemind, petals_llama, vllm_llama_8b, vllm_llama_70b, vllm_llama_405b, custom_template, flux.
total_count (integer): Total number of items available in the dataset.
offset (integer): Number of items to skip before starting to collect the result set. Required range: x >= 0.
limit (integer): Maximum number of items to return. Required range: x >= 0.
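Because the response envelope reports total_count alongside the offset and limit in effect, the full history can be collected page by page. A minimal sketch, assuming offset and limit are also accepted as query parameters (their presence in the response envelope suggests this, but verify against the full API reference):

import os
import requests

API_URL = "https://api.primeintellect.ai/api/v1/pods/history"
HEADERS = {"Authorization": f"Bearer {os.environ['PRIME_API_KEY']}"}

def fetch_all_pods(limit: int = 100) -> list[dict]:
    # Advance offset by one page at a time until total_count is exhausted.
    pods, offset = [], 0
    while True:
        resp = requests.get(API_URL, headers=HEADERS,
                            params={"offset": offset, "limit": limit})
        resp.raise_for_status()
        page = resp.json()
        pods.extend(page["data"])
        offset += limit
        if offset >= page["total_count"]:
            return pods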