POST /api/v1/hosted-evaluations

Create Hosted Evaluation
curl --request POST \
  --url https://api.primeintellect.ai/api/v1/hosted-evaluations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "environment_ids": [
    "<string>"
  ],
  "inference_model": "<string>",
  "eval_config": {
    "num_examples": 4503599627370495,
    "rollouts_per_example": 1024,
    "env_args": {},
    "allow_sandbox_access": false,
    "allow_instances_access": false,
    "timeout_minutes": 780,
    "custom_secrets": {}
  },
  "team_id": "<string>",
  "name": "<string>"
}
'
{
  "evaluation_id": "<string>",
  "status": "<string>",
  "sandbox_id": "<string>",
  "evaluation_ids": [
    "<string>"
  ],
  "error": "<string>"
}

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json

Request to create and start a hosted evaluation

environment_ids
string[]
required

List of environment IDs to evaluate

Minimum array length: 1

inference_model
string
required

Model ID for inference

eval_config
HostedEvalConfig · object
required

Evaluation configuration

team_id
string | null

Optional team ID to own the hosted evaluation

name
string | null

Optional custom evaluation name
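As a sketch, the request body above can be assembled and validated client-side before sending. The helper below is illustrative, not part of an official SDK; the environment ID and model name are made-up example values, and only the documented constraint (at least one environment ID) is enforced.

```python
import json

# Endpoint from this reference.
API_URL = "https://api.primeintellect.ai/api/v1/hosted-evaluations"

def build_payload(environment_ids, inference_model, eval_config=None,
                  team_id=None, name=None):
    """Build a request body for POST /api/v1/hosted-evaluations.

    Enforces the documented minimum array length of 1 for
    environment_ids. team_id and name are string | null, so they
    are omitted entirely when unset.
    """
    if not environment_ids:
        raise ValueError("environment_ids requires at least one entry")
    payload = {
        "environment_ids": list(environment_ids),
        "inference_model": inference_model,
        "eval_config": eval_config or {},
    }
    if team_id is not None:
        payload["team_id"] = team_id
    if name is not None:
        payload["name"] = name
    return payload

# Hypothetical values for illustration only.
payload = build_payload(
    ["env-abc123"],
    "my-inference-model",
    eval_config={"num_examples": 100, "rollouts_per_example": 4},
)
body = json.dumps(payload)
```

The resulting `body` string is what the curl example above passes via `--data`, alongside the `Authorization: Bearer <token>` and `Content-Type: application/json` headers.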

Response

Successful Response

Response after creating a hosted evaluation

evaluation_id
string
required

ID of the created evaluation

status
string
required

Current status of the evaluation

sandbox_id
string | null

ID of the sandbox running the evaluation

evaluation_ids
string[] | null

List of evaluation IDs if multiple environments were provided

error
string | null

Error message if creation failed
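A minimal sketch of consuming the response shape above, assuming the JSON has already been decoded into a dict. Per the fields documented here, `evaluation_ids` is only populated when multiple environments were provided, and `error` is set when creation failed; the helper normalizes both cases into one list.

```python
def extract_evaluation_ids(resp):
    """Return the evaluation IDs from a create-evaluation response.

    Raises when the response carries an error message; otherwise
    prefers the evaluation_ids array (multiple environments) and
    falls back to the single evaluation_id.
    """
    if resp.get("error"):
        raise RuntimeError(f"evaluation creation failed: {resp['error']}")
    return resp.get("evaluation_ids") or [resp["evaluation_id"]]

# Example response dict (hypothetical values matching the schema).
resp = {
    "evaluation_id": "eval-1",
    "status": "pending",
    "sandbox_id": None,
    "evaluation_ids": None,
    "error": None,
}
ids = extract_evaluation_ids(resp)  # ["eval-1"]
```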