This section covers how to use Verifiers environments for RL training with our Hosted Training platform, our open-source prime-rl trainer, or other supported libraries.
Table of Contents
- Hosted Training
- Training with `prime-rl`
- Prompt Optimization with `prime gepa run`
- RL Rules of Thumb
- Other Trainers
Hosted Training
Hosted Training, available within our Lab platform, enables you to automatically train models via prime-rl without needing to manage your own infrastructure. Hosted Training supports LoRA for RL training, and can be used with any environment built with Verifiers.
Configuration
Use the `prime lab setup` script to download example configuration files for Hosted Training into your workspace. This downloads example RL configs into `configs/rl/` and example eval configs into `configs/eval/`, along with `configs/endpoints.toml` and GEPA starter configs in `configs/gepa/`.
For example, one of the starter configs trains the `primeintellect/reverse-text` environment with `Qwen/Qwen3.5-4B`.
Hosted Training currently supports the following models for the taskset and harness:
- Qwen/Qwen3-30B-A3B-Instruct-2507
- Qwen/Qwen3-30B-A3B-Thinking-2507
- Qwen/Qwen3-4B-Instruct-2507
- Qwen/Qwen3-4B-Thinking-2507
- Qwen/Qwen3-VL-4B-Instruct
- Qwen/Qwen3.5-0.8B
- Qwen/Qwen3.5-2B
- Qwen/Qwen3.5-4B
- Qwen/Qwen3.5-9B
- Qwen/Qwen3.5-35B-A3B
- Qwen/Qwen3.5-122B-A10B
- Qwen/Qwen3.5-397B-A17B
- meta-llama/Llama-3.2-1B-Instruct
- meta-llama/Llama-3.2-3B-Instruct
- nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16
- nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16
- openai/gpt-oss-20b
- openai/gpt-oss-120b
- zai-org/GLM-4.7
Training with prime-rl
Our prime-rl trainer is a production-ready async RL training framework that supports large-scale multi-node training, agentic rollouts with Verifiers environments, Mixture-of-Experts (MoE) models, LoRA adapters, and other training algorithms such as SFT and online distillation. We recommend using prime-rl for training with Verifiers environments on self-managed GPU infrastructure. The default configuration distills the best practices from our research team’s experience and the broader community into a stable, easy-to-use recipe, including advanced features such as online difficulty filtering, continuous batching, in-flight weight updates, importance sampling and logprob clipping for stability, and more.
Setup and Configuration
To set up your workspace for training with `prime-rl`, run the setup script, which installs the `prime-rl` trainer and its dependencies. For configuration files and launch commands, use the `prime-rl` documentation.
Prompt Optimization with prime gepa run
prime gepa run is the CLI entrypoint for automatic system prompt optimization using GEPA (Genetic-Pareto prompt optimization). It iteratively refines your environment’s system prompt using a teacher LLM to reflect on evaluation results, without requiring gradient-based training. Current support is for system prompt optimization only.
Usage
Basic usage mirrors `prime eval run`. For example, running `prime gepa run` against the `wiki-search` environment optimizes it using the specified model for both evaluation rollouts and reflection. Results are saved to `environments/wiki-search/outputs/gepa/`.
Key options:
- `--model`/`-m`: Model for evaluation rollouts
- `--reflection-model`/`-M`: Teacher model for prompt reflection (defaults to `--model`)
- `--max-calls`/`-B`: Evaluation budget (default: 500)
- `--num-train`/`-n`: Training examples (default: 100)
- `--num-val`/`-N`: Validation examples (default: 50)
- `--minibatch-size`: Number of examples evaluated together per reflection step (default: 3)
- `--perfect-score`: Maximum score for a rollout in your environment (if applicable); minibatches achieving this score are skipped during reflection (useful if your environment has a known max score)
- `--state-columns`: Additional state columns to copy into the reflection dataset. By default, `query`, `completion`, `expected_answer`, `reward`, and `error` are included. Use this to add environment-specific state fields (e.g., `--state-columns tool_calls reasoning_trace`)
In a TOML config file, set `max_calls`, `num_train`, `num_val`, `minibatch_size`, and `max_concurrent` under `[gepa]`. Put generation parameters such as `max_tokens` and `temperature` under `[sampling]`; the CLI passes that table through as `sampling_args`. Use `[[env]]` for one or more environments; GEPA samples train and validation examples uniformly by environment. A single `[env]` table is still accepted for older configs.
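As a sketch, a config following the field names above might look like this (the concrete values and the `[[env]]` keys are assumptions for illustration, not a canonical example):

```toml
# Hypothetical GEPA config sketch; only the section names and the
# [gepa]/[sampling] key names come from the docs above.
[gepa]
max_calls = 500
num_train = 100
num_val = 50
minibatch_size = 3
max_concurrent = 8      # assumed value

[sampling]              # passed through to the environment as sampling_args
max_tokens = 2048
temperature = 1.0

[[env]]                 # one table per environment; examples sampled uniformly
id = "wiki-search"      # hypothetical key name
```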
Output
After optimization, you'll find:
- `system_prompt.txt`: The optimized system prompt. Load it with `vf.SystemMessage.from_path("/path/to/system_prompt.txt")`.
- `results.jsonl`: Candidate prompt rows for evaluation upload; GEPA-specific fields live under `info`.
- `pareto_frontier.jsonl`: Best candidate references per validation example
- `metadata.json`: Run configuration and summary
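For instance, a few lines of stdlib Python can pull the top candidate out of `results.jsonl` (only the `info` nesting comes from the output description above; `candidate` and `val_score` are hypothetical field names, so inspect your own output for the real ones):

```python
# Sketch: select the best candidate prompt row from results.jsonl.
# "candidate" and "val_score" are hypothetical field names for illustration.
import json

rows_jsonl = """\
{"info": {"candidate": 0, "val_score": 0.42}}
{"info": {"candidate": 1, "val_score": 0.61}}
"""

rows = [json.loads(line) for line in rows_jsonl.splitlines() if line.strip()]
best = max(rows, key=lambda r: r["info"]["val_score"])
print(best["info"]["candidate"])  # 1
```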
Use `prime eval run` to verify performance before and after optimization.
RL Rules of Thumb
RL training can be sensitive to implementation details and hyperparameters. Some simple practical guidance:

Before Training
- Evaluate baseline performance: If your model gets 0% reward after 10+ attempts, the task is too hard
- Check task difficulty: If baseline is already 80%+, consider harder examples
- Ensure reward diversity: You want varied scores within each generation group
Performance Trade-offs
For more aggressive training (higher risk of collapse):
- Increase learning rate (1e-5 to 1e-4 for LoRA, 1e-6 to 1e-5 for full finetuning)
- Decrease `rollouts_per_example` and `batch_size` for faster generation

For more conservative training (lower risk of collapse):
- Increase `rollouts_per_example` (16-32)
- Increase `batch_size` (512-1024)
- Use larger models (14B+)

In `prime-rl`, you can enable online difficulty filtering to ensure that rollout groups used for training always contain a diversity of rewards.
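The group-filtering idea above can be sketched in a few lines (an illustration of the concept, not prime-rl's actual implementation):

```python
# Illustrative sketch of online difficulty filtering: drop rollout groups
# whose rewards are all identical, since all-pass and all-fail groups carry
# no learning signal for group-relative RL.

def filter_groups(groups: list[list[float]]) -> list[list[float]]:
    """Keep only groups with at least two distinct reward values."""
    return [g for g in groups if len(set(g)) > 1]

groups = [
    [0.0, 0.0, 0.0, 0.0],  # too hard: no signal
    [1.0, 1.0, 1.0, 1.0],  # too easy: no signal
    [0.0, 0.5, 1.0, 0.0],  # mixed rewards: useful for training
]
kept = filter_groups(groups)
print(kept)  # [[0.0, 0.5, 1.0, 0.0]]
```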
Inference Client Types
The rollout client's `client_type` controls how prompt assembly and token state flow between the inference server and the trainer. For RL, the trainer must see the exact tokens the server sampled: re-tokenization across turns drifts under BPE round-trip and fragments multi-turn rollouts into multiple training samples.
- `openai_chat_completions` (MITO, messages-in): standard OpenAI-compatible path. Server-side chat templating, returns text. The trainer re-tokenizes; fine for eval and short single-turn training, but can fragment multi-turn rollouts.
- `openai_chat_completions_token` (TITO, token-in): server-side templating, but returns prompt and completion token IDs alongside text so the trainer doesn't re-tokenize. Use when you trust the server's chat template to be stable across turns.
- `renderer` (experimental): client-side tokenization via a per-model renderer in the `renderers` package. Install it with `uv add "verifiers[renderers]"` before using `client_type="renderer"`. The trainer renders messages to token IDs locally and sends those to vLLM's `/v1/generate` endpoint. The renderer's `bridge_to_next_turn` extends prior-turn tokens verbatim across multi-turn boundaries (the extension property) and synthesizes the canonical turn-close on mid-completion truncation, so multi-turn rollouts merge into one training sample with one clean loss mask.
For RL training, prefer `openai_chat_completions_token`; it's the tried-and-tested path with broad model coverage. The renderer client is newer and offers stronger token-preservation guarantees in theory, but is experimental: hand-coded renderers exist only for a subset of models, and corner cases are still being shaken out. See reference § Built-in Clients for the full list.
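The re-tokenization drift described above can be shown with a toy example (a deliberately simplified merge-style tokenizer, not a real BPE implementation):

```python
# Toy illustration of why the trainer must see the server's exact tokens:
# with a merge rule, re-tokenizing the detokenized text can produce a
# different token sequence than the one the server actually sampled.

MERGES = {("a", "b"): "ab"}  # toy merge table

def tokenize(text: str) -> list[str]:
    toks = list(text)
    out, i = [], 0
    while i < len(toks):
        if i + 1 < len(toks) and (toks[i], toks[i + 1]) in MERGES:
            out.append(MERGES[(toks[i], toks[i + 1])])
            i += 2
        else:
            out.append(toks[i])
            i += 1
    return out

# Turn 1 ended on a bare "a"; turn 2 began with "b".
sampled = ["c", "a"] + ["b", "c"]      # what the server actually sampled
retok = tokenize("".join(sampled))     # what re-tokenizing the text yields
print(sampled, retok)  # ['c', 'a', 'b', 'c'] ['c', 'ab', 'c']
```

Logprobs computed on `retok` would not line up with the rollout, which is exactly what the token-in and renderer clients avoid.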
Common Issues
Non-Increasing Chat Templates: The Qwen3 and DeepSeek-R1 model series both remove `<think>` sections from messages when processing inputs, which violates the increasing-context requirement for multi-turn training. We provide versions of many of these models with modified chat templates here.
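The increasing-context requirement amounts to a simple prefix property (the check below is a sketch, not part of the Verifiers API):

```python
# Sketch: each turn's rendered prompt must extend the previous one verbatim,
# so multi-turn rollouts can merge into one training sample.

def is_increasing(rendered_turns: list[str]) -> bool:
    """True if every rendered prompt is a prefix of the next one."""
    return all(b.startswith(a) for a, b in zip(rendered_turns, rendered_turns[1:]))

ok = [
    "<user>hi</user>",
    "<user>hi</user><assistant>yo</assistant>",
]
# A template that strips <think> sections rewrites earlier context:
bad = [
    "<user>hi</user><assistant><think>...</think>yo</assistant>",
    "<user>hi</user><assistant>yo</assistant><user>more</user>",
]
print(is_increasing(ok), is_increasing(bad))  # True False
```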
OOM during generation:
- Reduce `rollouts_per_example` or `micro_batch_size`
- Use LoRA instead of full finetuning
- Check that the vLLM server has sufficient memory

Training is unstable or collapsing:
- Decrease learning rate
- Increase `rollouts_per_example`
- Increase `batch_size`

Reward is flat or improving slowly:
- Increase learning rate
- Leverage continuous rewards
- Use online difficulty filtering
- Calibrate difficulty appropriately via smarter models, easier tasks
Other Trainers
verifiers is intended to be largely trainer-agnostic and is straightforward to integrate with any trainer that can expose an OpenAI-compatible inference client for rollouts.
vf.RLTrainer (Legacy)
The legacy vf.RLTrainer still exists for educational and experimental purposes via the optional verifiers-rl package and the legacy RL CLI entrypoint, but it is not actively maintained. It is a compact single-node async RL trainer with a narrower feature set than production trainers. Its core implementation (trainer.py and orchestrator.py under packages/verifiers-rl/verifiers_rl/rl/trainer/) remains intentionally lightweight for algorithm experimentation. For production training and current guidance, use prime-rl.
Tinker
Tinker supports Verifiers environments via the `tinker-cookbook` recipes.
SkyRL
SkyRL supports Verifiers environments via its `skyrl-train` integration.