The steps in the UI will guide you through setting up your worker.
Ensure you’re contributing a physical machine with GPU access or a VM (virtual machine) with GPU passthrough. Containerized cloud environments (e.g. RunPod, TensorDock) are not supported.
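As a quick sanity check before continuing (assuming an NVIDIA GPU), you can verify that the machine or VM sees the GPU directly:

# should list your GPU(s); if this fails inside a VM, GPU passthrough is not configured correctly
nvidia-smi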
Once you’ve set up your worker, you can run it using the command provided by the guided setup UI. If you’re using compute in the cloud, we recommend using tmux, or a similar tool, to run the worker so it stays running even if you disconnect from your SSH session (see the tmux example below). The prime-worker command also provides additional help if you suffix your command with --help, e.g.
prime-worker --help
Or with a specific command:
prime-worker run --help
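For cloud machines, a minimal tmux workflow looks like the following. This is only a sketch: the actual run command and its arguments come from the guided setup UI, so the placeholder below stands in for whatever command you were given.

# start a named session for the worker
tmux new -s prime-worker

# inside the session, launch the worker using the command from the guided setup UI
prime-worker run <arguments-from-guided-setup>

# detach with Ctrl-b then d; reattach later with
tmux attach -t prime-worker

The worker keeps running inside the tmux session even if your SSH connection drops, and reattaching brings you back to its live output.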
You can shut down the worker at any time when you’re done contributing. If you have issues setting up your worker, contact us through our support page or in the #protocol channel on our Discord.
Your contributions may be slashed if you attempt to act maliciously on the network, e.g. by faking GPU hardware or submitting fake data.
More details are available in the contribution guidelines.
You can monitor your worker’s status on the dashboard and in the CLI output.
The dashboard may take up to 10 minutes to update with the latest worker status, so check the CLI for the most up-to-date status.
Once your worker is active, it will join the network and start submitting work.
Contributions are tracked based on work submitted, so it may take up to 24 hours to start seeing your contributions increase. You’ll also be able to see your contributions on the compute pool dashboard.
Managing Multiple Nodes with a Single Provider Key
If you’re contributing multiple nodes/workers to a compute pool, we recommend using the same provider private key and generating a new node subkey for each additional worker.
This reduces the number of independent private keys you have to manage and simplifies the management of your provider’s stake. The guided setup flow will walk you through how to do this when you’re setting up additional workers.