This option is for more technical users with their own GPU hardware.

Deploy Worker on Your Own GPUs

Video Walkthrough

1 - Select Contribute Compute

Click the “Contribute Compute” button on the dashboard of the compute pool you want to contribute to.

2 - Select Self-Hosted Option

Select the option to self-host your worker.

3 - Follow the Guided Setup

The steps on the UI will guide you through setting up your worker.

Ensure you’re contributing a physical machine with GPU access or a virtual machine (VM) with GPU passthrough. Containerized cloud environments (e.g. RunPod, TensorDock) are not supported.

4 - Run Worker CLI

Install Prerequisites

To run the CLI, you’ll need the following tools installed. A quick way to verify your Docker and GPU setup is shown after this list.

Docker

NVIDIA Container Toolkit

tmux (optional)

  • We recommend using tmux if you’re using compute from a cloud or hosted provider
  • You can check if you have it installed by running the tmux command in your terminal
  • If you need to install it, you can follow the instructions here: https://github.com/tmux/tmux/wiki/Installing
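Before you start the worker, you can optionally confirm that Docker can see your GPUs through the NVIDIA Container Toolkit. This is only a sanity check, not part of the official setup, and the CUDA image tag below is just an example; any CUDA base image you have available will do.

docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

If this prints your GPU table, Docker and the NVIDIA Container Toolkit are set up correctly. If it fails, revisit the NVIDIA Container Toolkit installation before running the worker.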

Run Worker

Once you’ve set up your worker, you can run it using the command provided by the guided setup UI.

If you’re using compute in the cloud, we recommend running the worker inside tmux (or a similar tool) so that it keeps running even if you disconnect from your SSH session.
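For example, a minimal tmux workflow might look like the following; the session name prime-worker is arbitrary, and the worker command itself comes from the guided setup UI.

tmux new -s prime-worker     # start a named session
# ...run the worker command from the guided setup inside the session...
# detach with Ctrl-b then d; the worker keeps running in the background
tmux attach -t prime-worker  # reattach later to check on the worker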

The prime-worker CLI also provides additional help if you append --help to your command, e.g.

prime-worker --help

Or with a specific command:

prime-worker run --help

You can shut down the worker at any time when you’re done contributing.

If you have issues setting up your worker, contact us through our support page or in the #protocol channel on our Discord.

Your contributions may be slashed if you act maliciously on the network (e.g. faking GPU hardware or submitting fake data). More details are available in the contribution guidelines.

5 - Monitor Status

You can monitor your worker’s status on the dashboard and in the CLI output. The dashboard may take up to 10 minutes to reflect the latest worker status, so check the CLI for the most up-to-date view.
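If you followed the tmux suggestion above, reattaching to the session is the quickest way to see the live CLI output. Since Docker is a prerequisite, the commands below also assume the worker’s tasks run as containers on this machine, so docker ps gives a rough second signal that work is happening; treat this as a sketch rather than an official monitoring method.

tmux attach -t prime-worker  # view the live CLI output (if you started the worker under tmux)
docker ps                    # assumption: worker tasks run as containers on this machine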

6 - Earn Contributions Once Active

Once your worker is active, it’ll join the network and start submitting work. Contributions are tracked based on work submitted, so it may take up to 24 hours before you see your contributions increase.

You’ll also be able to see your contributions on the compute pool dashboard.

Troubleshooting / Help

If your issue is not addressed here, contact us through our support page or in the #protocol channel on our Discord.

Managing Multiple Nodes with a Single Provider Key

If you’re contributing multiple nodes/workers to a compute pool, we recommend using the same provider private key and generating a new node subkey for each new worker. This reduces the number of independent private keys you have to manage and simplifies managing your provider’s stake.

The guided setup flow will walk you through how to do this when you’re setting up additional workers.

Docker API Permission Denied Error

If you get an error like Docker API Permission Denied ..., you may need to add your user to the docker group:

  1. Add your user to the docker group:
  sudo usermod -aG docker $USER
  2. Log out of your SSH session and log back in
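If you’d rather not drop your SSH session, newgrp can pick up the new group membership in your current shell. Either way, you can confirm the fix with a quick check; these are standard Linux and Docker commands, not part of the prime-worker CLI.

newgrp docker   # apply the new group membership without logging out
groups          # "docker" should now appear in the list
docker ps       # should now run without sudo and without the permission error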

View Worker Source Code
