Commit 2da020d

Added GPU inside container docs

1 parent ee794b4

2 files changed
Lines changed: 11 additions & 0 deletions

content/en/docs/installation/installation-linux.md

Lines changed: 7 additions & 0 deletions
@@ -323,6 +323,13 @@ See [NVIDIA NVML metric provider]({{< relref "/docs/measuring/metric-providers/g
On *Ubuntu* and *Fedora* you can just append `--nvidia-gpu` to the install script to try an auto-install.

#### Using a GPU inside the container

If the GPU is accessed directly inside the container, you also need to install the [NVIDIA container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) and activate the GPU via a docker run argument.

See the [Measuring AI/ML Applications]({{< relref "docs/measuring/measuring-ai-ml-applications" >}}) sub-page for more details.

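The toolkit setup can be sketched as follows. This is a minimal outline for Ubuntu, assuming the NVIDIA apt repository has already been configured as described in the linked install guide; package and service names may differ on other distributions.

```shell
# Install the NVIDIA container toolkit (assumes the NVIDIA apt repo is set up)
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Smoke test: the GPU should now be visible inside a container
docker run --rm --gpus=all ubuntu nvidia-smi
```

If the final command prints the `nvidia-smi` device table, containers started with `--gpus=all` can access the GPU.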
### DC Metrics Provider

Some of our PSU metric providers may need specific hardware attached to your machine in order to run. These include the [Powerspy]({{< relref "docs/measuring/metric-providers/psu-energy-ac-powerspy2-machine" >}}), [MCP]({{< relref "docs/measuring/metric-providers/psu-energy-ac-mcp-machine" >}}), and [Picolog]({{< relref "docs/measuring/metric-providers/psu-energy-dc-picolog-machine" >}}) metric providers. Please refer to each provider's documentation for details.

content/en/docs/measuring/measuring-ai-ml-applications.md

Lines changed: 4 additions & 0 deletions
@@ -17,6 +17,10 @@ See [our example ML example application](https://github.com/green-coding-solutio
The simplest way is to use [ollama](https://ollama.com) as a manager and encapsulate it inside of the GMT.

See [our ollama LLM example application](https://github.com/green-coding-solutions/example-applications/tree/main/ai-model) for a usage scenario to get started.
This contains a [usage scenario that runs on the CPU only](https://github.com/green-coding-solutions/example-applications/blob/main/ai-model/usage_scenario_gpu.yml) and one that [runs on the GPU](https://github.com/green-coding-solutions/example-applications/blob/main/ai-model/usage_scenario_gpu.yml).

For the latter scenario the GPU must be activated for the Docker container by passing the *docker run arg* `--gpus=all`. This functionality requires the [NVIDIA container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) to be installed on your host.
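As a rough illustration of what the `--gpus=all` docker run arg does outside of GMT, the equivalent manual invocation might look like this (assuming the NVIDIA container toolkit is already installed on the host; the port mapping mirrors ollama's default API port):

```shell
# Start an ollama container with all host GPUs exposed.
# --gpus=all is the same docker run arg the GPU usage scenario passes.
docker run --rm --gpus=all -p 11434:11434 ollama/ollama
```

Without the toolkit installed, the same command fails because Docker cannot find the NVIDIA runtime.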

#### Quick LLM query measuring
