## Using AMD Partition
To test your application on the AMD partition, you need to prepare a job script for that partition (a batch script sketch is shown after the allocation examples below) or use an interactive job:

```console
salloc -N 1 -c 64 -A PROJECT-ID -p p03-amd --gres=gpu:4 --time=08:00:00
```
where:

- `-N 1` means allocating one server (node),
- `-c 64` means allocating 64 cores,
- `-A PROJECT-ID` is your project,
- `-p p03-amd` is the AMD partition,
- `--gres=gpu:4` means allocating all 4 GPUs of the node,
- `--time=08:00:00` means allocation for 8 hours.
You also have the option to allocate only a subset of the resources, by reducing the `-c` and `--gres=gpu` values:
```console
salloc -N 1 -c 48 -A PROJECT-ID -p p03-amd --gres=gpu:3 --time=08:00:00
salloc -N 1 -c 32 -A PROJECT-ID -p p03-amd --gres=gpu:2 --time=08:00:00
salloc -N 1 -c 16 -A PROJECT-ID -p p03-amd --gres=gpu:1 --time=08:00:00
```
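For non-interactive runs, the same resources can be requested from a batch script. A minimal sketch, using the standard Slurm `#SBATCH` directives that mirror the `salloc` options above; the script name `run_job.sh` and the binary `./my_app` are placeholders, not site-specific names:

```bash
#!/bin/bash
#SBATCH -N 1                # one node
#SBATCH -c 64               # 64 cores
#SBATCH -A PROJECT-ID       # your project account
#SBATCH -p p03-amd          # AMD partition
#SBATCH --gres=gpu:4        # all 4 GPUs of the node
#SBATCH --time=08:00:00     # 8 hours

# ./my_app is a placeholder for your application binary
./my_app
```

Submit it with `sbatch run_job.sh`.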
!!! Note
    The p03-amd01 server has hyperthreading enabled, therefore `htop` shows 128 cores.
    The p03-amd02 server has hyperthreading disabled, therefore `htop` shows 64 cores.
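If you want to verify the hyperthreading setting yourself on the node you were given, the standard `lscpu` utility reports it (this check is just a suggestion; `Thread(s) per core` will be 2 on p03-amd01 and 1 on p03-amd02):

```console
lscpu | grep 'Thread(s) per core'
```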
## Using AMD MI100 GPUs
The AMD GPUs can be programmed using the ROCm open-source platform.

ROCm and related libraries are installed directly in the system. You can find them here:

```console
/opt/rocm/
```
The actual version can be found here:

```console
[user@p03-amd02.cs]$ cat /opt/rocm/.info/version
5.5.1-74
```
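To check that the GPUs are visible from your allocation, you can use the `rocm-smi` utility shipped with ROCm; it lists the MI100 devices together with their utilization and memory usage (output omitted here):

```console
[user@p03-amd02.cs ~]$ rocm-smi
```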
### Basic HIP Code
The first way to program AMD GPUs is to use HIP.

The basic vector addition code in HIP looks like this. This is a complete code and you can copy and paste it into a file. For this example we use `vector_add.hip.cpp`.
```cpp
#include <cstdio>
#include <hip/hip_runtime.h>

__global__ void add_vectors(float * x, float * y, float alpha, int count)
{
    long long idx = blockIdx.x * blockDim.x + threadIdx.x;

    if(idx < count)
        y[idx] += alpha * x[idx];
}

int main()
{
    // number of elements in the vectors
    long long count = 10;

    // allocation and initialization of data on the host (CPU memory)
    float * h_x = new float[count];
    float * h_y = new float[count];
    for(long long i = 0; i < count; i++)
    {
        h_x[i] = i;
        h_y[i] = 10 * i;
    }

    // print the input data
    printf("X:");
    for(long long i = 0; i < count; i++)
        printf(" %7.2f", h_x[i]);
    printf("\n");
    printf("Y:");
    for(long long i = 0; i < count; i++)
        printf(" %7.2f", h_y[i]);
    printf("\n");

    // allocation of memory on the GPU device
    float * d_x;
    float * d_y;
    hipMalloc(&d_x, count * sizeof(float));
    hipMalloc(&d_y, count * sizeof(float));

    // copy the data from host memory to the device
    hipMemcpy(d_x, h_x, count * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(d_y, h_y, count * sizeof(float), hipMemcpyHostToDevice);

    // number of threads per block and number of blocks per grid
    int tpb = 256;
    int bpg = (count - 1) / tpb + 1;

    // launch the kernel on the GPU
    add_vectors<<< bpg, tpb >>>(d_x, d_y, 100, count);
    // alternative way to launch the kernel:
    // hipLaunchKernelGGL(add_vectors, bpg, tpb, 0, 0, d_x, d_y, 100, count);

    // copy the result back to CPU memory
    hipMemcpy(h_y, d_y, count * sizeof(float), hipMemcpyDeviceToHost);

    // print the results
    printf("Y:");
    for(long long i = 0; i < count; i++)
        printf(" %7.2f", h_y[i]);
    printf("\n");

    // free the allocated memory
    hipFree(d_x);
    hipFree(d_y);
    delete[] h_x;
    delete[] h_y;

    return 0;
}
```
To compile the code, we use the `hipcc` compiler. For compiler information, use `hipcc --version`:
```console
[user@p03-amd02.cs ~]$ hipcc --version
HIP version: 5.5.30202-eaf00c0b
AMD clang version 16.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-5.5.1 23194 69ef12a7c3cc5b0ccf820bc007bd87e8b3ac3037)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /opt/rocm-5.5.1/llvm/bin
```
The code is compiled as follows:

```console
hipcc vector_add.hip.cpp -o vector_add.x
```
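On the compute node, `hipcc` typically compiles for the locally detected GPU. If you need to target the MI100 architecture explicitly (its ISA name is gfx908), the standard `--offload-arch` option can be used; this is optional and shown only as a suggestion:

```console
hipcc --offload-arch=gfx908 vector_add.hip.cpp -o vector_add.x
```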
The correct output of the code is:

```console
[user@p03-amd02.cs ~]$ ./vector_add.x
X:    0.00    1.00    2.00    3.00    4.00    5.00    6.00    7.00    8.00    9.00
Y:    0.00   10.00   20.00   30.00   40.00   50.00   60.00   70.00   80.00   90.00
Y:    0.00  110.00  220.00  330.00  440.00  550.00  660.00  770.00  880.00  990.00
```
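The example above omits error checking for brevity. In real code, you will want to check the `hipError_t` values returned by the HIP runtime calls. A minimal sketch of a checking macro; the name `HIP_CHECK` is a common convention of ours here, not part of the HIP API, while `hipGetErrorString` and `hipGetLastError` are real HIP runtime functions:

```cpp
#include <cstdio>
#include <cstdlib>
#include <hip/hip_runtime.h>

// HIP_CHECK is a conventional helper macro, not a ROCm API.
// It aborts with a readable message if a HIP runtime call fails.
#define HIP_CHECK(call)                                                 \
    do {                                                                \
        hipError_t err = (call);                                        \
        if (err != hipSuccess) {                                        \
            fprintf(stderr, "HIP error '%s' at %s:%d\n",                \
                    hipGetErrorString(err), __FILE__, __LINE__);        \
            exit(EXIT_FAILURE);                                         \
        }                                                               \
    } while (0)

int main()
{
    float * d_x;
    HIP_CHECK(hipMalloc(&d_x, 10 * sizeof(float)));

    // kernel launches do not return hipError_t;
    // check them with hipGetLastError() after the launch
    HIP_CHECK(hipGetLastError());

    HIP_CHECK(hipFree(d_x));
    return 0;
}
```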
More details on HIP programming are available in the HIP Programming Guide.
### HIP and ROCm Libraries
The list of official AMD libraries can be found here.

The libraries are installed in the same directory as ROCm:

```console
/opt/rocm/
```

The following libraries are installed: