NVLINK Aware Scheduling #214

@tuttlebr

Description

When deploying some workloads on a Kubernetes cluster where NVLINK is installed intra-node, scheduling should let the user guarantee that their pod lands within a single NVLINK domain. With existing scheduling, on a node with 8 NVIDIA GPUs where each pair is connected over NVLINK, I cannot guarantee that a two-GPU pod lands on an NVLINK-connected pair such as GPU2 and GPU3 when GPU0 is already occupied; the pod may instead be handed GPUs that only reach each other over PCIe.

Example of NVLINK pair topology:

$ nvidia-smi topo -m
        GPU0    GPU1    GPU2    GPU3    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV12    SYS     SYS     SYS     0-23    0               N/A
GPU1    NV12     X      SYS     SYS     SYS     24-47   1               N/A
GPU2    SYS     SYS      X      NV12    SYS     48-71   2               N/A
GPU3    SYS     SYS     NV12     X      SYS     72-95   3               N/A
NIC0    SYS     SYS     SYS     SYS      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
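
To make this pairing visible to scheduling logic (for example, from a device plugin or DRA driver), the peer map can be read programmatically via NVML. Below is a minimal Python sketch assuming the nvidia-ml-py (pynvml) bindings are installed; nvlink_peers is an illustrative helper name, not an existing API.

import pynvml

def nvlink_peers():
    """Map each GPU index to the set of GPU indices it reaches over NVLINK."""
    pynvml.nvmlInit()
    try:
        count = pynvml.nvmlDeviceGetCount()
        # Index GPUs by PCI bus ID so NVLINK remote endpoints can be resolved.
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(count)]
        bus_to_index = {pynvml.nvmlDeviceGetPciInfo(h).busId: i
                        for i, h in enumerate(handles)}
        peers = {i: set() for i in range(count)}
        for i, h in enumerate(handles):
            for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
                try:
                    if pynvml.nvmlDeviceGetNvLinkState(h, link) != pynvml.NVML_FEATURE_ENABLED:
                        continue
                    remote = pynvml.nvmlDeviceGetNvLinkRemotePciInfo(h, link)
                except pynvml.NVMLError:
                    continue  # link absent or unsupported on this GPU
                j = bus_to_index.get(remote.busId)
                if j is not None and j != i:  # skip NVSwitch/non-GPU endpoints
                    peers[i].add(j)
        return peers
    finally:
        pynvml.nvmlShutdown()

On the 4-GPU node shown above this would yield {0: {1}, 1: {0}, 2: {3}, 3: {2}}.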

Currently, the GPU Operator uses a best-effort policy, but this does not guarantee NVLINK pairs. With DRA, there is also some prior work that has been tested with MIG setups.
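
For contrast, a guaranteed policy could refuse to schedule a two-GPU request rather than fall back to a SYS-connected pair. A hypothetical sketch of such a picker (pick_nvlink_pair is illustrative only, not GPU Operator or DRA behavior):

def pick_nvlink_pair(peers, busy):
    """Return a free NVLINK-connected GPU pair, or None if no pair is fully free.

    peers: {gpu_index: set of NVLINK peer indices}, e.g. from nvlink_peers above
    busy:  GPU indices already allocated to other pods
    """
    for gpu in sorted(peers):
        if gpu in busy:
            continue
        for peer in sorted(peers[gpu]):
            if peer not in busy and peer > gpu:  # avoid reporting (a, b) and (b, a)
                return (gpu, peer)
    return None  # a guaranteed policy fails scheduling here instead of falling back

# With the topology above and GPU0 occupied, best effort might hand out
# GPU1 + GPU2 (SYS only); this policy returns the NVLINK pair (2, 3).
print(pick_nvlink_pair({0: {1}, 1: {0}, 2: {3}, 3: {2}}, busy={0}))  # -> (2, 3)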

Labels: feature (issue/PR that proposes a new feature or functionality)