Xingxin on Bug

uv sync: Different CUDA PyTorch Builds and Torch-Dependent Libraries

December 21, 2025
4 min read
Question

Have you ever struggled to

  • manage different CUDA builds of PyTorch in one project? e.g. torch 2.8.0+cu128 vs torch 2.6.0+cu124
  • add a torch-dependent library as a dependency? e.g. curobo or pytorch3d

This blog post answers both. It is an opinionated guide to using uv for PyTorch-related projects.

Remark

I strongly prefer uv over conda for managing complex research environments due to its speed and transparent dependency handling.

Scenario 1: CUDA 12.4 on machine A, CUDA 13.0 on machine B

It is common to face a hardware mismatch when working on a project across machines with different NVIDIA GPUs.

For example, my workflow often spans two distinct environments:

  1. HPC Cluster: Limited to CUDA 12.4
  2. My workstation: Equipped with an NVIDIA RTX 5090, which requires CUDA 12.8 or higher.

Since these environments cannot share the same CUDA/PyTorch binary, I use optional-dependencies together with uv's conflicts setting to define mutually exclusive PyTorch variants.

The relevant settings in pyproject.toml are shown below.

dependencies = [
]
 
[project.optional-dependencies]
cu124 = [
  "torch>=2.6.0",
  "torchvision>=0.21.0"
]
cu130 = [
  "torch>=2.6.0",
  "torchvision>=0.21.0"
]
 
[tool.uv]
conflicts = [
  [
    { extra = "cu124" },
    { extra = "cu130" },
  ],
]
 
[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
 
[[tool.uv.index]]
name = "pytorch-cu130"
url = "https://download.pytorch.org/whl/cu130"
explicit = true
 
[tool.uv.sources]
torch = [
  { index = "pytorch-cu124", extra = "cu124", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
  { index = "pytorch-cu130", extra = "cu130", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
]
torchvision = [
  { index = "pytorch-cu124", extra = "cu124", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
  { index = "pytorch-cu130", extra = "cu130", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
]

With this setup, I can pair each machine's CUDA version with a matching PyTorch build in the same project.


📌 On the HPC cluster

uv sync --extra cu124
python do_something.py

or

uv run --extra cu124 do_something.py

📌 On my workstation

uv sync --extra cu130
python do_something.py

or

uv run --extra cu130 do_something.py
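To confirm which build actually got installed, you can ask torch itself. A quick check (assuming the sync above succeeded; swap in the extra for your machine):

```shell
# Print the CUDA toolkit version torch was built against
uv run --extra cu130 python -c "import torch; print(torch.version.cuda)"
# Verify the GPU is actually visible to torch
uv run --extra cu130 python -c "import torch; print(torch.cuda.is_available())"
```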

Scenario 2: PyTorch3D doesn’t provide a CUDA 12.8 wheel

Some libraries lack pre-built wheels for the latest CUDA or PyTorch versions. PyTorch3D, for example, often lags behind newer environments (such as CUDA 12.8).

Building PyTorch3D, or other torch-dependent libraries like cuRobo (CUDA Accelerated Robot Library), is tricky because the build itself requires torch and nvcc. By default, uv builds packages in isolation, which fails when the build backend cannot import torch. I solve this with uv's no-build-isolation-package setting.

Example

In its setup.py, PyTorch3D requires torch and nvcc at build time.

dependencies = [
    "pytorch3d"
]
 
[tool.uv]
conflicts = [
  [
    { extra = "cu124" },
    { extra = "cu130" },
  ],
]
no-build-isolation-package = ["pytorch3d"]
 
[tool.uv.sources]
pytorch3d = { git = "https://github.com/facebookresearch/pytorch3d.git" }

With this setup, uv compiles PyTorch3D directly against the CUDA and torch versions active in your environment, ensuring compatibility even without a pre-built wheel.
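One wrinkle: since pytorch3d needs torch importable while it is being built, torch must land in the environment before pytorch3d's build starts. A two-pass sync, a pattern from uv's build-isolation documentation (the extra name here assumes the cu124 variant from Scenario 1), can bootstrap this:

```shell
# First pass: install everything except pytorch3d, so torch and
# the rest of the environment are in place first.
uv sync --extra cu124 --no-install-package pytorch3d
# Second pass: build pytorch3d against the torch now present.
uv sync --extra cu124
```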