For NVIDIA GPUs, we currently support Ampere, Ada Lovelace, and (new in 25.2!) Hopper architectures. I always have to refer back to this chart to see how those map to CUDA compute capabilities, but that's roughly sm_80 through sm_90a (Ampere is sm_80/sm_86, Ada Lovelace is sm_89, and Hopper is sm_90/sm_90a).
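In case it helps anyone checking where their card falls in that range, here's a rough sketch that just shells out to nvidia-smi from Python. It's not MAX-specific, the compute_cap query field assumes a reasonably recent driver, and the range check simply mirrors the sm_80 to sm_90a span above:

```python
import subprocess

# Ask nvidia-smi for each GPU's name and compute capability.
# (The compute_cap query field requires a fairly recent driver.)
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,compute_cap", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.strip().splitlines():
    name, cap = (field.strip() for field in line.split(","))
    major, minor = (int(part) for part in cap.split("."))
    sm = f"sm_{major}{minor}"
    # MAX 25.2 covers roughly sm_80 through sm_90a (Ampere, Ada Lovelace, Hopper).
    supported = (8, 0) <= (major, minor) <= (9, 0)
    print(f"{name}: {sm} -> {'supported' if supported else 'not yet supported'}")
```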
We wanted to provide solid support for the NVIDIA GPUs most heavily used in AI applications before expanding out from there. We're a relatively small team, so we have to pick where to focus our efforts, and we want to thoroughly test each generation we support. Additionally, the Pascal and Volta architectures appear to be reaching end-of-life for support from NVIDIA.
As we add support for new compute capabilities to MAX, we'll be sure to announce it in the nightly changelog and can update here. Thanks for asking about it!