Hacker News

I was half hoping this meant running CUDA code on AMD GPUs. Thanks for clarifying.


I know AMD has a whole bunch of (related?) projects for GPU compute, but man - if they could just provide an interop layer that Just Works they'd get immediate access to so much more market share.


Eh well, it is very close to just working. From "Training LLMs with AMD MI250 GPUs and MosaicML":

> It all just works. No code changes were needed.

https://www.mosaicml.com/blog/amd-mi250


“Just works” in this context means executing the compiled CUDA binaries or PTX bytecode without recompiling. Nobody is ever going to adopt ROCm if it requires distributing software as source and recompiling.

To make it even more insulting, simply installing ROCm is itself a massive burden, even on an ostensibly supported card (as geohot discovered). And even “it works out of the box if you distribute source and compile it locally” ignores the whole massive “draw the rest of the owl” stage of getting ROCm installed and building properly in your environment.


> “Just works” in this context means executing the compiled CUDA binaries or PTX bytecode without recompiling. Nobody is ever going to adopt ROCm if it requires distributing software as source and recompiling.

Even a source-compatible layer that let you just recompile CUDA code for an AMD GPU would be a huge improvement. That alone would eliminate the CUDA lock-in.


Don't forget AMD doesn't seem to even care about ROCm themselves. Six months in, and RDNA3 cards still don't support it. Can you imagine if Nvidia launched RTX 40-series cards with no DLSS even though the 30-series already had it, and then six months later started boasting about how DLSS support was "coming this fall"?


ROCm is for CDNA not RDNA. It has limited, best-effort RDNA support for a few cards.


I've been running PyTorch on my Radeon 7900 XT using ROCm. Is that not supposed to work?


No, it actually isn't supposed to work: it's not officially supported. https://sep5.readthedocs.io/en/latest/Installation_Guide/Ins...


The hardware that is officially supported is a subset of the hardware that works. You are correct that the RX 7900 XT is not officially supported, but I must point out that you are linking to a fork of the documentation from 2019. This is the official ROCm documentation: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h...


Fascinating. And yet.


Having to use a special kernel for ROCm is a real pain; I can't just use it like I can with Mesa.

I have enough issues using graphics already so I'll stick with Mesa.


That's HIP / ROCm: https://rocm.docs.amd.com/projects/HIP/en/latest/index.html

But it currently only runs on CDNA boards and enterprise-y Linux distros (Ubuntu LTS, CentOS, etc.).


It is coming, from what I can tell.


It's been coming for years now. It will probably be years before it really is here.


Then they too could call themselves an AI company!


I've been hoping for it for so long - I wonder if there's enough interest that someone could do a GoFundMe to hire at least one full time dev lol.


Shameless plug: https://www.osti.gov/servlets/purl/1892137

TL;DR: If you provide even more functions through the overloaded headers, including "hidden" ones like `__cudaPushCallConfiguration`, you can use LLVM/Clang as a CUDA compiler and target AMD GPUs, the host, and soon GPUs from two other manufacturers.


This is really amazing work! Is it still ongoing/funded?


Yes, though with caveats. The driver and parts of the extended API we used to lower CUDA calls are in upstream LLVM. The wrapper headers are not. We will continue the process of getting it all to work in upstream/vanilla LLVM soon though. Help is always appreciated.

FWIW, we have some alternative ideas on how to get out of the vendor trap, as well as some existing prototypes to deal with things like CUBLAS and Thrust. Feel free to reach out, or just keep an eye out.


Vulkan doesn't exactly work great on AMD either. I'm in the process of returning a 7900XTX right now because of AMD's busted Vulkan drivers.



