C++ interop support?

Are there any plans for C++ interop/FFI along the lines of pybind11 or nanobind, but targeting Mojo interop instead of Python?

We have a set of Python bindings for our C++ simulation library (AMReX-Codes/pyamrex: GPU-Enabled, Zero-Copy AMReX Python Bindings including AI/ML), but it looks like we currently need to either use the C FFI interface or call the C++ library via the Python bindings. Is there a better option I’m overlooking?

We currently have pybind11 support plus DLPack support ([WIP] Implement DLPack by ax3l, Pull Request #454 in AMReX-Codes/pyamrex) to share multidimensional arrays. The target application is writing GPU kernels over these arrays in Mojo.

Thanks,
Ben

You’re already forced to go through a C-ish API to get the data onto a GPU, even in Mojo, so that provides a logical point to work from. You should be able to call Mojo kernels from C++ through this mechanism at roughly the same performance as calling CUDA kernels from C++, provided you keep Mojo entirely on the GPU. If you want Mojo to come over to the CPU side, then there are problems, since C++ is not a simple language to integrate with. At present, the best path forward is likely to wait for ClangIR and then use Mojo’s ability to interface with arbitrary MLIR to handle the interop layer.

C++ is one heck of a language to try to do interop with, and bidirectional interop is likely going to take a long time. Carbon (the language) is trying, but it’s causing them a lot of headaches.

Ok, maybe we just have to wait then.

In C++/CUDA, we don’t actually have to go through a C-ish API for GPU kernels, because we do lambda captures of wrapper classes whose overloaded operator[] accesses device buffers directly. It looks like that is not possible in Mojo today.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.