I asked @lesoup-mxd to move some Discord discussion over here.
I think their stabs at kernel development, as well as potential user confusion about “Why can’t I open a file in a MAX kernel?” and similar issues, are sufficiently motivating for a discussion about how Mojo is going to classify different targets. The clearest example of this differentiation is MAX accelerators.
If you’re on the host CPU, you can do whatever you want, even if it’s a bit ill-advised to open a TCP socket in the middle of a matmul.
If you’re on an Nvidia datacenter (DC) GPU with an Nvidia NIC, you have RDMA, file access via GDS, NCCL, NVSHMEM, printf, and even Ethernet networking. That’s a pretty substantial set of IO capabilities.
Take away the Nvidia NIC and the DC GPU loses a lot of capabilities.
Consumer Nvidia GPUs get a restricted form of GDS and printf, and most other things are locked down.
NPUs like the Qualcomm Hexagon NPU are essentially general purpose processors with big vector units. They don’t really have access to the OS but can run more or less any freestanding C code you want.
Then we have fixed-function hardware, the “Hand me 2 matrices and I’ll multiply them” NPUs. These are unlikely to ever run MAX, but they might provide a useful target for offloading particular parts of the stdlib, as would various cryptographic accelerators.
Rust has the idea of separate libraries:
core
: Everything that should be able to function on a Turing machine. This is where core language concepts like “What is a `u8`?” live.
alloc
: Stuff that needs a global, general purpose allocator to be present.
std
: Things that could be reasonably assumed to require an operating system of some sort.
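
For anyone who hasn’t run into that split in practice, here’s a rough sketch of my own showing how a single Rust crate can straddle all three. The `std` feature name is just the common community convention, nothing official:

```rust
// Rough sketch of the core/alloc/std split inside one crate.
// Opt out of std unless the "std" feature is enabled.
#![cfg_attr(not(feature = "std"), no_std)]

// `core` is always available: primitive types, Option/Result, traits, fmt, etc.
use core::fmt::Write;

// `alloc` has to be pulled in explicitly, and only works if whoever links
// the final binary supplies a global allocator.
extern crate alloc;
use alloc::string::String;

/// Works on anything with an allocator, OS or not.
pub fn join_lines(lines: &[&str]) -> String {
    let mut out = String::new();
    for line in lines {
        // `writeln!` comes from core; `String` comes from alloc.
        let _ = writeln!(out, "{line}");
    }
    out
}

/// Anything that genuinely needs an OS (files, sockets, threads) sits
/// behind `std`, so it only exists on targets that can provide it.
#[cfg(feature = "std")]
pub fn dump_to_file(path: &str, lines: &[&str]) -> std::io::Result<()> {
    std::fs::write(path, join_lines(lines))
}
```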
However, this approach has caused issues for Rust, and there has been discussion around making things more capability-based, partly to deal with OS differences and partly to let Rust extend better into embedded. As many of you know, I am strongly in favor of capability-based abstractions, but I want to have some discussion around how others think this should work.
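To make the capability-based idea a bit more concrete, here’s a rough Rust-flavored sketch of what I mean. Every name in it (`FileRead`, `HostFs`, `load_weights`) is made up for illustration and isn’t a proposal for actual Mojo/MAX APIs; the point is just that code which needs IO asks for it explicitly, so targets that can’t provide it never hand the capability over:

```rust
/// Capability for reading whole files. A host implementation can wrap the OS;
/// a GDS-backed one could exist on a DC GPU; an NPU target simply has no
/// type implementing it. (Illustrative names only.)
pub trait FileRead {
    type Error;
    fn read_all(&self, path: &str) -> Result<Vec<u8>, Self::Error>;
}

/// A kernel helper that needs file access has to say so in its signature.
pub fn load_weights<F: FileRead>(fs: &F, path: &str) -> Result<Vec<u8>, F::Error> {
    fs.read_all(path)
}

/// Host-side implementation backed by the OS.
pub struct HostFs;

impl FileRead for HostFs {
    type Error = std::io::Error;
    fn read_all(&self, path: &str) -> Result<Vec<u8>, Self::Error> {
        std::fs::read(path)
    }
}

fn main() -> Result<(), std::io::Error> {
    // On the host we can construct the capability; on a target without file
    // IO there is nothing to pass in, so code like this doesn't typecheck.
    // "model.bin" is just a placeholder path.
    let weights = load_weights(&HostFs, "model.bin")?;
    println!("loaded {} bytes", weights.len());
    Ok(())
}
```

The nice part is that the target classification falls out of which capabilities exist: the host constructs them from the OS, a DC GPU might back some of them with GDS or NVSHMEM, and a Hexagon-style NPU just has nothing to construct, so the code that needs them never exists for that target.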
cc @joe