Hey all, I’m a Swift engineer who’s been following Modular, Mojo, and MAX from a distance (because of Chris Lattner, of course), and I want to upgrade my skills to learn AI and Mojo and move beyond Apple platforms development.
I’m curious: will MAX (which, as I understand it, is something like CUDA but open?) run on cerebras.ai hardware? They seem to have compelling performance advantages over traditional hardware.
At the same time, they are focused on inference. Is that what MAX is focused on, or is MAX for training models as well?
In addition, what about RISC-V hardware from places like Tenstorrent or SiFive?
I think the only reasonable answer here right now is "maybe" and "not anytime in the directly foreseeable future".
But it does raise the question, which I personally find interesting: what does it take to support a new platform in practice? How would one go about doing it oneself? I don’t mean getting SOTA performance on that platform (that’s step 2), but just getting it running at all. For example, I’d love to see Apple GPU and ANE support and would be very interested to understand where to begin.
Support for new hardware families is something MAX is designed for, but it’s still a significant investment for us. We’re working on completing our support for GPUs, which just started with the 24.6 release a couple of weeks ago and will converge rapidly in early 2025. We’ll have to carefully consider which hardware to branch out to after that, which is a complicated equation involving both business and technical concerns. We haven’t made any specific announcements yet.