A couple of questions:
- Julia’s ecosystem has offered a mature GPU programming model for NVIDIA and AMD hardware for several years. Other than “Julia is not Python”, what’s your take on the technical reasons why Julia hasn’t reached the level of adoption in the AI community needed to be considered a viable alternative to CUDA?
- Many of the AI ASIC vendors, like Cerebras, SambaNova Systems, AWS (Trainium), and Groq, have built out their own AI graph compilers and low-level kernel programming interfaces. Essentially, they have taken a page out of NVIDIA’s playbook and are building their own “CUDA” specifically for their hardware. What’s your take on how Modular’s vision fits into this growing fragmentation of the accelerated computing software stack?