That’s exciting to hear; there’s definitely a lot that can be explored in this area.

Before I offer more concrete suggestions, is there a general subject area your group wants to focus on: generative AI, traditional ML, physics simulation, data processing, etc.? In my experience, projects work best when they align with your interests or those of your department.

We’re still in the early stages of rolling out our GPU programming documentation and resources, with much more to come soon. This will be an active area over the next few months, so there are plenty of opportunities to explore new applications of MAX and Mojo to high-performance computation on GPUs.

As an introductory suggestion: a massive amount of research has been published on algorithms implemented in CUDA across many domains. You could survey some of the highest-profile recent publications and see whether one of those areas would be interesting to translate from the original CUDA implementations to MAX. MAX already covers a lot of ground in traditional LLMs and multimodal image + text models, but there’s plenty outside those kinds of models still to be built: diffusion language models, Mamba-style architectures, and so on.