MAX AI Kernels are now open for contributions 🎉

As of today, we’re officially accepting community contributions to the MAX AI Kernels!

These kernels are a core part of Mojo and MAX, powering high-performance CPU and GPU operations across the stack. If you’re interested in adding new kernels, fixing bugs, or helping shape the project through proposals, we’d love your help.

We welcome contributions to:

  • New kernels: BMM, MLA, MOE, GEMV, NMS, grouped matmuls, 2D convolutions, and more
  • Support for new hardware platforms (Blackwell, Hopper, MI3xx, and others)
  • Bug reports, performance improvements, and documentation updates

To get started, check out the contributing guide.

Have a big idea? If your change may affect core performance or architecture, please start with a proposal. The process is designed to make it easy to gather early feedback and align with project goals.

Thanks for being part of the community! Let’s build the future of AI kernels together.

— The Modular Team


To enable a better build experience for the Mojo standard library and the kernel libraries, we’re moving toward using Bazel in the modular GitHub repository. We’ve provided detailed instructions for using this new Bazel build system.

Building via Bazel is how you can test new or enhanced kernels in your local checkout. First, build your own custom versions of the standard library and the kernel libraries with

./bazelw build //mojo/stdlib/stdlib
./bazelw build //max/kernels/layout:all
...

You can use your local version with

export MODULAR_MOJO_MAX_IMPORT_PATH=[...]/modular/bazel-bin/
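Putting the steps together, a typical local iteration loop might look like the sketch below. The checkout path and the final `./bazelw test` target are assumptions for illustration, not from the post; only the two build targets and the environment variable come from the instructions above.

```shell
# Hypothetical checkout location -- adjust to your own clone of the modular repo.
cd ~/src/modular

# Build the standard library and a kernel library (targets from the post above).
./bazelw build //mojo/stdlib/stdlib
./bazelw build //max/kernels/layout:all

# Point Mojo at the freshly built artifacts instead of the released packages.
export MODULAR_MOJO_MAX_IMPORT_PATH="$PWD/bazel-bin"

# Assumed target pattern: run the kernel test suite to validate your changes.
./bazelw test //max/kernels/...
```

Because `MODULAR_MOJO_MAX_IMPORT_PATH` is an environment variable, it only affects the current shell session; you would re-export it (or add it to your shell profile) for each new terminal.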

Why Bazel? This is what we use internally at Modular, and we’ve found that it scales better to the complexity of building the various Mojo libraries across a variety of platforms.
