The difference between the RMO dialect and the MO dialect

  1. What are the functional differences between the RMO dialect and the MO dialect?
  2. By the way, in MAX, what language is used for MLIR-related code: C++, Python, or Mojo?
  3. What is the general process of MLIR lowering in MAX?

Great questions!

  1. What are the functional differences between the RMO dialect and the MO dialect?

MO is an optimization dialect: it has RISC-like behavior (a limited number of operations) and very strict shape semantics. This allows the graph compiler to optimize MO effectively, but it makes MO difficult for humans to reason about, and the strict shape semantics make it very terse.

RMO (Relaxed MO) sits on top of MO and is designed to be easier for humans to use. RMO ops lower to compositions of MO operators. They automatically convert shape constraints into symbolic runtime checks, which are typically simplified away by shape inference. We can also lower RMO to custom operators as the ultimate escape hatch.
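To make the contrast concrete, here is a purely conceptual sketch in plain Python (not MAX's actual dialects or APIs; `Dim`, `strict_add`, and `relaxed_add` are hypothetical names): an MO-style op insists shapes line up statically, while an RMO-style op emits symbolic runtime checks that shape inference can usually prove away.

```python
# Conceptual sketch only: these helpers are hypothetical, not MAX/MO/RMO APIs.
from dataclasses import dataclass


@dataclass(frozen=True)
class Dim:
    """A dimension that is either a known int or a symbolic name like 'batch'."""
    value: int | str

    @property
    def is_static(self) -> bool:
        return isinstance(self.value, int)


def strict_add(lhs_shape: list[Dim], rhs_shape: list[Dim]) -> list[Dim]:
    """MO-style: shapes must be statically known and identical, or the op is rejected."""
    assert all(d.is_static for d in lhs_shape + rhs_shape), "static shapes required"
    assert lhs_shape == rhs_shape, "shapes must match exactly at compile time"
    return lhs_shape


def relaxed_add(lhs_shape: list[Dim], rhs_shape: list[Dim]) -> tuple[list, list[Dim]]:
    """RMO-style: emit symbolic runtime checks instead of rejecting the program.

    Shape inference later simplifies checks like ('batch' == 'batch') to nothing.
    """
    checks = [(l, r) for l, r in zip(lhs_shape, rhs_shape) if l != r]
    return checks, lhs_shape


# A check comparing the same symbol twice is trivially true and disappears;
# a check between two different symbols survives as a runtime assertion.
checks, _ = relaxed_add([Dim("batch"), Dim(8)], [Dim("batch"), Dim(8)])
assert checks == []                        # nothing left to verify at runtime
checks, _ = relaxed_add([Dim("m"), Dim(8)], [Dim("n"), Dim(8)])
assert checks == [(Dim("m"), Dim("n"))]    # becomes a runtime shape check
```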

  2. By the way, in MAX, what language is used for MLIR-related code: C++, Python, or Mojo?

The code that interacts with MLIR is written in C++.

  3. What is the general process of MLIR lowering in MAX?

The lowering in the graph compiler moves through these dialects:

  1. RMO (Relaxed MO)
  2. MO (Modular Operators)
  3. MOGG (Modular Generators)
  4. MGP (Modular Primitives)
  5. MEF (Modular Execution Format, not MLIR)
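For context, here is a rough sketch of what drives that pipeline from the user's side: a graph built with MAX's Python Graph API is handed to the engine, which performs the lowering above internally. Treat the module paths and signatures below as assumptions from memory; they may not match the current MAX release.

```python
# Sketch of driving the pipeline from the Python Graph API.
# NOTE: module paths and signatures here are assumptions and may differ
# from the MAX version you have installed.
from max.dtype import DType
from max.engine import InferenceSession
from max.graph import Graph, TensorType, ops

# Build a tiny graph; compiling it is what walks RMO -> MO -> MOGG -> MGP -> MEF.
input_type = TensorType(DType.float32, [2, 2])
with Graph("add_example", input_types=[input_type, input_type]) as graph:
    x, y = graph.inputs
    graph.output(ops.add(x, y))

# Loading the graph into a session triggers the dialect lowering internally.
session = InferenceSession()
model = session.load(graph)
```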

In general, we’re hoping to share more about the tech stack internals soon, so stay tuned!


Thanks very much for your reply. Regarding MAX's graph compiler, is its kernel generation process more similar to TensorFlow's XLA, or does it generate Mojo code the way PyTorch Inductor generates Triton code?

It is a bit in between the two. In MAX, our kernels are hand-written in Mojo by humans, but the graph compiler does have the ability to interact with kernels and fuse them together. So it does produce new kernels, but it does not go as far as XLA does, for example.
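As a purely conceptual picture of what fusing kernels means (plain Python, not the real Mojo kernels or the compiler's internal representation): two elementwise kernels that would each walk the data once can be combined into a single pass with no intermediate buffer.

```python
# Conceptual illustration of elementwise kernel fusion; not MAX's real kernels.
def mul_kernel(xs: list[float], scale: float) -> list[float]:
    return [x * scale for x in xs]           # one pass over the data

def add_kernel(xs: list[float], bias: float) -> list[float]:
    return [x + bias for x in xs]            # a second pass over the data

def fused_mul_add_kernel(xs: list[float], scale: float, bias: float) -> list[float]:
    # The "new kernel" fusion produces: one pass, no intermediate buffer.
    return [x * scale + bias for x in xs]

data = [1.0, 2.0, 3.0]
assert add_kernel(mul_kernel(data, 2.0), 1.0) == fused_mul_add_kernel(data, 2.0, 1.0)
```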

We like to view the graph compiler as “infrastructure for kernel authors.”
