Mojo and the missing layer in heterogeneous compute

In 2000 I was working on static scheduling for heterogeneous architectures. Since then I’ve spent my time in data center networking and distributed systems.

There’s a lot of energy now around heterogeneous compute. Most of the talk is about making exotic hardware programmable; Mojo and related work fit that story. That work matters. It still leaves a gap: how compute gets found, reserved, coordinated, and paid for across a loose network of different machines.

Programmable flops aren’t the same as economically addressable flops.

In other words: a compute market. DCP (Distributive Compute Platform, from Distributive) is an example: jobs go out, workers compete, execution lands wherever the network can place it—from a browser tab to a rack of GPUs, or a moving Tesla.

What I don’t see yet is a straight line from languages like Mojo into that world.

So I built a small bridge: a Mojo→JavaScript transpiler that can ship work to DCP from the browser. The JS isn’t the thesis; it’s a way to show Mojo-originated for-loops unrolled across a distributed compute economy.
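The core pattern is simple to sketch: a data-parallel loop is sliced into independent work units that a market can bid on and execute anywhere. The snippet below is an illustrative toy, not the real DCP client API or the transpiler’s actual output; all names in it are hypothetical, and the “dispatch” step just runs locally to show the decomposition.

```javascript
// Sketch: unrolling a for-loop into independent work units, the shape a
// compute market needs. Illustrative only — not the dcp-client API.

// Split the iteration space [0, n) into contiguous slices, one per unit.
function sliceRange(n, sliceSize) {
  const slices = [];
  for (let start = 0; start < n; start += sliceSize) {
    slices.push([start, Math.min(start + sliceSize, n)]);
  }
  return slices;
}

// A "worker" evaluates the loop body over its slice.
function runSlice([start, end], body) {
  const out = [];
  for (let i = start; i < end; i++) out.push(body(i));
  return out;
}

// Dispatcher: in a real market the slices would be priced and executed
// remotely; here we run them in-process and stitch results back in order.
function distributedFor(n, sliceSize, body) {
  return sliceRange(n, sliceSize).flatMap(s => runSlice(s, body));
}

const squares = distributedFor(10, 4, i => i * i);
console.log(squares); // same result as a plain sequential loop
```

Because each slice touches only its own indices, the dispatcher is free to place slices on any worker and reassemble results by slice order, which is what makes the loop economically addressable rather than just programmable.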

Prototype: https://exergy-connect.github.io/mojo-js/web/dcp.html

Closing that gap for real isn’t a weekend hack. It sits between the language/runtime stack and the coordination and economics layer, and it probably needs serious alignment—e.g. between Modular-style runtimes and systems like DCP.

Without something in that slot, heterogeneous compute stays fenced in: either one runtime, or one facility.

With it, the fence is optional.

I’m interested in how people building Mojo and nearby stacks picture that middle layer—as infrastructure, as protocol, or as something else entirely.

I think that is not even remotely what Mojo and Modular require or see as heterogeneity. I would say it is a non-starter. Just too haphazard.

Mojo is much more like how LLVM transformed the embedded space by opening up programmability, freeing hardware vendors and OEMs to simply target LLVM without worrying about the fragmented landscape of bespoke compilers and assemblers.
Now Mojo is doing the same via MLIR (and LLVM). I think the next exciting thing for Mojo could be becoming a first-class citizen on photonic chips, which also target LLVM IR and MLIR.

Something that resembles this DCP idea, but is much older, better researched, and has clear standards, is grid computing. It has proven particularly useful in bioinformatics. Mojo and MAX could end up being useful in that space, with a robust GSI (Grid Security Infrastructure) in place, VOs (virtual organizations of trusted research groups and hospitals), network segmentation, etc.

But otherwise, the enterprise-grade focus, standards, and tooling that Modular and Mojo peg themselves to are the opposite of this kind of random, bits-and-pieces compute without accountability, auditability, tracing, performance metrics, or security policing. Mojo is about architectural orchestration; DCP is opportunistic scavenging.

I think you’re mapping this to opportunistic compute, which isn’t the point I’m making. Systems like BOINC already demonstrate that heterogeneous, globally distributed compute can be organized, verified, and used for real workloads at scale.

The question I’m raising is different: what’s the path from a language/runtime like Mojo into that kind of coordination layer—whether it’s BOINC-style grids, markets like DCP, or something else entirely?


Modular has previously offered Mammoth along the same lines. In other words, not just heterogeneous compute, but coordinated compute for lowering TCO. It was still called Mammoth as recently as 25.2, though the link to Mammoth on the main site has gone MIA. The video about Mammoth is from 8 months ago, so it’s an area they are invested in.

Remember that Modular offers managed cloud services, so you may not be seeing many external signs of coordinated heterogeneous compute because it’s part of their enterprise product offering.

Tim Davis, co-founder, had a video that I can’t seem to find on YT about how Modular was used in a compute marketplace to help coordinate workloads with variable pricing.
