High performance, fixed size 1D and 2D arrays on a CPU

I’m just kicking Mojo’s tires at the moment. What are the data types for high-performance 1D and 2D arrays on a CPU whose size is fixed at runtime? I found NuMojo (GitHub - Mojo-Numerics-and-Algorithms-group/NuMojo: “NuMojo is a library for numerical computing in Mojo 🔥 similar to numpy in Python.”) and the somewhat complex tensor_builder in the Modular docs. Is there a performance difference between these two on a CPU? Any other options I missed?

You should use Tensors, since they integrate much better with the rest of the Mojo ecosystem. “Fixed size at runtime” just means there is no separate capacity variable.

High performance is a matter of perspective. Since Mojo is a systems language, you can make a normal List perform pretty close to what NumPy can do with a bit of effort. For the last 1% the container choice might matter, but instead of chasing that last 1% of CPU performance it might be worthwhile to investigate GPUs instead, where Mojo can run away with a performance win compared to most other options.


Do you mean the LayoutTensor type? I’ve seen notices that Tensor is being deprecated. Are there any issues with using the seemingly more user-friendly NuMojo library?

I’m a Mojo noob @pauljurczak. NuMojo seems like a good option, but there’s a caveat. When you interop between Python’s numpy.ndarray and NuMojo’s NDArray, you ideally want zero‑copy access so both sides share the same memory buffer instead of doing expensive data copies. NuMojo needs an NDArrayView type (a lightweight view into an existing buffer) but that isn’t implemented yet.

This is because Mojo itself currently lacks support for Python’s buffer protocol.
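For readers unfamiliar with what the buffer protocol buys you, here is a minimal Python sketch (plain numpy, nothing Mojo- or NuMojo-specific) of the kind of zero-copy sharing an NDArrayView would enable: two objects reading and writing the same memory buffer with no data copied.

```python
import numpy as np

# The buffer protocol lets two objects share one block of memory.
# Here a bytearray and an ndarray view both point at the same bytes.
buf = bytearray(4 * 8)                       # raw storage for four float64s
view = np.frombuffer(buf, dtype=np.float64)  # zero-copy view, no data copied

view[:] = [1.0, 2.0, 3.0, 4.0]               # write through the ndarray...
assert bytes(buf[:8]) == np.float64(1.0).tobytes()  # ...the bytearray sees it

# np.frombuffer never copies, so edits to the bytearray are visible too:
buf[0:8] = np.float64(9.0).tobytes()
assert view[0] == 9.0
```

If Mojo gains buffer-protocol support, an NDArrayView could wrap an incoming numpy.ndarray’s buffer the same way instead of copying the data.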


Yes, I mean LayoutTensor. NuMojo isn’t really integrated with the rest of the Mojo ecosystem. LayoutTensor is the core datatype for linear algebra, and you have to use it if you want to use MAX.