Releasing the Nabla Python API

Switching gears a bit: I’ve been heads-down rebuilding Nabla in Python. This new pip-installable library uses NumPy for eager execution and MAX for CPU+GPU acceleration.

Nabla used to be a Mojo-only project, but since the MAX-Python API is currently much more stable than the MAX-Mojo API, I decided to focus on Python for now, while keeping the Nabla-Mojo API as an optional, experimental submodule of the main repo. :fire::snake:

This new library is actually much closer to JAX than anything I’ve built so far, both in terms of features and speed.

Tutorials and API: https://www.nablaml.com
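
For readers who haven’t used JAX: the snippet below sketches the function-transformation style that comparison refers to, written in plain JAX rather than Nabla (don’t take Nabla’s exact names from this sketch; the tutorials linked above show the actual API).

```python
# Plain JAX, for illustration only (not Nabla's API).
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Mean squared error of a linear model.
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

# Composable transformations: differentiate w.r.t. the first argument, then JIT-compile.
grad_loss = jax.jit(jax.grad(loss))

w = jnp.ones(3)
x = jnp.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = jnp.array([1.0, 2.0])

print(grad_loss(w, x, y))  # gradient of the loss with respect to w
```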


This is super amazing, TilliFe! I think aligning with the Python MAX APIs is a great call - it will allow us all to move faster together, and we can re-engage with Mojo at the framework level once it gets a few more high-level features (e.g., classes).

Your progress here is super :fire::fire::fire::fire:!!


Thank you very much, Sir! Looking forward to what’s ahead of us.

This is very impressive progress for such a short turnaround time! Congratulations! I am eager to try this out when I have a quiet moment.

I noticed on the website that the GPU training example (“MLP Training (GPU) — Nabla”) is crashing, but I’m sure it’s nothing major.

I also wondered where the choice of Python leaves us longer term. Is this just a temporary choice due to Mojo still being under heavy development, or can we expect Python to be the focus more generally for now? I ask because I’m eager to remove Python from our stack, whereas this development seems to be leaning into it more than ever.


Great questions, thank you.

  • Re. GPU crashes: I’m GPU-poor at the moment and only use NVIDIA T4 GPUs (in Colab or in the cloud). However, there are known issues with MAX’s matmul on T4 GPUs (see the issue on GitHub). I haven’t actually tested the GPU examples on other GPUs yet—this is a big TODO!

  • Re. roadmap (Python vs. Mojo): Originally, I was drawn to the Modular ecosystem specifically to build in Mojo. My general goal was, is, and will be to build a great Scientific Computing framework in Mojo, since I believe Mojo is much better suited for proper high-performance, type-safe scientific programming than Python (which is slow, difficult to debug, and has many other limitations).

    My honest perspective: I do see potential for Nabla in the Python ecosystem—it might be faster than competitors, it has deep integration with custom Mojo kernels, and it is much more lightweight (under 7k lines of code, which also gives it educational value). However, I also recognize that this space is already quite saturated, so folks might not see the benefits of transitioning to yet another framework.
    A proper Scientific Computing library in Mojo seems much more valuable in the long run. Therefore, my plan is to return to Mojo as soon as the MAX-Mojo API is under active maintenance again. I’m not sure when that will be, but once it happens, I’m confident that Mojo will be much more Pythonic than it currently is (with proper classes, etc.). This should make the transition from Nabla Python back to Nabla Mojo a no-brainer.
    Since Mojo and Python can now be integrated much more smoothly with each other, the line between Python and Mojo APIs might even blur over time, potentially resulting in one unified framework for Differentiable Programming. In this vision, all the performance-critical components would be written in Mojo, while at the highest API level, users could choose which language to use.
