Today we are releasing a research preview of Nabla - a framework for differentiable programming in Mojo. Nabla aims to bring to Mojo what parts of JAX and PyTorch brought to Python: a high-level API for general program transformations, including vmap, jit, vjp, jvp, and grad.
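For anyone who hasn't used these transformations before, here is a quick sketch of what they look like in JAX, the Python library Nabla takes inspiration from. To be clear, this is plain JAX code, not Nabla's Mojo API:

```python
# Illustration in JAX of the five transformations named above (not Nabla code).
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sum(jnp.sin(x) ** 2)  # scalar-valued function of a vector

x = jnp.arange(3.0)

# grad: gradient of a scalar-valued function
g = jax.grad(f)(x)

# jit: trace and compile the function for faster execution
fast_f = jax.jit(f)

# vmap: vectorize over a batch dimension without writing a loop
batched = jax.vmap(f)(jnp.stack([x, x + 1.0]))

# jvp: forward-mode AD (Jacobian-vector product)
y, tangent = jax.jvp(f, (x,), (jnp.ones_like(x),))

# vjp: reverse-mode AD (vector-Jacobian product)
y, pullback = jax.vjp(f, x)
(cotangent_x,) = pullback(1.0)
```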
Unlike previous attempts (e.g. Endia) that failed by trying to rebuild the entire stack, Nabla is built from the ground up as a thin wrapper around Mojo and MAX, inheriting their performance guarantees. (The Nabla core does NOT include any low-level kernels, for example.) There are still many rough edges and missing features (operator coverage, GPU support, etc.), but the core AD engine has proven effective in initial tests. We hope you like it!
I love this example you have in the README. It's literally the textbook description of a neural net and backprop, in about the minimum number of lines of code in which such a thing could be described.
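For readers who haven't clicked through to the README yet, here's a minimal JAX sketch of the kind of example being praised: a textbook one-hidden-layer net, its loss, and backprop as a single call to grad. This is not the actual README snippet, which expresses the same idea in Mojo with Nabla's API:

```python
# A JAX sketch of the idea, not the Nabla README example itself:
# one hidden layer, mean squared error, and backprop in one grad call.
import jax
import jax.numpy as jnp

def mlp(params, x):
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)   # hidden layer
    return h @ w2 + b2          # output layer

def loss(params, x, y):
    return jnp.mean((mlp(params, x) - y) ** 2)  # mean squared error

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params = (
    jax.random.normal(k1, (2, 8)), jnp.zeros(8),   # layer 1
    jax.random.normal(k2, (8, 1)), jnp.zeros(1),   # layer 2
)
x = jnp.ones((4, 2))
y = jnp.zeros((4, 1))

# Backprop is just a transformation of the loss function.
grads = jax.grad(loss)(params, x, y)
```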