Great foundational work for Mojo, thanks for this ecosystem contribution!
As always, I’ll encourage anyone writing Mojo libraries to add some benchmarks. We’ve had more than a few “accidentally faster than Python’s SOTA” libraries.
Hi Owen, happy to add a benchmark, although this is a simple parsing package that should be sufficiently fast for real-world use cases. I’ll try to add a comparison with the Python package / the Python standard library (3.11+).
Thanks @owenhilyard! Great timing on the benchmark suggestion - v0.5.0 now includes a comprehensive benchmarking system!
We’re running both mojo-toml and Python’s tomllib (stdlib) on the same test cases with full machine specs:
```shell
pixi run benchmark-mojo    # mojo-toml performance
pixi run benchmark-python  # Python baseline
```
Both generate markdown reports in benchmarks/reports/ with system info (CPU, GPU, RAM, Mojo/Python versions).
Current results (Apple M1 Pro):
- Real-world config files (pixi.toml): ~2 ms parse time
- Simple configs: 40K+ parses/sec
- Python’s tomllib is currently 2-10x faster (it’s implemented in optimised C)
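As a quick sanity check on those numbers, the quoted throughput converts to per-parse latency like so (simple arithmetic, not measured data):

```python
# Convert the quoted throughput figure into per-parse latency.
parses_per_sec = 40_000
latency_us = 1_000_000 / parses_per_sec  # microseconds per parse
print(latency_us)  # 25.0 us per parse
```

So even the "simple config" case sits around 25 µs per parse, consistent with the ~2 ms figure for larger real-world files.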
Not “accidentally faster” yet, but very competitive for a pure Mojo implementation. More importantly, it’s fast enough for the use case: config files parse in milliseconds, which is imperceptible for typical config loading. The focus has been on correctness, completeness, and usability rather than raw speed.
As Mojo’s compiler optimisations mature, I’d expect the gap to close, or even for mojo-toml to exceed Python. Details are in PERFORMANCE.md and the benchmark reports.
Would love to hear about any specific performance-sensitive use cases we should optimise for, or specific performance tweaks, as I’m still very new to Mojo!