Recently I began looking for an AI coding assistant for Mojo because I’d found similar tools helpful with Python code (e.g. Sourcery). I decided to try Augment Code due to its tight integration with VS Code, extensive capabilities, and free tier. I’m still on the 14-day trial (600 prompts), and am very, very impressed with it.
I cloned the Modular repo and Python repo locally, then added each as a context reference. My main personal project uses both Mojo and Python code extensively.
I started with simple stuff like creating/updating docstrings for a module or two, then asked it to suggest changes to existing code, which I then accepted, allowing Augment Code to edit my source files interactively via a diff interface. All of this went really well. Not perfect, but I was impressed. Everything up to this point was in ‘Chat’ mode.
In ‘Agent’ mode, I then asked it to add local persistence to an existing OAuth module, encrypting the stored access/refresh tokens and application secrets with the existing RSA/AES crypto modules in my project. This is where it got fun!
Within the VS Code IDE, it created a couple of new Mojo source code files based on my original files, then went to work modifying the new files. It was very interesting watching it ‘think’ through different aspects of the task while developing a solution. It made changes, tried them out, made more changes/corrections, tried them out…iteratively while executing the task. There were some steps I needed to allow, but Augment Code did the work while I watched. It wasn’t perfect though.
It missed some syntax issues, like `str()` -> `String()`, unnecessary `var` keywords on assignment statements, and `inout` versus `mut`. I brought these issues to Augment Code’s attention in subsequent prompts, and it fixed them, plus found other syntax issues that I had not yet discovered and fixed those too. I knew of a potentially tricky problem it might miss, and it did. To its credit, it understood and fixed the problem once I pointed it out. That was cool.
I then asked it to write new tests to verify existing and new functionality. My prompt asked it to follow the design pattern for test files I’d previously written and it did, even using a helper function I created for testing in exactly the correct way. It then ran the tests, and iteratively debugged and changed the source files and/or tests. Again, not perfect. I had to point out issues here and there that it missed, but once known it fixed every one of them. If a fix caused other problems, it found and fixed those too, mostly without my guidance.
Because I am still getting familiar with Augment Code’s capabilities and I was tired (it was late at night), I asked it to review and update the tests for my AES crypto module. My crypto modules wrap a few app-specific functions around existing Python crypto libraries, so my existing tests were very basic. Augment Code noted these were basic tests, then proceeded to create tests with much better coverage. And guess what? It found, then fixed, a serious bug of which I was completely unaware. Essentially, my Python class created random values when the first object was instantiated, then reused those values for all subsequent objects. Yikes!! I was not expecting to find an issue, just playing around really, but I’m very glad I asked for the help.
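To show the class of bug (a hypothetical, simplified sketch; my actual wrapper is structured differently): the random key material was created once, on the first instantiation, and then quietly shared by every object created afterwards.

```python
import os


class BuggyAESWrapper:
    """Hypothetical sketch of the bug pattern; not my actual module."""
    _key = None

    def __init__(self):
        # Random key generated when the FIRST object is created...
        if BuggyAESWrapper._key is None:
            BuggyAESWrapper._key = os.urandom(32)
        # ...and silently reused by every subsequent object.
        self.key = BuggyAESWrapper._key


class FixedAESWrapper:
    """Each instance generates its own fresh random key."""

    def __init__(self):
        self.key = os.urandom(32)


a, b = BuggyAESWrapper(), BuggyAESWrapper()
assert a.key == b.key   # shared: the kind of thing better test coverage catches

c, d = FixedAESWrapper(), FixedAESWrapper()
assert c.key != d.key   # independent per object, as intended
```

A test that simply creates two objects and compares their key material flags this immediately, which is presumably how the expanded coverage surfaced it.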
I thought I’d share this experience while it’s fresh in my mind. From my limited experience with AI coding assistants/agents, Augment Code is an awesome tool for pair programming. It integrates with VS Code and JetBrains IDEs.
Cheers!