Augment Code - AI Assistant w. Superpowers

Recently I began looking for an AI coding assistant for Mojo because I’d found similar tools helpful with Python code (e.g. Sourcery). I decided to try Augment Code due to its tight integration with VS Code, extensive capabilities, and free tier. I’m still on the 14-day trial (600 prompts), and am very, very impressed with it.

I cloned the Modular repo and the Python repo locally, then added each as a context reference. My main personal project uses both Mojo and Python code extensively.

I started with simple stuff like creating/updating docstrings for a module or two, then asked it to suggest changes to existing code, which I then accepted, allowing Augment Code to edit my source files interactively via a diff interface. It all went really well. Not perfect, but I was impressed. Everything up to this point was in ‘Chat’ mode.

In ‘Agent’ mode, I then asked it to add local persistence to an existing OAuth module, encrypting the stored access/refresh tokens and application secrets using the existing RSA/AES crypto modules in my project. This is where it got fun!

Within the VS Code IDE, it created a couple of new Mojo source code files based on my original files, then went to work modifying the new files. It was very interesting watching it ‘think’ through different aspects of the task while developing a solution. It made changes, tried them out, made more changes/corrections, tried them out…iteratively while executing the task. There were some steps I needed to allow, but Augment Code did the work while I watched. It wasn’t perfect though.

It missed some syntax issues, like str() -> String() renames, unnecessary var keywords on assignment statements, and inout versus mut. I brought these issues to Augment Code’s attention in subsequent prompts, and it fixed them, then found other syntax issues I had not yet discovered and fixed those too. I knew of a potentially tricky problem it might miss, and it did. To its credit, it understood and fixed the problem when I pointed it out. That was cool.

I then asked it to write new tests to verify existing and new functionality. My prompt asked it to follow the design pattern for test files I’d previously written and it did, even using a helper function I created for testing in exactly the correct way. It then ran the tests, and iteratively debugged and changed the source files and/or tests. Again, not perfect. I had to point out issues here and there that it missed, but once known it fixed every one of them. If a fix caused other problems, it found and fixed those too, mostly without my guidance.

Because I am still getting familiar with Augment Code’s capabilities, and because I was tired (late at night :crescent_moon:), I asked it to review and update the tests for my AES crypto module. My crypto modules wrap a few app-specific functions around existing Python crypto libraries, so my existing tests were very basic. Augment Code noted this, then created tests with much better coverage. And guess what? It found and then fixed a serious bug of which I was completely unaware: my Python class generated random values when the first object was instantiated, then reused those values for all subsequent objects. Yikes!! I was not expecting to find an issue, just playing around really, but I’m very glad I asked for the help.
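I don’t know the exact shape of the original bug, but one common Python pattern that produces exactly this symptom is a default argument evaluated once at function definition time and then silently shared by every subsequent instance. A minimal sketch (class names are illustrative, not the actual module):

```python
import secrets

class LeakyCipher:
    # BUG: the default is evaluated ONCE, when the def statement runs,
    # so every instance silently reuses the same "random" IV.
    def __init__(self, iv=secrets.token_bytes(16)):
        self.iv = iv

class SafeCipher:
    # Fix: use a sentinel default and generate fresh randomness per instance.
    def __init__(self, iv=None):
        self.iv = iv if iv is not None else secrets.token_bytes(16)

a, b = LeakyCipher(), LeakyCipher()
print(a.iv == b.iv)  # True: the "random" IV is identical across instances

c, d = SafeCipher(), SafeCipher()
print(c.iv == d.iv)  # False: each instance gets fresh randomness
```

This is precisely the kind of bug that basic tests miss and coverage-oriented tests (instantiate twice, compare the values) catch immediately.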

I thought I’d share this experience while it’s fresh in my mind. From my limited experience with AI coding assistants/agents, Augment Code is an awesome tool for pair programming. It integrates with both VS Code and JetBrains IDEs.
Cheers!


Project: mojo_scalar
Here’s a completely unrelated new project I started and finished today. It began with my post yesterday (link), after which I learned more about the open point of ‘SIMD not conforming to equatable and comparable’ [C. Lattner]. As I looked at the SIMD code to understand the problem, I saw that I could probably create a module that would work. More importantly for this thread, I saw an opportunity to use Augment Code to develop a solution from scratch. I used ‘Agent’ mode on automatic, meaning that Augment Code created source files, test files, and documentation without stopping for me to approve critical steps. It also ran tests, made changes to the test and source files, and committed all changes to the GitHub repo automatically, without my intervention. Initially, Augment Code prompted me to manually create the repo on GitHub, which I did. [Human taking direction from AI. :grin:]

One source of complexity/noise was that I asked Augment Code to make this project a git submodule of another project, so there were a bunch of commits related to this, plus subsequent import changes to get everything playing nicely with that other project.

For the curious, here’s a link to the repo containing the result of this journey. GitHub: mojo_scalar

8:17 AM Initial prompt
Referring to file simd.mojo in the modular repo, create a new file in this project named scalar.mojo. In this new file, create a new struct Scalar for all of the DType types defined by alias using alias Scalar = SIMD[_, size=1]. In struct Scalar, instance variable value should be of a SIMD type as defined in file simd.mojo. Create a new trait, trait TSIMD, which includes the traits included in struct SIMD. The new struct Scalar should include trait SIMD plus traits EqualityComparable and Comparable. In Scalar, all methods required by the traits included in TSIMD should call the SIMD methods of the variable value and return the corresponding values as-is.
Operator dunders: For the methods which correspond to traits EqualityComparable and Comparable, create new methods in struct Scalar that call existing methods of the variable value which have the same function name and input arguments, but return SIMD output types. The function signatures of each new method should match the requirements of traits EqualityComparable and Comparable. Within each of these new methods, convert the return value of type SIMD into the return type defined in the trait definition. Remember, SIMD types are vectors of one or more values, while the return type of methods for traits EqualityComparable and Comparable are single value Bool.
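The delegation pattern the prompt asks for can be sketched in plain Python (an analogy only; class and method names here are illustrative, not the actual Mojo code): the wrapper holds a size-1 vector, forwards operations to it, and converts the vector-valued comparison results into a single bool, as EqualityComparable/Comparable-style interfaces require.

```python
class MiniSIMD:
    """Stand-in for SIMD: comparisons return element-wise tuples of bools."""
    def __init__(self, *vals):
        self.vals = tuple(vals)
    def eq(self, other):
        return tuple(a == b for a, b in zip(self.vals, other.vals))
    def lt(self, other):
        return tuple(a < b for a, b in zip(self.vals, other.vals))

class Scalar:
    """Wraps a size-1 vector; delegates, then collapses vector -> bool."""
    def __init__(self, v):
        self.value = MiniSIMD(v)  # size-1 vector
    def __eq__(self, other):
        return all(self.value.eq(other.value))  # vector result -> single bool
    def __lt__(self, other):
        return all(self.value.lt(other.value))

print(Scalar(3) == Scalar(3))  # True
print(Scalar(2) < Scalar(5))   # True
```

The key move is the same one the prompt spells out: call the inner value’s method with the same name and arguments, then convert the vector return type into the single Bool the trait signature demands.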

2:31 PM Final prompt
Remember, test suites be organized into functions which begin with the prefix test_. This is necessary for the mojo test command to find tests in a given file. Evaluate and update for Mojo standard testing conventions. (sic)
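The naming convention in that prompt mirrors test discovery in Python’s runners, where the test_ prefix is likewise what makes a function discoverable (an analogy in Python terms, not the Mojo implementation):

```python
def test_addition():
    # Discovered by convention: the name starts with "test_".
    assert 1 + 1 == 2

def build_fixture():
    # No "test_" prefix: treated as a helper, not collected as a test.
    return [1, 2, 3]

# A minimal discovery pass over this module's names:
discovered = [name for name in sorted(globals()) if name.startswith("test_")]
print(discovered)  # ['test_addition']
```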

🎉 Final Status
✅ Mojo Standard Compliance: Tests work with mojo test command
✅ Automatic Discovery: All 7 test functions found automatically
✅ 100% Pass Rate: All tests passing without errors
✅ Clean Code: No compiler warnings
✅ Better Organization: Logical test grouping
✅ Enhanced Documentation: Updated README and comments
✅ Maintained Compatibility: run_tests.sh still provides value
The test suite now follows Mojo best practices while maintaining the convenience and reliability of the automated test runner script!

Under the hood, Augment Code uses Claude Sonnet 4, Anthropic’s latest and most capable coding model. (link)
Here’s a talk about Self Coding Agents given by Colin Flaherty of Augment Code. Really cool…it wrote and optimized its own code!
Geek out!

Video: This AI Agent Plans, Codes, Tests & Integrates Everything by Parker Rex [2025-05-30]

Remote Agents (limitations)
A relatively new feature from Augment Code. (link)
A couple of days ago I began a new project using a remote agent. It created the project, but I could see that there wasn’t any ‘serious’ code being created. I then diverted to my mojo_scalar project (see above), and just now came back to this one. I decided to ask the remote Augment Code agent what it can do to execute Mojo code, with the result shown below. Good to know!
I will add a follow-up comment asking Augment Code’s AI about the additional capabilities of a local agent compared to a remote agent.

Prompt - Capabilities of a Remote Agent
Can you, an Augment Code remote agent, execute the Mojo compiler to build executables and run Mojo native tests that you create during development?

Answer
No, I cannot execute the Mojo compiler or run Mojo executables directly. As an Augment Code remote agent, I have access to:

:white_check_mark: What I Can Do:

  • Read and write files in the workspace
  • View and edit code using text-based tools
  • Use shell commands for basic file operations (ls, cat, git, etc.)
  • Search and analyze code using regex and text processing
  • Create and modify Mojo source files
  • Commit changes to Git repositories
  • Access web resources and documentation

:cross_mark: What I Cannot Do:

  • Execute the Mojo compiler (mojo build, mojo run)
  • Run compiled Mojo executables or native binaries
  • Execute Mojo tests or test frameworks
  • Install software packages or development tools
  • Access language servers or IDE features
  • Perform runtime debugging or profiling

:wrench: Development Workflow Limitations:

Current Approach:

# What I can do:
git add .
git commit -m "Add new feature"
git push origin main

# What I cannot do:
mojo build src/main.mojo        # ❌ Cannot execute
mojo run tests/test_suite.mojo  # ❌ Cannot execute
./compiled_executable           # ❌ Cannot run

Testing Strategy:

Since I cannot run Mojo tests, I focus on:

  • Static code analysis and syntax checking
  • Comprehensive documentation and examples
  • Following Mojo best practices based on language specifications
  • Creating test files that should work when executed by users
  • Providing clear instructions for users to test the code

:bullseye: Implications for ezThreading Development:

Quality Assurance:

  • Code review through careful analysis and documentation study
  • Syntax validation based on Mojo language specifications
  • Pattern matching with known working Mojo code examples
  • Comprehensive testing files for users to execute

User Responsibility:

Users need to:

# Test the threading library
mojo run tests/test_threading.mojo

# Run examples
mojo run examples/basic_threading.mojo

# Build applications
mojo build my_threaded_app.mojo

Development Process:

  1. Design and implement features based on specifications
  2. Create comprehensive tests and examples
  3. Document expected behavior and usage patterns
  4. Provide troubleshooting guides for common issues
  5. Rely on user feedback for runtime validation

:clipboard: Recommendation:

For the ezThreading library, I recommend:

  • Thorough testing by users in their Mojo environments
  • Feedback collection on any compilation or runtime issues
  • Iterative improvements based on real-world usage
  • Clear documentation for setup and testing procedures

This limitation means the library should be considered “development ready” rather than “production tested” until users can validate it in actual Mojo runtime environments.

Local Agent (differences)

Prompt
For developing Mojo code, what can a local Augment Code agent do in addition to the capabilities of a remote agent?

Answer
A local Augment Code agent would have significantly enhanced capabilities for Mojo development compared to a remote agent. Here’s the comparison:

:house: Local Agent Additional Capabilities:

Compiler and Runtime Access:

# Local agents can execute:
mojo build src/ezthreading/main.mojo
mojo run tests/test_threading.mojo
mojo test tests/                    # Run test suites
mojo format src/                    # Code formatting
mojo doc src/                       # Generate documentation

Development Tools Integration:

  • Language Server Protocol (LSP) access for real-time error checking
  • IDE integration with VS Code, Vim, or other editors
  • Syntax highlighting and code completion
  • Real-time diagnostics and error reporting
  • Refactoring tools and automated code improvements

Package Management:

# Local package operations:
mojo package install some-dependency
mojo package build
mojo package publish
pip install python-dependencies    # For Python interop

Testing and Validation:

  • Execute unit tests and see actual results
  • Performance benchmarking with real metrics
  • Memory profiling and debugging
  • Integration testing with system resources
  • Continuous integration setup and execution

System Integration:

  • Access to system APIs for actual threading implementation
  • Hardware-specific optimizations testing
  • Platform-specific code validation (Windows/Linux/macOS)
  • Performance monitoring with real system metrics

:counterclockwise_arrows_button: Enhanced Development Workflow:

Iterative Development:

# Local agent workflow:
1. Edit code
2. mojo run tests/quick_test.mojo     # Immediate feedback
3. Fix compilation errors
4. mojo test tests/                  # Full test suite
5. Benchmark performance
6. Commit working code

Real-time Feedback:

  • Instant compilation error detection
  • Runtime behavior observation
  • Performance metrics collection
  • Memory usage analysis
  • Threading behavior validation

:bullseye: For ezThreading Development:

What Local Agent Could Validate:

# Actually test thread creation and execution
var thread = Thread(name="TestWorker")
thread.start()  # ✅ Verify this actually creates OS threads
thread.join()   # ✅ Confirm proper synchronization

# Test synchronization primitives
var lock = Lock()
with lock:      # ✅ Validate mutual exclusion works
    shared_data += 1

# Performance benchmarking
var start = now()
run_threaded_workload()
var duration = now() - start  # ✅ Real performance metrics

MLIR/LLVM Integration Testing:

  • Verify MLIR intrinsics actually compile and work
  • Test platform-specific code paths (Windows vs POSIX)
  • Validate performance claims with real benchmarks
  • Debug threading issues with actual OS integration

Quality Assurance:

  • Automated testing in CI/CD pipelines
  • Cross-platform validation on multiple OS
  • Performance regression detection
  • Memory leak detection and profiling

:bar_chart: Development Efficiency Comparison:

| Capability | Remote Agent | Local Agent |
|---|---|---|
| Code Writing | :white_check_mark: Full | :white_check_mark: Full |
| Syntax Checking | :cross_mark: Manual | :white_check_mark: Automatic |
| Compilation | :cross_mark: No | :white_check_mark: Yes |
| Testing | :cross_mark: No | :white_check_mark: Yes |
| Debugging | :cross_mark: No | :white_check_mark: Yes |
| Performance Analysis | :cross_mark: No | :white_check_mark: Yes |
| Real-time Feedback | :cross_mark: Limited | :white_check_mark: Full |
| CI/CD Integration | :cross_mark: No | :white_check_mark: Yes |

:rocket: Recommendation for ezThreading:

Current State (Remote Agent):

  • Solid foundation with comprehensive API design
  • Well-documented implementation approach
  • Ready for local testing and validation

Next Steps (Local Agent):

  1. Compile and test all modules
  2. Validate threading behavior with real OS threads
  3. Benchmark performance against expectations
  4. Fix any runtime issues discovered through testing
  5. Optimize MLIR/LLVM integration based on actual compilation
  6. Create comprehensive test suite with real validation

Production Readiness:

A local agent could transform ezThreading from:

  • “Architecturally Sound” → “Production Ready”
  • “Theoretically Correct” → “Empirically Validated”
  • “Well Designed” → “Performance Optimized”

The local agent would provide the critical validation loop needed to ensure the threading library works correctly in real Mojo environments with actual OS threading primitives.

More videos re Augment Code