I’m new to Mojo and would like to learn the language.
I’m used to working with Python together with Miniconda to create and manage virtual environments in VS Code.
I’m having trouble figuring out how to do the same thing with Mojo and Pixi.
Additionally, is it possible to integrate llms.txt files into VS Code to improve support from Gemini Code Assist?
Is there a beginner-friendly walkthrough on how to set all of this up?
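For context, this is roughly the flow I’ve pieced together from the getting-started guide so far (just a sketch; I’m not sure the channel URL and package name are still current):

```shell
# Create a new Pixi project that can pull packages from the Modular channel
pixi init my-mojo-project -c https://conda.modular.com/max-nightly/ -c conda-forge
cd my-mojo-project

# Add the Modular package (provides the mojo tooling)
pixi add modular

# Enter the project environment (roughly what `conda activate` does)
pixi shell
mojo --version
```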
I’m trying to figure out which of these files I need: llms.txt, llms-full.txt, llms-mojo.txt, or llms-python.txt.
Do we need all four?
Also, could you describe how to use them with Gemini Code Assist? Do I need to download them, copy them to my VS Code workspace, and add them to the Gemini Code Assist context, or can I link them via their URLs?
If I’m using VS Code with Gemini Code Assist, would the chapters titled “Enhance LLM guidance with .cursorrules” and “Use Claude Code” have any relevance to me, or can I skip them?
Or should I just adapt the CLAUDE.md file? Concretely, I mean something like this (a rough shell sketch follows the list):
- Rename the “CLAUDE.md” file to “GEMINI.md”.
- Change “CLAUDE.md” to “GEMINI.md” within the file.
- Change “Claude Code (claude.ai/code)” to “Gemini Code Assist”.
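In shell terms (GNU sed syntax; the exact substitutions are just my guess at what’s needed):

```shell
cp CLAUDE.md GEMINI.md
# Replace references to the file name and to the tool inside the copy
sed -i 's/CLAUDE\.md/GEMINI.md/g' GEMINI.md
sed -i 's|Claude Code (claude\.ai/code)|Gemini Code Assist|g' GEMINI.md
```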
My initial project structure would look like this:
@Ehsan, I’ve run a small test using the llms.txt files and the GEMINI.md file as context:
Question:
Write an efficient Mojo version of comb(n, k).
In main(), compare its output to Python’s math.comb(10,5).
The generated code looked like this:
```mojo
from python import Python


fn comb(n: Int, k: Int) -> Int:
    """
    Calculates the number of ways to choose k items from n items without repetition and without order.
    """
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    var k_var: Int = k
    if k_var > n // 2:
        k_var = n - k_var
    var res: Int = 1
    for i in range(k_var):
        res = res * (n - i) // (i + 1)
    return res


fn main():
    try:
        # fn main():
        # error: cannot call function that may raise in a context
        # note: try surrounding the call in a 'try' block
        var math = Python.import_module("math")
        # let math = Python.import_module("math")
        # error: use of unknown declaration 'let'
        var n: Int = 10
        var k: Int = 5
        var mojo_result: Int = comb(n, k)
        var python_result: Int = Int(math.comb(n, k))
        # var python_result: Int = math.comb(n, k)
        # error: cannot implicitly convert 'PythonObject' value to 'Int'
        print("Mojo: C(", n, ",", k, ") = ", mojo_result)
        print("Python: C(", n, ",", k, ") = ", python_result)
        print("Results match:", mojo_result == python_result)
    except e:
        print("Error: ", e)
```
It failed with several compiler errors:
- fn main() declared without “raises”
- using let instead of var
- type conversion issues
After that I reran the test without any extra context files.
The result had similar errors, except that this time the code was wrapped in a try/except block. That would make the raises in “fn main() raises:” redundant.
For this simple example, adding the files as context made the result worse.
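To make the raises point concrete, here are the two variants that do compile as far as I can tell (a sketch; only one of them is needed, not both):

```mojo
from python import Python


# Option A: handle the error locally, so no `raises` is needed on main().
fn main():
    try:
        var math = Python.import_module("math")
        print(Int(math.comb(10, 5)))  # 252
    except e:
        print("Error:", e)


# Option B (alternative): declare `raises` and skip the try/except.
# fn main() raises:
#     var math = Python.import_module("math")
#     print(Int(math.comb(10, 5)))
```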
Thanks for trying that out and letting us know! Agreed, those are common issues I’ve run into too. In my opinion, Cursor works better when you add our official docs, which get indexed and help a lot with retrieval when building the LLM context. With a minimal set of Cursor rules you can also avoid issues like let no longer existing, etc.
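For example, a minimal rules file can be as short as a few bullets like these (the file name and exact wording are just illustrative):

```markdown
<!-- rules/mojo-basics.md (illustrative) -->
- Mojo no longer has `let`; use `var` instead.
- A function that calls raising code must either be declared with `raises`
  or wrap the call in a `try`/`except` block (not both).
- Convert Python interop results explicitly, e.g. `Int(py_value)`; there is
  no implicit conversion from `PythonObject` to `Int`.
```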
I’ve run another test with Gemini Code Assist (GCA) and have some updates.
I’ve now structured the context files in a .gemini folder, including all the modified Cursor rule files (now as .md files) and the llms.txt files, like this:
I gave GCA the instruction: “Before we start, review all files in the .gemini directory to understand the project.”
The generated code for the comb(n, k) test was actually slightly better this time. It correctly used var instead of let, which is explicitly mentioned as deprecated in llms-full.txt. This suggests GCA might be consuming the provided files at least to some extent.
Unfortunately, when I then asked GCA to fix the remaining compilation errors (related to raises and type conversions), I hit a “Quota exceeded” error (Gemini 2.5 Pro Requests per day per user).
My conclusion:
Providing all these context files might improve code generation quality to some degree, but it also seems to significantly increase the number of API requests, which quickly exhausts the daily free quota.
This method doesn’t seem to be an option for free users of GCA (for now).
I’ve personally had pretty good results generating Mojo code from the Gemini CLI when working within a checkout of the modular repository with an appropriate GEMINI.md. For that, I first fired up gemini in the modular directory and prompted it to create an updated GEMINI.md based on the CLAUDE.md that already exists in the repository. I then exited that session and started a new one, with context populated from that GEMINI.md.
I tend to use the smaller models, in this case gemini-2.5-flash, but make sure they operate with a defined plan and specifically instruct them to build the files and run tests against them to make sure they work as desired. Without the build / test cycle for verification, even the best models hallucinate bad syntax or only half-complete functions. With all the Mojo code we have in the modular repository, when they get something wrong they can often search out the right way to do it and eventually get the code right without outside intervention.
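In terms of commands, that flow looks roughly like this (a sketch; exact flags may differ between gemini-cli versions):

```shell
cd modular                  # local checkout of the modular repository
gemini                      # session 1: ask it to write GEMINI.md based on the existing CLAUDE.md
# exit, then start a fresh session so the new GEMINI.md is loaded as context
gemini -m gemini-2.5-flash  # session 2: plan, implement, build, and run tests
```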
If Gemini Code Assist (GCA) works pretty well with your setup, then mine is probably messed up.
Just to clarify, here’s what I’ve done so far:
VS Code Extensions:
I have installed the Mojo language support (stable version) extension.
I have installed GCA and activated its Agent mode.
Project Setup:
I generated a Mojo test project using pixi.
Within this project’s workspace, I’ve created a .gemini folder (see tree above).
I renamed the provided CLAUDE.md file to GEMINI.md, adjusted it and placed it inside the .gemini folder.
I have a subfolder containing four LLM-related files (llms.txt, llms-full.txt, llms-mojo.txt, llms-python.txt). => Do I need these files?
I’ve converted the provided Cursor .mdc files to standard Markdown (.md) files and placed them in a subfolder named rules.
I’ve added the .gemini folder to GCA’s context, but I assume GCA should also find the folder automatically within the workspace.
Global settings:
I’ve not created a global ~/.gemini/settings.json settings file. I’m not sure if that’s necessary.
I’m using the free GCA version.
What exactly do I need to adjust or configure to get GCA working optimally with my Mojo project?
Can you give me some advice?
Specifically, when working with a new Mojo project I’d set up a dedicated GEMINI.md inside that project or (I forget whether Gemini supports this, or only Claude) a user-wide GEMINI.md. In that GEMINI.md, I’d have it point to the CLAUDE.md in the local modular checkout as a primary reference for our Mojo and MAX code. I’d also give it links to the llms.txt files on the website and tell it to use those as primary API references.
A new project doesn’t have the same structure and context as the larger modular repository, so you want your new project to know where that repository exists and what the rules are for it. I’ve found that to be key when starting a new Mojo project. For working with Claude Code, I found this to be important enough that I put those links and references into my user-wide CLAUDE.md so that all new projects would start with them in the context.
For an agent-based new project, the most valuable resource is for it to know how to read the code in the modular OSS repository, closely followed by being able to read our API docs through the llms.txt files we maintain on our docs website. It’s also important to indicate to the agent in the GEMINI.md for the new project that Mojo and MAX change very fast, so it should always try to reference the latest code and API docs rather than relying on its training data.
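As a starting point, a project-level GEMINI.md along these lines captures most of that (the paths and URLs are illustrative; check the docs site for the current llms.txt locations):

```markdown
# GEMINI.md (sketch)

- Mojo and MAX change quickly: always prefer the latest code and API docs
  over anything in your training data.
- Primary code reference: the local checkout of the modular repository
  (e.g. ~/src/modular) and its CLAUDE.md for repository conventions.
- Primary API references: the llms.txt files on the docs site, e.g.
  https://docs.modular.com/llms.txt and https://docs.modular.com/llms-mojo.txt.
- Work from a written plan, build the files you generate, and run tests
  against them before considering a task done.
```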