Is the model repo up to date?
It seems no new models have been added in a while.
I'm specifically trying to understand whether GGUF/GGML and other quantizations of recently released models (like GPT-OSS and Mistral 3.2) are supported.
Currently we offer GPT-OSS at the Mammoth level through Large Scale Inference | SF Compute for batched inference workloads.
I'm an individual looking for local inference (the SF Compute offering you linked to doesn't satisfy either of those use cases).
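By local inference I mean running a GGUF quant directly on my own machine, e.g. something along these lines with llama-cpp-python (a minimal sketch; the model filename is just a placeholder for whichever quant is downloaded):

```python
from llama_cpp import Llama

# Load a local GGUF quantization from disk (path is hypothetical).
llm = Llama(
    model_path="./mistral-small-3.2-q4_k_m.gguf",
    n_ctx=4096,  # context window size
)

# Run a single completion entirely locally.
out = llm("Q: What does GGUF quantization do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```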
Unfortunately we don't provide local inference at the moment. Stay tuned!