Right now, model support is added because one of Modular's customers wants that model, because the model is notable (gpt-oss-120b), or because a community member contributes an implementation. As far as I am aware, MAX has no reason to reject encoder-decoder models, and contributions are welcome. However, having Modular employees implement support will likely stay low priority until Modular has a need for it, since Modular is also trying to keep up with the tidal wave of LLMs.