My concern is that Mojo doesn’t have HM type inference, which means that “going backwards” is hard for it.
The most reasonable way I can think of for this to work is to first implement currying, which would require deciding, first, whether the curried values are provided at comptime or at runtime, and then what type each value is so that overload resolution can work.
Consider the following:
```mojo
fn foo(a: Float32, b: Float32, c: Float32) -> Float32:
    """Lower precision variation."""
    return a + (b * c)

fn foo(a: Float64, b: Float32, c: Float32) -> Float64:
    """Higher precision variation."""
    return a + (Float64(b) * Float64(c))
```
If I were to curry this function as foo(_, b, c), I first need to figure out whether b and c are curried at comptime, in which case both overloads get specialized with b and c constant-folded, producing a fn(a: Float32) -> Float32 and a fn(a: Float64) -> Float64. If the currying happens at runtime instead, it would produce closures, which are a different type, aren't function pointers, and can't be optimized as aggressively.
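To make the comptime/runtime split concrete, here is a sketch using Mojo's existing features rather than any real currying syntax. The names `foo_comptime` and `FooClosure` are hypothetical; the point is that the comptime version is still an ordinary function, while the runtime version is a struct (closure) with a different type:

```mojo
# Hypothetical: "comptime currying" spelled as compile-time parameters.
# Each instantiation foo_comptime[b, c] is still a plain
# fn(a: Float32) -> Float32, so b * c can be constant-folded.
fn foo_comptime[b: Float32, c: Float32](a: Float32) -> Float32:
    return a + (b * c)

# Hypothetical: "runtime currying" as an explicit closure struct.
# This is a distinct nominal type, not a function pointer, so the
# optimizer has fewer guarantees to work with.
@value
struct FooClosure:
    var b: Float32
    var c: Float32

    fn __call__(self, a: Float32) -> Float32:
        return a + (self.b * self.c)
```

Overload resolution would then have to choose between instantiations of `foo_comptime` in the first case, and between closure types in the second, which is exactly where the two paths diverge.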
Mojo might be able to deal with the comptime version, but the runtime version would need function overloading on closures. Even in the comptime case, Mojo would need to propagate, for example, that a is a Float64 through f, which sets f's return type, which may have similar knock-on effects in g and h. Based on what I know about how the parser works, it won't be able to figure that out, and you'll lose type inference for values "downstream" of this construct. If the parser can't figure it out, then you'll need explicit casts or type hints, which very quickly makes this more verbose.
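The verbosity cost looks something like this. Assuming hypothetical functions f, g, and h chained after foo, once inference stops propagating, every intermediate needs an annotation or cast just to keep overload resolution on the Float64 path:

```mojo
# Hypothetical chain: without backward inference, each step needs an
# explicit annotation or cast so the right overload is selected.
var x: Float64 = foo(Float64(a), b, c)  # force the Float64 overload
var y: Float64 = g(x)                   # g's overload picked by x's annotation
var z: Float64 = h(y)                   # and so on down the chain
```

With full inference, all three annotations (and the Float64(a) cast) could in principle be elided, which is the gap being described.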