Make 'mut' keyword work for GPU function parameters

For various reasons, I haven’t been able to do anything with Mojo for a few months. Today I decided to catch up with what’s new, install the latest Mojo, and make sure my Julia set generator in Mojo for CPU and GPU still ran (https://github.com/laranzu/multijulia).

With the new version (0.26.1.0) I could not declare a GPU function parameter as mut as I expected. I strongly suggest this ought to work as it does for CPU code.

The other changes I had to make since last October were easy enough. alias becomes comptime, OK. gpu.id for block dimensions and thread IDs is now just gpu, OK.

A new error, though: the last line of

fn julia(pixels: UnsafePointer[UInt32], size: Float32, scale: Float32, iterations: Int)

pixels[idx] = rgb;

was now being reported as an error: “Expression must be mutable in assignment”.

I assumed that the fix would be just like the CPU version: declare that array as

mut pixels: UnsafePointer[UInt32]

But no, I still got the same error message. And now I was deeply confused.

Eventually I worked out that

pixels: UnsafePointer[UInt32, MutAnyOrigin]

did the trick, based on one of the GPU puzzle examples. But I still don’t understand why I couldn’t just declare it as mut.

I’m sure there is a technical reason, but for a programming language designed to be used on both CPU and GPU, I really want the code to work the same way, as much as possible, in both cases. Keep the extra MutAnyOrigin and variant attributes for people who really want or need to go into that level of detail, but can we please make a simple mut just work as well?

UnsafePointers used to be mutable (parametric, actually?) by default; now you’d have to write UnsafePointer[T, mut=True]. It’s different from a mut argument: mut=True says that you can mutate through this pointer.

Yes, it’s the difference between const T * and T * const in C or C++.

My point is that as a Python developer coming to Mojo, it’s not a distinction I care about, or even want to know about. In Mojo on the CPU, prefixing a parameter with mut means I can assign to it. I expect the same behaviour for GPU code.

I don’t see an inconsistency in the meaning of such a program when running on CPU vs. GPU. Could you please elaborate?

On the CPU, I can use a List. If I declare the parameter
fn julia(pixels: List[UInt32] …

then pixels[idx] = rgb gives a “must be mutable” error. The fix is to change the parameter to
fn julia(mut pixels: List[UInt32] …

Mojo is consistent about this: whether scalar or compound, parameters are read-only unless you declare them mut. And it’s better than C/C++ in not drawing a distinction between read-only pointers and pointers to read-only, a frequent cause of confusion for C/C++ programmers.

When I have an UnsafePointer as a function parameter, I use it as an array (or list). I wanted to write pixels[idx] = rgb just as I would for a List (or InlineArray), so I assumed that I’d have to add mut before the parameter, because that’s how every other type in Mojo works.

Instead, I have to think like a C/C++ programmer for this special case. And I don’t want to. Mojo is supposed to be better at this high level stuff! I see no reason to force Mojo programmers to learn the difference between const T * and T * const: 99 times out of 100 it’s useless in C/C++.

The current Mojo behaviour may be consistent whether running on CPU vs GPU, but it’s consistently confusing. That doesn’t help.

(Real fix is to allow Lists as parameters to GPU functions, I have another topic post about that.)

There are actually a lot of types in Mojo which have that “interior mutability”. This is a consequence of Origins (and mutability with them) actually being first-class types instead of a thing stuck on the side. You can think of mut foo: UnsafePointer[T, MutOrigin[...]] as having the same API contract as **T. var foo is *T, and read foo is actually const**T. The compiler plays a few games to enable passing things in registers instead of needing a full double pointer, but it needs to act as if you were actually passing a double pointer there to maintain consistency. This is why most code passes pointers by ownership.

edit: read foo is actually const**T, not *const*T.

As a representative of Real Application Programmers, my reaction to the concept of “interior mutability” is, I’m sorry to say, oh $DEITY why would you do that?

I’m aware that fine grained access controls are useful as a higher level concept in application design. I should have write access to my bank account address and phone number, but not to my balance. This does not make them useful for compiled programming languages.

C/C++ have the const T * vs T * const distinction, and 99% of the time only the first form is useful. My experience is that C/C++ programmers rarely use T * const even when they could; it’s not a useful distinction to make.

C++ also has the wonderful mutable keyword, where you can designate bits of an object as non-const even when the method says const. Yeah, that never gets confusing.

In the case of Mojo, isn’t it supposed to be simple? Here’s what the Get Started with Mojo pages say:

If you’d like your function to receive a mutable reference, add the mut keyword in front of the argument name. You can think of mut like this: it means any changes to the value inside the function are visible outside the function.

Get Started with Mojo - Ownership

IMHO, complicated rules about read-write vs read-only never help programmers. Mojo should just do the simple and obvious thing.

I was talking about *const*T, which is a mutable pointer to a constant, which is what passing mut UnsafePointer[mut=False] does. Whenever you pass by reference in Mojo, you need to think of it like adding a layer of pointer.

We aim to make things as simple as possible, but we have this little problem called “the GPU cannot read that part of main memory” stopping us from doing what you ask. GPUs do not actually have the ability to do arbitrary reads into host memory, because it was discovered that allowing that power is a horrible idea (see Apple FireWire and the resulting security vulnerabilities it caused). There is a half solution, but it makes all allocations on the order of 100x slower, doesn’t work with some GPUs, means that Mojo would need to set up a session with the GPU before main, and still wouldn’t work for any memory you got from FFI. Other languages looked at this and made the decision to lie to you about how the hardware works and do huge copies behind your back.


@laranzu, while I disagree with many of the judgments you made about Mojo in this thread, I’d still like to invite you to share your opinion with the team here.
