In my opinion, Python’s generator expressions (as used in comprehensions, not the yield kind) tend to get unreadable as you stack up longer chains. It’s usually not a simple map and filter where problems happen, but a chain where you want to grab a window of a given size, map it, filter that result, make another window, filter that, grab only the first value of the window, and then convert the result to a list.
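The sort of chain described above might look like this in Python; window is a hypothetical helper, and the input and thresholds are made up for illustration:

```python
from itertools import islice

def window(iterable, size):
    # Hypothetical helper: yield overlapping tuples of `size` consecutive
    # items. Assumes the stream never yields None.
    it = iter(iterable)
    buf = list(islice(it, size))
    while len(buf) == size:
        yield tuple(buf)
        nxt = next(it, None)
        if nxt is None:
            return
        buf = buf[1:] + [nxt]

data = range(10)
result = [
    w2[0]                                    # grab only the first value
    for w2 in window(
        (sum(w) for w in window(data, 3) if sum(w) % 2 == 0),  # map, then filter
        2,
    )
    if w2[0] < 20                            # filter the second window
]
```

Even with helper functions, the nesting forces the reader to trace the pipeline inside-out rather than top-to-bottom.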
I find list comprehensions to be perfectly readable. They look exactly like loops and can be easily converted into loops. Are you able to refactor long filter, map, flatten chains into loops easily?
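For example, a minimal comprehension and its direct loop translation:

```python
# A comprehension and its loop equivalent produce the same list;
# the translation is mechanical.
squares_comp = [x * x for x in range(10) if x % 2 == 0]

squares_loop = []
for x in range(10):
    if x % 2 == 0:
        squares_loop.append(x * x)
```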
I think there is an overreliance on the iterator interface. It turns out processing one item at a time is often insufficient.
I find them less and less readable as you stack more stuff on them. Iterators easily map to loops so that’s not a problem. Also, nothing says you have to do one item at a time with iterators, chunked iterators are very much a thing.
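A minimal sketch of a chunked iterator in Python (the chunked helper is a hypothetical name; Python 3.12 ships itertools.batched for the same job):

```python
from itertools import islice

def chunked(iterable, size):
    # Yield successive lists of up to `size` items, so each stage of a
    # pipeline can process a batch rather than a single item.
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

chunks = list(chunked(range(7), 3))
```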
You may find it more readable, and it does allow you to write minimal code, but it comes at the cost of maintaining multiple iteration patterns.
I agree that yield is the better option for writing iterator adapters. However, common stuff like map and filter is just noise to write yourself. I could refactor the adapter code to be:
var bar = foo
    .filter(is_even)  # yes, I made a mistake when I wrote the example code, so this chain will never yield a value
    .filter(is_prime)
    .map(square)
    .collect()
Here, I only need to write functions that handle a single step on a single item at a time, which makes them easy to code review, and I can more aggressively compose functionality with a variety of domain-specific types of filters and transformations.
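A Python rendering of that chain, assuming simple definitions for the single-step helpers (and an input that, as admitted above, the even-then-prime filters reduce to nothing):

```python
def is_even(n):
    return n % 2 == 0

def is_prime(n):
    # Trial division; fine for small illustrative inputs.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def square(n):
    return n * n

foo = range(3, 100)  # excludes 2, the only even prime
bar = list(map(square, filter(is_prime, filter(is_even, foo))))
```

Each helper handles a single item at a time and can be reviewed in isolation.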
This is a valid point of comparison. Lazy evaluation is used in Rust because it often preserves cache locality much better than eager evaluation, which means higher performance, and because you don’t have to figure out where to store an unknown quantity of data in the meantime; you just need some stack space.
It is also possible to have good cache locality with eager evaluation. Resizable bump allocators are a good example.
I’m not sure I agree. If I do map(foo, collection) on something with 80k entries, I’ve just totally blown my L1 and L2 caches, and will continue to do so for every stage of processing. Eager evaluation works when everything fits into memory, but lazy evaluation, possibly with some batching, will always work unless your CPU has a very weird icache.
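The trade-off can be sketched in Python, where the eager form materializes a full intermediate list per stage while the generator form pulls each item through both stages at once:

```python
data = list(range(80_000))

# Eager: each stage builds a full intermediate collection, so every stage
# walks all 80k entries (and their cache lines) before the next begins.
stage1 = [x for x in data if x % 2 == 0]
eager = [x * 3 for x in stage1]

# Lazy: a generator pipeline pulls each item through both stages in turn,
# so only one item is "in flight" at a time.
lazy = list(x * 3 for x in data if x % 2 == 0)
```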
Lazy evaluation is fundamentally not compatible with escape analysis of closures. Thus you either have to eat the complexity, like Rust, and model closures as traits, or deal with memory fragmentation, or, in the case of Java, with garbage collection if the closure sticks around for multiple GC cycles.
I don’t think you can model closures as a class of types in any way except traits. They are all fundamentally different types with different sizes and borrows, and some may be trivial types while others won’t be. Some will be possible to synthesize move/copy ctors for, others won’t. If you erase those details, you’re asking to cut out intrinsic complexity about how the type actually works, or you’ll introduce annoying restrictions.
I see no extra allocations here. I can reuse collected by passing it into the function that does the computation for as long as I want. It could be built on top of a static piece of memory allocated in the binary, if we had custom allocators, and this would still work just fine.
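A rough Python sketch of that reuse pattern (compute is a hypothetical function; Python lists still manage their own backing storage, so this only avoids allocating fresh list objects, not all allocation):

```python
def compute(collected, source):
    # Hypothetical helper: refill an existing buffer in place rather than
    # returning a newly allocated list on every call.
    collected.clear()
    collected.extend(x * x for x in source if x % 2 == 0)
    return collected

buf = []
compute(buf, range(6))    # buf is refilled in place
compute(buf, range(10))   # same object, reused for as long as we want
```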
You seem to forget that closures can mutably capture variables, which is a side effect.
All of those functions I gave as examples are mathematically pure functions with no side effects. Yes, you can do mutation in closures and cause side effects, just like you can allocate memory in a closure. I was making the point that you can have chains of iterator adapters with zero overhead vs writing a bunch of for loops yourself.
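For contrast, here is a closure that mutably captures a variable, so merely calling it inside a chain is a side effect (make_counter is a made-up name):

```python
def make_counter():
    # `bump` mutably captures `count`; each call mutates state outside
    # the closure itself, unlike the pure helpers above.
    count = 0
    def bump(x):
        nonlocal count
        count += 1
        return x
    def total():
        return count
    return bump, total

bump, total = make_counter()
result = [x * x for x in map(bump, range(5))]
```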
Less code and variable declarations (...): This is less readable.
I’m not sure what you’re referring to here.
Dynamic dispatch: This is not useful with static return types.
This is very useful even with static return types, for instance to implement flexible event handlers.
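A small Python sketch of that idea, with hypothetical Logger and Alerter handlers dispatched dynamically through a common base class:

```python
class Handler:
    def on_event(self, event):
        raise NotImplementedError

class Logger(Handler):
    def __init__(self):
        self.seen = []
    def on_event(self, event):
        self.seen.append(f"log:{event}")

class Alerter(Handler):
    def __init__(self):
        self.seen = []
    def on_event(self, event):
        self.seen.append(f"alert:{event}")

# The list's static element type is Handler, but each call dispatches
# dynamically to the concrete type's override.
handlers = [Logger(), Alerter()]
for h in handlers:
    h.on_event("boot")
```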
Overriding methods (...): This is not possible without implicit boxing or opaque return types.
If you mean the ability to patch a function at runtime, I agree.
Dunder methods and opaque return types allow efficient and readable implementations of custom methods.
I think we have disagreements on what opaque return types are. T is a named, generic type in that example, which likely has type bounds on it.
Dunder methods clearly signal to call the freestanding function, while still allowing for extensibility.
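A Python example of that signaling: defining __len__ tells readers to call the freestanding len(), while any type can opt in (Deck is a made-up example type):

```python
class Deck:
    # Implementing the dunder makes the freestanding len() work on this
    # type: callers write len(deck) and Python dispatches to __len__.
    def __init__(self, cards):
        self._cards = list(cards)
    def __len__(self):
        return len(self._cards)

deck = Deck(["A", "K", "Q"])
n = len(deck)
```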
This is far better than Rust, where only iterator consumers and optimization methods can be extended.
I’m not sure what you mean, could you clarify what you mean by “iterator consumers” and “optimization methods”?