Resurrect "assignment as first line of for"

Hi all,

I’d like to resurrect the discussion around this ticket

I’m not naive about how delicate the handling of Position information is in the parser… but this rewrite rule seems like it could be pretty simple. Could we please consider reopening it?

In particular, for functional programming (which uses for comprehensions as a key language feature), this restriction can be really annoying when it bites, since it requires working around something as simple as an assignment.


Worth at least considering whether it’s more implementable in Dotty. I agree that this is a constant annoyance…

That would be very handy indeed. For now there are two workarounds: either declare a val before the for comprehension, or lift the value into the monad you work with. The former looks ugly; the latter makes an unnecessary allocation.
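A minimal sketch of the two workarounds, using List as a stand-in monad (the names here are illustrative; in real code the lift would be something like cats’ .pure[F]):

```scala
// Workaround 1: declare the val before the for comprehension.
val xs = List(1, 2, 3)
val res1 = for { y <- xs } yield y * 2

// Workaround 2: lift the value into the monad itself (an extra allocation).
val res2 = for {
  lifted <- List(List(1, 2, 3)) // stands in for .pure[F]
  y      <- lifted
} yield y * 2

// Both produce List(2, 4, 6).
```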

It could be pretty simple if you use the workaround that you describe in the ticket. But I’m not sure adding more desugarings to the parser is the way to go. The Scala tooling needs to catch up with all of them, and that process is expensive as well as hideous. I feel that we should be moving in the opposite direction: moving desugarings out of undesired places (like the parser and typer) into phases where they make sense (when they can be made independent; some of the desugarings in the typer must stay there for several reasons).

That being said, I must admit I’ve never had this issue, though I acknowledge others may have bumped into it while working a lot with for comprehensions. As Paul comments on that ticket, adding this in a “principled” way has too high a cost. I don’t think this has changed now, though it’s worth a cursory look at least.

I’m not sure if this change requires a spec change (is this a limitation enforced by the spec?), but if it does I’m happy to help anyone submitting a SIP!

I would say the latter looks ugly too.

Well baz <- calculateBaz(foo, bar).pure[F] is not that ugly IMO :slight_smile:

Wait, why is this code ugly?

val x = foo(bar)
for {y <- x} ...

Especially, why is this ugly enough that you’d prefer

for {
  x <- foo(bar).pure[F]
  y <- x
} ...

It’s not that I don’t believe you, but I’d like one good example to see it myself… @jducoeur ?

But didn’t scala.meta want to offer a parser API designed to be reusable (as opposed to repurposing one from a compiler)? .NET already did this. Then this concern would disappear.


(Summoned, I appear.)

Well, keep in mind that I think the pure version is much uglier – I’m not defending that.

It’s the inconsistency that always bugs me, and leads to annoyances. I’m very used to doing assignments within the body of a for comprehension, and the result is that I fairly frequently mistype and wind up with one at the beginning, resulting in compile errors. This is especially common when I’m refactoring code, pulling an assignment up to the top of the algorithm.

In other words, my annoyance is with the inconsistency in:

val x = foo(bar)
for {
  y <- x
  z = baz(y)
  blah <- mah(z)
} yield ...

You can (and I routinely) do assignments inside of the for, but can’t at the start. At best I find that aesthetically annoying, and I hit that wart often.
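To make the inconsistency concrete, here is a minimal pair (Scala 2 semantics assumed):

```scala
val xs = List(1, 2, 3)

// Compiles: a value definition after the first generator.
val ok = for {
  y <- xs
  z = y + 1
} yield z

// Does not compile: a value definition as the first clause.
// val bad = for {
//   z = 1
//   y <- List(z)
// } yield y
```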

Hence, it would be nice if the desugaring were smart enough to desugar based on the first monad found, rather than requiring it to be at the top of the for. I have no idea whether that’s even remotely feasible – I’m not at all surprised that it isn’t in Scala 2 – but it would IMO make for more consistent code if it were…


OK, so the point is not “the workaround is ugly”, but “the restriction is arbitrary”. Makes sense to me. In fact, the restriction complicates defining and implementing the desugaring.

Inferring the monad to use and then calling pure doesn’t work with the current stdlib API; you’d need to change the API for that. Which would probably be best here. If you don’t want a type-directed desugaring (and you don’t, for consistency), call a method pure(x) (or some other name) and let users shadow it.

However, value definitions and using pure don’t have the same behavior. I’ll get back to that.

In fact, the restriction seems to complicate the spec and the compiler. They both rewrite p <- e; p' = e' to

(p, p') <- for (x@p <- e) yield { val x'@p' = e'; (x, x') }

The compiler also handles up to 21 value definitions.
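Applied by hand to a single generator followed by a value definition, the pair rewrite looks like this (a sketch; the actual compiler output uses fresh names):

```scala
// Source:       for { y <- List(1, 2); z = y * 10 } yield z
// Spec rewrite: (y, z) <- for (y <- List(1, 2)) yield { val z = y * 10; (y, z) }
val desugared = for {
  (y, z) <- for (y <- List(1, 2)) yield { val z = y * 10; (y, z) }
} yield z
// Note the intermediate tuples allocated for every element.
```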

Instead, if you try harder to use structural recursion, you can replace that rule by adding:

for {} yield e => e // feel free to forbid this in source Scala if you dislike it, but it’s an arbitrary restriction
for { p = e; generators } yield e2 => val p = e; for { generators } yield e2

As a side effect, the new rules allow p = e at the top. I expect they’re not fully equivalent in corner cases (like if patterns use side-effectful extractors, which is however “evil”), but they might be good enough.
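Applied by hand, the proposed rule would let a value definition come first. A sketch of what the new desugaring would produce (the for clause in the comment does not compile in Scala 2 today):

```scala
// Hypothetical source: for { z = 1; y <- List(z, z + 1) } yield y
// Proposed rewrite:    val p = e; for { generators } yield e2
val res = {
  val z = 1
  for (y <- List(z, z + 1)) yield y
}
// No tuples and no pure call: just a plain val followed by the remaining for.
```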

A bigger potential problem is: how do you handle p = e if p is refutable? Because the above scheme gives a match failure, unlike p <- pure(e). I semi-remember complaints here.

But the current translation fails there too—that matches the spec, in fact, narrowly. If you applied the rewrite rule before inserting filter calls for refutable patterns (against the current spec), you’d have a behavior that:

  • handles refutable patterns in value definitions p = e
  • but makes it much harder/impossible to have value definitions first, unless you use the pure-based desugaring.
  • in turn, using pure has a performance cost unless you have some optimizer (I think it needn’t be smart. pure isn’t a virtual call.) On the other hand, creating tuples isn’t free either.

Right now, we seem to have the worst of both worlds: value definitions are inconsistent with generators regarding refutability, and they can’t come first.

scala> for {x <- List(()); (a, b) = List[Any](1): Any} yield 2
scala.MatchError: List(1) (of class scala.collection.immutable.$colon$colon)
  at .$anonfun$res4$1(<console>:12)
  ... 29 elided

scala> for {x <- List(()); (a, b) <- List(List[Any](1): Any)} yield 2
res5: List[Int] = List()