Updated Proposal: Revisiting Implicits

The difference is that we expect the apply methods of the collections to create new collections, primarily because collections are, of course, about collections. In contrast, we expect a JSON library to be primarily about converting things to JSON and back. Just as Js.Str(String) converts a String to a JSON string, it is completely natural to expect Js.Arr(Seq) to convert a Seq to a JSON array. Why isn’t it Js.Str.from(String)?

I guess you can argue that the current API is better, but it is certainly not immediately obvious. So there is a potential for error. On the other hand, is that error really due to implicits, or couldn’t you just as easily have the same problem without implicits?

1 Like

I’d say it probably isn’t directly related to implicits, as you could have the same issue without them. A more implicit-centric point is that methods like Json.obj and Json.arr provide cut points for the parser, so if you have a typo they narrow down the search space a bit.

On the other hand, Json4s has a DSL which is completely driven by implicit conversions, and if you misplace a comma somewhere in the middle the whole thing fails to resolve and you get very little indication where the error is. It’s really unpleasant to debug.

JSON libraries do multiple things: conversion, construction, serialization, parsing, and much more. You cannot call the wrong method in the wrong part of a library and expect to get the right output. ujson.Arr and friends are there for you to conveniently construct JSON fragments, not as a way to convert Scala datatypes to JSON.

All this is documented thoroughly, with reference docs, blog posts, and lots of online examples, all following existing standard library conventions down to the exact same method names and signatures. If that isn’t enough, there’s literally nothing else I can give.

You are right, though, that the debate over apply vs from has nothing to do with implicit conversions.

3 Likes

What’s wrong with 1.as: Target?

Other than being a visually unusual way to specify types, nothing.

That’s a big caveat though: I can’t remember the last time I saw code that specified the type for a method using that idiom, but that may simply be an artifact of the style of the code I generally work with.

At the risk of going down a tangent, I’d expect Js.Arr(items) to take a Seq[Js] and return the flat JSON array, and Js.arr(items) to take Js* varargs and return the nested array. The first looks like a companion-object shorthand for a call to new, the second looks like a DSL-style helper, so their actual behavior runs counter to the intuition I’ve built up about Scala conventions.
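
To make that concrete, here’s a toy sketch of the convention I have in mind (illustrative only, not the real API of ujson or any other library):

sealed trait Js
object Js {
  final case class Str(value: String) extends Js
  // companion-object apply: wraps an existing Seq[Js], like a collection factory
  final case class Arr(items: Seq[Js]) extends Js
  // DSL-style varargs helper: each argument becomes one element
  def arr(items: Js*): Js = Arr(items)
}

val items = Seq[Js](Js.Str("a"), Js.Str("b"))
Js.Arr(items)         // flat array: ["a", "b"]
Js.arr(Js.Arr(items)) // nested array: [["a", "b"]]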

1 Like

Compare

foo.as[Bar]
  .baz(1.as[Baz], "hi")
  .bumble()
  .as[String]
  .split(",")


((foo.as: Bar)
  .baz((1.as: Baz), "hi")
  .bumble()
  .as: String)
  .split(",")
1 Like

I see your point, but you don’t need the parens inside the argument to baz. Also, you could use pipe:

foo.pipe[Bar](_.as)
  .baz(1.as: Baz, "hi")
  .bumble()
  .pipe[String](_.as)
  .split(",")

Ok, so you’ve come up with a workaround that somewhat mitigates the deficiencies of the API. Though not fully, because .pipe[Bar](_.as) is still much worse than .as[Bar], both ergonomically and in terms of performance. What is the downside of as[T] that justifies reaching for a workaround as heavy and complex as piping?

5 Likes

I did not say there was a downside to it. In fact, it’s pretty easy to achieve that syntax:

trait Convertible[-A, +B] extends (A => B) with
  def[B0 >: B] (x: A) as: B0 = apply(x)

used as:

given Convertible[Int, String] = _.toString

println(1.as[String])
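
For reference, the snippet above uses pre-release Dotty syntax; in final Scala 3 an equivalent sketch (resolving the typeclass at the call site rather than via the B0 bound) would be:

trait Convertible[-A, +B] extends (A => B)

// extension method so any value gets .as[B] when a matching Convertible is in scope
extension [A](x: A)
  def as[B](using c: Convertible[A, B]): B = c(x)

given Convertible[Int, String] = _.toString

@main def demo(): Unit =
  println(1.as[String]) // prints "1"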

(It’s irrelevant to the discussion, but calling piping a heavy, complex mechanism is a stretch, don’t you think? It’s just postfix function application.)

I actually do think that it is a very heavy mechanism to pull in for what we’re trying to achieve. And the chain of features and logic that goes into arriving at the expression 1.pipe[String](_.as) is very kludgy IMO.

We start with a task: “I want to convert 1 to a String.”

-> I then use the fact that conversions in Scala 3 are done via x.as, so I write x.as.
-> I then see an ambiguous implicit error saying there are multiple implicits that provide an as method.
-> If I am already very familiar with the language, I correctly diagnose the problem: implicit resolution requires me to ascribe the right type in order to disambiguate which implicit applies. If I am not already familiar with the language, I am instead frustrated, and this roadblock tarnishes my impression of Scala.
-> If I am more experienced, I decide to work around this by ascribing the target type with : String, and continue.
-> If I am an extreme keener, probably one of the biggest Scala nerds at my organization, I will know about obscure corners of the stdlib such as the pipe higher-order method, which lets me refactor 1.as into postfix position as 1.pipe(_.as). This is more convenient because pipe lets me ascribe a type to the result of the lambda, so I ascribe String there to get 1.pipe[String](_.as).
-> After this point, any colleagues coming from Java/JS/Kotlin/Go/Python/PHP are irritated that, for basic things like converting Int to String, they have to learn about higher-order combinators like pipe, anonymous function syntax with _, decorators, ambiguous implicit errors, etc. The colleagues who are very familiar with Scala will debate with me in PRs about what the point of this .pipe[String](_.as) is, and I have to explain the logic. We continue to have differences of opinion about whether 1.pipe[String](_.as) or (1.as: String) is better. As a general rule, one can tell who wrote which parts of the code by their use of 1.as: String vs 1.pipe[String](_.as).

5 Likes

I’m not sure that’s irrelevant, as it would need to be used extensively to make _.as: B work without adding a bunch of parens.

Also, isn’t pipe hidden behind an import of scala.util.chaining._?

I’ve been exploiting tap and pipe whenever possible. I think it should be easier and cheaper.

Just as we look at old Scala code and laugh at those foolish semicolons, we should feel embarrassed by our temporary locals.
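
For instance (a minimal illustration; compute is a made-up stand-in):

import scala.util.chaining._

def compute(): Int = 41 + 1

// the temporary-local version:
val result = compute()
println(result)

// tap runs a side effect and returns the value unchanged:
compute().tap(println)

// pipe is postfix function application:
compute().pipe(_ * 10).pipe(n => println(s"scaled: $n"))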

Recently someone pushed back on my excessive tapping, and I agreed I was tapped out.

I’m going to start using an alias again for the scala command because I always need:

-Yimports:java.lang,scala,scala.Predef,scala.util.chaining

Anyway, I’m enjoying the discussion, and also I’d like an extreme keener t-shirt.

7 Likes

Other niceties of having a Converter typeclass are the following aliases:

type FromString[+A] = Converter[String, A]
type ToString[-A] = Converter[A, String]
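
For instance, a FromString instance gives you a tiny typed settings reader. A sketch in final Scala 3 syntax; readSetting and configDemo are made-up names, not a published API:

// repeating the definitions above so the snippet is self-contained
trait Converter[-A, +B] extends (A => B)
type FromString[+A] = Converter[String, A]

given FromString[Int] = _.toInt
given FromString[Boolean] = _.toBoolean

// look up the parser for the requested type and apply it to the raw value
def readSetting[A](raw: String)(using parse: FromString[A]): A = parse(raw)

@main def configDemo(): Unit =
  println(readSetting[Int]("8080"))     // 8080
  println(readSetting[Boolean]("true")) // true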

The FromString typeclass is super useful when building command-line or config-file parsers. I just noticed that the dotty library already appears to include an ad-hoc trait for this purpose. The ToString typeclass is useful for building customizable string interpolators, whose behavior can be modified by changing an import or two. I really ought to finish and publish my library. But, again, I would very much love to see a Converter typeclass in the standard library. Cheers.

Unless I’m mistaken, wouldn’t this allow for using existing Conversions in a more deliberate manner:

given[A,B](given C: Conversion[A,B]): Converter[A,B] = C(_)

This would nicely reduce duplication and mitigate some of the issues with implicit conversions. As an example, if the Json4s DSL broke, you could drop in a few temporary .as[JValue] calls and bisect the statement to figure out where you missed a comma.

No, unless I misunderstand what you are trying to say. That doesn’t work. In order to get a Converter[A,B] you would need to have a given Conversion[A,B] available, the presence of which would allow for implicit conversion of A to B, which is what I’m trying to avoid.

The key is that, without the feature flag, the implicit conversion wouldn’t work - but an explicit call to Converter still would.

The explicit call should override the implicit conversion, so even if implicit conversions are enabled, it should let you give hints to the typer about what you expect at a particular part of some expression tree, in a way that’s easy to remove when you’re done.

I tried it out in Scastie, and it looks like it works:

object thirdParty {
  given Conversion[Long, String] = l => s"${l}L"
}

package definition {
  trait Convertible[-A,+B] extends (A => B)
  trait TightlyScopedImplicitConversionsLift {
    import scala.language.implicitConversions
    given[A,B](given C: Conversion[A,B]): Convertible[A,B] = C(_)
  }
  
  object Convertible extends TightlyScopedImplicitConversionsLift {
    trait ConvertibleOps[A] {
      def[B] (a: A) as(given C: Convertible[A,B]): B = C(a)
    }
    given[A]: ConvertibleOps[A]
  }
}


package runner {
  object Main extends App {
    import definition.Convertible.given
    
    {
      import thirdParty.given // Commenting out this line makes the next line fail to compile
      println(1L.as[String])
    }
    
    {
      import scala.language.implicitConversions
      given definition.Convertible[Long, String] = l => s"Convertible($l)"
      given Conversion[Long, String] = l => s"Converter($l)"
      println(1L.as[String])
    }
  }
}

I also tried the encoding for as given here, but it kept inferring B as Nothing or Any.

1 Like

At the risk of sounding like a complete fool, wouldn’t the never-ending debate around what is (or seems) confusing be completely settled by some data-driven experiment:

  1. Create (or find) a Scala 2 small-ish codebase that contains a significant amount of logic revolving around implicits
  2. Create a compiler plugin (or build-tool plugin) that pushes compile information (compile errors, tasty trees, whatever is relevant) to some backend
  3. Find a big enough pool of developers from different backgrounds / level of experience
  4. Ask each of them to manually rewrite the Scala 2 codebase in Dotty over the course of a couple weeks
  5. Analyse compile information to understand preferred usage / confusion
  6. Iterate from there

No matter the difference of point of view, I’m sure everyone in this discussion wants Dotty/Scala to succeed as a language, and many would happily subject themselves to such an experiment.

2 Likes

It’s not a bad idea; IIUC, this would basically be a telemetry-enhanced trial group.

I’m not sure how feasible it would be to add that instrumentation, but it would need to be done with obsessively detailed levels of transparency, as some of the bigger tech companies have a lust for data collection (and a tendency to omit mentioning the inclusion of telemetry tech in their products) that has tainted the technology’s reputation somewhat.

Nothing is holding you back from doing that right now IIUC: scala/src/library/scala/math/Ordering.scala at v2.13.0-M4 · scala/scala · GitHub

I’d like to revise the claim above. I’m not so sure that all people always learn a new programming language, or reason about programs, via formal semantics or desugaring to formal semantics. Wittgenstein’s notion of language-games seems to suggest that people may learn to use a language without knowing its formal semantics.

As an example, experienced Scala programmers know how the desugaring of for-comprehensions works. However, when given a snippet of code with for-comprehensions, we don’t desugar it in order to understand what it means.
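
For concreteness, the desugaring in question:

// This for-comprehension:
val pairs =
  for {
    x <- List(1, 2)
    y <- List("a", "b")
  } yield (x, y)

// desugars to nested flatMap/map calls:
val desugared =
  List(1, 2).flatMap(x => List("a", "b").map(y => (x, y)))

// both yield List((1,"a"), (1,"b"), (2,"a"), (2,"b"))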

There might be research on the psychology of programming language learning and program understanding that can shed more light on this.

2 Likes