Pre-SIP: Equivalent of `inline implicit def`

The unrestricted use of implicit conversions is a code smell, which makes several aspects of the Scala type system and type inference fragile and slow. In current Scala 3, conversions require a language import at each use site. I am in favor of tightening the screws further and making use of conversions without that language import an error instead of just a feature warning.
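For readers less familiar with the current rules, here is a minimal sketch of that behavior (the identifier names are mine, not from the proposal): without the language import, applying the conversion below produces a feature warning today; under the proposal it would become an error.

```scala
// Current Scala 3: applying an implicit conversion at a use site requires
// scala.language.implicitConversions in scope there, otherwise the compiler
// emits a feature warning (which this Pre-SIP would turn into an error).
import scala.language.implicitConversions

given Conversion[Int, String] = _.toString

val s: String = 42 // the given Conversion[Int, String] is applied here
```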

See Proposed Changes and Restrictions For Implicit Conversions

I plan to bring this up as a Pre-SIP soon. We had a long backlog of other things that prevented this from being proposed sooner. But it’s not forgotten.

In light of all this I am against making implicit conversions more powerful and more of a feature. In particular combining implicit conversions with macros makes me shudder.

Some libraries will have to be rewritten if that’s no longer possible, but that will be a good thing. We collectively need to wean ourselves off this implicit conversion drug. Maybe some specialized idioms will no longer be expressible. If we get a cleaner language with fewer footguns for it, I am willing to make that trade.

6 Likes

Perhaps, but that’s a fairly casual statement with deep implications. If “fewer footguns” means that popular and useful libraries no longer work as well, that may be a pretty bad trade for the language’s future. I’d recommend careful study of the ramifications before cutting features like this.

(Mind, I don’t know myself how deep the implications are in this specific case. But some of the tools linked above aren’t minor, and making them worse is a potentially significant consequence, especially when the language is already fighting for mindshare.)

11 Likes

I do not see how preventing libraries from existing is a good thing. There are valid use-cases for implicit conversions, and if we do not at least try to provide alternatives for them, we are splitting the ecosystem without a good cause IMO.

9 Likes

I wonder if we should also investigate better foot-armor. For instance, use the code-rewriting tools we already have for Scala 2 to 3 to offer to rewrite implicit conversions (maybe triggered by a @desugar("conversions") annotation); or a compiler flag to yell more expressively when something doesn’t work and implicit conversions touch it.

I don’t think there are a very large number of cases where code subtly fails to produce the correct result because of implicit conversions, are there? Mostly it doesn’t work at all, but you have no idea why; or it does work, but you also have no idea why, because of the conversions.

This suggests to me that it’s worth at least investigating whether the pain points can be addressed without restricting the feature yet further.

2 Likes

In general it’s true that this could lead to pretty wild implicit conversions, but the motivating case for this Pre-SIP is Iron, where it is only used to introduce compile-time checks that restrict usage of the implicit conversion from an unrefined to a refined type, which sounds legitimate to me.

9 Likes

I agree with this at a high level, but the devil is in the details.

My first big worry is that this is fighting the last decade’s battles. It isn’t 2014 anymore, and people going crazy with implicit conversions isn’t really a thing. Sure, it was a thing in the past, but it isn’t a major problem now. Run a poll on Reddit or Twitter or Hacker News and I’m sure you will find that implicit conversions are not top of mind (vs. things like IntelliJ, SBT, compile times, compatibility/stability/maturity, etc.).

We should first find the usages in the wild that we consider problematic, and the usages that aren’t, and then decide what language changes we need to make. The current proposal of “Martin decides what is bad, all these libraries that people love are declared to be code-non-grata” is the tail wagging the dog.

If it turns out that the tradeoff is “we limit a feature that doesn’t affect anyone’s day to day, and wide swathes of the ecosystem break”, that seems like an exceedingly bad trade. Less “Python 3” and more “Elm 0.19” level of bad. (Elm 0.19 limited use of a few key advanced features to core contributors only, without offering alternatives. This resulted in many large codebases being forever unable to upgrade without a huge rewrite, and the callous way the breakage was done damaged trust in the platform. The previously-vibrant community has largely disintegrated.)

My second big worry is that there have been several threads here trying to spark discussion on current use cases for implicit conversions. People are genuinely willing to work together to find a compromise path forward, but those threads largely haven’t seen any engagement from the core Scala 3 team.

Without such discussion and shared understanding, it’s hard to see how we will come up with a good plan to move off implicit conversions. That’s not to say such a plan doesn’t exist, just that to get it we will require much more open collaboration than the proposed “your code will break and that is a sacrifice I am willing to make” strategy described above.

That said, I am still in favor of limiting implicit conversions overall. It just needs to be done for the right reasons, and with the right approach, in order to be successful. The devil is in the details.

29 Likes

I can understand the general approach (we were all there for the last two decades), but for balance: Iron is today the most, perhaps the only, pure Scala 3 library I have seen that made me think that Scala 3 has huge benefits, and not only costs. And the presentation on it at scala.io was a real showcase for Scala 3, with some really nice glimpses of how zero-cost abstractions can be done in Scala 3.

Having the message that the most exciting Scala 3 library I know of is not handling things correctly, and thus shall have its wings clipped, is extremely surprising, and even gloomy.

6 Likes

I don’t think implicit conversions with macros are as scary as they used to be in Scala 2.

In Scala 3, implicit conversions are already more restricted and cleaner than in Scala 2:

  • They require a language import
  • Orphan instances need to be imported explicitly or via a given import
  • They use the Conversion typeclass instead of the special implicit def syntax
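A minimal sketch of the three points above, using the standard scala.Conversion typeclass (the object and identifier names are illustrative):

```scala
import scala.language.implicitConversions // point 1: language import required

object MyConversions:
  // point 3: a Conversion instance instead of an `implicit def`
  given intToBig: Conversion[Int, BigInt] = BigInt(_)

import MyConversions.given // point 2: the orphan instance is imported explicitly

val n: BigInt = 42 // the Conversion[Int, BigInt] is applied here
```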

The macro API is also cleaner, and in Scala 3 macros are less powerful (in a good way). Note that the proposed solution is for blackbox/non-transparent implicit macros, which pose fewer problems.

As far as I know, all you can do with blackbox macros is throw warning/error messages, check things at compile time (e.g. Iron/Refined constraints), and return a “custom” expression which still needs to satisfy the output type of the conversion. That does not sound like much magic to me.

Do you have a specific use in mind that is problematic?

5 Likes

inline given Conversion[T, U] might still be useful even after implicit conversions are removed. Indeed the following will remain possible:

val x: into PosInt = 5 //Compiles
val y: into PosInt = -5 //Error: -5 <= 0

I agree that macros the way they are used in Iron or Refined are among the most well-behaved.

But in general, macros can take very long to compile and can produce arbitrary code. At the very least I want to see a call in the source code that might cause this. So, if one argues for mixing macros and implicit conversions, one also needs to have a convincing story for how to prevent abuses. So far I have not seen that.

Even for refinement checking, is it really that much to ask to write (say) e.narrow if an expression e should be narrowed to a refinement type? Refinement type checking is probably several orders of magnitude slower than normal type checking, so this would be welcome as an indicator for where compilation time is spent. In our own research on refinement types we find explicit narrowing acceptable.
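For concreteness, here is a rough sketch of what such an explicit narrow step could look like. PosInt, fromInt, and narrow are illustrative names, not an existing API, and the check here runs at runtime; a library like Iron can move it to compile time for constant arguments via inline/macros.

```scala
// PosInt is an opaque refinement of Int; the alias is transparent in this file.
opaque type PosInt <: Int = Int

object PosInt:
  // Safe constructor for values not known at compile time.
  def fromInt(i: Int): Option[PosInt] =
    if i > 0 then Some(i) else None

extension (e: Int)
  // Explicit narrowing: the (potentially expensive) refinement check is
  // visible at the call site instead of hiding behind an implicit conversion.
  def narrow: PosInt =
    require(e > 0, s"$e <= 0")
    e

val x: PosInt = 5.narrow      // ok
// val y: PosInt = (-5).narrow // fails the require check at runtime
```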

1 Like

Indeed this does not look so bad:

// Hypothetical typeclass, assumed here rather than taken from the standard
// library: a Conversion whose apply is inline, so macro checks can run at
// each call site.
trait InlineConversion[T, U]:
  inline def apply(inline value: T): U

extension [T](inline t: T)
  inline def convert[U](using inline conversion: InlineConversion[T, U]): U = conversion(t)

given InlineConversion[Int, PosInt] with
  // autoPosIntImpl is a macro implementation assumed to be defined elsewhere
  inline def apply(inline value: Int): PosInt = ${autoPosIntImpl('value)}

val x: PosInt = 5.convert
val y: PosInt = -5.convert //Error: -5 <= 0

Is that the right order of priorities?

Somewhere in my giant pile of buttons, I have one that reads, “There is not now, nor will there ever be, a programming language in which it is the least bit difficult to write bad code”. It’s blithe, but I’ve never found a reason to doubt it.

IMO, the issue isn’t whether it is possible to abuse a feature; the issue is whether a feature invites frequent abuse. It’s true that there was a time when Scala 2 and its idioms around implicit conversions invited that, but it seems to me like that time has passed – in practice, those bad patterns seem to be dying out even in Scala 2, much less Scala 3. The community has, by and large, learned better; that’s way more important than all the protections built into the language.

I suspect you’re fighting yesterday’s war here, and that it’s proving a bit counter-productive…

10 Likes

It is, unfortunately. Scala has a history of being banned in certain companies because existing code is too obscure and hard to maintain. I wish Scala programmers were all responsible and would adhere to the principle of least power (Haoyi’s interpretation, not the finally tagless one, which is very clever but misses the point). But many of them are not. I am not talking about thought leaders who are indeed now largely beyond it, I am talking about random Scala programmers in companies who want to show they are smart or indispensable by writing overly clever code. All the feedback I get is that this is currently problem #1.

Other languages don’t have both implicit conversions and macros. So if we want to have both, it is our due diligence to show how abuses can be kept in check.

1 Like

As a person whose company is currently entirely dependent on Scala, I see it as much more likely that companies will ban Scala if it just breaks their existing code. If you truly care about industry, then talk to and listen to industry. Implicit conversion “obscurity” is not what we worry about.

15 Likes

Where are you getting this feedback? The 2023 Scala Language Survey lists Tooling and Compile Speeds as the top two things that need improving, by an overwhelming margin, from a broad base of 1300ish responses.

I can speak for my own part of industry at Databricks. 1,000s of engineers, 1,000,000s of lines of code. Implicit conversions are not a problem for us. Migration difficulties are. Each and every breaking change materially impacts upgrade timelines, while obscure language features do not cause us difficulty. For Databricks folks, the Scala Survey priorities for improvement basically match our own: tooling and compile speeds are top pain points. Some things we can improve on our own, while for others we need to rely on upstream projects to improve. We haven’t encountered any specifically macro-related compile-time issues.

Perhaps other folks from industry can chip in if they have different experiences

16 Likes

To emphasize the impact this has: we still regularly hear about customers stuck on 2.12 due to Spark. Spark has been released for Scala 2.13 since 3.2.0 / October 2021 – but it’s not the default, and to this date, both Databricks Runtime and Amazon EMR (I guess these are the main hosted Spark offerings) offer 2.12 only.

Hopefully they will move to 2.13 soon (the spark 3.5 release notes say 4.0 will drop Scala 2.12), which should also enable Scala 3 (for3Use2_13).

I can second that. 2.13 was a library release, so the sorts of issues in migrating to 2.13 are different from those in migrating to 3. But still, every breaking change is a lot of work; things like changing the Seq alias to immutable or the removal of breakOut in a 20MLOC codebase mean a lot of work (even with automatic rewrites).

11 Likes

One small step I think would be worthwhile is to rename InlineConversion to something like MacroConversion.

Programmers have a tendency to make everything they can inline, since it can only make their code faster at runtime.
We do so even in cases where the optimiser would have inlined anyway, as we don’t necessarily know that would have been the case!

Therefore, I fear people would use InlineConversion instead of Conversion when not using macros.

Furthermore, “macro” has an aura of danger which implicitly means “You better know what you’re doing”.

We could also lock MacroConversion/InlineConversion behind an import specifically for it.
This would allow us to write a strong and clear warning in the “missing feature import” error message.
Since errors can be multiline, we even have the space to propose alternate, non-macro ways of getting some tasks done.

3 Likes

Agreed – the primary complaint I hear in the field is always about Scala being bad about backward compatibility.

Implicit conversions are a super-niche complaint, that a few opinionated hardcore programmers (rarely the ones in charge) care about. I’ve literally never heard this complaint raised at work, across a succession of companies.

The decision-makers care primarily about compatibility (along with difficulty of finding experienced engineers), and the amount of programmer effort needed to maintain their large code bases. Every change that requires rewrites that aren’t 100% reliably automated is a gigantic negative.

8 Likes

Kind of a boost to the importance of tooling: I have heard complaints about this, but the last time I can remember it being an issue was at least 6-7 years ago; especially since IntelliJ shipped some really nice tooling improvements around implicit conversions and parameters a couple of years back (2018?), it’s been a non-issue.

3 Likes

I’m not sure I qualify as “industry” since my company has barely employed a dozen Scala devs in 15 years, but:

  • implicit conversions were never an issue. I mean, there was a period circa 2012 when their over-use, without community “best practices” in place, was a pain (hard to navigate, for example), but compared to scalac bugs, a non-existent binary compat story, or extremely poor IDE/build tool support at the time, even with dedicated effort to bring Eclipse to some kind of reliability, they were a distant concern;
  • on the other hand, breaking source/binary compat is horrible. The progress here is extremely worthy, but 3 years and counting, we are still not able to migrate to Scala 3, even with a lot of dedicated effort and calling in recognized experts on Scala 3 migration for help (i.e., I stand by Two More Old Cents on Scala 3 Migration // - but hey, progress, we’re almost done). And this is with me directing the project, i.e. with all the knowledge of the extreme importance of being up to date to stay on a sustainable path - in most companies, the challenge we are facing would lead to “Scala is crap, we are migrating away to [rust|typescript|kotlin|whatever is hype today]”.

Apart from source/binary breakage with difficult or non-applicable workarounds and tooling (which are industry killers), the most irritating things for us are about “efficiency”:

  • the slow feedback loop on complex/big projects.
  • the difficulty of putting in place zero-cost abstractions / low vm-churn patterns, especially for super standard FP design patterns (like new types, value classes, enums) - of course only “the best the underlying platform can do”; I’m not asking you to make Valhalla happen. And here, things are better with Scala 3, and it’s a nice enhancement, a positive dynamic.

I still want to point out that Scala remains a super effective language for our case (i.e. a small team with a long-lived, complex project with multiple maintained branches), and we wouldn’t have been able to maintain a complex project with super-low operability overhead for on-premise customers without it. But sometimes, it’s a calling.

9 Likes