Sorry, it’s the class files. But there again, the first step is to realize which sources or class files form part of a system. And in the Java world, that’s not so easy…
I'm not really sure what the problem with the Java world is, but anyway the compiler suggestions don't need to be totally exhaustive or perfectly accurate (I've shown an example where a Rust compiler suggestion was inaccurate and that wasn't a deal breaker). The Scala compiler could recognize only a subset of patterns related to the definition of typeclasses, extension methods, etc., with the intention of suggesting them to the programmer. For example, the Scala compiler could try the following strategies:
- global typeclass recognition, i.e. finding all implicit vals and defs with stable paths and recording the types for which they can be instantiated (without any extra imports in scope)
- local typeclass search, i.e. trying the members of all values / results of methods in scope to see if they provide a relevant typeclass instance
- the same approach for implicit conversions / extension methods
Library authors would then probably reorganize their libraries to fit the suggestion heuristics, so library users would have an easier time finding implicits thanks to compiler suggestions.
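For illustration, a minimal sketch (the library and names are hypothetical) of the kind of instance the first, "global recognition" strategy could index without needing any imports in scope:

object showlib {
  trait Show[A] { def show(a: A): String }
  object Show {
    // Lives at the stable path showlib.Show.intShow and needs no extra imports,
    // so a global index could record "Show[Int] is provided here".
    implicit val intShow: Show[Int] = (i: Int) => i.toString
  }
}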
The above strategy misses some cases, e.g. an implicit method with a stable path whose implicit argument requires a special import:
implicit def methodWithStablePath[T](implicit arg: ThisTypeNeedsExtraImport[T]): Result[T] = ???
The final form of the suggestion heuristics would be the result of a negotiation between library authors (who would vote on the most useful heuristics) and compiler authors (who would reject infeasible solutions).
I think this is a flawed assumption. People often complain about things which aren't the actual problem. For example, a lot of people complain about lazy evaluation in Haskell, but many people who program in Haskell say that isn't the real issue, and that this complaint is really about other issues (for example, how hard it is to diagnose space leaks).
As a matter of fact, when you are new to a language and learning it, you will often complain about something vaguely related to your actual problem rather than the problem itself, and often the complaints can even be contradictory. For example, I have seen people complain about "magic implicits in Scala" and yet have no issue with Guice DI in Java, or with Scala collections (which rely on implicits).
I think when evaluating these questions, you really need to look into the details rather than saying, in a reductionist manner, that whatever is being complained about needs to be reworked because some people complained about it. That is not to say I am dismissing feedback; when people complain about a problem, there is a problem. The point is figuring out where the problem lies.
I have personally had to teach Scala many, many times to new people, and from personal experience the number one issue with implicits is neither the concept nor the syntax. It's the fact that until recently there has been no tooling to reveal both how an implicit is used and where an implicit comes from, plus the fact that the compiler errors for implicits are incredibly misleading.
What do you think will happen when you have an implicit marshaller which in turn asks for an implicit decoder, you are missing the decoder in scope, and the Scala compiler usefully tells you that the marshaller is missing (even though you have an implicit marshaller right there)?
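To make the scenario concrete, here is a hedged sketch with homegrown Marshaller/Decoder typeclasses (not any real library's API):

trait Decoder[A]
trait Marshaller[A]

object Marshaller {
  // The marshaller is derivable, but only if a Decoder is also in scope.
  implicit def fromDecoder[A](implicit d: Decoder[A]): Marshaller[A] =
    new Marshaller[A] {}
}

case class User(name: String)

object Example {
  def render[A](value: A)(implicit m: Marshaller[A]): String = "..."

  // There is no Decoder[User] in scope, yet the classic error is
  // "could not find implicit value for parameter m: Marshaller[User]",
  // pointing at the marshaller rather than the decoder that is actually missing:
  // render(User("Alice"))
}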
This isn't really true. All other languages that have some form of implicits (barring Prolog) have always phrased them in terms of parameter passing. In Haskell it's the same, and in OCaml the proposal for implicits is also the same. None of these languages have come up with keywords or alternate constructs to "hide" the fact that you are passing parameters.
Rust is a bad comparison, since it doesn't even have implicits. Rust is also about as explicit as a language can get. I am not sure why we are even comparing traits and typeclasses to implicits; we are comparing apples to oranges.
I am not trying to sound negative here, but I have issues with the core hypothesis that is the impetus for this proposal, for reasons I have already stated. In my opinion, we have put the cart before the horse: some people were complaining about implicits (and to put that into perspective, the largest complaints were happening around 5-10 years ago and they centered around implicit conversions, which isn't even related to what we are talking about), and because of this we have to redesign everything.
I mean, does this mean we have to completely reinvent how Scala does static typing because people have problems with how types are inferred in Scala, or should we completely reinvent subtyping because of all of the problems associated with it? Or the crazy DSLs/overloading/pure FP monad thingies?
Because I can say one thing: I have heard many more complaints about those things than about implicits. I very rarely hear people complaining about implicits in the public sphere in an intellectually honest manner; often it's just trolling, or it's someone who is already predisposed to not like the language anyway and is stating their "opinion" (i.e. someone who is sold on dynamic typing is almost always going to complain about the complexity of static types in a language like Scala).
Recalling from memory from teaching new people the language, the only real complaints I hear about implicits are how terrible the compiler is at reporting them.
Also, saying that the proposal is incoherent and inconsistent with the rest of the language: using the same reasoning we should also get rid of final, case class, lazy, etc. There are other inconsistencies, such as overloading keywords (i.e. for), which has no precedent in the language. In fact, a lot of things about this proposal seem to be creating precedents which haven't existed before.
This is also a really good point. I mean, implicits are essentially a subset of Prolog, but that doesn't mean that, generally speaking, people like writing programs in terms of proving theorems.
This is honestly the main issue, and I don't see how this SIP addresses it (if it did I would be more amenable to it). The underlying fundamentals of implicits aren't being changed; we are just repackaging them. As far as I can see, you can change the syntax and hide the parameter passing as much as you want, but if the Scala compiler still gives you completely wild error messages then most of this effort is wasted (in fact you can make the problem worse).
If IntelliJ can do it (which it currently does almost flawlessly), then I don't see why the actual compiler can't, especially considering we are going to be shipping libraries with TASTY trees. Honestly, I would rather the effort be spent on making the error reporting around missing/incorrect implicits better. If you take Rust as a language example, that's probably the main thing making it easier for newcomers: it's not how they designed traits, it's the fact that, for the most part, the Rust compiler is really helpful. The rest of the Rust language is actually just as hard/confusing/obtuse as Scala; tracking memory management through linear types is not that easy, especially when you see how people do async programming in Rust (using their so-called "easy" traits).
I think @mdedetrich is correct in saying that error messages are the current biggest problem with implicits, and not the superficial spelling/syntax. The fact that when you get an implicit wrong, you get zero help from the compiler about what to do, and in many cases the compiler actively misleads you: that is what confuses newbies the most, and even experienced people.
This can be seen from @tarsa's comments, @Mocuto's comments, and even @Ichoran's comments on another thread (Proposal: Implied Imports - #11 by soronpo), and from /u/kag0's comment on reddit. I'll chip in my own experience and say it applies to me as well: when learning how to use Futures, and when learning how to use Slick, the imports-are-wrong-and-error-messages-awful property was far more demoralizing than things like the choice of syntax or interference with currying.
Better error messages should definitely be possible in the compiler just by scanning the classpath; no package manager involved. People generally don't have a "forgot to add an SBT dependency to pull in an implicit from another ivy artifact" problem; people have a "forgot to add an import for an implicit already on the classpath, often in the same library" problem. All this stuff is already on the classpath, we just need to scan for it in the compiler and surface it. I think @tarsa makes a reasonably compelling argument that Rust's usability comes from the error reporting, and not from the superficial syntax.
Error messages like:
methodA is not a member of ClassA, but there is an implicit extension foo.bar.ClassAImplicits.extensions available

implicit scala.concurrent.ExecutionContext is not found, but there are implicit instances available at scala.concurrent.ExecutionContext.Implicits.global and scala.concurrent.ExecutionContext.Implicits.parasitic
would make much more of a difference than changing the syntax. In fact, in the last case you can see that people have already manually tried to implement this missing feature using @implicitNotFound annotations, when really this should be the standard auto-generated error message for any implicit that is not found.
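For reference, the manual workaround looks roughly like this; the annotation is real, but the trait and the message text below are made up for illustration:

import scala.annotation.implicitNotFound

@implicitNotFound("No MyContext found. Import myapp.Contexts.default or pass one explicitly.")
trait MyContext

object ContextDemo {
  def run(body: => Unit)(implicit ctx: MyContext): Unit = body

  // Without a MyContext in scope, the custom message above is shown instead of
  // the generic "could not find implicit value for parameter ctx":
  // run(println("hello"))
}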
@jvican let's keep feedback on the proposal's syntax/spelling and other details on its own thread, Proposal To Revise Implicit Parameters. This thread was spun off explicitly to avoid being bogged down in such detailed discussions.
@jvican I’ve moved my comments there, I haven’t been following these discussions so thanks for pointing out I was off topic here.
Yes, this!
Especially when using libraries such as cats, doobie, shapeless, etc., where knowing the correct import syntax.thing._ or import instances.thing._ incantation is essential.
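For concreteness, these are the sort of blanket imports meant here (exact paths vary between library versions):

import cats.implicits._   // cats syntax and instances
import doobie.implicits._ // doobie syntax, e.g. the sql interpolator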
For simple context/injected parameters like ExecutionContext we can use @implicitNotFound. This same logic could easily be extended to have the compiler error list possible instances of the implicit type where @implicitNotFound isn't specified, so long as:
- those instances are defined in the codebase being compiled, or in the library where the implicit trait is defined
- there aren’t too many
It doesn’t even need to be fast, as this is a special case for an error that’s not expected to happen in most compiles.
The other problem is derived types, such as Doobie's Read and Circe's Encoder/Decoder. Here the problem is usually that you have a nested tree of case classes (or equivalent) and some nested member of that structure doesn't have a defined instance of the relevant typeclass.
An error message of the form
Unable to derive an instance of Read[Foo] because a Get instance for nested type UUID was not available
would go a very long way toward helping debug such usages!
Now that we have HLists baked into the language, I can only see such patterns becoming more common, and any improvements to the generic derivation machinery that allow for such errors would be a godsend.
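A minimal homegrown sketch of the nesting problem (Read and Get here only mirror Doobie's names; this is not Doobie's actual derivation machinery):

trait Get[A]
trait Read[A]

object Instances {
  implicit val getInt: Get[Int]       = new Get[Int] {}
  implicit val getString: Get[String] = new Get[String] {}
  // note: no Get[java.util.UUID] is defined

  // "Derives" a Read for a two-column row from Gets for its fields.
  implicit def readPair[A, B](implicit a: Get[A], b: Get[B]): Read[(A, B)] =
    new Read[(A, B)] {}
}

object Query {
  import Instances._
  def select[A](implicit r: Read[A]): Unit = ()

  select[(Int, String)] // compiles: Get[Int] and Get[String] are both available
  // select[(Int, java.util.UUID)] fails, but the error only says that
  // Read[(Int, java.util.UUID)] is missing, not that the real gap is Get[java.util.UUID].
}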
If IntelliJ can do it (which it currently does almost flawlessly), then I don't see why the actual compiler can't, especially considering we are going to be shipping libraries with TASTY trees.
It's because the actual compiler gets invoked with a set of files to compile, and a classpath. The classpath points to a transitively unbounded number of classes. As long as the compiler's interface stays like this, there is no way to establish a boundary between what should be considered and what should not.
A build tool or an IDE has advantages here because it has a project description. It knows what files are part of your project and what your direct dependencies are, so it can choose to search for candidates in that set of files, for instance.
Don’t get me wrong. It would be great if the compiler could give that help, and we should think hard how to do it. It will require a major redesign of the tool chain. In particular it would completely change the interface between compilers, build tools, and IDEs. So it requires buy-in from many people.
That said, I am still convinced that fixing some technical issues is not enough to solve the implicit problem. Yes, some problems can be addressed by tooling. I put a lot of effort into better recursive implicit search diagnostics and that already pays off (for instance, it would correctly diagnose the Marshaller/Decoder problem that you mention). And we can and will do a lot more. But I am convinced we need a better language design as well.
This isn't really true. All other languages that have some form of implicits (barring Prolog) have always phrased them in terms of parameter passing. In Haskell it's the same.
Haskell has implicit parameters, which nobody uses. I was referring to typeclasses which do term inference in a way that hides dictionary parameters completely.
That's not entirely true. Given a Class instance, the compiler can use getResource(...) or getProtectionDomain() to identify the jar file in which a given implicit trait is defined. This forms a very natural boundary for the places where instances of the trait might be searched.
It could be a phased search. First the local project source code, then the jar in which the sought type is defined, then identify types imported in the same package as that definition and transitively scan the jars of those types.
And it’s perfectly okay for the search to be slow, because there’s no way that it could be slower than a human having to dig through pages of documentation and examples to find out what they might be missing!
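A minimal sketch (not compiler internals) of how a class can be traced back to its defining artifact with standard reflection, assuming the class was loaded from a jar or directory on the classpath:

import java.net.URL

object JarLocator {
  // Returns the location (jar or classes directory) that defines the given class,
  // or None when no CodeSource is available (e.g. for JDK bootstrap classes).
  def locationOf(cls: Class[_]): Option[URL] =
    Option(cls.getProtectionDomain.getCodeSource).map(_.getLocation)
}

// Example: JarLocator.locationOf(classOf[scala.concurrent.ExecutionContext])
// typically points at the scala-library jar on the classpath.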
It could be a phased search. First the local project source code, then the jar in which the sought type is defined, then identify types imported in the same package as that definition and transitively scan the jars of those types.
The way I understood it, there is no "sought type". The compiler sees x.foo where foo is an extension method on x that is not co-defined with any part of the type of x. What to do then?
For x.foo with x: X, look for implicit conversions from X to something and extension methods for X. X is the sought type.
Unless I am misunderstanding something, with the new Dotty infrastructure all dependencies on the classpath will actually contain TASTY trees in the Maven jars (rather than just the binary JVM bytecode jars we deal with now), and TASTY trees are basically compact serialized source code that has already been typechecked (this is also how we are "solving" the binary compatibility problem with Dotty).
This should already provide all of the information; it's basically the exact same set of information that IntelliJ has. The only thing build tools need to do is provide the full set of dependencies on the classpath, which they are doing anyway?
Also note that IntelliJ currently relies on the Scala compiler whenever you do a "find where this implicit is being used" in the project, as well as "where is this implicit coming from", because it needs the bytecode of the current project (which is similar to how Metals works).
For x.foo with x: X, look for implicit conversions from X to something and extension methods for X. X is the sought type.
If that were the case we would not need an import. The typical example that fails is @Ichoran's toOk from the implied imports thread:
implicit class EitherCanBeOk[L, R](private val underlying: Either[L, R]) {
  def toOk: Ok[L, R] = underlying match {
    case scala.util.Right(r) => Yes(r)
    case scala.util.Left(l)  => No(l)
  }
}
It is defined on Either but resides with Ok. Given a value e of Either type, and an expression e.toOk, what does the compiler need to do to help?
It should look everywhere, searching for an implicit conversion/extension methods for Either.
The time it takes to look everywhere is irrelevant. At this point the compilation has already failed, and the compiler should go the extra mile to help the user fix their issue.
It should look everywhere, searching for an implicit conversion/extension methods for Either.
Exactly. But what is "everywhere"? Without a project description we don't know. The classpath is not enough; we might have lots of stuff on it that's completely irrelevant to the project at hand. It could even point to malformed files that make the compiler choke or loop.
I think what this would lead to in the end is a model similar to Rust where the build tool (cargo) and the Rust compiler are integrated. We have to rethink the build tool/compiler interface. But given the heterogeneity of build tools in the Java space, that looks like a tough problem.
An even tougher example is the some method from Cats. It's certainly not defined on the source type, nor is it an enrichment of Option. So how can the compiler help if you attempt to call wibble.some without the correct import in scope?
It should scan the classpath. Not everything, and not potentially malformed files: just all the TASTY trees that can be located (because implicits will only be coming from Scala classes anyway). It should identify whether there are any implicits available that would provide this method and then list those possibilities in the reported error, much as we now do with ExecutionContext.
This could be further enhanced with an export directive, allowing library authors to identify exactly which implicits are eligible to be considered in such a search and massively simplifying the problem.
That’s exactly the case in my Rust example presented here: Principles for Implicits in Scala 3 - #16 by tarsa
I provided an example where all definitions are placed in one file, which can be misleading. Consider instead a multi-file project with the following files:
struct_here.rs:
pub struct MyStruct;
impl MyStruct {}
traits/mod.rs:
pub mod trait_here;
traits/trait_here.rs:
use struct_here::MyStruct;
pub trait MyTrait {
    fn print_hello(&self);
}

impl MyTrait for MyStruct {
    fn print_hello(&self) {
        println!("Hello, World!")
    }
}
and finally the main file (library entry point) lib.rs:
pub mod struct_here;
pub mod traits;
use struct_here::MyStruct;
fn entry_point() {
    let instance = MyStruct {};
    instance.<caret is here>
}
At the specified place IntelliJ suggests that I can use the print_hello() method on instance of type struct_here::MyStruct. Note that traits::trait_here::MyTrait is not yet imported (with the use keyword). Also, struct_here.rs doesn't have any reference to traits::trait_here::MyTrait, so IntelliJ really needs to scan everything to provide the relevant suggestion.
When I choose the suggestion, the final code looks like this (lib.rs):
pub mod struct_here;
pub mod traits;
use struct_here::MyStruct;
use traits::trait_here::MyTrait; // automatically added line by IntelliJ
fn entry_point() {
    let instance = MyStruct {};
    instance.print_hello(); // print_hello from autocomplete
}
The Rust compiler does a similar job to IntelliJ, but in a non-interactive way. The Rust compiler doesn't provide auto-complete, but it does suggest the relevant import (use clause) when compilation fails.
There's also the question of why we must import the traits in Rust to make them usable. The answer is simple: with all traits implicitly imported there would be name clashes. Consider this: IeObsi - Online IDE & Debugging Tool - Ideone.com
mod my_struct {
    pub struct MyStruct {}
}

mod traits_1 {
    pub trait Trait1 {
        fn print(&self);
    }

    impl Trait1 for ::my_struct::MyStruct {
        fn print(&self) {
            println!("Good morning!");
        }
    }
}

mod traits_2 {
    pub trait Trait2 {
        fn print(&self);
    }

    impl Trait2 for ::my_struct::MyStruct {
        fn print(&self) {
            println!("Goodbye!");
        }
    }
}

fn main() {
    use traits_1::Trait1;
    use traits_2::Trait2;

    let instance = my_struct::MyStruct {};
    println!("{}", instance.print());
}
Result:
error[E0034]: multiple applicable items in scope
--> src/main.rs:35:29
|
35 | println!("{}", instance.print());
| ^^^^^ multiple `print` found
|
note: candidate #1 is defined in an impl of the trait `traits_1::Trait1` for the type `my_struct::MyStruct`
--> src/main.rs:12:9
|
12 | fn print(&self) {
| ^^^^^^^^^^^^^^^
note: candidate #2 is defined in an impl of the trait `traits_2::Trait2` for the type `my_struct::MyStruct`
--> src/main.rs:24:9
|
24 | fn print(&self) {
| ^^^^^^^^^^^^^^^
If the compiler, who is almost infinitely faster than I am, cannot even find this information, what hope do I, a poor biological creature, have of figuring it out?
It doesn’t need to be infallible, just helpful.