There’s a big difference between rebinding this in the context of an obvious class definition, and rebinding this in completely ordinary-looking code!
What should a user understand when they see this?
val x = foo(bar)
someType.baz {
  x + 1
}
Is x coming from the outside, or is it coming from some imported this scope? No one can tell without looking at the definition. It’s a net loss in terms of language understanding, IMHO.
Now, say x was coming from the outside scope, but that the baz method did in fact import some this scope (where x was not defined). What if someone later adds an x field to the corresponding type? We may break (sometimes silently) all the this-importing user code out there that was capturing a reference to an outside x variable. For instance, the meaning of the code above would change.
This is not dissimilar to surprises that can arise when you’re wildcard-importing an object into your scope (as in import obj._). But people are more careful with this sort of thing, and it catches the eye. Wildcard imports should only be performed on robust APIs that are not expected to change in unexpected ways. But this scope injection exposes people to the API of arbitrary classes, without any visual clue at the use site of what is happening.
The only real difference between my solution and yours is:
value.tap { it =>
  assert(it == value)
}
vs
value.inContext {
  assert(it == value)
}
It doesn’t seem to me that the extra implicits bring any improvement here. Your mechanism is very convoluted.
No. Implicit function types were meant to reduce boilerplate when passing the same implicits over and over (the primary example given by Martin Odersky was the Context type in the Scala/Dotty compiler).
// Context is uselessly bound to ParamType here
def functionWithContext[ParamType: Context](param: Param): ReturnType = {
  val context = implicitly[Context[ParamType]]
  // use 'context' directly
}
Implicit function types let you instead formulate that:
// concise and non-hacky way
// also you can shovel multiple implicit parameters into 'InContext' type
def functionWithContext(param: Param): InContext[ReturnType] = {
  // use 'context' directly
}
Despite the syntax InContext[ReturnType], the implicit parameter doesn’t have to be related to ReturnType in any way. The desugaring could be as follows:
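A minimal sketch of that desugaring in current Scala 3 syntax (Context, its user field, and the Int/String parameter types are made-up stand-ins for the hypothetical names above):

```scala
// Hypothetical Context carrying some shared state
case class Context(user: String)

// The "InContext" type expressed as a context function type.
// Note that Context is not related to the result type R in any way.
type InContext[R] = Context ?=> R

def functionWithContext(param: Int): InContext[String] = {
  // summon[Context] reads the implicitly passed Context
  s"${summon[Context].user}: $param"
}
```

With a given Context in scope, functionWithContext(42) is applied to it automatically; it can also be applied explicitly, as in functionWithContext(42)(using Context("alice")).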
As of now, you can’t even explicitly import this into scope. I think the code below is explicit enough to be comprehensible:
val x = foo(bar)
someType.baz { this => // this screams: caution! things can be shadowed here!
  x + 1
}
That would be very useful in tests:
"sth" must "be red" in test(args) { this => // importing fixture members
... // short test
}
"sth" must "be green" in test(args) { this => // importing fixture members
... // short test
}
"sth" must "be blue" in test(args) { this => // importing fixture members
... // short test
}
private def test(params: Params)(action: Fixture => Unit): Unit = ???
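For comparison, here is roughly how close you can get today without any this-rebinding, by importing the fixture’s members explicitly at the top of each test body (Fixture, its fields, and this particular test helper are made-up stand-ins):

```scala
case class Fixture(color: String, size: Int)

// a stand-in for the proposed `test` helper: build the fixture, run the body
def test(params: Int)(action: Fixture => Unit): Unit = {
  action(Fixture("red", params))
}

def example(): Unit = {
  test(3) { fixture =>
    import fixture.* // today's closest equivalent of importing `this`
    assert(color == "red") // fixture members are usable unqualified after the import
    assert(size == 3)
  }
}
```

The one-line import per test is the small extra cost compared to the proposed `{ this => ... }` form.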
Now, say x was coming from the outside scope, but that the baz method did in fact import some this scope (where x was not defined). What if someone later adds an x field to the corresponding type?
Can’t the same thing be used as an argument against inheritance?
// otherfile.scala
trait T {
  // def x = 5 // user-added field
}

// main.scala
class A {
  def x = 1
  class B extends T {
    def y = 2
    def addThem = y + x // where does x come from?
  }
}
You still have to look at otherfile.scala to figure out where x is coming from.
This is not dissimilar to surprises that can arise when you’re wildcard-importing an object into your scope (as in import obj._). But people are more careful with this sort of thing, and it catches the eye.
In my experience, many if not most Scala libraries recommend, or at least use, wildcard imports in their ‘Getting started’ section. This is often more or less a requirement to get the library working, not something that’s carefully added by the user.
But I’m not married to the idea of being able to rebind this specifically. The mechanism that enables all this in Kotlin is scope-local extensions (which, for Kotlin, is also what enables this-rebinding).
I’ve mentioned this at the very beginning of my answer: in code like yours it’s obvious that you’re creating a new scope into which some things may be implicitly imported (by inheritance). That is not at all the case in regular code where this would be imported Kotlin-style, without any clue at the use site.
On a side note, your example also reminds me of the “fragile base class” problem. Which is why deep, wild class hierarchies are discouraged in favor of small ADTs, type class hierarchies, and modules with clear interfaces.
I think most library authors are careful about what they put in the objects they recommend importing with a wildcard. At least, they should be!
I think we’re all missing that this can already be enriched by scoping in Dotty, via implicit functions:
object StringScope {
  def (s: String) capWords = s.split(" ").map(_.capitalize).mkString(" ")
}

def stringScope[A](f: (given StringScope.type) => A): A = {
  f(given StringScope)
}

object TopLevelScope {
  def (`this`: Any) capWords(s: String) = StringScope.capWords(s)
}

def topScope[A](f: (given TopLevelScope.type) => A): A = {
  f(given TopLevelScope)
}
object App extends App {
  stringScope (
    println("hello world".capWords)
  )
  topScope {
    // this is ok
    println(this.capWords("hello world again"))
    // but not this...
    // println(capWords("hello world again"))
  }
}
The only restriction is that you HAVE to write this as a prefix to call an extension method; you can’t call it as a top-level name. I think this restriction is arbitrary: all scopes in Scala 2 so far have an implicit this, and if extension methods could apply to this when calling an unqualified method, this would in effect allow adding new “top-level” methods by enriching (self: Any).
Highly unlikely. Method calls on the JVM are themselves implemented by hashtable lookups, i.e. a hashmap lookup should have the same cost as calling a method; the difference you see is likely purely an artifact of your profiler.
import java.util

object RowPerformanceTest {
  val dsSize = 100000
  val rowSize = 20
  val repeatCnt = 1000

  var startTime: Long = _
  var endTime: Long = _

  val columnMap: util.HashMap[String, Int] = {
    val result = new util.HashMap[String, Int]
    var j = 0
    while (j < rowSize) {
      result.put(s"column_$j", j)
      j = j + 1
    }
    result
  }

  val dummyArray: Array[Int] = {
    val result = new Array[Int](rowSize)
    columnMap.forEach { (n, i) =>
      result(i) = i
    }
    result
  }

  val columnArray: Array[String] = {
    val result = new Array[String](rowSize)
    columnMap.forEach { (n, i) =>
      result(i) = n
    }
    result
  }

  val dataSet: Array[Array[Long]] = {
    val result = new Array[Array[Long]](dsSize)
    var i = 0
    while (i < dsSize) {
      val array = new Array[Long](rowSize)
      result(i) = array
      var j = 0
      while (j < columnArray.length) {
        array(j) = 1
        j = j + 1
      }
      i = i + 1
    }
    result
  }

  def begin(): Unit = {
    startTime = System.nanoTime()
  }

  def end(): Unit = {
    endTime = System.nanoTime()
  }

  def main(args: Array[String]): Unit = {
    def testByKey(): Unit = {
      var i = 0
      var sum = 0L
      while (i < dataSet.length) {
        var j = 0
        val row = dataSet(i)
        while (j < columnArray.length) {
          sum = sum + row(columnMap.get(columnArray(j)))
          j = j + 1
        }
        i = i + 1
      }
      // println(s"testByKeySum:$sum")
    }

    def testByIndex(): Unit = {
      var i = 0
      var sum = 0L
      while (i < dataSet.length) {
        val row = dataSet(i)
        var j = 0
        while (j < row.length) {
          sum = sum + row(dummyArray(j))
          j = j + 1
        }
        i = i + 1
      }
      // println(s"testByIndex:$sum")
    }

    begin() // warm-up run, not measured
    testByKey()
    end()
    begin()
    for (i <- 1 to repeatCnt) {
      testByKey()
    }
    end()
    println("by key")
    val byKeyTotalTime = BigDecimal(endTime - startTime) / 1000000000
    println(s"  total time:$byKeyTotalTime")

    begin() // warm-up run, not measured
    testByIndex()
    end()
    begin()
    for (i <- 1 to repeatCnt) {
      testByIndex()
    }
    end()
    println("by index")
    val byIndexTotalTime = BigDecimal(endTime - startTime) / 1000000000
    println(s"  total time:$byIndexTotalTime")

    println("ratio")
    println(s"  time:${byKeyTotalTime / byIndexTotalTime}")
  }
}
Output
by key
total time:12.119882852
by index
total time:1.641546377
ratio
time:7.383210746777457631341718711612130
It is 7 times slower.
Maybe I have made a mistake somewhere.
Maybe it is a very unusual case.
When I see value.method(arguments), I expect that either method is defined on the type of value or there is an implicit conversion in scope that provides that method. It doesn’t seem helpful to add receiver methods to that list: I would need to search in three kinds of places instead of two.
Also, the Rust compiler has no problem suggesting the trait import that you forgot to add.
No, it’s more consistent. whatever(a, b) desugars to this.whatever(a, b) if this.whatever exists, always, UNLESS it’s an extension method. This is an inconsistency with usual methods, and it is the ONLY place where member methods are treated differently from extension methods. Applying extension methods here would remove an exception from the language and enable scope extensions, killing two birds with one stone.
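A sketch of that inconsistency in current Scala 3 syntax (Ext, greet, and member are made-up names): an imported extension method applies fine to an explicit this, but the same call written unqualified is not resolved against it.

```scala
object Ext {
  extension (a: Any) def greet: String = s"hi, $a"
}

class C {
  import Ext.*
  def member: String = "member"

  def f: String = member     // unqualified call resolves to this.member
  def g: String = this.greet // the imported extension applies to an explicit `this`
  // def h: String = greet   // does not compile: unqualified names are not
  //                         // resolved against extension methods on `this`
}
```

So member methods get the implicit this-prefix treatment and extension methods don’t, which is the exception being discussed.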
first, in the parameters of the methods and functions we’re in
then in this
then in outer classes
and also in imported members
If something is both imported explicitly and available from this or an outer class, then scalac fails with an ambiguity error (unless the imported member is the same as the member available without the import). Specifying e.g. this.whatever(args) helps resolve that problem.
So, generally, in an expression subject.member(args) the rules for finding subject are different from the rules for finding member (and I haven’t yet touched extension methods / implicit conversions here).
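A minimal sketch of that ambiguity (Defs, Base, and whatever are made-up names): the explicitly imported whatever clashes with the inherited one, and prefixing with this. (or with the object) disambiguates.

```scala
object Defs {
  def whatever(x: Int): Int = x + 1
}

class Base {
  def whatever(x: Int): Int = x * 2
}

class Sub extends Base {
  import Defs.whatever
  // def bad(x: Int) = whatever(x)  // ambiguous: imported vs. inherited
  def viaThis(x: Int): Int = this.whatever(x)   // picks the inherited member
  def viaImport(x: Int): Int = Defs.whatever(x) // picks the imported one explicitly
}
```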
Odersky has written:
Third party serialization packages are typical examples of orphan instances. They require import implied.
To tell the truth, I don’t know a good solution for the orphan problem. It is not rare; we have currently written our own base types for all primitives, and excluding orphans is one of the major aims. I can’t say that the receiver approach is any worse for such a case.
Requiring an import is not the same as excluding. Orphan instances require an import for sanity. Otherwise:
there would be a compilation performance penalty not only during error reporting with import suggestions, but also during normal compilation passes (automatic orphan imports require scanning the whole classpath)
it would be very easy to create ambiguities. Say you had only one Monoid[Int] on the classpath and were happy with automatically imported orphan instances. Then you add some library to your app, and that library brings another orphan Monoid[Int]. Suddenly all of your code that relied on the automatically imported orphan Monoid[Int] instance breaks because of the ambiguity.
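A sketch of that breakage with Scala 3 givens (LibA, LibB, and these particular Monoid instances are made-up): while only one orphan instance is imported, everything resolves; importing the second one as well makes Monoid[Int] ambiguous.

```scala
trait Monoid[A] {
  def empty: A
  def combine(x: A, y: A): A
}

object LibA {
  given intMonoid: Monoid[Int] with {
    def empty = 0
    def combine(x: Int, y: Int) = x + y
  }
}

object LibB {
  // a second orphan Monoid[Int], e.g. multiplicative
  given intMonoid: Monoid[Int] with {
    def empty = 1
    def combine(x: Int, y: Int) = x * y
  }
}

def fold[A](xs: List[A])(using m: Monoid[A]): A =
  xs.foldLeft(m.empty)(m.combine)

def demoOrphans(): Int = {
  import LibA.given
  // Adding `import LibB.given` here as well would make the Monoid[Int]
  // resolution ambiguous and this call would no longer compile.
  fold(List(1, 2, 3, 4))
}
```

With an explicit import the choice is visible at the use site; with automatic orphan resolution the same change in dependencies would break code silently at a distance.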
Going back to receiver methods:
My stance is that Scala should encourage pure code over side-effecting code. Receiver functions are practically fully mutability-oriented, i.e. all examples of receiver-function usage revolve around mutable builders or some other ugly imperative Javaism (and I haven’t switched from Java to Scala only to see more ugly imperative Javaisms).
receiver.function { this =>
  ... here we have new 'this'
}
closely matches what Kotlin’s receiver functions do.
This syntax introduces a small penalty for receiver functions, makes them perfectly comprehensible, and also allows users to opt in or out whenever they want at the use site (receiver functions in Kotlin don’t have that flexibility).
Scala has penalties for mutability-oriented code in other places, e.g.:
case class primary constructor parameters are vals by default - you have to add an explicit var if you want mutability
method and function parameters (and also intermediate values in for-comprehensions) are vals and you can’t change them at all - you need to copy them to some other vars explicitly
the default collections available without a prefix are (almost?) all immutable - you need to explicitly import the mutable ones
you can’t import from a var, but you can import from a val
there’s no continue keyword; return often works by throwing exceptions (so it breaks in async code); break is absent and you need to use scala.util.control.Breaks (which I have never seen used)
etc.; there are plenty of such examples
Therefore, if you’re after mutability-oriented code, you’ll want to avoid Scala anyway, and Scala wants to avoid you. Mutability restrictions in Scala are not as tough as in Haskell (which outright rejects all mutable code not wrapped in the IO type), but Haskell is still a strong inspiration (see scalaz, cats, etc.).
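A few of those defaults, sketched (Point and Counter are made-up names):

```scala
// case class parameters are vals: reassigning them does not compile
case class Point(x: Int, y: Int)

// opting in to mutability must be explicit
case class Counter(var n: Int)

def demoDefaults(): (Int, Int) = {
  val p = Point(1, 2)
  // p.x = 5  // error: reassignment to val

  val c = Counter(0)
  c.n += 1 // fine: n is declared as a var

  // the default collections are immutable; mutable ones need an explicit import
  import scala.collection.mutable
  val buf = mutable.ArrayBuffer(1, 2, 3)
  buf += 4
  (c.n, buf.sum)
}
```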
You’re just saying that they’re different, which is known, and that’s exactly the inconsistency I’m talking about: do they have to be different with respect to extension methods?
That’s good; in particular, the Kotlin example in https://github.com/lampepfl/dotty/issues/5591 shows that Kotlin does resolve extension methods of this unqualified. So at least for the designers of Kotlin it made sense that this.method and method are exactly the same thing, without weird exceptions…