What can make scala more popular?

I think that is a poor vision. IMHO, when a business chooses a language, it always looks at real use cases.
It is very easy to associate Scala with:

  • Functional
  • Parallel
  • Big Data
  • Reactive programming

It is enough to look at recent news, e.g. news from the Scala MOOCs.

There is nothing like "if you need high performance you should avoid the JVM".
Ironically, the opposite point is made in those same messages:

I completely agree with that; for example, we can look at "Direct vs. non-direct buffers".
So I think that if you really care about performance, you should use the right paradigm.
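The buffer distinction is easy to see with `java.nio` directly; a minimal sketch (the buffer size here is arbitrary):

```scala
import java.nio.ByteBuffer

object BufferDemo extends App {
  // Heap buffer: backed by a JVM byte array, managed by the GC
  val heap = ByteBuffer.allocate(1024)
  // Direct buffer: native memory outside the heap; channels can do I/O
  // on it without an extra copy, at the cost of pricier allocation
  val direct = ByteBuffer.allocateDirect(1024)

  println(heap.isDirect)   // false
  println(direct.isDirect) // true
}
```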
They can say: do not use dynamic languages.

They can say we should use access by index, but that does not automatically mean we must use case classes.
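Index-based access without case classes can be sketched in plain Scala by treating a row as an indexed sequence (the column layout and data below are hypothetical):

```scala
object IndexAccessDemo extends App {
  // A "dynamic" row: values addressed by position, no case class required
  type Row = IndexedSeq[Any]

  // Illustrative column positions for a suppliers table
  val ID = 0; val NAME = 1

  val rows: Seq[Row] = Seq(
    Vector(101, "Acme, Inc.", "99 Market Street", "Groundsville", "CA", "95199"),
    Vector(49, "Superior Coffee", "1 Party Place", "Mendocino", "CA", "95460")
  )

  // Project two columns by index -- the code is the same for 6 or 60 columns
  val names = rows.map(r => (r(ID), r(NAME)))
  println(names)
}
```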

Actually, if we look at the Slick getting-started guide, we can see:

  // Insert some suppliers
  suppliers += (101, "Acme, Inc.",      "99 Market Street", "Groundsville", "CA", "95199")
  suppliers += ( 49, "Superior Coffee", "1 Party Place",    "Mendocino",    "CA", "95460")

Actually, it should be an array. Tuples are very inconvenient when we have more than 10 columns (and Scala 2 tuples are limited to 22 elements anyway).

val q2 = for {
  c <- coffees if c.price < 9.0
  s <- suppliers if s.id === c.supID
} yield (c.name, s.name)

Usually our reports have more than 20 columns and return more than 100,000 rows.
I just do not understand how such an abstraction can be considered good.
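The kind of processing I mean can be sketched without any 20-field tuple or class, addressing columns purely by index (the sizes and the chosen column are illustrative):

```scala
object WideReportDemo extends App {
  // Simulate a wide report: 25 columns, 100000 rows, no dedicated row class
  val numRows = 100000
  val numCols = 25
  val rows: Iterator[Array[Double]] =
    Iterator.tabulate(numRows)(i => Array.tabulate(numCols)(c => (i + c).toDouble))

  // Aggregate one column (index 7) across every row -- the code does not
  // change whether the report has 5 columns or 50
  val total = rows.map(_(7)).sum
  println(total)
}
```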

So it seems that Scala is very good for big data, but only if you have fewer than 5 columns per row or a static, object-oriented schema.

IMHO: if you have dynamic rows with a large number of columns, Scala lacks good abstractions for working with them.

Of course, they can say Scala is not the language for everything.
But I am sure that if they want Scala to gain more popularity,
they should look at the TIOBE Index:

  • SQL 1.935%
  • PL/SQL 0.822%
  • Transact-SQL 0.569%
  • Scala 0.442%

It is very important, at least at our company, to be able to comfortably process rows with a large number of columns for big data.