There was an interesting talk at Off the Beaten Track 2018 (co-located with POPL), titled “Explaining Type Errors.”
The gist of the idea was that we should be moving away from “static error messages” and towards “interactive error explanations.” Different people from different backgrounds look for different things in error messages, and displaying too much information would make the errors too verbose. Instead, we should be able to click and expand dependent error explanations (to see the potential sources of the main error), query the types of the terms involved, display the implicits in scope, access the documentation on certain features, etc.
I think this would work particularly well with implicits, which are currently pretty painful to debug, but for which you typically don’t want to display all available information (the entire resolution trace) to the users.
This is one more reason to use error ADTs as the core representation of errors, and to let interpreters do the job of presenting them to users. It also seems feasible to keep a “backward-compatibility” interpreter that renders error messages the old way, so that legacy tools don’t break.
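To make the idea concrete, here is a minimal sketch in Rust of what such an error ADT with two interpreters might look like. All names here (`TypeError`, `render_legacy`, `render_expanded`) are hypothetical illustrations, not the API of any real compiler:

```rust
// Hypothetical sketch: errors as data, rendered by separate interpreters.
#[derive(Debug)]
enum TypeError {
    Mismatch { expected: String, found: String, span: (usize, usize) },
    ImplicitNotFound { target: String, trace: Vec<String> },
}

// "Backward-compatibility" interpreter: flat one-line messages,
// in the shape legacy tools already parse.
fn render_legacy(e: &TypeError) -> String {
    match e {
        TypeError::Mismatch { expected, found, .. } =>
            format!("type mismatch: expected {expected}, found {found}"),
        TypeError::ImplicitNotFound { target, .. } =>
            format!("could not find implicit value for {target}"),
    }
}

// Richer interpreter: expands details (spans, the implicit resolution
// trace) that an interactive UI could reveal on demand.
fn render_expanded(e: &TypeError) -> String {
    match e {
        TypeError::Mismatch { expected, found, span } =>
            format!("type mismatch at {span:?}: expected {expected}, found {found}"),
        TypeError::ImplicitNotFound { target, trace } => {
            let mut out = format!("could not find implicit value for {target}\n");
            for step in trace {
                out.push_str(&format!("  tried: {step}\n"));
            }
            out
        }
    }
}

fn main() {
    let err = TypeError::ImplicitNotFound {
        target: "Ordering[User]".into(),
        trace: vec!["Ordering.by".into(), "Ordering[String]".into()],
    };
    println!("{}", render_legacy(&err));
    println!("{}", render_expanded(&err));
}
```

The point of the design is that the compiler commits only to the data; how much of it to show (one line, or a clickable tree of dependent explanations) is a choice each interpreter makes independently.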
Two notable reactions to the OBT’18 talk were:
- Martin mentioned that tracking precise error sources (as presented in the talk, which targeted a Hindley-Milner-style system) may not scale, since the compiler is typically already under memory pressure. Re-running a more precise type-checking algorithm only when an error is encountered is an option, but it makes users wait even longer for feedback (and, I suspect, maintaining the fast and the precise checkers in parallel would be harder).
- Someone involved in the implementation of Rust mentioned that producing precise errors, and in particular precise position spans, was far from trivial – it required a great deal of adaptation to the compiler and the addition of dedicated runtime structures.