TL;DR
After a year of building production Scala systems where AI agents (Claude, Codex) write 90%+ of the implementation code, I’ve found that FP’s decades-old “weakness” — implementation complexity — becomes irrelevant when AI writes it, while FP’s strength — honest, expressive type signatures — becomes the primary interface between humans and AI.
The full article (with code examples): Ming’s Spell Compendium #4 — The Art of Whipping AI Grunts: FP’s Great Comeback?
The core argument
In an AI-native workflow, humans review signatures, AI writes implementations. This flips the traditional readability calculus:
```scala
// Human reviews this (1 line, complete information):
def fetchUser(id: UserId): IO[Either[AppError, User]]

// AI writes this (humans never need to read it):
EitherT(fetchUser(id))
  .subflatMap(validate)
  .semiflatMap(user => fetchScore(user.email).map(Profile(user, _)))
  .bimap(toHttpResult, toHttpResult)
  .merge // collapses both sides to IO[HttpResult]; no .value needed after merge
```
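For concreteness, here is a minimal sketch of what the `AppError` side of that signature could look like. The case names and status mapping are illustrative assumptions, not the article's actual definitions:

```scala
// Hypothetical sealed ADT for the AppError in the signature above.
sealed trait AppError
object AppError:
  final case class NotFound(id: String)    extends AppError
  final case class Invalid(reason: String) extends AppError

// Exhaustive match: a forgotten case is flagged at compile time,
// so the error contract survives even when no one re-reads the body.
def toHttpStatus(e: AppError): Int = e match
  case AppError.NotFound(_) => 404
  case AppError.Invalid(_)  => 400
```

A human can review the two lines of the ADT and the signature of `toHttpStatus` without ever reading the combinator-heavy code that produces the errors.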
Scala-specific practices covered
- `sealed trait` + ADT error enums over exception hierarchies — compiler-enforced exhaustiveness as the contract between AI sessions
- `opaque type` (ProjectId, OrgId) — eliminating the parameter mix-ups that AI agents are surprisingly prone to
- `EitherT` / cats combinators — AI handles the "alien scripture" effortlessly; humans never need to
- Tagless final discipline — why "always use tagless final" is a slogan, not an executable rule, and what actionable rules look like
- Metals MCP vs grep — when to use LSP vs text search (given/implicit resolution, extension methods, overload disambiguation)
- Rule engineering — writing CLAUDE.md / agent rules with military-grade precision to prevent style drift across stateless AI sessions
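To illustrate the opaque-type point, a minimal Scala 3 sketch. The ID names follow the article, but the constructors and the `describe` call site are my own assumptions:

```scala
// Zero-cost ID wrappers: Strings at runtime, distinct types at compile time.
object Ids:
  opaque type ProjectId = String
  opaque type OrgId     = String
  object ProjectId:
    def apply(s: String): ProjectId = s
  object OrgId:
    def apply(s: String): OrgId = s

import Ids.*

// A call site that would silently accept two swapped Strings
// now refuses to compile if the arguments are mixed up.
def describe(project: ProjectId, org: OrgId): String =
  s"project=$project org=$org"

// describe(OrgId("acme"), ProjectId("p-42"))  // does not compile: swapped
```

The whole mechanism erases at runtime, so the safety is free; the compiler, not a reviewer, catches the transposed-argument class of AI mistakes.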
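The tagless-final bullet can be made concrete with a small sketch of one "actionable rule" of the kind the article contrasts with the slogan, e.g. "business logic depends only on a capability trait, never on a concrete effect type." The trait and interpreter below are hypothetical, not taken from the article:

```scala
// Capability trait: the abstract effect F[_] is all the logic ever sees.
trait UserRepo[F[_]]:
  def fetch(id: String): F[Option[String]]

// Test interpreter with no effect at all (F = Id);
// production code would supply a UserRepo[IO] instead.
type Id[A] = A
val inMemory: UserRepo[Id] = new UserRepo[Id]:
  def fetch(id: String): Id[Option[String]] =
    if id == "u1" then Some("Ada") else None
```

A rule phrased this way is checkable in review (does any logic mention `IO` directly?), which is what makes it usable as an instruction to a stateless AI agent.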
The ironic punchline
FP has been criticized for decades as “unreadable without a PhD.” But humans now carefully read signatures — FP’s most readable part. And humans skim implementations — FP’s most off-putting part.
Discussion questions
- Has anyone else noticed AI agents performing measurably better with rich type signatures vs stringly-typed code?
- For those using Metals MCP or similar LSP tooling with AI agents — what's your experience with given/implicit resolution?
- The article argues that helper-function extraction should require human approval in AI-native codebases (DRY becomes counterproductive when agents have no shared memory). Controversial — thoughts?
I’d love to hear from anyone experimenting with AI-assisted Scala development, whether you agree or violently disagree.