Scala Revisit 2025

Introduction

This page summarizes stuff I needed when revisiting Scala. The last time I looked at this was 2021 and I guess things have changed a bit, both for me and for Scala.

Building Scala Projects

There appear to be two key ways to do this: with Mill or sbt.
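
I went with sbt for everything below, but for reference a minimal Mill build might look roughly like this (the module name is hypothetical):

// build.sc - Mill's build file, analogous to build.sbt
import mill._, scalalib._

object dvdrental extends ScalaModule {
  def scalaVersion = "2.13.16"
  def ivyDeps = Agg(
    ivy"com.typesafe.slick::slick:3.6.1"
  )
}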

Create Project

To do this with sbt, you run

sbt new playframework/play-scala-seed.g8
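
The seed gives you the usual Play layout, roughly:

app/controllers/        # controller classes
app/views/              # Twirl templates
conf/application.conf   # configuration (HOCON)
conf/routes             # route definitions
build.sbt
project/plugins.sbt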

build.sbt

A bit about packages so I remember. They are made up of a Group ID, an Artifact ID and a Version, e.g.

"org.playframework" %% "play-slick" % "6.2.0"

You can go to https://index.scala-lang.org/ to find packages. An example of a build.sbt is

name := """play-scala-seed"""
organization := "nz.co.bibble"

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayScala)

scalaVersion := "2.13.16"

// Add individual dependencies
libraryDependencies += guice
libraryDependencies += "org.scalatestplus.play" %% "scalatestplus-play" % "7.0.2" % Test

// Add Database and Slick dependencies
libraryDependencies ++= Seq(
  "org.playframework" %% "play-slick" % "6.2.0",
  "org.playframework" %% "play-slick-evolutions" % "6.2.0",
  "org.postgresql" % "postgresql" % "42.7.3",
  "com.typesafe.slick" %% "slick" % "3.6.1",
  "com.typesafe.slick" %% "slick-hikaricp" % "3.6.1"
)
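
Worth remembering the difference between % and %%. The %% operator appends the Scala binary version to the artifact name, while plain % is for Java libraries with no Scala suffix, e.g.

// On Scala 2.13 this resolves the artifact "slick_2.13"
"com.typesafe.slick" %% "slick" % "3.6.1"
// Plain Java artifact, no Scala-version suffix
"org.postgresql" % "postgresql" % "42.7.3"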

Commands I have used

Seems to be a bit Maven-esque. The commands I have used so far are

sbt update // Update Packages, looks like vscode handles this too.
sbt reload
sbt run
sbt compile
sbt clean
sbt scalafixAll

scalafixAll

This is the sbt plugin for Scalafix, a refactoring and linting tool for Scala. For instructions on how to install the plugin, refer to the website at https://github.com/scalacenter/sbt-scalafix. When I installed it I needed to add the following to project/plugins.sbt.

addSbtPlugin("ch.epfl.scala" % "sbt-scalafix" % "0.14.3")

I also needed to create a .scalafix.conf

rules = [
  RemoveUnused,
  DisableSyntax
]

DisableSyntax.noVars = true
DisableSyntax.noNulls = true
DisableSyntax.noThrows = true

This did reorganize my imports and remove unused ones. It should definitely be run at check-in and in CI/CD.
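
One gotcha: the RemoveUnused rule needs SemanticDB output from the compiler. Per the sbt-scalafix docs, you enable it in build.sbt along these lines:

// Emit SemanticDB so semantic rules like RemoveUnused can run
ThisBuild / semanticdbEnabled := true
ThisBuild / semanticdbVersion := scalafixSemanticdb.revision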

Finding Packages

I had yet to discover the way to find package dependencies. The individual documentation has some information, e.g. https://www.playframework.com/documentation/3.0.x/PlaySlick, but I am looking more for namespaces. Hang on, let's ask the robot. It seems https://index.scala-lang.org is the way to go.

VS Code

Took a while to get this going. It is always the ecosystem which takes the time. So I guess you install the Scala megapack or whatever it is called, then create a launch.json. The trick is to set the buildTarget correctly. Under "Metals: Display Build Target Info" it offered me "dvdrental_api_scala-build", "root" and "root-test". I of course chose the name of the project - derrrrr.

{
  "configurations": [
    {
      "type": "scala",
      "request": "attach",
      "name": "dvdrental",
      "buildTarget": "root",
      "hostName": "localhost",
      "port": 5005
    }
  ]
}

Now, set your breakpoints and run in debug with

sbt -jvm-debug 5005 run

REST API

When I build a REST API I usually use the Controller -> Service -> DAO approach. I will try to touch on things as they happen so I capture the struggles.

Configuration

I need to pass configuration to my project as always. With Slick you can do this by putting the following into application.conf. The format is called HOCON, which stands for Human-Optimized Config Object Notation. It can substitute values from environment variables.

slick.dbs.default.profile = "slick.jdbc.PostgresProfile$"
slick.dbs.default.db {
    driver = "org.postgresql.Driver",
    url = ${?DATABASE_URL}
    user = ${?DATABASE_USER}
    password = ${?DATABASE_PASSWORD}
}
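
The ${?VAR} form only sets a key when the environment variable exists, so you can layer a local default underneath it. A sketch, assuming a local dvdrental database:

slick.dbs.default.db {
    # default for local development
    url = "jdbc:postgresql://localhost:5432/dvdrental"
    # only overrides the above when DATABASE_URL is set
    url = ${?DATABASE_URL}
}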

Dependency Injection

Never used this before and it is here to show an example; I need to check I am doing things the right way. So basically you write the classes as you would normally do. Here is the DAO trait and its implementation. Note interfaces are traits in Scala. I am a bit rusty in Scala hahaha

trait ActorDAO {
  def all(): Future[Seq[Actor]]
...
}

And the implementation.

@Singleton
class ActorDAOImpl @Inject() (
    protected val dbConfigProvider: DatabaseConfigProvider
)(implicit ec: ExecutionContext)
    extends HasDatabaseConfigProvider[JdbcProfile]
    with ActorDAO {

  import profile.api._

  private val actors = TableQuery[ActorTable]

  // Fetch all actors
  def all(): Future[Seq[Actor]] =
    db.run(actors.result)
...
}
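
Other queries follow the same shape. As a hedged sketch (findById is my illustration, not from the project), a filtered query inside the same class would be:

  // Fetch a single actor by primary key
  def findById(id: Long): Future[Option[Actor]] =
    db.run(actors.filter(_.actorId === id).result.headOption)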

Then we need a class to hold the mapping of trait to implementation, so I made a file app/modules/DaoModule.scala

class DaoModule extends AbstractModule {
  override def configure(): Unit = {
    bind(classOf[ActorDAO]).to(classOf[ActorDAOImpl])
  }
}

And then we need to have this loaded at startup via the application.conf

play.modules.enabled += "modules.DaoModule"

Routing

This is relatively easy as it is done via configuration in a file under conf/routes

GET /actor controllers.ActorController.getAll()
GET /film  controllers.FilmController.getAll()
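
The routes point at controller actions. The controller is not shown above, but a minimal sketch, assuming an implicit Writes[Actor] is in scope for the JSON conversion, would be something like:

package controllers

import javax.inject._
import scala.concurrent.ExecutionContext

import play.api.libs.json._
import play.api.mvc._

import dao.ActorDAO

@Singleton
class ActorController @Inject() (
    cc: ControllerComponents,
    actorDAO: ActorDAO
)(implicit ec: ExecutionContext)
    extends AbstractController(cc) {

  // Fetch all actors and render them as JSON; assumes an implicit
  // Writes[Actor], e.g. Json.writes[Actor] in the Actor companion
  def getAll(): Action[AnyContent] = Action.async {
    actorDAO.all().map(actors => Ok(Json.toJson(actors)))
  }
}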

DAO

Well, I made an implementation of the ActorDAO but also needed to make an ActorTable class, which is really just a mapper from the database to the case class. I will need to understand the def * bit again. I understand it is a two-way mapping from a Tuple (not just two things in Scala) to a case class. The object is a companion object, effectively a static holder on ActorTable, a bit Kotlin-esque - haha

class ActorTable(tag: Tag) extends Table[Actor](tag, "actor") {
  def actorId = column[Long]("actor_id", O.PrimaryKey, O.AutoInc)
  def firstName = column[String]("first_name")
  def lastName = column[String]("last_name")
  def lastUpdate = column[Timestamp]("last_update")
  def * = (
    actorId,
    firstName,
    lastName,
    lastUpdate
  ) <> (Actor.tupled, Actor.unapply)
}

object ActorTable {
  val actors = TableQuery[ActorTable]
}

Funnily enough this is removed in Scala 3 - well, sort of. They replaced it with mapTo, but it only supports 12 fields. So for Actor all good, but for Film not so good.

class ActorTable(tag: Tag) extends Table[Actor](tag, "actor") {
  def actorId = column[Long]("actor_id", O.PrimaryKey, O.AutoInc)
  def firstName = column[String]("first_name")
  def lastName = column[String]("last_name")
  def lastUpdate = column[Timestamp]("last_update")
  def * : ProvenShape[Actor] = (
    actorId,
    firstName,
    lastName,
    lastUpdate
  ).mapTo[Actor]
}

object ActorTable {
  val actors = TableQuery[ActorTable]
}

For Film I had to go back to the Tuple approach, but to do this I needed to add tupled to the Film case class's companion object

package models

case class Film(
    filmId: Long,
    title: String,
    description: String,
    releaseYear: String,
    languageId: Long,
    rentalDuration: Long,
    rentalRate: Double,
    length: Long,
    replacementCost: Double,
    rating: String,
    lastUpdate: String,
    specialFeatures: List[String],
    fullText: String
)

object Film {
  val tupled = Film.apply.tupled
}

And revert to the old way of doing it. In Scala 3, tupled is no longer free and you have to define it yourself.

class FilmTable(tag: Tag) extends Table[Film](tag, "film") {
  def filmId = column[Long]("film_id", O.PrimaryKey, O.AutoInc)
...
  def fullText = column[String]("fulltext")

  def * : ProvenShape[Film] = (
    filmId,
...
    fullText
  ) <> (Film.tupled, Film.unapply)
}

object FilmTable {
  val films = TableQuery[FilmTable]
}

I struggled a lot with the syntax for the def * projection, so here is a breakdown.

| Syntax Component | Meaning | Plain English |
|---|---|---|
| `def *` | Defines a method named `*` | Slick convention for mapping a DB row to a Scala object |
| `: ProvenShape[Film]` | Return type of the method | "This mapping produces a `Film` object" |
| `(filmId, title, ..., fullText)` | Tuple of table columns | These are the columns Slick reads from the DB |
| `<>` | Slick's two-way mapping operator | Bridges between tuple and case class |
| `Film.tupled` | Converts a tuple into a `Film` | Used when reading from the DB |
| `Film.unapply` | Converts a `Film` into a tuple | Used when writing to the DB |

When you dig deeper it starts to make sense. The <> is just a method, e.g.

def <>[P, R](f: P => R, g: R => Option[P]): MappedProjection[R, P]
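
To make P and R concrete, here is a hedged two-column sketch. The language table exists in dvdrental, but this exact mapping is my illustration, using explicit functions instead of tupled/unapply:

import slick.jdbc.PostgresProfile.api._

case class Language(languageId: Long, name: String)

class LanguageTable(tag: Tag) extends Table[Language](tag, "language") {
  def languageId = column[Long]("language_id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")

  // f: P => R builds the case class from the column tuple,
  // g: R => Option[P] deconstructs it again for inserts
  def * = (languageId, name) <> (
    (t: (Long, String)) => Language(t._1, t._2),
    (l: Language) => Some((l.languageId, l.name))
  )
}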

Custom Database Types

In the Postgres database we have custom types. One is Rating, which can be one of G, PG, PG-13, R, NC-17. To mimic this in Rust I used an enum. This can be done in Scala too. First we create a type. It got a bit messy as I wanted to cater for bad data.

package models

// Scala 3 enum with explicit DB string value (keeps DB labels flexible)
enum Rating(val dbValue: String) {
  case G extends Rating("G")
  case PG extends Rating("PG")
  case PG13 extends Rating("PG-13")
  case R extends Rating("R")
  case NC17 extends Rating("NC-17")
  // Unknown preserves the original DB value when the database contains
  // a label not currently modeled by the enum. This avoids throwing on
  // unexpected database values and keeps the raw text available.
  case Unknown(override val dbValue: String) extends Rating(dbValue)
}
...

Now we can handle the Unknown values.

object Rating {
  // Explicit list of known singleton cases. We avoid using `values` because
  // the enum also defines a non-singleton `Unknown` case and Scala won't
  // generate a `values` array for enums with non-singleton cases.
  val known: Seq[Rating] = Seq(G, PG, PG13, R, NC17)

  def fromDb(s: String): Option[Rating] = known.find(_.dbValue == s)

  /** Convert from a DB string to a Rating, but never throw: unknown labels are
    * wrapped in `Rating.Unknown(value)`.
    */
  def fromDbOrUnknown(s: String): Rating = fromDb(s).getOrElse(Unknown(s))

  // throws if not found (use with caution)
  def unsafeFromDb(s: String): Rating =
    fromDb(s).getOrElse(
      throw new IllegalArgumentException(s"Unknown Rating value: $s")
    )
...
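
A quick sanity check of the behaviour (hypothetical REPL session):

Rating.fromDb("PG-13")        // Some(PG13)
Rating.fromDb("XX")           // None
Rating.fromDbOrUnknown("XX")  // Unknown("XX"), never throws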

And now the JSON stuff too.

  // Play JSON implicits so Json macros can find them
  import play.api.libs.json._

  implicit val ratingWrites: Writes[Rating] = Writes(r => JsString(r.dbValue))
  implicit val ratingReads: Reads[Rating] = Reads {
    case JsString(s) =>
      fromDb(s) match {
        case Some(r) => JsSuccess(r)
        case None    => JsError(s"Unknown rating: $s")
      }
    case _ => JsError("Rating must be a string")
  }

  // See below for the formatting.
  implicit val ratingFormat: Format[Rating] = Format(ratingReads, ratingWrites)
}

Now in utils/JsonFormats we write the Formatter. This is quite a bit of code for one enum and we are not done yet.

package utils

import models.Rating
import play.api.libs.json._

object JsonFormats {
  implicit val ratingFormat: Format[Rating] = new Format[Rating] {
    def writes(r: Rating): JsValue = JsString(r.dbValue)
    def reads(json: JsValue): JsResult[Rating] =
      json.validate[String].flatMap { s =>
        Rating.fromDb(s) match {
          case Some(r) => JsSuccess(r)
          case None    => JsError(s"Unknown rating: $s")
        }
      }
  }
}

In Postgres we need to handle this new type. We do this by defining a ColumnType and providing functions to and from the database.

package db

import com.github.tminglei.slickpg._
import play.api.libs.json.JsValue
import play.api.libs.json._
import models.Rating
import models.Year

import slick.jdbc.JdbcType
import slick.ast.BaseTypedType

trait MyPostgresProfile
    extends ExPostgresProfile
    with PgArraySupport
    with PgDate2Support
    ....  {
  def pgjson =
    "jsonb" // jsonb support is in postgres 9.4.0 onward; for 9.3.x use "json"

  // Add back `capabilities.insertOrUpdate` to enable native `upsert` support; for postgres 9.5+
  override protected def computeCapabilities: Set[slick.basic.Capability] =
    super.computeCapabilities + slick.jdbc.JdbcCapabilities.insertOrUpdate

  override val api = MyAPI

  object MyAPI
      extends ExtPostgresAPI
      with ArrayImplicits
      with Date2DateTimeImplicitsDuration
      ... {

    // Add custom column Rating mapping
    implicit val ratingColumnType: JdbcType[Rating] & BaseTypedType[Rating] =
      MappedColumnType.base[Rating, String](
        // to DB
        (r: Rating) => r.dbValue,
        // from DB (never throws; returns Unknown(value) if not found)
        (s: String) => Rating.fromDbOrUnknown(s)
      )
...
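
For the imports further down to resolve there also needs to be a concrete instance. The usual slick-pg pattern is a singleton object extending the trait:

// Singleton so db.MyPostgresProfile.api._ has something to import from
object MyPostgresProfile extends MyPostgresProfile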

Now at last we can use it in our FilmTable class.

package dao

import models.Film

import db.MyPostgresProfile.api._
import db.MyPostgresProfile.MyAPI.strListTypeMapper
import db.MyPostgresProfile.MyAPI.ratingColumnType
import db.MyPostgresProfile.MyAPI.yearColumnType
import slick.lifted.ProvenShape
import models.{Rating, Year}

class FilmTable(tag: Tag) extends Table[Film](tag, "film") {
  def filmId = column[Long]("film_id", O.PrimaryKey, O.AutoInc)
  def title = column[String]("title")
...
  def rating = column[Rating]("rating", O.SqlType("rating"))
...
  def fullText = column[String]("fulltext", O.SqlType("tsvector"))
  def * : ProvenShape[Film] = (
    filmId,
    title,
...
    rating,
    fullText
  ) <> (Film.tupled, Film.unapply)
}
....

Dependency Injection with Guice

Gosh, this looks an awful lot like Dagger for Android. This is a package called Guice (https://github.com/google/guice). It basically seems to work the same as Dagger. You do the following for this simple example.

  • Create your thing, e.g. a Service
  • Create a module
  • Add the module to the application.conf

Create Service

I wanted to emulate the Update JWKS service that I wrote in Rust. I had already done it in Java, but it does seem rather easy to do in Scala. (Wish it had two l's, Scalla, as then I would pronounce it correctly.) The service uses something called the ActorSystem provided with Play 3, which is now Apache Pekko, formerly Akka, after a bun fight around the BSL. It doesn't get much simpler than this.

@Singleton
class JwksRefreshService @Inject() (
    actorSystem: ActorSystem,
    verifier: JwtVerifier,
    configuration: Configuration,
    lifecycle: ApplicationLifecycle
)(implicit ec: ExecutionContext)
    extends Logging {

  // Log at instantiation
  logger.info("Starting JwksRefreshService to periodically refresh JWKS keys")

  private val refreshIntervalSeconds = configuration
    .getOptional[Int]("keycloak.jwks-refresh-seconds")
    .getOrElse(600)

  // schedule via ActorSystem scheduler
  private val cancellable: Cancellable =
    actorSystem.scheduler.scheduleAtFixedRate(
      0.seconds,
      refreshIntervalSeconds.seconds
    )(() => {
      logger.info(s"Refreshing JWKS keys every $refreshIntervalSeconds seconds")

      // trigger refresh asynchronously
      verifier.refreshKeys()
      ()
    })

  // cancel the scheduled job on application shutdown
  lifecycle.addStopHook { () =>
    try {
      logger.info(
        "Shutting down JwksRefreshService and cancelling scheduled refresh"
      )
      cancellable.cancel()
    } catch {
      case t: Throwable =>
        logger.warn("Error cancelling JWKS refresh cancellable", t)
    }
    scala.concurrent.Future.successful(())
  }
}

Create a module

Now, like Dagger, we wrap it in a module. And just like Dagger we provide the necessary components.

class AuthenticationModule extends AbstractModule {
  override def configure(): Unit = {
    bind(classOf[authentication.services.JwksRefreshService]).asEagerSingleton()
  }

  @Provides
  @Singleton
  def provideJwtVerifier(
      ws: WSClient,
      configuration: Configuration
  )(using ec: ExecutionContext): JwtVerifier = {
    val issuer = configuration
      .getOptional[String]("keycloak.issuer")
      .getOrElse(
        throw new IllegalStateException("keycloak.issuer not configured")
      )

    val audience = configuration
      .getOptional[String]("keycloak.audience")
      .getOrElse(
        throw new IllegalStateException("keycloak.audience not configured")
      )

    val verifier = new JwtVerifier(ws, issuer, audience, Clock.systemUTC())

    verifier
  }
}

Add the module to the application.conf

And just for completeness, here it is in application.conf

play.modules.enabled += "modules.AuthenticationModule"