Is there a future for Scala Future? Or is there only ZIO?

Concurrency in Scala

How does plain Scala approach concurrency? The Future "monad" is the answer (the actor model was also part of Scala but was deprecated in Scala 2.10). Everyone has used or still uses Scala Futures. People coming to Scala from Java are thrilled by the API it offers (compared to Java's Future). It is also quite fast and composes nicely. As a result, Future is the first choice wherever an asynchronous operation is required, both for performing time-consuming computations and for calling external services – everything that may happen in the future. It makes writing concurrent programs a lot easier.

Basic Future semantics

Scala's Future[T], found in the scala.concurrent package, is a type that represents a computation expected to eventually produce a value of type T. The computation might also break or time out, so a completed future may be either successful or failed; in the failed case it contains an exception instead of a value.
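For example, a minimal sketch of both outcomes (nothing here is specific to this post, just the standard API):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

// the block runs asynchronously and completes with a value or an exception
val answer: Future[Int] = Future { 42 / 1 } // change the divisor to 0 to get a failed future

answer.onComplete {
  case Success(value) => println(s"completed with $value")
  case Failure(error) => println(s"failed with $error")
}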

Future vs functional programming

Let's look at Scala Future from the functional programming perspective. Technically it is a monad. But is Future really a monad?

What is a monad exactly? Briefly speaking, a monad is a container defining at least two functions on a type A:

  • identity (unit): def unit[A](x: A): Future[A]
  • bind (flatMap): def bind[A, B](fa: Future[A])(f: A => Future[B]): Future[B]

Additionally, these functions must satisfy three laws: left identity, right identity and associativity.
From the mathematical perspective monads only describe values.
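As a quick illustration, here is a sketch of checking the three laws for Future by comparing awaited results (x, f and g are arbitrary choices made for this example):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val x = 1
val f: Int => Future[Int] = i => Future(i + 1)
val g: Int => Future[Int] = i => Future(i * 2)

def run[A](fa: Future[A]): A = Await.result(fa, 1.second)

// left identity: unit(x).flatMap(f) == f(x)
assert(run(Future.successful(x).flatMap(f)) == run(f(x)))

// right identity: m.flatMap(unit) == m
val m = f(x)
assert(run(m.flatMap(Future.successful)) == run(m))

// associativity: m.flatMap(f).flatMap(g) == m.flatMap(a => f(a).flatMap(g))
assert(run(m.flatMap(f).flatMap(g)) == run(m.flatMap(a => f(a).flatMap(g))))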

Scala Future apparently satisfies all of the above, so it can be called a monad. However, there is also another view saying that a Future used for wrapping side effects (like calling an external API) cannot be treated as a monad. Why? Because then it is no longer a value.

What is more, a Future executes upon construction. This makes it hard to preserve referential transparency, which should allow substituting an expression with its evaluated value.

Example:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def sideEffects = (
  Future { println("side effect") },
  Future { println("side effect") }
)

sideEffects
sideEffects

It produces the following output:

side effect
side effect
side effect
side effect

Now, if Future were a value, we would be able to extract the common expression:

lazy val anEffect = Future{println("side effect")}
def sideEffects = (anEffect, anEffect)

And calling it like this should present the same results as in the previous example:

sideEffects
sideEffects

But it does not, it prints:

side effect

The first call to sideEffects runs the future and caches the result. When sideEffects is called a second time, the code inside the Future is not executed at all.

This behavior clearly breaks referential transparency. Whether it also makes Future not a monad is a far longer discussion, so let's leave it for now.

Another problem with Future is the ExecutionContext. They are inseparable. Future (and its functions like map, foreach, etc.) needs to know where to execute:

def foreach[U](f: T => U)(implicit executor: ExecutionContext): Unit

Typically, the scala.concurrent.ExecutionContext.Implicits.global execution context is imported everywhere Future is used. This is bad practice because the execution context is decided early and then fixed; it also makes it impossible for callers of a function to decide on which ExecutionContext they want to run it. It is of course possible to make the ExecutionContext a parameter of the function, but then it propagates through the whole codebase: it needs to be added to the entire stack of function calls. Boilerplate code. Preferably we want to decide on the context on which we execute the functions as late as possible, generally when the program starts the actual execution.
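A sketch of what threading the context through the call stack looks like (fetchUser and Main are illustrative names, not part of any library):

import scala.concurrent.{ExecutionContext, Future}

// the function only declares that it needs *some* ExecutionContext;
// the caller decides which one
def fetchUser(id: Long)(implicit ec: ExecutionContext): Future[String] =
  Future(s"user-$id")

object Main extends App {
  // the concrete context is chosen here, at the edge, as late as possible
  implicit val ec: ExecutionContext = ExecutionContext.global
  fetchUser(1L)
}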

Future performance

Let's look at the performance of Scala Future. In a later section we will compare it to the construct I suggest as a replacement.

We will run two computations:

  • eight concurrent and sequential operations computing the trigonometric tangent of an angle and returning the sum of the values
  • three concurrent and sequential operations finding all prime numbers lower than n and summing them

The source code for the test can be found here: https://github.com/damianbl/scala-future-benchmark

The machine used to run this benchmark is an Intel(R) Core(TM) i9 2.4 GHz 8-Core with 32 GiB of memory, running macOS Catalina 10.15.2.

Results:

[info] Result "com.dblazejewski.benchmark.FutureMathTanBenchmark.futureForComprehensionTestCase":
[info]   17049.404 ±(99.9%) 259.822 ns/op [Average]
[info]   (min, avg, max) = (16682.279, 17049.404, 18281.647), stdev = 346.855
[info]   CI (99.9%): [16789.582, 17309.226] (assumes normal distribution)

What instead of Future?

Now that we know some of the limitations of Scala Future, let's introduce a possible replacement.

In recent months there has been a lot of hype around the ZIO library.

At first glance, ZIO looks really powerful. It provides effect data types that are meant to be highly performant (we will see this in the performance tests), functional, easily testable and resilient. They compose very well and are easy to reason about.

ZIO contains a number of data types that help to run concurrent and asynchronous programs. The most important are:

  • Fiber – models an IO that has started running; it is more lightweight than a thread
  • ZIO – a value that models an effectful program, which might fail or succeed
  • Promise – a model of a variable that may be set a single time and awaited on by many fibers
  • Schedule – a model of a recurring schedule, which can be used for repeating successful IO values or retrying failed IO values

The main building block of ZIO is the functional effect:

IO[E, A]

IO[E, A] is an immutable data type that describes an effectful program. The program may fail with an error of type E or succeed with a value of type A.
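For instance, a small sketch with both channels visible in the type (parsePort is a made-up example, written against the ZIO 1.0-era API assumed throughout this post):

import zio.IO

// the type says it all: this program may fail with a String error
// or succeed with an Int, and nothing runs until a runtime executes it
def parsePort(s: String): IO[String, Int] =
  if (s.nonEmpty && s.forall(_.isDigit)) IO.succeed(s.toInt)
  else IO.fail(s"'$s' is not a valid port")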

I would like to dive a bit deeper into the Fiber data type. Fiber is the base building block of the ZIO concurrency model, and it differs significantly from the thread-based concurrency model. In the thread-based model, every thread is mapped to an OS thread, whereas a ZIO Fiber is more like a green thread. Green threads are threads scheduled by a runtime library or virtual machine instead of natively by the underlying operating system; they make it possible to emulate a multithreaded environment without relying on the operating system's capabilities. Green threads in most cases outperform native threads, but there are cases where native threads are better.

Fibers are really powerful and nicely separated. Every fiber has its own stack and interpreter, which is how it executes the IO program.

Fiber scalability compared to green threads and native threads (John A. De Goes)

There is one more interesting thing about fibers: they can be garbage collected. Let's imagine a fiber that runs infinitely. If it does not do any work at a particular time and there is no way to reach it from other components, then it can actually be garbage collected. No memory leaks. Threads, in such a case, need to be shut down and managed carefully.

Let's see how we can use fibers. Imagine a situation where we have two functions:

  • validate – it does complex data validation
  • processData – it does complex and time consuming data processing

We would like to start validating and processing the data at the same time. If the validation succeeds, the processing continues; if it fails, we stop the processing. Implementing this with ZIO and fibers is pretty straightforward:

    val result = for {
      processDataFiber  <- processData(data).fork
      validateDataFiber <- validateData(data).fork
      isValid <- validateDataFiber.join
      _ <- if (!isValid) processDataFiber.interrupt
      else IO.unit
      processingResult <- processDataFiber.join
    } yield processingResult

The program forks the processing and the validation into two separate fibers. The fork function returns an effect that forks the effect into a new fiber, and that fiber is returned immediately. The validation fiber is then joined, which suspends the current fiber until the validation result has been determined. If the result is false, we interrupt the processData fiber immediately; otherwise we continue, and finally we join the processing fiber to wait for the data processing to finish.

This looks like code that should run immediately, just like similar code using Scala Futures. However, this is not the case with ZIO. The above program describes the functionality but does not say how to run it. The running part is deferred as late as possible, which is also what makes it purely functional. We can create a runtime and pass it around to run the effects:

  val runtime = new DefaultRuntime {}
  runtime.unsafeRun(DataProcessor.process())

With this basic knowledge, we can rewrite the benchmarks we previously wrote with Scala Future, this time using the ZIO library.

The code can be found here:

Here are the results:

[info] Result "com.dblazejewski.benchmark.zio.ZioMathTanBenchmark.zioForComprehensionTestCase":
[info]   1090.656 ±(99.9%) 79.370 ns/op [Average]
[info]   (min, avg, max) = (1007.467, 1090.656, 1164.134), stdev = 52.498
[info]   CI (99.9%): [1011.286, 1170.026] (assumes normal distribution)

The performance difference is enormous. It only confirms what John A. De Goes published on Twitter some time ago.

I am not going to dive into the details of using the ZIO library in this post. There are already great resources available:

  • Functional Scala - Modern Data Driven Applications with ZIO Streams by Itamar Ravid

Conclusion

Scala Future was a big improvement over the poor Java Future. When jumping from Java to Scala, the difference in "developer-friendliness" was so huge that every return to Java was a real pain.

However, the more you dive into functional programming, the more limitations of Scala Future you see. Lack of referential transparency, accidental sequentiality and the ExecutionContext are only a few of them.

ZIO is still at an early stage, but I am sure it will really shine in the future. The shift happening in the Scala ecosystem these days, from early object-functional programming to purely functional programming, will also favour purely functional solutions like ZIO.

AWS, SQS, Alpakka, Akka Streams – go reactive – Part 1

I am going to write a series of posts about the transition from the basic AWS Elastic Beanstalk SQS integration to a more generic, reactive model.

Transition means that we have a current state that we want to change. So let’s start with the basic AWS SQS integration.

Amazon SQS is an HTTP-based managed service responsible for handling message queues. It offers two kinds of queues: standard queues (maximum throughput, at-least-once delivery) and FIFO queues (preserving message order, exactly-once delivery).

Just for the record – it is possible to integrate with AWS SQS using the SDK in many languages (Java, Ruby, .NET, PHP, etc.), which gives you both a synchronous and an asynchronous interface. What is more, SQS is also Java Message Service (JMS) 1.1 compatible (Amazon SQS Java Messaging Library).
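For illustration, a minimal sketch of sending a message with the synchronous Java SDK (v1) from Scala; the queue URL and message body are placeholders:

import com.amazonaws.services.sqs.AmazonSQSClientBuilder
import com.amazonaws.services.sqs.model.SendMessageRequest

val sqs = AmazonSQSClientBuilder.defaultClient()
// hypothetical queue URL, for illustration only
val queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/background-tasks"
sqs.sendMessage(new SendMessageRequest(queueUrl, """{"task":"send-invoice"}"""))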

Let’s assume our application performs some sort of operations that can take a bit longer to complete and can be done asynchronously in the background. However, they are mostly triggered as a result of user interaction with the system. These can be sending an email, generating and sending invoices to the clients, processing images, generating huge reports.

The first, naive implementation could be to spawn a local, asynchronous task and handle the job there. The drawbacks of such a solution are obvious: it consumes local resources that could be used for handling more user requests, it is hard to scale, it is not manageable, etc.
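A sketch of that naive approach, with sendInvoice standing in for any time-consuming job (the name is hypothetical):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// hypothetical time-consuming background job
def sendInvoice(orderId: String): Unit = ???

def handleUserRequest(orderId: String): Unit = {
  // fire-and-forget on the web server's own thread pool:
  // it consumes local resources and is invisible to any monitoring
  Future(sendInvoice(orderId))
  // respond to the user immediately
}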

The better solution is to introduce some kind of middleware layer that allows distributing the work among the workers.

Therefore we deploy the system in the Elastic Beanstalk environment with the following setup:

AWS Elastic Beanstalk setup

The web/API servers process user requests and offload the background tasks to the SQS queue using the AWS SDK SQS API. Elastic Beanstalk runs a daemon process on each instance that is responsible for fetching the messages from the SQS queue and pushing them to the worker server. The worker servers only expose a REST POST endpoint that the daemon process uses to push the messages. The whole setup can be configured fairly easily using the AWS web console, and the monitoring is quite decent. There is also one feature that is quite hidden but worth mentioning: the Elastic Beanstalk daemon process makes sure the worker servers do not become overwhelmed, and it POSTs messages to them only if they are capable of processing them. The limitations are as follows: only one SQS queue can be used – all messages are put into one queue and processed sequentially – and the system becomes tightly coupled to the AWS environment (Elastic Beanstalk).

What I am suggesting in this post is to stick with the SQS queues and keep the worker servers, but to change the way the messages reach the workers. Instead of relying on the Elastic Beanstalk daemon process, let's implement reactive streams on each of the servers connected to the SQS queues.

What we gain with such a solution is the ability to use multiple queues, and if we implement the integration layer properly, we are not as tightly coupled to the AWS infrastructure either. This concerns the "reading" part. In the following posts, we will see what advantages we get if we also implement the "writing" part using reactive streams.

Before we dive into the details, let’s quickly introduce the term reactive streams and reactive programming. Both terms are strictly connected.

Generally, reactive programming is a programming model that deals with asynchronous streams of data. It is also a model in which components are loosely coupled. Actually, it is nothing new – we have had event buses for a long time already. Also, if we look at user interaction with an application or website, it is mostly a stream of clicks and other actions. The real game changer, though, is the way we handle those data streams.

Instead of repeating the theory and well-known phrases about reactive programming, I will try to quickly show the difference between reactive and proactive programming with a simple example.

Imagine we have a switch and a light bulb. How would we program switching on the bulb? The first solution is obvious: the switch knows which light bulb it controls and sends commands to that specific bulb. The switch is proactive and the light bulb is passive – the switch sends commands and the bulb only reacts to them.

This is how we would program it:

import LightBulb._

case class LightBulb(state: Int, power: Int) {
  def changeState(state: Int): LightBulb = this.copy(state = state)
}

object LightBulb {
  val Off = 0
  val On = 1
}

case class Switch(state: Int, lightBulb: LightBulb = LightBulb(Off, 60)) {
  def onSwitch(state: Int): LightBulb = lightBulb.changeState(state)
}

The limitations are obvious. The code is tightly coupled: the switch needs to know about the bulb. What if we want it to control more bulbs? What if we want one switch to control both the light bulb and a wall socket? What about turning on the same light bulb using two different switches? Tightly coupled code is not open for extension.

What is the reactive solution to this problem?

The switch is responsible for changing its own state only. The light bulb listens to the state changes of any switch and modifies its state based on them. In this model the bulb is reactive – changing its state as a reaction to the switch's state change – and the switch is observable – its state is observed by other components (bulbs, sockets, etc.).

The reactive solution:

import LightBulb._

case class Notification(state: Int)

trait Observer[T] {
  def update(notification: T): Unit
}

trait Observable[T] {
  private var observers: List[Observer[T]] = Nil

  // registering actually mutates the observer list, so later notifications reach the new observer
  def addObserver(observer: Observer[T]): Unit = observers ::= observer

  def removeObserver(observer: Observer[T]): Unit =
    observers = observers.filterNot(_ == observer)

  def notifyObservers(notification: T): Unit =
    observers.foreach(_.update(notification))
}

// the bulb keeps its state in a var so it can react to notifications in place
case class LightBulb(var state: Int, power: Int) extends Observer[Notification] {
  override def update(notification: Notification): Unit = state = notification.state
}

object LightBulb {
  val Off = 0
  val On = 1
}

case class Switch(state: Int) extends Observable[Notification] {
  def onSwitch(state: Int): Switch = {
    notifyObservers(Notification(state))
    this.copy(state = state)
  }
}

object SwitchBulb {
  val lightBulb = LightBulb(Off, 60)

  val switch = Switch(Off)
  switch.addObserver(lightBulb)
}

The end result of those two approaches is the same. What are the differences then?

In the first solution, some external entity (the switch) controls the bulb. To do so it needs to hold a reference to the bulb. In the reactive model, however, the bulb controls itself. Another difference is that in the proactive model the switch itself determines what it controls. In the reactive model it does not know what it controls; it just has a list of items interested in its state changes. Those items then determine what to do as a reaction to the change.

The models look similar; actually, they mirror each other. However, there is a subtle but really important difference: in the proactive model, components control each other directly, while in the reactive model they control each other indirectly and are not coupled together.

The reactive approach lets us build systems that are:

  • message driven
  • scalable
  • resilient
  • responsive

In the next post of this series I will present the actual integration with AWS SQS using Akka Streams and the Alpakka AWS SQS connector.

Stay tuned.

Scala – tagged types

Data types in a programming language are the description or classification of data that instructs the compiler how to treat it. Of course, they are not only for the compiler or interpreter but also for us, the developers, as they greatly help us understand the code.

This is a valid definition of data whose type is Map[String, String]:

val bookingPaymentMapping : Map[String, String] = Map(booking1.id -> payment1.id, booking2.id -> payment2.id)

This is a valid definition for our domain because both booking and payment ids have the type String. For this trivial example, the type definition also looks perfectly fine. However, we can imagine that in a more complex situation, in a bigger codebase, we may lack some information when we see definitions like this:

val bookingPaymentMapping : Map[String, String]

As a result, it is not that rare that we see comments like this:

val bookingPaymentMapping : Map[String, String] //maps booking id -> payment id

We also quickly notice that it is not only about the readability of our code but also about safety. This code compiles perfectly but it is not valid in our domain; it introduces a well-hidden bug:

val bookingPaymentMapping : Map[String, String] = Map(booking1.id -> payment1.id, payment2.id -> booking2.id)

What if we would like to add some extra information to the type definition? Something like “metadata” for the types? The information that not only helps the developers to comprehend the code but also introduces additional “type safety” by the compiler/interpreter.

The solution is there in the Scala ecosystem and it is called Tagged types.

There are two main implementations of tagged types – Scalaz and Shapeless – but SoftwareMill's Scala Common implementation should also be mentioned. In this post, I will shortly show the usage of Shapeless tagged types.

This is the simple model that we will convert to tagged types:

import java.time.LocalDate

case class Booking(id: String, date: LocalDate)

case class Payment(id: String, bookingId: String, date: LocalDate)

object TaggedTypes {
  val booking1 = Booking("bookingId1", LocalDate.now)
  val booking2 = Booking("bookingId2", LocalDate.now)

  val payment1 = Payment("paymentId1", booking1.id, LocalDate.now)
  val payment2 = Payment("paymentId2", booking2.id, LocalDate.now)

  val bookingPaymentMapping: Map[String, String] = Map(booking1.id -> payment1.id, booking2.id -> payment2.id)
}

object Payments {
  def payBooking(bookingId: String) = Payment("paymentId", bookingId, LocalDate.now)
}

The final code should look close to this:

import java.time.LocalDate

case class Booking(id: BookingId, date: LocalDate)

case class Payment(id: PaymentId, bookingId: BookingId, date: LocalDate)

object TaggedTypes {
  val booking1 = Booking("bookingId1", LocalDate.now)
  val booking2 = Booking("bookingId2", LocalDate.now)

  val payment1 = Payment("paymentId1", booking1.id, LocalDate.now)
  val payment2 = Payment("paymentId2", booking2.id, LocalDate.now)

  val bookingPaymentMapping: Map[BookingId, PaymentId] = Map(booking1.id -> payment1.id, booking2.id -> payment2.id)
}

object Payments {
  def payBooking(bookingId: BookingId) = Payment("paymentId", bookingId, LocalDate.now)
}

We can already see that this code is easier to comprehend:

val bookingPaymentMapping: Map[BookingId, PaymentId]

but what we do not see yet is the fact that we also introduced the additional “type safety”.

How to get to that second implementation? Let’s do it step by step.

First, we need to create the tags:

trait BookingIdTag

trait PaymentIdTag

These are simple Scala traits; other types can actually be used here as well, but a trait is the most convenient. By convention, the names have the suffix Tag.

We can use those tags to create tagged types:

import java.time.LocalDate
import shapeless.tag.@@

trait BookingIdTag

trait PaymentIdTag

case class Booking(id: String @@ BookingIdTag, date: LocalDate)

case class Payment(id: String @@ PaymentIdTag, bookingId: String @@ BookingIdTag, date: LocalDate)

object TaggedTypes {
  val booking1 = Booking("bookingId1", LocalDate.now)
  val booking2 = Booking("bookingId2", LocalDate.now)

  val payment1 = Payment("paymentId1", booking1.id, LocalDate.now)
  val payment2 = Payment("paymentId2", booking2.id, LocalDate.now)

  val bookingPaymentMapping: Map[String @@ BookingIdTag, String @@ PaymentIdTag] = Map(booking1.id -> payment1.id, booking2.id -> payment2.id)
}

object Payments {
  def payBooking(bookingId: String @@ BookingIdTag) = Payment("paymentId", bookingId, LocalDate.now)
}

This is basically how we tag types: we say which type (String, Int, etc.) is tagged by which tag (String @@ BookingIdTag, Int @@ IdTag, etc.).

But with this code we are still a bit far from our desired implementation. It is clear that these parts are boilerplate:

String @@ BookingIdTag
String @@ PaymentIdTag

We can easily replace them with type aliases (also presented in the previous post):

trait BookingIdTag

trait PaymentIdTag

package object tags {
  type BookingId = String @@ BookingIdTag
  type PaymentId = String @@ PaymentIdTag
}

case class Booking(id: BookingId, date: LocalDate)

case class Payment(id: PaymentId, bookingId: BookingId, date: LocalDate)

object TaggedTypes {

  val booking1 = Booking("bookingId1", LocalDate.now)
  val booking2 = Booking("bookingId2", LocalDate.now)

  val payment1 = Payment("paymentId1", booking1.id, LocalDate.now)
  val payment2 = Payment("paymentId2", booking2.id, LocalDate.now)

  val bookingPaymentMapping: Map[BookingId, PaymentId] = Map(booking1.id -> payment1.id,
    booking2.id -> payment2.id)
}

object Payments {
  def payBooking(bookingId: BookingId) = Payment("paymentId", bookingId, LocalDate.now)
}

With this implementation, we are very close to what we would expect but this code still does not compile:

[error] /Users/Damian/local_repos/scala-tagged-types/src/main/scala/com/dblazejewski/taggedtypes/TaggedTypes.scala:23: type mismatch;
[error]  found   : String("bookingId1")
[error]  required: com.dblazejewski.taggedtypes.tags.BookingId
[error]     (which expands to)  String with shapeless.tag.Tagged[com.dblazejewski.taggedtypes.BookingIdTag]
[error]   val booking1 = Booking("bookingId1", LocalDate.now)

This says that we are passing a plain String where a tagged type is expected:

val booking1 = Booking("bookingId1", LocalDate.now)

The constructor (the apply() method) of the Booking case class expects a tagged type, but we supplied it with a plain String. To fix this we need to make sure we create an instance of the tagged type. This is how it can be done:

import com.dblazejewski.taggedtypes.tags.{BookingId, PaymentId}
import shapeless.tag.@@
import shapeless.tag

trait BookingIdTag

trait PaymentIdTag

package object tags {
  type BookingId = String @@ BookingIdTag
  type PaymentId = String @@ PaymentIdTag
}

case class Booking(id: BookingId, date: LocalDate)

case class Payment(id: PaymentId, bookingId: BookingId, date: LocalDate)

object TaggedTypes {
  val bookingId1: BookingId = tag[BookingIdTag][String]("bookingId1")
  val bookingId2: BookingId = tag[BookingIdTag][String]("bookingId2")


  val paymentId: PaymentId = tag[PaymentIdTag][String]("paymentId")
  val paymentId1: PaymentId = tag[PaymentIdTag][String]("paymentId1")
  val paymentId2: PaymentId = tag[PaymentIdTag][String]("paymentId2")


  val booking1 = Booking(bookingId1, LocalDate.now)
  val booking2 = Booking(bookingId2, LocalDate.now)

  val payment1 = Payment(paymentId1, booking1.id, LocalDate.now)
  val payment2 = Payment(paymentId2, booking2.id, LocalDate.now)

  val bookingPaymentMapping: Map[BookingId, PaymentId] = Map(booking1.id -> payment1.id,
    booking2.id -> payment2.id)
}

object Payments {

  import TaggedTypes._

  def payBooking(bookingId: BookingId) = Payment(paymentId, bookingId, LocalDate.now)
}

This is how we defined instances of tagged types:

val bookingId1: BookingId = tag[BookingIdTag][String]("bookingId1")
val bookingId2: BookingId = tag[BookingIdTag][String]("bookingId2")

Now the code compiles.

The code is also on GitHub.


Let's summarize what we achieved here:

  • the intention of the code is clearly visible:
val bookingPaymentMapping: Map[BookingId, PaymentId]

We know immediately that the bookingPaymentMapping maps booking ids to payment ids.

  • we get errors at compilation time when we accidentally switch the ids:
val bookingPaymentMapping: Map[BookingId, PaymentId] = Map(booking1.id -> payment1.id, payment2.id -> booking2.id)


The examples presented in this post are trivial, but even so we can see the clear benefits of using tagged types. Imagine a complex project and I think we will be fully convinced that this is a really useful technique for every Scala developer's toolbox.

Scala – a few useful tips

I've been programming in Scala for more than 2 years already. I think I can position myself somewhere here:

Age Mooij @ infoq.com

From time to time I look at Scala Exercises or similar pages where I usually find some things that I do not know yet.

In this post I would like to share some of the Scala features I found interesting recently.

Collections

Let's start with the Scala Collections framework. Scala Collections is a framework/library that is a very good example of applying the Don't Repeat Yourself principle. The aim was to design a collections library that avoids code duplication as much as possible, so most operations are defined in collection templates that can be easily and flexibly inherited by individual base classes and implementations.

High-level overview of Scala Collections

Avoid creating temporary collections

val seq = "first" :: "second" :: "last" :: Nil

val f: (String) => Option[Int] = ???
val g: (Option[Int]) => Seq[Int] = ???
val h: (Int) => Boolean = ???

seq.map(f).flatMap(g).filter(h).reduce(???)

In this sequence of operations temporary, intermediate collections are created. We do not actually need them; they only take up heap space unnecessarily and burden the GC. The question is why those intermediate collections are created. The answer is that the collection transformers like map, flatMap, etc. are "strict" (except on Stream), meaning a new collection is always constructed as the result of the transformer. Obviously, "non-strict" transformers also exist (like the lazyMap example from the Scala documentation). But how do we avoid creating those temporary collections while still using the "strict" transformers? There is a systematic way to turn every collection into a lazy one: a view. This is a special kind of collection that implements all transformers in a lazy way:

val seq = "first" :: "second" :: "last" :: Nil

val f: (String) => Option[Int] = ???
val g: (Option[Int]) => Seq[Int] = ???
val h: (Int) => Boolean = ???

seq.view.map(f).flatMap(g).filter(h).reduce(???)

Now the temporary collections are not created and elements are not stored in the memory.

It is also possible to use views when, instead of reducing to a single element, a new collection of the same type is created. In this case, however, a call to the force method is needed:

val seq = "first" :: "second" :: "last" :: Nil

val f: (String) => Option[Int] = ???
val g: (Option[Int]) => Seq[Int] = ???
val h: (Int) => Boolean = ???

seq.view.map(f).flatMap(g).filter(h).force

When the transformation creates a collection of a different type, a suitable converter can be used instead of the force method:

val seq = "first" :: "second" :: "last" :: Nil

val f: (String) => Option[Int] = ???
val g: (Option[Int]) => Seq[Int] = ???
val h: (Int) => Boolean = ???

seq.view.map(f).flatMap(g).filter(h).toList

Calling toSeq on a “non-strict” collection

When Seq(…) is used, a new "strict" collection is created. It might seem obvious that calling toSeq on a "non-strict" collection (Stream, Iterator) also creates a "strict" Seq. However, toSeq actually calls TraversableOnce.toSeq, which returns a Stream under the hood – a "lazy" collection. This may lead to hard-to-track bugs or performance issues.

import scala.io.Source

val source = Source.fromFile("file.txt")
val lines = source.getLines.toSeq
source.close()
lines.foreach(println)

The code seems fine; however, when we run it, it throws an IOException complaining that the stream is already closed. Based on what we said above this makes sense: the toSeq call does not create a new "strict" collection but rather returns a Stream, so the lines are read lazily, only after the source has been closed. The solution is either to call toStream explicitly or, if we need a strict collection, to use toVector instead of toSeq.
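A sketch of the fixed version, assuming the same file as above:

import scala.io.Source

val source = Source.fromFile("file.txt")
// toVector materializes the lines eagerly, before the source is closed
val lines = source.getLines.toVector
source.close()
lines.foreach(println)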

Never-ending Traversable

Every collection in Scala is a Traversable. Traversables, among many useful operations (map, flatMap), also have functions to get information about the collection size: isEmpty, nonEmpty, size.

Generally, when we call size on a Traversable we expect to get the number of elements in the collection. When we do something like this:

List(1, 2, 3).size

we get 3, indeed.

Imagine that our API function accepts any Traversable and the user provides us with a Stream.from(1) value. Stream is also a Traversable, but the difference is that it is one of the collections that are lazy by default. As a result, it does not have a definite size.

So when we call

Stream.from(1).size

this method never returns. It is definitely not what we expect.

Luckily, we have the method hasDefiniteSize, which tells us whether it is safe to call size on a Traversable.

Stream.from(1).hasDefiniteSize //false
List(1, 2).hasDefiniteSize //true

One thing to remember is that if hasDefiniteSize returns true, it means that the collection is finite for sure. However, the other way around is not always guaranteed:

Stream.from(1).take(5).hasDefiniteSize //false
Stream.from(1).take(5).size //5

Difference, intersection, union

This example is self-explanatory:

val numbers1 = Seq(1, 2, 3, 4, 5, 6) 
val numbers2 = Seq(4, 5, 6, 7, 8, 9) 

numbers1.diff(numbers2) 
List(1, 2, 3): scala.collection.Seq

numbers1.intersect(numbers2) 
List(4, 5, 6): scala.collection.Seq

numbers1.union(numbers2)
List(1, 2, 3, 4, 5, 6, 4, 5, 6, 7, 8, 9): scala.collection.Seq

The union, however, keeps duplicates, which is not what we want most of the time. To get rid of them we have the distinct function:

val numbers1 = Seq(1, 2, 3, 4, 5, 6) 
val numbers2 = Seq(4, 5, 6, 7, 8, 9) 
numbers1.union(numbers2).distinct
List(1, 2, 3, 4, 5, 6, 7, 8, 9): scala.collection.Seq

Collection types performance implications

Seq

For Seq, most of the available operations are linear, which means they take time proportional to the collection size (L ~ O(n)). For example, append takes linear time on Seq, which is not what we might expect. What is more, it means that on an infinite collection some of the linear operations will not terminate. On the other hand, head and tail are very efficient.

List

Time complexity legend:
  C – the operation takes (fast) constant time.
  L – the operation is linear, i.e. it takes time proportional to the collection size.

Operation   Time Complexity
head        C ~ O(1)
tail        C ~ O(1)
apply       L ~ O(n)
update      L ~ O(n)
prepend     C ~ O(1)
append      L ~ O(n)

Head, tail and prepend take constant time, which means they are fast and their cost does not depend on the collection size.
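A sketch of the practical consequence of these numbers: when building a List element by element, prepend in a loop and reverse once instead of appending, which would make the whole build quadratic:

// quadratic: each :+ copies the whole list built so far
def buildSlow(n: Int): List[Int] =
  (1 to n).foldLeft(List.empty[Int])((acc, i) => acc :+ i)

// linear: constant-time prepends, then a single reverse
def buildFast(n: Int): List[Int] =
  (1 to n).foldLeft(List.empty[Int])((acc, i) => i :: acc).reverse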

Vector

Vector performance is generally very good:

Operation   Time Complexity
head        eC ~ O(1)
tail        eC ~ O(1)
apply       eC ~ O(1)
update      eC ~ O(1)
prepend     eC ~ O(1)
append      eC ~ O(1)

(eC means effectively constant time; it may depend on assumptions such as the maximum length of a vector.)

head or tail operations are slower than on the List but not by much.

Conclusion

Always use the right tool for the job. Knowing the performance characteristics of the different collection types, we can choose the one that is fastest for the kind of operations we do.

Type aliases

Type aliases in Scala allow us to create an alternate name for the type and (sometimes) for its companion object. Usually, we use them to create a simple alias for a more complex type.

type Matrix = List[List[Int]]

However, type aliases can be also helpful for API usability. When our API refers to some external types:

import spray.http.ContentType

final case class ReturnValue (data: String, contentType: ContentType)

We always force users of our API to import those types:

import spray.http.ContentType

val value = ReturnValue("data", ContentType.`application/json`)

By defining the type alias in the base package we can give users the dependencies for free:

package com.dblazejewski

package object types {
  type ContentType = spray.http.ContentType
}

import com.dblazejewski.types._

val value = ReturnValue("data", ContentType.`application/json`)

Another use case that comes to my mind is simplifications of type signatures:

def authenticate[T](auth: RequestContext => Future[Either[ErrorMessage, T]]) = ???

By introducing two type aliases:

package object authentication {
    type AuthResult[T] = Either[ErrorMessage, T]
    type Authenticator[T] = RequestContext => Future[AuthResult[T]]
}

We hide the complexity:

def authenticate[T](auth: Authenticator[T]) = ???

Actually, scala.Predef is full of type aliases:

scala.Predef type aliases

Auto lifted partial functions

A partial function PartialFunction[A, B] is a function defined only for some subset of the domain A. The subset is determined by the isDefinedAt method.

A partial function PartialFunction[A, B] can be lifted into a function Function[A, Option[B]]. The lifted function is defined over the whole domain, but its values are of type Option[B]: None is returned where the partial function is undefined.

Example:

val pf: PartialFunction[Int, Boolean] = { 
  case i if i > 0 => i % 2 == 0
}

val liftedF = pf.lift

liftedF(-1) 
//None: scala.Option

liftedF(1)
//Some(false): scala.Option

Thanks to lifting, instead of doing something like this:

future.map { result =>
  result match {
    case Foo(foo) => ???
    case Bar(bar) => ???
  }
}

Scala allows us to do it in a simpler way:

future.map {
    case Foo(foo) => ???
    case Bar(bar) => ???
}

ImplicitNotFound

From the Scala docs:

class implicitNotFound extends Annotation with StaticAnnotation

An annotation that specifies the error message that is emitted when the compiler cannot find an implicit value of the annotated type.

Let’s look at the example:

trait Serializer[T] {
  def serialize(t: T): String
}

trait Deserializer[T] {
  def deserialize(data: String): T
}

def foo[T: Serializer](x: T) = x

foo(42)

When we compile this code, we get a rather vague error message:

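The original screenshot is not reproduced here, but for the snippet above the standard scalac message looks roughly like this (quoted from memory, so treat the exact wording as an assumption):

error: could not find implicit value for evidence parameter of type Serializer[Int]
       foo(42)
          ^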

However, when we add a simple implicitNotFound annotation:

import annotation.implicitNotFound

@implicitNotFound("Cannot find Serializer type class for type ${T}")
trait Serializer[T] {
  def serialize(t: T): String
}

@implicitNotFound("Cannot find Deserializer type class for type ${T}")
trait Deserializer[T] {
  def deserialize(data: String): T
}

def foo[T: Serializer](x: T) = x

foo(42)
foo("text")

We get more meaningful errors:
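The screenshot is again missing, but given the annotation above, the messages should read along these lines (the exact formatting is an assumption):

error: Cannot find Serializer type class for type Int
       foo(42)
          ^
error: Cannot find Serializer type class for type String
       foo("text")
          ^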

Conclusion

Scala is a really powerful and expressive language that I like very much. On the other hand, it takes quite some time to get really proficient in it. The tips presented in this post are rather basic, but hopefully in the following posts we will dive into more advanced, functional aspects of the language.

When developing software in Scala, it is always a matter of common sense to decide whether we are keeping the code simple enough and easy to understand for other developers. This is very important to remember, since with Scala we have a tool that can make code really complicated and incomprehensible.

With Scala it is even possible to break the GitHub linter: ContentType.scala  🙂

ElasticMQ – the SQS power available locally

Amazon Simple Queue Service (Amazon SQS) is a distributed message queuing service. It is similar to other well-known messaging solutions like RabbitMQ, ActiveMQ, etc. but it is hosted by Amazon.
It is a really fast, configurable and relatively simple messaging solution.

In my current company we strongly rely on the AWS infrastructure. One major Amazon cloud component we use is the SQS (Simple Queue Service).
It allows us to decouple components in the application. We also send lots of notifications through SQS, which makes handling them reliable.

In short, SQS works perfectly for us. The only issue we had was running the application in development mode, which means running it locally without the need to integrate with the AWS infrastructure. The same problem arises in integration tests.
The best solution would be to have the SQS running locally or even better have the service that can be embedded into the application.

And here Adam Warski and his ElasticMQ come to the rescue.
ElasticMQ is a message queue system that can be run either stand-alone or embedded.
It has an Amazon SQS-compatible interface, which means it implements some of the SQS query API
(I am not sure if the full API is implemented, but all the standard queries are there).

As already mentioned, ElasticMQ can be run in two ways:

1. Stand-alone service

It is as simple as running the command:

java -Dconfig.file=custom.conf -jar elasticmq-server-0.10.0.jar

Then the application needs to be configured with the proper SQS url, for example:

http://localhost:9324/queue/wl-test-queue
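For completeness, a sketch of pointing the AWS Java SDK (v1) at the local server; ElasticMQ ignores the credentials, but the SDK requires some to be present:

import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration
import com.amazonaws.services.sqs.AmazonSQSClientBuilder

val sqs = AmazonSQSClientBuilder.standard()
  .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("x", "x")))
  .withEndpointConfiguration(new EndpointConfiguration("http://localhost:9324", "us-east-1"))
  .build()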


2. Embedded mode

This mode is perfect for integrating the ElasticMQ into the application running on the developer’s station or run it during the integration tests.

The first step is to set up the ElasticMQ server. We use Play 2.4 and Guice as the dependency injection framework.
In order to start ElasticMQ when the application starts, I simply extend Guice's AbstractModule:

import com.google.inject.AbstractModule

class StartupModule extends AbstractModule {
  override def configure() = {
    bind(classOf[ElasticMqLauncher]).to(classOf[ElasticMqLauncherImpl]).asEagerSingleton()
  }
}

The launcher itself starts the ElasticMQ server:

val config = ConfigFactory.load()
val server = new ElasticMQServer(new ElasticMQServerConfig(config))
val shutdown = server.start()

With the ElasticMQ server running this way, and changing only the SQS queue URL in the application properties, we can run the SQS-dependent infrastructure locally without needing to communicate with the SQS cloud service.

Scala function error “Type mismatch” in Intellij Idea vs Scala Eclipse

Using Scala we can define a function in a form where the name and parameters are specified, followed by the definition of the function body.

For example:

def abs(x: Double) = if (x < 0) -x else x

As you can see, curly braces are not mandatory when all those elements are put on one line.
When you put that code in a Scala worksheet, both in IntelliJ IDEA and Scala Eclipse (Scala IDE for Eclipse), everything is all right: the code compiles and runs.

The previous example was very simple. However, real-life functions are more complex, so there is no way to fit them on one line. That is not a problem at all: you define multiline functions simply by wrapping the function body in curly braces:

def listLength(list: List[_]): Int = {
  if (list == Nil) 0 else 1 + listLength(list.tail)
}

The code, of course, compiles and runs in both IntelliJ IDEA and Scala Eclipse.

What happens if you omit the braces?

def listLength(list: List[_]): Int = if (list == Nil) 0 else 1 + listLength(list.tail)

That is where I spotted a difference in behavior between the two IDEs.
While Scala Eclipse runs that code without any error, IntelliJ IDEA gets stuck with the following error message:

error: type mismatch;
   found   : Unit
   required: Int
             if (list == Nil) 0
             ^
I am sure that this error message does not tell Scala beginners much about the source of the problem.
Let's investigate the error message more deeply.

First, let's find out what the type Unit is.

Unit is similar to what is known in Java as void. In Java, void means that a function does not return anything. However, every method in Scala returns a value, so to deal with the situation when a Scala function has nothing to return, the type Unit was introduced. At the bytecode level Unit is eventually compiled down to void, but at the Scala source level it is still a type.

Coming back to the error message: it says that the expected return type is Int but the function in fact returns Unit. What probably happens is that when only the line:

if (list == Nil) 0

is interpreted, Scala adds an else statement of this form:

else ()

where the mentioned rule of returning Unit when there is nothing to return, is applied.
So the rewritten if finally looks as follows:

if (list == Nil) 0 else ()

The if part returns Int, while the else part returns Unit. The more general type wins, so the whole function ends up returning Unit while its definition expects Int.

I met that problem when trying to run in Intellij Idea the code example presented in Scala Eclipse. Multiline function was coded without curly braces and only Intellij Idea thrown the error presented.