Friday, 28 July 2017

[Scala / Threads / Parallel collections] How to configure the number of threads on a parallel collection?

I've already written an article about controlling the pool size while using Java's parallel streams. Recently I needed exactly the same feature in Scala. A Scala collection can be turned into a parallel collection by invoking the par method (introduced by the Parallelizable trait).
trait Parallelizable[+A, +ParRepr <: Parallel] extends Any
so assuming you don't break the basic rules of functional programming (immutability, statelessness etc.) you can make your algorithms much faster by adding .par. Unfortunately the .par method (exactly like .parallelStream() in Java) doesn't take any parameter which could control the size of the pool.
/** Returns a parallel implementation of this collection.
   *
   *  For most collection types, this method creates a new parallel collection by copying
   *  all the elements. For these collection, `par` takes linear time. Mutable collections
   *  in this category do not produce a mutable parallel collection that has the same
   *  underlying dataset, so changes in one collection will not be reflected in the other one.
   *
   *  Specific collections (e.g. `ParArray` or `mutable.ParHashMap`) override this default
   *  behaviour by creating a parallel collection which shares the same underlying dataset.
   *  For these collections, `par` takes constant or sublinear time.
   *
   *  All parallel collections return a reference to themselves.
   *
   *  @return  a parallel implementation of this collection
   */
  def par: ParRepr = {
    val cb = parCombiner
    for (x <- seq) cb += x
    cb.result()
  }
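To see par in action before worrying about pool sizes, here is a minimal sketch (the numbers are made up purely for illustration): a parallel collection is used exactly like a sequential one and yields the same elements.
object ParBasics extends App {
  val squares    = (1 to 10).map(i => i * i)      // sequential
  val squaresPar = (1 to 10).par.map(i => i * i)  // parallel, same elements
  println(squares == squaresPar.seq)              // true
}
None of this says anything about how many threads do the work, though.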
However, you can set the number of threads using ForkJoinTaskSupport like this:
import java.util.concurrent.ForkJoinPool // on Scala 2.11 this would be scala.concurrent.forkjoin.ForkJoinPool

import scala.collection.parallel.ForkJoinTaskSupport

object ParExample extends App {
  val friends = List("rachel", "ross", "joey", "chandler", "pheebs", "monica").par
  friends.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(6))
}
Let's prove it works:
import java.util.concurrent.TimeUnit.MILLISECONDS

import com.google.common.base.Stopwatch // assuming Guava's Stopwatch is the one used here

object ParExample extends App {
  val friends = List("rachel", "ross", "joey", "chandler", "pheebs", "monica", "gunther", "mike", "richard").par
  val stopwatch = Stopwatch.createStarted()
  friends.foreach(f => Thread.sleep(5000))
  println(stopwatch.elapsed(MILLISECONDS))
}
It prints 10025, so the default pool must have had fewer threads than the list has elements (9 strings). Now I'll set the pool size explicitly:
object ParExample extends App {
  val friends = List("rachel", "ross", "joey", "chandler", "pheebs", "monica", "gunther", "mike", "richard").par
  friends.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(9))
  val stopwatch = Stopwatch.createStarted()
  friends.foreach(f => Thread.sleep(5000))
  println(stopwatch.elapsed(MILLISECONDS))
}
This time it took only 5024 milliseconds because each element of the list was processed by a separate thread. This solution works for me, but I'd rather use a concise one-liner, so I've created an implicit conversion from ParSeq to RichParSeq that adds a parallelismLevel(numberOfThreads: Int) method.
import java.util.concurrent.ForkJoinPool

import scala.collection.parallel.{ForkJoinTaskSupport, ParSeq}
import scala.language.implicitConversions

class RichParSeq[T](val p: ParSeq[T]) {
  def parallelismLevel(numberOfThreads: Int): ParSeq[T] = {
    p.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(numberOfThreads))
    p
  }
}

object ConfigurableParallelism {
  implicit def parIterableToRichParIterable[T](p: ParSeq[T]): RichParSeq[T] = new RichParSeq[T](p)
}
Now you can use it like this:
import ConfigurableParallelism._

object ParExample extends App {
  val stopwatch = Stopwatch.createStarted()
  List("rachel", "ross", "joey", "chandler", "pheebs", "monica", "gunther", "mike", "richard").par
    .parallelismLevel(9)
    .foreach(f => Thread.sleep(5000))
  println(stopwatch.elapsed(MILLISECONDS))
}
It took 5231 milliseconds.
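Side note: since Scala 2.10 the same extension can be written a bit more compactly as an implicit class, which removes the separate conversion method. A sketch of that variant (same types as above):
import java.util.concurrent.ForkJoinPool

import scala.collection.parallel.{ForkJoinTaskSupport, ParSeq}

object ConfigurableParallelism {
  implicit class RichParSeq[T](val p: ParSeq[T]) {
    def parallelismLevel(numberOfThreads: Int): ParSeq[T] = {
      p.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(numberOfThreads))
      p
    }
  }
}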

Wednesday, 12 April 2017

[Scala] How to convert dates implicitly?

If you've read any book or tutorial about Scala you probably know what the implicit keyword means. Basically, you can create a method that will be called whenever the compiler decides it's appropriate. Although I'm not a fan of implicit conversions (when you overuse them the code becomes less readable and clear), they're very useful for getting rid of explicit method calls in some cases.

When you create a DAO you probably extend some abstract data access object that provides an entity manager, a JDBC template or another object that connects your application to a database.

I've been using jOOQ for 16 months now to construct SQL queries. It's quite a nice tool that prevents typical SQL typos. See the example below:
def findForUpdate(searchKey: DateTime, productId: String): Option[SearchHistoryReport] = {
    val sql = dslContext.select(SEARCH_HISTORY_REPORT.SEARCH_DATE_AND_HOUR,
      SEARCH_HISTORY_REPORT.PRODUCT_ID,
      SEARCH_HISTORY_REPORT.SEARCH_SCORE)
      .from(SEARCH_HISTORY_REPORT)
      .where(SEARCH_HISTORY_REPORT.SEARCH_DATE_AND_HOUR.equal(new Timestamp(searchKey.getMillis)))
      .and(SEARCH_HISTORY_REPORT.PRODUCT_ID.equal(productId))
      .forUpdate().getSQL(INLINED)
    Try({
      npjt.queryForObject(sql, noParams(), (rs: ResultSet, rowNum: Int) => SearchHistoryReport.builder()
        .searchDateAndHour(fromTimestamp(rs.getTimestamp(SEARCH_HISTORY_REPORT.SEARCH_DATE_AND_HOUR.toString)))
        .productId(rs.getString(SEARCH_HISTORY_REPORT.PRODUCT_ID.toString))
        .searchScore(rs.getBigDecimal(SEARCH_HISTORY_REPORT.SEARCH_SCORE.toString)).build())
    }).toOption
  }
Search date and hour is a datetime column in the database:
MariaDB [_censored]> describe search_history_report;
+----------------------+---------------+------+-----+---------+-------+
| Field                | Type          | Null | Key | Default | Extra |
+----------------------+---------------+------+-----+---------+-------+
| product_id           | varchar(255)  | NO   | PRI | NULL    |       |
| search_date_and_hour | datetime      | NO   | PRI | NULL    |       |
| search_score         | decimal(19,2) | NO   |     | NULL    |       |
+----------------------+---------------+------+-----+---------+-------+
A datetime can be compared to a java.sql.Timestamp, but all dates in my domain objects are Joda DateTime instances.

Let's see another example:
val sql = dslContext.select(sum(SEARCH_HISTORY_REPORT.SEARCH_SCORE).as(SCORE_SUM_ALIAS), SEARCH_HISTORY_REPORT.PRODUCT_ID)
      .from(SEARCH_HISTORY_REPORT)
      .where(SEARCH_HISTORY_REPORT.PRODUCT_ID.in(productIds))
      .and(SEARCH_HISTORY_REPORT.SEARCH_DATE_AND_HOUR.between(new Timestamp(startDate.getMillis)).and(new Timestamp(endDate.getMillis)))
      .groupBy(SEARCH_HISTORY_REPORT.PRODUCT_ID).getSQL(INLINED)
I simply want to get rid of the explicit DateTime => Timestamp conversion.

Let's assume that my abstract DAO looks like this:
trait Dao {
  def dslContext(): DSLContext = ???

  def jdbcTemplate(): NamedParameterJdbcTemplate = ???
}
It provides methods that return a DSLContext (jOOQ) and a NamedParameterJdbcTemplate (Spring).

I'd be really happy if all my DAOs could automatically convert org.joda.time.DateTime to java.sql.Timestamp.

Let's create an implicit conversion between those types.
trait Dao {
  implicit def asTimestamp(date: DateTime): Timestamp = new Timestamp(date.getMillis)

  def dslContext(): DSLContext = ???

  def jdbcTemplate(): NamedParameterJdbcTemplate = ???
}
That's all. Now when I pass a DateTime to a method that takes a Timestamp, the compiler knows the implicit conversion can be applied. It calls the asTimestamp method under the hood. The programmer doesn't have to remember that jOOQ accepts timestamps only.
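The mechanism is easy to show outside of jOOQ with a minimal, self-contained sketch (saveTimestamp is a made-up method standing in for any API that insists on java.sql.Timestamp):
import java.sql.Timestamp

import org.joda.time.DateTime

import scala.language.implicitConversions

object ImplicitTimestampExample extends App {
  implicit def asTimestamp(date: DateTime): Timestamp = new Timestamp(date.getMillis)

  // A made-up method that only accepts java.sql.Timestamp, just like jOOQ does.
  def saveTimestamp(ts: Timestamp): Unit = println(s"saving $ts")

  saveTimestamp(DateTime.now()) // compiles: the compiler inserts asTimestamp(...) for us
}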

A query that uses the implicit conversion looks like this:
val sql = dslContext.select(sum(SEARCH_HISTORY_REPORT.SEARCH_SCORE).as(SCORE_SUM_ALIAS), SEARCH_HISTORY_REPORT.PRODUCT_ID)
      .from(SEARCH_HISTORY_REPORT)
      .where(SEARCH_HISTORY_REPORT.PRODUCT_ID.in(productIds))
      .and(SEARCH_HISTORY_REPORT.SEARCH_DATE_AND_HOUR.between(startDate).and(endDate))
      .groupBy(SEARCH_HISTORY_REPORT.PRODUCT_ID).getSQL(INLINED)
I guess it's a perfect use case for an implicit conversion. The code is very concise and readable, and it's natural to pass a DateTime, so no one has to remember about this conversion.
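As a side note, the opposite direction can be handled the same way. The row mapper in findForUpdate calls a fromTimestamp helper explicitly (its implementation isn't shown in this post); an implicit Timestamp => DateTime conversion along these lines, sketched here rather than taken from my project, would make that call unnecessary as well:
import java.sql.Timestamp

import org.joda.time.DateTime

import scala.language.implicitConversions

trait DateConversions {
  implicit def asTimestamp(date: DateTime): Timestamp = new Timestamp(date.getMillis)

  // Sketch of the reverse direction: lets rs.getTimestamp(...) be passed
  // wherever a DateTime is expected (e.g. when building domain objects).
  implicit def asDateTime(ts: Timestamp): DateTime = new DateTime(ts.getTime)
}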

Wednesday, 22 March 2017

[Scala] How to transform a tuple into a class instance?

In Scala a TupleN contains N fields and this is basically all a tuple can do (except swapping the elements of a Tuple2). Although it's a really simple data structure, it's extremely useful, especially when you do some collection processing.
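A quick illustration (the values are made up): a Tuple2 just holds two fields that you read with _1 and _2, and swap is about the only extra operation it offers.
object TupleBasics extends App {
  val pair = ("rachel", 1)   // a Tuple2[String, Int]
  println(pair._1)           // rachel
  println(pair._2)           // 1
  println(pair.swap)         // (1,rachel)
}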

Imagine that users in your system place orders and you want to find all users with their orders.
object A extends App {
  List("some@email.com", "another@email.com", "and@other.com")
    .map(email => (email, findOrders(email)))

  private def findOrders(email: String): List[Order] = List() // call some dao here...
}

case class Account(email: String, orders: List[Order])
As a result we have a collection of tuples that contain an email and a list of orders. We could have created an Account object instead of a tuple, like this:
.map(email => Account(email, findOrders(email)))
but let's say we want only the users who placed at least one order, so we have to do additional filtering:
List("some@email.com", "another@email.com", "and@other.com")
    .map(email => (email, findOrders(email)))
    .filterNot(_._2.isEmpty)
In the end we want to return a list of accounts, which means the tuple has to be transformed into an Account instance. The easiest way would be something like this:
List("some@email.com", "another@email.com", "and@other.com")
    .map(email => (email, findOrders(email)))
    .filterNot(_._2.isEmpty)
    .map(t => Account(t._1, t._2))
but it just doesn't feel right. You might have noticed that we used a case class instead of a plain class, so the tupled method can be invoked on its companion object (it comes from the Function2 trait).
/** Creates a tupled version of this function: instead of 2 arguments,
   *  it accepts a single [[scala.Tuple2]] argument.
   *
   *  @return   a function `f` such that `f((x1, x2)) == f(Tuple2(x1, x2)) == apply(x1, x2)`
   */
  @annotation.unspecialized def tupled: Tuple2[T1, T2] => R = {
    case Tuple2(x1, x2) => apply(x1, x2)
  }
As you can see in the scaladoc, it lets you create an instance from a tuple, so for our Account we can call either Account(email, orders) or Account.tupled((email, orders)). And this is exactly what we're looking for:
List("some@email.com", "another@email.com", "and@other.com")
    .map(email => (email, findOrders(email)))
    .filterNot(_._2.isEmpty)
    .map(Account.tupled)
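Putting it all together, a complete runnable sketch could look like this (Order and findOrders are made up here, because the post doesn't show their real definitions):
case class Order(id: Long)                               // made-up placeholder
case class Account(email: String, orders: List[Order])

object TupledExample extends App {
  // Made-up stand-in for the real DAO call.
  private def findOrders(email: String): List[Order] =
    if (email == "some@email.com") List(Order(1L)) else List.empty

  val accounts = List("some@email.com", "another@email.com", "and@other.com")
    .map(email => (email, findOrders(email)))
    .filterNot(_._2.isEmpty)
    .map(Account.tupled)

  println(accounts) // List(Account(some@email.com,List(Order(1))))
}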