ExoQuery


Language Integrated Query for Kotlin Multiplatform

  • SQL Queries at Compile Time
  • Forget eq, use regular EqEq
  • Forget Case().When, use regular if and when.
  • Forget Column<T>, use regular primitives!
  • Cross-Platform: JVM, iOS, Android, Linux, Windows, MacOS, JS-coming soon!
  • Functional, Composable, Powerful, and Fun!

Introduction

Question: Why does querying a database need to be any harder than traversing an array?


Let's say something like:

people.map { p -> p.name }

Naturally we're pretty sure it should look something like:

SELECT name
FROM Person

That is why in C# when you write a statement like this:

var q = people.Select(p => p.name);   // Select is C#'s .map function

In LINQ we understand that it's this:

var q = from p in people
        select p.name

So in Kotlin, let's just do this:

val q = sql.select {
  val p = from(people)
  p.name
}

Then build it for a specific SQL dialect and run it on a database!

val data: List<String> = q.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.name FROM Person p

Welcome to ExoQuery!


...but wait, don't databases have complicated things like joins, case-statements, and subqueries?


Let's take some data:

@Ser
data class Person(val id: Int, val name: String, val age: Int, val companyId: Int)

@Ser
data class Address(val city: String, val personId: Int)

@Ser
data class Company(val name: String, val id: Int)
// Using @Ser as an abbreviation of @Serializable for now

val people: SqlQuery<Person> = sql { Table<Person>() }
val addresses: SqlQuery<Address> = sql { Table<Address>() }
val companies: SqlQuery<Company> = sql { Table<Company>() }

Here is a query with some Joins:

sql.select {
  val p: Person = from(people)
  val a: Address = join(addresses) { a -> a.personId == p.id }
  Data(p.name, a.city)
}
//> SELECT p.name, a.city FROM Person p JOIN Address a ON a.personId = p.id

Compared to Microsoft LINQ where it would look like this:

var q = from p in people
        join a in addresses on a.personId == p.id
        select Data(p.name, a.city)

Let's add some case-statements:

sql.select {
  val p = from(people)
  val a = join(addresses) { a -> a.personId == p.id }
  Data(p.name, a.city, if (p.age > 18) "adult" else "minor")
}
//> SELECT p.name, a.city, CASE WHEN p.age > 18 THEN 'adult' ELSE 'minor' END FROM Person p JOIN Address a ON a.personId = p.id

Now let's try a subquery:

sql.select {
  val (c, p) = from(
    select {
      val c: Company = from(companies)
      val p: Person = join(people) { p -> p.companyId == c.id }
      c to p
    } // -> SqlQuery<Pair<Company, Person>>
  )
  val a: Address = join(addresses) { a -> a.personId == p.id }
  Data(p.name, c.name, a.city)
}
//> SELECT p.name, c.name, a.city FROM (
//   SELECT c.name, c.id, p.name, p.age, p.companyId FROM Company c JOIN Person p ON p.companyId = c.id
//  ) p JOIN Address a ON a.personId = p.id

Notice how the types compose completely fluidly? The output of a subquery is the same datatype as a table.


...but wait, how can you use EqEq, or regular if or regular case classes in a DSL?


By using the sql construct to delineate relevant code snippets and a compiler plugin to transform them, I can synthesize a SQL query the moment your code is compiled (in most cases).


You can even see the generated query in a file in the build output. Have a look at the `build/generated/exoquery` directory.

So I can just use normal Kotlin to write Queries?


That's right! You can use regular Kotlin constructs that you know and love in order to write SQL code including:

  • Elvis operators
    people.map { p ->
      p.name ?: "default"
    }
    //> SELECT CASE WHEN p.name IS NULL THEN 'default' ELSE p.name END FROM Person p
  • Question-mark operators and null-safe .let calls
    people.map { p ->
        p.name?.let { free("mySuperSpecialUDF($it)").asPure<String>() } ?: "default"
    }
    //> SELECT CASE WHEN p.name IS NULL THEN 'default' ELSE mySuperSpecialUDF(p.name) END FROM Person p
  • If and When
    people.map { p ->
      when {
        p.age >= 18 -> "adult"
        p.age < 18 && p.age > 10 -> "minor"
        else -> "child"
      } 
    }
    //>  SELECT CASE WHEN p.age >= 18 THEN 'adult' WHEN p.age < 18 AND p.age > 10 THEN 'minor' ELSE 'child' END AS value FROM Person p
  • Simple arithmetic, simple functions on datatypes
    @SqlFragment
    fun peRatioWeighted(stock: Stock, weight: Double): Double = sql.expression {
      (stock.price / stock.earnings) * weight
    }
    sql {
      Table<Stock>().map { stock -> peRatioWeighted(stock, stock.marketCap/totalWeight) } 
    }
    //> SELECT (s.price / s.earnings) * s.marketCap / totalWeight FROM Stock s
  • Pairs and Tuples
    val query: SqlQuery<Pair<String, Int>> = sql {
      Table<Person>().map { p -> p.name to p.age }  
       // or Triple(p.name, p.age, p.companyId), or MyDataClass(p.name, p.age, p.companyId)
    }
    //> SELECT p.name, p.age FROM Person p
    You can use pairs and tuples with the whole row too! In fact, you can output any simple data-class. See below for examples.
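
For instance, here is a minimal sketch (reusing the Person table from above) that pairs the whole row with one of its columns; the exact column naming in the generated SQL may differ:

val q: SqlQuery<Pair<Person, String>> = sql {
  Table<Person>().map { p -> p to p.name }
}
//> (roughly) SELECT p.id, p.name, p.age, p.companyId, p.name FROM Person p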

What is this sql thing?


The sql construct is both a tiny builder-DSL and a compile-time capture. When you enter an sql { ... } or sql.select { ... } or sql.expression { ... } block, the ExoQuery compiler plugin reifies the code you write into a SQL AST at compile time. This is how ExoQuery lets you use regular Kotlin constructs like if, when, ?:, and let in queries. There are a few different kinds of sql blocks you can start with:

  • A regular table-expression

    val people: SqlQuery<Person> = sql { Table<Person>() }
    //> SELECT x.name, x.age FROM Person x
    val joes: SqlQuery<Person> = sql { Table<Person>().where { p -> p.name == "Joe" } }
    //> SELECT p.name, p.age FROM Person p WHERE p.name = 'Joe'
    // (Notice how ExoQuery knows that 'p' should be the variable in this case?)
  • A table-select function. This is a special syntax for doing joins, groupBy, and other complex expressions.

    val people: SqlQuery<Pair<Person, Address>> = sql.select {
      val p = from(people)
      val a = join(addresses) { a -> a.personId == p.id }
      p to a
    }
    //> SELECT p.id, p.name, p.age, a.personId, a.street, a.zip  FROM Person p JOIN Address a ON a.personId = p.id 

    (This is actually just a shortening of the sql { select { ... } } expression.)

  • An arbitrary code snippet

    val nameIsJoe: SqlExpression<(Person) -> Boolean> = sql.expression {
      { p: Person -> p.name == "Joe" }
    }

    You can then use them with normal queries like this:

    // The .use function changes it from SqlExpression<(Person) -> Boolean> to just (Person) -> Boolean
    // you can only use it inside of a `sql` block. 
    sql { Table<Person>().filter { p -> nameIsJoe.use(p) } }

How do I use normal runtime data inside my sql blocks?


For most data types use param(...). For example:

val runtimeName = "Joe"
val q = sql { Table<Person>().filter { p -> p.name == param(runtimeName) } }
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.id, p.name, p.age FROM Person p WHERE p.name = ?

The ExoQuery infrastructure will know to inject the value of runtimeName into the ? placeholder in the prepared-statement of the database driver.

This will work for primitives, and KMP and Java date-types (i.e. from java.time.*). ExoQuery uses kotlinx.serialization behind the scenes. In some cases you might want to use paramCtx to create a contextual parameter (Kotlin docs: contextual-serialization) or paramCustom to specify a custom serializer (Kotlin docs: custom-serializers). See the Parameters and Serialization section for more details.

Getting Started

Add the following to your build.gradle.kts. First add the plugin and then one of the below dependency blocks.

// First add the plugin:
plugins {
  id("io.exoquery.exoquery-plugin") version "2.3.0-1.7.1.PL"
  kotlin("plugin.serialization") version "2.3.0" // exoquery relies on this
}

// Then add a runner...
// For Java (JDBC):
dependencies {
  implementation("io.exoquery:exoquery-runner-jdbc:1.7.1.PL")
  implementation("org.postgresql:postgresql:42.7.0") // Remember to include the right JDBC Driver
}

// For Java (R2DBC - Reactive):
dependencies {
  implementation("io.exoquery:exoquery-runner-r2dbc:1.7.1.PL")
  implementation("org.postgresql:r2dbc-postgresql:1.0.5.RELEASE") // Remember to include the right R2DBC Driver
}

// For: IOS, OSX, Native Linux, and Mingw using Kotlin Multiplatform
dependencies {
  implementation("io.exoquery:exoquery-runner-native:1.7.1.PL")
  // implementation("app.cash.sqldelight:native-driver:2.0.2") // Optional
}

// For Android:
dependencies {
  implementation("io.exoquery:exoquery-runner-android:1.7.1.PL")
  implementation("androidx.sqlite:sqlite-framework:2.4.0")
}

Why the funny version numbers?
ExoQuery's compiler-plugin component has versions that look like this: <KotlinVersion>-<ExoQueryPluginVersion>.PL(.RC?)
The runners have a version that looks like this: <ExoQueryPluginVersion>.PL(.RC?).
That way only the plugin version needs to be bumped whenever a new version of Kotlin is released, not all of the runners too.
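
Concretely, reading the coordinates used above:

// id("io.exoquery.exoquery-plugin") version "2.3.0-1.7.1.PL"
//   -> Kotlin version 2.3.0, ExoQuery plugin version 1.7.1.PL
// implementation("io.exoquery:exoquery-runner-jdbc:1.7.1.PL")
//   -> runner version 1.7.1.PL (plugin version only, no Kotlin prefix)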

Getting Started with JDBC

You can get started writing queries like this:

// Create a JVM DataSource the way you normally would
val ds: DataSource = ...
val controller = JdbcControllers.Postgres(ds)

// Create a query:
val query = sql {
  Table<Person>().filter { p -> p.name == "Joe" }
}

// Build it for the right dialect and execute:
val output = query.buildFor.Postgres().runOn(controller)

Getting Started with R2DBC (Reactive)

ExoQuery provides full support for R2DBC (Reactive Relational Database Connectivity), enabling non-blocking, reactive database operations. R2DBC is ideal for high-concurrency applications and reactive frameworks like Spring WebFlux or Ktor.

Setup

First, add the R2DBC driver dependency for your database:

dependencies {
  implementation("io.exoquery:exoquery-runner-r2dbc:1.7.1.PL")

  // Choose your database driver:
  implementation("org.postgresql:r2dbc-postgresql:1.0.5.RELEASE")  // PostgreSQL
  // implementation("io.asyncer:r2dbc-mysql:1.0.5")          // MySQL
  // implementation("io.r2dbc:r2dbc-mssql:1.0.2.RELEASE")    // SQL Server
  // implementation("io.r2dbc:r2dbc-h2:1.0.0.RELEASE")       // H2
}

Basic Usage

import io.r2dbc.spi.ConnectionFactories
import io.r2dbc.spi.ConnectionFactoryOptions
import io.exoquery.controller.r2dbc.R2dbcControllers

// Create an R2DBC connection factory
val connectionFactory = ConnectionFactories.get(
  ConnectionFactoryOptions.builder()
    .option(ConnectionFactoryOptions.DRIVER, "postgresql")
    .option(ConnectionFactoryOptions.HOST, "localhost")
    .option(ConnectionFactoryOptions.PORT, 5432)
    .option(ConnectionFactoryOptions.DATABASE, "mydb")
    .option(ConnectionFactoryOptions.USER, "user")
    .option(ConnectionFactoryOptions.PASSWORD, "password")
    .build()
)

// Create an R2DBC controller
val controller = R2dbcControllers.Postgres(connectionFactory = connectionFactory)

// Write queries using the same DSL as JDBC
val query = sql {
  Table<Person>().filter { p -> p.age > 21 }
}

// Execute the query reactively
val results: List<Person> = query.buildFor.Postgres().runOn(controller)

Supported Databases

R2DBC support is available for:

  • PostgreSQL - Full support including RETURNING clauses and DISTINCT ON
  • MySQL - Full support with ON DUPLICATE KEY UPDATE
  • SQL Server - Full support with OUTPUT clauses and IDENTITY_INSERT handling
  • H2 - Full support for testing and development
  • Oracle - Infrastructure ready (connection factory available)

Custom Type Encoding

R2DBC supports custom encoders/decoders for domain-specific types:

import io.exoquery.controller.r2dbc.R2dbcEncodingConfig
import io.exoquery.controller.r2dbc.R2dbcBasicEncoding

// Define a custom type
data class PersonId(val value: Int)

// Create a controller with custom encoding
val controller = R2dbcControllers.Postgres(
  encodingConfig = R2dbcEncodingConfig.Default(
    encoders = setOf(
      R2dbcBasicEncoding.IntEncoder.contramap { id: PersonId -> id.value }
    ),
    decoders = setOf(
      R2dbcBasicEncoding.IntDecoder.map { PersonId(it) }
    )
  ),
  connectionFactory = connectionFactory
)

Key Features

  • Non-blocking I/O - Fully reactive, non-blocking database operations
  • Identical DSL - Same query syntax as JDBC for easy migration
  • Type Safety - Compile-time type checking for queries
  • Backpressure - Built-in support through reactive streams
  • All Query Features - Supports joins, aggregations, subqueries, window functions, etc.
  • Batch Operations - Efficient batch inserts, updates, and deletes
  • Transactions - Full transaction support with the same API as JDBC
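
For example, here is a minimal sketch of a reactive transaction, reusing the controller and query built above (the transaction API is shown in more detail in the Transactions section below):

controller.transaction {
  val adults = query.buildFor.Postgres().runOn(controller)
  // ...other statements run here participate in the same transaction
}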

When to Use R2DBC

R2DBC provides benefits in:

  • High-concurrency applications
  • Reactive frameworks (Spring WebFlux, Ktor)
  • I/O-bound workloads
  • Microservices requiring non-blocking behavior
  • Applications needing efficient resource utilization

For CPU-bound operations or simpler applications, JDBC may be more appropriate.


Have a look at code samples for starter projects here:

ExoQuery Features

ExoQuery has a plethora of features that allow you to write SQL queries; these are enumerated below. Instead of using reflection to encode and decode data, ExoQuery builds on top of the Terpal-SQL database controller in order to encode and decode Kotlin data classes in a fully cross-platform way. Although you do not need data classes to be @Serializable in order to build the actual queries, you do need it in order to run them.
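
For example, a minimal sketch of that distinction:

@Serializable
data class Person(val id: Int, val name: String, val age: Int)

val q = sql { Table<Person>().filter { p -> p.age > 18 } }
val built = q.buildFor.Postgres()   // building the SQL works with or without @Serializable
val rows = built.runOn(myDatabase)  // running requires Person to be @Serializable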

Composing Queries

ExoQuery lets you compose functions on SqlQuery instances. These functions are not available outside of an sql block because the sql block delimits the boundary of the query. SqlQuery instances are created by the Table<MyRow> function, representing simple table elements. Following functional-programming principles, any transformation works on any SqlQuery instance, and works the same way.

Map

This is also known as an SQL projection. It allows you to select a subset of the columns in a table.

val q = sql {
  Table<Person>().map { p -> p.name }
}
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.name FROM Person p

You can also use the .map function to perform simple aggregations on tables. For example:

val q = sql {
  Table<Person>().map { p -> Triple(min(p.age), max(p.age), avg(p.age)) }
}
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT min(p.age), max(p.age), avg(p.age) FROM Person p

Filter

This is also known as a SQL where clause. It allows you to filter the rows in a table.

val q = sql {
  Table<Person>().filter { p -> p.name == "Joe" }
}
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.id, p.name, p.age FROM Person p WHERE p.name = 'Joe'

You can also do .where { name == "Joe" } for a slightly more SQL-idiomatic experience, but this function is not as powerful.

If you are using a sql.select block, you can also use the where function to filter the rows:

val q = sql.select {
  val p = from(Table<Person>())
  where { p.name == "Joe" }
}

Correlated Subqueries

You can combine the filter and map functions to create correlated subqueries. For example, let's say we want to find all the people over the average age:

val q = sql {
  Table<Person>().filter { p -> p.age > Table<Person>().map { it.age }.avg() }
}
//> SELECT p.id, p.name, p.age FROM Person p WHERE p.age > (SELECT avg(p1.age) FROM Person p1)

Recall that you can use the .map function with table aggregations. Say you want not just the average but the average combined with another aggregation (e.g. the average plus the minimum). Normally you would use an expression like avg(p.age) + min(p.age) with the .map function.

val customExpr: SqlQuery<Double> = sql { Table<Person>().map { p -> avg(p.age) + min(p.age) } }

If you want to use a statement like this inside of a correlated subquery, use the .value() function inside of an sql block to convert a SqlQuery<T> into just T.

val q = sql {
  Table<Person>().filter { p -> p.age > Table<Person>().map { p -> avg(p.age) + min(p.age) }.value() }
}
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.id, p.name, p.age FROM Person p WHERE p.age > (SELECT avg(p1.age) + min(p1.age) FROM Person p1)

SortedBy

The Kotlin collections sortedBy and sortedByDescending functions are also available on SqlQuery instances.

val q = sql {
  Table<Person>().sortedBy { p -> p.name }
}
//> SELECT p.id, p.name, p.age FROM Person p ORDER BY p.name

val q = sql {
  Table<Person>().sortedByDescending { p -> p.name }
}
//> SELECT p.id, p.name, p.age FROM Person p ORDER BY p.name DESC

When you want to do advanced sorting (e.g. different sorting for different columns) use a select block and the sortBy function inside.

val q = sql.select {
  val p = from(Table<Person>())
  sortBy(p.name to Asc, p.age to Desc)
}
//> SELECT p.id, p.name, p.age FROM Person p ORDER BY p.name ASC, p.age DESC

Joins

Use sql.select to do as many joins as you need.

val q: SqlQuery<Pair<Person, Address>> =
  sql.select {
    val p = from(Table<Person>())
    val a = join(Table<Address>()) { a -> a.ownerId == p.id }
    p to a
  }
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.id, p.name, p.age, a.ownerId, a.street, a.zip FROM Person p JOIN Address a ON a.ownerId = p.id

Let's add a left-join:

val q: SqlQuery<Triple<Person, Address, Furniture?>> =
  sql.select {
    val p = from(Table<Person>())
    val a = join(Table<Address>()) { a -> a.ownerId == p.id }
    val f = joinLeft(Table<Furniture>()) { f -> f.locatedAt == a.id }
    Triple(p, a, f)
  }
//> SELECT p.id, p.name, p.age, a.ownerId, a.street, a.zip, f.name, f.locatedAt FROM Person p 
//  JOIN Address a ON a.ownerId = p.id 
//  LEFT JOIN Furniture f ON f.locatedAt = a.id

Notice that the Furniture row is now nullable. This is because a left-join can return null values.
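
Since the left-joined row is nullable, you can handle it with the null-safe operators shown in the introduction. A minimal sketch (reusing the Person and Address tables from above):

sql.select {
  val p = from(Table<Person>())
  val a = joinLeft(Table<Address>()) { a -> a.ownerId == p.id }
  p.name to (a?.street ?: "no address")
}
//> (roughly) SELECT p.name, CASE WHEN a.street IS NULL THEN 'no address' ELSE a.street END FROM Person p LEFT JOIN Address a ON a.ownerId = p.id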

What can go inside of the sql.select function is very carefully controlled by ExoQuery. It needs to be one of the following:

  • A from statement
  • A join or joinLeft statement
  • A where clause (see the filter section above for more details).
  • A sortBy clause (see the sortBy section above for more details).
  • A groupBy clause (see the groupBy section below for more details).

You can use all of these features together. For example:

val q: SqlQuery<Pair<Person, Address>> =
  sql.select {
    val p = from(Table<Person>())
    val a = join(Table<Address>()) { a -> a.ownerId == p.id }
    where { p.name == "Joe" }
    sortBy(p.name to Asc, p.age to Desc)
    groupBy(p.name)
    p to a
  }

q.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.id, p.name, p.age, a.ownerId, a.street, a.city FROM Person p 
//  INNER JOIN Address a ON a.ownerId = p.id 
//  WHERE p.name = 'Joe' GROUP BY p.name ORDER BY p.name ASC, p.age DESC

Also note that you can do implicit joins using the sql.select function as well. For example, the following query is perfectly reasonable:

val q = sql.select {
  val p = from(Table<Person>())
  val a = from(Table<Address>())
  val r = from(Table<Robot>())
  where {
    p.id == a.ownerId && p.id == r.ownerId && p.name == "Joe"
  }
  Triple(p, a, r)
}
//> SELECT p.id, p.name, p.age, a.ownerId, a.street, a.zip, r.ownerId, r.model FROM Person p, Address a, Robot r 
//  WHERE p.id = a.ownerId AND p.id = r.ownerId AND p.name = 'Joe'

GroupBy

Use a sql.select function to do groupBy. You can use the groupBy function to group by multiple columns.

val q: SqlQuery<MyData> =
  sql.select {
    val p = from(Table<Person>())
    val a = join(Table<Address>()) { a -> a.ownerId == p.id }
    groupBy(p.name, a.street)
    MyData(p.name, a.street, avg(p.age)) // Average age of all people named X on the same street
  }
//> SELECT p.name, a.street, avg(p.age) FROM Person p
//  JOIN Address a ON a.ownerId = p.id
//  GROUP BY p.name, a.street

Having

You can use the having function to filter the results of a groupBy query.

val q: SqlQuery<MyData> =
  sql.select {
    val p = from(Table<Person>())
    val a = join(Table<Address>()) { a -> a.ownerId == p.id }
    groupBy(p.name, a.street)
    having { avg(p.age) > 18 }
    MyData(p.name, a.street, avg(p.age)) // Average age of all people named X on the same street
  }
//> SELECT p.name, a.street, avg(p.age) FROM Person p
//  JOIN Address a ON a.ownerId = p.id
//  GROUP BY p.name, a.street
//  HAVING avg(p.age) > 18

SQL Actions

ExoQuery has a simple DSL for performing SQL actions.

Insert

val q =
  sql {
    insert<Person> { set(name to "Joe", age to 44, companyId to 123) }
  }
q.buildFor.Postgres().runOn(myDatabase)
//> INSERT INTO Person (name, age, companyId) VALUES ('Joe', 44, 123)

Typically you will use param to insert data from runtime values:

val nameVal = "Joe"
val ageVal = 44
val companyIdVal = 123
val q =
  sql {
    insert<Person> { set(name to param(nameVal), age to param(ageVal), companyId to param(companyIdVal)) }
  }
//> INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?)

When this gets too cumbersome (and it will!) see how to insert a whole row.

Insert a Whole Row

You can insert an entire person object using setParams.

val person = Person("Joe", 44, 123)
val q =
  sql {
    insert<Person> { setParams(person) }
  }
//> INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?)

Insert with Exclusions

Wait a second, don't database-model objects like Person typically have one or more primary-key columns that need to be excluded during the insert because the database generates them?

Here is how to do that:

val person = Person(id = 0, "Joe", 44, 123)

val q =
  sql {
    insert<Person> { setParams(person).excluding(id) } // you can add multiple exclusions here e.g. excluding(id, id1, id2, ...)
  }
//> INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?)

Insert with Returning ID

What do we do if we need to know the row id of the row we just inserted? The best way to do that is to use the returning function to add a RETURNING clause to the insert statement.

data class Person(val id: Int, val name: String, val age: Int, val companyId: Int)

val person = Person(id = 0, "Joe", 44, 123)
val q =
  sql {
    insert<Person> { setParams(person).excluding(id) }.returning { p -> p.id }
  }

q.buildFor.Postgres().runOn(myDatabase) // Also works with SQLite
//> INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?) RETURNING id
q.buildFor.SqlServer().runOn(myDatabase)
//> INSERT INTO Person (name, age, companyId) OUTPUT INSERTED.id VALUES (?, ?, ?)

This will work for Postgres, SQLite, and SqlServer. For other databases use .returningKeys { id } which will instruct the database driver to return the inserted row keys at a lower level. This function is more limited than what returning can do, and it is prone to various database-driver quirks, so be sure to test it on your database appropriately.
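
A minimal sketch of returningKeys based on the description above (the exact key type returned depends on the driver):

val q = sql {
  insert<Person> { setParams(person).excluding(id) }.returningKeys { id }
}
val keys = q.buildFor.Postgres().runOn(myDatabase) // keys returned by the driver, e.g. the generated ids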

The returning function is more flexible than returningKeys because it allows you to return not only any column in the inserted row but also collect these columns into a composite object of your choice. For example:

data class MyOutputData(val id: Int, val name: String)

val person = Person(id = 0, "Joe", 44, 123)

val q =
  sql {
    insert<Person> { setParams(person).excluding(id) }
      .returning { p -> MyOutputData(p.id, p.name + "-suffix") }
  }

val output: List<MyOutputData> = q.buildFor.Postgres().runOn(myDatabase)
//> INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?) RETURNING id, name || '-suffix' AS name

Insert with OnConflict

You can use the onConflict function to specify what to do in case of a conflict. This is supported in Postgres, Sqlite, and MySQL.

Use onConflictUpdate to update rows when a conflict occurs. Use the excluded argument to refer to the row that is being inserted. This will become the EXCLUDED term in the generated SQL.

val person = Person(id = 0, "Joe", 44, 123)
val q =
  sql {
    insert<Person> { setParams(person) }
      onConflictUpdate(id) { excluded -> set(name to excluded.name) }
  }
// Postgres and SQLite:
// INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name
// MySQL
// INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?) AS x ON DUPLICATE KEY UPDATE name = x.name

The onConflictUpdate supports complex expressions as well. For example:

val person = Person(id = 0, "Joe", 44, 123)
val q =
  sql {
    insert<Person> { setParams(person) }
    onConflictUpdate(id) { excluded -> set(name to name + "-" + excluded.name) }
  }
// Postgres and SQLite:
// INSERT INTO Person AS x (name, age, companyId) VALUES (?, ?, ?) ON CONFLICT (id) DO UPDATE SET name = x.name || '-' || EXCLUDED.name
// MySQL
// INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?) AS x ON DUPLICATE KEY UPDATE name = Person.name || '-' || x.name

(Note that, quite ironically, the table alias x is understood by Postgres and Sqlite to be the previous value, while it is understood by MySQL as the incoming value.)

Use onConflictIgnore to ignore the insert if a conflict occurs. This is also supported in Postgres, Sqlite, and MySQL.

val person = Person(id = 0, "Joe", 44, 123)
val q =
  sql {
    insert<Person> { setParams(person) }
    onConflictIgnore(id)
  }
// Postgres and SQLite:
// INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?) ON CONFLICT (id) DO NOTHING
// MySQL:
// INSERT IGNORE INTO Person (name, age, companyId) VALUES (?, ?, ?)

Update

The update statement is similar to the insert statement. You can use the set function to set the values of the columns you want to update, and typically you will use param to set SQL placeholders for runtime values. Use a .where clause to filter your update query.

val joeName = "Joe"
val joeAge = 44
val joeId = 123

val q =
  sql {
    update<Person> {
      set(name to param(joeName), age to param(joeAge))
        .where { id == param(joeId) }
    }
  }
q.buildFor.Postgres().runOn(myDatabase)
//> UPDATE Person SET name = ?, age = ? WHERE id = ?

Similar to INSERT, you can use setParams to set columns from the entire person object. Combine this with excluding to exclude the primary key column from the update statement and use the where clause to filter your update query.

val person = Person(id = 1, "Joe", 44, 123)
val q =
  sql {
    update<Person> { setParams(person).excluding(id) }
      .where { id == param(joeId) }
  }
q.buildFor.Postgres().runOn(myDatabase)
//> UPDATE Person SET name = ?, age = ?, companyId = ? WHERE id = ?

You can also use a returning clause to return the updated row if your database supports it.

val person = Person(id = 1, "Joe", 44, 123)

val q =
  sql {
    update<Person> { setParams(person).excluding(id) }
      .where { id == param(joeId) }
      .returning { p -> p.id }
  }

q.buildFor.Postgres().runOn(myDatabase) // Also works with SQLite
//> UPDATE Person SET name = ?, age = ?, companyId = ? WHERE id = ? RETURNING id
q.buildFor.SqlServer().runOn(myDatabase)
//> UPDATE Person SET name = ?, age = ?, companyId = ? OUTPUT INSERTED.id WHERE id = ?

Delete

Delete works exactly the same as insert and update, but there is no set clause since no values are being set.

val joeId = 123
val q =
  sql {
    delete<Person>.where { id == param(joeId) }
  }
q.buildFor.Postgres().runOn(myDatabase)
//> DELETE FROM Person WHERE id = ?

The Delete DSL also supports returning clauses:

val joeId = 123
val q =
  sql {
    delete<Person>
      .where { id == param(joeId) }
      .returning { p -> p.id }
  }
q.buildFor.Postgres().runOn(myDatabase) // Also works with SQLite
//> DELETE FROM Person WHERE id = ? RETURNING id
q.buildFor.SqlServer().runOn(myDatabase)
//> DELETE FROM Person OUTPUT DELETED.id WHERE id = ?

Batch Actions

Batch queries are supported (only JDBC for now) for insert, update, and delete actions as well as all of the features they support (e.g. returning, excluding, etc.).

val people = listOf(
  Person(id = 0, name = "Joe", age = 33, companyId = 123),
  Person(id = 0, name = "Jim", age = 44, companyId = 456),
  Person(id = 0, name = "Jack", age = 55, companyId = 789)
)
val q =
  sql.batch(people) { p ->
    insert<Person> {
      set(
        name to param(p.name),
        age to param(p.age),
        companyId to param(p.companyId)
      )
    }
  }

q.buildFor.Postgres().runOn(myDatabase)
//> INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?)

This tells the JDBC driver to use a PreparedStatement, calling .addBatch() for each element and then executeBatch() to execute every queued insert. Batch queries for update and delete work the same way.
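
Roughly, what the JDBC runner does under the hood looks like the following sketch (illustrative plain JDBC, not ExoQuery's actual code):

import java.sql.Connection

fun insertPeopleBatch(connection: Connection, people: List<Person>) {
  connection.prepareStatement("INSERT INTO Person (name, age, companyId) VALUES (?, ?, ?)").use { stmt ->
    for (p in people) {
      stmt.setString(1, p.name)
      stmt.setInt(2, p.age)
      stmt.setInt(3, p.companyId)
      stmt.addBatch()    // queue this row
    }
    stmt.executeBatch()  // run all queued rows in one round trip
  }
}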

Column and Table Naming

If you need your table or columns to be named differently than the data-class name or its fields, you can use the kotlinx.serialization @SerialName("...") annotation:

@SerialName("corp_customer")
data class CorpCustomer {
  val name: String

  @SerialName("num_orders")
  val numOrders: Int

  @SerialName("created_at")
  val createdAt: Int
}

val q = sql { Table<CorpCustomer>() }
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT x.name, x.num_orders, x.created_at FROM corp_customer x

If you need this annotation for something else (e.g. JSON field names), you can also use @ExoEntity and @ExoField to do the same thing.

@ExoEntity("corp_customer")
data class CorpCustomer {
  val name: String

  @ExoField("num_orders")
  val numOrders: Int

  @ExoField("created_at")
  val createdAt: Int
}

val q = sql { Table<CorpCustomer>() }
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT x.name, x.num_orders, x.created_at FROM corp_customer x

SQL Fragment Functions

SQL fragment functions allow you to use Kotlin functions inside of sql blocks. Writing a SQL fragment function is as simple as adding the @SqlFragment annotation to a function that returns a SqlQuery<T> or SqlExpression<T> instance. Recall that in the introduction we saw a SQL fragment function that calculated the P/E ratio of a stock:

@SqlFragment
fun peRatioWeighted(stock: Stock, weight: Double): Double = sql.expression {
  (stock.price / stock.earnings) * weight
}

Once this function is defined you can use it inside an sql block like this:

sql {
  Table<Stock>().map { stock -> peRatioWeighted(stock, stock.marketCap / totalWeight) }
}

Note that SQL fragment functions can call other SQL fragment functions, for example:

@SqlFragment
fun peRatioSimple(stock: Stock): Double = sql.expression {
  stock.price / stock.earnings
}

@SqlFragment
fun peRatioWeighted(stock: Stock, weight: Double): Double = sql.expression {
  peRatioSimple(stock) * weight
}

sql {
  Table<Stock>().map { stock -> peRatioWeighted(stock, stock.marketCap / totalWeight) }
}

Also note that SQL fragment functions can be written in extension-receiver position. For example, let's make the marketCap field into an extension function:

@SqlFragment
fun Stock.marketCap() = sql.expression {
  price * sharesOutstanding
}

val q = sql {
  val totalWeight = Table<Stock>().map { it.marketCap().use }.sum() // A local variable used in the query!
  Table<Stock>().map { stock -> peRatioWeighted(stock, stock.marketCap().use / totalWeight) }
}
println(q.buildFor.Postgres().value)
// SELECT (stock.price / stock.earnings) * ((this.price * this.sharesOutstanding) / (SELECT sum(this.price * this.sharesOutstanding) FROM Stock it)) AS value FROM Stock stock

Since SQL fragment functions guarantee that the code inside of them leads to a compile-time generated query, they cannot be used arbitrarily. They can only contain a single sql, sql.select, or sql.expression block. They cannot have any other kind of control logic (e.g. if, when, for, etc.) inside of them. If you want a more flexible mechanism for writing queries, see the dynamic queries section below.

Common SQL Functions

This section summarizes the most common string functions you can use directly inside captured SQL blocks. There are two sources of functions:

  • Kotlin String methods that are whitelisted by the compiler plugin (via MethodWhiteList) and translated by SqlIdiom
  • Additional helper methods available via the StringSqlDsl

Supported Kotlin String methods (via MethodWhiteList):

  • substring(start, end) → SUBSTRING(column, start, end)
  • uppercase() → UPPER(column)
  • lowercase() → LOWER(column)
  • length → LENGTH/CHAR_LENGTH (dialect-dependent; may be rendered as LEN in some dialects)
  • trim() → TRIM(column)
    • Note: Only the zero-argument variation is supported. If you need to trim specific characters or trim only one side, use the StringSqlDsl helpers below.

StringSqlDsl helpers you can call on string expressions inside a captured block:

  • left(n: Int) → LEFT(column, n)
  • right(n: Int) → RIGHT(column, n)
  • replace(old: String, new: String) → REPLACE(column, old, new)
  • substring(start: Int, end: Int) → SUBSTRING(column, start, end)
  • uppercase() → UPPER(column)
  • lowercase() → LOWER(column)
  • trimBoth(charsToTrim: String) → TRIM(BOTH charsToTrim FROM column)
  • trimRight(charsToTrim: String) → TRIM(TRAILING charsToTrim FROM column)
  • trimLeft(charsToTrim: String) → TRIM(LEADING charsToTrim FROM column)

Examples:

sql.select {
  val p = from(Table<Person>())
  // Kotlin String methods (whitelisted)
  val a = p.name.substring(1, 3)      // SUBSTRING(p.name, 1, 3)
  val b = p.name.uppercase()           // UPPER(p.name)
  val c = p.name.lowercase()           // LOWER(p.name)
  val d = p.name.trim()                // TRIM(p.name) – only zero-arg is supported
  val e = p.name.length                // LENGTH(p.name) or LEN(p.name) depending on dialect

  // StringSqlDsl helpers
  val f = p.name.left(5)               // LEFT(p.name, 5)
  val g = p.name.right(2)              // RIGHT(p.name, 2)
  val h = p.name.replace(" ", "_")   // REPLACE(p.name, ' ', '_')
  val i = p.name.trimBoth("_")        // TRIM(BOTH '_' FROM p.name)
}

Polymorphic Query Abstraction

Continuing from the section on SQL Fragment Functions above, captured functions can use generics and polymorphism in order to create highly reusable query components. For example:

interface Locateable {
  val locationId: Int
}
data class Person(val name: String, val age: Int, override val locationId: Int) : Locateable
data class Robot(val model: String, val createdOn: LocalDate, override val locationId: Int) : Locateable
data class Address(val id: Int, val street: String, val zip: String)

// Now let's create a captured function that can be used with any Locateable object:
@SqlFragment
fun <L : Locateable> joinLocation(locateable: SqlQuery<L>): SqlQuery<Pair<L, Address>> =
  sql.select {
    val l = from(locateable)
    val a = join(addresses) { a -> a.id == l.locationId }
    l to a
  }

Now I can use this function with the Person table:

val people: SqlQuery<Pair<Person, Address>> = sql {
  joinLocation(Table<Person>().filter { p -> p.name == "Joe" })
}
people.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.name, p.age, p.locationId, a.id, a.street, a.zip FROM Person p JOIN Address a ON a.id = p.locationId WHERE p.name = 'Joe'

As well as the Robot table:

val robots: SqlQuery<Pair<Robot, Address>> = sql {
  joinLocation(Table<Robot>().filter { r -> r.model == "R2D2" })
}
//> SELECT r.model, r.createdOn, r.locationId, a.id, a.street, a.zip FROM Robot r JOIN Address a ON a.id = r.locationId WHERE r.model = 'R2D2'

You can then continue to compose the output of this function to get more and more powerful queries!
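
For example, a minimal sketch that composes joinLocation inside a select block, using the same destructuring pattern shown in the subquery example from the introduction:

val q = sql.select {
  val (p, a) = from(joinLocation(Table<Person>()))
  where { p.name == "Joe" }
  p.name to a.street
}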

Local Variables

Local variables can also be defined inside of an sql block. In the introduction we saw a query that looked like this:

sql {
  Table<Stock>().map { stock -> peRatioWeighted(stock, stock.marketCap / totalWeight) }
}

Note how I intentionally left the totalWeight variable undefined. Let's define it as a local variable:

val q =
  sql {
    val totalWeight = Table<Stock>().map { it.marketCap().use }.sum()
    Table<Stock>().map { stock -> peRatioWeighted(stock, stock.marketCap().use / totalWeight) }
  }
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT (stock.price / stock.earnings) * ((this.price * this.sharesOutstanding) / (SELECT sum(this.price * this.sharesOutstanding) FROM Stock it)) AS value FROM Stock stock

Transactions

ExoQuery supports transactions! Once a query or action is built (e.g. once you do .buildFor.Postgres()) you will get one of three possible things:

  1. A SqlCompiledQuery object. This is a query that can be executed on the database.
  2. A SqlCompiledAction object. This is an action that can be executed on the database.
  3. A SqlCompiledBatchAction object. This is a SQL batch action; typically it is not supported for transactions.

Once you have imported an ExoQuery runner project (e.g. exoquery-runner-jdbc) and created a DatabaseController (e.g. JdbcControllers.Postgres), you can run the query or action:

val ds: DataSource = ...
val controller = JdbcControllers.Postgres(ds)

val getJoes = sql {
  Table<Person>().filter { p -> p.name == "Joe" }
}.buildFor.Postgres()

fun setJoesToJims(ids: List<Int>) = sql {
  update<Person> { set(name to "Jim").where { id in params(ids) } }
}.buildFor.Postgres()

controller.transaction {
  val allJoes = getJoes.runOn(controller)
  // Execute some runtime logic to filter the people we want to update i.e. `shouldActuallyBeUpdated`
  val someJoes = allJoes.filter { p -> shouldActuallyBeUpdated(p) }
  setJoesToJims(someJoes.map { it.id }).runOn(controller)
}

Free Blocks

Calling UDFs

In situations where you need to use a SQL UDF that is available directly on the database, or when you need to use custom SQL syntax that is not supported by ExoQuery, you can use a free block.

val q = sql {
  Table<Person>().filter { p -> free("mySpecialDatabaseUDF(${p.name})") == "Joe" }
}
//> SELECT p.id, p.name, p.age FROM Person p WHERE mySpecialDatabaseUDF(p.name) = 'Joe'

You can pass param-calls into the free-block as well.

val myRuntimeVar = ...
val q = sql {
  Table<Person>().filter { p -> p.name == free("mySpecialDatabaseUDF(${param(myRuntimeVar)})") }
}
//> SELECT p.id, p.name, p.age FROM Person p WHERE p.name = mySpecialDatabaseUDF(?)

Enriching Queries

Free blocks are also useful for adding information before/after an entire query. For example:

val q = sql {
  free("${Table<Person>().filter { p -> p.name == "Joe" }} FOR UPDATE").asPure<SqlQuery<Person>>()
}
//> SELECT p.id, p.name, p.age FROM Person p WHERE p.name = 'Joe' FOR UPDATE

This technique is quite powerful when combined with SQL Fragment Functions to abstract out logic:

@SqlFragment
fun <T : Person> forUpdate(v: SqlQuery<T>) = sql {
  free("${v} FOR UPDATE").asPure<SqlQuery<T>>()
}

val q = sql {
  forUpdate(Table<Person>().filter { p -> p.age > 21 })
}
//> (SELECT p.id, p.name, p.age FROM Person p WHERE p.age > 21) FOR UPDATE

Enriching Actions

Free blocks can even be used with Action (i.e. insert, update, delete) statements:

val q = sql {
  insert<Person> { setParams(Person(1, "Joe", "Bloggs", 111)) }
}
val qq = sql {
  free("SET IDENTITY_INSERT Person ON ${q} SET IDENTITY_INSERT Person OFF").asPure<SqlAction<Person, Long>>()
}
qq.build<SqlServerDialect>().runOn(ctx)
//> SET IDENTITY_INSERT Person ON  INSERT INTO Person (id, name, age, companyId) VALUES (?, ?, ?, ?) SET IDENTITY_INSERT Person OFF

Window Functions

ExoQuery supports a variety of SQL window functions out of the box as well as the ability to use custom ones. Use the over().partitionBy()/orderBy() functions to specify the partitioning and/or ordering of the window function and then specify the window function itself.

data class Customer(val name: String, val age: Int, val membership: String)

val q = 
  sql.select {
    val c = from(Table<Customer>())
    Pair(
      c.name,
      over().partitionBy(c.membership).orderBy(c.name).avg(c.age) // Average age of customers in the same membership group, ordered by name
    )
  }
//> SELECT c.name, AVG(c.age) OVER (PARTITION BY c.membership ORDER BY c.name) FROM Customer c

Also note that the partitionBy and orderBy functions can take multiple columns as well, e.g. partitionBy(c.membership, c.age) or orderBy(c.name, c.age), as shown in the sketch below.
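
A short sketch of the multi-column forms named above:

sql.select {
  val c = from(Table<Customer>())
  c.name to over().partitionBy(c.membership, c.age).orderBy(c.name, c.age).avg(c.age)
}
//> SELECT c.name, AVG(c.age) OVER (PARTITION BY c.membership, c.age ORDER BY c.name, c.age) FROM Customer c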

Use the .frame(...) function to specify a custom window function; typically you will use this with the free("...sql...") function to specify the SQL for the window.

val q = 
  sql.select {
    val c = from(Table<Customer>())
    Pair(
      c.name,
      over().partitionBy(c.membership).orderBy(c.age).frame(free("NTILE(5)").invoke<Int>())
    )
  }
//> SELECT c.name, NTILE(5) OVER (PARTITION BY c.membership ORDER BY c.age) FROM Customer c

Parameters and Serialization

ExoQuery builds on top of kotlinx.serialization in order to encode/decode information into SQL prepared-statements and result-sets. The param function is used to bring runtime data into capture functions which are processed at compile-time. It does this in an SQL-injection-proof fashion by using parameterized queries at the driver level.

val runtimeName = "Joe"
val q = sql { Table<Person>().filter { p -> p.name == param(runtimeName) } }
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.id, p.name, p.age FROM Person p WHERE p.name = ?

param

The following data-types can be used with param

  • Primitives: String, Int, Long, Short, Byte, Float, Double, Boolean
  • Time Types: java.util.Date, LocalDate, LocalTime, LocalDateTime, ZonedDateTime, Instant, OffsetTime, OffsetDateTime
  • Kotlin Multiplatform Time Types: kotlinx.datetime.LocalDate, kotlinx.datetime.LocalTime, kotlinx.datetime.LocalDateTime, kotlinx.datetime.Instant
  • SQL Time Types: java.sql.Date, java.sql.Timestamp, java.sql.Time
  • Other: BigDecimal, ByteArray
  • Note that for all the time types, nanosecond granularity is not supported; values will be rounded to the nearest millisecond.

Note that for Kotlin Native, things like java.sql.Date and java.sql.Time do not exist. Kotlin Multiplatform uses kotlinx.datetime objects instead.

If you've used Terpal-SQL you'll notice these are the same as the Wrapped-Types that it supports. This is because Terpal-SQL and ExoQuery use the same underlying Database Controllers.
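
As a quick sketch, here is a date-typed param in a filter (the Event table and the comparison-operator support here are assumptions for illustration):

@Serializable
data class Event(val id: Int, val date: kotlinx.datetime.LocalDate)

val cutoff = kotlinx.datetime.LocalDate(2024, 1, 1)
val q = sql { Table<Event>().filter { e -> e.date > param(cutoff) } }
//> SELECT e.id, e.date FROM Event e WHERE e.date > ?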

params

If you want to do in-set SQL checks with runtime collections, use the params function:

val runtimeNames = listOf("Joe", "Jim", "Jack")
val q = sql { Table<Person>().filter { p -> p.name in params(runtimeNames) } }
q.buildFor.Postgres().runOn(myDatabase)
//> SELECT p.id, p.name, p.age FROM Person p WHERE p.name IN (?, ?, ?)

Internally this is handled almost the same way as param. It finds an appropriate kotlinx.serialization Serializer and uses it to encode the collection into a SQL prepared-statement.

(Note that there are also paramsCustom and paramsCtx functions that are analogous to paramCustom and paramCtx described below.)

paramCustom

If you want to use a custom serializer for a specific type, you can use the paramCustom function. This is frequently useful when you want to map structured-data to a primitive type:

// Define a serializer for the custom type
object EmailSerializer : KSerializer<Email> {
  override val descriptor: SerialDescriptor = PrimitiveSerialDescriptor("Email", PrimitiveKind.STRING)
  override fun serialize(encoder: Encoder, value: Email) = encoder.encodeString(value.value)
  override fun deserialize(decoder: Decoder) = Email(decoder.decodeString())
}

val email: Email = Email.safeEncode("joe@example.com")
val q = sql {
  Table<User>().filter { p -> p.email == paramCustom(email, EmailSerializer) }
}

If you're wondering what the User class looks like, remember that using Kotlin serialization, this kind of data most likely uses a property-based-serialization annotation!

@Serializable
data class User(
  val id: Int,
  val name: String,
  @Serializable(with = EmailSerializer::class)
  val email: Email
)

You can essentially think of paramCustom as a way to bring custom-serialized entities into an sql block. The serializer set up on the data-class coming out of the query can be a property-based serializer, a particular-type serializer, a file-specified serializer, or even a typealias serializer. When you want any of these kinds of things brought in as a param into an sql block, use paramCustom to do that.

paramCtx

The paramCtx function allows you to use contextual serialization for a specific type. Essentially this is like telling ExoQuery and kotlinx.serialization "don't worry about not having a serializer here... I got this." It means that you eventually need to provide a low-level database-driver encoder/decoder pair to the Database-Controller that you are going to use.

Let's say for example that you have a highly customized encoded type that can only be processed correctly at a low level.

data class ByteContent(val bytes: InputStream) {
  companion object {
    fun bytesFrom(input: InputStream) = ByteContent(ByteArrayInputStream(input.readAllBytes()))
  }
}

Then when creating the Database-Controller, provide a low-level encoder/decoder pair:

val myDatabase = object : JdbcControllers.Postgres(postgres.postgresDatabase) {
  override val additionalDecoders =
    super.additionalDecoders + JdbcDecoderAny.fromFunction { ctx, i -> ByteContent(ctx.row.getBinaryStream(i)) }
  override val additionalEncoders =
    super.additionalEncoders + JdbcEncoderAny.fromFunction(Types.BLOB) { ctx, v: ByteContent, i ->
      ctx.stmt.setBinaryStream(
        i,
        v.bytes
      )
    }
}

Then you can execute queries using ByteContent instances like this:

val bc: ByteContent = ByteContent.bytesFrom(File("myfile.txt").inputStream())
val q = sql {
  Table<MyBlobTable>().filter { b -> b.content == paramCtx(bc) }
}
q.buildFor.Postgres().runOn(myDatabase)

If you are wondering what MyBlobTable looks like, it is a simple data-class with a ByteContent field that specifies a contextual serializer. This is in fact required so that you can get instances of MyBlobTable out of the database.

@Serializable
data class MyBlobTable(val id: Int, @Contextual val content: ByteContent)

Without this, q.buildFor.Postgres() will work but .runOn(myDatabase) will not.

Note that this section is largely taken from Custom Primitives in the Terpal-SQL Database Controller documentation, which also points to a code sample for a Contextual Column Clob. If you are having any difficulty getting the above example to work with the Database-Controller, have a look at the link above.

Playing well with other Kotlinx Formats

When using ExoQuery with kotlinx-serialization with other formats such as JSON in real-world situations, you may frequently need either different encodings or even entirely different schemas for the same data. For example, you may want to encode a LocalDate using the SQL DATE type, but when sending the data to a REST API you may want to encode the same LocalDate as a String in ISO-8601 format (i.e. using DateTimeFormatter.ISO_LOCAL_DATE).

There are several ways to do this in kotlinx.serialization; I will discuss two of them.

Using a Contextual Serializer

The simplest way to have a different encoding for the same data in different contexts is to use a contextual serializer.

@Serializable
data class Customer(val id: Int, val firstName: String, val lastName: String, @Contextual val createdAt: LocalDate)

// Database Schema:
// CREATE TABLE customers (id INT, first_name TEXT, last_name TEXT, created_at DATE)

// This Serializer encodes the LocalDate as a String in ISO-8601 format and it will only be used for JSON encoding.
object DateAsIsoSerializer : KSerializer<LocalDate> {
  override val descriptor = PrimitiveSerialDescriptor("LocalDate", PrimitiveKind.STRING)
  override fun serialize(encoder: Encoder, value: LocalDate) =
    encoder.encodeString(value.format(DateTimeFormatter.ISO_LOCAL_DATE))
  override fun deserialize(decoder: Decoder): LocalDate =
    LocalDate.parse(decoder.decodeString(), DateTimeFormatter.ISO_LOCAL_DATE)
}

// When working with the database, the LocalDate will be encoded as a SQL DATE type. The Terpal driver
// will behave this way by default when a field is marked as @Contextual.
val ctx = JdbcControllers.Postgres.fromConfig("mydb")
val c = Customer(1, "Alice", "Smith", LocalDate.of(2021, 1, 1))
val q = sql {
  insert<Customer> {
    set(
      firstName to param(c.firstName),
      lastName to param(c.lastName),
      createdAt to paramCtx(c.createdAt)
    )
  }
}
q.buildFor.Postgres().runOn(ctx)
//> INSERT INTO customers (first_name, last_name, created_at) VALUES (?, ?, ?)

// Then later, when encoding the data as JSON, make sure to specify the DateAsIsoSerializer in the serializers-module.
val json = Json {
  serializersModule = SerializersModule {
    contextual(LocalDate::class, DateAsIsoSerializer)
  }
}
val jsonCustomer = json.encodeToString(Customer.serializer(), c)
println(jsonCustomer)
//> {"id":1,"firstName":"Alice","lastName":"Smith","createdAt":"2021-01-01"}

The Terpal-SQL repository has a useful code sample relevant to this use-case. See the Playing Well using Different Encoders example for more details.

Using Row-Surrogate Encoder

When the changes in encoding between the Database and JSON are more complex, you may want to use a row-surrogate encoder.

A row-surrogate encoder will take a data-class and copy it into another data-class (i.e. the surrogate data-class) whose schema is appropriate for the target format. The surrogate data-class needs to also be serializable and know how to create itself from the original data-class.

// Create the "original" data class
@Serializable
data class Customer(
  val id: Int,
  val firstName: String,
  val lastName: String,
  @Serializable(with = DateAsIsoSerializer::class) val createdAt: LocalDate
)

// Create the "surrogate" data class
@Serializable
data class CustomerSurrogate(
  val id: Int,
  val firstName: String,
  val lastName: String,
  @Contextual val createdAt: LocalDate
) {
  fun toCustomer() = Customer(id, firstName, lastName, createdAt)

  companion object {
    fun fromCustomer(customer: Customer): CustomerSurrogate {
      return CustomerSurrogate(customer.id, customer.firstName, customer.lastName, customer.createdAt)
    }
  }
}

Then create a surrogate serializer which uses the surrogate data-class to encode the original data-class.

object CustomerSurrogateSerializer : KSerializer<Customer> {
  override val descriptor = CustomerSurrogate.serializer().descriptor
  override fun serialize(encoder: Encoder, value: Customer) =
    encoder.encodeSerializableValue(CustomerSurrogate.serializer(), CustomerSurrogate.fromCustomer(value))
  override fun deserialize(decoder: Decoder): Customer =
    decoder.decodeSerializableValue(CustomerSurrogate.serializer()).toCustomer()
}

Then use the surrogate serializer when reading data from the database.

// You can then use the surrogate class when reading/writing information from the database:
val customers = sql {
  Table<Customer>().filter { c -> c.firstName == "Joe" }
}.buildFor.Postgres().runOn(ctx, CustomerSurrogateSerializer)
//> SELECT c.id, c.firstName, c.lastName, c.createdAt FROM customers c WHERE c.firstName = 'Joe'

// ...and use the regular data-class/serializer when encoding/decoding to JSON
println(Json.encodeToString(ListSerializer(Customer.serializer()), customers))
//> [{"id":1,"firstName":"Alice","lastName":"Smith","createdAt":"2021-01-01"}]

The Terpal-SQL repository has a useful code sample relevant to this use-case. See the Playing Well using Row-Surrogate Encoder section for more details.

JSON Columns

ExoQuery provides support for working with JSON and JSONB columns in PostgreSQL, MySQL, Sqlite, and SQL Server. You can select entire JSON objects, query specific fields, and insert JSON data using regular Kotlin data classes.

Setup

To use JSON columns, annotate your Kotlin data class with @SqlJsonValue. This tells ExoQuery to serialize/deserialize the class as JSON when interacting with the database:

@SqlJsonValue
@Serializable
data class ContactInfo(val email: String, val phone: String)

@Serializable
data class User(val id: Int, val name: String, val contacts: ContactInfo)

Database Schema

Your PostgreSQL table can use either JSON or JSONB column types:

CREATE TABLE Users (
    id       SERIAL PRIMARY KEY,
    name     VARCHAR(255),
    contacts JSONB
);

Selecting Entire Rows

You can select entire rows including JSON columns, and ExoQuery will automatically deserialize the JSON data:

val users = sql {
  Table<User>()
}.buildFor.Postgres().runOn(ctx)
//> SELECT id, name, contacts FROM Users
// [User(1, "Alice", ContactInfo("[email protected]", "555-1234"))]

Selecting JSON Fields

You can map to just the JSON field, which will be deserialized into your Kotlin object:

val contacts = sql {
  Table<User>().map { it.contacts }
}.buildFor.Postgres().runOn(ctx)
//  SELECT contacts FROM Users
//> [ContactInfo("alice@example.com", "555-1234")]

Selecting JSON fields inside JSON objects (implicit JSON extraction)

ExoQuery can also implicitly extract individual fields from JSON columns when the column’s Kotlin type is annotated with @SqlJsonValue.

That means you can write normal Kotlin property access (e.g. it.contacts.email) and ExoQuery will translate this to the correct JSON operator for your database:

  • Postgres/SQLite: -> for objects and ->> for text values
  • MySQL: JSON_EXTRACT(..., '$.path') and JSON_UNQUOTE(...)
  • SQL Server: JSON_QUERY(..., '$.path') for objects and JSON_VALUE(..., '$.path') for scalar values

This works recursively for nested JSON objects as well.

Example A: row -> JSON Object -> field

Suppose you have a contacts JSON column with ContactInfo(email: String, phone: String) and @SqlJsonValue is applied to ContactInfo:

@SqlJsonValue
@Serializable
data class ContactInfo(val email: String, val phone: String)

@Serializable
data class User(val id: Int, val name: String, val contacts: ContactInfo)

val emails = sql {
  Table<User>().map { it.contacts.email }
}.buildFor.Postgres().runOn(ctx)
// Postgres: SELECT contacts ->> 'email' AS value FROM Users
// SQLite:   SELECT contacts ->> 'email' AS value FROM Users
// MySQL:    SELECT JSON_UNQUOTE(JSON_EXTRACT(contacts, '$.email')) AS value FROM Users
// SQLSrv:   SELECT JSON_VALUE(contacts, '$.email') AS value FROM Users

You can also filter by JSON fields using the same property syntax:

val q = sql { Table<User>().filter { it.contacts.phone == "555-1234" } }
// Postgres/SQLite: WHERE contacts ->> 'phone' = '555-1234'
// MySQL:           WHERE JSON_UNQUOTE(JSON_EXTRACT(contacts, '$.phone')) = '555-1234'
// SQL Server:      WHERE JSON_VALUE(contacts, '$.phone') = '555-1234'

Example B: row -> JSON Object -> JSON Object -> field

Now imagine an orders table with a shipping JSON column where ShippingInfo contains an Address JSON object, and all JSON types are annotated with @SqlJsonValue:

@SqlJsonValue
@Serializable
data class Address(val street: String, val city: String, val country: String)

@SqlJsonValue
@Serializable
data class ShippingInfo(val carrier: String, val address: Address)

@Serializable
data class Order(val id: Int, val amount: Double, val shipping: ShippingInfo)

val cities = sql {
  Table<Order>().map { it.shipping.address.city }
}.buildFor.Postgres().runOn(ctx)
// Postgres: SELECT shipping -> 'address' ->> 'city' AS value FROM Orders
// SQLite:   SELECT shipping -> 'address' ->> 'city' AS value FROM Orders
// MySQL:    SELECT JSON_UNQUOTE(JSON_EXTRACT(JSON_EXTRACT(shipping, '$.address'), '$.city')) AS value FROM Orders
// SQLSrv:   SELECT JSON_VALUE(JSON_QUERY(shipping, '$.address'), '$.city') AS value FROM Orders

val canadianOrders = sql {
  Table<Order>().filter { it.shipping.address.country == "CA" }
}
// Postgres/SQLite: WHERE shipping -> 'address' ->> 'country' = 'CA'
// MySQL:           WHERE JSON_UNQUOTE(JSON_EXTRACT(JSON_EXTRACT(shipping, '$.address'), '$.country')) = 'CA'
// SQL Server:      WHERE JSON_VALUE(JSON_QUERY(shipping, '$.address'), '$.country') = 'CA'

This implicit JSON-field selection lets you keep writing idiomatic Kotlin property access without hand-writing JSON operators. ExoQuery figures out the proper SQL JSON extraction for your target database, even for deeply nested structures.

Selecting JSON fields inside JSON objects (explicit JSON extraction)

In addition to implicit extraction via Kotlin property access, you can explicitly extract JSON fields using the json DSL:

  • json.extract(column, fieldName): returns the JSON node/object at the given field path
  • json.extractAsText(column, fieldName): returns the scalar value of the field as text

Explicit extraction is useful when you want to be explicit about JSON operations, need to compose nested extractions, or are working in places where property access is less convenient. See the JsonOpSpec for more examples.

The following examples mirror the implicit versions above but use explicit extraction calls.

Example A (explicit): row -> JSON Object -> field

Suppose you have a contacts JSON column with ContactInfo(email: String, phone: String) and @SqlJsonValue is applied to ContactInfo:

@SqlJsonValue
@Serializable
data class ContactInfo(val email: String, val phone: String)

@Serializable
data class User(val id: Int, val name: String, val contacts: ContactInfo)

val emails = sql {
  Table<User>().map { json.extractAsText(it.contacts, "email") }
}.buildFor.Postgres().runOn(ctx)
// Postgres: SELECT contacts ->> 'email' AS value FROM Users
// SQLite:   SELECT contacts ->> 'email' AS value FROM Users
// MySQL:    SELECT JSON_UNQUOTE(JSON_EXTRACT(contacts, '$.email')) AS value FROM Users
// SQLSrv:   SELECT JSON_VALUE(contacts, '$.email') AS value FROM Users

You can also filter by JSON fields using explicit extraction:

val q = sql { Table<User>().filter { json.extractAsText(it.contacts, "phone") == "555-1234" } }
// Postgres/SQLite: WHERE contacts ->> 'phone' = '555-1234'
// MySQL:           WHERE JSON_UNQUOTE(JSON_EXTRACT(contacts, '$.phone')) = '555-1234'
// SQL Server:      WHERE JSON_VALUE(contacts, '$.phone') = '555-1234'

Example B (explicit): row -> JSON Object -> JSON Object -> field

Now imagine an orders table with a shipping JSON column where ShippingInfo contains an Address JSON object, and all JSON types are annotated with @SqlJsonValue:

@SqlJsonValue
@Serializable
data class Address(val street: String, val city: String, val country: String)

@SqlJsonValue
@Serializable
data class ShippingInfo(val carrier: String, val address: Address)

@Serializable
data class Order(val id: Int, val amount: Double, val shipping: ShippingInfo)

val cities = sql {
  Table<Order>().map { ex -> json.extractAsText(json.extract(ex.shipping, "address"), "city") }
}.buildFor.Postgres().runOn(ctx)
// Postgres: SELECT shipping -> 'address' ->> 'city' AS value FROM Orders
// SQLite:   SELECT shipping -> 'address' ->> 'city' AS value FROM Orders
// MySQL:    SELECT JSON_UNQUOTE(JSON_EXTRACT(JSON_EXTRACT(shipping, '$.address'), '$.city')) AS value FROM Orders
// SQLSrv:   SELECT JSON_VALUE(JSON_QUERY(shipping, '$.address'), '$.city') AS value FROM Orders

val canadianOrders = sql {
  Table<Order>().filter { json.extractAsText(json.extract(it.shipping, "address"), "country") == "CA" }
}
// Postgres/SQLite: WHERE shipping -> 'address' ->> 'country' = 'CA'
// MySQL:           WHERE JSON_UNQUOTE(JSON_EXTRACT(JSON_EXTRACT(shipping, '$.address'), '$.country')) = 'CA'
// SQL Server:      WHERE JSON_VALUE(JSON_QUERY(shipping, '$.address'), '$.country') = 'CA'

These explicit JSON operations provide fine-grained control and can be composed for arbitrarily deep paths while maintaining clear intent.
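
For instance, a third level of nesting just adds another json.extract call. A sketch (the geo and lat fields here are hypothetical and not part of the Address class above):

val latitudes = sql {
  Table<Order>().map { o ->
    json.extractAsText(json.extract(json.extract(o.shipping, "address"), "geo"), "lat")
  }
}
// Postgres: SELECT shipping -> 'address' -> 'geo' ->> 'lat' AS value FROM Orders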

Inserting JSON Data

Use param() to insert JSON data just like any other value:

val newContacts = ContactInfo("[email protected]", "555-5678")
val q = sql {
  insert<User> {
    set(id to 2, name to "Bob", contacts to param(newContacts))
  }
}
q.build<PostgresDialect>().runOn(ctx)
//> INSERT INTO Users (id, name, contacts) VALUES (?, ?, ?)

In this situation, ExoQuery automatically delegates param(newContacts) to the paramCustom method.

The ContactInfo object will be automatically serialized to JSON format when inserted into the database.
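
For reference, the bound parameter is presumably the standard kotlinx-serialization JSON encoding of the class, which you can reproduce yourself (a sketch):

import kotlinx.serialization.json.Json

// Encode the object the same way a @SqlJsonValue parameter would be encoded.
val payload = Json.encodeToString(ContactInfo.serializer(), ContactInfo("[email protected]", "555-5678"))
println(payload)
//> {"email":"[email protected]","phone":"555-5678"}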

Inserting Complete Rows

You can insert entire rows using setParams():

val user = User(2, "Bob", ContactInfo("[email protected]", "555-5678"))
val q = sql {
  insert<User> { setParams(user).excluding(id) }
}
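
Assuming excluding(id) simply drops that column from the generated statement, building this for Postgres should produce something like:

q.buildFor.Postgres().runOn(ctx)
//> INSERT INTO Users (name, contacts) VALUES (?, ?)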

Dynamic Queries

There are certain situations where ExoQuery cannot generate a query at compile-time. Most notably, this happens when runtime values determine which instance of SqlQuery or SqlExpression will be used. For example:

val someFlag: Boolean = someRuntimeLogic()
val q = sql {
  if (someFlag) {
    Table<Person>().filter { p -> p.name == "Joe" }
  } else {
    Table<Person>().filter { p -> p.name == "Jim" }
  }
}
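
Depending on the runtime value of someFlag, one of two statements is compiled and executed, roughly (column lists abbreviated):

// someFlag == true:  SELECT p.name, ... FROM Person p WHERE p.name = 'Joe'
// someFlag == false: SELECT p.name, ... FROM Person p WHERE p.name = 'Jim'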

ExoQuery does not know the value of someFlag at compile-time and therefore cannot generate a query at that point. Instead, ExoQuery compiles the query at runtime as the capture block is executed. This is called a dynamic query. Dynamic queries are extremely flexible and ExoQuery handles them well; however, there are a few caveats:

  • Dynamic queries require the ExoQuery Query-Compiler to run with your runtime-code. Specifically, wherever you call the .buildFor.SomeDatabase() function.
  • It can be problematic to call this code from performance-critical areas because the cost of a dynamic query can be on the order of milliseconds (whereas non-dynamic queries have zero runtime cost since they are created inside the compiler). Be sure to measure how long your dynamic queries take if you have any concerns. Kotlin's measureTime function is useful for this, so long as you run the query-build 5-10 times first to let the JIT do its work; see the sketch after this list.
  • You will not see dynamic queries in the SQL log output because they are not generated until runtime, although you will see a message that the query is dynamic.
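
Here is a minimal sketch of such a measurement, assuming q is a dynamic query like the one above (the warm-up count is illustrative):

import kotlin.time.measureTime

// Warm up first so the JIT compiles the hot paths, then measure the build step.
repeat(10) { q.buildFor.Postgres() }
val elapsed = measureTime { q.buildFor.Postgres() }
println("Dynamic query build took $elapsed")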

Dynamic queries effectively allow you to pass around SqlQuery<T> and SqlExpression<T> objects without any restrictions or limitations. For example:

@SqlDynamic
fun filteredIds(robotsAllowed: Boolean, value: SqlExpression<String>) =
  if (robotsAllowed)
    sql {
      Table<Person>().filter { p -> p.name == value.use }.map { p -> p.id } union
        Table<Robot>().filter { r -> r.model == value.use }.map { r -> r.id }
    }
  else
    sql { Table<Person>().filter { p -> p.name == value.use }.map { p -> p.id } }

val q = sql {
  Table<Tenants>().filter { c -> filteredIds(true, sql.expression { c.signatureName }) }
}
//> SELECT c.signatureName, c.rentalCode, c.moveInDate FROM Tenants c WHERE c.signatureName IN (SELECT p.id FROM Person p WHERE p.name = ? UNION SELECT r.id FROM Robot r WHERE r.model = ?)

Note several things:

  • The @SqlDynamic annotation is used to mark the function as a dynamic function; otherwise a parsing error along the lines of:
    Could not understand the SqlExpression (from the scaffold-call) that you are attempting to call `.use` on...
    ------------ Source ------------
    filteredIds(true, sql.expression { c.signatureName })
    
    or possibly:
    It looks like you are attempting to call the external function `joinedClauses` in a captured block
    
    will occur. This is because ExoQuery tries to be extra careful about controlling what goes inside a capture block; otherwise "Backend Lowering Exceptions" can occur, which are notoriously hard to debug.
  • Captured functions will typically take one or two runtime flags; the other parameters should be SqlQuery<T> or SqlExpression<T> objects. If you do not need to pass around SqlQuery<T> or SqlExpression<T> objects like this, you probably do not need SQL-Dynamics at all and should instead use compile-time SQL Fragment Functions.
  • Notice how above I used sql.expression { c.signatureName } nested inside another capture block. This is required because c.signatureName is merely a String, not a SqlExpression<String>. More importantly, it is a compile-time symbolic value (i.e. literally the expression "c.signatureName") that relies on the c variable defined inside the parent capture block. It is essential to understand that this c variable is not a real variable that exists "out there" in the runtime scope of accessible variables. Unless we re-wrap it, it is forced to remain inside the outer capture block in which it was defined.**

** NOTE: This kind of logic is not arbitrary or magical. It is deeply rooted in the principle of Phase Consistency, which is well understood in the academic literature on compiler metaprogramming. You do not need to learn any of this theory to use ExoQuery effectively; just be aware that ExoQuery is fundamentally lawful behind the scenes.

Another advantage of dynamic queries is that you can use them to create query-fragments from collections. For example, the following code takes a list of possible names and builds a person.name == x || person.name == y || ... chain of clauses from the list.

val possibleNames = listOf("Joe", "Jack")

@SqlDynamic
fun joinedClauses(p: SqlExpression<Person>) =
  possibleNames.map { n -> sql.expression { p.use.name == param(n) } }
    .reduce { a, b -> sql.expression { a.use || b.use } }

val filteredPeople = sql {
  Table<Person>().filter { p -> joinedClauses(sql.expression { p }).use }
}

filteredPeople.buildFor.Postgres()
//> SELECT p.id, p.name, p.age FROM Person p WHERE p.name = ? OR p.name = ?

Again, notice how I wrapped p in sql.expression { p } to make it a SqlExpression<Person>. What is being passed from filteredPeople to joinedClauses is literally the symbolic expression p.

Nested Datatypes

TBD

Schema-First Development

In ExoQuery, queries are synthesized from a DSL, so all you need for a Schema-First (or Database-First) development workflow is the ability to generate entity-classes (i.e. annotated data classes) from your database schema. ExoQuery can generate entity-classes at compile-time and automatically add them to the classpath so that you can use them to write queries right away.

Entity Generation

Let's say that you have a fairly consistent Postgres database schema that looks like this:

CREATE TABLE Person (id SERIAL PRIMARY KEY, first_name VARCHAR, last_name VARCHAR, age INT);
CREATE TABLE Address (id SERIAL PRIMARY KEY, owner_id INT REFERENCES Person(id), street VARCHAR, zip INT);

Firstly, add the JDBC driver to your ExoQuery plugin dependencies. For example:

exoQuery {
  codegenDrivers.add("org.postgresql:postgresql:42.7.3")
}

Then, add the following code to your source files:

// GeneratedEntitiesExample.kt
package my.example.app

fun myFunction() {
  sql.generate(
    Code.Entities(
      CodeVersion.Fixed("1.0.0"),
      DatabaseDriver.Postgres("jdbc:postgresql://<db-host>:<db-port>/<db-name>"),
      "my.example.app",   // root package for generated entities
      // Since database-schema is snake_case, use SnakeCase parser to convert table/column names to CamelCase class/property names
      nameParser = NameParser.SnakeCase
    )
  )
}

Then compile GeneratedEntitiesExample.kt and the following files will be generated in your MyProject/entities/main/kotlin directory:

// MyProject/entities/main/kotlin/my/example/app/<db-schema>/Person.kt
package my.example.app.<db-schema>
...
@Serializable
data class Person(val id: Int, @SerialName("first_name") val firstName: String, @SerialName("last_name") val lastName: String, val age: Int)

and...

// MyProject/entities/main/kotlin/my/example/app/<db-schema>/Address.kt
package my.example.app.<db-schema>
...
@Serializable
data class Address(val id: Int, @SerialName("owner_id") val ownerId: Int, val street: String, val zip: Int)

These files will automatically be added to your classpath so you can use them in your queries:

package my.example.app
...
fun myQuery() {
  val q = sql.select {
    val p = from(Table<Person>())
    val a = join(Table<Address>()) { a -> a.ownerId == p.id }
    Pair(p, a)
  }
  println(q.buildFor.Postgres().xr.show())
}

Typically you will want to store the database credentials in a .codegen.properties file in your project root.

# MyProject/.codegen.properties
user=postgres
password=postgres

(You can also tell the code-generator to look for credentials in the environment variables of your choice.)

NOTE: If you are using Postgres, it may be necessary to add the <db-schema> that you are using to the search path. You can do this by adding ?currentSchema=public,...,<db-schema> to the end of your JDBC URL.
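
For example, assuming your tables live in a schema named inventory:

jdbc:postgresql://<db-host>:<db-port>/<db-name>?currentSchema=public,inventory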

For more details, have a look at the Code Generation sample: exoquery-sample-codegen.

Pay Attention to CodeVersion

Whenever you change the CodeVersion.Fixed("1.0.0") setting, the entities will be regenerated. This means that if you change your database schema, you can simply bump the version number! When you do, be sure to actually recompile the file containing the sql.generate(...) call, since that is what triggers the regeneration of the entities. If the file is not recompiled, the entities will not be regenerated.
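
In practice this is a one-line change (version numbers are illustrative):

// Before the schema change:
CodeVersion.Fixed("1.0.0")
// After the schema change, bump to force regeneration:
CodeVersion.Fixed("1.0.1")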

Using AI for Cleaner Entities

If you want to use AI to clean up the generated entities, you can use the NameParser.UsingLLM option. Currently OpenAI and Ollama are supported.

// GeneratedEntitiesExample.kt
package my.example.app

fun myFunction() {
  sql.generate(
    Code.Entities(
      CodeVersion.Fixed("1.0.0"),
      DatabaseDriver.Postgres("jdbc:postgresql://<db-host>:<db-port>/<db-name>"),
      "my.example.app",
      nameParser = NameParser.UsingLLM(LLM.OpenAI())
    )
  )
}

Be sure to set your OpenAI API key in the .codegen.properties file, for example:

# MyProject/.codegen.properties
# Database credentials:
user=postgres
password=postgres
# OpenAI API key:
api-key=jsnfdkjsnf24920924...

For more details, have a look at the Code Generation sample using AI: exoquery-sample-codegen-ai

In the Nuts and Bolts

WARNING: This section is not for the faint of heart. It is a deep-dive into the more obscure parts of ExoQuery, and is not necessary to understand in order to use ExoQuery effectively. Please proceed with caution.

ExoQuery is based on what I learned maintaining the Quill Language Integrated Query library for the better part of a decade. It incorporates fundamental ideas from Functional Programming, Metaprogramming, and Category Theory in a non-invasive way, and was inspired by a series of precedents.

One important thing to understand about ExoQuery from a theoretical standpoint is that it is fundamentally monadic, just like Microsoft LINQ. The sql.select construct translates the sequence of linear variable assignments into a nesting of flatMaps. This is based on the novel approach of Monadless.

Take for example this query:

val q = sql.select {
  val p = from(Table<Person>())
  val a = join(Table<Address>()) { a -> p.id == a.ownerId }
  val r = join(Table<Robot>()) { r -> p.id == r.ownerId }
  Triple(p, a, r)
}
println(q.normalizeSelects().xr.show())

// Will yield:
Table(Person).flatMap { p ->
  Table(Address).join { a -> p.id == a.ownerId }.flatMap { a ->
    Table(Robot).join { r -> p.id == r.ownerId }.map { r ->
      Triple(first = p, second = a, third = r)
    }
  }
}

ExoQuery exposes a user-accessible flatMap function that can be used to create literally the same structure:

val q2 = sql {
  Table<Person>().flatMap { p ->
    internal.flatJoin(Table<Address>()) { a -> p.id == a.ownerId }.flatMap { a ->
      internal.flatJoin(Table<Robot>()) { r -> p.id == r.ownerId }.map { r ->
        Triple(p, a, r)
      }
    }
  }
}
println(q2.show())

// Creates the same thing!   
Table(Person).flatMap { p ->
  Table(Address).join { a -> p.id == a.ownerId }.flatMap { a ->
    Table(Robot).join { r -> p.id == r.ownerId }.map { r ->
      Triple(first = p, second = a, third = r)
    }
  }
}

The takeaway here is that the from() and join() functions are customized variations of the monadic bind, embedded in direct-style syntactic sugar.
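
To see why this counts as a monadic bind, note that the same nesting can be written with ordinary Kotlin collections, where flatMap plays exactly the same role. A self-contained sketch using in-memory lists instead of SqlQuery:

data class Person(val id: Int)
data class Address(val ownerId: Int)
data class Robot(val ownerId: Int)

val people = listOf(Person(1), Person(2))
val addresses = listOf(Address(1))
val robots = listOf(Robot(1))

// Each flatJoin becomes a filter over the inner list; flatMap is the bind.
val triples = people.flatMap { p ->
  addresses.filter { a -> p.id == a.ownerId }.flatMap { a ->
    robots.filter { r -> p.id == r.ownerId }.map { r ->
      Triple(p, a, r)
    }
  }
}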
