Hey 👋
Kysely's builder-pattern API (fluent interface, whatever you wanna call it) is mostly additive.
You can repeat the same method throughout the chain, and each call adds to the existing context instead of replacing it.
```ts
const rows = await db
  .selectFrom("person")
  .innerJoin("pet", "owner_id", "person.id")
  .where("first_name", "=", sql.lit("Jennifer"))
  .where("species", "=", species)
  .groupBy("first_name")
  .groupBy("last_name")
  .having("first_name", "=", "bruce")
  .having("last_name", ">", "josh")
  .select(["first_name", "pet.name as pet_name"])
  .select("name")
  .execute();
```
Compiles to:
```sql
SELECT
  "first_name",
  "pet"."name" AS "pet_name",
  "name"
FROM
  "person"
INNER JOIN "pet" ON "owner_id" = "person"."id"
WHERE
  "first_name" = 'Jennifer'
  AND "species" = $1
GROUP BY
  "first_name",
  "last_name"
HAVING
  "first_name" = $2
  AND "last_name" > $3
```
See how multiple `where` and `having` calls are `AND`ed together, and how multiple `select` calls just add to the projection list.
Same happens with updates:
```ts
await db
  .updateTable("person")
  .set("first_name", "moshe")
  .set("gender", "other")
  .execute();
```
Compiles to:
```sql
UPDATE "person"
SET
  "first_name" = $1,
  "gender" = $2
```
The same cannot be said about `InsertQueryBuilder`'s `values` method.
```ts
await db
  .insertInto("person")
  .values([{ first_name: "haim", gender: "male" }])
  .values([{ first_name: "moshe", gender: "other" }])
  .execute();
```
Compiles to:
```sql
INSERT INTO
  "person" ("first_name", "gender")
VALUES
  ($1, $2)
```
Instead of:
```sql
INSERT INTO
  "person" ("first_name", "gender")
VALUES
  ($1, $2),
  ($3, $4)
```
A single inserted row instead of two in this example.
This is unexpected behavior, given that the rest of the API is mostly additive.
We need to:
1. make it additive.
2. still allow people who enjoyed the current behavior to overwrite, e.g. via a `clearValues` method (sketched below).
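Roughly, the end state could read like this. This is a sketch only: neither the additive behavior nor `clearValues` exists yet, and the example assumes the same `db` instance as above.

```ts
// Sketch of the proposed additive behavior: repeated values() calls append
// rows, mirroring how where()/select()/set() behave elsewhere in the API.
await db
  .insertInto("person")
  .values([{ first_name: "haim", gender: "male" }])
  .values([{ first_name: "moshe", gender: "other" }]) // would add a 2nd row
  .execute();
// => INSERT INTO "person" ("first_name", "gender") VALUES ($1, $2), ($3, $4)

// Sketch of the proposed clearValues(): an explicit opt-in to today's
// "replace" behavior.
await db
  .insertInto("person")
  .values([{ first_name: "haim", gender: "male" }])
  .clearValues() // drops the previously accumulated rows
  .values([{ first_name: "moshe", gender: "other" }])
  .execute();
// => INSERT INTO "person" ("first_name", "gender") VALUES ($1, $2)
```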
Step 1 is problematic to ship immediately. If people abused the current behavior to replace state when reusing the same query instance (an example follows this list), this might result in unwanted outcomes:
a. the query will now attempt to re-insert already inserted rows, leading to possible unique constraint errors if you haven't dealt with duplicates in your query.
b. the query will potentially accumulate a LOT of rows in memory if the latest query instance keeps replacing the previous one - effectively a memory leak.
c. the database will potentially and unintentionally end up with a LOT of duplicate rows if re-inserting the same rows doesn't lead to unique constraint errors.
d. the query will accumulate so many rows that the database will error on row or parameter count limits for insertions.
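Concretely, the kind of reuse in question might look like this (hypothetical code, assuming the same `db` instance as above):

```ts
// A query instance is kept around and "updated" with a fresh batch each time,
// relying on values() replacing the previously provided rows.
let insert = db
  .insertInto("person")
  .values([{ first_name: "haim", gender: "male" }]);

await insert.execute();

// Today this replaces the first row; with additive values() it would re-send
// it as well, triggering (a)/(c), while the builder's rows and parameters
// keep growing over time, as in (b)/(d).
insert = insert.values([{ first_name: "moshe", gender: "other" }]);

await insert.execute();
```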
We might want to consider shipping step 2 first, and throw an error telling people to use `clearValues` before `values` when trying to replace the values in an existing query instance. It is a breaking change, but a safer one for production users.
In a release following that one, we can make it additive, and stop throwing errors.
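From a user's perspective, the interim behavior could look something like this (hypothetical sketch):

```ts
// Interim behavior (sketch): a second values() call on the same builder
// throws instead of silently replacing the rows, pointing users at
// clearValues() as the explicit way to replace.
db.insertInto("person")
  .values([{ first_name: "haim", gender: "male" }])
  .values([{ first_name: "moshe", gender: "other" }]); // would throw here
```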