Pgc – PostgreSQL Query Compiler

Note: pgc is still under development and does not yet ship a stable release.

Pgc is a type-safe SQL code generator for PostgreSQL, inspired by sqlc. It parses SQL queries, validates them against your schema, and generates strongly-typed models and async methods to execute them from your application code.

Getting started

Install pgc

Install pgc by running the following command:

curl -fsSL https://raw.githubusercontent.com/tvallotton/pgc/main/scripts/install.sh | bash

Setup config

Pgc needs a config file to work. You can create a default one with the following command:

$ pgc init

Schema

Pgc needs to know the database schema to generate models. Create a migrations folder, and create a file named schema.sql inside it with the following contents:

-- migrations/schema.sql
create table author (
    id uuid primary key default gen_random_uuid(),
    name text not null,
    birthday date
);

create table genre (
    id text primary key
);

create table book (
    id uuid primary key default gen_random_uuid(),
    title text not null,
    author_id uuid not null references author(id),
    is_best_seller bool default false,
    genre text not null references genre(id)
);

insert into genre values ('comedy'), ('science fiction'), ('fantasy');

Queries

Pgc will look for SQL files at queries/. Create a directory named queries, and a file named author.sql inside it with the following contents:

-- @name: insert :exec
insert into author values (
    $(author.id),
    $(author.name),
    $(author.birthday)
);

-- @name: get_by_id :one
select author from author where id = $id;

Finally, run:

$ pgc build

This should create a directory at package/queries with Python classes for each table, as well as a Queries class. The generated queries can be used as follows:

import asyncio
from datetime import date
from uuid import uuid4

import asyncpg

from package.queries import Queries, init_connection
from package.queries.models import Author


async def main():
    conn = await asyncpg.connect()

    # register type codecs so composite rows decode into models
    await init_connection(conn)

    # create the queries object
    queries = Queries(conn)

    author = Author(
        id=uuid4(),
        name="Mary Shelley",
        birthday=date.today(),
    )

    await queries.author.insert(author)

    author2 = await queries.author.get_by_id(author.id)

    assert author2 == author


asyncio.run(main())

The init_connection function registers type codecs on the connection so row types can be decoded directly into models. When using a connection pool, pass it as the init= argument so the pool initializes every connection the same way.
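
For example, a minimal pool setup might look like the following sketch (the surrounding setup mirrors the example above):

import asyncio

import asyncpg

from package.queries import Queries, init_connection


async def main():
    # every connection created by the pool is initialized with init_connection
    pool = await asyncpg.create_pool(init=init_connection)

    async with pool.acquire() as conn:
        queries = Queries(conn)
        # use queries as in the example above


asyncio.run(main())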

Namespaced queries

Queries are grouped by file name or an explicit @namespace directive:

-- book.sql
-- by default, queries in this file will be found at queries.book.*

-- @name: get_by_id :one
select book.* from book where id = $id;

-- @namespace: author
-- @name: get_books :many
select book.* from book
join author on author.id = book.author_id
where author.id = $author_id;

Each query can then be accessed under its namespace:

await queries.book.get_by_id(book_id)
await queries.author.get_books(author_id)

Nested namespaces are also supported:

-- @namespace: book.metrics
-- @name: get_best_sellers :many
select book from book where book.is_best_seller;

Then this method can be accessed as:

books: list[Book] = await queries.book.metrics.get_best_sellers()

Row types

PostgreSQL supports returning composite row types directly. Pgc takes advantage of this to provide rich typed results for joined queries:

-- author.sql
-- @name: get_author_with_books :one
select author, array_agg(book) as books
from author
join book on author.id = book.author_id
where author.id = $author_id
group by author.id;

The resulting row exposes typed fields for each selected expression:

row = await queries.author.get_author_with_books(author.id)
assert isinstance(row.author, Author)
assert isinstance(row.books[0], Book)

This saves us from constructing Author and Book instances from the raw row in application code.

Argument grouping

When passing multiple arguments (e.g., in INSERT or UPDATE), use field path syntax for clarity and grouping:

  • $(record.field): for required arguments
  • ?(record.field): for optional arguments

For example:

-- @name: upsert :one
insert into book
values (
    $(book.id),
    $(book.title),
    $(book.author_id),
    $(book.is_best_seller),
    $(book.genre)
)
on conflict (id) do update set
    title =          $(book.title),
    author_id =      $(book.author_id),
    is_best_seller = $(book.is_best_seller),
    genre =          $(book.genre)
returning book;

The grouped fields are then passed together as a single argument:

await queries.book.upsert(book=book)

Optional parameters

You may use ? instead of $ to declare an optional parameter:

select * from book
offset coalesce(?offset, 0)
limit coalesce(?limit, 24)
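
Optional parameters can then presumably be omitted at the call site. A hedged sketch, assuming the query above is saved in book.sql under a hypothetical -- @name: list :many directive:

await queries.book.list()                      # first 24 books
await queries.book.list(limit=10)              # first 10 books
await queries.book.list(offset=24, limit=24)   # second page of 24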

Foreign key enums

Instead of using raw enum types in Postgres, prefer foreign-key-backed enums for extensibility:

create table genre (
    id text primary key
);

insert into genre values
    ('science fiction'),
    ('fantasy'),
    ('biography');

Mark these as enums in your config:

codegen:
  options:
    enums:
      - genre

This generates:

class Genre(enum.StrEnum):
    SCIENCE_FICTION = 'science fiction'
    FANTASY = 'fantasy'
    BIOGRAPHY = 'biography'
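
The enum can then be used wherever the genre column appears; a hedged sketch, assuming the generated Book model types its genre field as Genre:

from uuid import uuid4

from package.queries.models import Book, Genre

book = Book(
    id=uuid4(),
    title="Frankenstein",
    author_id=uuid4(),  # an existing author's id in practice
    is_best_seller=True,
    genre=Genre.SCIENCE_FICTION,
)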

However, if the enum values are not inserted in your schema, you may specify them in your config file instead:

enums:
  - genre:
    - "science fiction"
    - "fantasy"
    - "biography"
