A collection of useful tricks for PostgreSQL
It can be useful to generate hardcoded records that don't come from the data in your tables.
For example, it may be useful to have an arbitrary empty record, filled with NULLs, at the end of a result set like this one:
SELECT
id,
title
FROM
competence_levels
ORDER BY id DESC NULLS LAST;
id | title
----+----------------------
6 | Proficient
5 | Skilled
4 | Experienced
3 | Familiar
2 | Practical Experience
1 | Basic Understanding
(6 rows)

To create this new empty row at the bottom of the result set, the UNION ALL operator can be used to add an additional row:
SELECT
id,
title
FROM
(
SELECT id, title FROM competence_levels
UNION ALL
SELECT NULL AS id, NULL AS title
) AS competence_levels_with_nulls
ORDER BY id DESC NULLS LAST;
id | title
----+----------------------
6 | Proficient
5 | Skilled
4 | Experienced
3 | Familiar
2 | Practical Experience
1 | Basic Understanding
|
(7 rows)

To add multiple records, another option is to use the VALUES keyword:
SELECT
id,
title
FROM
(
SELECT id, title FROM competence_levels
UNION ALL
SELECT id, title FROM (VALUES
(NULL, NULL),
(1, NULL),
(NULL, 'Basic Understanding')
) AS hardcoded_competence_levels(id, title)
) AS competence_levels_with_nulls
ORDER BY id DESC NULLS LAST;
id | title
----+----------------------
6 | Proficient
5 | Skilled
4 | Experienced
3 | Familiar
2 | Practical Experience
1 | Basic Understanding
1 |
| Basic Understanding
|
(9 rows)

When adding data to a database for testing purposes, it's often useful to have explicit id values to reference records in other tables via foreign keys.
However, these explicit id values are incompatible with identity columns such as one defined with id PRIMARY KEY GENERATED ALWAYS AS IDENTITY - inserting records with explicit id values will lead to "cannot insert a non-DEFAULT value into column" errors from PostgreSQL:
2025-01-06 12:49:32.107 UTC [17659] ERROR: cannot insert a non-DEFAULT value into column "id"
2025-01-06 12:49:32.107 UTC [17659] DETAIL: Column "id" is an identity column defined as GENERATED ALWAYS.
2025-01-06 12:49:32.107 UTC [17659] HINT: Use OVERRIDING SYSTEM VALUE to override.
2025-01-06 12:49:32.107 UTC [17659] STATEMENT:
INSERT INTO
regions (id, slug, title)
VALUES
($1, $2, $3),
($4, $5, $6)
ON CONFLICT (id) DO UPDATE
SET
id = excluded.id,
slug = excluded.slug,
title = excluded.title
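As the hint suggests, a quick alternative for one-off inserts is OVERRIDING SYSTEM VALUE (a minimal sketch, reusing the regions table from the statement above with sample values):

INSERT INTO
  regions (id, slug, title)
OVERRIDING SYSTEM VALUE
VALUES
  (1, 'eu', 'Europe'),
  (2, 'au', 'Australia')
ON CONFLICT (id) DO UPDATE
SET
  -- id is omitted here: GENERATED ALWAYS columns cannot be
  -- updated to a non-DEFAULT value
  slug = excluded.slug,
  title = excluded.title;

Note that this leaves the identity's sequence untouched, so later inserts relying on generated id values can still collide with the seeded records.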
To use explicit id values in test fixture data while keeping generated identity columns, drop the identity, insert the records, and add the identity back.
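In plain SQL, this approach looks roughly like this, again sketched with the regions table:

-- 1. Temporarily remove the identity from the id column
ALTER TABLE regions ALTER COLUMN id DROP IDENTITY;

-- 2. Insert the fixture records with explicit id values
INSERT INTO regions (id, slug, title) VALUES
  (1, 'eu', 'Europe'),
  (2, 'au', 'Australia');

-- 3. Add the identity back and move its sequence past the highest inserted id
ALTER TABLE regions ALTER COLUMN id ADD GENERATED ALWAYS AS IDENTITY;
SELECT setval(pg_get_serial_sequence('regions', 'id'), (SELECT max(id) FROM regions));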
The following seeding script automates this approach using Node.js and the postgres.js client:
seedFixtures.ts
import { readdir } from 'node:fs/promises';
import postgres from 'postgres';
if (!process.env.FEATURE_TEST_SEEDING) {
throw new Error('Set the environment variable FEATURE_TEST_SEEDING to seed database with test data');
}
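// postgres.camel transforms column names to and from camelCase,
// so snake_case columns can be written as camelCase keys in fixtures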
const sql = postgres({
transform: postgres.camel,
});
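// Collect fixture files like 001-regions.fixture.ts and order them by
// their numeric prefix, so referenced tables are seeded first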
const testFixtures = (await readdir('./tables', { withFileTypes: true }))
.filter((entry) => {
return entry.isFile() && /^\d+-[^.]+\.fixture\.ts$/.test(entry.name);
})
.sort((a, b) => {
return parseInt(a.name.split('-')[0]!) - parseInt(b.name.split('-')[0]!);
});
for (const testFixture of testFixtures) {
const testFixtureModule = (await import(`../tables/${testFixture.name}`)) as {
[key: string]: { [key: string]: { id: number } };
};
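  // Derive the snake_case table name from the export name,
  // e.g. testRegions -> regions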
for (const [exportName, fixturesObj] of Object.entries(testFixtureModule)) {
const tableName = camelToSnake(
exportName
.replace(/^test/, '')
.replace(/^[A-Z]/, (letter) => letter.toLowerCase()),
) as string;
const fixtures = Object.values(fixturesObj);
if (fixtures.length > 0) {
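      // information_schema.columns reports whether the id column is an
      // identity column, i.e. whether it needs to be dropped and re-added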
const idFieldIsIdentity =
(
await sql<{ isIdentity: 'YES' | 'NO' }[]>`
SELECT
is_identity
FROM
information_schema.columns
WHERE
table_name = ${tableName}
AND column_name = 'id'
`
)[0]!.isIdentity === 'YES';
if (idFieldIsIdentity) {
await sql`
ALTER TABLE ${sql(tableName)}
ALTER COLUMN id
DROP IDENTITY
`;
}
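      // postgres.js expands the array of fixture objects into a
      // multi-row INSERT with the object keys as column names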
await sql`
INSERT INTO
${sql(tableName)} ${sql(fixtures)}
`;
if (idFieldIsIdentity) {
// Only configure GENERATED ALWAYS, to avoid inconsistencies with GENERATED BY DEFAULT
await sql`
ALTER TABLE ${sql(tableName)}
ALTER COLUMN id
ADD GENERATED ALWAYS AS IDENTITY
`;
// Reset sequence to the next record id, to allow for new
// record inserts with the sequentially generated index
// (PRIMARY KEY GENERATED ALWAYS AS IDENTITY)
await sql`
SELECT
setval(
pg_get_serial_sequence(
${tableName},
'id'
),
(
SELECT
max(id)
FROM
${sql(tableName)}
)
)
`;
}
console.log(`✔️ Inserted ${fixtures.length} records into ${tableName}`);
}
}
}
await sql.end();
console.log('Done syncing test seeding fixtures to database');
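// Type-level version of camelToSnake below: recursively walks the string
// type, prefixing each uppercase letter with an underscore and lowercasing it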
type CamelToSnake<
S extends string,
Result extends string = '',
> = S extends `${infer First}${infer Rest}`
? First extends Capitalize<First>
? CamelToSnake<Rest, `${Result}_${Lowercase<First>}`>
: CamelToSnake<Rest, `${Result}${First}`>
: Result;
function camelToSnake<CamelCaseString extends string>(
camelCaseString: CamelCaseString,
): CamelToSnake<CamelCaseString> {
return camelCaseString.replace(
/[A-Z]/g,
(letter) => `_${letter.toLowerCase()}`,
) as CamelToSnake<CamelCaseString>;
}

Fixture files look like this:
tables/001-regions.fixture.ts
type Region = {
id: number;
slug: string;
title: string;
};
export const testRegions = {
europe: {
id: 1,
slug: 'eu',
title: 'Europe',
},
australia: {
id: 2,
slug: 'au',
title: 'Australia',
},
} as const satisfies { [key: string]: Region };

This allows the id values to be used in other tables as foreign keys, e.g. testRegions.europe.id could be imported in tables/002-campuses.fixture.ts to use as the value for the foreign key field campuses.region_id.
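On the SQL side, such a foreign key relationship might look like the following sketch (the campuses table is hypothetical here; all columns other than region_id are assumed for illustration):

CREATE TABLE campuses (
  id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  -- title is an assumed column for illustration
  title text NOT NULL,
  -- Foreign key referencing the regions records seeded above
  region_id integer NOT NULL REFERENCES regions (id)
);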
To upgrade from macOS Homebrew PostgreSQL 17 to PostgreSQL 18, enable checksums on the existing cluster, upgrade the cluster, and update the paths:
First, stop PostgreSQL 17, e.g. via brew services stop postgresql@17, or Ctrl-C if it's running in the foreground.
Then install PostgreSQL 18 and enable data checksums on your v17 cluster, so that it matches the new v18 cluster Homebrew created with checksums enabled by default (pg_upgrade requires the checksum settings of both clusters to match):
brew install postgresql@18
"$(brew --prefix postgresql@17)/bin/pg_checksums" \
--pgdata="$(brew --prefix)/var/postgresql@17" \
--enable

Next, create a temporary working directory for the pg_upgrade output, run a dry run first, then run the actual upgrade:
mkdir -p "$HOME/pg-upgrade-17-to-18"
cd "$HOME/pg-upgrade-17-to-18"
"$(brew --prefix postgresql@18)/bin/pg_upgrade" \
--old-bindir="$(brew --prefix postgresql@17)/bin" \
--new-bindir="$(brew --prefix postgresql@18)/bin" \
--old-datadir="$(brew --prefix)/var/postgresql@17" \
--new-datadir="$(brew --prefix)/var/postgresql@18" \
--check
"$(brew --prefix postgresql@18)/bin/pg_upgrade" \
--old-bindir="$(brew --prefix postgresql@17)/bin" \
--new-bindir="$(brew --prefix postgresql@18)/bin" \
--old-datadir="$(brew --prefix)/var/postgresql@17" \
--new-datadir="$(brew --prefix)/var/postgresql@18"

Update Homebrew's links to make the v18 binaries accessible via PATH, update PGDATA from postgresql@17 to postgresql@18 in the shell rc file, and set the PostgreSQL timezone settings to UTC:
brew link --force postgresql@18
perl -pi -e 's/postgresql\@17/postgresql\@18/g' "$HOME/$([[ $SHELL == *"zsh" ]] && echo ".zshrc" || echo ".bash_profile")"
source "$HOME/$([[ $SHELL == *"zsh" ]] && echo ".zshrc" || echo ".bash_profile")"
perl -i -pe "s/^[#\\s]*(timezone|log_timezone)\\s*=.+$/\\1 = 'UTC'/" "$PGDATA/postgresql.conf"

Start PostgreSQL 18, e.g. via brew services start postgresql@18, or run postgres in the foreground.
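Once the server is running, the new settings can be verified from psql, for example:

SHOW data_checksums;
SHOW timezone;
SHOW log_timezone;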
Finally, refresh statistics, verify the upgrade, remove the old cluster, and uninstall PostgreSQL 17:
vacuumdb --all --analyze-in-stages --missing-stats-only
vacuumdb --all --analyze-only
psql --dbname=postgres --command "SELECT version();"
cd "$HOME/pg-upgrade-17-to-18"
./delete_old_cluster.sh
rm -r "$HOME/pg-upgrade-17-to-18"
brew uninstall postgresql@17