PostgreSQL Reference Guide: Administration, Configuration, and Troubleshooting
Running PostgreSQL with Docker
Quick Start with Docker
Launch a PostgreSQL 16 instance with Docker:
# Run PostgreSQL 16 with Alpine Linux (lightweight)
docker run -it --name postgres -e POSTGRES_PASSWORD=pwd -p 5432:5432 postgres:16-alpine
For more persistent setups:
# With volume mounting for data persistence
docker run --name pgsql -p 5432:5432 \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=pwd \
-e POSTGRES_DB=postgres \
-v pgdata:/var/lib/postgresql/data \
-d postgres:16-alpine
User and Database Management
Creating Users and Databases
Create a new user and database with full privileges:
-- Create a test user and database
CREATE USER testuser WITH PASSWORD 'testuser';
CREATE DATABASE testdb ENCODING 'UTF8' OWNER testuser;
GRANT ALL PRIVILEGES ON DATABASE testdb TO testuser;
ALTER ROLE testuser WITH LOGIN;
Another example:
-- Create a demo user and database
CREATE USER demo WITH PASSWORD 'pwd';
CREATE DATABASE demo_db ENCODING 'UTF8' OWNER demo;
GRANT ALL PRIVILEGES ON DATABASE demo_db TO demo;
ALTER ROLE demo WITH LOGIN;
Enabling Extensions
Enable the UUID extension for generating UUIDs:
-- Connect to the database first
\connect testdb
-- Enable the UUID extension
CREATE EXTENSION "uuid-ossp";
Backup and Restore
Restoring from SQL Files
Restore a database from a SQL backup file:
# Basic restore
psql -U username -d database_name -f objects.sql
# Restore with error handling (stops on first error)
psql -U username -d database_name --set ON_ERROR_STOP=on -f backupfile
Working with pg_dump Files
Restore a database from a pg_dump file:
# Standard restore
pg_restore --dbname=testdb <dump_file>
# Restore with different owner
pg_restore --dbname=testdb --no-owner --role=new-owner <dump_file>
Creating Database Backups
Example backup script that dumps a database and uploads it to S3-compatible storage:
#!/usr/bin/env bash
# Navigate to home directory and load environment variables
cd /home/ubuntu
source .envrc
# Create a compressed binary format dump
pg_dump -Fc --no-acl -h localhost -U <username> -w -f /home/ubuntu/pgdb.dump <database_name>
# Upload to S3-compatible storage with timestamp
/usr/local/bin/mc cp /home/ubuntu/pgdb.dump s3scw/backups/postgres-pgdb-$(date --utc +%Y-%m-%d-%H-%M-%S)
This script:
- Creates a compressed custom-format dump (-Fc) without ACL entries (--no-acl)
- Saves it locally as /home/ubuntu/pgdb.dump
- Uploads it to S3-compatible storage with a timestamp in the filename
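The timestamped object name built in the upload step can be sketched in isolation. The bucket path s3scw/backups and the postgres-pgdb prefix are taken from the script above; `date -u` is the portable spelling of `date --utc`:

```shell
#!/usr/bin/env sh
# Build the timestamped destination name used by the upload step.
TIMESTAMP=$(date -u +%Y-%m-%d-%H-%M-%S)
DEST="s3scw/backups/postgres-pgdb-${TIMESTAMP}"
echo "$DEST"
```

Running this prints something like s3scw/backups/postgres-pgdb-2024-01-31-14-05-09, which is what the mc cp command receives as its destination.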
User Management
Changing User Passwords
To change a user’s password:
-- Change password for a specific role
ALTER ROLE xyz WITH PASSWORD 'secrets';
Development and Testing
Running PostgreSQL Locally
Quick command to run PostgreSQL in a Docker container:
# Run PostgreSQL in detached mode
docker run --name pgsql -p 5432:5432 \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=pwd \
-e POSTGRES_DB=postgres \
-d postgres:16
Debugging in Kubernetes
Create a temporary pod with PostgreSQL client tools for debugging:
# Create a temporary pod with PostgreSQL client
kubectl run -i --tty --rm postgresdebug \
--image=alpine:3.17 \
--restart=Never \
-- sh
# Inside the pod, install PostgreSQL client
apk add postgresql-client
# Now you can use psql to connect to your database
# psql -h <host> -U <username> -d <database>
Connection Troubleshooting
Monitoring Active Connections
View connections grouped by client address and database:
-- Group connections by client address and database
SELECT client_addr, datname, COUNT(*)
FROM pg_stat_activity
GROUP BY client_addr, datname;
Terminating Problematic Connections
Terminate connections from a specific IP address:
-- Terminate all connections from a specific IP address
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE client_addr = '10.201.0.56';
Connection Statistics
Check total active connections:
-- Count all active connections
SELECT COUNT(1) FROM pg_stat_activity;
-- Show maximum allowed connections
SHOW max_connections;
-- Show reserved superuser connections
SHOW superuser_reserved_connections;
Detailed Connection Information
View detailed information about all connections:
-- Get detailed connection information
SELECT
pid,
state,
client_addr,
client_hostname,
query_start,
query
FROM pg_stat_activity
ORDER BY pid;
-- Count connections by client address
SELECT client_addr, COUNT(1)
FROM pg_stat_activity
GROUP BY client_addr;
-- Group by connection state and user
SELECT
DISTINCT state,
client_addr,
client_hostname,
usename,
COUNT(*)
FROM pg_stat_activity
GROUP BY state, client_addr, client_hostname, usename;
PSQL Command Line Tips
Switching Users and Databases
In the psql command line client, you can switch between users and databases:
-- Switch to a different user while keeping the same database
\c - username
-- Switch to a different database and user
\c database_name username
PostgreSQL 15 Permission Changes
Public Schema Permission Issue
PostgreSQL 15 introduced a significant change to default permissions. Previously, users with database creation privileges automatically had table creation rights in the public schema. This is no longer the case.
The Problem
Users now encounter this error when trying to create tables:
ERROR: permission denied for schema public
The Solution
When creating a new user and database in PostgreSQL 15+, you must explicitly grant schema permissions:
-- Create database and user
CREATE DATABASE example_db;
CREATE USER example_user WITH ENCRYPTED PASSWORD 'Sup3rS3cret';
-- Grant database privileges
GRANT ALL PRIVILEGES ON DATABASE example_db TO example_user;
-- Connect to the new database
\c example_db postgres
-- NEW STEP: Grant schema privileges (required in PostgreSQL 15+)
GRANT ALL ON SCHEMA public TO example_user;
This last step is now required in PostgreSQL 15 and later versions.
Source: Stack Overflow Answer
Schema Management
Viewing Schema Ownership
Query to list all schemas with their owners:
-- List all schemas with their owners
SELECT
r.rolname AS schema_owner,
ns.nspname AS schema_name,
ns.oid AS schema_id
FROM
pg_namespace ns
JOIN
pg_roles r ON ns.nspowner = r.oid
ORDER BY
schema_name;
Closing Database Connections
To terminate all connections to a specific database (useful before dropping a database):
-- Terminate all connections to a specific database except your own
SELECT
pg_terminate_backend(pid)
FROM
pg_stat_activity
WHERE
-- Don't kill my own connection!
pid <> pg_backend_pid()
-- Only target connections to the specified database
AND datname = 'database_name';
This is particularly useful when you need to perform maintenance operations that require exclusive access to a database.
Reassigning Object Ownership
To transfer ownership of all database objects owned by one role to another:
-- Transfer ownership of all objects from one role to another
REASSIGN OWNED BY old_role [, ...] TO new_role;
This is useful when removing users or reorganizing database ownership.
Schema Privileges Management
PostgreSQL uses schemas to organize database objects. Understanding schema privileges is crucial for proper security:
Basic Schema Privileges
-- Allow users to access objects in a schema
GRANT USAGE ON SCHEMA schema_name TO role_name;
-- Allow users to create objects in a schema
GRANT CREATE ON SCHEMA schema_name TO user_name;
Schema Operations
-- Create a new schema with specified owner
CREATE SCHEMA IF NOT EXISTS demo AUTHORIZATION demo;
-- Change schema owner
ALTER SCHEMA schema_name OWNER TO { new_owner | CURRENT_USER | SESSION_USER };
-- Drop a schema
DROP SCHEMA schema_name;
Listing Schemas
In psql, list all schemas:
\dn
Note: Prior to PostgreSQL 15, every user had CREATE and USAGE privileges on the public schema by default. This changed in PostgreSQL 15, as discussed earlier.
Setting the Schema Search Path
The search path determines which schemas PostgreSQL looks in when resolving unqualified object names:
-- Set the search path for the current session
SET search_path TO app, public;
-- View the current search path
SHOW search_path;
Managing Functions and Extensions
Listing Defined Functions
To list all defined functions in the database:
-- List all functions
\df
Managing Extensions and Their Schemas
Create an extension in a specific schema:
-- Create the UUID extension in the 'app' schema
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" SCHEMA app;
View extension locations:
-- List all extensions and their schemas
SELECT
e.extname AS extension_name,
n.nspname AS schema_name,
extrelocatable AS can_be_relocated
FROM
pg_catalog.pg_extension e
JOIN
pg_catalog.pg_namespace n ON n.oid = e.extnamespace;
Move an extension to a different schema:
-- Move the UUID extension to the public schema
ALTER EXTENSION "uuid-ossp" SET SCHEMA public;
Best Practices for Extension Management
By default, CREATE EXTENSION "uuid-ossp" installs the extension into the public schema. Here’s a better approach for managing extensions:
Creating a Dedicated Extensions Schema
-- Create a dedicated schema for extensions
CREATE SCHEMA extensions;
-- Grant usage permissions to all users
GRANT USAGE ON SCHEMA extensions TO public;
-- Grant execute permissions on all functions
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA extensions TO public;
-- Set up default privileges for future extensions
ALTER DEFAULT PRIVILEGES IN SCHEMA extensions
GRANT EXECUTE ON FUNCTIONS TO public;
ALTER DEFAULT PRIVILEGES IN SCHEMA extensions
GRANT USAGE ON TYPES TO public;
Installing Extensions in the Dedicated Schema
-- Install extensions in the dedicated schema
CREATE EXTENSION unaccent SCHEMA extensions;
Configuring the Search Path
-- Include the extensions schema in your search path
SET search_path = my_schema, extensions;
Making Settings Permanent
To apply these settings to all new databases:
- Connect to the template1 database
- Run the schema creation and permission commands
- Set the default search path in postgresql.conf:
search_path = '"$user", public, extensions'
Or use ALTER SYSTEM:
ALTER SYSTEM SET search_path = '"$user", public, extensions';
Granting Public Schema Access to Specific Users
-- Grant schema usage to a specific user/role
GRANT USAGE ON SCHEMA public TO app;
-- Grant execute permissions on all functions
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO app;
-- Set default privileges for future functions
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT EXECUTE ON FUNCTIONS TO app;
-- Set default privileges for future types
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT USAGE ON TYPES TO app;
-- Install extension in public schema
CREATE EXTENSION "uuid-ossp" SCHEMA public;
This approach ensures that the specified user/role has access to all current and future functions and types in the public schema.
Schema Management Best Practices
Extension Installation Guidelines
Where to Install Extensions
- Avoid pg_catalog: Never install extensions into the pg_catalog schema unless they're specifically designed for it. Modifying system catalogs can cause serious problems.
- Understanding the public schema: The public schema is not special; it's just a default schema included in standard distributions. Some DBAs avoid using it entirely or even delete it.
- Extension placement: Extensions are installed into the current schema by default, unless specified otherwise:
-- From PostgreSQL documentation:
-- schema_name: The name of the schema in which to install the extension's objects,
-- given that the extension allows its contents to be relocated.
CREATE EXTENSION some_extension SCHEMA schema_name;
Note: Extensions themselves are not considered to be within any schema - they have unqualified names that must be unique database-wide. However, the objects belonging to the extension can be within schemas.
Recommended Approach
A recommended practice is to create a dedicated extensions schema:
-- Create extensions schema
CREATE SCHEMA extensions;
-- Set search path in postgresql.conf
-- search_path = '"$user", public, extensions'
-- Or for a simpler approach
-- search_path = 'public, extensions'
-- Install extensions in the dedicated schema
CREATE EXTENSION some_extension SCHEMA extensions;
This approach provides several benefits:
- Clean separation between your own functions and extension objects
- Ability to “hide” extension objects behind same-named objects in the public schema
- Consistent organization across databases
The key is to decide on a schema management strategy and apply it consistently across your databases.
Schemas and the Search Path
Qualified names are tedious to write, and it’s often best not to wire a particular schema name into applications anyway. Therefore tables are often referred to by unqualified names, which consist of just the table name. The system determines which table is meant by following a search path, which is a list of schemas to look in. The first matching table in the search path is taken to be the one wanted. If there is no match in the search path, an error is reported, even if matching table names exist in other schemas in the database.
The ability to create like-named objects in different schemas complicates writing a query that references precisely the same objects every time. It also opens up the potential for users to change the behavior of other users’ queries, maliciously or accidentally. Due to the prevalence of unqualified names in queries and their use in PostgreSQL internals, adding a schema to search_path effectively trusts all users having CREATE privilege on that schema. When you run an ordinary query, a malicious user able to create objects in a schema of your search path can take control and execute arbitrary SQL functions as though you executed them.
The first schema named in the search path is called the current schema. Aside from being the first schema searched, it is also the schema in which new tables will be created if the CREATE TABLE command does not specify a schema name.
To show the current search path, use the following command:
SHOW search_path;
In the default setup this returns:
 search_path
-----------------
 "$user", public
The first element specifies that a schema with the same name as the current user is to be searched. If no such schema exists, the entry is ignored. The second element refers to the public schema that we have seen already.
Show current connection details in psql:
\conninfo
backup
```sql in powercard_db as postgres user — make sure everybody can use everything in the public schema GRANT USAGE ON SCHEMA public TO app; GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO app; — include future extensions ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT EXECUTE ON FUNCTIONS TO app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON TYPES TO app; ```
Test User with "SET ROLE" Permission
CREATE ROLE test_role WITH PASSWORD 'pwd';
CREATE DATABASE test_db ENCODING 'UTF8' OWNER test_role;
GRANT ALL PRIVILEGES ON DATABASE test_db TO test_role;
GRANT CREATE ON DATABASE test_db TO test_role;
ALTER ROLE test_role WITH LOGIN;
ALTER ROLE test_role WITH CREATEROLE;
\c test_db test_role
CREATE ROLE test_app WITH PASSWORD 'pwd';
-- Very important: this lets test_role run SET ROLE test_app
GRANT test_app TO test_role;
CREATE SCHEMA IF NOT EXISTS test_app AUTHORIZATION test_app;
-- Cleanup
DROP SCHEMA test_app;
DROP ROLE test_app;
DROP DATABASE test_db;
DROP ROLE test_role;
Privileges
-- ACCESS DB
REVOKE CONNECT ON DATABASE example_db FROM PUBLIC;
GRANT CONNECT ON DATABASE example_db TO user;
-- ACCESS SCHEMA
REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT USAGE ON SCHEMA public TO user;
-- ACCESS TABLES
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM PUBLIC;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO read_only;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO read_write;
GRANT ALL ON ALL TABLES IN SCHEMA public TO admin;
Create Schema with a Given Role
CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_specification;
The role name of the user who will own the new schema. If omitted, defaults to the user executing the command. To create a schema owned by another role, you must be able to SET ROLE to that role.
Updating an Extension's Schema
After moving an extension to another schema, update every table default that references one of its functions so it uses the correct schema-qualified path. For one table and one column:
ALTER TABLE foo2 ALTER COLUMN uid SET DEFAULT public.gen_random_uuid();
Allowing Foreign Key Creation
Two permissions are needed before a role can create foreign keys that reference tables in another schema:
GRANT REFERENCES ON ALL TABLES IN SCHEMA schema_name TO role_name;
GRANT USAGE ON SCHEMA schema_name TO role_name;
Cluster Deployment with CloudNativePG
kubectl apply --server-side -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.23/releases/cnpg-1.23.2.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-demo
spec:
  instances: 3
  imageName: ghcr.io/cloudnative-pg/postgresql:16.3-10-bullseye
  storage:
    storageClass: standard
    size: 10Gi
Permission Checks
Table permissions:
SELECT * FROM information_schema.role_table_grants WHERE grantee = 'YOUR_USER';
Ownership:
SELECT * FROM pg_tables WHERE tableowner = 'YOUR_USER';
Schema permissions:
SELECT r.usename AS grantor,
       e.usename AS grantee,
       nspname,
       privilege_type,
       is_grantable
FROM pg_namespace
JOIN LATERAL (SELECT * FROM aclexplode(nspacl) AS x) a ON true
JOIN pg_user e ON a.grantee = e.usesysid
JOIN pg_user r ON a.grantor = r.usesysid
WHERE e.usename = 'YOUR_USER';
Role Check
\du
The \du meta-command reads from the pg_roles and pg_auth_members catalogs (see also the pg_user view); an equivalent query:
SELECT r.rolname, r.rolsuper, r.rolinherit,
r.rolcreaterole, r.rolcreatedb, r.rolcanlogin,
r.rolconnlimit, r.rolvaliduntil,
ARRAY(SELECT b.rolname
FROM pg_catalog.pg_auth_members m
JOIN pg_catalog.pg_roles b ON (m.roleid = b.oid)
WHERE m.member = r.oid) as memberof
, r.rolreplication
, r.rolbypassrls
FROM pg_catalog.pg_roles r
WHERE r.rolname !~ '^pg_'
ORDER BY 1;
Count All Records in All Databases
#!/usr/bin/env bash
set -e
# Set your database connection parameters
HOST="<YOUR_HOST>"
PASSWORD="<YOUR_PASSWORD>"
# The command you want to run in each database
# This command generates a list of SQL statements to count rows in each table
COMMAND="select 'SELECT count(*) from ' ||table_name||';' from information_schema.tables WHERE table_schema = 'public' order by table_name;"
# Retrieve the list of databases, excluding template databases
DATABASES=$(PGPASSWORD=$PASSWORD psql -t -U postgres -h $HOST -c "SELECT datname FROM pg_database WHERE datistemplate = false and datname not like 'rdsadmin' and datname not like 'cloudsqladmin' order by datname;")
# Loop through each database and execute your command
for DB in $DATABASES; do
echo "Going in database: $DB"
IFS=';'
TABLES=$(PGPASSWORD=$PASSWORD psql -t -U postgres -h $HOST -d "$DB" -c "$COMMAND")
# Execute each generated count query
for TABLE in $TABLES; do
echo "$TABLE(database: $DB)"
PGPASSWORD=$PASSWORD psql -t -U postgres -h $HOST -d "$DB" -c "$TABLE"
done
done
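The loop above depends on IFS=';' to split the generated COUNT statements. That splitting step can be exercised on its own with a hard-coded result string; the table names users and orders here are invented for illustration, standing in for what psql -t would return:

```shell
#!/usr/bin/env sh
# Simulated output of the COUNT-statement generator (normally produced by psql -t).
TABLES="SELECT count(*) from users; SELECT count(*) from orders;"
IFS=';'
COUNT=0
for TABLE in $TABLES; do
  # Skip empty fields produced by stray delimiters
  [ -n "$TABLE" ] || continue
  COUNT=$((COUNT + 1))
  echo "query $COUNT:$TABLE"
done
```

Each iteration receives one complete SELECT statement, which is why the real script can pass $TABLE straight to psql -c.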
Find Tables Containing a Column
SELECT table_name FROM information_schema.columns WHERE column_name = 'your_column_name';
UUID Column Reset
-- Set the column to NULL
UPDATE your_table SET department_uuid = NULL WHERE some_condition;
-- Or set it to the nil UUID
UPDATE your_table SET department_uuid = '00000000-0000-0000-0000-000000000000' WHERE some_condition;
List Tables in a Schema
\dt public.*
Show a table's structure:
\d public.users
Count Table Rows Across Multiple Schemas
#!/usr/bin/env bash
set -e
# Set your database connection parameters
HOST="<YOUR_HOST>"
PASSWORD="<YOUR_PASSWORD>"
# The command you want to run in each database
# This command generates a list of SQL statements to count rows in each table across multiple schemas
COMMAND="select 'SELECT count(*) from ' ||table_schema||'.'||table_name||';' from information_schema.tables WHERE table_schema in ('public', 'schema1', 'schema2', 'schema3') order by table_schema, table_name;"
# Retrieve the list of databases, excluding template databases
DATABASES=$(PGPASSWORD=$PASSWORD psql -t -U postgres -h $HOST -c "SELECT datname FROM pg_database WHERE datistemplate = false and datname not like 'rdsadmin' and datname not like 'cloudsqladmin' order by datname;")
# Loop through each database and execute your command
for DB in $DATABASES; do
echo "Going in database: $DB"
IFS=';'
TABLES=$(PGPASSWORD=$PASSWORD psql -t -U postgres -h $HOST -d "$DB" -c "$COMMAND")
# Execute each generated count query
for TABLE in $TABLES; do
echo "$TABLE(database: $DB)"
PGPASSWORD=$PASSWORD psql -t -U postgres -h $HOST -d "$DB" -c "$TABLE"
done
done
Ensure You Are Connecting to the Primary Node in a Cluster Environment
• Confirm your application’s connection string or load balancer configuration is directing write requests to the primary node.
Check Primary Node: Run the following query to identify the primary node; it returns false on the primary and true on a replica:
SELECT pg_is_in_recovery();
kubectl get pods -l app.kubernetes.io/name=postgresql
kubectl exec -it <primary-pod-name> -- psql -c "SELECT pg_is_in_recovery();"
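The t/f value returned by pg_is_in_recovery() can be mapped to a readable label in shell. The RESULT value below is hard-coded for illustration; in practice it would come from the psql call above:

```shell
#!/usr/bin/env sh
# pg_is_in_recovery() prints 'f' on the primary and 't' on a replica.
# Stand-in for: RESULT=$(psql -tAc "SELECT pg_is_in_recovery();")
RESULT="f"
case "$RESULT" in
  f) ROLE="primary" ;;
  t) ROLE="replica" ;;
  *) ROLE="unknown" ;;
esac
echo "node role: $ROLE"
```

A wrapper like this is handy in health checks that must refuse to send writes to a replica.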
Install psql
brew install postgresql@16
pg_dump Debug Pod
kubectl run -i --tty --rm pgdump --image=alpine:3.17 --restart=Never -- sh