pgmonkey: One Config File to Rule All Your PostgreSQL Connections

Summary: pgmonkey is a Python library that unifies PostgreSQL connection management — normal, pooled, async, and async-pooled — behind a single YAML config file and a clean API. Now at v3, it has grown from a configuration wrapper into a production-ready connection layer with caching, lifecycle management, and a CLI that does more than you’d expect.

The Problem Nobody Talks About

PostgreSQL connection management in Python isn’t hard. It’s tedious. And tedious things have a way of going wrong quietly.

You start a project with a simple synchronous connection. A few months in, someone needs connection pooling for a high-traffic endpoint. Then the async rewrite lands, and suddenly you’re maintaining two different connection patterns with two different configuration approaches. Before long, you’ve got connection logic scattered across modules, credentials duplicated in multiple files, and SSL settings that may or may not match between your sync and async code paths.

The individual pieces all work fine. psycopg is excellent. Connection pooling is well-understood. Async support in Python is mature. The problem isn’t any single piece — it’s the glue between them. That’s where pgmonkey steps in.


What pgmonkey Actually Does

At its core, pgmonkey wraps PostgreSQL connection management into a single, consistent interface. You write one YAML configuration file. You call one API. You specify which connection type you want, and pgmonkey handles the rest.

from pgmonkey import PGConnectionManager

manager = PGConnectionManager()

# Four connection types behind one call; only the type parameter changes.
conn = manager.get_database_connection('config.yaml', 'normal')            # plain sync
conn = manager.get_database_connection('config.yaml', 'pool')              # sync, pooled
conn = await manager.get_database_connection('config.yaml', 'async')       # async
conn = await manager.get_database_connection('config.yaml', 'async_pool') # async, pooled



Same config file. Same manager. Four different connection types. The connection type is just a parameter — you don’t rewire your configuration or switch libraries. That’s the core idea, and it’s a good one.

pgmonkey builds on psycopg v3 and psycopg_pool — the modern PostgreSQL adapter and its official pooling companion. It doesn’t replace them. It orchestrates them behind a unified interface so you can stop thinking about connection plumbing and start thinking about your application.


One YAML File, Zero Guesswork

The configuration story is where pgmonkey really earns its keep. A single YAML file holds everything: credentials, SSL certificates, keepalive tuning, pool sizing and lifecycle limits, and session-level PostgreSQL parameters for async connections. All four connection types share the same connection_settings block, and each gets its own section for type-specific behavior.

connection_type: 'normal'

connection_settings:
  user: 'app_user'
  password: 'password'
  host: 'db.example.com'
  port: '5432'
  dbname: 'myapp'
  sslmode: 'verify-full'
  sslcert: '/certs/client.crt'
  sslkey: '/certs/client.key'
  sslrootcert: '/certs/ca.crt'
  connect_timeout: '10'
  application_name: 'myapp'
  keepalives: '1'
  keepalives_idle: '60'
  keepalives_interval: '15'
  keepalives_count: '5'

pool_settings:
  min_size: 5
  max_size: 20
  timeout: 30
  max_idle: 300
  max_lifetime: 3600
  check_on_checkout: false

async_settings:
  idle_in_transaction_session_timeout: '5000'
  statement_timeout: '30000'
  lock_timeout: '10000'

async_pool_settings:
  min_size: 5
  max_size: 20
  timeout: 30
  max_idle: 300
  max_lifetime: 3600
  check_on_checkout: false



No more wondering whether your async connections use the same SSL settings as your sync ones. No more hunting across files to figure out which pool size applies where. It’s all in one place, and pgmonkey validates the configuration before it tries to connect.

Key point: A unified config isn’t just convenient — it eliminates an entire class of “works in dev, breaks in production” bugs caused by configuration drift between connection types.


Four Connection Types, One Mental Model

pgmonkey supports four connection paradigms, and switching between them is trivial:

| Connection Type | Use Case                                                 | Sync/Async |
|-----------------|----------------------------------------------------------|------------|
| normal          | Simple scripts, one-off queries, admin tasks             | Sync       |
| pool            | Web apps, APIs, anything with concurrent requests        | Sync       |
| async           | Async frameworks (FastAPI, aiohttp), I/O-bound workloads | Async      |
| async_pool      | High-concurrency async services                          | Async      |

You’re not learning four different libraries or four different configuration formats. You learn pgmonkey once, and the connection type becomes a parameter rather than an architectural decision.
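To see why "connection type as a parameter" simplifies things, here is a stdlib-only sketch of the dispatch idea. The factory functions and config keys are illustrative stand-ins, not pgmonkey's internals:

```python
# Sketch of the "connection type is just a parameter" pattern.
# The factories below are hypothetical stand-ins, not pgmonkey code.

def _normal(cfg):      return f"sync connection to {cfg['host']}"
def _pool(cfg):        return f"sync pool ({cfg['min_size']}-{cfg['max_size']}) to {cfg['host']}"
def _async(cfg):       return f"async connection to {cfg['host']}"
def _async_pool(cfg):  return f"async pool ({cfg['min_size']}-{cfg['max_size']}) to {cfg['host']}"

FACTORIES = {
    'normal': _normal,
    'pool': _pool,
    'async': _async,
    'async_pool': _async_pool,
}

def get_database_connection(config, connection_type):
    # One entry point; the type string selects the strategy.
    try:
        factory = FACTORIES[connection_type]
    except KeyError:
        raise ValueError(f"unknown connection type: {connection_type!r}")
    return factory(config)

config = {'host': 'db.example.com', 'min_size': 5, 'max_size': 20}
print(get_database_connection(config, 'pool'))
# sync pool (5-20) to db.example.com
```

Switching a service from `pool` to `async_pool` becomes a one-word change in the caller, which is exactly the ergonomics the library is after.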


Built for Production

What separates pgmonkey v3 from a simple configuration wrapper is the production machinery underneath.

Connection caching prevents pool storms — those moments when a burst of requests all try to create new connections simultaneously and overwhelm the database. pgmonkey caches connections with thread-safe locking, so concurrent requests share pools instead of fighting over them.

Lifecycle management handles the cleanup that’s easy to forget: closing pools on process exit, managing async pool lifetimes, and making sure resources don’t leak when your application shuts down. The kind of thing that works fine in development and causes connection exhaustion in production if you ignore it.
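A minimal sketch of the exit-time cleanup concern, using only the standard library's `atexit` hook. The pool registry here is illustrative, not pgmonkey's actual mechanism:

```python
import atexit

open_pools = []

class FakePool:
    # Illustrative pool object; real pools hold server connections.
    def __init__(self, name):
        self.name = name
        self.closed = False
        open_pools.append(self)

    def close(self):
        self.closed = True

def close_all_pools():
    # Registered to run at interpreter shutdown, so connections are
    # returned cleanly instead of leaking until the server reaps them.
    for pool in open_pools:
        if not pool.closed:
            pool.close()

atexit.register(close_all_pools)

pool = FakePool('app')
```

Forgetting this step rarely hurts in development, where processes are short-lived; in production it is how connection counts creep up until the server starts refusing new clients.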

Configuration validation catches problems before they reach your database server. Mistyped SSL modes, invalid pool sizes, missing required fields — pgmonkey flags these at startup rather than letting them surface as cryptic connection errors at 2 AM.
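The validation step might look something like this sketch. The specific checks and error messages are illustrative; pgmonkey's actual rules may differ:

```python
# Illustrative startup validation, not pgmonkey's actual checks.
VALID_SSLMODES = {'disable', 'allow', 'prefer', 'require', 'verify-ca', 'verify-full'}
REQUIRED_FIELDS = {'user', 'host', 'port', 'dbname'}

def validate(settings, pool=None):
    errors = []
    missing = REQUIRED_FIELDS - settings.keys()
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
    sslmode = settings.get('sslmode')
    if sslmode is not None and sslmode not in VALID_SSLMODES:
        errors.append(f"invalid sslmode: {sslmode!r}")
    if pool is not None:
        min_size, max_size = pool.get('min_size', 0), pool.get('max_size', 0)
        if not (0 <= min_size <= max_size):
            errors.append("pool sizes must satisfy 0 <= min_size <= max_size")
    return errors

# A typo'd sslmode gets flagged at startup, not at connect time.
print(validate({'user': 'app', 'host': 'db', 'port': '5432',
                'dbname': 'myapp', 'sslmode': 'verify-fulll'}))
```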

Worth noting: These aren’t exotic features. They’re the kind of production concerns that every team eventually builds in-house. pgmonkey just ships them out of the box.


Authentication That Just Works

PostgreSQL supports a range of authentication methods, and pgmonkey handles three of the most common ones:

  • Password-based: the standard username/password approach, configured in the YAML and passed through to the connection
  • SSL/TLS encryption: all six PostgreSQL modes from disable through verify-full, with certificate paths right in the config
  • Certificate-based authentication: client certificates for enterprise environments where passwords alone aren’t sufficient

The SSL certificate paths sit alongside your other connection settings — no separate configuration, no guessing which mode your production server expects.
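Under the hood, these settings map onto libpq connection parameters. A simplified sketch of turning the YAML block into a libpq-style conninfo string (real values containing spaces or quotes would need escaping, which this skips):

```python
def to_conninfo(settings):
    # libpq accepts space-separated key=value pairs; values with spaces
    # would need quoting, which this simplified sketch omits.
    return ' '.join(f"{k}={v}" for k, v in settings.items())

settings = {
    'user': 'app_user',
    'host': 'db.example.com',
    'port': '5432',
    'dbname': 'myapp',
    'sslmode': 'verify-full',
    'sslrootcert': '/certs/ca.crt',
}
print(to_conninfo(settings))
# user=app_user host=db.example.com port=5432 dbname=myapp sslmode=verify-full sslrootcert=/certs/ca.crt
```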


A CLI That Punches Above Its Weight

pgmonkey ships with a CLI that goes well beyond basic utility:

  • Config generation: pgmonkey pgconfig create produces a fully commented YAML template with sensible defaults
  • Connection testing: validate that your config actually connects, across all four connection types — invaluable when debugging SSL or firewall issues
  • Code generation: point pgmonkey at a config file and a connection type, and it generates working Python code targeting either pgmonkey’s own API or native psycopg/psycopg_pool — your choice
  • Server config recommendations: pgmonkey reads your client config and suggests matching postgresql.conf and pg_hba.conf entries
  • Live server audit: the --audit flag goes further, performing a read-only comparison of your server’s current settings against the recommendations — catching mismatches before they become production incidents
  • Data import/export: pgimport and pgexport commands for moving CSV data in and out of PostgreSQL tables

Key point: It’s rare for a client library to help you configure the server side too. The audit feature bridges the gap between “my app config is right” and “but the server isn’t set up for it.”


Who Benefits Most

pgmonkey isn’t for everyone, and that’s fine. It shines in specific situations:

  • Teams managing multiple services that each need PostgreSQL connections with consistent configuration
  • Projects that mix sync and async code, like a FastAPI backend using async pools alongside sync admin scripts, both reading from the same config
  • Flask and FastAPI developers who want battle-tested connection patterns without reinventing them — pgmonkey includes best-practice recipes for both frameworks
  • Environments with strict SSL/certificate requirements where getting the auth config wrong means the app doesn’t start
  • Anyone tired of writing the same connection boilerplate across projects

If you’re writing a one-off script that connects to a local database, plain psycopg is probably fine. But the moment you have multiple connection types, multiple environments, or a team that needs consistency, pgmonkey starts paying for itself.

It’s also worth noting what pgmonkey isn’t: it’s not an ORM. It doesn’t compete with SQLAlchemy or Django’s ORM. It manages connections — the layer underneath your ORM, or the direct psycopg access you use when an ORM is overkill. It slots in alongside what you’re already using. You adopt what you need and ignore what you don’t.


Closing Thoughts

Connection management is one of those problems that feels solved until it isn’t. You get by with manual setup until the project grows, the team grows, or the infrastructure requirements change. Then you’re debugging why async connections use different SSL settings than sync ones, or why the pool size in staging doesn’t match production, or why your connection pools keep getting exhausted under load.

pgmonkey addresses this by making the boring parts genuinely boring: one config file, one API, four connection types, production-ready caching and lifecycle management, done. It supports Python 3.10 through 3.13, builds on psycopg’s excellent foundation rather than replacing it, and adds practical tooling that saves real time.

For Python developers working with PostgreSQL, it’s worth a look. Install it with pip install pgmonkey, generate a config template, and see how it fits your workflow. The documentation lives at pgmonkey.net, with the source on GitHub.

This post only scratches the surface. There’s a lot more to cover — connection pooling strategies, async patterns in practice, SSL configuration walkthroughs, Flask and FastAPI integration, data import/export workflows, server auditing, and getting the most out of the CLI. An upcoming tutorial series will dig into all of it, step by step. Stay tuned.
