Let’s assume that we are starting a new system today. What database should we pick first?
My answer is simple: PostgreSQL. Start with SQL. Keep SQL. Add more capabilities when needed, without changing the database every quarter.
Start from SQL, then go to Aurora DSQL Link to heading
PostgreSQL gives us ACID transactions, joins, indexes, and a mature query planner. For many teams this is already enough for years.
But what if we need global scale and multi-Region architecture? Aurora DSQL keeps PostgreSQL compatibility and adds serverless distributed infrastructure. Same SQL mindset, bigger scale envelope.
Need NoSQL? PostgreSQL can do it too Link to heading
PostgreSQL jsonb allows document-style workloads in the same database.
You can keep semi-structured payloads, add JSON indexes, and still join with relational tables.
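As a minimal sketch (table and column names are made up for illustration), a `jsonb` column can carry the document payload while a GIN index keeps containment queries fast, and a plain join brings the relational side back in:

```sql
-- Hypothetical events table: relational columns plus a jsonb payload
CREATE TABLE events (
    id         bigserial PRIMARY KEY,
    user_id    bigint NOT NULL REFERENCES users (id),
    payload    jsonb  NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- GIN index so containment queries on the payload stay fast
CREATE INDEX events_payload_idx ON events USING gin (payload);

-- Document-style filter joined back to a relational table
SELECT u.email, e.payload->>'action' AS action
FROM events e
JOIN users u ON u.id = e.user_id
WHERE e.payload @> '{"source": "mobile"}';
```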
So yes, when somebody says “Now we need NoSQL”, in many cases PostgreSQL is already enough.
Need API access? RDS Data API can do it Link to heading
With Aurora + RDS Data API, we can execute SQL over HTTPS API calls, with IAM and Secrets Manager integration. This works well for serverless and event-driven systems where long-lived DB connections are not ideal.
And if you want a full backend platform around PostgreSQL, Supabase is another popular route: PostgREST/GraphQL APIs, auth, realtime, storage, and dashboard on top of Postgres.
Need graph? We can do it Link to heading
There are at least three ways:
- Use recursive CTEs (`WITH RECURSIVE`) for hierarchy and graph-like traversals.
- Use an extension-based approach (for example Apache AGE) for property-graph style queries.
- Use `pgRouting` (usually with PostGIS) for shortest-path and graph traversal problems.
Supabase has a practical write-up here: Postgres as a Graph Database: (Ab)using pgRouting
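The first option can be sketched in a few lines. Assuming a hypothetical self-referencing `employees` table, a recursive CTE walks the reporting chain from one root node:

```sql
-- Hypothetical self-referencing table for an org chart
CREATE TABLE employees (
    id         bigint PRIMARY KEY,
    name       text   NOT NULL,
    manager_id bigint REFERENCES employees (id)
);

-- Walk the reporting chain downward from one manager (id 42 here)
WITH RECURSIVE reports AS (
    SELECT id, name, manager_id, 1 AS depth
    FROM employees
    WHERE id = 42                      -- root of the traversal
  UNION ALL
    SELECT e.id, e.name, e.manager_id, r.depth + 1
    FROM employees e
    JOIN reports r ON e.manager_id = r.id
)
SELECT * FROM reports ORDER BY depth;
```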
Need vector search? We can do it Link to heading
With pgvector, PostgreSQL can store embeddings and run similarity search.
This is enough for recommendation, semantic search, and RAG-like retrieval, while transactional metadata stays in the same place.
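A minimal sketch of the pgvector flow, assuming the extension is installed and using a made-up `documents` table (the vector dimension depends on your embedding model):

```sql
-- Requires: CREATE EXTENSION vector;   (pgvector)
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(1536)       -- dimension matches your embedding model
);

-- Approximate nearest-neighbor index (HNSW, cosine distance)
CREATE INDEX documents_embedding_idx
    ON documents USING hnsw (embedding vector_cosine_ops);

-- Top 5 documents most similar to a query embedding passed as $1
SELECT id, content
FROM documents
ORDER BY embedding <=> $1::vector
LIMIT 5;
```

Because the embeddings live next to the transactional rows, the same query can also filter on ordinary columns before ranking by similarity.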
Need cronjobs, pub/sub, and background patterns? We can do it Link to heading
This is another area where people often add extra systems too early. PostgreSQL already gives us many useful building blocks:
- Cron-like jobs with `pg_cron` for periodic SQL tasks (cleanup, rollups, sync jobs).
- Lightweight pub/sub with `LISTEN` and `NOTIFY` for near-real-time app events.
- Durable queue workers with `FOR UPDATE SKIP LOCKED`.
- Outbox pattern: write business data and the event in one transaction, then publish safely from the outbox table.
- Retry and dead-letter behavior with status columns (`pending`, `processing`, `failed`) and retry counters.
- Idempotent job execution using unique keys, constraints, and advisory locks.
- Supabase path: Postgres changes can flow into Realtime subscriptions for client updates.
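The queue-worker bullet above can be sketched as follows, with a hypothetical `jobs` table carrying the status and retry columns mentioned earlier. `FOR UPDATE SKIP LOCKED` lets many workers poll concurrently without blocking on each other's claimed rows:

```sql
-- Hypothetical jobs table with status + retry columns
CREATE TABLE jobs (
    id        bigserial PRIMARY KEY,
    kind      text  NOT NULL,
    payload   jsonb NOT NULL,
    status    text  NOT NULL DEFAULT 'pending',  -- pending | processing | failed | done
    attempts  int   NOT NULL DEFAULT 0,
    run_after timestamptz NOT NULL DEFAULT now()
);

-- One worker iteration: claim the oldest runnable job;
-- rows locked by other workers are skipped, not waited on
BEGIN;
SELECT id, kind, payload
FROM jobs
WHERE status = 'pending' AND run_after <= now()
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- ...process the job in application code, then mark it done:
UPDATE jobs SET status = 'done' WHERE id = $1;   -- id claimed above
COMMIT;
```

On failure the worker would instead bump `attempts`, set `status = 'failed'` past a retry limit, and push `run_after` forward for backoff.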
Do we still sometimes need Kafka, SQS, or EventBridge? Yes, of course. But for many product workloads, starting from PostgreSQL patterns is simpler and faster.
Need backend for React app? Yes, also possible Link to heading
Typical pattern:
- React frontend
- API layer (Node/Express, Lambda, or GraphQL resolver)
- Aurora PostgreSQL or Aurora DSQL
- Optional RDS Data API for HTTP-based SQL calls
Reference: React App video
Also popular: React + Supabase client + Supabase Auth + Postgres database.
25 things you can do with PostgreSQL (and Aurora PostgreSQL-compatible options) Link to heading
- Core OLTP transactions with ACID guarantees
- Complex relational joins across normalized models
- Window-function analytics in SQL
- Recursive hierarchy traversal with `WITH RECURSIVE`
- Background worker queues with `FOR UPDATE SKIP LOCKED`
- Materialized views for precomputed reads
- Declarative partitioning for large tables
- Row-level security for multi-tenant isolation
- Logical replication and CDC pipelines
- Cross-database federation with `postgres_fdw`
- Document workloads via `json` and `jsonb`
- JSON field indexing with GIN
- Full-text search and ranking
- Event signaling with `LISTEN` and `NOTIFY`
- Generated columns for computed values
- API-driven SQL execution through RDS Data API
- IAM-authenticated data access patterns on AWS
- Lambda to Aurora integrations without connection pool management
- GraphQL resolvers backed by PostgreSQL (or the Supabase `pg_graphql` path)
- Geospatial queries with PostGIS
- Graph modeling via recursive SQL patterns
- Property graph extensions such as Apache AGE
- Vector similarity search with `pgvector`
- Hybrid AI workloads: metadata + embeddings + transactions
- Global, serverless PostgreSQL-compatible deployments with Aurora DSQL
PostgreSQL vs Supabase (quick compare) Link to heading
This part is important because these are not the same type of product:
- PostgreSQL = database engine
- Supabase = platform that uses PostgreSQL as the core database, plus API/auth/realtime/storage tooling
So “PostgreSQL vs Supabase” is usually not a hard either/or. In practice:
- Choose pure PostgreSQL when you want maximum low-level control and custom platform design.
- Choose Supabase when you want faster product delivery with managed developer tooling around PostgreSQL.
- Choose Aurora PostgreSQL/Aurora DSQL when your priority is AWS-native operations, IAM integration, and multi-Region managed architecture.
Is this GenAI? Yes Link to heading
“Is this GenAI? Yes. So here we will imagine what PostgreSQL can do in the future.”
These are our shared guesses, yours and mine:
- In-memory database architecture: most hot data and execution in memory, with durable S3-like backend layers for persistence and recovery.
- Global worldwide database at massive scale: near-native multi-Region active-active behavior as a default, not a premium edge case.
- Agent runtime inside PostgreSQL: AI agents hosted as database procedures/functions, close to the data, with transactional guarantees and policy controls.
Maybe not all of this will happen exactly in this form. But the direction is clear: PostgreSQL keeps absorbing new workloads without losing the SQL foundation.
Final take Link to heading
In practice, most teams do not need five different databases for five data patterns. PostgreSQL plus Aurora PostgreSQL-compatible options already cover most workloads:
- SQL
- NoSQL-like JSON
- API access
- Graph patterns
- Vector search
- Global distributed PostgreSQL-compatible infrastructure
So if you ask me where to start: start with PostgreSQL. Then add complexity only when you really need it.