Mastering PostgreSQL Read Replicas in Quarkus: Smart Read/Write Splitting with Hibernate Panache
Scale your Java applications with Quarkus multitenancy and PostgreSQL replication. Boost performance by routing reads to replicas and writes to the primary.
Database read replicas are powerful. They let you scale out your read-heavy workloads without hammering the primary database. But manually wiring two datasources can get messy.
Quarkus gives us a better option: multitenancy with tenant resolvers. By modeling primary and replica as separate “tenants,” we can use annotations to decide at runtime whether a request should use the read-only replica or the read-write primary.
This tutorial shows you how to configure PostgreSQL replication and implement a TenantResolver that automatically routes queries based on endpoint annotations.
How this differs from plain Hibernate
Hibernate natively handles read-only replicas by letting you configure separate SessionFactory (or EntityManagerFactory) instances: one pointing to the primary for reads and writes, the other to a replica with second-level cache disabled. In newer Hibernate versions, you can even open a read-only Session directly with .readOnly(true).initialCacheMode(CacheMode.IGNORE) to ensure no stale or non-replicated state leaks through the cache. Quarkus with Panache, however, abstracts away the direct use of Session or SessionFactory. Instead of manually opening read-only sessions, Panache encourages a higher-level repository pattern that hides entity manager management. That’s why, in Quarkus, the practical solution is configuring multiple datasources and routing between them (for example with a TenantResolver), rather than juggling raw Hibernate session factories.
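For comparison, here is a minimal sketch of that plain-Hibernate style, assuming a Hibernate version whose SessionBuilder exposes the readOnly and initialCacheMode options mentioned above, and a hypothetical replicaSessionFactory configured against the replica:
import java.util.List;
import org.hibernate.CacheMode;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Sketch only, not part of the Quarkus demo below: open a read-only
// session against a SessionFactory that points at the replica.
public List<Product> readProducts(SessionFactory replicaSessionFactory) {
    try (Session session = replicaSessionFactory.withOptions()
            .readOnly(true)                      // no dirty checking, no accidental writes
            .initialCacheMode(CacheMode.IGNORE)  // bypass the second-level cache
            .openSession()) {
        return session.createSelectionQuery("from Product", Product.class)
                .getResultList();
    }
}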
Environment Setup
This post is a little configuration-heavy. If you want to skip ahead to the implementation or start directly with the code, grab it from my GitHub repository.
We’ll start with a primary/replica PostgreSQL setup using a Compose file.
Create compose-devservices.yml; it goes into the Quarkus project root once you generate the project later:
services:
  postgres-primary:
    image: postgres:17
    container_name: postgres-primary
    environment:
      POSTGRES_DB: ecommerce
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin123
    ports:
      - "5432:5432"
    volumes:
      - ./init-primary.sql:/docker-entrypoint-initdb.d/init-primary.sql
      - ./postgresql.conf:/etc/postgresql/postgresql.conf
      - ./pg_hba.conf:/etc/postgresql/pg_hba.conf
      - primary_data:/var/lib/postgresql/data
    command: >
      postgres
      -c config_file=/etc/postgresql/postgresql.conf
      -c hba_file=/etc/postgresql/pg_hba.conf
    healthcheck:
      test: pg_isready -U admin -d ecommerce
      interval: 5s
      timeout: 3s
      retries: 3

  postgres-replica:
    image: postgres:17
    container_name: postgres-replica
    environment:
      POSTGRES_PASSWORD: replicator123
      POSTGRES_USER: replicator
      PGPASSWORD: replicator123
    ports:
      - "5433:5432"
    volumes:
      - replica_data:/var/lib/postgresql/data
    depends_on:
      - postgres-primary
    command: |
      bash -c "
      until pg_basebackup --pgdata=/var/lib/postgresql/data -R --slot=replication_slot --host=postgres-primary --port=5432 --username=replicator; do
        echo 'Waiting for primary to connect...'
        sleep 1s
      done; echo 'Backup done, starting replica...'; chown -R postgres:postgres /var/lib/postgresql/data; chmod 0700 /var/lib/postgresql/data; su postgres -c 'postgres'
      "

volumes:
  primary_data:
  replica_data:
PostgreSQL access control
Create pg_hba.conf and put it into the Quarkus project root as well:
# TYPE  DATABASE     USER        ADDRESS       METHOD
local   all          all                       trust
host    all          all         127.0.0.1/32  trust
host    all          all         ::1/128       trust
host    all          all         0.0.0.0/0     md5
host    replication  replicator  0.0.0.0/0     md5
The replication line at the bottom is what lets our replica connect to the primary as the replicator user.
This postgresql.conf is the configuration file that makes your primary database replication-ready and ensures the replica can catch up.
# Network settings
listen_addresses = '*'
# Replication settings
wal_level = replica
hot_standby = on
max_wal_senders = 10
max_replication_slots = 10
hot_standby_feedback = on
wal_keep_size = 64
max_wal_size = 1GB
Let’s walk through it line by line:
Network settings
listen_addresses = '*': By default, PostgreSQL only listens on localhost. Setting it to * makes it accept connections from any host. This is required so that your replica container can connect to the primary and pull WAL (Write Ahead Log) changes.
Replication settings
wal_level = replica: WAL (Write Ahead Log) is the transaction log PostgreSQL uses for durability. Setting wal_level to replica ensures enough information is written to WAL so that replicas can replay it. Without this, the standby server can’t replicate changes.
hot_standby = on: Allows replicas to run read-only queries while replaying WAL changes. If this is off, your replica would be unusable for reads.
max_wal_senders = 10: Defines how many concurrent WAL sender processes can stream data to replicas. A WAL sender process (short for Write Ahead Log sender) is a background process on the primary PostgreSQL server that streams database changes to a replica. Each replica needs one sender connection, so with 10 you can support up to 10 replicas.
max_replication_slots = 10: Replication slots prevent the primary from removing WAL segments that replicas haven’t consumed yet, so slow replicas don’t fall out of sync. Setting 10 means you can create up to 10 slots for different replicas.
hot_standby_feedback = on: Prevents queries on replicas from being canceled due to vacuum operations on the primary, helping long-running read queries survive without conflict. Be cautious: it can cause table bloat on the primary, because vacuum must retain rows that replica queries still need.
wal_keep_size = 64: Keeps at least 64 MB of WAL files on disk, even if not strictly needed. This acts as a buffer so that replicas that briefly disconnect don’t immediately fall out of sync.
max_wal_size = 1GB: The maximum size WAL can grow before forcing a checkpoint. A larger WAL reduces checkpoint frequency (good for performance) but increases disk usage.
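To confirm the settings are active once the primary is running, you can query them from psql on the primary:
-- Run on the primary to verify the replication-related settings took effect.
SELECT name, setting
FROM pg_settings
WHERE name IN ('wal_level', 'hot_standby', 'max_wal_senders',
               'max_replication_slots', 'hot_standby_feedback',
               'wal_keep_size', 'max_wal_size');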
Almost done with the configuration :) we still need to initialize the database.
This init-primary.sql is the script that bootstraps the primary database when your container starts for the first time. It ensures replication is enabled, permissions are correct, tables exist, and you have some test data to play with. Let’s break it down at a high level:
-- Create replication user
CREATE USER replicator REPLICATION LOGIN CONNECTION LIMIT 100 ENCRYPTED PASSWORD 'replicator123';
SELECT * FROM pg_create_physical_replication_slot('replication_slot');
-- Grant permissions for replicator user to access tables (needed for Hibernate validation)
GRANT CONNECT ON DATABASE ecommerce TO replicator;
GRANT USAGE ON SCHEMA public TO replicator;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO replicator;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO replicator;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO replicator;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON SEQUENCES TO replicator;
-- Domain tables
CREATE TABLE products (
id BIGSERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL,
price DECIMAL(10,2) NOT NULL,
description TEXT,
category VARCHAR(100),
stock_quantity INTEGER DEFAULT 0,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE orders (
id BIGSERIAL PRIMARY KEY,
customer_name VARCHAR(255) NOT NULL,
total_amount DECIMAL(10,2) NOT NULL,
status VARCHAR(50) DEFAULT 'PENDING',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE order_items (
id BIGSERIAL PRIMARY KEY,
order_id BIGINT REFERENCES orders(id),
product_id BIGINT REFERENCES products(id),
quantity INTEGER NOT NULL,
unit_price DECIMAL(10,2) NOT NULL
);
-- Create sequences for Hibernate
CREATE SEQUENCE order_items_SEQ START WITH 1 INCREMENT BY 50;
CREATE SEQUENCE orders_SEQ START WITH 1 INCREMENT BY 50;
CREATE SEQUENCE products_SEQ START WITH 1 INCREMENT BY 50;
-- Seed data
INSERT INTO products (name, price, description, category, stock_quantity) VALUES
('Laptop Pro', 1299.99, 'High-performance laptop', 'Electronics', 50),
('Wireless Mouse', 29.99, 'Bluetooth wireless mouse', 'Electronics', 200),
('Office Chair', 199.99, 'Ergonomic office chair', 'Furniture', 30);
INSERT INTO orders (customer_name, total_amount, status) VALUES
('John Doe', 1329.98, 'COMPLETED'),
('Jane Smith', 199.99, 'PENDING');
INSERT INTO order_items (order_id, product_id, quantity, unit_price) VALUES
(1, 1, 1, 1299.99),
(1, 2, 1, 29.99),
(2, 3, 1, 199.99);
-- Set the sequences to start after the seeded data so Hibernate-generated ids don't collide
SELECT setval('products_SEQ', (SELECT COALESCE(MAX(id), 0) + 1 FROM products));
SELECT setval('orders_SEQ', (SELECT COALESCE(MAX(id), 0) + 1 FROM orders));
SELECT setval('order_items_SEQ', (SELECT COALESCE(MAX(id), 0) + 1 FROM order_items));
Time to start coding:
Quarkus Multitenancy Setup
Generate a new Quarkus project:
mvn io.quarkus.platform:quarkus-maven-plugin:3.28.1:create \
-DprojectGroupId=org.acme \
-DprojectArtifactId=quarkus-read-replica \
-DclassName="org.acme.ProductResource" \
-Dpath="/products" \
-Dextensions="hibernate-orm-panache,jdbc-postgresql,rest-jackson"Application Properties
This application.properties is the core configuration that tells Quarkus how to connect to both the primary (read-write) and the replica (read-only) databases, and how to apply Hibernate ORM in a multitenant setup. Let’s walk through it section by section:
# Schema generation
# Only applies to the default tenant (the primary, in this case).
# In dev mode, Quarkus will drop all tables and recreate them at startup.
# Handy for demos, but in production you’d usually set this to none and
# manage migrations with Flyway or Liquibase.
quarkus.hibernate-orm.database.generation=drop-and-create
# Activates database-based multitenancy in Quarkus Hibernate ORM.
# Instead of every entity manager always using one datasource, Quarkus
# now chooses a datasource per request using your custom TenantResolver.
# This is what makes our @ReadWrite annotation possible: one tenant =
# primary, another = replica.
quarkus.hibernate-orm.multitenant=DATABASE
# Set the default tenant (read-write)
quarkus.hibernate-orm.datasource=read-write
# Default datasource (read-write)
quarkus.datasource.read-write.jdbc.url=jdbc:postgresql://localhost:5432/ecommerce
quarkus.datasource.read-write.username=admin
quarkus.datasource.read-write.password=admin123
quarkus.datasource.read-write.db-kind=postgresql
# Replica datasource (read-only)
quarkus.datasource.read-only.jdbc.url=jdbc:postgresql://localhost:5433/ecommerce?readOnly=true
quarkus.datasource.read-only.username=admin
quarkus.datasource.read-only.password=admin123
quarkus.datasource.read-only.db-kind=postgresql
In Quarkus multitenancy, the default tenant must point to the primary (read-write) database, because Hibernate ORM uses it for schema validation and generation during startup. If you set the replica as default, Quarkus would try to run schema operations against a read-only connection and fail. At runtime, this is not a problem: our TenantResolver takes over and routes queries to the replica unless an endpoint is explicitly marked with @ReadWrite. This way startup works reliably on the primary, while day-to-day reads scale out on the replica.
Now Quarkus knows about two datasources:
read-write → primary database
read-only → replica
Tenant Resolver
We’ll decide tenant selection at runtime with an annotation.
Marker Annotation
package com.example.tenant;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
@Target({ ElementType.METHOD, ElementType.TYPE })
@Retention(RetentionPolicy.RUNTIME)
public @interface ReadWrite {
}
Add @ReadWrite to any endpoint or class that should hit the primary database.
The @ReadWrite annotation belongs on resources rather than repositories because it controls the transactional context of an entire request, not just a single database operation. Just like @Transactional, it ensures that all repository calls within an endpoint run against the same database (primary or replica) and share the same transaction. Annotating repositories would either force you to create separate read/write repositories for every entity or add boilerplate per method, which quickly becomes unmanageable. By placing @ReadWrite on resources, you keep the decision at the request level, where it naturally belongs, while repositories remain focused on persistence logic without needing to know which database is being targeted.
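For example, a hypothetical admin-only resource (not part of the demo) could carry the annotation at class level, so every endpoint in it hits the primary:
package com.example.resource;

import com.example.entity.Product;
import com.example.repository.ProductRepository;
import com.example.tenant.ReadWrite;
import jakarta.inject.Inject;
import jakarta.transaction.Transactional;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.Response;

// Class-level @ReadWrite: the resolver below also checks the resource
// class, so all endpoints here are routed to the primary.
@ReadWrite
@Path("/admin/products")
public class AdminProductResource {

    @Inject
    ProductRepository productRepository;

    @POST
    @Transactional
    public Response create(Product product) {
        return Response.status(Response.Status.CREATED)
                .entity(productRepository.save(product)).build();
    }
}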
Resolver Implementation
package com.example.tenant;
import io.quarkus.hibernate.orm.PersistenceUnitExtension;
import io.quarkus.hibernate.orm.runtime.tenant.TenantResolver;
import io.quarkus.logging.Log;
import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.container.ResourceInfo;
@PersistenceUnitExtension
@RequestScoped
public class MyTenantResolver implements TenantResolver {
@Inject
ResourceInfo resource;
@Override
public String getDefaultTenantId() {
return "read-only";
}
@Override
public String resolveTenantId() {
String tenant = getDefaultTenantId();
if (resource != null && (resource.getResourceMethod().isAnnotationPresent(ReadWrite.class)
|| resource.getResourceClass().isAnnotationPresent(ReadWrite.class))) {
tenant = "read-write";
}
Log.infof("Resolved tenant: %s for %s", tenant,
resource != null ? resource.getResourceMethod().getName() : "unknown");
return tenant;
}
}
With this, any endpoint without @ReadWrite uses the replica.
Entities and Repository
A product entity:
package com.example.entity;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.PreUpdate;
import jakarta.persistence.Table;
@Entity
@Table(name = "products")
public class Product extends PanacheEntity {
@Column(nullable = false)
public String name;
@Column(nullable = false, precision = 10, scale = 2)
public BigDecimal price;
@Column(columnDefinition = "TEXT")
public String description;
@Column(length = 100)
public String category;
@Column(name = "stock_quantity")
public Integer stockQuantity = 0;
@Column(name = "created_at")
public LocalDateTime createdAt = LocalDateTime.now();
@Column(name = "updated_at")
public LocalDateTime updatedAt = LocalDateTime.now();
@PreUpdate
public void preUpdate() {
this.updatedAt = LocalDateTime.now();
}
}
An order entity:
package com.example.entity;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.util.List;
import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.OneToMany;
import jakarta.persistence.PreUpdate;
import jakarta.persistence.Table;
@Entity
@Table(name = "orders")
public class Order extends PanacheEntity {
@Column(name = "customer_name", nullable = false)
public String customerName;
@Column(name = "total_amount", nullable = false, precision = 10, scale = 2)
public BigDecimal totalAmount;
@Column(length = 50)
public String status = "PENDING";
@Column(name = "created_at")
public LocalDateTime createdAt = LocalDateTime.now();
@Column(name = "updated_at")
public LocalDateTime updatedAt = LocalDateTime.now();
@OneToMany(mappedBy = "orderId", fetch = FetchType.LAZY)
public List<OrderItem> items;
@PreUpdate
public void preUpdate() {
this.updatedAt = LocalDateTime.now();
}
}
An OrderItem entity:
package com.example.entity;
import java.math.BigDecimal;
import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.JoinColumn;
import jakarta.persistence.ManyToOne;
import jakarta.persistence.Table;
@Entity
@Table(name = "order_items")
public class OrderItem extends PanacheEntity {
@Column(name = "order_id", nullable = false)
public Long orderId;
@Column(name = "product_id", nullable = false)
public Long productId;
@Column(nullable = false)
public Integer quantity;
@Column(name = "unit_price", nullable = false, precision = 10, scale = 2)
public BigDecimal unitPrice;
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "product_id", insertable = false, updatable = false)
public Product product;
// Read-only association back to the order (mappedBy target of Order.items);
// the writable orderId column above stays the owner of the value.
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "order_id", insertable = false, updatable = false)
public Order order;
}
Let’s also use repositories for more complex reads/writes:
OrderRepository:
package com.example.repository;
import java.time.LocalDateTime;
import java.util.List;
import com.example.entity.Order;
import io.quarkus.hibernate.orm.panache.PanacheRepository;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class OrderRepository implements PanacheRepository<Order> {
public List<Order> findByStatus(String status) {
return find("status", status).list();
}
public List<Order> findByDateRange(LocalDateTime startDate, LocalDateTime endDate) {
return find("createdAt BETWEEN ?1 AND ?2", startDate, endDate).list();
}
public Order save(Order order) {
if (order.id == null) {
persist(order);
} else {
order = getEntityManager().merge(order);
}
return order;
}
public Order updateStatus(Long orderId, String newStatus) {
Order order = findById(orderId);
if (order != null) {
order.status = newStatus;
order.preUpdate();
return order;
}
return null;
}
}
ProductRepository:
package com.example.repository;
import java.math.BigDecimal;
import java.util.List;
import com.example.entity.Product;
import io.quarkus.hibernate.orm.panache.PanacheRepository;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class ProductRepository implements PanacheRepository<Product> {
public List<Product> findByCategory(String category) {
return find("category", category).list();
}
public List<Product> findByPriceRange(BigDecimal minPrice, BigDecimal maxPrice) {
return find("price BETWEEN ?1 AND ?2", minPrice, maxPrice).list();
}
public long countForRead() {
return count();
}
public Product save(Product product) {
if (product.id == null) {
persist(product);
} else {
product = getEntityManager().merge(product);
}
return product;
}
public void deleteProduct(Long id) {
deleteById(id);
}
public Product updateStock(Long productId, Integer newStock) {
Product product = findById(productId);
if (product != null) {
product.stockQuantity = newStock;
product.preUpdate();
return product;
}
return null;
}
}
Product REST Resource
So far we’ve defined the Order and Product entities with their repositories. For the demo, I’m only going to implement the product resource.
Let’s build the endpoints:
package com.example.resource;
import java.math.BigDecimal;
import java.util.List;
import com.example.entity.Product;
import com.example.repository.ProductRepository;
import com.example.tenant.ReadWrite;
import jakarta.inject.Inject;
import jakarta.transaction.Transactional;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.DELETE;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.PUT;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.QueryParam;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;
@Path("/products")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class ProductResource {
@Inject
ProductRepository productRepository;
@GET
public List<Product> getAllProducts() {
return productRepository.findAll().list();
}
@GET
@Path("/{id}")
public Response getProduct(@PathParam("id") Long id) {
return productRepository.findByIdOptional(id).map(product -> Response.ok(product).build())
.orElse(Response.status(Response.Status.NOT_FOUND).build());
}
@GET
@Path("/category/{category}")
public List<Product> getProductsByCategory(@PathParam("category") String category) {
return productRepository.findByCategory(category);
}
@GET
@Path("/price-range")
public List<Product> getProductsByPriceRange(@QueryParam("min") BigDecimal minPrice,
@QueryParam("max") BigDecimal maxPrice) {
return productRepository.findByPriceRange(minPrice, maxPrice);
}
@GET
@Path("/count")
public Response getProductCount() {
long count = productRepository.countForRead();
return Response.ok().entity("{\"count\": " + count + "}").build();
}
@ReadWrite
@Transactional
@POST
public Response createProduct(Product product) {
Product created = productRepository.save(product);
return Response.status(Response.Status.CREATED).entity(created).build();
}
@ReadWrite
@Transactional
@PUT
@Path("/{id}")
public Response updateProduct(@PathParam("id") Long id, Product product) {
product.id = id;
Product updated = productRepository.save(product);
return Response.ok(updated).build();
}
@ReadWrite
@Transactional
@PUT
@Path("/{id}/stock")
public Response updateStock(@PathParam("id") Long id, @QueryParam("stock") Integer stock) {
Product updated = productRepository.updateStock(id, stock);
if (updated != null) {
return Response.ok(updated).build();
}
return Response.status(Response.Status.NOT_FOUND).build();
}
@ReadWrite
@Transactional
@DELETE
@Path("/{id}")
public Response deleteProduct(@PathParam("id") Long id) {
productRepository.deleteProduct(id);
return Response.noContent().build();
}
}
Notice how only the write endpoints carry @ReadWrite.
Running and Testing
Thanks to Compose Dev Services, Quarkus starts the Postgres containers for you. Run Quarkus:
./mvnw quarkus:dev
Test reads (uses replica):
curl http://localhost:8080/products
Result:
[
{
"id": 1,
"name": "Laptop Pro",
"price": 1299.99,
"description": "High-performance laptop",
"category": "Electronics",
"stockQuantity": 50,
"createdAt": "2025-10-01T13:52:23.555386",
"updatedAt": "2025-10-01T13:52:23.555386"
},
{
"id": 2,
"name": "Wireless Mouse",
"price": 29.99,
"description": "Bluetooth wireless mouse",
"category": "Electronics",
"stockQuantity": 200,
"createdAt": "2025-10-01T13:52:23.555386",
"updatedAt": "2025-10-01T13:52:23.555386"
},
{
"id": 3,
"name": "Office Chair",
"price": 199.99,
"description": "Ergonomic office chair",
"category": "Furniture",
"stockQuantity": 30,
"createdAt": "2025-10-01T13:52:23.555386",
"updatedAt": "2025-10-01T13:52:23.555386"
}
]
Test writes (uses primary):
curl -X POST http://localhost:8080/products \
-H "Content-Type: application/json" \
-d '{
"name": "Coffee Maker",
"price": 99.99,
"description": "Automatic coffee maker",
"category": "Appliances",
"stockQuantity": 20
}'
Result:
{
"id": 55,
"name": "Coffee Maker",
"price": 99.99,
"description": "Automatic coffee maker",
"category": "Appliances",
"stockQuantity": 20,
"createdAt": "2025-10-01T15:57:29.489747",
"updatedAt": "2025-10-01T15:57:29.489763"
}
Which database connection is used for which operation?
Just check the logs:
INFO [com.exa.ten.MyTenantResolver] (executor-thread-1) Resolved tenant: read-only for getAllProducts
Check the Replica
PostgreSQL exposes this through the pg_stat_replication view, but you have to query it on the primary database, since that’s where replication connections are tracked.
Open a shell into the primary container
With Podman:
podman exec -it postgres-primary psql -U admin -d ecommerce
(Docker equivalent: docker exec -it postgres-primary psql -U admin -d ecommerce)
Run the replication status query
Once inside the psql prompt, run:
SELECT client_addr, state, sync_state, write_lag, flush_lag, replay_lag
FROM pg_stat_replication;
client_addr → IP of the replica
state → current replication state (e.g. streaming)
sync_state → sync, async, or potential
*_lag columns → show replication delay in different phases
Example output
client_addr | state | sync_state | write_lag | flush_lag | replay_lag
-------------+-----------+------------+-----------------+-----------------+-----------------
10.89.0.3 | streaming | async | 00:00:00.000563 | 00:00:00.001495 | 00:00:00.001715
If you see streaming and small (ideally 0) lag values, your replica is healthy.
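If you prefer a single number, you can also ask the primary how many bytes the replica’s replay position trails behind the current WAL position:
-- Run on the primary: bytes between the primary's current WAL position
-- and what each replica has replayed. 0 means fully caught up.
SELECT client_addr,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;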
Why this approach works
Annotation-driven: Business logic stays clean; you just mark endpoints.
Centralized routing: Tenant resolver handles switching.
Scalable: Add more replicas as tenants if needed.
Quarkus-native: No manual EntityManager juggling.
Next Steps
Add a replication lag monitor to avoid stale reads (see the sketch after this list).
Use load balancing across multiple replicas.
Combine with CDI interceptors to apply @ReadWrite dynamically based on transaction context.
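As a sketch for the first item, here is what a minimal replication lag monitor could look like, assuming you add the quarkus-smallrye-health extension; the class, query, and the 5-second threshold are illustrative, not part of the demo project:
package com.example.health;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

import io.agroal.api.AgroalDataSource;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@Readiness
@ApplicationScoped
public class ReplicationLagCheck implements HealthCheck {

    // Query the primary, since pg_stat_replication is only populated there.
    @Inject
    @io.quarkus.agroal.DataSource("read-write")
    AgroalDataSource primary;

    @Override
    public HealthCheckResponse call() {
        String sql = "SELECT COALESCE(MAX(EXTRACT(EPOCH FROM replay_lag)), 0) FROM pg_stat_replication";
        try (Connection con = primary.getConnection();
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            rs.next();
            double lagSeconds = rs.getDouble(1);
            // Report DOWN when the slowest replica trails by 5s or more (illustrative threshold).
            return HealthCheckResponse.named("replication-lag")
                    .status(lagSeconds < 5.0)
                    .withData("lagSeconds", String.valueOf(lagSeconds))
                    .build();
        } catch (Exception e) {
            return HealthCheckResponse.named("replication-lag").down()
                    .withData("error", String.valueOf(e.getMessage())).build();
        }
    }
}
A readiness probe like this lets an orchestrator stop routing traffic while the replica is too far behind.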
This setup gives you automatic read/write separation in Quarkus without complex datasource handling. Clean, declarative, and a solid foundation to build on.