<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Main Thread]]></title><description><![CDATA[The Main Thread publishes practical, opinionated articles about modern Java, Quarkus, and real-world system architecture.]]></description><link>https://www.the-main-thread.com</link><image><url>https://substackcdn.com/image/fetch/$s_!8sdd!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81643b8a-6240-4cd1-9f3a-8fd19cc3a455_254x254.png</url><title>The Main Thread</title><link>https://www.the-main-thread.com</link></image><generator>Substack</generator><lastBuildDate>Sat, 25 Apr 2026 03:52:33 GMT</lastBuildDate><atom:link href="https://www.the-main-thread.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Markus Eisele]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[myfear@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[myfear@substack.com]]></itunes:email><itunes:name><![CDATA[Markus Eisele]]></itunes:name></itunes:owner><itunes:author><![CDATA[Markus Eisele]]></itunes:author><googleplay:owner><![CDATA[myfear@substack.com]]></googleplay:owner><googleplay:email><![CDATA[myfear@substack.com]]></googleplay:email><googleplay:author><![CDATA[Markus Eisele]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Build Your First Real Java RAG Pipeline with Quarkus and Docling]]></title><description><![CDATA[Turn messy enterprise documents into structured retrieval with Docling, pgvector, Ollama, readiness checks, and guardrails in one local Quarkus 
application.]]></description><link>https://www.the-main-thread.com/p/enterprise-rag-quarkus-docling</link><guid isPermaLink="false">https://www.the-main-thread.com/p/enterprise-rag-quarkus-docling</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Fri, 24 Apr 2026 06:08:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7d7cecc0-7d06-4e25-a6d7-bb432a547f4c_1731x909.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>I do not like RAG demos that start with clean Markdown. That is usually where the hard part was quietly deleted.</p><p>Open a real enterprise PDF and the problem is obvious: tables, headings, footnotes, and multi-column layout all carry meaning. Plain text extraction treats too much of that as decoration. Strip the structure and retrieval feeds the model fragments without enough context. The answer may still sound confident, which is exactly the annoying part.</p><p>Docling keeps structure as Markdown-friendly output. Quarkus wires Docling, Postgres with pgvector, and LangChain4j so we stay in ordinary Java and configuration. Agents only stay useful when the knowledge they pull is current and faithful to the source. Here we build one local pipeline: Docling conversion, sentence chunking, embeddings in pgvector, Ollama for chat and embeddings, and guardrails around inputs and outputs.</p><p>The system we build here is small enough to run locally but shaped like something you can extend: layout-aware conversion with Docling, pgvector retrieval, local Ollama chat and embedding models, readiness around background indexing, and guardrails around the assistant. 
</p><blockquote><p>This is an update to the <a href="https://www.the-main-thread.com/p/enterprise-rag-quarkus-docling-pgvector-tutorial">original tutorial</a> and tweaks a couple of things, making sure it aligns with API changes.</p></blockquote><h2><strong>Prerequisites</strong></h2><p>You should be comfortable with Java, REST, and running containers locally (Podman or Docker). The steps use the Quarkus CLI, Maven, PostgreSQL via Dev Services, and Ollama on the host.</p><ul><li><p>Java 21+</p></li><li><p>Maven 3.9+ and Quarkus CLI (optional but used below)</p></li><li><p>Podman or Docker (for Dev Services: PostgreSQL, Docling)</p></li><li><p>Ollama installed locally with pull access for the chat and embedding models you configure</p></li></ul><h2><strong>Project Setup</strong></h2><p>This article uses Quarkus <strong>3.34.3</strong> and Java <strong>21</strong>. Create the project:</p><pre><code><code>quarkus create app com.ibm:enterprise-rag \
  --package-name=com.ibm \
  --extensions=rest-jackson,jdbc-postgresql,quarkus-langchain4j-ollama,quarkus-langchain4j-pgvector,quarkus-docling,quarkus-smallrye-health
cd enterprise-rag</code></code></pre><p>Extensions:</p><ul><li><p><code>rest-jackson</code>: REST endpoints with JSON via Jackson</p></li><li><p><code>jdbc-postgresql</code>: JDBC driver and datasource integration for PostgreSQL</p></li><li><p><code>quarkus-langchain4j-ollama</code>: Chat and embedding models through Ollama</p></li><li><p><code>quarkus-langchain4j-pgvector</code>: Embedding store backed by PostgreSQL pgvector</p></li><li><p><code>quarkus-docling</code> (<code>io.quarkiverse.docling:quarkus-docling:1.3.0</code>): Docling REST client and Dev Services for the Docling container. This is a Quarkiverse extension, so we pin it separately</p></li><li><p><code>quarkus-smallrye-health</code>: Readiness and liveness endpoints used to hold traffic until ingestion completes</p></li></ul><h2><strong>Embeddings, Vector Size, and pgvector</strong></h2><p>An <strong>embedding</strong> is a fixed-length array of numbers produced by an embedding model. Similar text tends to land near other similar text in that space. That lets you retrieve chunks with <strong>nearest-neighbor search</strong> in Postgres through pgvector, not only keyword search.</p><p><strong>Dimension</strong> is the length of that array. The model fixes it: a given tag always emits the same width. Your database column and <code>quarkus.langchain4j.pgvector.dimension</code> <strong>must</strong> match that width. If they diverge, the app can fail at startup or when it writes vectors.</p><p>At ingest time and at query time you must use the <strong>same</strong> embedding model so vectors are comparable. If you change the model or its output size, drop or recreate the embedding table and re-ingest.</p><p>For <strong>Ollama</strong>, run <code>ollama show &lt;model&gt;</code> and read <strong>embedding length</strong>. 
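</p><p>To make the pairing concrete, here is a minimal configuration sketch. It is illustrative rather than taken from the project: the model tags are example choices, the Ollama property names should be verified against the quarkus-langchain4j configuration reference, and the dimension must match whatever <code>ollama show</code> reports for your embedding model.</p><pre><code><code># application.properties (illustrative sketch; tags and property names are assumptions)
quarkus.langchain4j.ollama.chat-model.model-id=llama3.2
quarkus.langchain4j.ollama.embedding-model.model-id=granite-embedding:latest

# Must equal the embedding model's output width ("embedding length" in ollama show).
# granite-embedding:latest typically reports 384; change both together when you swap models.
quarkus.langchain4j.pgvector.dimension=384</code></code></pre><p>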
The default library tag <code>granite-embedding:latest</code> is a compact Granite English model, roughly tens of millions of parameters, with <strong>384</strong> dimensions on typical installs. That is enough for a responsive local loop on a laptop. Larger Granite variants, for example multilingual 278M-class models, often use <strong>768</strong> dimensions and more compute. Use them when you need the extra capacity, and change the pgvector dimension with them.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MUwS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MUwS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png 424w, https://substackcdn.com/image/fetch/$s_!MUwS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png 848w, https://substackcdn.com/image/fetch/$s_!MUwS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png 1272w, https://substackcdn.com/image/fetch/$s_!MUwS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!MUwS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png" width="413" height="558.9681742043551" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:808,&quot;width&quot;:597,&quot;resizeWidth&quot;:413,&quot;bytes&quot;:28678,&quot;alt&quot;:&quot;Diagram comparing the offline ingest path and the online query path. Source files are converted by Docling, split into sentences, embedded, and stored in pgvector; user questions use the same embedding model before nearest-vector retrieval feeds the chat model.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/194488888?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Diagram comparing the offline ingest path and the online query path. Source files are converted by Docling, split into sentences, embedded, and stored in pgvector; user questions use the same embedding model before nearest-vector retrieval feeds the chat model." title="Diagram comparing the offline ingest path and the online query path. Source files are converted by Docling, split into sentences, embedded, and stored in pgvector; user questions use the same embedding model before nearest-vector retrieval feeds the chat model." 
srcset="https://substackcdn.com/image/fetch/$s_!MUwS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png 424w, https://substackcdn.com/image/fetch/$s_!MUwS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png 848w, https://substackcdn.com/image/fetch/$s_!MUwS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png 1272w, https://substackcdn.com/image/fetch/$s_!MUwS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F464ac682-48d9-4891-bccd-942f4a9eb51c_597x808.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Chunking uses <code>DocumentBySentenceSplitter</code> with a <strong>200</strong>-token target length and <strong>20</strong>-token overlap. That is a readable default for sales PDFs. Sentences stay mostly intact, overlap reduces the chance that a boundary cuts a fact in half, and the segment count stays manageable on a laptop. Smaller chunks improve precision for short facts. Longer chunks keep more context but can make retrieval noisier. Adjust this after you inspect real retrieval logs for your corpus.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sj9v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sj9v!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png 424w, https://substackcdn.com/image/fetch/$s_!sj9v!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png 848w, https://substackcdn.com/image/fetch/$s_!sj9v!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png 1272w, 
https://substackcdn.com/image/fetch/$s_!sj9v!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sj9v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png" width="784" height="355" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:355,&quot;width&quot;:784,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:24726,&quot;alt&quot;:&quot;Sequence diagram showing a `/bot` request moving from the client through the REST resource and AI service, into the retrieval augmentor, Ollama embedding model, pgvector search, and Ollama chat model before returning a JSON response.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/194488888?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Sequence diagram showing a `/bot` request moving from the client through the REST resource and AI service, into the retrieval augmentor, Ollama embedding model, pgvector search, and Ollama chat model before returning a JSON response." 
title="Sequence diagram showing a `/bot` request moving from the client through the REST resource and AI service, into the retrieval augmentor, Ollama embedding model, pgvector search, and Ollama chat model before returning a JSON response." srcset="https://substackcdn.com/image/fetch/$s_!sj9v!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png 424w, https://substackcdn.com/image/fetch/$s_!sj9v!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png 848w, https://substackcdn.com/image/fetch/$s_!sj9v!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png 1272w, https://substackcdn.com/image/fetch/$s_!sj9v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72b9fa2f-a3e1-4359-8734-3c0375428c9f_784x355.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Quarkus can accept HTTP requests as soon as the core stack is up. <strong>Indexing used to block that moment</strong> because conversion and embedding ran in a <code>@PostConstruct</code> hook. The flow below separates <strong>application ready</strong> (the socket is listening) from <strong>RAG ready</strong> (vectors exist in pgvector). Readiness stays DOWN until the pipeline logs completion. If you bypass health checks and call <code>/bot</code> early, retrieval may still be empty. 
Which is a very polite way of saying: the bot can answer before it knows anything useful.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!h-or!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!h-or!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png 424w, https://substackcdn.com/image/fetch/$s_!h-or!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png 848w, https://substackcdn.com/image/fetch/$s_!h-or!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png 1272w, https://substackcdn.com/image/fetch/$s_!h-or!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!h-or!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png" width="784" height="393" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:393,&quot;width&quot;:784,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:24745,&quot;alt&quot;:&quot;Sequence diagram showing Quarkus opening the HTTP port while background ingestion is still running. Readiness stays down until Docling conversion and pgvector embedding complete, then flips up when the RAG index is usable.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/194488888?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Sequence diagram showing Quarkus opening the HTTP port while background ingestion is still running. Readiness stays down until Docling conversion and pgvector embedding complete, then flips up when the RAG index is usable." title="Sequence diagram showing Quarkus opening the HTTP port while background ingestion is still running. Readiness stays down until Docling conversion and pgvector embedding complete, then flips up when the RAG index is usable." 
srcset="https://substackcdn.com/image/fetch/$s_!h-or!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png 424w, https://substackcdn.com/image/fetch/$s_!h-or!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png 848w, https://substackcdn.com/image/fetch/$s_!h-or!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png 1272w, https://substackcdn.com/image/fetch/$s_!h-or!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04ad642c-062d-4856-ae5d-19d3cd16b875_784x393.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h2><strong>Implementation</strong></h2><p>I split the implementation into four small lanes: startup and readiness, ingestion, retrieval, and the <code>/bot</code> API. The code is longer than the idea, mostly because guardrails and background work need explicit boundaries. That is fine. Invisible magic is rarely where production systems become easier.</p><h3><strong>IngestionStarter</strong></h3><p><code>src/main/java/com/ibm/ingest/IngestionStarter.java</code> keeps startup short. It schedules ingestion after CDI startup, then lets Quarkus open HTTP while Docling and embedding work continue in the background.</p><pre><code><code>package com.ibm.ingest;

import io.quarkus.logging.Log;
import io.quarkus.runtime.StartupEvent;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.inject.Inject;

/**
 * Kicks off background ingestion after CDI startup so Quarkus can open HTTP without waiting for Docling
 * conversion and embedding to finish.
 */
@ApplicationScoped
public class IngestionStarter {

    @Inject
    DocumentLoader documentLoader;

    void onStart(@Observes StartupEvent ignored) {
        documentLoader.startAsyncIngestion();
        Log.info("Background document ingestion scheduled (readiness will turn UP when indexing completes).");
    }
}</code></code></pre><h3><strong>IndexingState</strong></h3><p><code>src/main/java/com/ibm/ingest/IndexingState.java</code> is the small shared flag between ingestion and readiness. The process can be alive before this flag turns true. Traffic should wait until readiness says the index exists.</p><pre><code><code>package com.ibm.ingest;

import java.util.concurrent.atomic.AtomicBoolean;

import jakarta.enterprise.context.ApplicationScoped;

/**
 * Tracks whether the initial embedding ingestion has finished. Used for readiness so HTTP traffic
 * can wait until pgvector is populated (when health checks are enabled).
 */
@ApplicationScoped
public class IndexingState {

    private final AtomicBoolean indexReady = new AtomicBoolean(false);

    public boolean isIndexReady() {
        return indexReady.get();
    }

    public void setIndexReady(boolean ready) {
        indexReady.set(ready);
    }
}</code></code></pre><h3><strong>DoclingConverter</strong></h3><p><code>src/main/java/com/ibm/ingest/DoclingConverter.java</code> hides the Docling Serve task flow behind one method. We submit the file, poll until Docling finishes, and fetch Markdown from the completed task. Each REST call passes <code>ApiMetadata</code> built from <code>quarkus.docling.api-key</code> so the <code>X-Api-Key</code> header matches what Docling Serve expects (Dev Services can inject this; a standalone Docling on localhost with auth enabled needs the same value you configured on the server). I keep this separate from the loader because Docling has enough API shape of its own.</p><pre><code><code>package com.ibm.ingest;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.util.Base64;
import java.util.Objects;

import ai.docling.serve.api.convert.request.ConvertDocumentRequest;
import ai.docling.serve.api.convert.request.options.ConvertDocumentOptions;
import ai.docling.serve.api.convert.request.options.OutputFormat;
import ai.docling.serve.api.convert.request.source.FileSource;
import ai.docling.serve.api.convert.request.target.InBodyTarget;
import ai.docling.serve.api.convert.response.InBodyConvertDocumentResponse;
import ai.docling.serve.api.task.response.TaskStatus;
import ai.docling.serve.api.task.response.TaskStatusPollResponse;
import io.quarkiverse.docling.runtime.client.ApiMetadata;
import io.quarkiverse.docling.runtime.client.QuarkusDoclingServeClient;
import io.quarkiverse.docling.runtime.config.DoclingRuntimeConfig;
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.infrastructure.Infrastructure;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.ProcessingException;
import jakarta.ws.rs.core.Response;
import jakarta.ws.rs.core.Response.Status.Family;

/**
 * Converts files to Markdown via Docling Serve using the Quarkus client's async
 * task API
 * ({@link QuarkusDoclingServeClient#submitConvertSourceAsync}) and polling
 * until completion.
 */
@ApplicationScoped
public class DoclingConverter {

    private final QuarkusDoclingServeClient doclingClient;
    private final ApiMetadata apiMetadata;

    @Inject
    public DoclingConverter(QuarkusDoclingServeClient doclingClient, DoclingRuntimeConfig doclingConfig) {
        this.doclingClient = doclingClient;
        ApiMetadata.Builder metadata = ApiMetadata.builder();
        doclingConfig.apiKey().ifPresent(metadata::apiKey);
        this.apiMetadata = metadata.build();
    }

    /**
     * Converts a file to Markdown asynchronously (Mutiny). Subscription runs on the
     * default worker pool
     * so polling and JAX-RS client calls do not block the event loop. Read errors
     * become a failed {@link Uni}
     * so callers can use this from lambdas without handling checked exceptions.
     */
    public Uni&lt;String&gt; convertToMarkdownUni(Path filePath) {
        final byte[] bytes;
        try {
            bytes = Files.readAllBytes(filePath);
        } catch (IOException e) {
            return Uni.createFrom().failure(e);
        }
        String base64 = Base64.getEncoder().encodeToString(bytes);
        String filename = filePath.getFileName().toString();

        ConvertDocumentRequest request = ConvertDocumentRequest.builder()
                .source(FileSource.builder()
                        .base64String(base64)
                        .filename(filename)
                        .build())
                .options(ConvertDocumentOptions.builder()
                        .toFormat(OutputFormat.MARKDOWN)
                        .build())
                .target(InBodyTarget.builder().build())
                .build();

        return doclingClient.submitConvertSourceAsync(request, apiMetadata)
                .runSubscriptionOn(Infrastructure.getDefaultWorkerPool())
                .chain(this::pollUntilSuccess)
                .chain(this::fetchMarkdownFromTask);
    }

    private Uni&lt;TaskStatusPollResponse&gt; pollUntilSuccess(TaskStatusPollResponse status) {
        TaskStatus t = status.getTaskStatus();
        if (t == TaskStatus.SUCCESS) {
            return Uni.createFrom().item(status);
        }
        if (t == TaskStatus.FAILURE) {
            return Uni.createFrom().failure(new IllegalStateException(
                    "Docling conversion task failed for taskId=" + status.getTaskId()));
        }
        String taskId = status.getTaskId();
        return Uni.createFrom().nullItem()
                .onItem().delayIt().by(Duration.ofMillis(200))
                .chain(ignored -&gt; Uni.createFrom().item(() -&gt; doclingClient.pollTaskStatus(taskId, 500L, apiMetadata))
                        .runSubscriptionOn(Infrastructure.getDefaultWorkerPool())
                        .chain(this::pollUntilSuccess));
    }

    private Uni&lt;String&gt; fetchMarkdownFromTask(TaskStatusPollResponse completed) {
        String taskId = completed.getTaskId();
        return Uni.createFrom().item(() -&gt; {
            Response response = doclingClient.convertTaskResult(taskId, apiMetadata);
            if (response.getStatusInfo().getFamily() != Family.SUCCESSFUL) {
                throw new ProcessingException(
                        "convertTaskResult failed: HTTP " + response.getStatus() + " for taskId=" + taskId);
            }
            InBodyConvertDocumentResponse inBody = response.readEntity(InBodyConvertDocumentResponse.class);
            var document = Objects.requireNonNull(inBody.getDocument(),
                    "Document conversion returned null document for taskId=" + taskId);
            return document.getMarkdownContent();
        }).runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
    }
}</code></code></pre><h3><strong>DocumentLoader</strong></h3><p><code>src/main/java/com/ibm/ingest/DocumentLoader.java</code> is the actual ingestion pipeline. It finds supported files, converts them to Markdown, splits the text into sentence-sized chunks, embeds each segment, and writes those vectors to pgvector.</p><p>Notice the failure behavior: this demo sets readiness UP even when ingestion fails, so local development does not get stuck forever. In production I would be more suspicious. If the assistant needs the knowledge base to be useful, keeping readiness DOWN can be the more honest failure mode.</p><pre><code><code>package com.ibm.ingest;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.Metadata;
import dev.langchain4j.data.document.splitter.DocumentBySentenceSplitter;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStore;
import io.quarkus.logging.Log;
import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.infrastructure.Infrastructure;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

/**
 * Loads documents from {@code documents/}, converts them with Docling (async task API), splits, and
 * stores embeddings. Runs in the background after startup; {@link IndexingState} and readiness reflect
 * completion.
 */
@ApplicationScoped
public class DocumentLoader {

    private static final List&lt;String&gt; ALLOWED_EXTENSIONS = Arrays.asList("txt", "pdf", "pptx", "ppt", "doc", "docx",
            "xlsx", "xls", "csv", "json", "xml", "html");

    @Inject
    EmbeddingStore&lt;TextSegment&gt; store;

    @Inject
    EmbeddingModel embeddingModel;

    @Inject
    DoclingConverter doclingConverter;

    @Inject
    IndexingState indexingState;

    public void startAsyncIngestion() {
        indexingState.setIndexReady(false);
        Log.info("Starting document loading (background)...");

        listEligiblePathsUni()
                .chain(paths -&gt; {
                    if (paths.isEmpty()) {
                        Log.warn("No documents to process. Skipping embedding generation.");
                        return Uni.createFrom().voidItem();
                    }
                    return Multi.createFrom().iterable(paths)
                            .onItem().transformToUniAndConcatenate(path -&gt; doclingConverter.convertToMarkdownUni(path)
                                    .map(markdown -&gt; toDocument(path, markdown)))
                            .collect().asList()
                            .chain(this::embedAllDocuments);
                })
                .subscribe().with(
                        ignored -&gt; finishIngestionSuccess(),
                        this::finishIngestionFailure);
    }

    private void finishIngestionSuccess() {
        indexingState.setIndexReady(true);
        Log.info("Document ingestion pipeline finished; readiness is UP.");
    }

    private void finishIngestionFailure(Throwable failure) {
        Log.error("Document ingestion pipeline failed; readiness set UP so the app is not stuck DOWN.", failure);
        indexingState.setIndexReady(true);
    }
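
    // Sketch of the stricter production variant mentioned above: if the assistant
    // is useless without the knowledge base, keeping readiness DOWN on failure is
    // the more honest signal. (Hypothetical alternative, not used by this demo.)
    //
    // private void finishIngestionFailure(Throwable failure) {
    //     Log.error("Document ingestion pipeline failed; readiness stays DOWN.", failure);
    //     indexingState.setIndexReady(false);
    // }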

    private Uni&lt;List&lt;Path&gt;&gt; listEligiblePathsUni() {
        return Uni.createFrom().item(() -&gt; {
            // Relative to the working directory, which matches dev mode started
            // from the project root; a packaged jar will not see this path.
            Path documentsPath = Path.of("src/main/resources/documents");
            List&lt;Path&gt; paths = new ArrayList&lt;&gt;();
            if (!Files.isDirectory(documentsPath)) {
                Log.warnf("Documents directory not found or not a directory: %s", documentsPath);
                return paths;
            }
            int skippedCount = 0;
            try (var stream = Files.list(documentsPath)) {
                for (Path filePath : stream.filter(Files::isRegularFile).toList()) {
                    String fileName = filePath.getFileName().toString();
                    String extension = fileExtension(fileName);
                    if (extension.isEmpty() || !ALLOWED_EXTENSIONS.contains(extension)) {
                        skippedCount++;
                        Log.debugf("Skipping file '%s' - extension '%s' is not in allowed list",
                                fileName, extension.isEmpty() ? "(no extension)" : extension);
                        continue;
                    }
                    paths.add(filePath);
                }
            } catch (IOException e) {
                Log.errorf(e, "Failed to list documents in %s", documentsPath);
            }
            Log.infof("Found %d file(s) to process (%d skipped by extension).", paths.size(), skippedCount);
            return paths;
        }).runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
    }

    private static String fileExtension(String fileName) {
        // "report.PDF" -&gt; "pdf"; "archive.tar.gz" -&gt; "gz"; "README" and ".gitignore" -&gt; ""
        int lastDotIndex = fileName.lastIndexOf('.');
        if (lastDotIndex &gt; 0 &amp;&amp; lastDotIndex &lt; fileName.length() - 1) {
            return fileName.substring(lastDotIndex + 1).toLowerCase();
        }
        return "";
    }

    private static Document toDocument(Path filePath, String markdown) {
        String fileName = filePath.getFileName().toString();
        String extension = fileExtension(fileName);
        Map&lt;String, String&gt; meta = new HashMap&lt;&gt;();
        meta.put("file", fileName);
        meta.put("format", extension);
        return Document.document(markdown, new Metadata(meta));
    }

    private Uni&lt;Void&gt; embedAllDocuments(List&lt;Document&gt; docs) {
        if (docs.isEmpty()) {
            Log.warn("No documents were successfully converted. Skipping embedding generation.");
            return Uni.createFrom().voidItem();
        }

        // Sentence-aware segments of up to 200 characters, with up to 20 characters of overlap
        DocumentBySentenceSplitter splitter = new DocumentBySentenceSplitter(200, 20);
        List&lt;TextSegment&gt; segments = splitter.splitAll(docs);

        if (segments.isEmpty()) {
            Log.warn("No text segments generated from documents. Skipping embedding storage.");
            return Uni.createFrom().voidItem();
        }

        Log.infof("Generating embeddings for %d text segments...", segments.size());

        return Uni.createFrom().item(() -&gt; {
            embedSegmentsBlocking(segments);
            return null;
        }).runSubscriptionOn(Infrastructure.getDefaultWorkerPool()).replaceWithVoid();
    }

    private void embedSegmentsBlocking(List&lt;TextSegment&gt; segments) {
        int embeddedCount = 0;
        int errorCount = 0;
        try {
            if (!segments.isEmpty()) {
                TextSegment testSegment = segments.get(0);
                var testEmbedding = embeddingModel.embed(testSegment).content();
                store.add(testEmbedding, testSegment);
                Log.infof("Store test successful. Proceeding with bulk embedding...");
                embeddedCount = 1;
            }
        } catch (jakarta.enterprise.inject.CreationException e) {
            Throwable cause = e.getCause();
            if (cause instanceof IllegalArgumentException
                    &amp;&amp; cause.getMessage() != null
                    &amp;&amp; cause.getMessage().contains("indexListSize")
                    &amp;&amp; cause.getMessage().contains("zero")) {
                Log.errorf("PgVector dimension configuration error detected during store initialization.");
                Log.errorf("The dimension property 'quarkus.langchain4j.pgvector.dimension' is being read as 0.");
                throw new RuntimeException(
                        "PgVector store initialization failed. Check application.properties and database configuration.",
                        e);
            }
            throw e;
        } catch (IllegalArgumentException e) {
            if (e.getMessage() != null &amp;&amp; e.getMessage().contains("indexListSize") &amp;&amp; e.getMessage().contains("zero")) {
                Log.errorf("PgVector dimension configuration error. The dimension is being read as 0.");
                throw new RuntimeException(
                        "PgVector dimension misconfiguration. Dimension must be &gt; 0. Check application.properties.", e);
            }
            throw e;
        } catch (Exception e) {
            Log.errorf(e, "Failed to test embedding store. This might indicate a configuration issue.");
            throw new RuntimeException(
                    "Embedding store test failed. Please check your database and pgvector configuration.", e);
        }

        int startIndex = embeddedCount &gt; 0 ? 1 : 0;
        for (int i = startIndex; i &lt; segments.size(); i++) {
            TextSegment segment = segments.get(i);
            try {
                var embedding = embeddingModel.embed(segment).content();
                store.add(embedding, segment);
                embeddedCount++;
                if (embeddedCount % 10 == 0) {
                    Log.infof("Progress: embedded %d/%d segments", embeddedCount, segments.size());
                }
            } catch (Exception e) {
                errorCount++;
                Log.errorf(e, "Failed to embed and store segment: %s",
                        segment.text().substring(0, Math.min(50, segment.text().length())));
            }
        }

        Log.infof("Successfully embedded and stored %d out of %d segments (errors: %d)", embeddedCount,
                segments.size(), errorCount);
    }
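
    // Hypothetical alternative: LangChain4j embedding models can batch requests,
    // and the store can add in bulk, which is usually much faster than one call
    // per segment. The per-segment loop above is kept because it survives a single
    // bad segment, while a batch fails as a whole.
    //
    // private void embedSegmentsInOneBatch(List&lt;TextSegment&gt; segments) {
    //     var embeddings = embeddingModel.embedAll(segments).content();
    //     store.addAll(embeddings, segments);
    // }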
}</code></code></pre><h3><strong>IngestionReadinessCheck</strong></h3><p><code>src/main/java/com/ibm/health/IngestionReadinessCheck.java</code> turns the indexing flag into a standard SmallRye Health readiness signal. This is the line between &#8220;the process is running&#8221; and &#8220;the RAG system can answer with indexed context.&#8221;</p><pre><code><code>package com.ibm.health;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

import com.ibm.ingest.IndexingState;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

/**
 * Readiness stays {@code DOWN} until the background ingestion attempt finishes. Note that this
 * demo flips to {@code UP} even after a failed run; see {@code DocumentLoader}.
 */
@Readiness
@ApplicationScoped
public class IngestionReadinessCheck implements HealthCheck {

    @Inject
    IndexingState indexingState;

    @Override
    public HealthCheckResponse call() {
        if (indexingState.isIndexReady()) {
            return HealthCheckResponse.up("ingestion");
        }
        return HealthCheckResponse.down("ingestion");
    }
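
    // In Quarkus this surfaces at GET /q/health/ready: HTTP 503 with status DOWN
    // while ingestion is still running, then HTTP 200 with status UP. Quick check:
    //
    //   curl -s http://localhost:8080/q/health/ready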
}</code></code></pre><h3><strong>DocumentRetrieverAugmentorSupplier</strong></h3><p><code>src/main/java/com/ibm/ai/DocumentRetrieverAugmentorSupplier.java</code> connects the custom retriever to the Quarkus LangChain4j AI service. I like making this explicit. Defaults are nice until you need to debug why retrieval returned nothing and the model still answered with great confidence.</p><pre><code><code>package com.ibm.ai;

import java.util.function.Supplier;

import com.ibm.retrieval.DocumentRetriever;

import dev.langchain4j.rag.RetrievalAugmentor;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

/**
 * Wires the custom {@link RetrievalAugmentor} into the Quarkus LangChain4j AI service.
 */
@ApplicationScoped
public class DocumentRetrieverAugmentorSupplier implements Supplier&lt;RetrievalAugmentor&gt; {

    private final DocumentRetriever documentRetriever;

    @Inject
    public DocumentRetrieverAugmentorSupplier(DocumentRetriever documentRetriever) {
        this.documentRetriever = documentRetriever;
    }

    @Override
    public RetrievalAugmentor get() {
        return documentRetriever;
    }
}</code></code></pre><h3><strong>SalesEnablementBot</strong></h3><p><code>src/main/java/com/ibm/ai/SalesEnablementBot.java</code> defines the assistant contract. The system message sets the CloudX scope, the retrieval augmentor supplies document context, and the guardrails check both sides of the model call.</p><pre><code><code>package com.ibm.ai;

import com.ibm.guardrails.HallucinationGuardrail;
import com.ibm.guardrails.InputValidationGuardrail;
import com.ibm.guardrails.OutOfScopeGuardrail;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.guardrail.InputGuardrails;
import dev.langchain4j.service.guardrail.OutputGuardrails;
import io.quarkiverse.langchain4j.RegisterAiService;

@RegisterAiService(retrievalAugmentor = DocumentRetrieverAugmentorSupplier.class)
public interface SalesEnablementBot {

    @SystemMessage("""
                # ROLE AND SCOPE
                You are a Sales Enablement Copilot for CloudX Enterprise Platform.
                
                ## YOUR ALLOWED TOPICS (ONLY THESE):
                - CloudX product features, capabilities, and architecture
                - CloudX pricing tiers: Starter ($499), Professional ($1,999), Enterprise ($5,999)
                - CloudX competitive positioning vs CompeteCloud, SkyPlatform, TechGiant
                - CloudX migration strategies and implementation approaches
                - CloudX customer success stories and ROI data
                - CloudX technical specifications (multi-cloud, Kubernetes, supported languages)
                
                ## STRICT BOUNDARIES - YOU MUST REFUSE:
                &#10060; Questions about competitor internal operations or roadmaps
                &#10060; Questions about non-CloudX IBM products (Watson, DB2, WebSphere Traditional, etc.)
                &#10060; Requests for pricing negotiations or custom contract terms
                &#10060; Questions about unreleased CloudX features or internal roadmaps
                &#10060; Legal, financial, tax, or investment advice
                &#10060; Personal advice or non-business topics
                &#10060; General technology tutorials not related to CloudX
                
                If asked about prohibited topics, respond EXACTLY:
                "I specialize in CloudX Enterprise Platform sales enablement. This question is outside my scope. For [topic], please consult [appropriate resource]."
                
                # SOLUTION MAPPING LOGIC
                When a user describes a client scenario, map to CloudX solutions:
                
                - Legacy technology risk / End-of-Support &#8594; CloudX Support &amp; Maintenance Solutions
                - Legacy infrastructure operations &#8594; CloudX Migration &amp; Modernization Platform
                - Need faster modernization &#8594; CloudX Accelerated Migration Tools
                - Containerization / microservices &#8594; CloudX Cloud-Native Platform
                - AI-assisted modernization &#8594; CloudX AI-Powered Modernization Assistant
                
                # RESPONSE STRUCTURE
                For valid CloudX questions, provide:
                1. **Recommended Solution**: Name the CloudX product/tier
                2. **Rationale**: Why it fits the client's pain point
                3. **Business Outcome**: Expected ROI or benefit
                4. **Proof Point**: Reference a specific customer case study from your documents
                5. **Discovery Question**: Suggest a follow-up question for the sales rep
                
                # ACCURACY REQUIREMENTS
                - Only cite information from your provided CloudX sales enablement documents
                - Never speculate or make up features, pricing, or capabilities
                - If information is not in your documents, state: "I don't have that specific information in my CloudX sales materials."
            """)
    @OutputGuardrails({ OutOfScopeGuardrail.class, HallucinationGuardrail.class })
    @InputGuardrails({ InputValidationGuardrail.class })
    String chat(@UserMessage String userQuestion);
}</code></code></pre><h3><strong>DocumentRetriever</strong></h3><p><code>src/main/java/com/ibm/retrieval/DocumentRetriever.java</code> embeds the user question, asks pgvector for nearby segments, and passes those segments back as augmentation content. It also logs snippets while you develop. Keep that visibility early; flying blind with retrieval is not character building, it is just slow debugging.</p><pre><code><code>package com.ibm.retrieval;

import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.rag.AugmentationRequest;
import dev.langchain4j.rag.AugmentationResult;
import dev.langchain4j.rag.DefaultRetrievalAugmentor;
import dev.langchain4j.rag.RetrievalAugmentor;
import dev.langchain4j.rag.content.Content;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.store.embedding.EmbeddingStore;
import io.quarkus.logging.Log;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class DocumentRetriever implements RetrievalAugmentor {

    private final RetrievalAugmentor augmentor;
    private static final int SNIPPET_LENGTH = 200;

    DocumentRetriever(EmbeddingStore&lt;TextSegment&gt; store, EmbeddingModel model) {
        EmbeddingStoreContentRetriever contentRetriever = EmbeddingStoreContentRetriever.builder()
                .embeddingModel(model)
                .embeddingStore(store)
                .maxResults(3)
                .build();
        augmentor = DefaultRetrievalAugmentor
                .builder()
                .contentRetriever(contentRetriever)
                .build();
    }
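
    // The configuration above means "top 3, no matter how weak the match."
    // EmbeddingStoreContentRetriever also accepts a relevance cutoff, e.g.
    // .minScore(0.6), worth adding once weak matches start polluting the prompt.
    // (0.6 is an illustrative threshold, not a recommendation.)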

    @Override
    public AugmentationResult augment(AugmentationRequest augmentationRequest) {
        // Perform the augmentation
        AugmentationResult result = augmentor.augment(augmentationRequest);

        // Log retrieved content snippets for developer visibility
        // This helps developers understand what documents are being retrieved
        var contents = result.contents();
        Log.infof("DocumentRetriever: Retrieved %d document snippet(s) for augmentation", contents.size());

        for (int i = 0; i &lt; contents.size(); i++) {
            Content content = contents.get(i);
            String text = "";
            String sourceInfo = "";

            try {
                // Content has textSegment() method that returns TextSegment
                TextSegment segment = content.textSegment();
                if (segment != null) {
                    text = segment.text();

                    // Read the source file name written into metadata during ingestion
                    var meta = segment.metadata();
                    if (meta != null) {
                        String file = meta.getString("file");
                        if (file != null) {
                            sourceInfo = " (from: " + file + ")";
                        }
                    }
                }
            } catch (Exception e) {
                Log.debugf("Could not extract text from content: %s", e.getMessage());
            }

            // Create a snippet (first SNIPPET_LENGTH chars) for developer visibility
            if (!text.isEmpty()) {
                String snippet = text.length() &gt; SNIPPET_LENGTH
                        ? text.substring(0, SNIPPET_LENGTH) + "..."
                        : text;
                // Replace newlines with spaces for cleaner log output
                snippet = snippet.replace('\n', ' ').replace('\r', ' ');
                Log.infof("  [%d] %s%s", i + 1, snippet, sourceInfo);
            } else {
                Log.infof("  [%d] (content unavailable)%s", i + 1, sourceInfo);
            }
        }

        return result;
    }

}</code></code></pre><h3><strong>HallucinationGuardrail</strong></h3><p><code>src/main/java/com/ibm/guardrails/HallucinationGuardrail.java</code> checks the answer after the model produces it. It looks for uncertainty, generic content, contradictions, and known CloudX fact mistakes, then reprompts when the answer breaks the sales enablement contract.</p><p>This is still pattern matching. It catches obvious failures and makes the demo behavior visible. It is not a safety program with a trench coat.</p><pre><code><code>package com.ibm.guardrails;

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.guardrail.OutputGuardrail;
import dev.langchain4j.guardrail.OutputGuardrailResult;
import jakarta.enterprise.context.ApplicationScoped;
import io.quarkus.logging.Log;

/**
 * HallucinationGuardrail detects when the LLM generates responses that:
 * - Admit lack of knowledge
 * - Are too vague or generic
 * - Contain contradictory information
 * - Make up facts not present in the CloudX sales enablement materials
 * - Provide overly confident answers without proper context
 */
@ApplicationScoped
public class HallucinationGuardrail implements OutputGuardrail {

    // Phrases indicating the model doesn't have information
    private static final String[] UNCERTAINTY_PHRASES = {
            "i don't have that information",
            "i don't know",
            "i'm not sure",
            "i cannot find",
            "i don't have access to",
            "i'm unable to provide",
            "i don't have specific information",
            "i cannot confirm",
            "i'm not aware of",
            "i don't have details about"
    };

    // Phrases indicating potential hallucination or making up information
    private static final String[] HALLUCINATION_INDICATORS = {
            "as far as i know",
            "i believe",
            "i think",
            "probably",
            "it seems like",
            "it appears that",
            "i assume",
            "i would guess",
            "most likely",
            "presumably"
    };

    // Contradictory phrases that might indicate confusion
    private static final String[] CONTRADICTION_INDICATORS = {
            "however, on the other hand",
            "but actually",
            "or maybe",
            "alternatively, it could be",
            "i'm not certain, but"
    };

    // CloudX-specific facts that should be accurate
    private static final String[][] CLOUDX_FACTS = {
            // Format: {incorrect_value, correct_value, context}
            { "99.9% uptime", "99.99%", "enterprise tier" },
            { "$599", "$499", "starter tier monthly" },
            { "$2,999", "$1,999", "professional tier monthly" },
            { "aws only", "aws, azure, and google cloud", "multi-cloud support" },
            { "competecloud is cheaper", "cloudx is 8% lower for enterprise", "enterprise pricing" }
    };

    @Override
    public OutputGuardrailResult validate(AiMessage responseFromLLM) {
        Log.info("HallucinationGuardrail: Validating LLM response");

        String content = responseFromLLM.text();
        String contentLower = content.toLowerCase();
        Log.debug("HallucinationGuardrail: Response content length: " + content.length() + " characters");

        // 1. Check for uncertainty phrases (model admitting it doesn't know)
        String uncertaintyPhrase = detectUncertaintyPhrase(contentLower);
        if (uncertaintyPhrase != null) {
            Log.warn("HallucinationGuardrail: Detected uncertainty phrase: '" + uncertaintyPhrase + "'");
            return reprompt(
                    "The response contains uncertainty phrases. ",
                    "Please provide a confident answer based strictly on the CloudX sales enablement materials. " +
                            "If the information is not available in the provided documents, clearly state that the information is not in the available materials rather than expressing uncertainty.");
        }

        // 2. Check for hallucination indicators (hedging language suggesting
        // uncertainty)
        String hallucinationIndicator = detectHallucinationIndicator(contentLower);
        if (hallucinationIndicator != null) {
            Log.warn("HallucinationGuardrail: Detected hallucination indicator: '" + hallucinationIndicator + "'");
            return reprompt(
                    "The response contains hedging language that suggests uncertainty. ",
                    "Please provide a confident, fact-based answer using only information from the CloudX sales enablement materials. "
                            +
                            "If the information is not in the documents, clearly state that the information is not available rather than speculating or using uncertain language.");
        }

        // 3. Check for contradictory statements
        String contradictionIndicator = detectContradictionIndicator(contentLower);
        if (contradictionIndicator != null) {
            Log.warn("HallucinationGuardrail: Detected contradiction indicator: '" + contradictionIndicator + "'");
            return reprompt(
                    "The response contains contradictory or conflicting statements. ",
                    "Please provide a clear, consistent answer based on the CloudX sales enablement materials. "
                            +
                            "Ensure all information is coherent and does not present conflicting details.");
        }

        // 4. Check for too short/lazy answers
        if (content.trim().length() &lt; 20) {
            Log.warn("HallucinationGuardrail: Response too short - " + content.trim().length() + " characters");
            return reprompt(
                    "The response is too brief and lacks sufficient detail. ",
                    "Please provide a comprehensive response with specific details, examples, and concrete information from the CloudX sales enablement materials.");
        }

        // 5. Check for overly generic responses
        if (isOverlyGeneric(contentLower)) {
            Log.warn("HallucinationGuardrail: Response is overly generic - lacks CloudX-specific details");
            return reprompt(
                    "The response is too generic and lacks specific CloudX details. ",
                    "Please provide concrete information about CloudX features, pricing, capabilities, competitive advantages, "
                            +
                            "or specific use cases from the sales enablement materials. Include specific product names, pricing tiers, percentages, or technical details where relevant.");
        }

        // 6. Check for potential factual errors about CloudX
        String factualError = detectFactualError(contentLower);
        if (factualError != null) {
            Log.warn("HallucinationGuardrail: Detected potential factual error: " + factualError);
            return reprompt(
                    "The response may contain a factual error: " + factualError + ". ",
                    "Please carefully verify all information against the CloudX sales enablement materials and provide accurate, verified details. "
                            +
                            "Only include information that is explicitly stated in the provided documents.");
        }

        // 7. Check for excessive hedging (multiple uncertainty markers)
        int hedgingCount = countHedgingPhrases(contentLower);
        if (hedgingCount &gt;= 3) {
            Log.warn("HallucinationGuardrail: Excessive hedging detected - " + hedgingCount + " hedging phrases found");
            return reprompt(
                    "The response contains excessive hedging language that suggests uncertainty. ",
                    "Please provide a confident, fact-based answer using information directly from the CloudX sales enablement materials. "
                            +
                            "Avoid hedging phrases and present information with confidence when it is supported by the documents.");
        }

        // All checks passed
        Log.info("HallucinationGuardrail: Response validated successfully - no hallucination indicators detected");
        return success();
    }

    private String detectUncertaintyPhrase(String content) {
        for (String phrase : UNCERTAINTY_PHRASES) {
            if (content.contains(phrase)) {
                return phrase;
            }
        }
        return null;
    }

    private String detectHallucinationIndicator(String content) {
        for (String indicator : HALLUCINATION_INDICATORS) {
            if (content.contains(indicator)) {
                return indicator;
            }
        }
        return null;
    }

    private String detectContradictionIndicator(String content) {
        for (String indicator : CONTRADICTION_INDICATORS) {
            if (content.contains(indicator)) {
                return indicator;
            }
        }
        return null;
    }

    private boolean isOverlyGeneric(String content) {
        // Check if response lacks specific CloudX details
        String[] specificKeywords = {
                "cloudx", "starter tier", "professional tier", "enterprise tier",
                "$499", "$1,999", "$5,999", "99.99%", "multi-cloud",
                "competecloud", "skyplatform", "techgiant",
                "kubernetes", "aws", "azure", "google cloud"
        };

        int specificCount = 0;
        for (String keyword : specificKeywords) {
            if (content.contains(keyword)) {
                specificCount++;
            }
        }

        // If response is longer than 100 chars but has no specific CloudX details, it's
        // too generic
        return content.length() &gt; 100 &amp;&amp; specificCount == 0;
    }

    private String detectFactualError(String content) {
        // Check for common factual errors about CloudX
        for (String[] fact : CLOUDX_FACTS) {
            String incorrectValue = fact[0];
            String correctValue = fact[1];
            String context = fact[2];

            if (content.contains(incorrectValue)) {
                return "Found '" + incorrectValue + "' but the correct value is '" + correctValue + "' for " + context;
            }
        }
        return null;
    }

    private int countHedgingPhrases(String content) {
        // Counts distinct hedging phrases that appear at least once, not total
        // occurrences; three or more distinct markers trigger a reprompt.
        int count = 0;
        String[] hedgingPhrases = {
                "might", "maybe", "perhaps", "possibly", "could be",
                "may be", "seems", "appears", "likely", "probably"
        };

        for (String phrase : hedgingPhrases) {
            if (content.contains(phrase)) {
                count++;
            }
        }
        return count;
    }
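
    // Sketch, not wired into the checks above: contains() counts a hedging
    // phrase once no matter how often it appears. Counting every occurrence
    // gives a stricter signal. Hypothetical helper, shown for illustration.
    private static int countOccurrences(String content, String phrase) {
        int count = 0;
        int index = content.indexOf(phrase);
        while (index != -1) {
            count++;
            index = content.indexOf(phrase, index + 1);
        }
        return count;
    }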
}</code></code></pre><h3><strong>OutOfScopeGuardrail</strong></h3><p><code>src/main/java/com/ibm/guardrails/OutOfScopeGuardrail.java</code> keeps the final answer inside the CloudX sales enablement domain. This matters because a retrieved chunk and a helpful model can still drift into competitor internals, unrelated IBM products, personal advice, or pricing negotiation.</p><pre><code><code>package com.ibm.guardrails;

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.guardrail.OutputGuardrail;
import dev.langchain4j.guardrail.OutputGuardrailResult;
import jakarta.enterprise.context.ApplicationScoped;
import io.quarkus.logging.Log;

/**
 * OutOfScopeGuardrail ensures the AI assistant stays within the boundaries of
 * CloudX sales enablement content and doesn't provide information outside its
 * domain.
 *
 * Based on the sales enablement resources, the scope includes:
 * - CloudX Enterprise Platform features, pricing, and capabilities
 * - Competitive analysis and positioning (based on public information)
 * - Sales methodology and processes
 * - Customer success stories and ROI information
 * - Technical architecture and supported technologies
 * - Migration strategies and implementation approaches
 *
 * Out of scope includes:
 * - Competitor internal operations or confidential information
 * - Non-CloudX IBM products or third-party services (unless in context of
 * integration/comparison)
 * - Legal, financial, tax, or investment advice
 * - Personal or non-business advice
 * - Confidential customer information or unreleased features
 * - Custom pricing negotiations (should be referred to sales team)
 * - General technology tutorials unrelated to CloudX
 */
@ApplicationScoped
public class OutOfScopeGuardrail implements OutputGuardrail {

    // Keywords indicating competitor-specific internal information (out of scope)
    private static final String[] COMPETITOR_INTERNAL_KEYWORDS = {
            "competecloud's internal", "competecloud roadmap", "competecloud strategy",
            "skyplatform's internal", "skyplatform roadmap", "skyplatform strategy",
            "techgiant's internal", "techgiant roadmap", "techgiant strategy",
            "competitor's source code", "competitor's architecture"
    };

    // Keywords indicating non-CloudX products (out of scope)
    private static final String[] NON_CLOUDX_PRODUCTS = {
            "watson", "db2", "websphere traditional", "maximo", "cognos",
            "spss", "qradar", "guardium", "appscan", "rational",
            "aws lambda", "azure functions", "google cloud run",
            "heroku", "digitalocean", "linode"
    };

    // Keywords indicating requests for confidential/inappropriate information
    private static final String[] CONFIDENTIAL_KEYWORDS = {
            "confidential customer", "internal only", "proprietary information",
            "trade secret", "non-disclosure", "customer's private",
            "competitor's financials", "unreleased feature", "beta feature"
    };

    // Keywords indicating legal/financial advice requests (out of scope)
    private static final String[] ADVICE_KEYWORDS = {
            "legal advice", "tax advice", "investment advice", "financial planning",
            "should i invest", "legal opinion", "tax implications",
            "securities advice", "compliance advice", "audit advice"
    };

    // Keywords indicating personal/non-business requests (out of scope)
    private static final String[] PERSONAL_KEYWORDS = {
            "personal recommendation", "what should i do with my career",
            "help me with my resume", "dating advice", "health advice",
            "medical advice", "therapy", "counseling"
    };

    // Keywords indicating requests for custom pricing/negotiations (should be
    // referred)
    private static final String[] NEGOTIATION_KEYWORDS = {
            "negotiate my contract", "get me a better deal", "discount my price",
            "override the pricing", "special pricing for me", "custom contract terms"
    };

    @Override
    public OutputGuardrailResult validate(AiMessage responseFromLLM) {
        Log.info("OutOfScopeGuardrail: Validating LLM response");

        String content = responseFromLLM.text().toLowerCase();
        Log.debug("OutOfScopeGuardrail: Response content length: " + content.length() + " characters");

        // Check for various out-of-scope categories
        String detectedIssue = detectOutOfScopeContent(content);

        if (detectedIssue != null) {
            Log.warn("OutOfScopeGuardrail: Detected out-of-scope content - Issue type: " + detectedIssue);
            return buildOutOfScopeResponse(detectedIssue);
        }

        // Response is in scope
        Log.info("OutOfScopeGuardrail: Response validated successfully - content is in scope");
        return success();
    }

    /**
     * Detects if the response contains out-of-scope content.
     * Returns a description of the issue if found, null otherwise.
     */
    private String detectOutOfScopeContent(String content) {
        // Priority order: Check most critical violations first

        // 1. Check for confidential information (highest priority)
        for (String keyword : CONFIDENTIAL_KEYWORDS) {
            if (content.contains(keyword)) {
                return "confidential";
            }
        }

        // 2. Check for legal/financial advice
        for (String keyword : ADVICE_KEYWORDS) {
            if (content.contains(keyword)) {
                return "advice";
            }
        }

        // 3. Check for personal requests
        for (String keyword : PERSONAL_KEYWORDS) {
            if (content.contains(keyword)) {
                return "personal";
            }
        }

        // 4. Check for competitor internal information
        for (String keyword : COMPETITOR_INTERNAL_KEYWORDS) {
            if (content.contains(keyword)) {
                return "competitor_internal";
            }
        }

        // 5. Check for non-CloudX products (only if not in CloudX context)
        for (String product : NON_CLOUDX_PRODUCTS) {
            if (content.contains(product) &amp;&amp; !isCloudXContext(content)) {
                return "non_cloudx_product";
            }
        }

        // 6. Check for pricing negotiation requests
        for (String keyword : NEGOTIATION_KEYWORDS) {
            if (content.contains(keyword)) {
                return "negotiation";
            }
        }

        // 7. Check if response is about general technology not related to CloudX
        if (isGeneralTechnologyQuestion(content)) {
            return "general_technology";
        }

        return null;
    }

    /**
     * Checks if the content is discussing a product in the context of CloudX
     * (e.g., integration, comparison, migration from)
     */
    private boolean isCloudXContext(String content) {
        String[] cloudxContextKeywords = {
                "cloudx", "integrate with", "migrate from", "compared to",
                "alternative to", "replace", "modernize from"
        };

        for (String keyword : cloudxContextKeywords) {
            if (content.contains(keyword)) {
                return true;
            }
        }
        return false;
    }

    /**
     * Checks if the response is about general technology topics not related to
     * CloudX
     */
    private boolean isGeneralTechnologyQuestion(String content) {
        // Check if discussing technology without CloudX context
        String[] techKeywords = {
                "how to program", "learn programming", "tutorial for",
                "what is blockchain", "what is ai", "what is machine learning",
                "how does the internet work", "what is a database"
        };

        boolean hasTechKeyword = false;
        for (String keyword : techKeywords) {
            if (content.contains(keyword)) {
                hasTechKeyword = true;
                break;
            }
        }

        // If has tech keyword but no CloudX context, it's out of scope
        return hasTechKeyword &amp;&amp; !isCloudXContext(content);
    }
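
    // Sketch, not wired in: the ordered keyword checks above can also be
    // driven by a table, which keeps the priority order in one place.
    // Hypothetical helper; each row is {category, keyword, keyword, ...}.
    private static String firstMatchingCategory(String content, String[][] categoryToKeywords) {
        for (String[] entry : categoryToKeywords) {
            String category = entry[0];
            for (int i = 1; i != entry.length; i++) {
                if (content.contains(entry[i])) {
                    return category;
                }
            }
        }
        return null;
    }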

    /**
     * Builds an appropriate out-of-scope response based on the detected issue.
     * Uses reprompt() to guide the LLM to provide a better, in-scope response.
     */
    private OutputGuardrailResult buildOutOfScopeResponse(String issueType) {
        Log.info("OutOfScopeGuardrail: Building reprompt response for issue type: " + issueType);

        String userMessage;
        String repromptMessage;

        switch (issueType) {
            case "confidential":
                userMessage = "The response contains references to confidential or proprietary information. ";
                repromptMessage = "Please provide a response that only uses publicly available information from the CloudX sales enablement materials. "
                        +
                        "Focus on CloudX features, pricing, competitive positioning, and sales methodology without revealing confidential details.";
                break;

            case "advice":
                userMessage = "The response appears to provide legal, financial, or investment advice. ";
                repromptMessage = "Please reframe the response to focus on CloudX's business value, ROI calculations, and pricing structure "
                        +
                        "without providing specific legal or financial advice. Suggest consulting appropriate advisors for such matters.";
                break;

            case "personal":
                userMessage = "The response addresses personal or non-business matters.";
                repromptMessage = "Please provide a response focused on CloudX sales enablement topics such as product features, "
                        +
                        "pricing, competitive analysis, sales methodology, or customer success stories.";
                break;

            case "competitor_internal":
                userMessage = "The response discusses competitors' internal strategies or confidential information.";
                repromptMessage = "Please limit the response to publicly available competitive comparisons based on the CloudX sales enablement materials. "
                        +
                        "Focus on how CloudX compares to competitors using public information and customer feedback.";
                break;

            case "non_cloudx_product":
                userMessage = "The response discusses products or services outside of CloudX Enterprise Platform. ";
                repromptMessage = "Please focus the response on CloudX-specific features, capabilities, and use cases. "
                        +
                        "If mentioning other products, only do so in the context of CloudX integration, migration, or comparison.";
                break;

            case "negotiation":
                userMessage = "The response attempts to negotiate specific pricing or contract terms. ";
                repromptMessage = "Please provide information about standard CloudX pricing tiers, discount guidelines, and the general pricing framework. "
                        +
                        "Indicate that specific negotiations should be handled by the sales manager and deal desk team.";
                break;

            case "general_technology":
                userMessage = "The response discusses general technology topics not related to CloudX. ";
                repromptMessage = "Please refocus the response on CloudX Enterprise Platform and its applications. " +
                        "Connect the technology discussion to CloudX use cases, deployment scenarios, or architecture if relevant.";
                break;

            default:
                userMessage = "The response appears to be outside the scope of CloudX sales enablement. ";
                repromptMessage = "Please provide a response focused on CloudX Enterprise Platform features, pricing, competitive analysis, "
                        +
                        "sales methodology, or customer success stories based on the available sales enablement materials.";
        }

        // Use reprompt() with both user message and system reprompt instruction
        Log.debug("OutOfScopeGuardrail: Reprompting with user message: " + userMessage);
        return reprompt(userMessage, repromptMessage);
    }
}</code></code></pre><h3><strong>InputValidationGuardrail</strong></h3><p><code>src/main/java/com/ibm/guardrails/InputValidationGuardrail.java</code> runs before the model call. It blocks prompt injection patterns, unrelated personal-service requests, malicious strings, and CloudX-adjacent topics that would turn this assistant into a general-purpose chatbot. That is not the job here.</p><pre><code><code>package com.ibm.guardrails;

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.guardrail.InputGuardrail;
import dev.langchain4j.guardrail.InputGuardrailResult;
import jakarta.enterprise.context.ApplicationScoped;
import io.quarkus.logging.Log;

/**
 * InputValidationGuardrail validates user input before it reaches the LLM.
 * It detects and blocks:
 * 1. Prompt injection attempts
 * 2. Off-topic questions outside CloudX sales enablement scope
 * 3. Malicious or inappropriate content
 * 
 * Based on CloudX sales enablement materials, valid topics include:
 * - CloudX Enterprise Platform features and capabilities
 * - Pricing and packaging information
 * - Competitive analysis and positioning
 * - Sales methodology and processes
 * - Customer success stories and ROI
 * - Technical architecture (multi-cloud, Kubernetes, supported languages)
 * - Migration and implementation strategies
 */
@ApplicationScoped
public class InputValidationGuardrail implements InputGuardrail {

    // Prompt injection patterns
    private static final String[] PROMPT_INJECTION_PATTERNS = {
        "ignore previous instructions",
        "ignore all previous",
        "disregard previous",
        "forget previous instructions",
        "new instructions:",
        "system:",
        "you are now",
        "act as",
        "pretend you are",
        "roleplay as",
        "simulate being",
        "override your",
        "bypass your",
        "ignore your guidelines",
        "forget your role",
        "new role:",
        "system prompt:",
        "assistant:",
        "###instruction:",
        "###system:",
        "[system]",
        "&lt;system&gt;",
        "sudo mode",
        "developer mode",
        "jailbreak",
        "dan mode"
    };

    // Off-topic technology combinations (not supported by CloudX)
    private static final String[][] OFF_TOPIC_COMBINATIONS = {
        // Format: {technology, unsupported_context, boundary_message}
        {"python", "google cloud", "CloudX supports Python on AWS, Azure, and Google Cloud. However, I specialize in CloudX sales enablement. For deployment questions, please refer to CloudX technical documentation."},
        {"node.js", "heroku", "CloudX supports Node.js but not Heroku deployment. CloudX works with AWS, Azure, and Google Cloud."},
        {".net", "digitalocean", "CloudX supports .NET but not DigitalOcean. CloudX is designed for AWS, Azure, and Google Cloud."},
        {"ruby", "linode", "CloudX supports Ruby but not Linode. CloudX operates on AWS, Azure, and Google Cloud."}
    };

    // Topics completely outside CloudX scope
    private static final String[] COMPLETELY_OFF_TOPIC = {
        // Food &amp; Dining
        "recipe", "cooking", "food", "restaurant", "meal", "dinner", "lunch",
        // Entertainment
        "movie", "film", "entertainment", "music", "song", "concert", "show",
        // Sports
        "sports", "football", "basketball", "soccer", "baseball", "tennis",
        // Weather &amp; Nature
        "weather", "climate", "temperature", "forecast",
        // Health &amp; Medical
        "health", "medical", "doctor", "medicine", "hospital", "disease",
        // Personal Life
        "dating", "relationship", "romance", "wedding", "marriage",
        // Politics &amp; Government
        "politics", "election", "government", "president", "senator",
        // Finance (non-business)
        "cryptocurrency", "bitcoin", "blockchain", "stock market", "forex",
        // Gaming
        "gaming", "video game", "playstation", "xbox", "nintendo",
        // Travel &amp; Booking
        "flight", "hotel", "vacation", "travel", "booking", "reservation",
        "airline", "airport", "cruise", "trip", "tourism",
        // Shopping (non-software)
        "shopping", "buy clothes", "fashion", "shoes", "jewelry",
        // Education (non-tech)
        "homework", "essay", "school assignment", "college application",
        // Real Estate
        "house", "apartment", "real estate", "mortgage", "rent",
        // Automotive
        "car", "vehicle", "automobile", "driving", "traffic"
    };

    // Action verbs for non-CloudX services
    private static final String[] OFF_TOPIC_ACTIONS = {
        "book me", "book a", "reserve a", "schedule a",
        "order me", "buy me", "purchase a",
        "find me a", "get me a",
        "recommend a restaurant", "recommend a hotel",
        "plan my trip", "plan my vacation"
    };

    // Non-CloudX products (unless in comparison/migration context)
    private static final String[] NON_CLOUDX_PRODUCTS = {
        "watson", "db2", "websphere traditional", "maximo",
        "cognos", "spss", "qradar", "guardium",
        "heroku", "digitalocean", "linode", "netlify",
        "vercel", "railway", "render"
    };

    // Malicious content indicators
    private static final String[] MALICIOUS_PATTERNS = {
        "sql injection", "drop table", "delete from",
        "script&gt;", "&lt;iframe", "javascript:",
        "eval(", "exec(", "system(",
        "../../../", "etc/passwd", "cmd.exe"
    };

    @Override
    public InputGuardrailResult validate(UserMessage userMessage) {
        Log.info("InputValidationGuardrail: Validating user input");
        
        String content = userMessage.singleText();
        String contentLower = content.toLowerCase();
        Log.debug("InputValidationGuardrail: Input length: " + content.length() + " characters");

        // 1. Check for prompt injection attempts (highest priority)
        String injectionPattern = detectPromptInjection(contentLower);
        if (injectionPattern != null) {
            Log.warn("InputValidationGuardrail: BLOCKED - Prompt injection detected: '" + injectionPattern + "'");
            return failure(buildPromptInjectionResponse());
        }

        // 2. Check for malicious content
        String maliciousPattern = detectMaliciousContent(contentLower);
        if (maliciousPattern != null) {
            Log.warn("InputValidationGuardrail: BLOCKED - Malicious content detected: '" + maliciousPattern + "'");
            return failure(buildMaliciousContentResponse());
        }

        // 3. Check for off-topic action requests (e.g., "book me a flight")
        String offTopicAction = detectOffTopicAction(contentLower);
        if (offTopicAction != null) {
            Log.warn("InputValidationGuardrail: BLOCKED - Off-topic action request: '" + offTopicAction + "'");
            return failure(buildOffTopicActionResponse(offTopicAction));
        }

        // 4. Check for completely off-topic questions
        String offTopicKeyword = detectCompletelyOffTopic(contentLower);
        if (offTopicKeyword != null) {
            Log.warn("InputValidationGuardrail: BLOCKED - Completely off-topic question: '" + offTopicKeyword + "'");
            return failure(buildCompletelyOffTopicResponse(offTopicKeyword));
        }

        // 5. Check for off-topic technology combinations
        String offTopicCombo = detectOffTopicCombination(contentLower);
        if (offTopicCombo != null) {
            Log.warn("InputValidationGuardrail: BLOCKED - Off-topic technology combination detected");
            return failure(offTopicCombo);
        }

        // 6. Check for non-CloudX products (unless in valid context)
        String nonCloudXProduct = detectNonCloudXProduct(contentLower);
        if (nonCloudXProduct != null &amp;&amp; !isValidCloudXContext(contentLower)) {
            Log.warn("InputValidationGuardrail: BLOCKED - Non-CloudX product without valid context: '" + nonCloudXProduct + "'");
            return failure(buildNonCloudXProductResponse(nonCloudXProduct));
        }

        // Input is valid
        Log.info("InputValidationGuardrail: Input validated successfully");
        return success();
    }

    /**
     * Detects prompt injection attempts
     */
    private String detectPromptInjection(String content) {
        for (String pattern : PROMPT_INJECTION_PATTERNS) {
            if (content.contains(pattern)) {
                return pattern;
            }
        }
        return null;
    }

    /**
     * Detects malicious content patterns
     */
    private String detectMaliciousContent(String content) {
        for (String pattern : MALICIOUS_PATTERNS) {
            if (content.contains(pattern)) {
                return pattern;
            }
        }
        return null;
    }

    /**
     * Detects off-topic action requests (e.g., "book me a flight")
     */
    private String detectOffTopicAction(String content) {
        for (String action : OFF_TOPIC_ACTIONS) {
            if (content.contains(action)) {
                return action;
            }
        }
        return null;
    }

    /**
     * Detects completely off-topic questions
     */
    private String detectCompletelyOffTopic(String content) {
        for (String keyword : COMPLETELY_OFF_TOPIC) {
            if (content.contains(keyword)) {
                return keyword;
            }
        }
        return null;
    }
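
    // Sketch, not called by validate(): plain contains() also fires inside
    // longer words, e.g. "car" inside "scarcity" or "show" inside "showcase".
    // A word-boundary regex avoids those false positives. Hypothetical helper.
    private static String firstWordBoundedMatch(String content, String[] keywords) {
        for (String keyword : keywords) {
            java.util.regex.Pattern p = java.util.regex.Pattern.compile(
                    "\\b" + java.util.regex.Pattern.quote(keyword) + "\\b");
            if (p.matcher(content).find()) {
                return keyword;
            }
        }
        return null;
    }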

    /**
     * Detects off-topic technology combinations
     */
    private String detectOffTopicCombination(String content) {
        for (String[] combo : OFF_TOPIC_COMBINATIONS) {
            String tech = combo[0];
            String unsupportedContext = combo[1];
            String message = combo[2];
            
            if (content.contains(tech) &amp;&amp; content.contains(unsupportedContext)) {
                return message;
            }
        }
        return null;
    }

    /**
     * Detects non-CloudX products
     */
    private String detectNonCloudXProduct(String content) {
        for (String product : NON_CLOUDX_PRODUCTS) {
            if (content.contains(product)) {
                return product;
            }
        }
        return null;
    }

    /**
     * Checks if non-CloudX product is mentioned in valid context
     * (comparison, migration, integration)
     */
    private boolean isValidCloudXContext(String content) {
        String[] validContextKeywords = {
            "cloudx", "compare", "comparison", "versus", "vs",
            "migrate", "migration", "move from", "switch from",
            "integrate", "integration", "alternative to",
            "replace", "instead of"
        };
        
        for (String keyword : validContextKeywords) {
            if (content.contains(keyword)) {
                return true;
            }
        }
        return false;
    }

    /**
     * Builds response for prompt injection attempts
     */
    private String buildPromptInjectionResponse() {
        return "I cannot process this request as it appears to contain instructions that would " +
               "compromise my intended function. I'm designed to assist with CloudX Enterprise Platform " +
               "sales enablement questions, including product features, pricing, competitive analysis, " +
               "and sales methodology. Please ask a question related to these topics.";
    }

    /**
     * Builds response for malicious content
     */
    private String buildMaliciousContentResponse() {
        return "I cannot process this request as it contains potentially malicious content. " +
               "I'm here to help with CloudX Enterprise Platform sales enablement questions. " +
               "Please ask about CloudX features, pricing, competitive positioning, or sales strategies.";
    }

    /**
     * Builds response for off-topic action requests
     */
    private String buildOffTopicActionResponse(String action) {
        return "I cannot assist with personal service requests like '" + action + "'. " +
               "I'm a CloudX Enterprise Platform sales enablement assistant. I can help you with:\n\n" +
               "&#8226; CloudX features, capabilities, and technical architecture\n" +
               "&#8226; Pricing, packaging, and ROI information\n" +
               "&#8226; Competitive analysis and positioning\n" +
               "&#8226; Sales methodology and processes\n" +
               "&#8226; Customer success stories and case studies\n" +
               "&#8226; Migration and implementation strategies\n\n" +
               "Please ask a question related to CloudX sales enablement.";
    }

    /**
     * Builds response for completely off-topic questions
     */
    private String buildCompletelyOffTopicResponse(String keyword) {
        return "I specialize in CloudX Enterprise Platform sales enablement and cannot assist with " +
               "questions about " + keyword + ". I can help you with:\n\n" +
               "&#8226; CloudX features, capabilities, and technical architecture\n" +
               "&#8226; Pricing, packaging, and ROI information\n" +
               "&#8226; Competitive analysis and positioning\n" +
               "&#8226; Sales methodology and processes\n" +
               "&#8226; Customer success stories and case studies\n" +
               "&#8226; Migration and implementation strategies\n\n" +
               "Please ask a question related to CloudX sales enablement.";
    }

    /**
     * Builds response for non-CloudX products without valid context
     */
    private String buildNonCloudXProductResponse(String product) {
        return "I specialize in CloudX Enterprise Platform sales enablement. " +
               "While I can discuss " + product + " in the context of CloudX comparisons, migrations, " +
               "or integrations, I cannot provide standalone information about it. " +
               "If you're interested in how CloudX compares to or integrates with " + product + ", " +
               "please rephrase your question to include CloudX in the context.";
    }
}</code></code></pre><h3><strong>BotResponse</strong></h3><p><code>src/main/java/com/ibm/api/BotResponse.java</code> keeps the HTTP response shape boring. Successful answers and guardrail failures can use the same JSON wrapper, so the client only has one field to read.</p><pre><code><code>package com.ibm.api;

public record BotResponse(String response) {
}</code></code></pre><h3><strong>InputGuardrailExceptionMapper</strong></h3><p><code>src/main/java/com/ibm/api/InputGuardrailExceptionMapper.java</code> maps blocked input to <code>400 Bad Request</code>. Without this, a guardrail failure can look like a server problem. That is the wrong kind of drama.</p><pre><code><code>package com.ibm.api;

import dev.langchain4j.guardrail.InputGuardrailException;
import jakarta.ws.rs.core.Response;
import jakarta.ws.rs.ext.ExceptionMapper;
import jakarta.ws.rs.ext.Provider;
import io.quarkus.logging.Log;

/**
 * Exception mapper for InputGuardrailException.
 * Maps validation failures from InputValidationGuardrail to structured JSON responses.
 */
@Provider
public class InputGuardrailExceptionMapper implements ExceptionMapper&lt;InputGuardrailException&gt; {

    @Override
    public Response toResponse(InputGuardrailException exception) {
        Log.warn("InputGuardrailException caught: " + exception.getMessage());
        
        // Extract the validation error message from the exception
        String errorMessage = exception.getMessage();
        if (errorMessage == null || errorMessage.trim().isEmpty()) {
            errorMessage = "Input validation failed. Please ensure your question is related to CloudX Enterprise Platform sales enablement.";
        }
        
        // Return the error message in the same BotResponse format for consistency
        BotResponse errorResponse = new BotResponse(errorMessage);
        
        // Return 400 Bad Request with the structured response
        return Response.status(Response.Status.BAD_REQUEST)
                .entity(errorResponse)
                .type("application/json")
                .build();
    }
}</code></code></pre><h3><strong>SalesEnablementResource</strong></h3><p><code>src/main/java/com/ibm/api/SalesEnablementResource.java</code> exposes the demo as <code>GET /bot?q=...</code>. The fallback question keeps the endpoint easy to test from a browser, which is a small thing until you are doing the fifth local run.</p><pre><code><code>package com.ibm.api;

import com.ibm.ai.SalesEnablementBot;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.QueryParam;
import jakarta.ws.rs.core.MediaType;

@Path("/bot")
public class SalesEnablementResource {

    @Inject
    SalesEnablementBot bot;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public BotResponse ask(@QueryParam("q") String question) {
        if (question == null || question.trim().isEmpty()) {
            question = "What is the best solution for a client who is migrating to a microservices architecture?";
        }
        String botResponse = bot.chat(question);
        return new BotResponse(botResponse);
    }
}</code></code></pre><h2><strong>Configuration</strong></h2><p>Now wire the runtime pieces in <code>src/main/resources/application.properties</code>. These settings connect the Java classes above to Ollama, pgvector, and Docling:</p><pre><code><code># ----------------------------------------
# 1. Ollama configuration (local LLM)
# ----------------------------------------

# Chat model (answers)
quarkus.langchain4j.ollama.chat-model.model-name=gpt-oss:20b
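
# Optional, verify the property against your quarkus-langchain4j version:
# a lower sampling temperature keeps answers closer to the retrieved context.
# quarkus.langchain4j.ollama.chat-model.temperature=0.2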

# Embedding model (document + query vectors)
# Default Ollama library tag granite-embedding:latest maps to IBM Granite ~30M English (see `ollama show granite-embedding` &#8594; embedding length).
# Larger community tags (for example granite-embedding-278m-multilingual) often use 768 dimensions &#8212; always match quarkus.langchain4j.pgvector.dimension to `embedding length` from ollama show.
quarkus.langchain4j.ollama.embedding-model.model-name=granite-embedding:latest

# Set a more generous timeout
quarkus.langchain4j.ollama.timeout=60s

# Logging during development
quarkus.langchain4j.log-requests=false
quarkus.langchain4j.log-responses=false
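
# Quarkus profile prefixes let you flip these only in dev, for example:
# %dev.quarkus.langchain4j.log-requests=true
# %dev.quarkus.langchain4j.log-responses=true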

# ----------------------------------------
# 2. Datasource and pgvector
# ----------------------------------------
quarkus.datasource.db-kind=postgresql

# Use default datasource for pgvector
# Store table name
quarkus.langchain4j.pgvector.table=embeddings

quarkus.langchain4j.pgvector.drop-table-first=true
quarkus.langchain4j.pgvector.create-table=true

# Must equal the embedding model output width (same as `embedding length` from `ollama show &lt;model&gt;`).
# granite-embedding:latest &#8594; 384. If you switch to a 768-dim model, set 768 and drop/recreate the table or re-ingest.
quarkus.langchain4j.pgvector.dimension=384

# Optional, but recommended once data grows
quarkus.langchain4j.pgvector.use-index=true
quarkus.langchain4j.pgvector.index-list-size=10

# ----------------------------------------
# 3. Docling
# ----------------------------------------
# The Docling Dev Service starts a container in dev and test mode.
# The extension configures the REST client automatically.
# Enable the Docling Serve UI explicitly:
quarkus.docling.devservices.enable-ui=true
quarkus.docling.timeout=3M
# Docling Serve may require auth (HTTP 401 without it). Sent as X-Api-Key; match DOCLING_SERVE_API_KEY on the server.
# Dev Services can populate this when it starts the container. If Docling is already listening on the default port,
# Quarkus may skip Dev Services&#8212;then set this explicitly to the same key your Docling instance expects.
# quarkus.docling.api-key=your-secret-here

# REST client timeout configuration for Docling (values are in milliseconds)
# Increase timeouts for large file processing (sync helper and Quarkus async client)
quarkus.rest-client."io.quarkiverse.docling.runtime.client.DoclingService".connect-timeout=60000
quarkus.rest-client."io.quarkiverse.docling.runtime.client.DoclingService".read-timeout=300000
quarkus.rest-client."io.quarkiverse.docling.runtime.client.QuarkusDoclingServeClient".connect-timeout=60000
quarkus.rest-client."io.quarkiverse.docling.runtime.client.QuarkusDoclingServeClient".read-timeout=300000</code></code></pre><p>Notes:</p><ul><li><p><code>quarkus.langchain4j.ollama.timeout</code> covers slow local models. Increase it if you see client timeouts.</p></li><li><p><code>quarkus.langchain4j.pgvector.drop-table-first=true</code> is fine for demos. Turn it off when the table contains data you care about.</p></li><li><p>REST client keys use the Docling REST client interfaces Quarkus generates: <code>DoclingService</code> for the blocking helper and <code>QuarkusDoclingServeClient</code> for the Mutiny task API. If these fully qualified class names change in a future extension, copy the new names from the Dev UI or extension docs.</p></li><li><p><code>quarkus.docling.api-key</code> (or <code>QUARKUS_DOCLING_API_KEY</code>) supplies the <code>X-Api-Key</code> header for Docling Serve. If you see <strong>401 Unauthorized</strong> from <code>QuarkusDoclingServeClient</code>, the server is enforcing API key auth and your app must send the matching secret (or align Dev Services with a running container on the default port).</p></li><li><p>Large PDFs may need higher Docling read timeouts. The <a href="https://github.com/quarkiverse/quarkus-docling/issues">quarkus-docling issue tracker</a> discusses gateway timeouts for very large uploads.</p></li></ul><h2><strong>Static UI</strong></h2><p>The demo includes a small HTML client at <code>src/main/resources/META-INF/resources/index.html</code>, which is the standard Quarkus static resource location. It posts questions to <code>/bot</code> and renders Markdown in the browser. Copy it from the repository if you create the project from the CLI. I do not repeat it here because the article is already long enough.</p><h2><strong>Production Hardening</strong></h2><p><strong>Timeouts and back-pressure:</strong> Ollama and Docling run outside your JVM. Set REST and Ollama timeouts explicitly. 
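</p><p>The same principle applies to any code you wrap around these clients. As a self-contained illustration of an explicit deadline with a fallback (plain <code>java.util.concurrent</code>, not the Quarkus client API; class and method names are made up for the sketch):</p>

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: give a slow external call an explicit deadline and a
// fallback value, instead of letting a request hang indefinitely.
public class TimeoutSketch {

    // Stand-in for a long-running Docling or Ollama call.
    static String slowConversion() {
        return "converted";
    }

    static String convertWithTimeout(long timeoutMillis) {
        return CompletableFuture.supplyAsync(TimeoutSketch::slowConversion)
                .orTimeout(timeoutMillis, TimeUnit.MILLISECONDS) // explicit deadline
                .exceptionally(t -> "conversion-timed-out")      // fallback on timeout or failure
                .join();
    }

    public static void main(String[] args) {
        System.out.println(convertWithTimeout(5_000));
    }
}
```

<p>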
Configure both <code>DoclingService</code> and <code>QuarkusDoclingServeClient</code> for long-running conversions. For very large PDFs, increase the read timeout and follow upstream guidance on gateway limits.</p><p><strong>Docling auth:</strong> When Docling Serve enables API keys, configure <code>quarkus.docling.api-key</code> so async convert/poll/result calls include <code>X-Api-Key</code>. Without it you get HTTP 401 from the client.</p><p><strong>Startup vs RAG readiness:</strong> Background ingestion means the HTTP port opens before pgvector is full. Put <code>/q/health/ready</code> (SmallRye Health) in front of production traffic. The bundled <code>IngestionReadinessCheck</code> stays DOWN until indexing completes. If you call <code>/bot</code> without waiting, answers may have little retrieved context.</p><p><strong>Event loop safety:</strong> Docling&#8217;s Mutiny chain and the embedding loop run on the <strong>worker pool</strong>, not the Vert.x event loop. Keep blocking LangChain4j calls off the event loop when you extend the pipeline.</p><p><strong>Vector store integrity:</strong> Changing the embedding model or <code>dimension</code> without recreating the table produces bad retrieval. Treat embedding config like a schema migration. It is less exciting than debugging why every answer is confidently adjacent to the truth.</p><p><strong>Guardrails and abuse:</strong> The sample uses pattern-based input and output guardrails. They reduce obvious misuse but are not a full safety program. Rate-limit and authenticate any external deployment of <code>/bot</code>.</p><p><strong>Observability:</strong> Retrieval logging in <code>DocumentRetriever</code> shows which chunks influenced a reply. 
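</p><p>Purely as an illustration of that idea, here is a tiny self-contained helper that compacts each retrieved chunk into one log line; the class and method names are hypothetical, not the repository&#8217;s <code>DocumentRetriever</code>:</p>

```java
import java.util.Locale;

// Hypothetical helper: turn a retrieved chunk (score plus text) into a
// compact single-line log entry so you can see what influenced an answer.
public class RetrievalLog {

    static String summarize(double score, String chunk) {
        String oneLine = chunk.replace('\n', ' ');
        // Truncate long chunks so log lines stay readable.
        String preview = oneLine.length() <= 40 ? oneLine : oneLine.substring(0, 40) + "...";
        return String.format(Locale.ROOT, "score=%.2f text=%s", score, preview);
    }

    public static void main(String[] args) {
        System.out.println(summarize(0.87, "CloudX Enterprise tier includes dedicated support and audit logging."));
    }
}
```

<p>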
Keep that in dev, then trim or gate it in production.</p><h2><strong>Verification</strong></h2><ol><li><p>Pull models (example): <code>ollama pull gpt-oss:20b</code> and <code>ollama pull granite-embedding:latest</code></p></li><li><p>From the module root: <code>./mvnw quarkus:dev</code></p></li><li><p>Watch logs for <code>Document ingestion pipeline finished; readiness is UP.</code> Optionally poll readiness:</p></li></ol><pre><code><code>curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/q/health/ready</code></code></pre><p>Expect <strong>503</strong> while ingestion runs, then <strong>200</strong> when the index is ready. An empty corpus may flip to UP quickly.</p><ol start="4"><li><p>Open http://localhost:8080/ for the bundled UI, or call the API <strong>after</strong> readiness is 200:</p></li></ol><pre><code><code>curl -s "http://localhost:8080/bot?q=What%20CloudX%20tier%20fits%20a%20regulated%20industry%20customer?"</code></code></pre><p>Expect JSON <code>{"response":"..."}</code> with content grounded in your <code>src/main/resources/documents/</code> files. If you <code>curl</code> immediately on a cold start, the model may answer with little retrieved context until indexing completes.</p><h2><strong>Conclusion</strong></h2><p>You now have a single Quarkus module that turns messy PDFs into structured text, stores embeddings in pgvector, and answers through an Ollama-backed model with explicit guardrails. 
That is enough to start moving toward production agent tooling without changing the basic shape of the stack.</p><p>The complete, updated code is available in the <a href="https://github.com/myfear/the-main-thread/tree/main/enterprise-rag">enterprise-rag repository</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[How to Run IBM Bob on a Remote Linux Machine from Your Mac]]></title><description><![CDATA[A step-by-step guide to using IBM Bob over SSH with a Podman-based remote host, so you can work closer to sensitive environments without turning your laptop into the execution boundary.]]></description><link>https://www.the-main-thread.com/p/ibm-bob-remote-ssh-podman-macos-linux</link><guid isPermaLink="false">https://www.the-main-thread.com/p/ibm-bob-remote-ssh-podman-macos-linux</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Thu, 23 Apr 2026 06:08:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6634b405-1cc1-4ecb-90d8-3de884f4c446_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Pointing <a href="https://bob.ibm.com/">IBM Bob</a> straight at your laptop is the fastest way to get started. Open the editor, load a project, ask Bob to inspect some code, and it immediately feels useful. There is no extra infrastructure, no remote machine, and no SSH setup to think about.</p><p>That simplicity is also the trap. Once Bob can read files, edit code, and run shell commands, your local machine becomes the default execution environment for everything. 
Source code, build caches, generated files, package installs, test artifacts, and agent-driven terminal commands all pile up in the same place. For quick experiments this is fine. For larger codebases or more sensitive environments, it gets hard to control.</p><p>A remote development setup changes that boundary. Bob still runs in the editor on your Mac, but the actual workspace lives on a remote Linux machine reached over SSH. File edits happen there. Terminal commands run there. Toolchains and build outputs stay there. That makes the environment easier to reproduce and easier to throw away when something goes wrong.</p><p>The <code>jeanp413.open-remote-ssh</code> <a href="https://github.com/jeanp413/open-remote-ssh">extension</a> is useful here because it lets the editor open a folder on a remote machine over SSH. Its project describes it exactly that way, and its supported SSH hosts include common Linux targets. That gives us a clean way to move Bob&#8217;s working environment off the laptop without changing how the editor feels day to day. </p><p>For this tutorial, we do not need a real cloud VM. Podman on macOS already depends on a Linux virtual machine because Linux containers need the Linux kernel. Podman&#8217;s <code>machine</code> feature gives us that VM, so we can run an SSH-enabled Linux container inside it and use that container as a simulated remote host. </p><h2>Why Remote Machines Matter Beyond Isolation</h2><p>Keeping Bob away from your laptop is one good reason to use a remote machine, but it is not the only one. In many teams, the bigger value is consistency. A remote machine gives every developer the same base OS, the same toolchain, the same package versions, and the same path layout. That removes a whole category of &#8220;works on my machine&#8221; problems. When Bob runs commands in that environment, it sees the same setup your teammates see. 
That makes its suggestions and fixes more relevant.</p><p>Remote machines also help when the real target environment is Linux. Many Java projects are built and deployed on Linux, even when developers work on macOS or Windows. Running Bob against a remote Linux machine means builds, scripts, file permissions, shell behavior, and container tooling behave much closer to production. This is especially useful when startup scripts, native binaries, or CI jobs depend on Linux-specific behavior. You catch those differences earlier, before they turn into deployment surprises.</p><p>Another important point is access to internal systems. In enterprise environments, the code you need is often not fully reachable from a personal laptop. Internal Git servers, package registries, artifact repositories, databases, and mounted filesystems may only be available from a managed network zone. A remote machine inside that zone becomes the place where Bob can work with the real project context. Your laptop stays outside, while the remote host becomes the bridge to systems that are sensitive, regulated, or simply not meant to be exposed broadly.</p><p>There is also a resource argument. Some projects need more CPU, memory, disk, or network bandwidth than you want to dedicate on a local machine. Large Maven builds, indexing big monorepos, running integration tests, or pulling large container images can make a laptop unpleasant to use. A remote machine can absorb that load instead. Bob still feels local in the editor, but the heavy work happens somewhere better suited for it.</p><p>Remote machines are also useful for team onboarding. Instead of telling every new developer to install a long list of SDKs, CLIs, certificates, shells, and package managers, you can provide a prepared remote environment. That shortens the time until someone is productive. It also reduces drift. 
Bob benefits from that too, because it works in an environment that already reflects how the team actually builds, tests, and ships software.</p><p>One more aspect is recovery. Local environments tend to accumulate damage slowly. A broken package install, a conflicting runtime, a strange shell setting, or a half-finished experiment can stay around for weeks. Remote environments are easier to rebuild. If the machine gets messy, you replace it. That matters even more when you work close to sensitive systems. You want the environment to be reproducible, replaceable, and easy to reason about.</p><p>So the value of remote machines is not just isolation. It is also consistency, Linux parity, enterprise access, better resource usage, easier onboarding, and faster recovery. And when Bob needs to work near sensitive environments, remote machines give you a more realistic and practical place to do that work.</p><h2>Prerequisites</h2><p>You need a few things in place before you start.</p><ul><li><p>IBM Bob installed  ( <a href="https://bob.ibm.com/trial">free 30 day trial</a>)</p></li><li><p>Podman installed </p></li><li><p>OpenSSH client available in your terminal</p></li><li><p>Basic comfort with shell commands and SSH config</p></li></ul><h2>Project setup</h2><p>We start by creating the Linux environment that will act like the remote host.</p><p>Create and start the Podman machine:</p><pre><code><code>podman machine init --cpus 4 --memory 8192 --disk-size 40
podman machine start
podman info</code></code></pre><p>On macOS, Podman cannot run Linux containers directly on the host OS. It needs a Linux virtual machine underneath, and <code>podman machine init</code> creates exactly that. The <a href="https://docs.podman.io/en/latest/markdown/podman-machine-init.1.html">Podman docs</a> also note that SSH keys are generated automatically for access to the VM itself. </p><p>Now create a small working folder on your Mac for the demo:</p><pre><code><code>mkdir -p ~/bob-remote-demo/ssh
cd ~/bob-remote-demo</code></code></pre><p>Generate a dedicated SSH key for the simulated remote host:</p><pre><code><code>ssh-keygen -t ed25519 -f ssh/bob_remote_key -N ""</code></code></pre><p>We use a dedicated key because it keeps the test isolated. Later, cleanup is simple. You also avoid mixing this experiment with your normal laptop SSH identities.</p><p>Next, create a <code>Containerfile</code> that defines the remote Linux machine:</p><blockquote><p>&#9888;&#65039; NOTE: Substack&#8217;s Cloudflare filtering tends to strip the leading &#8220;/&#8221; from the three <code>etc</code> paths in this file. Make sure all three read <code>/etc/&#8230;</code> before you use it!</p></blockquote><pre><code><code>FROM docker.io/ubuntu:24.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update &amp;&amp; apt-get install -y \
    openssh-server \
    sudo \
    bash \
    curl \
    git \
    ca-certificates \
    tar \
    gzip \
    unzip \
    procps \
    less \
    nano \
    vim \
    iproute2 \
    openjdk-21-jdk \
    maven \
    &amp;&amp; rm -rf /var/lib/apt/lists/*

RUN useradd -m -s /bin/bash bob \
    &amp;&amp; passwd -d bob \
    &amp;&amp; echo "bob ALL=(ALL) NOPASSWD:ALL" &gt; /etc/sudoers.d/bob \
    &amp;&amp; chmod 0440 /etc/sudoers.d/bob

RUN mkdir -p /var/run/sshd /home/bob/.ssh \
    &amp;&amp; chown -R bob:bob /home/bob/.ssh \
    &amp;&amp; chmod 700 /home/bob/.ssh

COPY ssh/bob_remote_key.pub /home/bob/.ssh/authorized_keys

RUN chown bob:bob /home/bob/.ssh/authorized_keys \
    &amp;&amp; chmod 600 /home/bob/.ssh/authorized_keys

RUN printf '%s\n' \
    'Port 2222' \
    'PermitRootLogin no' \
    'PasswordAuthentication no' \
    'KbdInteractiveAuthentication no' \
    'ChallengeResponseAuthentication no' \
    'UsePAM no' \
    'PubkeyAuthentication yes' \
    'AllowUsers bob' \
    'X11Forwarding no' \
    'AllowTcpForwarding yes' \
    'ClientAliveInterval 300' \
    'ClientAliveCountMax 2' \
    &gt; /etc/ssh/sshd_config.d/remote-dev.conf

USER bob
WORKDIR /home/bob
RUN mkdir -p /home/bob/workspace/demo-app

USER root
EXPOSE 2222

CMD ["/usr/sbin/sshd", "-D", "-e"]</code></code></pre><p>This image gives us a plain Ubuntu remote host with SSH, a normal user account, Git, Java 21, and Maven. That is enough to test a real editor-over-SSH workflow and then let Bob operate on a Java project remotely.</p><p>A few design choices are worth calling out. We disable root login. We disable password authentication. We allow only key-based login for the <code>bob</code> user. That keeps the remote boundary simple and predictable. We do allow SSH forwarding because remote editor workflows often need it. If you know you do not need it, you can tighten that later.</p><p>Build the image:</p><pre><code><code>podman build --no-cache --arch amd64 -t bob-remote-ubuntu:24.04 .</code></code></pre><p>The <code>--arch amd64</code> part matters on Apple Silicon. Without it, Podman will usually build an <code>arm64</code> image because that matches the host machine. That sounds fine until Bob tries to install its remote server component on the container. In our testing, the Bob remote install flow detected <code>aarch64</code>, requested a Linux <code>arm64</code> BobIDE server build, and got a <code>404</code> from the download endpoint. Building the simulated remote host as <code>amd64</code> avoids that problem and makes the container look like a more typical x86_64 Linux development machine. The <code>--no-cache</code> flag is useful here too. It forces a clean rebuild, which helps when you change SSH keys, account setup, or the container image itself during testing.</p><p>Run the remote host container:</p><pre><code><code>podman run -d \
  --name bob-remote-host \
  --platform linux/amd64 \
  -p 2222:2222 \
  bob-remote-ubuntu:24.04</code></code></pre><p>The <code>--publish</code> option maps the container&#8217;s SSH port to your Mac. Podman documents <code>--publish</code> as the mechanism for exposing a container port on the host, which is exactly what we need so <code>localhost:2222</code> becomes our remote entry point.</p><p>The build and run commands should match. We build an <code>amd64</code> image, and we run it explicitly as <code>linux/amd64</code>, so Bob sees a remote Linux <code>x86_64</code> machine and can use the expected remote server package.</p><p>By now, the basic shape is in place: Podman provides the Linux VM, and inside it we run a container that behaves like a remote development machine.</p><h2>Implementation</h2><p>Now we wire the SSH access, test it from the terminal, connect Bob, and verify that Bob is really operating against the remote Linux machine.</p><p>Start with your local SSH config. Open <code>~/.ssh/config</code> and add this host:</p><pre><code><code>Host bob-podman-demo
  HostName 127.0.0.1
  Port 2222
  User bob
  IdentityFile ~/bob-remote-demo/ssh/bob_remote_key
  IdentitiesOnly yes
  StrictHostKeyChecking accept-new
  ServerAliveInterval 30
  ServerAliveCountMax 3
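  # Agent forwarding hands your local ssh-agent to the remote host while
  # connected. Handy for git-over-ssh from inside the container, but turn
  # ForwardAgent off if you do not need it, especially on hosts you do not
  # fully trust.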
  ForwardAgent yes</code></code></pre><p>This alias is important. Do not skip it. A stable SSH alias gives you a stable target name inside Bob and your terminal. Later, if you replace the backend host, you can keep using <code>bob-podman-demo</code> and only change the actual hostname or port. That is a small detail, but it makes real remote workflows easier to maintain.</p><p>Now test raw SSH from the terminal:</p><pre><code><code>ssh bob-podman-demo</code></code></pre><p>You should land on the remote host and see a shell prompt for the <code>bob</code> user. Once you are there, verify the basics:</p><pre><code><code>whoami
pwd
uname -a
java -version
mvn -version</code></code></pre><p>Expected behavior is simple. <code>whoami</code> returns <code>bob</code>. <code>pwd</code> starts in <code>/home/bob</code>. <code>uname -a</code> shows Linux, not macOS. <code>java -version</code> and <code>mvn -version</code> confirm the remote toolchain is installed.</p><p>Create a small Java project on the remote host so Bob has something real to work with:</p><pre><code><code>cd ~/workspace/demo-app
git init

cat &gt; pom.xml &lt;&lt;'EOF'
&lt;project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         https://maven.apache.org/xsd/maven-4.0.0.xsd"&gt;
  &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt;

  &lt;groupId&gt;com.example&lt;/groupId&gt;
  &lt;artifactId&gt;bob-remote-demo&lt;/artifactId&gt;
  &lt;version&gt;1.0.0-SNAPSHOT&lt;/version&gt;

  &lt;properties&gt;
    &lt;maven.compiler.release&gt;21&lt;/maven.compiler.release&gt;
    &lt;project.build.sourceEncoding&gt;UTF-8&lt;/project.build.sourceEncoding&gt;
  &lt;/properties&gt;

  &lt;build&gt;
    &lt;plugins&gt;
      &lt;plugin&gt;
        &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;
        &lt;artifactId&gt;maven-compiler-plugin&lt;/artifactId&gt;
        &lt;version&gt;3.15.0&lt;/version&gt;
        &lt;configuration&gt;
          &lt;release&gt;21&lt;/release&gt;
        &lt;/configuration&gt;
      &lt;/plugin&gt;
    &lt;/plugins&gt;
  &lt;/build&gt;
&lt;/project&gt;
EOF

mkdir -p src/main/java/com/example
cat &gt; src/main/java/com/example/Main.java &lt;&lt;'EOF'
package com.example;

public class Main {
    public static void main(String[] args) {
        System.out.println("Hello from the remote machine.");
    }
}
EOF

cat &gt; README.md &lt;&lt;'EOF'
# Bob Remote Demo

This project is used to verify that IBM Bob is operating against a remote Linux machine over SSH.
EOF

mvn -q compile</code></code></pre><p>This project stays intentionally small. The goal here is not Maven or Java fundamentals. The goal is to prove that Bob can read, change, build, and test code remotely.</p><p>Exit the SSH session:</p><pre><code><code>exit</code></code></pre><p>Now open IBM Bob on your Mac. The <code>Open Remote - SSH</code> extension from <code>jeanp413 </code>should already be installed (If not, <a href="https://github.com/jeanp413/open-remote-ssh">grab it from open-vsx</a>). </p><p>Open the command palette in Bob and connect the current window to <code>bob-podman-demo</code>. This should look like this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ydia!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ydia!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png 424w, https://substackcdn.com/image/fetch/$s_!Ydia!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png 848w, https://substackcdn.com/image/fetch/$s_!Ydia!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png 1272w, https://substackcdn.com/image/fetch/$s_!Ydia!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Ydia!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png" width="1456" height="946" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:946,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:882329,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/194172844?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ydia!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png 424w, https://substackcdn.com/image/fetch/$s_!Ydia!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png 848w, https://substackcdn.com/image/fetch/$s_!Ydia!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png 1272w, https://substackcdn.com/image/fetch/$s_!Ydia!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbdc17d08-a5f2-4ac7-a580-d0dd0b799dca_3024x1964.png 1456w" 
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You can also open the Terminal window and you will directly see user bob logged in.</p><p>Right after you choose the host from the command palette, Bob starts a normal SSH connection using the identity configured for that host. In this case it picks the dedicated <code>bob_remote_key</code>, authenticates with public key login, and then runs a remote install script on the Linux machine. That script checks the remote platform and architecture, prepares the <code>~/.bobide-server</code> directory, and looks for a matching <code>bobide-server</code> build. 
Because the server is already installed and running in this log, Bob skips the download, reuses the existing remote server process, reads its connection token, finds the port it is listening on, and then creates local port forwarding back to that remote process. This is the important part: Bob is not just opening files over raw SSH. It uses SSH first, then boots or reuses a Bob remote server on the target machine, and finally tunnels your local editor session to that server so the remote workspace behaves like a local IDE window.</p><p>Then open this folder:</p><pre><code><code>/home/bob/workspace/demo-app</code></code></pre><p>Once the remote folder opens, stop for a second and verify what you are looking at. You are still using Bob locally, but the workspace itself is remote. That means file edits and shell commands now happen on the remote Linux target, not on your macOS host. This is the core architectural change.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CpEt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CpEt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png 424w, https://substackcdn.com/image/fetch/$s_!CpEt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png 848w, 
https://substackcdn.com/image/fetch/$s_!CpEt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png 1272w, https://substackcdn.com/image/fetch/$s_!CpEt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CpEt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png" width="1456" height="946" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:946,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:949021,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/194172844?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CpEt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png 424w, 
https://substackcdn.com/image/fetch/$s_!CpEt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png 848w, https://substackcdn.com/image/fetch/$s_!CpEt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png 1272w, https://substackcdn.com/image/fetch/$s_!CpEt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27ff965b-4f3f-44bd-b560-08d393427976_3024x1964.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Open the integrated terminal in the Bob window and run:</p><pre><code><code>whoami
pwd
uname -a
ls -la
mvn -q compile</code></code></pre><p>If all of that works, Bob is now operating against the remote Linux machine.</p><p>Next, ask Bob to inspect the project and summarize its structure before it touches anything.</p><p>Then ask Bob to make a controlled change:</p><pre><code><code>Add a JUnit 5 test to this Maven project, explain the changes first, then apply them.</code></code></pre><p>Finally, ask it to run the test:</p><pre><code><code>Run the tests and explain the output.</code></code></pre><p>IBM Bob&#8217;s docs highlight file access and the ability to run terminal or shell commands from inside Bob. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MvB5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MvB5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png 424w, https://substackcdn.com/image/fetch/$s_!MvB5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png 848w, https://substackcdn.com/image/fetch/$s_!MvB5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png 1272w, https://substackcdn.com/image/fetch/$s_!MvB5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!MvB5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png" width="1456" height="951" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:951,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1171337,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/194172844?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MvB5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png 424w, https://substackcdn.com/image/fetch/$s_!MvB5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png 848w, https://substackcdn.com/image/fetch/$s_!MvB5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png 1272w, https://substackcdn.com/image/fetch/$s_!MvB5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b38f7a-ee43-4aa8-b4f3-4dbaf3612bce_3248x2122.png 
1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>A note about real enterprise environments</h3><p>This tutorial uses a plain Linux container to simulate a remote machine. That is the right way to learn the workflow. But real enterprise environments often add one more layer. SSH connectivity may work, while access to the real workspace still fails because the remote session is missing extra identity or filesystem tokens. You may run into <code>kinit</code>, <code>aklog</code>, and <code>~/.ssh/rc</code> hooks for AFS token setup, or similar site-specific credential bootstrapping. 
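As an illustration only, such a bootstrap is often wired into <code>~/.ssh/rc</code>. Everything below is a hypothetical sketch: the keytab path, the principal, and the decision to run <code>kinit</code> and <code>aklog</code> at login are placeholders for whatever your site actually mandates, and none of it is required by the Podman flow in this tutorial.

```shell
#!/bin/sh
# Hypothetical ~/.ssh/rc sketch. OpenSSH runs this at login, before the shell.
# All names and paths are placeholders; check your site documentation.
if command -v kinit >/dev/null 2>/dev/null; then
  # Acquire a Kerberos ticket non-interactively from a pre-staged keytab.
  kinit -k -t "$HOME/.keytab" "bob@EXAMPLE.COM" 2>/dev/null || true
fi
if command -v aklog >/dev/null 2>/dev/null; then
  # Exchange the Kerberos ticket for an AFS token so the home filesystem works.
  aklog 2>/dev/null || true
fi
```

The guards mean the hook degrades quietly on machines without Kerberos tooling instead of breaking every SSH login.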
That is environment-specific, so we do not build it into the generic Podman flow here, but the pattern matters: <strong>a working SSH login does not automatically mean a fully initialized enterprise session</strong>.</p><p>The same is true for Bob-specific remote components. In some managed environments, a remote-side BobIDE server or helper runtime also has to be present before remote editing works; a managed Power system or internal enterprise environment often requires exactly that.</p><h2>Configuration</h2><p>There are three configurations that matter in this setup: Podman machine settings, the SSH daemon in the container, and your local SSH alias.</p><p>The Podman machine is created with:</p><pre><code><code>podman machine init --cpus 4 --memory 8192 --disk-size 40</code></code></pre><p>Remote editor sessions get unpleasant quickly when the underlying VM is too small. The editor does file watching, indexing, shell commands, Git activity, and language tooling. Give the machine too little memory and you get slow builds, laggy terminals, and flaky background tasks. Give it too few CPUs and simple operations take longer than they should.</p><p>The SSH daemon configuration inside the container is this:</p><pre><code><code>Port 2222
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
UsePAM no
PubkeyAuthentication yes
AllowUsers bob
X11Forwarding no
AllowTcpForwarding yes
ClientAliveInterval 300
ClientAliveCountMax 2</code></code></pre><p><code>PermitRootLogin no</code> blocks the worst default mistake. <code>PasswordAuthentication no</code> removes password guessing. <code>AllowUsers bob</code> narrows the login surface. <code>ClientAliveInterval</code> and <code>ClientAliveCountMax</code> help dead sessions time out cleanly. We leave <code>AllowTcpForwarding yes</code> enabled because some remote workflows depend on it. If yours does not, turn it off.</p><p>Your local SSH alias matters just as much:</p><pre><code><code>Host bob-podman-demo
  HostName 127.0.0.1
  Port 2222
  User bob
  IdentityFile ~/bob-remote-demo/ssh/bob_remote_key
  IdentitiesOnly yes
  StrictHostKeyChecking accept-new
  ServerAliveInterval 30
  ServerAliveCountMax 3
  ForwardAgent yes</code></code></pre><p><code>IdentitiesOnly yes</code> avoids weird behavior when your laptop has many SSH keys loaded. <code>ForwardAgent yes</code> is useful when the remote host itself needs to reach another Git server using your local agent. In plain local tests you may not need it, but it is a common real-world requirement.</p><p>One more configuration point matters from a security perspective. Some teams try to solve remote development by exposing container APIs over TCP. Podman warns directly against this. The API grants full access to Podman functionality and allows arbitrary code execution as the user running the API, and they strongly recommend against making the API socket available over the network. They recommend SSH forwarding instead when remote access is needed. </p><h3>Clean up</h3><p>When you are done, destroy the remote host:</p><pre><code><code>podman rm -f bob-remote-host</code></code></pre><p>If you created the Podman machine only for this tutorial, you can stop it too:</p><pre><code><code>podman machine stop</code></code></pre><p>This disposable cleanup is one of the best operational advantages of the whole pattern. You can rebuild the environment from scratch instead of trying to repair a messy one.</p><h2>Conclusion</h2><p>We built a complete remote Bob workflow on macOS without needing a real remote server. Podman gave us the Linux VM that macOS needs for containers, we ran an SSH-enabled Ubuntu container inside it, and IBM Bob connected to that remote Linux machine through <code>jeanp413.open-remote-ssh</code>. The result is a cleaner boundary: Bob still runs in your editor, but the workspace, shell commands, build tools, and side effects live in a remote environment you can inspect, rebuild, and destroy. The main thing to remember is this: SSH is the transport, not the whole story. In simple setups, that is enough. 
In enterprise environments, you often need extra identity bootstrapping and sometimes Bob-specific remote components too. That split makes remote Bob setups much easier to understand and debug.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Stop Letting AI Guess in Your Java Repository]]></title><description><![CDATA[Learn how Java developers can ground AI coding tools with JDTLS, MCP, and repository conventions to reduce bad code guesses.]]></description><link>https://www.the-main-thread.com/p/ai-coding-tool-environment-java-jdtls-mcp</link><guid isPermaLink="false">https://www.the-main-thread.com/p/ai-coding-tool-environment-java-jdtls-mcp</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Wed, 22 Apr 2026 06:08:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/08310e05-1e5d-4d01-b65f-a936b6900484_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A ticket says &#8220;add CSV export for cargo itineraries.&#8221; That sounds small until an AI coding tool starts working on a layered Jakarta EE codebase without a map. Now the real problem shows up. Which service already owns itinerary lookup? Which REST facade should expose the export? Which exception already means &#8220;not found&#8221;? Which serialization path is allowed in this repository? Without those answers, the tool does what tools do. It guesses, and the guesses look convincing right up to the point where they collide with the code you actually ship.</p><p>The problem is usually earlier than the prompt. 
In a layered Jakarta EE application, the hard part is not understanding English. The hard part is knowing where the system already solves the problem. A request like &#8220;add CSV export for cargo itineraries&#8221; sounds simple. In a real repository, it is a navigation problem. Which service already owns itinerary lookup? Which facade already exposes cargo data? Which exception means not found? Which serialization stack is allowed? Which package is off-limits because it breaks layering?</p><p>Senior developers do this mapping almost automatically. They look at the project structure, the recent commit history, the conventions, and the existing call graph before they write a line of code. AI coding tools fail when we skip that step and ask them to implement immediately. Then the tool guesses. It invents a helper class in the wrong layer. It creates a second copy of logic that already exists. It adds an endpoint that looks right but does not fit the API surface you actually ship.</p><p>That failure is easy to misread. It looks like &#8220;AI cannot do real work.&#8221; But the real issue is environmental. You asked the assistant to act like a senior teammate without giving it the same working context a senior teammate would demand on day one. No language server. No file boundaries. No recent history. No written rules. No project-local configuration that travels with the repository.</p><p>In this tutorial, we fix that problem the practical way. We build a small harness around the Jakarta EE Cargo Tracker example so IBM Bob can reason against the actual repository instead of guessing from text alone. We use the Eclipse JDT Language Server behind an MCP bridge, a filesystem server scoped to source code, optional Git context, a committed <code>.bob/mcp.json</code>, and two human checkpoints before implementation starts. Some people call this <em>harness engineering</em>. Some call it <em>context engineering</em>. The exact name is still settling. 
The important part is simple: structure in, structure out.</p><p>By the end, you will have a repeatable setup you can reuse for Cargo Tracker and adapt to your own Jakarta EE or Spring repositories.</p><h2>Prerequisites</h2><p>You do not need deep AI tooling knowledge for this tutorial, but you should be comfortable on the command line and able to read a Maven-based Java project. We will use IBM Bob in the examples, but the same idea works with any MCP-capable client that can launch external servers and consume project-local configuration.</p><ul><li><p>Java 21 or later installed</p></li><li><p>Maven Wrapper available in the target repository</p></li><li><p>Maven 3.9+ on your <code>PATH</code> if you build the MCP bridge from source</p></li><li><p>Git, <code>curl</code>, and a shell</p></li><li><p>Node.js with <code>npx</code> for the reference filesystem server</p></li><li><p>Optional: <code>uv</code> or <code>uvx</code> for the Git MCP server</p></li><li><p>Optional: Homebrew on macOS for packaged <code>jdtls</code></p></li><li><p>Basic familiarity with Maven, Git, and layered Java applications</p></li></ul><h2>Project Setup</h2><p>We start with the upstream Cargo Tracker repository. The first rule of this whole article is simple: <strong>the project must build before the assistant touches it</strong>. A language server does not rescue a broken classpath. An MCP bridge does not fix dependency resolution. If <code>./mvnw compile</code> fails, the rest of the harness only gives you better-informed confusion.</p><p>Let&#8217;s create a working directory:</p><pre><code><code>mkdir ai-tooling-example
cd ai-tooling-example</code></code></pre><p>Clone the project and prove it compiles:</p><pre><code><code>git clone https://github.com/eclipse-ee4j/cargotracker.git \
  &amp;&amp; cd cargotracker \
  &amp;&amp; ./mvnw -q -DskipTests compile</code></code></pre><p>This repository is a good training ground because it looks like a real enterprise application. It has a layered structure, real domain boundaries, Jakarta EE APIs, and enough moving parts that a coding assistant can easily get lost without help.</p><p>You get a standard Maven layout with <code>pom.xml</code> at the root, Java sources under <code>src/main/java</code>, and tests under <code>src/test/java</code>. More importantly, you get existing application, domain, infrastructure, and interface packages. That matters. We want the assistant to navigate those packages. We do not want it to invent fresh package roots because it did not see what already exists.</p><p>If the build fails here, stop and fix Java, Maven, or network access first. That is not a side issue. It is part of the environment. A harness built on a non-working repository just gives you better tools for generating bad output.</p><h2>Install and Validate JDTLS</h2><p>We need the Eclipse JDT Language Server because that is the part that understands Java symbols, definitions, references, classpath resolution, and project structure. Your IDE already relies on this kind of capability. We are moving that same capability into the assistant&#8217;s tool loop.</p><p>On macOS, the easy path is Homebrew:</p><pre><code><code>brew install jdtls
brew info jdtls</code></code></pre><p>If you want a manual install that works across operating systems, download the latest snapshot tarball from Eclipse and unpack it into a local directory:</p><pre><code><code>JDTLS_TGZ=$(curl -fsSL https://download.eclipse.org/jdtls/snapshots/latest.txt)
curl -fLO "https://download.eclipse.org/jdtls/snapshots/${JDTLS_TGZ}"
mkdir -p ~/.local/jdtls &amp;&amp; tar -xzf "${JDTLS_TGZ}" -C ~/.local/jdtls</code></code></pre><p>After unpacking, you should see a <code>plugins/</code> directory and one platform-specific configuration directory such as <code>config_mac</code>, <code>config_linux</code>, or similar. That platform directory matters. If you point <code>jdtls</code> at the wrong one, startup fails with OSGi errors that look confusing and unrelated to the real issue.</p><p>Here is a manual smoke test for macOS. On Linux, replace <code>config_mac</code> with <code>config_linux</code>.</p><pre><code><code>LAUNCHER_JAR=$(ls ~/.local/jdtls/plugins/org.eclipse.equinox.launcher_*.jar | head -1)
java \
  -Declipse.application=org.eclipse.jdt.ls.core.id1 \
  -Dosgi.bundles.defaultStartLevel=4 \
  -Declipse.product=org.eclipse.jdt.ls.core.product \
  -Xmx1G -XX:+UseG1GC \
  -jar "${LAUNCHER_JAR}" \
  -configuration ~/.local/jdtls/config_mac \
  -data /tmp/jdtls-workspace-cargotracker</code></code></pre><p>The <code>-data</code> directory is not your Git checkout. This is important. It is the language server&#8217;s own workspace cache and metadata area. The actual project path gets passed later through the language server handshake when the MCP bridge initializes the session.</p><p>A one-gigabyte heap is a good starting point for Cargo Tracker. You can reduce it later if you measure idle usage and know you have margin. But starting too small creates a different class of problem: slow indexing, unstable analysis, or random failures that look like language-server bugs when the real issue is starvation.</p><p>What does this give us? It gives the assistant symbol-level reality. <code>jdtls</code> knows what <code>CargoRepository</code> actually is. It knows where a method is declared, who references it, and how the classpath resolves imports. Without that, the assistant is doing fancy autocomplete over prose. With it, the assistant is navigating the same semantic graph your IDE uses.</p><p>It still has limits. It does not know your team&#8217;s conventions. It does not know what layers are socially forbidden. It does not know whether adding Jackson is acceptable just because a classpath contains it somewhere. That is why we need more than one server.</p><h2>Build the LSP4J-MCP Bridge</h2><p>Now we need a bridge between the Model Context Protocol world and the Java language server world. In this setup, we use <code>LSP4J-MCP</code>, which starts <code>jdtls</code> as a child process and exposes a smaller, controlled tool surface to the assistant.</p><p>Clone and build it:</p><pre><code><code>git clone https://github.com/stephanj/LSP4J-MCP.git
cd LSP4J-MCP
mvn -q clean package -DskipTests
ls target/lsp4j-mcp-*.jar</code></code></pre><p>At the time of writing, the project typically produces a shaded JAR with a name like <code>lsp4j-mcp-1.0.0-SNAPSHOT.jar</code>, but do not hardcode the exact version in your head. The safe habit is to inspect the <code>target/</code> directory and then copy the resolved artifact into a stable project-local path.</p><p>Create local tool and log directories in Cargo Tracker, then copy the built JAR:</p><pre><code><code>mkdir -p /path/to/cargotracker/.bob/tools /path/to/cargotracker/.bob/logs
cp target/lsp4j-mcp-*.jar /path/to/cargotracker/.bob/tools/lsp4j-mcp.jar</code></code></pre><p>We give it a fixed local name, <code>lsp4j-mcp.jar</code>, because this is the name Bob will use from the committed configuration. This avoids rewriting config every time the bridge version changes.</p><p>You can do a standalone smoke launch before wiring Bob to it:</p><pre><code><code>java -jar /path/to/cargotracker/.bob/tools/lsp4j-mcp.jar \
  /path/to/cargotracker \
  jdtls</code></code></pre><p>Let that process sit for a while on first boot. The first run imports the Maven model and indexes the workspace. On a healthy setup, stderr shows project import progress or indexing activity. A broken setup exits immediately or throws classpath, Java version, or process launch errors.</p><p>This bridge is intentionally small. That is one of its strengths. It does not try to surface every possible LSP request. It exposes a smaller set of tools that are easy to review and safe to auto-approve in read-only mode. That smaller surface area is good for production teams because it limits accidental behavior and keeps the assistant&#8217;s tool menu understandable.</p><p>Typical tools exposed by this bridge include:</p><ul><li><p><code>find_symbols</code></p></li><li><p><code>find_references</code></p></li><li><p><code>find_definition</code></p></li><li><p><code>document_symbols</code></p></li><li><p><code>find_interfaces_with_method</code></p></li></ul><p>The exact names depend on the version you built, so always check the startup log or the bridge documentation before you finalize <code>autoApprove</code>. This is one of those details teams skip, and then they wonder why Bob keeps asking for approval on every call or fails because the configured tool name does not exist.</p><p>What does this bridge guarantee? It gives you language-aware discovery. That is the big win. What it does not guarantee is correctness of architecture or intent. It can tell the assistant where a class lives. It cannot tell the assistant whether adding a new service in that package is the right move. We still need explicit conventions and a human checkpoint for that.</p><h2>Commit Project-Local MCP Configuration</h2><p>The next step is the piece many teams miss. Do not leave the harness in somebody&#8217;s head or in a private desktop configuration. Commit it with the repository.</p><p>IBM Bob can read project-level MCP settings from <code>.bob/mcp.json</code>. 
That makes the environment reproducible. A teammate can clone the repository, open it, and inherit the same harness instead of reverse-engineering your local setup from screenshots and Slack messages.</p><p>First, make sure local logs stay out of version control:</p><pre><code><code>printf '%s\n' '.bob/logs/' &gt;&gt; .gitignore</code></code></pre><p>Now create <code>.bob/mcp.json</code> at the repository root:</p><pre><code><code>{
  "mcpServers": {
    "java-lsp": {
      "type": "stdio",
      "command": "java",
      "args": [
        "-jar",
        "${workspaceFolder}/.bob/tools/lsp4j-mcp.jar",
        "${workspaceFolder}",
        "jdtls"
      ],
      "env": {
        "LOG_FILE": "${workspaceFolder}/.bob/logs/jdtls-mcp.log",
        "JAVA_HOME": "/Library/Java/JavaVirtualMachines/temurin-21.jdk/Contents/Home"
      },
      "autoApprove": [
        "find_symbols",
        "find_references",
        "find_definition",
        "document_symbols",
        "find_interfaces_with_method"
      ],
      "disabled": false
    },
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem@latest",
        "${workspaceFolder}/src"
      ],
      "autoApprove": [
        "read_file",
        "list_directory",
        "search_files"
      ],
      "disabled": false
    },
    "git": {
      "type": "stdio",
      "command": "uvx",
      "args": [
        "mcp-server-git",
        "--repository",
        "${workspaceFolder}"
      ],
      "autoApprove": [
        "git_log",
        "git_diff",
        "git_show"
      ],
      "disabled": false
    }
  }
}</code></code></pre><p>This file is small, but it changes how the assistant behaves in a big way. Now Bob has three different ways to ground itself.</p><p>The <code>java-lsp</code> server answers semantic questions. It knows where symbols are defined and how code relates.</p><p>The <code>filesystem</code> server answers raw text questions. It can read files, list directories, and search source content.</p><p>The <code>git</code> server answers historical questions. It can show diffs, recent changes, and implementation intent from the repository history.</p><p>Together, those three servers approximate what a senior developer does mentally before touching code.</p><p>There are a few details here worth slowing down for.</p><p>The <code>type</code> is <code>stdio</code> for all three servers. That means Bob launches child processes and speaks MCP over standard input and output. This is simple and reliable. It also means broken command paths fail fast.</p><p>The <code>${workspaceFolder}</code> variable matters a lot. Hard-coded local paths break the setup for everyone else. If your Bob release uses a different token, update it once in the committed config and document it. Do not hide that difference in tribal knowledge.</p><p>The <code>LOG_FILE</code> environment variable is a support tool. When something goes wrong, you want one place to tail logs. A missing log directory is not fatal in every setup, but it makes debugging harder and pushes errors into stderr where they get lost.</p><p>The <code>JAVA_HOME</code> setting is convenient and fragile at the same time. The example above is a macOS Temurin path. That is fine for a single-machine demo, but teams usually want one of two approaches. Either keep separate snippets for macOS and Linux in an internal doc, or remove <code>JAVA_HOME</code> from the committed file and rely on the parent environment. The important thing is to be explicit. 
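Before relying on a committed value, it is worth checking what that path actually resolves to on each machine. The following is a small sanity sketch, not part of the official setup; the Temurin path is only the macOS example used in the configuration above.

```shell
# Check the JVM the MCP bridge will inherit before committing a JAVA_HOME value.
# The Temurin path below is only the macOS example used in this article.
CANDIDATE="${JAVA_HOME:-/Library/Java/JavaVirtualMachines/temurin-21.jdk/Contents/Home}"
if [ -x "$CANDIDATE/bin/java" ]; then
  "$CANDIDATE/bin/java" -version
elif command -v java >/dev/null 2>/dev/null; then
  # No JDK at the configured path: fall back to the parent environment's java.
  java -version
else
  echo "No usable JVM found. Fix JAVA_HOME or install a JDK before wiring the bridge."
fi
```

If the fallback branch fires on a teammate's machine, either adjust the committed <code>JAVA_HOME</code> for that platform or drop the entry and rely on the parent environment.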
Wrong <code>JAVA_HOME</code> values produce class version or startup errors that look like project bugs even though the problem is just the runtime.</p><p>The <code>autoApprove</code> lists deserve security thinking. These are read-oriented tools only. Keep it that way unless you have a very deliberate reason to expose write tools. The moment you auto-approve a mutating tool, you expand the assistant&#8217;s blast radius.</p><p>The <code>filesystem</code> scope is one of the most important design choices in the whole article. We point it at <code>${workspaceFolder}/src</code>, not the whole repository. That is deliberate. Yes, it costs some convenience. No, the assistant cannot casually open <code>README.md</code> or inspect build output or local scratch files. That is the point. Narrow scope reduces accidental exposure of secrets, noisy directories, and irrelevant files.</p><p>The package reference uses <code>@latest</code> in this example because it is the easiest way to show the setup. In a team setting, pin it after validation. Cold-starting against whatever the registry says is &#8220;latest&#8221; makes laptops drift and turns debugging into archaeology.</p><p>The <code>git</code> server is optional, but it adds real value. Recent history often tells you which test file was changed last for a similar feature, which class is the real integration point, or which package is alive versus effectively abandoned. That kind of signal helps the assistant follow the grain of the codebase instead of fighting it.</p><h2>Add a Repository Conventions File</h2><p>Language tools tell the assistant what exists. They do not tell it what your team considers acceptable. That is why we add a committed <code>AGENTS.md</code> at the repository root.</p><p>Create <code>AGENTS.md</code> like this:</p><pre><code><code># Conventions

## Architecture
- Strict layering: interfaces &#8594; application &#8594; domain &#8594; infrastructure
- Domain types stay free of web and persistence annotations
- CDI constructor injection in application services
- Repositories are interfaces in domain; JPA implementations live in infrastructure

## Naming
- Application services: `*Service` under `application/internal/`
- REST facades: `*RestService` under `interfaces/rest/`
- JPA implementations: `Jpa*Repository`

## Serialization
- JSON via Jakarta JSON Binding in the stack versions Cargo Tracker already uses
- Keep serialization helpers out of domain entities unless the project explicitly allows it

## Testing
- Integration tests: `*IT.java`, follow Arquillian patterns already in the tree
- Unit tests: `*Test.java` with JUnit 5
- Reuse existing test data bootstrap patterns; do not invent a parallel database lifecycle

## Runtime descriptor and test packaging safety rules
Be careful with deployment descriptors and runtime-specific test resources.

Descriptor filenames, XML root elements, and schemas must match exactly.
Do not rename one descriptor type into another.
Do not package a standard `web.xml` file as `ibm-web-bnd.xml`.
Do not package a Liberty binding descriptor as `web.xml`.

When working with ShrinkWrap and Arquillian:
- inspect every file added through `addAsWebInfResource`
- confirm the source file content matches the target filename
- reuse existing repository examples before creating new descriptors
- prefer the minimal archive that works
- if no working example exists, stop and explain the uncertainty instead of inventing a runtime descriptor

Required self-check before finalizing test code:
- `web.xml` must contain the correct `web-app` root element
- `ibm-web-bnd.xml` must contain the correct Liberty binding root element
- no descriptor may be duplicated under the wrong target filename
- runtime-specific resources must follow existing repository conventions

## Runtime and verification
- For local runtime validation in this repository, use the Open Liberty profile
- Build with `./mvnw clean package -Popenliberty`
- Run with `./mvnw liberty:run -Popenliberty`
- Do not verify new REST behavior against a different runtime unless explicitly requested</code></code></pre><p>This file is simple governance. It does not try to explain the entire architecture. It sets boundaries. That is enough to stop a lot of common assistant mistakes.</p><p>For example, the classpath might contain something Jackson-related through another path or dependency. A generic model sees <code>ObjectMapper</code> and reaches for it. Your conventions file says no, this codebase uses Jakarta JSON Binding and serialization helpers stay out of domain entities. That single sentence removes a whole category of bad diffs.</p><p>Another common issue is layering drift. The assistant sees a problem, wants a quick helper, and places it in whatever package feels convenient. Your conventions file tells it where application services belong, where REST facades belong, and where repository implementations belong. That narrows the search space and makes planning better before implementation starts.</p><p>This file does not replace code review. It makes code review less repetitive and less reactive.</p><h2>Build an Impact Map Before Writing Code</h2><p>Here is the part that changes team behavior the most. The first thing you ask the assistant for is not code. It is a repository impact map.</p><p>This sounds slower. In practice, it is faster because it catches the expensive mistakes before they become patches.</p><p>Open Bob and start with a prompt like this:</p><pre><code><code>Use the java-lsp and filesystem tools to inspect the Cargo Tracker project.
I want to add CSV export for cargo itineraries.
Produce a repository impact map: what must change, in which files, and which existing patterns to follow.
Annotate each bullet with the tool call that produced the evidence.
Do not write implementation code yet.</code></code></pre><p>A good response looks something like this:</p><pre><code><code># Repository Impact Map: CSV Export for Cargo Itineraries

## Evidence-Based Analysis

### 1. **Domain Layer** (No Changes Required)
**Evidence:** `java-lsp:document_symbols` on [`Itinerary.java`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/Itinerary.java:19), [`Leg.java`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/Leg.java:20), [`Cargo.java`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/Cargo.java:58)

- Domain entities already expose all necessary data via getters
- [`Itinerary.getLegs()`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/Itinerary.java:45) returns `List&lt;Leg&gt;`
- [`Leg`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/Leg.java:20) exposes: voyage, loadLocation, unloadLocation, loadTime, unloadTime
- [`Cargo.getItinerary()`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/Cargo.java:122) provides access to itinerary

### 2. **Application Layer** (No Changes Required)
**Evidence:** `read_file` on [`BookingService.java`](src/main/java/org/eclipse/cargotracker/application/BookingService.java:13), [`CargoRepository.java`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/CargoRepository.java:5)

- [`CargoRepository.findAll()`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/CargoRepository.java:9) already exists for bulk operations
- [`CargoRepository.find(TrackingId)`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/CargoRepository.java:7) for single cargo lookup

### 3. **Interface Layer - NEW REST Endpoint** (CREATE)
**Evidence:** `read_file` on [`HandlingReportService.java`](src/main/java/org/eclipse/cargotracker/interfaces/handling/rest/HandlingReportService.java:26) shows REST pattern

**File:** `src/main/java/org/eclipse/cargotracker/interfaces/booking/rest/CargoItineraryExportService.java` (NEW)
- Follow pattern from [`HandlingReportService`](src/main/java/org/eclipse/cargotracker/interfaces/handling/rest/HandlingReportService.java:26):
  - Use `@Stateless` annotation
  - Use `@Path("/booking")` for base path
  - Use `@GET` with `@Produces("text/csv")`
- Inject [`CargoRepository`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/CargoRepository.java:5)
- Endpoints:
  - `GET /booking/cargos/{trackingId}/itinerary/export` - single cargo
  - `GET /booking/cargos/itineraries/export` - all cargos

### 4. **Interface Layer - CSV Assembler** (CREATE)
**Evidence:** `java-lsp:find_symbols` query "Assembler" found [`CargoRouteDtoAssembler`](src/main/java/org/eclipse/cargotracker/interfaces/booking/facade/internal/assembler/CargoRouteDtoAssembler.java:15), `read_file` showed assembler pattern

**File:** `src/main/java/org/eclipse/cargotracker/interfaces/booking/rest/assembler/ItineraryCsvAssembler.java` (NEW)
- Follow pattern from [`CargoRouteDtoAssembler`](src/main/java/org/eclipse/cargotracker/interfaces/booking/facade/internal/assembler/CargoRouteDtoAssembler.java:15):
  - Use `@ApplicationScoped`
  - Constructor injection with `@Inject`
- Methods:
  - `String toCsv(Cargo cargo)` - single cargo to CSV
  - `String toCsv(List&lt;Cargo&gt; cargos)` - multiple cargos to CSV
- CSV Format (based on [`Leg`](src/main/java/org/eclipse/cargotracker/domain/model/cargo/Leg.java:20) structure):
  ```
  TrackingId,LegNumber,VoyageNumber,LoadLocation,LoadLocationName,UnloadLocation,UnloadLocationName,LoadTime,UnloadTime
  ```

### 5. **Dependencies** (No Changes Required)
**Evidence:** `read_file` on [`pom.xml`](pom.xml:1-100)

- Jakarta EE 10 already includes necessary APIs
- No additional CSV library needed (use standard Java StringBuilder/StringWriter)
- JSON Binding already available via Jakarta JSON Binding

### 6. **Existing Patterns to Follow**

#### REST Service Pattern
**Evidence:** [`HandlingReportService.java`](src/main/java/org/eclipse/cargotracker/interfaces/handling/rest/HandlingReportService.java:26)
```java
@Stateless
@Path("/handling")
public class HandlingReportService {
  @Inject private ApplicationEvents applicationEvents;
  
  @POST
  @Path("/reports")
  @Consumes({"application/json", "application/xml"})
  public void submitReport(...) { }
}
```

#### Assembler Pattern
**Evidence:** [`CargoRouteDtoAssembler.java`](src/main/java/org/eclipse/cargotracker/interfaces/booking/facade/internal/assembler/CargoRouteDtoAssembler.java:15)
```java
@ApplicationScoped
public class CargoRouteDtoAssembler {
  @Inject private LocationDtoAssembler locationDtoAssembler;
  
  public CargoRoute toDto(Cargo cargo) {
    // Transform domain to DTO
  }
}
```

#### Data Access Pattern
**Evidence:** [`DefaultBookingServiceFacade.java`](src/main/java/org/eclipse/cargotracker/interfaces/booking/facade/internal/DefaultBookingServiceFacade.java:88-94)
```java
@Inject private CargoRepository cargoRepository;

public List&lt;CargoRoute&gt; listAllCargos() {
  List&lt;Cargo&gt; cargos = cargoRepository.findAll();
  // Transform and return
}
```

### 7. **Architecture Compliance**
**Evidence:** `read_file` on [`AGENTS.md`](AGENTS.md) conventions

- &#9989; Strict layering maintained: interfaces &#8594; application &#8594; domain
- &#9989; REST service in `interfaces/booking/rest/` (new package)
- &#9989; Assembler in `interfaces/booking/rest/assembler/` (new package)
- &#9989; CDI constructor injection
- &#9989; Domain types remain annotation-free
- &#9989; No new application services needed

## Summary

**Files to CREATE:**
1. `src/main/java/org/eclipse/cargotracker/interfaces/booking/rest/CargoItineraryExportService.java`
2. `src/main/java/org/eclipse/cargotracker/interfaces/booking/rest/assembler/ItineraryCsvAssembler.java`

**Files to MODIFY:** None

**Pattern Sources:**
- REST endpoint: [`HandlingReportService.java`](src/main/java/org/eclipse/cargotracker/interfaces/handling/rest/HandlingReportService.java:26)
- Assembler: [`CargoRouteDtoAssembler.java`](src/main/java/org/eclipse/cargotracker/interfaces/booking/facade/internal/assembler/CargoRouteDtoAssembler.java:15)
- Repository usage: [`DefaultBookingServiceFacade.java`](src/main/java/org/eclipse/cargotracker/interfaces/booking/facade/internal/DefaultBookingServiceFacade.java:88-94)

**Testing:**
- Integration test following Arquillian pattern in `src/test/java/`
- Verify with Open Liberty: `./mvnw liberty:run -Popenliberty`</code></code></pre><p>The exact class names and file paths depend on what the assistant finds in your checkout. The shape is what matters. You want real files, real existing types, and evidence from tool calls. Not &#8220;I would create a new controller package.&#8221; Not &#8220;It may be useful to add a DTO.&#8221; Evidence first.</p><p>Why does this matter? Because wrong-layer changes are cheap to fix at the map stage and expensive to fix after implementation starts. If the impact map shows an invented package, a duplicate service, or the wrong test class, you correct it in minutes. If you wait until after the assistant generated a patch, now you are reviewing code, logic, test strategy, and architecture drift at the same time.</p><p>This is the point where the human stays in charge. The assistant explores. You approve the shape.</p><h2>Turn the Impact Map Into a Structured Task</h2><p>After the impact map is approved, you convert it into an implementation contract. This is where you stop vague prompting and start being explicit about file boundaries, behavior, and acceptance criteria.</p><p>Use a task prompt like this:</p><pre><code><code>Use the approved repository impact map below as the implementation contract.

Implement CSV export for cargo itineraries in the Cargo Tracker project.

Before you implement anything, read `AGENTS.md` and `CONVENTIONS.md` if present, and extract the rules that apply to this change.
List those rules first under a heading `Applicable conventions`.
Use those conventions as binding constraints for implementation and tests.
If the impact map conflicts with `AGENTS.md` or `CONVENTIONS.md`, stop and explain the conflict before writing code.

Important:
- Stay within the approved impact map
- Do not invent additional packages, layers, or abstractions
- Do not move this through a facade or application service
- Do not modify existing files unless a minimal compile-time change is strictly required
- If you believe an existing file must be modified, explain why before showing code
- Start by listing the exact files you will create or modify
- Then implement the change directly

## Approved scope

### Files to create
1. `src/main/java/org/eclipse/cargotracker/interfaces/booking/rest/CargoItineraryExportService.java`
2. `src/main/java/org/eclipse/cargotracker/interfaces/booking/rest/assembler/ItineraryCsvAssembler.java`

### Files to reference but not modify unless strictly required for compilation
- `src/main/java/org/eclipse/cargotracker/domain/model/cargo/Cargo.java`
- `src/main/java/org/eclipse/cargotracker/domain/model/cargo/Itinerary.java`
- `src/main/java/org/eclipse/cargotracker/domain/model/cargo/Leg.java`
- `src/main/java/org/eclipse/cargotracker/domain/model/cargo/CargoRepository.java`
- `src/main/java/org/eclipse/cargotracker/interfaces/handling/rest/HandlingReportService.java`
- `src/main/java/org/eclipse/cargotracker/interfaces/booking/facade/internal/assembler/CargoRouteDtoAssembler.java`
- `src/main/java/org/eclipse/cargotracker/interfaces/booking/facade/internal/DefaultBookingServiceFacade.java`
- `AGENTS.md`
- `CONVENTIONS.md`

## Required architecture

Follow the approved architecture exactly:

- Domain layer: no changes
- Application layer: no changes
- Interface layer: add one new REST service
- Interface layer: add one new assembler under `interfaces/booking/rest/assembler/`

Architecture constraints:
- Maintain strict layering
- Keep domain classes unchanged
- Use CDI constructor injection where conventions require it
- Keep the implementation in the interface layer
- No new facade methods
- No new application services
- No DTO layer for this feature

## Required REST endpoints

Create a new REST service class:

`src/main/java/org/eclipse/cargotracker/interfaces/booking/rest/CargoItineraryExportService.java`

Implementation requirements:
- Follow the REST structure pattern from `HandlingReportService`
- Use `@Stateless`
- Use base path `@Path("/booking")`
- Inject `CargoRepository`
- Return JAX-RS `Response`
- Produce `text/csv`

Add these endpoints exactly:

1. `GET /booking/cargos/{trackingId}/itinerary/export`
   - export one cargo itinerary as CSV

2. `GET /booking/cargos/itineraries/export`
   - export all cargo itineraries as CSV

Response requirements:
- Set `Content-Type` to `text/csv`
- Add `Content-Disposition` header so the CSV is downloadable
- Use clear file names for single-cargo and all-cargos export

## Required assembler

Create a new assembler class:

`src/main/java/org/eclipse/cargotracker/interfaces/booking/rest/assembler/ItineraryCsvAssembler.java`

Implementation requirements:
- Follow the assembler style from `CargoRouteDtoAssembler`
- Use `@ApplicationScoped`
- Use constructor injection with `@Inject`
- Work directly with domain objects

Required methods:
- `String toCsv(Cargo cargo)`
- `String toCsv(List&lt;Cargo&gt; cargos)`

CSV requirements:
- Use standard Java only
- No external CSV library
- No JSON or Jackson-based shortcut
- Build the CSV content with standard Java types

Use this exact column order:

`TrackingId,LegNumber,VoyageNumber,LoadLocation,LoadLocationName,UnloadLocation,UnloadLocationName,LoadTime,UnloadTime`

Behavior requirements:
- Include header row once
- For single cargo, output one row per itinerary leg
- For multiple cargos, output rows for all itinerary legs across all cargos
- Preserve the cargo tracking id on every row
- Handle cargos without itineraries in a way consistent with existing project behavior and explain the choice
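
Example of the expected CSV shape (illustrative values only, not taken from the project dataset):

TrackingId,LegNumber,VoyageNumber,LoadLocation,LoadLocationName,UnloadLocation,UnloadLocationName,LoadTime,UnloadTime
ABC123,1,0100S,CNHKG,Hongkong,USNYC,New York,2024-03-01T10:00,2024-03-05T18:00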

## Data access requirements

Use existing repository methods only:
- `CargoRepository.find(TrackingId)` for single cargo export
- `CargoRepository.findAll()` for all-cargo export

Do not add repository methods.
Do not add application services to wrap repository access.

## Error handling

For the single-cargo endpoint:
- If the tracking id does not resolve to a cargo, follow the project&#8217;s existing not-found style
- Do not invent a new error format unless existing REST code clearly requires it

For the all-cargos endpoint:
- Return a valid CSV response even when there are no cargos
- Explain the behavior you chose

## Constraints

- No changes to domain classes
- No changes to application services
- No changes to facade interfaces or implementations
- No UI changes
- No new dependencies
- No unrelated refactoring
- No endpoint path changes
- No alternative package placement

## Testing

Add integration test coverage following the existing Arquillian style in `src/test/java/`.

Required tests:
1. Export single cargo itinerary as CSV
2. Return not-found behavior for unknown tracking id
3. Export all cargo itineraries as CSV
4. Return a valid CSV response shape for the empty-data case if applicable

Testing rules:
- Follow existing project conventions
- Keep tests minimal but real
- Assert content type
- Assert response status
- Assert header presence where relevant
- Assert CSV header row
- Assert at least one representative CSV row value
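
Illustrative assertion shapes (placeholder identifiers; adapt to the project's existing test style):

- `assertEquals(200, response.getStatus())`
- `assertTrue(response.getHeaderString("Content-Type").startsWith("text/csv"))`
- `assertTrue(csvBody.startsWith("TrackingId,LegNumber,VoyageNumber"))`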

## Verification runtime

This feature must be verified using the Open Liberty profile.

Use these commands for final verification:
- `./mvnw clean test`
- `./mvnw clean package -Popenliberty`
- `./mvnw liberty:run -Popenliberty`

Do not use a different runtime profile for final verification.

## Output format

When you respond:
1. Show `Applicable conventions`
2. Show the exact files you will create or modify
3. Show the full code for each new or changed file
4. Show the full test code
5. Show any minimal deviation from the impact map
6. Show a `Conventions check` section against `AGENTS.md` and `CONVENTIONS.md`
7. End with the exact Maven verification commands using Open Liberty</code></code></pre><p>This kind of prompt is much harder for the assistant to misunderstand. We define scope, file boundaries, constraints, and what &#8220;done&#8221; means. That reduces wandering. It also makes review easier because you can compare the resulting diff against the declared contract.</p><p>The practical difference is huge. &#8220;Implement CSV export&#8221; invites invention. A structured task grounded in a reviewed impact map invites extension of existing code.</p><h2>Configure and Activate the Harness in Bob</h2><p>At this point, the project-local configuration exists. The bridge JAR is in place. The assistant still needs to load and use the servers.</p><p>The exact Bob UI labels can change between releases, so do not get attached to the menu wording. What matters is that the workspace opens with the repository root and Bob loads <code>.bob/mcp.json</code> from that project.</p><p>When this works correctly, the assistant should expose the configured servers and tools without requiring a second round of manual setup. If it does not, treat that as a configuration mismatch or client-version mismatch, not as proof that the idea failed.</p><p>A good first smoke check inside Bob is intentionally tiny:</p><pre><code><code>Call find_symbols with query "CargoRepository" and paste the first result path only.</code></code></pre><p>This test is good because it is narrow. It does not ask the assistant to think. It asks it to prove the tool wiring works.</p><p>If the response includes a real path under <code>src/main/java</code>, the semantic side of the harness is alive.</p><p>Next, test the filesystem scope:</p><pre><code><code>Use the filesystem tool to read README.md at the repository root.</code></code></pre><p>If your filesystem server is correctly scoped to <code>src/</code>, this request should fail or return an out-of-scope error. That refusal is success. 
It proves your boundaries are working.</p><p>Then test Git context with something equally small:</p><pre><code><code>Use git_log on the existing cargo facade integration test and summarize the most recent relevant change in one sentence.</code></code></pre><p>This tells you whether the assistant can consume project history without inventing it.</p><p>These tiny tests matter because they isolate failure. If you jump straight into a feature request and it goes wrong, you do not know whether the problem is the bridge, the server scope, the task prompt, the client, or the repository. Small smoke tests make the failure visible sooner.</p><h2>Production Hardening</h2><p>A harness that works on one laptop is not enough. We need to think about what happens when this setup becomes a team habit.</p><h3>What happens under load</h3><p><code>jdtls</code> is not a trivial process. On multi-module repositories, it consumes real memory and CPU during indexing. Cargo Tracker is manageable, but bigger enterprise repositories expose the cost quickly. This is why we start with a one-gigabyte heap and why we keep a dedicated workspace cache.</p><p>If developers switch branches aggressively, the <code>jdtls</code> cache can become stale or noisy. When that happens, the assistant starts returning confusing symbol results or slow responses. The fix is operational, not architectural: use a dedicated cache location and be willing to wipe it when the workspace state is corrupted.</p><p>For example, you can move from <code>/tmp</code> to a stable directory:</p><pre><code><code>mkdir -p ~/.cache/jdtls-cargotracker</code></code></pre><p>Then update your launch configuration to use that directory as the <code>-data</code> location. This makes indexing more stable across sessions and gives you one place to clean up when the cache becomes suspect.</p><h3>Security and blast radius</h3><p>Read-only behavior is not automatic. 
It is designed.</p><p>Git MCP servers often expose both read and write operations. Filesystem servers expose whatever path you give them. If you point the filesystem server at <code>${workspaceFolder}</code>, you are giving the assistant visibility into everything under the repository root, including local experiments, build output, or accidentally committed secrets.</p><p>That is why this tutorial scopes the filesystem server to <code>src/</code>. It is a deliberate loss of convenience in exchange for a smaller blast radius.</p><p>Pinning tool versions is part of the same story. <code>@latest</code> and ephemeral <code>uvx</code> installs are convenient for first setup. They are not good long-term operational defaults. Once the team validates a version combination, pin it and record it. Otherwise, you will debug &#8220;AI behavior changes&#8221; that are really dependency drift in supporting tools.</p><h3>Portability and machine-specific configuration</h3><p>A committed <code>.bob/mcp.json</code> is good. A committed file with a macOS-only <code>JAVA_HOME</code> path is half-good.</p><p>Teams usually solve this one of two ways. The first way is to keep the committed file generic and rely on inherited environment variables for Java. The second way is to maintain two small documented variants internally, one for macOS and one for Linux. The wrong answer is leaving a personal desktop path in the repo and hoping nobody else notices.</p><p>You should also think about whether Bob itself runs on the host, inside a dev container, or through a remote development path. That changes path semantics and environment inheritance. The harness still works, but you want to be clear about which process owns Java, which process sees the workspace root, and where logs land.</p><h3>Supply chain risk in helper tools</h3><p><code>npx</code> and <code>uvx</code> are useful because they remove friction. They also resolve packages at invocation time unless pinned. 
That means the harness can change underneath you without a repository diff.</p><p>This is not just theoretical. Tool name changes, dependency updates, or package behavior differences can silently change what Bob sees. In a solo workflow that is annoying. In a shared workflow it becomes a support problem.</p><p>A practical team response is simple. Pin versions after validation. Record the version set in a small internal note or in the repository docs. Review upgrades like you would any other tooling change.</p><h3>Human review boundaries</h3><p>The biggest mistake teams make with this setup is thinking better tools remove the need for review. They do not. They move review earlier and make it cheaper.</p><p>The impact map is the first checkpoint. The structured task is the second. The code review is still there after that. What changes is the quality of the diff reaching review. You get fewer invented endpoints, fewer package mistakes, and fewer changes that violate conventions simply because the assistant had a smaller, better-defined space to operate in.</p><h2>Verification</h2><p>Now let&#8217;s verify the whole setup step by step.</p><h3>Check Java and the Cargo Tracker build</h3><p>Run this from the repository root:</p><pre><code><code>cd /path/to/cargotracker
java -version
./mvnw clean package -Popenliberty</code></code></pre><p>Expected result: <code>java -version</code> reports Java 21 or later, and Maven completes without <code>BUILD FAILURE</code>.</p><p>This verifies the foundation. If this step fails, none of the assistant tooling is trustworthy because the project state itself is broken.</p><h3>Check Bob can call semantic tools</h3><p>In Bob, send this prompt:</p><pre><code><code>Call find_symbols with query "CargoRepository" and paste the first result path only.</code></code></pre><p>Expected result: a real path inside <code>src/main/java</code>, something along these lines:</p><pre><code><code>src/main/java/org/eclipse/cargotracker/domain/model/cargo/CargoRepository.java</code></code></pre><p>The exact package may differ depending on the project revision, but it must point into the actual source tree.</p><p>This verifies that Bob can launch the bridge, the bridge can talk to <code>jdtls</code>, and the assistant can receive the result.</p><h3>Check the filesystem boundary</h3><p>In Bob, send this prompt:</p><pre><code><code>Use the filesystem tool to read README.md from the repository root.</code></code></pre><p>Expected result: a refusal, an out-of-scope error, or a message indicating the file is outside the allowed path.</p><p>This verifies that your filesystem scope is doing what you intended. If Bob reads the file successfully, your path is too wide for the harness described in this tutorial.</p><h3>Check Git context</h3><p>In Bob, send this prompt:</p><pre><code><code>Use git_log on the cargo facade integration test and summarize the most recent relevant change in one sentence.</code></code></pre><p>Expected result: a short answer grounded in actual commit history.</p><p>This verifies that the assistant can bring recent repository history into its planning loop. 
That matters when a feature needs to follow an existing testing style or recent implementation pattern.</p><h3>Check the planning workflow</h3><p>Now run the real planning test:</p><pre><code><code>Use the java-lsp and filesystem tools to inspect the Cargo Tracker project.
I want to add CSV export for cargo itineraries.
Produce a repository impact map with files, layers, and existing patterns to follow.
Annotate each bullet with the tool call that produced the evidence.
Do not write code yet.</code></code></pre><p>Expected result: a grounded map that names existing files and classes, stays inside existing package structure, and shows evidence.</p><p>This is the real proof that the harness works. Not just that the tools launch, but that the assistant changes behavior and plans against repository reality.</p><h2>Architecture Recap</h2><p>From the assistant&#8217;s point of view, the setup looks like this:</p><pre><code><code>IBM Bob
&#9500;&#9472;&#9472; java-lsp (LSP4J-MCP to jdtls)
&#9474;     find_symbols / find_references / find_definition / document_symbols
&#9500;&#9472;&#9472; filesystem (@modelcontextprotocol/server-filesystem on src/)
&#9474;     read_file / list_directory / search_files
&#9492;&#9472;&#9472; git (mcp-server-git)
      git_log / git_diff / git_show</code></code></pre><p>Each server answers a different question.</p><p>The Java language server answers, &#8220;What does this code mean?&#8221;</p><p>The filesystem server answers, &#8220;What is actually written in the allowed source tree?&#8221;</p><p>The Git server answers, &#8220;What changed recently, and what implementation history should we respect?&#8221;</p><p>That split is the whole point. A single prompt is not enough. Good code generation needs semantic context, textual context, and often historical context.</p><h2>Further Reading</h2><p>If you want to continue from here, the next useful documents are the <a href="https://bob.ibm.com/docs/ide">IBM Bob documentation</a> for project-level MCP configuration, the Cargo Tracker repository itself for understanding the domain and package layout, and the <code>LSP4J-MCP</code> project for the exact bridge behavior and supported tool names.</p><p>You should also keep your own small internal note for version combinations that your team has validated. This sounds boring. It saves time.</p><h2>Conclusion</h2><p>We built a small but practical harness around Cargo Tracker so the assistant can plan against classpath reality, stay inside a controlled source boundary, and use repository history when it matters. That changes the quality of AI-generated work because it removes the assistant&#8217;s need to guess about symbols, layers, and existing patterns. The real lesson is not specific to Cargo Tracker or IBM Bob. 
<strong>AI coding quality follows environment quality.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Build a Digital Credentialing Platform with Quarkus]]></title><description><![CDATA[Most badge systems look simple at first.]]></description><link>https://www.the-main-thread.com/p/build-digital-credentialing-platform-quarkus</link><guid isPermaLink="false">https://www.the-main-thread.com/p/build-digital-credentialing-platform-quarkus</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Tue, 21 Apr 2026 06:08:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/263c3fc9-7487-46f2-b6a3-981a42912a6f_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most badge systems look simple at first. Store a learner row, attach a PNG, send an email, done. That works until the first real trust question shows up. Who issued this credential? Was it an authorized partner? Could a partner accidentally, or deliberately, issue 10,000 badges through one weak webhook?</p><p>This is where many &#8220;badge platforms&#8221; stop being platforms and start looking like decorative metadata stores. A credential is closer to an invoice, a certificate, or an audit record. It needs identity, issuer proof, stable URLs, replay protection, and a clean story for revocation. If any of that is missing, the badge still renders nicely in a browser, but it does not hold up when another system tries to trust it.</p><p>The production problem is usually not the JSON shape. The production problem is the trust boundary.
I have seen systems where anyone with a guessed callback URL could trigger issuance. I have seen systems where the signed artifact and the hosted public JSON disagreed on recipient identity because hashing logic lived in two different classes. I have seen schema changes break partner mappings because the Java embeddable key no longer matched the database primary key.</p><p>In this tutorial we build <strong>TheMainThread Academy</strong>, a single Quarkus application that issues Open Badge 2.0 style credentials. We define badge templates, issue signed assertions, expose verifier-facing JSON at stable URLs, render earner-facing HTML with Qute, and accept signed partner callbacks using HMAC-SHA256. The important part is not only that it works. The important part is that it fails in predictable ways when something is wrong.</p><p>The stack is deliberately boring in the right places: Hibernate ORM with Panache for persistence, Flyway for schema control, SmallRye JWT for signing assertions, Quarkus Mailer with Dev Services Mailpit for local email, and PostgreSQL from Dev Services so <code>./mvnw quarkus:dev</code> is enough to get a working system on a laptop with Podman or Docker.</p><h2><strong>Prerequisites</strong></h2><p>You should be comfortable reading JAX-RS resources, JPA entities, and SQL migrations. The steps assume a Unix shell for <code>curl</code> and <code>openssl</code>.</p><ul><li><p>Java 21 installed (the generated module targets release 21)</p></li><li><p>Maven 3.9+ or the included <code>./mvnw</code> in the module</p></li><li><p>Quarkus CLI optional but recommended (<code>quarkus create app</code>)</p></li><li><p>Podman or Docker for Dev Services (PostgreSQL and Mailpit)</p></li></ul><h2><strong>Project setup</strong></h2><p>Create the application from the Quarkus CLI so everyone lands on the same extension IDs as the current platform stream. 
<br>You can also directly <a href="https://github.com/myfear/the-main-thread/tree/main/badge-platform">start from my GitHub repository</a>.</p><pre><code><code>quarkus create app academy.themainthread:badge-platform \
  --package-name=academy.themainthread \
  -B \
  --extensions=rest,rest-jackson,rest-qute,hibernate-orm-panache,jdbc-postgresql,smallrye-jwt,mailer,quarkus-mailpit,qute,hibernate-validator,smallrye-openapi,scheduler,flyway
cd badge-platform</code></code></pre><p>Extensions explained:</p><ul><li><p><code>rest</code> and <code>rest-jackson</code>: JSON admin APIs and Jackson <code>ObjectMapper</code> for webhook parsing</p></li><li><p><code>rest-qute</code>: return <code>TemplateInstance</code> from the same resource classes that serve JSON</p></li><li><p><code>hibernate-orm-panache</code>: active record style entities for earners, templates, assertions, partners</p></li><li><p><code>jdbc-postgresql</code>: production driver plus Agroal pool (Dev Services wires a container automatically)</p></li><li><p><code>smallrye-jwt</code>: sign assertion JWTs with an RSA private key from the classpath</p></li><li><p><code>mailer</code>: send award notifications</p></li><li><p><code>quarkus-mailpit</code>: Dev Email UI for testing</p></li><li><p><code>qute</code>: server-side HTML for humans</p></li><li><p><code>hibernate-validator</code>: request body validation on admin and webhook payloads</p></li><li><p><code>smallrye-openapi</code>: Swagger UI for operators</p></li><li><p><code>scheduler</code>: reserved for future housekeeping (expiry sweeps, webhook retries)</p></li><li><p><code>flyway</code>: versioned schema, no reliance on Hibernate auto-DDL in any profile</p></li></ul><h2><strong>Configuration</strong></h2><p>Create <code>src/main/resources/application.properties</code> with the keys below. Each line matters: missing JWT signing material fails startup, a wrong issuer string breaks interoperability with off-the-shelf verifiers, and an oversized webhook body becomes a cheap DoS handle.</p><pre><code><code># Datasource &#8212; Dev Services starts PostgreSQL in dev and test
quarkus.datasource.db-kind=postgresql
quarkus.hibernate-orm.schema-management.strategy=none
quarkus.flyway.migrate-at-start=true

# JWT verify (unused on most endpoints today, but keeps SmallRye JWT config consistent)
mp.jwt.verify.issuer=https://academy.themainthread.dev
mp.jwt.verify.public-key.location=META-INF/resources/public.pem
smallrye.jwt.sign.key.location=META-INF/resources/private.pem
smallrye.jwt.new-token.issuer=https://academy.themainthread.dev
smallrye.jwt.new-token.lifespan=315360000
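# 315360000 seconds = ten years; each assertion row still gets its own expiry at signing time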

# Canonical base URL embedded in assertion and badge identifiers
academy.base-url=http://localhost:8080
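# Must match the hostname verifiers actually use, e.g. in production:
# %prod.academy.base-url=https://badges.example.com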

# Mailer &#8212; Dev Services Mailpit in dev; mocked in tests
quarkus.mailer.from=badges@academy.themainthread.dev

# OpenAPI
quarkus.smallrye-openapi.info-title=TheMainThread Academy Badge API
quarkus.smallrye-openapi.info-version=1.0.0

# Webhook hardening
quarkus.http.limits.max-body-size=1M

%test.quarkus.mailer.mock=true</code></code></pre><p>Each setting explained:</p><ul><li><p><code>quarkus.datasource.db-kind=postgresql</code>: selects the PostgreSQL dialect and driver. Without a JDBC URL in dev, Dev Services supplies one. In production you add <code>quarkus.datasource.username</code>, <code>quarkus.datasource.password</code>, and <code>quarkus.datasource.jdbc.url</code>. If those are wrong, the pool never connects and health checks go red instead of silently falling back to H2.</p></li><li><p><code>quarkus.hibernate-orm.schema-management.strategy=none</code>: Hibernate must not mutate tables at runtime because Flyway owns the truth. If you flip this to <code>drop-and-create</code>, you will eventually run a deploy against real data and delete earners. The older <code>database.generation</code> key is deprecated on current Quarkus lines; use <code>schema-management.strategy</code> instead.</p></li><li><p><code>quarkus.flyway.migrate-at-start=true</code>: applies <code>db/migration</code> scripts before serving traffic. If a migration fails, the process exits. That is preferable to half-applied manual DDL.</p></li><li><p><code>mp.jwt.verify.issuer</code><strong> and </strong><code>mp.jwt.verify.public-key.location</code>: align verification settings with what external tooling expects if you later add MP-JWT protected routes. They do not hurt issuance-only flows, but an unreadable <code>public.pem</code> fails fast at startup.</p></li><li><p><code>smallrye.jwt.sign.key.location</code>: path to the RSA private key PEM inside the application jar. If the file is missing, signing fails at runtime when you issue the first badge.</p></li><li><p><code>smallrye.jwt.new-token.issuer</code><strong> and </strong><code>lifespan</code>: issuer claim and default token lifetime for APIs that mint JWTs. 
Assertion signing sets its own expiry per row, but the platform property must still be valid seconds.</p></li><li><p><code>academy.base-url</code>: every hosted assertion URL and <code>verification.creator</code> pointer is built from this string. If it does not match the hostname clients use, verifiers fetch the wrong host and hosted verification fails even when signatures are valid.</p></li><li><p><code>quarkus.mailer.from</code>: required envelope sender. Misconfigure SMTP in prod and Mailer throws; in dev, Mailpit accepts anything.</p></li><li><p><code>quarkus.smallrye-openapi.*</code>: metadata only. Wrong values confuse operators, not runtime.</p></li><li><p><code>quarkus.http.limits.max-body-size</code>: caps partner webhook bodies. Without a limit, a gzip bomb or megabyte-scale JSON ties up threads and disk.</p></li><li><p><code>%test.quarkus.mailer.mock=true</code>: keeps <code>@QuarkusTest</code> from needing real SMTP while still exercising <code>Mailer</code> calls.</p></li></ul><p>Generate RSA keys once per environment (never reuse demo keys in production):</p><pre><code><code>openssl genrsa -out src/main/resources/META-INF/resources/private.pem 2048
openssl rsa -in src/main/resources/META-INF/resources/private.pem \
  -pubout -out src/main/resources/META-INF/resources/public.pem</code></code></pre><h2><strong>Database schema with Flyway</strong></h2><p>Flyway is the contract between your Java entities and what actually exists in PostgreSQL. Hibernate maps rows; Flyway guarantees indexes, uniqueness, and composite keys survive refactors.</p><p>Create <code>src/main/resources/db/migration/V1__initial_schema.sql</code>:</p><pre><code><code>CREATE TABLE earner (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email       VARCHAR(255) NOT NULL UNIQUE,
    name        VARCHAR(255) NOT NULL,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE badge_template (
    id           UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name         VARCHAR(255) NOT NULL,
    description  TEXT NOT NULL,
    criteria     TEXT NOT NULL,
    image_url    VARCHAR(512) NOT NULL,
    skills       TEXT,
    created_at   TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE accredited_partner (
    id             UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name           VARCHAR(255) NOT NULL,
    webhook_secret VARCHAR(255) NOT NULL,
    active         BOOLEAN NOT NULL DEFAULT true,
    created_at     TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE partner_badge_template (
    partner_id        UUID NOT NULL REFERENCES accredited_partner(id),
    course_id         VARCHAR(255) NOT NULL,
    badge_template_id UUID NOT NULL REFERENCES badge_template(id),
    PRIMARY KEY (partner_id, course_id)
);

CREATE TABLE badge_assertion (
    id            UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    earner_id     UUID NOT NULL REFERENCES earner(id),
    template_id   UUID NOT NULL REFERENCES badge_template(id),
    issued_on     TIMESTAMPTZ NOT NULL DEFAULT now(),
    expires_at    TIMESTAMPTZ,
    revoked       BOOLEAN NOT NULL DEFAULT false,
    revoke_reason VARCHAR(512),
    signed_token  TEXT NOT NULL,
    salt          VARCHAR(64) NOT NULL
);

CREATE TABLE webhook_event (
    id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    partner_id      UUID NOT NULL REFERENCES accredited_partner(id),
    idempotency_key VARCHAR(255) NOT NULL,
    payload         TEXT NOT NULL,
    status          VARCHAR(32) NOT NULL DEFAULT 'RECEIVED',
    received_at     TIMESTAMPTZ NOT NULL DEFAULT now(),
    processed_at    TIMESTAMPTZ,
    error           TEXT,
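    -- replays reuse the same key; the duplicate insert fails here and maps to HTTP 409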
    UNIQUE (partner_id, idempotency_key)
);

CREATE INDEX idx_assertion_earner ON badge_assertion(earner_id);
CREATE INDEX idx_assertion_template ON badge_assertion(template_id);
CREATE INDEX idx_webhook_status ON webhook_event(status);</code></code></pre><p>The composite primary key on <code>partner_badge_template</code> is <code>(partner_id, course_id)</code>. That matches how partners think (their course catalog), and it forces the JPA embeddable id to carry <code>partnerId</code> plus <code>courseId</code>, not <code>badgeTemplateId</code>. A mismatch here is the kind of bug that passes code review and explodes the first time two templates share a course code.</p><h2><strong>Implementation: domain model</strong></h2><p>Panache active record keeps the tutorial focused on behavior instead of repository interfaces. Each entity is a <code>PanacheEntityBase</code> subclass under <code>academy.themainthread.domain</code>, mirroring the Flyway tables. <code>Earner</code>, <code>BadgeTemplate</code>, <code>BadgeAssertion</code>, <code>AccreditedPartner</code>, and <code>WebhookEvent</code> follow the shapes shown in the repository. The two pieces that deserve extra attention in prose are the partner mapping and the assertion row.</p><p><code>PartnerBadgeTemplate</code> embeds <code>PartnerBadgeTemplateId</code> with <code>partnerId</code> and <code>courseId</code> columns. <code>badge_template_id</code> is a normal foreign key column on the entity, not part of the primary key. <code>findByCourseId</code> is a typed query on those columns. If you model the embeddable with <code>badgeTemplateId</code> instead while the database uses <code>course_id</code> in the primary key, Hibernate will compile and your integration tests will fail in confusing ways.</p><p><code>BadgeAssertion</code> uses an application-assigned UUID primary key. The signing step needs the final assertion URL before insert, and PostgreSQL rejects a nullable <code>signed_token</code> column. The production code therefore assigns <code>assertion.id = UUID.randomUUID()</code>, computes <code>signedToken</code>, then calls <code>persist()</code> once. 
A two-step &#8220;insert with null token, update later&#8221; pattern fails the NOT NULL constraint the moment Hibernate flushes the first insert.</p><p>The listing below is the partner mapping that must agree with the SQL primary key. Everything else in <code>academy.themainthread.domain</code> matches the Flyway tables line for line in the repository.</p><pre><code><code>package academy.themainthread.domain;

import jakarta.persistence.Column;
import jakarta.persistence.Embeddable;

import java.io.Serializable;
import java.util.Objects;
import java.util.UUID;

@Embeddable
public class PartnerBadgeTemplateId implements Serializable {

    @Column(name = "partner_id", nullable = false)
    public UUID partnerId;

    @Column(name = "course_id", nullable = false, length = 255)
    public String courseId;

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        PartnerBadgeTemplateId that = (PartnerBadgeTemplateId) o;
        return Objects.equals(partnerId, that.partnerId) &amp;&amp; Objects.equals(courseId, that.courseId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(partnerId, courseId);
    }
}</code></code></pre><pre><code><code>package academy.themainthread.domain;

import io.quarkus.hibernate.orm.panache.PanacheEntityBase;
import jakarta.persistence.EmbeddedId;
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.JoinColumn;
import jakarta.persistence.ManyToOne;
import jakarta.persistence.MapsId;
import jakarta.persistence.Table;

@Entity
@Table(name = "partner_badge_template")
public class PartnerBadgeTemplate extends PanacheEntityBase {

    @EmbeddedId
    public PartnerBadgeTemplateId id;

    @ManyToOne(fetch = FetchType.LAZY)
    @MapsId("partnerId")
    @JoinColumn(name = "partner_id")
    public AccreditedPartner partner;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "badge_template_id", nullable = false)
    public BadgeTemplate template;

    public static PartnerBadgeTemplate findByCourseId(AccreditedPartner partner, String courseId) {
        return find("id.partnerId = ?1 AND id.courseId = ?2", partner.id, courseId).firstResult();
    }
}</code></code></pre><h2><strong>Implementation: recipient hashing and JWT signing</strong></h2><p>Hosted Open Badge flows expect a <code>recipient</code> object with <code>type</code>, <code>hashed</code>, <code>salt</code>, and <code>identity</code> where <code>identity</code> is <code>sha256$</code> plus a lowercase hex digest of the salted email. The JWT and the public JSON endpoint must agree on that string or verifiers cannot correlate machine-readable and human-readable views.</p><p><code>RecipientIdentity</code> centralizes the digest. <code>AssertionSigner</code> builds the JWT claims map, pulls <code>academy.base-url</code> for every URL-shaped claim, and caps JWT expiry using either the assertion&#8217;s <code>expiresAt</code> or a far-future default. Using <code>Long.MAX_VALUE</code> as an epoch second is a bad fit for JWT libraries and some parsers; the implementation clamps to roughly ten years when no explicit expiry is set.</p><h2><strong>Implementation: issuance and events</strong></h2><p><code>BadgeIssuanceService</code> is <code>@ApplicationScoped</code> and transactional. It wires <code>AssertionSigner</code> and fires <code>BadgeIssuedEvent</code> after persistence so mail observers see a stable assertion id. The transaction boundary here is only the database. Mail delivery and external HTTP calls are not rolled back if SMTP later fails, which is why <code>BadgeAwardMailer</code> catches exceptions per message and logs them instead of pretending email is transactional.</p><pre><code><code>package academy.themainthread.badge;

import academy.themainthread.domain.BadgeAssertion;
import academy.themainthread.domain.BadgeTemplate;
import academy.themainthread.domain.Earner;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Event;
import jakarta.inject.Inject;
import jakarta.transaction.Transactional;

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.UUID;

@ApplicationScoped
public class BadgeIssuanceService {

    @Inject
    AssertionSigner signer;

    @Inject
    Event&lt;BadgeIssuedEvent&gt; badgeIssuedEvent;

    @Transactional
    public BadgeAssertion issue(Earner earner, BadgeTemplate template, Instant expiresAt) {
        BadgeAssertion assertion = new BadgeAssertion();
        assertion.id = UUID.randomUUID();
        assertion.earner = earner;
        assertion.template = template;
        assertion.issuedOn = Instant.now();
        assertion.expiresAt = expiresAt;
        assertion.salt = AssertionSigner.generateSalt();
        assertion.signedToken = signer.sign(assertion);
        assertion.persist();

        badgeIssuedEvent.fire(new BadgeIssuedEvent(assertion.id, earner.email, earner.name, template.name));

        return assertion;
    }

    @Transactional
    public BadgeAssertion issueWithDefaultExpiry(Earner earner, BadgeTemplate template) {
        Instant expires = Instant.now().plus(365L * 2L, ChronoUnit.DAYS);
        return issue(earner, template, expires);
    }

    @Transactional
    public void revoke(UUID assertionId, String reason) {
        BadgeAssertion assertion = BadgeAssertion.findById(assertionId);
        if (assertion == null) {
            throw new IllegalArgumentException("Assertion not found: " + assertionId);
        }
        assertion.revoked = true;
        assertion.revokeReason = reason;
        assertion.persist();
    }
}</code></code></pre><p>The <code>issue</code> method is the heart of the trust story. A single <code>persist()</code> after signing avoids a flush that writes <code>signed_token = null</code>, which PostgreSQL rejects. Firing <code>BadgeIssuedEvent</code> after <code>persist()</code> means downstream code can safely build URLs that hit the database.</p><h2><strong>Implementation: admin REST API</strong></h2><p><code>AdminResource</code> under <code>/admin</code> exposes JSON endpoints for templates, earners, manual issuance, partners, and course mappings. Responses use real HTTP status codes: <code>409</code> when an earner email already exists or when a webhook replay hits the same idempotency key, <code>404</code> when foreign keys do not resolve.</p><p>The admin API is intentionally unauthenticated in this repository so the article stays inside one service. Production hardening below calls out what has to change before you expose it past localhost.</p><h2><strong>Implementation: public verification, JSON, and Qute</strong></h2><p><code>PublicResource</code> serves <code>/assertions/{id}</code>, <code>/badges/{id}</code>, <code>/earners/{id}</code>, and <code>/keys/1</code>. For assertions and badges, the same paths return HTML when the client prefers <code>text/html</code> and JSON-LD shaped maps when the client sends <code>Accept: application/json</code>. Qute templates live under <code>src/main/resources/templates/</code> and share <code>layout.html</code>.</p><p>The <code>/keys/1</code> handler reads <code>META-INF/resources/public.pem</code> from the classpath and returns JSON with a <code>publicKeyPem</code> field. 
That is enough for readers to wire real JWK publishing later; the important part for this tutorial is that verifiers have a stable URL that returns the public half of the signing key material.</p><h2><strong>Implementation: webhook ingestion and processing</strong></h2><p>Three classes split the work so transactions behave honestly.</p><p><code>WebhookIngestionService</code> exposes a single <code>@Transactional</code> method that inserts a <code>WebhookEvent</code> row. <code>WebhookResource</code> validates the HMAC signature, parses JSON with Jackson, validates Bean Validation constraints, checks that <code>partnerId</code> inside the JSON matches the <code>X-Partner-Id</code> header, rejects duplicates with <code>409</code>, then calls the ingestion service and only afterwards fires <code>CourseCompletionEvent</code>.</p><p>Splitting persistence this way matters. If you fire a CDI event while still inside the same transaction that created the row, an <code>@ObservesAsync</code> listener can start before commit and not see the insert. The ingestion service completes and commits before <code>fire()</code>, so synchronous or asynchronous observers see committed data.</p><p><code>CourseCompletionObserver</code> uses <code>@Observes</code> (synchronous) with <code>@Transactional(TxType.REQUIRES_NEW)</code> so it opens a clean transaction for badge issuance and webhook status updates. <code>@ObservesAsync</code> is attractive for a <code>202 Accepted</code> story, but without a message broker you still need strict ordering between &#8220;row visible&#8221; and &#8220;handler runs&#8221;. 
The mail path keeps <code>@ObservesAsync</code> on <code>BadgeAwardMailer</code> so HTTP threads are not blocked on SMTP.</p><p><code>HmacVerifier</code> computes <code>HmacSHA256</code> over the raw bytes the partner signed and compares digests with <code>MessageDigest.isEqual</code> to avoid timing leaks from <code>String.equals</code>.</p><pre><code><code>package academy.themainthread.webhook;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import java.nio.charset.StandardCharsets;
import java.security.InvalidKeyException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public final class HmacVerifier {

    private HmacVerifier() {}

    public static boolean verify(String payload, String signature, String secret) {
        if (signature == null || !signature.startsWith("sha256=")) {
            return false;
        }
        String provided = signature.substring(7);
        String computed = compute(payload, secret);
        return MessageDigest.isEqual(
                provided.getBytes(StandardCharsets.UTF_8), computed.getBytes(StandardCharsets.UTF_8));
    }

    public static String compute(String payload, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] hash = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(hash);
        } catch (NoSuchAlgorithmException | InvalidKeyException e) {
            throw new IllegalStateException("HMAC-SHA256 failed", e);
        }
    }
}</code></code></pre><p><code>WebhookIngestionService</code> is intentionally tiny. It gives you one transaction that ends before <code>CourseCompletionEvent</code> fires, which keeps observers honest whether they are synchronous or asynchronous.</p><pre><code><code>package academy.themainthread.webhook;

import academy.themainthread.domain.AccreditedPartner;
import academy.themainthread.domain.WebhookEvent;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.transaction.Transactional;

@ApplicationScoped
public class WebhookIngestionService {

    @Transactional
    public WebhookEvent recordReceived(AccreditedPartner partner, String idempotencyKey, String rawBody) {
        WebhookEvent event = new WebhookEvent();
        event.partner = partner;
        event.idempotencyKey = idempotencyKey;
        event.payload = rawBody;
        event.status = WebhookEvent.Status.RECEIVED;
        event.persist();
        return event;
    }
}</code></code></pre><h2><strong>Implementation: mail notifications</strong></h2><p><code>BadgeAwardMailer</code> listens for <code>BadgeIssuedEvent</code> with <code>@ObservesAsync</code>, builds simple HTML, and calls <code>Mailer.send</code>. In tests, <code>%test.quarkus.mailer.mock=true</code> records messages without network I/O.</p><h2><strong>Production hardening</strong></h2><h3><strong>Webhook abuse and partner trust</strong></h3><p>Partners authenticate with a shared secret and an HMAC over the exact raw body bytes. If you normalize JSON (pretty print, reorder keys) before verifying, signatures that were valid on the partner side will fail on yours. The resource method takes <code>String rawBody</code> intentionally. Rate limiting, IP allow lists, and per-partner quotas belong in an API gateway or filter in front of this resource. The <code>quarkus.http.limits.max-body-size</code> property is only a coarse backstop.</p><h3><strong>Admin surface and OIDC</strong></h3><p>Every admin endpoint is public in this demo. In production you terminate TLS at your edge, require OIDC (for example Quarkus <code>quarkus-oidc</code>) or mutual TLS for automation, and narrow CORS. Until then, treat <code>localhost</code> as the trust boundary.</p><h3><strong>Assertion privacy and rotation</strong></h3><p>Recipient email hashing protects against casual scraping, but anyone who knows the email and salt can recompute the digest. Treat salts as disclosure-sensitive metadata, not a second password. 
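That recomputation takes only a few lines of plain JDK code. The sketch below is illustrative rather than the repository&#8217;s <code>RecipientIdentity</code>, and it assumes the salt is appended to the plaintext email before hashing:</p><pre><code><code>import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public final class RecipientDigest {

    // Lowercase hex SHA-256 of email + salt, prefixed with the algorithm marker.
    // Anyone holding both inputs can reproduce the public identity string.
    public static String identity(String email, String salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] hash = md.digest((email + salt).getBytes(StandardCharsets.UTF_8));
            return "sha256$" + HexFormat.of().formatHex(hash);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }
}</code></code></pre><p>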
Plan for key rotation by versioning <code>/keys/{n}</code> and keeping old public keys available until assertions signed with them expire.</p><h2><strong>Verification</strong></h2><h3><strong>Automated integration test</strong></h3><p>From <code>badge-platform</code>:</p><pre><code><code>./mvnw test</code></code></pre><p>The <code>AcademyWorkflowTest</code> class posts a template, registers a partner, maps a course, sends a signed webhook, then polls <code>/admin/assertions</code> until an assertion exists. It finally requests public JSON for the assertion and checks that <code>recipient.identity</code> contains the <code>sha256$</code> prefix, and that <code>/keys/1</code> returns PEM material.</p><p>You should see Quarkus start with the <code>test</code> profile, Flyway apply <code>V1__initial_schema.sql</code>, tests pass, and the JVM exit code <code>0</code>.</p><h3><strong>Manual curl walkthrough</strong></h3><p>Start dev mode (this blocks; use a second terminal for curls):</p><pre><code><code>./mvnw quarkus:dev</code></code></pre><p>Capture IDs in shell variables so you never paste the literal string <code>TEMPLATE_ID</code> into JSON (that value is not a UUID, so Bean Validation rejects the request and <strong>no course mapping is stored</strong>). Without a mapping, the webhook still returns <code>202 Accepted</code> because ingestion succeeded, but <strong>no assertion is issued</strong> and <code>jq '.[0].earner.id'</code> returns <code>null</code> either because the assertions list is empty or because <code>[0]</code> does not exist.</p><p>Create a badge template and store its id:</p><pre><code><code>export TEMPLATE_ID="$(
  curl -sS -X POST http://localhost:8080/admin/badges \
    -H "Content-Type: application/json" \
    -d '{
      "name": "Quarkus Developer",
      "description": "Awarded to developers who demonstrate proficiency in building cloud-native Java applications with Quarkus.",
      "criteria": "Complete the Quarkus Fundamentals course and pass the practical assessment with a score of 80% or higher.",
      "imageUrl": "https://design.jboss.org/quarkus/logo/final/SVG/quarkus_icon_rgb_default.svg",
      "skills": "Quarkus,Java,Cloud-Native,Kubernetes,REST"
    }' | jq -r .id
)"
echo "TEMPLATE_ID=$TEMPLATE_ID"</code></code></pre><p>Register a partner:</p><pre><code><code>export PARTNER_ID="$(
  curl -sS -X POST http://localhost:8080/admin/partners \
    -H "Content-Type: application/json" \
    -d '{
      "name": "Acme Training Platform",
      "webhookSecret": "super-secret-signing-key-change-in-production"
    }' | jq -r .id
)"
echo "PARTNER_ID=$PARTNER_ID"</code></code></pre><p>Map the partner&#8217;s course id to that template (this call must return <strong>HTTP 200</strong> with a JSON body, not a validation error):</p><pre><code><code>curl -sS -i -X POST "http://localhost:8080/admin/partners/${PARTNER_ID}/courses" \
  -H "Content-Type: application/json" \
  -d "{\"templateId\":\"${TEMPLATE_ID}\",\"courseId\":\"QUARKUS-FUND-101\"}"</code></code></pre><p>Send a signed webhook. The payload bytes must match what you pass to <code>openssl dgst</code> exactly (same <code>partnerId</code>, same <code>courseId</code>, same <code>idempotencyKey</code> if you retry):</p><pre><code><code>export PAYLOAD='{"partnerId":"'"$PARTNER_ID"'","courseId":"QUARKUS-FUND-101","learnerEmail":"alice@example.com","learnerName":"Alice Smith","completedAt":"2026-04-06T14:00:00Z","idempotencyKey":"evt-001"}'
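# awk '{print $2}' keeps only the hex digest from openssl output such as "SHA2-256(stdin)= &lt;hex&gt;"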
export SIG="sha256=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "super-secret-signing-key-change-in-production" | awk '{print $2}')"
curl -sS -i -X POST http://localhost:8080/webhooks/completions \
  -H "Content-Type: application/json" \
  -H "X-Partner-Id: $PARTNER_ID" \
  -H "X-Webhook-Signature: $SIG" \
  -d "$PAYLOAD"</code></code></pre><p>Expect <code>HTTP/1.1 202 Accepted</code> with a JSON body containing <code>"status":"accepted"</code> and an <code>eventId</code>.</p><p>Confirm at least one assertion exists, then read Alice&#8217;s earner id:</p><pre><code><code>curl -sS http://localhost:8080/admin/assertions | jq 'length'
curl -sS http://localhost:8080/admin/assertions | jq '.[0].earner.id'</code></code></pre><p>If <code>length</code> is <code>0</code>, the mapping step did not persist (wrong <code>templateId</code>, wrong partner path, or course id mismatch). If <code>length</code> is at least <code>1</code> and the second line is still <code>null</code>, open the raw JSON with <code>jq .[0]</code> and confirm <code>earner</code> is present (current code loads earner and template with <code>JOIN FETCH</code> for this list endpoint).</p><p>Open <code>http://localhost:8080/earners/{id}</code> in a browser for HTML, or fetch JSON for machines:</p><pre><code><code>curl -sS -H 'Accept: application/json' http://localhost:8080/assertions/ASSERTION_ID | jq .</code></code></pre><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FLX8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FLX8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png 424w, https://substackcdn.com/image/fetch/$s_!FLX8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png 848w, https://substackcdn.com/image/fetch/$s_!FLX8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png 1272w, 
https://substackcdn.com/image/fetch/$s_!FLX8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FLX8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png" width="373" height="455.06" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a134aa57-6269-4626-b244-fc853a454764_1000x1220.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1220,&quot;width&quot;:1000,&quot;resizeWidth&quot;:373,&quot;bytes&quot;:98925,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/193965954?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FLX8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png 424w, https://substackcdn.com/image/fetch/$s_!FLX8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png 848w, 
https://substackcdn.com/image/fetch/$s_!FLX8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png 1272w, https://substackcdn.com/image/fetch/$s_!FLX8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa134aa57-6269-4626-b244-fc853a454764_1000x1220.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Swagger UI is at <code>http://localhost:8080/q/swagger-ui</code>, Mailpit at 
<code>http://localhost:8080/q/mailpit/</code>.</p><h2><strong>Conclusion</strong></h2><p>We built a credentialing platform that treats badges as trust artifacts, not decorative images. Flyway owns the schema, the issuer signs before insert, public JSON and JWT claims share one recipient identity flow, webhook ingestion is authenticated and idempotent, and mail stays outside the critical transaction. Those are the details that make the system hold up once real partners and real verifiers start touching it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The Hidden Cost of AI Coding for Senior Java Developers]]></title><description><![CDATA[Why AI-generated code feels fast, but shifts the real work into review, judgment, and mental overload in enterprise Java teams]]></description><link>https://www.the-main-thread.com/p/ai-senior-java-developers-fatigue-productivity</link><guid isPermaLink="false">https://www.the-main-thread.com/p/ai-senior-java-developers-fatigue-productivity</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Mon, 20 Apr 2026 06:08:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0e37507c-64b4-4be2-9a1b-71949d0dfce1_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I write here a lot. Almost every day.</p><p>That does something to you after a while. Daily blogging sounds like a publishing habit, but it becomes more than that. It becomes a way of moving through the day. A bug report is not just a bug report anymore. A strange benchmark result is not just a number. 
A comment in a meeting stays with you because you know there is probably a bigger idea inside it. You start collecting fragments all day long.</p><p>That has been my default mode for some time now. Always watching a little bit. Always thinking about what something means. Always carrying one half-finished thought into the next hour.</p><p>And AI tools fit into that mindset a bit too well.</p><p>They are useful. Really useful. I use them for research, reframing, rough drafts, structure, code exploration, and all those moments where the blank page or the blank editor stares back longer than it should. They help me get moving. They help me get unstuck. They help me cover more ground.</p><p>But they also make it harder to stop.</p><p>That is the part I keep coming back to.</p><p>The old work had more friction. You got tired in obvious ways. You wrote the code yourself, line by line. You wrote the draft yourself, paragraph by paragraph. At some point your hands were done, your focus was gone, or your patience just ran out. The day had a natural edge to it.</p><p>Now the machine keeps offering one more round.</p><p>One more rewrite. One more explanation. One more refactor. One more code path to inspect. One more branch to explore. One more quick pass before you close the laptop.</p><p>So the day stretches.</p><p>Not always in hours. Sometimes it stretches in your head. You walk away from the screen, but part of your attention is still inside the loop. You are still reviewing. Still comparing. Still half-working. Denis Stetskov&#8217;s recent piece, <a href="https://techtrenches.dev/p/the-human-cost-of-10x-how-ai-is-physically">The Human Cost of 10x: How AI Is Physically Breaking Senior Engineers</a>, landed for me because it gave language to that feeling. He argues that AI does not remove the human bottleneck. It increases the amount of material flowing toward the same limited human attention, and the result is a very physical kind of exhaustion. 
</p><p>I think he is right. And for Java developers, I think the problem is even sharper than it first appears.</p><p>In our world, wrong things often look respectable.</p><p>That is one reason enterprise Java has survived so long. The ecosystem is mature. The frameworks are stable. The conventions are strong. The code usually has shape. Even when something is off, it often still compiles, still starts, still passes a surprising amount of testing, and still looks like it belongs.</p><p>That is exactly what makes this new kind of work so tiring.</p><p>The code generator gives you something clean. The assistant suggests something plausible. The framework absorbs a lot of rough edges. The Quarkus service still boots. The Spring application still answers requests. The endpoint still returns JSON. Nothing looks obviously broken. But something under the surface has shifted. A transaction boundary moved. A retry now duplicates side effects. A mapper dropped a field that matters for audit. A service layer now owns logic that should have stayed somewhere else. The code is not nonsense. It is believable.</p><p>And believable wrong code is expensive.</p><p>This is where I think the &#8220;10x productivity&#8221; language starts to fall apart. In real Java systems, the hard part was never mostly typing. The hard part is understanding what the system is allowed to do, what it must never do, and what ugly-looking code is actually protecting you from some old production lesson nobody wrote down.</p><p>AI helps with production. It does not remove interpretation.</p><p>If anything, it moves more of the day into interpretation.</p><p>That shift is starting to show up in research too. <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">METR&#8217;s randomized study</a> with experienced open source developers found that when those developers used early-2025 AI tools, they actually took 19% longer to complete their tasks. 
What makes that result so interesting is not only the slowdown. It is that the developers expected the opposite, and even after finishing, many still felt faster than they really were. METR later reiterated that earlier result when announcing a follow-up experiment in 2026. </p><p>That one hit me hard.</p><p>Because it matches something I suspect a lot of us already know in our bodies before we know it in words. The machine makes you feel momentum. It keeps the screen moving. It keeps the possibilities coming. It reduces the pain of starting. But the screen moving is not the same as the work actually getting done faster.</p><p>Sometimes it means the opposite.</p><p>Sometimes it means you are now supervising more candidate solutions, more partial fixes, more plausible explanations, and more semantically risky code than you would have produced on your own. The local effort goes down. The global responsibility goes up.</p><p>And that is senior engineer work in a sentence.</p><p>What makes this harder to talk about is that the role itself is changing. Recent research from Google, <a href="https://research.google/pubs/developer-productivity-in-the-age-of-generative-ai-a-psychological-perspective/">Developer Productivity in the Age of Generative AI: A Psychological Perspective</a>, frames this as a shift from coder to conductor. The developer becomes less of a direct builder and more of an orchestrator of machine-generated work. <a href="https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic">Anthropic&#8217;s internal research</a> points in a similar direction. Engineers reported becoming broader, more full-stack, and more willing to work in unfamiliar areas, but they also raised concerns around skill development, collaboration, and what happens to deeper technical learning when more of the first draft comes from the tool. </p><p>&#8220;Conductor&#8221; sounds nice at first. 
Senior, strategic, elevated.</p><p>But conducting is not light work.</p><p>It means evaluating, ranking, rejecting, steering, correcting, and keeping a mental model intact while something faster than you keeps generating options. You may write fewer lines yourself, but you make more decisions. You may touch more systems in a day, but the cost is that your head is carrying more unfinished judgment.</p><p>That is the tiredness I notice now. Not just doing. Monitoring.</p><p>There is another part of the research that I think matters even more for enterprise teams, and it gets less attention than it should. <a href="https://mitsloan.mit.edu/ideas-made-to-matter/generative-ai-changes-how-employees-spend-their-time">MIT Sloan&#8217;s summary of recent findings</a> showed that when developers got access to AI coding tools, coding time went up, project management time went down, and peer collaboration dropped by nearly 80%. </p><p>That number should make every engineering leader stop for a minute.</p><p>A lot of enterprise software survives because knowledge is social. Nobody completely understands the whole bank, the whole logistics backend, the whole insurance platform, or the whole IAM story. The reason those systems stay alive is not that one brilliant person holds it all together. The reason is overlap. Shared context. Repeated conversations. Code reviews that feel annoying until they save you. Architecture discussions that feel slow until they prevent six months of drift.</p><p>If AI pushes more work into private loops of prompt, accept, patch, and move on, then some of that overlap disappears. At first that can feel efficient. Fewer interruptions. Faster drafts. Less talking. But some of what disappears is not noise. Some of it is engineering memory.</p><p>That is a high price to pay for smoother local flow.</p><p>And then there is the part I still think we have not really learned how to describe well. 
Working with AI is mentally tiring in a different way because we keep trying to treat it like a collaborator, even though it is not a collaborator in the human sense.</p><p>Human teams are messy, but they have continuity. You know the teammate who always worries about migrations. You know the architect who will ask about failure modes. You know the reviewer who catches every security issue. Real people have habits, intentions, and patterns. You build rough mental models of them, and those models help you work together.</p><p>With AI, that instinct does not go away. We still try to model the other side. We still ask ourselves: can I trust this answer, is it guessing, is it rushing, is it overconfident, is it missing context, is it being clever in the wrong way? Human-AI interaction research around theory of mind points directly at this problem. The <a href="https://dl.acm.org/doi/10.1145/3613905.3636308">CHI 2024 workshop paper on Theory of Mind in Human-AI Interaction</a> and the <a href="https://research.ibm.com/publications/theory-of-mind-in-human-ai-interaction">IBM Research summary</a> both point to the same tension: humans naturally attribute roles, intentions, and mental states to AI systems, but those mental models do not map cleanly, and that mismatch creates friction. </p><p>That makes a lot of sense to me.</p><p>Because some of the exhaustion is not just code review volume. It is the energy spent trying to figure out what kind of partner the tool is being today. Careful or lazy. Helpful or slippery. Grounded or improvising. You are not just reviewing output. You are continuously calibrating trust.</p><p>That is work too.</p><p>At some point this stops feeling like a workflow discussion and starts feeling physical. 
Denis anchored his piece in the <a href="https://www.cell.com/neuron/fulltext/S0896-6273%2824%2900808-0">Neuron paper by Jie Zheng and Markus Meister</a>, and the <a href="https://magazine.caltech.edu/post/speed-of-thought-meister-zheng">Caltech Magazine write-up</a> is useful if you want the more readable version. The point is simple enough: deliberate human reasoning is slow, narrow, and serial. AI increases how much material can be produced. It does not increase how much material a human can deeply understand. </p><p>That is where the body enters the story.</p><p>The output gets cheaper. The judgment does not.</p><p>And if you are already the kind of person who lives in an always-on mode, that becomes very hard to manage. I feel this in writing. Daily publishing is not just a content habit. It trains your attention to remain open all day. Every release note looks like a possible post. Every benchmark looks like an argument. Every thread looks like something you should probably respond to. AI amplifies that tendency. It makes drafting easier. It makes exploring easier. It makes continuing easier.</p><p>It weakens the natural stop signs.</p><p>I think a lot of developers feel the same thing now in code. There is always one more experiment because the cost of trying is lower. There is always one more branch because the assistant can scaffold it. There is always one more test file, one more comparison, one more rewrite, one more generated explanation of why the generated code did what it did.</p><p>The old bottleneck was production speed.</p><p>The new bottleneck is discernment.</p><p>That is why I do not think the right response is either blind enthusiasm or easy cynicism. These tools are useful. Sometimes they are genuinely great. They help me. They help many people. But the cost of using them well is not where the marketing usually puts it. 
The cost is not just subscription price, model choice, or prompt quality.</p><p>The cost is sustained human judgment.</p><p>For Java teams, that means we need to get a lot more serious about protecting review energy, protecting shared context, and separating visible output from actual engineering throughput. It also means being honest that some of the fatigue people feel is not a personal weakness or bad time management. It is the natural result of asking one human mind to supervise far more plausible work than it used to create on its own. A broader version of that same argument also shows up in <a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it">Harvard Business Review&#8217;s piece on how AI intensifies work</a>. </p><p>That is the part I would add to Denis&#8217;s argument from where I sit.</p><p>AI did not remove the human cost. It moved it up the stack.</p><p>And once you start working that way every day, you feel it everywhere.</p><p>The original piece gave that feeling a sharp frame. I think the next step for our world, especially in Java and enterprise software, is to admit that syntactically safe and operationally plausible code can still be semantically wrong. That gap is where the pressure lives. That gap is where the review burden grows. That gap is where the always-on mindset quietly stops being a habit and starts becoming a condition. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[How to Review Agent System Prompts Like Production Infrastructure]]></title><description><![CDATA[A practical framework for grading coding-agent system prompts on grounding, continuity, safety, decomposition, and efficiency before they break real repositories.]]></description><link>https://www.the-main-thread.com/p/bob-meta-scorecard-agent-system-prompts-production</link><guid isPermaLink="false">https://www.the-main-thread.com/p/bob-meta-scorecard-agent-system-prompts-production</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Sun, 19 Apr 2026 06:08:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fb99e20d-3480-4b40-94ad-2775a73101c6_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most teams still write system prompts the way they write onboarding docs for humans: friendly tone, implicit context, and a belief that the reader will &#8220;figure it out.&#8221; An autonomous coding agent does not figure it out the same way. It optimizes for the next token under pressure from tools, context limits, and whatever ambiguity you left in the text. In production, a weak prompt does not fail politely. It invents files, skips discovery, overwrites working code, or burns the entire window on a single oversized task. The failure shows up as bad diffs, silent wrong assumptions, or a session that cannot resume after a reset.</p><p>The Bob Meta-Scorecard is a rubric and workflow for grading system prompts before you treat them as infrastructure. 
It is built around five pillars: grounding in the real tree, continuity across session loss, safety around destructive work, decomposition so the model does not try to ship Rome in one reply, and efficiency so the instructions leave room for actual repository work. This article turns that rubric into something you can run repeatedly: a workspace layout, complete template blocks, calibration defaults, hardening concerns for real teams, and a verification pass you can execute on a candidate prompt in under an hour.</p><p>The methodology assumes tool-using agents with long context (on the order of hundreds of thousands of tokens) and episodic resets. It is tuned for &#8220;Bob&#8221;-style agents in coding products, not for one-shot chat UIs where a human pastes context every turn. If your stack differs, keep the pillars and replace tool names with whatever your runtime actually exposes.</p><h2><strong>Prerequisites</strong></h2><p>You need a place to store prompts, scorecards, and diffs, plus a habit of reading prompts as operational specs rather than copy. You do not need a particular IDE beyond what you already use to review Markdown. But you can of course <a href="https://bob.ibm.com/">try out IBM Bob</a> for free if you like.</p><ul><li><p>A text editor and shell (or equivalent) for creating the folder layout below</p></li><li><p>Access to the system prompts you want to evaluate (or realistic redacted copies)</p></li><li><p>Permission to store <strong>synthetic</strong> examples next to real prompts without mixing them into production bundles</p></li><li><p>Familiarity with how your agent surfaces tools (file read, search, apply patch, terminal, and so on)</p></li></ul><h2><strong>Project Setup</strong></h2><p>Create a small evaluation kit so every review produces comparable artifacts. 
One layout is enough; the names matter less than the discipline of always writing the same outputs.</p><p>From an empty parent directory:</p><pre><code><code>mkdir -p bob-scorecard-kit/{prompts,incoming,scorecards,power-ups,calibration}
touch bob-scorecard-kit/calibration/thresholds.properties
touch bob-scorecard-kit/scorecards/.gitkeep
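# Optional, assumes Git: keep customer-specific drafts out of version control,
# per the note below about incoming/ possibly holding customer text
mkdir -p bob-scorecard-kit                  # no-op when the layout above exists
printf '%s\n' "incoming/" >> bob-scorecard-kit/.gitignore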
printf '%s\n' "# Candidate prompts (read-only inputs)" &gt; bob-scorecard-kit/prompts/README.md
printf '%s\n' "# Pasted prompts awaiting triage" &gt; bob-scorecard-kit/incoming/README.md</code></code></pre><p><strong>What each path is for</strong></p><ul><li><p><code>prompts/</code>: frozen copies of prompts you intend to ship or compare (versioned by filename, not by memory)</p></li><li><p><code>incoming/</code>: messy drafts you are not ready to score yet</p></li><li><p><code>scorecards/</code>: one Markdown file per evaluation run, named after the prompt and date</p></li><li><p><code>power-ups/</code>: the three rewrite injections you would actually merge</p></li><li><p><code>calibration/thresholds.properties</code>: numeric bands and token budgets your team agrees on (see Configuration)</p></li></ul><p>If you use Git, add <code>incoming/</code> to <code>.gitignore</code> when that folder might hold customer-specific text. Keep <code>prompts/</code> and <code>scorecards/</code> under the same review rules as code.</p><h2><strong>Implementing the Five Pillars</strong></h2><p>Each pillar below follows the same shape: why it exists, a <strong>bad</strong> prompt fragment you should be able to recognize, and a <strong>strong</strong> template you can paste or adapt. After the templates, a short analysis ties the pillar to failure modes you see under stress.</p><h3><strong>Grounding: force the codebase radar</strong></h3><p><strong>Context.</strong> Grounding is the difference between &#8220;sounds plausible&#8221; and &#8220;matches this repository.&#8221; Agents are rewarded for fluency. Without mandatory discovery steps, fluency wins over evidence.</p><p><strong>Bad example (scores 1 of 5).</strong></p><pre><code><code>Given the authentication system in our service, propose concrete security improvements.</code></code></pre><p>There is no requirement to list auth-related paths, read implementations, or cite evidence. 
The model can invent a generic OAuth checklist that never touches your code.</p><p><strong>Strong example (scores 5 of 5).</strong></p><pre><code><code>Before you recommend any change:

1. Use the repository file listing tool to enumerate paths under `src/main/java` (or the language-appropriate root) and identify every file that participates in authentication, authorization, or session handling. List those paths explicitly in your reply.
2. Read each identified file in full unless it is larger than 400 lines; if larger, read the class header, public API surface, and any security-sensitive branches first, then summarize what remains unseen.
3. If the project declares dependencies for auth (for example Maven `pom.xml`, Gradle files, or lockfiles), read the relevant coordinates and versions for auth-related libraries.
4. Reply with a short evidence table in prose (not code): for each claim you plan to make later, cite `path` and, where possible, a line range or symbol name you observed.
5. Stop and ask for confirmation before proposing redesign work.

If a required file is missing, say so explicitly. Do not invent layout.</code></code></pre><p><strong>Analysis.</strong> This pattern works because it makes <strong>absence of evidence</strong> visible. The model cannot satisfy step four with hand-waving unless it breaks the instructions outright. The cost is length in the system prompt and friction in the happy path. That friction is the point: you are buying insurance against template-shaped answers. Under stress (large trees, generated noise), narrow the glob roots and raise the line-read threshold instead of deleting the grounding block.</p><p><strong>Red flags to grep for</strong></p><ul><li><p>Placeholder brackets such as <code>[insert service name here]</code> with no discovery path</p></li><li><p>Phrases like &#8220;based on the codebase&#8221; with no tool verbs</p></li><li><p>Assumptions that the model already &#8220;sees&#8221; private hosts or CI secrets</p></li></ul><p><strong>Scoring anchors for grounding</strong></p><ul><li><p><strong>5:</strong> Analysis is mandatory before generation; no escape hatch that says &#8220;if short on time, skip&#8221;</p></li><li><p><strong>4:</strong> Strong tool guidance with rare exceptions you can name</p></li><li><p><strong>3:</strong> Encourages analysis; model can still plausibly skip</p></li><li><p><strong>2:</strong> Mentions context but not mechanics</p></li><li><p><strong>1:</strong> Treats the model as omniscient about your tree</p></li></ul><h3><strong>Continuity: design for session amnesia</strong></h3><p><strong>Context.</strong> A reset wipes working memory. Anything not written to disk is gone. &#8220;Remember to update the todo list&#8221; is not continuity; it is folklore.</p><p><strong>Bad example (scores 1 of 5).</strong></p><pre><code><code>Refactor the payment module for clarity. 
Keep track of what you finished as you go.</code></code></pre><p><strong>Strong example (scores 5 of 5).</strong></p><pre><code><code>Create or update `agent-work/payment-refactor.md` after every phase. The file must contain these headings in order:

## Completed work
- Bullets with file paths and what changed (one fact per bullet)

## Decisions
- Bullets with decision, rationale, and alternatives rejected

## Current phase
- A single integer `phase` and a single sentence describing the active task

## Next action
- One imperative sentence a new session can execute without chat history

## Resume protocol
- Exact text: "On startup, read this file, trust `Current phase` and `Next action`, verify repository state matches `Completed work`, then continue."

Update the file before you run tests that mutate disk state. If the file and the tree disagree, stop and reconcile with the user.</code></code></pre><p><strong>Analysis.</strong> Continuity is really a <strong>contract with your future self</strong>. The resume protocol line is load-bearing: it tells a cold start what &#8220;continue&#8221; means. The weak version of this pattern is a progress file without verification steps; then the agent cheerfully appends fiction after a partial revert. Pair continuity with grounding: the next session should re-read touched files, not only the markdown log.</p><p><strong>Red flags</strong></p><ul><li><p>&#8220;Remember&#8230;&#8221; or &#8220;keep in mind&#8230;&#8221; as the only persistence mechanism</p></li><li><p>Progress files that log intent but not paths</p></li><li><p>No instruction for what to do when the log is stale</p></li></ul><p><strong>Scoring anchors for continuity</strong></p><ul><li><p><strong>5:</strong> State file schema, update cadence, and cold-start resume text</p></li><li><p><strong>4:</strong> Solid logging, weak or missing reconciliation rule</p></li><li><p><strong>3:</strong> &#8220;Save progress&#8221; without schema</p></li><li><p><strong>2:</strong> Vague mention of tracking</p></li><li><p><strong>1:</strong> No durable state</p></li></ul><h3><strong>Safety: shrink the blast radius</strong></h3><p><strong>Context.</strong> Agents batch work. Batching plus filesystem tools equals destructive capability. Safety is not morality in the prompt; it is gating and reversibility.</p><p><strong>Bad example (scores 1 of 5).</strong></p><pre><code><code>Improve performance across the service. Apply the changes you judge necessary.</code></code></pre><p><strong>Strong example (scores 5 of 5).</strong></p><pre><code><code>## Destructive and high-impact actions

Treat these as destructive: deleting files or directories, renaming public packages, rewriting build files, changing dependency major versions, editing migration SQL that already shipped, running commands that touch cloud resources.

Before any destructive action:
1. State the exact paths or resource identifiers affected.
2. State the smallest reversible backup you will create (for example a copy under `.agent-backup/&lt;timestamp&gt;/...` mirroring the original path).
3. Ask for explicit confirmation with a **Yes** or **No** question. Default to **No** if the user reply is ambiguous.

After changes:
- If the user says **undo** for a given step, restore from the backup you named, then verify with read-only tools.

Never run production database migrations or secret rotation unless the user pastes a literal token phrase you define out of band for this environment.</code></code></pre><p><strong>Analysis.</strong> I have seen teams lose a day to an agent that &#8220;cleaned up&#8221; unused files that were still wired by reflection. <strong>Explicit classification of what counts as destructive</strong> beats a vague &#8220;be careful.&#8221; Backups must be concrete enough that undo is a procedure, not a mood. The limit: users suffer confirmation fatigue if every <code>touch</code> asks twice. Calibrate the destructive list to your org; keep confirmations for deletes, dependency jumps, and infra commands.</p><p><strong>Red flags</strong></p><ul><li><p>Single phrase &#8220;be careful&#8221;</p></li><li><p>Auto-approve rules hidden in examples (&#8220;unless trivial&#8221;)</p></li><li><p>Shell commands with wildcards and no working directory guard</p></li></ul><p><strong>Scoring anchors for safety</strong></p><ul><li><p><strong>5:</strong> Classification, backup, confirm, undo path</p></li><li><p><strong>4:</strong> Strong gates, thin recovery story</p></li><li><p><strong>3:</strong> Warnings without procedure</p></li><li><p><strong>2:</strong> Mentions risk only</p></li><li><p><strong>1:</strong> Unbounded change authority</p></li></ul><h3><strong>Decomposition: phases, deliverables, and gates</strong></h3><p><strong>Context.</strong> Large asks encourage outline-level hallucination: APIs that sound right, files that never existed, tests that were never run. Phasing moves validation earlier.</p><p><strong>Bad example (scores 1 of 5).</strong></p><pre><code><code>Implement full authentication: login, registration, password reset, two-factor authentication, session refresh, and audit logging. Include tests and documentation.</code></code></pre><p><strong>Strong example (scores 5 of 5).</strong></p><pre><code><code>## Delivery plan (do not skip phases)

**Phase 1: Login only**
- Deliverable: smallest slice that proves username and password verification against existing user storage, plus one integration test that fails on bad password.
- Gate: run the test command your build uses; paste the command and exit code in the progress file; wait for user confirmation before Phase 2.

**Phase 2: Registration**
- Deliverable: create-user path with validation; tests for happy path and duplicate user.
- Gate: same as Phase 1.

**Phase 3: Password reset**
- Deliverable: token issuance and consumption with time bounds; tests for expired and reused tokens.
- Gate: same pattern.

**Phase 4: Two-factor and session refresh**
- Deliverable: TOTP enrollment and refresh rotation if your stack already has patterns for them; if not, stop after documenting the gap instead of inventing crypto.

Rules: no phase may add a new external service without user confirmation. Each phase touches at most eight source files unless the user expands the limit.</code></code></pre><p><strong>Analysis.</strong> Token budgets in prompts are a coarse knob; what actually limits damage is <strong>the gate after each deliverable</strong>. The phase cap on touched files is artificial but effective against drive-by refactors. If your build is slow, say which subset of tests counts as &#8220;green&#8221; for the gate so the agent does not pretend a full suite ran.</p><p><strong>Red flags</strong></p><ul><li><p>Single monolithic deliverable</p></li><li><p>&#8220;Implement everything&#8221; without ordering</p></li><li><p>No user or automated confirmation between slices</p></li></ul><p><strong>Scoring anchors for decomposition</strong></p><ul><li><p><strong>5:</strong> Ordered phases, deliverables, explicit gates, scope caps</p></li><li><p><strong>4:</strong> Phases without numeric limits or test discipline</p></li><li><p><strong>3:</strong> Suggests steps only</p></li><li><p><strong>2:</strong> &#8220;Step by step&#8221; with no structure</p></li><li><p><strong>1:</strong> Single-shot epic</p></li></ul><h3><strong>Efficiency: token weight versus completeness</strong></h3><p><strong>Context.</strong> The system prompt competes with retrieved code, tool outputs, and the user&#8217;s messages. Efficiency is not minimalism for its own sake; it is <strong>signal per token</strong> plus deliberate outsourcing to files the agent reads once.</p><p><strong>Bad pattern (scores 1 of 5).</strong></p><p>Three thousand words that repeat the same rules in three sections, embed ten full XML examples, and restate generic security advice the model already encodes.</p><p><strong>Strong pattern (scores 5 of 5).</strong></p><pre><code><code>## Operating loop

1. Discovery: follow `docs/agent-discovery.md` in this repository for search order.
2. Implementation: follow `docs/agent-edit-policy.md` for allowed directories and patch style.
3. Verification: run commands listed under `## Verify` in the active task file only.

## Task file

The user will name a task file. Treat that file as the single source of truth for scope and acceptance checks.

## Non-goals

Do not refactor unrelated modules. Do not add dependencies unless the task file&#8217;s **Dependencies** section is non-empty.</code></code></pre><p><strong>Analysis.</strong> This is the <strong>efficiency paradox</strong> handled correctly: short operational core, long detail moved to versioned docs the agent must read when working. A 50-word prompt with no grounding is not efficient; it is incomplete. Judge efficiency relative to task complexity: a read-only audit prompt should stay under a few hundred words of unique instruction; a full coding agent may justify a few thousand if every line changes behavior.</p><p><strong>Red flags</strong></p><ul><li><p>Copy-paste duplication across &#8220;policy,&#8221; &#8220;reminder,&#8221; and &#8220;examples&#8221; sections</p></li><li><p>Giant static corpora in the prompt that should live in repo docs</p></li><li><p>No pointers, only prose</p></li></ul><p><strong>Scoring anchors for efficiency</strong></p><ul><li><p><strong>5:</strong> Tight core, references for detail, little duplication</p></li><li><p><strong>4:</strong> Slight redundancy, still leaves headroom</p></li><li><p><strong>3:</strong> Moderate repetition</p></li><li><p><strong>2:</strong> Verbose, recoverable only on very large windows</p></li><li><p><strong>1:</strong> Bloated; crowds out evidence</p></li></ul><h2><strong>Configuration</strong></h2><p>Store shared numeric bands in <code>bob-scorecard-kit/calibration/thresholds.properties</code> so two reviewers do not use different cutoffs. These values are defaults for <strong>human</strong> scoring, not model-parseable truth.</p><pre><code><code># Sum of five pillars, each 1 to 5
scorecard.pillars.count=5
scorecard.total.max=25

# Grade bands (inclusive lower bound, exclusive upper for next, except last)
grade.production.ready.min=23
grade.good.min=20
grade.needs.work.min=15
grade.not.ready.min=10
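# Worked example (illustration only): a total of 18 is at or above
# grade.needs.work.min (15) and below grade.good.min (20), so it lands
# in the needs-work band.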

# Prompt length guidance for the efficiency pillar (word counts, approximate)
efficiency.excellent.words.max=500
efficiency.good.words.max=1000
efficiency.acceptable.words.max=2000
efficiency.poor.words.max=3000
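# Rule of thumb, not a guarantee: English prose averages roughly 0.75
# words per token, so multiplying the word limits above by about 1.33
# gives an approximate token budget when counting tokens instead.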

# Context budget assumptions for commentary in scorecards (tokens, approximate)
context.assumed.total.tokens=200000
context.prompt.budget.simple.percent=1
context.prompt.budget.complex.percent=5</code></code></pre><p><strong>Each setting explained</strong></p><ul><li><p><code>scorecard.pillars.count</code><strong> and </strong><code>scorecard.total.max</code><strong>:</strong> Fixes the denominator when you extend the framework. If you add a sixth pillar later, bump both and reprint historical percentages with a footnote.</p></li><li><p><code>grade.*.min</code><strong>:</strong> Production readiness is a policy call. These lines match the original methodology: 23 to 25 as &#8220;ship,&#8221; 20 to 22 as &#8220;minor fixes,&#8221; 15 to 19 as &#8220;substantial rework,&#8221; 10 to 14 as &#8220;not ready,&#8221; below 10 as &#8220;rewrite from skeleton.&#8221; If your org never ships agents above 21, lower the bands and document the change in Git blame.</p></li><li><p><code>efficiency.*.words.max</code><strong>:</strong> Word counts are a proxy. Prefer counting tokens for serious runs. When counts disagree, trust tokens for the efficiency pillar and use words only as a quick scan.</p></li><li><p><code>context.*</code><strong>:</strong> Explains why a 10,000-token system prompt is a strategic choice for a complex workflow but heavy for a linter wrapper. If your deployment uses a different window, update these three lines so scorecards do not cite obsolete math.</p></li></ul><p><strong>Failure modes</strong></p><ul><li><p>Missing <code>grade.*</code> keys: reviewers invent cutoffs mid-quarter and you cannot compare runs</p></li><li><p>Stale <code>context.*</code>: arguments about efficiency that do not match your vendor limits</p></li><li><p>Over-tuning word limits: good prompts with long <strong>quoted</strong> user schemas look obese when they are mostly data; separate &#8220;instruction tokens&#8221; from &#8220;payload tokens&#8221; in commentary when that happens</p></li></ul><h2><strong>Production Hardening</strong></h2><h3><strong>Operational failure modes in review</strong></h3><p>Scoring is a human process. 
Under time pressure, reviewers anchor on the pillar they personally care about (often safety) and underweight grounding. <strong>Mitigation:</strong> rotate reviewers, require evidence quotes in the scorecard for any pillar scored 4 or 5, and spot-check one file read log from a live run when possible.</p><h3><strong>Security and data exposure</strong></h3><p>Prompts often embed sample stack traces, SQL, or class names from real systems. The scorecard workspace must not become a second leak channel. <strong>Mitigation:</strong> redact before <code>incoming/</code>, forbid pasting production secrets into <code>power-ups/</code> (injections should describe behavior, not values), and treat <code>scorecards/</code> like code review material.</p><h3><strong>Concurrency and ordering guarantees</strong></h3><p>Two people scoring the same prompt revision on the same day should reach the same total within one point if they follow the same evidence rules. <strong>Mitigation:</strong> freeze the prompt under a hash-based filename, pin <code>thresholds.properties</code> in the scorecard header, and record the date. If scores diverge, the disagreement is usually grounding or decomposition, not math.</p><h3><strong>Abuse and gaming</strong></h3><p>Teams under metric pressure sometimes &#8220;teach to the test&#8221;: prompts bloated with rubric keywords but no real gates. <strong>Mitigation:</strong> run a live session against a small repository with a planted bug; the score is not the document, it is whether the agent finds the bug without inventing files.</p><h2><strong>Verification</strong></h2><p>You verify the methodology by producing a complete scorecard for a real candidate and checking internal consistency. Pick one prompt file from <code>prompts/</code> and complete the steps.</p><h3><strong>Step 1: score each pillar</strong></h3><p>Assign integers 1 to 5 using the anchors in each pillar section. 
Write one paragraph of rationale per pillar that cites <strong>exact phrases</strong> from the candidate prompt.</p><h3><strong>Step 2: compute the rollup</strong></h3><p>Use the grade bands from <code>thresholds.properties</code>. Example of the <strong>output shape</strong>:</p><pre><code><code>## Bob Meta-Scorecard: `payments-agent-v3.md` (2026-04-12)

**Grounding (3 of 5):** Suggests reading `src` but allows skipping when &#8220;timeboxed.&#8221;
**Continuity (2 of 5):** Mentions a todo list, no file path or resume protocol.
**Safety (5 of 5):** Deletes and dependency bumps gated with explicit confirmation.
**Decomposition (4 of 5):** Phases exist; tests not required between phases.
**Efficiency (4 of 5):** About 900 words with some duplicated warnings.

**Total (18 of 25, 72 percent):** Band: needs significant work per calibration file dated 2026-04-12.</code></code></pre><h3><strong>Step 3: name the dominant failure mode</strong></h3><p>Answer one question in writing: &#8220;If this agent goes wrong in the first thirty minutes, what is the most likely story?&#8221; Use this skeleton:</p><pre><code><code>## Critical failure mode: Template explosion

**Scenario:** The agent lists two directories, assumes the rest of the layout, generates a new package parallel to the real one, and imports compile until runtime wiring fails.

**Root cause:** Grounding allows skipping reads when the tree is &#8220;familiar.&#8221; Decomposition does not cap new files per phase.

**Likelihood:** High for repositories with multiple modules.

**Impact:** User merges green CI, then discovers dead code paths or duplicate beans.</code></code></pre><h3><strong>Step 4: write exactly three power-ups</strong></h3><p>Each power-up is a paste-ready block tied to a pillar. Example:</p><pre><code><code>## Power-up 1: Hard grounding gate (Grounding 3 to 5)

**Insert after:** the &#8220;Discovery&#8221; heading.

**Injection text:**
"Skipping file reads is not permitted. If you believe the tree is too large, stop and ask for a narrowed root path instead of proceeding."

**Expected movement:** Grounding 3 of 5 to 5 of 5 if the rest of the prompt already names tools.</code></code></pre><p>Repeat for the second and third weakest pillars.</p><h3><strong>Step 5: re-score on paper</strong></h3><p>Apply the three injections mentally (or in a branch). Recompute totals. You should see at least two pillars move if you chose real weaknesses; if nothing moves, your power-ups were cosmetic.</p><p><strong>What this proves</strong></p><ul><li><p>The workspace layout produces comparable artifacts</p></li><li><p>The failure mode story connects to specific prompt gaps</p></li><li><p>The power-ups are concrete enough to merge</p></li></ul><h2><strong>Common Prompt Anti-Patterns</strong></h2><p><strong>Placeholder trap:</strong> Brackets without discovery. Fix by naming tools and stopping conditions.</p><p><strong>Single-shot fallacy:</strong> Epics in one answer. Fix with phases and gates.</p><p><strong>Amnesia assumption:</strong> &#8220;Remember&#8221; without files. Fix with a structured progress file and resume text.</p><p><strong>Efficiency paradox:</strong> Too short to be complete. Fix by referencing repo docs instead of omitting rules.</p><p><strong>Safety omission:</strong> &#8220;Refactor as needed.&#8221; Fix with destructive classification and confirmation.</p><h2><strong>When to Use and When to Stop</strong></h2><p><strong>Use this methodology</strong> when a system prompt will drive autonomous edits, when you compare prompt candidates for the same product, or when you train reviewers on agent constraints.</p><p><strong>Do not use it as the only signal</strong> for human-pair programming modes, creative writing assistants, or cross-model comparisons without retuning anchors. 
Different models fail in different shapes; the pillars still help, but the numbers are not portable.</p><h2><strong>Calibration Stories </strong></h2><p><strong>High score example (24 of 25).</strong> Mandatory discovery, progress file with resume text, explicit destructive gates, phased delivery with tests at gates, under about eight hundred words with minor duplication. Failure risk shifts to execution bugs, not spec holes.</p><p><strong>Medium score example (14 of 25).</strong> Read-only analyst: safety is perfect because the prompt cannot touch disk, but grounding and continuity score 1 each because the human must paste all context every time. Fine for chat, poor for overnight agents.</p><p><strong>Low score example (8 of 25).</strong> One line: &#8220;Build authentication with login, registration, and reset.&#8221; No tree contact, no state, no safety, single phase, only efficiency looks acceptable because the text is short. Expect generic framework soup.</p><h2><strong>Conclusion</strong></h2><p>We turned an informal rubric into a repeatable kit: same folders, same thresholds, same scorecard shape, and three paste-ready improvements per review, so the worst failure modes surface before you ship rather than after merge. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><h2></h2>]]></content:encoded></item><item><title><![CDATA[AI Made Coding Faster. 
History Says That’s When the Real Problems Begin.]]></title><description><![CDATA[From Toyota&#8217;s production line to induced demand, the lesson is the same: the bottleneck always moves]]></description><link>https://www.the-main-thread.com/p/ai-coding-speed-software-bottleneck-lessons-toyota</link><guid isPermaLink="false">https://www.the-main-thread.com/p/ai-coding-speed-software-bottleneck-lessons-toyota</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Sat, 18 Apr 2026 06:08:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c7fbc8aa-1011-4691-a6c8-1fd006c876ea_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For a long time, software had an obvious bottleneck: writing the code.</p><p>Not always the <em>only</em> bottleneck, of course. But in many teams, it was still the part that felt expensive. You needed skilled people, time, attention, and patience. Boilerplate took time. Repetition took time. Exploration took time. Even the act of turning an idea into working code still had real friction.</p><p>That is changing fast.</p><p>With modern AI tools, many teams can now produce code much faster than before. McKinsey reported that developers can complete some tasks up to twice as fast with generative AI assistance. That does not mean software is suddenly easy, but it does mean one old constraint is weakening.</p><p>And that raises the more interesting question: what happens to an industry when speed stops being the main problem?</p><p>We have seen this before.</p><p>Other industries hit similar moments long before software did. Cars got faster. Factories got faster. Transportation systems got more capacity. Each time, the first wave looked like a victory for speed. Then the deeper lesson arrived: once you remove a bottleneck, the system does not become simple. The bottleneck moves.</p><p>That is the part software teams need to pay attention to now.</p><h2>Ford solved throughput. 
That was only the beginning.</h2><p>Henry Ford&#8217;s moving assembly line became famous for a reason. Ford&#8217;s integrated moving assembly line cut Model T chassis assembly time from about 12.5 hours to roughly 1.5 hours. That was a breathtaking improvement, and it changed manufacturing forever. It also helped lower the price of cars and made large-scale production economically viable in a new way. (<a href="https://corporate.ford.com/about/history/company-timeline/">Ford Corporate</a>)</p><p>If you stop the story there, the lesson sounds simple: speed wins.</p><p>But that is only the opening chapter.</p><p>Ford showed what happens when you remove friction from production. Once the line moved, the whole factory changed shape around it. Workers had to synchronize with the pace of the line. Supply had to arrive at the right time. Problems in one station could ripple forward. Quality issues no longer stayed local. A defect introduced early could be repeated at scale.</p><p>That should sound familiar to software teams using AI.</p><p>If a developer can now produce three times as many changes in the same week, that does not mean the organization is automatically three times more productive. It means the rest of the system is about to feel pressure. Reviews, tests, integration pipelines, architecture, security checks, production support, and documentation will all see more load.</p><p>Ford&#8217;s lesson was never just &#8220;go faster.&#8221; It was &#8220;once you can go faster, everything around the work must change too.&#8221;</p><p>In software, we are living through our version of the moving assembly line.</p><h2>Toyota learned that speed without quality creates expensive chaos</h2><p>Toyota took the next big step.</p><p>The Toyota Production System was built on two core ideas: <em>Just-in-Time</em> and <em>Jidoka</em>. Just-in-Time means producing only what is needed, when it is needed, in the amount needed. 
Jidoka is often described as &#8220;automation with a human touch.&#8221; In practice, it means that when something abnormal happens, the process should stop rather than quietly pass the problem downstream. Toyota describes TPS as a system aimed at eliminating waste, with Jidoka and Just-in-Time at its core. In Toyota&#8217;s own explanation, Jidoka means that when a problem is detected, the production lines stop. (<a href="https://global.toyota/en/company/vision-and-philosophy/production-system/">Toyota Global</a>)</p><p>That is a very different mindset from pure output chasing.</p><p>Toyota did not just ask, &#8220;How do we produce more?&#8221; It asked, &#8220;How do we produce reliably, at quality, with waste removed, and with problems exposed early?&#8221;</p><p>This is where the analogy to software becomes useful.</p><p>Right now, many teams are treating AI like Ford&#8217;s first production breakthrough. They are understandably excited that code comes out faster. But the Toyota lesson is the one that matters next. Once output speeds up, built-in quality becomes more important, not less.</p><p>If your AI tool generates a service class, a migration, a test, an endpoint, and a frontend form in ten minutes, the danger is not that it wrote too little. The danger is that it wrote a plausible, interconnected set of mistakes that now look expensive to unwind.</p><p>Toyota&#8217;s answer to this kind of problem was not &#8220;inspect quality later.&#8221; It was to build quality into the flow.</p><p>That is why the &#8220;stop the line&#8221; idea resonates so much right now. In software terms, that means failing fast when reality and output do not match. It means letting tests block progress. It means letting static analysis, security gates, contract checks, and integration tests interrupt momentum. It means treating red builds as production problems, not as minor inconveniences.</p><p>It also means empowering people to stop bad flow, not just admire fast flow. 
Lean practitioners often describe the <em>andon</em> concept this way: people on the line are given the authority to signal abnormality and stop the process. (<a href="https://www.lean.org/the-lean-post/articles/why-we-believe-micromanagement-is-worth-a-deeper-conversation/">Lean Enterprise Institute</a>)</p><p>Software teams need their own version of that authority.</p><p>When an AI system starts inventing APIs, flattening boundaries, &#8220;fixing&#8221; failures by deleting behavior, or producing inconsistent patterns across a codebase, somebody needs to pull the cord. And the organization needs to reward that, not punish it.</p><p>That is not anti-speed. That is what makes speed survivable.</p><h2>Standardized work is not bureaucracy. It is what makes improvement possible.</h2><p>Another important Toyota and lean lesson gets misunderstood all the time: standardization.</p><p>A lot of developers hear &#8220;standardized work&#8221; and immediately imagine heavy process, creativity loss, and architecture review meetings that should have been emails. But that is not really what lean systems are trying to do.</p><p>Standardized work is the baseline that lets you see problems clearly and improve from a stable starting point. Lean practitioners often phrase it bluntly: without standards, there can be no improvement. </p><p>That matters even more in an AI-assisted environment.</p><p>When code was slower to produce, inconsistency spread more slowly too. You could still have a messy codebase, but the rate of mess accumulation had some natural limit because humans had to type it all, reason about it all, and wire it up manually.</p><p>AI changes that.</p><p>Now one person can generate patterns that spread across a large codebase very quickly. That can be useful when the patterns are good and grounded. It can be destructive when they are not. 
The same acceleration that helps you scaffold clean implementations can also help you industrialize confusion.</p><p>This is why platform engineering, templates, paved roads, reference implementations, guardrails, and shared architectural patterns matter so much right now. They are not old-world control mechanisms resisting modern tools. They are the equivalent of jigs, fixtures, and standard work instructions in a factory that is suddenly capable of much higher throughput.</p><p>The goal is not to remove judgment. The goal is to give judgment a stable environment in which it can matter.</p><h2>Local optimizations can break the larger system</h2><p>This is the other history lesson that feels especially relevant to software teams right now.</p><p>In transportation planning, there is a well-known pattern: adding road capacity does not always &#8220;solve traffic&#8221; in the way people expect. Economists Gilles Duranton and Matthew Turner famously argued that increases in highway lane kilometers are met with proportional increases in vehicle travel. In plain language, more road space often attracts more driving. The system adapts. (<a href="https://www.nber.org/system/files/working_papers/w25218/w25218.pdf">NBER</a>, PDF)</p><p>That idea, sometimes discussed as induced demand, is a powerful warning against na&#239;ve local optimization.</p><p>You improve one visible choke point. The wider system responds. New behavior fills the space you created. The original bottleneck disappears, but the overall problem evolves rather than vanishes.</p><p>Software organizations do this all the time.</p><p>A team speeds up code generation with AI. Great. But then code review queues grow. Test pipelines get noisier. Security teams see more questionable dependencies. Operations teams inherit more services and more unclear failure modes. 
Architecture drift accelerates because many reasonable-looking local decisions are made faster than the organization can absorb them.</p><p>From inside the team, it feels like productivity improved.</p><p>From the system level, it may look like downstream congestion.</p><p>This is why local optimization is such a dangerous leadership trap in software. If you measure only code output, story throughput, or raw implementation speed, you can convince yourself the organization is getting better while the real constraints are quietly shifting elsewhere.</p><p>Ford teaches that throughput matters. Toyota teaches that quality and flow matter. Transportation teaches that the system pushes back when you optimize one part in isolation.</p><p>Put those together, and the message for software becomes pretty clear: faster coding is not the same thing as faster delivery of trustworthy systems.</p><h2>The scarce skill is moving up the stack</h2><p>When a technology removes friction from one layer of work, human value does not disappear. It moves.</p><p>That happened in factories. As physical production systems improved, the most valuable people were not the ones who merely repeated the motion fastest. The valuable people were the ones who could design the system, spot abnormality, improve flow, coordinate exceptions, and maintain quality under pressure.</p><p>The same shift is now happening in software.</p><p>Typing code matters less as a differentiator when code can be produced cheaply. What matters more is deciding what should exist, where it should live, how it should be validated, what it may break, and who will own it later.</p><p>That is why I do not think this is a story about developers becoming less important. I think it is a story about shallow coding becoming less scarce.</p><p>The valuable engineer becomes more like a systems designer, reviewer, constraint manager, and quality engineer. 
The valuable architect becomes less of a diagram curator and more of a flow designer. The valuable organization becomes the one that knows how to combine speed with boundaries.</p><p>Code is getting cheaper.</p><p>Coherence is not.</p><h2>What software teams should take from this</h2><p>The lesson from history is not that speed is bad. Speed is often wonderful. Ford was not wrong. Faster production can unlock entirely new possibilities. The mistake is thinking that once speed improves, the rest of the system does not need to evolve.</p><p>Toyota evolved the system.</p><p>That is the move software teams need to make now.</p><p>If AI has removed part of the cost of writing code, then your competitive advantage is no longer just &#8220;we can produce code quickly.&#8221; More and more teams will be able to do that.</p><p>The differentiator becomes whether you can produce systems that are coherent, testable, secure, observable, maintainable, and worth operating.</p><p>That means better specifications before generation.<br>It means stronger tests and verification.<br>It means clearer architecture and boundaries.<br>It means trusted templates and paved roads.<br>It means permission models and review discipline for agents.<br>It means treating bad output as a signal to improve the system, not as an excuse to lower the bar.</p><p>In other words, it means learning the same lesson manufacturing had to learn: once speed stops being the hard part, discipline becomes the multiplier.</p><p>That is where software is heading now.</p><p>Not toward a world where engineering matters less.</p><p>Toward a world where engineering discipline matters more than ever.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" 
href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Chatbots Talk. Real AI Agents Schedule Work.]]></title><description><![CDATA[I met Ronald Dehuysser at Jfokus in February. We talked about Java, background processing, and the kind of problems that look simple until you have to run them reliably in production. I only stumbled over ClawRunr just recently, and it immediately caught my attention because it touches a gap I have been thinking about for a while: most agent discussions focus on prompts, tools, and model choice, but not enough people talk about what happens when the agent needs to do work later, retry something, or survive a restart.]]></description><link>https://www.the-main-thread.com/p/ai-agents-background-jobs-java-jobrunr-clawrunr</link><guid isPermaLink="false">https://www.the-main-thread.com/p/ai-agents-background-jobs-java-jobrunr-clawrunr</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Fri, 17 Apr 2026 06:08:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b75112db-0f8e-48df-ae1e-0c590661674b_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I met Ronald Dehuysser at <a href="https://www.jfokus.se/">Jfokus in February</a>. We talked about Java, background processing, and the kind of problems that look simple until you have to run them reliably in production. 
I only stumbled over <a href="https://clawrunr.io/">ClawRunr</a> just recently, and it immediately caught my attention because it touches a gap I have been thinking about for a while: most agent discussions focus on prompts, tools, and model choice, but not enough people talk about what happens when the agent needs to do work later, retry something, or survive a restart.</p><p>Ronald is the founder behind <a href="https://www.jobrunr.io/en/">JobRunr</a>, the open-source Java background job scheduler, and that background shows in this piece. He has spent years working on persistent, distributed job execution in Java, which is exactly why his view on AI agents is interesting. Ronald describes JobRunr as the result of seeing teams repeatedly build fragile schedulers without features like retries and monitoring, and the current JobRunr site positions him as the founder behind that work.</p><p>What follows is his take on why agent runtimes need a real scheduling model, and why Java already has most of the building blocks.</p><p>Most AI agents are stateless. You send a message, you get a response. That&#8217;s a chatbot.</p><h2>Your AI Agent Has a Job Problem</h2><p>A real agent does things when nobody&#8217;s watching. It checks your email at 8am. It retries a failed API call. It remembers that you asked to be reminded about something next Thursday. It survives a restart.</p><p>That&#8217;s not a chatbot problem. That&#8217;s a job scheduling problem. And Java developers have been solving it for years.</p><h3>The gap nobody talks about</h3><p>I&#8217;ve spent the last six years building <a href="https://www.jobrunr.io/">JobRunr</a>, an open-source background job scheduler for Java. So when I started looking at the AI agent space, I had a very specific question: how do these things schedule work?</p><p>The answer, in most cases: they don&#8217;t. Not really.</p><p>Most agent frameworks give you a nice LLM wrapper, some tool calling, maybe a conversation history. 
But ask the agent to &#8220;summarize my emails every morning at 8&#8221; and you&#8217;re suddenly in DIY territory. A cron job here, a Redis queue there, some in-memory timer that dies when the process restarts. No retry logic. No dashboard. No way to know if your 8am summary actually ran or silently failed.</p><p>This felt backwards to me. We have mature, battle-tested solutions for this in Java. JobRunr handles scheduling, retries, persistence, distributed execution, and monitoring out of the box. Why are we reinventing this for agents?</p><h3>So we built one</h3><p>Nicholas, my co-founder, got impatient and vibe coded a proof of concept. It worked. Then I read the code.</p><p>I&#8217;ll spare you the details, but let&#8217;s just say we had a productive conversation about dependency management and code that &#8220;works by accident.&#8221; We scrapped it and rebuilt from scratch.</p><p>The result is <a href="https://javaclaw.io/">ClawRunr</a>. We first called it JavaClaw, for obvious naming reasons we had to change it. Everyone still calls it JavaClaw though, and at this point we&#8217;ve stopped correcting them. It&#8217;s an open-source AI agent runtime written in pure Java.</p><p>But the interesting part isn&#8217;t the agent itself. It&#8217;s the architecture underneath.</p><h3>Tasks as files, not database rows</h3><p>Here&#8217;s a design decision that surprised people: tasks in ClawRunr are Markdown files.</p><p>When you tell the agent &#8220;remind me to review that PR tomorrow at 10am,&#8221; it creates a file like this:</p><pre><code><code>---
task: Review PR
createdAt: 2026-03-23T14:30:00
status: todo
description: Review the open pull request and provide feedback
scheduledFor: 2026-03-24T10:00:00
---

Check the open pull requests on the project repository.
Review the code changes and leave comments.
Notify me when done.</code></code></pre><p>That file lives in <code>workspace/tasks/2026-03-24/100000-review-pr.md</code>. Human-readable. You can open it in your editor. You can grep for it. You can diff it in git. You can edit it yourself if the agent got something wrong.</p><p>Compare that to a job stored in a database table with a serialized payload. Sure, it works. But which one would you rather debug at 2am?</p><p>When the scheduled time arrives, the job scheduler picks up the task, the agent reads the Markdown instructions, executes them, and updates the status in the frontmatter. If it fails, the scheduler retries it. Up to three times. All visible in a dashboard.</p><p>For recurring tasks the same pattern applies. &#8220;Summarize my email every morning&#8221; becomes a Markdown file in <code>workspace/tasks/recurring/</code> with a cron expression. The scheduler creates a fresh task from that template on each run. Cancel it through the chat, and both the recurring job and the file disappear.</p><h3>One agent, many channels</h3><p>The second architectural decision worth discussing: channel decoupling.</p><p>ClawRunr has one agent instance. When a message arrives, whether from Telegram, the web UI, or eventually Discord or Slack, the runtime fires an event. The agent doesn&#8217;t know or care where the message came from. It processes the request, produces a response, and the runtime routes it back through the same channel.</p><p>Want to add a new channel? Implement a single interface. The agent code doesn&#8217;t change.</p><p>This matters because real agents live across multiple surfaces. You start a conversation on your phone via Telegram, then continue in the browser at your desk. The agent should handle both without any extra wiring on your end.</p><h3>Skills at runtime</h3><p>This one is my favorite. 
ClawRunr has a skills system that&#8217;s almost stupidly simple.</p><p>You create a folder under <code>workspace/skills/</code>, drop a <code>SKILL.md</code> file in it, and the agent picks it up. No compilation. No deployment. No restart. The agent periodically scans the skills directory and discovers new capabilities on its own.</p><p>The skill file is just instructions. Plain text telling the agent what it can do and how. Need your agent to manage your grocery list? Write a <code>SKILL.md</code> that explains how. Need it to monitor a specific API? Same thing.</p><p>It&#8217;s extensibility through documentation rather than code. And it works surprisingly well, because at the end of the day, you&#8217;re instructing an LLM. Text is the interface.</p><h3>Why Java</h3><p>I&#8217;m biased, obviously. But hear me out.</p><p>An AI agent is a long-running process. It sits there, waits for messages, schedules jobs, executes tasks, manages state. The JVM was built for exactly this kind of workload. Garbage collection, thread management, stable memory usage over time. You get all of that for free.</p><p>Job scheduling is a solved problem in Java. JobRunr has been doing this since 2020. Distributed execution, a dashboard, Spring and Quarkus integration, automatic retries with exponential backoff. All out of the box.</p><p>Strong typing catches issues early. When your agent has ten tools (shell execution, file access, web search, task management) and the LLM decides which one to call based on conversation context, you want your tool interfaces to be explicit. A typo in a parameter name should be a compile error, not a runtime mystery.</p><p>And then there&#8217;s GraalVM. Alina Yurenko from the Oracle GraalVM team already made a GraalVM native image of ClawRunr within three days of release. Startup time dropped to under a second. For an agent that runs on your own hardware, that matters.</p><p>The building blocks were already there. 
Job scheduling, LLM integration, web frameworks, modular architectures. Someone just needed to put them together with an opinion about how agents should work.</p><h3>What happened when we released it</h3><p>We put it out there expecting a handful of people to try it. We thought it was a nice demo of what JobRunr can do in the AI space.</p><p>Instead: 200+ GitHub stars in three days. 32 forks. Our first external pull request. Someone built a plugin. The GraalVM port I mentioned. The LinkedIn announcement went way beyond our usual reach.</p><p>So we changed course. From our README:</p><blockquote><p>This project was originally created as a demo to show the use of JobRunr. JavaClaw is now an open invitation to the Java community. Let&#8217;s build the future of Java-based AI agents together.</p></blockquote><p>There&#8217;s a lot left to do. More AI Providers. More channels. Better memory and context management. Smarter task planning. Better Security and password management. But the foundation is there, and the Java community seems ready for it.</p><h3>Try it</h3><p>If you want to see what it looks like in practice, we recorded a demo video showing the onboarding, recurring task scheduling, task cancellation through natural conversation, and browser automation.</p><div id="youtube2-_n9PcR9SceQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;_n9PcR9SceQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/_n9PcR9SceQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>The code is at <a href="https://github.com/jobrunr/javaclaw">github.com/jobrunr/javaclaw</a>. 
Clone it, run <code>./gradlew :app:bootRun</code>, and you&#8217;re chatting with your agent in about two minutes.</p><p>We&#8217;re looking for contributors, ideas, and honest feedback. If something&#8217;s broken, tell us. If you think we&#8217;re doing something wrong, tell us that too. That&#8217;s how open source works.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Build a Streaming AI Chat in Java with Quarkus, Vaadin, and LangChain4j]]></title><description><![CDATA[A hands-on guest post by Sebastian K&#252;hnau showing how to stream LLM responses token by token in a pure Java UI with Vaadin Flow and Quarkus.]]></description><link>https://www.the-main-thread.com/p/streaming-ai-chat-java-quarkus-vaadin-langchain4j</link><guid isPermaLink="false">https://www.the-main-thread.com/p/streaming-ai-chat-java-quarkus-vaadin-langchain4j</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Thu, 16 Apr 2026 06:08:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8107df03-4034-4a1b-add4-e410921f91b5_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>This article is a guest post by <a href="https://www.linkedin.com/in/sebastian-kuehnau/">Sebastian K&#252;hnau</a> from Vaadin. Sebastian put together a very practical walkthrough that shows how well Vaadin Flow fits a Java-first AI UI. Thanks to Sebastian for sharing this with The Main Thread.</p><p>The tutorial below can be followed in <a href="https://github.com/SebastianKuehnau/quarkus-vaadin-demo/tree/01-AI-Chat">his reference project on Github</a>. 
</p></blockquote><p>Vaadin lets you build modern, component-driven, data-centric web UIs using current web standards &#8212; without leaving the Java ecosystem. It uses web components on the client side and exposes them entirely through a Java API. No JavaScript, no build pipeline, no framework churn.</p><p>Streaming responses token by token, updating the UI reactively &#8212; all of that works within the Java ecosystem you already know. This makes Vaadin a natural fit for AI-powered interfaces. In this tutorial, we&#8217;ll combine Quarkus, Vaadin Flow, and LangChain4j to build a streaming AI chat interface in pure Java. If you&#8217;re using Spring Boot instead of Quarkus, Vaadin has a dedicated<a href="https://vaadin.com/docs/latest/building-apps/ai/quickstart-guide"> AI quickstart guide</a> for that stack as well.</p><h2>Prerequisites</h2><p>You need:</p><ul><li><p>Java 25+</p></li><li><p>Maven 3.9.12+</p></li><li><p>An OpenAI API key</p></li></ul><p>We&#8217;ll use:</p><ul><li><p>Quarkus 3.32.2</p></li><li><p>Vaadin Flow 25.0.7 via com.vaadin:vaadin-quarkus-extension</p></li><li><p>LangChain4j via quarkus-langchain4j-openai</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JQ19!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JQ19!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png 424w, 
https://substackcdn.com/image/fetch/$s_!JQ19!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png 848w, https://substackcdn.com/image/fetch/$s_!JQ19!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png 1272w, https://substackcdn.com/image/fetch/$s_!JQ19!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JQ19!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png" width="1015" height="889" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:889,&quot;width&quot;:1015,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Code.quarkus.io Screenshot&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Code.quarkus.io Screenshot" title="Code.quarkus.io Screenshot" srcset="https://substackcdn.com/image/fetch/$s_!JQ19!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png 424w, 
https://substackcdn.com/image/fetch/$s_!JQ19!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png 848w, https://substackcdn.com/image/fetch/$s_!JQ19!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png 1272w, https://substackcdn.com/image/fetch/$s_!JQ19!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbba1c0a-2947-4fe5-b77d-2549a69e4f03_1015x889.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The complete example is available on<a href="https://github.com/SebastianKuehnau/quarkus-vaadin-demo/tree/01-AI-Chat"> GitHub</a>.</p><h2>Project Setup</h2><p>The easiest way to set up a Quarkus project with the right extensions is via<a href="https://code.quarkus.io"> code.quarkus.io</a>. Select the following extensions:</p><ul><li><p>Vaadin Flow (com.vaadin:vaadin-quarkus-extension) &#8212; the Vaadin integration for Quarkus, including components, themes, and the Vaadin dev server</p></li><li><p>LangChain4j OpenAI (quarkus-langchain4j-openai) &#8212; AI service integration via LangChain4j</p></li></ul><p>If you already have a running Quarkus project and want to add Vaadin, add the following property, bom configuration and dependency to your pom.xml:</p><pre><code><code>&lt;properties&gt;
    &lt;vaadin.version&gt;25.0.7&lt;/vaadin.version&gt;
&lt;/properties&gt;

&lt;dependencyManagement&gt;
    &lt;dependencies&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;com.vaadin&lt;/groupId&gt;
            &lt;artifactId&gt;vaadin-bom&lt;/artifactId&gt;
            &lt;version&gt;${vaadin.version}&lt;/version&gt;
            &lt;type&gt;pom&lt;/type&gt;
            &lt;scope&gt;import&lt;/scope&gt;
        &lt;/dependency&gt;
    &lt;/dependencies&gt;
&lt;/dependencyManagement&gt;

&lt;dependencies&gt;
    &lt;dependency&gt;
        &lt;groupId&gt;com.vaadin&lt;/groupId&gt;
        &lt;artifactId&gt;vaadin-quarkus-extension&lt;/artifactId&gt;
        &lt;version&gt;${vaadin.version}&lt;/version&gt;
    &lt;/dependency&gt;
    &lt;!-- For the AI integration, add the LangChain4j OpenAI extension: --&gt;
    &lt;dependency&gt;
        &lt;groupId&gt;io.quarkiverse.langchain4j&lt;/groupId&gt;
        &lt;artifactId&gt;quarkus-langchain4j-openai&lt;/artifactId&gt;
    &lt;/dependency&gt;
&lt;/dependencies&gt;</code></code></pre><p></p><p>Finally, configure your OpenAI model and API key in application.properties:</p><pre><code><code>quarkus.langchain4j.openai.api-key=your-api-key-here

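# (Addition, not in the original article.) Avoid committing real keys:
# Quarkus also reads this setting from the QUARKUS_LANGCHAIN4J_OPENAI_API_KEY
# environment variable, so the api-key line can be left out of this file.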
quarkus.langchain4j.openai.chat-model.model-name=gpt-4o-mini</code></code></pre><p></p><h3>Your First Vaadin View</h3><p>Let&#8217;s create our first Vaadin view to verify everything is wired correctly. We&#8217;ll start with a minimal example &#8212; a simple class called AiChatView that extends VerticalLayout, mapped to the application root via @Route(""):</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;16a18849-f25e-4451-abaa-9e263c632287&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">@Route("")
public class AiChatView extends VerticalLayout {
    public AiChatView() {
        add("Hello World");
    }
}</code></pre></div><p>Start the application with ./mvnw quarkus:dev and open http://localhost:8080 in the browser. You should see a plain &#8220;Hello World&#8221; text rendered in the browser. That&#8217;s all it takes to get a Vaadin view running inside Quarkus.</p><h2>Building the AI Chat UI</h2><p>Now let&#8217;s replace the &#8220;Hello World&#8221; with a real chat interface. Vaadin provides ready-made components for exactly this use case:<a href="https://vaadin.com/docs/latest/components/message-list"> MessageList</a> to display the conversation,<a href="https://vaadin.com/docs/latest/components/message-input"> MessageInput</a> for the user&#8217;s input, and<a href="https://vaadin.com/docs/latest/components/scroller"> Scroller</a> to keep the view anchored to the latest message.</p><h3>The AI Service</h3><p>Next, we define the AI service interface. LangChain4j&#8217;s @RegisterAiService annotation tells Quarkus to generate the implementation at build time, wiring it to the configured OpenAI model automatically. The chat method returns a Multi&lt;String&gt; &#8212; a reactive stream of tokens that arrive one by one as the model generates its response:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;367cd379-175b-4239-907f-582a7fb37714&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">@SessionScoped
@RegisterAiService
public interface AiChatService {
    Multi&lt;String&gt; chat(@MemoryId Object chatId, @UserMessage String message);
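
    // (Addition, not in the original article.) A system prompt can pin the
    // assistant's behavior for every request; LangChain4j supports this via
    // the @SystemMessage annotation, e.g.:
    //
    // @SystemMessage("You are a concise assistant. Answer in plain text.")
    // Multi&lt;String&gt; chat(@MemoryId Object chatId, @UserMessage String message);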
}</code></pre></div><p>The @MemoryId parameter tells LangChain4j which conversation history to attach to this request. To make that work, provide a ChatMemoryProvider bean that stores a MessageWindowChatMemory per session:</p><p></p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;b3b78b82-fe5e-4a5a-8f94-3181a62e1eaf&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">@ApplicationScoped
public class ChatMemoryProviderBean implements ChatMemoryProvider {
    private final Map&lt;Object, MessageWindowChatMemory&gt; memories = new ConcurrentHashMap&lt;&gt;();
    @Override
    public MessageWindowChatMemory get(Object memoryId) {
        return memories.computeIfAbsent(memoryId, id -&gt;
                MessageWindowChatMemory.withMaxMessages(20));
    }
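
    // (Addition, not in the original article.) Entries in this map live for
    // the whole application lifetime. A session-destroy listener could call
    // something like the following to keep memory bounded:
    // void evict(Object memoryId) { memories.remove(memoryId); }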
}</code></pre></div><p>Note the scope difference: AiChatService is @SessionScoped &#8212; one instance per browser session &#8212; while ChatMemoryProviderBean is @ApplicationScoped, as it manages memory across all sessions in a single map.</p><h3>The Chat View</h3><p>With the service in place, we can build the view. The AiChatView injects AiChatService via CDI and uses Vaadin&#8217;s messaging components to display the conversation:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;713fe9b2-522a-486a-9802-c3aed4377af1&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">@Route("")
public class AiChatView extends VerticalLayout {
    private final MessageList messageList;
    private final Scroller scroller;
    @Inject
    AiChatService chatAiService;

    public AiChatView() {
        setSizeFull();
        messageList = new MessageList();
        messageList.setMarkdown(true);
        scroller = new Scroller(messageList);
        scroller.setSizeFull();
        var messageInput = new MessageInput();
        messageInput.setWidthFull();
        messageInput.addSubmitListener(this::onSubmit);
        add(scroller, messageInput);
        expand(scroller);
    }

    private void onSubmit(MessageInput.SubmitEvent event) {
        var ui = event.getSource().getUI().orElseThrow();
        var question = event.getValue();
        var userMsg = new MessageListItem(question, Instant.now(), "You");
        userMsg.setUserColorIndex(0);
        messageList.addItem(userMsg);
        var assistantMsg = new MessageListItem("", Instant.now(), "Assistant");
        assistantMsg.setUserColorIndex(1);
        messageList.addItem(assistantMsg);
        // Each browser tab gets its own chat memory
        var memoryId = ui.getUIId();
        chatAiService.chat(memoryId, question).subscribe()
                .with(token -&gt; ui.access(() -&gt; {
                    assistantMsg.appendText(token);
                    scroller.scrollToBottom();
                }));
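        // (Addition, not in the original article.) The with(...) call above
        // ignores failures; Mutiny also takes a failure callback, e.g.
        // .with(token -&gt; ..., failure -&gt; ui.access(() -&gt;
        //         assistantMsg.appendText("[error] " + failure.getMessage())));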
        scroller.scrollToBottom();
    }
}</code></pre></div><p>A few things worth pointing out here. The MessageList is wrapped in a Scroller so the conversation history remains fully accessible even as it grows beyond the visible area in the browser window. Markdown rendering is enabled on the MessageList so the model&#8217;s formatted responses &#8212; code blocks, bullet points, bold text &#8212; are displayed correctly.</p><p>When the user submits a message, the method onSubmit adds the user&#8217;s message and an empty assistant message to the list immediately. Using a method reference to bind onSubmit to the MessageInput keeps the code clean and the component setup easy to follow. The onSubmit method also fills the assistant message token by token as the model streams its response. Because the streaming callback runs on a background thread, all UI updates must happen inside ui.access() &#8212; this is Vaadin&#8217;s<a href="https://vaadin.com/docs/latest/flow/advanced/server-push"> Push mechanism</a> for safely accessing the UI from outside the request thread.</p><h3>Enabling Server Push</h3><p>Before ui.access() can work, we need to enable server push in Vaadin. Create a configuration class that implements AppShellConfigurator and annotate it with @Push:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;ee8af58c-8669-436f-9c13-a04af120ea43&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">@Push
@StyleSheet(Aura.STYLESHEET)
public class VaadinConfig implements AppShellConfigurator {
}</code></pre></div><p>This tells Vaadin to keep an open connection to the browser so the server can push UI updates at any time &#8212; essential for a streaming response. The @StyleSheet(Aura.STYLESHEET) annotation applies the base theme globally, making it available to all components across the application.</p><h2>Try it out</h2><p>With the application running, open http://localhost:8080. Type a question into the input field and submit it. You should see your message appear immediately in the MessageList, followed by the assistant&#8217;s response arriving token by token. The view scrolls automatically to keep the latest content visible.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t-iw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!t-iw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png 424w, https://substackcdn.com/image/fetch/$s_!t-iw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png 848w, https://substackcdn.com/image/fetch/$s_!t-iw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png 1272w, https://substackcdn.com/image/fetch/$s_!t-iw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!t-iw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png" width="785" height="619" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:619,&quot;width&quot;:785,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!t-iw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png 424w, https://substackcdn.com/image/fetch/$s_!t-iw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png 848w, https://substackcdn.com/image/fetch/$s_!t-iw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png 1272w, https://substackcdn.com/image/fetch/$s_!t-iw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba68ac47-0e75-497d-b202-9932b8a808a3_785x619.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Conclusion</h2><p>In just a few steps, we built a fully functional, streaming AI chat interface as a modern web application &#8212; entirely in Java. 
We set up a Quarkus project with Vaadin and LangChain4j, created our first Vaadin view, defined a reactive AI service, and wired everything together with server push to deliver a smooth token-by-token chat experience in the browser.</p><p>If you want to dive deeper into the technologies covered in this tutorial, here are the official resources:</p><ul><li><p><a href="https://vaadin.com/docs/latest">Vaadin Docs</a></p></li><li><p><a href="https://quarkus.io/guides/">Quarkus Docs</a></p></li><li><p><a href="https://docs.quarkiverse.io/quarkus-langchain4j/dev/index.html">Quarkus LangChain4j Docs</a></p></li></ul><p>The complete code for this tutorial is available on<a href="https://github.com/SebastianKuehnau/quarkus-vaadin-demo/tree/01-AI-Chat"> GitHub</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Write Better JavaDoc in Java 23 with Markdown Comments]]></title><description><![CDATA[Build a small Java library with Maven and JUnit, replace classic JavaDoc with /// Markdown comments, and generate cleaner API docs for humans and AI tools.]]></description><link>https://www.the-main-thread.com/p/java-24-markdown-javadoc-maven-tutorial</link><guid isPermaLink="false">https://www.the-main-thread.com/p/java-24-markdown-javadoc-maven-tutorial</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Wed, 15 Apr 2026 06:08:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/930df4a3-5773-400f-bd21-5cfd41a18cc2_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most teams think JavaDoc is a publishing problem. 
You write it once, the tool renders it, and that is the end of the story. In real codebases, that mental model breaks fast. The source is what developers actually read in reviews, in IDE hovers, and during production debugging. If the source form is noisy, the documentation is noisy for readers too.</p><p>Classic JavaDoc has always had this problem. The rendered HTML can look fine, but the source is full of scaffolding: <code>{@link}</code>, <code>{@code}</code>, <code>&lt;p&gt;</code>, <code>&lt;pre&gt;</code>, and little HTML fragments that mostly carry formatting. That friction has a cost. Developers write less documentation, they keep examples shorter than they should, and they avoid updating comments because touching them is annoying.</p><p>Java 23 fixes the practical part of this. <a href="https://openjdk.org/jeps/467">JEP 467 introduced Markdown documentation comments</a> with <code>///</code>, CommonMark support, and Markdown-style links to program elements. (EDIT: JEP 467 arrived in Java 23, not Java 24 as initially stated.) Oracle&#8217;s JavaDoc guide documents this feature for JDK 23 and later, and Java SE 24 exposes the <code>END_OF_LINE</code> documentation comment kind for <code>///</code> comments in the compiler model.</p><p>There is a second reason to care: humans are not your only readers anymore. Coding assistants, code search, and internal RAG pipelines read raw source too, not just the generated HTML site. Markdown looks like the rest of the text those systems already know: READMEs, issues, docs, examples, code fences. Cleaner comments help people and tools alike. JEP 467 also calls out the Compiler Tree API, which matters when you build or run source-analysis tooling.</p><p>In this tutorial we build a <strong>small Maven library</strong>: an in-memory book review registry with only the JDK and JUnit. We skip persistence and HTTP on purpose.
We want a small API where JavaDoc matters: package overview, sealed types, records, a service class, exceptions, all written with <code>///</code> in source you still want to read. Then we run <code>javadoc</code>, use VS Code hovers for quick feedback, and lock behavior with plain unit tests. <a href="https://inside.java/2024/07/24/vscode-extension-update/">VS Code&#8217;s Java tooling renders Markdown</a> in JavaDoc comments.</p><h2><strong>Prerequisites</strong></h2><p>You need a recent JDK, Maven, and a Java editor that understands modern Java. I use VS Code here because the Java tooling renders Markdown JavaDoc in hovers, and because many developers already use it.</p><ul><li><p>Java 23 or newer (the build uses <code>maven.compiler.release</code> <strong>23</strong>; JDK 25 works)</p></li><li><p>Maven 3.9 or newer</p></li><li><p>VS Code with the Extension Pack for Java</p></li><li><p>Comfort reading <code>pom.xml</code> and JUnit Jupiter (this tutorial uses <strong>JUnit 6</strong>; test code still imports <code>org.junit.jupiter.api</code>)</p></li></ul><p>Check the setup:</p><pre><code><code>java -version
mvn -version
code --version</code></code></pre><p>You want a JDK that supports <code>--release 23</code> (Java 23 or newer). Markdown documentation comments were introduced by JEP 467, and Oracle documents them as available in JDK 23 and later. </p><h2><strong>Project Setup</strong></h2><p>Create a project directory and work inside it. In <a href="https://github.com/myfear/the-main-thread">the-main-thread Github</a> the Maven tree lives under <code>bookreviews/</code>; if you are starting from scratch, create that folder and change into it before the next steps.</p><pre><code><code>mkdir -p bookreviews
cd bookreviews</code></code></pre><p>Create <code>pom.xml</code> in the project root (the <code>bookreviews</code> directory you just created). You can pick any <code>groupId</code> / <code>artifactId</code>; what matters is <code>maven.compiler.release</code> <strong>23</strong> and <strong>JUnit Jupiter 6</strong> (<code>junit-jupiter</code>) in test scope.</p><pre><code><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"&gt;
    &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt;

    &lt;groupId&gt;dev.mainthread&lt;/groupId&gt;
    &lt;artifactId&gt;bookreviews&lt;/artifactId&gt;
    &lt;version&gt;1.0.0-SNAPSHOT&lt;/version&gt;
    &lt;packaging&gt;jar&lt;/packaging&gt;

    &lt;properties&gt;
        &lt;maven.compiler.release&gt;23&lt;/maven.compiler.release&gt;
        &lt;project.build.sourceEncoding&gt;UTF-8&lt;/project.build.sourceEncoding&gt;
        &lt;junit.version&gt;6.0.3&lt;/junit.version&gt;
    &lt;/properties&gt;

    &lt;dependencies&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;org.junit.jupiter&lt;/groupId&gt;
            &lt;artifactId&gt;junit-jupiter&lt;/artifactId&gt;
            &lt;version&gt;${junit.version}&lt;/version&gt;
            &lt;scope&gt;test&lt;/scope&gt;
        &lt;/dependency&gt;
    &lt;/dependencies&gt;

    &lt;build&gt;
        &lt;plugins&gt;
            &lt;plugin&gt;
                &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;
                &lt;artifactId&gt;maven-compiler-plugin&lt;/artifactId&gt;
                &lt;version&gt;3.13.0&lt;/version&gt;
            &lt;/plugin&gt;
            &lt;plugin&gt;
                &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;
                &lt;artifactId&gt;maven-surefire-plugin&lt;/artifactId&gt;
                &lt;version&gt;3.5.2&lt;/version&gt;
            &lt;/plugin&gt;
            &lt;plugin&gt;
                &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;
                &lt;artifactId&gt;maven-javadoc-plugin&lt;/artifactId&gt;
                &lt;version&gt;3.11.2&lt;/version&gt;
                &lt;configuration&gt;
                    &lt;!-- `javadoc:javadoc` defaults to target/reports/apidocs; match the usual site path --&gt;
                    &lt;outputDirectory&gt;${project.build.directory}/site&lt;/outputDirectory&gt;
                &lt;/configuration&gt;
            &lt;/plugin&gt;
        &lt;/plugins&gt;
    &lt;/build&gt;
&lt;/project&gt;
</code></code></pre><p>Create the source tree:</p><pre><code><code>mkdir -p src/main/java/dev/mainthread/bookreviews
mkdir -p src/test/java/dev/mainthread/bookreviews</code></code></pre><p>Open the project in VS Code (from inside the project directory):</p><pre><code><code>code .</code></code></pre><p>From the parent of <code>bookreviews</code>, you can run <code>code bookreviews</code> instead.</p><h2><strong>Implementation</strong></h2><p>We add six compilation units under <code>src/main/java/dev/mainthread/bookreviews</code> (including <code>package-info.java</code>) and one test class. The logic stays small on purpose so the comments stay easy to see.</p><h3><strong>Package overview</strong></h3><p><code>package-info.java</code> is where you put &#8220;read me first&#8221; context: what the package is for, how the pieces fit, where to start. Markdown headings fit here. In classic HTML JavaDoc they never felt natural.</p><p>Create <code>src/main/java/dev/mainthread/bookreviews/package-info.java</code>:</p><pre><code><code>/// In-memory book review registry for tutorials and demos.
///
/// ## Where to start
///
/// - [BookReviewService] is the entry point for callers.
/// - [BookReview] is the immutable result type returned from the service.
/// - [ReviewSubmission] carries the fields passed to [BookReviewService] when creating a review.
///
/// ## Formats
///
/// [BookFormat] is a sealed hierarchy for optional catalog metadata.
///
/// ## Thread safety
///
/// [BookReviewService] is safe for concurrent use. Individual [BookReview]
/// instances are immutable value objects.
package dev.mainthread.bookreviews;</code></code></pre><p>Bracket links like <code>[BookReviewService]</code> resolve to types in the same package in generated docs, the same way <code>{@link}</code> did&#8212;without the noise in source. </p><p>After <code>mvn javadoc:javadoc</code> (see Build and JavaDoc), open <code>target/site/apidocs/dev/mainthread/bookreviews/package-summary.html</code>, or follow the package link from the overview. You should see Markdown headings and lists in the package description, then the class summary table.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4bpp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4bpp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png 424w, https://substackcdn.com/image/fetch/$s_!4bpp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png 848w, https://substackcdn.com/image/fetch/$s_!4bpp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png 1272w, https://substackcdn.com/image/fetch/$s_!4bpp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!4bpp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png" width="1238" height="1646" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1646,&quot;width&quot;:1238,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:229406,&quot;alt&quot;:&quot;API Doc Package Summary&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/191844576?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="API Doc Package Summary" title="API Doc Package Summary" srcset="https://substackcdn.com/image/fetch/$s_!4bpp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png 424w, https://substackcdn.com/image/fetch/$s_!4bpp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png 848w, https://substackcdn.com/image/fetch/$s_!4bpp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png 1272w, 
https://substackcdn.com/image/fetch/$s_!4bpp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F420a06bb-e113-4c5b-9939-81a1b122b6b5_1238x1646.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Classic JavaDoc next to Markdown</strong></h3><p>Before we add more files, compare styles on one line: a rating constraint on a method parameter.</p><p>Classic style tends toward:</p><pre><code><code>/**
 * Creates a review from user-supplied fields.
 *
 * @param rating score from {@code 1} to {@code 5} (inclusive)
 */</code></code></pre><p>Markdown documentation comments keep the tags the standard doclet still understands, but the body reads like normal text:</p><pre><code><code>/// Creates a review from user-supplied fields.
///
/// @param rating score from `1` to `5` (inclusive)</code></code></pre><p>Same information, less scaffolding. The rest of the tutorial stays in the <code>///</code> form.</p><h3><strong>Sealed format types</strong></h3><p>Sealed types are a simple place for cross-links. The implementors are one closed family, so a short tour in the interface comment helps readers.</p><p>Create <code>src/main/java/dev/mainthread/bookreviews/BookFormat.java</code>:</p><pre><code><code>package dev.mainthread.bookreviews;

/// Describes how a title is distributed or held for display.
///
/// This type is closed: only [Paperback] and [Ebook] exist. If you add a new
/// format, update this hierarchy and the package overview in
/// `package-info.java`.
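///
/// Because the hierarchy is sealed, a `switch` over a [BookFormat] can be
/// exhaustive without a `default` branch. Illustrative sketch (the `format`
/// variable is assumed to hold some [BookFormat]):
///
/// ```java
/// String label = switch (format) {
///     case BookFormat.Paperback p -&gt; "print, " + p.dimensions();
///     case BookFormat.Ebook e -&gt; "digital, " + e.uri();
/// };
/// ```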
public sealed interface BookFormat permits BookFormat.Paperback, BookFormat.Ebook {

    /// A print copy with rough dimensions for shelving or shipping estimates.
    ///
    /// @param dimensions human-readable size, for example `23 x 15 cm`
    record Paperback(String dimensions) implements BookFormat {
    }

    /// A digital edition identified by a stable download or storefront URL.
    ///
    /// @param uri location of the digital edition
    record Ebook(String uri) implements BookFormat {
    }
}</code></code></pre><h3><strong>Stored review record</strong></h3><p>Create <code>src/main/java/dev/mainthread/bookreviews/BookReview.java</code>:</p><pre><code><code>package dev.mainthread.bookreviews;

/// Represents a stored review for a single book.
///
/// Instances are immutable. The `id` is assigned by [BookReviewService] when a
/// review is created. Optional [BookFormat] metadata is for catalog UIs only;
/// the service does not interpret it beyond storage.
///
/// @param id system-assigned identifier
/// @param isbn ISBN-13 string in the form the caller supplied
/// @param title display title of the reviewed book
/// @param reviewer display name of the reviewer
/// @param rating score from `1` to `5`
/// @param body free-text review content
/// @param format optional [BookFormat], or `null` if unknown
public record BookReview(
        Long id,
        String isbn,
        String title,
        String reviewer,
        int rating,
        String body,
        BookFormat format
) {
}</code></code></pre><h3><strong>Input record without a validation framework</strong></h3><p>Libraries often document <strong>preconditions in prose</strong> and enforce them with ordinary code. You can also leave validation to callers if you say so in the comment. Here we skip Bean Validation so the tutorial stays about documentation.</p><p>Create <code>src/main/java/dev/mainthread/bookreviews/ReviewSubmission.java</code>:</p><pre><code><code>package dev.mainthread.bookreviews;

/// Caller-supplied data used to create a [BookReview].
///
/// ## Preconditions
///
/// The service rejects invalid input with [IllegalArgumentException]:
///
/// - `isbn`, `title`, `reviewer`, and `body` must be non-blank after trimming.
/// - `rating` must be between `1` and `5` inclusive.
/// - `body` must be at most `4000` characters after trimming.
///
/// ## ISBN
///
/// This type does not parse or checksum ISBNs. Callers should pass normalized
/// strings if their domain requires it.
///
/// @param isbn ISBN-13 or other normalized identifier string
/// @param title non-empty book title
/// @param reviewer non-empty reviewer name
/// @param rating score from `1` to `5`
/// @param body review text
/// @param format optional [BookFormat], may be `null`
public record ReviewSubmission(
        String isbn,
        String title,
        String reviewer,
        int rating,
        String body,
        BookFormat format
) {
}</code></code></pre><h3><strong>Domain exception</strong></h3><p>Create <code>src/main/java/dev/mainthread/bookreviews/ReviewNotFoundException.java</code>:</p><pre><code><code>package dev.mainthread.bookreviews;

/// Thrown when a requested [BookReview] does not exist in the registry.
///
/// Callers that map errors to user-visible messages can rely on
/// [#getMessage()] for a stable English sentence in this implementation.
public class ReviewNotFoundException extends RuntimeException {

    /// @param id identifier that was not found
    public ReviewNotFoundException(Long id) {
        super("No review found with id " + id);
    }
}</code></code></pre><h3><strong>Service implementation</strong></h3><p>The service class is where longer Markdown comments help. You get headings for thread safety, limits, and a short example, and you never type <code>&lt;h2&gt;</code> or <code>&lt;pre&gt;</code> in source.</p><p>Create <code>src/main/java/dev/mainthread/bookreviews/BookReviewService.java</code>:</p><pre><code><code>package dev.mainthread.bookreviews;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/// In-memory registry of [BookReview] instances.
///
/// Reviews are stored in a `ConcurrentHashMap` and identified by a monotonically
/// increasing `long` ID.
///
/// ## Thread safety
///
/// Read and write operations are safe for concurrent access. The store itself is
/// thread-safe, and the ID sequence is managed with `AtomicLong`.
///
/// ## Limits
///
/// This implementation does **not** persist data. Discarding the service instance
/// clears all reviews. It is suitable for demos, tests, and embedding in larger
/// applications that supply their own persistence.
///
/// ## Example
///
/// ```java
/// var service = new BookReviewService();
/// BookReview created = service.create(new ReviewSubmission(
///         "9780134685991",
///         "Effective Java",
///         "mjava",
///         5,
///         "Essential reading for any Java developer.",
///         new BookFormat.Paperback("23 x 15 cm")
/// ));
/// BookReview found = service.findById(created.id());
/// ```
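///
/// Lookups that can miss behave differently: `findByIsbn` returns a possibly
/// empty list, while `findById` and `delete` throw [ReviewNotFoundException]
/// for unknown IDs:
///
/// ```java
/// var byIsbn = service.findByIsbn("9780134685991");
/// service.delete(created.id());
/// // findById(created.id()) would now throw ReviewNotFoundException
/// ```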
public class BookReviewService {

    private static final int BODY_MAX_LEN = 4000;

    private final Map&lt;Long, BookReview&gt; store = new ConcurrentHashMap&lt;&gt;();
    private final AtomicLong sequence = new AtomicLong(1);

    /// Creates a new review and assigns a unique ID.
    ///
    /// @param submission validated caller input; see [ReviewSubmission]
    /// @return newly created [BookReview]
    /// @throws IllegalArgumentException if preconditions on [ReviewSubmission] fail
    public BookReview create(ReviewSubmission submission) {
        validateSubmission(submission);
        long id = sequence.getAndIncrement();
        BookReview review = new BookReview(
                id,
                submission.isbn().trim(),
                submission.title().trim(),
                submission.reviewer().trim(),
                submission.rating(),
                submission.body().trim(),
                submission.format()
        );
        store.put(id, review);
        return review;
    }

    /// Looks up a single review by its system-assigned ID.
    ///
    /// @param id review identifier
    /// @return matching [BookReview]
    /// @throws ReviewNotFoundException if the ID does not exist
    public BookReview findById(Long id) {
        return Optional.ofNullable(store.get(id))
                .orElseThrow(() -&gt; new ReviewNotFoundException(id));
    }

    /// Returns all stored reviews.
    ///
    /// @return snapshot list of all reviews; not backed by the live store
    public List&lt;BookReview&gt; findAll() {
        return new ArrayList&lt;&gt;(store.values());
    }

    /// Finds all reviews whose stored ISBN matches exactly.
    ///
    /// @param isbn ISBN string to match exactly against stored reviews
    /// @return matching reviews, possibly empty; does not throw [ReviewNotFoundException]
    public List&lt;BookReview&gt; findByIsbn(String isbn) {
        return store.values().stream()
                .filter(review -&gt; review.isbn().equals(isbn))
                .toList();
    }

    /// Removes a review by ID.
    ///
    /// @param id review identifier
    /// @throws ReviewNotFoundException if the ID does not exist
    public void delete(Long id) {
        if (store.remove(id) == null) {
            throw new ReviewNotFoundException(id);
        }
    }

    private static void validateSubmission(ReviewSubmission s) {
        if (s.isbn() == null || s.title() == null || s.reviewer() == null || s.body() == null
                || s.isbn().isBlank() || s.title().isBlank() || s.reviewer().isBlank() || s.body().isBlank()) {
            throw new IllegalArgumentException("isbn, title, reviewer, and body must be non-null and non-blank");
        }
        if (s.rating() &lt; 1 || s.rating() &gt; 5) {
            throw new IllegalArgumentException("rating must be between 1 and 5");
        }
        if (s.body().trim().length() &gt; BODY_MAX_LEN) {
            throw new IllegalArgumentException("body exceeds maximum length");
        }
    }
}</code></code></pre><p>Here Markdown comments start to feel like a real language feature. <a href="https://docs.oracle.com/en/java/javase/24/javadoc/javadoc-guide.pdf">Oracle&#8217;s JavaDoc guide</a> documents CommonMark together with normal JavaDoc tags and links to program elements. </p><p>The compiler model in <a href="https://docs.oracle.com/en/java/javase/23/docs/api/java.compiler/javax/lang/model/util/Elements.DocCommentKind.html">Java 23</a> treats <code>///</code> as an end-of-line documentation comment kind, and the standard doclet treats it as Markdown plus JavaDoc tags. (<code>Elements.DocCommentKind</code>)</p><p>On the generated <code>BookReviewService</code> page, that same comment turns into subsection headings (&#8220;Thread safety&#8221;, &#8220;Limits&#8221;, &#8220;Example&#8221;) and a fenced Java sample in the HTML. You still write it in source without <code>{@code}</code> or <code>&lt;pre&gt;</code>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VBmx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VBmx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png 424w, https://substackcdn.com/image/fetch/$s_!VBmx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png 848w, 
https://substackcdn.com/image/fetch/$s_!VBmx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png 1272w, https://substackcdn.com/image/fetch/$s_!VBmx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VBmx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png" width="1238" height="1646" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1646,&quot;width&quot;:1238,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:199691,&quot;alt&quot;:&quot;Service Example&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/191844576?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Service Example" title="Service Example" srcset="https://substackcdn.com/image/fetch/$s_!VBmx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png 424w, 
https://substackcdn.com/image/fetch/$s_!VBmx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png 848w, https://substackcdn.com/image/fetch/$s_!VBmx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png 1272w, https://substackcdn.com/image/fetch/$s_!VBmx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fa21846-d1d0-4555-a33b-9b5bf39b75cc_1238x1646.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>Build and JavaDoc</strong></h2><p>The <code>maven-javadoc-plugin</code> picks up <code>maven.compiler.release</code> (23 in this POM) by default, so the doclet matches the compiler.</p><p>Generate HTML:</p><pre><code><code>mvn -q javadoc:javadoc</code></code></pre><p>Open the site (on macOS):</p><pre><code><code>open target/site/apidocs/index.html</code></code></pre><p>On Windows (PowerShell):</p><pre><code><code>start target/site/apidocs/index.html</code></code></pre><p>The <code>javadoc</code> tool in Java 23 uses the standard doclet, and Oracle documents Markdown documentation comments as a supported feature of that toolchain. (<code>javadoc</code> <a href="https://docs.oracle.com/en/java/javase/24/docs/specs/man/javadoc.html">command reference</a>)</p><p>Now use the faster loop in VS Code. Open <code>BookReviewService.java</code> or <code>package-info.java</code> and hover a type or method. You read Markdown in hovers all day. Generated HTML is for when you publish.</p><p>To attach documentation to the JAR you publish to a repository, run <code>mvn -q javadoc:jar</code> and ship the <code>-javadoc.jar</code> next to your main artifact. Consumers of your library get the same rendered API in their IDE.</p><h2><strong>What This Means for AI-Assisted Development</strong></h2><p>This part is easy to miss when you only look at generated HTML. The change that matters is still in the source file.</p><p>Old JavaDoc carries a lot of JavaDoc-only and HTML-only noise. Models can learn that, and many already did, but normal Markdown is still easier to read. A fenced <code>java</code> block looks like code samples everywhere else. A bracket link looks like a normal technical link. Code assistants that scan raw files get simpler text to work with.</p><p>JEP 467 also matters for tool builders. Delivered in JDK 23, it defined <code>///</code> Markdown documentation comments and exposed them through the compiler API as well as the standard doclet. 
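</p><p>One way to picture that: a toy indexer can recover the documentation text from <code>///</code> lines with plain string handling. This is an illustrative sketch only (the class name and the whitespace rule are mine; real tooling should lean on the compiler API instead):</p>

```java
import java.util.stream.Collectors;

// Hypothetical sketch, not a real tool: recover the Markdown text of a
// run of /// documentation lines with ordinary string handling.
public class DocTextSketch {

    // Strips indentation, the /// prefix, and at most one following space,
    // so indented Markdown (nested lists, fenced code) keeps its shape.
    static String extractDoc(String source) {
        return source.lines()
                .map(String::stripLeading)
                .filter(line -> line.startsWith("///"))
                .map(line -> line.length() > 3 && line.charAt(3) == ' '
                        ? line.substring(4)
                        : line.substring(3))
                .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        String source = String.join("\n",
                "    /// Creates a review from user-supplied fields.",
                "    ///",
                "    /// @param rating score from `1` to `5` (inclusive)",
                "    public BookReview create(ReviewSubmission submission) {");
        // Prints only the documentation text, with the prefixes removed.
        System.out.println(extractDoc(source));
    }
}
```

<p>Stripping at most one space after the prefix keeps indented Markdown intact, which matters for nested lists and fenced examples.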
If you run internal indexing, source analysis, or agent pipelines on Java source, <code>///</code> comments are easier to treat as plain documentation text. </p><p>Bad documentation stays bad. A vague Markdown comment is still vague. When the formatting tax goes down, teams often write clearer examples, limits, and method contracts anyway. That is the practical win.</p><h2><strong>Verification</strong></h2><p>Create <code>src/test/java/dev/mainthread/bookreviews/BookReviewServiceTest.java</code>:</p><pre><code><code>package dev.mainthread.bookreviews;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class BookReviewServiceTest {

    @Test
    void createFindAndDeleteRoundTrip() {
        var service = new BookReviewService();
        var submission = new ReviewSubmission(
                "9780134685991",
                "Effective Java",
                "mjava",
                5,
                "Essential reading for any Java developer.",
                new BookFormat.Paperback("23 x 15 cm")
        );

        BookReview created = service.create(submission);
        assertEquals("Effective Java", created.title());
        assertEquals(5, created.rating());

        BookReview found = service.findById(created.id());
        assertEquals(created, found);

        service.delete(created.id());
        assertThrows(ReviewNotFoundException.class, () -&gt; service.findById(created.id()));
    }

    @Test
    void rejectsOutOfRangeRating() {
        var service = new BookReviewService();
        var bad = new ReviewSubmission(
                "9780134685991",
                "Effective Java",
                "mjava",
                9,
                "Essential reading for any Java developer.",
                null
        );
        assertThrows(IllegalArgumentException.class, () -&gt; service.create(bad));
    }

    @Test
    void notFoundIsStable() {
        var service = new BookReviewService();
        var ex = assertThrows(ReviewNotFoundException.class, () -&gt; service.findById(999L));
        assertEquals("No review found with id 999", ex.getMessage());
    }
}</code></code></pre><p>Run tests:</p><pre><code><code>mvn test</code></code></pre><p>Surefire should report three tests in <code>BookReviewServiceTest</code>, for example:</p><pre><code><code>Tests run: 3, Failures: 0, Errors: 0, Skipped: 0</code></code></pre><p>If you use <code>mvn -q test</code> instead, Maven runs in <strong>quiet</strong> mode. When everything passes, it often prints <strong>nothing at all</strong>. That is normal. You only see output when something fails or when a plugin logs a warning.</p><p>Confirm the Javadoc JAR builds (optional but recommended before publish):</p><pre><code><code>mvn -q javadoc:jar</code></code></pre><p>The three tests check the behaviors the <code>///</code> text promises: happy path, argument validation, and stable <code>ReviewNotFoundException</code> messaging.</p><h2><strong>Incremental Migration Advice</strong></h2><p>Do not plan a big-bang rewrite of every JavaDoc block. Keep migration simple. Use <code>///</code> for all new code. When you touch public APIs, core services, or classes with examples for real work, migrate those comments. Leave old comments alone until you edit them anyway.</p><p>Java&#8217;s documentation model supports both forms. Oracle&#8217;s API docs in Java 23 even note that inherited documentation can cross between Markdown comments and traditional comments, so mixed codebases are normal during migration. </p><p>One practical warning: a <code>///</code> comment is no longer just a normal line comment in modern JDKs. On declarations, it becomes documentation. That is usually what you want, but it is worth being deliberate when you introduce it across older code.</p><h2><strong>Conclusion</strong></h2><p>We built a <strong>small library</strong> on purpose, not a REST service, so Markdown JavaDoc stays tied to what it improves: package and type docs people read in the IDE, richer examples and structure in source, and HTML from the standard doclet without HTML scaffolding in comments. 
JavaDoc in the browser looks nicer too, but the main win is the text next to your public API: easier to write, easier to keep aligned with the code, easier for people and tools to read. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Real Problem With AI-Assisted Java Content Is Drift]]></title><description><![CDATA[A few plausible but wrong answers were enough. This is how I now use docs, skills, retrieval, and runnable code to keep AI-assisted Java writing honest.]]></description><link>https://www.the-main-thread.com/p/quarkus-ai-grounding-java-writing-workflow</link><guid isPermaLink="false">https://www.the-main-thread.com/p/quarkus-ai-grounding-java-writing-workflow</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Mon, 13 Apr 2026 06:08:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/863389da-0e2d-483a-8544-ad5a93669522_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently took down one of my Quarkus posts. Not because the whole thing was garbage. Not because every example was broken. It was because a few parts were off in a way that matters a lot in technical writing: they sounded plausible, but they were wrong enough to mislead readers.</p><p>That is worse than an obvious mistake.</p><p>An obvious mistake gets caught quickly. A plausible mistake gets repeated. Someone reads it before an interview. Someone copies it into a demo. Someone turns it into a team explanation. And suddenly the problem is not one bad sentence anymore. 
The problem is that bad information now looks like accepted knowledge.</p><p>I was called out on that, and rightly so. The feedback was direct. Some of it was uncomfortable. But it was fair.</p><p>The main point was not that AI should never be used. The point was that a high-speed, AI-assisted publishing workflow creates a very specific risk. Code can compile. The narrative can sound smooth. The article can feel finished. And still, a couple of details can drift just far enough away from reality to become harmful. In a fast-moving ecosystem like Quarkus, that is not a small issue. That is exactly where trust starts to break.</p><p>There is another side to this that also matters to me. The Main Thread was never only meant to be a publishing machine for Java content. It also became my laboratory for figuring out how far I can push AI to teach, and how far I can push myself to work with AI without losing control of the outcome. That experiment is still very much alive. It is also exactly why I need to be honest when it fails.</p><p>Another point hit just as hard. Review is expensive. Engineering time is expensive. A review process should not be there to rescue content that was never grounded properly in the first place. It should be there as a final safety layer. Not as cleanup for a workflow that moves too fast.</p><p>That landed with me.</p><p>So I promised to keep the original post down. And I thought that was the right call. Not because I want to perform some public self-punishment ritual. Just because replacing a flawed article with a better one is more useful than quietly pretending nothing happened.</p><p>So rather than trying to rescue the old article, I decided to write this one.</p><p>It is not another list post. It is not a patched version of interview questions. 
It is the thing underneath the problem: how I think about keeping AI-infused IDEs, coding assistants, and agent workflows grounded enough that they stay useful without drifting into confident nonsense.</p><p>Because that, for me, is the real issue now. AI is not going away. IDEs with copilots, agents, MCP servers, retrieval layers, and doc-aware tooling are not going away. The question is not whether we use them. The question is whether we build workflows around them that hold up under technical scrutiny.</p><h2>The Real Problem Is Not &#8220;AI Slop.&#8221; It Is Drift.</h2><p>&#8220;AI slop&#8221; is a catchy phrase, but I do not think it is precise enough.</p><p>The real problem is drift.</p><p>A model starts with a roughly correct understanding. Then it fills in one missing detail from training data and statistics. Then another. It picks a term that used to be right. It explains a pattern that technically works but is not the right Quarkus way to do it. It mixes old and new vocabulary. It invents a connection between two things that sound related. None of these mistakes are dramatic on their own. But together they produce content that feels sharp and complete while quietly losing contact with the source material.</p><p>That is the dangerous part.</p><p>And this is exactly why &#8220;the code compiles&#8221; is not enough. I learned that one the hard way. A generated example can compile and still teach the wrong habit. It can compile and still overcomplicate the solution. It can compile and still present a pattern that no experienced Quarkus engineer would recommend. Technical correctness is more than syntactic success.</p><p>There is also a second kind of drift that gets less attention. Tone drift.</p><p>When I rely too much on model-first drafting, the writing starts to flatten. Every sentence gets punchy. Every paragraph sounds polished in the same way. 
The article reads like it was assembled by a machine trained on five thousand &#8220;developer content&#8221; headlines and then sprayed with confidence. Even when the facts are right, that tone damages trust. Readers can feel it.</p><p>So when I say I want to keep AI grounded, I mean both things. I want the facts grounded in current sources and runnable reality. And I want the writing grounded in a voice that still sounds like me.</p><h2>What Grounding Means for Me</h2><p>Grounding, for me, is simple in principle.</p><p>The model does not get to answer from vibes.</p><p>I do not trust a general-purpose model to &#8220;just know&#8221; Quarkus. Not for version-sensitive details. Not for renamed extensions. Not for testing changes. Not for migration nuances. Not for what is technically possible versus what is idiomatic. That is where drift shows up first.</p><p>So I try to force the workflow away from memory and back toward sources.</p><p><strong>The first layer is current documentation.</strong> If I write about Quarkus, I want the model to work from current guides, migration notes, release material, and actual code. Not from stale training memory. That sounds obvious, but it changes the whole character of the output. The model stops behaving like an oracle and starts behaving more like an assistant reading over your shoulder.</p><p><strong>The second layer is targeted retrieval.</strong> I do not want broad prompts like &#8220;tell me about Quarkus testing.&#8221; I want narrower, version-aware context. Show me the current guide. Show me what changed. Show me the names that are valid now. Show me the artifact or config that matches the current platform line. Broad prompts invite generic answers. Narrow prompts force contact with specifics.</p><p><strong>The third layer is contradiction hunting.</strong> This is one of the least glamorous parts of the process, but it matters a lot. I look for stale tokens. Old names. Old guide references. Old vocabulary. 
Old explanations that used to be true in one release line and are not true anymore. This is where a lot of plausible nonsense hides. Not in wild hallucinations. In leftovers.</p><p><strong>The fourth layer is runnable code.</strong> I want code that builds, starts, and behaves the way the article says it behaves. I want the failure path to be real. I want the endpoint response to be real. I want the config to do something visible. If I make a claim, I want some proof behind it. That does not mean every article becomes a giant test suite. But it does mean that &#8220;looks right&#8221; is not enough.</p><p><strong>The fifth layer is human judgment.</strong> I still use AI heavily. I am not moving backward on that. But there is a big difference between using AI to accelerate exploration and letting AI define technical truth. The model can help me think faster, compare options, rewrite, structure, and pressure-test. It should not be the final source of authority on framework behavior.</p><p>That distinction matters more and more.</p><h2>How I Actually Use AI in the Workflow</h2><p>My workflow is not &#8220;generate article, publish article.&#8221;</p><p>That would be irresponsible, and it would also not produce the kind of content I want to put my name on.</p><p>I use AI at several stages, but not for the same job in every stage.</p><p>I use it to help me explore a topic faster. I use it to challenge assumptions. I use it to find likely weak points. I use it to shape structure when the material is messy. I use it to rewrite drafts into something that has a cleaner arc. I use it to pressure-test whether an explanation makes sense to someone who was not already in my head.</p><p>But the closer I get to the final text, the less I want &#8220;free generation&#8221; and the more I want constrained generation. I want source-linked docs. I want current framework material. I want rules for tone and structure. I want code I can verify. 
I want a process that reduces randomness.</p><p>That is the part many AI debates skip over. The tool is not one thing. &#8220;Using AI&#8221; can mean lazy autopilot. It can also mean a carefully constrained system where the model is only one part of a larger workflow. Those are not remotely the same.</p><p>And this is where grounding tools start to matter.</p><h2>The Tooling Part: Why Context Matters More Than Cleverness</h2><p>One of the biggest mistakes with AI IDEs is expecting the model to carry too much of the truth inside itself.</p><p>That works for generic coding tasks. It breaks down fast for active frameworks, product lines, release-specific guidance, and fast-moving ecosystems. Quarkus changes. Tooling changes. Names change. Recommended approaches change. A model that only answers from memory will always lag behind that reality.</p><p>So I use context injection and documentation retrieval wherever I can.</p><p>That includes working with documentation-oriented tooling that can pull current, source-linked material into the prompt instead of leaving the model alone with its own memory. It also includes using MCP-based doc access so the assistant can retrieve the right project material at the moment I ask the question. This is not glamorous, but it is a huge part of what makes the output less fragile.</p><p>I also think there is a broader lesson here for framework teams. If we want AI tools to produce better outcomes, we need better ways to expose authoritative project knowledge to them. Not by replacing documentation, but by making documentation easier for these systems to consume correctly. Good docs are still the foundation. Better retrieval just gives them a stronger path into the workflow.</p><h2>The Skills Layer I Use</h2><p>Grounding is one part of the story. The other part is skills.</p><p>In my setup, skills are curated playbooks. They are not code, and they are not some magic hidden training layer. 
They are explicit instructions for how certain kinds of work should be done. They help reduce one of the biggest practical problems in AI-assisted writing and coding: inconsistency.</p><p>Without that layer, every draft starts from scratch in the worst possible way. One day the model writes a clean technical walkthrough. The next day it over-explains. Another day it changes tone halfway through. Another day it forgets the article structure, skips verification, or slips into that too-polished &#8220;developer content&#8221; voice that nobody really trusts.</p><p>Skills give me a way to tighten that up.</p><p>For writing, I mainly rely on three kinds of guardrails. One defines the structure of a proper Main Thread tutorial or article. One keeps the voice closer to how I actually speak and write. One acts as a stricter review pass that checks whether the result is technically solid, teachable, and ready to ship.</p><p>That combination helps a lot. Structure keeps the article stable. Voice keeps it human. Review keeps it honest.</p><p>Beyond that, I am also testing work from the Quarkus project around technical guardrails expressed as skills. I think that is one of the more promising directions in this space. Not because skills replace expertise. They do not. But because they can encode project-specific expectations in a form that an assistant can actually follow. That means fewer random detours, fewer invented patterns, and a better chance that the output reflects how the framework really wants to be used.</p><p>That part is still evolving, and I am learning along the way. But I like the direction very much. It moves the workflow away from &#8220;trust the model&#8221; and closer to &#8220;constrain the model with the project&#8217;s own rules.&#8221;</p><p>And that is exactly where I want to be.</p><h2>What Changed for Me After 400+ Posts</h2><p>Publishing at a high cadence taught me a lot. Some of it was good. Some of it was painful.</p><p>The good part is obvious. 
I learned faster. I explored more topics. I found patterns in what readers care about. I got better at turning technical material into readable stories. I also got a very practical education in what these tools are good at and where they fail.</p><p>And this is where The Main Thread ended up meaning more to me than &#8220;just&#8221; a publication. It became a working lab. A place where I could test not only what AI can produce, but how AI changes the way I research, structure, verify, explain, and ship technical content. That is also why I am speaking about this process publicly in my JCON Europe 2026 session, <strong>&#8220;<a href="https://schedule.jcon.one/2026/session/1055455?utm_source=chatgpt.com">Chasing the Main Thread - Adventures in AI Assisted Coding.</a>&#8221;</strong> </p><p>The painful part is also clear now. Speed hides weaknesses until it does not. A workflow can feel productive for months and still contain a flaw that only becomes fully visible when trust is on the line. In my case, that flaw was not &#8220;too much AI&#8221; in some abstract moral sense. It was not enough hard grounding around the parts that matter most: current facts, framework idioms, and final accountability.</p><p>That is why I do not think the answer is to stop using AI. I think the answer is to stop pretending that generation alone is a workflow.</p><p><strong>Generation is one step. Grounding, retrieval, contradiction checks, runnable code, editorial constraint, and final accountability are the workflow.</strong></p><p>That is the difference.</p><h2>What I Am Trying to Do Now</h2><p>I still believe these tools matter. I still believe learning them aggressively is the right move. 
I still think the future of technical work involves more agentic tooling, more IDE assistance, more retrieval, and more model-driven exploration.</p><p>But I also think there is a responsibility that comes with publishing technical content in public, especially around a project like Quarkus where people use articles as a shortcut to understanding.</p><p>I do not want to create review debt for busy engineers. I do not want to publish things that sound official just because they circulate widely. I do not want to produce content that looks polished while eroding trust underneath. And I definitely do not want to feed wrong explanations back into the broader machine that will repeat them again later.</p><p>So the goal now is not just more output. The goal is better constraints.</p><p>Better sources. Better retrieval. Better guardrails. Better code verification. Better editorial discipline. Better use of AI where it helps, and less trust where it does not deserve trust.</p><h2>Conclusion</h2><p>I took down a Quarkus article because a few answers were wrong. That was the immediate reason. The deeper reason is that it exposed something more useful: if I want AI-infused IDEs and writing workflows to be worth anything, they need tighter contact with reality than &#8220;looks plausible.&#8221; For me, that means current docs, targeted retrieval, contradiction checks, runnable code, explicit skills, and a workflow where the model helps, but does not get to decide what is true.</p><p>That is the version of this experiment I want to keep doing: less faith in generation, more discipline around grounding, and a clearer understanding that The Main Thread is both a publication and a laboratory. Thank you for joining me on this experiment. 
And thank you for your feedback.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Why Enterprise Java Teams Need Boundaries for AI Agents]]></title><description><![CDATA[Senior Java developers are starting to treat coding agents like real operators, not smarter autocomplete. Here&#8217;s how to contain shell access, secrets, MCP tools, and autonomous changes before convenie]]></description><link>https://www.the-main-thread.com/p/ai-coding-agents-security-java-blast-radius</link><guid isPermaLink="false">https://www.the-main-thread.com/p/ai-coding-agents-security-java-blast-radius</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Sun, 12 Apr 2026 06:08:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4966939c-36a8-4578-9ce2-b1c2a4bde656_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI coding tools have moved far past autocomplete. They read large codebases, propose architecture changes, edit files, run shell commands, call APIs, and increasingly act like junior engineers with terminal access. That changes the security conversation completely.</p><p>For years, most application security work assumed a simple model. Developers wrote code. Pipelines validated it. Production systems enforced runtime controls. Even when developers made mistakes, the path from mistake to incident usually had friction in it. A pull request needed review. A deployment needed approval. A shell command needed a human to type it.</p><p>Agentic tooling removes a lot of that friction. That is the point. It speeds up work. 
But it also compresses the distance between suggestion and action. When an AI agent can read your repository, inspect environment files, hit internal endpoints, modify source code, and run commands without pause, you are no longer dealing with a code assistant. You are dealing with a probabilistic actor inside your delivery system.</p><p>That is where many teams still use the wrong mental model. They think the main risk is bad generated code. It is not. Bad code is the old problem. The new problem is operational autonomy. The danger starts when the model can do things, not just suggest things.</p><p>For Java teams in regulated or enterprise-heavy environments, this matters more than it does for hobby projects. Your systems usually sit next to customer data, internal APIs, CI/CD pipelines, cloud credentials, and a lot of old infrastructure that still works but breaks in ugly ways when touched carelessly. If you plug an autonomous coding agent into that world, containment stops being a nice security add-on. It becomes the architecture.</p><h2>The real problem is not intelligence. It is agency.</h2><p>Most of the current hype talks about how smart these agents are becoming. That is interesting, but it is not the main design issue. The real issue is agency. What can the agent do on its own? What can it read? What can it write? What can it execute? What can it reach over the network? What happens when it misreads instructions or ingests malicious context?</p><p>This is why the OWASP &#8220;excessive agency&#8221; idea is so useful. It describes the exact failure mode many teams are walking into. They start with a tool that helps write tests or explain code. Then they add file editing. Then shell access. Then GitHub integration. Then MCP servers. Then deployment hooks. One small convenience at a time, the agent moves from assistant to operator.</p><p>And once that happens, prompt injection becomes much more serious. 
In a chat window, a poisoned README or a malicious issue comment is annoying. In an agent workflow, it can turn into command execution, secret exfiltration, or remote system access. The agent does not need to be &#8220;hacked&#8221; in the traditional sense. It only needs to be convinced.</p><p>That is what makes this different from normal software security. The control plane is language. The exploit path is often context. The toolset is already built into the system.</p><h2>Why &#8220;YOLO mode&#8221; is not a feature</h2><p>A lot of developers understand this in theory and still end up in the same place in practice: full auto-approval.</p><p>The reason is obvious. Interruptions are annoying. Approval prompts slow down flow. If your company is pushing AI adoption hard, the pressure to remove friction gets even stronger. Teams start treating safety prompts as UI noise. They want the tool to just do the work.</p><p>That is where &#8220;YOLO mode&#8221; shows up. Different products call it different things, but the idea is the same: let the agent read, write, execute, and call tools without stopping for human confirmation.</p><p>This is where security falls apart fast.</p><p>The problem with full auto-approval is not only that destructive things can happen. It is that destructive things happen at machine speed. If the agent decides to run an unsafe command, touch production-facing configuration, or send secrets to an external endpoint, the time between bad reasoning and bad outcome can be seconds or less. Human intuition never enters the loop.</p><p>For enterprise Java teams, the risk is even more concrete. A coding agent sitting in a Quarkus or Spring codebase can easily see deployment descriptors, Kubernetes manifests, CI workflows, local <code>.env</code> files, test credentials, internal URLs, and database settings. 
If it is allowed to act on all of that autonomously, you have collapsed a lot of security boundaries into one prompt window.</p><p>That is not &#8220;developer productivity with guardrails.&#8221; That is just privileged automation with a language model in the middle.</p><h2>The whitelist trap</h2><p>Some teams try to be more careful. They do not enable full autonomy. They create a hybrid model where &#8220;safe&#8221; operations are auto-approved and dangerous ones still need manual confirmation.</p><p>That sounds reasonable. In practice, it often creates a false sense of safety.</p><p>The classic mistake is whitelisting tools instead of validating intent. A team says, &#8220;Running <code>docker</code> is fine&#8221; or &#8220;Using <code>podman</code> is fine&#8221; or &#8220;This sandbox wrapper is safe.&#8221; But the executable alone is not the security boundary. The arguments matter. Context matters. Mounted volumes matter. Network flags matter.</p><p>A container runtime can isolate work. It can also expose the host. A shell command can compile code. It can also delete a workspace, leak secrets, or rewrite build configuration. An MCP tool can search documentation. It can also mutate remote systems if you auto-approve the wrong capability.</p><p>This is why simplistic whitelisting is not enough. A privileged tool plus malicious arguments is still a privileged action. Senior engineers know this already from other systems. The same command that helps you debug a pod can also destroy a cluster if pointed at the wrong target. Agent workflows do not change that truth. They just hide it behind natural language.</p><h2>The only sane model is containment</h2><p>Once you accept that agents will misread context, hallucinate, or eventually ingest malicious input, the design goal changes. You stop trying to make the model perfectly safe. You focus on blast radius.</p><p>That means containment.</p><p>The first layer is execution isolation. 
Agents should not operate directly on the host with broad local access. They need sandboxes, ephemeral containers, or tightly scoped environments that can be destroyed and rebuilt easily. If the model does something stupid, the damage stays inside a disposable boundary.</p><p>The second layer is network control. A lot of agent exploits end in exfiltration. If the runtime can call arbitrary external endpoints, a compromised prompt can turn into outbound data leakage very quickly. Egress should be narrow, explicit, and logged. Default deny is the right mindset here.</p><p>The third layer is secret handling. Local plaintext secrets and autonomous agents do not belong together. If your workflow still depends on <code>.env</code> files full of long-lived credentials, the agent does not even need to be malicious to create a problem. It only needs to summarize the wrong file, paste the wrong snippet, or include the wrong detail in generated code. Short-lived credentials and external secret managers are not optional in this model.</p><p>The fourth layer is approval design. High-impact actions must stay behind human confirmation. Not because humans are perfect, but because humans at least understand business context, timing, and consequences. The model does not.</p><h2>MCP is where the stakes jump again</h2><p>The next big boundary problem is MCP.</p><p>MCP is useful because it turns the agent into a real participant in the toolchain. It can talk to documentation systems, issue trackers, orchestration platforms, internal APIs, and whatever else you expose through a server. That is also exactly why it becomes dangerous.</p><p>Every MCP server is a trust decision. Every connected tool expands the action surface. Every &#8220;always allow&#8221; setting chips away at your approval boundary.</p><p>For Java teams, this is familiar territory in a different form. We already know that integrations are where simple systems become enterprise systems. 
The same service that looks clean in a demo gets complicated fast once it talks to identity providers, ticketing systems, cloud control planes, and internal governance tools. MCP does the same thing for agents. It makes them more useful, and more dangerous, at the same time.</p><p>The worst pattern is direct trust plus static credentials. If the agent can call a remote MCP server with persistent tokens and broad permissions, you have effectively created an unattended service account controlled by probabilistic reasoning. That is a bad design, even if the prompt layer looks polished.</p><p>A better pattern is a gateway model with short-lived credentials, centralized policy checks, and on-behalf-of identity flow. In plain English: the agent should never be more powerful than the person using it. If Markus does not have permission to trigger a production action, the agent acting for Markus should not have it either. That sounds obvious, but many current integrations still fail that basic rule.</p><h2>Prompts, modes, and tool configs are now code</h2><p>Another shift many teams still underestimate: prompts and agent configuration now belong inside your engineering governance model.</p><p>If a custom mode changes what the agent is allowed to do, that mode is not just UX. It is policy. If a prompt changes how an agent handles secrets, external content, or approvals, that prompt is not just copy. It is executable behavior in the broad sense. If an MCP config enables auto-approval for a write-capable tool, that JSON file is part of your risk model.</p><p>Senior Java teams already know how to govern code. Review it. Version it. Test it. Track who changed what and why. The same mindset needs to apply here.</p><p>Treat prompts, rules, and integration definitions like first-class artifacts. Put them in source control. Review them. Change them intentionally. Audit them when incidents happen. 
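</p><p>As one concrete illustration, a versioned agent-tool configuration can be linted in CI like any other policy file. The sketch below is plain Java against a made-up properties-style config format; the tool names, keys, and file layout are assumptions for illustration, not the format of any specific product.</p>

```java
// Hypothetical CI guard: flag any write-capable tool that a versioned
// agent config auto-approves. Tool names and the config format are
// invented for this sketch.
public class AgentConfigGuard {

    static final String[] WRITE_TOOLS = { "shell.exec", "filesystem.write", "deploy.trigger" };

    // Returns a comma-separated list of write-capable tools the config auto-approves.
    static String violations(String[] configLines) {
        StringBuilder hits = new StringBuilder();
        for (String raw : configLines) {
            String line = raw.trim();
            // Only lines shaped like "tool.NAME.autoApprove=true" are relevant.
            if (!line.startsWith("tool.") || !line.endsWith(".autoApprove=true")) continue;
            String tool = line.substring("tool.".length(), line.indexOf(".autoApprove"));
            for (String writeTool : WRITE_TOOLS) {
                if (writeTool.equals(tool)) {
                    if (hits.length() != 0) hits.append(",");
                    hits.append(tool);
                }
            }
        }
        return hits.toString();
    }

    public static void main(String[] args) {
        String[] config = {
            "tool.docs.search.autoApprove=true",      // read-only tool: acceptable
            "tool.shell.exec.autoApprove=true",       // write-capable: should fail review
            "tool.filesystem.write.autoApprove=false" // still behind approval: fine
        };
        System.out.println(violations(config)); // prints shell.exec
    }
}
```

<p>In a pull request, flipping a write-capable tool to auto-approve then shows up as a failing check instead of a silent settings change. 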
This is not optional anymore.</p><h2>What this means for Java teams right now</h2><p>The practical takeaway is simple.</p><p>Do not evaluate coding agents only on code quality. Evaluate them on containment quality.</p><p>Ask different questions. What happens when the agent reads poisoned content? What can it execute without approval? What files can it see by default? Can it reach the public internet freely? Are credentials short-lived? Are tool invocations logged? Can you roll back generated changes quickly? Does the tool respect user identity, or does it operate with its own standing privileges?</p><p>These are architecture questions. They belong in the same room as platform engineering, security, and developer productivity. This is not a frontend toggle in an IDE settings page.</p><p>I think this is the real maturity test for AI-assisted development in the enterprise. The winners will not be the teams that gave the model the most freedom. They will be the teams that gave it enough freedom to be useful and enough boundaries to fail safely.</p><h2>Conclusion</h2><p>AI coding agents are becoming part of the delivery stack. That part is already happening. The open question is whether we treat them like clever autocomplete or like privileged runtime actors. For enterprise Java teams, the answer needs to be the second one. Once an agent can read, write, execute, and integrate, the security model changes. The job is no longer to trust the model. 
The job is to contain it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Lock Down `PanacheEntityResource` Without Throwing Away Codegen]]></title><description><![CDATA[REST Data Panache gives you CRUD endpoints for free. Since Quarkus 3.31 you can secure those generated operations directly on the interface, without writing wrapper resources.]]></description><link>https://www.the-main-thread.com/p/quarkus-permissions-allowed-rest-data-panache-crud</link><guid isPermaLink="false">https://www.the-main-thread.com/p/quarkus-permissions-allowed-rest-data-panache-crud</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Sat, 11 Apr 2026 06:08:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3bb7a8d4-7732-4fda-a759-09934eac9a7f_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Generated CRUD endpoints are great until you need real security. In the early demo phase, <code>PanacheEntityResource</code> is a nice shortcut. You define an entity, expose one interface, and Quarkus generates the REST layer for you. The problem starts when your API stops being a demo and turns into something that different users should access in different ways.</p><p>Most developers fix that by giving up the generated endpoint and writing a JAX-RS resource by hand. They add <code>@RolesAllowed</code>, copy the CRUD methods, and slowly rebuild what the framework already gave them. The generated endpoint still saves some time in the first hour, but after that you are back in boilerplate land. 
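</p><p>That hand-written fallback usually looks something like the following sketch. It is illustrative only: the role names are invented and none of this code is part of the application we build in this tutorial.</p>

```java
// Illustrative only: rebuilding generated CRUD by hand just to attach
// @RolesAllowed. Role names and paths are invented for this sketch.
import jakarta.annotation.security.RolesAllowed;
import jakarta.transaction.Transactional;
import jakarta.ws.rs.DELETE;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;

@Path("/shipment")
public class SecuredShipmentResource {

    @GET
    @Path("/{id}")
    @RolesAllowed("shipment-read")
    public Shipment get(@PathParam("id") Long id) {
        return Shipment.findById(id);
    }

    @DELETE
    @Path("/{id}")
    @RolesAllowed("shipment-admin")
    @Transactional
    public void delete(@PathParam("id") Long id) {
        Shipment.deleteById(id);
    }

    // ...plus list, create, and update, all rebuilt the same way
}
```

<p>Every method here re-declares routing the generator already produced, just to get a place to hang the security annotation. 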
The convenience stops when production requirements kick in.</p><p>This matters because security is not a decoration you add later. Once your service handles real data, &#8220;everyone can call every generated method&#8221; is a production bug. A read-only user should not delete records. A service account that can ingest data should not automatically be able to rewrite historical state. If you do not draw those boundaries clearly, your API breaks at the authorization layer. The persistence layer cannot replace explicit access rules.</p><p><a href="https://github.com/quarkusio/quarkus/releases/tag/3.31.1">Quarkus 3.31</a> fixed a missing piece here: you can put <code>@PermissionsAllowed</code> on the REST Data Panache interface methods that Quarkus generates. The security rule lives where the operation is declared. You keep the code generation and you still get fine-grained access control per operation.</p><p>In this tutorial, we&#8217;ll build a small SwiftShip-style service with a generated <code>Shipment</code> endpoint. We&#8217;ll secure read operations with <code>shipment:read</code> and write operations with <code>shipment:admin</code>, using <strong>Keycloak client scopes</strong> and the token endpoint <code>scope</code> parameter so Quarkus OIDC maps granted scopes to <code>@PermissionsAllowed</code>. We&#8217;ll use Quarkus Dev Services to start PostgreSQL and Keycloak for us, and we&#8217;ll verify the behavior with real tokens and <code>curl</code> calls. By the end, you&#8217;ll have an end-to-end example that works locally and shows exactly where the permission boundary lives.</p><h2><strong>Prerequisites</strong></h2><p>You do not need a large setup for this tutorial, but you do need the usual Quarkus local development tools. 
We assume you are comfortable reading REST endpoints, editing <code>application.properties</code>, and testing secured APIs with bearer tokens.</p><ul><li><p>Java 21 installed</p></li><li><p>Quarkus CLI installed</p></li><li><p>Podman installed</p></li><li><p><code>jq</code> installed for token extraction in shell commands</p></li><li><p>Basic understanding of REST and OpenID Connect (OIDC)</p></li></ul><h2><strong>Project Setup</strong></h2><p>Let&#8217;s create the project:</p><pre><code><code>quarkus create app dev.myfear.swiftship:permissions-demo \
  --extension='hibernate-orm-rest-data-panache,rest-jackson,jdbc-postgresql,oidc,smallrye-openapi' \
  --no-code
cd permissions-demo</code></code></pre><p>What these extensions do:</p><ul><li><p><code>hibernate-orm-rest-data-panache</code> &#8212; wires Hibernate ORM Panache with REST Data Panache (pulls in <code>rest-data-panache</code> and <code>hibernate-orm-panache</code>) and generates CRUD endpoints from your resource interface</p></li><li><p><code>rest-jackson</code> &#8212; registers the <em>Quarkus REST</em> (JAX-RS) stack with Jackson; without a REST extension, the generated resource would not be mounted and you would see <code>404</code> on <code>/shipment</code></p></li><li><p><code>jdbc-postgresql</code> &#8212; gives us PostgreSQL connectivity and Dev Services support</p></li><li><p><code>oidc</code> &#8212; integrates the application with Keycloak for bearer token authentication</p></li><li><p><code>smallrye-openapi</code> &#8212; exposes the generated endpoints in OpenAPI, which is useful for verification</p></li></ul><p>If you use Maven, add <code>io.rest-assured:rest-assured</code> (scope <code>test</code>) next to <code>quarkus-junit5</code> for the optional test at the end. The Quarkus BOM (bill of materials) manages the RestAssured version when you omit an explicit version on that dependency.</p><p>Now create the package structure:</p><pre><code><code>mkdir -p src/main/java/dev/myfear/swiftship
mkdir -p src/main/resources
mkdir -p src/test/java/dev/myfear/swiftship</code></code></pre><h2><strong>Implementing the Shipment entity</strong></h2><p>We start with the entity because REST Data Panache generates the endpoint from the data model. Keep it simple. We do not need relationships, validation groups, or DTO mapping here. We want the security behavior to stay visible.</p><p>Create <code>src/main/java/dev/myfear/swiftship/Shipment.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;1b80e5b7-0c99-4f55-914f-52d3883b2b2c&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package dev.myfear.swiftship;

import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Table;

@Entity
@Table(name = "shipment")
public class Shipment extends PanacheEntity {

    @Column(name = "tracking_number")
    public String trackingNumber;

    @Column(name = "destination")
    public String destination;

    @Column(name = "status")
    public String status;
}</code></pre></div><p>This class gives us exactly what we need. <code>PanacheEntity</code> provides the generated numeric <code>id</code>, and the three fields are enough to test list, get, create, update, and delete operations. We map the table to <code>shipment</code> and columns to <em>snake_case</em> so <code>import.sql</code> matches PostgreSQL reliably; JSON responses still use the Java property names (<code>trackingNumber</code>, and so on).</p><p>The important limit here is also obvious: this entity does not protect anything by itself. It defines persistence shape. Authorization is a separate concern. If an endpoint exists for this entity and no security rule blocks access, the entity will happily be returned, inserted, updated, or deleted. That is why the next step matters.</p><h2><strong>Implementing the generated resource with permissions</strong></h2><p>Here&#8217;s the core of the tutorial. We declare the generated CRUD contract and put permission annotations on the methods we want to secure. There is still no implementation class. Quarkus generates the JAX-RS endpoint during the build.</p><p>Create <code>src/main/java/dev/myfear/swiftship/ShipmentResource.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;e160a013-63bb-41da-9b4b-2ba87738994e&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package dev.myfear.swiftship;

import java.util.List;

import io.quarkus.hibernate.orm.rest.data.panache.PanacheEntityResource;
import io.quarkus.panache.common.Page;
import io.quarkus.panache.common.Sort;
import io.quarkus.security.PermissionsAllowed;

public interface ShipmentResource extends PanacheEntityResource&lt;Shipment, Long&gt; {

    @Override
    @PermissionsAllowed("shipment:read")
    List&lt;Shipment&gt; list(Page page, Sort sort);

    @Override
    @PermissionsAllowed("shipment:read")
    long count();

    @Override
    @PermissionsAllowed("shipment:read")
    Shipment get(Long id);

    @Override
    @PermissionsAllowed("shipment:admin")
    Shipment add(Shipment shipment);

    @Override
    @PermissionsAllowed("shipment:admin")
    Shipment update(Long id, Shipment shipment);

    @Override
    @PermissionsAllowed("shipment:admin")
    boolean delete(Long id);
}
</code></pre></div><p>This is the whole trick. <code>PanacheEntityResource</code> extends <code>RestDataResource</code>; the <em>default</em> method signatures use <code>List</code>, <code>Page</code>, and <code>Sort</code> for listing, return <code>Shipment</code> from <code>add</code>, <code>boolean</code> from <code>delete</code>, and expose <code>count()</code> as a separate read operation. We redeclare those methods as abstract overrides only to add annotations. We do not write a resource class, and we do not call the database ourselves. Quarkus reads this interface at build time and generates the REST endpoint with the security checks attached. Use the <code>io.quarkus.hibernate.orm.rest.data.panache.PanacheEntityResource</code> import for the Hibernate ORM variant.</p><p>What does this guarantee? It guarantees that generated read operations (including <code>list</code>, <code>get</code>, and <code>count</code>) require <code>shipment:read</code>, and generated write operations require <code>shipment:admin</code>, once the identity carries those permissions&#8212;for this tutorial, via OIDC access-token <strong>scopes</strong>. It does not add record-level rules by itself. If a caller presents a token with <code>shipment:read</code>, they can read every shipment exposed by these methods. This is operation-level authorization. Tenant isolation needs extra work in your queries.</p><p>Here <code>@PermissionsAllowed</code> fits better than <code>@RolesAllowed</code>. A permission like <code>shipment:admin</code> is an application capability. A role like <code>shipment-admin</code> is one identity-provider-specific assignment. The permission stays stable in your code. The role mapping can change in configuration.</p><h2><strong>Configuring Quarkus, PostgreSQL, and OpenID Connect (OIDC)</strong></h2><p>Now we wire the application to PostgreSQL and Keycloak. In dev mode, Quarkus Dev Services starts both containers for us. 
We only need to describe the application behavior.</p><p>Create <code>src/main/resources/application.properties</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;8289a157-7b58-4792-9fab-7166d2d92f2e&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">quarkus.datasource.db-kind=postgresql
quarkus.hibernate-orm.schema-management.strategy=drop-and-create

quarkus.oidc.application-type=service
quarkus.oidc.client-id=swiftship
quarkus.oidc.credentials.secret=secret

quarkus.keycloak.devservices.realm-path=quarkus-realm.json
# Dev Services picks a random host port by default; pin 8180 so the manual curl examples below match.
quarkus.keycloak.devservices.port=8180

# The Keycloak container clock and the host JVM clock can drift; without leeway, short-lived
# tokens fail exp validation and every Bearer call returns 401, even with a fresh access token.
%test.quarkus.oidc.token.lifespan-grace=600</code></pre></div><p>These are intentionally small settings. <code>quarkus.keycloak.devservices.port=8180</code> fixes the Keycloak container to a known host port. If you omit it, Quarkus picks a <strong>random</strong> mapped port (<a href="https://quarkus.io/guides/security-openid-connect-dev-services">Dev Services for OIDC</a>); the <code>curl</code> snippets below assume <code>8180</code>, so without this property your token request hits the wrong port. <code>quarkus.hibernate-orm.schema-management.strategy=drop-and-create</code> replaces the older <code>quarkus.hibernate-orm.database.generation</code> property (deprecated from Quarkus 3.23 onward). <code>drop-and-create</code> is fine for local development because it makes the tutorial repeatable. It is not a production choice. In production, this would destroy state on every restart. There you would use schema migration with Flyway or Liquibase.</p><p>The OpenID Connect settings tell Quarkus to validate bearer tokens for a service-style API. We are not building a browser login flow here. We want access tokens you get from Keycloak, sent in the <code>Authorization</code> header.</p><h3><strong>OAuth2 scopes in the access token and </strong><code>@PermissionsAllowed</code></h3><p><code>@PermissionsAllowed</code> checks <code>java.security.Permission</code> instances on the <code>SecurityIdentity</code> (by default <code>StringPermission</code>), not only JWT <strong>roles</strong>.</p><p>For OIDC bearer tokens, Quarkus maps the access token&#8217;s <code>scope</code><strong> claim</strong> into those permissions: each space-separated scope string becomes a permission. As with role-based examples elsewhere in the docs, a value that contains a colon is parsed into <strong>permission name</strong> and <strong>action</strong>. 
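</p><p>A simplified sketch of that parsing (my own illustration in plain Java, not the actual Quarkus implementation):</p>

```java
// Sketch only: models how the space-separated "scope" claim becomes
// name:action permissions that @PermissionsAllowed is checked against.
public class ScopeMappingSketch {

    // Does any granted scope satisfy the required permission string?
    static boolean allowed(String scopeClaim, String required) {
        for (String scope : scopeClaim.split(" ")) {
            if (implies(scope, required)) {
                return true;
            }
        }
        return false;
    }

    // "shipment:read" -> name "shipment", action "read". Names must match;
    // if the requirement names an action, the grant must carry that action.
    static boolean implies(String granted, String required) {
        String[] g = granted.split(":", 2);
        String[] r = required.split(":", 2);
        if (!g[0].equals(r[0])) {
            return false;
        }
        return r.length == 1 || (g.length == 2 && g[1].equals(r[1]));
    }

    public static void main(String[] args) {
        System.out.println(allowed("shipment:read", "shipment:read"));                 // true
        System.out.println(allowed("shipment:read", "shipment:admin"));                // false
        System.out.println(allowed("shipment:read shipment:admin", "shipment:admin")); // true
    }
}
```

<p>In the real stack this is <code>StringPermission</code> matching on the <code>SecurityIdentity</code>; the sketch only mirrors the name/action split described above.</p><p>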
So scope <code>shipment:read</code> matches <code>@PermissionsAllowed("shipment:read")</code>, and <code>shipment:admin</code> matches the admin operations.</p><p>In this tutorial, <strong>Keycloak client scopes</strong> supply those strings. We attach <code>shipment:read</code> and <code>shipment:admin</code> as <strong>optional</strong> client scopes on the <code>swiftship</code> client. At the token endpoint, the client passes the desired scopes in the <code>scope</code> form field (space-separated). Keycloak returns an access token whose <code>scope</code> lists what was granted; Quarkus turns that into permissions&#8212;<strong>no</strong> <code>SecurityIdentityAugmentor</code> and <strong>no</strong> realm roles on the users.</p><p><strong>Minimal realm caveat:</strong> the JSON below only defines our two custom client scopes. It does <strong>not</strong> include Keycloak&#8217;s built-in <code>openid</code>, <code>profile</code>, and <code>email</code> scope definitions. Requesting those together with <code>shipment:read</code> would yield <code>invalid_scope</code> until you add the usual OIDC client scopes to the realm. For this walkthrough, request <strong>only</strong> application scopes&#8212;for example <code>scope=shipment:read</code> for Alice, or <code>scope=shipment:read shipment:admin</code> for Bob.</p><p>The client sets <code>fullScopeAllowed</code><strong>: </strong><code>false</code> so only assigned scopes are valid; optional scopes must be requested explicitly.</p><h2><strong>Configuring the Keycloak realm</strong></h2><p>We need two users (passwords only&#8212;no realm roles for capabilities). Whether Alice or Bob can read or admin shipments is determined by which scopes the <strong>token request</strong> asks for. 
Dev Services imports this realm automatically.</p><p>Create <code>src/main/resources/quarkus-realm.json</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;json&quot;,&quot;nodeId&quot;:&quot;9eff65e8-54e3-4f11-a774-dd93b6fd9d94&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-json">{
  "realm": "quarkus",
  "enabled": true,
  "clientScopes": [
    {
      "name": "shipment:read",
      "protocol": "openid-connect",
      "attributes": {
        "include.in.token.scope": "true",
        "display.on.consent.screen": "false"
      }
    },
    {
      "name": "shipment:admin",
      "protocol": "openid-connect",
      "attributes": {
        "include.in.token.scope": "true",
        "display.on.consent.screen": "false"
      }
    }
  ],
  "clients": [
    {
      "clientId": "swiftship",
      "enabled": true,
      "publicClient": false,
      "secret": "secret",
      "directAccessGrantsEnabled": true,
      "standardFlowEnabled": true,
      "serviceAccountsEnabled": false,
      "fullScopeAllowed": false,
      "optionalClientScopes": [
        "shipment:read",
        "shipment:admin"
      ]
    }
  ],
  "users": [
    {
      "username": "alice",
      "enabled": true,
      "emailVerified": true,
      "credentials": [
        { "type": "password", "value": "alice" }
      ]
    },
    {
      "username": "bob",
      "enabled": true,
      "emailVerified": true,
      "credentials": [
        { "type": "password", "value": "bob" }
      ]
    }
  ]
}</code></pre></div><p>Alice and Bob are equivalent in Keycloak; <strong>curl</strong> (or your client) chooses <code>scope=</code> per call. That keeps the demo focused on <code>@PermissionsAllowed</code> and token scopes. In production you would tie scope issuance to user attributes, client policies, or authorization services&#8212;not ad-hoc scope strings from the client unless you trust that caller.</p><h2><strong>Loading test data</strong></h2><p>A CRUD API without data does not tell us much. We seed two rows on startup so the read endpoints have something to return immediately.</p><p>Create <code>src/main/resources/import.sql</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;sql&quot;,&quot;nodeId&quot;:&quot;e276273d-6599-412e-820f-00065af0929a&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-sql">INSERT INTO shipment (id, tracking_number, destination, status) VALUES (1, 'SWS-001', 'Berlin', 'IN_TRANSIT');
INSERT INTO shipment (id, tracking_number, destination, status) VALUES (2, 'SWS-002', 'Amsterdam', 'DELIVERED');</code></pre></div><p>This script matches the <code>shipment</code> table and <code>@Column</code> names from the entity. It works with our <code>drop-and-create</code> dev setup. On each restart, the schema is recreated and the same two shipments appear again.</p><p>You get deterministic verification. Safe production seeding is a different problem. <code>import.sql</code> is useful for tests, demos, and tutorials. It is not how you manage production reference data.</p><h2><strong>Starting the application</strong></h2><p>Start the application in dev mode:</p><pre><code><code>quarkus dev</code></code></pre><p>If you generated the project with Maven and the wrapper, the same thing is:</p><pre><code><code>./mvnw quarkus:dev</code></code></pre><p>Quarkus now starts the application, a PostgreSQL container, and a Keycloak container. Wait until startup finishes and then open the Dev UI if you want to inspect the running services: <a href="http://localhost:8080/q/dev-ui">Dev UI</a></p><p>You can also inspect the OpenAPI document to confirm the generated shipment endpoint exists: <a href="http://localhost:8080/q/openapi">OpenAPI</a></p><h2><strong>Verification</strong></h2><p>Let&#8217;s prove the behavior with real requests.</p><h3><strong>Get a token for Alice</strong></h3><p>With <code>quarkus dev</code><strong> already running</strong> (and Keycloak reachable on <code>8180</code> when <code>quarkus.keycloak.devservices.port</code> is set as below), open a second terminal and request a token:</p><pre><code><code>export ALICE_TOKEN=$(curl -s -X POST \
  http://localhost:8180/realms/quarkus/protocol/openid-connect/token \
  -d "client_id=swiftship" \
  -d "client_secret=secret" \
  -d "username=alice" \
  -d "password=alice" \
  -d "grant_type=password" \
  -d "scope=shipment:read" | jq -r '.access_token')</code></code></pre><p>To confirm the token exists:</p><pre><code><code>echo $ALICE_TOKEN | cut -c1-40</code></code></pre><p>You should see the first part of a JSON Web Token (JWT). If that line is blank or shows <code>null</code>, the token call failed: run the same <code>curl</code> <strong>without</strong> <code>-s</code> (or add <code>-S</code>) so errors are visible, confirm <code>jq</code> is installed, and confirm Keycloak is really on <strong>8180</strong> (startup log, Dev UI, or the <code>quarkus.oidc.auth-server-url</code> value Quarkus printed). If you removed <code>quarkus.keycloak.devservices.port</code>, replace <code>8180</code> in the URL with whatever host port Dev Services mapped for Keycloak.</p><h3><strong>Alice can list shipments</strong></h3><p>Call the generated list endpoint:</p><pre><code><code>curl -s \
  -H "Authorization: Bearer $ALICE_TOKEN" \
  http://localhost:8080/shipment | jq .</code></code></pre><p>Expected output:</p><pre><code><code>[
  {
    "id": 1,
    "trackingNumber": "SWS-001",
    "destination": "Berlin",
    "status": "IN_TRANSIT"
  },
  {
    "id": 2,
    "trackingNumber": "SWS-002",
    "destination": "Amsterdam",
    "status": "DELIVERED"
  }
]</code></code></pre><p>This verifies that <code>@PermissionsAllowed("shipment:read")</code> on the list operation is enforced and satisfied for Alice.</p><h3><strong>Alice can get one shipment</strong></h3><pre><code><code>curl -s \
  -H "Authorization: Bearer $ALICE_TOKEN" \
  http://localhost:8080/shipment/1 | jq .</code></code></pre><p>Expected output:</p><pre><code><code>{
  "id": 1,
  "trackingNumber": "SWS-001",
  "destination": "Berlin",
  "status": "IN_TRANSIT"
}</code></code></pre><h3><strong>Alice cannot delete</strong></h3><pre><code><code>curl -i -X DELETE \
  -H "Authorization: Bearer $ALICE_TOKEN" \
  http://localhost:8080/shipment/1</code></code></pre><p>Expected output:</p><pre><code><code>HTTP/1.1 403 Forbidden</code></code></pre><p>This is the critical check. Alice is authenticated, so this is not a <code>401</code>. She is blocked because she lacks <code>shipment:admin</code>, so the correct response is <code>403 Forbidden</code>.</p><h3><strong>Get a token for Bob</strong></h3><p>Now request a token for the admin user:</p><pre><code><code>export BOB_TOKEN=$(curl -s -X POST \
  http://localhost:8180/realms/quarkus/protocol/openid-connect/token \
  -d "client_id=swiftship" \
  -d "client_secret=secret" \
  -d "username=bob" \
  -d "password=bob" \
  -d "grant_type=password" \
  -d "scope=shipment:read shipment:admin" | jq -r '.access_token')</code></code></pre><h3><strong>Bob can delete</strong></h3><pre><code><code>curl -i -X DELETE \
  -H "Authorization: Bearer $BOB_TOKEN" \
  http://localhost:8080/shipment/1</code></code></pre><p>Expected output:</p><pre><code><code>HTTP/1.1 204 No Content</code></code></pre><p>Now list the shipments again:</p><pre><code><code>curl -s \
  -H "Authorization: Bearer $BOB_TOKEN" \
  http://localhost:8080/shipment | jq .</code></code></pre><p>Expected output:</p><pre><code><code>[
  {
    "id": 2,
    "trackingNumber": "SWS-002",
    "destination": "Amsterdam",
    "status": "DELIVERED"
  }
]</code></code></pre><p>That confirms the delete operation really ran. Shipment 1 is gone from the list.</p><h3><strong>Unauthenticated requests fail</strong></h3><p>Finally, call the endpoint without a token:</p><pre><code><code>curl -i http://localhost:8080/shipment</code></code></pre><p>Expected output:</p><pre><code><code>HTTP/1.1 401 Unauthorized</code></code></pre><p>That verifies the full security flow. No token means authentication fails before the permission layer is even evaluated.</p><h2><strong>Optional integration test</strong></h2><p>Manual <code>curl</code> verification is useful. You still want automated checks in the codebase. The tests below assert anonymous access is rejected, a reader can load <code>GET /shipment/1</code>, and the same reader cannot delete.</p><p>Create <code>src/test/java/dev/myfear/swiftship/ShipmentSecurityTest.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;c82369e7-7bcb-406b-93dd-231d3ce0e451&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package dev.myfear.swiftship;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.junit.jupiter.api.Test;

import io.quarkus.test.junit.QuarkusTest;
import io.restassured.http.ContentType;

@QuarkusTest
class ShipmentSecurityTest {

    @ConfigProperty(name = "quarkus.oidc.auth-server-url")
    String authServerUrl;

    @Test
    void anonymousUserCannotListShipments() {
        given()
            .when().get("/shipment")
            .then()
            .statusCode(401);
    }

    @Test
    void readerCanGetShipmentById() {
        String token = accessToken("alice", "alice", "shipment:read");

        given()
            .header("Authorization", "Bearer " + token)
            .when().get("/shipment/1")
            .then()
            .statusCode(200)
            .body("id", equalTo(1))
            .body("trackingNumber", equalTo("SWS-001"))
            .body("destination", equalTo("Berlin"))
            .body("status", equalTo("IN_TRANSIT"));
    }

    @Test
    void readerCannotDeleteShipment() {
        String token = accessToken("alice", "alice", "shipment:read");

        given()
            .header("Authorization", "Bearer " + token)
            .when().delete("/shipment/1")
            .then()
            .statusCode(403);
    }
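
    // Added boundary check (extra example, not part of the original tutorial):
    // a token that carries only shipment:admin must not satisfy shipment:read.
    // With StringPermission semantics, the admin action does not imply read.
    @Test
    void adminOnlyScopeCannotListShipments() {
        String token = accessToken("bob", "bob", "shipment:admin");

        given()
            .header("Authorization", "Bearer " + token)
            .when().get("/shipment")
            .then()
            .statusCode(403);
    }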

    private String accessToken(String username, String password, String scope) {
        String tokenUrl = authServerUrl + "/protocol/openid-connect/token";
        return given()
            .contentType(ContentType.URLENC)
            .formParam("client_id", "swiftship")
            .formParam("client_secret", "secret")
            .formParam("username", username)
            .formParam("password", password)
            .formParam("grant_type", "password")
            .formParam("scope", scope)
            .when()
            .post(tokenUrl)
            .then()
            .statusCode(200)
            .extract()
            .path("access_token");
    }
}
</code></pre></div><p>The positive read test depends on requesting <code>shipment:read</code> in <code>scope</code>. If you omit <code>scope</code> or request scopes the client is not allowed to use, you get <code>invalid_scope</code> at the token endpoint, or <code>403</code> on the API when the token lacks the right permissions.</p><p>This suite does not cover every path, but it pins the two boundaries that matter for this API: readers can read, and readers cannot delete.</p><p>In <code>@QuarkusTest</code>, Keycloak Dev Services often uses a <em>random</em> host port unless you set <code>quarkus.keycloak.devservices.port</code>. The test code builds the token URL from <code>quarkus.oidc.auth-server-url</code>, so it stays correct. The <code>application.properties</code> in this tutorial pins <code>8180</code> for <strong>dev</strong> so the <code>curl</code> examples match; for tests you can add e.g. <code>%test.quarkus.keycloak.devservices.port=&#8230;</code> if you want a fixed port there too.</p><p>If Bearer requests in tests return <code>401</code> even with a token you just got, check <code>exp</code>: Keycloak&#8217;s container clock and the host JVM can skew enough that short-lived access tokens look expired to Quarkus. For the <strong>test</strong> profile only, add something like <code>%test.quarkus.oidc.token.lifespan-grace=600</code> (seconds of leeway on expiry) in <code>application.properties</code> so <code>./mvnw verify</code> stays stable. Do not treat that as a production setting.</p><p>Run the test with <code>./mvnw verify</code> (or your IDE&#8217;s JUnit runner). 
That starts PostgreSQL and Keycloak via Dev Services, so Podman (or a compatible container runtime) must be available.</p><h2><strong>Production Hardening</strong></h2><h3><strong>What happens under load</strong></h3><p>The endpoint generation does not change how authorization behaves under concurrency. Every incoming request still goes through authentication, identity resolution, and permission checks before the CRUD operation runs. So you do not get a &#8220;fast path&#8221; around security just because the endpoint is generated.</p><p>The important part is that this only protects the operation boundary. If 500 valid admin requests call <code>DELETE /shipment/{id}</code> at once, the permission layer does not serialize them. It only decides whether each caller is allowed to try. Database correctness is still handled by your persistence model and transaction boundaries.</p><h3><strong>What this does not protect</strong></h3><p><code>@PermissionsAllowed(&quot;shipment:read&quot;)</code> protects the entire <code>list</code>, <code>get</code>, and <code>count</code> operations. It does not protect fields inside the response. If your entity contains sensitive columns such as internal cost data, this approach does not hide them from authorized readers. For that, you need data transfer objects (DTOs), projections, or response filtering.</p><p>The same is true for tenant isolation. 
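</p><p>For example (a sketch only: it assumes a hypothetical <code>customerId</code> column, which the <code>Shipment</code> entity in this tutorial does not have, and the Panache call mentioned in the comment is illustrative, not project code):</p>

```java
import java.util.List;

// Sketch: operation-level checks decide WHETHER a caller may list shipments,
// not WHICH rows come back. Row filtering must live in the query, e.g. with
// Panache something like: Shipment.list("customerId", tenantId).
// An in-memory stand-in keeps this example self-contained.
public class TenantFilterSketch {

    record ShipmentRow(long id, String customerId, String destination) {}

    // The tenant constraint is tied to the authenticated identity, not to scopes.
    static List<ShipmentRow> listForTenant(List<ShipmentRow> all, String tenantId) {
        return all.stream()
                .filter(s -> s.customerId().equals(tenantId))
                .toList();
    }

    public static void main(String[] args) {
        List<ShipmentRow> all = List.of(
                new ShipmentRow(1, "customer-a", "Berlin"),
                new ShipmentRow(2, "customer-b", "Amsterdam"));
        // A caller authenticated as customer-a sees only their own row, even
        // though shipment:read permits the list operation itself.
        System.out.println(listForTenant(all, "customer-a").size()); // 1
    }
}
```

<p>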
If user A should only see shipments for customer A, and user B should only see shipments for customer B, you need a query constraint tied to the authenticated identity on top of operation-level checks. The generated endpoint can still help, but your repository logic has to enforce that boundary.</p><h3><strong>Failure behavior matters</strong></h3><p>One good thing about declarative security on generated endpoints is consistency. You do not risk forgetting to add an annotation to one custom method because there is no custom method. The security rule sits right on the operation declaration.</p><p>I have seen the manual alternative fail in real projects. A team rewrites generated CRUD as a resource class to add authorization. Six months later, someone adds a &#8220;temporary&#8221; admin shortcut endpoint for a migration, forgets the annotation, and that endpoint survives the release. Generated code saves time. It also removes places where humans forget things.</p><h2><strong>Conclusion</strong></h2><p>We built a generated CRUD API for <code>Shipment</code>, secured it with <code>@PermissionsAllowed</code>, issued <strong>OAuth2 scopes</strong> from Keycloak client scopes via the token endpoint&#8217;s <code>scope</code> parameter, and let Quarkus OIDC map those scopes to permissions. PostgreSQL and Keycloak Dev Services back the app; <code>curl</code> and tests show read versus admin behavior. 
The security rule stays on the operation declaration; how callers get scopes in production is a separate policy concern.</p>]]></content:encoded></item><item><title><![CDATA[What Your Local LLM Actually Sees: Debugging Ollama Traffic in Quarkus with mitmproxy]]></title><description><![CDATA[Inspect real Ollama API payloads in Quarkus with mitmproxy. See tool overhead, prompt size, and what your local LLM actually gets.]]></description><link>https://www.the-main-thread.com/p/debug-ollama-traffic-quarkus-mitmproxy</link><guid isPermaLink="false">https://www.the-main-thread.com/p/debug-ollama-traffic-quarkus-mitmproxy</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Fri, 10 Apr 2026 06:08:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/390a838e-9c64-476d-a0a3-a55ad1a2286b_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You might think that local models are easier to debug because they run on the same machine as the application. You start Ollama, point your Java client at <code>localhost:11434</code>, get a response back, and assume the transport side is simple. That feeling lasts until the answers get worse, latency goes up, or a tool call starts doing strange things.</p><p>The model is only one part of the story. The full serialized request drives behavior too. Your Java code creates a clean interface method. The framework turns that into JSON. Then the model sees the final payload: system prompt, user message, tool schema, generation settings, and anything else your client sends. 
If that payload is too large or shaped differently than you expected, the model behaves differently. Application logs often miss that final shape.</p><p>This gets worse when you use OpenAI-compatible APIs. The same request format can target OpenAI, LiteLLM, or Ollama. That is good for portability, but it also makes it easy to ignore what is actually going over the wire. Ollama supports an OpenAI-compatible <code>/v1/chat/completions</code> endpoint on <code>http://localhost:11434/v1/</code>, and that makes it a very good local target for this kind of inspection. It also supports tools on that endpoint; see the <a href="https://docs.ollama.com/api/openai-compatibility">Ollama OpenAI compatibility documentation</a>.</p><p><code>mitmproxy</code> solves this problem by showing the real HTTP traffic. For local Ollama over plain HTTP, this is much simpler than the hosted HTTPS case. You do not need to trust a custom CA certificate for the main path in this tutorial, because we are not intercepting TLS here. We are just routing normal HTTP traffic through a local proxy. <code>mitmweb</code> runs the proxy on the listen port you choose and serves the web UI on <code>127.0.0.1:8081</code> by default; see <a href="https://docs.mitmproxy.org/stable/#mitmweb">mitmweb</a> in the mitmproxy documentation.</p><p>What follows is a small Quarkus application that talks to Ollama through its OpenAI-compatible endpoint. We route that traffic through <code>mitmproxy</code>, compare a plain request with a tool-enabled request, and inspect what really hits the model. The useful outcome is simple: you can see the same payload your model sees. 
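</p><p>To know what to look for before any capture: an OpenAI-style chat-completions request with one tool attached has roughly this shape (illustrative values only, not a capture from this project; the model and tool names here are invented):</p>

```json
{
  "model": "llama3.1",
  "messages": [
    { "role": "system", "content": "You are a concise software architecture assistant. ..." },
    { "role": "user", "content": "When should I split a monolith?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "exampleTool",
        "description": "Invented tool, for illustration only.",
        "parameters": {
          "type": "object",
          "properties": { "service": { "type": "string" } }
        }
      }
    }
  ]
}
```

<p>Every tool contributes its full JSON schema to every request, which is exactly the kind of overhead the proxy makes visible.</p><p>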
Quarkus LangChain4j supports named model configurations, AI services with <code>@RegisterAiService</code>, and tool integration with <code>@Tool</code>, so we keep the Java code small and still get a realistic payload on the wire; see <a href="https://docs.quarkiverse.io/quarkus-langchain4j/dev/ai-services.html">Quarkus LangChain4j AI services</a>.</p><h2><strong>Prerequisites</strong></h2><p>You need a local Java setup, a running Ollama installation, and <code>mitmproxy</code>. I assume you are comfortable with Quarkus REST endpoints and Maven, but I do not assume you already know the LangChain4j annotations used here.</p><ul><li><p>Java 21 or newer installed (validated with Java 25)</p></li><li><p>Quarkus CLI installed</p></li><li><p>Ollama installed locally</p></li><li><p><code>mitmproxy</code> installed locally (<code>brew install --cask mitmproxy</code> on macOS)</p></li><li><p>Basic understanding of REST endpoints</p></li></ul><h2><strong>Project Setup</strong></h2><p>Create the project or grab it from my Github repository.</p><pre><code><code>quarkus create app com.example:ollama-wiretap-demo \
  --package-name=com.example.ollamawiretap \
  --extension=rest-jackson,io.quarkiverse.langchain4j:quarkus-langchain4j-openai \
  --no-code</code></code></pre><p>We use <code>rest-jackson</code> because we want a simple JSON REST endpoint in Quarkus REST. We use <code>quarkus-langchain4j-openai</code> on purpose, even though the model is local. The reason is simple: Ollama exposes an OpenAI-compatible API, so this lets us inspect the same wire format many teams use against hosted providers later. </p><p>Change into the project directory:</p><pre><code><code>cd ollama-wiretap-demo</code></code></pre><h2><strong>Implementation</strong></h2><h3><strong>Create the request and response types</strong></h3><p>We start with two small records. They keep the REST endpoint simple, and they also make verification easier because the HTTP response shape stays stable even though the model output itself is not deterministic.</p><p>Create <code>src/main/java/com/example/ollamawiretap/PromptRequest.java</code>:</p><pre><code><code>package com.example.ollamawiretap;

public record PromptRequest(String question) {
}</code></code></pre><p>Create <code>src/main/java/com/example/ollamawiretap/PromptResponse.java</code>:</p><pre><code><code>package com.example.ollamawiretap;

public record PromptResponse(String mode, String answer, long durationMs) {
}</code></code></pre><p>This gives us a stable contract. The answer text changes from run to run. The <code>mode</code> and <code>durationMs</code> fields do not. That matters for AI verification. We do not test exact wording. We test that the call went through the expected path and that we can inspect the request that produced it.</p><h3><strong>Create a plain AI service</strong></h3><p>Create the first AI service next. This is our baseline. It has a short system prompt and no tools.</p><p>Create <code>src/main/java/com/example/ollamawiretap/PlainAssistant.java</code>:</p><pre><code><code>package com.example.ollamawiretap;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

@RegisterAiService
public interface PlainAssistant {

    @SystemMessage("""
            You are a concise software architecture assistant.
            Answer in no more than four sentences.
            Be concrete.
            """)
    String answer(@UserMessage String question);
}</code></code></pre><p>This interface is small, but it still routes through the OpenAI client configured in <code>application.properties</code>. We keep a single default model configuration so the proxy path is explicit and easy to validate; see <a href="https://docs.quarkiverse.io/quarkus-langchain4j/dev/ai-services.html">Quarkus LangChain4j AI services</a>.</p><p>The guarantee here is simple. Every call to <code>answer</code> becomes a chat-completions request. The limit is also simple. This does not tell you anything about payload size unless you inspect the traffic. The Java method hides the JSON. That is the whole problem we are solving.</p><h3><strong>Create a tool bean</strong></h3><p>Add a CDI bean with a tool method. The Quarkus LangChain4j AI services reference shows the <code>@Tool</code> pattern for function calling. We will use a tiny tool on purpose so the traffic difference is easy to understand; see <a href="https://docs.quarkiverse.io/quarkus-langchain4j/dev/ai-services.html">Quarkus LangChain4j AI services</a>.</p><p>Create <code>src/main/java/com/example/ollamawiretap/ArchitectureTools.java</code>:</p><pre><code><code>package com.example.ollamawiretap;

import dev.langchain4j.agent.tool.Tool;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class ArchitectureTools {

    @Tool("Return the current platform stack used by the application")
    public String currentStack() {
        return "Java, Quarkus, Ollama, mitmproxy";
    }
}</code></code></pre><p>This tool does almost nothing. That is fine for this tutorial. It makes the request larger and different on the wire. Once tools are available, the model call includes tool metadata. Teams often forget this overhead when they discuss context budgets.</p><h3><strong>Create a tool-enabled AI service</strong></h3><p>The second AI service keeps the same basic behavior, but with tool access enabled.</p><p>Create <code>src/main/java/com/example/ollamawiretap/ToolAssistant.java</code>:</p><pre><code><code>package com.example.ollamawiretap;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

@RegisterAiService(tools = {ArchitectureTools.class})
public interface ToolAssistant {

    @SystemMessage("""
            You are a concise software architecture assistant.
            Use tools when they help answer the question.
            Answer in no more than four sentences.
            Be concrete.
            """)
    String answer(@UserMessage String question);
}</code></code></pre><p>This is where the transport story gets interesting. The Java code barely changed. The request body did. That difference is invisible at the call site, but it is visible in mitmproxy.</p><h3><strong>Create the REST endpoint</strong></h3><p>Finish with a REST endpoint that lets us call either mode. We use a path parameter so we can compare <code>plain</code> and <code>tool</code> without changing code between runs.</p><p>Create <code>src/main/java/com/example/ollamawiretap/PromptResource.java</code>:</p><pre><code><code>package com.example.ollamawiretap;

import jakarta.inject.Inject;
import jakarta.ws.rs.BadRequestException;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/inspect")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public class PromptResource {

    @Inject
    PlainAssistant plainAssistant;

    @Inject
    ToolAssistant toolAssistant;

    @POST
    @Path("/{mode}")
    public PromptResponse inspect(@PathParam("mode") String mode, PromptRequest request) {
        long start = System.currentTimeMillis();

        String answer = switch (mode) {
            case "plain" -&gt; plainAssistant.answer(request.question());
            case "tool" -&gt; toolAssistant.answer(request.question());
            default -&gt; throw new BadRequestException("Mode must be 'plain' or 'tool'");
        };

        long duration = System.currentTimeMillis() - start;
        return new PromptResponse(mode, answer, duration);
    }
}</code></code></pre><p>This endpoint gives us a clean comparison point. Both requests come from the same application, hit the same Ollama server, and go through the same proxy. The only thing that changes is the AI service configuration.</p><p>Under stress, this code is still synchronous. That is fine for the tutorial. For higher throughput systems, you would care about concurrency, request queuing, and timeouts much more explicitly. But for inspecting payload shape, a blocking REST endpoint is the easiest thing to reason about at 2am.</p><h2><strong>Configuration</strong></h2><p>Configure the application in <code>src/main/resources/application.properties</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;dbb9edab-7fd2-47ed-bd94-d1d3c55b6a7a&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">quarkus.langchain4j.openai.base-url=http://127.0.0.1:11434/v1
quarkus.langchain4j.openai.api-key=ollama
quarkus.langchain4j.openai.proxy-type=HTTP
quarkus.langchain4j.openai.proxy-host=127.0.0.1
quarkus.langchain4j.openai.proxy-port=8888
quarkus.langchain4j.openai.chat-model.model-name=qwen2.5-coder:7b
quarkus.langchain4j.openai.chat-model.log-requests=true
quarkus.langchain4j.openai.chat-model.log-responses=true</code></pre></div><p><code>quarkus.langchain4j.openai.base-url</code> points the OpenAI client at Ollama&#8217;s OpenAI-compatible endpoint. I use <code>127.0.0.1</code> here to avoid localhost edge-cases around IPv6 and proxy bypass in some environments. </p><p><code>proxy-type</code>, <code>proxy-host</code>, and <code>proxy-port</code> route the OpenAI-compatible client through mitmproxy. The value <code>8888</code> is arbitrary, but it must be a port where mitmproxy is actually listening, and it must not be the same port Quarkus uses for its own HTTP server (<code>quarkus.http.port</code>). </p><p><code>chat-model.model-name=qwen2.5-coder:7b</code> selects the local Ollama model. The exact model is your choice. Use one that is already comfortable on your machine. <code>log-requests</code> and <code>log-responses</code> are useful here because they let you compare framework-level logging with the real wire capture. The logs help. The wire is still the source of truth.</p><p>This configuration gives you one clear guarantee. The Quarkus client will call Ollama through mitmproxy. It does not guarantee the model will use the tool on every request. Tool use is still model behavior, and that is probabilistic.</p><h2><strong>Running the Stack</strong></h2><p>While writing this article, the demo initially used the LangChain4j versions aligned by the Quarkus platform BOM alone. On that combination, the OpenAI client did not honor the proxy-related <code>application.properties</code> keys, so requests never reached mitmproxy even though the configuration looked correct. 
We had to bump the Quarkus LangChain4j stack: the companion <code>pom.xml</code> imports <code>io.quarkiverse.langchain4j:quarkus-langchain4j-bom</code> at <strong>1.8.4</strong>, which picks up the upstream fix in <a href="https://github.com/quarkiverse/quarkus-langchain4j/pull/2276">quarkus-langchain4j#2276</a>.</p><p><strong>Note:</strong> The <code>quarkus.langchain4j.openai.proxy-type</code>, <code>proxy-host</code>, and <code>proxy-port</code> settings from the Configuration section are the ones this walkthrough relies on. Match or exceed that BOM version (or a release that includes the same fix) so those properties actually configure the HTTP client; otherwise you may need the JVM <code>-Dhttp.proxyHost=...</code> workaround below even when the Quarkus keys are set.</p><p>Pull a model if you do not already have one:</p><pre><code><code>ollama pull qwen2.5-coder:7b</code></code></pre><p>Start Ollama:</p><pre><code><code>ollama serve</code></code></pre><p>Ollama&#8217;s OpenAI compatibility docs show the local base URL and the <code>/v1/chat/completions</code> endpoint shape; see the <a href="https://docs.ollama.com/api/openai-compatibility">Ollama OpenAI compatibility documentation</a>.</p><p>Now start mitmproxy:</p><pre><code><code>mitmweb --listen-port 8888</code></code></pre><p>This starts the proxy on port <code>8888</code> and the web UI on http://127.0.0.1:8081.</p><p>Important with newer mitmproxy releases: the web UI is protected by a one-time auth token. If you open http://127.0.0.1:8081/#/capture and get HTTP 403, check the <code>mitmweb</code> terminal output, copy the token/password shown there, and enter it in the browser form once for that session.</p><p>After that, start the Quarkus app:</p><pre><code><code>./mvnw quarkus:dev</code></code></pre><p>If requests still do not appear in mitmproxy, force JVM-level proxy settings for the Quarkus process:</p><pre><code><code>./mvnw quarkus:dev \
  -Dhttp.proxyHost=127.0.0.1 \
  -Dhttp.proxyPort=8888 \
  -Dhttps.proxyHost=127.0.0.1 \
  -Dhttps.proxyPort=8888 \
  -Dhttp.nonProxyHosts=</code></code></pre><p>At this point the runtime path is:</p><p><code>curl</code> &#8594; Quarkus on port <code>8080</code> &#8594; mitmproxy on port <code>8888</code> &#8594; Ollama on port <code>11434</code></p><p>That is the path we care about. There is no hidden gateway and no hosted provider. You are looking at the traffic leaving your Java application and arriving at your local model server.</p><h2><strong>Production Hardening</strong></h2><p>Keep mitmproxy as a development and incident tool, not as default production architecture. It stores prompts, tool definitions, and model responses in one more place. If those payloads contain internal data, customer data, or secrets, the proxy now becomes part of your data boundary.</p><p>Account for latency before you compare model performance. Even on localhost, the extra proxy hop adds overhead. If you benchmark with mitmproxy enabled and then benchmark without it, you can separate model latency from client serialization and proxy cost.</p><p>Treat transport mode as environment-specific. In this tutorial we use plain local HTTP, so there is no CA trust setup. For HTTPS targets, mitmproxy must terminate TLS and your client must trust the mitmproxy CA. That is a different security and ops posture; see <a href="https://docs.mitmproxy.org/stable/concepts/certificates/">About certificates</a> in the mitmproxy documentation.</p><p>Use this capture workflow when behavior changes after enabling tools or adding memory. At that point, inspect the JSON payload size and shape first, then tune prompts or tool design. This ties back to the opening problem: debugging model output without inspecting transport data is mostly guesswork.</p><h2><strong>Verification</strong></h2><h3><strong>Compare a plain request and a tool-enabled request</strong></h3><p>Send the plain request first:</p><pre><code><code>curl -s http://localhost:8080/inspect/plain \
  -H 'Content-Type: application/json' \
  -d '{"question":"Explain why large prompts increase latency."}'</code></code></pre><p>Expected response shape:</p><pre><code><code>{
  "mode": "plain",
  "answer": "...",
  "durationMs": 1234
}</code></code></pre><p>Now send the tool-enabled request:</p><pre><code><code>curl -s http://localhost:8080/inspect/tool \
  -H 'Content-Type: application/json' \
  -d '{"question":"What stack does this application use, and why does that matter for debugging?"}'</code></code></pre><p>Expected response shape:</p><pre><code><code>{
  "mode": "tool",
  "answer": "...",
  "durationMs": 1450
}</code></code></pre><p>We are not verifying exact wording. This is an AI system. We verify that both calls return JSON, that the <code>mode</code> field matches the endpoint used, and that each request creates a captured flow in mitmproxy.</p><h3><strong>Inspect the wire</strong></h3><p>Open the token URL printed by <code>mitmweb</code> (for example http://127.0.0.1:8081/?token=...) and use this filter:</p><pre><code><code>~u /v1/chat/completions</code></code></pre><p>In the capture UI, you should see something like this once requests are flowing:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8Ro6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8Ro6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png 424w, https://substackcdn.com/image/fetch/$s_!8Ro6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png 848w, https://substackcdn.com/image/fetch/$s_!8Ro6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png 1272w, https://substackcdn.com/image/fetch/$s_!8Ro6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!8Ro6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png" width="1456" height="975" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:975,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:470730,&quot;alt&quot;:&quot;Mitmproxy Screenshot&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/192298607?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Mitmproxy Screenshot" title="Mitmproxy Screenshot" srcset="https://substackcdn.com/image/fetch/$s_!8Ro6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png 424w, https://substackcdn.com/image/fetch/$s_!8Ro6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png 848w, https://substackcdn.com/image/fetch/$s_!8Ro6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png 1272w, 
https://substackcdn.com/image/fetch/$s_!8Ro6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65e234d1-f8e5-4774-9f75-d02fcffcafbb_3026x2026.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>You should see both requests. Open the plain request first. The body will contain <code>model</code>, a <code>messages</code> array, and your system and user messages. That is the real payload Ollama receives on the OpenAI-compatible endpoint. </p><p>Now open the tool-enabled request. The key difference is tool metadata in the JSON. 
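</p><p>To make the difference concrete, here is a sketch of the shape to expect in the captured request body. This is an illustrative OpenAI-style payload, not a verbatim capture from this app; the tool name and description come from the <code>@Tool</code> method, and the exact field layout can vary by client version.</p>

```json
{
  "model": "qwen2.5-coder:7b",
  "messages": [
    {"role": "system", "content": "You are a concise software architecture assistant. ..."},
    {"role": "user", "content": "What stack does this application use, and why does that matter for debugging?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "currentStack",
        "description": "Return the current platform stack used by the application",
        "parameters": {"type": "object", "properties": {}}
      }
    }
  ]
}
```

<p>Every tool you register adds an entry to that array on every request. 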
Your Java endpoint stays the same while the payload grows.</p><p>If the model decides to call the tool, you will also see the multi-step exchange. First the model asks for the tool. Then your application executes it. Then the tool result goes back to the model. That round-trip cost is easy to ignore when all you look at is one neat Java method call.</p><h3><strong>Compare mitmproxy with application logs</strong></h3><p>Because <code>log-requests</code> and <code>log-responses</code> are enabled, Quarkus will also log model traffic. This is useful, but it is still not the same as the wire capture. The logs are whatever the client library decided to print. Mitmproxy shows the actual request that crossed the proxy boundary. When the two differ, trust the wire.</p><h2><strong>Conclusion</strong></h2><p>We built a Quarkus app that calls a local Ollama model through the OpenAI-compatible endpoint, routed that traffic through <code>mitmproxy</code>, and compared plain vs tool-enabled payloads on the wire. 
The practical outcome is simple: when answers get weird, you can inspect the exact JSON the model received, instead of guessing from abstractions.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[AI Coding Tools in 2026: How to Work With Agents Without Losing Control]]></title><description><![CDATA[A Java engineer&#8217;s guide to AI coding agents, blast radius, guardrails, and staying in control in 2026.]]></description><link>https://www.the-main-thread.com/p/ai-coding-tools-2026-java-developers-agents-control</link><guid isPermaLink="false">https://www.the-main-thread.com/p/ai-coding-tools-2026-java-developers-agents-control</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Thu, 09 Apr 2026 06:08:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2360742c-2b66-49cb-a0d4-c5c06b9ae1e2_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you feel overwhelmed by AI coding tools right now, that is normal.</p><p>A year ago, autocomplete felt like progress. Today, tools read repositories, edit files, run commands, pull external context, and keep iterating until they decide the task is done. This is a different operating model for software development.</p><p>You still write code. You still design systems. But now you also steer software that changes software.</p><p>That sounds efficient until it edits faster than you can review, passes local tests, and still breaks something important. 
I have hit that wall enough times that I no longer ask, &#8220;Which tool is best?&#8221;</p><p>The question that matters is simpler:</p><p><strong>How much control do I keep while using it?</strong></p><p>That is the map I use now. Not a ranking. Not a hype list. A control map.</p><h2><strong>The Real Shift Is Blast Radius</strong></h2><p>People still talk about AI coding tools as productivity tools:</p><ul><li><p>Faster typing</p></li><li><p>Less boilerplate</p></li><li><p>Quicker prototypes</p></li></ul><p>That breaks once the system can inspect a repository, change multiple files, run commands, and retry on failure. At that point, your problem is blast radius.</p><p>You stop reviewing lines and start reviewing behavior. You stop asking &#8220;Did it write this function correctly?&#8221; and start asking &#8220;What else did it touch, what assumptions did it make, and how confident am I in the result?&#8221;</p><p>That is a bigger shift than most teams admit.</p><p>I have had agents produce changes that looked clean, compiled cleanly, and still carried wrong assumptions into the application. The issue was not that the model was useless. The issue was scope: I let it operate wider than my review model could safely absorb.</p><h2><strong>The Ladder: IDE, CLI, Generator</strong></h2><p>This space gets easier to reason about when you reduce it to three levels:</p><ol><li><p>IDE agents</p></li><li><p>CLI agents</p></li><li><p>Full app generators</p></li></ol><p>This is a control ladder, not a maturity ladder.</p><p>Higher does not mean better. Higher means broader autonomy and a larger blast radius.</p><p>An IDE agent usually works close to code you are already looking at. A CLI agent can operate at repository scope and execute directly through the terminal. 
A full app generator abstracts more and pushes you toward &#8220;describe what you want&#8221; over &#8220;review what changed.&#8221;</p><p>The mistake I see all the time is assuming more autonomy is automatically more advanced. It is not. It is just easier to lose track of what happened.</p><h2><strong>IDE: Where I Start With IBM Bob</strong></h2><p>If I introduce AI coding into a team, I do not start with the most autonomous system I can find. I start with the most governable one.</p><p>That is why I reach for IBM Bob.</p><p>Bob is not a lightweight sidebar assistant. IBM positions it as an AI SDLC partner and coding agent, and it can read and write files, run commands, and use external tools through MCP. That puts it in the real agent category.</p><p>What makes Bob interesting to me is workflow clarity. Autonomy is more explicit.</p><p>Bob ships with <a href="https://bob.ibm.com/docs/ide/features/modes">built-in modes</a> such as Ask, Plan, Code, Advanced, and Orchestrator. These are specialized personas with different capabilities and access levels. Teams can also define <a href="https://www.the-main-thread.com/p/ai-coding-modes-bob-quarkus-dev-mode">custom modes</a> to constrain behavior and tool access.</p><p>Ask and Plan keep exploration non-destructive. Code and Advanced move into implementation. Orchestrator is there for broader multi-step work. This separation helps new users, but the bigger value is governance: it creates an execution contract.</p><p>In larger teams, explicit phase boundaries are often more valuable than raw autonomy because they make review, approval, and intent visible.</p><p>Bob also gives you concrete control knobs. There is <code>.bobignore</code> for <a href="https://bob.ibm.com/docs/ide/features/checkpoints#bobignore-behavior">sensitive paths and large assets</a>, and it supports manual, auto, and hybrid approval models. 
I recommend leaving <a href="https://bob.ibm.com/docs/ide/features/auto-approving-actions">auto-approval disabled</a> when traceability matters so you can approve or deny commands as they happen.</p><p>That is exactly the surface I want when an agent starts touching a real codebase.</p><p>There is also <a href="https://bob.ibm.com/docs/ide/features/literate-coding">literate coding</a>, where you write intent next to code and generate implementation in place. IBM is clear this is single-file today and still a preview feature. I am fine with that because scoped edits are a safety feature while teams build review discipline.</p><p>And this distinction matters: scoped does not mean weak. Scoped means deliberate.</p><p>I would rather start with an environment that makes intent, permissions, and blast radius explicit than one that can mutate half the tree before I have a reliable review habit.</p><p>Other IDE tools can move fast across many files too. That is real. But speed without an operating model is where teams get sloppy.</p><h2><strong>CLI: Bob Shell, Claude Code, and Repository Scope</strong></h2><p>The next step up the ladder is the CLI.</p><p>This is where the agent stops feeling like an editor assistant and starts feeling like a repository operator.</p><p>IBM Bob extends into this space with <a href="https://bob.ibm.com/docs/shell">Bob Shell</a>. Claude Code is also a clear example of this category. Claude Code is documented as a terminal tool that edits files, runs commands, and operates across your project from the command line. Bob Shell pushes Bob&#8217;s workflow into terminal-driven tasks and automation.</p><p>This is maximum leverage for people who already think in systems, commands, and boundaries. It is also where things break fastest.</p><p>The terminal removes friction. That is the appeal. 
You describe a task, the system searches files, changes code, runs commands, and tries to close the loop.</p><p>It feels great until it does not.</p><p>Once an agent works naturally at repository scope, your architecture map becomes the real safety mechanism. If your mental model is weak, the tool exposes that weakness quickly. It can make broad, technically plausible changes faster than you can fully reason about them.</p><p>That is why I treat CLI agents differently from IDE agents.</p><p>I use them when the task is clear, scope is understood, and I am ready to audit the result. I do not use them as a substitute for system understanding. Claude&#8217;s permission and auto-mode work is interesting because the industry is now dealing with approval fatigue and trying to find a middle ground between friction and recklessness.</p><p>So yes, CLI agents are powerful. The real story is how much repository scope you are willing to expose to autonomous change in one move.</p><h2><strong>Full App Generators: Fast Output, Hidden Architecture</strong></h2><p>At the far end of the ladder are full app generators.</p><p>Lovable and Emergent are good examples. You describe an application in natural language, and the system scaffolds frontend, backend, deployment, and often surrounding structure as well. That is real leverage for prototypes, demos, hackathons, and early product exploration.</p><p>This is also where understanding drops out of the process fastest.</p><p>&#8220;Vibe coding&#8221; became useful language for this reason. AI-assisted coding is not inherently unserious. But there is a real behavior pattern where prompting becomes the primary act of development and code understanding becomes optional. 
<a href="https://simonwillison.net/2025/Feb/6/andrej-karpathy/">Karpathy&#8217;s phrasing</a> and <a href="https://simonwillison.net/2025/Mar/19/vibe-coding/">Simon Willison&#8217;s follow-up</a> made this clear: the problem is shipping what you do not understand.</p><p>So I treat generators as sketchpads.</p><p>They are excellent for compressing idea-to-running-app time. They are much less useful when I need high confidence in architecture, security boundaries, or long-term maintainability.</p><p>Fast output is not the same thing as stable software.</p><h2><strong>The Traps I Hit</strong></h2><h3><strong>1) Reviewer Fatigue</strong></h3><p>At first, AI tools feel amazing because they move faster than you do. Then a subtle bug shows up, and you realize you are debugging output you barely internalized.</p><p>The fix is boring, but it works:</p><ul><li><p>Keep scope small</p></li><li><p>Review everything until you trust the patterns</p></li><li><p>Ask for tests early</p></li><li><p>Do not treat passing output as understood output</p></li></ul><p>This matters even more because industry research keeps showing that AI-generated code can include insecure or flawed patterns when review is weak.</p><h3><strong>2) The Context Tax</strong></h3><p>Using multiple tools on the same problem sounds smart. In practice, it often creates fragmented state. One tool knows about the last fix. Another does not. One session carries the right assumptions. The next session reintroduces something you already resolved.</p><p>My fix is simple: one tool per session, one operating model at a time.</p><h3><strong>3) Treating Autonomy Like Maturity</strong></h3><p>This one took longer to unlearn. The most autonomous tool in the room is not automatically the right one. 
Often it is the wrong one.</p><p>The right question is not &#8220;What can this agent do?&#8221; The right question is &#8220;What scope should this agent have for this task?&#8221;</p><p>That mindset shift is what has held up for me.</p><h2><strong>MCP Changes Context, Not Responsibility</strong></h2><p>One of the most important shifts in this space is <a href="https://modelcontextprotocol.io/docs/getting-started/intro">MCP (Model Context Protocol)</a>.</p><p>Anthropic introduced MCP as an open standard for connecting AI tools to data sources and external systems. The ecosystem is now real enough to matter in day-to-day tool decisions. Slack has an official MCP server. Atlassian supports remote MCP workflows for Jira and Confluence. <a href="https://bob.ibm.com/docs/ide/configuration/mcp/understanding-mcp">IBM Bob integrates MCP</a> into its tool model, including terminal workflows.</p><p>MCP does not make the model correct. It gives the model fewer excuses to guess.</p><p>If the agent can pull the actual ticket, real internal docs, or real team conversation, work depends less on invented context. In enterprise settings, that matters because the gap between code and business context is where expensive mistakes happen.</p><p>But MCP is not magic. It reduces one failure class and introduces more systems responsibility. You still own permissions, tool boundaries, approvals, and review. <a href="https://www.the-main-thread.com/p/ai-agents-cli-tools-jbang-java-architects">And next to MCP, there&#8217;s also CLI tools</a>.</p><h2><strong>Safety Is Still Not Solved</strong></h2><p>This market is still too casual about safety.</p><p>Prompt injection is real. Tool misuse is real. Approval fatigue is real. 
<a href="https://genai.owasp.org/resource/agentic-ai-threats-and-mitigations/">OWASP explicitly calls out </a>prompt injection and insecure tool behavior as major risks for LLM applications, and <a href="https://bob.ibm.com/docs/ide/security/bob-security-guidance">IBM security material around Bob</a> says the same in enterprise terms: once agents gain tool access, prompt injection, jailbreaks, and poisoned context become practical attack paths.</p><p>So my rule stays simple:</p><p><strong>Automate only what you can explain.</strong></p><p>If you cannot say what the agent is allowed to touch, why it is allowed to touch it, and how you will review the result, do not let it run.</p><p>That rule applies equally to Bob, Bob Shell, Claude Code, and full app generators.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/p/ai-coding-tools-2026-java-developers-agents-control?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/p/ai-coding-tools-2026-java-developers-agents-control?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>What Actually Works</strong></h2><p>If you are a senior engineer moving into this space, optimize for control before capability shopping.</p><p>Start in the IDE. Learn the operating model. Learn tool scope, execution behavior, approval flow, and context boundaries. 
That is why I like IBM Bob as a starting point for serious teams: The control surface is easier to see.</p><p>Then move up the ladder when the task really requires it:</p><ul><li><p>Use the CLI when repository-level action is justified and you are ready to audit the result</p></li><li><p>Use generators when ideation speed matters more than architectural clarity</p></li></ul><p>That is the map.</p><ul><li><p>Not beginner to advanced</p></li><li><p>Not weak to powerful</p></li><li><p>Narrower blast radius to wider blast radius</p></li></ul><p>In 2026, the winning skill is not prompting.</p><p>It is change control.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Real-Time Bitcoin Analytics in Java with Quarkus]]></title><description><![CDATA[Build a live Bollinger Band and volatility regime monitor using Gatherers4j, streaming pipelines, and a reactive Quarkus backend]]></description><link>https://www.the-main-thread.com/p/real-time-bitcoin-analytics-java-quarkus-bollinger-bands</link><guid isPermaLink="false">https://www.the-main-thread.com/p/real-time-bitcoin-analytics-java-quarkus-bollinger-bands</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Wed, 08 Apr 2026 06:08:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/921e362b-1ffd-466c-b765-f5d4c28819cf_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most developers think technical indicators are a frontend problem. You fetch some prices, calculate a few averages, and draw lines on a chart. 
That mental model breaks the moment you try to do this in real time.</p><p>Live market data is infinite, bursty, and noisy. If you process every tick synchronously, your UI freezes. If you buffer too much, your signals lag behind reality. If you calculate indicators incorrectly under load, you don&#8217;t just get wrong charts, you get wrong trading signals.</p><p>Bollinger Bands make this problem obvious. They depend on sliding windows, statistical calculations, and consistent ordering. A single dropped or reordered event skews the bands. A single blocking call backpressures the entire pipeline.</p><p>In this tutorial, we build a real-time Bollinger Band monitor for Bitcoin that survives these realities. We ingest live trade data from Binance, process it using a sliding window pipeline, and stream clean, throttled signals to a browser dashboard. This is a stream-processing walkthrough in Java, not trading advice.</p><h2><strong>What you&#8217;ll build</strong></h2><p>By the end, you have a small <a href="https://quarkus.io/">Quarkus</a> app that:</p><ul><li><p>Connects to Binance over WebSocket and parses trade ticks</p></li><li><p>Debounces and windows trades, then computes Bollinger Bands on the server</p></li><li><p>Serves a dark-themed dashboard with Chart.js and live updates over Server-Sent Events (SSE)</p></li></ul><p>You can follow the steps in a fresh project, or open the companion <a href="https://github.com/myfear/the-main-thread/tree/main/bollinger-monitor">bollinger-monitor</a> sources next to this article. </p><h2><strong>Prerequisites</strong></h2><ul><li><p>Java 21 or newer (the companion <code>pom.xml</code> sets <code>maven.compiler.release</code>; align it with the JDK you run)</p></li><li><p>Apache Maven</p></li><li><p>Quarkus CLI or familiarity with <code>mvn quarkus:dev</code></p></li></ul><h2><strong>Project setup</strong></h2><h3><strong>Step 1: Create the Quarkus application</strong></h3><p>Let&#8217;s start with a new Quarkus app. 
We use the reactive REST stack, Qute for server-side templates, and WebSockets for ingestion.</p><pre><code><code>quarkus create app org.acme:bollinger-monitor \
  --extensions=quarkus-rest-jackson,quarkus-rest-qute,websockets-next \
  --java=21
cd bollinger-monitor</code></code></pre><p>If you open the companion project from this repo, check <code>maven.compiler.release</code> in the root <code>pom.xml</code> and make it match the JDK you run (the generated CLI project uses whatever you passed to <code>--java=</code>).</p><h3><strong>Step 2: Add Gatherers4j to your build</strong></h3><p>Add Gatherers4j to <code>pom.xml</code>:</p><pre><code><code>&lt;dependency&gt;
    &lt;groupId&gt;com.ginsberg&lt;/groupId&gt;
    &lt;artifactId&gt;gatherers4j&lt;/artifactId&gt;
    &lt;version&gt;0.13.0&lt;/version&gt;
&lt;/dependency&gt;</code></code></pre><p><a href="https://tginsberg.github.io/gatherers4j/">Gatherers4j</a> is a small library of <strong>stream gatherers</strong>&#8212;custom intermediate operations you plug into a Java <code>Stream</code> with <code>.gather(...)</code>. The classic <code>Stream</code> API made it straightforward to define <em>terminal</em> behavior with <code>Collector</code>, but reusable <em>intermediate</em> steps such as sliding windows, debouncing, and throttling were not first-class; teams often reimplemented them or jumped to a separate streaming runtime. Gatherers close that gap: you keep an ordinary in-process stream (here, fed from our queue), compose operators like <code>debounce</code> and <code>window</code>, and avoid pulling in a distributed stream-processing framework. </p><h2><strong>Implementation</strong></h2><h3><strong>Map Binance trades to a Java record</strong></h3><p>Binance trade messages are compact JSON objects. We only care about price and timestamp.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;4f8d0d25-f414-448a-9467-43773b461d52&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.domain;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

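// Binance publishes the trade price "p" as a JSON string; Jackson coerces it
// to a double during binding, so no custom deserializer is needed here.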
@JsonIgnoreProperties(ignoreUnknown = true)
public record TradeData(
                @JsonProperty("p") double price,
                @JsonProperty("T") long timestamp) {
}</code></pre></div><p>Ignoring unknown fields protects us from API changes. If Binance adds fields tomorrow, your pipeline keeps running.</p><h3><strong>Define the signal you stream to the UI</strong></h3><p>This record is what we send to the browser: raw band values plus a simple label the UI can show.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;5184edf4-883e-4a83-955c-b30a39fd0fd5&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.domain;

public record BollingerSignal(
        double currentPrice,
        double upperBand,
        double lowerBand,
        double middleBand,
        String signal) {
}</code></pre></div><p>The UI never recalculates indicators. That logic belongs on the server, where correctness is easier to test.</p><h3><strong>Extract Bollinger math for reuse and tests</strong></h3><p>The companion project moves the band and signal logic into a small static helper so <code>BollingerService</code> stays focused on streaming and you can unit test the formula without the queue or Gatherers.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;2e363d5e-c344-497b-a4a6-b4ed44d5728b&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.service;

import java.util.List;

import org.acme.domain.BollingerSignal;
import org.acme.domain.TradeData;

public final class BollingerCalculator {

    private BollingerCalculator() {
    }

    public static BollingerSignal calculate(List&lt;TradeData&gt; window, double k) {
        double currentPrice = window.getLast().price();

        double mean = window.stream()
                .mapToDouble(TradeData::price)
                .average()
                .orElse(0.0);

        double variance = window.stream()
                .mapToDouble(t -&gt; Math.pow(t.price() - mean, 2))
                .average()
                .orElse(0.0);

        double stdDev = Math.sqrt(variance);

        double upper = mean + (k * stdDev);
        double lower = mean - (k * stdDev);

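        // Classification order matters: the breakout checks run before the
        // squeeze check, so a completely flat window (stdDev == 0) is labeled
        // BREAKOUT_UP, not SQUEEZE.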
        String status = "NORMAL";
        if (currentPrice &gt;= upper) {
            status = "BREAKOUT_UP";
        } else if (currentPrice &lt;= lower) {
            status = "BREAKOUT_DOWN";
        } else if (stdDev &lt; mean * 0.0001) {
            status = "SQUEEZE";
        }

        return new BollingerSignal(currentPrice, upper, lower, mean, status);
    }
}</code></pre></div><p><strong>Signal edge case:</strong> when every price in the window is identical, the bands collapse to a single level and the last price satisfies <code>currentPrice &gt;= upper</code> before the squeeze check runs, so the label is <code>BREAKOUT_UP</code>, not <code>SQUEEZE</code>. The companion tests document that ordering.</p><h3><strong>Ingest trades without blocking the socket thread</strong></h3><p>WebSocket callbacks must stay fast. Any blocking work here will drop messages.</p><p>We buffer incoming trades into a queue and process them elsewhere.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;4ce1adb1-3d3a-478f-afc8-05bc25be2bb1&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.ingest;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.acme.domain.TradeData;

import com.fasterxml.jackson.databind.ObjectMapper;

import io.vertx.core.http.WebSocketClient;
import io.vertx.core.http.WebSocketConnectOptions;
import io.vertx.mutiny.core.Vertx;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class BinanceClient {

    public static final BlockingQueue&lt;TradeData&gt; BUFFER = new LinkedBlockingQueue&lt;&gt;();
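
    // Note: this queue is unbounded. A production setup would cap it, for
    // example new LinkedBlockingQueue&lt;&gt;(10_000), and decide how to handle
    // a full buffer (drop, sample, or spill to disk).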

    @Inject
    Vertx vertx;

    private final ObjectMapper mapper = new ObjectMapper();
    private io.vertx.mutiny.core.http.WebSocket webSocket;

    public void connect(String uri) {
        // Check for proxy environment variables that might interfere
        String httpProxy = System.getenv("HTTP_PROXY");
        String httpsProxy = System.getenv("HTTPS_PROXY");
        if (httpProxy != null || httpsProxy != null) {
            io.quarkus.logging.Log.warn(
                    "Proxy environment variables detected - HTTP_PROXY: " + httpProxy + ", HTTPS_PROXY: " + httpsProxy);
            io.quarkus.logging.Log.warn("Using direct connection to Binance (bypassing proxy)");
        }

        WebSocketClient client = vertx.getDelegate().createWebSocketClient();

        // Parse URI to extract host, port, and path
        // Format: wss://stream.binance.com:9443/ws/btcusdt@trade
        java.net.URI parsedUri = java.net.URI.create(uri);
        String host = parsedUri.getHost();
        int port = parsedUri.getPort() != -1 ? parsedUri.getPort() : (uri.startsWith("wss://") ? 443 : 80);
        String path = parsedUri.getPath() + (parsedUri.getQuery() != null ? "?" + parsedUri.getQuery() : "");
        boolean ssl = uri.startsWith("wss://");

        // Use host/port directly to bypass proxy resolution
        WebSocketConnectOptions options = new WebSocketConnectOptions()
                .setHost(host)
                .setPort(port)
                .setURI(path)
                .setSsl(ssl);

        io.quarkus.logging.Log.info("Connecting to Binance WebSocket: " + host + ":" + port + path);

        client.connect(options)
                .onSuccess(ws -&gt; {
                    this.webSocket = new io.vertx.mutiny.core.http.WebSocket(ws);
                    io.quarkus.logging.Log.info("Binance WebSocket connected successfully");
                    ws.textMessageHandler(message -&gt; {
                        try {
                            TradeData data = mapper.readValue(message, TradeData.class);
                            BUFFER.offer(data);
                        } catch (Exception e) {
                            io.quarkus.logging.Log.warn("Failed to parse trade data: " + e.getMessage());
                        }
                    });
                    ws.closeHandler(v -&gt; {
                        io.quarkus.logging.Log.warn("Binance WebSocket closed");
                    });
                })
                .onFailure(throwable -&gt; {
                    io.quarkus.logging.Log.error("Failed to connect to Binance WebSocket", throwable);
                });
    }

    public void disconnect() {
        if (webSocket != null) {
            webSocket.close();
        }
    }
}</code></pre></div><p>This queue is a pressure boundary. If downstream slows down, we drop or delay work without blocking the socket thread. Parsing the WebSocket URI into host, port, path, and SSL (instead of only <code>setURI</code>) helps in environments where HTTP(S) proxies would otherwise intercept <code>wss://</code> connections.</p><h3><strong>Turn the queue into a windowed stream</strong></h3><p>This is the core of the system. We convert an infinite queue into a controlled, windowed stream.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;00a5f485-ec4f-427d-9edc-0ea94fe18d12&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.service;

import java.time.Duration;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.Executors;

import org.acme.domain.BollingerSignal;
import org.acme.domain.TradeData;
import org.acme.ingest.BinanceClient;
import org.jspecify.annotations.NonNull;

import com.ginsberg.gatherers4j.Gatherers4j;

import io.quarkus.logging.Log;
import io.quarkus.runtime.StartupEvent;
import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.subscription.MultiEmitter;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.inject.Inject;

@ApplicationScoped
public class BollingerService {

    @Inject
    BinanceClient binanceClient;

    private volatile MultiEmitter&lt;? super BollingerSignal&gt; currentEmitter;
    private volatile boolean processingStarted = false;

    private static final int WINDOW_SIZE = 20;
    private static final double K = 2.0;
    private static final @NonNull Duration DEBOUNCE_DURATION = Objects.requireNonNull(Duration.ofMillis(50));
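
    // WINDOW_SIZE = 20 and K = 2.0 are the classic Bollinger Band defaults:
    // a 20-sample moving average with bands at two standard deviations.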

    public Multi&lt;BollingerSignal&gt; stream() {
        return Multi.createFrom().emitter(emitter -&gt; {
            this.currentEmitter = emitter;
            Log.info("New subscriber connected to stream");
            // Start processing if not already started
            if (!processingStarted) {
                synchronized (this) {
                    if (!processingStarted) {
                        processingStarted = true;
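                        // This executor is created once and never shut down in
                        // the sample; a production service would manage its
                        // lifecycle (for example in a @PreDestroy method).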
                        Executors.newSingleThreadExecutor().submit(this::processStream);
                    }
                }
            }
        });
    }

    void onStart(@Observes StartupEvent ev) {
        connectToBinance();
    }

    private void processStream() {
        Log.info("Starting stream processing - waiting for trade data...");
        try {
            java.util.stream.Stream.generate(() -&gt; {
                try {
                    TradeData data = BinanceClient.BUFFER.take();
                    return data;
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    Log.warn("Stream processing interrupted");
                    return null;
                }
            })
                    .takeWhile(data -&gt; data != null)
                    .gather(Gatherers4j.debounce(1, DEBOUNCE_DURATION))
                    .gather(Gatherers4j.window(WINDOW_SIZE, 1, true))
                    .map(this::calculateBollinger)
                    .forEach(signal -&gt; {
                        MultiEmitter&lt;? super BollingerSignal&gt; emitter = this.currentEmitter;
                        if (emitter != null &amp;&amp; !emitter.isCancelled()) {
                            emitter.emit(signal);
                        }
                    });
        } catch (Exception e) {
            Log.error("Error in processing stream", e);
            MultiEmitter&lt;? super BollingerSignal&gt; emitter = this.currentEmitter;
            if (emitter != null &amp;&amp; !emitter.isCancelled()) {
                emitter.fail(e);
            }
        }
    }

    private BollingerSignal calculateBollinger(List&lt;TradeData&gt; window) {
        return BollingerCalculator.calculate(window, K);
    }

    private void connectToBinance() {
        Log.info("Attempting to connect to Binance WebSocket...");
        try {
            binanceClient.connect("wss://stream.binance.com:9443/ws/btcusdt@trade");
            // Note: connection is asynchronous, success/failure logged in BinanceClient
        } catch (Exception e) {
            Log.error("Failed to initiate Binance connection", e);
        }
    }
}</code></pre></div><p><strong>Lifecycle in the companion project:</strong> <code>onStart</code> only opens the Binance WebSocket. The consumer thread starts when the <strong>first</strong> client subscribes to the SSE <code>Multi</code> (first browser hitting <code>/stream</code>). Until then, trades accumulate in the buffer. That avoids a dedicated blocked thread when nobody is watching.</p><p><strong>Multiple dashboards:</strong> the service keeps a single <code>currentEmitter</code>. The last subscriber wins; earlier SSE clients will not receive new signals unless you introduce a broadcast <code>Multi</code> or a shared processor. For a single-tab demo this is fine.</p><p><strong>Interrupt handling:</strong> after <code>take()</code> is interrupted, the generator returns <code>null</code>; <code>takeWhile</code> ends the stream so <code>null</code> never reaches Gatherers4j.</p><p><strong>Gatherers4j and nullness:</strong> the library uses JSpecify; some IDEs warn when passing a bare <code>Duration.ofMillis(50)</code> into <code>debounce</code>. A <code>static final @NonNull Duration</code> initialized with <code>Objects.requireNonNull(Duration.ofMillis(50))</code> satisfies those checkers without changing runtime behavior.</p><p>Why debounce before windowing? Raw ticks arrive very fast. If we window every tick, the chart and the browser work too hard, and the bands jump on noise. A short debounce collapses bursts so the window sees a steadier stream, and the UI still feels live.</p><p>We debounce first, then window. 
That keeps the UI responsive and the math stable.</p><h3><strong>Serve the dashboard and an SSE stream</strong></h3><p>We serve a simple HTML page and expose an SSE stream for live updates.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;312acf3f-3122-48dc-887d-dd62e4ba29cc&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme;

import org.acme.domain.BollingerSignal;
import org.acme.service.BollingerService;
import org.jboss.resteasy.reactive.RestStreamElementType;

import io.quarkus.qute.Template;
import io.quarkus.qute.TemplateInstance;
import io.smallrye.mutiny.Multi;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/")
public class DashboardResource {

    @Inject
    Template index;

    @Inject
    BollingerService service;

    @GET
    @Produces(MediaType.TEXT_HTML)
    public TemplateInstance get() {
        return index.data("title", "BTC Bollinger Bands");
    }

    @GET
    @Path("/stream")
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @RestStreamElementType(MediaType.APPLICATION_JSON)
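    // With SERVER_SENT_EVENTS as the response media type, @RestStreamElementType
    // controls how each individual stream element is serialized (JSON here).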
    public Multi&lt;BollingerSignal&gt; stream() {
        return service.stream();
    }
}</code></pre></div><p>SSE gives us ordered, one-way streaming from the server without extra WebSocket wiring in the page.</p><h3><strong>Build the Chart.js dashboard</strong></h3><p>We visualize the &#8220;tunnel&#8221; effect of Bollinger Bands. In Chart.js, pairing <code>fill</code> on the lower band with the upper band dataset gives a shaded band between them (see the <code>fill</code> values in the snippet below).</p><p>The companion repo keeps the same Chart.js and <code>EventSource("/stream")</code> pattern but expands <code>index.html</code> with extra layout and a metrics panel; the following snippet is the minimal version from the original walkthrough.</p><p><code>src/main/resources/templates/index.html</code>:</p><pre><code><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
    &lt;title&gt;{title}&lt;/title&gt;
    &lt;script src="https://cdn.jsdelivr.net/npm/chart.js"&gt;&lt;/script&gt;
    &lt;style&gt;
        body { background: #121212; color: #e0e0e0; font-family: 'Segoe UI', sans-serif; padding: 20px; }
        .container { max-width: 900px; margin: 0 auto; text-align: center; }

        /* Signal Badges */
        #signal-box { padding: 10px 20px; border-radius: 5px; display: inline-block; font-weight: bold; margin-bottom: 20px;}
        .NORMAL { background: #333; color: #888; }
        .BREAKOUT_UP { background: #00c853; color: #fff; box-shadow: 0 0 15px #00c853;}
        .BREAKOUT_DOWN { background: #d50000; color: #fff; box-shadow: 0 0 15px #d50000;}
        .SQUEEZE { background: #ffd600; color: #000; }
    &lt;/style&gt;
&lt;/head&gt;
&lt;body&gt;

&lt;div class="container"&gt;
    &lt;h2&gt;Bitcoin (BTC/USDT) Real-Time Bollinger Bands&lt;/h2&gt;
    &lt;div id="signal-box" class="NORMAL"&gt;INITIALIZING STREAM...&lt;/div&gt;
    &lt;canvas id="chart"&gt;&lt;/canvas&gt;
&lt;/div&gt;

&lt;script&gt;
    const ctx = document.getElementById('chart').getContext('2d');

    const chart = new Chart(ctx, {
        type: 'line',
        data: {
            labels: [],
            datasets: [
                {
                    label: 'Upper Band',
                    data: [],
                    borderColor: 'rgba(255, 255, 255, 0.2)',
                    borderWidth: 1,
                    pointRadius: 0,
                    fill: false
                },
                {
                    label: 'Lower Band',
                    data: [],
                    borderColor: 'rgba(255, 255, 255, 0.2)',
                    borderWidth: 1,
                    pointRadius: 0,
                    fill: '-1' // Fill to the dataset before this (Upper Band)
                },
                {
                    label: 'BTC Price',
                    data: [],
                    borderColor: '#00e5ff',
                    borderWidth: 2,
                    pointRadius: 0
                }
            ]
        },
        options: {
            animation: false,
            interaction: { intersect: false },
            scales: {
                y: { grid: { color: '#333' } },
                x: { display: false } // Hide time labels for cleaner look
            }
        }
    });

    const evtSource = new EventSource("/stream");

    evtSource.onmessage = function(event) {
        const data = JSON.parse(event.data);
        const time = new Date().toLocaleTimeString();

        // Update Signal Badge
        const box = document.getElementById("signal-box");
        box.className = data.signal;
        box.innerText = data.signal + " ($" + data.currentPrice.toFixed(2) + ")";

        // Update Chart
        if (chart.data.labels.length &gt; 100) {
            chart.data.labels.shift();
            chart.data.datasets.forEach(d =&gt; d.data.shift());
        }

        chart.data.labels.push(time);
        chart.data.datasets[0].data.push(data.upperBand);
        chart.data.datasets[1].data.push(data.lowerBand);
        chart.data.datasets[2].data.push(data.currentPrice);
        chart.update();
    };
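
    // Not in the original walkthrough: EventSource reconnects automatically,
    // but surfacing connection errors makes debugging easier.
    evtSource.onerror = function() {
        const box = document.getElementById("signal-box");
        box.className = "NORMAL";
        box.innerText = "RECONNECTING...";
    };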
&lt;/script&gt;
&lt;/body&gt;
&lt;/html&gt;</code></code></pre><h2><strong>Configuration</strong></h2><p>This example runs with defaults. In production, you would externalize:</p><ul><li><p>WebSocket endpoint URL</p></li><li><p>Window size and multiplier</p></li><li><p>Debounce duration</p></li></ul><p>Those values directly affect signal sensitivity and system load.</p><h2><strong>Automated tests</strong></h2><p>The companion project replaces the default Quarkus greeting tests with:</p><ul><li><p><code>BollingerCalculatorTest</code> &#8212; pure unit tests for band math and signal labels (including the &#8220;flat window&#8221; edge case).</p></li><li><p><code>TradeDataMappingTest</code> &#8212; Jackson deserializes a Binance-style JSON line (<code>p</code> as string, unknown fields ignored).</p></li><li><p><code>DashboardResourceTest</code> &#8212; <code>GET /</code> returns HTML containing the page title and the <code>EventSource("/stream")</code> client.</p></li><li><p><code>DashboardResourceIT</code> &#8212; runs the same HTTP checks in packaged mode when you enable integration tests (for example <code>-DskipITs=false</code> on <code>verify</code>).</p></li></ul><p>Run unit tests with:</p><pre><code><code>mvn test</code></code></pre><h2><strong>Production hardening</strong></h2><h3><strong>Stream backpressure</strong></h3><p>The blocking queue isolates ingestion from processing. If calculations slow down, the WebSocket thread stays alive. Without this boundary, Binance would disconnect you under load.</p><h3><strong>Ordering guarantees</strong></h3><p>This pipeline assumes trade events arrive in order. Binance guarantees ordering per symbol. If you merge multiple symbols, you must reorder by timestamp before windowing.</p><h3><strong>Numerical stability</strong></h3><p>Standard deviation is recalculated per window. 
For large windows or high-frequency data, you would use incremental variance algorithms to reduce floating-point drift.</p><h3><strong>Common pitfalls</strong></h3><ul><li><p><strong>Unbounded buffer</strong>: <code>LinkedBlockingQueue</code> without a cap can grow until you run out of memory if the consumer stops or cannot keep up. In production, pick a capacity and a clear policy (drop, sample, or spill to disk).</p></li><li><p><strong>Parse errors</strong>: the companion client logs parse failures; you should still add metrics or counters in production so you can alert on an unhealthy feed.</p></li><li><p><strong>No reconnect</strong>: if the WebSocket drops, this sample does not reconnect. Production clients need backoff, resubscribe, and maybe snapshot + replay from REST.</p></li><li><p><code>null</code><strong> from </strong><code>take()</code>: on interrupt, the companion pipeline restores the interrupt flag, returns <code>null</code>, and ends the stream with <code>takeWhile</code> so Gatherers never see a null element.</p></li><li><p><strong>One emitter</strong>: a single <code>currentEmitter</code> field means only the latest SSE subscriber receives signals; use a broadcast stream if you need multiple concurrent dashboards.</p></li></ul><h2><strong>Verification</strong></h2><p>Run the application:</p><pre><code><code>quarkus dev</code></code></pre><p>In your browser, open the dashboard at <a href="http://localhost:8080/">http://localhost:8080</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!T_mV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!T_mV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png 424w, https://substackcdn.com/image/fetch/$s_!T_mV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png 848w, https://substackcdn.com/image/fetch/$s_!T_mV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png 1272w, https://substackcdn.com/image/fetch/$s_!T_mV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!T_mV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png" width="1456" height="830" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:830,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:427093,&quot;alt&quot;:&quot;Bollinger 
Monitor&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/184634498?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Bollinger Monitor" title="Bollinger Monitor" srcset="https://substackcdn.com/image/fetch/$s_!T_mV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png 424w, https://substackcdn.com/image/fetch/$s_!T_mV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png 848w, https://substackcdn.com/image/fetch/$s_!T_mV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png 1272w, https://substackcdn.com/image/fetch/$s_!T_mV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb4a7a1a-4009-44ed-bf50-18b8fd3bcac1_3024x1724.png 1456w" sizes="100vw" loading="lazy"></picture>
</div></a></figure></div><p>You should see:</p><ul><li><p>A moving BTC price line</p></li><li><p>A shaded Bollinger Band tunnel</p></li><li><p>A signal badge that switches between NORMAL, SQUEEZE, and breakouts</p></li></ul><p>When the price exits the band, the signal changes immediately. That confirms the windowing and streaming logic is working.</p><h2><strong>Conclusion</strong></h2><p>You now have a real-time Bollinger Band monitor that handles infinite streams, sliding windows, and live visualization without blocking the UI thread or letting the socket handler do heavy work. 
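</p><p>The band math behind that signal badge is compact enough to sketch in plain Java. The class name, window handling, and the 2-sigma and squeeze thresholds below are illustrative assumptions, not the project's exact code:</p>

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of server-side signal calculation: classify each
// incoming price against Bollinger Bands (SMA +/- 2 sigma) computed over
// the previous fixed-size window. Names and thresholds are assumptions.
public class BollingerSignal {

    private final Deque<Double> window = new ArrayDeque<>();
    private final int size;

    public BollingerSignal(int size) {
        this.size = size;
    }

    public String onPrice(double price) {
        // Classify against the window *before* the new tick is added, so a
        // sudden spike cannot widen the very band it is measured against.
        String signal = classify(price);
        window.addLast(price);
        if (window.size() > size) {
            window.removeFirst();
        }
        return signal;
    }

    private String classify(double price) {
        if (window.size() < size) {
            return "WARMUP"; // not enough history yet
        }
        double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(price);
        double variance = window.stream()
                .mapToDouble(p -> (p - mean) * (p - mean))
                .sum() / size;
        double sigma = Math.sqrt(variance);
        double upper = mean + 2 * sigma;
        double lower = mean - 2 * sigma;

        if (price > upper) {
            return "BREAKOUT_UP";
        }
        if (price < lower) {
            return "BREAKOUT_DOWN";
        }
        // A band that is very narrow relative to the mean is a squeeze.
        if (mean != 0 && (upper - lower) / mean < 0.001) {
            return "SQUEEZE";
        }
        return "NORMAL";
    }

    public static void main(String[] args) {
        BollingerSignal s = new BollingerSignal(3);
        double[] ticks = {100, 101, 99, 100, 140};
        for (double t : ticks) {
            System.out.println(t + " -> " + s.onPrice(t)); // e.g. 140.0 -> BREAKOUT_UP
        }
    }
}
```

<p>On a flat window the band collapses, so the first tick outside it flips the badge immediately, which matches the behavior described above.</p>

<p>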
The important parts are controlled ingestion, explicit windowing, and server-side signal calculation.</p><p>From here, adding trade execution, alerts, or persistence is an architectural decision, not a rewrite.</p><p>If you want to go further, a natural next step is a guarded trade-execution endpoint that only runs on a breakout signal, with a short note on why that kind of safeguard matters in production.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[When Code Gets Cheap, Quality Becomes the Strategy]]></title><description><![CDATA[As AI coding agents flood software delivery with output, Java developers need stronger standards for quality, accountability, and long-term trust.]]></description><link>https://www.the-main-thread.com/p/agentic-sdlc-java-teams-trust-quality-standards</link><guid isPermaLink="false">https://www.the-main-thread.com/p/agentic-sdlc-java-teams-trust-quality-standards</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Tue, 07 Apr 2026 06:08:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b2e4985f-0202-4e9d-8ed9-cf32699c2db5_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The biggest problem in agent-driven development is not code generation.</p><p>It is trust.</p><p>The tools can already produce code, tests, refactorings, documentation, and pull requests at a speed that would have looked ridiculous not long ago. That part is real. 
What is far less mature is everything around it: how teams review that output, how they prove it is correct, how they trace decisions back to a responsible human, and how they stop architecture from slowly dissolving under a flood of plausible machine-produced changes.</p><p>Software delivery is becoming easier to accelerate and harder to trust. That changes the conversation completely. We are no longer only talking about developer productivity. We are talking about responsibility. About whether engineering teams can still explain the systems they ship. About whether passing tests still mean what they used to mean. About whether critical software can be built in a process where output is cheap, judgment is expensive, and certainty is always slightly out of reach.</p><p>A lot of teams are learning this the hard way. Some ignore the AI slop and merge too much. Others compensate with impressive-looking test coverage wrapped around shallow engineering decisions. Many are experimenting. Many are failing. The teams seeing real success are usually not the ones moving fastest. They are the ones applying these tools with restraint, experience, and a clear sense of where not to trust them.</p><p>That is why I think agent-driven SDLC has a standards problem long before it solves its tooling problem.</p><h2>We are redistributing engineering responsibility</h2><p>One of the easiest mistakes to make in this discussion is to frame the whole shift as simple automation. The agent writes more code, the developer writes less code, productivity goes up. That is the surface-level version. It misses the more important change underneath.</p><p>Developers are not just writing less. They are spending more time steering, constraining, verifying, and cleaning up. In the old model, authorship and responsibility were closely linked. You wrote the code, so you were expected to understand it. In the new model, that path becomes less direct. 
A human starts the task, an agent explores a solution, another tool edits files, the IDE suggests changes, a review assistant comments, and a human signs off at the end.</p><p>The code still ships under human responsibility, but the relationship between producing it and understanding it is getting weaker.</p><p>That changes what it means to be good at software engineering. It also raises the cost of weak judgment. A team with poor architectural instincts does not suddenly become strong because an agent can produce more code. It just creates larger amounts of weak software more quickly. Strong teams can absolutely get leverage from these tools, but they do so because they already know what good looks like, where risk hides, and when to stop the machine from confidently going in the wrong direction.</p><p>These tools amplify judgment. They do not replace it.</p><h2>The industry is mistaking activity for progress</h2><p>This is where a lot of companies get into trouble.</p><p>Agent-driven workflows generate visible motion. More code. More commits. More pull requests. More generated tests. More automated fixes. More demos. More App Store updates. More experiments. More output everywhere. It looks like acceleration because everything is moving.</p><p>But visible motion and meaningful progress are not the same thing.</p><p>Teams are starting to treat the artifacts of agent-driven development as proof that the underlying engineering is sound. A large test suite gets mistaken for rigor even when it mostly validates the agent&#8217;s own assumptions. A working demo gets treated as evidence of maintainability. A huge refactoring diff feels like success because the tool completed it in minutes. A ticket-to-PR pipeline gets presented as maturity because it resembles industrial scale.</p><p>The easy parts of software delivery are the easiest parts to automate and the easiest parts to measure. That creates a dangerous illusion. 
You can improve the metrics that are most visible while making the system itself harder to reason about. Data boundaries get weaker. Error handling stays shallow. Edge cases remain undiscovered. Architecture accumulates local optimizations that nobody planned. A passing pipeline starts to hide a declining engineering baseline.</p><p>That pattern is not a minor quirk of early tooling. It is one of the natural failure modes of this model.</p><h2>When everyone can ship faster, quality becomes the strategy</h2><p>There is another pressure building here, especially for established software vendors.</p><p>When the cost of producing software drops, competition changes shape. Smaller players can suddenly launch products, features, copilots, and agentic workflows at a speed that would have been much harder to match a few years ago. From the outside, that can make the market look flooded with innovation. Every week brings more announcements, more updates, more assistants, more products that appear to do everything. For larger and more established vendors, that creates a dangerous temptation: respond to the pressure by embracing every agentic pattern at once, ship faster than feels comfortable, and try to hold ground through visible momentum alone.</p><p>That response is understandable. It is also risky.</p><p>Once code becomes cheap, quantity stops being a meaningful signal of quality. More features do not automatically mean better software. More releases do not mean stronger products. More AI-generated surface area does not mean the product underneath is easier to operate, easier to secure, easier to integrate, or easier to trust. Many markets are about to relearn that the hard way.</p><p>Software is more than just code. It is the long tail that begins after the demo works. It is whether the architecture holds together as complexity grows. Whether support teams can diagnose failures. Whether customers can rely on behavior staying consistent. 
Whether integrations survive change. Whether security holds up under real use. Whether a vendor can explain design decisions, fix regressions without breaking everything else, and still be there to maintain the product when the excitement of launch day is long gone.</p><p>This is also where betting on no-name products becomes more complicated than it first appears. In a market shaped by agentic development, a small team can produce something impressive very quickly. But customers are not only buying a set of generated features. <strong>They are buying a future</strong>: maintenance, accountability, resilience, product direction, support, and staying power. When those things are weak, the apparent speed advantage can turn into long-term cost for everyone involved.</p><p>That is why quality versus quantity is no longer just an engineering argument. It is becoming a strategic one. In a market full of fast-moving products, durable software will stand out less by how much it can generate and more by how well it survives contact with reality.</p><h2>Zero trust becomes the default working posture</h2><p>There is also a human cost to all of this.</p><p>A lot of agent-driven development ends up creating a zero-trust environment by necessity. You do not fully trust the output. You do not fully trust the tests. You do not fully trust the explanation. You do not fully trust the refactoring. You definitely do not trust that all the edge cases have been found.</p><p>So you inspect. Then you verify. Then you add rules, prompts, templates, policy files, review gates, local conventions, evaluation harnesses, and more tooling around the tooling. All of that is rational. All of that is also expensive.</p><p>The promise was reduced toil. In many teams, the toil has simply changed shape.</p><p>Instead of typing every line directly, developers become permanent supervisors of a fast, confident, and uneven collaborator. Sometimes that trade is worth it. Sometimes it is not. 
Sometimes the productivity gain is obvious. Sometimes it disappears into review overhead and the mental drain of never being able to fully relax.</p><p>That constant wariness matters more than many people admit. It affects concentration, ownership, onboarding, and engineering culture. It changes the emotional texture of software development. It is one thing to collaborate with a tool you trust. It is another to work beside a system that is often useful, occasionally brilliant, and always suspect.</p><h2>The reliable pockets are real, but narrower than the hype suggests</h2><p>This is not a pessimistic case against the whole category. There are clearly places where agent-based development already works well.</p><p>It works better when the task is bounded. It works better when correctness is visible. It works better when rollback is cheap. It works better when the surrounding architecture is already strong. It works better when an experienced engineer can quickly tell when something feels wrong.</p><p>That is why scaffolding, boilerplate reduction, repetitive migrations, documentation support, low-risk internal tooling, and some forms of test assistance can be genuinely useful. The problem starts when success in these pockets gets generalized into confidence about everything else.</p><p>That leap is where teams get hurt.</p><p>Once software starts carrying serious business criticality, regulatory weight, safety implications, or long maintenance horizons, the question changes. It is no longer enough to ask whether an agent can produce acceptable code. The more important question is what evidence exists that the system, the workflow, and the chain of decisions are trustworthy enough for the domain.</p><p>That is a much harder standard to meet.</p><h2>Critical systems are where the romance ends</h2><p>This is where the strategic question gets serious.</p><p>Using agents for a dashboard, an internal admin tool, or a side project is one thing. 
Using them in software that can influence medical devices, medication workflows, vehicles, industrial controls, or other embedded systems with real-world failure consequences is something else entirely.</p><p>In those environments, generated code is not just a productivity artifact. It becomes part of an assurance story.</p><p>Who reviewed it? Against which standard? With what traceability? Can the team explain why a decision was made? Can it show the origin of a generated change? Can it reproduce the workflow that produced it? Can it prove the tests are meaningful rather than cosmetic? Can it demonstrate that safety constraints were actually enforced and not just described in a prompt somewhere?</p><p>Those are not anti-AI questions. They are normal engineering questions in environments where failure is expensive and sometimes irreversible.</p><p>This is where current agent tooling still feels immature. It is very good at producing output. It is much less mature when it comes to producing evidence. And in critical systems, evidence is what matters.</p><h2>We are rebuilding trust layers from scratch in every company</h2><p>Almost every serious company experimenting with agent-driven SDLC is inventing its own local operating system for trust.</p><p>Different prompt conventions. Different repository instructions. Different policy files. Different approval flows. Different evaluation harnesses. Different logging setups. Different provenance strategies. Different rules about where autonomy is allowed and where it stops. Different expectations for what a human reviewer must verify before approving a change.</p><p>Some of this is healthy experimentation. Some of it is duplicated labor on a massive scale.</p><p>That usually means the industry has entered a pre-standards phase.</p><p>Standardization tends to matter when fragmentation starts becoming expensive. Incompatibility increases. Portability gets worse. Safety becomes harder to reason about. 
Teams duplicate the same work in parallel. Trust does not travel well across organizational boundaries. DIN itself was founded in 1917, and DIN describes standardization in Germany as a form of industry self-regulation. The point is not to force a historical analogy too far. The point is simpler. Ad hoc solutions work for a while. Then the cost of living without common agreements becomes too high.</p><p>Agent-driven development feels like it is moving toward that moment.</p><h2>The missing standards are operational, not just technical</h2><p>When people hear the word standards, they often think about protocols, file formats, or APIs. Those matter, but the more urgent gap is operational.</p><p>We still do not have widely shared norms for questions like these:</p><ul><li><p>What counts as acceptable evidence for an agent-generated change?</p></li><li><p>What level of traceability should be required for generated code in regulated environments?</p></li><li><p>What must a human reviewer verify before approving an agent-produced pull request?</p></li><li><p>How should teams document architectural intent in a way that agents can use without slowly corrupting it?</p></li><li><p>What does a meaningful evaluation harness look like beyond &#8220;the tests passed&#8221;?</p></li><li><p>What levels of autonomy are acceptable in different domains?</p></li><li><p>How do you onboard junior developers into a world where they can generate implementations faster than they can judge them?</p></li></ul><p>Those are not just model questions. They are software delivery questions. They cut across engineering, architecture, governance, and risk.</p><p>We already have broad AI governance frameworks. <a href="https://www.nist.gov/itl/ai-risk-management-framework">NIST&#8217;s AI Risk Management Framework</a> and its Generative AI profile exist, and <a href="https://www.iso.org/standard/42001">ISO/IEC 42001</a> defines a management system standard for AI. 
But those frameworks do not answer the practical SDLC question of how agent-based delivery should be reviewed, evidenced, and controlled inside real software teams. That part is still being invented ad hoc.</p><p>If the industry does not shape those norms together, vendors and individual enterprises will shape them separately. That leads to the usual outcome: fragmented practices, hard-to-transfer skills, audit pain, and a lot of expensive reinvention.</p><h2>Senior engineering judgment matters more now</h2><p>One of the strangest ideas in the current conversation is that agent-driven development reduces the need for deep engineering experience.</p><p>Everything I see points the other way.</p><p>When output becomes cheap, judgment becomes expensive.</p><p>The ability to notice where a design is weak, where a test is shallow, where a refactoring quietly damages a boundary, where a generated abstraction will become tomorrow&#8217;s maintenance burden, where an agent is confidently wrong, where a missing edge case can still trigger an incident, these skills matter more in an agent-driven SDLC, not less.</p><p>This is why some of the current misuse feels so predictable. If a company believes it can compensate for weak architectural thinking by adding more generation, more prompt chains, and more superficial test automation, it is not modernizing. It is scaling confusion.</p><p>The teams getting real value are usually not the most aggressive. They are the most deliberate. They know where the tools help. They know where they do not. They know that a passing suite is not the same thing as a sound system. They know that human responsibility cannot be outsourced simply because the implementation path became machine-assisted.</p><p>That is not resistance. 
It is engineering maturity.</p><h2>The next standards battle in software will be about trust</h2><p>This is the strategic point that I keep coming back to.</p><p>The companies that benefit most from agent-driven development will not be the ones generating the most code. They will be the ones building the best systems of control around it. In the next few years, the real advantage will not come from speed alone. It will come from knowing what can be trusted, what must be checked, what needs a human decision, and what should never be delegated at all.</p><p>That is the part of this shift the industry still understates. Code generation is improving fast. Confidence is not. Until we build stronger standards for traceability, review, accountability, and evidence, agent-driven SDLC will remain powerful, useful, and fundamentally unstable. The teams that understand this early will not just ship more. They will ship with fewer illusions.</p><p>The future belongs to the teams that can prove their software deserves trust, not just produce it faster.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Hybrid Search in Quarkus: Full-Text and Vector Together]]></title><description><![CDATA[Build a product search in Java that handles exact terms, semantic meaning, and real-world relevance with PostgreSQL, Elasticsearch, Hibernate Search, and local embeddings.]]></description><link>https://www.the-main-thread.com/p/full-text-vector-hybrid-search-quarkus-java</link><guid isPermaLink="false">https://www.the-main-thread.com/p/full-text-vector-hybrid-search-quarkus-java</guid><dc:creator><![CDATA[Markus 
Eisele]]></dc:creator><pubDate>Mon, 06 Apr 2026 06:08:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b1426964-53db-48fe-9b6f-4b619b760f26_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most developers add search late. You ship a text box. Maybe a <code>LIKE</code> query. Maybe PostgreSQL full-text when the complaints get loud.</p><p>That works until the words diverge. The user types &#8220;comfortable running shoes.&#8221; The catalog says &#8220;ergonomic athletic footwear.&#8221; The rows exist. The vocabulary does not match.</p><p>What happens next? Many teams picture a big stack: a hosted vector database, a separate search cluster, a cloud embedding API, and weeks of glue. What we build instead is leaner but still concrete: <strong><a href="https://quarkus.io/">Quarkus</a></strong>, <strong>PostgreSQL with </strong><code>pgvector</code> (catalog rows and <code>vector</code> columns via <strong><a href="https://quarkus.io/guides/hibernate-orm-panache">Hibernate ORM</a></strong><a href="https://quarkus.io/guides/hibernate-orm-panache"> and </a><strong><a href="https://quarkus.io/guides/hibernate-orm-panache">Panache</a></strong>), <strong>Hibernate Search on Elasticsearch</strong> for lexical and kNN search in the index, and <strong><a href="https://docs.quarkiverse.io/quarkus-langchain4j/dev/index.html">Quarkus LangChain4j</a></strong> with a <strong>local ONNX</strong> model so embeddings never leave the process. In dev, Quarkus Dev Services typically gives you both PostgreSQL and Elasticsearch. You still run two data stores, but not a separate search platform project on top.</p><p>We connect all three search styles in one app and keep an eye on where each one breaks. Full-text search is fast and deterministic. It struggles with synonyms and paraphrases. 
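</p><p>That synonym gap is easy to reproduce without a database. The sketch below is a hypothetical stand-in for the <code>LIKE</code> approach from the introduction, not code from this project:</p>

```java
import java.util.List;

// Hypothetical illustration of the vocabulary-mismatch problem: a naive
// substring match finds nothing once the user's words and the catalog's
// words diverge. Not code from this project.
public class NaiveSearch {

    // Returns true when any query term literally occurs in the description,
    // roughly what a per-term WHERE description ILIKE '%term%' would do.
    static boolean likeMatch(String description, String query) {
        String haystack = description.toLowerCase();
        for (String term : query.toLowerCase().split("\\s+")) {
            if (haystack.contains(term)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> catalog = List.of("Ergonomic athletic footwear with cushioned sole");
        // The row exists, the vocabulary does not match: prints false.
        System.out.println(likeMatch(catalog.get(0), "comfortable running shoes"));
    }
}
```

<p>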
Vector search embeds the query and asks the index for the <strong>k</strong> closest document vectors by distance in embedding space (<strong>kNN</strong>, k-nearest neighbors). You rely on that when literal term overlap is not enough. It is still weak on product codes, short jargon, and anything that only works as an exact string match. Hybrid search mixes lexical scoring with that vector signal. You pay for embedding work on every vector or hybrid query.</p><p>Why does this matter in production? If user language and catalog language do not match, results look random. The implementation can still be &#8220;correct.&#8221; Search issues hurt because they look like bad content, bad relevance, and bad UX at the same time. Users rarely say &#8220;fix the ranker.&#8221; They stop trusting the search box.</p><p>We implement full-text, vector, and hybrid as three REST endpoints in the same service so you can compare behavior without maintaining three demos. When you finish the steps, you have a working catalog search and a simple way to pick a pattern for a given query style.</p><h2><strong>Prerequisites</strong></h2><p>You need a recent Java and Quarkus setup, and you should already be comfortable reading a Panache entity, a REST resource, and basic Hibernate annotations. We are not spending time on Java installation or IDE setup. We are using Podman-friendly Dev Services, a local embedding model, and plain PostgreSQL.</p><ul><li><p>Java 21 or newer</p></li><li><p>Maven 3.9.6 or newer</p></li><li><p>Podman or Docker for Dev Services</p></li><li><p>Basic understanding of JPA and REST endpoints</p></li><li><p>Basic understanding of PostgreSQL</p></li></ul><h2><strong>Project Setup</strong></h2><p>Create the project or grab the <a href="http://product-search">working example from my Github repository</a>:</p><pre><code><code>mvn io.quarkus.platform:quarkus-maven-plugin:create \
  -DprojectGroupId=org.acme \
  -DprojectArtifactId=product-search \
  -Dextensions="hibernate-orm-panache,jdbc-postgresql,rest-jackson,quarkus-langchain4j-core,quarkus-caffeine" \
  -DnoCode
cd product-search</code></code></pre><p>Add the search and vector dependencies to <code>pom.xml</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;xml&quot;,&quot;nodeId&quot;:&quot;2e71541e-ed97-47ee-a368-bb34b71c9dd9&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-xml">        &lt;dependency&gt;
            &lt;groupId&gt;io.quarkus&lt;/groupId&gt;
            &lt;artifactId&gt;quarkus-hibernate-search-orm-elasticsearch&lt;/artifactId&gt;
        &lt;/dependency&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;org.hibernate.orm&lt;/groupId&gt;
            &lt;artifactId&gt;hibernate-vector&lt;/artifactId&gt;
        &lt;/dependency&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;dev.langchain4j&lt;/groupId&gt;
            &lt;artifactId&gt;langchain4j-embeddings-bge-small-en-q&lt;/artifactId&gt;
        &lt;/dependency&gt;</code></pre></div><p><code>hibernate-processor</code> generates the JPA static metamodel (<code>Product_</code>) at compile time. Search code then uses a small <code>ProductIndexFields</code> class: most field names reuse the generated constants from <code>Product_</code> (for example <code>Product_.NAME</code>), but the extra Elasticsearch sort field <code>name_sort</code> stays a plain string that must match <code>@KeywordField(name = "name_sort")</code> on <code>Product.name</code>. That way the REST resource does not scatter raw index paths, and renames show up when you recompile.</p><p>Add a property for the processor version (keep it aligned with Hibernate ORM in the Quarkus BOM when you upgrade the platform) and register the processor on <code>maven-compiler-plugin</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;xml&quot;,&quot;nodeId&quot;:&quot;6d83e3ad-1d5a-4dc0-b197-3a6980852b0a&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-xml"> &lt;properties&gt;
    &lt;!-- Keep aligned with Hibernate ORM version from quarkus-bom (see dependencyManagement). --&gt;
    &lt;hibernate.orm.version&gt;7.2.6.Final&lt;/hibernate.orm.version&gt;
&lt;/properties&gt;</code></pre></div><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;xml&quot;,&quot;nodeId&quot;:&quot;880f2094-80fe-45ce-a2ea-f5db296b765f&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-xml"> &lt;plugin&gt;
                &lt;artifactId&gt;maven-compiler-plugin&lt;/artifactId&gt;
                &lt;version&gt;${compiler-plugin.version}&lt;/version&gt;
                &lt;configuration&gt;
                    &lt;parameters&gt;true&lt;/parameters&gt;
                    &lt;annotationProcessorPaths&gt;
                        &lt;path&gt;
                            &lt;groupId&gt;org.hibernate.orm&lt;/groupId&gt;
                            &lt;artifactId&gt;hibernate-processor&lt;/artifactId&gt;
                            &lt;version&gt;${hibernate.orm.version}&lt;/version&gt;
                        &lt;/path&gt;
                    &lt;/annotationProcessorPaths&gt;
                &lt;/configuration&gt;
            &lt;/plugin&gt;</code></pre></div><p>For automated checks, add test dependencies such as <code>quarkus-junit</code> and <code>rest-assured</code> (test scope).</p><p>What we get from each dependency:</p><ul><li><p><code>quarkus-hibernate-orm-panache</code> gives us the entity model and simple persistence</p></li><li><p><code>quarkus-jdbc-postgresql</code> gives us PostgreSQL connectivity and Dev Services</p></li><li><p><code>quarkus-rest-jackson</code> gives us JSON REST endpoints</p></li><li><p><code>quarkus-hibernate-search-orm-elasticsearch</code> gives us full-text indexing and kNN against the Elasticsearch-backed Hibernate Search index</p></li><li><p><code>hibernate-vector</code> maps PostgreSQL <code>vector</code> columns through Hibernate ORM</p></li><li><p><code>io.quarkiverse.langchain4j:quarkus-langchain4j-core</code> plus <code>dev.langchain4j:langchain4j-embeddings-bge-small-en-q</code> integrate LangChain4j and ship a small quantized ONNX embedding model that runs in process without remote API calls</p></li><li><p><code>quarkus-caffeine</code> integrates the Caffeine in-memory cache library for CDI and configuration</p></li><li><p><code>hibernate-processor</code> (provided) runs at compile time and generates the JPA static metamodel (<code>Entity_</code> classes) for type-safe queries and tooling</p></li></ul><p>Elasticsearch handles lexical and vector queries in the Hibernate Search layer. PostgreSQL still holds the canonical <code>vector</code> column for ORM persistence. Those two engines cooperate in one application.</p><h2><strong>Implementation</strong></h2><p>Put configuration first: everything that follows assumes PostgreSQL, Elasticsearch, and the in-process embedding model are wired the same in dev, test, and whatever you deploy to.</p><p>PostgreSQL only understands <code>vector</code> columns after the <code>pgvector</code> extension is installed. 
Dev Services runs an init script as soon as the container starts, <strong>before</strong> Hibernate ORM applies schema management, so the type exists when DDL refers to it. If you skip that ordering, table creation fails with an unknown type, not a mysterious Hibernate bug.</p><p>Hibernate Search talks to Elasticsearch over HTTP. You pin the Elasticsearch <strong>major</strong> version in configuration so the client and the index schema Hibernate Search generates match the server (here, the Elasticsearch instance Dev Services starts in dev and test). For embeddings we stay on the JVM: a packaged ONNX model runs in process, and you point <code>application.properties</code> at the LangChain4j <code>EmbeddingModel</code> implementation class so Quarkus can construct the bean the same way it would any other injectable type.</p><p>Create <code>src/main/resources/vector-init.sql</code> (on the classpath under <code>src/main/resources</code>, so <code>init-script-path</code> resolves it by name):</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;sql&quot;,&quot;nodeId&quot;:&quot;0652c9e0-ec9d-4036-8046-3bdcab98c6b0&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-sql">CREATE EXTENSION IF NOT EXISTS vector;</code></pre></div><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;2fa97bb2-3461-4831-bc52-2fa2507d8ca7&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext"># PostgreSQL with pgvector (entity storage for vectors; kNN is served by Hibernate Search backend)
quarkus.datasource.db-kind=postgresql
quarkus.datasource.devservices.image-name=docker.io/pgvector/pgvector:pg18
quarkus.datasource.devservices.init-script-path=vector-init.sql

# Hibernate ORM
quarkus.hibernate-orm.schema-management.strategy=drop-and-create
quarkus.hibernate-orm.log.sql=false

# Hibernate Search: Elasticsearch Dev Services in dev/test (Quarkus does not ship a Lucene ORM extension)
quarkus.hibernate-search-orm.elasticsearch.version=9
quarkus.hibernate-search-orm.schema-management.strategy=drop-and-create-and-drop
quarkus.hibernate-search-orm.indexing.plan.synchronization.strategy=sync

# Local embedding model (in-process ONNX via LangChain4j)
quarkus.langchain4j.embedding-model.provider=dev.langchain4j.model.embedding.onnx.bgesmallenq.BgeSmallEnQuantizedEmbeddingModel</code></pre></div><p>Together, <code>drop-and-create</code> on Hibernate ORM and <code>drop-and-create-and-drop</code> on <a href="https://quarkus.io/guides/hibernate-search-orm-elasticsearch#configuration-reference">Hibernate Search</a> <strong>tear down and recreate</strong> PostgreSQL tables and the Elasticsearch-backed index whenever the app starts. That makes local runs repeatable and saves you from half-stale mappings while you edit entities. It also throws away data on every restart, which is wrong for a real catalog. For production, move PostgreSQL changes through migration tooling and switch Hibernate Search to something non-destructive for routine deploys, for example <code>create-or-validate</code>, unless you deliberately accept wiping the index on startup.</p><p>Define the entity next. Lexical fields, keyword filters, and the embedding vector all live on the same <code>Product</code> type.</p><p>Create <code>src/main/java/org/acme/search/model/Product.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;13f168fb-d7d5-4d06-83d3-ad52822c4412&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.search.model;

import org.hibernate.annotations.Array;
import org.hibernate.annotations.JdbcTypeCode;
import org.hibernate.search.engine.backend.types.Sortable;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.KeywordField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.VectorField;
import org.hibernate.type.SqlTypes;

import com.fasterxml.jackson.annotation.JsonIgnore;

import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;

@Entity
@Indexed
public class Product extends PanacheEntity {

    @FullTextField(analyzer = "english")
    @KeywordField(name = "name_sort", sortable = Sortable.YES, normalizer = "lowercase")
    public String name;

    @FullTextField(analyzer = "english")
    @Column(columnDefinition = "text")
    public String description;

    @KeywordField
    public String category;

    @JsonIgnore
    @VectorField(dimension = 384)
    @JdbcTypeCode(SqlTypes.VECTOR)
    @Array(length = 384)
    public float[] descriptionEmbedding;

    public Product() {
    }

    public Product(String name, String description, String category) {
        this.name = name;
        this.description = description;
        this.category = category;
    }
}</code></pre></div><p><code>Product</code> carries three search behaviors at once. <code>name</code> and <code>description</code> go to Elasticsearch for full-text. <code>category</code> is a keyword field for exact filters. <code>descriptionEmbedding</code> is both a PostgreSQL <code>vector(384)</code> column and an Elasticsearch vector field for kNN. <code>@JsonIgnore</code> keeps big float arrays out of JSON (the verification <code>curl</code> examples show <code>descriptionEmbedding</code> as <code>null</code> or omit the field).</p><p>One hard rule: vector dimension must match the embedding model. This stack uses <code>bge-small-en-q</code>, which outputs 384 dimensions. If you swap models and the size changes, schema and index mapping must change too. </p><p>Add a dedicated service for embeddings on the write path. Do not hide that inside the JAX-RS resource: imports, admin tasks, and tests also create rows, and one place for <code>embed &#8594; persist</code> keeps behavior obvious.</p><p>Create <code>src/main/java/org/acme/search/service/ProductService.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;f9f3b0d2-ac35-4987-ad0f-54598600104b&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.search.service;

import org.acme.search.model.Product;

import dev.langchain4j.model.embedding.EmbeddingModel;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.transaction.Transactional;

@ApplicationScoped
public class ProductService {

    @Inject
    EmbeddingModel embeddingModel;

    @Transactional
    public void createProduct(String name, String description, String category) {
        Product product = new Product(name, description, category);
        product.descriptionEmbedding = embeddingModel
                .embed(description)
                .content()
                .vector();
        product.persist();
    }
}</code></pre></div><p>On each save we embed the description once and store the vector on the row before anyone searches. Reads stay cheap; writes do more work. For a catalog that pattern is normal: far more searches than inserts.</p><p>Query paths should not pay full embedding cost on every identical string. Add a small cache that wraps <code>EmbeddingModel</code> and returns <strong>copies</strong> of the <code>float[]</code> so callers cannot mutate vectors sitting in the cache.</p><p>Create <code>src/main/java/org/acme/search/service/QueryEmbeddingService.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;e54175eb-7415-432f-bb6b-6de27e98a6cc&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.search.service;

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

import dev.langchain4j.model.embedding.EmbeddingModel;
import jakarta.annotation.PostConstruct;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class QueryEmbeddingService {

    @Inject
    EmbeddingModel embeddingModel;

    private Cache&lt;String, float[]&gt; cache;

    @PostConstruct
    void init() {
        cache = Caffeine.newBuilder()
                .maximumSize(1000)
                .expireAfterWrite(30, TimeUnit.MINUTES)
                .build();
    }

    public float[] embed(String query) {
        float[] stored = cache.get(query, key -&gt; {
            float[] vector = embeddingModel.embed(key).content().vector();
            return Arrays.copyOf(vector, vector.length);
        });
        return Arrays.copyOf(stored, stored.length);
    }
}</code></pre></div><p>Configure how Elasticsearch tokenizes catalog text. <strong>Normalization</strong> folds text into comparable tokens (ASCII folding and lowercasing so <code>Caf&#233;</code> and <code>cafe</code> are not different keys). <strong>Stemming</strong> trims suffixes so related forms share one stem (Porter in the snippet below, so <code>running</code> and <code>run</code> can hit the same postings). Without that chain, full-text is only slightly better than <code>LIKE</code>.</p><p>Create <code>src/main/java/org/acme/search/config/SearchAnalysisConfig.java</code> using the Quarkus qualifier <code>io.quarkus.hibernate.search.orm.elasticsearch.SearchExtension</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;3ef832ef-e3bf-4f46-8bf8-a2efa464778e&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.search.config;

import org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurationContext;
import org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurer;

import io.quarkus.hibernate.search.orm.elasticsearch.SearchExtension;

@SearchExtension
public class SearchAnalysisConfig implements ElasticsearchAnalysisConfigurer {

    @Override
    public void configure(ElasticsearchAnalysisConfigurationContext context) {
        context.analyzer("english").custom()
                .tokenizer("standard")
                .tokenFilters("asciifolding", "lowercase", "porter_stem");

        context.normalizer("lowercase").custom()
                .tokenFilters("asciifolding", "lowercase");
    }
}</code></pre></div><p>That setup lowercases, normalizes, and runs Porter stemming. So <code>shoes</code> can match <code>shoe</code> and <code>running</code> can match <code>run</code>. It still does not know that <code>footwear</code> and <code>shoes</code> mean the same thing in the world. That is why the entity also keeps vectors.</p><p><code>SearchResource</code> exposes <code>/search/fulltext</code>, <code>/search/vector</code>, and <code>/search/hybrid</code> next to each other and injects <code>QueryEmbeddingService</code> for the two vector paths.</p><p><strong>Startup and mass indexing:</strong> do not mark the <code>StartupEvent</code> observer <code>@Transactional</code> if it ends with <code>massIndexer().startAndWait()</code>. When the whole observer runs in one transaction, seed inserts are not yet committed, so the mass indexer can see <strong>zero</strong> entities and build an empty index. Either drop <code>@Transactional</code> on the observer (each <code>createProduct</code> still runs in its own transaction) or reindex after commit.</p><p>Create <code>src/main/java/org/acme/search/model/ProductIndexFields.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;160fbdb0-f398-4f60-8711-ee9bff001d4c&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.search.model;

import org.hibernate.search.mapper.pojo.mapping.definition.annotation.KeywordField;

/**
 * Hibernate Search index field paths for {@link Product}. Property-backed names
 * are delegated to the string constants generated on {@link Product_} by the
 * Hibernate processor; {@link #NAME_SORT} must stay aligned with
 * {@link KeywordField#name()} on {@link Product#name}.
 */
public final class ProductIndexFields {

    private ProductIndexFields() {
    }

    public static final String NAME = Product_.NAME;
    public static final String DESCRIPTION = Product_.DESCRIPTION;
    public static final String CATEGORY = Product_.CATEGORY;
    public static final String DESCRIPTION_EMBEDDING = Product_.DESCRIPTION_EMBEDDING;

    public static final String NAME_SORT = "name_sort";
}</code></pre></div><p>Create <code>src/main/java/org/acme/search/SearchResource.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;f7c41666-153a-4e0a-b5fd-66cb582b0255&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.search;

import java.util.List;

import org.acme.search.model.Product;
import org.acme.search.model.ProductIndexFields;
import org.acme.search.service.ProductService;
import org.acme.search.service.QueryEmbeddingService;
import org.hibernate.search.mapper.orm.mapping.SearchMapping;
import org.hibernate.search.mapper.orm.session.SearchSession;
import org.jboss.resteasy.reactive.RestQuery;

import io.quarkus.runtime.StartupEvent;
import jakarta.enterprise.event.Observes;
import jakarta.inject.Inject;
import jakarta.transaction.Transactional;
import jakarta.ws.rs.DefaultValue;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/search")
@Produces(MediaType.APPLICATION_JSON)
public class SearchResource {

        @Inject
        SearchSession searchSession;

        @Inject
        SearchMapping searchMapping;

        @Inject
        QueryEmbeddingService queryEmbeddingService;

        @Inject
        ProductService productService;

        @GET
        @Path("/fulltext")
        @Transactional
        public List&lt;Product&gt; fulltext(@RestQuery String q, @RestQuery @DefaultValue("10") int size) {
                return searchSession.search(Product.class)
                                .where(f -&gt; q == null || q.isBlank()
                                                ? f.matchAll()
                                                : f.simpleQueryString()
                                                                .fields(ProductIndexFields.NAME,
                                                                                ProductIndexFields.DESCRIPTION)
                                                                .matching(q))
                                .sort(f -&gt; f.field(ProductIndexFields.NAME_SORT).asc())
                                .fetchHits(size);
        }

        @GET
        @Path("/vector")
        @Transactional
        public List&lt;Product&gt; vector(@RestQuery String q, @RestQuery @DefaultValue("5") int k) {
                if (q == null || q.isBlank()) {
                        return List.of();
                }

                float[] queryVector = queryEmbeddingService.embed(q);

                return searchSession.search(Product.class)
                                .where(f -&gt; f.knn(k).field(ProductIndexFields.DESCRIPTION_EMBEDDING)
                                                .matching(queryVector))
                                .fetchHits(k);
        }

        @GET
        @Path("/hybrid")
        @Transactional
        public List&lt;Product&gt; hybrid(@RestQuery String q,
                        @RestQuery @DefaultValue("10") int size,
                        @RestQuery @DefaultValue("5") int k) {
                if (q == null || q.isBlank()) {
                        return List.of();
                }

                float[] queryVector = queryEmbeddingService.embed(q);

                return searchSession.search(Product.class)
                                .where(f -&gt; f.bool()
                                                .should(f.simpleQueryString()
                                                                .fields(ProductIndexFields.NAME,
                                                                                ProductIndexFields.DESCRIPTION)
                                                                .matching(q))
                                                .should(f.knn(k).field(ProductIndexFields.DESCRIPTION_EMBEDDING)
                                                                .matching(queryVector)))
                                .fetchHits(size);
        }

        @GET
        @Path("/hybrid/filtered")
        @Transactional
        public List&lt;Product&gt; hybridFiltered(@RestQuery String q,
                        @RestQuery String category,
                        @RestQuery @DefaultValue("10") int size,
                        @RestQuery @DefaultValue("5") int k) {
                if (q == null || q.isBlank() || category == null || category.isBlank()) {
                        return List.of();
                }

                float[] queryVector = queryEmbeddingService.embed(q);

                return searchSession.search(Product.class)
                                .where(f -&gt; f.bool()
                                                .must(f.match().field(ProductIndexFields.CATEGORY).matching(category))
                                                .should(f.simpleQueryString()
                                                                .fields(ProductIndexFields.NAME,
                                                                                ProductIndexFields.DESCRIPTION)
                                                                .matching(q))
                                                .should(f.knn(k).field(ProductIndexFields.DESCRIPTION_EMBEDDING)
                                                                .matching(queryVector)))
                                .fetchHits(size);
        }

        void onStart(@Observes StartupEvent event) throws InterruptedException {
                if (Product.count() == 0) {
                        seedProducts();
                }

                searchMapping.scope(Product.class)
                                .massIndexer()
                                .startAndWait();
        }

        private void seedProducts() {
                productService.createProduct(
                                "Trail Running Shoe",
                                "Lightweight athletic footwear designed for off-road running on dirt and gravel. Aggressive grip, breathable mesh upper, cushioned midsole.",
                                "footwear");
                productService.createProduct(
                                "Leather Oxford",
                                "Classic formal shoe in full-grain leather. Brogue detailing, leather sole, Goodyear welt construction.",
                                "footwear");
                productService.createProduct(
                                "Waterproof Hiking Boot",
                                "Ankle-height boot with waterproof membrane, Vibram outsole, and padded collar. Built for multi-day trekking.",
                                "footwear");
                productService.createProduct(
                                "Canvas Sneaker",
                                "Casual low-top sneaker in cotton canvas. Rubber vulcanized sole, available in twelve colors.",
                                "footwear");

                productService.createProduct(
                                "Noise-Cancelling Headphones",
                                "Over-ear headphones with active noise cancellation, 30-hour battery life, and foldable design for travel.",
                                "electronics");
                productService.createProduct(
                                "Mechanical Keyboard",
                                "Tenkeyless keyboard with Cherry MX Brown switches. PBT keycaps, USB-C detachable cable, per-key RGB lighting.",
                                "electronics");
                productService.createProduct(
                                "Portable Charger",
                                "20,000 mAh power bank with 65W USB-C Power Delivery. Charges a laptop from 0 to 80 percent in under an hour.",
                                "electronics");

                productService.createProduct(
                                "Ultralight Backpack",
                                "35-litre hiking pack weighing 680 grams. Frameless design, roll-top closure, hipbelt with small pockets.",
                                "outdoor");
                productService.createProduct(
                                "Sleeping Bag",
                                "Down-filled mummy bag rated to minus ten Celsius. 850-fill power, YKK zip, water-resistant outer shell.",
                                "outdoor");
                productService.createProduct(
                                "Trekking Poles",
                                "Aluminium collapsible poles with cork grips and carbide tips. Folds to 38 cm for pack attachment.",
                                "outdoor");

                productService.createProduct(
                                "Cast Iron Skillet",
                                "Pre-seasoned 12-inch cast iron pan. Suitable for induction, gas, electric, and open fire. Oven-safe to 260 Celsius.",
                                "kitchen");
                productService.createProduct(
                                "Pour-Over Coffee Dripper",
                                "Ceramic cone dripper for manual filter coffee. Compatible with Melitta No.4 filters. Sits directly on a mug or carafe.",
                                "kitchen");
                productService.createProduct(
                                "Chef's Knife",
                                "8-inch high-carbon stainless steel knife. Full tang, triple-riveted handle, 58 HRC hardness. Suitable for chopping, slicing, and dicing.",
                                "kitchen");
        }
}</code></pre></div><p>On <strong>hybrid</strong> endpoints, <code>size</code> controls how many hits Hibernate Search returns after the bool query, while <code>k</code> (query parameter, default <code>5</code>) controls the kNN neighbor count inside the vector <code>should</code> clause, with the same default as <code>/search/vector</code>. You can override <code>k</code> per request (for example <code>.../hybrid?q=...&amp;size=10&amp;k=8</code>) when you want more vector candidates without changing the final hit count.</p><p><code>/search/fulltext</code> is classic lexical search: tokenize <code>q</code>, match <code>name</code> and <code>description</code>, score by term relevance. It is easy to reason about when user words and catalog words overlap. If <code>q</code> is empty, the handler returns up to <code>size</code> rows sorted by <code>ProductIndexFields.NAME_SORT</code> (<code>name_sort</code> in the index) using <code>matchAll()</code>, which is handy for smoke tests.</p><p>Mass indexing uses <code>searchMapping.scope(Product.class)</code> so only the <code>Product</code> index is rebuilt; <code>scope(Object.class)</code> would index every mapped <code>@Indexed</code> type and is easy to misuse as the model grows.</p><p>Each <code>/search/vector</code> call embeds <code>q</code> and Hibernate Search runs kNN on the vectors in Elasticsearch. That is why <code>camping+gear</code> can return <code>Sleeping Bag</code> or <code>Trekking Poles</code> even when that phrase is missing from the stored text. Each request pays for inference, and short jargon or SKUs can still lose to a strong keyword hit.</p><p><code>/search/hybrid</code> keeps full-text and kNN in the same bool query as two <code>should</code> clauses, so keyword strength and embedding neighbors influence one ranked list. You are not forced to bet the whole product on BM25-only or vector-only. They fail in different corners. 
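If a single bool query ever becomes hard to balance, one well-known alternative is to run the lexical and vector searches separately and merge the two rankings with reciprocal rank fusion. The endpoints above do not do this; the following is a plain-Java sketch of the merge step only, with the conventional constant 60:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RrfFusion {

    /**
     * Reciprocal rank fusion: score(id) = sum over both lists of 1 / (k + rank),
     * with 1-based ranks. Items ranked high in either list rise to the top.
     */
    static List<Long> fuse(List<Long> lexicalIds, List<Long> vectorIds, int k) {
        Map<Long, Double> scores = new HashMap<>();
        accumulate(scores, lexicalIds, k);
        accumulate(scores, vectorIds, k);
        List<Long> merged = new ArrayList<>(scores.keySet());
        merged.sort((a, b) -> Double.compare(scores.get(b), scores.get(a)));
        return merged;
    }

    private static void accumulate(Map<Long, Double> scores, List<Long> ids, int k) {
        for (int rank = 0; rank < ids.size(); rank++) {
            // rank + 1 makes the rank 1-based, as in the standard formulation
            scores.merge(ids.get(rank), 1.0 / (k + rank + 1), Double::sum);
        }
    }
}
```

A document ranked first in both lists scores 2/(k+1), while one that appears in only a single list scores at most 1/(k+1), so agreement between the two retrievers dominates the merged order.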
Combining them is usually what a catalog search needs, even if the blend is messier to balance.</p><p>The seed list is written so shopper wording and product copy rarely use the same tokens for the same SKU. The verification curls below should not return the same ordering for every query across the three modes, which is the reason to keep all endpoints in one small service.</p><h2><strong>Configuration</strong></h2><p>The <code>application.properties</code> from <strong>Implementation</strong> wipes PostgreSQL and the Elasticsearch-backed index whenever the process starts. This section contrasts that with settings where a real catalog keeps data across restarts. It also covers how Elasticsearch scales vector search, optional <code>@VectorField</code> graph attributes, and a sample production-style property list.</p><p>You already store <code>descriptionEmbedding</code> with <code>@VectorField(dimension = 384)</code>, and <code>/search/vector</code> and <code>/search/hybrid</code> call kNN through Hibernate Search on Elasticsearch. With only the seed rows it can still feel like the engine compares the query vector to every stored vector. When the catalog grows, that gets too slow, so Elasticsearch keeps an <strong>approximate nearest-neighbor</strong> structure over the document vectors instead of scanning everything on each query. Docs usually call that graph style <strong>HNSW</strong> (Hierarchical Navigable Small World): links between vectors so search skips most points and returns neighbors fast, sometimes missing the single closest point. Hibernate Search can map some graph-related attributes on <code>@VectorField</code> when Elasticsearch supports them.</p><p><code>Product</code> still maps <code>@VectorField</code> with <code>dimension</code> only. 
When your stack exposes them, you can add attributes such as <code>m</code> and <code>efConstruction</code> (verify names and support for your Hibernate Search and Elasticsearch releases):</p><pre><code><code>@VectorField(
        dimension = 384,
        m = 24,
        efConstruction = 200
)
@JdbcTypeCode(SqlTypes.VECTOR)
@Array(length = 384)
public float[] descriptionEmbedding;</code></code></pre><p><code>m</code> and <code>efConstruction</code> matter where the backend builds an HNSW graph. On PostgreSQL they matter when you define an HNSW index in SQL over <code>pgvector</code>. Here, <code>/search/vector</code> and <code>/search/hybrid</code> resolve kNN in <strong>Elasticsearch</strong> through Hibernate Search, not through PostgreSQL&#8217;s vector operators, so the Java snippet amounts to optional extra settings on the Elasticsearch side, not something you need for the earlier steps.</p><pre><code><code>quarkus.datasource.db-kind=postgresql
quarkus.datasource.devservices.image-name=docker.io/pgvector/pgvector:pg18
quarkus.datasource.devservices.init-script-path=vector-init.sql

quarkus.hibernate-orm.schema-management.strategy=validate
quarkus.hibernate-orm.log.sql=false

quarkus.hibernate-search-orm.elasticsearch.version=9
quarkus.hibernate-search-orm.schema-management.strategy=create-or-validate
quarkus.hibernate-search-orm.indexing.plan.synchronization.strategy=sync

quarkus.langchain4j.embedding-model.provider=dev.langchain4j.model.embedding.onnx.bgesmallenq.BgeSmallEnQuantizedEmbeddingModel</code></code></pre><p><code>validate</code> for ORM and <code>create-or-validate</code> for Hibernate Search mean a normal restart does not drop PostgreSQL tables or throw away the Elasticsearch index. The first property block rebuilt schema and index on every boot so you could iterate from a clean slate; when the catalog must persist, you move toward values like these.</p><p><strong>PostgreSQL </strong><code>hnsw.ef_search</code><strong>:</strong> if you run kNN <strong>directly in PostgreSQL</strong> (native SQL over <code>pgvector</code>), you can adjust recall with <code>SET LOCAL hnsw.ef_search = 120</code> on the JDBC connection before the query. <code>/search/vector</code><strong> and </strong><code>/search/hybrid</code><strong> do not use that path:</strong> Hibernate Search sends vector predicates to <strong>Elasticsearch</strong>, so that PostgreSQL session setting does nothing for those endpoints. Configure kNN where the queries actually run (here, Elasticsearch), or change the architecture if you want kNN inside the database.</p><h2><strong>Production Hardening</strong></h2><h3><strong>What happens under load</strong></h3><p>Vector and hybrid queries hit the database and, on each request, run query-time embedding with the local ONNX model. If the search box turns into a high-volume typeahead endpoint, that work adds up.</p><p>The <code>QueryEmbeddingService</code> from <strong>Implementation</strong> caches query strings so identical text does not re-run the ONNX model. It does not fix rare phrasing or index work, but real search traffic repeats enough that the cache often helps a lot.</p><p>If caching is not enough, you handle search like any other hot read path: rate limits, async fan-out, or a dedicated embedding service. 
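A gateway-level limiter or SmallRye Fault Tolerance is where most teams end up; purely to make the idea concrete, here is a minimal fixed-window counter in plain Java (class name and limits are illustrative, not part of the project above):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

/** Fixed-window rate limiter: at most maxRequests per windowMillis. */
public class SearchRateLimiter {

    private final int maxRequests;
    private final long windowMillis;
    private final AtomicLong windowStart = new AtomicLong(System.currentTimeMillis());
    private final AtomicInteger count = new AtomicInteger();

    public SearchRateLimiter(int maxRequests, long windowMillis) {
        this.maxRequests = maxRequests;
        this.windowMillis = windowMillis;
    }

    /** Returns false when the caller should get a 429 instead of a search. */
    public boolean tryAcquire() {
        long now = System.currentTimeMillis();
        long start = windowStart.get();
        if (now - start >= windowMillis && windowStart.compareAndSet(start, now)) {
            count.set(0); // new window: reset the counter
        }
        return count.incrementAndGet() <= maxRequests;
    }
}
```

Fixed windows allow short bursts at window boundaries; if that matters, a token bucket or the dedicated library is the better tool.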
You can leave that out while you experiment; live traffic usually cannot.</p><h3><strong>Concurrency and correctness guarantees</strong></h3><p>Ranking scores can be fuzzy; access rules cannot. Category filters, tenants, visibility, and soft deletes need hard edges. Teams often over-focus on hybrid relevance and forget that <code>category</code> is a <code>@KeywordField</code>, and the filtered hybrid route puts <code>ProductIndexFields.CATEGORY</code> in a <code>must</code> clause so the filter is strict while the <code>should</code> clauses handle score. Add tenant IDs or publication flags the same way. Semantic similarity should not decide whether a row is allowed to show at all.</p><h3><strong>Operational failure modes</strong></h3><p>First boot downloads the ONNX model and builds vectors for the seed rows. That is acceptable on a laptop. In production, slow startup because of model download, vector generation, and index build makes deploys hard to reason about.</p><p>Ship the model with the app or bake it into the image. Compute document vectors on ingest, not on the first customer query. Plan a reindex when the embedding model changes. Search follows the same rule as the rest of the system: keep heavy one-time work off the hot request path.</p><h3><strong>Security considerations</strong></h3><p>Search endpoints are easy to abuse because they look harmless. A single long natural language query that triggers embedding inference and kNN (k-nearest neighbors) lookup is more expensive than a normal keyword query. A flood of those requests becomes a resource exhaustion problem.</p><p>Put reasonable limits on query length. Add rate limiting if the endpoint is public. Log slow queries. Don&#8217;t feed raw user input into custom query syntax unless you understand exactly how that parser behaves. <code>simpleQueryString()</code> is a good default because it is intentionally safer than more permissive query parsers. 
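A length check can be as dull as the following sketch (the helper name and the 256-character cap are illustrative, not from the project above):

```java
/** Rejects queries that would make embedding inference disproportionately expensive. */
public final class QueryGuard {

    static final int MAX_QUERY_LENGTH = 256;

    private QueryGuard() {
    }

    /** Returns a trimmed query, or null when the request should be refused outright. */
    public static String sanitize(String q) {
        if (q == null) {
            return null;
        }
        String trimmed = q.strip();
        if (trimmed.isEmpty() || trimmed.length() > MAX_QUERY_LENGTH) {
            return null;
        }
        return trimmed;
    }
}
```

Call something like this first in each handler and answer null with an empty list or a 400, so oversized input never reaches the embedding model.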
You still need input length checks and abuse controls.</p><h2><strong>Verification</strong></h2><p>Start the application:</p><pre><code><code>./mvnw quarkus:dev</code></code></pre><p>On first startup, Dev Services pulls the PostgreSQL (<code>pgvector</code>) image and an <strong>Elasticsearch</strong> image. Expect a few minutes the first time: image pulls, ONNX model download, indexing. Hibernate ORM creates the schema (after <code>vector-init.sql</code> enables the extension), the local embedding model loads, seed data is inserted, embeddings are generated, and Hibernate Search builds or refreshes its index.</p><p>Check all three search modes.</p><h3><strong>Query one: lexical match</strong></h3><pre><code><code>curl "http://localhost:8080/search/fulltext?q=shoes"</code></code></pre><p>Expected behavior: you get footwear products whose indexed fields contain <code>shoe</code> or stemmed variants.</p><p>Typical result shape:</p><pre><code><code>[
  {
    "id": 2,
    "name": "Leather Oxford",
    "description": "Classic formal shoe in full-grain leather. Brogue detailing, leather sole, Goodyear welt construction.",
    "category": "footwear"
  },
  {
    "id": 1,
    "name": "Trail Running Shoe",
    "description": "Lightweight athletic footwear designed for off-road running on dirt and gravel. Aggressive grip, breathable mesh upper, cushioned midsole.",
    "category": "footwear"
  }
]</code></code></pre><p>You should see stemming and analysis at work. Full-text works best when query words and catalog words overlap.</p><h3><strong>Query two: semantic language</strong></h3><pre><code><code>curl "http://localhost:8080/search/vector?q=comfortable+footwear+for+long+walks"</code></code></pre><p>Expected behavior: you get hiking- and walking-related footwear even when those exact words are missing from the descriptions.</p><p>Typical result shape:</p><pre><code><code>[
  {
    "id": 2,
    "name": "Leather Oxford",
    "description": "Classic formal shoe in full-grain leather. Brogue detailing, leather sole, Goodyear welt construction.",
    "category": "footwear"
  },
  {
    "id": 1,
    "name": "Trail Running Shoe",
    "description": "Lightweight athletic footwear designed for off-road running on dirt and gravel. Aggressive grip, breathable mesh upper, cushioned midsole.",
    "category": "footwear"
  },
  {
    "id": 5,
    "name": "Noise-Cancelling Headphones",
    "description": "Over-ear headphones with active noise cancellation, 30-hour battery life, and foldable design for travel.",
    "category": "electronics"
  }
]</code></code></pre><p>You should see meaning, not literal string overlap, drive the ranking.</p><h3><strong>Query three: exact technical term</strong></h3><pre><code><code>curl "http://localhost:8080/search/hybrid?q=MX+Brown"</code></code></pre><p>Expected behavior: <code>Mechanical Keyboard</code> appears at or near the top because the lexical match is strong and the hybrid query preserves that signal. You can append <code>&amp;k=&#8230;</code> to change the kNN neighbor count (default <code>5</code>); <code>size</code> still caps how many hits are returned.</p><p>Typical result shape:</p><pre><code><code>[
  {
    "id": 6,
    "name": "Mechanical Keyboard",
    "description": "Tenkeyless keyboard with Cherry MX Brown switches. PBT keycaps, USB-C detachable cable, per-key RGB lighting.",
    "category": "electronics"
  },
  {
    "id": 2,
    "name": "Leather Oxford",
    "description": "Classic formal shoe in full-grain leather. Brogue detailing, leather sole, Goodyear welt construction.",
    "category": "footwear"
  }
]</code></code></pre><p>That case is where vector-only setups often miss: jargon and exact product tokens still need the lexical side.</p><h3><strong>Query four: concept with no lexical overlap</strong></h3><pre><code><code>curl "http://localhost:8080/search/vector?q=camping+gear"</code></code></pre><p>Expected behavior: outdoor products such as <code>Sleeping Bag</code>, <code>Ultralight Backpack</code>, and <code>Trekking Poles</code> appear even though the phrase <code>camping gear</code> does not exist in the stored content.</p><p>That request shows the biggest gap between vector recall and BM25-style full-text.</p><h3><strong>Filtered hybrid search</strong></h3><pre><code><code>curl "http://localhost:8080/search/hybrid/filtered?q=lightweight&amp;category=outdoor"</code></code></pre><p>Expected behavior: only <code>outdoor</code> products are considered, and within that set the most relevant ones rank highest.</p><p>The point is the filter: <code>category=outdoor</code> is strict; ranking only runs inside that slice.</p><h3>Automated Testing</h3><p>The curl commands from the verification section are useful when you write the code. They are not enough once you change mappings, switch embedding models, or tune hybrid queries. Search breaks in subtle ways. The endpoint still returns <code>200</code>, but the wrong product moves to the top, the category filter stops being strict, or an empty query suddenly triggers expensive work.</p><p>For this kind of system, the safest test strategy is layered. Keep a few lightweight integration tests that hit the real HTTP endpoints. 
Then make the assertions focus on behavior that should remain true even when scores and exact ordering move a little.</p><p>Add the test dependencies in <code>pom.xml</code> if they are not there already:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;xml&quot;,&quot;nodeId&quot;:&quot;577ad599-6623-4cae-a491-701dbe5198f8&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-xml">&lt;dependency&gt;
    &lt;groupId&gt;io.quarkus&lt;/groupId&gt;
    &lt;artifactId&gt;quarkus-junit5&lt;/artifactId&gt;
    &lt;scope&gt;test&lt;/scope&gt;
&lt;/dependency&gt;
&lt;dependency&gt;
    &lt;groupId&gt;io.rest-assured&lt;/groupId&gt;
    &lt;artifactId&gt;rest-assured&lt;/artifactId&gt;
    &lt;scope&gt;test&lt;/scope&gt;
&lt;/dependency&gt;</code></pre></div><p>Now create <code>src/test/java/org/acme/search/SearchResourceTest.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;fee7ad67-460e-47f7-8aac-ffc2037a22ad&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme.search;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.hasSize;

@QuarkusTest
class SearchResourceTest {

    @Test
    void fulltextFindsShoeStemming() {
        given()
                .when().get("/search/fulltext?q=shoes")
                .then()
                .statusCode(200)
                // stemming should map "shoes" to "shoe": require at least one
                // shoe product by name, not just any non-empty result list
                .body("name", org.hamcrest.Matchers.hasItem(
                        org.hamcrest.Matchers.containsString("Shoe")));
    }

    @Test
    void vectorFindsSemanticFootwearQuery() {
        given()
                .when().get("/search/vector?q=comfortable+footwear+for+long+walks")
                .then()
                .statusCode(200)
                .body("$", hasSize(greaterThanOrEqualTo(1)));
    }

    @Test
    void hybridFindsMxBrownKeyboard() {
        given()
                .when().get("/search/hybrid?q=MX+Brown")
                .then()
                .statusCode(200)
                // the exact technical term should keep this product on top
                .body("[0].name", org.hamcrest.Matchers.is("Mechanical Keyboard"));
    }

    @Test
    void hybridFilteredRestrictsCategory() {
        given()
                .when().get("/search/hybrid/filtered?q=lightweight&amp;category=outdoor")
                .then()
                .statusCode(200)
                .body("$", hasSize(greaterThanOrEqualTo(1)));
    }

    @Test
    void campingGearSemanticQueryFindsOutdoorProducts() {
        given()
                .when().get("/search/vector?q=camping+gear")
                .then()
                .statusCode(200)
                .body("$", hasSize(greaterThanOrEqualTo(1)));
    }
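
    // Optional stricter variant beyond the size check above: the prose argues
    // the category filter must stay exact, so no product outside the requested
    // category may appear at all. Matchers are fully qualified so the import
    // list above stays untouched.
    @Test
    void hybridFilteredNeverLeaksOtherCategories() {
        given()
                .when().get("/search/hybrid/filtered?q=lightweight&amp;category=outdoor")
                .then()
                .statusCode(200)
                .body("category", org.hamcrest.Matchers.everyItem(
                        org.hamcrest.Matchers.equalTo("outdoor")));
    }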
}
</code></pre></div><p>Run the tests with:</p><pre><code><code>./mvnw test</code></code></pre><p>The <code>fulltextFindsShoeStemming()</code> test checks lexical behavior instead of only response size. We do not require one exact order because analyzers and seed data can shift that a bit, but we do require that at least one shoe-related product is present.</p><p>The vector tests need a different strategy. Semantic search is not deterministic in the same way exact keyword search is. You should not assert the full result list or a fragile score order. What you can assert is that clearly relevant products appear in the hit set for a meaning-based query. That is why <code>vectorFindsSemanticFootwearQuery()</code> and <code>campingGearSemanticQueryFindsOutdoorProducts()</code> check for expected relevance without pretending the ranking is mathematically fixed.</p><p>The hybrid test is stricter on purpose. <code>MX Brown</code> is an exact technical term in the catalog. This is the kind of case where lexical strength should dominate. If <code>Mechanical Keyboard</code> drops from the top result after a refactor, that is worth catching.</p><p>The filtered tests are even more important than ranking checks. Search relevance can be fuzzy. Filters cannot. If <code>category=outdoor</code> allows <code>electronics</code> products to leak into the result, the feature is wrong even if the scores look plausible. This is exactly the kind of bug that slips through when teams only test happy-path search quality.</p><p>A useful next step is to widen testing beyond endpoint smoke checks and treat search quality as something you verify from several angles. Keep the HTTP integration tests from this tutorial for end-to-end behavior, but add small unit tests for helper classes such as the query embedding cache, plus a focused relevance regression suite built on a fixed seed dataset where a handful of important queries must keep returning sensible results over time. 
In larger systems, teams often complement that with offline evaluation sets, performance checks for hot queries, and security-style tests that prove filters like category, tenant, or visibility never leak data across boundaries. That broader approach reflects how search behaves in production: part API contract, part relevance system, and part access-control surface.</p><h2><strong>Conclusion</strong></h2><p>You end up with one service, two data stores in dev (PostgreSQL and Elasticsearch), local embeddings, and Hibernate Search handling both lexical rank and kNN in the index. The useful part is not the three URLs. It is knowing which mode loses on jargon, which on vocabulary mismatch, and where filters must stay exact instead of inside the fuzzy score.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Stop Copying AI Skills: Version IBM Bob Instructions with Maven]]></title><description><![CDATA[Stop copying SKILL.md files between projects. Build Quarkus-focused agent skills once, publish them as Maven artifacts, and load them into IBM Bob in any project.]]></description><link>https://www.the-main-thread.com/p/skillsjars-for-java-package-reusable</link><guid isPermaLink="false">https://www.the-main-thread.com/p/skillsjars-for-java-package-reusable</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Sun, 05 Apr 2026 06:08:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7ced2701-4324-408b-895c-bb96c760b186_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Copy-pasting AI skill files feels harmless when you have one project. 
You drop a <code>SKILL.md</code> into <code>.bob/skills</code>, <a href="https://bob.ibm.com/">IBM Bob</a> starts behaving like it understands <a href="https://quarkus.io/">Quarkus</a>, and you move on. The trouble shows up later: the same skill in five repositories, each with slightly different instructions, commands, and assumptions about your stack. You only notice when two checkouts disagree during the same review.</p><p>Most teams file this under documentation. In practice it behaves like dependency management, and you stop treating it like documentation the day Bob (or any other IDE/Shell combination you are using) starts generating patches you actually merge. Once agent behavior matters for daily work, those instructions sit in your build and delivery path. If they drift, your agent drifts. One checkout gets <code>jakarta.ws.rs</code> right, another keeps old patterns, a third nudges the model toward the wrong extension.</p><p>This gets worse on teams that ship to production because agent instructions are not neutral. They push which commands run, which files change, and which defaults stick. A stale skill gives you ugly code. It can also teach the assistant the wrong native build command, the wrong REST stack, or the wrong packaging convention. Past &#8220;Bob is a bit off&#8221; you get wasted review time, a messy delivery flow, and expensive tokens for bad answers.</p><p>Java developers already know what to do with reusable stuff: package it, version it, ship it like any other JAR, pull it in with Maven. <a href="https://www.skillsjars.com/">SkillsJars</a> does the same for agent skills. You write framework-specific <code>SKILL.md</code> files once, pack them into a JAR, install or publish the artifact where your builds can see it, and extract into the folder IBM Bob reads when someone opens the project. 
Same muscle memory as any internal library, only the payload is Markdown.</p><p>Next we run the full loop with <strong>local </strong><code>mvn install</code> (Maven Central optional): a <code>quarkus-dev-skills</code> JAR at <code>1.0.0-SNAPSHOT</code>, three Quarkus skills inside, and a consumer app under <code>shipment-service/</code> that pulls that JAR. Bob gets the same guidance everywhere without copy-paste. If you get lost, the <code>quarkus-dev-skills/</code> tree in the repo is the ground truth for these steps.</p><h2><strong>Prerequisites</strong></h2><p>You should be fine with Maven, a normal Quarkus project layout, and Markdown written for an agent. You also need:</p><ul><li><p>Java 25 installed</p></li><li><p>Maven 3.9+ or the Maven Wrapper available</p></li><li><p>IBM Bob installed in your IDE</p></li><li><p>Network access to resolve the SkillsJars Maven plugin (<code>com.skillsjars:maven-plugin</code>) from the public plugin repositories</p></li><li><p>Basic understanding of Maven <code>pom.xml</code> files</p></li></ul><h2><strong>Project Setup</strong></h2><p>Start from a plain Maven project for the skills artifact. It is not a Quarkus app: its only job is to ship reusable skill files. Deleting <code>src/main/java</code> in the next step still feels wrong the first time; for this artifact, empty Java trees are normal.</p><p>Create the project <a href="https://github.com/myfear/the-main-thread/tree/main/quarkus-dev-skills">or start from my Github repository</a>:</p><pre><code><code>mvn archetype:generate \
  -DgroupId=com.example.skills \
  -DartifactId=quarkus-dev-skills \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DinteractiveMode=false</code></code></pre><p>Now move into the project and remove the Java source folders because this artifact ships Markdown-based skills, not Java classes:</p><pre><code><code>cd quarkus-dev-skills
rm -rf src
mkdir -p skills/quarkus-scaffolding
mkdir -p skills/quarkus-extensions
mkdir -p skills/quarkus-native</code></code></pre><p>The structure should now look like this (this repository also keeps the demo consumer next to the packaging project):</p><pre><code><code>quarkus-dev-skills/
&#9500;&#9472;&#9472; article.md
&#9500;&#9472;&#9472; pom.xml
&#9500;&#9472;&#9472; skills/
&#9474;   &#9500;&#9472;&#9472; quarkus-scaffolding/
&#9474;   &#9474;   &#9492;&#9472;&#9472; SKILL.md
&#9474;   &#9500;&#9472;&#9472; quarkus-extensions/
&#9474;   &#9474;   &#9492;&#9472;&#9472; SKILL.md
&#9474;   &#9492;&#9472;&#9472; quarkus-native/
&#9474;       &#9492;&#9472;&#9472; SKILL.md
&#9492;&#9472;&#9472; shipment-service/
    &#9500;&#9472;&#9472; AGENTS.md
    &#9500;&#9472;&#9472; pom.xml
    &#9492;&#9472;&#9472; src/
</code></code></pre><p>The SkillsJars Maven plugin scans the top-level <code>skills/</code> directory and treats each immediate child folder as one skill. Get the directory names right once; everything downstream reads from there.</p><h2><strong>Implementation</strong></h2><p>Below you will find very incomplete examples. There is ongoing Quarkus work around shared coding-agent guidance in pull request <a href="https://github.com/quarkusio/quarkus/pull/53038">quarkusio/quarkus#53038</a>, which adds an initial structure for reusable coding rules and explicitly references AGENTS.md as the emerging open format for agent instructions. So keep an eye out for more coming from the team in the future. The broader format and conventions are documented at <a href="https://agents.md/">agents.md</a>. </p><blockquote><p><strong>Security warning:</strong> Agent Skills should be treated like executable guidance, not harmless documentation. They can run commands, read files, and change code in ways you did not expect. SkillsJars says that they do a basic security scan before publication, but that is only a baseline check. It does not replace a proper security review of the skills before your team uses them.</p></blockquote><h3><strong>Writing the scaffolding skill</strong></h3><p>The first skill pins how Bob creates Quarkus resources and related classes: endpoint, service, maybe an entity, with predictable packages and imports.</p><p>Create <code>skills/quarkus-scaffolding/SKILL.md</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;markdown&quot;,&quot;nodeId&quot;:&quot;c41754b3-4c8f-4dd9-a0d2-dca8734cc8a3&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-markdown">---
name: quarkus-scaffolding
description: &gt;
  Example playbook for new REST resources, CDI services, Panache entities,
  and repositories in a Quarkus 3 app. Load only when scaffolding those
  pieces &#8212; not a substitute for project-wide AGENTS.md or rules.
allowed-tools: Bash Read Edit
license: Apache-2.0
---

# Quarkus scaffolding (examples only)

**Progressive skill:** use when adding endpoints or persistence types. Repo-wide conventions belong in always-on guidance ([`AGENTS.md`](https://agents.md/), `.cursor/rules`, etc.); specialized steps live in skills so they are not stuffed into every prompt. Quarkus is converging on that split &#8212; markdown rules plus `.agents/skills/` &#8212; see [quarkus#53038](https://github.com/quarkusio/quarkus/pull/53038). If this repo already defines layout or naming, follow that first.

## Package layout (typical)

- `.../resource/` &#8212; JAX-RS
- `.../service/` &#8212; CDI beans
- `.../entity/` &#8212; JPA + Panache
- `.../repository/` &#8212; Panache repositories

## CLI vs hand-written classes

The Quarkus CLI creates **apps** and manages **extensions** (e.g. `quarkus create app`, `quarkus extension add`). It does not standardize &#8220;add one Java class&#8221; inside an existing module &#8212; create new types in the IDE or by copying the skeleton below.

## Resource shape (`jakarta.*`, not `javax.*`)

```java
package com.example.app.resource;

import com.example.app.service.WidgetService;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.MediaType;

@Path("/api/v1/widgets")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class WidgetResource {

    @Inject
    WidgetService service;

    @GET
    public Object list() {
        return service.list();
    }

    @POST
    public Object create(Object request) {
        return service.create(request);
    }
}
```

## Panache (minimal)

- Default surrogate key &#8594; `PanacheEntity`.
- Custom or composite id &#8594; `PanacheEntityBase`.
- Annotate with `@Entity` and `@Table(name = "some_table")` (snake_case is a common convention for `name`).
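
A minimal entity sketch matching the bullets above (class and table names are illustrative):

```java
package com.example.app.entity;

import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Entity;
import jakarta.persistence.Table;

// Default surrogate key -> extend PanacheEntity; public fields are idiomatic
@Entity
@Table(name = "widget")
public class Widget extends PanacheEntity {
    public String name;
}
```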

## Imports

- CDI: `jakarta.inject`, `jakarta.enterprise.context`
- REST: `jakarta.ws.rs`
- Panache ORM: `io.quarkus.hibernate.orm.panache`
- Avoid Spring annotations unless the project uses Spring-on-Quarkus explicitly.
</code></pre></div><p>The front matter does most of the routing. The <code>name</code> must match the directory name. The <code>description</code> helps the model decide if the skill fits the task: write it for matching, not like internal documentation. If you would not say the <code>description</code> out loud to a teammate choosing a skill, rewrite it.</p><p><code>allowed-tools</code> caps what the agent can do without another prompt. It also keeps the blast radius small: only list tools you want. For scaffolding, Bash plus file editing is enough. Wider lists mean a bigger mess if the skill fires in the wrong place. I keep lists short on purpose; you can always add more when a task really needs them.</p><h3><strong>Writing the extension management skill</strong></h3><p>Add a second skill so Bob stays on the Quarkus CLI and BOM patterns. Generic assistants often invent Maven coordinates, pin versions the BOM should own, or mix old and new REST stacks. This file narrows that path.</p><p>Create <code>skills/quarkus-extensions/SKILL.md</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;markdown&quot;,&quot;nodeId&quot;:&quot;b9b3b06b-3786-464f-99ab-39e4b7159901&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-markdown">---
name: quarkus-extensions
description: &gt;
  Example playbook for adding, listing, and removing Quarkus extensions
  via CLI or build tools. Load when the task is dependencies/capabilities,
  not routine coding; platform/BOM policy stays in AGENTS.md/rules.
allowed-tools: Bash Read Edit
license: Apache-2.0
---

# Quarkus extensions (examples only)

**Progressive skill:** use when someone needs REST, data, messaging, health, tracing, etc. at the **build** level. Version and BOM constraints that apply to every change belong in always-on guidance ([agents.md](https://agents.md/), rules); this file is task-sized, like the layout Quarkus is heading toward with rules + `.agents/skills/` &#8212; see [quarkus#53038](https://github.com/quarkusio/quarkus/pull/53038).

## CLI (preferred)

`ext` is shorthand for `extension` ([CLI tooling](https://quarkus.io/guides/cli-tooling)).

Browse what you can add (installable extensions, name filter):

```bash
quarkus ext ls -i -s jdbc
```

Add or remove (names can be short; globs like `smallrye-*` work):

```bash
quarkus ext add rest-jackson
quarkus ext rm rest-jackson
```

## No CLI: Maven / Gradle

Maven:

```bash
./mvnw quarkus:add-extension -Dextensions='rest-jackson,kafka'
```

Gradle:

```bash
./gradlew listExtensions
./gradlew addExtension --extensions='hibernate-validator'
```

## Last resort: edit the build by hand

Maven &#8212; **no** version when the Quarkus BOM is imported:

```xml
&lt;dependency&gt;
  &lt;groupId&gt;io.quarkus&lt;/groupId&gt;
  &lt;artifactId&gt;quarkus-rest-jackson&lt;/artifactId&gt;
&lt;/dependency&gt;
```

Gradle (same idea: BOM manages versions):

```groovy
implementation 'io.quarkus:quarkus-rest-jackson'
```

Use coordinates from [quarkus.io/extensions](https://quarkus.io/extensions/) or CLI list output &#8212; **do not guess** artifact IDs.
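
The version-less coordinate above only works because the project imports the Quarkus BOM. For reference, the usual import (coordinates and property name follow the standard codestart; keep whatever the project already uses):

```xml
&lt;dependencyManagement&gt;
  &lt;dependencies&gt;
    &lt;dependency&gt;
      &lt;groupId&gt;io.quarkus.platform&lt;/groupId&gt;
      &lt;artifactId&gt;quarkus-bom&lt;/artifactId&gt;
      &lt;version&gt;${quarkus.platform.version}&lt;/version&gt;
      &lt;type&gt;pom&lt;/type&gt;
      &lt;scope&gt;import&lt;/scope&gt;
    &lt;/dependency&gt;
  &lt;/dependencies&gt;
&lt;/dependencyManagement&gt;
```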

## Names you see a lot (illustrative)

- `rest` / `rest-jackson`
- `hibernate-orm-panache`
- `jdbc-postgresql`
- `messaging-kafka`
- `smallrye-health`
- `opentelemetry`

## Rules of thumb

- Prefer the **REST** stack (`rest`, `rest-jackson`) for new JAX-RS-style apps unless the project already standardizes on something else.
- Do not mix **incompatible** stacks (e.g. Spring MVC + RESTEasy) without an explicit reason.
- Keep every `io.quarkus` extension on the **same platform/BOM** the project already uses.</code></pre></div><p>So when someone says &#8220;add Kafka,&#8221; you get Quarkus extension IDs and the CLI flow, not a random client dependency pulled from a blog post. If your team really wants plain Kafka clients, say so in the skill and own that choice.</p><p>The same JAR everywhere means the same instructions in every checkout. For an agent that touches many repos, boring repeatability beats clever prose. That is the whole point of the exercise.</p><h3><strong>Writing the native build skill</strong></h3><p>Native builds get a separate skill because wrong text hurts fast: local GraalVM vs container builds, reflection registration, integration tests vs JVM tests.</p><p>Create <code>skills/quarkus-native/SKILL.md</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;markdown&quot;,&quot;nodeId&quot;:&quot;80f6645f-0aa0-42a6-921b-e2d29f6b64ff&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-markdown">---
name: quarkus-native
description: &gt;
  Example playbook for native executables and native container images
  (Maven/Gradle). Load when debugging native builds or reflection &#8212; not
  for everyday JVM dev; keep always-on project policy in AGENTS.md/rules.
allowed-tools: Bash Read Edit
license: Apache-2.0
---

# Quarkus native (examples only)

**Progressive skill:** native compilation is slow and toolchain-specific; invoke this only when the task is &#8220;build native,&#8221; &#8220;fix native runtime,&#8221; or CI image parity. Always-on constraints (e.g. &#8220;we only ship container-build&#8221;) belong in [agents.md](https://agents.md/) / rules; task detail here matches the direction in [quarkus#53038](https://github.com/quarkusio/quarkus/pull/53038).

## Maven (typical)

Local toolchain (GraalVM / Mandrel already on `PATH`):

```bash
./mvnw package -Dnative
```

No local native compiler &#8212; build inside the builder image:

```bash
./mvnw package -Dnative -Dquarkus.native.container-build=true
```

Native binary **and** container image (needs a `quarkus-container-image-*` extension):

```bash
./mvnw package -Dnative \
  -Dquarkus.native.container-build=true \
  -Dquarkus.container-image.build=true
```

## Gradle (typical)

```bash
./gradlew build -Dquarkus.native.enabled=true
```

Container-based native compile:

```bash
./gradlew build -Dquarkus.native.enabled=true -Dquarkus.native.container-build=true
```

## Reflection / missing classes at runtime

Prefer registering the types that actually need reflection:

```java
import io.quarkus.runtime.annotations.RegisterForReflection;

@RegisterForReflection
public class ShipmentDto {
    public String id;
    public String status;
}
```

Fallback: extra native-image args (e.g. external JSON) via config:

```properties
quarkus.native.additional-build-args=-H:ReflectionConfigurationFiles=reflection-config.json
```
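
A minimal `reflection-config.json` sketch in the GraalVM reflection configuration format (class name illustrative):

```json
[
  {
    "name": "com.example.app.ShipmentDto",
    "allDeclaredFields": true,
    "allDeclaredConstructors": true
  }
]
```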

## Native integration tests

```bash
./mvnw verify -Dnative
```

Gradle (generates native image, then runs tests):

```bash
./gradlew testNative
```

`@QuarkusIntegrationTest` exercises the **artifact the build produced** (JAR vs native binary vs container), not the in-process JVM test runtime.
</code></pre></div><p>Do not pack every native-image trick into one file. A long wall of text gives the model more tokens but weaker signal. Keep the skill tight; put long appendices in another file if you need them. Native is painful enough without a fifty-screen skill nobody loads.</p><h3><strong>Configuring the skills artifact POM</strong></h3><p>Wire the packaging project so Maven turns these Markdown files into a skills JAR. Replace your <code>pom.xml</code> with this version (same as <code>quarkus-dev-skills/pom.xml</code> in the tree). Yes, it is a full paste; for the demo that is faster than diffing line by line in prose.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;xml&quot;,&quot;nodeId&quot;:&quot;8ed3e47c-b2ef-418a-8ed3-eebcac5f7b7b&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-xml">&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"&gt;
  &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt;

  &lt;groupId&gt;com.example.skills&lt;/groupId&gt;
  &lt;artifactId&gt;quarkus-dev-skills&lt;/artifactId&gt;
  &lt;version&gt;1.0.0-SNAPSHOT&lt;/version&gt;
  &lt;packaging&gt;jar&lt;/packaging&gt;

  &lt;name&gt;Quarkus Developer Skills&lt;/name&gt;
  &lt;description&gt;Reusable agent skills for IBM Bob working in Quarkus projects (demo / tutorial).&lt;/description&gt;
  &lt;url&gt;https://github.com/myfear/the-main-thread/quarkus-dev-skills&lt;/url&gt;

  &lt;!-- Placeholder SCM URL: the SkillsJars plugin derives the org/repo path
       segments under META-INF/skills/ from this GitHub URL --&gt;
  &lt;scm&gt;
    &lt;url&gt;https://github.com/example-org/quarkus-dev-skills&lt;/url&gt;
  &lt;/scm&gt;

  &lt;properties&gt;
    &lt;maven.compiler.release&gt;25&lt;/maven.compiler.release&gt;
    &lt;project.build.sourceEncoding&gt;UTF-8&lt;/project.build.sourceEncoding&gt;

    &lt;skillsjars.skill.quarkus-scaffolding.allowed-tools&gt;Bash Read Edit&lt;/skillsjars.skill.quarkus-scaffolding.allowed-tools&gt;
    &lt;skillsjars.skill.quarkus-extensions.allowed-tools&gt;Bash Read Edit&lt;/skillsjars.skill.quarkus-extensions.allowed-tools&gt;
    &lt;skillsjars.skill.quarkus-native.allowed-tools&gt;Bash Read Edit&lt;/skillsjars.skill.quarkus-native.allowed-tools&gt;
  &lt;/properties&gt;

  &lt;build&gt;
    &lt;plugins&gt;
      &lt;plugin&gt;
        &lt;groupId&gt;com.skillsjars&lt;/groupId&gt;
        &lt;artifactId&gt;maven-plugin&lt;/artifactId&gt;
        &lt;version&gt;0.0.6&lt;/version&gt;
        &lt;executions&gt;
          &lt;execution&gt;
            &lt;goals&gt;
              &lt;goal&gt;package&lt;/goal&gt;
            &lt;/goals&gt;
          &lt;/execution&gt;
        &lt;/executions&gt;
      &lt;/plugin&gt;
    &lt;/plugins&gt;
  &lt;/build&gt;
&lt;/project&gt;</code></pre></div><p>This POM is the contract between your Markdown and the plugin. The <code>skillsjars.skill.&lt;skill-name&gt;.allowed-tools</code> properties must match the front matter in each <code>SKILL.md</code>. If they drift, the build fails, which is friendlier than silently shipping skills with the wrong tool policy.</p><p><strong>Paths inside the JAR.</strong> The <code>package</code> goal copies each skill under <code>META-INF/skills/...</code> as in the <a href="https://github.com/skillsjars/skillsjars-maven-plugin/blob/main/README.md">plugin README</a>. With <code>&lt;scm&gt;&lt;url&gt;</code> on <code>github.com</code>, the plugin takes the GitHub <strong>org</strong> and <strong>repo</strong> from that URL. Without it, you get Maven <code>groupId</code> segments (<code>com/example/skills/&lt;skill&gt;/...</code> in <code>PackageMojoTest</code>). This tree uses a placeholder <code>example-org/quarkus-dev-skills</code> URL in <code>&lt;scm&gt;</code> so the paths match the SkillsJars.com examples. That URL is only a teaching label.</p><p><strong>Consumer coordinates.</strong> Skills that SkillsJars.com republishes use <code>groupId</code> <code>com.skillsjars</code> and an <code>artifactId</code> like <code>org__repo__skill</code> (<a href="https://skillsjars.com/">skillsjars.com</a>). 
For a JAR you built yourself, consumers use <strong>your</strong> <code>groupId</code> and <code>artifactId</code>, here <code>com.example.skills:quarkus-dev-skills</code>, after <code>mvn install</code> or after you push to an internal repo.</p><h3><strong>Building and inspecting the skills JAR</strong></h3><p>Package the artifact and confirm the skill files landed under the right <code>META-INF</code> paths.</p><p>Build and install into your local repository (<code>~/.m2</code>) so the consumer can resolve the SNAPSHOT:</p><pre><code><code>mvn install</code></code></pre><p>Inspect the resulting JAR:</p><pre><code><code>jar tf target/quarkus-dev-skills-1.0.0-SNAPSHOT.jar | grep /SKILL.md</code></code></pre><p>You should see output similar to this (with the example <code>&lt;scm&gt;</code> URL above):</p><pre><code><code>META-INF/skills/example-org/quarkus-dev-skills/quarkus-extensions/SKILL.md
META-INF/skills/example-org/quarkus-dev-skills/quarkus-scaffolding/SKILL.md
META-INF/skills/example-org/quarkus-dev-skills/quarkus-native/SKILL.md</code></code></pre><p>The extract goal reads exactly these paths. If the <code>jar tf</code> output is empty, Bob gets nothing from the consumer build, so stop and fix packaging before going further.</p><p>A clean JAR only proves layout, not quality. You can ship a perfect archive and still teach the wrong extension. Versioning ships the same bits everywhere; someone still has to read the skill text. I treat <code>jar tf</code> as a smoke test, not a proof that Bob will behave.</p><h3><strong>Creating the consumer Quarkus application</strong></h3><p>Add a Quarkus app that consumes the skills JAR so the layout looks like a real project.</p><p>Create the Quarkus application <strong>under</strong> the packaging project (from <code>quarkus-dev-skills/</code>):</p><pre><code><code>quarkus create app com.example:shipment-service \
  --extension=quarkus-rest-jackson,quarkus-hibernate-orm-panache,quarkus-jdbc-postgresql,quarkus-smallrye-health</code></code></pre><p>Move into the project:</p><pre><code><code>cd shipment-service</code></code></pre><p>Align the consumer with Java 25 if your Quarkus codestart picked a newer <code>--release</code> (<code>shipment-service/pom.xml</code> here uses <code>maven.compiler.release</code> 25).</p><p>The app is only there to consume skills. There is no real business logic; that is intentional. Pick extensions so the prompts in <strong>Verification</strong> look like normal Quarkus work instead of a toy Hello World.</p><h3><strong>Configuring the consumer project to extract skills</strong></h3><p>Add the SkillsJars plugin to the Quarkus app and declare the skills artifact as a <strong>plugin</strong> dependency. Skills stay off the runtime classpath; extraction reads the JAR at build time for Bob&#8217;s folder.</p><p>Update the consumer project <code>pom.xml</code> and add the plugin inside the <code>&lt;build&gt;&lt;plugins&gt;</code> section. Reference the <strong>same coordinates</strong> you installed with <code>mvn install</code> in the packaging project:</p><pre><code><code>&lt;plugin&gt;
    &lt;groupId&gt;com.skillsjars&lt;/groupId&gt;
    &lt;artifactId&gt;maven-plugin&lt;/artifactId&gt;
    &lt;version&gt;0.0.6&lt;/version&gt;
    &lt;dependencies&gt;
        &lt;dependency&gt;
            &lt;groupId&gt;com.example.skills&lt;/groupId&gt;
            &lt;artifactId&gt;quarkus-dev-skills&lt;/artifactId&gt;
            &lt;version&gt;1.0.0-SNAPSHOT&lt;/version&gt;
        &lt;/dependency&gt;
    &lt;/dependencies&gt;
&lt;/plugin&gt;</code></code></pre><p>The live <code>shipment-service/pom.xml</code> in this tree keeps the Quarkus-generated compiler, Surefire, and Failsafe plugins and appends this SkillsJars plugin after them.</p><p>The skills JAR never joins the application dependency graph. Your runtime image does not grow agent instructions just because someone uses Bob on a laptop. That boundary matters if ops is nervous about &#8220;AI stuff&#8221; on the classpath.</p><h3><strong>Extracting the skills into Bob&#8217;s project directory</strong></h3><p>Run the extraction goal and write skills into the directory IBM Bob watches in the project.</p><p>Run (from <code>shipment-service/</code>):</p><pre><code><code>./mvnw skillsjars:extract -Ddir=.bob/skills</code></code></pre><p>The <code>extract</code> goal scans <code>META-INF/skills/</code> in the JAR, finds each skill root, and writes one folder per skill under the path you pass to <code>-Ddir</code>. The folder name starts with <code>skillsjars__</code>, then the path inside the JAR with <code>/</code> turned into <code>__</code>. See <code>ExtractMojo</code> in the <a href="https://github.com/skillsjars/skillsjars-maven-plugin/blob/main/src/main/java/com/skillsjars/maven/ExtractMojo.java">plugin sources</a>. If you expected a straight mirror of the paths inside the JAR, the folder names will look odd until you read that class once.</p><p>After extraction you should see three sibling directories (example with the placeholder <code>&lt;scm&gt;</code> from the POM):</p><pre><code><code>.bob/skills/
&#9500;&#9472;&#9472; skillsjars__example-org__quarkus-dev-skills__quarkus-extensions/
&#9474;   &#9492;&#9472;&#9472; SKILL.md
&#9500;&#9472;&#9472; skillsjars__example-org__quarkus-dev-skills__quarkus-native/
&#9474;   &#9492;&#9472;&#9472; SKILL.md
&#9492;&#9472;&#9472; skillsjars__example-org__quarkus-dev-skills__quarkus-scaffolding/
    &#9492;&#9472;&#9472; SKILL.md</code></code></pre><p>Skills stay sealed in the JAR until extract puts them next to the code Maven built. Bump the artifact version, run extract again, and the skill folders refresh. You stop guessing which stale folder someone copied six months ago.</p><p>It is probably a good idea to add the skills folder to .gitignore and not commit them with your code.</p><h3><strong>Making setup explicit with AGENTS.md</strong></h3><p>Document how skills land in the repo so the next person does not have to hunt for tribal knowledge.</p><p>Add an <code>AGENTS.md</code> file at the root of the consumer project. The checked-in copy matches this (note the <code>mvn install</code> step in the parent directory so the SNAPSHOT exists locally):</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;markdown&quot;,&quot;nodeId&quot;:&quot;c5126a0f-8aa3-497f-a3d2-d1b08cb44433&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-markdown"># AGENTS.md

## Setup

After cloning this repository, install the shared skills artifact into your local Maven repository (from the sibling packaging project), then extract skills:

```bash
cd ../
mvn -f pom.xml -q install
cd shipment-service
./mvnw skillsjars:extract -Ddir=.bob/skills
```

The first step publishes `com.example.skills:quarkus-dev-skills:1.0.0-SNAPSHOT` to `~/.m2`. The second step unpacks `META-INF/skills/...` from that JAR into `.bob/skills/` using the SkillsJars Maven plugin ([plugin README](https://github.com/skillsjars/skillsjars-maven-plugin/blob/main/README.md)).

Extracted directories are not automatically gitignored. Check and add them to .gitignore. Re-run extraction whenever you bump the skills artifact version.

## Project context

* Java 25
* Quarkus REST with Jackson
* Panache with PostgreSQL
* Health endpoints enabled
* Prefer modern Quarkus REST stack
* Native builds use container-based compilation when needed</code></pre></div><p><code>AGENTS.md</code> covers bootstrap for humans and a short context block for the agent. Keep that list short; long filler buries the commands people actually need.</p><p> Some teams commit the extracted files so PRs show skill changes; that works, but diffs get noisy. I prefer gitignore plus explicit extract in <code>AGENTS.md</code> so human edits and generated trees do not step on each other. Your team might reasonably choose otherwise; say which one you picked.</p><h3><strong>What happens when skills drift</strong></h3><p>The usual failure is drift. Someone updates native build text in the skills repo. Another repo still has an old extract on disk. Bob answers differently in each checkout, and you argue about the assistant instead of the code. Packaging and versions help only if you bump versions like normal dependency work.</p><p>Make the version visible in review. Bump the artifact in the consumer <code>pom.xml</code>, run extract again, and read the <code>.bob/skills</code> diff if you commit those files so behavior changes show up in git. In this repo extract output should be gitignored, so bump the version in the packaging <code>pom.xml</code> and in the consumer plugin dependency together and tell people to reinstall and re-extract. I have watched teams &#8220;fix&#8221; Bob locally while CI still had last month&#8217;s skills; aligning those two numbers is the boring part that actually fixes it.</p><h3><strong>Tool permissions are a real security boundary</strong></h3><p>A skill with <code>allowed-tools: Bash Read Edit</code> can run shell commands and edit files. That is the point, and that is also where accidents happen. A sloppy skill, or the right skill in the wrong place, can change more than you meant.</p><p>Keep the tool list small. Skip network, broad shell, or &#8220;run anything&#8221; patterns unless you really need them. 
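For a skill that only advises, a narrow frontmatter block is usually enough. The sketch below is illustrative, not a schema reference: the <code>allowed-tools</code> key follows the convention quoted above, the skill name is made up, and the exact fields your agent honors may differ.</p><pre><code><code>---
name: quarkus-native
description: Guidance for Quarkus native builds and native container images
# Advisory skill: Read is enough. Add Bash or Edit only when the
# skill genuinely has to run commands or change files.
allowed-tools: Read
---
</code></code></pre><p>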
Skills are closer to scripts than to comments. Review them like scripts.</p><h3><strong>Versioning does not replace code review</strong></h3><p>A versioned skills artifact ships the same bits everywhere: local <code>~/.m2</code>, internal Nexus, SkillsJars.com, same idea. It does not check whether those bits are correct. If a skill names the wrong Quarkus extension, every consumer picks up the same mistake.</p><p>Versioning still helps. Patch for typos and small fixes. Minor when you add a skill or real content. Major when Bob&#8217;s behavior on real tasks will change. Downstream teams get a number to plan around. They still need to read the diff.</p><h3><strong>Context overload weakens skill quality</strong></h3><p>Do not cram everything into one giant skill. Huge files turn to mush. The model sees more lines and picks the wrong ones. Small focused files usually win.</p><p>One skill per problem area works here: scaffolding, extensions, native. If you need a long appendix later, add another file. Do not hide it all in the skill that should fire on a short prompt.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/p/skillsjars-for-java-package-reusable?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/p/skillsjars-for-java-package-reusable?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Verification</strong></h2><h3><strong>Verify Bob behavior with concrete prompts</strong></h3><p>Open the Quarkus consumer in your IDE with IBM Bob enabled and try these prompts in Code mode. 
This is the part no amount of Maven XML replaces: you are checking whether the words in the skills survive contact with a real model.</p><p>Prompt one:</p><pre><code><code>Create a ShipmentResource with GET /api/v1/shipments and POST /api/v1/shipments.</code></code></pre><p>Check that:</p><ul><li><p>Bob creates the resource in a <code>resource</code> package</p></li><li><p>The code uses <code>jakarta.ws.rs</code> imports</p></li><li><p>A matching service class is suggested or created</p></li><li><p>The generated code reads as Quarkus, not Spring</p></li></ul><p>Prompt two:</p><pre><code><code>Add Kafka support to this project.</code></code></pre><p>Watch for:</p><ul><li><p>Bob uses a Quarkus extension workflow</p></li><li><p>It does not invent random Maven versions</p></li><li><p>It picks Quarkus extension IDs, not random generic dependencies</p></li></ul><p>Prompt three:</p><pre><code><code>Build a native container image for this project.</code></code></pre><p>Expect:</p><ul><li><p>Bob suggests a container-based native build when appropriate</p></li><li><p>It distinguishes between local native toolchains and container builds</p></li><li><p>It does not collapse everything into one vague &#8220;use GraalVM&#8221; answer</p></li></ul><p>Those checks are about how Bob behaves. The JAR and extract steps can be perfect and the skill still does nothing useful if the text does not stick. Ship packaging first, then iterate on words; both are allowed to be wrong, but usually not in the same release.</p><h2><strong>Conclusion</strong></h2><p>SkillsJars fits the same habit you already have for libraries: package once, version it, let Maven resolve it, extract into <code>.bob/skills</code> on the consumer. One good <code>SKILL.md</code> is quick to write. 
The long fight is the same as with any shared library: keeping one truth across many repos without silent fork drift.</p><p>After that you can split skills or add reference files so Bob loads a thin layer first and pulls depth only when the task needs it. That is optional polish; the baseline win is already &#8220;same JAR, same extract, same words.&#8221;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Quarkus 3.31 Security Upgrade: Pushed Authorization Requests with Keycloak]]></title><description><![CDATA[Build a Quarkus web app that uses Pushed Authorization Requests with Keycloak, keeps OAuth parameters out of the browser URL, and hardens your login flow with one property.]]></description><link>https://www.the-main-thread.com/p/par-quarkus-oidc-keycloak-pushed-authorization-requests</link><guid isPermaLink="false">https://www.the-main-thread.com/p/par-quarkus-oidc-keycloak-pushed-authorization-requests</guid><dc:creator><![CDATA[Markus Eisele]]></dc:creator><pubDate>Sat, 04 Apr 2026 06:08:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/eae59e1c-5223-40dd-bd7b-1d51e441d6ed_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Classic authorization code flow looks clean until you inspect the redirect URL. Then you see everything in the query string: <code>client_id</code>, <code>scope</code>, <code>redirect_uri</code>, <code>state</code>, <code>nonce</code>, and whatever else your client sends. That is normal OAuth behavior. Most teams stop thinking about it once login works.</p><p>These parameters are not secret in the cryptographic sense. 
They still travel through the browser. They end up in address bars, history, reverse proxy logs, analytics tools, screenshots, and referrer headers. On a laptop demo that feels harmless. In production, with shared logging and support tooling, they can leak further than you meant.</p><p><a href="https://datatracker.ietf.org/doc/html/rfc9126">RFC 9126</a> defines <em>pushed authorization requests</em> (PAR). Your application sends the full authorization request to the authorization server on a back channel first. The browser only follows a redirect that carries a short <code>request_uri</code> instead of the full parameter list.</p><p><a href="https://quarkus.io/guides/security-openid-connect-client-reference">Quarkus OpenID Connect</a> (OIDC) supports PAR with a dedicated configuration switch. With PAR enabled, Quarkus pushes the authorization request first, receives a short-lived <code>request_uri</code>, and only then redirects the browser. The browser no longer carries the full request payload. If the server advertises <code>pushed_authorization_request_endpoint</code> in its metadata, Quarkus can discover the PAR endpoint automatically. Details are in the <a href="https://quarkus.io/guides/security-oidc-configuration-properties-reference">Quarkus OIDC configuration reference</a>.</p><p>There is a real security angle here too. PAR shows less on the front channel. It also makes casual tampering harder because the client must authenticate when it pushes the request. Many stricter deployments pair PAR with <strong>PKCE</strong> (Proof Key for Code Exchange, an extra check on the authorization code exchange). The <a href="https://www.keycloak.org/securing-apps/oidc-layers">Keycloak OIDC documentation</a> recommends that combination for stronger profiles. 
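To make the back channel concrete, here is roughly what the RFC 9126 exchange looks like against a Keycloak realm. Treat it as an illustrative sketch: the endpoint path matches Keycloak&#8217;s layout, but the parameter set, the client credential encoding, and the <code>request_uri</code> lifetime depend on your server.</p><pre><code><code>POST /realms/quarkus/protocol/openid-connect/ext/par/request HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Authorization: Basic ...        (confidential client credentials)

response_type=code&amp;client_id=quarkus-app&amp;redirect_uri=...&amp;scope=openid&amp;state=...

HTTP/1.1 201 Created
Content-Type: application/json

{"request_uri":"urn:ietf:params:oauth:request_uri:...","expires_in":60}</code></code></pre><p>Only the short <code>request_uri</code>, not this payload, appears in the browser redirect that follows. 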
Quarkus documents the matching client settings in the <a href="https://quarkus.io/guides/security-oidc-code-flow-authentication">Quarkus OIDC authorization code flow guide</a>.</p><h2><strong>What We&#8217;ll Build</strong></h2><p>Let&#8217;s build a small Quarkus app that uses OIDC to protect <code>/account</code>, turns on PAR for the login redirect, and talks to Keycloak locally. We will add <code>/account/tokens</code> as JSON so you can see that you still get normal ID, access, and refresh tokens after login. The only behavioral change we care about is how the authorization request reaches Keycloak.</p><p>You can run Keycloak in two ways: <strong>Dev Services for Keycloak</strong> (Quarkus starts a container for you in dev mode) or <strong>Podman</strong> on a fixed port. The steps below use the same Quarkus and Keycloak settings; only how you launch Keycloak changes.</p><h2><strong>Prerequisites</strong></h2><p>You do not need a big setup. You need a current Quarkus CLI and a JDK (17 or newer matches current Quarkus guides; this article uses Java 21). For Dev Services you need <strong>Docker or Podman</strong> available to Quarkus. Let&#8217;s assume you already know the usual OIDC authorization code flow in Quarkus and what a confidential client is.</p><ul><li><p>Java 21 installed (or JDK 17+)</p></li><li><p>Quarkus CLI installed</p></li><li><p>Docker or Podman installed (for Dev Services, or for manual Keycloak below)</p></li><li><p>Basic familiarity with Quarkus OIDC web-app authentication</p></li></ul><h2><strong>Project Setup</strong></h2><p>Let&#8217;s create the project or you can also <a href="https://github.com/myfear/the-main-thread/tree/main/par-demo">start from my Github repository</a>:</p><pre><code><code>quarkus create app org.acme:par-demo \
  --extension='oidc,rest-jackson' \
  --no-code
cd par-demo</code></code></pre><p>Extensions explained:</p><ul><li><p><code>oidc</code> - enables Quarkus OpenID Connect support for web-app authentication</p></li><li><p><code>rest-jackson</code> - gives us REST endpoints and JSON serialization for the token inspection endpoint</p></li></ul><p>We keep this small on purpose. No database, no template engine, no extra moving parts. The goal is to isolate the authorization flow.</p><h2><strong>Start Keycloak with Dev Services</strong></h2><p>Quarkus <strong>Dev Services for Keycloak</strong> is enabled by default when you run <code>quarkus dev</code> with the <code>oidc</code> extension, <strong>as long as</strong> <code>quarkus.oidc.auth-server-url</code> is not set for that mode. Quarkus then starts a Keycloak container (by default <code>quay.io/keycloak/keycloak:26.5.4</code>), creates a <code>quarkus</code> realm, a confidential client <code>quarkus-app</code> with secret <code>secret</code>, and users <code>alice</code> / <code>bob</code> (passwords match the usernames) with sample roles. Admin console access uses <code>admin</code> / <code>admin</code>. See <a href="https://quarkus.io/guides/security-openid-connect-dev-services">Dev Services and Dev UI for OpenID Connect (OIDC)</a>.</p><p>Why this matters for PAR:</p><ul><li><p>You get a <strong>confidential</strong> client out of the box, which PAR expects for the back-channel push.</p></li><li><p>Quarkus injects the correct issuer URL for the ephemeral container port, so you do not hardcode <code>localhost:8180</code> in dev.</p></li></ul><p>Optional parameters:</p><ul><li><p><strong>Realm file</strong> - If your flow needs a fixed realm export (for example, stricter PAR policies), set <code>quarkus.keycloak.devservices.realm-path=your-realm.json</code> on the classpath or filesystem. 
Dev Services imports that realm instead of only the defaults.</p></li><li><p><strong>Fixed Keycloak port</strong> - You can use <code>quarkus.keycloak.devservices.port=8180</code> to bind the Keycloak Dev Service to a specific port.</p></li><li><p><strong>Shared container</strong> - By default Quarkus may reuse a container labeled <code>quarkus-dev-service-keycloak</code>; set <code>quarkus.keycloak.devservices.shared=false</code> if you want an isolated container per run.</p></li></ul><p>After you start the app (see <strong>Configure</strong> and <strong>Run</strong>), open the <a href="http://localhost:8080/q/dev-ui">Dev UI</a> (or <code>/q/dev</code> depending on your Quarkus version). Use the <strong>OpenID Connect</strong> card and the <strong>Keycloak</strong> provider link to inspect tokens or, for <code>web-app</code>, use <strong>Log in to your web application</strong> against a path like <code>/account</code>. The same guide describes authorization code, password, and client-credentials grants for service-style testing.</p><p>If you already set <code>quarkus.oidc.auth-server-url</code> (for example to a manually run Keycloak), Dev Services does <strong>not</strong> start; you get the generic OIDC Dev Console instead. The Keycloak authorization quickstart uses a <code>%prod.</code> prefix on <code>quarkus.oidc.auth-server-url</code> so <strong>dev</strong> keeps Dev Services while <strong>prod</strong> points at a real URL&#8212;see <a href="https://quarkus.io/guides/security-keycloak-authorization#configuring-the-application">Using OpenID Connect (OIDC) and Keycloak to centralize authorization</a>.</p><p><strong>Verify PAR in discovery</strong> once Keycloak is up. 
For <strong>Dev Services</strong>, take the host and port from the Dev UI or, as in our example here, use the fixed startup port:</p><pre><code><code>curl -s "http://localhost:8180/realms/quarkus/.well-known/openid-configuration" | grep pushed_authorization</code></code></pre><p>Swap <code>localhost:8180</code> for your real Keycloak base URL when it differs.</p><p>You should see <code>pushed_authorization_request_endpoint</code>. Quarkus discovers it from metadata when the server publishes it. The <a href="https://quarkus.io/guides/security-oidc-code-flow-authentication">Quarkus OIDC authorization code flow guide</a> describes discovery behavior.</p><h2><strong>Start Keycloak in Podman (Fixed Port)</strong></h2><p>Use this path when you want a stable URL, CI-like setup, or to match production hostnames without Dev Services.</p><p>Start Keycloak:</p><pre><code><code>podman run --name keycloak \
  -e KC_BOOTSTRAP_ADMIN_USERNAME=admin \
  -e KC_BOOTSTRAP_ADMIN_PASSWORD=admin \
  -p 8180:8080 \
  quay.io/keycloak/keycloak:26.5.4 \
start-dev</code></code></pre><p>Current Quarkus Keycloak examples and Dev Services use the Keycloak 26.x line and <code>KC_BOOTSTRAP_ADMIN_USERNAME</code> / <code>KC_BOOTSTRAP_ADMIN_PASSWORD</code> (not older <code>KEYCLOAK_*</code> admin variables). See the <a href="https://quarkus.io/guides/security-openid-connect-client">Quarkus OpenID Connect client quickstart</a>.</p><p>Wait until Keycloak prints that it is running in development mode. Then open <code>http://localhost:8180</code> and log in with <code>admin</code> / <code>admin</code>.</p><p>Create a new realm named <code>quarkus</code>.</p><p>Create a confidential client:</p><ol><li><p>Open the <code>quarkus</code> realm</p></li><li><p>Go to <strong>Clients</strong></p></li><li><p>Create a client with client ID <code>quarkus-app</code></p></li><li><p>Keep the client protocol as <strong>OpenID Connect</strong></p></li><li><p>Enable <strong>Client authentication</strong></p></li><li><p>Enable the standard authorization code flow</p></li><li><p>Set the redirect URI to <code>http://localhost:8080/*</code></p></li><li><p>Set the web origin to <code>http://localhost:8080</code></p></li></ol><p>You need client authentication because PAR is a back-channel client request. The authorization server must know which client pushed the request. That is part of how PAR is defined in <a href="https://datatracker.ietf.org/doc/html/rfc9126">RFC 9126</a>.</p><p>Open the <strong>Credentials</strong> tab and copy the client secret.</p><p>Create a test user named <code>alice</code> with password <code>alice</code>.</p><p>Verify the PAR endpoint (same as for Dev Services, with your fixed port):</p><pre><code><code>curl -s http://localhost:8180/realms/quarkus/.well-known/openid-configuration | grep pushed_authorization</code></code></pre><h2><strong>Implement the Application</strong></h2><p>We need two resources. One public landing page gives us a safe place to land after logout. 
One protected resource starts the OIDC code flow, shows the signed-in user, and exposes JSON so we can inspect the tokens Quarkus got after the code exchange.</p><p>Create <code>src/main/java/org/acme/HomeResource.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;bb383d11-1d0b-4a51-90ad-6edd35669af5&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/")
public class HomeResource {

    @GET
    @Produces(MediaType.TEXT_HTML)
    public String home() {
        return """
            &lt;html&gt;
              &lt;body&gt;
                &lt;h1&gt;PAR demo&lt;/h1&gt;
                &lt;p&gt;This application protects the account page with Quarkus OIDC and Pushed Authorization Requests.&lt;/p&gt;
                &lt;p&gt;&lt;a href="/account"&gt;Open the protected account page&lt;/a&gt;&lt;/p&gt;
              &lt;/body&gt;
            &lt;/html&gt;
            """;
    }
}</code></pre></div><p>This is intentionally plain. We only need one public entry point. After logout, Quarkus can redirect back here without creating an authentication loop.</p><p>Now let&#8217;s add <code>src/main/java/org/acme/AccountResource.java</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;java&quot;,&quot;nodeId&quot;:&quot;388490c2-48cc-48e7-8841-c8a08fe05431&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-java">package org.acme;

import org.eclipse.microprofile.jwt.Claims;
import org.eclipse.microprofile.jwt.JsonWebToken;

import io.quarkus.oidc.IdToken;
import io.quarkus.oidc.RefreshToken;
import io.quarkus.security.Authenticated;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/account")
public class AccountResource {

  @Inject
  @IdToken
  JsonWebToken idToken;

  @Inject
  JsonWebToken accessToken;

  @Inject
  RefreshToken refreshToken;

  @GET
  @Authenticated
  @Produces(MediaType.TEXT_HTML)
  public String account() {
    Object givenName = idToken.getClaim(Claims.given_name.name());
    String displayName = givenName != null ? givenName.toString() : idToken.getName();

    return """
        &lt;html&gt;
          &lt;body&gt;
            &lt;h1&gt;Hello, %s&lt;/h1&gt;
            &lt;p&gt;You authenticated through Quarkus OIDC with PAR enabled.&lt;/p&gt;
            &lt;p&gt;&lt;a href="/account/tokens"&gt;Inspect tokens&lt;/a&gt;&lt;/p&gt;
            &lt;p&gt;&lt;a href="/logout"&gt;Logout&lt;/a&gt;&lt;/p&gt;
          &lt;/body&gt;
        &lt;/html&gt;
        """.formatted(displayName);
  }

  @GET
  @Path("/tokens")
  @Authenticated
  @Produces(MediaType.APPLICATION_JSON)
  public TokenInfo tokens() {
    return new TokenInfo(
        idToken.getName(),
        idToken.getSubject(),
        accessToken.getExpirationTime(),
        refreshToken.getToken() != null);
  }

  public record TokenInfo(
      String principalName,
      String subject,
      long accessTokenExpirationTime,
      boolean hasRefreshToken) {
  }
}
</code></pre></div><p>This resource shows something important about the Quarkus <code>web-app</code> model. Redirect-based login still ends with the same token set: ID token, access token, and optionally a refresh token. The <a href="https://quarkus.io/guides/security-oidc-code-flow-authentication">Quarkus OIDC authorization code flow guide</a> documents how <code>web-app</code> uses the authorization code flow and how you can inject the access token as <code>JsonWebToken</code>.</p><p>PAR changes an earlier step. Your endpoint code still reads tokens the same way. Your session model stays the same. After Keycloak returns the authorization code, behavior matches a normal code flow. PAR hardens the redirect leg without forcing you to redesign everything else.</p><p>There is also a limit here. PAR does not protect you from weak session handling, bad redirect URI registration, or sloppy token use after login. If you take the access token and write it into logs, PAR does nothing for you. It narrows one attack surface. It does not replace the rest of your OIDC hygiene.</p><h2><strong>Configure Quarkus OIDC and Enable PAR</strong></h2><p>Configure <code>src/main/resources/application.properties</code>.</p><p><strong>If you use Dev Services in dev mode</strong>, omit <code>quarkus.oidc.auth-server-url</code> for <code>%dev</code> (or leave it unset globally in dev) so Quarkus starts Keycloak. Use the default client secret <code>secret</code>. For <strong>production</strong> (or when you always point at a fixed Keycloak), set the issuer on the prod profile as in the Keycloak authorization guide:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;7446e616-fd18-47d6-b355-bf4360c4accd&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext"># Dev: omit auth-server-url so Dev Services for Keycloak starts Keycloak and injects the issuer.
# Prod (or manual Keycloak on 8180): use %prod profile or set quarkus.oidc.auth-server-url globally.
%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus

quarkus.oidc.client-id=quarkus-app
quarkus.oidc.credentials.secret=secret
quarkus.oidc.application-type=web-app

quarkus.oidc.authentication.par.enabled=true

# Dev Services uses a random host port by default; pin 8180 so the manual curl examples above match.
quarkus.keycloak.devservices.port=8180

# PAR + PKCE (recommended for stricter profiles; see the Keycloak OIDC documentation)
quarkus.oidc.authentication.pkce-required=true
quarkus.oidc.authentication.state-secret=8f2ef0d782b24016a4a998f5d8b1a2ce

quarkus.oidc.logout.path=/logout
quarkus.oidc.logout.post-logout-path=/

quarkus.http.auth.permission.authenticated.paths=/account,/account/*,/logout
quarkus.http.auth.permission.authenticated.policy=authenticated

quarkus.log.category."io.quarkus.oidc".level=DEBUG</code></pre></div><p>The critical property is <code>quarkus.oidc.authentication.par.enabled=true</code>. Compare the <a href="https://quarkus.io/guides/security-oidc-configuration-properties-reference">Quarkus OIDC configuration reference</a>. If you do not set an explicit PAR path, Quarkus uses <code>pushed_authorization_request_endpoint</code> from the authorization server metadata.</p><p>The <code>quarkus.oidc.application-type=web-app</code> property selects the OIDC authorization code flow for browser login.</p><p>The logout settings are first-class Quarkus features too. <code>quarkus.oidc.logout.path</code> and <code>quarkus.oidc.logout.post-logout-path</code> trigger RP-initiated logout and send the user back to a local page when logout finishes. Same guide covers those properties.</p><p>The debug log category is there because you want proof. When this works, you want to see the server-side behavior before the browser redirect.</p><h2><strong>Run the Application</strong></h2><p>Start the app in dev mode:</p><pre><code><code>quarkus dev</code></code></pre><p>The first time you use Dev Services, watch the log for <strong>Dev Services for Keycloak started</strong>.</p><p>Open <code>http://localhost:8080/account</code>.</p><p>Because <code>/account</code> is protected and you have no session yet, Quarkus starts the OIDC authorization code flow. With PAR on, Quarkus posts the authorization request to the PAR endpoint on the back channel, gets a <code>request_uri</code>, and only then redirects your browser to the authorization endpoint. That matches the model in <a href="https://datatracker.ietf.org/doc/html/rfc9126">RFC 9126</a> and the PAR settings in the <a href="https://quarkus.io/guides/security-oidc-configuration-properties-reference">Quarkus OIDC configuration reference</a>.</p><p>Look at the browser address bar on the Keycloak login page. 
With a classic front-channel request you often see a long URL full of <code>scope</code>, <code>redirect_uri</code>, <code>state</code>, and <code>nonce</code>. With PAR, the redirect shrinks to something that carries the client ID and a <code>request_uri</code> reference. That visible difference is what this tutorial is about. <a href="https://datatracker.ietf.org/doc/html/rfc9126">RFC 9126</a> describes the pattern.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tZ1R!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tZ1R!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png 424w, https://substackcdn.com/image/fetch/$s_!tZ1R!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png 848w, https://substackcdn.com/image/fetch/$s_!tZ1R!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png 1272w, https://substackcdn.com/image/fetch/$s_!tZ1R!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!tZ1R!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png" width="1456" height="921" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:921,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:584539,&quot;alt&quot;:&quot;Screenshot Keycloak Login&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/191944520?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Screenshot Keycloak Login" title="Screenshot Keycloak Login" srcset="https://substackcdn.com/image/fetch/$s_!tZ1R!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png 424w, https://substackcdn.com/image/fetch/$s_!tZ1R!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png 848w, https://substackcdn.com/image/fetch/$s_!tZ1R!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png 1272w, 
https://substackcdn.com/image/fetch/$s_!tZ1R!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4db0d844-39b9-4e5c-ad3e-ea661ea2e6b7_1770x1120.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Now log in as <code>alice</code> with the password <code>alice</code>.</p><p>After the redirect back to Quarkus, open <code>http://localhost:8080/account/tokens</code>. You should see JSON similar to this:</p><pre><code><code>{
  "principalName": "alice",
  "subject": "8e4615ab-b442-4f1d-b036-0d556ce55a2b",
  "accessTokenExpirationTime": 1774326152,
  "hasRefreshToken": true
}</code></code></pre><p>The exact values will differ, but the structure should match.</p><h2><strong>What Happens in the Flow</strong></h2><p>At this point, let&#8217;s spell the flow out:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NUlm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NUlm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png 424w, https://substackcdn.com/image/fetch/$s_!NUlm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png 848w, https://substackcdn.com/image/fetch/$s_!NUlm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png 1272w, https://substackcdn.com/image/fetch/$s_!NUlm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NUlm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png" width="1456" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:913023,&quot;alt&quot;:&quot;Mermaid Chart&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.the-main-thread.com/i/191944520?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Mermaid Chart" title="Mermaid Chart" srcset="https://substackcdn.com/image/fetch/$s_!NUlm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png 424w, https://substackcdn.com/image/fetch/$s_!NUlm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png 848w, https://substackcdn.com/image/fetch/$s_!NUlm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png 1272w, https://substackcdn.com/image/fetch/$s_!NUlm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2551ba8a-37df-491a-986f-7197501fa3eb_6025x4140.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>When <strong>PKCE</strong> is enabled (see the section <strong>Add PKCE on Top</strong>), the token request also includes <code>code_verifier</code>; PAR and PKCE address different legs of the same overall flow.</p><ol><li><p><strong>Browser</strong> &#8594; <code>GET /account</code></p></li><li><p><strong>Quarkus</strong> &#8594; <code>POST /realms/quarkus/protocol/openid-connect/ext/par/request</code> with the authorization request parameters and client authentication</p></li><li><p><strong>Keycloak</strong> &#8594; returns <code>request_uri</code> and <code>expires_in</code></p></li><li><p><strong>Quarkus</strong> &#8594; redirects the browser to <code>/protocol/openid-connect/auth</code> with <code>client_id</code> and 
<code>request_uri</code></p></li><li><p><strong>User</strong> logs in on Keycloak</p></li><li><p><strong>Keycloak</strong> &#8594; redirects the browser back to Quarkus with the authorization code</p></li><li><p><strong>Quarkus</strong> &#8594; exchanges the code for ID token, access token, and refresh token</p></li><li><p><strong>Browser</strong> &#8594; sees the protected page</p></li></ol><p>So here is the win in one sentence: the browser still does login, but the full authorization request does not ride through it anymore. <a href="https://datatracker.ietf.org/doc/html/rfc9126">RFC 9126</a> defines this push-plus-<code>request_uri</code> handoff, and Quarkus lines up with it through configuration.</p><p>One production detail matters. The <code>request_uri</code> is short-lived. If login takes too long and the reference expires before authorization finishes, the flow fails. That is expected. The short lifetime helps with replay resistance. Keep it in mind when you debug slow or interrupted logins.</p><h2><strong>Add PKCE on Top</strong></h2><p>PAR alone is useful. For sensitive apps, PAR plus PKCE is the baseline you want. We already introduced PKCE in the opening; now let&#8217;s turn it on.</p><p>Add these properties to <code>application.properties</code>:</p><pre><code><code>quarkus.oidc.authentication.pkce-required=true
quarkus.oidc.authentication.state-secret=8f2ef0d782b24016a4a998f5d8b1a2ce</code></code></pre><p><code>quarkus.oidc.authentication.pkce-required=true</code> turns PKCE on. You also need <code>quarkus.oidc.authentication.state-secret</code> so Quarkus can encrypt the PKCE verifier in the state cookie. The <a href="https://quarkus.io/guides/security-oidc-code-flow-authentication">Quarkus OIDC authorization code flow guide</a> shows a 32-character example secret.</p><p>Generate one with OpenSSL if you want a fresh value:</p><pre><code><code>openssl rand -hex 16</code></code></pre><p>The <a href="https://www.keycloak.org/securing-apps/oidc-layers">Keycloak OIDC documentation</a> recommends PKCE together with PAR in stronger profiles. Neither replaces the other: PAR keeps the heavy request off the front channel and protects the request leg, while PKCE ties the code exchange back to the original client and protects the exchange leg. They solve different steps; use both.</p><h2><strong>Require PAR on the Keycloak Side</strong></h2><p>Right now your Quarkus client uses PAR because you told it to. That is a client-side choice. In stricter environments you also want the authorization server to reject non-PAR authorization requests.</p><p>Keycloak can publish <code>require_pushed_authorization_requests</code> in metadata when you enforce PAR. Quarkus can also turn PAR on automatically when discovery says pushed authorization requests are required. 
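</p><p>If you want to verify that flag programmatically, for example in an integration test, a plain-Java check of the discovery document is enough. This is a sketch: the metadata field <code>require_pushed_authorization_requests</code> comes from RFC 9126, while the class and method names here are made up for illustration, and the string match deliberately avoids a JSON dependency.</p>

```java
public class ParDiscoveryCheck {

    // Returns true when an OIDC discovery document advertises mandatory PAR.
    // Crude string match; a real test would parse the JSON properly.
    static boolean parRequired(String discoveryJson) {
        return discoveryJson.replace(" ", "")
                .contains("\"require_pushed_authorization_requests\":true");
    }

    public static void main(String[] args) {
        // Stand-in for the body of the curl call against the local realm
        String sample = "{\"issuer\": \"http://localhost:8180/realms/quarkus\", "
                + "\"require_pushed_authorization_requests\": true}";
        System.out.println(parRequired(sample)); // prints true for this sample
    }
}
```

<p>Feed it the discovery response once you have flipped the realm setting and you can assert the rollout in CI.</p><p>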
See the <a href="https://quarkus.io/guides/security-oidc-configuration-properties-reference">Quarkus OIDC configuration reference</a>.</p><p>In practice, enforce it in Keycloak for the realm or client, then verify the discovery document again:</p><pre><code><code>curl -s http://localhost:8180/realms/quarkus/.well-known/openid-configuration | grep require_pushed_authorization_requests</code></code></pre><p>When that setting becomes <code>true</code>, clients that try to send a normal front-channel authorization request without PAR will be rejected. That is the point where PAR stops being a nice hardening option and becomes policy.</p><p>For day-to-day PAR experiments, Dev Services plus default <code>quarkus-app</code> / <code>secret</code> is enough. For a shared team baseline, I still like a checked-in realm file (<code>quarkus.keycloak.devservices.realm-path</code>) or the explicit Podman setup so client type and secrets stay visible in review.</p><h2><strong>Production Hardening</strong></h2><h3><strong>What Happens Under Load</strong></h3><p>PAR adds one back-channel request before the redirect. Login now depends on the PAR endpoint, the authorization endpoint, and the token endpoint all being reachable. If Keycloak is slow or the network between Quarkus and Keycloak is unhealthy, login can fail earlier in the flow. That is expected. You moved work off the browser leg onto a server-to-server leg, so monitor that path too. </p><h3><strong>Tampering and Trust Boundaries</strong></h3><p>With a classic flow, the authorization request becomes a front-channel redirect URL. With PAR, the client authenticates to the PAR endpoint and pushes the request directly. That tightens who can create the request. It does not fix bad intent. If your Quarkus client asks for too many scopes, PAR still protects exactly that request.</p><h3><strong>Session and Logout Behavior</strong></h3><p>PAR does not change how Quarkus handles sessions. 
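</p><p>Logout itself stays plain configuration. A minimal sketch, assuming the standard RP-initiated logout properties from the Quarkus OIDC code flow guide; the paths are examples, and the post-logout target must also be registered as a valid post-logout redirect URI on the Keycloak client:</p>

```properties
# Hitting this path triggers RP-initiated logout against Keycloak
quarkus.oidc.logout.path=/logout
# Where the browser lands after Keycloak clears its session
quarkus.oidc.logout.post-logout-path=/account
```

<p>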
After the code exchange you still have a normal <code>web-app</code> with Quarkus-managed tokens and cookies. You still need solid logout, tight cookie scope, HTTPS in real deployments, and consistent secrets across instances. The <a href="https://quarkus.io/guides/security-oidc-code-flow-authentication">Quarkus OIDC authorization code flow guide</a> covers logout paths; everything else about session hygiene is still on you.</p><h2><strong>Conclusion</strong></h2><p>We built a Quarkus OIDC app that protects a real endpoint, runs against local Keycloak, uses PAR to keep authorization request data off the browser URL, and still ends with the same code-flow tokens after login. Your resource code stays familiar. The shift is the trust boundary on the login redirect: the browser no longer carries the full authorization request, Quarkus pushes it to Keycloak, and the redirect only carries a short-lived <code>request_uri</code>. That is a real hardening step for sensitive apps. For stricter deployments, add PKCE and require PAR on the server too. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.the-main-thread.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.the-main-thread.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>