Today’s episode is a special one for me.
I’m welcoming my friend and colleague Alex Soto to the show. If you follow the Quarkus ecosystem, you already know Alex. He has been shaping developer experience around Java and cloud-native for years. And today, he’s doing something very practical.
He takes an existing Quarkus application. No greenfield. No clean slate. A real project. And he shows how to use IBM Bob to integrate LangChain4j step by step.
This is important. Because most teams don’t start from zero. They have code. They have structure. They have production systems. The question is not “How do I build an AI app?” The question is: “How do I add AI to what I already have?”
In this episode, Alex answers exactly that.
There is a big difference between a CRUD application and an AI-ready application. At least, that’s what most developers think.
You have your repository. You have your service layer. You expose everything through REST. It works. It scales. It follows clean architecture. End of story.
But here’s the thing.
Your users don’t think in endpoints. They don’t think in HTTP verbs. They think in sentences.
In this episode, I show how we take a standard Quarkus booking application and turn it into a natural language assistant. We don’t rewrite it. We don’t redesign the architecture. We connect it to an LLM in a controlled way.
And we use IBM Bob to guide the transformation.
This article breaks down what actually happens under the hood.
The Starting Point: A Classic Quarkus Service
Our starting point is familiar.
A BookingRepository that talks to the database
A BookingService that implements business logic
A BookingResource exposing HTTP endpoints
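In code, the REST layer of that structure looks roughly like this (a minimal sketch; the Booking type and the exact method signatures are illustrative, not taken from the episode):

```java
// BookingResource.java - the classic REST entry point.
// BookingService and Booking are assumed to exist in the project.
@Path("/bookings")
public class BookingResource {

    @Inject
    BookingService bookingService;

    // The caller must know the endpoint, the ID, and the parameter format.
    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Booking getBooking(@PathParam("id") String id) {
        return bookingService.getBookingDetails(id);
    }
}
```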
This is how most enterprise Java applications look. Clean separation. Clear boundaries.
But there is friction.
If a user wants to check a booking, they need to know:
The correct endpoint
The booking ID
The correct parameter format
The system is precise. The user must be precise too.
That works for APIs. It does not work for conversational interfaces.
The goal is not to replace the existing architecture. The goal is to augment it.
Step 1: Adding the AI Backbone
The first real step is simple: we add LangChain4j to the project.
Instead of manually browsing extensions and copying Maven coordinates, we told Bob:
“Register dependency to use OpenAI.”
Bob inspected the project and added:
quarkus-langchain4j-openai

This sounds trivial. It is not.
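In a Maven project, that extension corresponds to a dependency along these lines (the version is typically managed by the Quarkus BOM, so none is pinned here):

```xml
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-openai</artifactId>
</dependency>
```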
Doing this step-by-step matters. When you instruct an AI assistant incrementally, it works with the real state of your project. It reduces hallucinations. It keeps changes scoped. It prevents massive diff explosions.
This is how you should work with coding agents in production codebases. Small steps. Controlled context. Explicit instructions.
Now the application is AI-capable. But not AI-integrated.
Step 2: Defining the AI Service
In Quarkus, an AI Service is your contract to the LLM.
We asked Bob to create a BookingAssistant interface.
What we got was more than a stub.
Bob created:
System messages defining the assistant persona
Methods for chat interactions
Methods for cancellation explanations
Methods for confirmation generation
The system message is critical.
If you define:

“You are a helpful booking assistant.”

you shape the entire behavior of the model. Tone. Intent. Scope.
Without this, the LLM behaves like a generic chatbot. With it, it behaves like a domain assistant.
This is architecture, not decoration.
You are defining boundaries for probabilistic behavior.
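A BookingAssistant along the lines Bob generates might look like this (a sketch using the quarkus-langchain4j annotations; the exact method set and prompt wording in the episode may differ):

```java
// BookingAssistant.java - the contract between the application and the LLM.
@RegisterAiService
public interface BookingAssistant {

    // The system message defines persona, tone, and scope.
    @SystemMessage("""
            You are a helpful booking assistant.
            Only answer questions about bookings. Stay within that scope.
            """)
    String chat(@UserMessage String question);

    @SystemMessage("You are a helpful booking assistant.")
    @UserMessage("Explain the cancellation policy for booking {bookingId}.")
    String explainCancellation(String bookingId);
}
```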
Step 3: Giving the AI Real Power with Tools
An LLM can generate text. It cannot read your database.
That’s where Tools come in.
Tools are simply Java methods exposed to the LLM with metadata. The model decides when to call them.
We told Bob:
“Expose the getBookingDetails method as a tool.”
Bob did three important things.
First, it annotated the existing method in BookingService with @Tool.
Second, it generated a natural language description of what the method does.
Third, it updated the assistant configuration so the LLM knows this capability exists.
This is the real transformation.
We did not duplicate business logic.
We did not create parallel AI-only services.
We reused existing domain code.
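In practice, the tool exposure is a one-line annotation on the existing method (a sketch; the repository call and signatures are illustrative):

```java
// BookingService.java - existing business logic, now visible to the LLM.
@ApplicationScoped
public class BookingService {

    @Inject
    BookingRepository bookingRepository;

    // The description tells the model when this method is relevant.
    @Tool("Retrieves the details of a booking given its booking number")
    public Booking getBookingDetails(String bookingNumber) {
        return bookingRepository.findByNumber(bookingNumber);
    }
}
```

On the assistant side, registering the bean, for example via @RegisterAiService(tools = BookingService.class), is what makes the capability known to the model.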
Now when a user says:
“Show me details for booking 123456”
The model reasons about intent, decides that getBookingDetails is relevant, calls the tool, and incorporates the result into its answer.
This is controlled augmentation.
Your service remains the source of truth.
The LLM becomes an orchestrator.
Step 4: Closing the Loop with a Chat Endpoint
Once the assistant exists and tools are registered, we need an entry point.
Bob generated:
A new REST endpoint for chat interaction
A scaffolded application.properties with OpenAI configuration
Example configuration values (API key placeholder, model, temperature)
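The chat endpoint itself stays small, because all the intelligence lives behind the assistant interface (a sketch; the path and media types are illustrative):

```java
// ChatResource.java - the conversational entry point.
@Path("/chat")
public class ChatResource {

    @Inject
    BookingAssistant assistant;

    // Free-form text in, free-form text out. The LLM and the tools
    // do the routing that endpoints used to force onto the user.
    @POST
    @Consumes(MediaType.TEXT_PLAIN)
    @Produces(MediaType.TEXT_PLAIN)
    public String chat(String question) {
        return assistant.chat(question);
    }
}
```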
One interesting detail.
Bob scanned the existing test data and generated a cURL command using a real booking number found in the project.
That means it understood the code context well enough to extract meaningful data.
This is where AI-assisted development becomes practical. It does not just generate code. It connects dots inside your repository.
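The scaffolded configuration typically boils down to a few properties (the key names follow the quarkus-langchain4j-openai conventions; the model name and temperature shown here are placeholder values, not the ones from the episode):

```properties
# application.properties - OpenAI wiring for quarkus-langchain4j
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.chat-model.model-name=gpt-4o-mini
quarkus.langchain4j.openai.chat-model.temperature=0.2
```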
What Actually Changed?
Let’s be clear.
We did not replace REST endpoints.
We did not remove the service layer.
We did not introduce magic.
We added:
A LangChain4j extension
An AI Service interface
Tool annotations on existing business logic
A chat endpoint
That’s it.
The existing architecture stayed intact.
This is important for enterprise systems. You cannot afford a rewrite just to add conversational access. You need incremental transformation.
Architectural Insight: Standard and Smart Can Coexist
There is a misconception that AI-first systems require a different architecture.
In reality, the clean layering of a Quarkus application is ideal for AI augmentation.
Your service layer already encapsulates domain logic.
Your repository already isolates persistence.
Your REST layer already defines boundaries.
You just expose selected methods as tools and let the LLM orchestrate them.
The intelligence does not replace structure. It sits on top of it.
Why Use Bob for This?
You could wire all this manually.
But Bob helps with:
Discovering correct extensions
Generating consistent AI Service interfaces
Adding tool annotations correctly
Updating prompts and configuration
Generating example test calls
The key benefit is not speed alone.
It is correctness within context.
Bob reads your project. It sees your packages. It understands your test data. It proposes changes that align with your codebase.
That reduces the cognitive load when introducing AI features into legacy or existing systems.
From CRUD to Conversational
The gap between a standard application and an intelligent one is no longer architectural. It is connective.
You already have the data.
You already have the business logic.
You already have clear domain boundaries.
AI integration becomes a matter of:
Exposing logic as tools
Defining the assistant persona
Wiring a chat endpoint
That’s the shift.
You don’t build a new system. You make your existing system accessible through language.
Final Thought
Transforming a Quarkus application into an AI-ready assistant is not about hype. It is about controlled extension of your existing architecture.
In the podcast episode, we walk through the full implementation live with IBM Bob.
The takeaway is simple.
You don’t need to throw away your Java services to make them smart.
You just need to connect them the right way.