Groan-Worthy Greatness: Build an AI Dad Joke Generator with Java, Quarkus, and Langchain4j
Learn how to wire up local LLMs with @RegisterAiService and bring some humor (and hands-on AI skills) to your Quarkus apps.
Welcome to the one tutorial guaranteed to make your code and your coworkers groan.
In this hands-on adventure, we’re going to build a web app that serves one purpose: deliver the finest, cheesiest dad jokes on demand. But under the hood, you’ll get a powerful lesson in building AI-infused Java applications using Quarkus, the blazing-fast Java framework, and Langchain4j, the library that bridges your code to large language models (LLMs).
And yes, we’re running everything locally using Ollama. No OpenAI API keys, no cloud dependencies. Just you, your machine, and an unrelenting stream of puns.
What You’ll Learn (Besides Bad Puns)
Behind the chuckles, this project covers several serious Java concepts:
Using @RegisterAiService to wire up an AI-powered Java interface.
Configuring and running a local LLM via Ollama.
Injecting AI into your Quarkus REST endpoints.
Building a simple but stylish UI with Qute templates.
Let’s get cracking.
Generate the Project
We’ll create a Quarkus project with the following extensions:
quarkus-rest-jackson: for exposing REST endpoints.
quarkus-langchain4j-ollama: for AI integration.
quarkus-rest-qute: for building HTML templates.
Open a terminal and run:
mvn io.quarkus.platform:quarkus-maven-plugin:create \
-DprojectGroupId=org.acme \
-DprojectArtifactId=dad-joke-generator \
-Dextensions="rest-jackson,quarkus-langchain4j-ollama,rest-qute"
cd dad-joke-generator
Quarkus will spin up the model in a local Ollama container for you. But you can also use a natively installed Ollama. If you’re not feeling like coding today, just grab the working example from my GitHub repository.
Configure Quarkus to Talk to Ollama
Open src/main/resources/application.properties
and add:
quarkus.langchain4j.ollama.chat-model.model-id=llama3.1:latest
This tells Quarkus and Langchain4j to use the llama3.1
model running locally.
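If you want to experiment beyond the single required property, the Ollama extension exposes further settings. The values below are illustrative assumptions, not required configuration:

```properties
# Optional tuning (illustrative values, not required):
# give the local model more time to answer on first load
quarkus.langchain4j.ollama.timeout=60s
# raise the temperature for more varied (and sillier) jokes
quarkus.langchain4j.ollama.chat-model.temperature=0.9
```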
Create the AI Dad Joke Service
Now for the fun part. Let’s define an AI-powered interface that Quarkus will magically implement.
Create src/main/java/org/acme/DadJokeService.java
:
package org.acme;
import dev.langchain4j.service.SystemMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
@RegisterAiService
public interface DadJokeService {
@SystemMessage("You are a dad joke generator. Your jokes should be short, cheesy, and guaranteed to make people groan.")
String getDadJoke();
}
What’s going on here?
@RegisterAiService: Tells Quarkus to turn this interface into an injectable bean backed by an LLM.
@SystemMessage: Provides the model with instructions; this is your system prompt.
String getDadJoke(): This method, when called, triggers a prompt to the LLM and returns the generated joke.
No boilerplate. No HTTP calls. Just inject and call.
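As a next step, you could let callers steer the joke. The sketch below is my own extension of the service, not part of the tutorial project: a @UserMessage template whose {topic} placeholder is filled from the method parameter at call time.

```java
package org.acme;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Hypothetical themed variant of the service: the {topic} placeholder
// in the user message is resolved from the method parameter.
@RegisterAiService
public interface ThemedDadJokeService {

    @SystemMessage("You are a dad joke generator. Your jokes should be short, cheesy, and guaranteed to make people groan.")
    @UserMessage("Tell me a dad joke about {topic}.")
    String getDadJoke(String topic);
}
```

Inject it like any other bean and call jokeService.getDadJoke("coffee") to get a joke on a given theme.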
Create the REST Endpoint
We’ll now expose the joke to the world via a REST endpoint that renders a Qute template.
Rename the generated GreetingResource to src/main/java/org/acme/DadJokeResource.java
and replace its content:
package org.acme;
import io.quarkus.qute.Template;
import io.quarkus.qute.TemplateInstance;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
@Path("/")
public class DadJokeResource {
@Inject
DadJokeService jokeService;
@Inject
Template dadjoke;
@GET
@Produces(MediaType.TEXT_HTML)
public TemplateInstance get() {
return dadjoke.data("joke", jokeService.getDadJoke());
}
}
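If you also want the raw joke without any HTML, a small companion resource could serve it as plain text. This class and the /api/joke path are my additions for illustration, not part of the original project:

```java
package org.acme;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// Hypothetical companion endpoint: exposes the same AI service as
// plain text, handy for curl or programmatic clients.
@Path("/api/joke")
public class DadJokeApiResource {

    @Inject
    DadJokeService jokeService;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String joke() {
        return jokeService.getDadJoke();
    }
}
```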
This class:
Injects our DadJokeService to get jokes.
Injects the dadjoke.html Qute template.
Calls the LLM and passes the joke to the HTML template.
Now let’s make that frontend look good.
Build the Frontend with Qute
Create src/main/resources/templates/dadjoke.html
:
<!DOCTYPE html>
<html>
<head>
<title>Dad Joke Generator</title>
<style> /* skipped for brevity */
</style>
</head>
<body>
<div class="container">
<h1>Guaranteed Groans</h1>
<p class="joke">{joke}</p>
<a href="/" class="button">Get Another Joke</a>
</div>
</body>
</html>
This template displays a joke and gives users a big blue button to fetch another. It’s styled just enough to feel like a modern single-purpose app.
Run It!
Make sure Ollama is still running, then start Quarkus in dev mode:
./mvnw quarkus:dev
Now open your browser and head to http://localhost:8080
You should see a web app with a dad joke ready to go. Click the button to get another one. Every request triggers the LLM to generate a fresh gem of a groaner.
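You can also sanity-check the endpoint from the terminal while dev mode is running (assuming the default port 8080):

```shell
# Fetch the rendered page and pull out the line containing the joke
curl -s http://localhost:8080/ | grep 'class="joke"'
```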
Behind the Scenes
Here’s what just happened:
You defined an AI service using a simple interface and annotation.
Quarkus + Langchain4j wired that interface to a local LLM via Ollama.
You built a REST resource that calls this AI service.
You used Qute to render dynamic content in HTML.
You created a full AI-infused Java web app without touching a single line of HTTP client code.
And you did it all without leaving the JVM.
Where to Go Next?
This is just the beginning. With the same building blocks, you can:
Build customer support bots.
Summarize documents.
Extract structured data from PDFs.
Generate personalized marketing emails.
Build multi-agent systems with LangGraph4j.
Or just keep pumping out dad jokes.
Final Joke (Generated by the App Itself)
Why did the Java developer go broke?
Because he used float instead of double in his salary calculations.
Share Your Groans
If you liked this tutorial, drop a comment or share it with your fellow developers. Just make sure to include a dad joke in return.
Stay cheesy.