Progressive Image Streaming in Quarkus: Real-Time Visuals with Java
Create a reactive REST API that delivers images byte-by-byte using Quarkus and Mutiny, mimicking AI preview behavior in your web apps.
AI tools don’t just show images; they appear to grow them in real time. What if you could do the same from a Java backend?
In this tutorial, you’ll build a Quarkus service that streams image data progressively to the browser using a reactive API. The browser will display the image bit by bit as it arrives without waiting for the full file.
In enterprise apps, progressive loading improves UX for large or remote assets:
Previewing medical or satellite imagery
Streaming generated or transformed images
Low-latency dashboards and monitoring UIs
AI visualization pipelines (e.g., diffusion model previews)
Quarkus is ideal for this pattern because it supports reactive, non-blocking I/O with Mutiny and Vert.x under the hood.
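If Mutiny is new to you, here is a minimal sketch of its Multi type, the reactive stream we’ll return from the endpoint (the "chunk-…" strings are placeholders, not real image data):

```java
import io.smallrye.mutiny.Multi;

public class ReactiveDemo {
    public static void main(String[] args) {
        // A Multi emits zero or more items to its subscriber over time.
        // Here: three string items, transformed and printed as they arrive.
        Multi.createFrom().range(1, 4)
            .onItem().transform(i -> "chunk-" + i)
            .subscribe().with(System.out::println);
    }
}
```

In the endpoint below, the same pattern carries byte[] chunks instead of strings.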
Prerequisites
You’ll need:
Java 17+
Quarkus CLI
Podman or Docker (optional for Dev Services)
A modern browser (Chrome, Firefox, Edge)
quarkus create app org.acme:image-streamer \
--extension="rest-jackson"
cd image-streamer

And as usual: the full runnable example is in my GitHub repository.
Prepare an Example Image
Place an example image in src/main/resources/images/download.png:
You can grab my watercolor Quarkus logo or anything else that suits your needs.
We’ll stream this image chunk by chunk.
Implement the Streaming Endpoint
Create ImageStreamerResource.java:
package org.acme;

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;

import io.smallrye.mutiny.Multi;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;

@Path("/api/image")
public class ImageStreamerResource {

    @GET
    @Path("/{name}")
    @Produces("image/png")
    public Multi<byte[]> streamImage(@PathParam("name") String name) {
        java.nio.file.Path path = java.nio.file.Path.of("src/main/resources/images", name);
        if (!Files.exists(path)) {
            throw new NotFoundException("Image not found: " + name);
        }
        try {
            InputStream inputStream = Files.newInputStream(path);
            byte[] buffer = new byte[4096];
            return Multi.createFrom().emitter(emitter -> {
                new Thread(() -> {
                    try (inputStream) {
                        int bytesRead;
                        while ((bytesRead = inputStream.read(buffer)) != -1) {
                            byte[] chunk = new byte[bytesRead];
                            System.arraycopy(buffer, 0, chunk, 0, bytesRead);
                            emitter.emit(chunk);
                            Thread.sleep(50); // simulate progressive generation
                        }
                        emitter.complete();
                    } catch (Exception e) {
                        emitter.fail(e);
                    }
                }).start();
            });
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}

This sends the PNG bytes in 4 KB chunks using Mutiny’s Multi<byte[]> reactive stream.
The browser can start rendering as soon as the first bytes arrive.
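The read-and-copy loop at the heart of the endpoint can be exercised on its own. Here is a stdlib-only sketch of the same chunking pattern; the 10,000-byte array is just a stand-in for real PNG data:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkReader {

    // Split an InputStream into fixed-size chunks, as the endpoint does.
    static List<byte[]> readChunks(InputStream in, int chunkSize) throws IOException {
        List<byte[]> chunks = new ArrayList<>();
        byte[] buffer = new byte[chunkSize];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            // Copy only the bytes actually read; the last chunk is usually shorter.
            chunks.add(Arrays.copyOf(buffer, bytesRead));
        }
        return chunks;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000]; // stand-in for PNG bytes
        List<byte[]> chunks = readChunks(new ByteArrayInputStream(data), 4096);
        System.out.println(chunks.size()); // 3 chunks: 4096 + 4096 + 1808
    }
}
```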
Add a Simple Frontend
Create src/main/resources/META-INF/resources/index.html:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Progressive Image Streaming</title>
    <style>
        /* skipped */
    </style>
</head>
<body>
<h1>Progressive Image Streaming Demo</h1>
<img id="progressive" alt="streamed image">
<script>
    const img = document.getElementById('progressive');
    const chunks = [];
    let currentBlobUrl = null;

    fetch('/api/image/download.png')
        .then(res => {
            const reader = res.body.getReader();

            function read() {
                reader.read().then(({ done, value }) => {
                    if (done) {
                        // Final update with all chunks
                        updateImage();
                        return;
                    }
                    chunks.push(value);
                    // Update image as chunks arrive
                    updateImage();
                    read();
                });
            }

            function updateImage() {
                // Revoke previous blob URL to free memory
                if (currentBlobUrl) {
                    URL.revokeObjectURL(currentBlobUrl);
                }
                // Create new blob from accumulated chunks
                const blob = new Blob(chunks, { type: 'image/png' });
                currentBlobUrl = URL.createObjectURL(blob);
                img.src = currentBlobUrl;
            }

            read();
        })
        .catch(err => {
            console.error('Error loading image:', err);
        });
</script>
</body>
</html>
This uses the Fetch Streams API to read data progressively and pipe it into an img tag.
Run and Verify
Start Quarkus in dev mode:
quarkus dev

Then open http://localhost:8080.
You’ll see the image slowly appear line by line, exactly like a low-latency AI preview.
To simulate slower networks, throttle your browser’s network speed to “Slow 3G” in DevTools.
Production and Performance Notes
Use non-blocking I/O (like Vert.x AsyncFile) for large or many concurrent images.
Tune buffer sizes and remove Thread.sleep() in production.
For AI pipelines, connect this to a reactive image generation process (e.g., OpenAI, Stable Diffusion, or LangChain4j streaming output).
Use HTTP/2 for smoother multiplexing over a single connection.
Set Cache-Control headers to manage downstream caching.
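As a sketch of the non-blocking variant, assuming the SmallRye Mutiny Vert.x bindings (io.vertx.mutiny.*) are on the classpath (Quarkus ships them with its reactive stack), the file read could look like this — the resource path and class name are illustrative:

```java
package org.acme;

import io.smallrye.mutiny.Multi;
import io.vertx.core.file.OpenOptions;
import io.vertx.mutiny.core.Vertx;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;

@Path("/api/image-async")
public class AsyncImageResource {

    @Inject
    Vertx vertx; // Mutiny-flavored Vert.x, injected by Quarkus

    @GET
    @Path("/{name}")
    @Produces("image/png")
    public Multi<byte[]> stream(@PathParam("name") String name) {
        String file = "src/main/resources/images/" + name;
        // open() is non-blocking; the AsyncFile is then consumed as a Multi of buffers
        return vertx.fileSystem().open(file, new OpenOptions().setRead(true))
                .onItem().transformToMulti(asyncFile -> asyncFile.toMulti()
                        .map(buffer -> buffer.getBytes())
                        .onTermination().call(asyncFile::close));
    }
}
```

No worker thread and no Thread.sleep(): backpressure and scheduling are handled by the Vert.x event loop.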
Advanced Variation: Real-Time AI Generation
Instead of reading from disk, replace the stream source with on-the-fly image generation, e.g., a model sending partial image bytes.
You can integrate LangChain4j or an SSE endpoint that emits partial results to the same Multi<byte[]>.
This enables real-time model visualization pipelines directly from Quarkus.
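One way to sketch that bridge: a Mutiny UnicastProcessor is both a Subscriber (the generator pushes partial chunks into it) and a Multi (the endpoint returns it). The byte arrays below are placeholders for partial image bytes from a model:

```java
import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.operators.multi.processors.UnicastProcessor;

public class GenerationBridge {

    public static void main(String[] args) {
        // The processor buffers items until a subscriber arrives.
        UnicastProcessor<byte[]> processor = UnicastProcessor.create();
        Multi<byte[]> stream = processor;

        // Stand-in for a model callback emitting partial image bytes
        processor.onNext(new byte[] { 1, 2, 3 });
        processor.onNext(new byte[] { 4, 5 });
        processor.onComplete();

        // In the real endpoint, 'stream' would be the returned Multi<byte[]>
        stream.subscribe().with(chunk -> System.out.println("chunk of " + chunk.length + " bytes"));
    }
}
```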
Your Java backend can now do what AI image generators do: Reveal beauty one byte at a time.