Build a Real-Time ISS Tracker with Quarkus, SSE, and Qute
A hands-on Java tutorial that uses a typed REST client, scheduled polling, server-sent events, and a live map UI to track the International Space Station in real time.
Most developers look at a small real-time app like this and think it is mainly a frontend exercise. Poll an API, move an icon on a map, done. That works for a demo, but it breaks down fast when you turn it into an actual service. Browsers start polling too often, upstream calls pile up, your server does duplicate work for every tab, and one slow public API drags the whole thing down.
The better mental model is this: the browser is not the source of truth. Your Quarkus service is. It owns the upstream HTTP call, it decides how often data gets refreshed, it caches the latest good result, and it pushes updates to every connected client. That gives you one polling loop, one failure boundary, and one place to harden behavior when the upstream service is slow or unavailable.
This matters even in a small tutorial. Public APIs fail. Network calls stall. Frontends reconnect. SSE clients disappear without warning. If you do not put the right boundaries in place, a harmless little tracker turns into a noisy service that wastes threads, floods logs, and shows stale or broken data when you most need predictable behavior.
In this tutorial, we build the safer version. We use a typed REST client for the Open Notify ISS endpoint, a scheduler that polls every 10 seconds, an application-scoped cache that stores the latest fix, an SSE endpoint that broadcasts updates, and a Qute frontend that renders the marker on a world map.
The ISS has always had a special pull on us at home. The kids and I have been fascinated by NASA, rockets, astronauts, and all the space stuff for as long as I can remember, and the ISS makes that fascination feel real because it is not science fiction or a distant concept; it is up there right now, moving above our heads. It is also a nice reminder that space software lives under hard, real-world constraints where reliability, testability, and long-term maintainability matter more than hype. That is one reason the stable JVM ecosystem keeps showing up in serious environments: NASA’s own history includes Java-based work such as Java Pathfinder at Ames, and even Java Champion James Weaver has been featured in a NASA Goddard colloquium context.
Prerequisites
You need a recent Java and Maven setup, and you should already be comfortable with CDI, REST endpoints, and basic Quarkus project structure. This is not a beginner Java tutorial. It is aimed at Java developers who want a clean end-to-end example that they can understand, run, and extend.
Java 21 installed
Maven 3.9+
Quarkus CLI installed, or comfort with the Maven plugin
Basic understanding of CDI, JAX-RS, and JSON
A working internet connection because the service calls the Open Notify ISS endpoint
Project Setup
Create the project or start from my GitHub repository:
quarkus create app com.themainthread:iss-tracker \
--extension='rest-jackson,rest-client-jackson,qute,rest-qute,scheduler,smallrye-health' \
--no-code
cd iss-tracker
Each extension has a job:
rest-jackson - REST endpoints with Jackson JSON serialization
rest-client-jackson - typed REST client support for the upstream ISS API
qute - server-side templates
rest-qute - smooth integration between Qute and REST resources
scheduler - periodic polling of the ISS position
smallrye-health - /q/health endpoints and readiness checks
Two extra test dependencies are needed: quarkus-junit-mockito for @InjectMock support in Quarkus tests, and RestAssured for HTTP-level assertions:
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-junit-mockito</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.rest-assured</groupId>
<artifactId>rest-assured</artifactId>
<scope>test</scope>
</dependency>
Create the source folder structure:
mkdir -p src/main/java/com/themainthread/iss/{client,health,resource,service,util}
mkdir -p src/main/resources/templates/IndexResource
mkdir -p src/test/java/com/themainthread/iss/{resource,service,util}
Implementation
Model the upstream API
The Open Notify ISS endpoint returns a very small JSON payload with latitude and longitude as strings, not doubles. Model that shape exactly first. Do not “improve” the upstream contract in your record. Parse when you need numbers. The API docs show that shape explicitly.
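For reference, a successful response from iss-now.json looks like this (values illustrative), with latitude and longitude delivered as JSON strings:

```json
{
  "message": "success",
  "timestamp": 1716835200,
  "iss_position": {
    "latitude": "12.3456",
    "longitude": "-78.9012"
  }
}
```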
Create src/main/java/com/themainthread/iss/client/IssPosition.java:
package com.themainthread.iss.client;
import com.fasterxml.jackson.annotation.JsonProperty;
public record IssPosition(
@JsonProperty("latitude") String latitude,
@JsonProperty("longitude") String longitude) {
public double latDouble() {
return Double.parseDouble(latitude);
}
public double lonDouble() {
return Double.parseDouble(longitude);
}
}
Create src/main/java/com/themainthread/iss/client/IssNowResponse.java:
package com.themainthread.iss.client;
import com.fasterxml.jackson.annotation.JsonProperty;
public record IssNowResponse(
String message,
long timestamp,
@JsonProperty("iss_position") IssPosition issPosition) {
}
Java records are the right fit here. The data is read-only, small, and short-lived. You get immutability and clean logging without boilerplate. The limit is obvious too: records do not replace domain logic. They are just a good boundary for a small external JSON contract.
Create the typed REST client
Now we create the client that talks to Open Notify. Quarkus supports this pattern directly with MicroProfile REST Client, and the Quarkus REST client guide still recommends rest-client-jackson when you want Jackson-based JSON mapping.
Create src/main/java/com/themainthread/iss/client/IssApiClient.java:
package com.themainthread.iss.client;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
@RegisterRestClient(configKey = "iss-api")
@Path("/iss-now.json")
public interface IssApiClient {
@GET
@Produces(MediaType.APPLICATION_JSON)
IssNowResponse fetchPosition();
}
This client gives you one clear interface for the upstream dependency. That matters later when you test the poller. You can swap in a mock and drive the scheduler logic without touching the network.
The limit is that this interface does not solve resilience on its own. If the upstream hangs, this call hangs until your configured timeout hits. That is why we set tight timeouts in configuration and use SKIP on the scheduler. The interface is clean. The failure behavior still depends on the rest of the system.
Convert latitude and longitude to map pixels
We need server-side projection math so every client gets the same coordinates. That avoids duplicated math in every browser and keeps the frontend simple.
Create src/main/java/com/themainthread/iss/util/MercatorProjection.java:
package com.themainthread.iss.util;
public final class MercatorProjection {
public static final int MAP_WIDTH = 1280;
public static final int MAP_HEIGHT = 640;
private static final double MAX_WEB_MERCATOR_LAT = 85.05112878;
private MercatorProjection() {
}
public static int[] toPixel(double latDeg, double lonDeg) {
double clampedLat = Math.max(-MAX_WEB_MERCATOR_LAT, Math.min(MAX_WEB_MERCATOR_LAT, latDeg));
double x = (lonDeg + 180.0) / 360.0 * MAP_WIDTH;
double latRad = Math.toRadians(clampedLat);
double mercatorY = Math.log(Math.tan(Math.PI / 4.0 + latRad / 2.0));
double y = (MAP_HEIGHT / 2.0) - (MAP_WIDTH / (2.0 * Math.PI)) * mercatorY;
int pixelX = (int) Math.round(Math.max(0, Math.min(MAP_WIDTH - 1, x)));
int pixelY = (int) Math.round(Math.max(0, Math.min(MAP_HEIGHT - 1, y)));
return new int[] { pixelX, pixelY };
}
}
This utility gives you one guarantee: the same (lat, lon) always maps to the same (x, y) on your chosen canvas. That consistency matters because the backend and frontend now agree on a fixed coordinate system.
What it does not guarantee is cartographic correctness for every map projection you might use later. This is tuned for a Mercator-style image and a fixed canvas size. Change the map projection or aspect ratio and you must revisit the math. That is normal. Projection code is always coupled to how you draw the world.
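To make the projection concrete, here is the same math inlined as a standalone sanity check (same constants as MercatorProjection, edge clamping omitted for brevity). "Null Island" at (0, 0) lands exactly at the canvas center:

```java
public class ProjectionCheck {
    static final int MAP_WIDTH = 1280, MAP_HEIGHT = 640;

    // Same formulas as MercatorProjection.toPixel, inlined for illustration.
    static long[] pixel(double latDeg, double lonDeg) {
        double x = (lonDeg + 180.0) / 360.0 * MAP_WIDTH;
        double mercatorY = Math.log(Math.tan(Math.PI / 4.0 + Math.toRadians(latDeg) / 2.0));
        double y = (MAP_HEIGHT / 2.0) - (MAP_WIDTH / (2.0 * Math.PI)) * mercatorY;
        return new long[] { Math.round(x), Math.round(y) };
    }

    public static void main(String[] args) {
        // The equator/prime-meridian intersection maps to the exact center.
        long[] center = pixel(0.0, 0.0);
        System.out.println(center[0] + "," + center[1]); // prints 640,320
    }
}
```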
Build the application-scoped cache and stream source
The cache is the center of the service. The scheduler writes to it. The JSON endpoint reads from it. The SSE endpoint streams from it. This is where we stop the browser from becoming the source of truth.
Create src/main/java/com/themainthread/iss/service/IssPositionCache.java:
package com.themainthread.iss.service;
import java.time.Instant;
import java.util.concurrent.atomic.AtomicReference;
import com.themainthread.iss.client.IssNowResponse;
import com.themainthread.iss.util.MercatorProjection;
import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.operators.multi.processors.BroadcastProcessor;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class IssPositionCache {
public record PositionFix(
double latitude,
double longitude,
int pixelX,
int pixelY,
long timestamp,
Instant updatedAt) {
}
private final AtomicReference<PositionFix> latest = new AtomicReference<>(null);
private final BroadcastProcessor<PositionFix> processor = BroadcastProcessor.create();
public PositionFix update(IssNowResponse response) {
double lat = response.issPosition().latDouble();
double lon = response.issPosition().lonDouble();
int[] pixels = MercatorProjection.toPixel(lat, lon);
PositionFix fix = new PositionFix(
lat,
lon,
pixels[0],
pixels[1],
response.timestamp(),
Instant.now());
latest.set(fix);
return fix;
}
public PositionFix latest() {
return latest.get();
}
public boolean hasData() {
return latest.get() != null;
}
public void broadcast(PositionFix fix) {
processor.onNext(fix);
}
public Multi<PositionFix> stream() {
return processor;
}
}
AtomicReference is enough here because each update replaces the whole snapshot. Readers never see half-written state. They either see the old fix, the new fix, or null before the first successful poll. That is exactly what you want.
The BroadcastProcessor gives you hot-stream behavior. Connected clients get new events. Clients that connect later do not get a backlog. That is the right behavior for a live tracker. The limit is that this is not a durable event log. If you want history, you need storage. A processor is for fan-out, not persistence.
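The snapshot semantics are worth seeing in isolation. This minimal sketch (plain Java, independent of Quarkus and Mutiny) shows the same pattern the cache uses: one atomic swap per update, and null as the honest "no data yet" answer:

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch of the cache's snapshot semantics: each update swaps in a
// complete immutable record, so readers see the old fix, the new fix, or
// null before the first poll - never a half-written pair.
public class SnapshotCache {
    public record Fix(double lat, double lon) {}

    private final AtomicReference<Fix> latest = new AtomicReference<>(null);

    public void update(double lat, double lon) {
        latest.set(new Fix(lat, lon)); // one atomic swap, no locks needed
    }

    public Fix latest() { return latest.get(); }

    public boolean hasData() { return latest.get() != null; }
}
```

Because the record is immutable, lat and lon always belong to the same fix; there is no window where a reader could pair an old latitude with a new longitude.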
Add the scheduler that polls Open Notify
Now we connect the REST client to the cache.
Create src/main/java/com/themainthread/iss/service/IssPoller.java:
package com.themainthread.iss.service;
import org.eclipse.microprofile.rest.client.inject.RestClient;
import org.jboss.logging.Logger;
import com.themainthread.iss.client.IssApiClient;
import com.themainthread.iss.service.IssPositionCache.PositionFix;
import io.quarkus.scheduler.Scheduled;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
@ApplicationScoped
public class IssPoller {
private static final Logger LOG = Logger.getLogger(IssPoller.class);
@Inject
@RestClient
IssApiClient client;
@Inject
IssPositionCache cache;
@Scheduled(every = "10s", concurrentExecution = Scheduled.ConcurrentExecution.SKIP)
void poll() {
try {
var response = client.fetchPosition();
if (!"success".equalsIgnoreCase(response.message())) {
LOG.warnf("Unexpected upstream response message: %s", response.message());
return;
}
PositionFix fix = cache.update(response);
cache.broadcast(fix);
LOG.debugf("ISS fix updated - lat=%.4f lon=%.4f x=%d y=%d",
fix.latitude(), fix.longitude(), fix.pixelX(), fix.pixelY());
} catch (Exception e) {
LOG.errorf("Failed to fetch ISS position: %s", e.getMessage());
}
}
}
This method does one thing well. It fetches one fresh upstream value, updates one shared snapshot, and broadcasts one event. That simplicity matters in production. Scheduled code becomes fragile fast when it tries to do too much.
The critical setting here is concurrentExecution = SKIP. Without it, a slow upstream call can overlap with the next scheduled run. That is how tiny polling jobs slowly turn into thread waste. With SKIP, you lose one tick when the upstream is slow. That is fine for a public ISS feed updating every 10 seconds. Missing one poll is cheaper than building a queue of blocked work.
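If you want to see what SKIP buys you, the idea can be sketched with a plain compare-and-set guard. This is a hypothetical stand-in to illustrate the semantics, not how Quarkus implements it internally:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for ConcurrentExecution.SKIP: if the previous run is
// still in flight, a new tick returns immediately instead of queueing behind it.
public class SkipGuard {
    private final AtomicBoolean running = new AtomicBoolean(false);

    /** Runs the work unless a previous run is still active; returns false if skipped. */
    public boolean pollOnce(Runnable work) {
        if (!running.compareAndSet(false, true)) {
            return false; // tick skipped - no queue of blocked work builds up
        }
        try {
            work.run();
            return true;
        } finally {
            running.set(false);
        }
    }
}
```

The key property: a slow run costs you at most the ticks it overlaps, and nothing accumulates behind it.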
Expose the JSON and SSE endpoints
Quarkus REST supports SSE with @Produces(MediaType.SERVER_SENT_EVENTS) and @RestStreamElementType(MediaType.APPLICATION_JSON).
Create src/main/java/com/themainthread/iss/resource/IssResource.java:
package com.themainthread.iss.resource;
import org.jboss.resteasy.reactive.RestStreamElementType;
import com.themainthread.iss.service.IssPositionCache;
import com.themainthread.iss.service.IssPositionCache.PositionFix;
import io.smallrye.mutiny.Multi;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
@Path("/api/iss")
public class IssResource {
@Inject
IssPositionCache cache;
@GET
@Path("/position")
@Produces(MediaType.APPLICATION_JSON)
public PositionFix position() {
return cache.latest();
}
@GET
@Path("/stream")
@Produces(MediaType.SERVER_SENT_EVENTS)
@RestStreamElementType(MediaType.APPLICATION_JSON)
public Multi<PositionFix> stream() {
return cache.stream();
}
}
The JSON endpoint is useful for the initial page load. The SSE endpoint is useful after that. This split keeps the frontend responsive even if it opens before the first live event arrives.
One small but important point: returning null from position() maps cleanly to 204 No Content. That means you do not have to invent a fake placeholder payload before the first successful poll. The client can handle “no data yet” honestly.
Serve the Qute frontend
Qute’s @CheckedTemplate support gives you build-time verification that your template exists. That is exactly the kind of failure you want at build time, not after deployment. The Qute guide still documents this pattern. (quarkus.io)
Create src/main/java/com/themainthread/iss/resource/IndexResource.java:
package com.themainthread.iss.resource;
import io.quarkus.qute.CheckedTemplate;
import io.quarkus.qute.TemplateInstance;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
@Path("/")
public class IndexResource {
@CheckedTemplate
public static class Templates {
public static native TemplateInstance index();
}
@GET
@Produces(MediaType.TEXT_HTML)
public TemplateInstance index() {
return Templates.index();
}
}Create src/main/resources/templates/IndexResource/index.html:
(I’ve omitted the styles here; check out the GitHub repository for the full stylesheet!)
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>ISS Tracker</title>
<style>
</style>
</head>
<body>
<h1>ISS Real-Time Tracker</h1>
<div id="status">Connecting to live feed...</div>
<div id="map-container">
<img id="world-map"
src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/80/World_map_-_low_resolution.svg/1280px-World_map_-_low_resolution.svg.png"
alt="World map">
<div id="iss-marker"></div>
</div>
<div id="coords">
Lat: <span id="lat">-</span>
Lon: <span id="lon">-</span>
Updated: <span id="updated">-</span>
</div>
<footer>
Data from <a href="https://open-notify.org/" target="_blank">Open Notify</a>
</footer>
<script>
const MAP_WIDTH = 1280;
const MAP_HEIGHT = 640;
const marker = document.getElementById("iss-marker");
const status = document.getElementById("status");
const latEl = document.getElementById("lat");
const lonEl = document.getElementById("lon");
const updatedEl = document.getElementById("updated");
function placeMarker(pixelX, pixelY) {
const leftPercent = (pixelX / MAP_WIDTH) * 100;
const topPercent = (pixelY / MAP_HEIGHT) * 100;
marker.style.left = leftPercent + "%";
marker.style.top = topPercent + "%";
}
function renderFix(fix) {
placeMarker(fix.pixelX, fix.pixelY);
latEl.textContent = fix.latitude.toFixed(4) + "°";
lonEl.textContent = fix.longitude.toFixed(4) + "°";
updatedEl.textContent = new Date(fix.updatedAt).toLocaleTimeString();
}
fetch("/api/iss/position")
.then(response => response.status === 204 ? null : response.json())
.then(fix => {
if (fix) {
renderFix(fix);
}
});
const source = new EventSource("/api/iss/stream");
source.onopen = () => {
status.textContent = "Live - updates every 10 seconds";
status.style.color = "#3fd6a5";
};
source.onmessage = (event) => {
const fix = JSON.parse(event.data);
renderFix(fix);
};
source.onerror = () => {
status.textContent = "Connection lost - reconnecting...";
status.style.color = "#ffb366";
};
</script>
</body>
</html>
This frontend stays small because the server already did the hard work. The browser only places a marker and updates a few text fields. That is the right split for a live app like this.
It also handles reconnects cleanly because EventSource reconnects automatically. That does not make SSE free. A large number of open clients still means a large number of open HTTP connections. For a small tracker that is fine. For public-scale fan-out, you would think harder about limits, proxies, and backpressure.
Add a readiness check
Health endpoints are only useful if they reflect real readiness. For this service, “ready” means we have fetched at least one good ISS position.
Create src/main/java/com/themainthread/iss/health/IssDataReadinessCheck.java:
package com.themainthread.iss.health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;
import com.themainthread.iss.service.IssPositionCache;
import com.themainthread.iss.service.IssPositionCache.PositionFix;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
@Readiness
@ApplicationScoped
public class IssDataReadinessCheck implements HealthCheck {
@Inject
IssPositionCache cache;
@Override
public HealthCheckResponse call() {
if (!cache.hasData()) {
return HealthCheckResponse.named("iss-data")
.down()
.withData("reason", "No successful poll yet")
.build();
}
PositionFix fix = cache.latest();
return HealthCheckResponse.named("iss-data")
.up()
.withData("latitude", String.valueOf(fix.latitude()))
.withData("longitude", String.valueOf(fix.longitude()))
.withData("updatedAt", fix.updatedAt().toString())
.build();
}
}
This check gives you an honest signal for deployment platforms. A process that has started but never fetched data is alive, but it is not ready. That distinction matters once you put this behind a load balancer.
Configuration
Configure the application in src/main/resources/application.properties:
quarkus.application.name=iss-tracker
quarkus.http.port=8080
quarkus.rest-client.iss-api.url=http://api.open-notify.org
quarkus.rest-client.iss-api.connect-timeout=3000
quarkus.rest-client.iss-api.read-timeout=5000
quarkus.log.level=INFO
quarkus.log.category."com.themainthread.iss".level=DEBUG
%dev.quarkus.scheduler.start-mode=forced
%dev.quarkus.log.category."com.themainthread.iss".level=DEBUG
quarkus.rest-client.iss-api.url points the typed REST client at the public ISS endpoint documented by Open Notify. (open-notify.org)
connect-timeout=3000 prevents a dead TCP connection attempt from hanging too long. When the upstream is down, three seconds is enough to fail fast and keep your scheduler thread moving.
read-timeout=5000 prevents the actual response read from blocking forever. This is the difference between a recoverable slow upstream and a scheduler job that quietly stalls.
%dev.quarkus.scheduler.start-mode=forced makes development behavior obvious when you are iterating locally. You do not want to wonder whether the scheduler started after a hot reload.
Production Hardening
What happens when the upstream API is slow
The upstream service is free and public. Treat it like a dependency you do not control. That means timeouts are mandatory, and overlapping polls are a bug, not a feature.
With the current setup, one slow call blocks only one scheduled execution. The next tick gets skipped because of Scheduled.ConcurrentExecution.SKIP. That is intentional. You keep the last good fix, you avoid overlapping network calls, and your service stays stable.
Without SKIP, slow upstream calls stack up. That is how small scheduled jobs turn into thread leaks under bad network conditions. The tracker does not need every possible update. It needs predictable behavior.
SSE fan-out is cheap until it is not
SSE is simpler than WebSockets for one-way live updates. That is why it fits this tutorial well. The browser API is tiny, the server code is small, and reconnection behavior is built in.
But every client still holds a long-lived HTTP connection. A few browsers are nothing. Thousands of browsers change the operational picture. Reverse proxy timeouts, max connections, and memory overhead start to matter. For a public internet app, you would put this behind a proxy configured for long-lived streams and think about per-IP connection limits.
Stale data is better than broken data
When a poll fails, we keep the last successful position in memory. This is the correct failure mode for a tracker. Users keep seeing the last known good fix, not a blank map or a server error.
This does not mean the data is fresh. It means the service is honest about its last successful state. If freshness matters more than availability, expose an age field and let the frontend show “stale” after a threshold. For this tutorial, readiness tells you whether data exists at all, and the timestamps tell you how current it is.
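One hedged option, sketched here as a standalone helper (the class and method names are hypothetical, not part of the tutorial code), is to compute the age of the last fix and flag it as stale past a threshold:

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical helper: expose the age of the last successful fix so the
// frontend can show "stale" past a threshold instead of silently presenting
// old data as fresh.
public class FixAge {
    public static long ageSeconds(Instant updatedAt, Instant now) {
        return Duration.between(updatedAt, now).getSeconds();
    }

    public static boolean isStale(Instant updatedAt, Instant now, long thresholdSeconds) {
        return ageSeconds(updatedAt, now) > thresholdSeconds;
    }
}
```

Wiring this in would mean adding an age field to the PositionFix payload and a threshold check in renderFix on the frontend.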
No auth does not mean no abuse surface
This service has no login and no database. That does not make it safe by default. The abuse surface is the number of open SSE connections and the number of HTTP requests you let through.
If you publish this beyond your laptop, add rate limiting. Also think about response caching on /api/iss/position, and consider authentication if the stream is not meant to be public. Real-time endpoints are attractive to scrapers because they are cheap to consume and easy to reconnect.
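A minimal sketch of that kind of guard, assuming you enforce it in a request filter or at the reverse proxy (the class here is hypothetical; the counting logic itself really is this simple):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical guard: a hard cap on concurrent SSE connections. The caller
// rejects the stream when tryAcquire() returns false, and calls release()
// when the client disconnects.
public class SseConnectionLimiter {
    private final int max;
    private final AtomicInteger open = new AtomicInteger();

    public SseConnectionLimiter(int max) { this.max = max; }

    /** Returns false when the cap is reached; the caller should reject the stream. */
    public boolean tryAcquire() {
        while (true) {
            int current = open.get();
            if (current >= max) return false;
            if (open.compareAndSet(current, current + 1)) return true;
        }
    }

    /** Call when the client disconnects, e.g. from a stream termination callback. */
    public void release() { open.decrementAndGet(); }
}
```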
Verification
Run the application
Start dev mode:
quarkus dev
Open the app at http://localhost:8080
You should see a world map, an ISS marker, and coordinates that update every 10 seconds. I have also added a small tracking feature in the GitHub repository, so make sure to check that out!
Verify the JSON endpoint
Call the snapshot endpoint:
curl -i http://localhost:8080/api/iss/position
Expected behavior after the first successful poll:
HTTP/1.1 200 OK
Content-Type: application/json
Expected payload shape:
{
"latitude": 12.3456,
"longitude": -78.9012,
"pixelX": 360,
"pixelY": 280,
"timestamp": 1716835200,
"updatedAt": "2026-03-14T10:15:30.123Z"
}
Before the first successful poll, you should get:
HTTP/1.1 204 No Content
That verifies the cache boundary is working correctly.
Verify the readiness endpoint
Call readiness:
curl http://localhost:8080/q/health/ready
Expected output after a successful poll:
{
"status": "UP",
"checks": [
{
"name": "iss-data",
"status": "UP",
"data": {
"latitude": "-24.2744",
"longitude": "-86.3863",
"updatedAt": "2026-03-14T10:56:08.176814Z"
}
}
]
}
This proves the app is not just running. It has real upstream data.
Test the projection math
Create src/test/java/com/themainthread/iss/util/MercatorProjectionTest.java:
package com.themainthread.iss.util;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;
class MercatorProjectionTest {
@Test
void nullIslandMapsNearCenter() {
int[] pixels = MercatorProjection.toPixel(0.0, 0.0);
assertTrue(Math.abs(pixels[0] - 640) <= 2);
assertTrue(Math.abs(pixels[1] - 320) <= 2);
}
@Test
void easternDatelineMapsNearRightEdge() {
int[] pixels = MercatorProjection.toPixel(0.0, 179.9);
assertTrue(pixels[0] > 1260);
}
@Test
void westernDatelineMapsNearLeftEdge() {
int[] pixels = MercatorProjection.toPixel(0.0, -179.9);
assertTrue(pixels[0] < 20);
}
}
This test proves the one piece of pure math in the service. If the projection is wrong, the frontend marker is wrong even when every other part works.
Test the REST resource
Create src/test/java/com/themainthread/iss/resource/IssResourceTest.java:
package com.themainthread.iss.resource;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.is;
import static org.mockito.Mockito.when;
import java.time.Instant;
import org.junit.jupiter.api.Test;
import com.themainthread.iss.service.IssPositionCache;
import com.themainthread.iss.service.IssPositionCache.PositionFix;
import io.quarkus.test.InjectMock;
import io.quarkus.test.junit.QuarkusTest;
@QuarkusTest
class IssResourceTest {
@InjectMock
IssPositionCache cache;
@Test
void returnsCurrentFix() {
PositionFix fix = new PositionFix(
51.5074,
-0.1278,
640,
250,
1716835200L,
Instant.parse("2026-03-14T10:15:30Z"));
when(cache.latest()).thenReturn(fix);
given()
.when().get("/api/iss/position")
.then()
.statusCode(200)
.body("latitude", is(51.5074f))
.body("longitude", is(-0.1278f))
.body("pixelX", is(640))
.body("pixelY", is(250));
}
@Test
void returns204WhenNoFixExists() {
when(cache.latest()).thenReturn(null);
given()
.when().get("/api/iss/position")
.then()
.statusCode(204);
}
}
This test proves the HTTP contract. It does not care about the scheduler or the external API. That is good. Resource tests should stay focused.
Test the scheduler
Create src/test/java/com/themainthread/iss/service/IssPollerTest.java:
package com.themainthread.iss.service;
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.when;
import org.eclipse.microprofile.rest.client.inject.RestClient;
import org.junit.jupiter.api.Test;
import com.themainthread.iss.client.IssApiClient;
import com.themainthread.iss.client.IssNowResponse;
import com.themainthread.iss.client.IssPosition;
import io.quarkus.test.InjectMock;
import io.quarkus.test.junit.QuarkusTest;
import jakarta.inject.Inject;
@QuarkusTest
class IssPollerTest {
@InjectMock
@RestClient
IssApiClient apiClient;
@Inject
IssPoller poller;
@Inject
IssPositionCache cache;
@Test
void pollPopulatesCacheOnSuccess() {
IssNowResponse response = new IssNowResponse(
"success",
1716835200L,
new IssPosition("51.5074", "-0.1278"));
when(apiClient.fetchPosition()).thenReturn(response);
poller.poll();
assertTrue(cache.hasData());
var fix = cache.latest();
assertEquals(51.5074, fix.latitude());
assertEquals(-0.1278, fix.longitude());
}
@Test
void pollSwallowsNetworkErrors() {
when(apiClient.fetchPosition()).thenThrow(new RuntimeException("timeout"));
assertDoesNotThrow(() -> poller.poll());
}
}
This test proves the behavior that matters most in production: one bad network call does not crash the polling loop.
Conclusion
We built a Quarkus service that treats real-time tracking as a backend responsibility, not a browser trick. The REST client isolates the upstream dependency, the scheduler controls polling, the application-scoped cache holds the last good state, the SSE endpoint fans out updates efficiently, and the Qute frontend stays simple because the server already did the hard work. More importantly, the service fails in the right way: it skips overlapping polls, keeps stale-but-valid data when the upstream breaks, and reports readiness only after it has something real to serve.
This is a small app, but the architecture is the same one you use when a toy demo has to survive the real world.