The missing transport layer between your LLM and your browser. Spring AI gives you `Flux<ChatResponse>`. LangChain4j gives you `StreamingChatResponseHandler`. Neither delivers tokens to the user. Atmosphere does — over WebSocket with SSE/Long-Polling fallback, reconnection, rooms, presence, and Kafka/Redis clustering. Add one dependency to your Spring Boot or Quarkus app.

Frameworks like Spring AI, LangChain4j, and Embabel handle LLM ↔ server communication. Atmosphere handles the other half: server ↔ browser. Built on 18 years of WebSocket experience, rewritten for JDK 21 virtual threads. It streams tokens to the client in real time over WebSocket (with SSE/Long-Polling fallback), manages reconnection and backpressure, and provides React/Vue/Svelte hooks — so you don't have to build all of that yourself.
- **`@AiEndpoint` + `@Prompt`** — annotate a class, receive prompts, stream tokens. Runs on virtual threads.
- **Built-in LLM client** — zero-dependency `OpenAiCompatibleClient` that talks to OpenAI, Gemini, Ollama, or any OpenAI-compatible API. No Spring AI or LangChain4j required.
- **Adapter SPI** — plug in Spring AI (`Flux<ChatResponse>`), LangChain4j (`StreamingChatResponseHandler`), or Embabel (`OutputChannel`). Your framework generates tokens; Atmosphere delivers them.
- **Standardized wire protocol** — every token is a JSON frame with `type`, `data`, `sessionId`, and `seq` for ordering. Progress events, metadata (model, token usage), and error frames are built in.
- **AI as a room participant** — `LlmRoomMember` joins a Room like any user. When someone sends a message, the LLM receives it, streams a response, and broadcasts it back. Humans and AI in the same room.
- **Client hooks** — `useStreaming()` for React/Vue/Svelte gives you `fullText`, `isStreaming`, `progress`, `metadata`, and `error` out of the box. No custom WebSocket code.
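To make the wire protocol concrete, here is an illustrative sketch of a token frame using the field names listed above (`type`, `data`, `sessionId`, `seq`). The exact frame layout is defined by Atmosphere; real frames are emitted by `StreamingSession`, not built by hand:

```java
// Illustrative only: shows the shape of a token frame with the documented
// fields. Clients reassemble the stream by ordering frames on `seq`.
public class FrameSketch {
    public static String tokenFrame(String sessionId, long seq, String token) {
        return String.format(
            "{\"type\":\"token\",\"data\":\"%s\",\"sessionId\":\"%s\",\"seq\":%d}",
            token, sessionId, seq);
    }

    public static void main(String[] args) {
        System.out.println(tokenFrame("abc-123", 0, "Hello"));
        System.out.println(tokenFrame("abc-123", 1, " world"));
    }
}
```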
```java
@AiEndpoint(path = "/ai/chat", systemPrompt = "You are a helpful assistant")
public class MyChatBot {

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        AiConfig.get().client().streamChatCompletion(
            ChatCompletionRequest.builder(AiConfig.get().model())
                .user(message).build(),
            session);
    }
}
```

Configure with environment variables — no code changes to switch providers:
| Variable | Description | Default |
|---|---|---|
| `LLM_MODE` | `remote` (cloud) or `local` (Ollama) | `remote` |
| `LLM_MODEL` | `gemini-2.5-flash`, `gpt-5`, `o3-mini`, `llama3.2`, … | `gemini-2.5-flash` |
| `LLM_API_KEY` | API key (or `GEMINI_API_KEY` for Gemini) | — |
| `LLM_BASE_URL` | Override endpoint (auto-detected from model name) | auto |
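For example, to point the same build at a local Ollama model, only the environment changes (the variable names are from the table above; the model value is illustrative):

```shell
# Switch providers via environment only — no code changes
export LLM_MODE=local
export LLM_MODEL=llama3.2
unset LLM_API_KEY          # not needed in local mode
echo "mode=$LLM_MODE model=$LLM_MODEL"
```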
Atmosphere doesn't replace your AI framework. It gives it a transport:
**Spring AI adapter**

```java
@Message
public void onMessage(String prompt) {
    StreamingSession session = StreamingSessions.start(resource);
    springAiAdapter.stream(chatClient, prompt, session);
    // Spring AI's Flux<ChatResponse> → session.send(token) → WebSocket frame
}
```

**LangChain4j adapter**
```java
@Message
public void onMessage(String prompt) {
    StreamingSession session = StreamingSessions.start(resource);
    model.chat(ChatMessage.userMessage(prompt),
        new AtmosphereStreamingResponseHandler(session));
    // LangChain4j callbacks → session.send(token) → WebSocket frame
}
```

**Embabel adapter**
```kotlin
@Message
fun onMessage(prompt: String) {
    val session = StreamingSessions.start(resource)
    embabelAdapter.stream(AgentRequest("assistant") { channel ->
        agentPlatform.run(prompt, channel)
    }, session)
    // Embabel agent events → session.send(token) / session.progress() → WebSocket frame
}
```

On the client, the `useStreaming()` hook consumes the stream:

```tsx
import { useStreaming } from 'atmosphere.js/react';

function AiChat() {
  const { fullText, isStreaming, progress, send } = useStreaming({
    request: { url: '/ai/chat', transport: 'websocket' },
  });
  return (
    <div>
      <button onClick={() => send('Explain WebSockets')} disabled={isStreaming}>
        Ask
      </button>
      {progress && <p className="muted">{progress}</p>}
      <p>{fullText}</p>
    </div>
  );
}
```

An LLM can also join a room as a regular participant:

```java
var client = AiConfig.get().client();
var assistant = new LlmRoomMember("assistant", client, "gpt-5",
    "You are a helpful coding assistant");

Room room = rooms.room("dev-chat");
room.joinVirtual(assistant);
// Now when any user sends a message, the LLM responds in the same room
```

See the AI / LLM Streaming wiki for the full guide.
```xml
<dependency>
  <groupId>org.atmosphere</groupId>
  <artifactId>atmosphere-runtime</artifactId>
  <version>4.0.3</version>
</dependency>
```

For Spring Boot:

```xml
<dependency>
  <groupId>org.atmosphere</groupId>
  <artifactId>atmosphere-spring-boot-starter</artifactId>
  <version>4.0.3</version>
</dependency>
```

For Quarkus:

```xml
<dependency>
  <groupId>org.atmosphere</groupId>
  <artifactId>atmosphere-quarkus-extension</artifactId>
  <version>4.0.3</version>
</dependency>
```

With Gradle:

```groovy
implementation 'org.atmosphere:atmosphere-runtime:4.0.3'
// or
implementation 'org.atmosphere:atmosphere-spring-boot-starter:4.0.3'
// or
implementation 'org.atmosphere:atmosphere-quarkus-extension:4.0.3'
```

And the browser client from npm:

```shell
npm install atmosphere.js
```

| Module | Artifact | Description |
|---|---|---|
| Core runtime | `atmosphere-runtime` | WebSocket, SSE, Long-Polling transport layer (Servlet 6.0+) |
| Spring Boot starter | `atmosphere-spring-boot-starter` | Auto-configuration for Spring Boot 4.0.2+ |
| Quarkus extension | `atmosphere-quarkus-extension` | Build-time processing for Quarkus 3.21+ |
| AI streaming | `atmosphere-ai` | Token-by-token LLM response streaming |
| Spring AI adapter | `atmosphere-spring-ai` | Spring AI `ChatClient` integration |
| LangChain4j adapter | `atmosphere-langchain4j` | LangChain4j streaming integration |
| MCP server | `atmosphere-mcp` | Model Context Protocol server over WebSocket |
| Rooms | built into `atmosphere-runtime` | Room management with join/leave and presence |
| Redis clustering | `atmosphere-redis` | Cross-node broadcasting via Redis pub/sub |
| Kafka clustering | `atmosphere-kafka` | Cross-node broadcasting via Kafka |
| Durable sessions | `atmosphere-durable-sessions` | Session persistence across restarts (SQLite / Redis) |
| Kotlin DSL | `atmosphere-kotlin` | Builder API and coroutine extensions |
| TypeScript client | `atmosphere.js` (npm) | Browser client with React, Vue, and Svelte bindings |
Server-side room management with presence tracking:

```java
RoomManager rooms = RoomManager.getOrCreate(framework);
Room lobby = rooms.room("lobby");
lobby.enableHistory(100); // replay last 100 messages to new joiners

lobby.join(resource, new RoomMember("user-1", Map.of("name", "Alice")));
lobby.broadcast("Hello everyone!");

lobby.onPresence(event -> log.info("{} {} room '{}'",
    event.member().id(), event.type(), event.room().name()));
```

The starter provides auto-configuration for Spring Boot 4.0.2+.
```yaml
atmosphere:
  packages: com.example.chat
```

**Configuration properties**

| Property | Default | Description |
|---|---|---|
| `servlet-path` | `/atmosphere/*` | Servlet URL mapping |
| `packages` | | Annotation scanning packages |
| `order` | `0` | Servlet load-on-startup order |
| `session-support` | `false` | Enable HttpSession support |
| `websocket-support` | | Enable/disable WebSocket |
| `heartbeat-interval-in-seconds` | | Server heartbeat frequency |
| `broadcaster-class` | | Custom Broadcaster FQCN |
| `broadcaster-cache-class` | | Custom BroadcasterCache FQCN |
| `init-params` | | Map of any ApplicationConfig key/value |
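A fuller `application.yml` combining several of these properties might look like this (a sketch — the keys come from the table above, the values are examples):

```yaml
atmosphere:
  servlet-path: /atmosphere/*
  packages: com.example.chat
  session-support: true
  heartbeat-interval-in-seconds: 30
  broadcaster-cache-class: org.atmosphere.cache.UUIDBroadcasterCache
```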
**GraalVM native image**

The starter includes AOT runtime hints. Activate the `native` Maven profile:

```shell
./mvnw -Pnative package -pl samples/spring-boot-chat
./samples/spring-boot-chat/target/atmosphere-spring-boot-chat
```

Requires GraalVM JDK 25+ (Spring Boot 4.0 / Spring Framework 7 baseline).
The extension provides build-time annotation scanning for Quarkus 3.21+.

```properties
quarkus.atmosphere.packages=com.example.chat
```

**Configuration properties**

| Property | Default | Description |
|---|---|---|
| `quarkus.atmosphere.servlet-path` | `/atmosphere/*` | Servlet URL mapping |
| `quarkus.atmosphere.packages` | | Annotation scanning packages |
| `quarkus.atmosphere.load-on-startup` | `1` | Servlet load-on-startup order |
| `quarkus.atmosphere.session-support` | `false` | Enable HttpSession support |
| `quarkus.atmosphere.broadcaster-class` | | Custom Broadcaster FQCN |
| `quarkus.atmosphere.broadcaster-cache-class` | | Custom BroadcasterCache FQCN |
| `quarkus.atmosphere.heartbeat-interval-in-seconds` | | Server heartbeat frequency |
| `quarkus.atmosphere.init-params` | | Map of any ApplicationConfig key/value |
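An `application.properties` using a few of these keys might look like this (a sketch — keys from the table above, values illustrative):

```properties
quarkus.atmosphere.packages=com.example.chat
quarkus.atmosphere.session-support=true
quarkus.atmosphere.heartbeat-interval-in-seconds=30
```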
**GraalVM native image**

```shell
./mvnw -Pnative package -pl samples/quarkus-chat
./samples/quarkus-chat/target/atmosphere-quarkus-chat-*-runner
```

Requires GraalVM JDK 21+ or Mandrel. Use `-Dquarkus.native.container-build=true` to build without a local GraalVM installation.
atmosphere.js includes bindings for React, Vue, and Svelte:
**React**

```tsx
import { AtmosphereProvider, useAtmosphere, useRoom, usePresence } from 'atmosphere.js/react';

function App() {
  return (
    <AtmosphereProvider>
      <Chat />
    </AtmosphereProvider>
  );
}

function Chat() {
  const { state, data, push } = useAtmosphere<Message>({
    request: { url: '/chat', transport: 'websocket' },
  });
  return state === 'connected'
    ? <button onClick={() => push({ text: 'Hello' })}>Send</button>
    : <p>Connecting…</p>;
}

function ChatRoom() {
  const { joined, members, messages, broadcast } = useRoom<ChatMessage>({
    request: { url: '/atmosphere/room', transport: 'websocket' },
    room: 'lobby',
    member: { id: 'user-1' },
  });
  return (
    <div>
      <p>{members.length} online</p>
      {messages.map((m, i) => <div key={i}>{m.member.id}: {m.data.text}</div>)}
      <button onClick={() => broadcast({ text: 'Hi' })}>Send</button>
    </div>
  );
}
```

**Vue**
```vue
<script setup>
import { useAtmosphere, useRoom, usePresence } from 'atmosphere.js/vue';

const { state, data, push } = useAtmosphere({ url: '/chat', transport: 'websocket' });

const { joined, members, messages, broadcast } = useRoom(
  { url: '/atmosphere/room', transport: 'websocket' },
  'lobby',
  { id: 'user-1' },
);

const { count, isOnline } = usePresence(
  { url: '/atmosphere/room', transport: 'websocket' },
  'lobby',
  { id: currentUser.id },
);
</script>

<template>
  <div>
    <p>{{ count }} online</p>
    <div v-for="(m, i) in messages" :key="i">{{ m.member.id }}: {{ m.data.text }}</div>
    <button @click="broadcast({ text: 'Hi' })" :disabled="!joined">Send</button>
  </div>
</template>
```

**Svelte**
```svelte
<script>
import { createAtmosphereStore, createRoomStore, createPresenceStore } from 'atmosphere.js/svelte';

const { store: chat, push } = createAtmosphereStore({ url: '/chat', transport: 'websocket' });

const { store: lobby, broadcast } = createRoomStore(
  { url: '/atmosphere/room', transport: 'websocket' },
  'lobby',
  { id: 'user-1' },
);

const presence = createPresenceStore(
  { url: '/atmosphere/room', transport: 'websocket' },
  'lobby',
  { id: 'user-1' },
);
</script>

{#if $lobby.joined}
  <p>{$presence.count} online</p>
  {#each $lobby.messages as m}
    <div>{m.member.id}: {m.data.text}</div>
  {/each}
  <button on:click={() => broadcast({ text: 'Hi' })}>Send</button>
{:else}
  <p>Connecting…</p>
{/if}
```

Builder API with coroutine support:
```kotlin
import org.atmosphere.kotlin.atmosphere

val handler = atmosphere {
    onConnect { resource ->
        println("${resource.uuid()} connected via ${resource.transport()}")
    }
    onMessage { resource, message ->
        resource.broadcaster.broadcast(message)
    }
    onDisconnect { resource ->
        println("${resource.uuid()} left")
    }
}

framework.addAtmosphereHandler("/chat", handler)
```

Coroutine extensions:
```kotlin
broadcaster.broadcastSuspend("Hello!")   // suspends instead of blocking
resource.writeSuspend("Direct message")  // suspends instead of blocking
```

Redis and Kafka broadcasters for multi-node deployments. Messages broadcast on one node are delivered to clients on all nodes.
**Redis**

Add `atmosphere-redis` to your dependencies. Configuration:

| Property | Default | Description |
|---|---|---|
| `org.atmosphere.redis.url` | `redis://localhost:6379` | Redis connection URL |
| `org.atmosphere.redis.password` | | Optional password |
**Kafka**

Add `atmosphere-kafka` to your dependencies. Configuration:

| Property | Default | Description |
|---|---|---|
| `org.atmosphere.kafka.bootstrap.servers` | `localhost:9092` | Kafka broker(s) |
| `org.atmosphere.kafka.topic.prefix` | `atmosphere.` | Topic name prefix |
| `org.atmosphere.kafka.group.id` | auto-generated | Consumer group ID |
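In a Spring Boot app, these keys can be supplied through the starter's `init-params` map, assuming the starter forwards them as ApplicationConfig entries as described above (values are examples):

```yaml
atmosphere:
  init-params:
    org.atmosphere.kafka.bootstrap.servers: kafka-1:9092,kafka-2:9092
    org.atmosphere.kafka.topic.prefix: atmosphere.
```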
Sessions survive server restarts. On reconnection, the client sends its session token and the server restores room memberships, broadcaster subscriptions, and metadata.

```properties
atmosphere.durable-sessions.enabled=true
```

Three `SessionStore` implementations: InMemory (development), SQLite (single-node), Redis (clustered).
**Micrometer metrics**

```java
MeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
AtmosphereMetrics metrics = AtmosphereMetrics.install(framework, registry);
metrics.instrumentRoomManager(roomManager);
```

| Metric | Type | Description |
|---|---|---|
| `atmosphere.connections.active` | Gauge | Active connections |
| `atmosphere.broadcasters.active` | Gauge | Active broadcasters |
| `atmosphere.connections.total` | Counter | Total connections opened |
| `atmosphere.messages.broadcast` | Counter | Messages broadcast |
| `atmosphere.broadcast.timer` | Timer | Broadcast latency |
| `atmosphere.rooms.active` | Gauge | Active rooms |
| `atmosphere.rooms.members` | Gauge | Members per room (tagged) |
**OpenTelemetry tracing**

```java
framework.interceptor(new AtmosphereTracing(GlobalOpenTelemetry.get()));
```

Creates spans for every request with attributes: `atmosphere.resource.uuid`, `atmosphere.transport`, `atmosphere.action`, `atmosphere.broadcaster`, `atmosphere.room`.
**Backpressure**

```java
framework.interceptor(new BackpressureInterceptor());
```

| Parameter | Default | Description |
|---|---|---|
| `org.atmosphere.backpressure.highWaterMark` | `1000` | Max pending messages per client |
| `org.atmosphere.backpressure.policy` | `drop-oldest` | `drop-oldest`, `drop-newest`, or `disconnect` |
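The policy semantics can be illustrated with a plain bounded queue — a sketch of the behavior only, not Atmosphere's implementation:

```java
import java.util.ArrayDeque;

// Illustrates drop-oldest vs drop-newest semantics for a per-client
// outbound queue with a high-water mark. Not Atmosphere's actual code.
public class PolicySketch {
    public static void offer(ArrayDeque<String> q, String msg,
                             int highWaterMark, String policy) {
        if (q.size() < highWaterMark) {
            q.addLast(msg);
        } else if (policy.equals("drop-oldest")) {
            q.pollFirst();      // evict the oldest pending message
            q.addLast(msg);
        }                       // drop-newest: silently discard msg
    }

    public static void main(String[] args) {
        ArrayDeque<String> q = new ArrayDeque<>();
        for (int i = 1; i <= 5; i++) offer(q, "m" + i, 3, "drop-oldest");
        System.out.println(q);  // only the newest messages remain queued
    }
}
```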
**Cache configuration**

| Parameter | Default | Description |
|---|---|---|
| `org.atmosphere.cache.UUIDBroadcasterCache.maxPerClient` | `1000` | Max cached messages per client |
| `org.atmosphere.cache.UUIDBroadcasterCache.messageTTL` | `300` | Per-message TTL in seconds |
| `org.atmosphere.cache.UUIDBroadcasterCache.maxTotal` | `100000` | Global cache size limit |
| Java | Spring Boot | Quarkus |
|---|---|---|
| 21+ | 4.0.2+ | 3.21+ |
- Samples
- Wiki
- AI / LLM Streaming
- MCP Server
- Durable Sessions
- Kotlin DSL
- Migration Guide
- Javadoc
- atmosphere.js API
- React / Vue / Svelte bindings
- TypeScript/JavaScript: atmosphere.js 5.0 (included in this repository)
- Java/Scala/Android: wAsync
Available via Async-IO.org
Copyright 2008-2026 Async-IO.org