Server-Sent Events with Node.js
Real-time communication between a server and a browser has historically been one of the trickiest problems in web development. WebSockets, long-polling, and assorted hackish workarounds have all been used to push data from the server to the client. But there is a simpler, often overlooked alternative built right into the browser: Server-Sent Events (SSE).
Server-Sent Events provide a unidirectional channel from the server to the client over a plain
HTTP connection. The browser opens a persistent connection, and the server pushes text-based
messages down that connection whenever it wants. No upgrade handshake, no binary framing, no
special library on the client side. Just HTTP, a few formatting rules, and the native
EventSource API that every modern browser already supports.
This article walks through the protocol itself, builds a complete SSE server in Node.js from scratch, covers error handling, reconnection, named events, client-side consumption, and discusses when SSE is the right tool and when it is not.
How the SSE Protocol Works
At its core, SSE is a convention layered on top of a long-lived HTTP response. The client sends
a regular GET request. The server responds with the content type
text/event-stream and keeps the connection open. It then writes messages to the
response body, one after another, for as long as the connection lives.
Each message is a block of text composed of one or more fields, separated by newlines. A blank
line (two consecutive newline characters, \n\n) signals the end of a single
message. The fields that the protocol defines are:
- data: — the payload of the message. Multiple data: lines within a single message are concatenated with newline characters.
- event: — an optional event name. If present, the client dispatches the message to a listener registered for that specific event instead of the generic message event.
- id: — an optional identifier for the message. The browser stores this value and sends it back as the Last-Event-ID header if it needs to reconnect.
- retry: — an optional integer (in milliseconds) that tells the browser how long to wait before attempting reconnection after a dropped connection.
A minimal message looks like this:
data: Hello, world!\n\n
A more complete message with an event name and an identifier, again terminated by a blank line:
id: 42
event: temperature
data: {"sensor":"A3","value":22.5}
Lines that start with a colon (:) are comments. They are ignored by the client but
can be used as keep-alive signals to prevent intermediaries from closing an idle connection.
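To make the framing concrete, here is a simplified parser, not part of any server shown later, that applies these rules to a complete chunk of stream text. The real EventSource parser works incrementally and handles a few extra edge cases (carriage returns, partial reads), so treat this as an illustration only:

```javascript
// Simplified SSE parser for illustration: splits a chunk of stream text
// into messages and applies the field rules described above.
function parseStream(text) {
  const messages = [];
  for (const block of text.split('\n\n')) {
    const msg = { data: [] };
    for (const line of block.split('\n')) {
      // Blank lines and comment lines (starting with ':') are ignored
      if (line === '' || line.startsWith(':')) continue;
      const sep = line.indexOf(':');
      const field = sep === -1 ? line : line.slice(0, sep);
      let value = sep === -1 ? '' : line.slice(sep + 1);
      if (value.startsWith(' ')) value = value.slice(1); // one leading space is stripped
      if (field === 'data') msg.data.push(value);
      else if (field === 'event') msg.event = value;
      else if (field === 'id') msg.id = value;
      else if (field === 'retry') msg.retry = Number(value);
    }
    // Per the spec, a message with an empty data buffer is not dispatched;
    // multiple data: lines are joined with newlines into one payload
    if (msg.data.length) messages.push({ ...msg, data: msg.data.join('\n') });
  }
  return messages;
}
```

Feeding it the temperature message from above yields a single message with id '42', event 'temperature', and the JSON string as its data.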
A Minimal SSE Server in Node.js
Building an SSE endpoint with Node.js requires nothing more than the built-in http
module. The key steps are: set the correct response headers, disable any buffering, and write
properly formatted messages to the response stream.
const http = require('node:http');

const server = http.createServer((req, res) => {
  if (req.url === '/events') {
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    });

    // Send a comment line as an initial keep-alive
    res.write(': connected\n\n');

    const intervalId = setInterval(() => {
      const data = JSON.stringify({ time: new Date().toISOString() });
      res.write(`data: ${data}\n\n`);
    }, 1000);

    req.on('close', () => {
      clearInterval(intervalId);
      console.log('Client disconnected');
    });
    return;
  }

  res.writeHead(404);
  res.end('Not found');
});

server.listen(3000, () => {
  console.log('SSE server listening on port 3000');
});
Three response headers matter here. Content-Type: text/event-stream tells the
browser that this response is an event stream. Cache-Control: no-cache prevents
intermediate proxies and the browser from caching the response.
Connection: keep-alive is a hint to keep the TCP connection open, although in
HTTP/1.1 persistent connections are the default.
The req.on('close') listener is essential. When the client closes the tab, navigates
away, or explicitly closes the EventSource, the underlying TCP connection drops.
Node.js fires the close event on the request object, giving you the chance to clean
up resources — in this case, clearing the interval so you stop writing to a dead socket.
Handling CORS
If your client and your SSE server live on different origins, the browser will block the connection unless the server includes the appropriate CORS headers. Adding them is straightforward:
res.writeHead(200, {
  'Content-Type': 'text/event-stream',
  'Cache-Control': 'no-cache',
  'Connection': 'keep-alive',
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Headers': 'Cache-Control',
});
In production you would replace the wildcard with the specific origin of your frontend application.
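One common approach is an allowlist: echo the request's Origin header back only when it is recognized. The origins below are placeholders, and corsOriginFor is a helper invented for this sketch:

```javascript
// Hypothetical allowlist of frontend origins permitted to open the stream
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',
  'http://localhost:5173',
]);

// Return the value for Access-Control-Allow-Origin, or null to refuse
function corsOriginFor(requestOrigin) {
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}

// Usage inside the request handler:
// const origin = corsOriginFor(req.headers.origin);
// if (origin) headers['Access-Control-Allow-Origin'] = origin;
```

When echoing the origin back, also send Vary: Origin so that caches do not serve one origin's response to another.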
Named Events
By default, every message the server sends triggers the message event on the
client. If you prefix a message with an event: field, the client must listen for
that specific event name instead. This lets you multiplex different kinds of data over a single
connection.
function sendEvent(res, eventName, data, id) {
  if (id !== undefined) {
    res.write(`id: ${id}\n`);
  }
  res.write(`event: ${eventName}\n`);
  res.write(`data: ${JSON.stringify(data)}\n\n`);
}

// Usage
sendEvent(res, 'user-joined', { username: 'alice' }, 1);
sendEvent(res, 'chat-message', { text: 'Hello everyone' }, 2);
On the client side, you would then register separate listeners:
const source = new EventSource('/events');

source.addEventListener('user-joined', (e) => {
  const data = JSON.parse(e.data);
  console.log(`${data.username} joined the chat`);
});

source.addEventListener('chat-message', (e) => {
  const data = JSON.parse(e.data);
  console.log(`New message: ${data.text}`);
});
Message IDs and Automatic Reconnection
One of the most elegant features of SSE is built-in reconnection. When the connection drops
— because of a network interruption, a server restart, or any other reason — the
browser automatically attempts to reconnect after a short delay (typically three seconds, unless
overridden with the retry: field).
If you have been including id: fields in your messages, the browser sends the last
received ID in the Last-Event-ID header of its reconnection request. Your server
can then replay any messages the client missed.
const http = require('node:http');

let messageHistory = [];
let currentId = 0;

function broadcastToAll(clients, eventName, payload) {
  currentId++;
  const message = { id: currentId, event: eventName, data: payload };
  messageHistory.push(message);

  // Keep only the last 100 messages
  if (messageHistory.length > 100) {
    messageHistory = messageHistory.slice(-100);
  }

  const formatted = formatMessage(message);
  for (const client of clients) {
    client.write(formatted);
  }
}

function formatMessage({ id, event, data }) {
  let msg = '';
  if (id !== undefined) msg += `id: ${id}\n`;
  if (event) msg += `event: ${event}\n`;
  msg += `data: ${JSON.stringify(data)}\n\n`;
  return msg;
}

const clients = new Set();

const server = http.createServer((req, res) => {
  if (req.url === '/events') {
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    });

    // Replay missed messages if the client is reconnecting
    const lastId = parseInt(req.headers['last-event-id'], 10);
    if (!isNaN(lastId)) {
      const missed = messageHistory.filter((m) => m.id > lastId);
      for (const m of missed) {
        res.write(formatMessage(m));
      }
    }

    clients.add(res);

    req.on('close', () => {
      clients.delete(res);
    });
    return;
  }

  res.writeHead(404);
  res.end('Not found');
});

server.listen(3000);
This pattern is particularly valuable for applications like live feeds, notifications, or dashboards where missing a message would leave the client in an inconsistent state.
Setting a Custom Retry Interval
The server can suggest a reconnection interval by sending a retry: field. The value is in milliseconds. Once received, the client uses that interval for all subsequent reconnection attempts until a new retry: value arrives.
// Tell the client to wait 5 seconds before reconnecting
res.write('retry: 5000\n\n');
This is useful for implementing backoff strategies. For instance, if the server is under heavy load, it can instruct clients to back off by sending a longer retry interval before closing the connection.
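A minimal sketch of that idea; isOverloaded is a hypothetical load check (event-loop lag, connection count, or whatever signal fits your deployment):

```javascript
// Sketch: when the server is overloaded, tell the client to wait longer
// before reconnecting, then close the stream. Returns true if the
// connection was shed.
function shedConnection(res, isOverloaded) {
  if (isOverloaded()) {
    res.write('retry: 30000\n\n'); // back off to 30 seconds
    res.end();
    return true;
  }
  return false;
}
```

Because the browser remembers the last retry: value it saw, the backoff persists across the reconnection that follows.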
Sending Multi-Line Data
The SSE specification requires each line of the data payload to be prefixed with
data:. If you are sending a multi-line string, you need to split it:
function sendMultiLineData(res, text) {
  const lines = text.split('\n');
  for (const line of lines) {
    res.write(`data: ${line}\n`);
  }
  res.write('\n'); // End of message
}

sendMultiLineData(res, 'Line one\nLine two\nLine three');
On the client side, the browser reconstructs the original multi-line string by joining the
data: lines with newline characters. The e.data property will contain
"Line one\nLine two\nLine three".
An Express.js Implementation
While SSE works perfectly with the raw http module, many Node.js applications use
Express. The implementation is nearly identical, but you need to be aware of a subtle issue:
Express (or more precisely, the compression middleware) may try to buffer or compress the
response. You should either disable compression for the SSE route or flush the response
explicitly.
const express = require('express');
const app = express();

app.get('/events', (req, res) => {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'X-Accel-Buffering': 'no', // Disable buffering in Nginx
  });
  res.flushHeaders();

  const send = (data) => {
    res.write(`data: ${JSON.stringify(data)}\n\n`);
    if (typeof res.flush === 'function') {
      res.flush(); // Flush if compression middleware is active
    }
  };

  send({ status: 'connected' });

  const intervalId = setInterval(() => {
    send({ time: Date.now() });
  }, 2000);

  req.on('close', () => {
    clearInterval(intervalId);
  });
});

app.listen(3000);
The res.flushHeaders() call sends the response headers immediately, establishing
the event stream. The X-Accel-Buffering: no header is a directive to Nginx (if
you are running behind it as a reverse proxy) to disable response buffering for this
particular route.
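If the compression middleware is in play, one option is to give it a filter that exempts the SSE route. The filter below is a sketch; /events is assumed to be the stream path:

```javascript
// Skip compression for the event stream; compress everything else.
// Intended usage: app.use(compression({ filter: shouldCompress }))
function shouldCompress(req, res) {
  if (req.path === '/events') return false; // never buffer the event stream
  return true; // or fall back to compression.filter(req, res)
}

// const compression = require('compression');
// app.use(compression({ filter: shouldCompress }));
```

This keeps gzip for normal responses while guaranteeing that event-stream writes reach the client immediately.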
The Client Side: Using EventSource
The browser-native API for consuming an SSE stream is EventSource. It handles
connection management, automatic reconnection, and event parsing out of the box.
const source = new EventSource('http://localhost:3000/events');

source.onopen = () => {
  console.log('Connection established');
};

source.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Received:', data);
};

source.onerror = (err) => {
  if (source.readyState === EventSource.CONNECTING) {
    console.log('Reconnecting...');
  } else if (source.readyState === EventSource.CLOSED) {
    console.log('Connection closed permanently');
  }
};

// To close the connection from the client side:
// source.close();
The readyState property tells you where things stand:
EventSource.CONNECTING (0) means a connection attempt is in progress,
EventSource.OPEN (1) means the connection is active, and
EventSource.CLOSED (2) means the connection has been closed and will not
reconnect.
The onerror handler fires both when the connection drops (followed by an automatic
reconnection attempt) and when the connection fails permanently. You can distinguish between
the two by checking readyState.
Broadcasting to Multiple Clients
A common pattern is to maintain a set of connected clients and broadcast messages to all of them. Here is a complete example that combines everything discussed so far into a small but functional server:
const http = require('node:http');

const clients = new Set();
let eventId = 0;

function broadcast(event, data) {
  eventId++;
  const payload =
    `id: ${eventId}\n` +
    `event: ${event}\n` +
    `data: ${JSON.stringify(data)}\n\n`;
  for (const client of clients) {
    client.write(payload);
  }
}

const server = http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/events') {
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    });
    res.write(': welcome\n\n');
    clients.add(res);
    broadcast('system', { message: `A client connected. Total: ${clients.size}` });

    req.on('close', () => {
      clients.delete(res);
      broadcast('system', { message: `A client disconnected. Total: ${clients.size}` });
    });
    return;
  }

  // A simple POST endpoint to trigger broadcasts
  if (req.method === 'POST' && req.url === '/send') {
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
      try {
        const parsed = JSON.parse(body);
        broadcast('notification', parsed);
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ sent: true, clients: clients.size }));
      } catch {
        res.writeHead(400);
        res.end('Invalid JSON');
      }
    });
    return;
  }

  res.writeHead(404);
  res.end('Not found');
});

server.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});
You can test the broadcast endpoint with curl:
curl -X POST http://localhost:3000/send \
  -H "Content-Type: application/json" \
  -d '{"title":"Deploy complete","env":"production"}'
Scalability Considerations
Each SSE connection is a persistent HTTP connection, which means each connected client holds an open socket on the server. Node.js handles this well because of its non-blocking I/O model, but there are limits to be aware of.
A single Node.js process can typically handle tens of thousands of concurrent connections,
depending on the amount of work done per message and the available memory. The OS-level limit on
open file descriptors may need to be raised (using ulimit -n on Linux) for
high-connection workloads.
If you run multiple server instances behind a load balancer, each client connects to only one instance. To broadcast a message to all clients across all instances, you need a shared communication channel — Redis Pub/Sub is the most common choice. Each server instance subscribes to a Redis channel and forwards received messages to its local set of SSE clients.
const Redis = require('ioredis');
const subscriber = new Redis();
const publisher = new Redis();

subscriber.subscribe('sse-broadcast');

subscriber.on('message', (channel, message) => {
  const { event, data } = JSON.parse(message);
  // 'clients' is this instance's local Set of SSE response objects,
  // maintained as in the broadcast example above
  for (const client of clients) {
    client.write(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
  }
});

// When you want to broadcast from any instance:
function publishEvent(event, data) {
  publisher.publish('sse-broadcast', JSON.stringify({ event, data }));
}
Keep-Alive and Connection Health
SSE connections can be silently dropped by proxies, load balancers, or firewalls that close idle
connections. The standard defense is to periodically send a comment line (a line starting with
:) to keep the connection alive:
const keepAliveId = setInterval(() => {
  res.write(': keep-alive\n\n');
}, 15000);

req.on('close', () => {
  clearInterval(keepAliveId);
});
A 15-second interval is a reasonable default. It is frequent enough to prevent most intermediaries from closing the connection, but not so frequent that it wastes bandwidth.
Authentication and Authorization
Because EventSource sends a regular HTTP request, you can authenticate SSE
connections using cookies, just as you would with any other endpoint. Session cookies are sent
automatically. However, EventSource does not support custom request headers, which
means you cannot send a Bearer token in the Authorization header.
A common workaround is to pass the token as a query parameter:
const source = new EventSource('/events?token=eyJhbGciOi...');
On the server side, you would extract and validate the token before establishing the stream:
app.get('/events', (req, res) => {
  const token = req.query.token;
  try {
    const user = verifyToken(token);
    // Set up the SSE connection for this authenticated user
  } catch {
    res.status(401).json({ error: 'Invalid token' });
    return;
  }
  // ... proceed with SSE setup
});
Be mindful that tokens in query parameters can appear in server logs, proxy logs, and browser history. In sensitive contexts, prefer cookie-based authentication or use a short-lived token obtained from a separate authenticated endpoint.
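Here is a sketch of the short-lived-token flow. The /sse-ticket endpoint is hypothetical: it is assumed to validate the session cookie and return a one-time token with a short TTL:

```javascript
// Build the stream URL with the ticket as a query parameter. The origin
// default is only for illustration; in a browser you would pass
// window.location.origin.
function buildStreamUrl(base, ticket, origin = 'http://localhost:3000') {
  const url = new URL(base, origin);
  url.searchParams.set('ticket', ticket);
  return url.toString();
}

// Fetch a one-time ticket, then open the stream with it. Because the
// ticket is consumed on first use and expires quickly, a leaked log line
// is far less dangerous than a long-lived bearer token in the URL.
async function connectWithTicket() {
  const res = await fetch('/sse-ticket', { method: 'POST' });
  const { ticket } = await res.json();
  return new EventSource(buildStreamUrl('/events', ticket));
}
```

The server would verify the ticket in the /events handler and mark it used before establishing the stream.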
Graceful Shutdown
When your server needs to restart or shut down, you should close all SSE connections gracefully. This typically involves writing a final message to each client and then ending the response:
process.on('SIGTERM', () => {
  for (const client of clients) {
    client.write('event: server-shutdown\ndata: "restarting"\n\n');
    client.end();
  }
  clients.clear();

  server.close(() => {
    process.exit(0);
  });
});
The client can listen for the server-shutdown event and handle it appropriately
— for example, by showing a notification and then letting the automatic reconnection
mechanism take over.
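A client-side handler for that event might look like the sketch below; showBanner stands in for whatever notification mechanism your UI provides:

```javascript
// React to a graceful server shutdown. EventSource reconnects on its own,
// so all the handler needs to do is inform the user.
function onServerShutdown(event, showBanner) {
  const reason = JSON.parse(event.data); // the server sent a JSON string, e.g. "restarting"
  showBanner(`Server is ${reason}; reconnecting automatically...`);
}

// source.addEventListener('server-shutdown', (e) => onServerShutdown(e, showBanner));
```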
SSE vs. WebSockets: When to Choose What
SSE and WebSockets solve overlapping but different problems. Here is a practical guide to choosing between them.
Choose SSE when: the data flows primarily or exclusively from server to client. Typical use cases include live news feeds, stock tickers, notifications, progress updates for long-running tasks, and real-time dashboards. SSE is also a good fit when you want simplicity: it works over standard HTTP, plays well with existing infrastructure (load balancers, proxies, CDNs), and requires no special client library.
Choose WebSockets when: you need bidirectional, low-latency communication. Chat applications, multiplayer games, collaborative editors, and interactive streaming protocols all benefit from the full-duplex channel that WebSockets provide. WebSockets also support binary data natively, which matters for applications that transfer images, audio, or custom binary protocols.
A hybrid approach is also viable: use SSE for server-to-client pushes and regular HTTP POST requests for client-to-server messages. This gives you real-time server push without the complexity of a WebSocket connection, at the cost of slightly higher latency on the client-to-server path.
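Sketched against the broadcast server from earlier (its /events and /send routes), the hybrid client is only a few lines; nothing here is invoked at load time:

```javascript
// Downstream: real-time pushes over SSE
function openFeed(onNotification) {
  const source = new EventSource('/events');
  source.addEventListener('notification', (e) => onNotification(JSON.parse(e.data)));
  return source;
}

// Upstream: an ordinary HTTP POST, no persistent channel needed
function send(payload) {
  return fetch('/send', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
}
```

In a browser you would call openFeed() once on page load and send() whenever the user acts.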
Limitations of Server-Sent Events
SSE has a few limitations worth noting. First, it is text-only. Binary data must be
Base64-encoded, which increases payload size by roughly 33%. Second, most browsers impose a
limit of six concurrent SSE connections per domain when using HTTP/1.1. This limit is per
browser, not per tab, so opening multiple tabs to the same application can exhaust the
connection pool. HTTP/2 largely eliminates this problem because it multiplexes all streams over
a single TCP connection. Third, EventSource does not support custom headers,
which complicates token-based authentication as discussed above.
For environments where EventSource limitations are a problem, libraries like
eventsource (for Node.js clients) or fetch-based SSE consumers
(using the Streams API) offer more flexibility, including support for custom headers, POST
requests, and binary handling.
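As a sketch of the fetch-based approach, the reader below sends the Authorization header that EventSource cannot, and applies the same data: line rules described earlier. Reconnection and Last-Event-ID handling are deliberately omitted:

```javascript
// Pull the concatenated data payload out of one SSE message block
function extractData(block) {
  return block
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice(5).replace(/^ /, ''))
    .join('\n');
}

// Consume the stream with fetch + the Streams API (custom headers allowed).
async function streamEvents(url, token, onMessage) {
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${token}`, Accept: 'text/event-stream' },
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const blocks = buffer.split('\n\n');
    buffer = blocks.pop(); // keep any trailing partial message
    for (const block of blocks) {
      const data = extractData(block);
      if (data) onMessage(data);
    }
  }
}
```

The buffering step matters: a single read() may deliver half a message, so only complete blocks (those followed by a blank line) are dispatched.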
A Note on HTTP/2 and HTTP/3
SSE works seamlessly with HTTP/2. In fact, HTTP/2 is arguably the better transport for SSE
because it multiplexes streams over a single connection, eliminating the six-connection limit
mentioned above. Node.js supports HTTP/2 through the built-in node:http2 module,
and the SSE message format remains identical — only the transport layer changes.
HTTP/3, which runs over QUIC instead of TCP, is also compatible with SSE. The benefit is improved resilience to network changes (such as switching from Wi-Fi to cellular) because QUIC connections survive IP address changes. Support in Node.js for HTTP/3 is still evolving, but when used behind a reverse proxy like Caddy or Nginx with HTTP/3 support, SSE connections benefit from these improvements transparently.
Conclusion
Server-Sent Events offer a remarkably simple and effective way to push data from a Node.js
server to the browser. The protocol is minimal, the client API is built into every modern
browser, and the implementation on the server side is just a few lines of well-placed
res.write() calls.
For many real-time use cases — notifications, live feeds, progress indicators, streaming AI responses — SSE is not only sufficient but preferable to WebSockets. It works with existing HTTP infrastructure, supports automatic reconnection out of the box, and keeps both the server and client code refreshingly straightforward.
The next time you reach for a WebSocket library to push updates to the client, consider whether a simple event stream might be all you actually need.