aeronet is a modern, fast, modular and ergonomic HTTP / WebSocket C++ server library for Linux focused on predictable performance, explicit control and minimal dependencies.
- Fast & predictable: edge‑triggered reactor model, zero/low‑allocation hot paths and minimal copies, horizontal scaling with port reuse. In CI benchmarks aeronet ranks among the fastest tested implementations across multiple realistic scenarios.
- Modular & opt‑in: enable only the features you need at compile time to minimize binary size and dependencies
- Ergonomic: easy API, automatic features (encoding, telemetry), RAII listener setup with sync / async server lifetime control, developer friendly with no hidden global state, no macros
- Configurable: extensive dynamic configuration with reasonable defaults (principle of least surprise), per path options and middleware helpers, run-time router / config updates
- Standards compliant: HTTP/1.1, HTTP/2, WebSocket, Compression, Streaming, Trailers, TLS, CORS, Range & Conditional Requests, Static files, URL Decoding, multipart/form-data, etc.
- Cloud native: Built-in Kubernetes-style health probes, OpenTelemetry support (metrics, tracing) with built-in spans and metrics, DogStatsD support, perfect for micro-services
aeronet is designed to be very fast. In our automated wrk-based benchmarks (HTTP/1.1 based) against other popular frameworks (run in CI against a fixed set of competitors such as drogon, pistache, a Rust Axum server, Java Undertow, Go and Python), aeronet:
- Achieves the highest requests/sec in most scenarios
- Consistently delivers lower average latency in those same scenarios
- Maintains competitive or better throughput and memory usage
You can inspect the latest benchmark tables generated on main from the CI benchmarks job and detailed methodology here:
You can browse the latest rendered benchmark tables directly on GitHub Pages:
Spin up a basic HTTP server that responds on /hello in just a few lines.
All code examples in the README and the FEATURES.md files are guaranteed to compile as they are covered by a CI check.
Return a complete, immediate HttpResponse from the handler:
#include <aeronet/aeronet.hpp> // unique 'umbrella' header, includes all public API
using namespace aeronet;
int main() {
Router router;
router.setPath(http::Method::GET, "/hello", [](const HttpRequest& req) {
return HttpResponse(200).header("X-Req-Body", req.body()).body("hello from aeronet\n");
});
HttpServer server(HttpServerConfig{}, std::move(router)); // default port is ephemeral, OS will pick an available one
server.run(); // blocking. Use start() for non-blocking
}
See the full program.
For a large, unknown-size response body, reply with multiple body chunks using HttpResponseWriter, which will use HTTP chunked transfer encoding automatically:
Router router;
router.setDefault([](const HttpRequest& req, HttpResponseWriter& writer){
writer.status(200);
writer.header("X-Req-Path", req.path());
writer.contentType("text/plain");
for (int i = 0; i < 10; ++i) {
writer.writeBody(std::string(50,'x')); // write by chunks
}
writer.end();
});
For a large request body or an asynchronous operation that may take a long time, use an async handler returning RequestTask<HttpResponse>:
#include <coroutine>
#include <string>

// Minimal awaitable used for the README demo so `co_await someAsyncOperation()` compiles.
struct SomeAsyncAwaitable {
bool await_ready() const noexcept { return false; }
void await_suspend(std::coroutine_handle<> h) noexcept { h.resume(); }
std::string await_resume() const noexcept { return std::string("Hello from coroutine!"); }
};
SomeAsyncAwaitable someAsyncOperation() { return {}; }
int main() {
Router router;
router.setPath(http::Method::GET, "/async", [](HttpRequest& req) -> RequestTask<HttpResponse> {
// Suspend execution without blocking the thread
auto result = co_await someAsyncOperation();
co_return HttpResponse(200).body(result);
});
}
Async handlers are invoked as soon as the request head is parsed, even if the body is still streaming in.
Call co_await req.bodyAwaitable() (or the chunked helpers) before touching the body to wait for the buffered payload.
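A minimal sketch of that pattern, assuming `bodyAwaitable()` (mentioned above) yields once the buffered payload is complete:

```cpp
// Hedged sketch: wait for the buffered request body before using it in an async handler.
router.setPath(http::Method::POST, "/upload", [](HttpRequest& req) -> RequestTask<HttpResponse> {
  co_await req.bodyAwaitable();  // suspends until the payload is fully buffered
  co_return HttpResponse(200).body("received " + std::to_string(req.body().size()) + " bytes\n");
});
```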
You can refer to the complete async handlers example for more details.
aeronet is compatible with HTTP/2, with or without TLS, when built with -DAERONET_ENABLE_HTTP2=ON.
When AERONET_ENABLE_HTTP2 is OFF, the HTTP/2 module is not built and the HTTP/2-specific API surface (e.g. Http2Config, HttpServerConfig::withHttp2()) is not available.
HTTP/2 uses the same unified HttpRequest type as HTTP/1.1:
#include <aeronet/aeronet.hpp>
using namespace aeronet;
int main() {
Router router;
// Single handler works for both HTTP/1.1 and HTTP/2
router.setDefault([](const HttpRequest& req) {
if (req.isHttp2()) {
return HttpResponse{"Hello from HTTP/2! Stream: " + std::to_string(req.streamId()) + "\n"};
}
return HttpResponse{"Hello from HTTP/1.1\n"};
});
HttpServerConfig config;
config.withPort(8443)
.withTlsCertKey("server.crt", "server.key")
.withTlsAlpnProtocols({"h2", "http/1.1"})
.withHttp2(Http2Config{.enable = true});
SingleHttpServer server(std::move(config), std::move(router));
server.run();
}
Test: curl -k --http2 https://localhost:8443/hello
See the full HTTP/2 example for more details.
Minimal server examples for typical use cases are provided in examples directory.
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
./build/examples/aeronet-minimal 8080 # or omit 8080 for ephemeral
Test with curl:
curl -i http://localhost:8080/hello
HTTP/1.1 200
content-type: text/plain
server: aeronet
date: Sun, 04 Jan 2026 15:49:40 GMT
content-length: 151
Hello from aeronet minimal server! You requested /hello
...
The following focused docs expand each area without cluttering the high‑level overview:
If you are evaluating the library, the feature highlights above plus the minimal example are usually sufficient. Dive into the docs only when you need specifics.
| Category | Implemented (✔) | Notes |
|---|---|---|
| Core HTTP/1.1 parsing | ✔ | Request line, headers, chunked bodies, pipelining |
| Routing | ✔ | Exact path + method allow‑lists; streaming + fixed |
| Keep‑Alive / Limits | ✔ | Header/body size, max requests per connection, idle timeout |
| Compression (gzip/deflate/zstd/br) | ✔ | Flags opt‑in; q‑value negotiation; threshold; per‑response opt‑out |
| Inbound body decompression | ✔ | Multi‑layer, safety guards, header removal |
| TLS | ✔ (flag) | ALPN, mTLS, session tickets, kTLS sendfile, timeouts, metrics |
| OpenTelemetry | ✔ (flag) | Distributed tracing spans, metrics counters (experimental) |
| Async wrapper | ✔ | Background thread convenience |
| Metrics hook | ✔ (alpha) | Per‑request basic stats |
| Logging | ✔ (flag) | spdlog optional |
| Duplicate header policy | ✔ | Deterministic, security‑minded |
| WebSocket | ✔ | RFC 6455 compliant, text/binary frames, ping/pong, close handshake |
| HTTP/2 | ✔ (flag) | RFC 9113, HPACK, ALPN h2, h2c upgrade, stream multiplexing |
| Trailers exposure | ✔ | RFC 7230 §4.1.2 chunked trailer headers |
| Middleware helpers | ✔ | Global + per-route request/response hooks (streaming-aware) |
| Streaming inbound decompression | ✔ | Auto-switches to streaming inflaters once Content-Length exceeds configured threshold |
| sendfile / static file helper | ✔ | 0.4.x – zero-copy plain sockets plus RFC 7233 single-range & RFC 7232 validators |
| Feature | Notes |
|---|---|
| Epoll edge-triggered loop | One thread per SingleHttpServer; writev used for header+body scatter-gather |
| SO_REUSEPORT scaling | Horizontal multi-reactor capability |
| Multi-instance wrapper | MultiHttpServer orchestrates N reactors (N threads) |
| Async server methods | start() (void convenience) and startDetached() (returns AsyncHandle) |
| Move semantics | Transfer listening socket & loop state safely |
| Restarts | SingleHttpServer and MultiHttpServer can be started again after stop |
| Graceful draining | SingleHttpServer::beginDrain(maxWait) stops new accepts, closes keep-alive after current responses, optional deadline to force-close stragglers |
| Signal handling | Optional built-in SIGINT/SIGTERM handler to initiate draining when stop requested |
| Heterogeneous lookups | Path handler map accepts std::string, std::string_view, const char* |
| Outbound stats | Bytes queued, immediate vs flush writes, high-water marks |
| Lightweight logging | Pluggable design (spdlog optional); ISO 8601 UTC timestamps |
| Builder-style config | Fluent HttpServerConfig setters (withPort(), etc.) |
| Metrics callback | Per-request timing & size scaffold hook |
| RAII construction | Fully listening after constructor (ephemeral port resolved immediately) |
| Comprehensive tests | Parsing, limits, streaming, mixed precedence, reuseport, move semantics, keep-alive |
| Mixed handlers example | Normal + streaming coexistence on same path (e.g. GET streaming, POST fixed) |
The sections below provide a more granular feature matrix and usage examples.
Moved out of the landing page to keep things concise. See the full, continually updated matrices in:
Client code consuming aeronet mainly interacts with server objects, the router, HTTP responses, streaming HTTP response writers, and HTTP requests.
aeronet provides 2 types of servers: SingleHttpServer and MultiHttpServer.
Client code will mostly use MultiHttpServer because it's the one supporting multi-threaded scaling out of the box, but SingleHttpServer is also available for simpler use cases or when the user wants to manage multiple server instances manually.
For convenience, a HttpServer alias is provided for MultiHttpServer which is the recommended default server type.
These are the main objects expected to be used by the client code.
The core server of aeronet. It is a single-threaded reactor powered by epoll, with a blocking event loop.
The call to run() (or runUntil(<predicate>)) is blocking, and can be stopped by another thread by calling stop() on this instance.
The non-blocking APIs launch the event loop in the background. Use start() when you want a void convenience that manages an internal handle for you, or startDetached() (and the related startDetachedAndStopWhen(<predicate>), startDetachedWithStopToken(<stop token>)) when you need an AsyncHandle you can inspect or control explicitly.
Key characteristics:
- It is a RAII class. In fact the `aeronet` library as a whole has no singletons, for a cleaner and more predictable design (except for signal handlers, because signals themselves are global), so all resources linked to the `SingleHttpServer` are tied to it and released with it.
- It is copyable and moveable if and only if it is not running. Warning! Unlike most C++ objects, the move operations are not `noexcept`, to make sure that client code does not move a running server (it would throw in that case, and only in that case). Moving a non-running `SingleHttpServer` is, however, perfectly safe and `noexcept` in practice.
- It is restartable: you can call `start()` after a `stop()`.
- You can modify most of its configuration safely at runtime via `postConfigUpdate()` and `postRouterUpdate()`.
- Graceful draining is available via `beginDrain(std::chrono::milliseconds maxWait = 0)`: it stops accepting new connections, lets in-flight responses finish with `Connection: close`, and optionally enforces a deadline before forcing the remaining connections to close (see the sketch below).
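A minimal sketch of graceful shutdown using `beginDrain` (the signature comes from the list above; calling it from another thread is an assumption here, see the graceful-draining docs):

```cpp
// Hedged sketch: drain in-flight requests before shutting down.
SingleHttpServer server(HttpServerConfig{}, std::move(router));
auto handle = server.startDetached();
// ... later, when shutting down: stop accepting new connections and give
// in-flight responses up to 2 seconds before remaining connections are force-closed.
server.beginDrain(std::chrono::milliseconds{2000});
handle.stop();
```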
aeronet's API extensively uses std::string_view for zero-copy performance. This is safe because each connection maintains its own buffer, and all HttpRequest data (path, query params, headers, body) consists of std::string_view instances pointing into this per-connection buffer. The buffer remains valid for the entire duration of the handler execution, making all request data safe to access without copies.
For detailed information about buffer lifetime guarantees and best practices (especially for coroutines), see Memory Management & std::string_view Safety.
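For coroutine handlers specifically, one pattern that follows from these lifetime guarantees is to copy any request data you still need after a suspension point. A hedged sketch, reusing the demo awaitable from the async example above:

```cpp
// Hedged sketch: own request data before suspending, since string_views point into
// the per-connection buffer that is only guaranteed valid during handler execution.
router.setPath(http::Method::GET, "/later", [](HttpRequest& req) -> RequestTask<HttpResponse> {
  std::string path{req.path()};    // copy the view's contents before the suspension point
  co_await someAsyncOperation();   // demo awaitable from the async handler example
  co_return HttpResponse(200).body("you asked for " + path + "\n");
});
```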
All configuration of the SingleHttpServer is applied per server instance (the server owns its configuration).
SingleHttpServer takes a HttpServerConfig by value at construction, which allows full control over the server parameters (port, timeouts, limits, TLS setup, compression options, etc). Once constructed, some fields can be updated, even while the server is running thanks to postConfigUpdate method.
Note that nbThreads field should be 1 for SingleHttpServer. If you intend to use multiple threads, consider using HttpServer (aka MultiHttpServer) instead.
A convenient set of methods on a SingleHttpServer allows non-blocking operation:
- `start()` — non-blocking convenience (returns void); the server manages an internal handle.
- `startDetached()` — non-blocking; returns an `AsyncHandle` giving explicit lifecycle control.
- `startDetachedAndStopWhen(<predicate>)` — like `startDetached()` but stops when the predicate fires.
- `startDetachedWithStopToken(<stop token>)` — like `startDetached()` but integrates with `std::stop_token`.
These methods allow running the server in a background thread; pick startDetached() when you need the handle, or start() when you do not.
#include <aeronet/aeronet.hpp>
using namespace aeronet;
int main() {
Router router;
router.setDefault([](const HttpRequest&){ return HttpResponse(200).body("hi"); });
SingleHttpServer srv(HttpServerConfig{}, std::move(router));
// Launch in background thread and capture lifetime handle
auto handle = srv.startDetached();
// main thread free to do orchestration / other work
std::this_thread::sleep_for(std::chrono::seconds(2));
handle.stop();
handle.rethrowIfError();
}
Predicate form (stop when an external flag flips):
std::atomic<bool> done{false};
SingleHttpServer srv(HttpServerConfig{});
auto handle = srv.startDetachedAndStopWhen([&]{ return done.load(); });
// later
done = true; // loop exits soon (bounded by poll interval)
Stop-token form (std::stop_token):
// If you already manage a std::stop_source you can pass its token directly
// to let the caller control the server lifetime via cooperative cancellation.
std::stop_source src;
SingleHttpServer srv(HttpServerConfig{});
auto handle = srv.startDetachedWithStopToken(src.get_token());
// later
src.request_stop();
Notes:
- Register handlers before `start()` unless you provide external synchronization for modifications.
- `stop()` is idempotent; the destructor performs it automatically as a safety net.
- Keep the returned `AsyncHandle` alive to keep the server running; the server is stopped when the handle is destroyed.
Instead of manually creating N threads and N SingleHttpServer instances, you can use HttpServer to spin up a "farm" of identical servers sharing the same routing configuration on the same port. It:
- Accepts a base `HttpServerConfig` (set `port=0` for an ephemeral bind; the chosen port is propagated to all instances)
- Replicates either a global handler or all registered path handlers across each underlying server (even after in-flight updates)
- Exposes `stats()` returning both per-instance and aggregated totals (sums; `maxConnectionOutboundBuffer` is a max)
- Provides the resolved listening `port()` directly after construction (even for ephemeral port 0 requests)
- Provides the same lifecycle APIs as `SingleHttpServer`: blocking `run()` / `runUntil(pred)`, non-blocking `start()` / `startDetached()`, `stop()`, `beginDrain()`, etc.
- Like `SingleHttpServer`, `HttpServer` is copyable and moveable when not running, and restartable after stop
Example:
#include <aeronet/aeronet.hpp>
using namespace aeronet;
int main() {
Router router;
router.setDefault([](const HttpRequest& req){
return HttpResponse(200).body("hello\n");
});
HttpServer multi(HttpServerConfig{}.withNbThreads(4), std::move(router)); // 4 underlying event loops
multi.start();
// ... run until external signal, or call stop() ...
std::this_thread::sleep_for(std::chrono::seconds(30));
auto agg = multi.stats();
log::info("instances={} queued={}\n", agg.per.size(), agg.total.totalBytesQueued);
}
Additional notes:
- If `cfg.port` was 0, the kernel-chosen ephemeral port remains stable across any later `stop()` / `start()` cycles for this `HttpServer` instance. To obtain a new ephemeral port you must construct a new `HttpServer` (or, in a future API, explicitly reset the base configuration to `port=0` before a restart).
- You may call `stop()` and then `start()` again on the same `HttpServer` instance.
- Handlers: registered global or path handlers are re-applied to the fresh servers on each restart. You may add/remove/replace path handlers using `postRouterUpdate()` or `router()` at any time (even while running).
- Per‑run statistics are not accumulated across restarts; each run begins with fresh counters (servers rebuilt).
Stats aggregation example:
HttpServer multi(HttpServerConfig{}.withNbThreads(4), Router{});
auto st = multi.stats();
for (size_t i = 0; i < st.per.size(); ++i) {
const auto& s = st.per[i];
log::info("[srv{}] queued={} imm={} flush={}\n", i,
s.totalBytesQueued,
s.totalBytesWrittenImmediate,
s.totalBytesWrittenFlush);
}
./build/examples/aeronet-multi 8080 4 # port 8080, 4 threads
Each thread owns its own listening socket (SO_REUSEPORT) and epoll instance – no shared locks in the accept path.
This is the simplest horizontal scaling strategy before introducing a worker pool.
| Variant | Header | Launch API | Blocking? | Threads Created | Scaling Model | Typical Use Case | Restartable? | Notes |
|---|---|---|---|---|---|---|---|---|
| `SingleHttpServer` | `aeronet/single-http-server.hpp` | `run()` / `runUntil(pred)` | Yes (caller thread blocks) | 0 | Single reactor | Dedicated thread you manage or simple main-thread server | Yes | Minimal overhead, zero thread creation |
| `SingleHttpServer` | `aeronet/single-http-server.hpp` | `start()` (void convenience) / `startDetached()` / `startDetachedAndStopWhen(pred)` / `startDetachedWithStopToken(token)` | No (`startDetached()` returns `AsyncHandle`) | 1 `std::jthread` (owned by handle) | Single reactor (background) | Non-blocking single server, calling thread remains free | Yes | `startDetached()` returns RAII handle; `start()` is a void convenience |
| `HttpServer` | `aeronet/http-server.hpp` | `run()` / `runUntil(pred)` | Yes (caller thread blocks) | N (`threadCount`) | Horizontal `SO_REUSEPORT` multi-reactor | Multi-core throughput, blocking orchestration | Yes | All reactors run on caller thread until stop |
| `HttpServer` | `aeronet/http-server.hpp` | `start()` (void convenience) / `startDetached()` | No (`startDetached()` returns `AsyncHandle`) | N `std::jthread`s (internal) | Horizontal `SO_REUSEPORT` multi-reactor | Multi-core throughput, non-blocking launch | Yes | `startDetached()` returns RAII handle; `start()` is a void convenience |
Decision heuristics:
- Use `SingleHttpServer::run()` / `runUntil()` when you already own a thread (or can block `main()`) and want minimal abstraction with zero overhead.
- Use the `SingleHttpServer::start()` family when you want a single server running in the background while keeping the calling thread free (e.g., integrating into a service hosting multiple subsystems, or writing higher-level control logic while serving traffic). The returned `AsyncHandle` provides RAII lifetime management with no added weight to `SingleHttpServer` itself.
- Use `HttpServer` when you need multi-core throughput with separate event loops per core – the simplest horizontal scaling path before introducing more advanced worker models.
Blocking semantics summary:
- `SingleHttpServer::run()` / `runUntil()` – fully blocking; returns only on `stop()` or when the predicate is satisfied (see the sketch below).
- `SingleHttpServer::start()` / `startDetached()` / `startDetachedAndStopWhen()` / `startDetachedWithStopToken()` – non-blocking; the detached variants return an `AsyncHandle` immediately. Lifetime is controlled via the handle's destructor (RAII) or an explicit `handle.stop()`.
- `MultiHttpServer::run()` / `runUntil()` – fully blocking; returns only on `stop()` or when the predicate is satisfied.
- `MultiHttpServer::start()` – non-blocking; returns after all reactors are launched and manages the internal thread pool.
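A minimal sketch of the blocking `runUntil()` form (the flag and predicate are illustrative):

```cpp
// Hedged sketch: block the calling thread until an external flag flips.
std::atomic<bool> shuttingDown{false};
SingleHttpServer server(HttpServerConfig{}, std::move(router));
server.runUntil([&] { return shuttingDown.load(); });  // returns once the predicate is satisfied
```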
aeronet provides a global signal handler mechanism for graceful shutdown of all running servers:
// Install signal handlers for SIGINT/SIGTERM (typically in main before starting servers)
std::chrono::milliseconds maxDrainPeriod{5000}; // 5s max drain
SignalHandler::Enable(maxDrainPeriod);
// All SingleHttpServer instances regularly check for stop requests in their event loops
SingleHttpServer server(HttpServerConfig{});
server.run(); // Will drain and stop when SIGINT/SIGTERM received
Key points:
- Process-wide: `SignalHandler::Enable()` installs handlers that set a global flag checked by all `SingleHttpServer` instances (so `HttpServer` instances are affected as well).
- Automatic drain: when a signal arrives, all running servers automatically call `beginDrain(maxDrainPeriod)` at the next event loop iteration.
- Optional: do not call `SignalHandler::Enable()` if your application manages signals differently.
Routing configuration may be applied in two different ways depending on your application's lifecycle and threading model. Prefer pre-start configuration when possible; use the runtime proxy when you must mutate routing after server construction.
Construct and fully configure a Router instance on the calling thread, then pass it to the server constructor. This is the simplest and safest approach: the router is fully configured as soon as the server is constructed.
Example (recommended):
Router router;
router.setPath(http::Method::GET, "/hello", [](const HttpRequest&){ return HttpResponse(200).body("hello"); });
SingleHttpServer server(HttpServerConfig{}, std::move(router));
server.run();
If you need to mutate routes while a server is active, use the RouterUpdateProxy exposed by SingleHttpServer::router() and, for convenience, HttpServer::router(). The proxy accepts handler registration calls and forwards them to the server's event-loop thread so updates occur without racing request processing. If the server is running, the update takes effect after at most one event polling period.
Example (runtime-safe):
SingleHttpServer server(HttpServerConfig{});
auto handle = server.startDetached();
// later, from another thread:
server.router().setPath(http::Method::POST, "/upload", [](const HttpRequest&){ return HttpResponse(201); });
Notes:
- The proxy methods schedule updates to run on the server thread; they may execute immediately when the server is idle, or be queued and applied at the next loop iteration.
- The proxy will propagate exceptions thrown by your updater back to the caller when possible; handler registration conflicts (e.g. streaming vs non-streaming for same method+path) are reported.
- Prefer pre-start configuration for simpler semantics and testability; use runtime updates only when dynamic reconfiguration is required.
The router expects callback functions returning an HttpResponse.
You have two ways to construct a HttpResponse:
- Direct construction thanks to its numerous constructors taking status code, body & content-type, headers, and additional capacity for headers/body/trailers
- Optimized construction from `HttpRequest::makeResponse()` that pre-applies server-global headers and other optimizations
You can build it with the numerous provided methods that store the main components of an HTTP response (status code, reason, headers, body and trailers):
| Operation | Complexity | Notes |
|---|---|---|
| `status()` | O(1) | Overwrites 3 digits |
| `reason()` | O(trailing) | One tail memmove if size delta |
| `header()` | O(headers + bodyLen) | Linear scan + maybe one shift |
| `headerAddLine()` | O(bodyLen) | Shift tail once; no scan |
| `body()` (inline) | O(delta) + realloc | Exponential growth strategy |
| `body()` (capture) | O(1) | Zero-copy client buffer capture |
| `bodyStatic()` (capture) | O(1) | Zero-copy client buffer capture |
| `bodyAppend()` (inline) | O(delta) + realloc | Exponential growth strategy, zero-copy support |
| `bodyInlineAppend()` | O(delta) + realloc | Exponential growth strategy |
| `bodyInlineSet()` | O(1) + realloc | Exact growth strategy |
| `file()` | O(1) | Zero-copy sendfile helper |
| `trailerAddLine()` | O(1) | Append-only; no scan (only after body) |
Usage guidelines:
- Use `headerAddLine()` when duplicates are acceptable or cannot occur from the client code (cheapest path).
- Use `header()` only when you must guarantee uniqueness. Matching is case‑insensitive; prefer a canonical style (e.g. `Content-Type`) for readability, but behavior is the same regardless of input casing.
- Chain on temporaries for concise construction; the rvalue-qualified overloads keep the object movable.
- For maximum performance, fill the response in order, starting with status/reason, then headers, then body and trailers, to minimize memory shifts and reallocations (see the sketch below).
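A short sketch applying these guidelines, chaining on a temporary and filling the response in order (the header names and body are illustrative):

```cpp
// Hedged sketch: status/reason first, then headers, then body.
return HttpResponse(201)
    .reason("Created")
    .header("Content-Type", "application/json")
    .header("Cache-Control", "no-store")
    .body("{\"ok\":true}\n");
```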
You can use the HttpRequest::makeResponse() methods to perform, directly at construction time, work that is usually done at response finalization time.
This is especially useful when you have configured globalHeaders in the server config that should apply to all responses, as it avoids copying them again before the body at finalization time (which would also shift the whole body, if inlined).
Example:
Router router;
router.setDefault([](const HttpRequest& req) {
// Pre-applies global headers from server config
return req.makeResponse("hello\n"); // response already contains global headers (for instance: 'server: aeronet')
});
Overloads make it possible to pass a status and/or body & content-type, which is very useful for one-shot responses.
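For illustration only, a hedged sketch of such a one-shot response; the exact overload shape (status, body, content-type) is assumed from the description above:

```cpp
// Hedged sketch: one-shot 404 built directly from the request (overload shape assumed).
router.setPath(http::Method::GET, "/missing", [](const HttpRequest& req) {
  return req.makeResponse(404, "not found\n", "text/plain");
});
```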
The library intentionally reserves a small set of response headers that user code cannot set directly on
HttpResponse (fixed responses) or via HttpResponseWriter (streaming) because aeronet itself manages them or
their semantics would be invalid / ambiguous without deeper protocol features:
Reserved now (assert if attempted in debug; ignored in release for streaming):
- `date` – generated once per second and injected automatically.
- `content-length` – computed from the body (fixed) or set through `contentLength()` (streaming). Prevents inconsistencies between declared and actual size.
- `connection` – determined by keep-alive policy (HTTP version, server config, request count, errors). User code supplying conflicting values could desynchronize connection reuse logic.
- `transfer-encoding` – controlled by the streaming writer (chunked) or omitted when `content-length` is known. Allowing arbitrary values risks illegal CL + TE combinations or unsupported encodings.
- `trailer`, `te`, `upgrade` – not yet supported by aeronet; reserving them now avoids future backward-incompatible behavior changes when trailer / upgrade features are introduced.
Allowed convenience helpers:
- `content-type` via `contentType()` in streaming.
- `location` via `location()` for redirects.
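A tiny sketch contrasting a freely settable header with a reserved one (the header names and values are illustrative):

```cpp
// Hedged sketch: custom headers are forwarded verbatim; reserved ones are managed by aeronet.
return HttpResponse(200)
    .header("X-Request-Id", "abc123")   // allowed: forwarded verbatim
    // .header("Content-Length", "10")  // reserved: aeronet computes it (asserts in debug builds)
    .body("ok\n");
```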
When serving files with the built-in static helpers, aeronet chooses the response content-type using the
following precedence: (1) user-provided resolver callback if installed and non-empty, (2) the configured default
content type in HttpServerConfig, and (3) application/octet-stream as a final fallback. The File::detectedContentType()
helper is available for filename-extension based detection (the built-in mapping now includes common C/C++ extensions
such as c, h, cpp, hpp, cc).
All other headers (custom application / caching / CORS / etc.) may be freely set; they are forwarded verbatim.
This central rule lives in a single helper (http::IsReservedResponseHeader).
Full details (modes, triggers, helpers) have been moved out of the landing page: See: Connection Close Semantics
aeronet has built-in support for automatic outbound response compression (and inbound request decompression) with multiple algorithms, provided the library is built with the corresponding encoder's compile-time flag.
Detailed negotiation rules, thresholds, opt-outs, and tuning have moved: See: Compression & Negotiation
Per-response manual override: setting any Content-Encoding (even identity) disables automatic compression for that
response. Details & examples: Manual Content-Encoding Override
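As a quick illustration, a hedged sketch of the override (the body is a placeholder):

```cpp
// Hedged sketch: an explicit Content-Encoding (even "identity") opts this response
// out of automatic compression.
return HttpResponse(200)
    .header("Content-Encoding", "identity")
    .body("payload that must be sent exactly as-is\n");
```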
Detailed multi-layer decoding behavior, safety limits, examples, and configuration moved here: See: Inbound Request Decompression
Full RFC-compliant CORS support with per-route and router-wide configuration: See: CORS Support
Detailed policy & implementation moved to: Request Header Duplicate Handling
Enable the builtin probes via HttpServerConfig and test them with curl. This example enables the probes with default paths and a plain-text content type.
#include <aeronet/aeronet.hpp>
using namespace aeronet;
int main() {
HttpServerConfig cfg;
cfg.withPort(8080).withBuiltinProbes(BuiltinProbesConfig{});  // port 8080 to match the curl checks below
Router router;
// Register application handlers as usual (optional)
router.setPath(http::Method::GET, "/hello", [](const HttpRequest&){
return HttpResponse(200).body("hello\n");
});
SingleHttpServer server(std::move(cfg), std::move(router));
server.run();
}
Probe checks (from the host/container):
curl -i http://localhost:8080/livez # expects HTTP/1.1 200 when running
curl -i http://localhost:8080/readyz # expects 200 when ready, 503 during drain/startup
curl -i http://localhost:8080/startupz # returns 503 until initialization completes
For a Kubernetes Deployment example that configures liveness/readiness/startup probes against these paths, see: docs/kubernetes-probes.md.
A small example demonstrating file serving lives in examples/aeronet-sendfile.
It exposes two endpoints:
- `GET /static` – returns the contents of a file using `HttpResponse::file` (fixed response); a hedged sketch follows this list.
- `GET /stream` – returns the contents of a file using `HttpResponseWriter::file` (streaming writer API).
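A hedged sketch of the fixed-response variant; the exact `file()` signature (a filesystem path) is an assumption here, see the example program for the real usage:

```cpp
// Hedged sketch: serve a file from a fixed-response handler (file() signature assumed).
router.setPath(http::Method::GET, "/static", [](const HttpRequest&) {
  return HttpResponse(200).file("/tmp/example.txt");
});
```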
Build the examples and run the sendfile example:
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
./build/examples/aeronet-sendfile 8080 /path/to/file
If the file path argument is omitted the example creates a small temp file in /tmp and serves it.
Fetch the file with curl:
curl -i http://localhost:8080/static
curl -i http://localhost:8080/stream
The example demonstrates both the fixed-response (server synthesizes a content-length header) and the
streaming writer path. For plaintext sockets the server uses the kernel sendfile(2) syscall for zero-copy
transmission. When TLS is enabled the example exercises the TLS fallback that pread()s into the connection buffer
and writes through the TLS transport.
Summary of current automated test coverage (see tests/ directory). Legend: ✅ covered by explicit test(s), ⚠ partial / indirect, ❌ not yet.
| Area | Feature | Test Status | Notes / Representative Test Files |
|---|---|---|---|
| Parsing | Request line (method/target/version) | ✅ | http-parser-errors_test.cpp, http-core_test.cpp |
| Parsing | Unsupported HTTP version (505) | ✅ | http-parser-errors_test.cpp |
| Parsing | Header parsing & lookup | ✅ | http-core_test.cpp |
| Limits | Max header size -> 431 | ✅ | http-core_test.cpp |
| Limits | Max body size (Content-Length) -> 413 | ✅ | http-additional_test.cpp |
| Limits | Chunk total/body growth -> 413 | ✅ | exercised across http-chunked-head_test.cpp and parser fuzz paths |
| Bodies | Content-Length body handling | ✅ | http-core_test.cpp, http-additional_test.cpp |
| Bodies | Chunked decoding | ✅ | http-chunked-head_test.cpp, http-parser-errors_test.cpp |
| Bodies | Trailers exposure | ✅ | Implemented (see tests/http-trailers_test.cpp) |
| Expect | 100-continue w/ non-zero length | ✅ | http-parser-errors_test.cpp |
| Expect | No 100 for zero-length | ✅ | http-parser-errors_test.cpp, http-additional_test.cpp |
| Keep-Alive | Basic keep-alive persistence | ✅ | http-core_test.cpp |
| Keep-Alive | Max requests per connection | ✅ | http-additional_test.cpp |
| Keep-Alive | Idle timeout close | ⚠ | Indirectly covered; explicit idle-time tests are planned |
| Pipelining | Sequential pipeline of requests | ✅ | http-additional_test.cpp |
| Pipelining | Malformed second request handling | ✅ | http-additional_test.cpp |
| Methods | HEAD semantics (no body) | ✅ | http-chunked-head_test.cpp, http-additional_test.cpp |
| Date | RFC7231 format + correctness | ✅ | http-core_test.cpp |
| Date | Same-second caching invariance | ✅ | http-core_test.cpp |
| Date | Second-boundary refresh | ✅ | http-core_test.cpp |
| Errors | 400 Bad Request (malformed line) | ✅ | http-core_test.cpp |
| Parsing | Percent-decoding of path | ✅ | http-url-decoding_test.cpp, http-query-parsing_test.cpp |
| Errors | 431, 413, 505, 501 | ✅ | http-core_test.cpp, http-additional_test.cpp |
| Errors | PayloadTooLarge in chunk decoding | ⚠ | Exercised indirectly; dedicated test planned |
| Concurrency | SO_REUSEPORT distribution | ✅ | multi-http-server_test.cpp |
| Lifecycle | Move semantics of server | ✅ | http-server-lifecycle_test.cpp |
| Lifecycle | Graceful stop (runUntil) | ✅ | many tests use runUntil patterns |
| Diagnostics | Parser error callback (version, bad line, limits) | ✅ | http-parser-errors_test.cpp |
| Diagnostics | PayloadTooLarge callback (Content-Length) | ⚠ | Indirect; explicit capture test planned |
| Performance | Date caching buffer size correctness | ✅ | covered by http-core_test.cpp assertions |
| Performance | writev header+body path | ⚠ | Indirectly exercised; no direct assertion yet |
| TLS | Handshake & rejection behavior | ✅ | http-tls-handshake_test.cpp, http-tls-io_test.cpp |
| Streaming | Streaming response & incremental flush | ✅ | http-streaming_test.cpp |
| Routing | Path & method matching | ✅ | http-routing_test.cpp, router_test.cpp |
| Compression | Negotiation & outbound insertion | ✅ | http-compression_test.cpp, http-request-decompression_test.cpp |
| OpenTelemetry | Basic integration smoke | ✅ | opentelemetry-integration_test.cpp |
| Async run | SingleHttpServer::start() behavior | ✅ | http-server-lifecycle_test.cpp |
| Misc / Smoke | Probes, stats, misc invariants | ✅ | http-server-lifecycle_test.cpp, http-stats_test.cpp |
| Implemented | Trailers (outgoing chunked / trailing headers) | ✅ | See tests/http-trailers_test.cpp and http-response-writer.hpp |
Full, continually updated build, install, and package manager instructions live in docs/INSTALL.md.
Quick start (release build of examples):
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
For TLS toggles, sanitizers, Conan/vcpkg usage and find_package examples, see the INSTALL guide.
Full resolution algorithm and matrix moved to: Trailing Slash Policy
Overview relocated to: Construction Model (RAII & Ephemeral Ports)
See: TLS Features
Metrics example: TLS Features
aeronet provides optional OpenTelemetry integration for distributed tracing and metrics. Enable it with the CMake flag -DAERONET_ENABLE_OPENTELEMETRY=ON. Be aware that it also pulls in protobuf dependencies.
Instance-based telemetry: Each SingleHttpServer maintains its own TelemetryContext instance. There are no global singletons or static state. This design:
- Allows multiple independent servers with different telemetry configurations
- Eliminates race conditions and global state issues
- Makes testing and multi-server scenarios straightforward
- Ties telemetry lifecycle directly to server lifecycle
All telemetry operations log errors via log::error() for debuggability—no silent failures.
When OpenTelemetry is enabled, aeronet requires the following system packages:
Debian/Ubuntu:
sudo apt-get install libcurl4-openssl-dev libprotobuf-dev protobuf-compiler
Alpine Linux:
apk add curl-dev protobuf-dev protobuf-c-compiler
Fedora/RHEL:
sudo dnf install libcurl-devel protobuf-devel protobuf-compiler
Arch Linux:
sudo pacman -S curl protobuf
Configure OpenTelemetry via HttpServerConfig:
#include <aeronet/aeronet.hpp>
using namespace aeronet;
int main() {
HttpServerConfig cfg;
cfg.withPort(8080)
.withTelemetryConfig(TelemetryConfig{}
.withEndpoint("http://localhost:4318") // OTLP HTTP endpoint
.withServiceName("my-service")
.withSampleRate(1.0) // 100% sampling for traces
.enableDogStatsDMetrics()); // Optional DogStatsD metrics via UDS
SingleHttpServer server(cfg);
// Telemetry is automatically initialized when server.init() is called
// Each server has its own independent TelemetryContext
// ... register handlers ...
server.run();
}
When OpenTelemetry is enabled, aeronet automatically tracks:
Traces:
- `http.request` spans for each HTTP request with attributes (method, path, status_code, etc.)
Metrics (non-exhaustive list):
- `aeronet.events.processed` – epoll events successfully processed per iteration
- `aeronet.connections.accepted` – new connections accepted
- `aeronet.bytes.read` – bytes read from client connections
- `aeronet.bytes.written` – bytes written to client connections
All instrumentation happens automatically—no manual API calls required in handler code.
Details here: Query String & Parameters
Details moved to: Logging
Moved to: Streaming Responses
Moved to: Mixed Mode & Dispatch Precedence
HttpServerConfig lives in aeronet/http-server-config.hpp and exposes fluent setters (withX naming):
HttpServerConfig cfg;
cfg.withPort(8080)
.withReusePort(true)
.withMaxHeaderBytes(16 * 1024)
.withMaxBodyBytes(2 * 1024 * 1024)
.withKeepAliveTimeout(std::chrono::milliseconds{10'000})
.withMaxRequestsPerConnection(500)
.withKeepAliveMode(true);
SingleHttpServer server(cfg); // or SingleHttpServer(8080) then server.setConfig(cfgWithoutPort);
Two mutually exclusive approaches:
- Global handler: `router.setDefault([](const HttpRequest&){ ... })` (receives every request if no specific path matches).
- Per-path handlers: `router.setPath(http::Method::GET | http::Method::POST, "/hello", handler)` – exact path match.
Rules:
- Mixing the two modes (calling `setPath` after `setDefault` or vice-versa) throws.
- If a path is not registered -> 404 Not Found.
- If the path exists but the method is not allowed -> 405 Method Not Allowed.
- You can call `setPath` repeatedly on the same path to extend the allowed method mask (handler is replaced, methods merged).
- You can also call `setPath` once for several methods by using the `|` operator (for example: `http::Method::GET | http::Method::POST`).
Example:
Router router;
router.setPath(http::Method::GET | http::Method::PUT, "/hello", [](const HttpRequest&){
return HttpResponse(200).body("world");
});
router.setPath(http::Method::POST, "/echo", [](const HttpRequest& req){
return HttpResponse(200).body(req.body());
});
// Add another method later (merges method mask, replaces handler)
router.setPath(http::Method::GET, "/echo", [](const HttpRequest& req){
return HttpResponse(200).body("Echo via GET");
- 431 is returned if the header section exceeds `maxHeaderBytes`.
- 413 is returned if the declared `content-length` exceeds `maxBodyBytes`.
- Connections exceeding `maxOutboundBufferBytes` (buffered pending write bytes) are marked to close after flush (default 4MB) to prevent unbounded memory growth if peers stop reading.
- Slowloris protection: configure `withHeaderReadTimeout(ms)` to bound how long a client may take to send an entire request head (request line + headers); 0 disables it. aeronet returns HTTP error 408 Request Timeout if the limit is exceeded (see the sketch below).
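A small sketch tightening these limits; the setter names come from the configuration example earlier, and the `withHeaderReadTimeout` argument type (a chrono duration) is an assumption:

```cpp
// Hedged sketch: bound header size, body size and header read time.
HttpServerConfig cfg;
cfg.withMaxHeaderBytes(8 * 1024)                               // beyond this -> 431
   .withMaxBodyBytes(1 * 1024 * 1024)                          // beyond this -> 413
   .withHeaderReadTimeout(std::chrono::milliseconds{5'000});   // slow request head -> 408
```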
SingleHttpServer::stats() exposes aggregated counters:
- `totalBytesQueued` – bytes accepted into outbound buffering (including those sent immediately)
- `totalBytesWrittenImmediate` – bytes written synchronously on the first attempt (no buffering)
- `totalBytesWrittenFlush` – bytes written during later flush cycles (EPOLLOUT)
- `deferredWriteEvents` – number of times EPOLLOUT was registered due to pending data
- `flushCycles` – number of flush attempts triggered by writable events
- `maxConnectionOutboundBuffer` – high-water mark of any single connection's buffered bytes
Use these to gauge backpressure behavior and tune maxOutboundBufferBytes. When a connection's pending buffer would exceed the configured maximum, it is marked for closure once existing data flushes, preventing unbounded memory growth under slow-reader scenarios.
You can install a lightweight per-request metrics callback capturing basic timing and size information:
SingleHttpServer server;
server.setMetricsCallback([](const SingleHttpServer::RequestMetrics& m){
// Export to stats sink / log
// m.method, m.target, m.status, m.bytesIn, m.bytesOut (currently 0 for fixed responses), m.duration, m.reusedConnection
});
Current fields (alpha – subject to change before 1.0):
| Field | Description |
|---|---|
| method | Original request method string |
| target | Request target (decoded path) |
| status | Response status code (best-effort 200 for streaming if not overridden) |
| bytesIn | Request body size (after chunk decode) |
| bytesOut | Placeholder (0 for now, future: capture flushed bytes per response) |
| duration | Wall time from parse completion to response dispatch end (best effort) |
| reusedConnection | True if this connection previously served other request(s) |
The callback runs in the event loop thread – keep it non-blocking.
The test suite uses a unified helper for simple GETs, streaming incremental reads, and multi-request keep-alive batches. See docs/test-client-helper.md for guidance when adding new tests.
- Connection write buffering / partial write handling
- Outgoing chunked responses & streaming interface (phase 1)
- Trailing headers exposure for chunked requests
- Richer routing (wildcards, parameter extraction)
- TLS (OpenSSL) support (basic HTTPS termination)
- Benchmarks & perf tuning notes
Details merged into: TLS Features
Compression libraries (zlib, zstd, brotli), OpenSSL, OpenTelemetry and spdlog provide the optional feature foundation; thanks to their maintainers & contributors.
This project also includes code from the following open source projects:
- amc, licensed under the MIT License.
- flat_hash_map, no license.
- CityHash, licensed under the MIT License.
Licensed under the MIT License. See LICENSE.