Performance

How fast. How much. Measured, not guessed.

The numbers below are from wrk benchmarks against the production API on a 4-vCPU VPS. The methodology is under "How we measured this" below; reproduce it on your own infra and you should see similar numbers.

0.6 ms      p50 engine latency       rule eval + HMAC sign, in-process
1.4 ms      p95 engine latency       includes Pino structured log write
2.4 ms      p99 engine latency       tail driven by GC pauses
25,000      req/s per CPU core       sustained over a 5-min wrk run
100,000+    req/s on a 4-core VPS    linear scaling, fully shared-nothing
~12 KB      memory per project       rules + secret + counter, in-memory

Why it's this fast

  • Rules live in process memory. Postgres is touched once at boot to hydrate, then never on the hot path.
  • Single allocation per check. The evaluator returns a struct from a pre-allocated pool — no GC churn per request.
  • HMAC is microseconds. Node's built-in crypto.createHmac is OpenSSL-backed; sign + verify combined take under 5 µs (sketched after this list).
  • Fastify, not Express. Schema-compiled routes, pino logging, no per-request middleware chain.
  • No serialization tax. The /check response is a fixed-shape JSON we serialize with a precompiled schema (~3 µs versus ~50 µs for ad-hoc JSON.stringify; see the second sketch after this list).
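
Two of those claims are easy to sanity-check yourself. First, the HMAC cost: a minimal micro-benchmark using only Node's standard crypto module. The key, payload shape, and iteration count are illustrative, not Acrossed internals.

import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative key and payload -- not Acrossed's actual shapes.
const secret = "proj_secret_example";
const payload = JSON.stringify({ project: "demo", rule: "beta-users", ts: Date.now() });

function sign(body: string): Buffer {
  // One HMAC-SHA256 pass; OpenSSL does the work.
  return createHmac("sha256", secret).update(body).digest();
}

function verify(body: string, sig: Buffer): boolean {
  // Constant-time compare; both digests are 32 bytes.
  return timingSafeEqual(sign(body), sig);
}

// Amortize loop overhead over many iterations.
const sig = sign(payload);
const N = 100_000;
const t0 = process.hrtime.bigint();
for (let i = 0; i < N; i++) verify(payload, sig);
const t1 = process.hrtime.bigint();
console.log(`sign+verify: ${(Number(t1 - t0) / N / 1000).toFixed(2)} µs per op`);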

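Second, the precompiled serializer. This is stock Fastify behavior: declare a response schema on the route and Fastify compiles a dedicated serializer (fast-json-stringify) instead of calling JSON.stringify per request. The field names below are illustrative, not the actual /check response contract.

import Fastify from "fastify";

const app = Fastify();

// Declaring a response schema makes Fastify precompile the serializer
// (fast-json-stringify) instead of calling JSON.stringify per request.
app.post("/check", {
  schema: {
    response: {
      200: {
        type: "object",
        properties: {
          allowed: { type: "boolean" },
          rule: { type: "string" },
          ts: { type: "integer" },
        },
      },
    },
  },
}, async () => {
  // Illustrative fixed-shape payload.
  return { allowed: true, rule: "beta-users", ts: Date.now() };
});

app.listen({ port: 3000 });
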
Throughput math, plain

A typical request to your app already costs milliseconds of its own: DB queries, template rendering, framework overhead. The Acrossed engine adds at most one more millisecond of compute; on a same-region call from your app to api.acrossed.com, the network hop dominates, and the wall-clock cost breaks down as:

  • Network round-trip: 5–25 ms
  • HMAC sign + verify: ≈ 5 µs
  • Engine evaluation: ≈ 0.5 ms
  • Response serialization: ≈ 3 µs

For workloads where an extra 5–25 ms is unacceptable, every SDK supports a 50 ms soft timeout that fails open, so a slow gate cannot stall your request budget.
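
What that looks like in practice: a minimal fail-open wrapper in TypeScript. The request body fields and the bare unsigned fetch are simplifications for illustration; this is not the shipped SDK API, and real calls are HMAC-signed.

// Hypothetical wrapper -- the shipped SDKs implement this internally.
async function checkAllowed(projectKey: string, rule: string): Promise<boolean> {
  const ctl = new AbortController();
  const timer = setTimeout(() => ctl.abort(), 50); // 50 ms soft timeout
  try {
    const res = await fetch("https://api.acrossed.com/check", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ projectKey, rule }),
      signal: ctl.signal,
    });
    if (!res.ok) return true; // fail open on server errors too
    const body = (await res.json()) as { allowed: boolean };
    return body.allowed;
  } catch {
    // Timeout or network failure: fail open rather than stall the caller.
    return true;
  } finally {
    clearTimeout(timer);
  }
}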

How we measured this

The benchmark is reproducible. From a load box on the same network, against a freshly hydrated production API:

# Warm up
wrk -d 30s -t 4 -c 64 \
  -s post.lua https://api.acrossed.com/check

# Measure (5 min sustained)
wrk -d 5m -t 4 -c 256 --latency \
  -s post.lua https://api.acrossed.com/check

# Result (4 vCPU, 8 GB VPS, ruleset of 50 rules):
#   Requests/sec:  102,431.7
#   Latency p50:    0.61 ms
#   Latency p95:    1.42 ms
#   Latency p99:    2.38 ms

Caveats: the numbers above are engine-side only; they do not include the public-internet round-trip from your servers, which depends on your geography. Throughput scales linearly with cores until network saturation (~10 Gb/s at the VPS edge).