Credit-Aware Matching at Microsecond Speed
TL;DR — Exchange operators spend too much time wiring together matching, credit, multilateral execution, and audit as separate systems. Synapse collapses that into one engine — fewer moving parts, lower latency, one place for risk controls and compliance. Market makers get clean APIs and fast response paths. Your ops team gets one system to monitor instead of a chain. And when volumes grow, latency stays flat.
The Problem
Most matching engines were built to match orders by price and time. Credit checks, multilateral credit relationships, and audit trails got bolted on later — external credit gateways, separate request-for-execution (RFE) managers, post-trade reconciliation. Each addition is another hop, another system to coordinate, and another place where latency and inconsistency creep in.
Today's venues need credit and risk enforced at the point of execution, multilateral relationships where multiple parties confirm or reject within the same flow, multi-instrument books with per-entity permissions, and a complete audit record. The usual response is to add systems around the engine. That works until it doesn't.
Synapse treats those concerns as part of matching, not as layers around it.
Credit at the Point of Match
Synapse's patent-pending credit engine evaluates entity-level and prime-broker credit inline with matching — not in a separate gateway before or after. Order-to-fill latency stays in the 1–2 microsecond range for most cases because credit isn't an extra round trip. It's part of the same operation.
Multilateral execution is built in. When a match requires confirmation, the engine coordinates with makers through dedicated market maker APIs. Multiple makers can respond in parallel on the same order, so fill rates go up and time-to-fill goes down. Quantity integrity is enforced throughout — overfills are prevented by construction.
Every lifecycle event is persisted automatically. You get a complete, queryable audit trail without any impact on matching performance.
Performance
Synapse and scaleRT are built entirely in C/C++, the same languages the Linux and Windows kernels, and virtually every other operating system, are written in. Operating systems use C because they cannot afford abstraction overhead between their logic and the hardware. A matching engine has the same constraint: no garbage collector deciding when to pause the execution thread, no managed runtime making allocation decisions, no virtual machine between an incoming order and the matching logic. This is not a stylistic preference; it is an engineering requirement for deterministic microsecond-level performance.

The matching path runs on a dedicated core with no contention, no scheduling jitter, and fully deterministic behavior. Everything else (persistence, logging, network I/O) runs off the critical path. All memory is pre-allocated at startup, so there are no allocation pauses during operation. The result is consistent, predictable latency that doesn't degrade as volume scales.
C/C++ is the language of operating systems, network stacks, and the infrastructure that everything else runs on. For the same reason, it is the right foundation for trading infrastructure where every microsecond of overhead is a cost you pass on to your participants.
Client and market maker APIs are available in multiple languages, so participants integrate in whatever stack they already run.
Synapse also ships with production FIX gateways for both maker and taker connectivity — participants connect through standard FIX protocol, not proprietary integration.
Synapse runs on scaleRT, our messaging runtime. The messaging layer adds no kernel overhead or lock contention to the matching path — orders in and execution reports out at microsecond speed.
What It Means for Your Venue
- 1–2 microsecond matching with credit inline. Patent-pending credit engine evaluates limits as part of the match. No separate gateway, no extra round trip.
- Faster fills, more liquidity. Multiple makers respond in parallel on the same order. Fill rates go up, time-to-fill goes down.
- Market makers plug in, not bolt on. Dedicated APIs for receiving and responding to execution requests — simpler integration, lower latency for makers.
- FIX connectivity out of the box. Production maker and taker FIX gateways ship with Synapse, built on LiqEngine and scaleRT. Participants connect through standard FIX protocol — no proprietary integration required.
- One audit trail, not five. Single, sequenced record of every order and fill from the matching core. No stitching logs across systems.
- Multiple instruments, one engine. Per-entity permissions and credit across all books, managed in one place.
- Change risk parameters live. Credit limits, trading permissions, and instrument settings update through BrokerLink without a restart.
- Predictable latency under load. Dedicated-core matching with pre-allocated memory. Performance stays consistent as volume grows.
- Professional services when you're ready for bare metal. Our team designs, builds, and tunes bare-metal datacenter environments for Synapse deployments — network topology, kernel and NIC tuning, co-location layout, and end-to-end latency validation. Not everything needs to move: we design hybrid architectures where the matching core runs on hardware you control while ops, monitoring, and DR stay in the cloud, connected transparently through scaleRT's TCP Bridge.
To learn more about Synapse or our professional services practice, or to discuss how either can support your venue, get in touch.