Liquidity at Scale: One Framework, Hundreds of Venues


TLDR: LiqEngine is a liquidity gateway framework that turns venue onboarding from a months-long engineering project into a days-long configuration exercise.


Every venue speaks FIX. None of them speak the same FIX.

That's the reality of multi-venue connectivity. The protocol gives you a common message format on paper, but in practice every liquidity provider extends it differently — custom fields, non-standard enumerations, undocumented session behaviors that only surface when you're onboarding in production. Market data and order management often require separate sessions with different lifecycles, different auth flows, and different failure modes. And all of it needs to feed normalized books into your matching engine without adding latency or operational fragility.

The standard approach is to hand-code a gateway per venue. It works for the first two or three. By the fifth, you're maintaining a portfolio of bespoke integrations where every change to the underlying messaging or matching infrastructure means touching every gateway independently. Parsing logic, session management, order tracking, book construction — duplicated everywhere, tested nowhere consistently.

There is a scaling problem too. In OTC markets — FX, rates, credit — a competitive venue needs dozens of liquidity sources, each with its own market data and order sessions. That is hundreds of concurrent sessions across potentially dozens of venues. LiqEngine was built for massive horizontal scale: sessions distribute across cores through configuration, new venues are added as shared libraries without touching existing ones, and the framework's uniform lifecycle management means operational complexity stays flat as venue count grows. With bespoke gateways, every new venue is a linear increase in engineering and operational burden. With LiqEngine, it is not.

So we built LiqEngine.


What LiqEngine Does

LiqEngine is a gateway framework that separates what's different about each venue (the FIX dialect) from what's the same (session management, book construction, order lifecycle, matching engine integration, failover). It has three layers, all built on a common C/C++ foundation: generated parsers, a pluggable gateway architecture, and native transport integration.

Built in C/C++, no compromise. LiqEngine and its generated parsers are written entirely in C/C++ — no garbage collection, no runtime pauses, no managed runtime overhead. C/C++ is the language of operating systems, network stacks, and the infrastructure that everything else runs on. For the same reason, it's the right foundation for trading infrastructure where every microsecond of overhead is a cost you pass on to your participants.

Generated zero-copy FIX parsers. We wrote a code generation pipeline that takes a venue's FIX XML specification and produces C encoders and decoders — zero-copy decoding into the original message buffer, arena allocation instead of per-message heap allocation, bitmask-based required field validation, and type-safe encoding with compile-time field checks. Each venue gets its own generated parser driven by its own XML spec. Venue-specific fields and behaviors are captured at generation time, not handled through runtime branching. Adding it to the build is one CMake function call.
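To make the zero-copy and bitmask ideas concrete, here is a minimal hand-written sketch, not LiqEngine's generated code: decoded field values are `string_view`s into the original buffer (no copies, no per-message heap allocation), and required-field presence is tracked as bits in a mask. The message type and tag set are illustrative.

```cpp
#include <charconv>
#include <cstdint>
#include <string_view>

// Illustrative zero-copy decode of a few tags from a FIX tag=value buffer.
// Values point into the original buffer; nothing is copied or allocated.
struct NewOrderView {
    std::string_view cl_ord_id;   // tag 11
    std::string_view symbol;      // tag 55
    std::string_view price;       // tag 44
    uint32_t seen = 0;            // bitmask of decoded required fields
    static constexpr uint32_t kRequired = 0b111;
    bool valid() const { return (seen & kRequired) == kRequired; }
};

inline NewOrderView decode(std::string_view buf) {
    NewOrderView v;
    size_t pos = 0;
    while (pos < buf.size()) {
        size_t eq  = buf.find('=', pos);
        size_t soh = buf.find('\x01', eq);       // SOH delimits FIX fields
        if (eq == std::string_view::npos || soh == std::string_view::npos) break;
        int tag = 0;
        std::from_chars(buf.data() + pos, buf.data() + eq, tag);
        std::string_view val = buf.substr(eq + 1, soh - eq - 1);
        switch (tag) {
            case 11: v.cl_ord_id = val; v.seen |= 1u << 0; break;
            case 55: v.symbol    = val; v.seen |= 1u << 1; break;
            case 44: v.price     = val; v.seen |= 1u << 2; break;
            default: break;  // unknown tags skipped, never copied
        }
        pos = soh + 1;
    }
    return v;
}
```

The generated parsers take this pattern further: the switch and the mask are emitted per message type from the venue's XML spec, so venue-specific tags are resolved at generation time rather than branched on at runtime.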

Pluggable gateway architecture. Gateways are shared libraries loaded at runtime. The main liqdriver process reads an INI config, loads the right gateway modules dynamically, and manages their lifecycle. Every gateway implements the same callback interface — the framework handles heartbeating, sequence numbers, reconnection, and session state. A single gateway can run in market-data-only mode, order-only mode, hybrid mode, maker mode for last-look flows, or RFS mode for structured product pricing. That's a configuration choice, not a code change. Gateways can also run in server mode, accepting inbound FIX connections and routing them to the right gateway instance based on the CompID pair in the client's Logon.
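What "a configuration choice, not a code change" looks like in practice: something on the order of the fragment below. The section and key names here are hypothetical, not LiqEngine's actual schema.

```ini
; Hypothetical liqdriver config -- keys are illustrative, not the real schema.
[gateway.venue_a]
module      = libgw_venue_a.so   ; shared library loaded at runtime
mode        = hybrid             ; market data + orders
fix_version = FIX.4.4
sender_comp = OURFIRM
target_comp = VENUEA
cpu_core    = 3                  ; pin this session's thread to a core

[gateway.venue_b]
module = libgw_venue_b.so
mode   = marketdata              ; market-data-only session
```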

Native scaleRT integration. Every message between LiqEngine and the Synapse matching engine flows through scaleRT's Glide topics — lock-free, zero-syscall, with per-sender sequencing and gap detection. Market data goes out as compact binary book snapshots. Orders and execution reports flow through dedicated per-session topics. Session control, trading controls, credit updates, and entity management all use scaleRT topics. No out-of-band control channels, no separate transport layer to manage.
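The per-sender sequencing and gap detection mentioned above can be sketched as follows. This is the sequencing idea only, not scaleRT's API: each sender stamps messages with a monotonically increasing sequence number, and the receiver tracks the next expected value per sender, flagging any jump as a gap.

```cpp
#include <cstdint>
#include <unordered_map>

// Illustrative per-sender gap detection. Each sender's sequence is
// tracked independently; a jump past the expected value reports how
// many messages went missing.
class GapDetector {
public:
    // Returns the number of missing messages detected for this sender
    // (0 when the sequence is contiguous or this is the first message).
    uint64_t on_message(uint32_t sender_id, uint64_t seq) {
        auto it = next_.find(sender_id);
        if (it == next_.end()) {            // first message from sender
            next_[sender_id] = seq + 1;
            return 0;
        }
        uint64_t expected = it->second;
        it->second = seq + 1;
        return seq > expected ? seq - expected : 0;
    }

private:
    std::unordered_map<uint32_t, uint64_t> next_;  // sender -> next expected
};
```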


Beyond Spot: RFS and Structured Products

Most gateway frameworks stop at spot order flow. LiqEngine handles the full product range.

Request for Stream support covers spot, NDF, swap, and forward pricing with multi-leg market data responses — each leg carrying its own entries for all-in rate, forward points, spot rate, and quantity. Price lifecycle is explicit: new, modify, delete actions on streaming prices, with graceful stream teardown when a request is cancelled. Complex orders link back to their originating RFS request, and execution reports carry multi-leg detail including settlement dates and tenor for each leg.
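A hypothetical shape for such a multi-leg price update, purely to illustrate the structure described above (the type and field names are not LiqEngine's):

```cpp
#include <string>
#include <vector>

// Illustrative multi-leg RFS price update: each leg carries its own
// all-in rate, forward points, spot rate, and quantity, plus the
// settlement detail that execution reports echo back.
struct RfsLeg {
    std::string settlement_date;  // e.g. "20250801"
    std::string tenor;            // e.g. "1M"
    double all_in_rate;
    double forward_points;
    double spot_rate;
    double quantity;
};

enum class PriceAction { New, Modify, Delete };  // explicit price lifecycle

struct RfsPriceUpdate {
    std::string request_id;       // links back to the originating RFS request
    PriceAction action;
    std::vector<RfsLeg> legs;     // one leg for spot/forward, two for a swap
};
```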

Entity alerts, credit line visibility, position summaries, and margin data flow through the same typed message infrastructure — so the gateway layer participates in the full risk and credit picture, not just order routing.


Failover Without State Replication

LiqEngine leverages scaleRT's RAIN protocol for hot-standby redundancy. Gateway sessions start automatically on primary promotion and stop on demotion. FIX logs rotate on role transitions for clean audit trails. The key design choice: gateway state is reconstructed from configuration and venue re-negotiation on promotion, so there's no state replication between primary and backup. One less thing to get wrong during a failover.
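In the spirit of that design choice, a promotion handler looks roughly like this sketch. The names are hypothetical, not scaleRT's RAIN API; the point is that promotion rebuilds sessions from static config and fresh venue logons, and there is no replicated runtime state to apply.

```cpp
#include <string>
#include <vector>

// Illustrative role-transition handling: sessions are reconstructed
// from configuration on promotion and torn down on demotion, with FIX
// logs rotated on every role change.
struct SessionConfig { std::string venue; std::string mode; };

class GatewayHost {
public:
    explicit GatewayHost(std::vector<SessionConfig> cfg) : cfg_(std::move(cfg)) {}

    void on_promote() {                  // backup -> primary
        rotate_fix_logs();               // clean audit trail per role
        for (const auto& c : cfg_)       // reconstruct, don't replay state
            active_.push_back(c.venue);  // stands in for logon/renegotiation
    }
    void on_demote() {                   // primary -> backup
        rotate_fix_logs();
        active_.clear();                 // backup holds no session state
    }
    const std::vector<std::string>& active() const { return active_; }
    int log_generation() const { return log_gen_; }

private:
    void rotate_fix_logs() { ++log_gen_; }
    std::vector<SessionConfig> cfg_;
    std::vector<std::string> active_;
    int log_gen_ = 0;
};
```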


Where It Runs in Production

LiqEngine powers production connectivity across a growing set of institutional FX and equities venues — plus our own Velio server gateways that connect external clients and market makers to Synapse. Each gateway is a self-contained shared library. Adding a new venue means writing the venue-specific FIX handling, generating the parser from the venue's XML spec, and deploying a shared library. The framework, the transport, the book management, the order lifecycle, and the failover are already there.

Every decision in LiqEngine was made for the same reason: venue connectivity shouldn't be the thing that slows you down — not in latency, and not in time to market.


To learn more about LiqEngine, scaleRT, or the Synapse matching engine, get in touch.
