Recoil Analytics runs the entire CS2 demo parser inside your browser using WebAssembly. Your demo file never leaves your device, parsing completes in approximately 11 seconds for a 367 MB match, and there is no upload queue or server wait. This page explains the technical architecture behind that result.
A standard professional CS2 demo file is 300–400 MB of densely packed binary data. It encodes every game event — kills, movements, grenades, economy decisions — across 30+ rounds at a 64-tick or 128-tick rate. Parsing this correctly requires a format-aware binary parser, not a general-purpose text processor.
WebAssembly (WASM) is a binary instruction format that modern browsers execute at near-native speed. The demoparser2 library — a Rust-based CS2 demo parser — is compiled to WASM via wasm-pack and shipped as an 800 KB binary. When you drop a demo file, the browser instantiates this module and runs the full parse locally, with no network traffic involved.
Privacy Guarantees
For players who want cloud sync or AI-powered analysis of their own matches, the Pro tier adds optional server-side processing. But for the vast majority of use cases — instant local replay and analysis — the browser-only pipeline is unbeatable.
You drag and drop a demo file onto the demo viewer page, or use the file picker. The browser's native File API loads the raw bytes into memory without any network transfer.
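The ingestion step can be sketched as follows. This is a minimal illustration, not the actual Recoil Analytics code; the `isDemoFile` helper and the commented event wiring are hypothetical.

```typescript
// Pure helper: decide whether a dropped file looks like a CS2 demo.
// Demos are .dem files, often Zstandard-compressed as .dem.zst.
function isDemoFile(name: string): boolean {
  const lower = name.toLowerCase();
  return lower.endsWith(".dem") || lower.endsWith(".dem.zst");
}

// Browser glue (requires DOM types; shown for context only):
// dropZone.addEventListener("drop", async (e) => {
//   e.preventDefault();
//   const file = e.dataTransfer?.files[0];
//   if (file && isDemoFile(file.name)) {
//     const bytes = await file.arrayBuffer(); // raw bytes, no network transfer
//     startParse(bytes);                      // hypothetical entry point
//   }
// });
```

`File.arrayBuffer()` is the key call: it reads the dropped file's bytes straight into memory, so no request ever leaves the machine.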
The decompressed .dem bytes are handed off to a Web Worker — a background thread that runs independently of the main UI thread. This means the page remains fully responsive while parsing occurs; you can still navigate and interact with the interface.
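The handoff can be sketched with a small message protocol. The worker file name and message shapes below are illustrative assumptions, not the real ones; the testable part is the type guard that the main thread would use on incoming messages.

```typescript
// Hypothetical message shapes between main thread and parser worker.
type ParseRequest = { kind: "parse"; bytes: ArrayBuffer };
type ParseProgress = { kind: "progress"; phase: 1 | 2; done: boolean };

// Runtime guard so the main thread only acts on well-formed messages.
function isParseProgress(msg: unknown): msg is ParseProgress {
  const m = msg as ParseProgress;
  return !!m && m.kind === "progress" && (m.phase === 1 || m.phase === 2);
}

// Main thread (browser-only, shown for context):
// const worker = new Worker(new URL("./parse.worker.ts", import.meta.url));
// worker.onmessage = (e) => { if (isParseProgress(e.data)) updateUI(e.data); };
// // Listing `bytes` as a transferable moves ownership into the worker --
// // a zero-copy handoff even for a 350 MB buffer.
// worker.postMessage({ kind: "parse", bytes } satisfies ParseRequest, [bytes]);
```

Transferring the `ArrayBuffer` (rather than structured-cloning it) is what keeps the handoff cheap: ownership moves to the worker instead of the bytes being copied.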
All parsed data is written to the browser's IndexedDB — a local, persistent, transactional database built into every modern browser. The data stays on your device and survives page refreshes and browser restarts. You do not need to re-parse a demo you have already processed.
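A cache along these lines could key parsed results by cheap file metadata so a re-dropped demo is served from IndexedDB instead of re-parsed. The database and store names are hypothetical; only the key-building helper is asserted here.

```typescript
// Stable cache key from file metadata -- cheap to compute, no need to
// hash 350 MB of bytes just to recognize a previously parsed demo.
function demoCacheKey(name: string, size: number, lastModified: number): string {
  return `${name}:${size}:${lastModified}`;
}

// Browser glue (IndexedDB, shown for context; names are illustrative):
// const db = await new Promise<IDBDatabase>((resolve, reject) => {
//   const req = indexedDB.open("recoil-demos", 1);
//   req.onupgradeneeded = () => req.result.createObjectStore("parsed");
//   req.onsuccess = () => resolve(req.result);
//   req.onerror = () => reject(req.error);
// });
// db.transaction("parsed", "readwrite")
//   .objectStore("parsed")
//   .put(parsedData, demoCacheKey(file.name, file.size, file.lastModified));
```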
Once data is in IndexedDB, React components read it on demand. The 2D replay renders player positions on an HTML Canvas element, drawing named circles for each player and animating them along the timeline as you scrub. Kill events flash on contact. Grenade trajectories arc across the map.
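Smooth scrubbing between sampled ticks comes down to interpolating each player's position. A minimal sketch, with illustrative types and a hypothetical draw loop:

```typescript
interface Pos { x: number; y: number }

// Linear interpolation between two sampled positions.
// t = 0 -> a, t = 1 -> b; clamped so scrubbing past a sample cannot overshoot.
function lerpPos(a: Pos, b: Pos, t: number): Pos {
  const k = Math.min(1, Math.max(0, t));
  return { x: a.x + (b.x - a.x) * k, y: a.y + (b.y - a.y) * k };
}

// Per-frame draw (canvas 2D, shown for context):
// ctx.clearRect(0, 0, canvas.width, canvas.height);
// for (const p of players) {
//   const pos = lerpPos(p.prevTick, p.nextTick, scrubFraction);
//   ctx.beginPath();
//   ctx.arc(pos.x, pos.y, 6, 0, Math.PI * 2); // named circle per player
//   ctx.fill();
//   ctx.fillText(p.name, pos.x + 8, pos.y);
// }
```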
WebAssembly is a binary instruction format designed to run in web browsers at near-native speed. Unlike JavaScript, which ships as source and is parsed and JIT-compiled at runtime, WASM is compiled ahead of time from systems languages such as Rust, C++, and Go. The browser engine (V8 in Chrome and Edge, SpiderMonkey in Firefox) then translates the WASM binary to native machine code, reaching roughly 40–60% of equivalent native C++ performance.
For binary file parsing — which is what demo analysis requires — WASM is 4–6x faster than equivalent JavaScript. The demoparser2 library is written in Rust, a language with C-level performance and zero garbage collection overhead. Compiled to WASM, it can scan megabytes of structured binary data per second without the pauses that a garbage-collected runtime would introduce.
| Aspect | JavaScript | WebAssembly |
|---|---|---|
| Parse 350 MB file | 45–60 seconds | 8–12 seconds |
| Memory usage | Higher (boxed objects, GC headroom) | Lower (compact linear memory) |
| CPU usage | Higher (JIT warm-up during execution) | Lower (compiled ahead of time) |
| Browser compatibility | All modern browsers | All modern browsers |
| Bundle size (parser) | Large | ~800 KB binary |
WASM runs inside the browser's security sandbox. It cannot access your filesystem directly, cannot read data from other browser tabs, and cannot execute arbitrary operating system code. The security model is identical to JavaScript — WASM is simply faster for compute-intensive tasks.
Browser support is universal across modern platforms: Chrome and Edge since 2017, Firefox since 2017, Safari since 2018. Approximately 98% of web users are on a browser that supports WASM.
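For the remaining ~2%, WASM support can be confirmed at runtime before fetching the 800 KB parser binary. A minimal sketch (the function is illustrative, not Recoil's actual code); the 8-byte buffer is the smallest valid WASM module, the magic number `\0asm` followed by version 1:

```typescript
// Detect WebAssembly support before downloading the parser binary.
function wasmSupported(): boolean {
  if (typeof WebAssembly !== "object" || WebAssembly === null) return false;
  // Smallest valid module: magic "\0asm" + version 1. If validate()
  // accepts it, the engine can compile WASM.
  const magic = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
  return typeof WebAssembly.validate === "function" && WebAssembly.validate(magic);
}
```

On an unsupported browser the app can fall back to an explanatory message instead of failing mid-parse.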
Test conditions: MacBook Air M1, 8 GB RAM. Demo file: Budapest Major match, 367 MB, 30 rounds. Results averaged over 5 runs.
Original implementation (one API call per event type): 107 seconds. Current optimized implementation: ~11 seconds. That is a 10x improvement from the mega-batch parseEvents() optimization alone.
| Component | Time | % of Total |
|---|---|---|
| File drag-drop + read | 0.2 s | 2% |
| File decompression (.zst) | 0.5 s | 4% |
| WASM module load | 0.1 s | 1% |
| Phase 1: parseEvents() | 5.6 s | 47% |
| Phase 2: parseTicks() | 5.2 s | 43% |
| Build UI structures | 0.2 s | 2% |
| Render initial state | 0.2 s | 2% |
| TOTAL | ~11 s | — |

Component times are rounded per-step measurements and sum to roughly 12 s; end-to-end wall-clock time averaged ~11 s across the five runs.
- 367 MB demo file tested
- ~11 s total parse time
- 10x faster than the original implementation (107 s)
The first content visible to the user appears within 1.2 seconds of dropping the file — the 2D map and round timeline are rendered immediately from the file metadata while parsing continues in the Web Worker background thread. Full interactivity, including position-based replay scrubbing, is available once Phase 2 completes at the ~11-second mark.
Because the entire parsing pipeline runs in the browser, Recoil Analytics never receives your demo file on a server. There is no ingestion API, no S3 bucket, no file processing queue. The infrastructure that would be required to handle uploaded demos simply does not exist in the free-tier architecture.
The most important performance decision in the pipeline is the separation of event parsing from tick/position parsing into two sequential phases, so that the second phase runs while the user is already interacting with the UI.
| Phase | Time | UI impact |
|---|---|---|
| Phase 1 — Events | 5.6 s | Blocks the UI from becoming interactive |
| Phase 2 — Positions | 5.2 s | Runs while the UI is already usable |
Phase 1 is executed as a single parseEvents() call that requests all event types simultaneously. This is the key optimization: earlier versions of the parser made one parseEvent() call per event type (kills, damages, grenades, flashes, etc.), and each call re-read the entire demo file from the beginning. With 15+ event types, each paying at least a ~1.5-second I/O baseline on top of its own decode work, the original implementation totalled 107 seconds. The mega-batch approach reads the file once and dispatches all events in a single pass, cutting the parseEvents() call itself to 2.8 seconds of Phase 1's 5.6-second total.
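The arithmetic behind the batching win can be made concrete with a toy cost model. The functions and cost parameters below are illustrative, not demoparser2's API or measured values:

```typescript
// N separate calls, each re-reading the whole file before decoding:
// total = N * (io + decode)
function perEventCallTime(nEventTypes: number, ioPerCall: number, decodePerType: number): number {
  return nEventTypes * (ioPerCall + decodePerType);
}

// One batched call: a single read, then all event types decoded in one pass:
// total = io + N * decode
function megaBatchTime(nEventTypes: number, ioPerCall: number, decodePerType: number): number {
  return ioPerCall + nEventTypes * decodePerType;
}
```

Whatever the exact constants, the fixed I/O cost is paid once instead of N times, so the gap widens with every event type the analysis requests.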
Phase 2 uses a pre-built tick sample list — rather than scanning all ticks in the file — which reduces the position parse from a potential multi-minute operation to 5.2 seconds. The worker sends the Phase 1 result to the main thread immediately on completion, making the UI interactive before Phase 2 finishes. Position data populates asynchronously via a callback and becomes available for the 2D replay once Phase 2 completes.
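The sampling idea can be sketched in a few lines. The function and step size are illustrative; the actual sample spacing Recoil Analytics uses is not specified here.

```typescript
// Build a sampled tick list: every `step`-th tick instead of all of them.
// Bounds Phase 2's work regardless of how long the match ran.
function buildTickSamples(totalTicks: number, step: number): number[] {
  const samples: number[] = [];
  for (let t = 0; t < totalTicks; t += step) samples.push(t);
  return samples;
}
```

For example, sampling a 64-tick demo every 16 ticks yields four position samples per second of game time, which is ample for 2D replay scrubbing once positions are interpolated between samples.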