ZibaXeer Technical Whitepaper
Abstract
ZibaXeer is an on-chain copy-trading protocol built for HyperPaxeer (EVM Chain ID 125). The system is designed to deliver transparent execution, non-custodial capital management, and performance-aligned incentives while sustaining high transaction throughput and low operational latency. Scalability is achieved through a layered architecture: deterministic on-chain settlement for critical state, asynchronous off-chain indexing for analytics, and queue-based background processing for compute-heavy aggregation.
System Goals
- Preserve trust-minimized execution for vault lifecycle, deposits, withdrawals, and profit splitting.
- Support growth in vault count, follower count, and trade frequency without degrading UX.
- Keep write-heavy analytics workloads off the critical transaction path.
- Provide modular upgrade paths for risk logic and protocol policy.
Architecture Summary
ZibaXeer uses four coordinated layers:
- Smart contract layer (HyperPaxeer): VaultFactory, CopyTradingVault, RiskManager, RevenueSplitter, VaultRegistry, and adapters.
- Event ingestion layer: an indexer that subscribes to contract events and turns chain activity into durable jobs.
- Processing layer: BullMQ workers that materialize trades, snapshots, and follower state into PostgreSQL.
- Delivery layer: backend APIs and frontend dashboard for user-facing analytics and control surfaces.
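The hand-off between the ingestion and processing layers can be sketched as follows. This is an illustrative sketch only: the identifiers (`ChainEvent`, `toJob`, `InMemoryQueue`) do not come from the ZibaXeer codebase, and the in-memory queue stands in for the durable BullMQ/Redis queue the real system uses. The key idea shown is that the indexer normalizes each raw contract event into a job with a deterministic id before anything touches downstream stores.

```typescript
// Sketch of the indexer -> queue hand-off. All names are illustrative.

interface ChainEvent {
  txHash: string;      // transaction that emitted the event
  logIndex: number;    // position of the log within the transaction
  vault: string;       // emitting vault address
  name: "TradeExecuted" | "FollowerSubscribed" | "FollowerUnsubscribed";
  payload: Record<string, unknown>;
}

interface Job {
  id: string;          // idempotency key: duplicate deliveries map to the same id
  queue: string;       // routed by event domain so workers can scale per queue type
  data: ChainEvent;
}

// Derive a deterministic job id so at-least-once delivery stays safe downstream.
function toJob(ev: ChainEvent): Job {
  return {
    id: `${ev.txHash}:${ev.logIndex}`,
    queue: ev.name === "TradeExecuted" ? "trades" : "followers",
    data: ev,
  };
}

// Stand-in for a durable queue (the real system uses BullMQ backed by Redis).
class InMemoryQueue {
  private jobs = new Map<string, Job>();
  enqueue(job: Job): boolean {
    if (this.jobs.has(job.id)) return false; // duplicate event: no-op
    this.jobs.set(job.id, job);
    return true;
  }
  size(): number {
    return this.jobs.size;
  }
}
```

Because the job id is derived from on-chain identity rather than generated at enqueue time, a re-delivered event collapses onto the same job instead of producing a second one.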
Core Scalability Decisions
1) Event-driven data plane
On-chain contracts emit canonical events (vault deployed, trade executed, follower subscribed/unsubscribed). The indexer consumes events and enqueues jobs rather than writing directly to all downstream stores. This introduces buffering and backpressure control, allowing the system to absorb bursts of chain activity.
2) Queue-based asynchronous processing
BullMQ workers isolate expensive tasks such as snapshot recalculation and leaderboard updates. Work is retried with bounded attempts and backoff, reducing failure amplification during transient RPC or database disruptions.
3) Read/write separation
Execution-critical writes remain on-chain. Analytical reads are served from PostgreSQL materializations. This avoids repeated full-chain scans for dashboard queries and keeps API latency stable as historical data volume grows.
4) Horizontal scaling model
The indexer can be partitioned by event domain, and workers can scale by queue type and concurrency. Backend API replicas scale independently from indexer/worker throughput, enabling cost-effective autoscaling per subsystem bottleneck.
5) Modular contract composition
Risk checks, revenue splitting, and vault deployment are decomposed into dedicated contracts. This reduces the blast radius of changes and enables targeted upgrades (via UUPS governance paths) without redeploying the entire protocol surface.
Data Consistency and Reliability
- Chain is the source of truth for financial state transitions.
- Event processing is at-least-once; idempotent persistence keys (transaction hash, vault address, event semantics) prevent duplicate side effects.
- Queue decoupling prevents temporary downstream failures from causing data loss.
- Health endpoints and process supervision support fast operational detection and recovery.
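The at-least-once guarantee above relies on persistence being idempotent. One way to make that concrete (schema and names below are illustrative, not from the actual protocol) is to key each materialized row by the event's on-chain identity and derive aggregates from rows rather than incrementing counters, so a replayed job cannot change state:

```typescript
// Sketch of idempotent persistence under at-least-once delivery.
// Table shape and keys are illustrative; the real schema is not specified here.

interface TradeRow {
  key: string;       // "(txHash):(logIndex)" — the event's identity on-chain
  vault: string;
  sizeUsd: number;
}

class TradeStore {
  private rows = new Map<string, TradeRow>();

  // Analogous to `INSERT ... ON CONFLICT (key) DO NOTHING` in PostgreSQL:
  // replaying the same event leaves state unchanged.
  upsert(row: TradeRow): void {
    if (!this.rows.has(row.key)) this.rows.set(row.key, row);
  }

  // Aggregates are derived from rows rather than incremented in place,
  // so duplicate deliveries cannot inflate them.
  volumeFor(vault: string): number {
    let total = 0;
    for (const r of this.rows.values()) if (r.vault === vault) total += r.sizeUsd;
    return total;
  }
}
```

The design choice worth noting is the second one: even with a unique key, an `UPDATE counter = counter + 1` side effect would break under replay, whereas recomputing from idempotent rows is safe by construction.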
Performance and Capacity Considerations
The architecture is designed to scale along three axes:
- Vault growth: additional vaults add event volume, handled through parallel listeners and queue workers.
- Follower growth: subscription events and snapshot jobs scale through worker concurrency and database indexing.
- Trade frequency growth: bursty trade streams are smoothed by queue buffering and retriable consumers.
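The retriable-consumer behavior referenced above can be sketched as bounded retry with exponential backoff. The parameters and helper names here are illustrative; in the real system the delay scheduling would be handled by the queue itself (BullMQ jobs support `attempts` and `backoff` options), so this sketch only models the policy:

```typescript
// Sketch of bounded retry with exponential backoff for transient
// RPC or database failures. Parameters are illustrative.

function backoffMs(attempt: number, baseMs = 200, capMs = 5_000): number {
  // attempt 1 -> 200ms, 2 -> 400ms, 3 -> 800ms, ... capped at capMs
  return Math.min(capMs, baseMs * 2 ** (attempt - 1));
}

function runWithRetry<T>(
  task: () => T,
  maxAttempts = 3,
): { value?: T; attempts: number } {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { value: task(), attempts: attempt };
    } catch {
      if (attempt === maxAttempts) break; // bounded: give up rather than retry forever
      // A real worker would sleep for backoffMs(attempt) here (or let the
      // queue reschedule the job); the policy is what matters in this sketch.
      void backoffMs(attempt);
    }
  }
  return { attempts: maxAttempts };
}
```

Bounding the attempt count is what prevents failure amplification: a persistently failing job is surrendered (for dead-lettering or operator review) instead of occupying worker capacity during an outage.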
Security and Risk Posture
- Non-custodial user model with contract-mediated flows.
- Explicit risk-gating before trade execution through RiskManager and oracle-backed eligibility checks.
- Upgradeability with governance controls for iterative hardening.
- Circuit-breaker patterns to limit cascading losses during anomalous conditions.
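The whitepaper names the circuit-breaker pattern without specifying thresholds or placement, so the following is a minimal sketch under assumed parameters (a failure-count trip condition and an explicit reset, with names like `TradeBreaker` invented for illustration). The pattern's essence: a gate consulted before each execution that trips open after repeated failures, rejecting further attempts until deliberately reset:

```typescript
// Sketch of a circuit-breaker gate for trade execution.
// Class name and thresholds are illustrative, not from ZibaXeer.

type BreakerState = "closed" | "open";

class TradeBreaker {
  private failures = 0;
  private state: BreakerState = "closed";

  constructor(private readonly maxFailures: number) {}

  // Consulted before each copy-trade; an open breaker rejects execution outright.
  allow(): boolean {
    return this.state === "closed";
  }

  recordFailure(): void {
    this.failures++;
    if (this.failures >= this.maxFailures) this.state = "open"; // trip
  }

  recordSuccess(): void {
    this.failures = 0; // consecutive-failure count resets on any success
  }

  // Governance action (or a timed half-open probe) closes the breaker again.
  reset(): void {
    this.failures = 0;
    this.state = "closed";
  }
}
```

In an on-chain deployment the equivalent check would live in a contract such as RiskManager so that the halt is enforced at the settlement layer, not merely in off-chain tooling.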