Sustainable Rewards, Powered by Demand
Written by @dizhel (SQD Co-Founder)
In early January 2026, Subsquid Labs rolled out the first Portal Revenue Pools in beta. The mechanic is simple: SQD holders can lock SQD behind a Portal that serves real paying demand, and receive USDT-denominated rewards funded by those Portal fees. The intent is to start shifting protocol rewards from “inflation-only” toward “usage-funded,” without forcing every end user into crypto-native billing flows.
A few design choices are worth calling out because they explain why this is more than a rewards program.
First, Portal fees are paid by data consumers in fiat or stablecoins. That is a distribution choice: it lowers friction for teams that want predictable billing and accounting, while still keeping SQD central to capacity and security.
Second, the rewards are backed by real revenue rather than emissions. In the current design, up to 50% of the USDT fees generated by a Portal may be allocated to independently operated Portal pools and distributed to SQD lockers, with the remaining portion earmarked for network incentives and longer-term supply management.
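To make the mechanics concrete, here is a minimal sketch of the fee split and pro-rata distribution described above. The function names, amounts, and addresses are illustrative, not the protocol's actual API; the 50% figure is the stated cap on the pool's share.

```python
# Hypothetical sketch of the Portal Revenue Pool fee split.
# Names are illustrative; 0.50 is the stated maximum pool share, not a fixed parameter.

def split_portal_fees(usdt_fees: float, pool_share: float = 0.50) -> dict:
    """Split a Portal's USDT fees between the revenue pool and network incentives."""
    assert 0.0 <= pool_share <= 0.50, "pool share is capped at 50% in the current design"
    to_pool = usdt_fees * pool_share
    return {"revenue_pool": to_pool, "network_incentives": usdt_fees - to_pool}

def distribute_to_lockers(pool_usdt: float, locked_sqd: dict) -> dict:
    """Distribute the pool's USDT to SQD lockers, pro rata by locked amount."""
    total = sum(locked_sqd.values())
    return {addr: pool_usdt * amount / total for addr, amount in locked_sqd.items()}

fees = split_portal_fees(10_000.0)  # 10,000 USDT of Portal fees in a period
rewards = distribute_to_lockers(fees["revenue_pool"], {"alice": 600_000, "bob": 400_000})
print(fees)     # {'revenue_pool': 5000.0, 'network_incentives': 5000.0}
print(rewards)  # {'alice': 3000.0, 'bob': 2000.0}
```

The point of the sketch is the shape of the flow, not the exact numbers: stablecoin fees come in, a capped share funds the pool, and lockers are paid in proportion to the capacity they underwrite.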
Third, the program is meant to scale via multiple pools rather than one monolithic bucket. The initial rollout is explicitly framed as capped and expandable, which is how you keep a stable reward surface while learning the operational edges of a new mechanism.
The next step is straightforward: more Portals, more dedicated capacity, and more pools that can map to distinct demand profiles. 2026 is where this becomes interesting, because Portals are not a marketing wrapper. They are the interface between the economic layer (SQD) and the performance layer (bandwidth, latency, and query throughput) that enterprise users actually feel.
Portal is the gateway, but the moat is the network
Portal is a high-performance data gateway for the SQD Network. It bridges data consumers to a decentralized set of workers, exposing a streaming HTTP API and combining historical data with a real-time feed where needed.
Two details matter for understanding defensibility.
Portals are capacity-metered by locked SQD. In practice, bandwidth allocation is proportional to SQD locked to a Portal’s on-chain identity. If you want more throughput, you don’t negotiate with an opaque provider; you provision more collateral behind your Portal.
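A minimal sketch of what capacity metering by locked SQD implies, assuming simple pro-rata allocation (the portal names and the total-bandwidth figure are hypothetical):

```python
# Illustrative model of capacity metering: bandwidth is allocated to each Portal
# in proportion to the SQD locked to its on-chain identity.

def allocate_bandwidth(total_gbps: float, locked_by_portal: dict) -> dict:
    """Split total network bandwidth among Portals, pro rata by locked SQD."""
    total_locked = sum(locked_by_portal.values())
    return {portal: total_gbps * sqd / total_locked
            for portal, sqd in locked_by_portal.items()}

# A Portal that wants more throughput locks more collateral,
# rather than renegotiating with a provider.
alloc = allocate_bandwidth(100.0, {"portal-a": 750_000, "portal-b": 250_000})
print(alloc)  # {'portal-a': 75.0, 'portal-b': 25.0}
```

The design choice worth noting is that provisioning is permissionless and priced in collateral, not in a bilateral contract.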
The underlying performance comes from the worker network, not the gateway. The SQD Network is powered by over 2,000 decentralized workers. Each worker is a bonded participant: running a worker requires bonding 100,000 SQD, with slashing conditions for provable violations.
This architecture is why “decentralized infrastructure” can be a defensive moat rather than a slogan. In centralized data systems, the API boundary is the chokepoint: whoever controls it can change pricing, throttle workloads, or deprecate features. In the SQD model, the API boundary (Portal) is open source and reproducible, while the scarce resource is network capacity delivered by thousands of independent operators under a shared incentive and security model.
What “scale” looks like in practice
Network scale is visible in usage metrics, not in slideware.
The SQD Network app reports roughly 8.61 TB served over 24 hours and 1.13 PB over 90 days. The 90-day figure implies an average of about 12.5 TB/day (1.13 PB ÷ 90). In the same view, query volume is shown at about 4.6M over 24 hours and 424.68M over 90 days.
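The back-of-the-envelope arithmetic behind those figures, using decimal units (1 PB = 1000 TB), lands just above 12.5 TB/day and close to the 24-hour query figure:

```python
# Sanity check on the quoted 90-day usage figures (decimal units: 1 PB = 1000 TB).
served_90d_tb = 1.13 * 1000   # 1.13 PB served over 90 days
queries_90d = 424.68e6        # ~424.68M queries over 90 days

print(f"{served_90d_tb / 90:.1f} TB/day")            # 12.6 TB/day
print(f"{queries_90d / 90 / 1e6:.1f}M queries/day")  # 4.7M queries/day
```

The implied daily averages sit in the same range as the reported 24-hour numbers (8.61 TB and 4.6M queries), which suggests the snapshot is broadly representative rather than a one-day spike.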
I’m not quoting these to brag. I’m quoting them because scale is the moat, and it is already visible: a high-bandwidth, low-friction query surface backed by a large decentralized operator set is hard to replicate quickly, even for well-funded teams, because you are building both distributed systems and distributed incentives at once.
This also explains why the Portal Revenue Pools are strategically aligned with growth. If capacity is provisioned by locked SQD and rewards can be funded by real fees, then scale is not purely an internal CapEx decision. It becomes a market between demand (clients) and supply (SQD lockers and workers), with Portals as the coordination primitive.
Beyond on-chain data: why the total addressable market expands from here
Today, the dominant workload is on-chain data across hundreds of networks, including Solana, delivered in a form that is practical for application backends and analytics pipelines.
The longer-range opportunity is that workers are not conceptually limited to “blockchain archives.” They are distributed storage and compute nodes in a protocolized marketplace. If you can assign datasets to workers, verify service, and pay for bandwidth, you can support additional database shapes over time, including those relevant to AI and RAG pipelines. That’s the direction implied by the broader SQD roadmap emphasis on petabyte-scale, self-sovereign data access and permissionless dataset provisioning.
This matters because the buyer profile changes. The market for verifiable, high-throughput data access in Web2 (analytics, agentic systems, ETL, retrieval) is structurally larger than Web3 indexing. The technical bet is that the same primitives that make the network strong for chain data (streaming, bandwidth economics, operator diversity, open gateways) can generalize to other high-volume datasets.
Web2 is moving toward crypto rails, and that changes the integration surface
There is a parallel shift happening in payments. Visa has expanded stablecoin settlement capabilities, including USDC settlement for U.S. institutions as part of its stablecoin settlement program. Klarna has publicly discussed launching a dollar-backed stablecoin (KlarnaUSD), with reporting pointing to a mainnet launch target in 2026. Stripe has also been publishing stablecoin-focused materials aimed at global businesses, which is another signal that “stablecoin rails” are becoming a mainstream product surface rather than an edge experiment.
This is relevant to SQD for a simple reason: once stablecoin settlement is normal inside large payment networks, the boundary between “data infrastructure” and “commerce infrastructure” gets thinner. The more commerce becomes programmable, the more it demands high-quality, high-throughput data and verification primitives.
In that context, Rezolve Ai’s acquisition of Subsquid Labs is not a corporate footnote; it is an integration thesis: pair a state-of-the-art decentralized data layer with payment rails and distribution in commerce.
Why SQD is not optional to the architecture
It’s tempting to describe SQD as “the token,” but that misses its role.
SQD is the coordination asset that turns a distributed set of workers into provisionable capacity. Workers are bonded in SQD. Portals receive bandwidth based on SQD locked to their identity. Revenue Pools distribute stablecoin rewards to SQD lockers who underwrite Portal capacity.
That combination is what makes the decentralized infrastructure defensible. If the system works, demand translates into fees, fees translate into rewards, rewards justify more locked SQD and more operator participation, and the network gets stronger where it matters: bandwidth, resilience, and service quality.
The cautious part is also clear: none of this removes execution risk. Scaling Portals, expanding dataset diversity, and making AI-native workloads first-class citizens are hard engineering problems. But the shape of the system is now coherent: an open gateway layer, a bonded worker network, and a revenue mechanism that can directly connect enterprise adoption to protocol rewards.
That is the kind of flywheel that, if it holds, compounds into a moat.