Portals AMA - All you need to know
On November 14th, SQD CEO Dmitry went live on X to answer questions from the community and talk a little more in-depth about the release of Portals, our next big roadmap item.
Unfortunately, we had some audio issues along the way. Fortunately, our intern took notes, which enabled us to write this recap blog with all the answers from the AMA.
Portal is our upgrade to the current client that sits between the SQD Network and data consumers, delivering drastic performance improvements and increased decentralization.
For a deep dive into that, you can go here.
Portal vs RPC
RPC nodes are the standard way to interact with a blockchain. While they work well for writing data to it, they aren’t optimized for read operations. As Dmitry says, “RPCs aren’t built for extracting data at scale. They are optimized for submitting and executing transactions.”
He explains this using the example of a wallet. Assume you need transaction data to show a user their transaction history. When using RPC providers, especially public RPC endpoints, you’re always competing with other consumers for rate-limited capacity; the only workaround is paying for higher limits on a paid plan.
With the SQD Portal, however, you can access all the data across networks. There’s no need to compete with others; the only restriction is your own bandwidth.
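To make the contrast concrete, here is a minimal Python sketch. The interfaces are purely illustrative assumptions (`RateLimitedRpc` and `portal_stream` are hypothetical stand-ins, not SQD's actual API): the RPC path serves small, throttled pages, while the Portal-style path is a single stream bounded only by the consumer.

```python
import time

# Illustrative sketch only: class and function names here are
# hypothetical, not SQD's real API.

class RateLimitedRpc:
    """Mimics a public RPC endpoint: small pages, a requests-per-second cap."""

    def __init__(self, blocks, page_size=4, max_rps=100):
        self.blocks = blocks
        self.page_size = page_size
        self.min_interval = 1.0 / max_rps
        self._last_call = 0.0

    def get_page(self, cursor):
        # The provider enforces its rate limit before serving each page.
        wait = self.min_interval - (time.monotonic() - self._last_call)
        if wait > 0:
            time.sleep(wait)
        self._last_call = time.monotonic()
        return self.blocks[cursor:cursor + self.page_size]


def portal_stream(blocks):
    """Portal-style access: one continuous stream, bounded only by the consumer."""
    yield from blocks


blocks = list(range(20))

# RPC path: many small, throttled round trips with cursor bookkeeping.
rpc = RateLimitedRpc(blocks)
fetched, cursor = [], 0
while True:
    page = rpc.get_page(cursor)
    if not page:
        break
    fetched.extend(page)
    cursor += len(page)

# Portal path: consume the whole stream at your own pace.
streamed = list(portal_stream(blocks))

assert fetched == streamed == blocks
```

Both paths end up with the same data; the difference is that the RPC path pays a round trip and a rate-limit check per page, while the stream is limited only by how fast the consumer drains it.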
What’s possible that hasn’t been before?
Dmitry is quick to point out that the Portal is just a stepping stone toward the end game. It opens up new opportunities and brings us closer to a vision where you can get data from any chain, anywhere.
“With the Portal, for the first time, you get an uninterrupted stream of data, maxed out only by your ability to consume it. There’s no solution on the market that is decentralized, allows you to set rate limits on your own, and gives you this much power in handling the data.”
What is the end game?
Complete elimination of the middle layer. Just your device and the data.
Co-Processing x SQD?
Co-processing is often brought up in the age of AI agents, yet rarely explained. For Dmitry, in the context of blockchains, co-processing refers to the blockchain functioning as an attestation layer on top of which other computations can run.
The example he mentions is using onchain information to trigger off-chain actions: for example, ordering you a pizza if Bitcoin hits $100k.
Obviously, there’ll be plenty of different use cases besides ordering food, only limited by the imagination of the people building co-processors.
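As a toy illustration of that pattern, a co-processor boils down to a loop that watches attested onchain state and fires an off-chain effect once a condition is met. The price feed and the `order_pizza` action below are hypothetical stand-ins invented for this sketch, not any real integration.

```python
# Toy co-processor sketch. The feed and the action are hypothetical
# stand-ins, not a real onchain data source or delivery API.

def run_coprocessor(price_feed, threshold, action):
    """Watch an onchain price attestation; fire an off-chain action once."""
    for price in price_feed:
        if price >= threshold:
            return action(price)
    return None


orders = []

def order_pizza(price):
    # The off-chain side effect, triggered by onchain state.
    orders.append(f"pizza ordered at BTC=${price:,}")
    return orders[-1]


# Simulated stream of attested BTC prices.
feed = [92_000, 97_500, 100_250]
result = run_coprocessor(feed, 100_000, order_pizza)

assert result == "pizza ordered at BTC=$100,250"
```

The blockchain's role here is only to attest to the state (the price); everything after the trigger runs off-chain.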
Discussing SQD utility and demand for the token
It’s important to understand that there are multiple phases in the creation of a network. At the moment, SQD is still in the phase of bootstrapping user demand. That means ensuring sufficient demand by keeping data access affordable - the same way Uber and Airbnb subsidized early usage to seed initial demand.
Therefore, all we require of data consumers is to lock SQD if the public limits don’t suffice. Once we hit our target numbers, we’ll introduce revenue streams that benefit the network and reduce the subsidies.
The revenue will be generated by charging consumers for data. Pretty straightforward.
Thanks to network effects, though, SQD’s data access solution will be orders of magnitude more cost-efficient (on top of the other benefits like decentralization & scalability). Eventually, these revenue streams will contribute to making SQD deflationary. This will be discussed in depth in the upcoming paper on Tokenomics 2.0.
“I understand upset community members; unfortunately, price action is unrelated to product quality. It’s just buying and selling, detached from the product. The price is largely speculative, but as we work on accelerating revenue streams, we expect to see some of that also impact price action.”
What are the biggest upcoming milestones?
First up is the Portal release, which is focused on developers: it will improve querying speed and onboard all our Cloud customers onto decentralized rails.
A more important milestone arrives in January with the introduction of Hot Blocks: a plugin that provides projects with real-time data, removing the need for an RPC endpoint for indexers. With our current setup, we received multiple complaints about RPC issues we simply couldn’t influence. With Hot Blocks, a backend caches the latest blocks from multiple RPC providers - guaranteeing resilience and fallback alternatives.
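The fallback idea can be sketched in a few lines. Everything here is an assumption for illustration (the provider classes and method names are invented, not SQD's implementation): the cache tries providers in order and serves the latest blocks from the first one that responds.

```python
# Hedged sketch of the Hot Blocks idea: cache recent blocks from several
# RPC providers, falling back when one fails. Class and method names are
# invented for illustration, not SQD's implementation.

class FlakyProvider:
    """A stand-in RPC provider that may be down."""

    def __init__(self, name, blocks, fail=False):
        self.name, self.blocks, self.fail = name, blocks, fail

    def latest_blocks(self, n):
        if self.fail:
            raise ConnectionError(f"{self.name} is down")
        return self.blocks[-n:]


class HotBlocksCache:
    """Serve recent blocks, trying providers in order until one responds."""

    def __init__(self, providers):
        self.providers = providers
        self.cache = []

    def refresh(self, n=3):
        for provider in self.providers:
            try:
                self.cache = provider.latest_blocks(n)
                return provider.name  # report which provider served us
            except ConnectionError:
                continue  # fall back to the next provider
        raise RuntimeError("all providers are down")


chain = list(range(100, 110))  # simulated chain tip: blocks 100..109
cache = HotBlocksCache([
    FlakyProvider("primary", chain, fail=True),  # simulate an outage
    FlakyProvider("backup", chain),
])

used = cache.refresh()
assert used == "backup"
assert cache.cache == [107, 108, 109]
```

Because the cache holds the latest blocks itself, consumers keep getting real-time data even while an individual provider is down.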
After that, the next big steps are:
- Onchain verification and slashing
- Opening up data provision: allowing others to contribute data sets to be queried and extracted.
“I anticipate that opening up data submission will have a big social impact as it makes more data query-able.”
All of these steps set us on the path to further decentralize the network and progress toward a state where we live up to our own web3 ideals of permissionlessness, trustlessness, and sovereignty.
Eventually, anyone should be able to get data, without needing to trust a centralized provider, only limited by their own bandwidth.