Insights from SQD’s Broadcast with M31 Capital

After announcing our partnership with M31 Capital, our two co-founders went live with M31’s David Attermann to discuss our plans and share some insights into the reasoning behind SQD’s development. 

You can view the full recording here: https://x.com/i/broadcasts/1nAKEpvkYpbxL 

Why did M31 invest in SQD? 

Normally, you see announcements of investments after the fact, with little transparency into the investors' reasoning. Not this time. During the broadcast, David from M31 shared their thesis and explained how SQD fits into it. 

In general, M31 is focused on infrastructure and middleware, and in particular the data layer. While well understood in the traditional web, the data layer remains an underrated and misunderstood segment of Web3. 

Ever since the Google Cloud partnership, SQD has been on his radar. He started doing due diligence, diving deep into the network architecture, and quickly realized that it was a completely different approach to data. Moreover, he quickly understood that once fully developed, it would enable data functionality not even possible in Web2. 

In short, David had come across exactly what VCs dream of: an investment opportunity that is unique but currently misunderstood. As he put it, SQD will provide valuable functionality in the future and, therefore, is a long-term investment. 

A decisive factor was also that SQD emphasized verifiability and designed the network with unit economics in mind to ensure that even growth in the billions would not break it or make it extraordinarily expensive to use. 

Overall, M31’s thesis aligned very closely with SQD’s values, and David shared that he knew instantly our Co-Founders were the real deal (which, regrettably, is rare in this industry).

Why is SQD excited about working with M31 Capital? 

Marcel, SQD Co-Founder, shared that it’s very rare for a VC in crypto, especially a liquid investment fund, to feature such technically sophisticated people as we encountered at M31. Since SQD is laser-focused on product, it’ll be highly beneficial to tap into the expertise of the M31 team. 

Origin Story of SQD & Differentiation 

SQD CEO and Co-Founder Dmitry Zhelezov took a trip down memory lane, sharing that SQD started three years ago as an attempt to fix a specific problem: indexing blockchain data for a video streaming platform. They quickly discovered that the bottleneck for accessing onchain data was the RPC, which took hours to sync without providing any flexibility. As a quick fix, they started loading data into a more efficient database, and the first prototype was born. After a hackathon win, projects started expressing interest, and Dmitry realized he had to build something that scales rather than rely on a simple database. That’s how SQD was born. 

In the long run, SQD isn’t just an indexing tool; it’s an attempt to completely overhaul the way Web3 data is accessed at scale. It combines DePIN with advances in data technology from Web2. 

Scalability 

Because the data lake is decentralized, capacity scales with every node. Each worker contributes 1 GB per second in throughput capacity, and each node is a small database in itself. As a whole, the workers can pool terabytes of data and deliver them quickly to consumers. 
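As a back-of-the-envelope illustration of that linear scaling: only the 1 GB/s per-worker figure comes from the text above; the worker counts and dataset size below are assumed purely for the arithmetic.

```python
# Back-of-the-envelope throughput for a decentralized data lake.
# Only the 1 GB/s per-worker figure is from the text; the rest is illustrative.

PER_WORKER_GBPS = 1.0  # each worker contributes ~1 GB/s of throughput

def pooled_throughput_gbps(num_workers: int) -> float:
    """Aggregate throughput scales linearly with the number of workers."""
    return num_workers * PER_WORKER_GBPS

def time_to_scan_tb(dataset_tb: float, num_workers: int) -> float:
    """Seconds to stream a dataset of `dataset_tb` terabytes at full parallelism."""
    return dataset_tb * 1000 / pooled_throughput_gbps(num_workers)

print(pooled_throughput_gbps(500))   # 500 workers -> 500.0 GB/s aggregate
print(time_to_scan_tb(100, 500))     # 100 TB dataset -> 200.0 seconds
```

The point is simply that with per-node contribution fixed, aggregate capacity is a straight multiple of the worker count.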

By tailoring the design to a specific problem, SQD achieves roughly 100x more efficient querying. It isn’t a one-size-fits-all solution like other indexers. 

Flexibility 

There are probably hundreds of data use cases we haven’t even thought of. That’s why SQD stores raw data directly, providing devs with a familiar experience while also allowing them to be creative with it. 

By putting all the tooling to handle data on the client side, we solve data access at scale without worrying about how data is encoded or the different implementations that exist. Everything is solved on the client side, where devs can split and manage data as they desire. 

Network Economics 

As David points out, this is an area that isn’t discussed enough in crypto. At SQD, though, we’ve got it sorted. The main idea is to balance supply with demand and decentralize over time. To achieve decentralization, the hardware requirements for running worker nodes need to be low enough that people keep running nodes even when rewards are small. 

Of course, the question then becomes: how do you know the right amount of tokens? You don’t. That’s why, instead of fixing inflation from the start, SQD opted for a fixed reward pool for the bootstrapping period - structured to achieve maximum decentralization. This also means some investors weren’t happy to realize that the more they delegated, the lower their rewards became. 
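The dilution effect described here can be sketched with a toy model. This is illustrative only: the pool size and the pro-rata split below are assumptions, not SQD's actual reward formula.

```python
# Toy model: a fixed per-epoch reward pool split pro-rata among delegated stake.
# The pool size and pro-rata rule are assumptions, not SQD's actual mechanism.

EPOCH_POOL = 10_000.0  # fixed tokens distributed per epoch (assumed)

def reward_rate(total_delegated: float) -> float:
    """Per-token reward: with a fixed pool, more delegation dilutes everyone."""
    return EPOCH_POOL / total_delegated

for total in (100_000, 500_000, 1_000_000):
    print(f"{total:>9} delegated -> {reward_rate(total):.4f} tokens per token")
```

With the pool fixed, a 10x increase in total delegation cuts the per-token reward by 10x, which is exactly the effect some investors were unhappy to discover.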

[At this point, Marcel mentions that people in crypto need to realize that to earn rewards one has to work - like in the real economy - and not expect to get money for just being there…]

Anyway, there is a good reason for this: it’s best for the network to have many people delegating and running worker nodes rather than just a select few. 

Once the bootstrap period is over, we’ll better understand rewards and costs and set inflation accordingly. At the same time, we’ll likely trigger a fee switch on the supply side, where people who have so far locked tokens for higher rate limits will switch to a subscription model. 

Game Plan 

The entire game plan can be found here. 

Gateway 2.0 

Currently, to access the SQD network, consumers have to bond SQD and run a gateway—basically a piece of software that acts as a proxy service. It’s sequential: you have to wait for responses. With Gateway 2.0, all of this gets smarter. It will pre-fetch and allocate data, eventually eliminating the need for an RPC on the downstream client entirely. 

For users, it’ll be a simple interface where they can stream data based on their filters in real time. Some big names will be coming as partners to facilitate this - especially from the RPC space. RPC providers will be able to join SQD as sellers streaming data with little upfront investment. Watch out for more on this 👀

Light Clients 

Another area of excitement among all the speakers is light clients. They’re already pretty hyped in Web3 for their ability to verify chains without running an entire node. On SQD, light clients refer to handling and cloning a database directly on a user’s device. 

Dmitry explains that usually Web3 is always a few steps behind the Web2 data world. 

“In Web3, we’re still stuck in the Postgres era, while Web2 has long moved on.” - Dmitry 

In that world, SQLite is the state of the art: a lightweight, production-ready database that can run on any device, and it’s already widely used in mobile apps. Dmitry uses Instagram as an example to illustrate its power. 

“When you open Instagram, even offline, you still see a feed full of things, and that’s largely thanks to local caching and storage.” - Dmitry


Trustless access, decentralization, and SQLite are a perfect combination. Once implemented, users could run indexers locally and even enrich them with private data, using secure enclaves for processing without latency. 
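To make the local-indexer idea concrete, here is a minimal sketch in the spirit of the SQLite discussion above, using Python's built-in `sqlite3` module. The table layout, field names, and sample rows are entirely hypothetical.

```python
# Minimal sketch of an on-device index cache (hypothetical schema and data).
import sqlite3

conn = sqlite3.connect(":memory:")  # on a real device this would be a local file
conn.execute("""
    CREATE TABLE transfers (
        block INTEGER,
        sender TEXT,
        receiver TEXT,
        amount REAL
    )
""")

# Pretend these rows were streamed from the data lake and cached locally.
rows = [
    (100, "0xabc", "0xdef", 1.5),
    (101, "0xabc", "0x123", 0.7),
    (102, "0x456", "0xabc", 2.0),
]
conn.executemany("INSERT INTO transfers VALUES (?, ?, ?, ?)", rows)

# Even offline, the app can answer queries instantly from the local DB.
total_sent = conn.execute(
    "SELECT SUM(amount) FROM transfers WHERE sender = ?", ("0xabc",)
).fetchone()[0]
print(total_sent)
```

This is the Instagram pattern applied to onchain data: once the rows are cached locally, queries run with zero network latency.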

While use cases are limited so far, Marcel is confident that builders will discover hundreds of them. David adds that this also allows projects to scale efficiently without the usual data infra overhead—after all, smartphones are everywhere and powerful. 

Verifiability 

The last topic the three discuss is verifiability. 

All agree that most people don’t care that much about decentralization. What matters is verifiability, especially with the rise of AI. As David explains, in many of the current cases alleging that OpenAI (or another AI company) used data without consent, the claim is nearly impossible to prove. Verifiability is lacking. 

With blockchain, there is a way to add verifiability to AI agents. And as a large data provider, SQD also treats the ability to verify as playing an important role. 


There are two parts to it: 

  • the ability for users to verify that the data providers have added correct onchain data
  • the ability to verify that the query has run correctly 

For the first, adding simple hashes might be sufficient since data is mostly public anyway. For query verification, Dmitry points out that TEEs are a promising technology as they allow low-cost computation with integrity. 
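The first idea, simple hashes over public data, can be sketched as follows. The chunk format and canonicalization scheme here are assumptions for illustration, not SQD's actual mechanism.

```python
# Sketch of data verification via published chunk hashes (format assumed).
import hashlib
import json

def chunk_hash(chunk: list) -> str:
    """Deterministic hash over a chunk of (public) onchain records."""
    canonical = json.dumps(chunk, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# The data provider publishes a hash for each chunk it serves.
published = chunk_hash([{"block": 1, "tx": "0xaa"}, {"block": 2, "tx": "0xbb"}])

# The consumer recomputes the hash over the data it actually received...
received = [{"block": 1, "tx": "0xaa"}, {"block": 2, "tx": "0xbb"}]
assert chunk_hash(received) == published  # data is intact

# ...and any tampering changes the hash and is immediately detectable.
tampered = [{"block": 1, "tx": "0xaa"}, {"block": 2, "tx": "0xff"}]
assert chunk_hash(tampered) != published
print("chunk verified")
```

Because the underlying data is public, anyone can recompute the hashes; no trusted party is needed beyond whoever anchors the published hashes.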

ZKPs, while more expensive, can also be used in a setup with optimistic sampling, where disputes are only submitted when a user is 99% sure that an output is invalid. 
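The optimistic-sampling idea can be sketched like this: re-execute a small random sample of results locally and only escalate to an expensive dispute when mismatches appear. The sampling rate and mechanics below are assumptions, not a specification of SQD's protocol.

```python
# Illustrative sketch of optimistic sampling (mechanics assumed): spot-check a
# random fraction of a worker's results and only dispute on clear mismatches.
import random

def spot_check(results, reexecute, sample_rate=0.05, seed=0):
    """Return indices where the worker's result disagrees with a local re-run."""
    rng = random.Random(seed)
    sampled = [i for i in range(len(results)) if rng.random() < sample_rate]
    return [i for i in sampled if results[i] != reexecute(i)]

honest = list(range(1000))
mismatches = spot_check(honest, reexecute=lambda i: i)
print(len(mismatches))  # 0: no reason to pay for an expensive ZK dispute

# A worker returning wrong answers is caught by the same cheap sampling.
caught = spot_check(honest, reexecute=lambda i: i + 1)
print(len(caught) > 0)  # True
```

The economics follow: cheap sampling covers the common honest case, and the costly ZK machinery is reserved for the rare dispute where the consumer is already confident of fraud.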


For any further questions about this partnership or SQD’s game plan, please join us on Discord.