Could Zus be "too good to be true" in the decentralized storage market?
A modular blockchain built for storage, erasure coding for efficient storage and fast retrieval, enterprise-quality storage services, superior SLA terms, and a better compensation plan.
Five years of development, 50% of tokens already in circulation, two Ph.D. founders (one of them an academic professor), and finally, Zus is coming to mainnet in a few weeks!
Zus is a player in the dStorage ecosystem, focusing on hot storage and promising to offer enterprise-grade storage quality.
Using a modular-blockchain-style approach in the storage niche
Many current dStorage providers are PoW systems that tie a storage provider's ability to mine a block to the amount of data it stores. This indirectly couples data value to computing power spent, which can amplify node churn and thereby data loss.
However, latecomers to the field, such as the Zus network, are capitalizing on advancements in modular blockchains to optimize their services.
Consensus on the Zus network, by contrast, leverages a voting-based proof-of-stake (PoS) algorithm.
The Zus network is designed to decouple the blockchain layer from the storage layer, enabling a scalable, high-performing system for enterprise applications — not only for storage but also for future cloud-computation services.
Their approach results in four types of participants collaborating to achieve the network's goals:
Miners who produce and validate blocks,
Sharders who store the blockchain,
Blobbers who store the data (Storage providers)
Validators (often other Blobbers) who verify the challenge responses submitted by Blobbers
As a result, we get a fast network with less computational burden on any one participant (especially SPs), which lowers the entry barrier and makes the system more decentralized. It also scales to any number of storage providers and won't struggle as the blockchain grows and handles large transaction volumes. In short, the chain is purpose-built for its storage function.
Erasure coding as a more efficient storage method that also helps with fast retrievals
Popular protocols such as Filecoin and Arweave use replication to ensure data safety, but this creates heavy redundancy (Arweave at some points had 200 replicas of its data set!), making the storage system less efficient and thus less cost-effective in the long run.
Zus instead uses erasure coding (EC): only a partial, encoded segment of a file is present on any server node at a time, so even if a server is compromised, user data stays safe.
With EC 10/15 (10 data + 5 parity shards), any 10 of the 15 shards can reconstruct the data, so the scheme tolerates up to 5 node failures. It has the same two-thirds (~67%) efficiency as a 2+1 parity scheme, but the chance of more than 5 failures is exponentially smaller than the chance of more than 1. By comparison, 6x duplication would also tolerate 5 failures but with only ~17% efficiency!
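The "exponentially smaller" claim can be sanity-checked with a quick binomial calculation. A minimal sketch, assuming independent node failures with an illustrative per-node failure probability (the real rates aren't given in the article):

```python
# Sketch: probability of losing data under independent node failures.
# The per-node failure probability p is an assumed illustrative figure.
from math import comb

def loss_probability(n: int, k: int, p: float) -> float:
    """Data is lost when more than n - k of the n nodes fail,
    i.e. fewer than k survivors remain to reconstruct."""
    return sum(comb(n, f) * p**f * (1 - p)**(n - f)
               for f in range(n - k + 1, n + 1))

p = 0.05  # assumed per-node failure probability
ec_10_of_15 = loss_probability(15, 10, p)  # tolerates 5 failures
mirrored_3x = loss_probability(3, 1, p)    # 3 replicas, tolerates 2 failures

print(f"EC 10/15 loss probability: {ec_10_of_15:.2e}")
print(f"3x replication loss probability: {mirrored_3x:.2e}")
```

Even against 3x replication (which uses roughly twice the raw storage of 10/15 EC), the EC scheme comes out ahead on loss probability at this failure rate.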
Another example for better understanding:
Compared to 3-way replication, a (5+3) EC scheme tolerates even more node failures than triplication (three versus two) while improving storage efficiency by roughly 88% (62.5% versus 33%).
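These efficiency figures are straightforward arithmetic on the shard counts from the examples above; a small helper makes the comparison explicit (replication is modeled as 1 data shard plus N-1 copies):

```python
# Sketch: storage efficiency (useful data / raw capacity) for a few schemes.
def efficiency(data_shards: int, parity_shards: int) -> float:
    return data_shards / (data_shards + parity_shards)

print(f"2+1 parity:     {efficiency(2, 1):.1%}")   # ~66.7%
print(f"EC 10+5:        {efficiency(10, 5):.1%}")  # ~66.7%
print(f"EC 5+3:         {efficiency(5, 3):.1%}")   # 62.5%
print(f"3x replication: {efficiency(1, 2):.1%}")   # ~33.3%
print(f"6x replication: {efficiency(1, 5):.1%}")   # ~16.7%
```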
That said, everything comes with a trade-off: while EC beats replication on storage efficiency, it requires reconstruction on the client's end, which adds some compute latency. This is offset by parallel downloads and other aspects of their design, so overall Zus still provides a strong solution compared to its peers.
Not only does it offer efficient, cost-effective storage — it also helps with fast retrievals!
For dStorage solutions to compete with the AWS S3 business, they need to offer at least the same quality of hot storage, whose bottleneck is fast retrieval/access to the data.
By utilizing Erasure-Coding (EC) techniques, users can simultaneously access multiple storage nodes and construct/reconstruct the data from them.
Access speeds are excellent: the user's device reconstructs the data from the 10 fastest responders (all running enterprise-quality hardware). Writes are committed and witnessed on the blockchain in seconds; there is no lag waiting for full redundancy to be established.
Access speeds are potentially much faster than the cloud because multiple nodes are accessed in parallel.
Combined with the separation of the blockchain layer from the storage layer, this should provide fast retrieval for clients.
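The fastest-responders pattern described above can be sketched in a few lines. This is an illustrative simulation, not Zus code: `fetch_fragment` is a hypothetical stand-in for a network request to one blobber, with random latency:

```python
# Sketch (assumed design): request all 15 fragments in parallel and keep
# the first 10 that arrive; slow or failed nodes are simply ignored.
import concurrent.futures
import random
import time

def fetch_fragment(node_id: int) -> tuple[int, bytes]:
    """Hypothetical stand-in for a network call to one blobber."""
    time.sleep(random.uniform(0.01, 0.1))  # simulated latency
    return node_id, f"fragment-{node_id}".encode()

def retrieve(total: int = 15, needed: int = 10) -> list[tuple[int, bytes]]:
    fragments = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=total) as pool:
        futures = [pool.submit(fetch_fragment, i) for i in range(total)]
        for fut in concurrent.futures.as_completed(futures):
            fragments.append(fut.result())
            if len(fragments) == needed:  # enough shards to erasure-decode
                break
    return fragments

shards = retrieve()
print(f"collected {len(shards)} shards for reconstruction")
```

The key property is that overall latency tracks the 10th-fastest node rather than the slowest one.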
Competing at the hot-storage level and eliminating the pain points of centralized solutions
Zus is also trustable and verifiable, so the user can confirm they get at least the same service as their traditional centralized solution, if not better.
But the main pain point users experience is data-retrieval fees: every access or download costs extra money.
Zus has a nice solution here: Blobbers' block rewards depend on their weight, which consists of four factors — their $ZCN collateral, price, data stored, and data served.
Two of these relate to data retrieval:
On one hand, Blobbers are incentivized to offer the cheapest read/egress price, since this earns more weight in block rewards. Blobbers therefore compete to offer cheaper reads, with rewards maximized as egress prices approach zero.
On the other hand, the more reads a Blobber serves, the greater its block-reward weight. Blobbers are thus incentivized to attract a large number of reads — which only happens when they offer fast, cheap reads that clients who use their data frequently will choose.
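To make the incentive concrete, here is a hypothetical sketch of a four-factor weight function. The real Zus formula is not public in this article, so the function shape, names, and scaling below are all assumptions — only the direction of the incentives (cheaper reads and more traffic earn more weight) comes from the text:

```python
# Hypothetical sketch of the four-factor reward weight described above.
# Function shape and scaling are assumptions, not the actual Zus formula.
def blobber_weight(stake_zcn: float, read_price: float,
                   data_stored_gb: float, data_served_gb: float) -> float:
    # Cheaper reads earn more weight; weight peaks as read_price approaches 0.
    price_factor = 1.0 / (1.0 + read_price)
    return stake_zcn * price_factor * (data_stored_gb + data_served_gb)

# A blobber serving cheap, frequent reads outweighs an expensive, idle one:
busy_cheap = blobber_weight(stake_zcn=1000, read_price=0.0,
                            data_stored_gb=500, data_served_gb=800)
idle_pricey = blobber_weight(stake_zcn=1000, read_price=0.5,
                             data_stored_gb=500, data_served_gb=100)
print(busy_cheap > idle_pricey)  # True
```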
More empowerment for the client through verifiable adherence to the SLA and a superior compensation plan
To understand this better, we need to look at the main pain points users currently experience with centralized solutions:
For security concerns and Service outages:
Beyond having storage providers distributed around the globe, and the option to encrypt data so no one — not even the SPs themselves — can access it, Zus, much like Filecoin, requires storage providers to stake collateral. They get slashed if they fail their obligations to the data, which is verified through continuous on-chain challenges checking whether each provider honors the contract and keeps the data safe. You get real safety here, and you can verify for yourself that the data you signed up for is being handled properly.
Centralized cloud providers don't offer that: you simply have to trust their promise to act in good faith and rely on the brand's reputation.
As for the SLA and compensation for outages:
Additionally, Zus has a great feature: for every failed challenge, that challenge's reward goes back to the client! You literally pay only for what you signed up for — the periods when the data was successfully verified as safe — and any failed challenge gets you your money back.
That is superior compensation and SLA honoring that hasn't yet been deployed elsewhere in the dStorage space.
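The per-challenge refund mechanic can be sketched as simple accounting. This is an assumed model of the behavior described above, not actual smart-contract code — challenge results and a flat per-challenge reward are illustrative:

```python
# Sketch (assumed mechanics): a failed challenge returns that challenge's
# reward to the client instead of paying it to the blobber.
def settle_challenges(results: list[bool], reward_per_challenge: float):
    blobber_payout = sum(reward_per_challenge for ok in results if ok)
    client_refund = sum(reward_per_challenge for ok in results if not ok)
    return blobber_payout, client_refund

# 10 challenges at 1 ZCN each; 2 failed -> 2 ZCN refunded to the client.
payout, refund = settle_challenges([True] * 8 + [False] * 2, 1.0)
print(payout, refund)  # 8.0 2.0
```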
Also, the amount of $ZCN staked is one factor in a Blobber's block-reward weight: the more staked, the more rewards. This staking acts as a safety margin implemented to better protect the data.
That's power for the end user that doesn't natively exist in web2.
Putting the above three points together, Zus can be considered a candidate to offer the same AWS hot-storage quality while eliminating its pain points at the same time.
Do latecomers benefit from the latest progress & development ideas in the emerging markets?
They borrow staked collateral with slashing from Filecoin,
A modular blockchain for storage from the L1 space,
And EC from Sia and Storj.
So a question worth asking: in emerging markets like crypto, do latecomers benefit from building on the latest developments and ideas, or do the early birds keep the upper hand in spread and dominance?
Zus and devs’ GitHub activity
For many weeks, Zus was consistently at the top of GitHub's protocol-activity rankings. Although that alone doesn't indicate much, the changes were in the core code, not README or changelog files.
This was a useful check on the team's activity over the past four years of development. They raised their presale ($39M) from their community in early 2019, with no VCs participating, so keeping the community engaged while building over several years was a real challenge. It was nice to see such a community sticking around and supporting the project through all that time — something that counts in their favor.
There are a few features worth mentioning that make Zus an attractive solution for certain use cases:
-For encryption
Among distributed storage platforms, the ability to encrypt files is not unique to Zus. However, Zus offers an incredibly flexible solution, giving users the choice to encrypt each file individually — and even to privately share those encrypted files without Blobbers being able to see the data.
-Forkable chains
Specialized forked chains, with specific adaptations or tailored to specific industries, can be deployed later on.
Almost every dStorage provider will eventually offer computing services beyond storage and compete in the broader cloud-computing industry. Forkable chains suited to particular niches would be an important feature that attracts builders to Zus and grows its ecosystem.
-Allocation pools
A good feature: any client can deposit into an allocation's write pool to sustain the stored data. So if an allocation contains publicly available documents used by everyone, it can be funded over time regardless of who owns it.
This suits public goods: anyone can privately fund the storage of any data.
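A minimal sketch of this write-pool idea, assuming a hypothetical API — the class, method names, and fee model below are illustrative, not the actual Zus smart contract:

```python
# Sketch (hypothetical API): a write pool that anyone can top up, so public
# data stays funded regardless of who owns the allocation.
class WritePool:
    def __init__(self) -> None:
        self.balance = 0.0
        self.deposits: dict[str, float] = {}

    def deposit(self, funder: str, amount: float) -> None:
        """Any account, owner or not, can fund continued storage."""
        self.deposits[funder] = self.deposits.get(funder, 0.0) + amount
        self.balance += amount

    def charge_storage(self, cost: float) -> bool:
        """Storage fees are drawn from the pool while funds last."""
        if self.balance < cost:
            return False
        self.balance -= cost
        return True

pool = WritePool()
pool.deposit("owner", 5.0)
pool.deposit("community-member", 10.0)  # a non-owner keeps the data alive
print(pool.charge_storage(12.0), pool.balance)  # True 3.0
```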
In Brief;
The current issues of dStorage protocols, and how Zus can be the promise we are waiting for:
-Centralization, pricing, and client-control issues in Storj,
-Replication and heavy redundancy (not storage-efficient) in Filecoin & Arweave,
-Future-scalability issues in most of them, plus CDN and retrieval markets that make them unsuitable for hot storage,
-Ordinary personal computers in Sia, which can't deliver enterprise-quality service,
And many more.
Zus has solutions to almost all of them — in theory. To wrap up its features:
-Runs on an ultra-fast blockchain
-Its own chain technology (not sitting on top of someone else's blockchain)
-High redundancy through erasure-coded 'striped' storage (can withstand several points of failure, far superior to most cloud providers)
-Super-fast access potential (via parallel downloads from multiple 'striped' Blobbers) = fast retrieval while incentivizing free retrievals at the same time
-Forkable chains (specialized forked chains with specific adaptations or for specific industries can be deployed)
-A whole ecosystem with smart-contract functions and dApps built on top of it
They are releasing their mainnet in a few weeks, starting with a minimum of 3 PB of storage capacity provided by the 100+ miners from their testnet.
One of the main criticisms of Zus is its long development period: they have promised to deliver mainnet at several dates since 2021 and keep announcing new timelines and delays. It's somewhat understandable if we put things in perspective and look at how long other large, credible projects have taken — Filecoin took about 8 years, Sia about 7.
They have around 20 devs on a team of fewer than 40 members, so this is not a big VC-funded operation able to deliver on its big promise quickly.
Another caveat: they can promise a perfect solution in theory, but we have yet to see how it plays out in practice and under stress tests.
Many thanks to Sculptex for his help. I really wish them good luck in eliminating dStorage obstacles.