dcSpark CTO explains why Cardano is ‘one of the worst blockchains for storing data’

On Saturday (August 13), Sebastien Guillemot, CTO of blockchain company dcSpark, said L1 blockchain Cardano ($ADA) is “definitely one of the worst blockchains for storing data,” and went on to explain why he thinks so.

In case you’re wondering what dcSpark does, the development team lists its main goals as:

  • “Extending Blockchain Protocol Layers”
  • “Implement First Class Ecosystem Tools”
  • “Develop and release user-facing apps”

The firm was co-founded in April 2021 by Nicolas Arqueros, Sebastien Guillemot and Robert Kornacki. dcSpark is best known in the Cardano community for the Milkomeda sidechain project.

On Friday (August 12), a spokesperson for Cardano sent out a tweet that made it sound like Cardano is a great blockchain for storing large amounts of data on-chain.

However, the dcSpark CTO responded that Cardano’s current design makes it one of the worst blockchains for storing data:

Really strange tweet. Cardano is definitely one of the worst blockchains for storing data. This was an explicit design decision to avoid blockchain bloat, and it is the root cause of many design decisions like Plutus data 64-byte chunks, the off-chain pool & token registry, etc.

Vasil improves this with inline datums, but they are indirectly discouraged due to the large cost of using them. I agree that the blockchain providing data availability is an important feature, but having a good solution will require changes to the existing protocol.
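For context on the "64-byte chunks" Guillemot refers to: individual bytestrings in Cardano transaction metadata (and in Plutus Data) are capped at 64 bytes, so any larger payload has to be split into chunks on the way in and reassembled off-chain on the way out. A minimal sketch of that pattern (the function names here are illustrative, not from any Cardano library):

```python
# Cardano caps individual metadata/Plutus Data bytestrings at 64 bytes,
# so larger payloads must be chunked before submission and reassembled
# off-chain. This is an illustrative sketch, not a real Cardano API.

CHUNK_SIZE = 64

def to_chunks(payload: bytes, size: int = CHUNK_SIZE) -> list:
    """Split a payload into chunks no larger than 64 bytes each."""
    return [payload[i:i + size] for i in range(0, len(payload), size)]

def from_chunks(chunks: list) -> bytes:
    """Reassemble the original payload off-chain."""
    return b"".join(chunks)

data = b"x" * 200              # a 200-byte payload
chunks = to_chunks(data)
assert all(len(c) <= CHUNK_SIZE for c in chunks)
assert from_chunks(chunks) == data
print(len(chunks))             # 200 bytes -> chunks of 64 + 64 + 64 + 8
```

This is exactly the kind of indirection the tweet describes: the constraint keeps individual on-chain values small, at the cost of pushing chunking and reassembly logic onto every application that wants to store larger data.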


Then another $ADA holder asked Guillemot whether this design decision could make life more difficult for teams building roll-up solutions (such as Orbis), and got the following response:

Yes, trying to provide data availability for use cases like rollups, Mithril, input endorsers, and other similar data-heavy use cases while keeping L1 thin (unlike Ethereum, which optimizes for people just dumping data) is one of the big technical challenges…

On August 1, IOG co-founder and CEO Charles Hoskinson released a short video, explaining why the Vasil hard fork had been delayed for a second time and providing a status update regarding the testing of the Vasil protocol update.

Hoskinson said:

Originally we planned to have the hard fork with 1.35, and that is what we sent to the testnet. The testnet was hard forked under it. Then there was a lot of testing going on, both internal and community. A collection of bugs was found: three separate bugs that resulted in three new versions of the software. And now we have 1.35.3, which looks set to be the version that will survive the hard fork and upgrade to Vasil.

There is a big retrospective that will be done. The long and short of it is that the ECDSA primitives, among a few other things, are not quite where they should be. So that feature has to be set aside, but all the remaining features (CIP 31, 32, 33, 40 and other things like that) are pretty good.

So they’re in advanced stages of testing, and then a lot of downstream components need to be tested, like DB Sync and the serialization library, and these other things. A lot of that testing is currently underway. As I mentioned before, this is the most complicated upgrade to Cardano in its history, because it includes both changes to the Plutus programming language and changes to the consensus protocol, plus a number of other things; it was a very loaded release. It had a lot going for it, and as a result it’s one that everyone had a vested interest in testing thoroughly.

The problem is that every time something is detected, you have to fix it, but then you have to verify the fix and go back through the entire test pipeline. So you get to a situation where you’re functional, but then you have to test and when you test, you might discover something and then you have to fix it. And then you have to go back through the entire test pipeline. So this is what is causing release delays…

I was really hoping to get it out in July, but you can’t do that when you have a bug, especially one involving consensus or serialization or related to a particular problem with transactions. You just have to clear it, and that’s just the way it goes. But all things considered, things are moving in the right direction, steadily and systematically…

The set of things that could go wrong has become so small, and now we’re kind of in the final stages of testing in that respect. So unless something new is discovered I don’t think we’ll have any further delays and it’s just getting people upgraded…

And hopefully we should have some positive news soon as we get deeper into August. The other side of that is that no problems were detected with pipelining, and none with CIP 31, 32, 33 or 40, throughout this process, which is also very positive news. Given that they have been repeatedly tested internally and externally by developer QA firms and our engineers, there’s a pretty good probability that these features are bulletproof and tight. So just a few fringes to sort out, and hopefully we’ll be able to come up with an update in the middle of the month with more news.


Featured image via Pixabay
