Elrond, The 27,000 Transactions in 36 Bytes Blockchain


There’s a search going on for a scalable blockchain amid capacity constraints in both bitcoin and ethereum, with fees rising to as much as $7 per transaction in bitcoin and nearing $1 on average per transaction in ethereum.

Both chains have plans to address these matters, but other chains have been working on it for years with what can be called generation three blockchains now starting to come to market.

Generation one was of course bitcoin itself and the early attempts to improve it, which began around 2011 when Litecoin launched and ended around 2014 when Ripple’s XRP came out.

Generation two blockchains began with ethereum in 2015 and ended arguably in 2017, when many other chains tried to improve upon eth while incorporating smart contracts and a virtual machine: the Turing complete wave.

Generation three is only beginning and thus it is far too early to generalize, but Elrond might be typical of this wave, which might be called the scaling chains.

Their devs were kind enough to have a chat, which we replicate below in full, starting with a general description previously given by Camil Ioan Bancioiu, a research engineer at Elrond. In describing Elrond, he stated:

“It’s easier if you imagine each shard having its own blockchain, independent of the other shards. ‘Block height’ refers to the latest nonce of a block in a shard – this means that there are exactly that many blocks in that shard, regardless of how many rounds and epochs have passed.

In a shard, blocks are counted sequentially, without skipping. And the metachain will notarize blocks from shards as they come, which means that the shards aren’t all forced to produce a block every round. Shards are allowed to miss a round from time to time.

If a shard is producing no blocks at all for a while, the metachain will declare that shard to be ‘stuck’, and might force a premature ‘end of epoch’, causing a reshuffling.

As you can see, the Protocol is designed to handle irregularities. In this specific situation, even network latencies could cause missing one or two blocks in a specific shard. This means that enforcing a rule requiring ‘one block per round in each shard’ would have made the protocol fragile. But we’re building the opposite of that, obviously.

Again, it helps to think of shards as having independent blockchains, while the metachain acts as a rather ‘hands-off management team’.”
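
His ‘stuck shard’ rule maps to a simple check. Below is a minimal sketch in Go; the threshold and all names are our own hypothetical illustration, not Elrond’s actual code:

```go
package main

import "fmt"

// stuckThreshold is a hypothetical number of consecutive rounds a shard may
// go without a notarized block before the metachain declares it stuck.
const stuckThreshold = 10

// shardTracker is an illustrative stand-in for the metachain's view of one shard.
type shardTracker struct {
	shardID        uint32
	lastBlockRound uint64 // round of the last block notarized for this shard
}

// isStuck reports whether the shard has gone too long without a block.
// Missing a round or two (e.g. network latency) is tolerated by design.
func isStuck(s shardTracker, currentRound uint64) bool {
	return currentRound-s.lastBlockRound > stuckThreshold
}

func main() {
	shard := shardTracker{shardID: 2, lastBlockRound: 100}
	fmt.Println(isStuck(shard, 102)) // false: a missed round or two is fine
	fmt.Println(isStuck(shard, 120)) // true: metachain may force an early end of epoch
}
```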

So presumably this metachain handles the locking and unlocking of tokens when transferring between shards?

Isn’t the metachain a big bottleneck here? Which I guess you address with daily pruning or am I mistaken?

Robert Sasu, Elrond Core Developer: The metachain does not do lock and unlock. Cross-shard transactions are done through miniblocks, which are first executed on the sender shard, notarized by the metachain, then executed on the destination. The metachain keeps track only of miniblock headers (one hash for 1,000 transactions, for example), not the transactions themselves.
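
As a sketch, the flow he describes could be modeled like this in Go, where every type and function name is hypothetical rather than Elrond’s actual API:

```go
package main

import "fmt"

// miniblock is a hypothetical stand-in for a batch of cross-shard transactions.
type miniblock struct {
	senderShard   uint32
	receiverShard uint32
	txHashes      [][]byte // hashes of the transactions grouped into this miniblock
}

// processCrossShard walks the three steps Sasu lists: execute on the sender
// shard, notarize only the header on the metachain, execute on the destination.
func processCrossShard(mb miniblock) {
	fmt.Printf("shard %d: execute %d transactions\n", mb.senderShard, len(mb.txHashes))
	fmt.Println("metachain: notarize the miniblock header only, not the transactions")
	fmt.Printf("shard %d: execute the same miniblock on the destination\n", mb.receiverShard)
}

func main() {
	mb := miniblock{senderShard: 0, receiverShard: 3, txHashes: make([][]byte, 1000)}
	processCrossShard(mb)
}
```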

How does the metachain notarize them?

Meta does not see the transactions. It tracks only the miniblock headers. One miniblock header contains the source shard ID, the receiver shard ID and the miniblock hash: 36 bytes.

One miniblock contains N transaction hashes. These are sent from one shard to another. They are not sent to Meta.
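
The interview does not give the exact field widths, but one layout consistent with the quoted 36 bytes is a 32-byte hash plus two 2-byte shard IDs; this is our assumption, and Elrond’s real structures may differ:

```go
package main

import (
	"fmt"
	"unsafe"
)

// miniblockHeader is one hypothetical layout that sums to the quoted 36
// bytes: a 32-byte hash plus two 2-byte shard IDs. The actual field widths
// in Elrond's implementation may differ.
type miniblockHeader struct {
	hash          [32]byte // hash of the miniblock holding the transaction hashes
	senderShard   uint16
	receiverShard uint16
}

func main() {
	fmt.Println(unsafe.Sizeof(miniblockHeader{})) // prints 36
}
```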

Does the meta then prune these miniblock headers?

All nodes that are not history nodes prune all data older than two epochs.
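
That pruning rule is simple enough to state in a few lines; a minimal sketch, with hypothetical names:

```go
package main

import "fmt"

// prunable applies the rule quoted above for non-history nodes: data older
// than two epochs is discarded.
func prunable(dataEpoch, currentEpoch uint32) bool {
	return currentEpoch > dataEpoch+2
}

func main() {
	fmt.Println(prunable(5, 7)) // false: exactly two epochs old, still kept
	fmt.Println(prunable(5, 8)) // true: older than two epochs, pruned
}
```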

What are history nodes?

Mihai Iuga, aka DrDelphi, software engineer (and haxor?) who broke our wordpress by using a fancy nick font: nodes that keep the entire history.

And are they necessary for security or do you not really need them?

Sasu: They are not necessary for security.

Iuga: They are necessary for explorers and services like Infura, for example.

And Coinbase?

Sasu: From the perspective of the security of the network, the validators and the correctness of state transfers, history nodes are not necessary.

If I wanted to chainsplit elrond, would I need to be a history node?

Sasu: One node cannot chainsplit a decentralized network.

Do you get my point though: if I wanted to split on my own for fun, where I can run my own network accessible to all, I’d need to be a history node? For fun, I mean, BCH or ETC style.

After a long pause: Presuming the answer is yes, was that 36 bytes per transaction or per 1,000 transactions?

Sasu: 36 bytes per miniblock; one miniblock can contain up to 27K transactions.

You can run your own network at any time, with your own validators and chain.
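
A quick back-of-envelope on that claim: if one 36-byte header stands in for up to 27,000 transactions, the metachain’s notarization overhead works out to a small fraction of a byte per transaction.

```go
package main

import "fmt"

func main() {
	const headerBytes = 36.0
	const maxTxPerMiniblock = 27000.0

	// Metachain notarization cost per transaction at a full miniblock:
	fmt.Printf("%.5f bytes per transaction\n", headerBytes/maxTxPerMiniblock) // ~0.00133
}
```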

So if I wanted to split, I’d need only these 36 bytes? I don’t need any further data from the shards?

Sasu: What do you mean by split? What is the goal?

ETC or BCH, or here Elronidon. So the whole network basically like BTC became two with BCH and then loads more with BTG etc.

It is much harder here. You would need to make a new setup from a selected checkpoint with your own validators. Make a new node configuration and chainID, run hundreds of machines.

I don’t plan to actually split; I’m just trying to understand what the tradeoff here is, and primarily what bytes I’d need. Obviously these 36 bytes, but would I need other historic data as well?

You would need the complete state of the blockchain at a certain checkpoint time: state tries from all the shards, cross-shard transactions, miniblocks, shard headers, validator info.
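
As an illustration only, that checkpoint shopping list might be collected into something like the following, with placeholder types standing in for Elrond’s actual structures:

```go
package main

import "fmt"

// forkCheckpoint is an illustrative container for the data Sasu lists as
// needed to split from a checkpoint. The field types are placeholders, not
// Elrond's actual structures.
type forkCheckpoint struct {
	stateTries    map[uint32][]byte // serialized state trie per shard, keyed by shard ID
	crossShardTxs [][]byte          // in-flight cross-shard transactions
	miniblocks    [][]byte
	shardHeaders  [][]byte
	validatorInfo []byte
}

func main() {
	cp := forkCheckpoint{stateTries: make(map[uint32][]byte)}
	fmt.Println(len(cp.stateTries)) // 0: an empty checkpoint, to be filled per shard
}
```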

So checkpoint times are daily, right? Roughly the equivalent of two epochs? Why do you say I’d need to ‘run hundreds of machines’?

Checkpoints are at every epoch, but if you change the code you can do checkpoints at other times as well. At chainsplit you would need to rewrite the BLS keys of every node to ones which you own. You would need to change configurations in order to run on a smaller number of nodes. At minimum, one node per shard would be a must.
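
Put concretely, the fork configuration he outlines might be sketched like this; every field name here is our hypothetical illustration:

```go
package main

import "fmt"

// forkConfig is a hypothetical sketch of what a basement chainsplit would
// have to rewrite, per Sasu: new BLS keys you own, a new chainID, and a
// smaller node count, with at least one node per shard.
type forkConfig struct {
	chainID       string
	numShards     uint32
	blsKeysByNode map[string]string // node name -> replacement BLS public key you own
}

// valid checks the one hard floor mentioned: at least one node per shard,
// plus one for the metachain.
func (c forkConfig) valid() bool {
	return uint32(len(c.blsKeysByNode)) >= c.numShards+1
}

func main() {
	cfg := forkConfig{
		chainID:   "elronidon-1", // the joke fork name from earlier in the chat
		numShards: 5,
		blsKeysByNode: map[string]string{
			"shard0": "...", "shard1": "...", "shard2": "...",
			"shard3": "...", "shard4": "...", "meta": "...",
		},
	}
	fmt.Println(cfg.valid()) // true: 6 nodes cover 5 shards plus meta
}
```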

And how many shards are there? Also are you basically saying I’d need to be rich to chainsplit, some kid in his basement can’t really?

You need to know a little bit of coding and configuration to chainsplit in the basement. Currently Battle of Nodes is running with 5 shards plus meta.

Yes, I was thinking of full utilization of the network, or as Peter Todd would put it, would you need datacenter nodes?

Coding skills assumed of course, with the point being more whether you need rich-level or corporate-level resources to chainsplit.

If skilled enough, it can be done on one PC running multiple instances.

I think you claim Visa levels, that’s 27k txs per second, so 36 bytes a second, plus you suggested you need kind of a VM per shard, so realistically, after the network has been running at max capacity for say five years, you need a datacenter to chainsplit?

It all depends on how big the state trie of each shard gets. You do not need the whole history. The state trie depends on the number of accounts and the number of smart contracts.

Jon Macormack: Visa 27k a second is new to me.

I haven’t actually checked their latest numbers.

Jon Macormack: They’re in orbit somewhere.

Sasu: Visa does 4k on average, with a peak of 50k. We demonstrated a testnet with over 65k transactions in July last year.

But that’s easy, demonstrate chainsplit after ten years at max capacity.

Jon Macormack: 4k sounds realistic.

Robert Sasu: 27K transactions is the max one block can contain, because of the blocksize config.

So is that whole blocksize 36 bytes? That’s what I’m not understanding. Can I create the whole network with just these 36 bytes, conceptually speaking?

Sasu: The whole blocksize is maximum 1MB. No, you can’t. The 36 bytes of data is what the metachain notarizes in the case of cross-shard miniblocks, but it keeps track of other things as well.

The size of the blockchain can be huge, in the tens of gigabytes. We are testing hardforks, which are a sort of chainsplit with huge states.
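
Those two figures are consistent if miniblocks carry 32-byte transaction hashes, which is our assumption rather than something stated in the interview: 27,000 hashes come to 864KB, under the 1MB cap.

```go
package main

import "fmt"

func main() {
	const txHashBytes = 32       // assumed hash size; not stated in the interview
	const maxTxPerBlock = 27000  // max transactions per block, per Sasu
	const blockSizeCap = 1 << 20 // the quoted 1MB blocksize cap

	used := txHashBytes * maxTxPerBlock
	fmt.Printf("%d hashes = %d bytes; cap %d bytes; fits: %v\n",
		maxTxPerBlock, used, blockSizeCap, used < blockSizeCap) // 864000 < 1048576
}
```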

Tens of GB at how many txs a day? Let’s say this does 100m txs; how many gigabytes would be needed to mirror just those 100m txs?

Sasu: The size of the state depends on the number of accounts. 3 million accounts are approximately 1GB of data in the Patricia merkle trie.

Accounts being addresses?

Yes. Address and corresponding data: balance, nonce, etc.

So 3 billion accounts are 10 terabytes? But that’s just one part, right, or is that all the data I’d need?

Iuga, aka DrDelphi: 3b accounts? ETH has a little over 500m after all these years.

Thanks, didn’t know they were at half a billion. You’re making it sound like for elrond it would be the same then?

Iuga: It all depends on the adoption we get.

No, it depends on whether this can scale; the demand is there.

Iuga: Of course it can.

Well, 10 TB at just 6x current eth doesn’t sound like it can.

Iuga: Robert said that 3 million accounts are approximately 1GB. 3b accounts => 1TB.
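
Taking Sasu’s figure at face value, the implied per-account footprint and the extrapolation Iuga corrects to can be checked in a few lines:

```go
package main

import "fmt"

func main() {
	// Sasu's figure: roughly 3 million accounts take roughly 1GB of state.
	const accounts = 3e6
	const stateBytes = 1e9

	perAccount := stateBytes / accounts
	fmt.Printf("~%.0f bytes per account\n", perAccount) // ~333 bytes

	// Linear extrapolation to the 3 billion accounts discussed above:
	fmt.Printf("3b accounts ~ %.0f TB\n", 3e9*perAccount/1e12) // ~1 TB, matching Iuga
}
```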

Sasu: The whole ethereum state trie is around 2GB, a little less. Even for ethereum you do not need the whole history of all the transactions to chainsplit there, or simply to synchronize.

So is this less scalable than eth?

Sasu: What? How did you arrive at this conclusion? Eth can do 15-25 transactions per second, Elrond more than 10K.

A database can do more; what use is it to them if you can’t chainsplit with fewer resources?

Sasu: A database is held by a centralized entity. Blockchains try to resolve this in the first place.

By having the ability to chainsplit, which from what you are suggesting would be way easier on eth at 3 billion accounts than on elrond.

Sasu: This is not what I said.

Iuga: By the time ETH gets to 3b accounts, petabyte SSDs will be available in any corner shop.

Sasu: The state of a blockchain depends on its ability to process more transactions. ETH is slow and will never reach it doing 15-25 tx/s. Elrond can do thousands, and can reach a much higher number of users, addresses and usage than any other.

There’s been a whole civil war over all this, Robert Sasu. I think maybe there should be a new standard for all scalability claims: demonstrate on a testnet the resource requirements of a chainsplit after emulating five years of running at max capacity.

Sasu: We are doing that at local testnet, doing hardforks. Will do it in BON as well. But I do not see this as a requirement of scalability. The MAX capacity is the actual indicator.

Well, the indicator in my view is what I said it is, but obviously you are free to your view. If it were max capacity, a database is what you’d need.

Demonstrate what I said, and the world is yours.

Sasu: So if I have a blockchain which can do 1 tx per second, and thus after 5 years at max capacity it has only 1GB of state, that is more scalable than one which does 10K transactions and reaches a state of 100 terabytes?

No, but if it is 10k txs/s at 1GB, maybe. That’s the point, isn’t it? Max compression while still having the freedom to fork off.

Iuga: I don’t get your fork fetish.

Bitcoin can do a trillion txs/s if it wanted to. Exaggerating. But it doesn’t because there was a whole civil war over the matter.

Fork fetish is: you can’t touch my money, like Justin Sun did with Steem’s. You can’t order me around, like social media is doing currently over words. Imagine over money!

If I can’t fork you, why not just have a database? If I can’t ruin your whole kingdom by some basement code, how is this decentralized?

Iuga: You don’t fork the Elrond Company. It’s about forking a chain running on 1500 nodes spread around the globe and owned by people… nobody is ordering you around.

FIN.

The blockchain space is trying to do something that plenty say is impossible, and in previous editorials, where we change our mind hopefully less often than the wind, we have said in different contexts that bitcoin isn’t so decentralized, or that eth isn’t, primarily because devs can have so much say.

Forking is a way to give the market a say, which can have its own book-length problems and complications, but without it, arguably there is no blockchain at all.

There is a reason why people go on about the DAO hack reversal, which sort of showed a potential weakness that arguably can only be solved by a stagnation of the blockchain: a removal of the ‘governors,’ the devs.

We wrote a piece a long time ago that started off as an attempt to criticize the rigidity of bitcoin core developers in particular and their supporters, but once the article was finished you were left wondering whether freezing the code isn’t actually the better way.

To eventually reach that stage, presumably you first need to solve this scaling problem, because hashed builds, as bitcoin is proposing, arguably entrench these governors’ power. Decades-long roadmaps by eth arguably do too, at least for that period. The solution in the meantime thus being fierce competition.

For in the games we play and the coins we flip, one can easily lose sight of the aim to evolve man himself, to level him up, so that the first man to launch on a private rocket is just the beginning of far bigger dreams to enpleasure our being.

It may well be that what we are trying is impossible, but hopefully there won’t be a piece of wall left without spaghetti thrown at it first, for one can see how it can be possible and one can see how it can be useful.

Copyright Trustnodes.com
