Let’s Talk Scalability: One of the Challenges of the Blockchain Trilemma

The crypto industry aims to become an alternative to the current financial system, competing with traditional payment and money-market products.

Centralized payment processors handle large transaction volumes; the Visa network, for example, averages 150 million transactions per day, roughly 2,000 transactions per second (TPS). Such performance is unattainable for current blockchain networks: PoW chains usually reach a few dozen TPS, and PoS blockchains only a few hundred.
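The arithmetic behind that comparison is easy to check; a quick sketch in Python:

```python
# Back-of-envelope check: Visa's quoted daily volume expressed as TPS.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

visa_daily_txs = 150_000_000
visa_tps = visa_daily_txs / SECONDS_PER_DAY
print(f"Visa: ~{visa_tps:,.0f} TPS")  # ~1,736 TPS, i.e. on the order of 2,000
```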

Perhaps the crypto industry, with its decentralized blockchain format, will not fully replace the current banking and financial industry but rather complement it, at least for now, given current technological capacity.

I am referring to scalability: the ability to process operations at higher speed, with shorter waiting times (communication latency) and lower cost.

Using Ethereum as a case study, the most developed ecosystem today and the pioneer of DeFi, we can say that many users have faced the disadvantages of a blockchain infrastructure that does not scale. High transaction fees, resulting from network congestion, are a disincentive for retail investors. For the average user, there is no way to justify paying USD 70 to execute a single transaction worth USD 100.

The scalability solution that this blockchain's development team has chosen is a change of consensus system, from PoW to PoS, aiming to increase the network's speed and reduce the operating costs of the nodes. ETH 2.0 is nearing its final implementation, having recently been updated with a new hard fork called Altair.

The Blockchain Trilemma

Developers seek a design with a balance between decentralization, security and scalability.

There is a first layer (L1), which we call the blockchain, which is the most secure and decentralized network but has the lowest performance.

Most blockchains sacrifice one of the three characteristics mentioned above. Bitcoin and Ethereum, the largest blockchains in the industry by capitalization, are the least scalable, prioritizing security and decentralization to different degrees.

While this was sufficient in the early years of their operation, the influx of blockchain applications has put immense pressure on Layer 1 to improve scalability.

Other blockchains, such as Binance Smart Chain, Tron, and EOS, which record better scalability, sacrifice decentralization in the trilemma.

As a solution, thinking outside the box, developers have found an option: Layer 2 (L2) solutions.

Are Layer 2 Solutions the Answer?

Above the first layer, it is possible to create an almost independent and off-chain network. This second layer is designed to scale as high as possible, and make transactions fast and cheap. 

To ensure that new blocks are propagated across the network as a whole in an efficient and secure manner, it is important that the system efficiently consumes network, storage, memory, and compute resources.

The idea is to bring computing and payment processing to the off-chain layer, to make it scalable, and then record the final state of those activities on the layer 1 blockchain. 
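A minimal sketch of this idea, with names and balances invented purely for illustration (this is not any specific L2 protocol): many transfers are processed off-chain, and only the net final state is recorded on Layer 1.

```python
# Illustrative off-chain settlement: apply many transfers in memory,
# then return only the final balances, which is what would be
# recorded on the layer 1 blockchain.
from collections import defaultdict

def settle_off_chain(initial_balances, transfers):
    """Apply transfers off-chain; return the final state to record on L1."""
    balances = defaultdict(int, initial_balances)
    for sender, receiver, amount in transfers:
        if balances[sender] < amount:
            raise ValueError(f"{sender} has insufficient funds")
        balances[sender] -= amount
        balances[receiver] += amount
    return dict(balances)  # only this final state hits the chain

# Three off-chain transfers collapse into one on-chain state update.
final = settle_off_chain(
    {"alice": 100, "bob": 50},
    [("alice", "bob", 30), ("bob", "alice", 10), ("alice", "bob", 5)],
)
print(final)  # {'alice': 75, 'bob': 75}
```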

Be it optimistic rollups, state channels, plasma or zero-knowledge rollups (zk-rollups), the goal remains the same: bypass the limitations of decentralized blockchains in their first layer.

On the Ethereum blockchain, Polygon has already come up with an acceptable Layer 2 design, as has Arbitrum. Celer Network has done so too, for both Polkadot and Ethereum.

The Lightning Network is a second-layer protocol on top of Bitcoin, designed to improve scalability by creating payment channels. Its limitations are that it is not multiparty (it only opens channels between two parties) and that it is not possible to write programs on top of it.

Cardano made a smooth transition from the earlier federated protocol, Ouroboros Classic, to Praos, through a series of changes to a protocol parameter, d, leading the ecosystem to fully decentralized block production.

Network performance affects how fast the system works as a whole, due to the volume of data transferred and the block adoption time.

More storage in the mempool (temporary storage area) can often mean better block utilization, but comes at the cost of higher delay (latency) when the system is busy.

The total size of a block is currently limited to a maximum of 64 KB, which represents a trade-off between ensuring good network utilization and minimizing transaction latencies. A single block can contain a combination of smart contract operations, native tokens (such as NFTs), metadata, and payment transactions in ADA.

Similarly, a single transaction is currently limited to a maximum of 16 KB. A single block will contain multiple transactions (at least 4, but usually many more), thus improving the overall performance of transactions.
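These size limits imply the "at least 4 transactions" figure directly:

```python
# Block and transaction size limits as quoted above.
MAX_BLOCK_KB = 64
MAX_TX_KB = 16

# Even if every transaction were at its maximum size, a block fits:
min_txs_per_block = MAX_BLOCK_KB // MAX_TX_KB
print(min_txs_per_block)  # 4, matching the "at least 4" figure
```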

Blocks are validated every 20 seconds, and the total time to produce them was set at 1 second, with a budget of approximately 50 milliseconds available for Plutus script execution. In practice, many real scripts execute in 1 millisecond or less.

The maximum throughput for simple transactions is approximately 11 transactions per second (TPS). Transactions are saved in the mempool until they are ready to be processed and included in a block. The mempool size is currently set to 128 KB, twice the current block size. 
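These figures fit together if we assume a typical simple transaction of around 300 bytes (an assumed value for illustration, not an official parameter):

```python
# Rough consistency check of the quoted throughput figures.
BLOCK_INTERVAL_S = 20          # one block every 20 seconds
MAX_BLOCK_BYTES = 64 * 1024    # 64 KB block size limit
SIMPLE_TX_BYTES = 300          # assumed size of a simple payment tx

txs_per_block = MAX_BLOCK_BYTES // SIMPLE_TX_BYTES
tps = txs_per_block / BLOCK_INTERVAL_S
print(f"{tps:.1f} TPS")  # ~10.9 TPS, close to the quoted ~11 TPS

# And the mempool is twice the block size:
MEMPOOL_BYTES = 2 * MAX_BLOCK_BYTES  # 128 KB
```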

According to current traffic, Cardano’s network uses on average about 25% of its capacity. Of course, operating near full capacity is not desirable, because at 100% the network would be saturated. The more scalable the network, the more difficult it is to saturate.

With the Alonzo hard fork in September, it is now possible to program smart contracts on Cardano and develop DApps. This evolution will bring greater demand on the network, and therefore will require scalability.

The best proposal from the IOHK developers, though not the only one, is Hydra: a second-layer solution on the extended UTxO model that allows fragmentation of the delegation space without the need to fragment the ledger.

The first version of Hydra will certainly be quite limited: a state channel must be opened with a fixed set of participants, and all participants will need to be online 24/7. They will not be able to add or remove assets from the channel; they can only close it completely. Thus the Hydra head in v1 will not be a final solution, because states will still have to be opened and closed on the main chain.

The Hydra head is in development, and perhaps a first version could be available on Mainnet in Q2 2022, so improvements will be needed along the way.

Currently Plutus scripts are expensive to run and also large in size: a single script is 5–8 KB, so a single Cardano transaction can generally only use 1–2 scripts before reaching the limit. This means that increasing the transaction size limit could allow larger batching of NFT airdrops, without allowing many additional Plutus computations.
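The 1–2 script figure follows from the sizes just mentioned; an integer division gives the upper bound:

```python
# Why only 1-2 Plutus scripts typically fit in one transaction today.
MAX_TX_KB = 16
SCRIPT_KB_RANGE = (5, 8)  # a single script is 5-8 KB

for script_kb in SCRIPT_KB_RANGE:
    # The transaction must also hold inputs, outputs and metadata,
    # so the real count is usually lower than this upper bound.
    print(script_kb, "KB script ->", MAX_TX_KB // script_kb, "scripts at most")
# 5 KB script -> 3 scripts at most
# 8 KB script -> 2 scripts at most
```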

In a congestion situation, transactions are rejected when they exceed the mempool's storage capacity or outlive their established waiting time, the TTL (Time To Live). This rejection has no cost, thanks to the eUTxO accounting model. Even so, the user experience is poor.

Wallets could implement transaction resubmission for a better user experience: even if the mempool is full and the transaction is initially rejected, the wallet would simply resubmit the operation. However, since there are currently no tools for a light wallet node to query the mempool, proper transaction resubmission is difficult to implement.

Decreasing the TTL is also not ideal, because it reduces the chance for the transaction to become a block before it times out.
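The capacity-and-TTL rejection behaviour described above can be sketched as a toy mempool; the class name, capacity, and slot numbers below are invented for illustration and are not Cardano's real values:

```python
# Toy mempool: transactions are rejected when capacity is exceeded
# or when their TTL (expressed here as an expiry slot) has passed.

class Mempool:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.txs = []  # list of (tx_id, size_bytes, ttl_slot)

    def used(self):
        return sum(size for _, size, _ in self.txs)

    def submit(self, tx_id, size, ttl_slot, current_slot):
        self.expire(current_slot)
        if current_slot > ttl_slot:
            return False   # already past its TTL
        if self.used() + size > self.capacity:
            return False   # mempool full: rejected, but at no cost to the user
        self.txs.append((tx_id, size, ttl_slot))
        return True

    def expire(self, current_slot):
        # Drop transactions whose TTL has elapsed.
        self.txs = [t for t in self.txs if t[2] >= current_slot]

pool = Mempool(capacity_bytes=1000)
print(pool.submit("tx1", 600, ttl_slot=50, current_slot=10))  # True
print(pool.submit("tx2", 600, ttl_slot=50, current_slot=10))  # False: full
print(pool.submit("tx3", 600, ttl_slot=50, current_slot=60))  # False: expired
```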

The Fee Market Model

One way to alleviate the problem without increasing the transaction and/or block size could be to order transactions in the mempool using a fee market.

On Cardano, transactions are currently processed using the FIFO method: first in, first out.

Most blockchains operate on a “pay or shut up” system: if there is congestion, the transaction with the highest network fee is prioritized.
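The contrast between FIFO ordering and a fee market can be shown in a few lines; the transaction IDs and fee values are made up for illustration:

```python
# Two ways to pick which pending transactions make it into the next block.
import heapq

pending = [  # (arrival_order, fee, tx_id) -- illustrative values
    (1, 0.17, "tx_a"),
    (2, 0.90, "tx_b"),
    (3, 0.20, "tx_c"),
]
BLOCK_CAPACITY = 2  # only two transactions fit

# FIFO (Cardano today): earliest arrivals first, fees irrelevant.
fifo = [tx for _, _, tx in sorted(pending)[:BLOCK_CAPACITY]]
print(fifo)  # ['tx_a', 'tx_b']

# Fee market: highest fee first; tx_a waits despite arriving first.
fee_market = [
    tx for _, _, tx in heapq.nlargest(BLOCK_CAPACITY, pending, key=lambda t: t[1])
]
print(fee_market)  # ['tx_b', 'tx_c']
```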

Rewards are funded partly by the fixed supply of the reserve, which is issued on a decreasing schedule, and partly by network fees. Since fees do not grow with the amount of traffic (which is limited by scalability), the return from delegation and participation in network consensus decreases over time (currently the average return is below 5%).

Alonzo includes a fee auction system, but there is no software yet to support it (other than the ledger), which could be used for a fee market on L1.

If the value of the fees were increased, the rewards would increase, and a fee market model would achieve this fairly.

This would create an incentive for delegation and participation in network consensus, but it would raise costs, impacting scalability.

Increasing the speed of data transmission would prevent congestion, so fees would not be bid up. It is a chicken-and-egg problem.

As you can see, the solutions are neither unique nor easy, but developers are constantly innovating in this industry, which is as new as it is promising.

In a future article I will explain more details about the scalability plans for Cardano.
