T8493

Ripple: The Most (Demonstrably) Scalable Blockchain


http://highscalability.com/blog/2017/10/2/ripple-the-most-demonstrably-scalable-blockchain.html

 

Quote

 

10x+ Throughput Chronology

When performance testing began in February of 2015, the XRP Ledger sustained 80 transactions per second. Today, it's up to 1500. rippled was designed for scalability from the start, and the underlying architecture has remained the same; incremental improvements have been made over time as opportunities were uncovered.

2/25/2015: 80/s
At the time, transaction signature verification was being performed within the master lock. This was turned off in a special test-only branch.
3/2/2015: 120/s
Disabling signature verification (for testing purposes only) provided a significant throughput increase. The next area for improvement was that transactions were being applied to the ledger one at a time, incurring expensive operations to freeze and unfreeze the ledger for every transaction. The proposed fix was to apply transactions in batches, so that the expensive operations would be incurred less frequently (a batching sketch follows the chronology).
4/6/2015: 490/s
Transaction batching provided very significant performance gains, roughly 4-fold (120/s to 490/s). Also, signature verification was moved outside of the master lock, allowing that work to be distributed across multiple processing threads (a verification sketch follows the chronology).
6/18/2015: 150/s
This was a significant regression! Investigation eventually traced it to a change in how the ledger was modified. The change was intended to make ledger modifications more efficient, and it did on a per-transaction basis, but the transaction batching implemented previously had not been carried over into the new code.
10/16/2015: 525/s
Re-implementing transaction batching brought throughput back up to slightly higher levels than previously. Investigation revealed the next bottleneck to be in submitting transactions to the network: the client handlers could not saturate the network before they bogged down.
10/13/2016: 1100/s
The next performance test happened a year later, and throughput doubled! The most likely explanation is that coroutines were implemented in the functions that submit transactions to the network, reducing blocking in those functions (a coroutine sketch follows the chronology).
7/12/2017: 1500/s
Performance testing resumed, this time on an automated daily basis. The peak sustained throughput is now 1500/s!
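
The batching change above works by amortizing the freeze/unfreeze cost over many transactions instead of paying it once per transaction. Here is a minimal C++ sketch of that idea; the Ledger and Transaction types and their methods are illustrative stand-ins, not rippled's actual interfaces.

```cpp
// Hypothetical sketch of transaction batching. Ledger, Transaction, and
// their methods are invented for illustration, not rippled's real code.
#include <vector>

struct Transaction { /* fields omitted */ };

struct Ledger
{
    void freeze()   { /* expensive: prepare state for modification */ }
    void unfreeze() { /* expensive: publish the modified state */ }
    void applyTransaction(Transaction const& tx) { /* mutate state */ }
};

// Before: one expensive freeze/unfreeze pair per transaction.
void applyOneAtATime(Ledger& ledger, std::vector<Transaction> const& txs)
{
    for (auto const& tx : txs)
    {
        ledger.freeze();
        ledger.applyTransaction(tx);
        ledger.unfreeze();
    }
}

// After: one freeze/unfreeze pair per batch, so the expensive operations
// are incurred once per batch instead of once per transaction.
void applyBatch(Ledger& ledger, std::vector<Transaction> const& txs)
{
    ledger.freeze();
    for (auto const& tx : txs)
        ledger.applyTransaction(tx);
    ledger.unfreeze();
}
```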
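The other 4/6/2015 change, verifying signatures outside the master lock, lets the CPU-heavy cryptography run on several threads while only the actual ledger update is serialized behind the lock. A rough sketch of that shape follows; masterLock, verifySignature, and applyUnderLock are made-up names standing in for rippled's real components.

```cpp
// Hypothetical sketch: check signatures on worker threads with no lock
// held, then apply only verified transactions under the (serial) lock.
#include <functional>
#include <future>
#include <mutex>
#include <vector>

struct Transaction { /* fields omitted */ };

bool verifySignature(Transaction const& tx)
{
    // CPU-heavy cryptographic check; safe to run concurrently.
    return true;
}

std::mutex masterLock;  // stand-in for rippled's master lock

void applyUnderLock(Transaction const& tx)
{
    std::lock_guard<std::mutex> guard(masterLock);
    // mutate ledger state here
}

void processBatch(std::vector<Transaction> const& txs)
{
    // Kick off signature checks in parallel, outside the master lock.
    std::vector<std::future<bool>> checks;
    checks.reserve(txs.size());
    for (auto const& tx : txs)
        checks.push_back(
            std::async(std::launch::async, verifySignature, std::cref(tx)));

    // Serialize only the ledger updates under the lock.
    for (std::size_t i = 0; i < txs.size(); ++i)
        if (checks[i].get())
            applyUnderLock(txs[i]);
}
```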
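The coroutine change from 10/13/2016 is about keeping worker threads busy: when a submission handler has to wait, a coroutine suspends instead of blocking the thread, so the same thread can service other handlers in the meantime. The sketch below uses Boost.Asio stackful coroutines (boost::asio::spawn) purely as one plausible illustration of the technique; the submitTransaction function and the timer standing in for slow network I/O are invented for the example and are not rippled's actual submission path.

```cpp
// Illustrative only: a handler that would otherwise block while waiting
// suspends its coroutine instead, freeing the thread for other work.
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <chrono>
#include <iostream>

namespace asio = boost::asio;

// Hypothetical submission handler; the timer stands in for network I/O.
void submitTransaction(asio::io_context& io, int id)
{
    asio::spawn(io, [&io, id](asio::yield_context yield)
    {
        asio::steady_timer timer(io, std::chrono::milliseconds(100));
        timer.async_wait(yield);  // suspends the coroutine, not the thread
        std::cout << "transaction " << id << " submitted\n";
    });
}

int main()
{
    asio::io_context io;
    for (int id = 0; id < 3; ++id)
        submitTransaction(io, id);
    io.run();  // all three handlers make progress on a single thread
}
```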

 

 
