JoelKatz Posted October 2, 2019

By deliberate design, the XRP Ledger prefers safety over liveness. When it is unclear whether safe operation is possible, the XRP Ledger does not operate. Fortunately, the circumstances under which safe operation is not possible have never occurred on the production network, and the greater reliability and peace of mind have proven to come at minimal cost. However, this does mean that the number of validators that must fail before the network stops making forward progress can be fairly low.

This requires people to set a high bar for the validators they choose to listen to. But keeping the bar high prevents new, less well-funded participants, or participants less committed to rapid response to failures and downtime, from running validators, as the set of candidate validators shrinks to those who are well-funded or committed to rapid response.

We propose a design change to improve the network's reliability in the face of failed validators. This change also eases the trade-off of wanting diverse validators while having to insist on very high quality and response readiness to avoid the risk of a network halt. The idea is as follows:

- The ledger would contain a list of validators that are believed to be unreliable at the moment.
- That list would be maintained by the validators, using the same mechanism they use to maintain the network fees and amendments.
- A validator on that list would still participate in consensus in precisely the same way; the list would have no effect on the consensus process whatsoever.
- The decision of when to consider a ledger fully validated would be adjusted so that servers on the negative UNL do not count towards the 80% threshold.

Imagine a network where everyone has the same UNL, and consider what happens if validators start to fail. Without this proposal, as soon as 20% of the validators fail, the network would stop making forward progress. With this proposal, as validators fail, the remaining validators would add them to the list of validators believed unreliable. This would mean the network could keep making forward progress safely even if the number of remaining validators drops to 70% or slightly lower.

This post is one suggestion for an enhancement to the XRP Ledger. See this post for context: https://www.xrpchat.com/topic/33070-suggestions-for-xrp-ledger-enhancements/

You can find all the suggestions in one place here: https://coil.com/p/xpring/Ideas-for-the-Future-of-XRP-Ledger/-OZP0FlZQ
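To make the threshold arithmetic concrete, here is a minimal sketch of how the fully-validated check could change under this proposal. The function names and set-based bookkeeping are illustrative assumptions, not rippled's actual implementation.

```python
import math

def effective_quorum(unl: set, negative_unl: set) -> int:
    """Validators on the negative UNL still vote, but they no longer
    count toward the 80% fully-validated threshold."""
    counted = unl - negative_unl           # only reliable validators count
    return math.ceil(0.8 * len(counted))   # 80% of the reduced set

def is_fully_validated(validations: set, unl: set, negative_unl: set) -> bool:
    # Tally only validations from validators that still count.
    agreeing = validations & (unl - negative_unl)
    return len(agreeing) >= effective_quorum(unl, negative_unl)

# Worked example: a 10-validator UNL with 3 failed validators.
unl = {f"v{i}" for i in range(10)}
negative_unl = {"v7", "v8", "v9"}  # believed unreliable
# Without the negative UNL, the quorum is ceil(0.8 * 10) = 8, so the 7
# healthy validators can never fully validate and the network halts.
# With it, the quorum drops to ceil(0.8 * 7) = 6 and progress continues.
print(is_fully_validated({f"v{i}" for i in range(7)}, unl, negative_unl))  # True
```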
mDuo13 Posted October 2, 2019

I think this is an interesting idea that could have some cool benefits, but it would probably be best studied from a theoretical basis to make sure whatever design we come up with doesn't have any pitfalls that could be abused or unintentionally cascade into a worse situation. For example, I'd want to be fairly confident that a gradual split in network connectivity couldn't result in two networks diverging along the lines of whom they temporarily can't see. Similarly, I'd want to make sure a small but coordinated group of malicious validators couldn't conspire to get honest validators placed on the "unreliable" list through some trickery. I'm optimistic that we could design around those sorts of problems as long as we take the time to plan things out.
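To make the partition concern above concrete: if each half of a split network could place the other half on the negative UNL without limit, both halves could reach quorum on divergent ledgers. A small arithmetic sketch follows; the 25% cap shown as a mitigation is a hypothetical design choice, not something from the posts above.

```python
import math

def quorum(unl_size: int, negative: int) -> int:
    return math.ceil(0.8 * (unl_size - negative))

# Hypothetical 10-validator UNL split into two halves that cannot see
# each other. With no cap, each half could put the other half (5 nodes)
# on its negative UNL:
print(quorum(10, 5))  # 4 -- each half has 5 validators, so both sides
                      # could fully validate divergent ledgers.

# Capping the negative UNL at, say, 25% of the UNL prevents this:
print(quorum(10, 2))  # ceil(0.8 * 8) = 7 > 5, so neither half proceeds.
```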
NightJanitor Posted October 2, 2019

https://en.wikipedia.org/wiki/Metastability
Jerrybo Posted October 3, 2019

Interesting idea. The underlying problem, for me, seems to be that too many validators have the same UNL. Imagine everyone had a completely different list: a network halt from a 20% failure would be a lot less likely, because there wouldn't be "the top validators." Under that assumption about the underlying problem (please correct me if I'm wrong), the aim should be to get many more different UNLs in use.

Approach: dynamic UNL. As in the torrent network, an initial UNL would serve just to join the network. Every network node maintains its own UNL dynamically, based on known peers. Every node on that list gets a calculated score, which improves based on the validator's past successful validations. The dynamic UNL in use is then the set of top-scored validators. (A sketch of this scoring idea follows below.)

Issue with the approach: it should be avoided that every validator ends up with the same dynamic UNL because they all apply the same calculation metrics. Maybe additional metrics such as response time, geolocation, etc. could be considered.
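Here is a minimal sketch of the scoring idea described above; the scoring formula, weights, and field names are all illustrative assumptions rather than a worked-out design.

```python
from dataclasses import dataclass

@dataclass
class PeerStats:
    pubkey: str
    agreed: int        # validations that matched the fully-validated ledger
    total: int         # validations observed from this peer
    latency_ms: float  # one of the extra metrics suggested above

def score(p: PeerStats) -> float:
    """Higher is better: past agreement rate, discounted by latency."""
    if p.total == 0:
        return 0.0  # brand-new peers start untrusted
    return (p.agreed / p.total) / (1.0 + p.latency_ms / 1000.0)

def dynamic_unl(peers: list[PeerStats], size: int) -> list[str]:
    """Pick the top-scored validators as this node's working UNL."""
    ranked = sorted(peers, key=score, reverse=True)
    return [p.pubkey for p in ranked[:size]]

peers = [
    PeerStats("vA", agreed=980, total=1000, latency_ms=120),
    PeerStats("vB", agreed=700, total=1000, latency_ms=40),
    PeerStats("vC", agreed=990, total=1000, latency_ms=900),
]
print(dynamic_unl(peers, size=2))  # ['vA', 'vB']
```

Note that if every node ranks the same public data with the same formula, everyone converges on the same UNL again, which is exactly the issue flagged at the end of the post; per-node inputs such as locally measured latency would differentiate the lists.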
Sukrim Posted October 3, 2019

How would you secure such a dynamic system against sybil attackers? Also, validators don't expose information such as geolocation. The "U" in "UNL" stands for "Unique" - and that's something the operator of every single node needs to decide upon.
Jerrybo Posted October 3, 2019

1 hour ago, Sukrim said: How would you secure such a dynamic system against sybil attackers? Also, validators don't expose information such as geolocation. The "U" in "UNL" stands for "Unique" - and that's something the operator of every single node needs to decide upon.

Counteractions:
- Starting a validator requires an XRP reserve, so running more nodes gets more expensive.
- New nodes have a low score; the score increases over time, meaning they have gained trust from the network.

Basically, we could implement the same mechanism and rules that exist today for getting an entry on the default UNL. Of course, every node can configure its own UNL at any time. I'm also thinking of a blacklist feature like in other network systems.
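An illustrative sketch of the two counteractions above: a reserve requirement that makes sybil validators costly, plus a trust score that only grows with observed time in the network. All numbers are hypothetical.

```python
RESERVE_XRP = 10_000  # hypothetical stake required to register a validator

def admit(reserve_xrp: float) -> bool:
    """Sybil attacks get expensive: each identity must lock a reserve."""
    return reserve_xrp >= RESERVE_XRP

def trust(days_observed: int, agreement_rate: float) -> float:
    """New nodes start near zero; trust saturates as the node ages."""
    age_factor = days_observed / (days_observed + 30)  # ~0.5 after a month
    return age_factor * agreement_rate

print(admit(15_000))     # True
print(trust(1, 0.99))    # ~0.03: brand-new node, little trust yet
print(trust(365, 0.99))  # ~0.91: long-lived, well-behaved node
```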
Sukrim Posted October 3, 2019

Past performance doesn't guarantee future honesty. XRP is cheap, and validators shouldn't only be open to millionaires, imho. What are the "mechanism and rules that exist today for getting an entry on the default UNL," exactly?
Guest Posted October 3, 2019 (edited)

Anyone who received any kind of funding via Xpring should be obliged to run a validator. The same goes for any university that's part of the UBRI initiative, as well as any "RippleNet" company that's using XRP. I can only hope that "team Ripple" is pushing for this, for obvious reasons...

Another thing would be to take a look at the improvements that our friends at Stellar have been making since the (in)famous May 2019 halt:

https://www.stellar.org/roadmap#stellar-core
https://www.stellar.org/developers/blog/may-15th-network-halt/
https://www.stellar.org/developers/blog/why-quorums-matter-and-how-stellar-approaches-them/ (see validator grouping and quality...)
https://www.scs.stanford.edu/~dm/home/papers/lokhava:stellar-core.pdf

Ripple also more recently proposed an open-membership Byzantine agreement protocol called Cobalt. SCP's safety is optimal, so SCP is safe in any failure scenario where Cobalt is, while the converse is unclear. However, Cobalt claims its safety condition is easier to understand and thus less prone to misconfiguration, which will be interesting to evaluate if Cobalt gets deployed.

The Stellar Consensus Protocol (SCP) is a quorum-based Byzantine agreement protocol with open membership. Quorums emerge from the combined local configuration decisions of individual nodes. However, nodes only recognize quorums to which they belong themselves, and only after learning the local configurations of all other quorum members. One benefit of this approach is that SCP inherently tolerates heterogeneous views of what nodes exist. Hence, nodes can join and leave unilaterally with no need for a "view change" protocol to coordinate membership.

https://github.com/stellar/scp-proofs (formal verification)
https://www.scs.stanford.edu/~dm/home/papers/losa:stellar-instantiation.pdf

Ripple introduced the first permissionless quorum-based consensus protocol. In the XRP Ledger Consensus Protocol, each participant p is responsible for configuring its own UNL, which is a list of other participants that p will accept messages from. Moreover, p will accept as a quorum any set of participants consisting of more than a fixed fraction (defined system-wide by the protocol, e.g. 80%) of its UNL. Maintaining agreement in Ripple's protocol rests on the assumption that participants will provide sufficiently overlapping UNLs (roughly 90% for every pair of participants, in the most adversarial model of Chase and MacBrough).
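To make the overlap assumption in the last paragraph concrete, here is a sketch of a pairwise UNL-overlap check. The 90% figure is taken from the text above; see Chase and MacBrough's "Analysis of the XRP Ledger Consensus Protocol" for the precise statement, which differs in its details.

```python
from itertools import combinations

def overlap_ok(unl_a: set, unl_b: set, required: float = 0.90) -> bool:
    """Do two participants' UNLs overlap enough for safety?"""
    return len(unl_a & unl_b) >= required * max(len(unl_a), len(unl_b))

def network_safe(unls: dict) -> bool:
    """Every pair of participants must overlap sufficiently."""
    return all(overlap_ok(unls[a], unls[b])
               for a, b in combinations(unls, 2))

unls = {
    "p1": {"v1", "v2", "v3", "v4", "v5"},
    "p2": {"v1", "v2", "v3", "v4", "v6"},  # 4/5 = 80% overlap with p1
}
print(network_safe(unls))  # False: 80% is below the 90% requirement
```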
Gyru Posted October 3, 2019

If possible, I would also try to find a check system limiting the number of validators per cloud platform (like the biggest, AWS and Azure). What good is decentralization if the underlying infrastructure grows more and more centralized as time passes? The latest AWS incidents did not cause the XRPL network to freeze, but it could have been affected if multiple servers had been impacted. https://www.mondo.com/aws-outage-internet-centralization-problem/
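One way such a check could look, as a hedged sketch: cap the share of trusted validators hosted on any single cloud provider. Provider attribution would have to be self-reported or inferred from network data, and the 30% cap is a made-up number.

```python
from collections import Counter

def hosting_diverse(validator_hosts: dict, max_share: float = 0.30) -> bool:
    """True if no single provider hosts more than max_share of validators."""
    counts = Counter(validator_hosts.values())
    total = len(validator_hosts)
    return all(n / total <= max_share for n in counts.values())

hosts = {"v1": "aws", "v2": "aws", "v3": "azure",
         "v4": "gcp", "v5": "on-prem"}
print(hosting_diverse(hosts))  # False: AWS hosts 2/5 = 40% > 30%
```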
tulo Posted October 8, 2019

I'd work more on a new consensus mechanism instead of putting resources into slightly improving the current one. It's like fixing the car every month when it has 500,000 km on it instead of buying a new one.
King34Maine Posted December 5, 2019

On 10/8/2019 at 4:13 PM, tulo said: I'd work more on a new consensus mechanism instead of putting resources into slightly improving the current one. It's like fixing the car every month when it has 500,000 km on it instead of buying a new one.

Even if Ripple created an entirely new consensus mechanism, it would still succumb to the emergence of new advances in consensus algorithms and require updates, as innovation never stops. David alluded to the possibility of a new dual-consensus platform. I'm wondering if they could utilize some of the core consensus technology stack developed by Logos (their first acquisition). Michael Zochowski (co-founder of Logos, now Head of DeFi at Xpring) said: "While this chapter of Logos is closing, the ideas and spirit behind Logos will continue in our work at Xpring. We have not yet finalized the plans for the Logos core technology, which has continued to set the bar for blockchain performance, but we hope to have exciting news to share in the near future." I can't see a scenario where Ripple doesn't at least take a look at the Logos blockchain tech. After all, my guess is that their tech stack was one of the deciding factors for the acquisition, besides the engineering talent.