Suggestion: Robustness Improvements


JoelKatz

Recommended Posts

I think this is an interesting idea that could have some cool benefits, but it would probably be best studied from a theoretical basis to make sure whatever design we come up with doesn't have any pitfalls that could be abused or unintentionally cascade into a worse situation. For example, I'd want to be fairly confident that a gradual split in network connectivity couldn't result in two networks diverging along the lines of whom they temporarily can't see; and similarly, I'd want to make sure a small but coordinated group of malicious validators couldn't conspire to get honest validators placed on the "unreliable" list through some trickery. I'm optimistic that we could design around those sorts of problems as long as we take the time to plan things out.


Interesting idea. The underlying problem, for me, seems to be that too many validators have the same UNL. Imagine everyone had a completely different list: a network halt due to a 20% failure would be a lot less likely, because there wouldn't be "the top validators". Under that reading of the underlying problem (please correct me if I'm wrong), the aim should be to get a lot more distinct UNLs in use.

Approach: Dynamic UNL
Like in a torrent network, an initial UNL would serve only to bootstrap joining the network. Every node then maintains its own UNL dynamically, based on known peers. Every validator on that list gets a calculated score, which improves with the validator's past successful validations. The dynamic UNL actually used is the set of top-scored validators.

Issue with this approach:
It should be avoided that every validator ends up with the same dynamic UNL because everyone uses the same calculation metrics. Additional metrics such as response time, geographic location, etc. could be considered to keep the lists diverse.
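A minimal sketch of what that scoring could look like, in Python (the weights, the jitter, and the list size of 35 are all illustrative assumptions, not anything rippled actually does):

```python
import random
from dataclasses import dataclass

@dataclass
class ValidatorStats:
    pubkey: str
    successful_validations: int  # validations that agreed with the final ledger
    total_validations: int
    avg_response_ms: float

def score(v: ValidatorStats) -> float:
    """Illustrative score: past reliability, lightly discounted by latency."""
    if v.total_validations == 0:
        return 0.0
    reliability = v.successful_validations / v.total_validations
    latency_penalty = min(v.avg_response_ms / 1000.0, 1.0)  # cap the penalty
    return reliability - 0.1 * latency_penalty

def dynamic_unl(known: list[ValidatorStats], size: int = 35) -> list[str]:
    """Pick the top-scored validators, with a small random jitter so that
    nodes running identical code don't all converge on the exact same list
    (the diversity issue raised above)."""
    ranked = sorted(known, key=lambda v: score(v) + random.uniform(0.0, 0.02),
                    reverse=True)
    return [v.pubkey for v in ranked[:size]]
```

Note the tension, though: the jitter keeps lists from being identical, but too much divergence between lists is exactly the overlap problem that makes this worth studying carefully first.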


How would you secure such a dynamic system against Sybil attackers? Also, validators don't expose information such as geographic location. The "U" in "UNL" stands for "Unique" - and that's something that the operator of every single node needs to decide upon.


1 hour ago, Sukrim said:

How would you secure such a dynamic system against Sybil attackers? Also, validators don't expose information such as geographic location. The "U" in "UNL" stands for "Unique" - and that's something that the operator of every single node needs to decide upon.

Counteractions:
- Starting a validator requires an XRP reserve, so running more nodes gets more expensive.
- New nodes have a low score; the score increases over time, meaning they have gained trust from the network.

Basically, we could implement the same mechanism and rules that exist today for getting an entry on the default UNL. Of course, every node can still configure its own UNL at any time. I'm also thinking of a blacklist feature like in other network systems.
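For illustration, a trust score that starts low and grows with both node age and good behavior could look roughly like this (the 90-day half-saturation constant is a made-up example, not an existing rule):

```python
def trust_score(days_online: int, successful: int, total: int) -> float:
    """Illustrative only: new nodes start near zero and gain trust slowly.
    Combined with the XRP reserve, a Sybil attacker spinning up many fresh
    nodes ends up holding many expensive, low-trust entries."""
    if total == 0:
        return 0.0
    reliability = successful / total
    # Age factor saturates: roughly half trust after ~90 days online.
    age_factor = days_online / (days_online + 90)
    return reliability * age_factor
```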


Past performance doesn't guarantee future honesty. XRP is cheap, and running a validator shouldn't be open only to millionaires, imho.

What are the "mechanism and rules that exist today for getting an entry on the default UNL", exactly?


Anyone who received any kind of funding via Xpring should be obliged to run a validator. The same goes for any university that's part of the UBRI initiative, as well as any RippleNet company that's using XRP. I can only hope that team Ripple is pushing for this, for obvious reasons...

Another thing would be to take a look at the improvements that our friends at Stellar have been doing since the (in)famous May 2019 halt:

  • https://www.stellar.org/roadmap#stellar-core
  • https://www.stellar.org/developers/blog/may-15th-network-halt/
  • https://www.stellar.org/developers/blog/why-quorums-matter-and-how-stellar-approaches-them/  (see validator grouping and quality...)
  • https://www.scs.stanford.edu/~dm/home/papers/lokhava:stellar-core.pdf
    • Ripple also more recently proposed an open-membership Byzantine agreement protocol called Cobalt. SCP’s safety is optimal, so SCP is safe in any failure scenario where Cobalt is, while the converse is unclear. However, Cobalt claims its safety condition is easier to understand and thus less prone to misconfiguration, which will be interesting to evaluate if Cobalt gets deployed.
    • The Stellar consensus protocol (SCP) is a quorum-based Byzantine agreement protocol with open membership. Quorums emerge from the combined local configuration decisions of individual nodes. However, nodes only recognize quorums to which they belong themselves, and only after learning the local configurations of all other quorum members. One benefit of this approach is that SCP inherently tolerates heterogeneous views of what nodes exist. Hence, nodes can join and leave unilaterally with no need for a “view change” protocol to coordinate membership.
    • https://github.com/stellar/scp-proofs (formal verification)
  • https://www.scs.stanford.edu/~dm/home/papers/losa:stellar-instantiation.pdf
    • Ripple introduced the first permissionless quorum-based consensus protocol. In the XRP Ledger Consensus Protocol, each participant p is responsible for configuring its own UNL, which is a list of other participants that p will accept messages from. Moreover, p will accept as a quorum any set of participants consisting of more than a fixed fraction (defined system-wide by the protocol, e.g. 80%) of its UNL. Maintaining agreement in Ripple’s protocol rests on the assumption that participants will provide sufficiently overlapping UNLs (roughly 90% for every pair of participants, in the most adversarial model of Chase and MacBrough).
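To make those numbers concrete, here is a small sketch of the 80% quorum rule and the ~90% pairwise-overlap condition from the excerpt above (Python; measuring overlap against the smaller list is a simplification of the actual analysis):

```python
def has_quorum(unl: set[str], agreeing: set[str], threshold: float = 0.80) -> bool:
    """A participant accepts a quorum once more than `threshold`
    of its own UNL agrees (e.g. more than 80%, per the excerpt)."""
    return len(unl & agreeing) > threshold * len(unl)

def overlap_fraction(unl_a: set[str], unl_b: set[str]) -> float:
    """Pairwise UNL overlap, here measured against the smaller list;
    Chase and MacBrough's adversarial model needs roughly 90%."""
    return len(unl_a & unl_b) / min(len(unl_a), len(unl_b))

# With a 35-entry UNL, 0.8 * 35 = 28, so at least 29 validators must agree.
# Losing 8 validators (about 23%) leaves only 27 agreeing at best, which is
# the "20% fail" halt scenario discussed earlier in the thread.
unl = {f"v{i}" for i in range(35)}
alive = {f"v{i}" for i in range(27)}
assert not has_quorum(unl, alive)
```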

 


If possible, I would also try to find a check that limits the number of validators per cloud platform (the biggest being AWS and Azure).

What good is being decentralized at the protocol level if the infrastructure underneath grows more and more centralized as time passes?

The latest AWS incidents did not cause the XRPL network to freeze, but it could have been affected if more servers had been impacted.

https://www.mondo.com/aws-outage-internet-centralization-problem/
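As a rough sketch of such a check (big assumption: that you can map validators to hosting providers at all, e.g. via ASN lookups on their addresses; as noted earlier in the thread, validators don't necessarily expose this):

```python
from collections import Counter

# Hypothetical mapping from UNL validators to hosting providers.
provider_of = {
    "validator1": "AWS",
    "validator2": "AWS",
    "validator3": "Azure",
    "validator4": "Hetzner",
    "validator5": "on-prem",
}

def concentration_warnings(providers: dict[str, str],
                           limit: float = 0.20) -> dict[str, float]:
    """Flag any platform hosting more than `limit` of the UNL, i.e. enough
    that its outage alone could push the network below the 80% quorum."""
    counts = Counter(providers.values())
    total = len(providers)
    return {p: n / total for p, n in counts.items() if n / total > limit}

print(concentration_warnings(provider_of))  # -> {'AWS': 0.4}
```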

 


I'd work more on a new consensus mechanism instead of putting resources into slightly improving the current one. It's like repairing a car every month once it has 500k km on it, instead of buying a new one.


On 10/8/2019 at 4:13 PM, tulo said:

I'd work more on a new consensus mechanism instead of putting resources into slightly improving the current one. It's like repairing a car every month once it has 500k km on it, instead of buying a new one.

Even if Ripple created an entirely new consensus mechanism, it would still be overtaken by new advances in consensus algorithms and require updates, since innovation never stops. David alluded to the possibility of a new dual-consensus platform. I'm wondering if they could utilize some of the core consensus technology stack developed by Logos (their first acquisition). Michael Zochowski (co-founder of Logos, now Head of DeFi at Xpring) said that:

"While this chapter of Logos is closing, the ideas and spirit behind Logos will continue in our work at Xpring. We have not yet finalized the plans for the Logos core technology, which has continued to set the bar for blockchain performance, but we hope to have exciting news to share in the near future."

I can't see a scenario where Ripple doesn't at least take a look at the Logos blockchain tech. After all, my guess is that their tech stack was one of the deciding factors for the acquisition besides the engineering talent.

 

