JoelKatz Posted October 2, 2019

Today, the network uses a single consensus process both to make forward progress from ledger to ledger and to coordinate network rule and fee changes. Because the consensus process provides a high degree of decentralization, it has trade-offs in performance, traffic levels, and the number of validators that can be tolerated.

We propose to switch to two separate consensus layers. One would exclusively advance the ledger. The other would handle fee changes, coordinate rule changes, and manage the participants in the first layer. With this design, a small set of validators selected by the larger set of validators would ensure the network makes reliable forward progress and does not engage in censorship. The larger set of validators would police the smaller set to ensure they are in fact making progress and are not censoring. Any validators in the small set that are not behaving would be voted out by the large set.

This design has a number of advantages:

- The network can continue to make forward progress so long as either consensus layer is working.
- The set of validators preserving decentralization can be made larger without increasing the amount of work that must be done to generate each new ledger.
- Poorly-performing validators can continue to help keep the ledger decentralized without slowing down the advancing of the ledger.
- Validators engaging in censorship or providing poor service could not affect the advancing of the ledger, as they would be voted out by the larger set.

This post is one suggestion for an enhancement to the XRP Ledger. See this post for context: https://www.xrpchat.com/topic/33070-suggestions-for-xrp-ledger-enhancements/

You can find all the suggestions in one place here: https://coil.com/p/xpring/Ideas-for-the-Future-of-XRP-Ledger/-OZP0FlZQ
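The "policing" mechanism described above can be sketched roughly as a tally the outer layer runs over the inner set. This is only an illustration; the function names and the 80% removal threshold are assumptions, not part of the actual proposal.

```python
# Hypothetical sketch of the outer (governance) layer policing the inner
# (ledger-advancing) layer. The 80% threshold and all names are assumptions
# for illustration, not part of the actual proposal.

VOTE_THRESHOLD = 0.8  # fraction of outer validators needed to remove an inner validator

def tally_inner_set(outer_votes, current_inner_set):
    """Each outer validator votes on which inner validators to remove.

    outer_votes: dict mapping outer validator -> set of inner validators
                 it wants removed (e.g. for censorship or poor service).
    Returns the new inner validator set.
    """
    n_outer = len(outer_votes)
    removal_counts = {}
    for wanted_removed in outer_votes.values():
        for v in wanted_removed:
            removal_counts[v] = removal_counts.get(v, 0) + 1

    # Keep an inner validator only if fewer than the threshold voted it out.
    return {
        v for v in current_inner_set
        if removal_counts.get(v, 0) < VOTE_THRESHOLD * n_outer
    }

# Example: 5 outer validators, 4 of them flag inner validator "C" as misbehaving.
inner = {"A", "B", "C"}
votes = {"o1": {"C"}, "o2": {"C"}, "o3": {"C"}, "o4": {"C"}, "o5": set()}
print(sorted(tally_inner_set(votes, inner)))  # ['A', 'B'] -- "C" is voted out
```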
mDuo13 Posted October 2, 2019

The "governance layer" could be Cobalt or something else with similar properties, right? Meanwhile, the inner layer could benefit from the (well-studied) efficiencies of a consensus protocol whose participants are known in advance, allowing the ledger to advance even faster, right?

This seems like a very complicated change, and the switchover would be especially tricky. But the benefits could be a big deal. I think I'd lean towards incremental improvements in validator list management first.

Actually, come to think of it, the "validator list" system kind of resembles this, doesn't it? The small set of participants in the recommended UNL could become the "inner layer" that do transaction processing, and the set of participants who declare consensus on a "recommended UNL" would become the outer layer. Maybe we should think about the path to reaching this through that lens.
Sukrim Posted October 2, 2019

I was hoping that the "validator list" approach would be phased out over time and be replaced with something more decentralized, instead of evolving into its own layer of the algorithm...

The first incremental improvement I could think of would be to allow more than one validator key per entity in UNLs, so it is easier to account for cases where one entity operates more than one machine (for availability, for example).
EMacBrough Posted October 2, 2019

Unless I missed some new developments with the idea, it's not true that the network continues if at least one network functions. To ensure consistency between two different transaction networks, transactions need to be "accepted" by the governance network, which implies that the governance network needs to be live to ensure transactions can be validated. The transaction network also of course needs to be live, but it can be replaced if it goes down, so forward progress is roughly ensured if and only if the governance layer is live.

I personally no longer advocate the two-layer approach. In the past I overestimated the difficulty of making an efficient consensus protocol in the UNL framework; UNL-based algorithms based on PBFT-like consensus protocols can be made effectively as fast as algorithms with known participants.

Certainly being able to limit the number of nodes actively participating in transaction processing would be nice at some point in the future when that becomes a bottleneck; however, that isn't really the effect that the two-layer design would provide. Recall that in the model described in the Cobalt appendix, there is an acceptance protocol run by the governance network for each block. This isn't talked about so much because compared to the full Cobalt protocol the cost of running this protocol is minimal. But that's only because Cobalt is an inefficient monstrosity. Compared to an efficient PBFT-like UNL consensus algorithm, the cost of running the acceptance protocol alone is comparable to the cost of running the entire consensus to validate a block. This cost could plausibly be decreased by a moderate constant factor by using a more efficient acceptance protocol, but imo it still wouldn't be enough to outweigh the significant complexity costs it introduces, as Rome points out.
Instead I would advocate 1) decentralizing the overlay network so that not every message passes through a small set of hub nodes, and 2) looking more into sparse-communication consensus systems along the lines of Avalanche. We have the benefit of being in a situation where the number of nodes likely will not be a significant bottleneck in the near future, so there should be plenty of time to watch the developments in that space before we actually need to implement anything.
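EMacBrough's cost argument can be illustrated with a back-of-envelope count. PBFT-style protocols exchange all-to-all messages, so each agreement round costs on the order of n² messages; a large governance set running even one "acceptance" round per block can cost more than a small inner set running full consensus. The set sizes and round counts below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Back-of-envelope illustration (set sizes and round counts hypothetical):
# PBFT-like protocols exchange roughly n^2 messages per all-to-all round,
# so a single per-block acceptance round over a large governance set can
# cost more than full consensus over a small inner set.

def pbft_messages(n, rounds):
    """Rough message count: n^2 messages per all-to-all round."""
    return rounds * n * n

inner_full_consensus = pbft_messages(n=30, rounds=3)    # small inner set, full protocol
governance_acceptance = pbft_messages(n=100, rounds=1)  # large outer set, one acceptance round

print(inner_full_consensus)   # 2700
print(governance_acceptance)  # 10000 -- the single acceptance round dominates
```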
JoelKatz Posted October 2, 2019 (Author)

2 hours ago, EMacBrough said:
Unless I missed some new developments with the idea, it's not true that the network continues if at least one network functions. To ensure consistency between two different transaction networks, transactions need to be "accepted" by the governance network, which implies that the governance network needs to be live to ensure transactions can be validated. The transaction network also of course needs to be live, but it can be replaced if it goes down, so forward progress is roughly ensured if and only if the governance layer is live.

Thanks for responding. If the governance layer approves a list of validators for the inner layer, those validators agree on something, and that something complies with system rules, I don't see any problem with accepting that without waiting for the governance layer to specifically accept those ledgers or transactions. What would stop it from accepting them? Byzantine accountability can be handled with a delay.

I'm not particularly worried about concurrent attacks on both the network topology and a significant number of validators because I don't think those kinds of attacks are realistic.

The inner layer could temporarily censor. But if the outer layer is working, that wouldn't last very long. And if the outer layer isn't working, I'd rather have a temporarily censoring network than a temporarily stalled one.
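The acceptance rule JoelKatz describes can be sketched as a simple quorum check: a ledger counts as accepted once enough of the governance-approved inner validators have signed the same ledger, with no per-ledger sign-off from the governance layer. The 80% quorum and all names below are assumptions for illustration only.

```python
# Hypothetical sketch of the acceptance rule described above: a ledger is
# accepted as soon as a quorum of governance-approved inner validators agree
# on it, without per-ledger approval from the governance layer. The 80%
# quorum and all names are illustrative assumptions.

QUORUM = 0.8

def ledger_is_accepted(approved_inner_set, signatures):
    """signatures: validator -> ledger hash that validator signed."""
    # Only count signatures from validators the governance layer approved.
    valid = {v: h for v, h in signatures.items() if v in approved_inner_set}
    if not valid:
        return False
    # Enough approved validators must agree on the *same* ledger hash.
    best = max(set(valid.values()), key=list(valid.values()).count)
    agreeing = sum(1 for h in valid.values() if h == best)
    return agreeing >= QUORUM * len(approved_inner_set)

approved = {"v1", "v2", "v3", "v4", "v5"}
sigs = {"v1": "abc", "v2": "abc", "v3": "abc", "v4": "abc", "v5": "xyz"}
print(ledger_is_accepted(approved, sigs))  # True -- 4 of 5 agree on "abc"
```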
EMacBrough Posted October 2, 2019

I guess it depends on the model we're imagining. In the original formulation, the inner layer was completely untrusted because we wanted to be able to choose transaction processors purely based on performance metrics, and instead have the governance layer consist of the trusted entities (it would also be harder to attack the governance layer because of the longer time horizon for decisions). In this case it becomes less unreasonable to imagine scenarios where the topology and transaction layer simultaneously have a critical number of Byzantine nodes.

You're right though that this isn't the only model we need to work with. If the model instead assumes that both networks are trusted, then it is true that the governance layer doesn't need to verify every single block, and the efficiency improvements would then be significant if there are many nodes in the governance layer.

I'm not concerned about the inner layer censoring, because that can be dealt with easily. I personally am concerned about simultaneous attacks on the validators and topology, mainly because of the currently very small number of hubs and known weaknesses in the message-forwarding code. In the future, when the topology is stronger and the code is improved, I would likely be less opposed to the idea.
MikeNard77 Posted October 3, 2019

@EMacBrough So I assume the two-layer approach discussed above in this xrpchat thread by @JoelKatz isn't the approach discussed in the Twitter link (Cobalt v4). Is the v4 approach even a two-layer approach? If not, what does Cobalt v4 look like?

Edited October 3, 2019 by MikeNard77
EMacBrough Posted October 3, 2019

Just now, MikeNard77 said:
@EMacBrough So I assume the two-layer approach discussed above in this xrpchat thread by @JoelKatz isn't the approach discussed in the Twitter link (Cobalt v4). Is the v4 approach even a two-layer approach? If not, what does Cobalt v4 look like?

Cobalt v4 is just a regular consensus algorithm in the UNL framework, without any fancy innovations like the two-layer approach. Functionally it looks a lot like PBFT (I basically consider it to just be PBFT with a few minor changes).
Guest Posted October 3, 2019

Cobalt is dead. Someone from team Ripple should say that loud and clear. If nothing else, that will at least halt certain people yelling 589.

So if Cobalt is dead, long live the Cobalt-like algorithm. Maybe a new name is in order to energize the XRP community. Food for thought:

https://www.quora.com/What-is-cobalt-in-the-XRP-ledger
https://medium.com/blockchain-at-berkeley/better-blockchain-governance-a-conversation-with-david-schwartz-f023485afd69
https://xrpl.org/consensus-research.html

#Cobalt on Twitter for Ethan, Dave and Nik...

Edited October 3, 2019 by Guest
Julian_Williams Posted October 4, 2019

I do not fully understand it, but it sounds like very innovative technology that will speed things up in a secure way. Ripple leading the way in making blockchain fast, scalable and secure. Well done Ripple, and thank you for keeping us informed.
NightJanitor Posted October 4, 2019

Are you guys envisioning the "layers" as flat and on top of one another (stacked), or as spheroidal, with one roughly encapsulating the other? I can see it both ways, I suppose. (Why only two? Seems arbitrary.) (How do they stay in timephase?)

I had a brief flash of something - it looks a bit like, say, 3 networks, each of which closes a ledger every 3 seconds, but running out of phase. Maybe I have just seen the logo you guys have been using forever; darn fidget spinners; there's three, but they look like one, when spun. My poor brain... it needs a rest... I'm gonna go count some sheep... err... make that... lambs.

(damnit: probably should have said "planar" instead of "linear" (vs. "spheroidal")... also think I'm struggling to zero in on the definitional friction between the two concepts - not necessarily mutually exclusive - that are something like "concurrency" (planar) vs "concentricity" (spherical)... may I dream of Riemann!)

Edited October 4, 2019 by NightJanitor
JoelKatz Posted December 4, 2019 (Author)

On 10/3/2019 at 11:40 PM, NightJanitor said:
Are you guys envisioning the "layers" as flat and on top of one another (stacked) or as spheroidal, with one roughly encapsulating the other?

I tend to envision them as stacked, but you could envision them as one encapsulating the other.

On 10/3/2019 at 11:40 PM, NightJanitor said:
(Why only two? Seems arbitrary.)

Two gets you a lot of benefit without getting absurdly complex. The key is to get the benefits of a fast, light algorithm to advance the ledger while still preserving a level of decentralization you can only get with a heavier, slower algorithm. I actually do have a design for a three-layer algorithm, but that's another story.

On 10/3/2019 at 11:40 PM, NightJanitor said:
(How do they stay in timephase?)

If nothing goes wrong, the inner algorithm keeps advancing the ledger, with the outer algorithm making adjustments that take place at particular ledger sequence numbers. The more complex (and hopefully very rare) case is when the inner algorithm becomes so dysfunctional that it can't make legal forward progress, in which case the outer algorithm agrees to suspend the inner algorithm and then resume at a particular point with a particular new list of inner validators. It's a bit complex.
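The coordination scheme in that last answer can be sketched as the outer layer scheduling adjustments keyed to ledger sequence numbers, which the inner layer applies as it closes each ledger. The class and field names here are hypothetical illustrations, not anything from an actual design.

```python
# Sketch (all names hypothetical) of the coordination scheme described above:
# the inner layer keeps closing ledgers, while outer-layer decisions are
# tagged with the ledger sequence number at which they take effect. A
# suspend/resume with a new inner validator list is just another scheduled
# adjustment.

class GovernanceSchedule:
    def __init__(self):
        self.pending = {}  # effective ledger sequence -> adjustment dict

    def schedule(self, effective_seq, adjustment):
        """Outer layer agrees on an adjustment and when it takes effect."""
        self.pending[effective_seq] = adjustment

    def apply_at(self, seq, state):
        """Called by the inner layer as it closes ledger `seq`."""
        if seq in self.pending:
            state.update(self.pending.pop(seq))
        return state

gov = GovernanceSchedule()
gov.schedule(1005, {"base_fee": 12})                       # routine fee change
gov.schedule(1010, {"inner_validators": ["a", "b", "c"]})  # new inner list after a suspend

state = {"base_fee": 10}
for seq in range(1000, 1011):  # inner layer advances ledger by ledger
    state = gov.apply_at(seq, state)
print(state)  # {'base_fee': 12, 'inner_validators': ['a', 'b', 'c']}
```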
wojake Posted November 4, 2021

This would make things more efficient for end users' transaction settlement time and free up a lot of resources so these validators can work efficiently, but wouldn't it create a more centralized ecosystem in the XRP Ledger? It's a cool idea, but I don't fully understand the protocol/structure. Here's what I think would work using a similar network structure:

1. Validating layer: This layer would have a large number of validators that advance the ledger. This layer would "elect" a trusted validator to be a part of the governance layer; a governing validator can be voted out if a majority of the validators do not like that governor's proposals and choices in the governance layer. The validating layer is the only layer that can vote out a malicious/misbehaving validator, both in its own layer and in the governance layer. This would essentially mean that if the governors do not act in accordance with the XRP Ledger community's needs, they might be voted out of the governance layer.

2. Governance layer: This layer would have a small number of validators that are trusted by layer 1 to govern properly. The governing validators cannot vote out a misbehaving validator in either layer unless they are also a part of layer 1. Their purpose is only to govern over account reserves, fees, network structure and consensus, and to propose and vote on amendments...

Both of these layers would still be in the same network; a validator can play a role in both layers, acting as a validator and a governor.

What's the benefit? Using this structure, validators can dedicate their resources to the more demanding layer, the validating layer. This would allow the validating layer to focus on finalizing transactions at record speed, since they are not using their limited resources to compute fee changes, coordinate network changes, and process amendments.
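The election rule in wojake's point 1 can be sketched as two small functions: layer 1 elects a governor by plurality, and a strict majority of layer-1 validators can vote one out. All names and thresholds are hypothetical illustrations of the comment, not a real design.

```python
# Sketch (names and thresholds hypothetical) of the election rule described
# in the comment above: the validating layer elects governors, and a strict
# majority of validators can vote a misbehaving governor out.

def elect_governor(ballots, governors):
    """ballots: validator -> candidate it endorses. Plurality winner joins."""
    counts = {}
    for candidate in ballots.values():
        counts[candidate] = counts.get(candidate, 0) + 1
    winner = max(counts, key=counts.get)
    governors.add(winner)
    return winner

def vote_out_governor(validators, governor, removal_votes, governors):
    """A strict majority of layer-1 validators removes a governor."""
    if len(removal_votes) > len(validators) / 2:
        governors.discard(governor)
        return True
    return False

validators = {"v1", "v2", "v3", "v4", "v5"}
governors = set()
elect_governor({"v1": "v2", "v2": "v2", "v3": "v2", "v4": "v5", "v5": "v5"}, governors)
print(governors)  # {'v2'} -- v2 elected by plurality (3 of 5 ballots)
vote_out_governor(validators, "v2", {"v1", "v3", "v4"}, governors)
print(governors)  # set() -- 3 of 5 is a strict majority, v2 removed
```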
Edited November 4, 2021 by wojake