About EMacBrough

Profile Information

  • Location
    San Francisco, CA
  • Occupation
    R&D Software Engineer
  • Country
    United States


  1. Cobalt v4 is just a regular consensus algorithm in the UNL framework, without any fancy innovations like the two-layer approach. Functionally it looks a lot like PBFT (I basically consider it to just be PBFT with a few minor changes).
  2. I definitely support sparser message-relaying, but I think we need to be pretty careful to avoid attacks here. For instance, if I understand correctly, a naive implementation of the slot/squelch idea would enable an attack that works as follows: suppose there are 100 hubs, the attacker controls 5 of them, and nodes are configured to maintain only 5 open slots per originator. The attacker sends a message to its 5 hubs, which relay that message to every node. All the other hubs are then squelched, so the attacker can ensure that only the nodes it controls will be listened to for messages originating from it, breaking Byzantine accountability. There's probably a reasonable way to make it work using per-node squelches that are chosen randomly among the nodes that passed a message within a moderate interval (rather than first-come-first-served), but my main point is just that we should be careful, formalize the idea, and find a specification that's both safe and not overly complicated.
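To make that attack concrete, here's a toy Monte-Carlo sketch (all function names and parameters are made up for illustration, not taken from rippled) comparing a first-come-first-served slot policy with one that assigns slots uniformly at random among the hubs that relayed the message within the interval:

```python
import random

def assign_slots(deliverers, k, policy, rng):
    """Pick which k hubs keep open slots for a given originator."""
    if policy == "fcfs":
        # First k hubs to deliver the message win the slots.
        return set(deliverers[:k])
    # Randomized: sample uniformly among all hubs that delivered
    # the message within the acceptance interval.
    return set(rng.sample(deliverers, k))

def attacker_captures_all(policy, n_hubs=100, n_bad=5, k=5,
                          trials=10000, seed=0):
    """Fraction of trials in which the attacker's hubs hold every slot."""
    rng = random.Random(seed)
    bad = set(range(n_bad))
    captured = 0
    for _ in range(trials):
        # The attacker's hubs always deliver first; honest hubs
        # follow in random order within the same interval.
        honest = list(range(n_bad, n_hubs))
        rng.shuffle(honest)
        deliverers = list(bad) + honest
        slots = assign_slots(deliverers, k, policy, rng)
        if slots <= bad:
            captured += 1
    return captured / trials
```

Under first-come-first-served, the attacker's 5 hubs capture all 5 slots in every trial simply by delivering first; under random assignment the chance of holding all 5 slots is C(5,5)/C(100,5), effectively zero. This is only a sketch of the intuition; a real specification would also need to handle churn, timing, and re-squelching.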
  3. I guess it depends on the model we're imagining. In the original formulation the inner layer was completely untrusted because we wanted to be able to choose transaction processors purely based on performance metrics, and instead have the governance layer consist of the trusted entities (it would also be harder to attack the governance layer because of the longer time horizon for decisions). In this case it becomes less unreasonable to imagine scenarios where the topology and transaction layer simultaneously have a critical number of Byzantine nodes. You're right though that this isn't the only model we need to work with. If the model instead assumes that both networks are trusted, then it is true that the governance layer doesn't need to verify every single block, and the efficiency improvements would then be significant if there are many nodes in the governance layer. I'm not concerned about the inner layer censoring because that can be dealt with easily. I personally am concerned about simultaneous attacks on the validators and topology, mainly because of the currently very small number of hubs and known weaknesses with the message-forwarding code. In the future when the topology is stronger and the code is improved I would likely be less opposed to the idea.
  4. Unless I missed some new developments with the idea, it's not true that the network continues as long as at least one network functions. To ensure consistency between two different transaction networks, transactions need to be "accepted" by the governance network, which means the governance network needs to be live for transactions to be validated. The transaction network also needs to be live, of course, but it can be replaced if it goes down, so forward progress is roughly ensured if and only if the governance layer is live. I personally no longer advocate the two-layer approach. In the past I overestimated the difficulty of making an efficient consensus protocol in the UNL framework; UNL-based algorithms built on PBFT-like consensus protocols can be made effectively as fast as algorithms with known participants. Certainly, being able to limit the number of nodes actively participating in transaction processing would be nice at some point in the future when that becomes a bottleneck; however, that isn't really the effect the two-layer design would provide. Recall that in the model described in the Cobalt appendix, there is an acceptance protocol run by the governance network for each block. This isn't discussed much because, compared to the full Cobalt protocol, the cost of running it is minimal. But that's only because Cobalt is an inefficient monstrosity. Compared to an efficient PBFT-like UNL consensus algorithm, the cost of running the acceptance protocol alone is comparable to the cost of running the entire consensus to validate a block. This cost could plausibly be decreased by a moderate constant factor with a more efficient acceptance protocol, but imo it still wouldn't be enough to outweigh the significant complexity costs it introduces, as Rome points out.
Instead I would advocate 1) decentralizing the overlay network so that not every message passes through a small set of hub nodes, and 2) looking more into sparse-communication consensus systems along the lines of Avalanche. We have the benefit of being in a situation where the number of nodes likely will not be a significant bottleneck in the near future, so there should be plenty of time to watch developments in that space before we actually need to implement anything.
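For a rough picture of what sparse-communication consensus looks like, here's a toy, synchronous simulation in the spirit of the Slush primitive underlying Avalanche (binary choice, no Byzantine nodes; all parameter values are illustrative assumptions, not a faithful implementation): each node repeatedly polls a small random sample of peers and adopts whichever value a qualified majority of the sample reports.

```python
import random

def slush_round(prefs, k, alpha, rng):
    """One synchronous round of Slush-style subsampled voting: each
    node polls k random peers and adopts a color (0 or 1) backed by
    at least alpha of the k responses; otherwise it keeps its color."""
    n = len(prefs)
    new = list(prefs)
    for i in range(n):
        peers = rng.sample([j for j in range(n) if j != i], k)
        ones = sum(prefs[j] for j in peers)
        if ones >= alpha:
            new[i] = 1
        elif k - ones >= alpha:
            new[i] = 0
    return new

def converge(n=200, k=10, alpha=8, init_ones=0.6, rounds=50, seed=1):
    """Run repeated rounds from a 60/40 split; return the final count
    of nodes preferring 1 (n or 0 once the network has converged)."""
    rng = random.Random(seed)
    prefs = [1 if rng.random() < init_ones else 0 for _ in range(n)]
    for _ in range(rounds):
        prefs = slush_round(prefs, k, alpha, rng)
        if sum(prefs) in (0, n):
            break
    return sum(prefs)
```

The attraction is that per-round communication is O(k) messages per node rather than all-to-all, which is what makes this family of protocols interesting once the node count grows; the hard part, and the reason to keep watching the space, is the Byzantine analysis, which this sketch ignores entirely.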
  5. Reply from Ben: "You probably didn't turn on allow-extensions: " He also recommended that if you have further difficulties, you either reply to the Medium post or post on GitHub issues, since he doesn't check xrpchat (and I only check it rarely).
  6. As I said, we're keeping it ambiguous because we don't know. All I can say is that it probably won't come out in 2018.
  7. Ripple plans to propose Cobalt as an amendment to the network, which the validators will then vote on. We haven't really been keeping this a secret. Why would a Ripple employee write and publicly release a paper on a new consensus algorithm for the XRP Ledger if Ripple weren't planning on doing anything with it? We probably won't use it for voting on a consistent UNL for use with XRP LCP, though; the reason is that XRP LCP was designed to work with different UNLs and hence is not fully optimized for the case where there is only one UNL. Thus we will probably use a more optimized algorithm with better properties. Two suggestions I made in the paper were Aardvark and HoneyBadger, but we haven't settled on a single choice yet. As for the timeline, we're keeping that ambiguous because we don't know. Cobalt is a massive change that will require a lot of new code. Plus we want to develop a formal testing suite that acts directly on the code and formally ensures the implementation is correct before releasing. Meanwhile, to my knowledge there will only be two people working on implementing Cobalt. So it will probably take a while, but we're trying to get it finished as quickly as possible.