JoelKatz Posted October 2, 2019

Every XRP Ledger server has a database called the “node db” that holds all the ledger entries in the current ledger, plus whatever history the server has chosen to retain. Even if you have a cluster of servers, each one has its own node database. If a server loses sync, it cannot keep populating the node database and has to fill in what it missed as, and after, it catches up.

We propose implementing a network protocol for access to a node database. This would permit servers to share a node database. Sharing would conserve disk space, conserve memory, and allow a server that loses sync to obtain the data it missed locally, provided another server that stayed in sync populated the database for it.

This is a bit more complex than it might appear, because the software is very sensitive to latency in accessing the node database. Sophisticated caching and prediction are therefore needed on both sides. For example, if the software fetches an inner ledger node, the database should probably push all the direct children of that node to the server’s cache to avoid additional round trips.

XRP Ledger servers do not need to trust their node store because they only query by hash, so it is not possible for a node store to convince a server to rely on incorrect information. This means the XRP Ledger software could be enhanced to use a variety of methods to obtain ledger data, reducing local storage requirements and improving synchronization performance.

This post is one suggestion for an enhancement to the XRP Ledger. See this post for context: https://www.xrpchat.com/topic/33070-suggestions-for-xrp-ledger-enhancements/

You can find all the suggestions in one place here: https://coil.com/p/xpring/Ideas-for-the-Future-of-XRP-Ledger/-OZP0FlZQ
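As a rough illustration of the prefetching idea, here is a minimal C++ sketch of a shared-node-store client that warms a local cache with an inner node's direct children after fetching it. All names here (RemoteNodeStore, fetchOverNetwork, childHashes) are hypothetical placeholders for the proposed protocol, not rippled's actual NodeStore API.

```cpp
// Hypothetical sketch of a shared node-store client with child prefetching.
// Not rippled's NodeStore interface; the transport and node parser are stubs.
#include <cstdint>
#include <mutex>
#include <optional>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

using Hash = std::string;                    // node hash, hex-encoded here for brevity
using NodeBlob = std::vector<std::uint8_t>;  // serialized ledger node

// Placeholders for the proposed network protocol and for parsing an inner
// node's branch hashes. A real transport would batch requests and could
// stream server-pushed children.
std::optional<NodeBlob> fetchOverNetwork(Hash const&) { return std::nullopt; }
std::vector<Hash> childHashes(NodeBlob const&) { return {}; }

class RemoteNodeStore
{
public:
    // Fetch by hash only. A production client would re-hash the returned
    // blob and reject mismatches, so the remote store never needs trusting.
    std::optional<NodeBlob> fetch(Hash const& h)
    {
        {
            std::lock_guard lock(mutex_);
            if (auto it = cache_.find(h); it != cache_.end())
                return it->second;
        }
        auto blob = fetchOverNetwork(h);
        if (!blob)
            return std::nullopt;
        {
            std::lock_guard lock(mutex_);
            cache_.emplace(h, *blob);
        }
        // Hide round-trip latency on the next lookups: warm the cache with
        // the direct children of an inner node before the caller asks.
        prefetchChildren(*blob);
        return blob;
    }

private:
    void prefetchChildren(NodeBlob const& innerNode)
    {
        for (auto const& child : childHashes(innerNode))
        {
            {
                std::lock_guard lock(mutex_);
                if (cache_.count(child))
                    continue;
            }
            // Fire-and-forget for illustration only; a production version
            // would use a bounded work queue tied to the object's lifetime.
            std::thread([this, child] {
                if (auto blob = fetchOverNetwork(child))
                {
                    std::lock_guard lock(mutex_);
                    cache_.emplace(child, *blob);
                }
            }).detach();
        }
    }

    std::mutex mutex_;
    std::unordered_map<Hash, NodeBlob> cache_;
};
```

The key design point is that correctness never depends on the remote side: every blob can be checked against the hash it was requested by, so the shared store only affects performance, not trust.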
mDuo13 Posted October 2, 2019

Interesting and promising idea, but the latency issues might be a killer. Still, intranet speeds are fast and getting faster, so it might be doable. I'm skeptical about how many entities would really have the interest, expertise, and motivation to set up a networked node database, though.

I wonder if the same concept could be expanded into a completely diskless server. If you could keep everything needed for validation in memory, you could do away with the node store entirely and track just the latest validated and in-progress ledgers.
Sukrim Posted October 3, 2019

You can already run a diskless validator today, as far as I know; I think there is a "none" or "dummy" node store stub in the code. Isn't this what "FetchPack"s are for, anyway?

For sharing a database between trusted peers, a shared memcached instance might be useful, or even just a more generic database backend that allows plugging in Postgres etc. (the library used in rippled should actually already support that).
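For reference, the node store backend is selected in rippled.cfg via the [node_db] stanza. The snippet below is only a sketch: the stanza and the type/path keys follow the documented format, but whether the null/dummy backend mentioned above is actually accepted as "type=none" depends on the rippled version, so treat that line as an assumption.

```
# Sketch of a rippled.cfg excerpt. "type=none" is an assumption based on the
# null/dummy backend stub mentioned above -- verify against your rippled build.
[node_db]
type=none

# A conventional on-disk node store, for comparison:
# [node_db]
# type=NuDB
# path=/var/lib/rippled/db/nudb
```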