
The Ledger in 2020 - Size? (For a new full history validator) - Also, Debian 10 suitability question!



I have a reasonably powerful E-2146/128GB RAM server in a colo facility in Switzerland, with the option of SSD or SAS disks, and I'm considering putting it to use as a full-history validator. I am directly connected to the major local IXPs and HE transit (with other transits indirectly connected), so connectivity looks fine as well.

The rippled docs on xrpl.org were last updated with the ledger size in 2018. Does anyone know the current size as of April/May 2020? No guesses, please: I need to know how many disks (and what size) to buy for future capacity planning, based on 8 usable drive slots and a ~3-year server lifespan.

Also, the install docs on xrpl.org state "Ubuntu Linux 16.04 or higher or Debian 9 (Stretch)", but not "Debian 9 or higher".

Is anyone running rippled on Debian 10 (Buster)? Is the install method the same as with Debian 9?

 


SAS won't work, it needs to be SSD (that is, if your SAS disks are spinning rust rather than SSDs with a non-SATA/M.2 interface). These days I'd expect about 10-15 TiB; I really should spin my full node back up. @WietseWind probably has better numbers.

I doubt there would be issues with Buster; my guess is simply that nobody has tested it or updated the docs.

14 hours ago, Sukrim said:

SAS won't work, it needs to be SSD. These days I'd expect about 10-15 TiB. […]

Thank you for the heads-up. I have a bunch of Samsung PM863 3.84TB SATA SSDs that will cover this and hopefully allow for growth over 3 years; I was afraid it would be 20+TB by now.
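Back-of-the-envelope, taking your 10-15 TiB figure and a yearly growth rate I'm purely guessing at, eight of those drives look like they'd just about cover a 3-year span:

# Rough capacity sketch. The current-size figure is from this thread;
# the growth rate is my own guess, not an official number.
TIB = 2**40

current = 15 * TIB                 # upper end of the 10-15 TiB estimate
growth_per_year = 4 * TIB          # guessed
needed = current + 3 * growth_per_year     # ~27 TiB after 3 years

drive = 3.84 * 10**12              # vendor "3.84 TB" = 10^12-byte terabytes
print(f"needed:           {needed / TIB:.1f} TiB")       # 27.0
print(f"RAID 0, 8 drives: {8 * drive / TIB:.1f} TiB")    # ~27.9
print(f"RAID 5, 8 drives: {7 * drive / TIB:.1f} TiB")    # ~24.4

So RAID 5 across all eight slots would get tight by year three if the growth guess is anywhere near right, and RAID 0 fits but with no redundancy.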

May I ask why you don't run your full node anymore? I see a thread below this one where it's noted that @haydentiff sold all her XRP holdings, and from her Twitter feed it appears she shut her node down too. Am I crazy for wanting to add a node right now?

12 hours ago, xrptipbot said:

adm@xrpl-fh2:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        21T   13T  7.4T  63% /
adm@xrpl-fh2:~# 

Debian 10 should work just fine. You can try it in a Docker container first?

Thank you!

I didn't know Docker was an option till I googled after your post. It looks interesting; I might even try putting a gVisor shim in for added security.
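For the record, here's roughly what I'm picturing; untested, and the image name is just a placeholder for whatever image or Dockerfile I end up with:

# Untested sketch: rippled in Docker under the gVisor (runsc) runtime.
# Ports/paths follow the packaged defaults (peer port 51235, config in
# /etc/opt/ripple, data in /var/lib/rippled); image name is a placeholder.
docker run -d --name rippled \
  --runtime=runsc \
  -p 51235:51235 \
  -v /srv/rippled/config:/etc/opt/ripple \
  -v /srv/rippled/db:/var/lib/rippled \
  some-rippled-image:buster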

May I ask what your memory stats look like? I see in another thread Sukrim notes ~10GB, but I'm curious as to what distribution+release you're running and any other quantifiable data points you could share!

2 hours ago, HowardWolowitz said:

May I ask why you don't run your full node anymore?

I run mine from home and need the bandwidth for working from home a bit more urgently than before. Also, it would be a shitty move to use up bandwidth that other people might need on that residential connection: having janky video calls or slow deployments because someone feels the need to spam the ledger with transactions for practically no cost isn't really that helpful. I'm not in anyone's UNL afaik anyway, and the more recent history is better available than the early history. I'll start it up again in a while; for now I try to be helpful in other ways, like helping people in here or on other platforms.

1 hour ago, Sukrim said:

I run mine from home and need the bandwidth for working from home a bit more urgently than before. […]

 

Totally understandable, I respect your logic :)

I'm going to give it a whirl with a ~2TB portion of the ledger in Docker on Debian 10.3 first and see how it runs.
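From what I can tell from the docs, capping history comes down to something like this in rippled.cfg; the ledger count below is my rough guess at what lands around ~2TB, not a measured number:

# Keep only recent ledgers instead of full history (count is guessed).
[ledger_history]
2000000

[node_db]
type=NuDB
path=/var/lib/rippled/db/nudb
# Rotate out ledgers older than the newest 2M; must be >= ledger_history.
online_delete=2000000
advisory_delete=0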

On 5/3/2020 at 2:54 AM, xrpscan said:

@xrptipbot What kind of disks are these?

I'm running two full history nodes.

One of them contains 4x Micron 5210 7.68TB in RAID 5, the other 24x Samsung Enterprise 1TB in RAID 0.

 

Quote

May I ask what your memory stats look like? […]

See attached images :)

[attached: two screenshots of the servers' memory usage]


Too much uptime! ;-)

Memory usage depends a lot on how much a server is queried through the API, by the way, since I guess that causes rippled to fill up its internal caches faster. The maximum amount these caches can fill up to depends on the "node_size" parameter, which internally maps to presets for a couple of hidden tunables. If you want to change those, you need to recompile (not hard, but tedious), which I did for a while, since a "huge" server also seemed to expect a "huge" internet connection.
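The preset itself is just one stanza in rippled.cfg; it's only what each preset maps to that is compiled in:

# Valid presets: tiny, small, medium, large, huge. The cache sizes each
# preset expands to are baked into the binary, hence the recompile if
# you want something in between.
[node_size]
huge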

On 5/5/2020 at 9:37 AM, Sukrim said:

Too much uptime! ;-)

Memory usage depends a lot on how much a server is queried through the API […] a "huge" server also seemed to expect a "huge" internet connection.

node_size "huge" over here (internet connection: huge as well 😇)

Uptime: kernelcare 😅🍻

They are being queried for sure, as the memory usage indeed somewhat reflects.


You could look at the sizes of shards over time, but afaik they store their contents compressed (the ~13 TB figure is compressed too, but it also contains an index and probably includes the data duplicated in ledgers.db and transactions.db).
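E.g. something like this, assuming your shard store lives under the [shard_db] path from rippled.cfg:

# Per-shard on-disk size; adjust the path to your [shard_db] setting.
du -sh /var/lib/rippled/db/shards/* | sort -k2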

There are two important but different metrics at play: growth of ledger history (all transactions and their resulting state changes) and growth of ledger state (all current objects: accounts, trust lines, offers, escrows, ...). Both are somewhat difficult to measure for slightly technical reasons, and each has a different impact on node sizing (more RAM vs. more disk).
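For the state side, one crude way is to page through the current ledger with the ledger_data API and count objects. A sketch against a local node; the endpoint and page size are example values, and expect it to take a long while on mainnet:

# Sketch: count objects in the current ledger state via rippled's
# JSON-RPC "ledger_data" call, following the pagination marker.
import requests

URL = "http://localhost:5005"   # default local rippled JSON-RPC (admin) port

def count_state_objects():
    total, marker, ledger_index = 0, None, "validated"
    while True:
        params = {"ledger_index": ledger_index, "limit": 2048}
        if marker is not None:
            params["marker"] = marker
        result = requests.post(
            URL, json={"method": "ledger_data", "params": [params]}
        ).json()["result"]
        ledger_index = result["ledger_index"]  # pin later pages to one ledger
        total += len(result["state"])
        marker = result.get("marker")
        if marker is None:
            break
    return total

print(count_state_objects())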
