About r3lik

  1. @Professor Hantzen I'm looking for something that will let me specify the range without having to sync the most recent ledgers again. I already have the current ledger data in MySQL (I'm using it to run some calculations and generate other data). I'd like to be able to delete the ledgers I already have, set a new range, sync those, import them into MySQL, rinse and repeat. If I just increment ledger_history, it will still start from the most recent ledger and work backwards, but I don't want to constantly re-sync the current ones. Instead, I want the full history, in ranges that I specify. That would let me consume all of the XRP Ledger data one chunk at a time without needing 8 TB of provisioned storage (expensive). Is this possible? Thanks for all your help!
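(Editor's note: rippled exposes an admin-only `ledger_request` command that asks the server to acquire a specific historical ledger, which could drive the chunk-by-chunk workflow described above. A minimal sketch that only builds the JSON-RPC request bodies; actually sending them requires access to the admin RPC port, which is 5005 in the config posted later in this thread.)

```python
import json

def ledger_request_payload(ledger_index):
    # JSON-RPC body for rippled's admin-only `ledger_request` command,
    # which asks the server to fetch one historical ledger from peers.
    return {"method": "ledger_request",
            "params": [{"ledger_index": ledger_index}]}

def chunk_payloads(start, end):
    # One request per ledger in the closed range [start, end].
    return [ledger_request_payload(i) for i in range(start, end + 1)]

# Example: a 5-ledger chunk. Each payload would be POSTed to the
# admin RPC endpoint (port 5005 in the config from this thread).
payloads = chunk_payloads(32570, 32574)
print(json.dumps(payloads[0]))
```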
  2. @JoelKatz Is there any way to fetch a specific range of ledgers? Thanks much for the tips above.
  3. Is it possible to sync a specific range of ledgers? I'm ingesting a lot of the data into a db, so I could do this in chunks rather than attempting to sync 8TB.
  4. Thanks for the responses so far! A few more questions: Is there a way to leverage sharding (https://developers.ripple.com/history-sharding.html) to get ledger data faster? I'm unclear whether this would be any faster than just getting ledger data from peers. Has anyone been able to download a copy of the nodestore file and import it successfully? @Sukrim I'm not using 10TB of SSD. The price goes up significantly. What is the primary concern here? Thanks much
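(Editor's note: on the sharding question above, newer rippled builds include a `download_shard` admin command that imports pre-built shard archives rather than fetching the data from peers. A sketch of what the request body looks like; the example.com archive URLs are placeholders, since rippled does not host shard archives and a trusted source would have to provide them.)

```python
import json

# Hypothetical shard archive URLs (placeholders, not real hosts).
shards = [{"index": i, "url": f"https://example.com/shards/{i}.tar.lz4"}
          for i in (1, 2, 3)]

# Admin-only JSON-RPC body asking rippled to import these shard archives.
payload = {"method": "download_shard", "params": [{"shards": shards}]}
print(json.dumps(payload, indent=2))
```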
  5. Hello everyone, I think this belongs in the Technical section, but I can't seem to post there as a new user. I'm operating a rippled server and pulling the entire ledger history, as I need access to all ledgers and transactions for a data app we are building. So far I've only been able to sync 38 GB in 16.5 hours. Assuming the full history is ~9 TB (where can I find the exact size?), that's only 0.42%, and it will take about 6 months to sync at the current rate. This is unacceptable. Is there any way to significantly speed up the sync? My node is running in Amsterdam and has 16 cores, 128 GB RAM, 24 TB HDD and 20 Gb/s NICs. Here is my config:

     [server]
     port_rpc_admin_local
     port_ws_public
     # port_peer
     # port_ws_admin_local
     # ssl_key = /etc/ssl/private/server.key
     # ssl_cert = /etc/ssl/certs/server.crt

     [port_rpc_admin_local]
     port = 5005
     # allow from everywhere and restrict on network side
     ip =
     admin =
     protocol = http

     [port_ws_public]
     port = 80
     ip =
     protocol = ws

     # [port_peer]
     # port = 51235
     # ip =
     # protocol = peer

     # [port_ws_admin_local]
     # port = 6006
     # ip =
     # admin =
     # protocol = ws

     [node_size]
     huge
     # tiny
     # small
     # medium
     # large
     # huge

     [node_db]
     type=rocksdb
     path=/data
     advisory_delete=0
     open_files=2000
     filter_bits=12
     cache_mb=256
     file_size_mb=8
     file_size_mult=2
     # How many ledgers do we want to keep (history)?
     # Integer value that defines the number of ledgers
     # between online deletion events
     #online_delete=

     [ledger_history]
     # How many ledgers do we want to keep (history)?
     # Integer value (ledger count)
     # or (if you have lots of TB SSD storage): 'full'
     full

     [database_path]
     /data

     [fetch_depth]
     full

     [sntp_servers]
     time.windows.com
     time.apple.com
     time.nist.gov
     pool.ntp.org

     [ips]
     r.ripple.com 51235

     [validators_file]
     validators.txt

     [rpc_startup]
     { "command": "log_level", "severity": "info" }
     # severity (order: lots of information .. only errors)
     # debug
     # info
     # warn
     # error
     # fatal

     Any input is much appreciated! Thanks,
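(Editor's note: the estimate in the post above can be checked with quick arithmetic. All figures are taken from the post itself; the ~9 TB total is the poster's own rough estimate.)

```python
synced_gb = 38.0   # data synced so far, per the post
hours = 16.5       # elapsed sync time, per the post
total_tb = 9.0     # rough full-history size estimate (~9 TB)

rate_gb_per_h = synced_gb / hours                  # sync throughput, ~2.3 GB/h
pct_done = synced_gb / (total_tb * 1000) * 100     # fraction complete, ~0.42 %
eta_days = (total_tb * 1000 - synced_gb) / rate_gb_per_h / 24  # ~162 days left

print(f"{rate_gb_per_h:.2f} GB/h, {pct_done:.2f}% done, ~{eta_days:.0f} days remaining")
```

This lands a little under the "about 6 months" figure in the post, and both numbers agree on the conclusion: at roughly 2.3 GB/h, a full-history sync from peers is impractical.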