About fh13

  1. I always imagined Ripple would use ILP more heavily, so that everybody could set up an individual ILP connector and provide liquidity to the whole infrastructure. But I guess that model bears some legal issues and is probably even harder to monetize. Also, wouldn't ILP be the most elegant way to set up a transaction from start to end? Because its transactions are safe to execute.
  2. So where does the exchange take place then? In a closed exchange hosted by Ripple? Sounds like all the market makers that are currently operating on the RCL exchange are left out of all the volume xRapid provides.
  3. So xRapid uses ILP-connected exchanges to source liquidity and sends XRP over the RCL? Is there any Ripple software product that uses the on-chain exchange feature?
  4. As far as I understand, xCurrent basically uses ILP and a Messenger to move funds cross-border from one xCurrent-enabled financial institution to another. An ILP-enabled liquidity provider converts currency A directly into currency B in the process. Is my assumption about xCurrent correct so far? As this article states, "However, when XRP is traded through the xCurrent system, Ripple defines that as a new product called xRapid.": https://www.coindesk.com/xrp-fits-ripples-payments-products-explained/ Since I couldn't find any real technical explanation of how xRapid is supposed to work, I assume that xRapid works just like xCurrent, but with the RCL now taking on the task of providing the liquidity. So the currencies aren't converted directly but into XRP first (A->XRP->B), which would make sense, because XRP was originally intended as the bridge currency for cross-border payments. Is my understanding of xRapid about correct? My question is: does that mean that all liquidity for xRapid is provided from the order books in the Ripple Consensus Ledger (RCL)?
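The A->XRP->B bridge described in the post above can be pictured with a small sketch. The two conversion rates are made-up stand-ins for the best offers on the hypothetical A/XRP and XRP/B order books, not real market data, and a real bridge payment would also have to account for spreads and offer depth:

```python
def bridge_via_xrp(amount_a: float, rate_a_to_xrp: float, rate_xrp_to_b: float) -> float:
    """Convert an amount of currency A into currency B using XRP as the bridge.

    The two rates stand in for the best offers on the A/XRP and XRP/B
    order books (illustrative placeholder values only).
    """
    xrp = amount_a * rate_a_to_xrp   # first leg: A -> XRP
    return xrp * rate_xrp_to_b       # second leg: XRP -> B

# Example with made-up rates: 100 A at 2 XRP/A, then 0.5 B/XRP
amount_b = bridge_via_xrp(100.0, 2.0, 0.5)  # -> 100.0 B
```

The point of the two-leg structure is that no direct A/B order book is needed; liquidity only has to exist against XRP on each side.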
  5. Thank you very much for this insightful explanation. Makes a lot of sense this way.
  6. I noticed that the ledger close time is recorded in the following pattern: ....:xx:x0.000Z, ..:xx:x1.000Z, ..:xx:x2.000Z, ..:xx:x0.000Z ........ Why are the ledgers timestamped in this weird pattern and not by their actual close time?
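The whole-second pattern in the post above is consistent with close times being rounded to the ledger's close time resolution rather than recorded exactly. A minimal sketch of one plausible rounding scheme (round to the nearest multiple of the resolution; whether rippled rounds to nearest or truncates is an assumption here):

```python
def round_close_time(close_time: int, resolution: int) -> int:
    """Round a raw close time (seconds) to the nearest multiple of the
    close time resolution, ties rounding up. Illustrative only -- the
    exact rounding rule rippled uses may differ."""
    return ((close_time + resolution // 2) // resolution) * resolution

# With a 10-second resolution, a raw close time of 1234567 s becomes 1234570 s.
rounded = round_close_time(1234567, 10)  # -> 1234570
```

Rounding like this lets validators agree on a single close time even when their clocks disagree slightly, which is why the recorded values land on coarse boundaries instead of the true wall-clock close.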
  7. I use the list as you can see in my first post and as recommended here. Yesterday I set the node_size parameter to small, and now the RAM figures seem reasonable again.
  8. The benefits are having a trusted and (mostly) stable server. Also, the latencies between server and application are much lower when they run on the same machine... which is critical when you are running any sort of trading software.
  9. @jn_r how much RAM is your rippled consuming exactly? I'm surprised it's possible to run rippled with only 4 GB, thinking about the RAM figures I saw on my setup...
  10. Yesterday I upgraded my rippled to version 0.81.0 as suggested. Since then, rippled struggles to keep in sync with the network, and all my applications connected to the server constantly fail with the max_ledger error code. I also noticed that the rippled process consumes a lot of RAM... about 5.5 GB, which is strange since I am using NuDB as the back end. I never had any problems before, and I didn't change my rippled.cfg beside the suggested changes regarding the validator list. I'm running on a VPS with 6 GB of RAM and SSD storage... This is my rippled.cfg:

      [server]
      port_rpc_admin_local
      port_peer
      port_ws_admin_local

      [port_rpc_admin_local]
      port = 5005
      ip =
      admin =
      protocol = http

      [port_peer]
      port = 51235
      ip =
      protocol = peer

      [port_ws_admin_local]
      port = 6006
      ip =
      admin =
      protocol = ws

      [node_size]
      medium

      [node_db]
      type=NuDB
      path=/var/lib/rippled/db/rocksdb
      online_delete=50000
      advisory_delete=0

      [ledger_history]
      50000

      [database_path]
      /var/lib/rippled/db

      [debug_logfile]
      /var/log/rippled/debug.log

      [sntp_servers]
      time.windows.com
      time.apple.com
      time.nist.gov
      pool.ntp.org

      [ips]
      r.ripple.com 51235

      [validators_file]
      validators.txt

      [rpc_startup]
      { "command": "log_level", "severity": "warning" }

      [ssl_verify]
      1

      Does anybody have a suggestion how to fix this problem? I'm really confused since it worked without a flaw for weeks before. Thank you
  11. So is the unscaled fee 10 drops, as for a Reference Transaction, or 10×(1+n) drops with n=1? I can't find anything about that special case in the Ripple docs, honestly...
  12. When I have set a regular key, I can sign a transaction with that regular key and obviously only pay the fee for a single-signed transaction. But how about the case where I have set a signer list with a quorum of 1 and all the keys have a weight of 1 as well? What transaction fee do I pay for a transaction that is signed with one of the keys from that list? My plan is to have the master key enabled but used only in a total emergency and for the purpose of setting the SignerList. Then I want to have two (or more) keys in the signer list, one of which corresponds to a hardware wallet to manually sign transactions, and one or more that correspond to a temporary key on an HSM (since I am running bots).
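For the fee question in the two posts above: the XRP Ledger documentation's rule for multi-signed transactions is that the unscaled cost is (1 + N) × base fee, where N is the number of signatures actually provided. Under that rule, a sketch of the two cases being compared (the 10-drop base fee matches the reference transaction cost; load scaling on a busy network would multiply this further):

```python
BASE_FEE_DROPS = 10  # reference transaction cost, before load scaling

def transaction_fee_drops(num_signatures: int, multisigned: bool) -> int:
    """Unscaled fee in drops. A regular-key or master-key signature is a
    single-signed transaction; a SignerList signature makes the transaction
    multi-signed even if only one signer meets the quorum."""
    if not multisigned:
        return BASE_FEE_DROPS
    return BASE_FEE_DROPS * (1 + num_signatures)

# Regular key: 10 drops. SignerList with one signature: 20 drops.
regular = transaction_fee_drops(1, multisigned=False)  # -> 10
multi = transaction_fee_drops(1, multisigned=True)     # -> 20
```

So if that documented rule applies to the quorum-of-1 setup described, signing with one SignerList key costs twice what the regular-key path costs.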
  13. I have a few market-making bots running too, and the strategy isn't that complicated at all. Parameter optimization, strategy verification, and reliability... this is where the magic happens. But that is not specific to Ripple or crypto; it's a truth for every trading bot. I like your idea of "following" the successful bots, though, and I think you may get some good results as long as execution time isn't that critical.
  14. Yes, you are right, I thought about that too. However, for small developers there is no way to get started, because you either need to rent a smaller instance and run it 24/7, or rent a larger one and pay for the whole syncing process. I can't understand why this issue isn't addressed by Ripple Labs.
  15. Since it is possible to share EBS snapshots publicly, I thought it would be a great idea if somebody who is running a rippled on an AWS instance could save the current rippled database files and share them as an EBS snapshot periodically. That way, developers like me could easily spin up their instance, mount the EBS volume with the database files, and be ready to go. I'd really like to do some analysis on historical data, but the problem is that running an SSD server with sufficient storage 24/7 is too expensive, and syncing the current way takes forever. (Which is a shame, actually, if you ask me.)
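The sharing workflow proposed above maps onto two AWS CLI calls: snapshot the volume, then add a public createVolumePermission to the snapshot. A small helper that builds those commands as strings (the volume ID is a placeholder, and $SNAPSHOT_ID stands for the ID returned by the create-snapshot call; this is a sketch of the workflow, not a tested deployment script):

```python
def snapshot_share_commands(volume_id: str, description: str) -> list[str]:
    """Build the AWS CLI commands to snapshot an EBS volume holding the
    rippled database and share the snapshot publicly. volume_id and the
    $SNAPSHOT_ID variable are placeholders for your own values."""
    create = (
        f"aws ec2 create-snapshot --volume-id {volume_id} "
        f"--description '{description}'"
    )
    # Run after the snapshot reports state 'completed':
    share = (
        "aws ec2 modify-snapshot-attribute --snapshot-id $SNAPSHOT_ID "
        "--attribute createVolumePermission --operation-type add "
        "--group-names all"
    )
    return [create, share]

cmds = snapshot_share_commands("vol-0123456789abcdef0", "rippled history db")
```

A consumer would then create a volume from the public snapshot in their own account and attach it to a fresh instance, skipping the sync entirely.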