
T8493 · Bronze Member
  • Content Count: 2,304
  • Days Won: 9

Everything posted by T8493

  1. Well, it is not comparable. You usually don't run GateHub in a virtual machine that is disconnected from the network, right?
  2. Except that you have to trust that the random number generator in your virtual machine has access to a reliable source of entropy. It probably does, although, generally speaking, virtual machines that are e.g. cloned and have no network access can have issues with this.
  3. GateHub isn't (externally) audited, but neither are the alternatives, so they are no better in this sense.
  4. Without checking that the random number generator has enough entropy?
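     On Linux you can at least eyeball the kernel's entropy estimate; a minimal sketch (Linux-specific, and note that on kernels >= 5.6 the value is effectively pinned at 256):

     ```python
     # Read the kernel's entropy estimate (Linux only).
     with open("/proc/sys/kernel/random/entropy_avail") as f:
         entropy_bits = int(f.read())
     print(f"kernel entropy estimate: {entropy_bits} bits")
     if entropy_bits < 256:
         # Cloned/offline VMs may sit here; virtio-rng or haveged can help.
         print("warning: entropy pool looks low")
     ```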
  5. It is somewhat concerning that GateHub couldn't detect this attack before hackers stole $5 million. GateHub doesn't process an enormous daily volume of payments, so this attack probably went on for days before it was detected.
  6. What are the alternatives in your opinion? Using unaudited desktop wallets?
  7. @artpope007, GateHub and GateHub Fifth are the same thing, at least legally.
  8. Is your XRP in the old wallet still there? Or was it sent elsewhere?
  9. Is there any technical documentation about these atomic swaps? Something bothers me in this article.
  10. This is the correct URL: https://cointelegraph.com/news/segwit-first-steps-to-ecosystem
  11. https://www.dnevnik.si/1042785705/posel/novice/kriptovaluto-bitcoin-lahko-odslej-kupite-tudi-na-klasicnem-bankomatu The headline translates roughly as: "The cryptocurrency Bitcoin can now also be bought at a regular ATM."
  12. Well, your users will need to access the Ripple network somehow (how will they submit signed transactions? How will they query account balances?). That's correct. Honestly speaking, I'm not sure what all the implications of completely turning off history by setting ledger_history to none are (maybe @Sukrim knows). The documentation somewhat implies that this setting cannot be "none" if you want to serve clients (however, it doesn't specify what kinds of queries/commands you can send to rippled in that case). You can always use some small number instead of "none" if "none" doesn't work for some reason. Look at the ledger_history, node_db and online_delete settings here: https://github.com/ripple/rippled/blob/develop/doc/rippled-example.cfg
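     A minimal sketch of the relevant stanzas, with illustrative values (see rippled-example.cfg for the authoritative comments; the path and numbers here are assumptions):

     ```ini
     # Keep only a small window of history and rotate old ledgers out.
     [ledger_history]
     256

     [node_db]
     type=NuDB
     path=/var/lib/rippled/db/nudb   # illustrative path
     online_delete=512               # rotate out ledgers older than this
     advisory_delete=0               # delete automatically, no confirmation
     ```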
  13. Ripple is technically not a "blockchain". The first 38,000 or so ledgers are somewhat "lost" and are not in an accessible format (you can't access them using rippled). However, if you want to use the rippled API, you don't have to download all historical ledgers. Therefore, a rippled server with no or minimal history can act as a "light" node (but this depends on your use case). However, if you plan to expose your own rippled servers to your users, you should probably ask yourself why users should trust your "public" rippled service...
  14. The problem is that we don't know what you mean by "full Ripple node". Every rippled server can be configured to store a specific amount of historical ledgers (and not the entire "blockchain", as is the case with a Bitcoin node), and you can also turn off history completely. The cost of 150+ GB of cloud storage with guaranteed reasonable IOPS (>>100) should be minimal. Take a look at AWS EBS gp2 volume pricing (and some instances provide ephemeral instance-based storage for "free").
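     As a rough back-of-the-envelope calculation (pricing is region-dependent): gp2 is billed around $0.10 per GB-month, so 150 GB ≈ $15/month, and gp2's baseline of 3 IOPS per GB gives ~450 IOPS for such a volume (burstable to 3,000), comfortably above the >>100 requirement.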
  15. You can query your "full rippled node" or query the public rippled servers provided by RL... I'm not sure what you are trying to achieve. There is also a public Data API v2 service.
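     For example, a hedged sketch of querying one of the public rippled servers over JSON-RPC (endpoint per the rippled docs; the account address is a placeholder, not a real account):

     ```python
     import requests

     payload = {
         "method": "account_info",
         "params": [{"account": "rPLACEHOLDER...", "ledger_index": "validated"}],
     }
     # s1.ripple.com:51234 is Ripple's public JSON-RPC endpoint.
     resp = requests.post("https://s1.ripple.com:51234/", json=payload, timeout=10)
     print(resp.json()["result"])
     ```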
  16. RL actually advises against relying on them for business use...
  17. Maybe you want to read about how rippled works first... You don't need any "node" anyway, because you can probably use the Ripple-provided servers (but it depends on your use case).
  18. You can use the API interface that rippled provides on your "heavy" node. But 150+ GB maybe sounds like too much for your use case.
  19. @Max Entropy, you can't just use a random compression library (e.g. RAR) in open-source software because of licensing restrictions. You also need a library that is supported on a variety of platforms and has been heavily tested in practice. Only a few standard algorithms satisfy these requirements.
  20. I'm not sure what is being asked here. If your question was "Can I use metatrader with GateHub?", then the answer is no.
  21. It looks like Ethereum may implement compression of peer-to-peer connections using the snappy algorithm: https://github.com/ethereum/EIPs/pull/706 They claim their implementation provides around 60-80% sync bandwidth savings with negligible compression/decompression CPU cost. Sounds promising.
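     The algorithm itself is easy to try; a minimal sketch using the python-snappy bindings (pip install python-snappy) on some repetitive JSON-ish data:

     ```python
     import snappy

     message = b'{"jsonrpc":"2.0","method":"eth_getBlockByNumber"}' * 20
     compressed = snappy.compress(message)
     print(len(message), "->", len(compressed),
           f"({100 * (1 - len(compressed) / len(message)):.0f}% saved)")
     assert snappy.decompress(compressed) == message
     ```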
  22. It would probably be better for the remote point to deal with much higher-level operations (rather than with individual Ripple transactions) that are based on its specific business processes (workflows). Basically: rippled (in the cloud) <--> very high-level API gateway <--> remote point. This could allow you to use extremely low-bandwidth/high-latency communication links (e.g. SMS messages). Even if you use IP networks, that doesn't mean you have to use protocols such as HTTP(S). You could use a custom protocol (maybe at the UDP level, or e.g. Google's QUIC protocol) and maybe piggyback streaming data on top of other messages. However, such a solution would be extremely expensive. (BTW, I worked on POCs of similar solutions for salesmen who work in regions with very bad GSM signal coverage.)
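     A purely hypothetical sketch of such a high-level message: the remote point sends one compact, fixed-size business command instead of raw Ripple transactions (the field layout and endpoint are made up for illustration, not a real protocol):

     ```python
     import socket
     import struct

     # !H16sQ -> message type (2 B), customer id (16 B, null-padded),
     # amount in drops (8 B) = 26 bytes per command.
     msg = struct.pack("!H16sQ", 1, b"customer-0042", 25_000_000)

     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
     sock.sendto(msg, ("127.0.0.1", 9000))  # placeholder for the cloud gateway
     ```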
  23. Probably not, because peer-to-peer traffic is encrypted, and data that is already encrypted cannot be usefully compressed (compression would have to happen before encryption).
  24. Here we are talking about lossless compression, not lossy compression (which could be used in many real-time systems). WebSocket connections to rippled will use compression, but that also means that clients connecting to rippled will have to support it. Peer-to-peer links are probably more problematic. It is far from clear how well general-purpose compression algorithms will behave: the Ripple binary serialization format contains a lot of "random" data (e.g. account IDs, public keys) that may be incompressible. It is also not clear what to compress (the entire stream or just individual messages), which algorithm to use ("ordinary" algorithms like gzip, or newer/faster ones like snappy or zstandard), which dictionary to use, etc. Compression is always a trade-off between latency, bandwidth and CPU usage. It also adds an additional layer of complexity to the underlying protocol, and it can sometimes even become a security issue by itself (e.g. the CRIME and BREACH attacks on TLS/HTTP compression).
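     A quick sketch of the "random data" point: high-entropy bytes (standing in for account IDs/public keys) barely compress, while structured, repetitive fields compress well:

     ```python
     import os
     import zlib

     random_like = os.urandom(4096)                        # stand-in for keys/IDs
     structured = b'{"TransactionType":"Payment"}' * 140   # repetitive fields

     for label, data in [("random-like", random_like), ("structured", structured)]:
         out = zlib.compress(data, 6)
         print(f"{label}: {len(data)} -> {len(out)} bytes")
     ```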