Everything posted by mDuo13

  1. If the payment uses ILP and goes Fiat→XRP→Fiat, then the institutions on the sending and receiving side aren't exposed to any volatility risk in XRP exchange rates, but the trader/liquidity provider is the one exposed to the currency risk. For example, liquidity provider Mark agrees to an ILP quote and locks up 400 XRP in exchange for 100 USD. During the time before this ILP quote expires, the price of XRP suddenly skyrockets and that 400 XRP is now worth over 150 USD. Mark is still committed to the previously-agreed 400:100 exchange rate, so Mark ends up losing out on 50 USD worth of value he could've gained by not trading his XRP. Mark has to choose his exchange rates and expiration times to offset and minimize the chance of losing out like that, all day long. If banks use XRP as a reserve and vehicle currency, they can minimize their exchange costs (consolidating many accounts in different places into just one XRP reserve), but that does mean they take on the volatility risk instead.
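The arithmetic in that example can be sketched like this (the numbers are the hypothetical ones from the example above, not real market data):

```python
# Hypothetical scenario: Mark quotes 400 XRP for 100 USD via ILP.
quoted_xrp = 400
quoted_usd = 100
rate_at_quote = quoted_usd / quoted_xrp        # 0.25 USD per XRP

# Before the quote expires, XRP spikes so 400 XRP is worth 150 USD.
rate_at_settlement = 150 / 400                 # 0.375 USD per XRP

# Mark still delivers 400 XRP for the agreed 100 USD...
market_value_delivered = quoted_xrp * rate_at_settlement   # 150.0 USD
opportunity_cost = market_value_delivered - quoted_usd     # 50.0 USD

print(f"Mark gives up {opportunity_cost:.2f} USD of upside")
```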
  2. Well, yeah. A totally unique building that lasts 8 years without falling over is pretty impressive, especially if it's built in a way nobody had done before. And a bunch of other buildings (altcoins) built using a similar process also haven't fallen over in the years since, so clearly proof-of-work is fairly safe under the conditions we've seen thus far.
  3. Yeah, ledgers 1 through 32569 were accidentally deleted early in Ripple's history when servers ran out of disk space. 32570 is the earliest ledger that anyone has a record of. The contents of ledger 1 are known (they were the default values for everything) but the events of ledgers 2 through 32569 can only be surmised from the state of the ledger as of 32570. Back on the original topic, there's a new amendment in voting now, EnforceInvariants, that adds an extra check to make sure that a transaction can't create XRP (or delete more than a transaction's exact transaction cost), among other rules. To be clear, all transaction processing already follows those rules, but after this amendment gets approved there will be an independent check to make sure that even in the case of a bug or something a transaction still can't break certain rules. (This was one of the ideas we started talking about after watching Ethereum's DAO get wrecked: with sufficiently complex systems, we figure, more sanity checks are well worth the effort, and way better than having to fork or rewrite history to undo an exploit.)
  4. I'm pretty sure the reason nobody else (but Stellar of course) uses a "trusted validators" consensus protocol like Ripple's is that it doesn't scale to more validators automatically. The one really big deal with proof-of-work algorithms is they make it so any new server can participate in the validation process with no human intervention or configuration. That's possible because proof-of-work assumes that no malicious party can accumulate more computing power than all the non-malicious parties combined. When you look at it that way, it's really impressive: the fact that Bitcoin and major altcoins haven't had a 51% attack yet is evidence that proof-of-work actually works. Starting from just a handful of servers, Bitcoin now has a huge number of mining servers worldwide participating in its "consensus" process. On the other hand, as we reach later and larger stages with greater understanding, we're now starting to see where it could go wrong. A few mining pools with subsidized electricity have gained inordinate influence over the blockchain. Does this lead to a 51% attack? We'll see. But it certainly does lead to huge backlogs of transactions and wasteful electricity usage. Ripple's consensus algorithm, by contrast, requires manual intervention to increase the number of validators. The people who run rippled servers have to manually choose which servers to add to their UNLs; and if they choose poorly, they could end up forking from the rest of the network. (For example, if you edit your UNL to include 20 additional validators that are all ultimately controlled by the same party, that party could dictate which ledgers and transactions your server thinks are validated.) To guarantee a lack of forking, the RCL needs two things: validators' trusted node lists must have sufficient overlap with one another; and the number of maliciously-colluding nodes in those trusted lists must be small enough.
The exact math is in the consensus whitepaper, but basically it comes down to: consensus works smoothly as long as everyone in the network has more than 20% overlap in their UNL, and each server works well as long as no more than 20% of the validators in its UNL are malicious. There's also a huge gap between "there are enough malicious nodes to stop consensus" (>20%) and "there are enough malicious nodes colluding to confirm a transaction fraudulently" (>80%). There are some major upsides to Ripple's system, aside from the oft-mentioned efficiency & speed things, too. The RCL's consensus algorithm is not vulnerable to the same type of 51% attack, since malicious actors can't participate in the consensus process without convincing validator operators to add the malicious actors' validators to the trusted operators' UNLs. And, if you identify a malicious actor, you can remove that actor from your UNL and ask others to do the same, locking them out of the process. With Bitcoin, the only recourse you have is to amass more computing power to compete, or try to get people to manually recompile the software to blacklist mining servers you don't like. On a more esoteric note, if there's a case where two "cliques" of RCL servers really don't want to trust one another, they can "agree to disagree", remove each others' members from their UNLs, and amicably diverge, intentionally forking the network. (Server operators who are caught in between would have their server correctly report "no consensus" until they chose which clique to follow.) Ripple's plan is to move beyond this to a stage where UNLs are completely dynamic, too, which starts with the "Dynamic UNL Lite" code that debuted in rippled 0.60.0. This algorithm introduces the idea of a "validator registry" or "publisher" that lists validators. In the past, we also called this same concept an "attestor". 
The idea is that someone (probably Ripple first) publishes a list of validators they think are reliable, and you configure your rippled with a publisher instead of configuring your UNL directly. You can also add multiple publishers to choose from all their lists. (Eventually we may add a score attribute to the list format, too.) Your rippled periodically fetches the latest list and updates its UNL automatically to a subset of the results. It's possible to have multiple publishers supporting the same network, as long as their lists of validators have enough overlap. It might also be possible to have a parallel network by using a publisher with a completely separate list. For example, the Ripple Test Net would have its own registry with a completely separate set of validators that are all on the test net.
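The two rules of thumb above can be sketched roughly like this. (The real analysis is in the consensus whitepaper; the 20%/80% figures here are just the approximations quoted in this post, not exact safety bounds, and the validator names are made up.)

```python
# Rough sketch of the UNL rules of thumb described above.

def unl_overlap(unl_a, unl_b):
    """Fraction of the smaller UNL that the two servers share."""
    a, b = set(unl_a), set(unl_b)
    return len(a & b) / min(len(a), len(b))

def unl_health(unl, malicious):
    """Classify one server's UNL by its malicious fraction."""
    frac = len(set(unl) & set(malicious)) / len(unl)
    if frac <= 0.20:
        return "healthy"       # consensus proceeds normally
    elif frac <= 0.80:
        return "halted"        # enough to stop consensus, not to forge
    else:
        return "compromised"   # enough collusion to confirm fraudulently

# Toy example with 5 validators each:
mine  = ["v1", "v2", "v3", "v4", "v5"]
yours = ["v4", "v5", "v6", "v7", "v8"]
print(unl_overlap(mine, yours))       # 0.4 -- above the ~20% threshold
print(unl_health(mine, ["v5"]))       # "healthy" (1 of 5 is at the 20% edge)
```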
  5. The more usage it gets, the lower the volatility should be. Also, compared to Bitcoin, XRP settles so fast that you're not exposed to the volatility risk nearly as much. That said, it's certainly a concern. If you look at our math on using XRP for currency conversion, we outline a "high-volatility" scenario where XRP is about 2.5× as volatile as the fiat currencies. The nice thing is that you save so much on other costs that it's still cheaper in total to go through XRP.
  6. If you were thinking of brute-forcing XRP secrets, I highly recommend you put that effort into studying public key cryptography. Not only do you have a higher chance of cracking someone's secret key before you die of old age, you're also far more likely to develop useful skills that can earn you a lot of XRP in actual legal income. Yes, it's entirely possible that someone may invent a new algorithm that makes it far easier to decipher private keys from public signatures. (If they do, no money is safe.) Brute-force ain't gonna get you there, though.
  7. Well, their validator is a subdomain of this page...
  8. All accounts are only subject to the current reserve. Maybe you forgot to factor in the additional owner reserve in your testing? Also, XRP allocated to the reserve can be destroyed to pay for transaction costs, so it's not completely out of circulation in the same way that destroyed XRP is. I think a more interesting number is how much XRP is owned by black hole accounts. The total owned by the known black holes is pretty small though (less than 200,000 XRP).
  9. It's hard to stay cool sometimes, but then I remember just how much work we still have to do. But seriously, I'm most excited about the stuff that's open-source anyway so I have less to worry about.
  10. Yes, "Ripple Charts" is now "XRP Charts". I don't actually know why we made the name change, but I can tell you it's official.
  11. That's a consciously-reinforced company culture thing. We're trying to be constructive and build bridges, not disruptive and destructive. We pay attention to other tech, but we mostly spend our effort improving our own and talking about how good ours is. In particular, we make it a point not to trash-talk other crypto-currencies. We'd rather trash talk the completely outdated legacy payment systems banks are using today.
  12. I think Interledger is building up a pretty sweet set of features and tooling to enable micropayments. With the now-robust features to support Interledger stuff in RCL, these are a natural fit. Meanwhile, the W3C and browser developers are working on building the interfaces to request payments right into browsers themselves. And better support from new and existing exchanges has made it far easier to buy XRP. So yeah, I'm not going to be writing a tutorial for how to buy XRP with a credit card (hint: debit/credit payments are reversible but XRP payments aren't, so that's a bad idea for the seller) nor how to set up sites for micropayments just yet... but I think those things will be coming from somewhere in the next few years. It's all part of the bigger picture of "enabling the internet of value", as we say!
  13. Pretty much what T8493 said. If you want the current (latest) orderbook status, you should call a rippled server directly. The Data API mostly serves historical / not-live data. It's especially important with orderbooks that you probably want to work from the in-progress current open ledger, not the most recent validated ledger. Although the data you see that way isn't final, it includes transactions that are likely to become final within the next 3-5 seconds. The Data API mostly does not serve that kind of up-to-the-second in-progress data (it mostly gets its data by importing ledgers that are already closed and completed). I can't directly link you to the API results because the JSON-RPC API needs you to do a POST instead of a GET request, but it's still simple enough to pull such data with a tool like cURL or other HTTP clients. (It's a little harder to do from a browser because of the browser's same-origin policy. Oddly, WebSocket requests are generally exempt from browsers' same-origin policy so you could do that.)
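A sketch of what such a request could look like, assuming rippled's JSON-RPC conventions (`book_offers` method, params in a one-element list). The issuer address is a placeholder, and public server URLs vary, so check the docs for a current endpoint:

```python
import json

# Build the JSON-RPC body for a book_offers call against the in-progress
# open ledger. The issuer address below is a placeholder, not a real gateway.
def book_offers_request(issuer):
    body = {
        "method": "book_offers",
        "params": [{
            "taker_gets": {"currency": "XRP"},
            "taker_pays": {"currency": "USD", "issuer": issuer},
            "ledger_index": "current",  # open ledger, not last validated
        }],
    }
    return json.dumps(body)

payload = book_offers_request("rPLACEHOLDERISSUERADDRESS")
print(payload)
# Then POST it with any HTTP client, e.g.:
#   curl -X POST -d "$PAYLOAD" https://your-rippled-server:51234/
```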
  14. Most Data API methods can return CSV data directly, but I think this one can't. (The docs say that it can, but I think that's a doc bug, which I should fix.) I think RCL transactions are irregular enough that CSV data would be kind of tough to format sanely.
  15. To run a node with a "private" location, you need to run (at least) two rippled servers:
  • A public-facing rippled server whose IP address will be known. Better yet, run a cluster of such servers.
  • A private rippled server whose IP address will not be known. Its node public key will be known, but that's meaningless (rippled servers just generate a random one when they start if they don't have one defined). Its validator public key will also be known, if you configure one. (That's how other servers know that a validation message is from your validator, if they trust you.)
You configure your private rippled server to peer/connect only to your public-facing server(s). You configure your public servers to connect to the network at large and also to your private servers, but not to relay the IP addresses of your private servers to the rest of the network. (That's what peer_private is for.) As an analogy, it's kind of like having a very important person do all their business through signed messages carried by carefully-vetted couriers. Everyone can see that the VIP's messages are genuine because they're signed, and everyone can send messages through the couriers to the VIP, but nobody besides the couriers gets to talk to the VIP directly, so nobody knows where the VIP lives or works or what the VIP looks like. This is Ripple's recommended way of running high-security validators, by the way.
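For the private server's side of that setup, the relevant rippled.cfg stanzas would look something like this sketch. (The hostnames and port are placeholders for your own public-facing servers; 51235 is just the conventional peer port.)

```ini
# rippled.cfg sketch for the *private* validator described above.

# Don't let peers relay this server's IP address to the network.
[peer_private]
1

# Peer only with your own public-facing servers (placeholder hostnames).
[ips_fixed]
public-proxy-1.example.com 51235
public-proxy-2.example.com 51235
```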
  16. To be totally certain, you should combine the tx_hash, the provider, and the offer_sequence. An "exchange" in the Data API is any instance where all or part of an offer was executed, so there are a lot of ways that individual "exchanges" could look alike: A single Payment transaction can flow through several offers for the same or different currency pairs, and an OfferCreate can flow through several offers in the same currency pair (as it digs into the order book), so tx_hash by itself, or tx_hash and buyer/seller/provider/taker, isn't sufficient. Existing offers in the ledger are uniquely identified by the pair of the account that placed them (provider) and the sequence number of the transaction that created them (offer_sequence). So provider/offer_sequence uniquely identifies a single offer, but that offer could be executed in multiple "exchanges" if each only took part of it. The "offer_sequence" is just a number for each account that increments as you send transactions, so it's likely that offers from different accounts have the same offer_sequence occasionally. ("Oh, that was placed by your 100th transaction? What a coincidence, mine too!") So you need to use the provider to distinguish offers taken by the same transaction, placed by different accounts at the same sequence number. Any single transaction can only go through an individual offer a single time, so I think the three-part key is certain to be unique.
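In code, the three-part key could be used like this sketch. (The field names tx_hash, provider, and offer_sequence are the ones discussed above; the record values are made up.)

```python
# Collapse Data API "exchange" records to distinct offers-per-transaction
# using the three-part key (tx_hash, provider, offer_sequence).
def unique_key(exchange):
    return (exchange["tx_hash"], exchange["provider"], exchange["offer_sequence"])

exchanges = [
    # One payment (hash "ABC") flows through three different offers that
    # happen to share the same offer_sequence -- still three distinct offers:
    {"tx_hash": "ABC", "provider": "rAlice", "offer_sequence": 100, "amount": 5},
    {"tx_hash": "ABC", "provider": "rBob",   "offer_sequence": 100, "amount": 3},
    {"tx_hash": "ABC", "provider": "rCarol", "offer_sequence": 100, "amount": 2},
]

seen = {unique_key(x) for x in exchanges}
print(len(seen))  # three distinct keys
```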
  17. That's pretty awful. I have an HP EliteBook! Luckily, the first thing I did with it was format the hard drive and install Arch Linux, so I think I'm OK.
  18. Why the bird analogy and not a water analogy? What size of Ripple Wallet would you like? Thermos, Tub, Pond, Lake, Sea?
  19. Here's an excerpt from the docs that might help you: So just look for a Link header with rel=next and get that URL to get the next page of results in CSV format. Then you just have to find a way to combine the results together, which could be as simple as copy-paste or just adding each new result to the end of the previous one in the CSV file.
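A minimal sketch of pulling the rel=next URL out of that Link header (the example URL is a placeholder; actually fetching each page is left to whatever HTTP client you prefer):

```python
import re

def parse_next(link_header):
    """Return the rel="next" URL from a Link header, or None if absent."""
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>\s*;\s*rel="?next"?', part)
        if match:
            return match.group(1)
    return None

# Placeholder header like the Data API would send with paginated CSV results:
header = '<https://data.example.com/txs?marker=abc123>; rel="next"'
print(parse_next(header))  # the URL to request for the next page
```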
  20. Yep: Set the "Balance" to 8 in the PaymentChannelClaim transaction and the "Amount" to 10. It'll redeem the 8 and leave the 2 for later. (You can use the same signed claim to take the remaining 2 in another PaymentChannelClaim transaction later, if you decide to.)
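As a sketch, the transaction would look something like this. (The account, channel ID, signature, and public key are placeholders, and I'm assuming amounts are given in drops, with 1 XRP = 1,000,000 drops.)

```python
XRP = 1_000_000  # drops per XRP (assumption: amounts specified in drops)

# Partial redemption: claim authorizes 10 XRP, redeem 8 now, leave 2 for later.
claim_tx = {
    "TransactionType": "PaymentChannelClaim",
    "Account": "rPLACEHOLDERACCOUNT",
    "Channel": "PLACEHOLDER_CHANNEL_ID",
    "Balance": str(8 * XRP),   # total to have been delivered after this claim
    "Amount": str(10 * XRP),   # total the signed claim authorizes
    "Signature": "PLACEHOLDER_SIGNATURE",
    "PublicKey": "PLACEHOLDER_PUBLIC_KEY",
}
print(claim_tx["Balance"], claim_tx["Amount"])
```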
  21. I apologize for confusing everyone with my previous example about the transaction ordering. I misunderstood how the current implementation works and I had filled in my gaps in understanding with elements from the previous implementation that did not apply. (Specifically, the part where an account's first transaction was separated from its remaining transactions was a feature of the old algorithm that I had incorrectly assumed was part of the new algorithm.) It was careless of me to post an explanation without confirming my understanding first. I think the discussion we are now having is: is front-running still feasible (in other words: profitable) under the current transaction ordering rules? and secondarily, if so, what should be done about it? Personally, I'm not convinced either way right now. The only way one could definitively prove one way or the other would be to demonstrate a profitable front-running system; saying it can't be done is a devil's proof: we can always think up new techniques that might be viable and each would have to be proven ineffective individually. The lack of evidence (that is, that nobody is doing it currently) is suggestive that front-running isn't profitable, but certainly not proof. So before we go about changing the canonical ordering system at the center of consensus, let us think about the current situation. Donch proved that front-running was viable under the previous system by doing it and documenting how in detail. For the record, the tl;dr of his approach was this: Since transactions were ordered primarily by hash, he would sign the same instructions multiple times to get the lexicographically lowest hash he could, during the gap between ledgers. (It helped that he could use Ed25519 signing, which is pretty fast.) Then he'd only submit the one with the lowest hash value. 
Since nobody else was doing this, he had a high probability of his transaction appearing early in the ledger, where it could immediately consume the cheapest orders that were placed on the books in the previous ledger iteration. Aside: One "solution" that would not have needed changes to the protocol would be for everyone to do the same thing to compete for lexicographically-lowest hash, making it much less profitable to do so. That would be needless complexity for users and a waste of computing power, so we changed the protocol. The new algorithm is different from the old algorithm in the following key ways: Instead of each account's first transaction executing before any account's second or further transactions execute, all transactions from an account now execute as a group. This seems to be the change tulo is concerned about, since now you can probably get two or more transactions in before a "targeted" transaction rather than one. Instead of being sorted by hash, transactions are sorted by "account XOR'd with a pseudo-random value" (the hash of the consensus set, I think?) To head off confusion here: no, different accounts don't have properties that would make them come earlier with any consistency, so you can't really game this.
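A toy illustration of that hash-grinding tactic. (Real transactions are signed with Ed25519 over the serialized transaction; here sha256 over a string with a varying nonce stands in for re-signing the same instructions, which is just a simulation, not the actual protocol.)

```python
import hashlib

def grind_lowest_hash(instructions, attempts=1000):
    """Simulate re-signing the same instructions repeatedly and keeping
    whichever variant has the lexicographically lowest hash."""
    best = None
    for nonce in range(attempts):
        digest = hashlib.sha256(f"{instructions}:{nonce}".encode()).hexdigest()
        if best is None or digest < best[0]:
            best = (digest, nonce)
    return best

digest, nonce = grind_lowest_hash("pay 100 USD for cheapest offer")
print(digest[:8], nonce)  # the lowest-hash variant found in 1000 tries
```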
  22. The "canonical order" is the order the transactions are executed in, when a ledger closes. (Transactions are always executed in canonical order in closed and validated ledgers. In the open ledger, transactions are executed in the order they arrived instead. This is why transactions that tentatively failed can succeed and vice versa.) So if the transactions that make it into a ledger are 5 transactions from different accounts and 6 transactions from tulo, the canonical ordering is something like this:
  • Take tulo's first transaction (by sequence number) and add it to the 5 transactions from different accounts.
  • Sort this set by transaction hash.
  • Start at a pseudo-random point in this set, using the final contents of the ledger as the seed value so everyone picks the same pseudo-random value.
  • Execute the transactions in order from the chosen starting point, looping around to the beginning until you've executed all the transactions in this set.
  • Now take tulo's remaining transactions and sort them by sequence number.
  • Execute those transactions in order from lowest sequence to highest.
Maybe it would make more sense to visualize it like this:
  • Transactions in the ledger: a, b, c, d, e, t1, t2, t3, t4, t5, t6
  • Set of transactions to be executed first: a, b, c, d, e, t1
  • Set of transactions, ordered according to a random starting point (for example, d): d, e, t1, a, b, c
  • Tulo's remaining transactions: t2, t3, t4, t5, t6
  • Canonical order: d, e, t1, a, b, c, t2, t3, t4, t5, t6
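That worked example can be reconstructed in a few lines. (Sorting by label stands in for sorting by transaction hash, and the "pseudo-random" starting point is fixed at d to match the example above, so this is an illustration of the ordering, not the real implementation.)

```python
# 5 transactions from different accounts, plus 6 from tulo (in sequence order).
others = ["a", "b", "c", "d", "e"]
tulos = ["t1", "t2", "t3", "t4", "t5", "t6"]

first_set = sorted(others + tulos[:1])           # tulo's first tx joins the set
start = first_set.index("d")                     # pseudo-random start, per example
rotated = first_set[start:] + first_set[:start]  # loop around from the start point
canonical = rotated + tulos[1:]                  # then tulo's rest, by sequence

print(canonical)
# → ['d', 'e', 't1', 'a', 'b', 'c', 't2', 't3', 't4', 't5', 't6']
```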
  23. Also, just a clarification on Eik's post regarding trying to pack transactions into the same ledger so that one of them executes before the transaction you're front-running: It doesn't help to send multiple transactions from a single address. As I recall, if a sender has multiple transactions with different sequence numbers in a ledger, everything past the first gets bumped to the end of the canonical order (and then sorted by sequence). If the transactions have the same sequence number, you have no control over which one makes it into the ledger (and it increases the chance that none of them will), so you're not increasing your chances that you'll get one in before the targeted transaction. You could do the same thing with separate sending addresses for each transaction without encountering this issue, but:
  • it becomes harder to make sure that only one of them succeeds
  • you're on the hook for funding the reserve of that many more accounts
  24. If the transaction you want to front-run makes it into the open ledger of enough servers, it'll pass consensus and make it into the next validated ledger. At that point, you can hope that the canonical ordering randomly favors you, but that's not reliable. You can't kick a transaction "back into the queue" from the open ledger on one server. You might be able to race it to the open ledger on other servers.
  25. This makes me wonder if you could make an ILP Connector as a contract on Ethereum.