Sukrim

Member
  • Content count: 223
1 Follower

About Sukrim

  • Rank: Advanced Member
  1. I would consider them to be random enough, but you can probably even calculate them for every ledger so far and do the statistical analysis yourself (a rough sketch of such a check follows after this list).
  2. Someone offering money to traders would probably offer it with several parameters (see the sketch of such an offer structure after this list):
     • Minimum collateral (e.g. the 20% on Poloniex)
     • Maximum margin allowed (to ensure that the minimum collateral is enough)
     • Repayment date of the full funds
     • Interest per unit of time
     • Optionally an array of allowed or banned currencies/issuers to trade against, so you can ensure that this will only be traded in liquid markets, or not in markets you are not legally allowed to engage in for whatever reason.
     An existing risk is that anyone observing this can calculate margin cascade points (the price at which someone is forced to sell or buy, pushing the price further in that direction and forcing someone else to sell or buy, and so on) and (ab-)use them. It also allows for timing markets, since it is clear by which point in time a position has to be closed, and many people will wait until the last second. It might also be difficult to prevent redemption of IOUs by borrowers; alternatively, it would be hard to ensure liquid markets if IOU-IOUs get transferred instead and the original ones stay with the lenders. I suspect there might be a way to do this using some custom trust lines and rippling between lender and borrower, but I haven't thought about this use case in years, since smart contracts would have required Codius first anyway.
  3. Ok, please tell me exactly which of the accounts in ledger 32570 belong to: OpenCoin / Chris / Jed / Arthur / someone else. There are only a few dozen accounts anyways, and apparently you don't need the missing ledgers to figure out what's what...
  4. An example might be a transaction arriving very late in the consensus process, where it is easier to let it sit for a few hundred milliseconds instead of doing another full round of consensus.
  5. I would not use or install a browser on something that is dedicated to holding private keys and signing transactions...
  6. They are something similar to a shitty version of a Ripple gateway without Ripple (which helped them a bit, since the name does not exactly evoke lots of pleasant comments in the BTC space) and with as little actual "money" aspect as legally possible.
  7. There is no way to access inner nodes via any rippled API at the moment, so you basically either need to calculate them from full ledger dumps or do raw database calls. Also, as you said, there is no way to conveniently import data, and if I share stuff, I prefer to do it in a reproducible way (e.g. by sorting all nodes by their key; see the sketch after this list). I would share via IPFS rather than BitTorrent, but that's a minor detail compared to these two larger issues.
     I was already working on doing it this way in general, however I ran into the issue that getting a full ledger dump (needed to calculate the inner nodes) definitely takes longer than a ledger close interval. This is a non-starter, so I focused on writing Python bindings for NuDB and/or improving external tooling there to directly interface with the database. Unfortunately the author of that database left Ripple Inc. recently and also didn't really seem too eager to improve code or documentation, or even accept external contributions.
     If I just wanted to offer the database as I have it, I could simply create a ~4 TB torrent file and seed away - however I suspect that I still have some missing nodes (the server stalls suspiciously long around a certain old ledger when restarting), there might be some garbage data in there, and it is completely unverified at import time as far as I understand the process. Also it would get outdated quickly and be nearly impossible to update down the road.
     If you urgently need full history, I can offer to buy a 6-8 TB HDD on Amazon, pop it in, copy the NuDB file I use onto it and ship it to you via snail mail. I would charge 1 BTC in total for the hardware and time though, and offer no support or warranties beyond dropping the HDD off at the post office.
  8. I know; from time to time I'm still thinking about how to improve historic data sharing in a secure way for everyone involved (so it is hard to feed someone spoofed data, but also easy to share yourself and to import).
  9. Well, if you get 200 or 400 IOPS from your HDD, RAID won't make much difference. You can test your database performance for NuDB and RocksDB with https://github.com/vinniefalco/NuDB/tree/master/bench (a cruder stand-alone random-read check is also sketched after this list).
  10. You're cutting your capacity in half to mirror easily available public data, and that is a reason for using RAID 1+0? Anyways, as I said, you'll need to use a bunch of SSDs; otherwise a single HDD is likely already enough capacity to reach the point where RocksDB no longer gives any kind of useful performance. It might have been improved since I last tried it, but once NuDB landed, I didn't look back.
  11. Don't use NuDB on spinning disks; RocksDB should work but had issues with lots of history. Imagine querying a full ledger - this means going through a few hundred thousand leaf nodes as well as all their SHAmap inner tree nodes. Each of them sits at a random position in the data set and requires a random read; at the roughly 100-200 random reads per second a spinning disk can manage, a single full-ledger query already takes on the order of half an hour. Nothing that any spinning disk could reasonably handle. I don't get why you'd want to use RAID, but that's up to you to decide.
  12. I would rather try to generate as many transactions as possible/profitable and hope that they get ordered in front of the target transaction. One issue is that by doing so, this also changes the ledger state and thus the random seed - so it might not be that useful to rely on heavy computations (which used to be possible before - just create a transaction with a smaller txid). It might be a better idea to just spam a lot of transactions and make as sure as possible that they will fail if they get scheduled AFTER the target transaction (a rough sketch follows after this list). E.g. say there is a chance for a 1 USD profit: you submit 100 transactions that either take this profit if they get scheduled before your target, or fail completely if they get scheduled after. Then it just matters how likely it is to actually get the money and how much you pay in fees for the failing transactions. As long as this is still profitable, it would still make sense to try to front-run. The transactions would likely need to be created and submitted very close to the 5 servers that matter, so I recommend scoping out Ripple Inc.'s preferred data centers etc. to get a good guess about their validator locations.
  13. Something relatively easy would be a script that exports transaction data from rippled (e.g. by entering a single address) to Beancount/Ledger-cli format (http://ledger-cli.org/); a minimal sketch follows after this list. I'm not so sure what people with minimal tech skills would do though, maybe run a rippled validator for fun or start their own gateway? Maybe moving away from Poloniex and other off-ledger exchanges would already be some success...
  14. The bootstrapped topology seems too easy to game with secretly colluding nodes, there's a reason why UNL selection is currently manual and in the future maaaybe half-automated.
  15. I hope Snappy and LZ4 get some love too! You seem to have updated only docca and NuDB though; I doubt that rippled uses the RocksDB that is included in NuDB, right? As far as I understand it, that is only used for benchmarks anyways (which to me seems VERY wasteful to include in a small database repository).
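
Sketch for post 1: a minimal way to run that statistical check yourself, assuming a public rippled JSON-RPC endpoint at s1.ripple.com:51234 and an arbitrary 200-ledger sample window (both are assumptions; point it at your own server and range). It runs a simple chi-square test over the hex-digit frequencies of the validated ledger hashes; for 15 degrees of freedom, values far above ~25 would be suspicious at the 5% level.

```python
# Pull a range of validated ledger hashes and chi-square test their hex digits.
import json
from collections import Counter
from urllib.request import urlopen, Request

RIPPLED_URL = "https://s1.ripple.com:51234/"  # assumed public endpoint

def ledger_hash(index):
    payload = json.dumps({"method": "ledger",
                          "params": [{"ledger_index": index}]}).encode()
    req = Request(RIPPLED_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    result = json.load(urlopen(req))["result"]
    # Newer rippled versions report the hash at the top level,
    # older ones inside the "ledger" object.
    return result.get("ledger_hash") or result["ledger"]["ledger_hash"]

def chi_square(hashes):
    counts = Counter(ch for h in hashes for ch in h.lower())
    total = sum(counts.values())
    expected = total / 16
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in "0123456789abcdef")

if __name__ == "__main__":
    start = 80_000_000  # arbitrary sample window, pick your own
    hashes = [ledger_hash(i) for i in range(start, start + 200)]
    print(f"chi-square over {len(hashes)} hashes: {chi_square(hashes):.1f}")
```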
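
Sketch for post 2: one way the offer parameters could be laid out, as a plain data structure. All names are hypothetical; nothing like this exists on the ledger, it just restates the list above in code.

```python
# Hypothetical container for the lending-offer parameters listed in post 2.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MarginLendingOffer:
    lender: str                    # account offering the funds
    principal: str                 # amount lent, e.g. "5000 USD (rIssuer...)"
    min_collateral_pct: float      # e.g. 20.0, as on Poloniex
    max_margin: float              # cap so the collateral always covers losses
    repayment_date: datetime       # point by which the full funds are due
    interest_per_day: float        # interest per unit of time
    allowed_pairs: list[str] = field(default_factory=list)  # whitelist
    banned_pairs: list[str] = field(default_factory=list)   # blacklist

    def pair_allowed(self, pair: str) -> bool:
        """Restrict trading to liquid / legally permitted markets."""
        if self.allowed_pairs and pair not in self.allowed_pairs:
            return False
        return pair not in self.banned_pairs
```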
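
Sketch for post 7: the leaf nodes of a single ledger can at least be exported reproducibly through the public ledger_data call and sorted by key; inner SHAmap nodes are not reachable this way, and, as said above, a full dump takes far longer than one ledger close interval. The endpoint URL is an assumption (a local rippled on its default JSON-RPC port).

```python
# Reproducible per-ledger export of leaf nodes via ledger_data, sorted by key.
import json
from urllib.request import urlopen, Request

RIPPLED_URL = "http://localhost:5005/"  # assumed local rippled JSON-RPC port

def rpc(method, **params):
    payload = json.dumps({"method": method, "params": [params]}).encode()
    req = Request(RIPPLED_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    return json.load(urlopen(req))["result"]

def dump_ledger_state():
    """Yield (key, blob) pairs for every leaf node of one validated ledger."""
    marker = None
    ledger_index = "validated"
    while True:
        params = {"ledger_index": ledger_index, "binary": True, "limit": 2048}
        if marker is not None:
            params["marker"] = marker
        result = rpc("ledger_data", **params)
        ledger_index = result["ledger_index"]  # pin all pages to one ledger
        for entry in result["state"]:
            yield entry["index"], entry["data"]
        marker = result.get("marker")
        if marker is None:
            break

if __name__ == "__main__":
    for key, blob in sorted(dump_ledger_state()):
        print(key, blob)
```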
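
Sketch for post 9: a very crude stand-alone random-read check (Unix only), pointed at any large existing file on the disk in question, e.g. the NuDB data file. The OS page cache will inflate the result unless the file is much larger than RAM, so treat the number as optimistic; the NuDB bench tool linked above is the real test.

```python
# Crude random-read throughput check: random 4 KiB reads from a large file.
import os
import random
import sys
import time

def random_read_iops(path, reads=2000, block=4096):
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(reads):
            offset = random.randrange(0, max(size - block, 1))
            os.pread(fd, block, offset)  # positional read, no seek needed
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return reads / elapsed

if __name__ == "__main__":
    print(f"{random_read_iops(sys.argv[1]):.0f} random 4 KiB reads per second")
```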
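
Sketch for post 12: the "submit many attempts that only do something if they land before the target" idea, expressed as fill-or-kill OfferCreate transactions pushed through rippled's sign-and-submit mode. The account, secret, issuer, amounts and endpoint URL are placeholders, and in practice you would pre-sign with explicit consecutive Sequence numbers instead of relying on autofill; whether any of this is actually profitable is exactly the open question above.

```python
# Burst of fill-or-kill offers: each either takes the targeted profit or dies.
import json
from urllib.request import urlopen, Request

RIPPLED_URL = "http://localhost:5005/"  # assumed rippled JSON-RPC endpoint
ACCOUNT = "rEXAMPLE..."                 # placeholder account
SECRET = "s..."                         # placeholder secret (sign-and-submit)
TF_FILL_OR_KILL = 0x00040000            # offer is killed unless it fills fully

def submit_offer(sell_xrp_drops, buy_usd, issuer):
    tx = {
        "TransactionType": "OfferCreate",
        "Account": ACCOUNT,
        # Sell XRP (TakerGets = what we give up) to buy USD (TakerPays = what
        # we want). If the mispriced counter-offer is already gone because we
        # were scheduled after the target, fill-or-kill means we only lose the fee.
        "TakerGets": sell_xrp_drops,
        "TakerPays": {"currency": "USD", "issuer": issuer, "value": buy_usd},
        "Flags": TF_FILL_OR_KILL,
        "Fee": "12",
    }
    payload = json.dumps({"method": "submit",
                          "params": [{"tx_json": tx, "secret": SECRET}]}).encode()
    req = Request(RIPPLED_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    result = json.load(urlopen(req))["result"]
    return result.get("engine_result", result.get("error"))

if __name__ == "__main__":
    # Fire a burst of identical attempts; at most one of them can actually fill.
    for _ in range(100):
        print(submit_offer("1000000", "1", "rISSUER..."))
```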
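
Sketch for post 13: a minimal version of that export script. It only handles plain XRP Payment transactions from the public account_tx call; the endpoint URL, the placeholder address and the posting account names are assumptions, and IOU amounts, fees and anything Beancount-specific are left out.

```python
# Export an account's XRP payments from account_tx as Ledger-cli entries.
import json
from datetime import datetime, timedelta, timezone
from urllib.request import urlopen, Request

RIPPLED_URL = "https://s1.ripple.com:51234/"  # assumed public endpoint
RIPPLE_EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)

def account_tx(address, limit=200):
    payload = json.dumps({"method": "account_tx",
                          "params": [{"account": address,
                                      "ledger_index_min": -1,
                                      "ledger_index_max": -1,
                                      "limit": limit}]}).encode()
    req = Request(RIPPLED_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    return json.load(urlopen(req))["result"]["transactions"]

def to_ledger_cli(address):
    entries = []
    for item in account_tx(address):
        tx = item["tx"]
        # Only plain XRP payments (Amount given as a drops string) for now.
        if tx.get("TransactionType") != "Payment" or not isinstance(tx.get("Amount"), str):
            continue
        when = RIPPLE_EPOCH + timedelta(seconds=tx["date"])
        xrp = int(tx["Amount"]) / 1_000_000  # drops -> XRP
        entries.append(
            f"{when:%Y/%m/%d} XRP payment {tx['Account']} -> {tx['Destination']}\n"
            f"    Assets:Ripple:{tx['Destination']}    {xrp:.6f} XRP\n"
            f"    Assets:Ripple:{tx['Account']}\n")
    return "\n".join(entries)

if __name__ == "__main__":
    print(to_ledger_cli("rEXAMPLE..."))  # placeholder address
```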