Everything posted by Professor Hantzen

  1. Professor Hantzen


    For those who don't understand what the error means, tecPATH_DRY is roughly equivalent to the error an ATM would give you if you tried to withdraw money using a card linked to an empty account. If an ATM gave you this error, you probably wouldn't remain standing there forever, putting your card in and trying again. One error, maybe two, would be enough to make any rational person stop. From this we can gather that the cause is a badly written and/or abandoned bot, as others have pointed out.
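Continuing the ATM analogy, a sanely written bot would check the engine result and stop resubmitting. A minimal sketch, assuming the result-code classes rippled actually uses (tes = success, tec = applied but failed with the fee claimed, ter = retry later):

```javascript
// Decide whether to retry a submitted transaction based on its engine
// result. tec-class codes (like tecPATH_DRY) mean the transaction was
// applied, failed, and burned its fee - resubmitting the identical
// payment will just fail again. ter-class codes ("retry") may succeed
// later, e.g. terQUEUED.
function shouldRetry(engineResult) {
  if (engineResult.startsWith("tec")) return false; // e.g. tecPATH_DRY
  if (engineResult.startsWith("ter")) return true;  // e.g. terQUEUED
  return false; // tes (success), tem (malformed), tef: don't resubmit either
}

console.log(shouldRetry("tecPATH_DRY")); // false - take the hint and stop
```

A bot that looped on this check instead of blindly resubmitting wouldn't produce the flood of tecPATH_DRY results being discussed here.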
  2. Professor Hantzen

    You can literally taste the salt.

    Hah, nice summary. When I read the subject line of your post, I was 99% sure it was going to be about this exact coindesk article.
  3. https://developers.ripple.com/capacity-planning.html
  4. Professor Hantzen

    How to speed up XRP ledger sync?

    Kind of. You can change ledger_history from "full" to an integer value n, representing how many ledgers back from the current ledger you want rippled to fetch and hold. After it's reached that limit, you can stop rippled, extend the value further, and start it again.

    However, I believe this won't really help. For one, by forcing rippled to a limited range, it may pull slower, as any servers able to offer ledgers outside that range will not be taken advantage of. Further, restarting involves two delays. The first is that rippled has to go through the store it has to date and figure out which ledgers it holds (though in practice it doesn't really know even after this process, during which it just kind of "glances"). This can take anywhere from a few minutes to hours or longer, depending on your machine's storage format and how many ledgers it has. The second delay is that the initial downtime in restarting, plus the time rippled spent tallying its ledgers, is all time it has not been collecting the new ledgers the network closes every 3.5 seconds. So, after the tally, it has to fetch all the ledgers it missed during that time, and only then is it ready to go again.

    So: a big delay introduced by multiple restarts, and probably slower acquisition as well. There may well be another way to force a specific range, as there are various queries you can run, and depending how you do it, something might cause rippled to go off and fetch the ledgers it's missing - I'm not sure. But again, I don't see an advantage, unless there's something I'm missing.
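To make the knob concrete, here's a sketch of the relevant rippled.cfg stanza (section name as in the capacity-planning docs linked above; the number is an arbitrary example, not a recommendation):

```ini
# rippled.cfg (sketch) - keep roughly the most recent 256,000 ledgers
# instead of full history. The value "full" would fetch everything.
[ledger_history]
256000
```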
  5. You mean the 65K? I wondered if they'd rather gloss over it for the sake of the code working the same across ledger instances.
  6. Professor Hantzen

    How to speed up XRP ledger sync?

    SSDs are simply (and unfortunately) the price of entry for running a full-history rippled server at the moment. If you get it working without them, please shout from the rooftops exactly what you did, but also... good luck, seriously.
  7. Professor Hantzen

    How to speed up XRP ledger sync?

    @r3lik You will absolutely have to switch to fast SSD storage for the entire ledger if you want your node to work for any purpose, even just fetching. While RocksDB may appear to work fine at the beginning, you will arrive at all kinds of spurious problems as your stored history increases. And yes, it will likely take at least 6 months to get the full store, no matter how you configure anything. Unfortunately, there is as yet no deterministic, canonical data storage format for ledger history. Until there is, we all share this problem.
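For reference, switching the backend means changing the [node_db] stanza; a sketch along the lines of the capacity-planning docs (the path is an example):

```ini
# rippled.cfg (sketch) - NuDB assumes SSD storage and copes far better
# with a large or full-history store than RocksDB, which was designed
# with spinning disks in mind.
[node_db]
type=NuDB
path=/var/lib/rippled/db/nudb
```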
  8. Professor Hantzen

    Bitstamp XRP withdrawals not working?

    Put the Bitstamp XRP deposit/withdrawal address into an XRP account tracker and check the most recent transactions. If other users' transactions are continuing to go through, it's likely just your account. E.g. https://ledger.exposed/tx-flow/rDsbeomae4FXwgQTJp9Rs64Qg9vDiTCdBv shows that both XRP deposits and withdrawals were working at the time of writing, and had been for the past few hours.
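For anyone who'd rather script that check than use a tracker site, a minimal node.js sketch, assuming the standard rippled account_tx request/response shape (send the request object over a WebSocket to a public server such as wss://s1.ripple.com):

```javascript
// Build an account_tx request for the most recent transactions on an
// account, plus a helper that checks whether any Payment in the
// returned batch actually succeeded.
const accountTxRequest = {
  id: 1,
  command: "account_tx",
  account: "rDsbeomae4FXwgQTJp9Rs64Qg9vDiTCdBv", // Bitstamp's XRP address
  limit: 10,
  ledger_index_min: -1, // earliest ledger the server has
  ledger_index_max: -1, // most recent validated ledger
};

// "transactions" is result.transactions from the account_tx reply.
function hasRecentSuccessfulPayment(transactions) {
  return transactions.some(
    ({ tx, meta }) =>
      tx.TransactionType === "Payment" &&
      meta.TransactionResult === "tesSUCCESS"
  );
}
```

If the helper comes back true for the last few minutes of activity, the exchange's hot wallet is moving funds and the problem is probably on your side.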
  9. Looks like they do: https://github.com/stellar/stellar-core/tree/dd7f38dba232e20ae75a95641857c0b7ed0252e1/src/invariant Seems more thorough than Ripple's equivalent - which is perhaps simultaneously worrying and reassuring. I can't tell at a glance if invariant failures are recorded on-chain or not. XRPL ultimately throws invariant failures on the ledger, using "tecINVARIANT_FAILED", a code I hope we never see.
  10. Thanks! Great source. He's SVP of Product, so he really should know the terminology they're using internally. Actually, this concerns me slightly, as I had the impression these pilots were ongoing, but it sounds almost as if they've wound down - though that could be my misinterpretation. It's just concerning that it sounds like the pilots came and went and the product now requires more work. Ripple does have a history of pivoting on major issues, which I don't see as a bad thing (Apple does too, and they're not doing too badly). But it does make me wonder if xRapid is not as ready as everyone hopes. Maybe it's just one of those things - someone asked them a question, and so they used the same terminology as the questioner in response. Well, so long as someone at some point starts using some XRP to move their customers' money around and keeps on doing it, I guess we can all agree it's "live", "in production", and, if we must... "released".
  11. Do you have a source for that? I found many bloggers / crypto "media" websites using the term, but could only find one "official" source, in which Sagar Sarbhai stated a couple of banks would "go live" with xRapid by the end of the year. As far as I can tell, that is indeed what happened, if he meant the ones that subsequently began using the beta in production. I guess from this, "going live" to Ripple people is analogous to using xRapid "in production". Or, without any terminology: using xRapid to move real customer funds around.
  12. @Busa_ https://ripple.com/insights/first-pilot-results-for-xrapid/ There are also examples on this forum of people trying to track the transactions related to the above, as well as statements from Ripple employees describing exactly how xRapid is in operation at this time. @mikkelhviid Indeed, it would appear so. To me, "production use" means real clients' actual money is being moved around, and that has definitely been happening throughout this year, no matter the development-cycle designation given to the software used to do it. To anyone confused by this, let's get clear: if a new piece of video editing software came out in a beta version, and Pixar used it to edit their latest feature film, then that beta version will have been used in production. In general, there is a difference between the terms we use to describe the developmental state of a piece of software and the terms we use to describe what people use the software for. We need two sets of terms because the concepts they represent are separate and distinct, and can be and are discussed without any necessary dependence upon one another. I can say "Vidiot v1.0 is being used in production!" or I can say "Vidiot v1.0 has moved from beta into full release!" and the two sentences describe different concepts, just as they would (should...) for xRapid.
  13. xRapid is in production use *right now*, and has been for several months, built on whatever previous rippled versions were in use at that time. That being the case, it follows that its current use is not dependent on any future upgrades to rippled. As @lucky has said, however, we do not know what future features are planned for xRapid, or how these may or may not be affected by future rippled upgrades. But even including that caveat, the subject title of this thread is unintentionally misleading, as it is based on a misunderstanding of the present, known situation and the relationship between xRapid and rippled.
  14. #1: Speed & throughput. PoW approaches currently in demonstrable widespread use (Ethereum, Bitcoin, etc.) typically have block times ranging from 10-15 seconds up to 10-15 minutes, and can only handle a comparatively tiny number of transactions per second versus consensus/BFT systems with similar uptake. #2: One of the reasons for the creation of the original Ripple network and XRP was specifically to come up with something less wasteful (more environmentally friendly) than PoW. As well as being terribly slow, PoW throws away vast computing resources and electricity by comparison. I'm not sure of the specific differences between PBFT and the consensus mechanism in use on the XRP Ledger, but AFAIK they involve similar tradeoffs in terms of agreement. Finally, as I understand it, an upgrade to the consensus protocol, called "Cobalt", plans to address some issues regarding agreement distributions (though maybe not the one you are concerned with here). I'm interested to know - why do you think this 20% matters? Can you foresee a situation where it could cause a problem? Each node has its own UNL, so if problems with specific nodes keep cropping up, they can simply be cut out of the consensus process to bring the network back into agreement were it to halt for a while due to malicious collusion.
  15. Professor Hantzen

    New Consensus Articles

    Thanks for those links, glad to see this is being pursued. I can understand intuitively the rationale for not publishing UNLs - what are some specific reasons, in your opinion? Given that the main issue in publishing UNLs is establishing how decentralised the network is by examining the state of the topology, and not necessarily the identity of the particular parties doing the publishing, I wonder if it might be possible to employ some kind of anonymising to expose UNLs in a way where it is reliably known that a particular validator is publishing its UNL, but not which validator it is. It may be a difficult problem, but I wonder if enough tools already exist on the XRP Ledger to put this together with minimal extra effort. E.g.: 1) Somehow employ the key(s) of each validator to provide a unique anonymous ID for that validator. 2) The validator then creates a transaction from a known public account with a memo containing its UNL, signed with the above unique ID. If there could be some way to do 1), such that it can be proved that the public key of that validator is included in a given set of validators' public keys, but not which validator it is, maybe this could work. A threshold of agreement could be determined beforehand, and at regular intervals all validators presently meeting that threshold could be published on-chain via a memo (or memos) and thus invited to register their UNLs anonymously in response. Doing so before the next interval would result in inclusion in that interval's set of anonymised UNL topology results. From this it would be possible to glean a set of percentages regarding the frequency with which particular nodes are included in other nodes' UNLs, in a way that is anonymous but also definitive (assuming the supplied UNLs can be trusted...).
    As for validators lying about their UNLs, I wonder if something could be employed internally in rippled such that each time a validator changes its UNL, something deterministic is exposed to the network that can later be used for a collective audit, again without exposing particular parties or which UNL belongs to whom. Maybe it's not possible, but I'd hope there could be an arrangement that uses the trade-off of knowing how many validators are lying, but not which ones they are. This could at least provide a percentage figure for the integrity of the UNL results. I would suggest that if it turns out it's really not possible to ensure the integrity of any reported UNL topology, there's little point bothering to create one. In that case, however, Ripple should explain clearly that the level of decentralisation cannot be determined.
  16. Professor Hantzen

    New Consensus Articles

    Hopefully I've just missed something here. In support of @JoelKatz's article and its goal of illustrating the decentralised nature of the XRP Ledger, isn't what we really need to see a map of actual UNL relationships, how they've evolved over time, and the nature of transaction submissions and validations? For example, it wouldn't support decentralisation to have a balance of 150 independent validators vs 10 Ripple-run validators if all 150 populated their UNLs with 10 entries consisting only of the Ripple-run validators. (Freedom to decentralise is presumably also freedom to centralise.) Or if each of the 150 had only one entry for a Ripple-run validator, but those Ripple-run validators always "led the pack" in validating ledgers while the independent nodes continually lagged and disagreed. Would it be fair to say that the potential for decentralisation has been well elucidated and explained, but the real-world trust relationships and how they function in practice - and therefore the actual realised level of decentralisation - have not been revealed? I'm not familiar with an API call that allows probing of a remote validator's UNL; if there is one, I guess it would be easy to at least map that out. I'm also not sure how to accurately account for disagreement over time. For example, if disagreement only happened at infrequent busy times, an overly simple metric might artificially inflate recorded agreement figures. Given the XRP Ledger's unique properties and the subtleties involved in honouring them, does a reliable way to measure its decentralisation even presently exist?
  17. I often have to check the balances on a few different accounts. To make that easier, I wrote this node.js script that checks multiple Ripple account balances (XRP and IOUs) over WebSocket. Posting in case it's useful to anyone.
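The core of such a script is small; a sketch of the request-building and conversion side, assuming the standard account_info request shape (the account addresses here are placeholders, and the XRP balance in account_info comes back in drops - 1 XRP = 1,000,000 drops):

```javascript
// Build one account_info request per account. XRP balances arrive in
// drops, so convert before displaying. (IOU balances would come from a
// separate account_lines request per account.)
function accountInfoRequest(account, id) {
  return { id, command: "account_info", account, ledger_index: "validated" };
}

function dropsToXrp(drops) {
  return Number(drops) / 1000000;
}

// Placeholder addresses - substitute your own accounts.
const accounts = ["rAccountOnePlaceholder", "rAccountTwoPlaceholder"];
const requests = accounts.map((account, i) => accountInfoRequest(account, i + 1));

console.log(dropsToXrp("20000000")); // 20
```

Send each request over a WebSocket connection to a rippled server and match replies to requests by the `id` field.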
  18. Professor Hantzen

    Q2 2018 XRP market reports

    To be clear - the important part of the sales did not drop; it went up, if only slightly. Further, the decline in USD of the less important part of the sales looks positive to me.

    Regarding the direct sales, these rose slightly, from $16.60m to $16.87m. These are sales of XRP direct to new clients, so it's good news they've sold as much in USD as in the previous quarter. Also, given that the market price of XRP was significantly lower this quarter than last, yet they sold the same dollar value, clients may have purchased a lot more XRP this quarter (up to six times as much going by the price extremes, or at least twice as much). We know they're taking on more clients from Ripple's statements, but it's good to see it reflected in the figures and to get a hint of by how much.

    Programmatic sales are lower in USD, yes. However, in XRP they are in some sense higher when taken as a percentage of total traded volume. I.e., as a percentage of total traded volume, more XRP was sold this quarter than last. As these sales are bound by exchange-traded volume, to me that is a relevant consideration (though I can understand the argument that it isn't). Anyway, these sales are on-exchange and for the purpose of funding Ripple operations, so the USD side - the amount Ripple ends up with after selling - tells a more direct story. On-exchange sales dropping when the price is low is positive (to me, at least) for two reasons. Firstly, Ripple clearly believes XRP will appreciate, and so isn't cashing out as much as when XRP's price was higher. Secondly, despite Ripple's high rate of expansion, they can clearly afford the luxury of waiting, which must mean they are balancing their books well and have enough operational cash.
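To make the arithmetic explicit (the prices below are hypothetical round numbers for illustration, not the actual quarterly extremes): at a constant USD sales figure, the XRP handed over scales inversely with price.

```javascript
// Same USD sales at a lower market price means proportionally more
// XRP changed hands.
function xrpSold(usdSales, xrpPrice) {
  return usdSales / xrpPrice;
}

xrpSold(16.87e6, 0.90); // ~18.7m XRP at a hypothetical $0.90
xrpSold(16.87e6, 0.45); // ~37.5m XRP at $0.45 - half the price, double the XRP
```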
  19. Another angle on this - it actually looked like she couldn't hear things very well. She seemed to be having significant trouble with her earpiece, but no one doing tech for the event was picking up on that or sorting it out. This often happens at these events - everyone is wearing radio mics, but with the number of devices, wifi, and other radio frequencies in use, they can sometimes perform very badly. It's enough to throw you off even if you've experienced it many times, but as a first-timer I imagine it would be particularly difficult. Considering the amount of ambient noise coming in through those headset mics, which typically have an extremely localised pickup (they reject most ambient noise unless it's extremely loud), the noise in that room must have been huge. We're only hearing the mixed feed from the mics, so it may seem quiet to us when watching the video. So you have:
    - (Maybe) first-time-on-stage nerves
    - The presenter seeming to initially grill you, singling you out for difficult questions
    - Not being able to hear the questions clearly, probably involving drop-outs mid-sentence, volume too low, and latency (the presenter speaking but Cassie hearing it much later, which can be off-putting - seeing someone's lips move and the voice arriving in your ear much later)
    And crucially: no one stepping in to help with any of that. Sure, it could also be that she's not so good on stage under pressure, but given those three things happening simultaneously, I'd give her another chance, and lay some of the blame on the sound tech people at the event.
  20. Professor Hantzen

    Is Miguel Vias still working for Ripple?

    His most recent Ripple-related tweet was on June 18th, not long ago, and there are many immediately before that. If archive.org doesn't show him on the leadership page since May 9th, it means he hasn't been on that page for months (if he ever was?), yet has still been completely active in his role as Director of XRP Markets whilst not on that page. So I wouldn't be alarmed... yet.
  21. Probably just some new bot started up. Transactions are so cheap that the actions of a single player can radically alter XRP Ledger statistics.
  22. Professor Hantzen

    What Is Going On With Ripple and Coinbase?

    A moonbase, to some.
  23. Professor Hantzen

    Where can i download ripple linux 64 bit daemon?

  24. No problem. The reason it's done this way is presumably for speed within rippled when processing account information. In C++ - the language rippled is primarily written in, and one generally used for projects requiring fast run-time - this is a very typical construct for storing this kind of information. If you come across code that reduces things to bits and requires bitwise operators, it is almost always a speed optimisation. (By taking something a human readily understands and storing it in a manner a computer readily "understands".)
  25. These are bit flags. They look weird as decimal values, but as binary values they make a lot more sense, e.g., a few are "on" and the rest are "off". So 194 might appear to contain little immediately useful information, but its binary equivalent %11000010 shows which three flags are switched on. So, in JS you need to do this kind of thing:

        var containsFlag = function(number, flag) {
          return (number & flag) === flag;
        };

        containsFlag(129, 128); // true
        containsFlag(81, 128);  // false

    (Source: this Stack Overflow question.) The difference in the case of Ripple account flags is that they are all presently higher-order bits, but the principle is the same - you just need to shift the bits to the right. (Note that if more flags were enabled in the lower order, this would change and you'd have to adjust your code accordingly.) Thus, if you saw an account with its flags set to 17891328, you'd just need to:

        (17891328 >>> 16).toString(2) // '100010001'

    to reveal the flags. In this case, lsfDepositAuth, lsfDisableMaster and lsfPasswordSpent, as per this list. (If you order the available flags in such a list by descending decimal value, they will correspond to the bits resulting from the above piece of code when read from left to right.)
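To tie it together, a small sketch that names the set bits directly. The hex values are taken from the XRPL AccountRoot flag list (only a subset is shown here; check the docs for the full set):

```javascript
// Map XRPL AccountRoot flag bits to their lsf* names.
const ACCOUNT_FLAGS = {
  lsfPasswordSpent:  0x00010000,
  lsfRequireDestTag: 0x00020000,
  lsfRequireAuth:    0x00040000,
  lsfDisallowXRP:    0x00080000,
  lsfDisableMaster:  0x00100000,
  lsfNoFreeze:       0x00200000,
  lsfGlobalFreeze:   0x00400000,
  lsfDefaultRipple:  0x00800000,
  lsfDepositAuth:    0x01000000,
};

// Return the names of every flag set in the given Flags field.
function decodeFlags(flags) {
  return Object.keys(ACCOUNT_FLAGS).filter(
    (name) => (flags & ACCOUNT_FLAGS[name]) === ACCOUNT_FLAGS[name]
  );
}

console.log(decodeFlags(17891328));
// → [ 'lsfPasswordSpent', 'lsfDisableMaster', 'lsfDepositAuth' ]
```

This gives the same answer as the bit-shifting trick above, but without having to read binary by eye.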