
Professor Hantzen

Bronze Member
  • Content Count

    591
  • Joined

  • Last visited

  • Days Won

    1

Professor Hantzen last won the day on February 22 2017

Professor Hantzen had the most liked content!

About Professor Hantzen

  • Rank
    Advanced

Contact Methods

  • Website URL
    https://twitter.com/phantzen

Profile Information

  • Gender
    Male


  1. This is unfortunately an unsolvable kind of chicken-and-egg problem. If there were such a status-checking service, there would likely be times when the exchange you're checking is actually fully operational and working fine, but the service providing the status check is down or reporting incorrectly. You'd then need another service to check on that status reporter, and so on.

     There are "is it down?" websites intended for verifying large-scale, long-term outages of mainstream websites (usually with comments where users can discuss the state of things in their own locale, etc.), and you can use many of them to check whether an exchange is up. But the results are less useful than if you simply sent an API call to the exchange yourself, because that's all the service does from its own physical location. This kind of website is less accurate from your point of view, since it isn't necessarily checking from the same physical location your code runs from, and any result reaches you later because it involves an extra network call from your code's perspective. (A minimal example of checking directly is sketched below.)

     It's also a tough problem in general - how long does an outage have to last before the status gets updated? For example, if the status service does a check every five seconds and it takes 10ms to make the check, it could report its results every five seconds. But those results would only technically be "accurate" for the 10ms the check took, i.e. 1/500th of the time. What if the outages of the website in question happen every 5 seconds, last 4 seconds each, and stay in continual sync with the check? The service would consistently report the website as "up", when in reality an API user would have a 4/5 chance of getting 520'd.
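A minimal sketch of the "just call the exchange yourself" approach, for comparison. The ping URL and the "HTTP 200 means up" assumption are placeholders - substitute whatever health or ping endpoint the exchange you care about actually documents:

```python
# Check an exchange's availability from your own location, rather than
# relying on a third-party status page.  The URL below is a placeholder.
import time
import urllib.request

def check_exchange(url="https://api.example-exchange.com/v1/ping", timeout=5):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, time.monotonic() - start
    except Exception:  # DNS failure, timeout, 5xx/520, etc.
        return False, time.monotonic() - start

up, latency = check_exchange()
print(f"up={up}, measured from here in {latency:.3f}s")
```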
  2. It's calculated here: https://github.com/ripple/rippled/blob/fa57859477441b60914e6239382c6fba286a0c26/src/ripple/protocol/impl/PublicKey.cpp#L306:L319

     With this: https://github.com/ripple/rippled/blob/5214b3c1b09749420ed0682a2e49fbbd759741e5/src/ripple/protocol/digest.h#L121:L180 (which uses what I would assume to be equivalent implementations).

     So it does indeed look like you SHA256 the public key, then RIPEMD160 the result. Maybe test your hashing code with known inputs and outputs for both the SHA256 and the RIPEMD160 (a quick sketch follows below)? Then you can narrow down whether it's your code, or what you're supplying to it.
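A minimal sketch of that derivation in Python, including the "known inputs and outputs" sanity checks. The expected values are the standard published test vectors for each algorithm; note that ripemd160 support in hashlib depends on the underlying OpenSSL build:

```python
# hash160(pubkey) = RIPEMD160(SHA256(pubkey)), the derivation the linked
# rippled code appears to perform.
import hashlib

def account_id_from_pubkey(pubkey: bytes) -> bytes:
    sha = hashlib.sha256(pubkey).digest()
    return hashlib.new("ripemd160", sha).digest()

# Check each primitive against its published test vector first, so you can
# tell whether the problem is the hashing itself or what you feed into it.
assert hashlib.sha256(b"abc").hexdigest() == \
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
assert hashlib.new("ripemd160", b"abc").hexdigest() == \
    "8eb208f7e05d987a9b044a8e98c6b087f15a0bfc"
```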
  3. If a submitted transaction consumed a fee, it will appear on the ledger regardless of whether the transaction actually succeeded or failed to achieve its intended result. Note that a response of "tesSUCCESS" means the transaction was successfully *submitted*. It does not mean it was successfully *applied*, or that it can be expected to necessarily appear in any ledger, ever.

     The answer to your questions may be to search for all transactions from the account in question. E.g., with the data API: https://data.ripple.com/v2/accounts/r9cZA1mLK5R5Am25ArfXFmqgNwjZgnfk59/balance_changes?descending=true - put the account in question in place of "r9cZA1mLK5R5Am25ArfXFmqgNwjZgnfk59". "descending=true" means more recent transactions will be shown first.

     Another way is to use the websocket tool to directly query Ripple's public full-history server. Again, replace the account in question, and in this case "forward": false means the most recent transactions will be listed first (click "Send request"): https://xrpl.org/websocket-api-tool.html?server=wss%3A%2F%2Fs2.ripple.com%2F&req={"id"%3A2%2C"command"%3A"account_tx"%2C"account"%3A"r9cZA1mLK5R5Am25ArfXFmqgNwjZgnfk59"%2C"ledger_index_min"%3A-1%2C"ledger_index_max"%3A-1%2C"binary"%3Afalse%2C"limit"%3A2%2C"forward"%3Afalse}

     This approach produces significantly less-readable output, but the returned results will be about as accurate as is possible. You can change the number of transactions returned with the "limit" parameter. (The same query can also be scripted; see the sketch below.)
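If you would rather script the second approach than use the websocket tool, here is a rough sketch of the same account_tx query over rippled's JSON-RPC interface. It assumes s2.ripple.com still answers JSON-RPC on port 51234 and uses the third-party `requests` package:

```python
# Same account_tx query as the websocket tool above, via JSON-RPC.
import requests

payload = {
    "method": "account_tx",
    "params": [{
        "account": "r9cZA1mLK5R5Am25ArfXFmqgNwjZgnfk59",  # replace with the account in question
        "ledger_index_min": -1,   # -1 = earliest ledger available to the server
        "ledger_index_max": -1,   # -1 = most recent validated ledger
        "binary": False,
        "limit": 2,               # how many transactions to return per page
        "forward": False,         # False = most recent transactions first
    }],
}
resp = requests.post("https://s2.ripple.com:51234/", json=payload, timeout=10)
for entry in resp.json()["result"]["transactions"]:
    print(entry["tx"]["hash"], entry["meta"]["TransactionResult"])
```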
  4. Maybe run ls -lR /tmp/full_history_dmp/ and post the output here - something might stick out as off, or if not, at least that aspect could be ruled out.
  5. Maybe I'm reaching beyond the limits of my knowledge. As I understand it, if you start with 128 bits, then for all practical purposes all you have at the end - in terms of your security - are those same 128 bits? In other words, as a user of the system, is there any need to distinguish between the seed and the key? And if so, why introduce a new term to describe what's effectively the same thing? (I could understand why for development, but this is user-facing.)
  6. I think this is a great idea for the benefit of the future masses - at least if it's optional, mitigating the disadvantage to those used to the "s..." version. I am curious how you've implemented the check digit in your mechanism? The common schemes I know of didn't work when I did them in my head (but maybe that's because of my head...). I use the term "secret" to refer only to the base58-encoded, "s"-prefixed and checksummed version of the "seed", which I think of as just an ordinary number with no prefix or checksum (a sketch of that encoding, as I understand it, is below). I have wondered why the well-established "private key" was not the term used to refer to what I understand is just the private part of a public/private key-pair.
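For reference, here is a rough sketch of how I understand the seed-to-"secret" encoding works: version byte, then the 16-byte seed, then a 4-byte double-SHA256 checksum, base58-encoded with the XRPL alphabet. Treat the constants as my assumptions and verify them against the rippled source:

```python
# Seed (16 raw bytes) -> "s..." secret.  Constants are assumptions; check
# them against rippled before relying on this.
import hashlib

XRPL_ALPHABET = "rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz"
SEED_PREFIX = b"\x21"  # assumed family-seed version byte (gives the leading "s")

def base58_xrpl(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = XRPL_ALPHABET[rem] + out
    # leading zero bytes map to the alphabet's zero character
    pad = len(data) - len(data.lstrip(b"\x00"))
    return XRPL_ALPHABET[0] * pad + out

def encode_seed(seed: bytes) -> str:
    payload = SEED_PREFIX + seed
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return base58_xrpl(payload + checksum)

print(encode_seed(bytes(16)))  # an all-zero seed, just to see the shape of the output
```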
  7. Great that you've done this. I don't know much (anything) about Ruby, but in general when implementing such things, watch out for how the system might fall back on some other source of randomness if the intended source isn't available. That could result in something less than secure, so it may be important to understand how different systems running the same code can return different (but still "valid") results - the sketch below illustrates the safer "fail loudly" behaviour. From there, for the secret_seed --> secret_human_readable part, there's a roughly-parallel node.js implementation, here.
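As a small illustration of the "fail loudly instead of falling back" point, this is the behaviour you want from whatever randomness source the Ruby code ends up using (shown in Python only because that is what I had to hand):

```python
# Draw seed entropy from the OS CSPRNG; secrets raises if no suitable
# source is available, rather than silently degrading to something weaker.
import secrets

def generate_seed() -> bytes:
    return secrets.token_bytes(16)  # 128 bits of entropy for a family seed

print(generate_seed().hex())
```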
  8. Yes, that's another side of this. I mean to suggest that the metric of on-exchange XRP versus off-exchange - when taken alone - is not necessarily indicative of anything, unless paired with other figures backing up whatever claim is being made.
  9. XRP being moved to exchanges is an interesting metric to check, but it could be a predictor of selling action as much as representative of supply purchases. It might be useful to connect this information with flow from known XRP II accounts to more feasibly determine the latter, but even then: given the low trading volumes on the XRP Ledger presently, what you're looking at could also be seen as indicative of how much XRP the market wants to hold on exchanges versus in "cold storage". In that sense, an argument could be made that it's a stronger positive signal in terms of price to see the on-exchange number *decrease*. If more people want to hold XRP in cold storage on the XRP Ledger, it means more people don't want to sell their XRP, so buying it becomes more difficult (i.e., more expensive).
  10. This looks mainly like a charting/data API bug. The historical data API is known to be buggy, but this has nothing to do with the integrity of the data on the XRP Ledger itself.

     When I checked more directly, the number of actual ledger closes for the past 24 hours versus a control 24-hour period about one month ago was about the same. I also checked the past 2.5 hours versus a 2.5-hour period a month ago - same again.

     What I am noticing is a variation in the pattern in which ledgers are closing. The network appears to close a handful of ledgers every second or so, then hang for around 8-9 seconds before closing another one, then return to roughly once per second, and so on (the sketch below shows one way to watch this directly). The average close time looks to be within the normal expected frequency overall, but perhaps the chart backend is not averaging it usefully because of this behaviour - the endpoint of the chart may be too dependent on a small number of immediately preceding ledger closes, or something like that. I have no idea whether the network regularly settles into such a pattern and that's considered ordinary, but I've certainly seen variance before (though usually at times of high load, which doesn't appear to be the case at the moment).
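A crude way to watch that closing pattern directly, rather than trusting the chart backend: poll the latest validated ledger index and note when (and by how much) it advances. This assumes s2.ripple.com answers JSON-RPC on port 51234 and uses the third-party `requests` package:

```python
# Poll the validated ledger index once a second for two minutes and print
# each time it advances, to see the close pattern first-hand.
import time
import requests

RPC = "https://s2.ripple.com:51234/"

def validated_index() -> int:
    payload = {"method": "ledger", "params": [{"ledger_index": "validated"}]}
    return requests.post(RPC, json=payload, timeout=10).json()["result"]["ledger_index"]

last = validated_index()
start = time.monotonic()
while time.monotonic() - start < 120:
    time.sleep(1)
    current = validated_index()
    if current != last:
        print(f"t={time.monotonic() - start:5.1f}s  +{current - last} ledgers -> {current}")
        last = current
```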
  11. Given the number of caveats and the lengths I went to in describing them, I somewhat agree with the negative assessments. Though I still find it interesting that the Top 100 by this metric shows 90% of all the ex- and current Ripple employees that I'm aware have ever used the board. That they reliably cluster together at the very top when ordering by RPP makes me disagree that it's useless.

     Moving the cut-off point to a higher reputation (~700, arrived at by limiting to the first ten pages of results), the list is still populated with many ex- and current Ripple employees: three within the top 5, two more within the top 15, and no others within the top 100. The four excluded by the shift in cut-off point all posted only for a short period before disappearing, so it would appear to work better at a higher setting in that regard. It's also populated more thoroughly with names even I recognise (I'm not a particularly frequent contributor), so maybe it does indeed show more useful results at a higher cut-off that removes more of the noise/outliers. I may redo it.

     @vsyc RPP dilutes Reputation by weighting it against Posts, so at base it's an improvement over Reputation alone. It penalises high-volume, low-to-medium-response posters in favour of low-to-medium-volume, high-response posters. (Also, I must note you have a negative RPP... ) Thanks for the kind words @Tinyaccount!
  12. When reading through the forum, I often unconsciously divide a member's Reputation by their Posts in my head, to get a feeling for how their contributions are perceived by other members. A higher ratio of Reactions Per Post ("RPP" ;) ) shows that more people may be willing to demonstrate they value that member's contributions. I find it interesting, but also useful - I might not read a long post by a member I don't recognise if they have a terribly low ratio, for instance. Perhaps others do something like this too?

     Recently, when @BobWay joined, I noticed that his RPP climbed very (x)rapidly to being much higher than what I'm used to seeing on a regular basis. It reminded me of something I've wanted to do for a while: scrape the member data from the site and make a chart to see where all members sit by this metric. With a few caveats (one briefly on the chart itself, more in detail below), here it is. (A small sketch of how the ranking is computed follows after this post.)

     Caveats (this list may not be exhaustive...!):

     1) One limiting factor with this chart is how people digest and respond to information when all eyes are upon them. For many people, truths can be uncomfortable, difficult or even completely unwelcome - and even when they do agree with or value an insight, they don't always want to "own up to it". I like to think people are pretty open-minded and flexible here, but I've still seen this pretty normal human behaviour from time to time, as anyone would reasonably expect. So, while I believe this chart gives some useful indication of something, it tends to leave out those who regularly say things other people aren't willing to show they agree with, even though they may agree, and even though those things may be true, important and valuable. There's probably nothing that can be done about that, except to note that if there's a member missing you expected to find on here - well, you may have a good point.

     2) The chart does not use the complete set of members who have Reputation. If it did, it wouldn't be very useful, as it would likely be filled with outliers who made one post, received a handful of votes and never returned - and their inclusion would come at the expense of significantly more established members, who might be hard to see among them or even drop off the chart entirely. Partly as a fix, and partly because I had neither the time nor the inclination to click "Next" 200 times, I picked a range of reputation to include and culled everything below it. That solution is perhaps equal parts reasonable and unreasonable, given that the point at which some members are included versus excluded is essentially arbitrary - members on the boundary could theoretically be one reaction away from topping the chart at #1, or disappearing entirely. Nevertheless, a compromise simply had to be made. If anyone has a suggestion about how to better go about it, I'm all ears.

     3) As I understand it, the "Reputation" score on the forum counts any "reaction" of any kind (and hopefully I've got that right...!). In theory, someone could game their way to the top of such a chart by posting only tremendously sad and confusing things. In practice I doubt they would gain much traction with their posts - more likely they would be ignored - so I think the metric stands a pretty good chance of being at least vaguely indicative of how members' posts are perceived. (Also, in my anecdotal and unscientific experience, I see the positively-connotated Like, Thanks and Laugh buttons clicked much more often, in that order. The negative Sad and Confused buttons seem to be used at the low ends of the bell curve.)

     What gives me some hope that this metric is valuable in spite of these caveats is that the resultant Top 100 is populated with almost all of the current and former Ripple employees on the forum that I am aware of, and all of them within the top third of the chart. That was pretty cool to see! Anyway, see what you think, and if this is valuable/useful maybe I can do it again in a month or so.
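For anyone curious, the ranking itself is trivial to compute once you have the scraped numbers. A tiny sketch with placeholder data (the names and figures below are illustrative, not real scraped values):

```python
# RPP = Reputation / Posts, with a reputation cut-off to drop one-hit outliers.
members = [
    # (name, posts, reputation) -- placeholder values only
    ("member_a", 120, 900),
    ("member_b", 45, 700),
    ("member_c", 2, 12),
]

CUTOFF = 500  # minimum reputation to be included, per caveat 2

ranked = sorted(
    ((rep / posts, name) for name, posts, rep in members if rep >= CUTOFF and posts > 0),
    reverse=True,
)
for rpp, name in ranked:
    print(f"{name}: RPP = {rpp:.2f}")
```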
  13. Well, what's your reward for asking the question? My take on summarising the answers already given:

     1) It's interesting, fun and/or feels good. There is no comparable, sufficiently-utilised open ledger in existence that doesn't also destroy the environment. That's cool!

     2) Running a node gets you fast and reliable access to the ledger and the information on it. This can also *save* you money if you regularly query the ledger, as you can query your locally stored data as much as you want without cost. If you rely on others' servers, you may be paying high data fees every month to satisfy your queries, and you have to wait on that data being transmitted each time, possibly redundantly. It can also save you money if you trade on the decentralised exchange and need to submit those trades quickly and reliably.

     3) It's very cheap to do, and this minimal cost gets you the benefits of both 1) and 2). It's not necessarily cheap to run a full-history node, but that comes with significantly stronger versions of the above benefits, which is obviously worth it to some.

     The only reason the Internet now exists is because thousands of people all over the world did exactly the same thing when it was in its infancy. That's how the Internet was first created: by people voluntarily connecting their computers together, at their own cost, with no real immediate benefit other than that they thought it was cool and interesting. Many see a similar process playing out now, with the "Internet of Value" in its infancy, of which they view technologies such as the XRP Ledger as a potentially integral part.

     There are many people now, I'm sure, who would have loved to have been part of the early Internet even if they made no money out of it - just to be able to say they were there and took part. However, huge businesses and even entire new industries were launched on the back of that open and giving voluntary participation. Billionaires and trillion-dollar economies were created out of nothing, and in that case the focus wasn't even financial. So, here we have a similar thing playing out again, and this time the focus is specifically around finance and money. I'm sure you can do the math on that.
  14. Fair amount of stuff archived here if you hunt around the various dates: https://web.archive.org/web/20150422094949/https://wiki.ripple.com/Main_Page
  15. Hah, so the most valuable art in the world might be the Nazca Lines - huge, and ugly. I kind of agree regarding market cap, and I was not entirely unserious in suggesting that doing away with this popular valuation scheme could be healthy for crypto in terms of price. I went into this in another thread. (One thing I didn't get into deeply there is how much current market-cap values rely on wash trading.) TL;DR: I don't necessarily see a ceiling for any crypto price, but I do think that if we multiply price by supply and get a value orders of magnitude greater than all the wealth in the world, it's unlikely the price is going to get up there. The reason is that the major market participants (who set price) are performing, and likely will always perform, similar calculations themselves as a matter of course before making decisions. I think it will take a major shift in current popular perception for that ceiling to break. I love this argument. Thanks!