
About time to scale RCL


34 minutes ago, Professor Hantzen said:

Indeed, the subscribe stream is better and more suited.  However, it's more work to build and maintain the book from what comes down that stream, and it requires a deeper understanding of RCL to implement correctly, which also leaves room for error and the risk of loss from trading against an inaccurate set of offers. Newcomers will go straight for book_offers for these reasons, as its integrity is instantly higher and it's easy to implement.  This may make it a particular pain point during an influx of new users.

Do you always need the complete book? I only look at, e.g., the top 5 (options = { "limit": 5 }), and that makes a huge difference.
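For example, a depth-limited request over the rippled WebSocket API might look like this sketch (the issuer address is just the one from the example offer in this thread):

```javascript
// Build a depth-limited book_offers request for the rippled WebSocket API.
// Only the top `limit` offers are returned, which keeps responses small.
function buildBookOffersRequest(limit) {
  return {
    command: "book_offers",
    taker_gets: { currency: "XRP" },
    taker_pays: {
      currency: "USD",
      issuer: "rvYAfWj5gh67oV6fW32ZzP3Aw4Eubs59B"
    },
    limit: limit
  };
}

// Send it over an open WebSocket connection to a rippled server:
// ws.send(JSON.stringify(buildBookOffersRequest(5)));
console.log(JSON.stringify(buildBookOffersRequest(5)));
```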

Edited by jn_r

14 minutes ago, jn_r said:

Do you always need the complete book? I only look at, e.g., the top 5 (options = { "limit": 5 }), and that makes a huge difference.

Can you teach me how to do this in ripplerm's open-source client, pretty please?  You know so much more about rippleapi than I do, as proven in the past.  Unfortunately, I also lost the script you helped me with when the old server was taken offline, so I'm currently lacking any working examples (I haven't built rippleapi successfully on the new machine either, so I'm using an older minified version).

Edited by Twarden

Just now, jn_r said:

Do you always need the complete book? I only look at, e.g., the top 5 (options = { "limit": 5 }), and that makes a huge difference.

Sure, though regardless of what limit is used, 95% of the information returned per offer is still not required in most cases.

Let me show what I mean.  Here's an offer from a book_offers response:

      {
        "Account": "rE4S4Xw8euysJ3mt7gmK8EhhYEwmALpb3R",
        "BookDirectory": "DFA3B6DDAB58C7E8E5D944E736DA4B7046C30E4F460FD9DE4E061E626E1EDBDC",
        "BookNode": "0000000000000000",
        "Flags": 0,
        "LedgerEntryType": "Offer",
        "OwnerNode": "0000000000000000",
        "PreviousTxnID": "F9B8F10C1702F8468B0E371B9177594AA122006E14D834DBFFC8B1765E8DCFD5",
        "PreviousTxnLgrSeq": 29660002,
        "Sequence": 36071,
        "TakerGets": "2081342447",
        "TakerPays": {
          "currency": "USD",
          "issuer": "rvYAfWj5gh67oV6fW32ZzP3Aw4Eubs59B",
          "value": "358.4608601602031"
        },
        "index": "003A1AD9946EF1E4388E0E0F0043EB7D461C3448AEB27FA987A02709FDEBC763",
        "owner_funds": "21583922306",
        "quality": "0.0000001722257963408348"
      }

Here's what a client needs out of that to initiate a trade:

["0.0000001722257963408348","2081342447"]

Or, more clearly:

["0.1722257963408348","2081.342447"]

The first is the price (USD), the second the amount (XRP).  This is how books are usually presented by exchange APIs.  rippled, on the other hand, provides a wealth of information, some of which may be important to a small subset of users, but it makes no sense to burden all servers and clients with all of this extra information when only a few clients require it.  At the very least, it would be nice to have the option to forego it. 
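A minimal sketch of that reduction (assuming TakerGets arrives as an XRP drop string and quality is USD per drop, so both values are rescaled by 10^6):

```javascript
// Reduce a book_offers offer to [price, amount], as exchange APIs present it.
// Assumes TakerGets is XRP in drops (a string) and quality is USD per drop.
function reduceOffer(offer) {
  const price = (Number(offer.quality) * 1e6).toString();    // USD per XRP
  const amount = (Number(offer.TakerGets) / 1e6).toString(); // drops -> XRP
  return [price, amount];
}

// The two fields from the example offer above:
const offer = {
  TakerGets: "2081342447",
  quality: "0.0000001722257963408348"
};
// Roughly ["0.1722257963408348", "2081.342447"]; floating point may
// round the trailing digits of the price.
console.log(reduceOffer(offer));
```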

It's further the case that some of this information is redundant for all users, all of the time.  E.g., every offer in a book_offers response contains the currency and issuer, but these are known at the time of making the request and (should) never change from offer to offer within a given book_offers response.

3 minutes ago, Professor Hantzen said:

Sure, though regardless of what limit is used, 95% of the information returned per offer is still not required in most cases.

...

...
It's further the case that some of this information is redundant for all users, all of the time.  E.g., every offer in a book_offers response contains the currency and issuer, but these are known at the time of making the request and (should) never change from offer to offer within a given book_offers response.

I agree with you that it's a lot of (possibly) not always useful information that could be improved (I would prefer it to be optional, so that the code stays backwards compatible), yet that's all the more reason to limit the depth of the order book (order books of 100+ depth are not uncommon).

12 minutes ago, Twarden said:

Can you teach me how to do this in ripplerm's open-source client, pretty please?  You know so much more about rippleapi than I do, as proven in the past.  Unfortunately, I also lost the script you helped me with when the old server was taken offline, so I'm currently lacking any working examples (I haven't built rippleapi successfully on the new machine either, so I'm using an older minified version).

I'll PM you


95%

Btw, this is not a guesstimate.  I worked it out before originally posting the idea, by examining the byte content of a few offers.

In the example, the number of bytes used (with carriage returns and whitespace removed) is 626 for the full offer, 41 for the reduced version, and 36 for the (mostly) significant-digits-only version.  That works out to a 94.3% reduction in content in this case.
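The measurement is easy to reproduce. This sketch minifies the example offer from earlier in the thread with JSON.stringify and compares it against the 41-byte reduced pair (the 36-byte variant trims further, which is where 94.3% comes from):

```javascript
// Compare minified byte counts of a full book_offers offer vs. the
// reduced [quality, TakerGets] pair. Offer taken verbatim from the
// example earlier in the thread.
const offer = {
  Account: "rE4S4Xw8euysJ3mt7gmK8EhhYEwmALpb3R",
  BookDirectory: "DFA3B6DDAB58C7E8E5D944E736DA4B7046C30E4F460FD9DE4E061E626E1EDBDC",
  BookNode: "0000000000000000",
  Flags: 0,
  LedgerEntryType: "Offer",
  OwnerNode: "0000000000000000",
  PreviousTxnID: "F9B8F10C1702F8468B0E371B9177594AA122006E14D834DBFFC8B1765E8DCFD5",
  PreviousTxnLgrSeq: 29660002,
  Sequence: 36071,
  TakerGets: "2081342447",
  TakerPays: {
    currency: "USD",
    issuer: "rvYAfWj5gh67oV6fW32ZzP3Aw4Eubs59B",
    value: "358.4608601602031"
  },
  index: "003A1AD9946EF1E4388E0E0F0043EB7D461C3448AEB27FA987A02709FDEBC763",
  owner_funds: "21583922306",
  quality: "0.0000001722257963408348"
};

const full = JSON.stringify(offer).length;                             // 626
const reduced = JSON.stringify([offer.quality, offer.TakerGets]).length; // 41
console.log(full, reduced, (100 * (1 - reduced / full)).toFixed(1) + "%");
```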
 

2 minutes ago, jn_r said:

I agree with you that it's a lot of (possibly) not always useful information that could be improved (I would prefer it to be optional, so that the code stays backwards compatible), yet that's all the more reason to limit the depth of the order book (order books of 100+ depth are not uncommon).

Yes, I agree it should be an option, not a replacement - a "reduce" flag should be sufficient, and consistent with other Ripple API endpoints.  And yes, limiting is certainly important.

2 hours ago, Professor Hantzen said:

Indeed, the subscribe stream is better and more suited.  However, it's more work to build and maintain the book from what comes down that stream, and it requires a deeper understanding of RCL to implement correctly, which also leaves room for error and the risk of loss from trading against an inaccurate set of offers. Newcomers will go straight for book_offers for these reasons, as its integrity is instantly higher and it's easy to implement.  This may make it a particular pain point during an influx of new users.

There is a library for that... it only needs a bit more work to be integrated into ripple-lib. It also generates the autobridged book.

9 hours ago, nikb said:

…the most important code held up brilliantly and didn't even bat an eyelash; it was rock solid.

Hell yeah it was! Like me, right now, reading this post.

What? Too much?


 

The s1 cluster looks much healthier right now. The issue appeared to be mainly due to poor tuning of certain network timing parameters, which caused instability under the increased load on the peer network (traffic between rippled servers). I'll share more details shortly. This is the code change:

https://github.com/ripple/rippled/pull/2110

We actually haven't upgraded the s1 cluster with these changes yet. We upgraded the servers they connect to that we control, to help improve relaying of server-to-server traffic.

 

6 minutes ago, JoelKatz said:

The s1 cluster looks completely healthy right now.

It is looking much better.  

I'm monitoring with the following test: subscribe to the ledger stream and store the ledger_index from each message.  On each ledger, request a book and calculate the difference between the book's ledger_index and the one from the ledger stream.  When the book's ledger_index is +1, everything is normal.  When it's greater than that, the node is out of sync.

I've been seeing differences as large as 100+ ledgers over the past couple of days.  Just now it has mostly stabilised, though a few minutes ago it went out of sync by 7-8 ledgers for a couple of minutes. 
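The classification step of that check can be sketched as below (the function names and the "lag" wording are mine; the real version feeds these values from a WebSocket ledger subscription and a book_offers request per ledger):

```javascript
// Compare the ledger_index from the subscribed ledger stream against the
// ledger_index reported by a book_offers response. By the time the book
// request returns, one more ledger has normally closed, so a difference
// of +1 is the healthy baseline; anything larger means the node serving
// the book is behind.
function syncLag(streamLedgerIndex, bookLedgerIndex) {
  return bookLedgerIndex - streamLedgerIndex;
}

function classify(lag) {
  if (lag <= 1) return "in sync";
  return "out of sync by " + (lag - 1) + " ledger(s)";
}

console.log(classify(syncLag(29660002, 29660003))); // in sync
console.log(classify(syncLag(29660002, 29660010))); // out of sync by 7 ledger(s)
```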

Guest Dizer

@JoelKatz, @nikb,

 

I want to thank you both for your active role on this forum and your interaction with us, your patience in explaining things, and your prompt responses to the issues we face as network users. 

This forum is not the same without you guys being part of it. I'm grateful I'm in this space. I'm very grateful that I believed in Ripple and its potential since January 2013.

In short, you guys are awesome! Keep it up. :good: :)

 


The s1 cluster is still hundreds of ledgers out of sync.  I also spun up my own rippled (0.60.2) and it ran fine for a while, but now it runs in and out of sync as well.  Is there a particular node list I should be using?  Everything is at the out-of-the-box defaults.

1 hour ago, Professor Hantzen said:

The s1 cluster is still hundreds of ledgers out of sync.  I also spun up my own rippled (0.60.2) and it ran fine for a while, but now it runs in and out of sync as well.  Is there a particular node list I should be using?  Everything is at the out-of-the-box defaults.

Please upgrade to 0.60.3; it should be much better.

