
Just_J

Member
  • Content count

    133
  • Joined

  • Last visited

2 Followers

About Just_J

  • Rank
    Advanced Member
  1. Just_J

    xRapid Tipping Point Reached?

    I hate to be the one to respond, but this reads like that certain someone who hits REPLY ALL on an errant email sent to the corporate-wide distribution list, and then hits REPLY ALL again at every person who tries to tell everyone to stop responding ... But seriously ... stop responding ... this person is not seeking knowledge, just a way to obfuscate and spread an agenda.
  2. @nikb or @JoelKatz Any ideas or suggestions on the correct configuration of v.81 rippled nodes with the new Validator List in a clustered environment as described above, where Validator nodes are hidden behind public Tracking nodes?

     If I add [validators] to the "validators.txt" file with the Ripple validators' and my validators' public keys, it seems to ignore the new v.81 suggested configuration of:

     [validator_list_sites]
     https://vl.ripple.com

     [validator_list_keys]
     ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734

     I can see why it does that: explicitly defining trusted validators in the [validators] section would reasonably override [validator_list_sites] and [validator_list_keys]. Yet if I do this, my validator's performance suffers and my validators are not included in all expected transactions.

     Is there special configuration that needs to be performed for clustered nodes and protected validators in v.81 of rippled that hasn't been shared publicly at GitHub or elsewhere? Thanks!
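     For reference, a minimal sketch of the alternative validators.txt the post describes, with an explicit static UNL (the keys shown are placeholders, not real validator keys):

        # validators.txt -- static trusted-validator list; when this section is
        # present it appears to take precedence over the dynamic list above
        [validators]
        n9KPlaceholderRippleValidatorKey000000000000000000000
        n9LPlaceholderRippleValidatorKey111111111111111111111
        n9MPlaceholderOwnValidatorKey0000000000000000000000000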
  3. So I've been having some mixed results with our Validators since the update to v.81 of rippled ... My Validators sit behind public Tracking Nodes in a clustered configuration.

     Using the Validator List as prescribed by Ripple seems to remove the Validators from the Tracking Nodes' trusted-validator set, despite being clustered and configured to stay connected via [ips_fixed]. When this happens, the Validators are removed from the XRP Charts Validator list completely and, of course, stop being seen as "verified" validators.

     If I remove the Validator List config recommended by Ripple and return to hard-coded trusted validators in "validators.txt", the validators return to the XRP Charts Validator list and appear as "verified" - but they quickly fall out of sync with the network and show poor performance.

     Watching the Validator List at XRP Charts, it seems that validators across the board are showing up, disappearing, losing transactions, and overall showing strange performance.

     Does anyone have any additional information on configuring clustered Tracking and Validator nodes with protected Validators in the v.81 rippled environment?
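     For context, a minimal sketch of the clustering setup being described, as it might look in the hidden Validator's rippled.cfg (hostnames and keys are placeholders; 51235 is the default peer port):

        # Do not advertise this validator's IP address to the wider network
        [peer_private]
        1

        # Keep persistent connections to our own public tracking nodes
        [ips_fixed]
        tracking-node-1.example.com 51235
        tracking-node-2.example.com 51235

        # Node public keys of the tracking nodes, so the servers treat
        # each other as one cluster (each member lists the others)
        [cluster_nodes]
        n9KPlaceholderTrackingNode1Key
        n9LPlaceholderTrackingNode2Key

        # This server's own node identity seed (placeholder)
        [node_seed]
        ssPlaceholderNodeSeed0000000000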
  4. I run our Validators and Tracking Nodes via Docker, and truthfully it's pretty easy to accomplish. I haven't released mine to the public because some of it is specialized to our production network, and that would obviously be a security risk. That said, I'm sure I could pare it down to a baseline for others to build on and tailor to their own environments.
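     As a rough starting point, a pared-down docker-compose sketch of what such a baseline might look like (the image name, version tag, and container paths are placeholders, not the production setup described above):

        # docker-compose.yml -- minimal single-node sketch
        version: "3"
        services:
          rippled:
            image: myorg/rippled:0.81.0          # build or pull your own rippled image
            restart: unless-stopped
            ports:
              - "51235:51235"                    # peer protocol port
            volumes:
              - ./rippled.cfg:/etc/opt/ripple/rippled.cfg
              - ./validators.txt:/etc/opt/ripple/validators.txt
              - rippled-db:/var/lib/rippled/db   # persist the ledger store
        volumes:
          rippled-db: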
  5. Wow ... I feel dumb. Lesson: don't do your configurations late at night ....
  6. So with the changes made to the Validator List in v.81 I am having some issues ....

     At first, everything seemed fine. My Docker containers came up fine and were connecting to the live ledger. Yet a couple of days ago this changed: my tracking and validator nodes would come up and would show up in the network topology on xrpcharts ... but the validators would not get on the live ledger. I backed out almost all of my customizations in rippled.cfg and still could not get the nodes on the live ledger.

     I then noticed in the logs that it was unable to resolve https://v1.ripple.com .... Does anyone know the underlying IP address so that I can attempt to hard-code it? I'm using the Google DNS servers 8.8.8.8 and 8.8.4.4 ... they seem to lack an entry for v1.ripple.com ... It could certainly be some upstream routing on my production network, but so far I haven't found a smoking gun there ...
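     For anyone wanting to check their resolvers directly, a quick dig/nslookup comparison (note that the list host quoted in the config post above is vl.ripple.com with a lowercase letter L, while the log message here shows v1 with a digit one, so checking both spellings may be worthwhile):

        dig @8.8.8.8 vl.ripple.com +short    # letter "l" -- the configured list host
        dig @8.8.8.8 v1.ripple.com +short    # digit "1" -- the host named in the logs
        nslookup vl.ripple.com 8.8.4.4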
  7. I guess that's her opinion. It doesn't help any negotiations, if there are any negotiations. It seems like a lot of people follow her and will probably join in with the criticism. At the end of the day, if people really want Coinbase to add Ripple, this isn't helping.
  8. Thanks @JoelKatz for your response! I am definitely using online delete on this pair of nodes. These two servers are currently running a very lightweight config I developed into Docker containers. By lightweight, I mean they keep only the bare minimum of ledgers at 256, with online_delete running at the same 256. Since I am not serving any applications locally with these nodes, they are purely for RippleNet.

     The configuration of these servers hasn't changed for a week or so, and the only configuration changes were to the [insight] server mapping IP to support my new metrics package.

     Going back to a previous question I posted on this forum about proactively "blocking known problematic nodes," part of that question was based on the connection attempts being made over and over by some nodes to the US WEST NODE 1 server shown in these charts. From the logs I thought this was possibly affecting the performance of US WEST NODE 1, and since the validator sits protected behind that tracking node via [ips_fixed], the validator's performance was being affected too: it wasn't getting timely responses to all fetch requests and was falling behind on consensus due to missing ledgers.

     As for the "event" that appears in the charting above, the bad thing is that with my logs set only to warn and error, the event didn't really show up in the logs. Everything seems to have been working as intended before and again after the "event." So maybe it's just an anomaly, or maybe it was a set of bad proposals that were hanging around in the validator and finally timed out and were purged?

     Whatever the case, these two nodes seem to be running very well right now, and the validator is now near the top of the charts. Our other pair of nodes is currently offline while I work on some Kubernetes changes, but once I bring them up I'm going to lower the logging to INFO on the validator and see if a similar "event" happens with it. If so, I'll at least have direction from the Load Monitor job logging entries.

     Anyway - thanks for your input - I'm sure it's just something I did (or didn't) do ...
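     A sketch of the lightweight retention settings described above, as rippled.cfg excerpts (the database path and metrics address are placeholders):

        # Keep only ~256 recent ledgers and rotate older ones out continuously
        [ledger_history]
        256

        [node_db]
        type=RocksDB
        path=/var/lib/rippled/db/rocksdb
        online_delete=256
        advisory_delete=0

        # StatsD metrics export -- the section whose server address was remapped above
        [insight]
        server=statsd
        address=127.0.0.1:8125
        prefix=rippled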
  9. Here's a screen grab of metrics for that same validator and its companion node over a 24-hour window.
  10. Oh, I don't know that it's anything major ... it was just pretty interesting that all of a sudden a lot of activity went "poof" with no changes to the environment or any connectivity problems. I looked at the firewall device logs and didn't see any sort of attack, and the performance of the validator was good before and after the event - it never missed a beat. I'm leaning towards sunspots and geo-magnetic storms
  11. I was reviewing the daily logs of our validators and nodes and saw something interesting regarding activity on our validators. It clearly shows up on our graphs as well, in terms of the number of stale records purged .... Is anyone else running a node/validator seeing the same thing?
  12. Just_J

    Correction: Rippled 0.80.1 is ready

    @warpaul Would you like some of us to move to 0.80.1 manually now? If so, I can begin the update process on our nodes/validators ... Thanks!
  13. Thanks @JoelKatz ! Ahhhhhh ... I believe it's because I'm currently in the process of tuning my Docker container and Kubernetes pod configurations, so I'm watching the log files for errors at startup and then only for a small window of time afterward ... If I were to wait longer, it appears the protocol itself would take care of the issue through the mechanisms you described. The lesson here is: I need to stop combing the desert and back away from the keyboard! Thanks again for your input - I really appreciate it!
  14. No ... not on a mass scale ... Just follow the bottom of the Topology (Node List) at: https://xrpcharts.ripple.com/#/topology
  15. Wondering if anyone is aware of a way to exclude nodes from connecting to nodes under your control. I am aware that the peer protocol itself is supposed to report problem nodes and place them on the naughty list, but this does not stop a node from connecting to you and causing problems. Further still, it seems that an operator of a problem node could simply set their [ips_fixed] to any node and cause ... trouble ... via large requests for ledgers, or simply connect requests over and over.

      Short of blocking them via firewall rules at the perimeter, is there a way within a rippled.cfg file to ban known problematic nodes?

      Many of the older-revision nodes in the current node list on XRP Charts are simply rebooting every 5 to 20 minutes and, from my log inspection, are doing nothing but generating unnecessary connect requests and large ledger fetches; their connectivity or performance is so poor that they cause tons of PEER WARN entries in the logs.

      I know this is somewhat against the nature/intent of the peer protocol - you want the protocol itself to manage problem nodes by excluding them until they can get their act together, or to identify a possible bad actor intentionally causing problems on RippleNet - yet it seems there might be some room for improvement at this point ...

      Thanks in advance for any tips -
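      As far as I know there is no ban-list stanza in rippled.cfg, so the practical workaround is the perimeter blocking mentioned above - for example, with iptables (the addresses are placeholders; 51235 is the default peer port):

         # Drop peer-protocol traffic from a known-problematic node
         iptables -A INPUT -s 203.0.113.45 -p tcp --dport 51235 -j DROP

         # Or from an entire misbehaving subnet
         iptables -A INPUT -s 203.0.113.0/24 -p tcp --dport 51235 -j DROP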