axellent Posted August 28, 2017

Hi, after stopping and restarting the "validator" server, which is part of my internal setup of two servers (stock & validator pointing to each other), I am getting the following errors:

2017-Aug-28 15:10:38 InboundLedger:WRN 11 timeouts for ledger 3
2017-Aug-28 15:10:38 InboundLedger:WRN 11 timeouts for ledger 2
2017-Aug-28 15:11:58 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-28 15:11:58 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-28 15:12:01 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-28 15:12:01 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-28 15:12:03 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-28 15:12:03 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-28 15:12:06 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-28 15:12:06 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E

rippled keeps printing this continuously. What does it mean?
Here's my server_info:

2017-Aug-28 15:10:32 HTTPClient:NFO Connecting to 127.0.0.1:5005

{
  "id" : 1,
  "result" : {
    "info" : {
      "build_version" : "0.70.1",
      "complete_ledgers" : "4-9870",
      "hostid" : "osboxes",
      "io_latency_ms" : 1,
      "last_close" : { "converge_time_s" : 1.999, "proposers" : 0 },
      "load" : {
        "job_types" : [
          { "in_progress" : 1, "job_type" : "clientCommand" },
          { "job_type" : "peerCommand", "per_second" : 2 }
        ],
        "threads" : 6
      },
      "load_factor" : 1,
      "peers" : 1,
      "pubkey_node" : "n9KPE9tnVzsW5cQWHzdonASMraR9SdLW3JsfiHF5e7dbWX31B8UK",
      "pubkey_validator" : "nHUuVACwNCYzKPUTE6V5VYab76aqP7gWN8sDuLWwSSUUdbB2w6Xc",
      "server_state" : "proposing",
      "state_accounting" : {
        "connected" : { "duration_us" : "62065426", "transitions" : 1 },
        "disconnected" : { "duration_us" : "1102048", "transitions" : 1 },
        "full" : { "duration_us" : "2840062162", "transitions" : 1 },
        "syncing" : { "duration_us" : "0", "transitions" : 0 },
        "tracking" : { "duration_us" : "0", "transitions" : 1 }
      },
      "uptime" : 2904,
      "validated_ledger" : {
        "age" : 2,
        "base_fee_xrp" : 1e-05,
        "hash" : "935F3658B983BC45597D0FF9105B41DF6768EC341F9BDFC9C5E1B2E4788761D7",
        "reserve_base_xrp" : 20,
        "reserve_inc_xrp" : 5,
        "seq" : 9870
      },
      "validation_quorum" : 1
    },
    "status" : "success"
  }
}

server_state:

{
  "id" : 1,
  "result" : {
    "state" : {
      "build_version" : "0.70.1",
      "complete_ledgers" : "4-9880",
      "io_latency_ms" : 1,
      "last_close" : { "converge_time" : 2000, "proposers" : 0 },
      "load" : {
        "job_types" : [
          { "in_progress" : 1, "job_type" : "clientCommand" },
          { "job_type" : "peerCommand", "per_second" : 1 }
        ],
        "threads" : 6
      },
      "load_base" : 256,
      "load_factor" : 256,
      "peers" : 1,
      "pubkey_node" : "n9KPE9tnVzsW5cQWHzdonASMraR9SdLW3JsfiHF5e7dbWX31B8UK",
      "pubkey_validator" : "nHUuVACwNCYzKPUTE6V5VYab76aqP7gWN8sDuLWwSSUUdbB2w6Xc",
      "server_state" : "proposing",
      "state_accounting" : {
        "connected" : { "duration_us" : "62065426", "transitions" : 1 },
        "disconnected" : { "duration_us" : "1102048", "transitions" : 1 },
        "full" : { "duration_us" : "2868813662", "transitions" : 1 },
        "syncing" : { "duration_us" : "0", "transitions" : 0 },
        "tracking" : { "duration_us" : "0", "transitions" : 1 }
      },
      "uptime" : 2932,
      "validated_ledger" : {
        "base_fee" : 10,
        "close_time" : 557248260,
        "hash" : "379660D07D109DA08F5F30A1880805FA6E7723F37C8FAD81D0FB82AECBF70851",
        "reserve_base" : 20000000,
        "reserve_inc" : 5000000,
        "seq" : 9880
      },
      "validation_quorum" : 1
    },
    "status" : "success"
  }
}

Is there a way to tell that the validator is in sync with the stock server?

Btw, I am stopping the server with this command:

/opt/ripple/bin/rippled --conf=/opt/ripple/etc/rippled.cfg stop

and starting it with:

/opt/ripple/bin/rippled --conf=/opt/ripple/etc/rippled.cfg

Is this correct, or do I need to include the --start or --net args when starting?
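For what it's worth, here is a minimal sketch of one way to compare two server_info responses for sync. The helper name in_sync and the tolerance parameter are my own invention, not anything rippled provides; it only inspects the validated_ledger field shown in the output above:

```python
def in_sync(info_a: dict, info_b: dict, max_seq_gap: int = 1) -> bool:
    """Rough check that two rippled servers agree on the validated ledger.

    Compares the validated_ledger from two server_info responses. A small
    sequence gap is tolerated because the two RPC calls cannot be made at
    exactly the same instant.
    """
    va = info_a["result"]["info"]["validated_ledger"]
    vb = info_b["result"]["info"]["validated_ledger"]
    if va["seq"] == vb["seq"]:
        # Same sequence number: the hashes must also match, or the two
        # servers have diverged onto different ledgers.
        return va["hash"] == vb["hash"]
    return abs(va["seq"] - vb["seq"]) <= max_seq_gap

# Abbreviated responses shaped like the server_info output above:
stock = {"result": {"info": {"validated_ledger": {"seq": 9870, "hash": "935F..."}}}}
validator = {"result": {"info": {"validated_ledger": {"seq": 9871, "hash": "A0B3..."}}}}
print(in_sync(stock, validator))  # one ledger apart is normal -> True
```

This is just a quick consistency probe; comparing complete_ledgers as well (as discussed further down the thread) gives a stronger signal.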
axellent Posted August 30, 2017 (Author)

Stopped the validator, restarted, and got this:

2017-Aug-30 12:42:32 NetworkOPs:WRN We are not running on the consensus ledger
2017-Aug-30 12:42:32 NetworkOPs:ERR JUMP last closed ledger to 810ACA99C83EC234414E2834217FD8295587AA294221B0871CCB0A770DCCEDAB

What does it mean, and how do I fix it?
Zim Posted August 30, 2017

@axellent I am unsure of your problem. It's evident you are missing most, if not all, of the public XRP ledgers. I remember you saying you were using the ledger fork, which I have not used yet. Do you have all of your nodes clustered and keyed? If so, your validator should pick up any work it needs, as long as there are peers to provide the ledger. Also, in my last post I suggested ledger_history changes. If you went with "full" on a small server or VPS, you may run into problems due to the workload, as the full ledger history is something like 6 TB. Wish I could help more. If there is anything else, just @ me.
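For reference, the ledger_history setting Zim mentions is a stanza in rippled.cfg. A minimal sketch (the value here is illustrative; it can be a ledger count or the word "full"):

```ini
; How much ledger history this server tries to keep locally.
; "full" attempts to acquire the entire history (very large on the
; public network); a number keeps roughly that many recent ledgers.
[ledger_history]
full
```

On a small private fork like the one in this thread, "full" is far less costly than on the public network, since the history only goes back to the fork's own genesis.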
axellent Posted August 30, 2017 (Author)

Thanks @Zim. After restarting both servers one more time, I checked server_info on both, as simultaneously as possible:

stock server:

"complete_ledgers" : "4-42245",
"server_state" : "full",
"pubkey_node" : "n9KHec9cJXXgaHT4nYCQQgCq3Miaz4tXab9HPNAVPUfCkZiThmfy",
"pubkey_validator" : "none",

validator:

"complete_ledgers" : "4-42245",
"server_state" : "proposing",
"pubkey_node" : "n9KPE9tnVzsW5cQWHzdonASMraR9SdLW3JsfiHF5e7dbWX31B8UK",
"pubkey_validator" : "nHUuVACwNCYzKPUTE6V5VYab76aqP7gWN8sDuLWwSSUUdbB2w6Xc",

Since "complete_ledgers" is identical, is it safe to assume the servers are in sync? I am not sure how to interpret the "full" vs "proposing" server_state. Also, I think I have the nodes keyed and clustered: in the validator's rippled.cfg I have [validator_token] set, and on the other server I have [validators_file] set; inside validators.txt I have:

[validators]
nHUuVACwNCYzKPUTE6V5VYab76aqP7gWN8sDuLWwSSUUdbB2w6Xc

Both rippled.cfg files also have this set:

[ips_fixed]
192.168.0.33 51235
192.168.0.29 51235

meaning each is pointing both to itself and to the other server. Is this correct? Or should each server point only to the other (and not to itself)? Thanks again for your help, much appreciated!
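For reference, the usual alternative to the shared list above is to have each server's [ips_fixed] name only its peer. A sketch using the IPs from the post (I am assuming 192.168.0.33 is the stock server and 192.168.0.29 the validator; swap them if it's the other way around):

```ini
; rippled.cfg on the stock server (192.168.0.33):
[ips_fixed]
192.168.0.29 51235

; rippled.cfg on the validator (192.168.0.29):
[ips_fixed]
192.168.0.33 51235
```

Listing a server's own address is generally harmless (it just can't usefully peer with itself), so this is tidiness more than a fix.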
Zim Posted August 30, 2017

Everything looks normal except for your ledgers. But I know you were trying to create your own ledger and ripple, or something of the sort, from the thread you posted the other day. What is the exact problem you are having?
axellent Posted August 30, 2017 (Author)

@Zim, yes, exactly: I am running two servers locally (ledger fork) and they should be interacting only with each other. After stopping and restarting the validator, the server keeps printing these warnings over and over:

2017-Aug-30 13:50:00 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-30 13:50:00 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-30 13:50:02 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-30 13:50:02 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-30 13:50:05 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-30 13:50:05 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-30 13:50:07 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-30 13:50:07 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-30 13:50:10 InboundLedger:WRN 11 timeouts for ledger 2
2017-Aug-30 13:50:10 InboundLedger:WRN 11 timeouts for ledger 3

and the other server (stock server) was printing this:

2017-Aug-30 13:51:06 LedgerMaster:NFO Advancing accepted ledger to 42985 with >= 1 validations
2017-Aug-30 13:51:06 LedgerConsensus:NFO We closed at 557416264
2017-Aug-30 13:51:06 LedgerConsensus:NFO 1 time votes for 557416264
2017-Aug-30 13:51:06 LedgerConsensus:NFO Our close offset is estimated at 0 (2)
2017-Aug-30 13:51:06 NetworkOPs:NFO Consensus time for #42986 with LCL 6662C84187ECEB896AA9E97A68D159BB46723987D0513174E8FE76A0C7A46172
2017-Aug-30 13:51:06 LedgerConsensus:NFO Entering consensus process, watching
2017-Aug-30 13:51:09 LedgerConsensus:NFO Proposers:1 nw:50 thrV:1 thrC:1
2017-Aug-30 13:51:09 LedgerConsensus:NFO Converge cutoff (1 participants)
2017-Aug-30 13:51:09 LedgerConsensus:NFO CNF buildLCL 234158E6320BFC0BA16AA9AA88EECBE3FD179A31C37B712F44C9FF719AC4E64A

which seems fine, just info messages. I changed that server's rippled.cfg to show only warnings (before it was set to info), and now it started printing messages similar to the validator's:

2017-Aug-30 13:52:33 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-30 13:52:33 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-30 13:52:35 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-30 13:52:35 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-30 13:52:38 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-30 13:52:38 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-30 13:52:40 InboundLedger:WRN Want: 53AD3751A30BD9CD5B4BE2D8E4F8956D558E758EBC541DEB1FC93B5E5AF2047E
2017-Aug-30 13:52:40 InboundLedger:WRN Want: A0B39C5C0478A5FB789C7CE1840965077148C351F4D0A27C67211748DE18482E
2017-Aug-30 13:52:43 InboundLedger:WRN 11 timeouts for ledger 2
2017-Aug-30 13:52:43 InboundLedger:WRN 11 timeouts for ledger 3

So both servers have the same issue. Is this something to be concerned about? Does it have anything to do with "complete_ledgers" being "4-42245" on both servers? Ledger 1 is, I suppose, the "genesis ledger", so maybe something happened to my ledgers 2 and 3, and that's what the servers are complaining about?
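That reading of complete_ledgers can be checked mechanically. A minimal sketch (the helper name is my own) that parses the range string rippled reports and lists the early ledgers that no server holds, which in this thread would line up with the repeated warnings about ledgers 2 and 3:

```python
def missing_ledgers(complete_ledgers: str, genesis_seq: int = 1) -> list[int]:
    """Return ledger sequences below the first held range.

    complete_ledgers is the range string reported by server_info,
    e.g. "4-42245" or a comma-separated set like "2-500,700-900".
    """
    held: set[int] = set()
    for part in complete_ledgers.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            held.update(range(int(lo), int(hi) + 1))
        else:
            held.add(int(part))
    first_held = min(held)
    # Everything from the genesis sequence up to the first held ledger
    # is history this server never acquired.
    return [s for s in range(genesis_seq, first_held) if s not in held]

print(missing_ledgers("4-42245"))  # -> [1, 2, 3]
```

With "4-42245" on both servers, neither peer holds ledgers 1-3, so each server's request for them can never be satisfied, which would explain the endless "Want:" retries and timeouts for ledgers 2 and 3.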
Zim Posted August 30, 2017 Share Posted August 30, 2017 (edited) Unsure. Maybe you lost those ledgers some how and now there is just no "peer" in the network with them? But like i said i am unsure. Are you having problems making transactions? Or is there some sort of ledger problem? Or are you just concerened about the ledgers 2-3? I typically just try and repoduce errors and see if it repeats or not. Thats why we use dev enviromemts and testnets right? Maybe someone else with more experiance will have answers for the missing ledgers. Edited August 30, 2017 by Zim axellent 1 Link to comment Share on other sites More sharing options...