caroma

Validator not running on main chain


I've recently updated my rippled build to v1.2.1. I cannot seem to get my validator to run on the main chain.  Any ideas as to why this may be? 


I have removed my validator token to see whether my node operates correctly as a stock node. I am using the default config file and validators file provided with the rippled 1.2.1 build. I can see that I'm not getting past the connected state. I've been able to run fine in the past, so I don't believe it is due to hardware resources. Any ideas?

This is the result of the "Get Topology Node" command. As you can see, it only shows results from my previous build.

{
  "rowkey": "n9LhmDHz3sbB8fryeJvddqSyVZjnRoU6H5gBJvDAb4iaJvPf2Ehx",
  "host": "129.97.10.33",
  "last_updated": "2019-02-12T17:14:51Z",
  "uptime": 74306,
  "version": "1.1.2",
  "city": "Waterloo",
  "country": "Canada",
  "country_code": "CA",
  "isp": "University of Waterloo",
  "lat": "43.4715",
  "location_source": "petabyet",
  "long": "-80.5454",
  "postal_code": "N2L",
  "region": "Ontario",
  "region_code": "ON",
  "timezone": "America/Toronto",
  "node_public_key": "n9LhmDHz3sbB8fryeJvddqSyVZjnRoU6H5gBJvDAb4iaJvPf2Ehx",
  "inbound_count": 0,
  "outbound_count": 9,
  "result": "success"
}
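For reference, the output above came from the Data API v2 network topology endpoint. Something like the following should reproduce it (the exact path is my reading of the Data API docs, so treat it as a sketch):

# Query the network topology entry for a single node by its public key.
# Endpoint path assumed from the Ripple Data API v2 documentation.
curl -s "https://data.ripple.com/v2/network/topology/nodes/n9LhmDHz3sbB8fryeJvddqSyVZjnRoU6H5gBJvDAb4iaJvPf2Ehx"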

When I run "server_info" I get the following: 

{
   "result" : {
      "info" : {
         "build_version" : "1.2.1",
         "complete_ledgers" : "45470742-45470749",
         "fetch_pack" : 5457,
         "hostid" : "clm1",
         "io_latency_ms" : 1,
         "jq_trans_overflow" : "0",
         "last_close" : {
            "converge_time_s" : 3.981,
            "proposers" : 26
         },
         "load" : {
            "job_types" : [
               {
                  "avg_time" : 3470,
                  "job_type" : "untrustedProposal",
                  "over_target" : true,
                  "peak_time" : 8652,
                  "per_second" : 10
               },
               {
                  "avg_time" : 4049,
                  "in_progress" : 2,
                  "job_type" : "ledgerData",
                  "peak_time" : 9884,
                  "per_second" : 1,
                  "waiting" : 13
               },
               {
                  "avg_time" : 56,
                  "in_progress" : 1,
                  "job_type" : "clientCommand",
                  "peak_time" : 169,
                  "per_second" : 1
               },
               {
                  "in_progress" : 1,
                  "job_type" : "updatePaths"
               },
               {
                  "avg_time" : 710,
                  "job_type" : "transaction",
                  "over_target" : true,
                  "peak_time" : 1876,
                  "per_second" : 8
               },
               {
                  "job_type" : "batch",
                  "per_second" : 7
               },
               {
                  "avg_time" : 147,
                  "in_progress" : 1,
                  "job_type" : "advanceLedger",
                  "peak_time" : 366,
                  "per_second" : 17
               },
               {
                  "avg_time" : 31,
                  "job_type" : "fetchTxnData",
                  "peak_time" : 118,
                  "per_second" : 7
               },
               {
                  "avg_time" : 271,
                  "job_type" : "trustedValidation",
                  "peak_time" : 644,
                  "per_second" : 6
               },
               {
                  "job_type" : "writeObjects",
                  "peak_time" : 105,
                  "per_second" : 95
               },
               {
                  "avg_time" : 156,
                  "job_type" : "trustedProposal",
                  "over_target" : true,
                  "peak_time" : 357,
                  "per_second" : 10
               },
               {
                  "job_type" : "peerCommand",
                  "per_second" : 695
               },
               {
                  "avg_time" : 9,
                  "job_type" : "diskAccess",
                  "peak_time" : 165,
                  "per_second" : 5
               },
               {
                  "job_type" : "processTransaction",
                  "per_second" : 8
               },
               {
                  "avg_time" : 1,
                  "job_type" : "SyncReadNode",
                  "peak_time" : 130,
                  "per_second" : 274
               },
               {
                  "avg_time" : 1,
                  "job_type" : "AsyncReadNode",
                  "peak_time" : 147,
                  "per_second" : 409
               },
               {
                  "job_type" : "WriteNode",
                  "per_second" : 148
               }
            ],
            "threads" : 6
         },
         "load_factor" : 22.70703125,
         "load_factor_local" : 22.70703125,
         "peer_disconnects" : "3",
         "peer_disconnects_resources" : "0",
         "peers" : 10,
         "pubkey_node" : "n9LhmDHz3sbB8fryeJvddqSyVZjnRoU6H5gBJvDAb4iaJvPf2Ehx",
         "pubkey_validator" : "none",
         "published_ledger" : 45470749,
         "server_state" : "connected",
         "server_state_duration_us" : "121726616",
         "state_accounting" : {
            "connected" : {
               "duration_us" : "1621996411",
               "transitions" : 4
            },
            "disconnected" : {
               "duration_us" : "2598702",
               "transitions" : 1
            },
            "full" : {
               "duration_us" : "0",
               "transitions" : 0
            },
            "syncing" : {
               "duration_us" : "128501685",
               "transitions" : 3
            },
            "tracking" : {
               "duration_us" : "0",
               "transitions" : 0
            }
         },
         "time" : "2019-Feb-28 19:32:14.404796",
         "uptime" : 1754,
         "validated_ledger" : {
            "age" : 183,
            "base_fee_xrp" : 1e-05,
            "hash" : "4155BBFB54610B1107E87EB070B8ACA0A2AC1D404EB26F286D8972D05947D55E",
            "reserve_base_xrp" : 20,
            "reserve_inc_xrp" : 5,
            "seq" : 45470814
         },
         "validation_quorum" : 21,
         "validator_list" : {
            "count" : 1,
            "expiration" : "2019-Mar-06 00:00:00.000000000",
            "status" : "active"
         }
      },
      "status" : "success"
   }
}
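In case it helps anyone else debugging sync issues, a quick way to keep an eye on the sync state is to poll server_info and pull out just the relevant fields (a minimal sketch; assumes rippled is on the PATH and can reach the admin port):

# Re-run server_info every 10 seconds and show only the sync-relevant fields.
watch -n 10 'rippled server_info | grep -E "server_state|complete_ledgers|\"peers\"|load_factor"'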
 


peers: 10

I read this on the Diagnosing Problems page:

"If you have exactly 10 peers, that may indicate that your rippled is unable to receive incoming connections through a router using NAT. You can improve connectivity by configuring your router's firewall to forward the port used for peer-to-peer connections (port 51235 by default)"

 

https://developers.ripple.com/diagnosing-problems.html
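One way to test whether that applies here is to count inbound peer connections directly. A rough sketch using the peers admin command (I'm assuming inbound connections are flagged with an "inbound" : true field in the response, consistent with the inbound_count shown in the topology output above):

# Count inbound peer connections; 0 inbound alongside ~10 outbound would
# support the NAT / port-forwarding theory.
rippled peers | grep -c '"inbound" : true'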


Running netstat shows that I have connections on port 51235, so I'm not sure that this would be the issue, especially since I was running a node fine until I decided to update it. Thank you @Dario_o for your suggestion. Looking at the log files, I was getting "NetworkOPs: WRN We are not running on the consensus ledger" warning messages. I do not think there are any issues with my build as I was able to run all unit tests.

I changed node_size to small, and checking server_info after 20 minutes shows that I am finally running with a "full" ledger. I'm not sure this was in fact the root issue, since I've always run with node_size set to medium before. I've also noticed that the network topology has not been updated since February 23, which must be why only my old build was showing up there even after the update; that had been leading me to believe there was a bigger issue with my server. Seeing as I was able to get the node status to full, I think it should run fine as a stock node now. Now I just need to figure out whether I can get back to proposing on the main chain.

**UPDATE**

Trying to run again as a validator. Looking at the validator registry on xrpcharts, when it first begins proposing, the site shows it is on the main chain. I refresh the page and it then changes to show that it is on its own chain. I refresh the page again and my validator no longer shows up. I am stumped as to what is causing this behaviour. 

I left the validator to run for >12hrs. Checking the validator info shows that I am running on the main chain with >90% agreement. I guess I will continue to work under the small node setting and try again as a medium node if I am able to achieve similar performance. Thank you all for your suggestions. 


I have been running under the changed configuration for quite some time now and I am having the same problem, i.e., running on a different chain than the main chain. Running server_info gives the results shown below. What stands out to me is that it reports 0 proposers. I am using the validators.txt file provided with rippled. Why are there no proposers?

{
   "result" : {
      "info" : {
         "build_version" : "1.2.1",
         "complete_ledgers" : "45497739-45497800,45499436-45499437,45500170-45500184,45500607,45501181-45501187,45501397-45501420,45501540,45502499-45502617",
         "fetch_pack" : 5356,
         "hostid" : "clm1",
         "io_latency_ms" : 1,
         "jq_trans_overflow" : "21855",
         "last_close" : {
            "converge_time_s" : 2.001,
            "proposers" : 0
         },
         "load" : {
            "job_types" : [
               {
                  "avg_time" : 29,
                  "job_type" : "ledgerRequest",
                  "peak_time" : 90
               },
               {
                  "avg_time" : 2515,
                  "job_type" : "untrustedProposal",
                  "over_target" : true,
                  "peak_time" : 6397
               },
               {
                  "avg_time" : 9697,
                  "in_progress" : 2,
                  "job_type" : "ledgerData",
                  "peak_time" : 25744,
                  "per_second" : 2,
                  "waiting" : 12
               },
               {
                  "avg_time" : 3,
                  "in_progress" : 1,
                  "job_type" : "clientCommand",
                  "peak_time" : 45,
                  "per_second" : 1
               },
               {
                  "job_type" : "advanceLedger",
                  "peak_time" : 4,
                  "per_second" : 13
               },
               {
                  "avg_time" : 20,
                  "job_type" : "trustedValidation",
                  "peak_time" : 144,
                  "per_second" : 5
               },
               {
                  "job_type" : "writeObjects",
                  "peak_time" : 3,
                  "per_second" : 121
               },
               {
                  "avg_time" : 13,
                  "job_type" : "trustedProposal",
                  "peak_time" : 45,
                  "per_second" : 10
               },
               {
                  "job_type" : "peerCommand",
                  "per_second" : 605
               },
               {
                  "avg_time" : 2,
                  "job_type" : "diskAccess",
                  "peak_time" : 33,
                  "per_second" : 5
               },
               {
                  "job_type" : "SyncReadNode",
                  "peak_time" : 98,
                  "per_second" : 42
               },
               {
                  "avg_time" : 2,
                  "job_type" : "AsyncReadNode",
                  "peak_time" : 431,
                  "per_second" : 408
               },
               {
                  "job_type" : "WriteNode",
                  "per_second" : 211
               }
            ],
            "threads" : 6
         },
         "load_factor" : 44.34375,
         "load_factor_local" : 44.34375,
         "peer_disconnects" : "21",
         "peer_disconnects_resources" : "0",
         "peers" : 10,
         "pubkey_node" : "n9LhmDHz3sbB8fryeJvddqSyVZjnRoU6H5gBJvDAb4iaJvPf2Ehx",
         "pubkey_validator" : "nHU5Lub2RXRyVXTriT6vCG6oao5fcZVvT8GKJrxPsmFrx2EkosK8",
         "server_state" : "proposing",
         "server_state_duration_us" : "221662849908",
         "state_accounting" : {
            "connected" : {
               "duration_us" : "20925119110",
               "transitions" : 11
            },
            "disconnected" : {
               "duration_us" : "2518949",
               "transitions" : 1
            },
            "full" : {
               "duration_us" : "221718390946",
               "transitions" : 2
            },
            "syncing" : {
               "duration_us" : "588104929",
               "transitions" : 12
            },
            "tracking" : {
               "duration_us" : "0",
               "transitions" : 2
            }
         },
         "time" : "2019-Mar-04 17:03:40.372842",
         "uptime" : 243234,
         "validated_ledger" : {
            "age" : 221500,
            "base_fee_xrp" : 1e-05,
            "hash" : "CE1DCFDD444C2A20FA4252DCB01E4A967F17C4C791EB25DF21B9801DB3                                                                                        B87580",
            "reserve_base_xrp" : 20,
            "reserve_inc_xrp" : 5,
            "seq" : 45502617
         },
         "validation_quorum" : 22,
         "validator_list" : {
            "count" : 1,
            "expiration" : "2019-Mar-06 00:00:00.000000000",
            "status" : "active"
         }
      },
      "status" : "success"
   }
}
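For reference, the relevant part of the stock validators.txt points at Ripple's hosted validator list rather than naming individual validators, roughly like this (quoting from memory; verify the publisher key against the file that actually ships with your install):

[validator_list_sites]
https://vl.ripple.com

[validator_list_keys]
ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734

With that in place, "proposers" : 0 suggests the server isn't hearing validations from the trusted set at all, rather than the list itself being misconfigured.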
 


What's in your rippled.cfg file and what's the load on your disk storage?

Also, you most likely have not forwarded your peer port (meaning your server isn't reachable from the internet for incoming connections).
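It's also worth asking the server directly which validators it trusts; if the trusted list were empty, that alone would explain "proposers" : 0. A minimal check (output field names may vary by version, so verify against yours):

# Show the validator list the server has actually loaded and trusts.
rippled validators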


I've included snippets of my rippled.cfg file, netstat output, and storage info from iostat below. My rippled instance is on a partition on sda, which has a total of ~120 GB, of which 22 GB is used.

caroma@clm1:~$ netstat -an | grep "51235"
tcp        0      0 0.0.0.0:51235           0.0.0.0:*               LISTEN
tcp        0      0 129.97.10.33:59354      54.244.69.36:51235      ESTABLISHED
tcp        0      0 129.97.10.33:52724      198.11.206.26:51235     ESTABLISHED
tcp        0     82 129.97.10.33:46592      52.213.33.138:51235     ESTABLISHED
tcp        0      0 129.97.10.33:34624      169.54.2.157:51235      ESTABLISHED
tcp        0      0 129.97.10.33:41626      188.65.212.210:51235    ESTABLISHED
tcp        0      0 129.97.10.33:54736      54.186.248.91:51235     ESTABLISHED
tcp        0    268 129.97.10.33:43320      52.79.141.73:51235      ESTABLISHED
tcp6       0      0 129.97.10.33:33382      184.173.45.62:51235     ESTABLISHED
tcp6       0      0 129.97.10.33:36466      169.54.2.154:51235      ESTABLISHED
tcp6       0      0 129.97.10.33:40504      54.213.156.82:51235     ESTABLISHED
caroma@clm1:~$ iostat
Linux 4.15.0-42-generic (clm1)  2019-03-04      _x86_64_        (8 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.54    0.00    0.27    5.14    0.00   94.05

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00          5          0
sda              69.44       429.94       195.96  790191662  360154653
sdb               0.14         0.89         1.85    1635303    3407176

[server]
port_rpc_admin_local
port_peer
port_ws_admin_local

[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http

[port_peer]
port = 51235
ip = 0.0.0.0
protocol = peer

[port_ws_admin_local]
port = 6006
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws

#-------------------------------------------------------------------------------

[node_size]
small

# This is primary persistent datastore for rippled.  This includes transaction
# metadata, account states, and ledger headers.  Helpful information can be
# found here: https://ripple.com/wiki/NodeBackEnd
# delete old ledgers while maintaining at least 2000. Do not require an
# external administrative command to initiate deletion.
[node_db]
type=RocksDB
path=/var/lib/rippled/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
online_delete=2000
advisory_delete=0

[database_path]
/var/lib/rippled/db

# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/var/log/rippled/debug.log

[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org

# Where to find some other servers speaking the Ripple protocol.
[ips]
r.ripple.com 51235

[validator_token]
"..."

# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the
# folder in which the rippled.cfg file is located.
[validators_file]
validators.txt

# Turn down default logging to save disk space in the long run.
# Valid values here are trace, debug, info, warning, error, and fatal
[rpc_startup]
{ "command": "log_level", "severity": "fatal" }

# If ssl_verify is 1, certificates will be validated.
# To allow the use of self-signed certificates for development or internal use,
# set to ssl_verify to 0.
[ssl_verify]
1

 


Can you please execute the following command and paste the results?

fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=256M --numjobs=8 --runtime=60 --group_reporting

You may need to install the fio program.

Please do it twice: once when your rippled is running and once when it’s stopped.

 


@nikb The following are the outputs of the command you asked for.

With Rippled running: 

caroma@clm1:~$ fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=256M --numjobs=8 --runtime=60 --group_reporting
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 8 processes
randwrite: Laying out IO file(s) (1 file(s) / 256MB)
randwrite: Laying out IO file(s) (1 file(s) / 256MB)
randwrite: Laying out IO file(s) (1 file(s) / 256MB)
randwrite: Laying out IO file(s) (1 file(s) / 256MB)
randwrite: Laying out IO file(s) (1 file(s) / 256MB)
randwrite: Laying out IO file(s) (1 file(s) / 256MB)
randwrite: Laying out IO file(s) (1 file(s) / 256MB)
randwrite: Laying out IO file(s) (1 file(s) / 256MB)
Jobs: 8 (f=8): [w(8)] [21.4% done] [0KB/37488KB/0KB /s] [0/9372/0 iops] [eta 00m
Jobs: 8 (f=8): [w(8)] [20.0% done] [0KB/7224KB/0KB /s] [0/1806/0 iops] [eta 00m:
Jobs: 8 (f=8): [w(8)] [19.2% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 00m:21s]
Jobs: 8 (f=8): [w(8)] [18.8% done] [0KB/608KB/0KB /s] [0/152/0 iops] [eta 00m:26
Jobs: 8 (f=8): [w(8)] [17.9% done] [0KB/300KB/0KB /s] [0/75/0 iops] [eta 00m:32s
Jobs: 1 (f=1): [_(4),w(1),_(3)] [26.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 02m:59s]
randwrite: (groupid=0, jobs=8): err= 0: pid=9732: Tue Mar  5 09:56:32 2019
  write: io=503012KB, bw=8359.2KB/s, iops=2089, runt= 60169msec
    slat (usec): min=2, max=11437K, avg=3824.18, stdev=102371.39
    clat (usec): min=0, max=237, avg= 0.76, stdev= 1.30
     lat (usec): min=2, max=11437K, avg=3825.08, stdev=102371.54
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    1], 50.00th=[    1], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    3], 99.50th=[    5], 99.90th=[    8], 99.95th=[   10],
     | 99.99th=[   19]
    bw (KB  /s): min=    1, max=150822, per=21.22%, avg=1773.74, stdev=12740.80
    lat (usec) : 2=96.51%, 4=2.60%, 10=0.83%, 20=0.04%, 50=0.01%
    lat (usec) : 100=0.01%, 250=0.01%
  cpu          : usr=0.04%, sys=0.13%, ctx=2263, majf=0, minf=91
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=125753/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=503012KB, aggrb=8359KB/s, minb=8359KB/s, maxb=8359KB/s, mint=60169msec, maxt=60169msec

Disk stats (read/write):
  sda: ios=12935/15721, merge=3056/11583, ticks=242072/975464, in_queue=451860, util=99.94%

With Rippled stopped: 

caroma@clm1:~$ fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=256M --numjobs=8 --runtime=60 --group_reporting
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 8 processes
Jobs: 5 (f=5): [_(2),w(4),_(1),w(1)] [16.0% done] [0KB/31988KB/0KB /s] [0/7997/0
Jobs: 4 (f=4): [_(2),w(3),_(2),w(1)] [14.9% done] [0KB/43780KB/0KB /s] [0/10.1K/
Jobs: 4 (f=4): [_(2),w(3),_(2),w(1)] [16.1% done] [0KB/28544KB/0KB /s] [0/7136/0
Jobs: 4 (f=4): [_(2),w(3),_(2),w(1)] [17.2% done] [0KB/26648KB/0KB /s] [0/6662/0
Jobs: 4 (f=4): [_(2),w(3),_(2),w(1)] [18.4% done] [0KB/33916KB/0KB /s] [0/8479/0
Jobs: 4 (f=4): [_(2),w(3),_(2),w(1)] [19.5% done] [0KB/31872KB/0KB /s] [0/7968/0
Jobs: 4 (f=4): [_(2),w(3),_(2),w(1)] [20.7% done] [0KB/29132KB/0KB /s] [0/7283/0
Jobs: 1 (f=1): [_(3),w(1),_(4)] [93.2% done] [0KB/33520KB/0KB /s] [0/8380/0 iops] [eta 00m:03s]
randwrite: (groupid=0, jobs=8): err= 0: pid=12151: Tue Mar  5 09:59:22 2019
  write: io=2048.0MB, bw=50892KB/s, iops=12722, runt= 41208msec
    slat (usec): min=2, max=207819, avg=222.71, stdev=2804.54
    clat (usec): min=0, max=175, avg= 0.53, stdev= 0.69
     lat (usec): min=2, max=207827, avg=223.34, stdev=2804.75
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    1], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    2], 99.50th=[    2], 99.90th=[    8], 99.95th=[    9],
     | 99.99th=[   12]
    bw (KB  /s): min=    0, max=301852, per=20.88%, avg=10627.86, stdev=19639.04
    lat (usec) : 2=98.87%, 4=0.74%, 10=0.36%, 20=0.02%, 50=0.01%
    lat (usec) : 100=0.01%, 250=0.01%
  cpu          : usr=0.17%, sys=1.27%, ctx=26609, majf=0, minf=96
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=524288/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=2048.0MB, aggrb=50891KB/s, minb=50891KB/s, maxb=50891KB/s, mint=41208msec, maxt=41208msec

Disk stats (read/write):
  sda: ios=0/80696, merge=0/27555, ticks=0/2458864, in_queue=2507588, util=99.97%
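Summarizing the two runs side by side (numbers taken from the outputs above):

  rippled running : 2,089 IOPS  (503,012 KB written before the 60 s runtime cap)
  rippled stopped : 12,722 IOPS (full 2,048 MB written in 41.2 s)
  slowdown        : 12722 / 2089 ≈ 6.1x

So random writes are roughly six times slower while rippled is running, which points at the disk as the bottleneck.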

 


@nikb Below are the specs of my machine. I have installed my rippled instance on a 120 GB partition on the HDD. I am currently running Ubuntu 16.04. As I said earlier in my post, I was able to run a previous version of rippled without any issues, and there have been no changes to my system to my knowledge.

 

caroma@clm1:~$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 42
Model name:            Intel(R) Xeon(R) CPU E31270 @ 3.40GHz
Stepping:              7
CPU MHz:               1995.705
CPU max MHz:           3800.0000
CPU min MHz:           1600.0000
BogoMIPS:              6800.36
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-7

 


Switch to an SSD then. The node store especially needs a fast storage subsystem, but also the sqlite databases (which are NOT subject to online_delete) perform better on SSDs.
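Once the SSD is mounted, it's just a matter of repointing the paths in rippled.cfg and copying (or re-syncing) the data, e.g. (a sketch; /mnt/ssd is a placeholder mount point):

# Placeholder mount point for the SSD; both the node store and the
# SQLite databases under [database_path] should live on the fast disk.
[node_db]
type=RocksDB
path=/mnt/ssd/rippled/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
online_delete=2000
advisory_delete=0

[database_path]
/mnt/ssd/rippled/db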
