reserved memory could be ballooned away.
CPU performance major counters:
- %RDY (threshold 10): Overprovisioning of vCPUs, excessive use of vSMP, or a limit has been set (check %MLMTD).
- %CSTP (threshold 3): Co-deschedule State Time percentage; excessive use of vSMP. Decrease the number of vCPUs for this VM, which leads to increased scheduling opportunities.
- %MLMTD (threshold 0): If larger than 0, the world is being throttled. Possible cause: a CPU limit (resource pool or the world's limit setting).
- %SWPWT (threshold 5): Swap Wait Time; the VM is waiting on swapped pages to be read from disk, which indicates memory overcommitment (see the parsing sketch after this group).
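These CPU thresholds can also be checked offline against esxtop batch output. The sketch below is a minimal example, assuming a capture such as `esxtop -b -d 5 -n 12 > esxtop_batch.csv`; the counter-name substrings in THRESHOLDS ("% Ready", "% CoStop", "% Max Limited", "% Swap Wait") are assumptions and may need adjusting to the header names your ESXi version writes.

```python
#!/usr/bin/env python3
"""Flag VM worlds whose esxtop CPU counters exceed the thresholds listed above."""
import csv
from collections import defaultdict

# Counter-name suffix -> threshold (percentages, matching the list above).
# NOTE: the exact header wording varies between ESXi versions; treat these names as assumptions.
THRESHOLDS = {
    "% Ready": 10.0,       # %RDY
    "% CoStop": 3.0,       # %CSTP
    "% Max Limited": 0.0,  # %MLMTD
    "% Swap Wait": 5.0,    # %SWPWT
}

def flag_cpu_counters(path="esxtop_batch.csv"):
    with open(path, newline="") as fh:
        reader = csv.reader(fh)
        header = next(reader)
        # Map the CSV columns belonging to per-VM CPU groups to (world, counter, threshold).
        watched = {}
        for i, col in enumerate(header):
            if "Group Cpu(" not in col:
                continue
            for name, limit in THRESHOLDS.items():
                if col.endswith(name):
                    world = col.split("Group Cpu(")[1].split(")")[0]
                    watched[i] = (world, name, limit)
        worst = defaultdict(float)  # (world, counter) -> worst sample seen
        for row in reader:
            for i, (world, name, limit) in watched.items():
                try:
                    value = float(row[i])
                except (ValueError, IndexError):
                    continue
                if value > limit:
                    worst[(world, name)] = max(worst[(world, name)], value)
        for (world, name), value in sorted(worst.items()):
            print(f"{world}: {name} peaked at {value:.1f} (threshold {THRESHOLDS[name]})")

if __name__ == "__main__":
    flag_cpu_counters()
```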
Memory performance major counters:
- MCTLSZ (threshold 1): If larger than 0, the host is forcing VMs to inflate the balloon driver to reclaim memory because the host is overcommitted.
- SWCUR (threshold 1): If larger than 0, the host has swapped memory pages in the past: over-commitment.
- SWR/s (threshold 1): If larger than 0, the host is actively reading from swap (vswap): excessive over-commitment.
- SWW/s (threshold 1): If larger than 0, the host is actively writing to swap (vswap): excessive over-commitment.
- N%L (threshold 80): If less than 80, the VM is experiencing poor NUMA locality; the ESX scheduler is not keeping the VM's memory local, so it is accessed over the interconnect (check NRMEM for the amount of remote memory; see the triage sketch after this group).
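The reclamation counters above escalate in severity: from an inflated balloon, to swap having been used at some point, to active swap I/O. The helper below is a sketch of that escalation; the function name memory_triage and the wording of its findings are illustrative, only the counter names and thresholds come from the list.

```python
def memory_triage(mctlsz: float, swcur: float, swr_s: float, sww_s: float, n_l: float) -> list[str]:
    """Turn the memory counters above into plain-language findings (sketch only).

    The arguments mirror MCTLSZ, SWCUR, SWR/s, SWW/s and N%L as read from esxtop.
    """
    findings = []
    if mctlsz > 0:
        findings.append("Balloon driver inflated (MCTLSZ > 0): host is reclaiming memory, overcommitted.")
    if swcur > 0:
        findings.append("Swap in use (SWCUR > 0): host has swapped guest pages at some point.")
    if swr_s > 0 or sww_s > 0:
        findings.append("Active swap I/O (SWR/s or SWW/s > 0): excessive overcommitment right now.")
    if n_l < 80:
        findings.append("Poor NUMA locality (N%L < 80): memory accessed over the interconnect; check NRMEM.")
    return findings or ["Memory counters look healthy."]

# Example: ballooning plus active swap reads.
for line in memory_triage(mctlsz=512, swcur=1024, swr_s=35, sww_s=0, n_l=92):
    print(line)
```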
Network performance major counters:
- %DRPTX (threshold 1): Dropped transmit packets: very high network utilization.
- %DRPRX (threshold 1): Dropped receive packets: very high network utilization (a drop-rate example follows this group).
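These two counters are reported as a percentage of packets dropped in the sample interval, so the threshold of 1 means roughly one packet in a hundred. A minimal illustration of that arithmetic, assuming you have raw dropped and total packet counts for an interval:

```python
def drop_pct(dropped: int, total: int) -> float:
    """Dropped packets as a percentage of packets handled in the interval
    (the same ratio %DRPTX/%DRPRX expose); above ~1% suggests saturation."""
    return 0.0 if total == 0 else 100.0 * dropped / total

# Example: 240 of 18,000 transmitted packets dropped in one interval -> ~1.3%, over the threshold.
print(f"{drop_pct(240, 18_000):.1f}%")
```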
Disk performance major counters:
- GAVG (threshold 25): Guest average latency; GAVG = DAVG + KAVG (a latency-triage sketch follows this group).
- DAVG (threshold 25): Disk latency most likely caused by the array.
- KAVG (threshold 2): Disk latency caused by the VMkernel, which means queuing; check QUED.
- QUED (threshold 1): Queue maxed out. The queue depth is possibly set too low; check with the vendor for the optimal queue depth value.
- ABRTS/s (threshold 1): Aborts issued by the VM because storage is not responding (60 seconds for Windows): path failure or the array is not accepting I/O.
- RESETS/s (threshold 1): Number of commands reset per second.
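Because GAVG is simply DAVG plus KAVG, high guest-observed latency can be attributed either to the array and fabric (DAVG dominates) or to queuing inside the VMkernel (KAVG dominates, usually with QUED above 0). The helper below sketches that decision; the function and its wording are illustrative, the thresholds are the ones listed above.

```python
def latency_triage(davg_ms: float, kavg_ms: float, qued: int) -> str:
    """Split guest-visible latency (GAVG = DAVG + KAVG) into the array side and
    the VMkernel side, following the thresholds above. Sketch only; values in ms."""
    gavg_ms = davg_ms + kavg_ms
    if gavg_ms <= 25 and kavg_ms <= 2:
        return f"GAVG {gavg_ms:.1f} ms: within the thresholds above."
    if kavg_ms > 2:
        detail = " and the device queue is saturated (QUED > 0)" if qued > 0 else ""
        return (f"GAVG {gavg_ms:.1f} ms with KAVG {kavg_ms:.1f} ms: latency added inside the "
                f"VMkernel, i.e. queuing{detail}; review the queue depth with the vendor.")
    return (f"GAVG {gavg_ms:.1f} ms with DAVG {davg_ms:.1f} ms: latency is coming from the "
            f"array/fabric rather than the host.")

# Example: 31 ms at the device, 1 ms in the kernel -> the array is the bottleneck.
print(latency_triage(davg_ms=31.0, kavg_ms=1.0, qued=0))
```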