Friday, December 16, 2011

dvPortGroup Settings

General:
  • Name
  • Description
  • Number of ports
  • Port Bindings (Static, Dynamic, no binding)
Policies:
  • Security: same as Port Group at vSS
  • Teaming and Failover as Port Group at vSS
  • Traffic Shaping: supports both directions (ingress and egress), while a vSS port group shapes traffic in one direction only (outbound)
  • VLAN Type: VLAN, VLAN Trunking, PVLAN
  • Block all ports (Yes/No, default No), under Miscellaneous
Advanced:
  • Allow override of the port policies above (multiple yes/no radio buttons, checked by default): per-port level settings
  • Live port moving (not checked by default): a port can be moved while in use
  • Configure reset at disconnect (checked by default): per-port settings revert to the port group settings when the port is disconnected from the VM
  • Define DVPort name format: template for dvPort name



vDS property

From vSphere client:
General:
  • name
  • Number of uplinks (uplink name editing here)
  • Number of ports
Advanced:
  • MTU
  • CDP
  • Administrator contact info (Name and other details text)
Networks in vDS:
  • dvSwitch-DVUplinks (Static Port Binding, VLAN Trunking 0-4094)
  • dvPortGroup
PVLAN is created from Private VLAN tab
  • Primary PVLAN: Promiscuous
  • Secondary PVLAN: Isolated/Community

Traffic Shaping

Switch level and port group level

By default disabled
  • Average bandwidth
  • Peak bandwidth
  • Burst size

Port Property under vSS

  • Network Label
  • VLANID optional
  • VMotion
  • Fault Tolerance logging
  • Management Traffic
Port policy consists of:
  • Security
  • Traffic Shaping
  • Failover and Load Balancing

Thursday, December 15, 2011

vmkfstools

Also included in the vCLI with reduced functionality.
  • Create/Extend vmfs file system; upgrade vmfs 2 to vmfs 3
  • Create/Extend/Migrate/Clone/Inflate/Rename/Delete Virtual Disk
  • Create RDM (Virtual -r /Physical -z)
  • Manage SCSI Reservation for LUNs
Supported disk formats:
  • zeroedthick(default)
  • eagerzeroedthick
  • thick
  • thin
  • rdm
  • rdmp
  • raw
  • 2gbsparse
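A few illustrative vmkfstools invocations (paths, sizes, and device names below are placeholders, not from these notes; double-check against vmkfstools --help for your version):
    vmkfstools -C vmfs3 -S myVMFS /vmfs/devices/disks/<device>:1   (create a VMFS3 file system on partition 1)
    vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/vm1/vm1.vmdk   (create a 10 GB thin-provisioned virtual disk)
    vmkfstools -X 20G /vmfs/volumes/datastore1/vm1/vm1.vmdk   (extend the disk to 20 GB)
    vmkfstools -i src.vmdk -d eagerzeroedthick dst.vmdk   (clone a disk into eagerzeroedthick format)
    vmkfstools -r /vmfs/devices/disks/<device> vm1-rdm.vmdk   (create a virtual compatibility RDM; use -z for physical)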

SSH security

/etc/ssh/sshd_config
  • PermitRootLogin yes/no
  • Protocol 2
  • 3DES cipher
  • To disable SFTP, comment out: Subsystem sftp /usr/libexec/openssh/sftp-server
Restart sshd for the change to take effect: service sshd restart
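For reference, a hardened /etc/ssh/sshd_config might contain lines like these (a sketch; adapt to local policy):
    PermitRootLogin no
    Protocol 2
    Ciphers 3des-cbc
    #Subsystem sftp /usr/libexec/openssh/sftp-server   (commented out to disable SFTP)
then: service sshd restart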

Wednesday, December 14, 2011

esxcfg-auth

  • change user password aging restrictions
  • change user password complexity requirements
  • configure authentication sources (LDAP, NIS, Kerberos, Active Directory)
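A rough sketch of typical calls (flag names are from memory and may differ by ESX version; verify with esxcfg-auth --help before use; example.com is a placeholder domain):
    esxcfg-auth --passmaxdays=90   (password aging)
    esxcfg-auth --enablead --addomain=example.com --addc=dc1.example.com   (Active Directory authentication)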

Friday, December 9, 2011

vCenter Orchestrator

Requirement:
  • 4 GB RAM, 2 GB disk, static IP, 2.0+ GHz CPU
  • Working LDAP
  • Web browser: IE 7+ or Firefox 3+
  • Database recommended on a separate server
Default plugin:
  • Mail Plugin
  • SSH plugin
  • vCenter 4x plugin
  • vCO Library
  • WebOperator
  • Enumeration
  • .Net
  • XML
  • Database
Skill:
  • Configure Orchestrator, database, ldap
  • Run a workflow
  • Administer packages
  • Identify actions, tasks, policies
*** feels similar to Microsoft SMS Server; wondering how this will be tested in the DCA exam?

Thursday, December 8, 2011

esxcli

esxcli is part of the vSphere CLI and is also available from the host console; it is primarily used for PSA management.

Three namespaces:

  • nmp <object> <command>   e.g. PSP selection
  • swiscsi <object> <command>   e.g. port binding
  • corestorage <object> <command>   e.g. path masking
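A few sample invocations of these namespaces on vSphere 4 (the adapter name vmhba33 is a placeholder):
    esxcli nmp device list   (show devices with their current PSP/SATP)
    esxcli swiscsi nic list -d vmhba33   (show NICs bound to the software iSCSI adapter)
    esxcli corestorage claimrule list   (show current claim rules, e.g. for path masking)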



vCenter time out settings

From vSphere client: Administration > vCenter Server Settings > Timeout Settings.


Or you can also increase the timeout value in the vCenter Server database (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1002721)

Affinity

VM-VM affinity (VM MSCS cluster)
Host-VM affinity (CPU affinity)
NUMA affinity

Port group security

  • Promiscuous Mode (Reject Default)
  • MAC Address Change (Accept Default)
  • Forged Transmit (Accept Default)
Modify from the vSphere client, or from the command line with vim-cmd

PowerCLI could also change port group security settings


NUMA controls

A NUMA node consists of a set of processors and their local memory.

The NUMA load balancer in ESX assigns a home node to a VM, and the VM's memory is allocated from that home node. Since the VM rarely migrates away from the home node, its memory access is mostly local. All vCPUs of the VM are scheduled within the home node too.


Virtual Machine Attributes

.vmx file; vSphere VM settings:

Hardware tab:
  • Passthrough devices can be added here
  • VMCI device is restricted for security by default: VM high-speed communication via the VMkernel
  • Network adapter options for TSO and Jumbo Frames
  • Hard disk: choice between VMDK and RDM
 Options tab:
  • General options: name, os, etc
  • VMware Tools
  • Guest power management: guest OS in standby, and VM's response (standby/suspend)
  • Advanced
           General: Settings: Disable acceleration; Enable logging (checked); Debugging and statistics (Run normally); Configuration parameters (click to configure)
            CPUID Mask: CPU identification mask (expose by default); hiding certain CPU features can help vMotion compatibility
            Boot Options: Power-on delay; Force entry into the BIOS setup screen at the next reboot
            Paravirtualization: VMI (Virtual Machine Interface), disabled by default; performance boost on supported guest OSes, with an impact on vMotion (VMI will retire in 2011)
            Fibre Channel NPIV: virtual WWNs; physical RDM use case
            CPU/MMU Virtualization: Automatic by default
            Swap file location: by default, use the setting of the cluster or host on which the VM resides

Resource tab:
  • CPU at Resource pool 
  • Memory at Resource pool
  • Disk 
  • Advanced CPU: NUMA, HyperThreading Sharing (HT) - Any, None, internal. Scheduling Affinity (select physical CPUs)
  • Advanced Memory: NUMA, choose memory from node or No affinity


Host Attributes

vSphere client -> configuration -> software -> Advanced Settings

NIC Teaming load balancing / failover

Port group level settings:

Load Balancing:
  • Route based on the originating virtual port ID (default)
  • Route based on IP hash
  • Route based on source MAC hash
  • Use explicit failover order
Failover order: the active and standby adapter for this port group in specified order

Network failover Detection:
  • Link Status only (default)
  • Beacon Probing
Notify Switch (yes Default/No)

Failback (yes Default/No)

IPv6

Enabling IPv6 on a host requires a reboot; it can be done from the vSphere client: Configuration->Networking.

Or you can enable it for the service console and the VMkernel separately from the command line:
  • esxcfg-vswif -6 true
  • esxcfg-vmknic -6 true
Note that IPv6 has limitations:
  • iSCSI, experimental
  • TSO not supported
  • HA and FT not supported

Pluggable Storage Architecture (PSA)

  1. a special VMkernel layer
  2. MPP (Multipathing Plugin; by default the VMware NMP, a generic multipathing plugin)
  3. SATP + PSP
  • Manage physical path claiming and unclaiming
  • Register and de-register logical devices
  • Associate physical paths with logical devices
  • Process I/O requests to logical devices (pick the optimal physical path for the request - load balancing; failover)
  • Abort or reset logical devices

Custom Alarm at vCenter

vSphere client: Inventory->Alarms->Definitions->new Alarms

Alarm Settings:
  • General
  • Triggers (trigger type from drop-down; condition; warning; condition length; alert; condition length)
  • Reporting (range and frequency)
  • Actions (action from drop-down; configuration; once/repeat; Green->Yellow, Yellow->Red, Red->Yellow, Yellow->Green)

vCenter Server storage filters

vCenter Server uses default storage filters, which can be edited from the vSphere client:

Administration->vCenter Server Settings->Advanced Settings->config.vpxd.*
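The filter keys that can be added there (set to false to disable a filter), as far as I recall, are:
    config.vpxd.filter.vmfsFilter
    config.vpxd.filter.rdmFilter
    config.vpxd.filter.SameHostAndTransportsFilter
    config.vpxd.filter.hostRescanFilter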

Raw Device Mapping (RDM)

RDM file acts as a proxy between VM and LUN.

VMFS requires a little more CPU for I/O operation than RDM.

Virtual compatibility mode RDM provides advanced file locking for snapshot and data protection

Physical compatibility mode RDM allows the VMkernel to pass SCSI commands through to the physical device.

Microsoft Exchange cluster on ESX is a use case for RDM physical compatibility mode.


VMDirectPath Gen I

VMs running on the Intel Nehalem platform can take direct advantage of new hardware such as a 10G network adapter.

Each VM can connect to up to two passthrough devices.

Configure passthrough devices on a host: Configuration->Hardware->Advanced Settings->Configure Passthrough. A green icon indicates a device is enabled and active. A reboot is required to enable the device.

VMs on the host then could add the passthrough device to their configuration via editing settings.

TSO TCP Segmentation Offload

TSO (TSO MSS: 65535) is enabled at the VMkernel level by default, while it must be enabled at the VM level, which requires the enhanced vmxnet virtual network adapter.

Windows 2k3, RHEL4/64, and SuSE 10 are the entry guest OSes for TSO.

For VMs without enhanced vmxnet, the virtual network adapter needs to be replaced to support TSO. (Writing down the old MAC address is good practice before removing the old virtual NIC.)

TSO on a VMkernel NIC cannot simply be re-enabled; the NIC has to be re-created.

VIX

An API that lets you programmatically control hosts and their VMs.

VIX provides support for 3 languages: C, Perl, and COM. Open-source bindings for Java and Python exist, though they are not endorsed by VMware.

From ESX 3.5 Update 1, VIX is able to manage ESX and its VMs, for example:
  • copy files out of guests
  • stop and restart processes within guests
  • run programs within guests and get their output

vim-cmd

vim-cmd is a command-line front end to the host's vimsh interactive shell, exposing the VMware Infrastructure (VIM) API from the host console. It provides reporting options and helps automate scripting tasks.

vim-cmd ships with the later versions of ESXi; the equivalent on classic ESX is vmware-vim-cmd.

Earlier versions of ESX also provide vimsh for similar functionality.

vim-cmd commands available under /:
  • internalsvc/
  • vmsvc/
  • vimsvc/
  • solo/
  • proxysvc/
  • hostsvc/
  • supportsvc_cmds
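A few example calls (the VM ID 16 is a placeholder taken from the getallvms listing):
    vim-cmd vmsvc/getallvms   (list registered VMs with their IDs)
    vim-cmd vmsvc/power.getstate 16   (query the power state of VM ID 16)
    vim-cmd vmsvc/power.on 16   (power on VM ID 16)
    vim-cmd hostsvc/maintenance_mode_enter   (put the host into maintenance mode)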

Storage PSP (Path Selection Policy)

VMware supports three PSP:
  • Fixed
  • MRU
  • Round Robin
Path Selection Policy can be managed by esxcli nmp with the following objects:
  • fixed
  • roundrobin
  • psp
  • path
  • device
  • satp (Storage Array Type Plugin)
  • boot
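For example, checking and changing the PSP for a device (vSphere 4 syntax; the naa ID is a placeholder):
    esxcli nmp psp list   (list available PSPs)
    esxcli nmp device list   (show each device with its current PSP and SATP)
    esxcli nmp device setpolicy --device naa.xxxx --psp VMW_PSP_RR   (switch the device to Round Robin)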


Virtualization evolution

  1. Binary Translation
  2. Intel VT-x/AMD-v (put guest OS back to ring 0), software for MMU
  3. Intel VT-x + EPT or AMD-v + AMD RVI (hardware MMU)
Paravirtualization (VMI) can improve guest performance significantly (if the guest OS supports it), while it restricts the VM's compatibility for vMotion.

Translation Lookaside Buffer (TLB)

Host memory overhead

  1. VMM overhead
  2. Scheduling overhead; network bandwidth insufficient; storage overhead (slow/overloading)
  3. vMotion

Linked Mode vCenter

Uses an ADAM DB; supports 3,000 hosts and 10,000 VMs

Reconcile Roles in  Linked Mode vCenter

Multiple vCenters can join a Linked Mode Group.

Prerequisite for Linked Mode vCenter:
  • DNS for replication
  • two-way trust between different domains
  • Admin account for linked mode
  • Network time synchronization
*** how will Linked Mode vCenter be tested in the DCA exam?

I/O Meter

I/O subsystem measurement and characterization tool for single and clustered systems. GNU license.

Solaris, Windows, and Linux platforms

vscsiStats

/usr/lib/vmware/bin/vscsiStats

Collects and then reports counters on storage activity.

VM disk I/O workload characterization:
  • -s start collection (runs for 30 minutes by default)
  • -x manually stop collection
  • -r reset all counters to zero
  • -w WorldGroup ID
  • -i (with -w) specific vSCSI handle ID within the WorldGroup ID
  • -c output results in CSV
  • -p print histogram results (takes a histogram type option)
Supports NFS storage performance monitoring.
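A typical workflow (the world group ID 12345 is a placeholder from the -l output):
    vscsiStats -l   (list VMs with their world group IDs and vSCSI handle IDs)
    vscsiStats -s -w 12345   (start collection for that world group)
    vscsiStats -p latency -w 12345   (print the latency histogram; other types include ioLength, seekDistance, outstandingIOs, interarrival)
    vscsiStats -x -w 12345   (stop collection)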


TPS, Ballooning, and Swapping

TPS: Transparent Page Sharing, on by default in ESX(i) (4 KB pages only)

Ballooning: requires VMware Tools in the VM; reclaims up to 65% by default (free and idle memory)

Swapping to/from disk is caused by insufficient host memory or resource pool settings.

A VM without VMware Tools installed will go straight to memory swapping when host memory is insufficient.

Host free-memory states: high 6% (and above), soft 4%, hard 2%, low 1%

Memory over-commitment -> makes higher server consolidation possible

ping vs vmkping

ping: service console network connectivity command

vmkping: VMkernel network connectivity command

ESXi doesn't have a service console, so the two are the same there.
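For example, to verify VMkernel connectivity (and a jumbo-frame path) to a storage target at a placeholder address:
    vmkping 192.168.10.50
    vmkping -s 8972 192.168.10.50   (8972-byte payload, roughly a 9000-byte MTU minus IP/ICMP headers)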

PVLAN

An extension of the VLAN standard that can isolate traffic between VMs within the same VLAN.

A DMZ is a use case for PVLAN.


A PVLAN consists of a primary VLAN (Promiscuous) and secondary VLANs (each identified by a unique VLAN ID); the secondary VLANs (Isolated/Community) exist only within the primary VLAN.

Both the physical switch (which must be PVLAN-aware, due to MAC address discovery) and the vDS can identify the VLAN IDs used for PVLAN.

The physical switch also tags the traffic; it must trunk to the ESX(i) hosts, and those trunk ports must not be in a secondary VLAN.

VLAN Trunking

Configures the switch to pass tagged traffic through to virtual machines.
vSS uses VLAN ID 4095 to indicate VLAN trunking; a vDS port group has an explicit option for it.

vDS VLAN trunking has a VLAN trunking range, which lets you specify which VLANs are passed.

VMs on a VLAN trunking port group need to tag and untag the traffic themselves (Lab Manager service VM, sniffer, DVUplinks VLAN).


vDS DVPort Group Port binding

The DVUplink port group also has port binding, but it is not editable from the vSphere client.

Port Binding Options: (Static, Dynamic, no binding)


esxcfg-vmknic

The VMkernel NIC is the TCP/IP stack interface that handles vMotion, iSCSI, and NFS traffic.

Another type of virtual adapter is the Service Console, which also uses the TCP/IP stack to support host management traffic. esxcfg-vswif is used to configure the Service Console.

The third port group type (a partitioned network connection within a virtual switch) is Virtual Machine, which doesn't require a virtual adapter.
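A sketch of creating the two adapter types from the console (IP addresses and port group names are placeholders):
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMkernel-vMotion"   (add a VMkernel NIC on an existing port group)
    esxcfg-vmknic -l   (list VMkernel NICs)
    esxcfg-vswif -a -i 192.168.10.12 -n 255.255.255.0 -p "Service Console" vswif0   (add a Service Console interface on ESX)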

DCUI

ESXi Direct Console User Interface (DCUI) could:

  • Configure Root password
  • Configure Lockdown mode
  • Configure Management network
  • Restart Management Network
  • Test or disable the Management Network
  • Configure Keyboard
  • View Support Information
  • View System Logs
  • Reset System Configuration
  • Remove Custom Extensions
  • Shutdown or Restart /Reboot the ESXi Server

ESXi Lockdown mode

When you enable Lockdown mode, only the vpxuser has authentication permissions. Other users cannot perform any operations directly on the host. Lockdown mode forces all operations to be performed through vCenter Server. A host in Lockdown mode cannot run vCLI commands from an administration server, from a script, or from the vMA against the host. In addition, external software or management tools might not be able to retrieve or modify information from the ESXi host.

 You can enable Lockdown mode from the Direct Console User Interface (DCUI).

Even while the host is running in Lockdown mode, you can log in locally to the direct console user interface as the root user and disable Lockdown mode.


The vim-cmd command or PowerCLI can also be used to enable or disable Lockdown mode (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008077)

SSH on ESXi

After Tech Support Mode is enabled, you could then enable SSH on ESXi.

  1. vi /etc/inetd.conf  (uncomment the lines starting with #ssh)
  2. kill -HUP $(cat /var/run/inetd.pid)  (restart the inetd process)
Of course, if you need to disable SSH on ESXi, just comment the ssh lines in /etc/inetd.conf again and restart the inetd process.

ESXi technical support mode

ESXi Tech Support Mode is inactive in a default installation; you can enable it from the ESXi configuration via the vSphere client (Configuration->Software->Advanced Settings->VMkernel->Boot->VMkernel.Boot.techSupportMode checkbox).

Since the setting is in the Boot section, a reboot is required to enable or disable Tech Support Mode on ESXi.

From the DCUI (iLO, DRAC, AMM, or local console), press ALT+F1 to display the ESXi console screen, type unsupported, and enter the root password.

The default shell for the ESXi console is ash

vicfg-module

A vSphere CLI command; a simpler implementation of esxcfg-module, without the ability to export, unload, or disable modules in the VMkernel.

iSCSI

Hardware iSCSI needs an iSCSI hardware initiator, which functions much like an HBA, to access iSCSI storage over a TCP/IP network instead of FC.

VMware also provides a software initiator for accessing iSCSI storage over a TCP/IP network.

Software iSCSI requires more setup than hardware iSCSI:
  • create a new VMkernel port for the physical NIC
  • enable the software iSCSI initiator
  • If using multiple physical NICs, configure port binding
  • If needed, enable Jumbo Frames end to end (VMkernel, switch)
Configuring Discovery Addresses for the iSCSI Initiator
  • Dynamic Discovery (SendTargets)
  • Static Discovery
CHAP parameters for iSCSI Initiators
  • CHAP (One-way or Mutual)
  • Additional Parameter (Header Digest, Data Digest etc)
*Openfiler is a great tool to test out the functionality of iSCSI. It even has a VM version!
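A rough outline of the software iSCSI setup from the command line (the adapter name vmhba33 and addresses are placeholders; discovery and CHAP can also be set in the vSphere client):
    esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "iSCSI-PG1"   (VMkernel port for the software initiator)
    esxcfg-swiscsi -e   (enable the software iSCSI initiator)
    esxcli swiscsi nic add -n vmk1 -d vmhba33   (bind the VMkernel NIC to the iSCSI adapter when using multiple NICs)
    esxcfg-swiscsi -s   (rescan the software iSCSI adapter)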


vDS

I/O plane: a hidden switch on each host

A vSphere Distributed Switch (vDS) is typically created from the vSphere client, although customized PowerCLI, with help from the vSphere SDK, can also do the job. (http://blogs.vmware.com/vipowershell/2010/07/vds-in-powercli-41.html)

esxcfg-vswitch can only work with DVPorts, as such:
  •  -P|--add-dvp-uplink=uplink  Add an uplink to a DVPort on a DVSwitch.
  • -Q|--del-dvp-uplink=uplink  Delete an uplink from a  DVPort on a DVSwitch.
  • -V|--dvp=dvport             Specify a DVPort Id for the operation. 
To create the vswif and uplink it to the DVS port:
esxcfg-vswif -a -i IP_address -n Netmask -V dvSwitch -P DVPort_ID vswif0    
For example:
esxcfg-vswif -a -i 192.168.76.1 -n 255.255.255.0 -V dvSwitch -P 8 vswif0 


vCenter Server Heartbeat

Protection level:
  • Server protection: hardware failure or OS crashes
  • Network Protection: up to three nodes to make sure server visible on the network
  • Application protection: keeps services alive
  • Performance protection: operating within the normal range
  • Data protection: failover for data
The failover/failback solution for vCenter Server itself

HA, DRS, DPM

VMware Cluster feature:
  • HA
  • DRS (plus DPM: using WoL or iLO to power hosts on); EVC enhances vMotion compatibility
VMware HA
  • VM options 
  1. VM restart priority: (Disabled, Low, Medium, High)
  2. Host isolation response (Leave powered on, Power off, Shut down)
  • VM monitoring: monitoring sensitivity, low -> high
VMware DRS
  • Rules:
  • VM Options: individual VM level enabled
  • Power management: Individual host level option
  • EVC
Swap file location: same directory as VM recommended.

**** HA/DRS technical deep dive book

FT Fault Tolerance

VMware FT compatibility requirements:
  • same build number for ESX(i) hosts
  • Common shared storage
  • Single-vCPU VMs only
  • Thin-provisioned disks are not supported and will be converted to thick disks automatically
  • No Snapshots
FT Logging: (port group settings)
  • Virtual NIC for FT Logging: checkbox
  • separated from vMotion NIC recommended.
For VMware FT to be supported, the servers that host the virtual machines must use compatible processors selected from the categories as documented below. Processors within the same category are always compatible with each other. Note that AMD Opteron Barcelona processors are not compatible with Intel processors. 

Intel processors require lockstep technology support in the CPU and chipset, and VT needs to be enabled in the BIOS.

HA is required for FT at Cluster level.

Host Profile

A Host Profile is ideal for networking settings on ESX(i) hosts

Flexible profile for both ESX and ESXi

PowerCLI Cmdlets for host profiles

NTP and NAS settings

vSphere Client -> Inventory ->Management->Host Profile

CDP Cisco Discovery Protocol

CDP is used to share information about other directly connected Cisco networking equipment, such as upstream physical switches. This is useful when troubleshooting network connectivity issues related to VLAN tagging methods on virtual and physical port settings.

1. CDP info from vSphere client

2. CDP info from PowerCLI

3. CDP info from esxcfg-info --network

4. vim-cmd (vmware-vim-cmd) hostsvc/net/query_networkhint

5. esxcfg-vswitch -B -b (vSS)

CDP status: down, listen, advertise, both
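For example, with esxcfg-vswitch (vSwitch0 as a placeholder):
    esxcfg-vswitch -b vSwitch0   (show the current CDP status)
    esxcfg-vswitch -B both vSwitch0   (set CDP to both listen and advertise)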


Update Manager

Configure a shared repository (vCenter server local or http share)
vSphere client: Home->Update Manager->Configuration->Settings->Patch Download->Patch Download Settings->Use a shared repository (radio button); the URL must be validated

Manually downloading updates to a repository requires the Update Manager privilege.

Orchestrated vSphere upgrades: upgrade hosts, virtual machines, and the VMware Tools of the VMs in the inventory at the same time.

vmware-umds (Update Manager Download Service) downloads patches for ESX(i) environments where the Update Manager server has no internet connection.

esxcfg-volume

Command line to manage volumes detected as snapshots/replicas; it can also resignature or persistently mount a snapshot/replica if the original volume is not online.

The vCLI command for this is vicfg-volume.

For datastores on a host, use esxcfg-scsidevs for SCSI volumes and esxcfg-nas for NFS shares.

esxcfg-rescan scans for device changes.
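A sketch of the snapshot-volume workflow (the UUID/label comes from the -l output):
    esxcfg-volume -l   (list volumes detected as snapshots/replicas)
    esxcfg-volume -m <VMFS UUID|label>   (mount non-persistently)
    esxcfg-volume -M <VMFS UUID|label>   (mount persistently across reboots)
    esxcfg-volume -r <VMFS UUID|label>   (resignature the volume)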

vShield

vShield App protects applications in the virtual datacenter from network threats.

vShield Edge is a network gateway solution that protects the edges of the virtual datacenter and helps organizations maintain proper segmentation between different organizational units.

vShield Endpoint provides on-host antivirus and malware protection that reduces performance latency and eliminates the need to maintain individual security agents in each and every virtual machine.

vShield Manager, included with all vShield products, is the central management point.

vShield Zones, included with vSphere, provides basic protection from network-based threats in vCenter, based on defined zones using IP, port, and similar information. It is deployed per vSphere host, serves as an application-layer firewall for the vDC, and can be upgraded to vShield App.

Features:
  • Application aware firewall
  • policy management
  • logging and auditing
  • flow monitoring
Component:
  • vShield Manager
  • vShield: the active security component; monitors traffic between hosts and between VMs on the host.
vShield Zone CLI mode:
  • Basic
  • Privileged
  • Configuration
  • Interface configuration




PowerCLI

Along with the vSphere CLI (which bundles the vSphere SDK for Perl) and the vMA, PowerCLI is one of the three offerings from VMware for managing hosts remotely.

PowerCLI is a powerful command line tool that lets you automate all aspects of vSphere management, including network, storage, VM, guest OS and more. PowerCLI is distributed as a Windows PowerShell snapin, and includes more than 230 PowerShell cmdlets, along with documentation and samples.

PowerCLI runs on top of Windows PowerShell.

It connects to a vCenter server or to a host directly, via: Connect-VIServer <host or vCenter address>.

PowerCLI works on:
  • Virtual Switch
  • Port Group
  • Data Center
  • Data Store
  • DRS Rules
  • Folder
  • Snapshot
  • Cluster
  • Resource Pool
  • Security
  • VM
  • Host
  • VM Invoked

vSphere CLI

Issues commands to a host, directly or via vCenter, from a remote workstation; this requires HTTPS authentication (via command line, session file, or configuration file; password prompt, Active Directory, credential store, and environment variables can also be used for authentication).

For Windows: command.pl
For Linux: command
For vMA, a prepackaged Centos OS VM: command

CLI could be used in windows or Linux script to perform repetitive tasks on multiple hosts.

Most commands start with vicfg-***

Here is some commands not started with vicfg:
  • vifs
  • vihostupdate(35)
  • vmkfstools
  • esxcli
  • vmware-cmd
  • svmotion
  • resxtop
For easier migration, some vicfg-* commands also have an esxcfg-* equivalent in the CLI, which may become obsolete in future CLI releases.
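For example, a vicfg command run remotely (host names and credentials are placeholders; on Windows append .pl to the command name):
    vicfg-nics --server esx01.example.com --username root -l   (list the host's physical NICs; prompts for the password)
    vicfg-nics --server vc01.example.com --vihost esx01.example.com -l   (the same, routed through vCenter)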

vMA vSphere Management Assistant

vMA is a prepackaged CLI VM from VMware. It can be imported into vCenter or directly into a vSphere host.

There are two default users on vMA: vi-admin and vi-user. vi-user is disabled by default. A vCenter Server target can only be authenticated with the vi-admin account.

sudo passwd vi-user enables the vi-user account, which is not in sudoers.

sudo vifp adds a target server to vMA. A target server can be a vCenter server or a host. Of course, sudo vifp can also remove a target server from vMA. vifpinit initializes vi-fastpass to a target server. A session file can also be used to speed up the authentication process.

vMA can be shut down and removed from the host/vCenter just like any other VM.

vMA can be kept up to date via the command sudo vima-update

*vima-update is just a symbolic link to vma-update

vMA can be configured to collect log files from hosts and the vCenter server according to a specified log policy; vilogger is the command.

vifplib is a library for Perl or Java that uses vi-fastpass to programmatically connect to vMA targets
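A typical vi-fastpass session (host names are placeholders):
    sudo vifp addserver esx01.example.com   (add a target host; prompts for the root password)
    vifp listservers   (verify the target list)
    vifpinit esx01.example.com   (initialize vi-fastpass for that target)
    vicfg-nics -l   (any vCLI command now runs against the target without extra credentials)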

esxcfg-advcfg

Command line to manage the values of VMkernel advanced settings options.

For example, you can enable/disable NetQueue from this command line.

The vCLI equivalent is vicfg-advcfg.
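For example, reading and setting an advanced option (the option below is only an illustration; suitable values depend on the environment):
    esxcfg-advcfg -g /Net/TcpipHeapSize   (get the current value)
    esxcfg-advcfg -s 30 /Net/TcpipHeapSize   (set a new value)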

vicfg-ntp

A vCLI command that provides an easy way to set up the NTP service on a host. Previously, there were multiple steps: change the configuration files and step-tickers, restart ntpd, and open the firewall via esxcfg-firewall.

vicfg-snmp

A vCLI command that simplifies the steps to enable the SNMP service on a host. Without this command, you would need to complete the following steps:
  1. add the community string to the file /etc/snmpd/snmpd.conf
  2. restart snmpd
  3. esxcfg-firewall -e snmpd

vmkload_module

The VMkernel module loader; it can also be used to unload a module from memory.

Obviously, esxcfg-module and vicfg-module call vmkload_module to manage modules in the VMkernel.
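For example (the module name is just an illustration):
    vmkload_module -l   (list loaded VMkernel modules)
    vmkload_module -u s2io   (unload the s2io module)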

ssl certificate

rui.crt, rui.key
location: /etc/vmware/ssl
openssl is used to create certificates (or generate a CSR to submit to a CA) for the host, vCenter Server, Update Manager, etc.
The hostd / vCenter / Update Manager service needs to be restarted for a certificate change to take effect.
ESX: service mgmt-vmware restart
ESXi: services.sh restart
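A minimal self-signed example with openssl (a sketch; for production, generate a CSR and have a CA sign it instead; the file names follow the rui.key/rui.crt convention above):
    openssl genrsa 1024 > rui.key
    openssl req -new -x509 -days 365 -key rui.key -out rui.crt
Copy both files to /etc/vmware/ssl and restart the management service as noted above.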


SSL session timeout

file:  /etc/vmware/hostd/config.xml

 add entry:

<vmacore>
...
<http>
<readTimeoutMs>20000</readTimeoutMs>
</http>
...
<ssl>
...
<handshakeTimeoutMs>20000</handshakeTimeoutMs>
...
</ssl>
</vmacore>
restart the hostd process

NPV N_Port Virtualization

An advanced FC feature: NPV allows you to add switches and ports to the fabric without requiring more domain IDs.

iptables

Modern Linux uses iptables to handle the firewall; vMA uses it as well.

iptables evolved from ipchains, so it still has 3 built-in chains: INPUT, OUTPUT, FORWARD

Targets: ACCEPT, DROP, QUEUE, RETURN

Options: protocol, source, destination, interface, jump target, goto chain, match, numeric, table, line-numbers

Commands: list, flush, zero, append, delete, insert, replace, delete-chain, new, rename, policy

An iptables command works on a rule or a chain

--sport source port number
--dport destination port number

Port range: port_number1:port_number2

-p <protocol_name> (no port number needed)

Custom chains can also be created and loaded

* Only the filter table is covered here for firewall purposes; the other two tables (NAT and mangle) are ignored
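For instance, on vMA, opening the syslog port and reviewing the rule set:
    sudo iptables -I INPUT -i eth0 -p udp --dport 514 -j ACCEPT   (insert an accept rule at the top of INPUT)
    sudo iptables -L INPUT -n --line-numbers   (list INPUT rules with numbers)
    sudo iptables -D INPUT 1   (delete rule 1 from INPUT)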

esxcfg-vswitch

Command line that mostly works on standard vSwitches:
  • create/delete a vSS and port groups (vSS)
  • add/delete uplink adapters (physical NICs) to a vSwitch/port group
  • add/delete uplinks to a DVPort (service console or VMkernel on a vDS)
  • set CDP status for a vSwitch
  • set MTU for a vSwitch
  • set VLAN ID for a port group
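A short build-up of a standard vSwitch (names, VLAN ID, and NIC are placeholders):
    esxcfg-vswitch -a vSwitch1   (create the vSwitch)
    esxcfg-vswitch -A "VM Network 2" vSwitch1   (add a port group)
    esxcfg-vswitch -L vmnic2 vSwitch1   (link an uplink adapter)
    esxcfg-vswitch -v 105 -p "VM Network 2" vSwitch1   (set VLAN ID 105 on the port group)
    esxcfg-vswitch -m 9000 vSwitch1   (set the MTU)
    esxcfg-vswitch -l   (list the result)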

net-dvs

/usr/lib/vmware/bin/net-dvs (same location as vmkping, vscsiStats, vmware-vim-cmd)
dvs information: /etc/vmware/dvsdata.db (cached info from /vmfs/volumes/.dvsdata), which can be read by the net-dvs or vm-support command.
Running net-dvs with no arguments will display all the information about the dvs.



esxtop/resxtop

resxtop lacks replay mode

reserved memory could be ballooned away.

CPU performance major counters:
  • %RDY 10 Overprovisioning of vCPUs, excessive usage of vSMP, or a limit has been set (check %MLMTD)
  • %CSTP 3 Excessive usage of vSMP. Decrease the number of vCPUs for this particular VM; this leads to increased scheduling opportunities. (Co-deschedule state time percentage)
  • %MLMTD 0 If larger than 0, the world is being throttled. Possible cause: a limit on CPU (resource pool / world limit setting)
  • %SWPWT 5 (Swap Wait Time) VM is waiting on swapped pages to be read from disk: memory overcommitment.
memory performance major counters:
  • MCTLSZ 1 If larger than 0, the host is forcing VMs to inflate the balloon driver to reclaim memory as the host is overcommitted.
  • SWCUR 1 If larger than 0, the host has swapped memory pages in the past: over-commitment
  • SWR/s 1 If larger than 0, the host is actively reading from swap (vswap): excessive over-commitment
  • SWW/s 1 If larger than 0, the host is actively writing to swap (vswap): excessive over-commitment
  • N%L 80 If less than 80, the VM experiences poor NUMA locality: the ESX scheduler isn't applying NUMA optimization for the VM, and memory is accessed via the interconnect (see NRMEM)
network performance major counters:
  • %DRPTX 1 Dropped transmit packets: very high network utilization
  • %DRPRX 1 Dropped receive packets: very high network utilization
storage performance major counter:
  • GAVG 25 = DAVG + KAVG
  • DAVG 25 Disk latency most likely caused by the array
  • KAVG 2 Disk latency caused by the VMkernel, which means queuing; check "QUED"
  • QUED 1 Queue maxed out. Possibly queue depth set too low; check with the vendor for the optimal queue depth value.
  • ABRTS/s 1 Aborts issued by the VM because storage is not responding (60 seconds for Windows): path failure or array not accepting I/O
  • RESETS/s 1 number of commands reset per second
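To capture these counters for offline review, batch mode can be used (interval and count are placeholders):
    esxtop -b -d 5 -n 360 > esxtop-stats.csv   (5-second samples for 30 minutes, run on the host)
    resxtop --server esx01.example.com -b -d 5 -n 360 > esxtop-stats.csv   (remote equivalent; prompts for credentials)
The CSV can then be opened in Windows perfmon or a spreadsheet.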

tcpdump / tcpdump-uw

ESX tcpdump
  •  -s 1514 (normal traffic) 9014 (Jumbo Frame)
  • -i interface (VM NIC, kernel port nic, console nic)
  • -w outfile for other network traffic analyzer (traffic.pcap for WireShark )
  • Promiscuous mode: set to Accept in the security settings of the switch/port group and on the VM's NIC

ESXi tcpdump-uw
  • -s 1514 (normal traffic) 9014 (Jumbo Frame)
  • -i interface (VM NIC, kernel port nic, console nic)
  • -w outfile for other network traffic analyzer (traffic.pcap for WireShark )
Wireshark is similar to tcpdump, but has a graphical front-end, plus some sorting and filtering options.
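For example, capturing VMkernel traffic to a pcap file for Wireshark (interface name and output path are placeholders):
    tcpdump-uw -i vmk0 -s 1514 -w /tmp/traffic.pcap   (ESXi; on ESX use tcpdump -i vswif0 ... instead)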

log identification & configuration

1. vCenter server log
  • location: C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\Logs
  • major logs: vpxd and vpxd-profiler; log files rotate with only two uncompressed log files; recreated when vpxd.exe restarts or the size limit is reached (5 MB by default for vpxd-number.log)
  • log options from the vSphere client: Administration->vCenter Server Settings->Logging Options
  • Export (download) vCenter Server log from vSphere client: Administration->vCenter Server Settings->Export logs (with the option to export the all/selected hosts log file too)
  • Generate vCenter Server log bundle (extended if needed) from vCenter server
2. ESX host log
  • location: /var/log, /var/log/vmware
  • major logs: vmkernel, vmkwarning, messages; vpxa.log, hostd.log (logrotate configures log rotation)
  • host logs can be exported from the vSphere client as described above
  • vm-support
  • /etc/sysconfig/syslog; /etc/logrotate.conf
  • /etc/syslog.conf
3. ESXi host log
  • location: /var/log, /var/log/vmware
  • major logs: messages; vpxa.log, hostd.log (log rotation)
  • host logs can be exported from the vSphere client as described above
  • /etc/syslog.conf (the vSphere client on an ESXi host has an option to set a remote host from Advanced Settings)
  • host logs can also be viewed from the DCUI
  • vm-support or wget https://host/cgi-bin/vm-support.cgi



host log forwarding, vMA as log server

1. ESX
  • vi /etc/syslog.conf (add entry *.*  @ip_of_vMA)
  • /etc/rc.d/init.d/syslog restart (restart syslogd)
  • esxcfg-firewall -o 514, udp, out, syslog_traffic (open the firewall for syslog_traffic )
2. ESXi
  • vicfg-syslog -s ip_of_vMA (Setting remote log server from vMA)
  • vSphere client also does the job; or just edit /etc/syslog.conf, /sbin/services.sh restart though
3. Config vMA to receive syslog from ESX/ESXi hosts (vi vMA)
  • sudo vi /etc/sysconfig/syslog (modify entry to receive log from remote server, unmark SYSLOGD_OPTIONS)
  • sudo /etc/rc.d/init.d/syslog restart (restart syslogd)
  •  sudo iptables -I INPUT -i eth0 -p udp --dport 514 -j ACCEPT
4. Enable vMA to receive syslog from ESX(i) host via vMA
  • vilogger enable --server <ESX(i)_name>
  • vilogger list (verify if succeed)
  • vilogger disable --server <ESX(i)_name>
  • default log location: /var/log/vmware/ESX(i)_name (/etc/vmware/viconfig/vilogdefaults.xml)
  • option to change logname, logpolicy (collectionperiod, numrotation, maxfilesize)


esxcfg-firewall

ESX uses esxcfg-firewall to handle firewall settings for the service console; it is simpler than iptables.
  • -q will display open port and enabled services
  • -s will display known services
  • -e enable service
  • -d disable service
  • -o will open a new port <port, udp|tcp, in|out, name>
  • -c will close a port previously opened by -o
ESXi doesn't implement iptables, so there is no esxcfg-firewall on ESXi, nor does the vCLI provide one.

The vSphere client can enable/disable known services. For custom services, the command line is still the choice.
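For example, allowing a custom syslog flow and checking the result:
    esxcfg-firewall -o 514,udp,out,syslog_traffic   (open outbound UDP 514)
    esxcfg-firewall -e ntpClient   (enable a known service)
    esxcfg-firewall -q   (verify open ports and enabled services)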



NFS Storage

VMware utilizes NFS storage to host VMs and regular data files, such as ISO images.

esxcfg-nas -a -o <hostname or IP> -s <share path on remote host> -y <label>

This will add a read-only NFS share (-y) named <label> under /vmfs/volumes.

vicfg-nas is the CLI equivalent to do so.

Storage Path Masking

This is VMware's approach to LUN masking (after array-level presentation and switch-level zoning).

Path masking is VMware's preferred first step for dropping a datastore from a host.

Starting with vSphere, esxcli corestorage is used to handle path masking, instead of the previous configuration-file editing.

esxcli corestorage claiming is temporary; it won't survive a reboot.

Claim Rule modification does not operate on the VMkernel directly. Changing the current claim rules requires two steps:
  1. a call to add/remove/move claim rule: esxcli corestorage claimrule add/delete/move
  2. a call to esxcli corestorage claimrule load to load the change from config file to VMkernel.
vSphere uses the MASK_PATH plugin as a claim rule to implement storage path masking, instead of modifying a configuration file. Rule IDs run from 0 to 64k-1, and rules 101 to 65435 are available for general use.

Here are the steps for Path Masking:
  1. esxcli corestorage claimrule add --plugin MASK_PATH --rule <rule ID> --type location -A <adapter> -C <> -T <>  -L <>
  2. esxcli corestorage claimrule list to verify
  3. esxcli corestorage claimrule load to load the new rules into VMkernel
  4. esxcli corestorage claiming unclaim (remove old rules)
  5. esxcli corestorage claimrule run (run the path claiming rule without reboot)
Accordingly, here are the steps for unmasking a path:

  1. esxcli corestorage claimrule delete  --rule <rule ID> --type location -A <adapter> -C <> -T <>  -L <>
  2. esxcli corestorage claimrule list to verify
  3. esxcli corestorage claimrule load to load the new rules into VMkernel
  4. esxcli corestorage claiming unclaim (remove old rules)
  5. esxcli corestorage claimrule run (run the path claiming rule without reboot)

iSCSI NIC Binding

The esxcli swiscsi nic command specifies NIC bindings for VMkernel NICs; the use case is software iSCSI storage.

esxcli swiscsi nic <commands> --adapter <iSCSI adapter name> --nic <VMkernel NIC>
  • list
  • add
  • remove
* Create the VMkernel NIC from the vSphere client or esxcfg-vmknic; the MTU has to be specified at creation time. PowerCLI can modify the MTU for a virtual adapter afterwards (http://communities.vmware.com/thread/237181)

NPIV N_Port (Node Port) ID Virtualization

NPIV is a useful Fibre Channel feature which allows a physical HBA (Host BUS Adapter) to have multiple Node Ports. Normally, a physical HBA would have only 1 N_Port ID. The use of NPIV enables you to have multiple unique N_Port ID’s per physical HBA. NPIV can be used by ESX4 to allow more Fibre Channel connections than the maximum physical allowance which is currently 8 HBA’s per Host or 16 HBA Ports per Host.   

NPIV requires supported hardware, including HBAs and switches. With NPIV you can present a LUN to a specific VM (the use case for RDM).

Jumbo Frames (MTU 9000)

Jumbo Frames have to be set end-to-end to take effect.

1. Jumbo Frames on a VMkernel NIC can only be set at creation time, not modified later
    esxcfg-vmknic -a -i x.x.x.x -n <netmask> -m 9000 <port_group>
2. Jumbo Frame Virtual Switch level for vSS
    esxcfg-vswitch -m 9000 <vSwitch>
3. Jumbo Frame for vDS, could be set from vSphere client interface.
4. Jumbo Frame for VM level
    Enhanced vmxnet is used for enabling Jumbo Frame at VM settings
5. Jumbo Frame at guest OS level.
    Inside Guest OS, configure the network adapter to allow Jumbo Frames
   Linux (Red Hat): ifconfig eth0 mtu 9000 (add MTU=9000 to /etc/sysconfig/network-scripts/ifcfg-eth0 to make the change permanent)
   Windows: NIC property->advanced->Jumbo Frame->9000
6. Application protocol tuning.
    update existing NFS, SMB, iSCSI for Jumbo Frame.

NetQueue

NetQueue is a feature for virtual machines hosted on new Intel hardware (supporting multiple receive queues) to improve receive-side networking performance.

1. By default it is enabled at Configuration->Software->Advanced Settings->VMKernel->Boot->VMKernel.Boot.netQueueEnabled
The same can also be done by adding a line to /etc/vmware/esx.conf: /vmkernel/NetQueueEnabled=True, or by:
esxcfg-advcfg --set-kernel 1 netNetqueueEnabled (to enable); esxcfg-advcfg --set-kernel 0 netNetqueueEnabled (to disable)

2. esxcfg-module is used to configure the NIC driver (for example, s2io) to use NetQueue (also via vicfg-module from the vCLI).
    vicfg-module -s "intr_type=2 rx_ring_number=8" s2io
3. To disable NetQueue on the NIC driver, just use vicfg-module -s "" s2io

* s2io is loaded by vmkload_module "path-to-s2io-driver"

PrintOn for Android

Wireless Printing from Android

Tuesday, December 6, 2011

VOIP on Android with Google Voice

SIP service provider (Register your SIP account: usually call in is free. ekiga.net)

DID service provider (IPKall is the one that still offers a free number so far; hopefully more will)

VOIP client on Android (CSipSimple, SIPDroid etc)

Since you are likely to get only free call-in from the SIP provider, Google Voice can play a key role for pseudo dial-out (free so far for the US and Canada), as well as call-in forwarding. Every time you dial a number, GV intercepts that outgoing call and makes it for you, after first calling in and connecting to your DID number. There is a free Android app named GVoice Callback that does this job on the Android system.

Yahoo! email manual set up at HTC Inspire 4G

It is quite strange that Yahoo's email auto-setup for the HTC Inspire 4G only worked for a few days, then it just stopped syncing new email.

Googling around, I noticed it is quite common for the EVO and Thunderbolt too, which are the Inspire's siblings. Here are the manual settings:

Incoming Server:
  • your yahoo email address
  • username is also your yahoo email address
  • Protocol: IMAP
  • IMAP Server: imap.mail.yahoo.com
  • Security Type: SSL
  • Server Port: 993
Outgoing Server: 
  • check login required
  • User name is your yahoo email address again
  • SMTP Server: android.smtp.mail.yahoo.com
  • Security Type: SSL
  • Server Port: 465

Messi and Rooney at WSJ

Not much into the WSJ at all, but I just noticed that both Messi and Rooney were covered in it, on the sports page of course, although the front page is still about the Euro Zone.

Monday, December 5, 2011

AA Mobile Check-in at O'Hare Airport

It sounds cool: they send you the confirmation code, you put it into the mobile app, and the two-dimensional barcode appears on your smartphone, just like Starbucks.

Well, if you need to check in bags, you still need to go to the self check-in kiosk and use the bag-only option. At that point, you will still need a credit card to pay for the bag fee. (Guess I did not bind my AAdvantage account with any credit card?)

Of course, security and bag check-in personnel still need your ID too. So far, the main benefit of mobile check-in is the electronic boarding pass, which you can show to the TSA (security) and at gate check-in.
Of course, security and bag check-in personnel still need your ID too. So far, all the benefit of Mobile Check-in is the electronic boarding pass, so you could show it to TSA (security) and Gate check-in.