Saturday, October 13, 2012

IaaS PaaS SaaS

Infrastructure as a Service (IaaS)

Infrastructure as a Service is a provision model in which an organization outsources the equipment used to support operations, including storage, hardware, servers and networking components. The service provider owns the equipment and is responsible for housing, running and maintaining it. The client typically pays on a per-use basis.
Characteristics and components of IaaS include utility-style, pay-per-use billing, automation of administrative tasks, dynamic scaling, policy-based services and Internet connectivity.
IaaS is one of three main categories of cloud computing service. The other two are Software as a Service (SaaS) and Platform as a Service (PaaS).
Infrastructure as a Service is sometimes referred to as Hardware as a Service (HaaS).

Platform as a Service (PaaS) is a way to rent hardware, operating systems, storage and network capacity over the Internet. The service delivery model allows the customer to rent virtualized servers and associated services for running existing applications or developing and testing new ones.
Platform as a Service (PaaS) is an outgrowth of Software as a Service (SaaS), a software distribution model in which hosted software applications are made available to customers over the Internet. PaaS has several advantages for developers:
  • Operating system features can be changed and upgraded frequently.
  • Geographically distributed development teams can work together on software development projects.
  • Services can be obtained from diverse sources that cross international boundaries.
  • Initial and ongoing costs can be reduced by using infrastructure services from a single vendor rather than maintaining multiple hardware facilities that often perform duplicate functions or suffer from incompatibility problems.
  • Overall expenses can be minimized by unifying programming development efforts.
On the downside, PaaS involves some risk of "lock-in" if offerings require proprietary service interfaces or development languages. Another potential pitfall is that the flexibility of offerings may not meet the needs of some users whose requirements rapidly evolve.

Software as a Service (SaaS) is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network, typically the Internet.
SaaS is becoming an increasingly prevalent delivery model as underlying technologies that support Web services and service-oriented architecture (SOA) mature and new developmental approaches, such as Ajax, become popular. Meanwhile, broadband service has become increasingly available to support user access from more areas around the world.
SaaS is closely related to the ASP (application service provider) and on demand computing software delivery models. IDC identifies two slightly different delivery models for SaaS. The hosted application management (hosted AM) model is similar to ASP: a provider hosts commercially available software for customers and delivers it over the Web. In the software on demand model, the provider gives customers network-based access to a single copy of an application created specifically for SaaS distribution.
Benefits of the SaaS model include:
  • easier administration
  • automatic updates and patch management
  • compatibility: all users have the same version of the software
  • easier collaboration, for the same reason
  • global accessibility
The traditional model of software distribution, in which software is purchased for and installed on personal computers, is sometimes referred to as software as a product.

What is XaaS (anything as a service)?
XaaS is a collective term said to stand for a number of things including "X as a service," "anything as a service" or "everything as a service." The acronym refers to an increasing number of services that are delivered over the Internet rather than provided locally or on-site. XaaS is the essence of cloud computing.
The most common examples of XaaS are Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). The combined use of these three is sometimes referred to as the SPI model (SaaS, PaaS, IaaS). Other examples of XaaS include storage as a service (confusingly, also abbreviated SaaS), communications as a service (CaaS), network as a service (NaaS) and monitoring as a service (MaaS).
Following the convention of pronouncing "SaaS" as "sass," "XaaS" is sometimes pronounced as "zass."
 

Thursday, April 12, 2012

Cisco Nexus 1000V

The Cisco Nexus 1000V is compatible with any upstream physical access layer switch that is Ethernet standard compliant, including the Catalyst 6500 series switch, Cisco Nexus switches, and switches from other network vendors. The Cisco Nexus 1000V is compatible with any server hardware listed in the VMware Hardware Compatibility List (HCL).

The Cisco Nexus 1000V grew out of APIs that Cisco and VMware designed jointly. It is a distributed virtual switch solution that is fully integrated with the VMware virtual infrastructure, including VMware vCenter for the virtualization administrator. The solution offloads the configuration of the virtual switch and port groups to the network administrator, enforcing a consistent data center network policy.

The Cisco Nexus 1000V has the following components that can virtually emulate a 66-slot modular Ethernet switch with redundant supervisor functions:

• Virtual Ethernet Module (VEM), the data plane: Each hypervisor embeds one VEM, a lightweight software component that replaces the VMware virtual switch and performs the following functions:
– Advanced networking and security
– Switching between directly attached virtual machines
– Uplinking to the rest of the network
Only one version of the VEM can be installed on an ESX/ESXi host at any given time. The VEM is installed on the host with the esxupdate command or through VMware Update Manager (VUM).

• Virtual Supervisor Module (VSM), the control plane: The VSM is a virtual appliance that can be installed either standalone or as an active/standby HA pair (1.5 GHz CPU, 2 GB memory, 3 GB disk, and 3 vNICs for the Control, Management, and Packet connections to a vSwitch on the ESX host). The VSM, together with the VEMs that it controls, performs the following functions for the Cisco Nexus 1000V system:

– Configuration (domain ID, HA role of active or standby, admin password, and host ID for registration and licensing; the trial license covers 16 CPUs for 60 days and is installed via TFTP)
– Management (register the VSM plug-in, downloaded as an XML file from Cisco, with the vSphere Client)
– Monitoring (a Cisco port profile corresponds to a VMware port group)
– Diagnostics
– Integration with VMware vCenter

A single VSM can manage up to 64 VEMs, and running VSMs as an active/standby pair increases availability. Useful verification steps, sketched in the example below: run vem status on the host to check the VEM state, add the host to the DVS from the Networking view (matching the uplinks), and run show module from the NX-OS CLI to see which VSM and VEM modules are installed and enabled.
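
The verification commands above are plain CLI steps; purely as an illustration, here is a minimal Python sketch that runs them remotely over SSH with paramiko. The hostnames and credentials are placeholders, SSH access is assumed to be enabled on both the ESX/ESXi host and the VSM, and this is not an official Cisco or VMware tool.

```python
# Sketch only, not an official Cisco/VMware tool: run the verification commands
# mentioned above over SSH with paramiko. Hostnames and credentials below are
# placeholders; SSH access must be enabled on the ESX/ESXi host and on the VSM.
import paramiko


def run_remote(host, user, password, command):
    """Open an SSH session, run a single command, and return its output as text."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()


# Check the VEM state directly on the ESX/ESXi host.
print(run_remote("esx01.example.com", "root", "secret", "vem status"))

# From the NX-OS CLI on the VSM, confirm which VSM/VEM modules are installed and enabled.
print(run_remote("n1kv-vsm.example.com", "admin", "secret", "show module"))
```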

In the Cisco Nexus 1000V, traffic is switched between virtual machines locally at each VEM instance.
Each VEM also interconnects the local virtual machine with the rest of the network through the upstream
access-layer network switch (blade, top-of-rack, end-of-row, and so forth).

The VSM runs the control plane protocols and configures the state of each VEM accordingly, but it never forwards packets.

In the Cisco Nexus 1000V, module slots 1 and 2 are reserved for the primary and secondary VSMs; either module can act as active or standby. The first server or host added is automatically assigned module 3, so its physical Network Interface Card (NIC) ports appear as 3/1 and 3/2 (vmnic0 and vmnic1 on the ESX/ESXi host). The ports to which the virtual NIC interfaces connect are virtual ports on the Cisco Nexus 1000V and are assigned a global number.
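
To make the numbering convention concrete, here is a toy Python sketch of the slot and port naming described above. The host indexes and vmnic names are hypothetical; it is only a worked example of the convention, not anything the switch actually runs.

```python
# Illustrative toy only: mirrors the slot-numbering convention described above
# (slots 1 and 2 reserved for the VSMs, hosts assigned modules starting at 3).
# Host indexes and vmnic names are hypothetical.

VSM_SLOTS = (1, 2)       # primary and secondary VSM
FIRST_VEM_SLOT = 3       # the first host/VEM is assigned module 3


def vem_slot(host_index):
    """Module slot for the Nth host added to the Cisco Nexus 1000V (0-based index)."""
    return FIRST_VEM_SLOT + host_index


def uplink_port(host_index, vmnic_index):
    """'module/port' name for a physical NIC, e.g. vmnic0 on the first host -> '3/1'."""
    return f"{vem_slot(host_index)}/{vmnic_index + 1}"


assert uplink_port(0, 0) == "3/1"   # vmnic0 on the first host
assert uplink_port(0, 1) == "3/2"   # vmnic1 on the first host
```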

Memory Compression

ESX/ESXi provides a memory compression cache to improve virtual machine performance when you use memory overcommitment. Memory compression is enabled by default. When a host's memory becomes overcommitted, ESX/ESXi compresses virtual pages and stores them in memory.

Because accessing compressed memory is faster than accessing memory that is swapped to disk, memory compression in ESX/ESXi allows you to overcommit memory without significantly hindering performance. When a virtual page needs to be swapped, ESX/ESXi first attempts to compress the page. Pages that can be compressed to 2 KB or smaller are stored in the virtual machine's compression cache, increasing the capacity of the host.
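
As a rough sketch of that decision path (zlib stands in for whatever algorithm ESX/ESXi actually uses, and the cache structures are simplifications of mine), the following Python toy compresses a page before falling back to a swap:

```python
# Rough sketch of the decision described above; zlib stands in for the real
# compression algorithm, and the cache structures are simplifications.
# A page about to be swapped is compressed first; if the result fits in 2 KB it
# goes into the per-VM compression cache, otherwise the page is swapped to disk.
import os
import zlib

PAGE_SIZE = 4096
COMPRESSED_LIMIT = 2048

compression_cache = {}    # page number -> compressed bytes (stand-in for the per-VM cache)
swapped_to_disk = set()   # page numbers that could not be compressed enough


def reclaim_page(page_number, page_bytes):
    """Try to compress a page before falling back to swapping it out."""
    compressed = zlib.compress(page_bytes)
    if len(compressed) <= COMPRESSED_LIMIT:
        compression_cache[page_number] = compressed   # fast to retrieve later
    else:
        swapped_to_disk.add(page_number)              # slow disk swap


reclaim_page(1, bytes(PAGE_SIZE))       # highly compressible (all zeros) -> cached
reclaim_page(2, os.urandom(PAGE_SIZE))  # incompressible random data -> swapped
print(sorted(compression_cache), sorted(swapped_to_disk))   # [1] [2]
```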

You can set the maximum size for the compression cache and disable memory compression using the Advanced Settings dialog box in the vSphere Client.

Enable or Disable the Memory Compression Cache
Memory compression is enabled by default. You can use the Advanced Settings dialog box in the vSphere Client to enable or disable memory compression for a host.

Procedure
1 Select the host in the vSphere Client inventory panel and click the Configuration tab.
2 Under Software, select Advanced Settings.
3 In the left pane, select Mem and locate Mem.MemZipEnable.
4 Enter 1 to enable or enter 0 to disable the memory compression cache.
5 Click OK.

Set the Maximum Size of the Memory Compression Cache

You can set the maximum size of the memory compression cache for the host's virtual machines.

You set the size of the compression cache as a percentage of the memory size of the virtual machine. For example, if you enter 20 and a virtual machine's memory size is 1000 MB, ESX/ESXi can use up to 200 MB of host memory to store the compressed pages of the virtual machine.

If you do not set the size of the compression cache, ESX/ESXi uses the default value of 10 percent.
Procedure
1 Select the host in the vSphere Client inventory panel and click the Configuration tab.
2 Under Software, select Advanced Settings.
3 In the left pane, select Mem and locate Mem.MemZipMaxPct.
The value of this attribute determines the maximum size of the compression cache for the virtual machine.
4 Enter the maximum size for the compression cache.
The value is a percentage of the size of the virtual machine and must be between 5 and 100 percent.
5 Click OK.
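
Both procedures above use the vSphere Client; for completeness, here is a hedged sketch of setting the same two advanced options programmatically with the pyVmomi SDK. The host name and credentials are placeholders, SSL handling is omitted, and the option names and accepted value types should be verified against your ESX/ESXi release before relying on this.

```python
# Sketch only: set the advanced options from the procedures above via pyVmomi.
# Host name and credentials are placeholders; SSL handling is omitted; verify the
# option names and value types against your ESX/ESXi release.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx01.example.com", user="root", pwd="secret")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                       # first host in the inventory
    opt_mgr = host.configManager.advancedOption

    # Mem.MemZipEnable: 1 enables, 0 disables the memory compression cache.
    # Mem.MemZipMaxPct: cache ceiling as a percentage of the VM's memory size (5-100).
    opt_mgr.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Mem.MemZipEnable", value=1),
        vim.option.OptionValue(key="Mem.MemZipMaxPct", value=20),
    ])
finally:
    Disconnect(si)
```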

http://www.vmware.com/pdf/vsphere4/r41/vsp_41_resource_mgmt.pdf

iBFT iSCSI Boot Overview

ESXi hosts can boot from an iSCSI SAN using the software or dependent hardware iSCSI adapters and network adapters.

To deploy ESXi and boot from the iSCSI SAN, the host must have an iSCSI boot-capable network adapter that supports the iSCSI Boot Firmware Table (iBFT) format. The iBFT is a method of communicating parameters about the iSCSI boot device to an operating system.

Before installing ESXi and booting from the iSCSI SAN, configure the networking and iSCSI boot parameters on the network adapter and enable the adapter for the iSCSI boot. Because configuring the network adapter is vendor specific, review your vendor documentation for instructions.

When you first boot from iSCSI, the iSCSI boot firmware on your system connects to an iSCSI target. If login is successful, the firmware saves the networking and iSCSI boot parameters in the iBFT and stores the table in the system's memory. The system uses this table to configure its own iSCSI connection and networking and to start up.

The following list describes the iBFT iSCSI boot sequence:
1 When restarted, the system BIOS detects the iSCSI boot firmware on the network adapter.
2 The iSCSI boot firmware uses the preconfigured boot parameters to connect with the specified iSCSI target.
3 If the connection to the iSCSI target is successful, the iSCSI boot firmware writes the networking and iSCSI boot parameters in to the iBFT and stores the table in the system memory.
NOTE The system uses this table to configure its own iSCSI connection and networking and to start up.
4 The BIOS boots the boot device.
5 The VMkernel starts loading and takes over the boot operation.
6 Using the boot parameters from the iBFT, the VMkernel connects to the iSCSI target.
7 After the iSCSI connection is established, the system boots.
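
ESXi consumes the iBFT inside the VMkernel, but to illustrate what the table carries, the sketch below dumps it the way a general-purpose Linux system exposes it (the iscsi_ibft driver publishes the firmware table under /sys/firmware/ibft). The path is a Linux convention, not something you would run on an ESXi host.

```python
# Illustration only: on a general-purpose Linux system (not ESXi) the iscsi_ibft
# driver exposes the firmware-populated iBFT under /sys/firmware/ibft. Dumping it
# shows the kind of parameters the boot firmware hands to the operating system:
# initiator name, NIC addressing, and the iSCSI target address and name.
import os

IBFT_ROOT = "/sys/firmware/ibft"


def dump_ibft(root=IBFT_ROOT):
    if not os.path.isdir(root):
        print("No iBFT exposed by the firmware/kernel on this system.")
        return
    for section in sorted(os.listdir(root)):           # e.g. initiator, ethernet0, target0
        section_path = os.path.join(root, section)
        if not os.path.isdir(section_path):
            continue
        print(f"[{section}]")
        for attr in sorted(os.listdir(section_path)):  # e.g. ip-addr, target-name, port
            try:
                with open(os.path.join(section_path, attr)) as handle:
                    print(f"  {attr} = {handle.read().strip()}")
            except OSError:
                pass                                   # some attributes are not readable


dump_ibft()
```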

iBFT iSCSI Boot Considerations
When you boot an ESXi host from iSCSI using iBFT-enabled network adapters, certain considerations apply.
The iBFT iSCSI boot does not support the following items:
  • IPv6
  • Failover for the iBFT-enabled network adapters
NOTE Update your NIC's boot code and iBFT firmware using vendor-supplied tools before trying to install and boot the VMware ESXi 4.1 release. Consult the vendor documentation and the VMware HCL for boot code and iBFT firmware versions supported for VMware ESXi 4.1 iBFT boot. Boot code and iBFT firmware released by vendors prior to the ESXi 4.1 release might not work.

After you set up your host to boot from iBFT iSCSI, the following restrictions apply:
  • You cannot disable the software iSCSI adapter. If the iBFT configuration is present in the BIOS, the host re-enables the software iSCSI adapter during each reboot.
  • You cannot remove the iBFT iSCSI boot target using the vSphere Client. The target appears on the adapter's list of static targets.

Load-Based Teaming (LBT)

vSphere 4.1 introduces a load-based teaming (LBT) policy that is traffic-load-aware and ensures physical NIC capacity in a NIC team is optimized. Note that LBT is supported only with the vNetwork Distributed Switch (vDS). LBT avoids the situation of other teaming policies where some of the distributed virtual uplinks (dvUplinks) in a DV Port Group's team are idle while others are completely saturated. LBT reshuffles the port binding dynamically, based on load and dvUplink usage, to make efficient use of the available bandwidth.

LBT is not the default teaming policy while creating a DV Port Group, so it is up to you to configure it as the active policy. As LBT moves flows among uplinks, it may occasionally cause reordering of packets at the receiver. LBT will only move a flow when the mean send or receive utilization on an uplink exceeds 75% of capacity over a 30 second period. LBT will not move flows any more often than once every 30 seconds.

vSphere 4 (and prior ESX releases) provide several load balancing choices, which base routing on the originating virtual port ID, an IP hash, or a source MAC hash. While these load balancing choices work fine in the majority of virtual environments, they all share a few limitations. For instance, all of these policies statically map the affiliations of the virtual NICs to the physical NICs (based on virtual switch port IDs or MAC addresses) and do not base their load balancing decisions on the current networking traffic, so they may not distribute traffic effectively among the physical uplinks. In addition, none of these policies takes into account disparities in physical NIC capacity (such as a mixture of 1 GbE and 10 GbE physical NICs in a NIC team).

Load-based teaming (LBT) is a dynamic and traffic-load-aware teaming policy that can ensure physical NIC capacity in a NIC team is optimized.  In combination with VMware Network IO Control (NetIOC), LBT offers a powerful solution that will make your vSphere deployment even more suitable for your I/O-consolidated datacenter.

How LBT works:
LBT dynamically adjusts the mapping of virtual ports to physical NICs to best balance the network load entering or leaving the ESX/ESXi 4.1 host. When LBT detects an ingress or egress congestion condition on an uplink, signified by a mean utilization of 75% or more over a 30-second period, it attempts to move one or more of the virtual-port-to-vmnic mapped flows to lesser-used links within the team.
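
The following toy Python model illustrates just the trigger logic described above (a 30-second mean per uplink and a 75% threshold); it is not VMware's implementation, the class and flow names are invented, and it does not model the "no more than one move per 30 seconds" pacing.

```python
# Toy model of the LBT trigger described above, not VMware's implementation:
# keep a 30-second window of per-uplink utilization samples and, when an uplink's
# mean exceeds 75%, move one flow from it to the least-loaded uplink in the team.
from collections import deque

WINDOW_SECONDS = 30
THRESHOLD = 0.75


class Uplink:
    def __init__(self, name):
        self.name = name
        self.samples = deque(maxlen=WINDOW_SECONDS)  # one utilization sample per second
        self.flows = []                              # virtual ports currently mapped here

    def record(self, utilization):
        self.samples.append(utilization)

    def mean_utilization(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0


def rebalance(team):
    """Move one flow off any uplink whose 30-second mean utilization exceeds 75%."""
    for uplink in team:
        if uplink.mean_utilization() > THRESHOLD and uplink.flows:
            target = min(team, key=lambda u: u.mean_utilization())
            if target is not uplink:
                target.flows.append(uplink.flows.pop())


team = [Uplink("vmnic0"), Uplink("vmnic1")]
team[0].flows = ["vm-a", "vm-b"]
for _ in range(WINDOW_SECONDS):
    team[0].record(0.9)   # saturated uplink
    team[1].record(0.1)   # mostly idle uplink
rebalance(team)
print(team[0].flows, team[1].flows)   # ['vm-a'] ['vm-b']
```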

Tuesday, April 10, 2012

Network I/O Control (NetIOC): Architecture, Performance and Best Practices

NetIOC is supported only with the vNetwork Distributed Switch (vDS) and is a new feature in vSphere 4.1 (vSphere 5 pushes this control down to the VM level).

NetIOC Feature Set:
  • Traffic isolation
  • Shares: allow flexible partitioning of networking capacity
  • Limits: enforce bandwidth limits on traffic across the overall vDS set of dvUplinks
  • Load-Based Teaming: makes effective use of the vDS set of dvUplinks for networking capacity. (LBT does not use limits and shares, is not the default teaming policy for a DV Port Group, reshuffles port bindings dynamically, and only starts to act when an uplink's utilization exceeds 75% for more than 30 seconds.)

NetIOC defines how different types of network traffic are propagated through each congested network adapter on the vDS.

NetIOC can be excluded from a host's configuration under the physical network adapter's advanced settings in the Software section.

Network resource pools represent the different traffic types on the vDS (FT, virtual machine, management, iSCSI, NFS, and vMotion).

Shares take effect only during periods of contention.
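
To make the shares-versus-limits distinction concrete, here is a small Python sketch of share-proportional allocation under contention with an optional hard limit per pool. The pool names and numbers are examples rather than vSphere defaults, and the redistribution of a capped pool's unused bandwidth is deliberately not modeled.

```python
# Sketch of share-based allocation under contention, contrasted with hard limits:
# each resource pool receives link bandwidth in proportion to its shares, and a
# limit simply caps the result (even when the link is otherwise idle). Pool names
# and numbers are examples, not vSphere defaults.

def allocate(link_capacity_mbps, pools):
    """pools: {name: (shares, limit_mbps or None)} -> {name: allocated_mbps}"""
    total_shares = sum(shares for shares, _limit in pools.values())
    allocation = {}
    for name, (shares, limit) in pools.items():
        share_based = link_capacity_mbps * shares / total_shares
        allocation[name] = min(share_based, limit) if limit is not None else share_based
    return allocation


pools = {
    "vm":      (100, None),
    "vmotion": (50, 2000),   # hard 2 Gbps cap, applied even without contention
    "iscsi":   (50, None),
    "ft":      (50, None),
}
print(allocate(10000, pools))   # shares dividing a congested 10 GbE uplink
```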

NetIOC Best Practices

NetIOC is a very powerful feature that will make your vSphere deployment even more suitable for your I/O-consolidated datacenter. However, follow these best practices to optimize the usage of this feature:
Best practice 1: When using bandwidth allocation, use “shares” instead of “limits,” as the former has greater flexibility for unused capacity redistribution. Partitioning the available network bandwidth among different types of network traffic flows using limits has shortcomings. For instance, allocating 2Gbps bandwidth by using a limit for the virtual machine resource pool provides a maximum of 2Gbps bandwidth for all the virtual machine traffic even if the team is not saturated. In other words, limits impose hard limits on the amount of the bandwidth usage by a traffic flow even when there is network bandwidth available.
Best practice 2: If you are concerned about physical switch and/or physical network capacity, consider imposing limits on a given resource pool. For instance, you might want to put a limit on vMotion traffic flow to help in situations where multiple vMotion traffic flows initiated on different ESX hosts at the same time could possibly oversubscribe the physical network. By limiting the vMotion traffic bandwidth usage at the ESX host level, we can prevent the possibility of jeopardizing performance for other flows going through the same points of contention.
Best practice 3: Fault tolerance is a latency-sensitive traffic flow, so it is recommended to always set the corresponding resource-pool shares to a reasonably high relative value in the case of custom shares. However, in the case where you are using the predefined default shares value for VMware FT, leaving it set to high is recommended.
Best practice 4: We recommend that you use LBT as your vDS teaming policy while using NetIOC in order to maximize the networking capacity utilization.
NOTE: As LBT moves flows among uplinks it may occasionally cause reordering of packets at the receiver.
Best practice 5: Use the DV Port Group and Traffic Shaper features offered by the vDS to maximum effect when configuring the vDS. Configure each of the traffic flow types with a dedicated DV Port Group. Use DV Port Groups as a means to apply configuration policies to different traffic flow types, and more important, to provide additional Rx bandwidth controls through the use of Traffic Shaper. For instance, you might want to enable Traffic Shaper for the egress traffic on the DV Port Group used for vMotion. This can
help in situations when multiple vMotions initiated on different vSphere hosts converge to the same destination vSphere server.

Conclusions
Consolidating the legacy GbE networks in a virtualized datacenter environment with 10GbE offers many benefits — ease of management, lower capital costs and better utilization of network resources. However, during the peak periods of contention, the lack of control mechanisms to share the network I/O resources among the traffic flows can result in significant performance drop of critical traffic flows. Such performance loss is unpredictable and uncontrollable if the access to the network I/O resources is unmanaged. NetIOC available in vSphere 4.1 provides a mechanism to manage the access to the network I/O resources when multiple traffic flows compete. The experiments conducted in VMware performance labs using industry standard workloads show that:
• Lack of NetIOC can result in unpredictable loss in performance of critical traffic flows during periods of contention.
• NetIOC can effectively provide service level guarantees to the critical traffic flows. Our test results showed that NetIOC eliminated a performance drop of as much as 67 percent observed in an unmanaged scenario.
• NetIOC in combination with Traffic Shaper provides a comprehensive network convergence solution, enabling features that are not available with any of the hardware solutions on the market today.