Monday, June 17, 2013

Presenting LUNs from a NetApp 8 Cluster to VMware ESXi & Configuring Array-Based Replication in VMware SRM



How to present a LUN from a NetApp cluster to VMware ESXi

This is a continuation of my previous post. In the video below we will see how to present a LUN from a replicated volume to ESXi.

 Step 1: Create a LUN for ESXi on the NetApp cluster
 Step 2: Create an initiator group to which the LUN will be mapped
 Step 3: Choose the volume from which the LUN has to be created
 Step 4: Configure the iSCSI initiator on the ESXi server with the target IP address
 Step 5: Once the LUN is detected, create a VMFS volume on the newly presented LUN
 Step 6: Configure the same on the rest of the ESX servers; there is no need to create the VMFS volume again, as it will appear once you rescan the iSCSI adapter (a scripted sketch of these steps follows below)
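For anyone who would rather script these steps than click through them, here is a minimal Python sketch that drives the equivalent clustered ONTAP and esxcli commands over SSH with the paramiko library. Every hostname, credential, vserver/volume/LUN/igroup name, initiator IQN, adapter name and IP address below is a placeholder, and CLI options can vary between ONTAP and ESXi releases, so treat it as an outline of the workflow rather than a ready-to-run script.

```python
# Sketch only: all hosts, credentials, names, IQNs, adapters and addresses are
# placeholders -- substitute the values from your own environment.
import paramiko


def run_ssh(host, user, password, commands):
    """SSH to a host and run each CLI command, printing its output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        for cmd in commands:
            _, stdout, stderr = client.exec_command(cmd)
            print(cmd)
            print(stdout.read().decode(), stderr.read().decode())
    finally:
        client.close()


# Steps 1-3: create the LUN, the initiator group and the mapping on the
# NetApp cluster (clustered ONTAP CLI via the cluster management LIF).
run_ssh("cluster1-mgmt", "admin", "netapp-password", [
    "lun create -vserver vserver1 -path /vol/vol_vmware/esx_lun1 "
    "-size 100g -ostype vmware",
    "lun igroup create -vserver vserver1 -igroup esx_hosts "
    "-protocol iscsi -ostype vmware "
    "-initiator iqn.1998-01.com.vmware:esx01",
    "lun map -vserver vserver1 -path /vol/vol_vmware/esx_lun1 "
    "-igroup esx_hosts",
])

# Steps 4-6: enable the software iSCSI initiator on the ESXi host, point it at
# the vserver's iSCSI LIF and rescan.  Repeat for every host in the igroup;
# the VMFS datastore itself is created once, from the vSphere Client, on the
# first host that sees the LUN.
run_ssh("esx01", "root", "esx-password", [
    "esxcli iscsi software set --enabled=true",
    "esxcli iscsi adapter discovery sendtarget add "
    "--adapter=vmhba33 --address=192.168.10.63:3260",
    "esxcli storage core adapter rescan --adapter=vmhba33",
])
```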

Now we can proceed with the array configuration in SRM. Before you start, download the appropriate Storage Replication Adapter (SRA) from vmware.com and install it on the SRM server so that SRM can communicate with the array. It is a simple configuration, and all the steps are captured in the video.

                              Watch at 720p or above to see the text more clearly


Saturday, June 15, 2013

How to Set Up a NetApp 8.1.x Cluster with Replication in 25 Minutes




Prerequisites to set up a NetApp 8.1.2 cluster:

Hypervisor: ESX, VMware Workstation, VMware Player, VMware Fusion

VMware Player is free; you can download it from here

NetApp Simulator: You can download it from netapp.com; before the download you need to register and create an account

Management Software: You can download OnCommand System Manager to configure and manage your NetApp simulators

Compute Resources: 

Per simulator: 2 vCPU, 1.7 GB RAM, and 260 GB of disk space. We need two simulators, so the total compute requirement is 4 vCPU, 3.4 GB of memory, and 520 GB of disk space.
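A throwaway sanity check on the sizing above (the per-simulator figures are taken straight from the list):

```python
# Per-simulator requirements for the NetApp 8.1.x simulator, as listed above.
per_simulator = {"vcpu": 2, "ram_gb": 1.7, "disk_gb": 260}
simulators = 2

totals = {resource: value * simulators for resource, value in per_simulator.items()}
print(totals)  # {'vcpu': 4, 'ram_gb': 3.4, 'disk_gb': 520}
```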

IP addresses: we need at least 15 IP addresses for the complete setup. List of interfaces to be created, one IP address per interface (a scripted sketch of the data-LIF creation follows after the list):
Cluster 1:
  • Cluster management
  • Node management
  • vServer1 data LIF
  • vServer1 management LIF
  • vServer1 CIFS & NFS LIF
  • vServer1 iSCSI LIF 1
  • vServer1 iSCSI LIF 2

Cluster 2:
  • Cluster management
  • Node management
  • vServer1 data LIF
  • vServer1 management LIF
  • vServer1 CIFS & NFS LIF
  • vServer1 iSCSI LIF 1
  • vServer1 iSCSI LIF 2

Inter-cluster:
  • Inter-cluster communication LIF
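If you prefer to script the interface creation, the sketch below turns an IP plan like the one above into clustered ONTAP "network interface create" commands for one vserver. Every LIF name, home node, port and address is a placeholder; the cluster management and node management LIFs are normally created by the cluster setup wizard, and the inter-cluster LIF is part of the peering setup, so only the vserver-facing interfaces are generated here.

```python
# Sketch only: LIF names, home nodes, ports and addresses are placeholders.
NETMASK = "255.255.255.0"

# (LIF name, data protocol(s), home node, home port, IP address)
lif_plan = [
    ("vs1_data",   "nfs,cifs", "cluster1-01", "e0c", "192.168.10.60"),  # vServer data
    ("vs1_mgmt",   "none",     "cluster1-01", "e0c", "192.168.10.61"),  # vServer management
    ("vs1_nas",    "nfs,cifs", "cluster1-02", "e0c", "192.168.10.62"),  # CIFS & NFS
    ("vs1_iscsi1", "iscsi",    "cluster1-01", "e0d", "192.168.10.63"),  # iSCSI LIF 1
    ("vs1_iscsi2", "iscsi",    "cluster1-02", "e0d", "192.168.10.64"),  # iSCSI LIF 2
]

for name, protocols, node, port, address in lif_plan:
    print(
        f"network interface create -vserver vserver1 -lif {name} "
        f"-role data -data-protocol {protocols} "
        f"-home-node {node} -home-port {port} "
        f"-address {address} -netmask {NETMASK}"
    )
```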

You can set up the NetApp 8.1.2 cluster on ESX, VMware Workstation, VMware Player, or VMware Fusion.


Watch at 720p or above to see the text more clearly




                              NetApp 8.1.2 Cluster Setup Part 2

                    Watch at 720p or above to see the text more clearly
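For reference, the replication part of this setup essentially comes down to peering the two clusters and then creating and initializing a SnapMirror relationship from the source volume to a data-protection (DP) volume on the second cluster. The list below is a rough sketch of the clustered ONTAP commands involved, run on the destination cluster; every cluster, vserver, volume and aggregate name and the peer address are placeholders, and exact option and path syntax differs between ONTAP releases.

```python
# Sketch only: names, addresses and sizes are placeholders, and syntax varies
# between ONTAP releases -- treat this as a checklist rather than a script.
commands_on_destination_cluster = [
    # Peer the two clusters over their intercluster LIFs (created separately
    # with "network interface create -role intercluster").
    "cluster peer create -peer-addrs 192.168.20.15 -username admin",
    # ONTAP 8.2 and later also require a vserver peer relationship, e.g.:
    #   vserver peer create -vserver vserver1_dr -peer-vserver vserver1
    #     -applications snapmirror -peer-cluster cluster1
    # Create a data-protection (DP) destination volume at least as large as
    # the source volume.
    "volume create -vserver vserver1_dr -volume vol_vmware_dr "
    "-aggregate aggr1 -size 120g -type DP",
    # Create and initialize the mirror (8.1 uses cluster://vserver/volume
    # paths; later releases accept vserver:volume).
    "snapmirror create -source-path cluster1://vserver1/vol_vmware "
    "-destination-path cluster2://vserver1_dr/vol_vmware_dr -type DP",
    "snapmirror initialize "
    "-destination-path cluster2://vserver1_dr/vol_vmware_dr",
]

for cmd in commands_on_destination_cluster:
    print(cmd)
```

Once the relationship is initialized, the mirrored volume on the second cluster is what the SRA will discover when you configure the array managers in SRM (see the post above).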




VMware vCloud Connector 2.5 is GA and available to download



What's vCloud Connector?
  • vCloud Connector (vCC) is a key differentiator in vCloud Hybrid Service (vCHS) as well as a core component of the vCloud Suite.
  • vCC helps customers realize the hybrid cloud vision by providing them with a single pane of glass to view, operate and copy VMs/vApps/templates across vSphere/vCD, vCHS & vCloud Service Providers.
What's new in vCC 2.5?
  • Offline Data Transfer (ODT) for vCHS
Allows customers to ship a large number of VMs/vApps/templates (>TBs) from on-prem vSphere/vCD to vCHS via an external storage device. The customer uses vCC to export the VMs/vApps/templates to the device and ships the device to vCHS. The vCHS Ops team then uploads the VMs/vApps/templates from the device into the customer's vCHS account, and the customer can start using them in their vCHS environment.
Note: This feature only supports vCHS as a destination.
  • UDP Data Transfer (UDT)
For customers who can leverage the UDP protocol instead of HTTP, this feature will significantly improve the transfer performance of vCC and reduce the amount of time it takes to move VMs/vApps/templates between vSphere/vCD, vCHS & vCloud SPs.
  • Path Optimization
Uses streaming between the vCC Nodes to dramatically improve the transfer performance of vCC. 
Supported platforms:
  • vSphere 5.1, 5.0, 4.x
  • vCloud Director 5.1, 1.5
  • vCHS

vCC Core (free download):
    • Available to all vSphere, vSphere with Operations Management & vCloud Director customers as a free download.
    • Includes all the new features such as ODT, UDT & Path Optimization.
    • Datacenter Extension & Content Sync are NOT included.
    • Available via the vSphere download page under the "Drivers & Tools" tab; click here.

vCC with a vCloud Suite or vCHS license:
    • Available to all vCloud Suite & vCHS customers.
    • Includes all vCC Core features plus Datacenter Extension & Content Sync.
    • Activated with a valid vCloud Suite license key or a vCC key provided with the vCloud Hybrid Service; click here.

Wednesday, June 12, 2013

Want to go to VMworld Europe in Barcelona?



Join cloudcredibility and complete the tasks assigned to you and your team; the points you earn can be redeemed for nice goodies. To see the entire list of goodies, click HERE

The grand prize is a trip to VMworld Europe in Barcelona.





Different hypervisor designs in Type 1 hypervisors







In a Type 1 VMM/hypervisor, a.k.a. a bare-metal hypervisor, there are two categories of hypervisor designs:


a. Microkernelized Hypervisor Design
Ex: Microsoft Hyper-V
b. Monolithic Hypervisor Design
Ex: VMware vSphere

Microkernelized Hypervisor Design:

Device drivers do not need to be hypervisor-aware; they run in the controlling layer. As a result, a wide range of hardware can run this kind of hypervisor, and there is less overhead on the hypervisor itself.
At the same time, this design needs an operating system to be installed to initialize the hypervisor layer, and any attack on or fault in that controlling-layer operating system can affect the whole hypervisor and bring down all the virtual machines.

Monolithic Hypervisor Design:

In this design the device drivers run at the same layer as the VMM/hypervisor, so the hardware and I/O devices must be hypervisor-aware; in other words, device drivers have to be developed specifically for the hypervisor. As a result, only a certain set of certified hardware can run this kind of hypervisor.
No operating system is required to bootstrap this hypervisor, which makes it more stable, and no security patches are needed for components running in a "controlling layer."

Now that we have understood Type 1 hypervisors, what is a Type 2 hypervisor?

A Type 2 hypervisor runs as software on top of an existing operating system.

Ex: VMware Workstation, VMware Fusion, etc.