Friday, December 23, 2011

Screencast: Adding Hyper-V Cluster to SCVMM 2012 RC

Continuing from my previous blog, Screencast: SCVMM 2012 Installation, I have proceeded to add my biggest production Hyper-V cluster to SCVMM 2012 RC. The cluster consists of 9 Hyper-V nodes hosting 71 VMs on the VFARM3 cluster. Below is a screencast of the process.

Screencast: SCVMM 2012 Installation

Installation of System Center Virtual Machine Manager (SCVMM) 2012 is simple enough, and I have captured each of the important screens you will come across during installation.

The Windows Automated Installation Kit (WAIK) is required. To download a copy, click this link

Upon completing this, VMM is installed on the host. Take a peek at my next blog on how I added a Hyper-V cluster to SCVMM 2012 RC at Screencast: Adding Hyper-V Cluster to SCVMM 2012 RC.

Saturday, December 17, 2011

How-To: Rapid Deploy VM Using Powershell

I have created a PowerShell script to rapidly deploy VMs on a Microsoft Hyper-V host, and this was demonstrated during the Hyper-V Workshop.

The demo clustered Hyper-V infrastructure is shown in the diagram below:

Clustered Hyper-V infrastructure with MS Hyper-V 2008 R2 SP1

One of my more interesting demos is rapidly deploying VMs on a Microsoft Hyper-V host without using SCVMM. To achieve this I created a PowerShell script by referring to existing scripts found on

The script I created is not yet perfect for everyone, but it is enough for most of us who need to create n VMs in a short period. During this event, I conducted a demo creating 6 VMs with just one hit of 'Enter', and all 6 VMs booted. Now I will share with you in detail how to achieve this.

What the PowerShell script does (generally speaking)
1. Read the VM configurations from a CSV file line-by-line
2. Create each VM in 'C:\clusterstorage\volume1\<vmname>'
3. Set the processor count of each VM
4. Set the memory of each VM
5. Create and attach a 'Differencing Disk' to each VM

Future development of this script
1. To be able to set 'Dynamic Memory' for each VM
2. To be able to set a 'Fixed/Dynamic/Differencing Disk' for each VM
3. To be able to choose a desired sysprepped/preconfigured or mixed OS platform base for each VM
4. To be able to add each VM to the CSV cluster if the Hyper-V host is clustered
5. To add and improve comments in the script


# Requires the PowerShell Management Library for Hyper-V (CodePlex)
Import-Module "C:\Program Files\modules\HyperV"

$vmdefaultpath = "C:\ClusterStorage\volume1"
$ParentVHD = "win2k8r2sp1"
$path = "c:\createvm\VM.csv"
$ErrorActionPreference = "SilentlyContinue"

Import-Csv -Path $path | ForEach-Object {
    $vmName     = $_.Name
    $vmMemory   = $_.Memory
    $vmCpuCount = $_.Cpucount
    $vmSwitch   = $_.Network
    $vmPath     = $vmdefaultpath

    # Create the VM, then apply memory, CPU and network settings
    New-VM -Name $vmName
    Set-VMMemory -VM $vmName -Memory $vmMemory
    Set-VMCPUCount -VM $vmName -CPUCount $vmCpuCount
    Add-VMNIC -VM $vmName -VirtualSwitch $vmSwitch

    # Create a differencing disk from the parent VHD and attach it to the VM
    New-VHD -VHDPaths "$vmPath\$vmName\$vmName.vhd" -ParentVHDPath "$vmPath\vmbase\$ParentVHD.vhd"
    Add-VMDisk -VM $vmName -ControllerID 0 -Path "$vmPath\$vmName\$vmName.vhd"

    Write-Host -BackgroundColor Green -ForegroundColor Black "Virtual Machine $vmName has been successfully created"
}


The CSV file which predefines the VM settings
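If the original screenshot of the CSV is not visible, here is a minimal sketch of the shape the script reads. The column headers must match the properties the script accesses ($_.Name, $_.Memory, $_.Cpucount, $_.Network); the values below are illustrative, and the units expected for Memory depend on your build of the HyperV module, so check its documentation before relying on them:

```
Name,Memory,Cpucount,Network
VM01,1GB,1,Public
VM02,1GB,1,Public
VM03,2GB,2,Public
```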

How to use this script:
1. Create a CSV file listing the necessary configuration of each VM, as in the sample shown above.
2. Save the CSV as 'vm.csv'.
3. Copy 'vm.csv' to the Hyper-V host into 'c:\createvm\'.
4. Copy the script above and save it as 'createvm.ps1' in 'c:\createvm\' (you may have to edit the variables to suit your requirements and the configuration of your Hyper-V host).
5. Install the PowerShell Management Library for Hyper-V from CodePlex.
6. Type 'powershell -file createvm.ps1'.

Requirements and cautions
1. A sysprepped OS image is required
2. Windows Server 2008 R2 or Windows Server 2008 R2 SP1 with the Hyper-V role installed
3. Basic knowledge of PowerShell scripting
4. Changing the variables in the script to suit your Hyper-V environment
5. Use at your own risk, as you have been warned this may affect your Hyper-V server. KNOWING WHAT YOU ARE ABOUT TO EXECUTE IS IMPORTANT.

A video is worth a thousand words

Wednesday, December 7, 2011

How-To: Compact Hyper-V Dynamic Expanding VHD

When you do housekeeping on a VM that uses a dynamically expanding disk, the space you free up inside the VM is not given back to your Hyper-V host. If disk space is a concern on your Hyper-V host, you may have to manually compact the VM's VHD.

To compact a Dynamic Expanding VHD:

Step 1: Right-click the target VM and select 'Settings...'. Highlight the target Hard Drive and click 'Edit'.
Step 2: Select 'Compact'.
Compact - Applies to dynamically expanding virtual hard disks and differencing virtual hard disks. Reduces the size of the .vhd file by removing blank space that is left behind when data is deleted from the virtual hard disk. If the virtual hard disk is not NTFS formatted, the blank space must be overwritten with zeroes so that the compact action can reduce the file size by removing sectors that contain only zeroes.
If the virtual hard disk is not NTFS formatted, you must prepare the virtual hard disk for compacting by using a non-Microsoft disk utility program to replace the blank space with zeroes.
Step 3: Click 'Finish'

Wait for the compacting process to finish.

Below are the sizes of the VHD before and after the compacting process.
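If you prefer to script this housekeeping instead of clicking through the wizard, the same compact operation is also exposed through diskpart on Windows Server 2008 R2. A sketch of a diskpart script (the VHD path is hypothetical, and the VHD must not be attached to a running VM; run it with 'diskpart /s compact-vhd.txt'):

```
select vdisk file="C:\ClusterStorage\Volume1\VM01\VM01.vhd"
attach vdisk readonly
compact vdisk
detach vdisk
```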

How-To: Enabling Data Co-Location on DPM

Microsoft's Data Protection Manager (DPM) 2010 allows backup administrators to co-locate protection groups on tape; in previous versions of DPM, tape co-location was not available. Say you have a protection group with a backup size of maybe 2 GB a day, 5 LTO-4 tapes, and a backup strategy of 5 backups a week. You will end up with DPM using a whole LTO-4 tape (1.6 TB compressed) to back up your 2 GB every day. This utilizes only about 0.1% of your tape capacity, and worse, other protection groups will not be able to write to the same tape.
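The utilization figure above is easy to sanity-check. A minimal sketch, assuming an LTO-4 tape holds roughly 1.6 TB compressed (the sizes are the illustrative ones from the paragraph, not measurements from a real DPM server):

```python
# Rough tape-utilization check: one protection group writing ~2 GB a day
# to a dedicated LTO-4 tape (~1,600 GB compressed capacity).
def tape_utilization(daily_backup_gb: float, tape_capacity_gb: float) -> float:
    """Fraction of the tape consumed by a single day's backup."""
    return daily_backup_gb / tape_capacity_gb

# Without co-location, each daily job claims a whole tape:
print(f"{tape_utilization(2, 1600):.3%}")  # about 0.1% of an LTO-4 tape
```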

With tape co-location now available, you can optimize tape usage when you have many small protection groups. Backup to tape will write multiple protection groups to the same tape, depending on the backup strategy you configured and on the "TapeWritePeriodRatio" and "ExpiryToleranceRange" settings.

Enabling Data Co-Location on DPM
1. Open the DPM Management Shell.
2. Set OptimizeTapeUsage to True using the Set-DPMGlobalProperty cmdlet:
   Set-DPMGlobalProperty -DPMServerName <name of DPM server> -OptimizeTapeUsage $True

Take note that a dataset will be co-located only if both of the conditions below are true.

The expiry date of the current dataset should fall between the following dates:
Upper bound: furthest expiry date among all the datasets on the tape + (furthest expiry date among all the datasets on the tape - current date) * ExpiryToleranceRange
Lower bound: furthest expiry date among all the datasets on the tape - (furthest expiry date among all the datasets on the tape - current date) * ExpiryToleranceRange

The current time should be less than the first backup time of the dataset on the media + TapeWritePeriodRatio * RetentionRangeOfFirstDataset.


TapeWritePeriodRatio is a global DPM property which needs to be set using the DPM CLI. Here is the command to set it:
Set-DPMGlobalProperty -DPMServerName <dpm server name> -TapeWritePeriodRatio <fraction>
TapeWritePeriodRatio indicates the number of days for which data can be written to a tape, as a ratio of the retention period of the first dataset written to the tape.

The value can be between 0.0 and 1.0; the default is 0.15 (i.e. 15%).
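To make the two co-location conditions concrete, here is a minimal sketch of the date arithmetic. The dates, retention range, and ratio values are illustrative assumptions, not output from a real DPM server; the upper bound adds the tolerance window to the furthest expiry date and the lower bound subtracts it:

```python
from datetime import date, timedelta

def within_expiry_window(candidate_expiry: date, furthest_expiry: date,
                         today: date, expiry_tolerance_range: float) -> bool:
    """Condition 1: candidate expiry falls inside the tolerance window."""
    window = (furthest_expiry - today) * expiry_tolerance_range
    return furthest_expiry - window <= candidate_expiry <= furthest_expiry + window

def within_write_period(today: date, first_backup: date,
                        retention_days: int, tape_write_period_ratio: float) -> bool:
    """Condition 2: tape is still inside its write period."""
    return today < first_backup + timedelta(days=retention_days * tape_write_period_ratio)

today = date(2011, 12, 1)
furthest = today + timedelta(days=30)   # furthest expiry among datasets on the tape
candidate = today + timedelta(days=27)  # expiry of the dataset we want to co-locate

ok = (within_expiry_window(candidate, furthest, today, 0.17)
      and within_write_period(today, today - timedelta(days=2), 30, 0.15))
print(ok)  # True: both conditions hold, so the dataset can be co-located
```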

Saturday, December 3, 2011

How-To: Force Expire Tape in DPM 2010

System Center Data Protection Manager (DPM) 2010 enables disk-based and tape-based data protection and recovery for servers such as SQL Server, Exchange Server, SharePoint, virtual servers, file servers, and support for Windows desktops and laptops. DPM can also centrally manage system state and Bare Metal Recovery (BMR).

Have you ever gotten yourself into a situation where you run out of backup tapes due to an inappropriate backup strategy? If your answer is yes, then you have probably also faced the issue of desperately needing to expire the LTO tape holding the oldest recovery point. And guess what... you can't do this using the DPM 2010 Administrator Console... OUCH!!!

So you are sitting in the dark, waiting for a tape to expire and helplessly watching each and every scheduled backup fail. Well, this will no longer happen if you use a PowerShell script found in Microsoft TechNet's library. The script content is below:

Code Begins Here

param ([string] $DPMServerName, [string] $LibraryName, [string[]] $TapeLocationList)

if (("-?","-help") -contains $args[0])
{
    Write-Host "Usage: ForceFree-Tape.ps1 [[-DPMServerName] <Name of the DPM server>] [-LibraryName] <Name of the library> [-TapeLocationList] <Array of tape locations>"
    Write-Host "Example: ForceFree-Tape.ps1 -LibraryName 'My library' -TapeLocationList Slot-1, Slot-7"
    exit 0
}

if (!$DPMServerName)
{
    $DPMServerName = Read-Host "DPM server name: "
    if (!$DPMServerName)
    {
        Write-Error "DPM server name not specified."
        exit 1
    }
}

if (!$LibraryName)
{
    $LibraryName = Read-Host "Library name: "
    if (!$LibraryName)
    {
        Write-Error "Library name not specified."
        exit 1
    }
}

if (!$TapeLocationList)
{
    $TapeLocationList = Read-Host "Tape location: "
    if (!$TapeLocationList)
    {
        Write-Error "Tape location not specified."
        exit 1
    }
}

if (!(Connect-DPMServer $DPMServerName))
{
    Write-Error "Failed to connect to DPM server $DPMServerName"
    exit 1
}

$library = Get-DPMLibrary $DPMServerName | where {$_.UserFriendlyName -eq $LibraryName}

if (!$library)
{
    Write-Error "Failed to find library with user friendly name $LibraryName"
    exit 1
}

foreach ($media in @(Get-Tape -DPMLibrary $library))
{
    if ($TapeLocationList -contains $media.Location)
    {
        if ($media -is [Microsoft.Internal.EnterpriseStorage.Dls.UI.ObjectModel.LibraryManagement.ArchiveMedia])
        {
            foreach ($rp in @(Get-RecoveryPoint -Tape $media))
            {
                Get-RecoveryPoint -Datasource $rp.Datasource | Out-Null
                Write-Verbose "Removing recovery point created at $($rp.RepresentedPointInTime) for tape in $($media.Location)."
                Remove-RecoveryPoint -RecoveryPoint $rp -ForceDeletion -Confirm:$false
            }
            Write-Verbose "Setting tape in $($media.Location) as free."
            Set-Tape -Tape $media -Free
        }
        else
        {
            Write-Error "The tape in $($media.Location) is a cleaner tape."
        }
    }
}

Code Ends Here

To put this script to use:
  1. Open a new Notepad file and copy the code above into it.
  2. Save the file as ForceFree.ps1.
  3. Copy ForceFree.ps1 to C:\Program Files\Microsoft\Microsoft Data Protection Manager\scripts.
  4. The syntax to run the script is ForceFree.ps1 -DPMServerName <Name of server> -LibraryName <Name of library> -TapeLocationList <slot numbers>.

A sample of the script execution:

PS C:\Program Files\Microsoft DPM\DPM\Scripting> .\ForceExpire.ps1
DPM server name: : backupserver
Hewlett Packard LTO Ultrium-4 drive
Hewlett Packard MSL G3 Series library  (x64 based)
Library name (cut & paste from above): : Hewlett Packard MSL G3 Series library (x64 based)
Tape location: : slot-2
Processing this slot list...
The operation will remove the following recovery point(s) because they have dependencies on each other:
Datasource '\\?\Volume{9f6da658-f6f1-11df-8d4f-00155d000115}\' on
Saturday, 5 November, 2011 10:31:28 AM
Monday, 7 November, 2011 11:52:13 PM
Wednesday, 9 November, 2011 4:03:54 AM
Wednesday, 9 November, 2011 9:47:34 PM
Thursday, 10 November, 2011 8:01:09 PM
Friday, 11 November, 2011 8:09:17 PM
Monday, 14 November, 2011 8:31:54 PM
The operation will remove the following recovery point(s) because they have dependencies on each other:
Datasource '\\?\Volume{dff33793-b735-11df-a919-00155d000225}\' on
Sunday, 6 November, 2011 12:02:41 AM
Tuesday, 8 November, 2011 2:36:56 AM
Wednesday, 9 November, 2011 7:33:40 AM
Wednesday, 9 November, 2011 9:24:50 PM
Thursday, 10 November, 2011 11:04:38 PM
Friday, 11 November, 2011 10:01:35 PM
Monday, 14 November, 2011 8:09:52 PM
Tuesday, 15 November, 2011 8:00:30 PM

More detailed information can be found at

Sunday, October 9, 2011

How-To: Installing Smokeping on CentOS 5.5

# yum update
# rpm -Uhv
# yum install httpd
# yum install rrdtool
# yum install fping
# yum install echoping
# yum install curl
# yum install perl perl-Net-Telnet perl-Net-DNS perl-LDAP perl-libwww-perl perl-RadiusPerl perl-IO-Socket-SSL perl-Socket6 perl-CGI-SpeedyCGI

# wget
# tar zxvf smokeping-2.4.1.tar.gz
# mv smokeping-2.4.1 /opt/smokeping
# cd /opt/smokeping

# cd bin/
# cp smokeping.dist smokeping
# cd ../htdocs/
# cp smokeping.cgi.dist smokeping.cgi
# cp tr.cgi.dist tr.cgi
# cd ../etc/
# cp config.dist config
# cp basepage.html.dist basepage.html
# cp smokemail.dist smokemail
# cp tmail.dist tmail
# cp smokeping_secrets.dist smokeping_secrets
# chmod 600 /opt/smokeping/etc/smokeping_secrets

# vi /opt/smokeping/bin/smokeping
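After copying the .dist files, the two things you will typically edit are the library path at the top of bin/smokeping and the Targets section of etc/config, which defines what Smokeping probes. A minimal Targets sketch (the host address and menu names are placeholders, not part of the original install):

```
*** Targets ***

probe = FPing

menu = Top
title = Network Latency Grapher

+ LocalNetwork
menu = Local Network
title = Local Network

++ Gateway
menu = Gateway
title = Office Gateway
host = 192.168.1.1
```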

Hard Drives IOPS

I found an interesting topic over the weekend, a topic which most of us overlook during the design of a storage infrastructure.

We are all concerned about the speed of data transfer between storage and host, the network bandwidth, and the RAID configuration used, while we always forget about the IOPS.

For those of us wondering what IOPS is: "IOPS (Input/Output Operations Per Second, pronounced i-ops) is a common performance measurement used to benchmark computer storage devices like hard disk drives (HDD), solid state drives (SSD), and storage area networks (SAN). As with any benchmark, IOPS numbers published by storage device manufacturers do not guarantee real-world application performance" - Wikipedia (in layman's terms)

The table below, which I found, reminds me of the ballpark IOPS figures for various types of common drives:

7,200 rpm HDD: ~75-100 IOPS (SATA 3 Gb/s)
10,000 rpm HDD: ~125-150 IOPS (SATA 3 Gb/s)
15,000 rpm HDD: ~175-210 IOPS (SAS)
Simple SLC SSD: ~400 IOPS (SATA 3 Gb/s)
Intel X25-M SSD: ~8,600 IOPS (SATA 3 Gb/s)
Intel X25-E SSD: ~5,000 IOPS (SATA 3 Gb/s)
G.Skill Phoenix SSD: ~20,000 IOPS (SATA 3 Gb/s)
OCZ Vertex SSD: up to 60,000 IOPS (SATA 6 Gb/s)
Texas Memory Systems SSD: 120,000+ random read/write IOPS (PCIe)
Fusion-io SSD: 140,000 read IOPS, 135,000 write IOPS (PCIe)
Virident SSD: 320,000 sustained read IOPS and 200,000 sustained write IOPS using 4 KB blocks (PCIe)
OCZ RevoDrive SSD: 200,000 random 4K write IOPS (PCIe)
Fusion-io SSD: 250,000+ IOPS (PCIe)
Violin Memory SSD: 250,000+ random read/write IOPS (PCIe/FC/InfiniBand/iSCSI)
DDRdrive SSD: 300,000+ 512 B random read IOPS and 200,000+ 512 B random write IOPS (PCIe)
OCZ single-card SSD: up to 500,000 IOPS (PCIe)
Texas Memory Systems SSD: 600,000+ random read/write IOPS (PCIe)
Texas Memory Systems SSD: 1,000,000+ random read/write IOPS (FC/InfiniBand)
Fusion-io SSD: 1,180,000+ random read/write IOPS (PCIe)
OCZ dual-card (2x) SSD: up to 1,200,000 IOPS (PCIe)
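The HDD figures in the table can be derived from first principles: a spinning disk's random IOPS is roughly 1 / (average seek time + average rotational latency). A minimal sketch, using typical ballpark seek times rather than vendor specifications:

```python
# Back-of-envelope random IOPS for a spinning disk:
#   IOPS ~= 1 / (average seek time + average rotational latency)
# Rotational latency averages half a revolution.
def hdd_iops(rpm: int, avg_seek_ms: float) -> float:
    avg_rotational_ms = 0.5 * 60000 / rpm   # half a revolution, in milliseconds
    return 1000 / (avg_seek_ms + avg_rotational_ms)

# Typical ballpark seek times (assumed, not vendor specs):
for rpm, seek_ms in [(7200, 8.5), (10000, 4.7), (15000, 3.5)]:
    print(f"{rpm:>6} rpm: ~{hdd_iops(rpm, seek_ms):.0f} IOPS")
```

The results land inside the ranges quoted in the table, which is a useful sanity check when sizing a storage design.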