This blog article is about I/O sizes and block sizes in virtualized environments. I am sure you have come across this topic if you are dealing with databases or other systems. Do you remember whether it was best to keep a Microsoft SQL database on a 64 KB formatted volume, or was that the NTFS allocation size, and wait, was the storage system involved here as well? I can tell you that if you still believe a Microsoft SQL Server operates with only 64 KB blocks, that is not true: there are many kinds of block sizes depending on what the SQL database is doing. There is clearly confusion between the I/O size and the NTFS allocation size, the VMFS block size or the NFS block size. On top of that you have an underlying storage system which is organised in volumes, using meta structures to organise the underlying physical disks or flash. This blog article will hopefully shine a little bit of light on this. The following figure shows a 64 KiB write I/O traversing the different levels we use in a virtualized environment.
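One way to make the layering concrete is to count how many allocation units a single guest write touches at each level. The sketch below is a minimal illustration, assuming example sizes (a 4 KiB NTFS cluster, a 1 MiB VMFS block, 512 B storage sectors) that are not universal defaults:

```python
KIB = 1024

def units_touched(offset: int, size: int, unit: int) -> int:
    """Number of aligned allocation units the I/O [offset, offset+size) spans."""
    first = offset // unit
    last = (offset + size - 1) // unit
    return last - first + 1

io_size = 64 * KIB
# Assumed example sizes for illustration only:
layers = {
    "NTFS cluster (4 KiB)": 4 * KIB,
    "VMFS block (1 MiB)": 1024 * KIB,
    "storage sector (512 B)": 512,
}
for name, unit in layers.items():
    print(name, "->", units_touched(0, io_size, unit))
```

Note that alignment matters: the same 64 KiB write starting at a misaligned offset touches one extra cluster, which is why alignment discussions keep coming up together with block sizes.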
virtualguido
Friday, November 4, 2016
Wednesday, October 26, 2016
VMware ESXi - vmkvsitools
This blog article is about a very useful master command with the name vmkvsitools, which can retrieve specific information about the host and about ESXi processes. The name is short for “VMkernel Sys Info Tools”. Sounds interesting? Let’s dig into it.
You find vmkvsitools under /usr/sbin/vmkvsitools. Vmkvsitools includes the 16 subcommands shown below, which are links to /bin/vmkvsitools (/bin is a link to /usr/sbin). That means you don’t have to put vmkvsitools in front of these commands. I am sure you have seen certain ones before, like ps, vdf, update etc., but you perhaps never knew that they were part of vmkvsitools. First let’s see what the command offers when using help.
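The symlink trick is the classic busybox-style pattern: one binary looks at the name it was invoked as (argv[0]) and behaves like that subcommand. A minimal Python sketch of the idea, with hypothetical handlers standing in for the real subcommands:

```python
import os

# Hypothetical handlers standing in for real subcommands like ps and vdf:
def ps(args):
    return "process list"

def vdf(args):
    return "disk usage"

SUBCOMMANDS = {"ps": ps, "vdf": vdf}

def dispatch(argv):
    """Pick the handler from the name the binary was invoked as (argv[0]),
    falling back to an explicit subcommand argument."""
    name = os.path.basename(argv[0])
    if name in SUBCOMMANDS:
        return SUBCOMMANDS[name](argv[1:])
    if len(argv) > 1 and argv[1] in SUBCOMMANDS:
        return SUBCOMMANDS[argv[1]](argv[2:])
    return "usage: vmkvsitools <subcommand>"
```

So calling the binary via the symlink /bin/ps behaves exactly like calling vmkvsitools ps, which is why the prefix is optional.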
VMware ESXi - vmkvsitools lspci
Lspci is short for “list PCI (Peripheral Component Interconnect) devices”. PCI is a standardised bus to attach different kinds of hardware to the motherboard of an x86 system. Many devices such as I/O cards (e.g. HBAs, CNAs, network cards) are PCI cards. In current hardware PCI has been replaced by the next-generation standard PCIe (Peripheral Component Interconnect Express). PCIe delivers a high-speed serial computer expansion bus standard to replace the PCI, PCI-X (Peripheral Component Interconnect eXtended) and AGP (Accelerated Graphics Port) bus standards. It is also expected that SATA will be replaced by PCIe, which is already happening today as flash gets faster and demands higher performance. With -h you see below that there are a few options to get information, ranging from detailed output over a hex dump to verbose output.
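Every PCI device identifies itself with a 16-bit vendor ID and a 16-bit device ID, usually printed in hex as vendor:device; tools like lspci map these to names via the public PCI ID database. A small sketch of that mapping (the vendor IDs below are real entries, the lookup table is of course a tiny excerpt):

```python
def parse_pci_id(pci_id: str) -> tuple:
    """Split a hex 'vendor:device' PCI ID pair into two integers."""
    vendor, device = pci_id.split(":")
    return int(vendor, 16), int(device, 16)

# Tiny excerpt of the public PCI ID database (these vendor IDs are real):
VENDORS = {0x8086: "Intel Corporation", 0x15ad: "VMware", 0x1077: "QLogic"}

vendor, device = parse_pci_id("8086:10fb")
print(VENDORS.get(vendor, "unknown vendor"))
```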
Tuesday, October 25, 2016
Understanding I/O - What about IOPS, Throughput, Latency and I/O Size
This blog article is about one of my favourite topics, with just two letters: I/O. Why? Because really everything around virtualization and storage is about how I/O works, and there is not much fun in life without Input/Output. And yes, there are many blog articles about I/O, but not so many from the I/O size perspective. So hopefully this will clear up some of the confusion. Let’s start very simply and define these four keywords:
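The four terms are tied together by one simple relationship: throughput = IOPS × I/O size. A quick sketch shows why quoting IOPS without the I/O size is meaningless, since very different workloads can produce the same throughput:

```python
def throughput_mib_s(iops: float, io_size_kib: float) -> float:
    """Throughput (MiB/s) = IOPS * I/O size (KiB), converted to MiB."""
    return iops * io_size_kib / 1024

# Same 400 MiB/s of throughput, very different IOPS profiles:
print(throughput_mib_s(102400, 4))   # small-block workload, 102400 IOPS at 4 KiB
print(throughput_mib_s(6400, 64))    # large-block workload, 6400 IOPS at 64 KiB
```

Latency is the fourth piece: it tells you how long each of those I/Os takes, and it usually rises as the I/O size grows or as queues fill up.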
Thursday, October 20, 2016
VMware ESXi How to find Firmware and Driver version
This blog article is about how to find the driver and firmware version of a particular I/O device like an HBA, network card etc. For VMware VSAN there is a nice health checker now, but not everyone is using VSAN, and not everything in an ESXi system is involved in VSAN, like network cards and local devices.
VMware ESXi locking and how to kill a frozen VM
This blog article is about how to kill a frozen VM. Working in technical support you often get cases where, let’s name it, “something went wrong”. The causes vary greatly, from storage issues to another process running on the particular ESXi host which still holds a lock on a file, as well as many other reasons. This blog article is structured so that I start with the different ways to kill a VM, followed by several troubleshooting techniques: which host holds the lock, and is there perhaps an APD (All Paths Down) or even a PDL (Permanent Device Loss) happening.
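The core of the “who holds the lock” problem is advisory locking: whoever opened the file first holds an exclusive lock, and everyone else is refused. ESXi uses its own on-disk VMFS/NFS locking rather than POSIX locks, but the principle can be sketched with a plain flock, where even a second open of the same file cannot grab the lock:

```python
import fcntl
import os
import tempfile

# Illustration only: ESXi uses its own VMFS/NFS locking, not fcntl.flock,
# but the first-owner-wins behaviour is the same idea.
path = os.path.join(tempfile.mkdtemp(), "demo.vmdk.lck")

holder = open(path, "w")
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)   # first owner takes the lock

try:
    other = open(path, "w")
    fcntl.flock(other, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking attempt
    state = "acquired"
except BlockingIOError:
    state = "locked by another owner"

print(state)
```

Until the holder releases the lock (or its process dies), the second opener keeps getting refused, which is exactly the symptom you see when a stale process still holds a VM's lock file.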
Tuesday, September 6, 2016
VMware ESXi Claim Rules unleashed
This blog article explains VMware ESXi claim rules in general and the differences between core and SATP (Storage Array Type Plugin) claim rules. In support we get a lot of questions about claim rules, for several reasons. One could be performance tuning, while another could be the usage of a 3rd-party multipathing plugin. With a claim rule you can pretty much define how a device of a certain vendor should behave if the default setting isn’t sufficient. But let’s start from the beginning. There are two kinds of claim rules:
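Whatever the kind, claim rules are evaluated in rule-number order and the first matching rule decides which plugin claims the paths. A simplified sketch of that first-match evaluation (an assumed model for illustration; the real ESXi logic matches on more criteria than vendor and model):

```python
# Rule 101 masking "DELL / Universal Xport" and the catch-all rule 65535
# are modelled on real ESXi defaults; the matching logic here is simplified.
RULES = [
    {"rule": 101,   "vendor": "DELL", "model": "Universal Xport", "plugin": "MASK_PATH"},
    {"rule": 65535, "vendor": "*",    "model": "*",               "plugin": "NMP"},
]

def claim(vendor: str, model: str) -> str:
    """Return the plugin of the lowest-numbered rule matching this device."""
    for r in sorted(RULES, key=lambda r: r["rule"]):
        if r["vendor"] in ("*", vendor) and r["model"] in ("*", model):
            return r["plugin"]
    return "unclaimed"
```

This is also why custom rules get low numbers: they must match before the catch-all rule at the end of the list gets a chance.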
Wednesday, August 31, 2016
VMware SCSI Controller Options
This blog article describes the different SCSI controllers in VMware ESXi and why one may be better than another. At my current job I had multiple discussions with customers about why and in which situations the LSI Logic SAS or Parallel controller makes more sense vs. the Paravirtual SCSI adapter (PVSCSI), and I didn’t really find a good blog article or KB explaining what I think is needed to really understand the differences. What I see in most environments is the standard adapter for the chosen Operating System, and in many cases that is absolutely fine and works well. The problem starts when there is a limitation somewhere, but how do you find that out? Let’s start from the beginning. With the current version of ESXi 6.0 there are five SCSI controller options, illustrated in the following table:
Wednesday, August 24, 2016
VMware ESXi Storage I/O Path
For a long time, how the I/O flow works in ESXi was a myth to me. For sure, I knew what a Path Selection Policy (PSP) and the Storage Array Type Plugin (SATP) were, and I had heard about vSCSI stats, but I was not really able to explain the I/O flow in depth. It was more or less: I/O gets from the VM somehow into the kernel. Then you could monitor certain performance values and stats with e.g. ESXTOP, or change the IOPS setting at the PSP level if a vendor recommended doing so to improve performance. And yeah, there were raw devices, different storage protocols and queues. But how does this all work together?
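That PSP-level IOPS setting is easiest to understand with a sketch. Under round-robin, the PSP sends a number of commands down one path and then rotates to the next; the IOPS knob is simply that per-path command count (1000 by default, and some vendors recommend lowering it to 1). A minimal model, with hypothetical path names:

```python
class RoundRobinPSP:
    """Sketch of round-robin path selection: rotate to the next path
    after `iops` commands have been sent down the current one."""
    def __init__(self, paths, iops=1000):
        self.paths = list(paths)
        self.iops = iops       # commands per path before switching
        self.i = 0             # index of the current path
        self.sent = 0          # commands sent down the current path

    def next_path(self) -> str:
        path = self.paths[self.i]
        self.sent += 1
        if self.sent >= self.iops:          # quota reached: rotate
            self.sent = 0
            self.i = (self.i + 1) % len(self.paths)
        return path
```

Lowering `iops` spreads the I/O across paths in finer slices, which is the whole point of the tuning recommendation.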
Friday, August 19, 2016
PernixData Backup Best Practices - 4. SAN Backup Mode
To close this backup best practices series, we end with SAN Backup Mode. In SAN Backup Mode the storage device the VMs actually run from is mounted to the backup server to bypass the ESXi environment altogether. The backup then takes place via the storage protocol (Fibre Channel, iSCSI, etc.) locally on the backup server. While this is a very performant way of doing backups, it has some dependencies on the backup process itself. The following figure illustrates how SAN Backup Mode works.
PernixData Backup Best Practices - 3. VADP Hot-Add
The third backup method in this series is vSphere Storage APIs for Data Protection (VADP). The Virtual Disk Development Kit (VDDK) is used with VADP to develop backup and restore software. VDDK is an open API and SDK and includes the following components:
- The virtual disk library, a set of C function calls to manipulate VMDK files
- The disk mount library, a set of C function calls to remote-mount VMDK files
- Sample C++ code
- VDDK utilities, which include the disk mount and virtual disk manager utilities
- Documentation
PernixData Backup Best Practices - 2. NBD (Network Block Device)
The second article describes the NBD (Network Block Device) method, which uses some of the virtualization functionality. It is available in an un-encrypted flavour, plain NBD, or encrypted via the LAN, usually termed NBDSSL. For further information about the best practices from a VMware perspective, please follow VMware KB 1035096:
Thursday, August 18, 2016
PernixData Backup Best Practices - 1. Client based backups
The four different backup methods are explained in the following PernixData FVP Backup Best Practices series. We start with the first one, which explains client-based backups.
To understand backups it is important to understand what kinds of backups are possible in a VMware environment and specifically how they interact and work well with PernixData FVP. The official VMware documentation gives a good first overview.
PernixData FVP - Write I/O
This blog article describes write I/O with PernixData FVP in place. We distinguish between read acceleration only (Write-Through) and read & write acceleration (Write-Back).
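The difference between the two policies boils down to when the write is acknowledged. A minimal sketch of the two behaviours (an illustration of the general caching concept, not PernixData's actual implementation):

```python
class Cache:
    """Toy model of the two write policies: write-through commits to the
    backend before acknowledging; write-back acknowledges from the cache
    and destages to the backend later."""
    def __init__(self, policy: str):
        self.policy = policy   # "write-through" or "write-back"
        self.cache = {}        # acceleration tier (flash/RAM)
        self.backend = {}      # storage system
        self.dirty = set()     # blocks written but not yet destaged

    def write(self, block: int, data: str) -> None:
        self.cache[block] = data          # cached either way, helps later reads
        if self.policy == "write-through":
            self.backend[block] = data    # ack only after the array has it
        else:
            self.dirty.add(block)         # ack immediately, destage later

    def destage(self) -> None:
        """Background flush of dirty blocks to the storage system."""
        for block in sorted(self.dirty):
            self.backend[block] = self.cache[block]
        self.dirty.clear()
```

Write-back gives the VM flash-level write latency, but the dirty data must be protected until it is destaged, which is why FVP replicates write-back data to peer hosts.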
PernixData FVP - Read I/O Misses
When a VM issues an I/O operation, FVP determines how it can serve this I/O. In the case of a read I/O, an early distinction depends on whether this I/O has been requested before and is already within the FVP cache layer, or whether it has to be fetched from the storage system so that later reads of the same block can be served from the FVP layer and the VM can benefit from I/O acceleration.
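That hit-or-miss decision is the classic read-cache pattern. A minimal sketch of the flow described above (a generic illustration, not PernixData's actual code):

```python
class ReadCache:
    """Toy read path: a hit is served from the cache; a miss is fetched
    from the storage system and populates the cache so that later reads
    of the same block become hits."""
    def __init__(self, backend: dict):
        self.backend = backend   # stands in for the storage system
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, block: int) -> str:
        if block in self.cache:
            self.hits += 1                           # served from the FVP-style layer
        else:
            self.misses += 1
            self.cache[block] = self.backend[block]  # fetch and populate on miss
        return self.cache[block]
```

The first read of a block always pays the backend latency; every subsequent read of that block is a hit, which is exactly where the acceleration comes from.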
Kernel vs. Usermode
Welcome to my first blog article ever. I wrote this a long time ago, when I thought I would like to start a blog, but then it somehow never happened :). But hey, here it is now, live and in colour. The content of this article started quite a long time ago when I had a class at university with the name "IT Infrastructure", where we went into the depths of the OS kernel, understanding caching in the CPU, how the CPU actually calculates, and obviously many more things. At that time I thought: wow, being in the OS kernel is very slick, but on the flip side something very complicated and thus potentially dangerous, e.g. overwriting memory... C programming... procedural, no objects. Of course every OS driver using hardware is also running in Ring 0, so the complexity really depends a lot on the functionality of the software running in the kernel as well as on the QA itself. So, enough of the initial talking, now let's get into the article.