Fibre Channel – Node Types

N_PORT (Node Port)
    Used in Point-to-Point or Switched Fabric topologies.
    N_Ports are connected to each other through the fabric topology.
    Found on HBAs and Storage Processors.
F_PORT (Fabric Port)
    Used in a Switch Fabric topology.
    Found on a switch – enables HBA & Storage Processor to connect.
NL_PORT (Node port with Arbitrated Loop capabilities)
    Supports Arbitrated Loop topology.
    These ports are found on HBAs & Storage Processors.
FL_PORT (Fabric port with Arbitrated Loop capabilities)
    Ports on a switch that support connecting to an Arbitrated Loop.
E_PORT (Expansion Port)
    Expansion port for interconnecting switches in a multi-switch fabric.
    Ports on a switch that connect other switches to the fabric.
G_PORT (Generic Port)
    Can be configured as either an E_Port or an F_Port.
    These ports are found on a switch, not on an HBA or an SP.
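These port types appear directly in switch CLI output. As a sketch, assuming a hypothetical capture of a Brocade-style `switchshow` listing (the sample lines below are invented for illustration), you can count fabric-attached node ports with grep – an N_Port shows up as an F-Port on the switch side:

```shell
# Hypothetical switchshow-style output, invented for illustration only
cat > /tmp/switchshow.sample <<'EOF'
port  0: F-Port  20:00:00:e0:8b:05:05:04
port  1: F-Port  50:06:01:60:10:60:14:e5
port  2: E-Port  10:00:00:60:69:51:23:b4
port  3: No_Light
EOF

# Count fabric-attached node ports (F-Port entries); prints 2 for this sample
grep -c 'F-Port' /tmp/switchshow.sample
```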

How do I resignature VMFS3 volumes that are not snapshots

You can run into this issue after changing the Host Mode setting on an HDS Storage Array, which results in all of your VMFS3 volumes being seen as snapshot volumes by the ESX server.

A similar situation occurs when you set the SPC-2 flag on your EMC Symmetrix Storage Array.

This procedure allows you to change the Host Mode setting/director flags on your array and then make all of the VMFS3 volumes visible again.

1. Stop the running VMs on all the ESX servers.

2. Change the Host Mode/Director flags on the Storage Array – when you now rescan, you will see snapshot LUN messages in /var/log/vmkernel.
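The snapshot detection can be confirmed with a quick grep. The log lines below are a hypothetical sample written to a temp file, not verbatim ESX output; on a real host you would grep /var/log/vmkernel itself:

```shell
# Hypothetical sample of the snapshot warnings (invented lines for illustration)
cat > /tmp/vmkernel.sample <<'EOF'
Jan 10 10:12:01 esx1 vmkernel: LVM: Device vmhba1:0:3:1 detected as a snapshot
Jan 10 10:12:01 esx1 vmkernel: LVM: Device vmhba1:0:4:1 detected as a snapshot
EOF

# Count how many LUNs were flagged as snapshots; prints 2 for this sample
grep -c 'snapshot' /tmp/vmkernel.sample
```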

3. Enable LVM Resignaturing on the first ESX server => set LVM.EnableResignature to 1.

-log in to the ESX server with the VI Client

-select the Configuration tab

-select the Advanced Settings option

-select the LVM section

-make sure that LVM.EnableResignature (the fourth and last option) is set to 1

-save the change

-select Storage Adapters

-select Rescan

-leave the default options and proceed

-you should now be able to see the VMFS

4. Disable LVM Resignaturing

-log in to the ESX server with the VI Client

-select the Configuration tab

-select the Advanced Settings option

-select the LVM section

-make sure that LVM.EnableResignature (the fourth and last option) is set to 0

-save the change

5. No snapshot messages should now be visible in /var/log/vmkernel.

6. Re-label the volume

-log in to the ESX server with the VI Client

-select the Datastores view in the Inventory view

-select the datastore, right-click, and select Remove to remove the old label, as it is associated with the old UUID of the volume

-select the Hosts & Clusters view instead of the Datastores view

-in the summary tab, you should see the list of datastores

-click in the name field for the volume in question and change it to the original name – you now have the correct original label associated with the resignatured volume

7. Now rescan from all ESX servers

8. Re-register all the VMs

-Because the VMs will be registered against the old UUID, you will need to re-register them in VC.

-log in to the ESX server with the VI Client

-select the Configuration tab

-select Storage (SCSI, SAN & NFS)

-double-click on any of the datastores to open the Datastore Browser

-navigate to the .vmx file of any of the VMs by clicking on the folders

-right-click and select ‘Add to Inventory’
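If you have many VMs, locating every .vmx file first makes the re-registration pass quicker. A minimal sketch against a throwaway directory tree (the datastore layout and VM names below are made up; on a real host you would search under /vmfs/volumes/<datastore>):

```shell
# Build a toy datastore layout purely for demonstration
mkdir -p /tmp/vmfs-demo/vm1 /tmp/vmfs-demo/vm2
touch /tmp/vmfs-demo/vm1/vm1.vmx /tmp/vmfs-demo/vm1/vm1.vmdk /tmp/vmfs-demo/vm2/vm2.vmx

# List every .vmx file, one per line -- these are the paths to re-register
find /tmp/vmfs-demo -name '*.vmx' | sort
```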

9. Remap any RDMs

-If you have a VM which uses an RDM, you will have to recreate the mapping.

-the problem here is that you may not be able to identify which RDM is which if you used multiple ones.

-if they are different sizes, then this is ok – you should be able to map them in the correct order by their size

-make a note of the sizes of the RDMs and which VMs they are associated with before starting this process

-make a note of the LUN ID before starting this process too – you may be able to use this to recreate the mapping

-if they are all the same size, this is a drag, since you will have to map them, boot the VM, and then check them

-if you do not use RDMs, you can ignore this step
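The size-matching idea above can be scripted. Assuming you saved two small inventories before and after the change (the sizes, VM names, and device paths below are invented for illustration), `join` on the size column pairs each old mapping with the rescanned LUN of the same size:

```shell
# Hypothetical pre-change notes: "<size_in_MB> <VM that used the RDM>"
cat > /tmp/rdm-before.txt <<'EOF'
10240 vm-db01
20480 vm-db02
EOF

# Hypothetical post-rescan list: "<size_in_MB> <device path>"
cat > /tmp/rdm-after.txt <<'EOF'
10240 vmhba1:0:5:0
20480 vmhba1:0:6:0
EOF

# join needs sorted input; the output pairs each VM with the matching-size device
sort /tmp/rdm-before.txt > /tmp/rdm-before.sorted
sort /tmp/rdm-after.txt > /tmp/rdm-after.sorted
join /tmp/rdm-before.sorted /tmp/rdm-after.sorted
```

This only works when the sizes are unique, which is exactly the caveat the notes above point out.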

10. Powering on the VMs

-start the VM, and reply yes if prompted about a new UUID

-if any VM reports missing disks when it powers up, check the .vmx file and ensure that the SCSI disk references use the label rather than the old UUID

-likewise, ensure that the SCSI disk references do not use the old label rather than the new label, if you changed it

11. Repeat steps 3 through 10 for all subsequent ESX servers that are still seeing snapshot volumes.

-if all ESX servers share the same volumes, then this step will not be necessary

How to troubleshoot ESX 2.5.x by loading the VMkernel manually

First, disable the VMware services at boot:

# chkconfig vmware off

This will let you boot into ESX without starting the VMkernel. Reboot the server and allow it to boot into the standard “ESX” mode. You will notice that, although ESX was selected, the typical VMware services are skipped. This provides you with a clean slate to manually step through the process of loading the VMkernel and narrow down the root cause of your boot issues.

1. Load the vmnixmod module:

# /sbin/insmod -s -f vmnixmod

You will get a message about tainted drivers, which can be ignored.

2. Load the VMkernel itself:

# /usr/sbin/vmkloader /usr/lib/vmware/vmkernel

3. Allow the VMkernel to run Linux drivers:

# /usr/sbin/vmkload_mod -e /usr/lib/vmware/vmkmod/vmklinux linux

As we understand it, this is the step in which the final transformations occur to load the management console as a virtual machine.

4. Make sure all devices are enumerated:

# /usr/sbin/vmkchdev -n

The next steps are system specific, based on the hardware installed in the system. This is typically where we see the majority of issues while loading the VMkernel. If the system freezes while loading a specific module, you have narrowed down your issue to a very specific portion of the boot process, and further investigation may be performed with VMware support or other methods. To review which modules need to be loaded, check the contents of your vmkmodule.conf file:

# cat /etc/vmware/vmkmodule.conf

We will use one of our servers as an example configuration:

vmklinux linux
nfshaper.o nfshaper
bcm5700.o vmnic
e1000.o vmnic
aic79xx.o aic79xx

We are now going to load the drivers one by one using vmkload_mod. Since the vmklinux module was already loaded in step 3 above, it is not needed here. If a module is commented out, it is not required in this step.

Load the packet shaper driver (disabled by default):

# /usr/sbin/vmkload_mod /usr/lib/vmware/vmkmod/nfshaper.o shaper

Load an Intel e1000 network adapter:

# vmkload_mod /usr/lib/vmware/vmkmod/e1000.o vmnic

Load a Broadcom BCM5700 network adapter:

# vmkload_mod /usr/lib/vmware/vmkmod/bcm5700.o vmnic

Load a SCSI adapter:

# vmkload_mod /usr/lib/vmware/vmkmod/aic79xx.o aic79xx

If any one module hangs the system, you have found your culprit. Document the complete list of steps followed in case a support call needs to be opened with VMware. The above steps will help narrow problems to a specific area. If the system starts as expected without error in the above process, VMware support should be consulted to help further analyze why the system hangs during its normal boot process.

When all is said and done, do not forget to re-enable the VMkernel services on startup:

# chkconfig vmware on
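The one-by-one loading above can also be rehearsed as a dry run. This sketch parses a copy of the example vmkmodule.conf from the text and prints the vmkload_mod command each line would produce, without executing anything; the file path is a temp copy, and vmklinux is skipped because it is loaded earlier with vmkload_mod -e:

```shell
# Copy of the example vmkmodule.conf contents shown in the notes above
cat > /tmp/vmkmodule.conf <<'EOF'
vmklinux linux
nfshaper.o nfshaper
bcm5700.o vmnic
e1000.o vmnic
aic79xx.o aic79xx
EOF

# Print (not run) the vmkload_mod invocation for every uncommented module,
# skipping blank lines, comments, and the already-loaded vmklinux entry
while read -r module tag; do
    case "$module" in
        \#*|vmklinux|'') continue ;;
    esac
    echo "/usr/sbin/vmkload_mod /usr/lib/vmware/vmkmod/$module $tag"
done < /tmp/vmkmodule.conf
```

Working from a printed list like this makes it easier to document exactly which module hung the system if you do need to open a support call.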