148 Administering dynamic multipathing (DMP)
Administering DMP using vxdmpadm
Note: Starting with release 4.1 of VxVM, I/O policies are recorded in the file
/etc/vx/dmppolicy.info, and are persistent across reboots of the system.
Do not edit this file yourself.
The following policies may be set:
■ adaptive
This policy attempts to maximize overall I/O throughput to and from the disks
by dynamically scheduling I/O across the paths. It is suggested for use where
I/O loads can vary over time. For example, I/O to and from a database may
exhibit both long transfers (table scans) and short transfers (random
lookups). The policy is also useful in a SAN environment where different paths
may have different numbers of hops. No further configuration is possible, as
this policy is managed automatically by DMP.
In this example, the adaptive I/O policy is set for the enclosure enc1:
# vxdmpadm setattr enclosure enc1 iopolicy=adaptive
■ adaptiveminq
Similar to the adaptive policy, except that I/O is scheduled according to the
length of the I/O queue on each path. The path with the shortest queue is
assigned the highest priority.
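By analogy with the adaptive example above, the adaptiveminq policy can be
set for an enclosure in the same way (the enclosure name enc1 follows the
earlier example):
# vxdmpadm setattr enclosure enc1 iopolicy=adaptiveminq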
■ balanced [partitionsize=size]
This policy is designed to optimize the use of caching in disk drives and
RAID controllers. The size of the cache typically ranges from 120KB to
500KB or more, depending on the characteristics of the particular
hardware. During normal operation, each disk (or LUN) is logically
divided into a number of regions (or partitions), and I/O to or from a given
region is sent on only one of the active paths. Should that path fail, the
workload is automatically redistributed across the remaining paths.
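As a sketch, the balanced policy can be set together with an explicit
partition size; the value 4096 below is illustrative only, and the units and
valid range depend on your release (consult the vxdmpadm(1M) manual page):
# vxdmpadm setattr enclosure enc1 iopolicy=balanced partitionsize=4096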