Saturday, 19 October 2019

NIMADM limitations

1>The client's hardware and software must support the AIX(R) level that is being migrated to and meet all other conventional
migration requirements.

2>The application servers, such as DB2 and LDAP, must be stopped before you run the clone rootvg operation. Otherwise, the
application servers do not start normally after the clone rootvg operation has finished processing.

3>If the client's rootvg has TCB turned on, you must either disable it (permanently), use the disk caching option (-j), or perform
a conventional migration. (This limitation exists because TCB needs to access file metadata that is not visible over NFS.)

4>All NIM resources used by the nimadm command must be local to the NIM master.

5>Although there is almost no interference with the client's active rootvg during the migration, the client may experience a minor
reduction in performance due to increased disk input/output, biod activity, and the CPU usage associated with alt_disk_install
cloning.

6>NFS tuning may be required to optimize nimadm performance.

7>The nimadm command is not supported with the multibos command when there is a bos_hd5 logical volume.
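With these constraints in mind, a typical invocation might look like the sketch below. The client name, resource names, volume group, and disk are placeholders, not values from this post:

```shell
# Hypothetical example: migrate NIM client "aix_client1" to the AIX level
# provided by the "spot_7200" SPOT and "lpp_7200" lpp_source resources,
# cloning rootvg onto the client's free disk hdisk1.
# -j nimadmvg : volume group on the NIM master used for disk caching
#               (also the workaround for the TCB limitation above)
# -Y          : accept required software license agreements
nimadm -j nimadmvg -c aix_client1 -s spot_7200 -l lpp_7200 -d hdisk1 -Y
```

Note that, per limitation 4, the SPOT and lpp_source resources referenced by -s and -l must reside locally on the NIM master.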

NIMADM Migration Process - 12 Phases

The nimadm command performs migration in 12 phases. Each phase can be executed individually using the -P flag. The nimadm phases are as follows:
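As a sketch of phase-by-phase execution (names are placeholders; check the nimadm man page on your AIX level for the exact -P argument form):

```shell
# Hypothetical example: run only Phase 1 (the rootvg clone) during normal
# hours, then resume the remaining phases in a maintenance window.
nimadm -j nimadmvg -c aix_client1 -s spot_7200 -l lpp_7200 -d hdisk1 -Y -P1
```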

1    The master issues an alt_disk_install command to the client, which makes a copy of the rootvg to the target disks (this
     corresponds to Phase 1 of the alt_disk_install process). In this phase altinst_rootvg (the alternate rootvg) is created. If a
     target mksysb has been specified, the mksysb is used to create a rootvg using local disk caching on the NIM master.

2    The master runs remote client commands to export all of the /alt_inst file systems to the master. The file systems are exported
     as read/write with root access to the master. If a target mksysb has been specified, the cache file systems are created based on
     the image data from the mksysb.

3    The master NFS mounts the file systems exported in Phase 2. If a target mksysb has been specified, the mksysb archive is restored
     in the cache file systems that were created in Phase 2.

4    If a pre-migration script resource has been specified, it is executed at this time.

5    System configuration files are saved. Initial migration space is calculated and appropriate file system expansions are made.
     "bos" is restored and the device database is merged (similar to a conventional migration). All of the migration merge methods are
     executed and some miscellaneous processing takes place.

6    All system filesets are migrated using installp. Any required RPM images are also installed during this phase.

7    If a post-migration script resource has been specified, it is executed at this time.

8    bosboot is executed to create a client boot image, which is written out to the client's boot logical volume (hd5).

9    All mounts made on the master in Phase 3 are removed.

10   All client exports created in Phase 2 are removed.

11   The alt_disk_install is called again (Phase 3 of alt_disk_install) to make final adjustments and put altinst_rootvg to sleep. The
     bootlist is set to the target disk (unless the -B flag is used). If an output mksysb has been specified, the cache is archived
     into a mksysb file and made into a NIM mksysb resource.

12   Cleanup is executed to end the migration. The client is rebooted if the -r flag is specified. Note: The nimadm command supports
     migrating several clients simultaneously.