Migration Toolkit for Virtualization 2.10

Migrating your virtual machines to Red Hat OpenShift Virtualization


Red Hat Modernization and Migration Documentation Team


Chapter 1. Performing a migration

When you have planned your migration by using the Migration Toolkit for Virtualization (MTV), you can migrate virtual machines from the following source providers to OpenShift Virtualization destination providers:

  • VMware vSphere
  • Red Hat Virtualization (RHV)
  • OpenStack
  • Open Virtual Appliances (OVAs) that were created by VMware vSphere
  • Remote OpenShift Virtualization clusters

Chapter 2. Migrating from VMware vSphere

Run your VMware migration plan from the MTV UI or from the command-line.

2.1. Prerequisites

  • You have planned your migration from VMware vSphere.

2.2. Running a migration plan in the MTV UI

You can run a migration plan and view its progress in the Red Hat OpenShift web console.

Prerequisites

  • Valid migration plan.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.

  2. Click Start beside a migration plan to start the migration.
  3. Click Start in the confirmation window that opens.

    The plan’s Status changes to Running, and the migration’s progress is displayed.

    Warm migration only:

    • The precopy stage starts.
    • Click Cutover to complete the migration.

      Warning

      Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.

  4. Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:

    • The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
    • The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:

      • The name of the VM
      • The start and end times of the migration
      • The amount of data copied
      • A progress pipeline for the VM’s migration

        Warning

        vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.

  5. Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:

    1. Click the Virtual Machines tab.
    2. Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.

      The VM’s details are displayed.

    3. In the Pods section, in the Pod links column, click the Logs link.

      The Logs tab opens.

      Note

      Logs are not always available. The following are common reasons for logs not being available:

      • The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
      • No pod was created.
      • The pod was deleted.
      • The migration failed before running the pod.
    4. To see the raw logs, click the Raw link.
    5. To download the logs, click the Download link.

2.2.1. Migration plan options

On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu kebab beside a migration plan to access the following options:

  • Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:

    • All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
    • The plan’s mapping on the Mappings tab.
    • The hooks listed on the Hooks tab.
  • Start migration: Active only if relevant.
  • Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
  • Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:

    • Set cutover: Set the date and time for a cutover.
    • Remove cutover: Cancel a scheduled cutover. Active only if relevant.
  • Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    • Migrate VMs to a different namespace.
    • Edit an archived migration plan.
    • Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
  • Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.

    Note

    Archive Plan is irreversible. However, you can duplicate an archived plan.

  • Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.

    Note

    Delete Plan is irreversible.

    Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.

    Note

    The results of archiving and then deleting a migration plan depend on whether you created the plan and its storage and network mappings by using the CLI or the UI.

    • If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
    • If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.
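    For plans and mappings created from the CLI, you can archive a plan before deleting it by setting the archived field in its spec. The following command is a sketch that assumes the Plan CR exposes an archived boolean corresponding to the UI Archive Plan action:

    $ oc patch plans.forklift.konveyor.io <plan_name> -n <namespace> --type merge -p '{"spec":{"archived":true}}'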

2.2.2. Canceling a migration

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.
  2. Click the name of a running migration plan to view the migration details.
  3. Select one or more VMs and click Cancel.
  4. Click Yes, cancel to confirm the cancellation.

    In the Migration details by VM list, the status of the canceled VMs is Canceled. Virtual machines that have already been migrated and those that have not yet been migrated are not affected.

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

2.3. Running a VMware vSphere migration from the command-line

You can migrate from a VMware vSphere source provider by using the command-line interface (CLI).

Important

Anti-virus software can cause migrations to fail. It is strongly recommended to remove such software from source VMs before you start a migration.

Important

MTV does not support migrating VMware Non-Volatile Memory Express (NVMe) disks.

Note

To migrate virtual machines (VMs) that have shared disks, see Migrating virtual machines with shared disks.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: vsphere
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: <user> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify the vCenter user or the ESX/ESXi user.
    3
    Specify the password of the vCenter user or the ESX/ESXi user.
    4
    Specify "true" to skip certificate verification, and specify "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
    5
    When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
    6
    Specify the API endpoint URL of the vCenter or the ESX/ESXi, for example, https://<vCenter_host>/sdk.
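    For reference, a populated Secret for a vCenter endpoint might look like the following example. All values are illustrative:

    apiVersion: v1
    kind: Secret
    metadata:
      name: vsphere-credentials
      namespace: openshift-mtv
      labels:
        createdForProviderType: vsphere
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: administrator@vsphere.local
      password: <password>
      insecureSkipVerify: "false"
      url: https://vcenter.example.com/sdk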
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: vsphere
      url: <api_end_point> 1
      settings:
        vddkInitImage: <VDDK_image> 2
        sdkEndpoint: vcenter 3
      secret:
        name: <secret> 4
        namespace: <namespace>
    EOF
    1
    Specify the URL of the API endpoint, for example, https://<vCenter_host>/sdk.
    2
    Optional, but it is strongly recommended to create a VDDK image to accelerate migrations. Follow OpenShift documentation to specify the VDDK image you created.
    3
    Options: vcenter or esxi.
    4
    Specify the name of the provider Secret CR.
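    After you create the Provider CR, you can check the status conditions that MTV reports for the provider before continuing, for example:

    $ oc get providers.forklift.konveyor.io <source_provider> -n <namespace> -o yaml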
  3. Create a Host manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Host
    metadata:
      name: <vmware_host>
      namespace: <namespace>
    spec:
      provider:
        namespace: <namespace>
        name: <source_provider> 1
      id: <source_host_mor> 2
      ipAddress: <source_network_ip> 3
    EOF
    1
    Specify the name of the VMware vSphere Provider CR.
    2
    Specify the Managed Object Reference (moRef) of the VMware vSphere host. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    3
    Specify the IP address of the VMware vSphere migration network.
  4. Create a NetworkMap manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source: 2
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod, multus, and ignored. Use ignored to avoid attaching VMs to this network for this migration.
    2
    You can use either the id or the name parameter to specify the source network. For id, specify the VMware vSphere network Managed Object Reference (moRef). To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  5. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_datastore> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    Specify the VMware vSphere datastore moRef, for example, datastore-11. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
  6. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> 1
      playbook: |
        LS0tCi0gbm... 2
    EOF
    1
    Optional: Red Hat OpenShift service account. Use the serviceAccount parameter to modify any cluster resources.
    2
    Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
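    To produce the Base64-encoded value for the playbook field, you can encode an existing Ansible playbook file, for example:

    $ base64 -w0 playbook.yml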

  7. Enter the following command to create the network attachment definition (NAD) of the transfer network used for MTV migrations.

    You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
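    A complete NAD for the transfer network combines the route annotation with a CNI configuration that assigns an address to the interface. The following is a minimal sketch that assumes a Linux bridge named br1 and static IP address management; adjust the CNI type, bridge, and addresses for your environment:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "<name_of_transfer_network>",
          "type": "bridge",
          "bridge": "br1",
          "ipam": {
            "type": "static",
            "addresses": [
              { "address": "192.168.10.5/24", "gateway": "192.168.10.1" }
            ]
          }
        }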
  8. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      warm: false 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 3
        network: 4
          name: <network_map> 5
          namespace: <namespace>
        storage: 6
          name: <storage_map> 7
          namespace: <namespace>
      preserveStaticIPs: 8
      networkNameTemplate: <network_interface_template> 9
      pvcNameTemplate: <pvc_name_template> 10
      pvcNameTemplateUseGenerateName: true 11
      skipGuestConversion: false 12
      targetNamespace: <target_namespace>
      useCompatibilityMode: true 13
      volumeNameTemplate: <volume_name_template> 14
      vms: 15
        - id: <source_vm1> 16
        - name: <source_vm2>
          networkNameTemplate: <network_interface_template_for_this_vm> 17
          pvcNameTemplate: <pvc_name_template_for_this_vm> 18
          volumeNameTemplate: <volume_name_template_for_this_vm> 19
          targetName: <target_name> 20
          hooks: 21
            - hook:
                namespace: <namespace>
                name: <hook> 22
              step: <step> 23
    
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify whether the migration is warm (true) or cold (false). If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage runs.
    3
    Specify only one network map and one storage map per plan.
    4
    Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    5
    Specify the name of the NetworkMap CR.
    6
    Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
    7
    Specify the name of the StorageMap CR.
    8
    By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP address linked to the interface name in the guest VM lose their IP address. To avoid this, set preserveStaticIPs to true. MTV issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to MTV.
    9
    Optional. Specify a template for the network interface name for the VMs in your plan. The template follows the Go template syntax and has access to the following variables:
    • .NetworkName: If the target network is multus, add the name of the Multus Network Attachment Definition. Otherwise, leave this variable empty.
    • .NetworkNamespace: If the target network is multus, add the namespace where the Multus Network Attachment Definition is located.
    • .NetworkType: Specifies the network type. Options: multus or pod.
    • .NetworkIndex: Sequential index of the network interface (0-based).

      Examples

    • "net-{{.NetworkIndex}}"
    • "{{if eq .NetworkType "pod"}}pod{{else}}multus-{{.NetworkIndex}}{{end}}"

      Names generated from a template cannot exceed 63 characters. This rule applies to a network name template, a PVC name template, a VM name template, and a volume name template.

    10
    Optional. Specify a template for the persistent volume claim (PVC) name for a plan. The template follows the Go template syntax and has access to the following variables:
    • .VmName: Name of the VM.
    • .PlanName: Name of the migration plan.
    • .DiskIndex: Initial volume index of the disk.
    • .RootDiskIndex: Index of the root disk.
    • .Shared: true for a shared volume, false for a non-shared volume.

      Examples

    • "{{.VmName}}-disk-{{.DiskIndex}}"
    • "{{if eq .DiskIndex .RootDiskIndex}}root{{else}}data{{end}}-{{.DiskIndex}}"
    • "{{if .Shared}}shared-{{end}}{{.VmName}}-{{.DiskIndex}}"
    11
    Optional:
    • When set to true, MTV adds one or more randomly generated alphanumeric characters to the name of the PVC in order to ensure all PVCs have unique names.
    • When set to false, if you specify a pvcNameTemplate, MTV does not add such characters to the name of the PVC.

      Warning

      If you set pvcNameTemplateUseGenerateName to false, the generated PVC name might not be unique and might cause conflicts.

    12
    Determines whether VMs are converted before migration using the virt-v2v tool, which makes the VMs compatible with OpenShift Virtualization.
    • When set to false, the default value, MTV migrates VMs using virt-v2v.
    • When set to true, MTV migrates VMs using raw copy mode, which copies the VMs without converting them first.

      Raw copy mode copies VMs without converting them with virt-v2v. This allows for faster migrations, supports migrating VMs running a wider range of operating systems, and supports migrating disks encrypted by using Linux Unified Key Setup (LUKS) without needing the keys. However, VMs migrated using raw copy mode might not function properly on OpenShift Virtualization. For more information on virt-v2v, see How MTV uses the virt-v2v tool.

    13
    Determines whether the migration uses VirtIO devices or compatibility devices (SATA bus, E1000E NIC) when skipGuestConversion is true, that is, when raw copy mode is used for the migration. The setting of useCompatibilityMode has no effect when skipGuestConversion is false, because virt-v2v conversion always uses VirtIO devices.
    • When set to true, the default setting, MTV uses compatibility devices (SATA bus, E1000E NIC) in the migration process to ensure that the VMs can be booted after migration.
    • When set to false, MTV uses high-performance VirtIO devices in the migration process. Because virt-v2v is not run, nothing ensures that the VMs can be booted after migration. Before using this option, verify that VirtIO drivers are already installed in the source VMs.
    14
    Optional: Specify a template for the volume interface name for the VMs in your plan. The template follows the Go template syntax and has access to the following variables:
    • .PVCName: Name of the PVC mounted to the VM using this volume.
    • .VolumeIndex: Sequential index of the volume interface (0-based).

      Examples

    • "disk-{{.VolumeIndex}}"
    • "pvc-{{.PVCName}}"
    15
    You can use either the id or the name parameter to specify the source VMs.
    16
    Specify the VMware vSphere VM moRef. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
    17
    Optional: Specify a network interface name for the specific VM. Overrides the value set in spec:networkNameTemplate. Variables and examples as in callout 9.
    18
    Optional: Specify a PVC name for the specific VM. Overrides the value set in spec:pvcNameTemplate. Variables and examples as in callout 10.
    19
    Optional: Specify a volume name for the specific VM. Overrides the value set in spec:volumeNameTemplate. Variables and examples as in callout 14.
    20
    Optional: MTV automatically generates a name for the target VM. You can override this name by using this parameter and entering a new name. The name you enter must be unique, and it must be a valid Kubernetes subdomain. Otherwise, the migration fails automatically.
    21
    Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step.
    22
    Specify the name of the Hook CR.
    23
    Allowed values are PreHook before the migration plan starts or PostHook after the migration is complete.
    Important

    When you migrate a VMware 7 VM that uses CentOS 7.9 to an OpenShift 4.13+ platform, the names of the network interfaces change and the static IP configuration for the VM no longer works.

  9. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
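    After you create the Migration CR, you can follow the migration from the CLI by inspecting the Migration and Plan CRs, for example:

    $ oc get migration <name_of_migration_cr> -n <namespace> -o yaml
    $ oc get plan <name_of_plan_cr> -n <namespace> -o yaml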

Important

There is a known issue in which the forklift-controller consistently fails to reconcile a migration plan and returns an HTTP 500 error. This issue occurs when you grant user permissions only on the virtual machine (VM).

In MTV, you must add permissions at the data center level, including the storage, networks, switches, and other resources that the VM uses, and then propagate the permissions to the child elements.

If you do not want to grant permissions at the data center level, you must manually add the permissions to each required object on the VM host.

2.3.1. Retrieving a VMware vSphere moRef

When you migrate VMs with a VMware vSphere source provider by using Migration Toolkit for Virtualization (MTV) from the command line, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.

You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.

Procedure

  1. Retrieve the routes for the project:

    $ oc get route -n openshift-mtv
  2. Retrieve the Inventory service route:

    $ oc get route <inventory_service> -n openshift-mtv
  3. Retrieve the access token:

    $ TOKEN=$(oc whoami -t)
  4. Retrieve the moRef of a VMware vSphere provider:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere -k
  5. Retrieve the datastores of a VMware vSphere source provider:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere/<provider id>/datastores/ -k

    Example output

    [
      {
        "id": "datastore-11",
        "parent": {
          "kind": "Folder",
          "id": "group-s5"
        },
        "path": "/Datacenter/datastore/v2v_general_porpuse_ISCSI_DC",
        "revision": 46,
        "name": "v2v_general_porpuse_ISCSI_DC",
        "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-11"
      },
      {
        "id": "datastore-730",
        "parent": {
          "kind": "Folder",
          "id": "group-s5"
        },
        "path": "/Datacenter/datastore/f01-h27-640-SSD_2",
        "revision": 46,
        "name": "f01-h27-640-SSD_2",
        "selfLink": "providers/vsphere/01278af6-e1e4-4799-b01b-d5ccc8dd0201/datastores/datastore-730"
      },
     ...

In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.
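You can retrieve the moRefs of other entity types by following the same pattern. For example, assuming the Inventory service exposes analogous networks and vms collections for the provider:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere/<provider id>/networks/ -k
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/vsphere/<provider id>/vms/ -k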

2.3.2. Migrating virtual machines with shared disks

You can migrate VMware virtual machines (VMs) with shared disks by using the Migration Toolkit for Virtualization (MTV). This functionality is available only for cold migrations and is not available for shared boot disks.

Shared disks are disks that are attached to more than one VM and that use the multi-writer option. As a result of these characteristics, shared disks are difficult to migrate.

In certain situations, applications in VMs require shared disks. Databases and clustered file systems are the primary use cases for shared disks.

MTV version 2.7.11 or later includes a parameter named migrateSharedDisks in Plan custom resources (CRs) that instructs MTV to either migrate shared disks or to skip them during migration, as follows:

  • If set to true, MTV migrates the shared disks. MTV uses the regular cold migration flow using virt-v2v and labeling the shared persistent volume claims (PVCs).
  • If set to false, MTV skips the shared disks. MTV uses the KubeVirt Containerized-Data-Importer (CDI) for disk transfer.

After the disk transfer, MTV automatically attempts to locate the already shared PVCs and the already migrated shared disks and attach them to the VMs.

By default, migrateSharedDisks is set to true.

To successfully migrate VMs with shared disks, create two Plan CRs as follows:

  • In the first, set migrateSharedDisks to true.

    MTV migrates the following:

    • All shared disks.
    • For each shared disk, one of the VMs that is attached to it. If possible, choose the VMs so that the plan does not contain any shared disk that is attached to more than one VM in the plan. See the following figures for further guidance.
    • All unshared disks attached to the VMs you choose for this plan.
  • In the second, set migrateSharedDisks to false.

    MTV migrates the following:

    • All other VMs.
    • The unshared disks of the VMs in the second Plan CR.

When MTV migrates a VM that has a shared disk attached to it, it does not check whether it has already migrated that shared disk. Therefore, it is important to allocate the VMs between the two plans so that each shared disk is migrated once and only once.

To understand how to assign VMs and shared disks to each of the Plan CRs, consider the two figures that follow. In both, migrateSharedDisks is set to true for plan1 and set to false for plan2.

In the first figure, the VMs and shared disks are assigned correctly:

Figure 2.1. Example of correctly assigned VMs and shared disks

Example successful migration

plan1 migrates VMs 2 and 4, shared disks 1, 2, and 3, and the non-shared disks of VMs 2 and 4. VMs 2 and 4 are included in this plan, because they connect to all the shared disks once each.

plan2 migrates VMs 1 and 3 and their non-shared disks. plan2 does not migrate the shared disks connected to VMs 1 and 3 because migrateSharedDisks is set to false.

MTV migrates each VM and its disks as follows:

  1. From plan1:

    1. VM 2, shared disks 1 and 2, and the non-shared disks attached to VM 2.
    2. VM 4, shared disk 3, and the non-shared disks attached to VM 4.
  2. From plan2:

    1. VM 1 and the non-shared disks attached to it.
    2. VM 3 and the non-shared disks attached to it.

The result is that all the VMs, all the shared disks, and all the non-shared disks are migrated, but each only once. MTV is able to reattach all VMs to their disks, including the shared disks.

In the second figure, the VMs and shared disks are not assigned correctly:

Figure 2.2. Example of incorrectly assigned VMs and shared disks

Complex cyclic shared disk dependencies

In this case, MTV migrates each VM and its disks as follows:

  1. From plan1:

    1. VM 2, shared disks 1 and 2, and the non-shared disks attached to VM 2.
    2. VM 3, shared disks 2 and 3, and the non-shared disks attached to VM 3.
  2. From plan2:

    1. VM 1 and the non-shared disks attached to it.
    2. VM 4 and the non-shared disks attached to it.

This migration "succeeds", but it results in a problem: Shared disk 2 is migrated twice by the first Plan CR. You can resolve this problem by using one of the two workarounds that are discussed in the Known issues section, which follows the procedure.

Procedure

  1. In MTV, create a migration plan for the shared disks, the minimum number of VMs connected to them, and the unshared disks of those VMs.
  2. On the VMware cluster, power off all VMs attached to the shared disks.
  3. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.
  4. Select the desired plan.

    The Plan details page opens.

  5. Click the YAML tab of the plan.
  6. Verify that migrateSharedDisks is set to true.

    Example Plan CR with migrateSharedDisks set to true

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: transfer-shared-disks
      namespace: openshift-mtv
    spec:
      map:
        network:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: NetworkMap
          name: vsphere-7gxbs
          namespace: openshift-mtv
          uid: a3c83db3-1cf7-446a-b996-84c618946362
        storage:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: StorageMap
          name: vsphere-mqp7b
          namespace: openshift-mtv
          uid: 20b43d4f-ded4-4798-b836-7c0330d552a0
      migrateSharedDisks: true
      provider:
        destination:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: host
          namespace: openshift-mtv
          uid: abf4509f-1d5f-4ff6-b1f2-18206136922a
        source:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: vsphere
          namespace: openshift-mtv
          uid: be4dc7ab-fedd-460a-acae-a850f6b9543f
      targetNamespace: openshift-mtv
      vms:
        - id: vm-69
          name: vm-1-with-shared-disks

  7. Start the migration of the first plan and wait for it to finish.
  8. Create a second Plan CR to migrate all the other VMs and their unshared disks to the same target namespace as the first.
  9. In the Plans for virtualization page of the Red Hat OpenShift web console, select the new plan.

    The Plan details page opens.

  10. Click the YAML tab of the plan.
  11. Set migrateSharedDisks to false.

    Example Plan CR with migrateSharedDisks set to false

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: skip-shared-disks
      namespace: openshift-mtv
    spec:
      map:
        network:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: NetworkMap
          name: vsphere-7gxbs
          namespace: openshift-mtv
          uid: a3c83db3-1cf7-446a-b996-84c618946362
        storage:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: StorageMap
          name: vsphere-mqp7b
          namespace: openshift-mtv
          uid: 20b43d4f-ded4-4798-b836-7c0330d552a0
      migrateSharedDisks: false
      provider:
        destination:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: host
          namespace: openshift-mtv
          uid: abf4509f-1d5f-4ff6-b1f2-18206136922a
        source:
          apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: vsphere
          namespace: openshift-mtv
          uid: be4dc7ab-fedd-460a-acae-a850f6b9543f
      targetNamespace: openshift-mtv
      vms:
        - id: vm-71
          name: vm-2-with-shared-disks

  12. Start the migration of the second plan and wait for it to finish.
  13. Verify that all shared disks are attached to the same VMs as they were before migration and that none are duplicated. In case of problems, see the discussion of known issues that follows.
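    To check which disks are attached to a migrated VM, you can list the volumes in its VirtualMachine manifest, for example:

    $ oc get vm <target_vm_name> -n <target_namespace> -o jsonpath='{.spec.template.spec.volumes}'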

2.3.3. Canceling a migration from the command-line interface

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
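For example, deleting the Migration CR cancels the entire migration. To cancel specific VMs, you can list them in a cancel section of the Migration CR spec; the following manifest is a sketch that assumes the cancel list behaves as in earlier MTV releases:

    $ oc delete migration <name_of_migration_cr> -n <namespace>

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cancel:
        - id: <source_vm1>
        - name: <source_vm2>
    EOF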

Chapter 3. Migrating from Red Hat Virtualization

Run your Red Hat Virtualization migration plan from the MTV UI or from the command-line.

3.1. Prerequisites

  • You have planned your migration from Red Hat Virtualization.

3.2. Running a migration plan in the MTV UI

You can run a migration plan and view its progress in the Red Hat OpenShift web console.

Prerequisites

  • Valid migration plan.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.

  2. Click Start beside a migration plan to start the migration.
  3. Click Start in the confirmation window that opens.

    The plan’s Status changes to Running, and the migration’s progress is displayed.

    Warm migration only:

    • The precopy stage starts.
    • Click Cutover to complete the migration.

      Warning

      Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.

  4. Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:

    • The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
    • The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:

      • The name of the VM
      • The start and end times of the migration
      • The amount of data copied
      • A progress pipeline for the VM’s migration

        Warning

        vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.

  5. Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:

    1. Click the Virtual Machines tab.
    2. Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.

      The VM’s details are displayed.

    3. In the Pods section, in the Pod links column, click the Logs link.

      The Logs tab opens.

      Note

      Logs are not always available. The following are common reasons for logs not being available:

      • The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
      • No pod was created.
      • The pod was deleted.
      • The migration failed before running the pod.
    4. To see the raw logs, click the Raw link.
    5. To download the logs, click the Download link.

3.2.1. Migration plan options

On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu kebab beside a migration plan to access the following options:

  • Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:

    • All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
    • The plan’s mapping on the Mappings tab.
    • The hooks listed on the Hooks tab.
  • Start migration: Active only if relevant.
  • Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
  • Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:

    • Set cutover: Set the date and time for a cutover.
    • Remove cutover: Cancel a scheduled cutover. Active only if relevant.
  • Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    • Migrate VMs to a different namespace.
    • Edit an archived migration plan.
    • Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
  • Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.

    Note

    Archive Plan is irreversible. However, you can duplicate an archived plan.

  • Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.

    Note

    Delete Plan is irreversible.

    Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.

    Note

    The results of archiving and then deleting a migration plan depend on whether you created the plan and its storage and network mappings by using the CLI or the UI.

    • If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
    • If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.

3.2.2. Canceling a migration

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.
  2. Click the name of a running migration plan to view the migration details.
  3. Select one or more VMs and click Cancel.
  4. Click Yes, cancel to confirm the cancellation.

    In the Migration details by VM list, the status of the canceled VMs is Canceled. Virtual machines that have already been migrated and those that have not yet been migrated are not affected.

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

3.3. Running a Red Hat Virtualization migration from the command-line

You can migrate from a Red Hat Virtualization (RHV) source provider by using the command-line interface (CLI).

Prerequisites

If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.

Note
  • Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
  • LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not being used by VMs in the target environment at the same time, because simultaneous use might lead to data corruption.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: ovirt
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: <user> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify the RHV Manager user.
    3
    Specify the user password.
    4
    Specify "true" to skip certificate verification, and specify "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
    5
    Enter the Manager CA certificate, unless it was replaced by a third-party certificate, in which case, enter the Manager Apache CA certificate. You can retrieve the Manager CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA.
    6
    Specify the API endpoint URL, for example, https://<engine_host>/ovirt-engine/api.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: ovirt
      url: <api_end_point> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Specify the URL of the API endpoint, for example, https://<engine_host>/ovirt-engine/api.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source: 2
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    You can use either the id or the name parameter to specify the source network. For id, specify the RHV network Universal Unique ID (UUID).
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_storage_domain> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    Specify the RHV storage domain UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
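    If you do not know the storage domain UUID, you can look it up through the RHV Manager REST API, for example (the credentials and host are illustrative):

    $ curl -k -u admin@internal:<password> -H "Accept: application/json" https://<engine_host>/ovirt-engine/api/storagedomains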
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> 1
      playbook: |
        LS0tCi0gbm... 2
    EOF
    1
    Optional: Red Hat OpenShift service account. Use the serviceAccount parameter to modify any cluster resources.
    2
    Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

  6. Enter the following command to create the network attachment definition (NAD) of the transfer network used for MTV migrations.

    You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
      preserveClusterCpuModel: true 2
    spec:
      warm: false 3
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 4
        network: 5
          name: <network_map> 6
          namespace: <namespace>
        storage: 7
          name: <storage_map> 8
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 9
        - id: <source_vm1> 10
        - name: <source_vm2>
          hooks: 11
            - hook:
                namespace: <namespace>
                name: <hook> 12
              step: <step> 13
    EOF
    1
    Specify the name of the Plan CR.
    2
    See note below.
    3
    Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run.
    4
    Specify only one network map and one storage map per plan.
    5
    Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    6
    Specify the name of the NetworkMap CR.
    7
    Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
    8
    Specify the name of the StorageMap CR.
    9
    You can use either the id or the name parameter to specify the source VMs.
    10
    Specify the RHV VM UUID.
    11
    Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step.
    12
    Specify the name of the Hook CR.
    13
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
    Note
    • If the migrated machine is set with a custom CPU model, it will be set with that CPU model in the destination cluster, regardless of the setting of preserveClusterCpuModel.
    • If the migrated machine is not set with a custom CPU model:

      • If preserveClusterCpuModel is set to true, MTV checks the CPU model of the VM when it runs in RHV, based on the cluster’s configuration, and then sets the migrated VM with that CPU model.
      • If preserveClusterCpuModel is set to false, MTV does not set a CPU type and the VM is set with the default CPU model of the destination cluster.
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

3.3.1. Canceling a migration from the command-line interface

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.

Chapter 4. Migrating from OpenStack

Run your OpenStack migration plan from the MTV UI or from the command-line.

4.1. Prerequisites

  • You have planned your migration from OpenStack.

4.2. Running a migration plan in the MTV UI

You can run a migration plan and view its progress in the Red Hat OpenShift web console.

Prerequisites

  • Valid migration plan.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.

  2. Click Start beside a migration plan to start the migration.
  3. Click Start in the confirmation window that opens.

    The plan’s Status changes to Running, and the migration’s progress is displayed.

    Warning

    Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.

  4. Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:

    • The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
    • The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:

      • The name of the VM
      • The start and end times of the migration
      • The amount of data copied
      • A progress pipeline for the VM’s migration

        Warning

        vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.

  5. Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:

    1. Click the Virtual Machines tab.
    2. Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.

      The VM’s details are displayed.

    3. In the Pods section, in the Pod links column, click the Logs link.

      The Logs tab opens.

      Note

      Logs are not always available. The following are common reasons for logs not being available:

      • The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
      • No pod was created.
      • The pod was deleted.
      • The migration failed before running the pod.
    4. To see the raw logs, click the Raw link.
    5. To download the logs, click the Download link.

4.2.1. Migration plan options

On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu kebab beside a migration plan to access the following options:

  • Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:

    • All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
    • The plan’s mapping on the Mappings tab.
    • The hooks listed on the Hooks tab.
  • Start migration: Active only if relevant.
  • Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
  • Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:

    • Set cutover: Set the date and time for a cutover.
    • Remove cutover: Cancel a scheduled cutover. Active only if relevant.
  • Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    • Migrate VMs to a different namespace.
    • Edit an archived migration plan.
    • Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
  • Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.

    Note

    Archive Plan is irreversible. However, you can duplicate an archived plan.

  • Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.

    Note

    Delete Plan is irreversible.

    Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it.

    Note

    The results of archiving and then deleting a migration plan depend on whether you created the plan and its storage and network mappings by using the CLI or the UI.

    • If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
    • If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.

4.2.2. Canceling a migration

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.
  2. Click the name of a running migration plan to view the migration details.
  3. Select one or more VMs and click Cancel.
  4. Click Yes, cancel to confirm the cancellation.

    In the Migration details by VM list, the status of the canceled VMs is Canceled. Virtual machines that have already been migrated and those that have not yet been migrated are not affected.

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

4.3. Running an OpenStack migration from the command-line

You can migrate from an OpenStack source provider by using the command-line interface (CLI).

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: openstack
        createdForResourceType: providers
    type: Opaque
    stringData:
      user: <user> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      domainName: <domain_name>
      projectName: <project_name>
      regionName: <region_name>
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify the OpenStack user.
    3
    Specify the password of the OpenStack user.
    4
    Specify "true" to skip certificate verification, and specify "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
    5
    When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
    6
    Specify the API endpoint URL, for example, https://<identity_service>/v3.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: openstack
      url: <api_end_point> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Specify the URL of the API endpoint.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source: 2
            id: <source_network_id>
            name: <source_network_name>
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
            name: <source_network_name>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    You can use either the id or the name parameter to specify the source network. For id, specify the OpenStack network UUID.
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            id: <source_volume_type> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    Specify the OpenStack volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> 1
      playbook: |
        LS0tCi0gbm... 2
    EOF
    1
    Optional: Red Hat OpenShift service account. Specify a service account if the hook needs to modify cluster resources.
    2
    Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
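
    If you keep the playbook in a local file, one way to produce the Base64-encoded string for the playbook field is the base64 utility. The -w0 option, which disables line wrapping, is specific to GNU coreutils:

    $ base64 -w0 playbook.yml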

  6. Enter the following command to configure the network attachment definition (NAD) of the transfer network used for MTV migrations.

    You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
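
    The following is a minimal sketch of a NAD that combines the route annotation with a static IP address assigned by the CNI ipam plugin. The CNI type, bridge name, and address values are assumptions; adjust them to match your cluster's secondary network. To use DHCP instead, replace the ipam block with {"type": "dhcp"}.

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "<name_of_transfer_network>",
          "type": "bridge",
          "bridge": "br1",
          "ipam": {
            "type": "static",
            "addresses": [
              { "address": "192.168.79.20/24", "gateway": "192.168.79.1" }
            ]
          }
        }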
  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 2
        network: 3
          name: <network_map> 4
          namespace: <namespace>
        storage: 5
          name: <storage_map> 6
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 7
        - id: <source_vm1> 8
        - name: <source_vm2>
          hooks: 9
            - hook:
                namespace: <namespace>
                name: <hook> 10
              step: <step> 11
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify only one network map and one storage map per plan.
    3
    Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    4
    Specify the name of the NetworkMap CR.
    5
    Specify a storage mapping, even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
    6
    Specify the name of the StorageMap CR.
    7
    You can use either the id or the name parameter to specify the source VMs.
    8
    Specify the OpenStack VM UUID.
    9
    Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step.
    10
    Specify the name of the Hook CR.
    11
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
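
    For a warm migration, one way to set or change the cutover time after the Migration CR has been created is to patch the CR. The timestamp below is only an example:

    $ oc patch migration/<name_of_migration_cr> -n <namespace> --type=merge -p '{"spec": {"cutover": "2024-04-04T01:23:45.678+09:00"}}'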

4.3.1. Canceling a migration from the command-line interface

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
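
For example, one approach is to cancel an entire migration by deleting its Migration CR, or to cancel specific VMs by adding them to the spec.cancel list of the Migration CR. The following commands are a sketch; the CR names and VM identifiers are placeholders:

$ oc delete migration <name_of_migration_cr> -n <namespace>

$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <name_of_migration_cr>
  namespace: <namespace>
spec:
  plan:
    name: <name_of_plan_cr>
    namespace: <namespace>
  cancel:
    - id: <source_vm1>
    - name: <source_vm2>
EOF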

Chapter 5. Migrating from OVA

Run your OVA migration plan from the MTV UI or from the command-line.

5.1. Prerequisites

  • You have planned your migration from OVA.

5.2. Running a migration plan in the MTV UI

You can run a migration plan and view its progress in the Red Hat OpenShift web console.

Prerequisites

  • Valid migration plan.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.

  2. Click Start beside a migration plan to start the migration.
  3. Click Start in the confirmation window that opens.

    The plan’s Status changes to Running, and the migration’s progress is displayed.

    Warning

    Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.

  4. Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:

    • The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
    • The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:

      • The name of the VM
      • The start and end times of the migration
      • The amount of data copied
      • A progress pipeline for the VM’s migration

        Warning

        vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.

  5. Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:

    1. Click the Virtual Machines tab.
    2. Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.

      The VM’s details are displayed.

    3. In the Pods section, in the Pod links column, click the Logs link.

      The Logs tab opens.

      Note

      Logs are not always available. The following are common reasons for logs not being available:

      • The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
      • No pod was created.
      • The pod was deleted.
      • The migration failed before running the pod.
    4. To see the raw logs, click the Raw link.
    5. To download the logs, click the Download link.

5.2.1. Migration plan options

On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu kebab beside a migration plan to access the following options:

  • Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:

    • All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
    • The plan’s mapping on the Mappings tab.
    • The hooks listed on the Hooks tab.
  • Start migration: Active only if relevant.
  • Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
  • Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:

    • Set cutover: Set the date and time for a cutover.
    • Remove cutover: Cancel a scheduled cutover. Active only if relevant.
  • Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    • Migrate VMs to a different namespace.
    • Edit an archived migration plan.
    • Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
  • Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.

    Note

    Archive Plan is irreversible. However, you can duplicate an archived plan.

  • Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.

    Note

    Delete Plan is irreversible.

    Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan before you delete it.

    Note

    The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.

    • If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
    • If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.

5.2.2. Canceling a migration

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.

Procedure

  1. In the Red Hat OpenShift web console, click Plans for virtualization.
  2. Click the name of a running migration plan to view the migration details.
  3. Select one or more VMs and click Cancel.
  4. Click Yes, cancel to confirm the cancellation.

    In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

5.3. Running an Open Virtual Appliance (OVA) migration from the command-line

You can migrate from Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the command-line interface (CLI).

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: ova
        createdForResourceType: providers
    type: Opaque
    stringData:
      url: <nfs_server:/nfs_path> 2
    EOF
    1
    The ownerReferences section is optional.
    2
    where nfs_server is the IP address or host name of the NFS server where the share was created, and nfs_path is the path on the server where the OVA files are stored.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: ova
      url:  <nfs_server:/nfs_path> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    where nfs_server is the IP address or host name of the NFS server where the share was created, and nfs_path is the path on the server where the OVA files are stored.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source:
            id: <source_network_id> 2
        - destination:
            name: <network_attachment_definition> 3
            namespace: <network_attachment_definition_namespace> 4
            type: multus
          source:
            id: <source_network_id>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod and multus.
    2
    Specify the OVA network Universal Unique ID (UUID).
    3
    Specify a network attachment definition for each additional OpenShift Virtualization network.
    4
    Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            name: Dummy storage for source provider <provider_name> 2
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
    2
    For OVA, the StorageMap can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks and replacing <provider_name> with the actual name of the provider.
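
    For example, for a hypothetical source provider named ova-provider, the source entry would read:

          source:
            name: Dummy storage for source provider ova-provider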
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> 1
      playbook: |
        LS0tCi0gbm... 2
    EOF
    1
    Optional: Red Hat OpenShift service account. Specify a service account if the hook needs to modify cluster resources.
    2
    Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

  6. Enter the following command to configure the network attachment definition (NAD) of the transfer network used for MTV migrations.

    You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 2
        network: 3
          name: <network_map> 4
          namespace: <namespace>
        storage: 5
          name: <storage_map> 6
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms: 7
        - id: <source_vm1> 8
        - name: <source_vm2>
          hooks: 9
            - hook:
                namespace: <namespace>
                name: <hook> 10
              step: <step> 11
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify only one network map and one storage map per plan.
    3
    Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    4
    Specify the name of the NetworkMap CR.
    5
    Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
    6
    Specify the name of the StorageMap CR.
    7
    You can use either the id or the name parameter to specify the source VMs.
    8
    Specify the OVA VM UUID.
    9
    Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
    10
    Specify the name of the Hook CR.
    11
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

5.3.1. Canceling a migration from the command-line interface

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.

Chapter 6. Migrating from OpenShift Virtualization

Run your OpenShift Virtualization migration plan from the MTV UI or from the command-line.

6.1. Prerequisites

  • You have planned your migration from OpenShift Virtualization.

6.2. Running a migration plan in the MTV UI

You can run a migration plan and view its progress in the Red Hat OpenShift web console.

Prerequisites

  • Valid migration plan.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.

  2. Click Start beside a migration plan to start the migration.
  3. Click Start in the confirmation window that opens.

    The plan’s Status changes to Running, and the migration’s progress is displayed.

    Warning

    Do not take a snapshot of a VM after you start a migration. Taking a snapshot after a migration starts might cause the migration to fail.

  4. Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:

    • The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
    • The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:

      • The name of the VM
      • The start and end times of the migration
      • The amount of data copied
      • A progress pipeline for the VM’s migration

        Warning

        vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.

  5. Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:

    1. Click the Virtual Machines tab.
    2. Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.

      The VM’s details are displayed.

    3. In the Pods section, in the Pod links column, click the Logs link.

      The Logs tab opens.

      Note

      Logs are not always available. The following are common reasons for logs not being available:

      • The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
      • No pod was created.
      • The pod was deleted.
      • The migration failed before running the pod.
    4. To see the raw logs, click the Raw link.
    5. To download the logs, click the Download link.

6.2.1. Migration plan options

On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu kebab beside a migration plan to access the following options:

  • Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:

    • All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
    • The plan’s mapping on the Mappings tab.
    • The hooks listed on the Hooks tab.
  • Start migration: Active only if relevant.
  • Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
  • Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:

    • Set cutover: Set the date and time for a cutover.
    • Remove cutover: Cancel a scheduled cutover. Active only if relevant.
  • Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    • Migrate VMs to a different namespace.
    • Edit an archived migration plan.
    • Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
  • Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.

    Note

    Archive Plan is irreversible. However, you can duplicate an archived plan.

  • Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.

    Note

    Delete Plan is irreversible.

    Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan before you delete it.

    Note

    The results of archiving and then deleting a migration plan vary by whether you created the plan and its storage and network mappings using the CLI or the UI.

    • If you created them using the UI, then the migration plan and its mappings no longer appear in the UI.
    • If you created them using the CLI, then the mappings might still appear in the UI. This is because mappings in the CLI can be used by more than one migration plan, but mappings created in the UI can only be used in one migration plan.

6.2.2. Canceling a migration

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.

Procedure

  1. In the Red Hat OpenShift web console, click Plans for virtualization.
  2. Click the name of a running migration plan to view the migration details.
  3. Select one or more VMs and click Cancel.
  4. Click Yes, cancel to confirm the cancellation.

    In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

6.3. Running a Red Hat OpenShift Virtualization migration from the command-line

You can use a Red Hat OpenShift Virtualization provider as either a source provider or as a destination provider. You can migrate from an OpenShift Virtualization source provider by using the command-line interface (CLI).

Note

The Red Hat OpenShift cluster version of the source provider must be 4.16 or later.

Procedure

  1. Create a Secret manifest for the source provider credentials:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret>
      namespace: <namespace>
      ownerReferences: 1
        - apiVersion: forklift.konveyor.io/v1beta1
          kind: Provider
          name: <provider_name>
          uid: <provider_uid>
      labels:
        createdForProviderType: openshift
        createdForResourceType: providers
    type: Opaque
    stringData:
      token: <token> 2
      password: <password> 3
      insecureSkipVerify: <"true"/"false"> 4
      cacert: | 5
        <ca_certificate>
      url: <api_end_point> 6
    EOF
    1
    The ownerReferences section is optional.
    2
    Specify a token for a service account with cluster-admin privileges. If both token and url are left blank, the local OpenShift cluster is used.
    3
    Specify the user password.
    4
    Specify "true" to skip certificate verification, and specify "false" to verify the certificate. Defaults to "false" if not specified. Skipping certificate verification proceeds with an insecure migration and then the certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
    5
    When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
    6
    Specify the URL of the endpoint of the API server.
  2. Create a Provider manifest for the source provider:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <source_provider>
      namespace: <namespace>
    spec:
      type: openshift
      url: <api_end_point> 1
      secret:
        name: <secret> 2
        namespace: <namespace>
    EOF
    1
    Specify the URL of the endpoint of the API server.
    2
    Specify the name of the provider Secret CR.
  3. Create a NetworkMap manifest to map the source and destination networks:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: NetworkMap
    metadata:
      name: <network_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            name: <network_name>
            type: pod 1
          source:
            name: <network_name>
            type: pod
        - destination:
            name: <network_attachment_definition> 2
            namespace: <network_attachment_definition_namespace> 3
            type: multus
          source:
            name: <network_attachment_definition>
            namespace: <network_attachment_definition_namespace>
            type: multus
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are pod, ignored, and multus.
    2
    Specify the network name. When the type is multus, use the OpenShift Virtualization network attachment definition name.
    3
    Required only when the type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
  4. Create a StorageMap manifest to map source and destination storage:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: StorageMap
    metadata:
      name: <storage_map>
      namespace: <namespace>
    spec:
      map:
        - destination:
            storageClass: <storage_class>
            accessMode: <access_mode> 1
          source:
            name: <storage_class>
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
    EOF
    1
    Allowed values are ReadWriteOnce and ReadWriteMany.
  5. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> 1
      playbook: |
        LS0tCi0gbm... 2
    EOF
    1
    Optional: Red Hat OpenShift service account. Specify a service account if the hook needs to modify cluster resources.
    2
    Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

  6. Enter the following command to configure the network attachment definition (NAD) of the transfer network used for MTV migrations.

    You use this definition to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically.

    Configuring the IP address enables the interface to reach the configured gateway.

    $ oc edit NetworkAttachmentDefinitions <name_of_the_NAD_to_edit>
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: <name_of_transfer_network>
      namespace: <namespace>
      annotations:
        forklift.konveyor.io/route: <IP_address>
  7. Create a Plan manifest for the migration:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: <plan> 1
      namespace: <namespace>
    spec:
      provider:
        source:
          name: <source_provider>
          namespace: <namespace>
        destination:
          name: <destination_provider>
          namespace: <namespace>
      map: 2
        network: 3
          name: <network_map> 4
          namespace: <namespace>
        storage: 5
          name: <storage_map> 6
          namespace: <namespace>
      targetNamespace: <target_namespace>
      vms:
        - name: <source_vm>
          namespace: <namespace>
          hooks: 7
            - hook:
                namespace: <namespace>
                name: <hook> 8
              step: <step> 9
    EOF
    1
    Specify the name of the Plan CR.
    2
    Specify only one network map and one storage map per plan.
    3
    Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
    4
    Specify the name of the NetworkMap CR.
    5
    Specify a storage mapping, even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.
    6
    Specify the name of the StorageMap CR.
    7
    Optional: Specify up to two hooks for a VM. Each hook must run during a separate migration step.
    8
    Specify the name of the Hook CR.
    9
    Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
  8. Create a Migration manifest to run the Plan CR:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <name_of_migration_cr>
      namespace: <namespace>
    spec:
      plan:
        name: <name_of_plan_cr>
        namespace: <namespace>
      cutover: <optional_cutover_time>
    EOF
    Note

    If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.

6.3.1. Canceling a migration from the command-line interface

You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.

Chapter 7. Advanced migration options

Perform advanced migration operations, such as changing precopy snapshot intervals for warm migration, creating custom rules for validation, or adding hooks to your migration plan.

7.1. Changing precopy intervals for warm migration

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

Procedure

  • Patch the ForkliftController CR:

    $ oc patch forkliftcontroller/<forklift-controller> -n openshift-mtv -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge 1
    1
    Specify the precopy interval in minutes. The default value is 60.

    You do not need to restart the forklift-controller pod.
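
    To confirm that the patch was applied, you can read the value back from the CR, for example:

    $ oc get forkliftcontroller/<forklift-controller> -n openshift-mtv -o jsonpath='{.spec.controller_precopy_interval}'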

7.2. Creating custom rules for the Validation service

The Validation service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation service generates a list of concerns for each VM, which are stored in the Provider Inventory service as VM attributes. The web console displays the concerns for each VM in the provider inventory.

You can create custom rules to extend the default ruleset of the Validation service. For example, you can create a rule that checks whether a VM has multiple disks.

7.2.1. About Rego files

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:

drs_enabled.rego example

package io.konveyor.forklift.vmware 1

has_drs_enabled {
    input.host.cluster.drsEnabled 2
}

concerns[flag] {
    has_drs_enabled
    flag := {
        "category": "Information",
        "label": "VM running in a DRS-enabled cluster",
        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
    }
}

1
Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for Red Hat Virtualization.
2
Query parameters are based on the input key of the Validation service JSON.

7.2.2. Checking the default validation rules

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

Procedure

  1. Connect to the terminal of the Validation pod:

    $ oc rsh <validation_pod>
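
    If you do not know the name of the Validation pod, you can list the pods in the openshift-mtv namespace and look for the validation pod, for example:

    $ oc get pods -n openshift-mtv | grep validation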
  2. Go to the OPA policies directory for your provider:

    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> 1
    1
    Specify vmware or ovirt.
  3. Search for the default policies:

    $ grep -R "default" *

7.2.3. Creating a validation rule

To ensure that your custom validation rules persist across pod restarts, scaling events, and Migration Toolkit for Virtualization (MTV) upgrades, the rules must be deployed using a ConfigMap and referenced in the MTV Custom Resource (CR).

Important
  • If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.
  • If you create a rule that contradicts a default rule, the Validation service will not start.

Validation rule example

Validation rules are based on virtual machine (VM) attributes that are collected and simplified by the Provider Inventory service.

The Provider Inventory service acts as an abstraction layer, normalizing complex, provider-specific VM properties into standardized, testable attributes for the validation engine. This allows rules to be written once and applied across different source environments.

For example, a validation rule needs to check whether a VMware VM has NUMA node affinity configured:

  • Provider-specific path: The raw VMware API exposes this configuration through a deeply nested path: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].
  • Simplified inventory attribute: The Provider Inventory service simplifies this path and presents it as a testable attribute with a normalized list value:

    "numaNodeAffinity": [
        "0",
        "1"
    ],

This simplified attribute (numaNodeAffinity) is what the validation engine uses to evaluate the rules efficiently.

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

count(input.numaNodeAffinity) != 0

Procedure

  1. Create a config map CR according to the following example:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: <forklift-validation-config>
      namespace: openshift-mtv
    data:
      vmware_multiple_disks.rego: |-
        package <provider_package> 1
    
        has_multiple_disks { 2
          count(input.disks) > 1
        }
    
        concerns[flag] {
          has_multiple_disks 3
            flag := {
              "category": "<Information>", 4
              "label": "Multiple disks detected",
              "assessment": "Multiple disks detected on this VM."
            }
        }
    EOF
    1
    Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for Red Hat Virtualization.
    2
    Specify the concerns name and Rego query.
    3
    Specify the concerns name and flag parameter values.
    4
    Allowed values are Critical, Warning, and Information.
  2. Stop the Validation pod by scaling the forklift-validation deployment to 0:

    $ oc scale -n openshift-mtv --replicas=0 deployment/forklift-validation
  3. Start the Validation pod by scaling the forklift-validation deployment to 1:

    $ oc scale -n openshift-mtv --replicas=1 deployment/forklift-validation
  4. Wait a few moments for the pod to restart.
  5. Check the Validation pod log to verify that the pod started:

    $ oc logs -f <validation_pod>

    If the custom rule conflicts with a default rule, the Validation pod will not start.

  6. Remove the source provider:

    $ oc delete provider <provider> -n openshift-mtv
  7. Add the source provider to apply the new rule:

    $ cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: <provider>
      namespace: openshift-mtv
    spec:
      type: <provider_type> 1
      url: <api_end_point> 2
      secret:
        name: <secret> 3
        namespace: openshift-mtv
    EOF
    1
    Allowed values are ovirt, vsphere, and openstack.
    2
    Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for RHV, or https://<identity_service>/v3 for OpenStack.
    3
    Specify the name of the provider Secret CR.

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

7.2.4. Updating the inventory rules version

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

The rules version is recorded in a rules_version.rego file for each provider.

Procedure

  1. Retrieve the current rules version:

    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version

    Example output

    {
       "result": {
           "rules_version": 5
       }
    }

  2. Connect to the terminal of the Validation pod:

    $ oc rsh <validation_pod>
  3. Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.
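
    The exact contents of the file depend on your MTV version. As a minimal sketch, a Rego file that exposes the version as a single value might look like the following, where the package name and version number are assumptions:

    package io.konveyor.forklift.<provider>

    rules_version := 6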
  4. Log out of the Validation pod terminal.
  5. Verify the updated rules version:

    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version

    Example output

    {
       "result": {
           "rules_version": 6
       }
    }

7.2.5. Retrieving the Inventory service JSON

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.

Procedure

  1. Retrieve the routes for the project:

    $ oc get route -n openshift-mtv
  2. Retrieve the Inventory service route:

    $ oc get route <inventory_service> -n openshift-mtv
  3. Retrieve the access token:

    $ TOKEN=$(oc whoami -t)
  4. Trigger an HTTP GET request (for example, using Curl):

    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
  5. Retrieve the UUID of a provider:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider> -k 1
    1
    Allowed values for the provider are vsphere, ovirt, and openstack.
  6. Retrieve the VMs of a provider:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
  7. Retrieve the details of a VM:

    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k

    Example output

    {
        "input": {
            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
            "id": "vm-431",
            "parent": {
                "kind": "Folder",
                "id": "group-v22"
            },
            "revision": 1,
            "name": "iscsi-target",
            "revisionValidated": 1,
            "isTemplate": false,
            "networks": [
                {
                    "kind": "Network",
                    "id": "network-31"
                },
                {
                    "kind": "Network",
                    "id": "network-33"
                }
            ],
            "disks": [
                {
                    "key": 2000,
                    "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
                    "datastore": {
                        "kind": "Datastore",
                        "id": "datastore-63"
                    },
                    "capacity": 17179869184,
                    "shared": false,
                    "rdm": false
                },
                {
                    "key": 2001,
                    "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
                    "datastore": {
                        "kind": "Datastore",
                        "id": "datastore-63"
                    },
                    "capacity": 10737418240,
                    "shared": false,
                    "rdm": false
                }
            ],
            "concerns": [],
            "policyVersion": 5,
            "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
            "firmware": "bios",
            "powerState": "poweredOn",
            "connectionState": "connected",
            "snapshot": {
                "kind": "VirtualMachineSnapshot",
                "id": "snapshot-3034"
            },
            "changeTrackingEnabled": false,
            "cpuAffinity": [
                0,
                2
            ],
            "cpuHotAddEnabled": true,
            "cpuHotRemoveEnabled": false,
            "memoryHotAddEnabled": false,
            "faultToleranceEnabled": false,
            "cpuCount": 2,
            "coresPerSocket": 1,
            "memoryMB": 2048,
            "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
            "balloonedMemory": 0,
            "ipAddress": "10.19.2.96",
            "storageUsed": 30436770129,
            "numaNodeAffinity": [
                "0",
                "1"
            ],
            "devices": [
                {
                    "kind": "RealUSBController"
                }
            ],
            "host": {
                "id": "host-29",
                "parent": {
                    "kind": "Cluster",
                    "id": "domain-c26"
                },
                "revision": 1,
                "name": "IP address or host name of the vCenter host or RHV Engine host",
                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
                "status": "green",
                "inMaintenance": false,
                "managementServerIp": "10.19.2.96",
                "thumbprint": <thumbprint>,
                "timezone": "UTC",
                "cpuSockets": 2,
                "cpuCores": 16,
                "productName": "VMware ESXi",
                "productVersion": "6.5.0",
                "networking": {
                    "pNICs": [
                        {
                            "key": "key-vim.host.PhysicalNic-vmnic0",
                            "linkSpeed": 10000
                        },
                        {
                            "key": "key-vim.host.PhysicalNic-vmnic1",
                            "linkSpeed": 10000
                        },
                        {
                            "key": "key-vim.host.PhysicalNic-vmnic2",
                            "linkSpeed": 10000
                        },
                        {
                            "key": "key-vim.host.PhysicalNic-vmnic3",
                            "linkSpeed": 10000
                        }
                    ],
                    "vNICs": [
                        {
                            "key": "key-vim.host.VirtualNic-vmk2",
                            "portGroup": "VM_Migration",
                            "dPortGroup": "",
                            "ipAddress": "192.168.79.13",
                            "subnetMask": "255.255.255.0",
                            "mtu": 9000
                        },
                        {
                            "key": "key-vim.host.VirtualNic-vmk0",
                            "portGroup": "Management Network",
                            "dPortGroup": "",
                            "ipAddress": "10.19.2.13",
                            "subnetMask": "255.255.255.128",
                            "mtu": 1500
                        },
                        {
                            "key": "key-vim.host.VirtualNic-vmk1",
                            "portGroup": "Storage Network",
                            "dPortGroup": "",
                            "ipAddress": "172.31.2.13",
                            "subnetMask": "255.255.0.0",
                            "mtu": 1500
                        },
                        {
                            "key": "key-vim.host.VirtualNic-vmk3",
                            "portGroup": "",
                            "dPortGroup": "dvportgroup-48",
                            "ipAddress": "192.168.61.13",
                            "subnetMask": "255.255.255.0",
                            "mtu": 1500
                        },
                        {
                            "key": "key-vim.host.VirtualNic-vmk4",
                            "portGroup": "VM_DHCP_Network",
                            "dPortGroup": "",
                            "ipAddress": "10.19.2.231",
                            "subnetMask": "255.255.255.128",
                            "mtu": 1500
                        }
                    ],
                    "portGroups": [
                        {
                            "key": "key-vim.host.PortGroup-VM Network",
                            "name": "VM Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                        },
                        {
                            "key": "key-vim.host.PortGroup-Management Network",
                            "name": "Management Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_10G_Network",
                            "name": "VM_10G_Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_Storage",
                            "name": "VM_Storage",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_DHCP_Network",
                            "name": "VM_DHCP_Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                        },
                        {
                            "key": "key-vim.host.PortGroup-Storage Network",
                            "name": "Storage Network",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_Isolated_67",
                            "name": "VM_Isolated_67",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                        },
                        {
                            "key": "key-vim.host.PortGroup-VM_Migration",
                            "name": "VM_Migration",
                            "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                        }
                    ],
                    "switches": [
                        {
                            "key": "key-vim.host.VirtualSwitch-vSwitch0",
                            "name": "vSwitch0",
                            "portGroups": [
                                "key-vim.host.PortGroup-VM Network",
                                "key-vim.host.PortGroup-Management Network"
                            ],
                            "pNICs": [
                                "key-vim.host.PhysicalNic-vmnic4"
                            ]
                        },
                        {
                            "key": "key-vim.host.VirtualSwitch-vSwitch1",
                            "name": "vSwitch1",
                            "portGroups": [
                                "key-vim.host.PortGroup-VM_10G_Network",
                                "key-vim.host.PortGroup-VM_Storage",
                                "key-vim.host.PortGroup-VM_DHCP_Network",
                                "key-vim.host.PortGroup-Storage Network"
                            ],
                            "pNICs": [
                                "key-vim.host.PhysicalNic-vmnic2",
                                "key-vim.host.PhysicalNic-vmnic0"
                            ]
                        },
                        {
                            "key": "key-vim.host.VirtualSwitch-vSwitch2",
                            "name": "vSwitch2",
                            "portGroups": [
                                "key-vim.host.PortGroup-VM_Isolated_67",
                                "key-vim.host.PortGroup-VM_Migration"
                            ],
                            "pNICs": [
                                "key-vim.host.PhysicalNic-vmnic3",
                                "key-vim.host.PhysicalNic-vmnic1"
                            ]
                        }
                    ]
                },
                "networks": [
                    {
                        "kind": "Network",
                        "id": "network-31"
                    },
                    {
                        "kind": "Network",
                        "id": "network-34"
                    },
                    {
                        "kind": "Network",
                        "id": "network-57"
                    },
                    {
                        "kind": "Network",
                        "id": "network-33"
                    },
                    {
                        "kind": "Network",
                        "id": "dvportgroup-47"
                    }
                ],
                "datastores": [
                    {
                        "kind": "Datastore",
                        "id": "datastore-35"
                    },
                    {
                        "kind": "Datastore",
                        "id": "datastore-63"
                    }
                ],
                "vms": null,
                "networkAdapters": [],
                "cluster": {
                    "id": "domain-c26",
                    "parent": {
                        "kind": "Folder",
                        "id": "group-h23"
                    },
                    "revision": 1,
                    "name": "mycluster",
                    "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
                    "folder": "group-h23",
                    "networks": [
                        {
                            "kind": "Network",
                            "id": "network-31"
                        },
                        {
                            "kind": "Network",
                            "id": "network-34"
                        },
                        {
                            "kind": "Network",
                            "id": "network-57"
                        },
                        {
                            "kind": "Network",
                            "id": "network-33"
                        },
                        {
                            "kind": "Network",
                            "id": "dvportgroup-47"
                        }
                    ],
                    "datastores": [
                        {
                            "kind": "Datastore",
                            "id": "datastore-35"
                        },
                        {
                            "kind": "Datastore",
                            "id": "datastore-63"
                        }
                    ],
                    "hosts": [
                        {
                            "kind": "Host",
                            "id": "host-44"
                        },
                        {
                            "kind": "Host",
                            "id": "host-29"
                        }
                    ],
                    "dasEnabled": false,
                    "dasVms": [],
                    "drsEnabled": true,
                    "drsBehavior": "fullyAutomated",
                    "drsVms": [],
                    "datacenter": null
                }
            }
        }
    }

7.3. Adding hooks to an MTV migration plan

You can add hooks to a Migration Toolkit for Virtualization (MTV) migration plan to perform automated operations on a VM, either before or after you migrate it.

7.3.1. About hooks for MTV migration plans

You can add hooks to a Migration Toolkit for Virtualization (MTV) migration plan to perform automated operations on a VM, either before or after you migrate it.

You can add hooks to Migration Toolkit for Virtualization (MTV) migration plans using either the MTV CLI or the MTV user interface, which is located in the Red Hat OpenShift web console.

  • Pre-migration hooks are hooks that perform operations on a VM that is located on a provider. This prepares the VM for migration.
  • Post-migration hooks are hooks that perform operations on a VM that has migrated to OpenShift Virtualization.

7.3.1.1. Default hook image

The default hook image for an MTV hook is quay.io/kubev2v/hook-runner. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary.

7.3.1.2. Hook execution

An Ansible Playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster in the openshift-mtv namespace using the ServiceAccount you choose.

When you add a hook, you must specify the namespace where the Hook CR is located, the name of the hook, and whether the hook is a pre-migration hook or a post-migration hook.

Important

In order for a hook to run on a VM, the VM must be started and available using SSH.

The illustration that follows shows the general process of using a migration hook. For specific procedures, see Adding a migration hook to a migration plan using the Red Hat OpenShift web console and Adding a migration hook to a migration plan using the CLI.

Figure 7.1. Adding a hook to a migration plan


Process:

  1. Input your Ansible hook and credentials.

    1. Input an Ansible hook image to the MTV controller using either the UI or the CLI.

      • In the UI, specify the ansible-runner and enter the playbook.yml that contains the hook.
      • In the CLI, input the hook image, which specifies the playbook that runs the hook.
    2. If you need additional data to run the playbook inside the pod, such as SSH data, create a Secret that contains credentials for the VM. The Secret is not mounted to the pod, but is called by the playbook.

      Note

      This Secret is not the same as the Secret CR that contains the credentials of your source provider.

  2. The MTV controller creates the ConfigMap, which contains:

    • workload.yml, which contains information about the VMs.
    • playbook.yml, the raw string playbook you want to run.
    • plan.yml, which is the Plan CR.

      The ConfigMap contains the name of the VM and instructs the playbook what to do. An illustrative sketch of this ConfigMap follows this process list.

  3. The MTV controller creates a job that starts the user-specified image.

    1. Mounts the ConfigMap to the container.

      The Ansible hook imports the Secret that the user previously entered.

  4. The job runs a pre-migration hook or a post-migration hook as follows:

    1. For a pre-migration hook, the job logs into the VMs on the source provider using SSH and runs the hook.
    2. For a post-migration hook, the job logs into the VMs on OpenShift Virtualization using SSH and runs the hook.
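
The following sketch shows the general shape of the ConfigMap that the MTV controller generates in step 2. The name and metadata are set by the controller, and the file contents are abbreviated; the sketch is illustrative only.

apiVersion: v1
kind: ConfigMap
metadata:
  name: <generated_hook_configmap>
  namespace: openshift-mtv
data:
  workload.yml: |
    # information about the VM being migrated
    ...
  playbook.yml: |
    # the raw string playbook that the hook runs
    ...
  plan.yml: |
    # the Plan CR
    ...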

7.3.2. Adding a migration hook to a migration plan using the Red Hat OpenShift web console

You can add a migration hook to an existing migration plan by using the Red Hat OpenShift web console.

Note

You need to run one command in the Migration Toolkit for Virtualization (MTV) CLI.

For example, you can create a hook to install the cloud-init service on a VM and write a file before migration.

Note

You can run one pre-migration hook, one post-migration hook, or one of each per migration plan.

Prerequisites

  • Migration plan
  • Migration hook file, whose contents you copy and paste into the web console
  • File containing the Secret for the source provider
  • Red Hat OpenShift service account that is called by the hook and has at least write access to the namespace you are working in
  • SSH access for VMs you want to migrate with the public key installed on the VMs
  • VMs running on Microsoft Server only: Remote Execution enabled

Additional resources

For instructions for creating a service account, see Understanding and creating service accounts.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization and then click the migration plan you want to add the hook to.
  2. Click Hooks.
  3. For a pre-migration hook, perform the following steps:

    1. In the Pre migration hook section, toggle the Enable hook switch to Enable pre migration hook.
    2. Enter the Hook runner image. If you specify spec.playbook, you must use an image that includes ansible-runner.
    3. Paste your hook as a YAML file in the Ansible playbook text box.
  4. For a post-migration hook, perform the following steps:

    1. In the Post migration hook section, toggle the Enable hook switch to Enable post migration hook.
    2. Enter the Hook runner image. If you specify spec.playbook, you must use an image that includes ansible-runner.
    3. Paste your hook as a YAML file in the Ansible playbook text box.
  5. At the top of the tab, click Update hooks.
  6. In a terminal, enter the following command to associate each hook with your Red Hat OpenShift service account:

    $ oc -n openshift-mtv patch hook <name_of_hook> \
      -p '{"spec":{"serviceAccount":"<service_account>"}}' --type merge

The example migration hook that follows ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the MariaDB database and generating a text file.

Example migration hook

- name: Main
  hosts: localhost
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
  - k8s_info:
      api_version: v1
      kind: Secret
      name: privkey
      namespace: openshift-mtv
    register: ssh_credentials

  - name: Ensure SSH directory exists
    file:
      path: ~/.ssh
      state: directory
      mode: 0750

  - name: Create SSH key
    copy:
      dest: ~/.ssh/id_rsa
      content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
      mode: 0600

  - add_host:
      name: "{{ vm.ipaddress }}"  # ALT "{{ vm.guestnetworks[2].ip }}"
      ansible_user: root
      groups: vms

- hosts: vms
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
  - name: Stop MariaDB
    service:
      name: mariadb
      state: stopped

  - name: Create Test File
    copy:
      dest: /premigration.txt
      content: "Migration from {{ provider.source.name }}
                of {{ vm.vm1.vm0.id }} has finished\n"
      mode: 0644

7.3.3. Adding a migration hook to a migration plan using the CLI

You can use a Hook CR to add a pre-migration hook or a post-migration hook to an existing migration plan by using the Migration Toolkit for Virtualization (MTV) CLI.

For example, you can create a Hook custom resource (CR) to install the cloud-init service on a VM and write a file before migration.

Note

You can run one pre-migration hook, one post-migration hook, or one of each per migration plan. Each hook needs its own Hook CR, but a Plan CR contains data for all the hooks it uses.

Note

You can retrieve additional information stored in a secret or in a ConfigMap by using a k8s module.

Prerequisites

  • Migration plan
  • Migration hook image or the playbook containing the hook
  • File containing the Secret for the source provider
  • Red Hat OpenShift service account that is called by the hook and has at least write access to the namespace you are working in
  • SSH access for VMs you want to migrate with the public key installed on the VMs
  • VMs running on Microsoft Server only: Remote Execution enabled

Additional resources

For instructions for creating a service account, see Understanding and creating service accounts.

Procedure

  1. If needed, create a Secret with an SSH private key for the VM. A command-line sketch of this step follows the procedure.

    1. Choose an existing key or generate a key pair.
    2. Install the public key on the VM.
    3. Encode the private key in the Secret to base64.

      apiVersion: v1
      data:
        key: VGhpcyB3YXMgZ2Vu...
      kind: Secret
      metadata:
        name: ssh-credentials
        namespace: openshift-mtv
      type: Opaque
  2. Encode your playbook by piping it to base64, for example:

    $ cat playbook.yml | base64 -w0
  3. Create a Hook CR:

    $  cat << EOF | oc apply -f -
    apiVersion: forklift.konveyor.io/v1beta1
    kind: Hook
    metadata:
      name: <hook>
      namespace: <namespace>
    spec:
      image: quay.io/kubev2v/hook-runner
      serviceAccount: <service_account> 1
      playbook: |
        LS0tCi0gbm... 2
    EOF
    1
    (Optional) Red Hat OpenShift service account. The serviceAccount must be provided if you want to manipulate any resources of the cluster.
    2
    Base64-encoded Ansible Playbook. If you specify a playbook, the image must include an ansible-runner.
    Note

    You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

    Note

    To decode an attached playbook, retrieve the resource with custom output and pipe it to base64. For example:

    $ oc get -n konveyor-forklift hook playbook -o \
        go-template='{{ .spec.playbook }}' | base64 -d
  4. In the Plan CR of the migration, for each VM, add the following section to the end of the CR:

      vms:
        - id: <vm_id>
          hooks:
            - hook:
                namespace: <namespace>
                name: <name_of_hook>
              step: <type_of_hook> 1
    1
    Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.
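
The following is a minimal command-line sketch of preparing the SSH credentials Secret from step 1, assuming that ssh-keygen and ssh-copy-id are available and that root SSH access to the VM is acceptable; the file name hook_key and the VM address are placeholders:

$ ssh-keygen -t ed25519 -f ./hook_key -N ""       # generate a key pair without a passphrase
$ ssh-copy-id -i ./hook_key.pub root@<vm_ip>      # install the public key on the VM
$ oc -n openshift-mtv create secret generic ssh-credentials \
    --from-literal=key="$(cat ./hook_key)"        # oc stores the private key Base64-encoded in the Secret
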
Important

In order for a PreHook to run on a VM, the VM must be started and available via SSH.

The example migration hook that follows ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the MariaDB database and generating a text file.

Example migration hook

- name: Main
  hosts: localhost
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
  - k8s_info:
      api_version: v1
      kind: Secret
      name: privkey
      namespace: openshift-mtv
    register: ssh_credentials

  - name: Ensure SSH directory exists
    file:
      path: ~/.ssh
      state: directory
      mode: 0750

  - name: Create SSH key
    copy:
      dest: ~/.ssh/id_rsa
      content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
      mode: 0600

  - add_host:
      name: "{{ vm.ipaddress }}"  # ALT "{{ vm.guestnetworks[2].ip }}"
      ansible_user: root
      groups: vms

- hosts: vms
  vars_files:
    - plan.yml
    - workload.yml
  tasks:
  - name: Stop MariaDB
    service:
      name: mariadb
      state: stopped

  - name: Create Test File
    copy:
      dest: /premigration.txt
      content: "Migration from {{ provider.source.name }}
                of {{ vm.vm1.vm0.id }} has finished\n"
      mode: 0644

Chapter 8. Upgrading or uninstalling the Migration Toolkit for Virtualization

You can upgrade or uninstall the Migration Toolkit for Virtualization (MTV) by using the Red Hat OpenShift web console or the command-line interface (CLI).

8.1. Upgrading the Migration Toolkit for Virtualization

You can upgrade the MTV Operator by using the Red Hat OpenShift web console to install the new version.

Procedure

  1. In the Red Hat OpenShift web console, click Operators > Installed Operators > Migration Toolkit for Virtualization Operator > Subscription.
  2. Change the update channel to the correct release.

    See Changing update channel in the Red Hat OpenShift documentation.

  3. Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    1. Note the catalog source, for example, redhat-operators.
    2. From the command line, retrieve the catalog source pod:

      $ oc get pod -n openshift-marketplace | grep <catalog_source>
    3. Delete the pod:

      $ oc delete pod -n openshift-marketplace <catalog_source_pod>

      Upgrade status changes from Up to date to Upgrade available.

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

  4. If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    See Manually approving a pending upgrade in the Red Hat OpenShift documentation.

  5. If you are upgrading from MTV 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image; a command-line sketch follows this procedure. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a VMware vSphere source provider.
  6. If you mapped to NFS on the Red Hat OpenShift destination provider in MTV 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.
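
The following is a minimal sketch of adding a VDDK init image to a vSphere provider from the command line. The provider name vsphere-provider and the settings.vddkInitImage field are assumptions; verify the field name against your Provider CR before applying:

$ oc patch provider vsphere-provider -n openshift-mtv --type merge \
    -p '{"spec": {"settings": {"vddkInitImage": "<registry_route_or_server_path>/vddk:<tag>"}}}'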

8.2. Uninstalling MTV by using the Red Hat OpenShift web console

You can uninstall Migration Toolkit for Virtualization (MTV) by using the Red Hat OpenShift web console.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. In the Red Hat OpenShift web console, click Operators > Installed Operators.
  2. Click Migration Toolkit for Virtualization Operator.

    The Operator Details page opens in the Details tab.

  3. Click the ForkliftController tab.
  4. Click Actions and select Delete ForkliftController.

    A confirmation window opens.

  5. Click Delete.

    The controller is removed.

  6. Open the Details tab.

    The Create ForkliftController button appears instead of the controller you deleted. There is no need to click it.

  7. On the upper-right side of the page, click Actions and select Uninstall Operator.

    A confirmation window opens, displaying any operand instances.

  8. To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared.

    Important

    If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup.

  9. Click Uninstall.

    The Installed Operators page opens, and the Migration Toolkit for Virtualization Operator is removed from the list of installed Operators.

  10. Click Home > Overview.
  11. In the Status section of the page, click Dynamic Plugins.

    The Dynamic Plugins pop-up opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console.

  12. Click forklift-console-plugin.

    The ConsolePlugin details page opens in the Details tab.

  13. On the upper right side of the page, click Actions and select Delete ConsolePlugin from the list.

    A confirmation window opens.

  14. Click Delete.

    The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, refresh the Overview page.

8.3. Uninstalling MTV from the command line

You can uninstall Migration Toolkit for Virtualization (MTV) from the command line.

Note

This action does not remove resources managed by the MTV Operator, including custom resource definitions (CRDs) and custom resources (CRs). To remove these after uninstalling the MTV Operator, you might need to manually delete the MTV Operator CRDs.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. Delete the forklift controller by running the following command:

    $ oc delete ForkliftController --all -n openshift-mtv
  2. Delete the subscription to the MTV Operator by running the following command:

    $ oc get subscription -o name|grep 'mtv-operator'| xargs oc delete
  3. Delete the clusterserviceversion for the MTV Operator by running the following command:

    $ oc get clusterserviceversion -o name|grep 'mtv-operator'| xargs oc delete
  4. Delete the plugin console CR by running the following command:

    $ oc delete ConsolePlugin forklift-console-plugin
  5. Optional: Delete the custom resource definitions (CRDs) by running the following command:

    $ oc get crd -o name | grep 'forklift.konveyor.io' | xargs oc delete
  6. Optional: Perform cleanup by deleting the MTV project by running the following command:

    $ oc delete project openshift-mtv

Chapter 9. Troubleshooting migration

Troubleshoot migration issues, navigate custom resources (CRs), services, and workflows, and download logs and CRs for troubleshooting information.

9.1. Error messages

This section describes error messages and how to resolve them.

9.1.1. warm import retry limit reached

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.
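
For example, if you manage the VMware environment with the govc CLI, a sketch of listing and deleting snapshots follows; the VM and snapshot names are placeholders:

$ govc snapshot.ls -vm <vm_name>                       # list the existing snapshots
$ govc snapshot.remove -vm <vm_name> <snapshot_name>   # delete a snapshot to drop below the limit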

9.1.2. Unable to resize disk image to required size

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely account for the space reserved for the root partition.

To resolve this problem, increase the file system overhead in CDI to more than 10%.
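
For example, a sketch of raising the global file system overhead by patching the CDI custom resource follows; it assumes the CDI CR is named cdi and sets the overhead to 11%:

$ oc patch cdi cdi --type merge \
    -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.11"}}}}'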

9.2. Using the must-gather tool

You can collect logs and information about MTV custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

Note

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

Prerequisites

  • You must be logged in to the OpenShift Virtualization cluster as a user with the cluster-admin role.
  • You must have the Red Hat OpenShift CLI (oc) installed.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.
  2. Run the oc adm must-gather command:

    $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.10.0

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

  3. Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    • Namespace:

      $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.10.0 \
        -- NS=<namespace> /usr/bin/targeted
    • Migration plan:

      $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.10.0 \
        -- PLAN=<migration_plan> /usr/bin/targeted
    • Virtual machine:

      $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.10.0 \
        -- VM=<vm_id> NS=<namespace> /usr/bin/targeted 1
      1
      Specify the VM ID as it appears in the Plan CR.

9.3. MTV custom resources and services

The Migration Toolkit for Virtualization (MTV) is provided as a Red Hat OpenShift Operator. It creates and manages the following custom resources (CRs) and services.

9.3.1. MTV custom resources

  • Provider CR stores attributes that enable MTV to connect to and interact with the source and target providers.
  • NetworkMapping CR maps the networks of the source and target providers.
  • StorageMapping CR maps the storage of the source and target providers.
  • Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.
  • Migration CR runs a migration plan.

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.
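
    For example, a minimal Migration CR that runs a Plan CR might look like the following sketch; the names are placeholders, and the optional cutover timestamp applies only to warm migrations:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: openshift-mtv
    spec:
      plan:
        name: <plan>
        namespace: openshift-mtv
      # cutover: "2026-01-01T12:00:00Z"   # optional: schedules the cutover of a warm migration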

9.3.2. MTV services

  • The Inventory service performs the following actions:

    • Connects to the source and target providers.
    • Maintains a local inventory for mappings and plans.
    • Stores VM configurations.
    • Runs the Validation service if a VM configuration change is detected.
  • The Validation service checks the suitability of a VM for migration by applying rules.
  • The Migration Controller service orchestrates migrations.

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed.

  • The Populator Controller service orchestrates disk transfers using Volume Populators.
  • The Kubevirt Controller and Containerized Data Import (CDI) Controller services handle most technical operations.

9.4. High-level migration workflow

The high-level workflow shows the migration process from the point of view of the user:

  1. You create a source provider, a target provider, a network mapping, and a storage mapping.
  2. You create a Plan custom resource (CR) that includes the following resources:

    • Source provider
    • Target provider, if MTV is not installed on the target cluster
    • Network mapping
    • Storage mapping
    • One or more virtual machines (VMs)
  3. You run a migration plan by creating a Migration CR that references the Plan CR.

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

  4. For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.
  5. Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

9.4.1. Detailed migration workflow

You can use the detailed migration workflow to troubleshoot a failed migration.

The workflow describes the following steps:

Warm Migration or migration to a remote OpenShift cluster:

  1. When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    For each VM disk:

  2. The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.
  3. If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.
  4. The CDI Controller service creates an importer pod.
  5. The importer pod streams the VM disk to the PV.

    After the VM disks are transferred:

  6. The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

  7. The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.
  8. If the VM was running on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

Cold migration from RHV or OpenStack to the local OpenShift cluster:

  1. When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a PersistentVolumeClaim CR for each source VM disk, and either an OvirtVolumePopulator CR when the source is RHV or an OpenstackVolumePopulator CR when the source is OpenStack.

    For each VM disk:

  2. The Populator Controller service creates a temporary persistent volume claim (PVC).
  3. If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    • The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.
  4. The Populator Controller service creates a populator pod.
  5. The populator pod transfers the disk data to the PV.

    After the VM disks are transferred:

  6. The temporary PVC is deleted, and the initial PVC points to the PV with the data.
  7. The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.
  8. If the VM was running on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

Cold migration from VMware to the local OpenShift cluster:

  1. When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    For each VM disk:

  2. The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.
  3. If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

For all VM disks:

  1. The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.
  2. The Migration Controller service creates a conversion pod for all PVCs.
  3. The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    After the VM disks are transferred:

  4. The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.
  5. If the VM was running on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

9.4.2. How MTV uses the virt-v2v tool

The Migration Toolkit for Virtualization (MTV) uses the virt-v2v tool to convert the disk image of a virtual machine (VM) into a format compatible with OpenShift Virtualization. The tool makes migrations easier because it automatically performs the tasks needed to make your VMs work with OpenShift Virtualization. For example, it enables paravirtualized VirtIO drivers in the converted VM, if possible, and installs the QEMU guest agent.

virt-v2v is included in Red Hat Enterprise Linux (RHEL) versions 7 and later.

9.4.2.1. Main functions of virt-v2v in MTV migrations

During migration, MTV uses virt-v2v to collect metadata about VMs, make necessary changes to VM disks, and copy the disks containing the VMs to OpenShift Virtualization.

virt-v2v makes the following changes to VM disks to prepare them for migration:

  • Additions:

    • Injection of VirtIO drivers, for example, network or disk drivers.
    • Preparation of hypervisor-specific tools or agents, for example, a QEMU guest agent installation.
    • Modification of boot configuration, for example, updated bootloader or boot entries.
  • Removals:

    • Unnecessary or former hypervisor-specific files, for example, VMware tools or VirtualBox additions.
    • Old network driver configurations, for example, removing VMware-specific NIC drivers.
    • Configuration settings that are incompatible with the target system, for example, old boot settings.

If you are migrating from VMware or from Open Virtual Appliances (OVA) files, virt-v2v also sets their IP addresses either during the migration or during the first reboot of the VMs after migration.

Note

You can also run predefined Ansible hooks before or after a migration using MTV. For more information, see Adding hooks to an MTV migration plan.

These hooks do not necessarily use virt-v2v.

9.4.2.2. Customizing, removing, and installing files

MTV uses virt-v2v to perform additional guest customizations during the conversion, such as the following actions:

  • Customization to preserve IP addresses
  • Customization to preserve drive letters
Note

For Red Hat Enterprise Linux (RHEL)-based guests, virt-v2v attempts to install the guest agent from the Red Hat registry. If the migration is run in a detached environment, the installation program fails, and you must use hooks or other automation to install the guest agent.

For more information, see the virt-v2v man reference pages.

9.4.2.3. Permissions and virt-v2v

virt-v2v does not require permissions or access credentials for the guest operating system itself because virt-v2v is not run against a running VM, but only against the disks of a VM.

9.5. Collected logs and custom resource information

You can download logs and custom resource (CR) yaml files for the following targets by using the Red Hat OpenShift web console or the command-line interface (CLI):

  • Migration plan: Web console or CLI.
  • Virtual machine: Web console or CLI.
  • Namespace: CLI only.

The must-gather tool collects the following logs and CR files in an archive file:

  • CRs:

    • DataVolume CR: Represents a disk mounted on a migrated VM.
    • VirtualMachine CR: Represents a migrated VM.
    • Plan CR: Defines the VMs and storage and network mapping.
    • Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.
  • Logs:

    • importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated RHV VM ID and btnfh is the generated 5-character ID.
    • conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.
    • virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.
    • forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.
    • forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.
    • hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      Note

      Empty or excluded log files are not included in the must-gather archive file.

Example must-gather archive structure for a VMware migration plan

must-gather
└── namespaces
    ├── target-vm-ns
    │   ├── crs
    │   │   ├── datavolume
    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
    │   │   └── virtualmachine
    │   │       ├── test-test-rhel8-2disks2nics.yaml
    │   │       └── test-x2019.yaml
    │   └── logs
    │       ├── importer-mig-plan-vm-7595-tkhdz
    │       │   └── current.log
    │       ├── importer-mig-plan-vm-7595-5qvqp
    │       │   └── current.log
    │       ├── importer-mig-plan-vm-8325-xccfw
    │       │   └── current.log
    │       ├── mig-plan-vm-7595-4glzd
    │       │   └── current.log
    │       └── mig-plan-vm-8325-4zw49
    │           └── current.log
    └── openshift-mtv
        ├── crs
        │   └── plan
        │       └── mig-plan-cold.yaml
        └── logs
            ├── forklift-controller-67656d574-w74md
            │   └── current.log
            └── forklift-must-gather-api-89fc7f4b6-hlwb6
                └── current.log

9.5.1. Downloading logs and custom resource information from the web console

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) from the Red Hat OpenShift web console.

Procedure

  1. In the Red Hat OpenShift web console, click Migration > Plans for virtualization.
  2. Click Get logs beside a migration plan name.
  3. In the Get logs window, click Get logs.

    The logs are collected. A Log collection complete message is displayed.

  4. Click Download logs to download the archive file.
  5. To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

9.5.2. Accessing logs and custom resource information from the command line

You can access logs and information about custom resources (CRs) from the command line by using the must-gather tool. You must attach a must-gather data file to all customer cases.

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

Note

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

Prerequisites

  • You must be logged in to the OpenShift Virtualization cluster as a user with the cluster-admin role.
  • You must have the Red Hat OpenShift CLI (oc) installed.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.
  2. Run the oc adm must-gather command:

    $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.10.0

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

  3. Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    • Namespace:

      $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.10.0 \
        -- NS=<namespace> /usr/bin/targeted
    • Migration plan:

      $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.10.0 \
        -- PLAN=<migration_plan> /usr/bin/targeted
    • Virtual machine:

      $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.10.0 \
        -- VM=<vm_name> NS=<namespace> /usr/bin/targeted 1
      1
      You must specify the VM name, not the VM ID, as it appears in the Plan CR.

Chapter 10. MTV performance recommendations

Review recommendations for network and storage performance, cold and warm migrations, and multiple migrations or single migrations.

10.1. MTV performance recommendations

The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Migration Toolkit for Virtualization (MTV), based on findings observed through testing.

The data provided here was collected from testing in Red Hat labs and is provided for reference only. 

Overall, these numbers should be considered to show the best-case scenarios.

The observed performance of migration can differ from these results and depends on several factors.

10.1.1. Ensure fast storage and network speeds

Ensure fast storage and network speeds, both for VMware and Red Hat OpenShift (OCP) environments.

  • To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast: ensure a 10 Gigabit Ethernet (GbE) network connection and avoid network bottlenecks.

    • Extend the VMware network to the OCP Workers Interface network environment.
    • Ensure that the VMware network offers high throughput (10 Gigabit Ethernet) so that reception rates align with the read rate of the ESXi datastore.
    • Be aware that the migration process uses significant bandwidth on the migration network. If other services use that network, the migration might impact those services and reduce migration rates.
    • For example, in testing, the average network transfer rate from the vmnic of each ESXi host transferring data to the OCP interface was 200 to 325 MiB/s.

10.1.2. Ensure fast datastore read speeds to ensure efficient and performant migrations

Datastore read rates impact the total transfer times, so it is essential to ensure that fast reads are possible from the ESXi datastore to the ESXi host.

Example in numbers: 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible.

10.1.3. Endpoint types 

MTV 2.6 allows for the following vSphere provider options:

  • ESXi endpoint (inventory and disk transfers from ESXi), introduced in MTV 2.6
  • vCenter Server endpoint; no networks for the ESXi host (inventory and disk transfers from vCenter)
  • vCenter endpoint and ESXi networks are available (inventory from vCenter, disk transfers from ESXi).

When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested.

Note

As of vSphere 7.0, ESXi hosts can label which network to use for Network Block Device (NBD) transport. This is accomplished by tagging the desired virtual network interface card (NIC) with the appropriate vSphereBackupNFC label. When this is done, MTV is able to use the ESXi interface for network transfer to OpenShift provided that the worker and ESXi host interfaces are reachable. This is especially useful when migration users might not have access to the ESXi credentials yet want to be able to control which ESXi interface is used for migration. 

For more details, see (MTV-1230).

You can use the following ESXi command, which designates interface vmk2 for NBD backup:

esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2

10.1.4. Set ESXi hosts BIOS profile and ESXi Host Power Management for High Performance

Where possible, ensure that hosts used to perform migrations are set with BIOS profiles related to maximum performance. Hosts which use Host Power Management controlled within vSphere should check that High Performance is set.

Testing showed that when transferring more than 10 VMs with both BIOS and host power management set accordingly, migrations had an increase of 15 MiB/s in the average datastore read rate.

10.1.5. Avoid additional network load on VMware networks

You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint.

When you add a virtualization provider, MTV enables you to select a specific network that is accessible on the ESXi hosts for migrating virtual machines to OpenShift. Selecting this migration network from the ESXi host in the MTV UI ensures that the transfer is performed using the selected network as an ESXi endpoint.

It is imperative to ensure that the network selected has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated.

In environments with fast networks, such as 10GbE networks, migration network impacts can be expected to match the rate of ESXi datastore reads.

10.1.6. Control maximum concurrent disk migrations per ESXi host

Set the MAX_VM_INFLIGHT MTV variable to control the maximum number of concurrent VM transfers allowed per ESXi host.

MTV allows for concurrency to be controlled using this variable; by default, it is set to 20.

When setting MAX_VM_INFLIGHT, consider the maximum number of concurrent VM transfers required per ESXi host. It is also important to consider the type of migration that is transferred concurrently. Warm migrations are migrations of a running VM that is migrated over a scheduled period of time.

Warm migrations use snapshots to compare and migrate only the differences between previous snapshots of the disk. The migration of the differences between snapshots happens over specific intervals before a final cut-over of the running VM to OpenShift occurs. 

In MTV 2.6, MAX_VM_INFLIGHT reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT indicates how many concurrent VM transfers per ESXi host are allowed.

Example

  • MAX_VM_INFLIGHT = 20 and 2 ESXi hosts defined in the provider mean each host can transfer 20 VMs.
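
For example, assuming the setting is exposed as controller_max_vm_inflight in the ForkliftController CR, a sketch of lowering the limit to 10 follows: edit the CR with oc edit forkliftcontroller -n openshift-mtv and add the following line to the spec section:

spec:
  controller_max_vm_inflight: 10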

10.1.7. Migrations are completed faster when migrating multiple VMs concurrently

When multiple VMs from a specific ESXi host are to be migrated, starting concurrent migrations for multiple VMs leads to faster migration times. 

Testing demonstrated that migrating 10 VMs (each containing 35 GiB of data, with a total size of 50 GiB) from a single host is significantly faster than migrating the same number of VMs sequentially, one after another. 

It is possible to increase concurrent migration to more than 10 virtual machines from a single host, but it does not show a significant improvement. 

Examples

  • 1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s
  • 10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s
  • 20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s
Note

From the aforementioned examples, it is evident that the migration of 10 virtual machines simultaneously is three times faster than the migration of identical virtual machines in a sequential manner.

The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously.

10.1.8. Migrations complete faster using multiple hosts

Using multiple hosts with registered VMs equally distributed among the ESXi hosts used for migrations leads to faster migration times.

Testing showed that when transferring more than 10 single-disk VMs, each containing 35 GiB of data on a 50 GiB disk, using an additional host can reduce migration time.

Examples

  • 80 single disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s.
  • 80 single disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s.
Note

From the aforementioned examples, it is evident that migrating 80 VMs from 8 ESXi hosts, 10 from each host, concurrently is four times faster than running the same VMs from a single ESXi host. 

Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance. However, it was not tested and therefore not recommended.

10.1.9. Multiple migration plans compared to a single large migration plan

The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203)

When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start. By breaking up one migration plan into several migration plans, it is possible to start them at the same time.

Comparing migrations of:

  • 500 VMs using 8 ESXi hosts in 1 plan, max_vm_inflight=100, took 5 hours and 10 minutes.
  • 800 VMs using 8 ESXi hosts with 8 plans, max_vm_inflight=100, took 57 minutes.

Testing showed that by breaking one single large plan into multiple moderately sized plans, for example, by using 100 VMs per plan, the total migration time can be reduced.

10.1.10. Maximum values tested for cold migrations

  • Maximum number of ESXi hosts tested: 8
  • Maximum number of VMs in a single migration plan: 500
  • Maximum number of VMs migrated in a single test: 5000
  • Maximum number of migration plans performed concurrently: 40
  • Maximum single disk size migrated: 6 TB disk, which contained 3 TB of data
  • Maximum number of disks on a single VM migrated: 50
  • Highest observed single datastore read rate from a single ESXi server:  312 MiB/second
  • Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second
  • Highest observed virtual NIC transfer rate to an OpenShift worker: 327 MiB/second
  • Maximum migration transfer rate of a single disk: 162 MiB/second (rate observed when transferring nonconcurrent migration of 1.5 TB utilized data)
  • Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35/50 GiB used, from a single ESXi host)
  • Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35/50 GiB used, from 8 ESXi servers, 10 VMs from each ESXi host)

10.1.11. Warm migration recommendations

The following recommendations are specific to warm migrations:

  • Migrate up to 400 disks in parallel

Testing involved migrating 200 VMs in parallel, with 2 disks each using 8 ESXi hosts, for a total of 400 disks. No tests were run on migration plans migrating over 400 disks in parallel, so it is not recommended to migrate over this number of disks in parallel.

  • Migrate up to 200 disks in parallel for the fastest rate

Testing was successfully performed on parallel disk migrations with 200, 300, and 400 disks. There was a decrease in the precopy migration rate, approximately 25%, between the tests migrating 200 disks and those migrating 300 and 400 disks.

Therefore, it is recommended to perform parallel disk migrations in groups of 200 or fewer, instead of 300 to 400 disks, unless a decline of 25% in precopy speed does not affect your cutover planning.

  • When possible, set cutover time to be immediately after a migration plan starts

To reduce the overall time of warm migrations, it is recommended to set the cutover to occur immediately after the migration plan is started. This causes MTV to run only one precopy per VM. This recommendation is valid, no matter how many VMs are in the migration plan.

  • Increase precopy intervals between snapshots

If you are creating many migration plans with a single VM and have enough time between the migration start and the cutover, increase the value of the controller_precopy_interval parameter to between 120 and 240 minutes, inclusive. The longer setting will reduce the total number of snapshots and disk transfers per VM before the cutover.
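
For example, assuming the setting is exposed as controller_precopy_interval in the ForkliftController CR, a sketch of increasing the interval to 120 minutes follows: edit the CR with oc edit forkliftcontroller -n openshift-mtv and add the following line to the spec section:

spec:
  controller_precopy_interval: 120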

10.1.12. Maximum values tested for warm migrations

  • Maximum number of ESXi hosts tested: 8
  • Maximum number of worker nodes: 12
  • Maximum number of VMs in a single migration plan: 200
  • Maximum number of total parallel disk transfers: 400, with 200 VMs, 6 ESXi hosts, and a transfer rate of 667 MB/s
  • Maximum single disk size migrated: 6 TB disk, which contained 3 TB of data
  • Maximum number of disks on a single VM migrated: 3
  • Maximum number of parallel disk transfers per ESXi host: 68
  • Maximum transfer rate observed of a single disk with no concurrent migrations: 76.5 MB/s
  • Maximum transfer rate observed of multiple disks from a single ESXi host: 253 MB/s (concurrent migration of 10 VMs, 1 disk each, 35/50 GiB used per disk)
  • Total transfer rate observed of multiple disks (210) from 8 ESXi hosts: 802 MB/s (concurrent migration of 70 VMs, 3 disks each, 35/50 GiB used per disk)

10.1.13. Recommendations for migrating VMs with large disks

The following recommendations apply to VMs with data on disk totaling 1 TB or greater for each individual disk:

  • Schedule appropriate maintenance windows for migrating large disk virtual machines (VMs). Such migrations are sensitive operations and might require careful planning of maintenance windows and downtime, especially during periods of lower storage and network activity.
  • Check that no other migration activities or other heavy network or storage activities are run during those large virtual machine (VM) migrations. You should treat these large virtual machine migrations as a special case. During those migrations, prioritize MTV activities. Plan to migrate those VMs to a time when there are fewer activities on those VMs and related datastore.
  • For large VMs with a high churn rate, which means data is frequently changed in amounts of 100 GB or more between snapshots, consider reducing the warm migration controller_precopy_interval from the default, which is 60 minutes. It is important to ensure that this process is started at least 24 hours before the scheduled cutover to allow for multiple successful precopy snapshots to complete. When scheduling the cutover, ensure that the maintenance window allows for enough time for the last snapshot of changes to be copied over and that the cutover process begins at the beginning of that maintenance window.
  • In cases of particularly large single-disk VMs, where some downtime is possible, select cold migrations rather than warm migrations, especially in the case of large VM snapshots.
  • Consider splitting data on particularly large disks to multiple disks, which enables parallel disk migration with MTV when warm migration is used.
  • If you have large database disks with continuous writes of large amounts of data, where downtime and VM snapshots are not possible, it might be necessary to consider database vendor-specific replication options of the database data to target these specific migrations outside MTV. Consult the vendor-specific options of your database if this case applies.

10.1.14. Increasing AIO sizes and buffer counts for NBD transport mode

You can change Network Block Device (NBD) transport network file copy (NFC) parameters to increase migration performance when you use Asynchronous Input/Output (AIO) buffering with the Migration Toolkit for Virtualization (MTV).

Warning

Using AIO buffering is only suitable for cold migration use cases.

Disable AIO settings before initializing warm migrations. For more details, see Disabling AIO buffering.

10.1.14.1. Key findings

  • The best migration performance was achieved by migrating multiple (10) virtual machines (VMs) on a single ESXi host with the following values:

    • VixDiskLib.nfcAio.Session.BufSizeIn64KB=16
    • vixDiskLib.nfcAio.Session.BufCount=4
  • The following improvements were noted when using AIO buffer settings (asynchronous buffer counts):

    • Migration time was reduced by 31.1%, from 0:24:32 to 0:16:54.
    • Read rate was increased from 347.83 MB/s to 504.93 MB/s.
  • There was no significant improvement observed when using AIO buffer settings with a single VM.
  • There was no significant improvement observed when using AIO buffer settings with multiple VMs from multiple hosts.

10.1.14.2. Key requirements for support for AIO sizes and buffer counts

Support is based upon tests performed using the following versions:

  • vSphere 7.0.3
  • VDDK 7.0.3

10.1.15. Enabling and configuring AIO buffering

You can enable and configure Asynchronous Input/Output (AIO) buffering for use with the Migration Toolkit for Virtualization (MTV).

Procedure

  1. Ensure that the forklift-controller pod in the openshift-mtv namespace supports the AIO buffer values. Because the pod name suffix is generated dynamically, retrieve the pod name by running the following command:

    oc get pods -n openshift-mtv | grep forklift-controller | awk '{print $1}'

     For example, if the pod name is forklift-controller-667f57c8f8-qllnx, the output is:

    forklift-controller-667f57c8f8-qllnx
  2. Check the environment variables of the pod by running the following command:

    oc get pod forklift-controller-667f57c8f8-qllnx -n openshift-mtv -o yaml
  3. Check for the following lines in the output:

     ...
     - name: VIRT_V2V_EXTRA_ARGS
     - name: VIRT_V2V_EXTRA_CONF_CONFIG_MAP
     ...
  4. In the openshift-mtv namespace, edit the ForkliftController custom resource (CR) by performing the following steps:

    1. Access the ForkliftController CR for editing by running the following command:

      oc edit forkliftcontroller -n openshift-mtv
    2. Add the following lines to the spec section of the ForkliftController CR:

      virt_v2v_extra_args: "--vddk-config /mnt/extra-v2v-conf/input.conf"
      virt_v2v_extra_conf_config_map: "perf"
  5. Create the required config map perf by running the following command:

    oc -n openshift-mtv create cm perf
  6. Convert the desired buffer configuration values to Base64. For example, for 16/4, run the following command:

    echo -e "VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\nvixDiskLib.nfcAio.Session.BufCount=4" | base64

    The output will be similar to the following:

    Vml4RGlza0xpYi5uZmNBaW8uU2Vzc2lvbi5CdWZTaXplSW42NEtCPTE2CnZpeERpc2tMaWIubmZjQWlvLlNlc3Npb24uQnVmQ291bnQ9NAo=
  7. In the config map perf, enter the Base64 string in the binaryData section, for example:

    apiVersion: v1
    kind: ConfigMap
    binaryData:
      input.conf: Vml4RGlza0xpYi5uZmNBaW8uU2Vzc2lvbi5CdWZTaXplSW42NEtCPTE2CnZpeERpc2tMaWIubmZjQWlvLlNlc3Npb24uQnVmQ291bnQ9NAo=
    metadata:
      name: perf
      namespace: openshift-mtv
  8. Restart the forklift-controller pod to apply the new configuration.
  9. Ensure the VIRT_V2V_EXTRA_ARGS environment variable reflects the updated settings.
  10. Run a migration plan and check the logs of the migration pod. Confirm that the AIO buffer settings are passed as parameters, particularly the --vddk-config value.

     For example, the migration pod log contains an exec line similar to the following:

    exec: /usr/bin/virt-v2v … --vddk-config /mnt/extra-v2v-conf/input.conf

    The logs include a section similar to the following, if debug_level = 4:

    Buffer size calc for 16 value:
    (16 * 64 * 1024 = 1048576)
    nbdkit: vddk[1]: debug: [NFC VERBOSE] NfcAio_OpenSession:
    Opening an AIO session.
    nbdkit: vddk[1]: debug: [NFC INFO] NfcAioInitSession:
    Disabling
    read-ahead buffer since the AIO buffer size of 1048576 is >=
    the read-ahead buffer size of 65536. Explicitly setting flag
     'NFC_AIO_SESSION_NO_NET_READ_AHEAD'
    nbdkit: vddk[1]: debug: [NFC VERBOSE] NfcAioInitSession: AIO Buffer Size is 1048576
    nbdkit: vddk[1]: debug: [NFC VERBOSE] NfcAioInitSession: AIO Buffer
    Count is 4
  11. Verify that the correct config map values are in the migration pod. Do this by logging into the migration pod and running the following command:

    cat /mnt/extra-v2v-conf/input.conf

    Example output is as follows:

    VixDiskLib.nfcAio.Session.BufSizeIn64KB=16
    vixDiskLib.nfcAio.Session.BufCount=4
  12. Optional: Enable debug logs by running the following command. The command converts the configuration to Base64, including a high log level:

     echo -e "VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\nVixDiskLib.nfcAio.Session.BufCount=4\nVixDiskLib.nfc.LogLevel=4" | base64
    Note

    Adding a high log level reduces performance and is for debugging purposes only.

10.1.16. Disabling AIO buffering

You can disable AIO buffering for a cold migration using Migration Toolkit for Virtualization (MTV). You must disable AIO buffering for a warm migration using MTV.

Note

The procedure that follows assumes the AIO buffering was enabled and configured according to the procedure in Enabling and configuring AIO buffering.

Procedure

  1. In the openshift-mtv namespace, edit the ForkliftController custom resource (CR) by performing the following steps:

    1. Access the ForkliftController CR for editing by running the following command:

      oc edit forkliftcontroller -n openshift-mtv
    2. Remove the following lines from the spec section of the ForkliftController CR:

       virt_v2v_extra_args: "--vddk-config /mnt/extra-v2v-conf/input.conf"
       virt_v2v_extra_conf_config_map: "perf"
  2. Delete the config map named perf:

    oc delete cm perf -n openshift-mtv
  3. Optional: Restart the forklift-controller pod to ensure that the changes took effect.

10.2. MTV performance addendum

The data provided here was collected from testing in Red Hat labs and is provided for reference only. 

Overall, these numbers should be considered to show the best-case scenarios.

The observed performance of migration can differ from these results and depends on several factors.

Chapter 11. Telemetry

Red Hat uses telemetry to collect anonymous usage data from Migration Toolkit for Virtualization (MTV) installations to help us improve the usability and efficiency of MTV.

MTV collects the following data:

  • Migration plan status: The number of migrations. Includes those that failed, succeeded, or were canceled.
  • Provider: The number of migrations per provider. Includes Red Hat Virtualization, vSphere, OpenStack, OVA, and OpenShift Virtualization providers.
  • Mode: The number of migrations by mode. Includes cold and warm migrations.
  • Target: The number of migrations by target. Includes local and remote migrations.
  • Plan ID: The ID number of the migration plan. The number is assigned by MTV.

Metrics are calculated every 10 seconds and are reported per week, per month, and per year.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.