Migrating 7-mode vfiler into C-mode
Assumptions
- The source vfiler is only serving NFS v3
- This process will attempt to use the read/write mode as there are no LUNs
- The project will not attempt to migrate the Snapmirror destination as the WAN links to our DR site are good enough to re-baseline within our DR’s RPO.
- The source and destination systems have been prepared
Summary
- Create C-mode SVM same name as source but with temp IP address
- Create standalone 7MTT Project migrating source vols (not vol0)
- Run a read/write test to validate migration (any data changes are lost once testing is done):
- manual update of any root-based export rules
- validate DNS
- validate NIS
- Resync project
- Cut over at agreed date/time
- all clients unmount source volumes
- final sync
- storage cutover
- Manual tidy up steps
- validate source is down
- change target SVM IP address to be the same as the source
- validate/update all root-based export policies are correct
- validate DNS
- validate NIS
- Clients re-connect to C-mode SVM
- Source volume is offlined automagically, preserving source data.
- Source SnapMirror relationship is broken and target volume offlined.
- A manual C-mode SVM DR relationship will be created post-migration.
Create a Destination SVM
Create a DNS entry for the temporary IP address of the target SVM. For example, the source vfiler being migrated is:
vnas-testvsm01 = 192.168.18.38
So the C-mode SVM temp entry is:
vnas-testvsm01-mig = 192.168.18.44
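If you want to be sure the new record is in place before building the SVM, a quick check from any Linux host (hostnames are the example ones above) might look like:
# the temporary record should return 192.168.18.44, the original should still return 192.168.18.38
host vnas-testvsm01-mig
host vnas-testvsm01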
Then create an SVM using the same name as the source vfiler. Do not create any SnapMirror configuration, data volumes or export policies.
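As a rough sketch of the SVM creation (the aggregate, root volume name and language values are assumptions for illustration; adjust to your environment):
vserver create -vserver vnas-testvsm01 -rootvolume vnas_testvsm01_root -aggregate aggr1_nas30a -rootvolume-security-style unix -language C.UTF-8
vserver nfs create -vserver vnas-testvsm01 -v3 enabled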
Create the data LIF using the temp IP address and relevant services. This is required because the SVM doesn’t reside in the default IPspace, so the transition tool cannot migrate routing or interface configurations:
network interface create -vserver vnas-testvsm01 -lif data_temp -role data -data-protocol nfs -home-node nas30a -home-port a0a-116 -address 192.168.18.44 -netmask 255.255.252.0 -status-admin up -auto-revert true
network route create -vserver vnas-testvsm01 -destination 0.0.0.0/0 -gateway 192.168.16.1
vserver services name-service dns create -vserver vnas-testvsm01 -domains <internal domain> -name-servers <nameserver1>,<nameserver2>,<nameserver3> -state enabled
vserver services name-service nis-domain create -vserver vnas-testvsm01 -domain <NIS domain> -active true -server <nameserver1>,<nameserver2>,<nameserver3>
Update the default export policy to match our standard one:
export-policy rule create -vserver vnas-testvsm01 -policyname default -ruleindex 1 -clientmatch <mgmt host> -rorule any -rwrule any -superuser any -protocol nfs
export-policy rule create -vserver vnas-testvsm01 -policyname default -ruleindex 2 -clientmatch 0.0.0.0/0 -rorule any -rwrule none -superuser none -protocol nfs
Create a Transition Project
Add Source vfiler volumes
Log in to the tool’s web interface (I have a VM running Windows 2012 R2) using an account that has local admin rights over the host.
Under Collect & Assess click on Get Started
Click on Get Started
Enter the source 7-mode system and admin-privileged account credentials, then click Add:
Enter the destination C-mode system and admin-privileged account credentials, then click Add. Both systems should appear below in their respective tables:
Below the tables click on Next
In the project naming dialogue box choose a relevant name. I’m naming it after the source vfiler. The tool will then scan the source filer and list all volumes and vfilers. Click on the source vfiler:
In each data volume’s row, tick the box in the Transition as stand-alone column. They should then appear in the box below. Click on Create Project and Continue
In the Data Copy and Multipath IP Configuration section make sure the IP address of the 7-mode system in the Data Copy IP box:
- belongs to a fast interface, for example a bonded or 10gig interface
- has data copy permission (all interfaces should as part of the system’s preparation steps)
By default it will use the 7-mode management IP address, which in this example is on a bonded vif, so is acceptable. Click on Next
Run Project pre-flight checks
It is recommended to run the pre-checks, so click on the Run Pre-checks button to generate a report of any issues and warnings.
Once the report is complete, summarising the checks and their results, click on Close. You can then review any points or click on the Save As CSV link at the bottom. Carry out remedial work on any show-stopper issues before continuing by clicking Next.
Mapping to Destination SVM
In the SVM Mapping window the previously configured C-mode system will be automagically populated and all running SVMs in the cluster will be listed. Select the previously created SVM and then click Next
Leave the Target Volume mount policy as Preserve 7-mode mount paths
In the Volumes to Aggregates mapping window try to choose aggregates on the same node as the target SVM’s home node, nas30a in this case. Click Next when complete.
We do not want to migrate the source vfiler’s IP address so ignore the box next to its IP address:
Click on Next
Create Migration Update Schedule
In the Data Copy Schedule window click on Create Schedule
Fill in the relevant details. The smallest schedule interval is every 30 minutes unless you select Continuous Updates, in which case it will be every 5 minutes. When complete, click Create
Click Next
Final Project pre-flight checks
Then click on Run Prechecks
Review the report to see if any new warnings or errors appear. Fix any that can be fixed, or make a note of any post-cutover work.
Click Next to view the Project summary.
Click Save and go to Dashboard
Begin Project Sync
On the main page click on Migrate and then Dashboard. The previously created projects should be there. Click on the required project.
Start the baseline copy
In the main pane the Preparation icon should be a green circle. Click on the Start Baseline button to start the copy. If you have any warnings a confirmation box will appear; just confirm you wish to continue. The tool will then run through the checks one more time and, if no errors are detected, will begin.
You can monitor progress by:
- snapmirror show on the C-mode target
- snapmirror status on the 7-mode source
- View Transition Details button in the migration tool
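As an example of the first two (the prompts are the example cluster and source controller from this guide; exact output will vary):
netapp02::> snapmirror show -destination-vserver vnas-testvsm01
nas20> snapmirror status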
Test Project Cutover
Once the baseline is complete and the update schedule is updating correctly as shown by a Green circle:
The source configuration can be applied to the target SVM, and in the case of NAS only migrations, tested before cutover.
To test the applied configuration, called read-write mode, select the Test Mode check box and then click Apply Configuration. Review the summary window and then click on Continue:
A dialogue box appears showing the progress of the task, along with information, warnings, and errors. Click on View Detailed Results to see more information.
Review any warnings, for example DNS and hosts configuration:
dns show -vserver vnas-testvsm01
vserver services dns hosts show -vserver vnas-testvsm01
Check other settings such as export policies and quotas to make sure they’ve migrated correctly:
export-policy rule show -vserver vnas-testvsm01 -policyname vnas-testvsm015 -fields clientmatch,rorule,rwrule,superuser
quota show -vserver vnas-testvsm01
quota report -vserver vnas-testvsm01 -volume vnas_testvsm01_data1
Test access to the exported volume(s) from a client that does and does not have access:
[root@netapphost ~]# mount -t nfs 192.168.18.44:/vol/vnas_testvsm01_data1 /mnt/test/
[root@netapphost ~]# ls -la /mnt/test/
total 38371428
drwxr-xr-x  2 root root        4096 Sep  4 12:36 .
dr-xr-xr-x 30 root root        4096 Sep  3 14:39 ..
-rw-r--r--  1 root root 39138237658 Aug 20 14:14 w4.dmp
-rw-r--r--  1 root root          26 Sep  4 12:36 post_export_update.txt
drwxrwxrwx 15 root root        4096 Sep  9 14:47 .snapshot
[root@netapphost ~]#
[root@testhost ~]# mount -t nfs 192.168.18.44:/vol/vnas_testvsm01_data1 /mnt/test/
mount.nfs: access denied by server while mounting 192.168.18.44:/vol/vnas_testvsm01_data1
Once all is checked out click on Close to close the Operation Progress Window.
Re-establish Project Sync
Even though there were warnings and the Precutover section is an orange circle, the Finish Testing button is available showing that migration is possible:
Unmount any C-mode SVM file systems and then click on Finish Testing to re-establish the sync between the two NetApp systems.
A progress window will appear; review any warnings and then click Close
The project summary graphic should show the following, with the Complete Transition button available meaning it’s ready for the final part:
Until final cutover you can verify the syncs are working using the command:
snapmirror show-history -destination-vserver vnas-testvsm01 -operation-type manual-update
Final Project Cutover
In the Project Summary window, just before the maintenance window, perform a manual SnapMirror update (unless you’ve chosen the continuous update schedule) by clicking on the Update Now button:
Click Continue and then a progress window will appear for the entirety of the update. Review the summary and then click on Close.
When complete get all the clients to unmount the source filesystems.
If this takes a long time, the manual update can be re-run during the unmount process.
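On each client this is a normal unmount; the mount point below is just the test one used earlier in this guide:
umount /mnt/7-2-c-test
# if the unmount fails with "device is busy", list what is still using it
fuser -vm /mnt/7-2-c-test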
Then click on Complete Transition, which will make the source unavailable and perform a final update. It warns about correcting any outstanding warnings; ignore these if the testing performed OK or you know what you need to update upon cutover:
It is recommended to take the source offline, and unless you have more than 50 volumes being migrated you do not need to adjust the default concurrent transfer configuration. Click Continue:
A summary window will appear giving real-time updates on the cutover. Review any warnings (mine was for Unicode directories) and continue.
Post Project Manual steps
These steps need to be completed before clients attempt to re-mount the exported filesystems
The source vfiler is still online but the migrated volumes are not:
[root@netapphost ~]# ssh nas20 vfiler status
vfiler0                          running
<SNIP>
vnas-testvsm01                   running
[root@netapphost ~]# ssh nas20 vol status | grep offline
<SNIP>
         vnas_testvsm01_data1  offline            raid_dp, flex
[root@netapphost ~]#
Firstly, check the target SVM’s export policies and namespace to make sure all is OK. Adjust any policy rules that need r/w added to or removed from root access rules. I suggest using the GUI for this as it’s quicker.
Then check that quotas are on and applied correctly.
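If you’d rather check from the CLI than the GUI, something along these lines works (the volume and SVM names are the example ones used in this migration):
export-policy rule show -vserver vnas-testvsm01 -fields policyname,clientmatch,rorule,rwrule,superuser
volume show -vserver vnas-testvsm01 -fields junction-path,policy
quota show -vserver vnas-testvsm01 -fields state
quota report -vserver vnas-testvsm01 -volume vnas_testvsm01_data1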
Ping test the SVM:
[root@netapphost ~]# ping 192.168.18.44
PING 192.168.18.44 (192.168.18.44) 56(84) bytes of data.
64 bytes from 192.168.18.44: icmp_seq=1 ttl=254 time=0.090 ms
If OK stop the source vfiler:
[root@netapphost ~]# ssh nas20 vfiler stop vnas-testvsm01
vnas-testvsm01 stopped
[root@netapphost ~]# ping vnas-testvsm01
PING vnas-testvsm01.ebi.ac.uk (192.168.18.38) 56(84) bytes of data.
^C
--- vnas-testvsm01.ebi.ac.uk ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1694ms
Update target SVM with source vfiler IP address:
netapp02::> net int modify -vserver vnas-testvsm01 -lif data_temp -address 192.168.18.38 -netmask 255.255.252.0
  (network interface modify)
netapp02::> net int show -vserver vnas-testvsm01
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
vnas-testvsm01
            data_temp  up/up      192.168.18.38/22   nas30a        a0a-116 true
Ping test SVM:
[root@netapphost ~]# ping vnas-testvsm01
PING vnas-testvsm01.ebi.ac.uk (192.168.18.38) 56(84) bytes of data.
64 bytes from vnas-testvsm01.ebi.ac.uk (192.168.18.38): icmp_seq=1 ttl=254 time=0.091 ms
Remount file system and check:
[root@netapphost ~]# mount /mnt/7-2-c-test
[root@netapphost ~]# ls -la /mnt/7-2-c-test/
total 38371432
drwxr-xr-x  3 root root        4096 Sep 10 10:37 .
dr-xr-xr-x 31 root root        4096 Sep 11 09:28 ..
-rw-r--r--  1 root root 39138237658 Aug 20 14:14 genew4.dmp
If a client did NOT unmount, then when the volume comes back online from the SVM the client will see an error and will have to remount:
[root@netapphost ~]# ls -la /mnt/7-2-c-test/
ls: cannot access /mnt/7-2-c-test/: Unknown error 521
As the source volume is offline, no further snapshots will be taken. Break the original SnapMirror relationship and offline the target volume to preserve the data.
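A rough sketch of that tidy-up on the 7-mode side, assuming the DR controller is called nas21 and the destination volume carries the same name as the source (both names are illustrative):
ssh nas21 snapmirror quiesce vnas_testvsm01_data1
ssh nas21 snapmirror break vnas_testvsm01_data1
ssh nas21 vol offline vnas_testvsm01_data1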
Create C-mode SnapMirror
Follow the Identity Preserving SVM DR procedure [to be created]
Clean up tasks
The migration tool will create SVM-based snapshot policies, which in turn will use migration-tool-created cluster-based job schedules.
I prefer cluster-based schedules and policies, so I change the snapshot policies of the SVM’s volumes to the cluster-based STD policy. You can then remove the tool-created schedules and policies:
netapp02::> volume modify -vserver vnas-testvsm01 -volume vol_ -snapshot-policy STD
Warning: You are changing the Snapshot policy on volume vol_abacus to STD. Any Snapshot copies on this volume from the previous policy will not be deleted by this new Snapshot policy. Do you want to continue? {y|n}:
netapp02::> snapshot policy delete -vserver vnas-testvsm01 -policy SnapShot_vnas_testvsm01_data01
netapp02::> job schedule show
netapp02::> job schedule delete -name CronJob__vnas-testvsm01_vol_abacus_8
netapp02::> job schedule delete -name CronJob__vnas-testvsm01_vol_abacus_9
Important Pre-Migration Changes
NIS/Netgroup based exports
In one of the pre-checks a very important corrective step is flagged relating to netgroups. Typically on the 7-mode systems the option nfs.netgroup.strict is off, so the @ prefix isn’t required in the /etc/exports file.
This is mandatory in C-mode, so the exports on the 7-mode system should be updated; either use the exportfs -p command or edit the /etc/exports file directly, for example:
mount -t nfs vnas-testvsm01:/vol/vnas_testvsm01_vol0 /mnt/test
vi /mnt/test/etc/exports
Replace:
/vol/vnas_testvsm01_vol0 -sec=sys,ro,rw=172.16.71.1,root=172.16.71.1,nosuid
/vol/vnas_testvsm01_data1 -sec=sys,ro=nis-group1,rw=nis-group2:10.0.0.0/16:10.1.0.0/16:nis-group3,root=172.16.71.1,nosuid
With:
/vol/vnas_testvsm01_vol0 -sec=sys,ro,rw=172.16.71.1,root=172.16.71.1,nosuid
/vol/vnas_testvsm01_data1 -sec=sys,ro=@nis-group1,rw=@nis-group1:10.0.0.0/16:10.1.0.0/16:@nis-group3,root=172.16.71.1,nosuid
Then refresh the exports and check that they’ve updated and everything is still accessible!
ssh nas20 vfiler run vnas-testvsm01 exportfs -a
===== vnas-testvsm01
ssh nas20 vfiler run vnas-testvsm01 exportfs
===== vnas-testvsm01
/vol/vnas_testvsm01_vol0 -sec=sys,ro,rw=172.16.71.1,root=172.16.71.1,nosuid
/vol/vnas_testvsm01_data1 -sec=sys,ro=@nis-group1,rw=@nis-group1:10.0.0.0/16:10.1.0.0/16:@nis-group3,root=172.16.71.1,nosuid
touch /mnt/7-2-c-test/post_export_update.txt
ls -la /mnt/7-2-c-test/
total 38371428
drwxr-xr-x  3 root root        4096 Sep  4 12:11 .
dr-xr-xr-x 30 root root        4096 Sep  3 14:39 ..
-rw-r--r--  1 root root 39138237658 Aug 20 14:14 genew4.dmp
-rw-r--r--  1 root root           0 Sep  4 12:11 post_export_update.txt
drwxrwxrwx 13 root root        4096 Sep  4 12:00 .snapshot
Export Permissions
If any client has root and read-only permissions in the source export then the tool will create an export rule on the destination system with read/write as well as root permissions.
If the same clients require root but not read/write (why, I do not know), then the destination export rules will need to be manually updated during testing, removing the write permissions, to guarantee pre-migration access permissions.
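During read/write testing this is just a rule edit on the target SVM; a sketch, assuming the affected rule is index 3 of the volume’s policy (the policy name and index here are illustrative):
export-policy rule show -vserver vnas-testvsm01 -policyname vnas_testvsm01_data1 -ruleindex 3
export-policy rule modify -vserver vnas-testvsm01 -policyname vnas_testvsm01_data1 -ruleindex 3 -rwrule none -superuser any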