We finally got ourselves an 8.3 C-Mode SnapMirror target; well, that's the only mode available in 8.3. A relatively small FAS2554 with 6TB drives.
The proposal was to use the new Advanced Disk Partitioning (ADP) feature in 8.3 on the smaller heads to maximise the system's capacity, giving us over 350TB usable.
On paper it seems a good idea: the system takes a small 0.05TB slice off ten disks for the root aggregate, leaving you with a mix of 5.35TB and 5.3TB capacities.
The factory install had the ADP root aggregate taking 10 disks on each node.
Normally, with 6TB drives and a traditional root aggregate, you'd lose about 16TB of space to the 3 drives it needs, one data and two parity. In a single-shelf FAS2554 system that's six drives out of 24 gone before you've even thought about spares and parity, which realistically is another six.
I can see why they've done it: traditionally, instead of getting almost 128TB to play with, which I'm sure is plenty for most as a cheap SnapVault repository or SnapMirror target, you get 64.2TB before WAFL reserve.
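As a sanity check on those numbers, here's a rough back-of-the-envelope calculation. The 5.35TB usable-per-drive figure comes from above; the split of the remaining drives into parity and spares (roughly three per node) is my assumption about a typical single-shelf layout, not NetApp's sizing:

```python
# Back-of-the-envelope capacity for a single-shelf 24 x 6TB system,
# assuming ~5.35TB usable per drive and RAID-DP.
USABLE_PER_DRIVE = 5.35  # TB, approximate usable capacity of a 6TB FSAS drive

total_drives = 24
raw_usable = total_drives * USABLE_PER_DRIVE  # "almost 128TB"

# Traditional layout: each node burns 3 whole drives on its root aggregate
# (1 data + 2 parity), plus roughly 2 parity and 1 spare per node for data.
root_drives = 2 * 3
parity_and_spares = 2 * 3
data_drives = total_drives - root_drives - parity_and_spares  # 12 left for data

traditional_usable = data_drives * USABLE_PER_DRIVE

print(f"raw usable:       {raw_usable:.1f} TB")          # ~128.4 TB
print(f"traditional data: {traditional_usable:.1f} TB")  # ~64.2 TB
```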
For our config, however, with 4 shelves and 96 drives, it initially didn't seem a great fit, mainly because I'm forced into having a 10 disk data aggregate on the second partition of the ADP disks. That may sound like plenty, but many of our volumes are multi-terabyte. Very large aggregates give me flexibility in my SnapMirror target placement; with smaller aggregates I may end up with unusable space.
We followed this NetApp KB (login required probably) to migrate the root aggregate and node vol0. I then zeroed all the disks from the old aggr0s. A day or so later I had a look and the ADP partitioning was still there, for example:
netapp04::> node run nas40a -command sysconfig -r
Spare disks for block checksum
spare   0a.00.1P2  0a  0  1  SA:A  0  FSAS  7200  55176/113000448      55184/113016832
spare   0a.00.1P1  0a  0  1  SA:A  0  FSAS  7200  5559408/11385668608  5559416/11385684992
I hunted about and found the commands to remove the partitioning, which I tried. Weirdly enough, after a short time the partitioning returned, even after initiating zeroing on the supposedly unpartitioned disk, and even in advanced mode:
netapp04::*> node run -node nas40b -command disk unpartition 0a.00.0
disk unpartition: 0a.00.0 was un-partitioned successfully.

netapp04::*> node run -node nas40b -command disk zero spares

netapp04::*> node run nas40b -command sysconfig -r
spare   0a.00.0    0a  0  0  SA:B  0  FSAS  7200  5614621/11498743808  5625872/11521787400  (zeroing, 0% done)

5 mins later:

spare   0a.00.0P2  0a  0  0  SA:B  0  FSAS  7200  55176/113000448      55184/113016832      (not zeroed)
spare   0a.00.0P1  0a  0  0  SA:B  0  FSAS  7200  5559408/11385668608  5559416/11385684992  (not zeroed)
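As an aside, the sizes sysconfig reports are binary megabytes, which is how those figures map onto the 0.05TB root slice and 5.3TB data partition mentioned earlier. A quick conversion (using the "Used (MB)" values from the output above):

```python
# Convert the sysconfig -r "Used (MB)" figures into the TB figures quoted
# in this post. sysconfig reports binary megabytes, so dividing by 1024^2
# gives TiB, which is what the 0.05TB / 5.3TB numbers really are.
MB_PER_TB = 1024 * 1024

p2_mb = 55176     # small root slice (0a.00.1P2)
p1_mb = 5559408   # large data partition (0a.00.1P1)

print(f"P2 root slice:     {p2_mb / MB_PER_TB:.2f} TB")  # ~0.05 TB
print(f"P1 data partition: {p1_mb / MB_PER_TB:.2f} TB")  # ~5.30 TB
```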
I’ve raised a ticket with our supplier and the initial response from NetApp is that you cannot remove ADP at all!
So the initial proposed disk config wasn’t possible:
- 4 x 22 disk aggregates in raid groups of 11
- 4 spares per head
- Approx 385TB
Now it seems my next choice isn’t possible either:
- 2 x 3 disk root aggregates
- 2 x 20 disk aggregates in raid groups of 20
- 2 x 22 disk aggregates in raid groups of 11
- 3 spares per head
- Approx 385TB
What I may end up with is:
- 2 x 11 disk partition 1 root aggregates
- 2 x 11 disk partition 2 aggregates in raid groups of 11 (5.3TB rather than 5.35TB)
- 4 x 17 disk aggregates in raid groups of 17
- 3 spares per head (1 is an ADP disk)
- Approx 416TB
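For what it's worth, the three layouts above work out roughly like this, assuming RAID-DP (2 parity disks per raid group), 5.35TB per whole disk and 5.3TB per ADP data partition; the aggr() helper is just for illustration and ignores WAFL reserve:

```python
WHOLE = 5.35   # TB usable per whole 6TB disk
PART = 5.30    # TB usable per ADP data partition

def aggr(disks, rg_size, tb_per_disk):
    """Usable TB of one aggregate: each RAID-DP raid group loses 2 disks to parity."""
    groups = disks // rg_size
    return (disks - 2 * groups) * tb_per_disk

# First proposal: 4 x 22-disk aggregates in raid groups of 11
proposal_1 = 4 * aggr(22, 11, WHOLE)
# Second choice: 2 x 20-disk (rg 20) + 2 x 22-disk (rg 11) aggregates
proposal_2 = 2 * aggr(20, 20, WHOLE) + 2 * aggr(22, 11, WHOLE)
# Likely outcome: 2 x 11-partition ADP data aggregates + 4 x 17-disk (rg 17)
proposal_3 = 2 * aggr(11, 11, PART) + 4 * aggr(17, 17, WHOLE)

print(f"proposal 1: {proposal_1:.0f} TB")  # ~385 TB
print(f"proposal 2: {proposal_2:.0f} TB")  # ~385 TB
print(f"proposal 3: {proposal_3:.0f} TB")  # ~416 TB
```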
It seems ADP may actually help me! Are the 10 disk, 42TB ADP data aggregates that bad? I just need to get the procedure for recreating the ADP root aggregate on the P1 of the ADP disks from NetApp and I can finally provision this filer.