I recently ran into a situation with a customer where we were attempting to add a small subset of SSDs to an existing NetApp A200 configuration that leveraged the latest version of Advanced Disk Partitioning (ADP). For those not familiar with ADP and what it brings to the table, please see fellow NetApp A-Teamer Chris Maki’s awesome write-up here. The latest iteration of ADP allows each SSD in an all-flash array to carry three partitions: one root and two data partitions. If you read Chris’s blog article, he does a good job of showcasing why this is important and how you can reduce the number of parity disks in this type of configuration.
The issue with this configuration, however, is the lack of documentation and intuitive guidance when you need to expand the two data aggregates. In most cases, customers purchase full shelves of 24+ SSDs to add to such a configuration, and new aggregates are provisioned across the full shelf (or shelves). In this situation, we had to add just two SSDs to the configuration. Here’s a more graphical representation of the SSD/aggregate layout:
As you can see, each node is assigned a partitioned root aggregate and a partitioned data aggregate. When the two new SSDs were added, auto disk assignment gave ownership of one SSD to one node and the second SSD to its partner. You can see the dilemma at this point: if you were to run the aggr add command, only one new SSD would be available to add. The SSDs were not automatically partitioned at this point; only the full SSD device ID was available. We contacted some folks at NetApp (thanks to local Atlanta SE Mark Harrison (@NetApp_Mark) and NetApp TME for Flash, Mr. Skip Shapiro), and this is the process they delivered to expand these two partitioned data aggregates:
- A quick note to begin: don’t use System Manager for this process; do it on the command line. Unfortunately, at this time, System Manager doesn’t have the ability to simulate adding disks into an aggregate. Adding disks to an aggregate is an irreversible process, and one you need to validate before just diving in.
- The default RAID group size for each data aggregate was set to 23 disks when the two aggregates were created via the ADPv2 R-D2 provisioning process. Change the RAID group size to reflect the additional SSDs to be added (in this case, 25).
- Assign both new SSDs to one controller. This means reassigning whichever SSD was auto-assigned to node 2 over to node 1, so that both new SSDs are owned by node 1.
- Use the aggr add command (with the -simulate flag) to add the two new SSDs to the existing data aggregate on node 1. The simulation should show the new SSDs being partitioned and one data partition from each being added to node 1’s data aggregate.
- If the simulation looks correct, rerun the command without the -simulate flag to let the process run. This will leave two root partitions (which will not be used) and two spare data partitions.
- If needed at this point, reassign the two spare data partitions over to node 2.
- Add the two data partitions now on node 2 to node 2’s data aggregate.
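For reference, the steps above roughly map to the following clustered ONTAP command sequence. Treat this as a hedged sketch rather than a verbatim runbook: the aggregate names (aggr1_node1, aggr1_node2), node names (node1, node2), and disk IDs (1.0.22, 1.0.23) are placeholders I’ve made up for illustration, and you should verify the exact syntax for your ONTAP release (partition reassignment in particular may require advanced privilege).

```
# Check ownership and partitioning state of the new SSDs first
storage disk show -fields owner,container-type

# Raise the RAID group size on both data aggregates (23 -> 25)
storage aggregate modify -aggregate aggr1_node1 -maxraidsize 25
storage aggregate modify -aggregate aggr1_node2 -maxraidsize 25

# Move the SSD that was auto-assigned to node 2 over to node 1
storage disk removeowner -disk 1.0.23
storage disk assign -disk 1.0.23 -owner node1

# Simulate the add first, then commit once the layout looks right
storage aggregate add-disks -aggregate aggr1_node1 -diskcount 2 -simulate true
storage aggregate add-disks -aggregate aggr1_node1 -diskcount 2

# Reassign the two spare data partitions to node 2, then add them
# (may require: set -privilege advanced)
storage disk assign -disk 1.0.22P2 -owner node2
storage disk assign -disk 1.0.23P2 -owner node2
storage aggregate add-disks -aggregate aggr1_node2 -diskcount 2
```

The key design point is the simulate-then-commit pattern: because an aggregate add cannot be undone, the dry run is your only chance to confirm that ONTAP will partition the new SSDs and place one data partition from each into the target aggregate before anything is committed.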
Hopefully, this clears things up for anyone who runs into this issue moving forward. Let me know if you have any questions or need clarification on any part of this process.