Expanding an ADPv2 R-D2 Aggregate

I recently ran into a situation with a customer where we were attempting to add a small subset of SSDs into an existing NetApp A200 configuration that leveraged the latest version of Advanced Disk Partitioning (ADP). For those not familiar with ADP and what it brings to the table, please see fellow NetApp A-Teamer Chris Maki’s awesome write-up here.  The latest iteration of ADP allows each SSD in an all-flash array to have 3 partitions – 1 root and 2 data partitions.  If you read Chris’s blog article, he does a good job of showcasing why this is important and how you can reduce the number of parity disks in this type of configuration.

The issue with this configuration, however, is the lack of documentation and intuitiveness (if that’s a word) if you’re required to expand the two data aggregates.  In most cases, customers would purchase full shelves to add to such a configuration, and new aggregates would be provisioned across a full shelf (or shelves) of 24+ SSDs.  In this situation, we had to add just 2 SSDs to the configuration.  Here’s a more graphical representation of the SSD/aggregate layout:

As you can see, each node is assigned a partitioned root aggregate and a partitioned data aggregate.  When the two new SSDs were added, auto disk assignment gave ownership of one SSD to one node and the second SSD to its partner.  You can see the dilemma at this point.  If you were to run the `aggr add` command, you would only have one new SSD available to add.  The SSDs were not automatically partitioned at this point; only the full SSD device ID was available.  We contacted some folks at NetApp (thanks to local Atlanta SE Mark Harrison (@NetApp_Mark) and NetApp TME for Flash, Mr. Skip Shapiro), and this is the process they delivered to expand these two partitioned data aggregates:

  1. Quick note to begin – don’t use System Manager for this process.  Do this on the command line.  Unfortunately, at this time, System Manager doesn’t have the ability to simulate adding disks into an aggregate.  Adding disks to an aggregate is an irreversible process and one that you need to validate before just diving in.
  2. For each data aggregate, the default RAID group size was set to 23 disks when the two aggregates were created via the ADPv2 R-D2 provisioning process.  Change the RAID group size to reflect the additional SSDs being added (in this case, 25); see the first command sketch after this list.
  3. Assign both new SSDs to one controller.  This means reassigning whichever SSD was assigned to node 2 over to node 1.  At this point, both new SSDs are owned by node 1.
  4. Use the `aggr add` command (with the -simulate flag) to add the two new SSDs to the existing data aggregate on node 1.  The simulation should show the new SSDs being partitioned and one data partition from each getting added to the data aggregate on node 1 (second sketch below).
  5. If the simulation in Step #4 looks correct, run the command again without the -simulate flag to let the process complete.  This will leave two root partitions (which will not be used) and two remaining data partitions.
  6. If needed at this point, reassign the two spare data partitions over to node 2 (third sketch below).
  7. Add the two data partitions now on node 2 to node 2’s data aggregate.
  8. Viola!

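For reference, here is roughly what Steps 2 and 3 look like from the clustershell. Treat this as a sketch rather than a copy/paste recipe: the aggregate names (aggr1_node01, aggr1_node02), node names (node-01, node-02), and the new SSD’s device ID (1.0.23) are placeholders for your environment, and the # lines are annotations, not commands.

```
# Step 2: raise the RAID group size on each data aggregate from 23 to 25
::> storage aggregate modify -aggregate aggr1_node01 -maxraidsize 25
::> storage aggregate modify -aggregate aggr1_node02 -maxraidsize 25

# Step 3: move the SSD that auto-assign handed to node-02 over to node-01,
# so both new SSDs are owned by the same controller
::> storage disk removeowner -disk 1.0.23
::> storage disk assign -disk 1.0.23 -owner node-01
```
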
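Steps 4 and 5 use the same add-disks command, first as a dry run and then for real. Again, the aggregate name and disk IDs below are placeholders:

```
# Step 4: simulate the add; the output should show both new SSDs being
# partitioned and one data partition from each joining node-01's aggregate
::> storage aggregate add-disks -aggregate aggr1_node01 -disklist 1.0.22,1.0.23 -simulate true

# Step 5: if the simulated layout looks right, commit it
::> storage aggregate add-disks -aggregate aggr1_node01 -disklist 1.0.22,1.0.23
```
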
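And a sketch of Steps 6 and 7. The partition-assignment parameters shown here (-data2, -force) are from memory and can vary by ONTAP release, so double-check them against the `storage disk assign` man page and confirm the spares with `storage aggregate show-spare-disks` before committing anything:

```
# Step 6: hand the two leftover spare data partitions to node-02
# (which data partition is spare depends on which one Step 5 consumed)
::> storage disk assign -disk 1.0.22 -owner node-02 -data2 true -force true
::> storage disk assign -disk 1.0.23 -owner node-02 -data2 true -force true
::> storage aggregate show-spare-disks -original-owner node-02

# Step 7: simulate, then commit, the add on node-02's data aggregate
::> storage aggregate add-disks -aggregate aggr1_node02 -disklist 1.0.22,1.0.23 -simulate true
::> storage aggregate add-disks -aggregate aggr1_node02 -disklist 1.0.22,1.0.23
```
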
Hopefully, this clears things up for anyone who runs into this issue moving forward.  Let me know if you have any questions or need clarification on anything in this process.


5 Replies to “Expanding an ADPv2 R-D2 Aggregate”

  1. Chris (June 9, 2017 at 5:48 pm)

    Well done….also, Viola? lol.

    1. triggan (June 9, 2017 at 5:54 pm)

      My poor attempt at trying to use French… Voila is what I meant. 🙂

      1. Chris (June 9, 2017 at 6:01 pm)

        Oh, I figured it was a joke, my father says “Viola” jokingly…If we’re being pedantic, it’s “voilà”. 😉

  2. Scott Harney (June 10, 2017 at 12:23 am)

    Ran into a similar scenario with a customer who deployed an A300 with one full shelf and a second shelf with 18 drives. ADPv2 partitioned 32 of the drives and divided ownership of the remaining 10 unpartitioned drives between the nodes in the pair.

    Splitting evenly results in data aggregates with extra parity, losing a big chunk of capacity. What I ended up doing was moving ownership of several partitions to one controller and making a larger RAID group for that aggr. I had a smaller data partition size and count on the other controller. I then assigned 9 of the 10 full drives to the second controller and added them to the data aggregate in a separate RAID group (which it does anyway). This gave me two 3.8TB drives’ worth of usable data back.

    I did much of the aggregate work in System Manager. It does show you the resulting size of the aggregate before you execute, and you can tune the RAID size and whether to add to an existing RG or create a new one. It gives a nice visual view of the RG layout as well.

    When I get back in the system, I can perhaps show a `sysconfig -r` snip, which is still my preferred way of visualizing aggregates and RAID group members.

  3. Serge (May 6, 2018 at 1:42 pm)

    Hello Taylor!
    “The simulation should show the new SSDs as being partitioned and one data partitioned from each getting added to the data aggregate on node 1.”

    Looks like it doesn’t want to create any partitions:

    netappcluster::> storage aggregate add -aggregate aggr_ssd_02 -simulate true -disklist 2.1.0,2.1.1,2.1.2,2.1.3

    Disks would be added to aggregate “aggr_ssd_02” on node “node-02” in the following manner:

    First Plex

      RAID Group rg1, 4 disks (block checksum, raid_dp)
        Position   Disk                      Type       Size
        ---------- ------------------------- ---------- ---------------
        dparity    2.1.0                     SSD        -
        parity     2.1.1                     SSD        -
        data       2.1.2                     SSD        894.0GB
        data       2.1.3                     SSD        894.0GB

    In my case I need to add a whole new DS224C shelf of 24 SSDs, and extend the current aggregates equally.
