NetApp’s OnCommand Workflow Automation (WFA) tool is incredibly powerful. Right out of the box, you have the ability to automate a large number of common (and some not-so-common) NetApp administration tasks. What pains me is how often WFA is overlooked as an automation platform for more than just NetApp storage. The framework and foundation are there for WFA to do so MUCH more. I think NetApp has fallen short by not providing the user community with enough detailed documentation on how to do this. The WFA Developer’s Guide leaves much to be desired. It talks about the individual WFA components one can use to add third-party objects, but doesn’t really go into any sort of development process. It doesn’t tell you where to start. So… over the next few blog posts I’m going to attempt to add some clarity to WFA development in hopes that many of you will start looking to WFA for more of your automation needs.
I’m going to start by better defining development components of WFA. This should extend what the Developer Guide already provides. Let’s start by looking at the Designer tab in WFA:
I’ve never really understood the logic in how NetApp ordered the different objects along the left-hand side of this screen. I can only think that maybe they ordered these by perceived use by WFA admins? Maybe they thought most customers using WFA would primarily use the Workflows section (to create new Workflows), but may not look at anything else? And that’s why they put Workflows first? No clue. But anyways… here’s a rundown of each of the objects, and what they are, in a more logical order:
- Dictionary – a dictionary is a defined schema for a given actionable object in WFA. An example may be a VMware ESXi host. The dictionary definition for an ESXi host may contain the name of the host, its IP address, OS version, which vCenter manages the host, etc. A dictionary defines what data you want to have access to when you go to create your workflows. Another example – say you want to automate the creation of Cisco UCS Service Profiles in a UCS B-Series environment. You may want to create a definition for UCS that has all of the defined policies and pools available to you when creating the new service profile. Or maybe a list of Service Profile Templates. The image below is a snapshot of a VMware ESXi Host dictionary definition:
- Data Source Types – a data source is the mechanism by which information defined in a dictionary is collected. Following along in our previous example of a VMware ESXi host, a data source would be the Powershell script (leveraging VMware PowerCLI cmdlets) used to collect information for all ESXi hosts per the ESXi host definition. Data source types can be Powershell or Perl scripts as well as SQL drivers (if you wanted to pull data from another management system that had a SQL backend).
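To make that concrete, here’s a rough sketch of what a PowerCLI-based data source script for ESXi hosts could look like. Treat this as illustrative only: the server name, credentials, and column names are placeholders I made up, and the exact CSV format WFA expects (delimiters, file naming per dictionary entry) is spelled out in the Developer’s Guide, so check that before copying anything.

```powershell
# Hedged sketch of a WFA data source script for an ESXi host dictionary entry.
# Assumes VMware PowerCLI is installed; server and credentials are placeholders.
Connect-VIServer -Server "vcenter.example.com" -User "wfa_svc" -Password "****"

# Collect one row per host, matching the fields defined in the dictionary
$hosts = Get-VMHost | Select-Object `
    @{N="name";    E={$_.Name}}, `
    @{N="version"; E={$_.Version}}, `
    @{N="ip";      E={($_ | Get-VMHostNetworkAdapter -VMKernel |
                        Select-Object -First 1).IP}}

# WFA loads CSV output (named after the dictionary entry) into its cache tables
$hosts | Export-Csv -Path ".\esx_host.csv" -NoTypeInformation

Disconnect-VIServer -Confirm:$false
```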
- Commands – this is where the rubber begins to meet the road. Commands are the building blocks for your future workflows. A Command may be equivalent to one or more Powershell cmdlets or Perl library commands. They should be small enough and simple enough to allow them to be reused across many different workflows. Commands take inputs comprised of the fields that were previously defined in one or more dictionary definitions. Below is an example of a Command to create a VMware VM Snapshot. The left-hand side of the screen capture shows the code for the command – a simple try block that runs the Get-VM and New-Snapshot PowerCLI cmdlets. The right-hand side shows the mandatory and optional inputs for this command.
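Based on that screen capture, the body of the snapshot Command is roughly the following. This is my sketch, not the verbatim code: the input variable names ($VMName, $SnapshotName) are assumptions, though Get-WFALogger is WFA’s real built-in logging cmdlet.

```powershell
# Sketch of a WFA Command to snapshot a VM.
# $VMName and $SnapshotName would be declared as Command inputs in WFA.
try
{
    $vm = Get-VM -Name $VMName -ErrorAction Stop
    New-Snapshot -VM $vm -Name $SnapshotName -ErrorAction Stop
    Get-WFALogger -Info -message "Created snapshot $SnapshotName on $VMName"
}
catch
{
    Get-WFALogger -Error -message $_.Exception.Message
    throw
}
```

Keeping the Command this small is what makes it reusable: any workflow that needs a VM snapshot can call it, regardless of what the rest of the workflow does.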
- Filters and Finders – these are SQL queries that you can run against the data collected by the Data Sources and dictionary definitions. Filters are closest to a SQL SELECT statement. Finders are closer to a SQL JOIN; a Finder can also combine one or more Filters. You can predefine Filters and Finders for use in your Workflows. An example of a finder is below – a query to find available VMware datastores with greater than a provided capacity. Say, for instance, that you wanted a workflow to create a bunch of new virtual desktop VMs. This finder would be useful to find available datastores for those VMs with at least X capacity available.
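Under the hood, a Filter along those lines is just a parameterized SELECT. Here’s a sketch; the vc scheme comes from my own setup, and the datastore table and column names are assumptions for illustration (WFA substitutes user-supplied filter inputs using the ${input} syntax):

```sql
-- Filter sketch: find datastores with at least the requested free capacity.
-- available_capacity_mb below ${...} is a user-supplied filter input in WFA.
SELECT
    datastore.id,
    datastore.name,
    datastore.available_capacity_mb
FROM
    vc.datastore
WHERE
    datastore.available_capacity_mb >= ${available_capacity_mb}
```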
- Functions – not to be confused with Commands, functions provide a means for you to include complex calculations or algorithms in your workflows. Functions are not written in Powershell or Perl. They are written in MVEL. A good example of this is the actualVolumeSize function that ships with the default WFA install. This function calculates actual volume size based on usable capacity and a snapshot reserve percentage. If you needed to manipulate an input before using it in a workflow, functions are what you would use.
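To give you a feel for what MVEL looks like, here is my reconstruction of the actualVolumeSize logic (not the shipped source, so treat the exact parameter names and rounding as assumptions): given the usable capacity you want and a snapshot reserve percentage, return the volume size needed to deliver that usable capacity.

```mvel
// Reconstructed sketch of actualVolumeSize in MVEL.
// e.g. 100 GB usable with a 20% snapshot reserve -> a 125 GB volume.
def actualVolumeSize(usableSize, snapReservePercentage)
{
    return (int) (usableSize * 100 / (100 - snapReservePercentage));
}
```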
- Templates – like functions, templates are another way to manipulate input. If you had a bunch of constants that you always wanted to use for configuring a VMware datastore (e.g. turning off array-based snapshots, setting a certain snapshot reserve percentage), you could use a template for this use case. Another example could be enabling dedupe by default. This is the case in the built-in template for Space Efficient NAS Settings (below). As you can see, this template applies to a Volume object type. In your Workflow creation, you can use this template when entering information for a new volume in order to ensure dedupe is enabled.
- Cache Queries – these are often (if not always) used with Data Sources that leverage SQL drivers (not Powershell or Perl scripts). They allow you to pull in information from a remote database using a Data Source with a SQL driver and cache that data in the local WFA database. They’re a means to speed up Workflows that leverage data from external sources that might not be available on the local network. For example, if your WFA server is at a different site from your OnCommand Unified Manager (OCUM) server, you may want to pull and cache the OCUM data locally to speed up Workflow execution.
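As a sketch, a Cache Query against an OCUM database is just a SELECT that WFA runs remotely and caches locally. The table and column names below are assumptions for illustration; the actual OCUM schema varies by version, so verify against your own database before relying on this shape:

```sql
-- Sketch of a Cache Query: pull volume rows from a remote OCUM database
-- so WFA can cache them in its local MySQL instance.
SELECT
    vol.name,
    vol.size_mb,
    vol.used_size_mb,
    aggr.name AS aggregate_name
FROM
    volume vol
JOIN
    aggregate aggr ON vol.aggregate_id = aggr.id
```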
- Workflows – the culmination of all of the objects above. Workflows use collected data from Data Sources and Definitions to chain together Commands to complete the automation workflow. Other items like Filters, Finders, Functions, and Templates are leveraged to manipulate inputs for the Commands used in the Workflow.
- Schemes – these aren’t defined in the menu of the Developer section in WFA, but they are essential as they contain all common objects for a given device. In the default installation of WFA, you will see four predefined schemes: storage, performance, cm_performance, and cm_storage. storage and performance represent all Data Sources, dictionary definitions, Commands, and Workflows that apply to NetApp FAS 7-Mode arrays. cm_storage and cm_performance apply to NetApp Clustered Data ONTAP. I’m also leveraging another scheme in my WFA install called “vc” that contains components from VMware. When you go to create a Data Source or dictionary definition you will define the scheme in which that component will reside. As you create Commands and Workflows, WFA will look at the variables you’re using and place the Commands and Workflows in the one or more appropriate schemes. We’ll get into schemes in more detail later, as they are also integral to how the backend WFA database is structured and how variables are addressed in Commands and Workflows. Below is a depiction of how a scheme is defined when creating a new definition.
I hope this was a good introduction to understand the components used in WFA development. My plan, or so I hope, is to walk you through using each of these components to integrate a third party product into WFA. I have some ideas as to what that product might be, but I’m open to suggestions. Just a reminder… the product must have a Powershell or Perl library. Without those libraries, it can be difficult to automate using WFA.