Leveraging VSA Machine Groups

VSA Best Practice Series

Planning, Structure, and Consistency

These are the essentials for creating a VSA platform that is "automation-friendly". If you read my article on Monitor Sets, you'll recall that our monitors use a machine-readable subject line. This allows our automation to break down the alert, attempt remediation, and ultimately route the alert properly. It works because every header has the same structure, and every field holds the same kind of data. It's time to apply the same logic to Machine Groups and create a hierarchy.
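To make the idea concrete, here is a minimal sketch of what "machine-readable" buys you. The actual subject-line format is defined in the Monitor Sets article; the pipe delimiter and the four field names used here are illustrative assumptions, not the real format.

```python
# Sketch only: the "|" delimiter and field names below are assumptions,
# not the actual format from the Monitor Sets article.
from dataclasses import dataclass

@dataclass
class AlertHeader:
    customer: str
    machine_group: str
    monitor: str
    severity: str

def parse_subject(subject: str) -> AlertHeader:
    """Break a fixed-format alert subject into its fields.

    Automation can remediate and route alerts only because every header
    has the same structure and every field holds the same kind of data.
    """
    fields = [f.strip() for f in subject.split("|")]
    if len(fields) != 4:
        raise ValueError(f"unexpected subject format: {subject!r}")
    return AlertHeader(*fields)

hdr = parse_subject("acme | managed.servers.us-chi | CPU-High | WARN")
print(hdr.monitor)   # -> CPU-High
```

The point is not the parser itself but the contract: one deviation from the structure and the automation falls back to a human, which is exactly what consistent machine groups prevent on the organizational side.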

Kaseya defaults to a machine group below the Customer ID called "root". The meaning may have been lost over time, but its intent is to be the root upon which an object hierarchy is built. This is the point where System Policies are linked. Using a rooted machine group with subgroups is the only way to apply a policy to all agents for a specific client and easily differentiate between managed and unmanaged locations. Important concept #1 – Organization is critical for effective automation!

So often when working with an MSP, I find the default "root" group present and containing most of the agents, plus other groups at the root of the Customer ID representing different locations, machine types, or other groupings. This, sadly, is not a good plan, as any policy common to all agents must be linked multiple times. That introduces risk: critical policies can go unlinked in some groups, or get linked to the wrong group. The primary purpose of a machine group is very similar to that of Active Directory OUs - organize objects to apply policies. To do this effectively, you need to design a hierarchy based on how you might apply policies.

Important concept #2 - System Policies are the Heart of Automation. To use policies effectively for automation, the machine groups must form a reasonable hierarchy. Consider how you might automate things and how policies themselves work. A policy can run procedures when a condition is met, schedule procedures, and define configuration settings. Configuration settings are a big consideration, since they are often different between workstations and servers, right? So - you should have groups that allow linking policies based on the class of system.

Our Core Automation Suite depends upon a standardized machine group structure, and we leverage the Dickens out of it. We can configure and schedule patching & application updating, configure AV and AM products, apply monitor sets, and much more, all with a minimum of manual involvement. (In fact, for almost all customer onboarding, all we do manually is define the server patching sequence.)

Change is Bad... Good!

Almost every MSP we've worked with to optimize their platform has had staff initially complain about the changes to the structure - not because the structure was bad or confusing, but simply because it was change! Every MSP, after using the new organizational hierarchy for a couple of days, agreed that it was easier to find agents by type, site, and class of service, and that the automation this organization allowed reduced everyone's workload. Of course, this takes effort - whiteboard the structure based on policy linking, accommodate the requirements of different customers and different kinds of clients, and settle on a method to define sites that works globally, not just in your back yard. Once the planning is complete, you can create the machine group structure for the clients and move the agents, cleaning up old groups when the last agent has been moved.

Oh, remember "consistency"? If you create a site group for a customer that has 10 sites, you should create the same structure for a customer with one site. Why? CONSISTENCY, of course. It's all about the ability to automate and know exactly what format the data will always be in! And should the customer expand and add a second site, you simply create a second site group and add the new agents there - no need to restructure, add two groups, and move agents before you can add agents from the new location. To simplify the decision process and remove any “emotion” from it, we use the United Nations LOCODE (UN/LOCODE) standard for identifying locations – it has IDs for virtually every town or city on the planet.
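A UN/LOCODE is a two-letter country code plus a three-character location code (letters, or the digits 2-9), e.g. "US CHI" for Chicago. A small helper can turn that into a site group name. The lowercase "cc-lll" naming convention shown here is an illustrative choice for this sketch, not something VSA or the standard mandates.

```python
import re

def site_group(locode: str) -> str:
    """Turn a UN/LOCODE such as 'US CHI' or 'gblon' into a site group name.

    The lowercase 'cc-lll' output convention is this sketch's own choice;
    pick one convention and apply it to every customer, consistently.
    """
    code = re.sub(r"\s+", "", locode).upper()
    # Country: 2 letters; location: 3 characters, letters or digits 2-9.
    if not re.fullmatch(r"[A-Z]{2}[A-Z2-9]{3}", code):
        raise ValueError(f"not a valid UN/LOCODE: {locode!r}")
    return f"{code[:2].lower()}-{code[2:].lower()}"

print(site_group("US CHI"))  # -> us-chi
print(site_group("gblon"))   # -> gb-lon
```

Because the codes come from a published standard, two technicians naming the same city independently will produce the same group name, which is exactly the "remove emotion from the process" goal.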

Why have a workstation and server group – I use Views to handle that!

Sure – we use over 300 views, and around 150 of those just for policy management, but we strongly recommend separate groups for workstations and servers in each client group. This allows you to accommodate situations where a developer might use a server O/S for their workstation, or – more commonly – a workstation O/S is used as some kind of “server”. I can link a view to a policy that alters the monitoring or configuration settings simply because an agent is in a “*.servers.*” group – it’s a workstation, but should be monitored or configured like a server! This can’t be done if you simply apply views based on the O/S type. We also recommend the use of a group called “special”. Every view should exclude this group – it provides an easy way to temporarily stop all automated processing: just move the agent into that group.
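The “*.servers.*” pattern behaves like a filename wildcard against the agent's full group path, so group placement, not O/S type, decides how the agent is treated. A minimal sketch of that matching logic (the group paths shown are hypothetical examples, not real customer data):

```python
from fnmatch import fnmatch

def treat_as_server(group_path: str) -> bool:
    """Return True when an agent's machine-group path matches '*.servers.*',
    regardless of the operating system the agent actually runs."""
    return fnmatch(group_path, "*.servers.*")

def excluded_from_automation(group_path: str) -> bool:
    """The 'special' group is excluded from every view, halting automation."""
    return fnmatch(group_path, "*.special")

# A workstation O/S placed in a servers group is still handled as a server:
print(treat_as_server("acme.managed.servers.us-chi"))       # -> True
print(treat_as_server("acme.managed.workstations.us-chi"))  # -> False
print(excluded_from_automation("acme.managed.special"))     # -> True
```

This is why the group hierarchy, not a per-machine O/S check, is the right place to encode the class of service.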

So, to summarize, we create a root group that identifies whether the customer is managed or unmanaged, then subgroups for servers, workstations, and special. The servers and workstations groups have subgroups for each location that the customer has, and only the location groups and the special group contain agents. This took time and energy to design and define, but the payback has been extensive. When an agent checks in, the automation runs, figures out what the agent has, where it is, and automatically applies monitors, schedules patching, daily maintenance, and application updating. This is all made possible by using a consistent structure that the automation can leverage.
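Because the structure is fixed, an agent's full group path carries everything the automation needs. The sketch below parses paths of the form summarized above; the segment names ("managed", "servers", "special", and so on) are illustrative assumptions about the naming, since the article does not spell out the exact labels.

```python
# Illustrative parser for the structure summarized above; the exact
# segment names ('managed', 'servers', 'special', ...) are assumptions.
def classify(group_path: str) -> dict:
    """Split 'customer.root.class.site' into its components.

    Only the location groups and 'special' contain agents, so an active
    agent's path always has one of these two shapes.
    """
    parts = group_path.split(".")
    if len(parts) == 3 and parts[2] == "special":
        customer, root, _ = parts
        return {"customer": customer, "managed": root == "managed",
                "class": "special", "site": None}
    if len(parts) != 4 or parts[2] not in ("servers", "workstations"):
        raise ValueError(f"unexpected group path: {group_path!r}")
    customer, root, cls, site = parts
    return {"customer": customer, "managed": root == "managed",
            "class": cls, "site": site}

print(classify("acme.managed.servers.us-chi")["class"])  # -> servers
```

On check-in, logic like this is what lets the automation decide which monitors, patch schedules, and maintenance to apply without any human touching the agent.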


