Of all the MSPs I speak with, patching is the one component everyone says they use. But once we dig a little deeper, I’m amazed by how few MSPs use this fundamental platform component as effectively as they could.

Most of the configurations that I review indicate that the goal was to get patching deployed quickly, following the most basic onboarding instructions provided by Kaseya. What’s missed is that this information is an example of what can be done, not the most effective way to do it. MSPs often report that patching causes as many issues as it solves, or doesn’t work the way they think it should. They’re surprised when we tell them that our patching process requires little ongoing maintenance to achieve reliable patch deployment across several thousand systems.

Here are just some of the patch configurations we find deployed by MSPs that can be improved:

Patch Policies that auto-approve every patch.

This mostly eliminates the monthly review and approval process, but it also eliminates all control over patching. It’s a dangerous configuration, since patches are approved for deployment the moment they appear. Several categories of patches are “optional” and will install or update applications that can actually add vulnerabilities to the environment that would not exist otherwise. While many categories can safely be set for auto-approval, these optional categories should be reviewed and approved manually to minimize the risk.

“One size fits all” policies.

I’ve seen MSPs with just one patch policy for all agents, or (slightly better) a policy for workstations and another for servers. These may have some manually approved categories, but this doesn’t provide the flexibility to accommodate customers with different needs. Some patches can affect customer LOB applications, and when this happens, the updates are excluded from the Patch Policy and thus excluded from all customers. This clearly leaves some clients vulnerable.

One patch policy for each configuration needed by customers, or one per customer.

Having one Patch Policy for every configuration or for individual customers may be a step in the right direction for reducing vulnerabilities, but this increases the amount of manual review and approval needed every month. This gets old really fast, and we find that MSPs skip this for a month or two, then find themselves with hundreds of patches to review. Patching becomes unmanageable very quickly using this method.

Patch settings applied manually.

Here’s a method that works for about a month after onboarding VSA. During onboarding, all of the customers are reviewed and Patch Policies and schedules are applied. “OK! We’re done!” is the thought, and patching works – mostly. When we perform an audit, we usually find many machines that don’t have an update schedule or Patch Policy applied, much to the surprise of the MSP. Automating this step eliminates both the ongoing maintenance burden and the vulnerability that unpatched systems represent.

Patch Updates applied using a “shotgun” approach on servers.

While providing training for our Core Automation Suite recently, we were covering the patch process. We have 48 distinct “patch windows” covering three weeks every cycle. The MSP’s engineer asked why we did this, since their VSA training suggested creating a “Servers” patch policy and then scheduling all servers for updating starting at midnight on Saturday. I pointed out that while this method ensures that all servers get patched, it doesn’t ensure that inter-dependent servers are restarted in the proper sequence to allow the application to come back up cleanly. The six patch windows we have on Saturday night allow you to update servers in a specific order, eliminating application restart-sequence issues. The light dawned, and the engineer said, “No wonder we need to manually restart the servers at 3 different clients every month!” Creating and deploying these patch windows took time and effort, but the ability to automate the patching of hundreds of servers year after year has repaid that initial cost several times over.
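The idea behind staged patch windows can be sketched in a few lines. This is an illustrative Python sketch only, not Kaseya’s implementation: the server names, dependencies, and window times are hypothetical. Each server is placed one window after the deepest server it depends on, so a database tier reboots and settles before the application tier that needs it.

```python
# Illustrative sketch: assign inter-dependent servers to staggered
# patch windows so each tier reboots before the tiers that depend on it.
# Server names, dependencies, and window times are hypothetical examples.

from datetime import datetime, timedelta

# Dependency graph: server -> servers that must be back up before it restarts.
depends_on = {
    "DC01": [],
    "SQL01": ["DC01"],
    "APP01": ["SQL01"],   # app tier needs the database running first
    "WEB01": ["APP01"],
}

def assign_windows(depends_on):
    """Return {server: window_index} where every dependency lands in an
    earlier window (a simple longest-path topological layering)."""
    windows = {}
    def depth(server):
        if server not in windows:
            windows[server] = 1 + max(
                (depth(d) for d in depends_on[server]), default=-1)
        return windows[server]
    for server in depends_on:
        depth(server)
    return windows

windows = assign_windows(depends_on)
start = datetime(2024, 1, 6, 22, 0)  # hypothetical Saturday-night start
for server, w in sorted(windows.items(), key=lambda kv: kv[1]):
    print(f"window {w}: {start + timedelta(hours=w):%H:%M}  {server}")
```

In practice the ordering lives in the schedule configuration rather than code, but the layering rule is the same: a server’s window must come after the windows of everything it depends on.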

Good Practices

The following methods are used in our Core Automation Suite to automate the bulk of ongoing patch management.

  • Layered Patch Policies – We utilize 3 core policies – Baseline, Servers, and Workstations. These policies have just one role: approve everything except the patches that we never want to install on any system, on servers, or on workstations. It’s a pretty small list, usually limited to the Optional Software category (think “Zune media player”). Then we have additional policies that each block a specific class or category. These might include DotNET, various IE versions, optional updates, and the like. Every agent is a member of at least two patch policies – Baseline and either Servers or Workstations. Customers that have specific exclusion requirements get additional policies applied. All told, we have about 14 Patch Policies to accommodate the 3 baseline and 11 custom blocking configurations.
  • Effective Auto-Approval Policies – The Baseline policy auto-approves most categories, and the Server/Workstation policies set a few categories to be reviewed and manually approved. The remaining policies are “blocking” policies and approve most categories except the ones that contain the updates we might want to restrict. This means we have to review and approve updates for the 9 distinct Patch Policies that we’ve created, but it’s a small number to review each month and generally takes less than 30 minutes to complete. This is a small price to pay for low-risk updates to client machines.
  • Automation for Patch Policies and Schedules – Leverage System Policies to define the patch update schedule, policies, and other patch-related configuration settings. Policies merge, so a single policy can configure settings for all servers, and a second policy can define the update schedule. Since policies can perform multiple tasks, you can easily perform pre and post update tasks such as disabling monitors and forcing pre-update reboots. Policies also override settings when applied at a lower (client or machine-group) level. A policy that uses additional Patch Policies can be applied to a client folder to prevent specific update categories or types. Utilize Views to control the application of these policies, limiting them to server or workstation class systems and identifying specific schedules or other restrictions.
  • Run Weekly Patch Scans – ensure that sufficient time exists between the scan and the update schedule. We run our scans on Mondays for all agents (servers during early AM) and schedule patches for Wednesday or later during the week. (We patch some servers during mid-morning or noon on Wednesdays if they can’t be done during normal update windows.)
  • Pay Attention! – a small amount of review each month will make sure that the automation is functioning and identify gaps in your process. Does each agent have a Scan scheduled? What about an Update schedule? The systems without update schedules – are they being manually updated because of application or customer service requirements? Check these and the system policies each month when reviewing the new patches for approval – this only adds another 5 minutes to the monthly patch management tasks.
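The layered-policy model above boils down to simple set logic: an agent’s effective approvals are what remains after every assigned policy has applied its denials. The sketch below is illustrative Python, not VSA configuration; the policy and category names are made up, and the “most restrictive status wins” rule is the assumption the layering relies on.

```python
# Illustrative sketch of the layered Patch Policy model described above.
# Policy and category names are hypothetical. Assumption: when an agent
# belongs to several policies, any policy that denies a category blocks
# it for that agent (most restrictive status wins).

ALL_CATEGORIES = {"Security", "Critical", "ServicePacks", "DotNET",
                  "IE11", "OptionalSoftware", "Drivers"}

# Each policy is expressed here as the set of categories it DENIES;
# everything else is approved.
POLICIES = {
    "Baseline":     {"OptionalSoftware"},      # never install on any system
    "Workstations": {"ServicePacks"},          # held for manual review
    "Servers":      {"ServicePacks", "Drivers"},
    "Block-DotNET": {"DotNET"},                # customer-specific layer
}

def effective_approvals(policy_names):
    """Categories still approved once every assigned policy has had its say."""
    denied = set().union(*(POLICIES[p] for p in policy_names))
    return ALL_CATEGORIES - denied

# A typical workstation: baseline plus the workstation layer.
print(sorted(effective_approvals(["Baseline", "Workstations"])))

# The same agents at a customer whose LOB app breaks on .NET updates:
print(sorted(effective_approvals(["Baseline", "Workstations", "Block-DotNET"])))
```

Adding a customer-specific exclusion is then just one more blocking layer on the affected agents, rather than an edit to a shared policy that would strip the patch from every other customer.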

The effort and time to perform this level of patch management pays for itself with a reliable and highly automated process. The time to develop this system can be significant – our patch components consist of 14 Patch Policies and 54 patch-related System Policies with associated views, and took about a month to develop, test, and document. Updates are applied automatically to workstations, and to servers with nothing more than a custom field update, and we’ve also developed ways to automatically exclude systems and customers that don’t subscribe to patching. These policies and views are just one part of what makes the Core Automation Suite’s cost a true bargain.

 
