Windows Administration with PowerShell #9: Logging

If you have been following along with our Windows Administration with PowerShell series, you have probably noticed that a common theme has been the supportability of the PowerShell scripts you develop. The more effort and thought you put in during development, the easier your job will be in the future. While we have already beaten this horse into the ground, there is one key factor we have yet to touch on: logging.

How Does Logging Help?

Troubleshooting any issue is a matter of assembling puzzle pieces. The more pieces you have before you start, the easier it will be to suss out your problem. Think of it like guessing the phrase on Wheel of Fortune: the more letters you have, the easier it is to fill in the rest. But isn't the error message the phrase in this analogy? We handled all of that in the last installment, so we should be good to go, right? Wrong. Just because you have the error message does not mean you know why it's happening.

Let's briefly revisit the beginning of this series. Why are you developing these scripts? Because you have a repetitive task you need to automate in your environment. This means your script is processing dynamic data, which may differ each time it runs. The larger your dataset, the higher the chance that an outlier will cause an error. In those instances, you need to gather as much information as possible to make troubleshooting easier. That means capturing not only the error message but also the data being processed by that specific run of the script. This will allow you to replicate the error, which makes troubleshooting a breeze, relatively speaking.

Where Are These Logs Stored?

Logging allows you to store all of this information in a format that is easily accessible. Exactly how accessible depends on your implementation. It is up to you to decide the best methodology for storing and accessing your data, given your environment and how you will be executing your script.

Decentralized Logging

The simplest implementations generally store logs locally on the host machine as .log files. Many would recommend placing these logs in a folder such as C:\Temp. While that is certainly a viable solution, keep in mind that some users periodically empty their temp directories. If you want to ensure the persistence of your logs, consider creating a dedicated directory for your solution instead. One good home for that directory is C:\ProgramData.
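
As a quick sketch, carving out a dedicated directory might look like the following; the MyAutomation folder name is an assumption for illustration:

```powershell
# Create a persistent log directory under C:\ProgramData; the
# "MyAutomation" folder name is a placeholder for your solution's name.
$logDir = Join-Path $env:ProgramData 'MyAutomation\Logs'
if (-not (Test-Path -Path $logDir)) {
    New-Item -ItemType Directory -Path $logDir -Force | Out-Null
}
```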

Once you decide where you want to store your logs, you will need to decide exactly how they will be stored. Do you want one log file per iteration, or one log file that is appended to with each iteration? The answer depends on your script and what exactly you are doing, and what works for one environment may not work for another. While it is important to be cognizant of best practices, ultimately the best way to implement your logging solution is up to you.
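
In practice the two options are only a line or two apart. A sketch, reusing the $logDir created above:

```powershell
# Option 1: one log file per run, timestamped so runs never collide.
$runLog = Join-Path $logDir ("run_{0}.log" -f (Get-Date -Format 'yyyyMMdd_HHmmss'))
Set-Content -Path $runLog -Value 'Run started.'

# Option 2: one shared log file that every run appends to.
$sharedLog = Join-Path $logDir 'script.log'
Add-Content -Path $sharedLog -Value "Run started at $(Get-Date -Format o)."
```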

Centralized Logging

Centralized logging is a little more complex. You can continue to store your information in log files; however, the files live on a network share rather than on individual host computers. Generally, this is not viable outside of an enterprise environment because some level of authentication is required. That said, there are huge benefits to this methodology. The primary one is that you are not at the mercy of host endpoints being online in order to access their logs. Another option is to store your information in a database instead of a file, which makes it easier to query your data when searching for specific issues. The downside of centralized logging, however, is that you depend on a network connection to log anything at all.
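
If your hosts can authenticate to the share, the file-based approach carries over almost unchanged. A sketch, where the server and share names are placeholders:

```powershell
# Write to a central share instead of the local disk. The server and
# share names are placeholders; the account running the script must
# have write access, and the host must be able to reach the share.
$centralLog = "\\logserver\Logs\$env:COMPUTERNAME.log"
Add-Content -Path $centralLog -Value "$(Get-Date -Format o)`tINFO`tRun started."
```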

Persistence

Regardless of the methodology you choose, you will need to think about how long your logs should persist. The trick is to find a good balance: the larger your logs, the harder it is to find relevant information, but if you delete or overwrite your logs too frequently, you risk losing relevant data. As with everything else, the right solution depends on your environment and what you are trying to do.
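
One simple way to strike that balance is an age-based cleanup pass at the start of each run. A sketch, assuming a 30-day retention window and the placeholder path from earlier:

```powershell
# Remove logs older than 30 days; tune the window to your environment.
# LastWriteTime updates on every append, so an active log stays fresh.
$logDir = "$env:ProgramData\MyAutomation\Logs"   # placeholder path
Get-ChildItem -Path $logDir -Filter '*.log' |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item -Force
```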

Show Me How to Log Already!

One pattern many systems implement is based on file size. Once the log file grows beyond a certain size, it is renamed (for example, to script.log.old) and a new one is created. Once the new file also passes the size limit, the old renamed copy is deleted and the new file is renamed to take its place.

This pattern prevents log data from becoming overwhelming while still retaining information for a relevant amount of time. For the sake of this post, we will focus on this pattern to demonstrate what an implementation may look like. As usual, it will take the form of a cmdlet we can then use throughout our script to log relevant information.
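
Here is a minimal sketch of what such a cmdlet might look like; the LogFile class name, the Write-Log wrapper, the default path, and the 1 MB rotation threshold are all assumptions chosen for illustration:

```powershell
# A minimal sketch of a size-based rotating logger. The class and
# function names, the default path, and the 1MB threshold are
# illustrative assumptions, not a canonical implementation.
class LogFile {
    [string]$Path
    [long]$MaxBytes

    LogFile([string]$Path, [long]$MaxBytes) {
        $this.Path = $Path
        $this.MaxBytes = $MaxBytes
        $this.Initialize()
    }

    # Grooming: once the active log outgrows MaxBytes, delete any
    # previously rotated copy and rename the active log to take its place.
    [void] Initialize() {
        $parent = Split-Path -Path $this.Path -Parent
        if (-not (Test-Path -Path $parent)) {
            New-Item -ItemType Directory -Path $parent -Force | Out-Null
        }
        $rotated = "$($this.Path).old"
        if ((Test-Path -Path $this.Path) -and ((Get-Item -Path $this.Path).Length -gt $this.MaxBytes)) {
            if (Test-Path -Path $rotated) {
                Remove-Item -Path $rotated -Force
            }
            Rename-Item -Path $this.Path -NewName (Split-Path -Path $rotated -Leaf)
        }
    }

    # Append one tab-delimited entry: timestamp, severity, message.
    [void] Write([string]$Severity, [string]$Message) {
        $entry = "{0}`t{1}`t{2}" -f (Get-Date -Format 'yyyy-MM-dd HH:mm:ss'), $Severity, $Message
        Add-Content -Path $this.Path -Value $entry
    }
}

function Write-Log {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true)]
        [string]$Message,

        [ValidateSet('INFO', 'WARN', 'ERROR')]
        [string]$Severity = 'INFO',

        [string]$Path = "$env:ProgramData\MyAutomation\Logs\script.log",

        [long]$MaxBytes = 1MB
    )

    $log = [LogFile]::new($Path, $MaxBytes)
    $log.Write($Severity, $Message)
}
```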

As you can see, the cmdlet takes in the necessary information and outputs it to the provided log file. Every time you run the function, it goes through an initialization stage that grooms the logs per the pattern described above. All you then have to do is call the function throughout your solution to log any relevant information. In this particular implementation, we are creating tab-delimited logs; you will need to modify the code to fit your needs.
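
For example, pairing it with the error handling from the last installment might look like this, using the hypothetical Write-Log sketched above:

```powershell
try {
    # -ErrorAction Stop turns non-terminating errors into catchable ones.
    Get-Item -Path 'C:\DoesNotExist' -ErrorAction Stop
}
catch {
    # Capture the error message; log the data being processed alongside it.
    Write-Log -Message $_.Exception.Message -Severity 'ERROR'
}
```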

Next Week

You will also notice that we used a PowerShell class in the example above. This is a new feature introduced in PowerShell 5.0, which we will dive into in our next installment. Those of you who are familiar with object-oriented languages probably already understand how useful this is. For everybody else, tune in next week!

About Automox

Facing growing threats and a rapidly expanding attack surface, understaffed and alert-fatigued organizations need more efficient ways to eliminate their exposure to vulnerabilities. Automox is a modern cyber hygiene platform that closes the aperture of attack by more than 80% with just half the effort of traditional solutions.

Cloud-native and globally available, Automox enforces OS & third-party patch management, security configurations, and custom scripting across Windows, Mac, and Linux from a single intuitive console. IT and SecOps can quickly gain control and share visibility of on-prem, remote, and virtual endpoints without the need to deploy costly infrastructure.

Experience modern, cloud-native patch management today with a 15-day free trial of Automox and start recapturing more than half the time you're currently spending on managing your attack surface. Automox dramatically reduces corporate risk while raising operational efficiency to deliver best-in-class security outcomes, faster and with fewer resources.