From Customer Feedback to Feature Release: Policy Results Report

Episode 5 · Published May 21, 2024 · 15 minute watch

Episode Summary

In the latest Product Talk episode, we explore big improvements to Automox's policy results report. Based on your feedback, we've added features like historical data, Worklet results, and details on required software policies, plus new statuses that make the report easier to read. You can now access the report through an API and export data as a CSV file, making it faster and more scalable when handling lots of data. The report is especially helpful for audits, spotting trends, and checking how well Worklet automation scripts perform. It's all about making things better for you and improving how things run.

Episode Transcript

Peter Pflaster: Hello everyone and welcome back to Product Talk. Today we have a really great conversation queued up. We're going to be talking about some enhancements that we've made to the policy results report. And for those of you that are new to Automox, we'll give you a quick baseline of what the policy results report actually is. Joining me today, we've got Emily, Stephanie and Tommy. They're some of the core team that was working on this project. Emily, why don't you go ahead and introduce yourself, tell us a little bit about what you do.

Emily Pace: Sure, thanks Peter. Hi everybody, excited to be here for my second Product Talk. My name is Emily Pace. I'm a Senior Product Manager here at Automox.

Peter Pflaster: Awesome, thanks so much Emily and Stephanie, why don't you tell us a little bit about your role in the project?

Stephanie Chesler: Yeah, I'm a Product Designer here at Automox, and I focus on user research, design and testing.

Peter Pflaster: Excellent, and last but not least, Tommy.

Tommy Rogers: Yeah, my name is Tommy Rogers. I'm a Software Engineer here at Automox. I worked specifically on the backend for this project, and I'm glad to see we've almost got it completed.

Peter Pflaster: Awesome, yeah, this should be out pretty soon after the podcast is released. We're really excited to talk a little bit about the project today and the reasoning behind it. Emily, why don't you give us a little bit of background about what the policy results report actually is and the enhancements that we're making to it.

Emily Pace: Yeah, of course. So when the current policy results report was released initially, there were some technical limitations in place, so we didn't have all the features we had hoped for. Since the release a couple of years ago, we've been collecting customer feedback. And I'm happy to say that we've addressed almost every single comment that came in as feedback. Reporting within the tool is a high priority for us.

And this is one of the first big projects that we're going to be releasing in terms of reporting within the Automox console and enhancements. So in terms of the enhanced report we've been working on, we've added historical data, and that's one of the most requested enhancements that we got for this project. So in terms of the historical data, we're eventually going to have 400 days of historical data.

And it's rolling data. So we started collecting on March 1st of this year. And right now we have a little over 60 days. By the time we release into production, we'll have a little over 90 days of data. So we're really excited about that. And that's just going to keep collecting. So in addition to historical data, we've added Worklet automation script and required software policy information. So the data is not going to be limited to patch policies like it is today.

You're going to have a big picture of the policy information we have in the console and the runs of each policy. And so in addition to that, we've also added two new statuses. So with the current statuses we have today of pending, successful, and failed, you're also going to see remediation not applicable and not included. And that not included data, that's going to include information around devices that are offline or they're deferred or they're filtered out of policy runs for a number of reasons.

And all of this data is going to be available via APIs. So if you use other tools that you want to consume the information in, we support that. So that's really exciting. And then the team has worked extremely hard to deliver this data in a performant manner. And it's all been the culmination of customer feedback and customer interviews that Steph and I were part of, which I'll let her talk about in a second.
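Consuming the report via API might look something like the sketch below. The endpoint path and response shape here are assumptions for illustration only, not the actual Automox API, and the `fetch_page` callable stands in for an authenticated HTTP client.

```python
from typing import Callable, Iterator

# Hypothetical endpoint path -- the real Automox API routes may differ.
POLICY_RESULTS_PATH = "/api/policy-results"

def iter_policy_results(fetch_page: Callable[[str, int], dict]) -> Iterator[dict]:
    """Yield policy-result rows from a paginated API.

    `fetch_page(path, page)` is any callable returning a decoded JSON page
    of the assumed form {"results": [...], "next": bool} -- in practice it
    would wrap an authenticated HTTP GET against the console's API.
    """
    page = 0
    while True:
        body = fetch_page(POLICY_RESULTS_PATH, page)
        yield from body["results"]
        if not body.get("next"):
            break
        page += 1
```

Keeping the HTTP client injectable like this makes the pagination logic easy to reuse with whatever tool is consuming the data.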

But we held interviews for, I want to say, six months or nine months. And so we got really, really good feedback. And we had a lot of interaction with customers, showing them the report and making changes on the fly. So it's very exciting. We've been in beta and early access for a month now, and we've also received really good feedback in beta that we've implemented during development. So we're just really excited to get this into production. Right now we're targeting next month, June, depending on when the podcast comes out.

Peter Pflaster: Awesome. Yeah, I know from my personal experience talking to our customers and our prospects as well, you know, reporting is not a given with any solution. It can be very difficult to get the information that you need to actually prove the value of the work that you're doing. And even if you can get that information, it may be really difficult and it may require, you know, in-depth knowledge of APIs. And obviously APIs are important for the more sophisticated folks out there, folks that are moving that information into another platform. But I think the really cool thing about what we've built here is, you know, even somebody like me can go in and look at the report and get a clear idea of what's been going on. And that's obviously a credit to this team. So, Stephanie, I'd love to kick it over to you and just hear a little bit about, you know, how we approached the design with this project and the types of conversations we had with customers.

Stephanie Chesler: Yeah, so I'm going to echo a bit of what Emily was saying, but we did conduct customer interviews as well as internal interviews with people who had interacted with this report. And the main thing was to get historical data out, that was the huge technical limitation, but also: how do we improve that experience as we add more data to this report? So, one:

Emily already kind of mentioned we added the statuses to make sure we're covering all of the devices within the policy run, whether they had a result or not, like that not included status. So we get a complete picture. Another thing we added came from customer interviews. Before, you had to navigate away to the activity log to see the reason why, say, you had a failed result.

So we actually integrated that into the device table. So now when you go and check that policy run for that specific device, you're going to get a little summary of why it failed. So you'll be able to troubleshoot much more easily instead of hunting through the console. And then again, going back to how we display all this data in a digestible manner.

So one thing you can do now is when you're viewing all of your policy results, you can click group to view. So you'll be able to see them by policy. And we found from interviews that customers wanted to see trending data. So that's just a better way to group your data. And then in addition to that, we added a whole new tab for policy run history. So if you go check a specific policy, you're now going to get this page, this nice data visualization line graph that's going to show you the trends up to 90 days for each status. And it gives a little summary. So just multiple ways to present the data to the user. And lastly, adding the CSV export, because that's a huge ask across the board from our customers. How do we show this data to other people in our company who aren't in the console? So I think that's a good way to pass it over to Tommy.
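The "group by policy" view Stephanie describes amounts to tallying result statuses per policy. A minimal sketch, where the record shape is an assumption for illustration rather than the report's actual schema:

```python
from collections import Counter, defaultdict

# Assumed record shape for illustration; real report rows carry more fields.
results = [
    {"policy": "Patch All", "status": "successful"},
    {"policy": "Patch All", "status": "failed"},
    {"policy": "Install Chrome", "status": "not included"},
]

def group_by_policy(rows):
    """Tally result statuses per policy, like the grouped report view."""
    counts = defaultdict(Counter)
    for row in rows:
        counts[row["policy"]][row["status"]] += 1
    return counts

summary = group_by_policy(results)
```

Bucketing the same tallies by day would give the per-status trend lines shown on the policy run history tab.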

Tommy Rogers: Yeah, so one of the decisions that was made with the CSV export: we currently have a couple of different CSV download flows in place today. With the move we've made on the CSV exports, we have a new standard. And what that's going to give us is, one, a new asynchronous flow that'll be available for our customers to download this stuff, and they could potentially leave the page, come back, and still have that stuff available for them.

So that would be a really nice benefit. And then the other thing that we've done is we've made it open for all of the other places that we do CSV exports to use this as well. So going forward, we're going to be able to move that capability over to anywhere else that we have a CSV export. So that's a really nice benefit that we've got as well.
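The asynchronous flow Tommy describes, where a user kicks off an export, navigates away, and collects the file later, can be sketched as a background job with a pollable handle. This is a toy illustration of the pattern, not Automox's implementation:

```python
import csv
import io
import threading

class CsvExportJob:
    """Minimal sketch of an asynchronous CSV export: the caller starts a
    job, can navigate away, and retrieves the file later by job handle."""

    def __init__(self, rows, fieldnames):
        self._rows = rows
        self._fieldnames = fieldnames
        self.status = "queued"
        self.result = None
        self._thread = threading.Thread(target=self._run)

    def start(self):
        self.status = "running"
        self._thread.start()
        return self

    def _run(self):
        # Render the rows to CSV off the request path.
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=self._fieldnames)
        writer.writeheader()
        writer.writerows(self._rows)
        self.result = buf.getvalue()
        self.status = "done"

    def download(self, timeout=5.0):
        # In a real flow this would be a status-poll endpoint plus a
        # download URL; here we just wait for the worker to finish.
        self._thread.join(timeout)
        return self.result
```

Because the job outlives the request that started it, the same mechanism can back every CSV export surface in the product, which is the standardization benefit mentioned above.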

Peter Pflaster: Yeah, that's really interesting to hear. I mean, for me, right, 400 days is a long time, obviously. And I think for our customers, I assume one of the big benefits there will be if they need that information for like audit purposes or some sort of an annual review that they're doing internally, that would be a huge benefit.

But I'm really curious from kind of a database perspective, how did we prepare for that? Did we need to build a new method of storage and access of that data to kind of support this report?

Tommy Rogers: So we took a look at a couple of different options, even stuff like AWS Timestream for more of a time-series database to store this. We ended up sticking with the Aurora database, just because that was, one, something that we were comfortable and familiar with. It's a good amount of data. I mean, we've got 6 billion records that we're going to have to be storing, in comparison to before.

So it's a jump for us, but because of some of the smart things that we've put in place, we were comfortable with leaving it in the Aurora database, and we'll be handling it fine. One of the other big things that we had to keep in mind, though, because of our event-driven architecture: we end up having about 1,200 to 1,700 messages coming in a second for us to handle, and there are actually bursts of 4,000 messages a second that we also see. So we needed to be able to handle those two things: one, being able to get through those 6 billion items of data, and also being able to ingest all of those events that are coming in.

One of the key decisions we made was to split things up. So we have one service that handles ingesting all of those events, and then on the other side, we have a service just to handle the APIs. And what that actually does is it keeps the reads and the writes completely separate on the database, so there's no potential contention on the load. And it's worked out extremely well, because we're able to handle all of the messages without having any issues.

There's a lot of flexibility for us to be able to scale. And as far as the APIs themselves, they're all averaging under 500 milliseconds. So really good performance for the 6 billion rows, which is exactly the benchmark that we wanted to hit. So very happy about that.
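The split Tommy describes, one service that only writes events and another that only serves read queries, can be illustrated with two separate connections to one store. The sketch below uses SQLite purely for illustration; the table shape and function names are assumptions, not Automox's actual schema or services:

```python
import sqlite3

# One connection per "service" against the same store: the ingest side
# only writes, the API side only reads -- a toy version of the split.
DB = "file:policy_results?mode=memory&cache=shared"

ingest_conn = sqlite3.connect(DB, uri=True)
read_conn = sqlite3.connect(DB, uri=True)

ingest_conn.execute(
    "CREATE TABLE IF NOT EXISTS results (policy TEXT, status TEXT)")

def ingest(events):
    """Write path: batch-insert incoming policy-result events."""
    with ingest_conn:
        ingest_conn.executemany(
            "INSERT INTO results VALUES (?, ?)",
            [(e["policy"], e["status"]) for e in events])

def status_counts(policy):
    """Read path: serve aggregate queries without touching the writer."""
    rows = read_conn.execute(
        "SELECT status, COUNT(*) FROM results WHERE policy = ? "
        "GROUP BY status", (policy,)).fetchall()
    return dict(rows)
```

Keeping the two paths in separate services lets each one scale independently with ingest volume or query load, which is where the flexibility Tommy mentions comes from.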

Peter Pflaster: Awesome. Yeah, I think apart from the stuff that we've released over the last six months, I will say just being in the console in general over the past six or so months, it's been pretty noticeable. Just the level of kind of snappiness moving around the console and accessing various menus that before would take a second to load. Now it's almost instant, it feels like when I click into a new menu, new report, et cetera.

So, I think a lot of Automox customers maybe have noticed that, but if you haven't, definitely pay attention the next time you're in the console and kind of check that out and appreciate the tech improvements we've made over the last six or so months.

Emily Pace: Yeah, and to your point earlier, Peter, about the 400 days and the auditing, I just wanted to expand on that really quick. The reason we went with 400 days, which roughly correlates to 13 months, is to get that extra month in so that customers and users can go and look at that annual auditing track and, to Steph's point, the trend data. And you'll be able to see a full 12 months of history with that additional 13th month.

And so the data won't start rolling off until you have all of that history. That's why we went with 400 days: to make sure you have that extra time to account for the auditing purposes of an annual view.
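The arithmetic behind the 400-day choice: thirteen calendar months span at most 31 + 366 = 397 days, so a 400-day rolling window always contains the 12 full calendar months before the current partial one. A small sketch of the cutoff calculation (the function name is ours, not a product API):

```python
from datetime import date, timedelta

RETENTION_DAYS = 400

def oldest_retained(today: date) -> date:
    """First day still inside the rolling 400-day window."""
    return today - timedelta(days=RETENTION_DAYS - 1)

# 13 months is at most 31 + 366 = 397 days, so on any date the window
# still covers the 12 full calendar months before the current month.
cutoff = oldest_retained(date(2025, 5, 21))
```

For example, on May 21, 2025 the window reaches back to April 17, 2024, comfortably covering May 2024 through April 2025 for an annual review.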

Peter Pflaster: And I think the last thing that I'll mention here, which we talked about at the very top, right: if you've used this report before, it was a great indicator of the last patch job that happened on your endpoints, which is great data. And adding 400 days of history to that data is huge for a lot of our customers.

But I think even bigger is the capability to go back and see the history of all those Worklet automations, right? So those are Bash and PowerShell scripts that are automating, you know, configuration or remediation in your environment. You're actually able to see, you know, the success rate and the history of all those automations as well, which I think will be pretty huge for our customers.

Emily Pace: Yeah, it's a powerful report. The team's done a really, really great job at implementing this and making sure it's performant. And Steph did a really great job with design and implementing all the customer feedback. And we've been in, like I said, early access for the last month. And we've gotten some really, really great feedback. Customers that have used it really enjoy it. They find it easy to use. They really like the data coming in. So we're really excited about it. It's going to be powerful for all our users.

Peter Pflaster: So keep your eyes peeled for that release sometime in June. We're really excited to hear about everyone's experience with the tool. Definitely comment on this post on LinkedIn and let us know what you think of the report, talk to your CSM, et cetera.

Well, Emily, Stephanie, Tommy, really appreciate the time. I'm so excited for this report to come out in about a month's time here and look forward to talking to you on the next release. Thanks, everybody.

Emily Pace: Thank you.

Tommy Rogers: Thank you.