What came first: the metrics or the policy?


When you are attempting to get a grasp on your organisation's risk profile, it can be tempting to go all in with metrics.

"Get the metrics in, and then we can understand what we have, then we can formulate a plan" 

However, this is a terrible approach. The end result is bad statistics from the start that only get worse as the organisation matures. It also misunderstands how information security actually works: the problems in any organisation are not what you know, but what you don't know. Going metrics-first will cause you to miss hugely important factors within your IT network.

Why? Without at least a baseline of policies and processes, you are not actually gathering good statistics; you are collecting 'guesstimates' from the few sections that happen to report to you. Each section reports what it thinks you want to hear - and that is if it reports to you at all. You will also be missing key people and assets. Every guesstimate adds more fog, and soon you are drawing conclusions from bad data, which leads to terrible decisions.

For example, let's say you are reporting on the number of incidents within your IT department. You ask your security Incident Response team how many incidents there have been, and they report a number. So far, so good, right?

No. They will only be reporting the incidents they have actually found: the loud and proud malware infections, and whatever their *possibly not fit for purpose* tooling happened to catch. If you do not have a policy in force stating that all SysAdmins must perform weekly checks of their syslogs for irregularities, or that all incidents and near-misses must be reported to the local CERT team, your figure is way off.

Of course, a good SysAdmin would always check their logs... right? Wrong. They are under constant pressure to maintain the service and keep operational uptime. If it is not mandated that these checks must happen, or that the logs must be sent periodically for analysis, they will not be able to justify time for them within their routines - and things will be missed.
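
A mandated weekly check need not be heavyweight, either. Here is a minimal sketch in Python of what an automated syslog scan might look like - the log path and the flagged patterns are purely illustrative assumptions, not a standard, and the real reporting step would follow whatever pathway your policy mandates:

```python
#!/usr/bin/env python3
"""Minimal sketch of a mandated weekly syslog check.

The path and patterns below are illustrative assumptions only;
adapt them to your own environment and reporting policy.
"""
import re
from collections import Counter

LOG_PATH = "/var/log/syslog"  # assumed location; varies by distribution
PATTERNS = {                  # hypothetical irregularities worth flagging
    "auth_failure": re.compile(r"authentication failure|Failed password"),
    "oom_kill":     re.compile(r"Out of memory|oom-kill"),
    "segfault":     re.compile(r"segfault"),
}

def scan(path: str) -> Counter:
    """Count log lines matching each flagged pattern."""
    hits = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits[name] += 1
    return hits

if __name__ == "__main__":
    for name, count in scan(LOG_PATH).items():
        # Under a real policy, non-zero counts would be reported to the
        # local CERT team via the mandated pathway, not just printed.
        print(f"{name}: {count}")
```

Run weekly from a scheduler, something like this turns "check your logs" from a vague hope into an auditable, reportable task.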

So, what is the best approach?

Before you even attempt metrics, firmly establish some baseline policies that state the security responsibilities of system owners, your minimum expected standards, and your formal reporting pathways (both during an incident and during periodic security checks). This will let you gather genuinely valid data (albeit a small amount at first), from which you can formulate the full policy and process framework you need and feed it into your organisation's strategy. It also gives SysAdmins the necessary backing that these tasks are important and deserve resources, in both time and money.
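
To make that concrete, a baseline can start as a simple structured checklist. The sketch below uses Python purely as notation; every duty, figure, and pathway in it is an assumed example, not a prescribed standard:

```python
# An illustrative baseline, not a prescribed standard: every duty,
# figure, and pathway here is an assumption to replace with your own.
BASELINE_POLICY = {
    "system_owner_duties": [
        "maintain an entry in the asset inventory",
        "apply vendor patches within an agreed window",
        "review syslogs weekly and file the results",
    ],
    "minimum_standards": {
        "log_retention_days": 90,              # assumed figure
        "incident_report_deadline_hours": 24,  # assumed figure
    },
    "reporting_pathways": {
        "incident": "local CERT team",
        "periodic_check": "security office, monthly summary",
    },
}
```

Once every section reports against the same baseline, the numbers you collect become comparable - and the metrics finally start to mean something.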

Trust me, those hidden 'magic' services we all have lurking within our networks stand a far better chance of being found if you approach metrics in a smart way, rather than jumping in headfirst and producing fictional numbers (which, if not everyone is reporting to a set minimum expected standard, is exactly what they are).

One of the biggest failings I have seen is an information security strategy built on completely fictional data: the wrong priority gets attached to the wrong assets, which leads to the wrong investment in tooling, which leads to the inevitable. A breach.

The only real remedy, once you have gone down this path, is to start again and spend time unpicking the damage those reports have done - a costly exercise in both money and effort.
