Organizations routinely fail to identify cyber-security attacks until it is far too late. According to the 2016 SANS Incident Response Survey, approximately 21% of respondents said they could typically detect a cyber-attack within 2 to 7 days, while 40% said they could detect a security incident in less than 1 day. Around 2% of respondents reported an average “dwell time” of more than 1 year. It is also worth remembering that the longer a breach goes undetected, the more expensive it is to remediate.
In the case of the recent Equifax data breach, it has been claimed that the company was compromised months before the attack actually surfaced. The hackers were reportedly able to compromise 45% of all US Social Security numbers. A breach of this scale cannot be carried out quickly, so the question remains: how were the hackers able to exfiltrate such vast amounts of information over a long period of time without being noticed?
The truth is, most companies are still relying on outdated methods to protect their sensitive data, and still focusing far too much on external threats as opposed to insider threats, which account for nearly 75% of data breaches. To be fair, though, it’s not all doom and gloom. According to the Ponemon 2017 Cost of a Data Breach Study, companies have made significant progress within the last year, with the average dwell time dropping from 70 days to 55.
While there are many tools and technologies that can help us detect a potential data breach, such as firewalls, endpoint detection, data loss prevention (DLP) software, SIEM solutions, and so on, one of the first steps an organization should take is to ensure that it can detect any suspicious activity associated with the files, folders and user accounts on its systems. After all, according to the Verizon 2017 Data Breach Investigations Report, file servers remain the prime target for cyber-attacks.
Organizations need to know exactly who is making changes to the files, folders and accounts on their systems, what is being changed, and where and when it is happening. Many file server auditing solutions can help you monitor access privileges, detect when user accounts are created, deleted or modified, and flag suspicious file and folder activity, based on either a single event or a threshold condition. Auditing these events is all well and good, but it is of little use if we don’t know what we are actually looking for. So what are we looking for?
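A threshold condition of the kind described above can be sketched in a few lines. The sketch below assumes audit events arrive as (timestamp, user, action, path) tuples; the threshold of 100 file accesses in a 5-minute window is an illustrative value, not a recommendation.

```python
from collections import defaultdict, deque
from datetime import timedelta

THRESHOLD = 100            # assumed: alert if one user touches >100 files...
WINDOW = timedelta(minutes=5)   # ...within a 5-minute sliding window

def detect_bulk_access(events, threshold=THRESHOLD, window=WINDOW):
    """Flag users whose file-access count inside a sliding time window
    exceeds the threshold -- a simple threshold-based audit condition."""
    recent = defaultdict(deque)   # user -> timestamps of recent events
    alerts = []
    for ts, user, action, path in sorted(events):
        q = recent[user]
        q.append(ts)
        # drop events that have fallen out of the window
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) > threshold:
            alerts.append((ts, user, len(q)))
            q.clear()  # avoid re-alerting on the same burst
    return alerts
```

A single user reading a handful of documents never trips the condition; a script bulk-copying a share does.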
We are essentially looking for any events that do not conform to the patterns we typically expect to see. First, a cyber-attack could cause some form of outage. By monitoring the availability of our systems, we can review the logs to determine whether an outage was caused by human error, a technological failure, or some form of malicious activity.
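At its simplest, availability monitoring is a periodic connectivity check whose results are logged for later correlation with audit data. The sketch below is a minimal TCP health check; the host and port are placeholders for whatever service you are watching.

```python
import socket
from datetime import datetime, timezone

def check_service(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.
    A failed check is worth correlating with the audit logs to see
    whether the outage was error, failure, or malicious activity."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def log_check(host, port):
    """Run one check and emit a timestamped log line."""
    up = check_service(host, port)
    stamp = datetime.now(timezone.utc).isoformat()
    print(f"{stamp} {host}:{port} {'UP' if up else 'DOWN'}")
    return up
```

In practice this would run on a schedule (cron, a monitoring agent) rather than ad hoc, and the log lines would feed the same SIEM that holds the file audit trail.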
Secondly, we can monitor the time at which certain events take place. For example, if normal working hours are between 9am and 5pm (Mon – Fri), and events are taking place outside of these hours – let’s say 2am on a Saturday – we can deduce that something suspicious may have taken place. We can then set up real-time alerts or some form of automated response to address the anomaly. This may include stopping a specific process, disabling a user account, changing the firewall settings, or shutting down the server entirely.
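The out-of-hours check and automated response described above can be sketched as follows. The 9am–5pm weekday window comes from the example; `disable_account` and `notify` are hypothetical callbacks you would wire to your directory service and alerting system.

```python
from datetime import datetime

BUSINESS_START, BUSINESS_END = 9, 17   # assumed working hours: 9am-5pm
WEEKDAYS = range(0, 5)                 # Mon-Fri (Monday == 0)

def is_out_of_hours(ts: datetime) -> bool:
    """True when an event timestamp falls outside normal working hours."""
    return ts.weekday() not in WEEKDAYS or not (BUSINESS_START <= ts.hour < BUSINESS_END)

def respond(event, disable_account, notify):
    """Sketch of an automated response: disable the account and raise
    an alert when an event lands outside business hours."""
    ts, user, action = event
    if is_out_of_hours(ts):
        disable_account(user)
        notify(f"Out-of-hours {action} by {user} at {ts:%Y-%m-%d %H:%M}")
        return True
    return False
```

Disabling an account is a deliberately blunt response; in a real deployment you might start with an alert and only escalate to disabling or firewall changes when corroborating signals appear.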
Finally, we can use data discovery tools, accompanied by a data classification policy, to help us keep track of what data belongs where. We can then set up alerts or an automated response, should our sensitive data end up in the wrong place.
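A toy version of such a discovery scan is shown below. The SSN regex is a crude heuristic and the `/secure/` path marker is an assumed classification rule standing in for a real policy; a production tool would use validated patterns, file fingerprints and proper labels.

```python
import re
from pathlib import Path

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # crude SSN-like heuristic
APPROVED = ("/secure/",)   # assumed: locations where this data may live

def misplaced_pii(root):
    """Walk a directory tree and report text files that contain SSN-like
    strings but sit outside the approved locations -- candidates for an
    alert or an automated quarantine."""
    findings = []
    for path in Path(root).rglob("*.txt"):
        posix = path.as_posix()
        if any(marker in posix for marker in APPROVED):
            continue  # file lives where the policy allows
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if SSN_PATTERN.search(text):
            findings.append(posix)
    return findings
```

Each finding can then feed the same alerting pipeline as the auditing examples above: notify, quarantine the file, or both.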