2016: Healthcare Data Breaches in Review, Part 2
To recap Part 1: although headlines tend to scream “HACK!” (and irritatingly show us stock images of hackers in hoodies), in reality, a significant percentage of incidents coded as “hacks” occurred because an employee error created an easier opportunity for attackers. Similarly, if you are rigorous in your data security and auditing but more lax in monitoring your business associate, their employee errors can create an easier opportunity for attackers.
Failure to recognize the significant extent and impact of insider errors can easily result in a misallocation of available resources in an entity’s cybersecurity program. So let’s start with the findings on third-party breaches and insider breaches. Much, but not all, of the data discussed below was included in Protenus’s report.
In September, Protenus, in collaboration with DataBreaches.net, published a white paper on third-party breaches that included an analysis of more than 60 incidents from January 1 through August 31. The analysis found that at least 30% of breaches and 35% of breached records reported to HHS’s public breach tool were attributable to third-party breaches, even though HHS’s breach tool indicated third-party involvement in only 7% of its entries.
Consistent with the partial-year data, analysis of reports from the entire year for which we had sufficient details found that at least 33% of incidents were attributable to third parties, and that those third-party breaches for which we had numbers accounted for 63% of the breached records we knew about in 2016. Even if we omit the 10.3 million records from the vendor that was never identified, third-party incidents still accounted for 40% of all breached records for incidents where we knew the number of records affected. Once again, then, third-party breaches account for a disproportionate percentage of breached records.
When looking only at the third-party incidents, we found more Insider breaches (human error plus intentional wrongdoing) than hacks. Of the insider incidents at third parties, reports of human error outnumbered reports of intentional wrongdoing.
For the entire analysis of 450 incidents and for the 400 for which we had sufficient details, we found that 48% of the reports could be attributed to Insiders. That proportion does not include incidents coded as Lost or Missing, because in many cases, we did not have sufficient information to determine if an employee lost the data or device, or if it was stolen. Similarly, cases of theft by external parties were not included in the Insider statistic, even though employees may have contributed to those incidents by leaving devices in unattended vehicles. Thus, the 48% is likely to be a significant underestimate of the proportion of cases where employee actions caused or significantly contributed to breaches.
For the 192 incidents coded as Insider for our analyses, 52% could be ascribed to employee errors or accidents, and 48% appeared to be intentional wrong-doing by employees. For purposes of these analyses, we did not distinguish between employees of covered entities and employees of third parties.
Insider error accounted for slightly more incidents than insider-wrongdoing and for more records breached per incident. The mean number of records breached for an insider-error incident was 17,642.14 compared to 5,728.74 for an insider-wrongdoing incident.
Not surprisingly, insider-wrongdoing incidents took significantly longer to detect than insider-error incidents. For those cases where we had the date of breach and date of discovery, the median number of days to discovery was 290 for insider-wrongdoing, as compared to 11 for insider-error.
But if insider/employee incidents accounted for one-half or more of incidents, did they also account for one-half or more of breached records? The answer appears to be “no.” Incidents coded as “Insider” for purposes of our analyses accounted for only 12% of breached records.
Don’t be misled by that statistic, though. Although 9 of the 12 largest incidents in 2016 were coded as “hacks” in our analyses, the reality was that 5 of the 12 largest hacks involved insider errors. Had all hacks that started with, or were enabled by, insider error been attributed to the “Insider-Error” category, the results of our analyses would have been dramatically different. If we took all the ransomware incidents that occurred because an employee clicked on a link they should not have, and attributed those breached records to insider error, what would the year have looked like?
External attacks will continue to occur at extraordinarily high rates, and yes, we need to maintain perimeter defenses. But if we really want to prevent more breaches, make it your goal in 2017 to reduce employee errors and to increase employee compliance with policies and protocols, and then figure out how to accomplish those goals. Failing to restore a firewall after an upgrade, leaving sensitive data on a publicly available FTP server that requires no login, putting what should be a BCC: distribution list in the CC: field, clicking on links that could result in a ransomware infection: those are all things we should be able to do something about.
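One of the errors above, an FTP server left open with no login required, is also one of the cheapest to check for routinely. Here is a minimal sketch using Python’s standard-library ftplib; the function name is mine, the host is a placeholder, and a real audit would iterate over an inventory of your own servers rather than a single hostname.

```python
from ftplib import FTP, error_perm

def allows_anonymous_ftp(host, timeout=10):
    """Return True if `host` accepts an anonymous FTP login."""
    try:
        with FTP(host, timeout=timeout) as ftp:
            ftp.login()  # no arguments = anonymous login attempt
            return True
    except (error_perm, OSError):
        # Login refused, connection refused, or connection timed out.
        return False

# e.g., run this against each server in your inventory and alert on True
```

A check like this says nothing about what data sits on an open server, of course; it only tells you the door is unlocked, which is exactly the kind of condition that should be detected by you before it is detected by someone else.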
Humans can and will make errors. But where in your security program are there checks or controls that anticipate common human errors and prevent them, or rapidly detect them before some criminal capitalizes on them?
Of course, apart from errors, some employees knowingly engage in wrongdoing, so what controls or safeguards do you have in place to limit employee access to only those files they need to perform their assigned duties? What safeguards are in place to prevent employees from copying massive amounts of PHI onto thumb drives or emailing it to their home email account? And when employees terminate, what protections do you have in place to make sure they are not taking data with them or haven’t exfiltrated data in anticipation of leaving?
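One illustrative control for the exfiltration scenario: mine access logs for employees who touch an unusually large number of distinct patient records. This is a minimal sketch in Python, not any particular vendor’s tool; the event format (employee ID, record ID pairs) and the threshold value are assumptions for illustration.

```python
def flag_bulk_access(events, threshold=500):
    """Flag employees who accessed more distinct records than `threshold`.

    `events` is an iterable of (employee_id, record_id) tuples, e.g. parsed
    from an EHR access log. Returns the set of employee IDs to review.
    """
    records_seen = {}
    for employee_id, record_id in events:
        records_seen.setdefault(employee_id, set()).add(record_id)
    return {emp for emp, records in records_seen.items()
            if len(records) > threshold}
```

A rule this simple will generate false positives (billing staff legitimately touch many records, for example), so in practice thresholds would be tuned per role; the point is that such a check is cheap enough to run daily, including against departing employees.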
But most importantly, perhaps: how often are you training and re-training your employees on policies and procedures? How often are you giving them exercises to recognize phishing attempts or targeted phishing attempts? Hackers that DataBreaches.net interviewed throughout 2016 all stated, quite frankly, that there were two easy routes into a network: social engineering or exploiting well-known vulnerabilities.
After one major incident caused by employees clicking on a link that inserted malware into their system, I checked an agency’s site to see how much privacy and security training it required of employees after their initial hire. I was dismayed to read that the state required “at least one hour of training every two years.” I cannot imagine that one hour every two years would encourage a culture of privacy and data security. Give employees regular drills where they respond to social engineering attempts. Give them examples of phishing attempts that are currently circulating. Remind them of policies about encryption and about removing data from the office and how it is to be secured. Yes, some will get lazy and knowingly violate the procedures, but the more you train and reinforce that training with discipline for violators, the more likely you are to reduce employee errors. And of course, deploy technical safeguards where they are available and can be accommodated in your budget.
Protenus has called for 2017 to be the year of increased awareness about insider incidents and the need to reduce them. Safetica has also predicted that insider errors will drive major breaches in 2017. Yes, both firms have a vested interest in promoting greater attention to insiders’ behaviors that are the primary or contributory causes of breaches. But my analysis of the data suggests that they are right, and I hope their message is being heard loud and clear.
This article originally appeared at Office of Inadequate Security