Latest Findings – Verizon’s Data Breach Investigations Report

A Q&A with Chris Novak, Managing Principal at Verizon Business
Verizon’s Data Breach Investigations Report, conducted by the Verizon RISK Team with cooperation from law enforcement agencies around the world, has become an invaluable resource for anyone looking to gauge the current landscape in data breach incidents. “It’s not enough to know what happened. We need to know why and what we could have done to prevent it,” says Chris Novak, managing principal, investigative response for Verizon Business Security Solutions. I talked to Chris about the latest findings in this year’s report.

What does the report cover and what’s new this year?
While confidentiality prevents us from speaking about specific incidents, the report allows us to aggregate the data, anonymize it and offer a summary, so that we can make this information available to others and serve as an educational resource. This year’s report looks at 855 incidents with 147 million compromised records. We’ve added some additional contributors: the Irish Reporting and Information Security Service, the Australian Federal Police, and the London Metropolitan Police. These partners give us a better sense of what’s going on within their footprint, and they also allow us to offer better sample sets of global data.

What are the biggest findings of this year’s report?
We found that the external threat is still the greatest, accounting for 98 percent of cases, up 6 percent from last year, with only 4 percent of cases being internal. (The overlap accounts for the cases where internal people collude with external people.) Organizations have implemented much more internal control and identified vulnerabilities, and that improvement is reflected in the numbers we’re seeing. Hacktivism was responsible for 58 percent of compromised records—that’s a significant number. These groups typically target larger organizations. In general, external breaches were conducted with hacking (81 percent) and malware (69 percent). Social engineering still registers as a small threat in the landscape, with only 7 percent of cases involving it. We’re also finding that servers (94 percent) are the most vulnerable to attack—at the end of the day, that’s where all the data is. In terms of the kind of data we’re seeing, it’s still mostly personal information, with about 95 percent of cases including PII such as names, social security numbers and addresses—all the items needed for identity theft. We continue to see intellectual property from the trade sector being stolen, but it’s difficult to put a dollar value on that information. Another interesting finding is that 65 percent of the attacks were considered “low difficulty,” showing us that in most cases the perpetrators are not very sophisticated—they often looked up techniques on Google or Wikipedia, but simply worked until they got in.

What should security officers and risk managers be worried about?
An area we’re keeping our eye on is the healthcare industry, and we expect to see more breaches there. We also looked at how long it takes for companies to discover a breach. In 84 percent of cases it took multiple weeks or longer, which is concerning. Another issue of concern is that 86 percent of breached organizations had everything they needed to know in their own logs. If they’d been looking at their own data they could have stopped the incident. And 97 percent of breaches were avoidable through simple or intermediate controls.

What’s the good news?
We are not seeing any increased risk tied to cloud computing, an area many people have worried about. People using cloud computing are often getting a better level of service, so that if something happens they can catch it more quickly—in some cases, it’s actually a security improvement. In general, preventing a breach is less expensive than wading through a typical one, so the proverbial “ounce of prevention” still holds true here. 63 percent of respondents said that the controls needed to prevent their breach would have been simple and cheap, and another 31 percent said they would not have been difficult or expensive.

In conclusion…
Chris Novak’s insights are helpful, especially to risk managers trying to get their arms around the causes of loss and the potential frequency and severity of cyber risk. The Verizon report is especially focused on risks caused by malicious actors, which continue to morph each year, always seeming to stay one step ahead of corporate efforts to safeguard information assets. However, it should be noted that a fair amount of the cyber liability insurance claims we see are the result of non-malicious events such as lost laptops, staff mistakes, and improperly disposed paper records. This is not to discount the importance of being battle-ready to deflect the malicious threats that our clients face on a daily basis, but to acknowledge that both types of events must be anticipated.

Trends in Cyber Risk Management Services

A Q&A with Rick Betterley of Betterley Risk Consultants, Inc.
Like any segment of the insurance industry, cyber risk management services evolve over time. To get a handle on some of the latest trends, I spoke with Rick Betterley, President of Betterley Risk Consultants (an independent risk management consulting firm), and publisher of The Betterley Report at www.betterley.com.  Rick can be reached at rbetterley@betterley.com or 978.422.3366.

What do you see as the major trends in cyber risk management services?
We’re seeing a sharpening of industry focus from the service companies and insurance companies, as well as a more focused range of products for specific industries. The advanced vendors are realizing that one service doesn’t fit all and they have to adapt to particular needs, which is a sure sign of a maturing marketplace. Healthcare is a good example. We see more risk management services that cater to HIPAA, including compliance e-tools.

Another trend is more restriction in regard to vendors. Insurance companies are less willing to allow the insured party to use the vendor of their choice, and that’s a double-edged sword: Controlling the list of approved vendors helps the insurance company better manage their vendors and perhaps pass along better prices to customers but the risk is that the insured will be less satisfied with their policy, as they might not realize they’re restricted in their choice until it’s too late.

The final trend we see is more internal management of vendors by insurers. Insurance companies have an interest in these services because they are a big part of claims expense, so insurers are investing more time and personnel into reviewing them and making sure they’re cost effective, especially for individual claims.

What are the top five reasons middle-market organizations don’t buy cyber insurance?

  1. Brokers generally aren’t good at communicating the relative value of different insurance policies, and the forms are hard to compare, which leaves the insured less confident about buying the product.
  2. In many cases, the insured believes cyber insurance is already part of their policy, when in fact it’s not.
  3. The organization is still resistant to the cost involved and believes it’s too expensive. They might read the headlines about data breaches but still have an “it won’t happen to us” denial.
  4. The organization might be resistant to the idea of notification costs as a sublimited coverage. They might find it off-putting that they are told that they have to get a higher amount of liability coverage to obtain the breach notice limits that are really driving the purchase.
  5. This one is hardly a blinding flash of insight, but the company just might not be paying attention. They might be short-staffed or they think it’s taken care of or they put off buying insurance until next year.

How are cyber insurers responding to fierce competition in the marketplace?
There are close to 30 carriers in the market now. One of the competitive responses we’re seeing is the removal of sub-limits that otherwise existed on breach notification, so if you’re buying a $10 million liability policy the insurer might let you have the full limit for breach notification. This practice was unheard of until last year. We’re also seeing lower deductibles. I already mentioned the limits on vendors, which help the insurance companies keep down costs. Finally, I would say we are seeing a tremendous investment in marketing to help brokers better communicate the value of the product.

In conclusion…
NetDiligence can agree with many of the observations that Mr. Betterley is seeing in the trenches. We are also seeing some leading brokers and insurers that specialize in cyber liability coverage making a push to educate clients holding traditional lines of insurance about the many nuances of cyber coverage and the must-have supporting services, through weekly webinars and conferences. Even with all that, I am amazed, when speaking at various conferences, at how many small and medium-sized companies are just beginning to realize they have a cyber/privacy exposure and want to learn the very basics. For this reason we are seeing more markets leverage our eRisk Hub® portal to help them get the message out about the liability exposures, coverage for same, and the general ‘state of the cyber liability union.’

Cloud Computing and Insuring Risk

A Q&A with Andreas Schlayer of Munich Re
Cloud computing remains a hot topic in the area of insurance risk, though many companies and insurers are still assessing its impact on IT security. To find out more, I spoke with Andreas Schlayer, who heads the insurance IT risk team at Munich Re.

What are some of your concerns in terms of cloud computing risk exposures impacting Munich Re clients?
In our opinion, cloud computing is likely to become very popular in the future. The main success factors we see are the competitive pricing and the quality of services and products that will be available.

This future prospect calls for something I would best describe as a “supply chain of IT services,” which can combine one or more cloud-based services from one or more providers to create a new product or service. This development is driven by the affordable cost of leasing cloud computing power, which allows more and smaller companies to offer competitive services or tools.

From an insurer’s perspective, these IT exposures will be similar to today’s supply chain exposures in property insurance. With the cross-linking of providers increasing in the cloud, a risk scenario we see emerging on the horizon is the blackout of a major cloud service provider. We are concerned that such a blackout could affect a large number of customers who do not even know that their IT depends on this provider.

A risk scenario could look like this: Two students have a brilliant idea for a software tool. Using cloud service provider “A” to scale computing power to industrial size, the two students can target Fortune 500 companies as customers. A blackout of cloud provider “A” would affect all companies that bought the software tool created by the two students plus those companies that have a contract with cloud provider “A” to run their IT.

This type of aggregation is very challenging for insurers to monitor, as it requires making correct assumptions about the number of affected policies per event and the average loss amount per policy.
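The arithmetic behind the aggregation estimate Mr. Schlayer describes can be sketched in a few lines. All figures below are hypothetical assumptions for illustration only, not Munich Re data or model parameters:

```python
# Illustrative sketch of a cloud-blackout aggregation estimate.
# Both inputs are the assumptions an insurer must make per event:
# how many policies are affected, and the average loss per policy.

def expected_aggregate_loss(affected_policies: int,
                            avg_loss_per_policy: float) -> float:
    """Expected insurer loss for a single cloud provider blackout event."""
    return affected_policies * avg_loss_per_policy

# Assumed scenario: a provider blackout touches 400 policies in the
# insurer's book, averaging $250,000 of loss per policy.
loss = expected_aggregate_loss(400, 250_000.0)
print(f"${loss:,.0f}")  # $100,000,000
```

The difficulty is not the multiplication but the inputs: indirect dependencies mean the insurer may badly underestimate how many policies a single provider outage touches.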

What are some thoughts/suggestions that you might have as to ways in which a primary cyber liability insurance carrier can offset the cloud aggregation exposures facing their book of business?
I doubt that there will be reliable models available in the near future to quantify the costs of an internet blackout or the knock-on effects caused by the blackout of a large cloud provider.

If they are not able to offset the costs of cloud aggregation exposure, insurance carriers will be forced to limit it in the policy wording. In our opinion, the most effective way to control cloud aggregation exposures is to differentiate between direct and indirect dependencies.1
Direct dependencies are easier to monitor, for instance by asking the insured what companies are providing IT services. Based on this information, the aggregation exposures for cloud providers can be monitored. The insurance carrier can provide higher limits for these exposures than for unmonitored exposures.

Indirect dependencies cannot yet be monitored with reasonable effort. We therefore recommend providing small limits, or excluding losses caused by indirect dependencies, in the policy wording to restrict the aggregation exposures.

In conclusion … 
The cloud comes with many risk issues impacting both insurer and insured business clients. Many of the risks related to cloud computing revolve around contractual risks (e.g. do you own your data once it is uploaded into a third-party cloud? See our eRisk Hub® Cloud Risk Considerations tool).

The commentary offered by Mr. Schlayer is of vital importance to many primary insurance carriers offering cyber liability coverage to entities that already leverage cloud computing, and this trend will continue to grow for various reasons. The potential for a data breach event creates both first-party cyber risk exposure (business interruption) and third-party exposure (class action legal liability). The latter can have systemic implications that impact a sizeable portion of an insurer’s book of business. This aggregation concern is on the minds of many underwriters we support (and Munich Re).

On the loss control side, we are seeing newer technical solutions deployed that can mitigate some cloud exposures, such as encryption solutions (see our prior post on Cloud Security) that allow clients to encrypt/protect customer PII data in a cloud’s “stove pipe”. This type of protection could also give the insured a safe haven from future compliance and liability risks (i.e. they may not need to report their data breach).

—————- ### —————-

1 In policy language, direct dependency means that the reason for the interruption of the cloud service has to be caused by the cloud service provider itself and not by a third party which is part of the provider’s supply chain.
Indirect dependency means that the reason for the interruption of the cloud service has to be caused by a third party (i.e., the cloud service provider’s service provider) that is part of its supply chain.
