Vermont Privacy Breach Regulations

A Q&A with Ryan Kriger
Among state Attorneys General, Vermont has gained a reputation for being particularly aggressive about data breach and privacy regulation. To better understand the state’s Consumer Protection Act requirements and processes for data breach investigation, I talked to Ryan Kriger, Assistant Attorney General.

What should a small business know about complying with the Vermont law?
We have guidance available on our website, which should be helpful. In the case of a breach, they should first contact law enforcement, their insurer, their lawyer, any IT people involved and, if there’s credit card information at stake, their processor. Their primary duty is to figure out what happened and get the situation under control. They have to notify us within 14 days of finding out about the breach. That preliminary notice is kept confidential. We want businesses to give notice to consumers relatively quickly, and the 14-day notice to us allows us to stay on top of things and make sure they are doing that. We did create a waiver last year—if your company has policies in place and you’re confident that you will comply with the law, you can be certified ahead of time as long as you sign the document and get it on file with us before a breach incident. If you have a certification on file, you don’t need to notify us within 14 days. Another subsection says that if the data collector is sure that the data never got into the wrong hands—say, a password-protected laptop was lost for five hours, then returned—they can call and ask us if they still need to give notice, and we probably won’t require it.

If it’s a really big breach and we think it could be problematic, we may follow up with questions. If we perceive the company’s actions to be unreasonable, unfair or deceptive, such as in the case with TJX, then we will begin an inquiry. Often, this wouldn’t just be Vermont, but multiple states getting together and asking questions.

How might you approach a data breach incident?
The first step is that we want to make sure the business has covered all of the necessary notifications. Notice to consumers should go out “in the most expedient time possible and without unreasonable delay.” Vermont has a 45-day deadline, but we think in many cases notice should go out sooner. We encourage companies to send us their notification letter before it goes out to consumers, and we can help them make sure it’s in line with the statute. Also, the sample letter to consumers gets posted to our website, so consumers can confirm that the letter itself is legitimate. The second thing is to make sure the company fixes the problems that led to the breach. Sometimes smaller businesses think it’s a one-shot deal and don’t want to change their business practices, but we remind them that they are on notice, and that the fine outlined in the Consumer Protection Act is $10,000 per violation. Now, we’ve never had to levy that fine as most people seem to want to resolve the issues, but we want businesses to know that we are here to protect consumers and they need to take that seriously. In the TJX case, it appears that the company may have been collecting credit card information at point of sale and transmitting it, unencrypted, over unprotected wi-fi networks. This sort of blatant violation of standard security practices, and the length of time that it was allowed to continue, clearly justified bringing an enforcement action. We’re not trying to trick people, and in most cases we can resolve things in a cooperative fashion, but when a company drags its feet, we will go after them.

What are some of the key weak spots that lead to a privacy/data breach incident?
It can be all over the map—certainly, not encrypting data where encryption is appropriate is one issue. Over-collecting data you don’t need, such as using SSNs as an identifier, could be another. Other problems we see: collecting credit card data through a homemade system that’s not PCI-compliant when you could be using a secure third-party system. Not changing passwords or updating software. In smaller businesses, it might be negligence about employees who could be stealing credit card information. In general, it’s a good practice to have the occasional forensic analysis or stress test. We have partnered with Norwich University to offer penetration testing to any small business in Vermont that wants it. The Verizon Report has shown us that small businesses are the prime focus of security breaches, so we are particularly sensitive to the needs of small businesses in Vermont.

What type of fines and penalties can a company face for noncompliance? Can the lack of certain actions or controls increase their culpability in your view?
I mentioned the $10,000 per violation fine, and we consider each day you go beyond the deadline a separate penalty. Our Consumer Protection Act doesn’t have an intent requirement, but we obviously take intent, negligence and lack of controls into account when we think about enforcement and penalties. A business that suffers a breach and calls us to ask what it can do, making it clear it wants to do the right thing, is very different from a business that denies anything went wrong after we’ve found out about the breach three months later. We are very cautious with our use of power and we’re not trying to bully anyone, but if we need to use a large fine to get a business into compliance, we will do so. If an enforcement action reaches a settlement agreement, called an assurance of discontinuance or consent judgment, we may seek penalties, but we will also seek injunctive relief, which is asking the business to change its behavior. For example, we may want the business to put security or compliance systems into place, offer restitution for consumers, or take other steps to make sure it doesn’t happen again. In general, we are eager to proactively work with businesses to protect consumers and create a productive, cooperative relationship in order to prevent breaches.

In summary…
I first met AAG Ryan Kriger at our NetDiligence® Cyber Risk & Privacy Liability Forum last year in Marina del Rey. I thought he might be guarded about the state’s approach to enforcement, but boy, was I wrong. He was actually very forthright in talking about how seriously Vermont takes the issue of consumer privacy, including its pursuit of those who violate state regulations. He makes the point that his department is willing to work with organizations that suffer a data breach incident and will give them a roadmap to do the right thing by the victims (whose personal information is now in wrongful hands). What is clear is that organizations that demonstrate a lack of care (or even willful nondisclosure) will be penalized.

Ryan is also speaking at the upcoming NetDiligence® Cyber Risk & Privacy Liability Forum in Philadelphia this June 6-7.

Inside a Computer Forensic Investigation

A Q&A with Andrew Obuchowski
When a data breach occurs, a computer forensic investigator can save a company hundreds of thousands of dollars by establishing what exactly happened, potentially obviating the need for notification, and offering recommendations for remediation. To better understand how these investigations work, I spoke to Andrew Obuchowski, associate director of technology solutions at Navigant Consulting in New York, NY.

Let’s say you receive a panicked call from a client, and they just learned that their database holding all of their customers’ personal information was hacked. Can you explain in layperson’s terms how a typical computer forensic investigation may unfold? What might a process flow look like?
Of course, each investigation is unique. I’m tasked with asking questions to identify all possible sources of information that can be used to help me determine the affected data, when and how the incident occurred, and the outcome. There will be a series of high-level questions to start—and generally, one person cannot answer all of them. My interviews would include the legal department, HR, marketing and communications, in addition to IT staff. These conversations might lead to third parties such as cloud computing or other IT vendors. From there, I would ask more detailed questions about the network, systems and logging capabilities, remote access, user and application accounts, and then make a determination regarding what information should be collected. I also like to focus on what is unknown to the company, what else is going on and what could be affected—for instance, if someone’s laptop had malware installed that could compromise other areas on the network. Part of doing due diligence is covering all possible scenarios. Then there is the preservation stage, which includes imaging computer hard drives and collecting network data, for example. Some of these steps might occur after hours due to staff availability or volume of network activity and could be in conjunction with third party resources. We work with the staff to assist in the investigation and ask them to carve out time to support us. Here’s an example of what the flow might look like:

  • Setting up scoping meetings to identify background information, availability of staff personnel, data sources, and actions taken since the incident was discovered.
  • Deploying qualified, trained personnel to preserve and collect data from relevant computer systems such as laptops or servers, along with network logs and administrative documentation, and to perform vulnerability scans (if needed). Onsite analysis of this collected information would be performed to “triage” the incident and offer any immediate remediation tasks to mitigate further risk.
  • Detailed analysis would be performed in a controlled environment, such as a forensic lab. This analysis could include generating various reports or monitoring malware behavior to help us identify the level of compromise, and it will be used to help us reach a conclusion. (This could take a few days to weeks and usually involves follow-up questions.)

What are some typical problem spots or issues that might complicate your investigation?
Some of the biggest challenges occur when the company doesn’t know where sensitive data is stored, who the key contacts are, the location of backups, or how long they have been maintaining records. Gaps like this could extend our investigation and cause delays. Having access to all available information is critical—even though we may not “need” all of the information we are asking for, it is useful to know it is there.

Often, when there’s a third party involved, such as a cloud computing vendor, it can be difficult to gain access to the data we need. Sometimes vendors sell a solution to host the data but don’t provide vital logging information in terms of who accessed the data, what actions were performed, and when this occurred. These things are commonly overlooked and should be detailed in the service level agreements you have with your provider.

Many organizations rush to focus on repairing the incident rather than preserving the information. This can cause a loss of valuable evidence that would help us determine whether we are handling a data breach or an information security incident, and what was affected. The loss of crucial evidence could force a company into a position where it has to notify customers of a data breach, since we would be unable to defend the claim that sensitive information was never compromised.

Another common oversight is the lack of historical documentation, such as an incident report. The information contained in the incident report should include who performed an action/task, when it was performed, and a description of the action/task.

Finally, I’d say we could run into delays when administrative documentation such as network diagrams or incident response plans are outdated or incomplete. In many instances, and it is more frequent with smaller organizations, all of the network and relevant information rests on the shoulders of one individual. Murphy’s Law would suggest that an incident will happen when that individual is on vacation and out of the country. Be sure that the weight of this responsibility doesn’t lie with just one person in an incident. This is not only useful for an investigation but also provides a form of internal checks and balances for an organization.

What are some best practices used by clients that aid your investigation?
Certainly, all the issues I have previously mentioned should be addressed at some level. Companies need to have the ability to maintain business continuity while balancing the staff and system requirements needed in an investigation. Here are a few high-level bullet items that are useful:

  • Detailed logging information for your network and key systems should be retained for at least 60 to 90 days.
  • Information security awareness training and educating employees to identify risks can help to minimize human error, although it will not prevent the disgruntled employee from taking your data.
  • Network diagrams and incident response plans are also very helpful. Keep in mind that you should not just create an incident response plan but perform a tabletop exercise to test this plan and make sure it works.
  • If the company has acquired solutions for email and data archiving, they need to ensure that the information can be extracted in a usable format. Don’t just implement your products; test them.
  • Key staff should be able to identify where all sensitive data is stored—and sensitive but unrelated data such as HR and financial information should never be stored together or with common group and file shares.

Finally, companies need to pay attention to service level agreements and understand where they are relinquishing control of the data and make sure that it can be accessed when needed. Identifying where your data is physically stored can also be a concern. Your data could be stored in New York, Denver, or maybe even in Europe. Although most vendors are not going to provide you with this information, this does pose a legal challenge if data has to be collected.

In summary…
We feel the forensic investigation is often critical to the insurance company’s ability to make a proper coverage determination in order to pay out on a loss resulting from a data breach event. For example, the insurer needs to factually verify:

  • Date of the breach? This is key to determining whether coverage was in place at the time of the event.
  • Date of detection? This is important to show an Attorney General (or plaintiff lawyer) that the insured business responded in a timely and prudent manner to the event.
  • Location of the breach? Did the incident occur on the insured’s system or a third-party provider’s system or Cloud (which could trigger a later subrogation action)?
  • What controls/safeguards were defeated that contributed to the cause of the loss? This is important in order to document during a postmortem review of the claim that new safeguards are in place to minimize the chance of the incident happening again.

Of course, as Mr. Obuchowski illustrates, a claims investigation typically doesn’t come with a neat bow wrapped around it. Problems often occur. The Cloud, in particular, may present challenges for both the insured client and the insurer. A Cloud provider may not allow a forensic investigation of an alleged breach incident on its network (assuming, of course, that the Cloud provider actually notified the insured client of the alleged breach in the first place).


SEC Enforcement for Cyber Risk and Data Breaches

A Q&A with Jacob Olcott
As Jacob Olcott, principal in cybersecurity at Good Harbor Security Risk Management, LLC points out, the SEC Guidance released in 2011 brings the issue of data security out of the IT realm and into corporate governance. But these rules for publicly traded companies are still relatively new and what they mean in terms of legal exposure is still largely untested. Olcott answered a few of my questions about the guidance and how companies can minimize their risks.

Can we have a layperson explanation of the SEC Guidance as it relates to data security?
The idea here, generally speaking, is that publicly traded companies are obligated to disclose material risks to investors. The securities laws have been in place for 80 years—what’s new is that in 2011, the SEC issued guidance for companies to apply that longstanding legal requirement to the cyber security context. We know that every company in the world today has been penetrated and huge volumes of information have been exfiltrated out of corporate networks, largely the loss of intellectual property and trade secrets. But what hasn’t happened yet, necessarily, is disclosure of these incidents and that’s important from an investor’s standpoint. This guidance sits alongside all of the other legal obligations to disclose events when they happen—whether it’s laws regarding the security of health information or financial information—but it also covers information for which there had been no existing legal requirement, such as business secrets and intellectual property.

What’s the financial exposure here for a company that ignores the SEC guidance?
The failure to disclose material information can lead to shareholder lawsuits—there are decades and decades of history behind that. It can also lead to SEC enforcement actions, which are associated with fines. However, up until now there has never been an example of the SEC bringing an enforcement action against a company for notification around data loss, so it’s still unknown. Still, I think the reality is that as companies become more aware of their legal obligations to defend their networks, shareholders will be demanding greater security from the companies they’re investing in.

What concern might a board or CEO have in complying with the guidance? And how can a business mitigate this exposure?
The bottom line here is that boards and CEOs should be very worried about this because it raises a question that most if not all companies cannot answer today: If we had a material event in our system, would we know it? There’s a growing realization in the C-suite that we have got to get a better understanding about what our security posture is today, whether it’s because of the SEC guidance or the growing realization that bad guys are here and they’re coming after us. The first step is to think about what would constitute a “material event” to the business, and that is very business-dependent, so we would tell our clients to figure out what they do and work backwards from there—basically, to figure out what the crown jewels are. If a company has a significant amount of consumer information, for instance, then that’s what they need to be focusing on. If it’s a piece of critical infrastructure like the electrical grid, then keeping lights on and the control systems working is the most sensitive thing to protect.

It’s very important for companies to have a corporate-wide cyber risk committee. If you ask the average IT security guy what “material cyber risks or events” mean they will just look at you dumbfounded—it’s not a term of art in the IT world. This is a good example of why general counsel or even more senior folks like the CEO who understand the business implications have to be more involved in managing cyber risk. It’s also very important for officers and directors to work with the security staff and do tabletop exercises in planning incident response ahead of time because the last thing you want is to be in the middle of a crisis and just thinking about it for the first time.

Can a violation of this SEC guidance lead to a possible directors and officers (D&O) lawsuit?
Yes, officers and directors have a longstanding legal responsibility to disclose material information to investors and I don’t think there’s any question that if they’re not closely examining cyber risk they could be very vulnerable in a potential suit. However, it hasn’t happened yet.

In conclusion …
This is a complex topic. In summary, it’s difficult to define “material risk.” After all, some companies face multiple malicious attacks/attempts on a daily basis that they may consider a nuisance but routine—and it would have to be an actual breach to be deemed “material.” For another company, the close calls could pose a material risk. And what if it’s only a minor breach, like a lost laptop? To report material risk publicly the clients will need to involve counsel skilled in security and privacy matters and thoughtful about balancing the needs of outside investors with the company’s interests while not releasing too much information about their own loss control measures to the outside world. To build on Mr. Olcott’s insightful comments, it will be interesting to see if the SEC follows up here with significant penalties for willful violators of the guideline’s intent AND whether plaintiff lawyers leverage the SEC noncompliance argument in their data breach class action lawsuit complaints.

Understanding and Avoiding OCR Investigations

A Q&A with Lynn Sessions of Baker Hostetler
In enforcing the HIPAA/HITECH regulations, the Department of Health and Human Services’ Office of Civil Rights (OCR) has been coming down on healthcare organizations with recent fines of US $1.7 million, and yet the OCR and its investigations remain an area of mystery for many organizations. I asked Lynn Sessions, counsel at Baker Hostetler in Houston, TX, for some perspective on OCR’s process for ensuring data security and privacy compliance in healthcare.

In defending clients from an OCR inquiry, what are some of the shortcomings you often see in the data protection efforts that may increase the scorn of the government?
Some of the biggest concerns are an incomplete risk assessment or one that was created several years ago and shelved, a risk management plan that’s not followed or maintained, a lack of an incident response plan, and a lack of organizational support for information security projects. We also see issues with organizational silos in communication, where the people who work in compliance and privacy may not talk to risk management or information security or IT—and all of those people should be talking. Many organizations have data encryption but they don’t have encrypted Blackberrys or laptops or backup tapes. This is important because with some insurance companies, if the data isn’t encrypted, you fall out of coverage. The OCR has come back to some of our clients who had encryption, to point out that there were still laptops that were not encrypted. On the other hand, many organizations say ‘we don’t have to worry because we encrypted,’ but they still have to document what they did with the encrypted devices, that they met safe harbor requirements, and that a risk of harm analysis was conducted in relation to devices.

Explain at a high level how the OCR investigation process may work, from notice letter to enforcement penalty.
After a data breach incident, the OCR will send out a letter to the covered entity that includes approximately 20 different requests for information. It typically starts out broadly: they’re looking to see whether or not the entity is compliant. They narrow the requests more and more as they look at documents in an increasingly detailed fashion. It’s amazing how, by going down what seems like an unrelated rabbit hole, they may find a smoking gun. We’ve worked with several clients whose investigations go back to 2011 breach incidents and are now on the third round of questions from OCR, and investigations from 2010 that are now on their sixth round of questions. We haven’t had to go to the penalty phase yet with any of our clients, but we anticipate that if the OCR found a lack of compliance, the client would be assessed and given a penalty for the violations, and the OCR and the organization could take it into settlement to come up with some kind of corrective action plan and fines. The fines are usually proportionate to the size of the organization and the violation. Still, you want to avoid this at all costs: At the point of a penalty, there is always a public announcement that gets picked up by industry publications. Once you’ve been fined, you will have the OCR looking over your shoulder for the next few years. We’ve also heard that in the coming years the fines could be growing, up into the eight-figure range.

Are there one or more key areas that OCR or state attorneys general have been focused upon for most breach incidents?
Mobile devices are still a very hot topic for OCR, as they are for state attorneys general. Third party compliance is another one. We saw recently that the Massachusetts attorney general fined a healthcare organization after a third party disposed of records in a dumpster. That’s not necessarily a typical finding, but it’s something we’ve seen.

HIPAA/HITECH leaves some gray area with regard to its classifications of safeguard measures. Can you explain what “addressable” means versus “required” and can HHS/OCR regulators or state attorneys general have different interpretations of the regulation when it comes to a breach incident?
What we’re hearing from OCR is that just because it says it’s “addressable” doesn’t mean you have an option of addressing the issue. You can check the box choosing not to address encryption because it’s not required but what OCR is saying is that if you choose not to, you should have a risk analysis as to why the unencrypted data is still safe. If you haven’t documented what you’re doing to protect the PHI, then the OCR can still come after you. This is important in healthcare, because some devices can’t be encrypted—we’ve heard vendors say that in the case of some medical equipment, patient safety could be compromised by encryption. OCR would say that just because you haven’t encrypted it, doesn’t mean you can’t have other safeguards in place. As long as you can demonstrate what you’ve done to protect the information and you’ve documented those processes, you have some protection.

In conclusion…
The issue of protecting healthcare-related data will only get more attention from regulators and plaintiff lawyers in the coming years. It is advisable for any company worried about safeguarding protected health information (PHI) to be proactive and fully assess (and document) its enterprise information security posture. But companies should still expect the inevitable bad event (breach) and have a response plan in place that includes a relationship with leading counsel, like Lynn Sessions, to guide them through the labyrinth of Federal regulations (HIPAA) and state laws (which can be gray). Ideally, legal counsel should also have a working relationship with enforcement regulators to better represent the victimized company and improve the outcome of any regulatory investigation.

Public Relations in Face of a Data Breach: Risk and Preparation

A Q&A with Robert McEwen of McEwen & McMahon
Among the multitude of risks posed by data insecurity is the damage a breach can do to a company’s reputation. In the past, ineffective communications about a data breach have often led to greater financial loss for victimized companies, such as when customers speak publicly about their negative experiences (damaging brand equity), or when victims feel their concerns are not being taken seriously and seek recourse through legal action. So how can organizations prepare to communicate effectively in case sensitive information ever is compromised? We spoke with Robert McEwen of McEwen & McMahon to find out.

Why should clients care about PR as it relates to data breach/privacy violations?
Data breaches can erode trust in a company and damage its reputation. What wise business leaders have come to understand is that reputation has quantitative value. It is just as tangible as inventory, receivables, real estate or any other asset on the corporate balance sheet. Year-over-year analyses of Fortune magazine’s annual ranking of “Most Admired Companies” illustrate the indisputable cause-and-effect relationship between reputation and market capitalization. Moving up or down a single notch in a company’s industry sector rankings on average translates into a gain or loss of more than $100 million in shareholder value. It’s only common sense to take every precaution to protect and defend such a precious asset by investing in strategic communications counsel.

How can clients prepare to better manage their brand and mitigate future liability following a data breach event?
Data breaches are an unfortunate fact of life in a digital society. They are as ubiquitous as fires. The question is not whether they will happen, but when. Never, therefore, has the old adage “an ounce of prevention’s worth a pound of cure” held more true than when managing network security. It is far more economical to monitor, identify and deal with potential security issues in advance than to ignore them until some triggering event thrusts an issue before the klieg lights of the media. That’s when a company finds itself in the docket of the court of public opinion, where the jury most often presumes guilt, not innocence, and the trial is almost always a costly one. Such messes often can be avoided if only business leaders would make relatively small investments in crisis preparedness plans and rehearse them regularly.

Every manager with data breach response authority ought to have the crisis management plan filed and posted as an icon on his or her desktop. The plan should include specific scenarios for a variety of different occurrences—whether caused by a stolen laptop, a technology glitch, or a malicious hacker. Such pre-planning enables companies to deal with the situation more effectively than scrambling frenetically at the last minute.

In my experience, most stakeholders understand that data breaches are inevitable to an extent and they will be relatively forgiving if a company handles such an incident efficiently and straightforwardly. If, however, they perceive anything less than full transparency, then stakeholders can be ruthlessly unforgiving. That’s where the rubber meets the road and companies can suffer a significant bottom-line impact.

How much can PR services cost for a large/medium/small business?
The best way of estimating the cost of preparing for or responding to a data breach is to use the PR Cost Calculator that McEwen & McMahon and NetDiligence developed for the eRisk Hub.

Generally speaking, the kinds of variables that impact PR costs mostly have to do with the size and scope of the breach, and the company’s degree of readiness to deal with it. How many stakeholder audiences are affected and how large are they? How sensitive is the information that’s been compromised? (Credit card data? Social security numbers? Private health information?) Does the company have internal PR capability? Is there a crisis communications plan? How up-to-date is the plan? Have employees rehearsed it?

Depending on the answers to these questions, PR costs can range from tens of thousands to hundreds of thousands of dollars. But far more important than the immediate cost of retaining outside PR counsel is the potential cost to a company’s reputation. Millions of dollars in brand equity that has taken decades to build can be wiped out instantaneously if a company’s response to a data breach is — or is perceived to be — inadequate.

In conclusion …
What most impressed me about Robert McEwen when I met him a year or so ago was that he was talking about the value of PR. He recalled the Tylenol case (of 1982), and how that was a classic example of excellent media management and customer communication, while the BP oil spill in the Gulf showcased the opposite. Bob felt there are strong similarities to properly handling a massive data breach event. I think he is spot-on, especially if you look at some of the largest publicly reported data breach incidents and how they were handled in the public forum. There is a strong argument for having a professional PR team in place to significantly help mitigate the risk exposures facing many businesses when the inevitable data breach or leak occurs.

The Lowdown on Healthcare Data Breaches

A Q&A with Michael Bruemmer of Experian
Healthcare is one of the single biggest areas for data breach and identity fraud, yet many people still don’t understand the gravity of the risks facing companies and consumers. To get a better handle on the specific risks and how organizations can better protect themselves, I spoke with Michael Bruemmer, VP of Data Breach Resolution at Experian.

What are some of the challenges in healthcare in regard to data breaches right now?
I think there are three big ones: First is HIPAA and HITECH, which have really put pressure on the industry. For the most part, healthcare entities, particularly individual doctors and smaller hospitals, would carry on with paper records if they were not pushed to digitize. So there has been a lot of pressure not only to get those records in order, but to make them accessible. All of this has created challenges with regard to protecting medical records from data breaches. Number two is that the use of those records is not a single handoff—there are multiple exchanges among the patient, the provider, the payment processor and the insurance companies involved, so it’s a complex system. Consulting an attorney who can help you better understand these laws is a good idea. Under the law, whether you’re a covered entity or a business associate, you have to take the same level of care in handling those records, including business records and actual medical records like x-rays and blood work. The third thing is employee training. Employee negligence is still a leading cause of data breaches in the United States. Given that some large hospitals employ upwards of 15,000 to 20,000 people, this means dealing with large networks for training, not to mention policy and enforcement of the training.

What makes a healthcare data breach different from a data breach in another industry?
I touched on HITECH and HIPAA in the first question, but the laws we are operating under now were created in August 2009. We’re still waiting for the final rule to be published, so that puts us in a unique position. There are requirements to protect information from a security and compliance perspective, and companies also have to have a data breach response plan in place—not only for what is called the covered entity but also for any subcontractors or vendors they use. There are 46 different state laws for notification in the case of a healthcare data breach, with varying requirements. California is the most stringent, for instance, requiring that you notify consumers within five business days. In most of the other states it’s 60 days. If you’re a healthcare entity you have to have a compliance officer or privacy officer who knows these laws and knows how to protect all of the health records and information.

What are some of the recommendations you would make to healthcare entities in preparation for a data breach?
First of all, gain an understanding of the law by speaking with an attorney who specializes in healthcare law. These days, you’ve got to have a deep understanding of HIPAA and HITECH. Second is to invest in security compliance, starting with planning and training your organization and its people on the laws and their requirements. Included in that investment—I really focus on this one—is that you actually have to practice your data breach response plan like it’s a fire drill so that people know what to do and everything is coordinated the way it should be. The third thing is making sure you have independent professionals on the team, such as outside legal counsel, a forensic specialist to track the source of the breach, and a notification call center.

How real is Medical Identity theft?
A 2011 study from the Ponemon Institute puts the annual economic impact of medical identity theft at $30.9 billion. A year earlier, Ponemon found that 1.42 million people were impacted by medical identity theft. Medical or healthcare ID theft represents 40 percent of all publicly reported data breaches. On the black market sites where you can buy and sell identities, a Social Security number costs one or two dollars, whereas someone’s full identity, including medical insurance and other medical information, is worth about 50 dollars. The value is in being able to use the services, and the people who are trading this information are getting more money for medical information.

What types of things can happen to victims of Medical Identity theft?
If someone steals your medical identity, the financial impact takes a while to clean up, but that’s not the worst of it. It can literally be a life-and-death risk. Let’s say you’re a hemophiliac and someone steals your medical information and gets services provided to them, including an operation where they are given blood thinners. That gets put into the records. So then you come in needing surgery, and they give you a blood thinner, and you end up having huge complications. You could also face denial of service if someone stole your medical ID—if you go to the emergency room and they see flags on your account from other providers, they can’t deny you that immediate coverage, but they could deny you some services because you have unpaid bills that someone rang up on your behalf. That’s not even counting the costs: If you accept the numbers from the Ponemon Institute study, billions of dollars of medical identity theft trickle down to consumers, who cover the costs through their insurance. Medical identity theft is a very real and significant problem.

In conclusion…
NetDiligence recently conducted its second annual Cyber Liability & Data Breach Insurance Claims study, which again reinforced that the healthcare sector is incurring a large number of data breach incidents and cyber liability insurance claims for same. Mr. Bruemmer did a nice job of summarizing some of the many risk exposures we see facing our customers in this sector, such as strict and changing state and federal privacy laws; emerging e-health record sharing platforms that increase opportunities for events; and causes of loss such as vendor and business associate mishaps, as well as negligent employees.

Latest Findings – Verizon’s Data Breach Investigations Report

A Q&A with Chris Novak, Managing Principal at Verizon Business
Verizon’s Data Breach Investigations Report, conducted by the Verizon RISK Team with cooperation from law enforcement agencies around the world, has become an invaluable resource for anyone looking to gauge the current landscape in data breach incidents. “It’s not enough to know what happened. We need to know why and what we could have done to prevent it,” says Chris Novak, managing principal, investigative response for Verizon Business Security Solutions. I talked to Chris about the latest findings in this year’s report.

What does the report cover and what’s new this year?
While confidentiality reasons don’t allow us to speak to specific incidents, the report allows us to aggregate the data, anonymize it and offer a summary so that we can make this information available to others and serve as an educational resource. This year’s report looks at 855 incidents with 147 million compromised records. We’ve added some additional contributors: the Irish Reporting and Information Security Service, the Australian Federal Police, and the London Metropolitan Police. These partners give us a better sense of what’s going on within their footprint, and they allow us to present better sample sets of global data.

What are the biggest findings of this year’s report?
We found that the external threat is still the greatest, accounting for 98 percent of cases, up 6 percent from last year, with only 4 percent of cases internal. (The overlap accounts for the cases where internal people collude with external people.) Organizations have implemented much more internal control and identified vulnerabilities, and that improvement is reflected in the numbers we’re seeing. Hacktivism was responsible for 58 percent of compromised records—that’s a significant number. These groups typically target larger organizations. In general, external breaches were conducted with hacking (81 percent) and malware (69 percent). Social engineering is still registering as a small threat in the landscape, with only 7 percent of the cases socially engineered. We’re also finding that servers (94 percent) are the most vulnerable to attack—at the end of the day, that’s where all the data is. In terms of the kind of data we’re seeing, it’s still mostly personal information, with about 95 percent of cases including PII such as names, Social Security numbers and addresses—all the items needed for identity theft. We continue to see intellectual property from the trade sector being stolen, but it’s difficult to monetize the worth of that information. Another interesting finding is that 65 percent of the attacks were considered “low difficulty,” showing us that in most cases the perpetrators are not very sophisticated—they often looked up techniques on Google or Wikipedia, then simply worked until they got in.

What should security officers and risk managers be worried about?
An area we’re keeping our eye on is the healthcare industry, and we expect to see more breaches there. We also looked at how long it takes companies to discover a breach: in 84 percent of the cases it took multiple weeks or longer, which is concerning. Another issue of concern is that 86 percent of organizations with a breach had everything they needed to know in their own logs. If they’d been looking at their own data, they could have stopped the incident. Fully 97 percent of breaches were avoidable through simple or intermediate controls.
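Novak’s point about logs is worth making concrete. As a minimal sketch (the log format, field names, and threshold below are hypothetical, not drawn from any specific system), even a few lines of Python can surface the kind of warning sign that sat unread in victims’ own logs:

```python
import re
from collections import Counter

# Hypothetical log lines; real formats vary by system (syslog, auth.log, etc.).
LOG_LINES = [
    "2012-06-01 10:00:01 FAILED login user=admin ip=203.0.113.7",
    "2012-06-01 10:00:02 FAILED login user=admin ip=203.0.113.7",
    "2012-06-01 10:00:03 FAILED login user=root ip=203.0.113.7",
    "2012-06-01 10:05:00 OK login user=alice ip=198.51.100.4",
]

FAILED = re.compile(r"FAILED login user=\S+ ip=(\S+)")

def suspicious_ips(lines, threshold=3):
    """Return the set of IPs with at least `threshold` failed logins."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip for ip, n in counts.items() if n >= threshold}

print(suspicious_ips(LOG_LINES))  # {'203.0.113.7'}
```

Real log monitoring involves far more than counting failed logins, but the principle Novak describes is the same: the evidence is usually already there if someone looks.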

What’s the good news?
We are not seeing any increased risk tied to cloud computing, an area many people have worried about. People using cloud computing are often getting a better level of service, so if something happens they can catch it more quickly—in some cases, it’s actually a security improvement. In general, preventing a breach is less expensive than wading through a typical breach, so the proverbial “ounce of prevention” still holds true here. Some 63 percent of respondents said that preventing their breach would have been simple and cheap, and another 31 percent said it would not have been difficult or expensive.

In conclusion…
Chris Novak’s insights are helpful, especially to risk managers trying to get their arms around the causes of loss and the potential frequency and severity of cyber risk. The Verizon report is especially focused on risks caused by malicious actors, which continue to morph each year, always seeming to stay one step ahead of corporate efforts to safeguard information assets. However, it should be footnoted that a fair amount of cyber liability insurance claims that we see are the result of non-malicious events such as lost laptops, staff mistakes, and improperly disposed paper records. This is not to discount the importance of being battle-ready to deflect the malicious threats that our clients literally face on a daily basis, but to acknowledge that both types of events must be anticipated.

What’s Happening in the World of Data Breach Litigation?

A Q&A with Sasha Romanosky, Ph.D. Candidate, Carnegie Mellon University
For organizations dealing with a data breach, legal liability is one of the first questions that arises. But are some data breaches more likely to result in lawsuits than others? Sasha Romanosky, a Ph.D. candidate at the Heinz College of Information Systems and Public Policy at Carnegie Mellon University, studies the legal and economic issues around data security and consumer privacy. In a recent study he coauthored, “Empirical Analysis of Data Breach Litigation,” he found that breaches resulting from the unauthorized disclosure or disposal of personal information are 6.9% more likely to result in a lawsuit, relative to breaches caused by lost or stolen hardware, whereas breaches caused by cyber-attack are only 2.9% more likely to result in a lawsuit. We spoke with him about his findings.

Can you explain the importance of your study for a risk manager or an Insurer?
Basically, we were looking at what kinds of breaches are being litigated and what kinds of variables are strong predictors of lawsuits. The second question is what variables and conditions make a plaintiff more likely to win. This information can give risk managers and insurers a better sense of how to protect themselves and how to assess and price cyber insurance policies.

What were the biggest takeaways from the study?
Very simply, it seems that only 4 percent of reported breaches are being litigated at the federal level—we make a distinction between the federal and the state level. We also found a huge variation in the causes of action, which included unfair business practices, negligence, breach of contract, breach of duty, and various state and federal statutes. A new cause of action is the unauthorized disclosure of personal information.

What you can draw from all of this, it seems to me, is that attorneys are trying different approaches. If there is no evidence of financial loss, the case is usually dismissed. We found that those organizations that offered credit monitoring were 6 times less likely to be sued—those that didn’t were thought to have behaved carelessly. We also found that financial information as opposed to other personal information or medical information is more likely to lead to lawsuits. When individuals suffered financial harm the odds of a firm being sued in federal court were 3.5 times greater. As such, firms dealing in financial information should take more care not to disseminate it.

About half of the cases settle, which is a useful finding, and very often for a nominal fee for the named plaintiff. There can be a substantial award or lump sum for people who suffered identity theft to pay specifically for losses. Defendants settle 30 percent more often when plaintiffs allege financial loss from a data breach or when faced with a certified class action suit.

So far we can’t tell what other factors or characteristics might influence lawsuits and settlements. We need to do more research to find out if the prominence and size of the company, the presence of liability insurance coverage, jurisdiction of event, the timing or quality of notice to victims, and/or media coverage have an impact.

What else do you see on the horizon as far as trends in data breach litigation?
One thing we saw with the Sony breach is that after 30 people filed class action suits, the insurance company would not pay out the damages. In response, Sony changed its end user license agreement to prevent users from suing—instead they must now agree to arbitration. That might be something to keep an eye on going forward—it will be interesting to see if other companies do the same thing.

In conclusion…
This study conducted by Mr. Romanosky and his colleagues (see study) is a great step toward helping corporate insurance risk managers and cyber risk underwriters better understand the reality of the class action litigation cost exposure that many organizations are facing. Lawsuits can be time consuming and very expensive. The 2011 NetDiligence® Cyber Claims Study found the average loss paid out by insurance carriers for a data breach event was $2.4 million, a good portion of that devoted to legal defense and indemnification. Moreover, we believe that emerging precedents from plaintiff-friendly cases might reduce the number of future cases dismissed for lack of damages—one of those being the RockYou lawsuit (see summary), which found that personally identifiable information has inherent value.

Preventing eBusiness Interruption

A Q&A with Mark Teolis, General Manager of DOSarrest
Distributed denial of service (DDoS) attacks are a threat to any business with an online presence. With little effort, an attacker across the world can completely overwhelm, degrade and/or crash your business computer servers. The result is that you lose customer trust and revenue for every minute the system is down. This type of attack is very prevalent and difficult to defeat. DOSarrest (an eRisk Hub listed vendor) assists organizations in deflecting these belligerent attacks. We spoke with general manager Mark Teolis to learn more about DDoS attacks and what we can do about them.

Can you explain what a DDoS attack is, and how this type of interruption impacts commerce operations?
A DDoS attack is when someone is maliciously sending unimportant—often just nonsense—traffic to your webserver, forcing the server to respond to it. The repeated requests bog down the server until it can’t deal with any requests, even legitimate ones, and it starts to slow down or crash. If you have an ecommerce operation that’s hit by a DDoS attack, your operations simply stop. Your site is down, and customers can’t log on. Often they go somewhere else to make the purchase. And if your operation is time-critical—Ticketmaster, for instance—missing out on that day’s sales is not like having a bag of sugar you can sell the next day. It’s a loss you can’t recoup. These attacks can be devastating, and most people are not prepared. I always tell people to have a plan in place: think about being down for a day to three days and how it will impact business, prestige and sales. Protecting yourself is an expensive undertaking, but can you really afford to take the risk?
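The mechanism Teolis describes can be illustrated with a toy model (the capacities and request rates below are made-up numbers, purely for illustration): a server that comfortably absorbs its legitimate traffic falls hopelessly behind once junk requests swamp its fixed capacity, and the growing backlog is what users experience as slowdown and, eventually, outage.

```python
from collections import deque

def simulate(capacity_per_tick, legit_per_tick, attack_per_tick, ticks):
    """Toy model: a server handles a fixed number of requests per time step;
    excess requests pile up in a backlog, and backlog growth means rising
    latency for everyone, legitimate users included."""
    backlog = deque()
    for _ in range(ticks):
        # All arriving traffic, legitimate and junk, joins the same queue.
        for _ in range(legit_per_tick + attack_per_tick):
            backlog.append("req")
        # The server can only work through so many requests per tick.
        for _ in range(min(capacity_per_tick, len(backlog))):
            backlog.popleft()
    return len(backlog)

# Normal load: 80 legitimate requests/tick against a capacity of 100.
print(simulate(100, 80, 0, 10))    # 0 — the server keeps up
# Under attack: 500 junk requests/tick swamp the same server.
print(simulate(100, 80, 500, 10))  # 4800 — the backlog grows without bound
```

The numbers are arbitrary, but the shape of the problem is not: once arrivals exceed capacity, the backlog grows every tick, which is why an attacker with enough rented botnet traffic can take down a server without any sophistication at all.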

How often are businesses sustaining this type of an attack? Are any sectors more exposed than others?
We don’t have any hard and fast numbers because in most cases companies don’t report these attacks. However, we believe there are about 10,000 DDoS attacks a day. At the beginning, about 10 or 15 years ago, the biggest target was the electronic gaming industry and that’s where this thing started. These days, anyone can launch this type of attack, without any kind of tech knowledge. All they need is to rent a botnet for as little as US $20 a day. So everyone is getting hit now.

How does DOSarrest (or similar solutions) help prevent or mitigate DDoS attacks?
If you want to protect yourself, there are a couple of ways to go about it. You can buy a piece of equipment, a DDoS mitigation device, for a onetime fee, and it will stop attacks, though each device has different capabilities. Another route is to go to a provider who offers protection services—again, some are better than others. In this case you are usually paying a monthly fee. Your provider is only as good as their upstream connection—if the attack is too big for the connection, your system will go down. One of the biggest misconceptions people have is that if they buy a service or a device it will be able to handle everything, and that’s just not true. Our service relies on our own proprietary techniques to block malicious traffic, and we offer it for a monthly fee.
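To give a flavor of the kind of filtering a mitigation device or service performs, here is a minimal per-IP token-bucket rate limiter in Python. This is a generic sketch of one common technique, not DOSarrest’s proprietary method; the rates, capacity, and IP address are hypothetical:

```python
class TokenBucket:
    """Per-client token bucket: each IP may burst up to `capacity` requests,
    then is throttled to `rate` requests per second; requests arriving with
    an empty bucket are dropped instead of reaching the server."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.state = {}           # ip -> (tokens_remaining, last_seen_time)

    def allow(self, ip, now):
        tokens, last = self.state.get(ip, (self.capacity, now))
        # Refill tokens for the time elapsed since this IP was last seen.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.state[ip] = (tokens - 1, now)
            return True   # request passed through to the server
        self.state[ip] = (tokens, now)
        return False      # request dropped at the filter

limiter = TokenBucket(rate=1, capacity=5)
# A burst of 20 requests in the same instant from one IP: only the first
# 5 (the bucket's burst capacity) get through; the rest are dropped.
allowed = sum(limiter.allow("203.0.113.7", now=0.0) for _ in range(20))
print(allowed)  # 5
```

Real mitigation combines many such techniques (traffic scrubbing, challenge pages, upstream filtering) and, as Teolis notes, is only as good as the bandwidth behind it, but per-client rate limiting captures the basic idea of letting legitimate traffic through while shedding the flood.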

If my business is undergoing a live DDoS attack and I call DOSarrest (in a panic, of course), how soon can I expect to get the problem resolved so I’m operational again?
We can have it resolved in 15 minutes once a customer goes through the emergency form on our site.

In conclusion…
We have personally seen clients pummeled by DDoS attacks, often at the height of their sales season (Black Friday or Cyber Monday, for instance). Sometimes these attacks are accompanied by an extortion threat (pay this or else). Other times, the bad guys might use the DDoS as camouflage so they can exploit and breach an application. We have also seen DOSarrest help clients restore their ecommerce operations in a timely manner so that desired customer traffic can get through while the bad-guy noise cannot. This is not a testimonial, but this firsthand experience is one of the reasons we wanted to interview Mr. Teolis for this article, and it is why we include DOSarrest in our eRisk Hub crisis portal.


14.5 Things NOT to Do Following a Data Breach Incident

A Q&A with John Mullen, Nelson Levine de Luca & Hamilton, LLP

The hours and days following the initial discovery of a breach are full of confusion and chaos. However, companies can save themselves from a lot of trouble later on down the line if they stay focused. We spoke to lawyer John F. Mullen Sr. of Nelson Levine de Luca & Hamilton, LLP in Blue Bell, PA, about dos and don’ts for companies in this situation—mostly don’ts.

The following is what he came up with:

  1. Don’t assume a breach won’t happen to you. It’s going to happen, and you need to be insured. Even if you’re not a big multinational company that’s attracting hackers, you are likely to have someone working for you who could accidentally leave their laptop with TSA at the airport and land you in a data leak situation.
  2. Don’t kid yourself. This was a breach. I’ve seen companies in the aftermath of an incident who don’t want to come to terms with the reality so they bury it. They put off dealing with it. They rationalize. It doesn’t help.
  3. Don’t rush to judgment. Meaning, don’t start sending out notice until you know how many people are involved. To the extent possible, don’t start responding until you have all of the facts.
  4. Don’t assume that the first factual answers you get are accurate. In all my years in the business, I have never encountered a case where the original version of the story ends up being the absolute story. The truth is always more complicated. See above.
  5. Don’t let your self-insured retention cripple you from taking the right action. In other words, don’t be cheap. If you’ve got a million-dollar problem, don’t let your 50,000-dollar checkbook force you to cut corners. At the end of the day, it’s just going to delay the action and compromise the situation.
  6. Don’t hire your favorite M&A lawyer for a breach case. This may sound self-serving but it’s also true: This is a specialty area of the law and you want a person who is an expert in this area to represent you.
  7. Don’t do what I call “panic hiring.” Yes, you have limited time to take care of the response, but don’t just hire the first vendors you meet. That’s the equivalent of walking into a car dealership, handing the salesman your checkbook and asking him to write in the price. You may be panicked, but if you don’t hire the right people, they will take advantage of that and you’ll pay through the nose. This is another reason to have cyber insurance, as many insurers have negotiated favorable rates with needed vendors.
  8. Don’t over-notify people when notice is required.
  9. Don’t ignore your vendor due diligence. If you’re handing off your data to a company to do your processing and they lose the information then you will likely still be held liable. Make sure the company has the insurance and capital to handle that kind of loss so you don’t get stuck.
  10. Don’t forget to create a response plan ahead of time.
    10.b  Don’t run a response by committee.
    If you’ve got five people in charge, then no one’s in charge. Have a senior manager who handles decision-making and money spending in charge. If not, people will sit around looking at each other and it will take much longer to complete everything that needs to be done.
  11. Don’t rush through any of the process. Yes, there’s a time element involved—typically 45 to 60 days. But I can’t tell you how many clients come to me and say they want to give notice tomorrow. I always have to slow them down because inevitably they will find out they were more exposed than they thought, and then everything they did would be wrong and they’d have to do it all over again.
  12. Don’t fight with regulators, and don’t let your lawyers fight with regulators. Picking fights doesn’t help anybody and if you get on their bad side, regulators will put you through years of hell. Show that you’re willing to bend over backward to work with them and things will usually go well.
  13. Don’t forget e-discovery.
    Not saving your data up front can get you into big trouble down the road.
  14. Don’t assume you can win the class action suit.

Clients come to me assuming they will win because there aren’t “sufficient damages,” but the courts are swinging the other way now and that is no longer the case.

In conclusion…
In assisting insurance companies in dealing with their data breach insurance claim incidents—on average about one per week, and no two events look the same—I find it amazing how many times we come across clients who trigger not one but several of the issues listed in Mr. Mullen’s list. The good news is that many businesses are starting to follow (albeit slowly) a prudent breach response roadmap, demonstrating that they have learned from either their past mistakes or by seeing other organizations (their peers/competitors) deal with a publicly reported incident.
