Data Collection Liability and Trends

A Q&A with Dominique Shelton
The area of mobile app customer data collection is drawing heightened regulatory interest. In the past 18 months, industry leaders have come together to create the Best Practices in Mobile Data Collection guidance, while the California Attorney General released “Privacy on the Go.” To get a better handle on the issue of mobile app data collection, I spoke with Dominique Shelton of Edwards Wildman Palmer, LLP in Los Angeles, CA.

Can you provide an example of a business being sued for wrongful data collection?
It is important to keep in mind that data collection has been challenged by plaintiffs and regulatory agencies when an argument can be raised that consumers did not understand that their data was being collected. The focus of the FTC and certain regulators like the California Attorney General has been on compliance with “privacy by design.” This means giving consumers sufficient notice and the choice to make meaningful decisions about their data and how it’s used. Currently, there are over 176 class actions pending around the country based on behavioral advertising or the use of tracking information to create customized products or targeted advertising. One recent example is the case the California Attorney General brought against Delta Air Lines for not posting a mobile privacy disclosure. We have also seen class actions filed against a major studio and search engine in December, challenging mobile apps that allegedly collect behavioral information from users under the age of 13. So in this environment it’s very important for companies to take a look at this issue.

What types of data can get businesses into trouble?
Certainly, any personally identifiable information, such as name, address, phone number or email, that may be collected without disclosure. And now we’re starting to see risk around behavioral data that identifies the user’s mobile device or a social networking ID associated with their Facebook profile, for instance. Those identifiers are considered personally identifiable information by regulators. The CA AG recently issued the “Privacy on the Go” report, in which personally identifiable information is defined as “any data linked to a person or persistently linked to a mobile device—data that can identify a person via personal information or a device via a unique identifier.” Also, the FTC is moving towards a definition of personal information that includes any data “reasonably linkable” to an individual, based on the recognition that unique identifiers can be combined with other information to identify users personally. Of course, we are also seeing financial and health data as areas of interest, and these are considered “sensitive” information as well.

Are there any states with stricter laws that increase liability?
No. California doesn’t have a law yet. There is a Do Not Track bill pending but it hasn’t gone through to a full vote. In the context of class actions, most have been filed using older statutes such as the Electronic Communications Privacy Act and the Computer Fraud and Abuse Act, claiming that tracking violates those statutes. Although there is not a distinct Do Not Track law, it is important to know that this issue is getting greater attention from regulators, and this in and of itself may create a new standard. For example, “Privacy on the Go” creates a de facto new standard for the country by recommending security, encryption and protection standards for unique identifiers and behavioral data that many companies previously considered non-personally identifiable information, not subject to enforcement. The class actions that have been brought based upon the creation of ad profiles from users’ online behavior also act as a check on how companies should consider compliance. Finally, the SEC’s October 2011 guidance calling for disclosure of all material cyber risks by public companies is useful.

What can a business do to mitigate their risk?
First, take a look at online and mobile disclosure policies. Supplemental notice is ideal. Make sure that someone in the company is responsible for privacy compliance—it could be the chief privacy officer or someone in the legal department or product development. They should be in touch with the self-regulatory groups that have promulgated guidelines to address these issues. At a minimum, the company should be in step with its peers, in addition to focusing on the latest guidance materials and which activities will attract the attention of regulators. There needs to be a dialogue between the privacy group and product development and marketing so that when new products are rolled out they can vet their use of customer information and make sure they are up to date on their legal obligations. Also, conducting annual training on privacy practices is a good idea and is recommended by the CA AG.

In summary…
Ms. Shelton has provided an excellent summary of some emerging ‘big data’ issues that impact organizations that collect and use private information without adhering to reasonable privacy principles. Dominique mentions the Delta Air Lines case, which could serve as a bellwether for mobile apps that are deployed without much regard for privacy policies and principles (i.e., customer notice and consent to the use of their personal information). Delta is facing a potentially massive California AG penalty of $2,500 per download (reportedly, the app had a million downloads). One might argue that what Delta did with its mobile app is a common practice, and one that needs to cease immediately.
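To put that figure in perspective, here is a minimal sketch of the statutory arithmetic, assuming (as the summary above suggests, though no court has yet held) that each download counts as a separate violation at the maximum penalty:

```python
# Hypothetical exposure estimate: assumes every download is a separate
# violation and that the maximum statutory penalty applies to each one.
PENALTY_PER_VIOLATION = 2_500   # maximum penalty per violation, USD
REPORTED_DOWNLOADS = 1_000_000  # figure reported for the Delta app

max_exposure = PENALTY_PER_VIOLATION * REPORTED_DOWNLOADS
print(f"Maximum theoretical exposure: ${max_exposure:,}")
# Maximum theoretical exposure: $2,500,000,000
```

Any settlement or judgment would almost certainly come in far lower, but the per-violation structure is what makes the statute such a powerful enforcement lever.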

Understanding the Final HIPAA Security and Privacy Rules

A Q&A with Barbara Bennett
Released this past January, the final HITECH regulations amending the HIPAA Security, Privacy, Breach and Enforcement Rules expanded the scope of those rules, increased patient protections and strengthened government oversight, including by applying the rules to contractors and subcontractors. So what does this mean for healthcare organizations and the service providers that work with them? I asked Barbara Bennett, partner at Hogan Lovells, LLP in Washington, DC to explain the finer points of the final rules.

Can you summarize the key requirements of the HITECH final rules amending HIPAA? What has changed?

  • The biggest changes are the application of much of the HIPAA privacy rule and virtually all of the security rule to “business associates,” with almost all third party contractors and subcontractors that access or maintain protected health information (PHI) now being considered business associates.
  • The second major change is to the HIPAA breach rule, which used to have a “risk of harm” standard incorporated into the definition of a data breach requiring notice to affected individuals or regulators. Under that standard, an incident was not considered a breach unless it posed a significant risk of harm to the affected individuals. The final rule now provides that any unauthorized use or disclosure of PHI is presumed to be a reportable data breach unless the covered entity or business associate demonstrates that there is a low probability that the information has been compromised, based on a risk assessment that considers certain factors (a sketch of this decision flow follows this list).
  • Changes to the HIPAA marketing rule include further restrictions on the use of PHI for marketing. The proposed rule had included an exception allowing providers to use PHI for subsidized communications, but HHS removed that exception, so all subsidized promotional communications now require an individual authorization unless they concern a currently prescribed drug or biologic.
  • There are some other, less monumental changes that include the right for an individual to restrict the sending of certain PHI to that individual’s health plan, the ability to get authorizations for research to do more than one study per authorization, and more stringent enforcement penalties.
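The final rule enumerates four such factors: the nature and extent of the PHI involved, the unauthorized person who used or received it, whether the PHI was actually acquired or viewed, and the extent to which the risk has been mitigated. As a purely illustrative sketch of the presumption-and-rebuttal logic (the scoring and threshold are invented for illustration, not a legal standard), the decision flow might be modeled like this:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """The four factors the final rule requires a covered entity or
    business associate to weigh before rebutting the breach presumption."""
    phi_sensitivity: float     # nature/extent of the PHI, 0.0 (low) to 1.0 (high)
    recipient_risk: float      # risk posed by the unauthorized recipient
    acquired_or_viewed: bool   # was the PHI actually acquired or viewed?
    mitigation: float          # 0.0 (none) to 1.0 (fully mitigated)

def is_reportable_breach(a: RiskAssessment, threshold: float = 0.3) -> bool:
    """Any unauthorized use or disclosure is PRESUMED reportable; the entity
    must demonstrate a low probability of compromise to rebut it. The
    numeric threshold here is a hypothetical stand-in, not a legal test."""
    if a.acquired_or_viewed:
        return True  # hard to show low probability once PHI was actually viewed
    residual = max(a.phi_sensitivity, a.recipient_risk) * (1.0 - a.mitigation)
    return residual >= threshold  # presumption stands unless risk is shown to be low
```

The point of the model is the default: silence or an incomplete assessment leaves the incident reportable; only a documented, factor-by-factor showing of low probability rebuts the presumption.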

In looking at the business associates component, how does this change impact healthcare organizations and their vendors?
There is potential for tremendous impact. The bottom line is that a lot of companies out there know they’re business associates and have the wherewithal to implement compliance, but many organizations do not.

In the past, business associates and their subcontractors who provided services to covered entities only had contractual liability, but they now have direct liability. “Business associates” are defined by virtue of their role and not by any agreement with the covered entity. For instance, the rule makes clear that cloud and other storage providers are business associates, whether or not they have a business associate agreement. If you maintain PHI, it does not matter whether you review or access it. You are still responsible and subject to the HIPAA rules, including the breach rule. It’s a big liability to assume and a big expense to implement compliance with these requirements.

Others that could be affected include software and hardware vendors that provide doctors and hospitals with access to information through health information exchanges (even if the vendor is not actually storing the information) as well as consulting and law firms that serve hospitals, doctors and health plans and may need PHI to perform their services. It goes deeper, too. Let’s say that these firms rely on a document management company. They, too, would be business associates subject to the rules and associated liability.

Another clarification, however, is that the conduit exception is retained. If you’re a courier, telephone carrier or internet provider that solely transmits or transfers the data—and does not maintain it—you may be exempt from the business associate requirements. And financial institutions that are just cashing a check or processing a payment transaction are not considered business associates, either. However, if they perform a service for the organization that involves PHI, such as accounts receivable or a lockbox, then they become a business associate.

What are the challenges going forward for small entities and business associates?
First of all, these businesses need to understand whether the law applies to them for purposes of their own risk management, which means taking a good look at their customer base. For small entities, compliance costs can be expensive and require a certain amount of technical expertise, especially with the security rule provisions. There are some changes to the requirements for business associate agreements that need to be reviewed to determine if existing agreements need to be amended—there is an extended compliance period in some cases for those agreements—and business associates are now required to have these agreements with their subcontractors.

There’s also a challenge for an organization that deals with a lot of covered entities, such as a data analytics company or IT vendor that works with multiple hospitals, in managing thousands or millions of records for different covered entities. It is very difficult to comply with different contractual requirements with respect to different sets of data and still operate efficiently and effectively. Those companies might want to develop a standard template because they can’t get that granular in terms of differing compliance requirements.

In particular, HHS said in the preamble to the rules that a business associate must comply with the minimum necessary policies and procedures of the covered entity. That appears to me to be nearly impossible. If you have one customer it may simply be a hassle but if you’re a large service provider working with 10,000 covered entities, how can you possibly comply with all of their various minimum necessary policies and procedures? And why would the covered entities want to disclose their internal policies and procedures, which have been drafted for covered entities and not business associates, anyway? But, to add insult to injury, HHS also has indicated that a violation of the minimum necessary standard (i.e., the use or disclosure of more PHI than required for a specific purpose) can itself constitute a data breach.

In general, though, business associates need to get serious about compliance before September 23, 2013, when these requirements will be enforced. That means doing a survey of activities to gauge compliance and then creating a plan to address any issues; determining when the business associate acts as an agent of the covered entity (which carries additional burdens); training workforce members; and assessing relationships with subcontractors, and even educating those subcontractors where necessary. We recommend engaging outside counsel if there is no internal expertise available to understand these regulations and how they are applied to one’s business operations.

In summary…
We think Ms. Bennett highlighted the key (pressure point) areas of the Final Rule that will cause future legal liability and enforcement actions for both Covered Entities and Business Associates. A complicating issue is how HHS/OCR interprets its own Security Rule regulations, which seems to vary from one investigator to the next. Moreover, many state Attorneys General are paying particular attention to how health information is safeguarded, and the penalties for noncompliance can be harsh. Having a knowledgeable security attorney (or Breach Coach®, as we call it) is essential for organizations in the healthcare industry and the insurance companies that underwrite their data breach liabilities.

Inside a Computer Forensic Investigation

A Q&A with Andrew Obuchowski
When a data breach occurs, a computer forensic investigator can save a company hundreds of thousands of dollars by establishing what exactly happened, potentially obviating the need for notification, and offering recommendations for remediation. To better understand how these investigations work, I spoke to Andrew Obuchowski, associate director of technology solutions at Navigant Consulting in New York, NY.

Let’s say you receive a panicked call from a client, and they just learned that their database holding all of their customers’ personal information was hacked. Can you explain in layperson’s terms how a typical computer forensic investigation may unfold? What might a process flow look like?
Of course, each investigation is unique. I’m tasked with asking questions to identify all possible sources of information that can be used to help me determine the affected data, when and how the incident occurred, and the outcome. There will be a series of high-level questions to start—and generally, one person cannot answer all of them. My interviews would include the legal department, HR, marketing and communications, in addition to IT staff. These conversations might lead to third parties such as cloud computing or other IT vendors. From there, I would ask more detailed questions about the network, systems and logging capabilities, remote access, user and application accounts, and then make a determination regarding what information should be collected. I also like to focus on what is unknown to the company, what else is going on and what could be affected—for instance, if someone’s laptop had malware installed that could compromise other areas on the network. Part of doing due diligence is covering all possible scenarios. Then there is the preservation stage, which includes imaging computer hard drives and collecting network data, for example. Some of these steps might occur after hours due to staff availability or volume of network activity and could be in conjunction with third party resources. We work with the staff to assist in the investigation and ask them to carve out time to support us. Here’s an example of what the flow might look like:

  • Setting up scoping meetings to identify background information, staff availability, data sources, and actions taken since the incident was discovered.
  • Deploying qualified, trained personnel to preserve and collect data from relevant computer systems such as laptops or servers, along with network logs and administrative documentation, and to perform vulnerability scans if needed (a preservation sketch follows this list). Onsite analysis of this collected information would be performed to “triage” the incident and offer any immediate remediation tasks to mitigate further risk.
  • Detailed analysis would be performed in a controlled environment, such as a forensic lab. This analysis could include generating various reports or monitoring malware behavior to help us identify the level of compromise and reach a conclusion. (This could take a few days to weeks and usually involves follow-up questions.)
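The preservation stage turns on provable integrity: examiners typically record a cryptographic hash of each collected item at acquisition time so they can later demonstrate that the evidence was never altered. Here is a minimal sketch of that bookkeeping (the file names, examiner name and log layout are hypothetical):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MB chunks so large disk images need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_evidence(item: Path, collector: str, log_file: Path) -> dict:
    """Append a chain-of-custody entry: who collected what, when, and its hash."""
    entry = {
        "item": str(item),
        "sha256": sha256_of(item),
        "collected_by": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    with log_file.open("a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage: hash a forensic image immediately after acquisition.
# record_evidence(Path("images/laptop01.dd"), "A. Examiner", Path("custody.jsonl"))
```

Re-hashing the image before analysis and comparing against the recorded value is what lets the examiner defend the claim that the evidence matches exactly what was collected.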

What are some typical problem spots or issues that might complicate your investigation?
Some of the biggest challenges occur when the company doesn’t know where sensitive data is stored, who the key contacts are, the location of backups, or how long records have been maintained. Gaps like these could extend our investigation and cause delays. Having access to all available information is critical—even though we may not “need” all of the information we are asking for, it is useful to know it is there.

Often when there’s a third party such as a cloud computing vendor it can be difficult to gain access to the data we need. Sometimes vendors sell a solution to host the data but don’t provide vital logging information in terms of who accessed the data, what actions were performed, and when this occurred. These things are commonly overlooked and should be detailed in the service level agreements you have with your provider.

Many organizations rush to repair the incident rather than preserve the information. This can cause the loss of valuable evidence that would help us determine whether we are handling a data breach or an information security incident, and what was affected. The loss of crucial evidence could force a company into a position where it has to notify customers of a data breach, since we would be unable to defend the claim that sensitive information was never compromised.

Another common gap is the lack of historical documentation, such as an incident report. The incident report should record who performed each action or task, when it was performed, and a description of the action or task.

Finally, I’d say we could run into delays when administrative documentation such as network diagrams or incident response plans is outdated or incomplete. In many instances, and more frequently with smaller organizations, all of the network knowledge and relevant information rests on the shoulders of one individual. Murphy’s Law suggests that an incident will happen when that individual is on vacation and out of the country. Be sure that the weight of this responsibility doesn’t lie with just one person. This is not only useful for an investigation but also provides a form of internal checks and balances for the organization.

What are some best practices used by clients that aid your investigation?
Certainly, all the issues I have previously mentioned should be addressed at some level. Companies need to have the ability to maintain business continuity while balancing the staff and system requirements needed in an investigation. Here are a few high-level items that are useful:

  • Detailed logging information for your network and key systems should be retained for at least 60 to 90 days (see the sketch after this list).
  • Information security awareness training and educating employees to identify risks can help to minimize human error, although it will not prevent the disgruntled employee from taking your data.
  • Network diagrams and incident response plans are also very helpful. Keep in mind that you should not just create an incident response plan but perform a tabletop exercise to test this plan and make sure it works.
  • If the company has acquired solutions for email and data archiving, they need to ensure that the information can be extracted in a usable format. Don’t just implement your products; test them.
  • Key staff should be able to identify where all sensitive data is stored—and sensitive but unrelated data, such as HR and financial information, should never be stored together or on common group and file shares.
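On the retention point, keeping 60 to 90 days of logs is often just a configuration choice. As a minimal sketch using Python’s standard logging library (the path, logger name and message fields are placeholders), rotating daily and keeping 90 days of history:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# Rotate the log at midnight and keep 90 days of history, the upper end
# of the 60-to-90-day retention window suggested above.
handler = TimedRotatingFileHandler(
    "/var/log/app/audit.log",  # placeholder path
    when="midnight",
    backupCount=90,            # rotated files beyond this count are deleted
)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
)

logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("user=%s action=%s target=%s", "jdoe", "login", "vpn-gateway")
```

Whatever the mechanism (an operating system facility, a SIEM, or application-level handlers like this one), the key is that the retention window is deliberate, documented, and long enough to cover the typical lag between compromise and discovery.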

Finally, companies need to pay attention to service level agreements and understand where they are relinquishing control of the data and make sure that it can be accessed when needed. Identifying where your data is physically stored can also be a concern. Your data could be stored in New York, Denver, or maybe even in Europe. Although most vendors are not going to provide you with this information, this does pose a legal challenge if data has to be collected.

In summary…
We feel the forensic investigation is often critical to the insurance company’s ability to make a proper coverage determination in order to pay out on a loss resulting from a data breach event. For example, the insurer needs to factually verify:

  • Date of the breach? This is key to determining whether coverage was in place at the time of the event.
  • Date of detection? This is important to show an Attorney General (or plaintiff lawyer) that the insured business responded in a timely and prudent manner to the event.
  • Location of the breach? Did the incident occur on the insured’s system or a third-party provider’s system or Cloud (which could trigger a later subrogation action)?
  • What controls/safeguards were defeated that contributed to the cause of the loss? This is important in order to document during a postmortem review of the claim that new safeguards are in place to minimize the chance of the incident happening again.

Of course, as Mr. Obuchowski illustrates, a claims investigation typically doesn’t come with a neat bow wrapped around it. Problems often occur. The Cloud, in particular, may present challenges for both the insured client and the insurer. A Cloud provider may not allow a forensic investigation of an alleged breach incident on its network (assuming, of course, that the Cloud provider actually notified the insured client of the alleged breach in the first place).


SEC Enforcement for Cyber Risk and Data Breaches

A Q&A with Jacob Olcott
As Jacob Olcott, principal in cybersecurity at Good Harbor Security Risk Management, LLC, points out, the SEC Guidance released in 2011 brings the issue of data security out of the IT realm and into corporate governance. But these rules for publicly traded companies are still relatively new, and what they mean in terms of legal exposure is largely untested. Olcott answered a few of my questions about the guidance and how companies can minimize their risks.

Can you give us a layperson’s explanation of the SEC Guidance as it relates to data security?
The idea here, generally speaking, is that publicly traded companies are obligated to disclose material risks to investors. The securities laws have been in place for 80 years—what’s new is that in 2011 the SEC issued guidance for companies to apply that longstanding legal requirement in the cyber security context. We know that every company in the world today has been penetrated and huge volumes of information have been exfiltrated from corporate networks, largely losses of intellectual property and trade secrets. But what hasn’t happened yet, necessarily, is disclosure of these incidents, and that’s important from an investor’s standpoint. This guidance sits alongside all of the other legal obligations to disclose events when they happen—whether under laws governing the security of health information or financial information—but it also covers information for which there had been no existing legal requirement, such as business secrets and intellectual property.

What’s the financial exposure here for a company that ignores the SEC guidance?
The failure to disclose material information can lead to shareholder lawsuits—there are decades and decades of history behind that. It can also lead to SEC enforcement actions, which carry fines. However, to date the SEC has never brought an enforcement action against a company over disclosure of a data loss, so the exposure is still unknown. Still, I think the reality is that as companies become more aware of their legal obligations to defend their networks, shareholders will be demanding greater security from the companies they’re investing in.

What concern might a board or CEO have in complying with the guidance? And how can a business mitigate this exposure?
The bottom line here is that boards and CEOs should be very worried about this, because it raises a question that most, if not all, companies cannot answer today: if we had a material event in our system, would we know it? There’s a growing realization in the C-suite that companies must get a better understanding of their current security posture, whether because of the SEC guidance or the recognition that the bad guys are here and coming after us. The first step is to think about what would constitute a “material event” for the business. That is very business-dependent, so we tell our clients to figure out what they do and work backwards from there—basically, to identify the crown jewels. If a company holds a significant amount of consumer information, for instance, then that’s what it needs to focus on. If it operates a piece of critical infrastructure like the electrical grid, then keeping the lights on and the control systems working is the most sensitive thing to protect.

It’s very important for companies to have a corporate-wide cyber risk committee. If you ask the average IT security person what “material cyber risks or events” means, they will just look at you dumbfounded—it’s not a term of art in the IT world. This is a good example of why general counsel, or even more senior folks like the CEO who understand the business implications, have to be more involved in managing cyber risk. It’s also very important for officers and directors to work with the security staff and do tabletop exercises to plan incident response ahead of time, because the last thing you want is to be in the middle of a crisis and thinking about it for the first time.

Can a violation of this SEC guidance lead to a possible directors and officers (D&O) lawsuit?
Yes, officers and directors have a longstanding legal responsibility to disclose material information to investors and I don’t think there’s any question that if they’re not closely examining cyber risk they could be very vulnerable in a potential suit. However, it hasn’t happened yet.

In conclusion …
This is a complex topic. In summary, it’s difficult to define “material risk.” After all, some companies face multiple malicious attacks or attempts on a daily basis that they may consider a nuisance but routine—and it would have to be an actual breach to be deemed “material.” For another company, the close calls could pose a material risk. And what if it’s only a minor breach, like a lost laptop? To report material risk publicly, clients will need to involve counsel skilled in security and privacy matters and thoughtful about balancing the needs of outside investors with the company’s interests, while not releasing too much information about the company’s own loss control measures to the outside world. To build on Mr. Olcott’s insightful comments, it will be interesting to see whether the SEC follows up with significant penalties for willful violators of the guidance’s intent and whether plaintiff lawyers leverage the SEC noncompliance argument in their data breach class action complaints.
