Man in the Middle Attacks: A New Line of Defense


A Q&A with Norm Brogdon of Data Stream Protector
Much like eavesdropping, man in the middle (MITM) attacks allow a perpetrator to imperceptibly steal data—a malicious and insidious threat that has been underreported in the media. I spoke with Norm Brogdon of Data Stream Protector about the MITM exploit and how it can be stopped.
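One widely used defense against MITM interception is certificate pinning: the client remembers the server's known-good certificate and refuses to talk to an impostor. A minimal sketch in Python (the host and pinned fingerprint below are placeholders, not values from the interview):

    import hashlib
    import ssl

    # Placeholder: record your server's known-good certificate fingerprint out of band.
    PINNED_SHA256 = "0" * 64

    def certificate_fingerprint(host: str, port: int = 443) -> str:
        """Fetch the server's TLS certificate and return its SHA-256 fingerprint."""
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    # If the fingerprint changes unexpectedly, someone may be intercepting the connection.
    if certificate_fingerprint("example.com") != PINNED_SHA256:
        raise RuntimeError("Certificate mismatch: possible man-in-the-middle")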


Ethical Innovation and Big Data Privacy

Guest Author: Jamie Sheller, Esq., NetDiligence®

‘Big Data’ may be changing the world, but it is not changing Americans’ belief in the value of protecting privacy.

In one of the few areas of liberal and conservative consensus, Americans stand firmly behind the Fourth Amendment to the Constitution, which protects the “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.”


Data Breach Liability from a Class Action Trial Lawyer’s Standpoint

A Q&A with Jay Edelson of Edelson LLC
With court attitudes around privacy issues constantly evolving, it can be a challenge to understand what constitutes a significant data breach case and the consequences liable organizations face. I asked counsel Jay Edelson about how he chooses his class action cases and how the current legal climate is treating them.

What are some traits or hallmarks that you look for when determining whether a data breach case might be ripe for a class action proceeding?
The first thing we look for is the degree of sensitivity of the information that was left unguarded. We are more likely to choose something that seems like a serious breach, such as those involving health records or the private information of children. For a suit to be successful, it needs to connect with the judge and jury on an emotional level—in short, they must be convinced that the loss of information truly matters to people. This hurdle is so important to clear because data breach litigation is an emerging area of the law; courts don’t have the extensive precedent to draw on that they have in other consumer cases. If we can’t sell the case on an emotional level, it will be significantly harder to get them to be receptive to our broader arguments.

Next, we look at why or how the breach occurred. If it was something we think was preventable—for example, misplaced laptops that didn’t have basic encryption, as opposed to, say, a sophisticated hack from Eastern Europe—then we are more likely to take it on. The key that many plaintiffs’ attorneys unfortunately aren’t attuned to is that the mere fact that a breach occurred does not automatically mean the company acted negligently. Hackers and thieves are increasingly sophisticated, and there are cases in which it would have been unreasonable to expect a company to do more to guard its consumer data.

Most data breach cases tend to be large, so the size of the class isn’t usually a determining factor, though of course the larger the number of people involved, the easier it is to justify putting more resources into the case.

In the past, it seems many plaintiffs/victims were only offered a year of free credit monitoring, which, arguably, is of little value to them. What are some additional settlement remedies your team (or your peers) are now pursuing to compensate victims?
Until a few years ago, the law was fairly settled and the thinking was that the mere fact that your information was out there wasn’t, in and of itself, enough to harm you—you’d have to show something more to the court to demonstrate harm. But that has changed recently with decisions such as Resnick v. AvMed from the United States Court of Appeals for the Eleventh Circuit. In AvMed, the court recognized that if people are paying the defendant money with a reasonable expectation that part of that money will go toward protecting their information, they have essentially “overpaid” for their goods or services if the company was not following through on its promises. Due to cases like this, we’re starting to see settlements move away from the “free credit monitoring” deals to monetary compensation.

What regulations do you call on to bring a case, and what is a typical negligence claim you’ve made?

We don’t look to regulations so much as the common law. We’re looking at the types of express or implied promises made to consumers.  In terms of negligence, our theories are pretty simple: We’re bringing cases when we believe that the defendant wasn’t following industry security standards.

Which security shortcomings of breached organizations drive you nuts?
I think that corporations are not really asking the right questions internally about where the data is stored and how it’s being protected. Sometimes they’ll hire an outside consultant and they think because they’re paying someone money, they’re being responsible. The problem is that the companies aren’t thinking through these issues in a truly robust way. They’re often not asking the basic questions: Who has access to our data? Where is it being stored? What do we do with the laptops that we take out of circulation?

What steps would you recommend to limit exposure in a class action lawsuit?
Well, as a plaintiff’s attorney, I’m not generally in the business of giving advice to the defendant. But the way to limit exposure is to have really terrific protections so that the data isn’t hacked or stolen or lost. Protect—don’t harm—your client.

Which courts are the most sympathetic to these issues? 
A few years ago I would say there were none. Few if any courts were receptive to privacy cases. But there’s been a huge shift in the landscape, partly due to the increased sensitivity of the public. Issues such as the government spying program have changed the view of the judiciary as well, and we are now seeing great decisions coming from all over the country—in Chicago, where I’m located; in California; in Florida. At this point, I’d say there’s not a location in the United States where I’d be hesitant to bring a case.

In summary…
We invited Mr. Edelson to speak at our Marina del Rey Cyber Liability conference (attended by a majority of the P&C insurers in the industry that offer cyber/privacy liability coverage), and he and his colleague were both very forthcoming and effective in educating the audience about the plaintiff’s (victim’s) perspective—something that often gets lost in the quantum of cyber risk. Hopefully, risk managers are paying attention to these emerging theories of liability from the front lines of class action litigation. As a final comment, I’ll add that Jay and his fellow panelist received some of the highest praise from the hundreds of attendees, which is especially remarkable when you consider that some of those same audience members could be his future adversaries.

See this session recording on the eRisk Hub.

Understanding COPPA and its Risk Ramifications

A Q&A with James Prendergast and Chris DiIenno of Nelson Levine De Luca and Hamilton
First put into effect in 2000, the Children’s Online Privacy Protection Act (COPPA) was designed to protect the PII of children under age 13 online. In July 2013, a revised regulation took effect to address more recent ways that children use the internet—namely, social networking, apps and mobile devices. To better grasp the new amendment’s implications for businesses that collect the PII of children online, I talked to Jim Prendergast and Chris DiIenno, partners in the Privacy and Data Security Group at Nelson Levine De Luca and Hamilton, LLC.

Can you give us a summary of the COPPA amendment that went into effect July 1, 2013?
The main highlights are the following:

  1. The regulation requires parental notification and consent for any entity collecting children’s PII.
  2. Personal information is now much more expansively defined under COPPA, so that collecting some forms of data that were routinely collected in the past without parental consent would now clearly violate the regulation. This includes geographic location information, photographs, video, audio, user names and persistent identifiers (see the sketch following this list).
  3. Third party vendors providing plug-ins and ad networks are now expressly required to provide parental notification and obtain consent.
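To make the expanded definition concrete, here is a minimal, hypothetical screening helper in Python. The category names are illustrative labels based on the list in item 2 above, not terms from the regulation itself:

    # Categories of children's data that now count as personal information
    # under the amended COPPA rule, per the summary above.
    COPPA_PERSONAL_INFO = {
        "geolocation", "photo", "video", "audio",
        "username", "persistent_identifier",  # e.g., cookies, device IDs
    }

    def fields_requiring_parental_consent(collected_fields):
        """Return the collected fields that trigger COPPA notice and consent."""
        return sorted(set(collected_fields) & COPPA_PERSONAL_INFO)

    # Example: an app logging device IDs and photos now needs verifiable parental consent.
    print(fields_requiring_parental_consent(["persistent_identifier", "photo", "high_score"]))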

What are the cyber liability risk ramifications for any company that collects, stores and shares PII from children?
The risks include fines and injunctions from the FTC, as well as class action lawsuits, if the data is not collected carefully and properly. This new legislation targets app makers and website operators who have consciously directed their marketing to a younger audience. The FTC is looking for violations, and violators can expect a substantial fine and bad publicity. I would also say that third party plug-in providers, which were left out of the first version of the law through a loophole and have routinely been collecting information, might be the most threatened by this regulation. The worst-case scenario would be an app designer that either hasn’t paid attention to the amendment or has chosen to ignore it and has collected PII from kids for a long time.

What are the penalties?
Any entity that violates the new COPPA statute is subject to the full wrath of the FTC. The FTC can put violators out of business—either through substantial fines (up to $16,000 per violation) or by ruining their business reputation. When you’re looking at the fine amounts, consider that a company could be collecting information from 1,000 children and might have multiple violations per child. The FTC, or the states, can also take you to court for an injunction to prohibit you from doing business.
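A back-of-the-envelope calculation shows how quickly those numbers compound. This uses the figures cited above; the violations-per-child count is a hypothetical:

    # Statutory maximum of $16,000 per violation, applied to the illustrative
    # scenario above: 1,000 children, multiple violations per child.
    FINE_PER_VIOLATION = 16_000
    children = 1_000
    violations_per_child = 3  # hypothetical: e.g., geolocation, photos, device ID

    exposure = FINE_PER_VIOLATION * children * violations_per_child
    print(f"Potential exposure: ${exposure:,}")  # Potential exposure: $48,000,000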

Are you predicting class action lawsuits?
Yes. Class action lawyers have awaited these modifications with glee. I believe judges would be more inclined to find an identifiable class (which they generally haven’t been for cyber suits) because in this case they are protecting children. And while plaintiffs’ lawyers have had difficulty defining damages in some privacy cases, here the FTC has done that for them by articulating the $16,000-per-violation figure.

What can a company do to mitigate their exposure?

  1. Know the rules.
  2. Get parental consent. If you have any doubt at all that your website is directed at children, go to the COPPA website and figure it out.
  3. Post your privacy policy prominently online.
  4. If you change your data collection practices at any point, you must go back and tell mom and dad that you need their consent again—it’s not good enough to send a notice and then start collecting information.

Another consideration is that companies should try to understand what data they are collecting and how they are using it. They might find that they are collecting and storing data that they once intended to use or sell but that no longer serves any purpose for them. If you don’t need to collect it, don’t.

Any other thoughts?
Because these regulations are new and directed at children, it would definitely help app makers and related vendors to have a privacy liability policy that specifically addresses these issues.

In summary…
COPPA raises the stakes for transparency in a company’s privacy practices whenever data pertaining to children is involved. For children-directed app makers and others subject to this regulation, staying in compliance may mean facing hurdles such as providing a Direct Notice (email or mail to the parent) and obtaining ‘Verifiable Consent’ from the child’s parents. Some consent methods may seem laborious for both the company and parents (such as those that require parents to call, send a fax, mail a signed form, use their credit card, or email with their digital signature), but there’s no way around the regulation. So the big question is: Are website operators and app owners ready to put these practices in place today? Certainly, readiness requires a serious investment. But not tackling these issues might lead to FTC, State AG or plaintiff lawyer suits—something that no company can afford.

Dear Data Analytics … Thank You for the Spam

Reprinted with permission from HB Litigation.

Have you ever wondered why the same advertisement seems to be following you around the Internet?  Toby Merrill of ACE Professional Risk attributes this phenomenon to the increased use of data analytics by advertisement companies.  Data analytics is being used to track online users’ preferences so that companies can specifically target users with advertisements that match their interests.  This type of data collection has led to the hot-button issues of whether companies are infringing on their customers’ privacy rights and whether this data is being wrongfully collected.

These are some of the themes that emerged during a panel discussion at HB’s recent conference, the NetDiligence® Cyber Risk & Privacy Liability Forum (recordings available!).  The panel was moderated by Toby Merrill, ACE Professional Risk, and comprised Katherine Race Brin, Federal Trade Commission; Linda Clark, Reed Elsevier; John Graham, Zurich North America; Betty Shepherd, S.H. Smith & Company; and Gabriel Weinberg, DuckDuckGo.

How are companies using data analytics?
The panel recognized many positive ways that data analytics is currently being used.  The education industry is using it to enhance student learning.  Financial institutions are using it to protect credit card customers from fraud.  However, Zurich’s John Graham says “some companies are taking the posture of ‘let’s collect all the data that we can because we don’t even know what we are going to be able to use that for in the future.’”  The panel recognized that this type of mass collection creates a dangerous situation because companies are exposing themselves to liability if that information was wrongfully collected or if that information gets lost or stolen.

Gabriel Weinberg, creator of the online search engine DuckDuckGo, says that “businesses realize that [consumer behavior] data is valuable,” and “over the last five years … data collection has been pretty much hidden from consumers.”  He said that preferential data collection is a growing industry and that the FTC “has to deal with how to reconcile the data collection with what consumers want.”  Gabriel reflected on his own experience running a search engine and said that consumers “really care about [their behaviors being tracked] and [this issue] is not going to go away the same way people have dismissed privacy in the past.”

How is the FTC protecting consumer privacy?
The FTC’s Katherine Race Brin said that her agency is focused on protecting consumer rights and has created a privacy report that outlines best practices companies should follow when handling private information.  She said that the report outlines three main concepts that companies should consider.

First, companies should focus on “privacy by design,” which means that when a company designs a product or service they should be thinking about what data they are collecting or sharing as a result of this product or service.

Second, she said the FTC policy advocates “simplified consumer choice,” which means companies should provide consumers with “clear contextual [privacy] choice options.”

Third, Brin advised, companies should focus on “transparency,” which means simplifying privacy policies so that consumers can understand them.

Brin said the FTC is focused on bringing enforcement actions against companies whose practices are deceptive or unfair in violation of the Federal Trade Commission Act.  She continued by saying that the FTC has recently brought privacy actions against major online companies like Google and Facebook in order to ensure that these companies provided their consumers with privacy protections.

Every panelist agreed that the best way to ensure better privacy protection is to educate the consumers, along with the businesses, about the issues involved with preferential targeting.  It will be interesting to see how consumer privacy laws develop throughout the next decade and how the use of data analytics is affected by this demand for privacy.

HB’s next NetDiligence® Cyber Risk & Privacy Forum will take place in Marina del Rey, California, on October 10-11, 2013.  The event will be Co-Chaired by Mary Guzman, McGriff Seibels & Williams; Oliver Brew, Liberty Insurance; Chris Keegan, FINEX Global; Tim Francis, Travelers Bond & Financial Products; and Mark Schreiber, Edwards Wildman.

Tim Prosky is a 2015 J.D. candidate at Elon University School of Law in Elon, N.C.  He earned his degree in accounting from the University of South Carolina.  Prosky is a 2013 HB Litigation Conferences Summer Associate.

Mobile Devices: Risk and Exposure

A Q&A with Nathan Steuer and Peter Coddington
Mobile devices are essentially computers that can go anywhere the employee goes. While these devices enable powerful computing capabilities, they are also easily lost and left unprotected, creating additional data security risks for organizations that use them. I asked Nathan Steuer, business development director, and Peter Coddington, CEO of PaRaBal, Inc., in Catonsville, MD, about limiting the vulnerabilities of mobile devices.

What are some of the key risk exposures facing businesses with mobile devices?
We believe that data in any enterprise, commercial or public, are the jewels of the kingdom, so when you bring devices into the ecosystem of the organization, you are allowing that many more points to touch the data, potentially exposing it at the outer fringes of your control. The risk can take multiple forms, whether it’s rogue behavior or an accidental leak, but ultimately the risk exposure is about losing control over that data.

How can network and data breach events occur through mobile devices?
Smartphones have a number of senses—they can communicate over wi-fi, Bluetooth, cellular networks, near field communication and with servers, and they can give out geographic information—so there are a lot of ways to interact, and all of these interactions are connected to your enterprise network. A rogue agent or employee can put an app on the phone that allows someone to get into the network; there are spearphishing methods through texting that use tiny URLs leading back into the network; and if you lose the phone and there isn’t a proper password on it, anyone can access the data flowing through apps or email. If you think of all the different ways we communicate on the phone, you see that there are multiple opportunities for a breach.
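One small defensive measure implied by the tiny-URL lure described above is to expand shortened links server-side before anyone taps them. A minimal sketch in Python; the requests library and the example URL are assumptions, not tools mentioned by the interviewees:

    import requests  # third-party: pip install requests

    def expand_short_url(url: str, timeout: float = 5.0) -> str:
        """Follow redirects to reveal a shortened link's true destination.

        Note: some shorteners ignore HEAD requests; fall back to GET if needed.
        """
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        return resp.url

    # Hypothetical usage: show users the real target before they tap.
    # print(expand_short_url("https://tinyurl.com/example"))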

Do you see any trends in this area?
As Bring Your Own Device (BYOD) becomes the norm, employees are unknowingly exposing an organization’s data. These employees want to handle data properly on their mobile devices, but in most cases don’t know what constitutes red-flag usage on their device. Another area is undetected malware in Android apps. The sheer number of Android apps has multiplied a thousand-fold every six months. The security firm Kaspersky released a report saying that 99 percent of all attacks on mobile devices in 2012 were against Android, but that’s not to say iOS isn’t vulnerable as well. So we’re almost seeing a throwback to the old Windows-versus-Apple security debate, and as Android leads in market share, it presents the bigger challenge.

How can a company mitigate this risk exposure?
It has to be a multipronged approach, through policies, insurance, training and potentially software solutions. There are a number of products attempting to address these issues but most organizations are not electing to run out and get them yet. We think the best place to start is by getting a mobile audit of your enterprise and understanding how many devices are involved in your network, then determining policies and controls for employees using them. But the controls need to be well designed so that they don’t interfere with productivity, for instance, requiring clunky passwords to be entered multiple times. There need to be strong user policies from a liability standpoint and thorough education for employees so they understand the risks they’re taking. Our advice is that now is the time to secure your devices and stay ahead of the curve—it’s only a matter of time until we see a catastrophic data breach on the level of Sony that starts with a mobile device.
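The “mobile audit” step lends itself to a simple sketch: inventory the devices touching the network and flag the ones that fail baseline controls. The fields below are hypothetical, chosen to match the risks discussed above:

    from dataclasses import dataclass

    @dataclass
    class Device:
        owner: str
        os_name: str            # e.g., "Android", "iOS"
        has_passcode: bool
        storage_encrypted: bool

    def audit(devices):
        """Flag devices that fail baseline controls (passcode, encryption)."""
        return [d for d in devices if not (d.has_passcode and d.storage_encrypted)]

    fleet = [
        Device("alice", "iOS", True, True),
        Device("bob", "Android", False, True),  # no passcode: flagged
    ]
    for d in audit(fleet):
        print(f"Non-compliant device: {d.owner} ({d.os_name})")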

What might insurers need to know about mobile device risks?
While a lot of carriers realize there’s a great deal of risk with mobile devices, they don’t necessarily know how to quantify that risk and how to include it in their policies, so we also help with that education on the insurer side.

In summary…
At NetDiligence® we continue to see cyber risk insurers, brokers and risk managers concerned about mobile device risk and security issues. Many have had actual losses and insurance claims paid out due to a breached mobile device housing vast amounts of personal data on their customers (not to mention intellectual property impacting the corporation). Mr. Steuer and Mr. Coddington raise some key issues about organizational risk (and legal liability) emanating from both mobile apps and mobile devices, which we believe will grow immensely over the next several years. The attack statistic trends they reference are staggering. Businesses in all sectors need to get proactive and start managing this exposure.

Vermont Privacy Breach Regulations

A Q&A with Ryan Kriger
Among state Attorneys General, Vermont has gained a reputation for being particularly aggressive about data breach and privacy regulation. To better understand the state’s Consumer Protection Act requirements and processes for data breach investigation, I talked to Ryan Kriger, Assistant Attorney General.

What should a small business know about complying with the Vermont law?
We have guidance available on our website, which should be helpful. In the case of a breach, businesses should first contact law enforcement, their insurer, their lawyer, any IT people involved and, if there’s credit card information at stake, their processor. Their primary duty is to figure out what happened and get the situation under control. They have to notify us within 14 days of finding out about the breach. That preliminary notice is kept confidential. We want businesses to give notice to consumers relatively quickly, and the 14-day notice to us allows us to stay on top of things and make sure they are doing that. We did create a waiver last year—if your company has policies in place and you’re confident that you will comply with the law, you can be certified ahead of time as long as you sign the document and get it on file with us before a breach incident. If you have a certification on file, you don’t need to notify us within 14 days. Another subsection says that if the data collector is sure that the data never got into the wrong hands—say, a password-protected laptop was lost for five hours, then returned—they can call and ask us if they still need to give notice, and we probably won’t require it.
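The 14-day preliminary-notice clock is mechanical enough to sketch. The snippet below assumes calendar days; how the statute actually counts days is a detail to confirm against the guidance mentioned above:

    from datetime import date, timedelta

    def vt_ag_notice_deadline(discovery_date: date) -> date:
        """Preliminary notice to the Vermont AG is due within 14 days of
        discovering the breach (calendar days assumed here)."""
        return discovery_date + timedelta(days=14)

    print(vt_ag_notice_deadline(date(2013, 6, 1)))  # 2013-06-15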

If it’s a really big breach and we think it could be problematic, we may follow up with questions. If we perceive the company’s actions to be unreasonable, unfair or deceptive, such as in the case with TJX, then we will begin an inquiry. Often, this wouldn’t just be Vermont, but multiple states getting together and asking questions.

How might you approach a data breach incident?
The first step is that we want to make sure the business has covered all of the necessary notifications. Notice to consumers should go out “in the most expedient time possible and without unreasonable delay.” Vermont has a 45-day deadline, but we think in many cases notice should go out sooner. We encourage companies to send us their notification letter before it goes out to consumers, and we can help them make sure it’s in line with the statute. Also, the sample letter to consumers gets posted to our website, so consumers can confirm that the letter itself is legitimate.

The second thing is to make sure the company fixes the problems that led to the breach. Sometimes smaller businesses think it’s a one-shot deal and don’t want to change their business practices, but we remind them that they are on notice, and that the fine outlined in the Consumer Protection Act is $10,000 per violation. Now, we’ve never had to levy that fine, as most people seem to want to resolve the issues, but we want businesses to know that we are here to protect consumers and they need to take that seriously. In the TJX case, it appears that the company may have been collecting credit card information at point of sale and transmitting it, unencrypted, over unprotected wi-fi networks. This sort of blatant violation of standard security practices, and the length of time it was allowed to continue, clearly justified bringing an enforcement action. We’re not trying to trick people, and in most cases we can resolve things in a cooperative fashion, but when a company drags its feet, we will go after them.

What are some of the key weak spots that lead to a privacy/data breach incident?
It can be all over the map—certainly, not encrypting data where encryption is appropriate is one issue. Over-collecting data you don’t need, such as using SSNs as an identifier, is another. Other problems we see: collecting credit card data through a homemade system that’s not PCI-compliant when you could be using a secure third-party system; not changing passwords or updating software. In smaller businesses, it might be inattention to employees who could be stealing credit card information. In general, it’s a good practice to have an occasional forensic analysis or stress test. We have partnered with Norwich University to offer penetration testing to any small business in Vermont that wants it. The Verizon report has shown us that small businesses are the prime target of security breaches, so we are particularly sensitive to the needs of small businesses in Vermont.
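On the “not encrypting data where encryption is appropriate” point, the baseline really is low. A minimal sketch of at-rest encryption using the Python cryptography library (the library choice is ours, and a real deployment needs proper key management):

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # store the key separately from the data
    f = Fernet(key)

    token = f.encrypt(b"123-45-6789")  # an illustrative sensitive record
    print(f.decrypt(token))            # b'123-45-6789'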

What type of fines and penalties can a company face for noncompliance? Can the lack of certain actions or controls increase their culpability in your view?
I mentioned the $10,000-per-violation fine, and we consider each day you go beyond the deadline a separate violation. Our Consumer Protection Act doesn’t have an intent requirement, but we obviously take intent, negligence and lack of controls into account when we think about enforcement and penalties. A business that suffers a breach and calls us to ask what it can do, making it clear it wants to do the right thing, is very different from a business that denies anything went wrong after we’ve found out about the breach three months later. We are very cautious with our use of power and we’re not trying to bully anyone, but if we need to use a large fine to get a business into compliance, we will do so. If an enforcement action reaches a settlement agreement, called an assurance of discontinuance or consent judgment, we may seek penalties, but we will also seek injunctive relief, which is asking the business to change its behavior. For example, we may want the business to put security or compliance systems into place, offer restitution to consumers, or take other steps to make sure it doesn’t happen again. In general, we are eager to work proactively with businesses to protect consumers and create a productive, cooperative relationship in order to prevent breaches.

In summary…
I first met AAG Ryan Kriger at our NetDiligence® Cyber Risk & Privacy Liability Forum last year in Marina del Rey. I thought he might be guarded about the state’s approach to enforcement, but boy, was I wrong. He was actually very forthright in talking about how seriously Vermont takes the issue of consumer privacy, and how seriously it pursues violators of state regulation. He makes the point that his department is willing to work with organizations that suffer a data breach incident and will give them a roadmap to do the right thing by the victims (whose personal information is now in wrongful hands). What is clear is that organizations that demonstrate a lack of care (or even willful nondisclosure) will be penalized.

Ryan is also speaking at the upcoming NetDiligence® Cyber Risk & Privacy Liability Forum in Philadelphia this June 6-7.

Data Collection Liability and Trends

A Q&A with Dominique Shelton
The area of mobile app customer data collection is fraught with heightened regulatory interest. In the past 18 months, industry leaders have come together to create the Best Practices in Mobile Data Collection guidance, while the California Attorney General released “Privacy on the Go.” To get a better handle on the issue of mobile app data collection, I spoke with Dominique Shelton of Edwards Wildman Palmer, LLP in Los Angeles, CA.

Can you provide an example of a business being sued for wrongful data collection?
It is important to keep in mind that data collection has been challenged by plaintiffs and regulatory agencies when an argument can be raised that the consumers did not understand that their data was being collected. The focus of the FTC and certain regulators like the California Attorney General has been on compliance with “privacy by design.” This means giving consumers sufficient notice and the choice to make meaningful decisions about their data and how it’s used. Currently, there are over 176 class actions pending around the country, based on behavioral advertising or tracking information to create customized products or targeted advertising. One recent example is the case the California Attorney General brought against Delta Airlines for not posting a mobile privacy disclosure. We have also seen class actions filed against a major studio and search engine in December, challenging mobile apps that allegedly collect behavioral information from users under the age of 13. So in this environment it’s very important for companies to take a look at this issue.

What types of data can get businesses into trouble?
Certainly, any personally identifiable information, such as name, address, phone numbers or email, that may be collected without disclosure. And now we’re starting to see risk around behavioral data that identifies the user’s mobile device or a social networking ID associated with, for instance, their Facebook profile. Those identifiers are considered personally identifiable information by regulators. The CA AG’s recent “Privacy on the Go” report defines personally identifiable information as “any data linked to a person or persistently linked to a mobile device—data that can identify a person via personal information or a device via a unique identifier.” Also, the FTC is moving toward a definition of personal information that includes any data “reasonably linkable” to an individual, based on comments that unique identifiers can be linked to other information to identify users personally. Of course, we are also seeing financial and health data as an area of interest, and this is considered “sensitive” information as well.

Are there any states with stricter laws that increase liability?
No. California doesn’t have such a law yet. There is a Do Not Track bill pending, but it hasn’t gone through to a full vote. In the context of class actions, most have been filed under older statutes such as the Electronic Communications Privacy Act and the Computer Fraud and Abuse Act, claiming that tracking violates those statutes. Although there is no distinct Do Not Track law, it is important to know that this issue is getting greater attention from regulators, and that attention in and of itself may create a new standard. For example, “Privacy on the Go” creates a de facto new standard for the country by recommending security, encryption and protection standards for unique identifiers and behavioral data that many companies previously considered non-personally identifiable information, not subject to enforcement. The class actions that have been brought over the creation of ad profiles from users’ online behavior also act as a check on how companies should approach compliance. Finally, the SEC’s October 2011 guidance calling for disclosure of all material cyber risks by public companies is useful.

What can a business do to mitigate their risk?
First, take a look at online disclosure and mobile disclosure policies. Supplemental notice is ideal. Make sure that someone in the company is responsible for privacy compliance—it could be the chief privacy officer or someone in the legal department or product development. They should be in touch with self-regulatory groups that have promulgated guidelines to address these issues. At a minimum, the company should be in step with its peers in addition to focusing on the latest guidance materials and which activities will attract the attention of regulators. There needs to be a dialogue between the privacy group and product development and marketing so that when new products are rolled out they can vet their use of customer information and make sure they are up to date on their legal obligations. Also, conducting annual trainings on privacy practices is a good idea and recommended by the CA AG.

In summary…
Ms. Shelton has provided an excellent summary of some emerging ‘big data’ issues that affect organizations that collect and use private information without adhering to reasonable privacy principles. Dominique mentions the Delta Airlines case, which could serve as a bellwether for mobile apps that are deployed without much regard for privacy policies and principles (i.e., customer notice and consent to the use of their personal information). Delta is facing a massive California AG penalty of $2,500 per download; reportedly, Delta had a million downloads, which would put its potential exposure on the order of $2.5 billion. One might argue that what Delta did with its mobile app is common practice, but it is a practice that needs to cease immediately.

Understanding the Final HIPAA Security and Privacy Rules

A Q&A with Barbara Bennett
When they were released this past January, the final HITECH regulations amending the HIPAA Security, Privacy, Breach and Enforcement Rules expanded the scope of those rules, increased patient protections and strengthened government oversight, including applying the rules to contractors and subcontractors. So what does this mean for healthcare organizations and the service providers that work with them? I asked Barbara Bennett, a partner at Hogan Lovells, LLP in Washington, DC, to explain the finer points of the final rules.

Can you summarize the key requirements of the HITECH final rules amending HIPAA? What has changed?

  • The biggest changes are the application of much of the HIPAA privacy rule and virtually all of the security rule to “business associates,” with almost all third party contractors and subcontractors that access or maintain protected health information (PHI) now being considered business associates.
  • The second major change is to the HIPAA breach rule, which used to incorporate a “risk of harm” standard into the definition of a data breach requiring notice to affected individuals or regulators. Under that standard, an incident was not considered a breach unless it posed a significant risk of harm to the affected individuals. The final rule now provides that any unauthorized use or disclosure of PHI is presumed to be a reportable data breach unless the covered entity or business associate demonstrates that there is a low probability that the information has been compromised, based on a risk assessment that considers certain factors (see the sketch following this list).
  • Changes to the HIPAA marketing rule include further restrictions on the use of PHI for marketing. The proposed rule had included an exception allowing providers to use PHI for subsidized communications; HHS took that exception away, so all subsidized promotional communications now require an individual authorization unless they concern a drug or biologic currently prescribed.
  • There are some other, less monumental changes that include the right for an individual to restrict the sending of certain PHI to that individual’s health plan, the ability to get authorizations for research to do more than one study per authorization, and more stringent enforcement penalties.
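The breach-rule presumption in the second bullet can be sketched as logic. The four assessment factors below are paraphrased from the final rule itself (they are not enumerated in the summary above), and the pass/fail scoring is purely illustrative; the rule requires a documented, fact-specific analysis:

    # Paraphrase of the final rule's four risk-assessment factors.
    FACTORS = [
        "nature and extent of the PHI involved",
        "the unauthorized person who used or received the PHI",
        "whether the PHI was actually acquired or viewed",
        "the extent to which the risk has been mitigated",
    ]

    def presumed_reportable(low_probability_findings: dict) -> bool:
        """An incident is presumed a reportable breach unless every factor
        supports a low probability that the PHI was compromised."""
        return not all(low_probability_findings.get(f, False) for f in FACTORS)

    print(presumed_reportable({f: True for f in FACTORS}))  # False: presumption rebutted
    print(presumed_reportable({FACTORS[0]: True}))          # True: still reportable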

In looking at the business associates component, how does this change impact healthcare organizations and their vendors?
There is potential for tremendous impact. The bottom line is that a lot of companies out there know they’re business associates and have the wherewithal to implement compliance, but many organizations do not.

In the past, business associates and their subcontractors who provided services to covered entities only had contractual liability, but they now have direct liability. “Business associates” are defined by virtue of their role and not by any agreement with the covered entity. For instance, the rule makes clear that cloud and other storage providers are business associates, whether or not they have a business associate agreement. If you maintain PHI, it does not matter whether you review or access it. You are still responsible and subject to the HIPAA rules, including the breach rule. It’s a big liability to assume and a big expense to implement compliance with these requirements.

Others that could be affected include software and hardware vendors that provide doctors and hospitals with access to information through health information exchanges (even if the vendor is not actually storing the information) as well as consulting and law firms that serve hospitals, doctors and health plans and may need PHI to perform their services. It goes deeper, too. Let’s say that these firms rely on a document management company. They, too, would be business associates subject to the rules and associated liability.

Another clarification that was made, however, is that the conduit exception is retained. If you’re a courier or telephone or internet provider that solely helps transmit or transfer the data—and does not maintain it—you may be exempt from the business associate requirements. And financial institutions that are just cashing a check or performing a payment transaction are not considered business associates, either. However, if they are performing a service for the organization such as accounts receivable or a lockbox that involves PHI, then they would become a business associate.

What are the challenges going forward for small entities and business associates?
First of all, these businesses need to understand whether the law applies to them for purposes of their own risk management, which means taking a good look at their customer base. For small entities, compliance costs can be expensive and require a certain amount of technical expertise, especially with the security rule provisions. There are some changes to the requirements for business associate agreements that need to be reviewed to determine if existing agreements need to be amended—there is an extended compliance period in some cases for those agreements—and business associates are now required to have these agreements with their subcontractors.

There’s also a challenge for an organization that deals with a lot of covered entities, such as a data analytics company or IT vendor that works with multiple hospitals, in managing thousands or millions of records for different covered entities. It is very difficult to comply with different contractual requirements with respect to different sets of data and still operate efficiently and effectively. Those companies might want to develop a standard template because they can’t get that granular in terms of differing compliance requirements.

In particular, HHS said in the preamble to the rules that a business associate must comply with the minimum necessary policies and procedures of the covered entity. That appears to me to be nearly impossible. If you have one customer it may simply be a hassle but if you’re a large service provider working with 10,000 covered entities, how can you possibly comply with all of their various minimum necessary policies and procedures? And why would the covered entities want to disclose their internal policies and procedures, which have been drafted for covered entities and not business associates, anyway? But, to add insult to injury, HHS also has indicated that a violation of the minimum necessary standard (i.e., the use or disclosure of more PHI than required for a specific purpose) can itself constitute a data breach.

In general, though, business associates need to get serious about compliance before September 23, 2013, when these requirements will be enforced. That means doing a survey of activities to gauge compliance and then creating a plan to address any issues; determining when the business associate acts as an agent of the covered entity (which carries additional burdens); training workforce members; and assessing relationships with subcontractors, and even educating those subcontractors where necessary. We recommend engaging outside counsel if there is no internal expertise available to understand these regulations and how they are applied to one’s business operations.

In summary…
We think Ms. Bennett highlighted the key (pressure point) areas of the Final Rule that will drive future legal liability and enforcement actions for both Covered Entities and Business Associates. A complicating issue is how HHS/OCR interprets its own Security Rule regulations, which seems to vary from one investigator to the next. Moreover, many state Attorneys General are paying particular attention to how health information is safeguarded, and the penalties for noncompliance can be harsh. Having a knowledgeable security attorney (or Breach Coach®, as we call it) is essential for organizations in the healthcare industry and the insurance companies that underwrite their data breach liabilities.
