California’s Mobile Application Privacy Regulation

A Q&A with Ronald Raether, Jr.
In an increasingly device-dependent world, the privacy and integrity of data collected by mobile applications is becoming ever more critical. In California, the attorney general brought a case against Delta Air Lines for not warning customers that its mobile app was collecting sensitive data. I asked Ronald Raether, Jr., defense attorney and partner at Faruki Ireland & Cox in Dayton, OH, about the case and its implications for liability and regulatory exposure.

What is the California requirement regarding mobile app privacy policy?
The apparent mobile app privacy policy requirement in California grows out of the state’s Online Privacy Protection Act (OPPA), which has been in place since July 2004. Beginning in 2012, the California attorney general has actively taken the position that OPPA covers mobile apps, on the theory that they are online services offered through the internet. Since then, the attorney general has put together a joint statement of principles with a number of leading mobile app platform providers and, in January 2013, released a document called Privacy on the Go, which describes the requirements of OPPA as they relate to mobile app compliance. The issues it covers include transparency, data minimization (collecting only as much information as you need for the purpose you describe to the consumer), app functionality and accountability. Basically, it boils down to the fact that the consumer shouldn’t be surprised by what information the app collects and how it is used.

Do you think this can be an exposure that impacts many companies across the US?
Any company offering a mobile app that is used by a California consumer is subject to this regulation, and since companies can’t realistically exclude California users, it applies to everyone. Companies have already been dealing with these jurisdictional questions with regard to websites since OPPA took effect in 2004. The bigger issue is how the regulation affects startups, given their need for revenue and cost avoidance as well as the general ignorance of these obligations. Startups tend to focus exclusively on developing toward the concept, often without consideration of privacy or security. As a result, once a startup achieves some success, it could be putting all of its profits in jeopardy if it hasn’t baked in compliance from the beginning. This is even more relevant in the case of mobile apps because the lack of real estate on the smaller screen leaves less room for compliance notices. In other states I think we will see similar legislation, like that being considered in Maine, and attorneys general will scrutinize companies’ policies and conduct and bring unfair competition claims based on any inconsistencies. The FTC will likely follow suit. We’ve already seen this with the Path application, which was ordered to pay $800,000 to settle FTC charges that it didn’t live up to its privacy promises.

Can a California enforcement action lead to class action exposure?
Yes, it could lead to class action exposure, but I am not certain it will. For example, in the wake of the recent suit brought by the California attorney general against Delta, we have yet to see a private class action filed. The reason may be that no statutory damages arise from the violation, so the incentives are not there for plaintiffs’ counsel.

What can a company do to mitigate their risk here?
A company should follow the letter of the law: If it’s offering online services via the internet or via a mobile app, it needs to address the privacy policy required by the statute. We talked about startups earlier, but I will reiterate that the privacy policy needs to be there from the beginning. And companies need to keep the promises in their policies—it’s not just a matter of putting it on paper and meeting the fine print of the statute; you have to make sure that you’re actually doing what you said you would do. Privacy and security need to be built into the requirements and considered through development and beyond. The adage about being penny wise and pound foolish really applies here.

In conclusion…
To underscore the recommendations and insights of Mr. Raether, we see many companies deploying mobile apps simply as part of an internal marketing effort to appear cutting edge. Yet too often this is done with little understanding of (a) what type of personally identifiable information (PII) the app is collecting on users, (b) whether data is being leaked intentionally to third-party partners, (c) whether an embedded privacy policy is posted and followed, and (d) whether the app is secure. California always seems to be at the forefront of privacy regulation, and other state AGs tend to follow its lead. With significant enforcement fines funding the state AG offices, it will be interesting to see where this trend goes.

Understanding and Avoiding OCR Investigations

A Q&A with Lynn Sessions of Baker Hostetler
In enforcing the HIPAA/HITECH regulations, the Department of Health and Human Services’ Office for Civil Rights (OCR) has been coming down hard on healthcare organizations, with recent fines as high as US $1.7 million, and yet OCR and its investigations remain an area of mystery for many organizations. I asked Lynn Sessions, counsel at Baker Hostetler in Houston, TX, for some perspective on OCR’s process for ensuring data security and privacy compliance in healthcare.

In defending clients from an OCR inquiry, what are some of the shortcomings you often see in their data protection efforts that may draw greater scrutiny from the government?
Some of the biggest concerns are an incomplete risk assessment or one that was created several years ago and shelved, a risk management plan that’s not followed or maintained, the lack of an incident response plan, and a lack of organizational support for information security projects. We also see issues with organizational silos in communication, where the people who work in compliance and privacy may not talk to risk management or information security or IT—and all of those people should be talking. Many organizations have data encryption, but they don’t have encrypted BlackBerrys or laptops or backup tapes. This is important because with some insurance companies, if the data isn’t encrypted, you fall out of coverage. OCR has come back to some of our clients who had encryption to point out that there were still laptops that were not encrypted. On the other hand, many organizations say ‘we don’t have to worry because we encrypted,’ but they still have to document what they did with the encrypted devices, that they met safe harbor requirements, and that a risk-of-harm analysis was conducted in relation to those devices.

Explain at a high level how the OCR investigation process may work, from notice letter to enforcement penalty.
After a data breach incident, OCR will send the covered entity a letter that includes approximately 20 different requests for information. It typically starts out broadly—they’re looking to see whether or not the entity is compliant. They narrow the requests more and more as they look at documents in an increasingly detailed fashion. It’s amazing how, by going down what seems like an unrelated rabbit hole, they may find a smoking gun. We’ve worked with several clients whose investigations go back to 2011 breach incidents and are now on the third round of questions from OCR, and investigations from 2010 that are now on their sixth round of questions. We haven’t had to go to the penalty phase yet with any of our clients, but we anticipate that if OCR found a lack of compliance, the client would be assessed a penalty for the violations, and OCR and the organization could take it into settlement to come up with some kind of corrective action plan and fines. The fines are usually proportionate to the size of the organization and the violation. Still, you want to avoid this at all costs: At the point of a penalty, there is always a public announcement that gets picked up by industry publications. Once you’ve been fined, you will have OCR looking over your shoulder for the next few years. We’ve also heard that in the coming years the fines could grow, up into the eight-figure range.

Are there one or more key areas that OCR or state attorneys general have been focused upon for most breach incidents?
Mobile devices are still a very hot topic for OCR, as they are for state attorneys general. Third-party compliance is another one. We saw recently that the Massachusetts attorney general fined a healthcare organization after a third party had disposed of its records in a dumpster. That’s not necessarily a typical finding, but it’s something we’ve seen.

HIPAA/HITECH leaves some gray area with regard to its classifications of safeguard measures. Can you explain what “addressable” means versus “required,” and can HHS/OCR regulators or state attorneys general have different interpretations of the regulation when it comes to a breach incident?
What we’re hearing from OCR is that just because the rule says a safeguard is “addressable” doesn’t mean addressing it is optional. You can check the box choosing not to implement encryption because it’s not required, but what OCR is saying is that if you choose not to, you should have a risk analysis explaining why the unencrypted data is still safe. If you haven’t documented what you’re doing to protect the protected health information (PHI), then OCR can still come after you. This is important in healthcare because some devices can’t be encrypted—we’ve heard vendors say that in the case of some medical equipment, patient safety could be compromised by encryption. OCR would say that just because you haven’t encrypted it doesn’t mean you can’t have other safeguards in place. As long as you can demonstrate what you’ve done to protect the information and you’ve documented those processes, you have some protection.

In conclusion…
The issue of protecting healthcare-related data will only get more attention from regulators and plaintiffs’ lawyers in the coming years. It is advisable for any company worried about safeguarding protected health information (PHI) to be proactive and fully assess (and document) its enterprise information security posture. But companies should still expect the inevitable bad event (a breach) and have a response plan in place that includes a relationship with leading counsel, like Lynn Sessions, to guide them through the labyrinth of federal regulations (HIPAA) and state laws (which can be gray). Ideally, legal counsel should also have a working relationship with enforcement regulators to better represent the victimized company and improve the outcome of any regulatory investigation.

The Hidden Privacy and Security Risks of Apps

A Q&A with Kevin Lam of LOCKBOX LLC
Businesses in all sectors are increasingly employing mobile apps to market services and stay connected with customers. However, apps can share or leak private customer information, often unbeknownst to the organizations that offer them. A recent news story about the California attorney general slapping app developers with fines for noncompliance with privacy laws underscores the risks to both third-party app developers and businesses that deploy apps in violation of consumer privacy rights. To address this risk exposure, I spoke with Kevin Lam of LOCKBOX LLC (he can be reached at kevinlam@golockbox.com).

What are some of the essential data security issues around apps, and how are they violating the privacy rights of users?
Mobile apps are storing more and more sensitive data. Today, you can use your phone to make purchases on the go—which requires things like credit card numbers, passwords and other sensitive information. Most apps don’t clearly indicate what they do and don’t store. And many apps don’t have sufficient security controls to begin with to protect the sensitive data they are storing. That risk is further compounded by the fact that people are constantly losing their mobile devices and the sensitive data on them. Then there are privacy issues: A recent report (http://redtape.nbcnews.com/_news/2013/01/15/16530607-a-shock-in-the-dark-flashlight-app-tracks-your-location?lite&ocid=msnhp&pos=7) found that many mobile apps were tracking information about the user without their knowledge and for no apparent purpose. For instance, the report highlighted the fact that a flashlight app was actually tracking users’ locations and device IDs. What was that information being used for, why was it being collected, and how was it protected? Typically, you find out after a data breach that the information was indeed not protected.

How can bad guys (hackers) exploit an app?
There are many ways that hackers can exploit apps. Hackers can extract sensitive data from apps by attacking the underlying storage device or external memory cards. They can attack the app directly and exploit some coding vulnerability that developers overlooked. Jailbroken phones, or phones that have had certain controls placed by vendors like AT&T removed, leave backdoors into the device that make attacking apps and devices trivial. Then there’s the plain old shoulder-surfing attack: If someone is entering their password in a public area, anyone can watch them punch in a PIN to access the app. So the risks run the gamut from low-tech all the way to sophisticated programmatic exploits. Web-based attacks are especially problematic on mobile apps and devices. One of the most common attacks we see is the phishing attack, where a hacker tries to trick a user into providing their credentials to what appears to be a legitimate website. On a desktop browser, users can simply inspect the site URL, and if they don’t recognize it they close the browser and the attack fails. On a mobile app or device, however, the screen is much smaller and inspecting the URL can be difficult, making phishing attacks more likely to succeed. Attacks on apps, and data breaches resulting from them, are happening right now. Users and businesses just don’t realize it. Some do, but often after it’s too late.
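One low-cost mitigation for the small-screen phishing problem Lam describes is to have the app, rather than the user, decide which destinations may receive credentials. The Kotlin sketch below illustrates the general idea; the host names and function name are hypothetical placeholders, not code from LOCKBOX or any specific product.

```kotlin
import java.net.URI

// Hypothetical allow-list of hosts this app is permitted to send credentials to.
private val trustedLoginHosts = setOf("login.example-bank.com", "auth.example-bank.com")

/**
 * Returns true only if the URL uses HTTPS and its host exactly matches a trusted entry.
 * Centralizing the check means the user never has to squint at a truncated URL
 * on a small screen before typing a password.
 */
fun isTrustedLoginUrl(url: String): Boolean {
    val uri = try { URI(url) } catch (e: Exception) { return false }
    return uri.scheme?.lowercase() == "https" && uri.host in trustedLoginHosts
}

fun main() {
    println(isTrustedLoginUrl("https://login.example-bank.com/session"))    // true
    println(isTrustedLoginUrl("https://login.example-bank.com.evil.io/x"))  // false: look-alike host
    println(isTrustedLoginUrl("http://login.example-bank.com/session"))     // false: no TLS
}
```

The same allow-list approach can be applied wherever the app opens a login page or posts credentials, so a look-alike URL is rejected even if it would fool a human reader.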

What can a developer or business do to proactively ensure they are deploying safer apps?
Get training on how to develop secure mobile applications and on best practices. But beware: There is a lot of training available that will show you how hackers attack apps but falls short in providing pragmatic guidance on how to protect apps, which is what you really want. The best courses I’ve seen, and I’ve reviewed many, are the eLearning courses from Security Innovation Inc. (www.securityinnovation.com). They do the best job of helping developers understand the risk and build sustainable secure application development skills. Next, developers should start adopting common application security best practices and tools into their development lifecycle for apps. Many of the general security best practices and tools for desktop and server applications translate directly to mobile app development. And many of those tools are free to use, so there’s no reason not to use them. Finally, developers should realize that security is not just something you bolt on or add on. It’s a balance of technology, process and people, and security should be designed into the application from the beginning, rather than after the app has been compromised.
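To make Lam’s last point concrete, here is a minimal Kotlin sketch of one “designed-in” practice: encrypting a sensitive value with AES-GCM before it is ever written to local storage, instead of persisting it in plaintext. It uses only standard JDK crypto APIs; the key handling is deliberately simplified (a real mobile app would keep the key in the platform’s hardware-backed keystore), and the function names are hypothetical.

```kotlin
import java.security.SecureRandom
import java.util.Base64
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Sketch only: in a real app the key should live in a hardware-backed keystore,
// not be generated ad hoc in process like this.
fun newAesKey(): SecretKey = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

/** Encrypts a sensitive value with AES-GCM; the random 12-byte IV is prepended to the ciphertext. */
fun encryptForStorage(plaintext: String, key: SecretKey): String {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    val ciphertext = cipher.doFinal(plaintext.toByteArray(Charsets.UTF_8))
    return Base64.getEncoder().encodeToString(iv + ciphertext)
}

/** Reverses encryptForStorage: splits off the IV, then decrypts and authenticates. */
fun decryptFromStorage(encoded: String, key: SecretKey): String {
    val bytes = Base64.getDecoder().decode(encoded)
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, bytes.copyOfRange(0, 12)))
    return String(cipher.doFinal(bytes.copyOfRange(12, bytes.size)), Charsets.UTF_8)
}

fun main() {
    val key = newAesKey()
    val stored = encryptForStorage("4111 1111 1111 1111", key)  // never persist this value in plaintext
    println(decryptFromStorage(stored, key))
}
```

The design choice here is simply that anything worth storing is encrypted and authenticated before it touches disk, so a lost or stolen device does not automatically mean a breach.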

What are some myths about mobile app security?
The most common myth is that apps downloaded from app stores are safe. Vendors like Apple, Google and Microsoft check that apps they list follow certain programming and user interface guidelines; they don’t perform the in-depth analysis needed to determine the security of the app. When you download and use an app, make sure you can trust the source. Another myth is that apps for non-Microsoft platforms like Apple’s iOS and Google’s Android are inherently secure. This couldn’t be further from the truth—in fact, hackers are counting on this myth. Code is code, and it can be attacked whether it’s on an Apple- or Google-developed platform. Finally, the last myth is that texting is secure and private. Many celebrities, politicians and other public figures have found this out the hard way through public embarrassment and ruined careers. People, however, continue to use texting to send highly private and sensitive data without realizing the risk they are exposing themselves to.

In conclusion …
We’re seeing businesses seek more direct access to individuals via their smartphones. They may rush to market with an app that collects PII for marketing or commerce purposes (see Risks). Very often, these companies fail to have the app properly inspected prior to public release, thus exposing both the consumer and the company to additional risk. Recent news reports underscore this reality.
