Mobile App Data Security

A Q&A with Jack Walsh of ICSA Labs
With the proliferation of mobile devices, businesses from all sectors are now offering apps for consumer and employee use. However, insecure data handling, the potential for lost personal information and a lack of developer experience pose major liabilities for companies providing mobile apps. I talked to Jack Walsh, mobility programs manager of ICSA Labs, about the major security and privacy issues connected to mobile apps.

So many companies are offering mobile apps these days. What are some of the key issues that risk managers should be aware of?
One of the main problems is that apps are typically developed by a third party. In the olden days, when everyone simply used a computer, applications came from large, well-known and respected companies such as Microsoft and IBM, which followed lengthy processes and procedures to better ensure safety, security and functionality. With hundreds of thousands of apps available now, we have many smaller players developing mobile apps, and processes and procedures can differ significantly from one developer to the next. Even if you choose someone with experience, this is a relatively young field and it’s challenging to ensure that things are done properly. The second problem is that no one is able to test all of these apps—part of the trouble is that there are currently few good tools out there. The third concern is that apps need to comply with whatever regulatory guidance applies to the app-developing company’s industry, and that is going to remain a concern for many companies.

What kind of security risks are companies facing now?
An app can actually be malicious, doing something harmful it was either intentionally or inadvertently made to do. For instance, there are disreputable ad networks out there that have served up malware, and this is becoming more prevalent. Another area to watch is third-party libraries—a developer who doesn’t want to build everything from scratch might link in an existing library outside of their control, and that can get you into trouble as well. Beyond that, apps might have vulnerable code or contain dead or recycled code; they might be overpermissioned or underpermissioned.
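
On the overpermissioning point, a review can start with something as simple as enumerating the permissions an app declares and asking whether each one matches the app’s purpose. The Kotlin sketch below is illustrative only (it assumes it runs inside an Android app with a valid Context, and the function name is ours): it lists every permission a given package requests in its manifest. A flashlight app requesting SEND_SMS or READ_CONTACTS, for instance, would deserve a closer look.

```kotlin
import android.content.Context
import android.content.pm.PackageManager

// Illustrative only: list the permissions a package declares in its manifest
// so a reviewer can compare them against what the app plausibly needs.
// Throws PackageManager.NameNotFoundException if the package isn't installed.
fun requestedPermissions(context: Context, packageName: String): List<String> {
    val info = context.packageManager
        .getPackageInfo(packageName, PackageManager.GET_PERMISSIONS)
    return info.requestedPermissions?.toList() ?: emptyList()
}
```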

Is there a ‘case study’ you can offer as a lesson?
There have been issues with regard to libraries—there are people out there who intentionally create a harmless-looking library that contains hidden malware. It’s not an everyday occurrence, but we’re seeing it more. In Russia and China we’ve also seen many apps that surreptitiously send SMS messages to premium numbers, billed to the user without the user’s knowledge. This has happened multiple times recently. There are a number of studies of the top 100 free and paid Apple iOS and Google Android apps describing how those apps—many of which we all use every day—are not safeguarding users’ sensitive information.

What can a company do to mitigate its mobile app risk exposure?
Well, first and foremost, they should ask questions of their developer and find out how they’re building the app and what sort of processes and procedures they follow. Many times a developer will give what seems like an acceptable answer—“I use a secure platform for app development”—but the risk manager should dig deeper. Press them to find out what’s being done beyond that in the way of testing. Who else, if anyone, looks at the code or tests the resulting app for the developer? It’s always easier for someone else to see mistakes than for the person who wrote the code. Most importantly, does the app adequately protect sensitive information when it stores and transmits it? Does it use encryption and other storage and data-transfer safeguards? Is it possible to defeat the authentication mechanism? Can other apps get access to the data through improper use of APIs in one or more libraries that are outside the direct control of the app developer? There are best practices out there to avoid these problems. Note, too, that testing should not be a one-time thing. Apps need to be tested throughout their life cycle; any time the code is upgraded or the operating system gets an update, the app should be looked at again. There’s no such thing as too much testing.
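
To make the storage question concrete, the sketch below shows one common safeguard of the kind Walsh alludes to: encrypting sensitive data with AES-GCM (an authenticated mode that protects both confidentiality and integrity) before it is written to disk. It is a minimal illustration using only the standard javax.crypto APIs, with names of our own choosing; in a production mobile app, the key should be generated and held in a hardware-backed keystore such as the Android Keystore, never hard-coded or kept loose in memory.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Encrypt plaintext with AES-GCM; returns the random nonce plus ciphertext.
// A fresh 96-bit nonce per message is required for GCM to stay secure.
fun encrypt(key: SecretKey, plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    return iv to cipher.doFinal(plaintext)
}

// Decrypt and verify; doFinal throws if the ciphertext was tampered with.
fun decrypt(key: SecretKey, iv: ByteArray, ciphertext: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    return cipher.doFinal(ciphertext)
}

fun main() {
    // Demo only: a real app would obtain this key from a hardware keystore.
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val (iv, ct) = encrypt(key, "SSN: 000-00-0000".toByteArray())
    println(String(decrypt(key, iv, ct)))
}
```

On the transmission side, the analogous questions are whether every connection uses TLS and whether the app validates (or pins) the server certificates it expects.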

In summary…
Mr. Walsh, who tests mobile apps for security and safety, raises an important red flag here. All types of businesses and organizations, both tech and non-tech, are deploying newer technologies that can drastically increase their cyber liability risk exposure. Just the other day I saw that even my local hoagie shop was offering its own mobile app—God only knows where that user data is going! 

Privacy matters. If your mobile app doesn’t have a privacy policy that is transparent about the kinds of data being collected, stored and shared with third parties, your company can land in major legal trouble. “Wrongful data collection,” or running afoul of your own privacy policy, is one of the fastest growing areas of litigation in the cyber risk arena. See, for example, the state of California suing Delta Air Lines over its mobile app, pursuing fines of $2,500 per download. Not having a privacy policy in this day and age is simply reckless.

Note: eRisk Hub® users, please see our free mobile app privacy policy template by Ron Raether, Esq. of Faruki Ireland & Cox P.L.L. in the Tools section.

Data Breach Liability from a Class Action Trial Lawyer’s Standpoint

A Q&A with Jay Edelson of Edelson LLC
With court attitudes around privacy issues constantly evolving, it can be a challenge to understand what constitutes a significant data breach case and the consequences liable organizations face. I asked counsel Jay Edelson about how he chooses his class action cases and how the current legal climate is treating them.

What are some traits or hallmarks that you look for when determining whether a data breach case might be ripe for a class action proceeding?
The first thing we look for is the degree of sensitivity of the information that was left unguarded. We are more likely to choose something that seems like a serious breach, such as cases involving health records or the private information of children. For a suit to be successful, it needs to connect with the judge and jury on an emotional level—in short, they must be convinced that the loss of information truly matters to people. The reason this is such an important hurdle to clear is that data breach litigation is an emerging area of the law. Courts don’t have the extensive precedent to look to, as in other consumer cases. If we can’t sell the case on an emotional level, it will be significantly harder to get courts to be receptive to our broader arguments. Next, we look at why or how the breach occurred. If it was something we think was preventable—for example, misplaced laptops that didn’t have basic encryption, as opposed to, say, a sophisticated hack from Eastern Europe—then we are more likely to take it on. The key point, which many plaintiffs’ attorneys unfortunately aren’t attuned to, is that the mere fact that a breach occurred does not automatically mean the company acted negligently. Hackers and thieves are increasingly sophisticated, and there are times when it would be unreasonable for a company to have done more to guard its consumer data. Most data breach cases tend to be large, so the size of the class doesn’t tend to be a determining factor, though of course the larger the number of people involved, the easier it is to justify putting more resources into the case.

In the past, it seems many plaintiffs/victims were only offered a year of free credit monitoring, which, arguably, is of little value to them. What are some additional settlement remedies your team (or your peers) are now pursuing to compensate victims?
Prior to a few years ago, the law was fairly settled, and the thinking was that the fact that your information was out there wasn’t in and of itself enough to harm you—you’d have to show something more to the court to demonstrate harm. But that has changed recently with decisions such as Resnick v. AvMed from the United States Court of Appeals for the Eleventh Circuit. In AvMed, the court recognized that if people are paying the defendant money with a reasonable expectation that part of that money will go toward protecting their information, they have essentially overpaid for their goods or services if the company was not following through on its promises. Due to cases like this, we’re starting to see settlements move away from the “free credit monitoring” deals toward monetary compensation.

What regulations do you call on to bring a case, and what is a typical negligence claim you’ve made?
We don’t look to regulations so much as the common law. We’re looking at the types of express or implied promises made to consumers. In terms of negligence, our theories are pretty simple: we bring cases when we believe that the defendant wasn’t following industry security standards.

Which security shortcomings of breached organizations drive you nuts?
I think that corporations are not really asking the right questions internally about where their data is stored and how it’s being protected. Sometimes they’ll hire an outside consultant and think that because they’re paying someone money, they’re being responsible. The problem is that companies aren’t thinking through these issues in a truly robust way. They’re often not asking the basic questions: Who has access to our data? Where is it being stored? What do we do with the laptops we take out of circulation?

What steps would you recommend to limit exposure in a class action lawsuit?
Well, as a plaintiff’s attorney, I’m not generally in the business of giving advice to the defendant. But the way to limit exposure is to have really terrific protections so that the data isn’t hacked or stolen or lost. Protect—don’t harm—your client.

Which courts are the most sympathetic to these issues? 
A few years ago I would have said there were none; few if any courts were receptive to privacy cases. But there’s been a huge shift in the landscape, partly due to the increased sensitivity of the public. Issues such as the government surveillance revelations have changed the view of the judiciary as well, and we are now seeing great decisions coming from all over the country—in Chicago, where I’m located; in California; in Florida. At this point, I’d say there’s not a location in the United States where I’d be hesitant to bring a case.

In summary…
We invited Mr. Edelson to speak at our Marina del Rey Cyber Liability conference (attended by a majority of the P&C insurers in the industry that offer cyber/privacy liability coverage), and he and his colleague were both very forthcoming and effective in educating the audience about the plaintiff’s (victim’s) perspective—something that often gets lost in the quantum of cyber risk. Hopefully, risk managers are paying attention to these emerging theories of liability from the front lines of class action litigation. As a final comment, I’ll add that Jay and his fellow panelist received some of the highest praise from the hundreds of attendees, which is especially remarkable when you consider that some of those same audience members could be his future adversaries.

See this session recording on the eRisk Hub.
