Photo of Brian Lam

Brian is an Associate in the firm’s San Diego office. His practice focuses on corporate law matters. He has extensive experience in patent litigation and intellectual property matters, as well as privacy and data protection matters, particularly as to data aggregation, network security, and technology transactions. He is a Certified Information Privacy Professional (US Specialization) and a Certified Information Systems Security Professional (CISSP), endorsement pending.

“Uber failed consumers in two key ways: first by misrepresenting the extent to which it monitored its employees’ access to personal information about users and drivers, and second by misrepresenting that it took reasonable steps to secure that data…. This case shows that, even if you’re a fast-growing company, you can’t leave consumers behind: you must honor your privacy and security promises.”

–Acting Federal Trade Commission Chair Maureen K. Ohlhausen, In the Matter of Uber Technologies, Inc., Consent Order

To read more about this important FTC Consent Order and its implications for all companies with respect to privacy policies and the promises made to users/consumers, check out this Mintz Levin Privacy Alert.

Recently, the Electronic Privacy Information Center (“EPIC”) asked the FTC to begin an investigation into a Google program called “Store Sales Management.”  The purpose of Store Sales Management is to allow the matching of goods purchased in physical brick-and-mortar stores to clicks on online ads, or as we refer to the practice, “Bricks to Clicks.”

The significance of this is immense.  No longer will advertisers have to wonder how much revenue can be tied to a specific campaign; instead, Store Sales Management will give them insight into whether actual consumers who viewed advertisements went on to purchase certain products.  Continue Reading FTC Asked to Investigate Google’s Matching of “Bricks to Clicks”

The Internet of Things (“IoT”) can be thought of as a group of different devices that can communicate with each other, perhaps over a network such as the internet. We have written extensively about many of the privacy challenges that IoT devices can create. Recently, the Federal Trade Commission (“FTC”) made clear that its Children’s Online Privacy Protection Rule (the “COPPA Rule”) would continue to be applicable to new business models, including “the growing list of connected devices that make up the Internet of Things. That includes connected toys and other products intended for children that collect personal information, like voice recordings or geolocation data.”

To assist companies in complying with their COPPA obligations, the FTC has released an updated Six Step Compliance Plan. These steps are:

Step 1: Determine if Your Company is a Website or Online Service that Collects Personal Information from Kids Under 13.

Step 2: Post a Privacy Policy that Complies with COPPA.

Step 3: Notify Parents Directly Before Collecting Personal Information from Their Kids.

Step 4: Get Parents’ Verifiable Consent Before Collecting Personal Information from Their Kids.

Step 5: Honor Parents’ Ongoing Rights with Respect to Personal Information Collected from Their Kids.

Step 6: Implement Reasonable Procedures to Protect the Security of Kids’ Personal Information.

Chart: Limited Exceptions to COPPA’s Verifiable Parental Consent Requirement

Notably, per Step 1, the FTC has made it clear that COPPA defines “Website or Online Service” broadly, to include “mobile apps that send or receive information online (like network-connected games, social networking apps, or apps that deliver behaviorally-targeted ads), internet-enabled gaming platforms, plug-ins, advertising networks, internet-enabled location-based services, voice-over internet protocol services, connected toys or other Internet of Things devices.” A key takeaway for companies everywhere is that, if your service collects personal information from kids under 13, it is unlikely that the FTC will be swayed by an argument that your service is not subject to the COPPA Rule. Instead, entities would be wise either to limit their data collection activities such that personal information is not collected, or to take the time to understand and comply with their COPPA obligations from the outset.

If your IoT device or app does collect personal information from kids under 13, “verifiable parental consent” is the most important compliance concept, and also the trickiest to implement. There are exceptions to the “verifiable parental consent” requirement in the COPPA Rule, but those exceptions are limited, and reliance on any of them should come only after careful consideration of your collection practices and the COPPA Rule.
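
To make the concept concrete, below is a minimal Python sketch of what consent-gated collection might look like: personal information is stripped from anything collected until a parent’s consent has been verified. The field names and the ConsentStore interface are our own illustrative assumptions, not anything prescribed by the COPPA Rule or the FTC.

```python
# Hypothetical sketch only: field names and the ConsentStore interface are
# illustrative assumptions, not drawn from FTC guidance.

PERSONAL_INFO_FIELDS = {"name", "email", "voice_recording", "geolocation"}

class ConsentStore:
    """Illustrative record of parental consents already verified."""

    def __init__(self) -> None:
        self._verified: set[str] = set()

    def record_verified_consent(self, child_id: str) -> None:
        self._verified.add(child_id)

    def has_verified_consent(self, child_id: str) -> bool:
        return child_id in self._verified

def collect(child_id: str, payload: dict, consents: ConsentStore) -> dict:
    """Drop personal information unless verifiable parental consent exists."""
    if consents.has_verified_consent(child_id):
        return payload
    # No verified consent yet: retain only non-personal fields.
    return {k: v for k, v in payload.items() if k not in PERSONAL_INFO_FIELDS}

consents = ConsentStore()
print(collect("child-1", {"name": "Ada", "high_score": 42}, consents))
# {'high_score': 42}  -- personal fields dropped pending consent
```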

Similarly, the FBI has warned consumers that Internet-connected toys present privacy concerns for children. Companies may wish to pay particular attention to the FBI’s recommendations for consumers, as many of them involve the consumer researching whether the company has used basic measures to protect the privacy of children that use these toys, including using authentication and encryption as well as providing for security patches at the device level. Companies may wish to consider whether these suggestions could form part of the basis for a reasonable standard of care, and whether, given their IoT device’s “use case,” a failure to support one or more of these measures could subject them to additional liability.

If you have any questions regarding COPPA compliance, please do not hesitate to contact the team at Mintz Levin.

Recently, the United States Computer Emergency Readiness Team (US-CERT), an organization within the Department of Homeland Security’s (DHS) National Protection and Programs Directorate (NPPD) and a branch of the Office of Cybersecurity and Communications’ (CS&C) National Cybersecurity and Communications Integration Center (NCCIC), encouraged users and administrators to review a recent article from the Federal Bureau of Investigation (FBI), “Building a Digital Defense with an Email Fortress.”

As we have discussed in many posts before, phishing — the fraudulent practice of sending emails purporting to be from a reputable entity to induce an individual to reveal privileged information such as a password — remains a major security threat.  In the article, the FBI provides several helpful actions that businesses can take to reduce their risk of being phished, including reporting and deleting suspicious emails and making sure that countermeasures such as firewalls, antivirus software, and spam filters are robust and up to date.
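
As one concrete illustration of such countermeasures, the short Python sketch below flags inbound messages whose SPF and DKIM authentication results did not pass, using only the standard library’s email module. The sample message and header layout are illustrative assumptions (mail providers format Authentication-Results headers in varying ways), and this is a toy filter, not the FBI’s guidance.

```python
# A minimal sketch of one programmatic anti-phishing signal: treat missing
# or failed SPF/DKIM results as suspicious. Sample message is illustrative.
from email import message_from_string

RAW_MESSAGE = """\
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=attacker.example; dkim=none
From: "IT Support" <support@attacker.example>
Subject: Password reset required

Click here to keep your account active.
"""

def looks_suspicious(raw: str) -> bool:
    msg = message_from_string(raw)
    results = (msg.get("Authentication-Results") or "").lower()
    # Missing or failed SPF/DKIM authentication is a phishing signal.
    return "spf=pass" not in results or "dkim=pass" not in results

if __name__ == "__main__":
    if looks_suspicious(RAW_MESSAGE):
        print("Flag for review: authentication checks did not pass.")
```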

We encourage each of our readers to review the FBI’s guidance and consider whether their organization could benefit from any of the methods of protection provided.

Companies with any questions regarding any of these issues should not hesitate to contact the team at Mintz Levin.

Recently, a Google researcher discovered a serious flaw in the content delivery network (CDN) provided by Cloudflare.  This vulnerability has become known as Cloudbleed, in a nod to the earlier Heartbleed SSL vulnerability.  The Cloudflare CDN allows users of the service to have their content stored at Cloudflare network Points of Presence (PoPs) rather than on a single origin server.  This reduces the amount of time it takes to serve websites in disparate geographical locations.  The service is popular, with Cloudflare having over five million customers, including Uber, OkCupid, and Fitbit.

The Cloudbleed vulnerability involved sensitive data being inadvertently displayed, or “leaked,” when visiting a website that used certain Cloudflare functionality.  Cloudflare has estimated that the leak was triggered 1,242,071 times between September 22, 2016 and February 18, 2017.  Search engines such as Bing, Yahoo, Baidu, and Google also cached the leaked data.  The researcher who discovered the leak found all sorts of sensitive data being exposed, including private messages from major dating sites, full messages from a well-known chat service, online password manager data, hotel bookings, passwords, and keys.

The Cloudbleed vulnerability is a reminder that companies that leverage external vendors to receive, process, store, or transfer sensitive data must find ways to reduce the risk created by the relationship to an acceptable level.  Here are three steps that companies should consider taking to accomplish this.

First, companies should understand how external vendors will interact with their data flows.  Companies that leverage Cloudflare services have given it access to sensitive data, including private messages, passwords, and keys.  The risks of providing this data to external vendors cannot be understood if the company itself does not understand, at a senior organizational level, what is being transferred.  Ask questions about the proposed procurement of vendor-provided services to understand what interaction the service or vendor has with your data.
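
As a starting point for that inventory, a hypothetical sketch follows: it checks which of your public hostnames appear to be served through Cloudflare by looking for the CF-Ray response header that Cloudflare adds to responses it fronts. The hostnames are placeholders; a real review would also cover DNS records, contracts, and internal data-flow documentation.

```python
# Hypothetical inventory step: which of our hostnames sit behind Cloudflare?
# Hostnames below are placeholders, not real assets.
import urllib.request

HOSTS = ["https://www.example.com", "https://api.example.com"]

def served_via_cloudflare(url: str) -> bool:
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        # Cloudflare-fronted responses typically carry a CF-Ray header.
        return resp.headers.get("CF-Ray") is not None

for host in HOSTS:
    try:
        label = "Cloudflare" if served_via_cloudflare(host) else "not Cloudflare"
        print(host, "->", label)
    except OSError as err:
        print(host, "-> unreachable:", err)
```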

Second, companies should make sure that they have permission to transfer user data to third parties, based on the existing terms of use and privacy policy documents that the relevant data subjects have agreed to.  Generally speaking, the company collecting the data from the data subject will remain responsible for any issues that occur downstream, including loss or breach of the data through a third-party vendor relationship.

Third, companies should carefully negotiate their vendor contracts in light of their own risk tolerance.  The contract should contemplate the data at issue, including by type and category, such as private messages and passwords, and should, to the extent feasible, shift all risk of a breach on the vendor side to the vendor.  In many cases, it will be appropriate to require that the vendor carry insurance sufficient to satisfy its obligations under the agreement, including data breach remediation should it become necessary.

Companies with any questions regarding this process should not hesitate to contact the Privacy and Security team at Mintz Levin.

Five Things You (and Your M&A Diligence Team) Should Know

Recently it was announced that, pending Yahoo shareholder approval, Verizon would pay $350 million less for Yahoo than it had previously been prepared to pay, as a result of data breaches that affected over 1.5 billion users. Verizon Chief Executive Lowell McAdam led the negotiations for the price reduction.  Yahoo took two years, until September of 2016, while Verizon Communications was in the process of acquiring it, to disclose a 2014 data breach that Yahoo has said affected at least 500 million users.  In December of 2016, Yahoo further disclosed that it had recently discovered a breach of around 1 billion Yahoo user accounts that likely took place in 2013.

While some may think that the $350 million price reduction has effectively settled the matter, unfortunately, this is far from the case. These data breaches will likely continue to cost both Verizon and Yahoo for years to come.  Merger and acquisition events that are complicated by pre-existing data breaches will likely face at least four categories of ongoing liabilities.  The cost of each will be difficult to estimate during the deal process, even if the breach event is disclosed during initial diligence.

Continue Reading Data Breaches Will Cost Yahoo and Verizon Long After Sale

The Securities and Exchange Commission (SEC) is investigating whether Yahoo! should have reported the two massive data breaches it experienced to investors sooner, according to individuals with knowledge of the matter.  The SEC will probably question Yahoo as to why it took two years, until September of 2016, to disclose a 2014 data breach that Yahoo has said affected at least 500 million users.  The September 2016 disclosure came to light while Verizon Communications was in the process of acquiring Yahoo.  As of now, Yahoo has not publicly confirmed the reason for the two-year gap.  In December of 2016, Yahoo also disclosed that it had recently discovered a breach of around 1 billion Yahoo user accounts.  As Yahoo appears to have disclosed that breach close in time to its discovery, commentators believe the SEC will be less concerned with it.

After a company discovers that it has experienced an adverse cyber incident, it faces a potentially Faustian choice: attempt to remediate the issue quietly and avoid reputational harm, or disclose it publicly in a way that complies with SEC guidance, knowing that public knowledge could reduce public confidence in the company’s business and could even prove to be the impetus for additional litigation.

Part of the issue may be that, while the SEC has various mechanisms to compel publicly traded companies to disclose relevant adverse cyber events, including its 2011 guidance, exactly what companies are required to disclose, and when, has been seen as vague.  Commentators have argued that companies may have a legitimate interest in delaying disclosure of significant adverse cyber incidents to give law enforcement and cybersecurity personnel a chance to investigate, and that disclosing too soon would hamper those efforts, putting affected individuals at greater risk.

Even so, many see the two-year gap between Yahoo’s 2014 breach and its September 2016 disclosure as a potential vehicle for the SEC to clarify its guidance, given the unusually long time period and large number of compromised accounts. As a result of its investigation, the SEC could release further direction for companies as to what constitutes a justifiable reason for delaying disclosure, as well as acceptable periods of delay.  As cybersecurity is one of the SEC’s 2017 Examination Priorities, at a minimum, companies should expect the SEC to increase enforcement of its existing cybersecurity guidance and corresponding mechanisms.  Whatever the SEC decides during its investigation of Yahoo, implementing a comprehensive Cybersecurity Risk Management program will help keep companies out of this quagmire to begin with.

If you have any questions regarding compliance with SEC cyber incident guidance, please do not hesitate to contact the team at Mintz Levin.

Developers and operators of educational technology services should take note.  Just before the election, California Attorney General Kamala Harris released guidance for those providing education technology (“Ed Tech”).  “Recommendations for the Ed Tech Industry to Protect the Privacy of Student Data” provides practical direction that operators of websites and online services used for K-12 purposes can use to implement best practices for their business models.

Ed Tech, per the Recommendations, comes in three categories: (1) administrative management systems and tools, such as cloud services that store student data; (2) instructional support, including testing and assessment; and (3) content, including curriculum and resources such as websites and mobile apps.  The Recommendations recognize the important role that educational technology plays in classrooms, citing the Software & Information Industry Association’s estimate that the U.S. market for preK-12 Ed Tech was $8.38 billion in 2015.

The data that may be gathered through Ed Tech systems and services can be extremely sensitive, including medical histories, social and emotional assessments, and test results.  At the federal level, the Family Educational Rights and Privacy Act (FERPA) and the Children’s Online Privacy Protection Rule (COPPA) govern the use of student data.  However, according to the Recommendations, these laws “are widely viewed as having been significantly outdated by new technology.”

Recognizing this, California has enacted laws in this space to fill in gaps in the protection.  Cal. Ed. Code § 49073.1 requires local education agencies (county offices of education, school districts, and charter schools) that contract with third parties for systems or services that manage, access, or use pupil records to include specific provisions regarding the use, ownership, and control of pupil records. On the private side, the Student Online Personal Information Privacy Act (SOPIPA) requires Ed Tech providers to comply with baseline privacy and security protections.

Building on this backdrop of legislation, Attorney General Harris’ office provided six recommendations for Ed Tech providers, especially those that provide services in the pre-kindergarten to twelfth grade space.

  • Data Collection and Retention: Minimization is the Goal 

Describe the data being collected and the methods being used, while understanding that data can be thought of as including everything from behavioral data to persistent identifiers.  If your service links to another service, disclose this in your privacy policy and provide a link to the privacy policy of the external service.  If you operate the external service, maintain the same privacy and security protections for the external service that users enjoyed with the original service.  Minimize the data collected to only what is necessary to provide the service, retain the data for only as long as necessary, and be able to delete personally identifiable information upon request.

  • Data Use: Keep it Educational

Describe the purposes for which you are collecting data.  Do not use any personally identifiable data, including persistent identifiers, for targeted advertising, whether within the original service or any other service.  Do not create profiles other than those necessary for the school purposes your service was intended for.  If you use collected data for product improvement, aggregate or de-identify the data first; a brief sketch of this step follows this list.

  • Data Disclosure: Make Protections Stick 

Specifically describe any third parties with whom you share personally identifiable data. If disclosing for school purposes, do so only to further the school-specific purpose of your site.  If disclosing for research purposes, only disclose personally identifiable information if you are required to by federal or state law, or if allowed under federal and state law and the disclosure is under the direction of a school, district, or state education department.  Service providers should be contractually required to use any personally identifiable data only for the contracted service, not disclose the information, take reasonable security measures, delete the information when the contract is completed, and notify you of any unauthorized disclosure or breach.  Do not sell any collected information, except as part of a merger or acquisition.

  • Individual Control: Respect Users’ Rights 

Describe procedures for parents, legal guardians, and eligible students to access, review and correct personally identifiable data.  Provide procedures for students to transfer content they create to another service, and describe these procedures in your privacy policy.

  • Data Security: Implement Reasonable and Appropriate Safeguards

Provide a description of the reasonable and appropriate security you use, including technical, administrative and physical safeguards, to protect student information.  Describe your process for data breach notification.  Provide training for your employees regarding your policies and procedures and employee obligations.

  • Transparency: Provide a Meaningful Privacy Policy

Make available a privacy policy, using a descriptive title such as Privacy Policy, in a conspicuous manner that covers all student information, including personally identifiable information.  The policy should be easy for parents and educators to understand.  Consider getting feedback regarding your actual privacy policy, including from parents and students.  Include an effective date on the policy and describe how you will provide notice to the account holder, such as a school, parent, or eligible student.  Include a contact method in the policy, at a minimum an email address, and ideally also a toll-free number.
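
To illustrate the “aggregate or de-identify first” recommendation under Data Use, here is a minimal Python sketch: direct identifiers are dropped from student records before quiz scores are aggregated by grade level. The field names are our own assumptions, and note that simply dropping fields is not always sufficient de-identification; re-identification risk should be assessed separately.

```python
# Hypothetical sketch: de-identify, then aggregate, before product-improvement
# analysis. Records and field names are illustrative only.
from collections import defaultdict
from statistics import mean

records = [
    {"student_id": "s-001", "name": "A. Doe", "grade": 4, "quiz_score": 88},
    {"student_id": "s-002", "name": "B. Roe", "grade": 4, "quiz_score": 92},
    {"student_id": "s-003", "name": "C. Poe", "grade": 5, "quiz_score": 75},
]

DIRECT_IDENTIFIERS = {"student_id", "name"}

def de_identify(record: dict) -> dict:
    """Drop direct identifiers from a single record."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

def aggregate_by_grade(rows: list) -> dict:
    """Report only grade-level averages, never per-student values."""
    by_grade = defaultdict(list)
    for row in rows:
        by_grade[row["grade"]].append(row["quiz_score"])
    return {grade: mean(scores) for grade, scores in by_grade.items()}

print(aggregate_by_grade([de_identify(r) for r in records]))
# {4: 90, 5: 75}
```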

Given the size of the California market, any guidance issued by the California Attorney General’s office should be carefully considered and reviewed.  If you are growing an Ed Tech company, this is the time to build in data privacy and security controls.  If you are established, it’s time to review your privacy practices against this guidance and see how you match up.  If you have any questions or concerns as to how these recommendations could apply to your company, please do not hesitate to contact the team at Mintz Levin.

Over the last week, details have become available to explain how an attack against a well-known domain name service (DNS) provider occurred.  What about the potential legal risks?  We will attempt to provide insights into mitigating the legal risks for the various companies involved, including the companies that may have unwittingly provided the mechanism through which the attacks were conducted.

The Mechanics of The Recent Distributed Denial of Service Attacks 

Recently, Dyn, a Manchester, New Hampshire-based provider of domain name services, experienced service outages as a result of what appeared to be a well-coordinated attack.  Dyn provides the domain name services used to direct users to a website after they type in a human-readable domain name, for example, google.com.  On October 21, 2016, many websites, including Twitter, Netflix, Spotify, Airbnb, Reddit, Etsy, SoundCloud, and The New York Times, were reported inaccessible by users.  Dyn was attacked using a vector that is often referred to as a Distributed Denial of Service (DDoS) attack. A DDoS attack essentially involves sending a resource, such as a publicly facing website, so many communication requests at one time that service is denied to legitimate would-be users of the resource.
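
For readers unfamiliar with the mechanics, the tiny Python snippet below shows the translation step a DNS provider such as Dyn performs at scale: turning a human-readable hostname into the IP address a browser actually connects to. (This uses the operating system’s default resolver; Dyn’s role is on the authoritative side of the same lookup.)

```python
# Illustration of a DNS lookup: hostname in, IP address out.
import socket

for hostname in ("google.com", "example.com"):
    print(hostname, "->", socket.gethostbyname(hostname))
```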

The term “distributed” comes from the way in which the attack is usually conducted.  An attacker does not usually possess a single resource with the bandwidth or communication “pipe” necessary to overwhelm a provider such as Dyn.  Instead, the attacker creates a network of smaller resources, distributed throughout a network such as the Internet, and directs that network of devices to attack the chosen target.  In the recent attack, the perpetrators appear to have used, at least in part, a network of consumer devices from the Internet of Things (IoT), a term used to describe so-called “smart” devices that can communicate with each other.  Attackers exploited an open vector within these devices such that they were able to control them and use them as part of a DDoS attack network to direct unwanted traffic to Dyn.
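
To see why the “distributed” part matters, consider the hypothetical per-source throttle sketched below: a single aggressive client is easily capped, but many compromised devices that each stay under the limit will collectively get their traffic through. The parameters and addresses are illustrative only.

```python
# Hypothetical per-source token-bucket throttle. One chatty client is easy
# to cap, but a distributed botnet defeats naive per-IP limits.
import time
from collections import defaultdict

RATE = 5.0    # tokens added per second, per source
BURST = 10.0  # maximum bucket size

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(source_ip: str) -> bool:
    bucket = _buckets[source_ip]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False  # throttle this source

# One aggressive source is throttled quickly...
print(sum(allow_request("198.51.100.7") for _ in range(100)))   # ~10 allowed
# ...but 100 distinct bots sending one request each all get through.
print(sum(allow_request(f"203.0.113.{i}") for i in range(100)))  # 100 allowed
```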

Identification of Cyber Security Attack Risk 

A given cyber security attack will have different effects on the ability of an entity to function based on the aspects of the infrastructure being targeted.  Identifying cyber security risk involves two parts.  First, the entity needs to understand how the various components that make up its information technology infrastructure function in relation to each other to provide services to the entity itself and other external actors.  Second, an evaluation of the exposed aspects of the components needs to be conducted, keeping in mind how the components function as a whole.

For example, with Dyn, a certain portion of the architecture that played a role in providing domain name services was likely exposed in a publicly facing manner.  A known risk of such public-facing exposure is a DDoS attack.

The devices that were harnessed to provide the malicious DDoS traffic appear to have contained components that were publicly addressable via an identified mechanism through the Internet.  Furthermore, the devices were susceptible to accepting malicious instructions causing undesired operation, in this case their unwitting use as part of a botnet for a DDoS attack on Dyn.

For the various websites affected, including Twitter, Netflix, Spotify, Airbnb, Reddit, Etsy, SoundCloud, and The New York Times, the components of their information architecture that dealt with processing DNS information were most likely rendered unable to function, probably at least in part because their DNS provider ceased to operate.

Proactive Mitigation of Cyber Security Risk 

Effective mitigation of cyber security risk involves understanding how the obligations of the entity to others, such as its customers, as well as the obligations of those that provide services to the entity, interact with the cyber security risks identified via the previous section’s methods.  This process is greatly facilitated by experienced counsel who have dealt with these issues before.

For example, Dyn faced a risk of being unable to provide effective DNS services to its customers, which, if identified in advance, could have been accounted for via a provision in the Service Level Agreement (SLA) terms of the relevant agreement.  Upon agreeing to these terms, potential customers could either accept the business risk of downtime, perhaps mitigating the risk via insurance, or seek a suitable agreement with another vendor that would provide a failover mechanism should the primary vendor, here Dyn, become unavailable.
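
A hypothetical sketch of such a failover mechanism, at the lookup level, appears below: queries go to a primary DNS provider and fall back to a secondary if the primary is unreachable. It assumes the third-party dnspython library (version 2.x), and the public resolver addresses stand in for two contracted vendors; an actual multi-vendor arrangement would involve secondary authoritative DNS, which this resolution-side sketch only approximates.

```python
# Hypothetical DNS failover sketch using dnspython (pip install dnspython).
# The resolver IPs are public resolvers standing in for two vendors.
import dns.exception
import dns.resolver

PRIMARY, SECONDARY = "8.8.8.8", "1.1.1.1"

def resolve_with_failover(hostname: str) -> list:
    for nameserver in (PRIMARY, SECONDARY):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        resolver.lifetime = 3.0  # give up quickly so failover is fast
        try:
            return [r.to_text() for r in resolver.resolve(hostname, "A")]
        except dns.exception.DNSException:
            continue  # this provider failed; try the next one
    raise RuntimeError(f"All DNS providers failed for {hostname}")

print(resolve_with_failover("example.com"))
```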

Companies with other business models, such as those that sold the Internet of Things devices harnessed as part of the DDoS attack against Dyn, face their own risks, including complying with regulations and using ordinary care in the creation, testing, and selling of these devices.  In some situations, it may be possible for such device manufacturers to transfer the risk to their customers via a contractual provision.  In many cases, insurance is likely to also play a major risk mitigation role.  Future litigation will likely give us greater insight into the standard of care such device manufacturers owe their customers as well as third parties.

Imagine you are the CEO of a company sitting across from an interviewer. The interviewer asks you the age-old question, “So tell me about your company’s strengths and weaknesses.”  You start thinking about the competitive advantages that distinguish you from competitors.  You decide to talk about how you know your customers better than the competition does, including who they are, what they need, and how your products and services fit their needs and desires.  The interviewer, being somewhat cynical, asks, “Aren’t you worried about the liabilities involved with collecting all that data?”

In honor of National Cyber Security Awareness Month, we at Mintz Levin wanted to take the chance to remind our readers of data’s value as an asset and the associated liabilities that stem from its collection and use, as well as provide guidelines for maximizing its value and minimizing its liabilities.  Continue Reading 3 Guidelines to Maximize Value of Data