Recently, a Google researcher discovered a serious flaw in the content delivery network (CDN) provided by Cloudflare.  This vulnerability has now become known as Cloudbleed, in a nod to the earlier Heartbleed SSL vulnerability.  The Cloudflare CDN allows users of the service to have their content stored at Cloudflare network Points of Presence (PoPs) rather than a single origin server.  This reduces the amount of time it takes to serve websites in disparate geographical locations.  The service is popular, with Cloudflare having over five million customers, including Uber, OkCupid, and Fitbit.

The Cloudbleed vulnerability involved a situation where sensitive data was inadvertently displayed or “leaked” when visiting a website that used certain Cloudflare functionality.  Cloudflare has estimated that the leak occurred 1,242,071 times between September 22nd and February 18th.  Search engines such as Bing, Yahoo, Baidu and Google also cached the leaked data.  The researcher who discovered the leak found all sorts of sensitive data being leaked, including private messages from major dating sites, full messages from a well-known chat service, online password manager data, hotel bookings, passwords, and keys.

The Cloudbleed vulnerability is a reminder that companies that leverage external vendors to receive, process, store, or transfer sensitive data must find ways to reduce the risk created by the relationship to an acceptable level.  We have three steps that companies should consider taking to accomplish this.  

First, companies should understand how external vendors will interact with their data flows.  Companies that leverage Cloudflare services have given it access to sensitive data, including private messages, passwords, and keys.  The risks of providing this data to external vendors cannot be understood if the company itself does not understand at a senior organizational level what is being transferred.  Ask questions about the proposed procurement of vendor-provided services to understand what interaction the service/vendor has with your data.

Second, companies should make sure that they have permission to transfer user data to third parties, based on their existing terms of use and privacy policy documents that the relevant data subjects have agreed to.  Generally speaking, the company collecting the data from the data subject will remain responsible for any issues that occur downstream, including loss or breach of the data through a third party vendor relationship.

Third, companies should carefully negotiate their vendor contracts in light of their own risk tolerance.  The contract should contemplate the data at issue, including by type and category, such as private messages and passwords, and should, to the extent feasible, transfer all risk of a breach on the vendor side to the vendor.  In many cases, it will be appropriate to require that the vendor carry insurance to satisfy its obligations under the agreement, including data breach remediation should a breach occur.

Companies with any questions regarding this process should not hesitate to contact the Privacy and Security team at Mintz Levin.

Five Things You (and Your M&A Diligence Team) Should Know

Recently it was announced that Verizon would pay $350 million less than it had been prepared to pay previously for Yahoo as a result of data breaches that affected over 1.5 billion users, pending Yahoo shareholder approval. Verizon Chief Executive Lowell McAdam led the negotiations for the price reduction.  Yahoo took two years, until September of 2016, to disclose a 2014 data breach that Yahoo has said affected at least 500 million users, while Verizon Communications was in the process of acquiring Yahoo.  In December of 2016, Yahoo further disclosed that it had recently discovered a breach of around 1 billion Yahoo user accounts that likely took place in 2013.

While some may be thinking that the $350 million price reduction has effectively settled the matter, unfortunately, this is far from the case. These data breaches will likely continue to cost both Verizon and Yahoo for years to come.  Merger and acquisition events that are complicated by pre-existing data breaches will likely face at least four categories of ongoing liabilities.  The cost of each of these events will be difficult to estimate during the deal process, even if the breach event is disclosed during initial diligence.

Continue Reading Data Breaches Will Cost Yahoo and Verizon Long After Sale

The Securities and Exchange Commission (SEC) is investigating whether Yahoo! should have reported the two massive data breaches it experienced earlier to investors, according to individuals with knowledge of the matter.  The SEC will probably question Yahoo as to why it took two years, until September of 2016, to disclose a 2014 data breach that Yahoo has said affected at least 500 million users.  The September 2016 disclosure came to light while Verizon Communications was in the process of acquiring Yahoo.  As of now, Yahoo has not publicly confirmed the reason for the two-year gap.  In December of 2016, Yahoo also disclosed that it had recently discovered a breach of around 1 billion Yahoo user accounts.  As Yahoo appears to have disclosed that breach near in time to discovery, commentators believe that the SEC will be less concerned with it.

After a company discovers that it has experienced an adverse cyber incident, it faces a potentially Faustian choice: attempt to remediate the issue quietly and avoid reputational harm, or disclose it publicly in a way that complies with SEC guidance, knowing that public knowledge could reduce public confidence in the company’s business and could even prove to be the impetus for additional litigation.

Part of the issue may be that while the SEC has various mechanisms to compel publicly traded companies to disclose relevant adverse cyber events, including its 2011 guidance, exactly what and when companies are required to disclose has been seen as vague.  Commentators have argued that companies may have a legitimate interest in delaying disclosure of significant adverse cyber incidents to give law enforcement and cyber security personnel a chance to investigate, and that disclosing too soon would hamper those efforts, putting affected individuals at more risk.

Even so, many see the two-year gap between Yahoo’s 2014 breach and its September 2016 disclosure as a potential vehicle for the SEC to clarify its guidance, due to the unusually long time period and large number of compromised accounts. As a result of its investigation, it is possible that the SEC could release further direction for companies as to what constitutes justifiable reasons for delaying disclosure, as well as acceptable periods of delay.  As cybersecurity is one of the SEC’s 2017 Examination Priorities, at a minimum, companies should expect the SEC to increase enforcement of its existing cybersecurity guidance and corresponding mechanisms.  Whatever the SEC decides during its investigation of Yahoo, implementing a comprehensive Cybersecurity Risk Management program will help keep companies out of this quagmire to begin with.

If you have any questions regarding compliance with SEC cyber incident guidance, please do not hesitate to contact the team at Mintz Levin.

Developers and operators of educational technology services should take note.  Just before the election, California Attorney General Kamala Harris provided a document laying out guidance for those providing education technology (“Ed Tech”).  “Recommendations for the Ed Tech Industry to Protect the Privacy of Student Data” provides practical direction that operators of websites and online services of a site or service used for K-12 purposes can use to implement best practices for their business models.

Ed Tech, per the Recommendations, comes in three categories: (1) administrative management systems and tools, such as cloud services that store student data; (2) instructional support, including testing and assessment; and (3) content, including curriculum and resources such as websites and mobile apps.  The Recommendations recognize the important role that educational technology plays in classrooms by citing the Software & Information Industry Association’s estimate that the U.S. market for preK-12 Ed Tech was $8.38 billion in 2015.

The data that may be gathered through Ed Tech systems and services can be extremely sensitive, including medical histories, social and emotional assessments, and test results.  At the Federal level, the Family Educational Rights and Privacy Act (FERPA) and the Children’s Online Privacy Protection Rule (COPPA) govern the use of student data.  However, according to the Recommendations, these laws “are widely viewed as having been significantly outdated by new technology.”

Recognizing this, California has enacted laws in this space to fill in gaps in the protection.  Cal. Ed. Code § 49073.1 requires local education agencies (county offices of education, school districts, and charter schools) that contract with third parties for systems or services that manage, access, or use pupil records to include specific provisions regarding the use, ownership, and control of pupil records. On the private side, the Student Online Personal Information Privacy Act (SOPIPA) requires Ed Tech providers to comply with baseline privacy and security protections.

Building on this backdrop of legislation, Attorney General Harris’ office provided six recommendations for Ed Tech providers, especially those that provide services in the pre-kindergarten to twelfth grade space.

  • Data Collection and Retention: Minimization is the Goal 

Describe the data being collected and the methods being used, while understanding that data can include everything from behavioral data to persistent identifiers.  If your service links to another service, disclose this in your privacy policy and provide a link to the privacy policy of the external service.  If you operate the external service, maintain the same privacy and security protections for the external service that users enjoyed with the original service.  Minimize the data collected to only that necessary to provide the service, retain the data for only as long as necessary, and be able to delete personally identifiable information upon request.

  • Data Use: Keep it Educational

Describe the purposes of the data you are collecting.  Do not use any personally identifiable data for targeted advertising, including persistent identifiers, whether within the original service, or any other service.  Do not create profiles other than those necessary for the school purposes that your service was intended for.  If you use collected data for product improvement, aggregate or de-identify the data first.

  • Data Disclosure: Make Protections Stick 

Specifically describe any third parties you share personally identifiable data with. If disclosing for school purposes, only do so to further the school-specific purpose of your site.  If disclosing for research purposes, only disclose personally identifiable information if you are required by federal or state law, or if allowed under federal and state law, and the disclosure is under the direction of a school, district, or state education department.  Service providers should be contractually required to use any personally identifiable data only for the contracted service, not disclose the information, take reasonable security measures, delete the information when the contract is completed, and notify you of any unauthorized disclosure or breach.  Do not sell any collected information, except as part of a merger or acquisition.

  • Individual Control: Respect Users’ Rights 

Describe procedures for parents, legal guardians, and eligible students to access, review and correct personally identifiable data.  Provide procedures for students to transfer content they create to another service, and describe these procedures in your privacy policy.

  • Data Security: Implement Reasonable and Appropriate Safeguards

Provide a description of the reasonable and appropriate security you use, including technical, administrative and physical safeguards, to protect student information.  Describe your process for data breach notification.  Provide training for your employees regarding your policies and procedures and employee obligations.

  • Transparency: Provide a Meaningful Privacy Policy

Make available a privacy policy, using a descriptive title such as Privacy Policy, in a conspicuous manner that covers all student information, including personally identifiable information.  The policy should be easy for parents and educators to understand.  Consider getting feedback regarding your actual privacy policy, including from parents and students.  Include an effective date on the policy and describe how you will provide notice to the account holder, such as a school, parent, or eligible student.  Include a contact method in the policy, at a minimum an email address, and ideally also a toll-free number.
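As one illustration of the “aggregate or de-identify” step in the Data Use recommendation above, the sketch below replaces a direct student identifier with a salted one-way hash and reduces individual rows to per-grade averages before any product-improvement analysis.  The field names (`student_id`, `grade`, `score`) are hypothetical stand-ins, not a prescribed schema:

```python
import hashlib
from collections import Counter

def deidentify(records, salt):
    """Replace the direct identifier with a salted one-way hash.

    Field names here are illustrative; substitute whatever
    identifiers your service actually collects.
    """
    out = []
    for rec in records:
        pseudo = hashlib.sha256((salt + rec["student_id"]).encode()).hexdigest()[:12]
        out.append({"id": pseudo, "grade": rec["grade"], "score": rec["score"]})
    return out

def aggregate(records):
    """Keep only per-grade average scores, discarding individual rows."""
    totals, counts = Counter(), Counter()
    for rec in records:
        totals[rec["grade"]] += rec["score"]
        counts[rec["grade"]] += 1
    return {grade: totals[grade] / counts[grade] for grade in totals}
```

Note that hashing alone is pseudonymization, not full de-identification; aggregation, as in the second function, removes individual-level data entirely.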

Given the size of the California market, any guidance issued by the California Attorney General’s office should be carefully considered and reviewed.  If you are growing an Ed Tech company, this is the time to build in data privacy and security controls.  If you are established, it’s time to review your privacy practices against this Guidance and see how you match up.  If you have any questions or concerns as to how these recommendations could be applied to your company, please do not hesitate to contact the team at Mintz Levin.

 

Over the last week, details have become available to explain how an attack against a well-known domain name service (DNS) provider occurred.  What about the potential legal risks?  We will attempt to provide insights into mitigating the legal risks for the various companies involved, including the companies that may have unwittingly provided the mechanism through which the attacks were conducted.

The Mechanics of The Recent Distributed Denial of Service Attacks 

Recently, Dyn, a Manchester, New Hampshire-based provider of domain name services, experienced service outages as a result of what appeared to be a well-coordinated attack.  Dyn provides domain name services used to direct users to a website after they type in a human-readable domain name, for example, google.com.  On October 21st, 2016, many websites, including Twitter, Netflix, Spotify, Airbnb, Reddit, Etsy, SoundCloud, and The New York Times, were reported inaccessible by users.  Dyn was attacked using a vector that is often referred to as a Distributed Denial of Service (DDoS) attack. A DDoS attack essentially involves sending a resource, such as a publicly facing website, so many communication requests at one time that the service is denied to legitimate would-be users of the resource.

The term distributed comes from the manner in which the attack is usually conducted.  An attacker does not usually possess a single resource with the necessary bandwidth or communication “pipe” to overwhelm providers such as Dyn.  Instead, the attacker creates a network of smaller resources, distributed throughout a network such as the Internet, and directs the network of devices to attack the chosen target.  In the recent attack, the perpetrators appear to have used, at least in part, a network of consumer devices from the Internet of Things (IoT), a term used to describe so-called “smart” devices that can communicate with each other.  Attackers exploited an open vector within these devices such that they were able to control them and use them as part of a DDoS attack network to direct unwanted traffic to Dyn.
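To make the dependency on DNS concrete, here is a minimal Python sketch of the lookup every browser implicitly performs before contacting a site.  If the DNS provider cannot answer, the lookup fails even though the site’s own web servers may be perfectly healthy, which is why users reported the affected sites as “down”:

```python
import socket

def resolve(hostname, port=443):
    """Return the IP addresses the configured DNS resolver reports
    for hostname, or an empty list if resolution fails.

    When an authoritative DNS provider such as Dyn is knocked offline,
    this lookup fails even though the site's own web servers may be
    running normally; to users, the site simply disappears.
    """
    try:
        infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return []
```

A DDoS attack against the DNS provider therefore denies service to every site that depends on it, without touching those sites’ servers at all.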

Identification of Cyber Security Attack Risk 

A given cyber security attack will have different effects on the ability of an entity to function based on the aspects of the infrastructure being targeted.  Identifying cyber security risk involves two parts.  First, the entity needs to understand how the various components that make up its information technology infrastructure function in relation to each other to provide services to the entity itself and other external actors.  Second, an evaluation of the exposed aspects of the components needs to be conducted, keeping in mind how the components function as a whole.

For example, with Dyn, a certain portion of the architecture that played a role in providing domain name services was likely exposed in a publicly facing manner.  A known risk of such publicly facing exposure is a DDoS attack.

The devices that were harnessed to provide the malicious DDoS traffic appear to have contained components that were publicly addressable via an identified mechanism through the Internet.  Furthermore, the devices were susceptible to accepting malicious instructions causing undesired operation: in this case, their unwitting use as part of a botnet for a DDoS attack on Dyn.

For the various websites affected, including Twitter, Netflix, Spotify, Airbnb, Reddit, Etsy, SoundCloud, and The New York Times, the components of their information architecture that process DNS information were most likely rendered unable to function, at least in part because their DNS provider ceased to operate.

Proactive Mitigation of Cyber Security Risk 

Effective mitigation of cyber security risk will involve understanding how the obligations of the entity to others, such as its customers, as well as the obligations of those that provide services to the entity, interact with the cyber security risks identified via the previous section’s methods.  This process is greatly facilitated by experienced counsel that have dealt with these issues before.

For example, Dyn faced a risk of being unable to provide effective DNS services to its customers, which, if identified in advance, could have been accounted for via a provision in the Service Level Agreement (SLA) terms of the relevant agreement.  Upon reviewing these terms, potential customers could either accept the business risk of downtime, perhaps mitigating the risk via insurance, or seek a suitable agreement with another vendor that would provide a failover mechanism should the primary vendor, here Dyn, become unavailable.
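The failover arrangement described above can be sketched generically.  The provider names and lookup callables below are hypothetical stand-ins, not a real vendor API; the point is the pattern of trying a contracted secondary when the primary is down:

```python
def resolve_with_failover(hostname, providers):
    """Try each DNS provider's lookup function in turn.

    `providers` is an ordered list of (name, lookup) pairs, e.g. a
    primary vendor's resolver followed by a contracted secondary.
    """
    errors = []
    for name, lookup in providers:
        try:
            return lookup(hostname)
        except Exception as exc:
            errors.append((name, exc))  # record the outage, try the next vendor
    raise RuntimeError(f"all providers failed for {hostname}: {errors}")
```

In practice, such failover is usually configured at the DNS registration level (listing a second provider’s nameservers), but the contractual question is the same: who bears the cost when the primary lookup fails.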

Companies with other business models, such as those that sold the Internet of Things devices that were harnessed as part of the DDoS attack against Dyn, face their own risks, including complying with regulations and using ordinary care in the creation, testing, and selling of these devices.  In some situations, it may be possible for such device manufacturers to transfer the risk to their customers via a contractual provision.  In many cases, insurance is likely to also play a major risk mitigation role.  Future litigation will likely give us greater insight into the standard of care such device manufacturers owe their customers as well as third parties.

Imagine you are the CEO of a company sitting across from an interviewer. The interviewer asks you the age-old question, “So tell me about your company’s strengths and weaknesses?”  You start thinking about your competitive advantages that distinguish you from competitors.  You decide to talk about how you know your customers better than the competition, including who they are, what they need, and how your products and services fit their needs and desires.  The interviewer, being somewhat cynical, asks “Aren’t you worried about the liabilities involved with collecting all that data?”

In honor of National Cyber Security Awareness Month, we at Mintz Levin wanted to take the chance to remind our readers of data’s value as an asset and the associated liabilities that stem from its collection and use, as well as provide guidelines for maximizing its value and minimizing its liabilities.  Continue Reading 3 Guidelines to Maximize Value of Data

 

It’s time for a compliance check on those website or mobile app privacy policies, before the California Attorney General comes knocking.

Attorney General Kamala D. Harris has announced the release of a new tool for consumers to report websites, mobile applications, and other online services that may be in violation of the California Online Privacy Protection Act (CalOPPA).  The form is available at https://oag.ca.gov/reportprivacy.  As a reminder, a website owner or app operator may violate CalOPPA by failing to post privacy policies or posting incomplete or inadequate policies that do not meet the requirements of the statute.

As we have previously written on this blog, the potential cost for not meeting the CalOPPA requirements can be substantial.  Violations of CalOPPA may result in penalties of up to $2,500 per violation which, for mobile applications, means up to $2,500 for each copy of the non-compliant application that is downloaded by California consumers.
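To see how quickly that per-download exposure can scale, here is a back-of-the-envelope sketch; the download figure is hypothetical, and actual penalties are assessed by a court, not a formula:

```python
def caloppa_max_exposure(ca_downloads, per_violation=2_500):
    """Hypothetical upper bound on CalOPPA penalties, treating each
    California download of a non-compliant app as one violation
    at the statutory maximum of $2,500."""
    return ca_downloads * per_violation

# e.g., 10,000 California downloads of a non-compliant app
exposure = caloppa_max_exposure(10_000)  # $25,000,000 theoretical maximum
```

Even a modestly popular app can thus face theoretical exposure far exceeding the cost of bringing its privacy policy into compliance.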

“In the information age, companies doing business in California must take every step possible to be transparent with consumers and protect their privacy,” said Attorney General Harris. “As the devices we use each day become increasingly connected and more Americans live their lives online, it’s critical that we implement robust safeguards on what information is shared online and how. By harnessing the power of technology and public-private partnerships, California can continue to lead the nation on privacy protections and adapt as innovations emerge.”

Mobile app creators should be aware that the Attorney General’s office will not only be relying on consumers to identify non-compliant apps.  The Office is also partnering with the Usable Privacy Policy Project at Carnegie Mellon University to develop a tool that will identify mobile apps that may be in violation of CalOPPA by looking for discrepancies between disclosures in a given privacy policy and the mobile app’s actual data collection and sharing practices (for example, a company might share personal information with third parties but doesn’t disclose that in its privacy policies).

If you have any questions regarding CalOPPA compliance, please do not hesitate to contact the team at Mintz Levin.


The term “cloud computing” (a process by which remote computers are used to store, manage, and process data) is no longer an unfamiliar term. According to at least one estimate, “approximately 90 percent of businesses [are] using the cloud in some fashion.” American Airlines is assessing major providers of cloud services for an eventual relocation of certain portions of its customer website and other applications to the cloud.

What some may not realize is that there are actually three main types of clouds: public, private, and hybrid.  Public clouds are those run by a service provider over a public network.  For example, Amazon Web Services, among others, offers public cloud services.  A private cloud is operated for a single entity, and may be hosted internally or by a third-party service provider.  A hybrid cloud is a composition of two or more clouds, such as a private cloud and a public cloud, such that the benefits of both can be realized where appropriate.  Each of these cloud infrastructure types has different advantages and disadvantages.

For a given company looking to migrate to the cloud, the appropriate option will be motivated in part by business considerations; however, data privacy and security laws, compliance best practices, and contractual obligations will provide mandatory baselines that companies cannot ignore. As such, relevant laws, best practices, and contractual obligations serve as a useful starting point when evaluating the appropriate cloud option.

Almost every organization has data flow systems that receive data, and then process and use the data to deliver a service. Below are three initial steps a decision maker should take when evaluating a potential cloud infrastructure choice.

 

First, consider the statutory implications of the types of data being processed

For example, is the system collecting social security numbers and driver’s license numbers? Pursuant to California Civil Code Section 1798.81.5, businesses that “own or license” personal information concerning a California resident are required to “implement and maintain reasonable security procedures and practices . . . to protect the personal information from unauthorized access, destruction, use, modification, or disclosure.”  Of course, many other state and federal laws may also provide additional obligations, such as the HIPAA Security Rule, which applies to certain health information under certain circumstances.

Deciding which relevant laws apply, and then interpreting language such as “reasonable security procedures and practices” is a complicated process. Companies should consult experienced legal counsel regarding these risks, especially in light of potential liability.

Second, consider any relevant contractual obligations

For example, many companies may have contracts that provide for certain service level agreement (SLA) availability obligations for the services they provide. It is also possible that these contracts could have their own security requirements in place that must be met.

Third, decide which cloud architecture option makes sense in light of the first two steps as well as business considerations

After senior decision makers, with the benefit of experienced legal counsel, have decided what elements of applicable laws, best practices, and contractual obligations apply, further business considerations may need to be addressed from an operational standpoint.  For example, interoperability with other services may be an issue, or scalability may be an issue.

 

With these requirements, in conjunction with appropriate information technology stakeholders, the appropriate cloud architecture can be chosen. Private clouds can offer the strongest security controls, as they are operated for a single entity and can offer security options not present in public clouds.  As such, a private cloud may be appropriate where a very strong security stance is deemed necessary.  Public clouds are often less expensive, but offer a more limited range of security options.  A hybrid cloud may be appropriate where an entity hosts certain high security data flow systems, as well as other systems with less severe security requirements.  For example, an entity that has an HR system containing social security numbers, as well as an employee shift scheduling system, might choose to host the HR system on a private cloud while hosting the shift scheduling system on a public cloud, with limited crossover and interoperability between the two systems.

Once you have chosen which cloud suits your business and data flow, the real work of getting appropriate contract documents in place begins.   We’ll discuss those issues in a future blog post.

 

The U.S. Court of Appeals for the Ninth Circuit recently issued a decision that could have far reaching implications for the relationships between companies that provide online services, their customers or users, and third parties. In Facebook v. Vachani, the Ninth Circuit found that Power Ventures violated the Computer Fraud and Abuse Act (“CFAA”) and California Penal Code Section 502.  Power Ventures did this by continuing to access Facebook’s computer system after receiving Facebook’s letter to cease and desist such activity.  Although Power Ventures had permission from relevant Facebook users, the users’ authorization had been revoked by Facebook itself through its letter.

Vachani’s Business Model

Power Ventures (“Power”) is a company founded by CEO Steven Vachani. As part of its business model, Mr. Vachani operated a social networking site, Power.com.  The idea was that Power.com would act as a social network aggregator, by allowing users to see all of their social network contacts across different services on a single page. The user could then use the Power.com service to access the individual social networking sites.

Read on to understand what occurred in the case and what key takeaways it provides for senior decision makers and in-house counsel. Continue Reading Facebook v. Vachani – User Authorization Can Be Revoked By Service Providers

The Department of Homeland Security (DHS) and the Department of Justice (DOJ) have issued the long-awaited final procedures for both Federal and Non-Federal Entities under the Cybersecurity Information Sharing Act (CISA) (“Final Procedures”) that provide information on how DHS will implement CISA.  In addition to the Final Procedures, the agencies also released “Guidance to Non-Federal Entities to Share Cyber Threat Indicators and Defensive Measures with Federal Entities under the Cybersecurity Information Sharing Act of 2015” (the “Guidance”).

As we have written previously, a company may share cyber threat indicators (CTIs) and defensive measures (DMs) for cybersecurity purposes “notwithstanding any other provision of law,” and receive certain liability protections for sharing in accordance with the Act.  The Final Procedures and the Guidance are finalized versions of interim guidance previously discussed.  Any decision to share information under CISA is complex and involves factual and legal determinations.

Read on to find out what CTIs and DMs are, and information on the procedures companies must follow to obtain liability protection for sharing CTIs and DMs with the Federal Government.   Continue Reading “Interim” No More: DHS and DOJ Publish Final CISA Guidance on Cybersecurity Sharing