
Beyond IP Law

A look at what's going on in the world of intellectual property.

In re Nickelodeon Consumer Privacy Litigation: An IP Address Is Not Always Personally Identifiable Information

Posted in Data Security, Privacy

What’s the Case About? In re Nickelodeon Consumer Privacy Litigation[1] is a multi-district consolidated class action filed on behalf of children under the age of thirteen, alleging that Viacom used child-directed websites it owned to collect, without parental consent, data from the class members, which it then provided to co-defendant Google. The data Viacom captured from the children included their gender, birthdate, IP address, the webpages they visited and the names of the videos they viewed. The court considered an issue of first impression: whether an IP address is personally identifiable information (PII) under the Video Privacy Protection Act (VPPA). It also considered whether the collection of the data constituted intrusion upon seclusion under New Jersey law. Plaintiffs argued that the vulnerability of children, coupled with public aversion to mining them for data, supported Viacom’s liability.[2]

VPPA allegations dismissed: The court held that Viacom did not violate the VPPA by collecting the IP addresses of children. The decision was based, in part, on the precedent set by In re Hulu Privacy Litigation.[3] The Hulu court determined that static digital identifiers such as IP addresses identify the location of a computer, which, without additional information, cannot be used to identify an individual. Under this rationale, an IP address is not PII, because an address alone cannot “reasonably” lead to the identification of a person. The court also noted that the VPPA is simply too old and brittle to encompass technology so distant from its origins as a by-product of Blockbuster, Erol’s Video Club and Hollywood Video stores. A nuance explaining why Google escaped VPPA liability on other grounds is touched upon below.

New Jersey state law claims remanded: The court remanded the claim against Viacom for violation of the New Jersey intrusion upon seclusion law. The court did not look favorably upon Viacom’s failure to honor its notice to parents that it would not collect any data from children.

The allegations against Viacom: Viacom owns the websites Nick.Jr. and Nick.com (the Nick Sites), both of which are associated with the Nickelodeon channel. The Nick Sites offer games and streaming videos to children and included this notice to parents:

HEY GROWN-UPS: We don’t collect ANY personal information about your kids. Which means we couldn’t share it even if we wanted to![4]

When children registered on one of the Nick Sites, they received an avatar nickname based on a Nickelodeon cartoon character of the same gender and approximate age as the child. The plaintiffs alleged that Viacom used first-party cookies it placed on the children’s computers to obtain information about which games and videos the children accessed. Viacom disclosed the information it collected to Google and permitted Google to place ads on the Nick Sites.

The allegations against Google: The plaintiffs alleged that Google (1) placed third-party cookies via advertisement on the computers of children who accessed the Nick Sites, (2) used those cookies to track the children on any website displaying a Google ad, and (3) used “Doubleclick.net cookies”[5] to track the browsing of whomever used the computer across any website Google owned, such as Gmail, YouTube and Google Maps.

Analysis of the VPPA: Congress enacted the VPPA after the 1987 Senate Judiciary Committee’s hearings regarding Supreme Court nominee Robert Bork. During the hearings, a newspaper obtained and publicized a list of titles of 146 films Judge Bork or members of his family rented from a local video store.[6] The list of videos was, even by 1987 standards, unremarkable — not a single NC-17 film on the list. Congress agreed, however, that a person’s video viewing history should be private. Consequently, under the VPPA, a user must give permission for his or her video viewing data to be shared. How does this translate to current technology? It doesn’t. The court likened applying the VPPA to internet technology to putting a square peg in a round hole.[7] Additionally, the court referred to the VPPA as a rigid law that lacked the flexibility of the Children’s Online Privacy Protection Act (COPPA) to effectively regulate technology that is “in flux.”[8]

The key definitions under the VPPA are:

Consumer: any renter, purchaser or subscriber of goods or services from a video tape service provider.

Video tape service provider: any person, engaged in the business, in or affecting interstate or foreign commerce, of rental, sale or delivery of prerecorded video cassette tapes or similar audio visual materials.

Personally identifiable information (PII): includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.

18 U.S.C. § 2710(a). A violation of the VPPA occurs when “[a] video tape service provider … knowingly discloses, to any person, personally identifiable information concerning any consumer of such provider.” Id. § 2710(b)(1).

The VPPA was created to protect information specific to viewers of movies. The court noted that if the definition of PII was expanded for all statutes to include an IP address, there would be no end to litigation and the distinctions between the groups protected by certain statutes would be eroded. Congress’ decision to omit a new definition of PII in the 2013 amendment of the VPPA further emphasized that the VPPA “serves different purposes, and protects different constituencies, than other, broader privacy laws.”[9] For example, if “Google were to start purposefully leaking its customers’ YouTube video-watching histories,” the VPPA “would almost certainly” be violated.[10]

Extending the VPPA to regulate current technology would likely result in unlimited violations. Defining an IP address as PII within the context of the VPPA would mean that “the disclosure of an IP address to any Internet company with registered users might trigger” liability, since an IP address is regularly transmitted to an Internet service provider (ISP) with each search.[11] The court also pointed out that there is a spectrum of PII, with first and last name at one end, and an IP address at the other, lower end of the spectrum, given that an IP address alone may be insufficient to identify a person. The case the court cited to illustrate the need for a subpoena to identify a person is a copyright infringement case, Warner Bros. Records Inc. v. Walker, 704 F. Supp. 2d 460 (W.D. Pa. 2010). Warner Bros. needed a subpoena to identify the student who was assigned the IP address used to illegally download songs. The student, who shared a room with multiple roommates, might not have been identified without a subpoena, given that several people may have used the computer. It was not “reasonably” likely that Warner Bros. could identify the person responsible for the downloads without one. Understandably, a subpoena may be necessary in a fluid environment such as a college, where multiple people may have access to a single computer.

Time-out: It’s one thing for Warner Bros. to need help from the college to identify which of multiple people may have used an IP address assigned by the college. It’s something altogether different when Google, which the court describes as “a company whose entire business model is purportedly driven by the aggregation of information about Internet users,” wants to identify a person. The plaintiffs’ amicus very astutely provided some real-world perspective about what happens when Google wants to find out who you are: “concluding ‘that Google is unable to identify a user based on a combination of IP address … and other browser cookie data … would be like concluding the company that produces the phone book is unable to deduce the identity of an individual based on their telephone number.’”[12] Enough said. Resume play.
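
The amicus’s phone-book analogy can be made concrete. Below is a toy sketch, in Python, of how an ad network that already holds first-party login records could join an “anonymous” IP address and cookie pair back to a named account. Every record, address and cookie value here is hypothetical:

```python
from typing import Optional

# Hypothetical records an ad network might retain from logged-in sessions.
account_log = [
    {"ip": "198.51.100.23", "cookie": "dc-7f3a", "account": "jane.doe@example.com"},
    {"ip": "203.0.113.40",  "cookie": "dc-91bb", "account": "jsmith@example.com"},
]

def identify(ip: str, cookie: str) -> Optional[str]:
    """Join an 'anonymous' (IP, cookie) pair against first-party login data.
    The IP address alone is the phone number; the log is the phone book."""
    for row in account_log:
        if row["ip"] == ip and row["cookie"] == cookie:
            return row["account"]
    return None

print(identify("198.51.100.23", "dc-7f3a"))  # jane.doe@example.com
```

The point is not the trivial lookup but the asymmetry: a stranger holding only `198.51.100.23` has a number with no directory, while a company holding both sides of the log can complete the join instantly.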

The court affirmed the dismissal of the intrusion upon seclusion claim against Google: Although the court acknowledged that many people, and some courts, find the collection and monetization of data from children without parental consent repugnant, those acts alone did not establish a claim for intrusion upon seclusion. Under New Jersey law, the claim requires a showing of (i) an intentional intrusion (ii) upon the seclusion of another (iii) that is highly offensive to a reasonable person.[13] The court disregarded the fact that children, rather than adults, were tracked, because third-party cookies serve a legitimate commercial purpose for advertisers, and Google used them on the Nick Sites the same way it uses them on other, non-child-directed sites.

This is why Viacom may be liable for intrusion upon seclusion: When Viacom notified parents that it did not collect any personal information about children, it was reasonable for a jury to decide that parents may have permitted their children unsupervised access to the Nick Sites based on the disclaimer. If the parents of the plaintiff class members didn’t already have an expectation of privacy, Viacom’s notice created an expectation of privacy. Viacom’s violation of that trust by surreptitiously collecting data from children could be considered highly offensive under the applicable law.


An IP address has been likened to a computer’s fingerprint. If a statute identifies an IP address or other static assigned number as PII, that number is a great starting point for identifying a user. For example, under COPPA and HIPAA, an IP address sits as high on the spectrum of PII as a user’s first and last name. The rationale behind this ranking is that sometimes an IP address can reasonably lead you to the user. Who’s looking for you also matters: Google, armed with third-party cookies, can reasonably use your IP address to identify you.

Sometimes an IP address can only identify a computer, i.e., it cannot “reasonably” be used to identify you. Without a subpoena or some alternate means of creating a mosaicked identity, you may have to resort to battling “John Doe” until a subpoena grants you the right to retrieve additional information about the IP address. In these instances, IP addresses are not considered to be PII. At the end of the day, you have found a computer. Good job.
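
The two situations the court distinguishes can be illustrated with Python’s standard `ipaddress` module: a bare address tells you what kind of network endpoint you have, not who was typing. This is only an illustrative sketch; the addresses below are examples (8.8.8.8 is a well-known public DNS resolver):

```python
import ipaddress

def classify(addr: str) -> str:
    """Describe what an IP address alone can reveal: the kind of network
    endpoint, not the person behind the keyboard."""
    ip = ipaddress.ip_address(addr)
    if ip.is_private:
        # RFC 1918 and other special-purpose space: meaningful only inside
        # one local network, so it cannot identify a machine globally,
        # let alone a user.
        return "private: identifies a device only within a local network"
    if ip.is_global:
        # Publicly routable: identifies a network endpoint. Only the ISP
        # (or a subpoena directed to it) can map the address to a subscriber.
        return "public: identifies an endpoint; the ISP links it to a subscriber"
    return "special-use: carrier-grade NAT or other reserved space"

print(classify("192.168.1.10"))  # private: ...
print(classify("8.8.8.8"))       # public: ...
```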


What did we learn?

  • Don’t oversell your privacy policy. Viacom faces potential liability because it violated its own privacy notice to parents.
  • Do the right thing. Don’t collect information from children under the age of 13 that is defined as PII under any privacy law without parental consent. These days there are few things on which 90 percent of Americans agree, and the offensiveness of tracking children without parental consent is one of them.
  • Now that you know that children’s browsing history, IP address and other information can sometimes be collected through third-party cookies without parental consent, educate your children. The Federal Trade Commission’s Admongo program provides guidance on helping children distinguish between ads and entertainment; a review is available at https://www.commonsensemedia.org/website-reviews/admongo.
  • Understand that a trade-off for having the world at your fingertips may mean sharing your computer’s fingerprint with inquiring minds.

Stay safe.

[1] In re Nickelodeon Consumer Privacy Litigation, No. 15-1441, 2016 WL 3513782 (3d Cir. June 28, 2016).

[2] Id. at *4 (alleging that (1) targeting ads to children is more profitable than targeting ads to adults, in part, “because children are generally unable to distinguish between content and advertisements;” (2) 80% and 90% of 2,000 adult respondents, respectively, opposed an advertiser’s tracking of children and believed advertisers should obtain a parent’s permission before installing cookies on a device used by a minor child; and (3) companies can use “browser fingerprinting” to identify specific users).

[3] In re Hulu Privacy Litigation, No. 11-CV-3764 (LB), 2014 WL 1724344 (N.D. Cal. Apr. 28, 2014).

[4] In re Nickelodeon, 2016 WL 3513782, at *3.

[5] Id.

[6] Michael Dolan, “The Bork Tapes,” City Paper, Sept. 25–Oct. 1, 1987, at 1.

[7] In re Nickelodeon, 2016 WL 3513782, at *15.

[8] Id. at *16-19.

[9] Id. at *19.

[10] Id. at *17.

[11] Id. at *20-21.

[12] Id. at *20.

[13] Hennessey v. Coastal Eagle Point Oil Co., 609 A.2d 11, 17 (N.J. 1992) (citing Restatement (Second) of Torts § 652B (1977)).

Reputation Matters: Don’t Lose Opportunities Due to Inaccurate Personal Data

Posted in Data Security, Privacy

“A reputation once broken may possibly be repaired, but the world will always keep their eyes on the spot where the crack was.” ― Joseph Hall

Consumers may be injured by inaccurate data that they cannot review or correct. There’s a hole in the bucket, dear Congress.[1] 

The children’s song “There’s a Hole in the Bucket” exemplifies the conundrum many consumers experience when they are denied opportunities or inappropriately solicited. Data brokers maintain files with over 1,500 pieces of personal data on each of us. There are over 3,500 data brokers in the U.S. Only about one-third of them permit individuals to “opt out” of inclusion in their data banks, usually for a fee. Unless and until you recognize an unexplained pattern of lost job opportunities, rejected apartment applications or unsolicited marketing, you may not care what data brokers maintain in your files.

Imagine this: You are 22 years old and gung-ho to use your brand-spanking-new business organization degree as an entry-level traveling corporate trainer. You grant recruiters the right to conduct background checks after they indicate their interest in you based on your resume. You get rejection after rejection. You finally muster the courage to call a recruiter and ask why, and she explains that you are not a good fit based on background information that describes you as a 39-year-old parent of four young children who holds a Ph.D. and suffers from agoraphobia. None of this information is true.

Your prospective employers may have relied on information provided by data brokers or credit rating agencies (CRA) in determining that you are not a viable candidate. Now that you know inaccurate information is being reported about you, you are confident that you can correct your files and the employers will reverse their decisions. You can if the inaccurate information is from a CRA. But if data brokers provided the incorrect information, you will find yourself in the miserable position of knowing your files are wrong and being powerless to correct them. You know prospective employers have considered inaccurate information about you, but you don’t know which employers relied on which data brokers or which inaccuracies in your files made you undesirable for hire. You don’t know how many data brokers have files on you or what evidence you can provide to disprove the inaccurate information about you. You and “Dear Henry” share the predicament of wanting to fix the hole in the bucket but lacking the tools to do so.

Let the screening begin.

Many decisions about consumers, job applicants and second dates are based on inaccurate information provided by data brokers. Data brokers sell consumers’ personally identifiable information (PII) to be used in marketing, people searches, fraud detection and risk management. The FTC defines data brokers as “companies that collect information, including personal information about consumers, from a wide variety of sources for the purpose of reselling such information to their customers for various purposes, including verifying an individual’s identity, differentiating records, marketing products, and preventing financial fraud.”[2] The Fair Credit Reporting Act (FCRA) applies to CRAs like Experian, TransUnion and Equifax, not to data brokers. CRAs must take reasonable steps to ensure the accuracy of the consumer PII they distribute, and they must provide consumers the opportunity to review and correct their records. No federal statute or regulation imposes comparable obligations on data brokers. If enacted as introduced in 2009, the Data Accountability and Trust Act (DATA) would have provided procedures for individuals to audit and verify the accuracy of data held by data brokers. The swath of data collected by data brokers is astounding and troubling. Add the fact that data brokers are generally less expensive to use than CRAs, and employers and individuals are at a distinct disadvantage relative to data brokers.

Here’s what’s in the bucket.

Reports about consumers are based on information showing what they own or use; who they are with, have lost or are fighting; how they make, save and spend their money; and what interests or ails them, including mental, genetic and “other” diseases that may be embarrassing.[3] For example, when you register your car, record a deed of trust, activate a warranty, join a Facebook group, fill a new prescription, or get sued, married, divorced or widowed, data brokers collect that information. It is tacitly understood that PII from data brokers is not always accurate and can enable discrimination in hiring and in the provision of resources and opportunities.[4] Consumer advocacy groups report that information used in people search sites is not vetted — the consumer has the responsibility of figuring out which of 67 people named “Pamela Samuelson” authored Protecting Privacy Through Copyright Law?. Marketing information is more accurate, but still unreliable. For example, a data broker may correctly report that a household member purchased a new car, but err by addressing car wash coupons to the resident third-grader. Risk mitigation information is the most accurate, because it is expected to at least correspond to the correct person, even if the results are outdated.

This brings to mind a television character who changed his name because he shared it with a well-known artist who had been convicted of sex crimes. His new name, unfortunately, was shared with a well-known artist convicted of murder. How do you feel knowing that you may be judged by the bad report of someone who has a name similar to yours? The identities of the entities using the bad data may influence your answer.

Who’s looking in the bucket?

Financial institutions, government agencies, political organizers and insurance companies use the services of data brokers. As of May, the customers of one of the largest data brokers included “47 Fortune 100 clients; 12 of the top 15 credit card issuers; seven of the top 10 retail banks; eight of the top 10 telecom/media companies; seven of the top 10 retailers; nine of the top 10 property and casualty insurers; three of the top five domestic airlines; and six of the top 10 U.S. hotels.”[5] How likely are you to recognize, after your namesake niece files for bankruptcy, that the hotel prices you are offered have increased by 18%?

Can you look in the bucket?

It depends on who filled it. If data brokers filled the bucket, no: no federal law gives an individual the right to look in the bucket, and a subpoena or other discovery procedure may be your best option to see your file. If a CRA filled the bucket, yes: an individual has the right to review and correct the information in the bucket.

What can you do?

  • Educate yourself about your rights. See whether your state has any laws that offer you protection. California, for example, shields victims of violent crimes from having their PII publicized on the internet.
  • Opt out of as many of the data broker sites as is reasonable. Visit this website to get started: http://www.computerworld.com/article/2849263/doxxing-defense-remove-your-personal-info-from-data-brokers.html.
  • Lobby your federal and state legislators and align yourself with organizations that advocate for the right to control your PII.

Stay safe.

[1] There’s a Hole in the Bucket, Songwriters: Harry Belafonte, Odetta Felious Gordon © Next Decade Entertainment, Inc.

[2] FTC, Protecting Consumer Privacy in an Era of Rapid Change, at 68 (Mar. 2012).

[3] Steve Kroft, The Data Brokers: Selling Your Personal Information, 60 Minutes (CBS television broadcast Mar. 9, 2014), http://www.cbsnews.com/news/the-data-brokers-selling-your-personal-information/.

[4] Exec. Office of the Pres., Big Data: Seizing Opportunities, Preserving Values, pp. 51-53, May 2014, http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf.

[5] U.S. Senate Commerce Committee, A Review of the Data Broker Industry: Collection, Use, and Sale of Consumer Data for Marketing Purposes  (December 2013).

Safe or Good? We All Have Choices to Make

Posted in Data Security

“He’s not safe, but he’s good.” (Referring to Aslan, the Lion, in The Lion, the Witch and the Wardrobe.) ― C.S. Lewis

I planned to write about the inspired, better-than-sliced-bread security option of using fingerprint authentication to protect our mobile devices. That imploded. In 2014, and earlier this year, courts in Virginia and California, respectively, issued warrants requiring suspects to provide their fingerprints to unlock phones so the government could access potentially incriminating evidence believed to be stored there.[1]

All “do,” no “talk.”

In contrast, courts have not forced individuals to reveal the passcodes used to secure their mobile devices.[2] What gives? Albert Gidari, the director of privacy at Stanford Law School’s Center for Internet and Society, explains that the Fifth Amendment protects thoughts, not things: “Unlike disclosing passcodes, you are not compelled to speak or say what’s ‘in your mind’ to law enforcement,” Gidari said. “‘Put your finger here’ is not testimonial or self-incriminating.” For example, you can be compelled to provide the key to your heart, but no one can make you reveal what is in your heart.

Why chain the door when all the windows are open?

The fingerprint authentication platform is only as good as its gaps. The maker of one of the top mobile operating systems has stored fingerprints as unencrypted images in local storage. Fingerprint data stored by two cellphone companies was breached despite the use of TrustZone, ARM’s hardware-based security technology used by mobile manufacturers.[3] WhatsApp, which was mentioned in a previous blog post, has also experienced data theft.

Studies reveal the ineffectiveness of the security software that service providers give their users. The software is largely ineffective, because … PEOPLE DON’T DOWNLOAD IT! People shred their mail, but they don’t download the platforms devised to protect their privacy when it takes the form of data. The most common reasons people don’t download updates include: (1) suspicion that the updates are malware sent by hackers; (2) belief that an update won’t benefit them if they are otherwise satisfied with their current service; (3) lack of understanding that updates provide security patches; and (4) expectation that updates will take too long or use too much memory. You can check the authenticity of an update to your operating system by visiting the app store or your manufacturer’s website, or by searching the internet for information about update releases.

A critical flaw in fingerprint authentication is hiding in plain sight.

The convenience and high-tech sexiness of using fingerprint authentication on our phones has clouded our judgment regarding some of the most basic things we know about security. Fingerprints have a characteristic that runs counter to a cornerstone of cybersecurity: fingerprints are immutable. If someone steals your password, you can change it. Quarterly mandatory password expirations illustrate the adage that the best password is a new password.[4] Heads would spin and roll in IT departments the world over if it were decreed that passwords would never be changed again.

And just like that, the floodgates are open.

A much-touted advancement of fingerprint authentication is that no one can steal your fingerprint. That’s fine, but the image of your fingerprint can be stolen like any other image. The image of your fingerprint can give someone access to apps, browsers, photo albums, cloud files and online accounts, some of which may be secured by passwords cached in your phone history. Finally, does it make sense to have an expectation of privacy in our fingerprints? The legal answer is no. Since we literally leave our fingerprints everywhere, maybe we should reconsider relying on them to secure our privacy. Our unspoken thoughts are inalienable property. Fingerprints, apparently, are just keys.

Convenience is usually a good thing. Good things may not be safe. We each have to weigh whether the convenience of opening our phones with a finger to swipe instead of entering our PINs is worth the risk of losing our privacy.

Stay secure.

[1] Matt Hamilton, The government wants your fingerprint to unlock your phone. Should that be allowed?, LA Times, Apr. 30, 2016, http://www.latimes.com/local/california/la-me-iphones-fingerprints-20160430-story.html; and Quinton Plummer, Virginia police can force you to unlock your smartphone using fingerprint: Here’s why, Tech Times, Nov. 3, 2014, http://www.techtimes.com/articles/19288/20141103/virginia-police-can-force-you-to-unlock-your-smartphone-using-fingerprint-heres-why.htm.
[2] SEC v. Huang, No. 15-269 (E.D. Pa. Sept. 23, 2015); Virginia v. Baust, No. CR14-1439 (Va. Cir. Oct. 28, 2014).
[3] Shubham Verma, Why I’m Not a Fan of Smartphones With Fingerprint Scanners, Gadgets360 Oct. 30, 2015, http://gadgets.ndtv.com/mobiles/opinion/why-im-not-a-fan-of-smartphones-with-fingerprint-scanners-759160.
[4] To be fair, there is some belief that changing passwords regularly is more harmful than not. E.g., Andrea Peterson, Why changing your password may do more harm than good, The Washington Post, Mar. 2, 2016, https://www.washingtonpost.com/news/the-switch/wp/2016/03/02/the-case-against-the-most-annoying-security-measure-virtually-every-workplace-uses/.

An Addendum to the Scariest Hack So Far

Posted in Privacy

Andy Johnson-Laird, President of Johnson-Laird, Inc., was kind enough to offer advice on three security techniques he recommends to detect and deny hackers, such as those we discussed in last week’s blog post.

  1. Intrusion Detection Systems identify external probing or port scanning of known IP addresses. He describes it as “the equivalent of lying in bed at night and listening for someone rattling your front door handle. It’s a more or less constant rattling when it comes to port scanning.” See https://en.wikipedia.org/wiki/Intrusion_detection_system.
  2. Network Address Translation (NAT-ing) prevents the routing of internal IP addresses. Internal IP addresses will be selected from the range of “non-routable” addresses reserved by the Internet Engineering Task Force and the Internet Assigned Numbers Authority. See https://en.wikipedia.org/wiki/Reserved_IP_addresses.
  3. Internet Protocol Security (IPSec) is an end-to-end security scheme that requires mutual authentication at the beginning of communication and the negotiation of cryptographic keys during the session. See https://en.wikipedia.org/wiki/IPsec.
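
Item 2’s reserved ranges are concrete enough to check in code. Here is a short, illustrative sketch, assuming Python and the RFC 1918 blocks that the IETF and IANA reserve for internal use:

```python
import ipaddress

# RFC 1918 blocks reserved by the IETF/IANA for internal ("non-routable") use.
RESERVED_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_non_routable(addr: str) -> bool:
    """True if the address falls in a reserved block, meaning it will not be
    routed on the public internet and is safe for NAT-ed internal use."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RESERVED_BLOCKS)

# A NAT gateway rewrites internal sources like these to its single public IP.
for addr in ("10.1.2.3", "172.20.0.5", "192.168.0.101", "8.8.8.8"):
    print(addr, "non-routable" if is_non_routable(addr) else "routable")
```

Because addresses inside those blocks never appear on the public internet, an outside attacker probing your public IP learns nothing about the layout of the machines behind the NAT gateway.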

News Flash, May 12, 2016:

The Department of Justice proposed that the biometric database the FBI has been amassing for eight years should be exempt from privacy laws.[1] If approved, the proposal would free the FBI to save images of faces, handprints, tattoos, iris scans and biographies of people who don’t know of the existence of, let alone the content of, their files. The largely unvetted findings of investigations other federal agencies conducted on job applicants will also be included in the database. In a nutshell, anyone could be detrimentally affected based on inaccurate information and never know it. Stay tuned for updates on this developing proposal.

[1] See Tim Cushing, FBI Doesn’t Want Privacy Laws To Apply To Its Biometric Database, from the and-doesn’t-want-to-let-citizens-know-how-THEIR-privacy-is-affected dept, TechDirt, May 12, 2016, https://www.techdirt.com/articles/20160508/13574834381/fbi-doesnt-want-privacy-laws-to-apply-to-biometric-database.shtml.

The Scariest Hack So Far

Posted in Data Security

Hackers have upped the ante. Data controllers wax nostalgic about the good old days when data was outright stolen. Back then, in 2013, there was a sense of fair play. Trolls did troll things. Assuming the victim implemented and maintained a “comprehensive information security program”[1] to protect the type of data that was compromised, its insurance carrier may have provided coverage and the issue was resolved. Now, ransomware, extortion and data sabotage may lead to ongoing issues for data controllers. Each of these types of cyberattack is evolving in ways that are truly devious.

Data theft is to head cold as ransomware and extortion are to chicken pox.    

If the theft of your data is not on the scale experienced by Target, Wyndham or LivingSocial, your operations may recover in short order. Think of a localized data breach as a head cold. After the breach, you will be out of commission for a while, but if you take steps to avoid reinfection (implement a privacy policy), protect others from your symptoms (use firewalls and authentication) and complete your course of treatment (follow your plan and comply with breach notification laws), you will ultimately be fine.

Think of your computer network as patient zero. Ransomware is akin to the chicken pox — it hits fast, is contagious, and the signs of the illness can stay with you until you adopt an assumed name. Early ransomware “only” locked your keyboard or uploaded unsavory files to your system. Attackers would notify you of the amount of Bitcoin required to regain access to your data or remove the offensive files. In 2013, hackers significantly increased their use of ransomware to (1) infect your system and (2) install a cryptographic key to lock and unlock your data. Once in, the attackers would gauge whether to access your financial accounts directly or send a ransom demand with a countdown showing when your data would become permanently inaccessible. Now, ransomware such as CryptoWall spreads to and infects the shared drives connected to patient zero. If your whole office is infected, a quarantine may be required until all viruses are eliminated.

Data sabotage initially seems to be an asymptomatic attack, but can quickly become fatal. 

Hackers using data sabotage can remain innocuous while they mine data. Only when hackers know enough about your data to cripple you or enrich themselves is the true measure of their destructive nature realized.[2] Data sabotage may occur over a period of time and involve many distinct steps. Manipulation of the numbers reported in a Form 10-Q can cause a corporation’s stock to crash and affect an entire industry. Competitors may also find vulnerabilities in your security that they can exploit.
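
One common defense against this kind of quiet tampering is periodic integrity checking against trusted baselines. Below is a minimal, illustrative sketch in Python; the file name and records are hypothetical:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of a record; any change to the data changes it."""
    return hashlib.sha256(data).hexdigest()

GOOD_RECORD = b"region,revenue\nwest,1200\neast,900\n"

# Baseline digest recorded when the record was known to be good.
baseline = {"q2_revenue.csv": digest(GOOD_RECORD)}

def detect_tampering(name: str, current: bytes) -> bool:
    """True if the record no longer matches its trusted baseline digest."""
    return digest(current) != baseline[name]

# A saboteur quietly bumps one figure; the digest comparison catches it.
tampered = b"region,revenue\nwest,1200\neast,9000\n"
print(detect_tampering("q2_revenue.csv", tampered))  # True
```

The baselines themselves must be stored out of the attacker’s reach (offline, or signed), or the saboteur can simply update them along with the data.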

A case with espionage, extortion and pseudonyms is a sign of things to come.

Wire Swiss GmbH (Wire Swiss) is currently seeking a declaratory judgment and alleging civil extortion against its competitor, Quiet Riddle Ventures dba Open Whisper Systems, and Moxie Marlinspike.[3] The litigants develop end-to-end encrypted messaging software. Wire Swiss claims the defendants threatened to accuse Wire Swiss of infringing copyrighted software code and to publicize “vulnerabilities” in the security of Wire Swiss’ encryption software unless Wire Swiss paid a $2 million licensing fee. Wire Swiss claims that the specter of publication of security vulnerabilities in its encryption software could cause catastrophic damage to its reputation. Wire Swiss further claims that the defendants’ threat coincided with the announcement that their Signal software had been incorporated into the WhatsApp messaging application. If true, the plaintiff’s allegations are a prime example of how data saboteurs profit from their hacks. This case may also be fodder for legislation creating a safe harbor for security self-evaluation.

The best policy may be to trust no one.

Developing a zero-trust, multilayer security plan may be your best method of protection. Here are some common tips that may help keep your data virus and hack free:

  • Encrypt or anonymize your data.
  • Erect firewalls.
  • Invest in the "antis": anti-virus, anti-malware and anti-spyware software.
  • Update your software regularly.
  • Consider using a "kill switch": when suspicious events happen, the IT department should be notified automatically, and the network should shut down if no protective measures are taken.
  • Ensure granular access control is used.
  • Regenerate session identifiers on every request.
  • Use two-factor or multifactor authentication.
  • Log errors instead of displaying them to potential hackers.
  • Revoke credentials when certain events occur.
  • Implement “eventing” so you know when certain categories of data are accessed and/or modified.
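
Two of the tips above, regenerating session identifiers and logging errors instead of displaying them, can be sketched in a few lines of Python. This is only an illustrative sketch; the function names, logger name and messages are hypothetical, not part of any particular product.

```python
import logging
import secrets

# Errors go to a server-side log, not to the user's screen, so an attacker
# never sees stack traces or internal details (logger name is hypothetical).
logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("security")

def new_session_id() -> str:
    """Regenerate an unpredictable session identifier on every request."""
    return secrets.token_urlsafe(32)

def handle_request(session: dict) -> str:
    try:
        session["id"] = new_session_id()  # a fresh ID defeats session fixation
        return "OK"
    except Exception as exc:
        log.error("request failed: %s", exc)  # full detail stays server-side
        return "An error occurred."           # generic message for the user
```

Rotating the identifier on each request means a stolen session token goes stale almost immediately, which is one small piece of a zero-trust posture.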

Sadly, no safeguard is guaranteed. Using multiple defenses will, at a minimum, ensure you are not the slowest one running from the bear. Good luck, and may your houses remain pox-free.

[1] Fed. Trade Comm’n v. Wyndham Worldwide Corp., No. 2:13-cv-1887, at *4 (D.N.J. Dec. 9, 2015).
[2] See Edmund Lee, “AP Twitter Account Hacked in Market-Moving Attack,” Apr. 24, 2013, http://www.bloomberg.com/news/articles/2013-04-23/dow-jones-drops-recovers-after-false-report-on-ap-twitter-page (the S&P 500 Index dropped $136 billion in value following a fake tweet alleging Pres. Obama had been injured).
[3] Wire Swiss GmbH v. Quiet Riddle Ventures, LLC, et al., No. 2:16-cv-02340 (C.D. Cal. Apr. 6, 2016).  Mr. Marlinspike may (or may not) be Matthew Rosenfeld or Mike Benham.

Expectations of Privacy: Location Matters

Posted in Data Security, Privacy

Sometimes law enforcement needs a warrant to access cellphone data, sometimes a court order. Sometimes nothing is required.


Roaming while you roam.

Depending on where you use your cellphone, law enforcement may obtain your location records from your wireless provider without a court order or warrant — neither is required in Washington state. In urban areas where there are multiple cell towers, a phone’s location can be identified to within half a mile. Turning off your location services or powering down the cellphone alone will not shield you from law enforcement.



Are you where you said you’d be?

When processing data, cellphones communicate with the strongest available cell tower signal. Calls, texts and internet browsing generate cell-site location information (CSLI). CSLI is time-stamped and linked to the phone number. Cell towers emit different signals in each direction, so a phone’s movement can be tracked by its angular position relative to a tower. From CSLI, law enforcement can determine where you are, which cellphones are near you, with which phone numbers you communicate, how long you communicate and what routes you travel. In essence, everything except the actual communication is recorded. If you aren’t a criminal, you may not care; but depending on how CSLI is produced, even the innocent can catch the attention of law enforcement.


Despite your innocence, law enforcement may receive your cellphone usage records.

“Tower dumps” entail producing CSLI for all cellphones that processed data sent through a tower. If your cellphone uses a targeted tower, your data may be captured. You go from having fun with your friends to law enforcement tracking you by your number.


Producing CSLI in real time transfers your records contemporaneously as they are created by cellphone usage. If this happens, you are probably having a bad day; this method may be used in exigent circumstances. Collection of historical CSLI, by contrast, captures all CSLI for a cellphone over a specific period of time.


One person’s record is another person’s dirty laundry.

In United States v. Carpenter,[1] the Sixth U.S. Circuit Court of Appeals joined the Eleventh Circuit[2] in ruling that law enforcement agencies do not need a warrant to track a caller’s location through cell tower records. Timothy Carpenter and Timothy Sanders robbed nine cellphone stores (ironic, isn’t it?) in Michigan and Ohio within four months.

The FBI requested the “transactional records” from the Timothys’ wireless providers, aka the Timothys’ historical CSLI. Court orders were issued pursuant to the Stored Communications Act (SCA) after the FBI showed there were reasonable grounds to believe the CSLI was relevant to the investigation. The FBI reviewed 127 days of CSLI for one Timothy and 88 days for the other. The government established through the historical CSLI that the Timothys were located within a half-mile to two miles of each armed robbery when it occurred. On appeal, the defendants argued that the Fourth Amendment required the government to show probable cause and use a search warrant to access the CSLI.


The opinion focused on the following:

  1. There was no search, because the FBI collected the wireless providers’ data routing information, which was gathered in the ordinary course of business.
  2. CSLI does not refer to the content of the defendants’ private communications.
  3. Every cellphone user who has paid roaming fees knows that wireless carriers collect locational information, so there was no expectation of privacy.
  4. CSLI is so imprecise compared to GPS data that there is no expectation that a cellphone user can be located exactly.
  5. The SCA requires the government to meet only the “reasonable grounds” standard to obtain a court order, not the “probable cause” standard required for a warrant, to obtain CSLI.


Tracking or stalking? Duration matters.

The concurring opinion in Carpenter questioned whether the business-records standard of proof applies in the review of an alleged violation of Fourth Amendment rights. The rationale behind the question is that a business’s production of credit card records showing purchases, for example, may be sufficiently distinct from the production of cellphone records showing personal location to require a more stringent analysis. The concurring judge also found the scope of the location monitoring troubling. Lawfully tailing a suspect is one thing; lawfully tailing a suspect for three to four months transmutes the surveillance into the realm of privacy invasion.


So, how do I keep law enforcement out of my data?

iOS and Android operating systems and apps offer some protection of CSLI location data. Start by turning off the location services for all your existing apps. Download apps that discard the location data cached on your cellphone. Get and stay off the grid by using localized Wi-Fi connections. Rely on an offline map or app that anonymizes the cellphone, encrypts the location data and permanently deletes your data within a certain amount of time. Regularly monitor new technology used by law enforcement and cybersecurity experts.


After all, when you build a better mousetrap, law enforcement will build a better mouse. Justice William O. Douglas called upon his Pacific Northwest ideals when he wrote, “The right to be let alone is indeed the beginning of all freedom.” Cheers to freedom.


[1] www.ca6.uscourts.gov/opinions.pdf/16a0089p-06.pdf
[2] https://www.eff.org/document/us-v-q-davis-opinion

Foundation for the Lost Boys and Girls of Sudan v. Alcon Entertainment (N.D. Georgia, March 23, 2016)

Posted in Copyright, Intellectual Property

Rarely does one think of copyright issues surrounding how research is conducted for feature films based on real-life events, but 54 Sudanese refugees are forging that new connection between research and copyright in a lawsuit pending in the U.S. District Court for the Northern District of Georgia in Atlanta.

Those refugees, who survived starvation, disease and militia attacks in Darfur, arrived in the U.S. and sat down with film producer Robert Newmyer and screenwriter Margaret Nagle in 2003, where they shared their life stories in a series of recorded interviews. The court found the 54 refugees to be joint authors of the script for the 2014 film “The Good Lie,” starring Reese Witherspoon: the refugees created original expression in the oral telling of their stories, which was fixed in a tangible medium in the recorded interviews. “While some common elements of the Lost Boys’ story were publicly available, Newmyer and Nagle wanted to create a movie with real, personal and emotional details otherwise unavailable to the public at large,” says the lawsuit. “Newmyer and Nagle needed the details from the Lost Boys’ personal stories and permission to use those details in a screenplay and subsequent film.” The title of “The Good Lie,” for instance, is said to refer to a lie, recounted in a taped interview, that one of the refugees told would-be captors and that saved his life.

The defendants argued that the 54 refugees’ answers in the interviews do not possess the modicum of creativity required to constitute an original work of authorship under copyright law. U.S. District Judge Leigh Martin May found otherwise in her recent 54-page ruling:

“The Interviews, however, did not consist merely of ‘ideas, facts and opinion made during a conversation,’ like the interviews by journalists in the cases Defendants cite,” responds May. “Rather, the Interviews were a creative process designed to create material for a screenplay and film. All that an ‘original work’ must possess is ‘some minimal degree of creativity’ … even a slight amount will suffice. Plaintiffs’ telling of their personal stories in response to questions designed to elicit material to create a fictional script for a feature film likely includes enough creativity to render the Interviews an original work of authorship.”

A major flaw in the case arose when it was shown that the 54 refugees do not hold U.S. copyright registrations for the taped interviews. The defendants argued that the case must be dismissed for the plaintiffs’ failure to obtain the requisite registrations; however, the court held that the failure to register was due solely to the defendants’ refusal to provide the 54 refugees with copies of the interviews. The defendants were found to have been the sole obstacle to the plaintiffs’ obtaining the tape recordings needed to satisfy the U.S. Copyright Office requirement that the tapes be submitted as specimens of the work at issue.

The court found that the lawsuit brought by the umbrella organization, the Foundation for the Lost Boys and Girls of Sudan, and the 54 refugees may proceed on claims of copyright infringement, breach of the joint venture agreement, breach of fiduciary duty, conversion of plaintiffs’ ideas, breach of the covenant of good faith and fair dealing, unjust enrichment, promissory estoppel and other theories. The complaint alleges that Robert Newmyer and others orally promised the 54 refugees that the Foundation would be one of the joint members of the movie venture and the sole beneficiary of any fundraising efforts associated with the movie. The court held in its March 2016 opinion both that a jointly authored copyright arose in the recorded interviews and that an oral promise to be joint participants in the venture is enforceable notwithstanding Newmyer’s untimely death in 2005.

This lawsuit is noteworthy even beyond the allegations of the Hollywood exploitation of Sudanese refugees as it is an ambitious attempt at a declaratory judgment over authorship predicated upon tape recorded interviews. This is a case to watch.


Synopsys Win Is a Win for Copyright

Posted in Copyright, Intellectual Property

Static timing analysis (STA) is a method of computing the expected timing of a digital circuit without requiring a simulation of the full circuit. This complicated area of measuring effective timing and power characterization flows in asynchronous circuits is often the basis of patent applications. The code of these complicated analytical tools also forms the basis of copyrights and copyright applications.
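To give a rough flavor of the idea, a toy STA-style computation can be sketched as a longest-path search over a circuit graph. The gate names and delay values below are invented for illustration only; real STA tools model far more (slew, setup/hold constraints, multiple corners).

```python
from collections import defaultdict

# Hypothetical circuit: per-gate delays in nanoseconds, and the wiring graph.
delays = {"in": 0.0, "and1": 2.0, "or1": 1.5, "out": 0.5}
edges = {"in": ["and1", "or1"], "and1": ["out"], "or1": ["out"], "out": []}

def critical_path_delay(delays, edges, source="in"):
    """Worst-case path delay through the graph, with no input-vector simulation."""
    order, seen = [], set()
    def visit(node):                      # topological order via DFS
        if node not in seen:
            seen.add(node)
            for succ in edges[node]:
                visit(succ)
            order.append(node)
    visit(source)
    arrival = defaultdict(float)          # latest signal-arrival time per node
    for node in reversed(order):
        for succ in edges[node]:
            arrival[succ] = max(arrival[succ], arrival[node] + delays[succ])
    return max(arrival.values())
```

Here the critical path is in → and1 → out, for a delay of 2.5 ns; this is the kind of analytical core whose source code can be protected by copyright regardless of patent eligibility.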

The Northern California jury’s award yesterday of more than $30 million to Synopsys against ATopTech was based not on patent infringement but on copyright infringement. The jury found that the software of ATopTech’s product “Aprisa” infringes the copyright in Synopsys’ product “PrimeTime,” and it awarded the damages allowed under the Copyright Act: the profits that ATopTech generated from the infringement ($22 million) plus the profits that Synopsys lost by reason of the infringement ($8.2 million), for a total exceeding $30 million.

This case represents a post-Alice trend in the intellectual property world. Since June 2014, when the Supreme Court struck down the patents in Alice for “abstractness,” the successful enforcement of patents has proven unpredictable. Over 70 percent of the patents challenged in inter partes review proceedings have been found invalid, a new low-water mark for patent enforceability in the U.S. As a consequence, intellectual property portfolio owners have shifted the basis of their enforcement actions from patents (where the result is less certain) to copyrights, where the result, as seen today in the Synopsys case, is more predictable and the awards can be sizable.

A Warning to Phishers in the Yahoo! Pond

Posted in Domain Name, Intellectual Property

In a recent February 15 decision by the World Intellectual Property Organization (WIPO), approximately 40 domain names were ordered transferred from their registrants to Yahoo! (WIPO Case No. D2015-2323). The domains included yahoopasswordreset.com, yahootechnicalsupport.net, yahootechsupport.net, yahoosupportnumber.com, yahoo-customerservice.com, yahoomailcustomersupport.com, yahoo-tech-support.com and others. The domain names were registered by multiple parties; however, Yahoo! was able to establish common management and control among them through the use of the same MSN email account, the same IP address and the same physical address in the registration details, as well as similar patterns of operation. Yahoo! alleged that the registrants sought to obtain remote access to users’ computers through a phishing scheme that compromised users’ personal and sensitive information for financial gain by “spoofing (Yahoo!’s) technical support.” Yahoo! filed extensive documentation to identify the patterns, and the WIPO panel was convinced.

Only two of the registrants sent communications in response to the WIPO notices. The first claimed it was doing nothing wrong, then denied any connection to the website; the other merely stated that it was no longer associated with the domains.

The WIPO panel had little trouble establishing the three prongs of the test:

1.  The disputed names are identical or confusingly similar to a trade mark or service mark owned by Yahoo!.

2.  The registrants have no rights or legitimate interest to the domains:

The panel stated that phishing is a form of internet fraud that aims to steal valuable information such as credit card numbers, Social Security numbers, user IDs and passwords. A fake website or email address is created to resemble that of a legitimate organization and is used for identity theft and other predatory activities. See, e.g., Halifax plc v. Sontaja Sunduci, WIPO Case No. D2004-0237. The domains in question resolve to websites that give the impression of being customer support or are used for email accounts.

3.  The disputed domain names have been registered and are being used in bad faith:

The panel believes that establishing a phishing website is a strong example of bad faith. “Such conduct is squarely of the type that the Policy is designed to prevent.”

While the common control may not have been immediately evident, the WIPO panelist was quite willing to find it. When dealing with multiple entities, it pays dividends to gather as much evidence as possible at the time of filing and to be prepared to go back and identify other linking factors. The Uniform Domain-Name Dispute-Resolution Policy (UDRP) process can move quickly: Yahoo! filed its initial action in late December 2015, and a decision was issued on February 15, 2016.

Building a Valuable Patent Family Using the Technique of Functional Deconstruction

Posted in Intellectual Property, Patent

There are several advantages to pursuing protection for a key innovation in the form of a “family”[1] of patents and pending patent applications. These include the possibility of obtaining protection in the form of claims[2] covering different aspects of an invention and different infringing actions (of a user, maker or seller of an invention) as those actions would be described from different perspectives (e.g., server-side, client-side, internally, externally, etc.).

It is important to note that some of those uses or perspectives may not be apparent when the application is filed, and may depend upon the business environment, business models being pursued, or technologies that are developed to solve problems that arise in the industry. Each of these aspects of a business may change over time in response to economic developments or efforts made by competitors.

A family of patents and applications may also provide the option of having a choice of which among multiple claim sets is asserted or licensed; this may (in some cases) enable the assertion of claims in some patents without putting claims in the un-asserted patents at immediate risk.[3] A family of patents also increases the likelihood that a targeted party will have to expend resources to fight an assertion or an aggressive licensing effort.

In order to obtain the benefits of pursuing protection for an invention in the form of a family of patents and applications, it is necessary that the initially filed description (termed “the specification”) and figures be sufficient to support the other perspectives, use cases or implementations, and that they do so in a way that satisfies the enablement and written description requirements of the patent law.[4] Enablement requires that the sum of the information disclosed in the description and figures contain enough information to “enable” one of ordinary skill in the art at the time the invention was made to implement the invention without encountering significant obstacles that would require further inventive efforts to overcome. In some sense, an enabling disclosure in the hands of one of ordinary skill is one sufficient to allow that person to produce the invention using standard engineering techniques and abilities. The “written description” requirement has been interpreted to mean that the sum of the description and the figures are sufficient to indicate that the inventor was in possession (i.e., they recognized the implementation approach and/or use case) of the form of the invention that is represented by the claims being asserted.

However, drafting a patent application in a way that can provide the proper support is a non-trivial exercise. One way to accomplish this goal is to follow a process that attempts to preserve the possibility that the application may be used to support claims directed to other likely use cases and implementation methods. This may be done (in part) by considering alternate industries in which the solution represented by the invention might have value and alternate ways in which a function or operation that is part of the invention could reasonably be implemented.[5]

In some sense the task is to develop a specific tool that can be used to obtain leverage in order to assist in gaining or maintaining a competitive advantage. However, certain constraints apply: (1) the tool will not be available for at least 2 to 3 years, but it must be described in detail at present; (2) the possible situations in which the tool will be needed are only generally known at present, and the ways in which it is desired to use the tool may change between now and when the tool becomes available; and (3) the specific form of the tool that will be needed is only known in a general sense at this time (e.g., the tool may need to be capable of being implemented in one of several ways, all of which must be properly supported).

Given the practical considerations and constraints, how does one prepare a patent application in a way that ensures a greater likelihood that it contains adequate support when filed for claims that are intended to protect against uses that may not be known or fully understood until years later? One way is to craft patent applications that not only describe an invention and its use by the company itself, but also provide support for claims covering a generic implementation of the solution represented by the invention, where that solution may be relevant to multiple use cases, operating environments and industries. Using this approach, the patent application is not simply a description of a single product feature or process, but instead represents a description of a solution to a problem or class of problems, with the solution being expressed in varying levels of detail and in terms of being applicable to multiple operating environments.

An important part of the process is deciding how to characterize an innovation; in order to do this, it is helpful to consider a new product feature or service from a different perspective than it might typically be considered. Instead of viewing a feature or service as providing a new option for a user, it should be viewed as providing a solution to a specific technical and/or business problem. This is easier said than done, as it requires an engineer or patent attorney to figuratively “step back” and formulate a “generic description” of a higher-level problem that is being solved, rather than focus on the details of an implementation that produces the new feature or service.

Such a generic description will typically be based on a number of considerations, including (a) the context in which the feature or service is used (using general, non-limiting terms to describe the operating environment for the feature or service); and (b) what the feature or service enables a user to do. Based on the answers to (a) and (b), it may be possible to formulate a statement that, at a high level, describes the technical or business problem for which the innovation provides a solution.

Next, it is necessary to determine other possible use cases or situations in which the invention may be applicable and hence other entities for which such a patent would have value. In order to do this, it is helpful to consider other contexts in which the same or a similar problem arises (where this is based to some degree on the generic description of the invention developed previously). For example, if the problem being solved relates to how to optimally allocate bandwidth among multiple content delivery channels given a certain constraint, then it may be helpful to consider other industries in which a similar constraint arises or in which multiple streams of “product” are delivered via different delivery channels (where, for example, those “channels” may be communications channels, different networks or different physical delivery paths, and the “products” may be email messages, downloaded content or physical packages).

One of the keys to describing an innovation in a way that can support multiple use cases and implementations is a technique I refer to as “functional deconstruction.” This is an organized way to identify the information needed to prepare an effective application and to structure that information into a description of the innovation that provides value to a company by creating a useful business asset.

Functional deconstruction is a form of system analysis, where instead of focusing on the details of a specific implementation of an invention, the operative elements or process steps are generalized and expressed as broader concepts. These broader concepts may be classes of elements or groups of processes that operate to perform a similar function (e.g., sorting, ordering, scaling, selecting, estimating, filtering, comparing, generating a type of output, etc.).

In order to practice this approach, the first step is to isolate each of the elements of a system or process that can be used to implement a generic example of the invention. In some ways, this step is similar to that of generating the “problem statement” referred to previously. The goal is to identify the primary functional elements or components of a generic implementation of the invention, and to do so using high-level and nonspecific language where feasible. For example, this might be accomplished by generating a high-level, functional block diagram of a system for implementing a generic example of the invention.

The result of this approach will be a set of elements, steps or functional components that are needed to implement a generic version of the invention. For example, there may be an element that permits a user to input data, an element that processes the input(s) to extract a specific characteristic, an element that compares that characteristic to determine a relationship between what the user entered and what the system has previously stored, and an element that transforms one form of input into a different one having a certain characteristic. It is important to make sure that the set of elements, steps or functional components includes those needed to perform what are believed to be the novel steps or functions of the innovative device, process or method.

After identifying the set of elements, steps or functional components, it is necessary to generate a list of the values or types that each element can take and still be capable of implementing the invention. This is a form of listing the “variations on the theme” for each element. The variations may be different devices that can be used, different data processing techniques (though all produce an equivalent output for purposes of the invention), different filters, different transforms, different ingredients (though all have a common characteristic), or different physical structures that can perform a function that is equivalent for purposes of the invention. Communications with the inventor may be needed at this point to isolate those “variations” that are practical in a business sense and/or technically feasible, or at least to remove those that are not. In addition to listing these variations, it may also be helpful to include several similar, but arguably different, ways of describing each function or step of an innovative method, or each function or operation of the elements of a system or device (these might be termed “substantially functionally equivalent synonyms” for the purposes of implementing the invention).

Next, it is possible to generate various combinations of the possible values or types for each functional element and from those to construct a set of possible implementations of the invention. These possible implementations should be constrained in that only those combinations that are practical in both a business and technical sense should be part of the set.
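The enumerate-then-constrain step above can be sketched mechanically. In the sketch below, the functional elements, their variations and the "practicality" filter are all hypothetical placeholders standing in for the inventor's actual review:

```python
from itertools import product

# Hypothetical functional elements and their "variations on the theme".
variations = {
    "input":      ["keyboard entry", "voice capture", "file upload"],
    "processing": ["hash comparison", "fuzzy matching"],
    "output":     ["ranked list", "single best match"],
}

# Every combination of one variation per element is a candidate implementation.
candidates = [dict(zip(variations, combo))
              for combo in product(*variations.values())]

# Constrain the set to combinations that are practical in a business and
# technical sense; this filter is an arbitrary placeholder for that review.
practical = [c for c in candidates
             if not (c["input"] == "voice capture"
                     and c["processing"] == "hash comparison")]
```

With three, two and two variations, the cross product yields twelve candidate implementations before the practicality filter is applied; each surviving combination is a potential embodiment the specification should support.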

Given the above information, a patent application can (in theory) properly support claims directed to one or more of the following:

(a) the embodiment of the invention implemented by the inventor (which is typically the product feature the inventor developed);

(b) possible uses of the invention to solve a similar problem that arises in different operating environments (and certain modifications to the implementation that are necessitated by the different environment); and

(c) multiple possible ways of implementing each of the primary functional steps or elements of the invention (which may be applicable to the original and/or to other environments).

In theory, the developed information enables the patent application to satisfy the written description and enablement requirements in so far as supporting a wide range of claims. This permits the introduction of claims that specifically address different operating environments and different implementations of the conceptual basis of the invention. As a result, the application can be “tuned” in response to recognizing a suitable strategic value proposition (such as a situation in which having leverage or a stake would be beneficial). Thus, by thinking in advance about how to describe and generalize an innovation and focusing on the underlying conceptual foundation for the invention, the value of a patent application directed to that innovation can be increased substantially. As a result, it can be used to generate one or more patents or applications that are part of a family and function as business assets.


[1] A family of patents and/or applications refers to two or more patents or pending applications that are related by virtue of having a common priority date and common source of the initial description of the invention (such as a provisional patent application that includes a sufficient amount of detail to provide an enabling disclosure). While similar in some ways (such as having basically the same description and figures), there are commonly differences between members of a family in that different patents or applications contain claims that focus on different elements, different potential infringers (e.g., end-users, service providers, makers or sellers), or different perspectives of how an invention may be practiced or used (e.g., from a server-side perspective, a client-side perspective or other perspective for a system of cooperating elements, etc.). Further, in some cases, a continuation-in-part (CIP) application may be filed that describes an invention having a priority date and basic description in common with other applications, but also adds additional information which is not entitled to the same priority date.

Note that although many of the process steps described herein are most applicable to software implemented inventions, they also are relevant to electronic and hardware structures used to implement inventive processes.

[2] The claims of a patent define the legally enforceable aspects of what a patent describes as an invention.

[3] Note that an entity may be a direct infringer of a claim in a first application and also be a direct or contributory infringer of a claim in another application, where both applications claim priority from the same original application. This is because the claims in different patent applications, even related ones, do not necessarily cover mutually exclusive inventions.

[4] 35 U.S.C. §112(a) – The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

[5] A more detailed description of the process discussed in this article may be found in my article entitled “A Key Obstacle to Implementing a Patent Strategy and One Way to Overcome It.”