Data does not exist in a vacuum. Some very lucrative careers are built on extracting multiple, divergent uses from a single data set. It follows that the fitness tracker that collects and analyzes data to encourage healthy habits may serve you well after your death. Attorneys and litigants must be cognizant of the new horizon data presents. Criminal, personal injury and perhaps inheritance law may soon rely on data in some surprising new ways.

Dead people do tell tales. Thanks in part to data on the victim’s fitness tracker, Connecticut State Police recently arrested Richard Dabate for the 2015 murder of his wife, Connie. The investigators used data to contradict Richard’s various versions of the crime.

The story the accused tells: Richard claims he was home alone when an armed intruder entered the house demanding cash. When Connie walked into the house after her spin class, the intruder immediately shot her to death, then tied Richard to a chair. After the intruder left the house, Richard freed one of his hands and triggered the home security panic alarm to summon the police.

The story the decedent tells: Connie’s fitness tracker data indicates that Connie, or someone wearing her tracker, took 1,200 steps in the house during the hour after Richard said Connie was dead. Six minutes after the tracker recorded the final steps, Richard triggered the panic alarm of the home security system.

The story the dogs tell: K-9 units found no evidence of an intruder, or of anyone other than Richard and Connie, in the house during the murder. The dogs, well, doggedly followed Richard when tasked with finding the scent of the alleged intruder.

The story the accused accidentally tells:  Richard’s computer, cell phone and social media data revealed an affair and a pregnant girlfriend.

Police obtained an arrest warrant and charged Richard with Connie’s murder, filing false reports and tampering with evidence. Fitness trackers can inform on the wearer’s health, mood, location and ability to focus. Trackers surveil more accurately than a combination of video, audio and a retired junior high teacher.

Fitness trackers are the equivalent of a plane’s black box. They are essentially lie detectors rigged with time and location stamps. Depending on its sophistication, your tracker may measure blood pressure, pulse, respiration and skin conductivity, the same things measured during a polygraph test. As premised above, the data that helps the wearer track physical activity has multiple uses, some life-saving, some life-affirming.

  • Analysis of the wearer’s vital signs saved a wearer’s life — doctors treated undiagnosed blood clots in the wearer’s lungs after she noticed her excessively high resting heart rate.[1]
  • Red lines show the GPS movements of Seattle gladiator Kelly Herron as she battled for her life during an attack in a park restroom.[2] Kelly’s intensity and relentless self-defense are obvious in the movements her tracker collected. Her account of the savage attack and fight corresponds with her fitness tracker data.


Fitness trackers are emerging as a source of evidence in civil disputes. In one pending case, the plaintiff, a former personal trainer, seeks to prove her diminished physical capacity after her injury through fitness tracker data showing she is less active than the average woman of her age. What next? Might inheritance laws and wrongful death suits soon hinge on fitness tracker data? Roll back time to the summer of 1999.

On July 16, 1999, the nation gasped and mourned the death of the son of Camelot.

John F. Kennedy, Jr., his wife, Carolyn Bessette-Kennedy, and his sister-in-law, Lauren Bessette, died when the plane Kennedy piloted crashed. The National Transportation Safety Board determined pilot error was the cause of the crash. When someone, especially someone wealthy, may be liable for an accidental death, a wrongful death lawsuit can be expected. However, when spouses die simultaneously (actually, within 120 hours of each other in Washington) without children or a document such as a trust that explicitly directs the allocation of their estates after their simultaneous deaths, the inheritance passes according to state law. Many state simultaneous death statutes require that each spouse be considered to have predeceased the other.[i] This means that the next closest living relatives of the decedents divide the estates equally, according to the degree of kinship. While inheritance laws have remained largely dormant, technology may force the legal field to evolve at a faster pace.
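
To make the 120-hour rule concrete, here is a minimal Python sketch of how a survivorship window might be applied if reliable time-of-death data existed. The function, names and timestamps are hypothetical illustrations, not an implementation of any particular statute.

```python
from datetime import datetime, timedelta

# A minimal sketch of a 120-hour survivorship rule, loosely modeled on
# simultaneous death statutes like Washington's. Not legal advice; the
# timestamps below are hypothetical.
SURVIVAL_WINDOW = timedelta(hours=120)

def survivorship(death_a: datetime, death_b: datetime) -> str:
    """Return which spouse, if either, the statute treats as surviving."""
    if death_a - death_b >= SURVIVAL_WINDOW:
        return "A survived B"
    if death_b - death_a >= SURVIVAL_WINDOW:
        return "B survived A"
    # Neither spouse survived the other by 120 hours, so each is treated
    # as having predeceased the other and the estates pass separately.
    return "simultaneous death"

# Hypothetical tracker data: one spouse's vital signs ceased 30 minutes
# after the other's. A 30-minute gap is still a "simultaneous" death.
print(survivorship(datetime(1999, 7, 16, 21, 41),
                   datetime(1999, 7, 16, 22, 11)))  # -> simultaneous death
```

As the sketch shows, tracker data revealing a 30-minute gap would not change the statutory result, but a documented gap longer than 120 hours would.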

Fitness tracker data may have altered the division of the estates of John F. Kennedy, Jr. and Carolyn Bessette-Kennedy.

Fast forward to 2017. John, Jr. and Carolyn are wearing fitness trackers when the plane crashes, and the data indicates one spouse died before the other. Might the estates be disbursed differently now? Would the heirs of the later-expiring spouse have a basis to allege a wrongful death claim? Apply this hypothetical to other areas of law:

  • If you rear-end a car, will your fitness tracker data be used to show you are chronically sleep deprived?
  • If your tracker shows your heart rate increases when you are near your suspected lover, can that evidence be introduced in a divorce?

At first this may seem far-fetched, but consider that police are asking Amazon to release recordings from an Alexa in a murder investigation. This is truly an exercise in innovation. Data in itself is neither good nor evil, neither helpful nor harmful. Make sure you know what data is being collected, and how it is being used. Ponder this as you achieve your daily step goal.


[1] Robert Jimison, “Fitness tracker clues woman in to life-threatening condition,” CNN via LOCALSYR, Apr. 4, 2017, <http://www.localsyr.com/news/health-news/fitness-tracker-clues-woman-in-to-lifethreatening-condition/686839857>.

[2] Kelly Herron (@run_kiwi_run), Instagram post, Mar. 6, 2017, <https://www.instagram.com/p/BRTqqdHD1No/?taken-by=run_kiwi_run>.

[i] Thanks to my Lane Powell colleague, Mary Lee Moseley, for her advice on the Uniform Simultaneous Death Act. http://www.lanepowell.com/20679/mary-lee-moseley/

Five months after rejecting a less detailed executive order on cybersecurity, Trump signed one on May 11, 2017 (Order). The Order notifies federal agencies that the “President will hold heads of executive departments and agencies (Agency Heads) accountable for managing cybersecurity risk to their enterprises.” Other key goals of the Order are to: (1) assess the scope and sufficiency of the U.S. cybersecurity workforce; (2) increase education opportunities to ensure the U.S. has the workforce to protect itself and be competitive internationally; and (3) increase transparency in the software market.[1] Proponents of the Order hope to strengthen the security of federal networks and protect the nation’s critical infrastructure. Coincidentally, the executive branch released the Order on the eve of one of the largest and most avoidable ransomware attacks in history, the vicious #WannaCry.

The President premises the Order on the belief that federal agencies are complicit in their vulnerability to cyber threats.

The Order identifies known, immediate threats to the security of U.S. infrastructure, including:

> Being lax about cybersecurity risk management;

> Using operating systems after the expiration of vendor support; and

> Disregarding patches and updates to remedy vulnerabilities.

Essentially, many federal agencies are complacent about cybersecurity.

The Order comes with a simple but proven toolbox.

The National Institute of Standards and Technology (NIST) is a non-regulatory division of the Department of Commerce. NIST’s original mission in the cybersecurity industry was to develop a voluntary framework to manage the cybersecurity risks that threaten the U.S. electric power grid and other critical infrastructure (Framework). The Framework’s simplicity and usability led private and public entities throughout the world and across countless industries to adopt it.

The Order requires that Agency Heads use the Framework to develop realistic risk management plans. Agency Heads must modernize equipment and software, increase the sharing of IT services across agencies, and become more diligent about monitoring and addressing vulnerabilities. The timing of the Order, coming one day before the worldwide #WannaCry ransomware meltdown, underscores how important, and how long neglected, cybersecurity and privacy concerns within the U.S. government are.

WannaCry? Here is something to cry about.

Almost 250,000 computers in 99 countries (and counting) would not have been vulnerable to the WannaCry or WannaCrypt ransomware if their users had simply installed the available patch and restarted. Here is a quick primer on the WannaCry destruction.

Ransomware 101. Ransomware locks a computer’s files; the computer operator then receives a message demanding payment of somewhere between $300 and $1 million, give or take a few bitcoin, to get the files released.

Malware known as worms spreads ransomware attacks. How does malware get on a computer? It, the worm, enters through a “hole” in the cybersecurity fence. How does a hole develop? Inattention. How do you fix holes? First, you have to monitor your fence line and know there is a hole. When you find a hole, you patch it. Then, you — the vendor or IT specialist — send updates to users, if the patch cannot be applied automatically, reminding them to restart their computers or networks to download the patch. Just like that…fixed.[2]
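
As a rough illustration of that find-the-hole, patch-it cycle, here is a minimal sketch of a patch audit that flags machines still behind the latest patched release. The version numbers and machine inventory are invented for the example.

```python
# A minimal sketch of a patch audit: compare each machine's installed
# version against the latest patched release and flag the stragglers.
LATEST_PATCHED = (4, 2, 7)   # assume this is the vendor's current release

inventory = {
    "reception-pc": (4, 2, 7),
    "records-server": (4, 1, 0),   # missed the last updates
    "front-desk": (3, 9, 9),       # running past end of vendor support
}

def needs_patch(installed: tuple) -> bool:
    """True when a machine is behind the latest patched release."""
    return installed < LATEST_PATCHED

for host, version in inventory.items():
    if needs_patch(version):
        pretty = ".".join(map(str, version))
        print(f"{host}: running {pretty}; patch and restart required")
```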

The perfect storm of using outdated software, not looking for holes and ignoring updates allowed the nightmare of WannaCry ransomware to spread. One of WannaCry’s most exploited targets is a version of software that its vendor has not sold since 2008 or supported since 2014. Users who updated the exploitable software were safe. Those who did not update to acquire the patch may have been infected.

No solution is fail-safe. Nevertheless, at a minimum, use the tools provided to you by the software vendors, IT technicians and NIST. The continued safety of the U.S. relies in part on keeping the infrastructure free of malware and fully patched. The Order contains a wide range of obligations, analyses and goals that ultimately may not be achieved within the stated timeframe. There’s a patch for that.


[1] Software is treated as a one-time capital expense, depreciating over time. This pricing model ignores the reality that software is never truly finished — it always needs a new version or update to increase efficiency and reduce vulnerabilities. This aspect of the Order merits discussion, but not here, not now.

[2] The author clearly has a problem with holes in buckets also:  http://www.beyondiplaw.com/2016/06/23/reputation-matters-dont-lose-opportunities-due-to-inaccurate-personal-data/

American and Canadian data scientists, librarians, hackers and activists have united in a “rogue” movement to locate and archive climate data maintained by U.S. agencies. Environmental scholars and librarians at the University of Pennsylvania founded DataRefuge in late 2016. The purpose of DataRefuge is to prevent the new U.S. administration from adopting former Canadian Prime Minister Stephen Harper’s treatment of a century’s worth of scientific data.

What happened in Canada? How, pray tell, did P.M. Harper’s administration treat environmental data when he was in power between 2006 and 2015? The administration:

  • Burned it;
  • Stripped it from websites;
  • Closed research libraries;
  • Expanded budgets to include salaries for assigned administrative “minders” to accompany scientists to media and scholarly events;
  • Threw hard copies of generations of data into dumpsters; and
  • Created an effective public media blackout on climate change, because the bureaucracy so frustrated journalists that media coverage of government environmental research dropped by 80%.[1]

Ok. So this sounds bad, but nothing really bad happens when a government’s scientific data is not available to the public, right? Not much. Just overfishing, lag time in addressing viruses affecting plant- and animal-based food sources, and the underestimation of the levels of radiation released from a nuclear plant in Japan.[2] By the time Canadian scientists held a mock funeral for the “death of scientific evidence” on Parliament Hill in 2013, the public had lost access to the scientific data needed to set the fishing quotas that provide stewardship of the fishing industry for decades.

What is happening in the U.S.? Newly formed oversight organizations such as the Environmental Data & Governance Initiative[3] are documenting ‘changes’ in the U.S. government’s transparency on scientific issues. “Changes” means the public can no longer access data that the government has compiled for over a century at great cost, mostly funded by public monies. The United Kingdom, meanwhile, announced £14 million (approximately $17 million U.S.) in funding “to ensure that the published outcomes of publicly funded research are made widely accessible as quickly as possible.”[4] Most would say that despite their different approaches to personal privacy, the U.S. and the U.K. share the same core ideology about government; yet the U.S. is actively hiding data that was previously available to the public even as the U.K. broadens access to data that has not yet been collected. Oh, the irony!

M.I.A. Data. What is missing from U.S. federal government websites this month? Data from the: (1) Department of Energy showing the correlation between burning coal and greenhouse gas emissions, (2) Interior Department related to the negative effects of hydraulic fracturing on federal land,[5] and (3) U.S. Department of Agriculture’s Animal and Plant Health Inspection Service identifying circuses, zoos, research labs and puppy mills that violated the Animal Welfare Act and the Horse Protection Act.[6]

None of the agencies provided prior notice of the changes to their websites, as required by the 1995 Paperwork Reduction Act. The USDA reposted some records about violators after the Humane Society threatened to sue, but data about puppy mills and zoos is still missing.[7] Puppies! Cue Sarah McLachlan’s “In the Arms of an Angel” commercial for the ASPCA. Seriously, puppies. I would like to know which businesses mistreat them so I can do something about it. Open access groups anticipate additional limits on the types of data that will be disclosed.[8] So, what can be done to keep data accessible? Are there private actors that can fill the holes in the government’s reporting?

What are the risks to citizens and political opponents? Activists are concerned that the public will not have access to the data necessary to hold the government accountable. Accountability requires knowledge. Remember when the hardest working, most experienced beat cops solved crimes on TV by wearing down the suspects and not stopping until every witness had been interviewed? Now cops solve crimes with algorithms, autoclaves and TrueAllele. Villains no longer steal gold ingots; they steal thumb drives. The shift of power toward knowledge is evident in cinema and in life. “The Firm,” a movie about photocopying records to help the FBI, is the converse of the strategy employed by Enron, which was to shred data before the government could seize it.

Dr. Bethany Wiggin voiced her concern that the politicization of science may keep “knowledge out of the hands of your political opponents,” which is “an effective win.”[9] Without data correlating the presence of certain chemicals to illnesses, lawsuits, the implementation of product safety measures and the conservation of animals will be much more difficult. Knowing that child mortality increased when the new power plant opened in town is a fine thing to know. But only when you can prove the connection between the two do you have the power to effect change.

“Rogue” U.S. data scientists, librarians and hackers are mobilized and making a difference. Absent government websites with comprehensive scientific data, the public will be forced to rely on Freedom of Information Act requests or on nonprofit journalism websites like Mother Jones or the Sunlight Foundation. Non-profits may be the best option to preserve existing data. Since November, DataRefuge alone has hosted almost 20 events in several cities, including New York, Ann Arbor, Chicago, Los Angeles and Toronto. Data scientists are not waiting to see how far the deletions go. Instead, they assume that what happened in Canada will happen in the U.S.

Sometimes, apparently, you have to play dirty to preserve data. Data scientists identify where the data is, or was, downloadable; determine whether they can scrape the data off pages with web crawlers, need to write data-harvesting scripts or must use other means; and save the data somewhere safe. DataRefuge reportedly saves climate data to the Internet Archive or a research library. The librarians (I LOVE librarians!) organize the data consistent with its original descriptors and try to keep the data free of evidence of handling. Simple, right? No. Below are links to organizations that seem to have it all sorted out for you. I do NOT support hacking; but I understand.
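
For the technically curious, here is a minimal sketch of the kind of data-harvesting script described above, written in Python with the requests and BeautifulSoup libraries. The URL is hypothetical, and a real archiving effort would also honor robots.txt and record provenance metadata.

```python
# A minimal sketch of a data-harvesting script: fetch a page, archive
# the raw HTML untouched, and list links to downloadable data files.
from pathlib import Path

import requests
from bs4 import BeautifulSoup

URL = "https://example.gov/climate/datasets"  # hypothetical page

response = requests.get(URL, timeout=30)
response.raise_for_status()

# Save the page exactly as received, preserving its original bytes.
archive = Path("archive")
archive.mkdir(exist_ok=True)
(archive / "datasets.html").write_bytes(response.content)

# Collect links that look like downloadable data sets.
soup = BeautifulSoup(response.text, "html.parser")
data_links = [a["href"] for a in soup.find_all("a", href=True)
              if a["href"].endswith((".csv", ".zip", ".nc"))]
print(f"Found {len(data_links)} candidate data files to mirror.")
```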

I don’t know about you, but if data scientists and librarians are consorting with hackers, I am terrified of what will happen if they lose. Let’s hope no fires in the U.S. are fueled by 60 years of climate data.

Harvard Business Review and The Guardian have described data scientists like this: Data Scientist: The Sexiest Job of the 21st Century and Data scientists: ‘As rare as unicorns.’ Now data scientists have an origin story.

Below are links to nonprofit organizations that are able and willing to assist others in joining the effort to preserve data before it is out of the public’s reach.  You can support and contact data preservationists at the links below:

http://www.ppehlab.org/

http://tinyurl.com/datarefute

https://data.world/

https://climate.daknob.net/

http://www.ucsusa.org/

http://foropengov.org/wordpress/

https://envirodatagov.org/

http://librariesnetwork.org/

https://sunlightfoundation.com/

https://archive.org/index.php

https://envirodatagov.org/version-tracking/

https://docs.google.com/spreadsheets/d/12-__RqTqQxuxHNOln3H5ciVztsDMJcZ2SVs1BrfqYCc/edit#gid=0

Endnotes


[1] Palen, Wendy, When Canadian Scientists Were Muzzled by Their Government, NY Times, Feb. 14, 2017, https://www.nytimes.com/2017/02/14/opinion/when-canadian-scientists-were-muzzled-by-their-government.html?rref=collection%2Ftimestopic%2FHarper%2C%20Stephen%20J.&action=click&contentCollection=timestopics&region=stream&module=stream_unit&version=latest&contentPlacement=1&pgtype=collection; and Sowunmi, Jordan, The Harper Government has Trashed and Destroyed Environmental Books and Documents, Vice, Jan. 15, 2014, https://www.vice.com/en_ca/article/the-harper-government-has-trashed-and-burned-environmental-books-and-documents.

[2] Manasan, Althea, FAQ: The issues around muzzling government scientists, CBC News, May 20, 2015, http://www.cbc.ca/news/technology/faq-the-issues-around-muzzling-government-scientists-1.3079537.

[3] https://envirodatagov.org/.

[4] Research Councils UK, RCUK announces 2016/17 Block Grant for Open Access, Oct. 19, 2016, http://www.rcuk.ac.uk/media/news/161019/.

[5] Harmon, Amy, Activists Rush to Save Government Science Data – If They Can Find It, Save the US EPA, Mar. 7, 2017, https://savetheusepa.org/2017/03/07/activists-rush-to-save-government-science-data-if-they-can-find-it/.

[6] Chan, Melissa, The Government Purged Animal Welfare Data.  Now the Humane Society is Threatening to Sue, Time, Feb. 6, 2017, http://time.com/4661446/usda-animal-website-humane-society-lawsuit/.  

[7] Lewis, Lauren, Puppy Mill Inspection Reports Still Missing From Partially Restored USDA Database, World Animal News, Feb. 21, 2017, http://worldanimalnews.com/puppy-mill-inspection-reports-still-missing-partially-restored-usda-database/.

[8] In January, a bill was introduced to limit “the collection and disclosure of data about racial disparities in fair housing.”  Larson, Selena, Why Trump’s election scares data scientists: Trump administration removing info from websites, CNNMoney, Feb. 25, 2017, http://money.cnn.com/2017/02/25/technology/data-refuge-saving-data/index.html.

[9] Id.

The Privacy Shield in a nutshell. 

The Privacy Shield permits U.S. businesses to process and control the personal data of individuals, aka data subjects, located in the European Union (EU). Without the Privacy Shield, U.S. businesses risk losing hundreds of millions of dollars if they cannot transfer personal data from the EU — businesses that cannot establish offices in the EU or negotiate agreements with each of the EU member countries will forgo commerce with EU companies and data subjects. The U.S. government has agreed to enforce the Privacy Shield against U.S. businesses on behalf of EU data subjects, and it will have to execute its enforcement duties with diligence. You might say U.S. government agencies must bite as hard as they bark.

Is certification the best option for your company?

The EU’s privacy standards protecting the data of its citizens are much stricter than those of the U.S. The EU requires U.S. companies to comply with the privacy principles that comprise the EU-U.S. Privacy Shield. The U.S. Department of Commerce (Commerce Department) oversees U.S. businesses’ applications and certifications under the Privacy Shield. Your company may decide to be certified under the Privacy Shield if your business is subject to the jurisdiction of the Federal Trade Commission (FTC) or Department of Transportation (DOT), and EU citizens access your website, do business with you, or you conduct business in an EU member country. Each circumstance must be analyzed on a case-by-case basis. Issues such as volume, whether you are a data controller or processor, and whether you have multinational affiliates have bearing on your analysis.

How does the Privacy Shield compare to the Safe Harbor?

The Privacy Shield is more stringent than the Safe Harbor; some privacy principles that were merely guidelines under Safe Harbor are now affirmative covenants under the Privacy Shield. The U.S. government also must meet a higher standard under the Privacy Shield. The EU obligates the FTC and DOT to investigate and enforce penalties against U.S. companies that violate the Privacy Shield Principles.

What is the cost of certification?

While certification under the Privacy Shield is voluntary, U.S. businesses that receive personal data transfers from the EU must meet the same requirements as U.S. businesses that are certified. The fees for certification are based on the business’s annual revenue: the minimum fee is $250 per year for up to $5 million in revenue, and the maximum fee is $2,500 per year for more than $500 million in revenue. U.S. companies that are required to resolve disputes before an EU Data Privacy Authority must pay additional fees.
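
As a back-of-the-envelope illustration, the two endpoints of the fee schedule quoted above can be expressed as a simple lookup. This sketch includes only those endpoints; the intermediate tiers that the actual schedule defines are deliberately left unfilled.

```python
from typing import Optional

# A minimal sketch of the two fee endpoints named above. The actual
# schedule defines intermediate revenue tiers that are omitted here.
def annual_fee(revenue_usd: float) -> Optional[float]:
    if revenue_usd <= 5_000_000:
        return 250.0            # minimum: up to $5 million in revenue
    if revenue_usd > 500_000_000:
        return 2_500.0          # maximum: more than $500 million
    return None                 # intermediate tier: consult the schedule

print(annual_fee(3_000_000))    # -> 250.0
print(annual_fee(750_000_000))  # -> 2500.0
```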

The application process itself is no more complicated than most other business certification processes.  The “real” cost of becoming certified under the Privacy Shield will likely be in personnel resources, especially if the business is not already compliant with the Safe Harbor rules.  For example, the business must dedicate personnel to develop privacy policies, educate employees about the policies, monitor the actions of employees and third party data processors, and take action against parties who violate the policies. There are also costs associated with verifying that third party processors update their security and privacy policies in step with Privacy Shield requirements.  You can review a summary of the five basic steps U.S. businesses must take to apply for certification here. You can review the seven Privacy Shield Principles here.

Alternatives to self-certification under the Privacy Shield.

It may be more cost effective for a business with limited personnel to use a private company to assist with the certification process, establish compliant policies and procedures, and provide ongoing monitoring, auditing, education and advice. The Commerce Department maintains an ever-expanding list of companies that assist with data transfers to U.S. companies in compliance with the Privacy Shield,[1] the Madrid Resolution, the U.S.-Swiss Safe Harbor and the privacy rules adopted by the Asia-Pacific Economic Cooperation, covering transfers from the EU, the European Economic Area, Switzerland and Asia-Pacific economies. When evaluating private companies, you should pay close attention to which party to the agreement is liable for violations of the Privacy Shield and the extent to which the contract covers transfers of data to third parties.

Binding Corporate Rules (BCR), model contract clauses and unambiguous consent are also options you may consider if self-certification is infeasible for your business. BCRs are available to multinational companies: an affiliated company located in the EU may transfer personal data to its U.S. location subject to BCRs. Model Contracts, drafted by the European Commission, require U.S. businesses to provide adequate levels of protection of the privacy of data subjects. If you are a data processor, not a data controller, you may have the option of entering into a Direct Data Processing Agreement or adopting the Model Clauses for Processors to eliminate the negotiation of broader issues that apply to controllers but not processors. If you receive data from a limited number of known EU data subjects, the most cost-effective way for you to transfer their data to the U.S. would be to obtain from each of them a clear, unambiguous statement that they freely permit the transfer of their personal data.

What are the possible repercussions of not complying with the Privacy Shield?

The FTC can investigate alleged violations of the Privacy Shield, enter consent orders and findings of contempt, and impose administrative penalties. Currently, administrative penalties may be up to $40,000 per violation, or per day for continuing violations. Additional penalties include the FTC’s removal of a company from the Privacy Shield list, which can result in liability under the False Statements Act if the company continues to claim it is certified. Learn from the lessons of others — the FTC has issued record-breaking fines in the past two years, including a $1.3 billion fine issued in the past month. EU data owners, the EU Commission and/or a data privacy authority may also have private rights of action against a U.S. company that violates the relevant rules.

 

The wrap-up:

  • Assess how your U.S.-based business receives personal data from EU data subjects. Based on the volume, your relationship to the data owners, and whether you process or control the data, you may have to designate an employee or contractor who is knowledgeable about data privacy and cybersecurity to monitor, update and enforce the policy and verify that the privacy notice meets all applicable state, federal and international rules.
  • Consult all parts of your organization to assess which option is best for you. Privacy is not a distinct division within your company. Verify that operations, human resources and policy enforcement work in concert to maintain the standards of the Privacy Shield.

[1] See the “Privacy Shield List” at https://www.privacyshield.gov/list.

What’s the Case About? In re Nickelodeon Consumer Privacy Litigation[1] is a multi-district consolidated class action filed on behalf of children under the age of thirteen, alleging that Viacom used child-directed websites it owned to collect, without parental consent, data from the class members, which it then provided to co-defendant Google. The data Viacom captured from children included their gender, birthdate, IP address, the webpages they visited and the names of the videos they viewed. The court considered an issue of first impression: whether an IP address is personally identifiable information (PII) under the Video Privacy Protection Act (VPPA), and whether the collection of the data constituted intrusion upon seclusion under New Jersey law. Plaintiffs argued that the vulnerability of children, coupled with public aversion to mining them for data, supported Viacom’s liability.[2]

VPPA allegations dismissed: The court held that Viacom did not violate the VPPA by collecting the IP addresses of children. The decision was based, in part, on the precedent set by In re Hulu Privacy Litigation.[3] The Hulu court determined that static digital identifiers such as IP addresses identify the location of a computer, which, without additional information, cannot be used to identify an individual. Under this rationale, an IP address is not PII, because an address alone cannot “reasonably” lead to the identification of a person. The court also noted that the VPPA is simply too old and brittle to encompass technology so distant from its origins as a by-product of Blockbuster, Erol’s Video Club and Hollywood Video stores. A nuance as to why Google escaped liability under the VPPA is touched upon below.

New Jersey state law claims remanded: The court remanded the claim against Viacom for violation of the New Jersey intrusion upon seclusion law. The court did not look favorably upon Viacom’s failure to honor its notice to parents that it would not collect any data from children.

The allegations against Viacom: Viacom owns the websites Nick.com and NickJr.com (Nick Sites), both of which are associated with the Nickelodeon channel. The Nick Sites offer games and streaming videos to children and included this notice to parents:

HEY GROWN-UPS: We don’t collect ANY personal information about your kids. Which means we couldn’t share it even if we wanted to![4]

When children registered on one of the Nick Sites, they were assigned an avatar nickname based on a Nickelodeon cartoon character of the same gender and approximate age as the child. The plaintiffs alleged that Viacom used first-party cookies it placed on the children’s computers to obtain information about which games and videos the children accessed. Viacom disclosed the information it collected to Google and permitted Google to place ads on the Nick Sites.

The allegations against Google: The plaintiffs alleged that Google (1) placed third-party cookies via advertisements on the computers of children who accessed the Nick Sites, (2) used those cookies to track the children on any website displaying a Google ad, and (3) used “Doubleclick.net cookies”[5] to track the browsing of whoever used the computer across any website Google owned, such as Gmail, YouTube and Google Maps.

Analysis of the VPPA: Congress enacted the VPPA after the 1987 Senate Judiciary Committee’s hearings regarding Supreme Court nominee Robert Bork. During the hearings, a newspaper obtained and publicized a list of the titles of 146 films Judge Bork or members of his family rented from a local video store.[6] The list of videos was, even by 1987 standards, unremarkable — not a single NC-17 film on the list. Congress agreed, however, that a person’s video viewing history should be private. Consequently, under the VPPA, a user must give permission for his or her video viewing data to be shared. How does this translate to current technology? It doesn’t. The court likened applying the VPPA to internet technology to putting a square peg in a round hole.[7] Additionally, the court referred to the VPPA as a rigid law that lacks the flexibility of the Children’s Online Privacy Protection Act (COPPA) to effectively regulate technology that is “in flux.”[8]

The key definitions under the VPPA are:

Consumer: any renter, purchaser or subscriber of goods or services from a video tape service provider.

Video tape service provider: any person, engaged in the business, in or affecting interstate or foreign commerce, of rental, sale or delivery of prerecorded video cassette tapes or similar audio visual materials.

Personally identifiable information (PII): includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.

18 U.S.C. § 2710(a). A violation of the VPPA occurs when “[a] video tape service provider … knowingly disclose[d], to any person, personally identifiable information concerning any consumer of such provider.” Id.

The VPPA was created to protect information specific to viewers of movies. The court noted that if the definition of PII were expanded in all statutes to include an IP address, there would be no end to litigation, and the distinctions between the groups protected by certain statutes would be eroded. Congress’ decision to omit a new definition of PII from the 2013 amendment of the VPPA further emphasized that the VPPA “serves different purposes, and protects different constituencies, than other, broader privacy laws.”[9] For example, if “Google were to start purposefully leaking its customers’ YouTube video-watching histories,” the VPPA “would almost certainly” be violated.[10]

Extending the VPPA to regulate current technology would likely result in unlimited violations. Defining an IP address as PII within the context of the VPPA would mean that the disclosure of an IP address to any Internet company with registered users might trigger liability, given that an IP address is regularly transmitted to an Internet service provider (ISP) with each search.[11] The court also pointed out that there is a spectrum of PII, with first and last name at one end, and an IP address at the other, lower, end of the spectrum, given that an IP address alone may be insufficient to identify a person. The case the court cited to illustrate the need for a subpoena to identify a person is a copyright infringement case, Warner Bros. Records Inc. v. Walker, 704 F. Supp. 2d 460 (W.D. Pa. 2010). Warner Bros. needed a subpoena to identify the student who was assigned the IP address used to illegally download some songs. The student, who shared a room with multiple roommates, possibly would not have been identified without a subpoena, given that several people may have used the computer. It was not “reasonably” likely that Warner Bros. could identify the person responsible for the downloads without a subpoena. Understandably, a subpoena may be necessary in a fluid environment such as a college, where multiple people may have access to a computer.

Time-out: It’s one thing for Warner Bros. to need help from the college to identify which of multiple people may have used an IP address assigned by the college. It’s something altogether different when Google, which the court describes as “a company whose entire business model is purportedly driven by the aggregation of information about Internet users,” wants to identify a person. The plaintiffs’ amicus very astutely provided some real-world perspective about what happens when Google wants to find out who you are: “concluding ‘that Google is unable to identify a user based on a combination of IP address … and other browser cookie data … would be like concluding the company that produces the phone book is unable to deduce the identity of an individual based on their telephone number.’”[12] Enough said. Resume play.

The court affirmed the dismissal of the intrusion upon seclusion claim against Google: Although the court acknowledged that many people, and some courts, find the monetization and collection of data from children without parental consent repugnant, those acts alone did not establish a claim for intrusion upon seclusion. Under New Jersey law, an intrusion upon seclusion claim requires a showing of (i) an intentional intrusion (ii) upon the seclusion of another that is (iii) highly offensive to a reasonable person.[13] The court disregarded the fact that children, instead of adults, were tracked, because third-party cookies serve a legitimate commercial purpose for advertisers and Google uses them on the Nick Sites the same way it uses them on other, non-child-directed sites.

This is why Viacom may be liable for intrusion upon seclusion: When Viacom notified parents that it did not collect any personal information about children, it was reasonable for a jury to decide that parents may have permitted their children unsupervised access to the Nick Sites based on that disclaimer. If the parents of the plaintiff class members didn’t already have an expectation of privacy, Viacom’s notice created one. Viacom’s violation of that trust by surreptitiously collecting data from children could be considered highly offensive under the applicable law.

Summary

An IP address has been likened to a computer’s fingerprint. If a statute identifies an IP address or other static assigned number as PII, that number is a great starting point for identifying a user. For example, under COPPA and HIPAA, an IP address sits as high on the spectrum of PII as a user’s first and last name. The rationale behind the ranking of an IP address in these statutes is that sometimes it is reasonable to expect that an IP address can lead you to the user. Who’s looking for you also matters. Google, using third-party cookies, can reasonably use your IP address to identify you.

Sometimes an IP address can only identify a computer, i.e., it cannot “reasonably” be used to identify you. Without a subpoena or some alternate means of creating a mosaicked identity, you may have to resort to battling “John Doe” until a subpoena grants you the right to retrieve additional information about the IP address. In these instances, IP addresses are not considered to be PII. At the end of the day, you have found a computer. Good job.

 

What did we learn?

  • Don’t oversell your privacy policy. Viacom faces potential liability because it violated its own privacy notice to parents.
  • Do the right thing. Don’t get information from children under the age of 13 that is defined as PII under any privacy law without parental consent. These days there are few things about which 90 percent of Americans agree — Viacom’s actions on the Nick Sites are considered to be highly offensive.
  • Now that you know that sometimes children’s browsing history, IP address and other information can be collected through third-party cookies without parental consent, educate your children. The Federal Trade Commission provides guidance on helping children distinguish between ads and entertainment: https://www.commonsensemedia.org/website-reviews/admongo.
  • Understand that a trade-off for having the world at your fingertips may mean sharing your computer’s fingerprint with inquiring minds.

Stay safe.


[1] In re Nickelodeon Consumer Privacy Litigation, No. 15-1441, 2016 WL 3513782 (3d Cir. June 28, 2016).

[2] Id. at * 4 (alleging that (1) targeting ads to children is more profitable than targeting ads to adults, in part, “because children are generally unable to distinguish between content and advertisements;” (2) 80% and 90% of 2,000 adult respondents, respectively, oppose an advertiser’s tracking of children and believed advertisers should obtain a parent’s permission before installing cookies on a device used by a minor child; and (3) companies can use “browser fingerprinting” to identify specific users).

[3] In re Hulu Privacy Litigation, No. 11-CV-3764 (LB), 2014 WL 1724344 (N.D. Cal. Apr. 28, 2014).

[4] In re Nickelodeon, 2016 WL 3513782, at *3.

[5] Id.

[6] Michael Dolan, “The Bork Tapes,” City Paper, Sept. 25–Oct. 1, 1987, at 1.

[7] In re Nickelodeon, 2016 WL 3513782, at *15.

[8] Id. at *16-19.

[9] In re Nickelodeon, 2016 WL 3513782, at *19.

[10] Id. at * 17.

[11] Id. at * 20-21.

[12] Id. at * 20.

[13] Hennessy v. Coastal Eagle Point Oil Co., 609 A.2d 11, 17 (N.J. 1992) (citing Restatement (Second) of Torts § 652B (1977)).

“A reputation once broken may possibly be repaired, but the world will always keep their eyes on the spot where the crack was.” ― Joseph Hall

Consumers may be injured by inaccurate data that they cannot review or correct. There’s a hole in the bucket, dear Congress.[1] 

The children’s song “There’s a Hole in the Bucket” exemplifies the conundrum many consumers experience when they are denied opportunities or inappropriately solicited. Data brokers maintain files with over 1,500 pieces of personal data on each of us. There are over 3,500 data brokers in the U.S. Only about one-third of them permit individuals to “opt out” of inclusion in their data banks, usually for a fee. Unless and until you recognize an unexplained pattern of lost job opportunities or rejected apartment applications, or are targeted by unsolicited marketing, you may not care what data brokers maintain in your files.

Imagine this: You are 22 years old and gung-ho to use your brand-spanking new business organization degree as an entry-level traveling corporate trainer. You grant recruiters the right to conduct background checks after they indicate their interest in you based on your resume. You get rejection after rejection. You finally muster the courage to call a recruiter and ask why, and she explains that you are not a good fit based on background information that describes you as 39 years old, the parent of four young children, having a Ph.D. and a sufferer of agoraphobia. None of this information is true.

Your prospective employers may have relied on information provided by data brokers or consumer reporting agencies (CRA) in determining that you are not a viable candidate. Now that you know inaccurate information is being reported about you, you are confident that you can correct your files and the employers will reverse their decisions. You can, if the inaccurate information is from a CRA. But if data brokers provided the incorrect information, you will find yourself in the miserable position of knowing your files are wrong and being powerless to correct them. You know prospective employers have considered inaccurate information about you, but you don’t know which employers relied on which data brokers or which inaccuracies in your files made you undesirable for hire. You don’t know how many data brokers have files on you or what evidence you can provide to disprove the inaccurate information about you. You and “Dear Henry” share the predicament of wanting to fix the hole in the bucket but lacking the tools to do so.

Let the screening begin.

Many decisions about consumers, job applicants and second dates are based on inaccurate information provided by data brokers. Data brokers sell consumers’ personally identifiable information (PII) to be used in marketing, people searches, fraud detection and risk management. The FTC defines data brokers as “companies that collect information, including personal information about consumers, from a wide variety of sources for the purpose of reselling such information to their customers for various purposes, including verifying an individual’s identity, differentiating records, marketing products, and preventing financial fraud.”[2] The Fair Credit Reporting Act (FCRA) applies to CRAs like Experian, TransUnion and Equifax, not to data brokers. CRAs must take reasonable steps to ensure the accuracy of the consumer PII they distribute, and they must provide consumers the opportunity to review and correct their records. There are zero federal statutes or regulations governing data brokers in this regard. If enacted as introduced in 2009, the Data Accountability and Trust Act (DATA) would provide procedures for individuals to audit and verify the accuracy of data held by data brokers. The swath of data collected by data brokers is astounding and troubling. Add the fact that data brokers are generally less expensive to use than CRAs, and employers and individuals are at a distinct disadvantage relative to data brokers.

Here’s what’s in the bucket.

Reports about consumers are based on information showing what they own or use; who they are with, have lost or are fighting; how they make, save and spend their money; and what interests or ails them, including mental, genetic and “other” diseases that may be embarrassing.[3] For example, when you register your car, record a deed of trust, activate a warranty, join a Facebook group, fill a new prescription, or get sued, married, divorced or widowed, data brokers collect that information. It is tacitly understood that PII from data brokers is not accurate and that it enables discrimination in hiring and in the provision of resources and opportunities.[4] Consumer advocacy groups report that information used on people-search sites is not vetted — the consumer has the responsibility of figuring out which of 67 people named “Pamela Samuelson” authored Protecting Privacy Through Copyright Law?. Marketing information is more accurate, but still unreliable. For example, a data broker may correctly report that a household member purchased a new car, but err by addressing car wash coupons to the resident third-grader. Risk mitigation information is the most accurate, because it is expected to at least correspond to the correct person, even if the results are outdated.
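
To see how 67 people named “Pamela Samuelson” can end up in one file, here is a toy Python sketch, with invented records, of the naive name-only matching that unvetted people-search sites effectively perform.

```python
# A toy illustration of why unvetted people-search matching goes wrong:
# keying records on name alone merges different people into one
# "profile." Every record below is invented.
records = [
    {"name": "Pamela Samuelson", "city": "Berkeley", "note": "law professor"},
    {"name": "Pamela Samuelson", "city": "Tulsa", "note": "filed bankruptcy"},
    {"name": "Pamela Samuelson", "city": "Portland", "note": "bought a new car"},
]

def naive_profile(name: str) -> dict:
    """Merge every record sharing a name -- the bug, not a feature."""
    merged = {"name": name, "facts": []}
    for record in records:
        if record["name"] == name:
            merged["facts"].append(f'{record["note"]} ({record["city"]})')
    return merged

# One fabricated profile now carries three different people's histories.
print(naive_profile("Pamela Samuelson"))
```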

This brings to mind a television character who changed his name because he shared it with a well-known artist who had been convicted of sex crimes. His new name, unfortunately, was shared with a well-known artist who had been convicted of murder. How do you feel knowing that you may be judged by the bad report of someone who has a name similar to yours? The identities of the entities using bad data may influence your answer.

Who’s looking in the bucket?

Financial institutions, government agencies, political organizers and insurance companies use the services of data brokers. As of May, the customers of one of the largest data brokers included “47 Fortune 100 clients; 12 of the top 15 credit card issuers; seven of the top 10 retail banks; eight of the top 10 telecom/media companies; seven of the top 10 retailers; nine of the top 10 property and casualty insurers; three of the top five domestic airlines; and six of the top 10 U.S. hotels.”[5] How likely are you to recognize that, after your namesake niece filed for bankruptcy, the hotel prices you were offered increased by 18%?

Can you look in the bucket?

No. If data brokers filled the bucket, no federal law gives an individual the right to look in the bucket. A subpoena or other discovery procedure may be your best option to see your file. If a CRA filled the bucket, yes, an individual has the right to review and correct the information in the bucket.

What can you do?

  • Educate yourself about your rights. See whether your state has any laws that offer you protection. California, for example, shields victims of violent crimes from having their PII publicized on the internet.
  • Opt out of as many of the data broker sites as is reasonable. Visit this website to get started: http://www.computerworld.com/article/2849263/doxxing-defense-remove-your-personal-info-from-data-brokers.html.
  • Lobby your federal and state legislators and align yourself with organizations that advocate for the right to control your PII.

Stay safe.


[1] There’s a Hole in the Bucket, Songwriters: Harry Belafonte, Odetta Felious Gordon © Next Decade Entertainment, Inc.

[2] FTC, Protecting Consumer Privacy in an Era of Rapid Change, at 68 (Mar. 2012).

[3] Steve Kroft, The Data Brokers: Selling Your Personal Information, 60 Minutes, CBS, Mar. 9, 2014, http://www.cbsnews.com/news/the-data-brokers-selling-your-personal-information/.

[4] Exec. Office of the Pres., Big Data: Seizing Opportunities, Preserving Values, pp. 51-53, May 2014, http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf.

[5] U.S. Senate Commerce Committee, A Review of the Data Broker Industry: Collection, Use, and Sale of Consumer Data for Marketing Purposes  (December 2013).

“He’s not safe, but he’s good.” (Referring to Aslan, the Lion, in The Lion, the Witch and the Wardrobe.) ― C.S. Lewis

I planned to write about the inspired, better-than-sliced-bread security option of using fingerprint authentication to protect our mobile devices. That imploded. In 2014, and earlier this year, courts in Virginia and California, respectively, issued warrants requiring suspects to provide their fingerprints to unlock phones so the government could access potentially incriminating evidence believed to be stored there.[1]

All “do,” no “talk.”

In contrast, courts have not forced individuals to reveal the passcodes used to secure their mobile devices.[2] What gives? Albert Gidari, the director of privacy at Stanford Law School’s Center for Internet and Society, explains that the Fifth Amendment protects thoughts, not things: “Unlike disclosing passcodes, you are not compelled to speak or say what’s ‘in your mind’ to law enforcement,” Gidari said. “‘Put your finger here’ is not testimonial or self-incriminating.” For example, you can be compelled to provide the key to your heart, but no one can make you reveal what is in your heart.

Why chain the door when all the windows are open?

The fingerprint authentication platform is only as good as its gaps. The maker of one of the top mobile operating systems has stored fingerprints as unencrypted images in local storage. Fingerprint data stored by two cellphone companies was breached despite the use of TrustZone, ARM’s hardware-based security technology.[3] WhatsApp, which was mentioned in a previous blog, has also experienced data theft.

Studies reveal the ineffectiveness of the security software data providers give their users. The software is largely ineffective because … PEOPLE DON’T DOWNLOAD IT! People shred their mail but don’t download the platforms devised to protect their privacy when it is formatted as data. The most common reasons people don’t download updates include: (1) suspicion that the updates are malware sent by hackers; (2) belief that an update won’t benefit them if they are otherwise satisfied with their current service; (3) lack of understanding that updates provide security patches; and (4) expectation that updates will take too long or use too much memory. You can check the authenticity of updates to your operating system by visiting the app store or your manufacturer’s website, or by conducting an internet search for information about update releases.

A critical flaw in fingerprint authentication is hiding in plain sight.

The convenience and high-tech sexiness of using fingerprint authentication on our phones have clouded our judgment regarding some of the most basic things we know about security. Fingerprints have a characteristic that runs counter to a cornerstone of cybersecurity: fingerprints are immutable. If someone steals your password, you can change it. Quarterly mandatory password expirations illustrate the adage that the best password is a new password.[4] Heads would spin and roll in IT departments the world round if it were decreed that passwords would never be changed again.

And just like that, the floodgates are open.

A much-touted advancement of fingerprint authentication is that no one can steal your fingerprint. That’s fine, but the image of your fingerprint can be stolen like any other image. The image of your fingerprint can give someone access to apps, browsers, photo albums, cloud files and online accounts, some of which may be secured by passwords cached in your phone history. Finally, does it make sense to have an expectation of privacy in our fingerprints? The legal answer is no. Since we literally leave our fingerprints everywhere, maybe we should reconsider relying on them to secure our privacy. Our unspoken thoughts are inalienable property. Fingerprints, apparently, are just keys.

Convenience is usually a good thing. Good things may not be safe. We each have to weigh whether the convenience of opening our phones with a finger to swipe instead of entering our PINs is worth the risk of losing our privacy.

Stay secure.


[1] Matt Hamilton, The government wants your fingerprint to unlock your phone. Should that be allowed?, LA Times, Apr. 30, 2016, http://www.latimes.com/local/california/la-me-iphones-fingerprints-20160430-story.html; and Quinton Plummer, Virginia police can force you to unlock your smartphone using fingerprint: Here’s why, Tech Times, Nov. 3, 2014, http://www.techtimes.com/articles/19288/20141103/virginia-police-can-force-you-to-unlock-your-smartphone-using-fingerprint-heres-why.htm.
[2] SEC v. Huang, No. 15-269 (E.D. Pa. Sept. 23, 2015); Virginia v. Baust, No. CR14-1439 (Va. Cir. Oct. 28, 2014).
[3] Shubham Verma, Why I’m Not a Fan of Smartphones With Fingerprint Scanners, Gadgets360, Oct. 30, 2015, http://gadgets.ndtv.com/mobiles/opinion/why-im-not-a-fan-of-smartphones-with-fingerprint-scanners-759160.
[4] To be fair, there is some belief that changing passwords regularly is more harmful than not. E.g., Andrea Peterson, Why changing your password may do more harm than good, The Washington Post, Mar. 2, 2016, https://www.washingtonpost.com/news/the-switch/wp/2016/03/02/the-case-against-the-most-annoying-security-measure-virtually-every-workplace-uses/.

Hackers have upped the ante. Data controllers wax nostalgic about the good old days when data was simply stolen outright. Back then, in 2013, there was a sense of fair play. Trolls did troll things. Assuming the victim implemented and maintained a “comprehensive information security program”[1] to protect the type of data that was compromised, its insurance carrier may have provided coverage, and the issue was resolved. Now, ransomware, extortion and data sabotage may lead to ongoing issues for data controllers. Each of these types of cyberattacks is evolving in ways that are truly devious.

Data theft is to head cold as ransomware and extortion are to chicken pox.    

If the theft of your data is not on the scale experienced by Target, Wyndham or LivingSocial, your operations may recover quickly. Think of a localized data breach as a head cold. After the breach, you will be out of commission for a while, but if you take steps to avoid reinfection (implement a privacy policy), protect others from your symptoms (use firewalls and authentication) and complete your course of treatment (follow your plan and comply with breach notification laws), you will ultimately be fine.

Think of your computer network as patient zero. Ransomware is akin to the chicken pox — it hits fast, is contagious, and the signs of the illness can stay with you until you adopt an assumed name. Ransomware previously “only” locked your keyboard or uploaded unsavory files to your system. Attackers would notify you of the amount of Bitcoin required to regain access to your data or remove the offensive files. In 2013, hackers significantly increased their use of ransomware to (1) infect your system and (2) install a cryptographic key to lock and unlock your data. Once in, the attackers would gauge whether to access your financial accounts directly or send a ransom demand with a countdown showing when your data would become permanently inaccessible. Now, ransomware such as CryptoWall spreads to and infects the shared drives that connect to patient zero. If your whole office is infected, a quarantine may be required until all viruses are eliminated.

Data sabotage initially seems to be an asymptomatic attack, but can quickly become fatal. 

Hackers using data sabotage can remain innocuous while they mine data. Only when hackers know enough about your data to cripple you or enrich themselves is the true measure of their destructive nature realized.[2] Data sabotage may occur over a period of time and employ many distinct steps. Manipulation of the numbers reported in a Form 10-Q can cause a corporation’s stock to crash and affect an entire industry. Competitors may also find, and exploit, vulnerabilities in your security.

A case with espionage, extortion and pseudonyms is a sign of things to come.

Wire Swiss GmbH (Wire Swiss) is currently seeking a declaratory judgment and alleging civil extortion against its competitor, Quiet Riddle Ventures dba Open Whisper Systems, and Moxie Marlinspike.[3] The litigants develop end-to-end encrypted messaging software. Wire Swiss claims the defendants threatened to accuse Wire Swiss of infringing copyrighted software code and to publicize “vulnerabilities” in the security of Wire Swiss’ encryption software unless Wire Swiss paid a $2 million licensing fee. Wire Swiss claims that the specter of publication of security vulnerabilities in its encryption software could cause catastrophic damage to its reputation. Wire Swiss further claims that the defendants’ threat coincided with the announcement that their Signal software had been incorporated into the WhatsApp messaging application. If true, the plaintiff’s allegations are a prime example of how data saboteurs profit from their hacks. This case may also be fodder for legislation to create a safe harbor for security self-evaluation.

The best policy may be to trust no one.

Developing a zero-trust, multilayer security plan may be your best method of protection. Here are some common tips that may help keep your data virus- and hack-free (a short sketch of the “eventing” tip follows the list):

  • Encrypt or anonymize your data.
  • Erect firewalls.
  • Invest in “anti”— anti-virus, anti-malware and anti-spyware software.
  • Update your software regularly.
  • Consider using a “kill switch”— when suspicious events happen, the IT department should automatically be notified and the network should shut down if no protective measures are taken.
  • Ensure granular access control is used.
  • Regenerate session identifiers on every request.
  • Use two-factor or multi-factor authentication.
  • Log errors instead of displaying them to potential hackers.
  • Revoke credentials when certain events occur.
  • Implement “eventing” so you know when certain categories of data are accessed and/or modified.
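
To make the last tip concrete, here is a minimal sketch in Python of “eventing”: log, and alert on, every access to sensitive categories of data. The category names, field names and function are hypothetical, not drawn from any particular product.

```python
# A minimal sketch of "eventing": record an audit event whenever sensitive
# categories of data are read or modified. All names are hypothetical.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

SENSITIVE_CATEGORIES = {"ssn", "health", "payment"}

def record_access(user: str, category: str, action: str) -> None:
    """Write an audit event; a real system would also page the IT team."""
    event = {
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "category": category,
        "action": action,  # e.g., "read" or "modify"
    }
    if category in SENSITIVE_CATEGORIES:
        audit_log.warning("sensitive-data event: %s", event)
    else:
        audit_log.info("data event: %s", event)

record_access("jdoe", "payment", "read")
```

Paired with the “kill switch” tip above, events like these are what would trigger the automatic notification and shutdown.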

Sadly, no safeguard is guaranteed. Using multiple defenses will, at a minimum, ensure you are not the slowest one running from the bear. Good luck, and may your houses remain pox-free.


[1] Fed. Trade Comm’n v. Wyndham Worldwide Corp., No. 2:13-cv-1887, at *4 (D.N.J. Dec. 9, 2015).
[2] See Edmund Lee, “AP Twitter Account Hacked in Market-Moving Attack,” Bloomberg (Apr. 24, 2013), http://www.bloomberg.com/news/articles/2013-04-23/dow-jones-drops-recovers-after-false-report-on-ap-twitter-page (the S&P 500 Index briefly lost $136 billion in value following a fake tweet alleging Pres. Obama had been injured).
[3] Wire Swiss GmbH v. Quiet Riddle Ventures, LLC, et al., No. 2:16-cv-02340 (C.D. Cal. Apr. 6, 2016).  Mr. Marlinspike may (or may not) be Matthew Rosenfeld or Mike Benham.

Sometimes law enforcement needs a warrant to access cellphone data, sometimes a court order. Sometimes nothing is required.


Roaming while you roam.

Depending on where you use your cellphone, law enforcement may obtain your location records from your wireless provider without a court order or warrant — neither is required in Washington state. In urban areas where there are multiple cell towers, a phone’s location can be identified to within half a mile. Turning off your location services or powering down the cellphone alone will not shield you from law enforcement.


Are you where you said you’d be?

When processing data, cellphones communicate with the cell tower emitting the strongest available signal. Calls, texts and internet browsing generate cell-site location information (CSLI). CSLI is time-stamped and linked to the phone number. Cell towers emit different signals in each direction, so a phone’s movement can be tracked by its angular position relative to a tower. From CSLI, law enforcement can track where you are; what cellphones are near you; with what phone numbers you communicate; how long you communicate; and what routes you travel. In essence, everything except the actual communication is recorded. If you aren’t a criminal, you may not care. But depending on how CSLI is produced, even the innocent can catch the attention of law enforcement.
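
To picture what law enforcement actually receives, here is a rough sketch in Python of a single CSLI record. The field names are hypothetical; real carrier schemas differ.

```python
# A hypothetical CSLI record; actual carrier record layouts vary.
from dataclasses import dataclass

@dataclass
class CSLIRecord:
    timestamp: str     # when the phone touched the tower (ISO 8601)
    phone_number: str  # the subscriber the record is linked to
    tower_id: str      # which tower handled the traffic
    sector: int        # which directional antenna, i.e., angular position
    event: str         # "call", "text", or "data"

record = CSLIRecord(
    timestamp="2016-04-06T21:14:03Z",
    phone_number="555-0142",
    tower_id="MI-DET-0417",
    sector=2,          # roughly which direction the phone sits from the tower
    event="call",
)
print(record)
```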


Despite your innocence, law enforcement may receive your cellphone usage records.

“Tower dumps” entail producing the CSLI for every cellphone that processed data through a particular tower during a given window. If your cellphone used a targeted tower, your data may be captured. You go from having fun with your friends to law enforcement tracking you by your number.
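
Conceptually, a tower dump is just a filter over records like the hypothetical one sketched above. A minimal sketch, under that same assumed record shape:

```python
# Hypothetical tower dump: every record that touched one tower in a window.
# Assumes the CSLIRecord shape sketched above; ISO 8601 timestamp strings
# compare lexicographically in chronological order.
def tower_dump(records, tower_id, start, end):
    return [r for r in records
            if r.tower_id == tower_id and start <= r.timestamp <= end]
```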


Producing CSLI in real time transfers your records contemporaneously, as your cellphone usage creates them. If this happens, you are probably having a bad day — this method may be used in exigent circumstances. Collecting historical CSLI, by contrast, retrieves all CSLI for a cellphone over a specific period of time.


One person’s record is another person’s dirty laundry.

In United States v. Carpenter,[1] the Sixth U.S. Circuit Court of Appeals joined the Eleventh Circuit[2] in ruling that law enforcement agencies do not need a warrant to track a caller’s location through cell tower records. Timothy Carpenter and Timothy Sanders robbed nine cellphone stores (ironic, isn’t it?) in Michigan and Ohio within four months.

The FBI requested the “transactional records” of the Timothys’ wireless providers, a.k.a. the Timothys’ historical CSLI. Court orders were issued pursuant to the Stored Communications Act (SCA) after the FBI showed there were reasonable grounds to believe the CSLI was relevant to the investigation. The FBI reviewed 127 days of CSLI for one Timothy and 88 days for the other. Through the historical CSLI, the government established that the Timothys were within a half-mile to two miles of each armed robbery at the time it occurred. On appeal, the defendants argued that the Fourth Amendment required the government to show probable cause and obtain a search warrant to access the CSLI.


The opinion focused on the following:

  1. There was no search because the FBI collected the wireless providers’ data routing information, which the providers gathered in the ordinary course of business.
  2. CSLI does not reveal the content of the defendants’ private communications.
  3. Every cellphone user who has paid roaming fees knows that wireless carriers collect location information, so there was no expectation of privacy.
  4. CSLI is so imprecise compared to GPS that there is no expectation that a cellphone user can be located exactly.
  5. To obtain CSLI, the SCA requires the government to meet the “reasonable grounds” standard for a court order, not the “probable cause” standard for a warrant.


Tracking or stalking? Duration matters.

The concurring opinion in Carpenter questioned whether the business-records standard applies when reviewing an alleged violation of Fourth Amendment rights. The rationale behind the question is that a business’s production of credit card records showing purchases, for example, may be sufficiently distinct from the production of cellphone records showing personal location to require a more stringent analysis. The concurring judge also found the scope of the location monitoring troubling. Lawfully tailing a suspect is one thing. Lawfully tailing a suspect for three to four months transmutes the surveillance into the realm of privacy invasion.


So, how do I keep law enforcement out of my data?

iOS and Android operating systems and apps offer some protection for the location data stored on your cellphone, though they cannot stop your carrier from generating CSLI at the tower. Start by turning off location services for all your existing apps. Download apps that discard the location data cached on your cellphone. Get and stay off the grid by using localized Wi-Fi connections. Rely on an offline map or an app that anonymizes the cellphone, encrypts the location data and permanently deletes your data within a certain amount of time. Regularly monitor the new technology used by law enforcement and cybersecurity experts.


After all, when you build a better mousetrap, law enforcement will build a better mouse. Justice William O. Douglas called upon his Pacific Northwest ideals when he wrote, “The right to be let alone is indeed the beginning of all freedom.” Cheers to freedom.



[1] United States v. Carpenter, 819 F.3d 880 (6th Cir. 2016), www.ca6.uscourts.gov/opinions.pdf/16a0089p-06.pdf.
[2] United States v. Davis, 785 F.3d 498 (11th Cir. 2015) (en banc), https://www.eff.org/document/us-v-q-davis-opinion.