Does your business run and maintain a website? Does it create or license website or other content? Does it use or implement software? If you’ve answered yes to any of these questions but haven’t yet considered the importance of copyright to your business, here are 10 tips to ring in a safe and proactive 2017.

  1. The Digital Millennium Copyright Act (DMCA) imposes strict “anti-circumvention” measures meant to protect copyright holders from infringement on the internet. But because the DMCA also captures a large swath of internet service providers who do not control what content is posted to or distributed from their websites, it includes a “safe harbor” provision that protects service providers from legal liability in copyright infringement actions. To benefit from the Safe Harbor protections, you must register a Designated Agent for your company with the Copyright Office to receive notices of claimed infringement. To learn more about the DMCA, read this recent white paper.
  2. Unsure how copyright applies to your business? Continue your own copyright education and task someone within your company with keeping abreast of the latest developments in copyright protection and compliance.
  3. Develop a written copyright policy or copyright guidelines. Have the same copyright questions arisen again and again in your organization? Year end is an ideal time to compile nagging questions and prepare short practical answers. Circulate copyright Q&As to your colleagues or post them to your intranet site. Having a copyright knowledge database may help your organization better comply with copyright laws, and help you identify how you can improve or update your policies or guidelines.
  4. Do you create content that you distribute to the public? Your content may be a “work” that qualifies for copyright protection. Although copyright registration is voluntary in most countries, consider registering your works with your country’s copyright office. Rather than registering individual works, register a group or collection of works produced during the year to save time and registration fees. While the grant of copyright arises automatically, copyright registration is necessary if you plan to enforce your rights through legal action.
  5. Have you licensed content from outside of your organization? Prepare a database of all content your organization has licensed. Whether it’s an image used in a promotional brochure or content from a large electronic database, include all content in a single searchable database that allows you to quickly and easily locate that content and determine what rights you have in it (a sample schema sketch follows this list).
  6. Review your license agreements and create an “ultimate” list of protections and guarantees that your organization needs from its license agreements. Do you need remote access or the right to share a PDF file? Do you need to make print-outs, or post articles to your intranet? What about using portions of the database for internal education or external seminars? Use the list to set the parameters for your future license negotiations.
  7. Consider your 2017 budget for permissions, licenses and copyright training. Consult with subject matter experts in your organization to understand their needs and preferences. Prepare a budget and ensure you have the funds you need to meet your copyright management goals. Brainstorm how to get the copyright message to your colleagues and employees. Often, recurring lunches or coffee chats with a diverse group of internal stakeholders and speakers from photography, library services, web design or other professional disciplines with copyright expertise can help sensitize your office to important copyright issues without overburdening your team with cold policy documents or rote lectures.
  8. Review your agreements with consultants. Does your company expect to retain copyright ownership in consulting reports? Make sure that this is clearly stated in your agreement and, if necessary, provide for an assignment of rights to your organization. If consultants own their works, review the rights your organization retains in any consultant work. If you are a consultant, review and understand the rights you have in your work.
  9. Conduct an intellectual property (IP) audit. It’s important to know that the content and computer software you are using are legal and properly licensed. An IP audit will also identify the IP that you own. Armed with that knowledge, you can then learn how to market, license and/or profit from leveraging your IP.
  10. Set up a mechanism for monitoring the legal use of your own online content on an international basis. This can be as simple as regularly running search engine queries, or you can hire a professional who specializes in finding unauthorized uses of content (see the monitoring sketch after this list). Piracy is not only the domain of the software and entertainment industries. You may be surprised to find that your or your organization’s rights are being exploited, and that your works are being used and perhaps even sold without your permission.
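
To make tip 5 concrete, here is a minimal sketch of a searchable license inventory built on SQLite; the table layout and column names are illustrative assumptions, not a prescribed schema.

```python
# A minimal license-inventory sketch using Python's built-in sqlite3 module.
# Table and column names are illustrative, not prescriptive.
import sqlite3

conn = sqlite3.connect("license_inventory.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS licensed_content (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,      -- what the asset is
        licensor TEXT NOT NULL,   -- who granted the license
        rights TEXT NOT NULL,     -- what you may do with it
        expires TEXT              -- ISO date, or NULL if perpetual
    )
""")
conn.execute(
    "INSERT INTO licensed_content (title, licensor, rights, expires) VALUES (?, ?, ?, ?)",
    ("City skyline photo", "Example Stock Co.", "promotional brochure only", "2017-12-31"),
)
conn.commit()

# Quickly locate content and the rights you hold in it:
for title, rights in conn.execute(
    "SELECT title, rights FROM licensed_content WHERE title LIKE ?", ("%skyline%",)
):
    print(title, "->", rights)
```

Even a single table like this answers the everyday questions: what did we license, from whom, for which uses, and until when.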
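
For tip 10, here is a minimal sketch of an automated exact-phrase search. The endpoint, parameters and response shape are hypothetical placeholders; substitute the documented API of whatever search provider you actually use.

```python
# A minimal monitoring sketch: search the web for a distinctive phrase from
# your own content and flag hits on domains other than yours.
# SEARCH_URL, the "q"/"key" parameters and the JSON shape are hypothetical.
import requests

SEARCH_URL = "https://api.example-search.com/search"  # placeholder endpoint
API_KEY = "your-api-key"

def find_possible_copies(signature_phrase, own_domain):
    """Return result URLs that quote the phrase but are not on your domain."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": f'"{signature_phrase}"', "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    urls = [item["url"] for item in resp.json().get("results", [])]
    return [u for u in urls if own_domain not in u]

# Example (once a real API is wired in):
# print(find_possible_copies("a distinctive sentence from your white paper", "example.com"))
```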

Copyright is a value-add opportunity for every business, and the best benefits are obtained by acting proactively to ascertain ownership and keep the copyright assets in ready order for offensive or defensive play.

In Brief: The recent Second Circuit decision in EMI Christian Music, Inc. v. MP3tunes, LLC builds on BMG Rights Management et al. v. Cox Communications, further emphasizing that courts expect online service providers and website platforms to develop and enforce robust infringement response policies — or else risk losing their safe harbor under the Digital Millennium Copyright Act (DMCA). EMI stands as an important notice to all online service providers, especially those with peer-to-peer sharing or user-generated content features, to be vigilant and thoughtful about addressing infringement.


Online service providers should not turn a blind eye to copyright infringement happening on their watch. Since the rise of peer-to-peer file sharing, Internet service providers (ISPs) and website platforms have become the front line for preventing copyright infringement.

Content sharing on the Internet sparked a new line of case law in the mid-1990s, including the seminal Religious Technology Center v. Netcom On-Line Communication Services, Inc., 907 F. Supp. 1361 (N.D. Cal. 1995), wherein the court found that a large Internet access provider could be liable for the copyright infringement of its subscribers. A few years later, in 1998, Congress passed the DMCA as a means of regulating online service providers and giving them some safeguards.

As peer-to-peer sharing platforms gained popularity in the early 2000s, copyright holders could no longer easily pursue individual infringers and began looking for alternative ways to protect their interests. The DMCA initially proved to be a barrier, because its safe harbor shields online service providers from monetary liability in copyright infringement claims, making recovery against those providers difficult for copyright holders. But the safe harbor is not a free lunch — online service providers must meet a variety of criteria in order to remain eligible for its protections.

Copyright holders have seized on the safe harbor eligibility requirements as a litigation tool for recovering from ISPs and website platforms. In BMG Rights Management et al. v. Cox Communications, 2016 WL 4224964 (E.D. Va. Aug. 8, 2016), which has been appealed to the Fourth Circuit, the court began its opinion by setting the historical stage: “This lawsuit is the latest in a years-long initiative by copyright holders to enlist the courts in the effort to curb the rampant infringement made possible by peer-to-peer file sharing on the Internet.”

One major statutory requirement for safe harbor protection is that the service provider must have adopted and reasonably implemented an internal policy for terminating subscribers and users of its system or network who are repeat infringers. Courts have increasingly emphasized the importance of this statutory requirement. It has become a key battleground, forming the center of the disputes in two pivotal cases: BMG Rights Management et al. v. Cox Communications, 2016 WL 4224964 (E.D. Va. Aug. 8, 2016) and EMI Christian Music, Inc. v. MP3tunes, LLC, 2016 WL 6211836 (2d Cir. Oct. 25, 2016).

In BMG, the copyright holder went directly to the source — the ISP — as the party responsible for preventing copyright infringement and ensuring users do not abuse website platforms. The BMG court allowed this approach, putting the onus on ISPs to develop real programs to combat infringement. The work begun in BMG was amplified in EMI, wherein the court held that online service providers that passively allow website or service users to engage in copyright infringement, and that fail to develop real policies to prevent abuse, run the risk of losing their DMCA safe harbor protections. The resounding message from this line of cases: If ISPs and website platforms want to maintain their DMCA safe harbor, they need to develop and enforce robust response mechanisms.

Repeated Infringement Policies and Safe Harbor: A Closer Look

The BMG case involved BMG, a music company, suing Cox Communications, an ISP, over users of Cox’s service engaging in illegal music sharing on its network. Cox claimed that it could rely on the DMCA safe harbor because it had a response system for addressing repeat infringers. The court described Cox’s system as “essentially a thirteen-strike policy” wherein a user would have to commit numerous violations in order to be terminated — and even then could be reactivated quickly. The jury sided with BMG, finding Cox liable for willful contributory infringement and awarding BMG $25 million in damages. Cox sought to have the court overturn the verdict, but the court declined. Instead, the court found, among other things, that “there was sufficient evidence for a reasonable jury to hold Cox responsible for the infringement of BMG’s copyrights on its network. . . . Cox could not [ ] turn a blind eye to specific infringement occurring on its network.” The court acknowledged that exposing ISPs and other intermediary online service providers to liability for their users’ behavior “raises the specter of undesirable consequences that may follow,” but hoped the BMG case “may provide the vehicle for consideration of those questions.” BMG certainly held the door open for other courts to consider what policies and practices ISPs and other online service providers must have in place against repeat infringers to keep their DMCA safe harbor intact.

In EMI, the Second Circuit took up this question with MP3tunes, a music storage site that allowed users to upload and store songs obtained from elsewhere on the Internet. EMI brought an action against MP3tunes in the Southern District of New York, claiming that MP3tunes was lax in responding to repeat infringers. The District Court disagreed, but the Second Circuit reversed. The Circuit Court issued two main holdings:

  1. The meaning of “repeat infringers” in the DMCA does not require that the infringers be willful. Someone can qualify as a repeat infringer even without knowing that their behavior infringes another’s work. “[T]he legislative history of the DMCA indicates that a ‘repeat infringer’ does not need to know of the infringing nature of its online activities, or to upload rather than download content.” This puts more pressure on online service providers to make serious efforts to connect infringing activity to users, checking for patterns and notifying all users who infringe (see the sketch following this list). MP3tunes failed to do this because it did not investigate those who downloaded content and did not attempt to monitor use patterns in a consistent way.
  2. The court then turned to what triggers must be present before an online service provider needs to act to keep its DMCA safe harbor. The DMCA does not obligate a service provider to actively monitor its platform, and a mere “generalized awareness” of infringement is not enough to defeat the safe harbor. To overcome a service provider’s DMCA safe harbor, the copyright owner must prove that the provider had (1) actual knowledge or (2) an awareness of facts or circumstances making specific acts of infringement obvious. Applying this rule, the Circuit Court upheld the jury’s verdict based on MP3tunes’ apparent knowledge.
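
For illustration only, here is a minimal sketch of a repeat-infringer ledger in Python. The three-strike threshold, data shapes and function names are our assumptions — neither the DMCA nor these opinions prescribes a particular design, and counsel should set the actual policy.

```python
# A repeat-infringer ledger sketch. STRIKE_THRESHOLD is illustrative only;
# the DMCA does not prescribe a number of strikes.
from collections import defaultdict

STRIKE_THRESHOLD = 3

notices = defaultdict(list)  # user_id -> works named in infringement notices

def record_notice(user_id, work):
    """Log a notice against a user and say what the policy calls for next."""
    notices[user_id].append(work)
    strikes = len(notices[user_id])
    if strikes >= STRIKE_THRESHOLD:
        return f"review {user_id} for termination (strike {strikes})"
    return f"warn {user_id} (strike {strikes})"

print(record_notice("user42", "song-a"))  # warn (strike 1)
print(record_notice("user42", "song-b"))  # warn (strike 2)
print(record_notice("user42", "song-c"))  # review for termination (strike 3)
```

The point of even a toy ledger like this is the pattern-checking EMI demands: notices are tied to specific users, counted consistently, and escalate toward termination rather than being discarded.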

Following the EMI ruling, MP3tunes sought reconsideration, arguing that the EMI court had effectively eliminated the DMCA safe harbor. The Second Circuit recently declined to reconsider its decision.

The Bottom Line: Take Steps to Stop Repeat Infringers

The case law is still in flux, but what the BMG and EMI opinions demonstrate is an increasing judicial focus on ensuring that ISPs and online service providers take tangible measures to prevent repeat infringers — or risk jeopardizing their DMCA safe harbor.

Open Access is one of the publishing models that arose with the ease of internet distribution. Before the internet, publishing was governed largely by a scarcity or “closed” model of distribution. Since the internet, distribution models fill a wide spectrum between “open” and “closed,” with many variants between the two opposites. “Open Access” is a specific term of art in publishing, especially in the scientific and medical research fields. Open Access removes price barriers (subscriptions, licensing fees, pay-per-view fees) and permission barriers (most copyright and licensing restrictions). The shorthand definition of Open Access — “free availability and unrestricted use” — succinctly captures both elements. There is considerable disquiet in the research community about the extent to which monopoly prices are charged for material that many believe ought to be freely and widely accessible, since open access expedites research and reduces redundant work.

The Gates Foundation is on the brink of implementing an Open Access policy that is the purest in the world. Under the Gates Foundation policy, “information sharing and transparency is promoted by unrestricted access and reuse of all peer-reviewed published research funded, in whole or in part, by the foundation, including any underlying data sets.” The radical aspect of the Gates Foundation Open Access policy is that no embargoes, no waivers and no exceptions are allowed.

This is not a sudden policy implementation. Three years ago, the Gates Foundation announced and implemented a modified form of this policy, explicitly stating that rigorous implementation — without embargoes, waivers or exceptions — would take effect on January 1, 2017.

With less than two weeks to go before January 1, 2017, the University of Michigan and the Gates Foundation published this op-ed piece discussing the goals and benefits of Open Access, entitled: “Let’s speed up science by embracing open access publishing.”

For ease of access, that op-ed piece is reprinted in full below, with permission of the authors.

Let’s speed up science by embracing open access publishing

By Richard Wilder and Melissa Levine
December 19, 2016

Six years ago, Harvard scientist Jay Bradner discovered something unusual. His laboratory had isolated a molecule in mice, named JQ1, that appeared to reverse the effects of a serious cancer. But what he did with JQ1 was even more unusual: Rather than submit the findings to a prestigious journal, Bradner openly distributed the structure of the molecule — free and reusable for anyone.

Bradner’s goal was to disseminate his lab’s finding as widely as possible to encourage and expedite collaborations. And although the original concept for JQ1 fell through, he and others ultimately succeeded on different fronts: JQ1 now has broad usage in treatments for HIV infection, heart disease, pancreatic cancer, and more.

Despite this success story, most scientific research today is not published openly — meaning freely, to everyone, without delay from the time of publication. Instead, it lives behind time embargoes and paywalls, costing as much as $35 per article to access. Even when scientific information is free to read, it is subject to copyright restrictions that prevent it from being recast quickly in new ways.

A growing movement for open access seeks to change this, because limitations on the use of scientific discoveries hinder the efficiency of research, increase costs, and ultimately delay or even impede scientific progress. The possibilities of open access could be transformative.

If published research and data were freely accessible and reusable by researchers of diverse interests, urgently needed solutions could be greatly accelerated. Scientists could quickly cross-check important studies, catching potentially consequential mistakes. Medical providers could access the latest technical guidance, improving patient care. And students around the world could build on each other’s work. With openness, good ideas could truly come from anyone, anywhere.

Some early open access efforts — like the Human Genome Project, one of the world’s most ambitious programs undertaken to sequence the complete set of DNA in a human body — illustrate just how significant the impact of open access could be. The Human Genome Project shared each of the body’s three billion DNA letters freely and rapidly online, leading to the discovery of more than 1,800 disease genes and generating over $1 trillion in economic impact across a diversity of sectors, including health care, energy and agriculture. Critical to this was enabling all uses of the data by anyone for any purpose — academic or commercial.

This is the future envisioned by a growing number of institutions and universities that sponsor research and development. In the last few years, NASA, Research Councils UK, and Wellcome, as well as the Massachusetts Institute of Technology and the University of California system have all implemented open access policies.

The Bill & Melinda Gates Foundation will usher in 2017 by making all published research funded by its grants available on full open access terms. It will base its open access policy on a robust legal framework that supports full reuse rights — the Creative Commons Attribution-Only license, also known as CC BY.  This license, which is also applied to all materials authored by librarians and staff of the University of Michigan Library, lets anyone distribute and build upon a work’s underlying data as long as they credit the original creation.

CC BY facilitates processes such as text-mining, helping researchers understand patterns in large datasets. It also promotes innovation, unlocking the ability of people across sectors and geographies to build on one another’s work. Unlike its counterpart, the CC BY-NC license, CC BY permits commercial reuse, meaning it directly enables the kinds of public-private partnerships that are so essential to scientific innovation.

That’s why, in part, the Gates Foundation adopted it — because it permits the widest range of actors to do their best work on the widest set of problems.

To be clear: free and open reuse does not mean misuse. Under CC BY, researchers and publishers do not give up the value of their name. The license includes significant mechanisms to guard against academic misrepresentation, and protects the logos that journals use to brand their reprints and products.

CC BY is a growing license of choice among open publishers, including the Lancet and Cell Reports. It is the top choice for researchers. In 2015, the Nature Publishing Group reported that 96 percent of researchers publishing open access selected the CC BY license.

Changing the status quo is tough, but it is made possible when people come together to reconsider outdated assumptions and practices. With the support of researchers, publishers, and the academic community more broadly, legal frameworks like CC BY can expand the interconnected web of human knowledge and facilitate its use by everyone.

In a fast-changing world, making these adjustments will advance what’s best about the scientific process: the ability of humans to generate new and life-changing ideas by building on the work of the past in a world where sharing and using data in medicine, public health, and the environment is the norm rather than the exception.

Richard Wilder is the associate general counsel at the Bill & Melinda Gates Foundation. Melissa Levine is the lead copyright officer at University of Michigan Library and ex officio member of the library’s open access committee. (Reprinted with permission of the authors.)

Washington State Begins Beta Testing Its Pioneering Privacy Modeling App

Seattle — and by extension, Washington — has a purist attitude in its fusion of technology and life. Other states offer their employees manuals, a help line and, if well-funded, a legal department to answer questions about privacy. If employees are lucky enough to have access to legal advice, they may be able to submit a question, receive a response and finally execute on the advice within a few business days. That’s sad.

The Washington State Office of Privacy and Data Protection is leaving those other states eating Mt. Rainier dust. Through collaboration with policy makers, attorneys and tech gurus, Alex Alben, the Chief Privacy Officer in the state’s Office of the Chief Information Officer, has realized his stated goals of examining privacy policies across state agencies and strengthening protections for personal data. The newly launched Privacy Modeling application (PM App), an initiative of the Office of Privacy and Data Protection, provides links to the relevant federal and state laws regulating the security of data collected from individuals. Users immediately receive answers regarding what must, may or never ever can be done with different types of personal data.

The PM App does not just spit out a canned response to inform the user that social security numbers should be protected. Instead, the PM App offers truly helpful analysis of relevant statutes for people who are on the frontline of protecting citizen data being processed by the government. For example, a clerk working in the licensing department may want to know whether the agency can sell information that identifies recently licensed, male aestheticians within a certain age range, who are veterans, who have worked for at least three employers, and who have received disability benefits. By running a query, the user will learn that data such as first and last names, employment addresses, gender and age can probably be sold, but data related to veteran status and disability status must be shielded from disclosure.

MIND BLOWN.

How the PM App Works

Users select the data points to be analyzed:

  • The sector served by the agency, such as banking, health and medical, or education;
  • Types of data that the agency handles to fulfill its mission, such as veteran records, audio recordings, and driver or professional license numbers; and
  • Whether the agency plans to sell or market the data, share it with third parties or use it to grant social benefits.

The PM App analyzes the selections and produces a Results Matrix that uses color-coding to identify the statutes that apply to the data and to flag each proposed use as “No Specific Privacy Law Found,” “Allowed With Limitations” or “Forbidden,” with links to the texts of the relevant laws (a rough data-model sketch follows). Users also receive information about the laws and policies excluded from the PM App analysis, the statutes that should be reviewed by an attorney and a reminder that data must be collected in a lawful manner.
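
As a rough illustration of the kind of data model behind such a matrix — our assumption, not the PM App’s actual code — a single cell of the Results Matrix might look like this:

```python
# A hypothetical data model for a results matrix like the PM App's; the
# outcome labels mirror the three flags described above, but the structure
# is our illustration, not the app's implementation.
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    NO_SPECIFIC_LAW_FOUND = "No Specific Privacy Law Found"
    ALLOWED_WITH_LIMITATIONS = "Allowed With Limitations"
    FORBIDDEN = "Forbidden"

@dataclass
class MatrixCell:
    data_type: str       # e.g., "veteran records"
    proposed_use: str    # e.g., "sell to third parties"
    outcome: Outcome
    statute_links: list  # links to the texts of the relevant laws

cell = MatrixCell(
    data_type="veteran status",
    proposed_use="sell",
    outcome=Outcome.FORBIDDEN,
    statute_links=["https://app.leg.wa.gov/rcw/"],  # placeholder link
)
print(f"{cell.data_type} / {cell.proposed_use}: {cell.outcome.value}")
```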

The PM App is a giant leap forward for the owners of the data, the agencies and their employees.

Why the PM App Is Important and Revolutionary 

Government agencies that process data have bad reputations for being invasive and self-interested to the detriment of the individual. It’s no accident that the DMV is the portal to hell in a television and comic book series. Results of a search for “big brother” include words like “scary,” “creepy” and “overbearing.”[1] Government has been characterized as stripping away the humanity of its subjects[2] or mining them for data. The PM App is a rehab of sorts — it embodies the self-regulatory spirit adopted by many industry sectors in the U.S.

The PM App also signifies the state’s interest in protecting the data entrusted to it. Let’s be clear: people generally have no option to withhold their personal information if they want to benefit from government services. The privilege of conducting business in some industries requires the provision of fingerprints, background checks and photographs. Information about our professional licenses is regularly marketed.

Initiatives like the PM App reduce cynicism about having to turn over personal data in order to pursue a dream of becoming a pyrotechnic operator. Development of the PM App signifies the state is invested in protecting our data. If George Orwell is accurate, and I am no more than a collection of my data, I welcome this added layer of protection.

The PM App also has significance for its users. Subject to the express and implied limitations in the User Guide, the PM App immediately provides links to federal and state statutes, along with an analysis of whether the proposed use of the data complies with those statutes, giving users a level of familiarity with the law that would ordinarily be available only through legal counsel. Users who want to delve deeper may review definitions, exemptions and penalties to contextualize the laws to their agencies and privacy policies. Feedback from the beta test is expected to reveal additional intangible benefits, such as improved accuracy and morale.

You Too, Can Have an Attorney in a Box

One of the most basic tasks attorneys perform is identifying and analyzing the laws relevant to their clients. The PM App empowers employees who handle personal data by granting them access to laws that directly affect if, how and when data can be handled throughout its life cycle. In accordance with the requirements of the funding source, the PM App will be available through open access. The PM App Guide shows developers how they may tailor the PM App to their own uses. Caution: the PM App Guide is great, but attorney advice is still necessary when making tough calls about privacy issues.

Click here to access the PM App beta test. You can provide feedback or ask questions via an embedded link on the site. You can also gain additional information about the process of developing the PM App here.

Keep your eyes on the PM App, the Office of the Chief Information Officer, and trends in privacy. A conference focusing on data protection in the evolving digital environment is scheduled for early 2017 in Washington. Keep an eye on everything.

It’s when you look away that you are apt to get hurt.


[1] “Big Brother isn’t watching. He’s singing and dancing. He’s pulling rabbits out of a hat. Big Brother’s busy holding your attention every moment you’re awake. He’s making sure you’re always distracted. He’s making sure you’re fully absorbed.”
Chuck Palahniuk, Lullaby

[2] “Does Big Brother exist?”
“Of course he exists. The Party exists. Big Brother is the embodiment of the Party.”
“Does he exist in the same way as I exist?”
“You do not exist.”
George Orwell (1984)

In a World Intellectual Property Organization (WIPO) domain name decision, WIPO has ordered the cancellation of 175 domain names that include the famous Range Rover, Land Rover and Jaguar trademarks. The domains were registered in the .au ccTLD by the Trustee for the Trivett Family Trust (Trivett). Representative samples of the domains include rangeroverservicecentre.com.au, jaguarhybrid.com.au and landrover.net.au. Jaguar Land Rover (JLR) submitted its complaint in August. The decision was rendered on October 10, 2016. The ownership of the Jaguar, Land Rover and Range Rover marks was not in dispute. Trivett submitted that it acquired the domains for the purpose of developing a proposed “Maintain My” web platform that would connect consumers to a range of service providers, including manufacturers of both genuine and non-genuine automotive spare parts.

JLR submitted that the use of well-known trademarks together with geographic or descriptive terms creates a domain name that is confusingly similar to the well-known trademark. Furthermore, JLR did not license or permit Trivett to use the trademarks, nor was the proposed use of the trademarks tantamount to a bona fide offering of goods and services. Trivett relied upon the test set out in the Oki-Data Americas, Inc. v. ASD decision, which sets out four minimum factors used to help decide whether there was a bona fide use of the domain name. Those factors include:

  1. The respondent must actually be offering the goods or services (Trivett had said that it was going to use the names starting in 2017 as part of its “Maintain My” platform);
  2. Only genuine trademarked goods could be sold on the website;
  3. The site must accurately disclose the relationship between the registrant and the trademark owner, and may not falsely suggest that the registrant is the trademark owner or is an official site; and
  4. The respondent must not try to corner the market in all domain names (175!), depriving the trademark owner of its own use of the mark.

The WIPO panel observed that the decision might have been different had Trivett been able to show development of its “Maintain My” platform, that it actually sold JLR vehicles, and that it had not registered 175 domains.

The result: Oki-Data remains a reasonable test for bona fide use. It just so happened that Trivett did not satisfy any of the prongs of the test.

Do third parties have standing to challenge a copyright owner’s claim of ownership on Work Made for Hire (WMFH) grounds? A recent case out of the Second Circuit says yes.

Urbont v. Sony Music held that the defendant had standing to challenge the plaintiff’s copyright ownership by invoking WMFH. Sony argued that Jack Urbont, the plaintiff, did not own the music at issue because it was a WMFH; Urbont challenged that characterization. This is noteworthy because it was Marvel, not Sony, that would have owned the copyright under a theory of WMFH. The music at issue was the theme song to the “Iron Man” television series. Marvel was a party unrelated to Sony at the time the work was created.

Urbont rested his ownership claim on the premises that Marvel had no employees who were composers, and that music does not qualify for WMFH status under the nine statutory categories of works.

The Second Circuit ruled that whether the work was indeed a WMFH was a relevant issue, that the defendants were entitled to raise the defense, and that the defense rested on disputed facts that precluded summary judgment.

The opinion came down on July 29 of this year. Discussing it before now seemed premature, as it wasn’t clear whether Sony would appeal. This week, Urbont is reported to be the prevailing party: Urbont v. Sony settled on confidential terms, and there will be no further appeal.

Pro-tip takeaway: An assignment would have solved the problem. The nine categories of WMFH that apply to non-employees are increasingly being read narrowly. “Audio-visual works” are covered in one of the nine categories, but audio alone (namely, music) is not. To refresh your recollection, the nine categories listed in Section 101 of Title 17 of the copyright statute are:

  1. A contribution to a collective work (like a piece for a magazine, anthology or encyclopedia);
  2. A part of a motion picture or other audiovisual work;
  3. A translation;
  4. A supplementary work (like a foreword, afterword, bibliography, appendix, index or editorial notes);
  5. A compilation (like an anthology, database or anything that qualifies as a “collective work” from category 1 above);
  6. An instructional text (generally, any text that could go in a textbook);
  7. A test;
  8. Answer material for a test; and
  9. An atlas.

And even if your work falls into one of these nine categories, the statute requires a written agreement stating that the work is a “Work Made For Hire.” To be safe, the agreement should be signed by both parties before the work is created.

Urbont v. Sony Music Entertainment Inc., No. 15-1778 (2d Cir. July 29, 2016)

The Privacy Shield in a nutshell. 

The Privacy Shield permits U.S. businesses to process and control the personal data of individuals, aka data subjects, located in the European Union (EU). Without the Privacy Shield, U.S. businesses risk losing hundreds of millions of dollars if they cannot transfer personal data from the EU — businesses that cannot establish offices in the EU or negotiate agreements with each of the EU member countries will forgo commerce with EU companies and data subjects. The U.S. government has agreed to enforce the Privacy Shield against U.S. businesses on behalf of EU data subjects, and it necessarily has to execute its enforcement duties with diligence. You might say U.S. government agencies must bite as bad as they bark.

Is certification the best option for your company?

EU privacy standards that protect the data of its citizens are much stricter than those of the U.S. The EU requires U.S. companies to comply with the privacy principles that make up the EU-U.S. Privacy Shield. The U.S. Department of Commerce (Commerce Department) oversees U.S. businesses’ applications and certifications under the Privacy Shield. Your company may decide to be certified under the Privacy Shield if your business is subject to the jurisdiction of the Federal Trade Commission (FTC) or the Department of Transportation (DOT), and EU citizens access your website, do business with you or you conduct business in an EU member country. Each circumstance must be analyzed on a case-by-case basis. Issues such as volume, whether you are a data controller or a data processor, and whether you have multinational affiliates have a bearing on your analysis.

How does the Privacy Shield compare to the Safe Harbor?

The Privacy Shield is more stringent than the Safe Harbor; some privacy principles that were merely guidelines under Safe Harbor are now affirmative covenants under the Privacy Shield. The U.S. government also must meet a higher standard under the Privacy Shield. The EU obligates the FTC and DOT to investigate and enforce penalties against U.S. companies that violate the Privacy Shield Principles.

What is the cost of certification?

While certification under the Privacy Shield is voluntary, U.S. businesses that receive personal data transfers from the EU must meet the same requirements as U.S. businesses that are certified. The fees for certification are based on the business’s annual revenue: the minimum fee is $250 per year for up to $5 million in revenue, and the maximum fee is $2,500 per year for more than $500 million in revenue. U.S. companies that are required to resolve disputes before an EU Data Privacy Authority must pay additional fees.
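
As a back-of-the-envelope illustration in Python, using only the two tiers stated above (the schedule’s intermediate tiers are not listed in this post, so the sketch deliberately leaves them unmapped):

```python
# A fee-tier sketch limited to the two endpoints stated in this post; the
# Commerce Department publishes intermediate tiers that are omitted here.
def privacy_shield_fee(annual_revenue):
    if annual_revenue <= 5_000_000:
        return 250      # stated minimum: up to $5 million in revenue
    if annual_revenue > 500_000_000:
        return 2_500    # stated maximum: more than $500 million in revenue
    return None         # intermediate tiers not listed in this post

print(privacy_shield_fee(3_000_000))    # -> 250
print(privacy_shield_fee(600_000_000))  # -> 2500
```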

The application process itself is no more complicated than most other business certification processes.  The “real” cost of becoming certified under the Privacy Shield will likely be in personnel resources, especially if the business is not already compliant with the Safe Harbor rules.  For example, the business must dedicate personnel to develop privacy policies, educate employees about the policies, monitor the actions of employees and third party data processors, and take action against parties who violate the policies. There are also costs associated with verifying that third party processors update their security and privacy policies in step with Privacy Shield requirements.  You can review a summary of the five basic steps U.S. businesses must take to apply for certification here. You can review the seven Privacy Shield Principles here.

Alternatives to self-certification under the Privacy Shield.

It may be more cost effective for a business with limited personnel to use a private company to assist with the certification process, establish compliant policies and procedures, and provide ongoing monitoring, auditing, education and advice. The Commerce Department maintains an ever-expanding list of companies that transfer data to U.S. companies from abroad in compliance with the Privacy Shield,[1] as well as frameworks such as the Madrid Resolution, the U.S.-Swiss Safe Harbor and the privacy rules adopted by the EU, the European Economic Area, Switzerland and the Asia-Pacific Economic Cooperation. When evaluating private companies, you should pay close attention to which party to the agreement is liable for violations of the Privacy Shield and the extent to which the contract covers transfers of data to third parties.

Binding Corporate Rules (BCRs), model contract clauses and unambiguous consent are also options to consider if self-certification is unfeasible for your business. BCRs are available to multinational companies: an affiliated company located in the EU may transfer personal data to its U.S. location subject to BCRs. Model Contracts, drafted by the European Commission, require U.S. businesses to provide adequate levels of protection for the privacy of data subjects. If you are a data processor, not a data controller, you may have the option of entering into a Direct Data Processing Agreement or adopting the Model Clauses for Processors to eliminate the negotiation of broader issues that apply to controllers but not processors. If you receive data from a limited number of known EU data subjects, the most cost effective way to transfer their data to the U.S. is to obtain from each of them a clear, unambiguous statement that they freely permit the transfer of their personal data.

What are the possible repercussions of not complying with the Privacy Shield?

The FTC can investigate alleged violations of the Privacy Shield, enter consent orders and findings of contempt, and impose administrative penalties. Currently, administrative penalties may be up to $40,000 per violation, or per day for continuing violations. Additional penalties include the FTC’s removal of a company from the Privacy Shield list, resulting in liability under the False Statements Act if the company continues to claim it is certified. Learn from the lessons of others — the FTC has issued record-breaking fines in the past two years, including a $1.3 billion fine issued in the past month. The data owners in the EU, the EU Commission and/or data privacy authorities may also have private rights of action against a U.S. company that violates the relevant rules.

The wrap-up:

  • Assess how your U.S.-based business receives personal data from EU data subjects. Based on the volume, your relationship to the data owners, and whether you process or control the data, you may have to designate an employee or contractor who is knowledgeable about data privacy and cybersecurity to monitor, update and enforce the policy and verify that the privacy notice meets all applicable state, federal and international rules.
  • Consult all parts of your organization to assess which option is best for you. Privacy is not a distinct division within your company. Verify that operations, human resources and policy enforcement work in concert to maintain the standards of the Privacy Shield.

[1] See the “Privacy Shield List” at https://www.privacyshield.gov/list.

What’s the Case About? In re Nickelodeon Consumer Privacy Litigation[1] is a multi-district consolidated class action filed on behalf of children under the age of thirteen, alleging that Viacom used child-directed websites it owned to collect, without parental consent, data from the class members, which it then provided to co-defendant Google. The data Viacom captured from children included their gender, birthdate, IP address, the webpages they visited and the names of the videos they viewed. The court considered an issue of first impression: whether an IP address is personally identifiable information (PII) under the Video Privacy Protection Act (VPPA), and whether the collection of the data constituted intrusion upon seclusion under New Jersey law. Plaintiffs argued that the vulnerability of children, coupled with public aversion to mining them for data, supported liability for Viacom.[2]

VPPA allegations dismissed: The court held that Viacom did not violate the VPPA by collecting the IP addresses of children. The decision was based, in part, on the precedent set by In re Hulu Privacy Litigation.[3] The Hulu court determined that static digital identifiers such as IP addresses identify the location of a computer, which, without additional information, cannot be used to identify an individual. Under this rationale, an IP address is not PII, because an address alone cannot “reasonably” lead to the identification of a person. The court also noted that the VPPA is just too old and brittle to encompass technology so distant from its origins as a by-product of Blockbuster, Erol’s Video Club and Hollywood Video stores. A nuance as to why Google escaped liability under the VPPA is touched upon below.

New Jersey state law claims remanded: The court remanded the claim against Viacom for violation of the New Jersey intrusion upon seclusion law. The court did not look favorably upon Viacom’s failure to honor its notice to parents that it would not collect any data from children.

The allegations against Viacom: Viacom owns the websites Nick.Jr. and Nick.com (the “Nick Sites”), both of which are associated with the Nickelodeon channel. The Nick Sites offer games and streaming videos to children and included this notice to parents:

HEY GROWN-UPS: We don’t collect ANY personal information about your kids. Which means we couldn’t share it even if we wanted to![4]

When children registered on one of the Nick Sites, they were assigned an avatar and nickname based on a Nickelodeon cartoon character of the same gender and approximate age as the child. The plaintiffs alleged that Viacom used first-party cookies it placed on the children’s computers to obtain information about which games and videos the children accessed. Viacom disclosed the information it collected to Google and permitted Google to place ads on the Nick Sites.

The allegations against Google: The plaintiffs alleged that Google (1) placed third-party cookies via advertisement on the computers of children who accessed the Nick Sites, (2) used those cookies to track the children on any website displaying a Google ad, and (3) used “Doubleclick.net cookies”[5] to track the browsing of whomever used the computer across any website Google owned, such as Gmail, YouTube and Google Maps.
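
For readers wondering how a third-party cookie can follow a user across unrelated sites, here is a minimal sketch of the mechanics. Flask is used as an assumed stand-in; the opinion does not describe Google’s actual implementation, and nothing here is based on it.

```python
# A minimal "third-party ad server" sketch (hypothetical, using Flask).
# Any page embedding an ad from this server triggers a request that carries
# the same cookie, letting the ad server log browsing across sites.
from flask import Flask, request, make_response
import uuid

app = Flask(__name__)
seen = {}  # tracking_id -> pages that embedded the ad

@app.route("/ad.png")
def ad():
    tracking_id = request.cookies.get("tid") or str(uuid.uuid4())
    # The Referer header reveals which page embedded the ad.
    seen.setdefault(tracking_id, []).append(request.headers.get("Referer", "unknown"))
    resp = make_response(b"")  # the ad image would go here
    resp.set_cookie("tid", tracking_id)  # cookie scoped to the ad domain
    return resp

# app.run() would serve this handler on the ad network's domain; every site
# that embeds /ad.png then feeds the same browsing ledger.
```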

Analysis of the VPPA: Congress enacted the VPPA after the 1987 Senate Judiciary Committee hearings regarding Supreme Court nominee Robert Bork. During the hearings, a newspaper obtained and publicized a list of the titles of 146 films Judge Bork or members of his family rented from a local video store.[6] The list of videos was, even by 1987 standards, unremarkable — not a single NC-17 film on the list. Congress agreed, however, that a person’s video viewing history should be private. Consequently, under the VPPA, a user must give permission for his or her video viewing data to be shared. How does this translate to current technology? It doesn’t. The court likened applying the VPPA to internet technology to putting a square peg in a round hole.[7] Additionally, the court referred to the VPPA as a rigid law that lacks the flexibility the Children’s Online Privacy Protection Act (COPPA) has to effectively regulate technology that is “in flux.”[8]

The key definitions under the VPPA are:

Consumer: any renter, purchaser or subscriber of goods or services from a video tape service provider.

Video tape service provider: any person, engaged in the business, in or affecting interstate or foreign commerce, of rental, sale or delivery of prerecorded video cassette tapes or similar audio visual materials.

Personally identifiable information (PII): includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.

18 U.S.C. § 2710(a). A violation of the VPPA occurs when “[a] video tape service provider … knowingly disclose[s], to any person, personally identifiable information concerning any consumer of such provider.” Id. § 2710(b)(1).

The VPPA was created to protect information specific to viewers of movies. The court noted that if the definition of PII were expanded in all statutes to include an IP address, there would be no end to litigation, and the distinctions between the groups protected by certain statutes would be eroded. Congress’s decision not to add a new definition of PII in the 2013 amendment of the VPPA further emphasized that the VPPA “serves different purposes, and protects different constituencies, than other, broader privacy laws.”[9] For example, if “Google were to start purposefully leaking its customers’ YouTube video-watching histories,” the VPPA “would almost certainly” be violated.[10]

Extending the VPPA to regulate current technology would likely result in unlimited violations. Defining an IP address as PII within the context of the VPPA would mean that the disclosure of an IP address to any Internet company with registered users might trigger liability, given that an IP address is regularly transmitted to an Internet service provider (ISP) with each search.[11] The court also pointed out that there is a spectrum of PII, with first and last name at one end and an IP address at the other, lower end, given that an IP address alone may be insufficient to identify a person. The case cited by the court to illustrate the need for a subpoena to identify a person is a copyright infringement case, Warner Bros. Records Inc. v. Walker, 704 F. Supp. 2d 460 (W.D. Pa. 2010). Warner Bros. needed a subpoena to identify the student who was assigned the IP address used to illegally download some songs. The student, who shared a room with multiple roommates, possibly would not have been identified without a subpoena, given that several people may have used the computer. It was not “reasonably” likely that Warner Bros. could identify the person responsible for the downloads without a subpoena. Understandably, a subpoena may be necessary in a fluid environment such as a college, where multiple people may have access to a computer.

Time-out: It’s one thing for Warner Bros. to need help from the college to identify which of multiple people may have used an IP address assigned by the college. It’s something altogether different when Google, which the court describes as “a company whose entire business model is purportedly driven by the aggregation of information about Internet users,” wants to identify a person. The plaintiffs’ amicus very astutely provided some real-world perspective about what happens when Google wants to find out who you are: “concluding ‘that Google is unable to identify a user based on a combination of IP address … and other browser cookie data … would be like concluding the company that produces the phone book is unable to deduce the identity of an individual based on their telephone number.’”[12] Enough said. Resume play.

The court affirmed the dismissal of the intrusion upon seclusion claim against Google: Although the court acknowledged that many people, and some courts, find the monetization and collection of data from children without parental consent repugnant, those acts alone did not establish a claim for intrusion upon seclusion. Under New Jersey law, an intrusion upon seclusion claim requires a showing of (i) an intentional intrusion (ii) upon the seclusion of another (iii) that is highly offensive to a reasonable person.[13] The court disregarded the fact that children, instead of adults, were tracked, because third-party cookies serve a legitimate commercial purpose for advertisers and Google used them on the Nick Sites the same way it uses them on other, non-child-directed sites.

This is why Viacom may be liable for intrusion upon seclusion: When Viacom notified parents that it did not collect any personal information about children, it was reasonable for a jury to conclude that parents may have permitted their children unsupervised access to the Nick Sites based on that disclaimer. If the parents of the plaintiff class members didn’t already have an expectation of privacy, Viacom’s notice created one. Viacom’s violation of that trust by surreptitiously collecting data from children could be considered highly offensive under the applicable law.

Summary

An IP address has been likened to a computer’s fingerprint. If a statute identifies an IP address or other statically assigned number as PII, that number is a great starting point for identifying a user. For example, under COPPA and HIPAA, an IP address is as high on the spectrum of PII as a user’s first and last name. The rationale behind the ranking of an IP address in these statutes is that sometimes it is reasonable to expect that an IP address can lead you to the user. Who’s looking for you also matters. It is reasonable to expect that Google, using third-party cookies, can use your IP address to identify you.

Sometimes an IP address can only identify a computer, i.e., it cannot “reasonably” be used to identify you. Without a subpoena or some alternate means of creating a mosaicked identity, you may have to resort to battling “John Doe” until a subpoena grants you the right to retrieve additional information about the IP address. In these instances, IP addresses are not considered to be PII. At the end of the day, you have found a computer. Good job.

What did we learn?

  • Don’t oversell your privacy policy. Viacom faces potential liability because it violated its own privacy notice to parents.
  • Do the right thing. Don’t collect information from children under the age of 13 that is defined as PII under any privacy law without parental consent. These days there are few things on which 90 percent of Americans agree — and Viacom’s actions on the Nick Sites are considered highly offensive.
  • Now that you know that children’s browsing history, IP address and other information can sometimes be collected through third-party cookies without parental consent, educate your children. The Federal Trade Commission provides guidance on helping children distinguish between ads and entertainment: https://www.commonsensemedia.org/website-reviews/admongo.
  • Understand that a trade-off for having the world at your fingertips may mean sharing your computer’s fingerprint with inquiring minds.

Stay safe.


[1] In re Nickelodeon Consumer Privacy Litigation, No. 15-1441, 2016 WL 3513782 (3d Cir. June 28, 2016).

[2] Id. at *4 (alleging that (1) targeting ads to children is more profitable than targeting ads to adults, in part, “because children are generally unable to distinguish between content and advertisements;” (2) 80% and 90% of 2,000 adult respondents, respectively, opposed an advertiser’s tracking of children and believed advertisers should obtain a parent’s permission before installing cookies on a device used by a minor child; and (3) companies can use “browser fingerprinting” to identify specific users).

[3] In re Hulu Privacy Litigation, No. 11-CV-3764 (LB), 2014 WL 1724344 (N.D. Cal. Apr. 28, 2014).

[4] In re Nickelodeon, 2016 WL 3513782, at *3.

[5] Id.

[6] Michael Dolan, “The Bork Tapes,” City Paper, Sept. 25–Oct. 1, 1987, at 1.

[7] In re Nickelodeon, 2016 WL 3513782, at *15.

[8] Id. at *16-19.

[9] In re Nickelodeon, 2016 WL 3513782, at *19.

[10] Id. at *17.

[11] Id. at *20-21.

[12] Id. at *20.

[13] Hennessy v. Coastal Eagle Point Oil Co., 609 A.2d 11, 17 (N.J. 1992) (citing Restatement (Second) of Torts § 652B (1977)).

“A reputation once broken may possibly be repaired, but the world will always keep their eyes on the spot where the crack was.” ― Joseph Hall

Consumers may be injured by inaccurate data that they cannot review or correct. There’s a hole in the bucket, dear Congress.[1] 

The children’s song “There’s a Hole in the Bucket” exemplifies the conundrum many consumers experience when they are denied opportunities or inappropriately solicited. Data brokers maintain files with over 1,500 pieces of personal data on each of us, and there are over 3,500 data brokers in the U.S. Only about one-third of them permit individuals to opt out of inclusion in their data banks, usually for a fee. Unless and until you recognize an unexplained pattern of lost job opportunities or rejected apartment applications, or find yourself targeted by unsolicited marketing, you may not care what data brokers maintain in your files.

Imagine this: You are 22 years old and gung-ho to use your brand-spanking new business organization degree as an entry-level traveling corporate trainer. You grant recruiters the right to conduct background checks after they indicate their interest in you based on your resume. You get rejection after rejection. You finally muster the courage to call a recruiter and ask why, and she explains that you are not a good fit based on background information that describes you as 39 years old, the parent of four young children, having a Ph.D. and a sufferer of agoraphobia. None of this information is true.

Your prospective employers may have relied on information provided by data brokers or credit rating agencies (CRA) in determining that you are not a viable candidate. Now that you know inaccurate information is being reported about you, you are confident that you can correct your files and the employers will reverse their decisions. You can if the inaccurate information is from a CRA. But if data brokers provided the incorrect information, you will find yourself in the miserable position of knowing your files are wrong and being powerless to correct them. You know prospective employers have considered inaccurate information about you, but you don’t know which employers relied on which data brokers or which inaccuracies in your files made you undesirable for hire. You don’t know how many data brokers have files on you or what evidence you can provide to disprove the inaccurate information about you. You and “Dear Henry” share the predicament of wanting to fix the hole in the bucket but lacking the tools to do so.

Let the screening begin.

Many decisions about consumers, job applicants and second dates are based on inaccurate information provided by data brokers. Data brokers sell consumers’ personally identifiable information (PII) to be used in marketing, people searches, fraud detection and risk management. The FTC defines data brokers as “companies that collect information, including personal information about consumers, from a wide variety of sources for the purpose of reselling such information to their customers for various purposes, including verifying an individual’s identity, differentiating records, marketing products, and preventing financial fraud.”[2] The Fair Credit Reporting Act (FCRA) applies to CRAs like Experian, TransUnion and Equifax, not to data brokers. CRAs must take reasonable steps to ensure the accuracy of the consumer PII they distribute, and they must give consumers the opportunity to review and correct their records. There are zero federal statutes or regulations imposing similar duties on data brokers. If enacted as introduced in 2009, the Data Accountability and Trust Act (DATA) would provide procedures for individuals to audit and verify the accuracy of data held by data brokers. The swath of data collected by data brokers is astounding and troubling. Add the fact that data brokers are generally less expensive to use than CRAs, and individuals are at a distinct disadvantage relative to data brokers.

Here’s what’s in the bucket.

Reports about consumers are based on information showing what they own or use; who they are with, have lost or are fighting; how they make, save and spend their money; and what interests or ails them, including mental, genetic and “other” diseases that may be embarrassing.[3] For example, when you register your car, record a deed of trust, activate a warranty, join a Facebook group, fill a new prescription, or get sued, married, divorced or widowed, data brokers collect that information. It is tacitly understood that PII from data brokers is not accurate and enables discrimination in hiring and in the provision of resources and opportunities.[4] Consumer advocacy groups report that information used in people search sites is not vetted — the consumer has the responsibility of figuring out which of 67 people named “Pamela Samuelson” authored Protecting Privacy Through Copyright Law?. Marketing information is more accurate, but is still unreliable. For example, a data broker may correctly report that a household member purchased a new car, but err by addressing car wash coupons to the resident third-grader. Risk mitigation information is the most accurate, because it is expected to at least correspond to the correct person, even if the results are outdated.

This brings to mind a television character who changed his name because he shared it with a well-known artist who had been convicted of sex crimes. His new name, unfortunately, was shared with a well-known artist who had been convicted of murder. How would you feel knowing that you may be judged by the bad report of someone who has a name similar to yours? The identities of the entities using the bad data may influence your answer.

Who’s looking in the bucket?

Financial institutions, government agencies, political organizers and insurance companies use the services of data brokers. As of May, the customers of one of the largest data brokers included “47 Fortune 100 clients; 12 of the top 15 credit card issuers; seven of the top 10 retail banks; eight of the top 10 telecom/media companies; seven of the top 10 retailers; nine of the top 10 property and casualty insurers; three of the top five domestic airlines; and six of the top 10 U.S. hotels.”[5] How likely are you to notice that, after your namesake niece filed for bankruptcy, the hotel prices you were offered increased by 18%?

Can you look in the bucket?

It depends on who filled it. If data brokers filled the bucket, no federal law gives you the right to look inside; a subpoena or other discovery procedure may be your best option for seeing your file. If a CRA filled the bucket, you have the right to review and correct the information in it.

What can you do?

  •  Educate yourself about your rights. See whether your state has any laws that offer you protection. California, for example, shields victims of violent crimes from having their PII publicized on the internet.
  •  Opt out of as many of the data broker sites as is reasonable. Visit this website to get started: http://www.computerworld.com/article/2849263/doxxing-defense-remove-your-personal-info-from-data-brokers.html.
  •  Lobby your federal and state legislators, and align yourself with organizations that advocate for the right to control your PII.

Stay safe.


[1] There’s a Hole in the Bucket, Songwriters: Harry Belafonte, Odetta Felious Gordon © Next Decade Entertainment, Inc.

[2] FTC, Protecting Consumer Privacy in an Era of Rapid Change, at 68 (Mar. 2012).

[3] Steve Kroft, The Data Brokers: Selling Your Personal Information, 60 Minutes (CBS television broadcast Mar. 9, 2014), http://www.cbsnews.com/news/the-data-brokers-selling-your-personal-information/.

[4] Exec. Office of the Pres., Big Data: Seizing Opportunities, Preserving Values, at 51-53 (May 2014), http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf.

[5] U.S. Senate Commerce Committee, A Review of the Data Broker Industry: Collection, Use, and Sale of Consumer Data for Marketing Purposes (Dec. 2013).

“He’s not safe, but he’s good.” (Referring to Aslan, the Lion, in The Lion, the Witch and the Wardrobe.) ― C.S. Lewis

I planned to write about the inspired, better-than-sliced-bread security option of using fingerprint authentication to protect our mobile devices. That imploded. In 2014 and earlier this year, courts in Virginia and California, respectively, issued warrants requiring suspects to provide their fingerprints to unlock phones so the government could access potentially incriminating evidence believed to be stored there.[1]

All “do,” no “talk.”

In contrast, courts have not forced individuals to reveal the passcodes used to secure their mobile devices.[2] What gives? Albert Gidari, the director of privacy at Stanford Law School’s Center for Internet and Society, explains that the Fifth Amendment protects thoughts, not things: “Unlike disclosing passcodes, you are not compelled to speak or say what’s ‘in your mind’ to law enforcement,” Gidari said. “‘Put your finger here’ is not testimonial or self-incriminating.” By analogy, you can be compelled to hand over the key to your heart, but no one can make you reveal what is in your heart.

Why chain the door when all the windows are open?

A fingerprint authentication platform is only as strong as its gaps. The maker of one of the top mobile operating systems has stored fingerprints as unencrypted images in local storage. Fingerprint data stored by two cellphone companies was breached despite the use of TrustZone, a hardware-based security technology built into many mobile processors.[3] WhatsApp, which was mentioned in a previous blog, has also experienced data theft.
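
Storing a biometric unencrypted is exactly the kind of gap that swallows the whole platform. As a rough illustration only, and not a description of any vendor’s actual implementation, here is the difference in Python (using the third-party cryptography package; the file names and data are invented):

    from cryptography.fernet import Fernet

    fingerprint_image = b"raw biometric bytes from the sensor"  # placeholder data

    # The gap: anything that can read local storage can read the biometric.
    with open("fingerprint.raw", "wb") as f:
        f.write(fingerprint_image)

    # The fix: encrypt before writing. In practice the key should live in
    # hardware-backed storage (a secure element), never next to the file.
    key = Fernet.generate_key()
    with open("fingerprint.enc", "wb") as f:
        f.write(Fernet(key).encrypt(fingerprint_image))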

Studies reveal the ineffectiveness of security software provided to users by their data providers. The software is largely ineffective because … PEOPLE DON’T DOWNLOAD IT! People shred their mail, but won’t download the platforms devised to protect the same information when it lives as data. The most common reasons people don’t install updates include: (1) suspicion that the updates are malware sent by hackers; (2) belief that an update won’t benefit them if they are otherwise satisfied with their current service; (3) lack of understanding that updates provide security patches; and (4) expectation that updates will take too long or use too much memory. You can check the authenticity of an update to your operating system by visiting the app store or your manufacturer’s website, or by searching the internet for information about the release.
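
For the suspicious souls in category (1), a download can also be verified before it is installed. A minimal sketch in Python, assuming a hypothetical update file and a SHA-256 digest published by the vendor (both names here are made up):

    import hashlib

    UPDATE_FILE = "system-update.bin"   # hypothetical downloaded file
    PUBLISHED_SHA256 = "9f2b..."        # digest copied from the vendor's release notes

    # Hash the file in chunks so even a large update fits comfortably in memory.
    sha256 = hashlib.sha256()
    with open(UPDATE_FILE, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)

    if sha256.hexdigest() == PUBLISHED_SHA256:
        print("Digest matches the published value; the file is what the vendor shipped.")
    else:
        print("Digest mismatch; do not install this update.")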

A critical flaw in fingerprint authentication is hiding in plain sight.

The convenience and high-tech sexiness of using fingerprint authentication on our phones has clouded our judgment about some of the most basic things we know about security. Fingerprints have a characteristic that runs counter to a cornerstone of cybersecurity: fingerprints are immutable. If someone steals your password, you can change it. Quarterly mandatory password expirations illustrate the adage that the best password is a new password.[4] Heads would spin and roll in IT departments the world round if it were decreed that passwords would never be changed again.
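
To make the contrast concrete, here is an illustrative sketch (all names and values invented) of why a leaked password is a recoverable event while a leaked fingerprint is not:

    import hashlib
    import secrets

    def store_credential(secret: bytes) -> tuple[bytes, bytes]:
        """Salt and hash a secret so the stored record can be replaced later."""
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
        return salt, digest

    # A password: if the stored record leaks, rotate by choosing a new secret.
    record = store_credential(b"correct horse battery staple")
    record = store_credential(b"a brand new passphrase")  # rotation works

    # A fingerprint: the underlying secret is fixed for life. Re-enrolling the
    # same biometric after a breach just stores the same unchangeable secret.
    record = store_credential(b"<fingerprint template bytes>")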

And just like that, the floodgates are open.

A much-touted advantage of fingerprint authentication is that no one can steal your fingerprint. That’s fine, but the image of your fingerprint can be stolen like any other image. That image can give someone access to apps, browsers, photo albums, cloud files and online accounts, some of which may be secured by passwords cached on your phone. Finally, does it make sense to have an expectation of privacy in our fingerprints? The legal answer is no. Since we literally leave our fingerprints everywhere, maybe we should reconsider relying on them to secure our privacy. Our unspoken thoughts are inalienable property. Apparently, fingerprints are just keys.

Convenience is usually a good thing. But good things may not be safe. We each have to weigh whether the convenience of opening our phones with a finger swipe instead of entering our PINs is worth the risk of losing our privacy.

Stay secure.


[1] Matt Hamilton, The government wants your fingerprint to unlock your phone. Should that be allowed?, LA Times, Apr. 30, 2016, http://www.latimes.com/local/california/la-me-iphones-fingerprints-20160430-story.html; and Quinton Plummer, Virginia police can force you to unlock your smartphone using fingerprint: Here’s why, Tech Times, Nov. 3, 2014, http://www.techtimes.com/articles/19288/20141103/virginia-police-can-force-you-to-unlock-your-smartphone-using-fingerprint-heres-why.htm.
[2] SEC v. Huang, No. 15-269 (E.D. Pa. Sept. 23, 2015); Virginia v. Baust, No. CR14-1439 (Va. Cir. Oct. 28, 2014).
[3] Shubham Verma, Why I’m Not a Fan of Smartphones With Fingerprint Scanners, Gadgets 360, Oct. 30, 2015, http://gadgets.ndtv.com/mobiles/opinion/why-im-not-a-fan-of-smartphones-with-fingerprint-scanners-759160.
[4] To be fair, there is some belief that changing passwords regularly is more harmful than not. E.g., Andrea Peterson, Why changing your password may do more harm than good, The Washington Post, Mar. 2, 2016, https://www.washingtonpost.com/news/the-switch/wp/2016/03/02/the-case-against-the-most-annoying-security-measure-virtually-every-workplace-uses/.