On June 18, 2024, California Attorney General (AG) Rob Bonta announced his office's third CCPA enforcement settlement, this one with Tilting Point Media LLC. Tilting Point was allegedly using its mobile app game “SpongeBob: Krusty Cook-Off” to collect, share, and sell the data of minors, in violation of the California Consumer Privacy Act (CCPA), California’s Unfair Competition Law (UCL), and the federal Children’s Online Privacy Protection Act (COPPA). Tilting Point agreed to pay a $500,000 civil penalty and to implement certain measures to address the alleged violations. The settlement is notable for combining enforcement of COPPA alongside the CCPA, targeting similar practices but different age groups under each law. Also notably, the AG investigated Tilting Point after the Children’s Advertising Review Unit (CARU) of BBB National Programs issued findings alleging that Tilting Point’s practices violated COPPA, and the AG alleged that Tilting Point failed to correct its practices following CARU’s investigation. The case illustrates the risks of ignoring industry self-regulatory reviews and provides a roadmap other states can use to leverage multiple laws against the same activities.

The AG’s complaint focused on the key allegations outlined below:

Continue Reading California Attorney General’s Recent Enforcement of CCPA and COPPA

In LKQ Corporation v. GM Global Technology Operations (LKQ), a recent en banc Federal Circuit decision overruled the unique test for obviousness of design patents and held that the same analysis should apply to both utility patents and design patents. Courts had previously used the Rosen-Durling test to determine invalidity of design patents due to obviousness. The Federal Circuit’s decision overruled the Rosen-Durling test and instructs courts instead to apply the Supreme Court’s analysis from its KSR decision and to utilize the Graham factors, as they would with a utility patent, when evaluating the obviousness of a design patent. This important en banc decision may create uncertainty surrounding the application of the Graham factors to design patents and the enforcement of design patents generally.

Continue Reading Federal Circuit Overrules Obviousness Test for Design Patents and Decades of Precedent

In an effort to provide its musical artists some protection from AI-generated deepfakes of their voices, the state of Tennessee recently enacted ELVIS seeking a cure. Specifically, Tennessee passed the Ensuring Likeness, Voice and Image Security (ELVIS) Act, which goes into effect July 1 of this year. The ELVIS Act replaced Tennessee’s existing rights law (the Personal Rights Protection Act, or PRPA), which protected only a person’s name, photograph, or likeness and limited that protection to barring use in advertising. The PRPA had also added postmortem rights in an effort to protect the state’s most famous resident, Mr. Presley himself. Interestingly, and uniquely to Tennessee and two other states, the protected rights under the act are characterized as property rights rather than rights of publicity, the more typical characterization.

Continue Reading ELVIS Adds (His) Voice to the Protection of Artists Against AI-Generated Deepfakes

On February 6, 2024, in Philpot v. Independent Journal Review, the U.S. Court of Appeals for the Fourth Circuit issued a copyright fair use decision in a photograph infringement case that is noteworthy for a number of reasons. Those who plan to use photos based on a fair use defense should take heed of this decision.

In this case, photographer Larry Philpot sued news website Independent Journal Review for using Philpot’s photo of singer Ted Nugent in an online article. One of the more interesting facts here was that Philpot uploaded his photo to Wikimedia Commons, which is governed by a Creative Commons license requiring attribution. In other words, he simply required that users of his photo give him attribution, not pay him. Users could use Philpot’s photo free of charge, provided they included the following attribution: “Photo Credit: Larry Philpot of www.soundstagephotography.com.” Instead, Independent Journal Review hyperlinked to Mr. Nugent’s Wikipedia page, where the photo was featured.

Yet another noteworthy fact is that the photo apparently generated only approximately $2 or $3 in revenue for the Independent Journal Review.

Continue Reading Fourth Circuit Hands Photographer a Clean Sweep Victory in Copyright Fair Use Appeal Over News Website’s Use of Free of Charge Photo

On Thursday, October 12, a bipartisan group of senators—Chris Coons (D-Del.), Thom Tillis (R-N.C.), Marsha Blackburn (R-Tenn.), and Amy Klobuchar (D-Minn.)—released a Discussion Draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (dubbed the “NO FAKES” Act) that would protect the voice, image, or visual likeness of all individuals from unauthorized AI-generated digital replicas, also referred to as “deepfakes.” This draft bill, while focusing on protections required by the advancement of AI, would establish the first federal right of publicity—the right to protect and control the use of one’s voice, image, and visual likeness. The NO FAKES Act could have widespread impacts on the entertainment and media industries, among others.

Generative AI has opened new worlds of creative opportunities, but with these creative innovations also comes the ability to exploit another’s voice, image, or visual likeness by creating nearly indistinguishable digital replicas. This has caused great concern among musicians, celebrities, actors, and politicians regarding viral AI-created deepfakes circulating on social media and the Internet more broadly. To date, advancements in AI technology used to create digital replicas have outpaced the current legal framework governing unauthorized use. Although there are existing laws that may be used to combat digital replicas, these laws either vary from state to state, creating a patchwork of differing protections based on where one is located, or do not directly address the harms caused by producing and distributing unauthorized digital replicas.

Continue Reading AI Deepfake Bill: Senators Contemplate the First Federal Right of Publicity

The U.S. Supreme Court’s recent decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith is unlikely to shed much light on whether the use of copyrighted material in artificial intelligence (AI) content will lead to liability. The Court’s decision mandates that courts look to the “specific use” of the copyrighted material at issue when evaluating fair use under the Copyright Act. So, what specific factors should AI developers and users consider when using copyrighted content in the AI space post-Warhol?

The Copyright Act and Generative AI

Under the Copyright Act, copyright holders have the exclusive right to reproduce their work, prepare derivative works, distribute copies of the work, perform the work, and display the work publicly. In developing an AI system, programmers and companies can violate exclusive rights of copyright holders at two distinct points:

  • By using copyrighted material as an input to train the AI software
  • By creating an unauthorized derivative of the copyrighted work as an output of the AI application. The distinctions between inputs and outputs in this space are detailed here and here.
Continue Reading How Will Use of Copyrighted Content in Artificial Intelligence Be Evaluated After the Supreme Court’s Warhol Decision?

Last week, the Supreme Court issued a long-awaited copyright fair use decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith et al. In short, the Supreme Court considered whether it was fair use under copyright law for the Andy Warhol Foundation to license a print (known as Orange Prince) to Condé Nast when that print was based on a photo that photographer Lynn Goldsmith took of Prince in 1981. The 7-2 decision, featuring extremely sharp and contrasting views from common allies, Justice Sotomayor in the majority and Justice Kagan in the dissent, illustrates the complexities and murkiness of copyright fair use precedent. In the end, the majority held that Warhol’s Orange Prince did not constitute fair use of Goldsmith’s copyrighted photograph of Prince, based upon “the purpose and character of the use,” which is the first factor in the four-factor fair use test.

In sum, fair use is a defense to a copyright infringement claim. Courts must consider four factors, which need not be given equal weight, something that can make analyzing risk challenging. Relying on a fair use defense has been very risky for years, and after this recent decision it will likely be riskier still. The factors are:

Continue Reading The Supreme Court’s Warhol Ruling Makes Fair Use Defense Seem Even Riskier

Innovations in artificial intelligence (AI) have made it easier than ever to replicate a person’s name, image, and likeness (NIL), particularly if that person is a celebrity. AI algorithms require massive amounts of “training data”—videos, images, and soundbites—to create “deepfake” renderings of a persona that feel real. The vast amount of training data available for celebrities and public figures makes them easy targets. So, how can celebrities protect their NIL from unauthorized AI uses?

The Right of Publicity

The right of publicity is the primary tool for celebrity NIL protection. The right of publicity protects against unauthorized commercial exploitation of an individual’s persona, from appearance and voice to signature catchphrase. Past right of publicity cases provide some context for how this doctrine could be applied to AI-generated works.

Continue Reading Artificial Intelligence Wants Your Name, Image and Likeness – Especially If You’re a Celebrity

Generative AI is creating previously unimaginable possibilities for influencers and brands to engage with consumers. Rather than merely posting on social media, influencers will be able to utilize AI to have two-way conversations that feel authentic. Influencers can do this literally in their own voice, having unique dialogs with countless people at the same time.

Influencers and brands are accustomed to the rules governing what can be said on social media. Now, however, they will need to start thinking about what sort of information they can elicit from fans and consumers in the course of unique and unpredictable interactions, and what they can do with that information. They will have the ability to gather more consumer information than ever before, and in ways that may be difficult to control.

Continue Reading Let’s Chat: Influencers and Brands Testing the Waters of Generative AI Must Navigate Data Privacy and FTC Issues

The recent explosion in popularity of generative artificial intelligence (AI), such as ChatGPT, has sparked a legal debate over whether the works created by this technology should be afforded copyright protections. Although several lawsuits on the subject have been filed and the U.S. Copyright Office has recently issued guidance clarifying its position, the bounds of copyright protection for works created using AI are not yet clearly defined, and many questions remain unanswered. For now, it appears that copyright eligibility for such works depends on the extent of human involvement in the creative process and whether any use of copyrighted work to generate a new work falls within the purview of the fair use doctrine.

The analysis of this issue has been framed around two key aspects of the technology itself: input data and output data. Input data are the pre-existing data that human users introduce into the AI system that the system then uses to generate new works. Output data are the works ultimately created by the system—the finished product. Thus, copyright eligibility for AI-generated or AI-assisted works depends on whether the AI system’s use of copyrighted works as input data is permissible and whether the output data is itself copyrightable.

Continue Reading ChatGPT and the Rise of Generative Artificial Intelligence Spark Debate on Copyright Protections of AI-Generated Works