In an effort to protect its musical artists from AI-generated deepfakes of their voices, the state of Tennessee recently enacted the Ensuring Likeness, Voice and Image Security (ELVIS) Act, which goes into effect July 1 of this year. The ELVIS Act replaced Tennessee’s existing rights law, the Personal Rights Protection Act (PRPA), which protected only a person’s name, photograph, or likeness and limited that protection to barring use in advertising. The PRPA also provided postmortem rights, enacted in an effort to protect the state’s most famous former resident, Mr. Presley himself. Notably, Tennessee is one of only three states in which the rights protected under the act are characterized as property rights rather than rights of publicity, the more typical approach.

Continue Reading ELVIS Adds (His) Voice to the Protection of Artists Against AI-Generated Deepfakes

On February 6, 2024, in Philpot v. Independent Journal Review, the U.S. Court of Appeals for the Fourth Circuit issued a copyright fair use decision in a photograph infringement case that is noteworthy for a number of reasons. Those who plan to rely on a fair use defense when using photos should take heed of this decision.

In this case, photographer Larry Philpot sued news website Independent Journal Review for using Philpot’s photo of singer Ted Nugent in an online article. One of the more interesting facts here is that Philpot uploaded his photo to Wikimedia Commons, where it was governed by a Creative Commons license requiring attribution. In other words, he simply required that users of his photo give him attribution, not pay him. Users could use Philpot’s photo free of charge, provided they included the following attribution: “Photo Credit: Larry Philpot of www.soundstagephotography.com.” Instead of providing that attribution, Independent Journal Review merely hyperlinked to Mr. Nugent’s Wikipedia page, where the photo was featured.

Yet another noteworthy fact is that the photo apparently generated only approximately $2 or $3 in revenue for the Independent Journal Review.

Continue Reading Fourth Circuit Hands Photographer a Clean Sweep Victory in Copyright Fair Use Appeal Over News Website’s Use of Free of Charge Photo

On Thursday, October 12, a bipartisan group of senators—Chris Coons (D-Del.), Thom Tillis (R-N.C.), Marsha Blackburn (R-Tenn.), and Amy Klobuchar (D-Minn.)—released a Discussion Draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe (dubbed the “NO FAKES”) Act that would protect the voice, image, or visual likeness of all individuals from unauthorized AI-generated digital replicas, also referred to as “deepfakes.” This draft bill, while focusing on protections required by the advancement of AI, would establish the first federal right of publicity—the right to protect and control the use of one’s voice, image, and visual likeness. The NO FAKES Act could have widespread impacts on the entertainment and media industries, among others.

Generative AI has opened new worlds of creative opportunities, but with these creative innovations also comes the ability to exploit another’s voice, image, or visual likeness by creating nearly indistinguishable digital replicas. This has caused great concern among musicians, celebrities, actors, and politicians regarding viral AI-created deepfakes circulating on social media and the Internet more broadly. To date, advancements in AI technology used to create digital replicas have outpaced the current legal framework governing unauthorized use. Although there are existing laws that may be used to combat digital replicas, these laws either vary from state to state, creating a patchwork of differing protections based on where one is located, or do not directly address the harms caused by producing and distributing unauthorized digital replicas.

Continue Reading AI Deepfake Bill: Senators Contemplate the First Federal Right of Publicity

The U.S. Supreme Court’s recent decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith is unlikely to shed much light on whether the use of copyrighted material in artificial intelligence (AI) content will lead to liability. The Court’s decision mandates that courts look to the “specific use” of the copyrighted material at issue when evaluating fair use under the Copyright Act. So, what specific factors should AI developers and users consider when using copyrighted content in the AI space post-Warhol?

The Copyright Act and Generative AI

Under the Copyright Act, copyright holders have the exclusive right to reproduce their work, prepare derivative works, distribute copies of the work, perform the work, and display the work publicly. In developing an AI system, programmers and companies can violate the exclusive rights of copyright holders at two distinct points:

  • By using copyrighted material as an input to train the AI software
  • By creating an unauthorized derivative work of the copyrighted material as an output of the AI application. The distinctions between inputs and outputs in this space are detailed here and here.
Continue Reading How Will Use of Copyrighted Content in Artificial Intelligence Be Evaluated After the Supreme Court’s Warhol Decision?

Last week, the Supreme Court issued a long-awaited copyright fair use decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith et al. In short, the Supreme Court considered whether it was fair use under copyright law for the Andy Warhol Foundation to license a print (known as Orange Prince) to Condé Nast when that print was based on a photograph that Lynn Goldsmith took of Prince in 1981. The 7-2 decision, featuring sharply contrasting views from frequent allies, Justice Sotomayor in the majority and Justice Kagan in dissent, illustrates the complexity and murkiness of copyright fair use precedent. In the end, the majority held that Warhol’s Orange Prince did not constitute fair use of Goldsmith’s copyrighted photograph of Prince, based upon “the purpose and character of the use,” the first factor in the four-factor fair use test.

The fair use doctrine is an affirmative defense to a copyright infringement claim. Courts must consider four factors, which need not be given equal weight, something that can make analyzing risk challenging. Relying on a fair use defense has long been very risky, and after this recent decision it is likely to be riskier still. The four statutory factors are:

  • The purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes
  • The nature of the copyrighted work
  • The amount and substantiality of the portion used in relation to the copyrighted work as a whole
  • The effect of the use upon the potential market for or value of the copyrighted work

Continue Reading The Supreme Court’s Warhol Ruling Makes Fair Use Defense Seem Even Riskier

Innovations in artificial intelligence (AI) have made it easier than ever to replicate a person’s name, image, and likeness (NIL), particularly if that person is a celebrity. AI algorithms require massive amounts of “training data”—videos, images, and soundbites—to create “deepfake” renderings of a persona in a way that feels real. The vast amount of training data available for celebrities and public figures makes them easy targets. So, how can celebrities protect their NIL from unauthorized AI uses?

The Right of Publicity

The right of publicity is the primary tool for celebrity NIL protection. The right of publicity protects against unauthorized commercial exploitation of an individual’s persona, from appearance and voice to signature catchphrase. Past right of publicity cases provide some context for how this doctrine could be applied to AI-generated works.

Continue Reading Artificial Intelligence Wants Your Name, Image and Likeness – Especially If You’re a Celebrity

Generative AI is creating previously unimaginable possibilities for influencers and brands to engage with consumers. Rather than merely posting on social media, influencers will be able to utilize AI to have two-way conversations that feel authentic. Influencers can do this literally in their own voice, having unique dialogs with countless people at the same time.

Influencers and brands are accustomed to the rules governing what can be said on social media. Now, however, they will need to think about what information they can elicit from fans and consumers in the course of unique and unpredictable interactions, and what they can do with that information. Generative AI will give them the ability to gather more consumer information than ever before, in ways that may be difficult to control.

Continue Reading Let’s Chat: Influencers and Brands Testing the Waters of Generative AI Must Navigate Data Privacy and FTC Issues

The recent explosion in popularity of generative artificial intelligence (AI), such as ChatGPT, has sparked a legal debate over whether the works created by this technology should be afforded copyright protections. Although several lawsuits on the subject have been filed and the U.S. Copyright Office has recently issued guidance clarifying its position, the bounds of copyright protection for works created using AI are not yet clearly defined, and many questions remain unanswered. For now, it appears that copyright eligibility for such works depends on the extent of human involvement in the creative process and whether any use of copyrighted work to generate a new work falls within the purview of the fair use doctrine.

The analysis of this issue has been framed around two key aspects of the technology itself: input data and output data. Input data are the pre-existing data that human users introduce into the AI system that the system then uses to generate new works. Output data are the works ultimately created by the system—the finished product. Thus, copyright eligibility for AI-generated or AI-assisted works depends on whether the AI system’s use of copyrighted works as input data is permissible and whether the output data is itself copyrightable.

Continue Reading ChatGPT and the Rise of Generative Artificial Intelligence Spark Debate on Copyright Protections of AI-Generated Works

The blockchain community has debated for years whether decentralized autonomous organizations (DAOs) can or should be analogized to a corporate form and whether they operate to insulate DAO members from legal liability. Some states have passed statutes addressing how DAOs are classified, such as Wyoming’s “DAO LLCs” law and Utah’s DAO Act. In Sarcuni v. bZx DAO, a class action pending in the Southern District of California, the liability of DAO members is at the forefront, and the first round of the fight did not go well for them. On March 27, the court denied a motion to dismiss filed by members of the DAO, finding that the bZx DAO and its successor, Ooki DAO, are plausibly alleged to be a general partnership in which the members of the DAO are the partners. This is a case of first impression in which a DAO’s members (its token holders) could be held jointly and severally liable for the actions of the DAO.

bZx DAO operates a blockchain-based software system called the bZx Protocol. The bZx Protocol was hacked in 2021, and its users lost approximately $55 million in digital tokens. To compensate those impacted by the hack, the bZx DAO developed and approved a compensation plan, but recoupment would take many years. The plaintiffs, 19 bZx Protocol users who collectively lost $1.7 million in the hack, filed suit in June 2022, claiming that the bZx DAO’s negligent security protocols led to the hack.

Continue Reading DAO or Dare: The Implications of Sarcuni v. bZx DAO for DAO Member Liability

The start of 2023 hasn’t gone much better for the blockchain and cryptocurrency industry than the end of 2022 did. In Friel v. Dapper Labs, a federal judge declined to dismiss a case alleging that non-fungible tokens (NFTs) called Moments are securities, allowing a lawsuit against the creator of the NBA Top Shot platform to proceed. In denying the motion to dismiss, the court found that the plaintiffs plausibly alleged that these NBA Top Shot NFTs, and only these NFTs, could be securities. While this is the first decision to hold that an NFT could be considered a security, the seemingly narrow ruling could have far-reaching implications for other NFT projects and marketplaces.

NBA Top Shot is an NFT platform, owned and operated by Dapper Labs, that allows consumers to buy, sell, and trade Moments (digital video clips of player highlights) on Dapper Labs’ Flow blockchain. On February 22, 2023, the United States District Court for the Southern District of New York denied Dapper Labs’ motion to dismiss, holding that although “it [is] a close call and the Court’s decision is narrow,” the plaintiffs plausibly alleged that Moments qualify as securities under the Howey test. In denying the motion, the court focused on the second and third prongs of the Howey test.

Continue Reading Layup or Airball? Court Holds NBA Top Shot NFTs May Be a Security in Friel v. Dapper Labs