Artificial Intelligence

In an effort to give its musical artists some protection from AI-generated deepfakes of their voices, the state of Tennessee recently turned to ELVIS for a cure. Specifically, Tennessee passed the Ensuring Likeness, Voice and Image Security (ELVIS) Act, which goes into effect July 1 of this year. The ELVIS Act replaced Tennessee’s existing rights law, the Personal Rights Protection Act (PRPA), which protected only a person’s name, photograph, or likeness and barred only their use in advertising. The PRPA had also added postmortem rights, largely to protect the state’s most famous resident, Mr. Presley himself. Interestingly, Tennessee and only two other states characterize the protected rights under the act as property rights rather than as rights of publicity, which is the more typical approach.

On Thursday, October 12, a bipartisan group of senators—Chris Coons (D-Del.), Thom Tillis (R-N.C.), Marsha Blackburn (R-Tenn.), and Amy Klobuchar (D-Minn.)—released a Discussion Draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe (dubbed the “NO FAKES”) Act that would protect the voice, image, or visual likeness of all individuals from unauthorized AI-generated digital replicas, also referred to as “deepfakes.” This draft bill, while focusing on protections required by the advancement of AI, would establish the first federal right of publicity—the right to protect and control the use of one’s voice, image, and visual likeness. The NO FAKES Act could have widespread impacts on the entertainment and media industries, among others.

Generative AI has opened new worlds of creative opportunities, but with these creative innovations also comes the ability to exploit another’s voice, image, or visual likeness by creating nearly indistinguishable digital replicas. This has caused great concern among musicians, celebrities, actors, and politicians regarding viral AI-created deepfakes circulating on social media and the Internet more broadly. To date, advancements in AI technology used to create digital replicas have outpaced the current legal framework governing unauthorized use. Although there are existing laws that may be used to combat digital replicas, these laws either vary from state to state, creating a patchwork of differing protections based on where one is located, or do not directly address the harms caused by producing and distributing unauthorized digital replicas.

The U.S. Supreme Court’s recent decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith is unlikely to shed much light on whether the use of copyrighted material in artificial intelligence (AI) content will lead to liability. The Court’s decision mandates that courts look to the “specific use” of the copyrighted material at issue when evaluating fair use under the Copyright Act. So, what specific factors should AI developers and users consider when using copyrighted content in the AI space post-Warhol?

The Copyright Act and Generative AI

Under the Copyright Act, copyright holders have the exclusive right to reproduce their work, prepare derivative works, distribute copies of the work, perform the work, and display the work publicly. In developing an AI system, programmers and companies can violate exclusive rights of copyright holders at two distinct points:

  • By using copyrighted material as an input to train the AI software.
  • By producing an unauthorized derivative of the copyrighted work as the AI application’s output; a simplified sketch of these two stages follows this list. The distinctions between inputs and outputs in this space are detailed here and here.
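
To make the two stages concrete, here is a deliberately simplified Python sketch of a toy bigram text generator. It is not any actual AI system; the placeholder corpus and all function names are illustrative assumptions. The point is only to show where the “input” (training) stage and the “output” (generation) stage sit in a pipeline.

```python
# Conceptual sketch only: a toy bigram text generator used to mark the two points
# where copyright questions can arise. The "corpus" is a hypothetical placeholder,
# not a real work.
import random
from collections import defaultdict

# (1) INPUT: copying a protected work into training data may implicate the
# reproduction right, regardless of what the model later produces.
corpus = "imagine this placeholder string were the text of a copyrighted novel"

def train(text: str) -> dict:
    """Build a bigram table from the training text (the 'input' stage)."""
    table = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        table[current_word].append(next_word)
    return table

def generate(table: dict, seed: str, length: int = 8) -> str:
    """Sample new text from the trained table (the 'output' stage)."""
    out = [seed]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:
            break
        # (2) OUTPUT: if generated text tracks the training work too closely, it
        # may amount to an unauthorized derivative of that work.
        out.append(random.choice(choices))
    return " ".join(out)

model = train(corpus)
print(generate(model, seed="imagine"))
```

Real systems replace the bigram table with large neural networks, but the two legally relevant stages are the same.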


Innovations in artificial intelligence (AI) have made it easier than ever to replicate a person’s name, image, and likeness (NIL), particularly if that person is a celebrity. AI algorithms require massive amounts of “training data”—videos, images, and soundbites—to create “deepfake” renderings of a persona in a way that feels real. The vast amount of training data available for celebrities and public figures makes them easy targets. So, how can celebrities protect their NIL from unauthorized AI uses?
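
As a rough illustration of how routine the “training data” step has become, below is a hedged Python sketch using PyTorch. The folder path, image size, and the tiny autoencoder are all assumptions made for illustration; real deepfake systems use far larger models and far more data, but the basic pattern of feeding someone’s images into a training loop is the same.

```python
# Conceptual sketch only. Assumes a hypothetical folder of publicly available
# images of one person at ./training_data/<label>/*.jpg; the tiny autoencoder
# below stands in for a real generative model.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
dataset = datasets.ImageFolder("./training_data", transform=transform)  # hypothetical path
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Toy autoencoder: it simply learns to reconstruct the images it is shown.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for images, _ in loader:  # each batch of the person's images is "training data"
    reconstruction = model(images).view_as(images)
    loss = nn.functional.mse_loss(reconstruction, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The barrier to entry is low, which is why the sheer volume of publicly available footage of celebrities matters.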

The Right of Publicity

The right of publicity is the primary tool for celebrity NIL protection. It guards against unauthorized commercial exploitation of an individual’s persona, from appearance and voice to signature catchphrase. Past right of publicity cases provide some context for how this doctrine could be applied to AI-generated works.

Generative AI is creating previously unimaginable possibilities for influencers and brands to engage with consumers. Rather than merely posting on social media, influencers will be able to utilize AI to have two-way conversations that feel authentic. Influencers can do this literally in their own voice, having unique dialogs with countless people at the same time.

Influencers and brands are accustomed to the rules governing what can be said on social media. Now they will also need to think about what sort of information they can elicit from fans and consumers in the course of unique, unpredictable interactions, and about what they can do with that information. Generative AI will let them gather more consumer information than ever before, in ways that may be difficult to control.
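
For influencers and brands experimenting with these AI-driven conversations, below is a minimal Python sketch of one possible guardrail. All names are hypothetical, generate_reply is only a stand-in for whichever licensed generative model a team might use, and the redaction rules are illustrative rather than a compliance program; the sketch simply keeps each fan’s thread separate and strips obvious identifiers before anything is stored or reused.

```python
# Conceptual sketch only. `generate_reply` stands in for a licensed generative
# model or voice clone; the redaction patterns are illustrative, not exhaustive.
import re
from collections import defaultdict

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

conversation_logs = defaultdict(list)  # one running thread per fan

def redact(text: str) -> str:
    """Strip obvious personal identifiers before anything is stored or reused."""
    return PHONE.sub("[phone]", EMAIL.sub("[email]", text))

def generate_reply(history: list[str], message: str) -> str:
    # Placeholder for a call to the influencer's generative model.
    return f"Thanks for the message! ({len(history)} prior turns in this chat)"

def handle_fan_message(fan_id: str, message: str) -> str:
    history = conversation_logs[fan_id]
    reply = generate_reply(history, message)
    # Store only redacted text so volunteered personal data never enters the log.
    history.append(redact(message))
    history.append(reply)
    return reply

print(handle_fan_message("fan_123", "Love the channel! Email me at fan@example.com"))
```

Redaction of this kind is only one piece of the picture; what may be collected, and how it may be used, will still be governed by consumer-protection and privacy rules.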