In the past few months, the administration, the Copyright Office, and the courts have weighed in on several material issues at the intersection of copyright law and AI. The White House’s recent announcement of its AI Action Plan offers an opportunity to examine the notable alignment and discord on key issues relating to fair use.

Specifically, this article dissects three key issues and how they are being considered in the evaluation of fair use: the use of pirated works for training AI models; the “dilution” theory of market harm; and whether legislation and regulation are necessary. While there are clear points of divergence between the White House, the Copyright Office, and the courts, the areas of alignment provide a foundational framework for stakeholders to navigate today’s landscape while also preparing for tomorrow’s inevitable changes.

Continue Reading Whose Rules Rule? Different Approaches to Key AI and Copyright Fair Use Principles Across the Administration, Copyright Office, and the Courts

On Thursday, October 12, a bipartisan group of senators—Chris Coons (D-Del.), Thom Tillis (R-N.C.), Marsha Blackburn (R-Tenn.), and Amy Klobuchar (D-Minn.)—released a Discussion Draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (dubbed the “NO FAKES” Act), which would protect the voice, image, or visual likeness of all individuals from unauthorized AI-generated digital replicas, also referred to as “deepfakes.” This draft bill, while focusing on protections made necessary by the advancement of AI, would establish the first federal right of publicity—the right to protect and control the use of one’s voice, image, and visual likeness. The NO FAKES Act could have widespread impacts on the entertainment and media industries, among others.

Generative AI has opened new worlds of creative opportunities, but with these creative innovations also comes the ability to exploit another’s voice, image, or visual likeness by creating nearly indistinguishable digital replicas. This has caused great concern among musicians, celebrities, actors, and politicians regarding viral AI-created deepfakes circulating on social media and the Internet more broadly. To date, advancements in AI technology used to create digital replicas have outpaced the current legal framework governing unauthorized use. Although there are existing laws that may be used to combat digital replicas, these laws either vary from state to state, creating a patchwork of differing protections based on where one is located, or do not directly address the harms caused by producing and distributing unauthorized digital replicas.

Continue Reading AI Deepfake Bill: Senators Contemplate the First Federal Right of Publicity

Innovations in artificial intelligence (AI) have made it easier than ever to replicate a person’s name, image, and likeness (NIL), particularly if that person is a celebrity. AI algorithms require massive amounts of “training data”—videos, images, and soundbites—to create “deepfake” renderings of a persona in a way that feels real. The vast amount of training data available for celebrities and public figures makes them easy targets. So, how can celebrities protect their NIL from unauthorized AI uses?

The Right of Publicity

The right of publicity is the primary tool for celebrity NIL protection. The right of publicity protects against unauthorized commercial exploitation of an individual’s persona, from appearance and voice to signature catchphrase. Past right of publicity cases provide some context for how this doctrine could be applied to AI-generated works.

Continue Reading Artificial Intelligence Wants Your Name, Image and Likeness – Especially If You’re a Celebrity

The recent explosion in popularity of generative artificial intelligence (AI), such as ChatGPT, has sparked a legal debate over whether the works created by this technology should be afforded copyright protections. Although several lawsuits on the subject have been filed, and the U.S. Copyright Office has recently issued guidance clarifying its position, the bounds of copyright protections for works created using AI are not yet clearly defined, and many questions remain unanswered. For now, it appears that copyright eligibility for such works depends on the extent of human involvement in the creative process and whether any use of copyrighted work to generate a new work falls within the purview of the fair use doctrine.

The analysis of this issue has been framed around two key aspects of the technology itself: input data and output data. Input data are the pre-existing data that human users introduce into the AI system, which the system then uses to generate new works. Output data are the works ultimately created by the system—the finished product. Thus, copyright eligibility for AI-generated or AI-assisted works depends on whether the AI system’s use of copyrighted works as input data is permissible and whether the output data is itself copyrightable.

Continue Reading ChatGPT and the Rise of Generative Artificial Intelligence Spark Debate on Copyright Protections of AI-Generated Works