
Your Voice, Your Face, Your Lawsuit: The AI Risk No One Is Addressing

  • Writer: Craig Gilgallon
  • Mar 19

Updated: Apr 22

Cybersecurity protects your systems. It does not protect your identity.


AI can now replicate a person’s voice, image, and mannerisms from ordinary business content—recorded meetings, marketing videos, investor calls. From that data, a third party can generate a realistic version of “you” saying or endorsing things you never approved.


That capability is already in widespread use. The legal exposure is not theoretical.


Where Liability Attaches

The risk extends beyond the bad actor to any entity that creates, deploys, or benefits from synthetic identity content.

1. Misappropriation of Likeness (Right of Publicity)

Unauthorized commercial use of a person’s identity remains actionable under state law, including in New Jersey. AI does not change the rule; it scales the violation.

2. False Endorsement (Lanham Act)

Synthetic content implying affiliation or endorsement can trigger federal liability, particularly in marketing and branded communications.

3. Data Use and Training Risk

If identifiable voice or image data is used to train AI systems—internally or by vendors—consent, contractual representations, and downstream outputs all become points of exposure.


The Governance Gap

Most organizations have adopted general AI policies. Few have addressed identity rights in any disciplined way.


Typical gaps:

  • No inventory of where executive or employee likeness is captured and stored

  • No contractual limits on vendor use of recorded data

  • No approval framework for synthetic content

  • No monitoring for unauthorized external use

The result is unmanaged risk embedded in ordinary operations.


What a Defensible Approach Looks Like

This is a governance issue, not a technical one.

A credible framework includes:

  • Internal policy: clear consent, ownership, and use restrictions for likeness and synthetic media

  • Vendor controls: prohibitions on training use, representations about data provenance, and indemnities

  • Disclosure standards: clarity around when AI-generated content is used

  • Monitoring: active detection and enforcement against unauthorized use


Bottom Line

The law is not waiting for AI-specific statutes. Existing doctrines—publicity rights, unfair competition, false endorsement—already support liability.


What has changed is scale and ease.

If your organization uses AI, or engages vendors who do, your exposure is already present. The question is whether it is identified, controlled, and contractually allocated—or simply assumed away.


In this category, risk does not arise from intent.


It arises from capability left unmanaged.


Craig S. Gilgallon

Attorney at Law

(973) 605-8800