Howdy, friends.
This week, two stories landed that could have been ripped from a law school exam — except they're real, they're happening now, and the people caught in the middle didn't see them coming. One is about what happens when an AI system uses your face without permission. (I hate it when that happens.) The other is a cautionary tale about why an athlete should never forget to file important paperwork.
Both are expensive lessons. Neither one had to happen.
Let's get into it.
Cover Your Assets
Your Face Is Your IP. Grok Just Reminded Everyone.
In late December, users of X discovered they could tag xAI's chatbot Grok in a post and ask it to edit photos from someone's public profile. Predictably, things got ugly fast. Over an eleven-day stretch ending January 8th, Grok generated approximately three million sexualized images — most of them of real, named women who never consented to any of it.
Conservative influencer Ashley St. Clair was one of those women. She described finding explicit AI-generated images of herself, including some that depicted her as a minor. She was also targeted with images placing her in antisemitic contexts. "I felt so disgusted and violated," she told Fortune. She's now suing xAI in New York federal court.
xAI's response? They almost immediately countersued, claiming St. Clair violated X's terms of service, which require disputes to be brought in Texas — in a federal district where 10 of 11 active judges were appointed by Republican presidents, and where cases aren't randomly assigned. St. Clair's position is that the ToS doesn't govern her claim at all, since she's suing over what was done to her, not over anything she did on the platform. That's a genuinely interesting legal argument, and the venue fight alone will be worth watching.
The important takeaway for you here is that your likeness is your intellectual property. You have the right to control how your face, voice, and image are used commercially — that's the right of publicity, and most states recognize it. The harder question is what happens when AI generates content about you without your direct involvement, especially when the platform hosting it claims broad immunity under Section 230, a law that says websites are generally not responsible for what their users post. That immunity wasn't designed to shield AI-generated nonconsensual content, and courts are only beginning to sort it out.
The practical steps are limited right now, but they exist.
First, know your state's deepfake laws — nearly 20 states have enacted some form of protection against nonconsensual AI-generated intimate images, and that number is growing.
Second, document everything: screenshots, timestamps, links, and any communications about the content before it disappears.
Third, understand that platforms can take content down even if they're not legally required to — and a well-documented, formal request is more effective than a public callout. And if someone is monetizing AI-generated content using your likeness, that's a cleaner legal claim than you might think.
The St. Clair lawsuit won't resolve quickly. But it's already doing something useful: making AI companies aware that "we didn't generate it, a user prompted it" may not be a complete defense forever.
The NIL Scouting Report
Read the Rules. File the Paperwork. The NIL Go Situation at Nebraska.
Earlier this year, the College Sports Commission — the enforcement body created under the House v. NCAA settlement — quietly contacted the University of Nebraska to inform them that athletes were being investigated for failing to report NIL deals through a system called NIL Go. Under the current rules, all Division I athletes must submit every NIL deal worth more than $600 for review and approval. The purpose is to distinguish legitimate endorsements from pay-for-play arrangements disguised as brand deals.
The Nebraska situation appears to have resolved without punishment. According to emails obtained through a public records request, the athletes in question submitted their missing deal information after being contacted, and the compliance staff explained there had been "confusion about the exact timing of certain deals." Nebraska is the second school identified in this initial wave of inquiries — LSU dealt with something similar, also without punishment.
A few things worth knowing if you're an athlete, a parent, or someone advising either:
The $600 threshold is not a suggestion. Any NIL deal above that number must be reported through NIL Go. Not after the season. Not when you get around to it. Before — or immediately when — money changes hands.
"I didn't know" doesn't protect you the way you might think it does. Nebraska's compliance staff described the violations as a timing and awareness issue, not intentional cheating. The CSC apparently agreed — this time. But as the enforcement machinery gets more sophisticated (they're actively hiring attorneys and former law enforcement), leniency for first-time paperwork failures won't be guaranteed.
Compliance offices exist for a reason. Use them. The moment you're discussing any deal — endorsement, appearance fee, social post, branded content — loop in your school's compliance staff before you sign anything. That's free advice that can save your eligibility.
The CSC's enforcement power is still being established. Schools haven't fully signed onto the participation framework, so there's genuine ambiguity about what the CSC can actually do to enforce violations right now. That ambiguity won't last. Build the habit of compliance before it becomes mandatory in the most painful way possible.
The Nebraska story is worth noting not because it ended badly, but because it's a preview of how enforcement is going to work. Inquire, disclose, and resolve — or don't, and find out what happens next. The smarter play is obvious.
See you next time,
Hank
P.S. Know a creator, athlete, or coach who'd find this useful? Forward it their way.
About Hank's IP Brew
Creator IP Academy helps creators understand and protect their intellectual property. Got a question? Reply to this email.
