Red Soles, Black Boxes: A Case Comment On AI-Generated Evidence In Trademark Disputes

Author(s): Mrinalika A.B and P. Aakash Kannan

Paper Details: Volume 3, Issue 3

Citation: IJLSSS 3(3) 44

Page No: 544 – 548

ABSTRACT

This case comment looks at a recent Delhi High Court decision involving Christian Louboutin’s famous red soles, but with a surprising twist: AI evidence. The court was asked to consider a ChatGPT response as proof of public association between red soles and the brand. While that may sound innovative, the court wasn’t convinced. Through this piece, I’ve explored what this means for evidence law in India, especially when tech tools start entering legal arguments. I’ve also looked at a U.S. case where a lawyer faced serious consequences for relying on fake AI citations. Both cases raise the same question: how far should we trust AI in court? This comment argues that while technology may help, human reasoning still has to lead the way.

INTRODUCTION

Something about the way courts are starting to deal with AI caught my attention recently. I was reading about this case, Christian Louboutin v. Shutiq[1], and at first it just looked like a trademark fight. Designer shoes, red soles, the usual stuff. But then I saw that someone had actually used ChatGPT as part of their legal argument. That stopped me.

They didn’t just cite case law or surveys. They went and asked ChatGPT if red soles mean Louboutin, got a “yes,” and submitted that to court.

That’s when it hit me—this is way bigger than just a shoe company defending its brand. This case is testing how far we can go with AI in serious legal work. Can you really use a chatbot to prove public perception? Should we even try?

As someone still studying law—and also someone who’s seen how fast tech is changing everything—I couldn’t look away. So in this piece, I’m unpacking what the court said, why it matters, and how this connects to another crazy example in the U.S. where AI actually got lawyers into trouble. There’s a lot going on here, and honestly, I think we’re just getting started.

THE STORY BEHIND THE DISPUTE

The case starts off pretty simply. Christian Louboutin—the brand almost everyone links with high-end red-soled shoes—filed a case against Shutiq, a company in India. The allegation? That Shutiq was selling shoes with a similar red sole, which might confuse people into thinking they were linked to Louboutin. That part isn’t all that unusual in trademark cases.

But what really made this case different is what Louboutin’s legal team did next. Instead of just relying on brand surveys or expert opinions, they asked ChatGPT whether red soles were connected to Louboutin. The AI said yes—and they submitted that as part of their evidence.

Now that’s the twist. Suddenly, this wasn’t just about shoes or brand identity. It turned into a question of whether courts should treat a chatbot’s response as proof of what the public thinks.

That opened up some bigger legal questions. Like:

  • Can an AI tool really reflect public perception in a country like India?
  • Does something ChatGPT says hold any weight in court?
  • And, more broadly, are we starting to rely too much on machines in places where human judgment should still matter?

What looked like a simple trademark fight turned into something much more complicated—and way more relevant to where the legal world is heading.

WHAT THE LAW SAYS (AND WHAT IT DOESN’T SAY ABOUT AI)

Okay, before getting into what the judge decided, I think it’s worth just going over what the law actually says about trademarks—and where AI fits into all this. The thing is, our laws don’t directly talk about tools like ChatGPT yet. But the principles still give us a good idea of how to think about it.

In India, trademarks are protected under the Trade Marks Act, 1999[2]. The main idea is simple: if your mark helps people figure out where the product came from and separates it from others, then it’s doing its job. But the minute there’s confusion—like someone thinking two totally different companies are somehow connected—that’s a red flag legally.

There’s this really important case, Cadila Healthcare v. Cadila Pharmaceuticals (2001)[3], which basically says courts should see things from the eyes of an average buyer. Like, not someone who studies trademarks all day—just a regular person. If they might get confused, then it matters, even if the companies didn’t mean to cause trouble.

Now let’s talk about digital records, because this is where it gets technical. Electronic evidence in India is governed by the Indian Evidence Act, 1872, and in particular Section 65B[4]. The rule, roughly: a computer output is admissible only if it was produced by a device that was operating properly and used in the ordinary course, and it has to be backed by a certificate under Section 65B(4) identifying the record and the device that produced it. In short, the evidence must be traceable, certified, and clearly linked to a real source. Otherwise, it doesn’t hold up in court.

This is where ChatGPT falls short. It doesn’t give you a source you can verify. It can’t say where it picked up the idea that red soles mean Louboutin, or who said so. It just generates text by predicting what words are likely to come next, based on the enormous pile of material it was trained on. Sounds smart? Sure. Reliable enough for court? Not really.

So while AI tools are interesting and definitely part of our lives now, our current legal system isn’t ready to treat them like actual evidence. And honestly, that kind of makes sense.

WHAT THE COURT SAID – AND WHY IT MATTERS

So when this matter landed in the Delhi High Court, Justice Prathiba M. Singh took a pretty grounded approach. She didn’t deny that Louboutin has a strong brand presence. Honestly, most people do know about the red soles. But she didn’t buy the argument based on ChatGPT’s answer.

And I think that was smart. The judge pointed out that ChatGPT is still growing—it’s not really something the court can rely on to make a legal call. I mean, it’s not even clear where it’s pulling its answers from. That alone makes it risky.

In her own words, and I’m quoting the order here:

“ChatGPT is an AI-based tool which is still evolving… The accuracy and reliability of the answers given by ChatGPT is still being tested. Thus, the Court cannot rely upon such material to form a legal opinion.”[5]

That one line kind of sums it all up, doesn’t it? You can’t just bring a chatbot’s opinion to court and expect it to stand like real evidence.

So, instead of entertaining the AI stuff, the court focused on what it always does—checking if the products look similar, considering how the average buyer would react, and whether people might actually get confused.

And honestly? That approach felt reassuring. In a time when everyone’s rushing to use tech tools for everything, this judgment reminded us that legal reasoning still needs to come from actual people. Not just screens and algorithms.

COMPARING WITH THE U.S. – MATA V. AVIANCA

While this whole Louboutin thing was happening in India, something kind of unbelievable played out in the U.S. too. In a case called Mata v. Avianca[6], a lawyer actually used ChatGPT to help write a court brief. But here’s where it went off the rails—some of the cases the AI mentioned? They weren’t even real. Like, completely made up.

When the judge figured it out, the lawyer admitted he hadn’t checked. He’d just copied what ChatGPT said and assumed it was legit. That didn’t go over well. The court hit him with sanctions. And honestly, it was a wake-up call for a lot of people.

But the judge didn’t say, “Don’t use AI at all.” What he really said was, “If you use it, you’re still responsible for the result.” That feels fair. I mean, using tech isn’t wrong—but trusting it blindly? That’s where it gets risky.

Now, when we look back at the Louboutin case, the situation was a bit different. No fake citations, but still, the court had to decide whether it should trust something ChatGPT said about public perception. And just like in the U.S. case, the Indian court didn’t ban AI outright. It just said, “This isn’t enough for legal proof.”

So yeah, both courts were making the same larger point: AI might be part of the process now, but it doesn’t get the final say. That still belongs to us.

FINAL REFLECTIONS

Not gonna lie, this case left me kind of confused. Not because of the trademark stuff—that part made sense. But the AI part? That threw me off. Like, how are we supposed to treat something like ChatGPT in a real courtroom?

I’ve used it. A lot of us have. It spits out answers fast. But now I’m wondering… is that a good thing? I mean, you can’t even tell where it’s getting its info. Sometimes it just sounds confident, even when it’s wrong.

And then that U.S. lawyer? He got burned. Cited fake cases. That’s not just embarrassing—it’s dangerous. I get it though. Pressure’s real. But still. Courts need facts. Real ones.

So yeah, I’m thinking we can’t treat these tools like magic. They’re not. They help, but they don’t think. We do. Or at least we’re supposed to.

That’s all. No dramatic ending, just the one takeaway that keeps coming back: the tools can assist, but the reasoning still has to be ours.


[1] Christian Louboutin v. Shutiq, 2023 SCC OnLine Del 5152.

[2] The Trade Marks Act, No. 47 of 1999, § 2(1)(zb), India Code (1999).

[3] Cadila Health Care Ltd. v. Cadila Pharm. Ltd., (2001) 5 SCC 73.

[4] The Indian Evidence Act, No. 1 of 1872, § 65B, India Code (1872).

[5] Christian Louboutin v. Shutiq, 2023 SCC OnLine Del 5152, 22.

[6] Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023).
