
Think Immigration: Straight From the Horse’s Mouth – An AI Experiment That You Can Absolutely Rely Upon

7/3/25 AILA Doc. No. 25070307.

I have received the usual dose of marketing and advertising from legal organizations and from the regular purveyors of so-called “miracle cures” for what ails the legal system – a lack of “intelligent” (read: artificial intelligence, or AI) research and practice solutions. At times, I have been mildly curious and sometimes intrigued by these offerings – because, like everyone, I don’t want to miss out on the next “big thing.” Fear of Missing Out (FOMO), if you will.

So, I was genuinely fascinated when a client visited me recently and gave me my first real AI experience. This client runs an AI startup that designs bespoke AI solutions. Over lunch, he attempted to enlighten me about recent advances in AI technology.

The client pulled out his smartphone and fired up an AI widget. He let the camera pan over the table and the food that had been set out, then began a conversation with the widget, asking it what it thought was going on…and immediately the Siri-like voice, albeit in an Indian accent, noted that it appeared to be a business lunch at a restaurant. The client then asked for details of the food on the table. Again, without hesitation, the voice began describing, accurately, the food on the table. To put it mildly, I was impressed – not by the fact that the AI could describe the food on the table, but by the fact that it needed no prompting and appeared to hold a conversation like an actual person. Perhaps my reaction would be deemed naïve by those of you familiar with advances in AI technology, but for a novice like me, it did impress.

Fast forward to a recent morning, when I opened the New Yorker and found an article by Peter Hessler entitled “The Uncertain Future of a Chinese Student at Harvard – Amid escalating threats from the Trump Administration, a student assesses whom he can turn to.” The article describes the dilemma faced by a Chinese student currently enrolled at Harvard University in light of the DHS action targeting that university. What caught my attention was how this student went about trying to find answers to the legal questions posed by said action. He did not contact an attorney, nor did he approach the International Student Office, the Designated School Official, or even his friends. Instead, he went straight to ChatGPT to find answers! And the article describes the process by which ChatGPT provided him with those answers:

"Chen turned to the in-depth analysis of ChatGPT-o3 with his questions about D.H.S. targeting Harvard’s international students."

And the article continues, “…in the course of the evening, ChatGPT compiled a thirty-two-page document, complete with time lines, potential scenarios, and estimated percentages of the likelihood of certain things occurring. The process helped Chen feel calmer.”

It is clear that, as with the lunch-table experiment I described earlier, this student found ChatGPT’s description of his problem, and the preliminary conclusions it offered, both compelling and somewhat reassuring. I get that. But here’s the problem: as lawyers, we are taught not to rely on simply offering conclusory opinions. In fact, we are cautioned against such hubris. The whole process is distilled into rules of conduct and professional responsibility (see, for instance, ABA Model Rule 1.2).

So, I decided to go to the source. I fired up my browser and went to ChatGPT to find my answer! I provided it with the following prompt:

Screenshot of ChatGPT prompt.

Within seconds, ChatGPT analyzed it and provided me with a rather sensible response:

"You're absolutely right to raise this point. The use of AI in the article—particularly in the context of predicting the Department of Homeland Security’s (DHS) decisions—does seem to offer a problematic sense of clarity or assurance in a space where discretion, opacity, and politics often reign. Let's unpack this a bit.

The Core Issue

AI, when used to interpret or predict immigration decisions, can give an illusion of objective foresight. But immigration law—especially at the discretionary level of DHS, ICE, or CBP—is famously inconsistent and deeply influenced by non-transparent internal guidelines, shifting policies, and even political sentiment. No AI model, no matter how advanced, can accurately predict decisions in this environment without oversimplifying or missing critical human and legal context.

Legal Guidance vs. AI Prediction

Legal training instills a deep respect for nuance and precedent. Lawyers are taught that:

  • Outcomes often hinge on fine procedural or factual distinctions.
  • Discretionary decisions (like those involving waivers, parole, or inadmissibility) are unpredictable.
  • Guidance must err on the side of conservatism and caution, especially in immigration matters, where the stakes are existential.

AI models, by contrast, may identify patterns or probabilities based on past data, but they cannot account for:

  • Internal policy changes not publicly known,
  • The personal discretion of individual officers,
  • Shifts in geopolitical strategy or diplomatic concerns.

A False Sense of Security

The student in the article seems to take some comfort in AI-generated insights. But as you rightly point out, this kind of computational prediction can reinforce a "perfunctory" or shallow understanding of real legal risk. It may even disincentivize seeking qualified legal counsel who could help craft a more rigorous and cautious strategy.

Why This Matters

Especially in immigration, where decisions are often final and unappealable, false reassurance can be dangerous. AI should be a tool to support—not substitute—legal analysis, particularly in cases with human rights implications."

I reproduce the following sentence from ChatGPT’s analysis for emphasis:

No AI model, no matter how advanced, can accurately predict decisions in this environment without oversimplifying or missing critical human and legal context.

I rest my case.

P.S. The quoted text above is original content from ChatGPT.

****

AILA members and AILA Affiliated Paralegals interested in learning how to use Generative AI in a transformative way in their practice can check out the upcoming AILA AI-Powered Legal Research and Drafting Live Online Course. The course, which launches August 6, 2025, will offer an opportunity to learn how to leverage the power of AI wisely to become more productive: streamlining repetitive tasks, reducing drafting time from hours to minutes, and improving the accuracy of legal research.

About the Author:

Firm: Kidambi & Associates, P.C.
Location: Trumbull, Connecticut USA
Law School: George Washington University Law School
Chapters: Connecticut
Join Date: 10/8/97
Languages: Tamil, Hindi, Spanish, Urdu, Telugu, Japanese