
Think Immigration: How Artificial Intelligence May Be Failing Us

11/13/25 AILA Doc. No. 25111300.

As part of our efforts to amplify the AILA Law Journal, Delaram Rezaeikhonakdar and Craig Shagin put their article about the importance of public scrutiny and review of AI in immigration enforcement into context, sharing more about why they felt this topic was both timely and important.

Since the dawn of time, stories have served as our bridge to the world around us. From the tales shared by tribal elders, passing wisdom and history through generations, to the myths that explained a society’s origins and culture, narration has helped us understand where we come from and the legacies we inherit. Yet as time marches on and the voices of these elders fade, many of these stories vanish with them. What remains are fragmented moments: the present without memory, experience without reflection, and a future approached without contemplation.

Today, many of us inhabit digital worlds meticulously designed by technology companies. We scroll endlessly, trapped in cycles of reacting rather than reflecting. Our attention becomes a commodity, and our online behavior a source of profit. While technology offers undeniable benefits, it has also fostered a culture of distraction and surveillance. This is not to say that technology is inherently harmful; when deployed responsibly, it can undeniably improve our lives. Yet placing blind faith in technology, and more specifically in Artificial Intelligence (AI), carries significant risks, especially in sensitive contexts such as immigration.

AI is no longer a futuristic concept; it is a rapidly expanding force reshaping immigration enforcement worldwide, including in countries such as Germany, Italy, the Netherlands, Austria, Romania, Denmark, Sweden, and Finland. In the United States, agencies are increasingly adopting AI tools to streamline complex operations. One example of AI integration into government operations is the TSA's collaboration with the private company CLEAR, which allows CLEAR Plus members to use eGates for real-time biometric identity verification.

So the critical question is: Are we sufficiently prepared for the AI wave in immigration enforcement? These systems do not operate in a vacuum. They are trained on human-generated data and inherit the biases and inequalities embedded in it. Flaws in data collection, programming decisions, or algorithmic design can introduce bias, whether through inadvertent oversight or misplaced trust in AI outputs. When that data reflects systemic injustice, the technology built on it can silently reinforce those inequities.

The problem is compounded by two related issues. First, many of the individuals designing, training, and regulating these systems lack the expertise to fully understand the risks AI poses in the specific context of immigration. Second, when agencies both develop and audit AI tools, the boundary between oversight and self-regulation becomes dangerously unclear. For instance, although components such as Customs and Border Protection (CBP) conduct internal privacy assessments, and offices such as the DHS Privacy Office and the Office for Civil Rights and Civil Liberties (CRCL) provide oversight, all of these entities sit within the broader Department of Homeland Security (DHS) structure, which can limit their independence. Without independent review and public scrutiny, the efficiency AI promises may come at a steep cost: due process, equality, and human dignity.

About the Authors:

Delaram Rezaeikhonakdar
Location: Philadelphia, Pennsylvania, USA

Craig Shagin
Firm: The Shagin Law Group, LLC
Location: Harrisburg, Pennsylvania, USA
Law School: Villanova University, Charles Widger School of Law
Chapter: Philadelphia
AILA Join Date: 2/24/98