Showing posts with label AI. Show all posts

Friday, May 9, 2025

Poli sci research dazzles with deep dive on judicial bias, asylum woes, AI to police corruption

The New England Political Science Association (NEPSA) met at Bretton Woods, N.H., late last month.

I always look forward to the NEPSA meeting, as political scientists are just about the warmest crowd of academics I know. No other kind of social scientist so eagerly shares knowledge as the political scientist, who similarly embraces interdisciplinary feedback, even from a non-PhD such as me.

True, political scientists can and do argue about anything. They put law professors to shame in that regard. You make a mistake of parliamentary procedure at the political science business meeting at your peril. 

But only the political scientist compromises her or his confident disputation with a wink of the eye that acknowledges the house of cards we've built around ourselves. You won't find that kind of concession in the grim gaze of an economist.

I saw a great many fabulous papers as always at NEPSA, and I had the privilege of chairing and discussing the papers on a compelling panel on law and public policy on April 26. The panel comprised Dr. Ihsan Alkhatib, Murray State University; the Hon. Sarada Prasad Nayak, UMass Amherst; and attorney Nicole Norval, Eastern Connecticut State University.

All of the panelists, like me, are recovering lawyers. Dr. Alkhatib practiced family law and immigration law for a decade in the Detroit area, representing mostly Arab- and Muslim-American clients. Nayak was a judge in various capacities, including family court, in Odisha State, south of Kolkata, in India for 30 years, before moving to the United States for a new pursuit in academics. Norval was a real estate attorney in South Africa before she left to trot the globe as a tech exec, reg counsel, and business law professor.

In the projects presented, Dr. Alkhatib is studying U.S. immigration law, and in particular the awful consequences of failing to recognize violence against women expressly as a basis for asylum. Judge Nayak, with co-author Dr. Paul Collins Jr. at UMass Amherst, is poring over an extraordinary database of cases in India to understand religious and gender biases in judicial decision-making, with transnational implications. And attorney Norval, with co-author Sameer Somal, CEO of Blue Ocean Global Technology, is looking at the potential for AI to detect and police corruption in business and finance, testing real tech on models such as FIFA and FTX, with promising results.

These authors' paper abstracts are copied below. The full 2025 NEPSA conference program with abstracts is available for a limited time here.

The NEPSA conference was stewarded as usual by the incomparable Dr. Steven Lichtman, Shippensburg University, a friend who never fails to inspire me with his teaching and in his clear-eyed commitment to supporting colleagues and developing academic talent.

Here on The Savory Tort this coming Monday, May 12, 2025, I will write about Bretton Woods as the location of the founding of the International Monetary Fund, and how that history has come up lately in our politically tumultuous times. Stay tuned.


Ihsan Alkhatib, Murray State University
Gender in Immigration Court: Orientalism on Trial
There are five grounds for asylum. Gender is not one of them. Gender however comes up under the grounds of Particular Social Group. Two approaches to gender claims from the Arab world are presented and compared. I argue that one approach is grounded in Orientalism and perpetuates Islamophobia. The second approach is grounded in a global view of gender and is a more accurate representation of gender reality. Immigration lawyers are bound by Rules of Ethics. Advocacy grounded in the second approach is more consistent with the ethical obligations of lawyers.

Sarada Prasad Nayak, University of Massachusetts - Amherst
Case Backlogs and Bias in Timely Justice Delivery in the Indian Judiciary
Understanding bias in the judicial decision-making process is crucial for ensuring fairness and justice in the legal system. To date, scholarship on judicial bias focuses overwhelmingly on the American legal system, examining case outcomes or judges’ voting behavior. In this paper, we shift the focus outside of the U.S. and beyond case outcomes. To do this, we examine judicial delays in India, where prolonged legal processes often serve as a form of punishment. We theorize that bias may infiltrate the amount of time it takes to dispose of cases based on the gender and religion of the judge who is assigned the case, as well as those of the defendants. To subject these expectations to empirical scrutiny, we analyze hundreds of thousands of criminal cases decided in India’s lower courts. Our results indicate that Muslim defendants experience shorter delays when their cases are heard by Muslim judges, providing evidence of in-group bias. However, there do not appear to be differences in the timing of case outcomes based on the defendant or judge gender. This study contributes to the literature by highlighting how judicial delays in less developed countries may reflect subtle forms of bias, mainly along religious lines.

Nicole Norval, Eastern Connecticut State University
Can AI Reduce Business Corruption - and Prevent Another FIFA … Another FTX?
Regulators consider artificial intelligence (‘AI’) an inevitable tool for compliance with regulations such as anti-corruption laws. How should we regulate AI to improve regulatory compliance without sacrificing the right to privacy? Can we regulate AI to prevent corrupt business practices and improve human rights outcomes? Unifying existing and forthcoming AI regulation in multiple jurisdictions (primarily the United States and the European Union) with a matrix of business corruption reforms results in a useful legal model. This paper concludes by applying the model to the decades-long Fédération Internationale de Football Association (FIFA) corruption scandal and the recent FTX cryptocurrency exchange bankruptcy to understand the benefits and limitations of this legal framework. We examine why AI is an inevitable tool for regulatory compliance, comparing AI regulation, guidelines, and recommended practices in the United States, the European Union, and other jurisdictions, in order to extract common objectives of AI regulation, such as protecting privacy rights and improving human rights outcomes. We discuss business corruption reforms in general, focusing on the financial services sector as a business sector crucial for such reform initiatives. Integrating these financial services sector reforms with common AI regulation objectives, we construct a legal model for application to business corruption events. We apply this legal model to two business corruption events with significant negative financial impact in order to establish whether the use of AI to identify business corruption signifiers would have reduced these negative financial impacts, protected privacy rights, and improved human rights outcomes. We conclude by identifying limitations and benefits of our legal model for future improvement, examining the moral imperative and impact of this research, and identifying further areas of research.

Friday, August 2, 2024

'Faculty are the least important people on a campus,' but don't worry; administrators will be all right

Adaptation of Joe Loong via Flickr CC BY-SA 2.0

A recent item in The Chronicle of Higher Education (subscription), excerpted by Paul Caron on TaxProf Blog, well captures what it feels like to be a professor in American higher ed nowadays.

The Chronicle item, by Beckie Supiano, talks about tenured and tenure-track professors leaving the "dream job" of academia. Author and consultant Karen Kelsky founded a private Facebook group, now counting 33,000 members, as a virtual home for the disillusioned: "The Professor is Out."

Supiano quoted Kelsky:

"The faculty are the least important people on a campus right now," Kelsky says. If colleges valued their work, she says, they wouldn't have allowed "adjunctification" to happen in the first place. The current wave of faculty departures—which colleges don't even seem to have acknowledged—is simply the latest twist in a decades-long deterioration.

"Institutions' indifference to faculty leaving," she says, "is a reflection of their indifference to faculty's being there."

To some professors, the job they've worked so hard for feels untenable. And that's particularly true for those who ... pour themselves into their positions and strive to connect with students on a personal level. That's something that colleges sell to students, but it's not something they seem actually willing to invest in.

Right: especially that first line about faculty being the least important people on campus. Though "right now" might erroneously suggest a new condition. Rather, this lament is the familiar theme of the widely referenced book by Benjamin Ginsberg, The Fall of the Faculty: The Rise of the All-Administrative University and Why It Matters, in 2011, when the data already were ample.

Despite Ginsberg calling out the trend more than a decade ago, nothing has changed. Faculty governance is practically a dead letter. Faculty work not only for provosts and chancellors, but for every support service office on campus, such as information technology and human resources. We're told when and where to show up, and increasingly how and what to teach. Worse, we're loaded down with hamster-wheeling administrative work. It seems that every new administrator means more work for me, too. I feel ever more like Lucy on the assembly line.

This state of affairs was a refrain at last week's Niagara Conference on Workplace Mobbing (more to come about the conference here at The Savory Tort). In the same vein, I heard mounting faculty anxiety over AI. If universities, as the bottom-line businesses they've become, care about the delivery of services almost to the exclusion of quality, then they will gravitate to the worker that never sleeps and never whines about the rising costs of housing, healthcare, and college for our own kids.

In my workplace, "adjunctification," as Kelsky put it, manifests as noncompetitive compensation for both part-time and full-time faculty. A first-year attorney in Big Law makes substantially more than any of the teaching faculty at the law school where I work—

—excluding deans. A university's priorities ring clear when one compares the qualifications and salaries of teaching faculty with salaries in the bureaucracy. Judge my shop for yourself with a recent top-100 round-up at South Coast Today (or look up anyone in the Massachusetts public sector). Be wary of listed titles. At no. 32, I'm the top paid, still serving, and exclusively teaching "professor." Other "professors" at nos. 5-30 had or have admin roles that fattened the bankroll. The money is in admin and overhead, even while students strain to see the return on that investment.

Indeed, as Kelsky suggests, most of us, teaching faculty, still "strive to connect with students on a personal level," despite lack of incentives to do so. That's probably because it's the character flaw of human compassion that draws us to teaching. I'm working on it: trying to be a good worker by caring less and keeping the assembly line moving. "Speed it up a little!" Maybe, for students' sake, AI will meet us halfway in the humanity game.

Tuesday, February 6, 2024

AI can make law better and more accessible; it won't

Gencraft AI image
Artificial intelligence is changing the legal profession, and the supply of legal services is growing even more disconnected from demand.

The latter proposition is my assessment, but experts agreed at a national bar conference last week that AI will change the face of legal practice for attorneys and clients, as well as law students and professors.

Lexis and Westlaw each recently launched a generative AI product, Lexis+ AI Legal Assistant and AI-Assisted Research on Westlaw Precision. One might fairly expect that these tools will make legal work faster and more efficient, which in turn would make legal services accessible to more people. I fear the opposite will happen.

The endangered first-year associate. The problem boils down to the elimination of entry-level jobs in legal practice. Panelists at The Next Generation and the Future of Business Litigation conference of the Tort Trial Insurance Practice Section (TIPS) of the American Bar Association (ABA) at the ABA Midyear Meeting in Louisville, Kentucky, last week told audience members that AI now performs the work of first- and second-year associates in legal practice.

The change might or might not be revolutionary. Popular wisdom routinely describes generative AI as a turning point on the evolutionary scale. But panelists pointed out that legal research has seen sea change before, and the sky did not fall. Indeed, doomsayers once predicted the end of responsible legal practice upon the very advent of Lexis and Westlaw in displacement of books and paper—a transformation contemporary with my career. Law practice adapted, if not for the better in every respect.

It's in the work of junior attorneys that AI is having the greatest impact now. It can do the background legal research that a senior lawyer might assign to a junior lawyer upon acquisition of a new client or case. AI also can do the grunt work on which new lawyers cut their teeth, such as pleadings, motions, and discovery.

According to (aptly named) Oregon attorney Justice J. Brooks, lawyers are under huge pressure from clients and insurers to use AI, regardless of the opportunity cost in bringing up new attorneys. Fortune 500 companies are demanding that AI be part of a lawyer's services as a condition of retention. The corporate client will not pay for the five hours it takes an associate to draft discovery requests when AI can do it in 1.5.

Observers of law and technology, as well as the courts, have wrung their hands recently amid high-profile reports of AI-using lawyers behaving badly, for example, filing briefs citing sources that do not exist. Brooks said that a lawyer must review with a "critical eye" the research memorandum that AI produces. Insofar as there have been ethical lapses, "we've always had the problem of lawyers not reading cases," Illinois lawyer Jayne R. Reardon observed.

Faster and cheaper, but not always better, AI. There's the rub for newly minted associates: senior lawyers must bring the same scrutiny to bear on AI work that they bring to the toddling memo of the first-year associate. And AI works faster and cheaper.

Meanwhile, AI performs some mundane tasks better than a human lawyer. More than cutting corners, AI sometimes sees a new angle for interrogatories in discovery, Brooks said. Sometimes AI comes up with an inventive compromise for a problem in mediation, Kentucky attorney Stephen Embry said. AI can analyze dialogs to trace points of agreement and disagreement in negotiation, Illinois lawyer Svetlana Gitman reported.

AI does a quick and superb job on the odd request for boilerplate, North Carolina attorney Victoria Alvarez said. For example, "I need a North Carolina contract venue clause." And AI can quickly organize large data sets, she said, generating spreadsheets, tables, and graphics.

What AI cannot yet do well is good jobs news for senior lawyers and professors such as me: AI cannot make complex arguments, Brooks said. In fact, he likes to receive AI-drafted memoranda from legal opponents. They're easily recognizable, he said, and it's easy to pick apart their arguments, which are on par with the sophistication of a college freshman.

Similarly, Brooks said, AI is especially bad at working out solutions to problems in unsettled areas of law. It is confused when its training materials—all of the law and most of the commentary on it—point in different directions. 

In a way, AI is hampered by its own sweeping knowledge. It has so much information that it cannot readily discern what is important and what is not. A lawyer might readily understand, for example, that a trending theory in Ninth Circuit jurisprudence is the peculiar result of concurring philosophical leanings among involved judges and likely will be rejected when the issue arises in the Fifth Circuit, where philosophical leanings tend to the contrary. AI doesn't see that. That's where human insight still marks a peculiar distinction—for now, at least, and until I retire, I hope.

It's that lack of discernment that has caused AI to make up sources, Brandeis Law Professor Susan Tanner said. AI wants to please its user, Oregon lawyer Laura Caldera Loera explained. So if a lawyer queries AI, "Give me a case that says X," AI does what was asked. The questioner presumes the case exists, and the AI follows that lead. If it can't find the case, it extrapolates from known sources. And weirdly, as Tanner explained it, "[AI] wants to convince you that it's right" and is good at doing so.

Client confidences. The panelists discussed other issues with AI in legal practice, such as the importance of protecting client confidences. Information fed into an open AI in asking a question becomes part of the AI's knowledge base. A careless lawyer might reveal confidential information that the AI later discloses in response to someone else's different query.

Some law firms and commercial services are using closed AIs to manage the confidentiality problem. For example, a firm might train a closed AI system on an internal bank of previously drafted transactional documents. Lexis and Westlaw AIs are trained similarly on the full data sets of those proprietary databases, but not, like ChatGPT, on the open internet—Pornhub included, clinical psychologist Dan Jolivet said.

But any limited or closed AI system is then limited correspondingly in its ability to formulate responses. And closed systems still might compromise confidentiality around ethical walls within a firm. Tanner said that a questioner cannot instruct AI simply to disregard some information; such an instruction is fundamentally contrary to how generative AI works.

Law schools in the lurch.  Every panelist who addressed the problem of employment and training for new lawyers insisted that the profession must take responsibility for the gap that AI will create at the entry level. Brooks said he pushes back, if sometimes futilely, on client demands to eliminate people from the service chain. Some panelists echoed the tantalean promise of billing models that will replace the billable hour. But no one could map a path forward in which there would be other than idealistic incentives for law firms to hire and train new lawyers.

And that's a merry-go-round I've been on for decades. For the entirety of my academic career, the bar has bemoaned the lack of "practice ready" lawyers. And where have practitioners placed blame? Not on their bottom-line-driven, profit-making business models, but on law schools and law professors.

And law schools, under the yoke of ABA accreditation, have yielded. The law curriculum today is loaded with practice course requirements, bar prep requirements, field placement requirements, and pro bono requirements. We have as well, of course, dedicated faculty and administrative positions to meet these needs.

That's not bad in and of itself, of course. The problem arises, though, in that the curriculum and staffing are zero-sum games. When law students load up on practice-oriented hours, they're not doing things that law students used to do. When finite employment lines are dedicated to practice roles, there are other kinds of teachers absent who used to be there.

No one pauses to ask what we're missing.

My friend and mentor Professor Andrew McClurg, retired from the University of Memphis, famously told students that they should make the most of law school, because for most of them, it would be the last time in their careers that they would be able to think about the law.

Take the elective in the thing that stimulates your mind, McClurg advised students (and I have followed suit as an academic adviser). Explore law with a not-nuts-and-bolts seminar, such as law and literature or international human rights. Embrace the theory and philosophy of law—even in, say, your 1L torts class.

When, like my wife once was, you're a legal services attorney struggling to pay on your educational debt and have a home and a family while trying to maintain some semblance of professional responsibility in managing an impossible load of 70 cases and clients pulling 24/7 in every direction, you're not going to have the luxury of thinking about the law.

Profit machines. What I learned from law's last great leap forward was that the "profession" will not take responsibility for training new lawyers. Lawyer salaries at the top will reach ever more for the heavens, while those same lawyers demand ever more of legal education, and of vastly less well compensated legal educators, to transform and give of themselves to be more trade school and less graduate education.

Tanner put words to what the powers-that-be in practice want for law schools to do with law students today: "Train them so that they're profitable."  In other words, make billing machines, not professionals.

Insofar as that has already happened, the result has been a widening, not narrowing, of the gap between supply and demand for legal services. Wealthy persons and corporations have the resources to secure bespoke legal services. They always will. In an AI world, bespoke legal services means humans capable of discernment and complex argument, "critical eyes." 

Ordinary people have ever less access to legal services. What law schools have to do is expensive, and debt-burdened students cannot afford to work for what ordinary people are able to pay.

A lack of in-practice training and failure of inculcation to law as historic profession rather than workaday trade will mean more lawyers who are minimally, but not more, competent; lawyers who can fill out forms, but not conceive new theories; lawyers who have been trained on simulations and pro bono hours, but were never taught or afforded an opportunity to think about the law.

These new generations of lawyers will lack discernment. They will not be able to make complex arguments or to pioneer understanding in unsettled areas of law. They will be little different from and no more capable than the AIs that clients pay them to access, little better than a human equivalent to a Staples legal form pack.

These lawyers will be hopelessly outmatched by their bespoke brethren. The ordinary person's lawyer will be employed only because the economically protectionist bar will forbid direct lay access to AI for legal services.

The bar will comprise two tribes: a sparsely populated sect of elite lawyer-professionals, and a mass of lawyer-tradespeople who keep the factory drums of legal education churning out form wills and contracts to keep the rabble at bay.

The haves and the have-nots.

It's a brave new world, and there is nothing new under the sun.

The first ABA TIPS panel comprised Victoria Alvarez, Troutman Pepper, Charlotte, N.C., moderator; Laura Caldera Loera and Amanda Bryan, Bullivant Houser Bailey, Portland, Ore.; Professor Susan Tanner, Louis D. Brandeis School of Law, Louisville, Ky.; and Justice J. Brooks, Foster Garvey, Portland, Ore. The second ABA TIPS panel referenced here comprised Svetlana Gitman, American Arbitration Association-International Center for Dispute Resolution, Chicago, Ill., moderator; Stephen Embry, EmbryLaw LLC and TechLaw Crossroads, Louisville, Ky.; Reginald A. Holmes, arbitrator, mediator, tech entrepreneur, and engineer, Los Angeles, Cal.; and Jayne R. Reardon, Fisher Broyles, Chicago, Ill.