Can AI Pass a Law Exam? Study Reveals Strengths and Weaknesses in Legal Reasoning

AI Shows Potential in Law Exams but Falls Short in Complex Legal Reasoning
Written By:
Mwangi Enos

A recent study by the University of Wollongong tested AI’s ability to pass an undergraduate law exam. While AI-generated responses were nearly indistinguishable from human-written ones, performance varied significantly. AI excelled in essay-style questions but struggled with complex legal reasoning and critical analysis. The results suggest that AI can support legal studies but is not yet capable of replacing human intellect in nuanced legal decision-making.

The Study and Its Methodology

The exam involved 225 participants and was divided into two parts: the first was based on a case study on criminal offences, and the second comprised an essay and short-answer questions. The examination aimed to test legal knowledge as well as the ability to think critically and form rational arguments.

AI-generated responses were produced using different models and prompting strategies. Five responses were created by entering the exam questions as prompts without any specific instructions. Another five were produced using detailed prompts that included relevant legal materials to enhance performance. These answers were then handwritten into official exam booklets and submitted anonymously alongside actual student answers, and tutors graded them without knowing which were AI-generated.

How AI Performed in the Law Exam

The study found that AI-written responses were not easily distinguishable from human-written ones; tutors did not suspect that any answers came from an AI model. However, the AI-generated papers did not perform exceptionally well.

Responses created without prompts performed poorly, with three failing and two barely passing. On average, these AI-generated responses outperformed only 4.3% of students. When detailed prompts were used, performance improved, with one AI-generated paper scoring 73.3% and another 78%. On average, the prompted AI papers outperformed 39.9% of students.

AI showed strength in essay-style responses but struggled with complex legal analysis. The inability to apply nuanced reasoning indicates AI’s current limitations in handling intellectually demanding tasks in legal education.

Implications for Education and AI Usage

The study highlights that AI is not yet capable of replacing human intellect in critical thinking tasks. Instead, AI should be seen as a tool to enhance learning and analysis rather than a substitute for human effort.

Universities may need to rethink traditional assessment methods to incorporate AI-assisted learning. Encouraging students to use AI for drafting, verifying, and refining their work could be a valuable approach, one that keeps human oversight and critical thinking central to academic settings.


Analytics Insight
www.analyticsinsight.net