Determining Legal Relevance with LLMs using Relevance Chain Prompting
In legal reasoning, part of determining whether evidence should be admissible in court requires assessing its relevance to the case, often formalized as its probative value---the degree to which its being true or false proves a fact in issue. However, determining probative value is an imprecise process and must often rely on consideration of arguments for and against the probative value of a fact. Can generative language models be of use in generating or assessing such arguments? In this work, we introduce relevance chain prompting, a new prompting method that enables large language models to reason about the relevance of evidence to a given fact and uses measures of chain strength. We explore different methods for scoring a relevance chain grounded in the idea of probative value. Additionally, we evaluate the outputs of large language models with ROSCOE metrics and compare the results to chain-of-thought prompting. We test the prompting methods on a dataset created from the Legal Evidence Retrieval dataset. After postprocessing with the ROSCOE metrics, our method outperforms chain-of-thought prompting.
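The abstract describes a prompting method that links evidence to a fact in issue through a chain of inferential steps and scores the chain's strength. A minimal sketch of that idea follows; the prompt wording, function names, and the product-of-links scoring rule are all illustrative assumptions, not the paper's actual templates or formulas.

```python
# Illustrative sketch only: the prompt template and the product-of-links
# scoring rule are assumptions for illustration, not the paper's method.

def build_relevance_chain_prompt(evidence: str, fact: str, max_links: int = 4) -> str:
    """Ask an LLM to connect a piece of evidence to a fact in issue
    via explicit, individually scored inferential links."""
    return (
        "You are assessing legal relevance (probative value).\n"
        f"Evidence: {evidence}\n"
        f"Fact in issue: {fact}\n"
        f"List up to {max_links} inferential steps linking the evidence to "
        "the fact, one per line, each with a plausibility score from 0 to 1."
    )

def chain_strength(link_scores: list[float]) -> float:
    """One plausible chain-strength measure: treat the chain's strength as
    the joint plausibility of its links (product rule). An empty chain
    carries no probative value."""
    if not link_scores:
        return 0.0
    strength = 1.0
    for s in link_scores:
        strength *= s
    return strength

prompt = build_relevance_chain_prompt(
    "The defendant's car was seen near the scene at 9 pm.",
    "The defendant was present at the scene of the crime.",
)
print(round(chain_strength([0.9, 0.8, 0.75]), 3))  # 0.54
```

Under the product rule, longer chains with weak links score lower, matching the intuition that probative value decays as the inferential path from evidence to fact grows more tenuous; the paper itself explores multiple scoring methods.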
| Main Authors: | Onur Bilgin, John Licato |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2024-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Subjects: | relevance-chain; legal relevance; probative value; legal evidence retrieval |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/135477 |
| ISSN: | 2334-0754, 2334-0762 |
|---|---|
| Authors: | Onur Bilgin (ORCID: 0009-0002-1690-4779), Department of Computer Science and Engineering, University of South Florida; John Licato, Department of Computer Science and Engineering, University of South Florida |
| Collection: | DOAJ |