Robustness of large language models in moral judgements
With the advent of large language models (LLMs), there has been a growing interest in analysing the preferences encoded in LLMs in the context of morality. Recent work has tested LLMs on various moral judgement tasks and drawn conclusions regarding the alignment between LLMs and humans. The present...
Saved in:

| Main Authors: | Soyoung Oh, Vera Demberg |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | The Royal Society, 2025-04-01 |
| Series: | Royal Society Open Science |
| Online Access: | https://royalsocietypublishing.org/doi/10.1098/rsos.241229 |
Similar Items

- Manner implicatures in large language models
  by: Yan Cong
  Published: (2024-11-01)
- SbSER: Step-by-Step Enhanced Reasoning Framework for Large Language Model with External Subgraph Generation
  by: FENG Tuoyu, WANG Gangliang, QIAO Zijian, LI Weiping, ZHANG Yusong, GUO Qinglang
  Published: (2025-02-01)
- Enhancing In-Context Learning of Large Language Models for Knowledge Graph Reasoning via Rule-and-Reinforce Selected Triples
  by: Shaofei Wang
  Published: (2025-01-01)
- Context is Key: Aligning Large Language Models with Human Moral Judgments through Retrieval-Augmented Generation
  by: Matthew Boraske, et al.
  Published: (2025-05-01)
- Relationship between Moral Sensitivity and Moral Reasoning with Moral Courage in Nursing Students
  by: Atefeh Babaei, et al.
  Published: (2025-03-01)