Could generative artificial intelligence serve as a psychological counselor? Prospects and limitations
Humanity’s ability to embrace artificial intelligence (AI), or the skills and “knowledge” that it can impart, depends not only on the control of input fed to AI, but also on the management of its output. When properly managed, AI output, including that of large language models (LLMs) such as ChatGPT, can complement human endeavor and excellence. Yet, if abused or left to its own computational vices, AI might cause harm to humans and thus to humanity. With this in mind, this perspective paper reflects on whether LLM-based AI, with its capacity to integrate text, voice, and speech, could assist in personal or psychological counseling processes. Since psychological counseling places the human factor at the center of therapy, AI could be perceived as a threat to human-centered counseling roles, i.e., as a potential replacement, even though it might provide assistance under strictly controlled conditions. While the replacement of human-based counseling is not being advocated, there is value in considering the possibility of applying LLM-based AI tools as counseling aides, in AI-human teams, under strict human supervision and following stringent testing, provided that an ethical working framework and reliable AI performance can be established.
| Main Authors: | J.A. Teixeira da Silva, Y. Yamada |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | South Kazakhstan Medical Academy, 2024-12-01 |
| Series: | Central Asian Journal of Medical Hypotheses and Ethics |
| Subjects: | AI value acquisition; artificial social intelligence; health informatics; machine learning; mental health; socialization; trust |
| Online Access: | https://cajmhe.com/index.php/journal/article/view/403 |
| ISSN: | 2708-9800 |
| DOI: | 10.47316/cajmhe.2024.5.4.06 |
| Citation: | Central Asian Journal of Medical Hypotheses and Ethics. 2024;5(4):297-303 |
| Author Affiliations: | J.A. Teixeira da Silva: Independent researcher, Miki-cho, Japan; Y. Yamada: Faculty of Arts and Science, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395, Japan |