The attribution of human concepts to conversational artificial intelligence (CAI) that simulates human characteristics and conversation in psychotherapeutic settings presents significant conceptual and normative challenges. First, this article analyzes the concept of epistemic trust by identifying the conditions that become problematic when it is attributed to CAI, arguing for a conceptual shift. We propose a conceptual, visual tool to navigate this shift. Second, three conceptualizations of AI are analyzed to understand how they influence the interpretation and evaluation of the conceptual shift of epistemic trust and its associated risks. We contrast two common AI conceptualizations from the literature: a dichotomic account, which distinguishes between AI's real and simulated abilities, and a relational account. Finally, we propose a novel approach that combines their strengths: conceptualizing AI as a fictional character, arguing for a shift in focus from merely simulating human abilities to addressing CAI's actual strengths and weaknesses. The article sheds light on the underlying theoretical assumptions that shape the ethical analysis of CAI.

Original publication

DOI

10.1080/15265161.2025.2526734

Type

Journal article

Journal

Am J Bioeth

Publication Date

22/07/2025

Pages

1 - 16

Keywords

Conversational AI, epistemic trust, ethical design, ethics, psychotherapy, responsible human-AI interaction