Human-Like Epistemic Trust? A Conceptual and Normative Analysis of Conversational AI in Mental Healthcare.
Sedlakova J., Lucivero F., Pavarini G., Kerasidou A.
The attribution of human concepts to conversational artificial intelligence (CAI) that simulates human characteristics and conversation in psychotherapeutic settings presents significant conceptual and normative challenges. First, this article analyzes the concept of epistemic trust, identifying the conditions that become problematic when the concept is attributed to CAI, and argues for a conceptual shift. We propose a conceptual, visual tool to navigate this shift. Second, we analyze three conceptualizations of AI to understand how they influence the interpretation and evaluation of this conceptual shift of epistemic trust and its associated risks. We contrast two common conceptualizations of AI from the literature: a dichotomous account, which distinguishes between AI's real and simulated abilities, and a relational account. Finally, we propose a novel approach, conceptualizing AI as a fictional character, which combines the strengths of both accounts and shifts the focus from merely simulating human abilities to addressing CAI's actual strengths and weaknesses. The article sheds light on the underlying theoretical assumptions that shape the ethical analysis of CAI.