Effects of AI vs. Human Moderators and Interactive Transparency on Perceived Trust and Acceptance of Content Classification Systems
- Molina Davila, Maria
- [University Park, Pennsylvania] : Pennsylvania State University, 2020.
- Physical Description:
- 1 electronic document
- Additional Creators:
- Sundar, S. Shyam
- Restrictions on Access:
- Open Access.
- To identify and counter the upsurge of dangerous or harmful content posted on social media (e.g., hate speech, dangerous challenges), companies like Facebook and Twitter are starting to leverage the power of artificial intelligence (AI) to moderate user-generated content. Through this process, when content is identified as belonging to a particular category (e.g., hate speech), it is flagged and often taken down. These efforts have faced criticism, mainly due to a lack of clarity about the source of classification and a lack of understanding of how the system operates. Specifically, users want to know who or what is classifying the content--a human moderator or an AI algorithm--and what criteria are used to reach a classification decision. If a system were to address these questions by revealing the identity of the classification source and providing transparency about the process, would users agree more with the classification and show higher trust toward the system? This dissertation addressed this question with a 3 (Source: AI, Human, Both) x 3 (Transparency: No Transparency, Transparency Only, Interactive Transparency) x 2 (Classification Decision: Positive, Negative) between-subjects online experiment examining two classification contexts, hate speech and suicidal ideation. Findings of this study reveal that, in general, users have a bias that favors not flagging a post over flagging it. Furthermore, users trust AI for content classification just as much as they do human moderators; however, acceptance of AI depends on the valence of the machine attributes invoked when AI is identified as the source of classification. Specifically, if machines are perceived as objective and accurate, they are more accepted than if they are seen as incapable of the subjective judgments humans make. Finally, while providing transparency about the functioning of the system was better than providing no transparency at all, allowing users to participate in the classification process elicited higher trust by increasing their sense of agency. Theoretically, this dissertation supports a general model of user agency in the AI domain and reveals the importance of source cues and transparency in promoting trust toward a system. The findings also have practical implications for the design of content classification systems: disclosing the source as well as the rules of classification, and affording interactivity to increase user agency en route to a better user experience with AI.
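- Design Illustration: As an illustrative aid only (not part of the dissertation or this record), the following minimal Python sketch enumerates the 18 cells of the 3 x 3 x 2 between-subjects design described in the abstract and randomly assigns a participant to one. The helper name and assignment logic are hypothetical, and the two classification contexts (hate speech, suicidal ideation) are not modeled here.

```python
import itertools
import random

# Factors and levels of the 3 x 3 x 2 between-subjects design from the abstract.
SOURCES = ["AI", "Human", "Both"]
TRANSPARENCY = ["No Transparency", "Transparency Only", "Interactive Transparency"]
DECISIONS = ["Positive", "Negative"]

# Fully crossing the three factors yields 3 * 3 * 2 = 18 experimental cells.
CELLS = list(itertools.product(SOURCES, TRANSPARENCY, DECISIONS))

def assign_condition(rng):
    """Randomly assign one participant to a cell (hypothetical helper)."""
    source, transparency, decision = rng.choice(CELLS)
    return {"source": source, "transparency": transparency, "decision": decision}

if __name__ == "__main__":
    rng = random.Random(0)          # seeded so the demo is reproducible
    print(len(CELLS), "cells")      # -> 18 cells
    print(assign_condition(rng))
```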
- Dissertation Note:
- Ph.D. Pennsylvania State University 2020.
- Technical Details:
- The full text of the dissertation is available as an Adobe Acrobat .pdf file; Adobe Acrobat Reader is required to view the file.