Comparing ChatGPT With Experts’ Responses to Scenarios that Assess Psychological Literacy
by M. Anthony Machin, Tanya M. Machin, and Natalie Gasson
We took a deep dive into how ChatGPT compares to experts’ responses to scenarios that assess psychological literacy and this is what we found.
Our research reveals that ChatGPT’s capacity to demonstrate psychological literacy aligns closely with that of subject matter experts (SMEs). The study compared ChatGPT with SMEs by analyzing responses to 13 psychology research methods scenarios, including ratings of predetermined response options. ChatGPT’s performance was impressive, showing a high level of psychological literacy: Pearson’s correlations between ChatGPT and SME ratings reached .73 and .80, Spearman’s rhos were .81 and .82, and Kendall’s taus were .67 and .68, indicating strong concordance. This suggests that generative AI such as ChatGPT can mirror expert psychological literacy, potentially reshaping learning and application in higher education.
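For readers unfamiliar with the three concordance statistics reported above, the sketch below shows how each can be computed between two sets of ratings in plain Python. The rating vectors are hypothetical placeholders for illustration only, not the study's data, and the Kendall variant shown is tau-a (the simplest form, which ignores ties in the numerator); the study does not specify which variant was used.

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    # Pearson's r: covariance of x and y divided by the product of their
    # standard deviations (linear association between raw ratings).
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / sqrt(varx * vary)

def ranks(v):
    # Convert values to ranks, assigning tied values their average rank.
    order = sorted(range(len(v)), key=lambda i: v[i])
    out = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            out[order[k]] = avg_rank
        i = j + 1
    return out

def spearman(x, y):
    # Spearman's rho: Pearson's r applied to the ranks of the ratings.
    return pearson(ranks(x), ranks(y))

def kendall_tau_a(x, y):
    # Kendall's tau-a: (concordant pairs - discordant pairs) / total pairs.
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

# Hypothetical 1-5 ratings of six response options by the two raters.
chatgpt_ratings = [5, 3, 4, 2, 5, 1]
sme_ratings = [4, 3, 4, 2, 5, 2]

print(f"Pearson r   = {pearson(chatgpt_ratings, sme_ratings):.2f}")
print(f"Spearman rho = {spearman(chatgpt_ratings, sme_ratings):.2f}")
print(f"Kendall tau  = {kendall_tau_a(chatgpt_ratings, sme_ratings):.2f}")
```

High values on all three statistics, as in the study, indicate that the two raters not only order the response options similarly (rho, tau) but also assign similar rating magnitudes (r).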