The humanities in an AI world: Personal reflections from SXSW EDU 2025

Bill McConkey. Mechanical Man. n.d. Wellcome Collection.
In early March, Dylan Ruediger (Ithaka S+R) and I organized a meet-up at the SXSW EDU 2025 Conference, providing a space for educators to discuss their experiences incorporating AI into real-world classroom instruction. While approximately 200 participants came from diverse educational sectors including K-12, technical schools, and community colleges, my focus in this blog will be on the unique challenges and opportunities facing humanities instructors. This blog shares my personal reflections and perspectives on those discussions as well as insights from our work to learn about AI across our teams at ITHAKA.
AI’s impact on pedagogical approaches
At our meet-up, I observed that humanities instructors expressed the strongest concerns about AI’s role in education. This was consistent with a national survey on generative AI in postsecondary instruction conducted by our Ithaka S+R team, which revealed that 45% of humanities faculty disagreed or strongly disagreed that AI would have a positive impact on instructional practices in their field (Ruediger, Blankstein, & Love, 2024). And many faculty, especially in the humanities, continue to prohibit students from using generative AI.
Throughout the discussions, I came to appreciate that much of this skepticism stems from concerns about how students might interact with AI. A legitimate worry is that AI could replace, rather than augment, the cognitive processes involved in learning. One high school history teacher shared a particularly illustrative example: a student in her class used AI to write an essay, bypassing the valuable learning that comes through the research and writing process. She added that she could only catch that the essay was AI-generated because of a fabricated citation in the references.
While I sympathize with these frustrations about AI potentially circumventing learning processes, I was encouraged by examples of humanities instructors finding innovative approaches to enhance student growth using AI. Dr. Alexa Alice Joubin of George Washington University shared in a webinar hosted by our Constellate team that she teaches students to use AI to simulate reactions from various communities on the students’ own thoughts, thereby developing metacognitive skills (Joubin 2024). Dr. Joubin is not alone in promoting AI for metacognition development. In a recent Harvard tech talk on how writers use ChatGPT, Ken Liu described using ChatGPT to create a Socratic dialogue, helping him refine his own thinking through structured questioning.
These contrasting examples suggest to me that while concerns about AI in humanities education are valid, there are also promising opportunities for leveraging AI to enhance higher-order thinking. From my perspective, a key insight from these discussions is that education in the AI era might benefit from emphasizing teaching students how to analyze, interpret and engage critically with texts and ideas while incorporating AI as a tool that can help students monitor and refine their interpretive and reflective processes. I believe that creating spaces where humanities instructors can share successful AI integration strategies, particularly those fostering metacognitive skills, could help demonstrate these tools’ potential value for deep learning while acknowledging the legitimate concerns many educators hold.
AI’s impact on assessment
In humanities education, assessment typically revolves around written artifacts — essays, exams and reports — which are evaluated as a final product representing a student’s understanding and analytical skills. This product-based evaluation approach naturally influences how humanities instructors respond to AI technologies. When the culmination of learning is a piece of writing that AI can simulate, it’s understandable why many educators perceive AI as a threat to learning.
However, as I reflected on the discussions from the previous section, I believe AI’s most promising role in education is not in generating prose but in fostering self-regulated learning. This potential shift in learning processes seems to call for a corresponding evolution in assessment strategies.
From the conversations at the meet-up, I found myself drawn to process-based assessment as a viable alternative. In contrast to product-based assessment, which focuses solely on the final outcome, process-based assessment evaluates the steps and strategies students employ throughout the learning process. Its primary aim is to evaluate how students approach a task, how they solve problems, and how they reflect on their own learning. Since a thought process is much harder to outsource to AI than a finished product, process-based assessment is more robust than product-based assessment in the AI era.
Process-based assessment is not an unfamiliar concept. For example, formative assessment in project-based learning is used to provide feedback, identify areas for improvement, and facilitate learning throughout the project. In project-based learning, the project frames the curriculum and instruction and is therefore a relatively long-term endeavor. Along the project trajectory, instructors may adopt product-based assessment for milestone deliverables that check whether students have met the learning objectives of a given phase. With the development of AI, the process-based assessment once applied at this macro level needs to be implemented at the micro level too: for each student product, instructors will need to evaluate the thought process rather than the product alone.
While this might initially appear to create additional work for instructors, some pioneering educators who have incorporated AI in their teaching have come to appreciate the “inefficiency” it brings. Christopher Mcvey, in a Substack post, shared how interactions with ChatGPT opened new opportunities for his students: exchanges with ChatGPT pushed students to reconsider their positions on a topic, encouraged them to explore a research question that initially seemed uninteresting, and helped them practice verifying the scholarly sources AI identified. All these activities, while slowing things down, created what Mcvey describes as a rewarding experience in which students came to appreciate the hard work they invested before producing a piece of writing. He noted that the majority of his students chose to do the writing themselves because they felt proud of their work, even though his class policy permitted using AI for up to 50% of submitted work.
AI’s impact on educator roles
During the meet-up, I was intrigued by a question: what will be the role of educators in the AI era? Some participants envisioned a future where academic instruction is largely outsourced to AI systems, which would deliver personalized content explanations. For example, AI could design soccer-related examples to help a student who loves the sport understand historical themes through a lens that speaks to their own interests. In this scenario, instructors would focus primarily on providing social-emotional support to students. At ITHAKA, we have heard a similar perspective on healthcare. Paul LeBlanc, an invited speaker on our Defining the Moment panel at Next Wave 2025, suggested that in the future, healthcare providers will primarily offer emotional support while AI handles diagnostic and analytical tasks.
While I find this vision intriguing, I’m skeptical about its underlying assumption that AI will become a fully reliable source of information, an assumption that is not yet supported by current technology. Today’s AI models, particularly generative pre-trained transformer-based large language models (LLMs), are essentially probabilistic data generators. They recognize patterns in training datasets and try to reproduce those patterns in response to prompts, but their outputs are not guaranteed to be accurate or demonstrate the critical thinking that humanities education aims for. Given these limitations, I believe AI cannot replace humanities educators as domain experts, at least not in the near future.
That said, I was particularly interested in a possibility that emerged from the discussions: AI might serve as a thoughtful peer rather than an educator, given its adaptability in playing different roles designated by prompts. For example, a postdoc from UT Austin shared findings from an A/B test she ran indicating that students who asked AI to play the role of a peer achieved much better learning outcomes than those who asked AI to play the role of a mentor. In the context of humanities education, where intersubjective exchange is central, this insight suggests that AI might best be used as a conversation partner rather than an authority.
From my perspective, rather than replacing educators, AI is best positioned as a collaborator. I believe educators will continue to play a crucial role in providing both domain knowledge and emotional support, but AI can enhance and personalize these interactions, enriching the learning experience.
What I have learned
Throughout the SXSW EDU meet-up and in the reflections that followed, I’ve found myself pondering how AI might reshape various aspects of humanities pedagogy if thoughtfully adopted. During the conversations at the meet-up, I appreciated the legitimate concerns many humanities educators expressed about preserving the core values of humanistic inquiry and time-honored educational practices.
While these concerns merit careful consideration, they also invite us to think about how AI integration might influence the educational landscape in humanities. As AI gradually gains presence in education, I wonder if we might witness several transitions in humanities:
- A potential evolution in the pedagogical approach—complementing the commitment to textual analysis by inviting students to develop metacognitive skills and self-regulated learning with AI as a thinking partner.
- Perhaps a gradual enrichment of assessment methods—valuing the well-crafted final outputs while emphasizing the learning journey itself, finding new ways to appreciate the process of developing ideas alongside the expression of those ideas.
- Possibly a reconsideration of the role of educators—where educators remain central to the educational experience, but incorporate AI into the classroom as an additional collaborator to enhance both academic and social-emotional learning experience.
At ITHAKA, we continue to conduct research, convene educators, and provide solutions to support instructors and institutions as they navigate the intersection of education and emerging technologies. While no one has definite answers right now, I believe we will find great value in the shared explorations of both the educational challenges and opportunities that lie ahead.
References
CS50. (2025, February 20). CS50 tech talk with OpenAI – ChatGPT for writers [Video]. YouTube.
Joubin, A. A. (2024). Enhancing the trustworthiness of generative artificial intelligence in responsive pedagogy in the context of humanities higher education. In General Aspects of Applying Generative AI in Higher Education: Opportunities and Challenges (pp. 207-220). Cham: Springer Nature Switzerland.
Mcvey, C. (2025, February 5). The case for slowing down – Why AI in education should value inefficiency. The Important Work. https://theimportantwork.substack.com/p/the-case-for-slowing-down
About the author

Zhuo Chen is a text analysis instructor at Constellate of ITHAKA, where she develops and teaches programming tutorials using Python and R to help researchers analyze textual data. At ITHAKA, Zhuo works closely with libraries, faculty, and students to promote broader access to digital literacy education.
Zhuo holds a bachelor’s degree and master’s degree in English from Wuhan University, a master’s degree in Education from the University of Delaware, and a PhD in Linguistics from the Graduate Center of the City University of New York.