The humanities in the age of AI: Notes on a participatory exploration at MLA 2025

Jackson Pollock. Untitled. 1948-49. The Metropolitan Museum of Art.
How can artificial intelligence enhance humanities research and teaching while preserving the discipline’s rich traditions and interpretive depth? This question guided a dynamic session at the 2025 Modern Language Association (MLA) Conference—a gathering of thousands of educators, students, and scholars passionate about language, literature, and culture.
Led by Beth LaPensee, Principal Product Manager for JSTOR, and Diba Kaya, Senior Insights Researcher for JSTOR, the session invited participants to critically examine AI’s evolving role in the humanities. In a hands-on workshop, attendees explored how AI might improve accessibility, streamline research, and offer innovative learning tools, while also weighing risks such as bias, diminished critical thinking, and over-reliance on automation.
Framing the discussion was JSTOR’s development of an interactive AI-powered research tool, designed to support learning and inquiry by helping users engage more deeply with texts. Rather than replacing traditional research methodologies, the goal of the tool—and the session itself—was to initiate a nuanced conversation about how AI can responsibly enhance scholarly work. By bringing together diverse perspectives, the session underscored the need for careful, human-centered integration of technology to ensure that AI serves as a meaningful complement to research and teaching in the humanities.
Setting the stage
Beth LaPensee opened the session by grounding the discussion in JSTOR’s principles for integrating AI into research tools. She emphasized a commitment to supporting, rather than supplanting, the research process.
“We focus on centering the content,” Beth explained, “creating a safe, trustworthy space for researchers to engage with AI.”
JSTOR’s interactive research tool, which has been in beta since August 2023, aims to exemplify this approach. Designed to enhance engagement with texts, it allows users to ask questions and receive recommendations for related content. While still evolving, the tool reflects an iterative process informed by user feedback and a desire to thoughtfully integrate AI into the humanities.
The session was structured into a series of interactive activities in small groups designed to capture participants’ thoughts and emotions:
- One-word reaction exercise: Participants described their feelings toward AI using a single word to open discourse on the topic.
- Opportunities brainstorm: In small groups, attendees discussed potential practical applications for AI in the humanities.
- Concerns discussion: Participants debated possible challenges and risks associated with the use of AI in academic settings.
- Preserving humanities traditions reflection: The group reflected on the core values and practices of the humanities that should be maintained in an AI-enhanced future.
Activity highlights
What participants felt about AI: A spectrum of emotions
Participants described their feelings toward AI using a single word. The responses ranged dramatically—from “excitement,” “curiosity,” and “hope” to “panic,” “terror,” and “skepticism.”
Excitement and curiosity:
Many participants expressed enthusiasm about AI’s capacity to open up new avenues for scholarly inquiry. There was a strong belief that AI could democratize access to tools and resources, especially benefiting first-generation students and underrepresented communities. The possibility of enhancing language learning, offering creative writing assistance, and facilitating interdisciplinary collaboration fueled this excitement.
Apprehension and fear:
At the same time, the rapid pace of AI development sparked anxiety. Participants were vocal about fears that AI might diminish critical thinking skills by offering too-easy shortcuts to analysis and writing. There was concern that a heavy reliance on AI-generated content might lead to a “flattening” of academic voices, reducing the uniqueness and interpretive depth that characterize humanities scholarship.
Skepticism and uncertainty:
Some attendees were wary of AI’s potential biases and the ethical implications of its use. They questioned whether AI systems—trained on vast but unvetted datasets—might inadvertently reinforce dominant narratives or propagate outdated ideologies, thereby undermining the diversity of thought essential to the humanities.
“AI has the potential to break down barriers in access to knowledge, especially for first-generation students,” noted one participant. Another, however, countered: “My biggest fear is that AI will homogenize academic writing, making everything sound the same.”
Opportunities for AI in the humanities
The group brainstormed practical applications for AI in teaching and research. Ideas included:
- Language learning: AI tutors could provide safe, judgment-free spaces for students to practice speaking, particularly for those unable to study abroad.
- Administrative tasks: Automating repetitive tasks like formatting letters of recommendation or processing funding requests can free educators to focus on teaching and research.
- Research efficiency: AI-powered tools can accelerate large-scale textual analysis, helping scholars extract key themes from vast archives.
- Teaching writing: Students can critically engage with AI-generated text through structured revision exercises, identifying weaknesses, inaccuracies, and areas requiring improvement. This fosters critical thinking and an awareness of AI’s limitations.
- Teaching support: AI can assist in scaffolding learning experiences, offering writing prompts, or helping students critically engage with texts.
These use cases highlighted AI’s potential to enhance productivity, accessibility, and learning while setting the stage for exploring questions about over-reliance and the potential loss of critical skills.
Addressing concerns
As the discussions deepened, participants pivoted to consider concerns about AI. “Who benefits from AI?” one attendee asked. “Often, it’s large tech companies, not the educators or researchers using these tools.” Some themes explored included:
- Bias and representation: Attendees expressed concerns about AI’s tendency to reinforce dominant cultural narratives and biases, leading to misrepresentations in literary analysis and historical interpretation.
- Erosion of critical thinking: The ease of AI-generated summaries and analyses raised worries about students losing essential interpretive and analytical skills. Some educators shared concerns that students increasingly view AI outputs as authoritative without questioning their validity.
- Intellectual dependence: The risk of over-relying on AI for research and writing was likened to the impact of calculators on mental arithmetic—convenient, but potentially diminishing fundamental skills.
- Ethical and economic questions: Attendees questioned who truly benefits from AI in academia. The predominance of corporate-developed AI tools raises concerns about their profit motives, their accessibility to institutions with limited funding, and their potential to dictate the future of academic research.
- Environmental costs: AI’s high energy consumption and the environmental impact of maintaining large-scale AI models were also highlighted, prompting discussions about sustainability in digital research.
Preserving humanities traditions
In the final activity, participants reflected on the unique aspects of the humanities that must endure in an AI-driven world. Suggestions included:
- Critical thinking and interpretation: Humanities disciplines emphasize nuanced, multi-layered interpretation—something AI struggles to replicate. Participants stressed the need to cultivate analytical skills that help students question and contextualize information, rather than passively consuming AI-generated insights.
- Close reading and deep engagement: The ability to engage deeply with texts, appreciating literary and historical complexities, remains fundamental. The fear is that AI may encourage a surface-level approach to reading and comprehension.
- Authenticity and individual creativity: Writing, interpretation, and scholarly argumentation require human creativity and originality. Attendees agreed that AI should serve as a tool to enhance, rather than replace, the personal intellectual labor that defines humanities scholarship.
- Ethical use of information: With AI’s ability to generate and manipulate text at an unprecedented scale, the humanities have a crucial role in fostering ethical literacy—helping students and scholars critically assess sources, authorship, and intent.
- Diversity of thought and perspective: AI models often reflect the biases present in their training data. Participants emphasized the importance of ensuring that human-centered, diverse perspectives continue to shape research and scholarship.
Looking ahead
As the session concluded, attendees posed thoughtful questions about the future of AI in the humanities and JSTOR’s plans for incorporating their feedback. Beth and Diba assured participants that their insights would directly inform both short-term feature development and a long-term strategy for integrating AI into JSTOR’s platform.
“We have many disciplines that we cover on JSTOR, and so one of the things that we’re looking at is where there’s uniqueness and how we can do this work in a way that feels like it’s supporting you,” Beth said.
Join the conversation
JSTOR invites educators, researchers, and technologists to continue this critical dialogue to support insights-driven product development. By sharing ideas, concerns, and best practices, we can shape a future where AI enhances, rather than detracts from, the richness of humanities education.
If you’re interested in contributing to this conversation, request access to the interactive research tool or learn more about it. If you’ve tried JSTOR’s interactive research tool, you’re invited to share your experiences and thoughts with us. Together, we can ensure that the humanities thrive in the age of AI. We look forward to continuing the conversation and sharing how your feedback is directly influencing the evolution of leading-edge technologies on JSTOR.
About the authors
Beth LaPensee is a Principal Product Manager at ITHAKA, where she leads user-centered innovations that enhance research and learning experiences. Drawing on her background in library science and user experience design, she recently spearheaded the rapid development of a generative AI research tool, balancing cutting-edge technology with the standards and needs of researchers. Beth brings deep expertise in aligning emerging technologies like AI with trusted, high-quality content to better serve users.
Diba Kaya is a Senior Insights Researcher at ITHAKA, where she leads user-centered research initiatives across the product development lifecycle to help shape product strategy and guide the development of AI-driven solutions. Diba specializes in providing foresight into emerging trends and ensuring technological advancements align with the needs of researchers and educators. By collaborating with engineering, design, and data teams, she helps create effective, user-focused tools that enhance and align with research and learning experiences.