Image: John Benjamin Dancer and J. H. Steward, Student Microscope in Case, by J.H. Steward, 457 S., 1875-1900. Part of Open: Science Museum Group.

In early 2023, researchers from ITHAKA teamed up with students at the University of California, Berkeley to address an open question in the growing generative AI literature: How do students really use ChatGPT? The success of this collaborative project also served as a springboard for JSTOR’s development of an AI tool and enhanced search capabilities geared toward meeting student needs. 

In this blog, we’ll share how this work—rooted in ITHAKA’s ethos of collaboration and guided by our commitment to expanding access to knowledge—epitomizes our community-driven approach to new technologies that continues to inspire our work today. 

Combining strengths: ITHAKA teams up with UX research students 

At ITHAKA, our mission is to expand access to knowledge and education. On JSTOR we accomplish this in part through the responsible use of innovative technologies, a practice that dates back to our early use of the internet to make digitized journals available in the 1990s.

But it’s not simply a matter of applying technology to solve objective problems. To ensure we’re meeting real users’ needs, their points of view must be incorporated into our solutions. ITHAKA’s User Insights team of research experts helps guide this work by leading consistent cycles of user learning across all aspects of JSTOR, ensuring that users’ viewpoints shape our product development.

We approached the rapid adoption of ChatGPT in early 2023 accordingly. We wanted to understand generative AI’s evolving role in education and use that knowledge to leverage its value in keeping with our commitment to integrity.

From the start, the conversation around ChatGPT in scholarship largely revolved around questions of academic honesty and concerns about students using it to plagiarize. Recognizing that students might be reluctant to discuss their use of ChatGPT openly, especially for fear of plagiarism accusations, we found a unique avenue for inquiry through a partnership with the UC Berkeley School of Information.

Our collaboration took shape as a capstone project within the School of Information’s UX Research Course, allowing us to explore this terrain from the inside out. Led by students and guided by our User Insights team, this initiative facilitated a student-to-student dialogue, ensuring a more open and genuine exchange of experiences, and allowed for truly reciprocal learning.

As students honed their research skills, they uncovered critical insights into the student experience with ChatGPT, offering us a deeper understanding of its use and impact. This process wasn’t just about gathering data; it was about nurturing a new generation of researchers while enriching our grasp of generative AI’s role in education.

Getting the idea for gen AI on JSTOR: Collaborative methods and actionable outcomes

Our collaboration with user research students from Berkeley was characterized by active engagement and robust support systems. We prioritized ongoing dialogue and the exchange of ideas with the aim of helping students learn to think like researchers. 

To keep the conversation open and ongoing, we held weekly meetings, offered ad hoc advice sessions and office hours, and used a dedicated Slack channel. To ensure students had tools to fine-tune their research skills, we shared guides on crafting questions, developing hypotheses, and designing experiments, and we provided past examples and numerous walkthroughs to outline expectations for the final artifacts.

Through these interactions, students’ insights played a crucial role in shaping the purpose and direction of the generative AI tool we developed and released in beta on JSTOR this fall. We gained a clearer understanding of the technology’s value proposition within the academic context and uncovered nuanced sensibilities among students, who yearned for an AI tool they could feel better about using.

Overall, students were looking for ways to fast-track some of the more time-intensive aspects of research, such as quickly ascertaining whether content is relevant, but had concerns about the trustworthiness and permissibility of ChatGPT. They also foresaw careers that would require knowledge of AI and sought reliable ways of engaging with the technology that would genuinely benefit rather than harm their work.

From idea to iteration: Building the gen AI tool on JSTOR

Based on what we learned with and from the Berkeley students, we began an iterative development process for a research tool leveraging generative AI on JSTOR. It was clear to us that a summarization capability would be of high value to student users, but also that there were many risks we’d need to mitigate to create something reliable. From the earliest stages of development, we engaged subject matter experts (SMEs) to critically assess the summarization feature, examining outputs for bias, completeness, accuracy, quality, and formatting.
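For a concrete, purely hypothetical illustration of what a structured SME review can look like, here is a minimal sketch in Python. The class, field names, and 1-5 rating scale are our own illustrative assumptions and do not reflect the actual evaluation framework behind the tool; only the five criteria come from the review process described above.

# Hypothetical sketch of an SME review record for a generated summary.
# Only the five criteria come from the process described in this post;
# everything else (names, scale, structure) is an illustrative assumption.
from dataclasses import dataclass, field

CRITERIA = ("bias", "completeness", "accuracy", "quality", "formatting")

@dataclass
class SummaryReview:
    document_id: str
    reviewer: str
    scores: dict = field(default_factory=dict)  # criterion -> rating on a 1-5 scale
    notes: str = ""

    def is_complete(self) -> bool:
        # A review counts as complete once every criterion has been scored.
        return all(criterion in self.scores for criterion in CRITERIA)

# Example usage: one SME scores a single generated summary.
review = SummaryReview(document_id="example-article", reviewer="sme-01")
for criterion in CRITERIA:
    review.scores[criterion] = 4
print(review.is_complete())  # prints: True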

As we researched user behaviors and needs and began exploring further functionality accordingly, the evaluation framework evolved dynamically. Among other things, we learned that students had a great deal of feedback on ChatGPT, and we wanted to capture as much of it as possible in order to develop a truly trustworthy and useful feature. Thus, community feedback is not only baked into the tool foundationally but is also vital to its ongoing evolution.

To facilitate a real-time feedback loop, we integrated multiple opportunities to share insights within the generative AI tool itself. We continually assess this feedback and adapt the tool based on ongoing engagement with users.

The impact of collaboration on generative AI on JSTOR

Collaboration has shaped the development and effectiveness of JSTOR’s gen AI tool, as well as our approach to these new technologies as a whole. By involving our community, we can move forward with evidence-backed solutions and foster an environment where real-world problems are addressed authentically and collaboratively. This process enhances the utility of what we produce, and also strengthens trust.

In the face of rapidly evolving technology, we have a responsibility to stay informed about changing trends and adapt in alignment with our values. By empowering researchers and stakeholders to drive innovation, we ensure that educational tools evolve alongside user needs and technological advancements. 

Ongoing collaboration with the community is increasingly necessary; it serves as a cornerstone for navigating the complexities of a rapidly evolving technological landscape and ensures that our work remains aligned with the changing needs of society.

How to get involved

We invite readers to join our collaborative projects and studies by volunteering to participate in beta testing, where your insights and feedback will play a crucial role in driving innovation. Here’s how:

  • Users who are signed into a personal account and have institutional authentication on JSTOR will receive a pop-up that asks if they would like to sign up to try the tool. If you do not see this pop-up and believe you should, please email support@jstor.org.

Note that access to our testing environment will be limited so that we can create a controlled learning experience, where our product and technology teams can best study user interaction with these new features. We are not able to guarantee access to the tool. Learn more about generative AI on JSTOR.