Integrating Generative AI in Teaching and Learning
Faculty approaches across Barnard
Barnard faculty are reimagining their course policies, assignments, and activities to refocus on student learning and transparently communicate expectations to their students about the use of generative AI. In what follows, faculty across disciplines provide a glimpse into their approaches as they look ahead to the Fall 2023 semester.
“My approach to generative AI products is twofold: first, to motivate students to use their own creativity such that they are not incentivized to outsource the work to a machine, and second, to make the perils and possibilities of these technologies part of what we learn in the classroom.”
Michelle Greene, Assistant Professor in Psychology
“The goal of the assignment is to have the students use ChatGPT in these different roles and then probe the reasons why one role may be more effective than the other.”
Alexander Cooley, Claire Tow Professor of Political Science
“Instead of restricting students from using such [AI] products, I recommend (and sometimes require) their use for my course on Coding Markets.”
Rajiv Sethi, Professor of Economics in the Dept. of Economics and Human Rights
“I'm hoping to keep an open mind and I'm hoping to profoundly grasp the opportunity in using [generative AI].”
Katie Glasner, Senior Associate in Dance
“I suggested that students use ChatGPT as a writing assistant for one of their assignments where I wanted them to be focused more on the content of their work rather than on their writing style.”
Luca Iemi, Term Assistant Professor in Neuroscience & Behavior
"I am still adapting my courses, and am particularly excited to address ways that generative AI can be used to enhance student learning along with making restrictions on its use for some purposes."
Rebecca Wright, Druckenmiller Professor of Computer Science
"I will do a ChatGPT exercise with the students leading up to the imagined performance paper asking them how this tool might help them and also mislead them..."
Laurie Postlewate, Senior Lecturer in French
"Apart from academic integrity, we think it’s critical that students understand how generative AI perpetuates biases and systems of oppression that our students really care about dismantling."
Wendy Schor-Haim, Director of the First-Year Writing Program
Faculty Examples
Read on to learn how faculty have experimented with AI in their classrooms and are planning to teach AI literacy and help students use AI ethically.
My approach to generative AI products is twofold: first, to motivate students to use their own creativity such that they are not incentivized to outsource the work to a machine, and second, to make the perils and possibilities of these technologies part of what we learn in the classroom. David Wiley has termed many college assignments "disposable" because they do not have a readership beyond student and professor and are quickly forgotten by each. I've been inspired by leaders in Open Pedagogy, such as Karen Cangialosi, who have students write for the public to create products with lasting impact and meaning. When I was at Bates College, I had students collectively write their own open textbook. Knowing that their writing would be visible to the world inspired them to bring their best selves, and they were proud to be authors on a product with a citable DOI. I plan to keep this approach at Barnard, for example by having students create zines based on class content instead of a more traditional essay.

Regarding the second approach, I am lucky to teach at the intersection of cognition and AI. In my spring seminar, Modeling the Mind, students will be assigned to find cognitive tasks that are easy for humans but hard for ChatGPT. This not only reveals the design principles behind this style of AI but also shows students that these tools are not human intelligence.
I’ve developed an assignment using ChatGPT for my upcoming “Transitional Kleptocracy” class in the Fall. The pedagogical justification for the assignment is as follows.
This assignment uses ChatGPT to explore how wealthy/politically influential individuals with controversial pasts (corruption scandals or political controversies) actively manage their global public profiles. The assignment asks students to use ChatGPT to write two profiles: a positive profile (in the style of a public relations firm) and a more critical profile (in the style of a human rights or anti-corruption watchdog) of a given individual with a controversial history (some may even have been sanctioned).
The first part of the assignment invites students, using some preliminary research, to develop detailed prompts to construct these contrasting profiles.
The second part of the assignment (without ChatGPT) invites the students to critically assess the relative strengths of the two profiles they have just generated. It is likely (though not certain) that the positive profile will be the more convincing of the two, as the publicly available information that ChatGPT has gathered will be more sanitized and plentiful than the controversial information, which tends to be deleted or scrubbed (usually by reputation management firms) from public sources such as Wikipedia.
The goal of the assignment is to have the students use ChatGPT in these different roles and then probe the reasons why one role may be more effective than the other.
AI products like ChatGPT are excellent at producing computer programs in basically any language to solve clearly specified problems. Instead of restricting students from using such products, I recommend (and sometimes require) their use for my course on Coding Markets. Even trying to understand the code that has been produced in this way can be instructive, and students can learn to code faster.
I'm hoping to pick up some strategies for talking to the students about using AI (when really all I want to do is ignore it) from a webinar I'm attending next week hosted by the Reacting to the Past community. I'm hoping to keep an open mind and I'm hoping to profoundly grasp the opportunity in using the "helper."
The first assignment in the First Year Seminar/Reacting to the Past class asks that the students work with a claim, reasons for that claim, and the evidence to support the claim. They are not, initially, asked for a counterargument, although many students do include this in the piece. It is a short piece of writing (500-600 words) and the content is completely their choice. Topics have included the right to bear arms, the dissolution of the American legal drinking age, why dogs are good pets to own, international travel by rail, why La La Land is The Best Movie Ever, and reasons to eat a plant-based diet based on climate issues. Of course there are many more, but these topics are the ones that come quickly to mind.
For the students' first written assignment, which has nothing to do with the content of the games that I'll run in Reacting to the Past, I may have them write the entire paper and then copy/paste it paragraph by paragraph into ChatGPT with the request that the AI give a counterargument to their positions. This, in theory, gets them thinking about what their opponents' counterarguments, broadly, might look like, thereby getting them ready for game play.
Or, I might ask that they write that first paper in its entirety and then plug it into ChatGPT, asking how to improve the paper. I'd then work with both documents, and their revision might incorporate the AI-generated suggestions or my comments, or they might just go in a different direction on their own brain power.
I'm entirely ambivalent about using AI to brainstorm. I think walking is better. I may have to shift my POV. I'd like to get to the point where, if I really have to go there, "AI could be used as a tool to push their learning, not take it over" (from RTTP community member Cydni Vandiver/New Mexico Military Institute). I've not looked through the BU document carefully, but will be doing so and hoping there's something in there that I can latch onto. Vandiver also suggests playlab.ai. I'll be looking at this later this week.
I suggested that students use ChatGPT as a writing assistant for one of their assignments where I wanted them to be focused more on the content of their work rather than on their writing style. Content refers to the main information and meaning that needs to be conveyed (the methods and results of a research paper, the relevance and interpretation of a finding, etc.). Style represents the more superficial presentation of the content (is the information conveyed using simple and accurate language, or is it too technical? are everyday examples used, or is the presentation very abstract? etc.). Of course, great style is necessary for conveying content accurately, so content and style are definitely entangled. However, in certain cases (e.g., when you have a lot of information to go through), it is important to focus the resources on understanding the content first before developing a beautiful style.
An example of the assignment can be accessed here.
I am still adapting my courses, and am particularly excited to address ways that generative AI can be used to enhance student learning along with making restrictions on its use for some purposes. So far, I include a discussion of generative AI tools and their appropriate use as sources for material in turned-in work along with a more general discussion of appropriate references and citations, collaboration, and plagiarism. In my class on privacy, we discuss the ways in which generative AI tools may impinge on privacy, including by leaking in their results information about individuals in the training data and by making use of data about people without explicit or sufficiently clear consent and/or compensation to those people. We also discuss how AI-based decision-making tools can have negative privacy implications and other negative implications and can have disparate impacts on different groups. Finally, we discuss related policy and sociotechnical issues, such as actual and possible regulation applicable to AI-based tools.
This fall I will teach Major French Texts I, a survey of French literature covering the Middle Ages through the 18th century that I have taught many times. Our first unit is on the short narratives called lais attributed to the twelfth-century writer Marie de France. Consideration of medieval manuscript production and oral transmission leads us to discuss how the performance (as opposed to silent reading) of a literary text allows a plurality of interpretations, a clearly expressed intention of the author.

This semester the students will be assigned a paper written in French to describe an “imagined performance” of one of the lais; they will be required to consider various aspects of such a performance, including vocalization, timing, facial expression and gesture, costume, etc.; they may choose a performance in either a medieval or modern context. Their performance must include explanation of textual passages and present how these fit into the overall interpretation of the lai. This assignment, which replaces more traditional textual analysis based on themes and tropes, requires a certain level of personal investment on the part of the student in the text itself; the performance is “imagined,” but it is nonetheless an experience that requires a thoughtful and informed reading grounded in textual evidence, and in Old French no less! The paper assignment will be followed by a session in which each student discusses briefly in French with the entire class their imagined performance.
Interestingly, the responses created by ChatGPT (both 3.5 and 4.0) to the performance prompt for the lai entitled “Chaitivel” produced some useful general recommendations on how to do a narrative performance, but they contained glaring errors in basic elements of the narrative itself, as if the model had not actually read the text. I will do a ChatGPT exercise with the students leading up to the imagined performance paper asking them how this tool might help them and also mislead them to the point of failing the paper.
The early periods that we discuss allow us to consider, in addition to classical literary motifs and tropes, the older forms of the language and sociohistorical consideration of the modes of literary production and transmission in a manuscript and early print culture. All lectures, readings and student work for the course are in French.
After much thought, discussion, and reading, FYW has a policy (as of Fall 2023) prohibiting the use of generative AI in any way.
Our policy has a list of readings to help students understand why generative AI is prohibited in FYW; these readings explain how and why generative AI is often factually incorrect and how and why it reproduces misogyny and racial and cultural biases. I (Wendy) am going to require that students read and, in a written assignment, reflect on these readings (they’re brief), and I think other instructors will, too.
We do think it’s really important to emphasize the issue of academic integrity – that generative AI is giving students the product of other people’s work, which the students are then passing off as their own. Even if they try to cite, they can’t, and they can’t check sources for credibility, verifiability, etc. – the complete lack of transparency makes generative AI a really bad choice for students who are trying to do good work in FYW. In this way, it’s far inferior to Wikipedia, which at least has clear sources much of the time! Apart from academic integrity, we think it’s critical that students understand how generative AI perpetuates biases and systems of oppression that our students really care about dismantling.
Related Resources
ChatGPT and Other Artificial Intelligence (AI) in the Classroom. Digital Futures Institute, Teachers College.
Considerations for AI Tools in the Classroom. Columbia University Center for Teaching and Learning.
Generative AI & the College Classroom. Center for Engaged Pedagogy, Barnard.
Learner Perspectives on AI Tools: Digital Literacy, Academic Integrity, and Student Engagement. Students as Pedagogical Partners. Columbia University Center for Teaching and Learning.
Thinking About Assessment in the Time of Generative Artificial Intelligence. Digital Futures Institute, Teachers College.
This resource was developed in collaboration with the Digital Futures Institute at Teachers College and the Columbia Center for Teaching and Learning.
Submit Your Approach
Interested in contributing to this resource? The CEP welcomes all Barnard faculty to email us at pedagogy@barnard.edu if they would like their assignments or approach highlighted in this resource.