I was early to the generative AI wave in higher education: I was among the first writing professors to publish in an academic journal about generative AI and critical thinking, and I am now part of an interdisciplinary team at Babson College thinking about how AI is affecting education, industry and society.
But that does not mean I am all in on AI -- nor am I anti-AI. I am pro-learning. As my co-authors and I argue in a forthcoming book on realizing the promise of higher education, even the most powerful tools are only as good as the learning environments we build around them.
So what does "getting learning right" look like in the age of generative AI? It involves a lot of experimentation and leaning in alongside students as a co-learner when I don't have all the answers, while remaining staunchly committed to sharing my expertise in writing, critical thinking and learning. I also hope that students trust me enough to follow my lead and persevere when the work becomes difficult.
From hope to grief
Navigating the rise of generative AI seemed easier to me in the earlier days. In spring 2023, for example, soon after ChatGPT went public, I asked students to use it to research their favorite musical artist and then fact-check the results as part of a unit in my senior-level social media class. The responses sounded polished and confident, but they were often wrong. Album dates were scrambled. Tours were invented. At one point, a student threw up her hands and shouted, "It lies!" The room erupted. The "lies" were especially apparent with less popular artists, about whom less had been written. "How might that translate to other knowledge areas?" I asked. They were quick to think about whose voices might not make the cut in a different scenario.








