Annotated Bibliography

AI is one of the fundamental issues that education is currently grappling with and will continue to grapple with for the foreseeable future. This is especially true for composition, where it can occasionally feel like an existential issue. If not existential, then something that has the potential, if big capital has its way, to reorient the field. My research will center on AI and its impact on writing classrooms in K-12 and higher education.

My questions are: How is AI currently being used in composition? What are the positives and negatives? What are some resources, pedagogies, or activities that will teach students to approach AI with a curious but cautious eye?

Acker, Tiffany. “Teaching Writing in an Age of AI.” ENG 510: Teaching First Year Composition, 3 Dec. 2025.

Annotation/Summary: Tiffany discusses how she was unfamiliar with AI and, as a result, chose not to speak on it. She describes how a friend of hers at another institution of higher education is encouraging their students to use AI. That college is also heavily associated with the government, and in both contexts students are being encouraged to use AI to “complete their projects, papers, and other assignments.” Tiffany then goes on to say that she views the incorporation of AI as inevitable, but that there are ways to teach about it that center the concerns AI carries, in order to ensure that students have the most information.

Commentary: Tiffany’s report from other institutions of higher education, particularly those so closely tied to the government, is valuable. It offers wonderful insight into just how ubiquitous AI is becoming across the country.

Baz, Mehmet Ali, and Sevil Hasırcı Aksoy. “The effect of feedback on informative text writing: AI or teacher?” Open Praxis, vol. 17, no. 3, 11 Aug. 2025, https://doi.org/10.55982/openpraxis.17.3.871. https://doaj.org/article/fb6526b6b158406cba5cdc0c15c99439

Annotation/Summary: Baz and Aksoy conducted an experiment testing the efficacy of AI feedback in writing informative texts for fifth-grade students in Turkey. They did this over the course of eight weeks, giving one group of students only AI feedback, and another group of students only teacher feedback. The study found that “teacher feedback resulted in a significant difference in the retention test, whereas AI feedback did not” (621).

After analyzing the feedback content of both teachers and AI, they identified a key difference between the two kinds of feedback. While the AI feedback was comprehensive, the teacher feedback was, for lack of a better word, more human. “The mnemonic, emphatic, stimulating, and associative information aspects of teacher feedback, which addresses the individuality of the student were lacking in AI-generated feedback, although this may change as AI advances” (622).

The article contends, like many others, that AI will be beneficial in reducing the time and cognitive workload required for feedback. The article also suggests that by not investing in AI now, it will reduce access in the future. Therefore, there will be an educational inequality between schools that invest and those that don’t (622).

Commentary: I can use this to continue to show that human feedback is more effective for long-term retention of content, extending the work of Escalante and others. What is especially exciting about this article is that it uses a different population: fifth-grade Turkish students, as opposed to the university English learners used in other studies. AI cannot slip, or mushfake, into a discourse community as easily as the teacher; it lacks the multimodal ability to construct meaning in human interactions.

Behizadeh, Nadia, et al. “Invited response: Promise and perils of GenAI in English education: Reflections from the National Technology Leadership Summit.” English Education, vol. 56, no. 1, 1 Oct. 2023, pp. 8–19, https://doi.org/10.58680/ee20235618.

Annotation/Summary: The authors of this piece reflect on their time at the National Technology Leadership Summit (NTLS) and “dive more into the perils of GenAI, expanding our focus from the classroom space to the wider ecology of society to consider impacts on human and nonhuman entities” (10). They were disturbed by the fact that at this conference “almost all of the authors urged teacher educators and K-12 teachers to find ways to use AI productively in spite of concerns” (9). Each of the authors constructed a section in which they discussed their own concerns.

Nadia’s reflection focused on environmental and social justice concerns. “What does using GenAI mean for our planet? What are the intersections of environmental justice, social justice, and curriculum and instruction in teacher education programs?” (12).

Lindy’s reflection was more concerned with how AI will change the lives of teachers. “We considered what GenAI means for generating new knowledge and how it has the potential to reshape the kinds of knowledge and expertise teachers will need” (13).

Meridith’s reflection was an amalgamation of the other two. In conclusion, the authors pointed out that when it came to AI, there were far more questions than answers. They advocate “to move from theorizing and hypothesizing to develop a research agenda to answer these questions on GenAI in English education” (16).

Commentary: The authors do a great job of pointing out the concerns around AI and poking a hole in the hype bubble that has developed around AI. This article is a call to action that offers different avenues of study, by providing a multitude of questions that could and should be researched.

Chen, Chen, and Yang (Frank) Gong. “The role of AI-Assisted Learning in academic writing: A mixed-methods study on Chinese as a Second language students.” Education Sciences, vol. 15, no. 2, 24 Jan. 2025, p. 141, https://doi.org/10.3390/educsci15020141. https://doaj.org/article/0ecd63df0de24b658bee570199808da3.

Annotation/Summary: Chen and Gong studied the impact of students using AI in Chinese as a Second Language education. They separated the students into two groups of 25; one group used AI while the other did not. They found that the students who used AI, rather than traditional teacher feedback, scored significantly higher on their writing samples (82 vs. 89) (9). Other significant findings include that students perceived AI as a knowledgeable teacher but did not feel the same pressure they feel when communicating with an actual teacher (11).

The article identifies several issues with the students who used AI. The major one is an overreliance on AI, with one student claiming: “that ‘without the tool, I would write nothing correctly’” (12). Another student claimed, “that the tool ‘was so good that I completely relied on its instruction’” (12). And still another stated that they “used ChatGPT to ‘check and revise almost every sentence or even every word I wrote’” (12). The paper concludes that “AI tools in this study inspired ideas, generated grammatically accurate sentences, and adjusted language to meet academic style expectations, ultimately improving students’ performance in academic writing tasks” (13).

Commentary: This paper follows a similar process as both Escalante and Baz and Aksoy, but this is the first paper to attest to finding a significant improvement in attainment of skills. However, it seems clear to me that these results should be seriously questioned based on the interviews with the students. If AI is generating ideas, creating grammatically accurate sentences, and adjusting language to meet academic style, then what exactly are the students doing? Furthermore, just because they got a higher score on a paper, does that mean they retained the skills? It is difficult to tell, but it also seems that the students were allowed to use ChatGPT to complete their final paper. Wouldn’t a better test be to remove ChatGPT and see how the students fared without it?

Deep, Promethi Das, and Yixin Chen. “The role of AI in academic writing: Impacts on writing skills, critical thinking, and Integrity in Higher Education.” Societies, vol. 15, no. 9, 4 Sept. 2025, p. 247, https://doi.org/10.3390/soc15090247. https://doaj.org/article/f61e7ea165ef450a81be42adb77a7d98

Annotation/Summary: Deep and Chen conduct a narrative review of scholarly work focused on the role of AI in academic writing. They started with 261 articles and combed through them until they found 20 that met the outlined eligibility criteria. Deep and Chen found that the research suggests students who use AI produce more polished and well-organized work. The researchers also found that AI can be especially beneficial for ELL students and in educational environments where there is not enough faculty to provide students with individual feedback.

However, they also identified many negatives in the review. AI use can lead to an overreliance, stating, “It does little to encourage the cognitive effort necessary for students to form their own ideas and arguments.” They also identified ethical concerns centered on sourcing, inability to understand context and nuance, and the distribution of false, misleading, or even harmful information. The paper concludes that AI will be used in the future and that it is important for educators to engage with it in order to incorporate it effectively.

Commentary: This article provides a list of primary research sources that can be used for further research. This paper supports and extends the neoliberal focus on speed and efficiency outlined by Escalante. At the same time, it provides some valuable classroom activities that could develop AI literacy. It also shows that the research in the field suggests about half of students don’t want AI, and most instructors don’t want it either, which raises the question: where is the push coming from? This dovetails nicely with Escalante’s finding that students prefer human feedback.

This article also outlines areas of further exploration that can be extremely beneficial for anyone who is interested in researching AI and composition, such as myself.

Escalante, Juan, et al. “AI-generated feedback on writing: Insights into efficacy and ENL student preference.” International Journal of Educational Technology in Higher Education, vol. 20, no. 1, 27 Oct. 2023, https://doi.org/10.1186/s41239-023-00425-2. https://doaj.org/article/a43ee9464eb041d3a7fbbd2bcf0122aa.

Annotation/Summary: Escalante et al. studied the efficacy of AI (GPT-4) as an automated writing evaluation tool and whether students preferred this feedback to human feedback. The study found that there was no significant difference in improvement between the students who received AI feedback and those who received human feedback. The study found that 50% of students preferred human feedback, citing human interaction as beneficial to their writing process, while 50% preferred AI feedback, citing speed and clarity as beneficial to their writing process.

While the paper found no significant improvement and found that 50% of students preferred human feedback, it still recommended the incorporation of AI into educational practices. The number one argument the authors used was the time saved with AI feedback. So, despite student preference, the focus is on improving efficiency and on labor-saving (money-saving) benefits. They also leaned heavily on the idea that AI will improve and therefore the feedback will improve.

Commentary: This paper leans into neoliberal ideas of value, increasing speed and efficiency as a reason for incorporation. This is the only reason for incorporating AI as there was no improvement in the actual mastery of content. I could see the use of AI-feedback as beneficial, particularly in K-12 where ELL programs are under-funded and understaffed. AI could be a quick and easy solution.

However, this paper raises more questions than answers. My biggest worry is about the impact this will have on cognitive and social development. What will happen to students who receive most of their knowledge from a machine? How will this impact how they interact with humans, particularly after they get out of school? How will this impact how they interact with their professors? How will this impact how they interact with fellow students? Students constantly talk about feeling isolated. By incorporating AI feedback, are we not increasing student isolation? Is this creating the most well-rounded individuals who will go out into the workforce? Soft social skills are becoming increasingly in demand. Does this limit students’ access to these skills?

Fernandes, Maggie, and Megan McIntyre. “Giving Voice to Generative AI Refusal in Rhetoric and Writing Studies – Episode 1.” Kairos: A Journal of Rhetoric, Technology, and Pedagogy, vol. 29, no. 2, 15 Jan. 2025, kairos.technorhetoric.net/29.2/disputatio/fernandes-mcintyre/ep1.html.

Annotation/Summary: Fernandes and McIntyre use a multimodal article to explore reasoning and practice for engaging or refusing to engage with AI in rhetoric and writing studies. They use the written word and podcasts to achieve this goal. In the first episode, they discuss how there are two camps regarding AI in rhetoric and composition: “1) this is the future so we must adapt, and 2) this is the end of academia, and so we must stop it; we must police” (episode 1). They position themselves as more middle of the road, preferring instead to refuse to adapt and refuse to police.

In the second episode, they talk about the different ways they are engaging with AI at the University of Arkansas. They are using an “up-to-the-teacher kind of thing” (transcript 3). Their focus was on making sure that TAs knew the ethical implications of AI use and how to teach students to use it should the TAs incorporate it into their classes.

The third episode continues to focus on “centering perspectives based in refusal.” They bring in different perspectives, such as “ambivalence” toward AI. Fernandes and McIntyre also focus on the idea of time, stating: “We’ve lived through short-lived hype cycles before” (Episode 3). They voiced concerns that many share as AI is being so rapidly incorporated into higher ed: “Those initial conversations seem to have quickly given way to how-tos, and the how-tos were really aimed at integration. And that was surprising to me… and made me question my own understanding of where we are, who we are, what we want, and we do now” (transcript 7).

In episode 4, Fernandes and McIntyre interview Dr. Michael Black, discussing tech hype, the importance of discussing labor and AI, and how writing as a discipline is built on process while AI is completely product focused. The discussion of the rhetoric of inevitability continues here. Black raises the important point that “Editing work is in some ways a lot harder than writing, and I’m very skeptical of this assumption that we’ll all just become editors of AI work when even editing human work can be very difficult, especially when you have somebody to talk to and try to figure out what their goals were” (episode 4 transcript, 4).

Commentary: This article and podcast can act as a starting point for outlining the dissent against AI. The most important insights relate to the contradictions between AI and writing as a discipline—process versus product, labor. The other important insight is the fact that no one is questioning the inevitability narrative that the academy and the world at large are being force-fed. As a discipline, we know what works, and AI seems contradictory to the foundations and best practices of the discipline.

Hesse, Douglas D. “In dialogue: People writing, human identities.” English Education, vol. 57, no. 2, 1 Jan. 2025, pp. 166–170, https://doi.org/10.58680/ee2025572166.

Annotation/Summary: This is a response to the ELATE statement that I annotated below. The text is mostly positive about the statement; however, the author does “worry the statement misses the full implications of ‘incorporating’” AI (166). The author focuses on the way that AI destroys students’ identity, an identity that is wrapped up in the humanities: “these include forming and expressing creative and social identities, ethical and moral positions: the old-fashioned domain of the humanities” (166). When we discuss AI, we are far too concerned with the economic impact the technology can have, e.g., increasing efficiency, while not enough credence is given to quality or to the soft skills that are increasingly in demand.

The author, in the spirit of exploration, wonders about the impact AI use will have on “cognitive development, which matters not only for individuals but for democracy. Complex times call for complexly thinking people” (168). Writing studies has long held that writing is a tool of critical thinking and learning; AI, therefore, takes away students’ ability to struggle, grow, think critically, and learn through writing. The author ends the piece by calling “to privilege a place for what Coleridge called the ‘primary imagination,’ students’ making things out of their own encounters with the world… We need to prize students’ reactions and interpretations, what they can do out of imperfectly developing skills viewing the world” (168).

Commentary: This is an excellent piece that extends the ELATE statement. Together they can build a framework for how to facilitate a student-centered, constructivist pedagogy that combats the blanket-use of AI. We need to prioritize student voice and student engagement with the world. We need students to recognize their uniqueness. We need students to know that they don’t need to conform perfectly to the rules but rather need to produce something that is a part of them and therefore has value.

Kacena, Melissa A., et al. “The Use of Artificial Intelligence in Writing Scientific Review Articles.” Current Osteoporosis Reports, vol. 22, no. 1, 16 Jan. 2024, pp. 115–121, https://doi.org/10.1007/s11914-023-00852-0. https://link-springer-com.ezp.lib.cwu.edu/article/10.1007/s11914-023-00852-0.

Annotation/Summary: Kacena et al. tested AI’s ability to write publishable scientific papers. Once again, the researchers identify time saving (i.e., money saving) as the most important impact of the study, opening the paper with: “Time is valuable, and advancements of artificial intelligence (AI) provide new avenues to save this precious resource.” The authors developed three different methods. One paper was written using only AI; however, they quickly abandoned this method because ChatGPT only had access to articles written prior to 2021 (118-119). Another paper was written with a combination of human intervention and AI assistance. Finally, the last paper was written by humans only.

The researchers found that after writing these articles and submitting them to journals, 2 of the 8 papers submitted in the study were returned “with the need for extensive reorganization of the manuscript text” (119). The papers returned were both written by a combination of human and AI, meaning that all the papers written purely by humans were only in need of minor revisions. The researchers also found that the AI-only group reported “up to 70% incorrect references” (120). Researchers also found that AI-assisted drafts scored higher on a plagiarism detector (120).

Commentary: This paper can show how other disciplines are incorporating AI into their writing and not having great success with it. This is important for writing studies, particularly the subject of writing in the disciplines. The factual inaccuracies along with the plagiarism are helpful in showcasing the dangers of writing with AI. Finally, this paper extends the belief among many researchers in the field that AI’s most impactful area will be in the reduction of time and labor.

Kertysova, Katarina. “Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation Is Produced, Disseminated, and Can Be Countered.” Security and Human Rights, 2018, pp. 55–81. https://research-ebsco-com.ezp.lib.cwu.edu/c/ilo4yp/viewer/pdf/stg53r4735.

Annotation/Summary: The author argues for a multi-pronged attack against AI-driven disinformation. The author advocates for further investment in big tech solutions—in other words, investing in AI to fight AI. This investment should come from Western governments and be directed toward Western companies. The author justifies this AI arms race by citing dangerous foreign actors that seek to undermine and destabilize Western democracies with disinformation, seeming to suggest that the only way to fight AI is with more AI. The other prong of this attack would come from education: the author calls for Western governments to invest in digital and AI literacy initiatives.

Commentary: This showcases how the West is thinking about AI and how it is going to be a major priority for democratic nations in the future. Literacy initiatives are important and are among the major foundations of combating disinformation. This helps showcase the value of the subject and how it can be used in education.

Libertino, Melissa. “Reimagining ELA: Artificial intelligence as a literacy intervention tool.” English Leadership Quarterly, vol. 47, no. 1, 1 Aug. 2024, pp. 19–24, https://doi.org/10.58680/elq202447119. http://research-ebsco-com.ezp.lib.cwu.edu/c/ilo4yp/viewer/pdf/quhez6uz45?route=details

Annotation/Summary: Libertino outlines in the article how she has used AI in her high-school English classroom, particularly how it can accommodate students with learning disabilities. She identifies an interesting point reminiscent of one bell hooks made: “Some students who needed support the most resisted the standardized writing outlines” (19). Libertino explains that students can use AI to generate more personalized and effective outlines. However, upon examination, these outlines lean into common myths that hinder the writing process.

Libertino frames the use of AI uniquely: “It wasn’t about replacing my guidance or their thinking; rather, it was about empowering students to navigate their writing journeys on their own terms” (19). She sees AI as a guide and accommodation tool (20). The primary way that AI can be used as an accommodation tool is in helping students create outlines, revision, and research (22). AI should not be used for the actual process of writing but as a helpful tool in the prewriting process and the revision process. 

Libertino makes the claim that AI can be used “as a tool to maximize learning and foster independence” (23). However, there is no evidence to suggest that Libertino has seen AI maximize learning, nor does she define what she means by independence.

Commentary: Understanding trends in high school, particularly those being used in 11th and 12th grades, will help college educators see what students will expect when they come to college. If students receive AI as an accommodation, then we will probably see students who expect this practice to continue. It showcases how AI encourages poor practices for writing, e.g. 5-paragraph essay, topic sentence-evidence-reasoning-reasoning-evidence-reasoning-reasoning-conclusion sentence paragraph structure.

Marhaban, Saiful, et al. “Artificial Intelligence in writing: students’ writing competencies and voices.” JEELS (Journal of English Education and Linguistics Studies), vol. 12, no. 1, 5 May 2025, pp. 403–425, https://doi.org/10.30762/jeels.v12i1.3927. https://jurnalfaktarbiyah.iainkediri.ac.id/index.php/jeels/article/view/3927.

Annotation/Summary: Marhaban et al. set out to answer two questions: “1. Does AI enhance students’ writing competencies? 2. How do students perceive artificial intelligence in terms of writing process?” (405). They used quantitative (students took a test) and qualitative (students filled out a questionnaire) data in this study (406). They had a group of 27 students integrate AI into a process-based approach to writing, using AI during pre-writing, drafting, and revising (406). During the pre-writing stage, AI was used to help them “recall and enhance their background knowledge base in order to identify potential topics for their writing” (407). During the drafting stage, “students appeared to be engaged in the process of writing and consulting with AI to ensure the accuracy of word choices and identify potential supplementary information” (408). Finally, for revising, AI was used “to facilitate the delivery of personalised feedback and guidance” (408).

Marhaban et al. found that while there was some improvement in global skills (organization and content), the most significant improvement was with local skills (grammar, punctuation, syntax) (419).

Of the 27 students, all agreed that AI helped them to write more effectively. The only variation was whether they agreed or strongly agreed (412).

Commentary: These findings are consistent with Escalante, Deep and Chen, Chen and Gong, and Baz and Aksoy. AI can be a tool used to assist students, particularly those who need individual learning plans and second language learners. The authors also outline similar concerns, both ethical and regarding student overreliance. These findings don’t so much extend as support or lend credence to the other findings. AI was used for generating a topic for the paper, fixing grammar, writing sentences, identifying supplementary information, and finally providing feedback and guidance for revision. It is no wonder that the students’ scores were higher; it seems like AI did all the work. At the same time, the students all said that they thought AI helped improve their writing, and again it is no wonder. The most confusing part is how students could take credit for this work, and receive credit for it from the people conducting the experiment.

Nash, Brady L. “Navigating generative AI – You can have it both ways: Teaching with and against Generative AI.” English Journal, vol. 115, no. 1, 1 Sept. 2025, pp. 98–101, https://doi.org/10.58680/ej2025115198.

Annotation/Summary: In this column Nash seeks to open a continuous dialogue about AI and its role in the English classroom. He focuses on environmental destruction, harmful labor practices in the field of AI, the illusion of unbiased information that reinforces societal prejudices, and attempts to replace teachers as the harmful aspects of AI (98). “The central challenge AI tools pose for educators broadly and for English educators in particular is that AI separates the central process of literacy—reading, writing, thinking, even speech—from their products” (98).

Nash advocates for the incorporation/examination of AI through the lens of inevitability rhetoric. However, he provides a nuanced approach: “This perspective is important, but its emphasis on inevitability risks a kind of digital AI colonization for students who may themselves want to opt out of unethical systems” (99). He lays out several areas worthy of exploration when it comes to AI in English teaching: teaching about AI, critical and situated practice, AI assisted teaching, teaching without AI (100). Finally, he concludes the article by suggesting that there is a middle path with AI and advocating for this middle path: “We can thoughtfully and critically address AI in our classes when its incorporation supports the most important goals of our learning or because it is important to think critically about it. And we can keep AI out of the classroom when we see that its use will replace the thinking our students need to be doing or because the ethical challenges loom too large” (100).

Commentary: This work provides an interesting bridge between the many articles that advocate for AI’s incorporation and those opposed to AI in the English classroom. The list of possible areas of exploration provides important areas of focus and study, as well as different ways to discuss and think about AI with colleagues. An important thing to acknowledge within this article is that the unethicality of AI is almost taken for granted, yet the rhetoric of inevitability means that we must push on with it anyhow, particularly when it comes to student involvement. It will be important to somehow square that circle.

Nash, Brady L., et al. “ELATE position statement: Exploring, incorporating, and questioning Generative Artificial Intelligence in English teacher education.” English Education, vol. 57, no. 2, 1 Jan. 2025, pp. 158–165, https://doi.org/10.58680/ee2025572158.

Annotation/Summary: This article is a joint statement developed over the course of two years by the “NCTE Commission on Digital Literacies in Teacher Education (D-LITE) AI working group and with teachers working in schools” (159). The authors emphasize the importance of “a human centered approach to education, stressing the significance of human connections, creativity, and critical thinking that cannot be replaced by GenAI” (159). The statement also points out that AI is a new technology and its application in ELA is even newer. Therefore, this is a time of exploration rather than expertise. The focus should be on exploring the possibilities of this new product rather than pretending to know exactly how it will change the discipline. It is important that educators understand different aspects of AI, such as “the physical technologies needed for it to operate harm the planet, the economic imperatives that drive its development and govern its corporate platform-holders, and the myriad voices used without permission or left out of AI data corpuses” (159).

With all of this in mind, the statement offers a list of guiding statements that are beneficial for all educators, but particularly those in ELA: “GenAI platforms are literacy technologies… ELA teacher educators cannot ignore AI technologies… GenAI literacy includes understanding how GenAI platforms work… GenAI includes varied multimodal platforms, GenAI platforms can reproduce existing biases and prejudices that must be critically examined… ELA teacher educators should facilitate their students situated practice with GenAI platforms… ELA teacher educators should not eliminate the productive struggle of writing… Learning to read GenAI texts critically is crucial… ELA teacher educators must discuss and model ethical practices when using GenAI… Teaching with GenAI is still a human-centered process” (160-163).

Commentary: This article walks the line between healthy skepticism and a desire to understand the role of AI. I find the framing of this time period as a moment of exploration rather than expertise valuable. That could be an effective way to frame this whole paper: determining how different research falls under exploration rather than expertise.

Peterson, Kristina, and Dennis Magliozzi. “Writing in the era of AI: ChatGPT in the writing workshop model.” English Journal, vol. 114, no. 2, 1 Nov. 2024, pp. 95–102, https://doi.org/10.58680/ej2024114295.

Annotation/Summary: Peterson and Magliozzi wrote this article to examine how two writing teachers incorporated “ChatGPT as a writing partner in their ninth-grade English classes at a suburban, public high school in New Hampshire” (96). The authors open the paper by using the “new car dilemma” as a justification for using AI: “A car is just a car until someone puts it to use. One could drive to the bank to make a deposit one day and use it as a getaway car in a robbery the next” (96). It is an argument reminiscent of conservative arguments against gun control.

The authors use a workshop-style classroom, which they suggest is a perfect environment in which AI can be incorporated. “Adding AI to this mix does not dismantle the foundation workshop-style classrooms are built on. AI use within a writing workshop offers an additional opportunity to writers and teachers alike” (97). Peterson and Magliozzi take a brief detour to discuss the importance of prompt engineering before diving into examples of situations where they have walked students through the use of AI. They used ChatGPT to help students overcome writer’s block and brainstorm, comparing AI to both a writing group and an effective editor (100). Like many authors, they cite time-saving measures as the most beneficial aspect of AI: “One of the first noticeable differences in a conference aided by AI is the immediacy of response. ChatGPT offers writers instant feedback, whereas teachers might need time to process, consider, or even revisit a piece before offering comprehensive feedback” (100).

Commentary: This article offers evidence for the rhetorical strategies discussed in the prompt engineering article, which is a compelling connection. Also interesting is the idea of using AI for feedback and asking it different questions, with the teacher acting as a mediator for the process. This could be an effective bridge to several of the other articles, which find AI feedback more efficient but human feedback more effective.

Ranade, Nupoor, et al. “Using rhetorical strategies to design prompts: A human-in-the-loop approach to make AI useful.” AI & SOCIETY, vol. 40, no. 2, 1 Apr. 2024, pp. 711–732, https://doi.org/10.1007/s00146-024-01905-3. https://link-springer-com.ezp.lib.cwu.edu/article/10.1007/s00146-024-01905-3.

Annotation/Summary: Ranade et al. studied the impact of applying the rhetorical situation to prompt engineering, citing increased interest in prompt engineering and advancements in AI as their motivation for the study (711). They drew on Bitzer’s theory of the rhetorical situation, along with variations on and critiques of that theory (712). The research questions are: “How can we leverage human and AI collaborations for writing, especially within technical communication (TC), through rhetorical prompt engineering? What are the implications of using rhetorical engineering on practice and pedagogy?” (712).

Their findings suggest that prompt engineering incorporating the rhetorical situation can help “achieve effective automation of content development tasks, but also enhance capabilities of both human and AI” (720). The researchers created a general formula for prompt generation that they claim can be used in an iterative process of cultivating the best response. They also suggest that their rhetorical formula can reduce bias and unethical content by limiting how much users rely “on the model’s assumptions, of which could lead to biased, unethical content” (722). They identify a number of other advantages, including better responses for generic applications, a more human-in-the-loop approach, and the elimination of retraining costs associated with ineffective prompt engineering (722-723). Finally, the researchers suggest that this approach will increase students’ ability to learn and create because the rhetorical situation gets them to think deeply about what they are trying to get AI to say (723).

Commentary: This article is beneficial because it is the only article that I have read that takes a theory from writing studies and tries to incorporate it into AI. The findings suggest that as the researchers incorporated more and more elements of the rhetorical situation, the quality of the response increased, suggesting that elements of writing studies can improve the quality of AI responses. However, there is no inquiry into the impact that this would have on student learning, which would be the next essential line of inquiry.

Riffel, Rachel. 26 November 2025. ePortfolios Discussion. ENG 510: Teaching First Year Composition

Annotation/Summary: Rachel analyzed the ePortfolio website caronmade, whose demo video was voiced by AI. Later in the response, when Rachel talks about her next goal as a teacher, she mentions wanting to show students that writing doesn’t “have to be perfect and that AI is not really the answer to their insecurities about writing.” One reason students decide to use AI is a lack of confidence in their own writing.

Commentary: Like the other discussion posts, this shows instructor apprehension about using AI in writing. The focus on student confidence and the reasons that students use AI is an important angle from which to approach the future of AI in writing studies.

Shauchenka, Siarhei. 4 December 2025. Teaching Writing in the Age of AI. ENG 510: Teaching First Year Composition.

Annotation/Summary: Siarhei talks about how it is getting harder to tell AI from human writing as AI has gotten better. However, from his own experience with AI, he has found it to be “sloppy, sycophantic and it fails to create nuanced conversation in its delivery of information.” He believes AI can serve a purpose in the classroom, but he thinks that this application will be limited to busywork. He also believes that AI will be effective for technical writing that doesn’t require much creativity.

Commentary: This insight from a potential future instructor shows how not all teachers have full faith in the potential for AI in the writing classroom. His differentiation of the types of writing that AI might be effective for is insightful. I wonder if soon we will get instruction manuals that make little sense and were clearly written by AI.

Wu, Chris. 15 October 2025. Rhetorical Situations Discussion. ENG 510: Teaching First Year Composition.

Annotation/Summary: Wu notes that writers have long understood the importance of context, but that it wasn’t fully brought into perspective until Bitzer. He believes that in the age of generative AI, it is even more important to understand the context in which a text is created; if we do not understand the rhetorical situation, it can be used against us to unknown ends.

Commentary: Wu makes the important point that if we don’t understand the rhetorical situation, it can be used against us, particularly in the age of generative AI. This also showcases the reticence that future instructors might feel about using AI in Writing Studies.

Wu, Chris. 8 December 2025. Teaching Writing in the Age of AI. ENG 510: Teaching First Year Composition.

Annotation/Summary: Wu offers a reflection on his time with AI. He previously worked for an AI startup and got a good look into how these companies operate. From this perspective he says, “While I don’t hate it… I am overall negative on AI as a whole, especially with regards to using it for writing.” His main reason for not wanting to incorporate AI centers on a lack of accountability, found not just in students but in AI itself. Wu discusses cases where AI has been wrong about something and has either doubled down on its incorrectness or, at other times, admitted it was incorrect. AI lacks the emotional accountability to care whether it gives people the correct information: “AI feels no shame for being wrong. In a similar way, it doesn’t care if what it says is true or not, it doesn’t feel sorry or guilty for being wrong, it doesn’t fear accountability or responsibility.”

Commentary: I had never considered AI’s lack of emotional accountability. This might feed into the idea, propagated by many, that this detachment somehow makes AI unbiased. In reality, it biases AI toward delivering false information; it is like a sociopath who is not afraid of disseminating lies.